The ill-posedness of derivative interpolation and regularized derivative interpolation for band-limited functions by sampling

In this paper, the ill-posedness of derivative interpolation is discussed, and a regularized derivative interpolation for band-limited signals is presented. The ill-posedness is analyzed via the Shannon sampling theorem. The convergence of the regularized derivative interpolation is studied by combining a regularized Fourier transform with the Shannon sampling theorem. An error estimate is given, and high-order derivatives are also considered. The regularized derivative interpolation algorithm is compared with several other derivative-interpolation algorithms.


Introduction
The computation of derivatives is widely applied in engineering, signal processing, and neural networks [1][2][3], as well as in complex dynamical systems on networks [4].
In this section, we describe the problem of finding the derivative of band-limited signals by the Shannon sampling theorem [5]. Recall that a function f is called band-limited if its Fourier transform satisfies f̂(ω) = 0 for every ω ∉ [−Ω, Ω].

Shannon sampling theorem If f ∈ L²(R) is band-limited, then it can be exactly reconstructed from its samples f(nh):

    f(t) = Σ_{n=−∞}^{∞} f(nh) sinc(t − nh),

where sinc(t − nh) := sin Ω(t − nh)/(Ω(t − nh)) and h = π/Ω. Here, the convergence is in L² and uniform on R. Differentiating this series term by term k times gives the derivative interpolation formula

    f^(k)(t) = Σ_{n=−∞}^{∞} f(nh) sinc^(k)(t − nh).    (3)

In practice, only noisy samples are available:

    f(nh) = f_E(nh) + η_n,  |η_n| ≤ δ,    (4)

where f_E(nh) is the exact signal and {η_n} = {η(nh)} is the noise with the bound δ > 0.
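As a quick numerical illustration of the sampling formula, the sketch below reconstructs a band-limited function from its Nyquist-rate samples. The band limit Ω = 1 and the test function f = sinc are illustrative choices, not values from the paper; f = sinc is convenient because its samples are f(nh) = δ_{n0}, so the series reproduces it essentially exactly.

```python
import numpy as np

# Band limit Omega and Nyquist step h = pi/Omega (Omega = 1 is an illustrative choice).
OMEGA = 1.0
H = np.pi / OMEGA

def sinc_omega(t):
    """The kernel sin(Omega t)/(Omega t); note np.sinc(x) = sin(pi x)/(pi x)."""
    return np.sinc(OMEGA * np.asarray(t) / np.pi)

def shannon_reconstruct(samples, n_idx, t):
    """Evaluate sum_n f(nh) sinc(t - nh) at the points t."""
    # One row per evaluation point, one column per sample index.
    kernel = sinc_omega(np.asarray(t)[:, None] - n_idx[None, :] * H)
    return kernel @ samples

# f(t) = sinc(t) is itself Omega-band-limited; its samples are f(nh) = delta_{n0},
# so the truncated series reproduces f up to rounding error.
n_idx = np.arange(-200, 201)
samples = sinc_omega(n_idx * H)
t = np.linspace(-5.0, 5.0, 201)
recon_err = np.max(np.abs(shannon_reconstruct(samples, n_idx, t) - sinc_omega(t)))
```

For this particular test function the reconstruction error is at machine-precision level; for general band-limited f the truncated sum converges only slowly, which is one motivation for the weighted series introduced later.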
It was pointed out in [7] that the interpolation noise level increases significantly with k. In [8], Ignjatovic stated that numerical evaluation of higher-order derivatives of a signal from its samples is very noise-sensitive and presented a motivation for the notion of chromatic derivatives. The chromatic derivatives in [8] are rather special, however, since they are the operator polynomials K^n_t = (−i)^n P^L_n(i d/dt), where the P^L_n are normalized and scaled Legendre polynomials. A Fourier truncation method to compute high-order numerical derivatives was proposed in [9], and we compare against it in the simulation section. The advantage of [9] is that it does not require the function to be band-limited; the disadvantage is that its accuracy is poor for band-limited functions. In [10], a series approximating a band-limited function and its derivative is given, but ill-posedness and noise are not considered. In [11], an optimal algorithm for approximating band-limited functions from localized sampling is given; it is shown that a certain truncated series is the best estimate using the local information operator, but the noisy case is not considered, i.e., η_n = 0 is assumed in (4). In [12], the noisy case is considered for approximating band-limited functions, but the computation of the derivatives in the noisy case is not.
In [13], pp. 235-249, cubic smoothing splines are discussed in the noisy case, but no proof of convergence is given. In [14][15][16], regularization methods are used to find stable solutions in the computation of derivatives. In [14,16], the approximation error is measured in the L²-norm; that is, the regularized solution approximates the exact solution in the L²-norm. Furthermore, the noise models are different: in [14], the disturbance is assumed to be bounded in the L²-norm, whereas in [16], it is bounded in the maximum norm; on a finite interval in R, the maximum norm is the stricter one. In [14], an error bound for the first derivative is given. In [16], for the regularization parameter α = δ², an error bound for the approximate jth regularized derivative is given in terms of the error bound δ in (4) and the step size h of the derivative interpolation. In the current paper, we measure the approximation error in the L∞-norm on any finite interval in R and assume additive l∞ noise. The notion of robustness naturally differs for these norms and spaces, since approximation of the exact solution in the maximum norm on any finite interval in R implies approximation in the L²-norm on the same interval. According to the error estimates in [14,16], h must be close to zero to guarantee accuracy; this is also needed in [15]. In this paper, since f is band-limited, there is no O(h) term in the error estimate, so the approximation error estimate obtained here is better in the sense that the condition h → 0 is not required.
In [17,18], other kernels, such as Gaussian functions and powers of the sinc function, are studied; however, noise and ill-posedness are not considered. In this paper, the kernel is given by the regularized Fourier transform of [19]. The regularized transform is obtained by the Tikhonov regularization method of [20]. The estimate of the approximation error is obtained in the noisy case, and the ill-posedness is taken into account in the error estimation. In [21] and [22], the ill-posedness of the problem of recovering f from its samples is analyzed, and a regularized sampling algorithm is presented in [21].
In this paper, we consider a more difficult problem. In Section 2, we analyze the ill-posedness of computing f^(k) from the samples of f and conclude that, in the presence of noise, formula (3) is not reliable even if k is relatively small. For the case k = 1, the ill-posedness is exhibited by the bipolar δ noise in Section 2. In Section 3, a regularized derivative interpolation formula is presented and its convergence is proved; we show that the bipolar δ noise that causes the ill-posedness is controlled by the regularized solution. In Section 4, a regularized high-order derivative interpolation formula is presented and its convergence is proved. In Section 5, applications are shown by some examples; we will see that the error of the regularized solution is small for the bipolar δ noise. Finally, the conclusion is given in Section 6. In our algorithm, it is not necessary for the step size h of the samples to be close to 0.

Definition 1 A function f(z): C → C is said to be entire if it is holomorphic throughout the set of complex numbers C.
By this definition, we can see that any band-limited function is an entire function.

Definition 5 The Bernstein class B^p_σ consists of those functions which belong to E_σ and whose restriction to R belongs to L^p(R).
Bernstein inequality For every f ∈ B^p_σ, positive r ∈ Z, and p ≥ 1, we have

    ||f^(r)||_p ≤ σ^r ||f||_p.

Plancherel-Pólya inequality Let f ∈ B^p_σ, p > 0, and let Λ = (λ_n), n ∈ Z, be a real increasing sequence such that λ_{n+1} − λ_n ≥ 2δ. Then

    Σ_{n∈Z} |f(λ_n)|^p ≤ C(p, σ, δ) ||f||^p_p.

By this inequality, we can see that f ∈ B^p_σ implies f ∈ L∞(R).

Paley-Wiener theorem ([6], p. 67) The class of entire functions {f(z)} which belong to L² when restricted to the real axis and are such that |f(z)| = o(e^{σ|z|}) is identical to the class of functions with the representation

    f(z) = ∫_{−σ}^{σ} g(ω) e^{iωz} dω,  g ∈ L²(−σ, σ).

By the Paley-Wiener theorem, the space of Ω-band-limited functions is equivalent to the classical Paley-Wiener space of entire functions. We denote

    PW²_Ω = {f ∈ L²(R) : f̂(ω) = 0 for ω ∉ [−Ω, Ω]}.

Then, for each f ∈ PW²_Ω, the Plancherel-Pólya inequality implies that f ∈ L∞; hence, the Bernstein inequality can be applied to f and its derivatives f^(k), which yields that every f^(k), k ≥ 1, has a bounded L∞-norm. We will consider a restricted function space which only includes functions f ∈ PW²_Ω that are not only Ω-band-limited but also have a bounded Fourier transform. The condition f̂ ∈ L∞(R) is required to prove the approximation property of the derivative interpolation by regularization; this can be seen in the proof of Lemma 3 in Section 3. Let us denote this space by

    PW^{2,∞}_Ω = {f ∈ PW²_Ω : f̂ ∈ L∞(R)}.

The samples we consider are always bounded sequences in l∞.
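The Bernstein inequality can be checked numerically. The sketch below is an informal sup-norm check for one test function; the type σ = 1.5, the test function f(t) = sin(σt)/(σt), and the grid are illustrative choices, not quantities from the paper.

```python
import numpy as np

# Informal sup-norm check of Bernstein's inequality ||f'||_inf <= sigma ||f||_inf
# for f(t) = sin(sigma t)/(sigma t), an entire function of exponential type sigma
# whose restriction to R is square-integrable (sigma = 1.5 is an illustrative choice).
sigma = 1.5

def f(t):
    return np.sinc(sigma * np.asarray(t) / np.pi)  # sin(sigma t)/(sigma t)

t = np.linspace(-40.0, 40.0, 400001)
ft = f(t)
dft = np.gradient(ft, t)            # central-difference approximation of f'
sup_f = np.max(np.abs(ft))          # = 1, attained near t = 0
sup_df = np.max(np.abs(dft))
bernstein_ok = sup_df <= sigma * sup_f + 1e-6   # small tolerance for grid error
```

For this f the inequality holds with plenty of slack; equality requires functions like sin(σt), where the derivative attains the bound σ exactly.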
The concept of ill-posed problems was introduced in [20]. Here, we borrow the following definition from it. The problem of solving Af = g on a pair of spaces is well-posed if:

(i) A solution exists for every admissible data.

(ii) The solution is unique; in other words, the mapping is one-to-one.

(iii) The solution is stable under perturbations of the data; in other words, the inverse mapping A^{−1} is uniformly continuous.

Problems that violate any of the three conditions are said to be ill-posed.
In this section, we discuss the ill-posedness of derivative interpolation by the sampling theorem on the pair of spaces (PW^{2,∞}_Ω, l∞), where l∞ is the space of bounded sequences with the norm ||{x_n}||_∞ = sup_n |x_n|. We define the sampling operator S: PW^{2,∞}_Ω → l∞ by Sf = {f(nh)}_{n∈Z}. Let us describe the case k = 1 in (3). For the Ω-band-limited function f, the first derivative is

    f'(t) = Σ_{n=−∞}^{∞} f(nh) sinc'(t − nh),    (5)

where h = π/Ω. Here, the convergence is both in L² and uniform on R. The proof of this fact is similar to the proof of the Shannon sampling theorem [23].
(i) The existence condition is not satisfied. This follows from the argument for (iii) below, where we exhibit a noise sequence in l∞ for which the derivative series diverges.

(ii) The uniqueness condition is satisfied. Since S is a linear operator, it is one-to-one if and only if Sf = 0 implies f = 0, which holds here.

(iii) The stability condition is not satisfied; in other words, S^{−1} is not continuous.
This can be seen from the following example.

Example 1 Consider any band-limited signal f along with its noisy samples as in (4). Assume the noise has the form

    η_b(nh) = δ · sgn(sinc'(t_0 − nh)) for |n| ≤ N,  η_b(nh) = 0 for |n| > N,

where t_0 is a given point in the time domain and δ is a small positive number. Then, the noise of the derivative in formula (5) is

    η_b'(t) = Σ_{|n|≤N} η_b(nh) sinc'(t − nh).

At t = t_0, the noise of the derivative is

    η_b'(t_0) = δ Σ_{|n|≤N} |sinc'(t_0 − nh)|.

Since sinc'(t) = cos Ωt/t − sin Ωt/(Ωt²), the first part of this sum grows without bound as N → ∞ by the divergence property of the harmonic series, while the second part converges by the convergence property of the p-series in the case p = 2.

If we do not set η_b(nh) = 0 for |n| > N but define these values in the same way as for |n| ≤ N, then η_b'(t_0) = ∞. This shows that the existence condition is not satisfied. Also, at any point t, the derivative noise can be made arbitrarily large while ||η_b||_{l∞} = δ remains small. Therefore, this is an ill-posed problem.
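The blow-up in Example 1 can be observed numerically. The sketch below (Ω = 1, t_0 = 0.5, δ = 0.1 are illustrative choices) evaluates the worst-case derivative noise at t_0 for two truncation levels and shows that it grows with N, even though the sample noise itself stays bounded by δ.

```python
import numpy as np

# Bipolar noise of amplitude delta makes the differentiated sinc series grow
# with the truncation level N, although ||eta||_inf = delta stays fixed.
OMEGA = 1.0
H = np.pi / OMEGA

def dsinc(t):
    """Derivative of sin(Omega t)/(Omega t); equals 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = np.abs(t) > 1e-12
    tn = t[nz]
    out[nz] = np.cos(OMEGA * tn) / tn - np.sin(OMEGA * tn) / (OMEGA * tn ** 2)
    return out

def derivative_noise_at(t0, delta, N):
    """Worst-case noise in formula (5) at t0: eta(nh) = delta * sgn(sinc'(t0 - nh))."""
    n = np.arange(-N, N + 1)
    return float(delta * np.sum(np.abs(dsinc(t0 - n * H))))

t0, delta = 0.5, 0.1
err_small = derivative_noise_at(t0, delta, 100)
err_large = derivative_noise_at(t0, delta, 10000)
```

The growth is only logarithmic in N, matching the harmonic-series behavior in Example 1, but it is unbounded: no δ-independent error bound exists for formula (5).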
If we consider the problem on the pair of spaces (PW^{2,∞}_Ω, l²), the problem is well-posed, but the condition ||η||_{l²} ≤ δ → 0 is too strict.
In Section 3, we will show that the regularized solution will converge to the exact signal as ||η|| l ∞ ≤ δ → 0 according to the l ∞ -norm for suitable regularization parameters.

The derivative interpolation by regularization
To solve the ill-posed problem of the last section, we introduce the regularized Fourier transform of [19]. The regularized Fourier transform was found by minimizing a smoothing functional and solving the associated Euler equation; the details can be seen in [19].
In [19], we proved that the factor K_α acts as a stabilizer, and we successfully used the regularization factor K_α(t) for band-limited extrapolation. In [24], we also successfully used the factor K_α(nh) in the computation of the Fourier transform; there, the convergence analysis and the computation are in the frequency domain. In [21], the weight function K_α(nh) is applied to the Shannon sampling theorem. In this paper, we compute the derivatives by combining (5) and (6). The convergence analysis and the computation are in the time domain for the derivatives, error estimates for the computed derivatives are given, and the proof is quite different.
The regularized sampling interpolation is

    f_α(t) = Σ_{n∈Z} K_α(nh) f(nh) sinc(t − nh).

The infinite series is uniformly convergent on R for any α > 0, since K_α(nh), sinc(t − nh), and {f(nh) : n ∈ Z} are all bounded.

By differentiating f_α(t), we obtain the regularized derivative interpolation:

    f_α'(t) = Σ_{n∈Z} K_α(nh) f(nh) sinc'(t − nh).    (8)

This derivative is well defined since the infinite series is also uniformly convergent on R.
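A numerical sketch of the regularized derivative series follows. The actual weight K_α of [19] is not reproduced in this chunk, so as a stand-in we use a generic Tikhonov-type weight K_α(nh) = 1/(1 + α(nh)²); this choice is an assumption for illustration only, as are Ω = 1, the test signal f = sinc, and the bipolar noise parameters.

```python
import numpy as np

# Sketch of f_alpha'(t) = sum_n K_alpha(nh) f(nh) sinc'(t - nh), with the assumed
# stand-in weight K_alpha(nh) = 1/(1 + alpha*(nh)^2) replacing the factor of [19].
OMEGA = 1.0
H = np.pi / OMEGA

def dsinc(t):
    """Derivative of sin(Omega t)/(Omega t); equals 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = np.abs(t) > 1e-12
    tn = t[nz]
    out[nz] = np.cos(OMEGA * tn) / tn - np.sin(OMEGA * tn) / (OMEGA * tn ** 2)
    return out

def deriv_series(samples, n, t, alpha=0.0):
    """alpha = 0 gives the unregularized series (5); alpha > 0 the weighted one."""
    w = 1.0 / (1.0 + alpha * (n * H) ** 2)   # assumed stand-in for K_alpha(nh)
    kernel = dsinc(np.asarray(t)[:, None] - n[None, :] * H)
    return kernel @ (w * samples)

N, t0, delta, alpha = 100, 0.5, 0.1, 0.01
n = np.arange(-N, N + 1)
exact_samples = np.sinc(n * H * OMEGA / np.pi)   # f = sinc, so f(nh) = delta_{n0}
noise = delta * np.sign(dsinc(t0 - n * H))       # bipolar noise of Example 1
noisy = exact_samples + noise
t = np.array([t0])
err_plain = abs(deriv_series(noisy, n, t, 0.0)[0] - dsinc(t)[0])
err_reg = abs(deriv_series(noisy, n, t, alpha)[0] - dsinc(t)[0])
```

Because the weight damps the large-|n| terms that drive the harmonic-series divergence, the regularized error is strictly smaller than the unregularized one for this adversarial noise.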

Lemma 1 If f is band-limited, then the stated property holds; it can be seen from the convolution representation. For the proof of the convergence of the regularized derivative interpolation, we will need the definition of the periodic extension of the function e^{iωt}.
The next lemma, from [25], holds for each t ∈ R.
In order to prove the convergence property of the regularized derivative interpolation, we need some more lemmas which are listed in the Appendix.
We are now in a position to state our main theorems on the convergence of the regularized derivative interpolation and its error estimate; the proofs are in the Appendix.

Derivative interpolation of higher order
In this section, we prove the convergence of the derivative interpolation formula of higher order:

    f_α^(k)(t) = Σ_{n∈Z} K_α(nh) f(nh) sinc^(k)(t − nh).

Some lemmas are given in the Appendix.
We can now state a version of Theorem 1 for higher-order derivatives.

f_α^(k)(t) → f_E^(k)(t) uniformly in any finite interval [−T, T] as δ → 0. Furthermore, we have a corresponding error estimate.
The proof is in the Appendix.

Remarks 1
This theorem shows that evaluation of higher-order derivatives from Nyquist-rate samples with any accuracy is possible. Here, Nyquist-rate samples mean samples with the step size h = π/Ω. The proof is similar to the proof of Theorem 2 and is omitted here.

Methods, experimental results, and discussion
In this section, we give some examples showing that the regularized sampling algorithm is more effective in controlling the noise than some other algorithms. We compare it with the Fourier truncation method [9] and the Tikhonov regularization method [16]; the procedure for performing the Tikhonov regularization method is described in detail in [16].
In practice, only finitely many terms can be used in (8), so we choose a large integer N and use the truncated formula

    f_{α,N}'(t) = Σ_{|n|≤N} K_α(nh) f(nh) sinc'(t − nh),    (9)

where f(nh) is the noisy sampling data given in (4) in Section 1. Due to the weight function, this series converges much faster than the series (3) from Shannon's sampling theorem. We give an estimate of the truncation error next.
The truncation error is bounded by the tail Σ_{|n|>N} K_α(nh) |f(nh)| |sinc'(t − nh)|. Since the weighted terms decay, this tail tends to 0 as N → ∞; so, if N is large enough, the truncation error can be made very small.
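The decay of the tail can be checked numerically. As before, the Tikhonov-type weight 1/(1 + α(nh)²) is an assumed stand-in for the factor K_α of [19], and Ω = 1, α = 0.01, t = 0.5 are illustrative choices; with this weight each term decays like 1/n³, so the tail shrinks quickly as N grows.

```python
import numpy as np

# Tail sum_{N < |n| <= M} K_alpha(nh) |sinc'(t - nh)| for bounded samples,
# with the assumed weight K_alpha(nh) = 1/(1 + alpha*(nh)^2).
OMEGA, ALPHA = 1.0, 0.01
H = np.pi / OMEGA

def dsinc(t):
    """Derivative of sin(Omega t)/(Omega t); equals 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = np.abs(t) > 1e-12
    tn = t[nz]
    out[nz] = np.cos(OMEGA * tn) / tn - np.sin(OMEGA * tn) / (OMEGA * tn ** 2)
    return out

def tail_bound(t, N, M=20000):
    """Sum the weighted kernel over N < |n| <= M (samples bounded by 1)."""
    n = np.concatenate([np.arange(-M, -N), np.arange(N + 1, M + 1)])
    w = 1.0 / (1.0 + ALPHA * (n * H) ** 2)
    return float(np.sum(w * np.abs(dsinc(t - n * H))))

tails = [tail_bound(0.5, N) for N in (50, 100, 200)]
```

Without the weight the same tail would behave like a harmonic series, which is exactly the slow convergence of the unregularized series (3).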
In the next three examples, we choose N = 100 and α = 0.01 in (9). We consider three types of noise: (i) the bipolar δ noise with t_0 = 30 and δ = 0.1, which is the noise given in Section 2 for which we have shown the stability condition is not satisfied; (ii) uniform noise; and (iii) white Gaussian noise with variance 0.01. For the bipolar δ noise, we give the square errors (SE). For the white noise, we run the three methods 100 times and give the mean square errors (MSE). The simulation results for the bipolar δ noise are in Fig. 1.

Example 3 In this example, we choose a function that has a triangular spectrum; here Ω = 1.5.
We still choose α = 0.01. The simulation results for the bipolar δ noise are in Fig. 4.
We give the SE in the corresponding table.

Example 4 In this example, let f(t) = (sin ω_c t / t) cos βt, where ω_c = 1 and β = 0.5. Here, Ω = ω_c(1 + β) = 1.5. We still choose α = 0.01. The simulation results for the bipolar δ noise are in Fig. 7, and we give the SE in the corresponding table. The simulation results for the uniform noise are in Fig. 8.

Remarks 2
The results of the Fourier truncation method in [9] are not good, since the condition ||η||_{L²} ≤ δ is not satisfied here; the condition on the samples in this paper, |η(nh)| ≤ δ, is much weaker. The condition for applying the regularization method of [16] is the same, so its result is better than that of the Fourier truncation method, but not as good as that of the regularized sampling algorithm of this paper. This comparison is for band-limited signals only: the Tikhonov regularization method is a general method for ill-posed problems and may be better for other signals, such as non-band-limited signals. For the Tikhonov regularization method in [16], one must solve a system of linear equations, so the amount of computation is of order O(N³) to compute the derivative at N points. For each t, the amount of computation of (9) is of order O(N); then, for N points, the amount of computation of (9) is of order O(N²).

Example 5
In this example, we show how the square error depends on the regularization parameter α and give the optimal α. We choose the function in Example 1 and the bipolar δ noise. The results are in Fig. 10. For the first derivative, the optimal α = 0.0590; for the second derivative, the optimal α = 0.0650.

Remarks 3 By the proofs of Theorems 1 and 3, the error bound depends on the exact signal f_E, which is not known.
So it is not easy to find an optimal α. However, there are methods by which α can be determined, such as the discrepancy principle [26] and the GCV and L-curve [27, 28]. By Example 5, we can see that the regularization parameter α should be a little larger in the computation of the second derivative than in the first. This means higher derivatives are more sensitive to noise, so a larger regularization parameter α is required.
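When a ground-truth derivative is available, as in the simulation examples, the dependence of the error on α can be explored by a simple grid search. The sketch below mirrors that procedure; the weight 1/(1 + α(nh)²) again stands in for K_α of [19], and the shifted-sinc test signal, noise, and grids are all illustrative assumptions.

```python
import numpy as np

# Grid search for alpha: pick the value with the smallest squared error against
# a known exact derivative (possible only in simulations with ground truth).
OMEGA = 1.0
H = np.pi / OMEGA

def dsinc(t):
    """Derivative of sin(Omega t)/(Omega t); equals 0 at t = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = np.abs(t) > 1e-12
    tn = t[nz]
    out[nz] = np.cos(OMEGA * tn) / tn - np.sin(OMEGA * tn) / (OMEGA * tn ** 2)
    return out

def reg_deriv(samples, n, t, alpha):
    w = 1.0 / (1.0 + alpha * (n * H) ** 2)   # assumed stand-in for K_alpha(nh)
    return dsinc(np.asarray(t)[:, None] - n[None, :] * H) @ (w * samples)

# Test signal f(t) = sinc(Omega*(t - 0.3)), so that every sample is nonzero
# and its exact derivative is dsinc(t - 0.3).
f = lambda t: np.sinc(OMEGA * (np.asarray(t) - 0.3) / np.pi)
N, t0, delta = 100, 0.5, 0.1
n = np.arange(-N, N + 1)
noisy = f(n * H) + delta * np.sign(dsinc(t0 - n * H))   # bipolar noise
t_eval = np.linspace(-3.0, 3.0, 61)
exact_d = dsinc(t_eval - 0.3)
alphas = np.logspace(-4, 0, 41)
se = [float(np.sum((reg_deriv(noisy, n, t_eval, a) - exact_d) ** 2)) for a in alphas]
best_alpha = float(alphas[int(np.argmin(se))])
```

In practice f_E is unknown, so this search is not available; the discrepancy principle, GCV, or L-curve cited above replace it by criteria computable from the noisy data alone.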

Conclusion
The interpolation formula obtained by differentiating the formula of the Shannon sampling theorem is not stable: the presence of noise can give rise to unreliable results, and for certain kinds of noise the error can even approach infinity, so this is a highly ill-posed problem. The regularization method is an effective method for ill-posed problems. A derivative interpolation by a regularized sampling algorithm is presented, and the method is extended to high-order derivative interpolation. The convergence property is proved and tested by some examples. The numerical results show that the derivative interpolation by the regularized sampling algorithm is more effective in reducing noise for band-limited signals.

Appendix

Proof We will see that this improper integral is uniformly convergent, so we can interchange the order of differentiation and integration.
In the case ω ≥ Ω, the last equality uses the definition of a in Lemma 3.1.

Proof of Theorem 2
By the proof of Theorem 1, we can see that
Proof We can prove it by integration by parts.

Lemma 5 If f is band-limited, then the series is uniformly convergent, where A^l_k = ∏_{j=1}^{l} (k − j + 1) and A^0_k = 1, by Lemma 4.
