Green's function for the Schrödinger equation with a generalized point interaction and stability of superoscillations

In this paper we study the time dependent Schr\"odinger equation with all possible self-adjoint singular interactions located at the origin, which include the $\delta$ and $\delta'$-potentials as well as boundary conditions of Dirichlet, Neumann, and Robin type as particular cases. We derive an explicit representation of the time dependent Green's function and give a mathematically rigorous meaning to the corresponding integral for holomorphic initial conditions, using Fresnel integrals. Superoscillatory functions appear in the context of weak measurements in quantum mechanics and are naturally treated as entire functions. As an application of the Green's function we study the stability and oscillatory properties of the solution of the Schr\"odinger equation subject to a generalized point interaction when the initial datum is a superoscillatory function.


Introduction
The main purpose of this paper is to study the Green's function of the time dependent Schrödinger equation subject to general self-adjoint point interactions located at the origin, and to prove stability results for the solutions corresponding to superoscillating initial data. As a consequence of our detailed analysis we also obtain an explicit expression and asymptotic expansion of the time dependent plane wave solution, which allows us to discuss the oscillatory properties of the time evolution of superoscillations under generalized point interactions.
Strongly localized potentials, also called pseudo-potentials and nowadays better known as δ-potentials, were already considered by Kronig and Penney in [45] and Fermi in [39]. Heuristically speaking, these δ-potentials are represented by the Hamiltonian

H = −∆ + ∑_{y∈Y} c_y δ(x − y), (1.1)

where c_y δ(x − y) is a point source of strength c_y located at the point y ∈ R^d, d ≥ 1. The δ-potentials may form a discrete set, e.g., a periodic lattice Y = Z^d, or a single point Y = {0}.
The rigorous mathematical meaning of the Hamiltonian (1.1) was given only much later by Berezin and Faddeev in [21]. In this paper we will restrict ourselves to a single point interaction in R and hence assume Y = {0} and d = 1 from now on. In this context H in (1.1) is defined as a proper self-adjoint extension of the symmetric operator −∆ on C_0^∞(R \ {0}) which corresponds to interface (or jump) conditions at the origin of the form

Ψ(0+) = Ψ(0−), Ψ′(0+) − Ψ′(0−) = c Ψ(0); (1.2)

a detailed discussion can be found in the standard monograph [15]. Besides the δ-potential also other types of self-adjoint interface conditions can be treated (see [31,34,35,37,43,50,52] and [20,38,49] for interactions on hypersurfaces), among them so-called δ′-potentials and further generalizations, as well as decoupled systems with Dirichlet, Neumann, or Robin conditions. There are various ways to describe the complete family of self-adjoint interface conditions at the origin, and for our purposes it is convenient to use the parametrization with unitary 2 × 2-matrices J in (1.3) (see Example 3.2 for identifying (1.2) as a special case of (1.3)).
To be more precise: The class of jump conditions (1.3) coincides with the class of self-adjoint interface conditions at the point 0. In other words, each unitary matrix J ∈ C^{2×2} leads to a self-adjoint realization of the Laplacian in L²(R) with a generalized point interaction supported at the point 0, and conversely, for each self-adjoint Laplacian in L²(R) with a generalized point interaction supported at the point 0 there exists a unitary matrix J ∈ C^{2×2} such that the interface condition has the form (1.3); cf. [19, Chapter 2.2]. An important problem we study in this paper is the time dependent Schrödinger equation with holomorphic initial datum F subject to a general self-adjoint singular interaction supported at the origin, that is, we consider the Cauchy problem (1.4). It will be shown in Section 2 that the corresponding Green's function is given by (1.5) for t > 0 and x, y ∈ R \ {0}, where the entire function Λ is defined in (2.2) and the coefficients µ_±, µ_0, and ω_± are explicitly determined in terms of the entries of the unitary matrix J in the jump condition (1.4b); cf. Theorem 2.4, the examples in Section 3, and [13,14,42,48,53] for related results. Using the Green's function (1.5) the solution Ψ of (1.4) can be written as the integral (1.6). While this integral is well defined for compactly supported continuous functions F, it is already difficult to make sense of (1.6) for plane waves F(x) = e^{ikx}. A mathematically rigorous analysis of this issue for a certain class of holomorphic functions satisfying a growth condition is provided in Section 4, where the main tool is the Fresnel integral approach.
The general results in Section 2 and Section 4 are applied to superoscillations in Section 5. Superoscillating functions are band-limited functions that can oscillate faster than their fastest Fourier component. They appear in quantum mechanics as results of weak measurements and, in particular, their time evolution under the Schrödinger equation is of crucial importance, see [1,10,12,30]. For a rigorous treatment of this subject we refer to [2,3,4,5,6,7,17,18,32] and [8]. These kinds of functions (or sequences) also appear in antenna theory [54] (see also [24]), and various applications in optics were studied by M.V. Berry and many others, see, e.g., [23,25,26,27,28,29,40,41,46,47]. More information can also be found in the introductory papers [9,11,16,44] and in the Roadmap on superoscillations [22].
A weak measurement of a quantum observable represented by the self-adjoint operator A, involving a pre-selected state ψ_0 and a post-selected state ψ_1, leads to the weak value

A_weak := (ψ_1, A ψ_0) / (ψ_1, ψ_0),

where the real part of A_weak can be interpreted as the shift and the imaginary part as the momentum of the pointer recording the measurement. An important feature of the weak measurement is that, in contrast with strong measurements A_strong := (ψ, Aψ), the real part of A_weak may become very large when the states ψ_0 and ψ_1 are almost orthogonal; this leads to superoscillations. A typical superoscillatory function is

F_n(x, k) = ∑_{l=0}^{n} C_l(n, k) e^{i(1 − 2l/n)x}, x ∈ R, (1.7)

where |k| > 1 and the coefficients C_l(n, k) are given explicitly. Letting n tend to infinity, we obtain lim_{n→∞} F_n(x, k) = e^{ikx} uniformly for x in compact subsets of R. The notion superoscillation comes from the fact that the frequencies (1 − 2l/n) in (1.7) are in modulus bounded by 1, but the frequency k of the limit function can be arbitrarily large.
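The convergence of (1.7) to the plane wave can be illustrated numerically. The coefficient formula below is the standard Aharonov–Berry choice C_l(n, k) = (n choose l)((1+k)/2)^{n−l}((1−k)/2)^l, which is an assumption here since the display of (1.7) is not reproduced above; with this choice the sum has the exact closed form (cos(x/n) + ik sin(x/n))^n, which avoids the severe cancellation of the direct sum for large n.

```python
import cmath
from math import comb

def F_sum(n, x, k):
    # band-limited form of (1.7): every frequency (1 - 2l/n) lies in [-1, 1];
    # the coefficients C_l(n, k) are the standard Aharonov-Berry choice (assumed)
    return sum(
        comb(n, l) * ((1 + k) / 2) ** (n - l) * ((1 - k) / 2) ** l
        * cmath.exp(1j * (1 - 2 * l / n) * x)
        for l in range(n + 1)
    )

def F_closed(n, x, k):
    # algebraically equivalent product form, numerically stable for large n
    return (cmath.cos(x / n) + 1j * k * cmath.sin(x / n)) ** n

k, x = 2.0, 0.5
print(abs(F_sum(8, x, k) - F_closed(8, x, k)))                # same function
for n in (10, 100, 1000):
    print(n, abs(F_closed(n, x, k) - cmath.exp(1j * k * x)))  # -> 0 as n grows
```

The direct summation F_sum is only usable for moderate n: its terms grow geometrically while the sum stays of order one, so floating-point cancellation destroys the result for large n, which is a well-known practical feature of superoscillations.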
As a consequence of the representation (1.6) of the solution of the Schrödinger equation subject to a general self-adjoint singular interaction we ask: When does a convergent sequence of initial conditions lim_{n→∞} F_n = F (1.8) also lead to a convergent sequence of solutions lim_{n→∞} Ψ(t, x; F_n) = Ψ(t, x; F), (1.9) and which type of convergence should be considered in (1.8) and (1.9)? Our abstract result Theorem 4.6, which is also the bridge to the investigation of the time evolution of superoscillations in Section 5, shows that (1.9) holds uniformly on compact subsets of (0, ∞) × R, whenever the sequence (F_n)_n satisfies suitable exponential boundedness conditions and the convergence in (1.8) holds in the sense that sup_z |F_n(z) − F(z)| e^{−C|z|} → 0 for some C ≥ 0, with the supremum taken over certain sectors S_α and −S_α in the complex plane; cf. Section 4 for more details. These abstract assumptions are in accordance with the convergence properties of (holomorphic extensions of) superoscillating functions in spaces of entire functions with exponential growth that have been clarified only in recent years, see [36]. The case of superoscillatory initial data is then discussed in Corollary 5.2, and the explicit form, oscillatory behaviour, and long time asymptotics of the corresponding limit in (1.9) are obtained in Theorem 5.4 and Remark 5.5.

Green's function for the Schrödinger equation with a generalized point interaction
In this section we derive the Green's function of the time dependent Schrödinger equation (1.4) with a generalized singular interaction located at the origin. That is, we construct a function G, depending on the matrix J, such that the solution Ψ of (1.4) can be written in the form

Ψ(t, x) = ∫_R G(t, x, y) F(y) dy. (2.1)

In Section 4 we shall clarify for which initial conditions F and in which sense this integral is understood. Here, we only want to derive the explicit form and some properties of the Green's function G itself.
We start by defining the entire function

Λ(z) := e^{z²}(1 − erf(z)), z ∈ C, (2.2)

where erf(z) = (2/√π) ∫_0^z e^{−ξ²} dξ is the well known error function. Some important properties of this function are collected in the following lemma; cf. [18, Lemma 3.1].
Lemma 2.1. The function Λ in (2.2) has the following properties:

(i) The function Λ satisfies the differential equation Λ′(z) = 2zΛ(z) − 2/√π.

(ii) The value of the function Λ at −z is given by Λ(−z) = 2e^{z²} − Λ(z).

(iii) The absolute value of Λ(z) can be estimated by |Λ(z)| ≤ Λ(Re(z)).

(iv) The function Λ is monotonically decreasing on R and asymptotically on C one has (2.6).

(v) For all a > 0 and b, c ∈ C one has the integral identities (2.7a) and (2.7b).

Proof. (iii) We use that the complex integral over the entire function e^{−ξ²} is path independent and that lim_{x→∞} ∫_x^{x+z} e^{−ξ²} dξ = 0. Hence, the two integrals on the right-hand side of (2.8) can be replaced by a path integral from z to ∞, parallel to the real axis. This gives the representation

Λ(z) = (2/√π) ∫_0^∞ e^{−s² − 2zs} ds. (2.9)

This representation can now be used to estimate the absolute value: |Λ(z)| ≤ (2/√π) ∫_0^∞ e^{−s² − 2 Re(z)s} ds = Λ(Re(z)).
(iv) The monotonicity is a direct consequence of the representation (2.9) and the asymptotics were shown in [18, Lemma 3.1].
(v) Substituting t = s√a in the integral (2.9) gives the first identity. The assertion on the integral in (2.7b) now simply follows by evaluating at x = 0 and x → ∞; observe that by (2.6) the contribution in the limit x → ∞ vanishes. Similarly, in the case b = 2√a c we obtain the corresponding primitive, and we also get the second case of the integral (2.7b) by evaluating the primitive at x = 0 and x → ∞.
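The properties of Λ in Lemma 2.1 can be sanity-checked numerically. The sketch below assumes that (2.2) is Λ(z) = e^{z²}(1 − erf(z)) = e^{z²} erfc(z), the form consistent with the integral representation (2.9) used in the proof; it compares Λ with a quadrature of (2.9) on the real line and verifies the reflection formula of item (ii).

```python
import math

def Lam(x):
    # assumed form of (2.2): Λ(x) = e^{x^2}(1 - erf(x)) = e^{x^2} erfc(x),
    # consistent with the integral representation (2.9)
    return math.exp(x * x) * math.erfc(x)

def Lam_integral(x, N=100000, L=20.0):
    # representation (2.9): Λ(x) = (2/√π) ∫_0^∞ e^{-s^2 - 2xs} ds  (trapezoid rule)
    h = L / N
    f = [math.exp(-s * s - 2 * x * s) for s in (i * h for i in range(N + 1))]
    return (2 / math.sqrt(math.pi)) * h * (sum(f) - 0.5 * (f[0] + f[-1]))

for x in (-1.0, 0.0, 0.5, 2.0):
    print(x, Lam(x), Lam_integral(x))   # the two expressions agree
# reflection formula of Lemma 2.1 (ii): Λ(-x) = 2 e^{x^2} - Λ(x)
print(Lam(-0.5), 2 * math.exp(0.25) - Lam(0.5))
```

The monotone decay of Λ on R, used repeatedly in the estimates below, is also visible in the printed values.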
Using (2.2) we now define, for every t > 0, x ∈ R \ {0}, z ∈ C, and ω ∈ R, the functions G_0, G_1, and G_free in (2.10), which will appear as components of the Green's function (1.5) later on. In the following preparatory lemma we check that each of these components is a solution of the free Schrödinger equation on R \ {0}.
Lemma 2.2. For every t > 0, x ∈ R \ {0}, and z ∈ C, the functions in (2.10) satisfy the differential equation (2.11).

Proof. In order to verify (2.11) we compute the derivatives of the functions (2.10) explicitly; for G_0 and G_1 this yields (2.12) and (2.13). Finally, for G_free we get, in a similar way as for G_0, the derivatives (2.14). In all three cases it is obvious that the differential equation (2.11) is satisfied.
Next we collect some elementary estimates of the functions G_0, G_1, and G_free, which will be needed throughout the paper.

Lemma 2.3. For every t > 0, x ∈ R \ {0}, and z ∈ C with Arg(z) ∈ [0, π/2] the estimates (2.15) for the functions (2.10) hold. In particular, the functions (2.10) satisfy the common estimate (2.16).

Proof. The estimates (2.15a) for G_0 and (2.15b) for G_free are obvious. For the estimate of G_1 we use Lemma 2.1 (iii) and (iv), where the monotonicity of Λ is applicable since Re(z), Im(z) ≥ 0 due to Arg(z) ∈ [0, π/2]. Finally, the estimate (2.16) follows immediately from (2.15) by further estimating the exponents.

Now we turn to our main objective in this section and introduce the Green's function G in (2.17), which is expressed in terms of the functions (2.10); we have added the additional argument ω_± in G_1 to emphasize the dependence on the parameter ω in (2.10b). The function in (2.17) coincides with the Green's function (1.5) mentioned in the Introduction. We prove in Theorem 2.4 below that for a proper choice of coefficients µ_± and µ_0 the function (2.17) satisfies the differential equation (1.4a) as well as the jump condition (1.4b) for a fixed unitary matrix J. The connection to the initial value (1.4c) is postponed to Lemma 4.2 and Theorem 4.4 in Section 4, where the precise meaning of the integral (2.1) is clarified first. Next we provide the coefficients ω_± and the piecewise constant functions µ_± and µ_0 explicitly in terms of the unitary 2 × 2-matrix J in (1.4b). Note that every unitary matrix J ∈ C^{2×2} can be written in the form

J = e^{iφ} ( α  β ; −β̄  ᾱ ) (2.18)

with parameters φ ∈ [0, π) and α, β ∈ C satisfying |α|² + |β|² = 1. It is convenient to use the quantities in (2.19), and to distinguish the following three cases.
These three cases correspond to the rank of the matrix I + J on the right hand side of the jump conditions (1.4b) or (2.21). More precisely, in Case I we have rank(I + J) = 2, in Case II we have rank(I + J) = 1, and finally, in Case III we have rank(I + J) = 0.
Theorem 2.4. For every fixed y ∈ R \ {0} the Green's function (2.17) satisfies the differential equation (2.20) as well as the jump condition (2.21).

Proof. Note first that the coefficients µ_±^{(x,y)} and µ_0^{(x,y)} in the representation (2.17) of the function G only depend on the signs of x and y. In particular, the coefficients are constant on the half lines x > 0 and x < 0, and hence it follows from Lemma 2.2 that the function G in (2.17) is a solution of the differential equation (2.20).
In the following we verify that the jump condition (2.21) is satisfied. Using (2.12b), (2.13b), and (2.14b) we find the spatial derivative of the function G. For the jump condition (2.21) we have to evaluate G and ∂/∂x G at x = 0±. As in (2.21) this will be done in vector form, where the first entry is the limit x = 0+ and the second entry the limit x = 0−. Since (2.21) has to be satisfied for all y ∈ R \ {0}, it suffices to compare and match the coefficients corresponding to the respective terms, which leads to the four equations (A_±), (B), and (C) below.
Since the variable y only appears as sgn(y), each equation splits into one equation for y > 0 and one for y < 0. We take this into account by writing (A_±), (B), and (C) as matrix equations, where the first column is for y > 0 and the second column for y < 0. For a shorter notation we use the matrices M_j and N in (2.23), where j ∈ {0, ±}. Note that the matrix N satisfies the identity (2.24) by (2.19a) for |Re(α)| ≠ 1, and also for |Re(α)| = 1, since then Im(α) = β = 0 due to |α|² + |β|² = 1. From (2.24) and |α|² + |β|² = 1 it immediately follows that the identities (2.25) hold, to which we will refer throughout the proof. With the help of the matrices (2.23) we now rewrite the equations (A_±), (B), and (C) above in matrix form. Plugging in the matrix J from (2.18) and multiplying by e^{−iφ}, these equations turn into (2.26). In the following we discuss the three cases above Theorem 2.4 separately and verify that in each case, with the proper choice of the coefficients ω_± and µ_±, µ_0, the equations (A_±), (B), and (C) are satisfied, that is, the jump condition (2.21) holds.
Case I. Observe first that the equation (2.26c) is satisfied since µ_0^{(x,y)} = sgn(xy) in this case, and hence we conclude M_0 = 2I − 𝟙. Next we use |α|² + |β|² = 1 and the assumption of Case I to see that the matrix on the right hand side of (A_±) and (B) is invertible; computing its inverse and applying it, where in the last step the identity (2.24) is used, the equations (2.26a) and (2.26b) turn into explicit conditions on M_±. Since we treat Case I we have µ_±^{(x,y)} = −(ω_±/2) Θ(xy) ± η^{(x,y)}, and from this we conclude the explicit form of M_±; plugging it into (2.26) shows that the equations (A_±) and (B) are satisfied in Case I.

Case II. Here Re(α) = −cos(φ), and hence 1 − Re(α)² = sin²(φ); together with (2.24) this determines the coefficients µ_±^{(x,y)} in (2.27), and we conclude together with (2.25) that equation (A_+) is valid; note that we can apply (2.25) since Re(α) ≠ −1 by the assumption in Case II and also Re(α) = −cos(φ) ≠ 1 as φ ∈ [0, π). Next, we observe that also equation (C) holds by (2.25) and the form of µ_0^{(x,y)}.

Special cases of generalized point interactions and their Green's functions
In this section we consider some particular generalized point interactions and derive the explicit form of the Green's function in these situations. As an almost trivial case we start with the free particle in Example 3.1, discuss the well-known δ and δ ′ -interactions afterwards in Example 3.2 and Example 3.3, respectively, and in Examples 3.4-3.6 we treat decoupled systems with Dirichlet, Neumann, and Robin boundary conditions at the origin. In each of the examples we first provide the corresponding matrix J for the interface conditions (1.4b) with parameters φ, α, β as in (2.18), then we determine which of the Cases I-III above Theorem 2.4 appears, and finally we compute the coefficients in the Green's function (1.5) or (2.17). The special Green's functions in this section are known from the mathematical and physical literature.
Example 3.1 (Free particle). The wave function corresponding to a free particle is continuous with continuous first derivative, and hence at the point x = 0 we have

Ψ(0+) = Ψ(0−) and Ψ′(0+) = Ψ′(0−).

These continuity conditions are described in (1.4b) if we consider the matrix

J = ( 0  1 ; 1  0 ).

This matrix is of the form (2.18) with α = 0, β = −i, and φ = π/2. Computing the coefficient η^{(x,y)} in (2.19a), one sees that we are in Case II, and the coefficients of the corresponding Green's function in (1.5) follow explicitly. Therefore, the Green's function of the free particle is given by the well known free propagator

G(t, x, y) = (4πit)^{−1/2} e^{i(x−y)²/(4t)}, t > 0, x, y ∈ R.

In the next example we treat the classical δ-point interaction located at the origin. Such singular potentials were studied intensively in the mathematical and physical literature; we refer the interested reader to the standard monograph [15] for a detailed treatment and further references. The particular Green's function that appears below can also be found (sometimes in a slightly different form) in the papers [33,42,48].
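The free propagator of Example 3.1 can be sanity-checked numerically; the sketch below assumes the sign convention i∂_tΨ = −∂_x²Ψ for the free Schrödinger equation and verifies it by finite differences at a sample point.

```python
import cmath, math

def G_free(t, x, y):
    # standard free Schrödinger propagator (assumed convention i∂_tΨ = -∂_x²Ψ):
    # G(t, x, y) = (4πit)^{-1/2} e^{i(x-y)²/(4t)}
    return cmath.exp(1j * (x - y) ** 2 / (4 * t)) / cmath.sqrt(4j * math.pi * t)

# finite-difference check of i ∂_t G = -∂_x² G at a sample point away from x = y
t, x, y, h = 1.0, 0.5, 0.0, 1e-4
dG_dt = (G_free(t + h, x, y) - G_free(t - h, x, y)) / (2 * h)
d2G_dx2 = (G_free(t, x + h, y) - 2 * G_free(t, x, y) + G_free(t, x - h, y)) / h ** 2
residual = abs(1j * dG_dt + d2G_dx2)
print(residual)   # small compared with |∂_x² G| ≈ 0.14
```

The residual is dominated by the finite-difference truncation error, several orders of magnitude below the size of the individual terms.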
Example 3.2 (δ-potential). We consider the standard δ-interaction of strength 2c ∈ R \ {0} located at the point x = 0. This situation is described by the formal Schrödinger equation with the potential 2c δ and is made mathematically rigorous in the form of the jump conditions

Ψ(0+) = Ψ(0−), (3.1a)
Ψ′(0+) − Ψ′(0−) = 2c Ψ(0). (3.1b)

The jump condition (3.1a)-(3.1b) is realized in (1.4b) by a suitable matrix J. In fact, with this choice of J and after multiplication by (c − i), the condition (1.4b) reads as two equations; by subtracting these equations from each other we first conclude (3.1a), and adding the equations leads to (3.1b). In order to write the matrix J in the form (2.18) we choose φ ∈ (0, π) such that cot(φ) = c. Next we set α = −cos(φ) and β = −i sin(φ). It follows, in particular, that sin(φ) = 1/√(1 + c²) and cos(φ) = c/√(1 + c²). Plugging these values in (2.19a) gives the coefficient η^{(x,y)}, and since we are in Case II the coefficients of the Green's function follow. With these quantities we conclude from (1.5) the explicit form of the Green's function of the δ-potential.
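The jump conditions (3.1a)-(3.1b) can be illustrated with the well known bound state of the attractive δ-interaction; a minimal sketch, assuming the convention of (3.1b) (for c < 0 the function ψ(x) = e^{−κ|x|} with κ = −c satisfies the jump condition and is an eigenfunction with energy E = −c²):

```python
import math

# attractive delta-interaction of strength 2c (c < 0): the jump condition
# (3.1b) psi'(0+) - psi'(0-) = 2c psi(0) is satisfied by the bound state
# psi(x) = e^{-kappa|x|} with kappa = -c (standard fact, used as a sanity check)
c = -1.5
kappa = -c

def psi(x):
    return math.exp(-kappa * abs(x))

h = 1e-6
jump = (psi(h) - psi(0.0)) / h - (psi(0.0) - psi(-h)) / h   # psi'(0+) - psi'(0-)
print(jump, 2 * c * psi(0.0))                               # both ≈ -3

h2 = 1e-4
x0 = 1.0
d2 = (psi(x0 + h2) - 2 * psi(x0) + psi(x0 - h2)) / h2 ** 2
print(-d2, -c * c * psi(x0))   # -psi'' = E psi with E = -c² away from the origin
```

The continuity condition (3.1a) holds trivially for this ψ, so the one-sided derivatives carry all the singular information.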
The δ ′ -interaction in the next example is another popular singular potential that appears in various situations.
Example 3.3 (δ′-potential). We consider the δ′-interaction of strength 2c ∈ R \ {0} located at the point x = 0, which in a mathematically rigorous form reads as the jump conditions

Ψ′(0+) = Ψ′(0−), Ψ(0+) − Ψ(0−) = 2c Ψ′(0).

One verifies in a similar way as in the previous example that these jump conditions are realized in (1.4b) by a suitable matrix J. This matrix is of the form (2.18) if we choose φ ∈ (0, π) \ {π/2} such that tan(φ) = −c and set α = cos(φ) and β = −i sin(φ). The coefficient η^{(x,y)} in (2.19a) then follows, and with it the Green's function of the δ′-potential in (1.5).

Now we turn to generalized point interactions that lead to decoupled systems. In the following examples we discuss Dirichlet, Neumann, and Robin boundary conditions at the origin.

Example 3.4 (Dirichlet boundary conditions). We consider the free Schrödinger equation on the two half lines R \ {0} with Dirichlet boundary conditions Ψ(0+) = Ψ(0−) = 0.
These boundary conditions are realized in (1.4b) by using the matrix J = −I in (3.2).

In the next example we consider Robin boundary conditions at the origin. The Neumann boundary conditions in Example 3.5 are contained as a special case and the Dirichlet boundary conditions in Example 3.4 formally appear as a limit; cf. Remark 3.7.
Example 3.6 (Robin boundary conditions). We consider the free Schrödinger equation on the two half lines R \ {0} with Robin boundary conditions

Ψ′(0+) = a Ψ(0+) and Ψ′(0−) = b Ψ(0−)

for some a, b ∈ R; note that the minus sign for the derivative at x = 0− on the right hand side of (1.4b) is omitted here. These boundary conditions are realized in (1.4b) by using the matrix in (3.4) with φ ∈ [0, π) chosen accordingly, where we use sgn(0) = 1. One verifies that Case I applies, and a (more technical) computation finally leads to the Green's function (3.5).

Remark 3.7. It is clear that for a = b = 0 the boundary conditions and Green's function in Example 3.6 reduce to those in Example 3.5. Moreover, also the boundary conditions and Green's function for the Dirichlet decoupling in Example 3.4 can be recovered from Example 3.6. In fact, for a → ∞ and b → −∞ the matrix J in (3.4) tends to the one in (3.2), and using Lemma 2.1 (iv) one obtains the corresponding asymptotics of the Green's function.

Solution of the Schrödinger equation with a generalized point interaction
In this section we continue the theme from Section 2, where in Theorem 2.4 it was already shown that the Green's function (1.5) satisfies the Schrödinger equation (2.20) and the jump condition (2.21) that represents the generalized point interaction at the origin. Now we turn our attention to the initial value (1.4c). This missing part will be provided in Theorem 4.4 below. However, the main technical issue here is to make sense of the integral (1.6). Since we want to consider, e.g., plane waves F(x) = e^{ikx} as initial conditions, we have to deal with integrands that are not absolutely integrable. For this purpose the so-called Fresnel integral, discussed in Lemma 4.1, will be useful. The resulting representation of the integral then also ensures, in a mathematically rigorous way, that the properties (2.20) and (2.21) of the Green's function G carry over to the respective properties (1.4a) and (1.4b) of the wave function Ψ.
Lemma 4.1. Let f be holomorphic on an open set containing the sector S_α from (4.1) for some α ∈ (0, π/2), and assume that f satisfies the estimate (4.2) for some A ≥ 0 and ε > 0. Then we get the Fresnel type identity

∫_0^∞ f(y) dy = e^{iα} ∫_0^∞ f(y e^{iα}) dy, (4.3)

where the integral on the right hand side is absolutely convergent.
Proof. For simplicity we write k = tan(α) > 0. For every R > 0 we consider the closed integration path consisting of the segment [0, R], a circular arc from R to Re^{iα}, and the segment from Re^{iα} back to the origin. By Cauchy's theorem the integral of f over this closed path vanishes, and the estimate (4.2) ensures that the contribution of the arc tends to 0 as R → ∞. The estimate |f(ye^{iα})| ≤ Ae^{−ε sin(2α)y²}, y > 0, implies that the integral on the right hand side is absolutely convergent, and hence the identity (4.3) follows.
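The contour rotation of Lemma 4.1 can be illustrated numerically with the classical Fresnel integrand f(y) = e^{iy²}: rotating the ray by α = π/4 maps it to the Gaussian e^{−y²}, so the non-absolutely-convergent oscillatory integral becomes an absolutely convergent one with the known value √(π/8)(1 + i). A minimal sketch:

```python
import cmath, math

# f(y) = e^{iy²}; on the rotated ray |f(y e^{iα})| = e^{-y² sin(2α)}, as in (4.2)
alpha = math.pi / 4

def f(z):
    return cmath.exp(1j * z * z)

# right-hand side of (4.3): e^{iα} ∫_0^∞ f(y e^{iα}) dy, via the trapezoid rule
N, L = 100000, 10.0
h = L / N
vals = [f(i * h * cmath.exp(1j * alpha)) for i in range(N + 1)]
rotated = cmath.exp(1j * alpha) * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# known value of the Fresnel integral ∫_0^∞ e^{iy²} dy = √(π/8) (1 + i)
exact = math.sqrt(math.pi / 8) * (1 + 1j)
print(rotated, exact)
```

Along the rotated ray the integrand is (up to rounding) the real Gaussian e^{−y²}, which is why the quadrature converges rapidly, whereas a direct quadrature of e^{iy²} on [0, ∞) would not.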
In the next lemma we define functions Ψ 0 , Ψ 1 , and Ψ free that are closely related to the functions G 0 , G 1 , and G free in (2.10), which will then lead to a solution of the Schrödinger equation (1.4) in Theorem 4.4 below.
Lemma 4.2. Let F : Ω → C be holomorphic on an open set Ω ⊆ C which contains the sector S_α from (4.1) for some α ∈ (0, π/2), and assume that F satisfies the estimate

|F(z)| ≤ A e^{B Im(z)}, z ∈ S_α, (4.5)

for some A, B ≥ 0. For every fixed t > 0, x ∈ R \ {0} we consider the functions Ψ_j, j ∈ {0, 1, free}, in (4.6). Then the following assertions hold:

(i) The integral on the right hand side in (4.6) exists as the improper Riemann integral (4.7) and the function Ψ_j admits the absolutely integrable representation (4.8).

(ii) The functions Ψ_j, j ∈ {0, 1, free}, in (4.6) are solutions of the differential equation (4.9).

(iii) The initial conditions (4.10) and (4.11) hold.

Proof. (i) This assertion is a direct consequence of Lemma 4.1 if we verify that the functions y → G_j(t, x, y)F(y), j ∈ {0, 1, free}, satisfy an estimate of the form (4.2). In fact, the estimate (2.16) together with the assumption (4.5) leads to the bound (4.12). Since for every z ∈ S_α we have Im(z) ≤ tan(α) Re(z), and hence Im(z²) ≥ 2 cot(α) Im(z)², the exponent in (4.12) can be further estimated by (4.13). Taking into account that a polynomial of the form −a Im(z)² + b Im(z) with a > 0 is bounded from above, (4.13) and thus (4.12) can be estimated by an expression of the form (4.2). This shows that (4.2) indeed holds in the present context, so the integral (4.6) exists in the form (4.7) and admits the absolutely integrable representation (4.8).
(ii) Since we have already shown in Lemma 2.2 that G_0, G_1, and G_free solve (2.11), it remains to interchange the integral and the derivatives in the representation (4.8). We verify this property for the time derivative of G_0 and leave the analogous arguments for the spatial derivatives and the functions G_1 and G_free to the reader. Note that from (2.12a) with z = ye^{iα} one obtains, together with (4.5), a bound of the form const · e^{−y² sin(2α)/(4t) + By sin(α)}.
The term e^{−y² sin(2α)/(4t)} now ensures the integrability of the right hand side. Since all terms are continuous functions in t, we can also choose an integrable upper bound which is locally uniform in t. Hence, by classical theorems for the Lebesgue integral (see, e.g., [51]) the time derivative of Ψ_0(t, x; F) exists and is given by the integral over the time derivative of G_0. Similar arguments also apply to the other eight derivatives in (2.12), (2.13), and (2.14), and we conclude that for every j ∈ {0, 1, free} the derivatives of Ψ_j may be computed under the integral sign. As was already mentioned, the functions G_j solve (2.11), and hence it follows that the functions Ψ_j satisfy (4.9).
(iii) To check the initial conditions (4.10) for Ψ_0 and Ψ_1, we plug the estimates (2.15a) and (4.5) into the representation (4.8). This yields a bound for j ∈ {0, 1}, where we have used the integral (2.7a) in the second line; the convergence follows from the asymptotics (2.6) and the fact that c_j(t)√t is bounded (for the precise form of the constants see Lemma 2.3). For the initial value of Ψ_free we distinguish two cases. For x < 0 we use the estimate (2.15b) and get the same convergence as in (4.15). The remaining case x > 0 is more involved. Here we split up the integral (4.6) as in (4.16). In the first integral we use the derivative d/dz erf(z) = (2/√π) e^{−z²} of the error function, as well as integration by parts. Using lim_{t→0+} erf(ξ/(2√(it))) = sgn(ξ), ξ ∈ R, and dominated convergence, the first integral vanishes in the limit t → 0+. In the second integral in (4.16) we substitute y → y + 2x and obtain the same integral as the one for Ψ_0, with the initial function F(· + 2x) instead of F. Consequently, this integral also vanishes in the limit t → 0+. Thus, we have also shown the initial condition (4.11) for Ψ_free.
As the last preparatory statement we prove the following lemma about the representation of the functions Ψ_0, Ψ_1, and Ψ_free at the support of the singular interaction, x = 0±.

Lemma 4.3. Let F : Ω → C be holomorphic on an open set Ω ⊆ C which contains the sector S_α from (4.1) for some α ∈ (0, π/2), and assume that F satisfies the estimate

|F(z)| ≤ A e^{B Im(z)}, z ∈ S_α, (4.17)

for some A, B ≥ 0. Then for the functions Ψ_j, j ∈ {0, 1, free}, from (4.6) and their spatial derivatives we are allowed to carry the limit x → 0± inside the integral, that is, (4.18) holds, where, similar to (4.7), the integrals exist as improper Riemann integrals.

Proof. For the function Ψ_j in the representation (4.8) we have the estimate (4.19), which follows from the assumption (4.17) on F and (2.16). Since this upper bound is continuous in x, we can choose it to be uniform for all x in a neighborhood of 0. Now we can use dominated convergence in (4.8) to get the absolutely integrable representation (4.20). Once more from (4.17) and (2.16) we get the estimate (4.21). The estimate (4.13) for x = 0 allows us to further estimate the integrand. This estimate shows, in particular, that the assumption (4.2) of Lemma 4.1 is satisfied, and hence we can use (4.3) to rewrite the absolutely integrable representation (4.20) into the improper Riemann integral (4.18a).
The same argument applies also to the spatial derivative in (4.14b). Here, the explicit representations (2.12b), (2.13b), and (2.14b) lead to a similar estimate as in (2.16), and consequently also to estimates of the form (4.19) and (4.21).
The next theorem is the main result of this section, where a solution Ψ of the Schrödinger equation (1.4) is obtained by assembling the components Ψ_j from (4.6) based on the structure of the Green's function in (2.17). Besides the four parts of the Green's function we also have to take into account that now integrals over R appear, whereas the integrals in (4.6) are only over the positive half line (0, ∞).

Theorem 4.4. Let F : Ω → C be holomorphic on an open set Ω ⊆ C which contains the double sector S_α ∪ (−S_α) for some α ∈ (0, π/2), and assume that F satisfies the estimate (4.23) for some A, B ≥ 0. Then for every t > 0 and x ∈ R \ {0} the function Ψ in (4.24) exists as an improper Riemann integral of the form

∫_R G(t, x, y)F(y) dy := lim_{R→∞} ∫_{−R}^{R} G(t, x, y)F(y) dy, (4.25)

and is a solution of the Schrödinger equation (1.4).

Proof. For y > 0 the Green's function (2.17) can be written as a linear combination of the components (2.10), and hence we conclude from Lemma 4.2 (i) that the limit over the positive half line exists. Moreover, for y > 0 we also have a corresponding representation of G(t, x, −y), where we used G_free(t, x, −y) = G_free(t, −x, y), a direct consequence of (2.10c). Again from Lemma 4.2 (i) we conclude that also the limit over the negative half line exists. Here we used the mirrored function F̃(z) := F(−z), which also satisfies the assumption (4.5), since (4.23) holds on the double sector S_α ∪ (−S_α). This leads to the existence of the function Ψ in (4.24) in the sense of (4.25), and also shows that it can be decomposed into (4.26). Due to (4.9) the functions Ψ_0, Ψ_1, and Ψ_free are solutions of the differential equation, and so their linear combination Ψ is a solution of (1.4a). Note that the coefficients µ_± and µ_0 only depend on the sign of x and hence do not influence the differential equation. Moreover, although the term Ψ_free(t, −x; F) depends on the variable −x, this function also solves (1.4a) since the x-derivative is of second order.
In order to check the jump condition (1.4b) we note that by Lemma 4.3 we are allowed to carry the limit x → 0± inside the integral, and hence the corresponding representations also hold for the linear combination. Again, note that the negative x argument of Ψ_free(t, −x; F) does not matter, since G_free(t, 0+, y) = G_free(t, 0−, y) by definition (2.10c). Since G satisfies the jump condition (2.21), the function Ψ satisfies the jump condition (1.4b). Finally, the initial values (4.10) and (4.11) imply the initial condition (1.4c) of the wave function Ψ.
In preparation for the analysis of superoscillations in the next section we now briefly discuss convergent sequences of initial conditions (F_n)_n and the convergence of the corresponding solutions (Ψ(t, x; F_n))_n of the Schrödinger equation (1.4). As before we shall first deal with the functions Ψ_j, j ∈ {0, 1, free}, in (4.6) and assemble these components afterwards into the whole wave function Ψ; cf. (4.26) in the proof of Theorem 4.4.
Lemma 4.5. Let F, F_n : Ω → C, n ∈ N_0, be holomorphic on an open set Ω ⊆ C which contains the sector S_α from (4.1) for some α ∈ (0, π/2), and assume that for some A, B ≥ 0 and A_n, B_n ≥ 0, n ∈ N_0, the exponential bounds (4.5) hold. If the sequence (F_n)_n converges as

sup_{z∈S_α} |F_n(z) − F(z)| e^{−C|z|} → 0, n → ∞, (4.27)

for some C ≥ 0, then also the corresponding wave functions Ψ_j, j ∈ {0, 1, free}, in (4.6) converge as

Ψ_j(t, x; F_n) → Ψ_j(t, x; F), n → ∞, (4.28)

uniformly on compact subsets of (0, ∞) × R.
Proof. First of all, we have the estimate |F_n(z) − F(z)| ≤ C_n e^{C|z|}, z ∈ S_α, where C_n := sup_{z∈S_α} |F_n(z) − F(z)| e^{−C|z|}. Using the representation (4.8), this inequality together with the estimate (2.16) of the Green's function leads to a bound on |Ψ_j(t, x; F_n) − Ψ_j(t, x; F)|, where in the last line we used the integral (2.7a). Since the right hand side of this inequality is continuous in t ∈ (0, ∞) and x ∈ R, and we have C_n → 0 by the assumption (4.27), the uniform convergence (4.28) on compact subsets of (0, ∞) × R follows.
Lemma 4.5 now leads to the following theorem, which is an important ingredient in the next section.
Theorem 4.6. Let F, F_n : Ω → C, n ∈ N_0, be holomorphic on an open set Ω ⊆ C which contains the double sector S_α ∪ (−S_α) for some α ∈ (0, π/2), assume that the exponential bounds (4.5) hold on S_α ∪ (−S_α), and assume that the convergence (4.29) holds on S_α ∪ (−S_α) for some C ≥ 0. Then the corresponding solutions of the Schrödinger equation (1.4) converge as in (4.30), uniformly on compact subsets of (0, ∞) × R.

Proof. It is clear that the convergence (4.29) of the functions F_n, n ∈ N_0, implies the same convergence of the mirrored functions F̃_n(z) = F_n(−z), n ∈ N_0, and F̃(z) = F(−z). Since the function Ψ can be decomposed in the form (4.26), the convergence (4.30) follows immediately from Lemma 4.5.

Superoscillatory initial data and plane wave asymptotics
In this section we allow superoscillatory functions as initial data in the Schrödinger equation (1.4) and we show that the corresponding solutions converge uniformly on compact sets. To discuss the oscillatory properties of these solutions we study the long time asymptotics of the plane wave solution in Theorem 5.4 and Remark 5.5, where the expected oscillatory behaviour and also possible stationary terms reflecting negative bound states of the singular potential are identified.
Superoscillating functions are band-limited functions that can oscillate faster than their fastest Fourier component. This is made precise in the next definition.
(ii) There exists a compact subset K ⊂ R, called the superoscillation set, on which the convergence (5.2) holds.

In the next corollary, which is a simple consequence of Theorem 4.6, it will be shown that superoscillating initial data (F_n)_n (with a slightly stronger convergence property) lead to solutions (Ψ(t, x; F_n))_n that converge on compact subsets for all times t > 0. We mention that the characteristic superoscillatory behaviour of the functions (F_n)_n takes place on a compact set K in (5.2), but this is not enough to ensure the same convergence for the sequence of solutions (Ψ(t, x; F_n))_n. As the functions (5.1) admit entire extensions to the whole complex plane, it is natural to impose the stronger convergence (5.4); then the solutions converge uniformly on compact subsets of (0, ∞) × R.
Together with the convergence (5.4) this means that the functions F_n satisfy the assumptions of Theorem 4.6 for any α ∈ (0, π/2), and hence the statement follows.
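The defining feature of superoscillations described above can be seen numerically: on the superoscillation set the local wavenumber Im(F_n′/F_n) exceeds the band limit 1. A minimal sketch, assuming the standard product form of the sequence (1.7) (an assumption, since the display of (1.7) is not reproduced here); for this sequence the local wavenumber at x = 0 equals k exactly, for every n.

```python
import cmath

def F(n, x, k):
    # product form of the standard superoscillating sequence (assumed form of
    # (1.7)); all of its Fourier components have frequencies (1 - 2l/n) in [-1, 1]
    return (cmath.cos(x / n) + 1j * k * cmath.sin(x / n)) ** n

# local wavenumber Im(F'(x)/F(x)) at x = 0, computed by central differences
n, k, h = 60, 3.0, 1e-5
deriv = (F(n, h, k) - F(n, -h, k)) / (2 * h)
local_freq = (deriv / F(n, 0.0, k)).imag
print(local_freq)   # ≈ k = 3, well above the band limit 1
```

This is the phenomenon that the time evolution results of this section are designed to control: the fast local oscillation persists under the Schrödinger dynamics with a generalized point interaction.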
To analyse the oscillatory behaviour of the functions Ψ(t, x; F_n) and Ψ(t, x; e^{ik·}) in Corollary 5.2 it is useful to compute the explicit form of the plane wave solution Ψ(t, x; e^{ik·}) and to provide its long time asymptotics.
Proof. We start by calculating the functions Ψ_j(t, x; e^{ik·}) for j ∈ {0, 1, free} from (4.6). Since the holomorphic continuation F(z) = e^{ikz} of the initial condition satisfies the assumption (4.5) for the special choice α = π/4, we can use the absolutely convergent integral representation (4.8). For the functions Ψ_0 and Ψ_free we use the integral identity (2.7a), and for the function Ψ_1 we use the integral (2.7b) to get, at least for ω and k not both vanishing, explicit expressions. Assembling the components as in (4.26) gives the explicit form of Ψ(t, x; e^{ik·}), which easily simplifies to (5.6). For k = 0 we get a corresponding representation from (5.5). Using the Taylor series

erf(z) = (2/√π) ∑_{n=0}^∞ (−1)^n z^{2n+1} / (n!(2n+1))

we get the asymptotics for t → ∞, and hence the wave function Ψ(t, x; 1) reduces to (5.7) in the limit t → ∞.