Inverse problems for Jacobi operators III: Mass-spring perturbations of semi-infinite systems

Consider an infinite linear mass-spring system and a modification of it obtained by changing the first mass and the first spring of the system. We give results on the interplay between the spectra of such systems and on the reconstruction of the system from its spectrum and that of the modified system. Furthermore, we provide necessary and sufficient conditions for two sequences to be the spectra of the mass-spring system and of the perturbed one.


Introduction
Let l_fin(N) be the linear space of complex sequences with a finite number of nonzero elements. In the Hilbert space l_2(N), consider the operator J_0 defined for every f = {f_k}_{k=1}^∞ in l_fin(N) by

    (J_0 f)_1 = q_1 f_1 + b_1 f_2 ,    (J_0 f)_n = b_{n−1} f_{n−1} + q_n f_n + b_n f_{n+1} ,  n > 1 ,

where q_n ∈ R and b_n > 0 for any n ∈ N. The operator J_0 is symmetric and has deficiency indices (1, 1) or (0, 0) [1, Chap. 4, Sec. 1.2]. Fix a self-adjoint extension of J_0 and denote it by J; thus, either J ⊋ J_0 or J = J_0. According to the definition of the matrix representation for an unbounded symmetric operator [2, Sec. 47], J_0 is the operator whose matrix representation with respect to the canonical basis {δ_k}_{k=1}^∞ in l_2(N) is the semi-infinite Jacobi matrix

    ( q_1  b_1  0    0   …
      b_1  q_2  b_2  0   …
      0    b_2  q_3  b_3 …
      …    …    …    …     )        (1.3)

Along with J, we consider the operator

    J̃ = J + [q_1(θ² − 1) + θ²h] ⟨δ_1, ·⟩δ_1 + b_1(θ − 1)(⟨δ_1, ·⟩δ_2 + ⟨δ_2, ·⟩δ_1) ,    θ > 0 ,  h ∈ R ,    (1.4)

which is a self-adjoint extension of the operator whose matrix representation with respect to the canonical basis in l_2(N) is

    ( θ²(q_1 + h)  θb_1  0    0   …
      θb_1         q_2   b_2  0   …
      0            b_2   q_3  b_3 …
      …            …     …    …     )        (1.5)

Note that J̃ is obtained from J by a particular kind of rank-two perturbation. Under the assumption that J has discrete spectrum (as explained in Section 2, when J_0 has deficiency indices (1, 1) this is always the case), this work treats the inverse spectral problem of reconstructing, from the spectra of J and J̃, the matrix (1.3) and the "boundary condition at infinity" defining the self-adjoint extension J if necessary (i.e. if J_0 is not essentially self-adjoint, cf. [8, Sec. 2]). To solve this inverse problem, one should elucidate the distribution of the perturbed spectrum relative to the unperturbed one and determine the input data needed for recovering the matrix. An important point to note is that this work provides necessary and sufficient conditions for two sequences to be the spectra of J and J̃. We also discuss the (lack of) uniqueness of the reconstruction.
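As a finite-dimensional sanity check, the following sketch builds a truncation of (1.3) and verifies that the rank-two formula (1.4) and the matrix (1.5) describe the same perturbation. The entries q_n, b_n and the parameters θ, h below are made up for illustration; none of these numbers come from the paper.

```python
import numpy as np

def jacobi_matrix(q, b):
    """Finite section of the Jacobi matrix (1.3): diagonal q, off-diagonal b."""
    n = len(q)
    return np.diag(q) + np.diag(b[: n - 1], 1) + np.diag(b[: n - 1], -1)

# Illustrative (made-up) entries and perturbation parameters.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
b = np.abs(rng.normal(size=8)) + 0.1
theta, h = 1.5, 0.7

J = jacobi_matrix(q, b)

# Perturbed matrix (1.5): only the first diagonal and off-diagonal entries change.
q_pert = q.copy(); b_pert = b.copy()
q_pert[0] = theta**2 * (q[0] + h)
b_pert[0] = theta * b[0]
J_tilde = jacobi_matrix(q_pert, b_pert)

# The same matrix obtained via the rank-two formula (1.4), where <delta_1, .> delta_1
# corresponds to the outer product e1 e1^T, etc.
e1 = np.eye(8)[:, 0]; e2 = np.eye(8)[:, 1]
J_from_14 = (J
             + (q[0] * (theta**2 - 1) + theta**2 * h) * np.outer(e1, e1)
             + b[0] * (theta - 1) * (np.outer(e2, e1) + np.outer(e1, e2)))

assert np.allclose(J_tilde, J_from_14)          # (1.4) and (1.5) agree
assert np.linalg.matrix_rank(J_tilde - J) == 2  # the perturbation has rank two
```

The perturbation acts only on the top-left 2×2 corner, which is why the difference J̃ − J has rank two whenever θ ≠ 1.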
Although the two-spectra inverse problem for the rank-one perturbation family of Jacobi operators has been thoroughly studied (see for instance [9,12,16,20] and [5,6,10,11] for the case of finite matrices), the literature dealing with inverse problems for other kinds of perturbations is scarce (cf. [8]).
The motivation for this work is the inverse spectral problem studied in [15] and [7], which in its turn is related to the physical problem of measuring micro-masses with the help of micro-cantilevers [18,19]. Micro-cantilevers are modeled by spring-mass systems whose masses and spring constants are determined by the mechanical parameters of the micro-cantilevers.
In this work we consider the semi-infinite mass-spring system given in Fig. 1, with masses {m_j}_{j=1}^∞ and spring constants {k_j}_{j=1}^∞. This system is modeled by the Jacobi matrix (1.3), whose entries are determined by the masses and spring constants; in [11, 14] it is explained how to deduce these formulae. Since J is assumed to have discrete spectrum, the movement of the system is a superposition of harmonic oscillations whose frequencies are the square roots of the moduli of the eigenvalues.

Figure 1: Semi-infinite mass-spring system

The modified mass-spring system corresponding to the perturbed operator J̃ is obtained by changing the first mass by ∆m = m_1(θ^{−2} − 1) and the first spring by ∆k = −hm_1 (see Fig. 2). Here we also consider negative values of ∆m and ∆k, which correspond to θ > 1 and h < 0, respectively.

Figure 2: Perturbed semi-infinite mass-spring system

Note that the perturbation involved here is the result of the combined effect of a rank-one perturbation (studied thoroughly in [16]) and the particular rank-two perturbation studied in [8]. However, most of the results obtained here cannot be derived from the results in [16] and [8], and require their own proofs. Moreover, it turns out that one can single out classes of isospectral operators within the two-parameter perturbation family considered in this work that were not studied before.

The paper is organized as follows. In Section 2 we fix the notation, lay down a convention for enumerating sequences, and recall some results of the inverse spectral theory for Jacobi operators. Section 3 gives a detailed spectral analysis of the family of perturbed Jacobi operators. The solution of the two-spectra inverse problem for J and J̃ is given in Section 4. This section also discusses the non-uniqueness of the reconstruction and gives some characterization of isospectral operators in the perturbation family under consideration.
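The passage from masses and spring constants to Jacobi-matrix entries can be illustrated on a finite truncation. The entry formulae used below, q_j = (k_j + k_{j+1})/m_j and b_j = k_{j+1}/√(m_j m_{j+1}), are one standard convention and are an assumption on our part; the formulae deduced in [11, 14] may differ from them, e.g. in sign. The sketch checks that, with this convention, the Jacobi eigenvalues agree with the normal-mode values of the Newtonian model M x'' = −K x, so that the oscillation frequencies are indeed the square roots of the moduli of the eigenvalues.

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
m = rng.uniform(1.0, 2.0, size=n)          # masses m_j (made up)
k = rng.uniform(0.5, 1.5, size=n + 1)      # spring constants k_j (made up)

# Assumed convention for the entries of (1.3):
#   q_j = (k_j + k_{j+1}) / m_j ,  b_j = k_{j+1} / sqrt(m_j m_{j+1}).
q = (k[:n] + k[1:n + 1]) / m
b = k[1:n] / np.sqrt(m[:n - 1] * m[1:n])

J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)
eigs = np.linalg.eigvalsh(J)
freqs = np.sqrt(np.abs(eigs))              # harmonic frequencies of the chain

# Cross-check against the Newtonian model M x'' = -K x for the same finite
# chain (the finite section keeps the outer springs k_1 and k_{n+1}):
K = (np.diag(k[:n] + k[1:n + 1])
     - np.diag(k[1:n], 1) - np.diag(k[1:n], -1))
Minv_sqrt = np.diag(1.0 / np.sqrt(m))
eigs_newton = np.linalg.eigvalsh(Minv_sqrt @ K @ Minv_sqrt)
assert np.allclose(np.sort(eigs), np.sort(eigs_newton))
```

The two matrices differ only by a diagonal sign conjugation, which leaves the spectrum unchanged; with this convention the finite-chain eigenvalues are positive, so the moduli in the frequency formula are redundant here.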

A review on inverse spectral theory for Jacobi operators
Let us denote by σ(J) the spectrum of J and consider the resolution of the identity E for J given by the spectral theorem. Then the spectral function ρ of J is defined by

    ρ(t) := ⟨δ_1, E(t) δ_1⟩ ,    t ∈ R .

Moreover, since J turns out to be simple with δ_1 being a cyclic vector, the operator of multiplication by the independent variable in L_2(R, ρ) (defined on its maximal domain) is unitarily equivalent to J. Alongside the spectral function we consider the corresponding Weyl m-function, given by

    m(ζ) := ⟨δ_1, (J − ζI)^{−1} δ_1⟩ = ∫_R dρ(t)/(t − ζ) ,    ζ ∈ C \ R .    (2.1)

By means of the inverse Stieltjes transform, one uniquely recovers ρ from m, so ρ and m are in one-to-one correspondence.
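For a finite truncation of (1.3), the analogues of the Weyl m-function, the spectral measure, and its expansion over eigenvalues can be checked numerically. This is only an illustration of the definitions with made-up entries, not part of the paper's argument: the weights V[0, k]² play the role of the point masses ρ({λ_k}).

```python
import numpy as np

# Finite-section analogue of the Weyl m-function:
#   m(z) = <delta_1, (J - z)^{-1} delta_1> = sum_k |v_k(1)|^2 / (lambda_k - z),
# where (lambda_k, v_k) are the eigenpairs of the truncated Jacobi matrix.
rng = np.random.default_rng(2)
n = 7
q = rng.normal(size=n)
b = np.abs(rng.normal(size=n - 1)) + 0.1
J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

lam, V = np.linalg.eigh(J)
weights = V[0, :]**2                      # point masses of the spectral measure
assert np.isclose(weights.sum(), 1.0)     # the measure is normalized

z = 0.3 + 1.0j
m_resolvent = np.linalg.solve(J - z * np.eye(n), np.eye(n)[:, 0])[0]
m_spectral = np.sum(weights / (lam - z))
assert np.isclose(m_resolvent, m_spectral)
```

The agreement of the resolvent entry with the eigenvalue sum is the finite-dimensional shadow of (2.1) and of the discrete expansion used in Section 2.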
The Weyl m-function has the following asymptotic behavior:

    m(ζ) = −1/ζ + O(ζ^{−2})  as  ζ → ∞ ,  Im ζ ≥ ε > 0 .    (2.3)

If (1.3) is the matrix representation of a non-self-adjoint operator, then the condition at infinity may be found by the method exposed in [16, Sec. 2].
In this work we restrict our considerations to the case of σ(J) being discrete, viz. σ_ess(J) = ∅. It is well known that this is always the case when J_0 is not essentially self-adjoint [17, Thm. 4.11], [21, Lem. 2.19]. The discreteness of σ(J) implies that the measure in (2.1) is the pure point measure

    ρ = Σ_{k∈M} α_k^{−1} δ_{λ_k} ,    (2.4)

where {λ_k}_{k∈M} = σ(J) and α_k is the normalizing constant corresponding to the eigenvalue λ_k, so that (2.1) can be written as

    m(ζ) = Σ_{k∈M} α_k^{−1}/(λ_k − ζ) .    (2.5)
The function m is meromorphic and, since it is also Herglotz, its zeros and poles interlace, i.e., between two contiguous zeros there is only one pole and between two contiguous poles there is only one zero (see the proof of [13, Chap. 7, Thm. 1]).

Now, in the subspace δ_1^⊥ of l_2(N), consider the operator J_T, which is the restriction of J to dom(J) ∩ δ_1^⊥. Note that J_T is a self-adjoint extension of the operator whose matrix representation with respect to the basis {δ_k}_{k=2}^∞ of the space δ_1^⊥ is (1.3) with the first column and row removed. The following proposition is well known (see for instance [16]).

Proposition 2.1. The set of poles of m coincides with σ(J) and the set of zeros of m coincides with σ(J_T); in particular, σ(J) and σ(J_T) interlace.

Proof. Clearly, one should only establish that the zeros and poles of m are as stated in the proposition. But this is a straightforward conclusion from the definition of the Weyl m-function and the formula

    m(ζ) = (q_1 − ζ − b_1² m_T(ζ))^{−1} ,    (2.6)

where m_T is the Weyl m-function of J_T.

(C1) Convention for enumerating a sequence. Let S be an infinite countable set of real numbers without finite points of accumulation, and let M be an infinite set of consecutive integers such that there is a strictly increasing function f : M → S with f^{−1}(0) = 0. We write S = {λ_k}_{k∈M}, where λ_k = f(k). Note that M is semi-bounded from above (below) if and only if the same holds for S, and that in {λ_k}_{k∈M} only λ_0 is allowed to be zero.
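Proposition 2.1 has an exact finite-dimensional counterpart: the eigenvalues of the truncated matrix with the first row and column removed (the analogue of J_T) strictly interlace those of the full truncation (Cauchy interlacing, strict because the off-diagonal entries are nonzero). A quick numerical check with made-up entries:

```python
import numpy as np

# Finite-section illustration of Proposition 2.1: the eigenvalues of J_T
# (first row and column of J removed) strictly interlace those of J,
# matching the pole/zero interlacing of the Weyl m-function.
rng = np.random.default_rng(3)
n = 8
q = rng.normal(size=n)
b = np.abs(rng.normal(size=n - 1)) + 0.1
J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

lam = np.linalg.eigvalsh(J)         # poles of m (eigenvalues of J)
mu = np.linalg.eigvalsh(J[1:, 1:])  # zeros of m (eigenvalues of J_T)

interlaced = all(lam[i] < mu[i] < lam[i + 1] for i in range(n - 1))
assert interlaced
```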

Remark 1.
Clearly, if two real sequences S, S′ without finite accumulation points interlace, then one can always find M and functions f : M → S and f′ : M → S′ with the properties given in our convention (C1) such that, for any k ∈ M, either

    λ_k < λ′_k < λ_{k+1}    or    λ′_k < λ_k < λ′_{k+1} ,

with the same alternative for all k ∈ M, where λ_k = f(k) and λ′_k = f′(k). If S is not semi-bounded, then both possibilities hold simultaneously.
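For finite sequences, the interlacing alternative of Remark 1 can be tested directly. The helper below is a simplified stand-in for the convention (C1): indices simply run from 0, without the index-set bookkeeping of (C1), and both alternatives are checked.

```python
# Check whether two finite real sequences interlace in the sense of Remark 1,
# i.e. (after sorting) either s[k] < s'[k] < s[k+1] for all k, or
# s'[k] < s[k] < s'[k+1] for all k.
def interlace(s, s_prime):
    s, s_prime = sorted(s), sorted(s_prime)
    a = all(s[k] < s_prime[k] < s[k + 1] for k in range(len(s) - 1))
    b = all(s_prime[k] < s[k] < s_prime[k + 1] for k in range(len(s) - 1))
    return a or b

assert interlace([0.0, 2.0, 4.0], [1.0, 3.0, 5.0])       # interlaced
assert not interlace([0.0, 1.0, 4.0], [2.0, 3.0, 5.0])   # not interlaced
```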
The proof of the following proposition can be found in [8, Lem. 4.1] and [16, Sec. 4]. Moreover, C < 0 if σ(J) is semi-bounded from above, while C > 0 otherwise.

Direct spectral analysis for J and J̃
Let J and J̃ be the operators defined in the Introduction. Since J̃_T = J_T, where J̃_T is the operator in the space δ_1^⊥ obtained by restricting J̃ to dom(J̃) ∩ δ_1^⊥, one obtains from (2.6) that

    m̃(ζ) = (θ²(q_1 + h) − ζ − θ²b_1² m_T(ζ))^{−1} ,    (3.1)

where m̃ is the Weyl m-function corresponding to J̃. Let us define the function

    m̆(ζ) := m(ζ)/m̃(ζ) .    (3.2)

Immediately from (3.1) one proves the following proposition. Prior to stating it, in order to simplify the writing of some expressions, let us introduce a constant that will be used recurrently throughout the paper:

    γ := θ²h/(1 − θ²) ,    θ ≠ 1 .    (3.3)
Proposition 3.1. Consider the Jacobi operator J and the operator J̃ given by (1.4) with θ ≠ 1. If J has discrete spectrum, then i) the set of poles of m̆ is a subset of σ(J) and the set of zeros of m̆ is contained in σ(J̃); iii) the sets σ(J) and σ(J̃) can intersect only at γ.
The following alternative expression for m̆,

    m̆(ζ) = θ² + (θ² − 1)(ζ − γ) m(ζ) ,    (3.4)

which is obtained by combining (3.1) and (3.2), is the main ingredient in the proof of the following proposition.
Proof. Let us first prove that between two contiguous eigenvalues of J there is exactly one eigenvalue of J̃. Assume that θ > 1 and consider two contiguous eigenvalues λ, λ′ of J such that γ < λ < λ′. Then, by (2.5) and (3.4), one has

    m̆(ζ) → −∞  as  ζ ↓ λ    and    m̆(ζ) → +∞  as  ζ ↑ λ′ .

The function m̆↾R should therefore cross the 0-axis in (λ, λ′) an odd number of times. Actually, it crosses the 0-axis only once. Indeed, if one assumes that m̆↾R crosses the 0-axis three or more times, as in Fig. 3(a), then, in view of Propositions 2.1 and 3.1, there would be at least two elements of σ(J_T) in (λ, λ′). Note that one crossing of the 0-axis together with a tangential touch of it, as in Fig. 3(b), is excluded by the simplicity of the zeros of m̆. Thus, by the same reasoning used above, the interlacing of the spectra in (γ, +∞) is established. The interlacing in (−∞, γ) is proven analogously.
Let us now prove the second assertion of the proposition. To this end, suppose first that γ ∉ σ(J) and observe that, under this assumption, (3.4) implies that

    m̆(γ) = θ² .    (3.5)

Let us now assume that the contiguous eigenvalues λ, λ′ of J are such that λ < γ < λ′. Under the premise that θ > 1, we have

    m̆(ζ) → +∞  as  ζ ↓ λ    and    m̆(ζ) → +∞  as  ζ ↑ λ′ .    (3.6)

In view of (3.5) and (3.6), if m̆↾R crosses the 0-axis in the interval (λ, γ), it should cross it in (λ, γ) at least twice. The same is true for the interval (γ, λ′). Note that m̆↾R cannot tangentially touch the 0-axis due to the simplicity of its zeros. So, the assumption that m̆↾R crosses the 0-axis would imply, from what has already been proven above, that in (λ, γ), respectively (γ, λ′), there is at least one eigenvalue of J, which contradicts the fact that λ and λ′ are contiguous. Thus, there is no crossing of the 0-axis by m̆↾R in the interval (λ, λ′), which means the absence of eigenvalues of J̃ in (λ, λ′). If now θ < 1, instead of (3.6), one has

    m̆(ζ) → −∞  as  ζ ↓ λ    and    m̆(ζ) → −∞  as  ζ ↑ λ′ .

From this asymptotic behavior, together with (3.5) and a similar reasoning as the one given above, it follows that m̆↾R crosses the 0-axis exactly once in (λ, γ) and once in (γ, λ′).
The case when γ is in σ(J) is treated analogously. Here one only has to take into account two things: firstly, that now γ is not a pole of m̆; indeed, by (3.4),

    lim_{ζ→γ} m̆(ζ) = θ² + (θ² − 1) Res_{ζ=γ} m(ζ) ;

and secondly, that, since −[Res_{ζ=γ} m(ζ)]^{−1} is the normalizing constant of J corresponding to the eigenvalue γ (see (2.5)), one has m̆(γ) > 0 both when θ > 1 and when θ < 1.
when θ > 1, and if θ < 1. Here, implicitly, the intersection of σ(J) with the semi-infinite intervals is assumed to be nonempty, but we are also considering the case when the intersection with one of the semi-infinite intervals is empty (see Remark 2). Also, we are not excluding the case when γ is in σ(J), for which there is k_0 ∈ M such that λ_{k_0} = μ_{k_0} = γ.
Proof. Consider the sequence {η_k}_{k∈M}, the spectrum of J̃. In the proof of [16, Thm. 3.4] it is shown how the corresponding sums behave. Then the assertion follows from the linearity of the limit lim_{n→∞} Σ_{k∈M_n}, as soon as one notices that the enumeration has been done according to Remark 4 when θ ≠ 1.
Proof. When θ = 1, the assertion follows from the proof of [16, Thm. 3.4].

By Proposition 3.3, one actually has

    m̆(ζ) = ∏_{k∈M} (μ_k − ζ)/(λ_k − ζ) .    (3.11)
Indeed, (3.10) implies the convergence of the product in (3.11). Now, the assertion of the proposition follows from

    lim_{ζ→∞, Im ζ≥ε>0} m̆(ζ) = 1    and    lim_{ζ→∞, Im ζ≥ε} ∏_{k∈M} (μ_k − ζ)/(λ_k − ζ) = 1 .

The first limit is obtained from (2.3) and (3.4). The second one is a consequence of the uniform convergence of the product on compacts of C \ R, which, in its turn, can be proven on the basis of (3.10).

Inverse spectral analysis for J and J̃
In this section we give results on the reconstruction of the operator J from its spectrum and that of J̃. Additionally, we provide necessary and sufficient conditions for two sequences to be the spectra of the operators J and J̃. Finally, we discuss isospectral operators within the perturbed family of Jacobi operators.

Proof. In view of what has been said in Section 2, it suffices to show that the input data uniquely determine the Weyl m-function of J and the parameters θ and h.
On the basis of Proposition 3.4, one constructs m̆ from the sets σ(J) and σ(J̃). Then, since γ ∉ σ(J), it follows from (3.4) that m̆(γ) = θ². Now, the constants γ and θ allow one to find h. Finally, by means of (3.4), one determines the function m. Thus, either of θ and h determines the pair θ, h. On the other hand, from (3.4), and taking into account that γ ∈ σ(J), we obtain

    m̆(γ) = θ² − (θ² − 1) α^{−1} ,

where α is the normalizing constant corresponding to the eigenvalue γ.
Suppose now that we are required to enumerate the sequences σ(J) and σ(J̃) according to Remark 4, but no information is given about the constant γ other than that it is not in σ(J). Clearly, one does not need this number to accomplish this task, as stated in the following remark.

Remark 5.
Assuming that J has discrete spectrum, let S = σ(J) and S̃ = σ(J̃) be disjoint, and take any θ ≠ 1 and h ∈ R. It follows from Proposition 3.2 that one can find a set M and functions f : M → S, f̃ : M → S̃, with the properties given in our convention for enumerating sequences (C1), such that there exists a unique k_0 ∈ M for which the corresponding conditions hold under the assumption that λ_k = f(k) and μ_k = f̃(k).

Before we state the necessary and sufficient conditions for two sequences to be the spectra of a Jacobi operator J and its perturbation J̃, let us introduce the following parameterized sequence. Suppose that two sequences {λ_k}_{k∈M} and {μ_k}_{k∈M} are given and enumerated by the set M as convened before. Whenever the series Σ_{k∈M}(μ_k − λ_k) converges, the sequence {τ_n(ω)}_{n∈M} given in (4.2) is well defined for any ω ∈ R.

ii) The series Σ_{k∈M}(μ_k − λ_k) is convergent.
iii) There exists ω̃ ∈ (λ_{k_0−1}, λ_{k_0}) such that:

a) For m = 0, 1, 2, …, the series Σ_{k∈M} λ_k^{2m} τ_k(ω̃) converges.

Proof. Due to Propositions 3.2 and 3.3, for proving the necessity of the conditions it only remains to show the existence of ω̃ in (λ_{k_0−1}, λ_{k_0}) such that τ_n(ω̃) = α_n^{−1} for all n ∈ M. Indeed, iiia) and iiib) will then follow from the fact that all the moments of the spectral measure (2.4) exist and that the polynomials are dense in L_2(R, ρ).
Clearly, γ ∈ (λ_{k_0−1}, λ_{k_0}), so let ω̃ = γ. Then, from (2.5), (3.4), and Proposition 3.4, taking into account (3.5), one verifies that τ_n(ω̃) = α_n^{−1}. We now prove that conditions i), ii), iiia), and iiib) are sufficient. By i) and ii), one verifies that τ_n(ω̃) > 0 for all n ∈ M, so one can define the function ρ̌ given in (4.5). It follows from iiia) that the moments of the measure corresponding to ρ̌ are finite. Now, on the basis of i) and ii), define the meromorphic function m̌.
On the other hand, using again the first equality in (3.12), one obtains that, for the function ρ̌ given in (4.5),

    ∫_R dρ̌(t) = 1 .
Thus the measure corresponding to ρ̌ is appropriately normalized and, because of iiia), all its moments exist, so in L_2(R, ρ̌) one can apply the Gram-Schmidt orthonormalization procedure to the sequence {t^k}_{k=0}^∞ to obtain a Jacobi matrix, as explained in Section 2. Consider the operator J_0 with domain l_fin(N) generated by this Jacobi matrix, as explained in the Introduction. Now, as a consequence of condition iiib), which means that the polynomials are dense in L_2(R, ρ̌), ρ̌ corresponds to the resolution of the identity of a self-adjoint extension J of J_0 [17, Prop. 4.15].
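The reconstruction step used here, orthonormalizing {t^k} in L_2(R, ρ̌) to produce a Jacobi matrix, can be illustrated for a finitely supported measure by the classical Stieltjes (three-term recurrence) procedure. The sketch below, with made-up data, recovers a Jacobi matrix from its own spectral measure; it is a finite-dimensional caricature of the method of Section 2, not the paper's construction itself.

```python
import numpy as np

def jacobi_from_measure(points, weights, n):
    """Recover n rows of the Jacobi matrix whose spectral measure is
    sum_k weights[k] * delta_{points[k]}, by orthonormalizing 1, t, t^2, ...
    in L2 of that measure (Stieltjes three-term recurrence)."""
    q = np.zeros(n)
    b = np.zeros(n - 1)
    P_prev = np.zeros_like(points)                    # P_{-1} = 0
    P = np.ones_like(points) / np.sqrt(weights.sum()) # P_0, normalized
    for j in range(n):
        q[j] = np.sum(weights * points * P**2)        # diagonal entry
        R = (points - q[j]) * P - (b[j - 1] * P_prev if j > 0 else 0)
        if j < n - 1:
            b[j] = np.sqrt(np.sum(weights * R**2))    # off-diagonal entry
            P_prev, P = P, R / b[j]
    return q, b

# Round trip: Jacobi matrix -> spectral measure -> Jacobi matrix.
rng = np.random.default_rng(4)
n = 6
q0 = rng.normal(size=n)
b0 = np.abs(rng.normal(size=n - 1)) + 0.1
J = np.diag(q0) + np.diag(b0, 1) + np.diag(b0, -1)
lam, V = np.linalg.eigh(J)
w = V[0, :]**2                                        # point masses of the measure
q1, b1 = jacobi_from_measure(lam, w, n)
assert np.allclose(q1, q0) and np.allclose(b1, b0)
```

The round trip recovers the entries exactly (up to floating-point error), mirroring the uniqueness of the matrix obtained from ρ̌ in the proof above.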
On the other hand, combining (4.6) and (4.9), and since θ = ϑ and, as already proven, α_k^{−1} = τ_k(ω̃) for k ∈ M, it follows that m̆ = m̌, meaning that the zeros of m̆ are given by the sequence {μ_k}_{k∈M}.

Remark 6. In accordance with Theorem 4.1, the proof of Theorem 4.3 shows that the sequences S, S̃, and the parameter ω̃ satisfying i), ii), and iii) uniquely determine the perturbation parameters θ and h, and the matrix (1.3), together with the boundary condition at infinity if necessary. Thus, S, S̃, and ω̃ amount to the complete input data for solving the inverse spectral problem uniquely.
By hypothesis, all the moments of the measure ρ_ω̃ are finite and the polynomials are dense in L_2(R, ρ_ω̃). For the proposition to be proven, one needs to show that this implies that all the moments of the measure ρ_ω are finite and the polynomials are dense in L_2(R, ρ_ω) for all ω ∈ (λ_{k_0−1}, λ_{k_0}). But, since the support of the measure is the same for all ω ∈ (λ_{k_0−1}, λ_{k_0}), this implication will indeed take place if, for any fixed ω ∈ (λ_{k_0−1}, λ_{k_0}), there are positive constants C_1, C_2 such that

    C_1 ≤ τ_n(ω)/τ_n(ω̃) ≤ C_2    for all n ∈ M .    (4.11)

Fix ω ∈ (λ_{k_0−1}, λ_{k_0}). From (4.2), one obtains the expression (4.12) for the quotients τ_n(ω)/τ_n(ω̃). By elementary estimates of (λ_n − ω̃)/(λ_n − ω), one verifies from (4.12) that (4.11) holds.
Proof. Suppose that θ > 1 and that m̆↾(λ_{k_0−1}, λ_{k_0}) has more than one local extremum. Then one verifies that there are three different points in (λ_{k_0−1}, λ_{k_0}) at which m̆ takes the same value. On the other hand, Lemma 4.1 tells us that the corresponding equation, where θ′ = +√(m̆(ω_1)), has as its only solutions ω_1 and the only element of σ(J′_T) in (λ_{k_0−1}, λ_{k_0}). This is in contradiction with (4.16).
Let us now reformulate and summarize some of our results in terms of the mass-spring systems mentioned in the Introduction.
Suppose that one knows the spectrum of the Jacobi operator corresponding to the mass-spring system given in Fig. 1, and that, after carrying out a mass-spring perturbation of the system as illustrated in Fig. 2, one is given the new spectrum, which does not intersect the first one. Clearly, from the spectra alone one determines whether ∆m is positive or negative (see Proposition 3.2). For definiteness, suppose that ∆m > 0. If no more information is given, then for any value of the ratio of masses θ ∈ (0, max_{t∈(μ_{k_0−1}, μ_{k_0})} m̆(t)] there are mass-spring systems corresponding to Figs. 1 and 2 having the measured spectra (see Theorem 4.4). However, when one knows the ratio of masses θ, then, in general, there are only two mass-spring systems corresponding to Fig. 1 that comply with the conditions after the corresponding perturbation (see Theorem 4.5). Moreover, under an additional condition, there is only one system with the required properties (see Theorem 4.5).
On the basis of I) and II), define the meromorphic function m̌ given in (4.20). Reasoning as in the proof of Theorem 4.3, and since the function m̌(ζ) vanishes as ζ → ∞ along curves in the upper complex half-plane, according to [13, Chap. VII, Sec. 1, Thm. 2] one can write

    m̌(ζ) = ∫_R dρ̌(t)/(t − ζ) .

On the other hand, by IIIa), all the moments of ρ̌ exist. Hence, using the method explained in Section 2, one obtains a Jacobi matrix and the operator J_0 generated by it (see the Introduction). Condition IIIb) implies that ρ̌ is the resolution of the identity of a self-adjoint extension J of J_0 [17, Prop. 4.15]. Now, consider (4.10) with the present data. By construction, the sequence {λ_k}_{k∈M} is the spectrum of J. For the proof to be complete, it only remains to show that {μ_k}_{k∈M} is the spectrum of J̃. For the function m̆ given in (3.2), taking into account (2.5) and (3.4), one obtains an explicit expression for m̆ in terms of {λ_k}_{k∈M} and the normalizing constants.
In view of (4.20) and (4.21), one can compare m̆ with m̌. But, since θ = ω̃ and α_k^{−1} = υ_k(ω̃) for k ∈ M, it follows that m̆ = m̌. In its turn, this means that the zeros of m̆ are given by the sequence {μ_k}_{k∈M}.
Remark 10. By repeating the reasoning of the proof of Theorem 4.6, it is straightforward to verify that Theorem 4.6 remains true if one substitutes θ > 1 by θ < 1 and (3.8) by (3.9) in I).

Proof. To prove the claim, one repeats the reasoning of the proof of Proposition 4.1. Here we observe that, for n ∈ M with n ≠ k_0, the corresponding estimate holds with C = (ω̃ − 1)/(ω − 1).

Remark 11. If, in Proposition 4.2, one substitutes (3.8) by (3.9) in I), then the new assertion holds true.
By repeating the proof of Theorem 4.4 with a minor modification one arrives at the following theorem.
Assume that the spectra of the mass-spring systems given in Figs. 1 and 2 are given and that they intersect. By Proposition 3.2, these input data determine the sign of ∆m. Let us suppose that ∆m > 0. Due to Theorem 4.7, for any value of the ratio of masses θ < m̆(γ) there are mass-spring systems corresponding to Figs. 1 and 2 having the measured spectra. The knowledge of the ratio of masses completely determines the mass-spring systems.
We have given above the ratio of masses as a parameter of the system when the spectra intersect (see Theorems 4.6 and 4.2, where ω and ω̃ play the role of the ratio of masses). This is a "natural" choice because the parameter used in the case when the spectra are disjoint, namely γ, is now given with the spectra. There is also another choice of parameter: the spring constant h. Below we briefly discuss this parameterization, where the role of the spring constant is now played by ω and ω̃. We begin by defining the sequence υ_n(ω) given in (4.18).

Proof. The proof is similar to that of Theorem 4.6, and we restrict ourselves to the case γ > 0; the other cases are proven analogously. Thus, for the necessity of the conditions, one should only establish that there is ω̃ such that υ_k(ω̃) = α_k^{−1} for all k ∈ M. On the basis of (3.3) and (4.18), one has

    1 < m̆(γ) < γ/(γ + h) .