Boundary feedback stabilization of a chain of serially connected strings

We consider N strings connected to one another and forming a particular network which is a chain of strings. We study a stabilization problem; more precisely, we prove that the energy of the solutions of the dissipative system decays exponentially to zero as time tends to infinity, independently of the densities of the strings. Our technique is based on a frequency domain method and a special analysis of the resolvent. Moreover, by the same approach, we study the transfer function associated with the chain of strings and the stability of the Schr\"odinger system.

Mathematical analysis of transmission partial differential equations is detailed in [12].
Let us first introduce some notation and definitions which will be used throughout the rest of the paper, in particular those linked to the notion of C^ν-networks, ν ∈ N (as introduced in [9]).
Let Γ be a connected topological graph embedded in R, with N edges (N ∈ N*). Let K = {k_j : 0 ≤ j ≤ N−1} be the set of the edges of Γ. Each edge k_j is a Jordan curve in R and is assumed to be parametrized by its arc length x_j, such that the parametrization π_j : [j, j+1] → k_j, x_j ↦ π_j(x_j), is ν-times differentiable, i.e. π_j ∈ C^ν([j, j+1], R) for all 0 ≤ j ≤ N−1. The density of the edge k_j is ρ_j > 0. The C^ν-network R associated with Γ is then defined as the union of its edges k_j, 0 ≤ j ≤ N−1.

We study a feedback stabilization problem for a wave equation and a Schrödinger equation in networks; see [3]-[7], [12] and Figure 1. More precisely, we study a linear system modelling the vibrations of a chain of strings.
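As an illustrative aid (not taken from the paper), the chain configuration above can be encoded as a small data structure: edge k_j is parametrized on [j, j+1] with density ρ_j, and consecutive edges share the interior node x = j+1, which is where the transmission conditions act. A minimal Python sketch, with all names hypothetical:

```python
# Minimal sketch of a chain of N serially connected strings.
# Edge j is parametrized by arc length on [j, j+1] and carries a density rho_j.
# All names here are illustrative, not taken from the paper.

def make_chain(rhos):
    """Return the list of edges of the chain, each with its interval and density."""
    return [{"j": j, "interval": (j, j + 1), "density": rho}
            for j, rho in enumerate(rhos)]

chain = make_chain([1.0, 2.5, 0.5])  # N = 3 strings with different densities

# Consecutive edges share an interior node: the right end of edge j is the
# left end of edge j+1, where the transmission conditions are imposed.
for left, right in zip(chain, chain[1:]):
    assert left["interval"][1] == right["interval"][0]
```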
For each edge k_j, the scalar function u_j(t, x), for x ∈ R and t > 0, describes the vertical displacement of the string, 0 ≤ j ≤ N−1.
Our aim is to study the behaviour of the resolvent of the spatial operator defined in Section 3 and to obtain a stability result for (P).
We define the natural energy E(t) of a solution u = (u_0, ..., u_{N−1}) of (P) and the natural energy e(t) of a solution V of (P′), respectively. We note that E(t) ≍ e(t) for all t ≥ 0.
We can easily check that every sufficiently smooth solution of (P) satisfies the following dissipation law (1.4), and therefore the energy is a nonincreasing function of the time variable t.
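The dissipation law can be observed numerically. Since the display of system (P) is not reproduced here, the sketch below uses a single string as a stand-in for the chain, under the hypothetical model ∂_t²u = ∂_x²u on (0,1) with a Dirichlet condition at x = 1 and the dissipative boundary condition ∂_x u(t,0) = ∂_t u(t,0) at x = 0; the discrete energy is then seen to decrease in time:

```python
import numpy as np

def simulate(T=4.0, n=200):
    """Explicit scheme for u_tt = u_xx on (0,1): Dirichlet end at x = 1,
    dissipative end u_x(0,t) = u_t(0,t) at x = 0 (illustrative model).
    Returns the discrete energy at times 0 and T."""
    dx = 1.0 / n
    dt = 0.5 * dx                                # CFL-stable time step
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.exp(-200.0 * (x - 0.5) ** 2)          # smooth bump, zero initial velocity
    u[-1] = 0.0
    u_old = u.copy()
    e0 = 0.5 * dx * np.sum((np.diff(u) / dx) ** 2)
    r = dt / dx
    for _ in range(int(T / dt)):
        u_new = np.empty_like(u)
        u_new[1:-1] = (2 * u[1:-1] - u_old[1:-1]
                       + r**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_new[0] = u[0] + r * (u[1] - u[0])      # damped end: follows u_t = u_x
        u_new[-1] = 0.0                          # Dirichlet end
        u_old, u = u, u_new
    eT = 0.5 * dx * (np.sum(((u - u_old) / dt) ** 2)
                     + np.sum((np.diff(u) / dx) ** 2))
    return e0, eT

e0, eT = simulate()
assert eT < 0.2 * e0     # the energy decays, as the dissipation law predicts
```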
The first result concerns the well-posedness of system (P) and the exponential decay of the energy E(t) of its solutions.
The main result of this paper then concerns the precise asymptotic behaviour of the solutions of (P). Our technique is based on a frequency domain method and a special analysis of the resolvent.
This paper is organized as follows. In Section 2, we give the proper functional setting for system (P) and prove that the system is well-posed. In Section 3, we show that the energy of system (P) tends to zero and prove the stabilization result for (P) by the frequency domain technique, giving the explicit decay rate of the energy of the solutions. Finally, in the last sections, we study the transfer function associated with a string network and the exponential stability of the Schrödinger system.
2 Well-posedness of the system

In order to study system (P) we need a proper functional setting. We define the following spaces, equipped with the inner products given below. It is well known that system (P) may be rewritten as the first-order evolution equation where U is the vector U = (u, ∂_t u)^t and A is the corresponding spatial operator. It is clear that H is a Hilbert space, equipped with the usual inner product. In the same way we define the operator A as follows. Now we can prove the well-posedness of system (P) and that the solution of (P) satisfies the dissipation law (1.4). (ii) The solution u of (P) with initial datum in D(A) satisfies (1.4); therefore the energy is decreasing.
We first prove that A is dissipative. Take U = (u, v)^t ∈ D(A). Then, by integration by parts and by using the transmission and boundary conditions, we obtain the dissipativeness of A.
Let us now prove that A is maximal, i.e. that λI − A is surjective for some λ > 0. It remains to find u. By (2.12) and (2.13), u_j must satisfy, for all j = 0, ..., N−1, the equations obtained by eliminating v. Multiplying these identities by a test function φ, integrating in space, using integration by parts, and recalling that (u, v) ∈ D(A) satisfies (2.13), we obtain the variational problem (2.14). This problem has a unique solution u ∈ V by the Lax-Milgram lemma, because the left-hand side of (2.14) is coercive on V. Taking test functions φ in a suitable dense subspace, we directly deduce the additional regularity of u. Coming back to (2.14), integrating by parts and taking particular test functions φ, we recover the remaining conditions on u. In summary, we have found (u, v)^t ∈ D(A) satisfying (2.11), which finishes the proof of (i).
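The Lax-Milgram step on a single edge can be illustrated numerically: a coercive form a(u, φ) = ∫ u′φ′ + λ² ∫ uφ discretizes to a symmetric positive definite matrix, hence a unique discrete solution. A sketch under stated assumptions (the values of λ and the source term are hypothetical, chosen only so that the exact solution is known):

```python
import numpy as np

# Discrete illustration of the Lax-Milgram step on one edge (0,1) with
# Dirichlet ends: the coercive form a(u,phi) = ∫ u'phi' + lam^2 ∫ u phi
# gives a symmetric positive definite matrix, hence a unique solution.
n, lam = 100, 2.0                              # hypothetical choices
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)[1:-1]         # interior nodes

# Finite-difference matrix for -u'' + lam^2 u with Dirichlet conditions
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2 + lam**2 * np.eye(n - 1)

f = np.sin(np.pi * x)                          # hypothetical right-hand side
u = np.linalg.solve(A, f)

eigs = np.linalg.eigvalsh(A)
assert eigs.min() > lam**2                     # coercivity: uniform lower bound
# the discrete solution matches the exact one, u = f/(pi^2 + lam^2), closely
assert np.allclose(u, f / (np.pi**2 + lam**2), atol=1e-3)
```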
(ii) To prove (ii), it suffices to differentiate the energy (1.2) for regular solutions and to use system (P). The calculations are analogous to those in the proof of the dissipativeness of A in (i) and are therefore left to the reader.
Remark 2.2. In the same way we can prove that the operator A is an m-dissipative operator on H and generates a C_0-semigroup of contractions on H.

Exponential stability
We prove a decay result for the energy of system (P), independently of N and of the densities, for all initial data in the energy space. Our technique is based on a frequency domain method and a special analysis of the resolvent.
Theorem 3.1. There exist constants C, ω > 0 such that, for all (u_0, u_1) ∈ H, the solution of system (P) satisfies the following estimate. Proof. By a classical result (see Huang [11] and Prüss [15]), it suffices to show that A satisfies the following two conditions, which characterize the exponential stability of a C_0-semigroup of contractions on a Hilbert space:

iR ⊂ ρ(A), (3.16)

sup_{β ∈ R} ‖(iβI − A)^{−1}‖ < ∞, (3.17)

where ρ(A) denotes the resolvent set of the operator A.
Then the proof of Theorem 3.1 is based on the following two lemmas.
Lemma 3.2. The spectrum of A contains no point on the imaginary axis.
Proof. Since A has compact resolvent, its spectrum σ(A) consists only of eigenvalues of A. We will show that the equation A Z = iβZ, with Z ∈ D(A) and β ∈ R, has only the trivial solution.
By taking the inner product of (3.18) with Z ∈ H and using the dissipativeness of A, we obtain that v_0(0) = 0. Next, we eliminate v in (3.18) to get a second-order ordinary differential system (3.20). The above system has only the trivial solution.

Proof. In order to prove (3.17), or equivalently the estimate below, we compute and estimate the resolvent of the operator A associated with the problem, where B = (B_0, ..., B_{N−1}). We want to prove that there exists a constant C, independent of β, such that the resolvent is uniformly bounded.

First step: Computation of the resolvent. Using the transmission conditions at the nodes j = 1, ..., N−1, we obtain the identity (3.30). Note that the solution W is completely determined once F_0 is known. Indeed, it suffices to insert the identity (3.30) in (3.25).
Thus we now derive the equation satisfied by F_0. The boundary conditions at the nodes x = 0 and x = N give (3.31). Inserting (3.30) in the previous equation we get (3.32). If we denote by H_{N−1} the 2 × 2 matrix whose lines are given below, and by Y_{N−1} the 2 × 1 column vector given below, then equations (3.31) and (3.32) are equivalent to the following system. For all j = 0, ..., N−1, the matrix B_j is invertible, and after some computation we easily find the explicit expression of its inverse. Since β ∈ R, from the previous identity we directly get the following estimates. From the definition of Γ_j in (3.29) we also get a similar bound. It then follows with (3.40) that the desired estimate holds; it remains to prove (3.41).
The idea of the proof is that (3.41) is well known for N = 1 and that this property propagates by iteration.
First, similarly to (3.33), we define for all N ∈ N* the matrix H_{N−1} and we set D_{N−1} and D̃_{N−1} accordingly. In particular, for N = 1 we have an explicit expression. It is useful for the sequel to remark that ℜ(D_0 D̃_0) = 1.
Using (3.36) we have the following identity. A simple computation then yields the estimate involving µ_{min,N−2}, the smallest eigenvalue of the matrix in the previous identity. The determinant of this matrix is computed explicitly. Since D_{N−2} and D̃_{N−2} are clearly bounded, the trace of this matrix is bounded as well, and (3.41) follows.
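The iteration step multiplies 2×2 matrices along the chain, and the key point is that the product stays uniformly invertible in β. Since the exact matrices of the proof are not reproduced above, the sketch below uses the standard transfer matrix of a single string segment as a hypothetical stand-in with the same determinant-one structure:

```python
import numpy as np

def string_transfer(beta, rho, ell=1.0):
    """Transfer matrix of one string segment (u'' + beta^2 rho u = 0),
    mapping (u, u') at the left end to (u, u') at the right end.
    Illustrative stand-in for the 2x2 matrices of the iteration argument."""
    k = beta * np.sqrt(rho)
    c, s = np.cos(k * ell), np.sin(k * ell)
    return np.array([[c, s / k], [-k * s, c]])

rhos = [1.0, 2.5, 0.5, 4.0]            # hypothetical densities, N = 4 strings
for beta in [0.3, 1.0, 7.0, 50.0]:     # sample frequencies on the imaginary axis
    P = np.eye(2)
    for rho in rhos:
        P = string_transfer(beta, rho) @ P
    # each factor has determinant cos^2 + sin^2 = 1, so the chain product
    # satisfies |det P| = 1 uniformly in beta: it stays invertible.
    assert abs(np.linalg.det(P) - 1.0) < 1e-8
```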

Comments and related questions
The same strategy can be applied to stabilize the following models and to compute and estimate the associated transfer function.

Transfer function
We can use the same strategy to verify that the operator H(λ) = λ C*(λ²I + A)^{−1}C ∈ L(U), λ ∈ C_+, satisfies property (4.48) of the following lemma. Here A is the self-adjoint operator corresponding to the conservative problem associated with problem (P), namely the operator obtained by replacing the boundary feedback condition in (P) by the corresponding conservative condition, with H²(j, j+1) regularity on each edge. This operator satisfies (4.46) and (4.47) hereafter, where A_{−1} is the extension of A to (D(A))′ (the duality is in the sense of H), N is the Neumann map, and γ > 0.
Proof. In the same way as in the proof of Lemma 3.3, we give an equivalent formulation of the function H. For that purpose, we consider W = (W_j)_{0≤j≤N−1} ∈ H, with W_j ∈ (H¹(j, j+1))², solution of (λ − B∂_x)W = 0, where z ∈ C, B is defined as in the proof of Lemma 3.3, and C_{N−1} is the matrix given previously. Therefore, for λ ∈ C with ℜ(λ) = γ > 0, the transfer function takes an explicit form. Consequently, to prove (4.48) it suffices to check that, for a fixed γ > 0, there exists a constant c_γ > 0 such that the estimate (4.50) below holds. From the boundary conditions at x = 0 and x = N, we then find that F_0 is the solution of a linear system, with the convention stated below.

Estimate of F_0
Since for all j the corresponding bound clearly holds, there exists a constant c′_γ > 0 such that (4.51) holds. We need a similar estimate for H_{N−1}^{−1}; this will be done by giving a uniform lower bound for |D_{N−1}| on the line ℜ(λ) = γ, where we have set D_{N−1} = det(H_{N−1}). Thus we introduce the matrix H̃_{N−1} and set D̃_{N−1} = det(H̃_{N−1}). Now we prove by iteration that (4.52) holds. Assume that there exists a constant k_{N−2} > 0 such that the bound holds at rank N−2. We have the following easily checked identities, and a computation then proves (4.52). It follows that |D_{N−1}| is bounded below on the line ℜ(λ) = γ. But |D_{N−1}| is obviously bounded above on this line; consequently, a uniform bound holds. Finally, with (4.51) we deduce that H_{N−1}^{−1} is bounded on the line ℜ(λ) = γ, and it follows that there exists c_γ > 0 such that, for all z ∈ C and all λ with ℜ(λ) = γ, |F_0| ≤ c_γ |z|.
Conclusion: (4.50) is a direct consequence of the previous estimate. The proof is complete. As an application, the open-loop system associated with (P) satisfies a regularity property.
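The regularity of the open-loop system rests on the boundedness of H(λ) on a vertical line ℜ(λ) = γ. For a self-adjoint nonnegative A, the spectral theorem reduces ‖λ(λ²I + A)^{−1}‖ to the scalar supremum of |λ|/|λ² + a| over a ≥ 0; the scan below (with the hypothetical choice γ = 1) illustrates that this supremum is finite on the whole line:

```python
import numpy as np

# For self-adjoint nonnegative A, ||lambda (lambda^2 I + A)^{-1}|| equals the
# supremum over the spectrum a >= 0 of |lambda| / |lambda^2 + a|.  On a line
# Re(lambda) = gamma > 0 this supremum is finite; scan it numerically.
gamma = 1.0                                   # hypothetical abscissa
betas = np.linspace(-200.0, 200.0, 4001)      # lambda = gamma + i*beta
avals = np.concatenate([np.linspace(0.0, 1e4, 2001), [1e6, 1e8]])

worst = 0.0
for beta in betas:
    lam = gamma + 1j * beta
    vals = np.abs(lam) / np.abs(lam**2 + avals)
    worst = max(worst, vals.max())

# For gamma = 1 the scalar supremum equals 1 (attained at beta = 0, a = 0),
# so the operator stays bounded on the whole line Re(lambda) = gamma.
assert worst <= 1.0 + 1e-9
assert worst > 0.9      # the bound is essentially attained in the scan
```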
We define the natural energy E(t) of a solution u = (u_0, ..., u_{N−1}) of (S). We can easily check that every sufficiently smooth solution of (S) satisfies the dissipation law (5.2), and therefore the energy is a nonincreasing function of the time variable t.
In order to study system (S) we introduce the following Hilbert space, equipped with the inner product given below. System (S) is a first-order evolution equation which has the form given below, where u^0 = (u^0_0, u^0_1, ..., u^0_{N−1}) ∈ H and the operator A : D(A) → H is defined accordingly. (ii) The solution u of (S) with initial datum in D(A) satisfies (5.2); therefore the energy is decreasing.
Proof. (i) By Lumer-Phillips' theorem, it suffices to show that A is dissipative and maximal.
A is clearly dissipative. The general solution of (5.9) is given in terms of polynomials p_j of degree 1, and it remains to determine p_j, j = 0, ..., N−1. By integration by parts and using (5.4)-(5.6), we get that the polynomials p_j are constant and finally vanish by the continuity conditions and the right Dirichlet condition. Hence we have proved that (5.8) admits a unique solution; consequently, A is maximal.
(ii) To prove (ii), we use the same argument as in the proof of Theorem 2.1.

Exponential stability of the Schrödinger system
The stability result for system (S) is given by the following statement: there exist constants C > 0 and ω > 0 such that, for all u^0 ∈ H, the solution of system (S) satisfies the estimate below. Proof. As in the proof of Theorem 3.1, the result is based on the following two lemmas.
Lemma 5.3. The spectrum of A contains no point on the imaginary axis.
Proof. Since A has compact resolvent, its spectrum σ(A) consists only of eigenvalues of A. We will show that equation (5.11), with u = (u_0, ..., u_{N−1}) ∈ D(A) and β ∈ R, has only the trivial solution.
By taking the inner product of (5.11) with u ∈ H and using (5.7), we get that u_0(0) = 0.
We will consider two cases, since the method is different for each case.

First step: Computation of the resolvent
The solution of (5.13) satisfies (5.14). An easy calculation gives its explicit form, with c^1_j, c^2_j ∈ C. Note that c^1_j = u_j(j) and ρ_j(∂_x u_j)(j) = c^2_j.
For j = 0, ..., N−1, using the transmission conditions (5.5)-(5.6), and introducing for simplicity the matrix M_j and the vector W_j as below, we find the recursion relation. This last identity completely determines the solution u of (5.13).
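As an illustration of the propagation step (the exact matrices M_j are not displayed above, so the form below is a hypothetical stand-in): on one edge the equation u'' = iβρu has solutions spanned by cosh(κx) and sinh(κx)/κ with κ = √(iβρ), and the 2×2 matrix propagating (u_j, ∂_x u_j) across the edge has determinant one:

```python
import numpy as np

# For the Schrodinger eigenvalue problem on one edge, u'' = i*beta*rho*u, the
# general solution is u = c1*cosh(kappa x) + c2*sinh(kappa x)/kappa with
# kappa = sqrt(i*beta*rho); M below propagates (u, u') across the edge.
# (Illustrative stand-in for the matrices M_j of the text.)
beta, rho = 3.0, 2.0                         # hypothetical values
kappa = np.sqrt(1j * beta * rho)
c, s = np.cosh(kappa), np.sinh(kappa)
M = np.array([[c, s / kappa], [kappa * s, c]])

# Check against an explicit solution with hypothetical coefficients c1, c2:
c1, c2 = 1.0 + 0.5j, -0.25j
u  = lambda x: c1 * np.cosh(kappa * x) + c2 * np.sinh(kappa * x) / kappa
du = lambda x: c1 * kappa * np.sinh(kappa * x) + c2 * np.cosh(kappa * x)

left  = np.array([u(0.0), du(0.0)])          # (u, u') at the left node
right = np.array([u(1.0), du(1.0)])          # (u, u') at the right node
assert np.allclose(M @ left, right)          # M propagates the edge data exactly
assert abs(np.linalg.det(M) - 1.0) < 1e-10   # det M = cosh^2 - sinh^2 = 1
```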
Second step: Estimate of F_0 for β large.
On the one hand, since the order of each matrix M_j is known, it is easy to see that all the matrices involved in (5.21) have the same order.
If β < 0, the previous procedure does not work; fortunately, in this case, we can get the estimate (5.27) directly. Indeed, multiplying (5.14) by u