SPECTRAL THEORY FOR NONCONSERVATIVE TRANSMISSION LINE NETWORKS

Abstract. The global theory of transmission line networks with nonconservative junction conditions is developed from a spectral theoretic viewpoint. The rather general junction conditions lead to spectral problems for nonnormal operators. The theory of analytic functions which are almost periodic in a strip is used to establish the existence of an infinite sequence of eigenvalues and the completeness of generalized eigenfunctions. Simple eigenvalues are generic. The asymptotic behavior of an eigenvalue counting function is determined. Specialized results are developed for rational graphs.


1. Introduction. The transmission line equations
provide a basic model for wave propagation in one space dimension. This system, or a dissipative variant, is frequently used in electrical engineering [24,25,32] for describing the propagation of voltage and current along a transmission line. These equations and related nonlinear systems also appear in one-dimensional models of arterial blood flow [11,26,28,30], where q(t, x) is the flow in the direction of increasing x, and p(t, x) is the pressure. With A representing the vessel cross-sectional area divided by the fluid density, and C being the vessel compliance, both assumed constant between arterial junctions, the equations are derived using conservation of mass and momentum. In many applications of the transmission line equations the natural spatial domain is a network, with multiple finite length transmission lines meeting at junctions. Transmission and reflection at junctions are typically described by energy conserving boundary conditions or scattering matrices. The description of solutions is often local in both space and time, since trying to track even simple initial data through multiple junctions quickly becomes intimidating. This typical approach to transmission line networks fails to address two important issues: the impact of the global network structure, and the consequences of nonconservative junction conditions. These issues seem particularly important when trying to come to grips with a complex network like the human circulatory system, with its extraordinary number of vessels, junctions, and scales.
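The display (1) does not survive in this text, so it is worth recording one standard form of the transmission line equations consistent with the description of A and C above; the arrangement of the constants here is a plausible reconstruction, not a quotation.

```latex
% Conservation of mass (compliance C) and momentum (A = area/density);
% a standard form consistent with the surrounding discussion:
\begin{aligned}
C\,\partial_t p(t,x) + \partial_x q(t,x) &= 0,\\
\partial_t q(t,x) + A\,\partial_x p(t,x) &= 0.
\end{aligned}
```

With this form each of p and q satisfies a wave equation with speed √(A/C), matching the traveling wave description developed in the second section.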
This work uses 'quantum graph' Hilbert space operator methods to treat (1) and more general problems on an arbitrary finite network. Losses (and gains) at network junctions are allowed. Although the evolution is generated by a formally skew-adjoint operator, the rather general junction conditions lead to nonnormal operators. It is a pleasant surprise to find that the spectral theory for the generator of the evolution preserves much of the structure found in the skew-adjoint case.
The analysis is developed in three subsequent sections. The second section introduces coordinate changes used to interpret (1) as a scalar evolution on a directed graph. Generalizing the usual energy conserving junction conditions, nonconservative conditions are introduced, along with their vertex scattering matrix formulation. In the fluid model these conditions conserve mass, but allow energy loss as a function of the flows at the junction.
The third section treats the spectral theory of more general problems on directed finite networks with invertible junction scattering matrices. The eigenvalues associated to these systems are the roots of an entire function of the complex eigenvalue parameter which is almost periodic in a vertical strip. Aspects of the theory of analytic almost periodic functions play an important role in the analysis. Among other results, the eigenvalues are shown to be generically simple, to satisfy a 'Weyl' estimate relating asymptotics of the eigenvalue counting function to the graph 'volume', and to have a complete set of generalized eigenfunctions.
The fourth section treats the case when network edge lengths are rational multiples of a common value. Here the link between the differential operator eigenvalues and the eigenvalues of an aggregate junction scattering matrix is more explicit. In addition, the completeness result for generalized eigenfunctions can be sharpened to give a Riesz basis of generalized eigenfunctions. Returning to the more specialized models of the second section, the existence of eigenvalues with real part zero is shown to depend on rational relations among cycle lengths of the network.
There is a considerable literature for differential equations on networks. Early work considering diffusion problems and the definition of the Laplace operator includes [22,23]. Basic results on eigenvalues for the standard Laplace operator on networks are established in [31] and [27]. Nonlinear scalar wave equations are treated in [3]. A D'Alembert formula for the wave equation on networks is discussed in [6]. In contrast to this paper, these older works generally study a self-adjoint Laplace operator, with junction conditions enforcing continuity across the junctions.
Some themes related to this paper appear in older work [8,19] on vibrating strings with dissipative boundary conditions. Beam networks deliberately incorporating dissipative joints have been studied [7], particularly in the context of large scale space structures. A very recent paper [1] considers current flow across dissipative junctions linking multiple quantum wires. Other applications could potentially include modeling of fluid transport with losses associated to altered vessel compliance at junctions, changes in flow direction or flow velocity at junctions, or, as in the beam example, the introduction of junction devices to enhance stability.
The methods employed here are related to ideas in the 'quantum graph' literature. We will often employ graph terminology, using edges instead of transmission lines, and vertices in place of junctions. Some of the problems and techniques of this work have antecedents in the quantum graph literature. The problem of finding vertex conditions leading to energy conserving or energy dissipating evolutions was previously considered in [4,5]. We will make use of a scheme for replacing an undirected graph with a directed cover; this is at least implicit in [16]. Characterizations of a different class of contraction semigroups on graphs are considered in [15].
Problems most closely related to this work are treated in [17,18]. In particular [17] develops an L^1 semigroup analysis for differential equations on directed networks, with somewhat different vertex transmission matrices. Almost periodic functions make an appearance, and there is considerable emphasis on having rationally related edge lengths. In contrast, the present paper emphasizes the L^2 spectral theory, with eigenvalue distribution, generic simplicity, and eigenfunction completeness results not treated in [17].
This work complements recent studies [9,10] using similar methods of analysis. Davies [9] considers more general nonnormal ordinary differential operators. In this more general setting, rather precise eigenvalue asymptotics are still available, but poor resolvent estimates and eigenfunction expansion results are typical. Our operators correspond to Example 30 of [9], where it is noted that they generate one-parameter groups. We do not explicitly consider the semigroup aspects in our analysis, but note that there are consequences for more general linear and nonlinear evolution equations [29]. Davies raises the problem of equating eigenvalue multiplicity with the multiplicity of the corresponding root of the characteristic function. For the operators considered in this paper, an affirmative answer is provided in 3.4 and 3.5. In a different direction, but using similar tools, the recent work [10] considers resonance asymptotics for networks with infinite 'leads'.
2. Junction scattering. If N transmission lines e_1, ..., e_N meet at a junction (Figure 1) the equations (1) describing propagation along a line are supplemented by junction conditions. This section considers extensions of the standard junction conditions to allow for energy loss (or gain) at the junctions. Anticipating more general developments, these nonconservative junction conditions are recast as junction scattering matrices for the propagation of incoming signals into outgoing signals.

Figure 1. A transmission line junction
For each transmission line (or edge) e_n, the parameters A_n and C_n from (1) are assumed positive, and constant on e_n, but possibly varying with n. Assume that e_n has a local coordinate x which increases with distance from the junction. Introduce the wave speed c_n = √(A_n/C_n), the impedance Z_n [11, p. 157], and the functions R_n, L_n of (3). In these new variables, each equation (1) is equivalent to a diagonal system (4). On each edge e_n solutions R_n(t, x), respectively L_n(t, x), are simply traveling waves moving at constant speed in the direction of increasing, respectively decreasing, x.
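The displays (2)-(4) are missing here. Assuming (1) takes the standard form C ∂_t p + ∂_x q = 0, ∂_t q + A ∂_x p = 0 (an assumption, since the display is not reproduced), a consistent reconstruction of the wave speed, impedance, and diagonalizing variables is:

```latex
% Reconstruction (assumption): wave speed, impedance, Riemann invariants
c_n = \sqrt{A_n/C_n}, \qquad
Z_n = \frac{1}{\sqrt{A_n C_n}} = \frac{c_n}{A_n}, \qquad
R_n = p_n + Z_n q_n, \qquad
L_n = p_n - Z_n q_n,
% which diagonalize the edgewise system:
(\partial_t + c_n\,\partial_x) R_n = 0, \qquad
(\partial_t - c_n\,\partial_x) L_n = 0.
```

A direct check: ∂_t R_n = ∂_t p_n + Z_n ∂_t q_n = −(1/C_n) ∂_x q_n − Z_n A_n ∂_x p_n = −c_n ∂_x R_n, using Z_n A_n = c_n and c_n Z_n = 1/C_n.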
2.1. Junction conditions. Propagation through a junction v is typically described by junction conditions or junction scattering matrices. An illustrative family of junction conditions is given in (5), with a real dissipation parameter ε. The first condition is the natural requirement of mass conservation (respectively charge conservation) for transmission line models of fluid flow (respectively voltage and current flow). Conventional models [28,30] typically use ε = 0, so that pressures (or voltages) agree at a junction. Notice that (5) implies the further conditions (6). The example (5) mainly serves here as a simple theoretical example, although a similar model is described in [1] in the context of dissipative quantum wire junctions. Alternatively, this model could describe junctions with a dashpot-type stabilizer, or energy losses at junctions resulting from increased fluid turbulence due to flow rate changes, as suggested by the conditions (5) with ε < 0.
To manage notation, introduce the vectors P and Q of boundary values of p and q at the junction. Suppose B_1 and B_2 are N × N matrices, with a set of N independent junction conditions written as (7). Using (3), these conditions become conditions on the vectors R and L of outgoing and incoming wave amplitudes. Simple algebra gives the following connection between P-Q junction conditions and scattering matrices.
Proposition 1. If the coefficient matrices are invertible, then the junction conditions (7) imply the junction scattering relation R = S(v)L, where the invertible junction scattering matrix S(v) is given by (9). In the other direction, if S(v) is a scattering matrix, then R = S(v)L implies a junction condition of the form (7). For the examples (5) additional details about the scattering matrices may be obtained. Define z_n^± = Z_n ± ε, n = 1, ..., N, and let Z^± denote the corresponding coefficient matrices. The equations (5) are equivalent to a linear relation between R and L with coefficient matrices Z^±, and if Z^± are invertible the junction scattering matrix is given by (9).

Proposition 2. If |ε| < Z_n for n = 1, ..., N, then the matrices Z^± are invertible.
Proof. The argument is the same for the two matrices Z^±, so consider Z^+. The condition |ε| < Z_n for n = 1, ..., N implies z_n^+ > 0. Alter the first row of Z^+ to obtain a new matrix Z_1^+. Observe that all rows of Z_1^+ are nonzero vectors, the first row is orthogonal to the other rows, and rows 2, ..., N are clearly independent. Since its rows are independent, Z_1^+ is invertible. Shifting back to the matrix Z^+, notice that its first row is nonzero, and not orthogonal to the first row of Z_1^+, since the dot product of the first rows is positive. This makes all rows of Z^+ independent, as desired.
Proof. To see that V satisfies the claimed identity, consider first the case ε = 0, and let Z = Z^+ = Z^-. For n = 1, ..., N let V_n denote the transpose of the n-th row of Z. Then V_1 is orthogonal to the independent vectors V_2, ..., V_N. Since V_1 is an eigenvector of Z^{-1}JZ with eigenvalue 1, and V_2, ..., V_N are eigenvectors with eigenvalue −1, the matrix Z^{-1}JZ is orthogonal and real symmetric.
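The energy conserving case ε = 0 can be made concrete with a small numerical sketch. Since the displays are not reproduced here, the sketch assumes the standard wave variables R_n = p_n + Z_n q_n, L_n = p_n − Z_n q_n and the Kirchhoff conditions p_1 = ... = p_N, Σ q_n = 0; eliminating p and q then gives the familiar scattering matrix S_{nk} = 2Y_k/Σ_j Y_j − δ_{nk} with admittances Y_n = 1/Z_n, which is unitary with respect to the energy form Σ_n Y_n |R_n|².

```python
# Sketch: Kirchhoff (epsilon = 0) junction scattering for three lines.
# Assumptions (not spelled out in this text): wave variables R = p + Z q,
# L = p - Z q, and junction conditions p_1 = p_2 = p_3, q_1 + q_2 + q_3 = 0.
# These give R = S L with S[n][k] = 2*Y[k]/sum(Y) - delta(n,k), Y = 1/Z.

Z = [2.0, 3.0, 5.0]               # impedances of the three incident lines
Y = [1.0 / z for z in Z]          # admittances
sigma = sum(Y)
N = len(Z)
S = [[2.0 * Y[k] / sigma - (1.0 if n == k else 0.0) for k in range(N)]
     for n in range(N)]

# Energy conservation: S^T diag(Y) S == diag(Y), so that for every incoming L
# the outgoing R = S L satisfies sum_n Y_n R_n^2 == sum_n Y_n L_n^2.
M = [[sum(S[i][r] * Y[i] * S[i][c] for i in range(N)) for c in range(N)]
     for r in range(N)]
D = [[Y[r] if r == c else 0.0 for c in range(N)] for r in range(N)]
ok = all(abs(M[r][c] - D[r][c]) < 1e-12 for r in range(N) for c in range(N))
print(ok)
```

Row sums of S equal 1, reflecting the fact that constant incoming data (equal pressures, zero net flow) passes through unchanged.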

2.2. Vertex forms.
With a suitable inner product, the spatial part of (1) is formally skew-adjoint. For vector functions [P, Q] and [U, V] with 2N components, an inner product is defined. If L denotes the spatial operator, and the functions vanish at the upper integration limit (as they might with a partition of unity), then integration by parts expresses the relevant inner products in terms of boundary values at the junction. Subject to the vertex conditions, L will be formally skew-adjoint if S(v) is unitary, and formally dissipative if S(v)*S(v) ≤ I. It is not difficult [4] to extend these formal remarks to theorems about the corresponding Hilbert space operators. For the junction conditions (5) the observation (6) leads to the following calculation.
Thus this class of junction conditions will be dissipative for ε ≤ 0.

2.3. A directed graph framework. The next step is to place nonconservative transmission line networks in a more general context. Suppose G_u is an undirected finite graph. We would like to consider equations of the form (1), or equivalently (4), on each edge of G_u, subject to junction conditions such as (5). A difficulty is that q, or the functions R and L, depend on the edge direction. For many networks there is no preferred choice of edge directions, and arbitrary choices can lead to considerable notational confusion. A productive alternative is to replace the undirected graph G_u with a directed graph G having twice as many edges. Given two adjacent vertices v, w in G_u, replace the undirected edge e_0 joining them with two directed edges, e_1 from v to w, and e_2 from w to v. If e_0 has length l, so do e_1 and e_2. If x is a local coordinate for e_0, increasing from v to w, choose coordinates X = x for e_1 and X = l − x for e_2, so X increases in the edge direction for all directed edges. The vector of functions (R, L) on an undirected edge e_0 can now be replaced by scalar functions f_1(t, X) = R(t, x) and f_2(t, X) = L(t, l − x) on the directed edges e_1, e_2. With respect to such local coordinates the system (4) becomes the scalar system (12) on the edges e of the new directed graph G. Solutions on all directed edges are traveling waves moving at speed c_e in the direction of increasing X. To complete the global description, introduce invertible junction scattering matrices S(v), with S_ε(v) from (9) being special cases. Suppose that λ is an eigenvalue with eigenfunction F for the spatial part (or right hand side) of the system (12), subject to a set of junction scattering conditions. For the undirected edges e_1, ..., e_N incident on a vertex v in G_u, using local edge coordinates which are the distance from v, the eigenfunction may be written in terms of a vector solution e^{λt}[R, L] of (4).
Using the substitution (3), we find, in these local coordinates, separated solutions of (1).

3. Derivative operators on directed graphs. Having discussed alternative formulations of the transmission line equations, and generalized junction conditions with their corresponding junction scattering matrices, eigenvalues for these and more general problems on networks are now considered. Assume that G is a directed graph with a finite vertex set and a finite edge set. Each vertex appears in at least one edge, and for convenience loops and multiple edges between vertex pairs are not considered directly, although these can be incorporated by adding vertices. A vertex v of G will have an equal number δ(v) of incoming and outgoing edges. Each edge has a positive finite length or weight, and for notational convenience the edges are assumed numbered e_n, n = 1, ..., N. Directed edges e_n of length l_n may be identified with a real interval [a_n, b_n] of the same length. The Euclidean interval length and Lebesgue measure extend in a standard way to G. Treatment of the system (12) is simplified by a linear change of variables identifying each directed edge with [0, 1], so that edges exit a vertex at 0 and enter at 1. The formal operator c_n ∂/∂X is then replaced by (1/w_n) ∂/∂x, with w_n = l_n/c_n. Introduce the weighted Hilbert space H = ⊕_n L^2([0, 1], w_n) with its natural inner product, where [f_1, ..., f_N] denotes the column vector (f_1, ..., f_N)^T. Let D be the operator acting formally by (13). To study D as a Hilbert space operator, start with the maximal operator D_max, whose domain consists of all F : [0, 1] → C^N with absolutely continuous components whose derivatives lie in H. The domain of D is determined by invertible scattering matrices. For a continuous function F : G → C, let F_i(v), respectively F_o(v), be the δ(v)-tuple of values of F at x = 1, respectively x = 0, for incoming, respectively outgoing, edges at v.
Satisfaction of the boundary conditions Y(0) = TY(1) leads, after some elementary manipulations, to the resolvent formula (16). Introduce the characteristic function χ(λ) = det[T^{-1} − exp(λW)], where W = diag(w_1, ..., w_N). The resolvent is bounded and compact if [T^{-1} − exp(λW)]^{-1} exists, or equivalently if χ(λ) ≠ 0. Using ℜ(λ) to denote the real part of the complex number λ, the next result addresses resolvent bounds.
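Before turning to the resolvent bounds, the characteristic equation can be made concrete on the simplest network, a single directed loop. The sketch below assumes χ(λ) = det(T^{-1} − exp(λW)), the form suggested by the invertibility criterion for the resolvent, with N = 1, w_1 = 1, and a hypothetical scalar scattering coefficient t.

```python
# Sketch: the characteristic function chi(lam) = det(T^{-1} - exp(lam*W))
# for the smallest network, a single directed loop with weight w = 1 and a
# hypothetical scalar scattering "matrix" T = t.  (The determinant formula
# is an assumption consistent with the resolvent condition in the text.)
import cmath

t = 0.5                            # scalar scattering coefficient, t != 0

def chi(lam):
    return 1.0 / t - cmath.exp(lam)

# Roots: exp(lam) = 1/t, i.e. lam = -log(t) + 2*pi*i*k, a vertical sequence
# on the line Re(lam) = log(2) inside a strip S_A, as in the text.
roots = [-cmath.log(t) + 2j * cmath.pi * k for k in range(-3, 4)]
print(all(abs(chi(r)) < 1e-9 for r in roots))
```

The roots already display the almost periodic structure used below: an infinite sequence confined to a vertical strip, invariant here under translation by 2πi.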
Theorem 3.1. For A sufficiently large, the resolvent R_D(λ) satisfies the decay estimate (17) in the half planes |ℜ(λ)| ≥ A. In addition, the resolvent R_D(λ) exists and is uniformly bounded on B_δ.

Proof. To establish (17), pick A sufficiently large, and consider λ in the half plane ℜ(λ) ≥ A. By (16), R_D(λ)f(x) = I_1 + I_2, where B(λ) denotes a matrix valued function bounded for ℜ(λ) ≥ A. The term I_1 then admits the required bound.

ROBERT CARLSON
For each x the Cauchy-Schwarz inequality gives a componentwise bound, so the method used to bound I_1 may be used for I_2 to complete estimate (17). A similar argument applies in the half plane ℜ(λ) ≤ −A.
The usual formula for a matrix inverse, M^{-1} = adj(M)/det(M), where adj denotes the transposed cofactor matrix, together with (16), shows that the resolvent will be uniformly bounded if λ ∈ B_δ.
Expanding the determinant shows that χ(λ) belongs to the class E of exponential polynomials Σ_j c_j exp(λ r_j) with real exponents r_j. These are entire functions which are almost periodic ([20, pp. 264-273], and for more detail [13]) in any vertical strip S_A = {λ : −A ≤ ℜ(λ) ≤ A}. For χ(λ) ∈ E, direct estimation shows that for A large enough, the strip S_A contains all roots of χ(λ) in its interior. Let C_A be the set of bounded continuous complex valued functions on S_A, equipped with the supremum norm. Given any δ > 0, and letting λ_j be the roots of χ(λ), define S_δ to be the set of points of S_A with distance at least δ from any root.
The next result summarizes important properties of functions in E, including the characteristic function χ(λ). Proofs for the more general setting of almost periodic functions in a strip may be found in [20, pp. 264-9]. For completeness we sketch the arguments.
Lemma 3.2. Suppose χ(λ) ∈ E. For τ ∈ R, the set of all translates χ(λ + iτ) has compact closure in C_A. The number of roots of χ(λ), counted with multiplicity, in a rectangle of bounded height in S_A is bounded by some number N_R which is independent of τ. There is a number C(δ) > 0 such that |χ(λ)| ≥ C(δ) for all λ ∈ S_δ.
Proof. Translates of χ(λ) are again exponential polynomials with the same exponents. The J + 1 coefficients c_j exp(iτ r_j) remain in a bounded subset of C^{J+1}, which is the essential point for compactness. Suppose the bound N_R for the number of roots did not exist. Select a sequence of translates f_j(λ) = χ(λ + iτ_j), with f_j having at least j roots in a fixed rectangle B_0. The sequence {f_j} may be assumed to converge uniformly on compact subsets of C to an entire function g(λ) with the same form as χ(λ). In particular, g(λ) is not the zero function. On the other hand, applying Rouché's Theorem to a contour surrounding B_0 forces us to conclude that g(λ) has infinitely many roots in a compact set, a contradiction.
The last claim is established by contradiction in a similar manner. Suppose there were a sequence of points z_j ∈ S_δ with |χ(z_j)| < 1/j. Choose a sequence of translates τ_j ∈ R so that z_j − iτ_j = w_j ∈ B_0, and w_j → w. Passing to a subsequence, we create a sequence of functions f_j(λ) = χ(λ + iτ_j) converging uniformly on compact subsets of C to an entire function g(λ). Since |f_j(w_j)| < 1/j, g(w) = 0.
On the other hand, since z_j ∈ S_δ, every root of f_j is at distance at least δ from w_j. For j large enough the open ball Ω centered at w with radius δ/2 contains no roots of f_j. The limit function g is not identically 0, so by a theorem of Hurwitz [2, p. 176] g(λ) ≠ 0 for all λ ∈ Ω, contradicting g(w) = 0.
To emphasize the dependence of χ(λ) on T, let χ_T(λ) denote the characteristic function determined by T. The previous lemma will now be used to show that the spectrum of D may be partitioned into finite systems of eigenvalues [14, VII.1.3], this decomposition remaining valid for small variations of T.

Lemma 3.3. Fix W and suppose T_0 is invertible. There are numbers σ > 0 and ε > 0, and an integer indexed sequence {h_n} with n < h_n < n + 1, such that for every T with ‖T − T_0‖ < ε, |χ_T(λ)| ≥ σ on the lines ℑ(λ) = h_n.

Proof. The result is easy to establish outside of some strip −A ≤ ℜ(λ) ≤ A. Recall that χ_T(λ) = Σ_{j=0}^J c_j exp(λ r_j), with r_J = 0 and c_J = det(T^{-1}). If ε is sufficiently small then there is a number σ_1 such that |det(T^{-1})| ≥ σ_1 > 0 for all T with ‖T − T_0‖ < ε. Moreover the coefficients c_j are polynomial functions of the entries of T, so are continuous functions of T. The form of χ_T(λ) shows that there is a number A such that |χ_T(λ)| is bounded below for ℜ(λ) ≤ −A, and since the growth of χ_T(λ) is controlled by exp(λ r_0) as ℜ(λ) → +∞, a similar lower bound holds for ℜ(λ) ≥ A. Next, pick δ > 0, and recall that S_δ is the set of points in the strip S_A whose distance from the roots of χ(λ) = χ_{T_0}(λ) is at least δ. By 3.2, the number of roots of χ(λ) in a rectangle is bounded independent of n, so if δ is small enough, then every box B_n contains a horizontal line ℑ(λ) = h_n which is contained in S_δ. Choose σ > 0 so that |χ(λ)| ≥ 2σ for λ ∈ S_δ, and in particular for λ on the lines ℑ(λ) = h_n.
Finally, since the coefficients c_j are continuous functions of T, and the functions exp(λ r_j) are bounded in the strip −A ≤ ℜ(λ) ≤ A, it is possible to shrink ε sufficiently that |χ_T(λ)| ≥ σ on the lines ℑ(λ) = h_n for all T with ‖T − T_0‖ < ε.

Generic behavior.
Among all invertible matrices T : C^N → C^N, the ones well matched to a given directed graph will be those given by a collection {S(v)} indexed by the graph vertices, with S(v) taking incoming boundary data at v to outgoing boundary data at v. We may identify this collection with a single matrix T : C^N → C^N by choosing an ordering of the directed edges. A theorem related to the next result appeared in [4].
Theorem 3.4. Among the collections {S(v)} of invertible vertex scattering matrices there is a dense set whose corresponding characteristic functions χ(λ) have all roots of multiplicity 1, and whose corresponding operators D have all eigenvalues of algebraic multiplicity 1.
Proof. The first step is to show that some collection {S_0(v)} has the desired property. Beginning at any vertex, follow the directed edges of G until a vertex v is repeated. Let γ denote the portion of this path from v to v. Since each vertex of G has the same number of incoming and outgoing edges, the removal of γ leaves another graph of the same type. In this way, G may be decomposed into an edge disjoint collection {γ_i} of directed cycles. Pick one vertex v_i ∈ γ_i with incoming edge e_j and outgoing edge e_k. The set {S_0(v)} may be defined by requiring that for each γ_i and chosen vertex v_i the vertex conditions satisfy f_k = µ(j, k) f_j at v_i for some complex number µ(j, k) ≠ 0, and S(v) is multiplication by 1 at the other vertices of γ_i.
The operator D may now be viewed as having an associated graph G̃ which is a decoupled collection of circles. Simple computations give values of µ(j, k) so that the eigenvalues of D all have algebraic multiplicity 1, and the characteristic function χ(λ) has all roots of multiplicity 1.
Next, consider a line segment of matrices T(t), 0 ≤ t ≤ 1, joining the matrix T_0 just constructed to another invertible matrix, with corresponding operators D(t) and characteristic functions χ_t(λ). The discussion in [14, VII.1.3] may be applied, showing that the eigenvalues in each finite system all have algebraic multiplicity 1 except for a finite set of exceptional values of t, the exceptional sets depending on n.
We elaborate a bit on the last paragraph, and the analogous argument for the characteristic functions. The sum r_n of the dimensions of the generalized eigenspaces inside each B_n is independent of t, as is the number of roots, counted with multiplicity, of the characteristic functions χ_t(λ). To detect multiple eigenvalues or roots [21, pp. 132-139], consider the function g(z_1, ..., z_K) = Π_{i<j} (z_i − z_j)^2 of K = r_n complex variables, which is the discriminant of the monic polynomial p(z) with roots z_1, ..., z_K. Obviously g vanishes if and only if two coordinates agree, and g is a symmetric function of the coordinates. It follows that g is a polynomial in the elementary symmetric functions of z_1, ..., z_K, that is, the coefficients of p(z). By Newton's identities [12, p. 208] g is a polynomial function of the power sums s_j = z_1^j + · · · + z_K^j, j = 1, ..., K. By using traces of Dunford-Taylor integrals [14, I.5.6], the power sums s_j(t) for the eigenvalues inside a rectangle B_n are analytic in t. Since the discriminant of the eigenvalues in B_n is nonzero at t = 0, it can only vanish at finitely many points, and all eigenvalues have generalized eigenspaces of dimension 1 except for a countable set of t. A similar argument replacing the Dunford-Taylor integral by the Cauchy integral formula applies to the characteristic functions χ_t(λ).
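The discriminant-through-power-sums device is easy to check in the smallest case K = 2, where Π_{i<j}(z_i − z_j)² = (z_1 − z_2)² = 2s_2 − s_1²; the sample roots below are arbitrary illustrations, not eigenvalues of any particular D(t).

```python
# Sketch: for K = 2 the discriminant (z1 - z2)^2 equals 2*s2 - s1^2,
# a polynomial in the power sums s_j = z1**j + z2**j (Newton's identities).
# The random sample points are purely illustrative.
import random

random.seed(1)
for _ in range(100):
    z1 = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    z2 = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    s1 = z1 + z2                    # first power sum
    s2 = z1 ** 2 + z2 ** 2          # second power sum
    assert abs((z1 - z2) ** 2 - (2 * s2 - s1 ** 2)) < 1e-9
print("ok")
```

The same mechanism drives the proof: the power sums vary analytically in t, so the discriminant, being a polynomial in them, is analytic and vanishes only on a small exceptional set.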
Finally, since the set Gl(N, C) of invertible complex N × N matrices is path connected, the argument may be extended by piecewise linear paths throughout the entire set.
If Γ is a simple closed curve in the resolvent set of D(t) (defined in the proof of 3.4) for 0 ≤ t ≤ 1, then the sum of the dimensions of the generalized eigenspaces inside Γ is independent of t, as is the number of roots, counted with multiplicity, of the characteristic functions χ t (λ). Together with the previous result, this gives the following conclusion.
Theorem 3.5. The roots of the characteristic function χ(λ), counted with multiplicity, are exactly the eigenvalues counted with algebraic multiplicity.

Theorem 3.6. For any 0 < p < 1, the number of eigenvalues λ with h_{−n} ≤ ℑ(λ) ≤ h_n, counted with algebraic multiplicity, is r_0 (h_n − h_{−n})/(2π) + O((h_n − h_{−n})^p), where r_0 is the largest exponent of χ(λ).

Proof. By 3.5 it suffices to count roots of the characteristic function χ(λ) using the argument principle [2, p. 151]. From the form (18) of χ(λ) there is a constant σ > 0 such that the following estimates hold for sufficiently large positive A. First, |χ(λ)| ≥ σ exp(r_0 ℜ(λ)) for ℜ(λ) ≥ A, while |χ(λ)| ≥ σ for ℜ(λ) ≤ −A. Finally, there is a constant C such that |χ'(λ)/χ(λ)| ≤ C in these half planes. Now consider the sequence of rectangular contours γ(n) with counterclockwise orientation, with sides γ_1, ..., γ_4 lying on the lines ℑ(λ) = h_{−n}, ℜ(λ) = A, ℑ(λ) = h_n, and ℜ(λ) = −A. Since χ is entire, the argument principle says that (2πi)^{-1} ∮_{γ(n)} χ'(λ)/χ(λ) dλ is the number of zeros of χ, counted with multiplicity, inside γ(n). The main contribution comes from integration over γ_2, with (2πi)^{-1} ∫_{γ_2} χ'/χ dλ = r_0 (h_n − h_{−n})/(2π) + O(1). The estimates above give bounds of size O(A) for the integration over γ_1, and similarly for the integration over γ_3. The contribution from γ_4 is negligible. Taking A = (h_n − h_{−n})^p for 0 < p < 1 gives the desired estimate.
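A decoupled toy network makes the counting estimate tangible. Assuming the characteristic function has the determinant form det(T^{-1} − exp(λW)) (an assumption consistent with the resolvent discussion), take T = diag(2, 3) and incommensurate weights w = (1, √2); the roots then form two explicit arithmetic progressions, and the counting function over |ℑ(λ)| ≤ H approaches the 'volume' prediction (w_1 + w_2)(2H)/(2π).

```python
# Sketch: eigenvalue counting for a hypothetical decoupled two-loop network.
# Assumed characteristic function (consistent with the resolvent condition):
#   chi(lam) = (1/t1 - exp(w1*lam)) * (1/t2 - exp(w2*lam)),
# i.e. T = diag(t1, t2), W = diag(w1, w2).  Root families:
#   lam = (log(1/t1) + 2*pi*i*k)/w1  and  (log(1/t2) + 2*pi*i*k)/w2.
import math

t1, t2 = 2.0, 3.0
w1, w2 = 1.0, math.sqrt(2.0)

def count_roots(H):
    """Exact number of roots with |Im(lam)| <= H, family by family."""
    total = 0
    for w in (w1, w2):
        # Im(lam) = 2*pi*k/w, so |Im(lam)| <= H  iff  |k| <= H*w/(2*pi).
        k_max = math.floor(H * w / (2 * math.pi))
        total += 2 * k_max + 1
    return total

H = 10_000.0
weyl = (w1 + w2) * (2 * H) / (2 * math.pi)   # 'volume' prediction over [-H, H]
print(abs(count_roots(H) / weyl - 1.0) < 1e-3)
```

Here the leading exponent of the product is r_0 = w_1 + w_2, so the observed density matches the r_0/(2π) rate per unit of imaginary length.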
Deeper results [13] from the theory of almost periodic functions can be used to address the distribution of eigenvalues with respect to ℜ(λ).

Generalized eigenfunctions.
For nonnormal operators, completeness questions about generalized eigenfunctions and convergence of generalized eigenfunction expansions are usually treated by considering a sequence of spectral projections (23), given by contour integration of the resolvent over a curve γ; these are [14, III.6.5, III.6.8] projections onto the span of the generalized eigenfunctions of D with eigenvalues contained in γ.
Theorem 3.7. If f is sufficiently smooth and compatible with the vertex conditions, then there is a sequence of positively oriented simple closed curves γ(n) such that P_{γ(n)} f → f. Consequently, the generalized eigenfunctions of D have dense span in H.
Proof. Given an invertible vertex scattering matrix T_0, let D_0 denote the operator (13) with domain defined by T_0. Select a second vertex scattering matrix T_1 such that each vertex map S_1(v) is unitary, so the corresponding D_1 is a skew-adjoint operator [4]. A sequence {h_n} may be selected so the properties from 3.3 are satisfied for both D_0 and D_1. The contours γ(n) will be the positively oriented boundaries of rectangular regions bounded by the lines ℑ(λ) = h_{−n}, ℑ(λ) = h_n, and ℜ(λ) = ±A. Let P_{γ(n)} be the spectral projections of (23) for D_0 using the contours γ(n), and let P̂_{γ(n)} be the analogous sequence of projection operators for D_1. Letting M_j(λ) = (T_j^{-1} − exp(λW))^{-1}, j = 0, 1, the resolvent formula (16) expresses the difference of the two resolvents in terms of M_0(λ) − M_1(λ). Suppose each component of f has K continuous derivatives, with the appropriate vertex compatibility conditions. Then integration by parts, together with 3.1, gives for λ ∈ γ(n) constants C(K) controlling the difference of the projections, while P̂_{γ(n)} f → f since D_1 is skew-adjoint.
4. Rational graphs. Consider as before the operator D acting componentwise by w_n^{-1} ∂_x on the weighted space ⊕_n L^2([0, 1], w_n). In this section the edge weights w_n are assumed to be positive integer multiples of a common value, assumed to be 1 without loss of generality. The assumption of integer weights leads to a considerable strengthening of the results of the previous section; the first reflection of this added structure is that the characteristic function χ(λ) becomes a periodic function of λ.
To probe more deeply into the features implied by integer weights, it is helpful to recast the operator D. First, we employ an edgewise linear change of variables x = w_n x̃ taking ⊕_n L^2([0, 1], w_n) to ⊕_n L^2([0, w_n]). In these local coordinates the operator acts by ∂_x. Next, add vertices v of degree two at integer distances from the edge endpoints so all edges have length 1, without change of direction. The transition conditions for the new vertices are f(v^+) = f(v^−). These changes have no effect on the domain or action of the differential operator, but they do modify its description upon return to the system description in ⊕_n L^2[0, 1]. In particular the new number N of components and the size N × N of the matrix T have increased.
The set of eigenvalues for D is unchanged, but the characteristic function now has the form χ(λ) = det[T^{-1} − exp(λ) I_N]. The spectrum of D is the set {λ : exp(λ) ∈ spec(T^{-1})}, making explicit the connection between the spectrum of the operator D and the set of eigenvalues of a matrix. In particular the spectrum of D is invariant under translation by 2πi. Suppose F = [f_1, ..., f_N] is in the domain of D. A simple computation shows that ∂_x[exp(2πimx)F] = exp(2πimx)[∂_x + 2πim]F. The boundary values of exp(2πimx)F are the same as those of F. In particular the multiplication map F → exp(2πimx)F is an isomorphism from a generalized eigenspace E(λ) with eigenvalue λ to the generalized eigenspace E(λ + 2πim) with eigenvalue λ + 2πim.
This discussion is summarized in the next result.

Theorem 4.1. Suppose the edge weights w_n are positive integers. Then the spectrum of D is invariant under translation by 2πi. In the system representation with edge weights 1, multiplication by exp(2πimx) is an isomorphism from the λ generalized eigenspace to the λ + 2πim generalized eigenspace.
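With unit edge weights the matrix connection can be checked numerically. The sketch assumes χ(λ) = det(T^{-1} − exp(λ) I), consistent with the statement that exp(λ) runs over spec(T^{-1}); the 2×2 matrix T below is a hypothetical example, not one of the paper's scattering matrices.

```python
# Sketch: with unit edge weights the (assumed) characteristic function
# chi(lam) = det(T^{-1} - exp(lam) I) is 2*pi*i-periodic, so eigenvalues of D
# come in vertical towers lam_k + 2*pi*i*m with exp(-lam_k) in spec(T).
import cmath

# A hypothetical invertible 2x2 matrix T with eigenvalues 2 and -1.
T = [[0.0, 2.0],
     [1.0, 1.0]]
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]          # = -2, so T is invertible
Tinv = [[ T[1][1] / detT, -T[0][1] / detT],
        [-T[1][0] / detT,  T[0][0] / detT]]

def chi(lam):
    e = cmath.exp(lam)
    return (Tinv[0][0] - e) * (Tinv[1][1] - e) - Tinv[0][1] * Tinv[1][0]

# Eigenvalues mu of T from the quadratic mu^2 - tr*mu + det = 0;
# lam = -log(mu) is then a root of chi, as is lam + 2*pi*i*m for any m.
tr = T[0][0] + T[1][1]
disc = cmath.sqrt(tr * tr - 4 * detT)
mus = [(tr + disc) / 2, (tr - disc) / 2]
roots = [-cmath.log(mu) for mu in mus]
ok = all(abs(chi(r + 2j * cmath.pi * m)) < 1e-9 for r in roots for m in (-2, 0, 5))
print(ok)
```

Note the tower through µ = −1 sits on the imaginary axis (λ = −iπ + 2πim), previewing the real-part-zero discussion below.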
Having established the previous result, consider the operator D when the directed graph G comes from an undirected graph G_u by edge splitting as described before (12), with the dissipative junction conditions (5) and junction scattering matrices (9). In this context, the next result highlights an interesting contrast between integral and nonintegral weights.

Theorem 4.2. If the weights w_n are positive integers, then D has eigenvalues with real part zero. In contrast, assume that ε(v) < 0 in the junction conditions (5) for each vertex v of G_u with deg(v) ≥ 2. Suppose that for any nontrivial eigenfunction ψ of D there is a vertex v where ψ(v) ≠ 0, and that v lies on two cycles in the directed graph G with the property that the sums of the weights w_n on the cycles are linearly independent over the rationals. Then no eigenvalue for D has real part zero.
Proof. Suppose the weights w_n are positive integers. Each edge e of the undirected graph G_u has an associated impedance Z_e. If e_1, ..., e_N is a local indexing of the incident edges at a vertex v, Proposition 3 provides boundary data at v fixed by the junction scattering matrix S_ε(v). Extending this construction to all vertices v produces an eigenfunction of D with eigenvalue 2πim.
To see that no eigenvalue for D has real part zero under the stated conditions, begin by assuming that there is a nontrivial eigenfunction ψ for D with a purely imaginary eigenvalue λ. For each vertex v with in degree at least 2, pick a local indexing of the incoming and outgoing edges, and let ψ_i(v), respectively ψ_o(v), denote the vector of incoming, respectively outgoing, values for ψ. Integration by parts gives an energy identity at v. By (11), if ε(v) < 0, then the right hand side can be 0 only if q_n(v) = 0 for all edges e_n of G_u incident on v. This in turn means that p_m(v) = p_n(v) for all incident edges. Notice that if deg(v) = 1 in G_u, then the junction scattering matrix is just multiplication by 1. Thus by (3) the eigenfunction ψ extends continuously across all vertices.
Let γ_1 be a cycle (simple closed curve) in G, with the edge directions consistent with the direction of the cycle. Since ψ is continuous at each vertex, it is continuous along γ_1. On each directed edge e_n the function ψ is a multiple of exp(λ w_n x). Pick a starting vertex v for the cycle γ_1; if ψ is not the zero function along γ_1, we may assume that ψ(v) = 1.
Suppose γ_1 has length l_1 = w_1 + · · · + w_K. Follow ψ around γ_1. The normalization ψ(v) = 1, together with the continuity of ψ, means that ψ(x) = exp(λ w_1 x) on the first edge, is exp(λ w_1) exp(λ w_2 x) on the second edge, etc. Upon returning to v we find 1 = exp(λ l_1), so λ = 2πi m_1/l_1 for some nonzero integer m_1. Now suppose γ_2 is another cycle starting at v with length l_2 and integer m_2 defined as before. Then l_1/m_1 = l_2/m_2, the cycle lengths are rationally dependent, and the proof is established.
Notice in 4.2 that the conditions implying that no eigenvalue for D has real part zero will be satisfied if every vertex v of G_u lies on two independent cycles, and the set of all edge weights w_n for edges e_n of G_u is independent over the rationals.
The next result describes the linkage between generalized eigenspaces of T and generalized eigenspaces of D.
Theorem 4.3. Suppose G has weights w_n = 1, λ is an eigenvalue for D, and µ = exp(−λ). The dimensions of the λ eigenspace for D and the µ eigenspace for T are the same, as are the dimensions of the generalized eigenspace E(λ) and the generalized eigenspace for T with eigenvalue µ.
If ψ(x, λ) is in the generalized eigenspace E(λ) for D, then there is a K such that ψ = exp(λx) Σ_{k=0}^{K−1} x^k F_k, where the constant vectors F_k satisfy the conditions (25).

Proof. The last claim is established first. Suppose ψ ∈ E(λ), so that (D − λI)^K ψ = 0 for some K. Any solution of the differential equation (∂_x − λ)^K ψ = 0 has the form in (24) for some constant vectors F_k. In addition, for k = 0, ..., K − 1 the functions φ_k(x, λ) = (D − λI)^k ψ(x, λ) must be in the domain of D, so φ_k(0, λ) = T φ_k(1, λ).
An induction argument will show that (T − µI)^j F_{K−j} = 0.
Starting with j = 1, the condition (25) applied to φ_{K−1} gives the base case, and the induction step is similar. To establish the equality of dimensions for generalized eigenspaces, first use 3.4 to find a sequence of invertible matrices T_n and corresponding operators D_n such that T_n → T, and all eigenvalues of D_n have algebraic multiplicity 1. Suppose λ is an eigenvalue of D with algebraic multiplicity K. Find a small circle γ centered at λ so that λ is the only eigenvalue of D contained on or within γ, and the characteristic function for D_n does not vanish on γ for n large enough. For these n there are K eigenvalues of D_n inside γ. A perturbation of the matrices T_n gives all eigenvalues of T_n algebraic multiplicity 1. Now by continuity of the dimension of the range of the spectral projection associated to γ for D, and the corresponding projection for a contour γ_2 near µ, we see that the dimension of E(λ) is the same as the dimension of the generalized eigenspace for T with eigenvalue µ.
Recall that a set of vectors {ψ_k} in a Hilbert space is a Riesz basis if there is an orthonormal basis {φ_k} and a bounded, boundedly invertible operator U such that ψ_k = U φ_k.

Proof. We pick a system representation of D so all edge weights are one, as above. By 4.3 we can pick a basis F_1, ..., F_K of eigenvectors for T with eigenvalues µ_k = exp(−λ_k) and 0 ≤ ℑ(λ_k) < 2π such that a basis of eigenfunctions of D is given by ψ_{k,n}(x) = exp((λ_k + 2πin)x) F_k.
Let {E_k} be the standard basis for C^K, written as column vectors, and construct the orthonormal basis φ_{k,n} = exp(2πinx) E_k for ⊕_{k=1}^K L^2[0, 1]. Let F denote the K × K matrix whose k-th column is F_k. Then