q-DOMINANT AND q-RECESSIVE MATRIX SOLUTIONS FOR LINEAR QUANTUM SYSTEMS

In this study, linear second-order matrix q-difference equations are shown to be formally self-adjoint equations with respect to a certain inner product and the associated self-adjoint boundary conditions. A generalized Wronskian is introduced, and a Lagrange identity and Abel's formula are established. Two reduction-of-order theorems are given. The analysis and characterization of q-dominant and q-recessive solutions at infinity are presented, emphasizing the case when the quantum system is disconjugate.


Introduction
Quantum calculus has been utilized since at least the time of Pierre de Fermat [10, Chapter B.5] to augment mathematical understanding gained from the more traditional continuous calculus and other branches of the discipline [3]. In this study we will analyze a second-order linear self-adjoint matrix q-difference system, especially in the case that admits q-dominant and q-recessive solutions at infinity. Historically, dominant and recessive solutions of linear matrix differential systems of the form

    (P X')'(t) + Q(t)X(t) = 0

were introduced and extensively studied in a series of classic works by W. T. Reid [5,6,7,8,9], and in matrix difference systems of the form

    ∆(P(t)∆X(t − 1)) + Q(t)X(t) = 0

by Ahlbrandt [1], Ahlbrandt and Peterson [2], and recently by Ma [4]; there the forward difference operator ∆X(t) := X(t + 1) − X(t) was used. We introduce here an analysis of the quantum (q-difference) system

(1.1)    D̄_q(P D_qX)(t) + Q(t)X(t) = 0,

where the real scalar q > 1 and the q-derivatives D_q and D̄_q are given, respectively, by the difference quotients

    (D_qy)(t) = [y(qt) − y(t)] / [(q − 1)t]    and    (D̄_qy)(t) = [y(t) − y(t/q)] / [(1 − 1/q)t] = (D_qy)(t/q).
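Since the displayed difference quotients were reconstructed during editing, a quick numerical sanity check may help; the following Python sketch (the choice q = 2 and all function names are illustrative assumptions, not from the paper) verifies the identity D̄_q y(t) = D_q y(t/q).

```python
q = 2.0  # any real q > 1 works; 2 is an arbitrary illustrative choice

def Dq(y, t):
    # forward q-derivative: (y(qt) - y(t)) / ((q - 1) t)
    return (y(q * t) - y(t)) / ((q - 1) * t)

def Dq_bar(y, t):
    # backward q-derivative: (y(t) - y(t/q)) / ((1 - 1/q) t)
    return (y(t) - y(t / q)) / ((1 - 1 / q) * t)

y = lambda t: t ** 2
t = 4.0
# the identity D̄_q y(t) = D_q y(t/q) from the definitions
assert abs(Dq_bar(y, t) - Dq(y, t / q)) < 1e-12
# for y(t) = t^2 one gets D_q y(t) = (q + 1) t
assert abs(Dq(y, t) - (q + 1) * t) < 1e-12
```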
We will be particularly interested in the case where invertible solutions of (1.1) exist, and their characterization as q-dominant and/or q-recessive solutions at infinity.
The analysis of (1.1) and its solutions will unfold as follows. In Section 2 we explore (1.1), show how it is formally a self-adjoint equation, introduce a generalized Wronskian, and establish a Lagrange identity and Abel's formula. Section 3 contains two reduction-of-order theorems, followed in Section 4 by the notion of a prepared basis. In the main section, Section 5, we give definitions of q-dominant and q-recessive solutions, a connection to disconjugacy, and the construction of q-recessive solutions. Finally, future directions are touched on in Section 6, where a Pólya factorization of (1.1) leads to a variation of parameters result.
(A matrix M is Hermitian iff M* = M, where * indicates conjugate transpose.) In this section we are concerned with the second-order matrix q-difference equation

(2.1)    LX = 0, where LX(t) := D̄_q(P D_qX)(t) + Q(t)X(t), t ∈ (0, ∞)_q,

which will be shown to be (formally) self-adjoint.
Theorem 2.1. Let a ∈ (0, ∞)_q be fixed and X_a, X'_a be given constant n × n matrices. Then the initial value problem

    LX(t) = D̄_q(P D_qX)(t) + Q(t)X(t) = 0,    X(a) = X_a,    D_qX(a) = X'_a,

has a unique solution.
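Although the paper gives no algorithm, the existence and uniqueness argument is constructive on the lattice: rewriting D̄_q(P D_qX)(t) + Q(t)X(t) = 0 at each lattice point expresses P(qt)D_qX(qt) in terms of data at t. A minimal scalar Python sketch of this stepping (the coefficients and all names are illustrative assumptions):

```python
q = 2.0

def solve_ivp_q(P, Q, a, Xa, DXa, steps):
    """Step the scalar q-difference equation
    D̄_q(P D_qX)(t) + Q(t)X(t) = 0 forward on t = a, qa, q^2 a, ...
    from X(a) = Xa, D_qX(a) = DXa (editorial sketch, scalar case)."""
    t, X, DX = a, Xa, DXa
    ts, Xs = [t], [X]
    for _ in range(steps):
        Xn = X + (q - 1) * t * DX            # X(qt) from D_qX(t)
        tn = q * t
        # solve D̄_q(P D_qX)(tn) + Q(tn)X(tn) = 0 for D_qX(tn):
        DXn = (P(t) * DX - (1 - 1 / q) * tn * Q(tn) * Xn) / P(tn)
        t, X, DX = tn, Xn, DXn
        ts.append(t); Xs.append(X)
    return ts, Xs

# with P ≡ 1, Q ≡ 0 the solutions are spanned by X ≡ 1 and X(t) = t
ts, Xs = solve_ivp_q(lambda t: 1.0, lambda t: 0.0, 1.0, 1.0, 1.0, 5)
assert all(abs(X - t) < 1e-12 for t, X in zip(ts, Xs))
```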
In view of the theorem just proven, the following definition is now possible.
Definition 2.2. The unique solution of the initial value problem LX = 0, X(a) = 0, D_qX(a) = P^{-1}(a) is called the principal solution of (2.1) (at a), while the unique solution of the initial value problem LX = 0, X(a) = −I, D_qX(a) = 0 is called the associated (coprincipal) solution of (2.1) (at a).

Definition 2.3. For matrix functions X and Y, the function W(X, Y) given by

(2.3)    W(X, Y)(t) := X*(t)P(t)(D_qY)(t) − (D_qX)*(t)P(t)Y(t)

is the (generalized) Wronskian matrix of X and Y.
Lemma 2.4. For matrix functions X and Y defined on (0, ∞)_q, the product rule for D_q is given by

    D_q(XY)(t) = (D_qX)(t)Y(t) + X(qt)(D_qY)(t),

and for D̄_q by

    D̄_q(XY)(t) = (D̄_qX)(t)Y(t) + X(t/q)(D̄_qY)(t).
Proof. The proof is straightforward using the definitions of D_q and D̄_q and is omitted.

Theorem 2.5 (Lagrange Identity). For matrix functions X and Y defined on (0, ∞)_q, the Wronskian matrix W(X, Y) satisfies

    D̄_q W(X, Y)(t) = X*(t)(LY)(t) − (LX)*(t)Y(t).
Proof. For matrix functions X and Y, the identity follows by applying the product rule for D̄_q derivatives to the Wronskian W(X, Y) and using the definition of L.

EJQTDE, 2007 No. 11, p. 3

Definition 2.6. Let a, b ∈ (0, ∞)_q with a < b. We define the q-inner product of n × n matrix functions M and N on [a, b]_q to be

(2.2)    ⟨M, N⟩ := (1 − 1/q) Σ_{t∈[a,b]_q} t M*(t)N(t).

Since a = q^α, b = q^β, and t = q^τ for integers α ≤ τ ≤ β, the q-inner product is given by the expression

    ⟨M, N⟩ = (1 − 1/q) Σ_{τ=α}^{β} q^τ M*(q^τ)N(q^τ).

Corollary 2.7 (Self-Adjoint Operator). The operator L in (2.1) is formally self-adjoint with respect to the q-inner product (2.2); that is, the identity ⟨X, LY⟩ = ⟨LX, Y⟩ holds for matrix functions X and Y satisfying the self-adjoint boundary conditions W(X, Y)(t)|_a^b = 0.

Proof. Let the matrix functions X and Y satisfy W(X, Y)(t)|_a^b = 0. From Definition 2.3 and Theorem 2.5 we see that Green's formula holds, namely

    ⟨X, LY⟩ − ⟨LX, Y⟩ = W(X, Y)(t)|_a^b = 0,

and the proof is complete.
From Abel's formula we get that if X is a solution of (2.1) on (0, ∞)_q, then

    W(X, X)(t) ≡ C,

where C is a constant matrix. With this in mind we make the following definition.
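Abel's formula can be illustrated numerically, assuming the Wronskian takes the standard form W(X, Y) = X*P D_qY − (D_qX)*P Y used for the analogous difference systems (an assumption, since the displayed definition was lost in the source):

```python
q = 2.0

def Dq(y, t):
    return (y(q * t) - y(t)) / ((q - 1) * t)

# two scalar solutions of D̄_q(D_q x) = 0 (P ≡ 1, Q ≡ 0): X ≡ 1 and Y(t) = t
X = lambda t: 1.0
Y = lambda t: t
P = lambda t: 1.0

def W(t):
    # assumed Wronskian form: X*(t)P(t)D_qY(t) − (D_qX)*(t)P(t)Y(t)
    return X(t) * P(t) * Dq(Y, t) - Dq(X, t) * P(t) * Y(t)

# Abel's formula: W(X, Y)(t) is constant along the lattice t = q^k
vals = [W(q ** k) for k in range(6)]
assert all(abs(v - vals[0]) < 1e-12 for v in vals)
assert abs(vals[0] - 1.0) < 1e-12
```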
Definition 2.10. Let X and Y be matrix functions and W(X, Y) be given as in (2.3).

(i) The matrix function X is a prepared (conjoined, isotropic) solution of (2.1) iff X is a solution of (2.1) and W(X, X)(t) ≡ 0 on (0, ∞)_q.

(ii) The matrix functions X and Y are normalized prepared bases of (2.1) iff X, Y are two prepared solutions of (2.1) with W(X, Y)(t) ≡ I.

Theorem 2.11. Any two prepared solutions of (2.1) on (0, ∞)_q are linearly independent iff their Wronskian is nonzero.
Theorem 2.13 (Converse of Abel's Formula). Assume X is a solution of (2.1) on (0, ∞)_q such that X^{-1} exists on (0, ∞)_q. If Y satisfies W(X, Y)(t) ≡ C, where C is a constant matrix, then Y is also a solution of (2.1).
Proof. Use the Wronskian W and Abel's formula.
Note that one can easily get prepared solutions of (2.1) by taking initial conditions at a ∈ (0, ∞)_q so that X*(a)P(a)D_qX(a) is Hermitian.
In the Sturmian theory for equations of the form (2.1) the matrix function X*(t)P(t)X(qt) is important. We note the following result.
Conversely, if there is an a ∈ (0, ∞)_q such that X*(a)P(a)X(qa) is Hermitian, then X is a prepared solution of (2.1). Moreover, if X is an invertible prepared solution, then P(t)X(qt)X^{-1}(t), P(t)X(t)X^{-1}(qt), and Z(t) := (P(D_qX)X^{-1})(t) are all Hermitian for all t ∈ (0, ∞)_q.
Proof. First note that X*(qt)P(t)X(t) > 0 for t ∈ (0, ∞)_q implies that X(t) is invertible for t ∈ (0, ∞)_q. Since X is a prepared solution of (2.1), by Lemma 2.15 we have (2.5) for all t ∈ (0, ∞)_q. We multiply the right-hand side of the first equation in (2.5) from the right by (XX^{-1})(t) to obtain the equivalence of (i) and (ii). For the equivalence of (i) and (iii), multiply the right-hand side of the second equation in (2.5) from the right by (XX^{-1})(qt). The other implications are similar.
Proof. Use the product rules given in Lemma 2.4 on the equation XX^{-1} = I.

Remark 3.2. Throughout this work it is to be understood that Σ_{s∈[a,a)_q} M(s) := 0 for any matrix function M defined on (0, ∞)_q.

Theorem 3.3 (Reduction of Order I). Let a ∈ (0, ∞)_q, and assume X is a prepared solution of (2.1) with X invertible on [a, ∞)_q. Then a second prepared solution Y of (2.1) is given by

    Y(t) = (q − 1)X(t) Σ_{s∈[a,t)_q} s (X*(s)P(s)X(qs))^{-1},

such that X, Y are normalized prepared bases of (2.1).
Proof. For Y defined above, by the product rule in Lemma 2.4 for D_q, the resulting expression is Hermitian by Theorem 2.14(ii). By Theorem 2.13, W(X, Y) = I guarantees that Y is a solution of (2.1). To see that Y is prepared, note that W(Y, Y) is Hermitian by Lemma 2.15, since X is prepared and Z is Hermitian. Consequently, X, Y are normalized prepared bases for (2.1).
Lemma 3.4. Assume X, Y are normalized prepared bases of (2.1). Then U := XE + YF is a prepared solution of (2.1) for constant n × n matrices E, F if and only if F*E is Hermitian. If F = I, then X, U are normalized prepared bases of (2.1) if and only if E is a constant Hermitian matrix.
Proof. Assume X, Y are normalized prepared bases of (2.1). Then by Theorem 2.14 and Definition 2.3, W(X, X) ≡ W(Y, Y) ≡ 0 and W(X, Y) ≡ I. By linearity, U := XE + YF is a solution of (2.1). Checking the appropriate Wronskians, the first claim clearly holds. If F = I, then W(X, U) = I, and U = XE + Y is a prepared solution of (2.1) if and only if E is a constant Hermitian matrix.
Theorem 3.5 (Reduction of Order II). Let a ∈ (0, ∞)_q, and assume X is a prepared solution of (2.1) with X invertible on [a, ∞)_q. Then U is a second n × n matrix solution of (2.1) iff U satisfies the first-order matrix equation W(X, U)(t) ≡ F for some constant n × n matrix F, iff U is of the form

(3.2)    U(t) = X(t) [E + (q − 1) Σ_{s∈[a,t)_q} s (X*(s)P(s)X(qs))^{-1} F],

where E and F are constant n × n matrices. In the latter case,

(3.3)    E = X^{-1}(a)U(a),    F = W(X, U).

Proof. Assume X is a prepared solution of (2.1) with X invertible on [a, ∞)_q. Let U be any n × n matrix solution of (2.1); we must show U is of the form (3.2). Using the Wronskian from Definition 2.3, set F := W(X, U); since X and U are solutions and X is prepared, F is a constant matrix. Multiplying by the variable and summing both sides from a to t yields (3.2). Conversely, assume U is given by (3.2). By Theorem 3.3 and linearity, U is a solution of (2.1) on [a, ∞)_q. Setting t = a in (3.2) leads to E in (3.3). By the constancy of the Wronskian, W(X, U)(t) ≡ W(X, U)(a); suppressing the a, and using (3.2) and the fact that X is prepared, W(X, U) = F. From Lemma 3.4, U is a prepared solution of (2.1) iff F*E is Hermitian.

Prepared Bases
Let X be an n × p matrix function defined on (0, ∞)_q, and define the 2n × p matrix X̂ by

(4.1)    X̂(t) := ( X(t) ; P(t)(D_qX)(t) )    (stacked as a block column);

we also define the 2n × 2n block matrix

    P̂ := [ 0  I ; −I  0 ].

It follows that

(4.2)    W(X, Y)(t) = X̂*(t) P̂ Ŷ(t).

Theorem 4.1. Assume X is an n × p matrix solution of (2.1). Then X̂ has constant rank on (0, ∞)_q. Furthermore, if X is a prepared solution of (2.1) and rank X̂ = p, then p ≤ n.
Proof. Assume X is an n × p matrix solution of (2.1). Let a ∈ (0, ∞)_q, and suppose X̂(a)v = 0 for some vector v ∈ C^p. Then X(a)v = 0 and P(a)(D_qX)(a)v = 0 by assumption; since X solves (2.1), as in the proof of Theorem 2.1 we have that X̂(t)v ≡ 0, so the nullity of X̂(t) is constant. Therefore X̂ has constant rank on (0, ∞)_q. Now suppose X is an n × p prepared solution of (2.1) with rank X̂ = p. Since X is prepared, W(X, X) ≡ 0 on (0, ∞)_q. By (4.2), X̂*P̂X̂ ≡ 0 on (0, ∞)_q.
As P̂ is invertible, rank(P̂X̂) = p, and it follows from the previous line that the nullity of X̂* is at least p. Since rank X̂* + nullity X̂* = 2n, we have that 2p = p + p ≤ p + nullity X̂* = 2n, so that p ≤ n.
Definition 4.2. An n × n solution X of (2.1) is a prepared basis for (2.1) iff X is a prepared solution of (2.1) and rank X̂ = n on (0, ∞)_q, where X̂ is given in (4.1).
Theorem 4.3. Assume X, Y are n × n prepared solutions of (2.1). If W(X, Y) is invertible, then X and Y are both prepared bases of (2.1).
Proof. Assume X, Y are n × n prepared solutions of (2.1) with W(X, Y) invertible. Note that by Abel's formula and the definitions above, W(X, Y) = X̂*P̂Ŷ is a constant invertible matrix. Let a ∈ (0, ∞)_q, and suppose Ŷ(a)v = 0 for some vector v ∈ C^n. Then W(X, Y)v = 0, so that by the assumption of invertibility v = 0. Hence rank Ŷ(a) = n and, due to constant rank by the theorem above, rank Ŷ ≡ n. Thus Y is a prepared basis. In the same manner, v*X̂*(a) = 0 implies v*W(X, Y) = 0, which implies v = 0, so rank X̂(a) = rank X̂(t) = n and X is a prepared basis as well.
5. q-Dominant and q-Recessive Solutions

In this main section we seek to introduce the notions of q-dominant and q-recessive solutions for the q-difference equation (2.1) when the equation has an invertible solution; in particular, we ultimately will be able to construct an (essentially) unique q-recessive solution for (2.1) in the event that it admits invertible solutions. Note that throughout the rest of the paper we assume a ∈ (0, ∞)_q.
Definition 5.1. A solution V of (2.1) is q-dominant at infinity iff V is a prepared basis and there exists an a such that V is invertible on [a, ∞)_q and

    Σ_{s∈[a,∞)_q} s Υ^{-1}(s),    Υ(s) := V*(s)P(s)V(qs),

converges to a Hermitian matrix with finite entries.

Lemma 5.2. Assume the self-adjoint equation LX = 0 has a q-dominant solution V at ∞. If X is any other n × n solution of (2.1), then lim_{t→∞} V^{-1}(t)X(t) exists as a finite matrix.

Proof. Since V is a q-dominant solution at ∞ of (2.1), there exists an a such that V is invertible on [a, ∞)_q. By the second reduction of order theorem, Theorem 3.5, X has the form (3.2) with respect to V. Multiplying on the left by V^{-1}(t), and noting that the sum converges as t → ∞ since V is q-dominant at ∞, the limit lim_{t→∞} V^{-1}(t)X(t) exists. The proof is complete.
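For intuition, a scalar example (an editorial illustration, not from the paper): with P ≡ 1 and Q ≡ 0, V(t) = t solves the equation, Υ(s) = q s², and the q-dominance series Σ_{s=q^k} s Υ^{-1}(s) = Σ_k 1/(q·q^k) is a convergent geometric series, so V is q-dominant at ∞:

```python
q = 2.0
# V(t) = t, P ≡ 1: Υ(s) = V(s) P(s) V(qs) = q s², so each term is
# s Υ⁻¹(s) = 1/(q s); over the lattice s = q^k this is geometric.
total = sum(q ** k / (q * (q ** k) ** 2) for k in range(60))
# closed form of the geometric series: 1/(q - 1)
assert abs(total - 1 / (q - 1)) < 1e-12
```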
Definition 5.3. A solution U of (2.1) is q-recessive at infinity iff U is a prepared basis and, whenever X is any other n × n solution of (2.1) such that W(X, U) is invertible, X is eventually invertible and

    lim_{t→∞} X^{-1}(t)U(t) = 0.

Lemma 5.4. If U is a solution of (2.1) which is q-recessive at ∞, then for any invertible constant matrix K, the solution UK of (2.1) is q-recessive at ∞ as well.
Proof. The proof follows from the definition.
Lemma 5.5. If U is a solution of (2.1) which is q-recessive at ∞, and V is a prepared solution of (2.1) such that W(V, U) is invertible, then V is q-dominant at ∞.

Proof. Note that by the assumptions and Theorem 4.3, V is a prepared basis. By the definition of q-recessive, W(V, U) invertible implies that V is invertible on [a, ∞)_q for some a ∈ (0, ∞)_q, and

(5.1)    lim_{t→∞} V^{-1}(t)U(t) = 0.

Let K := W(V, U); by assumption K is invertible, and by Definition 2.3 the product V^{-1}U can be expressed in terms of K and Υ. Multiply by s, sum both sides from a to ∞, and use (5.1) to see that Σ_{s∈[a,∞)_q} sΥ^{-1}(s) converges to a Hermitian matrix, as Υ is Hermitian. Thus V is q-dominant at ∞.
Theorem 5.6. Assume (2.1) has a solution V which is q-dominant at ∞. Then

    U(t) := (q − 1)V(t) Σ_{s∈[t,∞)_q} s (V*(s)P(s)V(qs))^{-1}

is a solution of (2.1) which is q-recessive at ∞, and W(V, U) = −I.

Proof. Since V is q-dominant at ∞, U is a well-defined function and, by the second reduction of order theorem, Theorem 3.5, U is a solution of (2.1) of the form (3.2). Since the associated coefficient F*E is Hermitian, U is a prepared solution of (2.1), and W(−V, U) = I implies that U and −V are normalized prepared bases. Let X be an n × n matrix solution of LX = 0 such that W(X, U) is invertible. By the second reduction of order theorem,

(5.2)    X(t) = V(t) [C_1 + (q − 1) Σ_{s∈[a,t)_q} s (V*(s)P(s)V(qs))^{-1} C_2]

for constant matrices C_1 and C_2. As W(X, U) is invertible by assumption, C_1 is invertible. From (5.2), X(t) is invertible for large t and lim_{t→∞} X^{-1}(t)U(t) = 0. Therefore U is a q-recessive solution at ∞.
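Continuing the scalar example with P ≡ 1, Q ≡ 0 (an editorial illustration): applying the construction U(t) = (q − 1)V(t) Σ_{s∈[t,∞)_q} s Υ^{-1}(s) (the assumed form of the q-recessive solution built from a q-dominant one) to V(t) = t recovers the constant solution X ≡ 1, which is indeed q-recessive:

```python
q = 2.0

def U(m, terms=80):
    # U(t) = (q − 1) V(t) Σ_{s ∈ [t, ∞)_q} s Υ⁻¹(s) evaluated at t = q^m,
    # with V(t) = t and Υ(s) = q s² (tail truncated after `terms` points)
    t = q ** m
    return (q - 1) * t * sum(q ** k / (q * (q ** k) ** 2)
                             for k in range(m, m + terms))

# the construction collapses to the constant solution 1,
# the q-recessive solution of D̄_q(D_q x) = 0
for m in (0, 3, 7):
    assert abs(U(m) - 1.0) < 1e-9
```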
Theorem 5.7. Assume (2.1) has a solution U which is q-recessive at ∞, and U(a) is invertible for some a ∈ (0, ∞)_q. Then U is uniquely determined by U(a), and (2.1) has a solution V which is q-dominant at ∞.
Proof. Assume U(a) is invertible; let V be the unique solution of the initial value problem

    LV = 0,    V(a) = 0,    D_qV(a) = P^{-1}(a).

Then V is a prepared basis, and W(V, U) is invertible. It follows from Lemma 5.5 that V is q-dominant at ∞. Let Γ be an arbitrary but fixed n × n constant matrix, and let X solve the initial value problem LX = 0, X(a) = I, D_qX(a) = Γ.
By Lemma 5.2, lim_{t→∞} V^{-1}(t)X(t) = K for some n × n constant matrix K; note that K is independent of the q-recessive solution U. Using the initial conditions at a, by uniqueness of solutions it is easy to see that there exist constant n × n matrices C_1 and C_2 such that U = XC_1 + VC_2, where C_1 = U(a) is invertible. Consequently, using the q-recessive nature of U, we have C_2 = −KC_1, so that U = (X − VK)U(a), and the q-recessive solution U is uniquely determined by its initial value U(a).

Theorem 5.8. Assume (2.1) has a solution U which is q-recessive at ∞ and a solution V which is q-dominant at ∞. If U and Σ_{s∈[t,∞)_q} s(V*(s)P(s)V(qs))^{-1} are both invertible for large t ∈ (0, ∞)_q, then there exists an invertible constant matrix K such that

    U(t) = (q − 1)V(t) [Σ_{s∈[t,∞)_q} s (V*(s)P(s)V(qs))^{-1}] K

for large t. In addition, W(U, V) is invertible.

Proof. For sufficiently large t ∈ (0, ∞)_q define

    Y(t) := (q − 1)V(t) Σ_{s∈[t,∞)_q} s (V*(s)P(s)V(qs))^{-1}.

By Theorem 5.6, Y is also a q-recessive solution of (2.1) at ∞ and W(V, Y) = −I. Because U and Σ_{s∈[t,∞)_q} s(V*(s)P(s)V(qs))^{-1} are both invertible for large t ∈ (0, ∞)_q, Y is likewise invertible for large t, and lim_{t→∞} Y^{-1}(t)U(t) exists by the q-recessive nature of Y. Choose a ∈ (0, ∞)_q large enough to ensure that U and Y are invertible on [a, ∞)_q. By Lemma 5.4 the solution given by X := Y Y^{-1}(a)U(a) is yet another q-recessive solution at ∞. Since U and X are q-recessive solutions at ∞ and U(a) = X(a), we conclude from the uniqueness established in Theorem 5.7 that X ≡ U. Thus U(t) = Y(t)K, where K := Y^{-1}(a)U(a) is an invertible constant matrix.
The next result, when the domain is Z instead of (0, ∞)_q, relates the convergence of infinite series, the convergence of certain continued fractions, and the existence of recessive solutions; for more see [2] and the references therein.
Theorem 5.9 (Connection Theorem). Let X and V be solutions of (2.1) determined by the initial conditions

    X(a) = I, D_qX(a) = P^{-1}(a)K, and V(a) = 0, D_qV(a) = P^{-1}(a),

respectively, where a ∈ (0, ∞)_q and K is a constant Hermitian matrix. Then X, V are normalized prepared bases of (2.1), and the following are equivalent:

(i) V is a q-dominant solution of (2.1) at ∞;

(ii) V is invertible for large t ∈ (0, ∞)_q and lim_{t→∞} V^{-1}(t)X(t) exists as a Hermitian matrix Ω(K) with finite entries;

(iii) there exists a solution U of (2.1) which is q-recessive at ∞, with U(a) invertible.
If (i), (ii), and (iii) hold, then the q-recessive solution with U(a) = I is U = X − V Ω(K).

Proof. Since V(a) = 0, V is a prepared solution of (2.1). Also, as K is Hermitian, W(X, X)(a) = K − K* = 0, making X a prepared solution of (2.1) as well. Checking that W(X, V)(a) = I, we see that X, V are normalized prepared bases of (2.1).

Now we show that (i) implies (ii). If V is a q-dominant solution of (2.1) at ∞, then there exists a t_1 ∈ (a, ∞)_q such that V(t) is invertible for t ∈ [t_1, ∞)_q, and the sum Σ_{s∈[t_1,∞)_q} s(V*(s)P(s)V(qs))^{-1} converges to a Hermitian matrix with finite entries. By the second reduction of order theorem,

(5.3)    X(t) = V(t) [E + (q − 1) Σ_{s∈[t_1,t)_q} s (V*(s)P(s)V(qs))^{-1} F]

for constant matrices E and F. Since X is prepared, E*F = −E* is Hermitian, whence E is Hermitian. As a result, by (5.3) we have that

(5.4)    lim_{t→∞} V^{-1}(t)X(t) = E + (q − 1) Σ_{s∈[t_1,∞)_q} s (V*(s)P(s)V(qs))^{-1} F

converges to a Hermitian matrix with finite entries, and (ii) holds.

Next we show that (ii) implies (iii). If V is invertible on [t_1, ∞)_q and (5.4) exists as a Hermitian matrix Ω(K), then from (5.3) and (5.4) we have

(5.5)    lim_{t→∞} V^{-1}(t)[X(t) − V(t)Ω(K)] = 0.

Then U := X − VΩ(K) satisfies (5.5) and U(a) = X(a) = I, making U a prepared basis for (2.1). If X_1 is an n × n matrix solution of LX = 0 such that W(X_1, U) is invertible, then

(5.6)    X_1 = XC_1 + VC_2

for some constant matrices C_1 and C_2 determined by the initial conditions at a. It follows by (5.5) that C_1 is invertible. From (5.4) and (5.5) we have that lim_{t→∞} V^{-1}(t)X_1(t) = Ω(K)C_1 + C_2, which is invertible. Thus X_1(t) is invertible for large t ∈ (0, ∞)_q, and lim_{t→∞} X_1^{-1}(t)U(t) = 0. Hence U is a q-recessive solution of (2.1) at ∞ and (iii) holds.

Finally we show that (iii) implies (i). If U is a q-recessive solution of (2.1) at ∞ with U(a) invertible, then W(V, U)(a) = −U(a) is also invertible. Hence by Lemma 5.5, V is a q-dominant solution of (2.1) at ∞.
To complete the proof, assume (i), (ii), and (iii) hold. It can be shown via the initial conditions at a that X, U, and V are related through a suitable constant matrix C. By (ii), lim_{t→∞} V^{-1}(t)X(t) = Ω(K), and, as U is a q-recessive solution at ∞ by (iii), lim_{t→∞} V^{-1}(t)U(t) = 0. An application of the quantum derivative D_q at a then yields the conclusion.

Now let Y be the unique solution of the initial value problem
Using the initial conditions at a, we see that the desired identity follows by (ii) and the fact that X = Y when K = 0. Thus the proof is complete.
We will also be interested in analyzing the self-adjoint vector q-difference equation

(5.7)    Lx = 0, where Lx(t) := D̄_q(P D_qx)(t) + Q(t)x(t),

where x is an n × 1 vector-valued function defined on (0, ∞)_q. We will see interesting relationships between the so-called unique two-point property (defined below) of the nonhomogeneous vector equation Lx = h, disconjugacy of Lx = 0, and the construction of q-recessive solutions at infinity to the matrix equation LX = 0. The following theorem can be proven by modifying the proof of Theorem 2.1.
Theorem 5.10. Let h be an n × 1 vector function defined on [a, ∞)_q. Then the nonhomogeneous vector initial value problem

(5.8)    Ly = D̄_q(P D_qy) + Qy = h,    y(a) = y_a,    D_qy(a) = y'_a,

has a unique solution.
Definition 5.11. Assume h is an n × 1 vector function defined on [a, ∞)_q. Then the vector dynamic equation Lx = h has the unique two-point property on [a, ∞)_q provided that, given any a ≤ t_1 < t_2 in (0, ∞)_q, if u and v are solutions of Lx = h with u(t_1) = v(t_1) and u(t_2) = v(t_2), then u ≡ v on [a, ∞)_q.
Theorem 5.12. If the homogeneous vector equation (5.7) has the unique two-point property on [a, ∞)_q, then the boundary value problem

    Lx = h,    x(t_1) = α,    x(t_2) = β,

where a ≤ t_1 < t_2 in (0, ∞)_q and α, β ∈ C^n, has a unique solution on [a, ∞)_q.
Proof. If t_2 = qt_1, then the boundary value problem is an initial value problem and the result holds by Theorem 5.10. Assume t_2 > qt_1. Let X(t, t_1) and Y(t, t_1) be the unique n × n matrix solutions of (2.1) determined by the initial conditions

    X(t_1, t_1) = 0, D_qX(t_1, t_1) = I, and Y(t_1, t_1) = I, D_qY(t_1, t_1) = 0;

then a general solution of (5.7) is given by

(5.9)    x(t) = X(t, t_1)γ + Y(t, t_1)δ,    γ, δ ∈ C^n,

as x(t_1) = δ and D_qx(t_1) = γ. By the unique two-point property the homogeneous boundary value problem Lx = 0, x(t_1) = 0, x(t_2) = 0 has only the trivial solution. For x given by (5.9), the boundary condition at t_1 implies that δ = 0, and the boundary condition at t_2 yields X(t_2, t_1)γ = 0; by uniqueness and the fact that x is trivial, γ = 0 is the unique solution, meaning X(t_2, t_1) is invertible. Next let v be the solution of the initial value problem

    Lv = h,    v(t_1) = 0,    D_qv(t_1) = 0.

Then the general solution of Lx = h is given by x(t) = X(t, t_1)γ + Y(t, t_1)δ + v(t). We now show that the boundary value problem has a unique solution. The boundary condition at t_1 implies that δ = α. The condition at t_2 leads to the equation

    X(t_2, t_1)γ = β − Y(t_2, t_1)α − v(t_2);

since X(t_2, t_1) is invertible, this can be solved uniquely for γ.
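The shooting argument in the proof translates directly into a computation; a scalar Python sketch (the coefficients, boundary data, and all names are illustrative assumptions):

```python
q = 2.0

def step(P, Q, a, x, dx, n):
    """q-lattice stepping for D̄_q(P D_q x)(t) + Q(t)x(t) = 0 (scalar sketch);
    returns x on t = a, qa, ..., q^n a."""
    t, xs = a, [x]
    for _ in range(n):
        xn = x + (q - 1) * t * dx
        tn = q * t
        dx = (P(t) * dx - (1 - 1 / q) * tn * Q(tn) * xn) / P(tn)
        t, x = tn, xn
        xs.append(x)
    return xs

P = lambda t: 1.0
Q = lambda t: 0.0
t1, n2 = 1.0, 3                 # t2 = q**n2 * t1 > q t1
alpha, beta = 2.0, 5.0
# basis solutions: X(t1) = 0, D_qX(t1) = 1 and Y(t1) = 1, D_qY(t1) = 0
X = step(P, Q, t1, 0.0, 1.0, n2)
Y = step(P, Q, t1, 1.0, 0.0, n2)
# x = X γ + Y δ; the boundary conditions give δ = α and X(t2) γ = β − Y(t2) α
delta = alpha
gamma = (beta - Y[n2] * alpha) / X[n2]
x = [Xv * gamma + Yv * delta for Xv, Yv in zip(X, Y)]
assert abs(x[0] - alpha) < 1e-12 and abs(x[n2] - beta) < 1e-12
```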
Corollary 5.13. If the homogeneous vector equation (5.7) has the unique two-point property on [a, ∞)_q, then the matrix boundary value problem

    LX = H,    X(t_1) = M,    X(t_2) = N

has a unique solution, where M and N are given constant n × n matrices.
Proof. Modify the proof of Theorem 5.12 to get existence and uniqueness.

Theorem 5.14. Assume the homogeneous vector equation (5.7) has the unique two-point property on [a, ∞)_q. Further assume U is a solution of (2.1) which is q-recessive at ∞ with U(a) invertible. For each fixed s ∈ (a, ∞)_q, let Y(t, s) be the solution of the boundary value problem

    LY(•, s) = 0,    Y(a, s) = I,    Y(s, s) = 0.

Then the q-recessive solution U(t)U^{-1}(a) is uniquely determined by

    U(t)U^{-1}(a) = lim_{s→∞} Y(t, s).

Proof. Assume U is a solution of (2.1) which is q-recessive at ∞ with U(a) invertible. Let V be the unique solution of the initial value problem

    LV = 0,    V(a) = 0,    D_qV(a) = P^{-1}(a).

By the connection theorem, Theorem 5.9, V is invertible for large t. By checking boundary conditions at a and s for s large, we get that Y(t, s) can be written in terms of U and V with an invertible coefficient matrix, and by the q-recessive nature of U, lim_{s→∞} V(t)V^{-1}(s)U(s) = 0. As a result, lim_{s→∞} Y(t, s) = 0 + U(t)U^{-1}(a), and the proof is complete.
Definition 5.16. A prepared basis X of (2.1) has a generalized zero at a ∈ (0, ∞)_q iff X(a) is noninvertible, or X*(a/q)P(a/q)X(a) is invertible but X*(a/q)P(a/q)X(a) ≤ 0.
Lemma 5.17. If a prepared basis X of (2.1) has a generalized zero at a ∈ (0, ∞)_q, then there exists a vector γ ∈ C^n such that x = Xγ is a nontrivial prepared solution of (5.7) with a generalized zero at a.

Proof. The proof follows from Definitions 5.15 and 5.16.
Theorem 5.18. If the vector equation (5.7) is disconjugate on [a, ∞)_q, then the matrix equation (2.1) has a solution U which is q-recessive at ∞ with U(t) invertible for t ∈ [qa, ∞)_q.
Proof. Let X be the solution of the initial value problem

    LX = 0,    X(a) = 0,    D_qX(a) = P^{-1}(a);

then X is a prepared solution of (2.1). If X is not invertible on [qa, ∞)_q, then there exists a t_1 > a such that X(t_1) is singular. But then there exists a nontrivial vector δ ∈ C^n such that X(t_1)δ = 0. If x(t) := X(t)δ, then x is a nontrivial prepared solution of (5.7) with x(a) = 0, x(t_1) = 0, a contradiction of disconjugacy. Hence X is invertible on [qa, ∞)_q. We next claim that

(5.11)    X*(t)P(t)X(qt) > 0 for all t ∈ [qa, ∞)_q;

if not, there exists t_2 ∈ [qa, ∞)_q such that X*(t_2)P(t_2)X(qt_2) is not positive definite.
It follows that there exists a nontrivial vector γ such that x(t) := X(t)γ is a nontrivial prepared vector solution of Lx = 0 with a generalized zero at qt_2. Using the initial condition for X, however, we have x(a) = 0, another generalized zero, a contradiction of the assumption that the vector equation (5.7) is disconjugate on [a, ∞)_q. Thus (5.11) holds. Define the matrix function V as in Theorem 3.5; then V is a prepared solution of LV = 0 with W(X, V) = I. Note that V is also invertible on [qa, ∞)_q, so that by the reduction of order theorem again, X can be written in the form (3.2) relative to V. Consequently, the sums involving Υ^{-1} and Ξ^{-1} are related, where

    Υ(s) := V*(s)P(s)V(qs),    Ξ(s) := X*(s)P(s)X(qs) > 0.
Since the second factor is strictly increasing by (5.11) and bounded below by I, the first factor is positive definite and strictly decreasing, ensuring the existence of a limit; in other words, we have

    Σ_{s∈[qa,t)_q} s Υ^{-1}(s) ≤ I,
and V is a q-dominant solution of (2.1) at ∞. Set

    U(t) := (q − 1)V(t) Σ_{s∈[t,∞)_q} s Υ^{-1}(s).

By Theorem 5.6, U is a q-recessive solution of (2.1) at ∞. Since V is invertible on [qa, ∞)_q, and the difference in brackets is positive definite on [qa, ∞)_q, we get that U is invertible on [qa, ∞)_q as well, and the conclusion of the theorem follows.
Corollary 5.19. Assume the vector equation (5.7) is disconjugate on [a, ∞)_q, and K is a constant Hermitian matrix. Let U, V be the matrix solutions of LX = 0 satisfying the initial conditions

    U(a) = I, D_qU(a) = P^{-1}(a)K, and V(a) = 0, D_qV(a) = P^{-1}(a).

Then V is invertible on [qa, ∞)_q, V is a q-dominant solution of (2.1) at ∞, and

    lim_{t→∞} V^{-1}(t)U(t)

exists as a Hermitian matrix.

Proof. By Theorem 5.18, the matrix equation (2.1) has a solution U which is q-recessive at ∞ with U(t) invertible for t ∈ [qa, ∞)_q. Thus (iii) of the connection theorem, Theorem 5.9, holds; by (i), then, V is a q-dominant solution of (2.1) at ∞, and by (ii), lim_{t→∞} V^{-1}(t)U(t) exists as a Hermitian matrix. Since V(a) = 0 and the vector equation (5.7) is disconjugate, V has no further generalized zeros; in particular, V is invertible on [qa, ∞)_q.
Proof. By Theorem 5.18, disconjugacy of (5.7) implies the existence of a prepared, invertible matrix solution of (2.1). Thus by Theorem 5.12, it suffices to show that (5.7) has the unique two-point property on [a, ∞)_q. To this end, assume u, v are solutions of Lx = 0, and there exist points s_1, s_2 ∈ (0, ∞)_q such that a ≤ s_1 < s_2 and u(s_1) = v(s_1), u(s_2) = v(s_2).
If s_2 = qs_1, then u and v satisfy the same initial conditions and u ≡ v by uniqueness; hence we assume s_2 > qs_1. Setting x = u − v, we see that x solves Lx = 0 with x(s_1) = x(s_2) = 0. Since Lx = 0 is disconjugate and x is a prepared solution with two generalized zeros, it must be that x ≡ 0 on [a, ∞)_q. Consequently, u = v and the two-point property holds.
Corollary 5.21 (Construction of the Recessive Solution). Assume the vector equation (5.7) is disconjugate on [a, ∞)_q. For each s ∈ (a, ∞)_q, let U(t, s) be the solution of the boundary value problem

    LU(•, s) = 0,    U(a, s) = I,    U(s, s) = 0.

Then the solution U with U(a) = I which is q-recessive at ∞ is given by U(t) = lim_{s→∞} U(t, s), and

(5.12)    U*(t)P(t)U(qt) > 0,    t ∈ [a, ∞)_q.

Proof. By Theorem 5.18 and Theorem 5.20, LX = 0 has a q-recessive solution and Lx = h has the unique two-point property. The conclusion then follows from Theorem 5.14, except for (5.12). From the boundary condition U(s, s) = 0 and the fact that Lx = 0 is disconjugate, it follows that U*(t, s)P(t)U(qt, s) > 0 holds on [a, s/q)_q. Again from Theorem 5.14, U(t) = lim_{s→∞} U(t, s), so that U is invertible on [a, ∞)_q and (5.12) holds.
Remark 5.22. In an analogous way we could analyze the related (formally) self-adjoint quantum (h-difference) system

(5.13)    D̄_h(P D_hX)(t) + Q(t)X(t) = 0,

where the real scalar h > 0 and the h-derivatives are given, respectively, by the difference quotients

    (D_hy)(t) = [y(t + h) − y(t)] / h    and    (D̄_hy)(t) = [y(t) − y(t − h)] / h = (D_hy)(t − h).
In the case where invertible solutions of (5.13) exist, their characterization as h-dominant and/or h-recessive solutions at infinity can be developed in parallel with the previous results on the q-equation (2.1). As q approaches 1 in the limit, or as h approaches zero in the limit, we can recover results from classical ordinary differential equations.

Future Directions
In this section we lay the groundwork for possible further exploration of the nonhomogeneous equation by introducing the Pólya factorization for the self-adjoint matrix q-difference operator L defined in (2.1), which in turn leads to a variation of parameters result.

Theorem 6.1 (Pólya Factorization). If (2.1) has a prepared solution X > 0 (positive definite) on an interval I ⊂ (0, ∞)_q such that X*(t)P(t)X(qt) > 0 for t ∈ I, then for any matrix function Y defined on (0, ∞)_q we have on the interval I a Pólya factorization

    LY = M_1* D̄_q{M_2 D_q(M_1Y)},    M_1(t) := X^{-1}(t) > 0,    M_2(t) := X*(t)P(t)X(qt) > 0.
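The factorization can be checked numerically in the scalar case; the sketch below (an editorial illustration with P ≡ 1, Q ≡ 0, the prepared positive solution X(t) = t, and an arbitrary Y) compares LY computed directly against the factored form:

```python
q = 2.0
Dq  = lambda y, t: (y(q * t) - y(t)) / ((q - 1) * t)      # forward q-derivative
Dqb = lambda y, t: (y(t) - y(t / q)) / ((1 - 1 / q) * t)  # backward q-derivative

# scalar check with P ≡ 1, Q ≡ 0 and the prepared positive solution X(t) = t:
# M1(t) = X⁻¹(t) = 1/t, M2(t) = X(t)P(t)X(qt) = q t²
Y = lambda t: t ** 3 + 2.0
L_direct = lambda t: Dqb(lambda s: Dq(Y, s), t)           # LY = D̄_q(D_qY)
M1 = lambda t: 1.0 / t
M2 = lambda t: q * t ** 2
inner = lambda t: M2(t) * Dq(lambda s: M1(s) * Y(s), t)
L_factored = lambda t: M1(t) * Dqb(inner, t)              # M1* D̄_q{M2 D_q(M1 Y)}

for t in (1.0, 2.0, 4.0):
    assert abs(L_direct(t) - L_factored(t)) < 1e-9
```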
Proof. Assume X > 0 is a prepared solution of (2.1) on I ⊂ (0, ∞)_q such that M_2 > 0 on I, and let Y be a matrix function defined on (0, ∞)_q. Then X is invertible, and the factorization follows by direct computation for M_1 and M_2 as defined in the statement of the theorem.

Theorem 6.2 (Variation of Parameters). Let H be an n × n matrix function defined on [a, ∞)_q. If the homogeneous matrix equation (2.1) has a prepared solution X with X(t) invertible for t ∈ [a, ∞)_q, then the nonhomogeneous equation LY = H has a solution Y given by

    Y(t) = X(t)X^{-1}(a)Y(a) + (q − 1)X(t) Σ_{s∈[a,t)_q} s (X*(s)P(s)X(qs))^{-1} W(X, Y)(a)
           + [(q − 1)² / q] X(t) Σ_{s∈[a,t)_q} s [ (X*(s)P(s)X(qs))^{-1} Σ_{τ∈(a,s]_q} τ X*(τ)H(τ) ].
Proof. Let Y be a matrix function defined on (0, ∞)_q, and assume X is a prepared solution of (2.1) invertible on [a, ∞)_q. As in Theorem 6.1, we factor LY to get

    H(t) = LY(t) = X^{*-1}(t) D̄_q[X*(t)P(t)X(qt) D_q(X^{-1}Y)(t)].

Multiplying by sX* and summing over (a, t]_q, we arrive at

    X*(t)P(t)X(qt) D_q(X^{-1}Y)(t) − W(X, Y)(a) = (1 − 1/q) Σ_{s∈(a,t]_q} s X*(s)H(s),

where W(X, Y)(a) = X*(a)P(a)X(qa) D_q(X^{-1}Y)(a) since X is prepared. This leads to

    D_q(X^{-1}Y)(t) = (X*(t)P(t)X(qt))^{-1} [W(X, Y)(a) + (1 − 1/q) Σ_{s∈(a,t]_q} s X*(s)H(s)],

which is then multiplied by (q − 1)s and summed over [a, t)_q to obtain the form for Y given in the statement of the theorem. Clearly the right-hand side of the form of Y above reduces to Y(a) at a, and since X is an invertible prepared solution, by Theorem 2.15 the quantum derivative reduces to D_qY(a) at a.

Corollary 6.3. Let H be an n × n matrix function defined on [a, ∞)_q. If the homogeneous matrix equation (2.1) has a prepared solution X with X(t) invertible for t ∈ [a, ∞)_q, then the nonhomogeneous initial value problem