On the Yang-Baxter-like matrix equation for rank-two matrices

Abstract. Let $A = PQ^T$, where $P$ and $Q$ are two $n \times 2$ complex matrices of full column rank such that $Q^TP$ is singular. We solve the quadratic matrix equation $AXA = XAX$. Together with a previous paper devoted to the case that $Q^TP$ is nonsingular, this completely solves the matrix equation for any given matrix $A$ of rank two.


Introduction
Recently [1], the authors found all the solutions of the Yang-Baxter-like matrix equation $AXA = XAX$ (1), where the given $n \times n$ complex matrix $A = PQ^T$, with two $n \times 2$ matrices $P$ and $Q$, satisfies the assumption that the $2 \times 2$ matrix $Q^TP$ is nonsingular. In that situation, the eigenvalue 0 of $A$ is semi-simple with multiplicity $n-2$. This leaves unsolved the case that $Q^TP$ is singular, which makes the corresponding matrix equation much more challenging to solve. The purpose of this paper is to find all the solutions of (1) in the remaining case, in which the algebraic multiplicity of the eigenvalue 0 of $A$ is greater than $n-2$. Equation (1) has a format similar to that of the classical Yang-Baxter equation [2]. In 1967, Yang [3] first considered a one-dimensional quantum mechanical many-body problem with a combination of delta functions as the potential and found a factorization of the scattering matrix; the Yang-Baxter equation was obtained as a consistency property for the factorization. Then, in 1972, Baxter solved an eight-vertex model in statistical mechanics, resulting in a similar matrix equation, which, together with that from [3], was first called the Yang-Baxter equation by Russian researchers at the end of the 1970s. Since then the Yang-Baxter equation has been extensively investigated by mathematicians and physicists in knot theory, braid group theory, and quantum group theory over the past thirty years; see, e.g., [2, 4-8] and the references therein. In the past several years, the quadratic matrix equation (1) has been studied with linear algebra techniques; see, for example, [9-14].
Solving (1) is a tough job in general, since multiplying out both sides yields a system of $n^2$ quadratic equations in $n^2$ variables. By restricting the task to finding only the solutions that commute with $A$, several results have been obtained in [10] for matrices $A$ with special Jordan forms, and a more general result was proved in [15] for the class of diagonalizable matrices. However, no general result has been found so far for non-commuting solutions with arbitrary matrices. Thus, it is our hope to find all the solutions of (1) for general matrices $A$. When $A$ is a rank-one matrix, all the solutions of (1) were found in [13]. The case of $A$ having rank two turns out to be much more tedious to analyze, and some special cases were solved in our previous paper [1]. In the current paper we continue the study of the rank-two case to find all the solutions for the remaining Jordan structure of $A$.
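Before turning to the structural analysis, it may help to see the equation in action. The following is our own quick numerical sanity check, not part of the paper's argument: for any square matrix $A$, both $X = 0$ and $X = A$ satisfy $AXA = XAX$, since both sides become $0$ and $A^3$, respectively. We verify this for a concrete rank-two $A = PQ^T$ with arbitrarily chosen $4 \times 2$ blocks, using exact integer arithmetic.

```python
# Sanity check (our sketch): X = 0 and X = A always solve A X A = X A X.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

P = [[1, 0], [2, 1], [0, 3], [1, 1]]   # arbitrary 4x2 of full column rank
Q = [[1, 1], [0, 2], [1, 0], [2, 1]]   # arbitrary 4x2 of full column rank
A = matmul(P, transpose(Q))            # a rank-two 4x4 matrix A = P Q^T

def yb_residual(A, X):
    """Return the entrywise difference A X A - X A X."""
    lhs = matmul(matmul(A, X), A)
    rhs = matmul(matmul(X, A), X)
    return [[lhs[i][j] - rhs[i][j] for j in range(len(A))]
            for i in range(len(A))]

ZERO = [[0] * 4 for _ in range(4)]
assert yb_residual(A, ZERO) == ZERO    # X = 0 solves the equation
assert yb_residual(A, A) == ZERO       # X = A solves the equation
```

Finding the non-trivial solutions beyond these two is exactly what the structural analysis of the following sections accomplishes.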
Matrices $A$ of rank at most two in equation (1) have appeared in the classical Yang-Baxter equation (see, e.g., the references in [2, 5]). For example, two classes of $4 \times 4$ matrices with certain values of the given parameters have been studied in [16-18] for completely integrable systems and inverse scattering problems. Each matrix in the second class is actually the tensor product $T \otimes I_2$ of a $2 \times 2$ matrix $T = [t_{ij}]$ with the $2 \times 2$ identity matrix $I_2$, which constitutes a basic operation in the context of the classical Yang-Baxter equation. Our complete solution of the quadratic matrix equation with a given rank-two matrix is expected to find use in such physical applications. We shall pick one matrix from (2) and apply our result to it in Section 5.
It is well known [10] that solving the Yang-Baxter-like matrix equation for a given matrix $A$ is equivalent to solving the same equation with $A$ replaced by any matrix similar to $A$, and all the solutions of the first equation are similar to those of the second with the same similarity matrix. Thus, solving (1) for the given $A$ can be reduced to solving the same equation with the Jordan form of $A$. Our approach in this paper follows this principle: since any matrix is similar to its simplest possible canonical form $J$, the Jordan form of the matrix, called a Jordan matrix here for simplicity, is to be found. For the purpose of the present paper, which supplements the work of [1], we shall solve the simpler Yang-Baxter-like matrix equation $JYJ = YJY$ (3) with $J$ satisfying the following conditions: (i) $J$ is a Jordan matrix of rank 2.
(ii) The eigenvalue 0 of $J$ has algebraic multiplicity at least $n-1$.
If we can find the solutions of (3) for such Jordan matrices, then the solutions of (1) are immediately available for any $A$ that is similar to $J$. It turns out that the Jordan matrices $J$ satisfying conditions (i) and (ii) above are such that $\Lambda$ is one of the three matrices in (5), all associated with the eigenvalue 0. When $\Lambda = \Lambda_2$, the first $n-2$ columns of $W$ are eigenvectors of $A$ associated with the eigenvalue 0, the column $w_{n-1}$ is a generalized eigenvector of degree 2 with respect to the eigenvalue 0, and the column $w_n$ is an eigenvector of $A$ associated with an eigenvalue $\lambda \neq 0$. If $\Lambda = \Lambda_3$, then the columns $w_1, \dots, w_{n-3}$ and $w_{n-1}$ of $W$ are eigenvectors of $A$, and the columns $w_{n-2}$ and $w_n$ of $W$ are generalized eigenvectors of degree 2, all with respect to the eigenvalue 0.
In the next three sections we look for the solutions of the Yang-Baxter-like matrix equation (1) when the corresponding Jordan matrix $J$ of $A$ is given by (4) with $\Lambda = \Lambda_1, \Lambda_2, \Lambda_3$, respectively; these cases will be referred to as type I, type II, and type III for the given matrix $A$. We present two examples illustrating our solution results in Section 5 and conclude in Section 6.
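The reduction principle described above can be illustrated numerically. The following is our own sketch: if $A = WJW^{-1}$ and $Y$ solves $JYJ = YJY$, then $X = WYW^{-1}$ solves $AXA = XAX$. Here $J$ is taken to be the $3 \times 3$ nilpotent Jordan block (one rank-two Jordan structure compatible with conditions (i)-(ii)), $Y = 5E_{13}$ (both $JY$ and $YJ$ vanish, so $Y$ solves the equation with $J$), and $W$ is an arbitrary unit upper-triangular matrix whose exact inverse is known.

```python
# Our sketch of the similarity transfer: solutions for J yield solutions for A.

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]        # Jordan block J_3(0)
Y = [[0, 0, 5], [0, 0, 0], [0, 0, 0]]        # Y = 5*E13: J*Y = Y*J = 0
W = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]        # invertible similarity matrix
W_inv = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]  # its exact inverse

# Y solves the equation with J (both sides are in fact zero here).
assert matmul(matmul(J, Y), J) == matmul(matmul(Y, J), Y)
assert matmul(W, W_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

A = matmul(matmul(W, J), W_inv)              # A is similar to J
X = matmul(matmul(W, Y), W_inv)              # candidate solution for A

lhs = matmul(matmul(A, X), A)
rhs = matmul(matmul(X, A), X)
assert lhs == rhs                            # X = W Y W^{-1} solves (1)
```

Since the computation is in exact integers, the assertions confirm the transfer of solutions under similarity without rounding concerns.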

Solutions when A is of type I
In this and the subsequent sections we let $A = PQ^T$ in (1) with $P = [p_1, p_2]$ and $Q = [q_1, q_2]$ of rank 2 such that $\det Q^TP = 0$. Let $J$ be the Jordan form of $A$ given by (4), where either the diagonal zero sub-matrix $0$ is $(n-3) \times (n-3)$ and the sub-matrix $\Lambda = \Lambda_1$ or $\Lambda_2$ as defined by (5), or the zero sub-matrix $0$ is $(n-4) \times (n-4)$ and $\Lambda = \Lambda_3$ in (5). As pointed out in Section 1, it suffices to find all the solutions of equation (3) with $J$ the Jordan form of $A$, so we focus on solving (3).
Let $Y$ be partitioned in the same way as $J$ into the $2 \times 2$ block matrix (7), where $M$ is $(n-3) \times (n-3)$ or $(n-4) \times (n-4)$ and $U = [u_1, u_2, u_3]$ or $[u_1, u_2, u_3, u_4]$; then (3) with $J$ given by (4) is equivalent to the system (8). In the current section we assume that $J = \mathrm{diag}(0, \Lambda_1)$; the other two cases, $J = \mathrm{diag}(0, \Lambda_2)$ and $J = \mathrm{diag}(0, \Lambda_3)$, will be investigated in Sections 3 and 4, respectively. So we solve (8) with $\Lambda = \Lambda_1$. Because of the special zero structure of $\Lambda_1$, the two unknown vectors $u_3$ and $v_1$ do not appear in the system at all, so they are always free variables in the solutions. In addition, the first equation is independent of $K$, and the last equation of (8) is itself a Yang-Baxter-like matrix equation of small size when $\Lambda = \Lambda_j$ with $j = 1, 2, 3$; finding all of its solutions is the first step in solving (8). Writing $K = \begin{pmatrix} x & y & z \\ e & f & g \\ a & b & c \end{pmatrix}$, the equation $\Lambda_1 K \Lambda_1 = K \Lambda_1 K$ reads

$$\begin{pmatrix} 0 & e & f \\ 0 & a & b \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} ex+ay & fx+by & gx+cy \\ e^2+af & ef+bf & eg+cf \\ ae+ab & af+b^2 & ag+bc \end{pmatrix}.$$
Note that there is no $z$ in the above equation, so all the solutions have $z$ as an arbitrary parameter. The two equations ... Thus we have the remaining three equations ..., which give the first matrix $K_1$ in the lemma. When $x = 0$, we have $cy = 0$, so either $c \neq 0$ or $c = 0$, resulting in the other two matrices $K_2$ and $K_3$, respectively.
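The claim that $z$ never enters the reduced equation can be spot-checked numerically. The following is our own sketch, assuming $\Lambda_1$ is the $3 \times 3$ nilpotent Jordan block (the rank-2 structure whose only eigenvalue 0 has algebraic multiplicity 3): changing the entry $z = K_{13}$ alone leaves the residual $\Lambda_1 K \Lambda_1 - K \Lambda_1 K$ unchanged.

```python
# Spot-check (our sketch): the entry z of K is absent from the small equation.

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

LAM1 = [[0, 1, 0],
        [0, 0, 1],
        [0, 0, 0]]                        # assumed form of Lambda_1

def residual(K):
    """Entrywise Lambda_1*K*Lambda_1 - K*Lambda_1*K."""
    lhs = matmul(matmul(LAM1, K), LAM1)
    rhs = matmul(matmul(K, LAM1), K)
    return [[lhs[i][j] - rhs[i][j] for j in range(3)] for i in range(3)]

K = [[2, 3, 7],                           # entries x, y, z
     [1, 4, 6],                           # entries e, f, g
     [5, 8, 9]]                           # entries a, b, c
K_perturbed = [row[:] for row in K]
K_perturbed[0][2] = -1000                 # change only z

assert residual(K) == residual(K_perturbed)   # z does not appear
```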
By Lemma 2.1, all the solutions of the last equation of (8) are $K_1$, $K_2$, and $K_3$. Substituting these matrices into the first three equations of (8) and solving them respectively, we obtain the following result.

Theorem 2.2. Suppose $A = PQ^T$ is such that its Jordan form is given by (4). Then all the solutions of (1) are $X = WYW^{-1}$, where $W$ is as given by (6) and $Y$ is partitioned as (7), in which $M$ is an arbitrary $(n-3) \times (n-3)$ matrix such that ...
Here $u^H$ denotes the conjugate transpose of $u$.
Proof. We solve the first three equations of (8) with $K = K_1$, $K_2$, and $K_3$ in succession. When $K = K_1$, those equations of (8) are ... If the former is satisfied, then $u_2 = yu_1/x$ and the second equation is satisfied. This gives the first solution matrix of (10). In the case that $v_3 = 0$, either $u_2 = yu_1/x$ or $c = 0$ by the second equation. The former case still leads to the first matrix of (10), while the latter gives the second matrix of (10).
Thus $z$ is a free variable in the solutions. Since $c \neq 0$, (14) is equivalent to $u_2 = gu_1/c$ and $u_1(v_2 - gv_3/c)^T = 0$. Letting $u_1 = 0$ gives the first matrix of (11), while the second matrix of (11) is the consequence of $u_1 \neq 0$ and $v_2 - gv_3/c = 0$.
Letting $u_1 = 0$, $u_2 = 0$, and $y = 0$ gives the first matrix of (12), and the choice of $u_1 = 0$ and $v_3 = 0$ produces the second matrix of (12). If $u_1 \neq 0$, then the first equation of (15) implies that $v_2 = (u_1^Hu_2)v_3/\|u_1\|^2$, and the other two equations give that either $g = 0$ and $y = 0$, from which we get the first matrix of (13), or $g = 0$ and $y \neq 0$, which gives the second one.

Solutions when A is of type II
We now consider the second case, in which the Jordan form of the matrix $A$ is $J = \mathrm{diag}(0, \Lambda_2)$, so we solve (8) with $\Lambda = \Lambda_2$. Clearly, the structure of $\Lambda$ makes $u_2$ and $v_1$ free vectors in all the solutions of (8), and the last equation of (8) now reads

$$\begin{pmatrix} 0 & e & \lambda g \\ 0 & 0 & 0 \\ 0 & \lambda a & \lambda^2 c \end{pmatrix} = \begin{pmatrix} ex+\lambda az & fx+\lambda bz & gx+\lambda cz \\ e^2+\lambda ag & ef+\lambda bg & eg+\lambda cg \\ ae+\lambda ac & af+\lambda bc & ag+\lambda c^2 \end{pmatrix}. \qquad (16)$$

If $g = 0$, then the above reduces to $fx = -\lambda bz$. When $x = 0$, we get $K_4$ and $K_5$, and $x \neq 0$ implies $K_6$. In the case that $g \neq 0$, we obtain $K_7$. The other possibility, $c = \lambda$, implies $K_8$ and $K_9$. Now let $a \neq 0$. Then $e = -\lambda c$ by equating the $(3,1)$ entries of the two sides of (16). Since $ag = -e^2/\lambda$ from comparing the $(2,1)$ entries, it follows from equating the $(3,3)$ entries in (16) that $\lambda^2 c = ag + \lambda c^2 = -\lambda c^2 + \lambda c^2 = 0$. So $c = 0$ and then $e = 0$. By equating the $(3,2)$ entries, we have $f = \lambda$. Also $z = 0$ and $g = 0$ from comparing the $(1,1)$ and $(2,1)$ entries of (16). Finally, $x = 0$ by comparing the $(1,2)$ entries of (16), thus arriving at the corresponding solution matrix $K_{10}$.
Lemma 3.1 gives all the solutions $K_4, \dots, K_{10}$ of the last equation of (8) with $\Lambda = \Lambda_2$. Substituting them for $K$ in the system and solving the resulting three equations in succession, we are led to the next theorem.
Theorem 3.2. Suppose $A = PQ^T$ is such that its Jordan form is given by (4) with $\Lambda = \Lambda_2$. Then all the solutions of (1) are $X = WYW^{-1}$, where $W$ is given by (6) and $Y$ is partitioned as (7), in which $M$ is an arbitrary $(n-3) \times (n-3)$ matrix such that $Y = \dots$ with $u_3 \neq 0$.

Proof. When $K = K_4$, the system (8) reduces to ..., which does not contain $y$, so $y$ appears in all the solutions as a free variable. Let $u_3 = 0$. Then $v_3$ and $b$ can be arbitrary, and the above system becomes $u_1v_2^T = 0$ and $fu_1 = 0$. If $u_1 = 0$, then $v_2$ and $f$ are arbitrary, giving the first solution matrix of (17). If $v_2 = 0$, then $u_1$ is arbitrary and $f = 0$, which results in the second matrix of (17). Now let $u_3 \neq 0$. Then, from the above system, ... This gives the third matrix of (17).
Next suppose $K = K_5$. Then (8) is simply $u_1v_2^T = 0$, $fu_1 = 0$, $v_3 = 0$, since $\lambda \neq 0$ and $z \neq 0$. All the solutions are those in (18). Assume $K = K_6$. Then (8) can be written as ... Since $x \neq 0$, we can solve for $v_2$ from the last equation and substitute it into the first one, getting ... Letting $b = 0$ and $v_3 = 0$ above gives the first matrix of (19), and if $u_3 = zu_1/x$, then $b$ and $v_3$ are arbitrary, giving the second matrix of (19). For $K = K_7$, the system simplifies to $u_1v_2^T + u_3v_3^T = 0$, $u_1 = 0$, $v_2 + zv_3 = 0$, $v_3 = 0$, since $g \neq 0$, whose solutions are given by the third matrix of (19). When $K = K_8$, the system (8) is actually $u_3 = 0$, $v_3 = 0$, $u_1v_2^T = 0$, $fu_1 = 0$, since $\lambda \neq 0$. The solutions are the two matrices of (20). With $K = K_9$, the system becomes $v_2 = 0$, $v_3 = 0$, $u_1v_2^T + u_3v_3^T = 0$, $u_3 = 0$, since $\lambda \neq 0$ and $x \neq 0$, so the first matrix of (21) is obtained. Finally, for $K = K_{10}$, we have $u_1 = 0$, $v_2 = 0$, $u_1v_2^T + u_3v_3^T = 0$, $u_3 = 0$, since $a \neq 0$, thus obtaining the second matrix of (21).
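As in the type-I case, the free parameter of the small equation can be spot-checked numerically. The following is our own sketch; we assume, consistently with the eigenstructure described in Section 1, that $\Lambda_2$ consists of a $2 \times 2$ nilpotent Jordan block followed by the eigenvalue $\lambda \neq 0$, with $\lambda = 2$ as an arbitrary concrete choice. The check confirms that the entry $y = K_{12}$ never enters $\Lambda_2 K \Lambda_2 - K \Lambda_2 K$, matching the remark in the proof above that the reduced system does not contain $y$.

```python
# Spot-check (our sketch): the entry y of K is absent from the small equation.

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

LAM2 = [[0, 1, 0],
        [0, 0, 0],
        [0, 0, 2]]                        # assumed Lambda_2 with lambda = 2

def residual(K):
    """Entrywise Lambda_2*K*Lambda_2 - K*Lambda_2*K."""
    lhs = matmul(matmul(LAM2, K), LAM2)
    rhs = matmul(matmul(K, LAM2), K)
    return [[lhs[i][j] - rhs[i][j] for j in range(3)] for i in range(3)]

K = [[2, 3, 7],                           # entries x, y, z
     [1, 4, 6],                           # entries e, f, g
     [5, 8, 9]]                           # entries a, b, c
K_perturbed = [row[:] for row in K]
K_perturbed[0][1] = -1000                 # change only y

assert residual(K) == residual(K_perturbed)   # y does not appear
```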

Solutions when A is of type III
Unlike the previous two cases, which involve only the $3 \times 3$ matrices $\Lambda_1$ and $\Lambda_2$, the third case involves the $4 \times 4$ matrix $\Lambda_3$. The zero structure of $\Lambda_3$ ensures that $u_2$, $u_4$, $v_1$, $v_3$ are not present in the equations, so they are free vectors in the solutions. The following lemma gives all the solutions of the last equation of (8): ... with $a_{11} \neq 0$ and $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} \neq 0$.

Proof. Denote ... with $a_{31} \neq 0$ and $u_3 \neq a_{33}u_1/a_{31}$ in the left matrix, and $a_{11} \neq 0$ and $u_3 \neq a_{13}u_1/a_{11}$ in the second one, with $a_{11} \neq 0$ and $a_{33} \neq a_{13}a_{31}/a_{11}$.
Proof. Clearly the system (8) does not involve $u_2$, $u_4$, $v_1$, $v_3$, so they are free vectors in all the solutions. The first equation of (8) is $u_1v_2^T + u_3v_4^T = 0$. Now we solve the first three equations of (8) with $K = K_{11}, \dots, K_{17}$ separately.
When $K = K_{11}$, the system (8) reduces to ... Since $a_{33} \neq 0$, we have $v_4 = 0$, so the above simplifies to $u_1v_2^T = 0$, $a_{22}u_1 = 0$, $a_{24}u_1 = 0$. Letting $u_1 = 0$ gives the first matrix of (27); otherwise, $v_2 = 0$ and $a_{22} = a_{24} = 0$, leading to the second matrix of (27).
If $K = K_{13}$, then (8) simplifies to $v_4 = 0$, $u_1v_2^T = 0$, $a_{22}u_1 = 0$, $a_{24}u_1 = 0$. Then $u_1 = 0$ produces the left matrix of (28); otherwise, $v_2 = 0$ and $a_{22} = a_{24} = 0$, which gives the second matrix of (28).
With $K = K_{14}$, we have ... Since $a_{13} \neq 0$ and $a_{31} \neq 0$, we have $v_2 = v_4 = 0$, and the first matrix of (29) is obtained. The choice $K = K_{15}$ gives the system ... By the fourth equation, $v_2 = -a_{13}v_4/a_{11}$. Substituting this into the first one gives $(u_3 - a_{13}u_1/a_{11})v_4^T = 0$. Hence, if $u_3 \neq a_{13}u_1/a_{11}$, then $v_4 = 0$ and $a_{42} = a_{44} = 0$, and we get the right matrix of (30). Otherwise, $v_4$, $a_{42}$, $a_{44}$ are arbitrary, giving the left matrix of (31).
The last case is $K = K_{17}$. Then the system (8) reduces to ..., the solutions of which are given by the last matrix of (31).
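The free-vector claim for type III can also be spot-checked numerically. The following is our own sketch; we assume $\Lambda_3$ is the direct sum of two $2 \times 2$ nilpotent Jordan blocks, consistent with the generalized-eigenvector description in Section 1. At the level of the $4 \times 4$ small equation, the entries of $K$ in positions $(1,2)$, $(1,4)$, $(3,2)$, $(3,4)$ never enter $\Lambda_3 K \Lambda_3 - K \Lambda_3 K$, mirroring the free vectors $u_2$, $u_4$, $v_1$, $v_3$ in the full system (8).

```python
# Spot-check (our sketch): four entries of K drop out of the 4x4 small equation.

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

LAM3 = [[0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]                     # assumed Lambda_3 = J_2(0) + J_2(0)

def residual(K):
    """Entrywise Lambda_3*K*Lambda_3 - K*Lambda_3*K."""
    lhs = matmul(matmul(LAM3, K), LAM3)
    rhs = matmul(matmul(K, LAM3), K)
    return [[lhs[i][j] - rhs[i][j] for j in range(4)] for i in range(4)]

K = [[i * 4 + j + 1 for j in range(4)] for i in range(4)]   # entries 1..16
K_perturbed = [row[:] for row in K]
for i, j in [(0, 1), (0, 3), (2, 1), (2, 3)]:               # the free positions
    K_perturbed[i][j] += 100

assert residual(K) == residual(K_perturbed)   # those entries do not appear
```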