CODIAGONALIZATION OF MATRICES AND EXISTENCE OF MULTIPLE HOMOCLINIC SOLUTIONS∗

The purpose of this paper is twofold. First, we use Lagrange's method and the generalized eigenvalue problem to study systems of two quadratic equations. We find exact conditions under which the system can be codiagonalized and can have up to 4 solutions. Second, we use this result to study homoclinic bifurcations for a periodically perturbed system. The homoclinic bifurcation is determined by 3 bifurcation equations, which, to the lowest order, are 3 quadratic equations that can be simplified by codiagonalization of the quadratic forms. We find that up to 4 transverse homoclinic orbits can be created near the degenerate homoclinic orbit.

When µ ≠ 0, (1.4) may have bifurcations near γ. The case d = 1 has been extensively studied. In this case the broken homoclinic orbit γ can be restored by adjusting the parameter τ, as in [5]. Hale [6] proposed to study the degenerate cases where d ≥ 2. The case d = 2 has been considered in [14]. The purpose of the present work is to treat the case d = 3. Using the method of Lyapunov–Schmidt reduction, we derive a system of bifurcation functions Hj, 1 ≤ j ≤ 3, the zeros of which correspond to the persistence of homoclinic solutions for (1.4). The last equation H3 = 0 can be dealt with by selecting the parameter τ as usual, while Hj = 0, j = 1, 2, can be reduced to a system of quadratic equations. By Lagrange's method and the codiagonalization of quadratic forms, we show that the quadratic system can have up to 4 solutions. Finally, if the solutions to the quadratic system are nondegenerate, then the bifurcation functions have nondegenerate zeros and the perturbed system has transverse homoclinic orbits.
Codiagonalization of matrices has been used by Jibin Li and Lin [12] to study systems of coupled KdV equations. It may also be useful when studying 2×2 systems of hyperbolic conservation laws with quadratic nonlinearities [19, 20], based on a personal conversation with Shearer. In [14], a method based on circular and hyperbolic rotations was used to codiagonalize two quadratic forms. The new method in this paper is easier to use if one wants to find conditions for the existence of 4 solutions to quadratic systems.
Given a symmetric real matrix B ∈ R^{2×2}, the quadratic form (x1, x2)B(x1, x2)^T becomes (y1, y2)M^T BM(y1, y2)^T, where (x1, x2)^T = M(y1, y2)^T. The symmetric transformation described above is also called the congruence diagonalization. It should not be confused with the similarity transformation of B, which is defined by M^{−1}BM. For example, the matrix diag(λ1, −λ2), λj > 0, can be reduced to diag(1, −1) by the matrix M = diag(1/√λ1, 1/√λ2), which is a symmetric reduction, not a similarity reduction.

In §2, we introduce notations to be used in this paper. We also present the reduced bifurcation functions which, to the lowest degree, represent the breaking of the homoclinic orbits under the periodic perturbations. In §3 we derive the bifurcation equations by using the Lyapunov–Schmidt reduction. To the lowest degree, they reduce to three quadratic equations. In §4, we introduce Lagrange's method and generalized eigenvalue problems to study solutions of two quadratic forms. The cases when one equation is elliptic are considered in §4.1. The other cases, when one equation is hyperbolic and none is elliptic, are considered in §4.2. In §4.3, we present the method of codiagonalization of two quadratic equations based on the cases studied in §4.1 and §4.2. In §5, we derive the reduced bifurcation function F(τ). We show that a simple zero of F corresponds to the existence of a homoclinic solution near γ. In §6, we present an example showing that our conditions work consistently.
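The distinction between congruence and similarity can be checked in a few lines. The following sketch uses the illustrative matrix B = diag(4, −9) (not from the paper) and verifies that the congruence M^T BM with M = diag(1/√4, 1/√9) yields diag(1, −1), something a similarity transform could never do, since similarity preserves the eigenvalues 4 and −9.

```python
import math

# Congruence (symmetric) reduction M^T B M, illustrated on B = diag(4, -9),
# reduced to diag(1, -1) by M = diag(1/sqrt(4), 1/sqrt(9)).
# Illustrative numbers; the paper works with a general symmetric 2x2 matrix B.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

B = [[4.0, 0.0], [0.0, -9.0]]                           # diag(lambda_1, -lambda_2)
M = [[1 / math.sqrt(4.0), 0.0], [0.0, 1 / math.sqrt(9.0)]]

C = mat_mul(transpose(M), mat_mul(B, M))                # congruence: M^T B M
assert abs(C[0][0] - 1.0) < 1e-9 and abs(C[1][1] + 1.0) < 1e-9
assert abs(C[0][1]) < 1e-12 and abs(C[1][0]) < 1e-12

# A similarity transform M^{-1} B M preserves the eigenvalues 4 and -9,
# so it cannot produce diag(1, -1); congruence is the right notion here.
```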

Notations and preliminaries
Notations. Since y = 0 is a hyperbolic equilibrium, it follows from [17] that (1.3) has exponential dichotomies on J = R± respectively. In particular, there exist projections to the stable and unstable subspaces, P s + P u = I, and constants m, C > 0 such that the dichotomy estimates (2.1) hold for t ≥ s on J. For the same m > 0, define the Banach space Z with the norm ∥z∥ = sup_{t∈R} |z(t)|e^{m|t|}. The linear variational system will be considered in Z. The adjoint operator for L is L*ψ := ψ̇ + (Df(γ))*ψ. From the theory of homoclinic bifurcations [17], L : Z → Z is a Fredholm operator with index 0, and the range of L is orthogonal to the null space of L*. For integers p, q = 1, 2 and i = 1, 2, 3, we define some Melnikov-type integrals [16] that will be used later. We look for conditions under which (1.4) can have homoclinic solutions near γ. Let β = (β1, β2)^T. We shall use the reduced bifurcation functions M i : R² × R × R → R defined below. To the lowest degree, (2.5) describes the jump discontinuity x(0−) − x(0+) along the direction of ψ i(0); see [13].
We need to solve the following system of quadratic equations (2.6).
Recall that L(u) = u̇ − Df(γ)u in the Banach space Z. As in [17], we define the subspace of Z which consists of the range of L in Z.

Consider the nonhomogeneous equation (3.3).
Let Z⊥ be the subspace of Z consisting of z(t) with z(0) ⊥ γ̇(0). If h ∈ Z, then by the variation of constants there exists an operator K : Z → N(L)⊥ such that Kh is a solution of (3.3). Clearly, the general bounded solution of (3.3) is obtained by adding to Kh an element of N(L). As in [17], one can prove that P satisfies the following properties. We now apply the Lyapunov–Schmidt reduction to (3.1). Applying P and (I − P) to (3.1), we find that (3.1) is equivalent to the system (3.4)–(3.5). First, we solve (3.4) for z ∈ Z⊥. Then the bifurcation equations are obtained by substituting the solution z into (3.5).
Through direct calculations, we can prove the following Lemma.
The bifurcation functions have the following properties: the quadratic functions M i : R² × R × R → R given by (2.5) represent the lowest order terms of H i(β, τ, µ). We are thus led to solving the system of quadratic equations (2.6).

Codiagonalization and solutions of two quadratic equations
We say that the quadratic equation F(x, y) = h, h ≠ 0, is of elliptic (or hyperbolic, or line) type if the graph of the equation is an ellipse (or two hyperbolic branches, or two lines). The graph of two symmetric parallel lines is a special case of two hyperbolic branches, where the normal direction to the two lines replaces the real axis of a hyperbola. The hyperbolic rotation is well known for its use in relativity theory [2]. We shall define various transformations that keep a quadratic form F(x, y) = ax² + 2bxy + cy² invariant. Consider the Hamiltonian system (4.1) and its solution mapping T(t).
Definition 4.1. The solution mapping T(t) for (4.1) that maps the ray −−→OP1 to −−→OP2, where P2 = T(t)P1, will be called the quadratic rotation by the angle t. It will also be called the circular, elliptic or hyperbolic rotation if the graph of F(x, y) = h is a circle, ellipse or hyperbola. The angle θ from −−→OP1 to −−→OP2 is defined to be t; if no such t exists, then the angle between the two rays is undefined.
Just as with polar coordinates, if P0 is a point on the major axis (or semi-real, or semi-imaginary axis), then we define the angle coordinate of P0 to be θ(P0) = 0. For any other P ∈ R², we define its angle coordinate θ(P) to be the angle from −−→OP0 to −−→OP. The matrix R(t) with rows (cos t, −sin t) and (sin t, cos t) defines the circular rotation in the counter-clockwise direction.
The matrix T(t) with rows (cosh t, sinh t) and (sinh t, cosh t) defines the standard hyperbolic rotation in R². However, given two rays in R², the hyperbolic angle between them can be undefined. More precisely, the two lines y = ±x divide R² into 4 sectors S1, S2, S3, S4. If (x0, y0)^T ∈ S1 or S3, then there exists r0 > 0 or r0 < 0 such that (x0, y0) = r0(cosh(t0), sinh(t0)), and the hyperbolic rotation draws a hyperbola in S1 or S3. Similarly, if (x0, y0)^T ∈ S2 or S4, then there exists an r0 > 0 or r0 < 0 such that (x0, y0) = r0(sinh(t0), cosh(t0)), and the hyperbolic rotation draws a hyperbola in S2 or S4. Notice that the circular and standard hyperbolic rotations satisfy T(t1)T(t2) = T(t1 + t2). If T(t) is the solution mapping for (4.1), we always have F(T(t)P) = F(P). (1) If the vector field of (4.1) corresponding to F(x, y) satisfies ẋ = 0 on the x-axis, or ẏ = 0 on the y-axis, then the matrix B is diagonal.

We now study the system of two quadratic equations (4.2). (H5): Assume that the two quadratic forms F1(x, y), F2(x, y) are linearly independent, i.e., the two matrices B1, B2 are linearly independent.
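As a quick numerical sanity check of these rotations (a minimal sketch, not the paper's notation), the standard hyperbolic rotation (x, y) ↦ (x cosh t + y sinh t, x sinh t + y cosh t) preserves the quadratic form x² − y² and composes additively in the angle t:

```python
import math

# The standard hyperbolic rotation preserves F(x, y) = x^2 - y^2, just as
# the circular rotation preserves x^2 + y^2.  Illustrative check only.

def hyp_rotate(x, y, t):
    c, s = math.cosh(t), math.sinh(t)
    return (c * x + s * y, s * x + c * y)

x0, y0 = 3.0, 1.0
for t in (0.5, -1.2, 2.0):
    x1, y1 = hyp_rotate(x0, y0, t)
    assert abs((x1 ** 2 - y1 ** 2) - (x0 ** 2 - y0 ** 2)) < 1e-6

# T(t1) T(t2) = T(t1 + t2): the rotations compose additively in the angle.
xa, ya = hyp_rotate(*hyp_rotate(x0, y0, 0.7), 0.4)
xb, yb = hyp_rotate(x0, y0, 1.1)
assert abs(xa - xb) < 1e-9 and abs(ya - yb) < 1e-9
```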
Consider the conditional maximum/minimum problems: maximize or minimize F1(x, y) subject to the constraint F2(x, y) = h2. We look for critical points of the Lagrangian L(x, y, λ) = F1(x, y) − λ(F2(x, y) − h2). To find the critical points Pj = (xj, yj)^T, j = 1, 2, of the Lagrangian, we solve the generalized eigenvalue/eigenvector problem (4.5): B1 P = λ B2 P.
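For 2×2 symmetric matrices the generalized eigenvalue problem reduces to the scalar quadratic det(B1 − λB2) = 0. The sketch below uses illustrative matrices (not from the paper), with B2 = I so that the generalized eigenvalues are the ordinary eigenvalues of B1; it assumes det B2 ≠ 0.

```python
import math

# Generalized eigenvalues of B1 v = lambda B2 v for symmetric 2x2 matrices:
# expand det(B1 - t B2) = a t^2 + b t + c and solve the quadratic.
# Sketch under the assumption det(B2) != 0 and real roots.

def gen_eigenvalues(B1, B2):
    a = B2[0][0] * B2[1][1] - B2[0][1] ** 2                      # det B2
    b = 2 * B1[0][1] * B2[0][1] - B1[0][0] * B2[1][1] - B1[1][1] * B2[0][0]
    c = B1[0][0] * B1[1][1] - B1[0][1] ** 2                      # det B1
    disc = b * b - 4 * a * c
    return sorted(((-b - math.sqrt(disc)) / (2 * a),
                   (-b + math.sqrt(disc)) / (2 * a)))

B1 = [[2.0, 1.0], [1.0, 3.0]]   # F1(x, y) = 2x^2 + 2xy + 3y^2 (illustrative)
B2 = [[1.0, 0.0], [0.0, 1.0]]   # F2(x, y) = x^2 + y^2 (elliptic constraint)

lam1, lam2 = gen_eigenvalues(B1, B2)
# With B2 = I these are the ordinary eigenvalues of B1: (5 +- sqrt(5))/2.
assert abs(lam1 - (5 - math.sqrt(5)) / 2) < 1e-9
assert abs(lam2 - (5 + math.sqrt(5)) / 2) < 1e-9
```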

Solutions of (4.2) if one equation is elliptic
In this subsection we assume that F2(x, y) = h2 is of elliptic type. Hence b_2^2 − a_2c_2 < 0. By changing ψ i to −ψ i, we can change B^(i) to −B^(i). Hence for elliptic type quadratic forms we may assume a_2 > 0, c_2 > 0 and h_2 > 0.
(i) (EE) type: Assume that F 1 reaches the minimum r 1 at P 1 and the maximum r 2 at P 2 . System (4.2) has 4 solutions if r 1 < h 1 < r 2 .
(iii) (LE) type: In this case, the graph of F1(x, y) = h1 consists of two parallel lines symmetric about the origin. The eigenvalues are λ1 = 0, with eigenvector P1 on which F1 = 0, and λ2 ≠ 0, with eigenvector P2 that solves the conditional minimum problem with F1 = r1 < 0, or the maximum problem with F1 = r2 > 0. System (4.2) has 4 solutions if r1 < h1 < 0 or 0 < h1 < r2.
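The (EE) count can be illustrated numerically. In the sketch below (illustrative forms, not from the paper) the constraint is F2 = x² + y² = 1 and F1 = 2x² + 3y², so the constrained minimum and maximum are r1 = 2 and r2 = 3; counting sign changes of F1 − h1 along the constraint curve recovers 4 solutions for r1 < h1 < r2 and none for h1 > r2.

```python
import math

# (EE) sketch: on the circle x^2 + y^2 = 1, F1 = 2x^2 + 3y^2 = 2 + sin^2(t)
# ranges over [r1, r2] = [2, 3].  We count solutions of F1 = h1 on the
# constraint by sign changes of F1 - h1.  Purely illustrative numbers.

def count_solutions_on_circle(h1, n=10000):
    changes = 0
    prev = 2.0 - h1                                  # value at t = 0
    for k in range(1, n + 1):
        t = 2 * math.pi * k / n
        cur = 2 * math.cos(t) ** 2 + 3 * math.sin(t) ** 2 - h1
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

assert count_solutions_on_circle(2.3) == 4   # r1 < h1 < r2: four solutions
assert count_solutions_on_circle(3.5) == 0   # h1 > r2: no real solutions
```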

Solutions of (4.2) if both equations are hyperbolic
For a given h2 ≠ 0, the hyperbola defined by F2(x, y) = h2 does not encircle the origin as the ellipse in §4.1 does. Observe that for the (HH) type systems, the equilibrium (0, 0) of (4.1) is hyperbolic and there exist stable and unstable eigenspaces for the equilibrium (0, 0). Before giving a counterexample, we introduce the following definition. Let L^(i)_j, i = 1, 2, be the stable and unstable eigenspaces of the equilibrium for (4.1), where (a, b, c) = (a_j, b_j, c_j). They are called the asymptotes for F_j(x, y) = h_j. The asymptotes L^(i)_j, i = 1, 2, divide R² into four sectors. We say (x, y) is in the positive (or negative) sector if F_j(x, y) > 0 (or F_j(x, y) < 0).

Example 4.3 (A Counterexample). Assume that the asymptotes of the two hyperbolas are alternating. Following the curve F2(x, y) = h2, the values of F1 are bounded neither below nor above. Therefore, the conditional max/min problem as in §4.1 is not well posed. It is easy to see that in such a case, (4.2) has exactly 2 solutions and the two quadratic forms cannot be codiagonalized.
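A concrete instance of alternating asymptotes (an illustrative choice, since the original display is not reproduced here) is F1 = x² − y², with asymptotes y = ±x, and F2 = 2xy, with the coordinate axes as asymptotes. Writing (x + iy)² = F1 + iF2 shows that the system F1 = h1, F2 = h2 is z² = h1 + ih2, which has exactly 2 solutions, while F1 is unbounded along the curve F2 = h2:

```python
import cmath

# Counterexample sketch: since (x + iy)^2 = (x^2 - y^2) + i(2xy), the system
# x^2 - y^2 = h1, 2xy = h2 is z^2 = h1 + i h2 and has exactly 2 solutions.
# Along 2xy = h2, F1 = x^2 - y^2 is unbounded, so no conditional max/min
# problem is well posed.  Illustrative values below.

h1, h2 = 1.0, 2.0
w = cmath.sqrt(complex(h1, h2))
solutions = [(w.real, w.imag), (-w.real, -w.imag)]
for x, y in solutions:
    assert abs(x * x - y * y - h1) < 1e-9
    assert abs(2 * x * y - h2) < 1e-9

# F1 grows without bound on the curve 2xy = h2: take x large, y = h2/(2x).
vals = [x * x - (h2 / (2 * x)) ** 2 for x in (1.0, 10.0, 100.0)]
assert vals[2] > vals[1] > vals[0]
```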
Although the general max/min problem is not well posed, for each of the cases listed below it is not hard to find a well-posed conditional max/min problem.
Consider 4 sub-cases, as depicted in the four figures (for simplicity we omit the parts of the graphs that can be obtained by symmetry): (HH i) The two sectors of F1 > 0 are inside the sectors of F2 > 0. (HH ii) The two sectors of F1 > 0 are inside the sectors of F2 < 0. (HH iii) The two sectors of F1 < 0 are inside the sectors of F2 > 0. (HH iv) The two sectors of F1 < 0 are inside the sectors of F2 < 0.

Theorem 4.2. For cases (HH i) and (HH ii), with h2 > 0 or h2 < 0, consider the conditional maximum problem (4.6). Then for (4.6), there exists r3 = max F1. System (4.2) has 4 solutions if h1 < r3. For cases (HH iii) and (HH iv), with h2 > 0 or h2 < 0, consider the conditional minimum problem (4.7). Then for (4.7), there exists r4 = min F1. System (4.2) has 4 solutions if r4 < h1. Finally, after rescaling the generalized eigenvectors (P1, P2), we can assume that P2 solves the max/min problem (4.6) or (4.7), and P1 solves the complementary max/min problem (4.6*) for cases (HH i) and (HH ii), or (4.7*) for cases (HH iii) and (HH iv).

Proof. Following the curve F2(x, y) = h2, or −h2, the range of F1(x, y) can be bounded above and unbounded below, or bounded below and unbounded above. Therefore, either a conditional max problem or a conditional min problem is well posed, but not both.

The (LH) case can be treated just like the (HH) case. Consider 4 sub-cases: (LH i) F1 ≤ 0 and the line F1 = 0 is inside the sectors of F2 > 0.
(LH ii) F1 ≤ 0 and the line F1 = 0 is inside the sectors of F2 < 0. (LH iii) F1 ≥ 0 and the line F1 = 0 is inside the sectors of F2 > 0. (LH iv) F1 ≥ 0 and the line F1 = 0 is inside the sectors of F2 < 0.

Theorem 4.3. For cases (LH i) and (LH ii), consider the conditional maximum problem (4.8). Then for (4.8), there exists r5 = max F1. System (4.2) has 4 solutions if h1 < r5. For cases (LH iii) and (LH iv), consider the conditional minimum problem (4.9). Then for (4.9), there exists r6 = min F1. System (4.2) has 4 solutions if r6 < h1. Finally, after rescaling the generalized eigenvectors (P1, P2), we can assume that P2 solves the max/min problem (4.8) or (4.9), and P1 solves the complementary max/min problem (4.8*) for cases (LH i) and (LH ii), or (4.9*) for cases (LH iii) and (LH iv).

For the (LL) case, if the two families of lines are not parallel, there are 4 solutions. To simplify the paper, we shall not discuss the (LL) case in the sequel.
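The (HH i) solution count can be checked numerically. The sketch below uses illustrative forms (not from the paper): F2 = x² − y², with asymptotes y = ±x, and F1 = x² − 4y², with asymptotes y = ±x/2, so the sectors where F1 > 0 lie inside the sectors where F2 > 0. Parametrizing the constraint x² − y² = 1 by (x, y) = (±cosh t, sinh t) gives F1 = 1 − 3 sinh² t, so r3 = max F1 = 1, and counting sign changes of F1 − h1 along both branches recovers 4 solutions when h1 < r3:

```python
import math

# (HH i) sketch: on the constraint x^2 - y^2 = 1 (two branches), the form
# F1 = x^2 - 4y^2 equals 1 - 3 sinh^2(t), bounded above by r3 = 1 and
# unbounded below.  We count solutions of F1 = h1 by sign changes.

def count_solutions(h1, T=5.0, n=20000):
    total = 0
    for sign in (1.0, -1.0):                 # the two branches x = +-cosh t
        prev = None
        for k in range(n + 1):
            t = -T + 2 * T * k / n
            x, y = sign * math.cosh(t), math.sinh(t)
            cur = x * x - 4 * y * y - h1
            if prev is not None and prev * cur < 0:
                total += 1
            prev = cur
    return total

assert count_solutions(0.5) == 4    # h1 < r3 = 1: four solutions
assert count_solutions(1.5) == 0    # h1 > r3: no real solutions
```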

Codiagonalization of two quadratic equations
In this subsection, we consider the codiagonalization of two quadratic equations, but not the coexistence of real valued solutions. The method is based on the generalized eigenvalue/eigenvector problems. For the cases listed in §4.1 and §4.2, we have the following results: Theorem 4.4. If one equation of the quadratic system is elliptic, then the two quadratic forms can always be codiagonalized by real valued matrices.
If both equations are hyperbolic, then in all the cases (HH i)–(HH iv), the two quadratic forms can be codiagonalized by real valued matrices.
If F1(x, y) is of line type and F2(x, y) is hyperbolic, then in all the cases (LH i)–(LH iv), the two quadratic forms can be codiagonalized by real valued matrices.
Proof. Let (P1, P2) be the generalized eigenvectors corresponding to the generalized eigenvalue problem (4.5). After rescaling, assume that P2 solves the max/min problem. In all three cases, there exists an angle θ0 such that T2(−θ0)P2 coincides with the major axis or the minor axis of the graph of F2(x, y) = h2.
Based on the results of the previous subsections, each generalized eigenvalue problem has two linearly independent eigenvectors; thus the eigenvalues are distinct. Therefore, in all the cases listed in Theorems 4.1, 4.2 and 4.3, the image of T2(−θ0)P1 coincides with the minor axis or the major axis of F2 = h2. Assume that under the rotation T2(θ0), the quadratic form F1(x, y) = h1 becomes F3(x, y) = h1 while F2(x, y) = h2 is unchanged. Now apply a circular rotation R(−θ0) to both F3(x, y) = h1 and F2(x, y) = h2 so that the major axis of F2(x, y) = h2 is mapped to the x-axis. The matrices that represent the two quadratic forms are then diagonal: clearly F2(x, y) = h2 has been diagonalized, and from Lemma 4.1, F1(x, y) = h1 has also been diagonalized.

Proof. If not, then the solutions of the system lie on the lines spanned by −−→OP1 or −−→OP2, where the graphs are tangent to each other. This contradicts the fact that the system has 4 solutions.
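The congruence produced by the generalized eigenvectors can be verified directly. The sketch below treats the elliptic case with illustrative matrices (B2 positive definite, not from the paper): it builds M from the eigenvectors of B1v = λB2v and checks that both M^T B1 M and M^T B2 M come out diagonal, since eigenvectors for distinct generalized eigenvalues are B2-orthogonal.

```python
import math

# Codiagonalization sketch (elliptic case, B2 positive definite): the
# generalized eigenvectors of B1 v = lambda B2 v are B2-orthogonal, so
# M = [v1 | v2] diagonalizes both forms by congruence M^T B M.

def congruence(M, B):
    MT = [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]
    def mul(A, C):
        return [[sum(A[i][k] * C[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mul(MT, mul(B, M))

B1 = [[2.0, 1.0], [1.0, 3.0]]   # illustrative
B2 = [[1.0, 0.0], [0.0, 4.0]]   # positive definite

# det(B1 - t B2) = 4t^2 - 11t + 5 = 0
disc = math.sqrt(11 ** 2 - 4 * 4 * 5)
lams = [(11 - disc) / 8, (11 + disc) / 8]

# eigenvector for each lambda: orthogonal to the first row of B1 - lambda B2
cols = []
for lam in lams:
    a, b = B1[0][0] - lam * B2[0][0], B1[0][1]
    cols.append((-b, a))
M = [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

D1, D2 = congruence(M, B1), congruence(M, B2)
for D in (D1, D2):
    assert abs(D[0][1]) < 1e-9 and abs(D[1][0]) < 1e-9   # both diagonal
```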
Proof. Observe that .
This completes the proof. By Theorem 5.2, the bifurcation function H = (H1, H2, H3) vanishes at (s(β^(j) + ω_j(s)), τ^(j) + η_j(s), s²µ). Then system (3.1) has the solution φ(β, τ, µ). Hence system (1.4) has 2 or 4 homoclinic solutions for 0 ≠ s ∈ I_j, 1 ≤ j ≤ 4 or 1 ≤ j ≤ 2. Clearly, the solutions converge to γ as s → 0. Moreover, the solutions are robust with respect to small perturbations of g. This alone shows that each of the solutions obtained is a transverse homoclinic solution. The same argument was used by Mallet-Paret in [15] to show that the homoclinic orbits in some delay equations are transverse.
Alternatively, it is shown in [13] that the functions H i, 1 ≤ i ≤ 3, as in (3.10), measure the gap between the unstable manifold at t = 0− and the stable manifold at t = 0+; at a nondegenerate zero the corresponding Jacobian is a nonsingular matrix. Therefore, the intersection of W^u(0) and W^s(0) is transverse.

An Example
Although the example given in this section is not from applications, it shows that the conditions given in this paper are consistent. Consider the system (6.1) and its unperturbed part. It is easy to check that 0 is an equilibrium and the eigenvalues of Df(0) are {−1, −1, −1, 1, 1, 1}. Hence 0 is a hyperbolic equilibrium. Let r(t) = sech(t) and γ = (0, 0, 0, 0, r, ṙ). By direct calculation, we see that γ is a homoclinic solution asymptotic to the origin.
Remark 6.1. The example is modified from [4]. At first glance, it may seem unnatural to consider a homoclinic orbit with x1 = x2 = x3 = x4 = 0 in R⁶. However, if γ(t) is a homoclinic orbit that can be embedded in a smooth 2D submanifold, then by a change of variables we may assume that γ(t), −∞ < t < ∞, lies in the (x5, x6)-plane.
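The text above does not reproduce the unperturbed equations, but the standard scalar model whose homoclinic solution is r(t) = sech(t) and whose linearization at the origin has eigenvalues ±1 is r̈ = r − 2r³; assuming that model (an assumption of this sketch, not a statement from the paper), the identity sech″(t) = sech(t) − 2 sech³(t) can be checked numerically:

```python
import math

# Assumed model for this sketch: r'' = r - 2 r^3, whose homoclinic solution
# is r(t) = sech(t).  We verify sech''(t) = sech(t) - 2 sech(t)^3 by a
# central finite difference, and the decay rate e^{-|t|} (eigenvalues +-1).

def sech(t):
    return 1.0 / math.cosh(t)

h = 1e-4
for t in (-2.0, 0.0, 0.7, 3.0):
    second_deriv = (sech(t + h) - 2 * sech(t) + sech(t - h)) / (h * h)
    assert abs(second_deriv - (sech(t) - 2 * sech(t) ** 3)) < 1e-6

# sech(t) ~ 2 e^{-|t|} as |t| -> infinity, consistent with eigenvalues +-1.
assert abs(sech(10.0) - 2 * math.exp(-10.0)) < 1e-8
```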