Real positive solutions of operator equations AX = C and XB = D

Abstract: In this paper, we mainly consider the operator equations AX = C and XB = D in the framework of Hilbert spaces. A new representation of the reduced solution of AX = C is given by a convergent operator sequence. The common solutions and common real positive solutions of the system of the two operator equations AX = C and XB = D are studied. Detailed representations of these solutions are provided, which extend the classical closed-range case with a short proof.


Introduction
The study of operator equations has a long history. Khatri and Mitra [10], Wu and Cain [15], Xiong and Qin [16] and Yuan et al. [18] studied the matrix equations AX = C and XB = D in matrix algebras. Dajić and Koliha [1], Douglas [4] and Liang and Deng [12] investigated these equations in operator spaces. Xu et al. [5,13,14,17] and Fang and Yu [6] studied these equations for adjointable operators on Hilbert C*-modules. The famous Douglas range inclusion theorem plays a key role in the existence of solutions of the equation AX = C. Many scholars have discussed the existence and the general formulae of self-adjoint solutions (resp. positive or real positive solutions) of one equation, or of common solutions of two equations. In the finite-dimensional case, Groß gave necessary and sufficient conditions for the matrix equation AX = C to have a real positive solution in [7]. However, a detailed formula for the real positive solutions of this equation was not fully provided there. Dajić and Koliha [2] first provided a general form of the real positive solutions of the equation AX = C in Hilbert space under certain conditions. Recently, the real positive solutions of AX = C were considered in [6,12] for adjointable operators, with the corresponding operator A not necessarily having closed range. However, these formulae for real positive solutions still carry some additional restrictions. In [16], the authors considered equivalent conditions for the existence of common real positive solutions of the pair of matrix equations AX = C, XB = D in matrix algebras and offered partial common real positive solutions.
Let H and K be complex Hilbert spaces. We denote the set of all bounded linear operators from H into K by B(H, K), and write B(H) when H = K. For an operator A, we denote by A*, R(A), R(A)‾ and N(A) the adjoint, the range, the closure of the range and the null space of A, respectively. An operator A ∈ B(H) is said to be positive (written A ≥ 0) if ⟨Ax, x⟩ ≥ 0 for all x ∈ H. Note that a positive operator has a unique positive square root, and |A| := (A*A)^{1/2} is the absolute value of A. Let A† be the Moore-Penrose inverse of A, which is bounded if and only if R(A) is closed [9]. An operator A is called real positive if A + A* ≥ 0. Let P_M denote the orthogonal projection onto a closed subspace M. The identity operator on a closed subspace M is denoted by I_M, or simply by I if there is no confusion.

In this paper, we focus on the problem of characterizing the real positive solutions of the operator equations AX = C and XB = D, with the corresponding operators A and B not necessarily having closed ranges, in an infinite-dimensional Hilbert space. Our current goal is three-fold. Firstly, by using the polar decomposition and strong operator convergence, a completely new representation of the reduced solution F of the operator equation AX = C is given by

F = s.o.-lim_{n→∞} ((1/n)I + |A|)^{-1} U_A* C,

where A = U_A|A| is the polar decomposition of A. This solution satisfies F = A†C when R(A) is closed. Furthermore, necessary and sufficient conditions for the existence of real positive solutions and detailed formulae for the real positive solutions of the equation AX = C are obtained, which improves the related results in [2,7,12]. Some comments on the reduced solution and the real positive solutions are given. Next, we discuss common solutions of the system of the two operator equations AX = C and XB = D. Necessary and sufficient conditions for the existence of common solutions and detailed representations of the general common solutions are provided, which extends the classical closed-range case with a short proof.
Furthermore, we consider the problem of finding sufficient conditions which ensure that this system has a real positive solution, as well as a formula for these real positive solutions.
Finally, two examples are provided. As shown by Example 4.1, the system of equations AX = C and XB = D has common real positive solutions under the given sufficient conditions. Example 4.2 shows that a gap is unfortunately contained in the original paper [16], where the authors gave two equivalent conditions for the existence of common real positive solutions of this system in matrix algebras. It remains an open question to give an equivalent condition for the existence of common real positive solutions of AX = C and XB = D in an infinite-dimensional Hilbert space.

The reduced solution of AX = C
In general, A†C is the reduced solution of the equation AX = C if R(A) is closed. Liang and Deng gave an expression of the reduced solution, denoted A‡C, through the spectral measure of positive operators, where the range of A may not be closed; but sometimes A‡ is only a symbol, since the spectral integral ∫_0^∞ λ‡ dE_λ (see [12]) may be divergent. In this section, we give a new representation of the reduced solution of AX = C by an operator sequence. We begin with some lemmas.

Lemma 2.1. Let T ∈ B(H) be a positive operator. Then {(1/n)I + T} converges to T in operator norm. If T is also invertible and positive, then {((1/n)I + T)^{-1}} converges to T^{-1} in operator norm.

For an operator T ∈ B(H), T has a unique polar decomposition T = U_T|T|, where U_T is a partial isometry with P_{R(T*)‾} = U_T*U_T ([8]). Denote

T_n = ((1/n)I_H + |T|)^{-1}

for each positive integer n. In [11], the authors paid attention to the operator sequence {T_n|T|} over Hilbert C*-modules. It was shown there that {T_n|T|} converges to P_{R(T*)‾} in the strong operator topology when T is a positive operator. Here we have some related results, as follows.

Lemma 2.2. Let T ∈ B(H). Then:
(i) {T_n|T|} converges to P_{R(T*)‾} in the strong operator topology;
(ii) T† = s.o.-lim_{n→∞} T_n U_T* if R(T) is closed.

Proof. We only prove statement (ii). If R(T) is closed, then R(T*) is also closed. It is natural that H = R(T*) ⊕ N(T) = R(T) ⊕ N(T*). Thus T and |T| have the following matrix forms, respectively,

T = [ T_11  0 ; 0  0 ] : R(T*) ⊕ N(T) → R(T) ⊕ N(T*)   (2.1)

and

|T| = [ |T_11|  0 ; 0  0 ] on R(T*) ⊕ N(T),   (2.2)

where T_11 ∈ B(R(T*), R(T)) is invertible. Then T† has the matrix form

T† = [ T_11^{-1}  0 ; 0  0 ] : R(T) ⊕ N(T*) → R(T*) ⊕ N(T).   (2.3)

And from (2.2), it is easy to get that

(1/n)I_H + |T| = [ (1/n)I + |T_11|  0 ; 0  (1/n)I ].   (2.4)

By the invertibility of this diagonal operator matrix and the above matrix form (2.4), we have

T_n = [ ((1/n)I + |T_11|)^{-1}  0 ; 0  nI ].   (2.5)

Because the operator sequence {(1/n)I + |T_11|} converges to |T_11| and (1/n)I + |T_11| is invertible in B(R(T*)) for each n, the operator sequence {((1/n)I + |T_11|)^{-1}} converges to |T_11|^{-1} by Lemma 2.1. Moreover, the partial isometry U_T has the matrix form

U_T = [ U_11  0 ; 0  0 ]   (2.6)

with respect to the space decomposition H = R(T*) ⊕ N(T), where U_11 is a unitary from R(T*) onto R(T). Then T_11 = U_11|T_11| and T_11^{-1} = |T_11|^{-1}U_11* by T = U_T|T| and the matrix forms (2.1), (2.2) and (2.6).
From the matrix forms (2.5) and (2.6), we have

T_n U_T* = [ ((1/n)I + |T_11|)^{-1}U_11*  0 ; 0  0 ].

Since the operator sequence {((1/n)I + |T_11|)^{-1}U_11*} converges to |T_11|^{-1}U_11* = T_11^{-1}, we obtain that the operator sequence {T_n U_T*} converges to T† in the strong operator topology.

The operator sequence {T_n U_T*} may be divergent when R(T) is not closed. The following is an example.

Example 2.3. Let {e_1, e_2, · · · } be an orthonormal basis of an infinite-dimensional Hilbert space H. Define an operator T by T e_k = (1/k)e_k for every k. Then U_T = I and T_n = ((1/n)I + T)^{-1}, since T is a positive operator. It is easy to get that T_n e_k = (kn/(n + k))e_k for any k. Suppose x = Σ_{k≥1} k^{-3/2} e_k. Then x ∈ H with ∥x∥² = Σ_{k≥1} k^{-3} < ∞. By direct computation, we have

∥T_n x∥² = Σ_{k≥1} (1/k)(n/(n + k))²,

which increases to Σ_{k≥1} 1/k as n → ∞. It is shown that the number sequence {∥T_n x∥} is divergent, since the harmonic series Σ_{k≥1} 1/k is divergent. This implies that the operator sequence {T_n} is divergent in the strong operator topology.
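Although the lemma concerns operators on an infinite-dimensional Hilbert space, every matrix has closed range, so statement (ii) can be illustrated numerically. The following sketch (with a hypothetical rank-deficient 3 × 3 matrix, not part of the proof) builds the polar factors of T from its SVD and compares T_n U_T* with the Moore-Penrose inverse.

```python
import numpy as np

# Hypothetical rank-2 matrix; on C^3 every operator has closed range.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

# Polar decomposition T = U |T| via the SVD T = W diag(s) V^*.
W, s, Vh = np.linalg.svd(T)
r = int((s > 1e-10).sum())             # rank of T
U = W[:, :r] @ Vh[:r, :]               # partial isometry with U^* U = P_{R(T^*)}
absT = Vh.conj().T @ np.diag(s) @ Vh   # |T| = (T^* T)^{1/2}

def T_n(n):
    """T_n = ((1/n) I + |T|)^{-1}."""
    return np.linalg.inv(np.eye(3) / n + absT)

# For large n, T_n U^* approximates the Moore-Penrose inverse T†.
approx = T_n(10**6) @ U.conj().T
print(np.allclose(approx, np.linalg.pinv(T), atol=1e-4))
```

For a matrix with tiny nonzero singular values (a range that is "nearly not closed"), the convergence is correspondingly slow, which is consistent with the divergence phenomenon of Example 2.3.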
Lemma 2.4. (Douglas [4]) Let A, C ∈ B(H). The following three conditions are equivalent:
(1) R(C) ⊆ R(A);
(2) there exists λ > 0 such that CC* ≤ λAA*;
(3) the equation AX = C has a solution X ∈ B(H).

Theorem 2.5. Let A, C ∈ B(H) be such that the equation AX = C has a solution. Then the unique reduced solution F can be formulated by

F = s.o.-lim_{n→∞} ((1/n)I + |A|)^{-1} U_A* C,

where A = U_A|A| is the polar decomposition of A.

Proof. Assume that X is a solution of AX = C; then |A|X = U_A*C. Multiplying by ((1/n)I + |A|)^{-1} on the left, it follows that

((1/n)I + |A|)^{-1}|A|X = ((1/n)I + |A|)^{-1}U_A*C.

By statement (i), the left-hand side converges to PX in the strong operator topology. This shows that

PX = s.o.-lim_{n→∞} ((1/n)I + |A|)^{-1}U_A*C =: F,

so F is a solution of AX = C with R(F) ⊆ R(A*)‾. If F′ is another reduced solution, then A(F − F′) = 0 and R(F − F′) ⊆ R(A*)‾ = N(A)^⊥. It is shown that F − F′ = 0 and then F = F′. That is to say, F is the unique reduced solution of AX = C. As a consequence of the above Theorem 2.5, Lemma 2.4 and Theorem 3.5 in [12], the following result holds.
Theorem 2.7. Let A, C ∈ B(H) be such that the equation AX = C has a solution. Then the general solutions can be represented by

X = F + (I − P)Y,  Y ∈ B(H),

where F is the reduced solution of AX = C and P = P_{R(A*)‾}. Particularly, if R(A) is closed, then the general solutions can be represented by

X = A†C + (I − A†A)Y,  Y ∈ B(H).


Real positive solutions of AX = C and XB = D

Firstly, some preliminaries are given. For a given operator A ∈ B(H), denote P = P_{R(A*)‾} in the sequel.
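In the closed-range case the parametrization of the general solutions is easy to verify numerically. The sketch below uses hypothetical matrices A and C with R(C) ⊆ R(A): here F = A†C is the reduced solution and P = A†A is the projection onto R(A*).

```python
import numpy as np

# Hypothetical data: C = A @ X_true guarantees R(C) ⊆ R(A), i.e. solvability.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
C = A @ np.ones((3, 2))

F = np.linalg.pinv(A) @ C          # reduced solution F = A† C
P = np.linalg.pinv(A) @ A          # P = A† A, the projection onto R(A*)
Y = np.random.default_rng(0).normal(size=(3, 2))

X = F + (np.eye(3) - P) @ Y        # one member of the general-solution family
print(np.allclose(A @ X, C))       # every such X solves A X = C
```

Conversely, every solution X arises from this formula (take Y = X − F), since X − F maps into N(A) = R(I − P).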
Lemma 3.1. Let A, C ∈ B(H) be such that Eq (3.1) has a solution and let F be the reduced solution. Then the following statements hold:
(i) CA* + AC* ≥ 0 if and only if FP + PF* ≥ 0;
(ii) R(CA* + AC*) is closed if and only if R(FP + PF*) is closed.

Proof. Assume that CA* + AC* ≥ 0. Since C = AF, AP = A and F = PF, for every x ∈ H we have

⟨(FP + PF*)A*x, A*x⟩ = ⟨(CA* + AC*)x, x⟩ ≥ 0.

Since FP + PF* = P(FP + PF*)P and R(A*) is dense in R(A*)‾, this shows that FP + PF* ≥ 0. Conversely, if FP + PF* ≥ 0, it is elementary to get that CA* + AC* = A(FP + PF*)A* ≥ 0.

Lemma 3.2. Let A = (A_ij)_{2×2} ∈ B(H ⊕ K) be a positive 2 × 2 operator matrix. The following two statements hold: A_11 ≥ 0 and A_22 ≥ 0; and A_21 = A_12*.
Proof. Assume that statements (i) and (ii) hold. Then a direct block computation shows that A ≥ 0. Conversely, if A ≥ 0, it follows from Lemma 3.2 that A_11 ≥ 0, and we also get the corresponding convergence in the strong operator topology. Thus statements (i) and (ii) hold.

Corollary 3.4. Let A = (A_ij)_{2×2} be such that A_11 has closed range. Then A ≥ 0 if and only if the conditions of Lemma 3.3 hold with the strong limits replaced by expressions in A_11†.

Proof. Since A_11 is an operator with closed range, the strong limits appearing in Lemma 3.3 reduce to expressions in A_11†, which gives the conclusion.

Theorem 3.5. Let A, C ∈ B(H) be such that Eq (3.1) has a solution. Then Eq (3.1) has a real positive solution if and only if CA* + AC* ≥ 0.

Proof. Set H = R(A*)‾ ⊕ N(A). Then F has the following matrix form

F = [ F_11  F_12 ; 0  0 ],

since R(F) ⊆ R(A*)‾. The operator Y in the representation X = F + (I − P)Y of Theorem 2.7 can be expressed as Y = (Y_ij)_{2×2}. It follows that

X = [ F_11  F_12 ; Y_21  Y_22 ]  and  X + X* = [ F_11 + F_11*  F_12 + Y_21* ; Y_21 + F_12*  Y_22 + Y_22* ].   (3.7)

"⇒": If X is a real positive solution, then F_11 + F_11* ≥ 0 from Lemma 3.3 and the matrix form (3.7) of X + X*. And so PF* + FP ≥ 0. According to Lemma 3.1, CA* + AC* ≥ 0.
"⇐": Assume that CA* + AC* ≥ 0. Then PF* + FP ≥ 0 and so F_11 + F_11* ≥ 0. Denote X_0 = F − (I − P)F*. Then AX_0 = C and

X_0 + X_0* = F + F* − (I − P)F* − F(I − P) = FP + PF* ≥ 0.

That is, X_0 is a real positive solution of AX = C. Next, we analyse the general form of the real positive solutions. Suppose that X is a real positive solution of AX = C. Then X + X* has the matrix form (3.7), and from Lemma 3.3 there exists an operator for which X can be represented in the form (3.3). On the contrary, for any operator Y ∈ B(H) and Z ∈ ReB_+(H) such that X has the form (3.3), it is clear that AX = C. Let Y = (Y_ij)_{2×2} and Z = (Z_ij)_{2×2} with respect to the space decomposition H = R(A*)‾ ⊕ N(A). Particularly, if R(CA* + AC*) is closed, then R(F_11 + F_11*) is closed by Lemma 3.1. Suppose that X is a real positive solution of AX = C; then X + X* has the matrix form (3.7), and from Corollary 3.4 there exists an operator Y_12 ∈ B(N(A), R(A*)‾) giving the corresponding representation. On the contrary, for any Y ∈ B(H) and Z ∈ ReB_+(H) such that X has the form (3.2), it is clear that AX = C.

Proof. The necessity is clear. We only need to prove sufficiency. From R(C) ⊆ R(A) and Theorem 2.7, a solution X of Eq (3.1) can be represented accordingly.

Proof. Combining Theorems 3.5 and 3.6 with statements (i) and (ii), the system of equations AX = C, XB = D has common solutions. For any Y ∈ ReB_+(H), the corresponding X is a real positive solution of AX = C. We only need to prove that there exists Y ∈ ReB_+(H) such that this X is also a solution of XB = D. The operators A, F and B have operator matrix forms with respect to the above space decompositions, where A_11 is an injective operator from R(A*)‾ into R(A)‾ with dense range, and the operator D has a corresponding matrix form; X then has a matrix form as in (3.14). By the matrix forms (3.11) and (3.14) of B and X, respectively, and combining AD = CB = AFB with the matrix forms (3.9)–(3.12), it is immediate that A_11(F_11 B_11 + F_12 B_21) = A_11 D_11.
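The construction X_0 = F − (I − P)F* from the sufficiency argument can be checked on a small hypothetical example. The data below satisfy R(C) ⊆ R(A) and CA* + AC* ≥ 0, and the sketch confirms that X_0 solves AX = C and is real positive.

```python
import numpy as np

# Hypothetical 2x2 data with R(C) ⊆ R(A) and C A^* + A C^* >= 0.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
C = np.array([[1.0, 1.0],
              [0.0, 0.0]])

Apinv = np.linalg.pinv(A)
F = Apinv @ C                      # reduced solution
P = Apinv @ A                      # projection onto R(A*)

# Real-positivity criterion: here CA* + AC* = [[4, 0], [0, 0]] >= 0.
crit = C @ A.T + A @ C.T

X0 = F - (np.eye(2) - P) @ F.T     # candidate real positive solution
sym = X0 + X0.T                    # real part of X0; positive semidefinite here
print(np.allclose(A @ X0, C),
      np.all(np.linalg.eigvalsh(sym) >= -1e-12))
```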

Examples
Some examples are given in this section to demonstrate that our results are valid. It is also shown that a gap is unfortunately contained in the original paper [16] concerning the existence of real positive solutions. However, the statements (i)-(iii) in Theorem 3.7 are only sufficient conditions for the existence of common real positive solutions of system (3.2). The following is an example.
Example 4.2. Let A, B, C, D be 2 × 2 complex matrices, defined together with X_0 as follows. By direct computation, AX_0 = C, X_0 B = D and X_0 + X_0* ≥ 0, so X_0 is a common real positive solution of the system of equations AX = C, XB = D. But in this case, Theorem 3.7 does not work. Therefore, combining the above result with formula (4.1), we obtain that (4.5) shows that Theorems 2.1 and 2.2 in [16] do not work, respectively. Actually, the conditions in [16] are also only sufficient conditions for the existence of common real positive solutions of Eq (3.2). So an open question remains here.
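As a generic numerical companion to the examples of this section (with hypothetical data, not the matrices of [16]), the following sketch builds a system that is guaranteed to have a common real positive solution by choosing X_0 first, and checks the compatibility condition AD = CB used in Theorem 3.7.

```python
import numpy as np

# Hypothetical data: choose a real positive X0 first, then define
# C = A X0 and D = X0 B, so X0 is a common solution by construction.
rng = np.random.default_rng(1)
X0 = np.eye(3) + 0.1 * rng.normal(size=(3, 3))  # small perturbation keeps X0 + X0^T >= 0
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
C, D = A @ X0, X0 @ B

# X0 is real positive and the compatibility condition A D = C B holds.
print(np.all(np.linalg.eigvalsh(X0 + X0.T) >= 0),
      np.allclose(A @ D, C @ B))
```

The converse direction is the open question discussed above: in general, AD = CB together with the solvability of each equation does not by itself guarantee a common real positive solution.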

Conclusions
In this work, a new representation of the reduced solution of AX = C is given by a strongly convergent operator sequence. This result provides us with a method to discuss the general solutions of Eq (3.2). By making full use of block operator matrix methods, the formula for the real positive solutions of AX = C is obtained in Theorem 3.5, which is the basis for finding common real positive solutions of Eq (3.2). Through Example 4.1, it is demonstrated that Theorem 3.7 is useful for finding some common real positive solutions. But unfortunately, it is complicated to describe all the common real positive solutions by using the method of Theorem 3.7; perhaps other techniques are needed. This will be our next problem to solve.