Bounded solutions for a class of Hamiltonian systems

We obtain solutions bounded for all $t$ of ordinary differential equations as limits of solutions of the corresponding Dirichlet problems on $(-L,L)$, as $L \rightarrow \infty$. We derive a priori estimates for the Dirichlet problems, uniform in $L$, allowing passage to the limit via a diagonal sequence. This approach carries over to the PDE case.


Introduction
For −∞ < t < ∞, we consider the equation

(1.1)    u'' − a(t)u^3 = f(t)

with continuous functions a(t) > 0 and f(t). Clearly, "most" solutions of (1.1) blow up in finite time, for both increasing and decreasing t. Using two-dimensional shooting, S.P. Hastings and J.B. McLeod [3] showed that the equation (1.1) has a solution uniformly bounded on (−∞, ∞), in the case of constant a(t) and uniformly bounded f(t). Their proof relied on a non-trivial topological property of the plane. We use a continuation method and a passage to the limit, as in P. Korman and A.C. Lazer [4], to obtain the existence of a solution of (1.1) uniformly bounded on (−∞, ∞), and to treat similar systems. We produce a bounded solution as a limit of solutions of the corresponding Dirichlet problems

(1.2)    u'' − a(t)u^3 = f(t) for t ∈ (−L, L), u(−L) = u(L) = 0,

as L → ∞. If f(t) is bounded, it follows by the maximum principle that the solutions of (1.2) satisfy an a priori estimate uniform in L, which allows passage to the limit.

We then use a variational approach motivated by P. Korman and A.C. Lazer [4] (see also P. Korman, A.C. Lazer and Y. Li [5]) to obtain a similar result for a class of Hamiltonian systems. Again, we consider the corresponding Dirichlet problem on (−L, L), which we solve by minimizing the corresponding functional, obtaining in the process an a priori estimate uniform in L, which allows passage to the limit as L → ∞.
We use a similar approach to obtain uniformly bounded solutions for a class of PDE systems of Hamiltonian type. The challenge there is to adapt the elliptic estimates to the case when only an L^∞ bound is known for the right-hand side.

A model equation
We consider the equation

(2.1)    u'' − a(t)u^3 = f(t), −∞ < t < ∞,

where the given functions a(t) ∈ C(R) and f(t) ∈ C(R) are assumed to satisfy

|f(t)| ≤ M, for all t ∈ R, and some constant M > 0,

and

a_0 ≤ a(t) ≤ a_1, for all t ∈ R, and some constants a_1 ≥ a_0 > 0.

Theorem 2.1. Under the above assumptions, the problem (2.1) has a classical solution uniformly bounded for all t ∈ R, i.e., |u(t)| ≤ K for all t ∈ R, and some K > 0. Such a solution is unique.
Proof. We shall obtain a bounded solution as a limit of solutions of the corresponding Dirichlet problems

(2.2)    u'' − a(t)u^3 = f(t) for t ∈ (−L, L), u(−L) = u(L) = 0,

as L → ∞. To prove the existence of solutions of (2.2), we embed it into a family of problems

(2.3)    u'' − a(t)u^3 = λf(t) for t ∈ (−L, L), u(−L) = u(L) = 0, 0 ≤ λ ≤ 1.

The solution u ≡ 0 at λ = 0, and the solutions at other λ, can be locally continued in λ by the implicit function theorem, since the corresponding linearized problem has only the trivial solution w(t) ≡ 0, as follows by the maximum principle. Multiplying (2.3) by u and integrating, we get a bound, uniform in λ, on the H^1 norm of the solution, which implies a bound in C^2 (using Sobolev's embedding and the equation (2.3); this bound depends on L). It follows that the continuation can be performed for all 0 ≤ λ ≤ 1. At λ = 1, we get the desired solution of (2.2).

We claim that there is a bound in C^2[−L, L], uniform in L, for any solution of (2.2): there is a constant K > 0 so that for all t ∈ [−L, L] and all L > 0,

(2.4)    |u(t)| ≤ K, |u'(t)| ≤ K, |u''(t)| ≤ K.

Indeed, if t_0 is a point of positive maximum of u(t), then u''(t_0) ≤ 0, and from the equation (2.2)

a(t_0)u^3(t_0) = u''(t_0) − f(t_0) ≤ M,

which gives us an upper bound on u(t_0). Arguing similarly at a point of negative minimum of u(t), we get a lower bound on u(t), and then conclude the first inequality in (2.4). From the equation (2.2) we get a uniform bound on |u''(t)|. Note that by Taylor's formula, for all t ∈ R we can write

(2.5)    u'(t) = u(t + 1) − u(t) − ∫_t^{t+1} (t + 1 − s)u''(s) ds,

from which we immediately deduce a uniform bound on |u'(t)|.

We now take a sequence L_j → ∞, and denote by u_j(t) ∈ H^1(−∞, ∞) the bounded solution of the problem (2.2) on the interval (−L_j, L_j), extended as zero outside of (−L_j, L_j). For all t_1 < t_2, writing

(2.6)    u_j(t_2) − u_j(t_1) = ∫_{t_1}^{t_2} u_j'(t) dt,

in view of (2.4) we conclude that the sequence {u_j(t)} is equicontinuous and uniformly bounded on every interval [−L_p, L_p]. By the Arzelà-Ascoli theorem, it has a uniformly convergent subsequence on every [−L_p, L_p]. So let {u^1_{j_k}} be a subsequence of {u_j} that converges uniformly on [−L_1, L_1].
Consider this subsequence on [−L_2, L_2] and select a further subsequence {u^2_{j_k}} of {u^1_{j_k}} that converges uniformly on [−L_2, L_2]. We repeat this procedure for every m, and then take the diagonal sequence {u^k_{j_k}}, which converges uniformly on any bounded interval to a function u(t).
Expressing (u^k_{j_k})'' from the equation (2.2), we see that the sequence {(u^k_{j_k})''} also converges uniformly on bounded intervals, to some function v(t). It follows that u(t) ∈ C^2(−∞, ∞) and u''(t) = v(t). Hence, we can pass to the limit in the equation (2.2), and conclude that u(t) solves this equation on (−∞, ∞). We have |u(t)| ≤ K on (−∞, ∞), proving the existence of a uniformly bounded solution.
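To illustrate this construction, the Dirichlet problems (2.2) can be solved numerically by a finite-difference Newton iteration. The sketch below uses the illustrative choices a(t) ≡ 1 and f(t) = cos t (these data, and all numerical parameters, are assumptions for the sake of the example, not part of the proof); solutions computed for increasing L agree on a fixed window, reflecting the uniform estimate (2.4) and the convergence as L → ∞.

```python
import numpy as np

def solve_dirichlet(L, n=400, a=lambda t: 1.0, f=np.cos,
                    tol=1e-10, max_iter=50):
    """Finite-difference Newton solver for the Dirichlet problem
    u'' - a(t) u^3 = f(t) on (-L, L), u(-L) = u(L) = 0."""
    t = np.linspace(-L, L, n + 2)          # grid including the endpoints
    h = t[1] - t[0]
    ti = t[1:-1]                           # interior nodes
    ai, fi = a(ti), f(ti)
    # three-point second-difference matrix (zero Dirichlet conditions)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    for _ in range(max_iter):
        F = D2 @ u - ai * u**3 - fi        # residual of the equation
        J = D2 - np.diag(3.0 * ai * u**2)  # Jacobian of the residual
        du = np.linalg.solve(J, -F)
        u += du
        if np.max(np.abs(du)) < tol:
            break
    return ti, u

# Solutions for increasing L agree on a fixed window, reflecting the
# uniform a priori bound and the convergence as L grows.
t5, u5 = solve_dirichlet(5.0)
t10, u10 = solve_dirichlet(10.0, n=800)
```

The Jacobian D2 − diag(3a u^2) is negative definite, so each Newton step is well defined, mirroring the role of the maximum principle in the continuation argument.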
Turning to the uniqueness, the difference w(t) of any two bounded solutions u(t) and ũ(t) of (2.1) would be a solution, bounded for all t, of the linear equation

(2.7)    w'' − a(t)(u^2 + uũ + ũ^2)w = 0.

Since the coefficient a(t)(u^2 + uũ + ũ^2) is non-negative, w(t) can have no points of positive local maximum or negative local minimum, and a bounded solution of (2.7) with this property must be identically zero, proving the uniqueness.

To prove the existence of solutions of (2.2), we could alternatively consider the corresponding variational functional

J(u) = ∫_{−L}^{L} [ (1/2)u'^2 + (1/4)a(t)u^4 + f(t)u ] dt on H^1_0(−L, L).

Since, by Young's inequality, for any ε > 0 we have |f(t)u| ≤ εu^4 + c(ε)|f(t)|^{4/3}, it follows that

J(u) ≥ c_3 ∫_{−L}^{L} (u'^2 + u^4) dt − c_4

for some c_3, c_4 > 0, so that J(u) is bounded from below, coercive and convex in u'. Hence J(u) has a minimizer in H^1_0(−L, L), which gives us a classical solution of (2.2); see, e.g., L. Evans [1]. However, to get an estimate of ∫_{−L}^{L} (u')^2 dt uniform in L (needed to conclude the equicontinuity in (2.6)), one would have to impose additional assumptions on f(t), giving a weaker result than above.

We now discuss the dynamical significance of the bounded solution established in Theorem 2.1; let us call it u_0(t). The difference of any two solutions of (2.1) satisfies (2.7). We see from (2.7) that any two solutions of (2.1) intersect at most once. Also from (2.7), we can expect u_0(t) to have a one-dimensional stable manifold as t → ±∞. It follows that u_0(t) provides the only possible asymptotic form of the solutions that remain bounded as t → ∞ (or t → −∞), while all other solutions become unbounded.
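The variational alternative can also be sketched numerically: one minimizes a discretization of J(u) over the interior grid values (again with the illustrative, assumed choices a(t) ≡ 1 and f(t) = cos t). The gradient of the discretized functional is exactly the finite-difference residual of the Euler-Lagrange equation u'' − a(t)u^3 = f(t), so a small gradient at the minimizer certifies an approximate solution of (2.2).

```python
import numpy as np
from scipy.optimize import minimize

def minimize_functional(L, n=200, a=lambda t: 1.0, f=np.cos):
    """Minimize a discretization of
        J(u) = int( u'^2/2 + a(t) u^4/4 + f(t) u ) dt
    over a grid analogue of H^1_0(-L, L); the Euler-Lagrange
    equation of J is u'' - a(t) u^3 = f(t)."""
    t = np.linspace(-L, L, n + 2)
    h = t[1] - t[0]
    ti = t[1:-1]
    ai, fi = a(ti), f(ti)

    def J(u):
        # forward differences, with zero boundary values appended
        up = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
        return h * (0.5 * np.sum(up**2)
                    + np.sum(0.25 * ai * u**4 + fi * u))

    def dJ(u):
        ue = np.concatenate(([0.0], u, [0.0]))
        lap = (ue[:-2] - 2.0 * u + ue[2:]) / h**2   # discrete u''
        return h * (-lap + ai * u**3 + fi)          # discrete E-L residual

    res = minimize(J, np.zeros(n), jac=dJ, method="L-BFGS-B",
                   options={"ftol": 1e-14, "gtol": 1e-9, "maxiter": 10000})
    return ti, res.x

ti, u_min = minimize_functional(5.0)
```

Since a(t) > 0, the discretized J is strictly convex, so the minimizer is unique, consistent with the uniqueness statement of Theorem 2.1.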
Next we show that the conditions of this theorem cannot be completely removed. If a(t) ≡ 0, then for f(t) = 1 all solutions of (2.1) are unbounded as t → ±∞. The same situation may occur in the case a(t) > 0, if f(t) is unbounded. Indeed, the equation

(2.8)    u'' − u^3 = 2cos t − t sin t − t^3 sin^3 t

has the solution u(t) = t sin t. Let ũ(t) be any other solution of (2.8). Then the difference w(t) = ũ(t) − u(t) satisfies w'' = (ũ^2 + ũu + u^2)w, and hence it cannot have points of positive local maximum or negative local minimum. But then ũ(t) cannot remain bounded as t → ±∞, since in such a case the function w(t) = ũ(t) − t sin t would be unbounded, with points of positive local maximum and negative local minimum. It follows that all solutions of (2.8) are unbounded as t → ±∞.
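Taking a(t) ≡ 1 for illustration, the right-hand side forced by the solution u(t) = t sin t can be checked symbolically; the computed f(t) contains the term −t^3 sin^3 t and is therefore unbounded, as this example requires.

```python
import sympy as sp

t = sp.symbols('t', real=True)
u = t * sp.sin(t)                        # the proposed solution of (2.8)
# With the illustrative choice a(t) = 1, the forced right-hand side is
f = sp.expand(sp.diff(u, t, 2) - u**3)   # 2 cos t - t sin t - t^3 sin^3 t

# f is unbounded: along t_k = (4k + 1) pi / 2 the cubic term dominates
sample = f.subs(t, 5 * sp.pi / 2)
```

The sampled value at t = 5π/2 is approximately −(5π/2) − (5π/2)^3, confirming that f grows without bound along this sequence.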
The approach of Theorem 2.1 is applicable to more general equations and systems. For example, we have the following theorem.
Theorem 2.2. Assume that the functions f(x, y) and g(x, y) are continuous on R^2 and satisfy the conditions (2.10) and (2.11). Assume that the condition (2.12) holds for some α ∈ R and all (x, y) ∈ R^2. Assume finally that the quadratic form (2.13) in (w, z) is positive semi-definite for all t, x and y. Then the problem (2.9) has a classical solution uniformly bounded for all t ∈ (−∞, ∞).
Proof. To prove the existence of solutions of the corresponding Dirichlet problem (2.14) on (−L, L), we embed it into a family of problems (2.15), obtained by multiplying the right-hand sides by a parameter λ ∈ [0, 1], as in (2.3). The implicit function theorem applies, since the corresponding linearized problem has only the trivial solution w = z = 0. This follows by multiplying the first equation by w, the second one by z, integrating, adding the results, and using the condition (2.13). Using (2.12), we obtain a bound, uniform in λ, on the H^1 norm of the solution of (2.15), so that the continuation can be performed for all 0 ≤ λ ≤ 1. At λ = 1, we obtain a solution of (2.14).
From the first equation in (2.14) and the assumption (2.10) we conclude the bound (2.4) on u(t), and a similar bound on v(t) follows from the second equation in (2.14) and the assumption (2.11), the same way as for a single equation. Using the equations in (2.14), we obtain uniform bounds on u'' and v'', and the uniform bounds on u' and v' then follow from (2.5). Hence, we have the estimates (2.4) for both u and v. We then let L → ∞ and pass to the limit along a diagonal sequence, as in the proof of Theorem 2.1, which concludes the proof of Theorem 2.2.

Example 1. Theorem 2.2 applies in case f(x, y) = x + x^{2n+1} + r(y), g(x, y) = y + y^{2m+1} + s(x), with positive integers n and m, assuming that the functions r(y) and s(x) are bounded and have small enough derivatives for all x and y, and the functions a_i(t) and h_i(t), i = 1, 2, satisfy the assumptions of the theorem.
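A numerical sketch for Example 1 (with n = m = 1): assuming the system (2.14) has the form u'' − a_1(t)f(u, v) = h_1(t), v'' − a_2(t)g(u, v) = h_2(t) with zero boundary values, and taking the illustrative data a_1 = a_2 = 1, r(y) = 0.1 sin y, s(x) = 0.1 cos x, h_1 = cos t, h_2 = sin t (all of these concrete choices are assumptions), a coupled Newton iteration solves the Dirichlet problem:

```python
import numpy as np

def solve_system(L, n=200, tol=1e-10, max_iter=60):
    """Newton solver for the coupled Dirichlet system (assumed form)
        u'' - f(u, v) = cos t,  v'' - g(u, v) = sin t,  u, v = 0 at -L, L,
    with f(x, y) = x + x^3 + 0.1 sin y and g(x, y) = y + y^3 + 0.1 cos x."""
    t = np.linspace(-L, L, n + 2)
    h = t[1] - t[0]
    ti = t[1:-1]
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    h1, h2 = np.cos(ti), np.sin(ti)
    u, v = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        F1 = D2 @ u - (u + u**3 + 0.1 * np.sin(v)) - h1
        F2 = D2 @ v - (v + v**3 + 0.1 * np.cos(u)) - h2
        # block Jacobian of the coupled residual (F1, F2)
        J = np.block([
            [D2 - np.diag(1.0 + 3.0 * u**2), -np.diag(0.1 * np.cos(v))],
            [np.diag(0.1 * np.sin(u)),       D2 - np.diag(1.0 + 3.0 * v**2)],
        ])
        d = np.linalg.solve(J, -np.concatenate([F1, F2]))
        u += d[:n]
        v += d[n:]
        if np.max(np.abs(d)) < tol:
            break
    return ti, u, v

ti, u, v = solve_system(5.0)
```

The off-diagonal blocks carry the factor 0.1, the numerical counterpart of the "small enough derivatives" assumption on r and s, which keeps the Jacobian invertible.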

Bounded solutions of Hamiltonian systems
We use a variational approach to obtain a similar result for a class of Hamiltonian systems. We shall be looking for uniformly bounded solutions u ∈ H^1(R; R^m) of the system

(3.1)    u_i'' − a(t)V_{z_i}(u) = f_i(t), −∞ < t < ∞, i = 1, . . . , m.

Here u_i(t) are the unknown functions, a(t) and f_i(t) are given functions on R, i = 1, . . . , m, and V(z) is a given function on R^m.
Theorem 3.1. Assume that a(t) ∈ C(R) satisfies a_0 ≤ a(t) ≤ a_1 for all t, and some constants 0 < a_0 ≤ a_1. Assume that f_i(t) ∈ C(R), with |f_i(t)| ≤ M for some M > 0, all i, and all t ∈ R. Also assume that V(z) ∈ C^1(R^m) satisfies the condition (3.2). Then the system (3.1) has a uniformly bounded solution u_i(t) ∈ H^1(R), i = 1, . . . , m (i.e., for some constant K > 0, |u_i(t)| < K for all t ∈ R, and all i).
Proof. As in the previous section, we approximate a solution of (3.1) by solutions of the corresponding Dirichlet problems (3.4) (i = 1, . . . , m), which we solve by minimizing the corresponding functional

J(u) = ∫_{−L}^{L} [ (1/2)|u'|^2 + a(t)V(u) + Σ_{i=1}^m f_i(t)u_i ] dt .

By the condition (3.2),

J(u) ≥ c_1 ∫_{−L}^{L} |u'|^2 dt − c_2

for some positive constants c_1 and c_2, so that J(u) is bounded from below, coercive and convex in u'. Hence, J(u) has a minimizer in H^1_0(−L, L)^m, giving us a classical solution of (3.4); see, e.g., L. Evans [1].
We now take a sequence L_j → ∞, and denote by u_j(t) ∈ H^1(R; R^m) a vector solution of the problem (3.4) on the interval (−L_j, L_j), extended as the zero vector outside of (−L_j, L_j). By our condition (3.2), we conclude a component-wise bound on |u_j(t)|, uniform in j and t. The crucial observation (originating from [4]) is that the variational method provides a bound on ‖u_j'‖_{L^2(−∞,∞)}, uniform in j. Indeed, this follows by combining the minimizing property of u_j with the lower bound on J provided by the condition (3.2). The equicontinuity of {u_j} then follows as in (2.6), and we complete the proof by passing to the limit along a diagonal sequence, as in the proof of Theorem 2.1.

Example 2. Consider the case m = 2, V(z) = z_1^4 + z_2^2 + h(z_1, z_2), with h(z_1, z_2) > 0 and h_{z_1}(z), h_{z_2}(z) bounded on R^2. We consider the system (3.1) with this V, where the functions a(t), f_1(t), f_2(t) satisfy the assumptions of Theorem 3.1. Applying Young's inequality to the terms involving h, f_1 and f_2, one obtains a lower bound on the functional with some constant c_3 > 0, verifying the condition (3.2). Hence, Theorem 3.1 applies.
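A numerical sketch for Example 2: we minimize a discretization of the functional J from the proof of Theorem 3.1, with the illustrative (assumed) choices h(z) = 1 + 0.1 sin z_1 cos z_2, a(t) ≡ 1, f_1(t) = cos t, f_2(t) = sin t. The gradient of the discretized functional is the finite-difference residual of the system u_i'' − a(t)V_{z_i}(u) = f_i(t).

```python
import numpy as np
from scipy.optimize import minimize

def solve_hamiltonian(L, n=200):
    """Minimize a discretization of
        J(u) = int( |u'|^2/2 + a(t) V(u) + f . u ) dt,  m = 2,
    with V(z) = z1^4 + z2^2 + 1 + 0.1 sin z1 cos z2 (illustrative h)."""
    t = np.linspace(-L, L, n + 2)
    hs = t[1] - t[0]
    ti = t[1:-1]
    f1, f2 = np.cos(ti), np.sin(ti)

    def split(z):
        return z[:n], z[n:]

    def J(z):
        u, v = split(z)
        up = np.diff(np.concatenate(([0.0], u, [0.0]))) / hs
        vp = np.diff(np.concatenate(([0.0], v, [0.0]))) / hs
        V = u**4 + v**2 + 1.0 + 0.1 * np.sin(u) * np.cos(v)
        return hs * (0.5 * np.sum(up**2) + 0.5 * np.sum(vp**2)
                     + np.sum(V + f1 * u + f2 * v))

    def dJ(z):
        u, v = split(z)
        ue = np.concatenate(([0.0], u, [0.0]))
        ve = np.concatenate(([0.0], v, [0.0]))
        lapu = (ue[:-2] - 2.0 * u + ue[2:]) / hs**2
        lapv = (ve[:-2] - 2.0 * v + ve[2:]) / hs**2
        Vu = 4.0 * u**3 + 0.1 * np.cos(u) * np.cos(v)   # V_{z_1}
        Vv = 2.0 * v - 0.1 * np.sin(u) * np.sin(v)      # V_{z_2}
        return hs * np.concatenate([-lapu + Vu + f1, -lapv + Vv + f2])

    res = minimize(J, np.zeros(2 * n), jac=dJ, method="L-BFGS-B",
                   options={"ftol": 1e-14, "gtol": 1e-9, "maxiter": 10000})
    u, v = split(res.x)
    return ti, u, v

ti, u, v = solve_hamiltonian(5.0)
```

Here the perturbation h has derivatives bounded by 0.1, so it does not destroy the coercivity supplied by z_1^4 + z_2^2, as the Young's inequality computation in Example 2 shows.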

Bounded solutions of Hamiltonian PDE systems
In this section, we use a combination of the variational approach and elliptic estimates to show that similar results can be obtained for Hamiltonian PDE systems. We shall be looking for uniformly bounded solutions u = (u_1, . . . , u_m) ∈ H^1(R^n; R^m), for n > 1, of the system

(4.1)    Δu_i − a(x)V_{z_i}(u) = f_i(x), x ∈ R^n, i = 1, . . . , m.

Here u_i(x) are the unknown functions, a(x) and f_i(x) are given functions on R^n, i = 1, . . . , m, and V(z) is a given function on R^m. We shall denote the gradient of a(x) by Da(x).
Theorem 4.1. Assume that a(x), f_i(x) ∈ C^∞(R^n) and V(z) ∈ C^∞(R^m). In addition, assume that there exist constants 0 < a_0 ≤ a_1 and M > 0 such that a_0 ≤ a(x) ≤ a_1 and |f_i(x)|, |Da(x)|, |Df_i(x)| ≤ M for all x ∈ R^n and i = 1, . . . , m. Assume also that the conditions (4.2) and (4.3) hold for all x ∈ R^n and z ∈ R^m, with some function f_0(x) > 0 satisfying ∫_{R^n} f_0(x) dx < ∞. Then the system (4.1) has a uniformly bounded classical solution u(x), with u_i(x) ∈ C^2(R^n), i = 1, . . . , m.
As in the proof of Theorem 3.1, we approximate solutions of the system (4.1) by solutions of the corresponding Dirichlet problems (4.4) on the balls B_L(0) = {x ∈ R^n : |x| < L}. The following lemma provides their solvability.

Lemma 4.1. The problem (4.4) has a classical solution u_L, with u_{L,i} ∈ C^2(B_L(0)) for all i.
Proof. We consider the corresponding variational functional

J(u) = ∫_{B_L(0)} [ (1/2)|∇u|^2 + a(x)V(u) + Σ_{i=1}^m f_i(x)u_i ] dx .

From the condition (4.3), we have

J(u) ≥ c_1 ∫_{B_L(0)} |∇u|^2 dx − c_2

for some positive constants c_1, c_2. Therefore, J is bounded below, coercive and convex in ∇u. Hence, it has a minimizer u_L ∈ H^1_0(B_L(0); R^m) that satisfies the system (4.4) (see Theorem 2 in Section 8.2.2 of [1]). Now each component of u_L solves the Poisson equation

Δu_{L,i} = a(x)V_{z_i}(u_L) + f_i(x).

For any i, since a, f_i and V are all smooth and u_L ∈ H^1_0, it follows from standard elliptic estimates that u_{L,i} ∈ H^3(B_L(0)), and therefore u_L ∈ H^3(B_L(0); R^m) (see Theorem 8.13 in [2]). By a bootstrapping argument and the Sobolev embedding theorem, u_{L,i} ∈ C^2(B_L(0)) for all i, and hence u_L is a classical solution of (4.4).
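The minimization step of Lemma 4.1 can be sketched numerically in two dimensions. For grid convenience, the sketch below replaces the ball B_L(0) by the square (−L, L)^2 and takes m = 1 with the illustrative, assumed choices V(z) = z^4/4 + z^2/2, a ≡ 1, f(x) = cos x_1 cos x_2; the Euler-Lagrange equation is then Δu − a(x)V'(u) = f(x).

```python
import numpy as np
from scipy.optimize import minimize

def minimize_pde(L=2.0, n=40):
    """Minimize a discretization of
        J(u) = int( |grad u|^2/2 + a(x) V(u) + f(x) u ) dx
    on the square (-L, L)^2 (standing in for B_L(0)), zero boundary data."""
    x = np.linspace(-L, L, n + 2)
    h = x[1] - x[0]
    xi = x[1:-1]
    X1, X2 = np.meshgrid(xi, xi, indexing="ij")
    f = np.cos(X1) * np.cos(X2)

    def pad(u):                       # enforce zero Dirichlet boundary
        return np.pad(u.reshape(n, n), 1)

    def lap(Up):                      # five-point discrete Laplacian
        return (Up[:-2, 1:-1] + Up[2:, 1:-1] + Up[1:-1, :-2]
                + Up[1:-1, 2:] - 4.0 * Up[1:-1, 1:-1]) / h**2

    def J(u):
        Up = pad(u)
        gx = np.diff(Up, axis=0)[:, 1:-1] / h
        gy = np.diff(Up, axis=1)[1:-1, :] / h
        U = Up[1:-1, 1:-1]
        return h**2 * (0.5 * np.sum(gx**2) + 0.5 * np.sum(gy**2)
                       + np.sum(0.25 * U**4 + 0.5 * U**2 + f * U))

    def dJ(u):
        Up = pad(u)
        U = Up[1:-1, 1:-1]
        # gradient = h^2 * (discrete Euler-Lagrange residual)
        return (h**2 * (-lap(Up) + U**3 + U + f)).ravel()

    res = minimize(J, np.zeros(n * n), jac=dJ, method="L-BFGS-B",
                   options={"ftol": 1e-14, "gtol": 1e-8, "maxiter": 20000})
    return xi, res.x.reshape(n, n)

xi, U = minimize_pde()
```

With this V, the functional is strictly convex, so the minimizer is the unique discrete solution; the continuous argument then upgrades its regularity by elliptic estimates.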
In the next lemma, we apply interior estimates for classical solutions of the Poisson equation to the function u_L found in Lemma 4.1, obtaining the estimate (4.5): the quantities |u_L|, |Du_L| and |D^2u_L| are bounded on the interior balls B_{L'}(0) and B_{L''}(0) by a constant K independent of L. We introduce some notation from [2]: for a bounded domain Ω ⊂ R^n and u ∈ C^{2,α}(Ω), 0 < α < 1, we write |u|_{0;Ω} for the supremum norm of u over Ω, with |u|_{k,α;Ω} denoting the usual Hölder norms.

Proof. We fix an arbitrary index i ∈ {1, . . . , m} and omit the subscript L, so that u = u_L and u_i = u_{L,i}. Suppose x_0 is a point of positive maximum of u_i. Then, since Δu_i(x_0) ≤ 0, it follows from (4.4) that

(4.6)    a(x_0)V_{z_i}(u(x_0)) ≤ −f_i(x_0) ≤ M.

The assumption (4.2) and (4.6) then guarantee that u_i(x_0) is bounded from above independently of L. Similarly, the minimum of u_i is bounded from below independently of L. Since this holds for all i, we deduce

(4.7)    |u|_{0;B_L(0)} ≤ K_3 for some K_3 independent of L.

Writing F_i(u, x) := a(x)V_{z_i}(u) + f_i(x), so that Δu_i = F_i(u, x), it follows from Lemma 4.1 and (4.7) that F_i ∈ C^2(B_L(0)) and |F_i(u, x)|_{0;B_L(0)} is bounded independently of L.
Let x̄ ∈ B_{L'}(0) and let w be the Newtonian potential of F_i on B_1(x̄); then

(4.8)    w(x) = ∫_{B_1(x̄)} Γ(x − y)F_i(u(y), y) dy,

where Γ is the fundamental solution of the Laplacian in R^n (see [2], Lemma 4.1). Using the properties of Γ and the uniform boundedness of F_i, it is easy to check that

(4.9)    |Dw|_{0;B_1(x̄)} ≤ C|F_i|_{0;B_1(x̄)}

for some constant C depending only on n. Therefore, the function u_i − w is harmonic in B_1(x̄). Using interior estimates for harmonic functions (see [2], Theorem 2.10), we have

(4.10)    |D(u_i − w)|_{0;B_{1/2}(x̄)} ≤ C|u_i − w|_{0;B_1(x̄)}

for some constant C depending only on n, since for any x ∈ B_{1/2}(x̄) we have dist(x, ∂B_1(x̄)) ≥ 1/2. Now combining (4.9)-(4.10), we obtain

|Du_i|_{0;B_{1/2}(x̄)} ≤ C(|u_i|_{0;B_1(x̄)} + |F_i|_{0;B_1(x̄)})

for some constant C depending only on n. Since x̄ is arbitrary in B_{L'}(0), and |u_i|_{0;B_L(0)} and |F_i|_{0;B_L(0)} are bounded independently of L, we obtain a bound on |Du_i|_{0;B_{L'}(0)} independent of L. Hence we have

(4.11)    |Du|_{0;B_{L'}(0)} ≤ K_1 for some K_1 independent of L.
By assumption, both |Da|_{0;R^n} and |Df_i|_{0;R^n} are bounded. Since V is smooth, and both |u|_{0;B_{L'}(0)} and |Du|_{0;B_{L'}(0)} are bounded independently of L, it is clear from interior Schauder estimates that

|u_i|_{2,α;B_{1/2}(x̄)} ≤ C(|u_i|_{0;B_1(x̄)} + |F_i|_{0,α;B_1(x̄)})

for some constant C depending only on n and α. Since x̄ ∈ B_{L''}(0) is arbitrary and the above right-hand side is bounded independently of L, we conclude that

(4.12)    |D^2u|_{0;B_{L''}(0)} ≤ K_2 for some K_2 independent of L.

Putting (4.7), (4.11), (4.12) together and setting K := max{K_1, K_2, K_3}, we obtain (4.5).
Proof of Theorem 4.1. We take an increasing sequence {L_j} with L_1 > 2 and lim_{j→∞} L_j = ∞, and denote by u_j = u_{L_j} the function found in Lemma 4.1. We extend u_j as zero outside B_{L_j}(0). Note that u_j ∈ C^{2,α}(B_{L_j}(0); R^m), but u_j need not be smooth on all of R^n. On each B_{L''_p}(0), it follows from Lemma 4.2 that the sequences {u_j}_{j≥p}, {Du_j}_{j≥p} and {D^2u_j}_{j≥p} are all uniformly bounded and equicontinuous. Using the diagonal argument from the proof of Theorem 2.1, one can find a subsequence {u_{j_k}} such that {u_{j_k}}, {Du_{j_k}} and {D^2u_{j_k}} all converge uniformly on every B_{L''_p}(0). In particular, there exists u ∈ C(R^n; R^m) such that

(4.13)    u_{j_k} → u uniformly on all bounded domains in R^n.
It is clear from Lemma 4.2 that u is bounded on R^n. It remains to show that the vector-valued function u satisfies the system (4.1). Let Ω ⊂ R^n be any bounded convex domain, and let i ∈ {1, . . . , m} be any index. Note that u_{j_k,i} ∈ C^2(Ω) for all k sufficiently large, and there exist v ∈ C(Ω; R^n) and w ∈ C(Ω; R^{n×n}) such that

(4.14)    ∇u_{j_k,i} → v and ∇^2u_{j_k,i} → w uniformly on Ω,

where ∇^2u_{j_k,i} is the Hessian matrix of u_{j_k,i}. Fix x_0 ∈ Ω. For any x ∈ Ω, we have

u_{j_k,i}(x) = u_{j_k,i}(x_0) + ∫_{l^x_{x_0}} ∇u_{j_k,i} · τ ds,

where l^x_{x_0} is the line segment joining x_0 and x, and τ is the unit tangent vector of l^x_{x_0}. Using (4.13) and (4.14), we obtain

u_i(x) = u_i(x_0) + ∫_{l^x_{x_0}} v · τ ds,

and therefore u_i ∈ C^1(Ω) and ∇u_i = v. Using similar arguments and (4.14), we obtain that v ∈ C^1(Ω) and ∇v = w, and hence u_i ∈ C^2(Ω) and ∇^2u_i = w in Ω.
For k sufficiently large, we know that u_{j_k,i} solves

Δu_{j_k,i} − a(x)V_{z_i}(u_{j_k}) = f_i(x), for x ∈ Ω.
Passing to the limit as k → ∞, we have ∆u i − a(x)V zi (u) = f i (x) , for x ∈ Ω.
Since this holds for all bounded convex domains Ω ⊂ R^n, we conclude that u ∈ C^2(R^n; R^m) is a bounded solution of the system (4.1).

Remark 2. We can apply Theorem 4.1 to the system given in Example 2, with smooth h, and with the functions a(x), f_1(x), f_2(x) satisfying the additional assumptions of Theorem 4.1.