SUCCESSIVE ITERATIONS FOR POSITIVE EXTREMAL SOLUTIONS OF BOUNDARY VALUE PROBLEMS ON THE HALF-LINE

The authors study the existence of positive extremal solutions to a second-order differential equation on the half-line.


Introduction
In 1837, Liouville introduced a fundamental approximation scheme in fixed point theory known as the fixed point iterative method. It was further developed by several mathematicians, including Picard [22] in 1890. A number of mathematicians have used this method over the years; for example, see [12, 17, 20, 21, 24, 25].
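This classical scheme is simply successive approximation: starting from an initial guess, apply the map repeatedly until the iterates stabilize. A minimal numerical sketch (the contraction g(x) = cos x is our own illustrative choice, not an operator from this paper):

```python
import math

def picard_iterate(g, x0, tol=1e-12, max_iter=1000):
    """Successive approximation x_{n+1} = g(x_n), stopped when
    consecutive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g(x) = cos x is a contraction near its unique fixed point, so the
# iteration converges from any starting value.
fixed = picard_iterate(math.cos, 1.0)  # ≈ 0.739085, where cos(fixed) = fixed
```

When g is a contraction, the Banach fixed point theorem guarantees geometric convergence; the iteration used in this paper instead exploits monotonicity of the operator rather than contractivity.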
In this work, we give sufficient conditions for the existence of maximal and minimal positive solutions to the second-order boundary value problem (1.1) posed on the half-line, where I = (0, ∞), f : R⁺ × R⁺ → R⁺ is continuous, a : I → R⁺, λ > 0 is a parameter, and there exists t₀ ∈ I such that f(t₀, 0) ≠ 0.
Investigations of second-order boundary value problems on the half-line have been conducted in a number of settings. For example, there have been studies of problems in which the equation under consideration is linear [5], nonlinear [3, 10, 24, 25], or singular [2], contains the derivative of the unknown function [19, 23], contains a parameter [19], involves the Laplacian [9, 14, 15], is of Kirchhoff type [13], or involves impulse conditions [1, 6, 15, 17]. In addition, the boundary conditions themselves may be of Dirichlet type [9] or multi-point type [14, 18, 20], or contain a functional [23], or the problem may be at resonance [14–16, 18]. The techniques used to prove the results have involved variational methods [7, 9, 11], critical point theory [4], and fixed point methods.
In this paper, we use a successive approximation approach to prove the existence of maximal and minimal positive solutions to problem (1.1). After presenting some preliminary concepts in the next section, our main result, its proof, and an example appear in Section 3.

Preliminaries
Let G(t, s) be the Green's function associated with our problem (1.1), which is given by

Notice that for t = s, we have

Proof. This follows easily from the Mean Value Theorem.
For our construction, we let

We next have a lemma that gives an integral representation for solutions of our original problem.
Proof. Notice that here we need the function g to satisfy g ∈ L¹(0, +∞) to ensure that the integral ∫₀^∞ G(t, s)g(s) ds is defined. From Lemma 2.1 we have that

We also have

Then,

Next, we define a cone P in E by

On P, we define the operator T by

The following compactness criterion is based on [8, p. 62].

Main result
Our main existence result in this paper is contained in the following theorem.

(H2) f is nondecreasing with respect to its second variable.
Then there exists a positive constant R such that in (0, R] the problem (1.1) has a minimal positive solution u* and a maximal positive solution v* with

and for which we have

Proof. To show that the operator T : E → E is compact, let Ω be any bounded subset of E. Then there exists a constant M > 0 such that ‖u‖ ≤ M for every u ∈ Ω, and we have

This implies that T(Ω) is uniformly bounded.
To see that S = {Tu : u ∈ P ∩ Ω} is almost equicontinuous on I, that is, equicontinuous on compact subsets of I, let K ⊂ I be compact and let t₁, t₂ ∈ K. For all u ∈ P ∩ Ω, we have

proving the equicontinuity of T(Ω) on K by Lemma 2.4.
To show that the functions {Tu : u ∈ P ∩ Ω} are equiconvergent at ∞, notice that for all u ∈ Ω and t > 0, we have

Thus, the equiconvergence of T(Ω) follows from the fact that lim_{t→∞} G(t, s) = 0.
Next, we need to show that T is continuous, so let (u_n) ⊂ P ∩ Ω be such that u_n → u as n → +∞. Since f is continuous, we have that f(t, u_n(t)) → f(t, u(t)) as n → +∞ and

From condition (H1), we see that

so we can apply the Lebesgue Dominated Convergence Theorem to obtain

We then have ‖Tu‖ ≤ R for all u ∈ B, and so T(B) ⊂ B as needed.
From the definition of the operator T and condition (H2), we see that T is nondecreasing. Define a sequence (u_n) as follows: u₀ ≡ 0 and u_{n+1}(t) = (Tu_n)(t) for n = 0, 1, 2, … and for all t ∈ R⁺.
Since u₀ ≡ 0 ∈ B and T : B → B, we have (u_n)_{n≥1} ⊂ T(B) ⊂ B. Notice that for all t ∈ R⁺, u_j(t) = (Tu_{j-1})(t) ≥ u_{j-1}(t) for j = 1, 2, … By the complete continuity of the operator T, the sequence (u_n)_{n≥1} has a convergent subsequence (u_{n_k})_{k≥1}, and there exists u* ∈ B such that u_{n_k} → u* as k → +∞. This, together with (3.1), implies that lim_{n→∞} u_n = u*. Since T is continuous and u_{n+1} = Tu_n, we have Tu* = u*, that is, u* is a fixed point of the operator T. In a similar way, we define a sequence (v_n) by v₀ ≡ R and v_{n+1}(t) = (Tv_n)(t) for n = 0, 1, 2, … and for all t ∈ R⁺.
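A scalar caricature of this monotone scheme may help fix ideas: if T is nondecreasing and maps [0, R] into itself, iterating from u₀ = 0 produces a nondecreasing sequence converging to the minimal fixed point, while iterating from v₀ = R produces a nonincreasing sequence converging to the maximal one. The map below is a hypothetical illustration, not the integral operator of the paper.

```python
def monotone_limits(T, R, n_iter=200):
    """Iterate a nondecreasing self-map T of [0, R] from both endpoints:
    u_{n+1} = T(u_n) with u_0 = 0, and v_{n+1} = T(v_n) with v_0 = R."""
    u, v = 0.0, R
    for _ in range(n_iter):
        u, v = T(u), T(v)
    return u, v

# T(x) = (x^2 + 2)/3 is nondecreasing on [0, 2], maps [0, 2] into itself,
# and has exactly two fixed points there: x = 1 and x = 2.
u_star, v_star = monotone_limits(lambda x: (x * x + 2.0) / 3.0, R=2.0)
# u_star ≈ 1 (minimal fixed point), v_star = 2 (maximal fixed point)
```

Any fixed point w of T in [0, R] is squeezed between the two limits: 0 ≤ w ≤ R and the monotonicity of T give u_n ≤ w ≤ v_n at every step, mirroring the final part of the proof.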

Reasoning as above, we can prove the existence of a fixed point v* ∈ B of the operator T with v_n → v* as n → +∞.
Next, we will prove that v* and u* are the maximal and minimal positive solutions of (1.1), respectively, in [0, R]. Let w ∈ [0, R] be a solution of (1.1); then u₀ ≡ 0 ≤ w ≤ R ≡ v₀, and since w = Tw and T is nondecreasing, u₁(t) = Tu₀(t) ≤ w(t) ≤ Tv₀(t) = v₁(t) for all t ∈ R⁺. By induction, we obtain u_n(t) ≤ w(t) ≤ v_n(t) for all t ∈ I and n = 0, 1, 2, … Passing to the limit, we see that u*(t) ≤ w(t) ≤ v*(t) for all t ∈ I. This completes the proof of the theorem.
and set B = {u ∈ E : ‖u‖ ≤ R}. In order to show that T(B) ⊂ B, let u ∈ B. By Lemma 2.1, we have

sup_{t ∈ R⁺} |Tu(t)| = sup_{t ∈ R⁺} ∫₀^∞ a(s) G(t, s) f(s, u(s)) ds
For all t₁, t₂ ≥ 0 and all s ∈ I, we have