Multiple solutions for a second order equation with radiation boundary conditions

A second order ordinary differential equation with a superlinear term is studied under radiation boundary conditions. Employing the variational method and an accurate shooting-type argument, we prove the existence of at least three or five solutions, depending on the interaction of the nonlinearity with the spectrum of the associated linear operator and the values of the radiation parameters.


Introduction
In a recent paper [1], the following problem was studied:

u''(x) = g(x, u(x)) + A(x),  (1.1)

under radiation boundary conditions

u'(0) = a_0 u(0),  u'(1) = a_1 u(1).  (1.2)

Unlike the standard Robin condition, both coefficients a_0 and a_1 in the radiation boundary condition (1.2) are assumed to be strictly positive. Here, A ∈ C([0, 1]) and g : [0, 1] × R → R is continuous, of class C¹ with respect to u and superlinear, that is,

g(x, u)/u → +∞ as |u| → ∞,  (1.3)

uniformly in x ∈ [0, 1]. Without loss of generality, it is assumed that g(x, 0) = 0 for all x ∈ [0, 1].
The previous problem was motivated as a generalization of the following Painlevé II model for two-ion electrodiffusion studied in [2],

u''(x) = K u(x)³ + L(x)u(x) + A,  (1.4)

with K and A positive constants and L(x) := a_0² + (a_1² − a_0²)x. As readily observed, the associated functional J is coercive over a subspace H ⊂ H¹(0, 1) of codimension 1, and −J is coercive over a linear complement of H. This geometry explains the nature of the results in [2], where it was shown that the functional is in fact coercive over the whole space and, consequently, achieves a global minimum but, under appropriate conditions, also admits a local minimum and a saddle-type critical point. In more precise terms, it was proved that the global minimizer of J corresponds to a negative solution of the problem; moreover, if a_1 ≤ a_0 then there are no other solutions. When a_1 > a_0, the solution is still unique for A ≫ 0 but, when A is sufficiently small, the problem has at least three solutions.
With these ideas in mind, an extension of the above-mentioned results was obtained in [1] for a general superlinear g nondecreasing in u and A ≥ 0; namely, a solution always exists and, moreover, it is unique if A is sufficiently large or if ∂g/∂u(·, u) > −λ_1 for all u, where λ_1 is the first eigenvalue of the linear operator Lu := −u'' under the boundary condition (1.2). When A is close to 0, the problem has at least three solutions if ∂g/∂u(·, 0) < −λ_1. Furthermore, under an extra assumption, which is fulfilled in the particular case (1.4), the previous multiplicity result is sharp, in the sense that the number of solutions is exactly 3. As a corollary, the mentioned results yield, for arbitrary A, the existence of at least three solutions when a_1 is large and a unique solution when a_1 is small.
The present paper is devoted to obtaining further generalizations of the results in [1] by dropping the monotonicity assumption on g. After deducing some basic properties of the spectrum and eigenfunctions of the operator L, in the first two results we shall assume that −∂g/∂u(x, 0) lies between consecutive eigenvalues, that is,

λ_k < −∂g/∂u(x, 0) < λ_{k+1} for all x ∈ [0, 1],  (1.5)

where λ_1 < λ_2 < ⋯ → +∞ are the eigenvalues of L and, for convenience, we denote λ_0 := −∞. As we shall see, λ_2 is always positive, so Theorem 2.9 in [1] is a direct consequence of the following theorem with k = 1.

Theorem 1.1. Let (1.3) hold and assume that (1.5) is satisfied for some odd k. Then there exists A_1 > 0 such that (1.1)-(1.2) has at least three solutions for ‖A‖_∞ < A_1.
In order to study the case in which k is even, it is worth recalling, from the mentioned uniqueness result in [1], that multiple solutions cannot be expected under the sole assumption that (1.5) holds for k = 0. In this sense, the following result can be regarded as complementary to Theorem 1.1 and allows us to gain more solutions provided that λ_1 < 0. As we shall see, the latter condition is equivalent to a largeness condition on a_1.

Theorem 1.2. Let (1.3) hold and assume that a_1 > a_0/(a_0 + 1). If (1.5) is satisfied for some even k > 0, then there exists A_1 > 0 such that (1.1)-(1.2) has at least five solutions for ‖A‖_∞ < A_1.
The preceding multiplicity results shall be proved by a shooting-type argument. It is worth noticing, however, that solutions of the initial value problem typically blow up before x = 1 for large values of the shooting parameter λ. In order to overcome this difficulty, we shall define an endpoint function that allows us to reduce the problem to an equivalent one, with λ belonging to some appropriate interval [−M, M]. Once a shooting operator T is established, the existence of multiple solutions is deduced by studying the sign changes of T. With this aim, we shall prove a fundamental lemma concerning the linearised problem at u = 0, which yields a straightforward proof of Theorem 1.1. Indeed, for an appropriate value M > 0 and A = 0, it shall be seen that T(M) > 0 > T(−M) and T(0) = 0. Moreover, assumption (1.5) with k odd implies that T'(0) < 0; thus, by a continuity argument it shall be deduced that T has at least three roots when A is small. The situation in Theorem 1.2 is different because, if k is even, then T'(0) > 0; however, employing the extra assumption λ_1 < 0, we shall adapt the method of upper and lower solutions in order to obtain values λ_− < 0 < λ_+ such that T(λ_−) > 0 > T(λ_+). This ensures that T has at least five roots when A is small. It is interesting to observe that some of the conclusions can be extended to more general situations, e.g. a system of equations, by the use of topological degree, although the results for the scalar case are sharper and, for this reason, deserve to be treated separately. Generalizations for systems of equations and other situations shall form part of a forthcoming paper.
Variational methods allow us to obtain multiple solutions under a different condition, which is weaker than (1.5), by imposing a lower bound on the primitive of g, namely G(x, u) := ∫_0^u g(x, s) ds. In order to formulate a precise statement, let us denote by ϕ_k the (unique) eigenfunction associated to λ_k such that ϕ_k(0) > 0 and ‖ϕ_k‖_{L²} = 1, and observe that, by superlinearity, the function G achieves a (nonpositive) minimum.
for all x and |u| ≤ K. Then there exists a constant A_1 > 0 such that problem (1.1)-(1.2) has at least three solutions for ‖A‖_{L²} < A_1.
Our last result is devoted to analysing uniqueness and multiplicity according to the different values of the parameter a_1. As we shall see, if a_1 is large then λ_1 ≪ 0; thus, condition (1.6) is fulfilled with k = 1. Further considerations will show that the value A_1 in Theorem 1.3 can be made arbitrarily large as a_1 increases, yielding the existence of at least three solutions. The situation is different when a_1 → 0⁺, because λ_1 tends to some positive constant and the validity of conditions like (1.5) or (1.6) depends on the choice of g. Indeed, for a_1 arbitrarily small it is possible to find g and A such that the problem admits three solutions; however, if the derivative of g always lies above an appropriate constant, then the uniqueness condition given in [1] holds for a_1 small. More precisely, the following theorem holds.

Theorem 1.4.
There exists a constant a* (depending only on ‖A‖_{L²}) such that problem (1.1)-(1.2) has at least three solutions for a_1 > a*. Assume, moreover, that ∂g/∂u(·, u) ≥ c > −r_1² for all u, where r_1 ∈ (0, π/2) is the unique value such that r_1 tan r_1 = a_0. Then there exists a_* > 0 such that the solution of (1.1)-(1.2) is unique for a_1 < a_*.

The paper is organized as follows. In the next section, we shall prove the basic facts concerning the eigenvalues of the linear operator L under the radiation boundary conditions, which shall be used in the proofs of our main results. In Section 3, we define an accurate shooting-type operator and prove two existence results from which Theorems 1.1 and 1.2 are deduced in a straightforward manner. Finally, in Section 4 we shall apply a variational method in order to prove a quantitative version of Theorem 1.3, which provides a lower bound for A_1 and yields a proof of Theorem 1.4.
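The constant r_1 appearing in Theorem 1.4 is defined only implicitly, by the transcendental equation r_1 tan r_1 = a_0, but it is easily computed numerically. The following Python sketch (the function name and tolerance are our illustrative choices, not taken from the paper) finds it by bisection:

```python
import math

def r1(a0, tol=1e-12):
    """Unique root of r*tan(r) = a0 in (0, pi/2), by bisection;
    r*tan(r) increases from 0 to +infinity on this interval."""
    lo, hi = 0.0, math.pi / 2 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.tan(mid) < a0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since r tan r is strictly increasing from 0 to +∞ on (0, π/2), bisection applies directly; for instance, a_0 = 1 gives r_1 ≈ 0.8603, so the uniqueness threshold in Theorem 1.4 is −r_1² ≈ −0.74.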

Spectrum of the associated linear operator
In this section, we shall obtain some elementary properties of the spectrum of the operator Lu := −u'', which is symmetric under the boundary condition (1.2). It is readily seen that all the eigenvalues of L are simple and form a sequence λ_1 < λ_2 < ⋯ → +∞. Zeros of an arbitrary eigenfunction ϕ are obviously simple; in particular, from the boundary condition it is deduced that ϕ does not vanish on the boundary. As mentioned, we shall denote λ_0 := −∞ and, for k > 0, we shall set ϕ_k as the unique eigenfunction associated to λ_k such that ϕ_k(0) > 0 and ‖ϕ_k‖_{L²} = 1. Thus, {ϕ_j}_{j≥1} is an orthonormal basis of L²(0, 1). A standard argument shows that ϕ_1 does not vanish and, by comparison, the k-th eigenfunction ϕ_k has exactly k − 1 zeros in (0, 1). This, in turn, implies sgn(ϕ_k(1)) = (−1)^{k−1}.
Lemma 2.1. The following properties hold: λ_1 is continuous and strictly decreasing with respect to a_1; λ_1 < 0 if and only if a_1 > a_0/(a_0 + 1); λ_1 < −a_1² if and only if a_1 > a_0; and λ_2 > 0.

Proof. Let a_1 < ã_1 and consider the respective eigenvalues and eigenfunctions λ_1, λ̃_1 and ϕ_1, ϕ̃_1. Suppose that λ_1 ≤ λ̃_1; since both eigenfunctions are positive, multiplying the equation for ϕ_1 by ϕ̃_1 and vice versa, subtracting and integrating, we obtain −a_1 ϕ_1(1)ϕ̃_1(1) ≤ −ã_1 ϕ̃_1(1)ϕ_1(1), a contradiction. Continuity of λ_1 is left as an exercise for the reader. Moreover, a simple computation shows that 0 is an eigenvalue if and only if a_1 = a_0/(a_0 + 1); in this case, the corresponding eigenfunction is a linear function which, due to the boundary condition, cannot change sign and hence 0 = λ_1. As a consequence, we deduce that λ_1 < 0 if and only if a_1 > a_0/(a_0 + 1), in which case we may write λ_1 = −r² < 0, where r > 0 satisfies

r sinh r + a_0 cosh r = a_1 (cosh r + (a_0/r) sinh r).

In particular, this shows that λ_1 < −a_1² if and only if a_1 > a_0. Finally, let us show that the second eigenvalue is always positive: indeed, otherwise λ_2 < 0, which implies sgn(ϕ_2''(x)) = sgn(ϕ_2(x)) for all x such that ϕ_2(x) ≠ 0. From the boundary condition, we deduce that ϕ_2 does not vanish in [0, 1], a contradiction.
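The statements of Lemma 2.1 can be checked numerically. Normalizing ϕ(0) = 1, ϕ'(0) = a_0, one has ϕ(x) = cos(μx) + (a_0/μ) sin(μx) for λ = μ² > 0, with hyperbolic functions when λ = −r² < 0, and the eigenvalues are exactly the roots of the characteristic function D(λ) := ϕ'(1) − a_1 ϕ(1). A minimal Python sketch (the sample values a_0 = 1, a_1 ∈ {1, 0.3} and the scanning interval are our illustrative choices, not taken from the paper):

```python
import math

def char_fn(lam, a0, a1):
    """D(lam) = phi'(1) - a1*phi(1) for the solution of -phi'' = lam*phi
    with phi(0) = 1, phi'(0) = a0; the eigenvalues of L are the roots of D."""
    if lam > 1e-12:
        mu = math.sqrt(lam)
        phi = math.cos(mu) + (a0 / mu) * math.sin(mu)
        dphi = -mu * math.sin(mu) + a0 * math.cos(mu)
    elif lam < -1e-12:
        r = math.sqrt(-lam)
        phi = math.cosh(r) + (a0 / r) * math.sinh(r)
        dphi = r * math.sinh(r) + a0 * math.cosh(r)
    else:  # lam = 0: phi(x) = 1 + a0*x
        phi, dphi = 1.0 + a0, a0
    return dphi - a1 * phi

def first_eigenvalue(a0, a1, lo=-25.0, hi=9.0, n=2000):
    """Scan [lo, hi] for the first sign change of D (D > 0 for lam far
    below lambda_1) and refine the bracket by bisection."""
    prev = char_fn(lo, a0, a1)
    for i in range(1, n + 1):
        lam = lo + (hi - lo) * i / n
        cur = char_fn(lam, a0, a1)
        if prev * cur <= 0:
            a, b = lam - (hi - lo) / n, lam
            for _ in range(80):
                m = 0.5 * (a + b)
                if char_fn(a, a0, a1) * char_fn(m, a0, a1) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        prev = cur
    raise RuntimeError("no eigenvalue found in the scanned interval")

lam1_large = first_eigenvalue(1.0, 1.0)   # a1 = 1 > a0/(a0+1) = 1/2
lam1_small = first_eigenvalue(1.0, 0.3)   # a1 = 0.3 < 1/2
```

For a_0 = 1 the threshold of the lemma is a_0/(a_0 + 1) = 1/2: with a_1 = 1 one finds λ_1 = −1 (the eigenfunction is e^x), while a_1 = 0.3 yields λ_1 > 0, in agreement with the sign characterization.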
The next lemma shall be key to our proofs of Theorems 1.1 and 1.2.
Lemma 2.2. For k ≥ 1, let A_k := {a ∈ C([0, 1]) : λ_k ≤ a(x) ≤ λ_{k+1}, with strict inequalities on a set of positive measure} and, for a ∈ A_k, let u_a be defined by (2.1), namely the solution of

u'' + a(x)u = 0,  u(0) = 1,  u'(0) = a_0.  (2.1)

Then sgn(u_a'(1) − a_1 u_a(1)) = (−1)^k.

Proof. Set T(a) := u_a'(1) − a_1 u_a(1). Let us first prove that T(a) ≠ 0, that is, u_a'(1) ≠ a_1 u_a(1). With this aim, set X ⊂ H²(0, 1) as the set of those functions satisfying (1.2) and define the symmetric bilinear form given by

B(u, v) := ∫_0^1 [u'(x)v'(x) − a(x)u(x)v(x)] dx + a_0 u(0)v(0) − a_1 u(1)v(1).

Let X_k := span{ϕ_1, …, ϕ_k}. If u ∈ X_k \ {0}, then we may write u = ∑_{j=1}^k s_j ϕ_j and −u'' = ∑_{j=1}^k s_j λ_j ϕ_j. Thus,

B(u, u) = ∑_{j=1}^k s_j² λ_j − ∫_0^1 a(x)u(x)² dx ≤ ∑_{j=1}^k s_j² (λ_j − λ_k) ≤ 0.

Moreover, if B(u, u) = 0 then s_j = 0 for all j < k and ∫_0^1 (a(x) − λ_k)ϕ_k(x)² dx = 0, and hence ϕ_k vanishes over a non-zero measure set, a contradiction; that is, B is negative definite over X_k. In the same way, if Y_k ⊂ H¹(0, 1) is the closed subspace spanned by {ϕ_j}_{j>k}, then B is positive definite over Y_k. Now, if u_a'(1) = a_1 u_a(1), where u = u_a is defined by (2.1), then u_a ∈ X and B(u_a, v) = 0 for every v. Writing u_a = x + y with x ∈ X_k and y ∈ Y_k, the choices v = x and v = y yield B(x, x) = B(y, y); since B(x, x) ≤ 0 ≤ B(y, y), both vanish and hence x = y = 0, contradicting u_a(0) = 1. Observe next that T is continuous and, as just shown, does not vanish on A_k. Moreover, A_k is connected (for example, because it is convex); thus the sign of T is constant over A_k and we may assume that a is constant. Hence we may take, for each k, the first (in fact, unique) value a ∈ (λ_k, λ_{k+1}) such that u_a(1) = 0. It follows that u_a'(1) and ϕ_k(1) have opposite signs; thus, sgn(T(a)) = sgn(u_a'(1)) = (−1)^k.

Shooting method revisited
Let us recall that the multiplicity results in [1] were obtained from the application of a shooting-type method. However, the success of this procedure relied strongly on the monotonicity of g, which was employed to guarantee that the graphs of two different solutions of (1.1) satisfying the first condition in (1.2) do not intersect. This property does not hold in the general case, so the shooting operator must be defined in a more careful way.
With this aim observe, in the first place, that solutions of (1.1)-(1.2) are bounded. This fact was proved in [1] although, for the sake of completeness, a short proof is sketched here. Let u be a solution and ψ(x) := (a_1 − a_0)x + a_0. Multiplying by u and integrating, one obtains an estimate involving a constant C; choosing then a suitable constant K, the proof is complete.
In order to define our shooting operator, let us fix a constant M > √(2K) such that g(x, u)/u ≥ R for |u| ≥ M, where R is a constant to be specified. For each λ ∈ R, let u_λ be the unique solution of (1.1) with initial conditions u(0) = λ, u'(0) = a_0 λ. If |λ| ≤ M and |u_λ| reaches the value 2M for the first time at some t_1, then we may fix t_0 < t_1 such that |u_λ(t_0)| = M and M < |u_λ| < 2M over (t_0, t_1). An elementary estimate then shows that |u_λ'(t_1)| > 2a_1 M, provided that R > (4/3)a_1². This implies, on the one hand, that the 'endpoint' function

e(λ) := t_1 if t_1 exists, e(λ) := 1 otherwise,

is continuous. On the other hand, the (continuous) function T : [−M, M] → R given by

T(λ) := u_λ'(e(λ)) − a_1 u_λ(e(λ))

characterizes the solutions of (1.1)-(1.2), in the sense that u is a solution if and only if u = u_λ for some λ ∈ (−M, M) with T(λ) = 0. Furthermore, observe that T(M) > 0 > T(−M), which proves that a solution always exists. Moreover, w_λ := ∂u_λ/∂λ satisfies the linearisation of (1.1) at u_λ, with w_λ(0) = 1 and w_λ'(0) = a_0. Thus we deduce the following result, more general than Theorem 1.1.
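The truncated shooting operator can be illustrated numerically. Consider the hypothetical data g(x, u) = u³ − 4u, A ≡ 0 and a_0 = a_1 = 1 (our choices, not taken from the paper); then λ_1 = −1 and λ_2 = π², so (1.5) holds with k = 1, since −∂g/∂u(x, 0) = 4 ∈ (λ_1, λ_2). The sketch below integrates the initial value problem by RK4, stops at the endpoint e(λ) where |u_λ| first reaches a numerical cap (playing the role of the truncation level 2M), and counts the sign changes of T(λ) = u_λ'(e(λ)) − a_1 u_λ(e(λ)):

```python
# Hypothetical data (our choice, not from the paper):
#   g(x,u) = u^3 - 4u,  A = 0,  a0 = a1 = 1,
# for which lambda_1 = -1 and lambda_2 = pi^2, so (1.5) holds with k = 1.
a0, a1 = 1.0, 1.0
CAP = 10.0        # numerical stand-in for the truncation level 2M
H = 1.0 / 2000    # RK4 step size

def g(u):
    return u**3 - 4.0 * u

def T(lam):
    """Truncated shooting operator: integrate u'' = g(u) with
    u(0) = lam, u'(0) = a0*lam, stopping at the endpoint e(lam) where
    |u| first reaches CAP (or at x = 1); return u' - a1*u there."""
    u, v, x = lam, a0 * lam, 0.0
    while x < 1.0 and abs(u) < CAP:
        # one RK4 step for the system (u, v)' = (v, g(u))
        k1u, k1v = v, g(u)
        k2u, k2v = v + 0.5 * H * k1v, g(u + 0.5 * H * k1u)
        k3u, k3v = v + 0.5 * H * k2v, g(u + 0.5 * H * k2u)
        k4u, k4v = v + H * k3v, g(u + H * k3u)
        u += H * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += H * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        x += H
    return v - a1 * u

# Count sign changes of T on a grid in [-M, M] (here M = 3); every
# sign change gives a root of T, i.e. a solution of (1.1)-(1.2).
vals = [T(-3.0 + 0.1 * i) for i in range(61)]
signs = [v for v in vals if v != 0.0]
changes = sum(1 for p, q in zip(signs, signs[1:]) if p * q < 0)
```

For these data one observes at least three sign changes: T(−M) < 0 < T(M), while T'(0) < 0 produces the sign pattern near λ = 0 described above, in agreement with Theorem 1.1.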
Proposition 3.1. Let Φ be defined as the unique solution of the problem

Φ'' = ∂g/∂u(x, 0)Φ,  Φ(0) = 1,  Φ'(0) = a_0,

and assume that Φ'(1) < a_1 Φ(1). Then (1.1)-(1.2) has at least three solutions for ‖A‖_∞ small.
In the context of Proposition 3.1, let us now consider the opposite case Φ'(1) > a_1 Φ(1), for which the existence of nontrivial roots of T for A = 0 cannot be ensured, since T'(0) > 0. However, if for some λ it is verified that λT(λ) < 0, then T has at least two zeros with the same sign as λ and, consequently, the problem has at least two nontrivial solutions. This fact shall be the main argument in our proof of Theorem 1.2, based on the existence of a positive and a negative λ as before. With this aim, let us first prove the following lemma.

Lemma 3.2. Assume ∂g/∂u(x, 0) < 0 for all x and a_1 ≥ a_0/(a_0 + 1). Then (1.1)-(1.2) with A = 0 has at least one positive solution and one negative solution.
Proof. Fix ε > 0 such that ∂g/∂u(x, u) ≤ 0 for |u| ≤ ε(a_0 + 1) and define α(x) := ε(a_0 x + 1); then

α'' = 0 ≥ g(x, α(x))

and

α'(0) = a_0 α(0),  α'(1) = ε a_0 ≤ a_1 α(1).

On the other hand, we may take for example β(x) = e^{mx² + c} with m > 2a_1 and c ≫ 0; then

β'' = (2m + 4m²x²)β ≤ g(x, β)

and

β'(0) = 0 ≤ a_0 β(0),  β'(1) = 2m β(1) ≥ a_1 β(1).

Thus, the result is deduced from a straightforward adaptation of the method of upper and lower solutions (see e.g. [3]). The existence of a negative solution follows in a similar way.
Again, Theorem 1.2 shall be deduced from a more general result, namely the following proposition.
Proposition 3.3. Let Φ be defined as before and assume that Φ'(1) > a_1 Φ(1), that ∂g/∂u(x, 0) < 0 for all x and that a_1 > a_0/(a_0 + 1). Then (1.1)-(1.2) has at least five solutions for ‖A‖_∞ small.

Proof. Fix ã_1 ∈ (a_0/(a_0 + 1), a_1). From the previous lemma, there exist solutions u > 0 > v of (1.1) with A = 0 satisfying u'(0) = a_0 u(0) and u'(1) = ã_1 u(1), and analogously for v. In other words, u = u_λ with λ = u(0) > 0 and

T(λ) = u'(1) − a_1 u(1) = (ã_1 − a_1)u(1) < 0,

and the result follows.

Variational formulation
In this section, we introduce a variational formulation for problem (1.1)-(1.2) that allows us to study multiplicity of solutions from a different point of view. To this end, let us define the functional J : H¹(0, 1) → R,

J(u) := ∫_0^1 [u'(x)²/2 + G(x, u(x)) + A(x)u(x)] dx + (a_0/2)u(0)² − (a_1/2)u(1)²,

where G(x, u) := ∫_0^u g(x, s) ds. It is readily seen that J ∈ C¹(H¹(0, 1), R), with

DJ(u)(v) = ∫_0^1 [u'v' + (g(x, u) + A)v] dx + a_0 u(0)v(0) − a_1 u(1)v(1),

and that u ∈ H¹(0, 1) is a critical point of J if and only if u is a classical solution of (1.1)-(1.2).
From standard results (see e.g. [5]), J is weakly lower semi-continuous. Moreover, writing as before

a_1 u(1)² − a_0 u(0)² ≤ C‖u‖²_{L²} + (1/2)‖u'‖²_{L²},

we conclude, from the superlinearity, that

J(u) ≥ (1/4)‖u'‖²_{L²} + ‖u‖²_{L²} − K

for some constant K > 0. Thus, the functional is coercive and hence achieves a global minimum. This proves, again, that the problem has at least one solution.
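Coercivity makes the direct method amenable to a simple numerical experiment: minimizing a finite-difference discretization of J by gradient descent. The sketch below uses the hypothetical data g(u) = u³ − 4u (so G(u) = u⁴/4 − 2u²), A(x) ≡ 0.1 and a_0 = a_1 = 1; the grid size, step size and starting point are our illustrative choices, not taken from the paper:

```python
# Finite-difference discretization of
#   J(u) = int_0^1 [u'^2/2 + G(x,u) + A*u] dx + (a0/2)u(0)^2 - (a1/2)u(1)^2
# for the hypothetical data g(u) = u^3 - 4u (G(u) = u^4/4 - 2u^2),
# A(x) = 0.1, a0 = a1 = 1; gradient descent then finds an approximate
# critical point, i.e. a discrete solution of (1.1)-(1.2).
N = 40                    # number of subintervals
h = 1.0 / N
a0, a1, A = 1.0, 1.0, 0.1

def G(u):
    return 0.25 * u**4 - 2.0 * u**2

def g(u):
    return u**3 - 4.0 * u

def J(u):
    s = sum((u[i + 1] - u[i])**2 for i in range(N)) / (2.0 * h)
    for i in range(N + 1):
        w = 0.5 if i in (0, N) else 1.0       # trapezoid weights
        s += h * w * (G(u[i]) + A * u[i])
    return s + 0.5 * a0 * u[0]**2 - 0.5 * a1 * u[N]**2

def grad(u):
    d = [0.0] * (N + 1)
    for i in range(N + 1):
        w = 0.5 if i in (0, N) else 1.0
        d[i] = h * w * (g(u[i]) + A)
        if i > 0:
            d[i] += (u[i] - u[i - 1]) / h
        if i < N:
            d[i] += (u[i] - u[i + 1]) / h
    d[0] += a0 * u[0]
    d[N] -= a1 * u[N]
    return d

u = [-2.0] * (N + 1)      # start near the negative well of G
J0 = J(u)
step = 0.004              # small enough for the stiff Dirichlet part
for _ in range(15000):
    d = grad(u)
    u = [ui - step * di for ui, di in zip(u, d)]
gmax = max(abs(di) for di in grad(u))
```

The iteration settles at an approximate critical point (small discrete gradient) with a lower value of J; for this starting point it is a negative function, consistent with the sign information recalled in the introduction for the model (1.4).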
Remark 4.1. Observe that the existence of solutions holds, in fact, for arbitrary A ∈ L²(0, 1) and under a weaker form of (1.3), namely: for every M > 0 there exists K > 0 such that g(x, u)u ≥ Mu² − K for all u.
In order to prove the existence of multiple solutions, let us first observe that J satisfies the Palais-Smale condition, that is, if {J(u_n)} is bounded and DJ(u_n) → 0 as n → ∞, then {u_n} has a convergent subsequence in H¹(0, 1).
Indeed, let {u_n} ⊂ H¹(0, 1) be a Palais-Smale sequence. Because J is coercive, we may assume that {u_n} converges weakly in H¹(0, 1) and uniformly to some u. Since J ∈ C¹ and DJ(u_n)(u_n − u) → 0, we deduce that ‖u_n'‖_{L²} → ‖u'‖_{L²}; moreover, using the weak convergence u_n' ⇀ u', we conclude that u_n → u in the H¹-norm. Before stating our multiplicity result, for convenience we define, for each K > 0, a constant C_K; in particular, condition (1.6) implies that C_K < −λ_k. Moreover, set C_0 > 0 as the best constant in the corresponding inequality, valid for all A and all u such that u(1) = 0.
Remark 4.2. The value of C_0 can be computed in the following way. Note that, for fixed A, the minimum of

F(u) := ∫_0^1 [u'(x)²/2 + A(x)u(x)] dx + (a_0/2)u(0)²

subject to the constraint u(1) = 0 is attained at a function u_A expressed explicitly in terms of 𝒜(x) := ∫_x^1 A(s) ds, the Lagrange multiplier λ(A) being determined by the constraint. Thus, C_0 is obtained by minimizing F(u_A) over A under the corresponding normalization constraint. Thus, we deduce the following quantitative version of Theorem 1.3.
Proof. Define X_1 := span{ϕ_k} and X_2 := {u ∈ H¹(0, 1) : u(1) = 0}. On the one hand, from the previous definition of C_0 we obtain a lower bound for the infimum of J over X_2. On the other hand, recall that the eigenfunction ϕ_k does not vanish at x = 1 and was chosen in such a way that ϕ_k(0) > 0 and ‖ϕ_k‖_{L²} = 1. Writing G(x, u) = ½ ∂g/∂u(x, ξ)u² for some ξ between 0 and u, we may compute the corresponding values of J over X_1. From a well-known linking theorem by Rabinowitz (see [6]), there exists a critical point u_1 such that J(u_1) ≥ ρ > min_{u ∈ H¹(0,1)} J(u). This implies, in particular, that if u_0 is a global minimizer of J then u_0 ≠ u_1 and u_0(1) ≠ 0. Let s := sgn(u_0(1)) and observe that there exists u_2 such that J(u_2) = min over the set {u : s u(1) ≤ 0}. Again, this implies that u_2 ≠ u_1 and that u_2 ∉ X_2. It follows that u_2 is a local minimum of J and sgn(u_2(1)) ≠ sgn(u_0(1)). Thus, u_2 ≠ u_0 and the proof is complete.

Proof of Theorem 1.3: Let
Thus, the result follows from Theorem 4.3 with an appropriate choice of constants.

Proof of Theorem 1.4: From the computations in Section 2, it is readily verified, on the one hand, that if we let a_1 → +∞ then λ_1 = −r² with r/a_1 → 1⁺. In particular, when a_1 ≫ 0,

ϕ_1(x) = a (e^{rx} + ((r − a_0)/(r + a_0)) e^{−rx})  with a ≈ √(2r/(e^{2r} − 1)) and r ≈ a_1 ≫ 0.

This implies the required estimate for a_1 ≫ 0. In particular, fixing an arbitrary K > 0, it follows that condition (1.6) with k = 1 is satisfied for a_1 sufficiently large. Furthermore, setting η and θ as in the previous proof, it is verified that θ = O(a_1) and, consequently, A_1 → +∞ as a_1 → +∞. On the other hand, observe that if a_1 < a_0/(a_0 + 1) then λ_1 = r², where r is the first positive solution of the equation

−r sin r + a_0 cos r = a_1 (cos r + (a_0/r) sin r).
Thus, letting a_1 → 0, it is seen that λ_1 → r_1². Hence, if a_1 is sufficiently small then ∂g/∂u(x, u) > −λ_1 for all x and all u. Uniqueness then follows from Theorem 2.2 in [1].
As a final remark, it is worth mentioning that Theorem 1.4 cannot be directly deduced from the above shooting arguments. On the one hand, when a_1 is large, it is readily verified that (1.5), as well as the weaker condition of Proposition 3.1, need not hold. Furthermore, even if one of these conditions holds, it is not clear how to get rid of the smallness condition on A. Finally, it is worth mentioning that the shooting operator T depends on a_1, which makes it difficult to handle when a_1 gets large. On the other hand, when a_1 is small it is possible to prove the existence of multiple solutions for some specific choices of g. For example, it suffices to observe that λ_2 tends, as a_1 → 0, to some r_2² > r_1². Thus, fixing g such that −r_1² > ∂g/∂u(·, 0) > −r_2², the existence of three solutions follows from Theorem 1.1 if a_1 and ‖A‖_∞ are small enough. Other possible multiplicity conditions, however, require that a_1 is not too small: for example, in Theorem 1.2, it is not clear whether or not the condition on a_1 can be relaxed.