Multiplicity results for extremal operators through bifurcation

We study nonproper uniformly elliptic fully nonlinear equations involving extremal operators of Pucci type. We prove the existence of the full radial spectrum for this type of operator and establish multiplicity results through global bifurcation.

In this paper we want to solve, through global bifurcation, nonproper equations of the type M^±_C(D^2 u) + g(u) = 0 in Ω, where Ω ⊂ R^N and M^±_C are general Hamilton-Jacobi-Bellman (HJB) operators. Specifically, these operators are defined as in [27], [28].
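The displayed definition of these operators did not survive in this copy. For orientation only, we recall the classical Pucci extremal operators, which correspond to the model case C = [λ, Λ]^N; this is the standard formula from the literature, not a verbatim quotation of the paper's general HJB definition:

```latex
\[
\mathcal{M}^{+}_{\lambda,\Lambda}(X) \;=\; \Lambda \sum_{e_i(X) > 0} e_i(X) \;+\; \lambda \sum_{e_i(X) < 0} e_i(X),
\qquad
\mathcal{M}^{-}_{\lambda,\Lambda}(X) \;=\; \lambda \sum_{e_i(X) > 0} e_i(X) \;+\; \Lambda \sum_{e_i(X) < 0} e_i(X),
\]
where $e_1(X),\dots,e_N(X)$ denote the eigenvalues of the symmetric matrix $X$
and $0 < \lambda \le \Lambda$.
```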
This general class of extremal operators was introduced, in this form, by Felmer and Quaas in [19] and [20]. Observe that the operators in this class are positively homogeneous of degree one, so there is a naturally associated eigenvalue problem. Therefore, in the case λ = Λ = 1, the classical Rabinowitz bifurcation theory [34], [33], [35] gives an answer to the existence of solutions to (1.1).
Concerning the first half-eigenvalue problem, recent results have been established by Quaas and Sirakov in [32]; see also [31] for the general setting of convex (concave) operators. For Pucci-type operators with C = [λ, Λ]^N, some of these results were established earlier by Felmer and Quaas [18], Quaas [29], and Busca, Esteban and Quaas [5]. Other earlier results on the first half-eigenvalue are due to Lions [25] and Pucci [27].
The aim of this paper is to extend the results of [5] for C = [λ, Λ]^N, on local bifurcation from the first two half-eigenvalues and from the radial spectrum, to this class of HJB operators. Moreover, we establish existence results for equations of type (1.1) through global bifurcation. These results are new even in the case C = [λ, Λ]^N.
We start by recalling the results of [32] that we need; see also [3], [4] and Juutinen [23]. We then give our first result, which deals with the existence of the radial spectrum for this class of operators. Notice that nothing is known about higher eigenvalues in general domains. We think that the result below sheds light on the fact that infinitely many eigenvalues exist. In the radially symmetric situation we can apply ODE techniques to get the results.
Moreover, the set of radial solutions of (1.3) for µ = µ^+_k is positively spanned by a function ϕ^+_k, which is positive at the origin and has exactly k − 1 zeros in (0, 1), all of them simple. The same holds for µ = µ^−_k, with ϕ^−_k negative at the origin.
This result is related to the radial Fučík spectrum studied by Arias and Campos in [1]; see [5] for more details and other related references.
Finally, we want to address our original motivation, that is, to prove existence results for equations of type (1.1). For this purpose we consider the associated nonlinear bifurcation problem, where f is continuous, f(s, µ) = o(|s|) near s = 0 uniformly for µ ∈ R, and Ω is a general regular bounded domain. Concerning this problem we have the following theorem.
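The displayed problem was lost here. Assuming the standard form of a Rabinowitz-type bifurcation problem for these operators (a reconstruction consistent with (1.1) and the eigenvalue problem above, not a verbatim quotation), it reads:

```latex
\[
\begin{cases}
\mathcal{M}^{+}_{C}(D^2 u) + \mu u + f(u,\mu) = 0 & \text{in } \Omega,\\
u = 0 & \text{on } \partial\Omega,
\end{cases}
\]
with $f$ continuous and $f(s,\mu) = o(|s|)$ near $s = 0$, uniformly for $\mu \in \mathbb{R}$.
```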
The first bifurcation results in the context of HJB equations are due to Lions [25], where very particular nonlinearities are considered. In that paper the author also proved the existence of half-eigenvalues and gave their probabilistic interpretation in terms of stochastic control. It would be interesting to understand how such a bifurcation result can be extended to the context of [3] or [23].
denotes the set of solutions that are positive (resp. negative) at the origin, with exactly k − 1 zeros in (0, 1). Now we are in a position to state our first existence result for (1.1) with Ω = B_1, where g : R → R is a continuous function with g(0) = 0.
and that for some positive natural numbers k and n, with k ≤ n. Then (1.1) possesses at least n − k + 1 nontrivial radial solutions. More precisely, for each j such that k ≤ j ≤ n, there is a radial solution of problem (1.1) which is positive at the origin and has exactly j − 1 zeros in (0, 1).
An analogous result can be established when µ^+_{k,n} is replaced by µ^−_{k,n}, but with solutions negative at the origin. Remark 1.3 i) Similar results can be obtained in general domains when k = n = 1 using Theorem 1.3; see [30] for the case C = [λ, Λ]^N with a different proof. ii) Another multiplicity result can be obtained in a ball through global bifurcation by assuming the nonlinearity is sub-critical with respect to the critical exponent p* found by Felmer and Quaas in [19] for this type of operator. The sub-critical behavior of the nonlinearity gives the desired bound on the branch through the blow-up method; see the proof of Theorem 1.5 and Gidas and Spruck [21].
If g is an odd function, then we have more solutions, since we can distinguish the solutions that are positive at the origin from minus the solutions that are negative at the origin. Observe that this result is only valid for nonlinear operators.
and that for some positive natural numbers k and n, with k ≤ n. Then (1.1) possesses at least 2(n − k + 1) nontrivial radial solutions. The paper is organized as follows. In Section 2 we define the homotopy between the HJB operator and the Laplacian that is used to compute the degree; then we give a sketch of the proof of Theorem 1.3. In Section 3 we study the radial case: first we prove the existence of the radial spectrum, then the local bifurcation, and finally we establish our main existence results.

Homotopy and General Domain
We start this section by recalling some results of [32] that we need.
We need some preliminaries in order to compute the Leray-Schauder degree of our equivalent problem.
We start by constructing a homotopic deformation of C to a point. More precisely, let Ĉ_α be defined by interpolation. Notice that Ĉ_α is convex for all α ∈ [0, 1], Ĉ_0 = {ā} and Ĉ_1 = C. In a natural way we define the extremal operator related to Ĉ_α; therefore we have M_0 := ā∆ and M_1 := M^+_C. Define now L_α as the inverse of −M_α. It is well known that L_α is well defined in S := {u ∈ C(Ω) | u = 0 on ∂Ω} (see for example [8], also Theorem 3.1 in [36]) and that L_α is compact (see [6], also Proposition 4.2 in [9] and Theorem 3.8 in [7]). Let us observe that in [8] the C^{2,α} estimates hold up to the boundary if Ω is smooth; this gives compactness in C^2.
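A minimal numerical sketch of this homotopy, assuming the classical Pucci case C = [λ, Λ]^N, where the interpolated set is the interval [(1 − α)ā + αλ, (1 − α)ā + αΛ]. The names `pucci_plus` and `m_alpha` are illustrative, not from the paper; at α = 0 the operator reduces to ā·tr(X) (i.e. ā∆) and at α = 1 to M^+:

```python
import numpy as np

def pucci_plus(X, lam, Lam):
    """Classical Pucci maximal operator M^+_{lam,Lam}(X):
    Lam * (sum of positive eigenvalues) + lam * (sum of negative ones)."""
    e = np.linalg.eigvalsh(X)
    return Lam * e[e > 0].sum() + lam * e[e < 0].sum()

def m_alpha(X, alpha, lam, Lam, a_bar):
    """Homotopy M_alpha: Pucci operator for the interpolated ellipticity
    interval [(1-alpha)*a_bar + alpha*lam, (1-alpha)*a_bar + alpha*Lam].
    At alpha = 0 this is a_bar * trace(X); at alpha = 1 it is M^+_{lam,Lam}."""
    lam_a = (1 - alpha) * a_bar + alpha * lam
    Lam_a = (1 - alpha) * a_bar + alpha * Lam
    return pucci_plus(X, lam_a, Lam_a)

X = np.array([[2.0, 0.5], [0.5, -1.0]])   # a symmetric test matrix
a_bar = 1.5
print(m_alpha(X, 0.0, 1.0, 3.0, a_bar))   # equals a_bar * trace(X) = 1.5
print(m_alpha(X, 1.0, 1.0, 3.0, a_bar))   # equals M^+_{1,3}(X)
```

The convexity of each interpolated interval mirrors the convexity of Ĉ_α used in the degree computation.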
The aim is to compute the degree deg_S(I − µL_α, B(0, r), 0) for the values of µ where it is well defined.
This proposition, and then Theorem 1.3, can be proven without changes following the proof in [5]; the only difference is that we need to use the above homotopy to prove Proposition 2.1 (to get degree −1), the non-existence result of Theorem 2.1 ii) (to get degree 0), and part i) of Theorem 2.1 (to get degree 1). For more details see the proof of Proposition 3.1 below.

Spectrum in the Radial Case and Existence Results
We start this section by studying the operator acting on radial functions. Most of this is done in [19]; the only new point here is that we need to define the operator simultaneously for negative and positive right-hand sides, and where the derivative vanishes.
For radially symmetric solutions we can define the operator M^+_C acting on C^2 radially symmetric functions. In the rest of the paper we will write C for C̃ to simplify the notation. In order to describe the set C in a more convenient way, and to avoid trivialization, we make a further assumption. (D) The set C ⊂ R^2_+ is compact, convex, and its projection onto the y-axis is not a singleton.
Assuming (D) we exclude the case when the projection of C onto the y-axis is a singleton, which is equivalent to C = {(a_1, a_1)}; this particular case can be analyzed as the radial Laplacian. Observe that C is a symmetric set. Under assumption (D) we may describe ∂C by means of two functions. Let 0 < θ_min < θ_max be defined accordingly. With these definitions we see that S is convex and S̄ is concave. Being convex, S has one-sided derivatives S'_−(θ) and S'_+(θ); consequently it is locally Lipschitz continuous in (θ_min, θ_max). The sub-differential of S is then defined as ∂S(θ) = [S'_−(θ), S'_+(θ)] for θ ∈ (θ_min, θ_max). The cases θ = θ_min and θ = θ_max are special: at θ_max we have two possibilities, either S'_−(θ_max) exists, and then we define ∂S(θ_max) accordingly; an analogous situation occurs at θ_min. We observe that, with these definitions, for every Q ∈ R there is at least one solution θ ∈ [θ_min, θ_max] of the equation Q ∈ ∂S(θ). The case when this equation has multiple solutions is very important for our analysis, and it occurs when the function S coincides locally with an affine function.
Where the function S is affine we may write it explicitly. All of the above holds for S̄ with the natural modifications, since S̄ is concave and ∂S̄ is the super-differential of S̄. We would also like to define θ as a function of Q, but we cannot do it in a unique way because of the possible multiplicity of solutions of the equation above; we make a choice, denoted Θ. Now we are interested in the study of the problem (3.8)-(3.10), where θ = Θ(v(r)r/((N − 1)v'(r))) when v' ≠ 0. When v' = 0, then θ := θ_min if v > 0 and θ := θ_max if v < 0. Notice that the functions θ(r) and N_d(r) are measurable, having discontinuities whenever r is such that v(r)r/((N − 1)v'(r)) = Q_i and S, S̄ are affine there. Moreover, both θ(r) and N_d(r) are bounded and bounded away from 0.
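The selection Θ can be realized concretely: by convexity, Q ∈ ∂S(θ) exactly when θ minimizes S(θ) − Qθ on [θ_min, θ_max]. A hedged sketch with an illustrative convex S (the function `theta_of_q` and the example S(θ) = θ² are assumptions for demonstration, not the paper's S):

```python
import numpy as np

def theta_of_q(S, q, theta_min, theta_max, n=2001):
    """One selection Theta(Q): since S is convex, Q lies in the
    sub-differential of S at theta exactly when theta minimizes
    S(theta) - Q*theta on [theta_min, theta_max].  We pick the
    smallest minimizer on a uniform grid (a measurable choice)."""
    grid = np.linspace(theta_min, theta_max, n)
    vals = S(grid) - q * grid
    return grid[np.argmin(vals)]

# Illustrative convex S (not from the paper): S(theta) = theta**2 on [0, 2].
S = lambda t: t ** 2
print(theta_of_q(S, 2.0, 0.0, 2.0))   # minimizer of t^2 - 2t is t = 1
```

When S is affine on a segment, every θ in that segment minimizes S(θ) − Qθ for the corresponding slope Q; `argmin` then returns the first (smallest) minimizer, which is one admissible measurable selection.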
Next we briefly study the existence, uniqueness, global existence, and oscillation of the solutions to the related initial value problem Then we will come back to (3.8), (3.9) and to the proof of Theorem 1.2. In the rest of the paper we will use the following two important remarks.
for all θ̄ ∈ [θ_min, θ_max], and the same is valid for S̄.
Remark 3.2 We have the following estimates for the dimension number N_d(r).

First, using Theorem 1.1 and the symmetry result of [12], we can conclude, after a rescaling if necessary, the existence of a unique solution u ∈ C^2. Observe that the existence can be obtained from the Schauder fixed point theorem (see [19] for a similar argument), but the uniqueness is not trivial to obtain and is related to the simplicity result in Theorem 1.1.
Now, for some δ > 0 small, u satisfies the equation on [δ, ∞). Next we consider (3.11) with initial values u(δ) and u'(δ) at r = δ. From the standard theory of ordinary differential equations we find a unique C^2 solution of this problem for r ∈ [δ, a), for some a > δ. Using Gronwall's inequality we can extend the local solution to [0, ∞).
In the following Lemma we will show that the solution u is oscillatory.
Proof. Suppose that u is not oscillatory, that is, for some r_0, u does not vanish on (r_0, ∞). Assume first that u > 0 in (r_0, ∞). Let φ be a solution of (3.11) and (3.12) for a fixed θ̄ with S(θ̄) = θ̄; then it is known that φ is oscillatory, since it corresponds to the Laplace operator. So we can take r_0 < r_1 < r_2 such that φ(r) > 0 if r ∈ (r_1, r_2) and φ(r_1) = φ(r_2) = 0. From the definition and the optimality of the operator, u and φ satisfy the corresponding differential inequalities. If we multiply the first equation by φ and the second by u, subtract them and then integrate, we get a contradiction. Suppose now that u < 0 in (r_0, ∞). In that case, from equation (3.13), we claim that u' > 0 in (r_0, ∞), taking a larger r_0 if necessary. If there exists r* such that u'(r*) = 0, then using the equation we have that u' > 0 in (r*, ∞). So we only need to discard the case u' < 0 in (r_0, ∞). In that case u satisfies the corresponding equation. Denote g(r) = u'(r) exp(∫_{r_0}^r (N_d(τ) − 1)/τ dτ). Then g(r) is monotone, so there exists a finite c_1 < 0 such that lim_{r→∞} g(r) = c_1. On the other hand, since u' < 0, u is decreasing and there exists c_2 ∈ [−∞, 0) such that lim_{r→∞} u(r) = c_2; then from the equation we get a contradiction with lim_{r→∞} g(r) = c_1, proving the claim. Using u' > 0, Remark 3.2 and the definition of the maximal operator, we now multiply this equation by an appropriate factor to obtain (3.14), where b(r) = r^{N_− − 1} u'(r)/u(r), r ∈ (r_0, ∞).
Integrating (3.14) from r_0 to t > r_0 we get (3.15). Define now k(t) accordingly and notice from the previous fact that k(t) ≥ c̄ t^{N_+ + 2} for some c̄ > 0 and t large. (3.16)

By (3.15) k(t) ≤ −b(t), and so
From this last inequality it follows that (3.17) holds for some C_1 and t, s large. Noting that N_− > N > 2 and taking s → ∞, we find that k(t) satisfies the opposite bound. However, (3.16) and (3.17) are not compatible; hence u must be oscillatory.
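The multiply-subtract-integrate step in the first part of the proof can be written out for the model weight r^{N−1} of the radial Laplacian (an illustrative simplification; the actual argument uses the weighted equation with the dimension number N_d(r)):

```latex
\[
\int_{r_1}^{r_2} \Big( u\,\big(r^{N-1}\varphi'\big)' - \varphi\,\big(r^{N-1}u'\big)' \Big)\,dr
 \;=\; \Big[\, r^{N-1}\big( u\varphi' - \varphi u' \big) \Big]_{r_1}^{r_2}
 \;=\; r_2^{N-1}u(r_2)\varphi'(r_2) - r_1^{N-1}u(r_1)\varphi'(r_1),
\]
since $\varphi(r_1) = \varphi(r_2) = 0$. The left-hand side has a sign by the equations
and the extremality of the operator, while the right-hand side has the opposite sign
because $u > 0$ on $[r_1, r_2]$ and $\varphi'(r_2) \le 0 \le \varphi'(r_1)$, which gives
the desired contradiction.
```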
The next lemma is a key step in proving that the branches preserve the number of zeros and that the zeros are isolated.

Remark 3.3 It can also be proven that two zeros of a solution of the above equation cannot be arbitrarily close.
Proof. Observe that u satisfies the equation. Let r* be the first zero of u; then, by the ABP estimate in B_{r*}, since u satisfies M^+_C(D^2 u(|x|)) + a(|x|)u(|x|) = 0 in the L^N viscosity sense (see for example [7]), we obtain an estimate on the supremum of |u|, getting a contradiction if r* is sufficiently small.
Proof of Theorem 1.2. Let u_ν denote the above solutions of (3.11) with initial conditions u_ν(0) = ±1 (here and in the rest of the proof ν ∈ {+, −}). From Lemma 3.1, u_ν has infinitely many zeros. By the previous lemma, all the zeros are simple. Now, if we take r = β^ν_k r̄, where r̄ ∈ [0, 1], and define µ^ν_k = (β^ν_k)^2, then µ^ν_k is an eigenvalue of M^+_C in B_1, with u_ν(β^ν_k r̄), r̄ ∈ [0, 1], the corresponding eigenfunction with k − 1 zeros.
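The rescaling µ^ν_k = (β^ν_k)^2 can be checked numerically in the special case C = {(1, 1)}, where the operator reduces to the radial Laplacian. For dimension N = 3 the oscillatory solution of the initial value problem is sin(r)/r, whose zeros are β_k = kπ, so the radial Dirichlet eigenvalues of B_1 are µ_k = (kπ)^2. A self-contained RK4 shooting sketch (the function name `shoot_zeros` and the choice N = 3 are illustrative assumptions):

```python
import math

def shoot_zeros(N=3, r_max=20.0, h=1e-3, n_zeros=3):
    """Integrate u'' + (N-1)/r * u' + u = 0 with u(0)=1, u'(0)=0
    (the radial Laplacian case) by RK4 and record the zeros of u.
    For N = 3 the exact solution is sin(r)/r, so the k-th zero is k*pi."""
    def f(r, y):
        u, v = y
        return (v, -(N - 1) / r * v - u)
    # series expansion of sin(r)/r near the origin to start off r = 0
    r, y = h, (1.0 - h * h / 6.0, -h / 3.0)
    zeros = []
    while r < r_max and len(zeros) < n_zeros:
        k1 = f(r, y)
        k2 = f(r + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(r + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(r + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y_new = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        if y[0] * y_new[0] < 0:                  # sign change: a zero beta_k
            zeros.append(r + h * y[0] / (y[0] - y_new[0]))
        y, r = y_new, r + h
    return zeros

betas = shoot_zeros()
mus = [b * b for b in betas]      # radial eigenvalues mu_k = beta_k^2
print(betas)                      # approximately [pi, 2*pi, 3*pi]
```

Each zero β_k produces, after rescaling to the unit ball, an eigenfunction with exactly k − 1 interior zeros, matching the statement of Theorem 1.2 in this model case.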
Let now µ be an eigenvalue of M^+_C in B_1 with a radial eigenfunction z(r) such that z(0) > 0. Notice that µ > 0 and, by uniqueness, z(r) = z(0)u_+(µ^{1/2} r). But z(1) = 0, so µ = (β^+_k)^2 for some k ∈ N and z = z(0)u_+. Similarly if z(0) < 0. Therefore, there are no radial eigenvalues other than the µ^ν_k's. From here we obtain Theorem 1.2. Now we will show some properties of the distribution of the eigenvalues.
Proof. We will prove the lemma in terms of the functions defined above. We claim that u_+ has to change sign between two consecutive zeros of u_− if u_+ has the same sign as u_−. Suppose first, by contradiction, that u_−(r_1) = u_−(r_2) = 0, u_−(r) > 0 for all r ∈ (r_1, r_2) and u_+ > 0 for all r ∈ [r_1, r_2]. Let r_3 < r_1 < r_2 < r_4 be the adjacent zeros of u_+, that is, u_+(r_3) = u_+(r_4) = 0 and u_+ > 0 for all r ∈ (r_3, r_4). Then the first half-eigenvalue in A_1 := {r_1 < |x| < r_2} is µ^+(A_1) = 1, and the first half-eigenvalue in A_2 := {r_3 < |x| < r_4} is µ^+(A_2) = 1. Define now u(r) = u_+(βr), with β > 1 such that r_4/β > r_2 and r_3/β < r_1. So u is a positive eigenfunction in an annulus containing A_1, getting a contradiction with the monotonicity of the eigenvalues with respect to the domain, see Theorem 1.1. The same kind of argument can be used in the case when u_− is negative in (r_1, r_2) and u_+ is negative in [r_1, r_2]. Hence the claim follows. By inverting the roles of u_− and u_+ we get the conclusion of the lemma.

Remark 3.4 The above Lemma implies that in the case
The same holds true in the case β^+_k > β^−_k. Next we prove a preliminary non-existence result to prepare the proof of Theorem 1.4; the proof uses some ideas that can be found, for example, in the book [13].
Lemma 3.4 Assume that µ^+_k ≠ µ^−_k and that there exists r_0 ∈ (0, 1) such that u_±(r) > 0 for all r ∈ (r_0, 1]. Then there exists a continuous function g such that there is no solution to the problem

and u'(0) = 0, u(1) = 0, (3.21) for µ between µ^+_k and µ^−_k, with s and θ given by the optimality condition (in the case u > 0, s = S; similarly in the other cases), so that we have the extremal operator.
Now we can compute the Leray-Schauder degree in the radial case and obtain the following.
Proof of Theorem 1.4. Using the same argument of Rabinowitz (a change of degree produces solutions; see [5] for this setting), we obtain the existence of a half-component H^+_k ⊂ R × C([0, 1]) of radially symmetric solutions of (1.4), whose closure H̄^+_k contains (µ^+_k, 0) and is either unbounded or contains a point (µ^±_j, 0), with j ≠ k in the case of µ^+_j. Let us first prove that H^+_k ⊂ S^+_k. By the convergence to the eigenfunction we find a neighborhood N of (µ^+_k, 0) such that N ∩ H^+_k ⊂ S^+_k. Moreover, from Lemma 3.2 we can extend this local property of H^+_k to all of it. Hence H^+_k must be unbounded.
In the rest of the paper we will apply Theorem 1.4 to prove existence for problem (1.1) and to prove Theorem 1.5 and its corollary. For the proof we will need the following Sturm comparison lemma.
with u'(0) = v'(0) = 0, which respectively satisfy −(ρ_u(r)u'(r))' = ρ̃_u(r)u(r)a(r) a.e. in (0, 1), where ρ_u(r) denotes the integrating factor of the equation, ρ̃_u(r) := ρ_u(r)/θ, and ρ_u and θ are characterized by the optimality condition. Then i) If v has a zero in (0, 1), then u does too, and the first zero of u is less than or equal to the first zero of v.
Proof. First we consider the case when u, v > 0 in the corresponding intervals for i) and ii). To prove i), suppose that u does not have a zero; then, using the definition of the maximal operator and integrating, we get a contradiction. To prove ii), suppose that u does not have a zero in [r_0, r_1]; then, integrating (3.26) in [r_0, r_1], we again get a contradiction. Now consider the case when u < 0 and v > 0 in the corresponding intervals for i) and ii). To prove i), using the same argument we have −(ρ_u(r)u'(r))' v(r) = ρ̃_u(r)a(r)u(r)v(r) and −(ρ_u(r)v'(r))' u(r) ≥ ρ̃_u(r)b(r)u(r)v(r).

Let λ̲ and H^+_j, for k ≤ j ≤ n, be the sets of nontrivial solutions of (3.28) given by Theorem 1.4. Let (λ, v) ∈ H^+_j and C = µ_j + sup_{s∈R} |g(s)/s|. Then we claim that λ ≤ C. Indeed, observe that the pair (λ, v) satisfies the equation with b(r) = λ + g(v(r))/v(r). Suppose λ > C; then b(r) > µ_j for r ∈ (0, 1). Using Lemma 3.5 we find that v has at least j zeros in (0, 1), which is impossible; hence λ ≤ C. Now, for λ ∈ [λ̲, C], we claim that there exists M > 0 such that if (λ, u) ∈ H^+_j then ‖u‖_{C[0,1]} ≤ M. Indeed, suppose by contradiction that there exist sequences {λ_n}_{n∈N} in [λ̲, C] and {u_n}_{n∈N} in C^1[0, 1] such that (λ_n, u_n) ∈ H^+_j, λ_n → λ_0 and ‖u_n‖_{L∞} → ∞. Define û_n = u_n/‖u_n‖_{L∞}; then û_n satisfies the equation −(ρ_{û_n}(r)û'_n)' = ρ̃_{û_n} û_n (λ_n + f(u_n)/u_n). (3.29) We can assume, up to a subsequence, that û_n → û in C^2. Also, from the boundedness of g(s)/s, we obtain that the sequence {f(u_n)/u_n} → h in L^∞. Taking limits in (3.29) we obtain a solution of −(ρ_û(r)û')' = ρ̃_û(r)û h(r) a.e. in (0, 1).
Proof of Corollary 1.1. We only need to prove that a solution u_1 which is positive at the origin is different from minus a solution u_2 which is negative at the origin. Suppose by contradiction that u_1 = −u_2; then, using that g is odd and the equations satisfied by u_1 and u_2, we find a contradiction, since C and u_1 are both nontrivial.