Notions of sublinearity and superlinearity for nonvariational elliptic systems

We study existence of solutions of 
boundary-value problems for elliptic systems of type (P_0)
below. We introduce notions of sublinearity and superlinearity for 
such systems and show that sublinear systems always have a 
positive solution, while superlinear systems admit a positive 
solution provided the set of their positive solutions is bounded 
in the uniform norm. These facts have long been known for scalar 
equations.

We pursue two goals. First, we give hypotheses on the nonlinearities f_1, . . . , f_n which make the system sublinear or superlinear, in a sense which extends the standard notions of sublinearity and superlinearity for scalar equations. Second, we are concerned with finding conditions under which (P_0) possesses nontrivial solutions.
In order to introduce the subsequent discussion, we recall a standard definition for scalar equations. We denote by λ_1 the first eigenvalue of the Laplacian in Ω. Let f be a nonnegative Hölder continuous real function. Then the equation

−∆u = f(u), u ≥ 0 in Ω, u = 0 on ∂Ω,

is called sublinear if

lim inf_{u→0^+} f(u)/u > λ_1 > lim sup_{u→∞} f(u)/u,     (2)

that is, there exist positive numbers a, b, r, R, with a > λ_1 > b, such that f(u) ≥ au for 0 ≤ u ≤ r and f(u) ≤ bu for u ≥ R. The equation is called superlinear if the opposite inequalities hold, that is,

lim sup_{u→0^+} f(u)/u < λ_1 < lim inf_{u→∞} f(u)/u.     (3)

There is a very well developed existence theory for both types of scalar equations, which uses, for example, monotone iterations (for the sublinear type), variational techniques, and topological methods (for both types of problems). Standard references on these topics are [2], [15], [9], [6].
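As a simple illustration of these notions (the example is ours, not taken from the text), consider the pure power nonlinearity f(u) = u^p with p > 0:

```latex
% For f(u) = u^p one has f(u)/u = u^{p-1}, so the quotient blows up
% at the origin and vanishes at infinity when 0 < p < 1; the equation
% -\Delta u = u^p is then sublinear.  The limits are exchanged for
% p > 1, which gives the superlinear case.
\frac{f(u)}{u} = u^{\,p-1}, \qquad
\lim_{u\to 0^{+}} u^{\,p-1} = +\infty, \qquad
\lim_{u\to \infty} u^{\,p-1} = 0 \qquad (0 < p < 1).
```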
It is our purpose here to show that the notions of sublinearity and superlinearity, as defined for a scalar equation, can be extended to a general system of type (P_0) through a simple matrix notation. We show that for both types of systems standard topological methods (fixed point theorems and index theory) make it possible to prove the existence of nontrivial solutions of problem (P_0).
Let us introduce some notations and conventions. We write system (P_0) in the form

−∆U = F(U), U ≥ 0 in Ω, U = 0 on ∂Ω,

where U = (u_1, . . . , u_n)^T ∈ R^n and F = (f_1, . . . , f_n)^T. On R^n we shall use the norm ‖U‖ = max_{1≤i≤n} |u_i|. Throughout the paper equalities and inequalities between vectors or matrices will be understood to hold component-wise.
Further, if A and B are two square matrices, we write A ≺ B if

U ≥ 0 and (B − A)U ≤ 0 imply U = 0.     (5)

We shall also consider another way of defining the relation "≺", namely

A ≺ B if B − A is positive definite.     (6)

Note that, geometrically, (5) means that A ≺ B if the (closed) positive cone generated by the column vectors of B − A does not meet the negative hyper-quadrant {U ≤ 0}, except at the origin. It is obvious that B − A has this property if B − A is positive definite (take a vector U ≥ 0 such that (B − A)U ≤ 0, and multiply this inequality scalarly by U). So (5) is more general than (6); see the discussion at the end of this section. Finally, we denote by I the identity matrix. Next we give our definitions of sublinearity and superlinearity for (P_0).
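As a computational aside (not part of the original text), relation (5) for the matrix M = B − A can be tested numerically as a linear-programming feasibility problem, while (6) amounts to positive definiteness of the symmetric part of M. The helper names below are ours; this is a sketch using numpy and scipy:

```python
import numpy as np
from scipy.optimize import linprog

def satisfies_5(M):
    """Check relation (5) for M = B - A: no nonzero U >= 0 with M U <= 0.
    Any such U can be normalized to sum(U) = 1, so we test feasibility of
    the linear program { M U <= 0, U >= 0, sum(U) = 1 }."""
    n = M.shape[0]
    res = linprog(c=np.zeros(n),
                  A_ub=M, b_ub=np.zeros(n),
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, None)] * n)
    return not res.success  # infeasible <=> property (5) holds

def satisfies_6(M):
    """Check relation (6): x.M x > 0 for x != 0, i.e. the symmetric
    part of M is positive definite."""
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() > 0)

# (6) implies (5), but not conversely:
M = np.array([[1.0, -10.0],
              [0.0,   1.0]])  # satisfies (5), not positive definite
print(satisfies_5(M), satisfies_6(M))                   # -> True False
print(satisfies_5(np.eye(2)), satisfies_6(np.eye(2)))   # -> True True
```

The example matrix M shows concretely that (5) is strictly more general than (6), in line with the discussion above.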
(H_∞) there exist R > 0 and a matrix A ∈ M_n(R) such that F(U) ≥ AU for every U ∈ R^n whose coordinates satisfy u_i ≥ R, i = 1, . . . , n, and λ_1 I ≺ A.

Remark 1. When n = 1, Definition 1 (resp. Definition 2) reduces to (2) (resp. (3)).

Remark 2. Note that the condition in (H_∞) requires that the inequality F(U) ≥ AU be satisfied only by the vectors U whose coordinates are all larger than R, and not by all vectors with ‖U‖ ≥ R.

Remark 3. If F is differentiable at 0 ∈ R^n then (H_0) can be written as

λ_1 I ≺ F′(0),     (7)

where F′(0) is the matrix of the partial derivatives of f_i at the origin. In case (6) is used to define the relation "≺", F′(0) ≻ λ_1 I implies (7) but not vice versa. In case (5) is used, these two are equivalent.

With these definitions we can prove, by using index theory, that a sublinear system of type (P_0) always admits a nontrivial solution, while a superlinear system has such a solution provided (P_0) admits a priori estimates. These facts are well known for scalar equations, see for example [3], [7]. By a nontrivial solution of (P_0) we shall mean a vector U which satisfies (P_0) in the classical sense and has at least one component which does not vanish identically in Ω (and which is then strictly positive in Ω, by the strong maximum principle). If we suppose in addition that the system is cooperative and appropriately coupled (for example, fully coupled in the sense that it cannot be split into two subsystems one of which does not depend on the other), then all components of a nontrivial solution are strictly positive in Ω, by the strong maximum principle. For a notion of full coupling and a strong maximum principle for nonlinear elliptic systems we refer to [5].
Here are our main results. The first theorem concerns sublinear systems and is, to our knowledge, the first existence result for general nonvariational systems of more than two equations.

Theorem 1. If system (P_0) is sublinear according to Definition 1, then (P_0) has a nontrivial solution.
In order to state the result for superlinear systems we need to consider the auxiliary system (P_t), t ≥ 0, obtained from (P_0) by replacing the condition u_i ≥ 0 by u_i ≥ t in Ω and the condition u_i = 0 by u_i = t on ∂Ω.

Theorem 2. Suppose system (P_0) is superlinear according to Definition 2, and suppose that for each t_0 > 0 there exists a constant M(t_0) such that ‖u_i‖_{L^∞(Ω)} ≤ M(t_0), i = 1, . . . , n, for any solution (u_1, . . . , u_n) of (P_t), t ∈ [0, t_0]. Then (P_0) has a nontrivial solution.

Remark 1. A priori bounds for systems are a very active area of research. A priori bounds for general systems of type (P_0) were obtained by Nussbaum [14], but they require quite restrictive growth hypotheses on the nonlinearities. We refer also to [17] and to the forthcoming paper [8], where much sharper a priori estimates for systems of two equations are established.

Remark 2. The condition in Theorem 2 is equivalent to the existence of a priori estimates for (P_0) only, under any of the known hypotheses which ensure that a system of type (P_0) admits a priori estimates, namely, polynomial growth of the functions f_i (see the papers quoted in Remark 1, and the references in these papers).

Remark 3. All results in the paper hold if we replace the hypothesis that the nonlinearities f_i are nonnegative by the weaker hypothesis that there exist constants ξ_i ≥ 0 such that f_i(U) + ξ_i u_i ≥ 0. In this case one simply has to add ξ_i u_i to both sides of the i-th equation in system (P_0).
Index theory and its applications to the search for fixed points of nonlinear maps were essentially developed by Krasnoselskii, see [11]. We shall use here an extension of Krasnoselskii's results, due to Benjamin [4]. For a general survey of the Leray-Schauder degree and its applications to nonlinear differential equations we refer to Mawhin [13].
Let us now review the previous works on our subject. A reference for the use of index theory as a tool for obtaining existence results for sublinear scalar equations is the paper [3]; see also the survey paper [2]. In [14] Nussbaum considered superlinear systems and essentially proved Theorem 2 under the additional hypotheses that in (H_∞) we have A = (λ_1 + ε)I for some ε > 0 and that in (H_0) the sum of the entries in each column of B is strictly smaller than λ_1 (this condition easily implies B ≺ λ_1 I). In [12] Liu gave a definition of sublinearity which applies to systems of two equations. Liu's definition turns out to be equivalent to (H_0) and (H_∞) for n = 2, under some additional hypotheses made in [12]. Finally, in [1] Alves and de Figueiredo extended Liu's results, in the framework of systems of two equations as well, by considering superlinear systems and by giving various hypotheses on the 2×2 matrices from the definitions under which Theorems 1 and 2 hold. The present paper unifies and generalizes all these results.
Finally, we give some discussion concerning hypotheses (5) and (6). When only positive solutions and positive nonlinearities in (P_0) are considered, as in the present paper, the right way to define sublinearity and superlinearity is in general through (5). For example, the system (9) never falls under either of Definitions 1 and 2 if (6) is used in these definitions. On the other hand, (9) is sublinear for 0 < p, q, r < 1 and superlinear for p, q, r > 1 if (5) is used in Definitions 1 and 2.
The reason for which we have paid some attention to the possibility of associating positive definiteness with the notions of sublinearity and superlinearity is that we believe this, though of little interest with respect to the goals of the present paper, will prove to be necessary when dealing with systems of type (P_0) in other contexts, for example when sign-changing solutions are considered or other methods for establishing existence are employed. For instance, the functional associated with the simple linear system −∆U = AU contains the integral ∫_Ω (AU · U) dx, and clearly a condition of definiteness of A is needed to get estimates for this integral.
Remark. More precise results are known for some particular systems which possess variational structure. For example, the famous Lane-Emden system

−∆u = v^p, −∆v = u^q, u, v ≥ 0 in Ω, u = v = 0 on ∂Ω,

has the properties of a sublinear (resp. superlinear) system if p, q > 0, pq < 1 (resp. pq > 1). Such a precise result is of course out of reach for general systems of type (P_0).
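The role of the product pq can be seen from a simple scaling heuristic (ours, not taken from the text): replacing u by tu in the Lane-Emden system forces v to scale like t^q, so the right-hand side of the first equation scales like t^{pq}:

```latex
% Scaling heuristic for -\Delta u = v^p, -\Delta v = u^q:
% the substitution u -> t u forces v -> t^q v (second equation),
% and then v^p -> t^{pq} v^p; the system therefore behaves like a
% scalar equation with nonlinearity u^{pq}, sublinear for pq < 1
% and superlinear for pq > 1.
u \mapsto t\,u \;\Longrightarrow\; v \mapsto t^{q}\,v
\;\Longrightarrow\; v^{p} \mapsto t^{pq}\,v^{p}, \qquad t > 0 .
```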

Proofs
The proofs of Theorems 1 and 2 are based on some classical results from index theory. We use a standard functional setting, as in most of the papers quoted in the introduction. We describe this setting in Section 2.1. Section 2.2 is devoted to the proof of Theorem 1. We propose two different proofs of this result. In one of these proofs we first assume that the relation "≺" in Definition 1 is defined through (6); we explained at the end of the introduction why we find it important to understand this case. Once we obtain a proof of Theorem 1, we explain how, under our hypotheses, an algebraic proposition permits us to extend the argument to the more general situation in which (5) is used. In the second proof we also use properties of the relation "≺", together with a bootstrap argument.
Next, in Section 2.3 we give a proof of Theorem 2. The proof is relatively simple, which is natural since -as is well known -the difficulty in obtaining existence results for superlinear equations via topological methods lies in finding a priori bounds.
Finally, in Section 2.4 we prove two algebraic propositions which are used in the proof of Theorem 1.

The Setting
Here we briefly recall the results that we shall use. Let C be a closed cone in a Banach space, and let B_r = {x ∈ C : ‖x‖ < r}. We are going to use the following theorem (see Proposition 2.1 and Remark 2.1 in [7]).
Theorem 3. Let T : B̄_r → C be a compact mapping. Let σ, ρ ∈ (0, r), σ ≠ ρ, be such that

(i) Tx ≠ tx for all x ∈ ∂B_σ and all t ≥ 1;

and there exists a compact mapping H : B̄_ρ × [0, ∞) → C such that

(ii) H(x, 0) = Tx for all x ∈ ∂B_ρ;

(iii) H(x, t) ≠ x for all x ∈ ∂B_ρ and all t ≥ 0;

(iv) there exists t_0 > 0 such that H(x, t) ≠ x for all x ∈ B̄_ρ and all t ≥ t_0.

Then there exists a fixed point x of T (i.e. Tx = x) such that ‖x‖ is between σ and ρ.
Note that (i) implies that i_C(T, B_σ) = 1, while (ii), (iii), and (iv) imply i_C(T, B_ρ) = 0, so Theorem 3 follows from the excision property of the index.
We denote by X the space C_0(Ω)^n and introduce the linear mapping S : X → X, where U = S(V) is the solution of −∆U = V in Ω, U = 0 on ∂Ω. We set T(U) = S(F(U)) and note that T maps X compactly into itself, by standard regularity and imbedding theorems. With this notation, solving (P_0) clearly amounts to finding a fixed point of T in the cone

C = {U ∈ X : U ≥ 0 in Ω}.

Of course T maps C into itself, by the maximum principle. Consequently, finding a nontrivial fixed point of T in C will be our task in the following sections.
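To make this functional setting concrete, here is a small numerical sketch (ours, with an illustrative sublinear, cooperative nonlinearity that is not taken from the paper): on the interval (0, π) in one dimension, we discretize the solution operator S, form T(U) = S(F(U)), and run a Picard iteration, which converges to a positive fixed point.

```python
import numpy as np

# 1-D toy discretization of the setting above, on Omega = (0, pi):
# S inverts the Dirichlet Laplacian, T(U) = S(F(U)), and we look for
# a positive fixed point of T by Picard iteration.
m = 200                                   # interior grid points
h = np.pi / (m + 1)
L = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2   # -d^2/dx^2, Dirichlet

def S(V):
    """Solution operator: S(V) = (-Laplacian)^(-1) V, zero boundary data."""
    return np.linalg.solve(L, V)

def F(u1, u2):
    """A coupled, nonnegative, sublinear nonlinearity (illustrative only):
    each component grows like the cube root of the other one."""
    return np.cbrt(1.0 + u2), np.cbrt(1.0 + u1)

u1 = np.zeros(m)
u2 = np.zeros(m)
for _ in range(200):                      # Picard iteration U <- T(U)
    f1, f2 = F(u1, u2)
    u1, u2 = S(f1), S(f2)

print(u1.min() > 0 and u2.min() > 0)      # prints: True
```

Starting from U = 0, the iterates increase monotonically because F is nondecreasing in each variable and S preserves positivity; this mirrors the monotone-iteration method mentioned in the introduction for sublinear problems.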
We are going to show that the hypotheses of Theorem 3 are satisfied by the mappings T and H. Note that H(U, t) = S(F(U) + tΦ_1), where Φ_1 = (ϕ_1, . . . , ϕ_1)^T and ϕ_1 is the first eigenfunction of the Laplacian in Ω; hence H(U, t) = U is equivalent to the system

−∆u_1 = f_1(u_1, . . . , u_n) + tϕ_1 in Ω
−∆u_2 = f_2(u_1, . . . , u_n) + tϕ_1 in Ω
· · ·
−∆u_n = f_n(u_1, . . . , u_n) + tϕ_1 in Ω
u_i ≥ 0, i = 1, . . . , n, in Ω
u_i = 0, i = 1, . . . , n, on ∂Ω.     (11)

First of all, hypothesis (ii) in Theorem 3 is clearly verified by H. Let us now show that hypotheses (iii) and (iv) in Theorem 3 hold with ρ = r, where r is the number which appears in (H_0). Suppose that H(U, t) = U for some U ∈ C, ‖U‖ ≤ r, and some t ∈ [0, ∞). We multiply each equation in (11) by ϕ_1 and integrate over Ω. After integration by parts we obtain, by (H_0),

λ_1 ∫_Ω u_i ϕ_1 dx ≥ Σ_{j=1}^n b_{ij} ∫_Ω u_j ϕ_1 dx + t ∫_Ω ϕ_1^2 dx, i = 1, . . . , n,

provided ‖U‖ ≤ r. In other words, setting

V = (∫_Ω u_1 ϕ_1 dx, . . . , ∫_Ω u_n ϕ_1 dx)^T,

we have V ≥ 0 and (B − λ_1 I)V ≤ −tc1 ≤ 0, where c = ∫_Ω ϕ_1^2 dx > 0 and 1 = (1, . . . , 1)^T. But λ_1 I ≺ B by (H_0), so (5) and the last inequality imply V = 0, t = 0. Since U is nonnegative, V = 0 implies U ≡ 0.

Next we are going to show that hypothesis (i) of Theorem 3 is satisfied by T, which will conclude the proof of Theorem 1. Suppose for contradiction that (i) does not hold, that is, for any σ > ρ we can find a vector U and a number t ≥ 1 such that ‖U‖ ≥ σ and

−∆u_1 = t^{−1} f_1(u_1, . . . , u_n) in Ω
−∆u_2 = t^{−1} f_2(u_1, . . . , u_n) in Ω
· · ·
−∆u_n = t^{−1} f_n(u_1, . . . , u_n) in Ω
u_i ≥ 0, i = 1, . . . , n, in Ω
u_i = 0, i = 1, . . . , n, on ∂Ω.     (13)

By (H_∞) and the continuity of F we have

F(U) ≤ AU + k1 for all U ∈ R^n_+,     (14)

where k is a constant. As we explained above, we are going to exhibit two ways to reach a contradiction with (13). First, suppose that (H_∞) holds with, in addition, A ≺ λ_1 I in the sense of (6).
By multiplying the i-th inequality in (13) by u_i and by integrating over Ω we obtain, after an integration by parts and a use of (14) (recall that t ≥ 1),

∫_Ω |∇u_i|^2 dx ≤ Σ_{j=1}^n a_{ij} ∫_Ω u_i u_j dx + k ∫_Ω u_i dx, i = 1, . . . , n.     (15)

By using the Poincaré inequality and by summing the n inequalities in (15) we get

λ_1 ∫_Ω |U|^2 dx ≤ ∫_Ω (AU · U) dx + C_1 (∫_Ω |U|^2 dx)^{1/2};

here |·| denotes the l^2-norm on R^n. We now fix ε > 0 such that AU · U ≤ (λ_1 − ε)|U|^2 for all U ∈ R^n, which is possible since λ_1 I − A is positive definite; this yields a bound for U in L^2(Ω)^n. Going back to (13) and using standard elliptic estimates we obtain ‖U‖ ≤ C_3, which is a contradiction, since we can take σ > C_3. It turns out that the additional condition A ≺ λ_1 I in the sense of (6) is actually not a restriction under our hypotheses. To prove this, observe that the inequality F(U) ≤ AU for all U ∈ R^n_+, ‖U‖ ≥ R, together with F ≥ 0, implies that all entries of A are nonnegative. Hence all off-diagonal entries of λ_1 I − A are nonpositive and we can use the following proposition.
We give the elementary algebraic proof of Proposition 2.1 in Section 2.4. This finishes the proof of Theorem 1. □

We shall next give an alternative proof of the fact that (13) cannot hold for vectors with arbitrarily large norm. We multiply (13) by the first eigenfunction φ̃_1 of the Laplacian in a domain Ω̃ slightly larger than Ω, such that Ω ⊂⊂ Ω̃ and λ_1(−∆, Ω̃) > λ_1(−∆, Ω) − ε, for some ε > 0 to be chosen. After integration over Ω and use of the Green identity (note that U = 0 on ∂Ω and U ≥ 0 in Ω imply ∂U/∂ν ≤ 0, where ν is the exterior normal to ∂Ω), together with (14), we obtain

(λ_1(−∆, Ω̃) I − A) V ≤ K,     (18)

where K is a constant vector and V = (∫_Ω u_1 φ̃_1 dx, . . . , ∫_Ω u_n φ̃_1 dx)^T. Here we use the following proposition.
Proposition 2.2. Suppose the matrix D satisfies property (19). Then (a) there exists ε > 0 such that D − εI has the same property (19); and (b) for each vector K there exists a constant C = C(K, D) such that every V ≥ 0 with DV ≤ K satisfies ‖V‖ ≤ C.
We give the proof of this proposition in Section 2.4. It follows from (18), (H_∞), and Proposition 2.2 that ‖V‖ ≤ C(K, A). Since the restriction of φ̃_1 to Ω is bounded below by a positive constant, it follows that U is bounded in L^1(Ω)^n. Multiplying (13) scalarly by U and integrating yields an estimate of the form ∫_Ω |∇U|^2 dx ≤ c(1 + ‖U‖), where c is a constant which depends only on F and Ω and may change from line to line. On the other hand, a standard bootstrap argument in (20) yields an estimate of ‖U‖ in terms of ∫_Ω |∇U|^2 dx. Combining the last two inequalities we infer that ‖U‖ is bounded, which is what we need to finish the proof of Theorem 1.

Superlinear Systems. Proof of Theorem 2
We are going to use Theorem 3 again. First we show that (H_0) permits us to verify hypothesis (i). Suppose U is a solution of TU = tU with t ≥ 1, that is, (13) holds. By (H_0) we have F(U) ≤ BU provided ‖U‖ ≤ r. As before we multiply the i-th equation in (13) by ϕ_1 and integrate by parts, to obtain t λ_1 V ≤ BV, where

V = (∫_Ω u_1 ϕ_1 dx, . . . , ∫_Ω u_n ϕ_1 dx)^T.     (24)

Since t ≥ 1 we get (λ_1 I − B)V ≤ 0; as V ≥ 0 and B ≺ λ_1 I, this implies V = 0 and hence U ≡ 0. Therefore (i) is satisfied for any σ ∈ (0, r).
We now turn to the remaining three conditions required for Theorem 3 to hold. Here we define H(U, t) = T(U + t1), where 1 = (1, . . . , 1)^T.
By multiplying each inequality in (26) by ϕ_1 and by an integration over Ω we obtain, for t ≥ R,

λ_1 V ≥ A(V + ta1),

where a = ‖ϕ_1‖_{L^1(Ω)} > 0 and V ≥ 0 is defined in (24). This is equivalent to

(A − λ_1 I)(V + ta1) ≤ −λ_1 ta1 ≤ 0.

By (H_∞) this implies V + ta1 = 0, which contradicts t ≥ R. Thus we have proved that (iii) holds for t ≥ R, and that (iv) holds. Finally, the validity of hypothesis (iii) for t < R is a consequence of the a priori estimates for (P_t), which we assume in Theorem 2 (see also Remark 2 after this theorem). So we have verified the hypotheses of Theorem 3 with ρ = max{σ_0, M(R)} + 1.

Auxiliary results
This section contains the proofs of Propositions 2.1 and 2.2. Though elementary, these proofs are included for completeness.
Proof of Proposition 2.1. We already explained in the introduction that if D is positive definite then D satisfies (17). We are going to prove the converse. Given a matrix D, we denote by D_{ij} the minor of D obtained by removing its i-th line and j-th column, and by D_{ij,kl} the minor obtained by removing the k-th line and the l-th column of D_{ij}. The proof proceeds by induction on the dimension, through the following claims: (a) each matrix D_{ii}, i = 1, . . . , n, satisfies (17); (b) the corresponding signed cofactors of D are nonnegative; (c) the matrix D is positive definite.
Proof. To prove (a), let for instance i = n and suppose for contradiction that there exists Ū ∈ R^{n−1} \ {0} such that Ū ≥ 0 and D_{nn} Ū ≤ 0. Then the vector U = (Ū, 0) violates (17).
Suppose now (b) and (c) hold for matrices of dimension n − 1; in particular, (b) and (c) hold for D replaced by D_{ii}, i = 1, . . . , n.
We first prove (b) for D. To this end, for each k ≠ j we introduce the indices m_{ij} and θ_{jk} (the positions which the deleted line and column occupy after the removals). Then, by developing det D_{ij} with respect to the j-th line of D_{ij},

det D_{ij} = Σ_{k≠j} d_{jk} (−1)^{m_{ij}+θ_{jk}} det D_{ij,jk}.
However, it is easy to check that D_{ij,jk} is also the minor of D obtained by removing the m_{ij}-th line and the θ_{jk}-th column of D_{jj}. Since the induction hypothesis applies to D_{jj}, we have (−1)^{m_{ij}+θ_{jk}} det D_{ij,jk} ≥ 0.
To prove (c), we use the same reasoning. The only difference is that now