Fractional diffusion with Neumann boundary conditions: the logistic equation

Motivated by experimental studies on the anomalous diffusion of biological populations, we introduce a nonlocal differential operator which can be interpreted as the spectral square root of the Laplacian in bounded domains with homogeneous Neumann boundary conditions. Moreover, we study related linear and nonlinear problems by exploiting a local realization of this operator, as performed in [X. Cabré and J. Tan. Positive solutions of nonlinear problems involving the square root of the Laplacian. Adv. Math. 2010] for homogeneous Dirichlet data. In particular, we tackle a class of nonautonomous nonlinearities of logistic type, proving existence and uniqueness results for positive solutions by means of variational methods and bifurcation theory.


Introduction
Nonlocal operators, and notably fractional ones, are a classical topic in harmonic analysis and operator theory, and they have recently become impressively popular because of their connection with many real-world phenomena, from physics [20,14,21] to mathematical nonlinear analysis [1,24], from finance [4,13] to ecology [6,23,17,5]. A typical example in this context is provided by Lévy flights in ecology: optimal search theory predicts that predators should adopt search strategies based on long jumps (frequently called Lévy flights) where prey is sparse and distributed unpredictably, Brownian motion being more efficient only for locating abundant prey (see [25,29,17]). While the dynamics of a population dispersing via random walk is well described by a local operator (typically the Laplacian), Lévy diffusion processes are generated by fractional powers of the Laplacian $(-\Delta)^s$, $s \in (0,1)$, on the whole $\mathbb{R}^N$. These operators on $\mathbb{R}^N$ can be defined equivalently in different ways, all of them highlighting their nonlocal nature; but, as shown in [8] and [9], they also admit local realizations: the fractional Laplacian of a given function $u$ corresponds to the Dirichlet-to-Neumann map of a suitable extension of $u$ to $\mathbb{R}^N \times [0,+\infty)$. On bounded domains, by contrast, different non-equivalent definitions are available (see e.g. [15,3,7] and references therein). This variety reflects the different ways in which the boundary conditions can be understood in the definition of the nonlocal operator. In particular, we wish to mention the recent paper by Cabré and Tan [7], where the operator $(-\Delta)^{1/2}$ on a bounded domain $\Omega \subset \mathbb{R}^N$, associated to homogeneous Dirichlet boundary conditions, is defined by Fourier series, using a basis of corresponding eigenfunctions of $-\Delta$.
Their point of view allows one to recover, also in the case of a bounded domain, the aforementioned local realization: indeed, interpreting $\Omega = \Omega \times \{0\}$ as a part of the boundary of the cylinder $\Omega \times (0,+\infty) \subset \mathbb{R}^{N+1}$, the Dirichlet spectral square root of the Laplacian coincides with the Dirichlet-to-Neumann map for functions which are harmonic in the cylinder and zero on its lateral surface. These arguments can be extended to different powers of $-\Delta$, see [12]. On the other hand, in population dynamics, Neumann boundary data are as natural as Dirichlet ones, as they represent a boundary acting as a perfect barrier for the population. The aim of this paper is then to provide a first contribution to the study of the spectral square root of the Laplacian with Neumann boundary conditions. Inspired by [7], our first goal is to provide a formulation of problem (1.1), where $\Omega$ is a $C^{2,\alpha}$ bounded domain in $\mathbb{R}^N$, $N \geq 1$, and $f$ can be thought of, for instance, as an $L^2(\Omega)$ function. To this aim, let us denote by $(\varphi_k)_{k \geq 0}$ an orthonormal basis of $L^2(\Omega)$ formed by eigenfunctions associated to the eigenvalues $\mu_k$ of the Laplace operator subject to homogeneous Neumann boundary conditions, that is, solutions of the eigenvalue problem (1.2). We can then define the operator $(-\Delta)^{1/2} \colon H^1(\Omega) \to L^2(\Omega)$ by

(1.3)  $(-\Delta)^{1/2} u = \sum_{k=1}^{+\infty} \mu_k^{1/2} u_k \varphi_k$  for $u$ given by $u = \sum_{k=0}^{+\infty} u_k \varphi_k$.
The first series in (1.3) starts from $k = 1$ since the first eigenpair in (1.2) is $(\mu_0, \varphi_0) = (0, 1/\sqrt{|\Omega|})$. This simple difference with the Laplacian subject to homogeneous Dirichlet boundary conditions has considerable effects. First of all, it implies that $(-\Delta)^{1/2}$, like the usual Neumann Laplacian, has a nontrivial kernel made of the constant functions; hence it is not an invertible operator, and (1.1) cannot be solved without imposing additional conditions on the datum $f$. On the other hand, given any $u$ defined on $\Omega$, its harmonic extension to $\mathcal{C} := \Omega \times (0,+\infty)$ having zero normal derivative on the lateral surface need not belong to any Sobolev space, as constant functions show. These features have to be taken into account when establishing the functional framework for the variational formulation of (1.1). In this direction, we will first provide a proper interpretation of (1.1), and a corresponding local realization, in the zero-mean setting. To this aim, let us introduce the space of functions defined in the cylinder $\mathcal{C}$

$\widetilde{H}^1(\mathcal{C}) := \{ v \in H^1(\mathcal{C}) : \int_\Omega v(x,y)\, dx = 0, \ \forall\, y \in (0,+\infty) \}$.
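To fix ideas, the following minimal numerical sketch (a hypothetical 1D discretization on $\Omega = (0,1)$, introduced here only for illustration and not taken from the paper) builds the spectral half-Laplacian from the Neumann eigenpairs $\mu_k = (k\pi)^2$, $\varphi_0 = 1$, $\varphi_k = \sqrt{2}\cos(k\pi x)$, and checks that constants lie in its kernel while its square acts as $-\Delta$ on the first nonconstant mode.

```python
import numpy as np

# Hypothetical 1D discretization of the spectral half-Laplacian on Omega=(0,1)
# with Neumann eigenpairs mu_k = (k*pi)^2, phi_0 = 1, phi_k = sqrt(2)cos(k*pi*x).
n = 64
x = (np.arange(n) + 0.5) / n                # midpoint grid (DCT-II nodes)
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)                  # columns orthonormal w.r.t. (1/n)*sum
A = (Phi * np.sqrt(mu)) @ Phi.T / n         # A u = sum_{k>=1} mu_k^{1/2} u_k phi_k

print(np.abs(A @ np.ones(n)).max())         # constants are in the kernel
print(np.abs(A @ (A @ np.cos(np.pi * x))
             - np.pi**2 * np.cos(np.pi * x)).max())   # A^2 = -Laplacian on mode 1
```

Since $\sqrt{\mu_0} = 0$, the mode $k = 0$ is annihilated automatically; this is the discrete counterpart of the nontrivial kernel of constants discussed above.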
An easy application of the Poincaré–Wirtinger inequality shows that we can choose as a norm of $v \in \widetilde H^1(\mathcal{C})$ the $L^2$ norm of the gradient of $v$ (see Proposition 2.2 and Lemma 2.3). It turns out that, when the datum $f$ has zero mean, a possible solution of (1.1) is the trace of a function belonging to $\widetilde H^1(\mathcal{C})$. The corresponding space of traces can be equivalently defined in different ways, since Proposition 2.4 provides several characterizations of the space $\widetilde H^{1/2}(\Omega) := \{ u \in H^{1/2}(\Omega) : \int_\Omega u \, dx = 0 \}$. In proving this result, one obtains that every $u \in \widetilde H^{1/2}(\Omega)$ has a harmonic extension $v \in \widetilde H^1(\mathcal{C})$ given by (1.4), which is also the unique weak solution of problem (1.5). Thus, given $u \in \widetilde H^{1/2}(\Omega)$ we can find a unique $v \in \widetilde H^1(\mathcal{C})$ solving (1.5), for which the functional acting on $\widetilde H^{1/2}(\Omega)$ as $\langle -\partial_y v(\cdot,0), g \rangle := \int_{\mathcal{C}} \nabla v \cdot \nabla \tilde g \, dx\, dy$ is well defined, where $\tilde g$ is any $\widetilde H^1(\mathcal{C})$ extension of $g$. Since this functional is actually an element of the dual of $\widetilde H^{1/2}(\Omega)$, the operator $L_{1/2}(u) = -\partial_y v(\cdot,0)$ between $\widetilde H^{1/2}(\Omega)$ and its dual is well defined. Thus, restricting the study to the zero-mean function spaces, and taking into account equations (1.3) and (1.4), we have that $L_{1/2}$ coincides with $(-\Delta)^{1/2}$, but it is invertible: for every $f$ in the dual space of $\widetilde H^{1/2}(\Omega)$ there exists a unique $u \in \widetilde H^{1/2}(\Omega)$ such that $L_{1/2} u = f$, and this function $u$ is the trace on $\Omega$ of the unique solution $v \in \widetilde H^1(\mathcal{C})$ of the corresponding extension problem (see Lemma 2.14). The link between $L_{1/2}$ and $(-\Delta)^{1/2}$ now becomes transparent: the image of a function $u$ through $(-\Delta)^{1/2}$ is the same as the one yielded by $L_{1/2}$ acting on the zero-mean component of $u$ (see Definition 2.12). In this way we have recovered the local realization of $(-\Delta)^{1/2}$ as a Dirichlet-to-Neumann map. Therefore, if $f$ has zero mean, denoting by $\tilde u(x) = \tilde v(x,0)$ the unique solution of $L_{1/2}\tilde u = f$, the solution set of (1.1) is given by $\tilde u + h$ for $h \in \mathbb{R}$.
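The invertibility of $L_{1/2}$ on zero-mean data can be mimicked discretely. In the following sketch (a hypothetical 1D discretization on $\Omega = (0,1)$, for illustration only), dividing the Fourier coefficients of a zero-mean datum $f$ by $\mu_k^{1/2}$ for $k \geq 1$ produces the zero-mean solution $\tilde u$, and $\tilde u + h$ solves the equation for every constant $h$.

```python
import numpy as np

# Hypothetical 1D setup: invert L_{1/2} on a zero-mean datum by dividing
# Fourier coefficients by mu_k^{1/2}, k >= 1.
n = 64
x = (np.arange(n) + 0.5) / n
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)
A = (Phi * np.sqrt(mu)) @ Phi.T / n

f = np.cos(2 * np.pi * x)               # zero-mean datum
c = Phi.T @ f / n                       # Fourier coefficients of f
c[1:] /= np.sqrt(mu[1:])                # u_k = f_k / mu_k^{1/2} for k >= 1
c[0] = 0.0                              # zero-mean normalization
u = Phi @ c
print(np.abs(A @ u - f).max())          # L_{1/2} u = f
print(np.abs(A @ (u + 3.0) - f).max())  # u + h solves (1.1) for every h
```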
Since we are interested in ecological applications, as a first study we focus our attention on the logistic equation. More precisely, consider a population dispersing via the above-defined anomalous diffusion in a bounded region $\Omega$, with Neumann boundary conditions, and growing logistically within the region; then $u$, the population density, solves the diffusive equation (1.6), where $d > 0$ acts as a diffusion coefficient, the term $-u^2$ expresses the self-limitation of the population, and $m \in C^{0,1}(\overline\Omega)$ corresponds to the birth rate of the population when self-limitation is ignored. The weight $m$ may be positive or negative in different regions, denoting favorable or hostile habitat, respectively. The stationary states of this equation are the solutions of the nonlinear problem (1.7), where $\lambda = 1/d > 0$. When the diffusion follows the rules of Brownian motion, this model was introduced in [26] and has been studied by many authors (see [11] and the references therein). One of the major tasks in this problem is describing how favorable and unfavorable habitats, represented by the interaction between $\lambda$ and $m$, affect the overall suitability of an environment for a given population [10]. The typical known facts for the stationary problem associated to Brownian motion can be summarized as follows:

i) If $m$ changes sign and has negative average, then there exists $\mu_1 > 0$ such that for every $\lambda > \mu_1$ there exists a unique positive solution $u_\lambda$ of (1.8), and $u_\lambda \to 0$ as $\lambda \to \mu_1^+$.

ii) If $m$ has nonnegative average, then for every $\lambda > 0$ there exists a unique positive solution $u_\lambda$ of (1.8), and $u_\lambda \to h^*$ as $\lambda \to 0^+$, for $h^*$ expressed by (1.9).

The number $\mu_1$ appearing in i) is the first positive eigenvalue with positive eigenfunction of the operator $-\Delta$ with Neumann boundary conditions and with a weight $m$ satisfying the hypotheses in i).
In our situation, we first have to clarify what we mean by a weak positive solution of (1.7); this is made precise in Definition 3.1. In other words, we impose that the right-hand side has zero mean, choosing in this way the mean of a solution $u$ as $h = h_u$. Then the problem is well posed, since the right-hand side now has zero mean, and we obtain in this way the zero-mean part of $u$. Moreover, notice that the mean of the function $v(x,y)$ solving (1.10), with $v(x,y) = \tilde v + h_u$ and $\tilde v(\cdot,0)$ of zero mean, is exactly the mean of $u$.
Our main existence result is the following.

Theorem 1.2. Let $m \in C^{0,1}(\overline\Omega)$, $m \not\equiv 0$. Then the following conclusions hold:

i) If $\int_\Omega m(x)\, dx < 0$ and there exists $x_0 \in \Omega$ such that $m(x_0) > 0$, then there exists a positive number $\lambda_1$ such that for every $\lambda > \lambda_1$ there exists a unique positive solution $u_\lambda$ of (1.7), with $u_\lambda \to 0$ as $\lambda \to \lambda_1^+$.

ii) If $\int_\Omega m(x)\, dx \geq 0$, then for every $\lambda > 0$ there exists a unique positive solution $u_\lambda$ of (1.7), with $u_\lambda \to h^*$ as $\lambda \to 0^+$.
As in the standard diffusion case, $\lambda_1$ is the first positive eigenvalue with positive eigenfunction of the associated weighted eigenvalue problem, whose existence is proved in Theorem 3.7. Theorem 1.2 will be obtained via classical bifurcation theory: indeed, in case i), we can show that a smooth cartesian branch of positive solutions bifurcates from the trivial solution $(\lambda, h, \tilde u) = (\lambda_1, 0, 0)$; this branch can be continued over the whole interval $(\lambda_1, +\infty)$ and contains all the positive solutions of (1.7), that is to say, for every $\lambda > \lambda_1$ there exists a unique positive solution (see Proposition 3.15 and Theorem 3.20). We tackle case ii) by first assuming that the mean of $m$ is positive. This allows us to choose as bifurcation parameter $h$, the future mean of $u$, instead of $\lambda$, and to find a branch bifurcating from the trivial solution $(\lambda, h, \tilde u) = (0, h^*, 0)$, with $h^*$ defined as in (1.9). As in the previous case, we can show that this branch is global and contains all the positive solutions (see Proposition 3.16 and Theorem 3.20). Finally, we complete the proof of case ii) by approximation in Theorem 3.22.
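As a hedged numerical illustration of case ii), in a hypothetical 1D discretization on $\Omega = (0,1)$ (Newton's method is an assumed solver, and $h^*$ is taken to be the average of $m$, as in the classical logistic literature; none of this is prescribed by the paper), the positive solution for a weight with positive average stays close to $h^*$ when $\lambda$ is small:

```python
import numpy as np

# Hypothetical 1D setup: for small lambda the positive solution of the
# discrete analogue of (1.7) should be close to h* = average of m
# (the classical logistic limit, assumed here for illustration).
n = 64
x = (np.arange(n) + 0.5) / n
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)
A = (Phi * np.sqrt(mu)) @ Phi.T / n

m = 1.0 + 0.5 * np.cos(np.pi * x)       # weight with positive average
lam = 0.05
h_star = m.mean()
u = np.full(n, h_star)                  # start Newton from the constant h*
for _ in range(30):
    F = A @ u - lam * u * (m - u)       # residual of A u = lam u (m - u)
    J = A - lam * np.diag(m - 2 * u)    # Jacobian of the residual
    u -= np.linalg.solve(J, F)
print(abs(u.mean() - h_star))           # mean of u is close to h*
```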
All the effort made in finding the proper formulation of the linear and nonlinear problems enables us to prove the existence results for (1.7), which are in accordance with the case of standard diffusion. However, when trying to highlight the differences between the two models, one has to take care of the eigenvalues appearing in Theorems 1.1 and 1.2, namely $\mu_1$ and $\lambda_1$. Since these eigenvalues act as a survival threshold in a hostile habitat, it is natural to ask which one is lower: this indicates whether or not the fractional search strategy is preferable to the Brownian one. This appears to be a difficult question, since the eigenvalues depend in a nontrivial way on $m$, and also on the sequence $(\mu_k)_k$ defined in (1.2). At the end of Section 3 we report some simple numerical experiments hinting at this complexity.
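The kind of experiment alluded to here can be sketched as follows, in a hypothetical 1D discretization on $\Omega = (0,1)$ (the shift-and-invert reduction of the weighted eigenvalue pencil is an implementation choice made here, not a method from the paper): one compares the first positive eigenvalue of the weighted Neumann Laplacian with that of its spectral square root, for a sign-changing weight with negative average.

```python
import numpy as np

# Hypothetical 1D setup: compare the survival thresholds mu_1 (Brownian)
# and lambda_1 (fractional) for a sign-changing weight of negative average.
n = 80
x = (np.arange(n) + 0.5) / n
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)
L = (Phi * mu) @ Phi.T / n              # Neumann Laplacian
A = (Phi * np.sqrt(mu)) @ Phi.T / n     # its spectral square root

m = np.cos(2 * np.pi * x) - 0.3         # changes sign, negative average

def first_positive_eigenvalue(op, sigma=1e-3):
    # Shift-invert: eigenvalues lam of the pencil op u = lam * m * u satisfy
    # 1/(lam - sigma) = eigenvalue of (op - sigma*M)^{-1} M, with M = diag(m);
    # the largest positive real one gives the smallest lam > sigma.
    M = np.diag(m)
    theta = np.linalg.eigvals(np.linalg.solve(op - sigma * M, M))
    theta = theta.real[(np.abs(theta.imag) < 1e-8) & (theta.real > 0)]
    return sigma + 1.0 / theta.max()

mu1 = first_positive_eigenvalue(L)
lam1 = first_positive_eigenvalue(A)
print(mu1, lam1)                        # both thresholds are positive
```

The eigenvalue $0$ (constant eigenfunction) is automatically excluded, since it corresponds to a negative shifted value.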

Functional setting
In this section we introduce the functional spaces on which the spectral Laplacian associated to homogeneous Neumann boundary conditions will be defined. Moreover, we study the main properties of this operator and find the proper conditions under which the inverse operator is well defined. Finally, we prove summability and regularity properties enjoyed by the solutions of the linear problem. Throughout the paper $\Omega$ is a $C^{2,\alpha}$ bounded domain, and we write $\mathcal{C} := \Omega \times (0,+\infty)$ for the associated half-cylinder. In this plan we will make use of the following projection operators.
Definition 2.1. Let us define the operators $A_{\mathcal{C}}, Z_{\mathcal{C}} \colon H^1(\mathcal{C}) \to H^1(\mathcal{C})$ by

$A_{\mathcal{C}} v(y) := \frac{1}{|\Omega|} \int_\Omega v(x,y)\, dx, \qquad Z_{\mathcal{C}} v := v - A_{\mathcal{C}} v,$

for $|\Omega|$ denoting the Lebesgue measure of the domain $\Omega$. $A_{\mathcal{C}}$ and $Z_{\mathcal{C}}$ give the average (with respect to $x$) and the zero-averaged part of a function $v$, respectively. Analogously, for $u \in H^{1/2}(\Omega)$, we write $A_\Omega u := \frac{1}{|\Omega|} \int_\Omega u\, dx$ and $Z_\Omega u := u - A_\Omega u$. When no confusion is possible, we drop the subscript in $A$, $Z$.
It is standard to prove that, in both cases, $A$ and $Z$ are linear and continuous, and that $\mathrm{tr}_\Omega \circ Z_{\mathcal{C}} = Z_\Omega \circ \mathrm{tr}_\Omega$. Since the integration in the definition of $A_{\mathcal{C}}$ is performed only with respect to the $x$ variable, it is natural to interpret the image of a function $v$ through the operator $A_{\mathcal{C}}$ as a function of the single variable $y$. Proposition 2.2 collects the properties enjoyed by $A_{\mathcal{C}} v(y)$.

In particular, it is a continuous function up to $0^+$, and it vanishes as $y$ tends to infinity.

Proof. Since $\partial_y v(\cdot,y) \in L^2(\Omega)$ for almost every $y$, we can compute $(A_{\mathcal{C}} v)'(y)$ and bound it, by Hölder's inequality, by $|\Omega|^{-1/2}\, \|\partial_y v(\cdot,y)\|_{L^2(\Omega)}$. As a consequence, $A_{\mathcal{C}} v \in H^1(0,\infty)$, so that it is continuous in $y$ and vanishes as $y$ tends to $+\infty$.
Introducing the functional spaces $\widetilde H^1(\mathcal{C})$ and $\widetilde H^{1/2}(\Omega)$ defined in the Introduction, it is worth noticing that the former is well defined thanks to Proposition 2.2. Moreover, we can choose as a norm on $\widetilde H^1(\mathcal{C})$ the quantity $\|\nabla v\|_{L^2(\mathcal{C})}$, as it is equivalent to the $H^1$-norm thanks to the following lemma.

Lemma 2.3. There exists a positive constant $K$ such that, for every $v \in \widetilde H^1(\mathcal{C})$, $\|v\|_{H^1(\mathcal{C})} \le K\, \|\nabla v\|_{L^2(\mathcal{C})}$.

Proof. We set $\nabla_x v = (\partial_{x_1} v, \ldots, \partial_{x_N} v)$ and notice that, for any $v \in \widetilde H^1(\mathcal{C})$, the Poincaré–Wirtinger inequality applied to $v(\cdot,y)$ on each slice $\Omega \times \{y\}$ bounds $\|v\|_{L^2(\mathcal{C})}$ in terms of $\|\nabla_x v\|_{L^2(\mathcal{C})}$, proving the claim.
The following proposition gives a complete description of the space H 1/2 (Ω).

Proof.
Since $\Omega$ is of class $C^{2,\alpha}$, we have that $H^{1/2}(\Omega)$ can be equivalently characterized as $\{ u = \mathrm{tr}_\Omega v : v \in H^1(\mathcal{C}) \}$, where we write $\mathrm{tr}_\Omega v = v|_\Omega = v(\cdot,0)$. Then Proposition 2.2 provides one inclusion. In order to show the opposite one, consider $u \in \widetilde H^{1/2}(\Omega)$ and $v \in H^1(\mathcal{C})$ such that $u = \mathrm{tr}_\Omega v$. Notice that $Z_{\mathcal{C}} v \in \widetilde H^1(\mathcal{C})$, and Proposition 2.2 implies that $\mathrm{tr}_\Omega(Z_{\mathcal{C}} v) = Z_\Omega u = u$; then we have found $\tilde v = Z_{\mathcal{C}} v$ belonging to $\widetilde H^1(\mathcal{C})$ and such that $u = \mathrm{tr}_\Omega(\tilde v)$, yielding the first equality in (i). As far as the second equality is concerned, we start by proving the first inclusion: let us fix $\bar y$ such that the corresponding slice estimates hold; a direct computation then implies the desired inclusion. On the other hand, let $\sum_{k \ge 1} \mu_k^{1/2} u_k^2 < +\infty$, and let us define $v$ as in (2.4). It is a direct check to verify that $v \in \widetilde H^1(\mathcal{C})$ (see also Lemma 2.10 in [7]), obtaining that all the equalities in (i) hold.
Let us now show conclusion (ii), starting by proving that there exist constants $A$, $B$ such that the two-sided estimate (2.5) holds. The right-hand side inequality holds for $B = 1$; in order to show the left-hand side inequality, let us argue by contradiction and suppose that there exists a sequence $u_n \in \widetilde H^{1/2}(\Omega)$ with $\|u_n\|_{L^2(\Omega)} = 1$ and $\|u_n\|_{\widetilde H^{1/2}(\Omega)} \to 0$. Then $u_n$ is uniformly bounded in $H^{1/2}(\Omega)$, and there exists $u \in H^{1/2}(\Omega)$ such that $u_n$ converges to $u$ weakly in $H^{1/2}(\Omega)$ and strongly in $L^2(\Omega)$ (notice that at this stage we do not yet know that the quantity $\|\cdot\|_{\widetilde H^{1/2}(\Omega)}$ is a norm on $\widetilde H^{1/2}(\Omega)$). As a consequence, $\|u\|_{L^2(\Omega)} = 1$ and $u = 0$ at the same time, which is an obvious contradiction. As a byproduct of inequalities (2.5), we obtain that $\|\cdot\|_{\widetilde H^{1/2}(\Omega)}$ is a well-defined norm, and since $\widetilde H^{1/2}(\Omega)$ is a closed subspace of $H^{1/2}(\Omega)$ with respect to the usual norm, conclusion (ii) holds.
Carefully reading the proof of the second equality in (i) of the previous proposition, one realizes that for any u ∈ H 1/2 (Ω) we can construct a suitable extension v ∈ H 1 (C ) which is harmonic and that can be written in terms of a Fourier expansion as shown in (2.4). In the next lemma we provide a variational characterization of such extension.

Lemma 2.5. For every $u \in \widetilde H^{1/2}(\Omega)$, the Dirichlet energy attains a unique minimum $v$ among the functions in $\widetilde H^1(\mathcal{C})$ whose trace on $\Omega$ is $u$.

Moreover, the function $v$ is the unique (weak) solution of problem (2.6), and it admits the representation (2.7).

Proof. We observe that the functional to be minimized is simply the square of the norm in $\widetilde H^1(\mathcal{C})$, and the set on which we minimize is nonempty and weakly closed thanks to the compact embedding of $H^{1/2}(\Omega)$ in $L^p(\Omega)$ for any exponent $p < 2N/(N-1)$. The strict convexity of the functional implies the existence and uniqueness of the minimum point. As usual, the unique minimum point $v$ satisfies the boundary condition on $\Omega$ (in the $H^{1/2}$-sense) by constraint, together with the associated Euler–Lagrange equation tested on zero-mean functions. As a consequence, for every $\zeta \in H^1(\mathcal{C})$ such that $\zeta(x,0) \equiv 0$, it is possible to choose $\psi := \zeta - A_{\mathcal{C}}\zeta$ as a test function in that equation. In a standard way this implies both that $v$ is harmonic in $\mathcal{C}$ and that it satisfies the boundary condition on $\partial\Omega \times (0,+\infty)$ (in the $H^{-1/2}$-sense). Finally, if $u$ is given as in (2.7), then $v$ as in (2.7) solves problem (2.6), and the uniqueness of the solution provides the claim.
Definition 2.6. We will refer to the unique $v$ solving (2.6) as the Neumann harmonic extension of the function $u$.

Remark 2.7. As we already noticed, the harmonic extension minimizes the Dirichlet energy among all $\widetilde H^1(\mathcal{C})$ extensions of $u$. Furthermore, it is well known that the two natural norms on $H^{1/2}(\Omega)$ are equivalent. Reasoning as in the proof of Proposition 2.4, and taking into account Lemma 2.5, we obtain that $\widetilde H^{1/2}(\Omega)$ can be equipped with equivalent norms written in terms of the Fourier coefficients $u_k$ of $u$. In particular, the harmonic extension of $u$ depends on $u$ in a linear and continuous way.
In order to introduce and study the dual space of H 1/2 (Ω) let us first introduce the following space.
The subspace just introduced has a strict connection with the dual space of $\widetilde H^{1/2}(\Omega)$, as explained in the following proposition.
As a first step towards a correct definition of the half-Laplacian operator, let us prove the following lemma.

Lemma 2.10. Let $u \in \widetilde H^{1/2}(\Omega)$, and let $v \in \widetilde H^1(\mathcal{C})$ denote its Neumann harmonic extension. Then the functional $-\partial_y v(\cdot,0)$ is well defined and belongs to $H^{-1/2}(\Omega)$.

Proof. The functional is well defined: indeed, if $\tilde g_1$ and $\tilde g_2$ are two extensions of $g$, we have that $(\tilde g_2 - \tilde g_1)(x,0) \equiv 0$ and, arguing as in equation (2.8), the two corresponding values coincide. Moreover $-\partial_y v(x,0)$ is linear and continuous: indeed, let us choose as an extension of $g$ the function $G := A_\Omega g + \tilde g$, where $\tilde g$ is the harmonic extension of $Z_\Omega g$; by Remark 2.7 applied to $\tilde g$ we obtain the required bound. As a consequence $-\partial_y v(x,0) \in H^{-1/2}(\Omega)$. Finally, since $w(x,y) := (1-y)^+$ belongs to $H^1(\mathcal{C})$, by definition we obtain that the value of the functional on constants equals $\int_{\mathcal{C}} \nabla v \cdot \nabla w \, dx\, dy$, which vanishes because $v \in \widetilde H^1(\mathcal{C})$.
Remark 2.11. If the harmonic extension v is more regular (for instance H 2 (C )), then we can employ integration by parts in order to prove that the definition of −∂ y v(x, 0) given above agrees with the usual one.
Thanks to the previous lemmas, we are now in a position to define the fractional operators we work with.
Definition 2.12. We define the operator $L_{1/2} \colon \widetilde H^{1/2}(\Omega) \to H^{-1/2}(\Omega)$ by $L_{1/2} u := -\partial_y v(\cdot,0)$, where $v$ is the harmonic extension of $u$ according to (2.6). Analogously, we define the operator $(-\Delta_N)^{1/2}$ on $H^{1/2}(\Omega)$ by $(-\Delta_N)^{1/2} u := L_{1/2}(Z_\Omega u)$. In Definition 2.12 we have introduced the fractional Laplace operator associated to homogeneous Neumann boundary conditions as a Dirichlet-to-Neumann map. Moreover, thanks to the equivalences of Proposition 2.4, we recover the spectral expression of this operator, as explained in the following remark.
Remark 2.13. Since the harmonic extension operator $u \mapsto v$ is linear and continuous by Remark 2.7, both $L_{1/2}$ and $(-\Delta_N)^{1/2}$ are linear and continuous. This allows us to write their spectral expansions in terms of the Fourier coefficients of $u$. In particular, if $u \in H^2(\Omega)$, then the composition $(-\Delta_N)^{1/2} \circ (-\Delta_N)^{1/2}$ applied to $u$ provides the usual Laplace operator associated to homogeneous Neumann boundary conditions on $\partial\Omega$.
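The Dirichlet-to-Neumann characterization can be sanity-checked numerically. In a hypothetical 1D discretization on $\Omega = (0,1)$ (for illustration only), the extension $v(x,y) = \sum_k u_k e^{-\sqrt{\mu_k}\, y} \varphi_k(x)$ should satisfy $-\partial_y v(\cdot,0) = (-\Delta_N)^{1/2} u$, which a finite difference in $y$ confirms to discretization accuracy:

```python
import numpy as np

# Hypothetical 1D setup: -d/dy of the Neumann harmonic extension at y = 0
# equals the spectral half-Laplacian of the trace datum.
n = 64
x = (np.arange(n) + 0.5) / n
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)
A = (Phi * np.sqrt(mu)) @ Phi.T / n

u = np.cos(np.pi * x) + 0.5 * np.cos(3 * np.pi * x)   # zero-mean trace datum
c = Phi.T @ u / n                                     # Fourier coefficients

def v(y):
    # Neumann harmonic extension: decay exp(-sqrt(mu_k) y) mode by mode
    return Phi @ (c * np.exp(-np.sqrt(mu) * y))

eps = 1e-6
dtn = -(v(eps) - v(0.0)) / eps                        # -d/dy v(x, 0)
print(np.abs(dtn - A @ u).max())                      # matches (-Delta_N)^{1/2} u
```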
We remark that we can think of $L_{1/2}$ as acting between $\widetilde H^{1/2}(\Omega)$ and its dual thanks to Proposition 2.9, while $(-\Delta_N)^{1/2}$ acts on the whole space $H^{1/2}(\Omega)$, vanishing on constants.

Moreover, the function $v$ is the unique (weak) solution of problem (2.11).
Proof. The existence and uniqueness of $v$ follow from the Riesz representation theorem. The fact that $v$ satisfies (2.11) follows once one shows that equation (2.10) holds also for every $\psi \in H^1(\mathcal{C})$. This can be done exactly as in the proof of Lemma 2.5; it is also a consequence of the next result.
The choice of $\widetilde H^1(\mathcal{C})$ as the space of test functions is not restrictive, as the following lemma shows.
Lemma 2.15. Let $f \in H^{-1/2}(\Omega)$, and let $v \in \widetilde H^1(\mathcal{C})$ be defined as in Lemma 2.14. Then: (i) there exist positive constants $C$, $k$, depending on $f$, such that the exponential estimate (2.12) holds for every $y > 0$; (ii) equation (2.10) holds for any $\psi \in H^1_{loc}(\mathcal{C})$ admitting a constant $C$ such that the bound (2.12) holds for every $y$.

Proof. Since $\Omega \times (0,y)$ is bounded, we can test (2.11) with $\psi$ and integrate by parts to obtain that identity (2.13) holds for a.e. $y$. As far as the first statement is concerned, the above equation used with $\psi = v$ gives a differential inequality for the tail energy $\Phi(y)$. Then $\Phi$ is absolutely continuous and satisfies $\Phi'(y) \le -k\, \Phi(y)$, which implies $\Phi(y) \le \Phi(0)\, e^{-ky}$ and the required inequality. Now we turn to the second statement. If $\psi$ is as in the assumption, then (2.13) holds. In order to conclude, we must prove that the last term in that equation vanishes as $y \to +\infty$; this is easily seen by applying Hölder's inequality and using the first part of the lemma.

Remark 2.16.
In particular, the previous lemma implies that equation (2.10) holds for any $\psi \in H^1(\mathcal{C})$. On the other hand, from its proof one can deduce that more general test functions are admissible, for instance functions whose $L^2(\Omega)$ norm does not grow too fast with respect to $y$.
We collect in the following proposition the properties of T 1/2 .
Proposition 2.18. The operator $T_{1/2}$ defined in (2.14) is linear and inverts $L_{1/2}$.

Proof. First, let us observe that $T_{1/2}$ is well defined, as for every $f \in H^{-1/2}(\Omega)$ there exists a unique solution $v$ of (2.11); moreover $T_{1/2}$ is evidently linear. If $u = T_{1/2} f = v(\cdot,0)$, where $v$ is the solution of (2.11), then $L_{1/2} u = -\partial_y v(\cdot,0) = f$ by (2.11). In order to show that $T_{1/2}$ is compact when restricted to $L^2(\Omega)$, let us take $f_n \in L^2(\Omega)$ weakly converging to $f \in L^2(\Omega)$ and consider $T_{1/2}(f_n) = v_n(\cdot,0)$, with $v_n \in \widetilde H^1(\mathcal{C})$ the sequence of solutions of (2.11) with datum $f_n$. From the weak formulation of (2.11) we obtain that $v_n$ is uniformly bounded in $\widetilde H^1(\mathcal{C})$, so that it weakly converges to a function $v \in \widetilde H^1(\mathcal{C})$, which turns out to be a weak solution with datum $f$. Choosing $\psi = v_n - v$ as a test function in the equation satisfied by $v_n$, and taking advantage of the compact embedding of $H^{1/2}(\Omega)$ in $L^2(\Omega)$, immediately gives the strong convergence of $v_n$ to $v$ in $\widetilde H^1(\mathcal{C})$; by continuity of the trace operator, $v_n(\cdot,0) = T_{1/2}(f_n)$ converges to $v(\cdot,0) = T_{1/2}(f)$ in $L^2(\Omega)$. Arguing as in Proposition 2.12 in [7], it is easy to obtain that $T_{1/2}$ restricted to $L^2(\Omega)$ is self-adjoint and positive. Finally, the last part of the statement can be proved by following the argument of Proposition 2.12 in [7] (see also Remark 2.13).
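Proposition 2.18 can be illustrated in a hypothetical 1D discretization on $\Omega = (0,1)$ (an illustrative assumption, not the paper's setting): the matrix with eigenvalues $\mu_k^{-1/2}$ on the nonconstant modes plays the role of $T_{1/2}$; it is symmetric, and it inverts $L_{1/2}$ on zero-mean data. The decay $\mu_k^{-1/2} \to 0$ is the discrete shadow of compactness.

```python
import numpy as np

# Hypothetical 1D setup: T_{1/2} as the pseudo-inverse of the half-Laplacian
# on zero-mean data, with eigenvalues mu_k^{-1/2} on the nonconstant modes.
n = 64
x = (np.arange(n) + 0.5) / n
k = np.arange(n)
mu = (np.pi * k) ** 2
Phi = np.cos(np.pi * np.outer(x, k))
Phi[:, 1:] *= np.sqrt(2.0)
A = (Phi * np.sqrt(mu)) @ Phi.T / n                 # L_{1/2}
inv_sqrt = np.concatenate(([0.0], 1.0 / np.sqrt(mu[1:])))
T = (Phi * inv_sqrt) @ Phi.T / n                    # T_{1/2} (zero on constants)

f = np.cos(2 * np.pi * x) - np.cos(np.pi * x)       # zero-mean datum
print(np.abs(A @ (T @ f) - f).max())                # L_{1/2} T_{1/2} f = f
print(np.abs(T - T.T).max())                        # self-adjointness
```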
To end this section, we face some regularity issues. As already observed, any of the above harmonic extensions is of course smooth inside C . On the other hand, improved regularity up to the boundary seems to be prevented by the fact that ∂C is only Lipschitz. Nonetheless, we can exploit the homogeneous Neumann condition (together with some regularity of ∂Ω) in order to suitably extend the harmonic functions outside C , thus removing that obstruction.
Proof. The fact that $v \in C^{2,\alpha}(\Omega \times (0,+\infty))$ follows from standard regularity theory for the Laplace equation with homogeneous Neumann boundary conditions on smooth domains. As far as (i) is concerned, due to the exponential decay of $v$ given by Lemma 2.15, we are left to prove regularity near $\{y = 0\}$. To start with, for any $x_0 \in \Omega$, let us consider any half-ball $B_R^+(x_0,0)$ and introduce the corresponding notation. Since $v$ solves (2.10), integration by parts yields an equation of the form (2.15). As a consequence, Theorem 3.14 in [27] implies that $v \in W^{1,p}(B_r^+(x_0,0))$ for every $r < R$. On the other hand, let $x_0 \in \partial\Omega$. By assumption, there exist an open neighborhood $U$ of $(x_0,0)$ and a $C^{2,\alpha}$-diffeomorphism $\Phi$ between $U \cap \{y > 0\}$ and $B_1^+(0,0)$ which is the identity in the $y$-coordinate and such that $\Phi(x_0,0) = (0,0)$. Since $v$ is harmonic, we have that $\tilde v := v \circ \Phi^{-1}$ satisfies an equation like (2.15) on $B^+ \cap \{x_N < 0\}$, where now the bilinear form $a$ has $C^{1,\alpha}$ coefficients (which depend on $\Omega$ through the first derivatives of $\Phi$). Accordingly, the conormal derivative of $\tilde v$ on $\{x_N = 0\}$ vanishes. Since $\Omega$ is $C^{2,\alpha}$, the last fact allows us to extend $\tilde v$ to the whole $B^+$ by (conormal) reflection, at least when the initial neighborhood $U$ is sufficiently small; in a standard way, the extended function satisfies again an equation like (2.15), and now the corresponding $a$ has Lipschitz-continuous coefficients. Furthermore, the analogous extension of $f$ is again $L^p$. As a consequence, Theorem 3.14 in [27] implies also in this situation that $\tilde v$, and hence $v$, is $W^{1,p}(B_r^+)$ for $r < R$. Taking into account the previous discussion, property (i) follows by a covering argument. Finally, (ii) and (iii) can be proved with minor changes in the previous argument, by using Theorems 3.15, 3.12 and 1.17 in [27].
The previous proposition implies a number of regularity properties for the inverse operator T 1/2 . Analogous arguments yield improved regularity also for the direct operator L 1/2 .
Proof. It is sufficient to show that $w(x,y) := v(x,y) - u(x)$ is $C^{1,\alpha}(\overline{\mathcal{C}})$. This can be done by following straightforwardly the proof of Proposition 2.19, once one notices that, instead of equation (2.15), $w$ satisfies $w(x,0) = 0$ and, as $v$ is harmonic, an equation in divergence form with right-hand side involving $\nabla_x u$, on every $B^+ \subset \mathcal{C}$. Hence the role that $f$ played in the aforementioned proposition is now played by $\nabla_x u$. Since $\nabla_x u$ is $C^{0,\alpha}(\overline{\mathcal{C}})$, the proposition follows again by applying Theorem 3.12 of [27] to $w$ (or to suitable extensions $\tilde w$ near $\partial\Omega \times \{0\}$).
As a conclusion of this section we state the following result, which will be useful in the applications.

Corollary 2.21. With $X$ and $F$ the spaces of regular functions defined above, the operators $L_{1/2} \colon X \to F$ and $T_{1/2} \colon F \to X$ are linear and continuous, and they are inverse to each other.

Proof. The conclusion easily follows from Propositions 2.18 and 2.19.
In the following we will be concerned with positive solutions of equations involving the fractional operators defined above. In this perspective, the arguments we employed to improve regularity allow us to check the validity of suitable maximum principles and of a Hopf lemma. In particular, the following strong maximum principle holds.

The Weighted Logistic Equation
Our main application is the study of positive solutions of the nonlinear problem (1.7), understood in terms of the operator $(-\Delta_N)^{1/2}$. To this aim, a necessary solvability condition is that the right-hand side of the equation has null average; on the other hand, the possible solution $u$, being positive, has positive average. In order to apply the theory developed in the previous section, we recall that any $u \in H^{1/2}(\Omega)$ can be decomposed as $u = \tilde u + c_u$, where $c_u$ is constant and $\tilde u \in \widetilde H^{1/2}(\Omega)$. Using Lemma 2.5 we can denote by $\tilde v$ the Neumann harmonic extension of $\tilde u$ to $\mathcal{C}$. Accordingly, a weak solution is a function $u \in H^{1/2}(\Omega)$ such that $f(\cdot,u(\cdot)) \in H^{-1/2}(\Omega)$, $A_\Omega f(\cdot,u) = 0$ and $(-\Delta_N)^{1/2} u = f(\cdot,u)$.
In particular $u(x) = v(x,0)$, where $v(x,y) = \tilde v(x,y) + h$, and $(\tilde v, h) \in \widetilde H^1(\mathcal{C}) \times \mathbb{R}$ is a weak solution of the corresponding nonlinear extension problem. Using the previous definition we can now rewrite problem (1.7) in the equivalent form (3.5), where we recall that $\lambda > 0$ and $m \in C^{0,1}(\overline\Omega)$.

Remark 3.2.
In the standard diffusion case, nonlinear boundary data have been frequently considered, especially in the study of selection-migration problems for alleles in a region admitting a flow of genes through the boundary (see [18] and the references therein).
As in the classical literature concerning the logistic equation, the comprehension of the linearized problem is crucial in the study. In our context, this corresponds to tackling the weighted eigenvalue problem (3.6).

Remark 3.4.
Taking into account the usual decomposition $u = \tilde u + c_u$ as in (3.1), we have that problem (3.6) can be written as a system, where $T_{1/2}$ is compact by Proposition 2.18. If moreover we assume $\int_\Omega m \neq 0$, we can solve the second equation for $c_u$ and infer an equivalent formulation for $\tilde u$ alone. As a consequence, we can apply the Fredholm alternative, obtaining that the spectrum of the operator on the left-hand side consists of a sequence of eigenvalues $(\lambda_k)_k$, each with associated kernel of finite dimension $d_k$ and closed range of codimension $d_k$.
Proof. The proof relies on the classical bootstrap technique. Indeed, as above, let us write $u = \tilde u + c_u$ and denote by $\tilde v$ the Neumann harmonic extension of $\tilde u$. Since $m \in C^{0,1}(\overline\Omega)$, Proposition 2.19 and the trace and Sobolev embedding theorems imply

$\tilde v \in W^{1,r}(\mathcal{C}) \implies \tilde u \in W^{1-1/r,\, r}(\Omega) \implies \tilde u \in L^{Nr/(N+1-r)}(\Omega)$

whenever $2 \le r < N+1$. Starting from $r_0 = 2$ and iterating the above procedure, the first part of the proposition follows. As a consequence, the second one is implied by Proposition 2.22.
Searching for positive solutions of (3.5), we are interested in positive eigenfunctions of (3.6). Of course, $\lambda = 0$ is always an eigenvalue with normalized eigenfunction $\varphi_0 = 1/\sqrt{|\Omega|} > 0$, but this does not prevent the existence of positive eigenfunctions associated with positive eigenvalues.

Proof. Supposing that there exists $\lambda_1 > 0$ with positive nonconstant eigenfunction $\varphi_1$, we can apply Lemma 2.15 and use $\varphi_0 = 1/\sqrt{|\Omega|} > 0$ as a test function in the weak formulation of (3.6) satisfied by $\varphi_1$, obtaining $\int_\Omega m \varphi_1 \, dx = 0$. As $\varphi_1$ is positive, $m$ has to change sign. Now, taking advantage of the usual decomposition, let us write $\varphi_1(x) = v_1(x,0) = \tilde v_1(x,0) + h$. From Remark 2.23, we deduce that $v_1 > 0$ on $\overline{\mathcal{C}}$, and Lemma 2.15 allows us to use $\psi = 1/v_1$ as a test function in the equation satisfied by $\varphi_1$. We obtain $\lambda \int_\Omega m \, dx = -\int_{\mathcal{C}} |\nabla v_1|^2 / v_1^2 \, dx\, dy < 0$, hence $\int_\Omega m < 0$, and the lemma follows.
The following result shows that the previous necessary condition is also sufficient in order to obtain the existence of a first positive eigenvalue with positive eigenfunction.
Proof. We will find $\varphi_1$ by solving the extension problem (3.3) via constrained minimization. Namely, we look for a minimum of the functional $E$ constrained on the manifold $M$. First, let us observe that $M \neq \emptyset$: indeed, from (3.7) we can find an open set $\omega$ of positive measure such that $m(x) > 0$ in $\omega$, so that for any $\tilde w \in C^\infty_c(\omega)$ having zero mean there exists a suitable positive real number $C$ such that $(\tilde w/C, 0)$ belongs to $M$. Let us consider a minimizing sequence $(\tilde v_n, h_n) \in M$. From the definition of $E$ it follows that $\tilde v_n$ is uniformly bounded in $\widetilde H^1(\mathcal{C})$, so that $\tilde v_n(\cdot,0)$ is uniformly bounded in $\widetilde H^{1/2}(\Omega)$; by the compact embedding of the trace space $H^{1/2}(\Omega)$ in $L^2(\Omega)$ we obtain that there exists $\tilde v_1 \in \widetilde H^1(\mathcal{C})$ such that $\tilde v_n$ tends to $\tilde v_1$ weakly in $\widetilde H^1(\mathcal{C})$, and $\tilde v_n(\cdot,0)$ strongly converges to $\tilde v_1(\cdot,0)$ in $L^2(\Omega)$. As far as the sequence $h_n$ is concerned, let us show that it is bounded, arguing by contradiction and assuming that, up to a subsequence, $h_n \to +\infty$ (the case $h_n \to -\infty$ can be handled analogously). From the definition of $M$ it follows that $\int_\Omega m + o(1) = 0$, where $o(1)$ denotes a quantity tending to zero as $n$ goes to infinity; then a contradiction follows from (3.7). As a consequence, there exists $h_1$ such that $h_n \to h_1$ in $\mathbb{R}$. By weak lower semicontinuity of $E$, the pair $(\tilde v_1, h_1)$ attains the infimum. Moreover, let us show that $E(\tilde v_1, h_1) > 0$. Again by contradiction, let us assume that $E(\tilde v_1, h_1) = 0$; then, as $\tilde v_1 \in \widetilde H^1(\mathcal{C})$, it follows that $\tilde v_1 = 0$, and from the constraint we obtain that $h_1^2 \int_\Omega m = 1$, so that (3.7) yields again a contradiction. Since $(\tilde v_1, h_1)$ is a constrained minimum point of $E$ on $M$, there exists $\lambda \in \mathbb{R}$ such that, by Lemma 2.15, the Euler–Lagrange equation holds for every $\psi \in H^1_{loc}(\mathcal{C})$ satisfying (2.12); thus $\lambda > 0$, and the corresponding eigenfunction is a weak solution of problem (3.6). As a consequence, Lemma 3.5 implies that $\varphi_1 \in C^{1,\alpha}(\overline\Omega)$ for every $\alpha \in (0,1)$.
Since any other solution $(\lambda, v)$ of (3.6) with $\lambda > 0$ corresponds to a constrained critical point of $E$ on $M$, $\lambda_1$ is the smallest positive eigenvalue. In order to show that $\varphi_1$ can be chosen positive, let us take $w(x) = |\varphi_1(x)| = |\tilde v_1(x,0) + h|$. Writing $w(x) = \tilde w(x) + c_w$, with $\tilde w \in \widetilde H^{1/2}(\Omega)$ and $c_w$ constant, let us consider $\tilde\zeta(x,y) \in \widetilde H^1(\mathcal{C})$, the harmonic extension of $\tilde w(x)$ obtained thanks to Lemma 2.5. Notice that $(\tilde\zeta, c_w) \in M$; moreover $\tilde\zeta(x,0) = |\tilde v_1(x,0) + h| - c_w = z(x,0)$ for $z(x,y) = |\tilde v_1(x,y) + h| - c_w$, so that, thanks to Remark 2.7, $E(\tilde\zeta, c_w) \le E(\tilde v_1, h)$. As a consequence, the nonnegative function $w$ also solves the minimization problem (3.9), showing that we can assume, without loss of generality, that $\varphi_1$ is nonnegative. But then Lemma 3.5 applies again, yielding $\varphi_1 > 0$ on $\overline\Omega$. It is possible to show that $\lambda_1$ is simple by contradiction, supposing that there exist $\varphi_1$ and $u$ solutions of (3.12), with $\varphi_1(x) = v_1(x,0) = \tilde v_1(x,0) + h$ and $u(x) = w(x,0) = \tilde w(x,0) + k$. From Remark 2.23, we deduce that $v_1(x,y) > 0$ in $\overline{\mathcal{C}}$, so that we can use $\psi(x,y) = w^2(x,y)/v_1(x,y)$ as a test function in the equation satisfied by $\varphi_1$, obtaining an identity which yields the linear dependence of $\varphi_1$ and $u$. Moreover, it is possible to follow the same argument as in [19] to obtain that the algebraic multiplicity of $\lambda_1$ is also one. Now we come to part (ii). In order to show that there is no positive solution $u$ of problem (3.6) associated to $\lambda > \lambda_1$, let us argue again by contradiction, and suppose that there exists a positive eigenfunction $u(x) = w(x,0) = \tilde w(x,0) + k$ associated to an eigenvalue $\lambda$ greater than $\lambda_1$. As before, Remark 2.23 allows us to choose $\psi(x,y) = v_1(x,y)^2/w(x,y)$ as a test function in the equation satisfied by $u$; combining the resulting identity with (3.9) immediately implies $\lambda < \lambda_1$, a contradiction. We are finally in a position to tackle the logistic equation (3.5); let us start our study with an easy observation concerning the autonomous problem, i.e.
m ≡ 1, contained in the following proposition. Proof. Let u(x) = v(x, 0) = ṽ(x, 0) + c_u be a solution of (3.5). Lemma 2.15 implies that we can choose ψ(x, y) = v(x, y) − 1 as a test function. Since ∇ψ = ∇v, we obtain the desired identity and the conclusion follows.
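The identity used in this proof can plausibly be reconstructed as follows (a sketch in our own notation, not the paper's display, assuming the standard weak formulation of the extension problem on the half-cylinder C = Ω × (0, +∞)):

```latex
% Hedged reconstruction: weak form of the extension of (3.5) with
% m \equiv 1, tested with \psi = v - 1 (admissible by Lemma 2.15,
% and \nabla\psi = \nabla v):
\[
\int_{\mathcal C} |\nabla v|^2 \, dx \, dy
  \;=\; \lambda \int_{\Omega} u(1-u)(u-1) \, dx
  \;=\; -\,\lambda \int_{\Omega} u\,(1-u)^2 \, dx .
\]
% For \lambda > 0 and u > 0 the left-hand side is nonnegative while the
% right-hand side is nonpositive, so both vanish: v is constant in
% \mathcal C and u \equiv 1 in the autonomous case.
```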
Coming back to the nonautonomous case, the next two results provide a priori bounds on the set of positive solutions of (3.5) and on the set of admissible parameters λ.
Then any positive solution of (3.5) satisfies the stated bound; in particular, if m(x) ≤ 0, then no positive solution exists.
Proof. This is an easy consequence of Lemma 3.10 and Propositions 2.19 and 2.22.
Concerning the set of admissible parameters λ, the following necessary condition holds. Proof. Let u(x) = v(x, 0) = ṽ(x, 0) + c_u be a solution of equation (3.6), and let v_1 = ṽ_1 + h satisfy (3.10) and (3.11). By Corollary 3.11 we can take ψ(x, y) = v^2(x, y)/v_1(x, y) as a test function in the equation satisfied by v_1; using (3.5), this provides the conclusion.
We will obtain existence results for Problem (3.5) via bifurcation theory. Developing this approach, we have to take into account that every solution may have a constant component which is invisible in the differential part of the equation; in order to make this component appear, we will be concerned with the map G, whose components G_1(λ, h, ũ) and G_2(λ, h, ũ) are given in terms of f(x, s) = s(m(x) − s). Let us remark that, since ∫_Ω G_1 dx = G_2, the elements in the range of G automatically satisfy the condition in the definition of Y. Moreover, thanks to Corollary 3.11, the zeroes of G correspond to solutions of Problem (3.5). Of course, we are interested in nontrivial solutions.

Remark 3.14.
We observe that if m ≤ 0 on Ω, then Lemma 3.10 implies S = ∅.
On the other hand, if m is a positive constant, reasoning as in Lemma 3.9 we infer that S = {(λ, m, 0) : λ > 0}. As a consequence, in the following we can assume without loss of generality that m is not constant and is positive somewhere.
The following local bifurcation result is concerned with the case in which the function m has negative mean. Proof. The proof relies on classical results about local bifurcation from a simple eigenvalue; see, for example, [2, Chapter 5, Theorem 4.1].
The derivative of G with respect to the pair (h, ũ) has components (3.16), (3.17) which, evaluated at the triplet (λ, 0, 0), give the linearized operator. Now, by Remark 3.4, (λ, 0, 0) can be a bifurcation point for positive solutions only if there exists a pair (k, w̃) with w̃ + k > 0 belonging to the kernel of the operator ∂_{(h,ũ)}G(λ, 0, 0), which is equivalent to saying that the function w(x, y) = w̃(x, y) + k is a positive solution of the linear eigenvalue problem (3.18). For this problem, Theorem 3.7 shows that there exists only one positive simple eigenvalue λ_1, with a positive eigenfunction ϕ_1 satisfying (3.12). Decomposing ϕ_1 as φ̃_1 + c_{ϕ_1} = φ̃_1 + h_1, we deduce that the kernel of the operator ∂_{(h,ũ)}G(λ_1, 0, 0) is generated by (h_1, φ̃_1). By virtue of Remark 3.4, this implies that the range of the operator ∂_{(h,ũ)}G(λ_1, 0, 0) is closed and has codimension one. Such a range consists of the pairs (w, t) such that there exists a solution z(x, y) = z̃(x, y) + h of the associated problem. Taking ψ(x, y) = ṽ_1(x, y) as a test function in the weak formulation of the first equation, we derive an explicit description of the range. Differentiating (3.16) and (3.17) with respect to λ yields the operator M := ∂_λ∂_{(h,ũ)}G(λ_1, 0, 0). At this point, in order to apply the aforementioned theorem from [2], we only have to check that M(h_1, φ̃_1) does not belong to the range of ∂_{(h,ũ)}G(λ_1, 0, 0), and this is indeed the case. Then a bifurcation occurs at (λ_1, 0, 0). Moreover, as f(x, s) = s(m(x) − s) is of class C^2 with respect to s, the set of nontrivial solutions of G(λ, h, ũ) = 0 near (λ_1, 0, 0) is a unique C^1 Cartesian curve, parameterized for t ∈ (−ε, ε), t ≠ 0. Here both γ(λ_1 + µ(t), tφ̃_1) and β(λ_1 + µ(t), tφ̃_1) are o(t) as t → 0, while a direct computation shows that µ′(0) > 0. Thus, for sufficiently small t > 0, it is possible to write t = t(λ), and the solution (λ, h, ũ) is positive.
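For the reader's convenience, the conditions checked in the proof above can be summarized in the following schematic form (our notation, not a quotation of [2]; this is the standard simple-eigenvalue bifurcation setting in the spirit of Crandall–Rabinowitz):

```latex
% Schematic statement of bifurcation from a simple eigenvalue:
\[
\begin{aligned}
&\text{(i)}\;\; \ker \partial_{(h,\tilde u)} G(\lambda_1,0,0)
      = \operatorname{span}\{(h_1,\tilde\varphi_1)\}, \\
&\text{(ii)}\;\; \operatorname{codim}\,
      \operatorname{Ran}\,\partial_{(h,\tilde u)} G(\lambda_1,0,0) = 1, \\
&\text{(iii)}\;\; \partial_\lambda \partial_{(h,\tilde u)}
      G(\lambda_1,0,0)\,[(h_1,\tilde\varphi_1)]
      \notin \operatorname{Ran}\,\partial_{(h,\tilde u)} G(\lambda_1,0,0).
\end{aligned}
\]
% Under (i)-(iii) the nontrivial zeroes of G near (\lambda_1,0,0) form a
% unique C^1 curve t \mapsto (\lambda_1+\mu(t),\, t(h_1,\tilde\varphi_1)+o(t)).
```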
Coming to the case in which m has positive mean, it is more convenient to use h, instead of λ, as the bifurcation parameter.
and let h* be defined as in (3.21). Then (0, h*, 0) is a bifurcation point of positive solutions of Problem (3.5) from T_2, and it is the only one. Moreover, locally near such a point, S is a unique C^1 Cartesian curve, parameterized by λ ∈ (0, δ) for some δ > 0.
Proof. The derivative of G with respect to (λ, ũ) has components (3.22), (3.23), so that a pair (l, w̃) belongs to the kernel of ∂_{(λ,ũ)}G(0, h, 0) if and only if (l, w̃) solves the corresponding linear problem. For l = 0, taking into account Corollary 2.21, we find w̃ = 0, while for l ≠ 0 the mean of f(x, h) has to be zero; thanks to (3.20), this yields the positive value of h* given by (3.21). With this choice of h*, for any l there exists a unique solution of the first equation. Denoting by z̃* the one corresponding to l = 1, we obtain that the kernel of ∂_{(λ,ũ)}G(0, h*, 0) is the one-dimensional space generated by the pair (1, z̃*). On the other hand, a pair (w, t) belongs to the range of ∂_{(λ,ũ)}G(0, h*, 0) if and only if there exists a solution (ṽ, l) of the associated problem. Since the function f(x, h*) has zero mean, t has to be zero, and the range is given by the set {(w, t) ∈ Y : t = 0}, which is closed and of codimension one. Differentiating (3.22) and (3.23) with respect to h yields the operator N = ∂_h∂_{(λ,ũ)}G(0, h*, 0). If the second component of N(1, z̃*) is different from zero, then N(1, z̃*) does not belong to the range of ∂_{(λ,ũ)}G(0, h*, 0), implying that bifurcation occurs in this case too; and this is indeed the case by (3.21). As before, using [2, Chapter 5, Theorem 4.1], we deduce the existence of a Cartesian curve in a neighborhood of (0, h*, 0), parameterized for t ∈ (−ε, ε), t ≠ 0. Here both α(h* + ν(t), t z̃*) and γ(h* + ν(t), t z̃*) are o(t) as t → 0, while ν(0) = 0. Since h* is positive and λ is also positive for t positive and small, the proposition easily follows.

Remark 3.17.
We stress that in both Proposition 3.15 and Proposition 3.16 we can locally parameterize S with respect to λ, even though in the latter the bifurcation parameter is h. Remark 3.18. If (3.13) holds, we can go through the proof of Proposition 3.15 and use Remark 3.8 to obtain that λ_1 < 0 is a bifurcation point of positive solutions of (3.5) with λ < 0 and u = ũ + h nonnegative. Moreover, as in the case λ_1 > 0, Lemma 3.12 implies that the bifurcation occurs to the right of λ_1. Finally, let us notice that, in order to show the local bifurcation from (0, h*, 0), it is enough to assume (3.20), and m need not be sign-changing.
Note that, by Proposition 2.18, it is possible to reformulate the equation G = 0 in terms of an identity-minus-compact map; see also Remark 3.4. Then a classical result due to Rabinowitz [22] implies that the continuum bifurcating either from (λ_1, 0, 0) or from (0, h*, 0) is actually global. Here we prefer to recover this result from a stronger one: indeed, we are going to show that the set S of positive solutions is a smooth arc. Lemma 3.19. Let (λ_0, ũ_0, h_0) ∈ S. Then there exist a neighborhood U ⊂ R × R × X of (λ_0, ũ_0, h_0), δ > 0 and a C^1 map Ψ with the stated properties. Proof. The conclusion will follow from the application of the Implicit Function Theorem to the map G(λ, h, ũ) defined in (3.15). To this aim, taking into account (3.16), (3.17), we want to show the invertibility of the operator ∂_{(h,ũ)}G(λ_0, h_0, ũ_0). We claim that the Fredholm alternative holds for this operator. Reasoning as in Remark 3.4, to obtain the claim it is enough to show that ∫_Ω (m(x) − 2u_0) ≠ 0. But this can be easily obtained by testing the equation for u_0 with 1/v_0, where v_0 is the extension of u_0. Once the Fredholm alternative is established, we have that ∂_{(h,ũ)}G(λ_0, h_0, ũ_0) is invertible if and only if its kernel is trivial. In turn, (t, z̃) belongs to the kernel if and only if z = z̃ + t solves the corresponding linear problem. Taking ψ(x, y) = w^2(x, y)/v_0(x, y) as a test function in the equation satisfied by v_0, where w is the harmonic extension of z, then testing the equation for w with w itself and subtracting it from the previous identity, we obtain that z, and hence (t, z̃), must vanish. Proof. To start with, we prove that S contains such a graph. Let us assume condition (3.7), and define Λ. Proposition 3.15 and Lemma 3.12 imply that Λ > λ_1. Let us suppose by contradiction that Λ < +∞, and consider a Cartesian curve Ψ : (λ_1, Λ) → R × X, defined by Ψ(λ) = (Ψ_1(λ), Ψ_2(λ)), with (λ, Ψ_1(λ), Ψ_2(λ)) ∈ S.
Let us consider a sequence λ_n < Λ tending to Λ, with corresponding solutions (λ_n, h_n, ũ_n), where h_n = Ψ_1(λ_n), ũ_n = Ψ_2(λ_n) and u_n = ũ_n + h_n. Moreover, let us recall that we can write ũ_n(x) = ṽ_n(x, 0) and v_n(x, y) = ṽ_n(x, y) + h_n. Taking ψ(x, y) = v_n(x, y) as a test function in (3.5) and applying Lemma 3.10, we immediately infer a uniform bound, from which we deduce that ũ_n is uniformly bounded in the spaces L^p(Ω) with 1 ≤ p ≤ 2N/(N − 1). Since G_2(λ_n, h_n, ũ_n) = 0, where G_2 is defined in (3.15), and as ũ_n has zero mean, h_n has to be positive, and we obtain a bound in terms of a positive constant c and the constant M defined in Lemma 3.10. Hence h_n is also bounded, and there exists h ≥ 0 such that, up to subsequences, h_n → h and u_n = ũ_n + h_n → u = ũ + h ≥ 0. From Proposition 2.22 we have two possibilities: either u > 0 or u ≡ 0. In the first case we have obtained a positive solution of (3.5) with λ = Λ, and Lemma 3.19 provides a contradiction with the definition of Λ. In the second case, Λ turns out to be a local bifurcation point for positive solutions, but then Proposition 3.15 implies that Λ = λ_1, which is again a contradiction, showing that Λ = +∞. When (3.20) is assumed, we define Λ analogously; then Λ > 0 by Proposition 3.16, and arguing as above we obtain that also in this case Λ = +∞. Finally, we are left to show that S \ graph(Ψ) is empty. We prove it assuming (3.7); when (3.20) holds, the same conclusion can be obtained with minor changes. Let us argue by contradiction and suppose that there exists λ* with distinct positive solutions (λ*, h_1, ũ_1) and (λ*, h_2, ũ_2). Arguing as above, it is possible to see that (λ*, h_1, ũ_1) and (λ*, h_2, ũ_2) belong, respectively, to global branches S_1 and S_2 of positive solutions that can be parameterized by Cartesian curves Ψ_1, Ψ_2 : [λ_1, +∞) → R × X. Notice that S_1 ∩ S_2 = ∅, and neither S_1 nor S_2 may have turning points, otherwise Lemma 3.19 would be contradicted.
As a consequence, λ_1 would be a multiple bifurcation point of positive solutions, but this is in contradiction with the local representation provided in (3.19). Proof. This is an evident consequence of Theorem 3.20.
Taking into account Remark 3.14, the only case left uncovered by Theorem 3.20 is when m has zero mean but is not identically zero. Notice that in this case the candidate bifurcation point is the origin, but it is not possible to argue as in the previous results, as the mixed derivatives ∂_λ∂_{(h,ũ)}G(0, 0, 0) and ∂_h∂_{(λ,ũ)}G(0, 0, 0) are now both trivial. Nevertheless, we can still prove the existence of a solution for every λ > 0 arguing by approximation.
We claim that, when n is sufficiently large, q_n can be chosen in such a way that u_n ∈ M_n, that is, ∫_Ω m_n(x)(p_n m(x) + q_n)^2 dx = 1.
Indeed, since ∫_Ω m^2 = ∫_Ω m m_n, by direct calculation the above equation can be rewritten in a solvable form, yielding λ_{1,n} → 0 as n → ∞. Now, Theorem 3.20 provides a sequence of C^1 functions Ψ_n : [λ_{1,n}, +∞) → R × X with Ψ_n(λ) = (h_n, ũ_n) a positive solution of (3.5) with weight m_n(x). Let us fix 0 < δ < Λ and n_1 > n_0 such that λ_{1,n} < δ for every n ≥ n_1, so that Ψ_n is defined on [δ, Λ] for every n ≥ n_1. Using Lemma 3.10 and Proposition 2.19, we obtain that (h_n, ũ_n) is uniformly bounded in R × X, so that, up to a subsequence, (h_n, ũ_n) converges in R × H^1(C) to a pair (h, ũ) solving (3.5); moreover, the same a priori bounds imply that Ψ_n satisfies the hypotheses of the Ascoli–Arzelà Theorem on the closed, bounded interval [δ, Λ], yielding the existence of a continuous function Ψ : [δ, Λ] → R × X such that Ψ_n converges to Ψ uniformly and Ψ(λ) = (h, ũ). By the arbitrariness of δ and Λ, Ψ is defined on the whole interval [0, +∞), and Ψ(0) = (0, 0). The only thing left to show is that Ψ(λ) ≠ 0 for every λ > 0. Let us argue by contradiction and suppose that there exists λ > 0 such that Ψ_n(λ) = (h_n, ũ_n) → (0, 0). As usual, let u_n = ũ_n + h_n and v_n = ṽ_n + h_n be such that v_n(x, 0) = u_n(x). Setting z_n = u_n/‖ṽ_n‖_{H^1(C)} and w_n = v_n/‖ṽ_n‖_{H^1(C)}, we obtain, passing to the limit, that the nontrivial function z is a nonnegative eigenfunction associated to the positive eigenvalue λ; but, as m has zero mean value, this contradicts Lemmas 3.5 and 3.6.
From this point of view, solving the above minimization problems amounts to finding the smallest positive eigenvalue Λ(s, m, Ω) of the problem diag(µ_i^s)_{i≥0} u = Λ M u; indeed, λ_1(m, Ω) = Λ(1/2, m, Ω) and µ_1(m, Ω) = Λ(1, m, Ω). In turn, such an eigenvalue can be easily approximated by truncating the Fourier series. In Figure 1 we report these approximations in the cases Ω = (0, L) for the weights considered there; in particular, m always has mean equal to 1/2. We observe that in the case L = 2.5 < π we have µ_1 > 1, and thus Λ is increasing in s for any choice of m, as one can easily prove. On the other hand, when µ_1 < 1 the situation is more varied. In any case, the eigenvalue corresponding to m_1 is always lower than the one corresponding to m_2, in agreement with the results obtained for similar weights in the case of the standard Laplacian in [10].
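As an illustration of the truncation procedure, the following sketch (our own, not the paper's code; the weight m(x) = x, with mean 1/2 on Ω = (0, 1), is a hypothetical choice and not one of the paper's weights m_1, m_2) approximates Λ(s, m, Ω) by projecting onto the first K Neumann cosine modes and solving the resulting generalized eigenvalue problem:

```python
import numpy as np

# Sketch: approximate the smallest positive eigenvalue Lambda(s, m, Omega)
# of the truncated problem  diag(mu_k^s) u = Lambda M u,  where mu_k are
# the Neumann eigenvalues of -d^2/dx^2 on Omega = (0, L) and M is the
# matrix of the weight m in the corresponding cosine basis.
L_dom, s, K = 1.0, 0.5, 24          # domain length, fractional power, truncation
m = lambda x: x                     # hypothetical weight, mean 1/2 on (0, 1)

x = np.linspace(0.0, L_dom, 4001)
wq = np.full_like(x, L_dom / (x.size - 1))   # trapezoidal quadrature weights
wq[0] *= 0.5
wq[-1] *= 0.5

# L^2-normalized Neumann eigenfunctions: phi_0 = 1/sqrt(L),
# phi_k = sqrt(2/L) cos(k pi x / L), with eigenvalues mu_k = (k pi / L)^2.
Phi = np.array([np.full_like(x, 1.0 / np.sqrt(L_dom))] +
               [np.sqrt(2.0 / L_dom) * np.cos(k * np.pi * x / L_dom)
                for k in range(1, K)])
mu = (np.arange(K) * np.pi / L_dom) ** 2
D = np.diag(mu ** s)                # diag(mu_k^s); the (0,0) entry vanishes

# Weight matrix M_ij = int_0^L m(x) phi_i(x) phi_j(x) dx (quadrature)
M = (Phi * (wq * m(x))) @ Phi.T

# Since m > 0 here, M is symmetric positive definite: reduce the generalized
# problem D u = Lambda M u to a standard one via Cholesky, M = C C^T.
C = np.linalg.cholesky(M)
A = np.linalg.solve(C, np.linalg.solve(C, D).T)   # A = C^{-1} D C^{-T}
w = np.linalg.eigvalsh(A)           # real, sorted ascending; w[0] ~ 0
Lam = w[w > 1e-8][0]                # smallest positive eigenvalue
print(f"Lambda(s={s}) ~ {Lam:.6f}")
```

A quick consistency check: replacing m by the constant 1 makes M approximately the identity, and the smallest positive eigenvalue reduces to µ_1^s = (π/L)^{2s}, i.e. µ_1 itself for s = 1.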