
Asymptotic harvesting of populations in random environments

Journal of Mathematical Biology

Abstract

We consider the harvesting of a population in a stochastic environment whose dynamics in the absence of harvesting is described by a one-dimensional diffusion. Using ergodic optimal control, we find the optimal harvesting strategy which maximizes the asymptotic yield of harvested individuals. To our knowledge, ergodic optimal control has not been used before to study harvesting strategies. However, it is a natural framework because the optimal harvesting strategy will never be such that the population is harvested to extinction; instead, the harvested population converges to a unique invariant probability measure. When the yield function is the identity, we show that the optimal strategy has a bang–bang property: there exists a threshold \(x^*>0\) such that whenever the population is under the threshold the harvesting rate must be zero, whereas when the population is above the threshold the harvesting rate must be at the upper limit. We provide upper and lower bounds on the maximal asymptotic yield, and explore via numerical simulations how the harvesting threshold and the maximal asymptotic yield change with the growth rate, maximal harvesting rate, or the competition rate. We also show that, if the yield function is \(C^2\) and strictly concave, then the optimal harvesting strategy is continuous, whereas when the yield function is convex the optimal strategy is of bang–bang type. This shows that one cannot always expect bang–bang type optimal controls.


Notes

  1. We thank the anonymous referee who has brought the paper to our attention.

  2. The assumption may not apply in models with large institutional investors.

  3. An ecological model extension that would link this literature to our model would consider optimal extraction policy to maximise a time discounted concave total-yield function when there are at least two populations, situated in different environments with no growth limitation.

  4. \(|\mu (x) - \mu (0)| < M|x|\) for some real \(M>0\), since \(\mu \) is locally Lipschitz by assumption. Therefore \(|g(x) - g(0) + \mu (0)x| < Mx^2\), so g is differentiable at 0 with \(g'(0) = - \mu (0)\); since \(\mu (0)>0\) by assumption, \(x_\iota \ne 0\).

References

  • Abakuks A (1979) An optimal hunting policy for a stochastic logistic model. J Appl Probab 16(2):319–331


  • Arapostathis A, Borkar VS, Ghosh MK (2012) Ergodic control of diffusion processes. Cambridge University Press, Cambridge


  • Alvarez LHR (2000) Singular stochastic control in the presence of a state-dependent yield structure. Stoch Process Appl 86(2):323–343


  • Abakuks A, Prajneshu (1981) An optimal harvesting policy for a logistic model in a randomly varying environment. Math Biosci 55(3–4):169–177


  • Alvarez LHR, Shepp LA (1998) Optimal harvesting of stochastically fluctuating populations. J Math Biol 37(2):155–177


  • Beddington JR, May RM (1977) Harvesting natural populations in a randomly fluctuating environment. Science 197(4302):463–465


  • Berg C, Pedersen HL (2006) The Chen–Rubin conjecture in a continuous setting. Methods Appl Anal 13(1):63–88


  • Berg C, Pedersen HL (2008) Convexity of the median in the gamma distribution. Ark Mat 46(1):1–6


  • Braumann CA (2002) Variable effort harvesting models in random environments: generalization to density-dependent noise intensities. Math Biosci 177(178):229–245 (Deterministic and stochastic modeling of biointeraction (West Lafayette, IN, 2000))


  • Benaïm M, Schreiber SJ (2009) Persistence of structured populations in random environments. Theor Popul Biol 76(1):19–34


  • Borodin AN, Salminen P (2012) Handbook of Brownian motion-facts and formulae. Birkhäuser, Basel


  • Dennis B, Patil GP (1984) The gamma distribution and weighted multimodal gamma distributions as models of population abundance. Math Biosci 68(2):187–212


  • Drèze J, Stern N (1987) The theory of cost-benefit analysis. Handb Pub Econ 2:909–989


  • Evans SN, Hening A, Schreiber SJ (2015) Protected polymorphisms and evolutionary stability of patch-selection strategies in stochastic environments. J Math Biol 71(2):325–359


  • Ethier SN, Kurtz TG (2009) Markov processes: characterization and convergence. Wiley, New York


  • Evans SN, Ralph PL, Schreiber SJ, Sen A (2013) Stochastic population growth in spatially heterogeneous environments. J Math Biol 66(3):423–476


  • Gard TC (1984) Persistence in stochastic food web models. Bull Math Biol 46(3):357–370


  • Gard TC (1988) Introduction to stochastic differential equations. M. Dekker, New York


  • Gard TC, Hallam TG (1979) Persistence in food webs. I. Lotka-Volterra food chains. Bull Math Biol 41(6):877–891


  • Gulland JA (1971) The effect of exploitation on the numbers of marine animals. In: Proceedings of the advanced study institute on dynamics of numbers in populations, pp 450–468

  • Hening A, Kolb M (2018) Quasistationary distributions for one-dimensional diffusions with singular boundary points (submitted). arXiv:1409.2387

  • Hening A, Nguyen D (2018) Coexistence and extinction for stochastic Kolmogorov systems. Ann Appl Probab. arXiv:1704.06984

  • Hening A, Nguyen D (2018) Persistence in stochastic Lotka-Volterra food chains with intraspecific competition (preprint). arXiv:1704.07501

  • Hening A, Nguyen D (2018) Stochastic Lotka-Volterra food chains. J Math Biol. arXiv:1703.04809

  • Hening A, Nguyen D, Yin G (2018) Stochastic population growth in spatially heterogeneous environments: the density-dependent case. J Math Biol 76(3):697–754


  • Hutchings JA, Reynolds JD (2004) Marine fish population collapses: consequences for recovery and extinction risk. AIBS Bull 54(4):297–309


  • Khasminskii R (2012) Stochastic stability of differential equations. In: Rozovskiĭ B, Glynn PW (eds) Stochastic modelling and applied probability, vol 66, 2nd edn. Springer, Heidelberg (With contributions by G. N. Milstein and M. B. Nevelson)


  • Kokko H (2001) Optimal and suboptimal use of compensatory responses to harvesting: timing of hunting as an example. Wildl Biol 7(3):141–150


  • Leigh EG (1981) The average lifetime of a population in a varying environment. J Theor Biol 90(2):213–239


  • Lande R, Engen S, Saether B-E (1995) Optimal harvesting of fluctuating populations with a risk of extinction. Am Nat 145(5):728–745


  • Ludwig D, Hilborn R, Walters C (1993) Uncertainty, resource exploitation, and conservation: lessons from history. Ecol Appl 3:548–549


  • Lungu EM, Øksendal B (1997) Optimal harvesting from a population in a stochastic crowded environment. Math Biosci 145(1):47–75


  • May RM, Beddington JR, Horwood JW, Shepherd JG (1978) Exploiting natural populations in an uncertain world. Math Biosci 42(3–4):219–252


  • Mas-Colell A, Whinston MD, Green JR (1995) Microeconomic theory, vol 1. Oxford University Press, New York


  • Merton RC (1969) Lifetime portfolio selection under uncertainty: the continuous-time case. Rev Econ Stat 51:247–257


  • Merton RC (1971) Optimum consumption and portfolio rules in a continuous-time model. J Econ Theory 3(4):373–413


  • Primack RB (2006) Essentials of conservation biology. Sinauer Associates, Sunderland


  • Reiter J, Panken KJ, Le Boeuf BJ (1981) Female competition and reproductive success in northern elephant seals. Anim Behav 29(3):670–687


  • Roth G, Schreiber SJ (2014) Persistence in fluctuating environments for interacting structured populations. J Math Biol 69(5):1267–1317


  • Schreiber SJ, Benaïm M, Atchadé KAS (2011) Persistence in fluctuating environments. J Math Biol 62(5):655–683


  • Shaffer ML (1981) Minimum population sizes for species conservation. Bioscience 31(2):131–134


  • Smith JB (1978) An analysis of optimal replenishable resource management under uncertainty. Digitized Theses, Paper 1074

  • Schreiber SJ, Ryan ME (2011) Invasion speeds for structured populations in fluctuating environments. Theor Ecol 4(4):423–434


  • Traill LW, Bradshaw CJA, Brook BW (2007) Minimum viable population size: a meta-analysis of 30 years of published estimates. Biol Conserv 139(1):159–166


  • Tyson R, Lutscher F (2016) Seasonally varying predation behavior and climate shifts are predicted to affect predator-prey cycles. Am Nat 188(5):539–553


  • Turelli M (1977) Random environments and stochastic calculus. Theor Popul Biol 12(2):140–178



Acknowledgements

We thank two anonymous referees for very insightful comments and suggestions that led to major improvements.

Corresponding author

Correspondence to Alexandru Hening.

Additional information

Dang H. Nguyen was in part supported by the National Science Foundation under Grant DMS-1207667 and is also supported by an AMS Simons travel grant. Tak Kwong Wong was in part supported by the HKU Seed Fund for Basic Research under the Project Code 201702159009, and the Start-up Allowance for Croucher Award Recipients.

Appendices

Appendix A: Proofs

In this appendix we present the framework of ergodic optimal control and prove the main results of our paper.

For any \(v\in \mathfrak U_{sm}\), denote the unique invariant probability measure of X(t) on \(\mathbb {R}_{++}\) by \(\pi _v\) if it exists. Define

$$\begin{aligned} \rho _v={\left\{ \begin{array}{ll} \int _0^\infty \Phi (xv(x))\pi _v(dx)&{}\text { if } \pi _v \text { exists,}\\ 0&{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
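As a concrete illustration (not used in the proofs), take the logistic drift \(\mu (x)=r-\kappa x\) with \(r,\kappa >0\) and a constant-effort strategy \(v\equiv u\in [0,M]\) with \(r-u-\sigma ^2/2>0\). The controlled diffusion \(dX(t) = X(t)(r-u-\kappa X(t))\,dt+\sigma X(t)\,dB(t)\) is then a standard stochastic logistic model whose invariant probability measure is a Gamma law (cf. Dennis and Patil 1984):

$$\begin{aligned} \pi _u(dx) \propto x^{\frac{2(r-u)}{\sigma ^2}-2} e^{-\frac{2\kappa x}{\sigma ^2}}\,dx, \qquad \int _0^\infty x\,\pi _u(dx) = \frac{r-u-\sigma ^2/2}{\kappa }. \end{aligned}$$

For the identity yield \(\Phi (x)=x\) this gives \(\rho _u = u(r-u-\sigma ^2/2)/\kappa \), which is maximized over constant efforts at \(u=(r-\sigma ^2/2)/2\) (when this value lies in [0, M]), yielding the lower bound \(\rho ^*\ge (r-\sigma ^2/2)^2/(4\kappa )\).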

Let \(p>0\). Since \(\lim _{x\rightarrow \infty }\mu (x)=-\infty \), there exist constants \( k_{1p}, k_{2p}>0\) such that

$$\begin{aligned} {\mathcal L}_u x^p\le px^p\mu (x)+\frac{1}{2}p(p-1)\sigma ^2 x^p \le k_{1p}-k_{2p}x^{p},\quad x\in \mathbb {R}_{++},\ u\in [0,M]. \end{aligned}$$
(A.1)
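To see how the constants in (A.1) can arise, consider again the illustrative logistic drift \(\mu (x)=r-\kappa x\). Then, since \(u\ge 0\),

$$\begin{aligned} {\mathcal L}_u x^p \le p\Big (r+\frac{(p-1)\sigma ^2}{2}\Big )x^p - p\kappa x^{p+1} \le k_{1p}-k_{2p}x^{p}, \end{aligned}$$

where \(k_{2p}>0\) is arbitrary and \(k_{1p}:=\sup _{x>0}\big [\big (p(r+\frac{(p-1)\sigma ^2}{2})+k_{2p}\big )x^p-p\kappa x^{p+1}\big ]\) is finite because the bracket tends to 0 as \(x\rightarrow 0\) and to \(-\infty \) as \(x\rightarrow \infty \).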

By Dynkin’s formula

$$\begin{aligned} {\mathbb {E}}_x^v [X(t)]^p\le x^p+k_{1p}t-k_{2p}{\mathbb {E}}_x^v\int _0^t[X(s)]^{p}ds. \end{aligned}$$

Thus,

$$\begin{aligned} \dfrac{1}{t}{\mathbb {E}}_x^v\int _0^t[X(s)]^{p}ds \le \frac{1}{k_{2p}}\left( \dfrac{x^p}{t}+k_{1p}\right) . \end{aligned}$$
(A.2)

As a result, the family of occupation measures

$$\begin{aligned} \Pi _{x,t}^v(\cdot ):=\dfrac{1}{t}\int _0^t\mathbb {P}_x^v\{X(s)\in \cdot \}\,ds,~~ t\ge 1 \end{aligned}$$

is tight. If X(t) has an invariant probability measure on \(\mathbb {R}_{++}\), then \(\left( \Pi _{x,t}^v\right) _{t\ge 0}\) converges weakly to \(\pi _v\) because the diffusion is nondegenerate. This convergence and the uniform integrability (A.2) imply that

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{1}{t}\int _0^t\Phi (X(s)v(X(s)))ds=\rho _v. \end{aligned}$$

If X(t) has no invariant probability measure on \(\mathbb {R}_{++}\), then the Dirac measure with mass at 0 is the only invariant probability measure of X(t) on \(\mathbb {R}_+\). Moreover, any weak limit of \(\left( \Pi _{x,t}^v\right) _{t\ge 0}\) as \(t\rightarrow \infty \) is an invariant probability measure of X(t) (Ethier and Kurtz 2009, Theorem 9.9 or Evans et al. 2015, Proposition 8.4). Thus, \(\left( \Pi _{x,t}^v\right) _{t\ge 0}\) converges weakly to the Dirac measure \(\delta _0\) as \(t\rightarrow \infty \). Because of (A.2) and \(\Phi (0)=0\), we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{1}{t}\int _0^t\Phi (X(s)v(X(s)))ds=0=\rho _v. \end{aligned}$$

Thus, we always have

$$\begin{aligned} \lim _{t\rightarrow \infty }\dfrac{1}{t}\int _0^t\Phi (X(s)v(X(s)))ds=\rho _v. \end{aligned}$$
(A.3)
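For intuition, the ergodic limit (A.3) can be checked by simulation. The following is a minimal Euler–Maruyama sketch of ours, assuming a hypothetical logistic drift \(\mu (x)=r-\kappa x\), the identity yield, and a bang–bang strategy with an arbitrary threshold; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch: estimate the long-run yield rho_v in (A.3) by a time average.
# Model assumptions (ours): mu(x) = r - kappa*x, Phi = identity,
# bang-bang Markov strategy v(x) = 0 for x <= xstar and M otherwise.
rng = np.random.default_rng(0)
r, kappa, sigma, M, xstar = 1.0, 1.0, 0.5, 0.8, 0.6
dt, T = 1e-3, 2000.0
n, sqdt = int(T / dt), np.sqrt(dt)

x, total = 1.0, 0.0      # initial population; accumulated yield
for _ in range(n):
    v = 0.0 if x <= xstar else M              # bang-bang control, cf. (A.18)
    total += x * v * dt                       # Phi(X v(X)) dt with Phi = id
    x += x * (r - kappa * x - v) * dt + sigma * x * sqdt * rng.standard_normal()
    x = max(x, 1e-12)                         # guard against numerical underflow

print("estimated long-run yield rho_v ~", total / T)
```

For large T the printed time average should stabilize near \(\rho _v\), in line with (A.3).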

Define

$$\begin{aligned} \rho ^*:=\sup _{v\in \mathfrak U_{sm}}\{\rho _v\}. \end{aligned}$$
(A.4)

It will be shown later that \(\rho ^*>0\) whenever the population without harvesting persists, i.e. when \(\mu (0)-\sigma ^2/2>0\).

Theorem A.1

Suppose \(\mu (0)-\sigma ^2/2>0\), \(\mu (\cdot )\) satisfies Assumption 2.1 and \(\Phi (\cdot )\) satisfies Assumption 2.2. There exists a stationary Markov strategy \(v^*\in \mathfrak U_{sm}\) such that \(\pi _{v^*}\) exists and \(\rho _{v^*}=\rho ^*\). Moreover, for any admissible control h(t), we have

$$\begin{aligned} \liminf _{T\rightarrow \infty }\dfrac{1}{T}\int _0^T \Phi \Big (X(t)h(t)\Big )\,dt \le \rho _{v^*}=\rho ^* \text { a.s.} \end{aligned}$$

Proof

By (A.2) and since \(\Phi \) has a subpolynomial growth rate we can conclude that

$$\begin{aligned} \sup _{v\in \mathfrak U_{sm}}\int _0^\infty \Phi (xv(x))\pi _v(dx)<\infty . \end{aligned}$$
(A.5)

Moreover, since \(\mu (0)-\sigma ^2/2>0\), the population does not go extinct in the absence of harvesting, so \(\rho ^*>0\). On the other hand, since \(\Phi \) is continuous and \(\Phi (0)=0\), we get that \(\Phi (x)<\rho ^*\) for x sufficiently small. This fact combined with (A.5) implies the existence of an optimal Markov strategy \(v^*\) by Arapostathis et al. (2012, Theorems 3.4.5 and 3.4.7). \(\square \)

Theorem A.2

Suppose \(\mu (0)-\sigma ^2/2>0\), \(\mu (\cdot )\) satisfies Assumption 2.1 and \(\Phi (\cdot )\) satisfies Assumption 2.2. The HJB equation

$$\begin{aligned} \max _{u\in U}\Big [{\mathcal L}_u V(x)+\Phi (xu)\Big ]=\rho \end{aligned}$$
(A.6)

admits a classical solution \(V^*\in C^2(\mathbb {R}_+)\) satisfying \(V^*(1)=0\) and \(\rho =\rho ^*> 0\). The solution \(V^*\) of (A.6) has the following properties:

  1. a)

    For any \(p\in (0,1)\)

    $$\begin{aligned} \lim _{x\rightarrow \infty }\dfrac{V^*(x)}{x^{p}}=0. \end{aligned}$$
    (A.7)
  2. b)

    The function \(V^*\) is increasing, that is

    $$\begin{aligned} V^*_x\ge 0,\, x\in \mathbb {R}_{++}. \end{aligned}$$
    (A.8)

A Markov control v is optimal if and only if it satisfies

$$\begin{aligned} \begin{aligned} \dfrac{d V^*}{d x}(x)\Big [x(\mu (x)-v(x))\Big ]+\Phi (xv(x)) =\max _{u\in U} \left( \dfrac{d V^*}{d x}(x)\Big [x(\mu (x)-u)\Big ]+\Phi (xu)\right) \end{aligned} \end{aligned}$$
(A.9)

almost everywhere in \(\mathbb {R}_+\).

Proof

Consider the optimal problem with the yield function

$$\begin{aligned} J_h(x)={\mathbb {E}}_x\int _0^\infty e^{-\alpha t} h(t)X(t)dt \end{aligned}$$

for some fixed \(x\in \mathbb {R}_{++}\) and \(h\in \mathfrak {U}\). Note that this is the \(\alpha \)-discounted optimal problem. Pick any \(0<x_1<x_2<\infty \) and let \(X^{x_1}\), \(X^{x_2}\) be the solutions to the controlled diffusion

$$\begin{aligned} dX(t) = X(t)(\mu (X(t)) -h(t) )\,dt + \sigma X(t)\,dB(t) \end{aligned}$$

with initial values \(x_1\), \(x_2\) respectively. Note that we are using a fixed admissible control h(t) which is the same for any initial value. The control h(t) here is not a Markov control which in general depends on the initial value. Since \(\mu (\cdot )\) is continuous and decreasing, for \(y_1, y_2>0\), there exists \(\xi (y_1, y_2)>0\) depending continuously on \(y_1, y_2\) such that \(\mu (y_1)-\mu (y_2)=-\xi (y_1, y_2)(\ln y_1-\ln y_2)\). Using Itô’s Lemma we have

$$\begin{aligned} \begin{aligned} d (\ln X^{x_2}(t)-\ln X^{x_1}(t))=&\left( \mu (X^{x_2}(t))-\mu (X^{x_1}(t))\right) dt\\ =&-\xi (X^{x_1}(t), X^{x_2}(t)) (\ln X^{x_2}(t)-\ln X^{x_1}(t))dt, \end{aligned} \end{aligned}$$

which in turn yields

$$\begin{aligned} \ln X^{x_2}(t)-\ln X^{x_1}(t)=(\ln x_2-\ln x_1)\exp \left( -\int _0^t\xi (X^{x_1}(s), X^{x_2}(s))ds\right) >0. \end{aligned}$$

Therefore, if \(x_2>x_1\), we get that

$$\begin{aligned} \mathbb {P}(X^{x_2}(t)> X^{x_1}(t), t\ge 0)=1. \end{aligned}$$

This implies that \(J_h(\cdot )\) is an increasing function. Therefore, the optimal yield

$$\begin{aligned} V_\alpha (x):=\sup _{h\in \mathfrak {U}}J_h(x) \end{aligned}$$

is also increasing. By Arapostathis et al. (2012, Lemma 3.7.8), there is a function \(V^*\in C^2(\mathbb {R}_{++})\) satisfying (A.6) for a number \(\rho \) such that

$$\begin{aligned} \rho \ge \rho ^*. \end{aligned}$$
(A.10)

Moreover,

$$\begin{aligned} V^*(x)=\lim _{n\rightarrow \infty } \left( V_{\alpha _n}(x)-V_{\alpha _n}(1)\right) \end{aligned}$$

for some sequence \((\alpha _n)_{n\in \mathbb {N}}\) that satisfies \(\alpha _n\rightarrow 0\) as \(n\rightarrow \infty \). This implies that \(V^*\) is an increasing function, i.e.

$$\begin{aligned} V^*_x\ge 0,\, x\in \mathbb {R}_{++}. \end{aligned}$$
(A.11)

For any continuous function \(\psi :\mathbb {R}_{++}\mapsto \mathbb {R}\) satisfying

$$\begin{aligned} |\psi (x)|\le c(1+x^p),\quad x\in \mathbb {R}_{++}, c>0 \end{aligned}$$
(A.12)

we have from (A.1) and Arapostathis et al. (2012, Lemma 3.7.2) that \({\mathbb {E}}^v_x |\psi (X(t))|\) exists and satisfies

$$\begin{aligned} \lim _{t\rightarrow \infty }\left( \frac{1}{t}\sup _{v\in \mathfrak U_{sm}}{\mathbb {E}}^v_x |\psi (X(t))|\right) =0, \end{aligned}$$
(A.13)

and

$$\begin{aligned} \lim _{R\rightarrow \infty }{\mathbb {E}}^v_x \psi (X(t\wedge \xi _R))={\mathbb {E}}^v_x \psi (X(t))<\infty ,\quad t\ge 0, \end{aligned}$$
(A.14)

where \(\xi _R=\inf \{t\ge 0: X(t)> R \text { or } X(t)<R^{-1}\}.\) Moreover, by using Arapostathis et al. (2012, Lemma 3.7.2) again we get that

$$\begin{aligned} \lim _{x\rightarrow \infty }\dfrac{f_R(x)}{x^p}=0,\quad R\ge 0 \end{aligned}$$
(A.15)

where

$$\begin{aligned}f_R(x):=\sup _{v\in \mathfrak U_{sm}}{\mathbb {E}}^v_x\int _0^{\tau _R}\Phi (X(t))dt, \end{aligned}$$

and \(\tau _R:=\inf \{t\ge 0: X(t)\le R\}.\)

By Arapostathis et al. (2012, Formula 3.7.48), we have the estimate

$$\begin{aligned} V^*(x)\le \sup _{v\in \mathfrak U_{sm}}{\mathbb {E}}^v_x\int _0^{\tau _R}\left( \Phi (X(t))+\rho ^*\right) dt+\sup _{y\in [0,R]}\{V^*(y)\} \end{aligned}$$

which implies

$$\begin{aligned} V^*(x)\le c_p(1+x^p), x\ge R \quad \text { for some }\,c_p>0. \end{aligned}$$
(A.16)

Now, pick any \(\varepsilon >0\) and divide both sides of (A.16) by \(x^{p+\varepsilon }\). We get

$$\begin{aligned} \frac{V^*(x)}{x^{p+\varepsilon }}\le c_p\left( \frac{1}{x^{p+\varepsilon }}+x^{-\varepsilon }\right) , \quad x\ge R \end{aligned}$$

and by letting \(x\rightarrow \infty \)

$$\begin{aligned} \lim _{x\rightarrow \infty }\frac{V^*(x)}{x^{p+\varepsilon }} = 0. \end{aligned}$$

Since p and \(\varepsilon >0\) are arbitrary, this implies Eq. (A.7). Let \(\chi :\mathbb {R}_{++}\mapsto [0,1]\) be a continuous function satisfying \(\chi (x)=0\) if \(x<\frac{1}{2}\) and \(\chi (x)=1\) if \(x\ge 1\). Then \(\psi (x):=V^*(x)\chi (x)\) satisfies (A.12) because of (A.16). On the other hand, since \(V^*(x)\) is increasing and \(V^*(1)=0\), we have \(V^*(x)\le 0\) when \(x\le 1\). Thus, we have

$$\begin{aligned} V^*(x)\le \chi (x)V^*(x),\quad x\in \mathbb {R}_{++}. \end{aligned}$$

Let \(v^*\) be the measurable function satisfying (A.9). By (A.10) and the definition (A.4) of \(\rho ^*\),

$$\begin{aligned} \rho \ge \rho ^*\ge \rho _{v^*}. \end{aligned}$$
(A.17)

By Dynkin’s formula

$$\begin{aligned} \begin{aligned} {\mathbb {E}}_x^{v^*}\chi (X(t\wedge \xi _R))V^*(X(t\wedge \xi _R))-V^*(x)&\ge {\mathbb {E}}_x^{v^*}V^*(X(t\wedge \xi _R))-V^*(x)\\&={\mathbb {E}}_x^{v^*}\int _0^{t\wedge \xi _R}\left( \rho -\Phi (X(s)v^*(X(s)))\right) ds. \end{aligned} \end{aligned}$$

Letting \(R\rightarrow \infty \), we obtain from the monotone convergence theorem and (A.14) that

$$\begin{aligned} \begin{aligned} \dfrac{1}{t}\left( {\mathbb {E}}_x^{v^*}\chi (X(t))V^*(X(t))-V^*(x)\right) \ge \rho -\dfrac{1}{t}{\mathbb {E}}_x^{v^*}\int _0^{t}\Phi (X(s)v^*(X(s)))ds,\quad t>0. \end{aligned} \end{aligned}$$

Letting \(t\rightarrow \infty \) and using (A.13) and (A.3), we have

$$\begin{aligned}0\ge \rho -\rho _{v^*}.\end{aligned}$$

This and (A.17) imply that \(\rho =\rho ^*=\rho _{v^*}\).

By the arguments from Arapostathis et al. (2012, Theorem 3.7.12), we can show that v is an optimal control if and only if (A.9) is satisfied. \(\square \)

When \(\Phi \) is the identity mapping, Eq. (A.9) becomes (after dividing through by \(x>0\))

$$\begin{aligned} \begin{aligned} -\dfrac{d V^*}{d x} v (x) + v(x) =\max _{u\in U} \left( -\dfrac{d V^*}{d x} u +u \right) , \end{aligned} \end{aligned}$$

which implies

$$\begin{aligned} \begin{aligned} v(x)&= {\left\{ \begin{array}{ll} 0 &{} \text{ if } V_x^*>1 \\ M &{} \text{ if } V_x^*<1. \end{array}\right. } \end{aligned} \end{aligned}$$
(A.18)

Our main result is the following theorem.

Theorem 2.1

Assume that \(\Phi (x)=x, x\in (0,\infty )\) and that the population survives in the absence of harvesting, that is \(\mu (0)-\frac{\sigma ^2}{2}>0\). Furthermore assume that the drift function \(\mu (\cdot )\) satisfies Assumption 2.1. The optimal control (the optimal harvesting strategy) v has the bang–bang form

$$\begin{aligned} \begin{aligned} v(x)&= {\left\{ \begin{array}{ll} 0 &{}\quad \text{ if } \; 0< x\le x^* \\ M &{}\quad \text{ if } \; x>x^* \end{array}\right. } \end{aligned} \end{aligned}$$
(2.6)

for a unique \(x^*\in (0,\infty )\). Furthermore, we have the following upper bound for the optimal asymptotic yield

$$\begin{aligned} \rho ^* \le \sup _{x\in \mathbb {R}_+} x\mu (x). \end{aligned}$$
(2.7)
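To make (2.7) concrete, consider again the illustrative logistic drift \(\mu (x)=r-\kappa x\): then \(\sup _{x} x\mu (x) = r^2/(4\kappa )\), attained at \(x=r/(2\kappa )\). Combined with the constant-effort lower bound computed after the definition of \(\rho _v\), this sandwiches the optimal asymptotic yield:

$$\begin{aligned} \frac{(r-\sigma ^2/2)^2}{4\kappa } \le \rho ^* \le \frac{r^2}{4\kappa }, \end{aligned}$$

provided \((r-\sigma ^2/2)/2\in [0,M]\).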

Remark A.1

If \(V_x^*(x)=1\) then we note that (A.18) does not provide any information about v(x). However, in this case we can set the harvesting rate to any value, since the asymptotic yield will not change. This is because our diffusion is non-degenerate, and changing the values of the drift on a set of zero Lebesgue measure does not change the distribution of X.

We split up the proof of Theorem 2.1 into a few propositions. It is immediate to see that the HJB equation (A.6) becomes

$$\begin{aligned} \begin{aligned} \rho&=\max _{u\in U}\Big [ x(\mu (x)-u)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} + xu \Big ] \\&=x\mu (x)f_x+\dfrac{1}{2}\sigma ^2x^2f_{xx} +\max _{u\in U}[ (1-f_x)xu]\\&= {\left\{ \begin{array}{ll} x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} &{} \text{ if } f_x> 1\\ x(\mu (x)-M)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} +Mx &{}\text{ if } f_x\le 1. \end{array}\right. } \end{aligned} \end{aligned}$$
(A.19)

Sketch of proof of Theorem 2.1

Since the optimal control is given by (A.18), we need to analyze the properties of the function \(V_x^*\), which by (A.19) satisfies a first order ODE. The analysis is split into several propositions. Note that the ODE governing \(V_x^*\) is different depending on whether \(V_x^*>1\) or \(V_x^*\le 1\).

In Proposition A.1 we analyze the ODE for the region where \(V_x^*\le 1\) and find its asymptotic behavior close to 0. Using this we can show in Proposition A.2 that there cannot exist an \(\eta >0\) such that \(V_x^*(x)\le 1\) for all \(x\in (0,\eta ]\).

Similarly, in Proposition A.3 we show that there can exist no \(\zeta >0\) such that \(V_x^*(x)\ge 1\) for all \(x\ge \zeta \).

In Proposition A.4 we explore the possible ways \(V_x^*\) can cross the line \(y=1\) and find, using soft arguments, that there can be at most three crossings. Finally, we show that there must in fact be exactly one crossing of \(y=1\) by \(V_x^*\), and that this crossing has to be from above. This combined with (A.18) completes the proof. \(\square \)

Proposition A.1

Assume \(\mu (\cdot )\) is locally Lipschitz on \([0,\infty )\). Then any solution \(\varphi _2\) of the ODE

$$\begin{aligned} \frac{d\varphi _2}{dx}(x) + \frac{2(\mu (x) -M)}{\sigma ^2 x}\varphi _2(x) = \frac{2(\rho -Mx)}{\sigma ^2 x^2} \end{aligned}$$
(A.20)

satisfies

$$\begin{aligned} \lim _{x\rightarrow 0^+} \varphi _2 (x) = \pm \infty . \end{aligned}$$
(A.21)

Proof

It follows from the method of integrating factors that the solution to the ODE (A.20) is

$$\begin{aligned} \varphi _2 (x) = \frac{\zeta (x_0)\varphi _2 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy}{\zeta (x)}, \end{aligned}$$
(A.22)

where the non-homogeneous term is \(\beta (y):=\frac{2(\rho -My)}{\sigma ^2 y^2}\), and the integrating factor is

$$\begin{aligned} \zeta (x) := e^{\int _{x_1}^{x} \gamma (y) \;dy}, \end{aligned}$$

for \(\gamma (y):=\frac{2(\mu (y) -M)}{\sigma ^2 y}\), and arbitrary \(x_0, x_1\in (0,\infty )\). Since \(\mu \) is locally Lipschitz at \(x=0\), there are constants \(L, K>0\) such that for any \(x\in [0,L]\), \(|\mu (x)-\mu _0|\le K x\), where \(\mu _0:=\mu (0)\). From now on, we choose \(x_1:=L\) (or any number between 0 and L). We have, for any \(x\in [0,x_1]\),

$$\begin{aligned} \left| \int _{x_1}^{x} \frac{\mu (y)-\mu _0}{y} \;dy\right| \le \int _{x}^{x_1} \frac{|\mu (y)-\mu _0|}{y} \;dy \le \int _{x}^{x_1} \frac{Ky}{y} \;dy\le K (x_1 - x). \end{aligned}$$

This implies that as \(x\rightarrow 0^+\),

$$\begin{aligned} \zeta (x) = e^{\frac{2}{\sigma ^2}\int _{x_1}^{x} \frac{\mu (y)-\mu _0}{y} \;dy} e^{\frac{2}{\sigma ^2}\int _{x_1}^{x} \frac{\mu _0 -M}{y} \;dy} \sim x^{\frac{2}{\sigma ^2}(\mu _0-M)}. \end{aligned}$$
(A.23)

On the other hand, from now on we choose \(x_0>0\) sufficiently close to 0 so that \(\rho -Mx>0\) and (A.23) holds for all \(x\in (0,x_0)\); then we have, for any \(0<x<x_0\),

$$\begin{aligned} \begin{aligned}&\int _{x}^{x_0} \zeta (y)\beta (y) \;dy \sim \frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -My) \;dy \\&\quad = {\left\{ \begin{array}{ll} C_0 + C_1 x^{\frac{2}{\sigma ^2}(\mu _0-M)} + C_2 x^{\frac{2}{\sigma ^2}(\mu _0-M)-1} &{} \text {if } \mu _0-M \ne 0, \; \frac{\sigma ^2}{2}\\ \frac{2}{\sigma ^2} (\rho \ln x_0 - M x_0) + \frac{2M}{\sigma ^2} x - \frac{2\rho }{\sigma ^2} \ln x &{} \text{ if } \mu _0-M = \frac{\sigma ^2}{2}\\ \frac{2}{\sigma ^2} (-\rho x^{-1}_0 - M \ln x_0) + \frac{2M}{\sigma ^2} \ln x + \frac{2\rho }{\sigma ^2} x^{-1} &{} \text{ if } \mu _0-M=0, \end{array}\right. } \end{aligned} \end{aligned}$$
(A.24)

where the constants \(C_i\) are given by

$$\begin{aligned} C_0 := -\frac{M x_0^{\frac{2}{\sigma ^2}(\mu _0-M)}}{\mu _0-M} + \frac{\rho x_0^{\frac{2}{\sigma ^2} (\mu _0-M)-1}}{\mu _0-\frac{\sigma ^2}{2}-M}, \quad C_1 := \frac{M}{\mu _0-M}, \quad C_2 := -\frac{\rho }{\mu _0-\frac{\sigma ^2}{2}-M} . \end{aligned}$$

Now, using the asymptotic properties (A.23) and (A.24), we can analyze the limit of \(\varphi _2\) as follows.

Case 1: \(\mu _0 < M\).

In this case, we get from (A.23) and (A.24) that

$$\begin{aligned} \lim _{x\rightarrow 0^+} \zeta (x) = \pm \infty , \quad \lim _{x\rightarrow 0^+} \zeta (x_0)\varphi _2 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy = \pm \infty . \end{aligned}$$

Thus, we can apply l’Hôpital’s rule and obtain

$$\begin{aligned} \lim _{x\rightarrow 0^+} \varphi _2 (x) = \lim _{x\rightarrow 0^+} \frac{\zeta (x_0)\varphi _2 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy}{\zeta (x)} = \lim _{x\rightarrow 0^+} \frac{\rho -Mx}{x(\mu (x)-M)} = \pm \infty \end{aligned}$$
(A.25)

since \(\rho >0\). This shows the limit (A.21).

Case 2: \(M\le \mu _0 \le M + \frac{\sigma ^2}{2}\).

For this range of \(\mu _0\), it follows from (A.23) and (A.24) again that

$$\begin{aligned} \lim _{x\rightarrow 0^+} \zeta (x_0)\varphi _2 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy = \pm \infty , \end{aligned}$$

but \(\lim _{x\rightarrow 0^+} \zeta (x)\) exists and is finite. Hence, we can obtain the limit (A.21) by passing to the limit \(x\rightarrow 0^+\) in the solution formula (A.22).

Case 3: \(\mu _0 > M + \frac{\sigma ^2}{2}\).

In this final case, it follows from (A.23) and (A.24) that \(\lim _{x\rightarrow 0^+} \zeta (x) = 0\) and

$$\begin{aligned} J := \lim _{x\rightarrow 0^+} \zeta (x_0)\varphi _2 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy \end{aligned}$$

exists and is finite. If \(J\ne 0\), then passing to the limit \(x\rightarrow 0^+\) in the solution formula (A.22) will imply the limit (A.21). Otherwise, we can apply l’Hôpital’s rule and do the same computations we did in (A.25). This proves the limit (A.21).

Putting together Cases 1, 2 and 3 completes the proof. \(\square \)

Proposition A.2

There does not exist any \(\eta >0\) such that \(V_x^*(x)\le 1, x\in (0,\eta ]\).

Proof

We will argue by contradiction. Assume there exists \(\eta >0\) such that \(V_x^*(x)\le 1, x\in (0,\eta ]\). Then by (A.19) we get that \(V_x^*\) follows the ODE (A.20) for all \(x\in (0,\eta )\). Making use of Proposition A.1 we get that

$$\begin{aligned} \lim _{x\rightarrow 0+}V_x^*(x) = \lim _{x\rightarrow 0+}\varphi _2(x) = \pm \infty \end{aligned}$$

which contradicts either (A.8) (if the limit is \(-\infty \)) or the assumption \(V_x^*(x)\le 1, x\in (0,\eta ]\) (if the limit is \(+\infty \)). The proof is complete. \(\square \)

The above Proposition shows that the scenario from Fig. 5 cannot happen.

Fig. 5 If \(V^*_x\) crosses \(y=1\) from below at \(x_0\), and it has not crossed from above before, then we get a contradiction by Proposition A.2

Proposition A.3

There does not exist any \(\chi >0\) such that \(V_x^*(x)\ge 1\) for all \(x\ge \chi \).

Proof

Once again we will argue by contradiction. Assume there exists \(\chi >0\) such that \(V_x^*(x)\ge 1\) for all \(x\ge \chi \). By (A.19) \(V_x^*\) will follow the ODE

$$\begin{aligned} \frac{d\varphi _1}{dx}(x) + \frac{2\mu (x)}{\sigma ^2 x}\varphi _1(x) = \frac{2\rho }{\sigma ^2 x^2} \end{aligned}$$

for all \(x\ge \chi \). As a result we get just as in Proposition A.1

$$\begin{aligned} \varphi _1 (x) = \frac{\zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy}{\zeta (x)}, \end{aligned}$$
(A.26)

where the non-homogeneous term is \(\beta (y):=\frac{2\rho }{\sigma ^2 y^2}\), and the integrating factor is

$$\begin{aligned} \zeta (x) := e^{\int _{x_1}^{x} \gamma (y) \;dy}, \end{aligned}$$

for \(\gamma (y):=\frac{2\mu (y)}{\sigma ^2 y}\), and arbitrary \(x_0, x_1\in (\chi ,\infty )\). Under Assumption 2.1, since \(\mu (x)\rightarrow -\infty \), there exist constants \(L>\chi \) and \(c>\frac{\sigma ^2}{2}\) such that \(\mu (y) < -c\) for all \(y>L\), and hence \(\int _{L}^x \frac{\mu (y)}{y} \;dy \le -c \int _{L}^x \frac{1}{y} \;dy = -c (\ln x - \ln L) \rightarrow -\infty \) as \(x\rightarrow \infty \). Choosing \(x_1:=L\) we get

$$\begin{aligned} x \zeta (x) \le L^{\frac{2c}{\sigma ^2}}\, x^{1 - \frac{2c}{\sigma ^2}} \rightarrow 0 \end{aligned}$$
(A.27)

as \(x\rightarrow \infty \). If

$$\begin{aligned} \zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{\infty } \zeta (y)\beta (y) \;dy>0 \end{aligned}$$

then by (A.27) and the positivity of \(\zeta \) one has

$$\begin{aligned} \lim _{x\rightarrow \infty } \frac{V_x^*}{x}= \lim _{x\rightarrow \infty } \frac{\varphi _1(x)}{x} = \lim _{x\rightarrow \infty }\frac{\zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy}{x\zeta (x)}= + \infty \end{aligned}$$

which contradicts the growth condition (A.7), since it would force \(V^*\) to grow superlinearly. Therefore we need

$$\begin{aligned} \zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{\infty } \zeta (y)\beta (y) \;dy \le 0. \end{aligned}$$

Note that in this case

$$\begin{aligned} \begin{aligned} \zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy \le - \int _{x}^{\infty } \zeta (y)\beta (y) \;dy <0. \end{aligned} \end{aligned}$$

This implies, since \(\zeta (x)>0\), that for \(x>x_0\)

$$\begin{aligned} \begin{aligned} V^*_x(x)&= \varphi _1 (x) = \frac{\zeta (x_0)\varphi _1 (x_0) + \int _{x_0}^{x} \zeta (y)\beta (y) \;dy}{\zeta (x)}<0, \end{aligned} \end{aligned}$$

which contradicts the assumption that \(V_x^*(x)\ge 1\) for all \(x\ge \chi \). \(\square \)

The above Proposition shows that the scenario from Fig. 6 is not possible.

Fig. 6 An impossible scenario, by Proposition A.3

Set \(g(x):=\rho - x\mu (x)\). By assumption \(p(x):=x\mu (x)\) has a unique maximum, and \(\mu \) is locally Lipschitz and decreasing with \(\lim _{x\rightarrow \infty } \mu (x)=-\infty \). This implies that g has a unique minimum at some \(x_\iota \in (0,\infty )\) (see Note 4). If \(g(x_\iota )<0\) then g intersects the x axis in exactly two points \(0<\alpha _1<\alpha _2<\infty \). If \(g(x_\iota )>0\) there is no intersection of g with the x axis. Finally, if \(g(x_\iota )=0\) there is exactly one intersection, and it happens at \(x=x_\iota \).
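For instance, with the illustrative logistic drift \(\mu (x)=r-\kappa x\) one has \(g(x)=\rho -rx+\kappa x^2\), so \(x_\iota =r/(2\kappa )\) and \(g(x_\iota )=\rho -r^2/(4\kappa )\); when \(\rho <r^2/(4\kappa )\) the two intersection points are

$$\begin{aligned} \alpha _{1,2}=\frac{r\mp \sqrt{r^2-4\kappa \rho }}{2\kappa }. \end{aligned}$$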

Proposition A.4

The function \(V^*_x\) crosses the line \(y=1\) at most three times. More specifically, we have the following possibilities:

  1. (I)

    If \(g(x_\iota )<0\) then

    1. (i)

For \(0\le x<\alpha _1\) the function \(V_x^*\) can cross the line \(y=1\) at most once and the crossing has to be from below.

    2. (ii)

      For \(x>\alpha _2\) the function \(V_x^*\) can pass the line \(y=1\) at most once and the crossing has to be from below.

    3. (iii)

      For \(\alpha _1<x<\alpha _2\) the function \(V_x^*\) can pass the line \(y=1\) at most once and the crossing has to be from above.

  2. (II)

    If \(g(x_\iota )>0\) then the function \(V_x^*\) can pass the line \(y=1\) at most once and the crossing has to be from below.

  3. (III)

    If \(g(x_\iota )=0\) then \(V_x^*\) can cross the line \(y=1\) at most three times. In particular, the possible crossing(s) in \((0,x_\iota ) \cup (x_\iota ,\infty )\) must be from below.

  4. (IV)

    If \(V_x^*\) crosses the line \(y=1\) at \(x_0\) then we cannot have \(\varepsilon >0\) such that \(V_x^*=1\) on \((x_0,x_0+\varepsilon )\). In other words, the intersections have to be at separate points and we cannot ‘stick’ to \(y=1\).

Proof

It follows from the HJB equation (A.19) with \(\varphi :=V_x^*\) that if \(\varphi (x_0)=1\), then we have

$$\begin{aligned} g(x_0) = \rho - x_0\mu (x_0) = \frac{1}{2}\sigma ^2 x_0^2 \varphi '(x_0){\left\{ \begin{array}{ll}<0 &{} \text{ if } \varphi '(x_0)<0 \\ =0 &{} \text{ if } \varphi '(x_0)=0 \\>0 &{} \text{ if } \varphi '(x_0)>0. \end{array}\right. } \end{aligned}$$

Therefore, when \(\varphi \) crosses the line \(y=1\), we obtain some information from g. More precisely, we can infer the following:

  1. (I)

    When \(g(x_\iota )<0\) the function \(g(x) = \rho - x\mu (x)\) has exactly two zeros at \(\alpha _1, \alpha _2\) with \(0<\alpha _1<\alpha _2<\infty \).

    1. (i)

      for \(0\le x<\alpha _1\) we have \(g(x) > 0\), hence \(\varphi \) is only allowed to cross the line \(y=1\) from below in this region;

    2. (ii)

      for \(x>\alpha _2\) we have \(g(x) > 0\), hence \(\varphi \) is only allowed to cross the line \(y=1\) from below in this region;

    3. (iii)

      for \(\alpha _1<x<\alpha _2\), \(g(x)<0\) and \(\varphi \) is only allowed to cross the line \(y=1\) from above in this region.

  2. (II)

    If \(g(x_\iota )>0\) then \(g(x)> 0\) for all \(x\in \mathbb {R}_+\). The function \(V_x^*\) can pass the line \(y=1\) at most once and the crossing has to be from below.

  3. (III)

If \(g(x_\iota )=0\) then g has a unique intersection with the x axis, at \(x_\iota \). As a consequence \(g(x)\ge 0\), and the function \(V_x^*\) can cross the line \(y=1\) at most thrice: at most once from below in the region \(x<x_\iota \), at most once from below in the region \(x>x_\iota \), and at most once, from above or from below, at the point \(x= x_\iota \).

  4. (IV)

    Since \(x\mu (x)\) is never constant on an interval, it is clear that for any \((u,v)\subset \mathbb {R}_+\) we cannot have \(V_x^*=1\) for all \(x\in (u,v)\).

\(\square \)

Remark A.2

By the analysis above one can note that at the intersection points (or roots) \(\alpha _{1,2}\) of the function g(x) with the x axis the derivative of \(\varphi \) is 0. This makes it more complicated to say, in case there is a crossing at a root, if the crossing is from above or from below. However, this does not require us to change our arguments. For example, if there is a crossing from below on \(0\le x<\alpha _1\) and there is a crossing at \(x=\alpha _1\) then the crossing at \(\alpha _1\) is necessarily from above. This then implies that there can be no crossing for \(x\in (\alpha _1,\alpha _2)\) because in this region the crossing has to be from above and there cannot be two crossings from above in a row.

Proof of Theorem 2.1

A direct consequence of Proposition A.4 is that \(V_x^*\) can cross the line \(y=1\) at most three times. We also know, given the at most two possible solutions \(\alpha _{1,2}\) of the equation \(g(x)=0\), how these crossings have to happen. Next, we eliminate all but one possibility.

  1. i)

If we get a crossing from below in \((0,\alpha _1)\), this means that there exists \(\eta >0\) such that \(V_x^*(x)=\varphi _2(x) \le 1\) for all \(x\in (0,\eta )\). This is not possible by Proposition A.2. As such there can be no crossings in \((0,\alpha _1)\).

  2. ii)

    If we have a crossing from below in \((\alpha _2,\infty )\) then there is \(\zeta >0\) such that for all \(x\ge \zeta \)

    $$\begin{aligned} V_x^*(x)=\varphi _1(x)\ge 1. \end{aligned}$$

    This is not possible by Proposition A.3. Therefore, there are no crossings in \( (\alpha _2,\infty )\).

  3. iii)

    We cannot have that \(V_x^*(x)\ge 1\) for all \(x\in (0,\infty )\) because then we get a contradiction by Proposition A.3. Similarly, we cannot have \( V_x^*(x)\le 1\) for all \(x\in (0,\infty )\) since we get a contradiction by Proposition A.2.

  4. iv)

    If \(g(x_\iota )>0\) then, in principle, there could be at most one crossing and this would have to be from below. But this creates a contradiction by either using Proposition A.2 or Proposition A.3. If there is no crossing then we get a contradiction by (iii) above.

  5. v)

    If \(g(x_\iota )=0\) then

    1. (a)

      If there is no crossing, then we get a contradiction by part iii) above.

    2. (b)

      If there are two crossings then we get contradictions from either Proposition A.2 or Proposition A.3.

    3. (c)

      If there are three crossings then we must have a crossing from below in \((0,x_\iota )\), one from above at \(x=x_\iota \) and one from below in \((x_\iota ,\infty )\). This yields a contradiction because of Proposition A.2.

    4. (d)

      If there is just one crossing and the crossing is from below then we get a contradiction by Proposition A.3.

  6. vi)

By parts i)–v) we get that there is exactly one crossing of the line \(y=1\), that this crossing is from above, and that the crossing happens at a point in the interval \([\alpha _1,\alpha _2]\) when \(g(x_\iota )<0\), or at \(x_\iota \) if \(g(x_\iota )=0\).

Fig. 7 The only case which does not lead to a contradiction is when \(V^*_x\) crosses \(y=1\) only once and the crossing is from above

This, together with (A.18), implies that the optimal strategy is of bang–bang type

$$\begin{aligned} \begin{aligned} v(x)&= {\left\{ \begin{array}{ll} 0 &{} \text{ if } 0< x\le x^* \\ M &{} \text{ if } x>x^*. \end{array}\right. } \end{aligned} \end{aligned}$$

Moreover, one can see that \(g(x_\iota )\le 0\), which in turn forces

$$\begin{aligned} \rho \le \sup _{x\in \mathbb {R}_+} x\mu (x) = x_\iota \mu (x_\iota ). \end{aligned}$$

\(\square \)
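The bang–bang structure also lends itself to a simple numerical exploration of the threshold, in the spirit of the simulations discussed in the abstract. The following hedged sketch (ours, not the authors' code; the logistic drift and all parameter values are assumptions) scans a grid of candidate thresholds and compares the best simulated yield with the upper bound (2.7).

```python
import numpy as np

# Sketch: scan thresholds xstar for the bang-bang strategy (2.6) under a
# hypothetical logistic drift mu(x) = r - kappa*x, and compare the best
# simulated long-run yield with the upper bound (2.7) = r^2/(4*kappa).
rng = np.random.default_rng(1)
r, kappa, sigma, M = 1.0, 1.0, 0.5, 0.8
dt, T = 1e-3, 500.0
n, sqdt = int(T / dt), np.sqrt(dt)

def long_run_yield(xstar):
    """Time average of X(t) v(X(t)) under the threshold strategy (2.6)."""
    x, total = 1.0, 0.0
    for _ in range(n):
        v = 0.0 if x <= xstar else M
        total += x * v * dt
        x += x * (r - kappa * x - v) * dt + sigma * x * sqdt * rng.standard_normal()
        x = max(x, 1e-12)
    return total / T

thresholds = np.linspace(0.1, 1.0, 10)
yields = [long_run_yield(xs) for xs in thresholds]
best = int(np.argmax(yields))
print("best threshold on grid:", thresholds[best])
print("simulated yield there :", yields[best])
print("upper bound (2.7)     :", r**2 / (4 * kappa))
```

Longer horizons and several independent replicates per threshold would be needed for reliable estimates; the sketch only illustrates the procedure.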

Appendix B: Optimal harvesting with concave and convex yields: proofs

This appendix shows that for a class of yield functions \(\Phi \) one can get continuous optimal harvesting strategies. Recall that when the yield function is the identity, Theorem 2.1 shows that the optimal strategy is of bang–bang type; the optimal harvesting strategy is therefore discontinuous (Fig. 7). One might then wonder under which conditions on \(\Phi \) the optimal harvesting strategy is continuous.

We proved in Theorem A.2 that the HJB equation

$$\begin{aligned} \max _{u\in U}\Big [{\mathcal L}_u V(x)+\Phi (xu)\Big ]=\rho \end{aligned}$$

admits a classical solution \(V^*\in C^2(\mathbb {R}_+)\) satisfying \(V^*(1)=0\) and \(\rho =\rho ^*> 0\).

For any given \(\Phi \), we define

$$\begin{aligned} F(\omega ) := -A\omega + \Phi (\omega ), \end{aligned}$$

where A is shorthand for \(V_x^*\), that is,

$$\begin{aligned} A := V_x^*(x). \end{aligned}$$

For any fixed x, we can treat A as a constant. Using this shorthand, we can rewrite the HJB equation as

$$\begin{aligned} \begin{aligned} F(xv) = \max _{\omega \in [0,L]} F(\omega ), \end{aligned} \end{aligned}$$
(B.1)

where \(L:=xM\). A direct computation yields

$$\begin{aligned}\left\{ \begin{aligned} F(0)&= -A\cdot 0 + \Phi (0) = 0 \\ F(L)&= -AL + \Phi (L)\\ F'(\omega )&= -A + \Phi '(\omega ) \end{aligned}\right. \end{aligned}$$

because \(\Phi (0) = 0\). Therefore, the critical point(s) will be given by \(\omega _c = [\Phi ']^{-1}(A)\), and

$$\begin{aligned} F(\omega _c) = -A\omega _c + \Phi (\omega _c) = -A[\Phi ']^{-1}(A) + \Phi ([\Phi ']^{-1}(A)). \end{aligned}$$

If \(\Phi \) is assumed to be strictly concave, the maximum on the right hand side of (B.1) can be found easily because \(F''=\Phi ''\).
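As a hedged illustration (this particular yield is our choice, not one analyzed in the paper), take \(\Phi (\omega )=\ln (1+\omega )\), which is \(C^2\), strictly concave and satisfies \(\Phi (0)=0\). Then \(\Phi '(\omega )=\frac{1}{1+\omega }\), so

$$\begin{aligned} \omega _c=[\Phi ']^{-1}(A)=\frac{1}{A}-1, \qquad v(x)=\frac{1}{x}\left( \frac{1}{V_x^*(x)}-1\right) \quad \text {when } 0<\frac{1}{V_x^*(x)}-1<xM, \end{aligned}$$

with \(v(x)=0\) once \(V_x^*(x)\ge 1\) and \(v(x)=M\) once \(\frac{1}{V_x^*(x)}-1\ge xM\); the control interpolates continuously between the two extreme rates instead of jumping.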

Theorem 3.1

Suppose Assumption 2.1 holds and the yield function satisfies

  1. (1)

    \(\Phi \in C^2(\mathbb {R}_+)\),

  2. (2)

    \(\Phi \) is strictly concave.

Then the optimal harvesting strategy is continuous and given by

$$\begin{aligned}\begin{aligned} v&= {\left\{ \begin{array}{ll} 0 &{}\text{ if } [\Phi ']^{-1}(V_x^*(x))\le 0,\\ \displaystyle \frac{[\Phi ']^{-1}(V_x^*(x))}{x} &{}\text{ if } 0<[\Phi ']^{-1}(V_x^*(x))<xM,\\ M &{}\text{ if } [\Phi ']^{-1}(V_x^*(x))\ge xM. \end{array}\right. } \\ \end{aligned} \end{aligned}$$

Furthermore, the HJB equation for the system becomes

$$\begin{aligned} \begin{aligned} \rho&= {\left\{ \begin{array}{ll} x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} &{} \text{ if } [\Phi ']^{-1}(f_x(x))\le 0,\\ x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx}\\ -f_x[\Phi ']^{-1}(f_x) + \Phi ([\Phi ']^{-1}(f_x)) &{}\text{ if } 0<[\Phi ']^{-1}(f_x(x))<xM,\\ x(\mu (x)-M)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx}+\Phi (xM) &{}\text{ if } [\Phi ']^{-1}(f_x(x))\ge xM. \end{array}\right. } \end{aligned} \end{aligned}$$
(3.1)

Proof

Assume that \(\Phi \) is \(C^2\) and strictly concave. Then \(\Phi '\) is strictly decreasing, so its inverse \([\Phi ']^{-1}\) is well-defined. As a result, F has a unique critical point, which is a maximum, namely \(\omega _c = [\Phi ']^{-1}(A)\). A standard calculus result yields

$$\begin{aligned}\begin{aligned} \max _{\omega \in [0,L]} F(\omega )&= {\left\{ \begin{array}{ll} \max \left\{ F(0),F(L) \right\} &{} \text{ if } \omega _c \not \in (0,L) \\ \max \left\{ F(0),F(\omega _c),F(L) \right\} &{} \text{ if } 0<\omega _c<L \end{array}\right. }\\&= {\left\{ \begin{array}{ll} 0 &{} \text{ if } \omega _c \le 0 \\ F(\omega _c) &{} \text{ if } 0<\omega _c<L\\ F(L) &{} \text{ if } \omega _c \ge L, \end{array}\right. } \end{aligned} \end{aligned}$$

where we used the fact that \(F(0)=0\) and the concavity of \(\Phi \) in the last equality.

Depending on the maximum point, we have the corresponding optimal Markov control:

$$\begin{aligned}\begin{aligned} v&= {\left\{ \begin{array}{ll} 0 &{}\text{ if } \displaystyle \max _{\omega \in [0,L]} F(\omega )=0\\ \displaystyle \frac{[\Phi ']^{-1}(A)}{x} &{}\text{ if } \displaystyle \max _{\omega \in [0,L]} F(\omega )=F(\omega _c)\\ M &{}\text{ if } \displaystyle \max _{\omega \in [0,L]} F(\omega )=F(L) \end{array}\right. } \\&= {\left\{ \begin{array}{ll} 0 &{}\text{ if } [\Phi ']^{-1}(A)\le 0\\ \displaystyle \frac{[\Phi ']^{-1}(A)}{x} &{}\text{ if } 0<[\Phi ']^{-1}(A)<xM\\ M &{}\text{ if } [\Phi ']^{-1}(A)\ge xM, \end{array}\right. } \\ \end{aligned}\end{aligned}$$

because v is the solution to

$$\begin{aligned} -Axv + \Phi (xv) = \max _{\omega \in [0,L]} F(\omega ). \end{aligned}$$

In conclusion, v depends continuously on \(A:=\dfrac{d V^*}{d x}(x)\). Hence, since \(V^*\in C^2\left( \mathbb {R}_+\right) \), we conclude that v is continuous.

The HJB Eq. (A.6) becomes

$$\begin{aligned} \begin{aligned} \rho&=\max _{u\in U}\Big [ x(\mu (x)-u)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} + \Phi (xu) \Big ] \\&=x\mu (x)f_x+\dfrac{1}{2}\sigma ^2x^2f_{xx} +\max _{u\in U}[ (\Phi (xu)-xu f_x)]\\&= {\left\{ \begin{array}{ll} x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} &{} \text{ if } [\Phi ']^{-1}(f_x(x))\le 0,\\ x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx}\\ -f_x[\Phi ']^{-1}(f_x) + \Phi ([\Phi ']^{-1}(f_x)) &{}\text{ if } 0<[\Phi ']^{-1}(f_x(x))<xM,\\ x(\mu (x)-M)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx}+\Phi (xM) &{}\text{ if } [\Phi ']^{-1}(f_x(x))\ge xM. \end{array}\right. } \end{aligned} \end{aligned}$$

\(\square \)

The case when the yield function \(\Phi \) is convex is qualitatively similar to the case when the yield function is linear, and the optimal solution is of the bang–bang type. We can improve Theorem 2.1 as follows.

Theorem 3.2

Assume that \(\Phi :\mathbb {R}_+\rightarrow \mathbb {R}_+\) is weakly convex, \(\Phi \) grows at most polynomially, \(\Phi \in C^1(\mathbb {R}_+)\), and the population survives in the absence of harvesting, that is \(\mu (0)-\frac{\sigma ^2}{2}>0\). Furthermore, assume that the drift function \(\mu (\cdot )\) satisfies the following modification of Assumption 2.1:

  1. (i)

    \(\mu \) is locally Lipschitz.

  2. (ii)

    \(\mu \) is decreasing.

  3. (iii)

    As \(x\rightarrow \infty \) we have \(\mu (x)\rightarrow -\infty \).

  4. (iv)

    The function

    $$\begin{aligned} G(x)= \Phi (xM)\left( 1-\frac{2}{\sigma ^2}\mu (x)\right) - xM\Phi '(xM) \end{aligned}$$
    (3.2)

    has a unique extreme point in \((0,\infty )\) which is a minimum, and is not constant on any interval \((u,v)\subset \mathbb {R}_+\).

If the assumptions (i)–(iii) hold, the optimal control has a bang–bang form (i.e., the harvesting rate is either 0 or the maximal rate M). If assumptions (i)–(iv) hold, the optimal harvesting strategy v has a bang–bang form with a single threshold

$$\begin{aligned} \begin{aligned} v(x)&= {\left\{ \begin{array}{ll} 0 &{} \text{ if } 0< x\le x^* \\ M &{} \text{ if } x>x^* \end{array}\right. } \end{aligned} \end{aligned}$$

for some \(x^*\in (0,\infty )\).
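Before the proof, it is worth checking assumption (iv) in a concrete case (an illustrative example of ours): for the quadratic yield \(\Phi (\omega )=\omega ^2\) and logistic drift \(\mu (x)=r-\kappa x\) with \(r,\kappa >0\),

$$\begin{aligned} G(x)= x^2M^2\left( 1-\frac{2(r-\kappa x)}{\sigma ^2}\right) - 2x^2M^2 = \frac{2\kappa M^2}{\sigma ^2}x^3 - M^2\left( 1+\frac{2r}{\sigma ^2}\right) x^2, \end{aligned}$$

whose derivative \(2M^2x\big (\frac{3\kappa }{\sigma ^2}x-1-\frac{2r}{\sigma ^2}\big )\) vanishes on \((0,\infty )\) only at \(x=\frac{\sigma ^2+2r}{3\kappa }\), a minimum; moreover G is not constant on any interval, so (iv) holds.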

Proof

This proof is similar to the proof of Theorem 2.1 from Appendix A. By (A.9)

$$\begin{aligned} \frac{d V^*}{dx}(x)[x(\mu (x)-v(x))] + \Phi (xv(x)) = \max _{u \in U}\left( \frac{d V^*}{dx}(x)[x(\mu (x)-u)] + \Phi (xu)\right) . \end{aligned}$$

Dropping the common terms gives

$$\begin{aligned} - \frac{d V^*}{dx}(x) xv(x) + \Phi (xv(x)) = \max _{u \in U}\left( - \frac{d V^*}{dx}(x)xu + \Phi (xu)\right) . \end{aligned}$$

With \(x>0\), the right hand side is a weakly convex function of u, so the maximum is achieved at one of the end points of the interval U. This already shows that the optimal control is bang–bang, but says nothing else about the shape of v(x). Since \(\Phi (0) = 0\), we get

$$\begin{aligned} \max _{u \in U}\left( -\frac{d V^*}{dx}(x)xu + \Phi (xu)\right) = {\left\{ \begin{array}{ll} -\frac{d V^*}{dx}(x)xM + \Phi (xM), &{} \text { if }\frac{\Phi (xM)}{xM} > \frac{d V^*}{dx}(x), \\ 0, &{} \text { else.} \end{array}\right. } \end{aligned}$$

This implies

$$\begin{aligned} v(x) = {\left\{ \begin{array}{ll} 0, &{} \text { if } V^*_x < \frac{\Phi (xM)}{xM}, \\ M, &{} \text { if } V^*_x \ge \frac{\Phi (xM)}{xM}. \end{array}\right. } \end{aligned}$$

The function \(\Phi (\cdot )\) is weakly convex; therefore, for \(\alpha \in (0,1)\), \(\Phi (\alpha x + (1-\alpha ) y ) \le \alpha \Phi (x) + (1-\alpha )\Phi (y)\). By assumption, it is also continuous and nonnegative, with \(\Phi (0)=0\). Taking \(y=0\) gives, for \(\alpha \in (0,1)\), \(\Phi (\alpha xM) \le \alpha \Phi (xM)\), which is equivalent to \(\frac{\Phi (\alpha xM)}{\alpha xM} \le \frac{\Phi (xM)}{xM}\) when \(x,M>0\). Therefore \(\frac{\Phi (xM)}{xM}\) is nonnegative and nondecreasing in x for \(M>0\), \(x>0\). In particular \(\Phi '(0) = \lim _{x\rightarrow 0^+} \frac{\Phi (xM)}{xM}\) exists and is greater than or equal to 0. The HJB Eq. (A.6) becomes

$$\begin{aligned} \rho = {\left\{ \begin{array}{ll} x\mu (x)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} &{} \text{ if } f_x> \frac{\Phi (xM)}{xM},\\ x(\mu (x)-M)f_x+\dfrac{1}{2}\sigma ^2 x^2f_{xx} +\Phi (Mx) &{}\text{ if } f_x\le \frac{\Phi (xM)}{xM}. \end{array}\right. } \end{aligned}$$
(B.2)

One can easily modify the proofs from Appendix A to show the following four propositions:

Proposition B.1

Assume \(\mu , \Phi \) satisfy the assumptions of Theorem 3.2. Then any solution \(\varphi _2\) of the ODE

$$\begin{aligned} \frac{d\varphi _2}{dx}(x) + \frac{2(\mu (x) -M)}{\sigma ^2 x}\varphi _2(x) = \frac{2(\rho -\Phi (Mx))}{\sigma ^2 x^2} \end{aligned}$$
(B.3)

satisfies

$$\begin{aligned} \lim _{x\rightarrow 0^+} \varphi _2 (x) = \pm \infty . \end{aligned}$$
(B.4)

Proof

Proceed similarly to the proof of Proposition A.1, replacing the definition \(\beta (y): = \frac{2(\rho - \Phi (My))}{\sigma ^2 y^2}\). This time,

$$\begin{aligned} \int _{x}^{x_0} \zeta (y)\beta (y) \;dy \sim \frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -\Phi (My)) \;dy. \end{aligned}$$

For \(y \in [0,x_0]\), we have \(\Phi '(0) \le \frac{\Phi (My)}{My} \le \frac{\Phi (Mx_0)}{Mx_0}\), so

$$\begin{aligned} \frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -\frac{\Phi (Mx_0)}{Mx_0}My) \;dy&\le \frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -\Phi (My)) \;dy \\&\le \frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -\Phi '(0) My) \;dy. \end{aligned}$$

For a general positive constant N,

$$\begin{aligned}&\frac{2}{\sigma ^2} \int _{x}^{x_0} y^{\frac{2}{\sigma ^2}(\mu _0-M)-2} (\rho -Ny) \;dy\\&= {\left\{ \begin{array}{ll} C_0 + C_1 x^{\frac{2}{\sigma ^2}(\mu _0-M)} + C_2 x^{\frac{2}{\sigma ^2}(\mu _0-M)-1} &{} \text {if } \mu _0-M \ne 0, \; \frac{\sigma ^2}{2}\\ \frac{2}{\sigma ^2} (\rho \ln x_0 - N x_0) + \frac{2N}{\sigma ^2} x - \frac{2\rho }{\sigma ^2} \ln x &{} \text{ if } \mu _0-M = \frac{\sigma ^2}{2}\\ \frac{2}{\sigma ^2} (-\rho x^{-1}_0 - N \ln x_0) + \frac{2N}{\sigma ^2} \ln x + \frac{2\rho }{\sigma ^2} x^{-1} &{} \text{ if } \mu _0-M=0, \end{array}\right. } \end{aligned}$$

where the integration constants are given by

$$\begin{aligned} C_0 := -\frac{N x_0^{\frac{2}{\sigma ^2}(\mu _0-M)}}{\mu _0-M} + \frac{\rho x_0^{\frac{2}{\sigma ^2} (\mu _0-M)-1}}{\mu _0-\frac{\sigma ^2}{2}-M}, \quad C_1 := \frac{N}{\mu _0-M}, \quad C_2 := -\frac{\rho }{\mu _0-\frac{\sigma ^2}{2}-M} . \end{aligned}$$

Now the case-by-case analysis of Proposition A.1 can be repeated similarly because the constants of the dominant terms in the expression above do not depend on N. \(\square \)

Proposition B.2

There does not exist any \(\eta >0\) such that \(V_x^*(x)\le \frac{\Phi (xM)}{xM}, x\in (0,\eta ]\).

Proof

Noting that \(\sup _{x\in (0,\eta ]} \frac{\Phi (xM)}{xM} = \frac{\Phi (\eta M)}{\eta M}\), the proof is similar to the proof of Proposition A.2, relying on the application of Proposition B.1 to Eq. (B.3). \(\square \)

Proposition B.3

There does not exist any \(\chi >0\) such that \(V_x^*(x)\ge \frac{\Phi (xM)}{xM}\) for all \(x\ge \chi \).

Proof

The proof of Proposition A.3 goes through without change, because \(\frac{\Phi (xM)}{xM} \ge 0\). \(\square \)

Fig. 8 The only case which does not lead to a contradiction is when \(V^*_x\) crosses \(\frac{\Phi (xM)}{xM}\) only once and the crossing is from above

Proposition B.4

The function \(V^*_x\) intersects the curve \(\frac{\Phi (xM)}{xM}\) at most three times on \([0,\infty )\).

Proof

By (B.2), if we set \(\varphi :=f_x\), then at any intersection point x where \(\varphi (x) = \frac{\Phi (xM)}{xM}\) we have

$$\begin{aligned} \varphi _x = \frac{2}{\sigma ^2x^2}\left( \rho -x\mu (x)\frac{\Phi (xM)}{xM}\right) , \end{aligned}$$

from the HJB equation. Now we want to compare \(\varphi _x\) with \(\left( \frac{\Phi (xM)}{xM}\right) '\) whenever there is a crossing, to infer the direction from which \(\varphi \) is crossing. To do that, consider the equation \(\varphi _x = \left( \frac{\Phi (xM)}{xM}\right) '\). Substituting and simplifying gives the condition \(G(x) + \frac{2M\rho }{\sigma ^2} = 0\), where G is defined in (3.2). Since G has only one extremum by assumption, this equation has zero, one or two solutions. When there are two solutions, say \(\alpha _1, \alpha _2\), any intersection of \(\varphi \) with \(\frac{\Phi (xM)}{xM}\) for \(x \in (\alpha _1,\alpha _2)\) must have \(\varphi \) coming from above, as \(\varphi _x <0\) in that interval. Using arguments similar to those in Proposition A.4, this implies, together with the condition on G from (3.2), that \(\varphi \) can intersect \(\frac{\Phi (xM)}{xM}\) at most three times. \(\square \)

The rest of the proof mirrors that of Theorem 2.1. Applying the four results above, we find again that the optimal control is bang–bang with a single threshold \(x^*\),

$$\begin{aligned} \begin{aligned} v(x)&= {\left\{ \begin{array}{ll} 0 &{} \text{ if } 0< x\le x^*, \\ M &{} \text{ if } x>x^* \end{array}\right. } \end{aligned} \end{aligned}$$

for some \(x^*\in (0,\infty )\) (see Fig. 8). \(\square \)


Cite this article

Hening, A., Nguyen, D.H., Ungureanu, S.C. et al. Asymptotic harvesting of populations in random environments. J. Math. Biol. 78, 293–329 (2019). https://doi.org/10.1007/s00285-018-1275-1
