Article

Optimal Dividends for a Two-Dimensional Risk Model with Simultaneous Ruin of Both Branches

by Philipp Lukas Strietzel *,† and Henriette Elisabeth Heinrich †
Institut für Mathematische Stochastik, Technische Universität Dresden, 01062 Dresden, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Risks 2022, 10(6), 116; https://doi.org/10.3390/risks10060116
Submission received: 21 April 2022 / Revised: 23 May 2022 / Accepted: 30 May 2022 / Published: 2 June 2022
(This article belongs to the Special Issue Multivariate Risks)

Abstract: We consider the optimal dividend problem in the so-called degenerate bivariate risk model under the assumption that the surplus of one branch may become negative. More specifically, we solve the stochastic control problem of maximizing discounted dividends until the simultaneous ruin of both branches of an insurance company by showing that the optimal value function satisfies a certain Hamilton–Jacobi–Bellman (HJB) equation. Further, we prove that the optimal value function is the smallest viscosity solution of said HJB equation satisfying certain growth conditions. Under some additional assumptions, we show that the optimal strategy lies within a certain subclass of all admissible strategies and reduce the two-dimensional control problem to a one-dimensional one. The results are illustrated by a numerical example and Monte Carlo simulated value functions.

1. Introduction

The problem of paying dividends optimally naturally arises when considering insurance risk processes. This is due to the fact that for any classical Cramér–Lundberg or even any spectrally negative Lévy risk model, the process either drifts to infinity with positive probability, or it faces ruin almost surely. As the assumption that an insurance company’s surplus grows to infinity is unrealistic, dividend payments are the natural choice to avoid this behavior.
In the univariate setting, optimal dividend payment is a well-studied problem that was first introduced by De Finetti (1957), who considered the dividend barrier model. Later, in Gerber (1969), it was shown that in the Cramér–Lundberg risk model the optimal strategy is always a band strategy. The study of dividends in this classical risk model has been continued with several extensions, see, e.g., the works Azcue and Muler (2005, 2010, 2012, 2014). The former three publications treat the cases of optimal dividend strategies with reinsurance, with investment in a Black–Scholes market, and the case of bounded dividend rates, respectively. The textbook Azcue and Muler (2014) gives a broad overview of how to use the theory of stochastic control and the dynamic programming approach to tackle dividend problems. In Thonhauser and Albrecher (2007), this approach is used to solve the problem of optimal dividend payments in the presence of a reward for later ruin. Additionally, in the field of (spectrally negative) Lévy risk models, there are several works regarding the dividend problem. See, e.g., Avram et al. (2007), where the optimal dividend policy for a spectrally negative Lévy process with and without bailout loans is considered and the optimal strategy among all barrier strategies is identified, or Loeffen (2008), where these results are extended by giving sufficient conditions for the optimal strategy to indeed be of barrier type. For a more thorough overview of the available tools, approaches, optimality results, and optimal strategies in the univariate setting, we refer to Albrecher and Thonhauser (2009); Avanzi (2009) and Schmidli (2008).
In multivariate risk theory, a popular model is the so-called degenerate bivariate risk model: Given a Poisson process $N = (N(t))_{t \geq 0}$ with rate $\lambda > 0$, define a claim process by
$$ S(t) = \sum_{n=1}^{N(t)} U_n, \qquad t \geq 0, $$
where the $U_n$ are non-negative i.i.d. claim sizes with cdf $F$, independent of $N$. The degenerate model is then given via
$$ \mathbf{X}(t) = \begin{pmatrix} X_1(t) \\ X_2(t) \end{pmatrix} := \begin{pmatrix} x_1 + c_1 t - b_1 \sum_{n=1}^{N(t)} U_n \\ x_2 + c_2 t - b_2 \sum_{n=1}^{N(t)} U_n \end{pmatrix} =: \mathbf{x} + \mathbf{c}\,t - \mathbf{b}\,S(t), \tag{2} $$
where $x_1, x_2 \geq 0$ are the initial capitals of the two branches of the insurer, $c_1, c_2 > 0$ define constant premium rates, and $b_1, b_2 > 0$ define the proportion of each claim covered by the corresponding branch. As the claims need to be fully covered, we assume $b_1 + b_2 = 1$.
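Since both branches are driven by the same claim arrivals, paths of the model in (2) are straightforward to simulate. The following sketch is ours, not from the paper: it assumes Exp-distributed claim sizes and illustrative parameter values, and records the surpluses immediately after each claim.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_path(x1, x2, c1, c2, b1, b2, lam, claim_mean, T):
    """One path of the degenerate bivariate model on [0, T]: both branches
    share the same Poisson(lam) claim arrivals, each claim U_n being split
    in proportions b1, b2.  Surpluses are recorded right after each claim."""
    t = 0.0
    times, path1, path2 = [0.0], [x1], [x2]
    while True:
        t += rng.exponential(1.0 / lam)          # next claim arrival
        if t > T:
            break
        u = rng.exponential(claim_mean)          # claim size U_n (assumed Exp)
        dt = t - times[-1]                       # time since last claim
        path1.append(path1[-1] + c1 * dt - b1 * u)
        path2.append(path2[-1] + c2 * dt - b2 * u)
        times.append(t)
    return np.array(times), np.array(path1), np.array(path2)
```

Note that with $c_1 = c_2$ and $b_1 = b_2$ the difference $X_1(t) - X_2(t)$ stays constant along the whole path, which is a quick sanity check for the shared-claim construction.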
The degenerate model can be seen as an insurer–reinsurer model with proportional reinsurance, where the insurer covers the proportion $b_1$ of each claim, while the reinsurer covers the proportion $b_2$. The model was first introduced in Avram et al. (2008a), where the authors derive the Laplace transform of the probability of ruin of at least one branch. Since then, the study of ruin probabilities in the degenerate model under different assumptions has been continued, see, e.g., Avram et al. (2008a, 2008b); Badescu et al. (2011); Hu and Jiang (2013); Palmowski et al. (2018). The degenerate model has also gained attention in the field of optimal dividends, e.g., in Czarna and Palmowski (2011), where dividends are paid according to an impulse or refraction control and ruin corresponds to exiting the positive quadrant. Under the same ruin assumption, it was shown in Azcue et al. (2018) that the optimal value function for general admissible strategies satisfies a certain Hamilton–Jacobi–Bellman equation, and the optimal strategy is described.
In the present work, we adapt the stochastic control approach used in Azcue et al. (2018) and consider the problem of paying dividends optimally under the assumption that one branch of the insurance company may have a negative surplus, i.e., we extend the set where the insurance company is considered solvent to $\mathbb{R}^2 \setminus \mathbb{R}^2_{<0}$. We show that the optimal value function, which represents the expected discounted value of the dividends paid under the optimal strategy, may be characterized as the smallest viscosity solution of a certain Hamilton–Jacobi–Bellman (HJB) equation. Further, we show that in certain subcases, the optimal strategy lies within the subclass of so-called bang strategies, which reduces the bivariate optimization problem to a univariate one.
The paper is organized as follows. In Section 2 we specify our model and introduce necessary notations. Afterward, in Section 3 we derive the HJB equation and show that it is satisfied by the optimal value function. Section 4 is dedicated to the bang strategies, whereas in Section 5 we use Monte Carlo simulation to approximate the optimal strategy for a certain example.

2. Preliminaries

Throughout the whole paper, we use the bold version of a variable for the two-dimensional version of the same letter (e.g., $\mathbf{x} = (x_1, x_2)$) without explicitly stating it again. All random quantities are defined on a filtered probability space $(\Omega, \mathcal{A}, \mathbb{F}, \mathbb{P})$, and $\mathbb{P}_{\mathbf{x}}, \mathbb{E}_{\mathbf{x}}$ denote the probability measure and expectation, respectively, given that $\mathbf{X}(0) = \mathbf{x}$. The derivative of a function $u$ with respect to the variable $x_i$ is denoted by $\partial_{x_i} u(\mathbf{x})$ or $u_{x_i}(\mathbf{x})$. For any set $M$ we denote by $M^{\circ}$ the interior of $M$.
As mentioned before, in this paper we consider the degenerate bivariate risk model defined in (2). In contrast to the assumptions in Azcue et al. (2018), we allow one initial capital to be negative as long as the other is not, i.e., we consider the solvency set $\mathcal{S} := \mathbb{R}^2 \setminus \mathbb{R}^2_{<0}$. Without loss of generality, we assume that the second branch is equally or less profitable than the first one, i.e.,
$$ \frac{c_1}{b_1} \geq \frac{c_2}{b_2}. $$
Both branches pay dividends to their shareholders using part of their surpluses, where the dividend payment strategy $\mathbf{L}(t)$ represents the total amount of dividends paid by both branches up to time $t$. We define the associated controlled process with initial surplus $\mathbf{x} = (x_1, x_2)$ as
$$ \mathbf{X}^L(t) := \mathbf{X}(t) - \mathbf{L}(t) = \begin{pmatrix} X_1^L(t) \\ X_2^L(t) \end{pmatrix} = \begin{pmatrix} X_1(t) - L_1(t) \\ X_2(t) - L_2(t) \end{pmatrix}. $$
This is essentially the same model as the one considered in Azcue et al. (2018). The crucial difference from our work is that in Azcue et al. (2018) the authors consider the ruin time
$$ \tau^L := \inf\big\{ t > 0 : X_1(t) - L_1(t) < 0 \ \text{ or } \ X_2(t) - L_2(t) < 0 \big\}, $$
i.e., the first moment when at least one branch faces ruin, while we consider the time of simultaneous ruin
$$ \tau^L := \inf\big\{ t > 0 : X_1(t) - L_1(t) < 0 \ \text{ and } \ X_2(t) - L_2(t) < 0 \big\}, $$
i.e., the first moment when both branches face ruin. Besides ruin of at least one branch, simultaneous ruin is a common assumption for bivariate risk models; it is considered, e.g., in Avram et al. (2008b), Palmowski et al. (2018) or Hu and Jiang (2013).
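Since the surplus only decreases at claim arrivals, both ruin notions can be read off a path sampled at the claim epochs. A minimal sketch with a hypothetical helper of our own naming, assuming arrays of post-claim surpluses:

```python
import numpy as np

def ruin_times(times, p1, p2):
    """Given surpluses p1, p2 recorded immediately after each claim at epochs
    `times`, return the first epoch at which at least one branch is negative
    (at-least-one ruin) and the first epoch at which both are negative
    (simultaneous ruin); np.inf if the event never occurs on the path."""
    neg1, neg2 = p1 < 0, p2 < 0
    either = np.flatnonzero(neg1 | neg2)
    both = np.flatnonzero(neg1 & neg2)
    t_or = times[either[0]] if either.size else np.inf
    t_and = times[both[0]] if both.size else np.inf
    return t_or, t_and
```

By construction `t_or <= t_and` on every path, mirroring the fact that simultaneous ruin can never precede ruin of at least one branch.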
Corresponding to the new solvency set, we extend the concept of admissibility of a dividend strategy in comparison with Azcue et al. (2018):
Definition 1 (Admissible dividend payment strategy).
A bivariate dividend payment strategy $\mathbf{L}(t) = (L_1(t), L_2(t))_{t \geq 0}$ is called admissible if it is a componentwise non-decreasing, càglàd stochastic process that is predictable with respect to the natural filtration $\mathbb{F}$ of $\mathbf{X}$ and if it satisfies
(i) $L_i(0) = 0$ for $i = 1, 2$,
(ii) if $X_i(t) < L_i(t)$, then $\mathrm{d}L_i(t) = 0$ for all $t \geq 0$, $i = 1, 2$,
(iii) if $X_i(t) \geq L_i(t)$, then $\mathrm{d}L_i(t) \leq X_i(t) - L_i(t) + \mathrm{d}X_i(t)$ for all $t \geq 0$, $i = 1, 2$.
The previous definition ensures that dividends are only paid as long as the associated controlled branch is non-negative. Moreover, it guarantees that no branch faces ruin due to dividend payments. Note that we set $L_i(t) \equiv L_i(\tau^L)$ for $t \geq \tau^L$.
Further, for any admissible dividend strategy $\mathbf{L}$ and at any time point $0 \leq t < \tau^L$ it holds that
$$ L_1(t) \leq X_1(t) \quad \text{or} \quad L_2(t) \leq X_2(t), $$
which is a direct consequence of the definition of our ruin time.
For a given initial surplus $\mathbf{x} = (x_1, x_2) \in \mathcal{S}$, the set of all admissible dividend strategies is denoted by $\Pi_{\mathbf{x}}$. Given any admissible dividend strategy $\mathbf{L} \in \Pi_{\mathbf{x}}$, the associated controlled process $\mathbf{X}^L$ is adapted to the natural filtration $\mathbb{F}$ of $\mathbf{X}$, and the ruin time $\tau^L$ is a stopping time with respect to this filtration. Moreover, the two-dimensional controlled risk process is of finite variation and, by construction, ruin can only happen at the arrival of claims.
For a fixed dividend strategy $\mathbf{L} \in \Pi_{\mathbf{x}}$, $\mathbf{x} \in \mathcal{S}$, the value function $V^L(\mathbf{x})$, which represents the cumulative expected discounted dividends, is defined as
$$ V^L(\mathbf{x}) := \mathbb{E}_{\mathbf{x}}\left[ \int_0^{\tau^L} e^{-qs} \,\mathrm{d}L(s) \right] = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{\tau^L} e^{-qs} \,\mathrm{d}L_1(s) + \int_0^{\tau^L} e^{-qs} \,\mathrm{d}L_2(s) \right], $$
where $q > 0$ is a constant discount factor. On $\mathbb{R}^2_{<0}$ we assume $V^L$ to be zero. Our goal is to identify the optimal value function and the corresponding strategy, i.e., to solve the optimization problem defined by
$$ V(\mathbf{x}) = \sup_{\mathbf{L} \in \Pi_{\mathbf{x}}} V^L(\mathbf{x}) \tag{7} $$
for any $\mathbf{x} \in \mathcal{S}$.

3. The Optimal Value Function

We start our investigation of the optimal value function by collecting some properties that are going to be used later on. We emphasize that most results and proofs in this section are in large parts similar to either the one-dimensional case presented in Azcue and Muler (2014) or the degenerate model considered in Azcue et al. (2018). Therefore, our line of argument focuses on the differences.
Lemma 1.
For all $(x_1, x_2) \in \mathcal{S}$ the optimal value function is well-defined. If both $x_1, x_2 \geq 0$, then the optimal value function satisfies
$$ x_1 + x_2 + \frac{c_1 + c_2}{q + \lambda} \;\leq\; V(x_1, x_2) \;\leq\; x_1 + x_2 + \frac{c_1 + c_2}{q}. \tag{8} $$
If $x_1 \geq 0$, $x_2 < 0$, the optimal value function satisfies
$$ x_1 + \frac{c_1}{q + \lambda} + \frac{c_2}{q + \lambda}\, e^{(q + \lambda)\frac{x_2}{c_2}} \;\leq\; V(x_1, x_2) \;\leq\; x_1 + \frac{c_1}{q} + \frac{c_2}{q}\, e^{q \frac{x_2}{c_2}}. \tag{9} $$
Inequality (9) holds analogously for the case $x_1 < 0$ and $x_2 \geq 0$ with indices exchanged.
Proof. 
The proofs of well-definedness and Inequality (8) are analogous to the one-dimensional case presented in (Azcue and Muler 2014, Prop. 1.2). To show (9), assume that $x_1 \geq 0$ and $x_2 < 0$. Define the strategy $L^0 = (L_1^0, L_2^0)$ as the strategy that pays the maximum dividends possible at every $t \geq 0$. Obviously, the ruin time $\tau^{L^0}$ is equal to the arrival time of the first claim, denoted by $\tau_1 \sim \mathrm{Exp}(\lambda)$. In the first branch, we obtain $L_1(t) = x_1 + c_1 t$ for $t \leq \tau_1$. The second branch can only pay dividends once its surplus is non-negative. Hence, we need to wait at least until $t_0 = -\frac{x_2}{c_2}$ before any dividend can be paid. Consequently, the resulting dividend process is given by $L_2(t) = (x_2 + c_2 t)\, \mathbb{1}_{\{-x_2/c_2 \leq t \leq \tau_1\}}$. We get
$$ V^{L^0}(x_1, x_2) = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{\tau_1} e^{-qt}\, \mathrm{d}L_1(t) + \int_0^{\tau_1} e^{-qt}\, \mathrm{d}L_2(t) \right] = x_1 + c_1\, \mathbb{E}_{\mathbf{x}}\left[ \int_0^{\tau_1} e^{-qt}\, \mathrm{d}t \right] + c_2\, \mathbb{E}_{\mathbf{x}}\left[ \mathbb{1}_{\{\tau_1 \geq -x_2/c_2\}} \int_{-x_2/c_2}^{\tau_1} e^{-qt}\, \mathrm{d}t \right] = x_1 + \frac{c_1}{q + \lambda} + \frac{c_2}{q + \lambda}\, e^{(q + \lambda)\frac{x_2}{c_2}}, $$
which, by (7), implies the lower bound. The upper bound follows by a similar computation, since for any admissible dividend strategy $L = (L_1, L_2)$ we have $L_1(t) \leq (x_1 + c_1 t)\,\mathbb{1}_{\{t \geq 0\}}$ and $L_2(t) \leq (x_2 + c_2 t)\,\mathbb{1}_{\{t \geq -x_2/c_2\}}$. □
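The closed-form value of the "pay everything immediately" strategy $L^0$ from the proof can be cross-checked by Monte Carlo simulation, since ruin occurs exactly at the first claim arrival $\tau_1 \sim \mathrm{Exp}(\lambda)$. A sketch with illustrative parameters (function names are ours):

```python
import numpy as np

def take_all_value_mc(x1, x2, c1, c2, q, lam, n=200_000, seed=0):
    """Monte Carlo estimate of V^{L^0}(x1, x2) for x1 >= 0 > x2: sample
    tau_1 ~ Exp(lam) and average the discounted dividends of both branches."""
    rng = np.random.default_rng(seed)
    tau1 = rng.exponential(1.0 / lam, size=n)
    t0 = -x2 / c2                                    # branch 2 can pay only after t0
    div1 = x1 + c1 * (1.0 - np.exp(-q * tau1)) / q   # lump x1 + discounted premium stream
    div2 = np.where(tau1 >= t0,
                    c2 * (np.exp(-q * t0) - np.exp(-q * tau1)) / q, 0.0)
    return float(np.mean(div1 + div2))

def take_all_value_exact(x1, x2, c1, c2, q, lam):
    """Closed form from the proof of Lemma 1 (lower bound in (9))."""
    return x1 + c1 / (q + lam) + c2 / (q + lam) * np.exp((q + lam) * x2 / c2)
```

For, e.g., $x_1 = 1$, $x_2 = -0.5$, $c_1 = c_2 = 1$, $q = 0.1$, $\lambda = 1$, the two values agree up to Monte Carlo error.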
Lemma 2.
The optimal value function $V$ is componentwise increasing, locally Lipschitz, and satisfies
$$ 0 < V(x_1 + h, x_2) - V(x_1, x_2) \leq \big( e^{(q+\lambda)h/c_1} - 1 \big)\, V(x_1, x_2) \tag{10} $$
and
$$ 0 < V(x_1, x_2 + h) - V(x_1, x_2) \leq \big( e^{(q+\lambda)h/c_2} - 1 \big)\, V(x_1, x_2) \tag{11} $$
for any initial surplus $(x_1, x_2) \in \mathcal{S}$ and any $h > 0$.
In the case that $x_1 \geq 0$ and $x_2 \in \mathbb{R}$, we get
$$ h \leq V(x_1 + h, x_2) - V(x_1, x_2), \tag{12} $$
and if $x_1 \in \mathbb{R}$ and $x_2 \geq 0$, we get
$$ h \leq V(x_1, x_2 + h) - V(x_1, x_2) \tag{13} $$
for any $h > 0$.
Proof. 
The property of being componentwise (non-strictly) increasing, the upper bounds in Equations (10) and (11), as well as Equations (12) and (13), can be proven exactly as in the proof of (Azcue et al. 2018, Lemma 3.2). Hence, we only show the lower bounds in (10) and (11), where due to analogy, we restrict ourselves to the former. If $x_1 \geq 0$, $x_2 \in \mathbb{R}$, then the statement is a direct consequence of (12). Thus, we need to show that for $x_2 \geq 0$, $h > 0$, $x_1 < 0$ it holds that
$$ 0 < V(x_1 + h, x_2) - V(x_1, x_2). $$
W.l.o.g. we may assume that $x_1 + h \leq 0$, because otherwise it follows that
$$ V(x_1 + h, x_2) - V(x_1, x_2) \geq V(x_1 + h, x_2) - V(0, x_2) \geq x_1 + h > 0, $$
since $V$ is increasing and due to (12). The rest of the proof is done by construction. Given an $\varepsilon > 0$, let $L^0 = (L_1^0, L_2^0) \in \Pi_{(x_1, x_2)}$ be a strategy such that
$$ V^{L^0}(x_1, x_2) > V(x_1, x_2) - \varepsilon. \tag{14} $$
We define a new strategy $L = (L_1(t), L_2(t)) \in \Pi_{(x_1 + h, x_2)}$ as follows:
  • Dividends from branch two are paid according to strategy $L_2^0$, i.e., $L_2(t) := L_2^0(t)$.
  • Branch one pays no dividends as long as its uncontrolled surplus is below $h$. Once the surplus hits $h$, it immediately pays $h$ as a lump sum, setting the controlled surplus to $0$. Afterward, it continues by paying dividends according to $L_1^0$.
By construction we have $\tau^L \,\big|\, \mathbf{X}(0) = (x_1 + h, x_2) \;\geq\; \tau^{L^0} \,\big|\, \mathbf{X}(0) = (x_1, x_2)$. Moreover, for any $s \geq 0$ we define the stopping time $\tau_s$ as the first time the process $X_1$ reaches $s$, i.e.,
$$ \tau_s := \inf\big\{ 0 \leq t \leq \tau^L : X_1(t) = s \big\}, $$
where $\inf \emptyset = \infty$. Then it holds that
$$ \mathbb{P}_{(x_1 + h, x_2)}(\tau_h \in \cdot\,) = \mathbb{P}_{(x_1, x_2)}(\tau_0 \in \cdot\,). $$
We conclude
$$ V^L(x_1 + h, x_2) \;\geq\; \mathbb{E}_{(x_1, x_2)}\left[ \int_0^{\tau^{L^0}} e^{-qs}\, \mathrm{d}L_1^0(s) + \int_0^{\tau^{L^0}} e^{-qs}\, \mathrm{d}L_2^0(s) \right] + \mathbb{E}_{(x_1 + h, x_2)}\big[ h\, e^{-q \tau_h} \big] \;=\; V^{L^0}(x_1, x_2) + \mathbb{E}_{(x_1, x_2)}\big[ h\, e^{-q \tau_0} \big]. $$
By construction, it follows that
$$ V(x_1 + h, x_2) \;\geq\; V^L(x_1 + h, x_2) \;\geq\; V^{L^0}(x_1, x_2) + \mathbb{E}_{(x_1, x_2)}\big[ h\, e^{-q \tau_0} \big] \;>\; V(x_1, x_2) - \varepsilon + \mathbb{E}_{(x_1, x_2)}\big[ h\, e^{-q \tau_0} \big], $$
which implies
$$ V(x_1 + h, x_2) \;\geq\; V(x_1, x_2) + \mathbb{E}_{(x_1, x_2)}\big[ h\, e^{-q \tau_0} \big], $$
since $\varepsilon > 0$ was arbitrary. We note that $\mathbb{E}_{(x_1, x_2)}\big[ h\, e^{-q \tau_0} \big] > 0$, since the probability that level $0$ is reached before the arrival of the first claim is strictly positive. Thus the proof is completed. □
Our next result is the Dynamic Programming Principle, which heuristically states that an optimal strategy must be optimal at any point in time.
Proposition 1 (Dynamic Programming Principle).
For any initial surplus $\mathbf{x} \in \mathcal{S}$ and any stopping time $\tau$ it holds that
$$ V(\mathbf{x}) = \sup_{\mathbf{L} \in \Pi_{\mathbf{x}}} \mathbb{E}_{\mathbf{x}}\left[ \int_0^{\tau \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{\tau \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(\tau \wedge \tau^L)}\, V\big( X_1^L(\tau \wedge \tau^L),\, X_2^L(\tau \wedge \tau^L) \big) \right]. $$
Proof. 
We use the same strategy as in the proof of (Azcue and Muler 2014, Lemma 1.2): We show the statement for a fixed time $T \geq 0$; the general case then follows using the arguments given in (Zhu 1992, chp. II.2).
Set
$$ v(\mathbf{x}, T) = \sup_{\mathbf{L} \in \Pi_{\mathbf{x}}} \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(T \wedge \tau^L)}\, V\big( X_1^L(T \wedge \tau^L),\, X_2^L(T \wedge \tau^L) \big) \right]. $$
Then, by the same arguments as in (Azcue and Muler 2014, Lemma 1.2), we have for any $\mathbf{x} \in \mathcal{S}$ and $\mathbf{L} \in \Pi_{\mathbf{x}}$ that
$$ V^L(\mathbf{x}) \leq v(\mathbf{x}, T). $$
The proof of the converse inequality $v(\mathbf{x}, T) \leq V(\mathbf{x})$ is also similar to Azcue and Muler (2014), but more involved due to the new cases that appear if one initial capital is strictly negative. Therefore, we go into detail here:
For any given $\varepsilon > 0$, we fix $\mathbf{L} \in \Pi_{\mathbf{x}}$ such that
$$ \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(T \wedge \tau^L)}\, V\big( X_1^L(T \wedge \tau^L),\, X_2^L(T \wedge \tau^L) \big) \right] \geq v(\mathbf{x}, T) - \frac{\varepsilon}{2}. \tag{15} $$
By Lemma 2, the optimal value function $V$ is increasing and continuous on $\mathcal{S}$, and hence we can construct monotonically increasing sequences $(v_1^{(i)})_{i \in \mathbb{N}}$, $(v_2^{(i)})_{i \in \mathbb{N}}$ with $(v_1^{(1)}, v_2^{(1)}) \in \mathcal{S}$, $0 \in (v_1^{(i)})_{i \in \mathbb{N}}$, $0 \in (v_2^{(i)})_{i \in \mathbb{N}}$ and $\lim_{i \to \infty} v_1^{(i)} = \lim_{i \to \infty} v_2^{(i)} = \infty$, such that if $y_1 \in [v_1^{(i)}, v_1^{(i+1)})$ or $y_2 \in [v_2^{(j)}, v_2^{(j+1)})$, then
$$ V(y_1, x_2) - V(v_1^{(i)}, x_2) < \frac{\varepsilon}{8} \quad \forall x_2 \in \mathbb{R} \qquad \text{and} \qquad V(x_1, y_2) - V(x_1, v_2^{(j)}) < \frac{\varepsilon}{8} \quad \forall x_1 \in \mathbb{R} $$
for all $i, j \in \mathbb{N}$. W.l.o.g., in the following we solely use the subscript $i$ to simplify the notation; this is possible as we can always insert additional elements into the sequences. Hence, we get
$$ V(y_1, y_2) < V(v_1^{(i)}, y_2) + \frac{\varepsilon}{8} < V(v_1^{(i)}, v_2^{(i)}) + \frac{\varepsilon}{4}. \tag{16} $$
Given $i \in \mathbb{N}$, we consider strategies $L^i = (L_1^i(t), L_2^i(t))_{t \geq 0} \in \Pi_{(v_1^{(i)}, v_2^{(i)})}$ such that
$$ V(v_1^{(i)}, v_2^{(i)}) - V^{L^i}(v_1^{(i)}, v_2^{(i)}) \leq \frac{\varepsilon}{4}. \tag{17} $$
Based on these strategies, we define a new strategy $L^* = (L_1^*(t), L_2^*(t))_{t \geq 0}$ as follows:
  • If $\tau^L \leq T$, set $L_1^*(t) = L_1(t)$ and $L_2^*(t) = L_2(t)$ for all $t \geq 0$.
  • If $\tau^L > T$, set $L_1^*(t) = L_1(t)$ and $L_2^*(t) = L_2(t)$ for all $t \in [0, T]$.
  • If $\tau^L > T$ and $t \geq T$, choose $i$ such that $X_1^L(T) \in [v_1^{(i)}, v_1^{(i+1)})$ and $X_2^L(T) \in [v_2^{(i)}, v_2^{(i+1)})$. We distinguish three cases:
    If $X_1^L(T) \geq 0$ and $X_2^L(T) \geq 0$, then by assumption we have $v_1^{(i)} \geq 0$ and $v_2^{(i)} \geq 0$. In $L^*$, branch one immediately pays $X_1^L(T) - v_1^{(i)}$ and branch two immediately pays $X_2^L(T) - v_2^{(i)}$ as dividends at time $T$. Afterward, we follow $L^i$.
    If $X_1^L(T) \geq 0$ and $X_2^L(T) < 0$, then by assumption $v_1^{(i)} \geq 0$. In $L^*$, branch one immediately pays $X_1^L(T) - v_1^{(i)}$ as dividends. Then we follow $L^i$ from surplus $(v_1^{(i)}, X_2^L(T))$.
    Similar to the previous case, if $X_1^L(T) < 0$ and $X_2^L(T) \geq 0$, branch two pays $X_2^L(T) - v_2^{(i)}$ as dividends, and then we follow $L^i$ from surplus $(X_1^L(T), v_2^{(i)})$.
In the case of $\tau^L > T$ and $t \geq T$, assuming that $X_1^L(T) \in [v_1^{(i)}, v_1^{(i+1)})$ and $X_2^L(T) \in [v_2^{(i)}, v_2^{(i+1)})$, it follows by (17) that
$$ V^{L^*}\big( X_1^L(T), X_2^L(T) \big) = \begin{cases} X_1^L(T) - v_1^{(i)} + X_2^L(T) - v_2^{(i)} + V^{L^i}(v_1^{(i)}, v_2^{(i)}), & v_1^{(i)} \geq 0,\ v_2^{(i)} \geq 0, \\ X_1^L(T) - v_1^{(i)} + V^{L^i}(v_1^{(i)}, X_2^L(T)), & v_1^{(i)} \geq 0,\ v_2^{(i)} < 0, \\ X_2^L(T) - v_2^{(i)} + V^{L^i}(X_1^L(T), v_2^{(i)}), & v_1^{(i)} < 0,\ v_2^{(i)} \geq 0 \end{cases} $$
$$ \geq\; \mathbb{1}_{\{v_1^{(i)} \geq 0\}}\big( X_1^L(T) - v_1^{(i)} \big) + \mathbb{1}_{\{v_2^{(i)} \geq 0\}}\big( X_2^L(T) - v_2^{(i)} \big) + V^{L^i}(v_1^{(i)}, v_2^{(i)}) \;\geq\; V^{L^i}(v_1^{(i)}, v_2^{(i)}) \;\geq\; V(v_1^{(i)}, v_2^{(i)}) - \frac{\varepsilon}{4}. \tag{18} $$
Strategy $L^*$ is then admissible, and its value function can be obtained as
$$ V^{L^*}(\mathbf{x}) = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + \int_{T \wedge \tau^L}^{\tau^{L^*}} e^{-qs}\, \mathrm{d}L_1^*(s) + \int_{T \wedge \tau^L}^{\tau^{L^*}} e^{-qs}\, \mathrm{d}L_2^*(s) \right] $$
$$ = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) \right] + \mathbb{E}_{\mathbf{x}}\left[ e^{-q(T \wedge \tau^L)}\, \mathbb{E}_{\mathbf{X}^L(T \wedge \tau^L)}\left[ \int_0^{\tau^{L^*} - (T \wedge \tau^L)} e^{-qs}\, \mathrm{d}L_1^*\big( s + (T \wedge \tau^L) \big) + \int_0^{\tau^{L^*} - (T \wedge \tau^L)} e^{-qs}\, \mathrm{d}L_2^*\big( s + (T \wedge \tau^L) \big) \right] \right] $$
$$ = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(T \wedge \tau^L)}\, V^{L^*}\big( X_1^L(T \wedge \tau^L),\, X_2^L(T \wedge \tau^L) \big) \right]. $$
Now, using (15), (16) and (18) and Lemma 1, the result follows, since
$$ v(\mathbf{x}, T) - V^{L^*}(\mathbf{x}) \leq \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(T \wedge \tau^L)}\, V\big( X_1^L(T \wedge \tau^L), X_2^L(T \wedge \tau^L) \big) \right] - \mathbb{E}_{\mathbf{x}}\left[ \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{T \wedge \tau^L} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q(T \wedge \tau^L)}\, V^{L^*}\big( X_1^L(T \wedge \tau^L), X_2^L(T \wedge \tau^L) \big) \right] + \frac{\varepsilon}{2} $$
$$ = \mathbb{E}_{\mathbf{x}}\left[ e^{-q(T \wedge \tau^L)} \Big( V\big( X_1^L(T \wedge \tau^L), X_2^L(T \wedge \tau^L) \big) - V^{L^*}\big( X_1^L(T \wedge \tau^L), X_2^L(T \wedge \tau^L) \big) \Big) \right] + \frac{\varepsilon}{2} $$
$$ \leq \mathbb{E}_{\mathbf{x}}\left[ e^{-q(T \wedge \tau^L)} \Big( V\big( X_1^L(T \wedge \tau^L), X_2^L(T \wedge \tau^L) \big) - V(v_1^{(i)}, v_2^{(i)}) + \frac{\varepsilon}{4} \Big) \right] + \frac{\varepsilon}{2} \;<\; \mathbb{E}_{\mathbf{x}}\left[ e^{-q(T \wedge \tau^L)} \Big( \frac{\varepsilon}{4} + \frac{\varepsilon}{4} \Big) \right] + \frac{\varepsilon}{2} \;\leq\; \varepsilon, $$
and because $\varepsilon$ was arbitrary. □
We now aim to derive the HJB equation. To this end, recall the concept of the discounted infinitesimal generator, cf. (Azcue and Muler 2014, sct. 1.4), (Azcue et al. 2018, eq. (7)): Given a Markov process $S$ in $\mathbb{R}^2$ and $\mathbf{x} \in \mathcal{S}$, set
$$ \widetilde{\mathcal{G}}(S, f)(\mathbf{x}) = \lim_{t \downarrow 0} \frac{\mathbb{E}_{\mathbf{x}}\big[ e^{-qt} f(S_t) \big] - f(\mathbf{x})}{t} $$
for any real-valued, continuously differentiable function $f$ on $\mathcal{S}$ such that the above limit exists. For our considerations we choose $S(t) = \mathbf{X}^L(t \wedge \tau^L)$, i.e., the controlled risk process stopped at ruin.
Let $\ell_1, \ell_2 \geq 0$ be constants and define the dividend strategy $L^{\boldsymbol{\ell}}$ that constantly pays dividends at rates $\ell_1$ and $\ell_2$ from branch one and two, respectively, whenever the respective surplus is non-negative. Then clearly $L^{\boldsymbol{\ell}}$ is admissible. Further, let $\tau_1$ be the first claim arrival time of $\mathbf{X}$. Using the same arguments as in (Azcue and Muler 2014, sct. 1.4), we derive
$$ \widetilde{\mathcal{G}}\Big( \big( \mathbf{X}^{L^{\boldsymbol{\ell}}}(t \wedge \tau^L) \big)_{t \geq 0},\, f \Big)(\mathbf{x}) = \widetilde{\mathcal{G}}\Big( \big( \mathbf{X}^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big)_{t \geq 0},\, f \Big)(\mathbf{x}) = \big( c_1 - \ell_1 \mathbb{1}_{\{x_1 \geq 0\}} \big) f_{x_1}(\mathbf{x}) + \big( c_2 - \ell_2 \mathbb{1}_{\{x_2 \geq 0\}} \big) f_{x_2}(\mathbf{x}) - (\lambda + q) f(\mathbf{x}) + \lambda\, I(f)(\mathbf{x}), \tag{19} $$
where $I$ is an integral operator given via
$$ I(f)(\mathbf{x}) := \int_0^{(x_1/b_1) \vee (x_2/b_2)} f(x_1 - b_1 \alpha,\, x_2 - b_2 \alpha)\, \mathrm{d}F(\alpha). \tag{20} $$
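For concrete claim distributions, the operator $I$ can be evaluated numerically. The sketch below is ours: it assumes $\mathrm{Exp}(\mu)$-distributed claims and a simple trapezoidal rule, with the upper integration limit being the maximum of $x_1/b_1$ and $x_2/b_2$ as in (20).

```python
import numpy as np

def I_operator(f, x1, x2, b1, b2, mu, n=4000):
    """Trapezoidal-rule approximation of
    I(f)(x) = int_0^{(x1/b1) v (x2/b2)} f(x1 - b1*a, x2 - b2*a) dF(a)
    for Exp(mu) claim sizes; f must accept numpy arrays componentwise."""
    upper = max(x1 / b1, x2 / b2)
    if upper <= 0:                        # both arguments already ruinous
        return 0.0
    a = np.linspace(0.0, upper, n)
    dens = mu * np.exp(-mu * a)           # Exp(mu) claim-size density
    w = f(x1 - b1 * a, x2 - b2 * a) * dens
    h = a[1] - a[0]
    return float(0.5 * h * np.sum(w[:-1] + w[1:]))  # trapezoid rule
```

As a sanity check, for $f \equiv 1$ the integral reduces to $F\big((x_1/b_1) \vee (x_2/b_2)\big) = 1 - e^{-\mu \,\cdot\, \text{upper}}$.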
Moreover, set
$$ \mathcal{L}(V)(\mathbf{x}) := c_1 V_{x_1}(\mathbf{x}) + c_2 V_{x_2}(\mathbf{x}) - (q + \lambda) V(\mathbf{x}) + \lambda\, I(V)(\mathbf{x}). \tag{21} $$
The HJB equation, i.e., the integro-differential equation which is satisfied by the optimal value function, turns out to be
$$ \max\Big\{ \mathbb{1}_{\{x_1 \geq 0\}}\big( 1 - V_{x_1}(\mathbf{x}) \big),\; \mathbb{1}_{\{x_2 \geq 0\}}\big( 1 - V_{x_2}(\mathbf{x}) \big),\; \mathcal{L}(V)(\mathbf{x}) \Big\} = 0 \tag{22} $$
for any $\mathbf{x} \in \mathcal{S}$, and in the following we explain its derivation:
As the case $\mathbf{x} \in \mathbb{R}^2_{\geq 0}$ is completely similar to the derivation of the HJB equation in Azcue et al. (2018), we only explain the differences in the new case, where one surplus is strictly negative. Due to symmetry, we consider w.l.o.g. $x_1 > 0$, $x_2 < 0$. Assume that $V$ is continuously differentiable. Since $x_2 < 0$, Equations (19) and (20) reduce to
$$ \widetilde{\mathcal{G}}\Big( \big( \mathbf{X}^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big)_{t \geq 0},\, V \Big)(\mathbf{x}) = (c_1 - \ell_1) V_{x_1}(\mathbf{x}) + c_2 V_{x_2}(\mathbf{x}) - (\lambda + q) V(\mathbf{x}) + \lambda\, I(V)(\mathbf{x}), \qquad I(V)(\mathbf{x}) = \int_0^{x_1/b_1} V(x_1 - b_1 \alpha,\, x_2 - b_2 \alpha)\, \mathrm{d}F(\alpha). $$
Let $t > 0$ be such that $t < -\frac{x_2}{c_2}$ and such that $t < \frac{x_1}{\ell_1 - c_1}$ if $\ell_1 > c_1$. From Proposition 1 with $\tau = t \wedge \tau_1 \wedge \tau^L$ it follows that
$$ V(\mathbf{x}) \geq \mathbb{E}_{\mathbf{x}}\left[ \int_0^{t \wedge \tau_1} e^{-qs}\, \ell_1\, \mathrm{d}s + \int_0^{t \wedge \tau_1} e^{-qs}\, \ell_2\, \mathbb{1}_{\{s \geq -x_2/c_2\}}\, \mathrm{d}s + e^{-q(t \wedge \tau_1)}\, V\big( X_1^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1),\, X_2^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big) \right] = \mathbb{E}_{\mathbf{x}}\left[ \int_0^{t \wedge \tau_1} e^{-qs}\, \ell_1\, \mathrm{d}s + e^{-q(t \wedge \tau_1)}\, V\big( X_1^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1),\, X_2^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big) \right]. $$
This implies
$$ 0 \geq \lim_{t \downarrow 0} \frac{ \ell_1 \cdot \mathbb{E}_{\mathbf{x}}\left[ \int_0^{t \wedge \tau_1} e^{-qs}\, \mathrm{d}s \right] + \mathbb{E}_{\mathbf{x}}\left[ e^{-q(t \wedge \tau_1)}\, V\big( X_1^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1),\, X_2^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big) \right] - V(\mathbf{x}) }{t} = \ell_1 + \widetilde{\mathcal{G}}\Big( \big( \mathbf{X}^{L^{\boldsymbol{\ell}}}(t \wedge \tau_1) \big)_{t \geq 0},\, V \Big)(\mathbf{x}), $$
and we conclude
$$ \mathcal{L}(V)(\mathbf{x}) + \ell_1 \cdot \big( 1 - V_{x_1}(\mathbf{x}) \big) \leq 0. $$
Lastly, choosing either $\ell_1 = 0$ or letting $\ell_1 \to \infty$, we obtain that
$$ \max\big\{ 1 - V_{x_1}(\mathbf{x}),\; \mathcal{L}(V)(\mathbf{x}) \big\} \leq 0, $$
which is (22) for $x_1 > 0$, $x_2 < 0$. The case $x_2 > 0$, $x_1 < 0$ follows in complete analogy, where we obtain
$$ \max\big\{ 1 - V_{x_2}(\mathbf{x}),\; \mathcal{L}(V)(\mathbf{x}) \big\} \leq 0. $$
Remark 1.
Note that at first sight, Equations (19)–(22) look very similar to Equations (8)–(12) in Azcue et al. (2018). The indicator functions occurring in (19) and (22) only matter in the cases where one $x_i$, $i = 1, 2$, is strictly negative, which implies that on $\mathbb{R}^2_{\geq 0}$ Equations (19), (21) and (22) superficially coincide with Equations (8), (10) and (12) in Azcue et al. (2018). The difference lies in the integral operator, as we need to integrate up to the maximum instead of the minimum of $x_1/b_1$ and $x_2/b_2$, since $f$ is only assumed to be zero on the purely negative quadrant, while it may be strictly positive outside.
As usual for this type of problem, there are cases where the value function may not be differentiable, so it may not fulfill the HJB equation in the classical sense. Subsequently, we therefore use the notion of viscosity solutions: Following the definition given in (Azcue et al. 2018, Def. 3.4), we call a function $\underline{u} : \mathcal{S} \to \mathbb{R}$ a viscosity subsolution of (22) at a fixed point $\tilde{\mathbf{x}} \in \mathcal{S}$ if it is locally Lipschitz and any continuously differentiable function $\psi : \mathcal{S} \to \mathbb{R}$ with $\psi(\tilde{\mathbf{x}}) = \underline{u}(\tilde{\mathbf{x}})$ such that $\underline{u} - \psi$ reaches its maximum in $\tilde{\mathbf{x}}$ satisfies
$$ \max\Big\{ \mathbb{1}_{\{\tilde{x}_1 \geq 0\}}\big( 1 - \psi_{x_1}(\tilde{\mathbf{x}}) \big),\; \mathbb{1}_{\{\tilde{x}_2 \geq 0\}}\big( 1 - \psi_{x_2}(\tilde{\mathbf{x}}) \big),\; \mathcal{L}(\psi)(\tilde{\mathbf{x}}) \Big\} \geq 0. $$
Moreover, a function $\overline{u} : \mathcal{S} \to \mathbb{R}$ is called a viscosity supersolution of (22) at a fixed point $\tilde{\mathbf{x}} \in \mathcal{S}$ if it is locally Lipschitz and any continuously differentiable function $\varphi : \mathcal{S} \to \mathbb{R}$ with $\varphi(\tilde{\mathbf{x}}) = \overline{u}(\tilde{\mathbf{x}})$ such that $\overline{u} - \varphi$ reaches its minimum in $\tilde{\mathbf{x}}$ satisfies
$$ \max\Big\{ \mathbb{1}_{\{\tilde{x}_1 \geq 0\}}\big( 1 - \varphi_{x_1}(\tilde{\mathbf{x}}) \big),\; \mathbb{1}_{\{\tilde{x}_2 \geq 0\}}\big( 1 - \varphi_{x_2}(\tilde{\mathbf{x}}) \big),\; \mathcal{L}(\varphi)(\tilde{\mathbf{x}}) \Big\} \leq 0. $$
The functions $\psi$ and $\varphi$ are also called test functions. A function $u : \mathcal{S} \to \mathbb{R}$ is called a viscosity solution at $\tilde{\mathbf{x}} \in \mathcal{S}$ if it is both a viscosity sub- and supersolution.
Proposition 2.
The optimal value function $V$ defined in (7) is a viscosity solution of (22) at any $\mathbf{x} \in \mathcal{S}$.
The proof of Proposition 2 is naturally split into two parts that cover sub- and supersolution, respectively. The proof that V is a supersolution follows the idea of (Azcue and Muler 2014, Prop. 3.1). Mainly it uses the same arguments as the derivation of the HJB equation. For the sake of brevity the details will be omitted here. Proving that V is a viscosity subsolution follows the main ideas of (Azcue and Muler 2014, Prop. 3.1) as well and is done by contradiction. As our enlarged solvency set demands some additional care, we go a little more into detail here. For an easier understanding, we split the proof into Lemmas 3 and 4, which yield an immediate contradiction, showing that V is indeed a viscosity subsolution.
In the following, for notational simplicity, we abuse notation and define
$$ [a_1, b_1] \times [a_2, b_2] := \emptyset \quad \text{if } a_1 > b_1 \text{ or } a_2 > b_2. $$
Lemma 3.
Assume $V$ is not a viscosity subsolution of (22) at $\tilde{\mathbf{x}} = (\tilde{x}_1, \tilde{x}_2) \in \mathcal{S}$. Then we can find $\varepsilon > 0$,
$$ h \in \begin{cases} \big( 0, \tfrac{1}{2}\big( |\tilde{x}_1| \wedge |\tilde{x}_2| \big) \big), & \text{if } |\tilde{x}_1| \wedge |\tilde{x}_2| > 0, \\ \big( 0, \tfrac{1}{2}\big( |\tilde{x}_1| \vee |\tilde{x}_2| \big) \big), & \text{if } |\tilde{x}_1| \wedge |\tilde{x}_2| = 0, \end{cases} \tag{25} $$
and a continuously differentiable function $\psi : \mathbb{R}^2 \to \mathbb{R}$ such that $\psi$ is a test function for a subsolution of Equation (22) satisfying
$$ \mathbb{1}_{\{\tilde{x}_1 \geq 0\}}\big( 1 - \psi_{x_1}(\mathbf{x}) \big) \leq 0 \quad \text{for } \mathbf{x} \in \big[ 0 \vee (\tilde{x}_1 - h),\, \tilde{x}_1 + h \big] \times (-\infty, \tilde{x}_2 + h], \tag{26} $$
$$ \mathbb{1}_{\{\tilde{x}_2 \geq 0\}}\big( 1 - \psi_{x_2}(\mathbf{x}) \big) \leq 0 \quad \text{for } \mathbf{x} \in (-\infty, \tilde{x}_1 + h] \times \big[ 0 \vee (\tilde{x}_2 - h),\, \tilde{x}_2 + h \big], \tag{27} $$
$$ \mathcal{L}(\psi)(\mathbf{x}) \leq -2\varepsilon q \quad \text{for } \mathbf{x} \in [\tilde{x}_1 - h, \tilde{x}_1 + h] \times [\tilde{x}_2 - h, \tilde{x}_2 + h], \tag{28} $$
$$ V(\mathbf{x}) \leq \psi(\mathbf{x}) - 2\varepsilon \quad \text{for } \mathbf{x} \in B^* := B_1 \cup B_2 \cup B_3 \cup B_4 \cup \mathbb{R}^2_{<0}, \tag{29} $$
where
$$ B_1 := \big( -\infty, \tilde{x}_1 + h \big] \times \big( -\infty, \tilde{x}_2 - \tfrac{h}{2} \big], \quad B_2 := \big( -\infty, \tilde{x}_1 - \tfrac{h}{2} \big] \times \big( -\infty, \tilde{x}_2 + h \big], \quad B_3 := \{ \tilde{x}_1 + h \} \times \big( -\infty, \tilde{x}_2 + h \big], \quad B_4 := \big( -\infty, \tilde{x}_1 + h \big] \times \{ \tilde{x}_2 + h \}. $$
Note that the maxima (∨) inside the intervals in (26) and (27) are only relevant if $\tilde{x}_1 = 0$ or $\tilde{x}_2 = 0$, respectively: If $\tilde{x}_i > 0$, then (25) ensures that $\tilde{x}_i - h > 0$.
Proof of Lemma 3. 
The proof follows the outline of the univariate case as presented in (Azcue and Muler 2014, Prop. 3.1) and consists in constructing a test function ψ that fulfills the desired properties. □
Lemma 4.
Assume $V$ is not a viscosity subsolution and let $\psi, \tilde{\mathbf{x}}, h, \varepsilon$ be as in Lemma 3. Then it holds that
$$ V(\tilde{\mathbf{x}}) < \psi(\tilde{\mathbf{x}}). $$
Proof of Lemma 4. 
This proof also follows the outline given in (Azcue and Muler 2014, Prop. 3.1) and is similar to the proof of (Azcue et al. 2018, Thm. 3.5). However, some decisive extra arguments are needed in our extended setting, which is why we go into some detail here.
Since $\psi$ is continuously differentiable, for any compact set $M \subset \mathbb{R}^2$ we can find $C \geq 0$ such that for any $\mathbf{x} \in M$ we have
$$ \mathcal{L}(\psi)(\mathbf{x}) \leq C. \tag{30} $$
Choose
$$ 0 < \theta < \min\left\{ \frac{\varepsilon}{2C},\; \frac{1}{4q},\; \frac{h}{2 \max\{c_1, c_2\}} \right\} \tag{31} $$
and fix an admissible dividend policy $\mathbf{L}(t) = (L_1(t), L_2(t)) \in \Pi_{\tilde{\mathbf{x}}}$. Similar to the proof of (Azcue et al. 2018, Thm. 3.5), we define the stopping times
$$ \bar{\tau} := \inf\big\{ t > 0 : X_1^L(t) \geq \tilde{x}_1 + h \ \text{ or } \ X_2^L(t) \geq \tilde{x}_2 + h \big\}, \qquad \underline{\tau} := \inf\big\{ t > 0 : X_1^L(t) \leq \tilde{x}_1 - h \ \text{ or } \ X_2^L(t) \leq \tilde{x}_2 - h \big\} $$
and set
$$ \tau^* := \bar{\tau} \wedge (\underline{\tau} + \theta) \wedge \tau^L, $$
where $\tau^L$ is, as usual, the ruin time of the controlled process $\mathbf{X}^L$. Clearly, $\tau^* < \infty$ for $h$ small enough. Recall the sets $B^*, B_1, B_2, B_3$, and $B_4$ from Lemma 3. Then, by construction, we have $\mathbf{X}^L(\underline{\tau} + \theta) \in B_1 \cup B_2$, $\mathbf{X}^L(\bar{\tau}) \in B_3 \cup B_4$ and $\mathbf{X}^L(\tau^L) \in \mathbb{R}^2_{<0}$. Hence,
$$ \mathbf{X}^L(\tau^*) \in B^* \tag{32} $$
and consequently (29) implies
$$ V\big( \mathbf{X}^L(\tau^*) \big) \leq \psi\big( \mathbf{X}^L(\tau^*) \big) - 2\varepsilon. \tag{33} $$
Using a bivariate extension of (Azcue and Muler 2014, Prop. 2.13), we obtain
$$ e^{-q\tau^*}\, \psi\big( \mathbf{X}^L(\tau^*) \big) - \psi(\tilde{\mathbf{x}}) = \int_0^{\tau^*} e^{-qs}\, \mathcal{L}(\psi)\big( \mathbf{X}^L(s) \big)\, \mathrm{d}s - \int_0^{\tau^*} e^{-qs}\, \mathrm{d}L(s) + \int_0^{\tau^*} \big( 1 - \psi_{x_1}(\mathbf{X}^L(s)) \big)\, e^{-qs}\, \mathrm{d}L_1^c(s) + \int_0^{\tau^*} \big( 1 - \psi_{x_2}(\mathbf{X}^L(s)) \big)\, e^{-qs}\, \mathrm{d}L_2^c(s) $$
$$ + \sum_{\substack{X_1(s) \neq X_1(s+) \\ s < \tau^*}} e^{-qs} \int_0^{L_1(s+) - L_1(s)} \Big( 1 - \psi_{x_1}\big( X_1^L(s) - \alpha,\, X_2^L(s) \big) \Big)\, \mathrm{d}\alpha + \sum_{\substack{X_2(s) \neq X_2(s+) \\ s < \tau^*}} e^{-qs} \int_0^{L_2(s+) - L_2(s)} \Big( 1 - \psi_{x_2}\big( X_1^L(s),\, X_2^L(s) - \alpha \big) \Big)\, \mathrm{d}\alpha + \widetilde{M}(\tau^*), \tag{34} $$
where
$$ \widetilde{M}(t) := \sum_{\substack{\mathbf{X}^L(s) \neq \mathbf{X}^L(s-) \\ s \leq t}} e^{-qs} \Big( u\big( \mathbf{X}^L(s) \big) - u\big( \mathbf{X}^L(s-) \big) \Big) - \lambda \int_0^t e^{-qs} \int_0^{\infty} \Big( u\big( X_1^L(s) - b_1 \alpha,\, X_2^L(s) - b_2 \alpha \big) - u\big( \mathbf{X}^L(s) \big) \Big)\, \mathrm{d}F(\alpha)\, \mathrm{d}s $$
defines a zero-mean martingale. Fix $i = 1, 2$. If $\tilde{x}_i < 0$, then by construction of $h$ we have $X_i^L(s) < 0$ for any $s \in (0, \tau^*]$. Hence, by admissibility of the strategy, no dividends can be paid from branch $i$ and the respective integrals in (34) are zero. If, on the other hand, $\tilde{x}_i \geq 0$, then we may apply (26) or (27) to get
$$ e^{-q\tau^*}\, \psi\big( \mathbf{X}^L(\tau^*) \big) - \psi(\tilde{\mathbf{x}}) \leq \int_0^{\tau^*} e^{-qs}\, \mathcal{L}(\psi)\big( \mathbf{X}^L(s) \big)\, \mathrm{d}s - \int_0^{\tau^*} e^{-qs}\, \mathrm{d}L(s) + \widetilde{M}(\tau^*). \tag{35} $$
From (28), (30) and (31), we have
$$ \int_0^{\tau^*} e^{-qs}\, \mathcal{L}(\psi)\big( \mathbf{X}^L(s) \big)\, \mathrm{d}s = \int_0^{\tau^L \wedge \underline{\tau}} e^{-qs}\, \mathcal{L}(\psi)\big( \mathbf{X}^L(s) \big)\, \mathrm{d}s + \int_{\tau^L \wedge \underline{\tau}}^{\tau^*} e^{-qs}\, \mathcal{L}(\psi)\big( \mathbf{X}^L(s) \big)\, \mathrm{d}s \leq -2\varepsilon q \int_0^{\tau^L \wedge \underline{\tau}} e^{-qs}\, \mathrm{d}s + C\theta \leq -2\varepsilon q \int_0^{\tau^L \wedge \underline{\tau}} e^{-qs}\, \mathrm{d}s + \frac{\varepsilon}{2}, \tag{36} $$
where the inequality for the second integral holds since $\mathbf{X}^L(s)$ lies in the union of some compact set $M$ and $\mathbb{R}^2_{<0}$. This is because, due to admissibility, we cannot force a branch to go negative by a dividend payment, and because claims occur along the line $x_2 = \frac{b_2}{b_1} \cdot x_1$.
Now, the rest of the proof is completely similar to the proof of (Azcue and Muler 2014, Prop. 3.1): We use the Dynamic Programming Principle (Proposition 1) together with Equations (33), (35) and (36) to obtain the desired inequality
$$ V(\tilde{\mathbf{x}}) = \sup_{\mathbf{L} \in \Pi_{\tilde{\mathbf{x}}}} \mathbb{E}_{\tilde{\mathbf{x}}}\left[ \int_0^{\tau^*} e^{-qs}\, \mathrm{d}L_1(s) + \int_0^{\tau^*} e^{-qs}\, \mathrm{d}L_2(s) + e^{-q\tau^*}\, V\big( \mathbf{X}^L(\tau^*) \big) \right] \leq \psi(\tilde{\mathbf{x}}) - \varepsilon < \psi(\tilde{\mathbf{x}}). \; \square $$
Remark 2.
A comparison of our proof with the proof of Theorem 3.5 in Azcue et al. (2018) exhibits some flaws in the latter. The authors state that
$$ V(\bar{\mathbf{x}}) \leq \psi(\bar{\mathbf{x}}) - 2\epsilon \quad \text{for } \bar{\mathbf{x}} \in (-\boldsymbol{\infty}, \bar{\mathbf{x}}_0 - h/2] \cup \{ \bar{\mathbf{x}}_0 + h \} $$
and later use the same stopping times $\bar{\tau}, \underline{\tau}$ and $\tau^*$ as we do. This, however, is not enough, as
$$ \bar{\mathbf{X}}(\tau^*) \in (-\boldsymbol{\infty}, \bar{\mathbf{x}}_0 - h/2] \cup \{ \bar{\mathbf{x}}_0 + h \} $$
does not necessarily hold (see also (32)), and hence the multivariate generalization of (Azcue and Muler 2014, Eq. (3.20)) fails.
Nevertheless, we emphasize that this does not affect the statement of (Azcue et al. 2018, Thm. 3.5) itself, as the proof may be fixed by adjusting the definition of ψ 1 as
$$\psi_1(x) := \psi_0(x) + \kappa\,(b_1^2 + b_2^2)\left(\frac{\tilde x_1}{b_1} - \frac{\tilde x_2}{b_2}\right)^2\Big((x_1 - \tilde x_1)^2 + (x_2 - \tilde x_2)^2\Big)$$
for any $x = (x_1, x_2) \in S$, in order to show that (29) holds on the larger set $B^*$.
The following proposition is also known as a verification result:
Proposition 3.
The optimal value function is the smallest viscosity solution u of (22) satisfying the growth conditions
$$u(x_1, x_2) \le K + (x_1 \vee 0) + (x_2 \vee 0) \quad \text{for some } K > 0 \text{ and any } (x_1, x_2) \in S \qquad (\text{G1})$$
and
$$u(x_1, x_2) < u(x_1 + h, x_2), \qquad u(x_1, x_2) < u(x_1, x_2 + h) \qquad (\text{G2})$$
for any $(x_1, x_2) \in S$ and $h > 0$.
Proof. 
Similar to (Azcue and Muler 2014, Prop. 4.4) one can show that for arbitrary $L \in \Pi_x$ and any viscosity supersolution $\bar u$ of (22) satisfying (G1) and (G2), it holds that $V_L(x) \le \bar u(x)$. Together with Proposition 2 this implies the statement. □

4. Bang Strategies

In this section, we take a heuristic approach to define a class of strategies that appear to be natural candidates for optimality. We show that in certain subcases the optimal strategy indeed lies in this class, and we reduce the control problem defined in (7) to a univariate one.
The idea behind these strategies is as follows: If we face ruin at time $\tau_L$, then it is desirable to have a relatively small surplus right before $\tau_L$, because a small remaining surplus means that more dividends were paid before ruin. As we allow one branch of the process to become negative, all possible dividends can be paid from one branch (at every $t \ge 0$) to ensure that no capital is “wasted” at the time of ruin. We thus consider strategies that pay dividends according to the following principles:
  • One branch follows some admissible, one-dimensional dividend strategy,
  • the other branch pays dividends as follows:
    (i)
    If the surplus is positive, the whole surplus is immediately paid as a lump sum.
    (ii)
    If the surplus is zero, all incoming premia are continuously paid as dividends.
    (iii)
    If the surplus is negative, no dividends are paid until the branch reaches zero again.
Inspired by the well-known bang-bang controls (see, e.g., Rolewicz 1987, sct. 6.5), we call these strategies bang strategies, as they always pay the maximum dividends possible from one branch. Let $\kappa \in \{1, 2\}$ and set $\iota := 3 - \kappa$, such that in particular $(\kappa, \iota) = (1, 2)$ or $(2, 1)$. Formally, for any $x \in S$ we set
$$\Pi_x^{*\kappa} := \left\{ L = (L_1, L_2) :\; L_\kappa \text{ is admissible for } X_\kappa(t),\; L_\iota(t) = (x_\iota \vee 0) + c_\iota \int_0^t \mathbb{1}_{\{\bar X_\iota(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_\iota(s) = X_\iota(s)\}}\, ds \right\},$$
where $\bar X_\iota(s) := \sup_{0 \le r \le s} X_\iota(r)$ denotes the running supremum of $X_\iota$. We define the class of bang strategies as
$$\Pi_x^* := \Pi_x^{*1} \cup \Pi_x^{*2},$$
where by construction any strategy in $\Pi_x^*$ is admissible. Clearly, for any $L^1 \in \Pi_x^{*1}$ the ruin time $\tau^{L^1}$ is equal to the ruin time of the first branch, denoted by $\tau_1^{L^1}$, whereas for any $L^2 \in \Pi_x^{*2}$ the ruin time $\tau^{L^2}$ is equal to the ruin time of the second branch $\tau_2^{L^2}$. This is because any occurring claim affects both branches of our risk process, and by construction of $L^1$, $L^2$ any occurring claim leads to a negative surplus in the second or first branch, respectively. Further, for any univariate admissible strategy $L$ defined on branch $i$ it holds that
$$L_i^*(t) := (x_i \vee 0) + c_i \int_0^t \mathbb{1}_{\{\bar X_i(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_i(s) = X_i(s)\}}\, ds \ge L(t).$$
In the following, to facilitate reading, we use the expression “strategy of type $L^{*1}$” to refer to a strategy in $\Pi_x^{*1}$, and similarly for $\Pi_x^{*2}$.
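The payout rule of the passive branch can be made concrete in a few lines. The following is a minimal simulation sketch (our own illustration, not the authors' code; the function name, parameters and the Euler discretization of the premium flow are our choices) of the dividend stream $L_\iota(t)$: a lump sum of the positive part of the initial capital, plus premia paid whenever the branch sits at a non-negative running maximum.

```python
import numpy as np

rng = np.random.default_rng(7)

def bang_payout(x, c, b, lam, gamma, T=50.0, dt=1e-3):
    """Approximate the bang dividend stream
       L*(t) = (x v 0) + c * int_0^t 1{Xbar(s) >= 0} 1{Xbar(s) = X(s)} ds
    for one branch on an Euler grid. Claims arrive at rate lam and have
    size b * Exp(gamma)."""
    n = int(T / dt)
    X = x
    Xbar = x                         # running supremum of the branch
    L = max(x, 0.0)                  # initial lump sum (principle (i))
    for _ in range(n):
        X += c * dt                  # premium income between claims
        if rng.random() < lam * dt:  # a claim occurs in this step
            X -= b * rng.exponential(1.0 / gamma)
        Xbar = max(Xbar, X)
        # pay premia while at a non-negative running maximum (principle (ii));
        # principle (iii) is covered because Xbar < 0 blocks payments
        if Xbar >= 0.0 and abs(Xbar - X) < 1e-12:
            L += c * dt
    return L
```

With a negative initial surplus the stream stays at zero until the branch climbs back to zero, after which premia are passed through again, matching principles (i)–(iii) above.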
Our next goal is to show that the optimal strategy for the control problem (7) lies in Π x * . We start by proving that strategies of type L * 1 and L * 2 are the optimal choice for certain subsets of all admissible strategies. To do this, we define the sets
$$D_1 := \{x \in S : (b_2/b_1)\, x_1 \ge x_2\}, \qquad D_2 := \{x \in S : (b_2/b_1)\, x_1 \le x_2\}.$$
By construction, the controlled process $X^L(t)$ can neither exit $D_1$ into $D_2$ nor exit $D_2$ into $D_1$ by a claim. Such a change can only happen through one of two events:
(i)
The process deterministically creeps from $D_2$ into $D_1$, driven by the collected premia and by (3), or
(ii)
the process is forced to change from one set to the other by dividend payments (continuously or by a lump sum).
The sets D 1 and D 2 and a sample path of the process X are shown in Figure 1.
Note that event (i) can be interpreted as a special case of event (ii) since it corresponds to paying no dividends at this particular time. Hence, changes between the sets are determined by the dividend policy.
For x D i , i = 1 , 2 , let Π x i be the set of all admissible strategies L which ensure that X L stays in D i until ruin. By the previous reasoning the sets Π x 1 , Π x 2 are well-defined.
Theorem 1.
The optimal strategy in $\Pi_x^1$ is of type $L^{*1}$, whereas the optimal strategy in $\Pi_x^2$ is of type $L^{*2}$.
Proof. 
We show the statement for $\Pi_x^1$, as the proof for the second case is similar. Let $L^0 = (L_1^0, L_2^0)$ be any strategy in $\Pi_x^1$. Define another strategy $L^1$ as
$$L^1 := (L_1^1, L_2^1) := (L_1^0, L_2^*),$$
with $L_2^*$ as in (38). Then $L^1 \in \Pi_x^1$ and it holds that $L_2^1(t) \ge L_2^0(t)$ for any $0 \le t \le \tau^{L^1}$. Hence,
$$L^1(t) \ge L^0(t) \quad \text{for any } 0 < t \le \tau^{L^1} \wedge \tau^{L^0}.$$
Moreover, by definition of $L^0$ and $L^1$ we have for all $0 \le t \le \tau^{L^0}$
$$\frac{b_2}{b_1} \cdot X_1^{L^1}(t) = \frac{b_2}{b_1} \cdot X_1^{L^0}(t) \ge X_2^{L^0}(t) \ge X_2^{L^1}(t),$$
which directly implies that $\tau^{L^1} = \tau^{L^0} = \tau_1^{L^0}$. Hence, $V_{L^1}(x) \ge V_{L^0}(x)$ and the statement follows, as $L^0 \in \Pi_x^1$ was arbitrary. □
Remark 3.
Note that in contrast to the results in (Azcue et al. 2018, Section 4.3), Theorem 1 implies that a strategy that stays in $D_1 \cap D_2 = \{(x_1, \frac{b_2}{b_1} x_1) : x_1 \ge 0\}$ until ruin can never be optimal in our setting.
Next, we show that under some additional assumptions a strategy of type $L^{*1}$ is indeed optimal:
Theorem 2.
Let $b_1 \le b_2$ and let $x \in D_1$. Then for any strategy $L^0 = (L_1^0, L_2^0) \in \Pi_x$ there exists a strategy $L^1 \in \Pi_x^1$ such that $V_{L^0}(x) \le V_{L^1}(x)$.
Before we begin with the proof of Theorem 2, we prove a preparatory lemma:
Lemma 5.
It holds that
$$c_2 \cdot \left( \int_0^t \mathbb{1}_{\{\bar X_2(s) = X_2(s)\}}\, ds - t \right) \ge \frac{b_2}{b_1} \cdot c_1 \cdot \left( \int_0^t \mathbb{1}_{\{\bar X_1(s) = X_1(s)\}}\, ds - t \right).$$
Proof. 
Let
$$O_i(t) := \bar X_i(t) - X_i(t) = \sum_{n=1}^{N(t)} b_i U_n - c_i \int_0^t \mathbb{1}_{\{\bar X_i(s) \ne X_i(s)\}}\, ds$$
be the offset of branch $i$ from its running supremum. Note that the offset is independent of the initial value $X_i(0) = x_i$. Hence, w.l.o.g. we may assume $x_1 = x_2 = 0$. It holds that
$$\frac{b_1}{b_2} O_2(t) = \frac{b_1}{b_2} \bar X_2(t) - \frac{b_1}{b_2} X_2(t) = \sup_{0 \le s \le t} \left( \frac{b_1}{b_2} c_2 s - \sum_{n=1}^{N(s)} b_1 U_n \right) + \left( c_1 - \frac{b_1}{b_2} c_2 \right) t - X_1(t) \ge \sup_{0 \le s \le t} \left( \frac{b_1}{b_2} c_2 s + \left( c_1 - \frac{b_1}{b_2} c_2 \right) s - \sum_{n=1}^{N(s)} b_1 U_n \right) - X_1(t) = O_1(t),$$
where for the inequality we used that $(c_1 - \frac{b_1}{b_2} c_2)\, t$ is monotonically increasing by (3). Equations (40) and (41) imply
$$c_2 \cdot \int_0^t \mathbb{1}_{\{\bar X_2(s) \ne X_2(s)\}}\, ds = \sum_{n=1}^{N(t)} b_2 U_n - O_2(t) = \frac{b_2}{b_1} \cdot \left( \sum_{n=1}^{N(t)} b_1 U_n - \frac{b_1}{b_2} \cdot O_2(t) \right) \le \frac{b_2}{b_1} \cdot \left( \sum_{n=1}^{N(t)} b_1 U_n - O_1(t) \right) = \frac{b_2}{b_1} \cdot c_1 \cdot \int_0^t \mathbb{1}_{\{\bar X_1(s) \ne X_1(s)\}}\, ds,$$
which completes the proof, as
$$\int_0^t \mathbb{1}_{\{\bar X_i(s) \ne X_i(s)\}}\, ds = t - \int_0^t \mathbb{1}_{\{\bar X_i(s) = X_i(s)\}}\, ds. \qquad \square$$
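The pathwise inequality $\frac{b_1}{b_2} O_2(t) \ge O_1(t)$ at the heart of this proof is easy to check numerically. The sketch below is our own illustration (function and variable names are not from the paper); the parameters are chosen to satisfy assumption (3), i.e., $c_1 \ge \frac{b_1}{b_2} c_2$. It computes both offsets exactly from the claim epochs, exploiting that the paths are linear and increasing between claims, so the running supremum can only be updated just before a claim or at the terminal time.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters satisfying (3): c1 * b2 >= b1 * c2
c1, c2, b1, b2, lam, gamma = 2.0, 4.0, 0.25, 0.75, 1.0, 0.25

def offsets_at(T):
    """Exact offsets O_i(T) = Xbar_i(T) - X_i(T) for X_i(t) = c_i*t - b_i*S(t),
    started in x = 0 and evaluated from the claim epochs of the compound
    Poisson process S."""
    t, S = 0.0, 0.0
    sup1 = sup2 = 0.0                      # running suprema, X_i(0) = 0
    while True:
        w = rng.exponential(1.0 / lam)     # next inter-claim time
        if t + w > T:
            break
        t += w
        # supremum candidates just before the claim hits
        sup1 = max(sup1, c1 * t - b1 * S)
        sup2 = max(sup2, c2 * t - b2 * S)
        S += rng.exponential(1.0 / gamma)  # claim size U ~ Exp(gamma)
    X1, X2 = c1 * T - b1 * S, c2 * T - b2 * S
    return max(sup1, X1) - X1, max(sup2, X2) - X2

O1, O2 = offsets_at(100.0)
# pathwise under assumption (3): (b1 / b2) * O2 >= O1
```

Repeating this over many horizons and seeds never produces a violation, in line with the deterministic, pathwise nature of the inequality.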
Now, we are ready to prove Theorem 2:
Proof of Theorem 2. 
We start by showing the statement under the slightly stronger assumption that $x \in D_1 \cap \mathbb{R}^2_{\ge 0}$. Let $L^0 = (L_1^0, L_2^0) \in \Pi_x$ be arbitrary. For any $t \ge 0$ we define a strategy $L^1 = (L_1^1, L_2^1)$ by
$$L_1^1(t) := \min\left\{ L_1^0(t),\; x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t + \frac{b_1}{b_2} \cdot L_2^0(t) \right\},$$
$$L_2^1(t) := L_2^*(t).$$
Note that $L_1^1$ is increasing by (3) and non-negative as $x \in D_1$. Moreover, admissibility of $L_1^1$ follows directly from admissibility of $L^0$. By definition, $L_2^1$ is admissible and it is clear that $X^{L^1}(t) \in D_1$ for all $t \ge 0$. Hence, it remains to show that indeed $V_{L^1}(x) \ge V_{L^0}(x)$. To accomplish this, we show
$$\tau^{L^0} \le \tau^{L^1} \quad \text{and} \quad L_1^1(t) + L_2^1(t) \ge L_1^0(t) + L_2^0(t), \quad t \ge 0.$$
By (42) it holds that
$$X_1^{L^1}(t) = x_1 + c_1 t - \sum_{n=1}^{N(t)} b_1 U_n - L_1^1(t) = \max\left\{ x_1 + c_1 t - \sum_{n=1}^{N(t)} b_1 U_n - L_1^0(t),\; \frac{b_1}{b_2} \left( x_2 + c_2 t - \sum_{n=1}^{N(t)} b_2 U_n - L_2^0(t) \right) \right\} = \max\left\{ X_1^{L^0}(t),\; \frac{b_1}{b_2} \cdot X_2^{L^0}(t) \right\}$$
and hence $\tau^{L^0} \le \tau^{L^1}$.
To show the second inequality in (44) we consider two separate cases. Let first $X^{L^0}(t) \in D_1$. Then $\frac{b_2}{b_1} X_1^{L^0}(t) \ge X_2^{L^0}(t)$, which is equivalent to
$$L_1^0(t) \le x_1 - \frac{b_1}{b_2} \cdot x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t + \frac{b_1}{b_2} \cdot L_2^0(t).$$
Hence, in this case the minimum in (42) is attained by $L_1^0(t)$. Moreover, by definition of $L_2^*(t)$ we have $L_2^1(t) \ge L_2^0(t)$ for any $t \ge 0$, and thus
$$L_1^1(t) + L_2^1(t) \ge L_1^0(t) + L_2^0(t).$$
If otherwise $X^{L^0}(t) \in D_2$, then analogously we have
$$L_2^0(t) \le x_2 - \frac{b_2}{b_1} \cdot x_1 + \left( c_2 - \frac{b_2}{b_1} \cdot c_1 \right) t + \frac{b_2}{b_1} \cdot L_1^0(t),$$
and the minimum in (42) is attained by the second term. We use (46) to show
$$L_1^0(t) + \left( 1 - \frac{b_1}{b_2} \right) L_2^0(t) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t \right) \le L_1^0(t) + \left( 1 - \frac{b_1}{b_2} \right) \cdot \left( x_2 - \frac{b_2}{b_1} \cdot x_1 + \left( c_2 - \frac{b_2}{b_1} \cdot c_1 \right) t + \frac{b_2}{b_1} \cdot L_1^0(t) \right) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t \right) = \frac{b_2}{b_1} \left( L_1^0(t) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t \right) \right).$$
Note that here we used $b_1 \le b_2$ to ensure that $1 - \frac{b_1}{b_2} \ge 0$.
Moreover, it holds that (see (38))
$$L_1^0(t) \le L_1^*(t) = x_1 + c_1 \cdot \int_0^t \mathbb{1}_{\{\bar X_1(s) = X_1(s)\}}\, ds.$$
With this and by (39) we have
$$\frac{b_2}{b_1} \left( L_1^0(t) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} c_2 \right) t \right) \right) \le \frac{b_2}{b_1} \left( x_1 + c_1 \int_0^t \mathbb{1}_{\{\bar X_1(s) = X_1(s)\}}\, ds - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} c_2 \right) t \right) \right) = \frac{b_2}{b_1} \cdot c_1 \cdot \left( \int_0^t \mathbb{1}_{\{\bar X_1(s) = X_1(s)\}}\, ds - t \right) + x_2 + c_2 t \le x_2 + c_2 \cdot \int_0^t \mathbb{1}_{\{\bar X_2(s) = X_2(s)\}}\, ds = L_2^*(t) = L_2^1(t),$$
since $x_2 \ge 0$. We combine (47) and (48) and obtain
$$L_2^1(t) \ge L_1^0(t) + \left( 1 - \frac{b_1}{b_2} \right) L_2^0(t) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t \right) = L_1^0(t) + L_2^0(t) - \left( x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t + \frac{b_1}{b_2} \cdot L_2^0(t) \right).$$
Finally, as the minimum in (42) is attained by the second term, this implies that
$$L_1^1(t) + L_2^1(t) = x_1 - \frac{b_1}{b_2} x_2 + \left( c_1 - \frac{b_1}{b_2} \cdot c_2 \right) t + \frac{b_1}{b_2} \cdot L_2^0(t) + L_2^1(t) \ge L_1^0(t) + L_2^0(t)$$
and hence the proof for $x \in D_1 \cap \mathbb{R}^2_{\ge 0}$ is finished. Note that the additional restriction $x_2 \ge 0$ can be dropped, since in the case of $x_2 < 0$, by admissibility, no dividends are paid by branch two until $X_2(t)$ reaches zero. Hence, any admissible strategy on the second branch coincides with the strategy $L_2^*$. Once the process $X_2$ reaches zero, we may apply the restricted result. □
Remark 4.
Note that the assumption $b_1 \le b_2$ in Theorem 2 does indeed impose a restriction on our model. As we assumed (3) throughout the whole paper, we cannot simply exchange the branches in order to obtain the case $b_2 < b_1$.
Unfortunately, even though $b_1 \le b_2$ is only used in (47), it is not possible to adapt the proof to obtain a similar result for the case $b_2 < b_1$, nor for $x \in D_2$; in the following we heuristically explain why.
The general idea of the proof is, given the arbitrary strategy $L^0$, to construct another strategy $L^1 \in \Pi_x^1$ that fulfills $\tau^{L^1} \ge \tau^{L^0}$ almost surely. (Actually, (42) and (43) ensure that (45) even implies $\tau^{L^1} = \tau^{L^0}$.)
This construction relies heavily on the assumptions (3) and $x \in D_1$. Otherwise $L_1^1(t)$ would not be admissible, as $c_1 - \frac{b_1}{b_2} c_2 < 0$ or $x_1 - \frac{b_1}{b_2} x_2 < 0$, respectively. Moreover, in order to fulfill the second inequality in (44), $b_1 \le b_2$ is essential. We illustrate this using a counterexample: Assume $b_1 > b_2$ and fix a strategy $L^0$ such that
  • at $t = 0$, the first branch pays the whole initial capital $x_1$ as a lump sum, while
  • the second branch does not make any dividend payments at $t = 0$.
Let x 1 , x 2 > 0 . Then, by (42) and (43) we have
$$L_1^1(0+) + L_2^1(0+) = x_1 - \frac{b_1}{b_2} x_2 + \frac{b_1}{b_2} \cdot L_2^0(0+) + x_2 = x_1 + \left( 1 - \frac{b_1}{b_2} \right) x_2 < x_1 = L_1^0(0+) + L_2^0(0+),$$
where $L_i^j(0+) := \lim_{t \downarrow 0} L_i^j(t)$, $j = 0, 1$, $i = 1, 2$. Hence, if $b_1 > b_2$, then the construction (42), which ensures that $\tau^{L^0} \le \tau^{L^1}$, does not allow for higher dividend payments in general.
The following corollary summarizes the important consequences of Theorems 1 and 2.
Corollary 1.
Let $b_1 \le b_2$. Then for any $x \in D_1$, the optimal strategy is of type $L^{*1}$.
If $x \in D_2$, then either the optimal strategy is of type $L^{*2}$, or the optimal strategy's controlled process enters $D_1$ with positive probability, in which case it is optimal to continue with a strategy of type $L^{*1}$.
As explained in Remark 4, our approach for proving the optimality of bang strategies fails in the case $b_2 < b_1$ as well as for $b_1 \le b_2$, $x \in D_2$. Even though in the latter case Corollary 1 does make a statement on the two possible behaviors of the optimal strategy, there remains an open question: If the optimal strategy's controlled process enters $D_1$ with positive probability, then the corollary does not make any statement on how the optimal strategy behaves until this event.
As both of these problems could possibly be solved using other techniques, we leave them as open questions for future research.
  • Value functions of bang strategies
Next, we study properties of the value functions of strategies of type $L^{*1}$ and $L^{*2}$. By symmetry, the two classes behave similarly, and we thus focus on the former.
For any $x = (x_1, x_2) \in S$ the value function $V_{L^{*1}}(x_1, x_2)$ of the strategy $L^{*1} = (L_1, L_2^*)$ can be expressed as
$$V_{L^{*1}}(x_1, x_2) = (x_2 \vee 0) + \mathbb{E}_{x_1, x_2}\left[ \int_0^{\tau^{L^{*1}}} e^{-qs}\, dL_1(s) + c_2 \cdot \int_0^{\tau^{L^{*1}}} \mathbb{1}_{\{\bar X_2(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_2(s) = X_2(s)\}} \cdot e^{-qs}\, ds \right].$$
Note that if $x_2 \ge 0$, then $\mathbb{1}_{\{\bar X_2(s) \ge 0\}} \equiv 1$ and hence the expression can be dropped from the second integral in (49). If the initial capital $x_1$ of branch one is negative, then we may characterize the value of strategy $L^{*1}$ explicitly in terms of $V_{L^{*1}}(0, 0)$:
Lemma 6.
For any $x_1 < 0$, $x_2 \ge 0$ and any admissible strategy $L_1$ on branch one it holds that
$$V_{L^{*1}}(x_1, x_2) = x_2 + \frac{c_2}{q + \lambda} + \left( V_{L^{*1}}(0, 0) - \frac{c_2}{q + \lambda} \right) \cdot e^{(q + \lambda) \cdot \frac{x_1}{c_1}}.$$
In particular, as $x_1 \to -\infty$, $V_{L^{*1}}$ is exponentially decreasing and
$$\lim_{x_1 \to -\infty} V_{L^{*1}}(x_1, x_2) = x_2 + \frac{c_2}{q + \lambda}.$$
The result holds analogously for $x_1 \ge 0$, $x_2 < 0$ and strategies of type $L^{*2}$.
Proof. 
Note that by construction branch two pays $x_2$ as a lump sum at the beginning and afterwards continuously pays dividends at rate $c_2$ whenever the controlled process is non-negative. This construction ensures that $X_2^{L_2^*}(t) \le 0$. Consequently, if the first claim at time $\tau_1$ happens before branch one becomes positive, i.e., if $\tau_1 \le t_0 := -\frac{x_1}{c_1}$, then the value of the strategy is exactly the discounted value of the dividends paid from branch two until the first claim. If the first claim happens after $t_0$, then the value of the strategy at $t_0$ is exactly $e^{-q t_0} \cdot V_{L^{*1}}(0, 0)$ plus the discounted dividends from branch two up to time $t_0$. As $\tau_1 \sim \mathrm{Exp}(\lambda)$ and due to the lack of memory of the exponential distribution, we conclude that for any $x_1 < 0$, $x_2 \ge 0$ it holds that
$$\begin{aligned} V_{L^{*1}}(x_1, x_2) &= x_2 + \mathbb{E}\left[ \mathbb{1}_{\{\tau_1 \le t_0\}} \int_0^{\tau_1} e^{-qs} c_2\, ds + \mathbb{1}_{\{\tau_1 > t_0\}} \left( e^{-q t_0}\, V_{L^{*1}}(0, 0) + \int_0^{t_0} e^{-qs} c_2\, ds \right) \right] \\ &= x_2 + \int_0^{t_0} \lambda e^{-\lambda t} \int_0^t e^{-qs} c_2\, ds\, dt + \mathbb{P}(\tau_1 > t_0) \cdot \left( e^{-q t_0} \cdot V_{L^{*1}}(0, 0) + \int_0^{t_0} e^{-qs} c_2\, ds \right) \\ &= x_2 + \frac{c_2}{q} \cdot \left( 1 - \frac{\lambda}{q + \lambda} \right) + \left( V_{L^{*1}}(0, 0) - \frac{c_2}{q} \cdot \left( 1 - \frac{\lambda}{\lambda + q} \right) \right) \cdot e^{(q + \lambda) \cdot \frac{x_1}{c_1}}. \qquad \square \end{aligned}$$
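Formula (50) is straightforward to evaluate numerically. The sketch below is our own illustration (not from the paper); the unknown value $V_{L^{*1}}(0, 0)$ is passed in as a free parameter `v00`, and the parameter values are the ones used later in Section 5.

```python
import math

q, lam, c1, c2 = 0.05, 1.0, 2.0, 4.0   # discount rate, claim rate, premia

def V_bang_neg_x1(x1, x2, v00):
    """Closed-form value (50) of the bang strategy for x1 < 0, x2 >= 0;
    v00 stands for the (unknown) value V_{L*1}(0, 0)."""
    assert x1 < 0 and x2 >= 0
    base = c2 / (q + lam)               # limiting value as x1 -> -infinity
    return x2 + base + (v00 - base) * math.exp((q + lam) * x1 / c1)
```

For very negative $x_1$ the exponential factor vanishes and the value collapses to $x_2 + c_2/(q+\lambda)$, independently of `v00`, while for $x_1 \uparrow 0$ it tends to $x_2 + V_{L^{*1}}(0,0)$, as the lemma predicts.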
Remark 5.
The asymptotics as $x_1 \to -\infty$ indicate that $L^{*1}$ is not the optimal strategy on $\mathbb{R}_{<0} \times \mathbb{R}_{\ge 0}$, because as long as the surplus of branch one is negative, $L^{*1}$ collapses to the trivial strategy that always pays the highest dividend possible. However, if the initial capital $x$ is in $\mathbb{R}^2_{\ge 0}$, then the process $X^{L^{*1}}$ never actually enters this set, as by construction of $L^{*1}$ the occurrence of $X_1^{L^{*1}}(t) < 0$ implies ruin.
The next lemma gives sufficient conditions for when $L^{*1}$ is preferable to $L^{*2}$ and vice versa.
Lemma 7.
Let $x_1 \ge 0$, $x_2 \in \mathbb{R}$. If $V_{L^{*1}}(x_1, x_2) \ge V_{L^{*2}}(x_1, x_2)$, then we have
$$V_{L^{*1}}(x_1 + h, x_2) \ge V_{L^{*2}}(x_1 + h, x_2) \quad \text{for all } h > 0.$$
If otherwise $x_2 \ge 0$, $x_1 \in \mathbb{R}$ and $V_{L^{*2}}(x_1, x_2) \ge V_{L^{*1}}(x_1, x_2)$, then
$$V_{L^{*2}}(x_1, x_2 + h) \ge V_{L^{*1}}(x_1, x_2 + h) \quad \text{for all } h > 0.$$
Proof. 
Assume that $V_{L^{*1}}(x_1, x_2) \ge V_{L^{*2}}(x_1, x_2)$ for some $x_1 \ge 0$, $x_2 \in \mathbb{R}$. Then obviously $V_{L^{*1}}(x_1 + h, x_2) \ge V_{L^{*1}}(x_1, x_2) + h$, as branch one can always pay a lump sum of size $h$. We conclude by construction of $L^{*2}$ that
$$V_{L^{*1}}(x_1 + h, x_2) \ge h + V_{L^{*1}}(x_1, x_2) \ge h + V_{L^{*2}}(x_1, x_2) = V_{L^{*2}}(x_1 + h, x_2).$$
The proof of the second case is completely analogous. □
  • The one-dimensional optimization problem
As the last step in this section, we formulate the one-dimensional optimization problem that arises for bang strategies. Let $\Pi_{x_i}$, $i = 1, 2$, be the set of all admissible strategies (in the classical univariate sense, see Azcue and Muler 2014, chp. 1.2) acting on branch $i$ with initial capital $x_i$. For technical reasons, we allow $x_i$ to be negative and extend the definition such that strategies are admissible if they do not pay any dividends while branch $i$ is negative. Define
$$V^{*1}(x_1, x_2) := (x_2 \vee 0) + \sup_{L_1 \in \Pi_{x_1}} \mathbb{E}_{x_1, x_2}\left[ \int_0^{\tau^{L_1}} e^{-qs}\, dL_1(s) + c_2 \cdot \int_0^{\tau^{L_1}} \mathbb{1}_{\{\bar X_2(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_2(s) = X_2(s)\}} \cdot e^{-qs}\, ds \right]$$
and
$$V^{*2}(x_1, x_2) := (x_1 \vee 0) + \sup_{L_2 \in \Pi_{x_2}} \mathbb{E}_{x_1, x_2}\left[ c_1 \cdot \int_0^{\tau^{L_2}} \mathbb{1}_{\{\bar X_1(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_1(s) = X_1(s)\}} \cdot e^{-qs}\, ds + \int_0^{\tau^{L_2}} e^{-qs}\, dL_2(s) \right].$$
Corollary 1 implies that if $b_1 \le b_2$ and $x \in D_1$, then the solution of (52) defines a solution to the original problem (7).
If on the other hand $b_1 \le b_2$ and $x \in D_2$, then either the optimal strategy can be defined through the solution of (53), or the optimal strategy ensures that the controlled process enters $D_1$ with positive probability, which brings us back to the first case. Hence, the maximum of the solutions to (52) and (53) is at least a good approximation to the solution of (7) if $x \in D_2$.
However, (52) and (53) are hard to solve explicitly, as $\tau^{L_1}$ (and $\tau^{L_2}$) and the expressions inside the indicator functions are strongly correlated, since they all depend on the path of the underlying claim process $S(t)$. An approach that finds approximate solutions to the problems in another subclass of the admissible strategies, using Monte Carlo simulations, is presented in the following section.

5. Approximation Approach and Simulation Study

Given the theoretical results of the preceding section, we want to approximately solve the problems (52) and (53). As the approach is identical in both cases, we only discuss the former.
Assume that $x_2 \ge 0$, while $x_1 \ge 0$. We define the random process
$$\Lambda(x_2, s) := c_2 \cdot \mathbb{1}_{\{\bar X_2(s) \ge 0\}} \cdot \mathbb{1}_{\{\bar X_2(s) = X_2(s)\}}, \quad s \ge 0,$$
such that we have (see (49))
$$V_{L^{*1}}(x_1, x_2) = \mathbb{E}_{x_1, x_2}\left[ \int_0^{\tau^{L_1}} e^{-qs}\, dL_1(s) + \int_0^{\tau^{L_1}} \Lambda(x_2, s) \cdot e^{-qs}\, ds \right].$$
The function $\Lambda$ depends explicitly on the initial value $x_2$ and the time $s$, and implicitly on the random path of the claim process $S$. Let $\omega \in \Omega$. Then $\Lambda(x_2, s)(\omega)$ is monotonically increasing with respect to $x_2$ and for any $s \ge 0$ it holds that
$$c_2 \;\ge\; \Lambda(x_2, s) \;\xrightarrow{\;x_2 \to -\infty\;}\; 0$$
almost surely. Equation (52) may be reformulated as
$$\tilde V^{*1}(x_1, x_2) = \sup_{L_1 \in \Pi_{x_1}} V_{L^{*1}}(x_1, x_2).$$
As mentioned before, due to the complicated dependency structure between $\Lambda$ and $X_1(t)$, this problem seems impossible to solve exactly and explicitly. However, similar problems have been treated before, e.g., in Thonhauser and Albrecher (2007), where the authors consider the problem (54) with $\Lambda(x_2, s)(\omega) \equiv \Lambda$ constant, i.e., (see Thonhauser and Albrecher 2007, eq. (2)) a problem of type
$$\tilde V^1(x_1) := \sup_{L_1 \in \Pi_{x_1}} \mathbb{E}_{x_1}\left[ \int_0^{\tau^{L_1}} e^{-qs}\, dL_1(s) + \int_0^{\tau^{L_1}} \Lambda \cdot e^{-qs}\, ds \right].$$
It is shown that, in the case of unbounded dividend payments, the corresponding HJB equation is given by (cf. Thonhauser and Albrecher 2007, Eq. 36)
$$\max\left\{ \Lambda + c_1 \big(\tilde V^1\big)'(x) + \lambda \int_0^x \tilde V^1(x - y)\, dF_1(y) - (q + \lambda)\, \tilde V^1(x),\; 1 - \big(\tilde V^1\big)'(x) \right\} = 0,$$
where F 1 is the cdf of the claims affecting branch one, i.e., F 1 ( y ) = F ( y / b 1 ) . Moreover, it turns out that in the case of exponential claims, the optimal strategy is a barrier strategy, cf. (Thonhauser and Albrecher 2007, Prop. 11).
In the remainder of this section we therefore restrict ourselves to barrier strategies as well. We say that an admissible strategy $L_1$ is of barrier type if there exists a fixed surplus level $x_1^b$ such that
  • $L_1$ does not pay any dividends if $X_1^{L_1}(t) < x_1^b$,
  • $L_1$ pays dividends continuously at rate $c_1$ while $X_1^{L_1}(t) = x_1^b$,
  • $L_1$ pays a lump sum of size $X_1^{L_1}(t) - x_1^b$ as dividends if $X_1^{L_1}(t) > x_1^b$.
The set of all barrier strategies acting on branch one with initial capital $x_1$ is denoted by $\Pi_{x_1}^b$. We consider
$$\tilde V^{1, b}(x_1, x_2) := \sup_{L_1 \in \Pi_{x_1}^b} \mathbb{E}_{x_1, x_2}\left[ \int_0^{\tau^{L_1}} e^{-qs}\, dL_1(s) + \int_0^{\tau^{L_1}} \Lambda(x_2, s) \cdot e^{-qs}\, ds \right],$$
which is a slight modification of (52) (see also (54)) and solve it using Monte Carlo techniques.
In order to apply the results from Thonhauser and Albrecher (2007), we assume in the following that the claims $U_i$ of the driving compound Poisson process $S(t)$ are exponentially distributed. Note that for modeling claim sizes there are more realistic distributions, such as the log-normal or the generalized Pareto distribution. However, the exponential distribution is a popular choice for claim sizes in Cramér–Lundberg risk models, as this case is particularly tractable: for example, ruin probabilities and the distribution of the deficit at the time of ruin can be expressed in closed form, see Asmussen and Albrecher (2010).
Let $U_i \sim \mathrm{Exp}(\gamma)$, $\gamma > 0$, such that
$$b_j U_i \sim \mathrm{Exp}(\gamma / b_j), \quad j = 1, 2.$$
By (Thonhauser and Albrecher 2007, Prop. 11) the barrier $x_1^*$ corresponding to the solution of (55) fulfills
$$x_1^* = \frac{1}{R_1 - R_2} \cdot \log\left( \frac{R_2^2 \cdot (\gamma / b_1 + R_2) \cdot \big( \gamma / b_1 \cdot q + e^{R_1 x_1^*} (\gamma / b_1 + R_1) \cdot \Lambda \cdot R_1 \big)}{R_1^2 \cdot (\gamma / b_1 + R_1) \cdot \big( \gamma / b_1 \cdot q + e^{R_2 x_1^*} (\gamma / b_1 + R_2) \cdot \Lambda \cdot R_2 \big)} \right),$$
where R 1 , R 2 are the roots of the polynomial
$$P(R) = c_1 R^2 + \left( \frac{\gamma}{b_1} \cdot c_1 - (q + \lambda) \right) \cdot R - \frac{\gamma}{b_1} \cdot q$$
such that $R_2 < 0 < R_1$. Hence, as $0 \le \Lambda(x_2, s) \le c_2$, the barrier $x_1^b$ corresponding to the solution of (56) is likely to be contained in the set of solutions of (57) for $\Lambda \in [0, c_2]$.
For our numerical study, we consider model (2) with parameters $\gamma = 0.25$, $\lambda = 1$, $c_1 = 2$, $c_2 = 4$, $b_1 = 0.25$, $b_2 = 0.75$ and $q = 0.05$, such that in particular assumption (3) is fulfilled. Solving (57) for these parameters and $\Lambda \in [0, c_2]$ yields $x_1^b \in [7.00464, 10.7136]$. Similar to the previous reasoning, we can consider $L^{*2}$ and approximate the corresponding optimal strategy in branch two by a barrier strategy. In this case we obtain for the optimal barrier $x_2^b \in [10.8148, 23.7285]$.
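The interval endpoints above can be reproduced by iterating the fixed-point Equation (57). The following sketch is our own (function and variable names are not from the paper); it computes the roots $R_1, R_2$ of $P$ and then iterates (57), where the starting value `x0 = 20.0` is chosen large enough that the argument of the logarithm stays positive for all four parameter combinations.

```python
import math

q, lam, gamma = 0.05, 1.0, 0.25
c1, c2, b1, b2 = 2.0, 4.0, 0.25, 0.75

def barrier(c, b, Lam, x0=20.0, n_iter=100):
    """Fixed-point iteration for the optimal barrier in (57), for a branch
    with premium rate c and claim scale b, i.e. claims b*U ~ Exp(gamma/b)."""
    m = gamma / b
    # roots R2 < 0 < R1 of P(R) = c R^2 + (m c - (q + lam)) R - m q
    p, r = (m * c - (q + lam)) / c, -m * q / c
    disc = math.sqrt(p * p - 4.0 * r)
    R1, R2 = (-p + disc) / 2.0, (-p - disc) / 2.0
    x = x0
    for _ in range(n_iter):
        num = R2**2 * (m + R2) * (m * q + math.exp(R1 * x) * (m + R1) * Lam * R1)
        den = R1**2 * (m + R1) * (m * q + math.exp(R2 * x) * (m + R2) * Lam * R2)
        x = math.log(num / den) / (R1 - R2)
    return x

print(barrier(c1, b1, 0.0))   # ~ 7.0046   (branch one, Lam = 0)
print(barrier(c1, b1, c2))    # ~ 10.714   (branch one, Lam = c2)
print(barrier(c2, b2, 0.0))   # ~ 10.815   (branch two, Lam = 0)
print(barrier(c2, b2, c1))    # ~ 23.729   (branch two, Lam = c1)
```

The four outputs recover the endpoints $[7.00464, 10.7136]$ and $[10.8148, 23.7285]$ reported above; for $\Lambda = 0$ the right-hand side of (57) does not depend on $x_1^*$, so the iteration terminates after a single step.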
Figure 2 shows the estimated discounted dividend values of the strategies $L^{*1} = (L_1^b, L_2^*)$ and $L^{*2} = (L_1^*, L_2^b)$ with respect to different barriers $x_1^b$, $x_2^b$ and for an initial capital of $x = (25, 25)$, such that we always start with lump-sum payments in both branches. As expected, in both cases the optimal barrier for our model lies strictly between the optimal barriers for problem (55) with $\Lambda = 0$ and $\Lambda = c_2$ ($c_1$ for the case of $L^{*2}$).
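A Monte Carlo estimate of the kind underlying Figure 2 can be sketched as follows (our own implementation, not the authors' code; the path is simulated exactly at claim epochs and the horizon is truncated at `t_max`, where the discount factor is negligible). Branch one follows a barrier strategy at level `xb`, branch two the bang strategy; the run ends at the first claim that pushes branch one below zero, which by construction is the time of simultaneous ruin.

```python
import numpy as np

rng = np.random.default_rng(0)
q, lam, gamma = 0.05, 1.0, 0.25
c1, c2, b1, b2 = 2.0, 4.0, 0.25, 0.75

def disc_div(x1, x2, xb, n_paths=2000, t_max=200.0):
    """Estimated expected discounted dividends of (L1 = barrier at xb, L2 = bang)."""
    total = 0.0
    for _ in range(n_paths):
        t, y, z, val = 0.0, x1, x2, 0.0
        # initial lump sums of both branches
        if y > xb:
            val += y - xb
            y = xb
        if z > 0.0:
            val += z
            z = 0.0
        while t < t_max:
            w = min(rng.exponential(1.0 / lam), t_max - t)   # next claim epoch
            s1 = max((xb - y) / c1, 0.0)   # branch one reaches the barrier
            s2 = max(-z / c2, 0.0)         # branch two climbs back to zero
            if w > s1:                     # continuous dividends at rate c1
                val += c1 / q * (np.exp(-q * (t + s1)) - np.exp(-q * (t + w)))
            if w > s2:                     # continuous dividends at rate c2
                val += c2 / q * (np.exp(-q * (t + s2)) - np.exp(-q * (t + w)))
            t += w
            if t >= t_max:
                break
            u = rng.exponential(1.0 / gamma)       # claim size U ~ Exp(gamma)
            y = min(xb, y + c1 * w) - b1 * u
            z = min(0.0, z + c2 * w) - b2 * u
            if y < 0.0:                            # simultaneous ruin
                break
        total += val
    return total / n_paths
```

Scanning `xb` over a grid of candidate barriers and taking the maximizer reproduces the shape of the curves in Figure 2, up to Monte Carlo error.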
Figure 3 clearly shows that for our example $L^{*1}$ is the preferable choice over $L^{*2}$ for all initial values $x \in \mathbb{R}^2_{\ge 0}$. Only on a subset of $(-\infty, 0) \times (0, \infty)$ does it yield a smaller expected value of the dividends, as already predicted by Lemma 6. However, a global statement that $L^{*1}$ is better than $L^{*2}$ (or vice versa) is not true in general, as our last example indicates:
Example 1.
Consider a model where $b_1 = b_2$ and $c_1 = c_2$. Then obviously claims happen along the line $x_1 = x_2$ and, if $X_1(0) = X_2(0)$, we have $X_1(t) = X_2(t)$ for all $t \ge 0$. By construction, we have $V^{*1}(x, x) = V^{*2}(x, x)$ for all $x \ge 0$, and consequently Lemma 7 implies that $V^{*1}(x) \ge V^{*2}(x)$ on $D_1$, while $V^{*2}(x) \ge V^{*1}(x)$ on $D_2$. This shows that a general statement on whether $L^{*1}$ or $L^{*2}$ is better for all $x \in \mathbb{R}^2_{\ge 0}$ cannot hold.

Author Contributions

Conceptualization, P.L.S. and H.E.H.; methodology, P.L.S. and H.E.H.; software, P.L.S.; validation, P.L.S. and H.E.H.; formal analysis, P.L.S. and H.E.H.; investigation, P.L.S. and H.E.H.; resources, P.L.S.; data curation, P.L.S.; writing—original draft preparation, P.L.S. and H.E.H.; writing—review and editing, P.L.S. and H.E.H.; visualization, P.L.S.; project administration, P.L.S. and H.E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

We gratefully acknowledge the support of our supervisor Anita Behme and her valuable comments on earlier versions of the present manuscript. Moreover, we thank three referees for suggestions and comments that helped improving this manuscript. We acknowledge the GWK support for funding this project by providing computing time through the Center for Information Services and HPC (ZIH) at TU Dresden.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Albrecher, Hansjörg, and Stefan Thonhauser. 2009. Optimality results for dividend problems in insurance. Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas 103: 295–320. [Google Scholar] [CrossRef] [Green Version]
  2. Asmussen, Søren, and Hansjörg Albrecher. 2010. Ruin Probabilities, 2nd ed. Singapore: World Scientific. [Google Scholar]
  3. Avanzi, Benjamin. 2009. Strategies for dividend distribution: A review. North American Actuarial Journal 13: 217–51. [Google Scholar] [CrossRef]
  4. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2007. On the optimal dividend problem for a spectrally negative Lévy process. The Annals of Applied Probability 17: 156–80. [Google Scholar] [CrossRef]
  5. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2008a. A two-dimensional ruin problem on the positive quadrant. Insurance: Mathematics and Economics 42: 227–34. [Google Scholar] [CrossRef] [Green Version]
  6. Avram, Florin, Zbigniew Palmowski, and Martijn R. Pistorius. 2008b. Exit problem of a two-dimensional risk process from the quadrant: Exact and asymptotic results. The Annals of Applied Probability 18: 2421–49. [Google Scholar] [CrossRef]
  7. Azcue, Pablo, and Nora Muler. 2005. Optimal reinsurance and dividend distribution policies in the cramér-lundberg model. Mathematical Finance 15: 261–308. [Google Scholar] [CrossRef]
  8. Azcue, Pablo, and Nora Muler. 2010. Optimal investment policy and dividend payment strategy in an insurance company. The Annals of Applied Probability 20: 1253–302. [Google Scholar] [CrossRef] [Green Version]
  9. Azcue, Pablo, and Nora Muler. 2012. Optimal dividend policies for compound poisson processes: The case of bounded dividend rates. Insurance: Mathematics and Economics 51: 26–42. [Google Scholar] [CrossRef]
  10. Azcue, Pablo, and Nora Muler. 2014. Stochastic Optimization in Insurance: A Dynamic Programming Approach. Berlin and Heidelberg: Springer. [Google Scholar]
  11. Azcue, Pablo, Nora Muler, and Zbigniew Palmowski. 2018. Optimal dividend payments for a two-dimensional insurance risk process. European Actuarial Journal 9: 241–72. [Google Scholar] [CrossRef] [Green Version]
  12. Badescu, Andrei L., Eric C. K. Cheung, and Landy Rabehasaina. 2011. A two-dimensional risk model with proportional reinsurance. Journal of Applied Probability 48: 749–65. [Google Scholar] [CrossRef] [Green Version]
  13. Czarna, Irmina, and Zbigniew Palmowski. 2011. De Finetti’s dividend problem and impulse control for a two-dimensional insurance risk process. Stochastic Models 27: 220–50. [Google Scholar] [CrossRef]
  14. De Finetti, Bruno. 1957. Su un’impostazione alternativa della teoria collettiva del rischio. Transactions of the XVth International Congress of Actuaries 2: 433–43. [Google Scholar]
  15. Gerber, Hans U. 1969. Entscheidungskriterien für den zusammengesetzten Poisson-Prozess. Mitteilungen/Vereinigung Schweizerischer Versicherungsmathematiker 69: 185–228. [Google Scholar]
  16. Hu, Zechun, and Bin Jiang. 2013. On joint ruin probabilities of a two-dimensional risk model with constant interest rate. Journal of Applied Probability 50: 309–22. [Google Scholar] [CrossRef] [Green Version]
  17. Loeffen, Ronnie L. 2008. On optimality of the barrier strategy in de Finetti’s dividend problem for spectrally negative Lévy processes. The Annals of Applied Probability 18: 1669–80. [Google Scholar] [CrossRef]
  18. Palmowski, Zbigniew, Sergey Foss, Dmitry Korshunov, and Tomasz Rolski. 2018. Two-dimensional ruin probability for subexponential claim size. Probability and Mathematical Statistics 37: 319–35. [Google Scholar] [CrossRef] [Green Version]
  19. Rolewicz, Stefan. 1987. Functional Analysis and Control Theory. Berlin and Heidelberg: Springer. [Google Scholar]
  20. Schmidli, Hanspeter. 2008. Stochastic Control in Insurance. Berlin and Heidelberg: Springer. [Google Scholar]
  21. Thonhauser, Stefan, and Hansjörg Albrecher. 2007. Dividend maximization under consideration of the time value of ruin. Insurance: Mathematics and Economics 41: 163–84. [Google Scholar] [CrossRef] [Green Version]
  22. Zhu, Hang. 1992. Dynamic Programming and Variational Inequalities in Singular Stochastic Control. Ph.D. thesis, Brown University, Providence, RI, USA. [Google Scholar]
Figure 1. Visualization of the sets D 1 , D 2 and a sample path of X .
Figure 2. Expected discounted dividend payments of strategy L * 1 = ( L 1 b , L 2 * ) and L * 2 = ( L 1 * , L 2 b ) in dependence of the barriers x 1 b , x 2 b chosen for strategies L 1 b , L 2 b .
Figure 3. Approximated value functions V L * 1 , V L * 2 with the optimal barriers x 1 b = 8.0 , x 2 b = 18.35 with respect to different initial values x 0 from several perspectives.