Occupancy time in sets of states for demographic models

As an individual moves through its life cycle, it passes through a series of states (age classes, size classes, reproductive states, spatial locations, health statuses, etc.) before its eventual death. The occupancy time in a state is the time spent in that state over the individual's life. Depending on the life cycle description, the occupancy times describe different demographic variables, for example, lifetime breeding success, lifetime habitat utilisation, or healthy longevity. Models based on absorbing Markov chains provide a powerful framework for the analysis of occupancy times. Current theory, however, can completely analyse only the occupancy of single states, although the occupancy time in a set of states is often desired. For example, a range of sizes in a size-classified model, an age class in an age×stage model, and a group of locations in a spatial stage model are all sets of states. We present a new mathematical approach to absorbing Markov chains that extends the analysis of life histories by providing a comprehensive theory for the occupancy of arbitrary sets of states, and for other demographic variables related to these sets (e.g., reaching time, return time). We apply this approach to a matrix population model of the Southern Fulmar (Fulmarus glacialoides). The analysis of this model provides interesting insight into the lifetime number of breeding attempts of this species. Our new approach to absorbing Markov chains, and its implementation in matrix-oriented software, makes the analysis of occupancy times more accessible to population ecologists, and directly applicable to any matrix population model.
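As a concrete illustration of the kind of computation the framework supports, the sketch below builds a small hypothetical transient transition matrix U (column-to-row orientation, as is usual for matrix population models) and computes the expected occupancy time in a set of states B as the column sums of the fundamental matrix restricted to the rows in B. The matrix entries and the set B are invented for illustration only; they do not come from the Southern Fulmar model.

    import numpy as np

    # Hypothetical transient transition matrix U for four transient states.
    # Column j holds the probabilities of moving from state j to each row
    # state in one time step; column sums are below one because the
    # remainder is mortality (the absorbing state).
    U = np.array([
        [0.1, 0.0, 0.0, 0.0],
        [0.6, 0.2, 0.1, 0.0],
        [0.0, 0.5, 0.3, 0.2],
        [0.0, 0.1, 0.4, 0.5],
    ])

    # Fundamental matrix: N[j, i] is the expected number of time steps an
    # individual starting in state i spends in state j before absorption.
    N = np.linalg.inv(np.eye(U.shape[0]) - U)

    # Expected occupancy time in a set of states B (here states 2 and 3,
    # zero-based): sum the rows of N belonging to B, one value per
    # starting state.
    B = [2, 3]
    print(N[B, :].sum(axis=0))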

… enters $B$ for the first time. Then we can rewrite $u^S_{j-\alpha,\, i-\alpha}$ as
$$u^S_{j-\alpha,\, i-\alpha} = P_i\left(X_T = j\right). \qquad (61)$$
By definition of the absorbing probabilities (eqn. 11), for a non-target state $k \in B^c$, we have
$$a_{j-\alpha,\, k} = P_k\left(X_T = j\right).$$
Using the Chapman-Kolmogorov equation (see, e.g., Meyn and Tweedie [2009]), we obtain the desired result, where the matrices $L$ and $Q$ are extracted from the matrix $U$, as in equation (7).
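The first-step (Chapman-Kolmogorov) argument can be carried out numerically. The sketch below computes the matrix of absorbing probabilities $a_{j-\alpha,\,k} = P_k(X_T = j)$ from a partition of $U$ into target and non-target blocks. The block names U_cc and U_tc are neutral placeholders introduced here for illustration; how they correspond to the paper's $L$ and $Q$ depends on equation (7), which is not reproduced in this excerpt.

    import numpy as np

    def absorbing_probabilities(U, target):
        """Matrix A with A[j, k] = probability that, starting from the
        non-target state k, the chain reaches the target set B and does so
        first at target state j (column-to-row orientation assumed)."""
        s = U.shape[0]
        non_target = [i for i in range(s) if i not in target]
        U_cc = U[np.ix_(non_target, non_target)]   # non-target -> non-target
        U_tc = U[np.ix_(target, non_target)]       # non-target -> target
        # First-step decomposition A = U_tc + A @ U_cc, hence
        # A = U_tc @ (I - U_cc)^{-1}.
        return U_tc @ np.linalg.inv(np.eye(len(non_target)) - U_cc)

With the hypothetical U and B of the previous sketch, absorbing_probabilities(U, [2, 3]) gives, for each non-target starting state, the probability that the first target state reached is state 2 or state 3; each column sums to at most one, the deficit being the probability of dying before ever reaching B.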

A.2 Proof that $X^C$ is a Markov chain

Iosifescu [1980] (section 3.2.9) proves that an absorbing Markov chain, conditioned on being absorbed by a specific state, is still an absorbing Markov chain. Here, we generalise this statement to conditioning on absorption in a specific set of states.

Let us define the event $A = \{X^K \text{ is absorbed in the target set } B\}$, i.e. the killed chain is absorbed in the target set. We consider the stochastic process $X^C$, living on the space $T$, defined by
$$P\left(X^C_t \in B\right) = P\left(X^K_t \in B \mid A\right)$$
for any measurable set $B \subset S$. To ease the notation, we write $P\left(X^K_t \in B \mid A\right) = P_A\left(X^K_t \in B\right)$.

By definition, the process $X^C$ corresponds to the killed Markov chain in which trajectories encountering death before the target states are set aside. We first prove that $X^C$ is a Markov chain, and then we show that its transition probabilities are described by the matrix $P^C$ defined in Section 3.2. As a consequence, this proves that the conditional Markov chain is indeed a Markov chain, and that it corresponds to the killed Markov chain with trajectories encountering death before the target states set aside.

To prove that $X^C$ is a Markov chain, we only need to show that it satisfies the Markov property, i.e.
$$P\left(X^C_{t+1} = i_{t+1} \mid X^C_t = i_t, \ldots, X^C_0 = i_0\right) = P\left(X^C_{t+1} = i_{t+1} \mid X^C_t = i_t\right).$$

Fix $(i_0, \ldots, i_{t+1}) \in T^{t+2}$, and define the associated event. From the definitions of conditional probabilities and of the process $X^C$, we obtain equation (67), which holds for any $0 \le s \le t$. The second equality in (67) is a consequence of the Markov property of the killed Markov chain, and the third equality follows from the definition of the absorbing probability vector $p_a$ (see eqn 12).

Similarly, we obtain the analogous expression, where $I_B(k)$ equals 1 if $k \in B$ and 0 otherwise.
Equations (72) and (75) imply that the ratio on the right-hand side of equation (67) does not depend on $s$, for $0 \le s \le t$. In particular, this proves that $X^C$ satisfies the Markov property.

The transition probabilities of the Markov chain $X^C$ follow from the equations above. For $j \in T$ and $i \notin B$ they are given by equation (78), with the convention that $p_{a,k} = 1$ for $k \in B$; for $i, j \in B$ they are given by equation (79).

It follows from equations (78) and (79) that the transition probabilities of the Markov chain $X^C$ are given by the matrix $P^C$ defined in Section 3.2.
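To make this concrete, the following sketch assembles a candidate $P^C$ by the conditioning rule suggested above (a Doob h-transform): each non-target column of the killed chain is reweighted by $p_{a,j}/p_{a,i}$, with $p_a$ set to 1 on the target set, and the target states are treated as absorbing. It is a sketch under those stated assumptions (column-to-row orientation, mortality as the absorbing fate), not a reproduction of the paper's definition of $P^C$ in Section 3.2; the function and block names are illustrative.

    import numpy as np

    def conditional_chain(U, target):
        """Candidate transition matrix of the killed chain conditioned on
        reaching the target set before death (column-to-row orientation)."""
        s = U.shape[0]
        non_target = [i for i in range(s) if i not in target]
        U_cc = U[np.ix_(non_target, non_target)]   # non-target -> non-target
        U_tc = U[np.ix_(target, non_target)]       # non-target -> target

        # p_a[i]: probability of reaching the target set before death from i,
        # with the convention p_a = 1 on the target set itself.
        # (Assumes every non-target state can reach the target set.)
        p_a = np.ones(s)
        p_a[non_target] = np.linalg.solve(
            (np.eye(len(non_target)) - U_cc).T, U_tc.sum(axis=0)
        )

        # Killed chain: the target states are treated as absorbing.
        U_K = U.copy()
        U_K[:, target] = 0.0
        U_K[target, target] = 1.0

        # Doob h-transform of the non-target columns:
        # P_C[j, i] = p_a[j] * U_K[j, i] / p_a[i].
        P_C = U_K * p_a[:, None]
        P_C[:, non_target] /= p_a[non_target]
        return P_C

Applied to the hypothetical U and B = {2, 3} of the first sketch, every column of the result sums to one, because conditioning on reaching B removes the mortality deficit of U.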
where $a_{ji}$ is the probability that the killed Markov chain, starting in state $i$, is absorbed by the state $j$, as in Section 3.1.1. Equation (82) can equivalently be written in matrix notation.

A.4 Covariance between the occupancy times in two disjoint sets
Here, we calculate the covariance between the occupancy times in two disjoint subsets $B_1$ and $B_2$ of the transient set $T$, deriving the expression stated in the main text. We rewrite the covariance between $\tau_{B_1}$ and $\tau_{B_2}$ in terms of their variances and the variance of their sum,
$$\mathrm{Cov}\left(\tau_{B_1}, \tau_{B_2}\right) = \tfrac{1}{2}\left[\mathrm{Var}\left(\tau_{B_1} + \tau_{B_2}\right) - \mathrm{Var}\left(\tau_{B_1}\right) - \mathrm{Var}\left(\tau_{B_2}\right)\right].$$

Since the sets $B_1$ and $B_2$ are disjoint, the occupancy time in the union $B_1 \cup B_2$ is the sum of the occupancy times in each of the subsets, $\tau_{B_1 \cup B_2} = \tau_{B_1} + \tau_{B_2}$. Thus,
$$\mathrm{Cov}\left(\tau_{B_1}, \tau_{B_2}\right) = \tfrac{1}{2}\left[\mathrm{Var}\left(\tau_{B_1 \cup B_2}\right) - \mathrm{Var}\left(\tau_{B_1}\right) - \mathrm{Var}\left(\tau_{B_2}\right)\right].$$
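This identity is easy to check by simulation. The sketch below simulates independent lives under a small hypothetical transient matrix (column-to-row orientation, death as the absorbing fate), records the occupancy times of two disjoint sets, and compares the sample covariance with half the excess of the variance of the sum over the individual variances. All matrices, sets, and sample sizes are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical transient transition matrix (columns sum to < 1; the
    # deficit is the per-step probability of death).
    U = np.array([
        [0.1, 0.0, 0.0, 0.0],
        [0.6, 0.2, 0.1, 0.0],
        [0.0, 0.5, 0.3, 0.2],
        [0.0, 0.1, 0.4, 0.5],
    ])
    s = U.shape[0]
    B1, B2 = {1}, {2, 3}               # two disjoint sets of transient states

    def simulate_occupancy(start, n_reps=20_000):
        """Occupancy times in B1 and B2, one row per simulated life."""
        out = np.zeros((n_reps, 2))
        for r in range(n_reps):
            state = start
            while state is not None:
                out[r, 0] += state in B1
                out[r, 1] += state in B2
                p = U[:, state]
                # Next state, or death with the remaining probability.
                nxt = rng.choice(s + 1, p=np.append(p, 1.0 - p.sum()))
                state = None if nxt == s else int(nxt)
        return out

    tau = simulate_occupancy(start=0)
    C = np.cov(tau.T)                              # 2 x 2 sample covariance
    direct = C[0, 1]
    via_identity = 0.5 * (np.var(tau.sum(axis=1), ddof=1) - C[0, 0] - C[1, 1])
    print(direct, via_identity)   # identical up to floating-point error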

Let $w^{\mathrm{in}}_{ij}$ be the conditional probability that an individual in target state $\alpha + j$ moves to the target state $\alpha + i$, in one time-step, given that it eventually returns to the target set. Then
$$w^{\mathrm{in}}_{ji} := P_{\alpha+i}\left(X_1 = \alpha + j \mid T < \infty\right) = \frac{P_{\alpha+i}\left(X_1 = \alpha + j,\; T < \infty\right)}{P_{\alpha+i}\left(T < \infty\right)},$$
where $p_r$ describes the return probabilities, $p_{r,i} = P_{\alpha+i}(T < \infty)$, as defined in equation (47). In matrix notation, this yields $W^{\mathrm{in}}$ in terms of $D_r = \operatorname{diag}\left(p_r\right)$ and the matrix $Q$ extracted from the matrix $U$, as in equation (7).

Let $w^{\mathrm{out}}_{ij}$ be the conditional probability that an individual in target state $\alpha + j$ moves to the non-target state $i$, in one time-step, given that it eventually returns to the target set. Then
$$w^{\mathrm{out}}_{ji} := P_{\alpha+i}\left(X_1 = j \mid T < \infty\right) = \frac{P_{\alpha+i}\left(T < \infty \mid X_1 = j\right)\, P_{\alpha+i}\left(X_1 = j\right)}{P_{\alpha+i}\left(T < \infty\right)},$$
where the vector $p_a$ describes the probabilities of absorption in the target states, as defined in eqn. (11), so that $P_{\alpha+i}\left(T < \infty \mid X_1 = j\right) = p_{a,j}$ for a non-target state $j$. Thus, $W^{\mathrm{out}}$ can be written in matrix form,

where $D_r = \operatorname{diag}\left(p_r\right)$, $D_a = \operatorname{diag}\left(p_a\right)$, and the matrix $L$ is extracted from the matrix $U$, as in equation (7).
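Both conditional one-step matrices can be assembled numerically from the blocks of $U$. The sketch below does so under explicit assumptions: $U$ is column-to-row oriented; the within-target block plays the role of the paper's $Q$ and the target-to-non-target block the role of its $L$ (the exact layout is fixed by the paper's equation (7), which is not reproduced in this excerpt); the return time is the first step $t \ge 1$ at which the chain is again in $B$; and $p_a$, $p_r$ are the absorption and return probabilities discussed above. Variable names are illustrative.

    import numpy as np

    def conditional_one_step_matrices(U, target):
        """Return (W_in, W_out) for the chain conditioned on returning to B.

        Assumes column-to-row orientation, mortality as the absorbing fate,
        and a positive return probability from every target state.
        """
        s = U.shape[0]
        non_target = [i for i in range(s) if i not in target]
        Q = U[np.ix_(target, target)]            # target -> target, one step
        L = U[np.ix_(non_target, target)]        # target -> non-target, one step
        U_cc = U[np.ix_(non_target, non_target)]
        U_tc = U[np.ix_(target, non_target)]

        # p_a: probability of reaching B before death, from each non-target state.
        p_a = np.linalg.solve((np.eye(len(non_target)) - U_cc).T, U_tc.sum(axis=0))
        # p_r: probability of returning to B from each target state
        # (step directly into B, or step out and later reach B).
        p_r = Q.sum(axis=0) + p_a @ L

        W_in = Q / p_r                        # column i divided by p_r[i]
        W_out = (p_a[:, None] * L) / p_r
        return W_in, W_out

For each target state, the columns of W_in and W_out together sum to one: conditional on returning, the first step either stays inside the target set or leaves it.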

Now, we derive the moments of µ, conditional on the individual returning to the target set.

Let $\alpha + i$ be a target state. Then the conditional moments follow, and can be written in matrix notation.