Limit theorems for Parrondo's paradox

That there exist two losing games that can be combined, either by random mixture or by nonrandom alternation, to form a winning game is known as Parrondo's paradox. We establish a strong law of large numbers and a central limit theorem for the Parrondo player's sequence of profits, both in a one-parameter family of capital-dependent games and in a two-parameter family of history-dependent games, with the potentially winning game being either a random mixture or a nonrandom pattern of the two losing games. We derive formulas for the mean and variance parameters of the central limit theorem in nearly all such scenarios; formulas for the mean permit an analysis of when the Parrondo effect is present.


Introduction
The original Parrondo (1996) games can be described as follows: Let

p := 1/2 − ε,   p_0 := 1/10 − ε,   p_1 := 3/4 − ε,   (1)

where ε > 0 is a small bias parameter (less than 1/10, of course). In game A, the player tosses a p-coin (i.e., p is the probability of heads). In game B, if the player's current capital is divisible by 3, he tosses a p_0-coin; otherwise he tosses a p_1-coin. (Assume initial capital is 0 for simplicity.) In both games, the player wins one unit with heads and loses one unit with tails. It can be shown that games A and B are both losing games, regardless of ε, whereas the random mixture C := (1/2)A + (1/2)B (toss a fair coin to determine which game to play) is a winning game for ε sufficiently small. Furthermore, certain nonrandom patterns, including AAB, ABB, and AABB but excluding AB, are winning as well, again for ε sufficiently small. These are examples of Parrondo's paradox.
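These claims are easy to observe numerically. The following is a minimal simulation sketch in Python (the bias ε = 1/200, the seeds, and the sample size are our illustrative choices, not prescribed by the text):

```python
import random

def play(n, choose_p, seed=1):
    """Simulate n plays; choose_p maps current capital to the heads probability."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(n):
        capital += 1 if rng.random() < choose_p(capital) else -1
    return capital

eps = 1 / 200
p = 1 / 2 - eps                      # game A coin
p0, p1 = 1 / 10 - eps, 3 / 4 - eps   # game B coins

game_A = lambda c: p
game_B = lambda c: p0 if c % 3 == 0 else p1
rng_mix = random.Random(2)
game_C = lambda c: game_A(c) if rng_mix.random() < 0.5 else game_B(c)  # C = (1/2)A + (1/2)B

n = 10**6
mu_hat_A = play(n, game_A) / n
mu_hat_B = play(n, game_B) / n
mu_hat_C = play(n, game_C) / n
print(mu_hat_A, mu_hat_B, mu_hat_C)  # A and B lose, C wins
```

With a sample of this size the true means lie several standard errors from 0, so the observed signs are stable across seeds.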
The terms "losing" and "winning" are meant in an asymptotic sense. More precisely, assume that the game (or mixture or pattern of games) is repeated ad infinitum. Let S_n be the player's cumulative profit after n games for n ≥ 1. A game (or mixture or pattern of games) is losing if lim_{n→∞} S_n = −∞ a.s., it is winning if lim_{n→∞} S_n = ∞ a.s., and it is fair if −∞ = lim inf_{n→∞} S_n < lim sup_{n→∞} S_n = ∞ a.s. These definitions are due in this context to Key, Kłosek, and Abbott (2006).
Because the games were introduced to help explain the so-called flashing Brownian ratchet (Ajdari and Prost 1992), much of the work on this topic has appeared in the physics literature. Survey articles include Harmer and Abbott (2002), Parrondo and Dinís (2004), Epstein (2007), and Abbott (2009). Game B can be described as capital-dependent because the coin choice depends on current capital. An alternative game B, called history-dependent, was introduced by Parrondo, Harmer, and Abbott (2000): Let

p_0 := 9/10 − ε,   p_1 = p_2 := 1/4 − ε,   p_3 := 7/10 − ε,   (2)

where ε > 0 is a small bias parameter. Game A is as before. In game B, the player tosses a p_0-coin (resp., a p_1-coin, a p_2-coin, a p_3-coin) if his two previous results are loss-loss (resp., loss-win, win-loss, win-win). He wins one unit with heads and loses one unit with tails. The conclusions for the history-dependent games are the same as for the capital-dependent ones, except that the pattern AB need not be excluded. Pyke (2003) proved a strong law of large numbers for the Parrondo player's sequence of profits in the capital-dependent setting. In the present paper we generalize his result and obtain a central limit theorem as well. We formulate a stochastic model general enough to include both the capital-dependent and the history-dependent games. We also treat separately the case in which the potentially winning game is a random mixture of the two losing games (game A is played with probability γ, and game B is played with probability 1 − γ) and the case in which the potentially winning game (or, more precisely, pattern of games) is a nonrandom pattern of the two losing games, specifically the pattern [r, s], denoting r plays of game A followed by s plays of game B. Finally, we replace (1) by

p_0 := ρ²/(1 + ρ²) − ε,   p_1 = p_2 := 1/(1 + ρ) − ε,

where ρ > 0; (1) is the special case ρ = 1/3. We also replace (2) by

p_0 := 1/(1 + κ) − ε,   p_1 = p_2 := λ/(1 + λ) − ε,   p_3 := (1 + κ − λ)/(1 + κ) − ε,

where κ > 0, λ > 0, and λ < 1 + κ; (2) is the special case κ = 1/9 and λ = 1/3.
The reasons for these parametrizations are explained in Sections 3 and 4. Section 2 formulates our model and derives an SLLN and a CLT. Section 3 specializes to the capital-dependent games and their random mixtures, showing that the Parrondo effect is present whenever ρ ∈ (0, 1) and γ ∈ (0, 1). Section 4 specializes to the history-dependent games and their random mixtures, showing that the Parrondo effect is present whenever either κ < λ < 1 or κ > λ > 1 and γ ∈ (0, 1). Section 5 treats the nonrandom patterns [r, s] and derives an SLLN and a CLT. Section 6 specializes to the capital-dependent games, showing that the Parrondo effect is present whenever ρ ∈ (0, 1) and r, s ≥ 1 with one exception: r = s = 1. Section 7 specializes to the history-dependent games.
Here we expect that the Parrondo effect is present whenever either κ < λ < 1 or κ > λ > 1 and r, s ≥ 1 (without exception), but although we can prove it for certain specific values of κ and λ (such as κ = 1/9 and λ = 1/3), we cannot prove it in general. Finally, Section 8 addresses the question of why Parrondo's paradox holds.
In nearly all cases we obtain formulas for the mean and variance parameters of the CLT. These parameters can be interpreted as the asymptotic mean per game played and the asymptotic variance per game played of the player's cumulative profit. Of course, the pattern [r, s] comprises r + s games.
Some of the algebra required in what follows is rather formidable, so we have used Mathematica 6 where necessary. Our .nb files are available upon request.

A general formulation of Parrondo's games
In some formulations of Parrondo's games, the player's cumulative profit S_n after n games is described by some type of random walk {S_n}_{n≥1}, and then a Markov chain {X_n}_{n≥0} is defined in terms of {S_n}_{n≥1}; for example, X_n ≡ ξ_0 + S_n (mod 3) in the capital-dependent games, where ξ_0 denotes initial capital. However, it is more logical to introduce the Markov chain {X_n}_{n≥0} first and then define the random walk {S_n}_{n≥1} in terms of {X_n}_{n≥0}.
Consider an irreducible aperiodic Markov chain {X_n}_{n≥0} with finite state space Σ. It evolves according to the one-step transition matrix¹ P = (P_ij)_{i,j∈Σ}. Let us denote its unique stationary distribution by π = (π_i)_{i∈Σ}. Let w : Σ × Σ → R be an arbitrary function, which we will sometimes write as a matrix W := (w(i, j))_{i,j∈Σ} and refer to as the payoff matrix. Finally, define the sequences {ξ_n}_{n≥1} and {S_n}_{n≥1} by

ξ_n := w(X_{n−1}, X_n),  n ≥ 1,   (3)

and

S_n := ξ_1 + · · · + ξ_n,  n ≥ 1.   (4)
For example, let Σ := {0, 1, 2}, put X_0 := ξ_0 (mod 3), ξ_0 being initial capital, and let the payoff matrix be given by²

W := (  0   1  −1
       −1   0   1
        1  −1   0 ).   (5)

With the role of P played by

P_B := (    0      p_0   1 − p_0
         1 − p_1    0      p_1
           p_1   1 − p_1    0   ),

where p_0 and p_1 are as in (1), S_n represents the player's profit after n games when playing the capital-dependent game B repeatedly. With the role of P played by

P_A := (   0     p    1 − p
         1 − p   0      p
           p   1 − p    0  ),

where p := 1/2 − ε, S_n represents the player's profit after n games when playing game A repeatedly. Finally, with the role of P played by P_C := γP_A + (1 − γ)P_B, where 0 < γ < 1, S_n represents the player's profit after n games when playing the mixed game C := γA + (1 − γ)B repeatedly. In summary, all three capital-dependent games are described by the same stochastic model with a suitable choice of parameters.
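Because all three games share the mod-3 chain structure, the losing/losing/winning classification can also be confirmed without simulation, from the stationary distributions; a sketch (ε = 1/200 again our illustrative choice):

```python
def stationary(P, iters=5000):
    """Power iteration for the stationary distribution of an irreducible aperiodic chain."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def capital_chain(win_probs):
    """Mod-3 capital chain: state i -> i+1 with prob p_i (win), i-1 otherwise (loss)."""
    P = [[0.0] * 3 for _ in range(3)]
    for i, p in enumerate(win_probs):
        P[i][(i + 1) % 3] = p
        P[i][(i - 1) % 3] = 1 - p
    return P

def mu(win_probs):
    """Asymptotic mean profit per game: sum_i pi_i (2 p_i - 1)."""
    pi = stationary(capital_chain(win_probs))
    return sum(w * (2 * p - 1) for w, p in zip(pi, win_probs))

eps = 1 / 200
pA = [1 / 2 - eps] * 3
pB = [1 / 10 - eps, 3 / 4 - eps, 3 / 4 - eps]
pC = [(a + b) / 2 for a, b in zip(pA, pB)]   # win probabilities of C = (1/2)A + (1/2)B

print(mu(pA), mu(pB), mu(pC))   # negative, negative, positive
```

For game A the chain is doubly stochastic, so the stationary distribution is uniform and the mean is exactly −2ε.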
Similarly, the history-dependent games fit into the same framework, as do the "primary" Parrondo games of Cleuren and Van den Broeck (2004).
Thus, our initial goal is to analyze the asymptotic behavior of S n under the conditions of the second paragraph of this section. We begin by assuming that X 0 has distribution π, so that {X n } n≥0 and hence {ξ n } n≥1 are stationary sequences, although we will weaken this assumption later.
¹ In the physics literature the one-step transition matrix is often written in transposed form, that is, with column sums equal to 1. We do not follow that convention here. More precisely, here P_ij := P(X_n = j | X_{n−1} = i).
² Coincidentally, this is the payoff matrix for the classic game stone-scissors-paper. However, Parrondo's games, as originally formulated, are games of chance, not games of strategy, and so are outside the purview of game theory (in the sense of von Neumann).
Let us evaluate the mean and variance parameters of the central limit theorem. First,

μ := E[ξ_1] = Σ_{i,j} π_i P_ij w(i, j)   and   Var(ξ_1) = Σ_{i,j} π_i P_ij w(i, j)² − μ².

To evaluate Cov(ξ_1, ξ_{m+1}) for m ≥ 1, we first find

E[ξ_1 ξ_{m+1}] = Σ_{i,j,k,l} π_i P_ij w(i, j) (P^{m−1})_{jk} P_kl w(k, l),

from which it follows that

Σ_{m=1}^∞ Cov(ξ_1, ξ_{m+1}) = Σ_{i,j,k,l} π_i P_ij w(i, j)(z_jk − π_k) P_kl w(k, l),

where

Z = (z_ij) := (I − (P − Π))^{−1}   (7)

is the fundamental matrix associated with P (Kemeny and Snell 1960, p. 75), Π being the square matrix each of whose rows is π. In matrix notation, with P′ := (P_ij w(i, j)), P″ := (P_ij w(i, j)²), and 1 denoting the column vector of 1s, the mean and variance parameters are

μ = πP′1,   σ² = πP″1 − (πP′1)² + 2πP′(Z − Π)P′1.   (9)
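These formulas can be evaluated mechanically for any small chain. A sketch for the capital-dependent game B of Section 1 with ε = 0 (our choice of example), using a small Gauss-Jordan routine for the inverse defining Z:

```python
def mat_inv(M):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        d = A[c][c]
        A[c] = [x / d for x in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def stationary(P, iters=5000):
    """Power iteration for the stationary distribution."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def clt_params(P, w):
    """Mean and variance parameters of the CLT, via the fundamental matrix Z."""
    n = len(P)
    pi = stationary(P)
    Z = mat_inv([[(1.0 if i == j else 0.0) - P[i][j] + pi[j] for j in range(n)]
                 for i in range(n)])
    mean = sum(pi[i] * P[i][j] * w[i][j] for i in range(n) for j in range(n))
    var = sum(pi[i] * P[i][j] * w[i][j] ** 2 for i in range(n) for j in range(n)) - mean ** 2
    covsum = sum(pi[i] * P[i][j] * w[i][j] * (Z[j][k] - pi[k]) * P[k][l] * w[k][l]
                 for i in range(n) for j in range(n) for k in range(n) for l in range(n))
    return mean, var + 2 * covsum

# capital-dependent game B with rho = 1/3 (p0 = 1/10, p1 = p2 = 3/4) and eps = 0
probs = [1 / 10, 3 / 4, 3 / 4]
P = [[0.0] * 3 for _ in range(3)]
W = [[0.0] * 3 for _ in range(3)]
for i, p in enumerate(probs):
    P[i][(i + 1) % 3], W[i][(i + 1) % 3] = p, 1.0       # win: +1
    P[i][(i - 1) % 3], W[i][(i - 1) % 3] = 1 - p, -1.0  # loss: -1
mu, sigma2 = clt_params(P, W)
print(mu, sigma2)   # mu ~ 0 (game B is fair at eps = 0), 0 < sigma2 < 1
```

The computed variance parameter is strictly between 0 and 1, consistent with the comparison to game A made in Section 3.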
Finally, we claim that, if μ = 0 and σ² > 0, then

−∞ = lim inf_{n→∞} S_n < lim sup_{n→∞} S_n = ∞ a.s.

Indeed, {ξ_n}_{n≥1} is stationary and strongly mixing, hence its future tail σ-field is trivial (Bradley 2007, p. 60), in the sense that every event has probability 0 or 1. It follows that P(lim inf_{n→∞} S_n = −∞) is 0 or 1. Since μ = 0 and σ² > 0, we can invoke the central limit theorem to conclude that this probability is 1. Similarly, we get P(lim sup_{n→∞} S_n = ∞) = 1. Each of these derivations required that the sequence {ξ_n}_{n≥1} be stationary, an assumption that holds if X_0 has distribution π. But in fact the distribution of X_0 can be arbitrary, in which case {ξ_n}_{n≥1} need not be stationary.
Remark. Assume that σ² > 0. It follows that, if S_n is the player's cumulative profit after n games for each n ≥ 1, then the game (or mixture or pattern of games) is losing if μ < 0, winning if μ > 0, and fair if μ = 0. (See Section 1 for the definitions of these three terms.)

Proof. It will suffice to treat the case X_0 = i_0 ∈ Σ, and then use this case to prove the general case. Let {X_n}_{n≥0} be a Markov chain in Σ with one-step transition matrix P and initial distribution π, so that {ξ_n}_{n≥1} is stationary, as above. Let N := min{n ≥ 0 : X_n = i_0}, and define

X̂_n := X_{N+n},  n ≥ 0.
However, it may be of interest to confirm these conclusions using only the mean, variance, and covariance formulas above. Since Z = (I − (P − Π))^{−1}, we can multiply (I − P + Π)Z = I on the right by P and use ΠZP = ΠP = Π to get PZP = ZP + Π − P. From the fact that PZ = Z + Π − I, we obtain a similar identity for the sum over i, j, k. Substituting in (14) and combining with (13), we conclude that σ² = 0.

Mixtures of capital-dependent games
The Markov chain underlying the capital-dependent Parrondo games has state space Σ = {0, 1, 2} and one-step transition matrix of the form

P := (  0   p_0  q_0
       q_1   0   p_1
       p_2  q_2   0  ),   (15)

where p_0, p_1, p_2 ∈ (0, 1) and q_i := 1 − p_i. It is irreducible and aperiodic. The payoff matrix W is as in (5).
We conclude that {ξ_n}_{n≥1} satisfies the SLLN with

μ = π_0(2p_0 − 1) + π_1(2p_1 − 1) + π_2(2p_2 − 1)

and the CLT with the same μ and with the variance parameter σ² obtained by specializing (9) to this chain. We now apply these results to the capital-dependent Parrondo games. Although actually much simpler, game A fits into this framework with one-step transition matrix P_A defined by (15) with

p_0 = p_1 = p_2 := 1/2 − ε,

where ε > 0 is a small bias parameter. In game B, it is typically assumed that, ignoring the bias parameter, the one-step transition matrix P_B is defined by (15) with p_1 = p_2 and μ = 0.
These two constraints determine a one-parameter family of probabilities: since μ = 0 for the chain (15) if and only if p_0 p_1 p_2 = q_0 q_1 q_2, the constraint p_1 = p_2 forces p_0 p_1² = q_0 q_1², and solving this for p_1 in terms of p_0 involves a square root. To eliminate the square root, we reparametrize the probabilities in terms of ρ > 0. Restoring the bias parameter, game B has one-step transition matrix P_B defined by (15) with

p_0 := ρ²/(1 + ρ²) − ε,   p_1 = p_2 := 1/(1 + ρ) − ε,   (17)

which includes (1) when ρ = 1/3. Finally, game C := γA + (1 − γ)B is a mixture (0 < γ < 1) of the two games, hence has one-step transition matrix P_C := γP_A + (1 − γ)P_B, which can also be defined by (15) with

p_0 := γ/2 + (1 − γ)ρ²/(1 + ρ²) − ε,   p_1 = p_2 := γ/2 + (1 − γ)/(1 + ρ) − ε.

Let us denote the mean μ for game A by μ_A(ε), to emphasize the game as well as its dependence on ε. Similarly, we denote the variance σ² for game A by σ²_A(ε). Analogous notation applies to games B and C. We obtain, for game A, μ_A(ε) = −2ε and σ²_A(ε) = 1 − 4ε², together with analogous, but more complicated, formulas for games B and C. One can check that σ²_C(0) < 1 for all ρ ≠ 1 and γ ∈ (0, 1). The formula for σ²_B(0) was found by Percus and Percus (2002) in a different form. We prefer the form given here because it tells us immediately that game B has smaller variance than game A for each ρ ≠ 1, provided ε is sufficiently small. With ρ = 1/3 and ε = 1/200, Harmer and Abbott (2002, Fig. 5) inferred this from a simulation.
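The reparametrization can be checked mechanically: μ = 0 for the chain (15) is equivalent to p_0 p_1 p_2 = q_0 q_1 q_2 (a standard fact for this chain), the ρ-parametrization satisfies it identically, and ρ = 1/3 recovers (1) with ε = 0. A sketch:

```python
def fair(p0, p1, p2, tol=1e-12):
    """mu = 0 for the chain (15) iff p0*p1*p2 == q0*q1*q2."""
    return abs(p0 * p1 * p2 - (1 - p0) * (1 - p1) * (1 - p2)) < tol

for rho in (0.1, 1 / 3, 0.5, 1.0, 2.0, 7.0):
    p0 = rho ** 2 / (1 + rho ** 2)
    p1 = 1 / (1 + rho)
    assert fair(p0, p1, p1)   # game B is fair for every rho > 0 when eps = 0

rho = 1 / 3
print(rho ** 2 / (1 + rho ** 2), 1 / (1 + rho))  # ~0.1 and ~0.75: the original game B
```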
The formula for μ_C(0) was obtained by Berresford and Rockett (2003) in a different form. We prefer the form given here because it makes the following conclusion transparent.
Assuming (17) with ε = 0, the condition ρ < 1 is equivalent to p_0 < 1/2. Clearly, the Parrondo effect appears (with the bias parameter present) if and only if μ_C(0) > 0. A reverse Parrondo effect, in which two winning games combine to lose, appears (with a negative bias parameter present) if and only if μ_C(0) < 0.
Corollary 3 (Pyke 2003). Let games A and B be as above (with the bias parameter present). If ρ ∈ (0, 1) and γ ∈ (0, 1), then there exists ε_0 > 0, depending on ρ and γ, such that Parrondo's paradox holds for games A, B, and C := γA + (1 − γ)B whenever 0 < ε < ε_0.

The theorem and corollary are special cases of results of Pyke. In his formulation, the modulo 3 condition in the definition of game B is replaced by a modulo m condition, where m ≥ 3, and game A is replaced by a game analogous to game B but with a different parameter ρ_0. Pyke's condition is equivalent to 0 < ρ < ρ_0 ≤ 1. We have assumed m = 3 and ρ_0 = 1.

Mixtures of history-dependent games
The Markov chain underlying the history-dependent Parrondo games has state space Σ = {0, 1, 2, 3} and one-step transition matrix of the form

P := ( q_0  p_0   0    0
        0    0   q_1  p_1
       q_2  p_2   0    0
        0    0   q_3  p_3 ),   (19)

where p_0, p_1, p_2, p_3 ∈ (0, 1) and q_i := 1 − p_i for i = 0, 1, 2, 3. Think of the states of Σ in binary form: 00, 01, 10, 11. They represent, respectively, loss-loss, loss-win, win-loss, and win-win for the results of the two preceding games, with the second-listed result being the more recent one. The Markov chain is irreducible and aperiodic. The payoff matrix W is given by

W := ( −1   1   0   0
        0   0  −1   1
       −1   1   0   0
        0   0  −1   1 ).

Now, the unique stationary distribution π = (π_0, π_1, π_2, π_3) has the form

π = (q_2 q_3, p_0 q_3, p_0 q_3, p_0 p_1)/d,   d := q_2 q_3 + 2 p_0 q_3 + p_0 p_1.

We conclude that {ξ_n}_{n≥1} satisfies the SLLN with

μ = Σ_{i=0}^{3} π_i(2p_i − 1)

and the CLT with the same μ and with the variance parameter σ² obtained by specializing (9) to this chain. We now apply these results to the history-dependent Parrondo games. Although actually much simpler, game A fits into this framework with one-step transition matrix P_A defined by (19) with

p_0 = p_1 = p_2 = p_3 := 1/2 − ε,

where ε > 0 is a small bias parameter. In game B, it is typically assumed that, ignoring the bias parameter, the one-step transition matrix P_B is defined by (19) with p_1 = p_2 and μ = 0.
These two constraints determine a two-parameter family of probabilities, which we parametrize in terms of κ > 0 and λ > 0 (with λ < 1 + κ): restoring the bias parameter, game B has one-step transition matrix P_B defined by (19) with

p_0 := 1/(1 + κ) − ε,   p_1 = p_2 := λ/(1 + λ) − ε,   p_3 := (1 + κ − λ)/(1 + κ) − ε,   (21)

which includes (2) when κ = 1/9 and λ = 1/3.
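The parametrization can be checked numerically: for each admissible (κ, λ) it yields a fair game B (μ = 0 when ε = 0), and κ = 1/9, λ = 1/3 recovers (2). A sketch (the particular test values of κ and λ are our choices):

```python
def stationary(P, iters=5000):
    """Power iteration for the stationary distribution of an irreducible aperiodic chain."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def history_chain(p):
    """States 0..3 = loss-loss, loss-win, win-loss, win-win; history (a, b) -> (b, result)."""
    P = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        recent = i % 2                # more recent result: 0 = loss, 1 = win
        P[i][2 * recent + 1] = p[i]   # win
        P[i][2 * recent] = 1 - p[i]   # loss
    return P

def mu(p):
    pi = stationary(history_chain(p))
    return sum(w * (2 * q - 1) for w, q in zip(pi, p))

def probs(kappa, lam):
    return [1 / (1 + kappa), lam / (1 + lam), lam / (1 + lam),
            (1 + kappa - lam) / (1 + kappa)]

for kappa, lam in ((1 / 9, 1 / 3), (0.5, 0.8), (2.0, 1.5), (3.0, 0.25)):
    assert lam < 1 + kappa
    assert abs(mu(probs(kappa, lam))) < 1e-9   # game B is fair when eps = 0

print([round(x, 4) for x in probs(1 / 9, 1 / 3)])  # [0.9, 0.25, 0.25, 0.7]
```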
Corollary 5. Let games A and B be as above (with the bias parameter present). If 0 < κ < λ < 1 or κ > λ > 1, and if γ ∈ (0, 1), then there exists ε_0 > 0, depending on κ, λ, and γ, such that Parrondo's paradox holds for games A, B, and C := γA + (1 − γ)B whenever 0 < ε < ε_0.

When κ = 1/9 and λ = 1/3, the mean and variance formulas simplify considerably, and they reduce to explicit numerical values when γ = 1/2 as well. Here, in contrast to the capital-dependent games, the variance of game B is greater than that of game A. This conclusion, however, is parameter dependent.

Nonrandom patterns of games
We also want to consider nonrandom patterns of games of the form A^r B^s, in which game A is played r times, then game B is played s times, where r and s are positive integers. Such a pattern is denoted in the literature by [r, s]. Associated with the games are one-step transition matrices for Markov chains in a finite state space Σ, which we will denote by P_A and P_B, and a function w : Σ × Σ → R. We assume that P_A and P_B are irreducible and aperiodic, as are P := P_A^r P_B^s and the other r + s − 1 cyclic permutations of P_A^r P_B^s. Let us denote the unique stationary distribution associated with P by π = (π_i)_{i∈Σ}. The driving Markov chain {X_n}_{n≥0} is time-inhomogeneous, with one-step transition matrices P_A, P_A, . . . , P_A (r times), P_B, P_B, . . . , P_B (s times), P_A, P_A, . . . , P_A (r times), P_B, P_B, . . . , P_B (s times), and so on. Now define {ξ_n}_{n≥1} and {S_n}_{n≥1} by (3) and (4). What is the asymptotic behavior of S_n as n → ∞?
It remains to evaluate these variances and covariances. First,

μ = (r + s)^{−1} [ Σ_{u=0}^{r−1} π P_A^u ζ_A + Σ_{v=0}^{s−1} π P_A^r P_B^v ζ_B ],   (24)

where ζ_A and ζ_B denote the column vectors with ith entries Σ_j (P_A)_ij w(i, j) and Σ_j (P_B)_ij w(i, j), respectively. This formula for μ is equivalent to one found by Kay and Johnson (2003) in the history-dependent setting. Next, we evaluate the variances, obtaining (25), and the covariances, obtaining (26). Consider the factor P^{m−1} that appears in the first sum on the right of (26), for example. With Π denoting the square matrix each of whose rows is π, we can replace P^{m−1} there by P^{m−1} − Π. Thus, summing over m ≥ 1 and using Σ_{m≥1}(P^{m−1} − Π) = Z − Π, we obtain a closed-form expression for the sum of the covariances, where Z is the fundamental matrix associated with P := P_A^r P_B^s. We conclude that σ² admits a closed-form expression that relies on (25) and (26). We summarize the results of this section in the following theorem.
Proof. The proof is similar to that of Theorem 1, except that N := min{n ≥ 0 : X_n = i_0, n is divisible by r + s}.
In the examples to which we will be applying the above mean and variance formulas, additional simplifications will occur because P_A has a particularly simple form and P_B has a spectral representation. Denote by P′_A the matrix with (i, j)th entry (P_A)_ij w(i, j), and assume that P′_A has row sums equal to 0. Denote by P′_B the matrix with (i, j)th entry (P_B)_ij w(i, j), and define ζ := P′_B (1, 1, . . . , 1)^T to be the vector of row sums of P′_B. Further, let t := |Σ| and suppose that P_B has eigenvalues 1, e_1, . . . , e_{t−1} and corresponding linearly independent right eigenvectors r_0, r_1, . . . , r_{t−1}. Put D := diag(1, e_1, . . . , e_{t−1}) and R := (r_0, r_1, . . . , r_{t−1}). Then the rows of L := R^{−1} are left eigenvectors and P_B^v = R D^v L for all v ≥ 0. Finally, with

D̃_v := diag(v, (1 − e_1^v)/(1 − e_1), . . . , (1 − e_{t−1}^v)/(1 − e_{t−1})),

we have I + P_B + · · · + P_B^{v−1} = R D̃_v L for all v ≥ 1. Additional notation includes π_{s,r} for the unique stationary distribution of P_B^s P_A^r, and π as before for the unique stationary distribution of P := P_A^r P_B^s. Notice that π P_A^r = π_{s,r}. Finally, we assume as well that w(i, j) = ±1 whenever (P_A)_ij > 0 or (P_B)_ij > 0.
These assumptions allow us to write (24) as μ = (r + s)^{−1} π_{s,r} R D̃_s L ζ, and (25) and (26) simplify similarly; the resulting formulas are (28)-(30).

A referee has suggested an alternative approach to the results of this section that has certain advantages. Instead of starting with a time-inhomogeneous Markov chain in Σ, we begin with the (time-homogeneous) Markov chain in the product space Σ̄ := {0, 1, . . . , r + s − 1} × Σ with transition probabilities

(i, j) → (i + 1 mod r + s, k)  with probability (P_i)_{jk},

where P_i := P_A for 0 ≤ i ≤ r − 1 and P_i := P_B for r ≤ i ≤ r + s − 1. It is irreducible and periodic with period r + s. Ordering the states of Σ̄ lexicographically, we can write the one-step transition matrix P̄ in block form, with (i, i′) block equal to P_i if i′ = i + 1 mod r + s and equal to the zero matrix otherwise. With π as above, the unique stationary distribution for P̄ is π̄ := (r + s)^{−1}(π_0, π_1, . . . , π_{r+s−1}), where π_0 := π and π_i := π_{i−1} P_{i−1} for 1 ≤ i ≤ r + s − 1. Using the idea in (22) and (23), we can deduce the strong mixing property and the central limit theorem. The mean and variance parameters are as in (9) but with bars on the matrices. Of course, Z̄ := (I − (P̄ − Π̄))^{−1}, Π̄ being the square matrix each of whose rows is π̄; P̄′ is as P̄ but with P′_A and P′_B in place of P_A and P_B; and similarly for P̄″. The principal advantage of this approach is the simplicity of the formulas for the mean and variance parameters. Another advantage is that patterns other than those of the form [r, s] can be treated just as easily. The only drawback is that, in the context of our two models, the matrices are no longer 3 × 3 or 4 × 4 but rather 3(r + s) × 3(r + s) and 4(r + s) × 4(r + s). In other words, the formulas (28)-(30), although lacking elegance, are more user-friendly.

Patterns of capital-dependent games
Let games A and B be as in Section 3; see especially (17). Both games are losing. In this section we show that, for every ρ ∈ (0, 1), and for every pair of positive integers r and s except for r = s = 1, the pattern [r, s], which stands for r plays of game A followed by s plays of game B, is winning for sufficiently small ε > 0. Notice that it will suffice to treat the case ε = 0.
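The claim can be checked numerically for particular patterns by computing μ_{[r,s]}(0) directly from one cycle of the time-inhomogeneous chain, with no spectral machinery; a sketch with ρ = 1/3 (so that game B is the original one):

```python
def capital_chain(probs):
    """Mod-3 capital chain: state i -> i+1 (win, prob p_i) or i-1 (loss)."""
    P = [[0.0] * 3 for _ in range(3)]
    for i, p in enumerate(probs):
        P[i][(i + 1) % 3] = p
        P[i][(i - 1) % 3] = 1 - p
    return P

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def stationary(P, iters=5000):
    pi = [1.0 / 3] * 3
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

probs_A = [1 / 2] * 3
probs_B = [1 / 10, 3 / 4, 3 / 4]        # rho = 1/3, eps = 0
PA, PB = capital_chain(probs_A), capital_chain(probs_B)

def mu_pattern(r, s):
    """mu_[r,s](0): asymptotic mean profit per game for r plays of A, then s of B."""
    cycle = [(PA, probs_A)] * r + [(PB, probs_B)] * s
    P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for M, _ in cycle:
        P = mat_mul(P, M)
    dist = stationary(P)                # distribution at the start of a cycle
    total = 0.0
    for M, probs in cycle:
        total += sum(d * (2 * p - 1) for d, p in zip(dist, probs))
        dist = [sum(dist[i] * M[i][j] for i in range(3)) for j in range(3)]
    return total / (r + s)

print(mu_pattern(1, 1))                                      # ~0: the pattern AB is fair
print(mu_pattern(2, 1), mu_pattern(1, 2), mu_pattern(2, 2))  # all positive
```

The [1, 1] value vanishes (the pattern AB is fair), while [2, 1], [1, 2], and [2, 2] are strictly positive, as claimed above.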
We begin by finding a formula for μ_{[r,s]}(0), the asymptotic mean per game played of the player's cumulative profit for the pattern [r, s], assuming ε = 0. Recall that P_A is given by (15) with p_0 = p_1 = p_2 := 1/2, and P_B is given by (15) with p_0 := ρ²/(1 + ρ²) and p_1 = p_2 := 1/(1 + ρ), where ρ > 0. First, we can prove by induction that

P_A^r = (1/3)J + (−1/2)^r (I − (1/3)J),

where J denotes the 3 × 3 matrix of 1s. Next, with S := (1 + ρ²)(1 + 4ρ + ρ²), the nonunit eigenvalues of P_B are

e_1 := −1/2 + (1 − ρ)√S/(2(1 + ρ)(1 + ρ²)),   e_2 := −1/2 − (1 − ρ)√S/(2(1 + ρ)(1 + ρ²)),

and we define the diagonal matrix D := diag(1, e_1, e_2). The corresponding right eigenvectors are linearly independent for all ρ > 0 (including ρ = 1), so we define R := (r_0, r_1, r_2) and L := R^{−1}. Then P_B^v = R D^v L for all v ≥ 0, which leads to an explicit formula for P_B^s P_A^r, from which we can compute its unique stationary distribution π_{s,r} as a left eigenvector corresponding to the eigenvalue 1. With these ingredients, μ_{[r,s]}(0) can be evaluated from (24). Algebraic computations show that this reduces to (32). The last assertion of the theorem was known to Pyke (2003). As before, the Parrondo effect appears (with the bias parameter present) if and only if μ_{[r,s]}(0) > 0. A reverse Parrondo effect, in which two winning games combine to lose, appears (with a negative bias parameter present) if and only if μ_{[r,s]}(0) < 0.
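As a numerical sanity check, assuming the closed forms e_{1,2} = −1/2 ± (1 − ρ)√S/(2(1 + ρ)(1 + ρ²)) (consistent with trace P_B = 0 and det P_B = p_0 p_1 p_2 + q_0 q_1 q_2) and P_A^r = (1/3)J + (−1/2)^r(I − (1/3)J), one can verify both directly:

```python
import math

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# P_A and the claimed closed form for its powers
PA = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
for r in range(1, 8):
    Pr = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for _ in range(r):
        Pr = mat_mul(Pr, PA)
    for i in range(3):
        for j in range(3):
            closed = 1 / 3 + (-0.5) ** r * ((1.0 if i == j else 0.0) - 1 / 3)
            assert abs(Pr[i][j] - closed) < 1e-12

# the nonunit eigenvalues of P_B annihilate det(P_B - e I)
for rho in (0.2, 1 / 3, 0.5, 2.0):
    p0, p1 = rho ** 2 / (1 + rho ** 2), 1 / (1 + rho)
    PB = [[0.0, p0, 1 - p0], [1 - p1, 0.0, p1], [p1, 1 - p1, 0.0]]
    S = (1 + rho ** 2) * (1 + 4 * rho + rho ** 2)
    for sign in (1, -1):
        e = -0.5 + sign * (1 - rho) * math.sqrt(S) / (2 * (1 + rho) * (1 + rho ** 2))
        M = [[PB[i][j] - (e if i == j else 0.0) for j in range(3)] for i in range(3)]
        assert abs(det3(M)) < 1e-9

print("matrix-power and eigenvalue formulas check out")
```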
A formula for σ²_{[r,s]}(0) analogous to (32) would be extremely complicated. We therefore consider the matrix formulas of Section 5 to be in final form.

Patterns of history-dependent games
Let games A and B be as in Section 4; see especially (21). Both games are losing. In this section we attempt to find conditions on κ and λ such that, for every pair of positive integers r and s, the pattern [r, s], which stands for r plays of game A followed by s plays of game B, is winning for sufficiently small ε > 0.
As for the spectral representation of P_B, its eigenvalues include 1 and the three roots of the cubic equation x³ + a_2 x² + a_1 x + a_0 = 0, with coefficients a_2, a_1, a_0 as in (37). With the help of Cardano's formula, we find that the nonunit eigenvalues of P_B are

e_1 := −a_2/3 + P + Q,   e_2 := −a_2/3 + ωP + ω̄Q,   e_3 := −a_2/3 + ω̄P + ωQ,

where ω := e^{2πi/3} = −1/2 + (√3/2)i and ω² = e^{4πi/3} = ω̄ = −1/2 − (√3/2)i are cube roots of unity, and

P := [(β + √(β² + 4α³))/2]^{1/3},   Q := [(β − √(β² + 4α³))/2]^{1/3},

with α := a_1/3 − a_2²/9 and β := a_1 a_2/3 − 2a_2³/27 − a_0. Of course, e_1, e_2, and e_3 are each less than 1 in absolute value. Notice that the definitions of P and Q are slightly ambiguous, owing to the nonuniqueness of the cube roots. (If (P, Q) is replaced in the definitions of e_1, e_2, and e_3 by (ωP, ω²Q) or by (ω²P, ωQ), then e_1, e_2, and e_3 are merely permuted.) If β² + 4α³ > 0, then P and Q can be taken to be real and distinct,³ in which case e_1 is real and e_2 and e_3 are complex conjugates; in particular, e_1, e_2, and e_3 are distinct. If β² + 4α³ = 0, then P and Q can be taken to be real and equal, in which case e_1, e_2, and e_3 are real with e_2 = e_3. If β² + 4α³ < 0, then P and Q can be taken to be complex conjugates, in which case e_1, e_2, and e_3 are real and distinct; in fact, they can be written

e_k = −a_2/3 + 2√(−α) cos((θ + 2(k − 1)π)/3),   k = 1, 2, 3,

where θ := cos^{−1}((β/2)/√(−α³)), which implies that 1 > e_1 > e_3 > e_2 > −1. See Figure 1.
We can extend the formulas to κλ = 1 by noting that, in this case, 0 is an eigenvalue of P_B and the two remaining nonunit eigenvalues, which can be obtained from the quadratic formula, are distinct from 0 and 1 unless κ = λ = 1. (Here we are using the assumption that λ < 1 + κ, hence κ > (−1 + √5)/2.) This allows a spectral representation in such cases and again leads to formulas for μ_{[r,s]}(0) for all r, s ≥ 1, which we do not include here. Writing the numerator of f(x, y, z) temporarily as (λ − κ)p(x)q(y, z), notice that the two nonunit, nonzero eigenvalues coincide, when κλ = 1, with the roots of p(x), and this ordered pair of eigenvalues is also a zero of q. This explains why the singularity on the curve κλ = 1 is removable.
We cannot prove the analogue of Theorem 7 in this setting, so we state it as a conjecture.
We can prove a very small part of the conjecture. Let K be the set of positive fractions with one-digit numerators and denominators, that is, K := {k/l : k, l = 1, 2, . . . , 9}, and note that K has 55 distinct elements.
We turn to case (c), but let us begin by treating the general case. Given κ > 0 and λ > 0 with λ < 1 + κ, κ ≠ λ, and λ ≠ 1, we clearly have c_0 ≠ 0. The conjecture says that E_s and F_s := E_s + G_s H_s have the same sign as c_0 for all s ≥ 1.
We have carried out the required estimates in Mathematica for the 2123 cases of part (c). There were no exceptions to the conjecture. Table 1 lists a few of these cases.

We propose an alternative explanation (the Utah interpretation?) that tends to support the Boston interpretation. To motivate it, we observe that (r + s)μ_{[r,s]}(0) can be interpreted as the asymptotic mean per cycle (of r plays of game A and s plays of game B) of the player's cumulative profit when ε = 0. If s is large relative to r, then the r plays of game A might reasonably be interpreted as periodic noise in an otherwise uninterrupted sequence of plays of game B. We will show that lim_{s→∞} (r + s)μ_{[r,s]}(0) exists and is finite for every r ≥ 1. If the limit is positive for some r ≥ 1, then μ_{[r,s]}(0) > 0 for that r and all s sufficiently large. If the limit is negative for some r ≥ 1, then μ_{[r,s]}(0) < 0 for that r and all s sufficiently large. These conclusions are weaker than those of Theorem 7 and the conjecture, but the derivation is much simpler, depending only on the fundamental matrix of P_B, not on its spectral representation.
Here is the derivation. First, π_{s,r} = πP_A^r, where π is the unique stationary distribution of P_A^r P_B^s. Now lim_{s→∞} P_B^s = Π_B, where Π_B is the square matrix each of whose rows is π_B, the unique stationary distribution of P_B. We conclude that π_{s,r} = π_{s,r} P_B^s P_A^r → π_B P_A^r as s → ∞ and therefore that

lim_{s→∞} (r + s)μ_{[r,s]}(0) = lim_{s→∞} π_{s,r}(I + P_B + · · · + P_B^{s−1})ζ
                             = π_B P_A^r lim_{s→∞} Σ_{n=0}^{s−1} (P_B^n − Π_B)ζ = π_B P_A^r (Z_B − Π_B)ζ = π_B P_A^r Z_B ζ,

where Z_B denotes the fundamental matrix of P_B (see (7)); here we have used Π_B ζ = (π_B ζ)(1, 1, . . . , 1)^T = 0. Since π_B Z_B = π_B and π_B ζ = 0, it is the P_A^r factor, or the "noise" caused by game A, that leads to a (typically) nonzero limit.
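The limit identity can be confirmed numerically by comparing (r + s)μ_{[r,s]}(0) for large s with π_B P_A^r Z_B ζ; a sketch for the capital-dependent games with ρ = 1/3 and r = 2 (our choices):

```python
def capital_chain(probs):
    P = [[0.0] * 3 for _ in range(3)]
    for i, p in enumerate(probs):
        P[i][(i + 1) % 3] = p
        P[i][(i - 1) % 3] = 1 - p
    return P

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_inv(M):
    """Gauss-Jordan inverse."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r_: abs(A[r_][c]))
        A[c], A[piv] = A[piv], A[c]
        d = A[c][c]
        A[c] = [x / d for x in A[c]]
        for r_ in range(n):
            if r_ != c:
                f = A[r_][c]
                A[r_] = [x - f * y for x, y in zip(A[r_], A[c])]
    return [row[n:] for row in A]

def stationary(P, iters=5000):
    pi = [1.0 / 3] * 3
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(3)) for j in range(3)]

PA = capital_chain([1 / 2] * 3)
PB = capital_chain([1 / 10, 3 / 4, 3 / 4])           # rho = 1/3, eps = 0
zeta = [2 * p - 1 for p in (1 / 10, 3 / 4, 3 / 4)]   # row sums of P'_B
r = 2

def cycle_sum(s):
    """(r + s) mu_[r,s](0): per-cycle mean profit (the A steps contribute 0)."""
    P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for _ in range(r):
        P = mat_mul(P, PA)
    for _ in range(s):
        P = mat_mul(P, PB)
    dist = stationary(P)           # pi, the start-of-cycle distribution
    for _ in range(r):
        dist = vec_mat(dist, PA)   # now dist = pi_{s,r}
    total = 0.0
    for _ in range(s):
        total += sum(d * z for d, z in zip(dist, zeta))
        dist = vec_mat(dist, PB)
    return total

pi_B = stationary(PB)
ZB = mat_inv([[(1.0 if i == j else 0.0) - PB[i][j] + pi_B[j] for j in range(3)]
              for i in range(3)])
v = pi_B
for _ in range(r):
    v = vec_mat(v, PA)
v = vec_mat(v, ZB)
limit = sum(x * z for x, z in zip(v, zeta))   # pi_B P_A^r Z_B zeta

print(cycle_sum(200), limit)   # nearly equal
```

The truncation error decays geometrically in s, so s = 200 already agrees with the limit to many digits.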
Of course we could derive these limits from the formulas in Sections 6 and 7, but the point is that they do not require the spectral representation of P_B; they are simpler than that. They also explain why the conditions on ρ are the same in Theorems 2 and 7, and why the conditions on κ and λ are the same in Theorem 4 and the conjecture.