SRB Measures For Certain Markov Processes

We study Markov processes generated by iterated function systems (IFS). The constituent maps of the IFS are monotonic transformations of the interval. We first obtain an upper bound on the number of SRB (Sinai-Ruelle-Bowen) measures for the IFS. Then, when all the constituent maps have common fixed points at 0 and 1, theorems are given to analyze properties of the ergodic invariant measures $\delta_0$ and $\delta_1$. In particular, sufficient conditions for $\delta_0$ and/or $\delta_1$ to be, or not to be, SRB measures are given. We apply some of our results to asset market games.


Introduction
In the 1970s, Sinai, Ruelle and Bowen studied the existence of an important class of invariant measures in the context of deterministic dynamical systems. These invariant measures are nowadays known as SRB (Sinai-Ruelle-Bowen) measures [14]. SRB measures are distinguished among other ergodic invariant measures by their physical importance: they are the only ergodic measures for which the Birkhoff Ergodic Theorem holds on a set of positive measure of the phase space, and hence the only ones observable from a set of initial conditions of positive measure. In this note, we study SRB measures in a stochastic setting: Markov processes generated by iterated function systems (IFS).
An IFS is a discrete-time random dynamical system [1, 10] which consists of a finite collection of transformations and a probability vector $\{\tau_s; p_s\}_{s=1}^{L}$. At each time step, a transformation $\tau_s$ is selected with probability $p_s > 0$ and applied to the process. The IFS has been a very active topic of research due to its wide applications in fractals and in learning models. The survey articles [5, 13] contain a considerable list of references and results in this area.
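As an illustration (not part of the formal development), a random orbit of such an IFS can be simulated directly. The two maps below are hypothetical stand-ins: strictly increasing transformations of $[0,1]$ fixing 0 and 1.

```python
import random

def ifs_orbit(maps, probs, r0, steps, rng=None):
    """Simulate one random orbit of an IFS {tau_s; p_s}: at each step a map
    is drawn with its probability and applied to the current point."""
    rng = rng or random.Random(0)
    r, orbit = r0, [r0]
    for _ in range(steps):
        tau = rng.choices(maps, weights=probs)[0]
        r = tau(r)
        orbit.append(r)
    return orbit

# Hypothetical constituent maps: strictly increasing on [0,1], fixing 0 and 1.
tau1 = lambda r: r ** 2      # pushes interior points toward 0
tau2 = lambda r: r ** 0.5    # pushes interior points toward 1
orbit = ifs_orbit([tau1, tau2], [0.5, 0.5], 0.5, 1000)
```

The orbit stays in $[0,1]$; which endpoint (if any) attracts it depends on the probabilities, which is the theme of the sections below.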
The systems which we study in this note do not fall in the category of the IFS considered in [5, 13] and the references therein. Moreover, in general, our IFS do not satisfy the classical splitting condition of [7]. In fact, our aim in this note is to depart from the traditional goal of finding sufficient conditions for an IFS to admit a unique attracting invariant measure [7, 5, 13]. Instead, we study cases where an IFS may admit more than one invariant measure and aim to identify the physically relevant ones; i.e., invariant measures for which the Ergodic Theorem holds on a set of positive measure of the ambient phase space. We call such invariant measures SRB.
Physical SRB measures for random maps have been studied by Buzzi [3] in the context of random Lasota-Yorke maps. However, Buzzi's definition of a basin of an SRB measure is different from ours. We will clarify this difference in Section 2. A general concept of an SRB measure for general random dynamical systems can be found in the survey article [11]. In this note we study physical SRB measures for IFS whose constituent maps are strictly increasing transformations of the interval. We obtain an upper bound on the number of SRB measures for the IFS. Moreover, when all the constituent maps have common fixed points at 0 and 1, we provide sufficient conditions for $\delta_0$ and/or $\delta_1$ to be, or not to be, SRB measures. To complement our theoretical results, we show at the end of this note that examples of IFS of this type can describe evolutionary models of financial markets [4].
In Section 2 we introduce our notation and main definitions. In particular, Section 2 includes the definition of an SRB measure for an IFS. In Section 3 we identify the structure of the basins of SRB measures and we obtain a sharp upper bound on the number of SRB measures. Section 4 contains sufficient conditions for $\delta_0$ and $\delta_1$, the delta measures concentrated at 0 and 1 respectively, to be SRB. It also contains sufficient conditions for $\delta_0$ and $\delta_1$ not to be SRB measures. Our main results in this section are Theorem 4.3 and Theorem 4.7. In Section 5 we study ergodic properties of $\delta_0$ and $\delta_1$ without having any information about the probability vector of the IFS. In Section 6 we apply our results to asset market games. In particular, we find a generalization of the famous Kelly rule [9] which expresses the principle of "betting your beliefs". The importance of our generalization lies in the fact that it does not require full knowledge of the probability distribution of the random states of the system. Section 7 contains an auxiliary result which we use in the proof of Theorem 4.7.
We denote the space of sequences $\omega = \{s_1, s_2, \ldots\}$, $s_l \in S$, by $\Omega$. The topology on $\Omega$ is defined as the product of the discrete topologies on $S$. Let $\pi_p$ denote the Borel measure on $\Omega$ defined as the product measure $p^{\mathbb{N}}$. Moreover, we write $s^t := (s_1, s_2, \ldots, s_t)$ for the history up to time $t$, and for any $r_0 \in [0,1]$ we write $r_t(s^t) := \tau_{s_t} \circ \cdots \circ \tau_{s_1}(r_0)$ for the random orbit of $r_0$. Finally, by $E(\cdot)$ we denote the expectation with respect to $p$, by $E(\cdot \mid s^t)$ the conditional expectation given the history up to time $t$, and by $\mathrm{var}(\cdot)$ the variance with respect to $p$.

Invariant measures. $F$ is understood as a Markov process with the transition function
$$P(r, A) = \sum_{s=1}^{L} p_s \chi_A(\tau_s(r)),$$
where $A \in \mathcal{B}$ and $\chi_A$ is the characteristic function of the set $A$. The transition function $P$ induces an operator $P$ on measures on $([0,1], \mathcal{B})$ defined by
$$(2.1)\qquad P\mu(A) = \sum_{s=1}^{L} p_s \mu(\tau_s^{-1}(A)).$$
Following the standard notion of an invariant measure for a Markov process, we call a probability measure $\mu$ on $([0,1], \mathcal{B})$ invariant if $P\mu = \mu$. Suppose that $\mu$ is invariant and that, for some $r_0 \in [0,1]$,
$$(2.2)\qquad \frac{1}{T} \sum_{t=0}^{T-1} \delta_{r_t(s^t)} \to \mu \quad \text{weakly, with } \pi_p\text{-probability one.}$$
Then $\mu$ is called an SRB (Sinai-Ruelle-Bowen) measure. The set of points $r_0 \in [0,1]$ for which (2.2) is satisfied will be called the basin of $\mu$ and it will be denoted by $B(\mu)$. Obviously, if $\lambda(B(\mu)) = 1$ then $\mu$ is the unique SRB measure of $F$.
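The averages in (2.2) can be probed numerically. The sketch below, with assumed stand-in maps, accumulates $\frac{1}{T}\sum_{t=0}^{T-1} f(r_t(s^t))$ along one sample path, i.e., the integral of a continuous test function $f$ against the empirical measure:

```python
import random

def empirical_average(maps, probs, r0, T, f, rng):
    """Approximate (1/T) * sum_{t=0}^{T-1} f(r_t(s^t)) along one random
    orbit: the integral of f against the empirical measure in (2.2)."""
    r, total = r0, 0.0
    for _ in range(T):
        total += f(r)
        tau = rng.choices(maps, weights=probs)[0]
        r = tau(r)
    return total / T

# Hypothetical maps fixing 0 and 1; with p = (1/2, 1/2) the expected
# log-exponent is 0.5*ln 2 + 0.5*ln(1/2) = 0, so the orbit wanders.
tau1 = lambda r: r ** 2
tau2 = lambda r: r ** 0.5
rng = random.Random(1)
avg = empirical_average([tau1, tau2], [0.5, 0.5], 0.5, 20000, lambda r: r, rng)
```

If the averages converge $\pi_p$-a.s. for every continuous $f$, the limiting functional identifies the candidate SRB measure.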

Number of SRB measures and their basins
The basin of an SRB measure for the systems we are dealing with is described by the following two propositions.
We now state the main result of this section. Firstly, we recall that $\langle \cdot, \cdot \rangle$ denotes an interval which is closed or open at either endpoint. Secondly, we define a set $BS$ whose elements are intervals of the form $\langle \cdot, \cdot \rangle$ with the following property:  Proof. The fact that the number of SRB measures of $F$ is bounded above by the cardinality of the set $BS$ is a direct consequence of Proposition 3.2. To elaborate on the second part of the theorem, assume without loss of generality that $\tau_{s_0}(r) > r$ for all $r \in (0,1)$. Obviously, by Proposition 3.2, if all the other maps $\tau_s$, $s \in S \setminus \{s_0\}$, have no fixed points in $(0,1)$, then $F$ admits at most one SRB measure. So let us assume that there exists an $s_* \in S \setminus \{s_0\}$ such that $\tau_{s_*}$ has a finite or infinite number of fixed points in $[0,1]$. In the case of a finite number of fixed points, denote the fixed points of $\tau_{s_*}$ in $[0,1]$ by $r_i^*$, $i = 1, \ldots, q$, with $0 \le r_1^* < r_2^* < \cdots < r_q^* \le 1$. Since $\tau_{s_0}(r_i^*) > r_i^*$ for all $r_i^* \in (0,1)$, the only possible basin for an SRB measure would be either $\langle r_{q-1}^*, 1 \rangle$ or $\langle r_q^*, 1 \rangle$. In the case of an infinite number of fixed points, let $\bar{r} = \sup\{r \in (0,1) : \tau_{s_*}(r) = r\}$.
If $\bar{r} < 1$, then $\tau_{s_0}(\bar{r}) > \bar{r}$. By Proposition 3.2, $\langle \bar{r}, 1 \rangle$ is the only possible basin for an SRB measure. If $\bar{r} = 1$, let $\bar{J}$ denote the closure of the set of fixed points of $\tau_{s_*}$ and let $\bar{J}_0 \subseteq \bar{J}$ be the minimal closed subset of $\bar{J}$ which contains the point 1. $\bar{J}_0$ is the only possible basin for an SRB measure. Moreover, it cannot be decomposed into basins of different SRB measures. Indeed, let $J_1 \cup J_2 = \bar{J}_0$ with $J_1 = \langle a, b \rangle$ and $b < 1$. Since $\tau_{s_0}(b) > b$, by Proposition 3.2, $J_1$ cannot be a basin of an SRB measure. Thus, $F$ admits at most one SRB measure.
The following example shows that Proposition 3.2 can be used to identify intervals which are not in the basin of any SRB measure. In particular, it shows that the bound on the number of SRB measures obtained in Theorem 3.3 is sharp.
The graphs of the above maps are shown in Figure 1. Using Proposition 3.2, we see that the points of the interval $(1/3, 2/3)$ do not belong to the basin of any SRB measure. Moreover, by Theorem 3.3, $F$ admits at most two SRB measures. Indeed, one can easily check that $\delta_0$ and $\delta_1$ are the only SRB measures, with basins $B(\delta_0) = [0, 1/3]$ and $B(\delta_1) = [2/3, 1]$ respectively. For any $r \in [0, 1/3)$ and all $\omega$'s, the averages $\frac{1}{T}\sum_{t=0}^{T-1} \delta_{r_t(s^t)}$ converge weakly to $\delta_0$. For $r = 1/3$ the only $\omega$ for which this does not happen is $\omega = \{1, 1, 1, \ldots\}$, so again the averages converge weakly to $\delta_0$ with $\pi_p$-probability 1. Similarly, we can show that $B(\delta_1) = [2/3, 1]$. If $r \in (1/3, 2/3)$, then with positive $\pi_p$-probability the averages converge to $\delta_0$ and with positive $\pi_p$-probability they converge to $\delta_1$. Thus, these points do not belong to the basin of any SRB measure and there are only two SRB measures.
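The qualitative behaviour of this example can be reproduced numerically. The maps below are hypothetical stand-ins for those of Figure 1 (the exact formulas are not reproduced here): $\tau_1$ fixes $\{0, 2/3, 1\}$ and moves points down on $(0, 2/3)$, while $\tau_2$ fixes $\{0, 1/3, 1\}$ and moves points up on $(1/3, 1)$.

```python
import random
from math import sqrt

# Hypothetical stand-ins: increasing homeomorphisms of [0,1].
def tau1(r):
    # fixes {0, 2/3, 1}; below the diagonal on (0, 2/3), above it on (2/3, 1)
    return 1.5 * r * r if r <= 2 / 3 else 2 / 3 + sqrt((r - 2 / 3) / (1 / 3)) / 3

def tau2(r):
    # fixes {0, 1/3, 1}; below the diagonal on (0, 1/3), above it on (1/3, 1)
    return 3 * r * r if r <= 1 / 3 else 1 / 3 + 2 * sqrt((r - 1 / 3) / (2 / 3)) / 3

def terminal_point(r0, rng, steps=400):
    r = r0
    for _ in range(steps):
        r = (tau1 if rng.random() < 0.5 else tau2)(r)
    return r

rng = random.Random(2)
low = terminal_point(0.2, rng)    # started in [0, 1/3): both maps push down
high = terminal_point(0.8, rng)   # started in (2/3, 1]: both maps push up
```

Orbits started in $[0,1/3)$ converge to 0 and orbits started in $(2/3,1]$ converge to 1, while starting points in $(1/3,2/3)$ can go either way depending on the sampled history.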

Properties of δ 0 and δ 1
In addition to condition (A), we assume in this section that for all $s \in S$: (B) $\tau_s(0) = 0$ and $\tau_s(1) = 1$. Obviously, by condition (B), the delta measures $\delta_0$ and $\delta_1$ are ergodic invariant probability measures for the IFS. We will be mainly concerned with the following question: when does $F$ have $\delta_0$ and/or $\delta_1$ as SRB measures? We start our analysis by proving a lemma which provides a sufficient condition for $\delta_x$, the point measure concentrated at $x \in [0,1]$, to be an SRB measure.
Lemma 4.1. Suppose that $\tau_s(x) = x$ for all $s \in \{1, \ldots, L\}$ and that there exists an initial point $r_0 \neq x$ of a random orbit for which $\lim_{t\to\infty} r_t(s^t) = x$ with $\pi_p$-probability one. Then $\delta_x$ is an SRB measure for $F$. Proof. Let $f$ be a continuous function on $[0,1]$. Let $r_0 \neq x$ and fix a history $s^t$ for which $\lim_{t\to\infty} r_t(s^t) = x$. Since this convergence appears with probability one, the convergence of the averages of $f$ along the orbit to $f(x)$ also appears with probability one. Thus, by Proposition 3.1, $\delta_x$ is an SRB measure. The following lemma, which is easy to prove, is a key observation for our main results in this section.
In the rest of this section, the following notation will be used: Proof. Let us consider the sequence of random exponents $\alpha(t) = \prod_{i=1}^{t} \alpha_i$, where $\alpha_i = \beta_s(r_{i-1})$ with probability $p_s$, and observe that $r_t(s^t) = r^{\alpha(t)}$.
We have $\ln \alpha(t+1) = \ln \alpha_{t+1} + \ln \alpha(t)$ and, with probability one, $E(\ln \alpha_{t+1} \mid s^t) \le 0$. Hence $\ln \alpha(t)$ is a supermartingale with bounded increments. Thus, using Theorem 5.1 in Chapter VII of [12], with probability one $\ln \alpha(t)$ does not converge to $+\infty$. Consequently, with probability one, $r_t(s^t) = r^{\alpha(t)}$ does not converge to zero.
We now prove the second statement of the theorem. Again we consider the sequence of random exponents $\alpha(t) = \prod_{i=1}^{t} \alpha_i$. We have $E(M_t) = 0$ and $\ln \alpha_t$ is uniformly bounded. Therefore, by the strong law of large numbers (see Theorem 2.19 in [8]), with probability one the averages $\frac{1}{T}\sum_{t=1}^{T} \ln \alpha_t$ converge. From this we can conclude that for $T$ large enough there is a positive random variable $\eta$ such that $\alpha(T) \le e^{-T\eta}$ a.s. Thus, since $r \in [0,1]$, for $T$ large enough we obtain $r^{\alpha(T)} \ge r^{e^{-T\eta}}$ a.s.
By letting $T$ tend to infinity we obtain $\lim_{T\to\infty} r_T(s^T) = 1$ a.s. The proof of the third statement is very similar to the proof of the second one, with slight changes. In particular, using (4.1), we see that, with probability one, for $T$ large enough there is a positive random variable $\eta$ such that $\alpha(T) \ge e^{T\eta}$ a.s. Thus, since $r \in [0,1]$, for $T$ large enough we obtain $r^{\alpha(T)} \le r^{e^{T\eta}}$ a.s.
By letting $T$ tend to infinity we obtain $\lim_{T\to\infty} r_T(s^T) = 0$ a.s. Proof. The proof is a consequence of statements (2) and (3) of Theorem 4.3 and Lemma 4.1.
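The role of the strong law of large numbers for $\ln \alpha(t)$ can be checked numerically in the special (assumed) case $\tau_s(r) = r^{\beta_s}$ with constant exponents $\beta_s > 0$, for which $\alpha(T) = \prod_{t \le T} \beta_{s_t}$:

```python
import random
from math import log

# Assumed maps tau_s(r) = r ** beta_s, beta_s > 0: increasing, fixing 0 and 1.
betas, probs = [2.0, 0.25], [0.6, 0.4]
drift = sum(p * log(b) for p, b in zip(probs, betas))  # E ln(beta_s)

rng = random.Random(3)
T, log_alpha = 20000, 0.0
for _ in range(T):
    log_alpha += log(rng.choices(betas, weights=probs)[0])
# By the strong law of large numbers, log_alpha / T is close to the drift.
# Here the drift is negative, so alpha(T) -> 0 and r ** alpha(T) -> 1 for
# any starting point r in (0, 1).
```

With these parameters the drift is $0.6\ln 2 + 0.4\ln(1/4) < 0$, matching the mechanism behind statement (2).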
Remark 4.5. Observe that: (1) Remark 4.6. In the proof of statement (1) of Theorem 4.3, we have shown that, with $\pi_p$-probability one, $\ln \alpha(t)$ does not converge to $\infty$. In general, it is not clear that this implies that $\delta_0$ is not an SRB measure. However, in the following theorem, under an additional natural assumption on the variance of $\ln \alpha_t$, we show that $\delta_0$ is indeed not an SRB measure.
Proof. Consider the sequence of random exponents $\alpha(t) = \prod_{i=1}^{t} \alpha_i$, where $\alpha_i = \beta_s(r_{i-1})$ with probability $p_s$, and observe that the sequence $Z_T = \ln \alpha(T)$ forms a supermartingale with bounded increments. Doob's decomposition theorem gives the representation $Z_T = W_T + S_T$, where $W_T = \sum_{t=1}^{T} E(\ln \alpha_t \mid s^{t-1})$ is a decreasing predictable sequence and $S_T$ is a mean-zero martingale with bounded increments. By Theorem 5.1 (Ch. VII) of [12], with probability 1 the process $S_T$ either converges to a finite limit or $\limsup_{T\to\infty} S_T = -\liminf_{T\to\infty} S_T = \infty$. In the first case the process $Z_T$ is bounded from above. We will consider only the second case, and show that with positive probability the process $Z_T$ is non-positive on a set of indices $T$ which has positive density in $\mathbb{N}$; i.e., there exist $M > 0$ and $0 < a, b < 1$ for which the estimate below holds. Let us denote $X_t = \ln \alpha_t - E(\ln \alpha_t \mid s^{t-1})$, $t \ge 1$.
This sequence satisfies the assumptions of Theorem 7.1, with $\mathcal{A}_t = \sigma(s^t)$. Thus, the sequence $X_t$ satisfies the assumptions of Proposition 7.2. In particular, (7.1) holds; i.e., if $\mathrm{Pos}_T$ is the number of times $\ln \alpha(t) > 0$ for $t \le T$, then $\limsup_{T\to\infty} Pr(\mathrm{Pos}_T / T \le a) \ge b > 0$, where $a, b$ are some numbers in $(0,1)$. This means that if $N_T$ is the number of times $\ln \alpha(t) \le 0$ for $t \le T$, then $\limsup_{T\to\infty} Pr(N_T / T \ge 1 - a) \ge b > 0$. Thus, with positive probability $b/2 > 0$, there exists a sequence $T_n \to \infty$ such that $N_{T_n} \ge (1-a) T_n$. Hence, with probability at least $b/2$, $\ln \alpha(t)$ is non-positive on a set of times of positive density, which proves that there is no weak convergence to $\delta_0$.

5. Properties of $\delta_0$ and $\delta_1$: The case when $p$ is unknown
In general, one cannot decide whether $\delta_0$ or $\delta_1$ is the unique SRB measure without having information about $p$. We illustrate this fact in the following example.
By Corollary 4.4, if $p_1 < 1/2$ the measure $\delta_1$ is the unique SRB measure of $F$; however, if $p_1 > 1/2$ the measure $\delta_0$ is the unique SRB measure of $F$. Thus, for this example, no conclusion about the nature of $\delta_0$ or $\delta_1$ can be drawn without information about $p$.
Although Example 5.1 shows that the analysis cannot be definitive in some cases without knowing the probability distribution on S, our aim in this section is to find situations when δ 0 and/or δ 1 are not SRB. Moreover, in addition to studying the properties of δ 0 and δ 1 , we are going to study the case when the IFS admit an invariant probability measure whose support is separated from zero and is not necessarily concentrated at one. The definition of such a measure is given below.
Definition 5.2. Let µ be a probability measure on ([0, 1], B), where B is the Borel σ-algebra. We define the support of µ, denoted by supp(µ), as the smallest closed set of full µ measure. We say that supp(µ) is separated from zero if there exists an η > 0 such that µ([0, η]) = 0.
In addition to properties (A) and (B), we assume in this section that: (C) Every τ s has a finite number of fixed points.
In this section, we use graph-theoretic techniques to analyze the ergodic properties of $\delta_0$ and $\delta_1$. This approach is inspired by the concept of a Markov partition used in the dynamical systems literature. For instance, in [2], the ergodic properties of a deterministic system which admits a Markov partition are studied via a directed graph and an incidence matrix. In our approach we construct a partition for our random dynamical system akin to a Markov partition and use two directed graphs to study the ergodic properties of the system. We now introduce the two graphs, $G_d$ and $G_u$, which we will use in our analysis.
(1) Both $G_d$ and $G_u$ have the same vertices.
(2) For $s \in \{1, \ldots, L\}$, an interval $J_{s,m} = (a_{s,m}, a_{s,m+1})$ is a vertex in $G_d$ and $G_u$ if and only if $\tau_s(a_{s,m}) = a_{s,m}$, $\tau_s(a_{s,m+1}) = a_{s,m+1}$ and $\tau_s(r) \neq r$ for all $r \in (a_{s,m}, a_{s,m+1})$.
(3) Let $J_{s,m}$ and $J_{l,j}$ be two vertices of $G_d$. There is a directed edge connecting $J_{s,m}$ to $J_{l,j}$ if and only if there exist an $r \in J_{s,m}$ with $r > a_{l,j+1}$ and a $t \ge 1$ such that $\tau_s^t(r) \in J_{l,j}$.
(4) Let $J_{s,m}$ and $J_{l,j}$ be two vertices of $G_u$. There is a directed edge connecting $J_{s,m}$ to $J_{l,j}$ if and only if there exist an $r \in J_{s,m}$ with $r < a_{l,j}$ and a $t \ge 1$ such that $\tau_s^t(r) \in J_{l,j}$.
(5) The out-degree of a vertex is the number of outgoing directed edges from this vertex in the graph, and the in-degree of a vertex is the number of incoming directed edges incident to this vertex in the graph.
(6) A vertex is called a source if its in-degree equals zero, and a sink if its out-degree equals zero.
For the above graphs, one can identify two types of vertices. Let $(a_{s,m}, a_{s,m+1})$ be a vertex. If $\tau_s(r) > r$ for all $r \in (a_{s,m}, a_{s,m+1})$, then the vertex $(a_{s,m}, a_{s,m+1})$ will be denoted by $\hat{J}_{s,m}$. If $\tau_s(r) < r$ for all $r \in (a_{s,m}, a_{s,m+1})$, then the vertex $(a_{s,m}, a_{s,m+1})$ will be denoted by $\check{J}_{s,m}$. When we prove a statement for a vertex $J_{s,m}$ (without a label), this means that the result holds for both types of vertices. The following lemma contains some properties of $G_d$ and $G_u$. Lemma 5.3. Let $G_d$ and $G_u$ be defined as above.
(1) If $\hat{J}_{s,m}$ is a vertex in $G_d$, then $\hat{J}_{s,m}$ is a sink in $G_d$.
(2) Let $\check{J}_{s,m}$ and $J_{l,j}$ be two vertices in $G_d$. There is a directed edge connecting $\check{J}_{s,m}$ to $J_{l,j}$ in $G_d$ if and only if $a_{s,m} < a_{l,j+1} < a_{s,m+1}$. In particular, for all $s \in S$ there is no directed edge in $G_d$ connecting $J_{s,m}$ to $J_{s,j}$ for any $m$ and $j$.
(3) If $\check{J}_{s,m}$ is a vertex in $G_u$, then $\check{J}_{s,m}$ is a sink in $G_u$.
(4) Let $\hat{J}_{s,m}$ and $J_{l,j}$ be two vertices in $G_u$. There is a directed edge connecting $\hat{J}_{s,m}$ to $J_{l,j}$ in $G_u$ if and only if $a_{s,m} < a_{l,j} < a_{s,m+1}$. In particular, for all $s \in S$ there is no directed edge in $G_u$ connecting $J_{s,m}$ to $J_{s,j}$ for any $m$ and $j$.
Proof. The proof of the first statement is straightforward. Indeed, let $J_{l,j}$ be any vertex in $G_d$ and $r \in \hat{J}_{s,m}$ with $r > a_{l,j+1}$. Then for all $t \ge 1$, $\tau_s^t(r) > \tau_s^{t-1}(r) > \cdots > \tau_s(r) > r > a_{l,j+1}$. The proof of the second statement follows from the fact that if $r > a_{s,m} \ge a_{l,j+1}$, then for $t \ge 1$ we have $\tau_s^t(r) > a_{s,m} \ge a_{l,j+1}$. If $r > a_{l,j+1} > a_{s,m}$, then there exists a $t \ge 1$ such that $a_{s,m} < \tau_s^t(r) < a_{l,j+1}$. The proofs of the third and fourth statements are similar to those of the first two.
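The construction of $G_d$ can be sketched in code. The function below is an illustration rather than an algorithm from the text: it takes the (assumed known) fixed points of each map and the direction of motion on each gap, and applies Lemma 5.3 (an upward vertex is a sink in $G_d$; a downward vertex $(s,m)$ has an edge to $(l,j)$ iff $a_{s,m} < a_{l,j+1} < a_{s,m+1}$, and never to a vertex of the same map).

```python
def build_Gd(fixed_points, directions):
    """Sketch of the graph G_d.  fixed_points[s]: sorted fixed points of
    tau_s in [0,1]; directions[s][m]: '+' if tau_s(r) > r on the m-th gap
    (a_{s,m}, a_{s,m+1}) and '-' otherwise."""
    vertices = [(s, m) for s, fps in enumerate(fixed_points)
                for m in range(len(fps) - 1)]
    edges = set()
    for (s, m) in vertices:
        if directions[s][m] != '-':
            continue  # by Lemma 5.3 (1), upward vertices are sinks in G_d
        lo, hi = fixed_points[s][m], fixed_points[s][m + 1]
        for (l, j) in vertices:
            if l == s:
                continue  # no edges between vertices of the same map
            upper = fixed_points[l][j + 1]
            if lo < upper < hi:  # Lemma 5.3 (2)
                edges.add(((s, m), (l, j)))
    return vertices, edges

# Two hypothetical maps: tau_0 fixes {0, 1/4, 1}, tau_1 fixes {0, 1/2, 1},
# each downward on its first gap and upward on its second.
verts, edges = build_Gd([[0, 0.25, 1], [0, 0.5, 1]], [['-', '+'], ['-', '+']])
```

Here the only edge runs from the downward vertex $(0, 1/2)$ of $\tau_1$ into the vertex $(0, 1/4)$ of $\tau_0$, since $0 < 1/4 < 1/2$.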
For our further analysis we introduce the following notion.
Definition 5.4. We say that a random orbit of $F$ stays above a point $c$ if, with $\pi_p$-probability one, all the points of the infinite orbit are greater than or equal to $c$.
Lemma 5.5. Let $J_{l,j}$ be a vertex in $G_d$ such that $a_{l,j+1} \neq 1$. If $J_{l,j}$ is a source in $G_d$, then the random orbit of $F$ starting from $r > a_{l,j+1}$ stays above $a_{l,j+1}$ with $\pi_p$-probability one.
Proof. Suppose $J_{l,j}$ is a source in $G_d$. Then for all $r > a_{l,j+1}$, we have $\tau_s^t(r) > a_{l,j+1}$ for all $s \in S$ and $t \ge 1$. This means that if $r > a_{l,j+1}$ we have $\tau_{s_1}(r) > a_{l,j+1}$, $\tau_{s_2} \circ \tau_{s_1}(r) > a_{l,j+1}$, and so on.
Theorem 5.6. Let F be an IFS whose transformations satisfy the properties (A), (B) and (C).
(1) If for all $s \in S$ there is a vertex $\check{J}_{s,m}$ in $G_d$ with $a_{s,m} = 0$, then $\delta_0$ is an SRB measure and $B(\delta_0) \supseteq [0, a)$, where $a = \min_s \{a_{s,m+1}\}$. In particular, for any $r_0 \in [0, a)$, $\lim_t r_t(s^t) = 0$ a.s. (2) If for all $s \in S$ there is a vertex $\hat{J}_{s,m}$ in $G_d$ with $a_{s,m+1} = 1$, then $\delta_1$ is an SRB measure. Moreover, $B(\delta_1) \supseteq (b, 1]$, where $b = \max_s \{a_{s,m}\}$. In particular, for any $r_0 \in (b, 1]$, $\lim_t r_t(s^t) = 1$ a.s. (3) Let $J_{l,j}$ be a vertex in $G_d$ such that $a_{l,j+1} \neq 1$. If $J_{l,j}$ is a source in $G_d$, then $F$ preserves a probability measure whose support is separated from 0. (The invariant measure here is not necessarily $\delta_1$. Moreover, in the case where $a_{l,j} = 0$, even if another $\hat{J}_{s,m}$ with $a_{s,m} = 0$ receives a directed edge, the result still holds; thus, to establish the existence of an invariant probability measure whose support is separated from 0, it is enough to find one vertex $J_{l,j}$ with $a_{l,j} = 0$ which is a source in $G_d$. The statements of Lemma 5.3 can be useful to visualize cases of this type.)
(4) Let $J_{l,j}$ be a vertex in $G_u$ such that $a_{l,j} \neq 0$. If $J_{l,j}$ is a source in $G_u$, then $F$ preserves a probability measure whose support is separated from 1.
(5) Let $\hat{J}_{s_*,m}$ be a vertex with $a_{s_*,m} = 0$ whose out-degree in $G_u$ is at least one. If there exists a vertex $J_{s_0,j}$ in $G_d$, with $a_{s_0,j} = 0$ and $a_{s_0,j+1} < a_{s_*,m+1}$, which is a source in $G_d$, then for any $r_0 \in (0, 1]$, with positive $\pi_p$-probability, the random orbit does not converge to 0. Moreover, $\delta_0$ is not an SRB measure for $F$. (6) Let $\check{J}_{s_0,j}$ be a vertex in $G_d$ such that $a_{s_0,j+1} = 1$ and whose out-degree in $G_d$ is at least one. If there exists a $J_{s_*,m}$ in $G_u$, with $a_{s_*,m+1} = 1$ and $a_{s_*,m} > a_{s_0,j}$, which is a source in $G_u$, then for any $r_0 \in [0, 1)$, with positive $\pi_p$-probability, the random orbit does not converge to 1. Moreover, $\delta_1$ is not an SRB measure for $F$. (7) If for all $s \in S$ the vertices with $a_{s,m} = 0$ are of the form $\hat{J}_{s,m}$ and their right endpoints $a_{s,m+1} \equiv a$ are identical, then for any $r_0 \in (0, a]$, with probability one, $\lim_t r_t(s^t) = a$. In particular, $\delta_a$ is an SRB measure with $B(\delta_a) = (0, a]$ and $\delta_0$ is not an SRB measure. Proof. We only prove the odd-numbered statements in the theorem; the proofs of the even-numbered statements are very similar. (1) For any $r_0 \in [0, a)$, any random orbit of $F$ converges to zero. Using Lemma 4.1, this shows that $\delta_0$ is an SRB measure with $B(\delta_0) \supseteq [0, a)$.
(3) Let $r_0 > a_{l,j+1}$. Since $[0,1]$ is a compact metric space and $\tau_s$ is continuous for all $s \in S$, the averages $\frac{1}{T}\sum_{t=0}^{T-1} P^t \delta_{r_0}$ of the probability measures converge in the weak* topology to an $F$-invariant probability measure. By Lemma 5.5, this measure is supported on $[a_{l,j+1}, 1]$. (5) Let $D = \{J_{s,m} \setminus \{0\} : a_{s,m} = 0\}$. For any $r_0 \in D$, there exists a finite $t \ge 1$ such that $\tau_{s_*}^t(r_0) > a_{s_0,j+1}$. Since $J_{s_0,j}$ is a source in $G_d$, by Lemma 5.5, the orbit of $\tau_{s_*}^t(r_0)$ stays above $a_{s_0,j+1}$ with $\pi_p$-probability one. Therefore, for any $r_0 \in D$, with positive probability, the random orbit of $r_0$ is bounded away from 0. Let us now consider the case of a starting point $r_0 > a_{s_*,m+1}$. Since all the transformations are homeomorphisms and 0 is a common fixed point, for any $r_0 > a_{s_*,m+1}$ and any $t \ge 0$, with positive probability, $r_t(s^t) > a_{s_*,m+1}$. Hence, for any $r \in (0,1]$, with strictly positive probability, the random orbit is bounded away from 0, and the empirical averages in (2.2) stay bounded away from $\delta_0$, which gives (5.1). Now, to show that $\delta_0$ is not an SRB measure, it is enough to find a continuous function $f$ on $[0,1]$ such that, with positive probability, for any $r \in (0,1]$, the averages of $f$ along the orbit do not converge to $f(0)$. Indeed, this is the case if we use the function $f(r) = r$ and (5.1). Thus, $\delta_0$ is not an SRB measure. (7) Obviously, for any $r_0 \in (0, a]$, the random orbit of $F$ starting at $r_0$ converges to $a$. Using Lemma 4.1, this implies that $\delta_a$ is an SRB measure with $B(\delta_a) = (0, a]$. Moreover, since all the transformations are homeomorphisms with a common fixed point at $a$, for any $r_0 > a$ the random orbit of $F$ stays above $a$. Thus, $\delta_0$ is not an SRB measure.
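Statement (7) can be illustrated numerically with assumed maps: both stand-ins below fix $\{0, 1/2, 1\}$ and satisfy $\tau_s(r) > r$ on $(0, 1/2)$, so a random orbit started in $(0, 1/2]$ should converge to $a = 1/2$.

```python
import random
from math import sqrt

a = 0.5  # common right endpoint of the upward vertices adjoining 0

def tau1(r):
    # sqrt(a*r) > r on (0, a); above a, a quadratic piece fixing a and 1
    return sqrt(a * r) if r <= a else a + (r - a) ** 2 / (1 - a)

def tau2(r):
    # (a * r^2)^(1/3) > r on (0, a); same piece above a
    return (a * r * r) ** (1 / 3) if r <= a else a + (r - a) ** 2 / (1 - a)

rng = random.Random(4)
r = 0.01
for _ in range(200):
    r = (tau1 if rng.random() < 0.5 else tau2)(r)
# r is now numerically at a = 1/2, consistent with delta_a being SRB
# with basin (0, a] while delta_0 is not SRB.
```

In log coordinates, each step multiplies $\ln(r/a)$ by $1/2$ or $2/3$, so convergence to $a$ is geometric regardless of the sampled history.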

Asset Market Games
In this section, we apply our results to evolutionary models of financial markets. In particular, we focus on the model introduced in [4], which we now recall.
6.1. The Model. Let $S$ be a finite set and let $s_t \in S$, $t = 1, 2, \ldots$, be the "state of the world" at date $t$. Let $p$ be a probability distribution on $S$ such that $p(s) > 0$ for all $s \in S$. We also assume that the $s_t$ are independent and identically distributed.
In this model there are $K$ "short-lived" assets $k = 1, 2, \ldots, K$ (they live one period and are identically reborn every next period). One unit of asset $k$ issued at time $t$ yields payoff $D_k(s_{t+1}) \ge 0$ at time $t+1$. It is assumed that $E D_k(s_t) > 0$ for each $k = 1, 2, \ldots, K$, where $E$ is the expectation with respect to the underlying probability $p$. The total amount of asset $k$ available in the market is $V_k = 1$.
In this model there are $I$ investors (traders) $i = 1, \ldots, I$. Every investor $i$ at each time $t = 0, 1, 2, \ldots$ has a portfolio $x_t^i = (x_{t,1}^i, \ldots, x_{t,K}^i)$, where $x_{t,k}^i = x_{t,k}^i(s^t)$ is the number of units of asset $k$ in the portfolio, $s^t = (s_1, \ldots, s_t)$. We assume that for each moment of time $t \ge 1$ and each random situation $s^t$, the market for every asset $k$ clears:
$$(6.1)\qquad \sum_{i=1}^{I} x_{t,k}^i = V_k = 1.$$
Each investor is endowed with initial wealth $w_0^i > 0$. The wealth $w_{t+1}^i$ of investor $i$ at time $t+1$ can be computed as follows:
$$(6.2)\qquad w_{t+1}^i = \sum_{k=1}^{K} D_k(s_{t+1})\, x_{t,k}^i.$$
Total market wealth at time $t+1$ is equal to
$$(6.3)\qquad W_{t+1} = \sum_{i=1}^{I} w_{t+1}^i = \sum_{k=1}^{K} D_k(s_{t+1}).$$
Investment strategies are characterized in terms of investment proportions:
$$(6.4)\qquad \lambda_t^i = (\lambda_{t,1}^i, \ldots, \lambda_{t,K}^i), \qquad \lambda_{t,k}^i \ge 0, \qquad \sum_{k=1}^{K} \lambda_{t,k}^i = 1.$$
Here, $\lambda_{t,k}^i$ stands for the share of the budget $w_t^i$ of investor $i$ that is invested into asset $k$ at time $t$. In general $\lambda_{t,k}^i$ may depend on $s^t = (s_1, s_2, \ldots, s_t)$. Given strategies $\Lambda^i = \{\lambda_0^i, \lambda_1^i, \lambda_2^i, \ldots\}$ of investors $i = 1, \ldots, I$, the equation
$$(6.5)\qquad p_{t,k} = \sum_{i=1}^{I} \lambda_{t,k}^i\, w_t^i$$
determines the market clearing price $p_{t,k} = p_{t,k}(s^t)$ of asset $k$. The number of units of asset $k$ in the portfolio of investor $i$ at time $t$ is equal to
$$(6.6)\qquad x_{t,k}^i = \frac{\lambda_{t,k}^i\, w_t^i}{p_{t,k}}.$$
By using (6.6) and (6.2), we get
$$(6.7)\qquad w_{t+1}^i = \sum_{k=1}^{K} D_k(s_{t+1})\, \frac{\lambda_{t,k}^i\, w_t^i}{p_{t,k}}.$$
Since $w_0^i > 0$, we obtain $w_t^i > 0$ for each $t$. The main focus of the model is on the analysis of the dynamics of the market shares of the investors $r_t^i = w_t^i / W_t$. Using (6.7) and (6.3), we obtain
$$(6.8)\qquad r_{t+1}^i = \sum_{k=1}^{K} R_k(s_{t+1})\, \frac{\lambda_{t,k}^i\, r_t^i}{\sum_{j=1}^{I} \lambda_{t,k}^j\, r_t^j},$$
where $R_k(s) = D_k(s) / \sum_{m=1}^{K} D_m(s)$ are the relative (normalized) payoffs of the assets $k = 1, 2, \ldots, K$. We have $R_k(s) \ge 0$ and $\sum_k R_k(s) = 1$.
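One period of the wealth dynamics can be sketched as follows; the payoff matrix and the (simple) strategies are hypothetical. Prices clear via $p_{t,k} = \sum_i \lambda_k^i w_t^i$, holdings are $x_{t,k}^i = \lambda_k^i w_t^i / p_{t,k}$, and next-period wealth is $\sum_k D_k(s_{t+1}) x_{t,k}^i$.

```python
import random

def market_step(w, lam, D, s):
    """One period of the market game: wealths w[i], simple strategies
    lam[i][k], payoffs D[k][s], total supply V_k = 1."""
    I, K = len(w), len(lam[0])
    # Market-clearing price of asset k: p_k = sum_i lam[i][k] * w[i]
    prices = [sum(lam[i][k] * w[i] for i in range(I)) for k in range(K)]
    # Holdings lam[i][k] * w[i] / p_k, then next-period wealth.
    return [sum(D[k][s] * lam[i][k] * w[i] / prices[k] for k in range(K))
            for i in range(I)]

# Two assets, two equally likely states; hypothetical payoffs D[k][s].
D = [[1.0, 0.0], [0.0, 1.0]]
lam = [[0.5, 0.5], [0.9, 0.1]]   # investor 1 "bets the beliefs", investor 2 does not
w = [1.0, 1.0]
rng = random.Random(5)
for _ in range(500):
    w = market_step(w, lam, D, rng.randrange(2))
share1 = w[0] / (w[0] + w[1])    # investor 1's market share r^1
```

With these payoffs the relative payoffs are $R = (R_1, R_2)$ with $E R_k = 1/2$, so investor 1's strategy is the Kelly rule and her market share tends to 1, as the results recalled below predict.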
6.2. Performance of investment strategies and the Kelly rule. In the theory of evolutionary finance there are three possible grades for investor $i$ (or for the strategy she/he employs): (i) extinction: $\lim r_t^i = 0$ a.s.; (ii) survival: $\limsup r_t^i > 0$ but $\liminf r_t^i < 1$ a.s.; (iii) domination: $\lim r_t^i = 1$ a.s. Definition 6.1. An investment strategy is called completely mixed if it assigns a positive share of wealth $\lambda_{t,k}(s^t)$ to every asset $k = 1, \ldots, K$ for all $t$ and $s^t$; moreover, it is called simple if $\lambda_{t,k}(s^t) = \lambda_k > 0$.

E2) All investors use simple strategies;
it was shown that investors who follow the Kelly rule survive, while others who use a different simple strategy become extinct. In particular, if only one investor follows the Kelly rule, then this investor dominates the market.
The main challenge in using the Kelly rule lies in the fact that it requires from investors the full knowledge of the probability distribution p. In Subsection 6.4, using an IFS representation of (6.8) and Theorem 4.3, we overcome this difficulty by finding another successful strategy which requires partial knowledge of the probability distribution p.

6.3. An IFS realization of the model. In the rest of the paper, we show how the above model can be represented by an IFS, and we apply the results of Sections 4 and 5 to study the dynamics of (6.8). As in [4], we assume here that all the investors use simple strategies. Further, we focus on the case when $I = 2$. The market selection process (6.8) then reduces to the following one-dimensional system:
$$(6.9)\qquad r_{t+1} = \sum_{k=1}^{K} R_k(s_{t+1})\, \frac{\lambda_k^1 r_t}{\lambda_k^1 r_t + \lambda_k^2 (1 - r_t)},$$
where $r_t$ is investor 1's relative market share at time $t$ and $(\lambda_k^1)_{k=1}^K$ and $(\lambda_k^2)_{k=1}^K$ are the investment strategies of investors 1 and 2 respectively. The random dynamics (6.9) of the market selection process can then be described by an iterated function system with probabilities.
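The constituent maps of this IFS can be written out directly from (6.8) by setting $r = r^1$ and $1 - r = r^2$; the sketch below (with hypothetical payoffs and strategies) checks that each $\tau_s$ fixes 0 and 1 and is increasing, i.e., that conditions (A) and (B) hold.

```python
def tau(r, s, R, lam1, lam2):
    """Map of the I = 2 market selection process:
    tau_s(r) = sum_k R[k][s] * lam1[k]*r / (lam1[k]*r + lam2[k]*(1-r))."""
    return sum(R[k][s] * lam1[k] * r / (lam1[k] * r + lam2[k] * (1 - r))
               for k in range(len(lam1)))

# Hypothetical relative payoffs R[k][s] (columns sum to 1) and strategies.
R = [[0.7, 0.2], [0.3, 0.8]]
lam1, lam2 = [0.5, 0.5], [0.6, 0.4]

fix0 = tau(0.0, 0, R, lam1, lam2)   # tau_s(0) = 0
fix1 = tau(1.0, 0, R, lam1, lam2)   # tau_s(1) = sum_k R_k(s) = 1
```

Each summand $\lambda_k^1 r / (\lambda_k^1 r + \lambda_k^2 (1-r))$ is increasing in $r$, so each $\tau_s$ is a strictly increasing transformation of the interval.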
We first note that the transformations τ s of the IFS of the market selection process are maps from the unit interval into itself and they satisfy assumptions (A), (B) and (C). In fact, the maps for this model have additional properties. For example, they are differentiable functions.
6.4. Investors with partial information on $p$ and a generalization of the Kelly rule. We use Theorem 4.3 to provide a rule for investors with partial information on $p$. An investor who follows this rule cannot be driven out of the market; i.e., she/he either dominates or at least survives. The importance of this rule lies in the fact that investor 1 does not need to know the Kelly rule exactly; she/he only needs to know a perturbation of the Kelly rule, for example the Kelly rule plus some error bounds. Firstly, we show in the following lemma that the logarithms of the exponents $\beta_s(r)$ are uniformly bounded.
The minimum and maximum of $\beta(r)$ can be attained at $r = 0$, at $r = 1$, or at a point of a local extremum. Using L'Hôpital's rule we find $\lim_{r \to 0^+} \beta(r) = 1$. A point of local extremum $r^* \in (0,1)$ of $\beta(r)$ is found by solving $\beta'(r) = 0$, which gives the value of $\beta$ at the point $r = r^*$ of the local extremum.
Observe that the resulting function is continuous on $[0,1]$. Thus, it attains its maximum and minimum on $[0,1]$. This completes the proof of the lemma.
It is often difficult for an investor to know the exact probability distribution of the states of the world. Corollary 6.3. Let $\underline{\beta} = \min_{s \in S,\, r \in [0,1]} \beta_s(r)$ and $\bar{\beta} = \max_{s \in S,\, r \in [0,1]} \beta_s(r)$. Theorem 6.4. If for each $k \in \{1, \ldots, K\}$, $\lambda_k^1$ lies between $E R_k$ and $\lambda_k^2$, then investor 1 cannot be driven out of the market; i.e., she/he either dominates or at least survives.
Proof. Let us consider the function $G$, in which $(\lambda_k^1)$ is a probability vector. We will find conditions on $\lambda_k^1$ which ensure $G(r) \ge 1$ for $r \in [0,1]$. It is easy to see that (6.10) holds. We also have that $G$ is a convex function, so its derivative $G'$ is increasing. If $G'(1) \le 0$ then $G$ is decreasing, and because of (6.10) this implies that $G(r) \ge 1$ for $r \in [0,1]$. It is easy to see that a sufficient condition for (6.11) is that, for each $k$, $1 \le k \le K$, $\lambda_k^1$ lies between $\lambda_k^2$ and $v_k$. Now, let us consider the expression $\sum_{s=1}^{L} p_s \ln(\beta_s(r))$, with $v_k$ being the expected payoff of the $k$-th asset, $v_k = \sum_{s=1}^{L} p_s R_k(s)$, $k = 1, \ldots, K$.
We have shown before that a sufficient condition for this is (6.11); i.e., placing each $\lambda_k^1$ between the expected payoff $v_k$ and $\lambda_k^2$. To complete the proof of the theorem, we first use Lemma 6.2 to observe that the exponents $\beta_s(r)$ of this system are bounded, and then apply statement (1) of Theorem 4.3. Indeed, for any fixed partial history $s^{t-2}$, because the stochastic process $s_t$ is i.i.d., we have $E(\ln \alpha_t \mid s^{t-1}) = \sum_{s=1}^{L} p_s \ln(\beta_s(r_{t-2}))$.
6.5. Incorrect beliefs. Our results in Section 5 are also of interest for studying the dynamics of (6.8). In fact, they can be used to study the dynamics in the situation where both players have 'incorrect beliefs'; i.e., when the players have wrong or only partial information about $p$. Thus, they either use wrong distributions to build their strategies or choose their strategies arbitrarily. Consequently, their strategies are, in general, different from the Kelly rule and from the generalization which we presented in Subsection 6.4. In this case, the results of Section 5 can be used to identify the exact outcome of the game in certain situations; in others, as in Example 5.1, one cannot know the outcome of the system without knowing $p$.
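The sufficient condition of Theorem 6.4 can be probed by simulation. In the assumed example below, each $\lambda_k^1$ lies between $v_k = E R_k$ and $\lambda_k^2$, and investor 1's market share stays bounded away from 0 along the whole run.

```python
import random

def tau(r, s, R, lam1, lam2):
    """Map (for I = 2) written from the market selection process above."""
    return sum(R[k][s] * lam1[k] * r / (lam1[k] * r + lam2[k] * (1 - r))
               for k in range(len(lam1)))

# Hypothetical relative payoffs R[k][s]; with p = (1/2, 1/2),
# v = (E R_1, E R_2) = (0.45, 0.55).
R = [[0.7, 0.2], [0.3, 0.8]]
lam2 = [0.8, 0.2]
lam1 = [0.55, 0.45]   # each lam1[k] lies between v_k and lam2[k]
probs = [0.5, 0.5]

rng = random.Random(6)
r, low = 0.5, 1.0
for _ in range(5000):
    s = 0 if rng.random() < probs[0] else 1
    r = tau(r, s, R, lam1, lam2)
    low = min(low, r)
# Investor 1 is not driven out: her share never approaches 0, and in this
# run (being closer to the Kelly proportions than investor 2) she dominates.
```

Note that only the error bounds around $v_k$ are needed to place $\lambda^1$, not $p$ itself, which is the point of the generalization.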

Appendix
The following general arcsine law has been proved in [6].
Theorem 7.1. Let $X_1, X_2, \ldots$ be adapted to an increasing sequence of $\sigma$-algebras $\mathcal{A}_1, \mathcal{A}_2, \ldots$, let $v_m = \sum_{i=1}^{m} E(X_i^2 \mid \mathcal{A}_{i-1})$, and assume $E(X_{m+1} \mid \mathcal{A}_m) = 0$, $E X_m^2 < \infty$, and $v_m \to \infty$ a.s. Let $T_n = \inf\{m : v_m \ge n\}$ and let $L_n$ denote the fraction of time up to $T_n$ during which the partial sums of the $X_i$ are positive. If the corresponding conditional Lindeberg condition holds for all $\varepsilon > 0$, then the distributions of $L_n$ converge to the arcsine distribution.
We now use Theorem 7.1 to prove a proposition which is used in the proof of Theorem 4.7.
Proposition 7.2. Let $X_1, X_2, \ldots$ be a sequence of random variables adapted to the sequence of $\sigma$-algebras $\mathcal{A}_1, \mathcal{A}_2, \ldots$. Suppose that there exist constants $d > 0$ and $0 < D < \infty$ such that for all $n \ge 1$ we have $0 < d \le E(X_n^2 \mid \mathcal{A}_{n-1})$ and $X_n^2 \le D$. Then the sequence satisfies the remaining assumptions of Theorem 7.1. In particular, Theorem 7.1 implies the condition
$$(7.1)\qquad \limsup_{n\to\infty} Pr\!\left(\frac{\mathrm{Pos}_n}{n} \le a\right) \ge b > 0$$
for some constants $0 < a, b < 1$, where $\mathrm{Pos}_n = \sum_{i=1}^{n} \chi_{\{S_i > 0\}}$. Proof. The remaining assumptions of Theorem 7.1 are trivially satisfied. We have $m \cdot d \le v_m \le m \cdot D$ for all $m \ge 1$, so $T_n \cdot d \le n \le T_n \cdot D$ for all $n \ge 1$. Then, for $0 \le a_1 \le 1$, we obtain the corresponding estimate. For $a_1$ small enough we obtain a meaningful estimate with $a = a_1 D/d$ and $n$ large enough. This implies condition (7.1).