On the One Dimensional "Learning from Neighbours" Model

We consider a model of a discrete time "interacting particle system" on the integer line where infinitely many changes are allowed at each time instant. We describe the model using chameleons of two different colours, viz., red (R) and blue (B). At each time instant every chameleon performs an independent but identical coin toss experiment with success probability α to decide whether to change its colour or not. If the coin lands heads then the creature retains its colour (this is to be interpreted as a "success"); otherwise it observes the colours and coin tosses of its two nearest neighbours and changes its colour only if, among its neighbours and including itself, the proportion of successes of the other colour is larger than the proportion of successes of its own colour. This produces a Markov chain with infinite state space {R, B}^ℤ. This model was studied by Chatterjee and Xu [5] in the context of diffusion of technologies in a set-up of myopic, memoryless agents. In their work they assume different success probabilities of the coin tosses according to the colour of the chameleon. In this work we consider the symmetric case where the success probability, α, is the same irrespective of the colour of the chameleon. We show that starting from any translation invariant initial distribution of colours the Markov chain converges to a limit of a single colour, i.e., even in the symmetric case there is no "coexistence" of the two colours in the limit. As a corollary we also characterize the set of all translation invariant stationary laws of this Markov chain. Moreover we show that starting with an i.i.d. colour distribution with density p ∈ [0, 1] of one colour (say red), the limiting distribution is all red with probability π(α, p), which is continuous in p and satisfies π(α, p) > p for p "small". The last result can be interpreted as saying that the model favours the "underdog".


Background and Motivation
Chatterjee and Xu [5] introduced a model of particle systems consisting of a countable number of particles of two types, each particle being situated at an integer point of the integer line. The type of a particle evolves with time depending on the behaviour of the neighbouring particles. This model, as Chatterjee and Xu explain, is "... a problem of diffusion of technology, where one technology is better than the other and agents imitate better technologies among their neighbours." The model above is part of a large class of models studied by economists over the last decade on 'social learning'. Ellison and Fudenberg [6] introduced the notion of social learning: they studied how the speed of learning and the ultimate determination of market equilibrium are affected by social networks and other institutions governing communication between market participants. Bala and Goyal [2] studied a model where "... individuals periodically make decisions concerning the continuation of existing information links and the formation of new information links, with their cohorts ... (based on) ... the costs of forming and maintaining links against the potential rewards from doing so". They studied the long run behaviour of this process. Much of the work on this was inspired by an earlier paper by Bala and Goyal [1] where the learning was from neighbours, and they showed "... local learning ensures that all agents obtain the same payoffs in the long run". Banerjee and Fudenberg [3] also obtained similar results of a single 'long-run outcome' when the decision making of an individual is based on a larger set of agents.
Here we consider the model studied by Chatterjee and Xu [5]. Instead of particles or technologies we describe the model with chameleons which can change their colours. Let G = (V, E) be an infinite connected graph which is locally finite, i.e., deg_G(v) < ∞ for every vertex v ∈ V. Suppose that at every v ∈ V and at any time t there is a chameleon ξ_v(t), which is either red (R) or blue (B) in colour. In accordance with its colour it also has either a red coin C_R(v, t) or a blue coin C_B(v, t). The red coin has probability p_R of success (1) and probability 1 − p_R of failure (0), while the blue coin has probability p_B of success (1) and probability 1 − p_B of failure (0). The outcome of the coin toss of a chameleon is independent of the outcomes of the coins as well as the colours of the other chameleons. The evolution is governed by the rule described below, which is referred to as Rule-I in [5].
In other words, if the coin toss of the chameleon at v at time t results in a success then it retains its colour. The process starts with some initial distribution μ_0 on {R, B}^V and the evolution is governed by the rule above. Let μ_t be the distribution of ξ_t at time t. In this work we are interested in finding the possible limiting distributions π for (μ_t)_{t≥0}. From the definition it follows that {ξ_t : t ≥ 0} is a Markov chain with state space {R, B}^V; thus any limiting distribution π, if it exists, is a stationary distribution of this Markov chain. We also observe that there are two absorbing states for this Markov chain, namely, the configuration of all reds and the configuration of all blues. Let δ_R denote the degenerate measure on {R, B}^V which assigns mass 1 to the configuration of all reds, and similarly let δ_B denote the measure on {R, B}^V which assigns mass 1 to the configuration of all blues. Chatterjee and Xu [5] studied this model for the one dimensional integer line with nearest neighbour links. They showed that when (ξ_i(0))_{i∈ℤ} are i.i.d. with μ_0(ξ_0(0) = R) = p and μ_0(ξ_0(0) = B) = 1 − p for some p ∈ (0, 1), and p_R > p_B, then μ_t converges weakly to δ_R. In this work we first present a simpler proof of the above result. However our main interest is the study of the model when p_R = p_B, that is, when the success/failure of a coin is "colour-blind". We call this the symmetric case. The following subsection presents our main results.
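To make Rule-I concrete in the symmetric case, the following is a minimal simulation sketch of one synchronous update on a finite cycle (a finite-volume stand-in for ℤ). The function name `step` and the convention that a colour absent from a neighbourhood contributes proportion 0 are our assumptions, not part of the original description.

```python
import random

def step(colours, alpha, rng=random):
    """One synchronous update of the symmetric learning-from-neighbours
    dynamics on a finite cycle.  colours: list of 'R'/'B' values;
    alpha: the common success probability of every coin."""
    n = len(colours)
    # every chameleon tosses its coin independently: 1 = success
    toss = [1 if rng.random() < alpha else 0 for _ in range(n)]
    new = list(colours)
    for i in range(n):
        if toss[i] == 1:
            continue  # success: the chameleon retains its colour
        nbhd = [(i - 1) % n, i, (i + 1) % n]

        def prop(c):
            # proportion of successes among colour-c chameleons in nbhd;
            # assumption: an absent colour contributes proportion 0
            idx = [j for j in nbhd if colours[j] == c]
            return sum(toss[j] for j in idx) / len(idx) if idx else 0.0

        other = 'B' if colours[i] == 'R' else 'R'
        # change colour only if the other colour did strictly better
        if prop(other) > prop(colours[i]):
            new[i] = other
    return new
```

Note that all updates use the time-t colours and tosses, so infinitely many (here: arbitrarily many) chameleons may change simultaneously, and the two monochromatic configurations are absorbing.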

Main Results
We first state the result of Chatterjee and Xu [5] for which we provide a different proof in Section 2.
Our main result is for the symmetric case when p_R = p_B. For this we have the following result.
(iv) For every α ∈ (0, 1) there exists ε ≡ ε(α) > 0 such that π(α, p) > p for all 0 < p < ε. Theorem 2 basically says that under the evolution scheme described above, if p_R = p_B = α then starting with i.i.d. colours on the integer line the distribution of the colours will converge either to the all red or to the all blue configuration, thus ruling out the possibility of any coexistence of the two colours in the limit. Such a result is expected considering the one dimensionality of the graph ℤ. This lack of coexistence on ℤ is akin to the situation in many statistical physics models, such as percolation, the Ising model and the q-Potts model, which do not admit a phase transition in one dimension [9, 8].
Here it is worth mentioning that it is natural to compare our model to the one dimensional nearest neighbour threshold voter model. A closer look reveals that our model has certain intrinsically different aspects from the standard interacting particle system models, including the threshold voter model with nearest neighbour interactions as described in [10]. On the technical side, those models are typically studied in continuous time while ours is discrete in nature, and we allow the possibility of infinitely many changes happening simultaneously at a single time instant. In fact there is no "natural" continuous time analogue of this kind of particle system. Moreover, unlike in the voter model, the change of colour is not determined just by the number of successes of a colour in the neighbourhood but by the proportion of successes. This introduces more local dependency in the model. On the more conceptual side it is interesting to note that for our model we get π(α, p) > p in a neighbourhood of 0, which can be interpreted as follows: the model gives an "advantage to the underdog", in the sense that for a fixed α ∈ (0, 1), if p is "small" then there is still a possibility that the (underdog) red chameleons will survive in the end. This is different from what one obtains in a threshold voter model, where the limiting fraction of reds is the same as the initial density of reds.
We believe that this phenomenon of "advantage given to the underdog" is true for all 0 < p < 1, with the caveat that which colour is the underdog differs according as p is smaller or greater than 1/2. We conjecture that the graph of the function p ↦ π(α, p) is as in Figure 1.2 for every fixed α ∈ (0, 1).
We would also like to point out that, unlike for the voter model, there is no simple system of coalescing/branching random walks dual to our model. Our techniques involve a direct computation of probabilities of certain types of cylinder sets, leading to the result in Theorem 2.
Finally, we note that the proof of the convergence (3) of Theorem 2 (which is given in Section 4) works as it is, when we start the Markov chain with a translation invariant starting distribution. Thus we get the following more general result.
where π depends on the initial distribution μ_0 as well as on α.
The following corollary is also an immediate consequence. Here it is worthwhile to mention that it is unlikely that this chain has a stationary distribution which is not translation invariant, but we have not explored that direction.

Outline
The rest of the paper is organised as follows. In Section 2 we prove Theorem 1. In Section 3 we consider a toy model on the half-line ℤ_+ where a chameleon decides whether to change its colour according to its own toss and the colour and toss outcome of its neighbour to the right. This model is simpler to analyze and its usefulness lies in providing an illustration of our method. In Section 4 we prove Theorem 2. Section 5 provides some auxiliary technical results which we use in various derivations. We end with some discussion on the coexistence of the two colours in Section 6.

Red is More Successful than Blue
In this section we provide a simple proof of Theorem 1. We begin by placing only blue chameleons at each point of the negative half line and red chameleons on the non-negative half line, and we take this as our initial configuration, that is, ξ_i(0) = B for all i < 0 and ξ_i(0) = R for all i ≥ 0. It is easy to see that, starting with ξ(0) as given above, there is always a sharp interface to the left of which all chameleons are blue and to the right of which all are red. Moreover, if we write X_t for the position of the leftmost red chameleon at time t ≥ 0, then (X_t)_{t≥0} performs a random walk starting from the origin with i.i.d. increments, each taking values −1, 0 and 1 with probabilities p_R(1 − p_B), 1 − p_R(1 − p_B) − p_B(1 − p_R) and p_B(1 − p_R) respectively. Now it is easy to check that if p_R > p_B, then (X_t)_{t≥0} is a transient random walk with strictly negative drift p_B − p_R. In other words, starting with the configuration given in (5), μ_t converges weakly to δ_R as t → ∞.
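The one-step behaviour of the interface can be sampled directly. The sketch below (the names are ours) encodes the observation that only the blue chameleon at X_t − 1 and the red chameleon at X_t can change colour in a single step, since every other chameleon sees a single-colour neighbourhood.

```python
import random

def interface_increment(p_R, p_B, rng=random):
    """One-step increment of the interface X_t (position of the leftmost
    red chameleon) in the configuration ...BBBRRR....
    Only two tosses matter: that of the blue at X_t - 1 and that of the
    red at X_t; a failed chameleon next to the interface flips exactly
    when the single opposite-coloured neighbour succeeded."""
    blue_success = rng.random() < p_B  # toss of the blue at X_t - 1
    red_success = rng.random() < p_R   # toss of the red at X_t
    if red_success and not blue_success:
        return -1  # the failed blue sees the red success and turns red
    if blue_success and not red_success:
        return +1  # the failed red sees the blue success and turns blue
    return 0
```

Averaging many draws gives an empirical drift close to p_B − p_R, which is strictly negative when p_R > p_B, in agreement with the argument above.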

Proof of Theorem 1
To complete the proof of Theorem 1, fix an η > 0 and let M be such that the random walk X_t satisfies If the initial distribution of chameleons at time 0 is such that, for some j ∈ ℤ where we write X_t^− for the left interface and X_t^+ for the right interface at time t ≥ 0. Further, an easy coupling argument shows that the above situation of a stretch of 2M + 2 red chameleons flanked by only blue chameleons on either side is "worse" than the case when the two ends, instead of being all blue, are actually a mixture of red and blue chameleons. More precisely, suppose the starting configuration ξ(0) is such that there exists a location j ∈ ℤ with ξ_i(0) = R for all are the positions of the leftmost and rightmost red chameleons at time t ≥ 0 of the lounge of chameleons which started as the stretch of length 2M + 2.
Now for any k ≥ 1 where the last inequality follows from equation (9).
Finally, since η > 0 is arbitrary, we conclude that μ_t converges weakly to δ_R as t → ∞.

Remark:
We observe that the above argument holds for any starting configuration ξ(0) in which intervals of reds of arbitrary length can be found with probability one. This generalizes Theorem 1.

One Directional Neighbourhood Model
In this section we study the simpler one directional neighbourhood model, where the dynamics follows our rule but with N_i := {i, i + 1} for i ∈ ℤ. The computations for this model are much simpler than for the original two sided neighbourhood model, and the method used here is illustrative of the method employed for the original two sided neighbourhood model. We now state the convergence result for the one directional neighbourhood model.
Moreover the following corollary is now immediate.

Corollary 6.
For the one directional neighbourhood model with p_R = p_B = α ∈ (0, 1) the only translation invariant stationary measures are of the form λ δ_R + (1 − λ) δ_B for some λ ∈ [0, 1]. Before we prove this theorem we make the following observation, which is very simple to prove but plays an important role in all our subsequent discussions. The proof of this Proposition follows from the Markov chain dynamics of the model and we omit the details. It is worth remarking here that a similar result is true for the two sided neighbourhood model.

Proof of Theorem 5
Before we embark on the proof of Theorem 5 we present some notation. From the translation invariance of μ_t, as given by Proposition 7, for every t ≥ 0, k ≥ 1, i ∈ ℤ and ω_j ∈ {R, B} we observe that μ_t(ξ_i(t) = ω_1, ξ_{i+1}(t) = ω_2, . . . , ξ_{i+k−1}(t) = ω_k) does not depend on the location i; thus with a slight abuse of notation we write μ_t(ω_1 ω_2 · · · ω_k) for this probability. Also μ_t(RB^k R) := μ_t(RB . . . BR), where there are k many B's in the expression on the right.
To prove this theorem we will use the technical result Theorem 9 given in Section 5.
The first equality follows from the dynamics rule. The second equality follows from the fact that μ_t(RB) = μ_t(BR), which is a consequence of the translation invariance of μ_t. Now for t ≥ 0, using the rule of the dynamics we get On the other hand, by translation invariance of μ_t we have Subtracting equation (13) from equation (12) we get Here we use the fact that μ_t(BRR) = μ_t(RR) − μ_t(RRR) = μ_t(RRB). So we conclude that Since the summands above are non-negative and since 0 ≤ μ_t(RR) ≤ 1, the limit lim_{t→∞} μ_t(RR) exists. In addition, we have Finally, observe that for any k ≥ 1 we have Using (16) and (17) it follows by induction that Theorem 5 now follows from Theorem 9.

Proof of Theorem 2
In this section we prove our main result, namely, Theorem 2. But before we proceed we note that as remarked in the previous section, the following result is also true for our original model.

Proposition 8. Under the dynamics of our original model with p_R = p_B, if μ_0 is a translation invariant measure on {R, B}^ℤ then μ_t is also translation invariant for every t ≥ 0.
Once again the proof is simple so we omit the details.
As in Section 3.1, Proposition 8 gives the translation invariance of μ_t whenever μ_0 is translation invariant. The notation we use in this section is the same as that in Section 3.1.

Proof of The Convergence (3)
As in the previous section, here too we use Theorem 9 to prove the convergence (3). For that we begin by checking that lim_{t→∞} μ_t(RR) exists. In order to prove this we use a technique similar to that in Section 3. The dynamics of the two sided neighbourhood model brings in some additional intricacies.
The following table presents some calculations which we use repeatedly. The column on the right is the probability of obtaining the configuration RR at locations (i, i + 1) at time t + 1 when the configuration at time t at locations (i − 1, i, i + 1, i + 2) is given by the column on the left.

Configuration at time t
Probability of getting the configuration RR at time t + 1

Also by translation invariance of μ_t it follows that Now subtracting equation (20) from equation (19) we get In the above equations the second equality holds because of the following two easy algebraic identities: and the last equality is obtained from the following relations: Since each of the terms in the last equality of (21) is non-negative, we conclude that (μ_t(RR))_{t≥1} is an increasing sequence and hence converges. Thus μ_{t+1}(RR) − μ_t(RR) → 0 as t → ∞. So each of the following eight probabilities converges to 0 as t → ∞.
It then follows that Now observe that for any k ≥ 1 we have So by induction the limit lim_{t→∞} μ_t(R^k) exists for any k ≥ 1.
Finally, we consider the one dimensional marginal and observe Also from translation invariance of μ_t it follows that Subtracting equation (28) from equation (27) we have To derive this final expression we used the following identities, which are easy consequences of the translation invariance of μ_t.
Now from equation (21) we get that all the eight probabilities given in (22) are summable. In fact we can write It then follows that the two sequences μ_t(RBR) = μ_t(RBRR) + μ_t(RBRB) and μ_t(BRB) = μ_t(BRBR) + μ_t(BRBB) are also summable. In particular they converge to 0. This proves that lim_{t→∞} μ_t(R) exists.
Invoking Theorem 9 we now complete the proof of the convergence (3).
We also note that so far we have only used the fact that the starting distribution is translation invariant. Thus as mentioned in the introduction this in fact proves Theorem 3 as well.

Proof of the Properties of π(α, p)
First, from the definition it follows that π(α, p) = lim_{t→∞} μ_t(RR); thus using equation (30) we get This immediately proves that π(α, p) > p^2 for any p ∈ (0, 1). Moreover, because the model is symmetric with respect to colour, we have This proves that p^2 < π(α, p) < 2p − p^2 as well as π(α, 1/2) = 1/2. Thus properties (ii) and (iii) of π hold. Moreover, from the expression (31) it follows that for every fixed α ∈ (0, 1) the limiting marginal π as a function of p is an increasing limit of polynomials in p. This implies that p ↦ π(α, p) is lower semi-continuous [11]. But because of the identity (32), for the same reason it is also upper semi-continuous. This proves that π as a function of p is continuous, establishing property (i).
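The symmetry step can be spelled out. Assuming the identity (32) reads π(α, p) = 1 − π(α, 1 − p) (relabelling the colours swaps the densities p and 1 − p), the bounds in properties (ii) and (iii) follow:

```latex
% Assumption: (32) is the colour-symmetry identity
%   \pi(\alpha,p) = 1 - \pi(\alpha,1-p).
\begin{aligned}
\pi(\alpha,p) &> p^{2} \quad\text{for all } p \in (0,1)
   \quad\text{(from (31))},\\
\pi(\alpha,p) &= 1 - \pi(\alpha,1-p) < 1 - (1-p)^{2} = 2p - p^{2},\\
\pi(\alpha,\tfrac12) &= 1 - \pi(\alpha,\tfrac12)
   \;\Longrightarrow\; \pi(\alpha,\tfrac12) = \tfrac12 .
\end{aligned}
```

The middle line simply applies the lower bound π(α, 1 − p) > (1 − p)^2 to the other colour.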
Finally, we show property (iv). For this fix α ∈ (0, 1) and notice that from the expression (31), since all the summands are non-negative, we have Now fix t ≥ 0 and consider the probability μ_t(BRBB). Because of translation invariance, without loss of any generality, we may assume that the configuration we are considering is at the locations −1, 0, 1, 2. Now notice that, because the dynamics depends only on the nearest neighbours, μ_t(BRBB) depends on the initial configuration only through the locations in the interval [−t − 1, t + 2]. So without loss of generality we may assume that outside the interval [−t − 1, t + 2] the colour of the chameleon at every location is blue (B). So we may write where the terms in o(p^2) are all non-negative and p^(t)_{11}(α) is the sum over all locations x ∈ [−t − 1, t + 2] of the probability of obtaining exactly one red (R) chameleon, at location 0, at time t, having started at time 0 with exactly one red (R) chameleon at location x. Observe also that μ_t(BBRB) has exactly the same representation as μ_t(BRBB). Now let us consider the case when we start with exactly one R chameleon at some location x ∈ ℤ and all other chameleons of colour blue (B). Let L_t be the number of red chameleons at time t and X_t the position of the leftmost red chameleon at time t. These two quantities are well defined for our Markov chain. Thus we get where the second equality follows from the translation invariance of the measure, while the last follows because if X_0 = 0 then X_t ∈ [−t, t] with probability one.
Now it follows easily that, starting with exactly one R chameleon at the origin, the stochastic process (L_t)_{t=0}^∞ is a Markov chain with state space {0, 1, 2, . . .}, starting at L_0 = 1 and with absorbing state 0. The transition matrix P := (p_ij) is given by Now using equations (33), (34) it follows that Here we note that the second inequality follows from Fatou's Lemma and, noting that Σ_{t=0}^∞ p^(t)_{11} = E[# of returns to state 1 | L_0 = 1], the last inequality follows from Theorem 10 of Section 5.
This completes the proof of Theorem 2.

Some Technical Results
In this section we prove some technical results which have been used in the proofs in the previous sections.
Proof. Let a := lim_{t→∞} μ_t(R) and b := lim_{t→∞} μ_t(RR). To prove the result it is enough to show that a = b. This is because then μ_t(RB) = μ_t(BR) = μ_t(R) − μ_t(RR) → 0 as t → ∞. Now to show a = b we first observe that

Thus under assumption (iii) it follows by induction that
Also for any k ≥ 1 we have

Thus it follows by induction that
From equations (38) and (39) it follows that This proves that a = b, completing the proof.
Proof. Let f_11 := P(L_t = 1 for some t ≥ 1 | L_0 = 1); then from standard Markov chain theory [7] it follows that E[# of returns to the state 1 | L_0 = 1] = 1/(1 − f_11). Moreover we can also write, where f_k1 := P(L_t = 1 for some t ≥ 1 | L_0 = k) for k ∈ {2, 3}. Now let (p̃_ij) be the transition matrix of a new Markov chain on the same state space {0, 1, 2, . . .} such that both 0 and 1 are absorbing states and p̃_ij = p_ij for all i ≥ 2. Let u_k be the probability that this new chain is absorbed in the state 1 when started at state k. Then it is easy to see that f_k1 = u_k for any k ≥ 2.
Going back to the characteristic equation (45) we determine that

Coexistence of the Two Colours
Suppose now that the success probability is allowed to vary with time, so that at time t every coin has success probability α_t. Place all blue chameleons at the negative integers and all red chameleons at the non-negative integers. Let X_t be the position of the leftmost red chameleon. Then it is easy to see that X_t := Y_1 + Y_2 + · · · + Y_t, where (Y_i)_{i≥1} are independent and Y_i follows a distribution on {−1, 0, 1} with P(Y_i = −1) = P(Y_i = 1) = α_i(1 − α_i) and P(Y_i = 0) = 1 − 2α_i(1 − α_i). So by Kolmogorov's Three Series Theorem [4] the sequence of random variables (X_t)_{t=0}^∞ converges a.s. if and only if Σ_{i=1}^∞ α_i(1 − α_i) < ∞ (53). Thus if (53) is satisfied then there will be both colours present in the limit.
This is intuitively clear: under the condition (53), for large enough t one of α_t or 1 − α_t is "small", and hence there will be either a large number of failures or a large number of successes among the coin tosses; in either case, no change is expected.
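The three-series argument can be illustrated numerically. This sketch (the function name and the particular sequences are ours) simulates the walk X_t = Y_1 + · · · + Y_t with the increment law stated above.

```python
import random

def inhomogeneous_interface(alphas, rng=random):
    """Path of X_t = Y_1 + ... + Y_t where, as in the text,
    P(Y_i = -1) = P(Y_i = +1) = a_i (1 - a_i) and
    P(Y_i = 0) = 1 - 2 a_i (1 - a_i), for a sequence (a_i) of
    time-dependent success probabilities."""
    x, path = 0, []
    for a in alphas:
        q = a * (1 - a)  # q <= 1/4, so the three cases are well defined
        u = rng.random()
        if u < q:
            x -= 1
        elif u < 2 * q:
            x += 1
        path.append(x)
    return path
```

With a choice such as α_i = 1/i^2 one has Σ α_i(1 − α_i) < ∞, and the simulated path typically freezes after finitely many moves, in line with the three-series argument; with constant α_i = α ∈ (0, 1) the path keeps fluctuating.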
It is of course more interesting to study this symmetric model in higher dimensions with a homogeneous success probability and to explore the possibility of coexistence in that case. Unfortunately our method does not help there. In fact, in higher dimensions it is not even clear that a condition like p_R > p_B is enough to yield the all red configuration in the limit.