The Compulsive Gambler Process

In the compulsive gambler process there is a finite set of agents who meet pairwise at random times ($i$ and $j$ meet at times of a rate-$\nu_{ij}$ Poisson process) and, upon meeting, play an instantaneous fair game in which one wins the other's money. We introduce this process and describe some of its basic properties. Some properties are rather obvious (martingale structure; comparison with the Kingman coalescent) while others are more subtle (an "exchangeable over the money elements" property, and a construction reminiscent of the Donnelly-Kurtz look-down construction). Several directions for possible future research are described. One -- where agents meet neighbors in a sparse graph -- is studied here, and another -- a continuous-space extension called the {\em metric coalescent} -- is studied in Lanoue (2014).


Introduction
The style of models known to probabilists as Interacting Particle Systems (IPS) [11] has found use in many fields across the mathematical and social sciences. Often the underlying conceptual picture is of a social network, where individual "agents" meet pairwise and update their "states" (opinion, activity, etc.) in a way depending on their previous states. This picture motivates a precise general setup we call Finite Markov Information Exchange (FMIE) processes [1]. Consider a set Agents of $n$ agents and a nonnegative array $(\nu_{ij})$, indexed by unordered pairs $\{i, j\}$, which is irreducible (i.e. the graph of edges corresponding to strictly positive entries is connected). Assume
• Each unordered pair $\{i, j\}$ of agents with $\nu_{ij} > 0$ meets at the times of a rate-$\nu_{ij}$ Poisson process, independent for different pairs.
Call this collection of Poisson processes the meeting process; the array $(\nu_{ij})$ specifies the meeting model. A specific FMIE is a specific rule (deterministic or random) for updating states, so this encompasses most of the familiar IPS models such as the voter model and contact process. But our emphasis differs from the classical emphasis of IPS in several ways: the states are typically numerical rather than categorical, the number $n$ of agents is finite (though we consider $n \to \infty$ asymptotics), and we focus on obtaining rough results for general meeting rates rather than sharp results for very specific meeting rates. One specific FMIE model is the averaging process [2] in which agents initially have different amounts of money; whenever two agents meet, they share their combined money equally. In this paper we introduce and study a conceptually opposite model, the compulsive gambler process. In the "simple" form of the model, agents each start with one unit of money. When two agents meet, if they each have non-zero money (say amounts $a$ and $b$) then they instantly play a fair game in which one agent acquires the combined amount $a + b$ (so with probabilities $a/(a+b)$ and $b/(a+b)$ respectively). In the "standardized" form of the model the initial fortunes can be non-uniform, and we scale so that the total money equals 1.
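The dynamics just described are easy to render in code. The following minimal simulation (names and structure are ours, purely illustrative) runs a CG process to absorption; by memorylessness of the Poisson clocks, it suffices to resample the clocks of currently solvent pairs at each step.

```python
import random

def run_cg(rates, fortunes, rng):
    """Run a compulsive gambler process to absorption.

    rates    -- dict {frozenset({i, j}): nu_ij} of positive meeting rates
    fortunes -- dict {agent: initial fortune}
    Returns (absorbing configuration, absorption time).
    """
    x = dict(fortunes)
    t = 0.0
    while True:
        # Only meetings between two solvent agents change the state; by
        # memorylessness we may resample the relevant clocks at each step.
        live = [(pair, r) for pair, r in rates.items()
                if all(x[a] > 0 for a in pair)]
        if not live:
            return x, t
        total = sum(r for _, r in live)
        t += rng.expovariate(total)
        u = rng.uniform(0.0, total)          # pick a pair proportionally to rate
        for pair, r in live:
            u -= r
            if u <= 0.0:
                break
        i, j = tuple(pair)
        a, b = x[i], x[j]
        if rng.random() < a / (a + b):       # fair game: i wins with prob a/(a+b)
            x[i], x[j] = a + b, 0
        else:
            x[i], x[j] = 0, a + b

# Simple CG on the complete graph with 5 agents, all rates 1.
rates = {frozenset({i, j}): 1.0 for i in range(5) for j in range(i + 1, 5)}
config, t_abs = run_cg(rates, {i: 1 for i in range(5)}, random.Random(0))
```

On the complete graph the absorbing configuration necessarily puts all $n$ units on a single agent.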
This is an invented model for which we do not claim realism, but we do claim some mathematical interest as an intermediary between IPS theory and coalescent theory.

Elementary observations
First consider a fixed meeting model on $n$ agents. Write $X(t) = (X_i(t), i \in \text{Agents})$ for the time-$t$ configuration of the simple compulsive gambler process; agent $i$ has $X_i(t)$ units of money. The following assertions are true, and mostly obvious. We will give proofs, where necessary, and some crude quantifications in section 2 (Lemmas 2.2 and 2.7).
(i) $X(t)$ is a finite-state continuous-time Markov chain which, at some a.s. finite random time $T$, reaches some absorbing configuration $X^*$ in which there is some random non-empty set $\mathcal{T}$ of agents who are solvent, i.e. have non-zero money.
(ii) If $\nu_{ij} > 0$ for all $j \ne i$ then $|\mathcal{T}| = 1$ a.s., and we call $T$ the fixation time. Furthermore, because each $(X_i(t), 0 \le t < \infty)$ is a martingale we have $P(\mathcal{T} = \{i\}) = P(X^*_i = n) = 1/n$ for each agent $i$.
(iii) If $\nu_{ij} = 0$ for some $j \ne i$ then $P(|\mathcal{T}| = 1)$ is strictly between 0 and 1.
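Observation (ii) is easy to probe numerically. On the complete graph (all $\nu_{ij} = 1$) the next relevant meeting is a uniformly random pair of solvent agents, so a short Monte Carlo (illustrative code, not part of any library) can check that each agent ends up with everything with probability $1/n$:

```python
import random

def final_owner(n, rng):
    """Simple CG on the complete graph: the next meeting between solvent
    agents is a uniformly random solvent pair."""
    x = [1] * n
    solvent = list(range(n))
    while len(solvent) > 1:
        i, j = rng.sample(solvent, 2)
        # fair game: i wins the pot with probability x[i] / (x[i] + x[j])
        if rng.random() * (x[i] + x[j]) < x[i]:
            winner, loser = i, j
        else:
            winner, loser = j, i
        x[winner] += x[loser]
        x[loser] = 0
        solvent.remove(loser)
    return solvent[0]

rng = random.Random(42)
n, trials = 5, 20_000
freq = sum(final_owner(n, rng) == 0 for _ in range(trials)) / trials
# freq should be close to 1/n = 0.2, as the martingale argument predicts
```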
These facts suggest more quantitative questions to ask, in the setting of a sequence $(\nu^{(n)}_{ij})$ of meeting models with $n \to \infty$. How does $T^{(n)}$ behave? In case (iii), how do $|\mathcal{T}^{(n)}|$ and the distribution $P(X^*_i \in \cdot \mid i \in \mathcal{T}^{(n)})$ of a typical "final fortune" behave? In either case we can ask how the process of the number of solvent agents behaves over $0 \le t \le T$. If the meeting model has some spatial structure then what can we say about the spatial structure of the set of solvent agents at time $t$? We continue this discussion of research directions in section 1.3.

Techniques
It turns out that a surprising variety of techniques can be exploited in the study of the compulsive gambler process. Amongst these techniques, to be described in section 2, the most natural are martingale results (Lemma 2.1) and elementary bounds obtained by comparison with the Kingman coalescent (e.g. Lemma 2.2). Less obvious is Lemma 2.3: instead of making the random choices of game-winners at the meeting times, we can insert initial randomness and then have a deterministic rule for game-winners. And in the "simple" case that construction has a symmetry property (Lemma 2.5): the deterministic rule is based on a uniform random labeling of initial currency notes as $1, \dots, n$, and conditional on the configuration of fortunes at time $t$, the allocation of note-labels to agents is uniform random. This last method seems very similar to methods used in the study of exchangeable coalescents [4,5,6], though the precise relation is not clear to us and we have not used results from that theory.

Directions for future research
Our main purpose is to lay the groundwork for future research by describing explicitly these techniques (section 2). In this paper we pursue analysis in only one direction, by studying the setting where the meeting model is that agents meet neighbors in a sparse graph (section 3). Here are some other directions of current or future research.
The metric coalescent. This concerns a continuous-space extension. Take a suitable space $S$, write $\mathcal{P}(S)$ for the space of probability measures $\mu$ on $S$ and write $\mathcal{P}_{fs}(S) \subset \mathcal{P}(S)$ for the subspace of finite-support probability measures. Consider a symmetric function $\nu : S^2 \to \mathbb{R}_{\ge 0}$. For any $\mu \in \mathcal{P}_{fs}(S)$, we can consider the standardized compulsive gambler process for which the set Agents is the support $\{s_1, \dots, s_n\}$ of $\mu$, the meeting rates are the $\nu(s_i, s_j)$, and the initial distribution of money is $\mu$; moreover we can regard the states of the process as elements of $\mathcal{P}_{fs}(S)$. So we can reconsider the standardized compulsive gambler process as a Markov process, specified by the function $\nu$, whose state space is (all of) $\mathcal{P}_{fs}(S)$. Then we can ask, inspired by the Kingman coalescent and its extensions [5], whether it makes sense to imagine this process starting with a general (in particular, non-atomic) initial state $\mu_0 \in \mathcal{P}(S)$. This topic is studied in detail in [10] in the context of a complete separable locally finite metric space $(S, d)$ and meeting rates of the form $\nu(s_i, s_j) = \varphi(d(s_i, s_j))$ for some continuous function $\varphi(\cdot) > 0$. The main result of [10] is that, under the condition $\lim_{x \downarrow 0} \varphi(x) = \infty$, the standardized compulsive gambler process on $\mathcal{P}_{fs}(S)$ extends to a Feller process (the metric coalescent) on all of $\mathcal{P}(S)$. In particular, for an initial $\mu_0 \in \mathcal{P}(S)$ with compact support, the metric coalescent process $(\mu_t, 0 \le t < \infty)$ has finite support at each $t_0 > 0$, evolves as the compulsive gambler process over $[t_0, \infty)$ and satisfies the initial condition $P(\lim_{t \downarrow 0} \mu_t = \mu_0 \text{ in } \mathcal{P}(S)) = 1$.
A key ingredient in the proof is Corollary 2.4 below. In informal language, Corollary 2.4 says that for the simple compulsive gambler process, instead of determining the game winners at the meeting times, we can do so via initial randomization, as follows. Initially each agent has a currency note with a random serial number; when two solvent agents meet, each has a collection of notes, but now the winner is always the owner of the lowest-ranked note, ranking by serial number. In other words we start with a uniform random ordering $s_1, \dots, s_n$ of Agents, ranked by serial number of note. In the continuous-space setting we can do the same construction but starting with i.i.d.$(\mu_0)$ random samples $s_1, \dots, s_n$. For each $n$ we now have a $\mathcal{P}_{fs}(S)$-valued process $(\mu^{(n)}_t, 0 \le t < \infty)$. But as $n$ varies these processes have a natural coupling, and the metric coalescent can be constructed as the a.s. $n \to \infty$ limit process.
Infinite discrete space. For a countably infinite set Agents, the simple compulsive gambler process is well-defined under certain conditions, for instance if
$\sup_i \sum_j \nu_{ij} < \infty$. (1)
In particular, for an infinite vertex-transitive bounded-degree graph, with meeting rates $\nu_e \equiv 1$ for edges $e$, there is a random set $\mathcal{T}$ of agents who remain solvent in the $t \to \infty$ limit, and one can seek to calculate the density $P(i \in \mathcal{T})$ of that set -- see section 3 for the case of the $r$-regular tree. For another direction, consider the case where Agents $= \mathbb{Z}^d$ and the meeting rates are
$\nu_{ij} = \|j - i\|^{-\alpha}$ (2)
for some $\alpha > d$, implying (1). Consider the mean density of solvent agents at time $t$,
$\rho(t) := P(X_i(t) \ne 0)$,
and the conditional distribution $P(X_i(t) \in \cdot \mid X_i(t) \ne 0)$ of a solvent agent's fortune. Heuristic arguments, based on supposing the positions of solvent agents do not become "clustered", suggest that $\rho(t) \asymp t^{-d/\alpha}$ as $t \to \infty$. It is then plausible that, conditionally on $\{X_i(t) \ne 0\}$, the rescaled fortune $\rho(t) X_i(t)$ converges in distribution to some $Z$ such that $EZ = 1$, and then that the process has a scaling limit, the limit being a process whose states are (locally finite support) measures on $\mathbb{R}^d$.
Other finite-agent meeting models. For the complete graph meeting model ($\nu_{ij} \equiv 1$) the compulsive gambler process is essentially just the Kingman coalescent. On the $d$-dimensional discrete torus $\mathbb{Z}^d_m$ one could reconsider meeting rates as at (2). By the heuristics above, for $\alpha > d$ we expect the fixation time $T$ to scale as $\rho^{-1}(m^{-d}) = m^\alpha$. In this setting it makes sense to consider also the case $\alpha < d$, but in this case an agent will tend to meet distant agents rather than nearby ones, and by comparison with the Kingman coalescent we expect $T$ to scale as the total meeting rate $\sum_{j \ne i} \nu_{ij}$ of a given agent, that is as $m^{d - \alpha}$. Finally, by the techniques of section 3 one can calculate the asymptotic density (27) of solvent agents under the sparse Erdős-Rényi meeting model, a result which can alternatively be seen in terms of the short-time behavior of the Kingman coalescent (section 3.3).

Four basic techniques
We now abbreviate "compulsive gambler" to CG. Fix a meeting model $(\nu_{ij})$ on a set Agents of $n$ agents. As in section 1.1, write $X(t) = (X_i(t), i \in \text{Agents})$ for the time-$t$ configuration of the CG process; agent $i$ has $X_i(t)$ units of money. And write
$N(t) := |\{i : X_i(t) > 0\}|$
for the number of solvent agents. The CG process is specified by its transition rates: for each ordered distinct pair $(j, k)$ with $\min(x_j, x_k) > 0$,
$x \to x^{(j,k)}$ at rate $\nu_{jk} \, \frac{x_j}{x_j + x_k}$, (3)
where $x^{(j,k)}$ denotes the configuration with $x^{(j,k)}_j = x_j + x_k$, $x^{(j,k)}_k = 0$ and $x^{(j,k)}_i = x_i$ otherwise. In full generality the state space consists of all configurations $x = (x_1, \dots, x_n)$ with $x_i \ge 0 \ \forall i$. For the simple CG process the initial state is $X_i(0) = 1 \ \forall i \in \text{Agents}$. For the standardized CG process there is an initial state $x = (x_i)$ with $x_i \ge 0 \ \forall i$ and $\sum_i x_i = 1$. Clearly $\sum_i X_i(t) \equiv n$ in the simple case and $\sum_i X_i(t) \equiv 1$ in the standardized case. Results in this section 2 hold in both simple and standardized cases unless otherwise stated.

Martingale properties
We first record some notation for the elementary stochastic calculus of integrable bounded-variation processes. Such a process $(Z_t)$ has a Doob-Meyer decomposition $Z_t = M_t + A_t$ in which $(M_t)$ is a martingale and $(A_t)$ is predictable, which can be written in differential notation as $dZ_t = dM_t + dA_t$. To avoid introducing new symbols, we write $E(dZ_t \mid \mathcal{F}_t)$ for $dA_t$.
Lemma 2.1. For any meeting process:
(i) for each $i$, the process $(X_i(t), 0 \le t < \infty)$ is a martingale;
(ii), (iii) given a metric $d$ on Agents, the associated process $M_f(\cdot)$ is a martingale, and for a standardized CG process its second moment admits an explicit expression.
Proof. (i) and (ii) are straightforward. For (iii), $M_f(\cdot)$ is a martingale, and we calculate $E(dM_f^2(t) \mid \mathcal{F}_t)$, the sum being over unordered pairs. An explicit formula for $EM_f^2(t)$ is given in Lemma 2.6.

The Kingman coalescent
In the particular case $\nu_{ij} \equiv 1$, $j \ne i$, of the meeting model, the compulsive gambler process is essentially the well-studied Kingman coalescent [4]. In this case the process $(N(t), 0 \le t < \infty)$ is the "pure death" Markov chain, started at $n$, with transition rates $q_{m, m-1} = \binom{m}{2}$, from which it immediately follows that the fixation time is a.s. finite with expectation
$ET = \sum_{m=2}^{n} \binom{m}{2}^{-1} = 2\big(1 - \tfrac{1}{n}\big)$. (5)
Here is a simple application.
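The expectation telescopes: $\sum_{m=2}^{n} \binom{m}{2}^{-1} = \sum_{m=2}^{n} 2\big(\frac{1}{m-1} - \frac{1}{m}\big) = 2(1 - \frac{1}{n})$. A two-line numeric check:

```python
from math import comb, isclose

def kingman_mean_fixation(n):
    # E[T] is the sum of mean holding times 1 / q_{m,m-1} = 1 / C(m, 2)
    return sum(1 / comb(m, 2) for m in range(2, n + 1))

# telescoping: sum_{m=2}^{n} 2/(m(m-1)) = 2 (1 - 1/n)
et = kingman_mean_fixation(100)
```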

Lemma 2.2. Consider a meeting model for which $\nu_{ij} \ge \delta > 0$ for all pairs $\{i, j\}$. Then:
(i) $ET \le 2(1 - \frac{1}{n})/\delta$;
(ii), (iii) analogous bounds, in expectation and (via Chebyshev's inequality) in probability, for $T(r) := \min\{t : N(t) \le r\}$;
(iv) let $L$ be the agent who has acquired all the money at time $T$; in the simple CG process, $P(L = i) = 1/n$ for all $i \in \text{Agents}$, and in the standardized CG process with initial configuration $(x_i)$, $P(L = i) = x_i$.
Proof. Although the process $(N(t))$ is typically not Markov, when $N(t) = m$ the conditional intensity of a transition $m \to m-1$ is at least $\delta \binom{m}{2}$, so (i) follows by comparison with the Kingman chain result (5). Similarly, write $T(r) = \min\{t : N(t) \le r\}$ and $T^{\mathrm{King}}(r)$ for the corresponding quantity for the Kingman chain. Then $T(r)$ is stochastically dominated by $\delta^{-1} T^{\mathrm{King}}(r)$, so (ii) follows by comparison. A similar argument, calculating $\mathrm{var}(T^{\mathrm{King}}(r))$ and using Chebyshev's inequality, establishes (iii). Assertion (iv) follows from the martingale property (Lemma 2.1(i)) of $(X_i(t))$, applying the optional sampling theorem at time $T$.

The augmented process
Given a probability distribution $\pi = (\pi_i)$ on the set Agents of $n$ agents with each $\pi_i > 0$, take independent random variables $\eta_i$ with Exponential(rate $\pi_i$) distributions. Define a random ordering $\prec$ on Agents by
$i \prec j$ iff $\eta_i < \eta_j$. (6)
This is one of several equivalent definitions of the size-biased random ordering [9] associated with $\pi$. For instance, defining a random bijection $F : \{1, \dots, n\} \to \text{Agents}$ by $P(F(1) = i) = \pi_i$, then $P(F(2) = j \mid F(1) = i) = \pi_j / (1 - \pi_i)$ for $j \ne i$, and so on, the size-biased random ordering could be defined as $F(1) \prec F(2) \prec \dots \prec F(n)$. We want to consider the standardized CG process with some initial configuration $x(0) = (x_i(0))$. Take the size-biased random ordering $\prec$ on Agents associated with the probability distribution $x(0)$. Conditional on the realization of $\prec$, we can define a variation of the CG process in which, when two agents $i, j$ with non-zero money meet, the winner is always the agent who comes earlier in $\prec$ (if $i \prec j$ then $i$ is the winner). In other words, the transition rates (3) become
$x \to x^{(j,k)}$ at rate $\nu_{jk}$ if $\min(x_j, x_k) > 0$ and $j \prec k$. (8)
Call this the augmented process (X(t), ≺) with initial state (x(0), ≺). Note that the random order ≺ does not change with time. See below for a way of visualizing this process in the simple setting. The next lemma says that unconditionally, that is when we do not see the realization of ≺, we see the CG process.
Lemma 2.3. Unconditionally on $\prec$, the augmented process $(X(t), 0 \le t < \infty)$ is distributed as the standardized CG process with initial configuration $x(0)$.
Proof. Implement the augmented process using the order $\prec$ at (6) given by independent Exponential(rate $x_i(0)$) r.v.'s $(\eta_i, i \in \text{Agents})$. Write $\mathcal{F}(t) = \sigma(X(s), 0 \le s \le t)$, and note this does not include the random order $\prec$. We claim that for each $t$, conditional on $\mathcal{F}(t)$, the r.v.'s $(\eta_i : i \in \text{Agents}, X_i(t) > 0)$ are independent Exponentials with rates $X_i(t)$.
It is enough to check this remains true inductively over meetings. If agents $i$ and $j$ meet at $t$ with non-zero fortunes $X_i(t-)$ and $X_j(t-)$, then by the evolution rule for the augmented process, on the event $\{\eta_i < \eta_j\}$ we have $X_i(t) = X_i(t-) + X_j(t-)$ and $X_j(t) = 0$, and similarly on the complementary event. By the inductive hypothesis $\eta_i$ and $\eta_j$ are independent Exponentials of rates $X_i(t-)$ and $X_j(t-)$; fact (10) then says that $\eta_i$ has Exponential($X_i(t)$) distribution and the induction goes through.
Having established the claim, consider again what happens when agents $i$ and $j$ meet at $t$ with non-zero fortunes $X_i(t-)$ and $X_j(t-)$. The probability that the update is to $X_i(t) = X_i(t-) + X_j(t-)$ and $X_j(t) = 0$ (and similarly for the complementary event) is the probability of the event $\{\eta_i < \eta_j\}$, which by the claim and (9) equals $X_i(t-)/(X_i(t-) + X_j(t-))$. But this is the dynamics of the CG process.
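The exponential-races fact at the heart of this argument is easy to check by simulation: if $\eta_i$ and $\eta_j$ are independent Exponentials with rates $a$ and $b$, then $P(\eta_i < \eta_j) = a/(a+b)$, exactly the fair-game winning probability. A quick Monte Carlo (illustrative):

```python
import random

# eta_i ~ Exponential(rate a) and eta_j ~ Exponential(rate b);
# the smaller value wins, which happens for i with probability a/(a+b).
rng = random.Random(7)
a, b, trials = 3.0, 1.0, 100_000
wins = sum(rng.expovariate(a) < rng.expovariate(b) for _ in range(trials))
freq = wins / trials  # should be close to a / (a + b) = 0.75
```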
See [10] for uses of this result in the context of the metric coalescent.

The token process
For the special case of a simple CG process, there is a more concrete and informative expansion of the notion of augmented process above. First, here is a story which might help visualize what is going on. (In talks we ask several audience members to each place an actual currency note on the table, so we can demonstrate the story.) Real-world currency notes have serial numbers; imagine each agent starting with one note with a random serial number, so that the ranking (smallest to largest) of the n notes is uniform random. When two agents with non-zero money meet, we specify that the agent who wins the game is determined as the agent who possesses, in their collection at that time, the smallest-ranked note. The winner adds the loser's notes to his pile of notes.
In the story, each agent has a set of notes, but what is relevant is not the precise serial numbers but the relative rankings of each of the n serial numbers. In the formalization below, (S i (t)) is the set of rankings of all the notes owned by agent i at time t.
Note this story is consistent with the "simple" case of Lemma 2.3, which is essentially the context where we record only the relative orders of each agent's smallest-ranked note. But in contrast to Lemma 2.3, what we do next is useful only in the "simple" context.
To formalize the story above, given meeting rates $(\nu_{ij}, i, j \in \text{Agents})$ we first take a uniform random bijection $F : \{1, \dots, n\} \to \text{Agents}$. Visualize tokens $1, \dots, n$ being randomly dealt to the agents. Define a process $S(t) = (S_i(t), i \in \text{Agents})$ to have initial configuration $S_i(0) = \{F^{-1}(i)\}$, $i \in \text{Agents}$, and transition rates (copying (8))
$S \to S^{(j,k)}$ at rate $\nu_{jk}$ if $S_j$ and $S_k$ are non-empty and $\min S_j < \min S_k$, (11)
where $S^{(j,k)}$ denotes the configuration with $S^{(j,k)}_j = S_j \cup S_k$, $S^{(j,k)}_k = \emptyset$ and $S^{(j,k)}_i = S_i$ otherwise. So $S_i(t)$ is just the set of tokens held by agent $i$ at time $t$, and at a meeting the game is always won by the owner of the smallest (lowest-ranked) token. Call $(S(t), 0 \le t < \infty)$ the token process, and write $X_i(t) := |S_i(t)|$ for the number of tokens held by agent $i$. As discussed above, Lemma 2.3 implies
Corollary 2.4. In the token process $(S(t), 0 \le t < \infty)$, the process $X(t) := (|S_i(t)|, i \in \text{Agents})$ evolves as the simple CG process.
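A minimal simulation of the token process (illustrative code; we take the complete-graph case, where the next meeting of solvent agents is a uniform pair) confirms the basic invariant that the initial holder of token 1 never loses a game:

```python
import random

def token_process(n, rng):
    """Token process on the complete graph: tokens 1..n are dealt uniformly
    at random; at each meeting of two solvent agents the owner of the
    smallest token takes all of the other's tokens."""
    deal = rng.sample(range(n), n)         # deal[k] = agent holding token k+1
    S = [set() for _ in range(n)]
    for token, agent in enumerate(deal, start=1):
        S[agent].add(token)
    solvent = list(range(n))               # initially everyone holds one token
    while len(solvent) > 1:
        i, j = rng.sample(solvent, 2)      # uniform meeting among solvent pairs
        winner, loser = (i, j) if min(S[i]) < min(S[j]) else (j, i)
        S[winner] |= S[loser]
        S[loser].clear()
        solvent.remove(loser)
    return solvent[0], deal[0]

final_agent, holder_of_token_1 = token_process(10, random.Random(1))
```

In every realization the final solvent agent is the agent initially dealt token 1, since that agent's minimal token is the global minimum throughout.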
As mentioned earlier, Corollary 2.4 plays a key role in the development of the metric coalescent in [10]. And we will see in sections 2.6 and 3 how it enables us to use simple intuitive arguments in our discrete setting. Corollary 2.4 is reminiscent of the Donnelly-Kurtz look-down construction [8] but we do not see a precise connection. Lemma 2.5 below says: if we just see the number of tokens that each agent has, then the assignment of tokens to agents is uniform over possible assignments.
Lemma 2.5. In the token process, for each $t$, conditionally on the token counts $(|S_i(t)|, i \in \text{Agents})$, the partition $(S_i(t), i \in \text{Agents})$ of $\{1, \dots, n\}$ is uniform over all partitions with those component sizes.
Proof. As in the proof of Lemma 2.3, it is enough to check that the assertion remains true inductively over meetings. Given that $S_{j_1}(t)$ and $S_{j_2}(t)$ are non-empty, the event of a meeting of $(j_1, j_2)$ in $(t, t + dt)$ is independent of $(S_{j_1}(t), S_{j_2}(t))$. Such a meeting causes either $S_{j_1}$ or $S_{j_2}$ to become $S_{j_1}(t) \cup S_{j_2}(t)$ and the other to become empty. Now checking that the induction goes through reduces to checking the following elementary fact about merging components of uniform random partitions, which we leave to the reader. Take $(n_i, i \in I)$ with each $n_i \ge 1$ and $\sum_i n_i = n$. Take two elements $j_1, j_2$ of $I$, write $j_0$ for a new symbol and let $I' := (I \setminus \{j_1, j_2\}) \cup \{j_0\}$ and $n_{j_0} = n_{j_1} + n_{j_2}$. Take a uniform random partition $(B_i, i \in I)$ of $\{1, \dots, n\}$ into components with $|B_i| = n_i \ \forall i \in I$, and set $B'_{j_0} := B_{j_1} \cup B_{j_2}$ and $B'_i := B_i$ for $i \in I \setminus \{j_1, j_2\}$. Then: (i) $(B'_i, i \in I')$ is a uniform random partition of $\{1, \dots, n\}$ into components with $|B'_i| = n_i \ \forall i \in I'$; (ii) the event "the minimum element of $B_{j_1}$ is smaller than the minimum element of $B_{j_2}$" is independent of the random partition $(B'_i, i \in I')$. In applying this fact in our setting, the point is that the information revealed by the change in $X(\cdot)$ at the meeting is precisely the identity of $j_1, j_2$ and whether the event in (ii) occurs, but conditioning on these does not destroy uniformity.
A more detailed treatment is given in [10] section 2.4.

Moment calculations
Lemma 2.5 allows us to do various calculations with a simple CG process, such as the second-moment calculations below. Recall that (by the martingale property) we know $EX_i(t) \equiv 1$.

Lemma 2.6. For a simple CG process, writing $\tau_{ij}$ for the first meeting time of agents $i$ and $j$:
$E X_i(t) X_j(t) = P(\tau_{ij} > t)$, $j \ne i$; (12)
$E X_i(t) (X_i(t) - 1) = \sum_{j \ne i} P(\tau_{ij} \le t)$; (13)
and consequently, for $f : \text{Agents} \to \mathbb{R}$,
$E M_f^2(t) = \sum_i f^2(i) \Big(1 + \sum_{j \ne i} P(\tau_{ij} \le t)\Big) + \sum_i \sum_{j \ne i} f(i) f(j) P(\tau_{ij} > t)$. (14)
Proof. Lemma 2.5 gives, for $j \ne i$,
$P(1 \in S_i(t), 2 \in S_j(t) \mid X(t)) = \frac{X_i(t) X_j(t)}{n(n-1)}$
and taking expectations,
$P(1 \in S_i(t), 2 \in S_j(t)) = \frac{E X_i(t) X_j(t)}{n(n-1)}$.
From the dynamics of the token process, the event $\{1 \in S_i(t), 2 \in S_j(t)\}$ happens if and only if $F(1) = i$, $F(2) = j$ and $\tau_{ij} > t$, where $\tau_{ij}$ is the first meeting time of $i$ and $j$. So
$P(1 \in S_i(t), 2 \in S_j(t)) = \frac{P(\tau_{ij} > t)}{n(n-1)}$.
These last two identities give (12). One can deduce (13) from (12) and the "martingale" fact $E|S_i(t)| = 1$, but let us see how it follows by the same kind of argument as above. Lemma 2.5 gives
$P(\{1, 2\} \subseteq S_i(t) \mid X(t)) = \frac{X_i(t) (X_i(t) - 1)}{n(n-1)}$
and taking expectations,
$P(\{1, 2\} \subseteq S_i(t)) = \frac{E X_i(t) (X_i(t) - 1)}{n(n-1)}$.
From the dynamics of the token process, the event $\{\{1, 2\} \subseteq S_i(t)\}$ happens if and only if $F(1) = i$ and $F(2) = j$ for some $j$ with $\tau_{ij} \le t$, and so
$P(\{1, 2\} \subseteq S_i(t)) = \frac{\sum_{j \ne i} P(\tau_{ij} \le t)}{n(n-1)}$.
These last two identities give (13). Finally, (14) follows from (12) and (13) by expanding the square.
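Identity (12) specializes nicely on the complete graph with all rates 1, where $\tau_{12}$ is Exponential(1), so $E X_1(t) X_2(t) = e^{-t}$. This is straightforward to check by Monte Carlo (an illustrative sketch; simulating only the meetings between solvent pairs suffices to track fortunes):

```python
import math
import random

def fortunes_at(n, t, rng):
    """Simple CG on the complete graph (all rates 1), run up to time t;
    only meetings between solvent agents are simulated."""
    x = [1] * n
    clock = 0.0
    while True:
        solvent = [i for i in range(n) if x[i] > 0]
        m = len(solvent)
        if m < 2:
            return x
        clock += rng.expovariate(m * (m - 1) / 2)  # total solvent-pair rate
        if clock > t:
            return x
        i, j = rng.sample(solvent, 2)
        if rng.random() * (x[i] + x[j]) < x[i]:
            x[i], x[j] = x[i] + x[j], 0
        else:
            x[j], x[i] = x[i] + x[j], 0

rng = random.Random(11)
t, trials = 0.5, 30_000
total = 0
for _ in range(trials):
    x = fortunes_at(6, t, rng)
    total += x[0] * x[1]
est = total / trials  # should be close to exp(-t) = P(tau_12 > t) ≈ 0.607
```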

Elementary properties of the simple CG process
Here we prove the remaining "mostly obvious" assertions about the simple CG process from section 1.1, and some minor extensions. Fix a meeting model $(\nu_{ij})$ on $n$ agents. Write $G$ for the graph whose edges are the pairs $(i, j)$ with $\nu_{ij} > 0$. Recall $G$ is connected by assumption. An anticlique (or independent set) in $G$ is a set $A$ of vertices such that there is no edge with both end-vertices in $A$. There is a finite set of configurations $x$ that can be reached by the simple CG process. Such a configuration is absorbing if and only if $\{i : x_i \ge 1\}$ is an anticlique. The process must reach some absorbing configuration at some a.s. finite time $T$, because $N(t)$ (the number of solvent agents) decreases by one every time the configuration changes. Write $\mathcal{T}$ for the random set of agents with non-zero money at time $T$.
Lemma 2.7. Write $\delta := \min\{\nu_{ij} : \nu_{ij} > 0\}$ and write $d(i)$ for the degree of $i$ in $G$. Then:
(i) $ET \le (n-1)/\delta$;
(ii) if $\nu_{ij} = 0$ for some pair $i \ne j$ then $P(|\mathcal{T}| \ge 2) \ge 1/\binom{n}{2}$;
(iii) $P(i \in \mathcal{T}) \ge 1/(1 + d(i))$;
(iv) $P(|\mathcal{T}| = 1) > 0$.
Proof. If $N(t) = m$ and the configuration is not an anticlique then the conditional intensity of a transition $m \to m-1$ is at least $\delta$, implying (i) by comparison with the pure death process with constant transition rate $\delta$. For (ii) consider the token process from section 2.4. If $\nu_{ij} = 0$ then with probability $1/\binom{n}{2}$ agents $i$ and $j$ have tokens 1 and 2; if so, then neither can lose a game, so both must end in $\mathcal{T}$. Similarly for (iii), with probability $1/(1 + d(i))$ agent $i$'s token is smaller than all its $d(i)$ neighbors' tokens; if so, then agent $i$ cannot lose a game, implying $i \in \mathcal{T}$. So $P(i \in \mathcal{T}) \ge 1/(1 + d(i))$, which is (iii). For (iv), consider a spanning tree for $G$. We can order its edges as $e_1 = (\ell_1, v_1), \dots, e_{n-1} = (\ell_{n-1}, v_{n-1})$ in such a way that each $\ell_i$ is a leaf of the subtree in which edges $e_1, \dots, e_{i-1}$ have been deleted. With non-zero probability, the first $n-1$ meetings in the meeting process are over the edges $e_1, \dots, e_{n-1}$ in that order; and with non-zero probability, the game involving $(\ell_i, v_i)$ is won by $v_i$ for each $i$. If this happens then $v_{n-1}$ ends up with all the money.
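The degree bound in (iii) can be probed numerically. On the $n$-cycle every agent has degree 2, so the bound predicts a mean solvent density of at least $1/3$ at absorption; since the absorbing solvent set is an independent set of the cycle, the density is also at most $1/2$. A minimal Monte Carlo (illustrative):

```python
import random

def cycle_density(n, rng):
    """Simple CG on the n-cycle; returns the fraction of solvent agents at
    absorption. All edge rates are equal, so the next relevant meeting is a
    uniform choice among edges joining two solvent agents."""
    x = [1] * n
    while True:
        live = [e for e in range(n) if x[e] > 0 and x[(e + 1) % n] > 0]
        if not live:
            return sum(v > 0 for v in x) / n
        i = rng.choice(live)
        j = (i + 1) % n
        if rng.random() * (x[i] + x[j]) < x[i]:
            x[i], x[j] = x[i] + x[j], 0
        else:
            x[j], x[i] = x[i] + x[j], 0

rng = random.Random(5)
runs = 2000
mean_density = sum(cycle_density(30, rng) for _ in range(runs)) / runs
# mean_density lies between the bounds 1/3 and 1/2
```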

The sparse graph setting
Consider a connected finite graph $G$ with $n$ vertices which is $r$-regular, for $r \ge 3$ (so if $r$ is odd then $n$ must be even). Take the set Agents as the vertices of $G$, and the meeting rates as
$\nu_e \equiv 1$ for edges $e$ of $G$, and $0$ otherwise. (15)
As observed in section 1.1, the simple CG process must terminate in a random configuration $X^*$ with some random set $\mathcal{T}$ of solvent agents. We study the density of solvent agents: $\rho(G) := n^{-1} E|\mathcal{T}|$.
What are the possible values of $\rho(G)$, in terms of $n$ and $r$? Consider first the lower bound.
Lemma 3.1. (i) $\rho(G) \ge \frac{1}{1+r}$.
(ii) If $n$ is a multiple of $r$ then there exists a graph $G$ such that $\rho(G) \le \frac{1}{r}\big(1 + \frac{2\kappa_r}{r-1}\big)$, where $\kappa_r$, defined by (16) below, is such that $\kappa_r \uparrow \kappa_\infty < \infty$ as $r \uparrow \infty$.
Proof. Assertion (i) repeats Lemma 2.7(iii). For (ii), consider the graph $G$ constructed as follows. Take $n/r$ disjoint graphs $C_1, \dots, C_{n/r}$, each being the complete graph on $r$ vertices with one edge $(a_i, b_i)$ removed. Then add edges $(b_1, a_2), (b_2, a_3), \dots, (b_{n/r}, a_1)$ to make $G$.
For each $1 \le i \le n/r$, the only possible way for $\mathcal{T}$ to contain more than one vertex of $C_i$ is for $\mathcal{T}$ to contain the two vertices $\{a_i, b_i\}$ (because any other pair of agents in $C_i$ will meet). It follows that $\rho(G)$ can be bounded in terms of the probabilities $q_m$, where $q_m$ is the probability that $\mathcal{T}$ does contain both $a_i$ and $b_i$, given that $a_i$ has the smallest and $b_i$ has the $m$'th smallest token amongst the agents of $C_i$. This latter event can only happen if, in the sequence of games involving the agents initially holding these $m$ tokens, $b_i$ is never involved; write $\sigma_m$ for the probability of that event. But $\sigma_m$ decreases as order $m^{-2}$, establishing the bound in (ii).
Finding somewhat tight upper bounds complementary to those in Lemma 3.1 seems more difficult. As noted in section 1.3, we can consider the simple CG process on the infinite $r$-regular tree $\mathbb{T}_r$, and the random set $\mathcal{T}$ of solvent agents in the $t \to \infty$ limit has some density $\rho(\mathbb{T}_r) := P(i \in \mathcal{T})$.
It is well-known [7] that there exist, for fixed r ≥ 3, sequences (G n,r , n ≥ n 0 (r)) of r-regular n-vertex connected graphs (derived e.g. from typical realizations of random r-regular graphs) which converge in the sense of local weak convergence (Benjamini-Schramm convergence) to T r , and for such a sequence we will have lim n ρ(G n,r ) = ρ(T r ).
By analyzing the CG process on $\mathbb{T}_r$ we will show (Corollary 3.5) that $\rho(\mathbb{T}_r) \sim 2/r$ as $r \to \infty$. Granted that result, we can summarize Lemma 3.1 and the discussion above as follows. Write $a^*(r)$ and $a_*(r)$ for the sup and inf, over sequences $(G_{n,r}, n \ge n_0(r))$ of $r$-regular $n$-vertex connected graphs, of the limiting densities $\rho(G_{n,r})$. Then $a_*(r)$ lies between $\frac{1}{1+r}$ and $\frac{1}{r}\big(1 + \frac{2\kappa_r}{r-1}\big)$, while $a^*(r) \ge \rho(\mathbb{T}_r) = (2 + o(1))/r$. We conjecture that in fact $a^*(r) \sim 2/r$ as $r \to \infty$, in other words that locally tree-like graphs are asymptotically extremal for this problem.

Finite trees
Consider the simple CG process on a finite tree $\mathbf{T}$, with the constant meeting rates (15) over edges. We establish a recursion, Lemma 3.3, for the distribution of $X_{(\mathbf{T}, o)}(t)$, the fortune of agent $o$ at time $t$. The CG process uses only the first meeting times $\tau_e$ across edges, which are independent with Exponential(1) distributions; by a deterministic time-change we can suppose instead that the distribution is Uniform(0,1), simplifying the calculations below.
For $0 \le t, z \le 1$, set $\phi_{(\mathbf{T}, o)}(z, t)$ to be the quantity defined at (20). For a neighbor $i$ of $o$ (written $i \sim o$), we let $\mathbf{T}_i$ denote the subtree of $\mathbf{T}$ (as viewed from root $o$) consisting of $i$ and all its descendants.
Proof. Let $Y(t)$ denote the fortune of agent $o$ at time $t$ in the modified process where $o$ systematically wins every game she plays. Clearly $Y(t)$ decomposes as a sum of contributions from the subtrees $\mathbf{T}_i$, $i \sim o$, with the terms in the sum being independent. The original process can be coupled with the modified process in the natural way, such that they coincide as long as $o$ has not lost a game (in the original process). Hence, almost surely under this coupling, the claimed recursion holds.
The $r$-regular tree. The $r$-regular infinite tree consists of $r$ copies of an $(r-1)$-ary tree, connected to a root. Letting $\phi^*_r(z, t)$ denote the corresponding function, as at (20), the general recursion from Lemma 3.3 expresses $\phi^*_r$ in terms of $\phi_{r-1}$. Comparing with (21) and using (23), it follows that $\phi^*_r$ satisfies the same $r \to \infty$ asymptotics as does $\phi_{r-1}$. In particular:
Corollary 3.5. On the $r$-regular infinite tree $\mathbb{T}_r$, the probability $\rho(\mathbb{T}_r) = \phi^*_r(0, 1)$ that a given agent finishes with non-zero money is $2/r + o(1/r)$ as $r \to \infty$.
Galton-Watson trees. When the rooted tree $(\mathbf{T}, o)$ is a random Galton-Watson tree with degree distribution $\{\pi_n : n \ge 0\}$, the general recursion from Lemma 3.3 immediately leads to a recursive distributional equation for the annealed generating function $\phi(z, t)$, where the expectation is now taken with respect to both the randomness of the rooted tree and the randomness of the CG process. Letting $F_\pi(x) = \sum_n \pi_n x^n$ denote the degree generating function of the Galton-Watson tree, we readily obtain:
$\phi(z, t) = \int_z^1 F_\pi\Big(1 - \int_0^t \phi(\xi, u) \, du\Big) \, d\xi$. (24)
Extracting useful information from this equation for a general distribution $\{\pi_n : n \ge 0\}$ remains an open problem.

3.3 The Poisson-Galton-Watson tree, the sparse Erdős-Rényi graph and the short-time behavior of the Kingman coalescent

In the case where $\{\pi_n : n \ge 0\}$ is the Poisson distribution with mean $c \ge 0$ (i.e. $F_\pi(x) = e^{cx - c}$), equation (24) can be solved explicitly, yielding
$\phi(z, t) = \frac{2(1 - z)}{2 + ct(1 - z)}$.
Identifying this generating function, we find that the fortune $X(t)$ of the agent at the root has distribution specified by:
$P(X(t) > 0) = \frac{2}{2 + ct}$; (25)
the conditional distribution of $X(t)$ given $X(t) > 0$ is Geometric$\big(\frac{2}{2 + ct}\big)$. (26)
Let us outline an interesting alternative explanation of why (25) and (26) arise here. Under our time-change (first meetings occur at Uniform(0,1) random times) the simple CG process on $G(n, c/n)$ arises from a two-stage construction: for each edge $e$ of the complete graph on $n$ vertices, first select $e$ with probability $c/n$, then (if selected) assign the Uniform random meeting time. But the set of meetings that occur before time $t \in [0, 1]$ can alternatively be described by: for each edge $e$ of the complete graph, a meeting has occurred with chance $ct/n$, independently over $e$, and meetings occurred at independent Uniform times. Now consider the Kingman coalescent, in the spirit of more general stochastic coalescence models [3,4,5,6], as a process of coalescing partitions of $\{1, 2, \dots, n\}$, being the special case in which each pair of blocks merges at constant rate. But take this rate to be $1/n$ instead of 1. The $n \to \infty$ limit distribution of block sizes at time $\tau$ in this short-time limit regime is known to be Geometric$\big(\frac{2}{2 + \tau}\big)$ -- see Construction 5 in [3] for an intuitive explanation in terms of a process of coalescing intervals on $\mathbb{Z}$. But in our "CG process on $G(n, c/n)$" above, the process of fortunes of the solvent agents, considered as a process indexed by $\tau = ct$, evolves in essentially the same way as the process of block sizes in this Kingman coalescent, so (26) is ultimately equivalent to the short-time Geometric limit result for the Kingman coalescent.
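Formula (25) is convenient to test by direct simulation of the CG process on $G(n, c/n)$ with Uniform(0,1) meeting times (a rough illustrative check; both finite-$n$ effects and sampling error contribute a small discrepancy):

```python
import random

def solvent_fraction(n, c, t, rng):
    """Simple CG on G(n, c/n): each selected edge gets a Uniform(0,1)
    meeting time; meetings are processed in time order up to time t."""
    meetings = []
    p = c / n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                meetings.append((rng.random(), i, j))
    meetings.sort()
    x = [1] * n
    for time, i, j in meetings:
        if time > t:
            break
        if x[i] > 0 and x[j] > 0:            # only solvent pairs play
            if rng.random() * (x[i] + x[j]) < x[i]:
                x[i], x[j] = x[i] + x[j], 0
            else:
                x[j], x[i] = x[i] + x[j], 0
    return sum(v > 0 for v in x) / n

rng = random.Random(3)
runs = [solvent_fraction(1500, 2.0, 1.0, rng) for _ in range(4)]
avg = sum(runs) / len(runs)  # prediction (25): 2 / (2 + c t) = 0.5
```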