A particle system in interaction with a rapidly varying environment: Mean field limits and applications

We study an interacting particle system whose dynamics depends on an interacting random environment. As the number of particles grows large, the transition rate of the particles slows down (perhaps because they share a common resource of fixed capacity). The transition rate of a particle is determined by its state, by the empirical distribution of all the particles and by a rapidly varying environment. The transitions of the environment are determined by the empirical distribution of the particles. We prove the propagation of chaos on the path space of the particles and establish that the limiting trajectory of the empirical measure of the states of the particles satisfies a deterministic differential equation. This deterministic differential equation involves the time averages of the environment process. We apply our results to analyze the performance of communication networks where users access some resources using random distributed multi-access algorithms. For these networks, we show that the environment process corresponds to a process describing the number of clients in a certain loss network, which allows us to provide simple and explicit expressions for the network performance.


Introduction and motivation
The paper comprises two parts: the first is devoted to the analysis of the mean field limits of a general system of interacting particles, also interacting with a random environment. In the second part of the paper, we demonstrate how the results on particle systems derived in the first part can be applied to understand the behavior of computer networks where users access a shared resource using distributed random Medium Access Control (MAC) algorithms. These algorithms are implemented in the network interface card of all computers connected to a Local Area Network (LAN).
LANs are networks covering a small geographic area, like a home, an office, or a building, and constitute a first and crucial component of the Internet. Analyzing random MAC algorithms is notoriously difficult; most of the related issues have actually been open since the introduction of the first of these algorithms in the early 1970s. In [7], the author used heuristic formulas to approximate their performance in specific networks. These formulas are based on the assumption that the particles (or computers) evolve independently. Our mean field analysis rigorously proves this propagation of chaos, and also allows us, for the first time, to derive explicit analytical expressions for the performance of these algorithms in general networks.
A particle system interacting with a random environment In the first part of the paper, we are interested in the mean field limit of a system of N interacting particles whose dynamics also depends on an environment process. More specifically, the evolution of each particle depends on the state of the particle, on the empirical distribution of all the particles and also on environment variables. The environment process is a finite state space Markov chain which interacts with the particle system because its transition kernel depends on the empirical distribution of the states of the particles. A key feature of the systems considered here is that the environment is rapidly varying: it evolves at rate 1, whereas the particles evolve at rate 1/N. We prove a mean field limit for this particle system when N goes to infinity. In order to capture the evolution of the particles we must speed up time by a factor of N. In so doing, the particles see a time average of the rapidly changing environment. In the mean field limit, particles evolve independently and see the environment process in its steady state, which in turn evolves as the particles evolve. Our results on a particle system evolving in a rapidly changing environment are a generalization of results obtained by Kurtz in [23]. We extend these results in two ways: first, the particles may evolve according to their current states, to the empirical distribution of the states of the particles, and to an environment process; then we show the path-space convergence of the trajectory of the empirical distribution of the states of the particles.
To prove this convergence, we extend and adapt the method developed by Sznitman and Graham in [27,18].
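The time-averaging effect described above can be illustrated numerically. The following toy sketch (a hypothetical two-state environment and hypothetical rates, not the model studied in this paper) simulates a single tagged particle whose per-slot transition probability is f(z)/N while the environment z flips once per slot; over rescaled time, the empirical transition rate approaches the rate averaged against the stationary law of the environment.

```python
import random

def simulate(N, T, seed=0):
    """Toy illustration: one particle with per-slot transition probability
    f(z)/N, where z is a two-state environment evolving once per slot
    (i.e. N times faster than the particle after rescaling t -> Nt).
    Returns (empirical rate over [0, T], predicted averaged rate)."""
    rng = random.Random(seed)
    a, b = 0.3, 0.6          # environment flip probabilities 0->1 and 1->0 (arbitrary)
    f = {0: 0.2, 1: 0.9}     # transition rates seen in each environment state (arbitrary)
    z, jumps = 0, 0
    for _ in range(N * T):
        if rng.random() < f[z] / N:
            jumps += 1
        # environment moves every slot
        if z == 0:
            z = 1 if rng.random() < a else 0
        else:
            z = 0 if rng.random() < b else 1
    pi1 = a / (a + b)                        # stationary probability of z = 1
    avg_rate = (1 - pi1) * f[0] + pi1 * f[1]  # pi-averaged transition rate
    return jumps / T, avg_rate
```

For large N the empirical rate concentrates around the averaged rate, which is exactly the averaging phenomenon formalized by the mean field limit.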
The initial motivation for the use of mean field asymptotics was to analyze the behavior of computer networks. Of course, mean field models have been used in many contexts and the theory is well developed. For example, Dawson [12] studies a model in statistical physics where N particles diffusing in a potential well have the additional property that they are all attracted to the center of mass of all the particles. The Fleming-Viot model [13] is an example from genetics where a particle represents an individual and its state represents the genetic type and its location. Our results (and those in [23]) on interacting particle systems with a rapidly varying environment could find other applications. For instance, they could be used to capture the dynamics of a population whose genetic makeup evolves slowly in time in the presence of a rapidly varying environment whose evolution may partly depend on the empirical distribution of the individuals. Another potential field of application is microscopic models in economic theory and stochastic market evolution, also known as "econophysics", see for example the work by Karatzas [20] or Cordier [11]. In a simple market economy or in a financial market, a particle is an economic agent and its state represents its goods and its savings. The environment is the prices of the various available goods.
Agents may exchange, borrow or lend money. Prices and the purchase decisions of agents interact. In some markets, like financial markets, the prices fluctuate roughly N times faster than the decisions of each individual agent.
Analyzing Medium Access Control algorithms in computer networks Consider N users (or computers) communicating in a wired or wireless Local Area Network (LAN). To transmit data packets, users have to share a single resource (a cable in wired LANs or a radio channel in wireless LANs) using some Medium Access Control (MAC) protocols. These protocols are distributed, meaning that each user runs its protocol independently of the other users sharing the same resource. This architecture has ensured the scalability of LANs (in the sense that new users can join and leave the network without the need of explicitly advertising it); it has played a crucial role in their development and hence contributed to the rapid growth of the Internet.
When two users cannot simultaneously successfully transmit data packets (because they share the same resource), we say that these users interfere.
Two interfering users who simultaneously transmit experience a collision, and the packets have to be retransmitted. Most current MAC protocols limit collisions using the following two main principles: first, before transmitting, users sense the resource, and should it be busy they abstain from transmitting. This technique is referred to as CSMA (Carrier Sense Multiple Access) and ensures that packet transmissions cannot be interrupted. Even if the sensing mechanism is perfect, a collision may still occur if two interfering users start transmitting at the same time (or rather so close together in time that CSMA cannot prevent the collision). The second main principle, termed random back-off, aims at reducing the probability that several users start transmitting simultaneously. To do so, a user only starts transmitting with a certain probability less than one. This probability is adapted to the number of successive collisions experienced by users, which allows users to infer the level of congestion of the resource. Typically, in LANs today, users implement the exponential back-off algorithm (also referred to as the Decentralized Coordination Function (DCF) in the standards, see [7] and references therein for a detailed description of these standards): the transmission probability is divided by two after each collision, and it is reinitialized after the successful transmission of a packet.
The performance of MAC protocols is measured in terms of the throughput realized by the various users, i.e., the number of packets successfully transmitted by users per second. The performance analysis requires characterizing the joint evolution of the transmission probabilities of the N users (see Section 5 for the state of the art). These probabilities evolve according to an N-dimensional Markov chain that is usually intractable because of the correlations introduced by collisions. Mean field asymptotics are useful to approximate this evolution.
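The decoupling approximation underlying the mean field analysis can be previewed with a back-of-the-envelope computation: if each of N users transmits independently with probability p/N at the beginning of an idle slot, the per-slot idle, success and collision probabilities approach Poisson-type limits as N grows. The sketch below is only illustrative; the value p = 1 is an arbitrary choice, not a parameter of the paper.

```python
import math

def slot_probabilities(N, p):
    """Under the decoupling approximation, each of N users independently
    starts transmitting with probability p/N at the start of an idle slot.
    Returns (P(slot idle), P(exactly one transmission), P(collision))."""
    idle = (1 - p / N) ** N
    success = N * (p / N) * (1 - p / N) ** (N - 1)
    return idle, success, 1 - idle - success

# As N grows, idle -> e^{-p} and success -> p e^{-p} (Poisson limit).
idle, success, collision = slot_probabilities(1000, 1.0)
```

This independence assumption is precisely what the propagation-of-chaos result of this paper justifies rigorously.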
In this paper, we consider two relevant scenarios for interference. We consider networks with full interference, where all pairs of users interfere, and networks with partial interference, where users do not interfere with all other users. In the latter scenario, users are classified according to the set of users they interfere with. Partial interference typically arises in wireless networks as illustrated in Figure 1: all 6 users are willing to transmit data packets to the access points 1 or 2; class-1 (resp. class-3) users interfere with users of classes 1 and 2 (resp. 2 and 3), whereas class-2 users interfere with all users. Two users of classes 1 and 3 respectively cannot sense each other, and this can lead to fairness issues: users of class 2 find themselves in a predicament like that of a polite nephew sitting on a sofa between two garrulous aunts who are hard of hearing and therefore hear the nephew but not each other. Each aunt will launch into a new dialogue before the other aunt has finished. The poor nephew will hardly ever get a word in!

Figure 1: A network with partial interference. A dashed line between two users means that they sense each other.
In Section 5, we apply the results derived for the particle systems to provide accurate approximations of the performance of a general class of MAC protocols in networks with full or partial interference. A user is modeled as a particle whose state includes its transmission probability and, in the case of partial interference, its class. The particles interact with each other because of collisions. If the access protocol is fair, each user will necessarily get around 1/N of the resource; i.e., the transmission probability of each user decreases as N increases. Our mean field limit will therefore depend on rescaling and speeding up time by a factor of N. The environment process captures whether, for a given user, the resource is sensed busy or idle.
For example, the environment of the network in Figure 1 is represented by a vector z = (z 1 , z 2 , z 3 ) of zeros and ones. The environment (1, 0, 1) would represent ongoing transmissions from a user in class 1 and a user in class 3.
When a user transmits the resource is blocked; i.e. the environment changes.
These environmental changes occur at a rate determined by the N users, i.e., at rate 1. Consequently, the conditions for our theory are met.
Notations Let S be a separable, complete metric space; P(S) denotes the space of probability measures on S. L(X) is the law of the S-valued random variable X. D(R+, S) denotes the space of right-continuous functions with left-hand limits, endowed with the Skorohod topology associated with its usual metric, see [15] p. 117. With this metric, D(R+, S) is complete and separable.
We extend a discrete time trajectory (X(k)), k ∈ N, in D(N, S) into a continuous time trajectory in D(R+, S) by setting X(t) = X([t]) for t ∈ R+, where [·] denotes the integer part. (F_t), t ∈ R+ or N, will denote the natural filtration with respect to the processes considered.
‖·‖ denotes the total variation norm of measures. Finally, for any measure Q ∈ P(S) and any measurable function f on S, ⟨f, Q⟩ = Q(f) = ∫ f dQ denotes the usual duality bracket.
We recall that a sequence of random variables (X^N_i)_{i∈{1,...,N}} ∈ S^N is exchangeable if L((X^N_i)_{i∈{1,...,N}}) = L((X^N_{σ(i)})_{i∈{1,...,N}}) for any permutation σ of {1, . . . , N}. Moreover, the sequence is Q-chaotic if, for every fixed k ∈ N and all bounded continuous functions f_1, . . . , f_k on S, lim_{N→∞} E[f_1(X^N_1) · · · f_k(X^N_k)] = ∏_{j=1}^{k} ⟨f_j, Q⟩.

An interacting particle system in a varying environment
In this section, we first provide a precise description of the interacting particle system under consideration. We then state the main results, giving the system behavior in the mean field limit when the number of particles grows to infinity. The proofs of these results are postponed to subsequent sections.

Model description
The particles We consider N particles evolving in a countable state space X at discrete time slots k ∈ N. For simplicity we assume the particles are exchangeable. At time k, the state of the i-th particle is X N i (k) ∈ X .
The state of the system at time k is described by the empirical measure ν^N(k) ∈ P(X), ν^N(k) = (1/N) Σ_{i=1}^{N} δ_{X^N_i(k)}, while the entire history of the process is described by the empirical measure on path space ν^N ∈ P(D(N, X)), ν^N = (1/N) Σ_{i=1}^{N} δ_{(X^N_i(k))_{k∈N}}.

The interacting environment In the system considered, the evolution of the particles depends not only on the state of the particle system but also on a background Markovian process Z^N(k) = (Z^N_1(k), · · · , Z^N_N(k)), where each Z^N_i(k) takes values in a countable state space Z. Specifically, each Z^N_i is a Markov chain whose transition kernel satisfies the following:

P(Z^N_i(k + 1) = z | F_k) = K^N_{ν^N(k), X^N_i(k)}(Z^N_i(k), z),   (2)

where K^N_{µ,x} is a transition kernel on Z depending on a probability measure µ ∈ P(X) and on x ∈ X, and where F_k = σ((ν^N(0), Z^N(0)), · · · , (ν^N(k), Z^N(k))).
The latter filtration depends on N, but as pointed out above, without possible confusion, F_k will always denote the underlying natural filtration of the processes. Note that (2) does not completely define the transition kernel of Z^N; in fact, the joint evolution of the vector (Z^N_1(k), · · · , Z^N_N(k)) is arbitrary.
Evolution of the particles We represent the possible transitions for a particle by a countable set S of mappings from X to X. An s-transition for a particle in state x leads this particle to the state s(x). We assume that the conditional probability given F_k that an s-transition occurs for particle i between times k and k + 1 is equal to

(1/N) F^N_s(X^N_i(k), ν^N(k), Z^N_i(k)),   (3)

with Σ_{s∈S} F^N_s(x, α, z) = 1 for all (x, α, z) ∈ X × P(X) × Z (this assumption is made for simplicity; the content of the paper is unchanged if Σ_{s∈S} F^N_s(x, α, z) ≤ C for some constant C independent of (x, α, z)).

We define the events
A N i (k) = {a transition occurs for particle i between times k and k + 1}.
We assume that the transitions of the different particles are only weakly correlated.
More precisely:
A0. There exists a positive sequence (ρ_N)_{N∈N} such that lim_N ρ_N = 0 and, for all k and all i ≠ j,

P(A^N_i(k) ∩ A^N_j(k) | F_k) ≤ ρ_N / N.   (4)

Note that, due to (3), the process Z^N evolves quickly while the empirical measure ν^N(k) evolves slowly. Also note that the s-transitions of the various particles may be correlated, and the process Z^N may depend on the transitions of the particles: the particle system is thus in interaction with its environment. Note finally that if the particle transitions are independent then (4) holds with ρ_N = 1/N.
We make the following additional assumptions on the system evolution.

Assumptions
A1. Uniform convergence of F^N_s to F_s: lim_{N→∞} sup_{(x,α,z)∈X×P(X)×Z} |F^N_s(x, α, z) − F_s(x, α, z)| = 0.
A2. The functions F_s are uniformly Lipschitz: there exists C such that, for all x, z and α, β ∈ P(X), |F_s(x, α, z) − F_s(x, β, z)| ≤ C ‖α − β‖.
A3. Uniform convergence in total variation of K^N_α to K_α: lim_{N→∞} sup_{α,x,z} ‖K^N_{α,x}(z, ·) − K_{α,x}(z, ·)‖ = 0.
A4. The mapping α → K_α is uniformly Lipschitz: there exists C such that, for all x, z and α, β ∈ P(X), ‖K_{α,x}(z, ·) − K_{β,x}(z, ·)‖ ≤ C ‖α − β‖.
A5. The Markov chains with kernels K_{α,x} have a unique stationary probability measure π_{α,x}.
A6. For all x in X and α, β in P(X): ‖π_{α,x} − π_{β,x}‖ ≤ C ‖α − β‖.
We discuss in Section 4 how the above assumptions may be checked.
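When Z is finite, Assumption A5 can be checked numerically. The sketch below computes the stationary measure of a finite row-stochastic kernel by power iteration; the kernel K and its entries are hypothetical, chosen only for illustration.

```python
def stationary(K, tol=1e-12, max_iter=100_000):
    """Stationary distribution of a finite row-stochastic kernel K
    (given as a list of rows) by power iteration.  Assumes the chain
    is ergodic, in the spirit of Assumption A5."""
    n = len(K)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * K[i][j] for i in range(n)) for j in range(n)]
        if max(abs(new[j] - pi[j]) for j in range(n)) < tol:
            return new
        pi = new
    return pi

# Hypothetical two-state environment kernel playing the role of K_{alpha,x};
# for a 2x2 kernel the stationary law is (b, a)/(a + b) with a = K[0][1],
# b = K[1][0].
K = [[0.7, 0.3],
     [0.6, 0.4]]
pi = stationary(K)
```

For this kernel the computed measure is (2/3, 1/3), matching the closed form.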

Main Results
The main result of this paper is to provide a mean field analysis of the system described above, i.e., to characterize the evolution of the system when the number of particles grows. According to (3), as N → ∞, the chains X^N_i slow down; hence, to derive a limiting behavior, we speed up time and define q^N_i(t) = X^N_i([Nt]). We wish to apply the ideas of Theorem 2.1 in [23]. In that context we define the joint measure ζ^N for A ⊆ X and B ⊆ Z. Clearly the evolution of ν^N is determined by ζ^N.
In the context of [23], our ν^N is Kurtz's X^N and our Y^N is Kurtz's Y^N. However, we cannot quite apply the theorems in [23] because the transition kernel of Z^N_i depends on both ν^N and X^N_i.
Following [23], we define ℓ_m(Z × X) to be the space of measures Γ on [0, ∞) × Z × X such that Γ([0, t] × Z × X) = t for all t ≥ 0. Since Y^N does not slow down as N → ∞ like µ^N, we cannot hope to prove the weak convergence of Y^N, but the occupation measure Γ^N does converge weakly by averaging. To obtain the relative compactness of Γ^N and µ^N we require the following assumptions.
A7. For each ǫ > 0 and each t > 0 there exists a compact set K ⊆ X × Z such that lim inf_N E[Γ^N([0, t] × K)] ≥ (1 − ǫ)t.
A8. The sequence L(q^N_1(·)) is tight in P(D(R+, X)).
In most applications, the tightness of L(q^N_1(·)) in P(D(R+, X)) is not a major issue. Indeed, note that the inter-arrival times between two transitions of q^N_1(·) are independent and distributed as 1/N times geometric(1/N) variables (which converge to exponential(1) variables). Hence, if for example the state space X or the set of transitions S is finite, we may apply the tightness criterion of Theorem 7.2 in Ethier-Kurtz [15], p. 128.
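The convergence of the rescaled inter-transition times can be checked numerically: a geometric(1/N) number of slots, divided by N, is approximately exponential(1). A minimal sketch (sample size and seed are arbitrary choices):

```python
import math, random

def rescaled_interarrival(N, rng):
    """Number of slots until an event of per-slot probability 1/N occurs,
    divided by N: the rescaled inter-transition time of a tagged particle
    (a geometric(1/N) variable over N)."""
    k = 1
    while rng.random() >= 1.0 / N:
        k += 1
    return k / N

rng = random.Random(1)
N, samples = 200, 5000
xs = [rescaled_interarrival(N, rng) for _ in range(samples)]
mean = sum(xs) / samples                        # approaches E[Exp(1)] = 1
tail = sum(1 for x in xs if x > 1.0) / samples  # approaches P(Exp(1) > 1) = 1/e
```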

Transient regimes
The following theorem provides the limiting behavior of the system in transient regimes.
Theorem 2.1 Assume that Assumptions A0-A8 hold and that the initial values q^N_i(0), i = 1, . . . , N, are exchangeable and such that their empirical measure µ^N_0 converges in distribution to a deterministic limit Q_0 ∈ P(X) as N → ∞. Then there exists a probability measure Q on D(R+, X) such that the variables q^N_i(·), i = 1, . . . , N, are Q-chaotic.
In [27], Sznitman proved that if q^N_i(0), i = 1, . . . , N, are exchangeable, then the convergence in distribution of their empirical measure µ^N_0 to a deterministic limit Q_0 is equivalent to the variables q^N_i(0), i = 1, . . . , N, being Q_0-chaotic. The above theorem thus states that if the particles are initially asymptotically independent, then they remain asymptotically independent. This phenomenon is also known as the propagation of chaos.
The independence allows us to derive an explicit expression for the evolution of the system state. As explained earlier, when N is large, the evolution of the background process is very fast compared to that of the particle system; the particles then see a time average of the background process. The following theorem formalizes this observation. For α ∈ P(X) and x ∈ X, let π_{α,x} denote the stationary distribution of the Markov chain with transition kernel K_{α,x}. We define the average transition rates for a particle in state x by

F̄_s(x, α) = Σ_{z∈Z} F_s(x, α, z) π_{α,x}(z),   (5)

and, for all t > 0 and all n ∈ N,

(d/dt) Q_n(t) = Σ_{s∈S} Σ_{m: s(x_m)=x_n} Q_m(t) F̄_s(x_m, Q(t)) − Σ_{s∈S} Q_n(t) F̄_s(x_n, Q(t)).   (6)

The equations (6) have the following interpretation: if s(x_m) = x_n then Q_m(t) F̄_s(x_m, Q(t)) is a mean flow of particles from state x_m to x_n. Hence, the first sum in (6) is the total mean incoming flow of particles to x_n, and Σ_{s∈S} Q_n(t) F̄_s(x_n, Q(t)) is the mean outgoing flow from x_n.
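The flow interpretation of (6) translates directly into a forward Euler scheme. The sketch below integrates a hypothetical two-state instance with constant averaged rates a and b (chosen for illustration; in general the rates depend on Q(t)):

```python
def mean_field_step(Q, rates, transitions, dt):
    """One Euler step of the limiting dynamics (6): dQ_n/dt equals the
    mean incoming flow to x_n minus the mean outgoing flow from x_n.
    `transitions` maps a label s to the map x -> s(x); `rates[s](x, Q)`
    plays the role of the averaged rate F̄_s(x, Q).  Toy sketch only."""
    dQ = [0.0] * len(Q)
    for s, smap in transitions.items():
        for m, qm in enumerate(Q):
            flow = qm * rates[s](m, Q)
            dQ[m] -= flow            # outgoing flow from x_m
            dQ[smap(m)] += flow      # incoming flow to s(x_m)
    return [q + dt * d for q, d in zip(Q, dQ)]

# Hypothetical two-state example: particles move 0 -> 1 at averaged rate a
# and 1 -> 0 at averaged rate b (constant in Q for simplicity).
a, b = 2.0, 1.0
transitions = {"up": lambda x: 1, "down": lambda x: 0}
rates = {"up":   lambda x, Q: a if x == 0 else 0.0,
         "down": lambda x, Q: b if x == 1 else 0.0}

Q = [1.0, 0.0]
for _ in range(20000):
    Q = mean_field_step(Q, rates, transitions, dt=0.001)
# Q approaches the equilibrium (b, a)/(a + b) = (1/3, 2/3)
```

Note that each step conserves total mass, since every outgoing flow from one state reappears as an incoming flow to another, mirroring the structure of (6).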

Stationary regime
We now characterize the stationary behavior of the system in the mean field limit. To do so, we make two additional assumptions:
A9. For each N, the system admits a stationary regime, and the sequence L(q^N_1(·)), where the system starts from this stationary regime, is tight in P(D(R+, X)).
A10. The dynamical system (6) is globally stable: there exists a measure Q_st = (Q^n_st) ∈ P(X) satisfying, for all n,

Σ_{s∈S} Σ_{m: s(x_m)=x_n} Q^m_st F̄_s(x_m, Q_st) = Σ_{s∈S} Q^n_st F̄_s(x_n, Q_st),

and such that for all Q ∈ P(D(R+, X)) satisfying (6), for all n, lim_{t→+∞} Q^n(t) = Q^n_st.
Then the asymptotic independence of the particles also holds in the stationary regime:

Proof of Theorems 2.1, 2.2 and 2.3
We use the following notation extensively: A N,s i (k) = {s-transition occurs for the particle i between k and k + 1}.
By definition, we have: We also recall the notation A N i (k) = {a transition occurs for particle i between times k and k + 1}.
We have:

Proof of Theorems 2.1 and 2.2
By Proposition 2.2 in Sznitman [27], Theorem 2.1 is equivalent to (10). To establish (10), we first prove the tightness of the sequence L(µ^N, Γ^N).
We then show that any accumulation point of this sequence is the unique solution of a martingale problem. This requires ideas from Theorem 2.1 and Example 2.3 in [23].

Step 1 : Relative Compactness
First we check that the sequence L(µ^N) is tight in P(P(D(R+, X))). Thanks again to Sznitman [27], Proposition 2.2, this is a consequence of the tightness of L(q^N_1(·)) in P(D(R+, X)), i.e., of A8. By Prohorov's theorem, L(µ^N) is relatively compact. By Lemma 1.3 in [23], Γ^N is relatively compact because of the compact containment hypothesis A7. It follows that the sequence L(µ^N, Γ^N) is relatively compact.

Step 2 : Convergence to the solution of a martingale problem
We follow Step 2 in Graham [18]. We show that any accumulation point of L(µ^N, Γ^N) is supported by solutions of a martingale problem. Let L^∞(X) denote the set of bounded measurable functions from X to R. For each s ∈ S, we may rewrite Equation (11) as follows. The proof of the following lemma is given at the end of this section.
Now assume that Lemma 3.1 holds, and let Π∞ be an accumulation point of L(µ^N, Γ^N). Let (µ, Γ) be a random variable taking values in P(D(R+, X)) × ℓ_m(Z × X), having distribution Π∞, which is adapted to a complete filtration. Define the Radon-Nikodym derivative: Lemma 3.2 We have: The first term in the above expression goes to 0 as N goes to infinity. The third term is a mean zero martingale. From Dynkin's formula, we have The second term in (14) is equal to Note that from (3), Thus, as N → ∞, the only important term in (15) is the first sum, and it is equivalent, by Assumptions A3-A4, to: Therefore, our calculation gives: However, for a given µ(t) and x, by Assumption A5 there is a unique solution to the above which is a probability; i.e., for all z ∈ Z, Specifically, for all f ∈ L^∞(X), the following is a µ-martingale, where X = (X(t))_{t≥0} denotes a canonical trajectory in D(R+, X). Proof. The proof is similar to Step 2 of Theorem 3.4 of Graham [18] or of Theorem 4.5 of Graham and Méléard [17]. However, here our assumptions are weaker, so we detail the proof.
From Lemma 7.1 in Ethier and Kurtz [15], the projection map X → X(t) is µ-a.s. continuous for all t except perhaps on an at most countable subset D (see the argument in the proof of Theorem 4.5 of Graham and Méléard [17]).
Now assume (17) holds for arbitrary 0 ≤ t_1 < t_2 < · · · < t_k ≤ t < T outside a countable set D and g ∈ C_b(X^k). It implies that, for all A ∈ F_t, the corresponding process is a martingale and µ satisfies the non-linear martingale problem (16).
It remains to prove (17). Let Π^N be the law of (µ^N, Γ^N); we write: From (13), Hence, Using exchangeability and the Cauchy-Schwarz inequality, we obtain: Lemma 3.1 implies that I tends to 0.
Next, II 2 is less than or equal to However, as N → ∞, Consequently II 2 → 0 as N → ∞.
Hence, from (18)  To conclude the proof of Lemma 3.3, note that the continuity of X →

Step 3 : Uniqueness of the solution of martingale problem
We now show that the solution to (16) is unique. Here, we use Proposition 2.3 in Graham [18] (which is an extension of Lemma 2.3 in Shiga and Tanaka [26]) to show uniqueness. We remark that By Fubini's theorem, Since |ϕ(s(x))| ≤ 1, F_s(x, α, z) ≥ 0 and Σ_{s∈S} F_s(x, α, z) = 1, Thus, applying Assumptions A4-A6, we deduce: Using Assumption A2, So finally, we have checked that: We then use Proposition 2.3 in Graham [18] to establish that the solution to the martingale problem (16) is unique.

Step 4 : Weak convergence and Evolution equation
In the first three steps we have proved that L(µ^N) converges weakly to µ = δ_Q, where Q is the unique solution of the martingale problem (16) starting at Q_0.
We can now identify the evolution equation satisfied by Q. Since Q satisfies the martingale problem, (Q(t))_{t≥0} solves the non-linear Kolmogorov equation obtained by taking expectations in (16). Applying (18) to f = 1_{x_n} for all n, we get the set of differential equations (6). It immediately follows that Γ is also deterministic and Γ(dt, x, z) = dt · Q_x(t) · π_{Q(t),x}(z) almost surely.

Proof of Lemma 3.1
First, M^{f,N}_i(t) is a square-integrable martingale by Dynkin's formula. In the sequel, E_{F_k}[·] will denote E[·|F_k]. With this notation, E_{F_k}[1_{A^N_i(k)}] = 1/N, and we can rewrite Equation (12) as: Since (M^{f,N}_i(t))_{t∈R+} is a martingale, this product is equal to:

Now, let
Notice that Analogously, we also have: Therefore from (4), and Similarly, we obtain and the lemma follows. ✷

Proof of Theorem 2.3
Assume that ((q^N_i(0))_i, Z^N) represents the system of N particles in stationary regime. Then by symmetry, (q^N_i(0))_i is exchangeable. Define Π^N = L(µ^N, Γ^N). We cannot directly apply Theorem 2.1 since we do not know whether a converging subsequence of µ^N(0) converges weakly towards a deterministic limit.
We now circumvent this difficulty. By Assumption A9, as in Step 1 of the proof of Theorem 2.1, we deduce from Sznitman [27], Proposition 2.2, that µ^N is tight in P(D(R+, X)) and Π^N is tight in P(D(R+, X)) × ℓ_m(Z × X). Let Q in P(D(R+, X)) be in the support of µ∞, where Π∞ = (µ∞, Γ∞) is an accumulation point of Π^N. We can prove similarly that Lemma 3.3 still holds for Q.

By Step 3 of the proof of Theorem 2.1, the solution of the martingale problem is unique, and Q solves it with initial condition Q(0). The stationarity implies that µ^N(t) and µ^N(0) have the same distribution. Note also that outside a countable set

A uniform domination criterion
In this section we discuss the Assumptions A0-A10 made on the particle system. Assumptions A0-A6 are natural and can be checked directly. The additional assumptions A9 and A10, needed to derive the mean field limit in the stationary regime, may be difficult to check: A9 is a tightness assumption on the stationary measures, and A10 is the global stability of a differential equation.
In this section we present a new set of assumptions, based on a uniform domination of the transition kernels of the background process, which is provably sufficient to ensure that Assumption A7 holds. The new assumptions are defined as follows:
A11. There exists a transition kernel K on Z which dominates the kernels K^N_{α,x}. Specifically, let ⪯ be a partial order on Z such that K_z = {w ∈ Z : w ⪯ z} is finite for all z ∈ Z. There exists K such that for all N, z, x, α,

K^N_{α,x}(z, ·) ⪯_st K(z, ·),

where ⪯_st is the stochastic order relation: P ⪯_st P′ if for all z_1 ∈ Z, P(K_{z_1}) ≥ P′(K_{z_1}).
A12. The Markov chain Z(t) with transition kernel K is positive recurrent.
Proposition 4.1 If Assumptions A11 and A12 hold, then Assumption A7 holds.
Proof. Because the chain Z is positive recurrent, the long run proportion of time the chain Z spends outside a set K_z is at most ǫ/2 for some z ∈ Z, so By A8 we know µ^N is relatively compact and hence tight. By (2.5) in [27] the tightness of µ^N is equivalent to the tightness of the intensity measures I(µ^N) in P(D(R+, X)), defined by I(µ^N)(F) = E µ^N(F) for F ∈ B(D(R+, X)), the Borel σ-algebra associated with the Skorohod topology.
Hence for every ǫ > 0 there exists a compact set K_ǫ in D(R+, X) such that inf_N E µ^N(K_ǫ) ≥ 1 − ǫ/2. However, by Remark 6.4 on page 124 in [15], for each T > 0 there exists a compact set K_x ⊆ X such that, for all t ∈ [0, T ] and all N,
However, for each t, this follows by the above. We conclude that A7 holds with K = K_x × K_z.
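For a finite, totally ordered Z, one natural convention for the stochastic comparison in A11 is P ⪯_st P′ iff P(K_z) ≥ P′(K_z) for every lower set K_z; under that convention the comparison reduces to comparing cumulative masses, as in this sketch (the example distributions are arbitrary):

```python
def dominated(P, Pp):
    """P, Pp: probability vectors over Z = {0, 1, ..., n-1} with the
    natural total order.  Returns True iff P <=_st Pp, i.e. P puts at
    least as much mass as Pp on every lower set {0, ..., z}."""
    cum_p = cum_pp = 0.0
    for p, pp in zip(P, Pp):
        cum_p += p
        cum_pp += pp
        if cum_p < cum_pp - 1e-12:   # small slack for float rounding
            return False
    return True
```

For instance, a distribution concentrated on lower states is dominated by one shifted towards higher states.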

Application to random multi-access protocols
We now apply the previous analytical results to study the performance of communication networks where N users share a common resource in a distributed manner. We consider for example Local Area Networks (LANs), which are computer networks with relatively small geographic coverage (an office, a house, a part of a campus), and which constitute the first crucial component of the Internet. Transmissions in LANs are handled either on a cable (wired LANs) or on a radio channel (wireless LANs, also commonly called WiFi). Here we will focus on wireless LANs (our analysis can be carried out similarly in the case of wired LANs). In wireless LANs, users that are close to each other or that wish to transmit to the same receivers interfere, in the sense that they cannot simultaneously transmit packets successfully.
Two interfering users transmitting simultaneously are said to experience a collision. A collision is detected by a user at the end of the packet transmission when the corresponding receiver does not acknowledge a successful reception. One of the most challenging problems in computer networking has been to design mechanisms so that interfering users can efficiently and fairly share the resource in a distributed manner. Currently, users willing to transmit packets through a wireless LAN implement two standardized mechanisms: Carrier Sense Multiple Access (CSMA) and a random back-off algorithm referred to as the Decentralized Coordination Function (DCF), see [2]. In this section we aim at analyzing the performance of a general class of mechanisms, including the current CSMA-DCF couple, and at understanding whether current mechanisms perform well or whether they still require important improvements.
In the next subsection, we provide a short description of CSMA and of a class of random back-off algorithms, but also introduce a simple model for interference, and explain why the performance in wireless LANs is difficult to study. In the subsequent subsections, we explain how the results derived earlier in the paper for particle systems allow us to circumvent this difficulty and explicitly characterize the performance in these networks.

Carrier Sensing mechanisms
A first mechanism to separate transmissions of interfering users in time is CSMA. Before transmission, each user senses the channel, and should it be busy, it abstains from transmitting. This sensing mechanism may be too simple to capture the actual interference structure of the network (since, for example, the sensing is made at the transmitters, whereas interference is experienced at the receivers). Collisions may occur due to hidden terminals, and a loss of efficiency can be due to exposed terminals, see e.g. [19]. Hidden terminals refer to users whose transmissions interfere at the receiver but who are not able to detect (sense) each other. On the contrary, exposed terminals are users that do not interfere at the receiver but cannot simultaneously transmit because they sense each other's transmissions. In this paper, for simplicity, we restrict our attention to a perfect carrier sensing mechanism, where users sensing each other actually interfere at the receiver (we believe the analysis could be extended to hidden and exposed terminals).

Random back-off algorithms
Even under a perfect carrier sensing mechanism, collisions cannot be completely avoided if two users start transmitting simultaneously. To further reduce collisions, each user runs (independently of the other users) a random back-off algorithm. After each successful transmission or each collision, the user randomly picks a value for its back-off counter according to some distribution on N. This value represents the number of slots the channel has to be observed idle before the user may start transmitting (basically, the user decrements its counter by one after sensing the channel idle during one slot). Note that slots have a fixed duration that does not depend on the user (between 9 and 20 microseconds in IEEE 802.11 standards [2]). The details of this mechanism are illustrated in Figure 2.

Figure 2: User behavior - the case of a successful transmission. Before t = 0, the channel is sensed busy. At time DIFS (DCF Inter Frame Space), the user starts decrementing its back-off counter again by one per slot, and transmits when the latter reaches 0. After the transmission, the receiver waits for a duration of length SIFS (Short Inter Frame Space) and then sends the packet acknowledgment. After receiving this acknowledgment, the user picks a new back-off counter (12 in this case) and waits DIFS before starting to decrement it. Note that the inter-frame spaces are introduced to handle the acknowledgment procedure, and that DIFS > SIFS.

In the following, we assume that the back-off distribution is always geometric (so as to keep a simple Markovian setting), although we could easily generalize the analysis to uniform distributions. With this assumption, each user transmits with a given probability p at the beginning of each idle slot.
We consider the following generic way of adapting this probability: the probability belongs to a countable set B; after a successful transmission p is updated to S(p), and after a collision p is updated to C(p), where S(·) (resp. C(·)) is a decreasing (resp. increasing) mapping from B to B. We denote by p_0 = max{p ∈ B}. (Note that in the DCF, m is upper bounded by 7.) Finally, we denote by L (in slots) the average duration of a successful packet transmission (including its acknowledgment, see Figure 2), and assume that collisions have an average duration L_c that may differ from L. Again, to keep the formalism simple, we assume that the durations of successful transmissions and collisions are geometrically distributed (a multiple of slots), which again is not a crucial assumption.
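A DCF-flavoured instance of the maps S(·) and C(·) can be sketched as follows; the values p_0 = 0.5 and the cap of 7 halvings are illustrative assumptions, not parameters taken from the paper, and the code only checks that the adapted probability stays in the countable set B.

```python
import random

# Hypothetical DCF-like back-off: B = {p0 / 2^m, m = 0..M}, a successful
# transmission resets p to p0 = max(B), a collision halves p (bounded below).
P0, M = 0.5, 7

def on_success(p):
    """Plays the role of the map S(.) in the text (DCF flavour)."""
    return P0

def on_collision(p):
    """Plays the role of the map C(.) in the text (DCF flavour)."""
    return max(p / 2, P0 / 2 ** M)

def run(n_events, seed=0):
    """Apply a random sequence of collisions/successes and check that the
    transmission probability always remains in the countable set B."""
    rng = random.Random(seed)
    B = {P0 / 2 ** m for m in range(M + 1)}
    p = P0
    for _ in range(n_events):
        p = on_collision(p) if rng.random() < 0.7 else on_success(p)
        assert p in B
    return p
```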

Interference model and user class
We consider a simple model for interference, encoded by a matrix A = (A_cd) indexed by the user classes: class-c and class-d users interfere if and only if A_cd = 1. We say that the network has full interference if A_cd = 1 for all c, d, and that it has partial interference otherwise.

Performance metrics
The performance metric we aim at analyzing is the long-term throughput (the number of packets successfully transmitted per time unit) achieved by the users of the various classes. We denote by γ_c the throughput of class-c users.
Deriving expressions for this performance metric is notoriously difficult, due to the inherent interactions between users through interference.
A popular approach to circumvent this difficulty consists in decoupling the users, i.e., assuming that the (re)transmission processes of the various users are mutually independent. This heuristic has been used by Bianchi [7] to capture the performance of wireless LANs with full interference. In this work, we formally justify this approach and extend it to networks with partial interference. To do so, we apply the mean field analysis derived in the first part of the paper. In the case of full interference, the network can be modeled as a simple system of particles with no randomly varying environment (as already noticed in a preliminary work [9]). However, to analyze a network with partial interference, the introduction of this varying environment is necessary. As it turns out, the spatial heterogeneity in networks with partial interference may lead to important fairness issues, as mentioned in the introduction, and our analysis explicitly quantifies these issues.

Model analysis
We consider a network of N users as described in the previous subsection. We analyze the system at the beginning of each slot. Denote by p^N_i(k)/N the probability that user i becomes active at the end of the k-th slot, if idle (note that we have already renormalized this probability by 1/N so as to be able to conduct the asymptotic analysis as N grows large). For all i, k, N, p^N_i(k) ∈ B.
To capture the network dynamics, we define a process Z^N = {Z^N(k), k ≥ 0}. We now show how to model the network as a set of interacting particles as described in Section 2.
• The particles: the i-th user corresponds to the i-th particle, whose state describes the class of the user and its transmission probability at the end of the next idle slot: X^N_i(k) = (c_i, p^N_i(k)) ∈ X = C × B.
• The environment process: the process Z N introduced above is a simplified version of an environment process as described in Section 2.
The evolution of the environment is determined by the states of all the particles through ν^N. The evolution of the i-th particle depends on whether or not the corresponding user senses the channel idle.

Particle transitions. We first compute the transition probabilities of the various particles. The set S of possible transitions is composed of two functions, the first one representing a successful transmission, p → S(p), and the other one a collision, p → C(p). Note that the class of a particle/user does not change over time. Assume that at some slot k the system is in a given state. Averaging the probability that user i of class c successfully transmits at the end of slot k gives the transition probability F^N_S((c, p_i), α, z)/N corresponding to a success; similarly, averaging the indicator of the event that user i of class c experiences a collision at the end of slot k gives the transition probability F^N_C((c, p_i), α, z)/N corresponding to a collision.

In order to fit the scheme of the particle system of Section 2, we need to introduce a virtual transition from (c, p) to (c, p); with this virtual transition, the transition rates sum to 1, and Assumption 0 is satisfied. Since N log(1 − x/N) converges to −x, we obtain the corresponding asymptotic transition rates; in particular, F_∅((c, p_i), α, z) = 1 − p_i C_c(z). The convergence of F^N_S (resp. F^N_C) to F_S (resp. F_C) is uniform in α and z, so that Assumption A1 is satisfied. It is also easy to check that the functions F_S and F_C are uniformly Lipschitz, which ensures Assumption A2.
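The asymptotic rates above rest on the elementary fact that N log(1 − x/N) → −x, equivalently (1 − x/N)^N → e^{−x}: with roughly N users attempting with probabilities of order x/N, the probability that none of them transmits in a slot tends to e^{−x}. A two-line numerical check of this fact (ours, not part of the paper's analysis):

```python
import math

# (1 - x/N)**N -> exp(-x) as N grows: the probability that none of ~N users
# with attempt probabilities of order x/N transmits in a given slot.
def none_transmit_prob(x, N):
    return (1.0 - x / N) ** N

for N in (10, 100, 10_000):
    print(N, none_transmit_prob(0.5, N), math.exp(-0.5))
```

The error is of order x^2/(2N), so the approximation is already accurate for moderate N.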
Transitions of the background process Z^N. The transition kernels of Z^N are as follows: K^N_{α,A_1} (respectively K^N_{α,A_2}) corresponds to the transitions of links starting successful transmissions (respectively collisions); K^N_{α,D_1} (respectively K^N_{α,D_2}) corresponds to the transitions of links with successful transmissions (respectively with collisions) which become inactive; finally, K_{α,0} corresponds to classes that do not change their state between z and z′. Theorem 2.1 then asserts that, as N → ∞, the q^N_i's become independent and evolve according to a measure Q = (Q(t))_{t∈R_+}.

Stationary throughputs
Assume that Assumptions A9-A10 hold, so that Theorem 2.3 applies; these assumptions will be partly justified below for the case of the binary exponential back-off algorithm. We are interested in deriving the stationary throughputs achieved by users of the various classes. To do so, we derive the stationary distributions Q_st and π_{Q_st} of the particles and of the background process. To simplify the notation, we write Q_st = Q and π_Q = π. Moreover, ∑_{z∈A} π_A(z) E_z[T_1] = 1/π(A), the inverse of the intensity of the point process of visits to A. Finally, the total throughput of the users of class c is expressed through the probability that a user of class c attempts to use the channel at the end of an empty slot. We now evaluate Q and π.
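The identity ∑_{z∈A} π_A(z) E_z[T_1] = 1/π(A) is Kac's formula for the mean return time to a set A, with π_A(z) = π(z)/π(A). The sketch below checks it numerically on a small 3-state chain; the transition matrix P and the set A are arbitrary choices of ours, purely illustrative.

```python
import numpy as np

# Kac's formula: with pi the stationary distribution and
# pi_A(z) = pi(z)/pi(A), we have sum_{z in A} pi_A(z) E_z[T_1] = 1/pi(A),
# i.e. 1/pi(A) is the mean time between visits to A.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])   # hypothetical irreducible chain
A = [0, 1]                        # the observed set
B = [2]                           # its complement

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# h[z] = expected time to hit A starting from z (h = 0 on A):
# for z outside A, h[z] = 1 + sum_w P[z, w] h[w].
h = np.zeros(3)
PB = P[np.ix_(B, B)]
h[B] = np.linalg.solve(np.eye(len(B)) - PB, np.ones(len(B)))

# E_z[T_1] for z in A: one step, plus the expected hitting time of A.
ET = 1.0 + P[A] @ h
lhs = sum(pi[z] / pi[A].sum() * ET[i] for i, z in enumerate(A))
print(lhs, 1.0 / pi[A].sum())
```

The two printed values agree up to numerical precision, as Kac's formula predicts.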
Note that π depends on Q through the ρ_c's only (see (23) and its limiting expression).

Tightness of stationary distributions
Lemma 5.2 In the case of the exponential back-off algorithm, there exists p* > 0 such that, for any 0 < p_0 < p*, the Markov process (X^N_i(k), Z^N(k))_{k∈N} is positive recurrent for all N, and the family of stationary distributions is tight.

Deriving a tight bound for p* would involve technical details which are beyond the scope of this paper; we only sketch the main idea and prove p* > 0. Also, to clarify the presentation, we assume here that L = L_c. Along the proof of Lemma 5.2, one may check that the statement of Lemma 5.2 holds for p* = ln 2/(Lµ), where µ = max_{c∈C} µ_c and µ_c = ∑_{d∈V_c} µ_d is the mean proportion of particles which are in interaction with particles of class c.
Proof. To prove the recurrence, we introduce a fictitious system which stochastically bounds p^N_1(k).
In the fictitious system, the states of the particles i ≥ 2 are independent, and each particle i ≥ 2 has two states, active or inactive. If particle i ≥ 2 is active, it remains active for the next slot with probability 1 − 1/L; if it is inactive, it becomes active with probability p_0/N. The stationary probability that particle i is active is thus L/(L + N/p_0), and the stationary probability that at least one is active is a_N = 1 − (1 − L/(L + N/p_0))^{N−1}, which converges to a = 1 − e^{−L p_0}.
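The convergence a_N → 1 − e^{−L p_0} can be checked numerically; the sketch below uses the 802.11-like values L = 100 and p_0 = 1/16 from the numerical example of this section, purely for illustration.

```python
import math

# Stationary probability that at least one of the N-1 other particles of
# the fictitious system is active:
#   a_N = 1 - (1 - L/(L + N/p0))**(N-1),
# which converges to a = 1 - exp(-L*p0) as N grows.
def a_N(L, p0, N):
    return 1.0 - (1.0 - L / (L + N / p0)) ** (N - 1)

L, p0 = 100, 1 / 16
a = 1.0 - math.exp(-L * p0)
for N in (10, 100, 10_000):
    print(N, a_N(L, p0, N), a)
```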
Particle 1 tries to become active at slot k with probability p^N_1(k)/N.
If it becomes active while another particle is also active, then particle 1 encounters a collision and, in the exponential back-off case, p^N_1(k+1) = p^N_1(k)/2. Clearly, the transmission probability of particle 1 in this fictitious system is stochastically less than or equal to p^N_1(k).
Let b^N(k) = p_0/p^N_1(k), so that b^N(k) ∈ {2^n}_{n∈N}. The lemma will follow if we prove that, for p_0 small enough, the expectations E[b^N(k)] are bounded uniformly in N and k (32). In the remaining part of the proof, we justify (32) using elements of queueing theory.
We first analyze the sequence of slots at which none of the particles i ≥ 2 is active. If particle i ≥ 2 is active at time k, let l_i(k) be the number of slots during which the particle remains active; l_i(k) is geometrically distributed with parameter 1/L. Now let W^N(k) be the residual activity time of the particles i ≥ 2, so that W^N(k) = 0 if and only if none of the particles i ≥ 2 is active at time k. W^N satisfies the recursion W^N(k+1) = max(W^N(k) − 1, σ^N(k+1)), where σ^N(k+1) = max_{i≥2} χ{i active at k+1, inactive at k} l_i(k+1).
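As a sanity check (ours, not part of the proof), one can simulate the fictitious system directly and verify that the long-run fraction of slots with W^N(k) = 0, i.e. no particle i ≥ 2 active, matches the stationary value 1 − a_N = (1 − L/(L + N/p_0))^{N−1}. The parameters below are arbitrary illustrative choices.

```python
import random

# Monte Carlo check: the long-run fraction of slots at which no particle
# i >= 2 of the fictitious system is active should be close to
# (1 - L/(L + N/p0))**(N-1), i.e. 1 - a_N.
def idle_fraction(N, L, p0, n_slots, seed=1):
    rng = random.Random(seed)
    active = [False] * (N - 1)      # particles i = 2..N, initially inactive
    idle_slots = 0
    for _ in range(n_slots):
        if not any(active):
            idle_slots += 1
        for i in range(N - 1):
            if active[i]:
                active[i] = rng.random() < 1.0 - 1.0 / L   # stay active
            else:
                active[i] = rng.random() < p0 / N          # become active
    return idle_slots / n_slots

print(idle_fraction(20, 5, 0.5, 50_000), (1 - 5 / 45) ** 19)
```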
W^N is thus the workload in a G/G/∞ queue with inter-arrival time 1 and service time requirement σ^N(k+1) = max_{i≥2} χ{i active at k+1, inactive at k} l_i(k+1). Independently of the past, σ^N(k+1) is easily bounded stochastically: for 0 < s < ln L, the exponential moment E[χ{i active at k+1, inactive at k} e^{s l_i(k+1)}] is finite, and this bound is uniform in N and k. Let θ_0 = 0, θ_{n+1} = inf{k > θ_n : W^N(k) = 0}, and Θ^N = {θ_n}_{n∈N}. Classically, there exists C > 0 such that, for all N, the return times θ_{n+1} − θ_n have exponential moments bounded by C; see for example Appendix A.4 in [4]. By the renewal theorem, we deduce, uniformly in N, lim_{k→∞} P(k ∈ Θ^N) = 1/E[θ_1] = 1 − a_N. Moreover, the monotonicity of W^N(k) with respect to the initial condition, together with the convergence of 1 − a_N to e^{−L p_0}, yields a lower bound (33) on P(k ∈ Θ^N) that is uniform in N and k.

We now turn back to the process b^N and prove (32). Let U(k) be a sequence of independent variables, uniformly distributed on [0, 1], driving the transitions of b^N. Writing b^N(k+1) in terms of b^N(k) and U(k), taking expectations, and noting that b^N(k+1) χ{b^N(k)=1} ≤ 2, we obtain the required bounds. From (33), for p_0 small enough, P(k ∈ Θ^N) > 2/3 for all N and k ≥ 0. We deduce by recursion that E[b^N(k) | b^N(0) = 1] ≤ 2, and (32) holds. ✷

In the mean field limit, the proportion Q_n(t) of users with back-off probability 2^{−n} p_0 evolves according to

dQ_n/dt(t) = 2^{1−n} p_0 Q_{n−1}(t) (1 − exp(−ρ(t))) − 2^{−n} p_0 Q_n(t), with ρ(t) = ∑_{i≥0} 2^{−i} p_0 Q_i(t), (37)

and since ρ_st ≤ p_0 < ln 2, necessarily 2(1 − e^{−ρ_st}) < 1, and the stationary distribution always exists.
In the case of complete interaction, Assumption A9 holds. Indeed, we have the following: Theorem 5.4 If p_0 < ln 2, then for any initial condition Q(0), Q(t) converges (weakly) to the measure Q_st.
Proof of Lemma 5.5. We define the linear system

dB_n/dt(t) = 2^{1−n} p_0 B_{n−1}(t) (1 − exp(−p_0)) − 2^{−n} p_0 B_n(t), for all n ≥ 1, (41)

with initial condition B_n(0) = Q_n(0) for all n. First note that the time derivative of ∑_n B_n(t) is zero, hence ∑_{n=0}^∞ B_n(t) = 1 for all t ≥ 0. Note also that ρ(t) ≤ p_0. B_n(t) corresponds to the mean field limit of the proportion of users with back-off probability p_0 2^{−n} when each user is in interaction with N other users with back-off probability p_0. We may then check that the probability measure B(t) = {B_n(t); n ≥ 0} is stochastically larger than Q(t): for all m ≥ 1, ∑_{n≥m} B_n(t) ≥ ∑_{n≥m} Q_n(t). However, B(t) converges to the unique invariant probability measure of the linear system, B_n^st = (2(1 − exp(−p_0)))^n B_0^st (recall that p_0 < ln 2). Since B(t) converges, it is therefore tight. It follows that Q(t) is tight. ✷

Proof of Theorem 5.4. Let ρ_b = lim inf_{t→∞} ρ(t). Pick a subsequence t_k such that lim_{k→∞} ρ(t_k) = ρ_b and such that the limit lim_{k→∞} Q_n(t_k) = Q_n(∞) exists for all n. By Lemma 5.5, Q(∞) = {Q_n(∞); n = 0, 1, . . .} is a probability measure and ∑_{i=0}^∞ 2^{−i} p_0 Q_i(∞) = ρ_b.
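The invariant measure B_n^st = (2(1 − e^{−p_0}))^n B_0^st can be recovered numerically by integrating a truncated version of the linear system (41). The truncation level K and the n = 0 balance equation (chosen so that total mass is conserved, with the successful-transmission inflow feeding state 0) are our modelling choices, not taken from the paper; a rough Euler sketch:

```python
import math

# Euler integration of a truncated version of the linear system (41).
# Assumptions (ours): truncation at level K, and
#   dB_0/dt = sum_n 2^{-n} p0 exp(-p0) B_n  -  p0 B_0,
# which makes the total mass sum_n B_n conserved.
def stationary_B(p0, K=30, dt=0.05, T=1000.0):
    q = 1.0 - math.exp(-p0)        # collision probability in the linear system
    B = [1.0] + [0.0] * K          # all users start with back-off probability p0
    for _ in range(int(T / dt)):
        inflow0 = sum(2.0 ** (-n) * p0 * math.exp(-p0) * B[n] for n in range(K + 1))
        dB = [inflow0 - p0 * B[0]]
        for n in range(1, K + 1):
            dB.append(2.0 ** (1 - n) * p0 * q * B[n - 1] - 2.0 ** (-n) * p0 * B[n])
        B = [b + dt * d for b, d in zip(B, dB)]
    return B

p0 = 1 / 16                        # < ln 2, so the geometric tail is summable
B = stationary_B(p0)
r = 2.0 * (1.0 - math.exp(-p0))    # predicted ratio B_{n+1}/B_n at stationarity
print(B[1] / B[0], r)
```

The numerically obtained ratio B_1/B_0 matches the geometric ratio 2(1 − e^{−p_0}) of the invariant measure.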
Let n ≥ 1, and assume that the comparison (44) holds at rank n − 1 for all t ≥ 0. From (45) we then conclude that it also holds at rank n for all t, so (44) follows by induction.
Next, the result follows by using L'Hôpital's rule in (43).

A numerical example
We now illustrate our analytical results on the simple network of Figure 1. Each link runs an exponential back-off algorithm with p_0 = 1/16, as specified in the 802.11 standard [2]. In Figure 3, the throughputs of the various user classes are presented, assuming that the proportions of users of classes 1 and 3 are identical, µ_1 = µ_3, and that L_c = L. We give the throughputs as a function of the proportion of users of class 2; the packet duration is fixed and equal to L = 100 slots. The total network throughput decreases when the proportion of class-2 users increases, which illustrates the loss of efficiency due to the spatial heterogeneity of the network.
In Figure 4, we assume a uniform user distribution among the 3 classes, µ_1 = µ_2 = µ_3, and we give the throughputs as a function of the packet duration L. First note that, whatever the value of L, the network is highly unfair: for example, when L = 100 slots, the throughput of a class-1 user is almost 5 times greater than that of a class-2 user. This unfairness increases with L, and ultimately, when L is very large, class-2 users never access the channel successfully. We have verified through simulations that the mean field asymptotics lead to quite accurate performance approximations, even for systems with a small number of users; this has also been observed in [7] for networks with full interference.