Equi-energy sampling does not converge rapidly on the mean-field Potts model with three colors close to the critical temperature

Equi-energy sampling (EES, for short) is a method to speed up the convergence of the Metropolis chain when the latter is slow. We show that there are still models, like the mean-field Potts model, where EES does not converge rapidly in certain temperature regimes. Indeed, we will show that EES is slowly mixing on the mean-field Potts model in a regime below the critical temperature. Though we will concentrate on the Potts model with three colors, our arguments remain valid for any number of colors q ≥ 3, if we adapt the temperature regime. For the situation of the mean-field Potts model this answers a question posed in Hua and Kou (2011 Stat. Sin. 21 1687–711).


Introduction
Sampling methods are of utmost importance in applied mathematics, e.g. in Bayesian statistics, computational physics, econometrics, or computational biology. In many cases one wants to sample a random element drawn from a finite set Ω according to a probability distribution π on (Ω, P(Ω)). But even this problem may be less trivial than it sounds. Sometimes Ω may be finite, yet very large. E.g. when modeling a ferromagnetic material on N atoms, Ω is of the form {−1, +1}^N and for real size systems N is of the order 10^23, thus |Ω| is of the order 2^(10^23). Hence a straightforward Monte Carlo simulation would take exponentially long in the system size N and thus would be much too expensive. In other situations, the size of Ω may not even be known, as e.g. in the so-called knapsack problem (Löwe and Meise 2001).
One potential solution of this problem lies in the use of Markov chain Monte Carlo (MCMC, for short) algorithms. They rely on an aperiodic and irreducible Markov chain on Ω that has π as its invariant (i.e. stationary) distribution. One runs this Markov chain, stops it after some long enough time, and takes the current state as a sample element. The ergodic theorem for Markov chains ensures that this element is almost distributed according to π, given one has waited long enough. This method immediately raises two questions: (1) Can one find for each π a Markov chain that converges in distribution to π? That is, can one find for each π an irreducible and aperiodic Markov chain that has π as its invariant measure? This question is answered in the affirmative by the Metropolis-Hastings chain (see e.g. Häggström (2002)).
(2) How long do we need to wait to get a sample with a distribution that is reasonably close to π? If this waiting time is polynomial in the problem instance we speak about fast or rapid convergence, otherwise, in particular, if the mixing time is exponential in the problem instance, we will say the algorithm converges torpidly or slowly.
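To fix ideas, here is a minimal sketch of a Metropolis step for a target π on a small finite set with a symmetric proposal chain (all names are ours; this is an illustration, not the exact chain used below):

```python
import random

def metropolis_step(state, pi, propose, rng=random):
    """One Metropolis move: propose symmetrically, accept with prob min(1, pi ratio)."""
    candidate = propose(state)
    if rng.random() < min(1.0, pi(candidate) / pi(state)):
        return candidate
    return state

# Target distribution on {0, 1, 2, 3} proportional to the weights below.
weights = [1.0, 2.0, 3.0, 4.0]
pi = lambda x: weights[x]
# Symmetric proposal: a +/-1 step on the 4-cycle.
propose = lambda x: (x + random.choice([-1, 1])) % 4

random.seed(0)
x, counts = 0, [0, 0, 0, 0]
for _ in range(200_000):
    x = metropolis_step(x, pi, propose)
    counts[x] += 1
# The empirical frequencies approach weights / sum(weights) = (0.1, 0.2, 0.3, 0.4).
```

The algorithms discussed in this note are all built from such Metropolis moves; the question above is then how many of them are needed.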
It is, however, well known that, like the Glauber dynamics, the Metropolis-Hastings algorithm usually converges slowly when the target distribution is multi-modal. Such situations occur e.g. in statistical physics in the presence of a phase transition. Hence slow convergence applies to a number of interesting situations, among them the low temperature phase of the Curie-Weiss model (see e.g. the discussion in Mossel and Sly (2013)). In the next section we will introduce a close relative of the Curie-Weiss model, the three-state mean-field Potts model. This will be our test model for the EES to be introduced in section 3. In section 4 we will show that this sampler mixes slowly when applied to the (three-state) mean-field Potts model in a certain temperature regime. Here, a key argument is based on a property of the mean-field Potts model that is closely related to its first order phase transition (see lemma 4.3 and the remark below it): the limit point of the order parameter m_N (see (2.1)) at high temperatures remains a local maximum of its distribution also in a certain part of the low temperature regime. Hence a Metropolis-Hastings chain started in a neighborhood of this high temperature limit point will typically not escape from this neighborhood in polynomial time (in the part of the low temperature regime described above). But then EES cannot improve the performance of the Metropolis-Hastings algorithm either, because there are simply no observations of the global maxima of the distribution of m_N to which the algorithm could jump. The proof is completed by combining this observation with the very powerful technical argument of conductance, also known as Cheeger's inequality (see theorem 4.4).

The mean-field Potts model
Let us now introduce the mean-field Potts model. Consider the space Ω = E^N, where E = {1, 2, . . . , q}, q ∈ N, q ≥ 3, and N ∈ N (to avoid some complications in the future, we can think of N as being a multiple of q). The elements of E are sometimes referred to as colors. For convenience, in this note, we will restrict to the case q = 3, i.e. the mean-field Potts model with three states or colors taken from the set E = {1, 2, 3}. This will be our standing assumption for the rest of the note. However, we remark that our argument remains valid for general q ≥ 3, if one changes the regime of temperatures appropriately. We will come back to this remark later.
On Ω we construct an energy function given by

H_N(σ) := (1/(2N)) Σ_{i,j=1}^N δ_{σ_i = σ_j},   σ ∈ Ω.

Here δ_A denotes the indicator function for an event A (which is formally the Dirac measure for the event A, to stress that our notation is consistent with our later use of δ). Note that H_N can be written as a function of the vector

m_N(σ) := (m_N^1(σ), m_N^2(σ), m_N^3(σ)),   where m_N^k(σ) := (1/N) Σ_{i=1}^N δ_{σ_i = k}.   (2.1)

Indeed, one easily checks that

H_N(σ) = (N/2) Σ_{k=1}^3 (m_N^k(σ))^2 = (N/2) ||m_N(σ)||_2^2;

m_N is therefore called an order parameter of the model. With H_N we associate a Gibbs measure π_β at inverse temperature β > 0, i.e.

π_β(σ) := e^{βH_N(σ)} / Z_β.
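In code, the order parameter and the two equivalent forms of the energy look as follows (a small sketch of ours; the color set is {1, 2, 3} as above):

```python
from collections import Counter

def m_N(sigma, q=3):
    """Order parameter: vector of empirical color frequencies."""
    N = len(sigma)
    counts = Counter(sigma)
    return tuple(counts.get(k, 0) / N for k in range(1, q + 1))

def H_N(sigma):
    """Energy (1/(2N)) sum_{i,j} 1{sigma_i = sigma_j} = (N/2) ||m_N(sigma)||_2^2."""
    N = len(sigma)
    pair_sum = sum(1 for s in sigma for t in sigma if s == t) / (2 * N)
    via_m = (N / 2) * sum(m * m for m in m_N(sigma))
    assert abs(pair_sum - via_m) < 1e-9  # the two forms agree
    return via_m

# A balanced coloring has minimal energy N/6, a monochromatic one maximal energy N/2.
balanced = [1, 2, 3] * 4     # N = 12
monochrome = [1] * 12
```

The minimal and maximal energies N/6 and N/2 computed here will reappear in section 3 when the energy bands of the EES are set up.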
Here Z_β = Σ_{τ∈Ω} e^{βH_N(τ)} is the partition function. Note that conventionally in statistical physics the energy function H would carry an additional minus sign, and the Gibbs measure (as well as the partition function) would be defined in terms of the exponential of minus β times that energy. Since the two minus signs would cancel and lead to the same definition of the Gibbs measure, our Gibbs measure does not carry these conventional minus signs. The mean-field Potts model was studied in a variety of papers. We refer to Ellis and Wang (1990) and Kesten and Schonmann (1989), who showed that there is a critical inverse temperature β_c. This critical inverse temperature in the 3-state mean-field Potts model equals β_c = 4 log 2 (see Cuff et al (2012), which discusses the very interesting phenomenon of a temperature-dependent cut-off effect for the Glauber dynamics of the model).
At β_c the model undergoes a first order phase transition. More precisely, the order parameter m_N of the model converges in distribution to a_0 := (1/3, 1/3, 1/3) when β < β_c. At smaller temperatures one observes the following: for β ≥ β_c there are 1 > m*(β) > 1/3 and vectors a_1(β), a_2(β), a_3(β) ∈ R^3 such that the vector a_i(β) has m*(β) in its ith component and all other components are equal such that they sum up to one. For β > β_c the distribution of m_N converges to

(1/3) Σ_{i=1}^3 δ_{a_i(β)}.

Here δ denotes the Dirac measure. Finally, for β = β_c there are, moreover, weights λ_1, λ_2 > 0 that sum up to 1, such that the distribution of m_N converges to

λ_1 δ_{a_0} + λ_2 (1/3) Σ_{i=1}^3 δ_{a_i(β_c)}.

The phase transition is of first order, since m*(β_c) > 1/3, i.e. the jump is discontinuous. Moreover, the vector a_0 remains a local maximum of the distribution of m_N for some temperatures below the critical temperature. Such a behavior can also be observed for general values of q ≥ 3 at other values of β_c. We will come back to this fact in section 4, lemmas 4.2 and 4.3, because it is of utmost importance for the proof of theorem 4.1 to be given in section 4.
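The first order nature of this transition can be explored numerically (our own illustration; f is the free-energy function introduced in section 4, see (4.1)): restricted to vectors c = (m, (1−m)/2, (1−m)/2), the global maximizer of f jumps from 1/3 to roughly 2/3 as β crosses β_c.

```python
import math

BETA_C = 4 * math.log(2)  # critical inverse temperature for q = 3

def f(c, beta):
    """Free-energy function f(c) = (beta/2) sum c_i^2 - sum c_i log c_i."""
    return (beta / 2) * sum(x * x for x in c) - sum(x * math.log(x) for x in c)

def argmax_symmetric(beta, grid=20_000):
    """Grid-maximize f over c = (m, (1-m)/2, (1-m)/2), 0 < m < 1."""
    best_m, best_val = None, -math.inf
    for i in range(1, grid):
        m = i / grid
        val = f((m, (1 - m) / 2, (1 - m) / 2), beta)
        if val > best_val:
            best_m, best_val = m, val
    return best_m

m_below = argmax_symmetric(BETA_C - 0.05)  # close to 1/3
m_above = argmax_symmetric(BETA_C + 0.05)  # jumps to roughly 2/3
```

The discontinuity of the maximizer, rather than of f itself, is what makes the transition first order.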

Equi-energy sampling
Various modifications of the Metropolis-Hastings algorithm have been proposed to speed up its convergence. Among them, the so-called swapping algorithm (see Geyer (1991)), the exchange Monte Carlo method (see Hukushima and Nemoto (1996)), parallel tempering (see Orlandini (1998)) and the simulated tempering algorithm (see Marinari and Parisi (1992), Geyer and Thompson (1995), and Madras (1998)) are very popular in applications. Another variant is multicanonical Monte Carlo simulation, introduced by Berg and Neuhaus (1992), also see Berg (2000). It is related to umbrella sampling (see Torrie and Valleau (1977)) and is close in spirit to the swapping algorithm, simulated tempering, as well as EES. A major difference, however, is how an a priori estimate of the probability distribution of interest is produced. Therefore, we have not yet been able to analyze whether multicanonical Monte Carlo simulations suffer from the same shortcomings as swapping, parallel tempering or EES (see the next paragraph).
In many situations the algorithms described in the previous paragraph do indeed seem to be able to improve the convergence of the Metropolis chain; however, there are also negative theoretical results about these algorithms. Madras and Zheng (2003) show that the swapping chain converges quickly for the Curie-Weiss model. On the other hand, Bhatnagar and Randall (2004) and Bhatnagar and Randall (2016) prove that both the swapping algorithm and simulated tempering are slowly mixing for the 3-state Potts model and conjecture that this is caused by the first order phase transition in the Potts model (also see our discussion in the remark following lemma 4.3). Qualitative properties of the swapping algorithm and parallel tempering were studied in Doll et al (2018). A first rapid convergence result for the swapping algorithm in a disordered situation was proved in Löwe and Vermet (2009). Ebbers and Löwe (2009) show that in disordered models the conjecture by Bhatnagar and Randall is not correct: they prove that the swapping algorithm mixes slowly on the Random Energy Model, even though this model has only a third order phase transition. In the Blume-Emery-Griffiths model both rapid and torpid mixing may occur, as was shown in Ebbers et al (2014).
Another idea to improve the performance of the Metropolis chain is the so-called equi-energy sampling algorithm (see e.g. Kou et al (2006)). This algorithm was tested on the Ising model in Hua and Kou (2011), and the question of how fast it converges was posed there. For the Potts model, we will answer this question in the next section. In particular, we will show that EES may be slowly mixing on relevant models from statistical mechanics. Variants of EES were studied, among others, in Baragatti et al (2013).
The principal observation motivating EES is that a main obstacle to fast mixing is the presence of a phase transition in the model. This, in turn, may be characterized by a multi-modal distribution of a macroscopic observable. Usually, the (projected) Metropolis chain then enters one of the modes rapidly and stays there for an exponentially long time. The EES tries to avoid this behavior by introducing shortcuts in the state space. These shortcuts are created by the observations of Metropolis chains at higher temperatures, where the above mentioned modes are less pronounced or possibly not even present. More precisely, in addition to the Metropolis steps one also allows jumps to points of the same or a similar energy as the present one, given one has observed these points already at higher temperatures (otherwise, the algorithm would require the exact structure of the energy function, in which case simulations would probably be pointless). The EES has been discussed in Kou et al (2006), its convergence was shown in the same article, and, using a different technique, in Andrieu et al (2008). We will now give an exact description of a version of this algorithm.
Let us first briefly recall the Metropolis-Hastings algorithm, which is the basis of the EES. To define it, let K_gen denote the following aperiodic, symmetric and irreducible Markov chain on Ω:

K_gen(σ, τ) := 1/(2N) if d_H(σ, τ) = 1, and K_gen(σ, τ) := 0 for all other τ ≠ σ.   (3.1)

Here d_H is the Hamming distance, and K_gen is a Markov chain because every σ ∈ Ω has 2N neighbors. Define the corresponding Metropolis-Hastings chain for the probability π_β as T_β(·, ·):

T_β(σ, τ) := K_gen(σ, τ) min{1, π_β(τ)/π_β(σ)} for τ ≠ σ, and T_β(σ, σ) := 1 − Σ_{τ≠σ} T_β(σ, τ).   (3.2)

Note that T_β(·, ·) is sometimes slow in natural situations, e.g. when sampling from the low temperature distribution of the Curie-Weiss model (see e.g. Madras and Piccioni (1999); of course, T_β has to be adapted to the situation of the Curie-Weiss model). To speed up its convergence, we consider the EES. To define it, we first introduce a sequence of energy levels h_0 < h_1 < . . . < h_M. In our context, it is easily checked that

N/6 ≤ H_N(σ) ≤ N/2 for all σ ∈ Ω,   (3.3)

which will be used later. Moreover, introduce a sequence of inverse temperature levels

0 = β_0 < β_1 < . . . < β_M = β,

where we assume that β is the inverse temperature we want to sample from. It will often be convenient to take β_i = iβ/M. Note that M may and will depend on N, which is not made explicit in Kou et al (2006); otherwise our construction, so far, agrees with the construction in Kou et al (2006). We will make an explicit choice for M and give reasons for this choice after the description of the algorithm. For this, we will also need a dummy state ι and define Ω̄ := Ω ∪ {ι}. Let M be an (M + 1) × |Ω| matrix over Ω̄, which is initially filled with ι only.
The EES consists of alternations between two kinds of steps. One is a usual Metropolis step at a (random) temperature level β_i. The other one is an equi-energy jump at the same temperature, if i ≥ 1. At inverse temperature β_0 = 0 there are only Metropolis moves. We store the states we see at temperature β_i by entering them into the ith row of the matrix M, if the state has not been seen before; in this case it replaces one of the ι's (in a prescribed order). To explain the equi-energy step, assume that the chain is at temperature β_i, i ≥ 1, and in state σ. We determine the energy level k such that h_{k−1} < H_N(σ) ≤ h_k and choose (with equal probabilities) a state τ from all states τ with h_{k−1} < H_N(τ) ≤ h_k which we have already seen at temperature level β_{i−1}. This new state is accepted with probability

min{1, [π_{β_i}(τ) π_{β_{i−1}}(σ)] / [π_{β_i}(σ) π_{β_{i−1}}(τ)]}.

Otherwise, in particular if we have not seen any state in the same energy band in the (i−1)st row of M, the chain stays where it is. We denote the corresponding transition matrix (on Ω) by Q_i. Note that Q_i in general depends on time. We will not make this explicit, because we will just analyze the algorithm in the 'best case scenario', where the matrix M does not contain any ι's anymore. However, under this assumption, we will still be able to show that EES is slowly converging on the three-state mean-field Potts model in a certain temperature regime. Formally, for τ ≠ σ,

Q_i(σ, τ) := (1/|B_{n,k}|) min{1, [π_{β_i}(τ) π_{β_{i−1}}(σ)] / [π_{β_i}(σ) π_{β_{i−1}}(τ)]}, if τ ∈ B_{n,k} and h_{k−1} < H_N(σ) ≤ h_k,   (3.4)

and Q_i(σ, τ) := 0 otherwise. Here n is the time variable and B_{n,k} is the set of states τ with h_{k−1} < H_N(τ) ≤ h_k which we have already seen at temperature level β_{i−1} by time n.
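A sketch of a single equi-energy jump as just described (our own code; `seen_prev` plays the role of the (i−1)st row of M, grouped into energy bands, and the acceptance ratio is the one for Gibbs weights proportional to e^{βH}):

```python
import math
import random

def equi_energy_jump(sigma, H, beta_i, beta_prev, seen_prev, band_of, rng=random):
    """One equi-energy jump at level i >= 1 (sketch).

    seen_prev: dict band index -> list of states seen at level i-1
    band_of:   maps an energy value to its band index
    """
    band = band_of(H(sigma))
    candidates = seen_prev.get(band, [])
    if not candidates:
        return sigma  # nothing seen in this band yet: stay put
    tau = rng.choice(candidates)
    # acceptance ratio pi_i(tau) pi_{i-1}(sigma) / (pi_i(sigma) pi_{i-1}(tau))
    # for pi_i proportional to exp(beta_i * H)
    ratio = math.exp((beta_i - beta_prev) * (H(tau) - H(sigma)))
    if rng.random() < min(1.0, ratio):
        return tau
    return sigma

# Toy usage: states are numbers, H is the identity, one single band.
H = lambda s: float(s)
band_of = lambda h: 0
```

Note that the Gibbs factors cancel to a simple exponential of the energy difference, so the jump never needs the partition functions.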
One might expect that one should use all states seen previously at any level, rather than only the ones explored by the chain at temperature β_{i−1}. However, there is hardly any difference between the two choices, because if temperatures are very different the chains will typically also see states of very different energies. Our choice has the advantage that it is easy to see that the global chain to be described below is reversible, and moreover it agrees with the choice in the literature, see Kou et al (2006).
Based on this, we build a matrix that describes the movement of all particles simultaneously. This operator R will be a matrix on Ω^{M+1}, of course. We lift the movement of the ith particle to Ω^{M+1} by building

Q̂_i := I ⊗ · · · ⊗ I ⊗ Q_i ⊗ I ⊗ · · · ⊗ I,

where Q_i acts on the ith coordinate and I is the identity matrix (with Q_0 := T_{β_0}, since at the lowest level only Metropolis moves are performed). Similarly, we consider the matrix T̂_i that lifts the Metropolis step T_{β_i} to Ω^{M+1}, i.e. we consider

T̂_i := I ⊗ · · · ⊗ I ⊗ T_{β_i} ⊗ I ⊗ · · · ⊗ I.

Combining these operators, the EES is defined by

R := (1/(2(M+1))) Σ_{i=0}^M (T̂_i + Q̂_i),

i.e. in each step one picks a level i uniformly at random and performs, with equal probability, a Metropolis move or an equi-energy jump at this level. Note that the versions of the EES given in Kou et al (2006) and Andrieu et al (2008) differ from each other and also our version is slightly different from those. However, the spirit of the algorithms is the same.
In the sequel, we will only consider a number of energy levels M that depends linearly on N, say M = dN. We will furthermore assume that the h_i are equidistant. Indeed, this choice of M is somewhat arbitrary; allowing for a polynomial dependence between M and N would not alter the algorithm much. However, choosing M, e.g., exponentially large in N would lead to empty, or almost empty, energy bands, which would make the equi-energy step obsolete. Moreover, it would obviously lead to exponential relaxation times (in N), because exponentially many temperatures have to be simulated. On the other hand, having M too small, e.g. constant, leads to almost non-interacting components (i.e. an equi-energy jump is almost never accepted) and EES stands no chance of increasing the speed of convergence compared to the standard Metropolis algorithm.
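With M = dN levels and the bounds (3.3), the equidistant choice gives bands of constant width 1/(3d), independent of N (a small sketch with our own names):

```python
def energy_levels(N, d):
    """Equidistant levels h_0 < ... < h_M between min energy N/6 and max N/2, M = d*N."""
    M = d * N
    h_min, h_max = N / 6, N / 2
    width = (h_max - h_min) / M  # = 1/(3d), independent of N
    return [h_min + i * width for i in range(M + 1)]

levels = energy_levels(N=30, d=2)  # M = 60 bands for 30 spins
```

That the band width stays of constant order while energy differences between macroscopically distinct states grow linearly in N is exactly what drives lemma 4.6 below.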
Of course, eventually we will only be interested in the (M+1)st coordinate of this Markov chain. However, studying it entirely seems easier. First of all, let us note that indeed the distribution of the (M+1)st coordinate converges to π_β.

Theorem 3.1. The distribution of the (M+1)st coordinate converges to π_β as time tends to infinity.
Proof. This is the content of Andrieu et al (2008) for their version of the EES. For our version the assertion follows from the ergodic theorem for Markov chains. Indeed, denote by S the Markov chain on Ω^{M+1} × M, where M is the space of all (M + 1) × 3^N matrices. S will behave in its first component like R, while in the second component we keep record of the filling of M. Observe that each T_i := T_{β_i} is reversible with respect to π_{β_i} and M does not play any role for it. On the other hand, once we reach a situation where M is entirely filled with states different from ι (we denote this state of M by M_0 in the second coordinate of S), i.e. we have seen all states at all temperatures, all the equi-energy steps Q_i are reversible with respect to π_{β_i} as well. This is because Q_i(σ, τ) > 0 if and only if σ and τ lie in the same energy band, and follows from the construction of the transition probabilities. Thus, once M_0 is reached (which happens almost surely in finite time), S is reversible in its first coordinate with respect to the product measure

π := π_{β_0} ⊗ π_{β_1} ⊗ · · · ⊗ π_{β_M}.

The convergence then follows from the convergence theorem for Markov chains. This, in particular, yields the assertion of the theorem. □

The proof is somewhat misleading, as it seems to indicate that for exponentially large state spaces there is no hope that EES may converge in polynomial time, since first the state M_0 has to be reached. However, if we consider the high temperature situation β < β_c in the Potts model, the Metropolis-Hastings chain converges to its invariant distribution in polynomial time, even without any equi-energy steps.
On the other hand, we will see in the next section that in part of the low temperature regime β > β c the situation is even worse. Even, when we start S in the optimal state M 0 in its second component, i.e. when we assume the second component is already in M 0 , the mixing time may be exponential.

Torpid mixing of EES on the low temperature mean-field Potts model
We now come to the central result of the note.
Theorem 4.1. EES is slowly mixing on the 3-state mean-field Potts model, when β c < β < 3, even when the second component of the Markov chain S introduced in the proof of theorem 3.1 above is in state M 0 .
We will prepare the proof of the theorem by explaining the ideas and stating some lemmas. In the proof of the theorem we will exploit one of the main differences between the mean-field Potts model and the Curie-Weiss model (i.e. the case E = {1, 2}) at low temperatures. This difference lies in the fact that in the Curie-Weiss model the state where both colors occur equally often is a local minimum of the Gibbs measure at all low temperatures, while it is a local maximum of the Gibbs measure in the mean-field Potts model for some temperatures in the low temperature regime (also see lemma 4.3). In particular, in the Curie-Weiss model the Gibbs measure is flat in this state at the critical temperature, while in the Potts model it exhibits a local maximum in this state at the critical temperature. Thus, in the latter, EES will be very reluctant to move far away from a state σ with m_N(σ) ≈ (1/3, 1/3, 1/3). This is the core idea, even though the technical steps are somewhat more involved.
Let c_1, c_2, c_3 be numbers in [0, 1] that add up to 1 and such that c_iN is an integer for each i = 1, 2, 3. Then for σ_c such that m_N(σ_c) = (c_1, c_2, c_3) =: c we have that

π_β(m_N = c) = [N! / ((c_1N)! (c_2N)! (c_3N)!)] e^{βH_N(σ_c)} / Z_β = e^{Nf(c) + O(log N)} / Z_β,   (4.1)

where

f(c) := (β/2) Σ_{i=1}^3 c_i^2 − Σ_{i=1}^3 c_i log c_i.

Note that we used Stirling's formula to derive the second equality in (4.1) and the fact that we can rewrite H_N(σ_c) = (N/2) Σ_{i=1}^3 c_i^2. Taking

C := {c ∈ [0, 1]^3 : Σ_{i=1}^3 c_i = 1}

to be the domain of f (and the set of all probabilities on the space E), Gore and Jerrum show:

Lemma 4.2 (See Gore and Jerrum (1999, proposition 1)). Let c be a local maximum of f. Then c satisfies:

(1) c lies in the interior of C.
(2) Either c_i = 1/3 for all i = 1, 2, 3, or there are 0 < α < 1/β < α' < 1 such that c_i ∈ {α, α'} for all i = 1, 2, 3. In the latter case there is exactly one c_i equal to α', while all the other c_j, j ≠ i, are equal to α.
Analyzing the function f around the point (1/3, 1/3, 1/3) we find that (in accordance with lemma 4.2) it might be a local but not a global maximum of f, if β > β_c = 4 log 2 is not too large (a similar observation was already made in Kesten and Schonmann (1989)):

Lemma 4.3. For β_c < β < 3 the point a_0 = (1/3, 1/3, 1/3) is a local, but not a global, maximum of f.

Proof. In view of (4.1) it suffices to analyze f. For x > 0 and a ∈ [0, 1] consider h(x) := f(1/3 + x, 1/3 − ax, 1/3 − (1 − a)x). It is an easy matter to check that h'(0) = 0 and h''(0) = −(6 − 2β)(a^2 − a + 1), which is negative exactly for β < 3. That a_0 is not a global maximum for β > β_c follows from the concentration of the distribution of m_N in the points a_i(β) described in section 2. The assertion follows. □

Remark. Lemma 4.3 is a main reason why theorem 4.1 is true. It is not difficult to check that the same behavior occurs for general q ≥ 3 in an appropriately chosen temperature regime (depending on q). Therefore, theorem 4.1 could also be proven for general q ≥ 3. Indeed, the property shown in lemma 4.3 is intrinsically related to the first order phase transition of the mean-field Potts model. Such a phase transition can be characterized by the discontinuous transition of the accumulation point(s) of an order parameter of the model at the critical inverse temperature β_c. In the Potts model this order parameter is the variable m_N. However, in most natural models, these new accumulation point(s) are already local maxima of the distribution of the order parameter for some smaller values of β. Similarly, the old accumulation point(s) remain local maxima of the distribution of the order parameter for some larger values of β. This is exactly the statement of lemma 4.3.
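The second derivative computed in the proof can be cross-checked numerically (our own sanity check): a finite-difference approximation of h''(0) matches −(6 − 2β)(a² − a + 1) and changes sign at β = 3.

```python
import math

def f(c, beta):
    """Free-energy function f(c) = (beta/2) sum c_i^2 - sum c_i log c_i."""
    return (beta / 2) * sum(x * x for x in c) - sum(x * math.log(x) for x in c)

def h_second(beta, a, x=1e-4):
    """Central finite difference for h''(0), where h(t) = f(1/3+t, 1/3-a*t, 1/3-(1-a)*t)."""
    h = lambda t: f((1/3 + t, 1/3 - a * t, 1/3 - (1 - a) * t), beta)
    return (h(x) - 2 * h(0) + h(-x)) / (x * x)

closed_form = lambda beta, a: -(6 - 2 * beta) * (a * a - a + 1)
```

Since a² − a + 1 > 0 for all a, the sign of h''(0) is governed by β alone, with the flip exactly at β = 3.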
Another key ingredient of the proof is a conductance argument (also known as Cheeger's inequality, see Diaconis and Stroock (1991)).

Theorem 4.4 (Sinclair and Jerrum (1989)). Let P be a Markov chain on a finite set S. Assume it is reversible with respect to π. For all A ⊆ S, define

Φ_A := (Σ_{x∈A, y∉A} π(x)P(x, y)) / π(A).

The conductance Φ is given by

Φ := min{Φ_A : A ⊆ S, π(A) ≤ 1/2}.

Then the following holds true for the spectral gap Γ(P) of P:

Φ^2/2 ≤ Γ(P) ≤ 2Φ.

As follows e.g. from Diaconis and Stroock (1991), a spectral gap that is the inverse of a polynomial in the problem instance results in fast mixing of the Markov chain. On the other hand, if the spectral gap is the inverse of an exponential in the problem instance, the Markov chain mixes slowly. An immediate consequence of theorem 4.4 is that the Metropolis algorithm alone is slowly mixing on the low temperature Potts model.
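To see the conductance bound in action on a toy version of the 'bad cut' argument (our own illustration, not the Potts chain itself): a nearest-neighbour Metropolis chain for a bimodal distribution on {0, …, n} has a cut whose conductance decays exponentially in n.

```python
import math

def bad_cut_conductance(n, beta=2.0):
    """Conductance of the cut A = {0, ..., n//2} for a nearest-neighbour
    Metropolis chain targeting pi(i) ~ exp(n*beta*(i/n - 1/2)^2), which has
    modes at 0 and n separated by a low-probability barrier at n/2."""
    w = [math.exp(n * beta * (i / n - 0.5) ** 2) for i in range(n + 1)]
    Z = sum(w)
    pi = [x / Z for x in w]

    def P(i, j):  # propose i +/- 1 with prob 1/2 each, Metropolis acceptance
        if abs(i - j) != 1 or not 0 <= j <= n:
            return 0.0
        return 0.5 * min(1.0, pi[j] / pi[i])

    A = range(0, n // 2 + 1)
    flow = sum(pi[i] * P(i, j) for i in A for j in (i - 1, i + 1) if j not in A)
    return flow / sum(pi[i] for i in A)

phi_20 = bad_cut_conductance(20)
phi_40 = bad_cut_conductance(40)  # much smaller: the cut degrades exponentially
```

By theorem 4.4 the spectral gap is at most 2Φ, so doubling n, which raises the barrier linearly, makes the gap (and hence the mixing time) exponentially worse.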

Proposition 4.5. The Metropolis algorithm mixes slowly on the Potts model, if β > β c .
Proof. Take the macro-state a_1 := a_1(β), i.e. the maximum point a_1(β) = (a_1, a_2, a_3) of f where a_1 > a_2 = a_3. This point exists according to lemma 4.2 and because we are in the low temperature region. Since a_1 is a maximum of f, there is ε > 0 such that f is decreasing on the ball of radius 2ε centered in a_1, B_{2ε}(a_1), when we walk from the center to the boundary on a straight line. a_1 is one of the three points in which the distribution of m_N concentrates for large N, and these points are equally likely. Thus

π_β(m_N ∈ B_ε(a_1)) ≥ 1/4,

when N is large enough and ε > 0 is fixed and small enough. Moreover, due to the exponential structure of π_β, i.e.

π_β(m_N = c) = e^{Nf(c) + O(log N)} / Z_β,

and the behavior of f on B_{2ε}(a_1) (on B_{2ε}(a_1), the function f decreases like a multiple of the square of the 2-norm) we obtain that

π_β(m_N ∈ B_{2ε}(a_1) \ B_ε(a_1)) ≤ e^{−cN} π_β(m_N ∈ B_ε(a_1))

for a suitably chosen constant c > 0. But this implies that the set A := {σ : m_N(σ) ∈ B_ε(a_1)} constitutes a 'bad cut'. Indeed, since a single Metropolis step changes m_N by at most 2/N in 1-norm, any transition leaving A ends in B_{2ε}(a_1) \ B_ε(a_1) for N large. With the notation of the previous theorem, and since π_β(A) ≤ 1/2, we thus see that

Φ ≤ Φ_A ≤ π_β(m_N ∈ B_{2ε}(a_1) \ B_ε(a_1)) / π_β(m_N ∈ B_ε(a_1)) ≤ e^{−cN}.

Thus T_β mixes slowly, when β > β_c. □

As a consequence, if EES is fast on the low temperature Potts model, this will have to be caused by the equi-energy steps. However, the following important observation is that we will not be able to switch between two states that are at very different distances from the center mode a_0 := (1/3, 1/3, 1/3) by an equi-energy step. More precisely:

Lemma 4.6. For each ε > 0 and each ε > δ > 0 there is a number of spins N_0 such that for all N > N_0 and whenever σ and τ satisfy ||m_N(σ) − a_0||_1 < δ and ||m_N(τ) − a_0||_1 > ε (where || · ||_1 denotes the 1-norm on C), then

Q_M(σ, τ) = 0.

Here Q M is defined in (3.4).
Proof. The proof mainly shows that under the given conditions the energies of σ and τ are too far apart from each other. Indeed, observe that Q_M(σ, τ) > 0 requires σ and τ to be in the same energy band. Thus there is i ∈ {0, . . . , M − 1} such that h_i < H_N(σ), H_N(τ) ≤ h_{i+1}. Now each σ_{a_0} with m_N(σ_{a_0}) = a_0 minimizes the energy: H_N(σ_{a_0}) = (N/2) × 3 × (1/9) = N/6. On the other hand, the states where all spins have the same color have maximal energy N/2, see (3.3). More precisely, since the components of m_N(σ) − a_0 sum to zero, one computes

H_N(σ) = (N/2) ||m_N(σ)||_2^2 = N/6 + (N/2) ||m_N(σ) − a_0||_2^2.

Hence ||m_N(σ) − a_0||_1 < δ keeps H_N(σ) within order Nδ^2 of the minimal energy N/6, while ||m_N(τ) − a_0||_1 > ε forces H_N(τ) − N/6 to be at least of order Nε^2; at the same time each of the M = dN equidistant energy bands has constant width (N/2 − N/6)/M = 1/(3d).
Since the 1-norm and the 2-norm are equivalent on C, this proves the assertion. □

We will again use a conductance argument to prove theorem 4.1. In order to prepare it, let us lift the balls B_ε to Ω^{M+1}: for ε > 0 let

B_ε^{M+1}(a_0) := {(σ_0, . . . , σ_M) ∈ Ω^{M+1} : m_N(σ_M) ∈ B_ε(a_0)}.

From now on we will assume that β_c < β < 3. Recall that then a_0 is still a local (but not a global) maximum of the function f. Let us fix ε > 0 so small that f is still decreasing on B_ε(a_0) when we move away from the center (in particular, a_0 is the only mode of π_β on B_ε(a_0)). Moreover, let us fix δ < ε and N_0 so large that even with two equi-energy steps and a Metropolis step in between, a σ with ||m_N(σ) − a_0||_1 > ε cannot be reached from a τ with ||m_N(τ) − a_0||_1 < δ.
Such δ and N_0 can be constructed as in lemma 4.6. Indeed, we will need the following: for δ' given with δ < δ' < ε there is N_1 such that if N ≥ N_1 an equi-energy jump started in m_N ∈ B_δ(a_0) will not leave B_{δ'}(a_0). The subsequent Metropolis step can only increase the 1-distance of m_N to a_0 by at most 2/N, hence m_N is still in, say, B_{δ''}(a_0), for some δ' < δ'' < ε. Finally, there is N_2 such that if N ≥ N_2 an equi-energy jump started in m_N ∈ B_{δ''}(a_0) will not leave B_ε(a_0). We will from now on always take N ≥ N_0 := max{N_1, N_2}.
All this is necessary because the chains R and S possibly comprise two such jumps. Next we prove:

Lemma 4.7. Let β_c < β < 3 and let ε > δ > 0 and N_0 be chosen as above. Then there exists c > 0 such that

π_β(m_N ∈ B_ε(a_0) \ B_δ(a_0)) ≤ e^{−cN} π_β(m_N ∈ B_δ(a_0)).

Proof. According to our above analysis, a_0 is a local (but not a global) maximum point of the distribution of m_N under π_β = π_{β_M} =: π_M, if β_c < β < 3. Therefore

π_M(m_N ∈ B_ε(a_0) \ B_δ(a_0)) ≤ e^{−cN} π_M(m_N ∈ B_δ(a_0))

for c > 0 chosen appropriately; the proof of this statement follows the concepts of the proof of proposition 4.5. This fact is easily transferred to the measure π due to its product structure. □

With the help of this lemma we will be able to establish that the set B_ε^{M+1} constitutes a 'bad cut' for the Markov chain S.

Proposition 4.8. If β_c < β < 3 and the second coordinate of S is equal to M_0, the conductance satisfies Φ(S) ≤ e^{−c'N} for some c' > 0, if N is large enough.

Proof. By the construction of δ and N_0, a step of S started in a state of B_δ^{M+1} (defined analogously to B_ε^{M+1}) cannot leave B_ε^{M+1}. Hence all flow out of B_ε^{M+1} originates in B_ε^{M+1} \ B_δ^{M+1}, and by lemma 4.7

Φ_{B_ε^{M+1}} ≤ π(B_ε^{M+1} \ B_δ^{M+1}) / π(B_ε^{M+1}) ≤ e^{−c'N}.
Here we used, of course, our previous estimates together with our construction of δ. Starting in B_δ(a_0), the combination of an equi-energy jump, a Metropolis move, and another equi-energy jump will not leave B_ε(a_0), according to lemma 4.6 and the construction of δ and N_0. □

Now we have prepared everything to prove theorem 4.1.
Proof of theorem 4.1. Just note that, conditioned on the event that the second coordinate of S is in M_0 (which it cannot leave anymore), S is reversible with respect to π. Hence we can apply theorem 4.4 together with the conductance estimate of proposition 4.8 to obtain the desired result. □

Remark.
Note that a similar proof would not work in the Curie-Weiss model, because there the 'center point' (1/2, 1/2), i.e. the σ's where both directions for the spins occur equally often, is always a local minimum of the Gibbs measure at low temperatures.
Moreover, note that we could adapt the proof to different values of q ≥ 3, as mentioned above.
Finally, a similar argument should work for 'more disordered' models, such as Potts models on sufficiently dense Erdős-Rényi graphs, as e.g. analyzed in Kabluchko et al (2019) for q = 2.

Remark.
We have just seen that EES mixes slowly on the 3-state Potts model for β_c < β < 3, even when we know the energies of the entire set of states. We also argued that at high temperatures the equi-energy steps are not necessary, because the Metropolis chain itself already converges rapidly. However, one may doubt whether there are reasonable models at all in which EES converges rapidly while the Metropolis algorithm does not. The point is that, if we have not filled M almost entirely, an equi-energy jump may provide the desired tunneling effect, but to a rather unfavorable point of the target distribution.