Gibbsianizing nonequilibrium dynamics of artificial spin ice and other spin systems

Beyond effective temperature for nonequilibrium spin systems lies the concept of an effective potential: an 'as if' potential with no regard for real energies. But if the former is ad hoc, the latter would surely seem more so. We take up the task of tying a flow of effective interactions in coupling space to specified dynamics, and illustrate what can be done with pencil-and-paper approximations as well as Monte-Carlo integration of the flow equations. This Gibbsianization program is applied to a model of a zero-temperature quench from a completely disordered state, as well as to a model of artificial spin lattice rotational demagnetization in the disorder-dominated regime. Lessons are drawn regarding the most fruitful effective potentials to use in modelling laboratory systems.


Introduction
Ever since the fabricated arrays of nanoscopic magnetic islands known loosely as artificial spin ice [1][2][3][4][5][6] ('artificial spin lattice' may be generally more appropriate) came under experimental investigation, the notion of effective temperature has had a role in interpreting their partially ordered states. For states resulting from rotational demagnetization protocols [1], parameters of the driving protocol could be roughly correlated with an effective temperature [7,8]. Nonnegligible correlations between islands which are not nearest neighbors naturally fall out of a Gibbsian description with only nearest-neighbor interactions [9], so that significant direct longer-range interactions are not required. Despite these successes, it is not immediately clear how such use of effective temperature can be justified, nor how the effective temperature can be calculated. Effective temperature has also been applied to arrays which acquire their ordering during the growth process [6]. A much more immediate motivation is available in this case, as the final state is probably essentially a frozen-in equilibrium state.
Here, we take on the task of providing some justification for effective temperature descriptions in nonequilibrating situations. The efficacy of a simple effective temperature has previously been studied [10] in nearest-neighbor Ising and Potts systems subjected to a temperature quench, with encouraging results. Still, it should not be surprising if a simple effective temperature as a multiplier of the actual interaction energy in a Boltzmann factor is inadequate. Thus, we promote a broadened notion of effective potential, which may, for example, contain multi-spin or distant-pair terms, while the real energy consists only of nearest-neighbor pair interactions. At the same time, we tie such a Gibbsianized description to dynamics. This brings a Gibbsian description of the dynamics itself into view, which can be useful. On a deeper level, making that connection is necessary to avoid an explosion of ad hoc ingredients in the effective potential; only a consideration of the dynamics can justify its form.
There are many other concepts of effective temperature (for a wide-ranging survey, see [11]). For aging and glassy systems, and athermal sheared granular systems [12,13], an influential notion [14][15][16] is the ratio of an unequal-time correlation function to a response function, a ratio which would be the bath temperature in equilibrium [17]. This fluctuation-dissipation temperature is a dynamical concept related to a time scale, and is the temperature which would be measured by a thermometer with the right response time [18]. The sort of effective potential we consider here is instead related to equal-time correlations, and is more information-theoretic than thermodynamic. The two concepts can be complementary.
Two families of models will be studied concretely. Model Z is a model of a zero-temperature quench from an infinite-temperature state, and Model K is an attempt to abstract the essential elements of the rotational demagnetization process in a recognizable form. Model Z is a simple prototype of nonequilibrium ordering and might be of general interest for that reason. In the context of rotational demagnetization of artificial spin lattices, it was studied because of its unrealistic features, in the interest of finding out whether those would be expressed in the evolved state. It will turn out that the two models have more in common than one might have first expected. The rules governing the kinetics are found to make very specific suggestions about the terms which should be put into the effective potential, immediately rendering the procedure less ad hoc, however one proceeds to determine coupling constants. The state (i.e. macrostate) of the system is given an approximate Gibbsian representation at all times, so that dynamics becomes rephrased as a flow in coupling space. The problem is to keep as much information as practicable about the state; even partial knowledge of the dynamics can inform assessment of where this crucial information resides. There is expected to be a trade-off between the accuracy of the Gibbsian description and the complexity of the effective potential. Although the motivation we originally gave involved only end-states of certain kinds of evolutions, we see that a Gibbsian description of the entire evolution holds its own interest, and can conveniently package the state of a partially completed simulation. The coupling flow is found through a pseudo-equilibrium expectation of the action of the generator of the non-equilibrium evolution on the observables conjugate to our chosen couplings.
Questions of equilibration in any remotely thermodynamic sense seem to be out of place in this setting. Thermodynamic equilibration is an economic matter: coming to agreement about the apportionment of conserved quantities. That is inapplicable to rotationally demagnetized artificial spin-ice systems, as well as others encountered in disparate fields which are highly dissipative, rendering energy effectively non-conserved. Equilibration in the sense of reaching a fixed point of the dynamics might still apply, of course. But wariness against over-interpreting a Gibbsian description is in order.
Nevertheless, as we will illustrate, the Gibbsian formulation empowers the full range of tools of equilibrium statistical mechanics for the task of analysis, even with no genuine equilibrium. These range from simple, traditional 'high-temperature' (more accurately, weak-coupling) expansions to Monte-Carlo simulation. We show for Model Z how the coupling flow equations (9) can be conveniently and accurately solved via Monte-Carlo simulation by using the kinetic rules virtually. Those rules are not used to generate a sequence of configurations directly, but the effects they would have on correlation functions are calculated. An equivalent increment of the couplings is then found, so that the original dynamics is replaced (represented) by a time-dependence of the couplings.
We will not have the opportunity in this paper to make comparisons to laboratory experiments. Our purpose is basically propagandistic: to illustrate a point of view and its usefulness. However, we mention one concrete finding which has some bearing on modelling experimental data. As mentioned already, by constraining the nearest-neighbor correlation (which amounts to an effective potential with just a nearest-neighbor interaction), a pretty good fit to longer-range correlations can be had. What of the deficit? Should it be ascribed to real (dipolar) longer-range interactions? Our examination of dynamics shows that, in addition to a nearest-neighbor pair interaction, a four-spin interaction between a spin and three neighbors is often called for. As figure 2 shows, this can be just the thing to correct the failure of a simple nearest-neighbor pair interaction. We intend to confront this idea with experimental data in a forthcoming paper.

The Markov-Gibbs connection
Our rewriting of nonequilibrium probabilistic spin systems in a form that looks like thermal equilibrium for some effective potential will be based on rational inference about the essential correlations. Consideration of the dynamics provides the information, but to fully understand what it means involves a theorem, described below, which goes back at least to the 1970s [19][20][21] (generalization was also made [22,23] to the 'almost-Markovian' case, for which the influences do not have finite range).
A finite collection of spin variables σ_i is given, labeled by 'sites' i ∈ V (V denoting the entire system), as well as a probability distribution P(σ) on the configurations (microstates). For A ⊂ V, we often write σ_A as shorthand for the collection {σ_i : i ∈ A}. (Also, note the notational abuse whereby 'σ_i' serves as the name of a variable as well as a possible value of same.) The basic question is whether this can be written in the Gibbsian form

P(σ) = Z^{-1} exp Σ_{A⊂V} Φ_A(σ_A),    (1)

where the term Φ_A(σ_A) in the effective potential depends only on the spins in A. At a slightly more refined level, we ask how the supports A of the pieces Φ_A of the effective potential are related to the intertwined probabilistic dependencies contained in P. Obviously, one can simply take the logarithm of P to get a single term with A consisting of V itself, but that is not very useful. Suppose we are given pair-correlation data for various pair separations for a spin lattice, and we wish to model it in a Gibbsian form as in (1). The first thing we would try is probably just a single nearest-neighbor term. But there are correlations between spins at all separations. Is there a feature of the probability distribution which would give the nearest-neighbor (or some other) correlation this sort of priority?
The first step toward an answer is to construct the dependency graph, for which we have to assume we have the complete probability distribution of the spin configurations. For i, j ∈ V, examine the joint probability distribution P(σ_i, σ_j | σ_k, k ≠ i, j) of σ_i, σ_j conditioned on the values of all the other spins. σ_i and σ_j are conditionally independent if this conditional distribution factorizes as P(σ_i, σ_j | σ_k, k ≠ i, j) = P(σ_i | σ_k, k ≠ i, j) P(σ_j | σ_k, k ≠ i, j) for all values of {σ_k : k ≠ i, j}; this situation is denoted 'i ⊥ j'. Now, if i and j are not conditionally independent, we add an edge (ij) ∈ E between sites i and j, resulting in a graph G with vertex set V and edge set E. A subgraph of G, consisting of some W ⊂ V and the edges in E which connect vertices in W, is totally connected if every pair of sites in W has an edge between them. A maximal totally connected subgraph is dubbed a clique. For instance, if V = {0, 1, . . . , N} and E = {(i, i + 1) : 0 ≤ i < N}, then the cliques are simply the nearest-neighbor pairs corresponding to the edges. Now the theorem in question (essentially the Hammersley-Clifford theorem) says that P factorizes over cliques. In other words, the sum over A in equation (1) runs over cliques. The conditional dependence condition (the negation of i ⊥ j) is equivalent to the condition I(σ_i : σ_j | σ_k, k ≠ i, j) > 0 on the conditional mutual information between i and j given all the other spins. This is a trivial rephrasing, but maybe it makes it clearer that conditional dependence means that spin j carries unique information about spin i which is not available elsewhere, except in spin i itself.
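The clique structure can be made concrete in a toy case. The following sketch (our illustration, not part of the formal development) enumerates a five-spin nearest-neighbor Ising chain and tests conditional independence directly from the definition: non-adjacent pairs factorize given the rest, adjacent pairs do not, so the cliques of the dependency graph are exactly the nearest-neighbor pairs.

```python
import itertools
import math

N, K = 5, 0.7  # open chain of 5 Ising spins, nearest-neighbor coupling K

def weight(s):
    # unnormalized Gibbs weight exp(K * sum of nearest-neighbor products)
    return math.exp(K * sum(s[i] * s[i + 1] for i in range(N - 1)))

def cond_indep(i, j, tol=1e-12):
    """Test whether P(s_i, s_j | all other spins) factorizes for every
    assignment of the conditioning spins."""
    others = [k for k in range(N) if k not in (i, j)]
    for rest in itertools.product([-1, 1], repeat=len(others)):
        def w(si, sj):
            s = [0] * N
            for k, v in zip(others, rest):
                s[k] = v
            s[i], s[j] = si, sj
            return weight(s)
        tot = sum(w(a, b) for a in (-1, 1) for b in (-1, 1))
        for a in (-1, 1):
            for b in (-1, 1):
                joint = w(a, b) / tot
                pi = sum(w(a, bb) for bb in (-1, 1)) / tot
                pj = sum(w(aa, b) for aa in (-1, 1)) / tot
                if abs(joint - pi * pj) > tol:
                    return False
    return True
```

Here cond_indep(0, 2) and cond_indep(1, 4) come out True while cond_indep(0, 1) is False, as the Markov structure of the chain dictates.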
To connect to dynamics, we take a space-time view. Consider a probabilistic cellular automaton, and assume it starts in a totally disordered state. For each spin i in the system, there is a possibly time-dependent update rule giving the probability that spin i will take the value σ_i at time t given that the configuration of the entire system was σ(t − 1) at time t − 1. We assume that this probability kernel actually depends only on σ(t − 1) in some small neighborhood B_i of i. Then the complete transition kernel is the product

P(σ(t) | σ(t − 1)) = Π_{i∈V} p_{i,t}(σ_i(t) | σ_{B_i}(t − 1)).

With reference to the theorem above, all seems nicely behaved from a space-time perspective. The cliques are the B_i's at each time. But, what we want is a description of the state of the system at some fixed time t > 0. To achieve that, the spins at earlier times need to be 'integrated out'. This is very similar to a decimation renormalization group transformation. Since the update rules are local, we expect information about spin i to be propagated outward at finite speed and become degraded as time goes on. It seems reasonable to expect that most of the unique information is contained in the correlations directly created by the dynamics. Thus, we should consult the dynamics to decide which couplings to put into the effective potential.
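The product structure of such a kernel is easy to exhibit. The sketch below (a generic illustration with a hypothetical local rule, not a rule used later in the paper) builds the full transition kernel of a four-spin ring as a product of local kernels, each depending only on the neighborhood B_i = {i − 1, i, i + 1}, and confirms that the product is properly normalized.

```python
import itertools
import math

N = 4  # ring of four Ising spins

def local_p(si_new, nbh_sum):
    # hypothetical local update rule: spin i tends to oppose the sum of
    # the spins in its neighborhood B_i at the previous time step
    h = -0.8 * nbh_sum
    return math.exp(si_new * h) / (2.0 * math.cosh(h))

states = list(itertools.product([-1, 1], repeat=N))

def kernel(s_new, s_old):
    # complete transition kernel: product of independent local kernels
    prod = 1.0
    for i in range(N):
        nbh = s_old[(i - 1) % N] + s_old[i] + s_old[(i + 1) % N]
        prod *= local_p(s_new[i], nbh)
    return prod

row_sums = [sum(kernel(sn, so) for sn in states) for so in states]
```

Each row sums to one because each local factor is separately normalized; the space-time cliques are precisely the neighborhoods B_i.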

Coupling flow equations
Now we give more details of the method. We assume that the spins are Ising-type, taking values ±1, that V is a regular lattice and that the dynamics are spatially homogeneous. Switching to a continuous-time framework, let L_t be the time-dependent generator of the dynamics. That is,

(L_t f)(σ) = Σ_{i∈V} Σ_{σ'_i} c_{i,t}(σ'_i | σ) [ f(σ'_i σ_{\i}) − f(σ) ].

Here, f(σ) is an arbitrary function of the spins, \i denotes all the sites except i, σ'_i σ_{\i} is the configuration σ with spin i reset to σ'_i, and c_{i,t}(σ'_i | σ) is the rate at which spin i jumps to σ'_i in the configuration σ. That is, (L_t f)(σ) is just the rate at which f is changing under the given dynamics at the instant that the configuration is σ. There is a certain repertoire of correlations that the dynamics is at least suspected of directly generating. For example, if spins are updated according to only the configurations of their nearest neighbors, these directly generated correlations will involve some or all of a spin and its neighbors. This repertoire S consists of shapes of clusters of spins, which are thus equivalence classes. One γ in S might be 'nearest-neighbor pairs', containing every such pair, or we might have reason to distinguish (for a square lattice) between horizontal neighbors and vertical neighbors. The point is that, for γ ∈ S, ⟨σ_A⟩ is the same for all specific collections of spins A ∈ γ. In the examples in the following sections we will assume full lattice symmetry, so that the description 'shapes' applies literally. Having chosen the repertoire S, we take an effective potential of the form

Φ = Σ_{γ∈S} K_γ Σ_{A∈γ} σ_A,  where σ_A := Π_{i∈A} σ_i.

Now that we have a rationalized relation between the dynamics and our effective potential, it is almost obvious how the couplings should vary with time. In a time-dependent Gibbsian state (suppressing some time dependence on the right-hand side),

(d/ds)⟨σ_α⟩ = Σ_{β∈S} χ_{αβ} (dK_β/ds)    (6)

for α ∈ S and σ_α denoting any particular σ_A for A ∈ α (they are equivalent!). The susceptibility matrix χ is given by the static fluctuation-response relation

χ_{αβ} = ∂⟨σ_α⟩/∂K_β = Σ_{B∈β} ( ⟨σ_α σ_B⟩ − ⟨σ_α⟩⟨σ_B⟩ ).

The picture here is of a system always in equilibrium, though the parameters K_α of that equilibrium are changing with time.
In that picture, equation (6) is just a differential-juggled version of the definition of a generalized static susceptibility.
Putting things together produces the flow equations

Σ_{β∈S} χ_{αβ} (dK_β/ds) = ⟨L_s σ_α⟩,  α ∈ S.    (9)

In the next two sections, we deploy this formalism against a couple of families of models, using both simple analytical approximations and Monte-Carlo integration of the flow equations (9).

Model Z
In this section we illustrate the Gibbsianization method by applying it to the hexagonal and kagomé lattice variants of a simple model (Model Z) which can be treated straightforwardly.
Starting from a completely random state, the evolution proceeds as follows: pick a spin, set it opposite to the majority of its neighbors, repeat. The obvious interpretation of this is as a zero-temperature quench of an infinite-temperature state in a system with an antiferromagnetic nearest-neighbor interaction (we will not discuss the ferromagnetic variant). On its face, that does not sound much like rotational demagnetization for artificial spin lattices, and from that perspective it is interesting to look at precisely because it starts from such a different state (arrays are fully ordered by the external field in the early stages of rotational demagnetization). Nevertheless, it turns out to have a lot in common with the more realistic model taken up in the next section, showing the relative unimportance of that early ordering. Also, Model Z may serve as a general prototype of nonequilibrium ordering beyond the context of artificial spin lattices.
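Model Z takes only a few lines of code to simulate directly. The sketch below (our implementation; the brick-wall indexing of the honeycomb lattice, the lattice size and the random seed are our choices) runs the quench for two mean touches per spin and measures the nearest-neighbor correlation, which comes out strongly negative, as expected for the antiferromagnetic rule.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 48  # even; brick-wall (square-grid) representation of the honeycomb lattice

def nbrs(r, c):
    # two horizontal neighbors plus one vertical neighbor whose direction
    # alternates with sublattice parity (periodic boundaries, L even)
    third = ((r + 1) % L, c) if (r + c) % 2 == 0 else ((r - 1) % L, c)
    return [(r, (c - 1) % L), (r, (c + 1) % L), third]

s = rng.choice([-1, 1], size=(L, L))  # infinite-temperature initial state
for _ in range(2 * L * L):  # s = 2 mean touches per spin
    r, c = rng.integers(L), rng.integers(L)
    total = sum(s[x] for x in nbrs(r, c))
    s[r, c] = -np.sign(total)  # three +-1 spins never tie

# nearest-neighbor correlation after the partial quench
c2 = np.mean([s[r, c] * s[x]
              for r in range(L) for c in range(L) for x in nbrs(r, c)])
```

With this seed the quench leaves a strongly antiferromagnetic but unsaturated nearest-neighbor correlation, the kind of partially ordered state the Gibbsianization is meant to describe.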

Hexagonal lattice
We consider first the honeycomb lattice. In this case, there is no essential difference between antiferromagnetic and ferromagnetic interaction since one can be turned into the other by swapping the + and − spin labels on one sublattice.

Setting a spin to point opposite the majority of its three nearest neighbors is expressed algebraically as

σ_0 → −(1/2)(σ_a + σ_b + σ_c − σ_a σ_b σ_c),    (10)

where σ_0 is the updated spin and σ_a, σ_b, σ_c are its three nearest neighbors. Clearly the dynamics is generating not only pair correlations but four-spin correlations. We will include in our repertoire S all symmetry-allowed couplings residing in the elementary cluster above. Global spin-flip symmetry means that only terms with an even number of spins occur, so the third coupling we include is next-nearest neighbor (n.n.n.).
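Since three ±1 neighbors take only eight joint values, the algebraic form of the rule, σ_0 → −(1/2)(σ_a + σ_b + σ_c − σ_a σ_b σ_c), can be checked exhaustively against the majority prescription (a small illustration of ours):

```python
from itertools import product

def majority_poly(sa, sb, sc):
    # algebraic form of 'point opposite the majority of three neighbors'
    return -0.5 * (sa + sb + sc - sa * sb * sc)

# compare against minus the sign of the neighbor sum (never zero for 3 spins)
checks = [majority_poly(*nb) == -(1 if sum(nb) > 0 else -1)
          for nb in product([-1, 1], repeat=3)]
```

All eight cases agree, and the product term in the polynomial shows directly why four-spin correlations are generated.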
The effective potential is thus

Φ = K_2 Σ_{n.n.} σ_i σ_j + K_{2,2} Σ_{n.n.n.} σ_i σ_j + K_4 Σ_Y σ_0 σ_a σ_b σ_c.

Here, the last sum is over Y-shaped four-spin clusters, each consisting of a spin and its three nearest neighbors (see (10)). K_2 and K_4 are the couplings which are clearly being generated by the evolution, so K_{2,2}, which is not so favored, is not expected to be as significant. Keeping track of it allows an inexpensive (though undoubtedly fallible) way of monitoring the possible development of additional couplings due to the interplay of the evolution at different sites. Using equation (10), the right-hand sides of the flow equations (9) become kinetic equations (12)-(14) for the pair correlation functions c_2 (nearest-neighbor) and c_{2,2} (next-nearest-neighbor) and the four-spin correlator c_4, expressing each ⟨L σ_α⟩ in terms of such equal-time correlators. The time variable s is the mean cumulative number of applications of the update rule per spin ('mean touches per spin' for short).
These equations resemble any number of systems of kinetic equations, of which the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy is the prototype. Since the right-hand sides of such systems contain increasingly complex correlations, some physically motivated means of closing the system is sought. But, in the Gibbsianization framework, closure is not an extra ingredient. In principle, the expectations are evaluated in the Gibbsian state given by the current values of the effective couplings. We will actually employ cruder approximations; when they are not satisfactorily accurate, it is easier to use Monte-Carlo methods for solving the coupling flow equations (9) than to seek more sophisticated ways of evaluating the right-hand sides of equations (12)-(14).
Now that the problem has been re-expressed as one of equilibrium statistical mechanics, we can try to apply any of the many tools developed for equilibrium problems. Since Model Z starts from an infinite-temperature state where all the couplings are zero, a high-temperature expansion is appropriate for small enough s. Denoting t_α := tanh K_α, we evaluate the expectation values on the right-hand sides of equations (12)-(14) in a high-temperature expansion, and similarly with the susceptibility matrix, to derive a linear approximation to the flow. These linear equations are easily solved, with the result (17). As long as the couplings remain small, (17) should be a good approximation. But there is no particular reason to expect them to remain small. (Indeed, we might naively expect the end-state to be completely ordered.) An alternative way to tackle the problem is through Monte-Carlo integration of the flow equations (9), in which case the accuracy is limited only by our choice of repertoire S and the computer time spent on evaluating the ⟨Lσ_α⟩'s and χ. Like most good Monte-Carlo algorithms, the implementation is simple and does not require anything like equations (12)-(14). Instead we do the following. Using the current values of the couplings, we sample configurations from the equilibrium ensemble in the ordinary way, e.g. with the Metropolis algorithm. With a sampled configuration, we apply the update rule virtually to each spin in turn. The spin is not actually changed, but the effect that applying the rule would have on the σ_A's is noted. Averaging over a number of configurations will yield ⟨Lσ_A⟩. Collecting simultaneously the correlations needed to evaluate the susceptibility matrix puts one in position to calculate the implied increments in the couplings K_A. They are then updated and the entire procedure repeated. Figure 1 shows the results for the coupling flow from both the high-temperature expansion (17) and Monte-Carlo integration.
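The virtual-update estimator is easy to validate at s = 0, where the couplings vanish, sampling is trivial, and the expectations ⟨Lσ_α⟩ can be computed by hand (for this rule we find −1 for a nearest-neighbor pair and +1/2 for a Y cluster). A sketch of the estimator, again in our brick-wall representation of the honeycomb lattice:

```python
import numpy as np

rng = np.random.default_rng(1)
L, M = 24, 100  # lattice size (even) and number of sampled configurations

def nbrs(r, c):
    third = ((r + 1) % L, c) if (r + c) % 2 == 0 else ((r - 1) % L, c)
    return [(r, (c - 1) % L), (r, (c + 1) % L), third]

est2 = est4 = 0.0
for _ in range(M):
    s = rng.choice([-1, 1], size=(L, L))  # K = 0: uniform sampling suffices
    t2 = t4 = 0.0
    for r in range(L):
        for c in range(L):
            nb = nbrs(r, c)
            sa, sb, sc = (s[x] for x in nb)
            m = -0.5 * (sa + sb + sc - sa * sb * sc)  # virtual update only
            d = m - s[r, c]  # the change the rule *would* make
            # effect on all nearest-neighbor products containing this site
            t2 += d * (sa + sb + sc)
            # effect on all Y-cluster products containing this site:
            # the Y centered here, plus the Ys centered on each neighbor
            part = sa * sb * sc
            for j in nb:
                pj = 1
                for k in nbrs(*j):
                    pj *= s[k]
                part += s[j] * pj * s[r, c]  # 1/s = s for +-1 spins
            t4 += d * part
    est2 += t2 / (1.5 * L * L)  # 3N/2 nearest-neighbor bonds
    est4 += t4 / (L * L)        # one Y cluster per site
est2, est4 = est2 / M, est4 / M
```

In a full integration step one would also accumulate the correlations entering χ and solve (9) for the coupling increments; here the estimates of ⟨Lσ_α⟩ can simply be compared with the hand values.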
As more-or-less expected, the high-temperature expansion works well as long as the couplings remain small. At late times, a simple nearest-neighbor coupling does not describe Model Z well; it will correctly match only one correlation. A sizeable four-spin coupling is also needed. The next-nearest-neighbor coupling K_{2,2}, however, turns out to be very nearly zero. This last fact should give us confidence that no unincluded couplings are becoming strong.
But, do these couplings actually accurately reproduce the Model Z evolute, that is, do they reproduce correlations they were not forced to reproduce? Figure 2 makes the comparison at s = 1.95 for pair correlations. The correlations resulting from the actual Model Z dynamics are shown as solid squares and those of the Gibbsian state using the Monte-Carlo evolved K 's are shown as solid diamonds. Also shown, as open circles, are the correlations of a different simple Gibbsian model with only nearest-neighbor coupling, which has been adjusted to reproduce the nearest-neighbor correlation. This simple Gibbsian model has significant errors. One might have been inclined to repair them by introducing longer-range pair interactions. For Model Z at least, that is not the right thing to do. Instead, it needs a four-spin coupling, as deduced from examination of the dynamics.

Kagomé lattice
Now we consider Model Z on the kagomé lattice. In this case, the antiferromagnetic character matters because the kagomé lattice antiferromagnet is frustrated. This will limit the growth of correlations and makes a perturbative treatment more robust than for the hexagonal lattice. The update rule is now

σ_0 → −(1/8)[3(σ_a + σ_b + σ_c + σ_d) − (σ_b σ_c σ_d + σ_a σ_c σ_d + σ_a σ_b σ_d + σ_a σ_b σ_c)],    (18)

where σ_a, . . . , σ_d are the four nearest neighbors of σ_0. We did not specify what to do in case of no majority among the neighbors before, since the problem does not arise for the hexagonal lattice. The formula above gives zero, which corresponds (in correlation functions) to setting the spin randomly in that case. The alternative, no move in case of a tie, is also interesting, and makes sense supposing there were an activation barrier to spin flipping. We will treat that variant later, but only by simulation, since an algebraic expression like (18) for that case is very messy. We proceed much as with the hexagonal lattice, omitting some details. Again, we need to consider two- and four-spin interactions among a spin and its nearest neighbors, with an effective potential constructed as before. The subscripts '2,2' and '2,3b' correspond to the pair labelling in figure 5 and are consistent with notation in a forthcoming experimental paper [24]. We may write the linear approximation for the early-time coupling flow as before. In figure 3, dashed lines are for the rule variant which makes a random spin flip in the absence of a neighbor majority and solid lines are for the variant which makes no move in that case. We see that our Monte-Carlo integration is not perfect, as one of the c_4 lines is deviating from where it should be, but the agreement with direct simulation is, in general, very good. Making random flips in case of no neighbor majority pushes both c_4 and c_{2,2} toward the same value, roughly c_2^2, that they would have if there were only a nearest-neighbor coupling. Here is a potential explanation. Correlations deviating from those for only K_2 nonzero build up in the early part of the evolution. As c_2 develops toward the frustrated saturation value of −1/3, ties become more plentiful.
The random flipping has no effect on nearest-neighbor correlations (essentially by definition), but can wipe out the further correlations that have built up. This interpretation is corroborated by examining the actual coupling flows from the Monte-Carlo integration, shown in figure 4, especially in the behavior of K 2,2 and K 2,3b .
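As an aside on the update rule itself: the four-neighbor majority rule with zero on a tie is realized by the odd multilinear polynomial −(1/8)[3(σ_a + σ_b + σ_c + σ_d) − (σ_b σ_c σ_d + σ_a σ_c σ_d + σ_a σ_b σ_d + σ_a σ_b σ_c)] (our reconstruction of the algebraic form, which the sixteen neighbor configurations determine uniquely), and this can be checked exhaustively:

```python
from itertools import product

def kagome_rule(sa, sb, sc, sd):
    # odd polynomial equal to -sign(neighbor sum), with value 0 on a tie
    lin = sa + sb + sc + sd
    cub = sb * sc * sd + sa * sc * sd + sa * sb * sd + sa * sb * sc
    return -(3 * lin - cub) / 8.0

checks = []
for nb in product([-1, 1], repeat=4):
    total = sum(nb)
    target = 0 if total == 0 else (-1 if total > 0 else 1)
    checks.append(kagome_rule(*nb) == target)
```

The zero value on a tie is what implements the random-reset variant at the level of correlation functions.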
At late times, the no-flip-on-tie variant tends to a state which is within the ground state manifold of the nearest-neighbor interaction, but with some additional longer-range correlations. Encountering this state with no explanation of its origins, one might easily suppose that the situation was the result of a temperature so low that weak longer-range interactions were being expressed. But, not only is there no thermal equilibrium, there are no influences in the update rule beyond nearest-neighbor. Figure 5 compares the results of the Gibbsianized models to direct simulation results at s = 2.6 for some more-distant pair correlations. The difference between the update rule which, in the event of a tie among neighbors, does nothing and that which makes random flips persists in these correlations and is very well reproduced by the Gibbsian models.

Model K
In this section we construct a kinetic model intended to capture essential features of the laboratory rotational demagnetization protocol and give it a Gibbsian analysis. Only the hexagonal lattice with easy axes out of the plane of the array [24] will be dealt with explicitly. The out-of-plane moments make all interactions antiferromagnetic and simplify the kinetics in some respects in comparison to in-plane moments. Recall that the rotational demagnetization protocol [1] involves rotating the array in an external magnetic field H_ext such that the field and the easy axes of the nanomagnetic islands are always in the same static plane. The field magnitude is slowly decreased to zero (usually in small steps and with reversals at each step) from well above the single-island coercive field.
All the islands flip obediently at the behest of H_ext when the latter is very large, and are frozen when it is very small. Some sort of disorder or randomness [25,26] (or perhaps an edge effect [27]) is necessary to prevent all the islands from dropping out once and for all at the same instant, which would result in a fully ordered state (one not due to island-island interactions). Recent magneto-optical Kerr effect (MOKE) studies [28] have shown that variance in the intrinsic coercive fields H_c(i) of individual islands is an essential form of disorder in these systems. We build this into the model. Although the external field rotates continuously, when it is close to the critical field H_c(i) of island i, there is only a narrow window of field orientations that will permit the moment to flip, but the reversal itself occurs very quickly, so the field orientation hardly changes during it; this means that it is reasonable to regard the process as happening in discrete time steps. Since all easy axes are aligned, we arrive at the following picture: twice per rotation cycle, the islands have a chance to flip. During even-numbered time periods, flipping up is allowed. Island i responds to the combination of the external field and the fields from the other islands. The amount by which the total (trying to flip it up) exceeds H_c(i) is the instability of island i; if the instability is negative, that island is stable. The most unstable island flips, which may alter the stabilities of nearby islands. With antiferromagnetic interactions, that can only reduce instability. Again, the most unstable island flips, and so on, until there are no more unstable islands. During the odd-numbered periods, there is a similar round of flipping down. That completes the modelling of one rotational cycle of the external field.
This minimal model (Model K) can now be straightforwardly simulated, given the time variation of H ext and the distribution of island critical fields (which we assume to be Gaussian, for want of clear indication that it should be otherwise). We shall consider some results of such direct simulation below. But our primary interest here is in seeing how the model might be Gibbsianized. The presence of quenched random variables in the form of a distribution of island critical fields is a new feature and the alternation of flipping-up and flipping-down periods looks likely to make for a very messy description. However, the disorder-dominated limit can be handled nicely. Moreover, this limit seems likely to be relevant to experiments, as MOKE measurements [28] suggest a width of single-island coercive fields larger than the island interactions. We shall concentrate on that limit now.
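Direct simulation of Model K, as just described, can be sketched in a few dozen lines. In the version below (our implementation; the brick-wall honeycomb indexing, the mean coercive field, the lattice size and the seed are our choices, while ΔH_ext = 1.6, J = 6, σ = 100 echo the values used later in the text), the most unstable island flips repeatedly until each period's cascade is exhausted:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 12                          # even; brick-wall honeycomb, periodic boundaries
J, dH, sigma = 6.0, 1.6, 100.0  # island coupling, field step, disorder width
Hc = rng.normal(300.0, sigma, size=(L, L))  # quenched island coercive fields
s = np.ones((L, L), dtype=int)              # saturated by the initial large field
parity = (np.add.outer(np.arange(L), np.arange(L)) % 2 == 0)

def neighbor_sum(s):
    horiz = np.roll(s, 1, axis=1) + np.roll(s, -1, axis=1)
    vert = np.where(parity, np.roll(s, -1, axis=0), np.roll(s, 1, axis=0))
    return horiz + vert

H = Hc.max() + 10 * dH
while H > 0:
    for sign_ext in (+1, -1):    # two flipping opportunities per rotation cycle
        while True:
            h = sign_ext * H - J * neighbor_sum(s)   # AF island-island fields
            # instability of misaligned islands, measured against H_c(i)
            inst = np.where(s * h < 0, np.abs(h) - Hc, -np.inf)
            i = np.unravel_index(np.argmax(inst), inst.shape)
            if inst[i] <= 0:
                break            # no unstable islands left in this period
            s[i] = -s[i]         # flip the most unstable island, then rescan
    H -= dH

# final nearest-neighbor correlation over the 3N/2 honeycomb bonds
vert_bond = s * np.roll(s, -1, axis=0)
c2 = (np.sum(s * np.roll(s, -1, axis=1)) + np.sum(vert_bond[parity])) / (1.5 * L * L)
```

With disorder dominant, each island's dropout is decided by its already-frozen neighbors, and the resulting nearest-neighbor correlation is strongly antiferromagnetic even though no thermal bath is anywhere in sight.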

Disorder-dominated regime
There are three energy, or equivalently magnetic field, scales in Model K: the step size of the external field (ΔH_ext), the field exerted by one island on a nearest neighbor (J), and the width of the distribution of island critical fields (σ). The magnetic moments of islands are not expected to vary much with high-quality lithography, but coercive fields of individual islands can be sensitive to small changes in the shape. The disorder-dominated regime treated in this section is ΔH_ext ≪ J ≪ σ. Also, the fields exerted by other than nearest neighbors are ignored. It may be that those fields can play the role of a significant noise, but we do not pursue that here. That ΔH_ext is much the smallest field scale means that the external field is changing effectively infinitely slowly; the final state is insensitive to ΔH_ext in this regime. (This phenomenon seems to be seen experimentally [9].) What matters for J is that it is nonzero, but its magnitude is irrelevant, as the following reasoning shows: σ is so large that for any i and j in the entire lattice, H_c(i) and H_c(j) differ by much more than the maximum field which all neighbors of an island could possibly exert on it. If J is zero, then spin i simply stops following H_ext when it decreases below H_c(i). We say that spin i has 'dropped out'. Turning on J does not radically change that picture. The influence of i's neighbors may result in i dropping out slightly earlier or slightly later, but because the intrinsic critical fields are so widely spaced, these dropout events are isolated, and that slight shift is ultimately of no significance, since it does not change the order of dropouts. However, under the other condition defining this regime, ΔH_ext ≪ |J|, the neighbors of i who have already dropped out decide the state into which spin i drops; those which have not dropped out have no vote in the decision.
That is because they are continuing to flip with the external field, so that the difference between the net field strength experienced by island i at an up-flipping period and that experienced at a neighboring down-flipping period is determined entirely by the neighbors which do not flip with the field.
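The limiting process just described can be checked by direct simulation: each island drops out exactly once, in a random order (standing in for the order of the coercive fields), into the state opposite the majority of its already-dropped-out neighbors, with a random value on a tie or when none have dropped out. A sketch (our implementation, brick-wall honeycomb again):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 48  # even; brick-wall representation of the honeycomb lattice

def nbrs(r, c):
    third = ((r + 1) % L, c) if (r + c) % 2 == 0 else ((r - 1) % L, c)
    return [(r, (c - 1) % L), (r, (c + 1) % L), third]

s = np.zeros((L, L), dtype=int)      # 0 = not yet dropped out
for idx in rng.permutation(L * L):   # only the order of coercive fields matters
    r, c = divmod(int(idx), L)
    total = sum(s[x] for x in nbrs(r, c))  # votes of dropped-out neighbors only
    s[r, c] = -np.sign(total) if total != 0 else rng.choice([-1, 1])

c2 = np.mean([s[r, c] * s[x]
              for r in range(L) for c in range(L) for x in nbrs(r, c)])
c4 = np.mean([s[r, c] * np.prod([s[x] for x in nbrs(r, c)])
              for r in range(L) for c in range(L)])
```

With this seed the final state gives c_2 and c_4 comparable to the values quoted in the text for the integrated flow (−0.578, −0.040) and for direct simulation of Model K (−0.596, −0.052).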
In this disorder-dominated regime, then, only the order, not the actual values, of the critical fields matters to the final state. Each spin makes one random decision to drop out, and islands which are not dropped-out are essentially invisible. Moving toward a Gibbsian description, we thus allow spins to have three values: ±1 (dropped out, or occupied) and 0 (not dropped out). The dynamical generator we want to Gibbsianize is again majority-rule-among-neighbors, but that now depends on how many neighbors are dropped out. If site 0 has one nonzero nearest neighbor, then the majority rule is −σ_a. If two, then −(1/2)(σ_a + σ_b). If three (as for Model Z), then −(1/2)(σ_a + σ_b + σ_c − σ_a σ_b σ_c). The labelling of neighbor sites a, b, c is as in (10), and will continue to be our convention. Using what we learned from Model Z, our effective potential contains a nearest-neighbor term and a four-spin term only. Slavish adherence to the letter of Gibbsianization would suggest that we incorporate the possibility of empty sites with chemical potential terms µ_i σ_i^2. But that would be a mistake, since occupancy represents quenched disorder. It is not difficult to simply let each site be occupied with probability p, independently at each site and at each time. The state when p reaches one is of primary interest, and in order to treat that we may as well take p to be our time variable during the demagnetization process. In contrast to the evolution equations for hexagonal Model Z, each possibility for occupancy of neighbors must be considered. For example, the possibility that all three neighbors of spin 0 are occupied is represented by a factor σ_a^2 σ_b^2 σ_c^2, and then the rule for three neighbors is applied. This procedure yields an evolution equation for the correlation function c_2; in it, q := 1 − p, and some symmetry and the relation σ_i^3 = σ_i, valid for σ_i = 0, ±1, have been used.
The ubiquitous factor of $1 - \sigma_0^2$ is due to the fact that a spin can change only by dropping out. The first (second, third) line corresponds to the possibility that $\sigma_0$ has only one (two, three) occupied neighbor site(s). Similarly, for the four-spin correlation we get: Based on the experience of hexagonal Model Z, we keep only these two correlations. The equations look fearsome, but quickly boil down to: These are straightforwardly integrated to yield, at $p = 1$, $c_2 = -0.578$ and $c_4 = -0.040$. By contrast, direct simulation of Model K with $H = 1.6$, $J = 6$ and $\sigma = 100$ gives $c_2 = -0.596$, $c_4 = -0.052$.

To calculate further pair correlations, we need to convert the correlations to couplings: $c_2 \approx p^2 t_2 + 2 p^6 t_2^5$ gives $t_2 = -0.51$, and $c_4 \approx (t_4 + t_2^3)\, p^4$ gives $t_4 = 0.093$. Thus there is a sizeable ferromagnetic four-spin coupling in this model, as was the case for Model Z. Now we can calculate the further pair correlations according to the high-temperature expansion approximations: Since $t_2$ is sizeable, some skepticism about the trustworthiness of this approximation is in order. Nevertheless, it is in excellent agreement with direct simulation results ($H = 1.6$, $J = 6$ and $\sigma = 100$ again), as figure 6 shows. It is not clear why the seemingly crude approximation given here works so well; it is a pleasant surprise.
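As a rough illustration, the sequential drop-out dynamics and the inversion of correlations into couplings can be sketched as follows. This is not the Model K simulation quoted above (no $H$, $J$, $\sigma$ here); the lattice size, random seed, and the brick-wall embedding of the coordination-3 (honeycomb) lattice are our own illustrative choices, and the inversion simply solves the $p = 1$ relations quoted in the text.

```python
import random

# Illustrative sketch, not the paper's Model K: (i) sequential drop-out with
# the majority-rule-among-dropped-out-neighbors stated in the text, on a
# coordination-3 lattice; (ii) numerical inversion of the quoted correlations
# c_2 = -0.578, c_4 = -0.040 into the couplings t_2 and t_4.
L = 60  # even, so the brick-wall embedding wraps consistently (our choice)
random.seed(1)

def neighbors(i, j):
    # Brick-wall embedding of the honeycomb lattice: two horizontal
    # neighbors plus one vertical neighbor chosen by sublattice parity.
    vert = (i + 1) % L if (i + j) % 2 == 0 else (i - 1) % L
    return [(i, (j - 1) % L), (i, (j + 1) % L), (vert, j)]

spin = {(i, j): 0 for i in range(L) for j in range(L)}  # 0 = not yet dropped out
order = list(spin)
random.shuffle(order)  # each spin makes one random decision to drop out
for site in order:
    s = sum(spin[n] for n in neighbors(*site))  # only dropped-out neighbors count
    spin[site] = -1 if s > 0 else (1 if s < 0 else random.choice((-1, 1)))

bonds = [(a, b) for a in spin for b in neighbors(*a)]
c2_sim = sum(spin[a] * spin[b] for a, b in bonds) / len(bonds)
# c2_sim comes out markedly negative: neighbors anti-align.

def bisect(f, lo, hi, n=60):
    # Simple bisection; f(lo) < 0 < f(hi) and f monotone on [lo, hi].
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# At p = 1: c_2 = t_2 + 2 t_2^5 and c_4 = t_4 + t_2^3.
t2 = bisect(lambda t: t + 2 * t**5 + 0.578, -1.0, 0.0)
t4 = -0.040 - t2**3
print(round(t2, 3), round(t4, 3))  # close to the quoted -0.51 and 0.093
```

The inversion step confirms the quoted numbers: with $t_2 \approx -0.51$, the cubic term $t_2^3 \approx -0.133$ accounts for most of $c_4$, leaving the sizeable ferromagnetic $t_4 \approx 0.09$.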

Discussion and conclusions
Descriptions of evolving nonequilibrium systems of interacting spins, and of the end states of such evolutions, by means of effective temperature and more generally an effective potential can be rationalized by considering the evolution in a Gibbsian framework. The state is approximately represented at all times in a Gibbsian form with time-dependent couplings. But the effective potential is a package, without natural separation into temperature and energy. It may contain terms that do not correspond to actual physical interaction energies, and the evolution may be given in a way which makes no explicit mention of energies at all.

One may ask whether the states we are trying to describe are really Gibbsian at all. Since that is trivially the case (though not necessarily in a useful way) for a finite system, this is a question about the thermodynamic limit and the convergence of the effective potential (1). The issue may also be approached through conditional expectations. Good convergence of the effective potential corresponds to conditional expectations depending only weakly on distant spins. Unfortunately, it turns out that various kinds of transformations applied to well-behaved Gibbsian systems can result in a breakdown of this isolation from distant regions. This phenomenon was first analyzed thoroughly in the context of real-space renormalization group transformations [29], and later for stochastic evolution [30,31]. Conditional probabilities are discontinuous at certain 'bad' configurations for which the unobserved (integrated out) spins are at a phase transition point; in a neighborhood of such configurations, very distant spins can have a large influence on the conditional probabilities by tipping the unobserved spins one way or the other. Usually, the bad configurations have probability zero. It is unclear how large a problem this situation poses for what we propose here.
Obviously, one cannot claim to be approximating the true effective potential if there is none. On the other hand, weaker concepts of effective potential may be rigorously available. Also, a trade-off between accuracy and economy of description is to be expected, so that the existence of an exact effective potential is not necessary, although the trade-off remains to be carefully quantified. The purpose of the Gibbsian representation is to economically encode the important correlations that control all others. Generally, one would expect the important ones to be those generated directly by the dynamics. The effective temperatures previously arrived at for artificial spin ice by simply fitting experimental data are partially rationalized by this program. Model dynamics can be treated explicitly, showing that states which look like thermodynamic equilibrium can come about even when each spin makes only a single 'decision' guided by something very much less than a real energy (Model K).
We have applied the general Gibbsianization method to some simple models, and have seen how the flow of couplings can be tracked by Monte-Carlo methods which are simple to implement. For rotationally demagnetized artificial spin lattices and other systems, it may be that the final state is of primary interest, but the details of the kinetics during the process are poorly known. In such cases, an examination of simple families of evolution rules may point to the important couplings to keep in an effective potential description. Fitting those couplings to experiment then has a stronger justification.
The coupling constant flows resemble renormalization-group flows and the calculational method resembles decimation in that past states are integrated out. Although our emphasis has been on spin distributions at a single time, the method could be extended to address dynamic correlations. If the dynamics is time-independent, so that the generator $L_s$ does not depend on $s$, a time-independent vector field $dK_\alpha/dt$ can be mapped out, and then the relation $\langle [\sigma_\beta(0) - \langle \sigma_\beta(0) \rangle]\, \sigma_\alpha(t) \rangle = \partial \langle \sigma_\alpha(t) \rangle / \partial K_\beta$ applied. Also, a 'partial trace' could be used, keeping a Gibbsian form depending upon both the moving time-slice $t = s$ and some subset of the spins in the slice $t = 0$.
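The origin of this relation is a standard linear-response step, spelled out here for completeness under the convention that the initial state is Gibbsian, $P_0(\sigma) = Z^{-1} \exp\big(\sum_\gamma K_\gamma \sigma_\gamma\big)$, and is propagated by fixed dynamics. Writing $\langle \sigma_\alpha(t) \rangle = \sum_\sigma P_0(\sigma)\, E[\sigma_\alpha(t) \mid \sigma(0) = \sigma]$, the derivative with respect to $K_\beta$ acts only on $P_0$, and since $\partial_{K_\beta} P_0 = [\sigma_\beta - \langle \sigma_\beta \rangle] P_0$ (the second term coming from the normalization $Z$), it follows that
\[
\frac{\partial \langle \sigma_\alpha(t) \rangle}{\partial K_\beta}
= \big\langle\, [\sigma_\beta(0) - \langle \sigma_\beta(0) \rangle]\, \sigma_\alpha(t) \,\big\rangle .
\]
Thus the two-time correlation on the right is obtained from the single-time flow data by differentiating along the coupling directions.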
These methods can be useful for other systems where the evolution involves significant changes in short-range correlations, particularly where interaction energy has limited significance. Granular materials may provide some examples, and some insight may be gained into the emergence of Edwards' measures [32,33].