Information geometric analysis of phase transitions in complex patterns: the case of the Gray-Scott reaction-diffusion model

The Fisher-Rao metric from Information Geometry is related to phase transition phenomena in classical statistical mechanics. Several studies propose to extend the use of Information Geometry to study more general phase transitions in complex systems. However, it is unclear whether the Fisher-Rao metric does indeed detect these more general transitions, especially in the absence of a statistical model. In this paper we study the transitions between patterns in the Gray-Scott reaction-diffusion model using Fisher information. We describe the system by a probability density function that represents the size distribution of blobs in the patterns, and we compute its Fisher information with respect to changes in the two rate parameters of the underlying model. We estimate the distribution non-parametrically, so that we do not assume any statistical model. The resulting Fisher map can be interpreted as a phase map of the different patterns. Lines of high Fisher information can be considered as boundaries between regions of parameter space where patterns with similar characteristics appear, and can thus be interpreted as phase transitions between complex patterns.


Introduction
Phase transitions are ubiquitous in nature. They are a dramatic change in a system's properties triggered by a minuscule shift in its environment. Phase transitions are often associated with spontaneous symmetry breaking, where the transition is between an unordered phase and an ordered, less symmetric phase [1]. In simple models of phase transitions an order parameter is defined, which is zero in the unordered phase and non-zero in the ordered phase [1][2][3]. This is the basis for the mean-field approach to phase transitions, in which the free energy is expanded in powers of the order parameter. Many powerful methods have been developed over the years to study phase transitions, especially in the study of universality in so-called second-order (or critical) transitions, such as Landau-Ginzburg theory [3,4] and Wilson's renormalization group approach [1,2,5]. For those transitions the order parameter is continuous, thermodynamic quantities obey scaling laws in the vicinity of the critical point, and there is a diverging correlation length. For first-order transitions, on the other hand, there is a jump in the value of the order parameter, there is no diverging correlation length, and thus no scaling of the thermodynamic functions near the transition point. During the transition there can be a mixed phase with a stable interface between the two phases [1]. The study of first-order transitions is very important for complex systems, especially social or ecological ones, because the sudden (discontinuous) jump between two phases can be quite dramatic [6].
While the Ginzburg-Landau-Wilson approach has been tremendously successful in explaining universality in second-order phase transitions, it requires the definition of an order parameter. Different approaches, which do not require an order parameter, can be useful for cases where an order parameter is difficult to identify or does not exist (e.g. [7]). In one such approach we study the probabilistic description of the system while changing the parameters that bring the system across a transition. The statistical properties of the system in the different phases are very different; therefore, at the phase transition, the shape of the probability distribution function changes drastically. This is captured by the Fisher information matrix (FIM) through the Cramér-Rao bound [8][9][10][11]. A compelling differential-geometric framework for studying the changes that the probability distribution undergoes is information geometry (IG) [12]. In IG the family of probability distributions $p(x;\theta)$, parametrized by a set of continuous parameters (designated here as the vector θ), is seen as a differential manifold. The parameters form a coordinate system on the manifold, and distances are measured by the Fisher-Rao metric

$$g_{\mu\nu}(\theta) = \int p(x;\theta)\, \partial_{\mu} \ln p(x;\theta)\, \partial_{\nu} \ln p(x;\theta)\, \mathrm{d}x, \qquad (1)$$

which is a positive semi-definite, symmetric matrix that changes covariantly under reparametrizations of the probability distribution $p(x;\theta)$. Here $\partial_{\mu} \equiv \partial/\partial\theta^{\mu}$ is the derivative with respect to one of the parameters, indexed by μ. The IG of many models in statistical mechanics has been studied, e.g. in [13][14][15][16][17][18][19][20][21][22][23][24][25][26]. Of particular interest in these studies is the role of the scalar (Riemannian) curvature. It was shown to diverge at critical transition points and on the spinodal curve [25,26], thus effectively preventing geodesics from crossing into the unphysical area of phase space [27]. See also [28] for a general renormalization group analysis of IG near criticality.
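As a concrete illustration (ours, not part of the paper's pipeline), the metric in equation (1) can be checked numerically on a family with a known answer: for a Gaussian with unknown mean and fixed standard deviation σ, the Fisher information is $1/\sigma^2$. A minimal sketch, using a centered finite difference and a Riemann sum:

```python
import numpy as np

def fisher_info_mean(mu=0.0, sigma=1.0, dmu=1e-4):
    """Numerical Fisher information of a Gaussian family N(mu, sigma^2)
    with respect to its mean, following equation (1).

    Uses a centered finite difference in mu and a Riemann sum over x;
    the analytic answer is 1 / sigma**2.
    """
    x = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 4001)

    def pdf(m):
        return np.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    dlnp = (np.log(pdf(mu + dmu)) - np.log(pdf(mu - dmu))) / (2 * dmu)
    return np.sum(pdf(mu) * dlnp ** 2) * (x[1] - x[0])

print(fisher_info_mean(sigma=2.0))  # ~0.25, the analytic value 1/sigma^2
```

The same finite-difference-plus-integration structure underlies the non-parametric estimator used later in the paper.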
A connection between phase transitions in IG and the Ginzburg-Landau-Wilson approach can be made when an order parameter $\phi_{\mu}$ is the derivative of a thermodynamic potential with respect to some thermodynamic variable $\theta^{\mu}$. Then there exists a collective variable $X_{\mu}(x)$ [4,29] and the Fisher information matrix can be shown to obey [29]

$$g_{\mu\nu} = \beta\, \frac{\partial \phi_{\mu}}{\partial \theta^{\nu}}, \qquad (2)$$

where $\beta = 1/(k_B T)$ is the inverse temperature, $k_B$ being Boltzmann's constant. At second-order phase transitions in the thermodynamic limit this derivative diverges, and therefore the corresponding entry of the FIM also diverges.

J. Stat. Mech. (2016) 043301
When the system is finite, the Fisher information does not diverge but rather attains a maximum. The maximum of the Fisher information has been used to accurately find the phase transition point in finite systems [29,30] and as a definition of criticality in living systems [31].
We have two main goals for the current work: first, to test a conjecture set forth by Prokopenko et al in [29] that a divergence (or maximization) of the entries of the FIM can detect phase transitions even in the absence of an order parameter; second, to measure the Fisher information matrix without resorting to the underlying dynamics of the system and without assuming a specific parametric model for equation (1). This addresses the problem that the microscopic dynamics of complex systems are often unknown, and an analytic description of the probability density function is missing.
To accomplish these two goals we chose to study the specific example of the two-dimensional Gray-Scott (GS) reaction-diffusion model [32]. We chose the GS model for its rich variety of spatial and spatio-temporal patterns [33], and we consider the transitions between the different patterns as critical transitions. Among the different types of patterns one can find self-replicating spots [34][35][36], spatio-temporal chaos [37], and labyrinthine patterns [38]. These were first systematically classified by Pearson [33]. Our goal is to use the Fisher information matrix to construct a phase map for the Gray-Scott model, where we expect areas with high values of the Fisher information to demarcate the different patterns. As a probabilistic description of the system we chose the blob-size distribution, which we take to be a function of the control parameters F and k of the model, and which we estimate non-parametrically by applying image processing to the spatial V concentration resulting from our simulations.
The paper is organized as follows: in section 2 we discuss the relationship between Fisher information and criticality, which forms the motivation for our approach; we revisit the arguments in [29] and extend them to our case. In section 3 we introduce the Gray-Scott model and discuss some of its properties. In section 4 we present the results of the computations, and in section 5 we explain the methods we used to compute the Fisher information in this setting. Lastly, we discuss our results in section 6.

Fisher information and criticality
In this section we summarize the derivation performed in [29] leading to equation (2), which relates order parameters to the entries of the Fisher information matrix. This derivation leads to the conjecture [29] that it suffices to consider the Fisher information matrix entries rather than order parameters; we present it here because it is important for our exposition.
The Gibbs ensemble can be written generically as

$$p(x;\theta) = \frac{1}{Z(\theta)}\, e^{-\theta^{\mu} X_{\mu}(x)}, \qquad (3)$$

with $Z(\theta)$ set by normalization and with the summation convention over repeated indices, which we use throughout the paper. For this distribution the Fisher information is

$$g_{\mu\nu} = \partial_{\mu}\partial_{\nu} \ln Z(\theta) = -\beta\, \partial_{\mu}\partial_{\nu} G, \qquad (4)$$

where $G = -k_B T \ln Z$ is the Gibbs free energy. Performing the derivatives we obtain equation (2). For many systems the order parameter can be defined by introducing into the free energy an external field h that couples to the order parameter φ [1]. The canonical example is the magnetization, which couples to the external magnetic field, so that

$$\phi = M = -\frac{\partial G}{\partial h}, \qquad (5)$$

the external field being one of the external parameters $\theta^{\mu}$. Setting $\theta^{\mu} = h$ for a particular μ we obtain

$$\phi_{\mu} = -\frac{\partial G}{\partial \theta^{\mu}} = k_B T\, \partial_{\mu} \ln Z = -k_B T\, \langle X_{\mu} \rangle. \qquad (6)$$

The meaning of equation (6) is that if an order parameter $\phi_{\mu}$ is a derivative of the free energy G, then there exists a collective variable $X_{\mu}(x)$ whose average is proportional to the order parameter [29]. It is important to note that many models exist whose order parameter is indeed the average of a collective variable [4,29].
Combining equations (4) and (6) we obtain equation (2). This links the entries of the Fisher information matrix with the derivatives of the order parameters $\phi_{\mu}$ of the system. Since at phase transitions the order parameter or its derivatives become non-analytic, we can expect the Fisher information matrix to have diverging entries at the phase transition point. For example, at a ferromagnetic transition point, with $(\theta^{1}, \theta^{2}) = (h, T)$, the diagonal elements of the Fisher information matrix satisfy [29]

$$g_{11} \propto \chi_T, \qquad g_{22} \propto C_h. \qquad (7)$$

Since $g_{11}$ and $g_{22}$ are proportional to the magnetic susceptibility $\chi_T$ and the heat capacity $C_h$ respectively, we expect both entries to diverge at the point of the magnetic second-order phase transition [29]. More generally, it is easy to show that [29]

$$g_{\mu\mu} = -\beta\, \frac{\partial^{2} G}{\partial (\theta^{\mu})^{2}}, \qquad (8)$$

which is also called the generalized susceptibility in statistical mechanics [31]. This derivation led Prokopenko et al to propose that the maximization of the appropriate Fisher information matrix entries can detect phase transitions without explicitly defining an order parameter [29]. Relation (2) suggests introducing an order parameter derived from the Fisher information by integrating it from one phase to the next:

$$\phi_{\mu} = \int_{\theta_1}^{\theta_2} g_{\mu\nu}\, \mathrm{d}\theta^{\nu}. \qquad (9)$$

Here we absorbed the inverse temperature into the definition of θ, and the integration path starts at $\theta_1$, which is in one phase, and ends at $\theta_2$, which is in the other. In general the result will depend on the integration path and its end-points. While the derivation above assumes the form (3) for the probabilistic description of the system, the idea of Fisher information maximization at phase transition points can be generalized to other probabilistic descriptions through the Cramér-Rao bound [9][10][11], which states that the variance of an unbiased estimator $\hat{\theta}(x)$ is bounded from below by the inverse of the Fisher information.
We make the following heuristic argument: when a system is said to undergo a phase transition, there is an observable change in some aspect of the system (often having to do with its symmetries). This means that the statistical properties of the system in the two phases differ significantly. For example, if we repeatedly sample the energy per spin of an Ising system in the high-temperature phase, we obtain a broad distribution of energies; in the low-temperature phase, by contrast, the energy distribution is very narrow, since the spins are aligned and the system is close to its ground state [5]. Thus the probability density function describing such observables undergoes a drastic change in its functional form as the control parameter is varied across the transition. This, in turn, implies that we can estimate the value of the control parameter at the phase transition point accurately, because of the large change in the behavior of the density function. According to the Cramér-Rao inequality, the inverse of the Fisher information is a lower bound on the variance of the estimated parameter; if the parameter can be estimated accurately, the Fisher information must be high. We therefore surmise that under very general circumstances the Fisher information is maximized at phase transition points.

The Gray-Scott model
The Gray-Scott model is a non-linear reaction-diffusion model of two chemical species U and V with the reactions

$$U + 2V \to 3V, \qquad V \to P,$$

where U is constantly supplied into the system and the inert product P is removed. We can simulate the reaction using the law of mass action [39], where we assume that the rate of each reaction is proportional to the concentrations of the reactants at each point. The resulting coupled non-linear differential equations are

$$\frac{\partial u}{\partial t} = D_u \nabla^{2} u - u v^{2} + F(1 - u),$$
$$\frac{\partial v}{\partial t} = D_v \nabla^{2} v + u v^{2} - (F + k)\, v, \qquad (12)$$

where $u(\mathbf{x}, t)$ and $v(\mathbf{x}, t)$ are the (dimensionless) concentrations of the two chemical species, $\nabla^{2}$ is the Laplacian with respect to $\mathbf{x}$, $D_u$ and $D_v$ are the diffusion coefficients of u and v respectively, F represents the rate of the feed of U and the removal of U, V and P from the system, and k is the rate of conversion of V to P. In practice this is a model of chemical species in a gel reactor, where the rate F can be modified relatively easily and k depends on the temperature of the system. $D_u$ and $D_v$ are more difficult to change and we consider them constant, with $D_u/D_v = 2$.
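A minimal numerical sketch of this model may help fix ideas (our illustration; the grid size, time step, and lattice spacing below are arbitrary choices, and the diffusion coefficients $D_u = 2\times10^{-5}$, $D_v = 10^{-5}$ are one choice satisfying $D_u/D_v = 2$):

```python
import numpy as np

def laplacian(a):
    """Five-point stencil with periodic boundary conditions."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott_step(u, v, F, k, Du=2e-5, Dv=1e-5, dt=1.0, dx=0.02):
    """One forward-Euler step of equation (12); here Du/Dv = 2."""
    uvv = u * v * v
    un = u + dt * (Du * laplacian(u) / dx ** 2 - uvv + F * (1 - u))
    vn = v + dt * (Dv * laplacian(v) / dx ** 2 + uvv - (F + k) * v)
    return un, vn

# start from the red state (u, v) = (1, 0) with a square perturbation
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
u[28:36, 28:36], v[28:36, 28:36] = 0.5, 0.25
for _ in range(100):
    u, v = gray_scott_step(u, v, F=0.04, k=0.06)
```

With these coefficients the diffusive stability number $D_u\,\mathrm{d}t/\mathrm{d}x^2 = 0.05$ is well inside the stable range for the explicit Euler scheme.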

Linear stability analysis
We start with the standard stability analysis for the homogeneous system (i.e. without diffusion). The Gray-Scott model has a trivial homogeneous steady state solution, referred to as the red state, at $(u, v) = (1, 0)$, which is always linearly stable for positive F and k [33,40]. Under the condition

$$d \equiv 1 - \frac{4(F + k)^{2}}{F} \geq 0,$$

two additional homogeneous steady state solutions appear:

$$u_{\pm} = \frac{1 \pm \sqrt{d}}{2}, \qquad v_{\pm} = \frac{F\,(1 - u_{\pm})}{F + k}.$$

These are referred to as the blue state ($u_{-}$) and the intermediate state ($u_{+}$) respectively [40]. Linear stability analysis shows that when these states exist, the intermediate state is always unstable, whereas the blue state can be stable. The two states appear through a saddle-node bifurcation, which in the F-k plane is defined by the curve

$$k = \frac{\sqrt{F}}{2} - F.$$

Under certain conditions [40] the blue state also loses stability through a Hopf bifurcation; for more details see [35,36,40,41]. The bifurcation curves are plotted in figure 1 together with the area where we performed our simulations.
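The steady states and the saddle-node condition above can be verified in a few lines (a sketch; the function names are ours). The formulas follow from setting the reaction terms of equation (12) to zero:

```python
import numpy as np

def homogeneous_states(F, k):
    """Non-trivial homogeneous steady states of the Gray-Scott reactions.

    Setting the reaction terms of equation (12) to zero with v != 0
    gives u*v = F + k and F*(1 - u) = u*v**2, hence a quadratic for u
    whose discriminant controls the existence of the blue and
    intermediate states.
    """
    d = 1 - 4 * (F + k) ** 2 / F
    if d < 0:
        return []  # only the red state (u, v) = (1, 0) exists
    us = [(1 - np.sqrt(d)) / 2, (1 + np.sqrt(d)) / 2]  # blue, intermediate
    return [(u, F * (1 - u) / (F + k)) for u in us]

def saddle_node_k(F):
    """Saddle-node curve k(F), where the discriminant above vanishes."""
    return np.sqrt(F) / 2 - F
```

On the saddle-node curve the two roots merge at $u = 1/2$; below it, only the red state remains.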

Complex patterns
In the vicinity of the bifurcations a variety of inhomogeneous patterns may appear [40]. These can be observed by setting the initial state of the system to the red state and adding a finite perturbation that allows the system to reach a different attractor (for details see section 5). The first to systematically study these patterns in two dimensions was Pearson [33], who classified the patterns into 12 types and designated them with Greek letters. In this paper we use the same parameters Pearson used for his simulations, except that we vary the simulation lattice size to include larger patterns, as described later.

The dierent patterns appear close to each other in the F − k space and often mix on the boundaries of regions of dierent patterns. One of the patterns that was first discovered by Pearson [33] is that of the self-replicating spots (e.g. figure 2(a)). A single spot of high V with a well defined boundary grows until at a certain point it will split into two spots. The process continues until the whole simulation area is covered with spots [34]. Depending on the parameters, the self-replicating spots will either reach an asymptotic fixed point or will start to flicker in what is known as spatio-temporal chaos [34-36, 42, 43]. Wang and Ouyang [37] derived a probabilistic description of the spot count in the chaotic regime as a function of the rate of spot creation and annihilation which were found to linearly depend on the spot-count. Another group of patterns are the worm-like patterns (figure 2(b)) which grow and fill the entire simulation lattice and then are fixed. In the parameter space, between the stable self-replicating spots and the worm-like patterns, a mixed pattern appears which contains both spots and stripes. This pattern (figure 2(c)) is reminiscent of a first order phase transition, where phase co-existence appears in the transition region. The degree of mixing gradually increases and then decreases again across the transition.

Probabilistic description
In order to construct a phase map using the Fisher information matrix, we follow Wang [37] in constructing our probabilistic description of the patterns. Wang treated each spot as an entity and computed the probability for a number of spots to appear per unit time. We likewise separate the pattern into entities, but regard not only the self-replicating spots but also the stripes and the labyrinthine patterns as entities, which we call 'blobs'. We take the lattice of V values (the U lattice is usually very similar to the V one) and treat it as an image. To identify blobs we first binarize the image using Otsu's method [45]. We then use the find_blobs method from the image-processing Python library SimpleCV [46] to label continuous clusters in the binarized image. We extract the size of each blob (in pixels) and use a non-parametric estimation procedure to obtain a PDF of the blob sizes. We assume that the blob sizes are characteristic of the different patterns and that by observing their distribution we could, with a high degree of certainty, deduce the type of pattern. We therefore expect the Fisher information to become large where the PDF changes rapidly, allowing us to detect the transitions between the patterns. The density extraction process is depicted in figure 3. Our choice of the blob-size distribution rather than any other distribution is pivotal to the analysis; other choices would lead to a different Fisher information map. For example, we also looked at the distribution of blob aspect ratios (ratio of the largest width/height to the smallest) and at the ratio of the blob area to its bounding box. These yielded pictures that were similar but much less clear than those from the blob-size distribution (data not shown). This is somewhat similar to the choice of an order parameter for phase transitions in statistical mechanics, which is an art and is often not unique [47, p 214].
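The extraction pipeline can be sketched as follows. Since SimpleCV may not be available, scipy.ndimage.label stands in for find_blobs and a fixed threshold replaces Otsu's method; both substitutions are ours:

```python
import numpy as np
from scipy import ndimage
from scipy.stats import gaussian_kde

def blob_size_pdf(v_lattice, threshold=0.2):
    """Estimate the blob-size PDF of a pattern.

    The paper binarizes with Otsu's method and labels blobs with
    SimpleCV's find_blobs; here a fixed threshold and
    scipy.ndimage.label stand in for both (our substitutions).
    """
    binary = v_lattice > threshold
    labels, n_blobs = ndimage.label(binary)
    sizes = np.bincount(labels.ravel())[1:]  # pixels per blob, background excluded
    return gaussian_kde(sizes.astype(float)), sizes

# synthetic pattern with three square 'spots' of different sizes
img = np.zeros((64, 64))
for size, (r, c) in zip([4, 5, 6], [(10, 10), (30, 40), (50, 20)]):
    img[r:r + size, c:c + size] = 1.0
kde, sizes = blob_size_pdf(img)
```

The returned gaussian_kde object is the non-parametric density estimate whose derivatives with respect to F and k are taken below.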
We assume that the extracted PDF is a function of the parameters F and k, so that we can compute its derivatives with respect to these parameters using a finite difference scheme. We use a centered finite difference scheme, and the integration is performed with scipy.integrate.quad [48]. The expression we use for the Fisher information is

$$g_{\mu\nu}(\theta) = \int p(x;\theta)\, \frac{\ln p(x;\theta + \Delta\theta^{\mu}) - \ln p(x;\theta - \Delta\theta^{\mu})}{2\,\Delta\theta^{\mu}}\, \frac{\ln p(x;\theta + \Delta\theta^{\nu}) - \ln p(x;\theta - \Delta\theta^{\nu})}{2\,\Delta\theta^{\nu}}\, \mathrm{d}x. \qquad (18)$$

In this expression $\Delta\theta^{\mu}$ represents a small increment in either F or k, keeping the other fixed (so, for example, $p(x;\theta + \Delta\theta^{F})$ is interpreted as the probability density at $(F + \Delta F,\, k)$). Following [30] we set the integrand to zero whenever any of the densities vanishes for a given x. An extended discussion of non-parametric density estimation (such as kernel density estimation) and of finite difference schemes in the computation of the Fisher information is given in [49].
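A sketch of this centered-difference estimator is given below (our illustration: pdf is any callable returning densities on a grid, the integral is a uniform-grid Riemann sum rather than scipy.integrate.quad, and it is tested on an analytic family rather than the blob-size KDEs):

```python
import numpy as np

def fisher_matrix(pdf, theta, dtheta, x):
    """Centered-finite-difference estimate of the 2x2 Fisher matrix.

    pdf(x, theta) returns densities on the grid x; theta and dtheta are
    length-2 arrays (standing in for (F, k) and their increments).
    Follows equation (18), zeroing the integrand wherever a density
    vanishes.
    """
    p0 = pdf(x, theta)
    dlnp = []
    for mu in range(2):
        e = np.zeros(2)
        e[mu] = dtheta[mu]
        pp, pm = pdf(x, theta + e), pdf(x, theta - e)
        with np.errstate(divide='ignore'):
            d = (np.log(pp) - np.log(pm)) / (2 * dtheta[mu])
        d[(pp == 0) | (pm == 0) | (p0 == 0)] = 0.0  # zero vanishing densities
        dlnp.append(d)
    w = x[1] - x[0]  # uniform-grid Riemann sum in place of quad
    return np.array([[np.sum(p0 * dlnp[m] * dlnp[n]) * w for n in range(2)]
                     for m in range(2)])

# sanity check: a Gaussian family (mean, sigma) has g = diag(1/s^2, 2/s^2)
def gauss(x, th):
    return np.exp(-0.5 * ((x - th[0]) / th[1]) ** 2) / (th[1] * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 4001)
g = fisher_matrix(gauss, np.array([0.0, 1.0]), np.array([1e-4, 1e-4]), x)
```

For the blob-size application, pdf would wrap the kernel density estimates obtained at neighboring (F, k) grid points.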
Since our premise is that the distribution of blob sizes is characteristic of the different patterns, we use the Shannon entropy, defined as

$$H(\theta) = -\int p(x;\theta) \ln p(x;\theta)\, \mathrm{d}x, \qquad (19)$$

as a way to validate our method. The Shannon entropy (19) is a measure of the uncertainty in the outcome of a random variable distributed according to $p(x;\theta)$. In our case it represents our uncertainty about the size of one of the blobs in the pattern. In the stable-spot pattern, for example, the uncertainty is relatively low and so is the Shannon entropy; in the stripe pattern it is high, since any given blob can belong to a short or a long stripe. This gives us a sort of order parameter which, however, is not necessarily related to a change in the symmetry of the system, and allows us to compare the predictions from the Fisher information with those from the Shannon entropy. We typically expect a line of high Fisher information where the Shannon entropy undergoes a large change.

Two dimensional phase map
To get an overview of where in parameter space each pattern is located, we first plot the Shannon entropy of the blob-size PDF; the resulting map is shown in figure 4. There are two areas with low entropy (blue regions, one above and one below the saddle-node curve). Both blue areas contain stable localized spots of roughly the same size, but only the spots in the area containing the (a) pattern are created through self-replication. The other spots are created by a different mechanism; see the supplementary material (stacks.iop.org/JSTAT/2016/043301/mmedia) for movies showing the evolution of these two patterns. When we move from the area containing (a) to the area containing (d) there is a well-defined boundary where the blue region ends and the spatio-temporal chaotic regime begins. The Shannon entropy increases slowly as one moves towards lower values of F and k, since the size of the spots becomes less certain and half-formed spots are counted as blobs with smaller and more variable areas. We refer to this transition as 'transition I'. A second interesting transition occurs when moving from the spot patterns towards the stripe patterns: spots and stripes mix to a varying degree, until there are no more spots and only stripes remain. The Shannon entropy increase is much steeper in this transition, since the blobs turn into stripes of wildly different lengths. We will refer to this transition as 'transition II'. Examples of transitions I and II can be seen in figures 6 and 7 respectively. The different components of the Fisher information matrix and its trace and eigenvalues are plotted in figure 5. At each point we computed the eigenvalues and plotted the larger eigenvalue (figure 5(e)) and the smaller one (figure 5(f)) separately. We also looked into the square root of the determinant of G, which is the density of distinguishable models [50,51]. This did not yield additional insights beyond the other measures, especially the trace (data not shown).
The main features of the Fisher information maps we see in figure 5 are that there are many curves with high values of the Fisher information ('ridges') which separate areas of lower FI. Most ridges follow the general curve of the saddle-node bifurcation line, except for the noticeable ridge separating the self-replicating spots pattern region (a) and the spatio-temporal chaotic area (d) (i.e. the ridge representing transition I). Almost all versions of the Fisher map show this ridge, but with a varying degree of clarity. Transition II is also clearly represented by a ridge which appears in almost all representations of the FIM (except for the G Fk component). This is the rather wide ridge where pattern (c) resides.

One dimensional transitions
We constructed the two-dimensional phase map by simulating the different patterns that appear at all the parameter values of interest. Often, however, we do not have complete knowledge of the surrounding patterns and only observe one instance of the system at a time. In these cases it can be useful to look at a one-dimensional plot of the Fisher information along a given trajectory. This can happen, for example, when we observe a natural system over time: we could then treat the observation time as the parameter θ and compute how much we know about the observation time from the distribution p. As an example we plot the Shannon entropy and the Fisher information interval along two trajectories, one crossing transition I and the other transition II. We define the Fisher information interval in analogy with special relativity [52]:

$$\mathrm{d}s^{2} = g_{\mu\nu}\, \mathrm{d}\theta^{\mu}\, \mathrm{d}\theta^{\nu}.$$

It is an invariant measure that represents the squared distance along the path described by $\mathrm{d}\theta^{\mu}$. To compute the interval we first define the start and end points of the path in the (k, F) plane.
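Numerically, the interval can be accumulated segment by segment along a discretized path (a sketch with hypothetical names; the metric is evaluated at segment midpoints):

```python
import numpy as np

def fisher_interval(g_of_theta, path):
    """Accumulate s = sum of sqrt(g_mu_nu dtheta^mu dtheta^nu) along a
    discretized path (an (N, 2) array of parameter points), with the
    metric evaluated at segment midpoints.
    """
    s = 0.0
    for a, b in zip(path[:-1], path[1:]):
        d = b - a
        g = g_of_theta((a + b) / 2)
        s += np.sqrt(max(float(d @ g @ d), 0.0))
    return s
```

For a constant diagonal metric, e.g. $g = \mathrm{diag}(1/\sigma^2, \cdot)$ with σ = 2, a straight path of length 3 along the first coordinate yields s = 1.5, as expected from the analytic line element.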

Transition I: self replicating spots to spatiotemporal chaos
The first one-dimensional transition we analyze is transition I, which we define between the stable, self-replicating spots and the chaotic spots. One such trajectory crossing the transition is plotted in figure 6. Point (a) (at parameter values k = 0.06141, F = 0.027136) is located in the stable-spot phase. Moving along the trajectory towards the chaotic regime, the Fisher interval rises until it reaches a maximum, after which it slowly decreases again. On the same graph we plot the Shannon entropy of the blob-size PDF. The Shannon entropy is lower in the stable phase than in the spatio-temporal chaos, as can also be seen in figure 4. This is due to the increased uncertainty in blob sizes in the chaotic regime.
We inspected the patterns on both sides of the transition visually and as a function of time, and the Fisher interval does indeed seem to capture the transition in the correct location. To verify this quantitatively we define an order parameter. Since the nature of the transition is dynamic (i.e. the time dependence of the pattern differs between the two phases) we simulate the system further in time. We divide the trajectory from (a) to (d) into 100 points; at each point we run the simulation for 50 000 warm-up time steps so that it reaches the state in which we computed the Fisher information. We then continue the simulation for an additional 2000 time steps, computing the blob count every 20 time steps. From these samples we compute the standard deviation of the blob count. The rationale is that for the stable spots the variability of the blob count should be zero, while in the chaotic regime it is non-zero. This is plotted on the bottom left in figure 6, below the Fisher interval plot. We also plotted vertical lines that indicate the positions of points (b) and (c). Because of the demanding computation time we performed the validation run on a 400 × 400 grid, as opposed to the 800 × 800 grid from which the Fisher interval is computed. The blob count indeed remains constant between (a) and (b). It then gradually becomes more variable as we go deeper into the chaotic regime. The increase in blob-count variability coincides with the rise in the Fisher interval, which we interpret as a verification that the transition indeed begins at this point.

Transition II: self-replicating spots to stripes
The second trajectory we chose crosses what we term transition II. Like the first, it starts in the self-replicating spots area (at a different point (a), located at parameter values k = 0.0652 and F = 0.0395) and crosses into the area of the stripe patterns at point (d), located at k = 0.0632, F = 0.0428. Along the trajectory the patterns evolve from spots only, to a mixed spot-stripe region, and finally to a region without any spots. Again we plot the trajectory in figure 7, along with the Fisher interval and Shannon entropy (top left). The Fisher interval clearly increases between points (b) and (c), which this time designate visually selected start and end points of the region where the Fisher interval is high. The Shannon entropy again increases across the transition and is higher in the stripe phase than in the self-replicating spot phase. Below the Fisher interval and Shannon entropy plots we draw part of the patterns that appear at points (a) through (d). This provides a visual verification of the position of the transition. Point (b) is the first point where stripes appear together with the spots, and point (c) is one of the last with spots.

Shannon entropy and Fisher information
It is tempting to compare the Shannon entropy and the Fisher information in terms of how well they capture the transition. Both can be used as indicators: the Shannon entropy indicates a transition when its average value changes significantly, the Fisher information by a rise and subsequent decrease in its value. We note that, following the earlier discussion, the Fisher information acts as a susceptibility measure (as the magnetic susceptibility would in an Ising model) and the Shannon entropy as an order parameter (as the magnetization would). We can, however, hypothesize transitions between patterns in which the Shannon entropy is constant but the Fisher information peaks, for example if the average blob size changes abruptly while the variance of the blob sizes remains fixed. In that case the Shannon entropy would miss the transition but the Fisher information would catch it.

Eect of the simulation grid size
The size of the simulation grid, which we varied in our experiments from 256 × 256 up to 1600 × 1600, has a large effect on the resulting phase map. Small grid sizes result in a fairly noisy Fisher map, whereas larger ones give a smoother map with better defined 'ridges' demarcating the different phases. We suspect this is a result of too few blobs appearing in the smaller grids. Because simulating large grids requires considerable computation time (the 1600 × 1600 grid took about a month on a cluster running on approximately 800 cores simultaneously), one has to balance accuracy against computational cost. We found that a grid size of 800 × 800 provided good results. To compare the results obtained for each grid size, we plot the trace of the Fisher information matrix computed at the different grid sizes. This is presented in figure 8. At 256 × 256 there is quite some noise from large peaks of the Fisher information. The 400 × 400 grid already provides a much better resolution of the ridges, which improves further at 800 × 800. The 800 × 800 grid also shows the highest contrast in Fisher information between 'ridge' and 'valley' regions. At the largest grid size we used, 1600 × 1600, some of the well-defined ridges become less well defined. We suspect that this may happen because the system did not have sufficient time to reach equilibrium, even though each simulation was run for 200 000 time steps.

Methods
The Fisher information is obtained by performing simulations over the parameter range where inhomogeneous patterns appear in the Gray-Scott model. We performed a simulation at each point of parameter space, starting from identical initial conditions (same seed) and running for the same number of time steps (depending on the simulation grid size). The simulation was started from the red state (u, v) = (1, 0) with a finite perturbation in the form of a 20 × 20 square in the center of the simulation grid set to (u, v) = (0.5, 0.25), plus Gaussian noise with an amplitude of 0.05 covering the entire grid. This initial state was then evolved by numerically integrating equation (12) using an Euler scheme until the final state was reached. We repeated the experiment with different simulation grid sizes, ranging from 200 × 200 up to 1600 × 1600. The simulation times ranged from 50 000 time steps (for the smallest grids) to 200 000 for the 1600 × 1600 grid, chosen such that the self-replicating stable spots fill the entire simulation window. The simulations were performed on the Lisa cluster run by SurfSara [53]. The Python code to perform the simulation is based on the code found at [54].
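The initial condition described above can be sketched as follows (interpreting the noise 'amplitude' as a standard deviation, and clipping to non-negative concentrations, are our assumptions, not details stated in the paper):

```python
import numpy as np

def initial_condition(n, seed=0):
    """Red state (u, v) = (1, 0), a 20x20 central square set to
    (0.5, 0.25), and additive Gaussian noise of amplitude 0.05.

    Treating 'amplitude' as the standard deviation and clipping to
    non-negative concentrations are our assumptions.
    """
    rng = np.random.default_rng(seed)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    c = n // 2
    u[c - 10:c + 10, c - 10:c + 10] = 0.5
    v[c - 10:c + 10, c - 10:c + 10] = 0.25
    u += 0.05 * rng.standard_normal((n, n))
    v += 0.05 * rng.standard_normal((n, n))
    return np.clip(u, 0, None), np.clip(v, 0, None)
```

Using a fixed seed reproduces the paper's stated requirement that every parameter point start from identical initial conditions.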
For each simulation we extracted the PDF by following the steps described in section 3.3. We used the Python package SimpleCV for the binarization and blob detection of the images and scipy.stats.gaussian_kde for the computation of the PDF from the blob sizes. The computation of the Fisher information from the PDF followed the description in [49] and the code for this computation is available online at [55].

As mentioned in section 4 we also computed the Shannon entropy for each PDF we obtained. This was done by simple integration of the PDF using equation (19) and the Python function scipy.integrate.quad.
In addition to the Gaussian KDE, we used the novel density estimation method DEFT [56], and we tried two different ways to integrate equation (18): once as it is written in equation (18), and once by first performing the differentiation of the logarithm (replacing $\partial_{\mu} \ln p$ with $(1/p)\,\partial_{\mu} p$) before defining the finite difference scheme. As described in [49], the first method yields better results for the Gaussian KDE and the second for DEFT. We eventually used the results obtained from the Gaussian KDE rather than from DEFT, because finding correct DEFT parameters for all parameter values was difficult: both the range of blob sizes and the number of blobs in each simulation varied too much across parameter values.

Conclusions
In this paper we explore the use of the Fisher information matrix to capture transitions between different patterns in the Gray-Scott model. The use of Fisher information for this purpose is inspired by an analogy with phase transitions in thermodynamic systems and follows the work by Prokopenko et al [29]. The main purpose of the study was to test whether such a description is feasible in a system whose probabilistic description is not derived from the microscopic dynamics and where no statistical model is assumed. We find that, at least in the case of the GS model with the blob-size PDF as the probabilistic description, this is indeed possible.
The main diculty in using this approach to produce the entire phase map is that it is computationally very demanding. The very smooth phase map only really appears at grid sizes of × 800 800 which requires long computation runs. A more conceptual diculty is that once we obtain the maps, their exact interpretation is not trivial. The value of the Fisher information ranges over many orders of magnitude (between 10 3 and 10 9 ) and there is no theoretical value to compare with.
The strength of the method lies in that it manages to capture various types of transitions with the same metric: we did not have to define specific order parameters for the different patterns. It also captures our intuitive understanding of a 'phase transition' as a large change in the statistical properties of a system when the underlying 'external' parameters are changed.
In the one-dimensional cases, the Fisher information does indeed seem to capture essential features of 'pattern transitions', in that it gives a clear signal in the form of a peak between two regions of low Fisher information. It is especially interesting to note that the Fisher information predicts an exact position for the transition to spatio-temporal chaos, as depicted in figure 6, since this transition is dynamic in nature. Further mathematical analysis, similar to that done in [42,43] for the two-dimensional Gray-Scott model, might be able to confirm the exact position of the transition line and compare it to the one detected by the FI.
Our results suggest that Fisher information can indeed serve as a generalized susceptibility measure in the study of complex systems. Because very few assumptions