Dynamics of critical fluctuations: Theory -- phenomenology -- heavy-ion collisions

This report summarizes the presentations and discussions during the Rapid Reaction Task Force "Dynamics of critical fluctuations: Theory -- phenomenology -- heavy-ion collisions", which was organized by the ExtreMe Matter Institute EMMI and held at GSI, Darmstadt, Germany in April 2019. We address the current understanding of the dynamics of critical fluctuations in QCD and their measurement in heavy-ion collision experiments. In addition, we outline what might be learned from studying correlations in other physical systems, such as cold atomic gases.


Introduction
Ultra-relativistic heavy-ion collisions create small droplets of deconfined QCD matter - the Quark-Gluon Plasma (QGP). As the system expands, it cools and eventually hadronizes. As a function of beam energy, system size, and rapidity, the collision explores different regions of temperature T and baryo-chemical potential μ B in the QCD phase diagram [1][2][3][4], possibly including a conjectured QCD critical point [5]. This critical point is the endpoint of a line of first order QCD phase transitions, analogous to the critical endpoint in the phase diagram of water.
The main tool that connects the evolution of the matter produced in a relativistic heavy-ion collision to bulk properties of QCD is viscous relativistic fluid dynamics [6][7][8][9]. Fluid dynamics can be understood as the effective theory of the long-time and long-wavelength behavior of a classical or quantum many-body system. In this limit the system approaches approximate local thermal equilibrium, and the dynamics is governed by the evolution of conserved charges. The system produced in relativistic heavy-ion collisions is not truly macroscopic - the number of produced hadrons ranges from about a hundred to several tens of thousands - and the question of just how far the hydrodynamic paradigm can be pushed towards smaller systems, lower energies, and more rare probes is an active area of study [10,11].
Researchers are also investigating why the fluid dynamic description is so effective, even in systems that are very small and very rapidly evolving [12,13]. While no complete consensus has been achieved, a number of important factors have been identified. The first is the fact that the QGP behaves as a nearly perfect fluid [14,15]. In particular, the mean free path is short and transport coefficients, such as the shear viscosity to entropy density ratio η/s, are small. The second is rapid "hydrodynamization" [16,17]. There are indications, based on weak coupling kinetic models as well as strong coupling holographic approaches, that the fluid dynamic description is valid even in a regime where the system is still far from local thermal equilibrium.
The main observables that helped to establish the hydrodynamic paradigm are the spectra of identified particles, flow observables, and the spectra of certain rare probes, such as photons and dileptons [18,19]. In this report we will focus on fluctuation observables. There are several sources of fluctuations in relativistic heavy-ion collisions. The first is quantum fluctuations, in particular fluctuations in the initial multiplicity or energy deposition. The second is thermal fluctuations. In heavy-ion collisions the volume that is locally equilibrated is quite small, and fluctuations due to the finite size of the system are sizeable. These fluctuations are controlled by susceptibilities and related to the equation of state of the system. It is this connection that motivates a program of using fluctuation observables to investigate the phase structure of QCD. In particular, fluctuation observables may reflect the nature of the quasi-particles -quarks or hadrons -that carry the conserved charges, baryon number, electric charge, and flavor [20,21]. Furthermore, fluctuations probe the critical scaling of susceptibilities near a possible endpoint of a first order phase transition line in the QCD phase diagram [5].
A quantitative effort motivated by these ideas has to incorporate all sources of fluctuations in a heavy-ion collision. The central theme of this report is that such an analysis also requires a fully dynamical framework for the evolution of fluctuations. On a purely theoretical level, fluctuation-dissipation relations require that any dissipative theory of the evolution of a QGP has to include fluctuations, and any theory of fluctuations must incorporate dissipative effects. In thermal equilibrium, both effects balance, and a thermal spectrum of fluctuations emerges. Both dissipative effects and fluctuations become relatively more important in small systems.
At a practical level, the relative size of different sources of fluctuations depends on the evolution of the system, and a careful modeling of fluctuations in relativistic heavy-ion collisions requires a framework for the dynamical evolution:

• Initial state fluctuations: Fluctuations of the initial state are related to quantum mechanical fluctuations in the distribution of initial sources in the transverse plane ("wounded nucleons"), and to large multiplicity fluctuations in individual proton-proton collisions. The presence of large initial state fluctuations is experimentally well established, based on the observation of odd Fourier moments of azimuthal flow [22]. Initial fluctuations have to be propagated through the event using viscous fluid dynamics, combined with kinetic theory for the final stages. The rate at which the amplitude of a fluctuation is damped, as well as the rate at which fluctuations diffuse, depends on the value of transport coefficients and on the precise spatial structure of the initial state. In order to analyze data from the beam energy scan we also need to understand how initial state fluctuations depend on beam energy and rapidity.
• Thermal fluctuations: As discussed above, local thermal fluctuations arise from the finite size of the volume that thermodynamic variables are coarse grained over, and their magnitude is governed by equilibrium susceptibilities, which are derivatives of the equation of state. At RHIC fluctuations of the net-proton number and charge have been observed [23][24][25][26], and in principle they can be related to lattice QCD calculations of the susceptibilities [27,28] provided one corrects for baryon-number conservation [29][30][31] as well as for the fact that the experiment only measures protons [32,33]. In an expanding system the growth, decay, and diffusion of fluctuations depends on the history of the system, the length scale of the fluctuation and the transport coefficients. This is of particular importance for critical fluctuations, because dynamical scaling implies that long-wavelength fluctuations evolve very slowly near a critical point. Furthermore, different moments of fluctuation observables evolve at different rates [34,35], making a naive comparison between a dynamical transit of a critical point and an equilibrium estimate at the freeze-out surface impossible. In this report we will discuss several implementations of the dynamical theory of fluctuations, based either on stochastic equations, or on deterministic equations for higher order correlation functions. We will also discuss the problem of backreaction, the degree to which large fluctuations may affect the equation of state or the transport properties of the QGP.

• Hadronization: The formation of hadrons from a QGP is an intrinsically quantum mechanical process and involves fluctuations. This is evident from measurements of hadron production in pp collisions, which clearly show non-thermal tails in multiplicity and momentum distributions. This feature is also present in most models of hadronization, which involve stochastic processes such as string fragmentation or coalescence.
In fluid dynamics hadronization is typically implemented using the Cooper-Frye formula [36]. This particlization method is based on matching the conserved quantum numbers between fluid dynamical densities and kinetic distribution functions across the freeze-out surface. If the kinetic framework is based on particles, as in molecular dynamics, then this process also involves a stochastic element, because we have to sample particles from a distribution function [37]. Any dynamical scheme for the evolution of fluctuation observables has to include not only a hadronization mechanism, but also a kinetic scheme for propagating fluctuations in the hadronic phase. Given that hadronization is a stochastic process, there is a question as to what degree hadronization may wash out existing fluctuations, or create additional sources of fluctuations and correlations.

• Detection: Detectors have finite acceptance and imperfect detection efficiency. Finite acceptance, coupled with global charge conservation, leads to corrections to the measured fluctuation observables. Imperfect efficiency also leads to additional sources of fluctuations not present in the underlying event [38][39][40].
Quantifying the magnitude of these corrections not only requires a detailed understanding of the detector, but also detailed modeling of the evolution of initial state or dynamically created fluctuations in rapidity and transverse momentum. Finally, we note that experiments can only measure correlations in momentum space. Therefore, the mapping of correlations in coordinate space, which are what theoretical approaches usually discuss, to those in momentum space needs to be better understood.
This report provides a summary of the discussions and presentations at the Rapid Reaction Task Force (RRTF) "Dynamics of critical fluctuations: Theory -- phenomenology -- heavy-ion collisions" organized by the ExtreMe Matter Institute EMMI. It describes ideas in an active and ongoing research effort, and the discussions at the workshop represented many different points of view. As a result, not all statements in this report necessarily reflect the opinion of every single author.
The document is organized as follows: In Section 2 we discuss dynamical approaches to fluctuations in fluid dynamics. There are two main frameworks, based on either stochastic equations for fluid dynamical variables (stochastic fluid dynamics), or on deterministic equations for correlation functions (hydro-kinetics). We also discuss the problem of hadronization and the issue of backreaction of fluctuations on the fluid dynamical evolution. In Section 3 we discuss experimental challenges. In Section 4 we discuss intersections and experimental opportunities related to fluctuation probes in other systems, in particular ultra-cold atomic gases. Additional details regarding a number of dynamical approaches are provided in an Appendix.

Theory of dynamical fluctuations
The study of physical effects arising from the presence of fluid dynamical fluctuations in the context of relativistic heavy-ion collisions was for a long time restricted to idealised systems with a large number of symmetries [41,42]. However, in recent years significant theoretical and phenomenological effort has been made to bring the simulations of fluctuating fluid dynamics closer to realistic scenarios. To this end, two main avenues of simulating fluid dynamics with noise have emerged: stochastic fluid dynamics and hydro-kinetics, which are addressed in Sections 2.1 and 2.2. Stochastic fluid dynamics refers to numerical implementations of viscous relativistic fluid dynamics with a stochastic conservation law [46,47]. In this approach discretized noise is sampled event-by-event and the final observables are calculated after statistical averaging. The other approach, called hydro-kinetics, corresponds to a set of deterministic kinetic equations for the two-point functions of fluid dynamical fields, which are derived from the linearisation of stochastic fluid dynamics around a background flow. In this approach the statistical average over the noise is performed analytically in the derivation of the deterministic equations. We note that for the study of critical fluctuations, notably in the form of higher-order cumulants, the inclusion of non-linearities is essential. In such studies, fluctuating fluid dynamics needs to be supplemented by a model containing critical fluctuations, which may be done by using existing fluid dynamical fields, as in Sections 2.3 and 2.6, or by introducing new, non-fluid dynamical degrees of freedom (see Sections 2.4 and 2.5), depending on which quantity one considers to be the critical slow mode. Similarly, one can choose to solve stochastic or deterministic equations of motion.
Finally, the experimental observables are given in terms of correlations of produced particles, therefore the conversion from fluid fields to particle degrees of freedom, i.e. particlization, is a necessary step, which we discuss in Section 2.7. During the RRTF meeting the current status, advantages and challenges of these approaches were discussed.
Before discussing the details of possible implementations, it is important to recognize the multiple scales in the problem. In one limiting case, the largest wavelength perturbations will be dominated by the initial conditions. These perturbations of size l hydro ∼ R nucleus are not completely damped by dissipative processes and will survive until the end of the expansion. However, the evolution of such modes can be affected by the influence of smaller scale l noise fluctuations, e.g., by the renormalization of effective transport coefficients and the equation of state. In the other limit, the smaller scale structure of the initial conditions will be damped or mixed with the stochastic noise produced during the evolution. Although these two scales are often well separated at each point (τ, x),

l noise (τ, x) ≪ l hydro (τ, x), (3)

in an expanding system propagating perturbations can move from one domain to another, e.g., even small thermal fluctuations at initial times can be stretched to long wavelengths at later times. Similarly, the divergence of the correlation length close to the critical point will be capped by the dynamics of the system. Therefore both the spatial and the temporal evolution of fluctuations have to be understood to identify the relevant physical observables to be measured in experiments.
In the following subsections we discuss different implementations of dynamical fluctuations and the relevant scales in the problem.

Implementation of stochastic fluid dynamics
The modeling of viscous relativistic fluid dynamics for heavy-ion collisions has made significant conceptual and technological advances [13], going beyond the relativistic Navier-Stokes equations [48]. Numerical implementations of 3 + 1 dimensional fluid dynamics using relaxation-type equations exist and are publicly available (e.g. vHLLE [49], MUSIC [9], ECHO-QGP [50]). However, stochastic fluid dynamics, although rather advanced in non-relativistic settings [51][52][53], has been a challenge to implement for the modeling of heavy-ion collisions. The stochastic energy-momentum tensor includes a thermal noise term, whose correlator is given by [46,47,54,55]

⟨ξ μν (x) ξ αβ (x′)⟩ = 2T [ η (Δ μα Δ νβ + Δ μβ Δ να ) + (ζ − (2/3) η) Δ μν Δ αβ ] δ (4) (x − x′). (4)

Similarly, the stochastic current contains a noise term which satisfies

⟨I μ (x) I ν (x′)⟩ = 2T σ Δ μν δ (4) (x − x′), (5)

where Δ μν = g μν − u μ u ν . Here, η, ζ and σ denote the relevant transport coefficients: the shear viscosity, the bulk viscosity and the conductivity of the conserved charge, respectively. The local approximation of white noise, given by the Dirac δ-function, is an approximation to more complicated noise kernels that can be obtained from microscopic calculations [56,57] or causality arguments [58]. The discretization of this Dirac δ-function leads to stochastic terms whose variance diverges as δ (4) (x − x′) → 1/(Δt ΔV) with decreasing grid spacing. There are several issues connected to this:
1. Stochastic noise introduces a lattice spacing dependence,
2. Correction terms due to renormalization become large for small lattice spacings,
3. Large noise contributions can locally lead to negative densities,
4. Large gradients introduced by the uncorrelated noise are a problem for partial differential equation (PDE) solvers.
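The 1/(Δt ΔV) divergence of the sampled noise can be made explicit in a few lines of code. This is a minimal sketch, not taken from any of the implementations discussed below; the temperature and viscosity values are purely illustrative.

```python
import numpy as np

# Sample one component of the discretized thermal noise: the continuum
# correlator ~ 2*T*eta*delta^4(x-x') becomes a cell variance 2*T*eta/(dt*dV)
# on a grid, so the sampled noise grows without bound as the cells shrink.
T, eta = 0.3, 0.15          # temperature and shear viscosity (illustrative units)
rng = np.random.default_rng(1)

def sample_noise(dt, dx, n_cells=100_000):
    dV = dx**3
    var = 2.0 * T * eta / (dt * dV)   # discretized delta function -> 1/(dt*dV)
    return rng.normal(0.0, np.sqrt(var), n_cells)

coarse = sample_noise(dt=0.1, dx=1.0)
fine = sample_noise(dt=0.1, dx=0.5)
# Halving the grid spacing multiplies the variance by 2^3 = 8.
print(fine.var() / coarse.var())   # close to 8
```

This lattice-spacing dependence is exactly issue 1 above; issues 3 and 4 follow because the sampled values eventually exceed the background densities and gradients.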
The currently available implementations of fluctuating fluid dynamics have shown that for more than one spatial dimension one is forced to limit the resolution scale of the stochastic terms, i.e. to only attempt to simulate noise down to a particular filter length scale, which is larger than the numerical grid spacing applied for the discretization of the deterministic fluid dynamical fields,

l grid < l filter ≲ l noise ≪ l hydro . (6)
This can be justified physically: at the shortest scales fluctuations decay almost instantaneously to equilibrium, and from the point of view of measurable observables there is no need to simulate them dynamically. A similar issue has been observed in nonequilibrium chiral fluid dynamics, discussed in Section 2.4, where the noise field was effectively coarse-grained over the spatial extent of the equilibrium correlation length. Here, we discuss the various possibilities applied in stochastic hydrodynamical approaches.

Murase et al.:
In [59,60] the noise term is smeared by a Gaussian distribution in the rapidity and transverse directions. The widths of these Gaussians are chosen in the range σ η = σ ⊥ /fm = 1 − 1.5. The dependence of the results on this choice is not discussed. A large enhancement of the flow coefficients v n is observed when noise is included.

Nahrgang et al.:
In [61,62] the noise term is either propagated on a second grid with larger spacing, Δx = 1 fm, than typically used for the deterministic hydrodynamical fields, or coarse-grained over the same scale. Both the energy density and the variance of the energy density fluctuations show a strong linear dependence on 1/ΔV. It is therefore mandatory to introduce correction terms at the level of the equation of state and the transport coefficients.

Singh et al.:
In [63] a high-mode filter is applied. Locally, a cut-off p cut = 0.6/τ π is determined in each fluid cell. Then the noise field is Fourier transformed and all modes with k > p cut are set to zero. After an inverse Fourier transform the noise field is smoothed. It is reported that energy conservation is verified and that the v n (2) are independent of p cut within statistical errors. In addition, it is shown that charged hadron multiplicities are little affected by the inclusion of fluctuations at this cut-off scale. Note, however, that the coarse-graining scale introduced in this way is quite large, > 1 fm in the transverse plane.
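The filtering step can be sketched as follows. This is a minimal two-dimensional illustration; the grid size, box length and cut-off value are placeholders, not the tuned parameters of [63].

```python
import numpy as np

# Sketch of a high-mode filter: Fourier transform the sampled white-noise
# field, zero all modes with |k| above a cut-off, and transform back.
rng = np.random.default_rng(0)
N, L = 64, 16.0                      # grid points and box size in fm (illustrative)
dx = L / N
xi = rng.normal(0.0, 1.0, (N, N))    # white noise on the grid

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k_mag = np.hypot(kx, ky)

k_cut = 2.0                           # fm^-1, illustrative cut-off
xi_k = np.fft.fft2(xi)
xi_k[k_mag > k_cut] = 0.0
xi_smooth = np.real(np.fft.ifft2(xi_k))

# The filter removes short-wavelength power but leaves the zero mode intact,
# i.e. the net charge/energy deposited by the noise field is unchanged.
print(np.isclose(xi_smooth.mean(), xi.mean()))   # True
print(xi_smooth.var() < xi.var())                # True
```

Because the zero mode is untouched, the smoothing is compatible with exact conservation, consistent with the energy-conservation check reported above.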
The renormalization of the equation of state and the transport coefficients in stochastic fluid dynamics codes in the presence of fluctuations is a challenging task. The non-linearities introduced by the full fluid dynamical equations lead to corrections [64,65], as one can observe, for example, in the retarded shear-shear correlator, schematically

G R xy,xy (ω, k = 0) = [p + Δp(Λ)] − iω [η + Δη(Λ)] + . . . , with Δp(Λ) ∝ T Λ³ , Δη(Λ) ∝ T Λ . (7)

One can identify the first term in Eq. (7) as a cutoff-dependent contribution to the equilibrium pressure, while the second term is a cutoff-dependent contribution to the shear viscosity η. How this renormalization can be performed at the level of the numerical implementations is an ongoing effort.

In summary, the advantage of implementing full 3 + 1 dimensional event-by-event stochastic fluid dynamics is clear: it allows us to evaluate all the relevant observables, like the n-point correlation functions, within the existing frameworks for simulations of heavy-ion collisions. It is therefore straightforward to include the kinematic cuts applied in the experiment, as well as to take initial and final state fluctuations into account. In addition, stochastic fluid dynamics can easily incorporate the study of, e.g., heavy and hard probes in order to investigate the impact of fluctuations on other observables in heavy-ion collisions beyond criticality.
However, the numerical challenges of implementing stochastic noise, validating the effective equation of state, and statistically averaging over a sufficient number of events make this a significant computational task requiring large resources.

Implementation of deterministic hydro-kinetics
As we have just discussed, solving stochastic fluid dynamics brings multiple new challenges compared to ordinary fluid dynamics. The Dirac δ-function correlation of the noise in Eqs. (4) and (5) has to be regularized in any numerical implementation, and the stochastic terms make it difficult to apply standard PDE solvers. More subtly, the non-linearities of the fluid dynamical equations lead to noise-induced corrections to the effective equation of state and transport coefficients, with divergent terms depending on the noise regularization cut-off. Therefore, to simulate cut-off independent physics, the properties of fluid dynamical models have to be chosen in a non-trivial, cut-off dependent way. Reproducing and understanding these subtle effects on a discrete grid is a considerable challenge, and an alternative way of solving the stochastic fluid dynamical equations, known as the hydro-kinetic approach, was developed recently [66,67], although similar ideas had been discussed earlier in the non-relativistic setting [68,69]. The advantage of this approach is that the divergent cut-off dependent terms are absorbed in the renormalization of the background fields, and the evolution equations for the two-point correlation functions can be formulated as deterministic kinetic equations. In applications to heavy-ion collisions this approach was studied in the case of Bjorken boost-invariant expansion [66,70,71] and recently generalized to arbitrary backgrounds in Ref. [67].
Hydro-kinetics depends on the separation of scales between long-wavelength fluid dynamical modes and short-wavelength fluctuations, which stay in equilibrium despite the expansion (see discussions in [66,67] and also Appendix A.1). Denoting by l noise the characteristic length scale marking the boundary between the expansion and dissipation dominated fluctuations, we have

l micro ≪ l noise ≪ l hydro , (8)

where l micro is the microscopic scale, e.g. the mean free path or inverse temperature 1/T. The length scale at which the diffusive processes begin to overcome the macroscopic gradients driving the system out of equilibrium is given by

l noise ∼ (γ l hydro /c s ) 1/2 , (9)

where γ is the corresponding diffusion constant, e.g. γ η ∼ η/(e + p) for shear dissipation. Then the equal time correlation function of fluid dynamical fields φ A (t, x),

G AB (t, x, y) = ⟨δφ A (t, x) δφ B (t, y)⟩ , (10)

will satisfy the equilibrium fluctuation-dissipation relation at length scales |x − y| ≪ l noise , but will be driven away from equilibrium by long wavelength gradients over distances |x − y| ≳ l noise . The deviation of G AB (t, x, y) from equilibrium gives the non-trivial corrections to the constitutive equations, which can be estimated to be of characteristic size ∼ (c s /(γ l hydro )) 3/2 and are known in the literature as "long time tails" of the fluid dynamical response [41,64,68,69]. It is important to note that such corrections are non-analytic, indicating their non-local nature. In addition, in the fluid dynamical gradient expansion of the constitutive equations they formally come before the second order gradient terms, which are often included in relativistic fluid dynamical codes for stability and causality [72].
It is convenient to study the Wigner transform of the correlation function,

W AB (t, x, q) = ∫ d³y e −iq·y G AB (t, x + y/2, x − y/2) , (11)

as the separation of scales allows us to write hydro-kinetic equations local in x for the relaxation of W AB (t, x, q) to equilibrium. For the non-trivial relativistic case the notion of equal time correlation functions has to be revised, which was recently accomplished in Ref. [67]. Linearizing the equations of motion, Eq. (1), one derives the evolution equations for the perturbation fields φ A = (c s δe, wδu μ ), which in turn can be used to calculate the time dependence of the two-point correlation functions. After lengthy calculations [66,67] one arrives at hydro-kinetic equations for the two propagating sound modes (±) and three diffusive modes for a fluid with no conserved charges. For example, for a sound mode one has, schematically,

[ ∂ t + ∂ q ω · ∂ x − ∂ x ω · ∂ q ] W ± = −γ L q² (W ± − W ± eq ) + K , (12)

where the left hand side is equivalent to the Liouville operator for a phonon with the space-time dependent dispersion relation ω(t, x, q) = c s (t, x)|q|. On the right hand side one gets the relaxation term to equilibrium (with γ L the sound damping constant) and the forcing term K proportional to fluid gradients. Once W AB (t, x, q) is determined, the contribution to the energy momentum tensor at a point is given by the momentum integral of the Wigner distribution. The analysis of such contributions reveals the divergent universal corrections to the background equation of state and transport coefficients, which can be absorbed or renormalized. The remaining finite term (the long-time tails) is particular to the given background expansion and has to be evolved dynamically.

The outstanding challenge of deterministic hydro-kinetics is the application to a realistic QGP expansion in nuclear collisions. Formally the hydro-kinetic equation, Eq. (12), requires solving 3+3+1 dimensional equations, i.e. 3-dimensional momentum space equations at each space-time point, to find the equal-time correlation functions of the fluid dynamical fields.
This is numerically demanding in general, but the hydro-kinetic equations are linear and smooth, so one does not need a fine momentum-space discretization to solve them accurately. In addition, hydro-kinetic equations could be solved using fictitious test particles which move on top of a fluid dynamical background obtained with traditional approaches. Note that the deterministic fluid dynamical simulations do not need to be repeated to obtain the statistical averages over thermal fluctuations. However, the currently derived hydro-kinetic equations are limited to two-point functions. Interesting higher-order correlation functions therefore require a generalization of this scheme, which has not yet been accomplished even for simple backgrounds.
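The competition between relaxation and expansion encoded in equations like Eq. (12) can be illustrated with a toy single-mode version. All coefficients below are illustrative placeholders; this is not a solution of the actual hydro-kinetic equations, only a caricature of their structure.

```python
import numpy as np

# Toy deterministic "hydro-kinetic" equation for one Fourier mode of the
# two-point function: a relaxation term -gamma*q^2*(W - W_eq) competes with
# a Bjorken-like expansion term ~ -W/tau standing in for the forcing K.
def evolve_mode(q, gamma=0.2, tau0=1.0, tau1=10.0, dtau=1e-3):
    W, W_eq = 1.0, 1.0               # start in equilibrium (normalized units)
    tau = tau0
    while tau < tau1:
        relax = -gamma * q**2 * (W - W_eq)   # dissipation pulls W back to W_eq
        forcing = -W / tau                    # expansion drives W away
        W += dtau * (relax + forcing)
        tau += dtau
    return W

# Short-wavelength (large q) modes stay near equilibrium; long-wavelength
# modes fall out of it -- the boundary is the l_noise scale of Eq. (9).
print(evolve_mode(q=5.0))   # stays close to 1
print(evolve_mode(q=0.2))   # falls well below 1
```

The same dichotomy is what allows the hydro-kinetic framework to treat only the modes near the l noise boundary dynamically, while shorter modes are absorbed into renormalized background quantities.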

Implementation of stochastic diffusion
Numerical simulations of the dynamics of fluctuations in the conserved net-baryon number N B , both on the crossover and first-order phase transition sides near the conjectured QCD critical point, have recently been performed in one spatial dimension without [73][74][75] and with non-linearities [76,77]. Considering the net-baryon density n B as the slow critical mode [78][79][80], the dynamics of critical fluctuations may be studied by means of a stochastic diffusion equation of the form

∂ τ n B (x, τ) = ∂ x [ Γ ∂ x ( δF[n B ]/δn B ) ] + ∂ x J(x, τ) . (13)

This equation describes the non-relativistic evolution of the current J μ B in Eqs. (2) with (5), which is decoupled from the evolution of energy and momentum densities, under the assumption of a spatially homogeneous temperature and a space-time independent fluid velocity field. The fluctuation dynamics is governed by the minimization of the free energy F of the system. The particular form of the free energy studied in the numerical simulations, together with a discussion of the parameters and how criticality is embedded, can be found in Appendix A.2. For a stochastic current J of the form

J = √(2T Γ) ζ x (14)

and mobility coefficient Γ = D n c /T, Eq. (13) becomes

∂ τ n B = D ∂ x [ (n c /T) ∂ x ( δF/δn B ) ] + √(2D n c ) ∂ x ζ x . (15)

Here, D is the diffusion coefficient and ζ x is the x-component of the white noise, with zero mean and ⟨ζ x (x, τ) ζ x (x′, τ′)⟩ = δ(x − x′) δ(τ − τ′). This ensures that the fluctuation-dissipation balance is guaranteed. The stochastic diffusion equation is solved numerically with a semi-implicit predictor-corrector scheme in which the non-linear terms in n B are treated explicitly. Equation (15) is valid for the propagation of fluctuations in one spatial dimension, where the physics in the transverse area A has been scaled out. A static box of finite length L is considered, with a resolution Δx = L/N x for N x lattice sites. Charge conservation is exactly realized by imposing periodic boundary conditions. The numerical framework has been tested extensively in both limits of a Gaussian (K = λ i = 0) and a Gauss+surface (λ i = 0, K ≠ 0) model, as discussed in [74] and [75,77], respectively.
For these models, analytic results for both the continuum and the discretized space-time are available against which the numerics can be confronted. One notes that, for a meaningful comparison, charge conservation in a finite-size system must be included in the analytic results. It is found that the numerics can accurately reproduce the analytic expectations for the static and dynamic structure factors, the correlation function and the local variance for a given Δx. This implies that the lattice spacing dependence of physical observables is well under control. Moreover, the continuum expectations are approached as Δx → 0, which highlights that there is neither a need for renormalization nor for a coarse-graining or filtering of the noise, and the algorithm can well handle white noise on a finite grid in Δx and Δt.
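The structure of such a solver can be sketched in the Gaussian limit (K = λ i = 0) with an explicit, conservative update. The parameters are illustrative, and the explicit Euler-Maruyama step below replaces the semi-implicit predictor-corrector scheme of the actual code; the point is that writing the noise as a current on the cell faces conserves the total charge to machine precision.

```python
import numpy as np

# 1D stochastic diffusion in the Gaussian limit: the flux on each link is
# J = -D*dn/dx + sqrt(2*D*n_c)*zeta, and the update dn/dt = -dJ/dx with
# periodic boundaries conserves sum(n) exactly (finite-volume form).
rng = np.random.default_rng(2)
Nx, L = 128, 20.0                      # lattice sites and box length in fm
dx, dt = L / Nx, 1e-4
D, n_c = 1.0, 1.0 / 3.0                # diffusion constant, background density

n = np.full(Nx, n_c)                    # start from the homogeneous background
for _ in range(5000):
    zeta = rng.normal(0.0, 1.0 / np.sqrt(dt * dx), Nx)     # discretized white noise
    J = -D * (np.roll(n, -1) - n) / dx + np.sqrt(2.0 * D * n_c) * zeta
    n -= dt * (J - np.roll(J, 1)) / dx   # conservative divergence of the flux

print(abs(n.mean() - n_c) < 1e-9)       # True: charge exactly conserved
print(n.var())                           # grows toward the grid-level equilibrium variance ~ n_c/dx
```

The fluctuation-dissipation balance fixes the stationary cell variance at ~ n c /Δx, which is the analytic expectation the full framework is validated against.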
In Fig. 1 some highlight results of this framework are shown. The employed parameters read n c = 1/(3 fm³), T c = 0.15 GeV, ξ 0 = 0.479 fm, K = 1, λ 3 = 1, λ 4 = 10 and λ 6 = 3, see Appendix A.2. In the left panel of Fig. 1 the relaxation time τ * (circles) of the critical mode with k * = 1/ξ for a given fixed T is contrasted with a scaling function proportional to ξ z . It is found that the numerics is best described with z ≈ 4 (filled band), which shows that the expected dynamic critical scaling of model B is realized numerically. For this plot the correlation length ξ is deduced from the behavior of the equal-time correlation function ⟨n B (r) n B (0)⟩. Moreover, the relaxation time τ k is obtained from the exponential decay ∝ e −t/τ k of the dynamic structure factor ⟨n B (k, t 0 + t) n B (−k, t 0 )⟩ with time. For fixed wave-number, τ k is larger for temperatures near T c than further away, and it decreases with increasing k for fixed T. In the middle and right panels of Fig. 1 the volume-integrated skewness (Sσ) V and kurtosis (κσ²) V are shown. These are obtained for a subregion of observation V ∼ 2 fm, smaller than L, for a dynamically evolving system (full circles) and compared to the static equilibrium limit (open circles). The evolution takes place in the form of a time dependence of the background temperature via T(τ) = T 0 (τ 0 /τ), starting in equilibrium at τ 0 = 1 fm with T 0 = 0.5 GeV and D(τ 0 ) = 1 fm, which then decreases as D(τ) = D(τ 0 ) T(τ)/T 0 . The non-linear terms in Eq. (15) are essential for skewness and kurtosis to develop from purely white noise. One observes that the non-Gaussian fluctuations behave non-monotonically, and that in particular (κσ²) V increases significantly near T c compared to its value at T 0 or τ 0 . Nonetheless, even in equilibrium (open circles) finite-size effects can modify the infinite-volume expectations [81] of the scaling behavior with ξ dramatically [82].
This can, in particular, be seen in the structure of (Sσ) V , which is a consequence of the competition of different scalings, see [76]. The evolution of T (full circles) results in dynamical, non-equilibrium effects, notably a reduction of the fluctuation signals. Moreover, as a consequence of the finite relaxation times, the observables in the dynamical setting lag behind their equilibrium values. Both effects, which can also be seen in the variance [74][75][76][77], become more pronounced with decreasing D(τ 0 ).
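The construction of the volume-integrated cumulant ratios from event-by-event configurations can be sketched as follows. Here synthetic Gaussian "events" stand in for the actual simulation output, so the non-Gaussian cumulants vanish by construction; the lattice and subvolume sizes are illustrative.

```python
import numpy as np

# Volume-integrated skewness S*sigma and kurtosis kappa*sigma^2: integrate
# the density fluctuation over a subregion V < L per event, then build
# cumulants of that conserved charge across events.
rng = np.random.default_rng(3)
Nx, dx, sub = 128, 0.15, 32             # lattice sites, spacing, sites in the window
events = rng.normal(0.0, 1.0, (50_000, Nx))   # Gaussian stand-in configurations
Q = events[:, :sub].sum(axis=1) * dx     # charge in the observation window per event

dQ = Q - Q.mean()
c2 = np.mean(dQ**2)
c3 = np.mean(dQ**3)
c4 = np.mean(dQ**4) - 3.0 * c2**2
S_sigma = c3 / c2
kappa_sigma2 = c4 / c2

print(abs(S_sigma) < 0.2, abs(kappa_sigma2) < 0.2)   # True True for Gaussian input
```

With the output of the non-linear simulation in place of the Gaussian events, the same estimators produce the (Sσ) V and (κσ²) V curves discussed above.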
For a realistic modeling of the physics in a heavy-ion collision the current framework still needs to be extended. In particular, a realistic spatio-temporal evolution of the fireball must be embedded. A first step in this direction is to consider a system undergoing a Bjorken-type expansion; corresponding works are currently underway. With this, the coupling of the dynamics of critical fluctuations to the evolution of other fluctuating fluid dynamical fields becomes feasible. This will allow one to quantify, for example, the impact of the critical fluctuations on the medium and vice versa, or to study the role of advection. Eventually, the framework must be extended to three spatial dimensions. Only then may one study to what extent the dynamics of the fluctuations in the longitudinal direction is decoupled from the dynamics in the transverse direction, as has been assumed so far. This will necessitate, however, a careful analysis and understanding of renormalization effects. Nonetheless, the coupling to the evolution of the transverse velocity field will allow one for the first time to study numerically the physics of model H as the assumed dynamical universality class of QCD. Further future developments range from including realistic fluctuating initial conditions, to studying the interplay and competition of different fluctuation sources, to embedding the conversion to measurable particles at chemical freeze-out with explicit charge conservation on an event-by-event basis, see Section 2.7.

Implementation of nonequilibrium chiral fluid dynamics (Nχ FD)
In order to study the dynamics of critical fluctuations properly we need to include their evolution equations coupled to a fluid dynamical evolution. Within the framework of nonequilibrium chiral fluid dynamics, the chiral condensate σ = ⟨q̄q⟩, which is considered as the critical mode, is propagated via a relaxation equation of the form

∂_μ∂^μ σ + η ∂_t σ + δV_eff/δσ = ξ .   (16)

The damping coefficient η, the noise ξ, and the potential terms can be obtained from an effective model of QCD, like the quark-meson (QM) or Polyakov-quark-meson (PQM) model. In the works [83][84][85][86][87][88][89][90][91] the mean-field approximation of the (P)QM model was applied. In a recent QCD assisted transport model [92] the equilibrium input is provided by FRG calculations.
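As an illustration of such a Langevin evolution, the following Python sketch advances a one-dimensional σ field with damping and thermal noise. All couplings (m2, lam, the damping eta, and the grid parameters) are illustrative placeholders, not (P)QM values, and the noise variance is fixed by a naive fluctuation-dissipation relation:

```python
import numpy as np

def evolve_sigma(sigma, sigma_dot, T, eta=2.0, m2=1.0, lam=1.0,
                 dx=0.2, dt=0.01, rng=None):
    """One explicit step of a 1d Langevin equation of the form
    d2(sigma)/dt2 - laplacian(sigma) + eta*d(sigma)/dt + dV/dsigma = xi,
    with a quartic potential V = m2/2 sigma^2 + lam/4 sigma^4.
    All couplings are illustrative placeholders, not (P)QM values."""
    if rng is None:
        rng = np.random.default_rng()
    # periodic 1d Laplacian
    lap = (np.roll(sigma, 1) + np.roll(sigma, -1) - 2.0 * sigma) / dx**2
    dV = m2 * sigma + lam * sigma**3
    # white noise with variance fixed by fluctuation-dissipation,
    # <xi xi'> = 2 eta T delta(x-x') delta(t-t'), discretized per cell
    xi = rng.normal(0.0, np.sqrt(2.0 * eta * T / (dx * dt)),
                    size=sigma.shape)
    sigma_dot = sigma_dot + dt * (lap - eta * sigma_dot - dV + xi)
    return sigma + dt * sigma_dot, sigma_dot
```

Iterating this update drives the field variance toward its thermal value; in NχFD the heat-bath parameters would themselves evolve with the fluid rather than being held fixed as here.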
It is assumed that the fluid, consisting of the fermionic degrees of freedom and the fast modes of the sigma field, constitutes the heat bath in which the chiral order parameter σ evolves. Due to the mutual coupling the fluid equilibrates locally under the condition of the actual value of σ. The fluid dynamical pressure is therefore not determined at the mean-field value of σ but includes the backreaction of σ on the fluid; it depends explicitly on the fluctuations of the order parameter (Eq. (17)). Contrary to standard Langevin simulations the heat bath is not static, but evolves according to the equations of fluid dynamics and describes the bulk evolution of a heavy-ion collision. Therefore, the total energy and momentum of the coupled system of the fluid and the order parameter need to be conserved. This is achieved by adding a source term to the standard fluid dynamical equations,

∂_μ T^{μν}_fluid = S^ν = − ∂_μ T^{μν}_σ .   (18)

The stochastic nature of the source term on the right-hand side of Eq. (18) leads to a stochastic evolution of the fluid dynamical fields. Eqs. (16)-(19) are coupled and, as a result of Eq. (17), the evolution of the fluid and the order parameter feed back on each other. More details on NχFD can be found in Appendix A.3. It has been applied to calculating various observables in heavy-ion collisions, notably the critical enhancement of net-proton fluctuations [89]. In order to avoid an unphysical dependence on the lattice spacing, we model a spatial correlation of the noise field over a correlation length of 1/m_σ, where m_σ is the local equilibrium screening mass. This procedure regularizes the otherwise white noise correlator, as discussed previously. The full solution of Eqs. (16), (18), (19) is obtained in 3+1 dimensions. It can be expected that the input equation of state is modified by the cutoff (either the lattice spacing or the spatial correlation of the noise field).
This could explain the quantitative differences between the susceptibilities obtained in static box simulations and the thermodynamic expectations, see Fig. 2. One should therefore check the equation of state in these box simulations to see if modifications of the original P_0(T, μ_B) can be observed. This is rather involved, as many calculations need to be performed at various temperatures and baryo-chemical potentials all over the phase diagram. It should be easier to derive an analytic formula for the correction (see Section A.1) and fix the coefficients with a few test calculations. The boundary conditions must be fixed coherently and the finite piece of the correction needs to be treated separately. The corresponding calculations and tests are currently ongoing. To perform calculations in the entire phase diagram it is important to have a reliable equation of state, which correctly describes the hadronic phase at high baryon densities but also retains the non-equilibrium fluctuations of the order parameter. First calculations have been performed for the equation of state of a hadronic SU(3) non-linear sigma model with quarks [94,95].
In QCD-assisted transport [92] an equation of motion for the chiral condensate similar to Eq. (16) is solved. It contains a kinetic term related to the real part of the two-point function Γ^(2)_σσ of the effective action, a diffusion term sensitive to the imaginary part of Γ^(2)_σσ, and an effective potential, which can be obtained from FRG calculations. This description provides a systematic approach to the dynamics of the chiral order parameter, which is valid beyond mean field and beyond the scaling region around the critical point, which might be very small. A detailed description can be found in Appendix A.4.
As an example result of QCD-assisted transport we show in Fig. 3 (left panel) the time evolution of the kurtosis scaled by its late-time equilibrium limit for a quench from high temperatures to two different points in the QCD phase diagram. Far away from the critical endpoint the scaled kurtosis exhibits a rather quick equilibration, while close to it the corresponding time scale is clearly increased. For the quench through the phase boundary one furthermore observes that the equilibrium value is approached from above, as the equilibrium kurtosis is larger near the phase boundary than in the low-temperature phase.
Based on these results for the scaled kurtosis in the quench scenario, one may estimate the equilibration time of the critical fluctuations within the QCD phase diagram. This is shown in Fig. 3 (right panel). One can clearly identify both the phase boundary and the region near the critical endpoint and observe the expected increase of the equilibration time in that region. Nevertheless, this increase is found to be rather moderate suggesting that phenomena associated with critical slowing down are only moderately pronounced. This hints towards equilibrium dominated measurements and, thus, to the feasibility of studying the QCD phase diagram by means of heavy-ion collisions. For quantitative statements, however, the dynamical modeling of the fluctuations remains necessary.
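To make the notion of an equilibration time concrete, a simple operational definition is the first time after which the scaled observable stays within 1/e of its equilibrium value. This is a hypothetical criterion chosen for illustration; the criterion used in the actual quench analysis may differ:

```python
import numpy as np

def equilibration_time(times, kurtosis, kurt_eq, threshold=1.0 / np.e):
    """First time after which the scaled kurtosis stays within a relative
    deviation `threshold` (default 1/e) of its equilibrium value kurt_eq.
    A simple operational definition used here for illustration only."""
    dev = np.abs(np.asarray(kurtosis, dtype=float) - kurt_eq) / abs(kurt_eq)
    inside = dev <= threshold
    for i, t in enumerate(np.asarray(times, dtype=float)):
        if inside[i:].all():  # stays inside the band from here on
            return t
    return np.inf
```

Applied to a toy exponential relaxation K(t) = K_eq (1 − e^{−t/τ}), this definition returns approximately τ, as expected for a single relaxation time.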

Implementation of Hydro+
In the spirit of hydro-kinetics, the recently developed Hydro+ formalism allows for a consistent, deterministic description of both the dynamics of a fluid, described by the standard fluid dynamical variables ε (the energy density), u^μ (the fluid four-velocity) and n_B (the baryon number density), and the out-of-equilibrium critical fluctuations induced by a critical point, including the feedback between the fluid dynamical variables and critical fluctuations. The formulation of Hydro+ can be found in Ref. [96] and its numerical implementation for a heavy-ion motivated model can be found in Ref. [97]. Many details omitted in this section can be found in these two references.
In Hydro+, the critical fluctuations are encoded in the Wigner transform of the equal-time two-point function of the fluctuation of an order parameter field M(t, x):

φ_Q(t, x) ≡ ∫ d³y ⟨δM(t, x + y/2) δM(t, x − y/2)⟩ e^{−iQ·y} ,   (20)

where δM(t, x) ≡ M(t, x) − ⟨M(t, x)⟩, with ⟨. . .⟩ denoting the ensemble average. If we consider the dynamics of a cooling droplet of QGP with μ_B = 0, namely undoped QGP with zero net baryon number, allowing us to drop the baryon density n_B, and if we set the bulk viscosity to zero (although the relaxation of φ_Q still leads to an effective bulk viscosity), the Hydro+ equations take the form of Eqs. (21), where we have followed the Müller-Israel-Stewart formalism and have introduced terms involving the shear tensor π^μν to maintain causality of our equations. We have defined D = u^μ ∂_μ and φ̄_Q as the equilibrium value of φ_Q, with all other quantities defined in Ref. [97]. These equations are very similar to standard fluid dynamical equations [98], except that now φ_Q(t, x) is treated as a dynamical variable in Eq. (21d) and obeys a relaxation equation, and standard fluid dynamical variables like the pressure p, shear viscosity η, and bulk viscosity ζ have been replaced by generalized fluid dynamical variables p(+), η(+), and ζ(+). These generalized fluid dynamical variables depend on φ_Q and differ from their standard counterparts when the φ_Q modes are out of equilibrium.

[Fig. 4 caption: Numerical solution of Eqs. (21) from Ref. [97]. The simulation assumed azimuthal symmetry perpendicular to the collision (ẑ) axis and boost invariance along the collision axis, allowing all quantities to be plotted as functions of r and τ. In all plots, solid and dashed curves show φ(Q) and its equilibrium value φ̄(Q), respectively, and the red, blue and green curves show results at τ = 2, 3.5, and 5.5 fm, respectively. The fluid dynamical simulation started at τ = 1 fm with the initial condition that φ(Q) started in equilibrium, φ(Q, τ = 1 fm) = φ̄(Q). These results were obtained by solving Eqs. (21) with two different values of Γ_0, an unknown parameter determined by microscopic physics that controls the rate at which φ(Q) relaxes to its equilibrium value, which was set either to Γ_0 = 1 fm⁻¹ (left two plots) or a slower relaxation rate Γ_0 = 0.25 fm⁻¹ (right plots). These plots demonstrate key features of the Hydro+ equations. φ(Q) lags behind its equilibrium value φ̄(Q) because its relaxation rate Γ_Q is finite. Due to critical slowing down, with all else fixed, larger wavelength modes relax more slowly than smaller wavelength modes do. Additionally, due to the bulk radial outflow of the fluid, φ(Q) is advectively carried radially outward. For sufficiently small relaxation rates, this advection leads to memory effects, demonstrated in the right two plots by the radially outflowing peak. The peak originates in the initial condition for φ(Q) chosen in this simulation. Figures taken from [97].]

For example, the difference between the generalized entropy, which determines p(+), and the entropy is given by

s(+) − s = (1/2) ∫ d³Q/(2π)³ [ log(φ_Q/φ̄_Q) − φ_Q/φ̄_Q + 1 ] ,

which vanishes when φ_Q = φ̄_Q. It is through these generalized variables that the evolution of the standard fluid dynamical variables experiences feedback from the out-of-equilibrium dynamics of critical fluctuations, an effect we call "backreaction", and it is through the explicit factor of u^μ and the implicit dependence of φ̄_Q on ε in Eq. (21d) that the evolution of the critical fluctuations depends on the bulk evolution of the fluid. In Fig. 4 we show a numerical solution of Eqs. (21) for a highly simplified, though heavy-ion collision inspired, model which includes a critical point [97]. These plots demonstrate key non-equilibrium effects coming from the Hydro+ equations, namely the finite relaxation rate of the critical fluctuations, which causes φ_Q to lag behind its equilibrium value, the advection of the fluctuations, which causes φ_Q to flow outward as the QGP droplet expands, and the existence of memory effects, resulting in the radially outflowing peaks in the right plots of the figure.
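The entropy correction quoted above can be checked numerically: its integrand, log(φ_Q/φ̄_Q) − φ_Q/φ̄_Q + 1, is non-positive for all positive φ_Q and vanishes only in equilibrium, so out-of-equilibrium modes always reduce the generalized entropy. A minimal sketch:

```python
import numpy as np

def delta_s_integrand(phi, phi_eq):
    """Integrand of the generalized-entropy correction,
    log(phi/phi_eq) - phi/phi_eq + 1.  It is <= 0 for all phi > 0 and
    vanishes only at phi == phi_eq, so out-of-equilibrium phi_Q modes
    always reduce the generalized entropy s(+) relative to s."""
    x = np.asarray(phi, dtype=float) / phi_eq
    return np.log(x) - x + 1.0
```

Both over- and under-populated modes (φ_Q above or below φ̄_Q) give a negative contribution, consistent with the equilibrium value maximizing the generalized entropy.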
One fortunate practical aspect of Eqs. (21) is that the addition of Eq. (21d) and the (+) substitutions do not add much computational complexity to the fluid dynamical simulation. Naively, Eq. (21d) constitutes the addition of infinitely many more variables to keep track of, one for each Q. To solve these equations on a computer, one must discretize momentum space and keep track of only a finite number of modes, say N modes. The continuous Q variable is then replaced by a finite list of momenta, Q_i. Since the time derivative of φ_{Q_i} depends on φ_{Q_j} only if i = j, each of these N modes can be evolved forward in time independently of the others at each time step. If instead the derivative of φ_{Q_i} depended on φ_{Q_j} for i ≠ j, then each time step of an Euler method, following the method described in [98], would involve a non-diagonal matrix A_{ij} and require the inversion of an N × N matrix. The fact that A_{ij} is diagonal seems to be a result of the fact that Hydro+ is currently only formulated up to two-point functions [96]. We then ask: to what extent will this simplification remain true when higher-point functions are incorporated into Hydro+, and is there an argument why the off-diagonal terms in A_{ij} are negligible?
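The decoupled mode evolution can be sketched in a few lines. The rate below, Γ(Q) = γ0 Q², is a placeholder standing in for the full critical relaxation rate; the point is only that the update for mode i touches no other mode, so the matrix A_{ij} is diagonal and no inversion arises:

```python
import numpy as np

def step_phi_modes(phi, phi_eq, Q, dt, gamma0=1.0):
    """One explicit Euler step of D phi_Q = -Gamma(Q) * (phi_Q - phi_eq(Q))
    for all discretized modes at once.  Gamma(Q) = gamma0 * Q**2 is a
    placeholder rate.  The update for mode i involves only phi[i]: the
    matrix A_ij is diagonal, so no N x N inversion is needed."""
    gamma = gamma0 * np.asarray(Q, dtype=float) ** 2
    return phi - dt * gamma * (phi - phi_eq)
```

Evolving a few modes from a common out-of-equilibrium start shows the expected ordering: the smallest-Q (longest wavelength) mode retains the largest deviation from equilibrium, the qualitative signature of critical slowing down.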
Additionally, when simulating Eqs. (21) in a heavy-ion inspired, though very simplified and phenomenologically inapplicable, model, the authors of Ref. [97] found that the deviations caused by the feedback of the out-of-equilibrium φ_Q modes on ε and u^μ, which are due to the (+) substitutions in Eqs. (21), were at the percent level or below. Those authors argued that while the critical fluctuations from a single order parameter degree of freedom are enhanced near a critical point, the thermodynamics of the bulk of the QGP comes from a strongly coupled liquid built from 16 bosonic degrees of freedom and 36 fermionic degrees of freedom. Therefore, unless the QGP passes exactly through the critical point, the thermodynamics is dominated by the more numerous non-critical degrees of freedom, and the effects of the out-of-equilibrium φ_Q modes on the bulk evolution of the fluid are small. If it remains true that the effects of backreaction are small for more realistic heavy-ion simulations, then the implementation of Hydro+ in these simulations will be greatly simplified. One could first perform a standard fluid dynamical simulation, and then, with its outputs, solve Eq. (21d) independently to determine the evolution of the φ_Q modes. Our next question is therefore: are the effects of backreaction negligible for phenomenologically relevant heavy-ion fluid dynamical simulations?
Other open questions in the Hydro+ formalism involve higher-point functions, initial conditions, and freeze-out. How can we generalize Hydro+ to incorporate 3-point and higher-point functions? Were we to naively generalize Eq. (20) we would introduce another insertion of δM, and with it another momentum and spacetime dimension, leading to a proliferation of φ modes that need to be followed during the course of a simulation. How many modes must be tracked in order to accurately describe a heavy-ion simulation? Also, what are the initial conditions of these modes? Finally, what is the proper way to implement freeze-out for these modes?

Relevant scales for transits of the critical point
The deterministic method described in Section 2.2 can be used to obtain estimates of the length and time scales involved in transits of the critical region in a heavy-ion collision. The basic issue is that in a collision of heavy nuclei the trajectory of the system in the phase diagram is likely to miss the critical point by some amount, and to only spend a finite amount of time in the critical region. Combined with the expansion of the system and the effects of critical slowing down, this implies that the correlation length cannot become very large. The effects of critical slowing down manifest themselves differently depending on the spatial and momentum scales at which correlations are being studied. In this section we will present simple estimates of these effects, following the work of [99].
We consider the two-point function of the entropy per particle ŝ = s/n, which serves as an order parameter near the critical endpoint. Following Section 2.2 we can derive a relaxation equation for the two-point function W_ŝŝ(t, x, k). For simplicity we will focus on a fluid undergoing locally homogeneous isotropic expansion, so that W_ŝŝ(t, k) does not depend on x. The evolution equation for W_ŝŝ has the form

∂_t W_ŝŝ(t, k) = −Γ_ŝ(k) [ W_ŝŝ(t, k) − W⁰_ŝŝ(t, k) ] ,

where Γ_ŝ is a relaxation rate, and W⁰_ŝŝ(t, k) is the equilibrium correlation function. In a non-critical fluid the correlation length is small and W⁰_ŝŝ is approximately independent of k. The relaxation rate is related to the diffusion constant, Γ_ŝ = D k². The diffusion constant can be written as D = l²_micro/τ_0, where l_micro is the microscopic length scale introduced above, and τ_0 is the non-critical relaxation time. The maximum wavelength of a fluctuation that can be equilibrated in a fluid that is expanding at a rate 1/τ_Q is

l_max ∼ l_micro ε^{−1/2} ,

where we have introduced a small parameter ε ≡ τ_0/τ_Q, i.e. the product of the microscopic relaxation time τ_0 and the macroscopic expansion rate 1/τ_Q. In the vicinity of the critical point the correlation length and the specific heat diverge. We can take the effect of the correlation length into account by taking the equilibrium correlation function to be of the form

W⁰_ŝŝ(t, k) ∼ C_p(t) / [ 1 + (kξ)^{2−η} ] ,

where η is the correlation-function exponent (anomalous dimension) of the 3-dimensional Ising model. We can also incorporate the effect of critical slowing down by modifying the relaxation rate as

Γ_ŝ(k) = (λ_T/C_p) k² [ 1 + (kξ)^{2−η} ] ,   (26)

where λ_T is the thermal conductivity. Eq. (26) is a simple model that corresponds to the model B dynamics discussed in Section 2.3. Consider now the time evolution in the vicinity of a critical point. We will define t = 0 to be the time at which the system reaches the critical value of the baryon density. Near t = 0 the equilibrium correlator evolves rapidly, (∂_t C_p)/C_p ∼ 1/t.
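The competition between relaxation and expansion can be illustrated by integrating such a relaxation equation with an equilibrium value that decays at the expansion rate 1/τ_Q. The exponential form of W⁰(t) below is an illustrative stand-in, not the critical equation of state:

```python
import numpy as np

def evolve_W(k, D, tauQ, t_grid, W0_amp=1.0):
    """Integrate dW/dt = -Gamma(k) * (W - W0(t)) with Gamma = D k^2 and a
    toy equilibrium value W0(t) = W0_amp * exp(-t / tauQ) that decays at
    the expansion rate 1/tauQ (an illustrative stand-in, not the critical
    equation of state).  Modes with Gamma * tauQ >> 1 track W0; modes
    with Gamma * tauQ << 1 fall out of equilibrium and retain memory."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    gamma = D * k**2
    W = np.full_like(k, W0_amp)  # start in equilibrium
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        W0 = W0_amp * np.exp(-t1 / tauQ)
        W = W - (t1 - t0) * gamma * (W - W0)
    return W
```

A long-wavelength mode (Γ τ_Q ≪ 1) stays close to its initial value while the equilibrium value decays away, whereas a short-wavelength mode (Γ τ_Q ≫ 1) tracks the decaying equilibrium, mirroring the freeze-out of large-scale fluctuations described in the text.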
However, because of critical slowing down, the equilibration rate of long wavelength fluctuations cannot keep up with this rapid evolution, and these modes necessarily fall out of equilibrium. Equating the rate of change of C_p with the relaxation rate determines a characteristic time, known as the Kibble-Zurek time t_KZ. The correlation length ξ at this time is the Kibble-Zurek length, l_KZ = ξ(t_KZ). We can estimate the Kibble-Zurek length using the scaling form of the relaxation rate and the critical scaling of the specific heat. We find

l_KZ ∼ l_micro ε^{−aν/(aνz+1)} ∼ l_micro ε^{−0.19} ,

where we have used the model B value for the dynamical exponent z, and Ising critical exponents for a = 1/(1 − α) and ν. This establishes a hierarchy l_micro ≪ l_KZ ≪ l_max. Reference [99] provides numerical estimates for l_micro and ε under conditions relevant to a possible critical endpoint, T ≃ 155 MeV and n/s ≃ 1/25. The authors find l_micro ≃ 1.2 fm and ε ≃ 0.2. This corresponds to a hierarchy of scales

l_micro ≃ 1.2 fm < l_KZ ≃ 1.6 fm < l_max ≃ 2.7 fm .   (29)

These results indicate that the correlation length does not become very large, and that the enhancement in the two-particle correlation function in the critical regime remains modest, on the order of a factor of 2.
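The quoted numbers can be reproduced directly from the exponents. A short check, using the 3d Ising values of α, ν, η, the model B dynamical exponent z = 4 − η, and the inputs l_micro and ε from the text:

```python
# Reproducing the hierarchy of scales from the quoted exponents and inputs
# (alpha, nu, eta from the 3d Ising model; z = 4 - eta for model B;
# l_micro and eps from the estimates quoted in the text).
alpha, nu, eta = 0.11, 0.63, 0.036
z = 4.0 - eta             # model B dynamical exponent
a = 1.0 / (1.0 - alpha)   # exponent entering the specific-heat scaling
eps = 0.2                 # eps = tau_0 / tau_Q
l_micro = 1.2             # microscopic length scale, fm

kz_exponent = a * nu / (a * nu * z + 1.0)
l_KZ = l_micro * eps ** (-kz_exponent)   # Kibble-Zurek length, fm
l_max = l_micro * eps ** (-0.5)          # largest equilibrated wavelength, fm
```

With these inputs the exponent evaluates to about 0.19 and the lengths to about 1.6 fm and 2.7 fm, matching the hierarchy in Eq. (29).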
The methods discussed in Section A.1 can also be used to study the rapidity structure of fluctuations in a QGP undergoing longitudinal expansion. For simplicity we consider Bjorken expansion. From the Green function of the diffusion equation in a Bjorken background we find that the width σ_η of a momentum fluctuation localized in rapidity at time τ_0 increases with time according to Eq. (30) [42,55,66], where we have assumed that the shear viscosity to entropy density ratio is approximately constant. A similar formula can be derived for baryon number diffusion. Eq. (30) shows that in the regime in which fluid dynamics is a good approximation the rapidity width of an initial state fluctuation is small, σ_η ≪ 1. We can also obtain a very rough estimate of the rapidity width of a critical fluctuation: using the expansion rate to convert longitudinal distance to space-time rapidity, Δη ∼ l_KZ/τ, Eq. (29) gives a width well below one unit of rapidity. The estimates discussed in this section indicate that in heavy-ion collisions the correlation length remains modest, even if the system passes close to a critical point in the QCD phase diagram, and that critical fluctuations are localized in specific regions in momentum space. Quantifying these statements requires the results of fluid dynamical simulations to be converted to particle spectra in momentum space, which will be addressed in the following section.

Implementation of fluid to particle conversion
After performing either stochastic fluid dynamics or hydro-kinetics, the question arises how to relate the fluid dynamical and order parameter fields, together with their fluctuations and correlations given in coordinate space, to experimentally observed quantities, which are constructed from measured particle spectra in momentum space within a given, experiment-specific kinematic acceptance, and not from the fluid dynamical fields directly. Therefore, direct model-to-data comparisons require the conversion of correlations in fluid fields to finite-statistics particle correlations. For non-relativistic fluids this problem has been addressed in several ways [100]. One of them is to exactly match the fluxes at the interface, which in the relativistic case corresponds to local event-by-event conservation laws, in other words micro-canonical sampling. The Cooper-Frye (CF) particlization used in relativistic models (see e.g. [101]), on the other hand, is based on a grand-canonical local phase-space distribution. It combines the Cooper-Frye formula for the momentum distribution in a hypersurface cell [36] with Poissonian sampling of the multiplicity distributions. As discussed in [102], this procedure adds additional fluctuations to those obtained from stochastic fluid dynamics. (This method and thus the subsequent discussion are relevant only for stochastic fluid dynamics; at the moment it remains unclear how to perform freeze-out after hydro-kinetics.) In order to see this, let us consider for simplicity the correlations of the baryon number for the case where we can ignore anti-baryons, i.e. for collisions at low energies. Stochastic fluid dynamics provides an ensemble of hydro events or configurations which reflect the fluctuations of the system. In addition, particlization of a given event typically provides an ensemble of particle configurations.
Therefore, for a given cell i and a given fluid dynamical (FD) event we distinguish the baryon number B_H(i) of the fluid cell from the baryon number B(i) carried by the sampled particles. The final baryon number (in terms of particles) in cell i is then obtained by averaging over all fluid dynamical events as well as by averaging over the particle configurations for a given fluid dynamical event. Let us denote these averages as follows:

⟨. . .⟩_CF = average over many CF particle configurations (for a fixed FD event),
⟨. . .⟩_FD = average over FD configurations,
⟨. . .⟩ = average over CF sampling AND over FD configurations.

Thus, if for a given fluid dynamical event we average over the particle samples we get ⟨B(i)⟩_CF = B_H(i). Further averaging over the fluid dynamical ensemble results in ⟨B(i)⟩ = ⟨B_H(i)⟩_FD. Since the Cooper-Frye sampling preserves the mean, everything works out. However, this is not the case if we look at correlations. For a given fluid dynamical event, upon averaging over the particle configurations we get

⟨B(i)B(j)⟩_CF = B_H(i)B_H(j) + δ_ij B_H(i) ,

where in the last step we used the fact that for Poisson sampling we have ⟨δB(i)²⟩_CF = ⟨B(i)⟩_CF = B_H(i). Thus we get for the correlation function

⟨B(i)B(j)⟩ = ⟨B_H(i)B_H(j)⟩_FD + δ_ij ⟨B_H(i)⟩_FD .

Therefore, for all non-identical cells the correlations are reproduced correctly, but we get spurious contributions from identical cells. If correlations could be measured in configuration space one could simply ignore the problem for identical cells, which is due to correlations of particles with themselves. However, in experiment we look at correlations in momentum space, and it is not clear how to remove this spurious contribution in that case. The problem becomes even more apparent if one looks at cumulants. Given the above expression for the correlation function, the second order cumulant, K_2, is given by

K_2 = Σ_{i,j} [ ⟨B_H(i)B_H(j)⟩_FD − ⟨B_H(i)⟩_FD ⟨B_H(j)⟩_FD ] + Σ_i ⟨B_H(i)⟩_FD ,

where we sum over a certain subset of cells of the freeze-out hypersurface. In addition to the true second order cumulant, which reflects the fluctuations of the stochastic fluid dynamical simulation, we have an extra, spurious term ∼ Σ_i ⟨B_H(i)⟩ which arises from the Poisson sampling of the standard Cooper-Frye particlization.
For example, in the case where we use stochastic fluid dynamics to simulate an ideal gas, where the fluctuations follow a Poisson distribution, we would simply double count the fluctuations, so that the resulting second order cumulant would be K_2 = 2 Σ_i ⟨B_H(i)⟩_FD, i.e. twice the correct Poisson result. From this simple example it should be clear that particlization of stochastic fluid dynamics has to ensure that the conserved quantum numbers are conserved locally and event by event. This can be achieved by sampling the particles from a micro-canonical ensemble instead of a grand-canonical ensemble as is done in the standard Cooper-Frye procedure. Such an algorithm has been developed and implemented in [37,103]. As discussed in some detail there, in the case of the systems created in heavy-ion collisions the micro-canonical sampling requires some extra considerations, because contrary to typical non-relativistic fluids, one deals with a rather small number of particles, of the order of 10⁴ or so. At the same time, the computational grid is made of rather small cells in order to minimize numerical viscosity. As a consequence, the typical number of particles in a cell of the computational grid is much smaller than one. Micro-canonical sampling, however, requires integer quantum numbers and, preferably, that the number of particles is large compared to one. To address this issue the authors of [37,103] introduced so-called "patches". These patches are larger than the computational cells and their size should be such that each patch has a sufficient number of particles for the micro-canonical sampling to be sensible. Thus the patch size introduces another scale, l_patch. Since the conserved quantum numbers are not resolved within a patch, one can determine the correlation of conserved charges only for distances d > l_patch.
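The double counting can be made explicit with the law of total variance: grand-canonical (Poissonian) sampling on top of a fluctuating fluid baryon number B yields K_2 = ⟨B⟩ + Var(B), the true fluid-dynamical variance plus a spurious sampling term. A small sketch (the helper name and the discrete representation of the FD ensemble are illustrative):

```python
import numpy as np

def k2_after_poisson_sampling(b_values, b_probs):
    """Second cumulant of the sampled particle number when Cooper-Frye
    sampling is Poissonian on top of a fluctuating fluid baryon number B
    (given here as a discrete distribution).  By the law of total variance,
    K2 = <Var(N|B)> + Var(<N|B>) = <B> + Var(B);
    the first term is the spurious contribution from the sampling."""
    b = np.asarray(b_values, dtype=float)
    p = np.asarray(b_probs, dtype=float)
    mean_b = float(np.sum(p * b))
    var_b = float(np.sum(p * b**2)) - mean_b**2
    return mean_b + var_b
```

For a fluid whose baryon number is itself Poissonian with mean λ, Var(B) = ⟨B⟩ = λ and the result is K_2 = 2λ, reproducing the factor-of-two double counting described above.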
The obvious question is how the new scale l_patch compares with the other scales in the problem, such as l_noise or l_filter and l_hydro. The condition for the patch size is that one has a sufficiently large number of particles in the patch, N_patch = l³_patch ρ ≫ 1, where ρ is the particle density. Since after particlization one typically evolves the system with Boltzmann transport, the mean free path l_mfp should be larger than the inter-particle distance, i.e. l_mfp > ρ^{−1/3}. Since fluid dynamics should still be valid at the point of particlization, the patch size should also be larger than the mean free path, l_patch ≳ l_mfp. This automatically ensures that we have sufficiently many particles in the patch, since N_patch = l³_patch ρ ≳ l³_mfp ρ ≫ 1. Finally, of course, the patch size needs to be smaller than the fluid dynamical scale, l_patch ≪ l_hydro, and larger than the cutoff or filter scale required to regularize stochastic fluid dynamics. Thus we have l_grid, l_mfp < l_filter ≲ l_patch ≪ l_hydro.
Note that l_grid is the size of the discretized fluid cell, which is not really a physical scale. In order to resolve the critical correlations we should have l_patch ≲ l_KZ; since l_patch is limited from below by the inter-particle spacing, this condition is only marginally fulfilled in model estimates, see Eq. (29). Contrary to stochastic fluid dynamics, which provides an ensemble of fluid dynamical events encoding the fluctuations and correlations of conserved charges, in deterministic hydro-kinetics one calculates the time evolution of the means and n-particle correlation functions. Therefore, in this case particlization will involve sampling particles such that these correlation functions are faithfully reproduced in terms of particles. At present there is no algorithm available to address this problem.

Experimental challenges
The experimental measurement of fluctuation observables is currently of high interest to the heavy-ion physics community. Naturally, one of the main topics of the discussions during the RRTF meeting was the search for a QCD critical point in the SPS energy range as well as in the RHIC Beam Energy Scan (BES) program. However, an even larger fraction of the discussions focused on the comparison of ALICE data with lattice QCD calculations. While the QCD critical point is not accessible via measurements at LHC energies, pseudo-critical fluctuations at higher orders are measurable there. In addition, the lower order fluctuation observables are of interest as they provide a unique opportunity to test lattice QCD results against experimental data.
Nevertheless, one should be cautious when making direct comparisons to results from lattice QCD calculations. On the one hand, actual systems in high-energy heavy-ion experiments are dynamical, finite, come in different sizes, and the plasma formed is very noisy, fluctuates considerably and is measured only indirectly within a given acceptance. On the other hand, lattice QCD simulations probe equilibrium in the thermodynamic limit, and are still strongly constrained by the sign problem. Still, several statistical mechanics techniques successfully applied in lattice simulations can be useful in analyzing the experimental data [104][105][106]. Near the critical region, one can incorporate the spurious contributions (resonances, acceptance limitations, finite size, finite lifetime and critical slowing down) expected to affect the fluctuations in the BES in a way that can be systematically improved or adapted [107,108].
This section of the paper is organised as follows. In subsection 3.1, the problem of matching between experimental observables and theoretical quantities is introduced. In the following, the effects of isospin and strangeness randomisation are discussed in subsection 3.2. The subject of volume fluctuations, that is closely related to different approaches in the centrality selection which come with different advantages and disadvantages, is addressed in 3.3. The rapidity window dependence of fluctuation observables is scrutinized in subsection 3.4 and the potential non-critical contributions from resonance decays are investigated in subsection 3.5. Finally, a summary of the different experimental methods and techniques is given for each experiment in subsection 3.6 including a recommendation for future common standards based on the pros and cons of the different approaches.

Matching between experimental observables and theoretical quantities
The physics program of fluctuation studies in heavy-ion collisions is characterised by a plethora of existing observables with different sensitivities to the underlying physics phenomena. From the theory side, only some quantities are directly accessible by ab-initio lattice QCD calculations. In any case, all observables must be properly matched between theory and experiment in order to allow for an apples-to-apples comparison.
The thermodynamic susceptibilities χ^{BSQ}_{lmn} of order l + m + n for baryon number B, strangeness S, and electric charge Q are given by the derivatives of the pressure P with respect to the corresponding chemical potentials μ [109]:

χ^{BSQ}_{lmn} = ∂^{l+m+n} (P/T⁴) / [ ∂(μ_B/T)^l ∂(μ_S/T)^m ∂(μ_Q/T)^n ] .

They can be calculated in lattice QCD from first principles using imaginary time (for details see e.g. [110]). The pressure in a system of volume V is connected with the partition function Z via [111]

P = (T/V) ln Z .

From the experimental side, the susceptibilities of the conserved quantities are accessible via the measurement of event-by-event fluctuations in the particle production. For a quantitative comparison, the cumulants K_i of order i of the measured particle multiplicity distributions are analysed, although several authors argue in addition that the study of factorial cumulants provides a cleaner way to access possible non-trivial dynamics in heavy-ion collisions [112]. The cumulants can be calculated from the central moments μ_n = ⟨(N − ⟨N⟩)^n⟩:

K_1 = ⟨N⟩ , K_2 = μ_2 , K_3 = μ_3 , K_4 = μ_4 − 3μ_2² .

In a traditional nomenclature, statistical distributions are often described with the mean M, the variance σ², the skewness S, and the kurtosis κ, which corresponds to an equivalent parameter set of the first four central moments or cumulants, respectively. With respect to the thermodynamic susceptibilities χ, the following relations are found [113]:

M = V T³ χ_1 , σ² = V T³ χ_2 , S = V T³ χ_3 / (V T³ χ_2)^{3/2} , κ = V T³ χ_4 / (V T³ χ_2)² .

There are several significant effects, however, which must be carefully considered in the comparison of the theoretically calculated susceptibilities and the experimentally measured cumulants of identified particle multiplicity distributions, and which will be discussed in more detail in the following sections: 1. The susceptibilities of the conserved quantities in QCD are calculable on the lattice, while experimentally only net-charge, net-pion, net-kaon, net-proton, and net-Λ distributions are accessible. The correspondence between, for example, the cumulants of the net-proton distribution and the susceptibilities χ^B_n is discussed in Sec. 3.2. 2.
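The cumulant construction from central moments can be written compactly. The helper below is an illustrative sketch, not an experiment's analysis code (which must additionally handle efficiency corrections and statistical uncertainties):

```python
import numpy as np

def cumulants(samples):
    """First four cumulants of a measured multiplicity distribution,
    built from the central moments: K1 = <N>, K2 = mu2 = sigma^2,
    K3 = mu3, K4 = mu4 - 3 mu2^2.  Ratios such as K3/K2 = S sigma and
    K4/K2 = kappa sigma^2 cancel the unknown V T^3 factor."""
    x = np.asarray(samples, dtype=float)
    m = x.mean()
    mu2 = ((x - m) ** 2).mean()
    mu3 = ((x - m) ** 3).mean()
    mu4 = ((x - m) ** 4).mean()
    return m, mu2, mu3, mu4 - 3.0 * mu2**2
```

For a symmetric two-valued (Bernoulli-like) multiplicity sample the known analytic cumulants are reproduced, including the negative K_4 characteristic of a distribution narrower than a Poissonian.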
While the susceptibilities are calculated on the lattice at a fixed volume and a fixed temperature, which enter the equations above through the V T³ factors, in heavy-ion collisions these quantities are not measurable. Therefore a common approach is to form combinations of the cumulants in which these unknown factors cancel, and to compare them to ratios of the susceptibilities, such as Sσ = χ_3/χ_2 and κσ² = χ_4/χ_2. However, as detailed in Sec. 3.3, the volume and temperature in heavy-ion collisions are related to the number and positions of the participating nucleons and therefore are not only unknown but fluctuate event-by-event.
These additional fluctuations mean that the V T 3 terms do not cancel precisely. 3. While the lattice QCD calculations are performed for a fixed volume in the infinite limit, and the correspondence to multiplicity fluctuations of conserved charges is done within the grand-canonical ensemble picture, heavy-ion collisions occur within a finite volume over which local and global conservation laws must hold. The effects of conservation laws can be experimentally probed by investigating the dependence of the multiplicity cumulants on the kinematic acceptance of the measurement, as described in Sec. 3.4. 4. Another major topic of discussion at the RRTF was the influence of resonance decays on fluctuation observables. For example, the feeddown of and baryons into the proton and anti-proton multiplicities (e.g. → pπ − , → pπ + ) and the influence of rho meson decays in the net-pion measurement (ρ → π + π − ) are of particular concern (see Sec. 3.5).
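The chain from measured multiplicities to susceptibility ratios can be sketched numerically. The following Python snippet is purely illustrative (a Poisson sample stands in for data, so all cumulants should come out equal to the mean): it computes K_1 through K_4 from the central moments and forms the volume-independent ratios discussed above.

```python
import numpy as np

def cumulants(sample):
    """First four cumulants K1..K4 from the central moments
    mu_i = <(N - <N>)^i> of a multiplicity sample."""
    n = np.asarray(sample, dtype=float)
    mean = n.mean()
    d = n - mean
    mu2, mu3, mu4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return mean, mu2, mu3, mu4 - 3.0 * mu2**2

# Illustrative stand-in for data: a Poisson sample, for which all K_i
# equal the mean, so both ratios below should be close to unity.
rng = np.random.default_rng(1)
sample = rng.poisson(20.0, size=2_000_000)
k1, k2, k3, k4 = cumulants(sample)

# Volume-independent ratios compared to susceptibility ratios:
#   S*sigma       = K3/K2  <->  chi3/chi2
#   kappa*sigma^2 = K4/K2  <->  chi4/chi2
s_sigma, kappa_sigma2 = k3 / k2, k4 / k2
```

In a genuine analysis the sample would be the event-by-event net multiplicity (e.g. protons minus anti-protons), and K_3/K_2 and K_4/K_2 would be compared to χ_3/χ_2 and χ_4/χ_2.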

Light nuclei production
Light nuclei production is discussed as an observable related to spatial density fluctuations. The latter are expected to be enhanced in the vicinity of the critical point, but they are not measurable directly. It was suggested recently that they can be inferred from the light nuclei production [114,115]. In a simple coalescence model the yields of deuterons and tritons can be expressed as N_d ∝ ⟨ρ_p ρ_n⟩ and N_t ∝ ⟨ρ_p ρ_n^2⟩, where the proton and neutron densities are allowed to fluctuate in space, ρ = ⟨ρ⟩ + δρ. The proton-neutron density correlations and the relative neutron density fluctuations are denoted as C_np = ⟨δρ_p δρ_n⟩/(⟨ρ_p⟩⟨ρ_n⟩) and Δρ_n = ⟨(δρ_n)^2⟩/⟨ρ_n⟩^2, respectively. Constructing the ratio (N_t N_p)/N_d^2, one can see that the coalescence model predicts it to be independent of collision system, energy, or centrality, but sensitive to the spatial density fluctuations through C_np and Δρ_n. In the vicinity of the critical point this ratio should exhibit a peak. Combining the data from NA49 [116][117][118], STAR [119,120], and ALICE [121] one indeed sees two clearly pronounced peaks in the dependence of (N_t N_p)/N_d^2 on collision energy in central collisions. However, the interpretation of these peaks is currently not possible for two reasons: (1) available models disagree on the (N_t N_p)/N_d^2 ratio in the absence of a critical point; (2) there is currently no model that includes a critical point, the spatial density fluctuations emerging from it, and light nuclei production. It was argued that the structures in (N_t N_p)/N_d^2 may be related to decays of excited ^4He states [122], and that the 4-particle correlations reflected in the enhanced K_4/K_2 ratio originate from the same source. The observable (and similar related ratios, such as (N_{3He} N_t)/(N_d N_α)) seems very promising, but requires further investigation, especially from the theory side.
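The sensitivity of the double ratio to density fluctuations can be made explicit in a toy numerical sketch (illustrative only; the gamma-distributed density and all parameters are assumptions, and overall coalescence factors are dropped since they cancel in the ratio):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy coalescence sketch: yields scale as
#   N_p ~ <rho_p>, N_d ~ <rho_p rho_n>, N_t ~ <rho_p rho_n^2>.
cells = 1_000_000
rho_n = rng.gamma(shape=25.0, scale=0.04, size=cells)  # fluctuating neutron density, mean 1
rho_p = np.ones(cells)                                 # protons taken uniform here (C_np = 0)

Np = rho_p.mean()
Nd = (rho_p * rho_n).mean()
Nt = (rho_p * rho_n**2).mean()

ratio = Nt * Np / Nd**2
delta_n = rho_n.var() / rho_n.mean()**2  # relative neutron density fluctuation

# With uncorrelated rho_p the model gives ratio = 1 + delta_n exactly,
# so enhanced density fluctuations push the ratio above its baseline.
```

The identity `ratio = 1 + delta_n` holds by construction when the proton density is uniform; spatial proton-neutron correlations (C_np ≠ 0) would modify the baseline.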

Additional experimental observables
1. ν_dyn: In addition to cumulants of net-charge distributions, many experiments also measure fluctuations via the observable ν_dyn. The fluctuations between two particle types A and B, which may represent particles and anti-particles or different particle species, can be quantified by

ν[A,B] = ⟨(N_A/⟨N_A⟩ − N_B/⟨N_B⟩)^2⟩.

The independent statistical (Poissonian) fluctuations of N_A and N_B, ν_stat = 1/⟨N_A⟩ + 1/⟨N_B⟩, are then subtracted to obtain a measure of the dynamical fluctuations,

ν_dyn[A,B] = ⟨N_A(N_A−1)⟩/⟨N_A⟩^2 + ⟨N_B(N_B−1)⟩/⟨N_B⟩^2 − 2⟨N_A N_B⟩/(⟨N_A⟩⟨N_B⟩).

If N_A and N_B have Poisson distributions and are uncorrelated, then ν_dyn = 0. An important feature of ν_dyn is that it is robust against particle detection efficiency losses in the case that the detector response can be described by a binomial distribution. At LHC energies, where particles and anti-particles are produced in equal amounts, ν_dyn[A,Ā] is related to the second-order moments via

ν_dyn[A,Ā] = (K_2(N_A − N_Ā) − ⟨N_A + N_Ā⟩)/⟨N⟩^2, where ⟨N⟩ = ⟨N_A⟩ = ⟨N_Ā⟩.

One should note that the ν_dyn measure, by definition, has an intrinsic multiplicity dependence which has to be taken into account. Several scaling prescriptions are investigated in the literature, such as the charged-particle multiplicity density at mid-rapidity, the number of participants [123,124], and the mean multiplicities of accepted particles [125].
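The defining properties of ν_dyn can be checked in a few lines; the sketch below (illustrative multiplicities, not experimental data) verifies that uncorrelated Poisson distributions give ν_dyn = 0, while a correlated common source drives it negative:

```python
import numpy as np

def nu_dyn(na, nb):
    """nu_dyn[A,B] = <NA(NA-1)>/<NA>^2 + <NB(NB-1)>/<NB>^2 - 2<NA NB>/(<NA><NB>)."""
    na, nb = np.asarray(na, float), np.asarray(nb, float)
    ra = (na * (na - 1)).mean() / na.mean()**2
    rb = (nb * (nb - 1)).mean() / nb.mean()**2
    rab = (na * nb).mean() / (na.mean() * nb.mean())
    return ra + rb - 2.0 * rab

rng = np.random.default_rng(3)
# Uncorrelated Poisson multiplicities: nu_dyn consistent with zero.
a = rng.poisson(15.0, 500_000)
b = rng.poisson(10.0, 500_000)
nd_poisson = nu_dyn(a, b)

# A common source correlates A and B: the cross term grows faster than
# the single-species terms and nu_dyn becomes negative.
common = rng.poisson(5.0, 500_000)
nd_corr = nu_dyn(a + common, b + common)
```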

Balance functions
One of the additional elements which surfaced in the discussions was the possibility to experimentally measure the balance function B(Δη, Δϕ), defined by the difference between the two-particle correlations of like- and unlike-sign pairs of particles. The correlation function itself can be written as the ratio of the particle pair density to the single particle density,

C_{αβ}(η_1, ϕ_1; η_2, ϕ_2) = ρ_{αβ}(η_1, ϕ_1; η_2, ϕ_2) / ρ_α(η_1, ϕ_1),

where ρ_{αβ} is the distribution of pairs of particles of types α and β at angles (η_1, ϕ_1) and (η_2, ϕ_2), respectively, and ρ_α is the single particle distribution for particles of type α at the angle (η_1, ϕ_1). The correlation function can be further condensed by considering only the relative angles Δη = η_1 − η_2 and Δϕ = ϕ_1 − ϕ_2 between the two particles in the pair, C_{αβ}(Δη, Δϕ). When the correlation function is constructed for like-sign (C_{++} + C_{−−}) and unlike-sign (C_{+−} + C_{−+}) particle pairs, the balance function is defined as B(Δη, Δϕ) = (C_{+−} + C_{−+} − C_{++} − C_{−−})/2. The integral of the balance function is directly related to ν_dyn and thus to the second cumulant K_2 of the net-particle distribution [126].
Measuring the balance function has several advantages: it explicitly encodes the rapidity dependence of the measurement, which gives access to the correlation length ξ, and it also allows one to see the influence of other physical effects such as resonance decays, flow, and non-thermal particle production due to jets. However, a precise measurement of balance functions naturally requires larger data samples, and while it is more straightforward to correct correlation functions for experimental efficiency, it is unclear how to account for the effects of volume fluctuations, which would need to be understood before balance functions could be interpreted as a measurement of second moments.
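A minimal pair-counting sketch illustrates the two key properties of the balance function: its integral counts the balanced charge per particle, and its shape tracks the rapidity separation of balancing partners. Everything here is a toy assumption (charge-balanced pairs with a Gaussian partner separation; simplified event-averaged normalization), not an experimental procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
edges = np.linspace(0.0, 2.0, 11)
hpp = np.zeros(10); hmm = np.zeros(10); hpm = np.zeros(10)
n_ch = 0.0

for _ in range(2000):
    n = rng.poisson(10)
    eta_p = rng.uniform(-1.0, 1.0, n)                            # positive particles
    eta_m = np.clip(eta_p + rng.normal(0.0, 0.5, n), -1.0, 1.0)  # balancing partners
    n_ch += 2 * n
    iu = np.triu_indices(n, 1)  # each unordered like-sign pair once
    hpp += np.histogram(np.abs(eta_p[:, None] - eta_p[None, :])[iu], bins=edges)[0]
    hmm += np.histogram(np.abs(eta_m[:, None] - eta_m[None, :])[iu], bins=edges)[0]
    hpm += np.histogram(np.abs(eta_p[:, None] - eta_m[None, :]).ravel(), bins=edges)[0]

# Unlike-sign minus like-sign pair distributions, per charged particle
# (N+- = N-+ = hpm; like-sign ordered-pair counts are 2*hpp and 2*hmm).
B = (2.0 * hpm - 2.0 * hpp - 2.0 * hmm) / n_ch
```

With every charge balanced by one in-acceptance partner, the bins of `B` sum to one, and the signal concentrates at small Δη because the toy partner separation is ~0.5 units.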

Intensive and strongly intensive quantities
Additional fluctuation observables were also discussed, such as the strongly intensive quantities Δ[A,B] and Σ[A,B] [127][128][129], which are insensitive to both the volume and volume fluctuations within models of independent particle sources (e.g. the Wounded Nucleon Model [130] and the grand-canonical ensemble of an ideal Boltzmann gas). The scaled variance, an intensive quantity, can be written as

ω[N] = (⟨N^2⟩ − ⟨N⟩^2)/⟨N⟩,

and is studied in event classes of fixed E_P = E_beam − E_F, the difference between the beam energy (E_beam) and the energy carried forward by spectators from the projectile (E_F).
In the search for critical behavior, it is most interesting to construct observables which are insensitive to both the volume and volume fluctuations, called strongly intensive quantities [128,129]. For two extensive event quantities A and B they can be defined as

Δ[A,B] = (⟨B⟩ ω[A] − ⟨A⟩ ω[B]) / C_Δ and Σ[A,B] = (⟨B⟩ ω[A] + ⟨A⟩ ω[B] − 2(⟨AB⟩ − ⟨A⟩⟨B⟩)) / C_Σ,

with suitable normalization factors C_Δ and C_Σ. One example is Δ[P_T, N] and Σ[P_T, N], where N is the number of particles of a given type and P_T is the sum of the absolute values of their transverse momenta p_T. Another example involves measuring fluctuations and correlations for the numbers N_1 and N_2 from two non-overlapping sets of particles, Σ[N_1, N_2]. One interesting observation is that Σ[N_1, N_2], with the normalization C_Σ = ⟨N_1⟩ + ⟨N_2⟩, reduces in the special case ⟨N_1⟩ = ⟨N_2⟩ to the ratio of the variance K_2(N_1 − N_2) to the Skellam baseline ⟨N_1⟩ + ⟨N_2⟩, which represents the limiting case of independent Poissonian particle and anti-particle multiplicity distributions. The condition ⟨N_1⟩ = ⟨N_2⟩ is realized to a high precision for measurements of particle and antiparticle distributions at the LHC. The variance-over-Skellam-baseline ratio for the difference of particle and antiparticle numbers measured by ALICE (see Sec. 3.6.1) for various particle types therefore belongs to the class of strongly intensive quantities. The strongly intensive quantities have an advantage over standard cumulants in that they are insensitive to volume fluctuations and can therefore be measured independently of the experimental method used to select centrality (as shown in, for example, Ref. [131]). However, while they have been or are being measured by several experimental collaborations, the question of how to relate them to lattice QCD and other theoretical predictions remains open.
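The defining property can be demonstrated in a toy independent-source model (all parameters illustrative): the scaled variance ω[N] is inflated by event-by-event fluctuations of the number of sources, while Σ[N_1, N_2] with C_Σ = ⟨N_1⟩ + ⟨N_2⟩ stays at its single-source value of one.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigma_si(a, b):
    """Strongly intensive Sigma[A,B] =
    (<B> w[A] + <A> w[B] - 2 cov(A,B)) / (<A> + <B>)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    wa, wb = a.var() / a.mean(), b.var() / b.mean()
    cov = ((a - a.mean()) * (b - b.mean())).mean()
    return (b.mean() * wa + a.mean() * wb - 2.0 * cov) / (a.mean() + b.mean())

alpha, beta, n_ev = 0.8, 0.5, 400_000
ns_fixed = np.full(n_ev, 50)        # fixed number of sources ("volume")
ns_fluct = rng.poisson(50, n_ev)    # event-by-event volume fluctuations

results = {}
for tag, ns in (("fixed", ns_fixed), ("fluct", ns_fluct)):
    # Each source emits Poisson numbers of the two particle types.
    a = rng.poisson(alpha * ns)
    b = rng.poisson(beta * ns)
    results[tag] = (a.var() / a.mean(), sigma_si(a, b))

# w[N] is inflated by the volume fluctuations; Sigma[A,B] is not.
```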

Scaled factorial moments and intermittency
Finally, observables related to intermittency were discussed. In the grand-canonical ensemble the correlation length ξ diverges at the critical point (or on the second order phase transition line) and the system becomes scale invariant. This leads to large multiplicity fluctuations with special properties. They can be conveniently exposed using the scaled factorial moments F_r(M) [132] of rank (order) r:

F_r(M) = ⟨ (1/M) Σ_{m=1}^{M} N_m (N_m − 1) ··· (N_m − r + 1) ⟩ / ⟨ (1/M) Σ_{m=1}^{M} N_m ⟩^r,

where N_m is the number of particles in subdivision interval m and M = Δ/δ is the number of subdivision intervals of size δ of the momentum phase space region Δ. At the second order phase transition the matter properties strongly deviate from those of the ideal gas. The system is a simple fractal and the F_r(M) possess a power law dependence on M:

F_r(M) ∝ M^{φ_r}.

Moreover the exponent (intermittency index) φ_r satisfies the relation

φ_r = (r − 1) d_r,

with the anomalous fractal dimension d_r being independent of r [133]. It should be noted that F_r(M) is sensitive to both volume fluctuations and conservation laws; a formulation of a new method to study intermittency using strongly intensive quantities is needed. The question how well these assumptions are fulfilled in realistic heavy-ion collisions was also discussed. The finite size of the created system naturally limits the growth of the correlation length ξ. Therefore, finite-size corrections modify the predictions made for an infinite system. In addition, it was argued that due to dynamical effects the correlation length ξ is not expected to exceed 2-3 fm [134], which is small compared to the size of the system. In this case the analysis, which assumes scale invariance and requires ξ to be as large as the system itself, would not be directly applicable. Dynamical modeling of heavy-ion collisions is necessary to quantify the magnitude of these effects.
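A compact sketch of the estimator (illustrative Poisson bin contents, which realize the ideal-gas baseline F_r(M) = 1 for all r and M):

```python
import numpy as np

rng = np.random.default_rng(6)

def factorial_moment(counts, r):
    """Scaled factorial moment F_r(M) over events;
    counts has shape (n_events, M), with N_m particles in bin m."""
    n = counts.astype(float)
    ff = np.ones_like(n)
    for k in range(r):                 # falling factorial N(N-1)...(N-r+1)
        ff *= n - k
    num = ff.mean(axis=1).mean()       # <(1/M) sum_m N_m...(N_m-r+1)>
    den = n.mean(axis=1).mean() ** r   # <(1/M) sum_m N_m>^r
    return num / den

M, n_ev = 32, 50_000
poisson_bins = rng.poisson(3.0, size=(n_ev, M))
f2 = factorial_moment(poisson_bins, 2)
f4 = factorial_moment(poisson_bins, 4)
# Ideal-gas (Poisson) baseline: F_r(M) = 1, i.e. no intermittency signal;
# a critical system would instead show F_r rising as a power of M.
```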

A new observable: χ^B_2/χ^Q_2
The quantity χ^B_2/χ^Q_2 was proposed as a particularly interesting observable, since lattice QCD calculations indicate that this ratio behaves almost linearly as a function of T near the critical temperature, in contrast to the behavior of χ^B_4/χ^B_2 and other similar ratios, which approach a constant value and are insensitive to T below T_c. This is shown in Fig. 5. A measurement of the corresponding cumulant ratio K^B_2/K^Q_2 would therefore be more sensitive to the temperature and to possible critical effects. A further advantage is that it only requires measuring second cumulants, which are more easily within the statistical reach of experiments. However, the net-charge cumulants are among the most difficult to measure, for several reasons, including the large mean multiplicities, which increase the statistical uncertainties, and the very significant effects of resonance decays, which must be carefully controlled.

Isospin and strangeness randomisation across collision energies
In the hadronic phase, processes of the form p + π^0 ↔ Δ^+ ↔ n + π^+ can alter the isospin identity of nucleons. After only two of these cycles the original isospin distribution is completely randomised. This does not affect average quantities but significantly influences higher-order fluctuations. The efficiency of isospin randomisation depends strongly both on the density of pions and on the regeneration plus decay time of the intermediate resonance in comparison to the duration of the hadronic stage. Isospin randomisation might work efficiently at LHC energies because of the pion bath. However, the pion bath is not present at lower beam energies and the mechanism will break down at some point. The importance of isospin randomisation for relating the measured net-proton number cumulants to the theoretically interesting net-baryon number fluctuations has been worked out in [32,33]. In fact, using isospin randomisation it has been argued that the net-baryon number cumulants can be constructed from the experimentally observed proton number distribution even without a measurement of neutrons [32,33]. In [136], the mechanism has been studied quantitatively and it was found that the net-proton distribution would be pushed strongly toward the Skellam limit if the distribution originally deviated from it.
Similar questions should be investigated for the measurement of fluctuations in the net-strangeness. There are ongoing and existing measurements of net-kaon and net-Λ fluctuations, and the question is whether such measurements are sufficient to reconstruct the original net-strangeness fluctuations. Analogous to isospin randomisation, future work in this direction can be based on strangeness-exchange reactions of the type p + K^− ↔ Λ + π^0 that might occur frequently enough in the hadronic phase.

Volume fluctuations
While factors of volume appear in the relations between cumulants and susceptibilities listed above, the "volume" of the medium produced in a heavy-ion collision is not a very well defined quantity, although it is related to the number of nucleons that participate in the collision and their geometrical orientation and therefore to the collision centrality. Event-by-event fluctuations of the participants, and therefore the volume, are inescapable and are an additional source of fluctuations that must be assessed in experimental measurements.
Experimental measurements are performed in centrality classes, and the method used for estimating the centrality significantly influences the magnitude of the corresponding volume fluctuations. Depending on the detector setup, the centrality may be estimated by energy deposited at forward pseudorapidity (for example, in the V0 detector in ALICE), the number of charged particles reconstructed at midrapidity (e.g. in the STAR TPC), or the spectators measured at zero degrees (as is done e.g. in HADES). Each method has advantages and disadvantages and several experiments utilise different methods for cross-checks and cross-calibration. While decorrelation effects between different regions of phase space cause the participant fluctuations to be larger for a given centrality class, determining the centrality and measuring multiplicity fluctuations in the same region of phase space leads to autocorrelations.
Within the experimental community, there are two approaches to account for volume fluctuations in measurements of higher-order multiplicity cumulants. One approach is to attempt to correct the data by removing volume fluctuations. One method which is currently being developed is to use a data-driven unfolding for this correction. Another method is the centrality bin width correction (CBWC) [137], in which the moments measured in narrow centrality bins are combined to obtain the cumulants in a wide centrality bin. It should be noted, however, that participant fluctuations will be present even in the limit of very fine centrality bins, and therefore the CBWC is only a partial correction, see Fig. 6. An alternative approach is to evaluate the impact of the volume fluctuations on the measured cumulants and then fold them into the baseline or model comparison. Such an approach was explored in Ref. [138], where the ALICE centrality estimation procedure and mean (anti-)proton multiplicities were used to evaluate the effects of the volume fluctuations on the second cumulants within the Wounded Nucleon class of models. This approach means that no experimental information is lost in a correction procedure, but may also introduce a model-dependence into the interpretation of the results.
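The effect the CBWC removes, and its residual limitation, can be seen in a toy calculation (illustrative multiplicities; not the STAR implementation): merging centrality classes with different mean multiplicities inflates the variance by the bin-to-bin mean shift, while the weighted average of narrow-bin cumulants does not.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "narrow centrality bins" with different mean multiplicities.
narrow_bins = [rng.poisson(lam, 20_000) for lam in (30.0, 40.0, 50.0)]
merged = np.concatenate(narrow_bins)

# Naive wide-bin cumulant: picks up the shift of the mean across the bin.
k2_merged = merged.var()

# Centrality bin width correction: average the cumulants of the narrow
# bins, weighted by the number of events in each bin.
weights = np.array([len(b) for b in narrow_bins], float)
weights /= weights.sum()
k2_cbwc = float(np.sum([w * b.var() for w, b in zip(weights, narrow_bins)]))
```

Here `k2_cbwc` recovers the within-bin Poisson value (about 40), while `k2_merged` is far larger. Participant fluctuations *within* each narrow bin, however, are not removed by this construction, which is why the CBWC is only a partial correction.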

The rapidity window dependence
The acceptance dependence can provide information about the nature and origin of the correlations and fluctuations in heavy-ion collisions. For instance, the effects of conservation laws on fluctuation observables can be assessed by changing the kinematic acceptance, in particular the (pseudo-)rapidity range of the measurement. When the acceptance of the measurement is small, the fluctuations are reduced to purely statistical (Poissonian) fluctuations. When, on the other hand, the acceptance of the measurement is large compared to the phase space of produced particles, the conservation of baryon number, strangeness, and charge has an impact on the measured fluctuations. The effect of global and local baryon number conservation was demonstrated in a model [138,139] to explain the rapidity window (Δη) dependence of K_2 for net-protons measured by ALICE [25], see Fig. 7.
Furthermore, two other major sources of fluctuations have characteristically distinct rapidity window dependences: fluctuations of the initial conditions and fluctuations due to thermal noise [55]. The contribution of thermal fluctuations to intensive measures (such as ω[N]) grows with the acceptance window and saturates when the window width reaches the correlation rapidity range (typically, one unit of rapidity). In contrast, the initial-state fluctuations lead to long-range (up to several units) rapidity correlations, and their growth continues until the fireball boundary effects become important, e.g., due to conservation laws [138,139]. Since the fluctuations due to the QCD critical point are essentially thermal fluctuations, their correlation range is of order one unit of rapidity [140]. It should be emphasized that the spatial correlation length ξ, despite becoming anomalously large at the critical point, has little effect on the range of the kinematic rapidity correlations (see Fig. 8). The larger spatial correlation length ξ translates into a larger number of particles being correlated and thus manifests itself in a larger magnitude of the fluctuation measures [81,141].
The rapidity window dependence of the cumulants (normalized by multiplicity to make them intensive) follows the pattern expected from thermal fluctuations [140], as shown in Fig. 9. For small rapidity windows Δy ≪ 1 the normalized cumulant ω_k = K_k/⟨N⟩ grows as a power (Δy)^{k−1}. Furthermore, since the critical fluctuations correlate particles with different transverse momenta p_T, the magnitudes of the fluctuation measures increase with the p_T acceptance, as also illustrated in Fig. 9.
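The growth-and-saturation behavior of thermal-like fluctuations can be reproduced in a deliberately simple toy model (all parameters are assumptions): particles are emitted in pairs correlated over roughly half a unit of rapidity, and the scaled variance of the multiplicity in a window |y| < w/2 rises from the Poisson value toward saturation once the window exceeds the correlation range.

```python
import numpy as np

rng = np.random.default_rng(10)

def omega_vs_window(widths, n_ev=100_000, corr_sigma=0.5):
    """Scaled variance w[N] in |y| < w/2 for a toy model in which
    particles are produced in pairs correlated over ~corr_sigma units."""
    npairs = rng.poisson(15.0, n_ev)
    ev = np.repeat(np.arange(n_ev), npairs)     # event index per pair
    total = int(npairs.sum())
    y0 = rng.uniform(-4.0, 4.0, total)          # pair centers
    y1 = y0 + rng.normal(0.0, corr_sigma, total)  # first daughter
    y2 = y0 - rng.normal(0.0, corr_sigma, total)  # second daughter
    out = []
    for w in widths:
        hits = (np.abs(y1) < w / 2).astype(float) + (np.abs(y2) < w / 2)
        counts = np.bincount(ev, weights=hits, minlength=n_ev)
        out.append(counts.var() / counts.mean())
    return out

omega_narrow, omega_wide = omega_vs_window([0.2, 6.0])
# Narrow window: close to the Poisson value w[N] = 1.  Wide window: the
# pair correlations are fully contained and the fluctuations saturate.
```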
Other effects which are sensitive to the width of the rapidity window were also discussed in detail, particularly resonance decays and the influence of diffusion. At intermediate values of Δη, the dependence of the higher-order cumulants on Δη is determined by the correlations between observed particles, and is thus sensitive to the diffusion process and to particle production mechanisms. Various estimates of the Δη dependence have been made in the literature based on models for diffusion and particle production [34,142,140,143].

Influence of resonance decays on fluctuation observables
Resonance decays play an important role in particle production. For example, thermal model estimates show that, integrated over all transverse momenta, about 60% of all pions originate from the decays of heavier hadronic states [144]. On an event-by-event basis this can vary considerably and, thus, resonance decays may influence fluctuation observables significantly. In general, the decay of resonances follows a multinomial probability distribution [145] from which the impact of the decays on the cumulants of a particle multiplicity distribution can be derived. This has been done up to the fourth-order cumulant in [146]. Based on those results, the influence of resonance decays on the fluctuations in the net-proton number has been estimated in [136]. It was found that the higher-order cumulants are more strongly influenced by the decays. Moreover, a proper inclusion of the probabilistic character of the decay process turned out to be essential: for net-protons this can be an up to 20% effect on the cumulant ratios and is expected to be even stronger for pions [136].
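The importance of the probabilistic character of the decay can be illustrated with a toy second-cumulant calculation (all numbers illustrative; the branching fraction is a stand-in, and only a single binomial decay channel is modeled rather than the full multinomial treatment of [145,146]):

```python
import numpy as np

rng = np.random.default_rng(8)
n_ev = 500_000
b = 0.64                             # illustrative branching fraction into the channel

n_prim = rng.poisson(20.0, n_ev)     # primordial protons
n_res = rng.poisson(10.0, n_ev)      # parent resonances

# Probabilistic decay: each resonance feeds a proton with probability b.
n_prob = n_prim + rng.binomial(n_res, b)

# Naive "average" feed-down: add b protons per resonance deterministically.
n_avg = n_prim + b * n_res

var_prob = n_prob.var()
var_avg = n_avg.var()
# The binomial character of the decay adds b(1-b)<N_res> to K2,
# so the deterministic treatment underestimates the fluctuations.
```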
An important question concerns the connection between resonance decay contributions and the rapidity window dependence. For the thermal model interpretation of the yields (first moments), this does not play a role: if a charged pion from a ρ decay leaves the acceptance window, it is quite likely that in another event a pion from a neighboring rapidity window enters the acceptance window, leaving the number of pions in the acceptance window unchanged on average. Higher-order fluctuations, however, are sensitive to resonance decays, particularly if the acceptance window is smaller than or of the same order as the average rapidity difference between the decay daughters. Therefore, decays of the type ρ^0 → π^+π^− strongly influence the measurement of net-charge fluctuations. As a matter of fact, it was previously suggested to constrain the number of produced ρ and ω mesons via a measurement of net-charge fluctuations [147]. Similarly, the decay φ → K^+K^− influences net-strangeness measurements. Detailed studies are needed for the understanding of net-charge and net-strangeness observables. These effects are best studied using event generators which are either QCD-inspired, such as HIJING, or based on statistical-thermal particle yields coupled to a hydrodynamic expansion. Early estimates [147] based on a Monte Carlo study for Pb-Pb collisions at SPS energies indicated a modification of only 1% in the resonance decay contributions to the fluctuations in the π^+/π^− ratio if the acceptance in rapidity is limited to Δη = 1. In any case, a proper particle decay model needs to be an integral part of the simulation code.
Decays of the type Δ^{++} → pπ^+ and Δ^0 → pπ^− are rather different because they do not change the baryon number in a correlated way. However, the effects of feed-down in the net-proton measurements should be experimentally addressed. In principle, including primary and secondary protons from all sources would come closest to a measurement of the baryon multiplicity. In practice, though, it must be noted that in every experiment the detection efficiency is different for primary particles and particles of secondary origin, and branching ratios have to be taken into account. Therefore, it would be necessary to measure the multiplicity fluctuations for each particle species individually. Experimentally, protons from weak decays are characterized by a larger distance-of-closest-approach (DCA) to the primary vertex, and this information could be used to assign to each proton a probability of being of primary or secondary origin (analogous to the Identity Method for particle identification). From the theoretical side, such measurements could be accompanied by the simultaneous determination of χ^B, χ^{BS}, and χ^S, since hyperon decays (for instance those of Λ, Σ, and Ξ baryons) contribute to all three of these fluctuation observables simultaneously. Setting up such a detailed experimental and theoretical framework should be one of the main goals of future research activity in this field.
Additional complications in these studies might arise from regeneration and rescattering processes, which are likely to occur in the hadronic phase of the collision. Two primordially produced pions might pseudo-elastically rescatter via the large cross-section process π^+π^− → ρ^0 → π^+π^− and thus again leave the acceptance window. Regeneration effects are not modeled by event generators like HIJING; a correct treatment requires the use of hadronic afterburners such as UrQMD.

Overview of the current experimental techniques
The experimental techniques and methodologies across experiments are currently not fully harmonised. As a matter of fact, one of the goals of the RRTF was to contribute to the community-wide efforts to establish a common approach in the measurement of fluctuation observables. In order to obtain an overview of the current status of fluctuation measurements in the various experiments, each experiment was asked to provide answers to the following set of questions:

1. Which fluctuation observables are measured?
2. Which results are already available and what are the plans for the future?
3. What are the applied acceptance cuts?
4. What are the available statistics, at which energies and systems, now and in the future?
5. Are unphysical sources removed (e.g. spallation protons) and how?
6. How are secondaries from weak decays treated (e.g. Λ → p + π^−)?
7. How is the efficiency correction performed?
8. How are potential event-by-event fluctuations in the efficiency treated or modeled?
The responses are summarized below.

Fluctuation measurements in ALICE
ALICE has shown preliminary results on the first and second central moments of net-pion, net-kaon, net-proton, and net-Λ distributions [148,149], and on third moments of the net-proton distribution [150], measured using the Identity Method [151][152][153], as well as on the third- and fourth-order cumulants of the net-proton distribution [154], analysed with traditional cut-based particle identification.
One of the significant experimental challenges in measuring the higher moments of multiplicity distributions for identified particles is the treatment of regions of phase space where particles cannot be identified with perfect accuracy. When traditional cut-based particle identification is used, additional detectors may be required, or that region of phase space may need to be excluded entirely; both alternatives lower the particle identification efficiency and are detrimental to the cumulant measurements. To resolve this challenge, the Identity Method was developed, which provides a mathematical framework for evaluating the higher moments of identified particle multiplicity distributions when the particle identification is ambiguous. The Identity Method takes the moments of the particle multiplicities weighted by the probabilities that the particles belong to a given species, and unfolds them based on a detailed parameterisation of the detector's particle identification signal in order to obtain the true moments.
Furthermore, the ALICE collaboration has published measurements of particle ratio fluctuations [124] and net-charge fluctuations [155], quantified with the observable ν dyn , as well as balance functions [156,157] and strongly-intensive quantities [131].

Second and third cumulants of net-proton, net-kaon, and net-pion distributions
The analysis of the second moments of the net-pion, net-kaon, and net-proton distributions was performed with data from Pb-Pb collisions at √s_NN = 2.76 and 5.02 TeV collected by the ALICE detector. The kinematic acceptance of the measurement is |η| < 0.8 and 0.6 < p_T < 1.5 GeV/c. In this momentum range, the tracking efficiency for protons (anti-protons) in the Time Projection Chamber (TPC) is roughly constant at approximately 78% (70%), which has the technical advantage of allowing the analysis to be performed in a single inclusive momentum bin. In this analysis, the efficiency correction was performed at the level of the first and second moments using simulated Monte Carlo events passed through a GEANT model of the ALICE detector. Two Monte Carlo generators, AMPT and HIJING, were used for the efficiency correction; the small difference between the generators was used to estimate the corresponding systematic uncertainty. The procedure was also cross-checked by assuming binomial track loss [158], which was also used to correct the third moments for the detector efficiency. The accuracy of the correction procedure was estimated to be at the percent level and was included in the systematic uncertainties. Secondary protons, mainly from the decay of Λ baryons, were not explicitly removed from the measurement, but their influence on the final results was evaluated by varying the selection on the DCA between the tracks and the primary vertex; the observed small deviations were included in the systematic uncertainties.
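The binomial track-loss assumption underlying such corrections can be sketched for the first two moments (illustrative only: the efficiency value and the negative-binomial "true" distribution are assumptions, and real analyses handle higher moments and p_T-dependent efficiencies):

```python
import numpy as np

rng = np.random.default_rng(9)
eps = 0.78                                          # assumed constant tracking efficiency

# Illustrative "true" event-by-event multiplicities.
n_true = rng.negative_binomial(50, 0.7, 500_000)
# Binomial track loss: each particle is detected with probability eps.
n_meas = rng.binomial(n_true, eps)

# Invert the binomial smearing for the first two moments:
#   <n> = eps <N>,  Var(n) = eps^2 Var(N) + eps (1 - eps) <N>.
k1_corr = n_meas.mean() / eps
k2_corr = (n_meas.var() - (1.0 - eps) * n_meas.mean()) / eps**2

k1_true, k2_true = n_true.mean(), n_true.var()      # targets of the correction
```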

Second cumulants of net-Λ distributions
The analysis of the second cumulants of the net-Λ distribution was performed for √s_NN = 5.02 TeV Pb-Pb collisions, using the data set collected in 2015. Λ and Λ̄ baryons were reconstructed via their decays into (anti-)protons and charged pions. To account for the background of combinatoric proton-pion pairs, the Identity Method was applied along the invariant mass (m_pπ) axis by evaluating the probability at each value of m_pπ that a proton-pion pair corresponds to a true Λ decay or to a combinatoric pair. Since the reconstruction efficiency depends strongly on p_T throughout the kinematic range, from a minimum of 10% at p_T = 1 GeV/c to a maximum of roughly 30% at p_T = 4 GeV/c, the p_T-dependent efficiency correction was performed assuming binomial efficiency loss according to the prescription in Ref. [159]. The secondary contamination of Λ and Λ̄ baryons originating from the decay of Ξ baryons is also incorporated into the efficiency correction procedure.

Third and fourth cumulants of net-proton distributions
The analysis of the third and fourth moments of the net-proton multiplicity distributions was performed in Pb-Pb collisions at √ s NN = 2.76 and 5.02 TeV; the results at both energies are consistent within statistical and systematic uncertainties. Protons in the kinematic range |η| < 0.8 and 0.4 < p T < 1 GeV/c are identified according to tight selection cuts on specific energy loss in the TPC. Due to the stricter particle identification cuts used in this analysis, the reconstruction efficiency for protons (anti-protons) is approximately 65% (60%). The moments are corrected for efficiency according to Ref. [158]; the centrality bin width correction is also applied.
Each of the cumulant measurements described above is compared to the Skellam baseline. Small deviations from the Skellam baseline are observed for the net-proton and net-Λ second cumulants; within the precision of the measurement these deviations can be fully described by a model which includes the effects of baryon number conservation [138,139]. The effects of baryon number conservation were further tested by performing these measurements differentially with respect to the pseudorapidity acceptance (Δη). It was observed that Poissonian/Skellam behavior is recovered for small Δη, and that the measured cumulants decrease with respect to the Skellam limit as Δη increases, consistent with expectations. In the most central Pb-Pb collisions the higher-order cumulants are consistent with a Skellam distribution; the present statistical and systematic uncertainties do not yet allow any deviations to be resolved.
An ongoing analysis of the third- and fourth-order cumulants of net-proton distributions using the Identity Method will allow the kinematic range and precision of the measurement to be extended. A phenomenological evaluation of the effects of volume fluctuations and baryon number conservation on the higher moments is also underway, which will make a precise and quantitative test of lattice QCD possible. Furthermore, the data collected by the ALICE experiment in Runs 3 and 4 at the LHC will make it possible to measure the fourth moments of identified particles with unprecedented precision, and the sixth moments are also foreseen to come within experimental reach.

Fluctuation measurements in STAR
The STAR experiment at RHIC has measured a range of fluctuation observables, including the higher moments of net-charge [24,160], net-proton [23,26,161], net-kaon [162], and net-Λ [163] multiplicity distributions, as well as event-by-event fluctuations of identified particle ratios [164], mean p_T fluctuations [165], and balance functions [166]. As part of the RHIC Beam Energy Scan (BES) program, these measurements have been performed in Au+Au collisions across a wide range of collision energies, from √s_NN = 7.7 GeV to 200 GeV. The available statistics from BES Phase I and the top RHIC energies are listed in Table 1, together with the projected statistics for BES Phase II, which will take place between 2018 and 2021.
Of particular interest are the higher moments of the net-charge, net-kaon, and net-proton multiplicity distributions measured across the full range of BES energies. The measurements are performed in the (pseudo)rapidity windows |η| < 0.5 and |y| < 0.5 for unidentified and identified particles, respectively, and in the following transverse momentum ranges: 0.2 < p T < 2 GeV/c (net-charge), 0.2 < p T < 1.6 GeV/c (net-kaon), 0.4 < p T < 0.8 GeV/c (net-proton); since publication the net-proton kinematic range has been extended to 0.4 < p T < 2 GeV/c [167].
Particle identification is performed using the specific energy loss in the TPC and the time of flight from the TOF detector. The finite tracking efficiency is corrected under the assumption of binomial track loss, which has been extensively tested in Monte Carlo simulations [162]. Efforts to apply an unfolding procedure in the efficiency correction are underway. The centrality bin width correction (CBWC) is also applied. Decay products from weak decays and spallation protons are rejected with experimental cuts on the transverse momentum and the distance of closest approach to the primary vertex, although there is no explicit correction for the residual contamination.
The centrality of each event is determined from the charged-particle multiplicity at midrapidity, excluding the particle under study (i.e. in the net-proton measurement, protons are excluded from the centrality determination). While this avoids a trivial correlation between the observable and the centrality estimator, residual autocorrelations may still be present.
Future work will include the analysis of fluctuation observables in BES Phase II, where a massive increase in available statistics is anticipated (see Table 1). Furthermore, the beam energy scan program will be extended by inserting a gold target into the STAR detector, such that fixed-target collision events can be recorded. For example, a beam that corresponds to √ s NN = 62.4 GeV in collider mode yields √ s NN = 7.7 GeV when incident on a fixed target; similarly, a collider energy of √ s NN = 7.7 GeV corresponds to a fixed-target center-of-mass energy of √ s NN = 3 GeV. By utilizing the flexibility and performance of the STAR detector and the RHIC beams, fluctuation studies can be performed across a wide range of the phase diagram in the T − μ B plane.
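The conversion between collider and fixed-target energies quoted above follows from the invariant mass of a beam nucleon incident on a nucleon at rest. A minimal check (Python; the function name and the rounded nucleon mass are ours):

```python
import math

M_N = 0.938  # nucleon mass in GeV

def fixed_target_sqrt_snn(sqrt_snn_collider):
    """sqrt(s_NN) obtained when one collider beam hits a fixed target."""
    # In a symmetric collider each beam carries E = sqrt(s_NN)/2 per nucleon.
    e_beam = sqrt_snn_collider / 2.0
    # Fixed target: s = 2 m_N^2 + 2 E_lab m_N, with E_lab = e_beam.
    s = 2.0 * M_N**2 + 2.0 * e_beam * M_N
    return math.sqrt(s)

print(fixed_target_sqrt_snn(62.4))  # ~7.8 GeV
print(fixed_target_sqrt_snn(7.7))   # ~3.0 GeV
```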

Fluctuation measurements in HADES
With the HADES detector, located at the SIS18 at GSI, the proton number fluctuations have been investigated in Au+Au collisions at √ s NN = 2.41 GeV. HADES operates at low energies where there is no anti-proton production, hence net-proton fluctuations are measured as proton number fluctuations. However, approximately 1 / 3 of the protons are bound in light nuclear clusters produced either thermally or via coalescence. While so far only the fluctuations in the number of free protons have been analyzed, future work will also include the protons bound in deuterons, tritons, and He nuclei into the fluctuation signals.
Protons are selected within the HADES geometrical acceptance with 0.4 < p T < 1.6 GeV/c and y = y 0 ± 0.5, where y 0 = 0.74 is the center-of-mass rapidity. For the analysis, the Au+Au data are subdivided into four centrality classes: 0-10%, 10-20%, 20-30%, and 30-40%; the resulting volume fluctuations also produce a fluctuation signal and must therefore be investigated carefully [138,168]. The effects of detector (in)efficiency have been extensively investigated in simulations. Corrections are applied on a bin-by-bin and event-by-event basis in 240 phase-space bins to account for the dependence of the reconstruction efficiency on rapidity, transverse momentum, and detector load, assuming binomial efficiency loss [38,158,169]. Event-by-event fluctuations of the detector efficiency are of order 10-15% in HADES and must be taken into account. A linear model, adjusted phase-space bin by bin to simulated events, is used to recalculate the efficiency in each event and for each bin. These event-by-event efficiencies are incorporated into the efficiency correction scheme of Kitazawa [158]. Unfolding techniques based on a simulated 2D detector response matrix using regularization or singular value decomposition (SVD) schemes have also been investigated. Furthermore, the newly proposed moment-expansion scheme [40] has been implemented and tested. Both in simulations and in data, the three methods agree well for the first (mean), second (variance), and third-order (skewness) moments, and still reasonably well at fourth order (kurtosis). Details of the procedures are given in Ref. [170].
The analyzed data set consists of 2 · 10 8 Au+Au events at √ s NN = 2.41 GeV. Protons from weak decays are not fully subtracted, but partially suppressed by track vertex cuts. However, since strangeness production is subthreshold at SIS energies, these protons contribute on the < 10 −3 level. Contributions from spallation protons are being investigated in GEANT3 simulations, but are expected to be weak due to the fixed target setup and the low beam energies used. Event pile-up is at a < 2 · 10 −4 level.
In addition to proton number fluctuations, HADES can measure net-charge fluctuations by considering the free and bound protons as well as the charged pions. (Strangeness production is subthreshold and does not contribute much to the charge.) In the future, fluctuation observables will be analyzed in the recently recorded high-statistics data sets: 5 · 10 8 Ag+Ag collisions at √ s NN = 2.41 GeV and 6.5 · 10 9 events at √ s NN = 2.55 GeV. Within the FAIR phase-0 stage HADES may also request a more complete energy scan at SIS18 with Au beams of 0.2-1.0 GeV/u to probe the liquid-gas phase transition region. Finally, beyond 2027, HADES is expected to extend the excitation function of fluctuation signals at SIS100 using various heavy-ion beams of 3.5 GeV/u.

Fluctuation measurements in NA61/SHINE
The NA61/SHINE experimental program encompasses a diverse set of colliding beams (p+p, p+Pb, Be+Be, Ar+Sc, Xe+La, Pb+Pb) over a wide range of beam momenta from 13A to 150A GeV/c. The detector offers a unique opportunity to study a wide range of fluctuation observables due to its high tracking efficiency (> 90% down to p T = 0 GeV/c) and particle identification capabilities.
The impact of volume fluctuations on fluctuation observables is reduced by analyzing only the most central collisions, in which few spectators are detected in the Projectile Spectator Detector, thus maximizing the number of wounded nucleons participating in the collision and limiting its fluctuations. Furthermore, the use of strongly intensive quantities [127][128][129] is preferred; within models of independent particle sources, for example the Wounded Nucleon Model [130] or the grand-canonical ensemble of an ideal gas of Boltzmann particles, these are independent of the volume and of volume fluctuations.
The intensive quantity ω as well as strongly intensive observables constructed from the transverse momentum and the multiplicity have been measured across a wide range of collision systems and energies. Additionally, a systematic study of intermittency has been performed. For a summary of recent results, see [171].
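The volume-fluctuation independence of strongly intensive quantities in independent-source models can be checked numerically. The sketch below (Python) uses a toy superposition model and one common definition, Σ[A,B] = [⟨B⟩ω_A + ⟨A⟩ω_B − 2(⟨AB⟩ − ⟨A⟩⟨B⟩)]/(⟨A⟩ + ⟨B⟩); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_AB(A, B):
    """Strongly intensive Sigma[A,B] in one common normalization."""
    wA = A.var() / A.mean()
    wB = B.var() / B.mean()
    cov = np.cov(A, B)[0, 1]
    return (B.mean() * wA + A.mean() * wB - 2 * cov) / (A.mean() + B.mean())

def sample(n_events, source_dist):
    """Independent-source model: each source emits Poisson A and B."""
    n_src = source_dist(n_events)
    A = rng.poisson(2.0 * n_src)   # sum over sources of Poisson(2.0)
    B = rng.poisson(1.0 * n_src)   # sum over sources of Poisson(1.0)
    return A, B

# Fixed number of sources vs. strongly fluctuating number of sources.
A1, B1 = sample(200_000, lambda n: np.full(n, 50))
A2, B2 = sample(200_000, lambda n: rng.poisson(50, n) + rng.integers(0, 30, n))

# The scaled variance omega is volume-fluctuation sensitive,
# while Sigma[A,B] stays ~ 1 in both ensembles.
print(A1.var() / A1.mean(), A2.var() / A2.mean())
print(sigma_AB(A1, B1), sigma_AB(A2, B2))
```

In the superposition model ω_A = ω_a + ⟨a⟩ω_N grows with the source-number fluctuations, while the combination Σ[A,B] cancels the ω_N term exactly, which the sketch reproduces.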

Intensive observables
The scaled variance, ω, has been measured in p+p, Be+Be, and Ar+Sc collisions across a range of energies [172][173][174]. In p+p interactions, and also in Be+Be collisions, the multiplicity fluctuations are larger than predicted by statistical models. However, they are close to the statistical model predictions for large-volume systems in central Ar+Sc and Pb+Pb collisions [175]. The observed rapid change of hadron production properties that starts when moving from Be+Be to Ar+Sc collisions may be interpreted as the beginning of the creation of large clusters of interacting matter, or the onset of the fireball [176].

Strongly intensive observables
Strongly intensive observables of the type [P T , N ], constructed from the transverse momentum and the multiplicity, have been measured for both positively and negatively charged hadrons in p+p, Be+Be, and Ar+Sc collisions at beam momenta of 20A, 31A, 40A, 80A, and 158A GeV/c [177]. In this scan of the phase diagram, no "fluctuation hill", i.e. no increase of fluctuations that would indicate the presence of a critical point, has been observed.

Future common standards to be followed by experiments
As the experimental measurements of higher-order moments are extremely challenging and sensitive, and high precision is needed for informative comparisons to theoretical calculations, it is important to establish a set of basic quality checks that can be performed in order to give confidence in the results. As a starting point, some suggestions are listed here:
1. Since the correction for detector (in)efficiency is a critical part of a higher-moments analysis, it is important to verify the underlying assumptions implicit in the correction procedure. In particular, if the correction procedure relies on the assumption that particle loss occurs according to a binomial distribution (as in Refs. [38,169]), then it should be demonstrated that the detector response is, in fact, binomial. If it is not, the effect of deviations from binomial track loss should be investigated in, for example, a Monte Carlo closure test (see below).
2. At the LHC and at top RHIC energies, where stopped baryons lie at far forward rapidities and not within the experimental acceptance, the odd net-particle moments (K 1 , K 3 , K 5 , ...) should be zero. Experimental verification of this is critical to determine whether the detector efficiency correction procedure is under control.
3. A "Monte Carlo closure test" is an extremely useful tool for validating the accuracy of an analysis procedure. To check for closure, the observable of interest is calculated in a Monte Carlo model using generator-level particles. The generated events are then passed through a simulated model of the detector and the subsequent data reconstruction chain, and the analysis is run on the reconstruction-level particles (tracks, etc.), just as it is on real data. If the full analysis chain and all correction procedures are working well, then the corrected reconstruction-level results should match those obtained from the generator-level particles. Note that, to first order, it does not matter whether the Monte Carlo reproduces the physics seen in the real data: the closure test validates only the analysis method, not the physics. As such, it is a necessary but not sufficient proof of the robustness of an analysis.
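The binomial-loss assumption and the closure test can be illustrated together in a toy setup: generate events with a known multiplicity distribution, apply binomial track loss, and correct the measured factorial moments by powers of the efficiency (the correction principle used in Refs. [38,169]). The sketch below (Python) uses a single-species negative-binomial multiplicity and an assumed efficiency; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
eff = 0.65  # assumed binomial detection efficiency

# "Generator level": proton multiplicity with non-Poisson fluctuations.
true_n = rng.negative_binomial(20, 0.4, size=1_000_000)

# "Reconstruction level": binomial track loss.
meas_n = rng.binomial(true_n, eff)

def factorial_moments(n, kmax=2):
    """F_k = <n(n-1)...(n-k+1)> for k = 1..kmax."""
    f = []
    prod = np.ones_like(n, dtype=float)
    for k in range(1, kmax + 1):
        prod *= n - (k - 1)
        f.append(prod.mean())
    return f

# Binomial loss scales the k-th factorial moment by eff**k, so dividing
# these factors out recovers the generator-level moments (and cumulants).
F1, F2 = factorial_moments(meas_n)
F1c, F2c = F1 / eff, F2 / eff**2

C1_corr = F1c
C2_corr = F2c + F1c - F1c**2  # variance from factorial moments

print(C1_corr, true_n.mean())   # closure check for the mean
print(C2_corr, true_n.var())    # closure check for the variance
```

If the detector response were not binomial, the corrected and generator-level values would disagree, which is exactly what such a closure test is designed to expose.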

Fluctuations in atomic gases and other related systems
Fluctuations and correlations play an important role in many other systems, ranging from the microscale, as in ultra-cold atomic gases and condensed matter systems, to the largest structures in the universe, galaxies and clusters of galaxies. Indeed, the idea that fluctuations can serve as a signal of critical behavior goes back to the explanation, by Smoluchowski and Einstein, of critical opalescence near the endpoint of the liquid-gas phase transition in water in terms of large fluctuations. Since then, fluid mixtures and condensed matter systems have served as important testing grounds for ideas about dynamical critical phenomena and critical transport. During the RRTF meeting, connections between fluctuation studies in heavy-ion collisions and in other physical systems were discussed. In this section we address possible avenues for testing the ideas presented in the previous sections in more controlled, table-top settings. We also discuss how ideas about critical behavior in heavy-ion collisions may motivate new experiments in atomic or condensed matter physics.
An important model system is given by ultra-cold atomic quantum gases. Atomic gases allow for a great amount of control, both in terms of the ability to select the initial state and in terms of the ability to tune the strength of the interaction between the atoms. Atomic systems can be exposed to a variety of external probes and monitored in real time. Experiments can be performed in equilibrium and in conditions that are far away from thermal equilibrium.
Within atomic gases the BCS/BEC crossover in dilute atomic Fermi gases has received particular attention. Because the systems are dilute, details of the atomic structure are not important, and we can describe the gas as being composed of non-relativistic, point-like, spin 1/2 fermions that interact via zero-range forces. This force can be tuned using Feshbach resonances to cover the range between weakly attractive (the Bardeen-Cooper-Schrieffer, BCS, regime) to very strongly attractive (the Bose-Einstein condensation, BEC, limit). In the BEC limit the system forms tightly bound pairs with weak residual interactions. As a result the dilute Fermi gas is most strongly correlated at the BCS/BEC crossover. A special case arises when a bound state first appears in the two-body spectrum. In that case the s-wave scattering length is infinite, and the system exhibits non-relativistic scale invariance. This is known as the unitary limit, because the s-wave scattering length saturates the unitarity bound.

Equilibrium fluctuations and correlations
The unitary Fermi gas has played an important role in our understanding of nearly perfect fluidity because, like relativistic heavy-ion collisions, it exhibits strong elliptic flow when released from a spatially inhomogeneous initial state [15,181,182]. Indeed, this phenomenon can be analyzed in much the same way that flow is analyzed in heavy-ion collisions, using the known equation of state and viscous fluid dynamics.
More detailed information is provided by two-point correlation functions. The simplest correlator is the dynamic structure factor
S_nn(ω, k) = ∫ dt d³x e^{iωt − ik·x} ⟨δn(t, x) δn(0, 0)⟩ .
Here, δn(t, x) = n(t, x) − n̄ with n̄ = ⟨n(t, x)⟩ is a fluctuation in the particle density. The zero frequency limit of S_nn(ω, k) is known as the static structure factor. A closely related quantity is the retarded response function R_nn(ω, k), which determines the density change induced by a weak external potential coupled to the density. In thermal equilibrium the symmetric correlation function and the retarded response function are related through a fluctuation-dissipation relation,
S_nn(ω, k) = coth(ω/2T) Im R_nn(ω, k)
(up to convention-dependent factors).
The response function of ultra-cold gases has been measured using Bragg scattering [183]. In these experiments one uses two crossed laser beams, where the differences in frequency and wave number between the two beams determine the (ω, k) at which the response is measured. The two laser beams drive two-photon transitions in which one photon is absorbed from the first beam and the second photon is emitted into the second beam. The rate is proportional to the response function. Early measurements focused on the tail of the structure factor, which is a measure of the short-range structure of the many-body wave function. In the unitary Fermi gas, the short-range correlations can be characterized in terms of a quantity known as the contact density [184]. First-generation measurements suffered from the fact that the average density of the cloud was not constant, so that Bragg spectroscopy measures an average of the structure factor over the density profile of the atomic gas. More recently, experimentalists have succeeded in generating confining box potentials in which the equilibrium density is approximately constant, so that the response of a homogeneous gas can be studied. Recent experiments have also investigated the dynamic structure factor in the hydrodynamic regime, where we expect the response to be dominated by the Rayleigh (diffusive) and Brillouin (sound) peaks.
For this purpose a homogeneous gas is perturbed by a time and space-dependent external potential, and the density response is measured by taking images of the cloud. Experiments by groups at North Carolina State University and MIT [185,186] have demonstrated that this method can be used to extract the sound attenuation constant of the gas from the width of the Brillouin peak in the response function.
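In the hydrodynamic regime the structure factor takes a Landau-Placzek-like form: a diffusive Rayleigh peak of width ∼ D_T k² at ω = 0 and Brillouin peaks at ω = ±c_s k whose width is set by the sound attenuation constant. A sketch (Python; all parameter values and the relative peak weight are illustrative) of how the peak positions and widths encode the transport coefficients:

```python
import numpy as np

# Schematic hydrodynamic structure factor: Rayleigh peak at w = 0
# plus Brillouin peaks at w = +- c_s k.  Parameter values are toy values.
cs, k = 1.0, 0.5          # sound speed, wave number
D_T = 0.3                 # thermal diffusivity (Rayleigh width ~ D_T k^2)
Gamma = 0.2               # sound attenuation (Brillouin width ~ Gamma k^2)
ratio = 0.4               # weight of the Rayleigh peak

def S_nn(w):
    rayleigh = ratio * D_T * k**2 / (w**2 + (D_T * k**2)**2)
    brillouin = sum(
        (1 - ratio) / 2 * Gamma * k**2
        / ((w - s * cs * k)**2 + (Gamma * k**2)**2)
        for s in (+1, -1))
    return rayleigh + brillouin

w = np.linspace(-2, 2, 4001)
S = S_nn(w)
# The Brillouin peak position tracks c_s k; its width tracks Gamma k^2,
# which is how the sound attenuation constant is extracted in practice.
print(w[np.argmax(S * (w > 0.2))])  # near c_s * k = 0.5
```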
Static fluctuations of atomic gases can be measured more directly, by studying intensity fluctuations in absorption images of the cloud. This method was explored by a group at MIT [187], which demonstrated that Poissonian fluctuations in the density are suppressed in the quantum degenerate regime.

Fluctuations and transport phenomena in critical systems
The phase diagram of the dilute Fermi gas has a second order superfluid phase transition in the whole BCS/BEC regime. In the BCS limit this phase transition occurs at an exponentially small temperature, and is difficult to observe. In the strong coupling regime the critical temperature is of the same order as the degeneracy temperature T F , and the phase transition has been studied in some detail. In particular, experiments have observed the predicted critical behavior in the specific heat [188].
It would be interesting to extend the fluctuation measurements discussed in the previous section to the critical regime. In a harmonically trapped gas the temperature is constant but the density varies as a function of position, so that only a small part of the cloud is critical. This means that we do not expect to see a strong enhancement of fluctuation probes. However, as discussed above, recent experiments have employed box potentials. In connection with the heavy-ion program it would be particularly interesting to see if non-Gaussian density fluctuations are observable.
Note that the order parameter for the superfluid transition is not the density but the phase of the condensate. Gradients of the phase correspond to the superfluid velocity, which is difficult to measure. However, the density is a conserved quantity and the coupling to the critical equation of state is determined by thermodynamic relations. In liquid helium density fluctuations are small compared to temperature fluctuations [189]. This issue has not been carefully studied in cold gases, but the measured compressibility does show an enhancement near the critical temperature [188].
In cold gases it is also possible to measure the momentum distribution, which is defined as the spatial Fourier transform of the off-diagonal density matrix,
n_k(x, t) = ∫ d³y e^{ik·y} ⟨ψ†(x + y/2, t) ψ(x − y/2, t)⟩ .
This quantity can be determined using radio-frequency (RF) spectroscopy or time-of-flight analysis [190]. In RF spectroscopy an external RF source drives a transition from one of the trapped spin states to a non-interacting state (a state that can be thought of as a third spin component). The total absorption rate is proportional to the off-diagonal density matrix of the interacting state. In time-of-flight analysis the gas is rapidly swept to the non-interacting regime, in which the momentum distribution can be measured by simple expansion experiments. These methods have been used to study the large-momentum tail of the momentum distribution, but they have not been used to study critical fluctuations with |k| ∼ ξ −1 , where ξ is the correlation length.
The dynamical theory of critical behavior predicts that transport coefficients exhibit critical scaling in the vicinity of a phase transition [78]. There are some differences between the quark gluon plasma and ultra-cold gases. The critical endpoint in the QCD phase diagram is expected to be governed by model H in the classification of Hohenberg and Halperin [79], whereas the superfluid transition in ultra-cold gases is expected to be in the same universality class as the λ-transition in liquid helium (model F). However, both models predict a very weak singularity in the shear viscosity, and stronger effects in the thermal conductivity. QCD may also exhibit a strong divergence in the bulk viscosity [191], whereas the unitary Fermi gas is scale invariant and its bulk viscosity vanishes.
Critical behavior in the sound attenuation constant has been observed in liquid helium [189], but recent measurements in the unitary Fermi gas performed by the MIT group do not show any non-analytic behavior [186]. It would be interesting to understand why this is the case. The experiment was performed in a box potential, with N ∼ O(10 5 ) atoms. If this number is too small to observe critical transport, then this observation would clearly hold important lessons for heavy-ion collisions.
The dynamical theory of critical phenomena also predicts enhanced long-time tails and critical slowing down. These phenomena should be visible in the long-time response of the density to an applied external potential, similar to what was done in the experiments of the North Carolina State University group [185]. However, so far no dedicated experiment of this type has been performed. This is an interesting problem beyond its relevance to the heavy-ion program, because aside from checks of the scaling behavior of the attenuation rate [78] there are no direct measurements of long-time tails in the literature.

Dynamical evolution in critical systems
Ultimately, one may envision using cold atomic gases as a testbed for dynamical theories of the time evolution of a near-critical system. As mentioned above, experiments have studied the time evolution of elliptic flow in a unitary gas released from a deformed harmonic trap [181]. These experiments can be analyzed using Navier-Stokes hydrodynamics, although care has to be taken in order to take into account effects of the dilute corona, which does not behave fluid dynamically [192].
Existing experiments cover the critical regime, but if the gas is released from a harmonic trap then the fraction of the gas that is critical at any point in time is always small. One might try to address this issue by releasing the cloud from a trap with a flat bottom, or by seeding fluctuations in the initial state. This would also correspond to an initial state that more closely resembles the Glauber initial conditions in a heavy-ion collision.
The superfluid transition is a second order phase transition in the entire BCS/BEC crossover regime. First order transitions appear in spin imbalanced gases or Bose/Fermi mixtures. A typical example is a spin imbalanced cloud in the unitary limit. At sufficiently low temperature the center of the cloud is a fully paired (spin balanced) superfluid state, separated by a first order transition from a polarized corona in the normal fluid state. There are some studies of collective oscillations in a spin imbalanced cloud [193], but the expansion after release from a harmonic trap has not been studied. The expectation is that the cloud would remain fluid dynamical, and the first order discontinuity expands with the gas. In order to study spinodal decomposition one might consider quenching the gas, for example by sweeping the scattering length across the first order transition.

Other physical systems
A classic nuclear system in which fluctuations have been investigated is the nuclear liquid-gas phase transition. Multi-fragmentation experiments indicate that the endpoint of the liquid-gas phase transition occurs at a temperature T c = 17.9 ± 0.4 MeV and a baryon density n c = 0.06 ± 0.01 fm −3 [194]. These numbers come from model fits to the fragment distribution for different system sizes and energies that explore the spinodal region. Model analyses suggest significant equilibrium fluctuations of the nucleon number close to the endpoint [195], which may have an influence on the higher-order cumulants of the net-proton number distribution in relativistic heavy-ion collisions [196]. It remains somewhat unclear what the correct dynamical theory of this transition is, and whether dynamical fluctuations of the type discussed in this report play an important role. It would appear that the criteria for the applicability of stochastic fluid dynamics or related theories are not met, because the mean free path is too long and the system size and lifetime are too small. However, researchers have investigated kinetic theories that include mean field potentials and stochastic forces [197].
Neutron star mergers explore even higher temperatures in the baryon rich regime. Temperatures as high as T ∼ 100 MeV and chemical potentials of μ Q ∼ 400 MeV might be reached [198,199] and could potentially provide an astrophysical test for the existence of a first-order phase transition in the QCD phase diagram [200,201] and, thus, an indirect proof of a critical point. The first-order region may also be accessible in future heavy-ion collision experiments, such as HADES and CBM at FAIR, or the STAR fixed-target program in the RHIC BES. For a discussion of the chemical freeze-out conditions in the low-energy region see e.g. [202].

Summary and outlook
In this report we summarize the presentations and discussions at the EMMI Rapid Reaction Task Force "Dynamics of critical fluctuations: Theory -- phenomenology -- heavy-ion collisions", held at GSI, Darmstadt, Germany in April 2019. Both theoretically and experimentally this field is developing actively, and many discussions still have an exploratory character. This is reflected in the diversity of the approaches and models presented in this report.
Efforts to study the QCD phase diagram using fluctuation observables have to go hand in hand with developing a fully dynamical treatment of the fluctuations, both critical and noncritical. Non-critical fluctuations provide a crucial baseline, and without understanding this part of the dynamics we cannot reliably address critical behavior. Only after a reliable framework for treating the dynamics of fluctuations has been developed can we hope to constrain critical behavior in the thermodynamics of QCD based on experimental observables.
On the theory side, two main avenues emerged in the discussions, both relying on the success of fluid dynamical simulations of heavy-ion collisions: Stochastic fluid dynamics and hydrokinetics. The first propagates fluid dynamical fluctuations explicitly in an event-by-event setup, while the second propagates correlation functions, which are already averages over thermal fluctuations, coupled to fluid dynamics.
Understanding the dynamics of thermal fluctuations is important even if one is not specifically interested in the vicinity of a critical point. Indeed, fluctuation-dissipation relations imply that a consistent treatment of dissipative fluid dynamics always has to include fluctuations. The critical point can then be included in the framework via an appropriate equation of state.
During the discussions we identified the following three aspects which deserve major theoretical attention:
• Are the relevant scale relations, which separate the thermal noise, non-equilibrium fluctuations, and the fluid dynamical evolution, fulfilled in a heavy-ion collision? This question is especially important when it comes to technical length scales, like the numerical regulator l_filter or the patch size for particlization l_patch.
• How can stochastic fluid dynamics be properly regulated such that the averages and the fluctuation observables are independent of the length scale l_filter but the essential critical physics is preserved?
• How do we interface stochastic fluid dynamics with hadronic afterburners? In particular, how can we particlize fluctuations and correlations of conserved charges? Are there schemes that can be applied to both stochastic fluid dynamics and hydro-kinetics? What is the shortest length scale that can be resolved in such a procedure?
In particular the last aspect is crucial for connecting theoretical approaches with experimental measurements. While stochastic fluid dynamics, hydro-kinetics, and related methods predict correlation functions of conserved densities in coordinate space, experiments measure distributions and correlation functions of emitted particles in momentum space. It is therefore essential to clarify how the two are related, or, more precisely, how the latter constrain the former. Specifically, the Cooper-Frye particlization prescription needs to be generalized to properly account for multi-particle correlations while respecting the relevant conservation laws locally. While developments in this direction have started, the problem is far from settled and further efforts are encouraged.
The key experimental challenges which were discussed are:
• What are the underlying physical phenomena that affect the dependence on the rapidity window? Experiments only resolve the rapidity structure of fluctuations at freeze-out. How do correlations in rapidity evolve over the course of the collision?
• What is the influence of hadronic resonance decays on the final observable? Experiments cannot directly measure fluctuations of certain quantities, such as net-baryon number, and standard proxies, like the net-proton number, are affected by resonance decays and hadronic rescatterings.
• Which observables are the most sensitive to criticality? What are their advantages and disadvantages in terms of experimental feasibility and of theoretical accessibility from first-principles calculations and dynamical models?
Intersections with other physical systems, notably ultra-cold atomic gases, provide opportunities for fluctuation studies. Here, ideas could be tested in more controlled settings or motivate new experiments in atomic and condensed matter physics, which could help validate dynamical theories describing the dynamics of fluctuations in heavy-ion collisions.

CRediT authorship contribution statement
This was a team effort of all the listed authors. The Rapid Reaction Task Force "Dynamics of critical fluctuations: Theory -phenomenology -heavy-ion collisions" was organized by M. Bluhm

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

A.1. Hydro-kinetics of fluctuations in an expanding background

(Fig. 10: sketch of the separation of scales underlying the effective kinetic description of hydrodynamic fluctuations; ε ≪ 1 is the hydrodynamic gradient expansion parameter. Figure taken from [203].)
In this model we consider the effective kinetic description of hydrodynamic fluctuations [66,70,203], which is based on the separation of scales (see Fig. 10) between small wave-number hydrodynamic modes, which are never in thermal equilibrium and are determined by the initial conditions, and modes with large wave numbers k > k* = 1/(c_s τ √ε), which are damped and excited at a rate comparable to the background expansion rate ∂_μ u^μ = 1/τ. Here ε stands for the hydrodynamic expansion parameter, ε = l_mfp/(c_s τ) ≪ 1.
The equations of motion of relativistic hydrodynamics with noise are given by the conservation of the energy-momentum tensor together with stochastic constitutive equations. For linear perturbations in energy and momentum, φ_a(τ, k) ≡ (c_s δe, g), around a homogeneous Bjorken-expanding background one can derive a Langevin-type equation [66,70],
∂_τ φ_a(τ, k) = −iL_ab φ_b − D_ab φ_b + ξ_a ,
where L_ab describes the ideal (acoustic) propagation, D_ab the viscous damping, and ξ_a the thermal noise; the explicit form of these matrices is given in Refs. [66,70]. The fluctuations can be decomposed into two propagating sound modes and two transverse diffusive modes. The two-point correlation functions N_AB for these eigenmodes, defined via ⟨φ_A(τ, k) φ*_B(τ, k′)⟩ = N_AB(τ, k) (2π)³ δ³(k − k′), then satisfy relaxation-type kinetic equations of the schematic form
∂_τ N_AB = −2γ_η K² (N_AB − N_AB^eq) + (expansion terms ∼ N_AB/τ) ,
where γ_η = η/(e_0 + p_0) and K ≡ (k_x, k_y, k_η/τ). The two-point correlations relax to the instantaneous thermal equilibrium value at a rate ∼ γ_η K² and are driven away from equilibrium by the expansion terms ∼ 1/τ.
In the presence of hydrodynamic noise, the effective long-wavelength energy-momentum tensor is modified by contributions coming from the two-point correlators of out-of-equilibrium noise at the scale k ∼ k*. Specifically, the energy and the longitudinal pressure are increased by the non-linear contributions of the momentum fluctuations g = (T^τx, T^τy, τ T^τη). This non-linear noise expectation value can be written as an integral over the phase space of hydrodynamic modes, which is divergent due to the equilibrium expectation value of N_AB and the leading large-K² expansion term [66]. Regulating the integral by a UV cutoff Λ, the universal divergent terms can be computed explicitly and agree with previous computations using diagrammatic approaches [64]. The divergent contributions reflect the fact that the initial bare pressure and viscosity are also cutoff dependent, but their sum is independent of Λ. After absorbing the divergent terms into the physical pressure and viscosity, a finite correction to the longitudinal pressure remains. This finite correction (also known as a long-time tail) comes with a characteristic fractional power, which can be understood from a simple estimate of the phase space of modes around the critical scale k* together with the equipartition of energy: modes with k ≲ k* ∼ (γ_η τ)^{−1/2} each carry an energy ∼ T, so the correction scales as ∼ T k*³ ∼ T/(γ_η τ)^{3/2}.
In the presence of noise the evolution of the average energy density of the system obeys
∂_τ ⟨⟨T^ττ⟩⟩ = −(⟨⟨T^ττ⟩⟩ + ⟨⟨τ² T^ηη⟩⟩)/τ ,
where the double brackets denote an average over (long range in rapidity) initial conditions and thermal noise. To close the system of equations, the relationship between the average energy density ⟨⟨T^ττ⟩⟩ and the average rest-frame energy density e(τ) must be specified [66].
This relation receives contributions from the two-point correlation functions of the noise, and therefore the simultaneous solution of the background hydrodynamic and the hydro-kinetic equations is required for an effective description of hydrodynamics with noise. The presented framework of hydro-kinetic equations is a general and extendable way of calculating the physics of out-of-equilibrium noise in expanding systems. We successfully reproduce the universal renormalizations of the bare energy, pressure, and shear viscosity η, in agreement with previous diagrammatic calculations for conformal systems. We also calculate corrections to the bulk viscosity ζ in non-conformal systems due to hydrodynamic fluctuations. The hydro-kinetic equations represent an alternative way of studying hydrodynamics with noise and can be profitably applied to a wide range of hydrodynamic systems (see the recent works [67,71,99]).
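The competition between damping and expansion that underlies the hydro-kinetic description can be illustrated with a minimal numerical sketch. The toy relaxation equation below (arbitrary units; the single-mode treatment, the factor of 2, and the simplified −N/τ expansion term are illustrative assumptions, not the full equations of [66,70]) shows how a hard mode tracks the instantaneous equilibrium while a soft mode stays frozen near its initial condition:

```python
import numpy as np

def evolve_mode(K, gamma_eta, N0, Neq, tau0, tau1, dtau=1e-4):
    """Euler integration of a schematic relaxation equation for a single
    two-point function N(tau, K):
        dN/dtau = -2*gamma_eta*K**2 * (N - Neq) - N/tau
    The -N/tau term is a simplified stand-in for the Bjorken expansion terms."""
    tau, N = tau0, N0
    while tau < tau1:
        N += (-2.0 * gamma_eta * K**2 * (N - Neq) - N / tau) * dtau
        tau += dtau
    return N

# A hard mode (damping rate >> 1/tau) is kept close to equilibrium,
# while a soft mode lags behind and remembers its initial condition.
N_hard = evolve_mode(K=20.0, gamma_eta=0.5, N0=0.0, Neq=1.0, tau0=1.0, tau1=5.0)
N_soft = evolve_mode(K=0.05, gamma_eta=0.5, N0=0.0, Neq=1.0, tau0=1.0, tau1=5.0)
```

The modes near K ∼ k* interpolate between these two limits, which is precisely where the out-of-equilibrium contributions to the effective pressure originate.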

A.2. Stochastic diffusion of critical net-baryon density fluctuations
The free energy functional near the QCD critical point studied in the numerics [76,77], which model the dynamics of critical fluctuations in n_B (see Section 2.3), has the polynomial form

F[Δn_B] = ∫ d³x [ (K/2)(∇Δn_B)² + (m²/2)(Δn_B)² + (λ_3/3)(Δn_B)³ + (λ_4/4)(Δn_B)⁴ + (λ_6/6)(Δn_B)⁶ ] ,   (72)

where Δn_B = n_B − n_c denotes the deviation of n_B from the critical density n_c. In general, this Ginzburg-Landau form of F still needs to be supplemented by further regular contributions in line with the equation of state.
The coefficients in Eq. (72), i.e. their scaling with the correlation length ξ, may be obtained by mapping the 3-dimensional Ising model onto a universal effective potential [204,205]. In this way criticality is included in the approach in line with the assumed underlying static universality class, which fixes the dependence of the couplings on ξ/ξ_0, with ξ_0 the correlation length far away from the transition temperature T_c. The inclusion of the term proportional to λ_6 in Eq. (72) turns out to be important for understanding Monte Carlo simulation results for the probability distribution in the Ising model [204,205]. Moreover, the term proportional to K represents a surface-tension contribution to F. Special limits of the free energy functional are the Gauss model [74], with K = λ_i = 0, and the Gauss+surface model [75,77], with λ_i = 0.
In general, the dimensionless couplings λ_i are universal. Their values in the QCD phase diagram can be determined by translating the Ising model variables to T and μ_B in QCD and comparing the cumulants of the critical mode, as explained in [206]. This translation of variables is, however, non-universal and introduces uncertainties into the description. In the numerics [76,77], constant values of the non-linear couplings λ_i were used for simplicity, see Section 2.3. The free energy functional F still depends implicitly on the thermal variables via the correlation length ξ. This dependence is obtained by comparing the variance of the critical mode from the effective potential with the one obtained from the parametric representation [207] of the scaling equation of state of the Ising model, as explained in [208]. In the numerics [76,77], ξ is considered as a function of T at constant μ_B close to the critical point on the crossover side.
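The effect of the non-linear couplings in Eq. (72) on the cumulants of the critical mode can be illustrated with a zero-dimensional toy version of the free energy functional, in which a single mode is distributed as P(x) ∝ exp(−F(x)) with F(x) = m²x²/2 + λ₄x⁴/4 + λ₆x⁶/6 (arbitrary units, cubic coupling set to zero; this is a sketch, not the simulation of [76,77]):

```python
import numpy as np

def cumulants(m2, lam4, lam6):
    """kappa_2 and kappa_4 of a single mode distributed as
    P(x) ~ exp(-F(x)), F(x) = m2*x**2/2 + lam4*x**4/4 + lam6*x**6/6
    (cubic coupling set to zero, so odd moments vanish)."""
    x = np.linspace(-10.0, 10.0, 200001)
    dx = x[1] - x[0]
    w = np.exp(-(0.5 * m2 * x**2 + 0.25 * lam4 * x**4 + lam6 * x**6 / 6.0))
    w /= w.sum() * dx                       # normalize the distribution
    mu2 = (w * x**2).sum() * dx
    mu4 = (w * x**4).sum() * dx
    return mu2, mu4 - 3.0 * mu2**2          # kappa_2, kappa_4

k2_gauss, k4_gauss = cumulants(1.0, 0.0, 0.0)   # Gauss model limit
k2_full, k4_full = cumulants(1.0, 1.0, 0.5)     # with non-linear couplings
```

In the Gauss limit the excess kurtosis κ₄ vanishes; positive quartic and sextic couplings suppress the variance and drive κ₄ negative, showing why the non-Gaussian terms are essential for higher-order cumulants.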

A.3.1. Quark-meson model
The Lagrangian of the quark-meson model is the foundation of the dynamical nonequilibrium fluid dynamics model. It reads

L = q̄ [ iγ^μ ∂_μ − g(σ + iγ_5 τ·π) ] q + (1/2)(∂_μ σ)(∂^μ σ) + (1/2)(∂_μ π)·(∂^μ π) − U(σ, π) ,   (78)

with the potential U(σ, π) = (λ²/4)(σ² + π² − v²)² − f_π m_π² σ, the light quark doublet q = (u, d), and the chiral condensate σ, which dynamically generates the mass m_q = gσ of the constituent quarks. We fix the quark-meson coupling constant g from the vacuum nucleon mass m = 940 MeV to g = 3.37. The additional parameters used here are the pion decay constant f_π = 93 MeV and the pion mass m_π = 138 MeV. The term proportional to σ in U accounts for the small explicit symmetry breaking due to the finite current quark masses.
The self-coupling constant λ is related to the sigma mass m_σ = 600 MeV through λ² = (m_σ² − m_π²)/(2 f_π²). This model is well studied, and we can immediately write down the mean-field effective thermodynamic potential,

Ω(T, μ; σ) = U(σ) − d_q T ∫ d³p/(2π)³ [ ln(1 + e^{−(E_q−μ)/T}) + ln(1 + e^{−(E_q+μ)/T}) ] ,   (80)

with the degeneracy factor d_q = 12 and the quasiparticle energy E_q = √(p² + m_q²). In this notation, the quark chemical potential μ = μ_B/3 is used.
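As a cross-check of the quark-loop term in Eq. (80), the momentum integral can be evaluated numerically; in the massless, μ = 0 limit it must reduce to the Stefan-Boltzmann pressure p_SB = 7π² d_q T⁴/360. A sketch (the momentum grid and cutoff are numerical choices):

```python
import numpy as np

def pressure_quarks(T, mu, mq, dq=12):
    """Quark + antiquark pressure from the mean-field potential, Eq. (80):
    p = dq*T/(2*pi^2) * Int dp p^2 [ln(1+e^{-(E-mu)/T}) + ln(1+e^{-(E+mu)/T})],
    with E = sqrt(p^2 + mq^2).  Units: MeV."""
    p = np.linspace(0.0, 4000.0, 400001)
    dp = p[1] - p[0]
    E = np.sqrt(p**2 + mq**2)
    integrand = p**2 * (np.log1p(np.exp(-(E - mu) / T)) +
                        np.log1p(np.exp(-(E + mu) / T)))
    return dq * T * integrand.sum() * dp / (2.0 * np.pi**2)

T = 150.0                                          # MeV
p_massless = pressure_quarks(T, mu=0.0, mq=0.0)
p_sb = 12 * 7.0 * np.pi**2 / 360.0 * T**4          # Stefan-Boltzmann limit
p_massive = pressure_quarks(T, mu=0.0, mq=3.37 * 93.0)  # vacuum m_q = g*f_pi
```

A finite constituent quark mass m_q = gσ suppresses the pressure relative to the massless limit, which is the mechanism by which the σ field back-reacts on the fluid.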

A.3.2. Nonequilibrium chiral fluid dynamics
From the quark-meson Lagrangian, Eq. (78), together with the thermodynamic potential, Eq. (80), we are able to obtain the full nonequilibrium dynamics, where we explicitly propagate the chiral order parameter with a Langevin equation of motion derived from the 2PI effective action,

∂_μ∂^μ σ + η ∂_t σ + ∂Ω/∂σ = ξ .   (82)

The damping coefficient η arises from the reaction σ ↔ q̄q and has been evaluated from the imaginary part of the sigma self-energy; it is nonzero where the decay is kinematically allowed, m_σ > 2m_q. The stochastic noise term ξ is assumed to be Gaussian and white, and its width is determined by the dissipation-fluctuation relation

⟨ξ(t, x) ξ(t′, x′)⟩ = δ(x − x′) δ(t − t′) m_σ η coth(m_σ/2T) .

To avoid unphysical dependences on the lattice spacing, we model a spatial correlation of the noise field over a correlation length of 1/m_σ. Here the sigma mass, as a function of temperature and chemical potential, is determined by the curvature of the thermodynamic potential in equilibrium,

m_σ² = ∂²Ω/∂σ² |_eq .

The locally equilibrated quark plasma acts as a heat bath in which the field σ evolves. The local pressure of this heat bath is given by p(T, μ; σ) = −Ω_q̄q(T, μ; σ), allowing us to calculate the local net-baryon and energy densities in the standard fashion as n = ∂p/∂μ and e = T ∂p/∂T + μ n − p. As the total energy and momentum of the coupled system of fluid and field are conserved, we obtain the following expressions for the divergences of the ideal energy-momentum tensor of the fluid, T^{μν} = (e + p) u^μ u^ν − p g^{μν}, and of the net-baryon current N^μ = n u^μ, with the local four-velocity u^μ:

∂_μ T^{μν} = S^ν = −∂_μ T_σ^{μν} ,   (88)
∂_μ N^μ = 0 .   (89)

It is worth pointing out that the stochastic nature of the source term S^ν on the right-hand side of Eq. (88) makes the evolution of the fluid dynamical medium itself stochastic.

A.3.3. Expanding medium
Modeling an expanding medium can be achieved by defining initial profiles T(x) and μ(x), which determine the equilibrium field configuration and the hydrodynamic quantities, i.e. σ, e, n, p, at t = 0 at each point in space. Spherical or ellipsoidal shapes with a smoothed edge are possible, but so are more realistic initial conditions, which can be obtained e.g. from the UrQMD transport model. The expansion and cooling are then described self-consistently by solving the coupled equations (82), (88) and (89).
If one is not interested in spatial fluctuations, a simpler approach can be used, reducing the dynamics to a Bjorken-type expansion along the beam direction. Contracting Eq. (88) with the four-velocity u_ν gives the equation for the evolution of the energy density,

ė = −(e + p) D/t + u_ν S^ν ,

with the constant D = 1 in the Hubble term. The net-baryon density then follows the equation

ṅ = −n D/t .

As a benchmark test, the equilibration in a box to specific values of T and μ has been verified and shown to reproduce the proper behavior of the cumulants of σ in comparison to the corresponding susceptibilities obtained from functional derivatives.
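For a vanishing source term (i.e. neglecting the back-reaction of the σ field) and a conformal equation of state p = e/3, the D = 1 Hubble term yields the classic Bjorken scaling e ∝ t^{−4/3}, which a few lines of explicit integration confirm:

```python
def bjorken_energy(e0, t0, t1, dt=1e-4):
    """Integrate de/dt = -(e + p)/t with p = e/3 (conformal EoS, D = 1,
    sigma-field source term neglected)."""
    e, t = e0, t0
    while t < t1:
        e += -(4.0 / 3.0) * e / t * dt
        t += dt
    return e

e_num = bjorken_energy(e0=1.0, t0=1.0, t1=10.0)
e_exact = (1.0 / 10.0) ** (4.0 / 3.0)   # analytic Bjorken scaling e ~ t^(-4/3)
```

In the full model the stochastic source u_ν S^ν superimposes noise-driven fluctuations on this smooth cooling trajectory.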
Further observables that have been studied in the past are:
• Trajectories in the phase diagram
• Density fluctuations in single events, azimuthal distributions
• Net-proton fluctuations during a crossover at small μ (after a Cooper-Frye particlization)
• Cumulants of the sigma field during a crossover at small μ
• Production of entropy as a function of the initial condition

A.4. QCD assisted transport
In order to understand the connection between experimental results obtained in heavy-ion collisions and the underlying phase structure of QCD, we require an approach that connects both at a fundamental level. This is a necessary prerequisite for establishing the existence of a critical endpoint (CEP) in the phase diagram of QCD. Transport approaches working in this direction were first put forward in [76,83,84,89,92].
Within the transport approach outlined in this section, as put forward in [92], we require an accurate description of the equilibrium phase structure of QCD and a computation of the required real-time correlation functions. By now there are results for the equilibrium phase structure of QCD at finite density within Euclidean functional approaches, see e.g. [209][210][211][212][213][214]. First results on Yang-Mills real-time correlation functions at finite temperature and transport coefficients have been obtained in [215,216], but an extension of these results for real-time correlation functions to QCD at finite temperature and density is still missing.
Hence, for the time being we resort to their calculation in low energy effective theories. In particular the 2+1 flavour Quark-Meson (QM) model provides a quantitatively reliable description at small chemical potentials. Additionally, it features a CEP that is believed to be in the same universality class as the potential CEP of QCD. The treatment of this effective theory within the framework of the Functional Renormalization Group (FRG) allows for a systematic embedding within QCD, cf. the discussion in [212,217]. Moreover, we are able to obtain correlation functions not only in Euclidean space-time, but also in Minkowski space-time. Based on the equilibrium linear response functions of the low energy effective description, we use the associated transport equation to calculate the cumulants of the critical mode. Therefore this section is split into two parts, the first part describing the calculation of the required equilibrium correlation functions, and the second one focusing on the time evolution of the critical mode around a given set of equilibrium correlation functions.
The FRG is utilized to calculate all equilibrium correlation functions required for the subsequent transport evolution. Being a versatile, first-principles tool, the FRG has been applied successfully to QCD, see [212,213,218-221], and to low-energy effective versions thereof, see e.g. [222,223]. Its advantage in the present context is that it allows for the computation of the phase structure, i.e. the effective potential, and of momentum-dependent correlation functions in a unified framework. The equilibrium part of our work, i.e. the equation of state and the equilibrium correlation functions, is based on a 2+1 flavour study of a low-energy effective description of QCD, in which the dynamics of the constituent quarks as well as of the lowest scalar and pseudoscalar meson nonets, including their wave function renormalizations, is taken into account [223]. It captures, by design, the relevant physical effects at small chemical potential μ and temperatures T ≲ T_c. Additionally, it features a critical endpoint which is in the same static universality class as the one potentially present in QCD. Therefore this model provides a well-suited basis for studying how dynamical non-equilibrium effects manifest themselves in observables.
In general, spectral functions can be obtained either by analytically continuing numerical data, see e.g. [224], or via a direct computation from analytically continued flow equations, see e.g. [225-227]. If possible, the latter is preferred and is also the option utilized in this work. The spectral functions of the sigma meson are calculated similarly to [228,229], with suitable modifications in order to take the non-trivial wave-function renormalizations into account. As a result we have access to the two-point correlator Γ^(2)_σσ(ω, |p|), depending on an external frequency ω and an external momentum p, as well as to momentum-independent vertices Γ^(n)_{σ^n}, which are extracted from the full effective potential computed in [223]. An exemplary spectral function is shown in Fig. 11. The two main features that influence the behaviour of the dynamical evolution are the transport peak and the mass peak. The transport peak, if present at small frequencies ω < |p|, dominates the long-range behaviour of the sigma field. The mass peak instead becomes the driving force of the evolution dynamics when the transport peak is absent, e.g. in the vacuum.
We are now in the position to study the time evolution of the critical mode and its event-by-event fluctuations. For this purpose, we solve a Langevin-type transport equation for σ. Its equation of motion contains a kinetic term related to the real part of Γ^(2)_σσ, a diffusion term sensitive to the imaginary part of Γ^(2)_σσ, and the effective potential mentioned above, while ξ represents the noise field, chosen such that the fluctuation-dissipation balance is guaranteed.
For the numerical results presented in Fig. 3 in Section 2.4 we consider the critical mode to be spatially isotropic, i.e. σ(x, t) = σ(r, t), where we split σ = σ_0 + δσ. We study the time evolution of the critical fluctuations for a system subject to a sudden quench from high temperatures to a specific point in the QCD phase diagram. Accordingly, the system is initialized such that σ(r, t = 0) = 0 and ∂_t σ(r, t = 0) = 0, which implies that the initial fluctuations δσ(t = 0) are of the magnitude of the equilibrium value σ_0 after the quench. Moreover, we consider spatially constant Gaussian white noise with zero mean and a variance fixed by the fluctuation-dissipation relation [83], where the diffusion coefficient η entering the noise variance is extracted from the imaginary part of Γ^(2)_σσ.

[Fig. 11: Exemplary spectral function. The transport peak and the mass peak are associated with the diffusion term in the transport equation. A detailed discussion of the visible structures can be found in e.g. [229]. Figure taken from [92].]
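The qualitative content of the quench setup can be sketched with an overdamped, zero-mode toy version of the transport equation, in which the full kernel Γ^(2)_σσ is replaced by a constant damping coefficient η and a curvature m² (all parameter values are illustrative; the fluctuation-dissipation relation fixes the noise strength to 2ηT):

```python
import numpy as np

def quench_variance(m2=1.0, eta=1.0, T=1.0, dt=1e-3, nsteps=400000, seed=1):
    """Overdamped zero-mode Langevin equation as a toy for the quench:
        eta * dsigma/dt = -m2 * sigma + xi,
        <xi(t) xi(t')> = 2 * eta * T * delta(t - t'),
    starting from sigma = 0; the stationary variance is T/m2."""
    rng = np.random.default_rng(seed)
    kicks = rng.normal(0.0, np.sqrt(2.0 * eta * T * dt), nsteps)
    sigma, out = 0.0, np.empty(nsteps)
    for i in range(nsteps):
        sigma += (-m2 * sigma * dt + kicks[i]) / eta
        out[i] = sigma
    return out[nsteps // 2:].var()   # drop the equilibration transient

var_sigma = quench_variance()        # expected close to T/m2
```

The fluctuations grow from zero toward their equilibrium magnitude on a timescale η/m², mimicking the equilibration of δσ after the quench; a smaller curvature (larger correlation length) gives both larger fluctuations and slower equilibration.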

A.5.1. Including spurious effects near criticality
We describe long-range fluctuations of the order parameter σ by a probability distribution P[σ] ∝ exp(−F[σ]/T). Assuming fluctuations of small amplitude, we can use a Gaussian approximation and keep only the mass term, F[σ] = ∫ d³x (1/2) m_σ² σ², where m_σ ∼ ξ^{−1}. We also assume the fluctuations to be homogeneous and use the zero mode σ_0 = ∫ d³x σ(x)/V, distributed as P(σ_0) ∝ exp(−V m_σ² σ_0²/2T), and couple it to observable particles via mass corrections, i.e. schematically δm_π ∝ G σ_0 and δm_p ∝ g σ_0,
where we illustrate the couplings to pions and protons [81,141]. The pion-sigma coupling can be roughly estimated to be around G ∼ 300 MeV [141].
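In the Gaussian approximation the homogeneous mode has variance ⟨σ_0²⟩ = T/(V m_σ²) = T ξ²/V, so order-parameter fluctuations grow as ξ² near the critical point. A direct quadrature check of this relation (the values of T and V are nominal placeholders):

```python
import numpy as np

def sigma0_variance(T, V, xi):
    """<sigma0^2> under P(sigma0) ~ exp(-V*sigma0^2 / (2*T*xi**2)),
    i.e. the Gaussian weight with m_sigma = 1/xi, by direct quadrature.
    Analytic result: T*xi**2/V."""
    width = np.sqrt(T * xi**2 / V)
    s = np.linspace(-10.0 * width, 10.0 * width, 200001)
    ds = s[1] - s[0]
    w = np.exp(-V * s**2 / (2.0 * T * xi**2))
    w /= w.sum() * ds                  # normalize the distribution
    return (w * s**2).sum() * ds

v1 = sigma0_variance(T=0.15, V=100.0, xi=1.0)   # nominal values
v2 = sigma0_variance(T=0.15, V=100.0, xi=2.0)   # doubled correlation length
```

Doubling ξ quadruples ⟨σ_0²⟩, which propagates, via the mass corrections above, into the critical contributions to particle-multiplicity fluctuations.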
Fluctuations of the order parameter are thus coupled to observable particles and will have an impact, for instance, on fluctuations of particle multiplicities. The effects of these fluctuations can be calculated by looking at the modification of the single-particle energy levels due to fluctuations of the order parameter, i.e. ω_p = √(p² + m²) → ω_p + δω_p with δω_p = (m/ω_p) δm,
where we have used a Taylor expansion in the mass corrections δm induced by fluctuations of the order parameter. Expanding quantities in powers of the shift of the single-particle energies δω_p and taking averages over the fluctuations of σ_0, denoted by ⟨· · ·⟩, one can calculate critical contributions to averages and correlations. For instance, the average of a generic quantity Q can be written as its usual grand-canonical equilibrium average ⟨Q⟩_0 plus critical corrections built from averages of powers of δω_p [108]. Near criticality, the equilibration timescale of the system also diverges with some power of ξ due to critical slowing down, which limits the growth of ξ and, hence, of possible signatures which scale with some power of ξ. This is implemented via the relaxation-rate ansatz of [107,134], in which ξ(t) relaxes toward the equilibrium value ξ_eq(t) = ξ_0 |t/τ|^{−ν/(βδ)} at a rate controlled by a parameter A and suppressed by a power of ξ; here ξ_0 ∼ 1.6 fm fixes the initial correlation length at proper time t = −τ, and τ is the typical cooling time before reaching the neighborhood of the critical point. The critical exponents are given by α = 0.11, ν = 0.63, z = 2 + α/ν, β = 0.326, δ = 4.80, coming from universality-class arguments [78,207]. The parameter A in Eq. (98) can be constrained by imposing causality (i.e. dξ/dt ≤ 1), which constrains ξ/ξ_0 to below 1.3 for τ = 1 fm and below 1.9 for τ = 5.5 fm, significantly restraining signatures of criticality [108]. The statistics measured in collision experiments are contaminated by spurious fluctuations, are modified by acceptance and efficiency limitations, and are not calculated over direct particles only. These effects can be introduced into our calculations in a simple fashion. Effects such as the dynamical expansion of the system are, for now, neglected.
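The interplay between critical slowing down and the growth of ξ can be sketched by evolving the inverse correlation length toward its equilibrium value with a relaxation rate that vanishes as ξ grows. The specific rate form below is an illustrative assumption in the spirit of [107,134], not the precise Eq. (98):

```python
import numpy as np

def max_xi(A, tau=1.0, xi0=1.6, z=2.17, nu=0.63, beta=0.326, delta=4.80, dt=1e-4):
    """Evolve the inverse correlation length m = 1/xi from t = -tau to t = +tau:
        dm/dt = -Gamma(m) * (m - m_eq(t)),  Gamma(m) = (A/xi0) * (m*xi0)**z,
        m_eq(t) = (1/xi0) * |t/tau|**(nu/(beta*delta)).
    The rate form is an illustrative assumption; z = 2 + alpha/nu ~ 2.17."""
    t, m = -tau, 1.0 / xi0
    m_min = m
    while t < tau:
        m_eq = (1.0 / xi0) * abs(t / tau) ** (nu / (beta * delta))
        m += -(A / xi0) * (m * xi0) ** z * (m - m_eq) * dt
        m_min = min(m_min, m)
        t += dt
    return 1.0 / m_min               # maximal correlation length reached

xi_fast = max_xi(A=1.0)              # weaker slowing down: xi grows further
xi_slow = max_xi(A=0.1)              # stronger slowing down: growth cut off earlier
```

Even though ξ_eq formally diverges at t = 0, the dynamically evolved ξ stays finite, and a smaller A (stronger slowing down) cuts the growth off earlier, which is the qualitative content of the causality constraint on A.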
Effects of a limited acceptance window can be implemented in the calculation of multiplicity fluctuations by considering an acceptance probability factor F(p), such that each produced particle of momentum p (in modulus) has a probability F(p) of being detected [108]. For instance, if n_p is the number of particles with momentum p, these kinematic cuts modify ⟨(Δn_p)²⟩ according to ⟨(Δn_p)²⟩_acc = F(p)² ⟨(Δn_p)²⟩ + F(p)[1 − F(p)] ⟨n_p⟩. Resonance decays can be introduced in a similar fashion. For a decay into two particles, we consider the probabilities that one (P_1), both (P_2), or neither (P_0) of the particles produced in a single decay are found in the acceptance window. Results can be shown as a function of the momentum p of the resonance and are calculated by using the phase-space volume as a measure of probability. A branching ratio of less than 100% can be implemented by simply rescaling P_1, P_2 and P_0.
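The binomial folding of the variance can be verified directly. For Poisson-distributed production (variance equal to the mean), the folded variance must again be that of a Poisson distribution with mean F⟨n⟩, as a Monte Carlo cross-check confirms:

```python
import numpy as np

def folded_variance(F, mean_n, var_n):
    """Variance inside the acceptance after binomial folding with probability F:
    var_acc = F^2 * var_n + F*(1 - F) * mean_n."""
    return F**2 * var_n + F * (1.0 - F) * mean_n

lam, F = 20.0, 0.3
formula = folded_variance(F, lam, lam)       # Poisson input: var_n = mean_n

# Monte Carlo cross-check: Poisson production followed by binomial acceptance
rng = np.random.default_rng(7)
n_prod = rng.poisson(lam, size=200000)
n_acc = rng.binomial(n_prod, F)
mc_var = n_acc.var()
```

For non-Poissonian (e.g. critical) input distributions the two terms no longer combine to F⟨n⟩, and the first term shows how acceptance dilutes a critical variance excess by F².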
Finally, spurious fluctuations coming from imperfect control of the freeze-out thermodynamic variables, such as temperature, chemical potential, and volume, can also be included by shifting the one-particle energy levels ω_p. Considering spherically symmetric boundary conditions, for instance, momentum levels are distributed as p_i = α_i/R, where R is the system radius. This means that a geometric fluctuation δR of the radius affects the energy levels through δω_p = −(p²/ω_p)(δR/R). Fluctuations of temperature and chemical potential can likewise be included by introducing an effective energy shift δω_{T,μ}, defined such that (ω + δω_{T,μ} − μ)/T = (ω − μ − δμ)/(T + δT). The results above are then used to calculate the average multiplicity of charged pions, M_{π ch}, and its variance, V_{π ch}, as functions of ξ. We can then compute the percentage by which the example signature V_{π ch}/M_{π ch} grows with ξ, relative to its value at ξ = 0.4 fm, when only critical, background, and rho-meson decay contributions are taken into account. More details and results can be found in [108], where caveats are also discussed. Future work will extend these results to the more interesting signatures connected to protons and to higher-order moments of particle multiplicities.

A.5.2. Finite-size effects
For the pseudo-critical chiral phase diagram within the linear sigma model with constituent quarks [105], it has been shown that the amplitudes of the shifts due to the finite volume are sizable for the length scales probed at current experiments, so that the position of the CEP probed experimentally may differ significantly from the expected critical temperature and chemical potential in the thermodynamic limit. On the other hand, the non-monotonic behavior of correlation functions near criticality for systems of different sizes, tagged by different centralities in heavy-ion collisions, must obey finite-size scaling (FSS). In this vein, the fact that heavy-ion collisions generate data from an ensemble of systems of different sizes provides an alternative signature for the presence of a CEP.
In the FSS regime, any correlation function X(T, L) of the order parameter does not depend independently on the external parameter T and on the size L of the system, but has the scaling form [230]: X(T, L) = L^{γ_x/ν} f_x(t L^{1/ν}), where t = (T − T_c)/T_c represents a dimensionless measure of the distance, in the external parameter domain, to the genuine CEP (in the thermodynamic limit), γ_x is a dimension exponent, and ν is the universal critical exponent defined by the divergence of the correlation length. This scaling form implies (and is implied by) the existence of a scaling plot in which all the curves for different system sizes collapse onto a single curve.
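The collapse property can be demonstrated with synthetic data generated from an assumed scaling function (the exponent values and the choice f(s) = exp(−s²) are arbitrary illustrative assumptions):

```python
import numpy as np

nu, gamma_x, Tc = 0.63, 1.75, 1.0            # illustrative values

def X(T, L):
    """Synthetic observable obeying the FSS form
    X(T, L) = L**(gamma_x/nu) * f(t * L**(1/nu)), t = (T - Tc)/Tc,
    with the toy scaling function f(s) = exp(-s**2)."""
    s = (T - Tc) / Tc * L ** (1.0 / nu)
    return L ** (gamma_x / nu) * np.exp(-s**2)

T = np.linspace(0.9, 1.1, 2001)
f_at_half = []                                # read off f(0.5) from each curve
for L in (8.0, 16.0, 32.0):
    s = (T - Tc) / Tc * L ** (1.0 / nu)       # rescaled abscissa t*L^(1/nu)
    y = X(T, L) / L ** (gamma_x / nu)         # rescaled ordinate X/L^(gamma_x/nu)
    f_at_half.append(np.interp(0.5, s, y))
```

After rescaling, all system sizes yield the same value of the scaling function at a given scaling-variable value; in data analysis, the quality of such a collapse (and the exponents that optimize it) is the FSS signature of a CEP.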
One can pragmatically map these quantities to experimental observables in heavy-ion collisions: the correlation functions should be related to pion multiplicity fluctuations or transverse-momentum fluctuations; the distance t to the CEP is given in terms of the center-of-mass energy (which is related to a (T, μ) point on the freeze-out curve from thermal models); and the size L can be obtained, e.g., via HBT analysis. To identify FSS in the data, it is then necessary to have different measurements corresponding to the same value of the scaling variable. Since the available system sizes in heavy-ion collisions are limited, the range of energies that can be compared is also restricted. Nevertheless, one can assume the presence of FSS and predict from one data set the amplitude of the fluctuations at a different energy scale, as was done in a thorough analysis of RHIC and SPS data [106]. Finite-size effects can also considerably modify the dynamics in the first-order transition region [104].

A.6. Modeling of time correlations with hydrodynamic fluctuations
The approach to modeling time correlations and their effect on net-baryon fluctuations described here is based on the use of hydrodynamic fluctuations. It comprises two separate studies: the first uses white noise to model critical fluctuations of the baryon density [231]; the second uses both white and colored noise to model (non-critical) electric charge fluctuations [58].
In Ref. [231], the authors consider the effects of a critical point on hydrodynamic fluctuations in heavy-ion collisions. They apply mode-coupling theory, together with a model of the free energy (which includes 3-dimensional Ising critical exponents and amplitudes), to model the behavior of the thermal conductivity near the critical point. Mode-coupling theory permits a rough separation of the critical and non-critical contributions to the thermal conductivity near the critical point, and the exact behavior of these contributions can be matched consistently onto an equation of state which exhibits the right critical scaling. One special advantage of mode-coupling is that it can be readily extended outside the critical regime, and it naturally allows one to explore the effects of critical fluctuations, which come to dominate non-critical fluctuations close to the critical point.
Within this formalism, the magnitude (i.e. the two-point function) of hydrodynamic fluctuations is proportional to the thermal conductivity, as a consequence of the fluctuation-dissipation theorem. The divergence of the thermal conductivity was thus found to lead to an enhancement in the magnitude of the fluctuations close to the critical point, and to generate corresponding enhancements in charge balance functions sensitive to net-baryon fluctuations (Ref. [231] considered both ππ and pp balance functions). The same approach was later applied to HBT fluctuations near the critical point [232], and yielded similar conclusions regarding the magnitude of effects due to critical fluctuations.
In Ref. [58], the authors considered the effects of non-trivial time correlations on electric charge fluctuations at top RHIC energies. The non-triviality was taken to be a simple, decaying exponential in proper-time separation between two correlated fluctuations in the system, containing a single free parameter τ Q which effectively fixes the rate at which fluctuations can propagate throughout the system and become correlated with one another (the limit τ Q → 0 corresponds to the trivial white noise case discussed above).
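Exponentially correlated noise of this type can be generated with a standard Ornstein-Uhlenbeck recursion; the autocorrelation at time lag τ_Q is then e^{−1} of the equal-time value (a generic sketch of colored noise, not the specific implementation of Ref. [58]):

```python
import numpy as np

def colored_noise(tau_q, dt, nsteps, seed=11):
    """Gaussian noise with <xi(t) xi(t+s)> = exp(-|s|/tau_q) (unit variance),
    generated by the Ornstein-Uhlenbeck recursion; tau_q -> 0 is white noise."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=nsteps)
    a = np.exp(-dt / tau_q)                 # per-step decorrelation factor
    xi = np.empty(nsteps)
    xi[0] = g[0]
    for i in range(1, nsteps):
        xi[i] = a * xi[i - 1] + np.sqrt(1.0 - a * a) * g[i]
    return xi

tau_q, dt = 0.5, 0.01
xi = colored_noise(tau_q, dt, nsteps=200000)
lag = int(tau_q / dt)                       # time lag equal to tau_q
rho = np.corrcoef(xi[:-lag], xi[lag:])[0, 1]   # should be near exp(-1)
```

A finite τ_Q thus smears the noise over a causal time window, which is what regulates the otherwise instantaneous propagation of the white-noise limit.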
The authors then explored the effects of these non-trivial correlations on the (electric) charge balance functions discussed in Ref. [231], and found that choosing τ_Q ≠ 0 implies a reduction in the speed of propagation of wave fronts in the system; in fact, setting τ_Q = 0 can be shown to lead to an infinite speed of propagation, which violates relativistic causality. The requirement that τ_Q > 0 thus restores relativistic causality and leads to a corresponding reduction in the efficiency of conserved-charge diffusion in heavy-ion collisions, and a consequent narrowing of the charge balance functions in rapidity separation. Similar effects should be expected near a critical point, where the phenomenon of critical slowing down results from the divergence of the system's relaxation timescale.
The comparison of white noise and colored noise in Ref. [58] allows one to understand the consequences of non-trivial time correlations on physical quantities such as electric charge and baryon densities. So far, non-trivial time correlations (i.e. colored noise) have been considered only for non-critical, electric charge fluctuations. Nevertheless, the same basic results would carry over to the case of critical fluctuations, with just a few straightforward modifications in accordance with the treatment of Ref. [231]. This would allow one to explore the effects of non-trivial time correlations near the critical point in heavy-ion collisions.
A weakness of the approaches described here is their inability to account for long-time tails explicitly, since they are based on a linearized version of the (fluctuating) hydrodynamic equations of motion with Bjorken expansion. In particular, these approaches take all linear fluctuating contributions to thermodynamic quantities to vanish on average, e.g. ⟨δT^{μν}⟩ ≡ 0. This differs from studies such as Ref. [66], where ⟨δT^{μν}⟩ ≠ 0 as a result of the non-linear constitutive relation T^{μν} = (e + P)u^μ u^ν − P g^{μν}. Approaches such as Ref. [66] thus yield non-linear equations in the hydrodynamic fluctuations, which generate "long-time tails" in thermodynamic time-correlation functions, a standard signature of systems governed by fluctuating hydrodynamics which the approach presented here is unable to reproduce.
This failure to produce long-time tails is partly compensated by the mode-coupling approach described above, in which the effects of long-time tails are essentially absorbed into the critical enhancement of the thermal conductivity, in a way that can be readily and smoothly extended away from the critical regime. Moreover, the use of colored noise permits a natural regularization of the divergences associated with the standard white-noise treatment; alternative approaches typically require a renormalized treatment of thermodynamic quantities to eliminate divergences resulting from white-noise correlations [67]. Another key advantage of the approach described here is its ability to model the subtraction of so-called self-correlations from physical observables based on e.g. multi-particle correlations, where trivial correlations of a particle (or fluid cell) with itself are generally neglected. One way to do this was considered in [233] for the case of white noise, with the extension to colored noise considered in [58]. It would be interesting to consider how the same subtraction could be performed in alternative approaches (or whether such a subtraction would even need to be carried out).

A.7. Summary of approaches to critical dynamics
Specific features of the different numerical implementations studying the dynamics of critical fluctuations are summarized in Table 2.