The Analytic Renormalization Group

Finite temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients $G_{k}$, $k\in\mathbb Z$, associated with the Matsubara frequencies $\nu_{k}=2\pi k/\beta$. We show that analyticity implies that the coefficients $G_{k}$ must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct "Analytic Renormalization Group" linear maps $\mathsf A_{\mu}$ which, for any choice of cut-off $\mu$, allow one to express the low energy Fourier coefficients for $|\nu_{k}|<\mu$ (with the possible exception of the zero mode $G_{0}$), together with the real-time correlators and spectral functions, in terms of the high energy Fourier coefficients for $|\nu_{k}|\geq\mu$. Using a simple numerical algorithm, we show that the exact universal linear constraints on $G_{k}$ can be used to systematically improve any random approximate data set obtained, for example, from Monte-Carlo simulations. Our results are illustrated on several explicit examples.


General presentation
Consider the space M of arbitrary two-point functions between bosonic operators A and B in Quantum Mechanics or Quantum Field Theory, at finite temperature T = 1/β. As is well-known and will be reviewed in detail in Section 2, the space M can be presented in several equivalent ways. One can consider various real-time two-point functions (advanced, retarded, time-ordered, etc.), which turn out to be all related to each other, since their Fourier transforms can be expressed in terms of a unique spectral function ρ(ω). Alternatively, one can work with the Euclidean-time two-point function G(τ). By the KMS condition, G is periodic and can be expanded in Fourier series,

G(τ) = (1/β) Σ_{k∈Z} G_k e^{−iν_k τ} , (1.1)

where the Matsubara frequencies are defined by

ν_k = 2πk/β . (1.2)

We shall often refer to the set of Fourier coefficients (G_k)_{k∈Z} as the "data" which encodes the two-point function. In a generic strongly coupled quantum mechanical model, this data can only be computed numerically, using Monte-Carlo numerical simulations. Analytic non-perturbative methods exist only on rare occasions. By Carlson's theorem [2], the real-time and Euclidean-time points of view are equivalent: the continuous spectral function ρ(ω) can be expressed in terms of the discrete set of Fourier coefficients G_k and vice-versa, under some very general assumptions that are valid in all known interesting physical theories. The map between the real-time and the Euclidean-time formalisms is quite interesting and will be discussed very explicitly below.
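To make the notion of "data" concrete, here is a minimal numerical sketch for a toy single-mode spectral density. The choice ρ(ω) = δ(ω − ω_0) − δ(ω + ω_0), the resulting Fourier coefficients G_k = 2ω_0/(ω_0² + ν_k²), and the cosh/sinh closed form of the Matsubara resummation are standard textbook results, used here only as an illustration:

```python
import numpy as np

beta, w0 = 2.0, 1.3          # inverse temperature and oscillator frequency (arbitrary choices)
tau = 0.7                    # Euclidean time, 0 < tau < beta

# Matsubara frequencies nu_k = 2*pi*k/beta and toy Fourier data
# G_k = 2*w0/(w0**2 + nu_k**2), i.e. a single oscillator mode
k = np.arange(-200000, 200001)
nu = 2.0 * np.pi * k / beta
G = 2.0 * w0 / (w0**2 + nu**2)

# Resum the Fourier series at Euclidean time tau
G_tau = np.sum(G * np.exp(-1j * nu * tau)).real / beta

# Closed form of the same Matsubara sum (standard textbook identity)
exact = np.cosh(w0 * (tau - beta / 2)) / np.sinh(beta * w0 / 2)
print(G_tau, exact)   # the two values agree
```

The O(1/k²) tail of the coefficients makes the truncated sum converge; with 4·10⁵ terms the agreement is at the 10⁻⁵ level.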
The two-point functions must satisfy general well-known constraints that follow straightforwardly from the definitions and the spectral decomposition, see Section 2. For example, on top of being β-periodic, G(τ ) is analytic except at the points τ = kβ, k ∈ Z, where it is discontinuous if A and B do not commute. This implies in particular that G k = O(1/k) at large |k|. The Fourier coefficients also satisfy reality and positivity constraints depending on the reality properties of A and B. We shall call F the real vector space of β-periodic functions satisfying all these standard model-independent constraints.
One of the main goals of the present work is to show that M is a linear subspace of F of infinite codimension. This may come as a surprise: it means that a typical set of Fourier coefficients (G_k)_{k∈Z} satisfying all the usual constraints is actually inconsistent! Our central result is to show that the Fourier coefficients must always obey an infinite set of universal, model-independent, linear equations. For reasons that will become clear below, we call these equations "Analytic Renormalization Group" (ARG) equations. We shall write down these equations very explicitly in Section 3 and use them extensively in Sections 4 and 5.
A spectacular concrete application of the existence of the ARG equations is as follows. Suppose that we have at our disposal an approximate data set (G^a_k)_{k∈Z}, obtained, in a strongly coupled model of interest, by using Monte-Carlo simulations, possibly combined with perturbation theory at high energies. This data set corresponds to a point G^a ∈ F, randomly chosen in a small neighborhood of the exact, but unknown, result G^e ∈ M. The point G^a will never belong to M, because a random approximate data set always violates the ARG equations (actually, this violation is always massive, even if the precision of the data is excellent; see below). It is then possible to use the ARG equations to systematically improve the approximate data, by suitably projecting G^a onto M. An explicit algorithm implementing this idea will be presented and tested in Section 5. In spite of its simplicity, our algorithm is able to improve the accuracy of typical random approximate data by a factor of 2 to 4! The startling feature is that the procedure is totally model-independent and can be applied straightforwardly to Monte-Carlo data in any quantum mechanical system.
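The improvement mechanism can be previewed with elementary linear algebra. The sketch below is not the ARG algorithm of Section 5; it only illustrates, on a randomly generated set of linear constraints C x = 0 standing in for the ARG equations, how projecting noisy data back onto the constraint subspace reduces the error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 100                          # data size, number of linear constraints

C = rng.standard_normal((m, n))          # toy constraints C @ x = 0 (stand-in for ARG equations)
P = C.T @ np.linalg.solve(C @ C.T, C)    # orthogonal projector onto the row space of C

x_exact = rng.standard_normal(n)
x_exact -= P @ x_exact                   # exact data satisfies the constraints

noise = 0.01 * rng.standard_normal(n)
x_approx = x_exact + noise               # "Monte-Carlo" data: violates the constraints

x_improved = x_approx - P @ x_approx     # project back onto the constraint subspace

err_a = np.linalg.norm(x_approx - x_exact)
err_i = np.linalg.norm(x_improved - x_exact)
print(err_a / err_i)                     # ~ sqrt(2) improvement when m = n/2
```

The projection removes the noise component transverse to the constraint subspace, so the expected error shrinks by a factor sqrt(n/(n−m)).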
The fundamental ingredient at the basis of the ARG equations is analyticity, which is itself a consequence of causality. The fact that analyticity yields non-trivial constraints on real-time correlation functions is of course well-known. For example, the famous Plemelj-Kramers-Kronig identities relate the real and imaginary parts of the Fourier transform χ̂_r(ω) of the retarded two-point function,

Re χ̂_r(ω) = (1/π) P ∫ dω′ Im χ̂_r(ω′)/(ω′ − ω) . (1.3)

The starting point of our work is the very same analyticity property at the basis of (1.3), but we use it in a more sophisticated way to derive the infinite set of linear constraints that the Fourier coefficients G_k must satisfy.
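The way analyticity constrains real-frequency data can be checked numerically. In the sketch below we take a damped-oscillator response χ̂(z) = 1/(ω_0² − z² − iγz), a standard model function of our choosing, and verify the dispersion-relation form of (1.3): the imaginary part on the real axis determines the full function in the upper half-plane,

```python
import numpy as np

w0, gam = 1.0, 0.5                        # toy damped-oscillator response (our choice)
chi = lambda z: 1.0 / (w0**2 - z**2 - 1j * gam * z)   # analytic for Im z > 0

# Dispersion relation: chi(z) = (1/pi) * integral of Im chi(w') / (w' - z), Im z > 0
dw = 1e-3
w = np.arange(-300.0, 300.0, dw)
z = 0.7 + 0.3j
lhs = chi(z)
rhs = np.sum(chi(w).imag / (w - z)) * dw / np.pi
print(abs(lhs - rhs))   # tiny: the imaginary part on the real axis fixes the full function
```

The quadrature error is at the 10⁻⁶ level here; the identity itself is exact because χ̂ is analytic and decays in the upper half-plane.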
An interesting aspect of the construction is to make an unexpected link, valid in any quantum mechanical system, between the concept of Renormalization Group (RG) and analyticity. The fundamental idea of the RG is to describe the physics below a certain RG scale µ in terms of a Wilsonian action S_µ that takes into account the physics at scales greater than µ. When µ is lowered, the action S_µ flows to a natural description of the physics at low energies. This flow is constrained by the obvious fact that physics is independent of the arbitrary RG scale. This yields the RG flow equations. The Analytic version of the RG that we find (the ARG) can be described as follows. To an arbitrary RG scale µ, we associate the integer k_µ > 0 (that we also call the RG scale by abuse of language) such that

ν_{k_µ−1} ≤ µ < ν_{k_µ} . (1.4)

We also introduce a strictly positive integer δ, that we call the "index" of the ARG. The ARG states that there exist linear maps A^+_{k_µ,δ} and A^−_{k_µ,δ} allowing one to express the low energy Fourier coefficients, for 1 ≤ k < k_µ and −k_µ < k ≤ −1 respectively, in terms of the high energy Fourier coefficients G_{k_µ+δq} and G_{−k_µ−δq}, q ∈ N, respectively:

(G_1, G_2, . . . , G_{k_µ−1}) = A^+_{k_µ,δ}(G_{k_µ}, G_{k_µ+δ}, G_{k_µ+2δ}, G_{k_µ+3δ}, . . .) , (1.5)

(G_{−1}, G_{−2}, . . . , G_{−k_µ+1}) = A^−_{k_µ,δ}(G_{−k_µ}, G_{−k_µ−δ}, G_{−k_µ−2δ}, G_{−k_µ−3δ}, . . .) . (1.6)

Note that the zero mode G_0 plays a special role since it cannot be obtained, in general, from the ARG. This subtlety is related to the phenomenon of Bose-Einstein condensation (see e.g. [1]).
The low energy Fourier coefficients G k , for |k| < k µ , obviously do not depend on the arbitrary choice of RG scale k µ and index δ. On the other hand, the right-hand sides of the equations (1.5) and (1.6) depend explicitly, and non-trivially, on k µ and δ. As usual, this yields renormalization group equations. The same remark applies to the equations (1.7) and (1.8). More generally, by abuse of language, we call "ARG equation" any universal linear relation between the Fourier coefficients like (1.5) or (1.6).
Our results rely in an absolutely crucial way on a generalization of mathematical techniques first introduced in a remarkable paper by Cuniberti et al. [3] (see also [4] for a concrete discussion of the construction in [3]). The aim of [3] was to provide explicit formulas for the reconstruction of the real-time correlators from the Euclidean-time correlators, a notoriously difficult and important problem. One particular aspect of our results is to provide a useful generalization and simplification of the reconstruction procedure of [3].
The plan of the paper is as follows. In Section 2, we review basic facts on two-point functions in quantum mechanics: definitions of real-time and Euclidean-time correlators, spectral decompositions and spectral function, the resolvent, analytic properties and Carlson's theorem. Section 3 is devoted to the derivation of the ARG maps and equations. We start in 3.1 by explaining a simple idea at the basis of the ARG. We then present in 3.2 some useful mathematical results on Laguerre polynomials, Pollaczek polynomials and the relation between them. These results are used in 3.3 to build the general ARG maps A^±_{k_µ,δ}(ω, Ω). We specialize to the case of the maps A^±_{k_µ,δ} in 3.4 and to the reconstruction of the real-time correlators from the Euclidean data in 3.5. This eventually yields a very general multi-parameter continuous family of ARG equations. In Section 4, we discuss the numerical implementation of the ARG. We use in particular the example of the damped harmonic oscillator to illustrate our results. In Section 5, we explain how the ARG equations can be used to systematically improve any given random approximate data set. This is certainly the newest, most surprising and most central concrete application of our work. We provide a simple numerical algorithm that we test successfully on several examples. Finally, we briefly summarize our results and suggest future directions of research in Section 6.

Basic definitions
Let H be the Hamiltonian. To simplify some formulas, we proceed as if the spectrum of H were discrete. The generalization to the case of a continuous spectrum is completely straightforward and the required modifications will be taken into account in our discussion. Let {|p⟩} be an orthonormal basis of energy eigenstates, H|p⟩ = E_p|p⟩.
The partition function at temperature T = 1/β is

Z = Tr e^{−βH} . (2.1)

The expectation value of any operator O at temperature T is defined by

⟨O⟩_β = (1/Z) Tr (e^{−βH} O) . (2.2)

The real-time and Euclidean-time evolutions are defined as usual by

A(t) = e^{iHt} A e^{−iHt} , A_E(τ) = e^{Hτ} A e^{−Hτ} , (2.3)

respectively.
We consider two particular bosonic operators A and B and denote their matrix elements as

A_{pq} = ⟨p|A|q⟩ , B_{pq} = ⟨p|B|q⟩ . (2.4)

The spectral function is defined by

ρ(ω) = (1/Z) Σ_{p,q, E_p≠E_q} (e^{−βE_p} − e^{−βE_q}) A_{pq} B_{qp} δ(ω − E_q + E_p) + ρ_0(ω) , (2.5)

where the zero-frequency contribution reads

ρ_0(ω) = βn_0 ω δ(ω) , n_0 = (1/Z) Σ_{p,q, E_p=E_q} e^{−βE_p} A_{pq} B_{qp} . (2.6)

By using the δ-function constraint, we may also rewrite (2.5) as

ρ(ω) = (1 − e^{−βω}) (1/Z) Σ_{p,q} e^{−βE_p} A_{pq} B_{qp} δ(ω − E_q + E_p) , (2.7)

where the sum over energy eigenstates is now unconstrained and thus includes the terms with E_p = E_q. Taking into account a possible continuous part in the spectrum of H, the spectral function can be written as a sum

ρ = ρ_s + ρ_d + ρ_0 , (2.8)

where ρ_s is a smooth function associated with the continuous spectrum, ρ_d is a sum of δ-function contributions centered at non-zero frequencies associated with the discrete spectrum and ρ_0(ω) = βn_0 ω δ(ω) is the zero-frequency contribution.
Remark: our subsequent discussion does not depend on reality conditions on the operators A and B. However, let us note that, in the typical case B = A†, the representation (2.7) implies that ρ is a real function, positive for ω > 0 and negative for ω < 0.

Real-time correlators
We define

C(t) = ⟨A(t)B⟩_β . (2.9)

By inserting a complete set of states in the above definition, one can straightforwardly derive the following spectral decomposition,

C(t) = (1/Z) Σ_{p,q} e^{−βE_p} A_{pq} B_{qp} e^{it(E_p−E_q)} . (2.10)

Using (2.7), this yields, for the Fourier transform Ĉ(ω) = ∫ dt e^{iωt} C(t),

Ĉ(ω) = 2π ρ(ω)/(1 − e^{−βω}) . (2.11)

Note that terms with E_p = E_q a priori contribute in the sum (2.10). Accordingly, the zero-frequency piece in ρ can contribute in an essential way to Ĉ. Similarly, other real-time two-point functions can be studied,

D(t) = ⟨BA(t)⟩_β , (2.12)

S(t) = ½ ⟨A(t)B + BA(t)⟩_β , (2.13)

χ_r(t) = iθ(t) ⟨[A(t), B]⟩_β , χ_a(t) = −iθ(−t) ⟨[A(t), B]⟩_β , (2.14)

ξ(t) = ½ ⟨[A(t), B]⟩_β , (2.15)

C_T(t) = ⟨T A(t)B⟩_β , (2.16)

where T is the usual time-ordering. It is straightforward to check that all these two-point functions can be expressed in terms of the spectral function, which thus contains all the relevant information,

ξ̂(ω) = πρ(ω) , (2.17)

Ŝ(ω) = π coth(βω/2) ρ(ω) , (2.18)

D̂(ω) = 2π ρ(ω)/(e^{βω} − 1) , (2.19)

χ̂_r(ω) = ∫ dω′ ρ(ω′)/(ω′ − ω − i0^+) , (2.20)

χ̂_a(ω) = ∫ dω′ ρ(ω′)/(ω′ − ω + i0^+) . (2.21)

Evaluating ξ(0) and S(0) from the above relations yields the following important sum rules,

∫ dω ρ(ω) = ⟨[A, B]⟩_β , (2.22)

∫ dω coth(βω/2) ρ(ω) = ⟨AB + BA⟩_β . (2.23)

In particular, taking into account the fact that ρ_d picks contributions only at non-zero frequencies and that the integral on the left-hand side of (2.23) must converge, we find

ρ_d(ω = 0) = 0 , ρ_s(ω = 0) = 0 . (2.24)

Euclidean-time correlator
We define

G(τ) = ⟨T A_E(τ)B⟩_β for −β < τ < 0 or 0 < τ < β . (2.25)

The standard KMS condition reads

G(τ − β) = G(τ) for 0 < τ < β . (2.26)

We can thus expand G in Fourier series as in (1.1) and use this expansion to extend the definition of G for all values of τ that are not multiples of β. Note that if A and B do not commute, G is discontinuous at τ = kβ, k ∈ Z, with

G(kβ + ε) − G(kβ − ε) = ⟨[A, B]⟩_β . (2.27)

As usual, ε is an infinitesimal strictly positive parameter.
Moreover, from the Dirichlet theorem, we get

(1/β) Σ_{k∈Z} G_k = ½ (G(0^+) + G(0^−)) = S(0) . (2.28)

The Fourier coefficients G_k admit the following spectral decomposition,

G_k = ∫ dω ρ(ω)/(ω − iν_k) . (2.29)

It is important to note that, in general, the zero mode G_0 picks a contribution proportional to n_0 defined in (2.6).
Even more generally, one can consider the two-point function for complex time t − iτ, (t, τ) ∈ R², defined by

G(t − iτ) = ⟨T_τ A(t − iτ)B⟩_β . (2.30)

The symbol T_τ denotes the time-ordering with respect to the Euclidean time τ. The KMS condition reads

G(t − i(τ − β)) = G(t − iτ) for 0 < τ < β . (2.31)

G is then extended to the whole complex time plane by β-periodicity in τ. It is analytic in the strips kβ < τ < (k + 1)β, k ∈ Z and possibly discontinuous for τ = kβ with a jump given by ⟨[A(t), B]⟩_β. The function G can be expressed in terms of the spectral density ρ. If we expand

G(t − iτ) = (1/β) Σ_{k∈Z} G_k(t) e^{−iν_k τ} , (2.32)

it is straightforward to check that

G_k(t) = ∫ dω ρ(ω) e^{−iωt}/(ω − iν_k) , (2.33)

or, equivalently, that G(t − iτ) = ∫ dω ρ(ω) e^{−iωt−ωτ}/(1 − e^{−βω}) for 0 < τ < β.
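The KMS condition is easy to verify by brute force on a small system. The following sketch builds a random five-level Hamiltonian and checks the real-time form ⟨A(t)B⟩_β = ⟨B A(t + iβ)⟩_β of the periodicity used above (all matrices are randomly generated; we take B = A for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, d = 0.7, 5
H = rng.standard_normal((d, d)); H = (H + H.T) / 2    # random Hermitian "Hamiltonian"
A = rng.standard_normal((d, d)); A = (A + A.T) / 2    # bosonic operator; B = A

E, V = np.linalg.eigh(H)
A = V.T @ A @ V                                       # work in the energy eigenbasis
rho = np.exp(-beta * E); Z = rho.sum()

def A_evolved(z):
    """A(z) = e^{iHz} A e^{-iHz} at complex time z, in the eigenbasis."""
    u = np.exp(1j * E * z)
    return (u[:, None] * A) * (1.0 / u)[None, :]

def ev(M):
    """Thermal expectation value <M>_beta."""
    return np.sum(rho * np.diagonal(M)) / Z

t = 0.4
lhs = ev(A_evolved(t) @ A)                # <A(t) A>_beta
rhs = ev(A @ A_evolved(t + 1j * beta))    # <A A(t + i*beta)>_beta
print(abs(lhs - rhs))                     # KMS: the two coincide
```

The identity holds exactly for any finite system; the numerical difference is pure floating-point noise.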

The resolvent
The resolvent is defined by

R(z) = ∫ dω ρ(ω)/(z − ω) (2.34)

for any complex z with Im z ≠ 0. This is equivalent to

R(z) = −∫ dt e^{izt} χ_r(t) for Im z > 0 , R(z) = −∫ dt e^{izt} χ_a(t) for Im z < 0 . (2.35)

This is a very useful object and it is going to play a central role in our subsequent discussion. The spectral decomposition of R reads

R(z) = (1/Z) Σ_{p,q, E_p≠E_q} (e^{−βE_p} − e^{−βE_q}) A_{pq} B_{qp}/(z − E_q + E_p) . (2.36)

Note that the zero-frequency piece ρ_0 in ρ does not contribute to R; equivalently, states with E_p = E_q do not contribute in (2.36).
The resolvent has the following set of fundamental properties: i) It is holomorphic in the half-planes Im z > 0 and Im z < 0.
ii) For any η > 0 and |Im z| ≥ η, R has a simple large |z| asymptotic expansion,

R(z) = ⟨[A, B]⟩_β / z + o(1/|z|) . (2.37)

This follows from the definition (2.34) and the sum rule (2.22).
iii) When the spectrum of the Hamiltonian is discrete, the only singularities of R are simple poles on the real axis at the non-zero Bohr frequencies E_q − E_p ≠ 0. iv) More generally, when the spectral function is decomposed as in (2.8), the resolvent is discontinuous across the support of ρ_s + ρ_d, with

R(ω + i0^+) − R(ω − i0^+) = −2iπ (ρ_s + ρ_d)(ω) . (2.38)

In particular, taking into account (2.24), we see that R(0) is well-defined.
v) The real-time correlators ξ(t), χ_r(t) and χ_a(t), which do not depend on the zero-frequency piece in the spectral function, can be obtained from the knowledge of R alone. This is a direct consequence of the relation (2.38). In particular,

χ̂_r(ω) = −R(ω + i0^+) , χ̂_a(ω) = −R(ω − i0^+) . (2.39)

On the other hand, the real-time correlators C(t), S(t) and D(t) are given in terms of R up to a time-independent piece proportional to n_0 in each case.
vi) The function −R yields the analytic continuations to complex frequencies of χ̂_r and χ̂_a for Im ω > 0 and Im ω < 0, respectively. This is a direct consequence of (2.39).
vii) The Euclidean-time correlator G(τ) can be obtained from R up to the time-independent piece proportional to n_0. Indeed, the Fourier coefficients G_k are given by

G_k = −R(iν_k) + βn_0 δ_{k,0} . (2.40)

This is a direct consequence of the spectral representations (2.29) and (2.36).
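Property vii) can be checked explicitly on the single-mode example ρ(ω) = δ(ω − ω_0) − δ(ω + ω_0), for which R(z) = 1/(z − ω_0) − 1/(z + ω_0) and the Euclidean correlator has a standard cosh/sinh closed form (a toy illustration, with G_k = −R(iν_k) for k ≠ 0 as above and n_0 = 0):

```python
import numpy as np

beta, w0 = 2.0, 1.3
nu = lambda k: 2 * np.pi * k / beta
R = lambda z: 1/(z - w0) - 1/(z + w0)      # resolvent for rho(w) = delta(w-w0) - delta(w+w0)

# Euclidean correlator of the same single mode (standard Matsubara resummation)
G = lambda tau: np.cosh(w0 * (tau - beta/2)) / np.sinh(beta * w0 / 2)

# Compare G_k computed by direct Fourier integration with -R(i nu_k)
tau, dtau = np.linspace(0.0, beta, 400001, retstep=True)
results = {}
for k in (1, 2, 7):
    y = np.exp(1j * nu(k) * tau) * G(tau)
    results[k] = (np.sum(y) - 0.5 * (y[0] + y[-1])) * dtau   # trapezoid rule
    print(k, results[k].real, -R(1j * nu(k)).real)           # the two agree
```

Both computations give 2ω_0/(ω_0² + ν_k²), illustrating that the Euclidean data and the resolvent carry the same information.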

Carlson's theorem
From the knowledge of the real-time correlator C(t), we get the spectral density, including the zero-frequency piece proportional to n_0, by using (2.11). We then get the Euclidean-time correlator from (2.33). The analytic continuation from real time to Euclidean time is thus rather straightforward.
At non-zero temperature, the converse is much more subtle. The Euclidean-time physics is coded in the set of Fourier coefficients G_k associated with the discrete Matsubara frequencies ν_k = 2πk/β, whereas the real-time physics is determined by the spectral function ρ defined for all real frequencies ω. To go from the Euclidean time to the real time, one must thus convert a discrete set of data into a continuous set of data. The fact that this can be done in a unique way is ensured by the famous Carlson theorem. From the holomorphicity of the resolvent R in the half-planes Im z > 0 and Im z < 0 and the asymptotic behaviour (2.37), the theorem implies that R is uniquely determined on the upper half-plane by the values R(iν_k) = −G_k for k ≥ 1 and on the lower half-plane by the values R(iν_k) = −G_k for k ≤ −1. One can then obtain the full spectral density from the G_k: the smooth and discrete pieces are derived from (2.38) and the zero-frequency piece is derived from (2.40) at k = 0, n_0 = (G_0 + R(0))/β. Carlson's theorem thus implies that real-time and Euclidean-time data are equivalent. However, it does not provide a constructive way to obtain ρ(ω) from the G_k. One application of our results, presented in the next section, is to obtain an infinite set of equivalent explicit reconstruction procedures, generalizing the results of Cuniberti et al. [3].

Basic idea
Let us pick an arbitrary RG scale µ > 0 and let k_µ ∈ N* be defined as in (1.4). The existence of the ARG map relies on a very simple idea, which is illustrated on Fig. 1. We consider the resolvent R in the domain Im z > ν_{k_µ−1}. By applying the standard Carlson theorem to the function R_µ(z) = R(z − ν_{k_µ−1}), we find that R for Im z > ν_{k_µ−1} is uniquely determined by the Fourier coefficients G_k for k ≥ k_µ. But since R is holomorphic for Im z > 0, the principle of analytic continuation implies that the knowledge of R for Im z > ν_{k_µ−1} uniquely fixes R on the whole upper half-plane Im z > 0. In particular, all the low energy Fourier coefficients G_k = −R(iν_k) for 1 ≤ k ≤ k_µ − 1 are then fixed in terms of the high energy Fourier coefficients G_k for k ≥ k_µ. In other words, there must exist a map A^+_{k_µ} such that

(G_1, . . . , G_{k_µ−1}) = A^+_{k_µ}(G_{k_µ}, G_{k_µ+1}, G_{k_µ+2}, . . .) . (3.1)

Similarly, there must exist a map A^−_{k_µ} such that

(G_{−1}, . . . , G_{−k_µ+1}) = A^−_{k_µ}(G_{−k_µ}, G_{−k_µ−1}, G_{−k_µ−2}, . . .) . (3.2)

Moreover, the maps A^±_{k_µ} are linear, being the composition of the linear maps between the G_k for |k| ≥ k_µ and R, and between R and the G_k for |k| < k_µ, k ≠ 0.
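The logic "UV values plus analyticity fix the IR" can be illustrated with a deliberately simple stand-in for the ARG map. In the sketch below we assume the exact resolvent has the single-mode form R(z) = a/(z² − b) (an ansatz of our choosing, not the general ARG construction), fit a and b to high-energy Matsubara values only, and then predict the low-energy coefficients:

```python
import numpy as np

beta, w0 = 2 * np.pi, 0.5
nu = lambda k: 2 * np.pi * k / beta          # = k for beta = 2*pi
R = lambda z: 2 * w0 / (z**2 - w0**2)        # exact resolvent (single mode, assumption)

k_mu = 5
kUV = np.arange(k_mu, 40)                    # "high-energy" Matsubara data only
RUV = R(1j * nu(kUV)).real                   # R(i nu_k) is real here

# Fit the analytic model R = a/(z^2 - b): linear in (a, b) since -nu^2 * R = a + b*R
M = np.stack([np.ones_like(RUV), RUV], axis=1)
rhs = -nu(kUV)**2 * RUV
(a, b), *_ = np.linalg.lstsq(M, rhs, rcond=None)

# Predict the low-energy coefficients G_k = -R(i nu_k), 1 <= k < k_mu
kIR = np.arange(1, k_mu)
G_pred = a / (nu(kIR)**2 + b)
G_true = -R(1j * nu(kIR)).real
print(np.max(np.abs(G_pred - G_true)))       # essentially machine precision
```

With exact UV data the IR coefficients come out exactly, which is the content of (3.1); the ARG maps of this section achieve the same without assuming any functional form for R.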
One can actually further refine the above reasoning. The analytic function R_µ is also completely fixed by its values at z = iν_{k_µ+δk}, for any strictly positive integer δ and all integers k ≥ 0. This yields the general ARG maps of "index δ", A^±_{k_µ,δ}, of (1.5) and (1.6). Similarly, we also get the maps A^±_{k_µ,δ}(ω, Ω) of (1.7) and (1.8), since the advanced and retarded correlators can be obtained from the resolvent. Our goal, in the remainder of this section, is to find explicit expressions for these maps.

On Laguerre and Pollaczek functions
We now briefly review some useful results on Laguerre and Pollaczek polynomials.
For any real a > −1 and integer n ≥ 0, the generalized Laguerre polynomials can be defined in terms of Kummer's confluent hypergeometric function M by

L_n^{(a)}(x) = ((a+1)_n / n!) M(−n, a + 1, x) ,

where (a+1)_n = (a+1)(a+2)···(a+n) denotes the usual Pochhammer symbol. The L_n^{(a)} are degree n polynomials, with leading term (−1)^n x^n/n!.

Figure 1: The upper half z-plane, Im z > 0. The Fourier coefficients G_k in the UV region above the cut-off µ (k ≥ k_µ, gray area) determine R for any z in this region and thus, by analytic continuation, for any z in the upper half-plane. As a consequence, the Fourier coefficients G_k = −R(iν_k) in the IR region below the cut-off µ (0 < k < k_µ, white area), together with the spectral function and all two-point correlators, are fixed in terms of the Fourier coefficients in the UV region. This is the ARG map.
The associated Laguerre functions

ℓ_n^{(a)}(x) = (n!/Γ(n + a + 1))^{1/2} x^{a/2} e^{−x/2} L_n^{(a)}(x) (3.6)

form a real complete orthonormal basis of the Hilbert space L²(R_+). Similarly, for any real α > 0 and integer n ≥ 0, we define the generalized Pollaczek polynomials in terms of the ordinary hypergeometric function F = ₂F₁ by

P_n^{(α)}(x) = i^n ((2α)_n / n!) F(−n, α + ix; 2α; 2) .

The P_n^{(α)} are degree n polynomials. The factor i^n is inserted to make them real. It will be useful to know that they have definite parity, P_n^{(α)}(−x) = (−1)^n P_n^{(α)}(x). The associated Pollaczek functions

p_n^{(α)}(x) = 2^α (n!/(2π Γ(n + 2α)))^{1/2} |Γ(α + ix)| P_n^{(α)}(x) (3.13)

form a complete orthonormal basis of the Hilbert space L²(R). There exists a very natural relation between the Laguerre and the Pollaczek functions. Let us consider the linear map U : L²(R_+) → L²(R) given by the Mellin transform on the critical line,

(Uf)(x) = (1/√(2π)) ∫_0^∞ dt t^{ix−1/2} f(t) .

By the Plancherel theorem for the Mellin transform, U is unitary. Moreover, its inverse is given by the Mellin inversion theorem,

(U^{−1}g)(t) = (1/√(2π)) ∫ dx t^{−ix−1/2} g(x) .

Using the identity

∫_0^∞ dt t^β e^{−ut} = Γ(β + 1) u^{−β−1} ,

which is valid when Re β > −1 and Re u > 0 [5], we find that, up to a phase, the image under U of the orthonormal basis of L²(R_+) given by the Laguerre functions (3.6) is the orthonormal basis of L²(R) given by the Pollaczek functions (3.13). (3.21)
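The orthonormality of the Laguerre functions can be verified numerically from the three-term recurrence (the recurrence and the normalization used below are standard textbook properties of the generalized Laguerre polynomials):

```python
import numpy as np
from math import lgamma

def laguerre_fn(n, a, x):
    """Orthonormal Laguerre function: sqrt(n!/Gamma(n+a+1)) x^{a/2} e^{-x/2} L_n^(a)(x)."""
    Lm, L = np.zeros_like(x), np.ones_like(x)
    for m in range(n):                       # standard three-term recurrence for L_m^(a)
        Lm, L = L, ((2*m + 1 + a - x) * L - (m + a) * Lm) / (m + 1)
    norm = np.exp(0.5 * (lgamma(n + 1) - lgamma(n + a + 1)))
    return norm * x**(a / 2) * np.exp(-x / 2) * L

a, dx = 0.5, 1e-3
x = np.arange(dx / 2, 200.0, dx)             # midpoint grid on R_+
gram = np.array([[np.sum(laguerre_fn(m, a, x) * laguerre_fn(n, a, x)) * dx
                  for n in range(5)] for m in range(5)])
print(np.max(np.abs(gram - np.eye(5))))      # close to zero: an orthonormal family
```

A similar numerical check works for the Pollaczek functions once the weight |Γ(α + ix)|² is implemented, e.g. via the log-Gamma function.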

The general ARG map
Let us consider the function

r_{Λ,Ω}(x) = R(−Λx + iΩ) , x ∈ R . (3.22)

The scales Λ > 0 and Ω > 0 are arbitrary, the sign in front of Λx being chosen for future convenience. From (2.37), it is clear that r_{Λ,Ω} ∈ L²(R). We can thus expand on a basis of Pollaczek functions (p_n^{(α)}), for any choice of α > 0:

r_{Λ,Ω} = Σ_{n≥0} c_n p_n^{(α)} , c_n = ∫ dx r_{Λ,Ω}(x) p_n^{(α)}(x) . (3.23)

It is a bit more convenient to rewrite the integral defining c_n in the variable z = −Λx + iΩ. The contour of integration in the complex z-plane, the horizontal line Im z = Ω, is depicted on Fig. 2. Using (2.37) and the good asymptotic behaviour of the Γ function given by the Stirling formula, it is easy to show that the integral can be computed by closing the integration contour from above by an infinite semi-rectangle. In the region encircled by the rectangle, R is holomorphic. The only poles we pick come from the Γ function in (3.13), at the non-positive integer values of its argument. Collecting the residues yields the coefficients c_n as discrete sums over the values of R at the points i(Ω + Λ(α + k)), k ∈ N. (3.26) We now see the magic of using the Pollaczek function basis: due to the presence of the Γ function, the coefficients of the expansion depend only on the values of R at the discrete set of points i(Ω + Λ(α + k)), k ∈ N, on the imaginary axis.
Since the only data we want to use are the Fourier coefficients G_k = −R(iν_k), we set

Λ = ν_δ = 2πδ/β , (3.27)

where the "index" δ is an arbitrary strictly positive integer. We then choose a cut-off scale µ ≥ Ω, associate to it the integer k_µ as in (1.4), and set

α = α_{µ,δ}(Ω) = (ν_{k_µ} − Ω)/ν_δ . (3.28)

Note that µ ≥ Ω implies α_{µ,δ}(Ω) > 0, as required. With these choices, the points i(Ω + Λ(α + k)) coincide with the Matsubara frequencies iν_{k_µ+δk}, k ∈ N, and Eqs. (3.26) and (3.23) yield the expansion (3.29) of R entirely in terms of the high energy Fourier coefficients G_{k_µ+δk}. Of course, the same reasoning as above can be repeated on the lower half-plane Im z < 0.
To write down the final result in a convenient way, we introduce the coefficients χ_n^±, given by rapidly convergent sums over the high energy Fourier coefficients G_{±(k_µ+δk)}, k ∈ N. (3.30) The Eq. (3.29), together with the similar formula valid in the lower half-plane, is then equivalent to the expansion (3.31) of R(ω + iΩ) in terms of the coefficients χ_n^±. This is the fundamental formula of the ARG, from which everything else can be derived. Taking into account (2.39), it provides in particular the explicit form of the general ARG maps A^±_{k_µ,δ}(ω, Ω) introduced in (1.7) and (1.8). Important remarks: i) The coefficients (3.30) are manifestly linear in the Fourier coefficients G_k and thus the ARG map (3.31) is linear as well, as expected.
ii) At large k, the general term of the series defining the coefficients χ_n^± decays very rapidly (factorially fast, due to the Γ function), and thus the sum over k in (3.30) converges rapidly.
iii) The sum over k in (3.30) must be performed first and the sum over n in (3.31) second. Indeed, if one performs the sum over n first, one gets a divergent result. This is a very important qualitative property of the ARG maps, to be discussed further in Section 4.
iv) In Eq. (3.31), we have complete freedom in choosing the index δ ≥ 1 and the cut-off k_µ, as long as µ ≥ Ω > 0. Of course, R(ω + iΩ) does not depend on these arbitrary choices. This automatically yields highly non-trivial Analytic Renormalization Group equations, which take the form of universal linear relations constraining any admissible set of coefficients G_k.

The maps A^±_{k_µ,δ}
The construction of the ARG maps A^±_{k_µ,δ} is now completely straightforward. We simply set ω = 0 and Ω = ν_k in (3.31). The formula simplifies because the Pollaczek functions take simple values at the corresponding special points, which is equivalent to (3.11) and (3.12). We get the relations (3.34), where the sign ± on the right-hand side of (3.34) is chosen according to the sign of k.
iii) The relations (3.34), for k ≠ 0, are universal, model-independent linear constraints on the Fourier coefficients. From this point of view, they are not different from (3.36) and for this reason we also call them "ARG equations." More generally, any universally valid linear relation between the Fourier coefficients is called an ARG equation in the present paper.
3.5 Real-time physics from Euclidean data

3.5.1 The spectral function

By using (2.38), the general ARG map (3.31), applied in the limit Ω → 0^+, allows one to reconstruct explicitly the spectral function ρ(ω) from the Euclidean Fourier coefficients G_k. This provides a full solution to the problem of reconstructing the real-time two-point functions in terms of the Euclidean data, using (2.17)-(2.21). Actually, we have obtained an infinite set of equivalent reconstruction formulas, each associated with a choice of cut-off k_µ and index δ, using only subsets of Fourier coefficients G_{±k_µ±δk} for k ≥ 0 (as usual, we also need G_0 to get the zero-frequency piece in the spectral function, if n_0 ≠ 0).

3.5.2 The real-time retarded and advanced correlators
Explicit formulas can be obtained for the retarded and advanced functions χ_r(t) and χ_a(t). The most general formulas are actually obtained by considering e^{−Ωt} χ_r(t) and e^{Ωt} χ_a(t), for any Ω ≥ 0. For example, using the analyticity of the resolvent R on the upper half-plane, we get, from (2.14) and (2.39),

e^{−Ωt} χ_r(t) = −(1/2π) ∫ dω e^{−iωt} R(ω + iΩ) . (3.37)

From (3.29), we see that to evaluate this integral we need to know the Fourier transform of the Pollaczek functions. But the Fourier transform of an arbitrary function φ is directly given in terms of the unitary operator U^{−1} defined in Section 3.2. Combining these ingredients yields the representations (3.39) of e^{−Ωt}χ_r(t) and e^{Ωt}χ_a(t) in terms of the high energy Fourier coefficients. As usual, these equations are valid for any choice of strictly positive integers k_µ and δ. Moreover, the parameter α, being related to the arbitrary Ω that we have introduced in (3.37) by the equation (3.28), can be chosen at will in the interval ]0, k_µ/δ[.

A very general form of the ARG equations
Using the fact that the left-hand sides of (3.39) do not depend on k_µ, δ or α, we immediately get many ARG equations. Moreover, causality immediately implies the relations (3.40), valid for any (k_µ, δ) ∈ (N*)², 0 < α < k_µ/δ and u > 2, since χ_r(t) = 0 if t < 0 and χ_a(t) = 0 if t > 0.

The long time behaviour
Of particular interest is the long time behaviour of the correlation functions. In particular, linear response theory implies that the retarded correlator χ_r(t) governs the response of the operator A to a small perturbation of the system by the operator B. If the system thermalizes, we thus have lim_{t→∞} χ_r(t) = 0. In many interesting examples, χ_r(t) decays exponentially,

χ_r(t) = O(e^{−γt}) when t → +∞ , (3.41)

where 1/γ > 0 is the thermalization time scale. The behaviour (3.41) occurs when the analytic continuation of the resolvent R(z) from the upper half-plane to the lower half-plane admits poles for Im z < 0. If z_0 is the pole closest to the real axis, then γ = −Im z_0.
The representation (3.39) allows one to study quite efficiently the large time behaviour of χ_r. For example, if we choose α = k_µ/(2δ), which is equivalent to Ω = πk_µ/β, and use (3.5), we get a particularly simple formula. More generally, let us assume that the large time behaviour is of the form (3.41). Let us then pick a γ̄ > 0 and choose k_µ > βγ̄/(2π). Examining the t → ∞ limit of (3.39) for this choice of parameters provides a sharp criterion to compute the thermalization time scale γ from the Euclidean data.
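The extraction of γ from the long-time behaviour can be sketched on a model correlator with a single resolvent pole at z_0 = ω̄ − iγ; the exponential-times-sine form below is a toy input of our choosing, not the output of (3.39):

```python
import numpy as np

wbar, gam = 1.0, 0.3     # oscillation frequency and decay rate: pole at z0 = wbar - i*gam

# Model retarded correlator with exponential relaxation (toy illustration)
chi_r = lambda t: np.exp(-gam * t) * np.sin(wbar * t) / wbar

# Sample one oscillation period apart, at times where sin(wbar*t) = 1
t0 = (np.pi / 2) / wbar
t1 = t0 + 2 * np.pi / wbar
rate = -np.log(chi_r(t1) / chi_r(t0)) / (t1 - t0)
print(rate)              # recovers gam = 0.3
```

Comparing the correlator at times separated by one period removes the oscillatory factor, leaving the pure decay rate, which is the quantity γ = −Im z_0 discussed above.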

The ARG equations and the space of two-point functions
As announced in Section 1, we have shown that analyticity implies an infinite set of linear constraints on the Fourier coefficients G k , the ARG equations. In other words, the space M of two-point functions is a subspace of infinite codimension of the space F of Fourier coefficients.
It is not too difficult to understand that the full set of ARG equations is enough to characterize M: if a set of Fourier coefficients (G_k)_{k∈Z} satisfies all the ARG equations mentioned above, then it belongs to M. This is equivalent to saying that there exists a resolvent R, with the analyticity properties discussed above, such that G_k = −R(iν_k) for k ≠ 0. The argument to show this goes as follows.
One starts with the full set of ARG equations (3.40), together with the equations ensuring that the right-hand sides of (3.39) do not depend on the choice of k_µ, δ and α ∈ ]0, k_µ/δ[. One then uses Eq. (3.39) to define χ_r and χ_a and Eq. (2.35) to define R. Thanks to (3.40), R is automatically analytic in the upper and lower half-planes. Moreover, evaluating explicitly the integrals in (2.35) starting from (3.39) amounts to undoing the Fourier transform performed in Section 3.5.2. This obviously yields the formula (3.31) for R. We can then evaluate R(iν_k) by using the ARG equations (3.34), which eventually yields G_k = −R(iν_k), as was to be shown.
A more difficult question is to find a minimal set of ARG equations that fully characterizes M. This is non-trivial, because non-trivial linear relations between the ARG equations do exist, see Section 5. A detailed discussion of this issue is beyond the scope of the present paper. One may conjecture that the equations (3.40), for an arbitrary but fixed choice of parameters k_µ, δ and α ∈ ]0, k_µ/δ[, but for all u > 2, form a complete set of linearly independent ARG equations.

Numerical analysis and simple applications
We are now going to explain how the formalism of the previous section can be implemented numerically and used in practice. Our aim is to get more intuition on how the ARG actually works and to illustrate the ARG maps and equations on simple explicit examples.

General remarks
Let us start by discussing three qualitatively important properties of the numerical implementation of the ARG.

Finite precision and the matrix form of the ARG
The ARG maps, as well as the ARG equations, all entail a sum over n ≥ 0 involving the coefficients χ_n^± defined in (3.30). To obtain a numerical approximation, we keep only a finite number of terms in this sum, restricting the integer n to the interval 0 ≤ n ≤ N. Obviously, the more terms we keep, the better precision we get. For this reason, we call N the "precision" of the numerical implementation. Of course, the actual numerical precision achieved for a given choice of N depends on the particular example under study. Equalities at precision N will be denoted by =_N. For a given finite precision N, the sum over n and the sum (3.30) over k can be permuted. Unlike the exact maps, the finite-precision linear ARG maps can thus be written in a familiar finite-dimensional matrix form. For example, the general ARG map (3.31) at precision N takes the matrix form (4.1), where the matrix elements A^±_{k_µ,δ}(N; ω, Ω, p) are universal numbers given by (4.2), the sign ± being fixed by the sign of k. Similarly, the maps A^±_{k_µ,δ} take the matrix form (4.3), with matrix elements A^±_{k_µ,δ}(N; k, p) given by (4.4). And, finally, (3.39) corresponds to the matrix form (4.5), where the matrix elements A_α(N; u, p) are given by (4.6) for any 0 < α < k_µ/δ and u > 0. In particular, the ARG equations (3.40) take the matrix form (4.7), valid for any 0 < α < k_µ/δ and u > 2.
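The role of the "precision" N can be illustrated generically: truncating an expansion of a smooth, resolvent-like function at order N gives an error that decreases steadily with N. The sketch below uses a Chebyshev truncation as a stand-in for the Pollaczek expansion (the test function and the basis are our choices, not the ARG formulas):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

f = lambda x: 1.0 / (x**2 + 1.0)         # resolvent-like test function (assumption)
xs = np.linspace(-5, 5, 2001)

def trunc_err(N):
    """Max error of the order-N truncated Chebyshev approximation on [-5, 5]."""
    p = Chebyshev.interpolate(f, N, domain=[-5, 5])
    return np.max(np.abs(p(xs) - f(xs)))

errs = {N: trunc_err(N) for N in (5, 20, 60)}
print(errs)   # the error decreases steadily as the "precision" N grows
```

For an analytic function the truncation error decays geometrically in N, which is the same qualitative behaviour one expects from the sums over n at precision N.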

Decoupling of the UV
Using the Stirling formula, we see that the matrix elements in (4.2), (4.4) or (4.6) are proportional to p^N/p! at large p. They are thus decreasing very quickly when p → ∞. This implies that the Fourier coefficients G_k above some Euclidean UV cut-off K, i.e. for |k| > K, are totally irrelevant to evaluate the ARG maps. Of course, the UV cut-off must be much larger than the RG scale, K ≫ k_µ. Moreover, if we increase the precision N or the index δ, K must also be increased accordingly. In practice, working with K ∼ δN k_µ is more than enough, see the examples below.
The conclusion is that we can always use a finite dimensional data set (G_k)_{|k|≤K} in numerical calculations, the UV cut-off K ≫ k_µ being chosen according to the precision goal.
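The factorial suppression p^N/p! behind the choice K ∼ δNk_µ is easy to quantify; the numbers below only illustrate the size of this weight, for N = 100:

```python
from math import lgamma, log

N = 100
logm = lambda p: N * log(p) - lgamma(p + 1)   # log of p^N / p!

# The weight peaks near p ~ N/log(N) and then collapses factorially fast
peak = max(range(1, 2000), key=logm)
drop = logm(peak) - logm(10 * peak)           # suppression between the peak and 10x higher p
print(peak, drop)                             # the UV tail is suppressed by hundreds of decades
```

Beyond roughly ten times the peak location, the weight has already dropped by several hundred orders of magnitude, so truncating the data set at such a K changes nothing in practice.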
This phenomenon of decoupling of the UV physics is of course one of the most important consequences of the usual RG ideas. Here we obtain a mathematically rigorous and universal version of this decoupling, as a consequence of analyticity. Moreover, the p-dependence of the matrix elements of the ARG maps (see e.g. Figs. 3 and 4) quantifies in a very precise way how the low energy physics can be influenced by the data above the RG scale, as a function of energy.

Extreme sensitivity on the data set
On top of their large p behaviour and the associated decoupling of the UV that we have just mentioned, the matrix elements of the ARG maps have another remarkable feature: in the range of energies where they are not infinitesimally small (i.e. for p ≲ K), they are typically huge. This is due to the fact that the sums over n (or m) in (4.2), (4.4) and (4.6) diverge, as already emphasized in Section 3. This property implies an extreme sensitivity, which increases with the precision N, of the ARG maps to the values of the Fourier coefficients G_k in the relevant energy range.
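The practical meaning of this sensitivity can be sketched with a toy linear map whose weights mimic the huge matrix elements (the alternating weights of order 10^10 below are a stand-in matching the orders of magnitude quoted below, not actual ARG matrix elements):

```python
import numpy as np

p = np.arange(20)
w = (-1.0)**p * 1e10 / (1 + p / 10)   # huge alternating weights (toy stand-in for the
                                      # ARG matrix elements at finite precision N)
g = 1.0 / (1 + p)**2                  # smooth "Fourier data"
exact = w @ g

noisy = g.copy()
noisy[7] += 1e-5                      # one coefficient off by 10^-5 ...
print(abs(w @ noisy - exact))         # ... shifts the output by |w[7]|*1e-5, about 6e4
```

A 10⁻⁵ error on a single input coefficient is amplified by the size of the weight it multiplies, exactly the mechanism described in item i) below.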

Illustrations
There is nothing better than a few plots to illustrate the properties listed above. On Fig. 3 and 4, we have depicted the values of some matrix elements (4.4) at k_µ = 5 and k = 1, for δ = 1, 2, 3 and precisions N = 100, 500. Very similar plots are obtained for different values of k_µ and k, or for the matrix elements A±_{k_µ,δ}(N; ω, Ω, p) and A_α(N; u, p). We see that: i) On Fig. 3, the dots that are visibly above or below the abscissa axis correspond to matrix elements that are huge in some range of the energy k_µ + δp, of order 10^10 for N = 100 and 10^25 for N = 500! For example, A_{5,1}(100; 1, 11) ≈ 1.82 × 10^10. This means that a tiny error in the Fourier coefficient G_16 that multiplies this huge number in the ARG map (4.3), say of order 10^−5, would yield a huge error in the coefficient G_1 given by the ARG map (4.3), of order 10^5! ii) On the logarithmic plots of Fig. 4, which both correspond to δ = 1, we clearly see that the matrix elements have a maximum at some energy and remain sizeable below that energy. For example, A_{5,1}(100; 1, 0) ≈ 9.64 × 10^4. This property is of course completely generic and in particular remains valid for the other values of δ.
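The amplification mechanism described in item i) is elementary arithmetic; the following sketch just replays it with the magnitudes quoted above (the value of A_{5,1} is taken from the text, the rest is illustrative).

```python
# Error amplification by a huge ARG matrix element. The magnitude of
# A_{5,1}(100; 1, 11) ~ 1.82e10 is taken from the text; the input error is
# the 1e-5 discussed above.
A_51 = 1.82e10            # matrix element multiplying G_16 in the map (4.3)
err_in = 1e-5             # tiny error on the Fourier coefficient G_16
err_out = A_51 * err_in   # resulting error on the reconstructed G_1
assert err_out > 1e5      # a 1e-5 input error becomes an error of order 1e5
```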
In Fig. 5, we have depicted the real-time frequency dependence of a typical large matrix element of the general ARG map (4.2). This dependence is very complicated, but is eventually tamed for large frequencies. This property is true for all the matrix elements.

Using the ARG with an exact data set
We are now going to illustrate explicitly how the ARG works, using mainly (but not exclusively) the simple example of the damped harmonic oscillator. This case captures well many basic qualitative features of more realistic models. We shall give examples of the maps A±_{k_µ,δ} and of the reconstruction of the spectral function and of the real-time functions from the Euclidean data. We fix the overall energy scale by setting the inverse temperature to β = 2π. (4.9)

The damped harmonic oscillator
For an oscillator frequency m > 0 and damping coefficient Γ > 0, we consider the resolvent function (4.10), defined for Im z < 0. The associated Euclidean Fourier coefficients are given in (4.11) and the spectral function in (4.12). The spectral function is smooth and has no discrete or zero-frequency piece. The retarded two-point function is given by χ̃_r(z) on the complex frequency plane and by (4.14) in real time. The case m > Γ corresponds to mild damping, the poles of χ̃_r(z) in the lower half-plane having a non-zero real part, whereas Γ > m corresponds to strong damping, with poles on the imaginary axis. Similar formulas give the advanced function.

Figure 3 (caption): The matrix elements A_{k_µ=5,δ}(N; k = 1, p) for δ = 1, 2, 3 (dots, squares and diamonds) and N = 100, 500 (left and right insets), as a function of the "energy" k_µ + δp. The peaks on the plots correspond to energy ranges for which the matrix elements are huge, implying an extreme sensitivity of the ARG maps at these energies.
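The explicit formulas (4.10)-(4.12) are not reproduced in this extract; a minimal numerical sketch can nevertheless be given by assuming the standard damped-oscillator parametrization G_k = 1/(ν_k² + 2Γ|ν_k| + m²) (with ν_k = k at β = 2π) together with the matching Breit-Wigner-type spectral function. Both closed forms are assumptions made for illustration, chosen to be consistent with the mild/strong damping discussion above and with the large-k behaviour quoted in Section 5.

```python
import math

M, GAMMA = 2.0, 0.5   # oscillator frequency m and damping Gamma (mild damping)

def G(k):
    """Assumed Euclidean Fourier coefficients:
    G_k = 1 / (k^2 + 2*Gamma*|k| + m^2)  (beta = 2*pi, so nu_k = k)."""
    return 1.0 / (k * k + 2.0 * GAMMA * abs(k) + M * M)

def rho(w):
    """Assumed spectral function of the damped oscillator:
    (1/pi) * 2*Gamma*w / ((w^2 - m^2)^2 + 4*Gamma^2*w^2)."""
    return 2.0 * GAMMA * w / math.pi / ((w * w - M * M) ** 2 + 4.0 * GAMMA ** 2 * w * w)

# mild damping (m > Gamma): the spectral function peaks near w = m
ws = [0.01 * i for i in range(1, 1000)]
w_peak = max(ws, key=rho)
assert abs(w_peak - M) < 0.1

# the coefficients fall off like 1/k^2 at high energy
assert abs(G(200) * 200 ** 2 - 1.0) < 0.01
```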

The ARG maps A + kµ,δ
We focus on the maps A⁺_{k_µ,δ}. The maps A⁻_{k_µ,δ} work in a similar way (note, moreover, that G_k = G_{−k} for the damped harmonic oscillator).
On Fig. 7, we have depicted the Fourier coefficient G_1 obtained from the maps A⁺_{k_µ=2,δ=1} and A⁺_{k_µ=5,δ=2}, as a function of the precision N (the UV cut-off being adjusted according to the precision). Similar plots are obtained for the reconstruction of other Fourier coefficients and for other values of the RG scale k_µ and the index δ; convergence is slower when k_µ and/or δ are increased. An example with δ = 2 is provided on Fig. 8.

The spectral functions from the Euclidean data
In the case of the damped harmonic oscillator (4.10), Eq. (2.38) yields the explicit formula (4.15) for the spectral function. On Fig. 9, we have depicted the reconstruction of the spectral function ρ from (4.15), using the general ARG map (4.1) and (4.2). In some cases, an excellent result is obtained at small precision (e.g. the spectral density obtained for N = 15 in the case (m, Γ) = (2, 2) is already excellent), whereas in other cases a much higher precision is needed. Quite generally, a reliable reconstruction of sharp peaks requires a high value of N.
We have also included an example based on Wigner's semi-circle law. For any choice of b > a, it corresponds to the Fourier coefficients (4.16) and the spectral function (4.17). This example differs qualitatively from the damped harmonic oscillator: the spectral function has compact support, the resolvent has square-root branch cuts, and the real-time two-point functions have a power-law decay at large time instead of an exponential decay. Fig. 10 illustrates the direct reconstruction of the real-time retarded two-point function χ_r(t) from the Euclidean data, using (4.5) and (4.6). We use two different ARG maps: one with (k_µ, δ, α) = (2, 1, 1), for which the exponential pre-factor in (4.5) vanishes; and one with (k_µ, δ, α) = (1, 1, 3/4), for which there is a non-vanishing exponential pre-factor e^{−t/2}. For relatively short times, an excellent reconstruction can be obtained with modest values of the precision. However, getting the long-time behaviour right requires higher and higher precision. Using values of (k_µ, δ, α) for which there is an explicit exponential damping factor in (3.39) of course helps in this respect, since the exact result goes to zero at large time. This is true even if the damping rate associated with the chosen values of k_µ, δ and α does not match the exact result, as for the choice (k_µ, δ, α) = (1, 1, 3/4) in Fig. 10.

The long-time behaviour from the Euclidean data
Let us now illustrate the criterion (3.44) for the computation of the thermalization time scale γ^{−1}, defined by (3.41), from the Euclidean data. The results of the previous subsection showed that a reliable description of the long-time behaviour of χ_r requires a very high precision N, so we do not expect the condition (3.44) to be satisfied very sharply for moderate values of N. However, Fig. 11 is rather suggestive. It clearly hints at the existence of two qualitatively distinct regions for the left-hand side of (3.44), as a function of γ: one in which it is nearly zero and one in which it deviates from zero and tends to diverge. This is a convincing sign that the correlator decays exponentially when t → ∞, but only a rough estimate of the corresponding thermalization time scale γ^{−1} is obtained, even at the high precision N = 1000.

Incomplete data set and analytic interpolation
The ARG maps A⁺_{k_µ,δ} allow one to reconstruct the low energy Fourier coefficients G_k for k < k_µ in terms of the high energy coefficients G_k for k ≥ k_µ. This amounts to performing an exact "discrete" analytic interpolation of the Fourier coefficients below the energy scale k_µ. If one knows the G_k only below some cut-off K, i.e. for |k| ≤ K, one can still use the approximate ARG maps (4.3) and (4.4) to perform the analytic interpolation at some finite precision.
One can also consider more general analytic interpolation problems. For example, assuming that the Fourier coefficients are known for 1 ≤ k < k_1 and for k > k_2, one could try to deduce the coefficients in the interval k_1 ≤ k ≤ k_2. One obvious way to do this is to use an ARG map A⁺_{k_µ,δ} for some k_µ > k_2. This, however, does not use the knowledge of the coefficients for 1 ≤ k < k_1. In practice, using this knowledge improves the precision of the interpolation immensely, thanks to the extreme sensitivity of the ARG maps to the data set discussed in 4.1.3.
To understand this point clearly, let us start with the simplest possible exercise: the reconstruction of a single unknown coefficient, say G_10, assuming that all the others are known. To do so, we may use (4.3) with, for instance, the choices k_µ = 2, δ = 1, k = 1 and some finite precision N. Due to UV decoupling, the Fourier coefficients above a certain N-dependent UV cut-off K are irrelevant. We thus get a single linear constraint (4.18) with a finite number of terms, which yields an approximate value for the single unknown G_10. The approximate value obtained in this way is extremely precise, because the matrix element A_{2,1}(N; 1, 10) multiplying the unknown coefficient is typically huge (see Fig. 3 and 4), whereas the left-hand side of (4.18) involves the known coefficient G_1 multiplied by one! For example, N = 200 yields the correct value of G_10 with a relative error of order 10^{−15}.
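The mechanism behind this precision is easy to replay on a toy linear constraint: the unknown is multiplied by a huge coefficient, so any fixed absolute error on the O(1) known data is divided by that coefficient when solving. A minimal sketch, with all values illustrative except the order of magnitude of the matrix element:

```python
# Toy version of solving one ARG constraint for a single unknown g_u:
#     lhs = c_u * g_u + rest,
# where c_u is huge (order of magnitude read off Fig. 3) and "rest" collects
# the contribution of all the other, known, coefficients.
c_u = 1.82e10
g_u_exact = 0.0097
rest = 3.7
lhs = c_u * g_u_exact + rest        # plays the role of the known data side

eps = 1e-6                          # small absolute error on the O(1) data
g_u_solved = (lhs + eps - rest) / c_u
abs_err = abs(g_u_solved - g_u_exact)   # = eps / c_u, of order 5e-17
assert abs_err < 1e-15
```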
More generally, it is convenient to use the ARG equations in the form (4.7). If all the Fourier coefficients except n of them are known, we can use n equations (4.7), obtained by choosing n different values of the parameters α, k_µ, δ, u, to perform the analytic interpolation.

ARG and data improvement
In this last section, we show how to use the ARG equations to systematically improve random approximate Euclidean data sets obtained, for example, from Monte-Carlo simulations. The basic philosophy is very similar to the use of standard RG equations in field theory to improve perturbation theory: one relies on the fact that the RG equations are exact statements. By combining these exact statements with approximate data, it is not surprising that one can devise algorithms to improve the data.
The rather spectacular aspect of the method is that the ARG equations are completely universal, model-independent constraints. The very same algorithms can thus be applied in principle to improve Monte-Carlo data for problems as diverse as lattice QCD, strongly correlated electron systems or strongly coupled matrix quantum mechanical models of black holes, etc.

General principle
As in the general presentation in Sec. 1, let F be the set of Fourier coefficients (G_k)_{k∈Z} satisfying all the basic standard constraints (but not necessarily the ARG equations). We endow F with a scalar product, which induces a notion of distance d. The distance d measures how physically close two sets of Fourier coefficients are to each other. There may be several natural choices for d; see below.
Let us assume that we have at our disposal an imprecise data set G^a = (G^a_k)_{k∈Z}. This is of course a very common and important situation, since most interesting models cannot be solved exactly. The data G^a can be seen as a point in F. It approximates an exact, but in principle unknown, set of Fourier coefficients G^e = (G^e_k)_{k∈Z}. The distance d(G^e, G^a) measures the accuracy of the approximation. The point G^e belongs to the linear subspace M of F defined by the set of all the ARG equations, for example the equations (3.40) for all the allowed choices of α, u, k_µ and δ. The point G^a, on the other hand, is a random point in F belonging to a certain ball centered on G^e; the better the accuracy of the Monte-Carlo simulation, the smaller the radius of the ball.
One can then improve the approximate data systematically by using an extremely simple idea: we consider the orthogonal projection G̃^a of G^a onto M. By Pythagoras' theorem, d(G^e, G̃^a) ≤ d(G^e, G^a): the new data point G̃^a is automatically more accurate than the data point G^a we started with! This simple method is illustrated on Fig. 12.

Figure 12: A typical Monte-Carlo simulation yields an approximate data point G^a in the vicinity of the exact result G^e. G^a can be seen as a random point belonging to a certain ball around G^e whose radius parameterizes the accuracy of the Monte-Carlo simulation. By projecting G^a onto the linear subspace M defined by the ARG equations, we obtain a new approximate data point G̃^a. By construction, G̃^a is more accurate than G^a: d(G^e, G̃^a) ≤ d(G^e, G^a).
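The projection step can be sketched in a few lines. In this sketch the ARG constraint matrix is replaced by a random stand-in playing the same role (a linear map whose kernel plays the part of M); the improvement d(G^e, G̃^a) ≤ d(G^e, G^a) then follows from the orthogonality of the projector, exactly as in the Pythagoras argument above.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_eq = 75, 30                   # dim of F_K^+ and number of constraints used

# Random stand-in for the matrix of ARG constraint coefficients; its kernel
# plays the role of the subspace M.
A = rng.standard_normal((n_eq, K))
P = np.eye(K) - np.linalg.pinv(A) @ A      # orthogonal projector onto ker(A)

G_e = P @ rng.standard_normal(K)           # "exact" data: a point of M
G_a = G_e + 0.05 * rng.standard_normal(K)  # noisy Monte-Carlo-like data
G_tilde = P @ G_a                          # improvement step: project onto M

d_before = np.linalg.norm(G_a - G_e)
d_after = np.linalg.norm(G_tilde - G_e)
assert d_after < d_before                  # Pythagoras: projection can only help
assert np.allclose(A @ G_tilde, 0.0, atol=1e-8)   # constraints now satisfied
```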

Important properties of the numerical implementation
Finite dimensional space. In all numerical implementations, we work with a finite numerical precision N and a finite cut-off K (adjusted according to N). The vector space F is thus replaced by its finite dimensional version F_K. A point in F_K is a set of Fourier coefficients (G_k)_{|k|≤K}. Moreover, it is convenient to separate the positive (k > 0), zero (k = 0) and negative (k < 0) frequencies, since the ARG equations do not mix positive and negative frequencies.
Natural distance functions. A priori, any scalar product on F_K can be used to define a distance. Since the approximate data set G^a plays a special role, a natural choice is given in Eq. (5.1). (We speak of the "accuracy" of the data versus the "precision" of the numerics, to avoid confusion. We also do not discuss here the subtleties associated with the fact that the spaces F and M are infinite dimensional; for all practical purposes, numerical implementations work in finite dimension.)
In practice, we shall focus on F⁺_K, with the distance function given in (5.3).
A random data point never belongs to M. Let us note that a randomly chosen point G^a around G^e will virtually never belong to M. Actually, even if the point G^a is very close to G^e, that is, even if the accuracy of the approximation is excellent, the ARG equations will usually be violated by huge amounts. This is due to their extreme sensitivity to the data set, as explained in 4.1.3. In other words, the ARG equations constitute a very delicate set of constraints which allows one to detect, with very high precision, whether a set of Fourier coefficients is consistent with analyticity or not.
This property is illustrated on Fig. 13. On the left inset is plotted the left-hand side of (4.7), as a function of u, for N = 200 (for which a cut-off K = 75 is amply sufficient), α = 1/2 and various values of k_µ and δ, for the data set G^e corresponding to the damped harmonic oscillator (4.11) at (m, Γ) = (2, 0.5). These functions all vanish (to a very good precision) for u ≥ 2, as implied by the ARG equations (4.7). On the right inset is plotted an instance of the same function, but using an approximate data set G^a instead of the exact values (4.11). The approximate data set is related to the exact data set by (5.4), where the ε_{σ,k} are independent Gaussian random variables of width σ, with probability density (5.5). On the plot we choose σ = 10^{−5}, which yields, for the particular realization we use, d⁺_{75}(G^e, G^a) ≈ 9.65 × 10^{−6}. This means that G^a is a very good approximation to G^e. In particular, the Euclidean correlators G(τ) computed from G^e and G^a are almost indistinguishable on a plot. Nevertheless, the graph on the right inset of Fig. 13 clearly shows that the ARG equations are wildly violated. Actually, one must go to accuracies as good as σ ∼ 10^{−15} for the approximate plot to start looking like the exact plot! And this value of σ would be even smaller if we were working at a higher precision N.
Singularity near the UV cut-off and the "perturbative" cut-off. The region near the UV cut-off is, of course, singular. Indeed, for |k| > K we set G_k to zero artificially. This is manifestly inconsistent with the analyticity properties, unless G_k = 0 for all k. However, this is not a serious flaw. As we have explained previously, working with a finite cut-off has essentially no effect much below the cut-off. The only obvious limitation is that we cannot expect to improve the data significantly near the cut-off using the ARG.
Moreover, let us note that, at high energies, a very reliable approximation to the coefficients G_k can be obtained in many models of interest by using perturbation theory. For a given accuracy goal, there exists a "perturbative cut-off" K_p above which perturbation theory is enough to reach this goal. In practice, we thus choose K sufficiently larger than K_p. For |k| ≤ K_p we use the ARG equations to improve the non-perturbative Monte-Carlo data. For |k| > K_p, perturbation theory suffices.
Defining the space M in finite dimension. Since we work with a finite dimensional space F_K, it is clear that we cannot impose the infinite set of ARG equations on it. Indeed, if we used more than K independent ARG equations, the only solution would be the trivial one, G_k = 0 for all k. This is not surprising: working with a finite cut-off implies that we work with a finite precision N, and the ARG equations can then be satisfied only approximately. For example, zooming in on the graph on the left-hand side of Fig. 13 yields the plots depicted on Fig. 14. This implies that there is no unique way to define a finite dimensional version M_K ⊂ F_K of M. This is an important feature that we have to deal with in order to implement in practice the general principles outlined in 5.1. One possibility is to replace the ARG equations by linear inequalities: for example, we can replace each equation (4.7) by the inequality (5.6), for a suitable choice of ε depending on the precision N. The advantage of this method is that we may in principle use as many values of k_µ, δ, α and u as we wish. However, the space M_K defined in this way is not a linear subspace of F_K, and the resulting linear programming problem that we have to solve is rather complicated.
Instead, we limit ourselves in the present paper to a much simpler approach. We define M_K by a finite number n̄ of independent ARG equations of the form (4.7), for certain choices of the parameters k_µ, δ, α and u. There is an obvious ambiguity in these choices, but we shall see that, to a large extent, this ambiguity is irrelevant. In particular, for a given K, it turns out that there is always a preferred order of magnitude for n̄, which yields the codimension of M_K. This codimension turns out to be largely independent of the precise set of ARG equations one chooses. Moreover, the accuracy of the improved data that we obtain also turns out to be largely independent of the choice of equations. The conclusion is that all reasonable choices seem to yield a subspace M_K which provides a good approximation to M.
We are now going to illustrate very explicitly all the above-mentioned properties by implementing an explicit algorithm.

Explicit algorithm
Step 1 We choose a precision N and evaluate the associated cut-off K as explained in Sec. 4.1. Most of our explicit examples will correspond to N = 200 and K = 75, but we shall also use N = 1000 and K = 150.
Step 2. We build an approximate data set (G^a_k)_{1≤k≤K} belonging to F⁺_K from an exact data set G^e by using (5.4) for some σ. We mostly use exact data sets corresponding to the formula (4.11), for (m, Γ) = (2, 0.5). We have studied several other values of m and Γ, all yielding very similar results; see also Sec. 5.4.5 for a very different example.
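Step 2 can be sketched as follows, assuming a multiplicative form G^a_k = (1 + ε_{σ,k}) G^e_k for Eq. (5.4) and the standard damped-oscillator closed form for (4.11); both closed forms are assumptions made for illustration.

```python
import math
import random

def G_exact(k, m=2.0, gamma=0.5):
    # damped-oscillator coefficients, standard form assumed (beta = 2*pi)
    return 1.0 / (k * k + 2.0 * gamma * abs(k) + m * m)

random.seed(1)
K, sigma = 75, 0.05
# Assumed form of Eq. (5.4): multiplicative Gaussian noise of width sigma,
#     G^a_k = (1 + eps_{sigma,k}) * G^e_k
G_a = {k: (1.0 + random.gauss(0.0, sigma)) * G_exact(k) for k in range(1, K + 1)}

# relative rms accuracy over the first K_p = 25 coefficients: of order sigma
K_p = 25
acc = math.sqrt(sum((G_a[k] / G_exact(k) - 1.0) ** 2
                    for k in range(1, K_p + 1)) / K_p)
assert 0.01 < acc < 0.1
```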
Step 3. The perturbative, or high energy, expansion of (4.11), up to one loop, is given in (5.7) (recall that β = 2π). This formula is very poor at very low values of k, but its accuracy becomes excellent at large k. For example, for (m, Γ) = (2, 0.5), we get an accuracy better than 1% for |k| ≥ 17. Even if we use only the leading 1/k² term of (5.7), we obtain an accuracy better than 5% for |k| ≥ 25. So, for this example, a reasonable value for the perturbative cut-off is K_p ∼ 25. All values of this order of magnitude yield similar results; see below.
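Under the same assumed closed form for (4.11), the perturbative cut-off of Step 3 can be found mechanically: scan for the smallest k above which the leading 1/k² term alone is accurate to 5%.

```python
def G_exact(k, m=2.0, gamma=0.5):
    # damped-oscillator coefficients, standard form assumed (beta = 2*pi)
    return 1.0 / (k * k + 2.0 * gamma * abs(k) + m * m)

def G_leading(k):
    # leading 1/k^2 term of the high-energy expansion (5.7)
    return 1.0 / (k * k)

# smallest k above which the leading term alone is accurate to 5%
K_p = next(k for k in range(1, 200)
           if all(abs(G_leading(j) / G_exact(j) - 1.0) < 0.05
                  for j in range(k, 200)))
assert 20 <= K_p <= 25   # compatible with the K_p ~ 25 quoted in the text
```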
We then define, via (5.8), a measure of the accuracy of the non-perturbative piece of the original approximate data set. It is this accuracy that we want to improve using the ARG equations.
Step 4 We now start the delicate discussion of how to get the best possible subspace M K . We focus on the positive frequency space M + K without loss of generality. This discussion will be continued in Step 6.
We define the subspaces M⁺_{K,n} by a set of n equations of the form (5.9), for n different values of {k_µ, δ, α, u}. We observe numerically that the linear equations (5.9) are not all independent in general, as was already suggested in Section 3.6. We denote by n̄(n) ≤ min(n, K) the number of independent equations (5.9), so that dim M⁺_{K,n} = K − n̄. Of course, the subspace M⁺_{K,n} depends on the precise values of {k_µ, δ, α, u} that we use, and the choice of these values is a priori quite arbitrary. We have tested many possibilities. To be specific, we proceed as follows. We choose three lists, l_δ, l_α and l_u, of possible values for δ, α and u that we want to use; for a given value of n, the n quadruples {k_µ, δ, α, u} are then drawn from these lists. Note that {1, 2, 3/4, 3} and {1, 2, 3/4, 5} are not included, because they do not satisfy the constraint 0 < α < k_µ/δ. We shall see that the various possible choices yield very similar results, but it seems always better to sample at least a few values of α and u.
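The counting n̄(n) of independent equations is a numerical-rank computation. A sketch with a random stand-in constraint matrix (the true coefficients are not reproduced here; the stand-in is built to be degenerate on purpose, mimicking the observed linear dependence):

```python
import numpy as np

rng = np.random.default_rng(2)
K, n = 75, 120
# Stand-in for the n x K matrix of ARG constraint coefficients. We mimic the
# observed degeneracy by building the n rows as mixtures of only 60
# independent directions.
basis = rng.standard_normal((60, K))
A = rng.standard_normal((n, 60)) @ basis

def n_bar(mat, tol=1e-8):
    """Number n_bar(n) of numerically independent equations (numerical rank)."""
    s = np.linalg.svd(mat, compute_uv=False)
    return int((s > tol * s[0]).sum())

assert n_bar(A) == 60          # only 60 of the 120 equations are independent
assert K - n_bar(A) == 15      # dim M^+_{K,n} = K - n_bar
```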
Step 5. We construct the orthogonal projections G̃^a(n̄) of G^a onto the spaces M⁺_{K,n}, associated with the distance function (5.3), and define the accuracy ∆(n̄) as in (5.10). If the algorithm works, we expect the function ∆(n̄), which measures the accuracy of the improved data set G̃^a(n̄), to be a decreasing function of n̄, down to some optimal value n̄₀ which, of course, must be less than K. Indeed, when n̄ = K, G̃^a(n̄) = 0 and ∆(K) ≃ 1.
The question is, then, how to find this optimal value n̄₀ in general?
In the artificial situation where one actually knows the exact data set G^e, the optimal value n̄₀ is, obviously, the one that minimizes ∆(n̄). The accuracy gain is then defined to be w̄ = ∆(0)/∆(n̄₀). (5.11) An interesting observation is that, for sufficiently large N and K, the optimal value n̄₀ turns out to be always more or less the same, independently of the precise choice of the {k_µ, δ, α, u} that defines M_{K,n}.
However, in a real-life calculation, the exact data set is unknown. One then needs a criterion to obtain an estimate ñ₀ of the optimal value n̄₀. It is quite important for this estimate to be reliable: if it is over-estimated, we are likely to get a totally nonsensical result like G̃^a(ñ₀) ≈ 0; if it is under-estimated, we will get a data improvement significantly inferior to the maximal value the method can produce in principle.
Step 6. We introduce the partial norm of the improved data, defined to be µ(n̄) = d⁺_{K_p}(0, G̃^a(n̄)). (5.12) By construction, if K_p is not too small, µ(n̄) will be an almost always decreasing function of n̄, with µ(0) = 1 and µ(K) = 0. The detailed behaviour of µ as a function of n̄ is generically as follows (see the examples below). If n̄ is below the optimal value n̄₀, M⁺_{K,n} yields a better and better approximation to the positive frequency space M⁺ as n̄ is increased. In this regime, the data point G̃^a(n̄) is only slightly modified at each step n̄ → n̄ + 1, and the function µ decreases mildly at each increment of n̄. On the contrary, when n̄ is above n̄₀, the approximation of M⁺ by M⁺_{K,n} becomes inconsistent. The data point G̃^a(n̄) then departs significantly from the correct value and tends to zero. We thus expect a rather sudden and sharp decrease of µ(n̄) when n̄ exceeds n̄₀.
This sudden sharp decrease allows one to "detect" n̄₀. A very simple procedure is to set ñ₀ = [¾ n̄_{1/2}], where n̄_{1/2} is the smallest value of n̄ for which µ(n̄) < 1/2 (the brackets denote the integer part). The effective accuracy gain of the algorithm is then defined by w̃ = ∆(0)/∆(ñ₀). (5.13) The use of the factor ¾ in the definition of ñ₀ is of course a matter of choice, but it seems very reasonable. On the one hand, a greater value could jeopardize the whole scheme, by potentially producing, at least in some cases, an estimate beyond the value for which the approximation of M⁺ by M⁺_{K,n} makes sense. On the other hand, ¾ is large enough to ensure that we always remain not too far below the genuine optimal value n̄₀, so that the effective accuracy gain w̃ is not much lower than its maximal possible value w̄.
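The detection rule of Step 6 is straightforward to implement; the sketch below applies it to an illustrative µ(n̄) profile with the generic shape described above (mild decrease, then sudden sharp collapse).

```python
def estimate_n0(mu, factor=0.75):
    """Step 6 detection rule: n_half = smallest n_bar with mu(n_bar) < 1/2,
    then n0_tilde = integer part of (3/4) * n_half."""
    n_half = next(n for n, m in enumerate(mu) if m < 0.5)
    return int(factor * n_half)

# illustrative mu(n_bar) profile with the generic shape described above:
# mild decrease up to n_bar ~ 50, then a sudden sharp collapse
mu = [1.0 - 0.004 * n for n in range(50)] + \
     [0.8 - 0.1 * (n - 49) for n in range(50, 60)]
n0_tilde = estimate_n0(mu)
assert n0_tilde == 39    # n_half = 53 here, and int(0.75 * 53) = 39
```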
A finer procedure consists in estimating n̄₀ by looking in more detail at the shape of the curve representing µ(n̄). This yields in general the best results but, for our purposes, the crude recipe proposed in the previous paragraph works well enough.

The algorithm on a specific case
Let us first illustrate all the basic properties of the algorithm on a specific typical example. We pick N = 200, K = 75, (m, Γ) = (2, 0.5) and build an approximate data set G^a from (5.4) with σ = 0.05. We also choose K_p = 25. For the particular realization of G^a that we use, we find ∆(0) ≈ 0.0456. This means that our approximate data set has an accuracy of about 4.56% for the first 25 Fourier coefficients. The goal is to improve these Fourier coefficients to get a better accuracy. (Instead of the partial norm µ, we may use the total norm ν(n̄) = d⁺_K(0, G̃^a(n̄)), which, by construction, is a strictly decreasing function of n̄; using ν instead of µ yields similar results.)
(Of course, using the factor ¾ does not preclude this problem from happening in special cases. One could use a factor ½ to be on the completely safe side.) We choose the parameter lists as explained below Eq. (5.9). On Fig. 15, we plot the number n̄ of independent ARG equations obtained in this way, as a function of the total number n of equations that we use. We clearly see that the equations are not all independent: one needs n = 182 equations to span the whole 75-dimensional space F⁺_75. The accuracy function ∆ is plotted on Fig. 16, as a function of n̄. As expected, we observe that ∆(n̄) decreases, down to the minimal value ∆(49) ≈ 0.0157 obtained for n̄ = n̄₀ = 49. The algorithm is thus able to produce an improved data set of accuracy ∼ 1.57%, starting from a sample of accuracy ∼ 4.56%. The accuracy gain is w̄ ≈ 2.9 in this case, which is quite good. When n̄ > 50, the algorithm brutally breaks down; the corresponding data points G̃^a(n̄) do not appear on the plot because they are off scale.
On Fig. 17, we have plotted both the norm µ(n̄) and ∆(n̄). The behaviour of µ is as described in Section 5.3. For n̄ ≤ n̄₀, it is a mildly decreasing function of n̄. Then it decreases sharply, which means that the algorithm no longer provides a good approximation to M⁺. We can thus estimate the optimal value n̄₀ using the idea explained in Step 6 of Section 5.3. We get in this way ñ₀ = 44, ∆(ñ₀) ≈ 0.0224 and thus an effective accuracy gain w̃ ∼ 2, a very decent result.
On Fig. 18 is displayed the accuracy of the original data (left inset) versus the accuracy of the best improved data obtained at n̄ = n̄₀ (right inset), Fourier coefficient by Fourier coefficient. This shows in great detail how the algorithm acts on the data set to improve it. We see that our previous choice of K_p = 25 was rather conservative, since the algorithm works pretty well up to k ∼ 40.

Varying parameters
We now keep using the very same approximate data set G^a as in the previous subsection, but run the algorithm with different parameters. Four typical results for ∆(n̄), µ(n̄) and the detailed accuracy of the best improved data sets are depicted on Fig. 19, 20 and 21. As announced, all the plots look qualitatively the same and the accuracy gain produced by the algorithm is very similar in all cases.

Other data sets
On Fig. 22, using the same values of N , K, K p , l δ , l α and l u as on Fig. 16, we plot the accuracy function starting from new data sets of various accuracies. In all cases, the algorithm yields an accuracy gain (optimal or effective) in the range 2-4.

Working at precision N = 1000
We now use the much higher precision N = 1000. A UV cut-off K = 150 is then adequate. We pick an approximate data set for our favorite values (m, Γ) = (2, 1/2) and σ = 0.05. We run the algorithm with the same l_δ, l_α and l_u as in Fig. 16, but now choose a larger K_p = 50, consistently with the idea that the increased precision should allow the Fourier coefficients to be improved up to a higher energy. The results are depicted on Fig. 23. The optimal accuracy gain w̄ ≈ 3.13 is excellent in this case, but the effective one, w̃ ≈ 1.51, is much lower. This is explained by the fact that the graph of the accuracy starts to become rather fuzzy for values of n̄ as low as 85, and our simple algorithm to estimate n̄₀ is not very good in such a case. The plot of the detailed accuracy of the improved data also shows that the Fourier coefficients are greatly improved up to |k| in the range 60-70, which is much better than the value ∼ 40 obtained when working at N = 200, as expected. On Fig. 24, we have run the algorithm with l_δ = {1} instead of l_δ = {1, 2}. The graph of the accuracy is then sharper and the algorithm yields a better estimate ñ₀ of n̄₀, with an effective accuracy gain of about 2.09.
The conclusion is that using an improved precision does not seem to yield a much better accuracy gain for the algorithm. However, and as expected, the higher precision allows to work with a higher K p .

A last example
We have mainly focused, in our applications, on the example of the damped harmonic oscillator, Eq. (4.10)-(4.14). Even though this is a nice example capturing interesting physics, it is natural to ask whether the good results we have obtained might depend on the fact that the resolvent function is a very simple analytic function in this case. In more realistic examples, the resolvent is typically an extraordinarily complicated function for which no explicit closed-form formula is available.
For this reason, we also include a more complicated and interesting example, corresponding to the large N solution of a quantum mechanical theory of N × N Hermitian matrices modeling some interesting properties of quantum black holes [1]. One can show that the full solution of the model is encoded in the Euclidean two-point function (5.14), where the operators a†_i and a_i create and destroy strings interacting with the black hole.
Of course, our purpose here is not to discuss the physics of the model, which can be found in [1], but to test our algorithm in a very non-trivial case. The Fourier coefficients G_k for (5.14) cannot be found in closed form, but are determined in principle by a Schwinger-Dyson equation, which is equivalent to the infinite hierarchy of constraints (5.15) on the coefficients. As usual, we choose the inverse temperature β = 2π. The masses m, M and the coupling λ are parameters of the model. Note that the G_k are not real but satisfy instead G*_k = G_{−k}. Since the ARG equations are linear with real coefficients, we can use them to improve the real and imaginary parts Re G_k and Im G_k of the coefficients independently of each other.

Conclusion
A central result of our work is to show that analyticity implies an infinite set of linear equations that any set of Fourier-Matsubara coefficients G k associated with a quantum mechanical finite temperature Euclidean two-point function must satisfy. Some of these equations admit an interesting Renormalization Group interpretation. Remarkably, these equations can be used to improve systematically any random approximate data set obtained, for example, from Monte-Carlo simulations.
Our main intention in this paper was to explain the main ideas and equations, with physics applications in mind. It would be interesting to have a more complete, mathematically rigorous presentation. In particular, a detailed discussion of the linear dependence between the ARG equations, which we have observed explicitly in our numerics (see e.g. Fig. 15), would be welcome. Precise statements about how the finite dimensional spaces M_K ⊂ F_K approximate M ⊂ F when K → ∞ would also be useful. Our results on the codimension n̄₀ of M_K suggest that dim M_K / dim F_K ∼ 0.4. Can this statement be made precise, in particular in the limit K → ∞? More generally, a direct analysis of the geometry of M in infinite dimension, which we have avoided because the practical applications always deal with finite dimensional spaces, is desirable. Moreover, some fine aspects of our results, for example the curious but clearly visible oscillatory structure of the improved data sets seen in Fig. 18, 21, 23 and 24, require a better understanding.
But the most compelling goal to pursue is probably to better assess how effective the use of the ARG equations can be in real-world problems. To do so, one has to apply our algorithm, or, better, some significantly improved version thereof, to the Monte-Carlo data found in interesting strongly coupled problems, including lattice QCD and condensed matter systems.