Riding on irrelevant operators

We investigate the stability of a class of derivative theories known as $P(X)$ and Galileons against corrections generated by quantum effects. We use an exact renormalisation group approach to argue that these theories are stable under quantum corrections at all loops in regions where the kinetic term is large compared to the strong coupling scale. This is the regime of interest for screening or Vainshtein mechanisms, and in inflationary models that rely on large kinetic terms. Next, we clarify the role played by the symmetries. While symmetries protect the form of the quantum corrections, theories equipped with more symmetries do not necessarily have a broader range of scales for which they are valid. We show this by deriving explicitly the regime of validity of the classical solutions for $P(X)$ theories including Dirac-Born-Infeld (DBI) models, both for generic and for specific background field configurations. Indeed, we find that despite the existence of an additional symmetry, the DBI effective field theory has a regime of validity similar to that of an arbitrary $P(X)$ theory. We explore the implications of our results for both early and late universe contexts. Conversely, when applied to static and spherically symmetric screening mechanisms, we deduce that the regime of validity of typical power-law $P(X)$ theories is much larger than that of DBI.


Introduction
Recent decades have witnessed much effort being put into obtaining theoretical predictions from models which attempt to describe the relevant processes in either the early or the late universe (or both). We often argue that an inflationary period of expansion in the early universe allowed the amplification of quantum fluctuations, which later became imprinted in the cosmic microwave background radiation [1]. The statistics of this anisotropic map have become the principal object of interest in early universe cosmology, as they might enable the reconstruction of the parameters of the microphysical Lagrangian, a process usually referred to as a 'bottom-up approach.' Since theories attempting to describe the early universe are quantum by nature, a natural question to ask is what sort of operators are generated by radiative corrections to the classical theory and whether the theory is indeed stable, and hence both natural and predictive. If the model is described by an Effective Field Theory (EFT), quantum corrections should not introduce important operators which would then offer additional interaction channels and spoil the classical solutions. If that were to happen, the theory could run out of control, since, from an EFT standpoint, it would have to be augmented by an infinite tower of operators. The recent results of BICEP2 [2], which if confirmed would suggest a detection of primordial gravitational waves and constrain the tensor-to-scalar ratio, also reinstate the relevance of understanding the merger between inflation models and quantum mechanics.
These concerns are not exclusive to inflation and also arise in theories which model the physics of the late Universe. In particular, to address the current accelerated expansion of the universe, one can argue that the dark energy sector responsible for this behaviour consists of one or more light scalars. These are subject to screening mechanisms that rely on strong self-interactions and interactions with matter to effectively hide these light degrees of freedom from the scrutiny of laboratory and solar system experiments [3,4]. In this paper we will be interested in a specific type of screening called Vainshtein or kinetic Chameleon [5] (see Ref. [6] for a recent review).
Most if not all of the theories exhibiting the Vainshtein mechanism are not typical EFTs, since they exhibit the wrong sign for analyticity and include superluminalities [7,8]. These properties imply that they cannot enjoy a standard Wilsonian UV completion and EFT arguments might not always be appropriate [12][13][14][15]. Nevertheless, because of the useful insight they provide, standard EFT arguments are sometimes applied to these theories in the literature. As such, we shall consider them in this paper within the EFT framework.
Our focus of interest is to understand whether a specific class of derivative scalar field theories is radiatively stable and to establish the regime of validity of their respective classical predictions.
For concreteness, we will explore a special type of theories involving only single derivatives of a light field φ, usually referred to as P(X), where $X = -(\partial\phi)^2/\Lambda^4$ and Λ is the strong coupling scale. Such models enjoy a global shift symmetry. These types of theories are especially appealing for models of inflation, where they go by the name of k-inflation, and they were first introduced in Refs. [16,17]. There inflation is driven by the non-canonical kinetic term of φ. Since models inspired by string theory typically produce a nontrivial kinetic structure, this category of models is indeed extremely interesting. Moreover, one of the key features of these models is that the tensor-to-scalar ratio can be enhanced [18,19].
P (X) models could also be relevant for the late time acceleration of the Universe (see, for instance, k-essence models [20][21][22]), where the scalar field can be screened via the Vainshtein mechanism [23]. Indeed, in this paper we shall be interested in exploring these multiple phenomenological facets.
Among the entire class of P(X) theories, the Dirac-Born-Infeld (DBI) [24][25][26][27] model, where the Lagrangian is roughly
$$\mathcal{L}_{\rm DBI} \sim -\Lambda^4\sqrt{1-X}\,, \qquad (1.1)$$
has taken a lead role owing to its additional non-linearly realised symmetry, whose infinitesimal form is given by [28]
$$\delta\phi = v_\mu\left(\Lambda^2\, x^\mu + \frac{\phi\,\partial^\mu\phi}{\Lambda^2}\right)\,, \qquad (1.2)$$
with $x^\mu$ labelling the 4-dimensional space-time coordinates and $v_\mu$ an infinitesimal constant vector. This symmetry is the remnant in four dimensions of a fully realised five-dimensional Poincaré invariance. DBI has been an extremely popular model for inflation giving rise to large non-gaussianities (see, for instance, Refs. [29][30][31][32][33][34]). The common prescription for DBI is to assume that its EFT can satisfy the criterion of $|X| \sim 1$ provided the acceleration (which should be properly defined) is small. We will revisit this intuition later, and elaborate on its exact interpretation for different background configurations. DBI has also been adopted for models of quintessence or 'DBI-essence' in Refs. [35][36][37].
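As a quick cross-check of this symmetry, one can verify symbolically that the DBI Lagrangian shifts only by a total derivative. The sketch below (an illustrative simplification, not the full four-dimensional computation) works in one Euclidean dimension with Λ = 1, where the Lagrangian reduces to the arc length of the brane embedding and the transformation reads δφ = v(x + φφ′):

```python
import sympy as sp

x, v = sp.symbols('x v')
phi = sp.Function('phi')(x)
dphi = sp.Derivative(phi, x)

# 1D Euclidean DBI Lagrangian (arc length of the brane embedding), Lambda = 1
L = sp.sqrt(1 + dphi**2)

# infinitesimal nonlinearly realised symmetry: delta phi = v (x + phi phi')
delta_phi = v * (x + phi * dphi)

# first-order variation of L: L has no explicit phi dependence, only phi'
dL = sp.diff(L, dphi) * sp.diff(delta_phi, x).doit()

# candidate boundary term: d/dx [ v phi sqrt(1 + phi'^2) ]
boundary = sp.diff(v * phi * L, x).doit()

residual = sp.simplify(dL - boundary)
```

The vanishing residual confirms that the variation is a pure boundary term, so the reduced action is invariant.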
Another type of higher derivative theories which also have a reorganised EFT dictated by a hierarchy of derivatives of the field are Galileon theories, which can arise in a certain limit of massive gravity theories (examples include the Dvali-Gabadadze-Porrati (DGP) model [38] and massive gravity [39][40][41]). Galileon theories are invariant under the transformation
$$\phi \to \phi + c + v_\mu x^\mu\,, \qquad (1.3)$$
where c and $v_\mu$ are (scalar and vector) constants. Guided by this symmetry and the requirement of the absence of ghosts, the derivative structure of the Galileon Lagrangian is of the symbolic form [42,43]
$$\mathcal{L}_{\rm Galileons} \sim \sum_{n=2}^{5} c_n\, \phi\, \mathcal{E}\mathcal{E}\, (\partial\partial\phi)^{n-1}\, \eta^{5-n}\,, \qquad (1.4)$$
where $\mathcal{E}$ is the antisymmetric Levi-Civita symbol, η refers to the flat (Lorentzian) Minkowski space-time metric, and the contraction of indices is implied. It is a common statement in the literature that theories described by the Lagrangian (1.4) have a well defined EFT provided $\partial^n\phi/\Lambda^{n+1} \ll 1$, for n ≥ 3. We shall revisit this criterion in this paper. Traditionally, the existence of an additional symmetry (like in DBI and in Galileon theories) is associated with the radiative stability of the model. However, as we shall see in this paper, the symmetry on its own is not sufficient to render the theory stable. Neither is the symmetry necessarily required to ensure the radiative stability of the theory. The role of the symmetry is rather reserved to protecting the derivative structure of the terms generated by the radiative corrections, which should, in principle, respect the same symmetry the classical action does.
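The same kind of check applies to the Galileon symmetry: under φ → φ + c + v_µx^µ the cubic Galileon (∂φ)²□φ shifts only by a total derivative. A symbolic sketch in two Euclidean dimensions with Λ = 1 (an illustrative reduction, not the full computation):

```python
import sympy as sp

x, y, c, vx, vy, eps = sp.symbols('x y c v_x v_y epsilon')
phi = sp.Function('phi')(x, y)

def L3(f):
    # cubic Galileon in 2D Euclidean space, Lambda = 1: (d f)^2 box f
    return (sp.diff(f, x)**2 + sp.diff(f, y)**2) * (sp.diff(f, x, 2) + sp.diff(f, y, 2))

# first-order variation under the Galileon transformation f -> f + c + v.x
dL = sp.diff(L3(phi + eps * (c + vx * x + vy * y)), eps).subs(eps, 0)

# candidate current J^mu such that dL = d_mu J^mu
v_dot = vx * sp.diff(phi, x) + vy * sp.diff(phi, y)
grad2 = sp.diff(phi, x)**2 + sp.diff(phi, y)**2
Jx = 2 * v_dot * sp.diff(phi, x) - vx * grad2
Jy = 2 * v_dot * sp.diff(phi, y) - vy * grad2

residual = sp.simplify(sp.expand(dL - sp.diff(Jx, x) - sp.diff(Jy, y)))
```

The same bookkeeping extends to the quartic and quintic Galileons, at the price of longer currents.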

Summary
Given the significant progress in developing models both of the early and the late universe, we believe it is timely to revisit their fundamental features as EFTs to fully realise the precision era of cosmology we have recently entered. P(X) theories encompass a large class of these models, which are both theoretically and observationally relevant. The main regime of interest in such theories is when the kinetic term of the field φ is large, $|X| \to 1$ (for DBI) and potentially even $|X| \gg 1$ in some other P(X) models. Then the dynamics is mostly driven by the kinetic structure of the field, rather than its potential. The main purpose of this paper is to explore the quantum consistency and classical validity of P(X) models including DBI field theories in their respective regime of interest. Our results will be focused on P(X) theories for simplicity of the discussion, but can also be applied to theories with higher-order derivative interactions, such as Galileons. We will briefly specify our results for this class of theories-see appendix E for more details.
Conventionally, a higher level of symmetry in these models has been associated with better control of the full theory as a whole (i.e., when including quantum corrections). DBI has therefore played a pivotal role amongst P(X) theories, often claimed to be more 'natural' or more 'radiatively stable' than an arbitrary model within the P(X) class. In this manuscript we show that while the symmetry does play a crucial role in preserving a given structure in the quantum corrections, the symmetry by itself does not change the overall magnitude of these corrections. This implies that models endowed with more symmetries are not necessarily more 'natural,' and in particular their regime of validity is not necessarily larger compared to other P(X) theories.
The primary results we have established in this paper are the following: • Regime of validity of the classical solution: a perturbative approach-Thinking about DBI as a theory in its own right, it is commonly argued that its classical solutions are under control even if $|X| \sim 1$ provided some measure corresponding to an acceleration is small. The reason behind this belief is that the logarithmic and finite contributions arising from loops of the field itself involve terms of the form $\partial^2\phi$, which are assumed to be small within the regime of validity of the theory.
In the first part of the manuscript we quantify this regime of validity for arbitrary P(X) models, based on the same criterion as for DBI, and simply ask the question of whether or not symmetries play a crucial role in determining this regime of validity. We follow a conventional 'covariant' perturbative approach à la Barvinsky & Vilkovisky to compute the quantum corrections.
For the specific case of DBI, we show that the result is independent of whether or not the formalism preserves the underlying symmetry. In particular, in a five-dimensional approach which makes the DBI symmetry manifest, we find the same results as in its four-dimensional counterpart. We also show that, contrary to expectations and despite enjoying an additional symmetry, the regime of validity of DBI classical solutions is typically smaller than that of other P(X) models.
• Naturalness and Wetterich exact renormalisation group approach-Next we address the core of the naturalness question by considering the Wetterich exact renormalisation group (ERG) equation, which is valid at all loops and which at lowest order in a derivative expansion for P(X) takes the form
$$\kappa\,\partial_\kappa P_\kappa = \frac{1}{2}\,{\rm Tr}\!\left[\kappa\,\partial_\kappa R_\kappa\left(-Z\,\partial^2 + R_\kappa\right)^{-1}\right]\,,$$
where $R_\kappa$ is a regularisation operator, κ is the infrared regulator and $P_\kappa$ is the modified effective action at κ (also known as the effective average action). The complete exact form of this equation is derived in appendix A. In the above, $Z \sim P_{,X}(X)$ is related to the effective kinetic metric in these P(X) models. The exact expression for Z is given in Eq. (3.6). In the regime of interest (large kinetic term) it follows that $|Z| \gg 1$. This procedure differs from the previous one in that it is exact to all loops and Z is not considered to be a fundamental metric to be introduced in the regularisation scheme.
We solve the full ERG equation by performing a derivative expansion (still non-perturbatively, that is, valid at all loops). We find that to all orders in derivatives, the all-loop quantum contributions introduce negligible modifications to the effective action in the large kinetic term regime where $|Z| \gg 1$ (provided derivatives remain under control).
We can understand this result more intuitively by noticing that the path integral for these theories behaves schematically as
$$\int\mathcal{D}\chi\;\exp\!\left(-\frac{Z}{\hbar}\,S[\chi]\right)\,,$$
where χ is the field perturbation, so there is an effective reduced Planck constant, $\hbar_{\rm eff} \equiv \hbar/Z$. In the regime where $|Z| \gg 1$, $\hbar_{\rm eff} \to 0$ and quantum corrections become irrelevant.
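The $\hbar_{\rm eff} = \hbar/Z$ intuition can be illustrated with a zero-dimensional toy version of this path integral (a cartoon of the argument with an assumed action S = χ²/2 + χ⁴/4, not the field-theory computation): the relative deviation of ⟨χ²⟩ from its free Gaussian value 1/Z shrinks like 1/Z as Z grows.

```python
import numpy as np
from scipy.integrate import quad

def chi2_expectation(Z):
    # <chi^2> in the toy measure exp(-Z S), with S = chi^2/2 + chi^4/4
    w = lambda c: np.exp(-Z * (c**2 / 2 + c**4 / 4))
    norm, _ = quad(w, -2, 2)                        # integrand negligible beyond |c| > 2
    num, _ = quad(lambda c: c**2 * w(c), -2, 2)
    return num / norm

# relative size of the 'loop' correction: the free (Gaussian) answer is
# <chi^2> = 1/Z, and the quartic interaction corrects it at relative order 1/Z
deviations = [abs(chi2_expectation(Z) - 1 / Z) * Z for Z in (10.0, 100.0, 1000.0)]
```

Each tenfold increase in Z suppresses the interaction-induced correction roughly tenfold, mimicking the ℏ_eff → 0 limit.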
We emphasise that this result is shown to all loops and is non-perturbative. These results are very different from what one would have guessed following a perturbative prescription, or considering potential interactions rather than kinetic interactions. While the analysis focused on P (X) models, it is clear that the results hold for any theory exhibiting the Vainshtein mechanism. Indeed, this paper highlights a very nontrivial implementation of the Vainshtein mechanism at the quantum level. Such implementations were found previously in Ref. [44] for massive gravity [41,45], though in a perturbative version.
Our analysis therefore confirms the naturalness of P(X) models deep within the large kinetic term regime where $|Z| \gg 1$. Importantly, our conclusions are again drawn independently of the fact that the model might enjoy an additional symmetry, which could in principle cloud the requirements for naturalness properties. In fact, our work allowed us to highlight the following facts: 1. While symmetries are crucial in establishing the form of the quantum corrections, they play little role in naturalness arguments for P(X) theories when the strong coupling scale of the theory does not coincide with the cut-off. In particular, symmetries do not enhance their regime of validity. We emphasise that if we follow a procedure for which DBI does not receive large self-corrections of order of the cutoff, then consistently following the same procedure for an arbitrary P(X) model implies that terms of the form X^n are not generated by quantum effects in P(X).
2. Models relying on a large kinetic term can be made natural deep within their 'Vainshtein' region where $|Z| \gg 1$. This is an exact statement and shows the direct implementation of the Vainshtein mechanism within the loops.
Outline.-This paper is divided into two parts. Part I discusses the regime of validity of classical solutions following a perturbative approach, whereas Part II investigates naturalness considerations fully non-perturbatively in loops.
In §2 we start by defining essential concepts for this paper, namely the cut-off and the strong coupling scales, relevant and irrelevant operators, and discuss the ambiguities in considering power-law divergences. Readers familiar with these concepts may wish to proceed directly to §3, where we track finite and logarithmic contributions from loops following a conservative viewpoint. As a by-product of this analysis, we explore the role of symmetries in these contributions. We derive the regime of validity of tree-level calculations by requiring that the previous quantum contributions are small. We then apply this criterion to DBI during inflation in §4, and recover a criterion consistent with previous results in the literature. We then move in §5 to static and spherically symmetric background field profiles, appropriate in screening mechanisms, and compare generic P (X) results with those obtained in DBI and Galileon theories.
Part II starts with a discussion of Wilsonian and effective field actions in §6. We revisit the standard question of naturalness and address it using an ERG approach valid at all loops in §7. We establish the naturalness of P(X) theories deep within the large kinetic term regime, which is the regime of phenomenological interest. We draw a comparison between DBI, Galileons and generic P(X) models.
We briefly summarise our findings in §8. The appendices collect further details about our calculations. They are organised as follows. Appendix A contains the derivation of the Wetterich ERG equation, which plays a pivotal role in Part II, while appendix B includes further details on the derivation of the quantum stability in the large kinetic term regime by solving the dimensionless version of the ERG. The other appendices collect material which is relevant for Part I. Appendix C confirms the results of §3 by explicit computation of Feynman diagrams. In appendix D we generalise the one-loop argument of Part I to higher loops, in appendix E we derive some relevant results for the cubic Galileon, and finally in appendix F we provide a complementary derivation of quantum effects in DBI using a symmetry-preserving five-dimensional approach.
Conventions.-We will mostly assume (for simplicity) that the background scalar field is living in Euclidean space-time. A generalisation to more arbitrary backgrounds is, however, straightforward, and indeed for the inflationary scenario discussed in §4.2 we will relax this assumption and consider a non-flat, though maximally symmetric, space-time. Greek letters are reserved for space-time indices. Partial derivatives are denoted by ∂, whilst covariant derivatives are represented by ∇. We use units for which the speed of light and the reduced Planck constant, $\hbar$, are set to unity, except when explicitly said otherwise. The Planck mass is defined by $M_{\rm Pl} \equiv (8\pi G)^{-1/2}$.

Part I -Standard EFT perturbative approach
We start by computing the quantum corrections to a given single-field model by considering loops from the field itself. Consequently, in the first part of this paper, we will not be addressing the questions of how that theory could have been obtained from integrating out heavy fields, or even naturalness questions such as how high-energy physics affects this low-energy EFT. This is where power-law divergences may be used as a surrogate for high-energy effects-we leave this to be explored non-perturbatively in Part II. For now, however, we focus on the regime of validity of the field theory by itself, for which it is sufficient to follow only loops of the field, and focus on their logarithmic divergences.

Effective field theory considerations
From a standard standpoint, EFTs provide a low-energy insight into the full theory without resolving the high-energy behaviour. This very appealing feature relies on the existence of a certain decoupling limit, which separates high from low-energy phenomena. At low energies we say that operators with scaling $(E/\Lambda)^\alpha$, for some α > 0, are suppressed by the strong coupling scale Λ, and are therefore dubbed irrelevant, in the action
$$\mathcal{L}_{\rm low-energy} = \mathcal{L}_{\rm relevant} + \sum_{n>4} \frac{c_n}{\Lambda^{n-4}}\,\mathcal{O}_n\,,$$
where the operator $\mathcal{O}_n$ has dimensions [mass]^n with n > 4. The other operators included in $\mathcal{L}_{\rm low-energy}$ which do not carry such suppression are, on the other hand, relevant operators. This classification relies uniquely on the mass dimension of the operator, and its usefulness is linked to the existence of a hierarchy between energy scales. However, irrelevant operators are not necessarily unimportant. Indeed, in this paper we will assume a slightly different way of organising the EFT expansion of operators, which has been very common in higher derivative theories (see, for example, Refs. [28,42]). For background configurations which are large (compared to Λ), a subclass of operators is no longer suppressed by Λ, that is,
$$\frac{\mathcal{O}_n}{\Lambda^{n-4}} \gtrsim \Lambda^4\,.$$
Nevertheless, they are still irrelevant operators from the standard EFT viewpoint. We will see such a family of operators arising in this paper, and to verify their relevance one needs to check that they are not redundant operators (operators which generate vanishing contributions to the equations of motion). Our principal concern will be to identify the relevant and irrelevant operators which are quantum mechanically induced and hence correct the classical Lagrangian.
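As a trivial numerical illustration of this reorganisation (with hypothetical order-one Wilson coefficients, c_n = 1, and operators of the schematic P(X)-type form X^n): for a small background the tower is ordered by dimension, while for X ≳ 1 the hierarchy is inverted.

```python
# relative contributions of the operators X, X^2, X^3 (Wilson coefficients set to 1)
def tower(X):
    return [X, X**2, X**3]

suppressed = tower(0.01)  # standard EFT regime: each extra power of X is suppressed
enhanced = tower(10.0)    # large-background regime: 'irrelevant' operators dominate
```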
To summarise, and to avoid any confusion: in this manuscript an "irrelevant operator" refers to an operator which has (mass) dimension greater than 4 in four dimensions. This is an operator which is suppressed in the traditional EFT interpretation, but not necessarily from the perspective of the re-organised EFT, based on the hierarchy between derivatives. If an operator is important in the re-organised EFT we refer to it as "technically important."

Cut-off versus strong coupling scale
Before we proceed with the computation of the quantum corrections, it is instructive to recapitulate the concept of regime of validity of the classical field theory. In the literature the difference between the concepts of cut-off, $\Lambda_c$, and of strong coupling scale, Λ, has sometimes appeared blurred, and so we will define them here. We will also need to introduce the notion of regularisation scale, $\Lambda_r$, and infrared regulator, κ, which are independent of both the cut-off and the strong coupling scale. The only requirement is that $\Lambda_r, \kappa < \Lambda_c$ and $\Lambda \le \Lambda_c$.
By definition the strong-coupling scale of a theory, Λ, is the scale at which the dominant interactions arise and it signals the breakdown of perturbative tree-level unitarity. In a standard EFT approach, at this scale the classical solutions are no longer a good description for the physical system at hand, and quantum corrections (i.e., loops) have to be taken into account.
However, the breakdown of perturbative unitarity does not necessarily imply the breakdown of unitarity and hence new physics. The onset of new physics instead defines the cutoff of the theory, $\Lambda_c$, the highest scale at which the EFT can be utilised without introducing new heavy physics. The reason the strong coupling scale and the cut-off are not necessarily the same is that the breakdown of perturbative unitarity only indicates the breakdown of perturbation theory. In a theory with a hermitian Hamiltonian, strongly coupled loop effects may restore unitarity, postponing the true breakdown of the EFT to a higher scale.
The concept of strong coupling scale is thus very distinct from that of the cut-off, which defines the onset of new physics. The practical implications of identifying the scale Λ depend on the theory at hand, but the following statements are generically true: 1. In many cases, the strong coupling scale, Λ, coincides with the onset of new physics, in which case $\Lambda \sim \Lambda_c$.
2. However, there can also be a hierarchy between Λ and $\Lambda_c$. At the strong coupling scale, Λ, different scenarios may occur and we highlight that in some of them the theory may still provide a correct description of the physics at that scale Λ, if $\Lambda \ll \Lambda_c$. In particular: (a) In certain cases it is sufficient to include a finite number of loops to restore a good description of the microphysical processes at that scale (see, for instance, Ref. [47] for an instructive 'self-healing' example). (b) In most cases an infinite number of diagrams contributing at the scale Λ should be taken into account in order to provide a good description of the physical processes at that scale. However, this does not mean that the theory necessarily loses predictivity at the scale Λ. It only signifies that, at that energy, accurate estimates can only be obtained by applying some resummation technique. Physical systems where an infinite number of classes of loop diagrams may be resummed to give finite results (and sometimes even close to classical results) are well known and include Bremsstrahlung scattering (the vacuum version of the Cherenkov radiation process) [48]. See also Ref. [49] for an example in a nonlinear chiral theory. (c) Finally, if an infinite number of loop diagrams ought to be included and if one can prove that there is no possible converging resummation, then the theory loses predictivity at the scale Λ, at least from a standard EFT viewpoint.
Any theory which relies on irrelevant operators to make classical predictions and exhibits a Vainshtein or screening mechanism must lie within the second set of possibilities, namely $\Lambda \ll \Lambda_c$. In the past decade, there has been a large interest in models where the strong coupling scale, Λ, gets redressed by a large background field configuration. If this redressing is to make sense, it is crucial to differentiate between Λ and $\Lambda_c$.
We conclude this small detour by noting that whilst the estimate of the cut-off energy scale of the theory can sometimes be ambiguous (since it may be difficult to determine the scale at which other fields ought to be included in the action without knowing the details of the UV completion of the theory), the strong coupling scale is somewhat easier to assess. Its value may nevertheless differ from the naive estimate obtained by identifying the energy scales contributing to the perturbative expansion of scattering amplitudes in terms of Feynman diagrams. As we mentioned before, this happens in cases where a strongly self-interacting background implies a redressing of the interactions, which sometimes has the effect of raising the naive strong coupling scale [50]. Given these possible ambiguities, our principal goal is to obtain results which are explicitly independent of the cut-off of the theory, $\Lambda_c$, which should render them physically trustworthy.

Cut-off dependence and the Wilson action
Divergences in loops appear in the form of power laws and logarithms. The central reason why power-law divergences should not necessarily be trusted as an indication of loop corrections from UV physics is that the effective action, which controls the physically renormalised quantities, is by definition independent of power-law divergences (see, for example, Ref. [51]). To understand this we briefly review the Wilsonian picture of renormalisation.
Given a field theory for φ we define the Wilsonian action $S_{\Lambda_r}(\phi)$ by integrating out all modes in the path integral whose momenta are larger than some $\Lambda_r$, which is the regulator scale. This can be accomplished by splitting the fields into light and heavy modes; the Wilsonian action, $S_{\Lambda_r}(\phi)$, then only depends on the modes lighter than $\Lambda_r$. We must perform this computation in Euclidean signature, which we will keep throughout the remainder of this manuscript.
Universal prediction from the logarithmic term.-The Wilson action is given by
$$e^{-S_{\Lambda_r}[\phi]} = \int_{k>\Lambda_r}\mathcal{D}\chi\; e^{-S[\phi+\chi]}\,.$$
By construction this action is strongly dependent on the chosen regulator scale $\Lambda_r$. In particular, at one loop we expect contributions to $S_{\Lambda_r}(\phi)$ which are quartic and quadratic in $\Lambda_r$. This scale may be chosen arbitrarily and need not be related to the strong coupling scale, Λ, nor the cutoff, $\Lambda_c$. However, on the basis of the discussion in §2.1, we do require that $\Lambda_r \le \Lambda_c$ so that the integral on the right-hand side is meaningful. We can then define the Wilson action at another arbitrarily chosen scale $\Lambda'_r < \Lambda_r$ via the finite integral
$$e^{-S_{\Lambda'_r}[\phi]} = \int_{\Lambda'_r<k<\Lambda_r}\mathcal{D}\chi\; e^{-S_{\Lambda_r}[\phi+\chi]}\,.$$
Again by construction $S_{\Lambda'_r}(\phi)$ is independent of the scale $\Lambda_r$, since we may equivalently define it by the integral
$$e^{-S_{\Lambda'_r}[\phi]} = \int_{k>\Lambda'_r}\mathcal{D}\chi\; e^{-S[\phi+\chi]}\,,$$
which is manifestly independent of $\Lambda_r$. This means in particular that the one-loop divergences that arise in $S_{\Lambda_r}(\phi)$ can be written as
$$S_{\Lambda_r} \supset a\,\Lambda_r^4 + b\,\Lambda_r^2 + \log\!\left(\frac{\Lambda_r}{\mu}\right)W_{\log} + W_{\mu,\rm finite}\,,$$
where a and b are (background-dependent) coefficients, $W_{\log}$ denotes the coefficient of the logarithmic divergence, and we have chosen an arbitrary sliding scale µ to define the logarithm. Crucially, the power-law divergences are automatically cancelled by the loop corrections that arise from integrating out modes between $\Lambda'_r$ and $\Lambda_r$,
$$S_{\Lambda'_r} = S_{\Lambda_r} + \Delta\Gamma_{\Lambda'_r<k<\Lambda_r}\,,$$
where $\Delta\Gamma_{\Lambda'_r<k<\Lambda_r}$ collects the loops of the modes with $\Lambda'_r < k < \Lambda_r$. At one loop this takes the form
$$\Delta\Gamma_{\Lambda'_r<k<\Lambda_r} = -\,a\left(\Lambda_r^4-\Lambda_r'^4\right) - b\left(\Lambda_r^2-\Lambda_r'^2\right) - \log\!\left(\frac{\Lambda_r}{\Lambda'_r}\right)W_{\log}\,,$$
so that we have
$$S_{\Lambda'_r} \supset a\,\Lambda_r'^4 + b\,\Lambda_r'^2 + \log\!\left(\frac{\Lambda'_r}{\mu}\right)W_{\log} + W_{\mu,\rm finite}\,.$$
Now since by definition $\Delta\Gamma_{\Lambda'_r<k<\Lambda_r}$ is independent of the sliding scale µ, we get an analogue of the Callan-Symanzik equation for $\Delta\Gamma_{\Lambda'_r<k<\Lambda_r}$, as follows
$$\mu\,\frac{\partial}{\partial\mu}\,\Delta\Gamma_{\Lambda'_r<k<\Lambda_r} = 0\,.$$
Then we have $\mu\,\partial_\mu W_{\mu,\rm finite} - W_{\log} = 0$, and similarly the coefficient of the logarithmic divergence at any chosen regulator scale $\Lambda_r$ is universal,
$$\frac{\partial}{\partial\Lambda_r}\,W_{\log} = 0\,.$$
Thus the only universal prediction we obtain from the cutoff dependence is the logarithmic term, which is captured by the sliding RG scale µ. Indeed, the standard picture which accompanies the significance of the logarithmic divergences follows automatically: starting at some high energy scale $\Lambda_r$, the logarithmic running divergence effectively absorbs all the high-energy subprocesses which happen between $\Lambda'_r$ and $\Lambda_r$ by sliding the renormalisation scale µ from $\Lambda_r$ down to $\Lambda'_r$.
Of course this process can be extended iteratively until all relevant soft microphysics is encoded in logarithms of large ratios of energy scales and the relevant EFT is obtained. When the logarithms themselves become large, which is rather typical in QCD for example, there are a number of well-known prescriptions which can be applied to make the theoretical predictions competitive with the observational precision at hadron colliders [52].
Effective action.-The quantity of interest to us is the effective action, Γ, which may be defined in terms of the original action as
$$e^{-\Gamma[\phi]} = \int\mathcal{D}\chi\; e^{-S[\phi+\chi]}\,.$$
Assuming φ is built out of modes with $k < \Lambda_r$, the support of $\frac{\delta\Gamma(\phi)}{\delta\phi}\,\chi$ for χ modes with $k > \Lambda_r$ is vanishingly small, and similarly for these modes we expect S(φ + χ) ∼ S(χ). Then we have
$$e^{-\Gamma[\phi]} \simeq \int_{k<\Lambda_r}\mathcal{D}\chi\; e^{-S_{\Lambda_r}[\phi+\chi]}\,,$$
and so we may define the effective action in terms of the Wilsonian action defined at an arbitrary scale $\Lambda_r$. Again since by definition
$$\frac{\partial}{\partial\Lambda_r}\,\Gamma(\phi) = 0\,, \qquad (2.16)$$
it follows that all the power-law divergences that arise from one-loop calculations automatically cancel against the power-law divergences in the definition of the Wilson action $S_{\Lambda_r}$. For this reason it is consistent to neglect power-law divergences.
On the other hand, the logarithmic terms represent a universal correction that is present even in the infrared limit for $S_\kappa$ with κ → 0. This is the reason why in the first part of this work we shall mainly focus on logarithmic divergences and neglect power-law divergences. As we mentioned before, when asking naturalness questions, power laws are sometimes viewed as indicators of the high-energy behaviour of the theory. For this reason we shall keep them in the second part of this work when addressing naturalness questions-see Part II for more details.

'Standard' covariant perturbative prescription
We start by considering the class of P(X) theories, in which the Lagrangian only depends on the first derivatives of the scalar field φ through $X = -(\partial\phi)^2/\Lambda^4$. We write
$$\mathcal{L} = \Lambda^4\, P(X)\,, \qquad (3.1)$$
with the understanding that P is some dimensionless function of X satisfying
$$P_{,X}(X) > 0\,, \qquad (3.2)$$
so that fluctuations about the background are not ghost-like. The Lagrangian enjoys a global shift invariance,
$$\phi \to \phi + c\,, \qquad (3.3)$$
where c is some constant. In some particular cases, the action may have an additional global symmetry, such as the DBI symmetry (1.2) for the DBI models given by (1.1). We remain generic for the rest of this section and consider an arbitrary function P(X).
In the presence of a source, J, coupled linearly as $\mathcal{L} \supset J\phi$, the classical equation of motion for the field φ is
$$2\,\partial_\mu\!\left(P_{,X}(X)\,\partial^\mu\phi\right) = -\,J\,, \qquad (3.4)$$
where $P_{,X} \equiv {\rm d}P/{\rm d}X$.
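As a sanity check of this equation of motion, the Euler-Lagrange equations can be re-derived symbolically. The sketch below works in one Euclidean dimension with Λ = 1 and the sample choice P(X) = X + X² (both are illustrative assumptions):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, J = sp.symbols('x J')
phi = sp.Function('phi')(x)

X = -sp.diff(phi, x)**2          # Euclidean, 1D, Lambda = 1: X = -(phi')^2
P = X + X**2                     # sample P(X)
L = P + J * phi                  # linear coupling to the source

eq = euler_equations(L, phi, x)[0]          # Euler-Lagrange equation, Eq(..., 0)

# expected: J + 2 d/dx ( P_X phi' ) = 0 with P_X = 1 + 2X
expected = J + 2 * sp.diff((1 + 2 * X) * sp.diff(phi, x), x)
residual = sp.simplify(eq.lhs - eq.rhs - expected)
```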

Background field method
Expanding the action (3.1) around a background profile, φ, up to quadratic order in the fluctuations, χ, we find
$$S^{(2)} = \int{\rm d}^4x\; Z^{\mu\nu}[\phi]\,\partial_\mu\chi\,\partial_\nu\chi\,, \qquad (3.5)$$
where the kinetic operator, $Z^{\mu\nu}[\phi]$, only depends on the field φ through its first derivatives,
$$Z^{\mu\nu}[\phi] = P_{,X}(X)\,\eta^{\mu\nu} - \frac{2}{\Lambda^4}\,P_{,XX}(X)\,\partial^\mu\phi\,\partial^\nu\phi\,. \qquad (3.6)$$
As a result, Z[φ] is manifestly invariant under a global shift. Notice that the boundary terms can be omitted in this process since they do not contribute to the dynamics. We include in appendix E the respective formula for the kinetic operator in Galileon theories for completeness.
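The structure of the kinetic operator can be cross-checked by brute force, expanding Λ⁴P(X) to second order in the fluctuation and comparing the coefficient of $\partial_\mu\chi\,\partial_\nu\chi$ with $Z^{\mu\nu} = P_{,X}\delta^{\mu\nu} - (2/\Lambda^4)P_{,XX}\,\partial^\mu\phi\,\partial^\nu\phi$. The sketch below does this symbolically in two Euclidean dimensions for the sample choice P(X) = X + X² (the dimension and the function are illustrative assumptions; the overall sign of the quadratic Lagrangian is convention dependent):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')
Lam = sp.Symbol('Lambda', positive=True)
phi = sp.Function('phi')(x, y)
chi = sp.Function('chi')(x, y)

P = lambda X: X + X**2                  # sample P(X); P_X = 1 + 2X, P_XX = 2
X_of = lambda f: -(sp.diff(f, x)**2 + sp.diff(f, y)**2) / Lam**4

# O(eps^2) piece of the Lagrangian Lambda^4 P(X) around the background phi
L = Lam**4 * P(X_of(phi + eps * chi))
quad = (sp.diff(L, eps, 2) / 2).subs(eps, 0)

X = X_of(phi)
dphi = [sp.diff(phi, x), sp.diff(phi, y)]
dchi = [sp.diff(chi, x), sp.diff(chi, y)]
# Z^{mu nu} = P_X delta^{mu nu} - (2/Lambda^4) P_XX d^mu phi d^nu phi
Z = [[(1 + 2 * X) * int(m == n) - 2 / Lam**4 * 2 * dphi[m] * dphi[n]
      for n in range(2)] for m in range(2)]
# quadratic Lagrangian = -Z^{mu nu} d_mu chi d_nu chi (Euclidean action flips sign)
target = -sum(Z[m][n] * dchi[m] * dchi[n] for m in range(2) for n in range(2))

residual = sp.simplify(sp.expand(quad - target))
```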
Regions of interest.-For models described by the action (3.1) the phenomenological regime of interest is that in which |Z| may be large, that is, when the kinetic term comes to dominate. In the DBI model, this happens when |X| → 1. In other P(X) models this may occur when $|X| \gg 1$. In what follows, by 'large kinetic term regime' we implicitly assume $|Z| \gg 1$, meaning at least one of the (absolute) eigenvalues of Z is large. We sometimes symbolically refer to this regime as the Vainshtein or screening regime, even though strictly speaking no screening mechanism may occur in that regime.
Integrating (3.5) by parts, we get
$$S^{(2)} = -\int{\rm d}^4x\,\sqrt{g_{\rm eff}}\;\chi\,\nabla^2\chi\,, \qquad (3.7)$$
where $g^{\mu\nu}_{\rm eff}$ is defined via the relation
$$\sqrt{g_{\rm eff}}\; g^{\mu\nu}_{\rm eff} \equiv Z^{\mu\nu}\,, \qquad (3.8)$$
and $\nabla_\mu$ represents the covariant derivative with respect to $g_{{\rm eff},\mu\nu}$. It is clear that $g^{\mu\nu}_{\rm eff}$ plays the role of (the inverse of) an effective kinetic metric, with corresponding determinant in Euclidean space-time given by $g_{\rm eff}$, which enters the integration measure in the action (3.7).
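Relation (3.8) determines the effective metric algebraically: in four dimensions $\det(\sqrt{g_{\rm eff}}\,g^{\mu\nu}_{\rm eff}) = g_{\rm eff}^{2}\,g_{\rm eff}^{-1} = g_{\rm eff}$, so $\sqrt{g_{\rm eff}} = \sqrt{\det Z}$ and $g^{\mu\nu}_{\rm eff} = Z^{\mu\nu}/\sqrt{\det Z}$. A quick numerical check of this inversion (the sample gradient and the choice P(X) = X + X² are illustrative assumptions, with $Z^{\mu\nu}$ of the standard P(X) form):

```python
import numpy as np

Lam = 1.0
dphi = np.array([0.3, 0.1, 0.0, 0.1])      # hypothetical Euclidean background gradient
X = -dphi @ dphi / Lam**4
PX, PXX = 1 + 2 * X, 2.0                   # sample model P(X) = X + X^2

# Z^{mu nu} = P_X delta^{mu nu} - (2/Lambda^4) P_XX d^mu phi d^nu phi
Z = PX * np.eye(4) - (2 / Lam**4) * PXX * np.outer(dphi, dphi)

sqrt_g = np.sqrt(np.linalg.det(Z))         # in 4D, det Z = g_eff
g_inv = Z / sqrt_g                         # g_eff^{mu nu}
g = np.linalg.inv(g_inv)                   # g_eff,mu nu

# self-consistency of (3.8): sqrt(det g_eff) g_eff^{mu nu} reproduces Z^{mu nu}
check = np.allclose(np.sqrt(np.linalg.det(g)) * g_inv, Z)
```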

One-loop effective action
We now compute the one-loop quantum effective action, which is the sum of all the one-particle irreducible graphs. The one-loop quantum effective action Γ is a functional of the scalar field φ. Starting from the Euclidean action (3.7), we can write Γ in terms of a functional determinant, 'det', which represents an infinite sum of Feynman loop diagrams and provides a (covariant) generalisation of the Coleman-Weinberg effective action [53]. Notice that this expression is exact as far as its dependence on the background scalar field profile goes. This object can be computed using, for example, a technique based on the heat kernel expansion [54,55], which organises the UV divergences as powers of the local curvature built out of the effective metric in Eq. (3.10). This technique implicitly uses the metric g_eff in the definition of the regularisation scale, and the results are manifestly covariant in terms of that metric. This differs significantly from the approach followed in Part II, where the metric g_eff is not considered to carry any information about the UV physics.
The power-law divergences are captured by the first two so-called Seeley-DeWitt coefficients, and the associated quantum corrections read as in (3.11) [56,57]. Notice that, regardless of the specific form of Z^μν, these power-law divergences will always be non-zero, both for P(X) and for Galileon theories. At one loop, the logarithmic quantum contributions are simply given by (3.12) [56,57], where here again the curvature operators are built out of the effective metric. This result is due to Barvinsky & Vilkovisky.

Power-law divergences.-The power-law divergences in (3.11) are similar in spirit to the renormalisation of the cosmological constant and the Planck scale if we were dealing with a gravitational theory. For our P(X) theory, it is clear that the quartic divergences involve operators of the same form, X^n, as those present in the original P(X). Even in DBI, if Λ ≪ Λ_c and these power-law divergences were taken seriously, one could never access the regime of interest of these theories (the large kinetic regime) without quantum corrections becoming large. Despite the existence of a non-renormalisation theorem for Galileons [42], the situation is no different there. Indeed, the power-law divergent operators can be made arbitrarily close to the Galileon ones. This means that even for Galileons one cannot enter the regime of interest (i.e., the Vainshtein region) without being dominated by quantum corrections of the power-law type, even if one were to identify Λ = Λ_c.
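Schematically, and with heat-kernel normalisations that are ours rather than the text's, the one-loop divergences organise as

```latex
\Gamma^{\text{1-loop}}_{\rm div} \sim \frac{1}{(4\pi)^2}\int {\rm d}^4x\, \sqrt{g_{\rm eff}}
\left[ \Lambda_c^4\, a_0 + \Lambda_c^2\, a_2(g_{\rm eff})
+ \log\!\left(\frac{\Lambda_c}{\mu}\right) a_4(g_{\rm eff}) \right],
```

with a_0 a constant, a_2 ∝ R[g_eff], and a_4 a linear combination of the quadratic curvature invariants of the effective metric; the first two Seeley-DeWitt coefficients carry the power-law divergences of (3.11), while a_4 carries the logarithms of (3.12).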
In the case where we identify Λ = Λ_c, the situation is better for DBI in the five-dimensional embedding, as quartic divergences would simply change the original DBI effective action by order-one corrections while keeping the same DBI structure. However, in that case we would need to identify the strong coupling scale with the five-dimensional Planck scale, and bulk loops would not decouple. This should be studied with care.
As a result, with the potential exception of DBI, for all these theories to make sense in this perturbative approach-be it Galileons or an arbitrary P (X)-the power-law divergences must be unrepresentative of the UV physics. As discussed in §2.2 this may well be the case for many theories since power-law divergences are not necessarily good indicators (a similar viewpoint was expressed by Burgess & London in Ref. [51]).
In Part I of this paper we will therefore take the approach that power-law divergences cannot be trusted, and focus solely on logarithmic divergences. This is the approach that needs to be followed perturbatively for Galileons (and for DBI, unless Λ = Λ_c = M_5, where M_5 is the five-dimensional Planck scale), and it is therefore natural to keep the same one for more generic P(X) models. We emphasise, however, that this approach is only temporary, and the core of the naturalness problem, including power-law divergences, will be fully investigated in Part II.
Logarithmic divergences.-As justified by the previous arguments, we now turn to the one-loop logarithmic divergences presented in (3.12). Crucially, all the operators in Eq. (3.12) involve higher derivatives compared to the ones in (3.1), and they cannot be written as a simple function of X on its own. This means that, provided we only follow the logarithmic divergences and the finite contributions, tree-level calculations computed with the original action (3.1) are under control so long as the higher derivative operators generated in (3.12) remain small. The higher derivative operators depend on the background field, which implies that the regime of validity of the classical (tree-level) results themselves also depends on the background field configuration. In appendix C we carry out a one-loop calculation in a specific theory within the P(X) class, keeping track of the logarithmic divergences, where the derivative structure of the answer in Eq. (3.12) can be seen explicitly. The generalisation of this result to higher loops is performed in appendix D. We show that the logarithmic divergences and finite contributions from the higher loops involve even more derivatives and are thus under control provided derivatives are small, and in particular provided the one-loop contributions are small.
In what follows we use this criterion to derive the (perturbative) regime of validity of the classical theory.

Regime of validity of the classical theory
Depending on the context, one may either be interested in a regime where |X| ≪ 1, or allow for a regime where |X| ≫ 1: • In the first case, where |X| ≪ 1, any operator of the form X^m ∂^n X with n ≥ 1 can be made unimportant compared to the classical operators, which are all of the form X^m, regardless of how large m is.
• If we allow for |X| ≫ 1, the situation is more subtle. Requiring that higher derivatives acting on the field are small may not always be sufficient to effectively suppress an operator of the form X^m ∂^n X when m ≫ 1. In §5.1 we shall provide an example where |X| ≫ 1 and yet the quantum corrections from the field itself combine to remain small, subject to higher derivatives being small.
We conclude that for any Lagrangian built out of derivative interactions involving only first derivatives acting on the field at the level of the Lagrangian, the contributions from the logarithmic and finite parts of the quantum corrections are under control and do not spoil the classical solutions of the theory, as long as we are in a regime where higher derivatives are suppressed. In practice, this means that the classical solutions are always under control provided the curvature invariants R[g_eff] built out of the effective metric g_eff satisfy the criterion (3.13). This criterion should be applied with care. It is equivalent to the statement that the acceleration in DBI ought to be small, as long as the acceleration is computed appropriately. The unambiguous way of parameterizing this acceleration is discussed in appendix F. The effective metric defined in Eq. (3.8) is conformally related to Z^μν computed in (3.6), as expressed in (3.14), and the criterion for the validity of the classical solution can thus be symbolically written as in (3.15). We derive the corresponding criterion for Galileons in appendix E.
Focusing on the requirement (3.15), since Z goes as the field velocity, ∂Z goes symbolically as the local field acceleration. At this level, we stress two points: 1. To be more precise, the criterion in (3.13) involves the eigenvalues of Z^μν, whereas (3.15) implicitly assumes that Z^μν is conformally flat, Z^μν ∼ Z δ^μν. One can always choose a basis in which Z is diagonal. However, when there is a hierarchy between the eigenvalues of Z^μν, one needs to ensure that all the combinations of ratios between the different eigenvalues of Z (which appear in the expressions for the curvature quantities in the one-loop effective action) are kept small.
2. The previous expressions are very symbolic; in particular, ∂ designates the partial derivative as if we were in Cartesian coordinates in Minkowski space-time. In different coordinate choices, however, the connection should be included. As we shall see, this is especially important when looking at configurations in spherical coordinates with radius r, as we discuss explicitly in §5.
Whether the Lagrangian itself is stable against quantum corrections is yet another question which is related to the naturalness of the Lagrangian and will be addressed in part II. We notice that nowhere in the derivation of our result have we invoked any symmetry and as such these results are certainly independent of any additional symmetries that may or may not be present in a particular model.
While it is true that some symmetries can protect the structure of the operators in the Lagrangian, they have little to do with their magnitude, or with protecting the Lagrangian and its classical equations of motion from large quantum corrections. For example, given the shift symmetry in P(X) theories, the only requirement imposed by the presence of this symmetry is that the operators generated by quantum corrections in the effective action obey the same symmetry. However, the symmetry itself is unrelated to the scale at which quantum corrections enter (be it from finite contributions or from divergent pieces).
We explain more explicitly in appendix F how the symmetry enters in DBI models. We follow a fully covariant five-dimensional analysis where the symmetry (five-dimensional diffeomorphism invariance) is manifest. Despite this elegant procedure, which explicitly keeps the symmetry manifest, we recover precisely the same regime of validity for the classical solutions as we would have obtained had we performed the four-dimensional estimation and used the criteria (3.13) or (3.15), without invoking the symmetry. We illustrate the determination of the regime of validity of the EFT in specific examples of P(X) theories in the ensuing analysis.

Implications for inflation
To gain more insight into our results, we now apply them to specific classes of models under certain assumptions about the background field configuration. In particular, we can gauge the impact of our results on inflationary model building. In this case, the background field profile is statistically homogeneous and isotropic, and evolves in time. It is its quantum fluctuations which become imprinted in the microwave sky and whose statistics are later observed in the temperature maps. Whichever microphysics operated in the early universe, the same quantum fluctuations which are responsible for structure formation and the temperature anisotropies in the CMB should also be under control, to ensure the predictiveness of the model.

DBI
The DBI model is explored in more detail in appendix F, where we present its five-dimensional embedding. We expand the DBI Lagrangian (4.1), where again X = −(∂φ)²/Λ⁴. We split the field φ into a time-dependent background, φ₀(t), and small inhomogeneous quantum fluctuations, which propagate with speed of sound c_s, where φ̇₀ denotes the derivative of the background field with respect to physical time.
One of the most attractive features of DBI is that the speed of sound of the scalar fluctuations can be made arbitrarily smaller than that of light when X = φ̇₀²/Λ⁴ is arbitrarily close to (but smaller than) unity. In this case the Lorentz boost factor, defined as γ = (1 − X)^(−1/2) ≡ c_s^(−1), can become arbitrarily large. As a result, this theory is falsifiable, since its microphysical signature can be significantly constrained by CMB data. In particular, Planck data limits non-Gaussianity signals, which restricts γ ≲ 14 at 95% CL [58]. This means that DBI inflation cannot operate in its most interesting regime, where γ → ∞. Nevertheless, we take a conservative approach and explore this model on purely theoretical grounds.
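As an illustration, the relation between the boost factor and the speed of sound quoted above can be checked symbolically. The snippet below assumes the standard DBI form P(X) = 1 − √(1 − X) (the text refers only to "the DBI Lagrangian") and the usual P(X) expression for the speed of sound of the fluctuations, c_s² = P_X/(P_X + 2X P_XX):

```python
import sympy as sp

X = sp.symbols('X', positive=True)

# Standard DBI form (an assumption; the explicit Lagrangian is not displayed in the text)
P = 1 - sp.sqrt(1 - X)

# Speed of sound of scalar fluctuations for a generic P(X) theory
PX = sp.diff(P, X)
PXX = sp.diff(P, X, 2)
cs2 = sp.simplify(PX / (PX + 2 * X * PXX))

# For DBI this should reduce to c_s**2 = 1 - X, i.e. gamma = (1 - X)**(-1/2) = 1/c_s
print(cs2)
```

The fact that c_s² collapses to 1 − X is special to the DBI form; a generic P(X) gives a more complicated X dependence.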
Another reason why DBI has been extremely appealing is that it arises in the context of higher dimensions, more precisely in brane scenarios, as a generalisation of the Nambu-Goto action. As explained in appendix F, we can picture a D3-brane moving in an unwarped space, with φ₀ being the position of the brane relative to the tip of the throat. The scalar field φ₀ therefore plays the role of the inflaton, and the DBI action characterises the motion of the brane in a generically warped throat.
In this construction, the criterion (3.13) signifies that the brane can move in this higher-dimensional geometry at a very large speed, but the acceleration of both the scalar fluctuations and the brane itself ought to be small. Specializing the logarithmic quantum corrections in the action (3.12) to the Lagrangian (4.1), we impose the condition (4.3). In the regime of small c_s, and focusing on the most relevant operator, this corresponds to (4.4). This estimate is precisely equivalent to the condition defined in (F.14) using a purely five-dimensional picture (recalling that γ = 1/c_s). The condition above is also compatible with the statement usually made in the literature that the 'acceleration' should be small; here, however, we make this statement much more precise.
To conclude, and without loss of generality, the classical inflationary background in DBI can be justified on theoretical grounds whilst being under control provided the condition (4.5) is satisfied, where we have assumed that γ is as large as possible within the Planck constraints on DBI inflation [58]. This result is comparable to what happens in screening solutions, as we shall see in §5.2.

Application to DBI inflation in de Sitter
So far we have assumed that the background field lives in flat Minkowski (or rather Euclidean) space-time. However, if we are to apply these results to an inflationary setup, we need to consider the generalisation to an arbitrary space-time background. In particular, we can assume a de Sitter background, which breaks not only Lorentz invariance but also the shift and DBI symmetry (1.2). We expect the breaking of the symmetry to be quantified by some power of H/Λ, and we will make this statement more precise below. We adapt our previous results and write the classical action, where indices are lowered and raised with the background metric g_μν and its inverse g^μν. This should not be confused with the effective metric defined in Eq. (3.8).
Expanding in perturbations as outlined in Eq. (3.6) yields the kinetic operator, and we can proceed as in §3.1 to define the corresponding effective metric. In de Sitter, the explicit computation of the one-loop effective action (again not trusting the power-laws) shows that the first non-redundant operator produced by quantum effects is of the form (4.9), where H is the Hubble parameter associated with the de Sitter metric. Following the requirement (4.3), we conclude that the quantum effects are under control provided the corresponding criterion is satisfied. Likewise, we can quantify the degree of DBI symmetry breaking introduced by the de Sitter expansion, which can be read off from Eq. (4.9) and is of order (H/Λ)⁴, with the hierarchy between H and Λ being of order 10⁻².

Implications for screening
Derivative theories such as the Galileon models introduced in Ref. [42] have also attracted interest as potential actors in the late-time history of the universe. They can also be relevant for IR modifications of GR such as DGP [8] or massive gravity [39,40]. We start by investigating screening mechanisms for P(X) theories, using spherical coordinates and writing the background profile solution as φ(r). In what follows we consider a conformal coupling between the field φ and an external matter source at the Planck scale, of the form φT/M_Pl, where T is the trace of the energy-momentum tensor of the fluid associated with the matter field. This coupling manifestly breaks the shift symmetry (3.3), though very mildly, since the coupling is Planck suppressed.
The most general type of Vainshtein screening mechanism with generalised P(X) models was considered in Ref. [23]. In this section our intention is to illustrate this mechanism and its classical validity by studying two examples: a generic P(X) screening and a DBI screening. We later compare our results to screening from Galileons. Earlier work includes Ref. [59], which focused on obtaining screening solutions; here we rather explore the consistency of screening solutions within the framework of a controlled EFT.

P(X)-screening
Quantum fluctuations play an important role in inflationary theories. Likewise, in theories of late-time cosmology, if a screening solution exists which is capable of efficiently hiding away the presence of the scalar field, φ, then one ought to be sure that the quantum corrections in that model are also under control. Below we explore simple cases of Vainshtein screening which belong to the general class of models explored in Ref. [23].
Suppose the scalar field interacts with a fixed point-source distribution through a conformal coupling, with T = −M δ^(3)(r). Then the equations of motion can be integrated once with respect to the radial coordinate. Searching for screening solutions involves obtaining an associated fifth force which ought to be much smaller than the Newtonian gravitational one at small enough distances, while maintaining the Newtonian inverse-square law at large distances. Such solutions will only exist for certain choices of P(X), but the analysis of quantum corrections is naturally independent of this choice. First, we assume P(X) → +X/2 for |X| ≪ 1. This ensures the correct behaviour at large distances. For a screening mechanism to happen efficiently, X should either be of order unity or dominate at small distances. Assuming that X is allowed to dominate, |X| ≫ 1, and that in this strongly coupled regime P(X) ∼ −c_N N^(−1) (−X)^N, with c_N and N positive constants, we obtain the screened profile (5.2).
We are implicitly assuming that P(X) is such that one can extrapolate between the free behaviour, φ′(r) ∼ M/(M_Pl r²), at infinity and this screened behaviour at small r without any classical instability. The behaviour (5.2) is consistent with the strong coupling assumption |X| ≫ 1 provided N > 1/2 and r ≪ r_*, where r_* is the strong coupling radius (sometimes also dubbed the Vainshtein, or screening, radius) given in (5.3).
In this strongly coupled regime, and considering the effect on a test particle of a given mass, we can compare the magnitude of the force mediated by the field φ, F_φ, with that of the standard Newtonian inverse-square law, F_N. We find F_φ/F_N ∼ (r/r_*)^(4(N−1)/(2N−1)), and we infer that the screening is effective (in the sense that the force is suppressed compared to Newton's law) provided r ≪ r_*. The larger the power N, the more efficient the screening is. For large N, the screening behaviour asymptotes to F_φ/F_N ∼ (r/r_*)², which is as strong a screening as in DBI [60]. However, as we shall see below, unlike DBI, the regime of validity of this classical P(X)-screening solution is much larger, making P(X) screening much more appealing in that respect.
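The force-ratio scaling above can be reproduced from the once-integrated equation of motion. The sketch below is ours: it assumes the strong-coupling ansatz P_X ∼ (−X)^(N−1) with −X ∼ φ′²/Λ⁴ for a static profile, works in units Λ = r_* = 1, and uses F_φ/F_N ∝ φ′ r²:

```python
import sympy as sp

N = sp.symbols('N', positive=True)
a = sp.symbols('a')

# Power-law ansatz phi'(r) ~ r**a.  With P_X ~ (-X)**(N-1) and -X ~ phi'**2,
# the once-integrated equation of motion reads
#   r**2 * P_X * phi' ~ r**(2 + a*(2N - 1)) = const,
# which fixes the exponent a:
a_sol = sp.solve(sp.Eq(2 + a * (2 * N - 1), 0), a)[0]   # -2/(2N - 1)

# Force ratio: F_phi / F_Newton ~ phi' * r**2 ~ r**(a + 2)
force_exponent = sp.simplify(a_sol + 2)                  # (4N - 4)/(2N - 1)

# Screening is effective (positive exponent) for N > 1, and for large N the
# exponent asymptotes to the DBI-like value 2:
print(sp.limit(force_exponent, N, sp.oo))
```

This reproduces both the exponent 4(N−1)/(2N−1) quoted in the text and its large-N limit of 2.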
Validity of the EFT.-Calculating the local curvature quantities in the one-loop effective action (3.12) and imposing the criterion (5.6), we determine the regime of validity of this classical screening solution, Eq. (5.7). The background can therefore be very large, satisfying (5.2), without the theory running out of control, provided Eq. (5.7) is verified. This is similar in spirit to the regime of validity of theories in which the background field was only evolving in time, as we explicitly discussed in §4.1.
For completeness, we next turn to one of the most popular models within the class of P (X) theories and look into its regime of validity.

DBI-screening
Consider a static, spherically symmetric field profile, φ(r), governed by the DBI action (with the sign flipped so as to allow screening), which is another special case of the models considered in Ref. [23]. Assuming again that the coupling to matter is conformal and T = −M δ^(3)(r), the solution to the equations of motion satisfies [9] the profile (5.9), where the Vainshtein radius is given by (5.10). Here again the Vainshtein radius has the same dependence on the point-source mass, M, and the strong coupling scale, Λ, as in the previous P(X) example (5.3), and differs from the one arising in the case of the cubic Galileon (though it is the same as in the quartic and quintic Galileons).
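A commonly quoted closed form for such a screened DBI profile (an assumption on our part, since the display equation (5.9) is not reproduced in the text) is φ′(r) = Λ² r_*²/√(r⁴ + r_*⁴). The snippet below checks that it satisfies the once-integrated DBI equation of motion, r² γ φ′ = const, with γ = (1 − φ′²/Λ⁴)^(−1/2):

```python
import sympy as sp

r, rs, Lam = sp.symbols('r rs Lam', positive=True)  # rs plays the role of r_*

# Hypothetical closed form for the screened profile (assumption, standard in the
# literature on DBI/Vainshtein screening):
phi_p = Lam**2 * rs**2 / sp.sqrt(r**4 + rs**4)

# Lorentz factor of the radial profile
gamma = 1 / sp.sqrt(1 - phi_p**2 / Lam**4)

# Once-integrated equation of motion: r**2 * gamma * phi' should be r-independent,
# equal to Lam**2 * rs**2
lhs = sp.simplify(r**2 * gamma * phi_p)
print(lhs)

# Deep inside the screened region, gamma should grow as sqrt(1 + (rs/r)**4),
# i.e. gamma ~ (rs/r)**2 for r << rs
print(sp.simplify(gamma - sp.sqrt(1 + (rs / r)**4)))
```

In particular γ ∼ (r_*/r)² for r ≪ r_*, which is the "large γ" screened regime discussed next.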
Screening occurs for small enough r, that is, when r < r_*, which corresponds to large γ. Since we are still interested in the regime corresponding to γ ≫ 1, we will be able to compare the constraints arising from the validity of the classical solution directly with those from §F.2, which rely on higher-dimensional arguments. Indeed, we are now in a position to fully appreciate the insights offered by embedding DBI in a higher-dimensional space, which we address in §F.
Validity of the EFT.-The condition (3.13), which is equivalent to (F.10) for DBI, is key to understanding the regime of validity of the screening mechanism, as K is the invariant measure of the acceleration that transforms appropriately under the DBI symmetry.
For a static and spherically symmetric configuration, K^μ_ν can be computed explicitly. The classical screening solution is therefore under control provided the condition (5.13) holds, which for the screening solution (5.9) above implies (5.14), or equivalently, to compare with Eq. (5.7) associated with the generic power-law P(X) model, (5.15). The conditions in Eqs. (5.13) are the static and spherically symmetric equivalent of the conditions obtained in Eq. (4.5) for a time-dependent background profile. Indeed, (5.13) is a particular case of the criterion derived in Eq. (F.14).

Comparison between screenings
At this point, one might wonder whether some sub-classes of P(X) theories are more competitive in terms of the range of scales allowed by their classical description for static and spherically symmetric profiles.
For comparison purposes, we consider only the region in parameter space of these models which gives rise to screening mechanisms. To make the comparison as generic as possible, we also include the cubic Galileon [42]; the details of the analysis for the cubic Galileon are provided in appendix E. By inspection of Eqs. (5.7) and (5.15), we conclude that, for these backgrounds, Galileon theories have a broader range of scales for which their classical screening solution is under control, compared to all the P(X) models considered here, including DBI. Among the P(X) models, the ones of power-law type typically have a larger domain of classical validity than DBI, if one relies on the criterion (5.6) to determine the regime of validity of the EFT. We reiterate that this is true despite the fact that DBI is motivated by a higher-dimensional construction and enjoys an additional symmetry compared to generic P(X) models. This goes to show how subtle the role of symmetry is in these considerations. It is an interesting point worth exploring further, and it could make the screening mechanisms exhibited by P(X) theories as compelling as, if not more compelling than, DBI models, if this is a criterion one values.
Part II - Naturalness of P(X) theories

So far we have been focusing on logarithmic (and finite) contributions arising from quantum effects in P(X) theories. However, these considerations had little to say about the naturalness of this class of models. Power-law divergences have indeed been discarded so far, for the reasons explained previously, but they can be indicative of how low-energy subprocesses are affected by high-energy degrees of freedom.
To address the question of naturalness, we now proceed with an exact renormalisation procedure, Wetterich's ERG equation. This procedure differs from the previous one in three ways. First, in this part we remain agnostic about the exact role played by different divergences and keep all the contributions from quantum corrections (the power-laws, the logarithmic divergences, and the finite pieces). Second, the approach in what follows is fully non-perturbative, making it much more insightful than any perturbative analysis. For instance, a perturbative analysis might find a large one-loop correction to the classical action going as Λ_c⁴ X^n for a given n > 0. Stopping there would lead us to deduce that the EFT description breaks down already when |X| ∼ (Λ/Λ_c)^(4/n) ≪ 1. However, a fully non-perturbative analysis might give a result going as Λ⁴(1 − (1 + (Λ_c/Λ)⁴ X^n)^(−1)), making these non-perturbative contributions irrelevant in the regime where |X| ≫ (Λ/Λ_c)^(4/n). Finally, a last difference with the approach of Part I is that we do not consider the effective metric (3.8) as being fundamental. As a result, this metric does not enter the regularisation scheme (unlike what is implicitly assumed in §3.2), and the result is not manifestly covariant with respect to that metric. We believe this procedure is better justified, since we would not expect UV physics to have any knowledge of the low-energy effective metric.
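The contrast between the perturbative and the hypothetical resummed behaviour can be made concrete numerically; the scales, the power n, and both model functions below are illustrative choices of ours, mirroring the schematic expressions quoted in the text:

```python
# Illustrative toy comparison (all numbers ours): a one-loop correction growing
# as Lam_c**4 * X**n versus a hypothetical resummed result that saturates.
Lam, Lam_c, n = 1.0, 10.0, 2

def one_loop(X):
    return Lam_c**4 * X**n

def resummed(X):
    return Lam**4 * (1.0 - 1.0 / (1.0 + (Lam_c / Lam)**4 * X**n))

X_break = (Lam / Lam_c)**(4.0 / n)  # naive breakdown scale |X| ~ (Lam/Lam_c)**(4/n)

# The perturbative term reaches Lam**4 already at X ~ X_break and keeps growing,
# while the resummed expression never exceeds Lam**4: for |X| >> X_break it is a
# bounded correction rather than a runaway one.
for X in (X_break, 1.0, 10.0):
    print(f"X={X:6.3g}  one-loop={one_loop(X):10.4g}  resummed={resummed(X):8.4g}")
```

The perturbative ratio to the classical operator Λ⁴X^n is a constant (Λ_c/Λ)⁴, whereas the resummed expression is bounded by Λ⁴, which is the sense in which it becomes irrelevant at large |X|.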
6 Standard naturalness problems in P(X) theories

Traditionally, there have been two ways to view naturalness problems in field theory.
Heavy mass dependence.-One way is to track the dependence on the heavy mass threshold corrections arising from the first massive states beyond the regime of validity of the EFT. This definition is largely insensitive to field redefinitions and respects both linearly and nonlinearly realised symmetries of the system.
The second is to track the cut-off dependence. In the language of the Wilsonian picture, the idea is to assume that if the EFT has a cutoff Λ c , then the theory should be naturally defined by S Λc (φ) in the notation of the previous section.
If we take this point of view, then the trivial mathematical identity that Γ(φ) should be independent of Λ_r, even when Λ_r = Λ_c, is turned into a 'surprising' fine-tuning: it appears necessary to significantly tune the Λ_c dependence of the form of S_Λc so that the physical quantities predicted by Γ(φ) are not strongly dependent on Λ_c.
Power-law divergences.-The second way to phrase the naturalness problem proceeds as follows. We start with the classical action (3.1) for P(X) theories, take Λ_r = Λ_c, and follow the power-law divergences which, at one loop, include operators of the form given in (6.1), where α_n and β_n are dimensionless parameters which only depend on n. One crucial aspect of these divergences is that the sum does not truncate (i.e., there is no N for which α_n = 0 or β_n = 0 for all n > N). We can get a better insight by performing a wave function renormalisation. The kinetic term is of the form Z(∂φ)², where, in this one-loop perturbative analysis, Z ∼ 1 + α₁(Λ_c/Λ)⁴. We perform a wave function renormalisation by introducing the renormalised field φ_R = Z^(1/2)φ, so that the one-loop contributions go as in (6.3). In the large kinetic region this is worrisome for several reasons. First, the strong coupling scale flows towards the cutoff: the only relevant scale in (6.3) is the cutoff, and the original strong coupling scale Λ does not even enter. At higher loops the situation is even worse, with the renormalised interaction scale going as (Λ_c/Λ)^ℓ Λ ≥ Λ_c, where ℓ is the number of loops. This is often incorrectly used as an argument that the theory cannot be made sense of above Λ, so that we must take Λ_c ∼ Λ. Second, even if we take Λ_c ∼ Λ, all powers of X^n receive an order-unity modification at the strong coupling scale Λ, and the functional form of P(X) effectively becomes arbitrary. As a consequence, we would inevitably return to the standard EFT picture that these theories are at best EFTs defined with a cutoff Λ_c ∼ Λ. Even resorting to a symmetry (as in DBI) would not prevent renormalising the overall coefficient of P(X) by an amount proportional to Λ_c⁴, and again we would need Λ_c ∼ Λ to make sense of the theory. In the absence of some symmetry protecting the form of P(X), the functional form of the P(X) Lagrangian appears uncontrolled.
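To make the wave function renormalisation step explicit, the following sketch (our normalisations, keeping only the leading behaviour for Λ_c ≫ Λ) shows how Λ drops out of the one-loop interactions:

```latex
Z \sim \alpha_1\left(\frac{\Lambda_c}{\Lambda}\right)^4 , \qquad
\phi_R = Z^{1/2}\phi \sim \left(\frac{\Lambda_c}{\Lambda}\right)^{2}\phi ,
\qquad
\Lambda_c^4\,\alpha_n \left(-\frac{(\partial\phi)^2}{\Lambda^4}\right)^{n}
\sim \Lambda_c^4\,\alpha_n \left(-\frac{(\partial\phi_R)^2}{\Lambda_c^4}\right)^{n},
```

so that, after canonically normalising, the only scale left in the interactions is the cutoff Λ_c, as stated above.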
These perturbative considerations therefore suggest that we cannot trust the classical background as soon as |X| ∼ 1.
In the next sections we will argue that, even within the cut-off framework, this perspective is too pessimistic and is an artefact of perturbative arguments. On the contrary, it appears that the large kinetic term region |Z| ≫ 1 (where Z^μν is defined in Eq. (3.6)) is precisely the regime where all quantum effects are most suppressed, whether or not a symmetry is present.

Wilsonian exact renormalisation group
Up to now, we have seen that if we work within the Wilsonian picture, and track power-law divergences, then by taking Λ r > Λ the loop expansion becomes uncontrolled. This is frequently used to argue that the strong coupling scale, Λ, must also be the cutoff of the EFT.
In reality, all this identifies is that the perturbation theory which generates the contributions to the loops coming from k > Λ is not converging. It may nevertheless be possible to find a non-perturbative method that reorganises the expansion and makes this problem disappear.⁸ The ERG is an exact equation that describes how S_Λr must vary with Λ_r so that physical quantities such as Γ(φ) are independent of Λ_r. This is the approach utilised, for example, in Polchinski's ERG equation [65], and it is widely applied in quantum field theory and statistical physics contexts (see Ref. [66] for a review). However, as we have emphasised, this equation keeps track of the unphysical dependence of S_Λr on the arbitrarily defined regularisation scale, which must automatically cancel in the construction of Γ(φ). An approach more suitable for our purposes was given by Wetterich, which uses the effective action as the fundamental quantity [67] (see also Tetradis & Wetterich [68]). In brief, this approach introduces an infrared cutoff, κ, into the definition of the effective action. This is appropriate here, since we will be interested in theories such as P(X) models for which the shift symmetry renders the field massless, making the infrared contribution to the loops problematic.

Exact renormalisation group equation
The modified definition of the effective action which includes the infrared regulator⁹ κ, usually called the effective average action, is given in Eq. (7.1). The regularisation operator R̂_κ is chosen to regulate the infrared modes while leaving the ultraviolet ones untouched, with Z_κ the standard wave function renormalisation, not to be confused with Z^μν_κ.
The choice of IR regulator χR̂_κχ in Eq. (7.1) (and in Eq. (A.1)) acts as a mass term which explicitly breaks the shift symmetry. Notice, however, that it merely regulates the field propagator and does not act as a new interaction. As a result, there is no change in the Feynman rules associated with this new operator. Consequently, no new, symmetry-violating operators can be generated from this IR regulator. As pointed out in Ref. [69] within the context of Galileons, even though a mass term breaks the shift symmetry, it can still be consistently treated as an irrelevant deformation of a shift-invariant Lagrangian.

⁸ We emphasise that the techniques we have in mind are very different from those used in cosmological settings to resum logarithmic contributions with dynamical renormalisation group instruments [61-63] (see also Ref. [64] for a pedagogical review). In that case the resummation procedure takes care of large-distance (IR) perturbative divergences, which are not related to the questions addressed in this paper.

⁹ It is interesting to point out that, since we are introducing an IR regulator rather than a UV one, we would ultimately send κ → 0, which means there should be no issue promoting this prescription to Lorentzian signature.
Wave Function Renormalisation.-In usual presentations of the ERG it is common to introduce a wave function renormalisation Z_κ to account for anomalous dimensions of the field and for the existence of critical points. Here the entire function P̂_κ(X_κ) is itself already a highly nontrivial wave function renormalisation, and it would not make sense to define the wave function renormalisation as a function of the field itself. Rather, we define the wave function renormalisation Z_κ through the behaviour of the theory in the small kinetic term regime, via the definition (7.5). In the small kinetic term regime |Z^μν_κ[φ]| ∼ Z_κ, whereas this is no longer the case in the large kinetic term regime. In the case of screening, the choice (7.5) is equivalent to setting the wave function renormalisation based on the behaviour of the field at infinity, which is the only meaningful choice.
Example of regularisation operator.- For example, we may take $\hat R_\kappa(-\Box)$ to act as $Z_\kappa \kappa^2$ on low momenta modes and to vanish for momenta large compared to $\kappa$ (an explicit optimised choice is given below). The effect of this operator is to give a mass, and hence an infrared cutoff, to the low momenta modes, while leaving the high momenta modes (compared to $\kappa$) unaffected. Despite appearances, the effective average action is related to the Wilsonian action $S_{\Lambda_r}$ by a Legendre transformation [70], and therefore encodes the same information. The intuitive reason for this is that in $S_{\Lambda_r}$ we include all contributions from modes with $k > \Lambda_r$, but only tree contributions from modes with $k < \Lambda_r$. Similarly for $\Gamma_\kappa$ we include only loops from modes with $k > \kappa$. The condition $\hat R_\kappa(-\Box) \to \infty$ as $\kappa \to \infty$ forces the path integral to be dominated by $\chi = 0$ with vanishingly small fluctuations, implying $\lim_{\kappa\to\infty} \Gamma_\kappa(\phi) = S(\phi)$. (7.7) Alternatively, we may modify the definition of $\hat R_\kappa$ so that $\hat R_\kappa(-\Box) \to \infty$ as $\kappa \to \Lambda_c$, so that $\lim_{\kappa\to\Lambda_c} \Gamma_\kappa(\phi) = S_{\Lambda_c}(\phi)$, where $S_{\Lambda_c}(\phi)$ is the Wilson action at the cutoff scale, $\Lambda_c$. Implicit in this last statement is the idea that the Wilson action defined at the cutoff is the natural action with which to define the EFT. However, we can equivalently choose to define the theory at any scale we choose. In particular, in the case of $P(X)$ models, it is more natural to define the theory at the strong coupling scale, $\Lambda$.
From the definition of the effective average action we can derive the ERG equation [67] $$\frac{\partial \Gamma_\kappa}{\partial \kappa} = \frac{1}{2}\,{\rm Tr}\!\left[\partial_\kappa \hat R_\kappa \left(\frac{\delta^2 \Gamma_\kappa}{\delta\phi\,\delta\phi} + \hat R_\kappa\right)^{-1}\right]. \qquad (7.9)$$ We give the details of its derivation in Appendix A. This is an exact (all loop orders) nonperturbative renormalisation group equation that contains all the information about a given field theory. It automatically satisfies (7.7) and is usually solved with the boundary condition $$\Gamma_{\kappa=\Lambda_c}(\phi) = S_{\Lambda_c}(\phi)\,. \qquad (7.11)$$
Connection with the one-loop effective action.- This ERG equation can be seen simply as a renormalisation group improved version of the one-loop effective action. To see this we note that if we compute (7.1) at one loop we would obtain $$\Gamma^{\rm 1\,loop}_\kappa = S + \frac{1}{2}\,{\rm Tr}\ln\!\left(\frac{\delta^2 S}{\delta\phi\,\delta\phi} + \hat R_\kappa\right).$$ Differentiating with respect to $\kappa$ gives $$\frac{\partial \Gamma_\kappa}{\partial \kappa} = \frac{1}{2}\,{\rm Tr}\!\left[\partial_\kappa \hat R_\kappa \left(\frac{\delta^2 S}{\delta\phi\,\delta\phi} + \hat R_\kappa\right)^{-1}\right].$$ This would be the one-loop result. The ERG improvement corresponds to effectively replacing $S$ on the right hand side of this equation with $\Gamma_\kappa$, which then gives us back the ERG equation to all loops.
Choice of Regulator.- As in any cut-off regularisation scheme, the answer we obtain is not typically invariant under field redefinitions. In reality there is an infinite number of possible ERG equations we could derive for a given field theory [71]. For this reason we may choose the one best suited to the problem at hand. In particular, the choice of regulator should respect the symmetries of the low energy EFT. To see how this works in the case of a $P(X)$ model, let us make the approximation that $\hat R_\kappa = Z_\kappa\,(\kappa^2 + \Box)\,\Theta(\Box + \kappa^2)$. This is a common choice in the literature, as an optimised regulator for the convergence of the approximate solutions of the ERG equation [72].
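In momentum space ($\Box \to -q^2$) the optimised regulator above reads $R_\kappa(q^2) = Z_\kappa(\kappa^2 - q^2)\,\Theta(\kappa^2 - q^2)$. As a minimal numerical sketch (the function name is ours, and $Z_\kappa$ is treated as a constant):

```python
def litim_regulator(q2: float, kappa: float, Z: float = 1.0) -> float:
    """Optimised (Litim-type) IR regulator in momentum space:
    R_kappa(q^2) = Z * (kappa^2 - q^2) * Theta(kappa^2 - q^2).

    Modes with q < kappa acquire an effective mass that lifts their
    propagator to ~ Z * kappa^2; modes with q > kappa are untouched.
    """
    return Z * (kappa**2 - q2) if q2 < kappa**2 else 0.0
```

The step function makes the regulator vanish identically above the scale $\kappa$, which is what renders the momentum integrals in the flow equation finite and analytically tractable.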
Derivative Expansion.- We now compute the trace at leading order in a derivative expansion, assuming that $\Gamma_\kappa(\phi) = \Lambda^4 \int d^4x\, P_\kappa(X) + \text{higher derivative terms}$. (7.14) The ERG (7.9) then gives, at lowest nontrivial order in the derivative expansion, Eq. (7.15), where $Z^{\mu\nu}[\phi]$ is defined in (3.6) and, symbolically, $Z \sim P_{,X}$. Since $P(X)$ is a function, we see that the ERG is really an infinite number of equations for the full functional dependence of $P(X)$.
Scale Dependence.- It is common to remove the overall scale dependence on $\kappa$ by defining $X_\kappa = -(\partial\phi)^2/\kappa^4 = X\Lambda^4/\kappa^4$, $\Lambda^4 P_\kappa(X) = \kappa^4 \hat P_\kappa(X_\kappa)$, and $k^\mu = \kappa q^\mu$, so that the ERG can be put in the dimensionless form (7.16). This formalism is common and extremely useful when looking for fixed points of the RG flow. In this work we shall be interested in another question, namely the amplitude of the quantum corrections in different regimes, for which this dimensionless formalism is less convenient. Moreover, note that even though Eq. (7.16) is the most common presentation of the ERG equation, it makes the distinction between $\Lambda$ and $\Lambda_c$ less transparent. Given the arguments in part I, this distinction is critical for this class of theories. To keep the notation as close as possible to that of part I, we will attempt to solve the ERG equation in the two limiting cases mentioned below, in its dimensionful form. We include a derivation using the dimensionless couplings in appendix B for completeness.
As it stands, the ERG, be it in its form (7.15) or (7.16), is still too difficult to solve explicitly, and we need to make some additional approximations to gain traction. There are two obvious regimes of interest: • The normal perturbative region, for which $|X| \ll 1$, so that $P(X)$ may be expanded as a polynomial (assuming analyticity at $X = 0$, which is guaranteed by our original assumption in Eq. (3.2)); • The large kinetic term region, which is our main interest since this contains the new physics we are seeking traces of.
We consider these two cases in turn below.

RG flow for small kinetic term regime
As mentioned before, although elegant, the ERG equation is difficult to solve explicitly. As with other non-perturbative systems of equations (such as the Schwinger-Dyson equations), one can truncate the infinite set of equations at some chosen finite order, and solve the resulting finite system of equations exactly. This is not guaranteed to be a good approximation, but it may allow us to capture certain non-perturbative features of the full theory.
If we are only interested in the small kinetic term region, we may expand $P_\kappa(X)$ as a polynomial, $$P_\kappa(X) = \sum_{n=0}^{N} c_n(\kappa)\, X^n\,, \qquad (7.18)$$ where $c_1(\kappa)$ is the renormalisation of the kinetic term for the scalar field, defined previously through $Z_\kappa = 2 c_1(\kappa)$. The other coefficients $c_n$ with $n \ge 2$ are the interaction coefficients. The idea here is to truncate this expansion at some order $n = N$, and then insert it into the RHS of the ERG equation (7.9). Then we expand the RHS only to order $N$ and neglect the remaining terms. This reduces the ERG equation to a system of $N$ renormalisation group equations which may be solved exactly or numerically to determine the flow.
Instructive toy-model.- We illustrate this method with the simplest possible nontrivial example, $N = 2$. Notice that this case is also studied in a perturbative language in terms of Feynman diagrams in appendix C. For this example it is enough to expand the RHS of the ERG equation to second order in $X$, as in Eq. (7.20), where we have defined $X^{\mu\nu}_\kappa = Z^{\mu\nu}_\kappa - Z_\kappa \delta^{\mu\nu}$. The first term in the square brackets of (7.20) is just the usual renormalisation of the cosmological constant, which we ignore (i.e., absorb into $c_0(\kappa)$). The next terms lead to a renormalisation of the coefficients $c_1$ and $c_2$ following the ERG equations (7.22), which are easily solved in terms of their values at $\Lambda_c$. The renormalised theory is then (ignoring the constant term going as $c_0(\kappa)$) obtained after performing the wave function renormalisation, $\phi = \phi_R/\sqrt{Z_\kappa}$ with $Z_\kappa = 2 c_1$, giving (7.26). The renormalised scale at which the interaction $(\partial\phi)^4$ arises is therefore given by Eq. (7.27). When^{10} $\Lambda \ll \Lambda_c$, and starting at $\Lambda_c$ with $c_1(\Lambda_c) \sim c_2(\Lambda_c) \sim 1$, we see that $\Lambda_{\kappa\to 0} \sim \Lambda_c$, as was the case in the perturbative one-loop argument presented in (6.3). Notice however that this result is exact at all loops, unlike the perturbative argument, which would have inferred a different behaviour at higher loops. We have therefore shown that this ERG method is consistent with the one-loop perturbative result in the weak kinetic term region. We obtain a result which is physically entirely consistent: starting at $\kappa = \Lambda_c$ with interactions in $X$ which are already small, $|X| \ll 1$, we see that these interactions become even more irrelevant as we run to lower energy scales.
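The truncation method itself is easy to sketch numerically. The flow below is a *schematic* two-coupling system with hypothetical $O(1)$ coefficients $a_1$, $a_2$ and a dimensionally-motivated $\kappa^3/\Lambda_c^4$ kernel: it is not the paper's exact beta functions (7.22), only an illustration of integrating a truncated flow from $\kappa = \Lambda_c$ down to $\kappa = 0$:

```python
def run_flow(c1_uv, c2_uv, lam_c=1.0, a1=1.0, a2=1.0, steps=10_000):
    """Euler-integrate a schematic N=2 truncation downward in kappa:
        d c1 / d kappa = a1 * c2    * kappa^3 / lam_c^4,
        d c2 / d kappa = a2 * c2**2 * kappa^3 / lam_c^4,
    with boundary values (c1_uv, c2_uv) set at kappa = lam_c.
    The kappa^3 kernel and coefficients a1, a2 are illustrative stand-ins."""
    c1, c2 = c1_uv, c2_uv
    dk = lam_c / steps
    kappa = lam_c
    for _ in range(steps):
        c1 -= a1 * c2 * kappa**3 / lam_c**4 * dk
        c2 -= a2 * c2**2 * kappa**3 / lam_c**4 * dk
        kappa -= dk
    return c1, c2
```

Starting with $c_1 = c_2 = 1$ at $\Lambda_c$, both couplings stay $O(1)$ and the interaction coefficient $c_2$ shrinks towards the IR, matching the qualitative conclusion above that the interactions become even more irrelevant as we run to lower scales.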
We now turn to the other regime of interest, which is the main attraction of this type of theories, namely when $|X| \gtrsim 1$ or even $|X| \gg 1$. Recall that $X$ is defined as $X \equiv -(\partial\phi)^2/\Lambda^4$. From the analysis above, the scale $\Lambda_\kappa$ does flow from $\kappa = \Lambda_c$ to $\kappa = 0$. However, in what follows, by 'large kinetic region' we will only make an assumption on the behaviour of the field at $\kappa = \Lambda_c$. The real assumption behind the 'large kinetic region' is that the magnitude of at least one of the eigenvalues of $Z^{\mu\nu}_{\Lambda_c}$ is large (compared to unity).

7.3 Quantum stability of large kinetic term regime

Leading order in derivatives
It is the large kinetic region which arises in the description of screening mechanisms or inflationary models with large non-gaussianities. For concreteness, let us have in mind screening solutions that work via the Vainshtein effect. These mechanisms rely on the fact that when the kinetic term becomes large, the effective coupling of the scalar to matter becomes small. Qualitatively this is the region for which the eigenvalues of $Z^{\mu\nu}$, defined in Eq. (3.6), are large in comparison to unity. To be more precise, by 'large kinetic term regime' we have in mind the regime where at least one eigenvalue of $Z^{\mu\nu}$ at $\kappa = \Lambda_c$ is large, symbolically $|Z^{\mu\nu}_{\Lambda_c}| \gg 1$. In this section we perform the analysis keeping the scale dependence explicit. We find this is the most efficient prescription to answer the question of when quantum corrections can be small. See Appendix B for the derivation using the dimensionless couplings introduced in Eq. (7.16).
In this region the ERG at leading order in derivatives may be approximated by Eq. (7.28). It is justified to neglect the $Z_\kappa \kappa^2$ in the denominator as we have done because the integral is already finite in the IR. We define $\hat Z^{\mu\nu}_\kappa[\phi] \equiv Z^{\mu\nu}_\kappa[\phi]/Z_\kappa$. The second approximation performed in (7.28) is justified if we remain in the large kinetic regime, $|\hat Z^{\mu\nu}_\kappa[\phi]| \gg 1$, for all values of $\kappa$. As we shall see, $|Z^{\mu\nu}_{\Lambda_c}| \gg 1$ implies $|Z^{\mu\nu}_\kappa| \gg 1$, so this is a consistent approximation. We refer to Appendix B for a more careful analysis where this simplifying approximation is not made.
We recall that we define our $P(X)$ theory at $\Lambda_c$. This means that $Z_{\Lambda_c} = 1$ (which is of course what was set in the previous example), and so $\hat Z^{\mu\nu}_{\Lambda_c} = Z^{\mu\nu}_{\Lambda_c}$. If $Z^{\mu\nu}$ is conformal, $Z^{\mu\nu}_\kappa = Z_\kappa \delta^{\mu\nu}$, then the integral is easy to perform. In reality $\hat Z^{\mu\nu}$ is always anisotropic, but it is clear that it is the maximum eigenvalue that will dominate in the denominator, and we therefore approximate the solution as in Eq. (7.31), where $\mathrm{Max}[\hat Z^{\mu\nu}_\kappa]$ denotes the maximum eigenvalue of $\hat Z^{\mu\nu}_\kappa = Z^{\mu\nu}_\kappa/Z_\kappa$. Now we want to solve this equation assuming that the bare theory defined at the scale $\Lambda_c$ is specified by a function $P_{\Lambda_c}(X)$. A priori the running of the function $P_\kappa(X)$ is highly complicated and involves evaluating a nontrivial integral over $\kappa$. However, to get some insight into this expression, we may start by expanding^{11} the integrand in a Taylor series about $\kappa = \Lambda_c$. At leading order in this expansion, we obtain a contribution of order $\Lambda_c^4/\mathrm{Max}[Z^{\mu\nu}_{\Lambda_c}]$, where we have used the fact that $\hat Z^{\mu\nu}_{\Lambda_c} = Z^{\mu\nu}_{\Lambda_c}$. In the case where this leading contribution, going as $\Lambda_c^4/\mathrm{Max}[Z^{\mu\nu}_{\Lambda_c}]$, is large, the flow from $\kappa = \Lambda_c$ to $\kappa = 0$ is large and the next-to-leading corrections to this expansion are important. However, in the opposite case, where the contribution $\Lambda_c^4/\mathrm{Max}[Z^{\mu\nu}_{\Lambda_c}]$ is suppressed, the flow from $\kappa = \Lambda_c$ to $\kappa = 0$ is also suppressed and the approximation (7.32) is then justified; see appendix B for more details.
The key point is that although the leading contribution Λ 4 c /Max[Z µν Λc ] looks like a large quartic divergence, it is Vainshtein suppressed by a factor of Z which becomes larger as we head into the Vainshtein or screening region (or correspondingly the relevant region when dealing with k-inflation or DBI-inflation). This means that deep inside the large kinetic term region, the all-orders-in-loop corrections to the leading order in derivative terms in the effective action can be negligible. We conclude that within the screened region, i.e. when Z is large, the classical theory is protected from large quantum effects by the Vainshtein mechanism itself.
Power-law example.- As an illustrative example, suppose we take the theory defined at the scale $\Lambda_c$ to be a polynomial of $N$-th order, $$P_{\Lambda_c}(X) = \sum_{n=0}^{N} c_n X^n\,, \qquad (7.33)$$ where the $c_n$ coefficients are assumed to be of order unity. Note again that we assume that even at the scale $\Lambda_c \gg \Lambda$, the scale that enters explicitly in the Lagrangian of the $P(X)$ model is set by the strong coupling scale $\Lambda$ and not $\Lambda_c$. For large kinetic terms, $|X| \gg 1$, we may approximate $P_{\Lambda_c}(X) \sim c_N X^N$, and similarly $\mathrm{Max}[Z^{\mu\nu}_{\Lambda_c}] \sim c_N X^{N-1}$. Thus the condition that contributions to the effective action at all loops are negligible is $$|X| \gg \left(\frac{\Lambda_c}{\Lambda}\right)^{4/(2N-1)}.$$ This condition becomes increasingly easier to satisfy as $N$ increases and, in the limit $N \to \infty$, simply becomes $|X| \gg 1$, which is automatically satisfied in the large kinetic term region.
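With $c_N \sim 1$, the estimates above rearrange into a threshold on $|X|$ that relaxes as $N$ grows. A small numerical sketch (the helper name is ours):

```python
def x_threshold(N: int, lam_c_over_lam: float) -> float:
    """Kinetic-term threshold above which all-loop corrections are negligible
    for P(X) ~ c_N X^N with c_N ~ 1: |X| >> (Lambda_c/Lambda)^(4/(2N-1)).
    Returns the RHS of this condition for given N and ratio Lambda_c/Lambda."""
    return lam_c_over_lam ** (4.0 / (2 * N - 1))
```

For $\Lambda_c/\Lambda = 10$ the threshold is $10^4$ at $N = 1$ but drops towards $1$ as $N \to \infty$, which is the statement that the condition reduces to $|X| \gg 1$ in that limit.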

Quantum stability at all orders in the derivative expansion
The previous analysis has shown that if we truncate the ERG to lowest order in the derivative expansion, then $P(X)$ models that have a power-law growth at large $X$ are generically stable under quantum corrections to all orders in loops in the large kinetic term/screening region $|X| \gg 1$. We now extend this argument to all orders in the derivative expansion. To do this we need to establish how to compute the derivative expansion of the ERG equation.
Returning to the exact form of the Wetterich ERG, Eq. (7.9), we may equivalently rewrite it by introducing a Schwinger parameter $s$. Here both $\hat R_\kappa$ and $\hat A \equiv \delta^2\Gamma_\kappa/\delta\phi\,\delta\phi$ are differential operators which in a derivative expansion have a quasi-local form, with coefficient functions $a_n$ that are functions of $\phi$ and potentially all orders of derivatives of $\phi$.
To compute the trace we can use the trick that, for any differential operator $\hat O(x, \partial)$, the trace can be written as a momentum integral in which the operator acts on unity. This relation is easily proven by using a complete set of position and then momentum states to compute the trace. Denoting $\Gamma_\kappa = \int d^4x\, \mathcal{L}_\kappa(x)$, if we are interested in the Lagrangian at the point $x_*$ we can split the operator in the exponent, which defines the operator $\hat B$. The idea of this split is that we assume $\partial$ acts only on $x$ and not on the reference point $x_*$. At the end of the calculation we may then take the limit $x \to x_*$, and by definition $\hat B$ vanishes if we set $\partial = 0$ and $x = x_*$. The derivative expansion corresponds to expanding in powers of the operator $\hat B$. This is very similar in spirit to the point-splitting regularisation method, which serves to regularise the short distance singularities which appear when two given points are taken to coincide [73]. The corrections to the effective Lagrangian at the point $x_*$ then take the form of an expansion in $\hat B$. We may now perform the integral over $s$, using a common, crude choice for the regulator. Again working with a theory which is at leading order $\mathcal{L}_\kappa(x) = P_\kappa(X) + \ldots$, then at leading order $\hat A(x_*, ik) = Z^{\mu\nu}_\kappa(x_*)\, k_\mu k_\nu + \ldots$, and assuming we are in the region with $\hat Z \gg 1$, we arrive at a form which is finally tractable. The argument for quantum stability now proceeds as before. If we start with the theory defined at the cutoff scale $\Lambda_c$ to be a pure $P_{\Lambda_c}(X)$ model, then at worst $\hat B$ scales as $\hat B \sim Z_\kappa \kappa^2$. Thus, quite regardless of the functional dependence of the RHS, the 'worst case' estimate for the magnitude of the contributions to $\mathcal{L}_0$ obtained from running down from $\kappa = \Lambda_c$ yields Eq. (7.43), where the $b_n$ are order-unity functions built out of the first and higher derivatives of the field.
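For reference, the two standard identities invoked in this computation are the Schwinger parametrisation of the inverse operator and the plane-wave representation of the functional trace; schematically, in our notation,

```latex
\left(\hat A + \hat R_\kappa\right)^{-1}
  = \int_0^\infty ds\; e^{-s\,(\hat A + \hat R_\kappa)}\,,
\qquad
\mathrm{Tr}\,\hat O(x,\partial)
  = \int d^4x \int \frac{d^4k}{(2\pi)^4}\;
    e^{-ik\cdot x}\,\hat O(x,\partial)\,e^{ik\cdot x}\,.
```

Acting on unity, the conjugation by plane waves amounts to the shift $\hat O(x,\partial) \to \hat O(x, \partial + ik)$, which is what turns the trace into an ordinary momentum integral over the symbol of the operator.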
Convergence of the derivative expansion.- We expect the sum to converge if the derivative expansion is well defined. The exact criterion behind the validity of the derivative expansion in (7.42) is beyond the scope of this study, but one can see that (7.42) involves higher and higher orders of $\partial Z/Z$. We therefore expect the sum to converge as long as derivatives are small, $\partial \ll \Lambda$. For the sake of simplicity, we apply here, without further justification, the same criterion (3.13) or (3.15) as that derived in Part I, which ensured that the derivatives were small compared to $\Lambda$.
It is very possible that this estimate is too conservative. Indeed, the coefficients $b_n$ already include contributions from momenta $k$ of order $\Lambda_c$, so it is very likely that the derivatives could get arbitrarily close to $\Lambda_c$, in which case we would only need $|\partial Z/Z| \ll \Lambda_c$ rather than the much stronger requirements (3.13) or (3.15). As explained at the beginning of §II, there are several reasons why the conditions obtained here could potentially be relaxed compared to those found in Part I.
Then, assuming the sum converges, the condition that the all-loop contributions are negligible modifications to the effective action in the large kinetic term region, $|Z| \gg 1$, is given by Eq. (7.44). We have therefore generalised the result (7.32) to all orders in the derivative expansion. The condition (7.44) becomes easier and easier to satisfy as one enters deeper within the 'Vainshtein' or large kinetic term region.

Application to screening
To illustrate the previous result, let us revisit the case of static and spherically symmetric screening introduced in §5, under the same conditions of conformal coupling. Regardless of whether we are dealing with $P(X)$, DBI, or Galileons^{12}, for all these screening mechanisms the criterion (7.44) implies the window (7.45), where the Vainshtein radius was introduced in (5.10) for $P(X)$ theories, including DBI, and in Eq. (5.16) for the cubic Galileon. Notice that the lower limit is an estimate of when the sum in Eq. (7.43) is expected to converge, which is the case if the derivative expansion is well defined. Assuming that this sum converges, the upper bound arises from the naturalness requirements deep inside the Vainshtein radius. As such, it might be overly conservative, but it is nevertheless suggestive of the limiting length scales for which this theory is well defined. In Eq. (7.45) the coefficients $p$ and $q$ are model-dependent if one follows the criterion (3.13) or (3.15); in particular, $q = 3/2$ for the cubic Galileon whereas $q = 1$ for generic $P(X)$ models. The exact expressions for the coefficients $p$ were derived in Eq. (5.7) for the power-law $P(X)$ model, for which $p < 0$, and in Eq. (5.15) for DBI, for which we find $p = 2/3$. For the cubic Galileon, $p = -3$.
For concreteness, let us consider for instance $\Lambda_c \sim$ eV. This is of course well below the Planck scale, but still much larger than the strong coupling scale $\Lambda$ usually considered for screening. It would already be a major improvement in our understanding if we were able to push the cut-off scale for these types of theories to values as large as $\sim$ eV. In fact, any value larger than the scale of dark energy ($10^{-3}$ eV) should already be considered a success.
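To put these energy scales in spatial terms (in natural units, using the standard value $\hbar c \approx 197.33$ eV$\cdot$nm):

```python
HBAR_C_EV_M = 1.97326980e-7  # hbar * c in eV * m (CODATA: 197.327 MeV fm)

def energy_to_length_m(energy_ev: float) -> float:
    """Length scale associated with an energy scale, L = hbar * c / E."""
    return HBAR_C_EV_M / energy_ev
```

A cut-off $\Lambda_c \sim 1$ eV corresponds to a length of roughly $0.2\,\mu$m, while the dark energy scale $10^{-3}$ eV corresponds to roughly $0.2$ mm, the shortest distances at which gravity has been directly tested.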
Then, with $\Lambda_c \sim$ eV, the quantum contributions at all loops introduce negligible modifications to the effective action within the entire solar system (apart from the regions close enough to dense objects such as the Sun and the other planets). This result suggests that the strong coupling scale, $\Lambda$, could be well separated from the cut-off scale, $\Lambda_c$, which is a remarkable feature of these types of theories which 'ride on irrelevant operators.' The fact that the RHS of the criterion (7.45) is the same for DBI as for $P(X)$ screening, while the LHS is actually tighter for DBI than for a generic $P(X)$ model, suggests once more that the existence of an additional symmetry has surprisingly little to do with these considerations. We summarise our results in Table 1.

Background vs. perturbed-field EFT
So far we have centred our analysis on the question of naturalness. For this we have focused on the EFT of the 'background' field $\phi$, which we have found to be valid both when the kinetic term is small ($|X| \ll 1$) and when the kinetic term is large and the criterion (7.44) is satisfied, provided the derivative expansion is under control. This does not mean, however, that the EFT as a whole is valid in all these regimes. The EFT of the background field can be under control and quantum corrections to the background EFT may be small, but this does not yet mean that the perturbed field $\chi$ living on the background determined by $\phi$ is weakly coupled, nor that quantum corrections are unimportant for determining its scattering or evolution.
When the EFT for the perturbed field $\chi$ is valid is a separate question, which may involve the redressed strong coupling scale as computed, for instance, in Ref. [50] for the cubic Galileon. Yet again, as explained in §2.1, the redressed strong coupling scale, which determines the breakdown of tree-level unitarity for the perturbations, is well distinct from the cut-off. 13

Table 1. Comparison between the regimes of validity of different derivative theories (including when the theory is technically natural), determined as a function of range of scales; the columns list the model, its Lagrangian, and the regime of validity of the EFT. Note that $r_*$ scales slightly differently with the mass of the matter distribution which sources the background field from model to model, as cautioned before. Any screening solution has $\Lambda r_* \gg 1$. In the $P(X)$ model we have $N > 1$ (and potentially $N \gg 1$). The lower side of the regime is determined by requiring that the derivative expansion converges, using Part I as an indicator. It is likely that the LHS of these criteria are overly restrictive and could be relaxed significantly, as cautioned in the main text.
Moreover, the break-down of tree-level unitarity at the (redressed) strong coupling scale does not necessarily mean a loss of predictivity of the theory.
For a power-law $P(X)$ screening of the form $P(X) = X/2 - a_N(-X)^N$, we expect the redressed strong coupling scale to go as $\Lambda_* \sim \Lambda\, X^{N/4} \sim (r_*/r)^{1/2}\,\Lambda$ in the limit of large $N$.
For DBI, on the other hand, there are some higher order operators which are enhanced by higher powers of the Lorentz factor, and we expect the redressed strong coupling scale to go instead as $\Lambda_* \sim \Lambda/\gamma^{1/4} \sim (r/r_*)^{1/2}\,\Lambda$, which makes the redressed strong coupling scale smaller in the screened region. This is an interesting effect due to the square root structure of DBI. In DBI it is therefore particularly important to dissociate the cut-off scale from the (redressed) strong coupling scale.
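The opposite scalings quoted above can be made concrete with a small helper (the function name and interface are ours; $\Lambda$ is in arbitrary units):

```python
def redressed_scale(r_over_rstar: float, lam: float, model: str) -> float:
    """Redressed strong coupling scale deep inside the screened region (r < r_*):
      power-law P(X), large N:  Lambda_* ~ (r_*/r)^(1/2) * Lambda  (raised),
      DBI:                      Lambda_* ~ (r/r_*)^(1/2) * Lambda  (lowered).
    Schematic scalings only; overall O(1) factors are dropped."""
    if model == "power-law":
        return lam * r_over_rstar ** -0.5
    if model == "DBI":
        return lam * r_over_rstar ** 0.5
    raise ValueError(f"unknown model: {model}")
```

At $r = r_*/100$, for instance, the power-law redressed scale is raised by a factor of 10 while the DBI one is lowered by the same factor, illustrating why the distinction between the cut-off and the redressed strong coupling scale matters most for DBI.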

Summary and discussion
This paper has addressed two essential questions in a class of derivative Lagrangians, usually known as $P(X)$ models. These theories are of special interest when the irrelevant operator $X = -(\partial\phi)^2/\Lambda^4$ is large, or at least of order unity. In this regime we are 'riding on irrelevant operators', which can be worrisome from a standard EFT viewpoint. Such operators are important if they are governed by a scale $\Lambda$ which is much smaller than the cutoff of the theory. This immediately begs the question of whether or not the EFT of $P(X)$ models can ever be under control against quantum corrections, meaning whether the renormalised action is close to (or even overrides) the classical action. Such a hierarchy allows the strong coupling scale to be well separated from the cut-off scale, but it also means that the strong coupling scale is independent of the cut-off. Indeed, the cut-off of the theory, i.e., the onset of new physics, cannot depend on the background behaviour of the low-energy theory without violating decoupling between low and high energy physics.
We have addressed this question following two different procedures proposed in the literature: 1. Covariant and perturbative approach à la Barvinsky & Vilkovisky - In this first part, we ignored the power-law divergences arising from quantum effects. We justified this treatment in depth, emphasising that it is appropriate if we do not ask a naturalness question about integrating out heavier fields, but are only interested in the quantum corrections from the field itself. We find that classical solutions are under control as long as higher derivatives of $X$ are suppressed, or more precisely provided $(\partial^2 Z/Z)^2 \ll \Lambda^4 P(X)$. We derived the explicit (covariant) criterion for the suppression of quantum effects and applied it to different contexts: • First, during inflation we recovered the standard result for the regime of validity of DBI inflation, amounting to the acceleration of the field being small.
• Second, in static and spherically symmetric screening setups. We compared the screening mechanism of a 'generic' power-law $P(X)$ theory to that of DBI, and have shown that generic $P(X)$ screenings can have a larger regime of validity for their respective classical background solutions. The comparison between screenings in different models is summarised in Table 1. 2. Exact Wetterich renormalisation group procedure and addressing the naturalness question - In the second part of this work we have applied an exact, all-loops renormalisation procedure and have addressed the core of the naturalness question for generic $P(X)$ models. In this approach we have kept all the contributions from the quantum corrections, including the power-law and logarithmic divergences, as well as the finite pieces.
The ERG approach shows the direct implementation of the 'Vainshtein' mechanism in the renormalised effective action. It serves as a suppression mechanism for the quantum effects at all orders in loops. We emphasise that this procedure is unrelated to that of the redressed strong coupling scale. Instead, following an ERG approach, we find that the new operators in the renormalised effective action are suppressed by a factor of $1/Z$, where $Z \sim P_{,X}$ and $|Z| \gg 1$ in the region of interest for this type of theories.
This proves the full quantum stability of the theory in the regime where the kinetic term is large, $|Z| \gg 1$. $P(X)$ theories are therefore more and more natural as one enters deeper into that regime. The same applies to other theories which exhibit the same type of 'large kinetic term regime', like Galileons. Indeed, similar conclusions were drawn by Brouzakis et al. [76,77] in Galileon theories using the heat kernel technique, and by Codello et al. [78] within a braneworld setup.
For completeness, we have also considered the less interesting regime, for which $|X| \ll 1$, where the conclusions match those of the perturbative approach at one loop.
3. The role of symmetries - In this work we kept a close eye on the potential role played by symmetries in these questions of naturalness and 'validity of the classical solution.' We found that the symmetry does of course play a crucial role in repackaging the quantum corrections in a way which preserves the symmetry (this was performed in DBI using a five-dimensional embedding approach). Nevertheless, this nice repackaging of the quantum structure does not say much about the overall order of magnitude of the quantum corrections. As a result, when the strong coupling scale does not coincide with the cut-off scale, DBI enjoys the same renormalisation features as any other $P(X)$ theory. In fact, deep in the large kinetic term region, DBI is as natural as any other $P(X)$ model, despite the presence of an additional symmetry.
To conclude, the net effect of most calculations in derivative Lagrangians has produced a remarkable change in our understanding of the way their EFTs are organised, which relies on the hierarchy between scales being addressed as a derivative hierarchy. The results in this paper could have profound consequences for these types of theories in general, including Galileon and other models exhibiting the Vainshtein mechanism [23]. See also Refs. [79][80][81][82] for related considerations in Galileon theories.
The Vainshtein mechanism relies on non-linear kinetic interactions being important below the cut-off. The principal result of this paper is precisely that the quantum consistency of these theories is tied to these important kinetic interactions. Incorporating the Vainshtein mechanism within the loops themselves has uncovered a mechanism by which quantum corrections are under control. This can open the way for more models to be taken seriously in model building, both during inflation and late time acceleration. CdR and RHR are supported by a Department of Energy grant DE-SC0009946. RHR would like to thank DAMTP (Cambridge, UK) for hospitality and the Perimeter Institute for Theoretical Physics (Waterloo, Canada) for hospitality and support whilst this work was in progress. The tensor algebra in appendix C was performed using the xAct package for Mathematica [83].

A Derivation of the Wetterich ERG equation
In the second part of the main body of this paper we have addressed the naturalness question of $P(X)$ theories. In §7 we required the exact renormalisation group flow equation as a means to compute the quantum corrections to the classical Lagrangian to all orders in loops. In this appendix we review the derivation of the Wetterich ERG equation. We begin with the definition of the infrared regulated generating functional $W_\kappa$, defined in Eq. (A.1). Since the only place the regularisation scale $\kappa$ enters is through $\hat R_\kappa$, we have Eq. (A.2), where $R_\kappa(x, y) = \hat R_\kappa(x)\,\delta^4(x - y)$ and the angle brackets denote the path integral average. Since $W_\kappa$ is a generating functional, it determines the two-point function. Writing $\bar\phi = \langle\phi\rangle$, then taking $\bar\phi$ to be independent of $\kappa$ (which implies $J$ is dependent on $\kappa$) and differentiating, we obtain Eq. (A.4). The two-point function $\langle\phi(x)\phi(y)\rangle$ may also be obtained from $\bar\Gamma_\kappa$ via its second functional derivative; in index-suppressed notation, combining this with Eq. (A.4) and inserting the result into Eq. (A.2), we obtain the flow equation for $\bar\Gamma_\kappa$. Finally, for convenience, we define the effective average action $\Gamma_\kappa$ by subtracting the regulator term, $\Gamma_\kappa[\phi] = \bar\Gamma_\kappa[\phi] - \tfrac{1}{2}\int \phi\, \hat R_\kappa\, \phi$, so that the final form of the ERG equation is (dropping the bar on $\phi$) $$\frac{\partial \Gamma_\kappa}{\partial \kappa} = \frac{1}{2}\,{\rm Tr}\!\left[\partial_\kappa \hat R_\kappa \left(\frac{\delta^2 \Gamma_\kappa}{\delta\phi\,\delta\phi} + \hat R_\kappa\right)^{-1}\right],$$ which is the form used in the main text in §7.

B Dimensionless couplings analysis
In this appendix we re-derive the quantum stability argument in the large kinetic term regime of §7.3.1. We only assume that the derivative interactions dominate over the standard kinetic term at $\Lambda_c$, where the $P(X)$ theory is defined, and make no further assumption at other values of $\kappa$.
We start with the ERG in its dimensionless form, derived in Eq. (B.2), where, similarly to §7.3.1, we define $\hat{\bar Z}^{\mu\nu}_\kappa \equiv \bar Z^{\mu\nu}_\kappa/\bar Z_\kappa$. We recall again that we define our $P(X)$ theory at $\Lambda_c$. This means that $Z_{\Lambda_c} = 1$ and $\hat{\bar Z}^{\mu\nu}_{\Lambda_c} \equiv \bar Z^{\mu\nu}_{\Lambda_c}$. For simplicity, we focus here on the case where $\bar Z^{\mu\nu}$ is conformal, $\bar Z^{\mu\nu}_\kappa = \bar Z_\kappa \delta^{\mu\nu}$, and then solve the resulting flow.

C Perturbative analysis

In this appendix we perform a perturbative analysis. To obtain the individual operators in terms of a sum of Feynman diagrams and then covariantise the result would be a herculean task. So, for simplicity, we consider in what follows the first term in such a perturbative approach for a simple toy model and compare the result with that obtained in (3.12). The model we will investigate is given by the Lagrangian (C.1), where $\lambda$ is some positive^{14} coupling constant. We exemplify how quantum operators are generated by explicitly computing one-loop diagrams in the theory given by the Lagrangian (C.1) using dimensional regularisation. The lowest $n$-point function which can receive quantum corrections in (C.1) is the 2-point function, as depicted in Figure 1. The background field is massless, and the amplitude of the one-loop contribution associated with the diagram in Figure 1 is forced to vanish in dimensional regularisation. Hence the Lagrangian (C.1) does not logarithmically correct the 2-point function at one loop. This is the well-known result that massless fields have a vanishing tadpole.
Four-point function.-Next we look at the 4-point function. The corresponding Feynman diagram is depicted in Figure 2.
We label the external legs with different momenta, $p_1$, $p_2$ and $p_3$, subject to 4-momentum conservation. The amplitude associated with this process involves a sum over all the cyclic permutations of the momenta. Using dimensional regularisation, we indeed recover the result from (3.12) expanded to the same order.

14 Since we only want to focus on the radiative stability of the classical theories, we choose the sign of $\lambda$ appropriately so that it does not generate other possible issues with the theory. To be more precise, the positivity of this coefficient is tied to a well-defined local S-matrix [9].

Figure 2. One-loop contributions to the 4-point function. By conservation of 4-momentum, it follows that $q = k - p_1 - p_2 = p_4 + p_3 - k$.
As expected, we observe the higher derivative terms emerging at the quantum level.

The rising of a ghost?
The operators generated at one loop in (C.3) are not total derivatives, and thus are not redundant in the technical sense. The reader might worry that the one-loop effective Lagrangian generated quantum mechanically now contains operators with more than two derivatives acting on the fields, which would signal the presence of a ghost. We stress that quantum effects will inevitably generate higher-derivative terms (as in GR). Higher derivatives would only be unacceptable if they led to an Ostrogradski instability, or in other words if they produced a new pole in the propagator.
Let us focus, for example, on the operator $(\partial^2\phi)^4/\Lambda^8$. We can expand it about an arbitrary background, $\phi_0$, and deduce that the mass of the would-be ghost is $m_{\rm ghost} \sim \Lambda^4/|\partial^2\phi_0|$. However, as we have argued in the main text, we can design background configurations for which $|\partial\phi_0| \sim \Lambda^2$, provided $|\partial^2\phi_0| \ll \Lambda^3$. In that regime $m_{\rm ghost} \gg \Lambda$, so this condition ensures both the radiative stability of the theory and the effective absence of ghosts at energy scales which could be probed by this EFT.
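This estimate can be made explicit with a schematic sketch of our own, dropping $\mathcal{O}(1)$ factors and expanding $\phi = \phi_0 + \delta\phi$:

```latex
\mathcal{L} \;\supset\; -\tfrac{1}{2}(\partial\delta\phi)^2
 \;+\; \frac{(\partial^2\phi_0)^2}{\Lambda^8}\,(\partial^2\delta\phi)^2
\quad\Longrightarrow\quad
\frac{1}{p^2} \;\longrightarrow\;
\frac{1}{p^2\big(1 - p^2\,(\partial^2\phi_0)^2/\Lambda^8\big)}\,,
```

so the would-be ghost pole sits at $p^2 \sim \Lambda^8/(\partial^2\phi_0)^2$, i.e. $m_{\rm ghost} \sim \Lambda^4/|\partial^2\phi_0| \gg \Lambda$ whenever $|\partial^2\phi_0| \ll \Lambda^3$, safely above the cutoff of the EFT.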

D Generalisation to higher-loops
In part I of this paper we have quoted the formula for the logarithmic corrections induced by quantum effects. The result presented in Eq. (3.12) is valid at one-loop. We now generalise this argument to an arbitrary number of loops and focus again on the running of the operator coefficients. It is understood that all the statements below apply to the finite contributions as captured by the logarithms.
For an arbitrary $P(X)$ model, since the field has no mass (nor potential), one can never generate a running of the zero-point function (i.e., the cosmological constant) nor of a potential for the scalar field (as is well known, the running of the cosmological constant only comes from massive fields). For a $P(X)$ model, we have seen that all the finite contributions involve higher derivatives of the scalar field.
Consider a generic $P(X)$ model, which can be written as a series $\mathcal{L} = \sum_m \lambda_m \Lambda^4 X^m$, and let us compute a $(2n)$-point function. At the very least, to have a finite contribution, this diagram must have $M \geq 2$ vertices of the form $X^{m_j}$, with $j = 1, \cdots, M$, and must involve $\ell$ loops, with
$$\ell = 1 + r - M - n\,, \qquad {\rm with}\quad r = \sum_{j=1}^{M} m_j\,, \qquad {\rm (D.1)}$$
following Euler's formula. Then, on simple dimensional grounds, such a diagram has a finite amplitude of the form
$$\mathcal{A}_{2n} \sim \Big(\prod_{j=1}^{M}\lambda_{m_j}\Big)\,\frac{p^{\,2n+4\ell}}{\Lambda^{\,4(n+\ell-1)}}\,\log p\,,$$
where $p$ plays the role of the external momentum, which translates into the following operator,
$$\Big(\prod_{j=1}^{M}\lambda_{m_j}\Big)\,\frac{\partial^{\,2n+4\ell}\,\phi^{2n}}{\Lambda^{\,4(n+\ell-1)}}\,. \qquad {\rm (D.2)}$$
The result in Eq. (D.2) is much more powerful and reinforces the results at one loop. Indeed, since from Eq. (D.1) we have $(1+r-M) = n+\ell$, we immediately infer that the number of derivatives acting on the $2n$ fields is $2n + 4\ell$, which inevitably means that there is always more than one derivative per field. We can always express these operators (symbolically) as $f(X)\,(\partial^{\,2\ell+1}\phi)^2$.
Remarkably, the number of derivatives per field increases with the number of loops. This means that, in the derivative expansion, higher-order loops are even more suppressed.
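As a concrete check of this counting (a worked example of our own), consider the 4-point function ($n = 2$) built from two quartic vertices $X^2$, i.e. $M = 2$ and $m_1 = m_2 = 2$, so that $r = 4$. Euler's relation quoted above then gives

```latex
\ell = 1 + r - M - n = 1 + 4 - 2 - 2 = 1\,,
\qquad
2n + 4\ell = 8\,,
```

i.e. a one-loop diagram generating four-field operators carrying eight derivatives, schematically $(\partial^2\phi)^4/\Lambda^8$, in agreement with the explicit one-loop computation of Appendix C.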
E The cubic Galileon: an illustrative example of a higher-order derivative theory

Our analysis in this paper is primarily focused on $P(X)$ theories, where the Lagrangians only depend on the first derivative of the scalar field. However, our results can be readily generalised to Galileon theories. These theories are phenomenologically very rich, and their most interesting regime is that of large non-linearities, for which screening solutions of fifth forces exist. As before, there are a number of ways of computing the quantum corrections in these Galileon models, namely using the point-splitting technique [76], or performing canonical normalisation and substituting into the Coleman-Weinberg effective potential formula [8,50]. On the other hand, the quantum effective action (3.12) allows for a direct derivation of the covariant version of the Galileon non-renormalisation theorem. This is precisely what we shall do in this appendix.
Consider the cubic Galileon. This is the simplest of the Galileon operators, and for the purposes of our discussion it suffices to apply the results to this case. Starting with the Lagrangian (1.4), we take $c_4 = c_5 = 0$, and it simply reads
$$\mathcal{L} = c_2\,(\partial\phi)^2 + \frac{c_3}{\Lambda^3}\,(\partial\phi)^2\,\Box\phi\,, \qquad {\rm (E.1)}$$
where the Lorentzian signature was used in the contraction of the Levi-Civita symbols, and $\Box \equiv \eta^{\mu\nu}\partial_\mu\partial_\nu$. We can fix $c_2 = -1/2$, so that $\phi$ is canonically normalised, and assume $c_3 < 0$ for stability requirements under quantum corrections to be met (see footnote 14). Using the background field method of §3.1, we can identify the elements in the kinetic operator (3.6); the resulting expression, Eq. (E.2), is obtained after all boundary terms have been discarded in the process. Then the quantum corrections given in Eq. (3.12) are simply a function of the curvature invariants built out of the effective metric given by Eq. (3.8). In analogy with the conclusions of §3.2, the Ricci curvature tensor involves terms of the schematic form $\partial^2 Z/Z$ and $(\partial Z/Z)^2$. This agrees with the analysis of Refs. [8,50], which quoted quantum corrections of the same schematic form by arguing that $Z^{\mu\nu} \sim Z\,\delta^{\mu\nu}$. Notice that from Eq. (E.2) the kinetic operator $Z^{\mu\nu}$ for the cubic Galileon involves operators with two derivatives acting on the fields (the same will be true for the other Galileon terms in the Lagrangian (1.4)), while the quantum corrections introduce operators which are at least one order higher in derivative counting. Therefore, we recover the usual result for Galileons: focusing on the logarithmic divergencies, the EFT defined by the Lagrangian (E.1) is well defined provided $\phi \sim \Lambda$, $\partial\phi \sim \Lambda^2$ and $\partial^2\phi \sim \Lambda^3$, while $\partial^n\phi \ll \Lambda^{n+1}$ for $n > 2$. This hierarchy between derivatives of the fields ensures that quantum corrections are kept under control. To be more rigorous, the EFT for the cubic Galileon is defined by the regime in which these higher-derivative corrections remain small; the precise condition is rather symbolic, and the complete expression should be read from the RHS of Eq. (E.2).
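For concreteness, the kinetic operator can be obtained by expanding $\phi \to \phi + \delta\phi$ in the cubic Galileon and integrating by parts (a sketch of our own; overall normalisations depend on the conventions of (3.6)):

```latex
\mathcal{L}^{(2)} \;=\; Z^{\mu\nu}\,\partial_\mu\delta\phi\,\partial_\nu\delta\phi\,,
\qquad
Z^{\mu\nu} \;\sim\; c_2\,\eta^{\mu\nu}
 \;+\; \frac{2c_3}{\Lambda^3}\left(\Box\phi\,\eta^{\mu\nu} - \partial^\mu\partial^\nu\phi\right),
```

which makes manifest that $Z^{\mu\nu}$ contains exactly two derivatives per background field, as used in the argument above.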
As noted in §3.2, if we use the power-law divergencies as indicators of high-energy dependence, then the quantum corrections will read symbolically as in Eq. (E.5), where $R$ is the Ricci scalar built out of the effective metric $g^{\rm eff}_{\mu\nu}$ of Eq. (3.8) and $Z_{\mu\nu}$ is the inverse of the kinetic operator $Z^{\mu\nu}$ in Eq. (E.2). As soon as we consider solutions inside the Vainshtein radius, the corrections generated by power-law divergencies excite operators of the same form as the Galileon ones originally present in the classical action. In part I of this paper we discarded this family of divergencies, for the reasons explained in §3.2. Applying the same arguments to the Galileons, the quantum corrections in Eq. (E.5) can be dismissed as not providing an accurate accounting of high-energy physics effects.

F A closer look at DBI: a symmetry-manifest approach

In the main text we have discussed the features of DBI as a four-dimensional EFT. In fact, DBI arises in the context of higher-dimensional brane models, as a nontrivial combination of the Dirac and the Born-Infeld actions, where the reparametrisation invariance is made manifest. In this appendix we investigate whether performing the calculations of the quantum corrections in a higher-dimensional setup offers special (if any) insights.
F.1 Where did the symmetry go?
All the terms in the one-loop effective action (3.12) trivially satisfy the shift symmetry of P (X) Lagrangians. One could wonder if other symmetries in the classical action are also preserved at the level of the quantum effective action in (3.9).
To address this question, we consider the special example of DBI, as briefly introduced in §1. In its higher-dimensional setup, the DBI action describes the relativistic motion of a brane moving in a generically warped geometry. We suppose for simplicity that the brane moves along a cut-off throat, to mimic the absence of warping. The DBI Lagrangian in this case is given by
$$\mathcal{L}_{\rm DBI} = \Lambda^4\left(1 - \sqrt{1-X}\right), \qquad {\rm (F.1)}$$
where again $X = -(\partial\phi)^2/\Lambda^4$. Not only is this theory invariant under the shift symmetry, but it is also invariant under the non-linear diffeomorphism given in Eq. (1.2). In fact, DBI is the only model within the class of $P(X)$ theories which is invariant under this non-linear symmetry. For small $X$, the Lagrangian (F.1) reproduces the theory of a canonically normalised scalar field, with the first interaction being of the form modelled in Eq. (C.1). But the most interesting regime is that of large self-interactions, as measured by powers of $X$. The presence of the square root in (F.1) provides a means to resum an infinite tower of such interaction channels within the strong coupling regime of the theory. In that case, and following the terminology in Eq. (2.2), we can say that DBI contains an infinite number of irrelevant but important operators of the form $X^n$, where $n$ runs from 1 to infinity.
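As a quick illustration of this tower (a sketch of our own, not part of the original derivation), the Taylor coefficients of $1-\sqrt{1-X}$ can be generated from the generalised binomial series; every power $X^n$ appears with a positive coefficient, which is the infinite set of irrelevant operators referred to above. The helper name below is ours.

```python
from fractions import Fraction
import math

def dbi_series_coeffs(nmax):
    """Coefficients c_n in 1 - sqrt(1 - X) = sum_{n>=1} c_n X^n.

    Uses the generalised binomial series (1 + u)^{1/2} = sum_k C(1/2, k) u^k
    with u = -X, so c_n = -C(1/2, n) (-1)^n, computed in exact arithmetic.
    """
    coeffs = []
    for n in range(1, nmax + 1):
        # generalised binomial coefficient C(1/2, n) = (1/2)(1/2 - 1)...(1/2 - n + 1)/n!
        binom = Fraction(1)
        for k in range(n):
            binom *= Fraction(1, 2) - k
        binom /= math.factorial(n)
        coeffs.append(-binom * (-1) ** n)
    return coeffs

print(dbi_series_coeffs(4))
# [Fraction(1, 2), Fraction(1, 8), Fraction(1, 16), Fraction(5, 128)]
```

The leading term $X/2$ reproduces the canonically normalised kinetic term, while the strictly positive higher coefficients exhibit the resummed $X^n$ interactions.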
Does the quantum effective action (3.12) satisfy the DBI symmetry whose infinitesimal form is (1.2)? Explicit verification shows that it does not, which might be an indication of trouble and hint at a lack of consistency of our result. Indeed, one expects that invariance of the classical action under a certain symmetry should be respected by quantum effects, and therefore be manifest at the level of the quantum effective action. In most cases, both the Lagrangian density and the measure of the path integral do in fact remain invariant under the symmetry transformation. Nevertheless, exceptions exist and, in particular, when the symmetries are non-linearly realised, the invariance under the symmetry is not preserved at the quantum level [84].
One way of understanding how to preserve the symmetry (1.2) under quantum corrections is to notice that the formula for $Z^{\mu\nu}$ in Eq. (3.6) can have an origin in higher-dimensional models. Indeed, it is conformally related to the metric induced on a probe brane immersed in a higher-dimensional space-time [25,28],
$$Z_{\mu\nu} = \Omega^2(X)\,q_{\mu\nu}\,, \qquad {\rm with}\quad q_{\mu\nu} = \delta_{\mu\nu} + \frac{1}{\Lambda^4}\,\partial_\mu\phi\,\partial_\nu\phi\,.$$
The induced metric $q_{\mu\nu}$ appropriately transforms as a tensor under the DBI symmetry associated with boosts and rotations in the extra dimension, as described by the non-linear transformation (1.2). If $Z_{\mu\nu} = q_{\mu\nu}$, or equivalently $\Omega^2 = 1$, then $Z_{\mu\nu}$ and scalar quantities constructed from it would be explicitly invariant under the transformation in Eq. (1.2). However, because of the $X$-dependence of the conformal factor $\Omega$, $Z_{\mu\nu}$ and therefore the effective metric do not transform as tensors under the transformation. The degree of breaking of the symmetry will be measured by operators originating from terms such as $(\partial\Omega/\Omega)^2 \sim (\partial Z/Z)^2$, and similar derivatives as we have deduced in Eq. (3.14), at the level of the quantum corrections.
Ultimately, to keep a prescription in which the symmetry is made manifest, one should rather work in the higher-dimensional setup from which the DBI symmetry originated.

F.2 DBI from a five-dimensional embedding
In what follows we consider a probe brane located at $x^5 = \phi(x^\mu)$ in the flat slicing of five-dimensional Minkowski (or Euclidean) space. The induced metric on the brane is thus given by
$$q_{\mu\nu} = \delta_{\mu\nu} + \frac{1}{\Lambda^4}\,\partial_\mu\phi\,\partial_\nu\phi\,.$$
The inverse of the induced metric on the brane is simply given by
$$q^{\mu\nu} = \delta^{\mu\nu} - \frac{\gamma^2}{\Lambda^4}\,\partial^\mu\phi\,\partial^\nu\phi\,,$$
where indices are raised and lowered using $\delta_{\mu\nu}$ and with
$$\gamma = \left(1 + \frac{(\partial\phi)^2}{\Lambda^4}\right)^{-1/2} = \frac{1}{\sqrt{1-X}} \qquad {\rm (F.5)}$$
being the Lorentz boost factor. In five-dimensional GR with a brane, there will be bulk loops and brane loops. Computing again a one-loop effective action, one can check that the bulk loops take the form of a series of invariant operators (F.8) built out of the curvature and the extrinsic curvature, where $R$ and $\nabla$ are derived with respect to the induced metric $q_{\mu\nu}$, which has determinant denoted by $q$. In the limit $M_5 \to \infty$ keeping $\Lambda$ finite, the bulk loops completely decouple while the brane loops remain. Here $K^\mu{}_\nu$ represents the extrinsic curvature given by [28]
$$K^\mu{}_\nu = -\frac{1}{\Lambda^2}\, q^{\mu\alpha}\,\gamma\,\bar\nabla_\alpha \bar\nabla_\nu \phi\,, \qquad {\rm (F.9)}$$
where $\bar\nabla$ is to be understood as the covariant derivative with respect to the metric $\delta_{\mu\nu}$. In Cartesian coordinates this is simply the usual partial derivative, but whenever the coordinate system is not Cartesian there will be important differences. Notice that in this formalism both the bulk and the brane loops are manifestly invariant under the DBI symmetry. Indeed, the induced metric, the extrinsic curvature and the five-dimensional Riemann tensors all transform as tensors under (1.2), and the brane and bulk actions, constructed out of scalar quantities, are thus manifestly invariant.
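As a consistency check (our own algebra, with indices contracted using $\delta_{\mu\nu}$ and $\gamma^{-2} = 1 + (\partial\phi)^2/\Lambda^4$):

```latex
q^{\mu\alpha}\,q_{\alpha\nu}
= \delta^{\mu}{}_{\nu}
+ \frac{\partial^\mu\phi\,\partial_\nu\phi}{\Lambda^4}
\left[\,1 - \gamma^2\Big(1 + \frac{(\partial\phi)^2}{\Lambda^4}\Big)\right]
= \delta^{\mu}{}_{\nu}\,,
```

confirming that $q^{\mu\nu} = \delta^{\mu\nu} - \gamma^2\,\partial^\mu\phi\,\partial^\nu\phi/\Lambda^4$ is indeed the inverse of the induced metric.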
Regime of validity of the EFT.-Classical solutions computed using the DBI action (F.1) are within the regime of validity of the theory as long as the contributions from (F.8) are small compared to the operators in (F.1).
Power-law divergences include contributions in (F.8) with $\{\ell, n, m\} = \{0, 0, 0\}$, corresponding to the equivalent of the cosmological constant problem. If that power-law divergence were taken seriously, DBI would not be technically natural unless the strong coupling scale were identified with the cut-off, which is at least $M_5$. If that were true, the bulk loops would not decouple. In what follows we take the approach that power-law divergences are regularisation- and field-dependent and may not capture the UV physics (see also Ref. [51]). Moreover, we put them in the same category as the cosmological constant problem until part II of the paper, where naturalness questions are addressed precisely.
Therefore, focusing on the logarithmic divergences, given by $\{\ell, n, m\} = \{0, 0, 4\}$, and regardless of the classical configuration, all the eigenvalues $\lambda_K$ of $K^\mu{}_\nu$ should be small compared to the scale $\Lambda$,
$$|\lambda_K| \ll \Lambda\,. \qquad {\rm (F.10)}$$
The most interesting regime of DBI is that of large self-interactions, where $|X| \sim 1$ and, more specifically, when $|X| \to 1$ and $\gamma \gg 1$, with $\gamma$ defined in Eq. (F.5). In that case, since the longitudinal eigenvalue of $q^{\mu\nu}$ scales as $\gamma^2$, the largest eigenvalue of $K^\mu{}_\nu$ scales as $\gamma^3\,\partial^2\phi/\Lambda^2$, and the criterion (F.10) inferred from the previous symmetry-preserving argument implies
$$|\partial^2\phi| \ll \gamma^{-3}\,\Lambda^3\,, \qquad {\rm (F.11)}$$
where care should be taken in evaluating the double derivative if the coordinates are not Cartesian.
To compare this with the result (3.15), which was derived following a master formula due to Barvinsky & Vilkovisky, we start by writing $Z^{\mu\nu}$, given in (3.6), in terms of $\gamma$ and $\partial^\mu\phi\,\partial^\nu\phi$. In the regime where $\gamma \gg 1$, the smallest eigenvalue of $Z$ goes as $\lambda_{\rm min} \sim \gamma$, while the largest goes as $\lambda_{\rm max} \sim \gamma^3$. Using the criterion (3.15), derived from the four-dimensional one-loop effective action, we can in principle infer how such a condition translates explicitly in terms of the eigenvalues of $Z^{\mu\nu}$, including when there is a hierarchy between them. The contractions implied in the expression for the Ricci scalar in Eq. (3.12) show that the hierarchy of the eigenvalues only enters in a very peculiar way. A direct calculation shows that, at worst, the eigenvalues $\lambda_{\rm min}$ and $\lambda_{\rm max}$ need to satisfy the bound (F.13), where the right-hand side is symbolic. When $|\partial\phi| \sim \Lambda^2$, this implies
$$|\partial^2\phi| \ll \gamma^{-3}\,\Lambda^3\,, \qquad {\rm (F.14)}$$
which is precisely the same criterion as (F.11), found using the five-dimensional embedding picture. Finally, notice that in principle the generic criterion (3.15) could have been too restrictive for DBI, as it might have included contributions which would not have been generated had one followed a fully higher-dimensional description. The four-dimensional and the higher-dimensional theories have different fundamental degrees of freedom, so it is not surprising that the respective quantum corrections might differ. However, on a practical level, if we only keep track of logarithmic divergencies (as was done in part I of this paper), we have shown that the different perspective does not affect our results.
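The quoted eigenvalue scalings can be recovered from the generic $P(X)$ form of the kinetic operator (a sketch of our own; signs and factors of 2 depend on the signature conventions):

```latex
Z^{\mu\nu} \;\sim\; P_X\,\delta^{\mu\nu}
 \;+\; \frac{2P_{XX}}{\Lambda^4}\,\partial^\mu\phi\,\partial^\nu\phi\,,
\qquad
P_X = \frac{\gamma}{2}\,,\quad P_{XX} = \frac{\gamma^3}{4}
\quad\text{for}\quad P(X) = 1-\sqrt{1-X}\,,
```

so the directions transverse to $\partial^\mu\phi$ carry the eigenvalue $\lambda_{\rm min} \sim P_X \sim \gamma$, while the longitudinal direction carries $\lambda_{\rm max} \sim P_X + 2XP_{XX} = \gamma^3/2 \sim \gamma^3$, consistent with the DBI sound speed $c_s^2 = \lambda_{\rm min}/\lambda_{\rm max} = \gamma^{-2}$.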