Can Compactifications Solve the Cosmological Constant Problem?

Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant $\Lambda$ is much smaller than the Planck density and in fact accumulates at $\Lambda=0$. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain a $\Lambda$ that is small in Planck units in a toy model, but to explain why $\Lambda$ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To demonstrate this, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.


I. INTRODUCTION
A range of cosmological observations indicate that the current expansion rate of the universe is accelerating [1]. This is accommodated within general relativity by the introduction of the so-called cosmological constant Λ. Viewed as a source of vacuum energy density, it has an equation of state w = −1. By taking its energy density to be ∼ 70% of the current critical density, it leads to acceleration in a fashion that is beautifully compatible with current data [2,3]. On the one hand, this is another spectacular triumph for general relativity and particle physics, which suggests the appearance of vacuum energy. On the other hand, typical estimates for the value of the vacuum energy are many orders of magnitude larger than the observed value of Λ_obs ∼ (10⁻³ eV)⁴. This leads to the problem of why the observed cosmological constant is so small; for a review see Ref. [4].
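To get a feel for the size of this discrepancy, a rough order-of-magnitude check can be written as follows (the numerical values are round, illustrative choices, not precision inputs):

```python
import math

# Rough scales in eV (round, illustrative values)
M_pl = 1.22e28            # Planck mass ~ 1.22 x 10^19 GeV, expressed in eV
lam_obs = (1e-3) ** 4     # observed vacuum energy ~ (10^-3 eV)^4, in eV^4

planck_density = M_pl ** 4   # naive Planck-density estimate, in eV^4
orders = math.log10(planck_density / lam_obs)
# the mismatch is well over 100 orders of magnitude
```

This is the familiar statement that the naive Planck-density estimate overshoots the observed value by roughly 120 orders of magnitude.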
Many proposals have tried to address this problem, often involving radical modifications to the structure of gravity or quantum field theory. One class of recent proposals stays within the framework of ordinary field theory, but appeals to the existence of extra dimensions [5,6]. An appreciable vacuum energy is included in the higher-dimensional theory, along with various other sources of energy: fluxes and curvature (for earlier work on flux compactifications, see [7,8], and for some difficulties in achieving a positive cosmological constant in string compactifications, see [9,10]). It is then found that upon compactifying to 4 dimensions, the resulting lower-dimensional cosmological constant can be made arbitrarily small and has an accumulation point at Λ = 0. This appears to be a wonderful solution to the cosmological constant problem. At the same time, it is acknowledged that this solution comes at a price: it predicts the existence of additional arbitrarily light scalars, with masses of order Hubble.
In this paper, we show that the lightness of various scales completely undermines the whole purported solution. As we explain, the heart of the cosmological constant problem is to explain why Λ is so small despite the presence of various heavy scales (such as the Standard Model fields, and possibly heavier fields associated with unification and quantum gravity). Merely sending all mass scales to zero in a toy four-dimensional theory (in units of the four-dimensional Planck mass) misses the essential problem. Moreover, we show that in order for the cosmological constant to be small in these models, the fundamental Planck scale is also being sent to zero, thus removing any high-energy physics completely. However, the heart of the cosmological constant problem is to explain why Λ is small despite the existence of high energy physics, including heavy fields and a high fundamental scale. Related to this, we clarify some issues surrounding the problem in different setups, such as electromagnetism and gravity, and provide a general explanation as to why the moduli masses are naturally of order Hubble and how Λ is linked to the higher-dimensional fundamental scale.
Our paper is organized as follows: In Section II we describe two different notions of fine-tuning. In Section III we describe fine-tuning in two different models. In Section IV we discuss a class of compactification models. In Section V we compute the moduli masses and the fundamental Planck mass in this class of models. Finally, in Section VI we conclude. We work in units in which c = 1, but we will keep powers of ℏ to track classical versus quantum effects. We will write the "mass" couplings in the field theory as m, even though m in classical field theory is actually a frequency, and the mass of the quanta is ℏm.

II. TWO NOTIONS OF FINE-TUNING

A. Sharp Cutoff Analysis
To set the stage for our later argument, it is useful to study the vacuum energy in free theories. Of course the vacuum energy only has consequences when we include gravitation, so we mean "free" in the particle sector.
The vacuum energy receives a quantum contribution Λ_quant from a one-loop diagram. It is well known that this leads to the vacuum energy

Λ_quant = ± (g ℏ/2) ∫ d³k/(2π)³ ω_k,   (1)

where g is the number of degrees of freedom, + is for bosons, − is for fermions, and ω_k = √(k² + m²) (we allow for massive free particles). It follows that quantum corrections provide a quartic divergence Λ_quant ∼ ℏ m_UV⁴, and if we put m_UV = 1/√(ℏG) (the Planck frequency) then we have a Planck energy density. On the other hand, the total vacuum energy also receives a contribution from the "bare" or "classical" term Λ_bare, so that the total vacuum energy is

Λ = Λ_bare + Λ_quant.   (2)

Hence, in order for Λ to be much less than the cutoff density, an extreme fine-tuning between these two contributions is required. So it would appear that any model which can dynamically produce a very small Λ, especially if Λ can be made arbitrarily small, is a solution of the cosmological constant problem. Moreover, if we define the momentum integral with a UV cutoff m_UV and expand in powers of m/m_UV, we obtain

Λ_quant = ± g ℏ [ c₁ m_UV⁴ + c₂ m_UV² m² + c₃ m⁴ ln(m_UV/m) + … ],   (3)

where c₁,₂,₃ are O(1) numbers that depend on the choice of regularization. This is the most general expansion for a field at one loop. The first term provides the usual claim of a quartic contribution to the vacuum energy. It suggests that even if m = 0, there must be tremendous fine-tuning to cancel against Λ_bare in order for Λ to be small.
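The quartic sensitivity to the cutoff can be checked numerically. The following sketch (with g = ℏ = 1 and arbitrary illustrative cutoff values) integrates the one-loop energy density for a massless field and confirms that doubling m_UV multiplies the result by ∼ 2⁴:

```python
import math

def vacuum_energy(m_uv, m=0.0, steps=100_000):
    """Midpoint-rule estimate of (1/2) * Integral d^3k/(2 pi)^3 omega_k up to m_uv."""
    dk = m_uv / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk
        omega = math.sqrt(k * k + m * m)
        total += (k * k / (2.0 * math.pi ** 2)) * 0.5 * omega * dk
    return total

ratio = vacuum_energy(2.0) / vacuum_energy(1.0)
# quartic divergence: doubling the cutoff scales the result by ~ 2^4 = 16
```

For m = 0 the integral is exactly m_UV⁴/(16π²), so the ratio is 16, illustrating that the result is dominated by the highest momenta near the cutoff.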

B. Renormalization Group Analysis
The above analysis is highly suggestive that there is a quartic sensitivity to the cutoff, requiring significant fine-tuning to avoid a large Λ.
One could, however, focus on another notion of "fine-tuning". To explain this, we should recall that the only physical parameters of a theory are the renormalized couplings, rather than the bare couplings or quantum corrections, which are scheme dependent. Such couplings are defined at some renormalization scale and change according to the renormalization group. Including gravity, this applies to the physical cosmological constant Λ as well. We can write Λ = Λ(µ), where µ is some renormalization scale. Within the framework of local quantum field theory, one physical notion of fine-tuning is that there is a delicate cancellation among renormalized parameters in order to fit the data. In standard renormalization schemes, the quartic and quadratic divergences of eq. (3) can be absorbed by Λ_bare, while the logarithm, proportional to m⁴, is associated with an actual flow of the coupling. Further discussion of these issues includes Refs. [11,12].
In particular, as we flow from some high scale µ_H ≫ m down to some low scale µ_L ≪ m, there is a jump in the renormalized coupling from passing through a mass scale, of order ∼ ℏ m⁴. If we pass through several mass scales, denoted m_i, the change is roughly

∆Λ ∼ Σ_i ± ℏ m_i⁴,

where we suppress possible logarithmic and threshold effects. Hence, in order for Λ_obs ≈ Λ(µ_L) to be very small, there must be some exquisite cancellation between the renormalized coupling at a high scale Λ(µ_H) and the sums and differences of the various renormalized masses ∼ ℏ m_i⁴. Conversely, if one investigates theories that are built out of massless or extremely light particles (and no dynamically generated scales due to strong dynamics), then the flow of the renormalized Λ(µ) is small, and it does not require fine-tuning. We will return to this issue when considering a class of compactification models.
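To make the required cancellation concrete, here is a rough numerical estimate of how finely Λ(µ_H) must cancel against the Σᵢ ± mᵢ⁴ terms. The masses are round Standard Model values, and signs, degeneracy factors, and loop factors are ignored; only the order of magnitude matters:

```python
import math

# Illustrative particle masses in GeV (round values): top, Higgs, Z, W, tau
masses = [173.0, 125.0, 91.2, 80.4, 1.78]
lam_obs = (1e-12) ** 4     # observed ~ (10^-3 eV)^4 = (10^-12 GeV)^4, in GeV^4

jump = sum(m ** 4 for m in masses)   # scale of the sum of +/- m_i^4 terms
digits = math.log10(jump / lam_obs)  # decimal digits of cancellation required
```

The cancellation must hold to better than one part in 10⁵⁷ or so: this is the "exquisite cancellation" referred to above.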

C. Summary
In physical models, we generally study interesting theories with various heavy particles, and the challenge is to explain how the observed cosmological constant is small compared to the quartic power of some scale. From the point of view of the "sharp cutoff analysis", we compare Λ_obs to ℏ m_UV⁴, and from the point of view of the "renormalization group analysis", we compare Λ_obs to ℏ m_i⁴, where m_i is the heaviest mass scale. These two scales often do not differ too much anyhow, as heavy fields typically appear around the cutoff of an effective theory. If we take m_UV ∼ M_Pl, or we suppose there are particles whose mass is close to m_i ∼ M_Pl, then from either point of view we expect a Planck density for Λ. Moreover, there can be additional classical field contributions to the vacuum energy in interacting theories (for example, from scalar potentials or when gravity is included) that can be large too, but this again depends on the presence of high scales. Let us now further explore this in some important interacting theories.

III. FINE-TUNING IN TWO MODELS

A. Pure Electromagnetism
As a warm-up to the gravitational case, we start here by studying the problem of vacuum energy from photons. We consider the following interacting theory of only massless photons, minimally coupled to gravity, schematically of the form

S = ∫ d⁴x √−g [ −(1/4) F_µν F^µν + (F_µν F^µν)²/M⁴ + … ].   (7)

As shown in the previous section, introducing a sharp cutoff on the vacuum energy loop integral reveals a vacuum energy contribution Λ_quant ∼ ℏ m_UV⁴. However, if we focus on the renormalization group flow in a local Lorentz invariant theory, we see that there is no flow from massless fields, and the higher order interaction term does not change this conclusion (at least for energies well below the cutoff). Hence a theory of massless photons does not, strictly speaking, have any cosmological constant problem from the point of view of renormalization group flow.
However, the interaction term renders the theory nonrenormalizable. So one expects new physics to enter at some scale m_new satisfying m_new ≲ M/ℏ^{1/4}, to cure the problem of unitarity violation at frequencies above m_UV ∼ M/ℏ^{1/4}. We expect that the new physics can renormalize the vacuum energy. Dimensional analysis selects a unique form:

∆Λ ∼ ℏ m_new⁴.

Of course we know that this theory is UV completed by QED (modulo the Landau pole) with the introduction of the electron, with frequency m_e, which is parametrically smaller than the cutoff m_UV. Indeed the electron introduces a renormalization to the vacuum energy of the form ∆Λ ∼ ℏ m_e⁴, in accord with the above mentioned expectation.
Hence even though the action in eq. (7) does not by itself generate a large renormalization of the vacuum energy, there is a large contribution introduced by the physics associated with its UV completion. This leads to a vacuum energy renormalization that is ∼ 32 orders of magnitude larger than the observed value. Hence a solution to this cosmological constant problem is to invent a mechanism by which Λ ≪ ℏ m_e⁴ arises in a natural way in the theory with electrons.
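The quoted "∼ 32 orders of magnitude" can be reproduced with a quick estimate; here we include a conventional one-loop factor of 1/(16π²) (an assumption of this particular estimate, not stated in the text):

```python
import math

m_e = 5.11e5              # electron mass in eV
lam_obs = (1e-3) ** 4     # observed vacuum energy in eV^4

# Rough electron renormalization of the vacuum energy, with a
# conventional one-loop factor 1/(16 pi^2) included (an assumption here)
delta_lam = m_e ** 4 / (16.0 * math.pi ** 2)
orders = math.log10(delta_lam / lam_obs)
```

This lands at roughly 32–33 orders of magnitude above the observed value, consistent with the statement above.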

B. Pure Gravity
Let us now consider the case of pure Einstein gravity with the standard action

S = ∫ d⁴x √−g [ M_Pl² R − Λ + … ],

where M_Pl² ≡ 1/(16πG). Here we indicate a tower of higher derivative corrections by the dots. Let us expand around flat space by writing the metric as

g_µν = η_µν + h_µν/M_Pl.

Focusing on the curvature term, this leads to an action that is schematically given by

S ∼ ∫ d⁴x [ (∂h)² + h(∂h)²/M_Pl + h²(∂h)²/M_Pl² + … ].

The graviton self-interactions become strong at the Planck frequency ∼ 1/√(ℏG), so new physics is expected to enter at or below this scale. In any case, we expect that the new physics will introduce contributions that shift the vacuum energy. In this case we have at least two scales: (i) the scale that sets the new physics m_new (which might be many new scales), and (ii) (assuming the new physics permits a four-dimensional description in some regime) we also have Newton's constant G. This permits the following tower of dimensionally correct possibilities in increasing powers of ℏ:

∆Λ ∼ c₀ m_new² M_Pl² + c₁ ℏ m_new⁴ + c₂ ℏ² m_new⁶/M_Pl² + … .

The first term involves no powers of ℏ; it is a classical contribution from possible phase transitions, etc. It may or may not arise, depending on how the new physics interacts with gravity. The second term is the leading (1-loop) quantum contribution to the renormalization of Λ. Assuming new physics enters below the Planck scale, the higher order terms will be sub-dominant. The challenge in solving this cosmological constant problem is to find a mechanism in which Λ ≪ M_Pl² m_new², ℏ m_UV⁴, . . ..
It would be significant progress to have a dynamical mechanism wherein Λ is naturally much less than the leading few terms, say, even if it is not smaller than the higher order terms. On the other hand, the absolute worst case scenario is to have a mechanism in which Λ ∼ M_Pl² m_new² or Λ ≳ ℏ m_UV⁴; this would not represent any progress at all. One might think this was progress if Λ were small in Planck units, due to m_new and/or m_UV being small, but this is not the real cosmological constant problem. If no Planck scale energies are permitted in the theory, then comparing to the Planck scale is irrelevant. The problem is to be small in terms of the energy scales that arise from the particle sector and in terms of the fundamental cutoff. Since (i) we know that even conventional Standard Model particle physics gives masses m_new ∼ m_t ∼ m_H ∼ 100 GeV (which is "new" physics from the low energy pure gravity point of view, even though these degrees of freedom may not be relevant to unitarizing graviton scattering), (ii) it is plausible that there are many heavier particles, such as at some extra-dimension "radion" scale, the GUT scale m_new ∼ 10¹⁶ GeV, and perhaps even heavier particles still, and (iii) the fundamental cutoff m_UV must be correspondingly even larger, if one were to obtain Λ ∼ M_Pl² m_new², or Λ ≳ ℏ m_UV⁴, it would be catastrophically large. Indeed one would anticipate that any purported dynamical solution at least achieves Λ ≪ M_Pl² m_new², ℏ m_UV⁴. Shortly we will show that in a class of compactification models, Λ ∼ M_Pl² m_new² and Λ ≳ ℏ m_UV⁴ are generically obtained, which is indeed the worst case scenario and is therefore not a dynamical solution of the real problem.
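As a numerical illustration of why Λ ∼ M_Pl² m_new² is the worst case, take m_new at the electroweak scale (round numbers; the reduced Planck mass convention is an illustrative choice here):

```python
import math

M_pl = 2.4e18            # reduced Planck mass in GeV (illustrative convention)
m_new = 100.0            # electroweak-scale "new physics" in GeV
lam_obs = (1e-12) ** 4   # observed value ~ (10^-3 eV)^4, in GeV^4

worst_case = M_pl ** 2 * m_new ** 2     # Lambda ~ M_Pl^2 m_new^2
excess = math.log10(worst_case / lam_obs)
# even with only electroweak-scale new physics, the mismatch is ~ 10^89
```

Even with nothing heavier than the electroweak scale, this estimate overshoots the observed value by nearly 90 orders of magnitude.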

IV. COMPACTIFICATION MODELS
Consider the following D-dimensional action involving gravity and a collection of p-form field strengths, schematically of the form

S = ∫ d^D x √−G [ M_*^{D−2} R_D − Λ_D − Σ_a (1/2) F_{p(a)}² + … ],

where we have also included a higher-dimensional cosmological constant Λ_D, and M_* is the fundamental Planck mass. We now illustrate how the four-dimensional theory emerges by compactification.
Here we consider a product manifold of the form R⁴ ⊗ M^N, where M is a d-dimensional manifold with a metric h_ij. This means we are considering d̄ ≡ N d extra dimensions. We write the metric in the (schematic) form

ds² = e^{−d̄(Ψ(x)−Ψ₀)} g_µν(x) dx^µ dx^ν + Σ_a e^{2ψ_a(x)} h_ij dy^i dy^j,

where a is an index that runs from a = 1, . . . , N over each of the internal manifolds of dimension d. We have considered the simple case in which the higher-dimensional metric decomposes into a four-dimensional piece and a compact space with radion moduli ψ_a = ψ_a(x) that only depend on the large dimensions. Here µ, ν ∈ {0, 1, 2, 3} and x denotes the large-dimension co-ordinates, while i, j ∈ {1, . . . , d}. In the first term we have, without loss of generality, pulled out a factor of e^{−d̄(Ψ(x)−Ψ₀)} so that the four-dimensional action is immediately in the Einstein frame, where

Ψ(x) ≡ (1/N) Σ_a ψ_a(x),

and ψ_{0a} is the value of ψ_a at its stabilized value from compactification (with Ψ₀ the corresponding value of Ψ). When integrating the fluxes over the compact space, we assume the fluxes are only functions of the compact co-ordinates, apart from an overall volume dependence; the flux wrapping the compact space labelled (a) is proportional to the volume form of that space, with quantized total flux. We can now integrate over the compact space. This leads to 3 quantities that characterize its structure: the volume V_(a) = ∫ d^d y √h of each compact space, with total volume V = Π_a V_(a); the volume integral C_(a) = ∫ d^d y √h R_(a) of the compact space Ricci scalar R_(a); and the volume integral f_{p(a)} of the flux number that threads the compact space. Note that each of these 3 quantities is constant: independent of space and time.
Dropping boundary terms, we obtain the four-dimensional action

S = ∫ d⁴x √−g [ M_Pl² R + K_E − V_4 ],

where K_E and V_4 are the kinetic terms and the four-dimensional potential. The potential V_4 is a sum of exponentials of the ψ_a: a curvature term for each compact space, proportional to C_(a); a term proportional to Λ_D; and a flux term for each compact space, proportional to f_{p(a)}². Here the four-dimensional Planck mass M_Pl is related to the fundamental Planck mass M_* by

M_Pl² = M_*^{d̄+2} V.   (26)

The above action involves scalar fields ψ_a that are not canonically normalized. We can switch to a new set of fields φ_a that are canonically normalized, defined by a linear transformation of the ψ_a, chosen such that the minimum of the potential is at φ_a = 0. In terms of these canonically normalized fields the action takes on the canonical form

S = ∫ d⁴x √−g [ M_Pl² R − (1/2) Σ_a (∂φ_a)² − V(φ) ].   (29)

The potential function V for the canonically normalized fields is simply V = V_4, but expressed in the different variables.
For a positively curved compact space, C_(a) > 0, or for a negative higher-dimensional cosmological constant, Λ_D < 0, one or both of the first two types of terms in V are negative, and they can compete with the positive flux terms to lead to a stable vacuum solution, either AdS, Minkowski, or dS, depending on parameters.
In the simplest versions of these models, with only one internal manifold N = 1 and Λ D > 0 there can be dS vacua, but no accumulation of vacua as Λ → 0 + in the large flux limit. On the other hand, for Λ D ≤ 0 there is an accumulation of vacua with Λ → 0 − in the large flux limit.
More interestingly, for N ≥ 2 an accumulation point can arise for dS vacua as Λ → 0 + (as well as a much more dominant accumulation of vacua as Λ → 0 − ). This was shown in Ref. [5] in the case where the internal manifolds are spheres. This appears to be a beautiful solution of the cosmological constant problem.
In the next section we will describe a general property of their solutions regarding moduli masses.

V. MODULI MASSES AND THE FUNDAMENTAL PLANCK MASS

A. Moduli Scale
In this class of compactifications, and even for more general classes, the potential energy is a sum of terms involving exponentials of the radion fields φ_a, as seen in eq. (23). In a general fashion, we can write the potential as

V(φ) = Σ_q V_q exp( Σ_a β_qa φ_a / M_Pl ).   (30)

Comparing to eq. (23), it is straightforward to read off the values of V_q and β_qa in this particular class of models.
What is important to note is that the coefficients in the exponents β_qa are O(1). The masses of the moduli are given by the eigenvalues of the Hessian matrix of the potential at the minimum φ_a = 0. This is

H_ab = ∂²V/∂φ_a ∂φ_b |_{φ=0} = Σ_q V_q β_qa β_qb / M_Pl².   (31)

Notice the eigenvalues cannot be much larger than the elements of H_ab, and so we will estimate the typical masses squared m_a² to be of order the typical values of H_ab. On the other hand, the cosmological constant is given by

Λ = V|_{φ=0} = Σ_q V_q.

Then, assuming the potential does not have any accidental cancellations at its minimum, and recalling that β_qa = O(1), we can express Λ in terms of m_a² by comparing (30) to (31), giving

Λ ∼ M_Pl² m_a².

Then using the Friedmann equation, we have m_a ∼ H quite generally, where H is the Hubble parameter. So we see that generically the moduli mass is related to the Hubble scale, which is reasonable on dimensional grounds. Typically, most vacua are AdS. One can restrict attention to only dS vacua (as was the case in Refs. [5,6]). These dS vacua require a fine-tuning to achieve a special cancellation in the potential between the large V_Λ contribution and a curvature contribution, leaving Λ especially small (see Section V B 2 for more details). However, one can show that a typical dS vacuum has a corresponding special cancellation in the potential, leaving Λ ∼ M_Pl² m_a² still valid for light moduli (although some moduli can be heavier). In more complicated models, even if one were to obtain Λ ≪ M_Pl² m_a², there is no evidence that Λ ≪ ℏ m_a⁴ can be obtained within this framework.
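The relation Λ ∼ M_Pl² m_a² can be illustrated with a toy one-modulus potential of the general exponential form discussed above. The coefficients below are arbitrary illustrative choices (one negative, curvature-like term and one positive, flux-like term); locating the minimum numerically confirms m² ∼ |Λ|/M_Pl² up to the expected O(1) factor β₁β₂:

```python
import math

M_pl = 1.0   # work in four-dimensional Planck units

# Toy single-modulus potential: V(phi) = sum_q V_q exp(-beta_q phi/M_pl),
# with O(1) exponents; V_q and beta_q are arbitrary illustrative numbers
V_q  = [-1.0, 0.7]
beta = [2.0, 3.0]

def V(phi):
    return sum(v * math.exp(-b * phi / M_pl) for v, b in zip(V_q, beta))

# locate the minimum by a crude grid scan
phi_min = min((V(0.001 * i), 0.001 * i) for i in range(-2000, 6000))[1]

# moduli mass^2 from a finite-difference second derivative
h = 1e-4
m2 = (V(phi_min + h) - 2 * V(phi_min) + V(phi_min - h)) / h ** 2

lam = V(phi_min)                      # cosmological constant at the minimum
ratio = m2 / (abs(lam) / M_pl ** 2)   # expect an O(1) number ~ beta_1 * beta_2
```

Here the minimum is AdS (lam < 0), as is typical, and the ratio comes out close to β₁β₂ = 6, an O(1) number: exactly the generic situation described above, which via the Friedmann equation gives m_a ∼ H.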
Hence from the point of view of the renormalization group flow of Λ, we see that these models do not produce a Λ that is smaller than the typical expectation from renormalization. In the next section, we will study whether Λ is much smaller than the estimates based on a sharp cutoff.

B. Fundamental Scale

1. AdS Vacua (generic)
In this class of models, the higher-dimensional cosmological constant Λ_D plays a very important role. In most compactifications, it provides an O(1) contribution to the four-dimensional vacuum energy. As shown in [6], most of these vacua are AdS. For these vacua the four-dimensional vacuum energy is set by the Λ_D contribution to the potential at its minimum. Now suppose we parameterize the higher-dimensional cosmological constant as

Λ_D = λ_D ℏ M_*^D.

So if λ_D is chosen to be λ_D = O(1), then we have a "naturally" large value for the higher-dimensional cosmological constant, according to the "sharp cutoff analysis" of Section II A. Now, on the one hand, we can eliminate M_* in the expression for Λ by using eq. (26), leading to

Λ ∼ −λ_D ℏ M_Pl² (M_Pl²/V)^{2/(d̄+2)},   (36)

where d̄ = N d. Since these are large volume compactifications, in the large flux limit we see that Λ → 0⁻, which appears to solve the problem of why the cosmological constant is small (although this is negative). On the other hand, we can eliminate V and express Λ in terms of M_*, leading to

Λ ∼ −λ_D ℏ M_*² M_Pl².

For λ_D not too small, the magnitude of this Λ is much larger than even the "natural" value of ∼ M_*⁴ (in units of ℏ), since M_* ≪ M_Pl in the large volume limit. Hence this clearly does not address the cosmological constant problem. We see that the only reason Λ → 0⁻ is because M_* → 0, which removes all high energy physics trivially.
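The statement that M_* → 0 in the large-volume limit follows directly from the Planck-mass relation. Assuming the standard Kaluza-Klein form M_Pl² = M_*^{d̄+2} V (as in eq. (26); d̄ = 6 below is an arbitrary illustrative choice), a quick check:

```python
# Assuming the standard Kaluza-Klein relation M_Pl^2 = M_*^(dbar+2) * V,
# the fundamental scale M_* falls as the compactification volume V grows.
dbar = 6          # illustrative number of extra dimensions
M_pl = 1.0        # four-dimensional Planck mass (units where M_Pl = 1)

M_stars = [(M_pl ** 2 / V) ** (1.0 / (dbar + 2)) for V in (1e2, 1e5, 1e8)]
# M_* decreases monotonically with V: high-energy physics is pushed to zero
```

Holding M_Pl fixed, every decade of volume growth pushes the fundamental scale lower, so the "small Λ" of the large-volume limit is purchased by removing the high scale itself.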

2. dS Vacua (non-generic)
Alternatively, one can introduce very special choices of the flux parameters, so as to fine-tune away such huge contributions to the vacuum energy and allow for dS vacua. To do this, we need a hierarchy among the internal radii, which leads to a hierarchy among the curvature contributions V_C(a). There are two important possibilities. In the first case (i), the curvature contribution Σ_{a=2}^N V_C(a) is tuned to cancel against the vacuum energy contribution V_Λ. The residual four-dimensional vacuum energy can then be estimated by the residual curvature contribution of the first space, ∼ V_C(1). From eq. (19), the curvature parameter C_(1) scales with the volume roughly as C_(1) ∼ V_(1)^{(d−2)/d}. Expressing Λ in terms of M_Pl and V_(1), one finds that the vacuum energy is even smaller than previously in eq. (36), for large volumes V_(1). This is the result of fine-tuning the leading contributions to vanish, and allows Λ → 0⁺ more rapidly, which appears to solve the problem of why the cosmological constant is small and positive. However, we can again eliminate V_(1) and express the result in terms of M_*, to find

Λ ∼ ℏ M_*^{2+4/d} M_Pl^{2−4/d}.

We note that for curvature to be present, we obviously need d ≥ 2. So Λ ≳ M_*⁴ (in units of ℏ) and is bounded by Λ ≲ M_*² M_Pl² for high d.
In the second case (ii), a similar estimate of the residual vacuum energy applies, using the fact that the V_(a) are all similar for a = 2, . . . , N in this case. This again says that the vacuum energy tends to Λ → 0⁺ in the large volume limit. We now eliminate the volume dependence to express the result in terms of M_*, as we did in case (i), and find an analogous scaling, now depending on both d and N. Since we need d ≥ 2 for curvature to be present and N ≥ 2 for this cancellation to take place, we again have Λ ≳ M_*⁴ (in units of ℏ), bounded by Λ ≲ M_*² M_Pl² for high d or N. So in both cases (i) and (ii), despite having fine-tuned away the leading term to produce dS vacua, the resulting cosmological constant is still not smaller than the estimate based on a simple cutoff.

VI. CONCLUSIONS
Hence even though there are interesting compactification models [5,6] in which the four-dimensional cosmological constant has an accumulation point at Λ → 0, it does so only insofar as the mass scales of fundamental physics m_new, m_UV → 0. Generally in these models, Λ scales as some power of the product of the moduli mass and the Planck mass appearing in the four-dimensional theory, or, alternatively, as a power of the fundamental Planck scale. This is essentially the worst case scenario, both from the renormalization group point of view and from the sharp cutoff point of view. This does not address the real cosmological constant problem, wherein we need to explain how Λ is incredibly small despite the presence of high scales of physics.
A general way to see the problem is the following: From the low energy four-dimensional point of view, it cannot matter that there are extra dimensions, or otherwise, in the UV. Effective field theory says that these effects cannot naturally reach down and remove the already large contributions to vacuum energy from known Standard Model physics. In the above toy models, an attempt to do this comes from having the new "UV" physics scale simply inserted at fantastically low energies, which misses the real problem.
Returning to the structure of eq. (30), we see that the only possibility would be to allow the potential V to have various accidental cancellations at its minimum Λ, while maintaining large mass scales m_φ, M_*. This could be achieved in some landscape framework with many fluxes and an exponentially large number of vacua, and is a conceivable solution [13,14]. Other directions to address the problem could involve radical alterations to local quantum field theory.