Spacetime Average Density (SAD) Cosmological Measures

The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size, so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), which obtain a finite number of observation occurrences by using properties of the Spacetime Average Density of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.

A solution to the measure problem should not only be able to give normalized probabilities for observations (that is, normalized so that the sum over all possible observations is unity, not just the sum over all the possible observations of one observer at one time) but also make the normalized probabilities of our observations not too small. One can take the normalized probability of one's observation as given by some theory as the likelihood of that theory. (Here a theory includes not only the quantum state but also the rules for getting the probabilities of observations from it.) Then if one weights the likelihoods of different theories by the prior probabilities one assigns to them and normalizes the resulting product, one gets the posterior probabilities of the theories. One would like to find theories, including their solutions to the measure problem, that give high posterior probabilities.
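In symbols (a standard Bayesian restatement of the procedure just described, with notation chosen here for concreteness), the posterior probability of theory $T_i$ given one's observation $O$ is

$$P(T_i \mid O) = \frac{P(O \mid T_i)\, P(T_i)}{\sum_{i'} P(O \mid T_{i'})\, P(T_{i'})},$$

where the likelihood $P(O \mid T_i)$ is the normalized probability that the theory $T_i$ assigns to the observation $O$, and $P(T_i)$ is the prior probability assigned to the theory.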
A major threat to getting high posterior probabilities for theories is the possibility that they may predict that observations are dominated by those of Boltzmann brains that arise from thermal and/or vacuum fluctuations [246,247,248,85,249,106,110,112,114,115,117,118,122,250,251,130,252,253,138,139,145,146,254,255,152,156,256,189,196,245]. Most such Boltzmann brain observations seem likely to be much more disordered than ours, so if the probabilities of our ordered observations are diluted by Boltzmann brain observations in theories in which they dominate the probabilities, that would greatly reduce the likelihood of such theories. Therefore, one would like to find theories with solutions to the measure problem that suppress Boltzmann brains if this can be done without too great a cost in complexity that would tend to suppress the prior probabilities assigned to such theories.
Most proposed solutions to the measure problem of cosmology tend to lean toward one or the other of two extremes. Some, particularly those proposed by Hartle, Hawking, Hertog, and/or Srednicki [33,55,57,68,130,141,143,149,151,170,171,180,184,192,212,229,237], which often apply the consistent histories or decohering histories formalism [257,258,259,260,261,262,263,264,265,266,267,268,269,270] to the Hartle-Hawking no-boundary proposal for the quantum state of the universe [271], tend to suggest that the measure is determined nearly uniquely by the quantum state (at least if a fairly-unique typicality assumption is made [130,171,184]). Others, particularly those proposed in the large fraction of papers cited above for the measure problem that are focused on eternal inflation, tend to suggest that the quantum state is mostly irrelevant and that the results depend mainly on how the measure is chosen in the asymptotic future of an eternally inflating spacetime. Here I shall steer a middle course and suggest that both the quantum state and the measure are crucial, further suggesting that the measure is dominated by observations not too late in the spacetime.
I am sceptical of the extreme view that the measure is determined uniquely (or nearly uniquely) by the quantum state, partly because of my demonstrations of the inadequacy of Born's rule [158,168,173,183] when it is interpreted to imply that the probabilities of observations are the expectation values of projection operators, and mainly because of the logical freedom of what the operators are whose expectation values give the relative probabilities of observations in what I regard as the simplest formalism for connecting observations to the quantum state [14,15,16,17,52,107,200]. It also appears to me that the particular assumptions used in the papers of Hartle, Hawking, Hertog, and/or Srednicki cited above, which tend to give nearly equal weights to observations that almost certainly occur at least once in the spacetime histories that dominate the probabilities for histories (which would be a vast range of observations if the histories whose probabilities dominate have eternal inflation and so become so large that the probability of at least one occurrence of even very rare observations would be unity or nearly unity), would lead to the normalized probability for our observations to be extremely low (diluted by the vast number of other observations of nearly equal probabilities), therefore giving extremely low likelihoods for theories making these assumptions. Thus I believe that, unless one by fiat assigns nearly all the prior probability to such theories, their posterior probabilities will be very small, much less than better theories that use more sophisticated measures.
I am also sceptical of the opposite extreme, the view that the probabilities of observations are essentially independent of the quantum state and only depend upon the dynamics of eternal inflation. It is logically possible that this is the case, say by having the probabilities of observations given by the expectation values of the identity operator multiplied by coefficients that will then be the probabilities of the observations for any normalized quantum state [175]. But then our observations of apparent quantum effects would just be delusions, since the observations would not depend upon the quantum state. It seems to me much more plausible that our observations appear to depend upon the quantum state because indeed they do depend upon the quantum state. Now a less-extreme view would be that whether eternal inflation occurs depends upon the quantum state, but that if the quantum state is such that eternal inflation does occur, the probabilities of observations do not depend upon further details of the quantum state. This is more nearly plausible, but I find it also a rather implausible view. If the relative probabilities of observations are given by the expectation values of positive operators that are not just multiples of the identity operator, they would generically be changed by generic changes in the quantum state. That is, it seems very hard to have the relative probabilities insensitive to generic changes in the quantum state if these relative probabilities are nontrivial linear functionals (i.e., not just the expectation values of multiples of the identity operator) of the quantum state, which to me appears to be the simplest possibility [14,15,16,17,52,107,200], though I do not claim to see any logical necessity against nonlinear functionals [168] for the relative probabilities.
If the gross asymptotic behavior of an eternally inflating universe is insensitive to the details of the quantum state (say other than requiring that the state be within some open set), then I would think it implausible that the relative probabilities of observations would depend only on the gross asymptotic behavior of the universe. Therefore, I do not favor measures that have temporal cutoffs that are eventually taken to infinity and have the property that for a very large finite cutoff they depend mainly on the properties of the spacetime at very large times (e.g., times near the cutoff). If there is a time-dependent weighting to the measure, I suspect that it should not depend mainly on the asymptotic behavior. In particular, I am sceptical of the specific form of "Assumption 3. Typicality." of Freivogel [204] that "we are equally likely to be anywhere consistent with our data . . . [so that with] a finite probability for eternal inflation, which results in an infinite number of observations, . . . we can ignore any finite number of observations." In a footnote to this statement, Freivogel admits, "This conclusion relies on an assumption about how to implement the typicality assumption when there is a probability distribution over how many observations occur [183]." In this paper I shall reject this assumption, which Freivogel notes [204] that I have called "observational averaging" [183], and instead investigate non-uniform measures over spacetimes that suppress the asymptotic behavior.
As an example of what I mean, in [157] (see also [158,168,173,214] for further discussion and motivation) I proposed volume averaging instead of volume weighting to avoid divergences in the measure of Boltzmann brain observations on spatial hypersurfaces as they expand to become infinitely large. However, summing over all hypersurfaces still gave a divergence if that were done by a uniform integral over proper time $t$ and if indeed the proper time goes to infinity. One could make this integral finite by cutting it off at some finite upper bound to the proper time, say $t_*$, but then as $t_*$ is taken to infinity, asymptotically half of the integral would be given by times within a factor of two of the temporal cutoff $t_*$. Therefore, as $t_*$ is taken to infinity, the relative probabilities will be determined by the asymptotic behavior of the spacetime. For example, if it were an asymptotically de Sitter spacetime that does not have bubble nucleation to new hot big bang regions that lead to a sufficiently large number of ordinary observers, the relative probabilities will apparently be dominated by Boltzmann brains in the asymptotic de Sitter spacetime. Even if de Sitter spacetime keeps nucleating new big bangs at a sufficient rate for ordinary observers produced by these big bangs to dominate over Boltzmann brains in the expanding regions that remain asymptotically de Sitter, if a transition occurs to Minkowski spacetime that cannot nucleate new big bangs, and if Boltzmann brains can indeed form in the vacuum state in Minkowski spacetime [85,106,112,115,118,138,145,152,189,196,245], they will eventually dominate over ordinary observers if the weighting is uniform over proper time up to some cutoff $t_*$ that is taken to infinity. Therefore, in [196] I proposed Agnesi weighting, integrating over $dt/(1+t^2)$ (with the proper time $t$ measured in Planck units) rather than over $dt$, the uniform integral over proper time. In this case the measure will be dominated by finite times even without a cutoff. Alternatively, if one did continue to use a cutoff $t_*$, the range of times which dominates the integral would not grow indefinitely as $t_*$ is taken to infinity but instead would remain at fixed finite times (assuming a measure on hypersurfaces that does not diverge as the hypersurfaces become larger and larger).
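To make the convergence explicit (an elementary check, added here for concreteness): the Agnesi weighting has the finite total measure

$$\int_0^\infty \frac{dt}{1+t^2} = \frac{\pi}{2},$$

and the fraction of this measure at times later than $t_1$ is $1 - (2/\pi)\arctan t_1 \approx 2/(\pi t_1)$ for $t_1 \gg 1$, so late times contribute negligibly even without any cutoff; by contrast, the uniform measure up to a cutoff $t_*$ always has half of its weight in the interval $[t_*/2, t_*]$, which recedes to infinity as the cutoff does.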
When Agnesi weighting [196] is combined with volume averaging [157], it appears to be statistically consistent with all observations and seems to give much higher likelihoods than measures using the approaches of Hartle, Hawking, Hertog, and/or Srednicki. It does not require the unproven hypothesis that bubble nucleation rates for new big bangs are higher than Boltzmann brain nucleation rates [159,110,160,156,204], as the most popular eternal inflation measures do [204]. It also does not lead to measures dominated by observations of a negative cosmological constant [166,176,204], contrary to what we observe. Therefore, for fitting observations without needing to invoke unproven hypotheses, it seems to be the best measure proposed so far.
On the other hand, Agnesi weighting is admittedly quite ad hoc, so there is no obvious reason why it should be right. Ideally one would like to find a measure that is more compellingly elegant and simple and which also gives high likelihoods for theories using it and also having elegant and simple quantum states. However, since none of us have found such a measure, it may be worthwhile to investigate other alternatives to Agnesi weighting.

Spacetime Average Density (SAD) Measures
Here I wish to propose new solutions to the measure problem with a weighted distribution over variable, rather than fixed, proper-time cutoffs, depending on the spacetime average density of observation occurrences up to the cutoff. The ad hoc weighting function $dt/(1+t^2)$ of Agnesi weighting will be eliminated, though at the cost of two different rather ad hoc algorithms for constructing a weighting over proper time to damp the late-time contribution to the measure for observations. Let me use the index $i$ to denote the theory $T_i$, which I take to include not only the quantum state of the universe but also the rules for getting the probabilities of the observations from the quantum state. I shall assume that the quantum state given by $i$ gives, as the expectation value of some positive operator depending upon the spacetime and perhaps also upon the theory, a relative probability distribution or measure $\mu_{ij}$ for different quasiclassical inextendible spacetimes $S_j$, labeled by the index $j$, each of which has definite occurrences of the observations $O_k$, labeled by the index $k$, occurring within the spacetime at definite location regions that I shall assume are much smaller than the spacetime itself and so can be idealized to be at points, with an observation occurrence understood in the sense of Sensible Quantum Mechanics [14,15,16,17,107] or Mindless Sensationalism [52,200].
I shall also assume that each such spacetime $S_j$ with positive measure $\mu_{ij}$ in the theories $T_i$ that I shall be considering is globally hyperbolic with compact Cauchy surfaces, with a preferred beginning or bounce hypersurface from which a proper time $t$ can be measured. Let $R_{jt}$ be the region of the spacetime $S_j$ up to proper time $t$ after this preferred hypersurface, let $V_j(t)$ be the four-volume of $R_{jt}$, and let $N_{jk}(t)$ be the number of occurrences of the observation $O_k$ within $R_{jt}$. Then the Spacetime Average Density of occurrences of the observation $O_k$ within $R_{jt}$ is

$$\bar{n}_{jk}(t) = \frac{N_{jk}(t)}{V_j(t)}. \tag{1}$$

The sum of these over all observation types $O_k$ is the total Spacetime Average Density of all observation occurrences in the spacetime region $R_{jt}$,

$$\bar{n}_j(t) = \sum_k \bar{n}_{jk}(t) = \frac{N_j(t)}{V_j(t)}, \tag{2}$$

where $N_j(t) = \sum_k N_{jk}(t)$ is the total number of observation occurrences within $R_{jt}$. (A fixed finite cutoff time $t$ would already give a finite measure, but it would be hard to choose it in a non-ad-hoc way that works even for states such as the Hartle-Hawking no-boundary proposal [271], which seems to predict mostly nearly empty de Sitter spacetime that would apparently be dominated by Boltzmann brains even for times much shorter than the time needed for Boltzmann brains to dominate in a universe that starts with a hot big bang [112].) However, I do not want to introduce some fixed parameter value for what $t$ is for the spacetime regions $R_{jt}$ to be used for the Spacetime Average Densities of the various observations. Instead, I shall seek a measure to be given in terms of an auxiliary function $f_{ij}(t)$, determined both by the theory $T_i$ and by the spacetime $S_j$ existing within the theory with quantum measure $\mu_{ij}$, which increases monotonically from 0 to 1 as $t$ ranges from 0 to $\infty$ within the spacetime $S_j$. (If $t$ runs only from 0 to $t_j < \infty$ for some spacetime $S_j$, I shall require that $f_{ij}(t)$ increase monotonically from 0 to 1 as $t$ increases from 0 to $t_j$ and then stay at 1 for all values of $t$ greater than $t_j$ that do not actually occur within the spacetime $S_j$, so that for simplicity I can take $t$ running from 0 to $\infty$ for each spacetime.) I shall then assume that equal ranges of $f_{ij}(t)$ contribute equally to the measure in choosing the value of $t$ used to cut off the spacetime.
First I shall explain more explicitly how to use $f_{ij}(t)$ to get the measure, and then I shall postulate different ways (labeled by the index $i$ in the theory $T_i$ that includes not only the quantum measures $\mu_{ij}$ for the different spacetimes $S_j$ but also the rules for converting the quantum state to observational probabilities) to get the auxiliary function $f_{ij}(t)$ from the SAD function $\bar{n}_j(t)$ for the spacetime $S_j$. In particular, I shall propose that the weighted Spacetime Average Density for the occurrences of the observation $O_k$ in theory $T_i$ and in spacetime $S_j$ with auxiliary function $f_{ij}(t)$ is

$$\bar{n}_{ijk} = \int_0^\infty \bar{n}_{jk}(t)\, \frac{df_{ij}}{dt}\, dt. \tag{3}$$

The weighted Spacetime Average Density in theory $T_i$ and spacetime $S_j$ for all observations $O_k$ is the sum of this over the $k$ that labels the observations:

$$\bar{n}_{ij} = \sum_k \bar{n}_{ijk} = \int_0^\infty \bar{n}_j(t)\, \frac{df_{ij}}{dt}\, dt. \tag{4}$$

Next, to include the quantum measure $\mu_{ij}$ that the theory $T_i$ assigns to the spacetime $S_j$, I propose that the unnormalized measure or relative probability $p_{ik}$ in theory $T_i$ of the observation $O_k$ is the sum of the weighted Spacetime Average Densities further weighted by the quantum measures:

$$p_{ik} = \sum_j \mu_{ij}\, \bar{n}_{ijk}. \tag{5}$$

Finally, dividing by the normalization factor $\sum_{k'} p_{ik'}$ gives the normalized probability in the theory $T_i$ of the observation $O_k$ as

$$P_{ik} = \frac{p_{ik}}{\sum_{k'} p_{ik'}}. \tag{6}$$

Of course, it remains to be said what different theories $T_i$ give for the way to get the auxiliary function $f_{ij}(t)$ from the SAD function $\bar{n}_j(t)$ for the spacetime $S_j$.
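As a concreteness check, here is a minimal numerical sketch of my own (not part of the proposal itself) of the pipeline of Eqs. (3)-(6): given toy quantum measures $\mu_{ij}$, sampled SAD functions $\bar{n}_{jk}(t)$, and a monotonic auxiliary function $f_{ij}(t)$ (a neutral arctan choice here, not the MAD or BAD prescriptions defined in the following sections), it computes the normalized probabilities $P_{ik}$. All numerical inputs are invented for illustration.

```python
# Minimal sketch of Eqs. (3)-(6): toy inputs, two spacetimes S_j, two
# observation types O_k.  All numbers here are invented for illustration.
import numpy as np

t = np.linspace(0.0, 100.0, 2001)               # proper-time grid (toy units)
mu = np.array([0.7, 0.3])                       # quantum measures mu_ij

# SAD functions n_jk(t), shape (spacetimes j, observation types k, times):
# k=0 ("ordinary") rises and falls; k=1 ("Boltzmann brain") is a tiny constant.
eps = 1e-3
bump = t / (1.0 + t**2)
n_jk = np.stack([
    np.stack([bump,       eps * np.ones_like(t)]),   # S_0
    np.stack([0.5 * bump, eps * np.ones_like(t)]),   # S_1
])

# Auxiliary function f_ij(t): any monotonic increase from 0 to 1; here a
# normalized arctan, NOT one of the MAD/BAD prescriptions of the next sections.
f = np.arctan(t)
f /= f[-1]

# Eq. (3): weighted SAD  n_ijk = integral of n_jk(t) df_ij(t)  (midpoint rule)
df = np.diff(f)
n_mid = 0.5 * (n_jk[:, :, 1:] + n_jk[:, :, :-1])
n_ijk = (n_mid * df).sum(axis=2)                # shape (j, k)

# Eq. (5): p_ik = sum_j mu_ij n_ijk;  Eq. (6): normalize over k.
p_k = (mu[:, None] * n_ijk).sum(axis=0)
P_k = p_k / p_k.sum()
print("Normalized probabilities P_ik:", P_k)    # ordinary vs Boltzmann brain
```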

Maximal Average Density (MAD) Measure
First, consider theories $T_i$ that employ what I shall call the Maximal Average Density (MAD) measure. These make use of the time $t_{*j}$ that is the value of $t$ that gives the global maximum of the SAD function of $t$ for the spacetime $S_j$, $\bar{n}_j(t) = N_j(t)/V_j(t)$, the Spacetime Average Density of the total occurrences of all observations up to proper time $t$ in the spacetime $S_j$. That is, $\bar{n}_j(t) \le \bar{n}_j(t_{*j})$ for all $t$ in the spacetime $S_j$. (For simplicity, I shall assume that there is zero quantum measure $\mu_{ij}$ for spacetimes with more than one value of $t_{*j}$ at which $\bar{n}_j(t)$ attains its global maximum, so that $\bar{n}_j(t) < \bar{n}_j(t_{*j})$ for all $t \neq t_{*j}$ in all spacetimes $S_j$ with positive measures.) In particular, the Maximal Average Density or MAD measure is the one in which the auxiliary function is the Heaviside step function,

$$f_{ij}(t) = \theta(t - t_{*j}),$$

being 0 for times $t$ before the global maximum of $\bar{n}_j(t)$ and being 1 for times after this global maximum. Then $df_{ij}/dt = \delta(t - t_{*j})$, a Dirac delta function centered on the global maximum of $\bar{n}_j(t)$ for the spacetime $S_j$, so Eq. (3) gives $\bar{n}_{ijk} = \bar{n}_{jk}(t_{*j})$.
This then leads to the normalized probability for the observation $O_k$ given a MAD theory $T_i$ as being

$$P_{ik} = \frac{\sum_j \mu_{ij}\, \bar{n}_{jk}(t_{*j})}{\sum_j \mu_{ij}\, \bar{n}_j(t_{*j})}. \tag{7}$$

Of course, there is not a unique MAD theory, since for this MAD auxiliary function $f_{ij}(t) = \theta(t - t_{*j})$ there are many different MAD theories giving different quantum states and hence different quantum measures $\mu_{ij}$ for the spacetimes $S_j$.
One might suppose that a typical inextendible spacetime which gives a large contribution to the probability $P_{ik}$ in a plausible theory $T_i$ of a typical human observation $O_k$ would have something like a big bang at small $t$ (though perhaps actually a bounce at $t = 0$ [172]), a relatively low density of observation occurrences until a period around $t \sim t_0$ when planets heated by stars exist and have a relatively high density of observation occurrences produced by life on the warm planets (compared with that at any greatly different time), and then a density of observation occurrences that drops drastically as stars burn out and planets freeze, until the density of observation occurrences asymptotes to some very tiny but still positive spacetime density of Boltzmann brain observations. In this case it is plausible to expect that $\bar{n}_j(t)$ will start very small for small $t$ when the spacetime $S_j$ is too hot for life (and when life has not had much time to evolve), rise to a maximum at a time $t \sim t_0$ when planetary life prevails, and then drop to a very small positive asymptotic constant when planetary life dies out and Boltzmann brains dominate.
For example, a mnemonic $k = 0$ ΛCDM Friedmann-Lemaître-Robertson-Walker model [276] with $\Lambda = 3H_\infty^2 \approx (10\ \mathrm{Gyr})^{-2} \approx$ ten square attohertz $\approx 3\pi/(5^3\, 2^{400})$ Planck units (and so fairly accurately applicable to our universe only after the end of radiation dominance but here used for all times) gives

$$a(t) \propto \sinh^{2/3}\!\left(\frac{3}{2} H_\infty t\right), \tag{8}$$

so that the four-volume of the region $R_{jt}$ may be written as $V_j(t) = V_3\, x(t)$, with $V_3$ the present 3-volume (unknown and perhaps very large because the universe appears to extend far beyond what we can see of it, though here I shall assume that it is finite) and

$$x(t) = H_\infty \int_0^t \left[\frac{a(t')}{a(t_0)}\right]^3 dt' = \frac{\sinh(3 H_\infty t) - 3 H_\infty t}{6 \sinh^2\!\left(\frac{3}{2} H_\infty t_0\right)}, \tag{9}$$

the four-volume per unit present three-volume in units of the asymptotic Hubble time $H_\infty^{-1}$, which grows monotonically from $x(0) = 0$ and diverges as $t \to \infty$. A very crude toy model for the SAD function $\bar{n}_j(t)$ for the Spacetime Average Density of all observations might be

$$\bar{n}_j(t) = A\left[\frac{x(t)}{1 + x^2(t)} + \epsilon\right], \tag{10}$$

where $A$ is an unknown constant that parametrizes the peak density of ordinary observations, represented by the first term that rises and then falls, and where $A\epsilon$ is the much, much smaller density of Boltzmann brain observations that is crudely assumed to be constant. (For a Boltzmann brain that is the vacuum fluctuation of a human-sized brain, one might expect $\epsilon \sim 10^{-10^{42}}$ [112].) The total number of ordinary observation occurrences in this crude model is finite, $A V_3$, but the total number of Boltzmann brain observations grows linearly with the 4-volume $V_j(t) = V_3\, x(t)$ and hence diverges if indeed $t$ and $x(t)$ go to infinity as assumed.
The SAD function rises from $A\epsilon$ at $t = 0$ and $x = 0$ (probably an overestimate, as I would suspect that when the universe is extremely dense, Boltzmann brain production would be suppressed, but since $\epsilon \ll 1$ I shall ignore this tiny error, no doubt much smaller than the error of the crude time-dependent term for the Spacetime Average Density of ordinary observations) monotonically to $A(0.5 + \epsilon)$ at $t = t_{*j}$, which gives $x_{*j} \equiv x(t_{*j}) = 1$, and then drops monotonically back to $A\epsilon$ at $t = \infty$, which gives $x = \infty$.
The first term in the sum corresponds to ordinary observations, and the second corresponds to Boltzmann brain observations. If all the different spacetimes $S_j$ that have positive quantum measure $\mu_{ij}$ had SAD functions $\bar{n}_j(t)$ that were proportional to this one (with the same ratio of ordinary and Boltzmann brain observations), then the total normalized probability for Boltzmann brain observations would be only $\epsilon/(0.5 + \epsilon) \approx 2\epsilon \ll 1$. Thus the MAD measure would solve the Boltzmann brain problem, even for models in which Boltzmann brain production is faster than the production of new bubble universes and even for models that asymptote to Minkowski spacetime with its infinite spacetime volume and presumed positive density per 4-volume of Boltzmann brain observations, which would cause Boltzmann brain domination in most other proposed solutions to the measure problem.
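A quick numerical check of this arithmetic (my own, with illustrative parameter values): the ordinary term $x/(1+x^2)$ of the toy SAD function peaks at the value $0.5$ at $x = 1$, so the Boltzmann brain fraction at the MAD time should come out as $\epsilon/(0.5+\epsilon) \approx 2\epsilon$.

```python
# Numerical check of the MAD Boltzmann brain fraction in the toy model
# n(x) = A*(x/(1+x^2) + eps); parameter values are illustrative only.
import numpy as np

A, eps = 1.0, 1e-6
x = np.linspace(0.0, 100.0, 200_001)          # x(t) stands in for the time axis
n_ord = A * x / (1.0 + x**2)                  # ordinary-observer term
n_bb = A * eps * np.ones_like(x)              # Boltzmann brain term
n_tot = n_ord + n_bb

i_star = np.argmax(n_tot)                     # grid point of the MAD maximum
print("x at maximum :", x[i_star])            # ~1
print("BB fraction  :", n_bb[i_star] / n_tot[i_star])   # ~2e-6
print("eps/(0.5+eps):", eps / (0.5 + eps))
```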

Biased Average Density (BAD) Measure
Next, consider what I shall call the Biased Average Density (BAD) measure. A motivation for going from MAD to BAD is that the MAD measure does not give any weight to observations within a spacetime that occur after the time $t_{*j}$ at which the SAD $\bar{n}_j(t)$ is maximized. It does not seem very plausible that any observation occurrence within a spacetime of positive measure would contribute zero weight to that kind of observation $O_k$, so the BAD measure replaces the MAD measure by a weighting that is positive for all observation occurrences within an inextendible spacetime (except possibly for a set of measure zero). The auxiliary function in the BAD measure is given by the normalized cumulative total variation of the SAD function,

$$f_{ij}(t) = \frac{\int_0^t |d\bar{n}_j(t')|}{\int_0^\infty |d\bar{n}_j(t')|}, \tag{11}$$

which again increases monotonically from 0 at $t = 0$ to 1 at $t = \infty$, but now continuously rather than suddenly jumping from 0 to 1 as the MAD auxiliary function does. In particular, if $\bar{n}_j(t)$ increases monotonically from $n_0$ at $t = 0$ to a single local maximum (the global maximum) value $n_*$ at $t = t_{*j}$ and then decreases monotonically to $n_\infty$ at $t = \infty$ (where for now I am suppressing the overbar and $j$ index on $\bar{n}_j(t)$ at these three special times), then

$$f_{ij}(t) = \begin{cases} \dfrac{\bar{n}_j(t) - n_0}{(n_* - n_0) + (n_* - n_\infty)} & \text{for } t \le t_{*j}, \\[2ex] \dfrac{(n_* - n_0) + (n_* - \bar{n}_j(t))}{(n_* - n_0) + (n_* - n_\infty)} & \text{for } t \ge t_{*j}. \end{cases} \tag{12}$$

Then Eq. (4) gives

$$\bar{n}_{ij} = \frac{(n_*^2 - n_0^2) + (n_*^2 - n_\infty^2)}{2\left[(n_* - n_0) + (n_* - n_\infty)\right]}. \tag{13}$$

In the case in which $n_0 = n_\infty$ (as was assumed above in the crude toy model),

$$\bar{n}_{ij} = \frac{n_* + n_0}{2}. \tag{14}$$

The crude toy model has $n_0 = n_\infty = A\epsilon$ and $n_* = A(0.5 + \epsilon)$, giving $\bar{n}_{ij} = A(0.25 + \epsilon) = 0.25A + A\epsilon$. Again taking the first term in the sum to correspond to ordinary observations and the second term to correspond to Boltzmann brain observations, and assuming that all the different spacetimes with positive quantum measure give the same ratio of the first term to the second term, one gets that the total normalized probability for Boltzmann brain observations in the BAD measure would be $\epsilon/(0.25 + \epsilon) \approx 4\epsilon \ll 1$, roughly twice what it would be in the MAD measure but still extremely small.
Therefore, both the MAD and BAD measures would solve the Boltzmann brain problem.
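The BAD numbers can be checked the same way (again a sketch of my own): building $f_{ij}$ from the cumulative total variation of the toy SAD function and applying Eq. (4) should reproduce $\bar{n}_{ij} \approx A(0.25 + \epsilon)$ and a Boltzmann brain fraction $\approx 4\epsilon$, only approximately here because the numerical $x$-range must be truncated, so the final SAD value is only approximately $A\epsilon$.

```python
# Numerical check of the BAD weighted SAD on the toy model; the x-range is
# truncated, so n_infinity is only approximately A*eps and the results are
# correspondingly approximate.  Parameter values are illustrative only.
import numpy as np

A, eps = 1.0, 1e-6
x = np.linspace(0.0, 1000.0, 2_000_001)
n = A * (x / (1.0 + x**2) + eps)             # toy SAD as a function of x(t)

tv = np.abs(np.diff(n))                      # |dn| along the trajectory
f = np.concatenate([[0.0], np.cumsum(tv)])
f /= f[-1]                                   # BAD auxiliary function, 0 -> 1

df = np.diff(f)
n_mid = 0.5 * (n[1:] + n[:-1])
n_bad = np.sum(n_mid * df)                   # weighted SAD via Eq. (4)

print("n_bad / A       :", n_bad / A)        # ~0.25 + eps
print("BB fraction     :", A * eps / n_bad)  # ~4e-6
print("eps/(0.25+eps)  :", eps / (0.25 + eps))
```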

Conclusions
One might compare the MAD and BAD measures, which are SAD measures using the Spacetime Average Density, with the Agnesi measure [196], which uses spatial averaging over hypersurfaces and a weighting of hypersurfaces by the Agnesi function of time, $dt/(1+t^2)$ with the time $t$ in Planck units. In some ways the MAD and BAD measures have more complicated algorithms to define, but they do avoid the use of an explicit ad hoc function of time such as the Agnesi function, though that is one of the simplest functions that is positive and gives a finite integral over the entire real axis.
The weighting factor of the Agnesi measure does favor earlier times or youngness (as both the MAD and BAD measures do in different ways), but in a fairly weak or light way, without exponential damping in time. Therefore, it might be called a Utility Giving Light Youngness (UGLY) measure. As a result, I have now made alternative proposals for measures that are MAD, BAD, and UGLY. I am still looking for one that is GOOD in a supreme way of giving a high posterior probability, by both giving a likelihood (the probability of one's observation given the theory that includes the measure) that is not too low (which it seems that all three of my proposed measures would do with a suitable quantum state, such as perhaps the Symmetric Bounce state [172]) and giving a prior probability (assumed to be higher for simpler or more elegant theories) that is not too low (which my measures might not do in comparison with a yet unknown measure that one might hope could be much simpler and more elegant). However, if GOOD were interpreted as simply meaning Great Ordinary Observer Dominance, then since all three of my proposed measures suppress Boltzmann brains relative to ordinary observers, one could say that the MAD, the BAD, and the UGLY are all GOOD.