Time evolution as refining, coarse graining and entangling

We argue that refining, coarse graining and entangling operators can be obtained from time evolution operators. This applies in particular to geometric theories, such as spin foams. We point out that this provides a construction principle for the physical vacuum in quantum gravity theories and, more generally, allows one to construct a (cylindrically) consistent continuum limit of the theory.


Introduction
Renormalization and coarse graining have become powerful tools to connect microscopic and macroscopic regimes of a given theory. In particular, many approaches to quantum gravity postulate or aim to derive macroscopic space time as arising from the collective dynamics of basic building blocks [1,2,3,4]. To validate such a picture one has to show that the many body dynamics of such systems indeed gives a smooth space time when sufficiently coarse grained.
To this end coarse graining techniques [5] need to be employed. The question of how to coarse grain or block fine degrees of freedom into coarser ones is essential for determining good truncations for the coarse graining schemes. Coarse graining maps are dual to refining maps; in fact, tensor network renormalization schemes [10,11] put the emphasis rather on refining maps, which then also determine the properties of the truncation in these schemes [12].
In this note we point out that time evolution maps, as they appear in simplicial discretizations [13,14], can also be interpreted as refining and coarse graining maps. As we will argue here, this applies in particular to gravitational dynamics, e.g. spin foams [15,16,17,18].
One reason why the appearance of time evolution as coarse graining or refining maps applies in particular to gravitational or other diffeomorphism invariant systems is the following: As argued in [19,20,21,22,23], diffeomorphism symmetry in discrete systems translates to a symmetry which can be interpreted as moving vertices in the discrete space time described by the dynamical variables of the theory. These vertex translations can also be understood as time evolution. Now, vertices can even be moved on top of each other, which gives a coarse graining of the underlying state. Alternatively, vertices can split into two and in this way give a refinement. Indeed this argument was used in [23] to show that diffeomorphism symmetry implies discretization independence.
More generally, diffeomorphism invariant systems are totally constrained, i.e. the Hamiltonian is given by a combination of constraints. In the case of a totally constrained system the time evolution operator should be a projection operator [24,25], projecting onto so-called physical states. Thus physical states should not evolve. For discrete time evolutions that change the number of degrees of freedom, this leads to the puzzle of how to identify states from Hilbert spaces of 'different size'. We will argue that such states indeed describe the same physical state, however expressed on two different discretizations. The equivalence relation is provided by the refining time evolution operator. We will explain how this notion can be formalized into the construction of an inductive limit Hilbert space. Such an inductive limit construction is also used for the (kinematical) Hilbert space of loop quantum gravity [27,28].
The inductive limit Hilbert spaces, which are defined via an equivalence relation between states from Hilbert spaces based on different discretizations, however require (so-called cylindrical) consistency conditions: physical observables should not depend on the representative on which they have been evaluated. Indeed we will connect these consistency conditions with a notion of path independence for (refining) time evolution. This then relates to the requirement of diffeomorphism invariance.
Discrete (non-topological) theories typically break the diffeomorphism symmetry [22]. The hope however is that diffeomorphism symmetry can be recovered in the continuum limit. We will explain how to formulate the continuum limit of the dynamics of a given quantum gravity theory and how such a continuum limit can be constructed by an iterative coarse graining procedure akin to tensor network renormalization schemes.
Topological theories can often be discretized without breaking diffeomorphism symmetry. In this case refining time evolution maps indeed satisfy the consistency conditions. We point out that this provides a construction principle for inductive limit Hilbert spaces, which can for instance be applied to find new quantum representations for loop quantum gravity [29].
The idea that time evolution can be interpreted as coarse graining, refining or entangling occurs in many approaches. Tensor network coarse graining algorithms can easily be seen as time evolution in the radial direction (in a Euclidean space time), which itself leads to holographic renormalization [30]. Entanglement renormalization [31], which is also based on tensor network techniques, can be interpreted in a space time picture, again involving holographic renormalization, see for instance [32]. Here the tensor network and the entanglement it encodes are interpreted as a (background) AdS space time. Although such geometrical interpretations appear very naturally, the interpretation of the underlying geometry as a background geometry might not apply straightforwardly to gravity. The reason is that the dynamical variables include the geometric degrees of freedom. Hence the geometry is encoded in the boundary state itself, and has to be extracted from it.
A main point of this paper is to bring together coarse graining tools developed in condensed matter physics with methods developed in loop quantum gravity, and to point out the many peculiarities that arise if one considers totally constrained systems such as general relativity. This leads to our proposal of how to construct the continuum limit of a given quantum gravity theory, together with a notion of a physical vacuum state. Furthermore we point out a general construction principle for inductive limit Hilbert spaces based on time evolution maps of topological theories.

Overview
In this paper we will employ a generalized notion of time evolution, which will be explained in sections 2 and 3.2. The first generalization applies in particular to discretized field theories, where we allow for a time evolution that changes the number of variables, that is, the phase space or Hilbert space dimension, from one time step to the next. The second issue we will discuss is the meaning of time evolution in a totally constrained system, such as general relativity.
Usually one considers a discretization that does not change in time, so that the number of degrees of freedom also stays constant. However, for theories involving a curved background, or for gravity as a dynamical theory, one often uses an irregular lattice, where the discretization and the number of variables do change in time.
In section 2 we will discuss time evolution in simplicial discretizations, where in general the number of degrees of freedom changes. Such simplicial discretizations are in particular used for (the quantization of) gravity, for instance in Regge calculus [33] or spin foams [17]. There, basic Hilbert spaces are associated to geometrical objects. The Hilbert space describing the states at a given time is then (typically) given as a tensor product of these basic Hilbert spaces. 'Size' then refers to the complexity of the underlying discretization, that is, the number of sites, edges etc.
The quantization of the Hamiltonian constraint in loop quantum gravity [34] also involves a change of the underlying discretization (in the form of a graph). The interpretation of this graph changing Hamiltonian is an open issue. In this work we will suggest an interpretation for a graph or discretization changing time evolution. On the other hand this interpretation will help to actually design reasonable discrete dynamics involving a change of phase or Hilbert space.
In sections 2 and 3 we will also explain how to formulate such a dynamics with a varying number of degrees of freedom in the classical and quantum realm respectively, and propose that such a dynamics can be interpreted as refining or coarse graining a given state. This is underlined with a number of examples in section 2.
This interpretation is strengthened if we consider totally constrained systems, such as general relativity or topological field theories. In a totally constrained system the Hamiltonian is given as a combination of constraints C_i, which generate gauge transformations. Thus time evolution is equivalent to a gauge transformation, reflecting the fact that in such systems the choice of time coordinate is arbitrary.
The classical evolution of such systems does not change the states, as these are defined as gauge equivalence classes. The quantum evolution in the form of a path integral is supposed to act as a projector onto physical states ψ_phys(X), annihilated by the quantized constraints, Ĉ_i ψ_phys = 0 [24]. Thus evolution with respect to (coordinate) time is 'frozen'. Consider a discretization of a totally constrained system and allow for the number of degrees of freedom to change during time evolution. Here we will understand time evolution in the sense of (1.1), that is, we consider a discretized path integral. How should we interpret this time evolution, which is supposed to be 'frozen', in the case that the number of variables involved (including physical and gauge degrees of freedom) does change?
We will propose in section 3 that in this case time evolution is equivalent to a refining or coarse graining of a state. (In case the initial state is not physical, unphysical degrees of freedom might also be projected out.) We will connect the case of refining time evolution to the construction of a continuum Hilbert space via an inductive limit, explained in section 3.1, as is used in loop quantum gravity [27]. Such a construction provides a precise sense in which states from Hilbert spaces of 'different size' can be equivalent. Note that this inductive limit construction for the continuum Hilbert space has so far been used only for the kinematical Hilbert space in loop quantum gravity. We propose here a construction that involves the dynamics. Thus the dynamics defines which states are equivalent, as one would expect for the physical Hilbert space, i.e. the space of states satisfying the constraints.
Considering a time evolution where the number of degrees of freedom changes, we can go to the extreme and start from an 'empty' discretization, supporting no variables at all. This will be discussed in section 3.2. A state resulting from a refining time evolution with such initial conditions defines the (Hartle-Hawking) no-boundary state. The different stages of evolution just represent this state on different discretizations, which is consistent with the construction of a Hilbert space via an inductive limit. We will propose that refining a given state via time evolution means to put the additional degrees of freedom into a state that resembles this Hartle-Hawking state in some localized form. It is thus natural to see the Hartle-Hawking state as the vacuum state of the system. (Note that in constrained systems the definition of the vacuum via minimal energy is usually not available: all states satisfy the Hamiltonian constraints and therefore have zero energy, at least in systems without a boundary.)

Often discretizations provide the only method to make sense of the formal continuum path integral. However, for (non-topological) systems the continuum diffeomorphism symmetry is typically broken by the discretization [22]. But without a realization of diffeomorphism invariance in the path integral (1.1), it cannot act as a projector onto the physical states. To deal with this issue one attempts to restore diffeomorphism invariance by refining the building blocks and finding effective amplitudes for the coarser building blocks by integrating out the finer degrees of freedom [35,36]. This is usually referred to as a coarse graining flow (although the initial step is a refining). We will explain in section 4 how this defines a continuum limit of (1.1), which can be expressed as a cylindrically consistent amplitude map on an inductive limit Hilbert space.
For such an inductive limit Hilbert space one needs to again define refinement maps, which we propose to be given by (effective) time evolution maps. This holds in particular if one wants to express the physical Hilbert space as an inductive limit.
Thus what has been said above about equivalence of time evolution and refining and coarse graining will hold in general only in some approximate sense. In fact, one can now attempt to construct discretizations for which this holds to a good approximation. This will also provide the means to define the continuum limit via a coarse graining flow. In this continuum limit one then expects this equivalence to hold exactly. Section 5 will explain that tensor network renormalization schemes provide a means to construct cylindrically consistent amplitude maps and an inductive limit physical Hilbert space. On the other hand the insight that time evolution maps provide refining maps might help to develop new tensor network renormalization schemes.
The breaking of diffeomorphism symmetry by discretizations can often be avoided in topological theories, such as three-dimensional gravity. Here the relation between time evolution and refining or coarse graining can be made exact. We will therefore illustrate our claims with examples from topological field theories in section 6. In particular, the (refining) time evolution maps can be taken as refinement maps for the construction of an inductive limit Hilbert space. Note that the applicability of this idea is not exclusive to topological field theories: one can also use the time evolution maps of topological field theories to define inductive limit Hilbert spaces for other theories. Following this idea, a new representation for loop quantum gravity has recently been defined in [29], based on the time evolution maps for BF theory. We will explain in section 6.4 that this construction can be generalized to other (discretized) topological field theories. It provides a method to find Hilbert space representations for non-topological theories, based on vacua provided by the topological theories.
Section 7 will comment further on the peculiarities of gravitational theories, where the geometric scale is part of the dynamical variables. It will provide a geometric interpretation of the refining time evolution maps and make clear that these should indeed rather be seen as refining than as time evolution. Furthermore, the properties of these maps are related to the appearance of (bubble) divergences in spin foams [20,37,38,39].

Time evolving phase spaces
Here we are going to explain how to understand discrete time evolution in systems where the phase space dimension (or the 'size' of the Hilbert space) can change from one time step to the next. We will consider theories that assume a notion of equal time states, which is indeed the case in many discrete theories, such as Regge calculus [33] or loop quantum gravity restricted to discrete graphs [40,41,42].
First let us illustrate that the need for such a time evolution scheme appears naturally in simplicial discretizations, i.e. triangulations. Assume a triangulated hypersurface. The configuration space of the theory is given by an association of variables to certain type(s) of simplices or combinations of simplices. For instance, in (length) Regge calculus [33] one associates lengths to the edges of the triangulation; other formulations work with areas [43] or areas and angles [44], see also references therein. Scalar fields can be associated to vertices, discrete n-forms to n-simplices or their n-duals, etc. [45].
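To make this association of variables to simplices concrete, here is a minimal sketch (an illustrative data layout of our own, not taken from any code base): length Regge data live on edges, while a scalar field lives on vertices.

```python
# Two triangles glued along the edge (0, 2); frozenset makes edges
# orientation-free, so the shared edge is stored only once.
triangles = [(0, 1, 2), (0, 2, 3)]
edges = {frozenset(pair) for t in triangles
         for pair in ((t[0], t[1]), (t[1], t[2]), (t[0], t[2]))}
vertices = {v for t in triangles for v in t}

lengths = {e: 1.0 for e in edges}   # length Regge calculus: one length per edge
field = {v: 0.0 for v in vertices}  # a scalar field lives on the vertices

print(len(edges), len(vertices))    # -> 5 4
```

Gluing a further simplex then amounts to extending these maps with new edges and vertices, while identifying the shared ones.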
Time evolution in a d-dimensional theory is given by gluing d-simplices to the triangulated (d − 1)-dimensional hypersurface. This discrete time evolution appears as a change of triangulation, indeed a Pachner move [46], in the triangulated hypersurface, see also [47]. Pachner moves can be understood as the most elementary changes of a triangulation; they divide time evolution into basic steps.
'Gluing' a d-simplex to the hypersurface means identifying the variables on the (sub)simplices that are now shared between this d-simplex and the hypersurface, as well as solving for (integrating over) the variables that are now in the bulk, i.e. no longer associated to the hypersurface.
The d-simplices can be glued to the hypersurface in different ways. Depending on how many faces ((d − 1)-subsimplices) of a d-simplex are identified with (d − 1)-subsimplices of the triangulated hypersurface, the number of variables associated to the hypersurface might either increase, decrease or stay constant. Accordingly we will interpret these Pachner moves as refining, coarse graining, or of 'mixed type'. 3 These 'mixed type' Pachner moves can be seen as entangling moves, as they appear in the entanglement renormalization approach [31,48], see the discussion in section 6. For example, in (1 + 1) dimensions, gluing triangles to a triangulated line can be done in two ways, named the 1 − 2 and the 2 − 1 Pachner move. For the 1 − 2 Pachner move we glue a triangle with its base to an edge of the 1D line. This edge is mapped to two edges under time evolution, which alternatively can be interpreted as refining the state. Indeed we will later see that this is exactly the case in topological theories. In the 2 − 1 move we glue a triangle with two of its edges to two neighbouring edges of the 1D line.

Figure 1: A 1 − 3 move in the 2D hypersurface can be obtained by gluing a tetrahedron with one of its triangles to the hypersurface.
In three spacetime dimensions one can reproduce the coarse graining 3 − 1 and refining 1 − 3 Pachner moves by gluing a tetrahedron to the triangulated hypersurface with three of its triangles or with one triangle, respectively, see figure 1.
However, if one wants to produce a very refined state using only 1 − 3 Pachner moves, one will end up with a very peculiar geometry, known as a stacked sphere. Even in 4D, where gravity is non-topological and interacting, such stacked sphere geometries are not dynamical (do not allow for curvature) and span the flat sector of the theory as defined in [40]. Thus, to arrive at more interesting spatial geometries one needs to include other Pachner moves. For (2 + 1) dimensions these are the 2 − 2 moves, which can also be obtained by gluing a tetrahedron with two triangles to the hypersurface, see figure 2. Such 2 − 2 moves can be used as entangling moves to produce the long range entanglement in topological phases [48]. For (3 + 1) dimensions one can analogously generate 4 − 1 and 1 − 4 as well as 3 − 2 and 2 − 3 Pachner moves by gluing a 4-simplex to the 3D triangulated hypersurface.

These moves can be described via canonical evolution equations, despite the change in phase space dimension [49,13,14]. Generalizing work of [50,51,52], such discrete time evolution maps can be understood as canonical transformations generated by an action. This action is associated to the d-simplices and can be understood as Hamilton's principal function, depending on the boundary data of this simplex. 4 Hamilton's principal function is a generating function for the momenta, that is, we use the action S_s associated to a simplex to define old momenta p and new momenta p′. Schematically we have

p = − ∂S_s/∂q ,   p′ = ∂S_s/∂q′ ,   (2.1)

where we denote old and new configuration data by q and q′ respectively. 5

How can the equations (2.1) describe a canonical, i.e. symplectic, transformation if the number of old and new variables differ? The answer is that pre- and/or post-constraints appear on the initial or final phase space respectively. One has to reduce the phase spaces with respect to these constraints and finds a symplectic transformation on these reduced phase spaces.

The constraints have to appear, for a simple reason, from the equations (2.1). There one would have to solve the first set of equations for the new configurations in terms of the old configurations and old momenta. However, if we have N_old > N_new variables, the first set of equations will give N_old relations for N_new unknowns. Thus, if the equations are independent, they will give the solutions q′(q, p) but also (N_old − N_new) pre-constraints C_i(q, p), i = 1, …, (N_old − N_new), that is, constraints on the initial phase space. Similarly we obtain post-constraints if N_new > N_old. (Constraints can also appear independently of this mechanism, that is, N_new = N_old does not guarantee that constraints will not appear.) As the pre- or post-constraints are defined via a generating function, they are first class. Thus the evolution equations (2.1) leave a number of configurations undetermined ('pre-' and 'post-' gauge degrees of freedom), corresponding to the number of constraints that appear. The status of these gauge degrees of freedom might change under further evolution: constraints appearing in the future might lead to a gauge fixing. The pre-constraints have to be satisfied for an evolution move to take place. The post-constraints are automatically satisfied after an evolution move has taken place.

We should point out that the pre- and post-constraints include constraints which might arise due to gauge symmetries, including Hamiltonian and diffeomorphism constraints. For instance the 4 − 1 Pachner move in 4D leads to Hamiltonian and diffeomorphism constraints [40,13]. In this case the post-constraints exactly coincide with the Hamiltonian and diffeomorphism constraints; no new truly physical degree of freedom is added by such a refinement move. There are however also the 2 − 3 moves, which add degrees of freedom and therefore lead to constraints that do not coincide with the Hamiltonian and diffeomorphism constraints. As noted, further evolution might fix the gauge degrees of freedom implied by the Hamiltonian and diffeomorphism constraints. This is due to the breaking of diffeomorphism symmetry in discretizations of 4D gravity [22].

3 Such moves of 'mixed type' can be avoided if one considers so-called Alexander moves instead of Pachner moves. These Alexander moves can be understood as combinations of Pachner moves; thus they will both refine the hypersurface and entangle certain degrees of freedom of this hypersurface.

4 The advantage of such a formulation is that it reflects how simplicial path integrals are defined. There, i.e. in spin foams, one associates an amplitude to a d-simplex, which in the semi-classical limit does indeed give the Regge action for the simplex [53]. The path integral (say with boundary) is then defined by summing the product of all simplex amplitudes over all bulk variables. Thus, changing the boundary state by gluing a simplex to the boundary (i.e. multiplying the state with the simplex amplitude and summing over the variables which are now bulk variables), we automatically obtain the amplitude for the evolved state. Hence one would expect that the semi-classical limit reproduces the equations of motion as obtained from the canonical transformation generated by the action associated to this simplex.

5 Some configuration variables are neither old nor new, as these are represented in the hypersurface before and after the move. Such variables count as (non-dynamical) parameters in this move. Here we will only need this schematic discussion; for an explicit discussion of all Pachner moves see [13].

Figure 3: The scalar field on a circle and its time evolution. The circle is drawn as an interval with periodic boundary conditions indicated by the dashed lines.
Post- and pre-constraints also appear for theories without any a priori gauge symmetries, such as a scalar field theory. We propose here that such constraints can be interpreted as describing the state of the finer degrees of freedom. We will motivate this proposal with examples.
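The mechanism by which a refining move produces post-constraints can be illustrated in a small toy model. The sketch below uses an invented quadratic generating function S(q, q′) = a q²/2 + q b·q′ + q′·M q′/2 (coefficients and names are ours, not taken from any specific discretization), evolving one 'old' variable into two 'new' ones, and checks numerically that the evolved data (q′, p′) always satisfy one relation, the post-constraint, while one combination of the q′ remains undetermined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented quadratic generating function S(q, q') = a q^2/2 + q b.q' + q'.M q'/2
# with N_old = 1 and N_new = 2 (a refining step).
a = rng.normal()
b = rng.normal(size=2)
M = rng.normal(size=(2, 2))
M = M + M.T

def evolve(q, qp):
    """Momenta from the generating function: p = -dS/dq, p' = dS/dq'."""
    p = -(a * q + b @ qp)
    pp = b * q + M @ qp
    return p, pp

# p determines only the combination b.q' of the new configurations; the
# orthogonal combination n.q' is left undetermined (a post gauge degree of
# freedom). Correspondingly, one post-constraint C(q', p') = n.p' - n.(M q') = 0
# holds identically on the image of the evolution map:
n = np.array([b[1], -b[0]])     # null direction, n.b = 0
for _ in range(5):
    q, qp = rng.normal(), rng.normal(size=2)
    p, pp = evolve(q, qp)
    assert np.isclose(n @ pp - n @ (M @ qp), 0.0)
print("one post-constraint, as N_new - N_old = 1")
```

The same counting, N_new − N_old post-constraints, is what the text derives for the refining Pachner moves.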

2.1 Example: evolution of a scalar field on an extending triangulation

As a first example we consider a massless scalar field on a 2D (Euclidean) equilateral triangulation. The action associated to one triangle is given as

S_∆ = (1/2) Σ_{e⊂∆} (φ_{t(e)} − φ_{s(e)})² ,   (2.2)

where s(e), t(e) denote the source and target vertex of an oriented edge e, respectively. We now consider a time evolution between two spatial, periodically identified 1D triangulations, i.e. circles subdivided into edges. We assume that the earlier equal time hypersurface has N edges and the later one N′ edges, and we connect these two hypersurfaces by "one slice of triangles", see figure 3. The triangulation can be described by an adjacency matrix A_{vv′}, where A_{vv′} = 1 if the vertex v at the earlier time is connected to the vertex v′ at the later time, and A_{vv′} = 0 if this is not the case.

The canonical time evolution map can be easily computed in this case. In particular the momenta π′_{v′} at the later time step are given by

π′_{v′} = ∂S/∂φ′_{v′} ,   (2.3)

where S is the action associated to the interpolating triangulation, obtained by summing the action contributions (2.2) of the triangles. If A_{vv′} has right null vectors R^r_{v′}, i.e. such that Σ_{v′} A_{vv′} R^r_{v′} = 0, we obtain constraints by contracting (2.3) with these null vectors. These constraints are of the form

Σ_{v′} R^r_{v′} π′_{v′} = f^r(φ′) ,   (2.4)

with some specific functions f^r of the fields φ′_{v′} at the later time step. Coming from a generating function, the constraints are Abelian. They generate gauge transformations in the sense that the evolution step indeed leaves certain combinations of field values unspecified. Assuming an orthogonal basis of the null vectors, these combinations are given as Σ_{v′} R^r_{v′} φ′_{v′}.

For a refining evolution step the number N′ of vertices at the later time is larger than the number N of vertices at the earlier time, N′ > N. In this case there are at least N′ − N right null vectors R^r, with r = 1, …, N′ − N. We want to argue that these gauge degrees of freedom correspond to the finer degrees of freedom of the field.
Consider specifically a regular refinement as in figure 3, with N′ = 2N, in which each earlier-time vertex v is connected to the three later-time vertices 2v − 1, 2v, 2v + 1 (modulo 2N). The corresponding adjacency matrix is given by

A_{vv′} = δ_{v′,2v−1} + δ_{v′,2v} + δ_{v′,2v+1} .   (2.5)

The matrix can be 'diagonalized' by a Fourier transform. Let us formally introduce the notation

|k′⟩_{v′} = (1/√(2N)) e^{iπ k′ v′/N} ,   k′ = 0, …, 2N − 1 ,

so that we can write

Σ_{v′} A_{vv′} |k′⟩_{v′} = (1 + 2 cos(π k′/N)) (1/√(2N)) e^{2πi k′ v/N} .

Note that the modes k′ = k and k′ = k + N lead to the same dependence on the earlier-time vertex v. We thus have N right null vectors R^k (here k = 0, …, N − 1 labels the null vectors) given by

R^k = (1 − 2 cos(π k/N)) |k⟩ − (1 + 2 cos(π k/N)) |k + N⟩ .

In general the coefficient in front of the higher momentum |k + N⟩ is non-vanishing (it only vanishes for k = 2N/3). Thus we can say that (almost) all momenta π′_{k′} associated to finer degrees of freedom, i.e. with momenta k′ > N, are determined by the constraints (2.4). The same holds if we add a potential V to the scalar field and discretize it in a local manner, i.e. as a term Σ_{v∈∆} Ar(∆, v) V(φ_v) added to the action (2.2), with Ar(∆, v) denoting some association of an area to the vertex-triangle pair.
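This null vector structure can be checked numerically. The short sketch below assumes the regular-refinement adjacency in which each earlier-time vertex v is connected to the three later-time vertices 2v − 1, 2v, 2v + 1 (mod 2N); the mode conventions follow the text.

```python
import numpy as np

N = 9                     # coarse vertices; 2N fine vertices (N divisible by 3)
A = np.zeros((N, 2 * N))
for v in range(N):
    for vp in (2 * v - 1, 2 * v, 2 * v + 1):
        A[v, vp % (2 * N)] = 1.0    # regular refinement adjacency

# the right null space of A has dimension 2N - rank(A) = N
null_dim = 2 * N - np.linalg.matrix_rank(A)
assert null_dim == N

def mode(kp):
    """Fourier mode |k'> on the fine lattice, k' = 0, ..., 2N-1."""
    vp = np.arange(2 * N)
    return np.exp(1j * np.pi * kp * vp / N) / np.sqrt(2 * N)

# null vectors R^k = (1 - 2cos(pi k/N)) |k> - (1 + 2cos(pi k/N)) |k+N>
for k in range(N):
    c = np.cos(np.pi * k / N)
    R = (1 - 2 * c) * mode(k) - (1 + 2 * c) * mode(k + N)
    assert np.allclose(A @ R, 0)

# the coefficient of the higher mode |k+N> vanishes only at k = 2N/3
coeffs = np.array([1 + 2 * np.cos(np.pi * k / N) for k in range(N)])
assert np.argmin(np.abs(coeffs)) == 2 * N // 3
```

The check confirms that the null space has exactly N dimensions and that each null vector mixes a coarse mode with its higher partner.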
The post-constraints signify in particular that no new information is added; the physical phase space cannot be enlarged during evolution. In fact, we can interpret this in the following way: given a state with a certain coarse graining, i.e. discretization scale, we can apply refining time evolution steps. This will lead to a state with the same coarse graining scale, however represented on a finer discretization.
One would like the finer degrees of freedom that are added during refining time evolution to be in a vacuum state. In the case of a scalar field we have a notion of energy; thus the statement is that the refining time evolution should not increase the energy, as defined by some energy functional on the two different discretizations. See also the discussion in [54], which however considers this issue in a covariant quantization scheme. Whether this is actually the case will depend on the quality of the discretization.

This holds in particular as the degrees of freedom that are added are typically defined on scales near the discretization scale, and typical discretizations will give rather unreliable results on this scale. A way out is to design discretizations so that the added degrees of freedom are in fact in a vacuum state.

2.2 Refinement as adding degrees of freedom in the vacuum state
In section 2.1 we used the Fourier transform to identify finer degrees of freedom (higher modes) and coarser degrees of freedom (lower modes). In fact, in a free theory the Fourier modes decouple and allow us to assign an energy per mode.
We can thus turn around the argument that refining time evolution should add degrees of freedom in a vacuum state, and design a discretization for which this is the case. This will in general result in a discretization that is non-local in space, as can already be suspected from the use of the Fourier transform.
The general idea is to match, for a set of Fourier modes up to a cut-off K, the dynamics of the continuum exactly; see also the related arguments in [55,56].
The construction is as follows: We consider the canonical data of a continuum scalar field on a 1D circle, representing the spatial hypersurface. We only consider fields that include Fourier modes φ̃(k₁) up to a cut-off K ∈ ℕ on the spatial momentum component, |k₁| ≤ K, so that the fields are superpositions of N = 2K + 1 modes. Such fields can therefore be parametrized one-to-one (in the generic case) by the set of N values of the field at pre-specified positions along the circle.
We thus have a mapping M K from the phase space describing continuum field configurations with a cut-off K to a phase space describing a discretized scalar field on N = 2K + 1 vertices.
We now have to decide on an embedding map C_{K,K′} from the phase space with cut-off K to a phase space with cut-off K′ ≥ K. Once such a map is chosen we can construct the corresponding map for the phase spaces describing the discretized fields.
As an example, one can choose C_{K,K′} such that in a mode expansion the coefficients of the additional modes vanish. This minimizes the energy of the additional modes for a free theory. For interacting theories one can more generally choose C_{K,K′} such that the energy of the refined configurations (in the space of fields with a mode cut-off K′) is minimized, keeping the coarser modes fixed.
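For the free theory this zero-padding embedding can be written down explicitly on the sample values. The following sketch (the function name and grid conventions are ours) refines N = 2K + 1 samples to N′ = 2K′ + 1 samples by padding the additional Fourier coefficients with zeros, which is just trigonometric interpolation.

```python
import numpy as np

def embed(samples, K_fine):
    """Sketch of C_{K,K'} on sample values: pad extra Fourier modes with zeros.

    Maps N = 2K+1 equally spaced samples on the circle to N' = 2K'+1 samples
    of the same band-limited field; the added modes carry zero coefficients
    (and hence, for a free field, zero energy).
    """
    N = len(samples)                     # N = 2K + 1
    K = (N - 1) // 2
    Np = 2 * K_fine + 1
    modes = np.fft.fft(samples) / N      # coefficients of modes |k| <= K
    modes_fine = np.zeros(Np, dtype=complex)
    modes_fine[:K + 1] = modes[:K + 1]   # k = 0, ..., K
    if K > 0:
        modes_fine[-K:] = modes[-K:]     # k = -K, ..., -1
    return Np * np.fft.ifft(modes_fine).real

# refining samples of cos(2 pi x) reproduces its values on the finer grid
coarse = np.cos(2 * np.pi * np.arange(5) / 5)    # K = 2
fine = embed(coarse, 8)                          # K' = 8, N' = 17
assert np.allclose(fine, np.cos(2 * np.pi * np.arange(17) / 17))
```

The refined configuration represents the same band-limited field, now sampled at N′ positions, in line with the inductive limit picture described above.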
Note also that one can attempt to define an embedding C_{K,K′} that includes some proper time evolution step T. However, a time evolution will entangle the modes up to the cut-off K with (possibly) all continuum degrees of freedom; that is, the image of T applied to a phase space given by modes with cut-off K will in general include arbitrarily high modes. We therefore face a problem in pulling back the evolved continuum configuration to a discrete one, as the evolved continuum configuration might be infinitely refined with respect to the embedding maps chosen.
Thus one either has to find an (embedding) map of discrete configurations into continuum ones for which this is not the case. In this sense the choice of the (generalized) maps M_K should be informed by the dynamics. We believe however that such examples are rare; see section 2.3 for one of these cases. Alternatively, one chooses a truncation of the image of T back to the phase space describing modes with cut-off K′. This introduces an approximation to the continuum dynamics. However, for sufficiently refined configurations one would of course expect that these errors do not affect sufficiently coarse grained observables.
In general the discrete embedding maps D_{K,K′} will be highly non-local, but this non-locality is necessary to obtain a good approximation also for modes near the discretization scale. The construction will be very involved for an interacting theory, as it basically requires solving the dynamics.

However, it can serve as a guideline for what to expect from 'good' discretizations that also involve a possible change in the number of degrees of freedom. Thus even if one has a discretization that does not exactly mirror this behaviour (i.e. is not 'perfect'), one can hope that via coarse graining one reaches an effective theory that actually does. This is the philosophy behind perfect discretizations, which can be constructed as fixed points of renormalization flows [57,36,58] or by pulling back continuum physics to the lattice ('blocking from the continuum') [59]. A more abstract approach is to select as observables the spectra of geometric operators [60].
The construction described in this section allows a more explicit choice of the (post-)constraints than the discussion in section 2, where the post-constraints are determined by the chosen discretization of the action. Here the post-constraints are determined by the choice of vacuum state, given by the minimization of an energy functional. Note that these constraints are rather second class. For instance, for a free theory they are given by the vanishing of all higher modes in the fields and momenta. Thus, in comparison with the discussion in section 2, one has gauge fixed the first class post-constraints appearing there with additional constraints. This is what one would however also expect from the quantization of a (free) scalar field: the vacuum functional in a given mode is given as a Gaussian of the field variable. Such a Gaussian can also be found by minimizing the 'master constraint' M(k_1) = ω^2(k_1) φ^2(k_1) + π^2(k_1), given as a sum of (weighted) squares of the individual constraints [61]. For gravity it is less clear what kind of vacuum to expect. On the one hand, (continuum) gravity constitutes a first class constrained system, so all physical states have to satisfy the constraints. Thus physical states are squeezed states in the conjugate degrees of freedom describing the gauge choice and the constraints. In 4D we of course have additional physical degrees of freedom; however, the characterization of a vacuum state (without a background and boundary) is an open issue. We will discuss in section 3.2 the Hartle Hawking no-boundary proposal [62] for a vacuum state, which can be naturally implemented with a refining time evolution.
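For a single mode the minimization of the master constraint can be carried out explicitly; a sketch with a Gaussian trial state (the trial-state ansatz is our illustration, setting ℏ = 1, not taken from [61]):

```latex
\begin{align}
\psi_\alpha(\phi) &= \left(\frac{\alpha}{\pi}\right)^{1/4} e^{-\alpha \phi^2/2},
\qquad
\langle \phi^2 \rangle_\alpha = \frac{1}{2\alpha}, \qquad
\langle \pi^2 \rangle_\alpha = \frac{\alpha}{2}, \\
\langle M(k_1) \rangle_\alpha
&= \frac{\omega^2(k_1)}{2\alpha} + \frac{\alpha}{2}
\quad\Rightarrow\quad
\partial_\alpha \langle M(k_1) \rangle_\alpha = 0
\;\;\text{at}\;\;
\alpha = \omega(k_1).
\end{align}
```

The minimizing state is thus the Gaussian ψ ∝ exp(−ω(k_1) φ²/2), i.e. the harmonic oscillator ground state for that mode, consistent with the free-field vacuum functional described above.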

Massless scalar field in a 2D Lorentzian space time
Here we will discuss an example of a perfect discretization with local embedding maps, namely the discretization of a massless scalar field in 2D Minkowskian space time. Note that this is the only such example of a non-topological theory that we are aware of, and that the locality of the embedding maps might actually change in the quantum theory. We will consider equal time hypersurfaces given by piecewise null lines, akin to characteristic evolution schemes [63].
We will identify a given discrete configuration of field values with a continuous configuration by assuming the continuum field to be piecewise linear. Such a piecewise linear field can be parametrized by a discrete set of scalar field values at the points where the derivative of the continuum field is not continuous.
One motivation for this example is to provide an interpretation for graph changing Hamiltonians appearing in loop quantum gravity [34,64] or for the parametrized scalar field [65]. This example will illustrate that indeed refining evolution splits into an embedding map and a proper evolution.
The example is furthermore interesting as it introduces the concept of piecewise null hypersurfaces, that on 'larger scales' can be either put together to a spatial hypersurface, or alternatively to a null hypersurface. Thus problems involving a null boundary can be easily treated, with a natural specification of 'boundary conditions' at the null hypersurface (or null line). For a discussion of issues related to holography involving such discretizations, see [66], which very much inspired the development of this example. Null surface formulations also attracted recent interest in (loop) gravity [67].
To be concrete we consider a 2D cylinder space time endowed with the Minkowski metric. We consider piecewise null 'hyperlines' that close around the cylinder. Thus we will have null edges connected via kinks.
For every such kink we have to introduce a vertex ν. We allow furthermore vertices ν on the null edges themselves. We will associate scalar field values φ ν to these vertices ν.
As we will show in the following, such a configuration of scalar fields φ ν specifies a piecewise linear solution to the continuum dynamics.
Let us start with the set of continuum solutions to □φ = 0, which are given by

   φ(u, v) = f(u) + g(v) ,    (2.9)

where (u = t + x, v = t − x) are light cone coordinates. We will consider functions of the form (2.9) with f and g piecewise linear (continuous) functions. Thus φ(u, v) will be smooth (even linear) everywhere except on a set of null lines u = c_I or v = c'_J, where {c_I, c'_J} is a set of constants. Such a solution induces a scalar field configuration on any piecewise null line in the following way: as outlined above, we have a vertex at every kink of the piecewise null line. Additionally we introduce vertices for every null line u = c_I or v = c'_J that cuts our 'equal time hypersurface' transversally. The values of the scalar field at these vertices are then just given by the values of the solution (2.9) at the positions of the vertices.
This gives a configuration of scalar field values on a piecewise null hyperline. From this configuration we can reconstruct the solution, basically by inverting the above procedure: for every kink we draw the two null lines u = c_I and v = c'_J emanating from this kink. Furthermore we draw from every vertex on a null edge a transversal null line. These null lines mark the possible non-smooth behaviour of the solution. We can then reconstruct the solution everywhere by linear extrapolation.
A concrete way of constructing such a solution is given by a time evolution of the scalar field configuration on a piecewise null line, by pushing the null line forward in time.
Note that the scalar field values on a given piecewise null line are sufficient to reconstruct the full space time solution. No additional momenta are needed. Intuitively this can be pictured in the following way: drawing a null zigzag line, we obtain a set of initial fields at two consecutive time steps. Thus the fields themselves provide the momenta.
To describe the time evolution consider a piecewise null line with a vertex ν_1 at a kink. We wish to evolve this vertex by an amount ε in, say, the direction of u, see figure 4. (Note that there is no absolute length attached to ε, as it is an affine parameter. The time evolution itself can only be characterized by the area of the rectangular diamond that will be glued to the hypersurface.) Let us assume (for simplicity) that the next vertex ν_2 to the right of ν_1 is also a kink. If we want to move the vertex ν_1 and keep the hypersurface null, we also have to move the vertex ν_2 to a new vertex ν'_2. Thus we move an entire null edge of the hyperline.
Note however that although we move the vertex ν_1 to a new vertex ν'_1, we actually have to keep ν_1 as a vertex in our hypersurface. This is due to the possible non-smooth behaviour of the field that might still occur at ν_1. Thus the new hypersurface will have one additional vertex. In this way time evolution is necessarily refining.
Thus we have to determine the values of the scalar field at the new vertices ν'_1 and ν'_2. We construct the field φ(ν'_2) by linearly interpolating between φ(ν_2) and the field φ(ν_3) at the next vertex ν_3 to the right of ν_2,

   φ(ν'_2) = (1 − s) φ(ν_2) + s φ(ν_3) ,    (2.10)

where s ∈ [0, 1] is the affine fraction of the edge from ν_2 to ν_3 at which ν'_2 sits. (One might consider this step to be somewhat non-local.) Having constructed the field φ(ν'_2), we now know three of the fields at the vertices of the diamond formed by ν_1, ν'_1, ν_2, ν'_2. The field at the tip ν'_1 of this diamond is imposed by the form of the solution (2.9) to be

   φ(ν'_1) = φ(ν_1) + φ(ν'_2) − φ(ν_2) .

This description can easily be generalized to other situations. In the end, time evolution proceeds by gluing rectangular diamonds to the null hypersurface. Allowing the side lengths of these diamonds to vary, we do not pick out a Lorentz frame; thus we have a Lorentz independent cut-off.
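The diamond update can be checked against exact continuum solutions of the form φ(u, v) = f(u) + g(v), for which the tip value equals φ(left) + φ(right) − φ(bottom) identically. A minimal numerical sketch (the specific piecewise linear f, g and corner coordinates are illustrative choices of ours):

```python
import numpy as np

# Tip rule for a rectangular null diamond with corners, in light cone
# coordinates, bottom = (u1, v1), left = (u1, v2), right = (u2, v1),
# tip = (u2, v2): for phi = f(u) + g(v) the tip value is fixed by the
# other three corners.
def diamond_tip(phi_left, phi_right, phi_bottom):
    return phi_left + phi_right - phi_bottom

# Illustrative piecewise linear f and g (kinks at u = 10/7 and v = -0.5).
f = lambda u: np.maximum(u, 0.3 * u + 1.0)
g = lambda v: np.minimum(2.0 * v, v - 0.5)
phi = lambda u, v: f(u) + g(v)            # exact solution of the wave equation

u1, u2, v1, v2 = 0.0, 0.7, 0.2, 1.1       # corners of the diamond
tip = diamond_tip(phi(u1, v2), phi(u2, v1), phi(u1, v1))
print(abs(tip - phi(u2, v2)))             # vanishes up to rounding
```

Since the identity holds exactly for any f and g, the scheme introduces no discretization error for solutions of this form, which is the sense in which the discretization is 'perfect'.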
Here we have a situation very similar to loop quantum gravity [34], or parametrized and polymerized scalar field theory [65], with a 'graph changing' Hamiltonian. One can choose ε (or the area of the diamond, if one generalizes the framework to also allow 'past directed' null edges, see figure 5) arbitrarily small - there remains a discontinuous action of the time evolution, which is to produce new vertices.
The time evolution map can also be split into two parts: one is a pure refining part, introducing the vertex ν'_2 and the associated field value (2.10). The second step is the 'proper' time evolution step, in which a diamond with vertices ν_1, ν'_1, ν_2, ν'_2 is glued to the hypersurface. This part keeps the number of vertices constant, as the new vertex ν'_1 is compensated by the loss of the old vertex ν_2. The scheme we have described is a perfect discretization that exactly mirrors the continuum solutions. We can also embed any discrete configuration into a refined configuration, where the new field values are given by linear interpolation as in (2.10). This allows one to identify discrete and continuum configurations, which can be formalized into an inductive limit construction; we will explain this in section 3.1 for the quantum theory. See [68] for an alternative proposal to identify discrete and continuous phase space configurations, based on gauge fixing infinitely many degrees of freedom of the continuum theory.
Figure 5: Time evolution can also proceed by gluing small diamonds to the hypersurface, which will however produce past directed null edges. Note that in this case φ(ν'') will be constrained and determined by the fields at the other vertices on the 'equal time' hypersurface.

2.4

Refinement moves in simplicial gravity: adding gauge and vacuum degrees of freedom

The examples discussed here involved scalar field theories, for which a notion of energy and hence of vacuum is available. In section 2.1 we proposed to use directly an energy functional to characterize the state of the additional degrees of freedom. For the massless scalar field on null lines we determined the values of the additional 'finer' fields by demanding piecewise linearity of the field. In the case of gravitational theories the notion of energy is less clear, in particular if one considers compact space times. Time evolution itself is rather understood as a gauge transformation. Indeed, as mentioned in section 2, part of the degrees of freedom added in a refining time evolution will be gauge (or pseudo gauge if diffeomorphism symmetry is broken [51,22]). But in general refining time evolution also adds physical (non-gauge) degrees of freedom. Thus refining time evolution leads to states that are supposed to be gauge equivalent, but seem to be based on different numbers of degrees of freedom.
To resolve this puzzle, we need to understand states resulting from a refining time evolution as equivalent: they represent the same state on different discretizations. We will discuss the quantum theory in section 3.1, in which this notion can indeed be made precise. For this interpretation it is important that the refined degrees of freedom are indeed in a state that can be interpreted as a vacuum. For instance, in loop quantum gravity one uses the so-called Ashtekar-Lewandowski vacuum [27], in which the spatial geometry is sharply peaked to be totally degenerate. An alternative vacuum state has recently been introduced [29], in which the vacuum is rather peaked on flat connections. In both cases these vacua are used to define a notion of refining; in the second case this refining indeed originates from a time evolution of BF theory, a topological theory that describes flat connections.
However, both these choices are rather kinematical vacua, at least in 4D gravity ([29] actually gives the physical vacuum for 3D gravity). The interpretation as vacua is not tied to an energy functional, but rather to the fact that these states are the simplest possible ones from different viewpoints. The Ashtekar-Lewandowski vacuum can be understood as the state giving a constant value to all connection fields, whereas the BF vacuum [29] gives a constant value to the conjugate variables, the flux fields describing spatial geometry.
Naturally one would prefer a construction based on some notion of physical vacuum. In particular, if a given state satisfies the constraints, this should also hold for the state obtained by a refining time evolution. This however does not specify a (vacuum) state uniquely; additional requirements need to be chosen.
One possibility for simplicial (Regge) gravity is to use, similar to the massless field, a refining based on piecewise flat geometry, at least on the classical level. Another possibility is to use some quasi-local notion of energy and to minimize this energy for the region that is being refined, in analogy to the procedure described in section 2.1.
However, we propose here that in the case of gravitational theories, refining time evolution itself might provide a notion of physical vacuum, as we will discuss in section 3.2.

Refining in quantum theory
Here we will discuss some aspects of refining time evolution in the quantum theory. See also [69], which introduces a framework for time evolving Hilbert spaces in which the number of degrees of freedom can increase and decrease. In this work we will instead base our discussion on inductive limit Hilbert spaces, and regard refining time evolution as a means to define the embedding maps needed for the construction of these inductive limit Hilbert spaces. We will briefly explain this construction in section 3.1.
So far, this framework has been used at the kinematical level in loop quantum gravity. The main proposal of this work is that one should actually use the dynamics, that is, time evolution, to define the embedding maps needed for this framework. With this proposal one adopts a no-boundary Hartle Hawking state as vacuum state; thus the vacuum (and a notion of equivalence between states of different refinement degree) is determined by the dynamics of the theory. This will be outlined in section 3.2.

Inductive limit construction of a continuum Hilbert space
The inductive limit construction allows one to define a continuum Hilbert space from a family of Hilbert spaces associated to discretizations (for instance graphs, as in the Ashtekar-Lewandowski representation [27], or triangulations, as for the BF vacuum introduced in [29]). The discretizations need to be organized into a directed partially ordered set, denoted by ({b}, ≺). The ordering provides a notion of coarser and finer discretizations; that is, b ≺ b' denotes that b' is a refinement of b. In a directed partially ordered set one can always find a common refinement b'' for any two discretizations b and b'.
We associate to each such discretization b a Hilbert space of states H_b. For any pair of discretizations b ≺ b' we require an embedding map

   ι_{bb'} : H_b → H_{b'} .    (3.1)

These embedding maps have to satisfy consistency conditions,

   ι_{b'b''} ∘ ι_{bb'} = ι_{bb''}   for b ≺ b' ≺ b'' .    (3.2)

As we will see, these conditions encode, under the identification of the embedding maps with time evolution maps, a path independence requirement for the time evolution maps. Given such a system, we can define the continuum limit of the theory as an inductive limit. This limit is defined as the space of equivalence classes H := ∪_b H_b / ∼. The equivalence relation is defined as follows: two states ψ_b and ψ_{b'} are equivalent if there exists a common refinement b'' of b and b' such that ι_{bb''} ψ_b = ι_{b'b''} ψ_{b'}. In words, two states on different discretizations b, b' are equivalent if they can be refined to the same state. This notion of inductive limit allows one to embed any 'discrete state' ψ_b into the continuum Hilbert space H via an embedding ι_b.
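A toy finite-dimensional analogue (our construction, not the paper's): let the discretizations b be dyadic grids on [0, 1] with 2^b + 1 sample points, and let the embedding ι_{bb'} refine a sampled function by linear interpolation. Piecewise linear interpolation then satisfies the consistency condition ι_{b'b''} ∘ ι_{bb'} = ι_{bb''}.

```python
import numpy as np

def grid(b):
    """Dyadic grid on [0, 1] with 2**b + 1 points."""
    return np.linspace(0.0, 1.0, 2**b + 1)

def iota(b, b_fine, psi):
    """Embed samples on grid b into the finer grid b_fine by interpolation."""
    return np.interp(grid(b_fine), grid(b), psi)

psi_2 = np.sin(np.pi * grid(2))          # a 'state' on the coarse grid b = 2

via_3 = iota(3, 5, iota(2, 3, psi_2))    # refine 2 -> 3 -> 5
direct = iota(2, 5, psi_2)               # refine 2 -> 5 directly
print(np.max(np.abs(via_3 - direct)))    # the two refinement paths agree
```

Consistency holds here because interpolating an already piecewise linear function reproduces the same function; a generic refinement prescription would violate (3.2), which is the issue discussed below for refining time evolution.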
As mentioned, this construction is used in loop quantum gravity at the kinematical level; that is, the choice of embedding maps is not tied to a dynamics. Indeed, in a theory with a proper time evolution one would need to separate the refining time evolution steps into a 'purely refining' part and a 'proper evolution' part, as in the example in section 2.3. Otherwise one would identify time evolved states as equivalent.
However, in gravitational theories time evolution is a gauge transformation. In the quantum theory, the time evolution operator (1.1) is supposed to act as a projector and thus as the identity on physical states. Hence one can indeed attempt to use refining time evolution to define the embedding maps ι_{bb'} between different discretizations. As we will discuss, the difficulty is that refining time evolution maps based on 'naive' discretizations will not satisfy the consistency conditions (3.2). Here coarse graining provides a means to reach theories in which the consistency conditions are actually satisfied.

Refining time evolution and no-boundary vacuum
Let us return to the time evolution operator (kernel) defined from the path integral (1.1),

   K(X_ini, X_fin) = ∫ D X  e^{i S[X]} ,    (3.3)

where we denote by X_ini and X_fin the initial and final configuration data. In, for instance, simplicial discretizations of the path integral (3.3), the wave functions ψ_i(X_ini) and ψ_f(X_fin) might be from two Hilbert spaces H_{b_i} and H_{b_f} associated to two different discretizations b_i and b_f. Even if we consider a system with a proper time evolution, the path integral (3.3) will project onto states satisfying the pre- and post-constraints discussed in section 2. The reason is similar to the mechanism turning path integrals for gauge theories into projectors [24]; we will sketch an argument here, valid for linearized theories [70]. Consider for instance the case of post-constraints C_i(X, P), where P are the momentum variables conjugate to X.
We discussed in section 2 that these post-constraints are first class and lead to post-gauge degrees of freedom; that is, part of the configuration data X at final time remains undetermined. On the other hand, there will also be post-Dirac observables, i.e. functions on phase space that Poisson commute with the post-constraints C(X, P). The structure of the constraints allows one to make a canonical variable transformation such that the configuration variables separate into post-gauge X^G and post-Dirac X^D degrees of freedom. The constraints then involve only the variables conjugate to the post-gauge variables X^G and the variables X^G themselves.
By integrating over all bulk variables in (3.3) we can define an effective action that only depends on the initial and final configuration variables,

   K(X_ini, X_fin) ∝ exp( i S_eff(X_ini, X_fin) ) .    (3.4)

We can use the canonical variable transformation for the final configuration data X_fin. The fact that the classical action leads to post-constraints means that the effective action decouples gauge and Dirac degrees of freedom,

   S_eff(X_ini, X_fin) = S_D(X_ini, X^D_fin) + S_G(X^G_fin) .    (3.5)

This makes the appearance of the constraints C(X^G_fin, P^G_fin) obvious,

   C = P^G_fin − ∂S_G/∂X^G_fin .    (3.6)

The time evolution kernel (3.4) is therefore of the form

   K(X_ini, X_fin) ∝ exp( i S_G(X^G_fin) ) exp( i S_D(X_ini, X^D_fin) ) .    (3.7)

All states resulting from a time evolution,

   ψ(X_fin) = exp( i S_G(X^G_fin) ) ψ_D(X^D_fin) ,    (3.8)

have a prescribed factor exp( i S_G(X^G_fin) ) determining the dependence of the wave function on the gauge variables. Adopting a Schroedinger quantization scheme, with the momenta quantized as derivative operators P̂ = −i ∂/∂X and the configurations as multiplication operators, the states (3.8) satisfy the quantized constraints

   Ĉ ψ = ( P̂^G_fin − ∂S_G/∂X^G_fin ) ψ = 0 .    (3.9)

In summary, the choice of discrete action for a refining time evolution leads to constraints that determine the behaviour of the resulting wave functions in the 'finer' degrees of freedom, which here are characterized as post-gauge degrees of freedom.
As mentioned, this mechanism also holds for theories which a priori do not show any gauge symmetries, so that we deal with a proper time evolution operator. General relativity is a totally constrained system. Formal arguments show that the path integral (3.3) is equivalent to a projector onto the solutions of the Hamiltonian and diffeomorphism constraints C_I of the theory [24],

   ∫ D N^I  exp( i N^I Ĉ_I ) .    (3.10)

Here the N^I denote the Lagrange multipliers, known as lapse and shift. The integration over these multipliers induces an averaging over the action of the Hamiltonian and diffeomorphism constraints. (For a discussion of the many subtleties involved in this proposal see for instance [25,28,61].) The averaging therefore projects onto states that satisfy the constraints. In our discrete context, allowing for the possibility of discretizations changing in time, one expects that the Hamiltonian and diffeomorphism constraints will be part of the post- or pre-constraints. As mentioned, this issue is however involved, as discretizations typically break diffeomorphism symmetry, which leads to the constraints. For the moment we will ignore this issue and comment later on how to deal with it.
Thus we can hope that a simplicial discretization of a path integral describing refining time evolution will lead to states which (a) satisfy the Hamiltonian and diffeomorphism constraints and (b) in which the finer (Dirac) degrees of freedom are also put into a specific state, characterized by the remaining post-constraints.
With a simplicial path integral we can in particular consider the extreme case of a refining time evolution, that is, to start with a zero-dimensional configuration space and evolve to a large triangulated spherical hypersurface [13]. That is, the first evolution step evolves from a vertex to the boundary of a d-dimensional simplex, where d denotes the space time dimension. The wave function will just be given by the (path integral) amplitude associated to this simplex. The following evolution steps can be understood as gluing further simplices to the one we started with, by multiplying the wave function with the corresponding simplex amplitudes and integrating over all variables that become bulk.
In this case we will have at every step as many post-constraints as (configuration) variables, i.e. the reduced phase space is zero-dimensional. Indeed, all momenta P_b are generated by Hamilton's principal function S_H, i.e. the action evaluated on the solution prescribed by the boundary configurations X:

   P_b = ∂S_H/∂X_b .

We thus have constraints C_b = P_b − ∂S_H/∂X_b. These are Abelian, as the momenta come from a generating function. The phase space is foliated by the gauge orbits generated by the constraints, i.e. all configurations X are post-gauge degrees of freedom.
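The Abelian property can be checked in one line, since the momenta derive from a generating function and mixed second derivatives commute (using {X_a, P_b} = δ_ab):

```latex
\{ C_a, C_b \}
= \Big\{ P_a - \frac{\partial S_H}{\partial X_a},\;
         P_b - \frac{\partial S_H}{\partial X_b} \Big\}
= \frac{\partial^2 S_H}{\partial X_a \partial X_b}
- \frac{\partial^2 S_H}{\partial X_b \partial X_a}
= 0 .
```

Hence the constraints are not only first class but mutually Poisson commuting, so the gauge orbits foliate the entire constraint hypersurface.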
In the quantum theory this corresponds to a unique physical wave function, given by a (Hartle Hawking) no-boundary vacuum [62]. In the semi-classical approximation we have

   ψ(X) ≈ exp( i S_H(X) ) .

Here one would indeed expect the appearance of the standard vacuum, at least in the limit of infinitely large regions [87]. Gravitational theories play a special role here, as the size of the region is encoded in the state itself; thus the wave function rather gives a probability distribution for the geometrical volume of the hypersurface. 'Radial' evolution, as described here, should not change physical states, as it is just another form of time evolution. Thus we can hope that a vacuum is reached for degrees of freedom describing scales (much) larger than the discretization scale of the boundary.
We note that a framework which permits time evolution with phase spaces or Hilbert spaces that change in time allows one to define a notion of vacuum. For instance, starting with a very coarse state and refining this state in a homogeneous manner should result in a state describing homogeneous geometries. This allows for applications to cosmology based on lattice treatments, for instance [71].
An interesting question for future research is which simplicial quantum gravity models lead to an acceptable (Hartle Hawking) vacuum, and what the properties of this vacuum are.
Apart from defining a no-boundary wave function, the refining time evolution can of course also be used to refine states, and thus to provide the embedding maps needed for the construction of inductive limit Hilbert spaces, as discussed in section 3.1. Such (dynamical) embedding maps are therefore selected by taking the dynamics of the theory into account, which is particularly advisable for coarse graining [12]. Here one has however to address the issue that discretized path integrals will in general break diffeomorphism symmetry and, related to this fact, be triangulation dependent. This will be the subject of the next sections.

Path independence of evolution and consistent embedding maps
We argued that a discrete evolution starting from a zero-dimensional phase space, or a one-dimensional Hilbert space, produces a vacuum state. However, this vacuum state will in general depend on the order of the time evolution steps, which for a simplicial discretization determines the triangulation of the bulk bounded by the triangulated hypersurface on which the vacuum is defined.
Similarly, if we aim to use the refining time evolution defined by the path integral as embedding maps, the consistency conditions (3.2) will not be satisfied. These consistency conditions can now be interpreted as demanding independence of a given state from the evolution path. This can be understood as a discrete version of implementing the Dirac algebra of (Hamiltonian and spatial diffeomorphism) constraints. As pointed out in [72], the Dirac algebra implies path independence with respect to evolving through arbitrary choices of spatial hypersurfaces. This constitutes a further relation between diffeomorphism symmetry, which yields the constraints, and triangulation independence [23].
So far we discussed only the consistency conditions for the embedding maps, which are needed to make the inductive limit Hilbert space well defined. Observables on this Hilbert space also need to satisfy a condition known as cylindrical consistency: observables O_b defined on the family of Hilbert spaces H_b need to commute with the embedding maps ι_{bb'},

   ι_{bb'} ( O_b ψ_b ) = O_{b'} ( ι_{bb'} ψ_b ) ,

for all states ψ_b ∈ H_b and all pairs b ≺ b'. This ensures that the observables are well defined on the continuum Hilbert space, i.e. do not depend on the representative ψ_b chosen. In the case that ι_{bb'} is given by a refining time evolution, consistent observables therefore have to be 'refining Dirac observables'. The algebra of refining Dirac observables characterizes the resulting continuum Hilbert space, as the latter provides a representation of this algebra.

Topological theories can often be discretized such that partition functions and physical observables are triangulation independent. This includes 3D gravity, which is topological. Due to the triangulation invariance, the refining time evolution maps defined via the discretized path integral do satisfy the consistency conditions (3.2). We will illustrate this situation in section 6. As we will comment there, the set of 'refining Dirac observables' is much bigger than the set of (standard) Dirac observables given by the topological theory. This allows one to use refining time evolution maps stemming from topological theories to define (kinematical) Hilbert spaces for other theories. This can indeed be used to construct a new Hilbert space for loop quantum gravity, based on the time evolution map of BF theory [29].
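Cylindrical consistency can be illustrated in the same toy model one might use for embedding maps (our construction, not the paper's): samples on dyadic grids with linear-interpolation embeddings, and the trapezoidal integral as observable, whose value does not depend on the representative.

```python
import numpy as np

def grid(b):
    """Dyadic grid on [0, 1] with 2**b + 1 points."""
    return np.linspace(0.0, 1.0, 2**b + 1)

def iota(b, b_fine, psi):
    """Embedding by piecewise linear interpolation."""
    return np.interp(grid(b_fine), grid(b), psi)

def O(b, psi):
    """Trapezoidal integral over [0, 1]: exact for piecewise linear data."""
    x = grid(b)
    return float(np.sum((psi[:-1] + psi[1:]) * np.diff(x)) / 2.0)

psi_3 = np.cos(grid(3))                  # a state on discretization b = 3
coarse = O(3, psi_3)
fine = O(6, iota(3, 6, psi_3))           # evaluate on a refinement
print(abs(coarse - fine))                # independent of the representative
```

The integral is consistent because the trapezoidal rule is exact on the piecewise linear interpolant; a generic observable, e.g. pointwise squaring followed by integration, would fail this condition, mirroring the statement that only 'refining Dirac observables' descend to the continuum Hilbert space.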
We believe that the application of refining time evolution maps is however not restricted to topological theories, despite the challenges posed by the triangulation dependence of the path integral. The strategy for attacking this issue is to improve a given discretization by coarse graining. The fixed point of the coarse graining flow is hoped to show enhanced symmetry properties, in particular diffeomorphism symmetry, which is tied to triangulation independence [23,75].
Such a coarse graining flow leads however to non-local couplings, which are difficult to control. One would then also expect the embedding maps, if defined via refining time evolution, to be highly non-local. In section 4 we will discuss a coarse graining framework which avoids this issue, and which moreover is based on the concepts introduced so far.
Let us comment on the appearance of discretization changing time evolution in loop quantum gravity. There graph changing (actually graph refining) Hamiltonian constraints have been defined by Thiemann [34]. These constraints are anomaly free, in the sense that the commutator of two Hamiltonians vanishes if evaluated on the Hilbert space of diffeomorphism invariant states, see [34,77,64] for discussions.
What is missing is a concrete geometric interpretation of the action of these constraints and a concrete connection to the path integral. (The notion of graph changing Hamiltonians inspired the development of spin foams as time evolved spin networks [78].) The discussion here suggests a possible interpretation for the graph changing Hamiltonians, the exponentiation of which should lead to a (refining) time evolution. Thus one could attempt to extract a notion of vacuum from the Hamiltonian constraints.

Pre-constraints and coarse graining
We suggested using the refining time evolution to define embedding maps for the inductive limit Hilbert space construction. Refining time evolution leads to post-constraints, which, as we argued, characterize the (vacuum) state into which the finer degrees of freedom are put. Here we want to comment shortly on the role of the pre-constraints.
These appear for coarse graining time evolution steps, that is, steps in which the number of variables decreases. Classically these constraints demand that a state satisfies certain conditions, so that the time evolution move can be applied. By time inversion symmetry we can understand this condition in the following way: the state has to be equivalent to a refining of a coarser state. Although the state is represented on a fine triangulation, it only includes degrees of freedom in non-vacuum states at a coarser scale.
Although for classical evolution one has to satisfy these constraints, quantum mechanical evolution is always possible. This also holds for standard gauge systems: a priori a quantum state does not need to satisfy any constraints to serve as a boundary condition in a path integral. Rather, the path integral itself will project out non-physical degrees of freedom [24]. Thus one can indeed expect that in a quantum evolution the degrees of freedom which are too fine to be evolved classically (as identified by the constraints) will be projected to the vacuum. In this sense the quantum mechanical evolution automatically provides a coarse graining. Note that this will be a non-unitary evolution, as it includes a projective part. A unitary description can only be obtained if one restricts to the subspace of the Hilbert space describing only sufficiently coarse degrees of freedom, i.e. the physical Hilbert space with respect to the pre-constraints, see also [69].
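The projective, non-invertible character of such an evolution can be made concrete in a two-dimensional toy model (ours, not the paper's dynamics): a 'coarse graining' map C keeps the symmetric combination of two amplitudes, the 'refining' map R embeds a coarse amplitude back, and their concatenation is a projector that sends the fine (antisymmetric) mode to its 'vacuum' value zero.

```python
import numpy as np

C = np.array([[1.0, 1.0]]) / np.sqrt(2.0)    # coarse graining: 2 dofs -> 1
R = C.T                                       # refining = adjoint embedding
P = R @ C                                     # concatenation: a projector

print(np.allclose(P @ P, P))                 # idempotent: P^2 = P
print(np.linalg.matrix_rank(P))              # rank 1 < 2: not invertible

fine_state = np.array([0.7, -0.7])           # purely 'fine' (antisymmetric) mode
print(P @ fine_state)                        # projected to the 'vacuum' (zero)
```

Restricted to the symmetric subspace the map acts as the identity, which is the toy analogue of obtaining a unitary description only on the physical Hilbert space with respect to the pre-constraints.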
Thus time evolution cannot be inverted: concatenating coarse graining and refining, we isolate the projective part, which can be understood as projecting fine degrees of freedom to the vacuum state; that is, the state obtained will automatically satisfy the initial pre-constraints. This provides an interesting asymmetry in time evolution that might serve as an arrow of time. See [79] for another proposal on the origin of the arrow of time, in which the notion of complexity of the state is also crucial.

How to define a continuum theory of quantum gravity
The investigation of time evolution with changing phase space or Hilbert space dimension is motivated by the simplicial discretization of gravity [49,13,14]. However, such discretizations break diffeomorphism symmetry for the 4D theory [21]. This appears at the classical level [22], and in an even more severe form at the quantum level. For instance, one can show that no local path integral measure factor exists that makes the theory invariant under 5-1 moves, which leave the classical action invariant [80,81].
This implies in particular that the consistency conditions formulated in (3.2) are violated. A way out is to improve the discretization by coarse graining, see [36,23] for examples. At fixed points of the coarse graining flow one might arrive at perfect discretizations [57], for which consistency conditions of the form (3.2) are satisfied. These fixed points represent the continuum limit of the theory one started with, however expressed on a discretization.
There are different ways to proceed with the coarse graining. One is to keep basic building blocks but to allow highly non-local couplings, which are naturally induced by the coarse graining [59,58]. As was pointed out in [12], there is an alternative inspired by tensor network renormalization (which we will explain in section 5) and the generalized boundary proposal [82].
This alternative construction of a consistent theory would not put basic building blocks with their amplitudes at the centre, but rather amplitude maps for space time regions. These amplitude maps are built from the basic amplitudes and agree basically with the (dual of the) Hartle-Hawking no-boundary wave function. The amplitude maps are defined on Hilbert spaces H_b associated to the discretized boundaries b of a space time region:

    A_b(ψ_b) = ∫ dX_b ∫ dX A(X, X_b) ψ_b(X_b)
             = ⟨ K_∅b ψ_∅ | ψ_b ⟩ ,    (4.1)

where we denote the bulk configuration variables by X and the boundary variables by X_b. Thus the amplitude map applied to the wave function ψ_b is given by the inner product between this wave function ψ_b and the no-boundary wave function. This no-boundary wave function is here expressed as the time evolution operator K_∅b applied to the one-dimensional wave function ψ_∅ associated to the empty discretization. The second line in (4.1) defines the physical inner product between the projections onto physical states of the two (kinematical) states ψ_∅ and ψ_b. As usual the path integral in (4.1) is a discretized one. Thus the first task is to arrive at amplitude functionals A_b for fixed boundaries b that are independent of the bulk triangulation. One way to reach such amplitudes is by coarse graining, as will be explained in the next section 5.
For a very coarse boundary b we can triangulate the bulk with very few simplices. For instance the boundary of a simplex can be triangulated with just this simplex and thus the amplitude functional A b , discretized in this way, would be just given by the pairing of the simplex amplitude with the boundary wave function. However there are infinitely many ways to subdivide this simplex (keeping the boundary), and thus one would have to find a method to determine the actual A b . Indeed we will specify a further criterion for these amplitudes, which will actually help to construct the coarse graining flow of these amplitudes.
It is important to note that bulk triangulation independence of the amplitude maps A b is not sufficient for the construction of the continuum limit. (Indeed one could just declare some rule for selecting a particular bulk triangulation for each boundary.) We rather need to demand a condition that connects the amplitude maps A b for different boundaries b.
Thus we need first to choose embedding maps ι_bb′ that connect the different boundary Hilbert spaces, as explained in section 3.1. As we explained, there might be different sets of embedding maps, leading to different continuum Hilbert spaces. We will see that some choices are preferred over others. With a given choice of embedding map we require that the amplitude maps are cylindrically consistent functionals, that is

    A_b( ψ_b ) = A_b′( ι_bb′ ψ_b )   for b ≺ b′ .

In words, if we take a coarse state and evaluate the corresponding amplitude map A_b on it, we should get the same result as first embedding the state into the 'finer' Hilbert space H_b′ and then evaluating with the 'finer' amplitude map A_b′. Thus the result should not depend on which boundary we choose to represent the equivalence class of states [ψ_b] under the equivalence relations of the inductive limit. This allows one to define the amplitude map as a functional on the inductive (i.e. continuum) limit Hilbert space H defined in section 3.1. Such a requirement was proposed in [83] with regard to the (kinematical) embedding maps of the Ashtekar-Lewandowski Hilbert space. We will argue here that the construction of cylindrically consistent amplitudes is facilitated by the adoption of dynamical embedding maps, as provided by refining time evolution. The amplitude map A_[b] is then technically no longer labelled by a discretization as such, but by equivalence classes of discretizations. Here two discretizations are equivalent if they can be refined to the same discretization. Thus, the information that is left over could just carry topological information (for gravitational theories, where metric variables are dynamical) of the boundary.
In our case we assumed spherical topology; thus a cylindrically consistent family of amplitude maps defines a continuum amplitude A. This amplitude A replaces the basic amplitude for, say, the boundary of a simplex, with which one starts in the regularization of the path integral. We can recover a 'perfect' amplitude by evaluating A on states that are equivalent, under the chosen embedding map, to states defined on a simplex boundary.
The cylindrical consistency requirement for the amplitude maps is a very strong one: it basically encodes the solution of the theory. We can hope to build such amplitude maps iteratively, for more and more refined boundaries, as will be the subject of the next section. To this end it is important to choose embedding maps that are adapted to the dynamics of the system [12]. In particular we suggested that refining time evolution should give good embedding maps. A priori these will typically fail to satisfy the consistency requirement (3.2). However, the improved amplitude maps A_b also allow one to define an improved discretization of the path integral and thus a (refining) time evolution that satisfies the consistency requirement to a better approximation.^13 In an iterative process one therefore improves both the amplitude maps and the embeddings, if the latter are defined by refining time evolution.
The reason why such embeddings are particularly apt to define cylindrically consistent amplitudes lies in the definition of the amplitudes in (4.1). If ι_bb′ = K_bb′ we will have

    A_b′( ι_bb′ ψ_b ) = ⟨ ψ_∅ | (K_∅b′)† ∘ K_bb′ | ψ_b ⟩ ∼ ⟨ ψ_∅ | (K_∅b)† | ψ_b ⟩ = A_b( ψ_b ) .    (4.3)

Here we wrote ∼ in the second equation, as (K_∅b′)† ∘ K_bb′ ∼ (K_∅b)† holds only approximately in the discretization. However, we see that embedding maps defined via refining time evolution simplify the task of constructing cylindrically consistent amplitudes. Indeed, the consistency conditions for these embedding maps are tied to the cylindrical consistency of the amplitudes. We want to remark that we do not require a consistent gluing between the cylindrically consistent amplitude maps as long as this gluing is performed on a discrete boundary b. (That is, the gluing involves integration only over the variables X_b.) One could for instance require that the amplitude for a region with a more complicated boundary, A_b3, arises as the gluing of two amplitudes with less refined boundaries, A_b1 and A_b2, similar to the way one would glue simplices together. We expect that such a relation might indeed hold, however only if one performs a continuum limit for the piece of boundary that is glued over.
Let us emphasize that the amplitude maps A_[b] are the end point of a construction to reach the continuum limit of the theory. Of course one hopes that the 'initial' theory, defined via basic building blocks and local couplings, provides the basis for the construction of such a theory. This implies that this 'initial' theory can nevertheless be used to extract sufficiently coarse grained observables from sufficiently refined discretizations.

^13 The consistency equations can be tested by considering them on matrix elements. Under an iterative improvement the equations will be satisfied for a larger and larger class of states, involving finer and finer boundaries.
As mentioned, we aim at constructing both cylindrically consistent amplitudes and consistent embedding maps given by refining time evolution. In the next section 5 we will explain that tensor network coarse graining tools provide methods to construct these.

Tensor network coarse graining: time evolution in radial direction
Tensor network renormalization group methods [84,10,11,85] can be understood as implementing an iterative method to construct cylindrically consistent amplitudes. Coming back to equation (4.3), we can understand the second term to consist of two parts: the first is the computation of ⟨ψ_∅|(K_∅b′)†, which is basically the amplitude functional A_b′ for a more refined boundary. One would build such an amplitude functional by gluing amplitudes A_b for less refined boundaries b. However we want to define an iterative process that improves the amplitude maps A_b, which are functionals on H_b. We thus have to find a way to pull the amplitudes A_b′ back to H_b, which is done by using the embedding map ι_bb′ = K_bb′. Thus one defines the improved amplitudes

    A^imp_b( ψ_b ) := A_b′( K_bb′ ψ_b ) .

Here both (K_∅b′)† and K_bb′ are built using the initial A_b as basic amplitudes. The process is repeated for the improved amplitudes A^imp_b until the procedure converges to a fixed point A^fix_b. This fixed point amplitude can be used to proceed to a more refined pair of boundaries (b′, b′′) with b′ ≺ b′′, to find the next fixed point amplitude A^fix_b′, and so on. There are many tensor network renormalization algorithms [84,10,11,85], which differ in their geometric setup and in the details of how (K_∅b′)† and the embedding ι_bb′ are defined. We will briefly explain a method that can be interpreted as radial evolution, as this also matches nicely with amplitudes being defined via the no-boundary wave function, as in (4.1).
The name tensor networks refers to the fact that the amplitudes of a space time region are encoded in a tensor of a given rank n, associated to an n-valent vertex, which we can imagine to sit inside this space time region. The indices of this tensor encode the boundary data of the space time region; hence contracting the tensors of two neighbouring regions corresponds to gluing the associated amplitudes. The rank n and the bond dimension χ (the number of values each index can take) determine the amount of boundary data and hence the fineness of the boundary in question. Note that one can redefine higher-rank tensors as tensors of lower rank by combining, for instance, two indices (i, j) with bond dimensions χ_i, χ_j into an effective index I = (i, j) with bond dimension χ_I = χ_i · χ_j. This interpretation matches nicely with spin nets [86] and spin foams describing gravitational dynamics. The former can be naturally understood as tensor networks [2,8,9]. A tensor network description of spin foams can be found in [2].
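This index bookkeeping (gluing as contraction of a shared index, rank lowering as combining an index pair) can be made concrete in a few lines of numpy; the tensors below are random stand-ins for actual amplitudes, not spin foam data:

```python
import numpy as np

rng = np.random.default_rng(0)
chi = 3  # bond dimension: number of values each index takes

# Two rank-4 tensors, one per space-time region (hypothetical random amplitudes).
T1 = rng.standard_normal((chi, chi, chi, chi))
T2 = rng.standard_normal((chi, chi, chi, chi))

# Gluing two neighbouring regions = contracting the shared index.
# Here leg 'i' of T1 is glued to leg 'i' of T2; the result is a rank-6
# tensor whose free legs are the boundary data of the combined region.
glued = np.einsum('aibc,defi->abcdef', T1, T2)
assert glued.shape == (chi,) * 6

# Lowering the rank by combining indices: the last pair of indices with
# bond dimensions (chi, chi) becomes one effective index I of dimension chi**2.
effective = glued.reshape(chi, chi, chi, chi, chi * chi)
assert effective.shape[-1] == chi ** 2
```

The reshape is lossless: it only regroups the boundary data, while the amplitude itself is unchanged.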

Radial evolution
As we will see, tensor network algorithms are related to transfer matrix methods in which the (Wick rotated) time evolution operator is diagonalized. For the latter, Wick rotation is essential, as the eigenvalues of the transfer operator need to be ordered by size and in this way distinguish relevant from irrelevant degrees of freedom. However one can understand tensor networks to replace the time evolution operator with a radial evolution operator. Even if the (standard) time evolution operator is unitary, and hence has all eigenvalues of absolute value one, the radial evolution operator will include a projective part that, as we have argued, projects out finer degrees of freedom. This can then be used for the construction of an embedding map.

Figure 6: Illustration of radial time evolution in tensor networks: by adding eight additional tensors (in gray) we perform one time evolution step. The boundary data grows exponentially from χ^4 to χ^48.
An evolution in the radial direction is also expected to project onto the vacuum state [87,10]; see also the discussion in section 3.2, which involves the non-Wick-rotated amplitudes.^14 Consider a radial evolution as in figure 6. Here the amplitude/tensor for a larger region is built from the amplitude/tensor of a basic building block, represented by a (dual) vertex. One would now like to treat the amplitude for the new region as an effective tensor and repeat the procedure. However one has to face the problem that the amount of boundary data grows exponentially during this procedure, making it impossible to implement in practice.
We thus have to find a method to project the amplitudes back to a boundary with less data, i.e. to coarser boundaries. The radial time evolution can be split into steps with time evolution operators

    T(R_1, R_2) : H_1 → H_2 ,

evolving from radius R_1 to radius R_2 > R_1. This is a refining time evolution in the sense that the Hilbert space H_2 at R_2 will have more kinematical degrees of freedom than the Hilbert space H_1 at R_1. Thus T(R_1, R_2) will have a projective part (even without Wick rotation), which can be identified by a singular value decomposition. This gives a maximal number of dim(H_1) singular values. Hence T(R_1, R_2) will have a non-trivial co-image in H_2, which can be projected out. A new amplitude can therefore be defined on a dim(H_1)-dimensional subspace, which can be identified as the subspace carrying coarse boundary data. Such a scheme might indeed be worthwhile to investigate further (for statistical systems), in order to obtain an intuition about the truncations. One would however expect that the reorganization of the degrees of freedom via the singular value decomposition will be highly non-local, as we have seen for the example in section 2.1. The time evolution considered there corresponds exactly to T(R_1, R_2). This non-locality makes it difficult to turn this into an iterative procedure: the new amplitudes will be expressed with respect to data spread over the entire boundary, which makes a local gluing of these amplitudes difficult.
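The rank bound stated here can be illustrated with a generic linear map between Hilbert spaces of dimensions dim(H_1) < dim(H_2); the random matrix below is a stand-in for T(R_1, R_2), not an actual radial evolution operator:

```python
import numpy as np

rng = np.random.default_rng(3)
dim1, dim2 = 4, 16   # dim H_1 < dim H_2: refining time evolution

# A generic refining map T(R1, R2) : H_1 -> H_2, represented as a
# dim2 x dim1 matrix (hypothetical random entries).
T = rng.standard_normal((dim2, dim1))

# Its singular value decomposition has at most dim(H_1) singular values,
# so T necessarily has a non-trivial co-image in H_2 ...
lam = np.linalg.svd(T, compute_uv=False)
assert len(lam) == dim1

# ... and the image of T spans a subspace of dimension <= dim(H_1),
# which is where the new amplitude can be defined.
rank = np.linalg.matrix_rank(T)
assert rank <= dim1
```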

Truncations via singular value decomposition
In practice one therefore employs schemes which involve more local truncations. The basic idea is as follows. Imagine two space time regions or effective vertices connected with each other by two edges, representing the summation over a certain set of variables, see figure 7. We would like to replace these edges, carrying an index pair {α, β} of size χ^2, with an effective edge carrying only χ indices. We choose an optimal truncation for the summation over the index pair {α, β}, which is given by the singular value decomposition of M_{Aαβ}:

    M_{Aαβ} = Σ_i U_{Ai} λ_i V_{iαβ} ,    (5.4)

where λ_1 ≥ λ_2 ≥ ... ≥ λ_{χ^2} ≥ 0 are the (positive) singular values, and U, V are unitary matrices. The truncation then consists in dropping the smaller set of singular values λ_i with i > χ. Pictorially, V_{iαβ} restricted to i ≤ χ defines a three-valent vertex, and we can use these three-valent vertices as in figure 7 to arrive at a coarse grained region with less boundary data.

^14 A (Wick rotated) radial evolution operator exp(−∫_0^R H_r dr) acts as a projector onto the ground states of the Hamiltonian H for R going to infinity. Here H_r denotes the Hamiltonian for radial evolution at radius r. For large radius we will have small dr/r, and hence H_r approaches the Hamiltonian H for the time evolution of constant volume hypersurfaces.
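A minimal numpy sketch of this truncation step, with a random matrix standing in for M_{Aαβ} (the index pair (α, β) already flattened into one column index):

```python
import numpy as np

rng = np.random.default_rng(1)
chi = 4

# M_{A, (alpha beta)}: the object to be split, with the index pair
# (alpha, beta) of size chi**2 flattened into one column index.
M = rng.standard_normal((chi ** 2, chi ** 2))

U, lam, Vh = np.linalg.svd(M)            # M = U diag(lam) Vh
assert np.all(np.diff(lam) <= 1e-12)     # lam_1 >= lam_2 >= ... >= 0

# Truncate: keep only the chi largest singular values. The kept rows of
# Vh, reshaped, give the three-valent vertex V_{i alpha beta} with i <= chi.
V3 = Vh[:chi, :].reshape(chi, chi, chi)

# The truncation error equals the norm of the dropped singular values
# (Eckart-Young theorem: the SVD truncation is the optimal rank-chi one).
M_trunc = U[:, :chi] @ np.diag(lam[:chi]) @ Vh[:chi, :]
err = np.linalg.norm(M - M_trunc)
assert np.isclose(err, np.sqrt(np.sum(lam[chi:] ** 2)))
```

The last assertion is the sense in which this truncation is 'optimal': no other rank-χ replacement of the index pair does better in the Frobenius norm.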

Embedding maps and truncations
We can understand the tensors V as coarse graining maps. Alternatively, read in the other direction, these maps provide the embeddings ι_bb′ from a coarser to a finer discretization. This interpretation comes from seeing the partition function (with boundary) as a functional (or amplitude map) A_b on a 'boundary' Hilbert space H_b. Gluing several space time regions together we obtain a partition functional A′_b′ which a priori acts on a Hilbert space H_b′ with a finer boundary, see figure 8. We can however pull back this functional with the embedding map defined via the tensors V and in this way obtain an effective amplitude map A′_b:

    A′_b( ψ_b ) := A′_b′( ι_bb′ ψ_b ) .

This gives a renormalization flow for the amplitude maps, and a fixed point is reached if

    A_b( ψ_b ) = A′_b′( ι_bb′ ψ_b ) .    (5.5)

We see that it is essential to construct three-valent vertices (with their associated tensors), which we can see as special cases of coarse graining or refining maps. These three-valent vertices should be adapted to the four-valent ones giving the 'regular' dynamics. They can be understood to give a coarse graining or refining version of the regular evolution defined by the four-valent vertices. Note that this adaptation has to happen after each of the coarse graining steps, as the four-valent tensors flow under coarse graining.
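The pull-back defining this flow can be sketched by representing the fine amplitude map A′_b′ as a vector (a linear functional) and the embedding ι_bb′ as an isometry; both are random stand-ins here, assumed only to have the right shapes:

```python
import numpy as np

rng = np.random.default_rng(2)
dim_b, dim_bp = 4, 16   # dim H_b < dim H_b': coarse vs fine boundary

# Fine amplitude map A'_{b'} as a linear functional on H_{b'},
# represented by a vector: A'(psi) = <a_fine, psi>.
a_fine = rng.standard_normal(dim_bp)

# Embedding iota_{bb'} : H_b -> H_{b'} as an isometry (e.g. built from
# the SVD tensors V); its columns are orthonormal.
iota, _ = np.linalg.qr(rng.standard_normal((dim_bp, dim_b)))
assert np.allclose(iota.T @ iota, np.eye(dim_b))

# Pull-back: the effective coarse amplitude A'_b = A'_{b'} o iota_{bb'}.
a_coarse = iota.T @ a_fine

# Check on a sample coarse state: evaluating A'_b directly agrees with
# embedding first and then evaluating the fine amplitude.
psi_b = rng.standard_normal(dim_b)
assert np.isclose(a_coarse @ psi_b, a_fine @ (iota @ psi_b))
```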
The singular value decomposition (or generalizations as in [85]) provides one method to construct such three-valent tensors. Interestingly, geometric theories such as spin foams [16] or spin nets [86,2] already provide descriptions for vertices of arbitrary valence [88,89]. Thus one can imagine a lattice of, say, four-valent and three-valent vertices, which automatically implements the coarse graining procedure. However, for these vertices to provide good truncations, in the sense of approximating the summation over an index pair well by a summation over just one index, one needs to adapt the three-valent vertices to the dynamics encoded in the four-valent vertices. That is, the embedding maps have to flow together with the effective amplitude maps. Such a relation is provided by the singular value decomposition in (5.4). Alternatively, spin foam construction tools [88,89,9] provide methods for building vertices of arbitrary valency out of a given vertex. This construction also has to be performed at every step of the coarse graining. It will be interesting to see whether such a method gives a good truncation. The advantage of such a method is that it might be far easier to implement for spin foams than the singular value decomposition, and might lead to a closed flow equation.

Embedding maps for the fixed points
Ultimately one would expect that the relation between vertices of different valencies is just given by gluing, i.e. that a four-valent vertex is given by the gluing of two three-valent vertices. Indeed this relation can be obtained from the singular value decomposition (5.4) in case (a) all non-vanishing singular values are equal to one and (b), if one is working in a truncation, the number of non-vanishing singular values is smaller than χ. These conditions are satisfied at the stable fixed points of the renormalization flow (describing the phases of a given system), see for instance [8].
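Conditions (a) and (b) can be checked explicitly on a standard triangulation invariant fixed point tensor, the 'delta' tensor familiar from tensor network studies of symmetry-broken phases (used here purely as an illustration, not as a spin foam fixed point):

```python
import numpy as np

chi = 3

# A simple triangulation-invariant fixed-point tensor: the 'delta' tensor
# T_{abcd} = 1 iff a = b = c = d.
T = np.zeros((chi, chi, chi, chi))
for a in range(chi):
    T[a, a, a, a] = 1.0

# Split as in the singular value decomposition (5.4) of the text:
M = T.reshape(chi ** 2, chi ** 2)
lam = np.linalg.svd(M, compute_uv=False)

# Condition (a): all non-vanishing singular values equal one.
# Condition (b): their number (here chi) stays below the cut-off chi**2.
nonzero = lam[lam > 1e-12]
assert np.allclose(nonzero, 1.0)
assert len(nonzero) == chi
```

For such a tensor the SVD split into two three-valent vertices is exact, which is precisely the gluing relation described above.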
Condition (a) is expected to arise in theories with diffeomorphism symmetry, where time evolution is a projector. (Again, the problem is that diffeomorphism symmetry is broken under discretization, so the projector property does not hold exactly and might rather be expected to emerge after sufficient coarse graining. For a computation of the transfer matrix in spin foam theories and a discussion of whether these are projectors, see [86,7].) Condition (b), in case one is working with a cut-off, basically imposes a topological theory for fixed points that are triangulation invariant.^15 For instance, fixed points identified in [2,8,9] via tensor network coarse graining describe triangulation invariant and therefore topological models, and χ gives the maximal number of propagating degrees of freedom. Indeed we will see in section 6 that all the proposals outlined here are explicitly realized. For an interacting theory, such as 4D gravity, one would expect to need an infinite bond dimension χ, as indeed arises at phase transitions. With a fixed χ one can however approach the phase transition up to a certain precision, and, as the method is designed to keep the variables describing coarse excitations, one can hope to obtain reliable predictions for sufficiently coarse observables. In light of the previous discussion this means obtaining amplitude functionals which satisfy the cylindrical consistency conditions to a good approximation for coarse boundaries.

Topological theories
The previously introduced and discussed concepts of time evolution via coarse graining / refining and the concept of cylindrically consistent amplitudes are perfectly realized in topological field theories. In the following we would like to emphasize a few key points. In the first part of this section, we will mainly refer to topological lattice field theories in 2D, for instance [90]. For theories with a geometric interpretation, see for instance [76,9].
Consider a 2D lattice topological field theory with partition functions defined on three-valent graphs. As for a three-valent tensor network we have weights or tensors associated to the vertices and variables and hence Hilbert spaces associated to the edges. The partition function is then defined by summing the variables associated to bulk edges. In case of a boundary, we keep the corresponding variables fixed, thus obtaining a partition function depending on these boundary values. Alternatively we can understand the partition function as an operator (between two boundary Hilbert spaces) or a function (on one boundary Hilbert space).
In this vein a three-valent graph represents the simplest graph and its associated partition function (6.1), interpolating between a two-site boundary Hilbert space on the lower boundary and a one-site boundary Hilbert space on the upper boundary. In this case we can understand this partition function as a (coarse graining) time evolution map, not involving any bulk summation. The same holds for the time inverted vertex. Topological lattice theories are triangulation invariant (referring to the triangulation dual to the graph). Thus the partition function depends only on the topology of the manifold and not on the choice of triangulation. (We will later show that in a certain sense this also holds for the boundary triangulation, due to the cylindrical consistency of the partition function.) Pictorially this corresponds to equalities between the graphs related by the 2−2 move, and by the 3−1 move up to a constant c (6.2). Here we always assume a summation over the variables or indices associated to the bulk edges. The equations have to hold for all possible choices of the boundary variables. In this section we will assume that the constant c is finite and hence can be adjusted to c = 1 by a rescaling of the amplitudes. Given the 2−2 move invariance, we can replace the 3−1 move by the so-called bubble move (6.3). The equivalence of the bubble move and the 3−1 move (given that the 2−2 move holds) follows from a short diagrammatic calculation (6.4).

Time evolution as coarse graining and refining
Using this pictorial representation we can symbolize a local time evolution operator or transfer matrix T acting on a two-site boundary Hilbert space as a diagram with time flowing upward. From the bubble move we see that the time evolution operator is actually a projector,

    T ∘ T = T ,

hence its eigenvalues are λ_i = 1 or λ_i = 0. The construction of dynamical embedding maps via a singular value decomposition is trivial, and it is straightforward to split the projector into two maps, one that can be interpreted as a coarse graining C and the other as a refining R (6.7). Each of these maps is a map between Hilbert spaces of different dimension. Concatenating the two gives either the time evolution operator back, R ∘ C = T, or just the identity, C ∘ R = id. This tells us both that the refined state does not carry additional information and that no (physical) information is lost under coarse graining. We will extend this to arbitrarily large triangulations below. The time evolution operator acts locally, such that it is possible to evolve a state in time only locally, as in (6.8). However, the order in which one time evolves pairs of lattice sites is an arbitrary choice, one which should not influence results of the theory such as the partition function. Therefore we impose the following consistency condition, which is satisfied in topological field theories and allows us to define the transfer matrix for three discretization sites uniquely (6.9). This construction can be generalized to arbitrarily many discretization sites. In all cases we can replace the time evolution map with a graph of the form on the right hand side of equation (6.9). Thus the maximal rank of this time evolution map (which is a projector) is given by the bond dimension of the kink in the middle, i.e. the bond dimension of one edge. This also gives the maximal number of physical degrees of freedom.
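The splitting T = R ∘ C with C ∘ R = id can be illustrated numerically for any orthogonal projector; the projector below is random, standing in for the topological transfer matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
dim2, dim1 = 6, 3   # fine Hilbert space H_2, coarse/physical space H_1

# Build a rank-dim1 orthogonal projector T on H_2 (eigenvalues 0 or 1),
# mimicking the topological time evolution operator with T o T = T.
Q, _ = np.linalg.qr(rng.standard_normal((dim2, dim1)))
T = Q @ Q.T
assert np.allclose(T @ T, T)

# Split into coarse graining C : H_2 -> H_1 and refining R : H_1 -> H_2.
C = Q.T
R = Q

assert np.allclose(R @ C, T)                 # refine after coarse graining: T back
assert np.allclose(C @ R, np.eye(dim1))      # coarse grain after refining: identity
```

The second assertion is the statement that no physical information is lost: on the coarse space the concatenation acts as the identity.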

Consistent embedding maps and inductive limit
Furthermore, if one cuts this diagram into two pieces at the kink, one obtains both more general coarse graining and refining maps. Hence the consistency conditions naturally translate to both the coarse graining and the refining maps, as we demonstrate for the refinement map R in (6.10). Thus these refinement maps satisfy the path independence conditions outlined in section 3.3, and therefore allow the construction of an inductive limit of Hilbert spaces as described in section 3.1. This gives the continuum limit of this theory. Furthermore, understanding the partition function A_b (with boundary b) as a functional on the boundary Hilbert space, we also obtain that the partition function is a cylindrically consistent functional [76],

    A_b( ψ_b ) = A_b′( ι_bb′ ψ_b ) ,

which coincides with the fixed point condition (5.5). Here ι_bb′ is built from the refinement maps R, in the way described above. Given a boundary b we choose a triangulation (or dual three-valent graph) interpolating this boundary. As long as we choose a fixed topology for this interpolating triangulation, the partition function will not depend on this choice and hence is a well defined functional on the boundary Hilbert space. Additionally we can refine the boundary via a refinement move. An interpolating triangulation can then be obtained by just including a coarse graining move at the appropriate dual edge. This will give a 'bubble' that can be removed due to the bubble move invariance, and we arrive at the previous partition function acting on the unrefined Hilbert space. Hence the partition function (actually a family of functionals labelled by the boundaries b) is cylindrically consistent with respect to the embeddings provided by the refining time evolution. This automatically allows us to define from the (so far) discrete partition functions a continuum limit on the inductive limit Hilbert space. This is to our knowledge a new insight, as topological theories are usually discussed only with regard to the invariance of the bulk triangulation.
The refinement maps can also be used to construct the Hartle-Hawking vacuum states mentioned in section 2. To this end one has either to dualize one edge (graphically, a bend or cup). Alternatively, in examples where the edges are labelled by SU(2) spins, we start with a refining map for which we fix j = 0 on the incoming edge, which gives the Hilbert space C associated to this edge. The (two-site) Hilbert space can then be refined further in an arbitrary way, giving the Hartle-Hawking vacuum state on boundaries with different numbers of sites.
Doing this in a linear way, i.e. as for the graph on the right in equation (6.10), gives matrix product states (MPS) [91]. MPS provide ansätze for ground state wave functions of Hamiltonians. The projectors T defined above correspond to exponentiated Hamiltonians, and the type of MPS defined in (6.10) is the ground state of the following Hamiltonian:

    H = Σ_I ( 1 − T_I ) ,    (6.13)

where I denotes the index pair the projector is acting on, for a total of N outgoing legs. Thus Hartle-Hawking vacuum states appear here as ground states of the Hamiltonians (6.13), again justifying the notion of these states as vacuum states.
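That a state invariant under every local projector T_I is a zero-energy ground state of H = Σ_I (1 − T_I) can be verified in a toy model; the pair projector chosen below (onto span{|00⟩, |11⟩}) is a hypothetical stand-in, not the actual topological transfer matrix:

```python
import numpy as np
from functools import reduce

# Local projector T on a pair of neighbouring sites (d = 2), chosen for
# illustration as the projector onto span{|00>, |11>}.
d, N = 2, 4
T_pair = np.zeros((d * d, d * d))
T_pair[0, 0] = T_pair[3, 3] = 1.0
assert np.allclose(T_pair @ T_pair, T_pair)

def embed(op, site, n):
    """Act with a two-site operator on sites (site, site+1) of an n-site chain."""
    factors = [np.eye(d)] * n
    # replace two identity factors by the two-site operator
    ops = factors[:site] + [op] + factors[site + 2:]
    return reduce(np.kron, ops)

# H = sum_I (1 - T_I), cf. equation (6.13).
H = sum(np.eye(d ** N) - embed(T_pair, i, N) for i in range(N - 1))

# A state left invariant by every local projector is a zero-energy ground
# state; here the product state |0000>.
psi = np.zeros(d ** N); psi[0] = 1.0
assert np.allclose(H @ psi, 0.0)
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)   # H is positive semi-definite
```

Since each term 1 − T_I is positive semi-definite, any state annihilated by all of them saturates the lower bound of the spectrum, which is the sense in which the MPS built from the projectors is a vacuum state.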

3D topological theories and entangling moves
Similar statements hold for higher dimensional theories, for instance BF theories. There are however interesting differences pertaining to the role of Pachner moves in discrete topological theories based on triangulations, such as the Turaev-Viro models [93]. The physical states of these models can be described as string net states [94]. For a canonical time evolution in (2+1)D we will have 3−1, 1−3 and 2−2 moves as time evolution moves, as described in section 2. These allow one to build an arbitrarily complex triangulated hypersurface from a simple one. In this way one can build up an analogous MPS representation of a string net state on arbitrarily complex 2D triangulations, or on the corresponding dual graphs. The 3−1 and 1−3 moves serve as (purely) coarse graining or refining moves, whereas the 2−2 moves (dis-)entangle the degrees of freedom. The latter play an important role in entanglement renormalization [31,48].
Interestingly, MPS in one (spatial) dimension do not lead to long range entanglement [95], whereas the example just described gives a phase with long range entanglement in two (spatial) dimensions [48]. This might be due to the necessity of the 2−2 move for obtaining triangulations not equivalent to a stacked sphere (which would not support long range entanglement). In the case of the stacked sphere the consecutive 1−3 moves can be represented by a tree graph. In (1+1)D all triangulations of a circle can be obtained as 'stacked spheres', which are dual to trees.
In (3 + 1)D we have similarly 1 − 4 and 4 − 1 moves as refining and coarse graining moves respectively. As mentioned before the 4 − 1 move does not add physical degrees of freedom, as all additional degrees of freedom are associated to Hamiltonian and diffeomorphism constraints [22,13]. Additionally we have 2 − 3 and 3 − 2 moves, which can be interpreted as entangling moves, similarly to the 2 − 2 move in (2 + 1)D.

Constructing inductive limit Hilbert spaces for non-topological theories
We discussed that the embedding maps provided by the refining time evolution of topological theories satisfy the consistency conditions (3.2). Thus one can use these embeddings to construct an inductive limit Hilbert space as outlined in section 3.1.
Note that this Hilbert space will support a much bigger class of observables than just the Dirac, i.e. topological, observables of the topological theory. The set of observables supported by this Hilbert space is determined by the cylindrical consistency conditions (3.13) for observables. The cylindrically consistent observables then describe excitations from the vacuum state, which is given by the no-boundary wave function of the topological theory. Thus the excitations in particular violate the constraints (implementing the equations of motion) of the topological theory. This is a general proposal for the construction of inductive Hilbert spaces. It will be interesting to explore in more detail the relation between the set of cylindrical observables which characterize the inductive Hilbert space and the topological theory which provides the embedding maps.
This strategy to construct an inductive limit Hilbert space has been realized recently [29] for the topological BF theory, which describes the moduli space of flat connections. In this case the excitations are parametrized by curvature observables. The set of cylindrically consistent observables is given by a holonomy-flux algebra underlying the formulation of loop quantum gravity and (lattice) gauge theories. This method therefore resulted in an alternative representation and a new vacuum for loop quantum gravity.

Geometric interpretation of the refining maps
Here we wish to point out that geometric theories are very special with regard to coarse graining and refining.^16 This is due to the fact that the geometry itself is included in the set of dynamical variables. The (say, semi-classical) state on the boundary of a region determines a geometry for the bulk, defined as the solution of the Einstein equations for the boundary data the state is peaked on. (This of course assumes that one has a sensible theory of quantum gravity, which would result in a semi-classical state for the Hartle-Hawking state.) Thus equating the (geometric) scale with the number of coarse graining steps is, at least a priori, meaningless in such theories. A renormalization scale is rather given by the coarseness or fineness of the boundary data, that is, the scale on which geometric properties are prescribed.
Even if one peaks the boundary state on a given geometry with a fixed (hypersurface) volume, one cannot expect the partition function to peak on some regular bulk geometry in which the bulk volumes are bounded by the hypersurface volume.
The reason is that one expects diffeomorphism symmetry to emerge in the form of vertex translation invariance. This symmetry even allows the vertices to be moved such that the orientations of building blocks are inverted. This corresponds to 'spikes' in the geometry, see for instance [97,38]. These spikes give rise to divergences [20,37,38,39], related to the non-compactness of the diffeomorphism gauge orbits. As we will argue below, this mechanism allows the appearance of arbitrarily large spins even in a region bounded by a small boundary geometry. This might make even a theory describing flat geometry, such as 3D BF theory, appear highly fluctuating. However (almost) all of these fluctuations are gauge fluctuations [37], due to the diffeomorphism gauge symmetry. We will illustrate this with a 2D example below.
The relation between the sum over orientations and divergences has been pointed out in [97], which also argues that allowing only one orientation could cure the problem of divergences. However, we will show here that, from the perspective of time evolution as a refining and coarse graining map, the appearance of two orientations is very natural. (It is also natural since the gravitational constraints are quadratic in the momenta, describing time-symmetric evolution. The two solutions of the quadratic equation correspond to the two orientations.)

2D example
Let us illustrate this with the intertwiner models introduced in [76]. These host families of topological theories, for which all that has been said in section 6 applies. Moreover these theories allow for a natural geometric interpretation, as they are defined on three-valent graphs. The edges carry a spin j (an SU(2) representation) and a magnetic quantum number. The spin can be interpreted as a length variable; indeed, at the three-valent vertices triangle inequalities have to be satisfied, arising from SU(2) recoupling theory.
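To make this admissibility condition concrete, a minimal sketch of the coupling check at a three-valent vertex can be written as follows. The function `admissible` is our own illustrative helper, not part of the models in [76]; it encodes the standard SU(2) conditions, namely the triangle inequality together with an integer total spin:

```python
from fractions import Fraction


def admissible(j1, j2, j3):
    """Check the SU(2) coupling conditions at a three-valent vertex:
    the triangle inequality |j1 - j2| <= j3 <= j1 + j2 and an
    integer total spin j1 + j2 + j3."""
    j1, j2, j3 = (Fraction(j) for j in (j1, j2, j3))
    triangle = abs(j1 - j2) <= j3 <= j1 + j2
    integer_total = (j1 + j2 + j3).denominator == 1
    return triangle and integer_total
```

With spins interpreted as lengths, `admissible` simply decides whether three edges can close into a triangle; for example, two spin-1/2 edges can couple to a spin-1 edge but not to a third spin-1/2 edge.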
Consider one upward pointing line as in the examples before: this line can be interpreted as a line in a (2D) space, with a length given by the spin j. Using our previously defined refining maps R, we can map it to a different state, which is now labelled by two spins j′ and j′′ that have to satisfy the triangle inequalities with j. However, this picture, which adds a triangle, is indistinguishable from adding a triangle with the opposite orientation and hence 'removing' it. Thus for a quantum theory both possible orientations have to be taken into account in a superposition. This picture conforms with refining the edge and adding a vacuum degree of freedom (this degree of freedom is not physical, as we are considering a topological theory here). The vacuum degree of freedom allows fluctuations of the edge geometry around a flat subdivision, in the sense that the refined edge can bend upwards or downwards. Any asymmetry would appear as proper time evolution, which we do not expect for a topological or gravitational theory. In this case time evolution is generated by constraints and hence is pure gauge; as explained before, it is realized as a projector in the quantum theory.
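The superposition over both orientations can be sketched as follows. The uniform amplitude and the spin cutoff are illustrative placeholders, not the actual amplitudes of the intertwiner models; only the admissibility condition and the doubling over orientations reflect the discussion above:

```python
from fractions import Fraction
from itertools import product


def refine_edge(j, max_spin):
    """Illustrative refining map: replace an edge of spin j by two edges
    (j1, j2) satisfying the SU(2) coupling conditions with j.  Both
    orientations of the added triangle ('up' and 'down') enter with equal
    weight, as required by vertex translation (gauge) symmetry.  The uniform
    amplitude 1 is a placeholder for the actual model amplitude."""
    j = Fraction(j)
    half = Fraction(1, 2)
    spins = [k * half for k in range(int(Fraction(max_spin) / half) + 1)]
    state = {}
    for j1, j2 in product(spins, repeat=2):
        triangle = abs(j1 - j2) <= j <= j1 + j2
        integer_total = (j + j1 + j2).denominator == 1
        if triangle and integer_total:
            for orientation in ("up", "down"):
                # equal weight on the whole gauge orbit of configurations
                state[(j1, j2, orientation)] = 1
    return state
```

Every admissible pair (j′, j′′) appears twice, once bending upwards and once downwards, with the same weight; without a cutoff the sum over the gauge orbit is unbounded, which is the origin of the divergences discussed above.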
Note that the fluctuations can be arbitrarily large, as argued above. Here this can be linked to diffeomorphism symmetry, realized as a vertex translation symmetry. The middle vertex can be translated an arbitrarily large distance forward or backward (or sideways) in 'time'. Moreover, as this is a gauge symmetry, all such configurations have to be gauge equivalent, i.e. come with the same amplitude (and a diffeomorphism invariant measure).

Spin foams
A similar picture applies to spin foams, where gluing a simplex to a boundary can be done with two orientations. From the semi-classical expressions for the simplex amplitude we again obtain a geometric picture: with a 1 − 4 move (from gluing a 4-simplex to one boundary tetrahedron) we replace a boundary tetrahedron with a complex of four tetrahedra, which now allows the inner geometry of the original tetrahedron to fluctuate around a flat subdivision. Note that also in this case one actually did not add a physical degree of freedom, at least not if one deals with Regge geometries [40,49,13]. The reason is that the new kinematical degrees of freedom (the four new edge lengths) are accompanied by four (Hamiltonian and diffeomorphism) constraints, associated to the new vertex. These allow one to move the additional vertex arbitrarily forward or backward in time, explaining the appearance of the two orientations. As before, the diffeomorphism or vertex translation symmetry also means that configurations with arbitrarily large lengths of the four inner edges have the same amplitude as those describing a 'flat' subdivision; thus one would expect divergences to appear for every inner vertex, for a discussion in spin foams see [20,37,98,38,39].

Thus one should be very careful with treating the spin j variables, which encode the length or area variables in 3D or 4D respectively, as order parameters. (Indeed one should consider diffeomorphism invariant observables as order parameters, which are however hard to come by [26].) We can provide here an interpretation of the divergences as coming from (extremely) squeezed states: as mentioned, we add only degrees of freedom in the vacuum state (including gauge degrees of freedom); in the case of the 1 − 4 moves these have to satisfy the Hamiltonian and diffeomorphism constraints. Thus fluctuations in the 'constraint' directions are completely suppressed, whereas fluctuations in the conjugate (i.e. gauge) directions become infinitely large, represented as a non-compact gauge orbit of configurations with equal weight.
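The counting behind the statement that no physical degree of freedom is added can be made explicit in a small bookkeeping sketch. The general-d form of the refining Pachner move is our own extrapolation of the 4D case discussed here:

```python
def dof_balance(d):
    """Bookkeeping for the refining 1 -> (d+1) Pachner move in d dimensions:
    gluing a d-simplex onto a single boundary (d-1)-simplex introduces one
    new vertex.  The new vertex connects to the d vertices of that boundary
    simplex, adding d new edge lengths, while vertex translation symmetry
    supplies d (Hamiltonian and diffeomorphism) constraints at the new
    vertex, one per translation direction."""
    new_edge_lengths = d  # edges from the new vertex to the glued face
    constraints = d       # vertex translations of the new vertex
    return new_edge_lengths - constraints
```

For d = 4 this reproduces the counting above, four new edge lengths against four constraints, so the refining move adds no net physical degree of freedom.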

Discussion
We pointed out that gravitational theories in a simplicial description automatically provide, via their time evolution maps, refining, coarse graining and entangling maps. More generally we interpret the degrees of freedom added during refining time evolution moves as degrees of freedom in the vacuum state (or gauge degrees of freedom). This suggests the construction of a global vacuum state as a state evolved from a one-dimensional Hilbert space C, see also [13,69], which gives a simplicial realization of the no-boundary proposal [62]. Indeed, via the notion of dynamical cylindrical consistency [12], we can identify the vacuum states on different discretizations as representatives of the equivalence class which includes the unique state in the 'no-boundary' Hilbert space C.
We argued that the time evolution maps provide embedding maps for the construction of the continuum limit via inductive limit techniques. In section 4 we outlined how to define and arrive at a consistent continuum dynamics for quantum gravity. This is based on the dynamical embedding maps and proposes to construct the amplitude maps as cylindrically consistent maps based on these embeddings. This allows one to define the amplitude maps as objects of the continuum theory, acting on the continuum Hilbert space.
Such (dynamical) embedding maps however have to satisfy stringent path independence conditions, which we related to path independence under different choices of interpolating hypersurfaces [72] and to an anomaly-free representation of the Dirac algebra of constraints [34,74]. These conditions are indeed hard to satisfy exactly for interacting theories, but should be valid in some approximate sense when considering sufficiently coarse grained observables.
We explained that tensor network renormalization algorithms provide a method to construct dynamical embedding maps that satisfy the consistency conditions to an increasingly better approximation, together with the related (approximately) cylindrically consistent amplitudes. An important ingredient in these algorithms is the truncation. Good truncations are basically good reorganizations of the degrees of freedom into coarser and finer ones. We argued that such a splitting can be found by employing radial, that is refining, evolution.
In topological theories the refining time evolution maps typically satisfy the path independence conditions. This allows the construction of projective limit Hilbert spaces using refining time evolution as embedding maps. This realizes the physical state of the topological theory (satisfying the constraints of the topological theory) as a vacuum in this projective limit Hilbert space. This vacuum coincides with the no-boundary wave function. Excitations can be produced by cylindrically consistent observables. An example of this construction has recently been provided in [29].
For non-topological theories, such as 4D gravity, we suggest that an exact satisfaction of the path independence conditions for the embedding maps would rather involve non-local dynamics, as indicated by the discussion in section 2.3. The construction of the continuum limit in section 4 allows for such non-local embeddings. The necessity of a non-local dynamics has recently been argued for in [81], which points out that linearized 4D quantum Regge calculus requires a non-local path integral measure in order to show invariance under 5 − 1 moves.
From a statistical physics point of view one would expect that a second order phase transition is needed for the continuum limit, leading to long range correlations (in terms of the number of lattice sites) and a conformal theory at the boundary. Indeed, in the context of tensor network algorithms and radial evolution, the appearance of a conformal theory at the fixed point leads to an interpretation in terms of AdS geometries and holographic renormalization, see for instance [32]. For the case of non-perturbative gravity such an interpretation might not apply straightforwardly. Here one would expect that the boundary variables, or the quantum state defined on the boundary, encode the geometry of the boundary and, via the equations of motion, of the bulk.
There are still many puzzling features to explore in the context of discretization changing time evolution. This applies in particular to interacting theories, such as 4D gravity. As we outlined here, such discretization changing evolution might provide a definition of the physical vacuum and more generally allow the construction of the continuum limit of the theory. This makes the exploration of these issues very worthwhile.