The Holst Spin Foam Model via Cubulations

Spin foam models are an attempt at a covariant, or path integral, formulation of canonical loop quantum gravity. The construction of such models usually relies on the Plebanski formulation of general relativity as a constrained BF theory and is based on the discretization of the action on a simplicial triangulation, which may be viewed as an ultraviolet regulator. The triangulation dependence can be removed by means of group field theory techniques, which allow one to sum over all triangulations. The main tasks for these models are the correct quantum implementation of the Plebanski constraints, the existence of a semiclassical sector implementing additional "Regge-like" constraints arising from simplicial triangulations, and the definition of the physical inner product of loop quantum gravity via group field theory. Here we propose a new approach to tackle these issues, stemming directly from the Holst action for general relativity, which is also a proper starting point for canonical loop quantum gravity. The discretization is performed by means of a "cubulation" of the manifold rather than a triangulation. We give a direct interpretation of the resulting spin foam model as a generating functional for the n-point functions on the physical Hilbert space at finite regulator. This paper focuses on ideas and tasks to be performed before the model can be taken seriously. However, our analysis reveals some interesting features of this model: first, the structure of its amplitudes differs from that of the standard spin foam models. Second, the tetrad n-point functions admit a "Wick-like" structure. Third, the restriction to simple representations does not automatically occur, unless one makes use of the time gauge, just as in the classical theory.


Introduction
Spin foam models (SFM) [1] are an attempt at a covariant or path integral formulation of canonical Loop Quantum Gravity (LQG) [2,3,4]. In their current formulation, SFM exploit the Plebanski formulation [5] of pure General Relativity (GR) as a constrained BF theory. This approach is well motivated because one can view the Plebanski action as a kind of perturbation of the BF action (albeit the perturbation parameter is a Lagrange multiplier field which one needs to integrate over in a path integral). The path integral for BF theory, however, is under good control [6] so that one may hope to get a valid path integral formulation for GR by functional current derivation methods [7] familiar from ordinary QFT.
As we will try to explain in the next section (see also [3]), the quantum implementation of the so called simplicity constraints of Plebanski theory has, to the best knowledge of the authors, still not been achieved to full satisfaction from first principles in SFM. They are called simplicity constraints because they force the B field of BF theory to be simple, that is, to originate from a tetrad. Clearly, unless the simplicity constraints are properly implemented, the resulting theory has little to do with quantum gravity. An issue to keep in mind is that the solutions to the classical simplicity constraints consist of five sectors: two give rise to ± times the Palatini action, two give rise to ± times a topological action, and one is degenerate. All of these sectors are included in a sum over Plebanski histories, which may or may not be what one wants.
It is appropriate to mention also further constraints in SFM at this point: Namely, as we also explain in the next section, SFM rely on a simplicial triangulation τ of the differential 4-manifold as well as a dual graph τ*. A recent analysis has shown [8] that freely specifying geometrical data (areas or fluxes) on the faces of τ tends to lead to inconsistent values of the lengths of the edges of τ unless so called Regge constraints are imposed in addition to the simplicity constraints. These constraints must be taken care of if one wants to relate SFM to the established theory of Regge calculus [9] and in order to capture the correct semiclassical limit. The underlying reason for these constraints is that Regge calculus is formulated directly in terms of edge lengths while in SFM one rather works with electric fluxes or areas. However, a typical simplicial triangulation has far more faces than edges in τ, so that assigning a length to an edge from given area values may be ambiguous and/or inconsistent. The authors of [8] speculate that this may pose a problem also for canonical LQG and that the Hamiltonian constraint of canonical LQG should perhaps commute with these Regge constraints. In order to avoid further confusion in the literature on this puzzling issue we will take here the opportunity to show that there are no Regge like constraints in canonical LQG to be taken care of. Notice that since our path integral is explicitly based on the Holst action, there is no necessity to relate it to the Regge action, which for Plebanski's theory is of course a challenge.
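The counting mismatch between edges and triangles can be made concrete with a little combinatorics. The following sketch (our illustration, not part of the analysis of [8]) counts the 1- and 2-dimensional subsimplices of a small simplicial complex: a single 4-simplex has as many triangles as edges, but as soon as two 4-simplices are glued along a tetrahedron the triangles outnumber the edges, which is why freely chosen area data overdetermine the edge lengths.

```python
from itertools import combinations

def subsimplices(simplices, k):
    """Collect all k-dimensional subsimplices (sets of k+1 vertices)."""
    out = set()
    for s in simplices:
        for c in combinations(sorted(s), k + 1):
            out.add(frozenset(c))
    return out

# A single 4-simplex: 10 edges and 10 triangles -- no mismatch yet.
single = [(0, 1, 2, 3, 4)]
print(len(subsimplices(single, 1)), len(subsimplices(single, 2)))  # 10 10

# Two 4-simplices glued along the tetrahedron (0,1,2,3): the shared
# edges (6) outnumber the shared triangles (4), so in the glued complex
# the triangles (16) outnumber the edges (14).
glued = [(0, 1, 2, 3, 4), (0, 1, 2, 3, 5)]
print(len(subsimplices(glued, 1)), len(subsimplices(glued, 2)))  # 14 16
```

In a large triangulation this imbalance only grows, so the map from areas to lengths is generically overdetermined.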
One possibility to make progress on these issues is based on a very simple idea that, to the best knowledge of the authors, occurs for the first time in [10]: Namely, simply try to formulate the path integral in terms of the Holst action [11] rather than the Plebanski action. Not only is the Holst action a valid starting point for canonical LQG, the simplicity constraints are also explicitly solved in that one works entirely with tetrads from the beginning. More precisely, the Holst action uses a specific quadratic expression in the tetrads for the B field of BF theory which also depends on the Immirzi parameter [25]. Hence, the Holst action depends on a specific, non degenerate linear combination of the four non degenerate solutions of the simplicity constraints (see next section for details) and is thus at the same time more general and more restricted, because the Holst path integral will not sum over the aforementioned five sectors of Plebanski's theory. As already mentioned, it is at present debated how the fact that one actually takes a sum over all histories with a mixture of positive and negative Palatini and topological actions affects the semiclassical properties of the Plebanski path integral.
As observed in [10], since the Holst action is quadratic in the tetrads, one can in principle integrate out the tetrad in the resulting Gaussian integral. This has been sketched in [10], however, the expressions given there are far from rigorous. Here we will give a rigorous expression. Also, we will include the correct measure factor [12] resulting from the second class constraints involved in the Holst action which makes sure that the path integral qualifies as a reduced phase space quantisation of the theory as has been stressed in [13]. A similar analysis has been carried out for the Plebanski theory in [14], however, the resulting measure factor is widely ignored in the SFM literature. The result of the Gaussian integral is an interesting determinant that displays the full non linearity of Einstein's theory. When translating the remaining integral over the connection in the partition function into SFM language, that is, sums over vertex, edge and face representations, one sees that our model differs drastically from all current SFM.
Of course, we also need to introduce an IR and UV regulator in the form of a finite cell decomposition. Two observations lead us to depart from the usual SFM approach where one works with simplicial cell complexes. The first is the result [15] which demonstrates that current semiclassical states used in LQG do not assign good classical behaviour to the volume operator [16] of LQG unless the underlying graph has cubic topology. Since the volume operator plays a pivotal role for LQG as it defines triad operators and hence the dynamics, this is a first motivation to consider cubic triangulations of the four-manifold, also called "cubulations". Notice that any four-manifold can be cubulated and that within each chart of an atlas the cubulation can be chosen to be regular (see e.g. [16] and references therein). The second observation is that the original motivation for considering simplicial cell complexes in current SFM comes from their closeness to BF theory. BF theory is a topological QFT and therefore one would like to keep triangulation independence of the BF SFM amplitude. That this is actually true is a celebrated result in BF theory. In particular, in order to keep triangulation independence it is necessary to integrate the B field over the triangles t of the triangulation and the F field over the faces f bounded by the loops in a dual graph [19]. However, GR is not a TQFT and therefore the requirement to have triangulation independence is somewhat obscure. Of course it is natural if one wants to exploit the properties of BF theory but not if one takes a different route as we tend to do here. Hence, if we drop that requirement, then it is much more natural to refrain from considering the dual graph in addition to the triangulation. This also solves another issue: Notice that the gauge group SO(p, q) acts on the B field of BF theory by the adjoint action and on the connection A underlying F in the usual way.
The question is where the gauge transformations act on the discretised variables B(t), A(∂f) (flux and holonomy). It would be natural to have the gauge group act at the barycentres of t and at the starting point of the loop ∂f, which will be a vertex of the dual graph. However, notice that the vertices of the dual graph and the triangles are disjoint from each other; the edges of the graph are dual to the tetrahedra of the cell complex. Hence, local gauge invariance in discretised BF theory at the level of the action is not manifest, and even less so in Plebanski theory. In fact, gauge invariance is related to the closure constraint in SFM, which is a subtle issue as we will see in the next section. If one works just with a triangulation and drops the dual graph, then gauge invariance issues are easy to take care of. Hence, we are motivated to work with a triangulation that maximally simplifies the Gaussian integral. As we will show, this again leads to cubulations. This also nicely fits with the framework of Algebraic Quantum Gravity [20] which in its minimal version is also formulated in terms of algebraic graphs of cubic topology only.
The architecture of this article is as follows: In section two we give a non technical review of current spin foam models. We sketch their derivation from the classical Plebanski action focussing on the points where a first principle argument is missing. These issues will be the motivation for our different route.
In section three we clarify the issue mentioned above, namely whether the Regge constraints on SFM discovered recently in [8] also arise in the canonical formulation. We prove that they do not play any role in LQG.
In section four we derive the Holst SFM using cubulations as UV regulator as motivated above. As this is an exploratory paper only, we will not worry about convergence issues, which will be properly addressed in subsequent works. More precisely, what we compute are tetrad n-point functions. These should contain sufficient information to compute anything of interest in LQG, such as graviton scattering amplitudes as in [21], via LSZ (Lehmann-Symanzik-Zimmermann) like formulas as in ordinary QFT [22] which allow one to reconstruct the S-matrix from symmetric vacuum n-point functions. Of course, how these n-point functions are related to true observables in a diffeomorphism invariant theory is a subtle issue which will be clarified in a separate paper [23]. Here we only give a summary. The n-point functions can be computed in closed form up to a remaining functional integral over the connections. This can be done for either signature of the spacetime metric. At this point one could invoke SFM techniques and expand the integral using harmonic analysis on the gauge group. The resulting intertwiner displays a much more complicated structure than in any of the current spin foam models. In particular, pictorially speaking, one basic building block is an octagon diagramme, an analytic expression for which could be called the 96-j symbol in the case of G = SO(4). Yet, the n-point functions display a certain Wick like structure as if they came from a Gaussian integral. What makes the theory interacting and obstructs the tetrads from being a generalised free field is the additional functional integral over the connection. In background dependent QFT the moments of a Gaussian measure depend on a background dependent covariance (usually depending on the Laplacian (in the Euclidean setting) and the mass). Our theory behaves similarly, just that due to background independence the covariance is itself a field that must be integrated over.
This is similar in spirit to what happens in 3D [24] when coupling GR to point particles: There, when integrating over the gravitational degrees of freedom one ends up with particles moving on a non commutative geometry. Here instead of a non commutative geometry we obtain an interacting theory of tetrad fields.
In section five we conclude and outline the missing tasks that need to be performed before our model can be taken seriously. An interesting result of our analysis is that in the present formulation, which lacks the simplicity constraints of Plebanski's theory, the irreducible representations of Spin(p, 4 − p) are not forced to be simple. Simple representations, which basically reduce Spin(p, 4 − p) to an SU(2) subgroup, can only arise if we impose the time gauge, which in the classical theory is used in order to reduce the Holst connection to the Ashtekar-Barbero-Immirzi connection, a necessary ingredient in the canonical quantisation programme. Gauge fixing conditions of course naturally arise in any attempt to make formal path integral expressions better behaved and here the situation is similar.
Two appendices treat some simple technical aspects of this work. Most parts of this paper do not depend on whether the spacetime signature is Lorentzian or Euclidean.

Outline of Current Spin Foam Models
In this section we intend to give a brief summary of the developments in SFM theory with a focus on the derivation of the current models from the Plebanski action and the gaps in that derivation. This serves as the motivation for the present paper.
To begin with, it is worth mentioning that the classical solutions to the simplicity constraints actually comprise altogether five sectors, namely two topological sectors B = ±e ∧ e, two Palatini sectors B = ± * (e ∧ e) (where * denotes the Hodge map with respect to the internal Minkowski or Euclidean metric) and one degenerate sector. In a path integral a sum over all these sectors will occur, while one would expect that one should only include one of the Palatini sectors, or maybe a Holst sector B = *(e ∧ e) + (1/γ) e ∧ e [11], in order to have a path integral for Einstein's theory. Here γ is the Immirzi parameter of LQG [25].
We now sketch the usual "derivation" of SFM from the Plebanski action: The Plebanski action is of the form

S_P[A, B, Φ] = ∫_M [ B^{IJ} ∧ F_{IJ}(A) + Φ_{IJKL} B^{IJ} ∧ B^{KL} ]   (2.1)

where Φ is a scalar Lagrange multiplier field with values in the tensor product of two copies of so(1, 3) or so(4) depending on the signature and F is the curvature of the connection A. In a formal path integral formulation one integrates exp(iS) over A, B, Φ. Integrating first over Φ we are left with a partition function of the form

Z = ∫ [DA DB] δ(C(B)) exp(i ∫_M B^{IJ} ∧ F_{IJ})   (2.2)

where C(B) denotes the collection of the simplicity constraints on B. If one were to solve the delta distribution by integration over B one would get the aforementioned sum over the five sectors and an integral over the tetrad fields. However, this would result in a complicated expression which does not exploit the relation of Plebanski's formulation to BF theory. Thus, rather than doing that, one notices that, roughly speaking,

C(B) exp(i ∫_M B ∧ F) = C(−i δ/δF) exp(i ∫_M B ∧ F).   (2.3)

Denoting the functional derivative by X := −i δ/δF one can now formally pull the δ distribution out of the B integral and perform the integration over B, resulting in

Z = ∫ [DA] δ(C(X)) δ(F).   (2.4)

Without the "operator" δ(C(X)) this would be the formal partition function of BF theory. Thus, one has achieved the goal to preserve the closeness of the theory to BF theory. One should now expand δ(F) in terms of eigenfunctions of the collection of operators C(X) and keep only the zero eigenfunctions multiplied by δ(0). In order to give meaning to those formal expressions one has to introduce a UV and IR cutoff as is customary in constructive QFT. That is, one considers finite simplicial triangulations τ of the (possibly compact) differential 4-manifold and dual graphs τ*. The two-forms B are now approximated by integrals B(t) of B over triangles t of τ while the curvatures F are approximated by holonomies A(∂f) around the loops ∂f of the faces f dual to the triangles t. One writes f(t) for the face dual to t.
The BF action is then discretised by

S_{BF}(τ) = Σ_t Tr[B(t) F(f(t))].   (2.5)

The reason to work with both τ and τ* is that in fact

∫_M Tr(B ∧ F) = Σ_t Tr[B(t) F(f(t))]   (2.6)

is an exact identity [19], where F(f) denotes the integral of F over f. This is very convenient, in particular for pure BF theory. The only approximation thus consists in replacing F(f) by A(∂f) − 1_G. Likewise, the functional derivatives X must be approximated by ordinary derivatives X_t with respect to the variables F(t) := A(∂f(t)) − A(∂f(t))^T. Notice that, when defined like that, the X_t are mutually commuting. The exploration of the model with this definition of X_t will appear in [26]. However, this is not what is done in current SFM. Rather, one replaces X_t by Y_t, the right invariant vector field on the copy of G associated with the variable A(∂f(t)). The reason for doing that is that the Y_t have a simpler action on the delta distribution

∆(F) := Π_f δ(A(∂f)) = Π_f Σ_{π_f} dim(π_f) χ_{π_f}(A(∂f)).   (2.7)

This is usually justified by saying that δ(F) has support on A(∂f) = 1_G and that Y_t, X_t differ by multiplication with holonomies which should be supported at 1_G. However, this argument is certainly not rigorous because the support of the δ distribution can drastically change when acting with differential operators. Moreover, as already said, this substitution comes with a price: While the simplicity constraints in terms of X_t are mutually commuting, those in terms of Y_t are not; in fact they are inconsistent with each other. In addition, one does not impose all the simplicity constraints but only a subset of them. There are three types: constraints involving 1. the same triangle, 2. two triangles sharing an edge and 3. two triangles sharing a vertex. The latter constraint is implied by the so called closure constraint on tetrahedra T,

Σ_{t∈T} Y_t = 0   (2.8)

(but not vice versa). This constraint looks as if it would be automatically satisfied because it looks like a gauge invariance condition. However, the product of δ distributions (2.7) in ∆(F) is not annihilated by the closure constraint (2.8)!
This is obvious from the fact that the product of δ distributions involves products of the form

Π_{t∈T} χ_{π_t}(A(∂f(t)))   (2.9)

where π denotes an irreducible representation of G and χ_π its character. However, there is no gauge invariant intertwiner among the loops ∂f(t), t ∈ T. One usually argues that the closure constraint is taken into account because after integrating over A one is only left with gauge invariant intertwiners, but this is wrong before integrating. In fact, since integration with the Haar measure always projects out the gauge invariant part, anything can be made gauge invariant this way. We feel that neglecting the third kind of simplicity constraint (implied by taking the closure constraint for granted) makes the model too local. The effect of truly taking the closure constraint into account will also be explored in [26]. As already said, even the simplicity constraints of the first two types are anomalous, as they imply vanishing volume [5,3] and fix the above mentioned intertwiner to be unique (the model has not enough degrees of freedom). This and other investigations have ruled out the Barrett-Crane model [6], which however was an important step in SFM research because it triggered the development of model independent mathematical tools. Recent activities in SFM have therefore focussed on trying to implement the simplicity constraints of the first two types differently. There are two camps: In [27] one uses Master Constraint type techniques, which were developed in a different context [28], in order to treat anomalous (rather: second class) constraints. In [29] one refrains from imposing the simplicity constraints as operator conditions altogether but rather imposes them semiclassically by expanding SFM amplitudes in terms of group coherent states [30] developed by Perelomov [31] and then uses the simplicity condition on the classical bivectors on which the semiclassical amplitudes depend.
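The character products of the form (2.9) rest on harmonic analysis on G: by the Peter-Weyl theorem the δ distribution expands as δ(g) = Σ_π dim(π) χ_π(g), and the characters are orthonormal with respect to the Haar measure. As a self-contained numerical check (our illustration, using the compact case G = SU(2) rather than SO(4)), one can verify the orthonormality ∫ dg χ_j(g) χ_k(g) = δ_{jk} on conjugacy classes, where the Haar measure reduces to the Weyl class measure sin²(θ/2) dθ/π:

```python
import math

def chi(j, theta):
    """SU(2) character of the spin-j irrep at class angle theta in (0, 2*pi)."""
    return math.sin((2 * j + 1) * theta / 2) / math.sin(theta / 2)

def pair(j, k, n=100000):
    """Haar inner product of two characters via the Weyl class measure
    sin^2(theta/2)/pi dtheta; simple Riemann sum on a uniform grid."""
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(1, n):
        t = i * dt
        total += chi(j, t) * chi(k, t) * math.sin(t / 2) ** 2 / math.pi * dt
    return total

print(round(pair(1, 1), 3), round(abs(pair(1, 2)), 3))  # 1.0 0.0
```

The same orthonormality is what turns the A integrations in SFM amplitudes into sums over representation labels.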
While these methods seem to give rise to models with better semiclassical properties [32] and to a better understanding of whether one is looking at the topological or the Palatini sector respectively, in our opinion it is fair to say that a satisfactory derivation from first principles is missing. By this we mean that one should be able to arrive at those models starting from the Plebanski action, another classically equivalent action, or the Hamiltonian formulation, and then carry out integrations and the imposition of constraints without intermediate approximations or ad hoc substitutions such as those listed above. This is precisely the motivation of the current paper.

On Regge Constraints in Canonical LQG
In canonical LQG the states are not labelled by a pair consisting of a graph and a dual polyhedral complex (in fact, generic graphs do not admit such a dual complex at all). Rather, states are just labelled by graphs, edge representations and vertex intertwiners. This is the spin network orthonormal basis. In fact, all graphs are used in canonical LQG, not only four-valent ones such as those used in the present SFM analysis based on simplicial triangulations, which is why current SFM only capture a small subspace of the LQG Hilbert space. This subspace is not left invariant under the LQG holonomy flux algebra [33], so that this subspace is not a representation of it. Hence, in order to establish the relation between LQG and SFM it would seem that one needs to include spin network states on all possible boundary graphs into the SFM analysis.
In addition to this basis of states, one has holonomy, flux, area, triad or length operators labelled by curves or surfaces. None of them has a physical meaning; it is only their occurrence in compound operators assembled from them, which are Dirac observables or constraint operators respectively, which has a physical meaning. These arise as integrals over spatial scalar densities and, by the theorem established in paper V of the series [35], any dependence of these quantities on the electric degrees of freedom is via triad like operators. In more detail, the cylindrical projections of such operators are labelled by arbitrary graphs, and the triad like operators as well as the holonomies that arise in the corresponding expressions are defined by the edges of that graph. Hence we see that also physically relevant operators just depend on the structure of a graph and not on a dual cell complex. This is maybe confusing because flux and holonomy operators depending on arbitrary (piecewise analytic) surfaces and curves actually define the fundamental kinematical algebra underlying LQG. However, flux operators are used at intermediate stages; they are not even Gauss invariant. What one uses are volume operators, from which one constructs length operators [34] and more generally triad like operators via the techniques developed in [35]. These triad operators, which enter gauge invariant and diffeomorphism co(in)variant expressions such as the Hamiltonian constraint, only excite the states along the edges of their underlying graphs. Again, there is no reference to any polyhedral cell complex whatsoever. There are efforts to invent a length operator which also uses a cell complex [36]; however, the resulting operator, in contrast to [34], is not cylindrically consistent. Now, although they do not have a direct physical meaning, LQG admits these holonomy and flux or area operators labelled by arbitrary curves and surfaces respectively, not related to any graph [16,39].
A given collection of such operators may or may not have good semiclassical properties with respect to a given semiclassical state. But this is always true in quantum theory: Even the standard coherent states of the harmonic oscillator do not assign good semiclassical expectation values to all operators that one can construct in the corresponding Hilbert space; see e.g. [40] for a detailed discussion. What is important is that there is a sufficient number of functions on phase space that separate the points. In our case these could be holonomies and triads integrated along the edges of the graph underlying the given semiclassical state. These variables are enough to separate the points of the classical phase space to an arbitrary precision depending on how finely the given graph is embedded with respect to the background metric to be approximated. Now it is true that there are certain semiclassical states in LQG that depend on a polyhedral 3D complex τ [37] as an additional structure. However, this 3D τ does not have any dynamical interpretation as in SFM since it does not define a 4D extension. It just serves to construct a complexifier [37] and thus to produce an infinite family of states labelled by τ, a fixed point (A_0, E_0) in the classical phase space and any graph γ. Any sufficiently fine τ would do, and so τ does not have any fundamental meaning. In fact one could choose to consider complexifiers that do not come from any τ; see e.g. the first reference of [37]. The semiclassical properties are optimal for those members of the family for which γ is in fact dual to τ. For each edge e of such a graph there is a classical flux value E_0(S_e) and a classical holonomy A_0(e) which label the state, where S_e is the unique face in τ dual to e. Notice that what one is interested in are the lengths of the edges e in γ and not the lengths of the links in the 1-skeleton of τ.
Due to duality, there is a one to one correspondence between the classical values E 0 (S e ) and the expectation values of the length operators associated to e, no overcounting of degrees of freedom takes place, no ambiguities arise, whatever τ is.
That the corresponding length operators assume the correct semiclassical expectation values with respect to these states follows from the analysis performed in [38,20], where one has chosen cubical τ for simplicity. In fact, the analysis of [15] has demonstrated that the volume operator of LQG assumes the correct semiclassical expectation values only if τ has cubical topology, so that one does not really have too much of a choice as far as τ is concerned. This in turn implies that current spin foam models based on simplicial cell complexes do not admit the semiclassical states [37] as boundary states, which could mean that the current models may have to be extended to more general triangulations.
Notice that curiously, in 3d, in contrast to simplicial cell complexes, the number of faces of (sufficiently regular) cubical τ equals the number of links in the 1-skeleton of τ so that also here there is a one to one correspondence between faces and links within τ . However, this does not have any relevance for our main point, namely that in LQG the fundamental quantum objects are holonomies along the edges of graphs and triad or length operators associated to those edges, there is no association of length or area operators to links or faces in some dual cell complex.
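The equality of face and link numbers for a regular cubical complex is elementary: in a periodic n×n×n cubic lattice every vertex carries exactly three links (one per positive coordinate direction) and three square faces (one per pair of directions), so both counts equal 3n³. A minimal counting sketch (our illustration):

```python
from itertools import product

def cubic_lattice_counts(n):
    """Links and square faces of an n^3 periodic cubic lattice (n >= 3)."""
    verts = list(product(range(n), repeat=3))
    dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    # each link is generated once from its base vertex in a positive direction
    edges = {frozenset([v, tuple((v[i] + d[i]) % n for i in range(3))])
             for v in verts for d in dirs}
    # each face is labelled by its base vertex and a pair of directions
    faces = {(v, a, b) for v in verts
             for a in range(3) for b in range(a + 1, 3)}
    return len(edges), len(faces)

print(cubic_lattice_counts(4))  # (192, 192): 3*n^3 links and 3*n^3 faces
```

For a simplicial complex no such one-to-one pairing of faces and links exists, which is the source of the overcounting discussed above.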
To summarise: In LQG there is a basis of states labelled by graphs. Given an arbitrarily small error, there is a set of operators whose classical counterparts separate the points of the classical phase space up to that error and whose cylindrical projections are labelled by the edges of graphs. In addition, there are also semiclassical states depending on graphs with respect to which this separating system of operators has optimal semiclassical properties. There is no triangulation with respect to which one would be interested in relating the lengths of the edges of its 1-skeleton to the areas of its surfaces. Regge like constraints never occur.

Derivation of the Model
This is the main section of the present paper. It is subdivided into five parts. In the first we motivate the use of cubulations from different perspectives and discuss some of their properties. In the second we sketch the relation between path integral n-point functions and physical (observable) correlators in terms of the physical inner product of the theory. More details on that issue will appear elsewhere [23]. This crucially works via a choice of gauge fixing or clock system. In the third part we apply our machinery to n-point tetrad functions or equivalently to a generating function of a (complex, regulated) measure. That measure displays a Gaussian like structure and we can accordingly integrate out half of the degrees of freedom under some assumptions about the choice of gauge fixing. In the fourth part we discuss the properties of the resulting integral over the remaining degrees of freedom, its Wick like structure and the structure of the vertex amplitude of the corresponding SFM model when invoking harmonic analysis on the gauge group. Finally, in the fifth part we discuss how these n-point functions are related to the physical inner product and the kinematical Hilbert space of LQG, in particular, how the covariant connection of the Holst path integral reduces to the Ashtekar-Barbero-Immirzi connection of the canonical theory in physical amplitudes.

Cubulations
We start by motivating the use of cubulations rather than simplicial triangulations.

Gauge invariance
Let us look more closely at the issue of gauge invariance for BF theory, which makes use also of a dual graph. Here gauge invariance is not preserved locally (i.e. triangle wise) in the formula

∫_M Tr(B ∧ F) ≈ Σ_t Tr[B(t) F(f(t))]

because B(t) transforms under gauge transformations at points of the triangle t, while the holonomy approximating F(f(t)) transforms at a vertex of the dual graph, and these points are disjoint; restoring local gauge covariance would require a consistent prescription of parallel transports between them. We do not know if such a consistent prescription can be found at all but rather take these complications, which come from the fact that one is dealing simultaneously with a (simplicial) complex and its dual cell complex, as a further piece of motivation to work just with the triangulation.

Cubulations versus simplicial triangulations
The previous considerations do not specify the type of triangulation to be considered. As already said, the first reason to use cubulations rather than simplicial triangulations is that the boundary graphs must contain cubical ones in order to make sure that the corresponding boundary Hilbert space contains enough semiclassical states [15]. However, there is an additional, more practical motivation for doing so, which we discuss now.
Recall that the Holst action is given by

S = (1/(2κ)) ∫_M e^I ∧ e^J ∧ (*F_{IJ} + (1/γ) F_{IJ}).   (4.1)

Here κ denotes Newton's constant, F^{IJ} = dA^{IJ} + A^{IK} ∧ A_K{}^J denotes the curvature of the connection A, γ is the Immirzi parameter and * denotes the internal Hodge dual, that is,

(*F)_{IJ} = (1/2) ε_{IJKL} η^{KM} η^{LN} F_{MN},

where I, J, K, .. = 0, .., 3 and η is the Minkowski or Euclidean metric for structure group G = SO(1, 3) or G = SO(4) respectively. As motivated in the introduction, we plan to keep the co-tetrad one forms e^I rather than introducing a B field, and thus the simplicity constraints are manifestly solved. Moreover, the issue raised in [8] is circumvented as co-tetrads are labelled by curves and not by (overcomplete) surfaces.
In order to give meaning to a path integral formulation we consider a UV cutoff in terms of a triangulation τ of M which we choose to be finite, thereby introducing an IR regulator as well. Let us denote the two-dimensional faces of τ by f and the one-dimensional edges of τ by l. We want to discretise (4.1) in a manifestly (and locally) gauge invariant way, just using edges and faces. To do so we equip all edges with an orientation once and for all. Given an edge l consider

e^I_l := ∫_l [A(l(x))]^I_J e^J(x).   (4.4)

Here l(x) for x ∈ l denotes the segment of l that starts at the starting point of l and ends at x, and [A(p)]^I_J denotes the G valued holonomy of A along a path p. Evidently, under local gauge transformations g : M → G, (4.4) transforms as e^I_l → g^I_J(b(l)) e^J_l where b(l) denotes the beginning point of l. To avoid confusion, here g ∈ G means the following: The fundamental objects are the matrices g^I_J, subject to

η_{IJ} g^I_K g^J_L = η_{KL}.   (4.5)
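The covariance property e^I_l → g^I_J(b(l)) e^J_l can be checked in a discretised toy version, where the integral over l is replaced by a sum over sample points and the holonomy by a product of link matrices. The following sketch (our illustration, with G = SO(3) instead of SO(1, 3) so that inverses are transposes) verifies that a gauge transformation acts on the smeared co-tetrad only at the beginning point of the edge:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    """Random SO(3) matrix via QR decomposition with a sign fix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # make the factorisation unique
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

N = 20
U = [random_rotation() for _ in range(N)]   # link holonomies along l
e = [rng.normal(size=3) for _ in range(N)]  # co-tetrad samples on l

def smeared_tetrad(U, e):
    """e_l = sum_k A(l(x_k)) e(x_k): transport each sample to b(l), then sum."""
    H = np.eye(3)                            # holonomy from b(l) to x_k
    total = np.zeros(3)
    for k in range(N):
        total += H @ e[k]
        H = H @ U[k]
    return total

# gauge transform: U_k -> g_k U_k g_{k+1}^T and e_k -> g_k e_k
g = [random_rotation() for _ in range(N + 1)]
U2 = [g[k] @ U[k] @ g[k + 1].T for k in range(N)]
e2 = [g[k] @ e[k] for k in range(N)]

lhs = smeared_tetrad(U2, e2)
rhs = g[0] @ smeared_tetrad(U, e)
print(np.allclose(lhs, rhs))   # True: e_l transforms only at b(l)
```

The interior gauge factors cancel pairwise between holonomy and sample, which is exactly the mechanism behind (4.4).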
If g^I_J = [exp(F)]^I_J for some generator F^I_J, then (4.5) means that F̃_{IJ} + F̃_{JI} = 0, where F̃_{IJ} := η_{IK} F^K_J. In abuse of notation one usually uses the same symbols g, F and g̃, F̃ respectively, but unless we are in the Euclidean regime we should pay attention to the index position.
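This index bookkeeping can be checked numerically in a few lines. The following sketch (the signature convention and the random generator seed are our own illustrative choices) verifies that exponentiating a generator whose fully lowered form is antisymmetric yields a matrix preserving η:

```python
import numpy as np
from scipy.linalg import expm

# Minkowski metric eta_IJ for G = SO(1,3); the signature convention is an assumption
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# generator with both indices down: F~_IJ + F~_JI = 0
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4))
F_tilde = X - X.T

# raise the first index with eta (eta is its own inverse): F^I_J = eta^{IK} F~_KJ
F = eta @ F_tilde

# g^I_J = [exp(F)]^I_J then satisfies eta_IJ g^I_K g^J_L = eta_KL, i.e. g^T eta g = eta
g = expm(F)
assert np.allclose(g.T @ eta @ g, eta)
```

In the Euclidean regime η is the identity and the index position is immaterial, which is why the distinction between F and F̃ is easy to overlook.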
Clearly, the curvature F must be discretised in terms of the holonomy of A along the closed loops ∂f, where we have also equipped the faces f with an orientation once and for all. We have

  A(∂f) ≈ exp(F_f),   F_f := ∫_f F,

where we have used the non-Abelian Stokes theorem for "small" loops, and we have written F̃_{IJ}(x) := η_{IK} F^K_J(x). We may now define the antisymmetric matrix F̃_{IJ}(f) associated with the face f. Imagine now that we would use a simplicial triangulation. Hence M is a disjoint (up to common tetrahedra) union of 4-simplices σ = [p_0(σ), .., p_4(σ)]. For each p_j(σ) label the four boundary edges of σ starting at p_j(σ) by l^j_µ(σ), and let the face (triangle) of σ spanned by l^j_µ(σ), l^j_ν(σ) be denoted by f^j_{µν}(σ), with the convention f^j_{µν}(σ) = −f^j_{νµ}(σ). Now the orientation of l^j_µ(σ) either coincides with the given orientation of the corresponding edge in σ or it does not; in the former case the discretised co-tetrad associated with l^j_µ(σ) carries a plus sign, in the latter a minus sign, and the resulting discretised action is averaged over the corners of a 4-simplex. For any simplicial triangulation the matrix G^{ll′}_{IJ} (symmetric in the compound index (I, l)) is difficult to present explicitly due to bookkeeping problems, even if we refrain from averaging over the five corners of a 4-simplex. Moreover, as we intend to perform a Gaussian integral over the e^I_l, we need the determinant of that matrix, which is impossible to compute explicitly unless it is block diagonal in some sense.
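The small-loop version of the non-Abelian Stokes theorem can be illustrated numerically. The sketch below uses a constant toy connection with so(3) values (an illustrative stand-in, not the Holst data): for a square plaquette of side ε the holonomy agrees with exp(ε²F) up to O(ε³), so halving ε reduces the error by roughly a factor of eight.

```python
import numpy as np
from scipy.linalg import expm

# so(3) generators as a stand-in for the gauge algebra (toy choice)
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

def plaquette_error(eps):
    # holonomy around a square plaquette of side eps for a constant connection
    # with components A_mu = Lx, A_nu = Ly; curvature F_{mu nu} = [A_mu, A_nu]
    U = expm(eps * Lx) @ expm(eps * Ly) @ expm(-eps * Lx) @ expm(-eps * Ly)
    F = Lx @ Ly - Ly @ Lx
    return np.linalg.norm(U - expm(eps**2 * F))

# A(∂f) = exp(eps^2 F) up to O(eps^3): the error shrinks ~8x when eps is halved
e1, e2 = plaquette_error(0.1), plaquette_error(0.05)
assert e1 < 1e-2 and 4.0 < e1 / e2 < 16.0
```

This is the precise sense in which the plaquette holonomies encode the discretised curvature below.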
The latter observation points to a possible solution. First of all, any manifold admits a cubulation, that is, a triangulation by embedded hypercubes^6 [17]. We assume that M has a countable cover by open sets O_α. Consider a stratification by 4D regions S_α subordinate to it. Then S_α admits a regular cubulation, that is, the 1-skeleton of the cubulation of M restricted to S_α can be chosen to be a regular cubic lattice. Non-trivial departures from the regular cubulation only appear at the boundaries of the S_α. We restrict attention to those M admitting a cubulation such that in every compact submanifold the ratio of the number of cubes involved in the non-regular regions divided by the number of cubes involved in the regular regions converges to zero as we take the cubulation to the continuum. For those M, up to corrections which vanish in the continuum limit, we can treat M as if it admitted a global, regular cubulation.
Given a regular cubulation τ, consider its set of vertices. In 4D, each vertex v is eight-valent and there are four pairs of edges such that the members of each pair are analytic continuations of each other, while the tangents at v of four members from mutually different pairs are linearly independent of each other. It is therefore possible to assign to each edge a direction µ = 0, 1, 2, 3 and an orientation such that adjacent edges in the same direction have a common analytic continuation and agree in their orientation. We label the edges starting at v in µ direction by l_µ(v). Notice that this labelling exhausts all possible edges and unambiguously assigns an orientation to all of them. The discretised co-tetrad is then given by e^I_µ(v) := e^I_{l_µ(v)}, with e^I_l as in (4.4). Notice that the hypercubic lattice that results solves all our bookkeeping problems, since we now may label each vertex by a point in Z^4.
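The Z^4 bookkeeping can be sketched in code; a small periodic block is assumed purely for the check (periodicity is our own simplification, not a feature of the cubulation):

```python
from itertools import product

# vertices of a regular cubulation are labelled by points of Z^4
# (here a small periodic block, an assumption made only for this check)
L = 3
vertices = list(product(range(L), repeat=4))

def shift(v, mu):
    # next-neighbour vertex v + mu-hat (periodic identification)
    w = list(v)
    w[mu] = (w[mu] + 1) % L
    return tuple(w)

# each edge is uniquely labelled as l_mu(v): it starts at v in direction mu
edges = [(v, mu) for v in vertices for mu in range(4)]
assert len(edges) == 4 * len(vertices)

# every vertex is eight-valent: 4 outgoing edges l_mu(v), 4 incoming l_mu(v - mu-hat)
incidence = {v: 0 for v in vertices}
for v, mu in edges:
    incidence[v] += 1             # outgoing at v
    incidence[shift(v, mu)] += 1  # incoming at v + mu-hat
assert all(n == 8 for n in incidence.values())
```

The check confirms that the labelling l_µ(v) exhausts all edges exactly once while every vertex stays eight-valent.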
Next, given a vertex v, we denote by v ± µ̂ the next-neighbour vertex in the µ direction. We define the plaquette loop in the µ, ν plane at v by

  ∂f_{µν}(v) := l_ν(v)^{-1} ∘ l_µ(v + ν̂)^{-1} ∘ l_ν(v + µ̂) ∘ l_µ(v),

with paths composed from right to left. Notice that again this labelling exhausts all minimal loops in the one-skeleton of τ. The discretised "curvature" is therefore obtained from the plaquette holonomies A(∂f_{µν}(v)) via the non-Abelian Stokes theorem. Denoting by σ the 4D hypercubes in τ, we notice that there is a one-to-one correspondence between the vertices v in the 0-skeleton of τ and the hypercubes, given by assigning to σ that corner v = (z_0, .., z_3) of σ with smallest values of all z_0, .., z_3 ∈ Z. We then find that the quadratic form in the discretised action decomposes vertex-wise. This means that using (regular) cubulations, the matrix G^{ll′}_{IJ} indeed becomes block diagonal, where each block is labelled by a vertex and corresponds to a symmetric 16 × 16 matrix G(v) (four edges l_µ(v) times four internal indices I). This is what makes the computation of the determinant of the huge matrix with entries G^{ll′}_{IJ} practically possible. As we will see, the matrices G(v) have a lot of intriguing symmetries, which makes the computation of their determinant an interesting task.
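The practical payoff of block diagonality can be illustrated with random stand-in blocks (the blocks below are toy data, not the actual Holst couplings): the determinant of the full matrix factorises vertex by vertex.

```python
import numpy as np
from scipy.linalg import block_diag

# one symmetric 16x16 block G(v) per vertex
# (16 = 4 edges l_mu(v) x 4 internal indices I); random stand-ins only
rng = np.random.default_rng(1)
blocks = []
for _ in range(5):  # five "vertices"
    X = rng.normal(size=(16, 16))
    blocks.append(X + X.T)

G = block_diag(*blocks)  # the full 80 x 80 matrix with entries G^{ll'}_{IJ}

# determinant of the full matrix = product of the per-vertex determinants
sign_G, logdet_G = np.linalg.slogdet(G)
signs, logdets = zip(*(np.linalg.slogdet(B) for B in blocks))
assert sign_G == np.prod(signs)
assert np.isclose(logdet_G, sum(logdets))
```

For a cubulation with N vertices this reduces one 16N × 16N determinant to N independent 16 × 16 determinants, which is what makes the Gaussian integration below tractable.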
Interesting questions that arise in algebraic topology and which we intend to address in future publications are: 1. Given any D-cubulation, does there exist a cubulated refinement such that one can consistently assign to every D-cube σ a vertex v and to all edges an orientation such that there are precisely D edges outgoing from v? We call cubulations for which this is possible regular. If that were the case, we could generalise our discretisation from regular hypercubic lattices to arbitrary cubic ones and thus would not have to make any error at the boundaries of the stratified regions mentioned above.
2. If the answer to [1.] is negative, can one choose maximally regular cubulations so as to minimize the error in our assumption of globally regular cubulations? In 3D some results on this issue seem to exist [17].
3. Given maximally regular cubulations, can one estimate the error resulting from the neglect of the non-trivial topology?

Notes on n-point functions
In the SFM literature the first task that one addresses is the computation of the partition function. However, the partition function itself has no obvious physical meaning even if one imposes boundary conditions on the paths (spin foams) to be integrated (summed) over. The hope is that SFM provide a formula for the physical inner product of the underlying constrained canonical theory which starts from some kinematical Hilbert space H. The purpose of this section is to sketch the connection between path integrals and n -point functions for a general constrained theory. We will use reduced phase space quantisation as our starting point. The connection with operator constraint quantisation and group averaging [42] and more details will appear elsewhere [23].
We assume that we are given a classical theory with first class constraints {F} and possibly second class constraints {S}. We turn the system into a purely second class system by supplementing {F} with suitable gauge fixing conditions {G}. The canonical Hamiltonian H_c is a linear combination of the primary constraints plus a piece H′_0 non-vanishing on the constraint surface of the primary constraints (it could be identically zero). It can also be written as a first class piece H_0 and (some of) the first class constraints F. The gauge fixing conditions fix the Lagrange multipliers involved in the canonical Hamiltonian. One may split the complete set of canonical pairs (q, p) on the full phase space into two sets (φ, π), (Q, P) such that one can solve the system S = F = G = 0, which defines the constraint surface, for (φ, π) = f(Q, P) in terms of Q, P. The Q, P are coordinates on the reduced phase space, which is equipped with the pull-back symplectic structure^7 induced by the embedding of the constraint surface specified by f.
The gauge fixing conditions also induce a reduced Hamiltonian H_r which only depends on Q, P and which arises by computing the equations of motion for Q, P with respect to H_c and then restricting them to the gauge fixed values of the Lagrange multipliers and to the constraint surface. Then H_r is defined as the function of Q, P only^8 which generates these same equations of motion. We are now in the situation of an ordinary Hamiltonian system equipped with a true Hamiltonian H_r. We quantise a suitable subalgebra of the reduced Poisson algebra as a *-algebra A and represent it on a Hilbert space H. This Hilbert space is to be identified with the physical Hilbert space arising from reduced phase space quantisation. Its relation with Dirac's constraint quantisation will appear elsewhere [23]. Let t → U(t) be the unitary evolution induced by H_r. Then the object of interest is the transition amplitude or n-point function between initial and final states ψ_i, ψ_f at initial and final times t_i, t_f respectively, with intermediate measurements of the operators a_1, .., a_n ∈ A at t_1 < t_2 < .. < t_n.
Preferably one would like to be in a situation in which there is a cyclic vector Ω for A which is also a ground state for H_r. The existence of a cyclic vector is no restriction because representations of A are always direct sums of cyclic representations. In this case AΩ is dense in H and we may therefore restrict attention to ψ_i = ψ_f = Ω by choosing appropriate a_1, .., a_n in (4.15). The existence of a vacuum state for H_r means that zero is in the point spectrum of H_r. Let us make this assumption for simplicity.
Let us abbreviate the Heisenberg time evolution as a_k(t) := U(t)^{-1} a_k U(t). In principle it would be sufficient to restrict the a_k to be configuration operators Q, because their time evolution contains sufficient information about P as well. However, we will stick to the more general case for reasons that will become clear later. This leads us to the n-point function (4.16), which we have properly normalised so as to give the 0-point function the value unity. This has the advantage that certain infinities that would otherwise arise in the following can be absorbed. Notice that since Ω is a ground state, the U(t_f) and U(t_i) as well as the denominator could be dropped in (4.16). Now a combination of well known heuristic arguments [41], [43] reviewed in [23] reveals the following: consider any initial and final configuration q_i, q_f on the full phase space and denote by P((t_i, q_i), (t_f, q_f)) the set of paths^9 in full configuration space between q_i, q_f at times t_i, t_f respectively. Consider the corresponding path integral (4.17). Here j is a current in the fibre bundle dual to that of q, S[q, p, λ, µ] is the canonical action after performing the singular Legendre transform from the Lagrangian to the Hamiltonian formulation^10 and ρ is a local function of q, p which is usually related to the Dirac bracket determinant det[{S, S}] [43]. Now the primary constraints are always of the form π = f(Q, P, φ), where we have again split the canonical pairs into two groups. Thus, S[q, p, λ, µ] is linear in those momenta π and we can integrate them out, yielding δ distributions of the form δ[λ − (.)] δ[µ − (.)] which can be solved by integrating over λ, µ. If we assume that the dependence of the remaining action on P is only quadratic and that G and |det[{F, G}]| are independent of P, then we can also integrate over P, which in general yields a Jacobian I coming from the Legendre transform.

^7 This symplectic structure coincides with the pull-back of the degenerate symplectic structure on the full phase space corresponding to the Dirac bracket induced by the system {S, F, G} [41].
^8 For simplicity, we are assuming a gauge fixing which leads to a conservative reduced Hamiltonian.
We can then write (4.17) in configuration space form as (4.18). The resulting n-point functions (4.19) have the canonical or physical interpretation

  < Ω, T(a_1(t_1) .. a_n(t_n)) Ω >,   (4.20)

where T is the time ordering symbol, Ω is the aforementioned cyclic vacuum vector defined by the physical (or reduced) Hamiltonian H_r induced by the gauge fixing G, a_k(t) is the Heisenberg operator at time t (evolved with respect to H_r) corresponding to a_k, and a_k itself classically corresponds to a component of q evaluated on the constraint surface S = F = G = 0. The scalar product corresponds to a quantisation on the reduced phase space defined by G. Notice how the gauge fixing condition G (or choice of clocks) prominently finds its way both into the canonical theory and into the path integral formula (4.18). In particular, notice that the seemingly similar, naive expression without the gauge fixing does not have any obvious physical interpretation and in addition lacks the important measure factors ρ, I.
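As a toy illustration of the canonical side (4.20), one can check the Heisenberg two-point function of a harmonic oscillator on a truncated Fock space; the oscillator and the truncation are our own toy assumptions, standing in for the reduced Hamiltonian system (H_r, Ω):

```python
import numpy as np

# Toy check: for a harmonic oscillator with H = w a†a (ground state energy
# dropped), the vacuum two-point function is
#   <0| q(t) q(s) |0> = exp(-i w (t - s)) / (2 w)   for t > s.
# We verify this on a truncated Fock space (truncation is an assumption).
N, w = 30, 1.3
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
q = (a + a.T) / np.sqrt(2 * w)               # position operator
H = w * a.T @ a                              # diagonal in the Fock basis

def q_heis(t):
    # Heisenberg evolution q(t) = U(t)^{-1} q U(t)
    U = np.diag(np.exp(-1j * np.diag(H) * t))
    return U.conj().T @ q @ U

omega = np.zeros(N); omega[0] = 1.0          # cyclic ground state vector
t, s = 0.7, 0.2
val = omega @ q_heis(t) @ q_heis(s) @ omega
assert np.isclose(val, np.exp(-1j * w * (t - s)) / (2 * w))
```

In the full theory H_r is of course not an oscillator; the sketch only makes the object (4.20) concrete in a case where both sides can be computed exactly.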

Remarks:
1. One may be puzzled by the following: in ordinary gauge theories on background spacetimes, such as Yang-Mills theory on Minkowski space, the path integral, or more precisely the generating functional of the Schwinger functions (in the Euclidean formulation), does not require any gauge fixing in order to give the path integral a physical interpretation. One needs it only in order to divide out the gauge volume in a systematic way (Faddeev-Popov identity), while the generating functional is independent of the gauge fixing. The gauge fixing also does not enter the construction of gauge invariant functions (such as Wilson loops). In our case, however, the gauge fixing condition is actually needed in order to formulate the physical time evolution and the preferred choice of gauge invariant functions on phase space.
The resolution is as follows: the difference between Yang-Mills theory and generally covariant systems such as General Relativity, which we are interested in here, is indeed that the canonical Hamiltonian is in fact the generator of gauge transformations (spacetime diffeomorphisms) rather than of physical time evolution. It is even constrained to vanish. In contrast, in Yang-Mills theory there is a preferred and gauge invariant Hamiltonian which is not constrained to vanish. In order to equip the theory with a notion of time we have used the relational framework discovered in [44], which consists in choosing fields as clocks and rods with respect to which other fields evolve. Mathematically this is equivalent to a choice of gauge fixing. Hence, in our case the gauge fixing plays a dual role: first, to render the generating functional less singular and, second, to define physical time evolution.
2. The appearance of the δ distributions and functional (Faddeev-Popov) determinants in (4.17) indicates that we are not dealing with an ordinary Hamiltonian system but rather with a constrained system. One can in fact get rid of the gauge fixing condition involved if one pays a price. The price is that one considers instead of q its gauge invariant extension q̃ off the surface G = 0 [41,45]; then, since we consider the quotient Z[j]/Z[0], which leads to connected n-point functions, by the usual Faddeev-Popov identity that exploits gauge invariance we may replace (4.17) by a gauge-fixing independent expression (4.22) [23]. However, (4.22) is not very useful unless q̃(q, p) is easy to calculate, which is typically not the case. Hence, we will refrain from doing so. Nevertheless, no matter whether one deals with (4.17) or (4.22), the correlation functions depend on the gauge fixing G or, in other words, on the choice of the clocks [45,46] with respect to which one defines a physical reference system.
3. The correspondence between (4.19) and (4.20) also allows one to reconstruct the physical inner product from the n-point functions: given arbitrary states ψ, ψ′ ∈ H, we find a, a′ ∈ A such that ||aΩ − ψ||, ||a′Ω − ψ′|| are arbitrarily small. Now pick any t_i < t_0 < t_f; then

  < aΩ, a′Ω > = < Ω, a† a′ Ω >.   (4.23)

By assumption, the operator a† a′ can be written as a finite linear combination of monomials of homogeneous degree in the components of the operator q, which we write, suppressing indices for the components, as q^n. Then

  < Ω, q^n Ω > = lim_{t_1,..,t_n → t_0; t_n > .. > t_1} < Ω, q(t_n) .. q(t_1) Ω >,   (4.24)

which can be expressed via (4.19). The existence of this coincidence limit of n-point functions is often problematic in background dependent Wightman QFT [22], but their existence is actually the starting point of canonical quantisation of background independent non-Wightman QFT, as one can see from the identity (4.24).

The Generating Functional of Tetrad n -point functions
We now want to apply the general framework of the previous section to General Relativity in the Holst formulation. Classically it is clear that without fermions all the geometry is encoded in the co-tetrad fields e^I_µ, because then the spacetime connection is just the spin connection defined by the co-tetrad (on shell). If fermions are coupled, the same is still true in the second order formulation, so that there is no torsion. But even in the first order formulation with torsion one can attribute the torsion to the fermionic degrees of freedom. Hence we want to consider the co-tetrad as the complete list of configuration fields.
We will now make two assumptions about the choice of gauge fixing and the matter content of our system.
I. The local measure factors ρ, I depend on the co-tetrad only analytically. This is actually true for the Holst action [12], see also [14].
II. The gauge fixing condition G is independent of the co-tetrad and the Faddeev-Popov determinant det({F, G}) depends only analytically on the co-tetrad. With respect to the first class Hamiltonian and spatial diffeomorphism constraints this can always be achieved by choosing suitable matter as a reference system, see e.g. [47]. However, in addition there is the Gauss law first class constraint.
Here it is customary to impose the time gauge condition [11], which asks that certain components of the tetrad vanish. This will also enable one to make the connection with canonical LQG, where one works in the time gauge in order to arrive at an SU(2) rather than G connection. Fortunately, in this case it is possible to explicitly construct a complete set of G-invariant functions of the tetrad, namely the four-metric^11 g_{µν} = e^I_µ e^J_ν η_{IJ}, and if we only consider correlators of those then we can get rid of the time gauge condition as indicated in the previous section (Faddeev-Popov identity). In section 4.5 we will come back to this issue, however, when trying to make the connection of the resulting SFM with canonical LQG, for which the time gauge is unavoidable. We will then sketch how to possibly relax it. In the generating functional, φ denotes the matter configuration variable. We have split the total action into the geometry (Holst) part S_g and a matter part S_m which typically depends non-trivially but analytically on e. Also the total current was split into pieces J, j, taking values in the bundles dual to those of φ, e respectively. A confusing and peculiar feature of first order actions such as the Holst or Palatini action is that from a Lagrangian point of view both fields e, A must be considered as configuration variables. In performing the Legendre transform [12] one discovers that there are primary constraints which relate certain combinations of e to the momenta conjugate to A. One can solve these constraints, and then (A, e) appear as momentum and configuration coordinates of this partly reduced phase space. This is the reason why we consider only correlations with respect to e.
The idea is now as usual in path integral theory: we split the generating functional into a Gaussian part quadratic in e and the remainder, cf. (4.26), (4.27). Of course, e^{iS_m} must be expanded in a perturbation series in order to carry out the functional derivations with respect to j. Indeed, if we consider just the functional integration with respect to e and think of A, φ as external fields, then the piece S_g, being quadratic in e, is like a free part, while S_m, being only analytic in e, is like an interaction part of the action as far as the co-tetrad is concerned. Of course, in the computation of the physical tetrad n-point functions all the functional derivatives involved in (4.27) are eventually evaluated at j = 0. It follows that the object of ultimate interest is the Gaussian integral (4.28), which is computable exactly. Of course, it is not a standard Gaussian: first, because the exponent is purely imaginary; second, because the "metric" G^{µν}_{IJ}(A) is indefinite, so that z[j; A] would be ill defined if the exponent were real^12. In the appendix we remind the reader how to integrate such non-standard Gaussians. In order to carry out this integral we must make the technical assumption that configurations A for which G is singular have measure zero with respect to DA. We will come back to this assumption later.
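The non-standard Gaussian alluded to here can be made concrete in finitely many dimensions: the oscillatory integral is defined by an infinitesimal damping of the exponent, and its value involves the signature of the quadratic form. A numerical sketch (the matrix G below is an arbitrary indefinite example, not the Holst matrix):

```python
import numpy as np

# Fresnel-type Gaussian behind z[j; A]:
#   ∫ d^n x exp(i x^T G x / 2) = (2π)^{n/2} |det G|^{-1/2} exp(i π sgn(G) / 4),
# defined by damping, exp(-x^T (εI - iG) x / 2), and taking ε → 0.
G = np.array([[ 2.0,  0.3, -0.1],
              [ 0.3, -1.0,  0.4],
              [-0.1,  0.4,  0.5]])
lam = np.linalg.eigvalsh(G)
assert (lam > 0).sum() == 2 and (lam < 0).sum() == 1   # indefinite example

n, eps = len(lam), 1e-9
# damped Gaussian: (2π)^{n/2} det(εI - iG)^{-1/2}, principal branch factor-wise
damped = (2 * np.pi) ** (n / 2) * np.prod((eps - 1j * lam) ** -0.5)

# Fresnel formula with the signature phase exp(i π sgn(G) / 4)
sgn = (lam > 0).sum() - (lam < 0).sum()
fresnel = ((2 * np.pi) ** (n / 2) * np.abs(np.prod(lam)) ** -0.5
           * np.exp(1j * np.pi * sgn / 4))
assert np.allclose(damped, fresnel, atol=1e-6)
```

Each positive (negative) eigenvalue contributes a phase e^{+iπ/4} (e^{−iπ/4}), which is why the signature, and hence the A-dependence of G(A), enters the amplitude.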
It is at this point that we must regularise the path integral in order to perform the Gaussian integration^13, and we write the discretised version on a cubulation of M as motivated in section 4.1; that is, we replace (4.28) by its discretised version (4.29). The results of appendices A and B now reveal (4.30), where we dropped a factor √π^{16N} for a cubulation with N vertices because it is cancelled by the same factor coming from the denominator in χ(j, J), see (4.25).

Wick structure
Formula (4.30) explicitly displays the main lesson of our investigation: the full j dependence of the generating functional written as (4.27) rests in (4.30). We are interested in the n-th functional derivatives of (4.30) at j = 0. Now, as in free field theories, the corresponding n-point functions vanish for odd n. However, in contrast to free field theories, for even n the n-point functions cannot be written in terms of polynomials of the 2-point function. The reason is that the "covariance" G^{-1}[A] of the Gaussian is not a background structure but rather depends on the quantum field A itself, which one has to integrate over. This renders the co-tetrad theory non-quasi-free, that is, interacting. Nevertheless, it is true that all Wick identities that have been derived for free field theories still hold for the n-point tetrad functions, albeit in the sense of expectation values or means with respect to A.
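The statement that the Wick identities hold only "in the mean with respect to A" can be mimicked by a one-dimensional toy model in which the covariance itself is random (all distributions below are toy choices, not derived from the Holst action):

```python
import numpy as np

# Toy model of the "Wick-like" structure: a scalar e whose Gaussian covariance
# c(A) depends on a random external variable A.  Conditionally on A, Wick's
# theorem gives E[e^4 | A] = 3 c(A)^2, so after averaging over A:
#   E[e^4] = 3 E[c^2]  !=  3 (E[c])^2.
rng = np.random.default_rng(2)
n = 400_000
c = rng.choice([1.0, 2.0], size=n)          # covariance c(A): two "values of A"
e = rng.normal(scale=np.sqrt(c))            # e | A  ~  N(0, c(A))

four_point = np.mean(e ** 4)
wick_mean = 3 * np.mean(c ** 2)             # Wick identity holds in the mean ...
naive_wick = 3 * np.mean(c) ** 2            # ... but fails for the averaged 2-pt fn

assert abs(four_point - wick_mean) < 0.3    # 3 * 2.5 = 7.5 up to MC error
assert wick_mean - naive_wick > 0.5         # strictly larger than 3 * 1.5^2 = 6.75
```

The gap between the last two quantities is exactly the sense in which the theory is non-quasi-free: the 4-point function is a Wick polynomial of the conditional covariance, not of the averaged 2-point function.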

Graviton Propagator
To illustrate this, consider a fictitious theory in which σ(e, A, φ) and G(A, φ) are independent of both A and e. This is not a very physical assumption, but it serves to make some observations of general validity in a simplified context. It means that we can drop the φ dependence because the generating functional factorises. Thus in our fictitious theory we are looking at the generating functional (4.33), where µ_H is the^14 Haar measure on G. Now consider the 2-point function of the co-tetrad at vertices v_1, v_2. It is immediately clear that it vanishes unless v_1 = v_2. This is reassuring because, as we said above, it physically only makes sense to consider correlators of G-invariant objects such as the metric. The simplest n-point function of interest is therefore the 4-point function. If we are interested in something like a graviton propagator we are interested in v_1 = v_2 and obtain an expression which does not vanish automatically. We see that we are basically interested in correlators of the inverse matrix G(v)^{-1} with respect to the joint Haar measure. Whether these have the correct behaviour in a situation where, instead of vacuum boundary states, one chooses coherent states peaked on a classical background metric, as suggested in [38,21], is currently under investigation.
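The kind of Haar-averaged correlator of G(v)^{-1} appearing here can be mocked up numerically; the matrix below is an invertible stand-in chosen only to exhibit the invariance argument, not the actual Holst matrix:

```python
import numpy as np
from scipy.stats import special_ortho_group

# Toy correlator of G(v)^{-1} with respect to the Haar measure: we replace the
# Holst matrix G(v) by the invertible stand-in I + R/2 with R Haar-random in
# SO(4).  Since the Haar measure is invariant under conjugation, the mean of
# the symmetrised inverse must be proportional to the identity.
rng = np.random.default_rng(3)
Rs = special_ortho_group.rvs(dim=4, size=20_000, random_state=rng)

acc = np.zeros((4, 4))
for R in Rs:
    Ginv = np.linalg.inv(np.eye(4) + 0.5 * R)
    acc += 0.5 * (Ginv + Ginv.T)            # symmetrise before averaging
mean_inv = acc / len(Rs)

off = mean_inv - np.diag(np.diag(mean_inv))
assert np.max(np.abs(off)) < 0.05           # isotropy up to Monte Carlo error
assert np.ptp(np.diag(mean_inv)) < 0.05     # equal diagonal entries
```

A background-peaked coherent boundary state would break this isotropy, which is what one would need in order to extract a non-trivial propagator.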

SFM Vertex Structure
Finally, in order to translate (4.36) into spin foam language, we should perform harmonic analysis on G and write the integrand of the Haar measure in terms of irreducible representations of G. In particular, the vertex structure of a SFM is encoded in z[0], so that we are interested in harmonic analysis of the function F(v) := 1/√|det(G(v))| (4.37). To derive its graph-theoretical structure it is enough to find out which F(v) depend how on a given holonomy A(l). Recall that F(v) is a function cylindrical over the graph γ(v) = ∪_{µ<ν} ∂f_{µν}(v), which is the union of its respective plaquette loops. Consider a fixed edge l = l_µ(v). It is contained in γ(v′) if and only if it is contained in one of the plaquette loops ∂f_{µν}(v′) or ∂f_{νµ}(v′) with µ < ν or ν < µ respectively. In the first case it must coincide either with l_µ(v′) or with l_µ(v′ + ν̂). In the second case it must coincide either with l_µ(v′ + ν̂) or with l_µ(v′) as well. Thus in either case we must have either v′ = v or v′ = v − ν̂. For our illustrative purposes let us assume for simplicity that G is compact; the non-compact case has the same SFM vertex structure but the harmonic analysis is a bit more complicated. Then each function F(v) can be formally expanded into SO(4) (or rather the universal cover SU(2) × SU(2)) irreducible representations^15 with respect to the six plaquette holonomies A(∂f_{µν}(v)), µ < ν. These representations π are labelled by pairs of half integral spin quantum numbers, but we will not need this for what follows. Thus F(v) admits an expansion of the form (4.38), where ι′_{{π_{µν}}} is a gauge invariant intertwiner for the six-tuple of irreducible representations {π_{µν}}_{µ<ν} which is independent of v; the only v dependence rests in the holonomies. It depends on the specific algebraic form of F(v), which derives from the Holst action.
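The Peter-Weyl orthogonality underlying this harmonic analysis is easy to verify by Monte Carlo for SU(2); the basis conventions below are our own, and Haar-random SU(2) elements correspond to uniform unit quaternions:

```python
import numpy as np

# Orthogonality of matrix elements of inequivalent SU(2) irreps in
# L2(SU(2), Haar).  A unit quaternion (a, b, c, d) gives the SU(2) matrix
# with U_{00} = a + i b; uniform quaternions sample the Haar measure.
rng = np.random.default_rng(4)
quat = rng.normal(size=(200_000, 4))
quat /= np.linalg.norm(quat, axis=1, keepdims=True)
a, b, c, d = quat.T

# j = 1/2 matrix element: D^{1/2}(U)_{00} = U_{00} = a + i b
D_half = a + 1j * b
# j = 1 matrix element in a real Cartesian basis (quaternion->rotation formula)
D_one = a**2 + b**2 - c**2 - d**2

# Peter-Weyl: ∫ |D^{1/2}_{00}|^2 dU = 1/(2j+1) = 1/2; cross terms vanish
assert abs(np.mean(np.abs(D_half) ** 2) - 0.5) < 0.01
assert abs(np.mean(D_half * D_one)) < 0.01
```

It is exactly this orthogonality that collapses the Haar integrals over edge holonomies in z[0] to sums over representation labels and intertwiners.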
Let us define π_{νµ} := π_{µν} for µ < ν. By writing the six plaquette holonomies in terms of four edge holonomies it is not difficult to see that F(v) can also be written in the form (4.39), which displays explicitly the 16 variables A(l_µ(v)), A(l_µ(v + ν̂)), ν ≠ µ, involved, and consists of 24 = 6 × 4 tensor product factors. In order to arrive at (4.39) we had to rearrange the contraction indices, which induces the change from ι′ to ι, and we have made use of π(A(l)^{-1}) = π^T(A(l)) for G = SO(4).
We may now carry out explicitly the integrals over edge holonomies in z[0] by inserting the expansion (4.39). We write this symbolically^16; here in the second step we have shifted the vertex label in one of the tensor product factors in order to bring out the dependence on the A(l_µ(v)). It follows that the end result of the integration is that for each edge l = l_µ(v) there is a gauge invariant intertwiner^17 (4.42) which intertwines six representations, rather than four as in (constrained) BF theory on simplicial triangulations. The origin of this discrepancy is of course that we are using cubulations rather than simplicial triangulations. The six representations involved for edge l_µ(v) correspond precisely to the six plaquette loops ∂f_{µν}(v), ∂f_{µν}(v − ν̂), ν ≠ µ, of which l_µ(v) is a segment. Therefore, if we associate to each face f = f_{µν}(v) an irreducible representation π_f = π^v_{µν} and denote by {π} the collection of all the π_f, then the basic building block (4.41) can be written in the more compact form (4.43), which of course hides the precise tensor product and contraction structure but is sufficient for our purposes. Formula (4.43) is precisely the general structure of a SFM. Moreover, the intertwiner (4.42) is the direct analog of the intertwiner in BF theory which there defines the pentagon diagramme [6]. If we were to draw a corresponding picture for our model, then for each vertex v we would draw eight points, one for each edge l incident at v. These points are labelled by the intertwiners ρ_l. Given two points corresponding to edges l, l′, consider the unique face f that has l, l′ in its boundary. Draw a line between each such pair of points and label it by π_f. The result is the octagon diagramme, see figure 1. Concretely, the corresponding labels on the lines are π^v_{µν}, π^{v−ν̂}_{µν}, π^{v−µ̂}_{µν}, π^{v−µ̂−ν̂}_{µν} respectively. Thus the octagon diagramme has eight points and 6 × 4 = 24 lines (each line connects two points).
These correspond to the 24 plaquettes that have a corner in v: namely, for each µ < ν these are f_{µν}(v), f_{µν}(v − µ̂), f_{µν}(v − ν̂) and f_{µν}(v − µ̂ − ν̂). In the case of G = SO(4), each irreducible representation is labelled by two spin quantum numbers. The intertwiner freedom is labelled by three irreducible representations of SO(4) and there is one irreducible representation corresponding to each face. Thus the octagon diagramme depends on 3 × 8 + 24 = 48 irreps of SO(4) or 96 spin quantum numbers. Since each intertwiner (4.42) factorises into two intertwiners [3] (one for the starting point and one for the end point of the edge, but both depending on the same representations), we may actually collect those eight intertwiners associated to the same vertex. The collection of those eight factors is actually the analytic expression corresponding to the octagon diagramme, which therefore may be called the 96j-symbol. The decisive difference between (constrained) BF theory and our model is, however, that in (constrained) BF theory the analog of the function F(v) is a product of δ distributions, one for each face holonomy. The simplicity constraints just impose restrictions on the representations and intertwiners, but this cannot change the fact that there is factorisation in the face dependence. In our model the face dependence does not factorise; hence, in this sense, the model is less local or more interacting.
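The counting in this paragraph can be double-checked mechanically (the corner and line labels below just mirror the octagon combinatorics):

```python
from itertools import combinations, product

# corners of the octagon diagramme: the eight edges l^sigma_mu(v), sigma = +/-
corners = list(product(range(4), "+-"))          # (mu, sigma)
assert len(corners) == 8

# a line joins two corners iff their directions mu differ; each line carries
# the face f^{sigma sigma'}_{mu nu}(v)
lines = [(x, y) for x, y in combinations(corners, 2) if x[0] != y[0]]
assert len(lines) == 24                          # = 6 x 4 plaquettes with a corner in v

# representation bookkeeping: 3 intertwiner irreps per corner plus one irrep
# per line -> 48 Spin(4) irreps, i.e. 96 SU(2) spin quantum numbers
n_irreps = 3 * len(corners) + len(lines)
assert n_irreps == 48 and 2 * n_irreps == 96
```

The four excluded pairs (same µ, opposite σ) are exactly the pairs of collinear edges, which share no face.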

Relation between covariant and canonical connection
Another striking feature of our model is the following: constrained BF theory, that is, Plebanski theory, should be a candidate for quantum gravity. Our Holst model should be equivalent to that theory at least semiclassically because, morally speaking, the only difference between them lies in the technical implementation of the simplicity constraints, modulo the caveats mentioned in section 2. Now one of the most important properties of the implementation of the simplicity constraints in usual SFM is that the irreducible Spin(4) representations that one sums over are the simple ones^18. In our model we do not see any sign of that. This is an important issue because the restriction to simple representations means that the underlying gauge theory is roughly SU(2) rather than Spin(4), which looks correct if the SFM is to arise from canonical LQG, which indeed is an SU(2) gauge theory. Thus, in usual SFM the simplicity constraints seem to already imply the gauge fixing of the "boost" part of the Spin(4) Gauss constraint that is necessary at the classical level in order to pass from the Holst connection to the Ashtekar-Barbero-Immirzi connection [11]. Strictly speaking, that has not been established yet, as pointed out in [48], where it is shown that the connection used in SFM is actually the spin connection and not the Holst connection. But apart from that, in the considerations of the previous section we do not see any simplicity restrictions on the type of group representations.

Figure 1: The octagon diagramme associated to vertex v. The eight corners correspond to the eight edges l = l^σ_µ(v) = l_µ(v + ((σ−1)/2) µ̂), σ = ±, adjacent to v. The line between corners labelled by l^σ_µ(v), l^{σ′}_ν(v) for µ ≠ ν corresponds to the face f = f^{σσ′}_{µν}(v) = f_{µν}(v + ((σ−1)/2) µ̂ + ((σ′−1)/2) ν̂). We should colour corners by intertwiners ρ_l and lines by representations π_f but refrain from doing so in order not to clutter the diagramme. Altogether 48 irreducible representations of Spin(4) (or 96 of SU(2)) are involved.
However, notice that what we did in the previous section was incomplete, because in order to properly define the n-point functions we must gauge fix the generating functional with respect to the G Gauss constraints. Formally this is not necessary if we only consider correlators of G invariant functions such as the metric, because the infinite gauge group volume formally cancels out in the fraction z[j]/z[0]. However, details matter: the formal arguments cannot be substantiated by hard proofs in this case. Specifically, if we consider G = SO(1,3), there is no measure known for gauge theories with non-compact groups (see [49] for the complications that occur) and thus we are forced to gauge fix at least the boost part of the Gauss constraint. This is the same reason for which one uses the time gauge in the canonical theory. We expect that implementing the time gauge fixing [11] in a way similar to the implementation of the simplicity constraints in usual BF theory will effectively reduce the gauge group to SU(2). Details will appear elsewhere, but roughly speaking the idea is the following: the time gauge is a set of constraints C[e] on the co-tetrad e. By the usual manipulations we can pull the corresponding δ distribution out of the co-tetrad functional integral and formally obtain a relation between χ_Holst, the generating functional of the previous section, and χ_ABI, the Ashtekar-Barbero-Immirzi path integral. Whether this really works in a rigorous fashion remains to be seen. However, we find it puzzling that the simplicity constraints in usual SFM, which classically have nothing to do with the time gauge, should automatically yield the correct boundary Hilbert space. It seems intuitively clear that the time gauge must be imposed in the quantum theory in addition to the simplicity constraints, just as in the classical theory, as we suggest.
Without imposing it, we do not see any sign of a restriction from G to SU(2) in our model, where we solve the simplicity constraints differently. We expect this to be related to the work [50]. This observation indicates that the usual SFM and our formulation are rather different from each other.

Conclusions and Future Work
In this paper we have proposed a different strategy for constructing spin foam models for GR. Rather than the Plebanski action, we take the Holst action as our starting point. This means that the simplicity constraints of the Plebanski formulation have been correctly taken care of from the outset. The price to pay is that the connection to BF theory is lost. The motivation behind our strategy is that BF theory is a TQFT and therefore quite different from GR, which has an infinite number of physical degrees of freedom. Hence the usual triangulation-independent methods developed for TQFT and employed in current SFM are possibly less powerful in the context of GR. In particular, the fact that it is difficult to deal with the simplicity constraints in current SFM might be a sign of this. Another problem with the Plebanski formulation that we have not mentioned yet is that it is difficult to couple matter, because matter couples directly to the co-tetrad rather than to the B field. In principle one can express e in terms of B modulo the simplicity constraints, but the corresponding formulas are even more involved than those written directly in terms of e. Notice that one must couple matter to BF theory in order to get a realistic model. For 3D gravity, coupling matter is straightforward [51] because there the B field and e coincide, while in 4D this has not been done yet, except for non-standard-model fermions, which couple directly to the connection [52], or membranes coupled to pure BF theory [53].
The method we proposed in this paper might be called a brute-force, textbook strategy. Dropping the insistence on triangulation independence right from the beginning, we proposed a naive, Wilson-action-like discretisation of the Holst action. We carefully studied the connection of the Holst path integral with the canonical LQG correlation (n-point) functions and used relational techniques to make the connection with the physical Hilbert space and observables. In principle, none of these ingredients is new; they have been successfully employed in other contexts. Of course, the appealing elegance of (constrained) BF theory has disappeared in our formulation, the integrals to be computed are rather challenging (but not impossible), and the gauge fixing conditions for the spatial diffeomorphism and Hamiltonian constraints, as well as a local measure factor without which the connection to observation is lost, complicate the formalism. Yet we feel that the resulting structure, while far from being worked out in detail, has some interesting features, such as the Wick-like structure of the physical tetrad correlators and a less local structure. In particular, we have shown that imposing the time gauge also in the quantum theory emerges as a necessary and natural condition in our model in order to make contact with the LQG Hilbert space.
As already mentioned in the introduction, this paper is exploratory in nature. It focuses on ideas rather than analysis, and there are many open issues that need to be settled before the present model can be taken seriously. Apart from the topological issues mentioned in section 4.1, the convergence and measure-theoretic issues discussed in section 4.3, and finally the issues with the imposition of the time gauge outlined in section 4.5, there are further points that need to be addressed. One of the most serious is the continuum limit: the fact that we are working with cubulations suggests a naive but natural notion of continuum limit, which consists in studying the behaviour of the correlation functions under barycentric refinement of the hypercubes at fixed IR regulator (boundary surface). Of course, in the spirit of the AQG framework [20] one could also say that the continuum limit has already been taken, provided that one works with infinite cubulations. This then requires, in a separate step, removing the IR regulator. A more practical but still important problem is the following: even at finite UV and IR regulator, it is already hard enough to compute the determinant of the covariance matrix of the co-tetrad Gaussian and to determine its index (which may vanish automatically, see appendix B). But since these covariances are highly correlated, the practical computation of the n-point functions, at least in the macroscopic regime, will be possible only if the corresponding non-trivial measure has some kind of cluster property [54]. We hope to come back to all of these issues and many more in several future publications.

B On the index of special matrices
While determinants may be tedious to calculate, doing so is always possible analytically. The index of a matrix, however, is harder to obtain. While for concrete matrices there exist algorithms, such as Descartes' rule of signs [55], that determine the index just from the characteristic polynomial (rather than from the spectrum, which would be impossible to determine analytically for general large matrices), for general matrices of a given restricted structure no such algorithms are available except in a few cases. Our situation is the following: consider the 16 × 16 matrix G with entries G_{(μI),(νJ)} := G^{μν}_{IJ}. Since G^{μν}_{IJ} = −G^{νμ}_{IJ} = −G^{μν}_{JI} = G^{νμ}_{JI}, it is symmetric. Let us also write e_{μI} := e^I_μ. We consider the lexicographic ordering of the compound index (μI) as (00), .., (03), (10), .., (13), (20), .., (23), (30), .., (33). Consider the antisymmetric 4 × 4 matrices G^{μν} with 0 ≤ μ < ν ≤ 3 given by (G^{μν})_{IJ} := G^{μν}_{IJ}. Then the 16 × 16 matrix G has the block structure

G = \begin{pmatrix} 0 & A & B & C \\ -A & 0 & D & E \\ -B & -D & 0 & F \\ -C & -E & -F & 0 \end{pmatrix}, A := G^{01}, B := G^{02}, C := G^{03}, D := G^{12}, E := G^{13}, F := G^{23},

where we used that the block at position (ν, μ) is the transpose (G^{μν})^T = −G^{μν}. Conjecture: ind(G) = 0.
It turns out to be extremely hard to prove this conjecture, although it is rather plausible. For example, it is easy to show that the conjecture is correct when the matrices A, B, C, D, E, F are 2 × 2 antisymmetric matrices. It is also true when the matrices A, B, C, D, E, F are linearly dependent. We defer the proof (or disproof) of this conjecture to future publications.
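The conjecture and the two proven special cases can be probed numerically. The following sketch (plain numpy; the helper names `block_G`, `index_defect` and `random_antisymmetric` are ours, introduced for illustration and not part of the model) assembles the symmetric block matrix from six antisymmetric blocks and computes its signature; for the generic 4 × 4 case it merely probes the open conjecture rather than proving anything:

```python
import numpy as np

rng = np.random.default_rng(42)

def index_defect(G, tol=1e-9):
    """ind(G): (# positive) minus (# negative) eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(G)
    return int(np.sum(w > tol)) - int(np.sum(w < -tol))

def random_antisymmetric(n):
    """A random real antisymmetric n x n matrix."""
    M = rng.standard_normal((n, n))
    return M - M.T

def block_G(A, B, C, D, E, F):
    """The symmetric block matrix of the conjecture (blocks antisymmetric)."""
    Z = np.zeros_like(A)
    return np.block([[ Z,  A,  B,  C],
                     [-A,  Z,  D,  E],
                     [-B, -D,  Z,  F],
                     [-C, -E, -F,  Z]])

# Proven case: 2 x 2 antisymmetric blocks (G is then 8 x 8).
assert all(index_defect(block_G(*[random_antisymmetric(2) for _ in range(6)])) == 0
           for _ in range(20))

# Proven case: linearly dependent blocks, i.e. G = T (x) X with T, X antisymmetric.
X = random_antisymmetric(4)
assert all(index_defect(block_G(*[c * X for c in rng.standard_normal(6)])) == 0
           for _ in range(20))

# Open case: generic 4 x 4 antisymmetric blocks -- probe the conjecture only.
failures = sum(index_defect(block_G(*[random_antisymmetric(4) for _ in range(6)])) != 0
               for _ in range(50))
print("counterexamples among 50 random trials:", failures)
```

In both proven cases G is a Kronecker product of two antisymmetric matrices, whose eigenvalues are products of purely imaginary pairs and hence come in ± pairs, which is why the index vanishes there.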
If the conjecture were true and G non-singular, then we would know that G has eight positive and eight negative eigenvalues, and hence that det(G) > 0. In order to compute det(G) we make use of the basic factorisation property of an arbitrary block matrix with blocks A, B, C, D and A invertible,

det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = det(A) · det(D − C A^{−1} B). (B.4)

In our situation, by means of (B.4) we can iteratively reduce the size of the matrix whose determinant we have to compute from 16 to 8 and then to 4. At size 4 we may use Cayley's theorem [55] in order to express det(G) directly in terms of polynomials in the traces of products of the G^{μν}, and thus in terms of traces of products of the plaquette loops.
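The factorisation (B.4) is the standard Schur-complement determinant identity. A minimal numerical sketch of one downsizing step (plain numpy; the helper name `schur_det` is hypothetical, chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def schur_det(M, k):
    """det(M) via (B.4): split M into blocks [[A, B], [C, D]] with A the
    leading k x k block (assumed invertible) and use
    det(M) = det(A) * det(D - C A^{-1} B)."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    return np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)

# One downsizing step, 16 -> 8; applying the same identity to the 8 x 8
# Schur complement reduces the computation to 4 x 4 determinants.
M = rng.standard_normal((16, 16))
M = M + M.T  # symmetric, like G
assert np.isclose(schur_det(M, 8), np.linalg.det(M))
```

Note that each step requires the leading block to be invertible; for the matrix G of appendix B the leading 8 × 8 block is built from the antisymmetric blocks, so this has to be checked case by case.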