Tropical Monte Carlo quadrature for Feynman integrals

We introduce a new method to evaluate algebraic integrals over the simplex numerically. This new approach employs techniques from tropical geometry and exceeds the capabilities of existing numerical methods by an order of magnitude. The method can be improved further by exploiting the geometric structure of the underlying integrand. As an illustration of this, we give a specialized integration algorithm for a class of integrands that exhibit the form of a generalized permutahedron. This class includes integrands for scattering amplitudes and parametric Feynman integrals with tame kinematics. A proof-of-concept implementation is provided with which Feynman integrals up to loop order 17 can be evaluated.


Introduction

Motivation
Feynman integrals are ubiquitous in various branches of theoretical physics. They are hard to evaluate, and predictions for particle physics experiments rely heavily on them. Their evaluation even poses a bottleneck for the analysis of the data from some high-accuracy experiments [60]. This situation has fostered the development of extremely sophisticated and specialized technologies aimed at obtaining a manageable analytic expression for a given Feynman integral. The state-of-the-art technique is the differential equation method [72,93,62]. A slightly less powerful method, which is amenable to more general algebraic integrals, is systematic algebraic integration [27,89]. See also [100] for an overview of other methods.
The rapid development of these technologies in the last decades has been driven largely by new deep insights into the underlying mathematical structures. Recent advances in the differential equation method were inspired by the simple analytic expressions which can be obtained in supersymmetric quantum field theories via generalized unitarity and recursion relations [11,4,47]. A program to study the arithmetic properties of parametric Feynman integrals [13,28,29] led to the development of systematic algebraic integration algorithms. This arithmetic understanding of the relevant function classes was also one of the driving forces of the differential equation method [63] and is still driving new developments in the especially challenging elliptic regime [26].
All these technologies aim to obtain a closed-form analytic expression for the Feynman integral, and they all fail once the underlying graph and the associated physical parameters exceed a certain complexity. In these cases a numerical approach is the only way to proceed [24].
The most established numerical approach to tackle such integrals is sector decomposition. Sector decomposition as a tool for the numerical evaluation of Feynman integrals was developed by Binoth and Heinrich [12]. It was subsequently improved by Bogner and Weinzierl [15]. Another conceptual innovation of the overall method was achieved by Kaneko and Ueda [68], who placed sector decomposition on a geometric footing. Today, geometric sector decomposition is still the most powerful method for the numerical evaluation of Feynman integrals. It lies at the heart of two popular software tools [23,98]. Another promising numerical technique for Feynman integration is loop-tree duality [37], which is in an active development phase (see for instance [94,36] and the references therein).
In contrast to analytic evaluation methods, the mathematical structures exhibited by Feynman integrals are an essentially untapped resource in the context of numerical evaluation. Most numerical techniques are completely oblivious to the rich specific structure of the integrals, as they are designed to be applicable to arbitrary algebraic integrands. For this reason, the major objective of this paper is to use some of these mathematical structures to improve the numerical evaluation of Feynman graphs and to show that these dormant resources can be harnessed. The overall endeavour behind this objective consists of making progress towards the following two goals: The first goal is to make numerical evaluation techniques more applicable to real-world phenomenology. There are integrals which contribute to interesting measurable processes, but cannot be calculated analytically with available methods. For these integrals numerical evaluation is currently the only way to make predictions for experiments. Numerical methods naturally come with a caveat: they suffer from long evaluation times or they are limited in accuracy. Feynman integrals usually need to be evaluated a large number of times in a big parameter space. This is not difficult if an analytic expression for the integral is known which can be evaluated sufficiently fast, but it poses a tough challenge for numerical methods, which sacrifice evaluation speed and accuracy for generality. The task for this goal is therefore to increase the performance of numerical methods.
The second goal is to obtain reliable data in the large-order regime where analytic methods hopelessly fail. There are many indications that the large-order behaviour of perturbation theory is deeply intertwined with non-perturbative phenomena [77]. The analysis of the large-order behaviour of perturbation theory in quantum mechanics by Bender and Wu [7] has sparked an extremely fruitful branch of research in theoretical and mathematical physics [45,77]. Non-perturbative analytic calculations in quantum field theory are plagued with various gaps in our understanding of the underlying mathematics [82]. A repetition of an explicit Bender-Wu-like numerical analysis for perturbative quantum field theory is very desirable, as it would shed some light on a highly unexplored territory. Unfortunately, this is extremely challenging, as the evaluation of large numbers of Feynman integrals of order ∼ 100 would be necessary. It is hopeless to approach this task using the naive method of evaluating Feynman integrals one by one. New methods with which whole classes of diagrams can be evaluated at once need to be developed. The growing understanding of the geometry of amplitude integrals could lead the way to such methods. It is also necessary that these methods are computationally efficient: the demand for computing resources shall at most depend polynomially on the size of the problem (i.e. the respective order in the perturbative expansion).
This paper achieves some progress towards both these goals. The strategy is to employ tropical geometry [80] for numerical quadrature. Panzer [90] recently showed that a tropical version of a Feynman graph's period behaves similarly to the period itself and anticipated that this tropical version may be used for explicit numerical evaluation. Tropical geometry has also recently been used in the context of string theory and scattering amplitudes [35,5].
We will introduce a new Monte Carlo algorithm with which the numerical evaluation of Feynman integrals can be significantly accelerated. It is based on the established geometric sector decomposition principle and can be applied to general algebraic integrals such as the one below in eq. (1). The improvement comes from a stratified sampling approach to Monte Carlo quadrature, which is driven by the (tropical) geometric structure of the algebraic integrand. We will call this method tropical sampling. Tropical sampling effectively decouples the complexity of the underlying integral from the achievable accuracy within the Monte Carlo approach.
Even though this new tropical-geometric technique offers a significant improvement over the traditional procedure already for general algebraic integrals, a lot more can be achieved if further information on the geometric structures of the integral is used.
As an example of this, we will give a specialized algorithm for cases where the integrand at hand exhibits the form of a generalized permutahedron [92] in a certain sense which will be defined later. Many integrals in quantum field theory and string theory fall into this category. For instance, integrands for complete amplitudes in various theories [2] and Feynman integrals with generic Euclidean kinematics are of this kind [97].
A proof-of-concept implementation of the resulting algorithm is provided. With this implementation, high-dimensional Feynman integrals can be numerically integrated using widely available hardware. High-dimensional explicitly means that integrals corresponding to Feynman graphs with around 20 edges can be estimated up to $10^{-3}$ relative accuracy in a couple of CPU-seconds, and integrals for graphs with up to 30 edges in about half an hour. Ultimately, this approach is not CPU but memory constrained when the complexity increases. For instance, for a graph with 30 edges already 16 GB of computer memory are required to run the algorithm. To integrate an 18-loop $\phi^4$-theory four-point graph with the implementation, 1 TB of memory would be necessary.
Although neither algorithm is efficient in the strict sense, as exponential runtime and memory requirements start to dominate at some point, there is hope for the existence of an algorithm that evaluates a Feynman graph of loop order n up to a given accuracy with runtime and memory demands bounded by a polynomial in n (see Section 8 (1)).

Algebraic integrals over the simplex
The central object of study in this article is the integral
$$ I = \int_{\mathbb{P}^{n-1}_{>0}} \frac{\prod_i a_i(x)^{\nu_i}}{\prod_j b_j(x)^{\rho_j}} \, \Omega \qquad (1) $$
with
• the positive orthant of real projective space $\mathbb{P}^{n-1}_{>0}$ as integration domain,
• the sets of homogeneous polynomials $\{a_i\}, \{b_j\} \subset \mathbb{C}[x_1, \ldots, x_n]$,
• where the coefficients $\nu_i, \rho_j \in \mathbb{C}$ have non-negative real part: $\mathrm{Re}\,\nu_i, \mathrm{Re}\,\rho_j \geq 0$ and
• a fixed branch choice for each of the non-integer powers.
It follows that I is a projective integral over the projective simplex. The differential form Ω, which is of homogeneous degree 0, is also called the canonical form on this simplex [3]. Parametric Feynman integrals in quantum field theory are of the same type as the integral in eq. (1) [84]. In string theory this type of integral plays an equally important role [55]. Integrals over positive geometries, which appear in the theory of scattering amplitudes, can also be brought into this form [4,2,3].
The integral in eq. (1) can be written as an integral over the positive orthant of $\mathbb{R}^{n-1}$, $\mathbb{R}^{n-1}_{>0} = \{(x_1, \ldots, x_{n-1}) \in \mathbb{R}^{n-1} : x_k > 0\}$, by picking an affine chart for projective space. Such an integral is called a generalized Euler-Mellin integral. Continuing a program started by Gelfand, Kapranov and Zelevinsky (GKZ) [54], these integrals have been studied extensively by Nilsson and Passare and others [86,10]. This analysis is compatible with the geometric sector decomposition approach, as was shown by Schultka [97], who studied these integrals using toric geometry. Recently, generalized Euler-Mellin integrals gained new attention in the context of positive geometries, scattering amplitudes and string theory [5,58]. Along these lines, the analysis of Feynman integrals as GKZ-type hypergeometric functions has also recently gained a lot of attention [44,69,50].
As every non-homogeneous polynomial in n − 1 variables can be homogenized by introducing a new variable, every generalized Euler-Mellin integral such as the one in eq. (3) is equivalent to an integral of the projective form in eq. (1). For our considerations it will be more convenient to work with the projective form.

Outline of the paper
After introducing the necessary preliminaries from polyhedral geometry and numerical integration in Section 2, we will establish the most important tool in this paper in Section 3: an approximation of a multivariate polynomial, which is obtained by setting all its coefficients to 1 and replacing + by max. Starting for instance with the polynomial $p(x_1, x_2, x_3) = a x_1^2 x_2 + b x_1 x_2 x_3 + c x_3^3$, we get the 'approximation' $p^{tr}(x_1, x_2, x_3) = \max(x_1^2 x_2,\, x_1 x_2 x_3,\, x_3^3)$. This procedure is inspired by and closely related to tropical geometry. Hence, $p^{tr}$ will be called the tropical approximation of p. This tropical approximation will be the subject of the main Theorem 8 of this article, where it will be proven that $p^{tr}$ can indeed be used to approximate the polynomial p in a certain sense, as long as p is completely non-vanishing. In the rest of the article we will apply this property in various contexts.
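In code, this construction can be sketched as follows (a minimal illustration; the representation of the support as exponent tuples and the helper name `p_tr` are our choices, not the paper's):

```python
# Minimal sketch: the tropical approximation p^tr keeps only the support of p,
# drops all coefficients and replaces the sum over monomials by a maximum.
from math import prod

def p_tr(support, x):
    """Evaluate p^tr(x) = max over the support of x^e, for x with positive entries."""
    return max(prod(xk ** ek for xk, ek in zip(x, e)) for e in support)

# p(x1, x2, x3) = a*x1^2*x2 + b*x1*x2*x3 + c*x3^3 has support {(2,1,0), (1,1,1), (0,0,3)}
support = [(2, 1, 0), (1, 1, 1), (0, 0, 3)]
print(p_tr(support, (1.0, 2.0, 3.0)))  # max(2.0, 6.0, 27.0) = 27.0
```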
In Section 4, we will reformulate Kaneko-Ueda geometric sector decomposition in a tropical geometric framework. This reformulation will enable us to introduce the new tropical sampling algorithm in Section 5. This algorithm is significantly more efficient than traditional Monte Carlo methods, as the achievable accuracy is effectively decoupled from the complexity of the integral. Only the runtime of a preprocessing step still depends heavily on the complexity of the integral.
This new algorithm can be improved further if more is known about the structure of the integrand polynomials {a_i} and {b_j}. As an example of this, we will specialize to the case where the Newton polytopes of these polynomials are generalized permutahedra in Section 6. Generalized permutahedra are a family of polytopes with a rich combinatorial structure. Many polytopes belong to this family, including associahedra, the Newton polytopes of Symanzik polynomials and other polytopes at play in the theory of scattering amplitudes [2]. We will use results by Postnikov [92], Aguiar and Ardila [1], and Fujishige and Tomizawa [51] to formulate a specialized algorithm. This second new algorithm has more favorable runtime and memory requirements and is easier to implement.
Even though the improved algorithm can be applied to all generalized permutahedral integrands, as for instance the ones for complete scattering amplitudes introduced by Arkani-Hamed, Bai, He and Yan [2], we will specialize to Feynman integrals in Section 7 for illustrative purposes.
The first, general tropical sampling algorithm can always be applied to Feynman integrals regardless of their explicit form. The second algorithm can only be applied if the Newton polytopes of the Symanzik polynomials are generalized permutahedra. We will use results by Brown [28] and Schultka [97] which ensure that this is the case as long as we are in a non-exceptional Euclidean kinematic region. Subsequently, we will discuss some experimental results which have been obtained using a proof-of-concept implementation of the second algorithm.
We conclude in Section 8 with a selection of future research directions resulting from this project.

Notation for polytopes and multivariate polynomials
The integral in eq. (1) is convergent if the sets of polynomials {a_i} and {b_j} fulfill certain properties, which essentially have been determined by Nilsson and Passare [86]. In this section, we will briefly review these properties and introduce the necessary vocabulary from polyhedral and tropical geometry.
To keep the notation simple, we will identify the space of linear forms on $\mathbb{R}^n$ with $\mathbb{R}^n$ via the usual scalar product $v \cdot w = \sum_{k=1}^n v_k w_k$. A polytope is the intersection of a finite number of half-spaces in $\mathbb{R}^n$. A subset $F \subset P$ of a polytope $P \subset \mathbb{R}^n$ is a face of P if there is a vector $y \in \mathbb{R}^n$ and a scalar $\xi \in \mathbb{R}$ such that P is contained in the half-space $\{v \in \mathbb{R}^n : y \cdot v \leq \xi\}$ and F is the intersection of P with the hyperplane $\{v \in \mathbb{R}^n : y \cdot v = \xi\}$. Equivalently, a face of a polytope is a subset of P which maximizes a given linear functional $y \in \mathbb{R}^n$, $F = \{v \in P : y \cdot v = \max_{w \in P} y \cdot w\}$. We will assume polytopes to be bounded, i.e. $\max_{w \in P} y \cdot w < \infty$ for all $y \in \mathbb{R}^n$.
For a pair of non-negative real numbers $\lambda, \mu \geq 0$, the weighted Minkowski sum of two polytopes $P, Q \subset \mathbb{R}^n$ is $\lambda P + \mu Q = \{\lambda v + \mu w : v \in P, w \in Q\} \subset \mathbb{R}^n$. The relative interior, relint(P), of a polytope P is the interior of P determined in the subtopology of the affine hull of P, which is the affine subspace of minimal dimension that contains P.
We can write a generic multivariate polynomial p in the variables $x_1, \ldots, x_n$ as
$$ p(x) = \sum_{\ell \in \mathrm{supp}(p)} c_\ell \, x^\ell, $$
where supp(p), the support of p, is the set of all multi-indices $\ell = (\ell_1, \ldots, \ell_n) \in \mathbb{Z}^n$ such that $c_\ell \neq 0$. We will make regular use of the multiplicative multi-index notation $x^\ell = \prod_{k=1}^n x_k^{\ell_k}$ as above. The Newton polytope of p is the convex hull of the elements of supp(p) interpreted as vectors in $\mathbb{R}^n$:
$$ \mathbf{N}_p = \mathrm{conv}(\mathrm{supp}(p)) \subset \mathbb{R}^n. $$
The Newton polytope $\mathbf{N}_p$ of a homogeneous polynomial p of degree d in n variables is at most (n − 1)-dimensional, as it is contained in the hyperplane $\{v \in \mathbb{R}^n : \mathbf{1} \cdot v = d\}$, where $\mathbf{1}$ is the all-ones vector $\mathbf{1} = (1, \ldots, 1) \in \mathbb{R}^n$.

Definition 1. For each face F of $\mathbf{N}_p$ associated to a polynomial p, we define the truncated polynomial $p^F$ by
$$ p^F(x) = \sum_{\ell \in \mathrm{supp}(p) \cap F} c_\ell \, x^\ell. $$

To ensure convergence of integrals such as the one in eq. (1), the following property of polynomials is useful:

Definition 2. A polynomial $p \in \mathbb{C}[x_1, \ldots, x_n]$ is completely non-vanishing on a domain X if for each face $F \subset \mathbf{N}_p$, the truncated polynomial $p^F$ does not vanish on X.
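The support and the truncated polynomial can be illustrated with a small sketch (the dictionary representation and the helper `truncate` are hypothetical, for illustration only; for generic y the selected face is a single vertex of the Newton polytope):

```python
# Hedged sketch: a polynomial as {exponent tuple: coefficient}; the truncated
# polynomial p^F keeps the terms whose exponents lie on the face of the Newton
# polytope maximizing the linear functional y.
def truncate(poly, y):
    dot = lambda e: sum(yk * ek for yk, ek in zip(y, e))
    m = max(dot(e) for e in poly)
    return {e: c for e, c in poly.items() if dot(e) == m}

# p = x1^2*x2 + 5*x1*x2*x3 + 2*x3^3 is homogeneous of degree 3, so all its
# exponents lie in the hyperplane 1.v = 3.
p = {(2, 1, 0): 1, (1, 1, 1): 5, (0, 0, 3): 2}
print(truncate(p, (0, 0, 1)))  # face maximizing the third coordinate: {(0, 0, 3): 2}
print(truncate(p, (1, 1, 1)))  # y = 1 is maximized by every exponent: all of p
```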
With this terminology at hand we can give a convergence criterion for the integral in eq. (1).

Theorem 3. We define the polytopes $A, B \subset \mathbb{R}^n$ as the weighted Minkowski sums $A = \sum_i (\mathrm{Re}\,\nu_i)\, \mathbf{N}_{a_i}$ and $B = \sum_j (\mathrm{Re}\,\rho_j)\, \mathbf{N}_{b_j}$ of the Newton polytopes of the numerator and denominator polynomials {a_i} and {b_j}.
The integral in eq. (1) is convergent if A and B fulfill two requirements, R1 and R2, which in particular demand that $A \subset \mathrm{relint}\, B$. A similar theorem in the equivalent context of Euler-Mellin integrals was proven in [86] (see also [10]). The tropical approximation that we will introduce later will lead to an alternative proof of Theorem 3, which we postpone to Section 4.

Monte Carlo quadrature
We will be interested in situations where the dimension n of the integral in eq. (1) is 'not small'. The dimension of an integral is small from the perspective of numerical quadrature if fast-converging deterministic quadrature methods are feasible. The computational demands of deterministic quadrature methods such as Gauss quadrature grow exponentially with the dimension. For this reason, it is necessary to use non-deterministic methods which do not suffer from an exponential slow-down if n is not small. Monte Carlo quadrature is the most elementary of these. The working principle behind it is the following fact:

Theorem 5 (Monte Carlo quadrature (see for instance [57])). If $x^{(1)}, \ldots, x^{(N)}$ are independent random variables with probability density measure µ, i.e. $1 = \int_\Gamma \mu$ and $\mu > 0$ on the domain Γ, then
$$ \mathrm{E}\left[\frac{1}{N} \sum_{i=1}^N f\big(x^{(i)}\big)\right] = \int_\Gamma f \mu \quad \text{and} \quad \mathrm{Var}\left[\frac{1}{N} \sum_{i=1}^N f\big(x^{(i)}\big)\right] = \frac{1}{N}\left( \int_\Gamma |f|^2 \mu - \left| \int_\Gamma f \mu \right|^2 \right), $$
provided that the integrals in the last two lines exist.
This theorem may be applied to approximate the integral $\int_\Gamma f(x) \mu$ as long as we have a way of generating samples from the distribution µ. The condition that the integral for the variance shall exist effectively restricts the set of functions f which can be integrated numerically to the set of square-integrable functions under the measure µ over the domain Γ. Note that in our convention of the statement of Theorem 5 the expectation value E[•] may be complex, but the variance Var[•] is always real and non-negative.
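A minimal illustration of Theorem 5 with the uniform measure on the unit cube (this is plain Monte Carlo, not the tropical sampling of later sections; the test integrand is an arbitrary choice):

```python
# Plain Monte Carlo quadrature: sample mean as estimator, with the standard
# error sqrt(Var/N) estimated from the same samples.
import random
random.seed(0)  # fixed seed for reproducibility

def monte_carlo(f, d, N):
    samples = [f([random.random() for _ in range(d)]) for _ in range(N)]
    mean = sum(samples) / N
    var = sum((s - mean) ** 2 for s in samples) / (N - 1)
    return mean, (var / N) ** 0.5

# Integrate f(x1, x2) = x1*x2 over [0,1]^2; the exact value is 1/4.
est, err = monte_carlo(lambda x: x[0] * x[1], d=2, N=100_000)
print(est, err)
```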
The integral in eq. (1) is not directly amenable to Monte Carlo quadrature, as the differential form Ω over the domain $\mathbb{P}^{n-1}_{>0}$ as defined for eq. (1) is not a probability distribution: it is not normalizable. Even if we use the affine representation in eq. (3) and map $\mathbb{R}^{n-1}_{>0}$ onto a bounded domain via a variable transformation (for instance by $x \mapsto x/(1+x)$, which maps $\mathbb{R}_{>0} \to (0,1)$ smoothly and injectively), the resulting integral will, in the general case, not be square integrable. A pragmatic solution to this problem is sector decomposition [12], where the integral I is expressed as a sum of integrals which are each individually directly amenable to Monte Carlo integration.

Sector decomposition
In the context of quantum field theory, sector decomposition goes back to Hepp and Speer, who used the technique to prove the finiteness of renormalized Feynman integrals [64,101]. Even though Hepp/Speer sector decomposition can be employed to deal with the singularities of Euclidean Feynman integrals, it turned out to be insufficient to handle more general singularities which appear in Minkowski-space Feynman integrals (see [59] and [100, Chapter 4] for reviews on the topic). A more general approach was pioneered by Binoth and Heinrich [12], who introduced a recursive algorithm that decomposes general integrals of the form in eq. (1) (or equivalently as in eq. (3)) into a set of sector integrals, such that the auxiliary polynomials $p_{s,i}(x)$ and $q_{s,j}(x)$ do not vanish inside the integration domain $[0,1]^{n-1}$ (at least as long as all the coefficients of the initial denominator polynomials $b_j$ are positive, which implies that these polynomials are completely non-vanishing) and $C_s$ is a prefactor for each sector $s \in S$. If all components of the vector $m^{(s)}$ are positive, i.e. $m^{(s)}_k > 0$ for all $k \in \{1, \ldots, n-1\}$, a simple reparametrization $\xi = x^{m^{(s)}}$ produces an integral with a bounded integrand to which basic Monte Carlo as described in Theorem 5 can immediately be applied using the uniform measure $\mu = \prod_{k=1}^{n-1} d\xi_k$ on the unit (n − 1)-cube $\Gamma = [0,1]^{n-1}$. Although a problem of the method which impeded the recursion from terminating was solved by Bogner and Weinzierl [15], this class of algorithms suffers from a proliferation in the number of sectors with rising complexity of the underlying polynomials. Moreover, a new set of polynomials $p_{s,i}, q_{s,j}$ is associated to each sector. This means that we are not necessarily dealing with a partition of the integration domain, but with a non-trivial distribution of the volume of the integral into each of the sectors $s \in S$.
An innovation that was both conceptual and practical was achieved by Kaneko and Ueda, who reinterpreted this decomposition as a geometric problem [68]. This geometric viewpoint results in more economical decompositions in terms of the total number of sectors (see [99] for a comparison of different methods), while also arguably being conceptually more elegant.

Analytic continuation
By Theorem 3, the convergence of the integral in eq. (1) depends on the values of the parameters {ν_i} and {ρ_j}. Provided that there is an extended domain of such parameters where the integral is convergent, we can interpret it as a function of these parameters and perform an analytic continuation. It turns out that this analytic continuation is a meromorphic function in these parameters [86,10].
A sector decomposition as in eq. (4) provides a pragmatic way to perform this analytic continuation. A violation of the condition $A \subset \mathrm{relint}\, B$ in Theorem 3 corresponds to a component of $m^{(s)}$ in the integral in eq. (4) being non-positive, i.e. $m^{(s)}_k \leq 0$ for some sector $s \in S$. Hence, if we assume that the polynomials $p_{s,i}, q_{s,j}$ are non-vanishing on the integration domain, then the associated integral $I_s$ is divergent. Performing an analytic continuation of this integral, interpreted as a function of the coefficients of $m^{(s)}$, is a simple task. A standard approach is to integrate over a Pochhammer contour instead of the unit interval in eq. (5). This avoids the singularity at the integration boundary and agrees with the integral over the unit interval if convergence is ensured (see for instance [103]).
For explicit computations it is sufficient to compute a Taylor expansion of the rational function in eq. (5) up to an appropriate order, integrate the analytically continued expansion terms analytically and the remainder term numerically. See for instance [12, Part III], where this process is described in detail.
A more sophisticated strategy to perform this analytic continuation is based on iteratively performing 'directed integration by parts' on the integral in eq. (1) and thereby extending the domain of {ν_i}, {ρ_j} parameters in which the integral converges. The inner workings of this procedure are of geometric nature and make use of the structure of the polytopes A, B. See [86, Theorem 2] and thereafter and also [10, Theorem 2.4] for a description of this method. This approach gives, after being applied to a given integral such as the one in eq. (1), a sum of integrals of the same type where each integral has a larger region of convergence than the original one. A similar procedure has been developed independently in [81] for the special case of parametric Feynman integrals.
In this work, we will therefore assume that the integral has been subjected to such a procedure, so that we are within the region of convergence in terms of the {ν_i}, {ρ_j} parameters.

The tropical approximation
For the considerations in this article the following tropical approximation of a polynomial will be central:

Definition 6. For a polynomial $p(x) = \sum_{\ell \in \mathrm{supp}(p)} c_\ell x^\ell$, the tropical approximation is
$$ p^{tr}(x) = \max_{\ell \in \mathrm{supp}(p)} x^\ell \quad \text{for all } x \in \mathbb{R}^n_{>0}. $$
Such an object has been defined by Panzer [90] for the Kirchhoff polynomial to study the Hepp-bound, a graph invariant relevant for Feynman period integral calculations.We adopt Panzer's notation and denote tropically approximated polynomials with a superscript tr .
To give some additional motivation to consider this 'tropical approximation', suppose that a polynomial p has only real and positive coefficients and interpret it as a function $p : \mathbb{R}^n_{>0} \to \mathbb{R}_{>0}$. The tropical limit is
$$ \lim_{\xi \to \infty} p(x^\xi)^{\frac{1}{\xi}} = p^{tr}(x), $$
where $x^\xi = (x_1^\xi, \ldots, x_n^\xi)$. This way, $p^{tr}$ can be seen as a deformed version of p: the function $p(x^\xi)^{\frac{1}{\xi}}$ interpolates between p and $p^{tr}$ with ξ between 1 and ∞. A limit as ξ → ∞, with the associated phenomenon of transforming a very smooth object (in this case a polynomial) into a function with non-differentiable singularities, is something commonly encountered in physics. For instance, the thermodynamical limit is of a similar nature. These kinds of limits give rise to numerous critical phenomena. Also the weak string coupling limit α′ → 0 shows this behavior [5].
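The tropical limit can be observed numerically (a sketch; the polynomial, its coefficients and the evaluation point are arbitrary choices):

```python
# For a polynomial with positive coefficients, p(x^xi)^(1/xi) approaches
# p^tr(x) as the deformation parameter xi grows, with x^xi taken component-wise.
def p(x1, x2, x3):  # p = 3*x1^2*x2 + 2*x1*x2*x3 + 5*x3^3
    return 3 * x1**2 * x2 + 2 * x1 * x2 * x3 + 5 * x3**3

def p_deformed(x, xi):
    return p(*(xk**xi for xk in x)) ** (1.0 / xi)

x = (1.0, 2.0, 3.0)
p_tr_x = max(x[0]**2 * x[1], x[0] * x[1] * x[2], x[2]**3)  # p^tr(x) = 27.0
for xi in (1, 10, 100):
    print(xi, p_deformed(x, xi))  # tends to p^tr(x) = 27.0 from above
```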
In our case $p^{tr}$ is of 'simpler' nature than the original polynomial p. Information is lost in going from a polynomial to its tropical approximation, as $p^{tr}$ only depends on the support of p. In fact, $p^{tr}$ is nothing but a realization of a geometric object: the Newton polytope of the polynomial p.
To make this explicit, change to logarithmic coordinates $y_k = \log x_k$ in Definition 6 and use the fact that the Newton polytope is the convex hull of the support of the underlying polynomial. We find that
$$ \log p^{tr}(e^y) = \max_{v \in \mathrm{supp}(p)} y \cdot v = \max_{v \in \mathbf{N}_p} y \cdot v, $$
which is a piecewise-linear function $\mathbb{R}^n \to \mathbb{R}$, $y \mapsto \max_{v \in \mathbf{N}_p} y \cdot v$. This function is the support function of the Newton polytope $\mathbf{N}_p$ [61]. Often it is useful to write $p^{tr}$ in exponential form using the support function:
$$ p^{tr}(e^y) = \exp\left( \max_{v \in \mathbf{N}_p} y \cdot v \right), $$
where we used the notation $e^y = (e^{y_1}, \ldots, e^{y_n})$ to denote the component-wise exponential. This support function is also a tropicalization of the polynomial p, which uses the trivial valuation on $\mathbb{C}$ to tropicalize. Because much of the algebro-geometric information of the polynomial p carries over to its tropicalization, tropical geometry has developed into a fruitful branch of algebraic geometry in recent years [80]. It is tempting to call $p^{tr}$ the tropicalization of p. Unfortunately, this name is reserved for $y \mapsto \log p^{tr}(e^y)$, and we will use the name tropical approximation instead. The motivation for this is that, besides being a simplification of p, $p^{tr}$ can also be used to approximate p.
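The agreement between Definition 6 and the support-function form can be checked numerically (a sketch with an arbitrarily chosen support and evaluation point):

```python
# Consistency check: p^tr evaluated directly from Definition 6 agrees with the
# exponential of the support function, p^tr(e^y) = exp(max over supp(p) of y.v).
from math import exp, prod, isclose

supp = [(2, 1, 0), (1, 1, 1), (0, 0, 3)]

def p_tr(x):
    return max(prod(xk**vk for xk, vk in zip(x, v)) for v in supp)

def support_function(y):
    return max(sum(yk * vk for yk, vk in zip(y, v)) for v in supp)

y = (0.3, -1.2, 0.7)
lhs = p_tr(tuple(exp(yk) for yk in y))
rhs = exp(support_function(y))
print(isclose(lhs, rhs))  # True
```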

The approximation property
The main theorem of this article is the following approximation property of $p^{tr}$ with respect to the polynomial p:

Theorem 8. Let $p \in \mathbb{C}[x_1, \ldots, x_n]$.
A. There is a constant $C' > 0$ such that $|p(x)| \leq C' \, p^{tr}(x)$ for all $x \in \mathbb{R}^n_{>0}$.
B. If p is completely non-vanishing on $\mathbb{R}^n_{>0}$, there is a constant $C > 0$ such that $C \, p^{tr}(x) \leq |p(x)|$ for all $x \in \mathbb{R}^n_{>0}$.

This property trivially extends to homogeneous polynomials, which are naturally considered to be functions on projective space:

Corollary 9. If p is homogeneous, then $|p(x)| \leq C' \, p^{tr}(x)$ for all $x \in \mathbb{P}^{n-1}_{>0}$, and if p is additionally completely non-vanishing on $\mathbb{P}^{n-1}_{>0}$, then also $C \, p^{tr}(x) \leq |p(x)|$ on $\mathbb{P}^{n-1}_{>0}$.
Strictly speaking, p(x) and $p^{tr}(x)$ are not well-defined objects for $x \in \mathbb{P}^{n-1}_{>0}$. The quotient $p(x)/p^{tr}(x)$, on the other hand, makes sense for all projective $x \in \mathbb{P}^{n-1}_{>0}$. Inequalities such as the ones above, which can be written as quotients of homogeneous objects, shall be interpreted accordingly, in this obvious sense as statements on these quotients.
Proof. If p is homogeneous and completely non-vanishing on $\mathbb{P}^{n-1}_{>0}$, it is also completely non-vanishing on $\mathbb{R}^n_{>0}$. The inequality in Theorems 8.A and 8.B is homogeneous, therefore it trivially extends to $\mathbb{P}^{n-1}_{>0}$.
By Theorem 8, p tr can indeed be used to 'approximate' p, i.e. it provides a lower and an upper bound of p with appropriate prefactors, as long as p is completely non-vanishing.
Theorem 8.B is substantially harder to prove than Theorem 8.A. The proof of Theorem 8.B will be given in Section 3.2. Only a special case of it is also trivial: if the polynomial p has only positive coefficients (which implies that p is completely non-vanishing on $\mathbb{R}^n_{>0}$), there is a simple lower bound for |p(x)|: take for instance $C = \min_{\ell \in \mathrm{supp}(p)} c_\ell$. For such a lower bound to exist it is not necessary for the polynomial to have only positive coefficients; it is sufficient for the polynomial to be completely non-vanishing. In fact, the existence of such a lower bound is also necessary for a polynomial to be completely non-vanishing, which can be proven using a similar argument as in the proof of Theorem 8.B below.
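For a positive-coefficient polynomial, both bounds of Theorem 8 can be verified numerically with elementary constants (a sketch; the polynomial and sample range are arbitrary choices, with $C = \min_\ell c_\ell$ as in the text and $C' = \sum_\ell c_\ell$, which always works for Theorem 8.A in this positive case):

```python
# Sanity check of Theorem 8 for a polynomial with positive coefficients:
# C_low * p^tr(x) <= p(x) <= C_up * p^tr(x) on random positive points.
import random
from math import prod
random.seed(1)

poly = {(2, 1, 0): 0.5, (1, 1, 1): 3.0, (0, 0, 3): 1.5}
mono = lambda x, e: prod(xk**ek for xk, ek in zip(x, e))
p    = lambda x: sum(c * mono(x, e) for e, c in poly.items())
p_tr = lambda x: max(mono(x, e) for e in poly)

C_low, C_up = min(poly.values()), sum(poly.values())
for _ in range(1000):
    x = [random.uniform(0.01, 100.0) for _ in range(3)]
    assert C_low * p_tr(x) <= p(x) <= C_up * p_tr(x)
print("bounds hold on all samples")
```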

Cones and normal fans
A (polyhedral) cone is a subset of $\mathbb{R}^n$ that is closed under linear combinations with non-negative scalars, e.g. $C = \{\sum_k \lambda_k u^{(k)} : \lambda_k \geq 0\}$ for some set of given vectors $u^{(1)}, u^{(2)}, \ldots \in \mathbb{R}^n$. A fan in $\mathbb{R}^n$ is a family $\mathcal{F} = \{C_1, C_2, \ldots\}$ of cones with the property that every face of a cone in $\mathcal{F}$ is also in $\mathcal{F}$ and that the intersection of two cones $C_1, C_2 \in \mathcal{F}$ is a face of both $C_1$ and $C_2$. The normal cone associated to a face F of the polytope P is the set of all linear functionals that are maximal on the respective face,
$$ C_F = \{ y \in \mathbb{R}^n : y \cdot v = \max_{w \in P} y \cdot w \text{ for all } v \in F \}. $$
The set of normal cones of a polytope is its normal fan:
$$ \mathcal{N}_P = \{ C_F : F \text{ is a face of } P \}. $$
The normal fan is always complete, that means $\mathbb{R}^n = \bigsqcup_{C \in \mathcal{N}_P} \mathrm{relint}\, C$, where $\bigsqcup$ denotes the disjoint union. If a face F has dimension d, then the associated normal cone $C_F$ has dimension n − d, where n is the dimension of the ambient space. The maximal cones in the fan are the cones of maximal dimension. Figures 1a and 1b depict a polytope and its normal fan. Note that the maximal cones can be associated to the vertices of the polytope.
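The partition of direction space into maximal normal cones can be illustrated with a toy example (a 2-d sketch; the unit square and all names are our choices): classifying a direction y by the vertex of P that maximizes $y \cdot v$ identifies the maximal cone containing y.

```python
# The maximal cones of the normal fan partition direction space according to
# which vertex of P maximizes the linear functional y (here P = unit square).
verts = [(0, 0), (1, 0), (0, 1), (1, 1)]

def maximizing_vertex(y):
    return max(verts, key=lambda v: y[0] * v[0] + y[1] * v[1])

print(maximizing_vertex((2.0, 3.0)))   # (1, 1): y lies in that vertex's maximal cone
print(maximizing_vertex((-1.0, 0.5)))  # (0, 1)
```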
For the proof of Theorem 8.B, it is convenient to have three further properties of the tropical approximation $p^{tr}$ and the associated polytopes at hand:

Lemma 10. For each face F of the Newton polytope $\mathbf{N}_p$ of a polynomial p, the truncated polynomial $p^F$ fulfills $p^F(e^{s+t}) = p^{tr}(e^s)\, p^F(e^t)$ for all $s \in C_F$ and $t \in \mathbb{R}^n$.
Figure 1: A polytope and its normal fan. (a) The polytope P. (b) The normal fan of P with the maximal cones labelled. (c) The normal fan of P with the modified cones indicated.
Lemma 11. $p^{tr}(e^{s+t}) \leq p^{tr}(e^s)\, p^{tr}(e^t)$ for all $s, t \in \mathbb{R}^n$.
Proof. This follows from Proposition 7 and the definition of $p^{tr}$.

Lemma 12. There is a constant $C > 0$ such that $y \cdot (v - k) \geq C R$ for all $R \geq 0$, all faces $F \subset P$ with $k \in \mathrm{vert}(P) \setminus F$, all $v \in F$, and all y in the modified cone $\widetilde{C}_F = C_F \setminus \bigcup (C_{F'} + B_R)$, where vert(P) is the set of vertices of P and the union is over all faces $F'$ of P of higher dimension than F; see Figure 1c. Note that the size of the 'gaps' between the modified cones is the diameter 2R of the ball $B_R$.
Proof. For a given face $F \subset P$ and $k \in \mathrm{vert}(P) \setminus F$, choose some vertex $v \in \mathrm{vert}(F)$ and consider the hyperplane $H = \{z \in \mathbb{R}^n : z \cdot (v - k) = 0\}$. By definition of the normal cone, this hyperplane will not intersect the interior of $C_F$, as $y \cdot v \geq y \cdot w$ for all $w \in P$ and $y \in C_F$. We can project any point $y \in C_F$ onto H using the orthogonal projection $y^\perp = y - \frac{y \cdot (v-k)}{\|v-k\|^2}(v-k)$. The line segment from y to $y^\perp$ will intersect a face of $C_F$. Let $C_{F'}$ be this face. Clearly, $\dim F' > \dim F$. If $y \notin C_{F'} + B_R$, then the vector y must have a larger distance than R from all points in $C_{F'}$. By construction, the orthogonal projection $y^\perp$ is at least as far away from y as the closest point in $C_{F'}$. Hence,
$$ R \leq \|y - y^\perp\| = \frac{y \cdot (v - k)}{\|v - k\|}, $$
where we used that $y \cdot (v - k) \geq 0$ for all $v \in F$, $k \in P$ and $y \in C_F$. To prove the statement, choose $C = \min_{v \neq w \in \mathrm{vert}(P)} \|v - w\|$.

Theorem 8.B now follows as a corollary from the following Proposition 13, where $B_R \subset \mathbb{R}^n$ is a ball of radius R.
Proof. We are going to prove this by induction on the codimension of F. Starting with codimension 0, i.e. $F = \mathbf{N}_p$, the statement holds by definition. Suppose F is of codimension d and eq. (8) holds for all faces up to codimension d − 1. By the induction hypothesis, for each R > 0 there exists a constant C > 0 such that eq. (8) is fulfilled in a ball of size R around all cones $C_{F'}$ of lower dimension, $\dim C_{F'} < \dim C_F$. We therefore only need to prove the existence of such a constant for the smaller domain $C_F \setminus \bigcup (C_{F'} + B_R)$, where the union is over all faces $F'$ of $\mathbf{N}_p$ of higher dimension than F. See Figure 1c for an illustration of this smaller domain.
By definition of the truncated polynomial, we can write p^tr(e^{s+t}) as a maximum over the monomials of p, which is bounded by the product p^tr(e^s) p^tr(e^t).

Remark 14. In the proofs of Lemma 12 and Proposition 13, we actually constructed explicit bounds for the constants in Theorem 8 which depend on the geometry of the relevant polytopes and polynomials. These explicit bounds might be useful for further considerations, but we will not make use of them in this article.
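The submultiplicativity of Lemma 11 is easy to check numerically. The following sketch assumes the tropical approximation p^tr(x) = max_α x^α with α running over the exponent vectors of p; the example support is an arbitrary choice for illustration.

```python
import math
import random

def p_tr(support, x):
    """Tropical approximation: p^tr(x) = max over exponent vectors alpha
    in the support of p of the monomial x^alpha."""
    return max(math.prod(xi ** a for xi, a in zip(x, alpha)) for alpha in support)

random.seed(1)
support = [(2, 0, 1), (0, 3, 0), (1, 1, 1)]  # hypothetical exponent vectors

for _ in range(1000):
    s = [random.uniform(-2.0, 2.0) for _ in range(3)]
    t = [random.uniform(-2.0, 2.0) for _ in range(3)]
    lhs = p_tr(support, [math.exp(a + b) for a, b in zip(s, t)])
    rhs = p_tr(support, [math.exp(a) for a in s]) * p_tr(support, [math.exp(b) for b in t])
    # p^tr(e^{s+t}) <= p^tr(e^s) p^tr(e^t), up to floating point round-off
    assert lhs <= rhs * (1.0 + 1e-12)
```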

Geometric sector decomposition
The rough overall plan of our take on the integral in eq. (1) is as follows: we can trivially 'factorize' its integrand into two factors. The second factor is bounded by Corollary 9 as long as the {b_j} polynomials are completely non-vanishing. The first factor has a geometric interpretation in terms of the polytopes A = Σ_i (Re ν_i) N[a_i] and B = Σ_j (Re ρ_j) N[b_j] as defined in Theorem 3.
As before, it will be handy to change to logarithmic coordinates to expose this geometric interpretation. The component-wise exponential Exp : R^n → R^n_{>0}, y ↦ e^y = (e^{y_1}, ..., e^{y_n}), extends to a smooth bijective map Exp : R^n/1R → P^{n−1}_{>0}, as Exp respects the respective equivalence relations. That means if x = e^y and x' = e^{y'} with y, y' ∈ R^n, then y' = y + µ1 for some µ ∈ R if and only if x' = λx for some λ ∈ R_{>0}. For this reason the quotient R^n/1R is also called tropical projective space.
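The correspondence between shifts along the 1-vector and overall rescalings can be made concrete with a few lines of code (a trivial numerical illustration):

```python
import math

def Exp(y):
    """Component-wise exponential Exp : R^n -> R^n_{>0}."""
    return [math.exp(v) for v in y]

y = [0.3, -1.2, 2.0]
mu = 0.7
x = Exp(y)
x_shifted = Exp([v + mu for v in y])
lam = math.exp(mu)
# shifting y by mu*1 rescales x by lambda = e^mu, i.e. x and x_shifted
# represent the same point of tropical projective space P^{n-1}_{>0}
assert all(abs(b - lam * a) < 1e-12 for a, b in zip(x, x_shifted))
```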
By Proposition 7 and the definition of the weighted Minkowski sum, we have with x = e^y

∏_i a^tr_i(x)^{Re ν_i} / ∏_j b^tr_j(x)^{Re ρ_j} = exp( max_{v∈A} y • v − max_{v∈B} y • v ).

If A and B fulfill the requirements R1 and R2 of Theorem 3, this exponent falls off sufficiently fast for large y for the integral in eq. (1) to be convergent.

Lemma 15.
Let A and B be the polytopes defined in Theorem 3. If A and B fulfill the requirements R1 and R2 of Theorem 3, then there is a constant ε > 0 such that max_{v∈A} y • v + ε ‖y‖_{R^n/1R} ≤ max_{v∈B} y • v for all y, where ‖y‖_{R^n/1R} = inf_{µ∈R} ‖y + µ1‖ is the norm on the quotient space R^n/1R induced from the standard norm ‖ • ‖ on R^n.
Proof. First note that the inequality in the statement is well-defined for y ∈ R^n/1R, as A and B both lie in the same hyperplane H_ξ. As B is full-dimensional in H_ξ (see Remark 4) and A ⊂ relint B, we can Minkowski add a ball B_ε to A such that A + B_ε ⊂ B, provided that this ball only extends in the subspace orthogonal to 1R, which is parallel to H_ξ, and ε is sufficiently small. Let B_ε = {v ∈ R^n : 1 • v = 0 and ‖v‖ ≤ ε} be such a ball. The resulting convex set A + B_ε is the outer parallel body of A restricted to its affine hull.
Observe that max_{v∈B_ε} y • v = ε ‖y‖_{R^n/1R}. By the definition of the Minkowski sum and because A + B_ε ⊂ B, the statement follows.

To be able to handle Kaneko-Ueda geometric sector decomposition with our tropical approach, we will need additional tools from convex geometry.
A cone is called pointed if it contains no 1-dimensional subspace. A fan is pointed if all its cones are pointed. If a polytope P ⊂ R^n is full-dimensional, its normal fan F is pointed. For lower-dimensional polytopes P ⊂ R^n, the normal cone C_P associated to the polytope itself is non-trivial: it consists of all linear functionals that are constant on P. This subspace is contained in each cone of the normal fan. Taking the quotient with respect to this subspace within each cone C ∈ F of the normal fan results in a pointed fan F/P^⊥ on the quotient vector space R^n/P^⊥. This fan is the reduced normal fan.
Given a fan F, another fan F' refines F if every cone in F is a union of cones in F'. If F and G are both fans, then their common refinement is defined as F ∧ G = {C ∩ C' : C ∈ F, C' ∈ G}. Let F_AB be the common refinement of the normal fans of the polytopes A and B which were defined in Theorem 3. Recall that the polytopes A, B are not full-dimensional, because they are weighted Minkowski sums of the Newton polytopes of homogeneous polynomials (see Remark 4). As B is required to be full-dimensional in an (n − 1)-dimensional hyperplane which is orthogonal to the 1-vector, we have B^⊥ = 1R. We will therefore consider the reduced refined normal fan F_AB/1R, which is pointed. The following lemma identifies the exponentiated cones Exp(C) of C ∈ F_AB/1R as the domains where the function ∏_i a^tr_i(x)^{Re ν_i} / ∏_j b^tr_j(x)^{Re ρ_j} behaves like a monomial.

Proof. As F_AB is a refinement of the normal fans of A and B, there must be normal cones C_{w_A} and C_{w_B} containing C, so that max_{v∈A} y • v = y • w_A and max_{v∈B} y • v = y • w_B for all y ∈ C, where we can choose arbitrary vertices w_A and w_B of the corresponding faces F_A and F_B by definition of the normal cone in eq. (7).
Since A and B are required to lie in the same hyperplane orthogonal to the 1-vector, we also have y • (w_B − w_A) > 0 for all y ∈ C \ {0}, and the statement follows from eq. (9).
A cone C ⊂ R^n/1R is simplicial if it can be written as C = {Σ_{k=1}^d λ_k u^{(k)} : λ_k ≥ 0}, where u^{(1)}, ..., u^{(d)} are linearly independent. For a given cone C, we can always find a set of simplicial cones that decompose it, i.e. a refinement F^∆_AB/1R of F_AB/1R such that each cone in F^∆_AB/1R is simplicial. A feature of simplicial cones is that there are convenient coordinates describing points in their interior. This fact is important while proving the following lemma:

Lemma 17. If a pointed simplicial cone C ⊂ R^n/1R is generated by the linearly independent vectors u^{(1)}, ..., u^{(n−1)} and w ∈ R^n with 1 • w = 0 fulfills u^{(k)} • w > 0 for all k, then

∫_{Exp(C)} f(x) x^{−w} Ω = |det(u^{(1)}, ..., u^{(n−1)}, 1)| ( ∏_{k=1}^{n−1} 1/(u^{(k)} • w) ) ∫_{[0,1]^{n−1}} f(x(ξ)) d^{n−1}ξ,

where f : P^{n−1}_{>0} → C is a measurable homogeneous function of degree 0 and x(ξ) ∈ Exp(C) is given component-wise by x(ξ)_i = ∏_{k=1}^{n−1} ξ_k^{−u^{(k)}_i/(u^{(k)} • w)}.

Remark 18. By slightly abusing the notation, we identified the vectors u^{(1)}, ..., u^{(n−1)} with appropriate representatives in R^n in the statement of this lemma. The value of the integral does not depend on the specific choice of representatives, because 1 • w = 0 and f is homogeneous of degree zero. Hence, the expression on the right hand side is invariant under shifts u^{(k)} → u^{(k)} + µ_k 1 for all µ_k ∈ R. It is also invariant under rescalings of the vectors u^{(k)} → λ_k u^{(k)} for all λ_k > 0, as it should be due to the equivalence of the cone representations. Even though changing the representatives of the u-vectors modifies the vector x(ξ), it only does so by an overall scaling, which does not modify the point in P^{n−1}_{>0} which x(ξ) represents.
Proof. Start by changing to logarithmic coordinates x = e^y,

∫_{Exp(C)} f(x) x^{−w} Ω = ∫_C f(e^y) e^{−y • w} Ω',

where Ω' = Exp*Ω = Σ_{k=1}^n (−1)^{n−k} dy_1 ∧ ... ∧ d̂y_k ∧ ... ∧ dy_n is the pullback of Ω under Exp. Using barycentric coordinates y = Σ_{k=1}^{n−1} u^{(k)} λ_k shows that this is equal to

|det(u^{(1)}, ..., u^{(n−1)}, 1)| ∫_{[0,∞)^{n−1}} f(e^y) e^{−Σ_k λ_k (u^{(k)} • w)} d^{n−1}λ.

The form of the determinant follows from the form of Ω' and Laplace's expansion. As y • w > 0 for all y ∈ C \ {0}, it follows that u^{(k)} • w > 0 for all k ∈ {1, ..., n − 1}. We can therefore change variables via λ_k = −(1/(u^{(k)} • w)) log ξ_k, which proves the statement.

With these tools at hand, we are ready to give our tropical formulation of geometric sector decomposition:

Theorem 19 (Geometric sector decomposition). Let A and B be the polytopes defined in Theorem 3. If A and B fulfill the requirements R1 and R2 of Theorem 3 and M^∆_AB ⊂ F^∆_AB/1R is the set of maximal cones, i.e. the cones of maximal dimension, in a simplicial refinement of the reduced common normal fan of A and B, then we can write the integral as a sum

∫_{P^{n−1}_{>0}} f(x) ( ∏_i a^tr_i(x)^{Re ν_i} / ∏_j b^tr_j(x)^{Re ρ_j} ) Ω = Σ_{C∈M^∆_AB} |det(u^{(C,1)}, ..., u^{(C,n−1)}, 1)| ( ∏_{k=1}^{n−1} 1/(u^{(C,k)} • w^{(C)}) ) ∫_{[0,1]^{n−1}} f(x^{(C)}(ξ)) d^{n−1}ξ,   (10)

where w^{(C)} = w_B − w_A with some w_A ∈ A and w_B ∈ B, and each summand is finite and positive for each C ∈ M^∆_AB. Proof.
Because each cone C ∈ M^∆_AB refines a cone in F_AB/1R and because of Lemma 16, the tropical prefactor behaves like the monomial x^{−w^{(C)}} on Exp(C), where w^{(C)} = w_B − w_A with some w_A ∈ A and w_B ∈ B. Eq. (10) follows from Lemma 17, because we can always pick a set of generators u^{(C,1)}, ..., u^{(C,n−1)} ∈ R^n/1R for every simplicial cone C ∈ M^∆_AB. As y • w^{(C)} > 0 for all y ∈ C \ {0}, we also have u^{(C,k)} • w^{(C)} > 0 for all k ∈ {1, ..., n − 1}. The positivity of the determinant is obvious because of the linear independence of the vectors u^{(C,k)}.
Choosing f(x) = R_{a/b}(x) from eq. (11) in Theorem 19, we recover the integral in eq. (1).
Proof of Theorem 3. We only need to prove that each sector integral in the geometric sector decomposition of Theorem 19 with f(x) = R_{a/b}(x) from eq. (11) is finite. As all the denominator polynomials {b_j} are completely non-vanishing, Corollary 9 implies that |R_{a/b}(x)| is bounded on P^{n−1}_{>0}. Theorem 19 provides a sector decomposition as it was formulated in eq. (4), because the sector integrands in Theorem 19 are bounded as long as the function f is bounded on P^{n−1}_{>0}. This way, Theorem 19 not only ensures finiteness of the integral in eq. (1) under appropriate conditions, but also allows us to evaluate the integral via Monte Carlo quadrature.
If we have triangulated the reduced normal fan F_AB/1R, i.e. we have computed a simplicial refinement F^∆_AB/1R and stored the vectors u^{(C,1)}, ..., u^{(C,n−1)} and w^{(C)} for each maximal cone C ∈ M^∆_AB ⊂ F^∆_AB/1R in a table, then we can estimate the integral using Algorithm 1.

Algorithm 1 Basic Monte Carlo quadrature of Euler-Mellin integrals
for all maximal cones C ∈ M^∆_AB do
  for ℓ ∈ 1, ..., N do
    Draw a random vector ξ^{(ℓ)} ∈ [0,1]^{n−1} from the uniform distribution.
    Evaluate f(x^{(C)}(ξ^{(ℓ)})).
  end for
  Estimate the sector integral in eq. (10) by averaging.
end for
Return the sum I^{(N)} of all sector estimates.

Proof. Algorithm 1 is an application of Theorem 5 to the integral in eq. (10). As |R_{a/b}(x)| is bounded on P^{n−1}_{>0} and x^{(C)}(ξ) ∈ P^{n−1}_{>0} by construction, the integrand is bounded and therefore also square integrable. Hence, there is a constant C_C ≥ 0 for each cone integral such that Var[I_C^{(N)}] ≤ C_C/N. Effectively, Proposition 20 ensures that we can consider the random variable I^{(N)} as an approximation for I with relative accuracy δ = (1/I) √(C/N). Estimating the constant C is usually easy in practice: as long as sufficiently high powers of the integrand f(x) are integrable, we can also use Theorem 5 to estimate Var[f(x)].
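To make the loop structure concrete, here is a minimal Python sketch of Algorithm 1, assuming the cone parametrization x^{(C)}(ξ) and the prefactor |det(u^{(1)}, ..., u^{(n−1)}, 1)| ∏_k 1/(u^{(k)} • w) of Lemma 17; the cone data layout (a list of dicts) is an arbitrary choice for illustration.

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def x_of_xi(xi, U, w):
    """Map xi in (0,1]^{n-1} into the exponentiated cone spanned by
    U = [u^(1), ..., u^(n-1)]:  x_i = prod_k xi_k^(-u^(k)_i / (u^(k).w))."""
    n = len(U[0])
    lam = [-math.log(x_k) / dot(u, w) for x_k, u in zip(xi, U)]
    y = [sum(l * u[i] for l, u in zip(lam, U)) for i in range(n)]
    return [math.exp(v) for v in y]

def mc_sector_sum(cones, f, N, rng):
    """Algorithm 1: estimate every sector integral separately and sum.
    Each cone is a dict with generators 'U', the monomial exponent 'w'
    and the determinant factor 'det'."""
    total = 0.0
    for cone in cones:
        pref = cone["det"] * math.prod(1.0 / dot(u, cone["w"]) for u in cone["U"])
        acc = 0.0
        for _ in range(N):
            xi = [1.0 - rng.random() for _ in cone["U"]]  # avoid log(0)
            acc += f(x_of_xi(xi, cone["U"], cone["w"]))
        total += pref * acc / N
    return total

# two-cone example on P^1_{>0}: integrate x^{-w} over each Weyl chamber
cones = [
    {"U": [(-1, 0)], "w": (-1, 1), "det": 1.0},
    {"U": [(0, -1)], "w": (1, -1), "det": 1.0},
]
estimate = mc_sector_sum(cones, lambda x: 1.0, 100, random.Random(0))
assert abs(estimate - 2.0) < 1e-9  # with f = 1 each sector contributes exactly 1
```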
Variants of Algorithm 1 are implemented e.g. in SecDec-3 [23] and in FIESTA 3 [98]. Both these implementations provide a variety of different ways to perform the preprocessing triangulation step which computes M^∆_AB. A dedicated tool to perform such a triangulation is Normaliz [34], which is also used internally in SecDec-3. Subsequently, both programs use a version of the VEGAS algorithm [78,56] to numerically integrate I_C[f] in eq. (10) over the unit hypercube [0,1]^{n−1} or, equivalently, to execute the inner loop of Algorithm 1 for each individual cone C ∈ M^∆_AB. We can estimate the computational complexity of the algorithm by counting the number of necessary evaluations of the function R_{a/b}(x). This is justified because we can assume that the runtime to evaluate R_{a/b}(x) overshadows the time it takes to compute a random vector ξ ∈ [0,1]^{n−1} and the value of x(ξ) ∈ P^{n−1}_{>0} from it. Therefore, the estimation step summarized in Algorithm 1 needs N |M^∆_AB| evaluations to produce the estimate I^{(N)} for the integral in eq. (1). Equivalently, as the relative accuracy δ of the resulting estimate is inversely proportional to √N, the number of evaluations needed to achieve an estimate of accuracy δ is proportional to δ^{−2} |M^∆_AB|. A severe bottleneck is the number of maximal cones |M^∆_AB|, which tends to grow exponentially with growing dimension n of the problem. A particularly unsatisfying aspect of this bottleneck is that the values of the individual sector contributions I_C typically vary considerably in magnitude. Consequently, only a fraction of the geometric sector contributions in eq. (10) are relevant for the overall integral I, and much of the computational effort spent to estimate each of the integrals I_C is wasted. In the next section, we will explain how to overcome this bottleneck.

Tropical sampling
In summary, the strategy to overcome this problem is the following: instead of numerically integrating each of the sector integrals individually and eventually summing all the resulting numbers to obtain an estimate for the integral in eq. (1), we can use a more 'inclusive' Monte Carlo approach, where we evaluate the individual integrals I_C[R_{a/b}] in eq. (10) and the sum over these integrals at the same time. This approach is much more efficient than the traditional one because there is a canonical way to perform importance sampling on the sum. That means that we can expose the individual sectors to our sampler 'undemocratically', such that more important sectors are sampled more often than less important contributions.
To do this it is convenient to define a tropically approximated version of the integral in eq. (1):

I^tr = ∫_{P^{n−1}_{>0}} ( ∏_i a^tr_i(x)^{Re ν_i} / ∏_j b^tr_j(x)^{Re ρ_j} ) Ω.   (12)

Such tropically approximated integrals have been considered as a simple avatar of period Feynman integrals [90] and identified to appear in the weak string coupling limit [5]. Moreover, this tropically approximated integral also gives rise to the canonical function of a polytope under certain conditions on the polynomials {a_i} and {b_j}, which has applications in the theory of scattering amplitudes [3,5]. It follows from Theorem 19 with f(x) = 1 that I^tr is finite and that I^tr > 0, provided that the conditions R1 and R2 on the polytopes A and B in Theorem 3 are fulfilled. Moreover, the integrand in eq. (12) is obviously positive for all x ∈ P^{n−1}_{>0}. Hence, we can define a probability distribution given by the differential form

µ^tr = (1/I^tr) ( ∏_i a^tr_i(x)^{Re ν_i} / ∏_j b^tr_j(x)^{Re ρ_j} ) Ω,   (13)

such that 1 = ∫_{P^{n−1}_{>0}} µ^tr. The integral in eq. (1) can now be written as

I = I^tr ∫_{P^{n−1}_{>0}} R_{a/b}(x) µ^tr,

with R_{a/b} as defined in eq. (11). As µ^tr is a properly normalized probability distribution on P^{n−1}_{>0}, we can use Theorem 5 to get a direct estimation algorithm for I from this, provided that we have a reasonably efficient way to sample from the distribution µ^tr.

Algorithm 2 Monte Carlo quadrature using tropical sampling
for ℓ ∈ 1, ..., N do
  Generate a random sample x^{(ℓ)} ∈ P^{n−1}_{>0} distributed as µ^tr from eq. (13).
end for
Return I^{(N)} = (I^tr/N) Σ_{ℓ=1}^N R_{a/b}(x^{(ℓ)}).

Algorithm 2 is not obviously simpler or more efficient than Algorithm 1, as the complicated part, generating a sample from the random distribution given by µ^tr, has been conveniently out-sourced.
A simple method to sample from µ^tr is to again use a geometric sector decomposition. By Theorem 19 the tropically approximated integral in eq. (12) can be written as a sum

I^tr = Σ_{C∈M^∆_AB} I^tr_C with I^tr_C = |det(u^{(C,1)}, ..., u^{(C,n−1)}, 1)| ∏_{k=1}^{n−1} 1/(u^{(C,k)} • w^{(C)}),   (14)

where I^tr_C > 0 for all maximal cones C ∈ M^∆_AB. Hence, we can interpret I^tr_C/I^tr as a probability assigned to each cone C ∈ M^∆_AB and draw a random cone accordingly. Drawing a random sample from a finite discrete probability distribution is a classic problem. It can be solved in constant time independent of the number of possible outcomes if a table of the probabilities of the respective outcomes is appropriately preprocessed, for instance by using the alias method [70, Section 3.4.1]. Provided that we have generated such a table together with a table of appropriate values of w^{(C)} and u^{(C,1)}, ..., u^{(C,n−1)}, we can execute the following algorithm:

Algorithm 3 Algorithm to generate a sample with distribution µ^tr
Draw a random cone C ∈ M^∆_AB with probability I^tr_C/I^tr. Draw a random vector ξ ∈ [0,1]^{n−1} from the uniform distribution. Set x = x^{(C)}(ξ) with x^{(C)}(ξ) as in Lemma 17 and return it.

Proposition 21. Algorithm 3 generates a sample x ∈ P^{n−1}_{>0}, distributed as µ^tr in eq. (13).
Proof. For any test function f : P^{n−1}_{>0} → C and a random sample x ∈ P^{n−1}_{>0} generated by Algorithm 3, we can compute the expectation of f(x). Using eq. (14) and Theorem 19 gives the statement.

To run both Algorithms 2 and 3 together we need N evaluations of the function R_{a/b}(x). Equivalently, we need a number of evaluations proportional to δ^{−2} to obtain an estimate of accuracy δ. This is a significant improvement over Algorithm 1, as the runtime is now independent of the number of sectors |M^∆_AB|. It has to be stressed that this suggested direct comparison between Algorithm 1 and the combination of the Algorithms 2 and 3 is flawed by the inherent difference in the respective proportionality factors for δ^{−2} or, equivalently, in the number of samples N that results in a given accuracy. In a situation in which the sector integrals all contribute roughly the same value to the overall integral, Algorithms 2 and 3 offer no advantage over Algorithm 1. For practical applications the values of the sector integrals tend to differ heavily in magnitude, which makes Algorithms 2 and 3 favorable.
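The alias method mentioned above ([70, Section 3.4.1]) is easy to implement. The following sketch is one standard realization (Vose's variant, not taken from the text): it preprocesses the cone probabilities I^tr_C/I^tr once, after which each cone draw costs O(1).

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) preprocessing of a discrete distribution,
    after which each draw costs O(1), independent of the number of outcomes."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # numerical leftovers carry probability one
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias, rng):
    """Draw an outcome: pick a column uniformly, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

# empirical frequencies reproduce the input distribution
rng = random.Random(7)
prob, alias = build_alias_table([0.5, 0.25, 0.25])
counts = [0, 0, 0]
for _ in range(200000):
    counts[draw(prob, alias, rng)] += 1
assert abs(counts[0] / 200000 - 0.5) < 0.02
```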
Just as for Algorithm 1, a preprocessing step needs to be performed for Algorithms 2 and 3: the triangulation M^∆_AB and the associated table need to be calculated. This computation is also necessary to compute the normalization factor I^tr = Σ_{C∈M^∆_AB} I^tr_C. In the best case, the time it takes to create such a table will be proportional to the number of sectors |M^∆_AB|, but we only need to compute this table once and can evaluate an arbitrarily large number of samples afterwards.
Therefore, even though we are still effectively constrained by the dimension of the problem, which has to be small enough for the preprocessing step to finish in a reasonable time, this constraint on the dimension is decoupled from the achievable accuracy.
Recall that, so far, we considered completely general integrals of the form in eq. (1). Although we already managed to accelerate the integration for the general case in comparison to the traditional approach, further improvements are possible if more specific properties of the integrand are used. In particular, integrals that come from physical applications are well-known to carry a very rich geometric structure, whose exploitation offers a whole new set of tools to improve numerical approximation methods. In the following, we will achieve a further improvement in runtime, memory requirement and overall complexity by using a specific structure which is exhibited by a large family of integrals. Integrals of this family appear in many contexts in high energy physics. This family consists of all integrals as in eq. (1) where the Newton polytopes of the polynomials {a_i} and {b_j} are generalized permutahedra.

Generalized permutahedra
The permutahedron Π_n is an (n − 1)-dimensional polytope in R^n. It can be defined as the convex hull of n! vertices determined by permutations in S_n:

Π_n = conv{ v^{(σ)} : σ ∈ S_n },

where the vector v^{(σ)} = (σ(1), ..., σ(n)) ∈ R^n encodes the permutation σ. The permutahedron is contained in the hyperplane Π_n ⊂ {v ∈ R^n : 1 • v = n(n+1)/2} and is full-dimensional within this hyperplane. The permutahedron Π_3 is depicted in Figure 2a. The cones of maximal dimension in the reduced normal fan F_{Π_n}/1R of Π_n are labelled by permutations as well. They are of the form

C_σ = { y ∈ R^n/1R : y_{σ(1)} ≤ y_{σ(2)} ≤ ... ≤ y_{σ(n)} };   (15)

such a domain is called a Weyl chamber. It is not hard to see that these are simplicial cones, as C_σ is generated by the vectors u^{(σ,1)}, ..., u^{(σ,n−1)} with components

u^{(σ,k)}_{σ(i)} = −1 if i ≤ k and u^{(σ,k)}_{σ(i)} = 0 if i > k,   (16)

where we chose the set of representatives in R^n of the vectors u^{(σ,k)} ∈ R^n/1R by fixing u^{(σ,k)}_{σ(n)} = 0 for all k ∈ {1, ..., n − 1}. The remaining cones of the reduced normal fan F_{Π_n} can be constructed by taking arbitrary intersections of these cones. This fan is also called the braid arrangement fan. The reduced normal fan of Π_3 is illustrated in Figure 2b.
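The vertex description of Π_n and the rearrangement property behind its chamber decomposition can be illustrated with a small brute-force check:

```python
import itertools
import random

def permutahedron_vertices(n):
    """The n! vertices v^(sigma) = (sigma(1), ..., sigma(n)) of Pi_n."""
    return [list(p) for p in itertools.permutations(range(1, n + 1))]

n = 4
verts = permutahedron_vertices(n)
assert len(verts) == 24  # n! vertices
# every vertex lies in the hyperplane 1.v = n(n+1)/2
assert all(sum(v) == n * (n + 1) // 2 for v in verts)

# rearrangement inequality: the vertex maximizing y.v assigns the value k
# to the index of the k-th smallest component of y
rng = random.Random(0)
y = [rng.random() for _ in range(n)]
best = max(verts, key=lambda v: sum(a * b for a, b in zip(y, v)))
expected = [0] * n
for k, i in enumerate(sorted(range(n), key=lambda i: y[i]), start=1):
    expected[i] = k
assert best == expected
```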

Corollary 24. If both G_{z_1} and G_{z_2} are generalized permutahedra with z_1(A) < z_2(A) for all non-empty proper A ⊊ [n] and z_1([n]) = z_2([n]), then G_{z_1} ⊂ relint G_{z_2}.

Proof. By Theorem 23 it follows immediately that G_{z_1} ⊂ G_{z_2}. The inequalities in eq. (17) are strict [1, Theorem 12.3]. Therefore the statement follows.
The Minkowski sum of two generalized permutahedra is again a generalized permutahedron (see for instance [46, Lemma 2.2.2]):

Lemma 25. If both G_{z_1} and G_{z_2} are generalized permutahedra, then also their Minkowski sum G_{z_12} = G_{z_1} + G_{z_2} is a generalized permutahedron with the boolean functions z_1, z_2, z_12 : 2^{[n]} → R related by z_12(A) = z_1(A) + z_2(A) for all A ⊂ {1, ..., n}.
A vector v ∈ G_z which maximizes all linear functionals in a Weyl chamber C_σ is a vertex of G_z. This gives a canonical map from permutations σ ∈ S_n to the vertices of a generalized permutahedron. We can use a result of Fujishige and Tomizawa to explicitly construct this map:

Lemma 26. If σ ∈ S_n is a permutation and w^{(σ,z)} ∈ R^n is the vector given component-wise by w^{(σ,z)}_{σ(k)} = z(A^σ_k) − z(A^σ_{k−1}) for k ∈ {1, ..., n}, where A^σ_k = {σ(1), ..., σ(k)} ⊂ [n] = {1, ..., n} and A^σ_0 = ∅, then w^{(σ,z)} is a vertex of the generalized permutahedron G_z and y • w^{(σ,z)} = max_{v∈G_z} y • v for all y ∈ C_σ, where C_σ is a Weyl chamber in the braid arrangement fan as defined in eq. (15).
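The map of Lemma 26 is a few lines of code. The sketch below assumes the convention that z(A) bounds Σ_{i∈A} v_i from below (matching the supermodular facet presentations used later); with a different sign convention the formula is the same, only the example z changes.

```python
def greedy_vertex(sigma, z, n):
    """Fujishige-Tomizawa vertex of G_z for the chamber C_sigma:
    w_{sigma(k)} = z(A_k) - z(A_{k-1}) with A_k = {sigma(1), ..., sigma(k)}."""
    w = [0.0] * n
    A = frozenset()
    for k in range(n):
        A_next = A | {sigma[k]}
        w[sigma[k]] = z(A_next) - z(A)
        A = A_next
    return w

# sanity check on the permutahedron Pi_n itself: with the lower-bound
# convention z(A) = 1 + 2 + ... + |A| = |A|(|A|+1)/2 (the minimum of
# sum_{i in A} v_i over Pi_n), the greedy vertex is a permutation vector
n = 4
z_perm = lambda A: len(A) * (len(A) + 1) // 2
sigma = [2, 0, 3, 1]
w = greedy_vertex(sigma, z_perm, n)
assert sorted(w) == [1, 2, 3, 4]  # a vertex of Pi_4
assert w[sigma[0]] == 1 and w[sigma[3]] == 4
```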

Tropical sampling for generalized permutahedra
In general, it is necessary to compute a triangulation of the reduced refined normal fans of the A and B polytopes to perform the procedure described in Section 5.This cumbersome computation can be circumvented if the A and B polytopes are generalized permutahedra.In this case, there is an especially simple way to sample from the associated tropical measure µ tr defined in eq. ( 13) without the need for an explicit triangulation as required for Algorithm 3.
From now on, we will therefore assume that the polytopes A and B are both generalized permutahedra. This implies by Theorem 23 that there are unique boolean functions z_A : 2^{[n]} → R and z_B : 2^{[n]} → R which describe these polytopes. Using these functions and the properties of generalized permutahedra introduced above, we can state

Theorem 27 (Geometric sector decomposition for generalized permutahedra). Let A and B be the polytopes defined in Theorem 3. If A and B are generalized permutahedra with associated boolean functions z_A, z_B : 2^{[n]} → R which fulfill the requirements R1 and R2 of Theorem 3, then we can write the integral as a sum

Σ_{σ∈S_n} ( ∏_{k=1}^{n−1} 1/r(A^σ_k) ) ∫_{[0,1]^{n−1}} f(x^{(σ)}(ξ)) d^{n−1}ξ

with the function r(A) = z_A(A) − z_B(A), which fulfills r(A) > 0 for all non-empty proper subsets A ⊊ [n], and with x^{(σ)}(ξ) given component-wise by x^{(σ)}(ξ)_{σ(i)} = ∏_{k=i}^{n−1} ξ_k^{1/r(A^σ_k)}.

Proof. This theorem is a specialization of Theorem 19 to the generalized permutahedron case. The braid arrangement fan defined in eq. (15) provides an appropriate reduced simplicial fan. By Lemma 26 we have vertices w^{(σ,z_A)} ∈ A and w^{(σ,z_B)} ∈ B such that y • w^{(σ,z_A)} = max_{v∈A} y • v and y • w^{(σ,z_B)} = max_{v∈B} y • v for all σ ∈ S_n and y ∈ C_σ. Using the explicit representatives of the generators u^{(σ,1)}, ..., u^{(σ,n−1)} of the cone C_σ from eq. (16) together with Lemma 26 gives u^{(σ,k)} • w^{(σ,z_A)} = −z_A(A^σ_k) and u^{(σ,k)} • w^{(σ,z_B)} = −z_B(A^σ_k). It follows from this and Lemma 15 that u^{(σ,k)} • w^{(σ)} > 0 for all σ ∈ S_n and k ∈ {1, ..., n − 1}, which implies r(A) > 0 for all non-empty A ⊊ [n]. From the form of the u^{(σ,k)} vectors in eq. (16) it is obvious that |det(u^{(σ,1)}, ..., u^{(σ,n−1)}, 1)| = 1.
Theorem 27 ensures that we can proceed as above and perform the Monte Carlo Algorithms 2 and 3 just as in the general case. It is clear that the preprocessing step will be straightforward, as generalized permutahedra come with an appropriate simplicial fan 'built in'. Algorithm 3 requires us to generate a table of size n!, as we need one entry for each cone in the braid arrangement fan. For this algorithm to be applicable in a computationally feasible way, that table needs to be stored in the memory of the computer. Hence the naive algorithm is only practically applicable for relatively small values of n.
However, a further significant improvement can be achieved: it is not necessary to store an entry for each permutation in a table. If a small additional computation is performed for each sampled point, a table of size proportional to 2^n suffices. We will describe this specialized version of Algorithm 3 in the rest of this section.
First observe that the overall normalization factor needed to apply Algorithm 3 is given by

I^tr = Σ_{σ∈S_n} ∏_{k=1}^{n−1} 1/r(A^σ_k),

where r(A) = z_A(A) − z_B(A) for all non-empty A ⊊ [n]. This equation is just eq. (14), specified to the generalized permutahedron case using Theorem 27. For the following considerations it will be convenient to declare r(∅) = 1, which opens the way towards the following generalization that promotes I^tr to a boolean function on 2^{[n]}:

Definition 28. For a boolean function r : 2^{[n]} → R with r(∅) = 1 and r(A) > 0 for all non-empty A ⊊ [n], we define the boolean function J_r : 2^{[n]} → R_{>0} recursively as J_r(∅) = 1 and

J_r(A) = Σ_{e∈A} J_r(A \ e) / r(A \ e).   (19)

Proposition 29. I^tr = J_r([n]).

Proof. We will prove that J_r(A) = Σ_σ ∏_{k=1}^{m−1} 1/r(A^σ_k), where m = |A| and the sum is over all bijections σ : [m] → A. Fixing such a bijection is equivalent to fixing a pair (e, µ) of an element e ∈ A and a bijection µ : [m − 1] → A \ e. Decomposing the sum in this way and using eq. (19) gives the statement.
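The recursion in Definition 28 translates directly into a bitmask table; a brute-force sum over all orderings (Proposition 29) serves as a cross-check. Subsets A ⊂ [n] are encoded as integers:

```python
from itertools import permutations

def jr_table(r, n):
    """Table of J_r(A) for all A in 2^[n] via the recursion
    J_r(empty) = 1, J_r(A) = sum_{e in A} J_r(A\\e) / r(A\\e);
    O(n 2^n) time, O(2^n) memory."""
    J = [0.0] * (1 << n)
    J[0] = 1.0
    for mask in range(1, 1 << n):
        J[mask] = sum(J[mask & ~(1 << e)] / r(mask & ~(1 << e))
                      for e in range(n) if mask & (1 << e))
    return J

def jr_bruteforce(r, n):
    """J_r([n]) as a sum over all orderings, for cross-checking."""
    total = 0.0
    for sigma in permutations(range(n)):
        mask, term = 0, 1.0
        for e in sigma[:-1]:  # proper subsets A^sigma_1, ..., A^sigma_{n-1}
            mask |= 1 << e
            term /= r(mask)
        total += term
    return total

# example weight function: r(A) = |A| for non-empty A, r(empty) = 1
r = lambda mask: 1.0 if mask == 0 else float(bin(mask).count("1"))
J = jr_table(r, 3)
assert J[7] == 3.0
assert abs(jr_bruteforce(r, 3) - J[7]) < 1e-12
```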
Remark 30. This recursive method to calculate the normalization factor I^tr might also be useful in other contexts. For instance, it can be used to calculate the volume of the polar dual of a generalized permutahedron fairly efficiently. In the context of scattering amplitudes this recursion can also be used to calculate the canonical form of a generalized permutahedron.
If we prepare a table of the values J_r(A) and r(A) for all A ⊂ [n], we can run the following algorithm. Note that the probability distribution p_e = J_r(A \ e) / (J_r(A) r(A \ e)) over the elements e ∈ A is properly normalized due to Definition 28.
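The Algorithm 4 listing itself did not survive extraction; the following Python sketch reconstructs its steps from Proposition 31 and the coordinates x^{(σ)}(ξ) of Theorem 27. The loop body, in particular the multiplicative accumulation of ξ^{1/r(A)} factors, is our reconstruction under these assumptions, not a verbatim transcription.

```python
import random

def sample_tropical(J, r, n, rng):
    """Draw x in P^{n-1}_{>0} distributed as mu^tr, given the bitmask
    tables J (values of J_r) and r, without any triangulation."""
    x = [0.0] * n
    mask = (1 << n) - 1  # start with A = [n]
    scale = 1.0          # running product of xi^(1/r(A)) factors
    while mask:
        # draw e in A with probability J_r(A\e) / (J_r(A) r(A\e)) by
        # inversion; t may overshoot slightly, then the last e wins
        t = rng.random() * J[mask]
        for e in range(n):
            if mask & (1 << e):
                t -= J[mask & ~(1 << e)] / r(mask & ~(1 << e))
                if t <= 0.0:
                    break
        x[e] = scale
        mask &= ~(1 << e)
        if mask:
            scale *= rng.random() ** (1.0 / r(mask))
    return x

# smoke test with r = 1: J_r({e}) = 1 and J_r([2]) = 2
r = lambda mask: 1.0
x = sample_tropical([1.0, 1.0, 1.0, 2.0], r, 2, rng=random.Random(3))
assert len(x) == 2 and max(x) == 1.0 and min(x) >= 0.0
```

The element-by-element inversion costs O(n) per step; a production implementation would precompute cumulative tables, but the sketch keeps the probabilities visible.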

Proposition 31. If r(A) = z_A(A) − z_B(A) for all non-empty A ⊊ [n], r(∅) = 1 and J_r is the boolean function given in Definition 28, then Algorithm 4 generates a sample x ∈ P^{n−1}_{>0}, distributed as µ^tr in eq. (13), in the generalized permutahedron case.
Proof. For any test function f : P^{n−1}_{>0} → C and a random sample x ∈ P^{n−1}_{>0} generated by Algorithm 4, we can expand the expectation of f(x) over the random choices made by the algorithm, where we gave distinguished subscripts to the numbers e and sets A in the reverse order in which they appear in Algorithm 4. The sum can be written as a sum over all permutations σ ∈ S_n with A_k = A^σ_k. The statement follows from Proposition 29 and Theorem 27.
Algorithm 4 allows us to integrate any integral of the form in eq. (1) via Monte Carlo quadrature, without actually performing any complicated triangulation or nontrivial sector decomposition step, if the polytopes A and B are generalized permutahedra. Compared to the naive approach where a table of size n! is needed, only a table of size 2^n is required. The complexity of the preprocessing step is similarly reduced, as the recursion in Definition 28 gives an efficient way to calculate all the necessary constants: the table for J_r(A) can be calculated in O(n 2^n) steps.
All this achieves not only a huge improvement in the required runtime and memory of the algorithm, but also significantly reduces the complexity of the overall implementation. Triangulating an n-dimensional polytope is an involved algorithmic task. Circumventing this triangulation with the approach above makes it straightforward to implement an efficient integration algorithm. A detailed example is given in the following section.

Feynman integrals
A scalar Feynman integral associated to a Feynman graph G with E edges and V vertices in parametric representation in D-dimensional Euclidean space can be written as eq. (20), which depends on the edge weights ν_1, ..., ν_E, which we will assume to be positive and real. The superficial degree of divergence ω(G) is given by ω(G) = Σ_e ν_e − ℓ(G) D/2, where ℓ(G) is the number of loops of G (i.e. the first Betti number of G). The Kirchhoff-Symanzik polynomials Ψ_G and Φ_G are homogeneous of degree ℓ(G) and ℓ(G) + 1 in the x_e variables. Obviously, the integral I_G is a specific instance of an integral of the form in eq. (1). To simplify the notation, we omitted a prefactor of Γ(ω(G))/∏_e Γ(ν_e), which is usually included in the definition of scalar Feynman integrals. See for instance [84] for details on this representation of Feynman integrals.
A complete finiteness proof of the Euclidean Feynman integral I_G, together with an analysis of its analytic continuation properties in the ν_1, ..., ν_E parameters, has been achieved by Speer [101]. More recently, Brown [29] showed that there is a canonical way to associate the integral I_G to a motivic avatar, which can be thought of as a specific representation of a conjectured cosmic Galois group. This group suggests the existence of a coaction principle which relates different Feynman integrals in a highly non-trivial way, and it allows Feynman integrals to be analysed with a whole new toolkit of homological methods and representation theory.
For our endeavour to merely evaluate the integrals I_G, we can make use of parts of Brown's analysis [29] to ensure that the relevant polytopes associated to the Ψ_G and Φ_G polynomials are generalized permutahedra. We will start by giving some additional details on these polynomials, including an efficient way to evaluate them.

Symanzik polynomials
Explicitly, the polynomials can be expressed as sums over spanning trees T_1 and spanning 2-forests (spanning forests with two connected components) T_2 of the graph G:

Ψ_G(x) = Σ_{T_1} ∏_{e∉T_1} x_e,   Φ_G(x) = Σ_{T_2} p(T_2)^2 ∏_{e∉T_2} x_e + Ψ_G(x) Σ_e m_e^2 x_e,   (21)

where p(T_2) is the total momentum flowing between the two components of the 2-forest T_2. Only the Φ_G(x) polynomial depends on the external physical parameters: a set of momenta p^{(1)}, ..., p^{(V)} ∈ R^D incoming into each of the vertices and a set of masses m_1, ..., m_E ∈ R associated to the edges of the graph. These polynomials can also be written in terms of the weighted V × V Laplace matrix L(x) of the graph (see for instance [16]), which is component-wise L_{v,w}(x) = −Σ_{e between v and w} 1/x_e if v ≠ w and L_{v,v}(x) = Σ_{e incident to v} 1/x_e. This matrix is only positive semi-definite, whereas the reduced Laplacian L̃(x), which is obtained by deleting an arbitrary row and the corresponding column of the matrix L(x), is positive definite. The Symanzik polynomials can be written as

Ψ_G(x) = ( ∏_e x_e ) det L̃(x),   Φ_G(x) = Ψ_G(x) ( Σ_e m_e^2 x_e + Tr( P^T L̃(x)^{−1} P ) ),   (22)

where P is the (V − 1) × D matrix, given row-wise by the incoming momenta, P_{v,µ} = p^{(v)}_µ, where v = 1, ..., V − 1 and µ is a D-dimensional spacetime index. Note that due to momentum conservation no information is lost when only V − 1 of the V incoming momenta are used.
The second representation of the Symanzik polynomials in eq. (22) is more suitable for numerical evaluation than eq. (21). The number of spanning trees of a graph grows exponentially with the number of vertices V [83], and the evaluation of the expressions in eq. (21) quickly becomes intractable when the graph gets large. The evaluation of the determinant together with the other matrix operations in eq. (22) is computationally much more favorable: with a Cholesky decomposition of the matrix L̃(x), both the value of Ψ_G(x) and Φ_G(x) can be immediately calculated. Computing the Cholesky decomposition of a (V−1)×(V−1) matrix takes O(V^3) time. Due to the special structure of the problem (the matrix L̃(x) being the reduced Laplace matrix of a graph) there even exists a nearly linear time approximation algorithm [102] to compute this decomposition.
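A sketch of the determinant-based evaluation of Ψ_G from eq. (22), cross-checked against the spanning-tree sum of eq. (21). For simplicity, plain Gaussian elimination stands in for the Cholesky decomposition mentioned above; graphs are edge lists on vertices 0, ..., V−1, and the graph must be connected.

```python
import itertools

def psi_laplacian(edges, x, V):
    """Matrix-tree evaluation: Psi_G(x) = (prod_e x_e) * det Ltilde(x),
    with Ltilde the reduced Laplacian with edge weights 1/x_e."""
    L = [[0.0] * V for _ in range(V)]
    for (v, w), xe in zip(edges, x):
        L[v][v] += 1.0 / xe
        L[w][w] += 1.0 / xe
        L[v][w] -= 1.0 / xe
        L[w][v] -= 1.0 / xe
    M = [row[:V - 1] for row in L[:V - 1]]  # delete last row and column
    det = 1.0
    for i in range(V - 1):                  # Gaussian elimination with pivoting
        p = max(range(i, V - 1), key=lambda k: abs(M[k][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det
        det *= M[i][i]
        for k in range(i + 1, V - 1):
            f = M[k][i] / M[i][i]
            for c in range(i, V - 1):
                M[k][c] -= f * M[i][c]
    prod_x = 1.0
    for xe in x:
        prod_x *= xe
    return prod_x * det

def is_spanning_tree(tree_edges, V):
    """V-1 edges forming no cycle (checked by union-find) span a tree."""
    parent = list(range(V))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for v, w in tree_edges:
        rv, rw = find(v), find(w)
        if rv == rw:
            return False
        parent[rv] = rw
    return True

def psi_spanning_trees(edges, x, V):
    """Brute force over eq. (21): Psi_G(x) = sum_T prod_{e not in T} x_e."""
    total = 0.0
    for T in itertools.combinations(range(len(edges)), V - 1):
        if is_spanning_tree([edges[i] for i in T], V):
            term = 1.0
            for i in range(len(edges)):
                if i not in T:
                    term *= x[i]
            total += term
    return total

# one-loop triangle: Psi = x_1 + x_2 + x_3
edges, x = [(0, 1), (1, 2), (2, 0)], [2.0, 3.0, 5.0]
assert abs(psi_laplacian(edges, x, 3) - 10.0) < 1e-9
assert abs(psi_spanning_trees(edges, x, 3) - 10.0) < 1e-9
```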
To give a precise account of the Newton polytopes of the Symanzik polynomials, we need some additional notation from [29] for subgraphs of Feynman graphs. A subgraph γ ⊂ G is identified with a subset of edges of the graph G. The set of subgraphs is therefore isomorphic to the set 2^{[E]}, and we will identify boolean functions 2^{[E]} → R with functions defined on the set of subgraphs of the graph G. Just as for G, we will denote the first Betti number of a subgraph (i.e. its number of loops) as ℓ(γ). A subgraph γ ⊂ G is called mass-momentum-spanning (m.m.) in G if the second Symanzik polynomial of the contracted graph G/γ vanishes, Φ_{G/γ} = 0. Mass-momentum-spanning subgraphs can also be defined combinatorially as subgraphs that contain all massive edges and have one connected component which connects all vertices with non-zero incoming momentum. See [29, Definition 2.6] for details on these types of subgraphs.
Theorem 32. If we restrict to Euclidean and non-exceptional kinematics, then the Newton polytopes of Ψ_G and Φ_G are generalized permutahedra. A facet presentation of these polytopes is given by the supermodular functions z_{Ψ_G}(γ) = ℓ(γ) and z_{Φ_G}(γ) = ℓ(γ) + δ^{m.m.}_γ for all subgraphs γ ⊂ G, where δ^{m.m.}_γ = 1 if γ is mass-momentum-spanning and δ^{m.m.}_γ = 0 otherwise.
See [29, Section 1.7] for a definition of non-exceptional or generic kinematics. Briefly, this condition ensures that there is no non-trivial combination of the external momenta that adds up to 0. It is worth remarking that this condition is not necessary if the combinatorial concept of mass-momentum-spanning is slightly generalized while keeping the equivalence Φ_{G/γ} = 0 ⇔ γ mass-momentum-spanning. With this generalization it is sufficient to require that the external momenta are non-zero.
The first statement, that the Newton polytopes of Ψ_G and Φ_G are generalized permutahedra, can be traced back to Hepp [64] and Speer [101], who realized that a complete ordering of the integration parameters in eq. (20) is sufficient to capture the relevant singularities of parametric integrals in the Euclidean non-exceptional case. See also [99] for a comparison of this viewpoint with modern sector decomposition techniques.
The form of the boolean functions z_{Ψ_G} and z_{Φ_G} follows directly from the factorization laws [29, Proposition 2.2], [29, Proposition 2.4] and [29, Theorem 2.7] of the Ψ_G and Φ_G polynomials. Their supermodularity follows from the argument in [97] after Corollary 4.12.
Remark 33. It was implicitly proved by Panzer [90, Lemma 2.8] that the Newton polytope of Ψ_G is a generalized permutahedron, using an elegant argument based on Kruskal's algorithm [76]. This argument can also be generalized to the Φ_G polynomials by an extension of Kruskal's algorithm to minimal 2-forests.
Remark 34. Theorem 32 is also of interest in a different context: generalized permutahedra have a universal property with respect to their Hopf monoid structure. Feynman graphs carry a Hopf algebra structure which is deeply intertwined with renormalization [40] and encodes the singularity structure of the integrand [13,29]. The relationship between these two structures remains to be explored.

Remark 35. For general non-Euclidean kinematics, the Newton polytope of Φ_G is not a generalized permutahedron. An explicit counterexample is given in [100, Section 2.4]. We emphasize that the general tropical sampling algorithm introduced in Section 5 still applies. The caveat is that an explicit triangulation has to be computed, in contrast to the generalized permutahedron case where no explicit triangulation is necessary.

Tropical Monte Carlo quadrature of Euclidean Feynman integrals
To perform the generalized permutahedron tropical Monte Carlo routine from Section 6 on the parametric Feynman integral in eq. (20), we still have to ensure that the Newton polytopes of the numerator monomials x_e^{ν_e} are generalized permutahedra. This is of course trivial, as the Newton polytope of a monomial is zero-dimensional and its normal fan is trivial. The braid arrangement fan is automatically a refinement of this fan and the conditions of Definition 22 are fulfilled. The facet presentation, in the form of Theorem 17, of the zero-dimensional Newton polytope of the polynomial p_e(x) = x_e is given by the boolean function z_{p_e}(γ) = 1 if e ∈ γ and z_{p_e}(γ) = 0 if e ∉ γ, for all subgraphs γ.
Because we assume that the edge weights ν_e, the dimension D and the superficial degree of divergence ω(G) are real, we can define the function r_G : 2^[E] → R as in Theorem 27. For Euclidean kinematics, the polynomials Ψ_G and Φ_G have only positive coefficients. Therefore, they are completely non-vanishing on P^{E−1}_{>0}. We can apply Theorem 27 independently of the sign of ω(G) and find that the parametric integral in eq. (20) is convergent if r_G(γ) > 0 for all non-empty proper subgraphs γ ⊊ G, by using Corollary 24, which implies A ⊂ relint B in this case. In fact, for I_G to be convergent it is sufficient that r_G(γ) > 0 holds for all proper motic subgraphs γ as defined in [29, Definition 3.1].
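The convergence condition can be made concrete in a small sketch. The full r_G involves the mass-momentum-spanning correction; the code below only treats the massless-period special case, in which (up to that correction) r_G(γ) reduces to the superficial degree of divergence, here assumed in the convention ω(γ) = Σ_{e∈γ} ν_e − D·ℓ(γ)/2, with the loop number ℓ(γ) computed by union-find. The choice of K4 as test graph and all names are illustrative, not from the implementation.

```python
from itertools import combinations

def loop_number(edges):
    """l(gamma) = |E(gamma)| - |V(gamma)| + #components, via union-find."""
    parent = {v: v for e in edges for v in e}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    comps = len(parent)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            comps -= 1
    return len(edges) - len(parent) + comps

def omega(sub, nu, D):
    """Superficial degree of divergence omega(gamma) = sum_e nu_e - D*l(gamma)/2."""
    return sum(nu[e] for e in sub) - D * loop_number(list(sub)) / 2

# Toy check on the complete graph K4 (a 3-loop phi^4 period graph, D = 4):
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
nu = {e: 1 for e in K4}
w_full = omega(K4, nu, 4)                      # omega(G) = 6 - 4*3/2 = 0
w_proper = min(omega(s, nu, 4)
               for k in range(1, len(K4))
               for s in combinations(K4, k))   # all proper non-empty subgraphs
```

For K4 every proper non-empty subgraph has ω(γ) ≥ 1 > 0 while ω(G) = 0, matching the fact that this period integral converges.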
As defined in eq. (13), the tropical differential form associated to I_G comes with an appropriate normalization factor I^tr_G such that ∫_{P^{n−1}_{>0}} µ^tr_G = 1. For ω(G) = 0 this normalization factor is a certain invariant of the graph G which has been studied by Panzer [90]: the Hepp bound. The Hepp bound is independent of the physical parameters encoded in the masses and external momenta. It mirrors many properties of the period, which is given by the integral in eq. (20) in the same special case ω(G) = 0. The period is another graph invariant with interesting number-theoretical properties [25,28,33,66].
By Definition 28, the normalization factor can be generalized to a subgraph function J_G : 2^[E] → R, which is determined by the recursion J_G(γ) = ∑_{e∈γ} J_G(γ∖e)/r_G(γ∖e) for all non-empty γ ⊂ G, with J_G(∅) = 1 and r_G(∅) = 1.
The actual normalization factor is recovered for γ = G, i.e. I^tr_G = J_G(G) by Proposition 29. With a precalculated table of the values r_G(γ) and J_G(γ) for all γ ⊂ G, Algorithm 4 provides an efficient way to sample from the distribution given by the differential form µ^tr_G on P^{n−1}_{>0}. Using this sampling algorithm, we can obtain estimates for the parametric Feynman integral in eq. (20) by the standard Monte Carlo procedure from Theorem 5 or, equivalently, Algorithm 2.
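A minimal sketch of the subgraph recursion J(γ) = Σ_{e∈γ} J(γ∖e)/r(γ∖e) and of the discrete part of the sampling step, whose probabilities p_e = J(A∖e)/(r(A∖e)·J(A)) sum to 1 by the very recursion defining J. The function r below is a hypothetical positive placeholder, not the physical r_G.

```python
import random
from functools import lru_cache

def r(gamma):
    """Hypothetical stand-in for r_G: any positive subgraph function
    illustrates the recursion; it is NOT the physical r_G."""
    return 1.0 + len(gamma)

@lru_cache(maxsize=None)
def J(gamma):
    """J(empty) = 1, J(gamma) = sum_{e in gamma} J(gamma \\ e) / r(gamma \\ e)."""
    if not gamma:
        return 1.0
    return sum(J(gamma - {e}) / r(gamma - {e}) for e in gamma)

def sample_order(n, rng):
    """Draw an edge order as in the discrete part of Algorithm 4:
    at each step pick e in A with probability J(A\\e)/(r(A\\e)*J(A))."""
    A, sigma = frozenset(range(n)), []
    while A:
        u, acc = rng.random(), 0.0
        for e in sorted(A):
            acc += J(A - {e}) / (r(A - {e}) * J(A))
            if u <= acc:
                break
        sigma.append(e)
        A = A - {e}
    return sigma
```

The memoized table J plays the role of the precalculated table mentioned in the text; each sampling step is then a simple categorical draw.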

Expansions in regularization parameters
Often not only the integral in eq. (20) is of interest, but also its Taylor expansion in the parameters D and ν_e around specific points. Particularly important, in the context of dimensional regularization, is the ε-expansion of the parametric Feynman integral in eq. (20). Effectively, such an expansion results in integrals of the form (23) for integers s, t ∈ N and k_1, …, k_E ∈ N. The estimation of this generalization is also possible using Algorithm 3 or Algorithm 4; applying the same Monte Carlo procedure gives the desired estimate. A caveat is that the integrand is no longer bounded, as the logarithms exhibit singularities at the boundary of the integration domain. This is not a severe problem: these singularities are square integrable, and Theorem 5 may still be applied.
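The harmlessness of the logarithmic boundary singularities can be seen in a one-dimensional toy (our own example, not eq. (23) itself): the integrand log(x)^k is unbounded at x = 0 but square integrable, so the plain Monte Carlo estimator retains a finite variance.

```python
import math
import random

def mc_log_moment(k, n_samples, seed=0):
    """Monte Carlo estimate of int_0^1 log(x)^k dx = (-1)^k * k!.
    The integrand is unbounded at x = 0, but square integrable, so the
    standard-error analysis of the Monte Carlo theorem still applies."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = 1.0 - rng.random()        # uniform in (0, 1], avoids log(0)
        total += math.log(x) ** k
    return total / n_samples
```

For k = 2 the exact value is 2; the estimator converges at the usual N^{-1/2} rate despite the singularity.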

Some experimental results
A proof-of-concept C++ implementation of this algorithm, which evaluates general Euclidean Feynman integrals, is available on the author's personal web page and in the ancillary files of the arXiv version of this article. The algorithm has been tested on various graphs from ϕ⁴-theory in four dimensions, which were generated using tools from [17]. To illustrate the performance of the algorithm, a benchmark is given in Table 1.
The benchmark has been performed on a single core of an AMD EPYC 7702P processor.
The columns E and ℓ(G) show the number of edges (equivalently, the dimension of the integral plus one) and the corresponding number of loops of the underlying ϕ⁴-graph. The column σ_I/I gives the relative standard deviation of the samples; i.e., if δ^{-2} · (σ_I/I)^2 samples are drawn, a relative accuracy δ can be expected from the resulting estimate. Up to this expected accuracy, all obtained estimates are consistent with the available analytic results from [25,95,32,91,96]. The implementation has also been checked against numerical calculations of non-ϕ⁴ graphs with non-trivial masses and kinematics performed with pySecDec [22,21].
Recall that the algorithm can be applied to arbitrary D-dimensional scalar Feynman integrals with arbitrary kinematics in the Euclidean regime, and the benchmark results can be expected to be representative for the evaluation of all such graphs with the same number of edges. The choice of ϕ⁴-theory and D = 4 is practical because much analytic data is available even at high loop orders, which allows for convenient checks of the results.
Figure 3: An 8-loop ϕ⁴-graph whose period does not evaluate to a linear combination of multiple zeta values or multiple polylogarithms at roots of unity.
As can be seen from the table, the number of samples per second decreases slowly with the loop order or, equivalently, the dimension of the problem. The necessary time for the preprocessing step, on the other hand, depends exponentially on the dimension. For example, it takes 2.5 CPU-seconds to evaluate a graph with 10 edges and general kinematics up to δ = 10^{-3} relative accuracy. The time of 6.6 · 10^{-4} s for the preprocessing step is negligible and the memory requirement of 16 KB is insignificant. It takes 20 CPU-seconds to evaluate a graph with 20 edges and general kinematics up to the same relative accuracy; here the preprocessing step takes 1 second and the memory requirement of 16 MB is still very manageable. Similarly, it takes about 2 CPU-minutes to evaluate a Feynman graph with 30 edges up to this accuracy, after the preprocessing step has been performed. At this point, however, the preprocessing step already takes about 30 minutes and requires 16 GB of RAM.
The evaluation step of the algorithm is fully parallelizable, and the preprocessing step is partially parallelizable. The memory requirements can be reduced in the special case ω(G) = 0 or by using a more efficient storage of the relevant constants. The overall picture of exponentially growing memory demands and an exponential time for the preprocessing step will not change without modifying the algorithm significantly.
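The quoted memory figures (16 KB at 10 edges, 16 MB at 20, 16 GB at 30) are consistent with a simple back-of-the-envelope model: tabulating two double-precision values, r_G(γ) and J_G(γ), i.e. 16 bytes, for each of the 2^E subgraphs. This model is our own hypothetical reading, not the actual storage layout of the implementation.

```python
def table_bytes(n_edges, bytes_per_subgraph=16):
    """Hypothetical storage model: two 8-byte doubles, r_G(gamma) and
    J_G(gamma), for each of the 2^E subgraphs of a graph with E edges."""
    return bytes_per_subgraph * 2 ** n_edges

for E in (10, 20, 30):
    print(E, table_bytes(E))   # 16 KB, 16 MB, 16 GB
```

The factor-of-1024 jumps between the three benchmark sizes are exactly the 2^10 growth of the subgraph table.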
An interesting example of a ϕ⁴-graph in D = 4, whose evaluation was not approachable with any previously existing technique, is the graph in Figure 3. It is one of the smallest graphs in ϕ⁴-theory whose period is not a linear combination of multiple zeta values or multiple polylogarithms at roots of unity. This has been proven in [31, Section 6.2] for a graph which is equivalent with respect to its period by the completion identity [95]. Sampling 10^12 points in about 24 hours on 54 CPU cores results in the following estimate for the period of this graph,

Further research directions
1. (Markov chain Monte Carlo based sampling)
The tropical Monte Carlo algorithms are still very limited in terms of the complexity of the integrals to which they apply, because of the cumbersome preprocessing step that has to be performed for each integral. To overcome this bottleneck without relying on special structures of the integrals, one would need a more efficient way to sample from µ^tr than Algorithm 3 or Algorithm 4, while retaining access to the normalization factor I^tr. Eventually, one has to settle for a still relatively slow algorithm for this task, as we have a 'no-go theorem' in a special case: if all the numerator polynomials are monomials, i.e. a_i(x) = x_i, then I^tr corresponds to the volume of a certain polytope. Computing or approximating the volume of a general n-dimensional polytope is a task that cannot be performed deterministically in polynomial time [6]. A workaround is to use a non-deterministic algorithm both for the computation of the normalization factor I^tr and to obtain samples from µ^tr. Many highly advanced Markov chain Monte Carlo algorithms have been developed to perform exactly this task (see for instance [48,79] and the references therein). It is very plausible that adapting these polytope integration and sampling algorithms to our algebraic integral quadrature application would result in the sought-after polynomial-time algorithm for algebraic and Feynman integral evaluation.
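As a toy illustration of randomized volume computation (in the spirit, though far from the sophistication, of the MCMC algorithms cited above), the following sketch estimates the volume 1/n! of the standard simplex by rejection sampling from the unit cube. In high dimension the acceptance rate collapses like 1/n!, which is exactly why the hit-and-run style methods of the references are needed there.

```python
import random

def simplex_volume_estimate(n, n_samples, seed=0):
    """Estimate vol{ x in [0,1]^n : x_1 + ... + x_n <= 1 } = 1/n!
    by rejection sampling from the unit cube (toy illustration only)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_samples)
        if sum(rng.random() for _ in range(n)) <= 1.0
    )
    return hits / n_samples
```

For n = 3 the exact volume is 1/6 ≈ 0.1667, and the estimator reproduces it up to the expected statistical error.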

(Physical integration regions and components of coamoeba)
The last condition in Theorem 3 is closely related to the coamoeba of the set of polynomials {b_j}. If a polynomial p ∈ C[x_1, …, x_n] has zero locus Z_p = {z ∈ (C∖{0})^n : p(z) = 0}, then the coamoeba of p is the image of Z_p under the coordinate-wise complex arg-function: A_p = Arg(Z_p) ⊂ [0, 2π]^n. The coamoeba is related to the amoeba, which goes back to Gelfand, Kapranov and Zelevinsky [53] and has numerous applications in tropical geometry. By a result proven independently by Johansson [67] and by Nisse and Sottile [87], a polynomial is completely non-vanishing if the origin is not contained in the closure of its coamoeba, i.e. 0 ∉ Ā_p. In [86] it was shown via Cauchy's theorem that the integration cycle R^{n−1}_{>0} of the integral in eq. (3) can be replaced with Arg^{-1}(θ) as long as θ and 0 lie in the same connected component of the complement of the coamoebas of the denominator polynomials. A similar argument works for the projective version of generalized Euler-Mellin integrals considered here.
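The coamoeba can be probed numerically in a toy case (the polynomial p(z₁, z₂) = 1 + z₁ + z₂ is our own illustrative choice, not one from the text): parametrize the zero locus by z₂ = −1 − z₁ and record the argument pairs. Since p has positive coefficients it is completely non-vanishing on the positive orthant, so the sampled coamoeba must stay away from the origin; indeed both arguments cannot simultaneously lie in (−π/2, π/2), since then Re(z₁ + z₂) > 0 ≠ −1.

```python
import cmath
import math

def coamoeba_samples(n_r=60, n_theta=240):
    """Sample Arg-images of points on the zero locus of
    p(z1, z2) = 1 + z1 + z2 by solving z2 = -1 - z1
    for z1 on a log-radial grid."""
    pts = []
    for i in range(n_r):
        r = 10 ** (-2 + 4 * i / (n_r - 1))   # radii from 1e-2 to 1e2
        for j in range(n_theta):
            theta = -math.pi + 2 * math.pi * (j + 0.5) / n_theta
            z1 = r * cmath.exp(1j * theta)
            z2 = -1 - z1
            if z2 != 0:
                pts.append((theta, cmath.phase(z2)))
    return pts

# Distance of the sampled coamoeba from the origin in the sup-metric:
gap = min(max(abs(a1), abs(a2)) for a1, a2 in coamoeba_samples())
```

The gap is bounded below by π/2, consistent with 0 ∉ Ā_p for this positive-coefficient polynomial.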
A strikingly reminiscent procedure is necessary when evaluating Feynman integrals with kinematics in Minkowski space. The necessary analytic continuation in this case is governed by the iε-prescription, which ultimately results from causality and unitarity constraints on the amplitude [49]. Formulating this procedure in terms of a canonical choice of a component in the respective coamoeba would result in a canonical analytic continuation procedure in the Minkowski case. See also [44], where related observations regarding parametric Feynman integrals and coamoebas have been made.

(Further acceleration of the algorithms by using more structures)
In the ω(G) = 0 case the normalization factor of the µ^tr_G distribution for the parametric Euclidean Feynman integral in eq. (20) reduces to the Hepp bound studied by Panzer [90]. He gave more efficient ways to compute this normalization factor I^tr_G, which likely can also be used to sample from the µ^tr_G distribution more efficiently. Moreover, these more elaborate ways to compute the Hepp bound can probably be generalized to deal with the ω(G) ≠ 0 case using results from Brown [29].
In a broader sense, an extension of Algorithm 4 beyond the generalized permutahedron case could be possible. Especially attractive would be an extension that includes the interesting case of Minkowski-space Feynman integrals. Further analysis of the relevant structures within the tropical geometric framework, starting for instance with the explicit counterexample in [100, Section 2.4], could lead to an appropriate refinement of the braid arrangement fan. Such a refinement could lead to a direct generalization of the generalized permutahedron sampling Algorithm 4, which would make the integration of high-dimensional Feynman integrals (i.e. with ∼30 edges) possible also in the Minkowski regime.

(BPHZ renormalization)
As mentioned above, Feynman integrals such as the one in eq. (20) with non-integrable singularities are often of interest. A common approach to deal with these singularities is to subject the integral to an analytic continuation procedure before any numerical integration is performed. Ultimately, these singularities have a well-studied physical origin and are handled via renormalization. The momentum-scheme BPHZ renormalization takes care of these singularities before any integration is performed. This renormalization scheme can be implemented on the level of the parametric integrand [30]. Such an implementation would make the analytic continuation step in the ν_e edge weights and the dimension D unnecessary.

(Estimates for large loop order β-functions)
The estimates that can be obtained with the proof-of-concept implementation for Feynman graph integrals up to loop order 17 in ϕ⁴-theory can immediately be used for a numerical estimation of the β-function up to this loop order. This has phenomenological applications for the calculation of critical exponents of various complex systems. It even works without the need for an analytic continuation, as it has been observed that the non-primitive contributions to the β-function become negligible at sufficiently large loop order. The approach can be amplified further by making use of the observed correlation between the Hepp bound and the period, which was applied by Panzer and Kompaniets [71] to obtain estimates of the ϕ⁴-theory β-function up to order 13.
Even if it is not possible to evaluate all necessary Feynman diagrams of the respective loop order individually, a numerical approach could be sufficient to gain enough insight into the distribution of the values of these integrals. If such statistical knowledge is available, the numbers of (renormalized) Feynman diagrams are sufficient to extrapolate a value for the entire β-function contribution [18].
Such an approach naturally extends to a question for the inherently large-order regime: suppose that G is a random 1PI ϕ⁴-graph without subdivergences and with ℓ(G) loops. Is there a limiting distribution of the values of the associated Feynman integrals as ℓ(G) → ∞? The analysis in [43] gives some positive indication for the existence of such a distribution.
An overall normalization constant for such a distribution, which normalizes its expectation value to one, can be calculated using instanton methods [82] and renormalized graph counting [18], as was pointed out by Panzer [88]; the resulting expression involves the Euler-Mascheroni constant γ_E and the Glaisher-Kinkelin constant A.
An exhaustive statistical analysis using the algorithms from this article should give further indication for or against the existence of such a limit. A combination with analytic combinatorial methods for Dyson-Schwinger equations might lead to an explicit form of a limiting distribution of Feynman integrals [74,75,41].
6. (Phase-space integration) Simple phase-space integrals, which are another type of integral necessary for particle physics phenomenology, also fall into the category of integrals in eq. (1). More elaborate phase-space integrals require more complicated, non-simplicial integration domains. It is possible that the algorithms discussed in this article can be extended to these more complicated domains. Writing the phase-space integrals in terms of kinematic variables as in [52] and using a geometric subtraction scheme for the infrared singularities [65] could be instrumental for this extension.

(Tropical sampling applied to sums of Feynman diagrams)
The general tropical sampling algorithm described in Section 5 is made possible by a well-calculated emancipation from the rigid concept of sectors as parts of the integral that each have to be attacked individually.
Following a line of thought from [3], we can say that a similar but much stronger bias exists on the level of the amplitude. The time-honoured approach to amplitude calculation is to write the amplitude as a sum over Feynman graphs with the same number of loops L (pictorially, in disregard of renormalization and the explicit form of the integrals) and to evaluate the indicated integrals one by one. There are promising indications of a superior structure which can be 'triangulated' into Feynman integrals in an appropriate sense, yielding a sum such as the one above, similar to the sector decomposition approach, where an individual integral is decomposed in terms of a triangulation of the respective normal fan.
An especially suggestive candidate for such a superior object in the case of scalar quantum field theories is Outer space [42] and its quotient under the action of Out(F_n), the moduli space of graphs. For instance, unitarity and branch-cut properties of Feynman integrals can be understood using Outer space [14,73,8]. This space can be seen as a tropical analogue of Teichmüller space [38] and of the moduli space of curves, which plays a similar superior role in string theory.
Recently, techniques inspired by quantum field theory have been successfully applied in the theory of Outer space [20].
A problem to overcome for such an approach is posed by the UV divergences that naturally appear in renormalizable QFT calculations. It is well known how such divergences can be handled, both at the amplitude level and per integral [39], and mathematically these divergences are also quite well understood, even in the large-order regime [18,19]. These divergences would also appear in a geometric setting for the amplitude, and dealing with them would mean working on a certain compactification of Outer space and the moduli space of graphs. One such compactification has been constructed by Berghoff [8] (see also [9]); it might be usable for the numerical evaluation of amplitudes.
8. (Quasi Monte Carlo) State-of-the-art implementations for numerical Feynman integral evaluation employ quasi Monte Carlo methods [85] for the actual integration of the sector integrals instead of traditional Monte Carlo methods. This has the simple and obvious advantage of a significantly increased rate of convergence. The disadvantage of the quasi Monte Carlo approach is that it is mathematically much more challenging to handle. It is plausible that the algorithms introduced in this article can also be accelerated using quasi Monte Carlo methods. A challenge will be the handling of the mixture of discrete and continuous probability distributions in the sampling algorithms from Sections 5 and 6.
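The convergence advantage can be seen in a minimal one-dimensional comparison. We use a golden-ratio Kronecker sequence as a simple stand-in for the rank-1 lattice rules actually employed in [85], and the smooth integrand x² (with exact integral 1/3) as an arbitrary toy.

```python
import math
import random

def kronecker_qmc(f, n):
    """Quasi Monte Carlo: average f over the low-discrepancy Kronecker
    sequence x_i = frac(i * phi), phi the inverse golden ratio."""
    phi = (math.sqrt(5) - 1) / 2
    return sum(f((i * phi) % 1.0) for i in range(1, n + 1)) / n

def plain_mc(f, n, seed=0):
    """Plain Monte Carlo average over n pseudo-random points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n
```

For functions of bounded variation the QMC error decays like log(n)/n (Koksma-Hlawka), versus the n^{-1/2} of plain Monte Carlo; the challenge mentioned above is that the tropical sampling algorithms interleave such continuous draws with discrete ones.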

(Systematic tropical expansions)
The tropical approximation p^tr can be interpreted as a certain limit, as shown in Section 3. It is natural to ask whether p^tr can be interpreted as the 'zeroth' order of a systematic expansion, and it is plausible that a systematic improvement of the technique could be obtained this way. The ultimate aim of such investigations would be an efficient approximation scheme that gets by without a final Monte Carlo step and immediately yields a deterministic result.
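The limit interpretation can be checked numerically in a one-variable toy (the polynomial below is our own example): for a polynomial p with positive coefficients and x > 0, p(x^λ)^{1/λ} → p^tr(x) = max_{k∈supp(p)} x^k as λ → ∞, since the dominant monomial takes over and the coefficients are washed out by the 1/λ power.

```python
def p(x):
    """Toy polynomial with positive coefficients."""
    return 1 + 3 * x + 2 * x * x

def p_tr(x):
    """Tropical approximation: max over the support monomials,
    with the coefficients dropped."""
    return max(1.0, x, x * x)

def scaled(x, lam):
    """p(x**lam)**(1/lam), which tends to p_tr(x) as lam -> infinity."""
    return p(x ** lam) ** (1.0 / lam)
```

At x = 2 and λ = 200, for instance, the scaled value already agrees with p^tr(2) = 4 to well below one percent.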

Lemma 16. Let A and B be the polytopes defined in Theorem 3. If A and B fulfill the requirements R1 and R2 of Theorem 3 and C is a cone in the reduced common refinement F_{∆_AB}/1R, then ∏_i a_i^tr(x)^{Re ν_i} / ∏_j b_j^tr(x)^{Re ρ_j} = x^{-w} for all x ∈ Exp(C), where w = w_B − w_A with w_A ∈ A, w_B ∈ B such that y · w_A = max_{v∈A} y · v and y · w_B = max_{v∈B} y · v for all y ∈ C. Moreover, 1 · w = 0 and y · w > 0 for all y ∈ C∖{0}.

… w^(C)_A ∈ A and w^(C)_B ∈ B such that y · w^(C)_A = max_{v∈A} y · v and y · w^(C)_B = max_{v∈B} y · v for all y ∈ C. By Lemma 16, we also have 1 · w^(C) = 0 and y · w^(C) > 0 for all y ∈ C∖{0}.

Proposition 20. If the conditions of Theorem 3 are fulfilled, then the random value I^(N) returned by Algorithm 1 has expectation value equal to the integral in eq. (1), I = E[I^(N)], and Var[I^(N)] = C/N for some constant C ≥ 0.

… Var[I^(N)] = ∑_{C∈M_∆AB} C_C/N.
Figure: The braid arrangement fan F_{Π_3}/1R, which partitions R³/1R, with the hyperplanes orthogonal to 1R indicated.

Figure 2: The permutahedron Π_3 and its reduced normal fan. Vertices and maximal cones are both labelled by the associated permutations.

Algorithm 4: use the precalculated table J^r to generate a sample from µ^tr for generalized permutahedra.
Set A = [n] and κ = 1. While A ≠ ∅ do: pick a random e ∈ A with probability p_e = J(A∖e)/(r(A∖e) · J(A)); remove e from A, i.e. set A ← A∖e; set σ(|A|) = e; set x_e = κ; pick a uniformly distributed random number ξ ∈ [0, 1] and … [remainder of the algorithm truncated].
r_G(γ) = ω(γ) − ω(G) δ_m.m.(γ) for all non-empty γ ⊂ G, where we used Lemma 25 and Theorem 32, and where δ_m.m.(γ) = 1 if γ is mass-momentum-spanning and 0 otherwise. Note that, up to the δ_m.m.-term, the function r_G(γ) is equal to the superficial degree of divergence ω(γ) of the subgraph.
… normalization constant C_L for each loop order L = ℓ(G), and if yes, what does it look like?
… the numerator polytope A is contained in the relative interior of B: A ⊂ relint B. Because each of the {a_i} and {b_j} polynomials is homogeneous, neither A nor B is full-dimensional in R^n. The condition in eq. (2), which implies ∑_i Re ν_i deg a_i = ∑_j Re ρ_j deg b_j, guarantees that A and B both lie in the same hyperplane … where C is the constant obtained from Lemma 12. We can choose a constant C' such that 2C' = min_{t∈B_R} |p_F(e^t)|/p^tr(e^t) > 0 and R > (1/C) log(∑_{k∈supp(p)∖F} |c_k|/C'). This gives the desired bound.
Proof of Theorem 8: use Proposition 13 and the completeness of the normal fan. Estimate, using Proposition 7, Lemma 10, Lemma 11 and Lemma 12,
|p(e^{s+t})| ≥ p^tr(e^s) ( |p_F(e^t)| − ∑_{k∈supp(p)∖F} |c_k| e^{t·k} e^{s·k − max_{v∈N_p} s·v} ) ≥ p^tr(e^{s+t}) ( |p_F(e^t)|/p^tr(e^t) − e^{−RC} ∑_{k∈supp(p)∖F} |c_k| e^{t·k}/p^tr(e^t) ) ≥ p^tr(e^{s+t}) ( … )
for all s ∈ C_F and t ∈ R^n. The fan F_{∆_AB}/1R is complete, i.e. it corresponds to a partition R^n/1R = ⋃_{C∈F_{∆AB}/1R} C. Because Exp : R^n/1R → P^{n−1}_{>0} is smooth and bijective, this partition also gives a partition P^{n−1}_{>0} = ⋃_{C∈F_{∆AB}/1R} Exp(C). Since we would like to integrate over P^{n−1}_{>0} …

Table 1: Benchmark of Feynman integral evaluations with different numbers of edges.