A solution for tensor reduction of one-loop N-point functions with N ≥ 6

Collisions at the LHC produce many-particle final states, and precise predictions require the one-loop $N$-point corrections. We study here the tensor reduction of Feynman integrals with $N \ge 6$. A general, recursive solution by Binoth et al. expresses $N$-point Feynman integrals of rank $R$ in terms of $(N-1)$-point Feynman integrals of rank $(R-1)$ (for $N \ge 6$). We show that the coefficients of this recursion can be obtained analytically from suitable representations of the metric tensor. Contractions of the tensor integrals with external momenta can also be expressed efficiently. We consider our approach particularly well suited for automation.


Introduction
In a recent article [1] we have worked out an algebraic method to present one-loop tensor 5-point functions in terms of scalar one-loop 1-point to 4-point functions. The tensor integrals are defined as

$$ I_N^{\mu_1\cdots\mu_R} = \int \frac{d^dk}{i\pi^{d/2}}\, \frac{\prod_{r=1}^{R} k^{\mu_r}}{\prod_{j=1}^{N} c_j}, \quad\quad (1.1) $$

with denominators $c_j = (k - q_j)^2 - m_j^2 + i\varepsilon$ (1.2), having chords $q_j$. In a subsequent article [2] we have calculated contractions of the $N=5$-point tensor Feynman integrals with external momenta; the analytic evaluation of sums over products of scalar products of the chords and signed minors yielded compact expressions for them. The present article is based on the observation that those sums are valid for arbitrary $N$. This allows us to extend our formalism to $N$-point tensor integrals with $N \ge 6$.

Following ideas presented in [3], an iterative approach has been systematically worked out in [4]. There, the $N$-point tensor integrals are represented in terms of $(N-1)$-point tensor integrals of smaller rank, for arbitrary $N \ge 6$:

$$ I_N^{\mu_1\cdots\mu_R} = \sum_{r=1}^{N} C_r^{\mu_1}\, I_{N-1}^{\mu_2\cdots\mu_R,\, r}, \quad\quad (1.3) $$

where $r$ indicates the line scratched from $I_N$. Equation (61) of [4] will be our starting point; it contains an implicit solution for the coefficients $C_j^\mu$ in terms of the metric tensor $g^{\mu\nu}_{[4]}$, which we refer to as (1.4). The subscript $[4]$, indicating explicitly the 4-dimensional metric tensor, will be skipped in the following. Further, we will set $q_N = 0$ throughout this article; in the notations of [4], $r_N = 0$ and $\Delta^\nu_{jN} = -q^\nu_j$. In the present work we develop a procedure to solve (1.4) analytically for arbitrary $N \ge 6$. Explicit examples will be given for $N = 7$ and $N = 8$.
Assume that a representation of the metric tensor in the form

$$ g^{\mu\nu} = \sum_{j=1}^{N-1} q_j^{\mu}\, v_j^{\nu} \quad\quad (1.5) $$

is available. Then, necessarily, the vector $C_j^\mu$ built from $v_j^\mu$ (1.6) is a solution of (1.4). An additional requirement, according to eq. (62) in [4], has to be fulfilled for this vector (1.7), which we will verify for the solutions we obtain. Our approach consists in finding an object which, contracted with chords $q_a$ and $q_b$, results in $(q_a \cdot q_b)$. This yields a representation of the metric tensor as requested in (1.5), from which the coefficients $C_j^\mu$ can be obtained. For an $N$-point function with $N$ external momenta one deals with $(N-1)$ nonvanishing vectors, since $q_N = 0$. For $N = 5$ one has in general 4 independent vectors, from which the metric tensor in 4 dimensions can be constructed uniquely. For $N > 5$ one has $(N-5)$ "scratched" vectors which are not supposed to enter the construction of the metric tensor. Nevertheless, the contraction of the metric tensor with any pair of the available $(N-1)$ vectors must result in their scalar product. This is the problem we solve analytically in the present article.
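The uniqueness of this construction for $N = 5$ can be checked numerically. The following Python sketch is our own illustration (not part of the paper): it assembles $g^{\mu\nu} = \sum_{i,j} q_i^\mu (G^{-1})_{ij} q_j^\nu$ from 4 generic chords with Gram matrix $G_{ij} = q_i \cdot q_j$, and verifies that contracting it with any pair of chords returns their Minkowski scalar product.

```python
import numpy as np

# Our numerical illustration (not code from the paper): for N = 5 there are
# 4 independent chords, and g^{mu nu} = sum_{ij} q_i^mu (G^{-1})_{ij} q_j^nu
# is the unique metric-tensor representation built from them.

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric, signature (+,-,-,-)

Q = rng.normal(size=(4, 4))                # rows: contravariant components of q_1..q_4
G = Q @ eta @ Q.T                          # Gram matrix of Minkowski scalar products
T = Q.T @ np.linalg.inv(G) @ Q             # candidate for g^{mu nu}

# the 4 chords span the 4-dim space, so T must equal the (inverse) metric itself:
assert np.allclose(T, eta)

# contraction with any pair of chords reproduces their scalar product (q_a . q_b):
for a in range(4):
    for b in range(4):
        assert np.isclose(Q[a] @ eta @ T @ eta @ Q[b], Q[a] @ eta @ Q[b])
```

Here `T` carries two upper indices, so lowering with `eta` on both sides before contracting with the chords mimics $q_{a\mu}\, g^{\mu\nu}\, q_{b\nu}$.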
It might be interesting to recall here the corresponding relation for 5-point functions [5], which can be considered as an inhomogeneous analogue of (1.3), with $s,t = 1 \cdots 5$.
In the following, setting up sums over scalar products multiplied with signed minors of Cayley determinants, we will implicitly use a relation among the $Y_{ij}$ (1.11) which is valid if $q_N = 0$. Therefore we have to assume this from the very beginning.

6-point functions

In (4.6) of [7] the vector $C_r^\mu \equiv C_r^{\sigma,\mu} = -v_r^{\sigma,\mu}$ is given (2.2). Here the index $\sigma$ indicates a certain redundancy, i.e. the vector $C_r$ is not unique. This reflects the property of eq. (58) of [4] to have only a "pseudo-inverse". We will now verify this equation in order to demonstrate our approach. Due to (1.3), we have to find a solution of (1.4). To be systematic, we first collect a set of sums (2.3)-(2.5), with $s = 1 \dots n$. In fact, (2.4) and (2.5) have been written in [2] for $n = 5$, but, as mentioned above, it turns out that all the sums written in [2] are valid for any $n$ as well. For $n = 6$ it is $()_6 = 0$. Indeed, equations (2.3)-(2.5) have to be considered as identities in terms of the $Y_{ij}$, and the property $()_6 = 0$ has to be taken into account as a special property of the $Y_{ij}$ in (1.11). Thus we can finally write the metric tensor in the forms (2.6)-(2.8), and due to (1.6), equations (2.6) and (2.7) yield the solution (2.2) for $\sigma = 0$ and $\sigma = s \ne 0$, respectively, while (2.8) yields another option; see also (75) in [8]. In this case we have the representation (2.9), in the notation of (1.10). We remark that $\binom{s}{s}_6$ in (2.9) is a Gram determinant of a 5-point function, which may become small in certain domains of phase space. Here we have a certain freedom of choice, $s = 1, \dots, 6$, so that one will presumably find an $s$ for which $\binom{s}{s}_6$ is not small.

It is interesting to note that the $C_r$ in (2.2) for $\sigma = 0$ and $\sigma > 0$ agree. One may see this starting from the identity (2.11); see also (A.11) of [7]. Multiplying with $q_i^\mu$ and summing over $i$ proves our statement, since the resulting sum vanishes (2.12). The last identity follows from the vanishing of all scalar products of (2.12) with any non-vanishing chord; see also (A.2) of [2]. A similar calculation shows that (2.9) indeed differs from (2.2).
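The non-uniqueness for $N = 6$, where 5 nonvanishing chords live in 4 dimensions, can be made concrete with a pseudo-inverse. The sketch below is our own illustration (not the paper's explicit formulas): the singular Gram matrix admits only a pseudo-inverse, yet this still yields a valid metric representation, and its null space is exactly the redundancy labelled by $\sigma$ above.

```python
import numpy as np

# Our sketch of the N = 6 situation: 5 chords in 4 dimensions give a singular
# 5x5 Gram matrix with only a pseudo-inverse, so the solution is redundant.

rng = np.random.default_rng(2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

Q = rng.normal(size=(5, 4))                # 5 nonvanishing chords q_1..q_5
G = Q @ eta @ Q.T                          # 5x5 Gram matrix, rank 4: singular
assert np.linalg.matrix_rank(G) == 4

T = Q.T @ np.linalg.pinv(G) @ Q            # Moore-Penrose pseudo-inverse solution
assert np.allclose(T, eta)                 # still a valid metric representation

# the zero mode of G encodes the redundancy: shifting the solution along it
# changes nothing, since the chords annihilate the null vector
null = np.linalg.svd(G)[2][-1]             # right singular vector of the zero mode
assert np.allclose(Q.T @ null, 0.0)
```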
The reason why different representations are of interest is a possible optimization of the numerics: some representations may have small determinants in the denominator, while others do not. It remains to verify (1.7); for (2.2) with $\sigma = 0$ this is done with the help of one of the sums collected above.

7-point functions
For the 7-point functions we first investigate the representation corresponding to (2.2) for $\sigma = 0$. Eq. (A.9) of [2] can be written for arbitrary $n$ and $s = 1 \dots n$ as an identity in terms of the $Y_{ij}$ (3.1). As observed for the 6-point function in the discussion of (2.3)-(2.5), the vanishing of certain determinants simplifies the result considerably. Quite generally, with chords of dimension 4, all $()_n$ with $n \ge 7$ have rank 6, i.e. any (signed) minor of order 7 vanishes [9]. The $()_7$ is of order 8, and thus the $\binom{0}{0}_7$, $\binom{s}{0}_7$ and $\binom{s}{s}_7$ vanish as well; therefore the whole curly bracket in (3.1) vanishes (3.2). In general it is $\binom{0s}{0s}_7 \ne 0$, and one can write (3.3), from which we read off (3.4). This is exactly the result of (2.2) ($\sigma = 0$), only that now a line and a column of the $\binom{0}{0}_7$ are scratched in addition, a very natural result. Even more, we also find the correspondence of (2.2) for $\sigma = s > 0$, in the form (3.7). Multiplying with $q_i^\mu$ and summing over $i$ proves the statement, since the corresponding sum vanishes (3.8). Eq. (3.8) follows again from the vanishing of all scalar products with any non-vanishing chords, as given in (A.6) of [2] (3.9). For $n = 7$ the order of the determinants on the right-hand side of (3.9) is 7, but their rank is 6, and thus they all vanish. Similarly we proceed for the analogue of (2.9), which was proven for the 6-point function. We start from a relation like (3.1) which was not directly needed in [2], but occurred there in an intermediate step (3.10), with $s,t = 1 \dots n$. According to the above discussion it is $\binom{s}{s}_7 = \binom{t}{t}_7 = \binom{s}{t}_7 = 0$, such that again the curly bracket in (3.10) vanishes. Since in general $\binom{st}{st}_7 \ne 0$ (it is a 5-point Gram determinant), we can write the corresponding representation (3.11) and read off the coefficients (3.12).
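The rank statement can be verified numerically. The sketch below is our own check, under the conventions assumed here: the matrix underlying $()_n$ carries rows and columns $0, \dots, n$, with a zero corner entry, 1's along row and column 0, and $Y_{ij} = m_i^2 + m_j^2 - (q_i - q_j)^2$ inside. For 4-dimensional chords its rank is 6 for any $n \ge 7$, so every minor of order 7 vanishes.

```python
import numpy as np

# Our numerical check of the rank-6 property of ()_n for 4-dimensional chords.
# Conventions assumed: border row/column of 1's (index 0), zero corner, and
# Y_ij = m_i^2 + m_j^2 - (q_i - q_j)^2 with Minkowski squares, q_n = 0.

rng = np.random.default_rng(3)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def cayley_matrix(n):
    """(n+1) x (n+1) matrix underlying ()_n for random masses and chords."""
    Q = rng.normal(size=(n, 4))
    Q[-1] = 0.0                            # the convention q_N = 0
    m2 = rng.uniform(1.0, 2.0, size=n)     # random squared masses
    d = Q[:, None, :] - Q[None, :, :]      # chord differences q_i - q_j
    Y = m2[:, None] + m2[None, :] - np.einsum('ijk,kl,ijl->ij', d, eta, d)
    M = np.ones((n + 1, n + 1))
    M[0, 0] = 0.0
    M[1:, 1:] = Y
    return M

for n in (7, 8):
    M = cayley_matrix(n)
    assert np.linalg.matrix_rank(M) == 6   # rank 6, independently of n >= 7
    # hence e.g. the order-7 minor with row 0 and column 0 scratched vanishes:
    assert abs(np.linalg.det(M[1:8, 1:8])) < 1e-5
```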

8-point functions
For the 8-point functions, and analogously for $N > 8$, the calculation follows the same lines as for the 7-point function. Again, we first investigate the representation corresponding to (2.2) for $\sigma = 0$. Eq. (A.13) of [2] can be written for $s,t = 1 \dots n$ as (4.1). For $n = 8$, the determinants in the curly bracket of (4.1) are of order 7 while their rank is only 6. Therefore they all vanish, and so does the curly bracket. Since $\binom{0st}{0st}_8 \ne 0$, we obtain (4.2), from which we read off (4.3). Similarly we proceed in order to obtain the relation corresponding to (2.9). Here we start from (A.14) of [2], which we write for $s,t,u = 1 \dots n$ as (4.4). Again, the curly bracket in (4.4) vanishes. The $\binom{stu}{stu}_8$ is also the Gram determinant of a 5-point function, obtained from the 8-point function under consideration by scratching lines and columns $s$, $t$ and $u$; it does not vanish in general. Thus we obtain (4.5), from which we read off (4.6), in the notation of (1.9) and (1.10). Here again the upper indices $s$, $t$ and $u$ stand for the redundancy of the vector and can be chosen freely.
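For $n = 8$ the same statements can be tested minor by minor. The helper below is a hypothetical utility of ours (not code from the paper; it uses the matrix conventions described above): it scratches the listed rows and columns, attaches the sign $(-1)^{\sum \text{rows} + \sum \text{cols}}$, and confirms that order-7 minors vanish while the order-6 minor $\binom{stu}{stu}_8$ is generically nonzero.

```python
import numpy as np

# Hypothetical signed-minor helper (ours, not code from the paper): scratch the
# given rows/columns of the matrix underlying ()_n and attach the usual sign.

rng = np.random.default_rng(4)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def signed_minor(M, rows, cols):
    keep_r = [i for i in range(M.shape[0]) if i not in rows]
    keep_c = [j for j in range(M.shape[1]) if j not in cols]
    sign = (-1) ** (sum(rows) + sum(cols))
    return sign * np.linalg.det(M[np.ix_(keep_r, keep_c)])

# matrix underlying ()_8: border of 1's plus Y_ij = m_i^2 + m_j^2 - (q_i - q_j)^2
n = 8
Q = rng.normal(size=(n, 4))
Q[-1] = 0.0                                # the convention q_N = 0
m2 = rng.uniform(1.0, 2.0, size=n)
d = Q[:, None, :] - Q[None, :, :]
Y = m2[:, None] + m2[None, :] - np.einsum('ijk,kl,ijl->ij', d, eta, d)
M = np.ones((n + 1, n + 1))
M[0, 0] = 0.0
M[1:, 1:] = Y

s, t, u = 1, 2, 3
assert abs(signed_minor(M, [0, s], [0, t])) < 1e-4        # order 7: vanishes (rank 6)
assert abs(signed_minor(M, [s, t], [s, t])) < 1e-4        # order 7: vanishes as well
assert abs(signed_minor(M, [s, t, u], [s, t, u])) > 1e-4  # order 6: generically nonzero
```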

Contractions of tensor integrals with external momenta
In [2] and [10] we have advertised the contraction of tensor integrals with external momenta for the calculation of Feynman diagrams. This led us to a systematic study of sums over products of contracted chords and signed minors. In the present work these sums have found a generalisation: they are valid not only for the specific value $n = 5$ assumed for the 5-point functions, but are indeed correct for any $n$. In the following we demonstrate that, due to this property, we can not only derive specific representations of the metric tensor as above, but also perform the contraction with external momenta in the same way for $N$-point functions with $N \ge 6$, quite similarly to what was done for the 5-point functions. For the purpose of demonstration we confine ourselves to tensors up to rank 3 of the 7-point function.
For the vector integral, (1.3) yields (5.1). The $C_r^\mu$ can be chosen, e.g., from (3.12), and $I_6^r$ is the scalar 6-point function obtained from the scalar 7-point function by scratching line $r$. For the contraction of $C_r^\mu$ with a chord $q_a^\mu$ we need the generalization of (A.15) of [2] for $n = 7$ (5.2), with $s,t,r = 1 \dots n$. Eq. (5.2) has a surprising consequence. According to the construction in [4], the original tensor remains unchanged no matter how the vector $C_r^\mu$ is chosen, as long as conditions (1.5) and (1.7) are fulfilled. Thus, contracting $C_r$ with some chord $q_a$, one may still select the redundancy indices $s,t$. The optimal choice is apparently $s,t = a,N$. In this case only the first term in the square bracket of (5.2) remains, and moreover the $\binom{st}{st}_n$ cancels, i.e. the redundancy disappears and formally we can write (5.3). As a result, only 2 terms remain in the sum (1.3) after the contraction. Since $C_r$ carries the first index $\mu_1$ of tensors of any rank, this scalar product will occur in all applications.
For the tensor of rank 2, eq. (1.3) yields a representation where the square bracket is the 6-point vector according to (2.1). For $C_t^{s,\mu}$ we have taken for convenience the form resulting from (2.9). Here again (5.2) with $n = 7$ can be used for the projection of the 6-point vector. The only freedom we have now, however, is the choice of $s$. Contracting with another vector $q_b$, the choice $s = r, b, N$ is optimal, with the result (5.5) for $N = 7$: only the sums over the basic functions $I_5^{rt}$ survive. The reason for the appearance of the second term in (5.5) is that for $r = b$ the 6-point function, as a scratched 7-point function, is contracted with a vector which is not among the vectors defining the 6-point function, and for $r = N$ all 6-point vectors are nonvanishing. Thus there remains the redundancy index $s$.
Similarly we proceed for the tensor of rank 3 (5.6). The vector of the 5-point function, $I_5^{\lambda, rt}$, can be written as in [1] (5.7). According to (5.7), we see that the only new sum needed for the projection is (5.8), with $s,t,u = 1 \dots n$, and $n = 7$ here. We would like to close this section with a few general remarks. From the very beginning we work with the original tensors, i.e. we do not consider cancellations of scalar products appearing in the numerator against propagators in the denominators; we do not intend to cancel "reducible" terms. As a consequence, no shifts of the integration momentum are needed.
Likewise, no shifts of integration momenta are needed in the iterations from $N$-point to $(N-1)$-point functions. This is nicely seen from (5.6) and (5.7). Working only with the original Gram determinant $()_N$ of the diagram under consideration, the "scratches" are simply done in the $()_N$. For this purpose it is important to recall that all our sums are also valid for the "last" value of $s,t,\dots = N$.
Assume that in (5.7) $r,t \ne N$; then four vectors contribute in the sum for the 5-point vector, i.e. $q_N = 0$ is still in effect. If otherwise $r = N$, then the line $N$ is scratched and $q_N = 0$ is not available anymore. In this case 5 vectors $q_i$ contribute in the sum for the vector of the 5-point function, which is just what is needed here [1], since the general integration rules of course allow all chords to be different from 0.
In order to take advantage of the above formalism, the contractions of tensor integrals with external momenta should be implemented in a software package as building blocks, parameterised by the chords, with $a, b, \dots = 1 \dots (N-1)$. In fact it does not matter whether 4 or 5 vectors contribute, since this is automatically taken into account by the Kronecker $\delta$-functions in sums like (5.2) and (5.8). If a calculation is organized such that the tensor integrals are contracted exclusively with external momenta (or chords), quite efficient numerics will result.
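A possible shape for such building blocks is sketched below (a hypothetical interface of ours, not an existing package): the scratched signed minors are evaluated once and cached by the tuple of scratched lines, since the same minors reappear in many different contraction sums.

```python
import numpy as np
from functools import lru_cache

# Hypothetical building-block sketch (ours): cache signed minors of the matrix
# underlying ()_N by the tuple of scratched rows/columns, so repeated
# contraction sums reuse each determinant instead of recomputing it.

def make_minor_cache(M):
    """Return a cached evaluator for signed minors of M; rows/cols are sorted tuples."""
    @lru_cache(maxsize=None)
    def minor(rows, cols):
        keep_r = [i for i in range(M.shape[0]) if i not in rows]
        keep_c = [j for j in range(M.shape[1]) if j not in cols]
        return (-1) ** (sum(rows) + sum(cols)) * np.linalg.det(M[np.ix_(keep_r, keep_c)])
    return minor

# toy usage with a random symmetric matrix standing in for the 7-point ()_7:
rng = np.random.default_rng(5)
A = rng.normal(size=(8, 8))
minor = make_minor_cache(A + A.T)

v1 = minor((0, 1), (0, 2))                 # determinant computed here ...
v2 = minor((0, 1), (0, 2))                 # ... and served from the cache here
assert v1 == v2
assert minor.cache_info().hits == 1
```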

Conclusions
For $N$-point tensor functions with $N \ge 6$ we found an analytic form for the coefficients $C_r^{\mu_1}$ appearing in the iterative scheme (1.3). The crucial point of the derivation is the observation that sums over products of scalar products of chords and signed minors, derived earlier for $n = 5$, also hold identically for arbitrary $n$. When recurrence relations are applied directly to the evaluation of higher-rank tensors, the appearance of vanishing Cayley determinants makes their application cumbersome, as indicated in [8]. In the present approach the vanishing of certain Cayley determinants is, on the contrary, quite welcome, reducing the sums for the vectors $C_r^\mu$ considerably. As a result we obtain, for any $N$, expressions with inverse 5-point Gram determinants with an arbitrary choice of the scratched lines. Also the inverses $\binom{0s\cdots}{0s\cdots}_N$ and $\binom{0t\cdots}{st\cdots}_N$ are admitted. The sums at our disposal also allow us to perform contractions of the tensor integrals with external momenta quite efficiently. In particular, use can be made of the freedom in choosing the vectors $C_r^\mu$ to find the optimal form for these contractions. For higher-rank tensors this property is of particular relevance, because to each tensor index there corresponds a summation over the nonvanishing chords. For the tensor of rank 7 of the 7-point function, e.g., this amounts to $6^7 = 279936$ terms. All these terms are summed here analytically into a product of 7 factors, which will certainly save a large amount of computer time and storage.