Hadroproduction of t anti-t pair with two isolated photons with PowHel

We simulate the hadroproduction of a t anti-t pair in association with two isolated hard photons at the 13 TeV LHC using the PowHel package. We use the generated events, stored in the Les Houches event format, to make predictions for differential distributions formally at next-to-leading order (NLO) accuracy. We present predictions at the hadron level employing the cone-type isolation of the photons used by experiments. We also compare the kinematic distributions to the same distributions obtained in the t anti-t H final state when the Higgs boson decays into a photon pair, a process to which the one discussed here is an irreducible background.


Introduction
The Higgs boson was discovered by the ATLAS [1] and CMS [2] experiments two years ago, and many of its properties have been measured since then. The results of these measurements agree with the predictions of the Standard Model within the experimental uncertainties: (i) it is a J^P = 0^+ particle, (ii) the branching ratios are as predicted, and (iii) it couples to the heavy gauge bosons with strengths set by their masses [3,4]. Its mass is not predicted, but is measured consistently by the two experiments: m_H = (125.5 ± 0.2 (stat) ± 0.6 (syst)) GeV by ATLAS [5] and (125.6 ± 0.4 (stat) ± 0.2 (syst)) GeV by CMS [6].
The program of measuring the properties of the Higgs boson is, however, not yet complete.
Its couplings to the fermions, y_f, are still to be measured, and it remains to be checked whether its triple and quartic self-couplings are consistent with its mass. The smallness of these couplings requires large integrated luminosity, and such measurements become feasible only at the design c.m. energy of the LHC. In this respect the t-quark is special among the fermions due to its large mass. As m_H < 2 m_t, the Higgs boson cannot decay into a tt-pair; therefore, the Yukawa coupling y_t can only be measured directly from tt H final states, which is an important goal at the LHC.
Measuring the tt H production cross section is very challenging due to the small production rates and the in general large backgrounds. The experiments concentrate on studying many decay channels sorted into three main categories: (i) the hadronic, (ii) the leptonic and (iii) the di-photon channels. In the hadronic channel the Higgs boson is assumed to decay into a bb or a τ+ τ− pair, while one or both t-quarks decay leptonically (hadrons with single-lepton or dilepton channels). In the leptonic channel the Higgs boson decays into charged leptons and missing energy (through heavy vector bosons), while one or both t-quarks again decay leptonically. Finally, in the di-photon channel the Higgs boson decays into a photon pair, while the t-quarks decay into jets (di-photon with hadrons) or semileptonically (di-photon with lepton). A common characteristic of all these channels is the large background from other SM processes. Thus a precise measurement needs to be aided by theory through precise predictions of distributions at the hadron level for the hadroproduction of a tt-pair in association with one or two hard object(s) X, with X = H [7], Z^0 [8,9], W^± [10], an isolated photon [11], a jet [12], a bb-pair [13], or a pair of jets or isolated photons. The goal of the present article is to provide high-precision predictions for the last of these.
Isolated hard photons are important experimental tools at the LHC. However, in perturbation theory they are rather cumbersome objects because photons couple directly to quarks. If the quark that emits the photon is a light quark, treated as massless in perturbative QCD, then the emission is enhanced at small angles and in fact becomes singular for strictly collinear emission. Due to this divergence of the collinear emission, the usual experimental definition of an isolated photon, which allows for small hadronic activity even inside the isolation cone, cannot be implemented directly in a perturbative computation. One has to take into account the non-perturbative contribution of photon fragmentation, too [14].
Recently, a new method was presented to make predictions for processes with isolated hard photons in the final state [11]. This method builds on the POWHEG method [15,16] as implemented in the POWHEG-Box [17]. The POWHEG-Box program requires input from the user: the Born phase space and various matrix elements, namely (i) the Born squared matrix element (SME), (ii) the spin- and (iii) the colour-correlated Born SMEs, and the SMEs for (iv) the real and (v) the virtual corrections. In the PowHel framework [18] we obtain all these ingredients from the HELAC-NLO code [19]. The result of PowHel is pre-showered events (with kinematics up to Born plus first radiation) stored in Les Houches event (LHE) files [20]. These events can be further showered and hadronized using standard shower Monte Carlo (SMC) programs up to the hadronic stage, where any experimental cut can be applied, so a realistic analysis becomes feasible.
The essence of the new method of predicting cross sections for processes with isolated photons is to introduce generation cuts [12,21] during event generation that are sufficiently small so that predictions at the hadron level, obtained with physical cuts and isolation, do not depend on the generation cuts. If the generation cuts include the smooth isolation of the photon [22], which is infrared safe at all orders of perturbation theory, then the LHEs can effectively be considered a sufficiently inclusive event sample, which leads to correct predictions for distributions obtained with the usual experimental isolation.
The new method of making predictions for processes with isolated photons in the final state at next-to-leading order (NLO) accuracy matched with parton shower (PS) has been tested thoroughly for the case of tt γ hadroproduction in Ref. [11]. Here we use the same method for tt γ γ hadroproduction, an irreducible background to tt H final states in the H → γγ decay channel. This process was first computed at leading-order accuracy in Refs. [23,24], but predictions at higher order have never been considered. With the new general-purpose tool MADGRAPH5/aMC@NLO [25] such a computation can be performed, but has not been published. Furthermore, in MADGRAPH5/aMC@NLO only the smooth photon isolation of Frixione [22] can be used, which can be implemented experimentally only approximately. Thus we also make the first predictions for this process at NLO accuracy and compare the predictions of PowHel to those of MADGRAPH5/aMC@NLO using the Frixione isolation. Then we show that the predictions at the hadronic stage are independent of the generation cuts, and finally we make predictions for the LHC at 13 TeV collision energy and compare the tt H(H → γγ) signal to the tt γ γ background.

Implementation
Details of generating LHEs with the POWHEG-Box have been discussed extensively in the literature. Here we only summarize those features of our implementation that are specific to the tt γ γ hadroproduction process.
The process p p(p̄) → tt γ γ involves several subprocesses. Not all of these are independent; they are related by crossing. Hence the matrix elements, both at tree level and at one loop, are generated for a subset of subprocesses by HELAC-NLO and all the others are obtained by crossing into the relevant channels. In HELAC-NLO the matrix elements are stored in separate files using an internal notation; these are commonly dubbed skeleton files. For this process we generated skeleton files for the following subprocesses: g g → γ γ tt, u ū → γ γ tt and d d̄ → γ γ tt for the Born and virtual contributions, and g g → γ γ tt g, u ū → γ γ tt g and d d̄ → γ γ tt g for the real-emission contribution. The ordering among the final-state particles is in compliance with that used by the POWHEG-Box. We checked the correct evaluation of all the matrix elements by comparing them in several randomly picked phase-space points to the original ones of HELAC-NLO. The self-consistency between the real-emission contribution and the subtraction terms was established by verifying that the ratio of the real-emission part and the subtraction terms approaches one in all kinematically degenerate configurations.
The phase space for tt γ γ production with two massless particles in the final state at the Born level is quite involved due to possible singular photon radiation off the initial-state (anti)quarks. Thus we decided to build the Born phase space in a recursive fashion using the method of Refs. [26,27]. To construct an (n+1)-particle phase space from an n-particle one we use the inverse constructions of Ref. [16] with three random numbers on the unit interval. A choice can be made between the two possible inverse constructions, the initial- and the final-state one, depending upon the underlying splitting. Due to the mass of the t-quark, singular photon radiation can only come from the initial state, hence in our phase-space construction we considered only the initial state as a possible source of photons, which we arranged symmetrically for the two photons.
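Schematically, each recursive step factorizes the phase-space measure as (notation as in Ref. [16]; the explicit Jacobians, not shown, depend on whether the underlying splitting is initial- or final-state):

```latex
\mathrm{d}\Phi_{n+1} = \mathrm{d}\Phi_{n}\,\mathrm{d}\Phi_{\mathrm{rad}}\,,
\qquad
\mathrm{d}\Phi_{\mathrm{rad}} = \mathrm{d}\Phi_{\mathrm{rad}}(\xi,\,y,\,\phi)\,,
```

where the three uniform random numbers on the unit interval are mapped to the radiation variables (ξ, y, φ): the (rescaled) energy, the splitting angle and the azimuth of the radiated particle.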

Predictions at NLO
With the internal consistency of the matrix elements checked and a suitable underlying Born phase space set up, the code is validated to make predictions at NLO accuracy. Using the publicly available program MADGRAPH5/aMC@NLO [25] we can compare the predictions of two independent codes. For this comparison we used the following set of parameters: √s = 8 TeV at the LHC, CTEQ6L1 PDF with 1-loop running α_s for the LO and CTEQ6M PDF with 2-loop running α_s for the NLO comparison. We used equal factorization and renormalization scales set to the mass of the t-quark, m_t = 172.5 GeV. We set the fine-structure constant to α_EM = 1/137 and kept it fixed at that value. We used five massless quark flavours throughout the paper. We employed the following set of selection cuts: (i) both photons had to be hard, p_⊥,γ_i > 30 GeV, i ∈ {1, 2}, and (ii) central, |y_γ_i| < 2.5, i ∈ {1, 2}; (iii) a Frixione isolation [22] was applied to the photons with δ_0 = 0.4, ε_γ = n = 1. When this isolation is applied, the partonic activity around the photon is limited for any separation δ ≤ δ_0: the total transverse energy deposited by partons in a cone of half-angle δ around the photon is bounded from above. Our purpose with this comparison is only to show that the two codes lead to the same predictions.

              σ_LO        σ_NLO
  aMC@NLO     863 ± 1     1063 ± 2
  PowHel      864 ± 1     1060 ± 7

Table 1: Cross sections at LO and NLO accuracy obtained with MADGRAPH5/aMC@NLO and PowHel. We refer to the main text for the details of the computations.
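For reference, the smooth isolation of Ref. [22] requires, for every cone of half-angle δ ≤ δ_0 drawn around the photon,

```latex
\sum_i E_{\perp,i}\,\theta\big(\delta - R_{i\gamma}\big)
\;\le\; \epsilon_\gamma\, p_{\perp,\gamma}
\left(\frac{1-\cos\delta}{1-\cos\delta_0}\right)^{\!n}
\qquad \text{for all } \delta \le \delta_0\,,
```

where the sum runs over the partons at distance R_{iγ} from the photon in the rapidity-azimuthal angle plane, and ε_γ = n = 1 here. The accompanying transverse energy is thus forced to vanish smoothly with the cone size, so strictly collinear photon-parton configurations are excluded while arbitrarily soft radiation is allowed at any angle.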
The cross sections corresponding to our selection cuts both at LO and at NLO can be found in Table 1. For illustrative purposes four sample distributions are shown in Figs. 1 and 2. As can be seen by comparing the cross sections at LO and NLO and from the distributions, the two calculations are in good agreement with each other.

Technical issues

Generation cuts
In a recent work [11] we proposed a new method to generate sufficiently inclusive LHEs including photons, which can be used to make predictions for the hadroproduction of isolated hard photons at the hadron level, formally correct at NLO accuracy in perturbative QCD. The essence of the method is to introduce generation cuts that are sufficiently small so that the physical cross section, obtained with the usual isolation parameters, is independent of the parameters of the technical isolation needed for generating the events. For processes without a light parton in the final state at LO this technical isolation can be either a narrow cone or the smooth isolation of Frixione [22] with a small value of the parameter δ_0. For final states with light partons present already at LO, only the latter option can be used. We have shown that the two methods yield identical predictions for the case of tt γ hadroproduction, where both are applicable.
The process investigated in this paper is also free of light partons at LO; therefore, both technical isolations can be used, and in fact we employed both (for details see Sects. 5 and 6). In addition to the isolation during event generation, we also need a generation cut on the transverse momentum of each photon, which we set to 5 GeV; by varying it we checked that our physical predictions for hard-photon production do not depend upon this cut.

Suppression factors
This process has two photons in the final state. As no photon-photon splitting is possible, two technical cuts are sufficient to obtain a finite cross section at LO: lower limits on the transverse momenta of the two photons. To enhance event generation in the physically relevant portion of phase space we use a suppression factor, which suppresses events in regions of phase space that are expected to be removed by the physical selection cuts. Instead of a straightforward generalization of the suppression factor of the tt γ process [11], we suggest a factor depending on the transverse momentum p_⊥,γ_i of each photon and on the invariant mass m_γγ of the two-photon system, with exponent k = 2 chosen throughout. The suppression in the invariant mass of the two-photon system is introduced to allow for a much better population of the large and moderately large m_γγ region. Our results were obtained with p_⊥,supp = 100 GeV and m_supp = 100 GeV.
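The precise functional form is not essential, since its effect is compensated in the event weights; one realisation consistent with the description above (an illustrative reconstruction, not necessarily the exact expression used) is

```latex
F_{\mathrm{supp}} =
\left(\frac{p_{\perp,\gamma_1}^2}{p_{\perp,\gamma_1}^2 + p_{\perp,\mathrm{supp}}^2}\right)^{\!k}
\left(\frac{p_{\perp,\gamma_2}^2}{p_{\perp,\gamma_2}^2 + p_{\perp,\mathrm{supp}}^2}\right)^{\!k}
\left(\frac{m_{\gamma\gamma}^2}{m_{\gamma\gamma}^2 + m_{\mathrm{supp}}^2}\right)^{\!k},
```

which tends to one for hard, large-invariant-mass configurations and damps the soft and small-m_γγ regions.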
In the POWHEG-Box the differential cross section correct up to NLO is built from the Born term B, the regularized virtual contribution V, the real-emission part R, the local subtraction terms C regularizing the real emission, and the remnants of collinear factorization G_⊕ and G_⊖; dΦ_B is the underlying Born and dΦ_rad the one-particle radiation phase-space measure, and the integral over momentum fractions is considered implicit. In the POWHEG-Box it is possible to decompose the real-emission part into two disjoint contributions: the singular (R_s) and the remnant (R_r) one. Originally this split was introduced to deal with problems arising when the Born term becomes zero but not all the contributions [28]. This decomposition resulted in faster event generation and later became standard [26]. By introducing B̄, built from the Born, virtual and subtracted singular real-emission contributions, we can speak about a B̄ and a remnant contribution to the cross section. To decide whether a real-emission contribution is singular or remnant, the default criterion of the POWHEG-Box compares R to the sum of all collinear subtraction terms, C_C, plus the sum of all soft ones, C_S, scaled by an arbitrary parameter ξ, set to 5 by default. When dealing with a process with photon(s) in the final state, and hence with a technical cone or technical smooth isolation, large contributions can turn up when the separation between one of the final-state photons and a final-state massless quark comes close to the technical isolation limit or, when a smooth isolation is used, when the separation is below δ_0. These contributions end up in the remnant due to the large R/B ratio, degrading the efficiency of remnant-type event generation.
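In this notation the NLO cross section and its singular/remnant decomposition can be written (reconstructed here in the standard POWHEG-Box form [16,17]; the separation criterion is shown schematically):

```latex
\mathrm{d}\sigma = \mathrm{d}\Phi_B
\left\{ B(\Phi_B) + V(\Phi_B) + G_\oplus(\Phi_B) + G_\ominus(\Phi_B)
+ \int \mathrm{d}\Phi_{\mathrm{rad}} \big[ R(\Phi_R) - C(\Phi_R) \big] \right\},
```

which, upon splitting R = R_s + R_r, becomes

```latex
\mathrm{d}\sigma = \bar B(\Phi_B)\,\mathrm{d}\Phi_B + R_r(\Phi_R)\,\mathrm{d}\Phi_R\,,
\qquad
\bar B = B + V + G_\oplus + G_\ominus
+ \int \mathrm{d}\Phi_{\mathrm{rad}} \big[ R_s - C \big]\,,
```

with the default criterion, schematically,

```latex
R_s = R\,\theta\big(\xi\,(C_C + C_S) - R\big)\,, \qquad R_r = R - R_s\,.
```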
This is a consequence of the method of event generation, which is based on the hit-and-miss technique: an overall upper bound is determined, which is used in a subsequent step to unweight the events over the phase space. Large contributions push this upper bound very high, hence a large gap can build up between the upper bound and the regular contributions characterizing the vast majority of the phase space, hampering the unweighting procedure.
When the underlying Born process is singular, the contributions close to the singularity are enhanced due to the apparent, unregularized singularity. These large contributions result in a large norm, which forces the hit-and-miss based event generation to produce almost all the events close to the singular region. This problem can be solved by introducing a suppression factor which, as its name suggests, suppresses these large contributions by multiplying them with a dynamics-dependent, small number. In the generated event file the phase-space coverage of the events is then such that fewer events are generated in and near the singular regions. Since the suppression is a purely non-physical device, the events can only bear physical meaning if the event weight incorporates the inverse of the suppression factor. Thus the suppression of the number of events in regions of no physical importance is compensated by scaling up the event weight accordingly.
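The interplay of hit-and-miss unweighting, a suppression factor, and the compensating event weight can be sketched in a one-dimensional toy model (illustrative only; the functions f and f_supp and the bound f_max are made up for the example and are not PowHel code):

```python
import random

def generate_events(f, f_supp, f_max, n_events, seed=1):
    """Hit-and-miss generation on a 1-D toy 'phase space' x in [0, 1).

    Points are unweighted against f(x) * f_supp(x), whose upper bound on
    [0, 1) is f_max; each accepted event carries the weight 1 / f_supp(x),
    so that weighted distributions reproduce f itself.
    """
    rng = random.Random(seed)
    events = []
    while len(events) < n_events:
        x = rng.random()
        if rng.random() * f_max < f(x) * f_supp(x):
            # compensate the suppression in the event weight
            events.append((x, 1.0 / f_supp(x)))
    return events

# toy cross section peaked at x = 0 and a suppression factor damping that peak
f = lambda x: 1.0 / (x + 0.01)
f_supp = lambda x: x * x / (x * x + 0.01)
events = generate_events(f, f_supp, f_max=5.0, n_events=200)
```

Events then cluster away from the suppressed peak at x = 0, while the weights 1/f_supp(x) restore the correct weighted distribution.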
The concept of the suppression factor can be used to improve the efficiency of remnant-type event generation. We modify the differential cross section by introducing a second suppression factor, F^r_supp(Φ_R), affecting only the remnant contribution and depending on the real-emission phase space. To obtain physically meaningful events, when a remnant event is generated the event weight has to be multiplied by the inverse of the remnant suppression factor. Since large contributions to the remnant part of the cross section only come from those configurations where a final-state (anti)quark is not well separated from one of the photons, during our calculations we used a remnant suppression built from p_⊥(γ_i, q), the relative transverse momentum of the ith photon and the final-state (anti)quark, with the remnant suppression parameter p^r_⊥,supp set to 20 GeV throughout and exponent k = 4. The suppression is applied only to those contributions where a massless (anti)quark is present in the final state. The presence and this form of the remnant suppression significantly increased the efficiency of remnant event generation.
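With the remnant suppression included, the generated cross section is modified as follows, where the second expression shows one form of F^r_supp consistent with the description above (an illustrative reconstruction):

```latex
\mathrm{d}\sigma = \bar B(\Phi_B)\,\mathrm{d}\Phi_B
+ F^{r}_{\mathrm{supp}}(\Phi_R)\,R_r(\Phi_R)\,\mathrm{d}\Phi_R\,,
\qquad
F^{r}_{\mathrm{supp}}(\Phi_R) = \prod_{i=1}^{2}
\left(\frac{p_\perp(\gamma_i, q)^2}
{p_\perp(\gamma_i, q)^2 + \big(p^{r}_{\perp,\mathrm{supp}}\big)^2}\right)^{\!k},
```

with p^r_⊥,supp = 20 GeV and k = 4; each accepted remnant event carries the compensating weight 1/F^r_supp.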

Choice of scale
The final state of tt γ γ production contains four particles at the Born level, which leaves us with a rich set of kinematic variables usable for setting the renormalization and factorization scales of the process. In our previous work [13] on the hadroproduction of the tt bb final state, with massless b-quarks, we found that half the sum of the transverse masses of the final-state particles provides a good scale: it yields a moderate K-factor and small differences in the shapes of the distributions at LO and NLO accuracy. Thus we adopt the same choice of central scale for tt γ γ production, where a hat on a kinematic variable indicates that the underlying Born kinematics was used to construct the scale. We show the scale dependence of the cross section as the scale is varied around this central value.
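Explicitly, with m_⊥ = √(m² + p_⊥²) and hats denoting underlying-Born kinematics, the central scale reads

```latex
\mu_0 \equiv \mu_R = \mu_F
= \frac{1}{2}\sum_{i\in\{t,\,\bar t,\,\gamma_1,\,\gamma_2\}} \hat m_{\perp,i}
= \frac{1}{2}\Big(\hat m_{\perp,t} + \hat m_{\perp,\bar t}
+ \hat p_{\perp,\gamma_1} + \hat p_{\perp,\gamma_2}\Big)\,,
```

where the transverse masses of the massless photons reduce to their transverse momenta.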

The effect of the POWHEG Sudakov factor
When pre-showered events are generated the POWHEG Sudakov factor may have an effect on measurable quantities through higher order terms in the perturbative expansion.
Hence it is always informative to compare predictions at NLO, that is at fixed order, with predictions obtained from pre-showered events. In order to quantify the effect of the POWHEG Sudakov factor, events were generated with the setup of Sect. 3 and the predictions drawn from these events were compared to those of the NLO calculation. Sample distributions can be found in Figs. 4-6. In these distributions, except for the transverse momentum of the extra parton, good agreement can be seen between the fixed-order calculation and the predictions drawn from pre-showered events. In the prediction for the transverse momentum of the extra parton obtained from pre-showered events, the usual Sudakov shape can be seen at smaller values, while for large transverse momentum, log_10(p_⊥/GeV) > 1.75, the LHE prediction merges into the fixed-order one. As we already pointed out in the case of tt γ production [11], the Sudakov shape gets distorted at around log_10(p_⊥/GeV) ≈ 0.75 and below due to the presence of the cut (Frixione isolation) acting on the real-emission part.

Independence of the technical isolation
In our previous work [11] an extensive study was carried out to prove the independence of the predictions of the technical isolation at various levels of evolution of the pre-showered events. For the present process we have repeated all steps, but we restrict ourselves to presenting the case of the full SMC; at the other stages of event evolution the predictions obtained with different technical isolations are also in agreement with each other. The events were generated with the setup listed in Sect. 4.3. For concreteness, the cross sections obtained at various levels for the different technical isolations are listed in Table 2. These cross sections correspond to the following set of cuts:
• Jets are reconstructed with the IR-safe anti-k_⊥ algorithm [29] with R = 0.4 and p_⊥,j > 30 GeV using FastJet [30,31].
• The photons should be isolated from the jets and each other such that ∆R > 0.4. The separation is measured in the rapidity-azimuthal angle plane.
• The hadronic activity is limited around both photons in a cone of R_γ = 0.4: E_⊥,had^max = 3 GeV.
The hadronic activity in a cone of R_γ around the photon momentum, p_γ, is measured through the total hadronic transverse energy deposited in this cone, where the summation runs over all the hadronic tracks. To make our predictions at the hadron level we used PYTHIA-6.4.25. At the hadron level we allowed the t-quarks to decay, but kept the QED shower and the multiparticle interactions switched off. To check the independence of the technical isolation we generated events with three different technical cone isolations (R_γ,q ∈ {0.1, 0.05, 0.01}) and compared these pairwise. Sample distributions at the hadron level are depicted in Figs. 7-9. Among these distributions t-quark related observables can be found too; these were obtained by reconstructing the t-quark using MCTRUTH. From these distributions we can see that the deviations of the predictions from each other are due only to statistical fluctuations. All of the event sets obtained with the three different technical cone isolations result in the same predictions at the hadron level, which means that any of them can safely be used in analyses to provide theoretical predictions.
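The hadronic transverse energy in the isolation cone, referred to above, is explicitly

```latex
E_{\perp,\mathrm{had}}
= \sum_{i\,\in\,\mathrm{hadrons}} E_{\perp,i}\,
\theta\big(R_\gamma - R(p_\gamma, p_i)\big)\,,
```

where R(p_γ, p_i) is the separation between the photon and hadron i in the rapidity-azimuthal angle plane, and the event is accepted if E_⊥,had ≤ E_⊥,had^max.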

Independence of the smooth technical isolation
In the previous section we showed that it is possible to generate event samples with a technical cone isolation such that the physical predictions are independent of it. This is ensured by the small size of the technical isolation cone, R_γ,q, and the smallness of the fragmentation contribution. The technical cone isolation proved a useful tool to generate events when the Born process does not contain massless partons in the final state. The smooth isolation, Eq. (1) with a sufficiently small value of δ_0, provides a possible alternative: when it is applied there is no fragmentation contribution, and it can be used even for processes with light partons in the final state at the Born level. To investigate the independence of the physical predictions of the technical smooth isolation at various levels, we generated two event samples with two different technical δ_0 values (0.05 and 0.1). We then compared the predictions obtained with these events to those generated with the technical cone isolation, such that only those predictions were compared for which the two technical isolation parameters coincided in numerical value. The cross sections obtained with the two different technical isolations are collected in Table 3. From these numerical data it is evident that the cross sections of the two types of technical isolations agree if the parameter characterizing the isolation is matched; hence trustworthy predictions can be made with both methods. The agreement can also be seen in the kinematic distributions, a short selection of which is included in Figs. 10-12. Apart from statistical fluctuations, the two different technical isolation schemes provide the same predictions. This observation gives a strong foundation to the claim that, with a suitably chosen isolation parameter during event generation, both schemes can be used to make physical predictions with physical cuts and isolations. Since the applicability of the smooth isolation is not restricted to the case where no light parton is present in the final state at the Born level, we propose that version to be used generally. In the next section we are going to use it to provide signal and background predictions for tt H production in the case when the Higgs boson decays into a photon pair.

Figure 9: The same as Fig. 7 but for the rapidity of the hardest photon and the top quark.

Phenomenology
The main conclusion of the previous two sections is that we can make physical predictions with experimental isolation criteria using pre-showered events prepared with a suitably small technical isolation. The tt γ γ production plays an important role in Higgs physics when attention is turned to the properties of the Higgs boson. One such important property is the Yukawa coupling of the Higgs boson to the massive fermions. The Yukawa coupling of the Higgs boson to the top quark can be measured in tt H production; when the Higgs boson decays in the diphoton channel, the irreducible background is tt γ γ production. As we provided predictions for tt H production at the hadron level in Ref. [7], we decided to present predictions for both the signal and the background at the hadron level with the highest available precision. For the tt γ γ background we take the events with δ_0 = 0.05. For the signal we generated a new set of pre-showered events sharing the same parameters with the tt γ γ sample, generated for the 13 TeV LHC with the CT10nlo PDF and the correspondingly chosen 2-loop α_s, m_t = 172.5 GeV and m_H = 125 GeV. We chose the renormalization and factorization scales equal to each other, set to µ_R = µ_F = m_t + m_H/2. To make our predictions we used PYTHIA-6.4.25 for simulating the evolution of the events to the hadron level, but with multiple interactions turned off.
The partonic final state contains a t-quark pair for both the signal and the background. There are two ways to detect t-quarks: either through their decay products, looking for displaced secondary vertices to tag the decaying b-quarks, or using t-tagging. We decided to use the latter, as provided by the HEPTopTagger [32,33]. To perform the t-tagging we took the following steps:
• Jet reconstruction using all the hadronic tracks with the C/A algorithm with R = 1.5 using FastJet [30,31].
• Only those jets were considered for which p ⊥ > 200 GeV and |y| < 5.
• The t-quark candidate subjets were looked for in the jet mass range of 150-200 GeV.
• We selected that particular subjet as a t-quark jet for which the jet mass was the closest to m t .
With respect to the other parameters of the top tagging we kept the default values provided by the HEPTopTagger. The cuts applied to the hadronic events were the following:
• Two hard photons were required in the central region with p_⊥,γ > 30 GeV and |y_γ| < 2.5.
• To isolate the photons from the hadronic activity, a jet clustering was performed on all the hadronic tracks with the anti-k_⊥ algorithm [29], as implemented in FastJet [30,31], with R = 0.4 and p_⊥,j > 30 GeV. The photons were required to be isolated from these jets by ∆R(γ, j) > 0.4, measured in the rapidity-azimuthal angle plane.
• Both of the hard photons had to be isolated from the top jet obtained by the top tagging, and from the three subjets of the top jet, by ∆R(γ, j) > 0.4, measured in the rapidity-azimuthal angle plane.
• One hard lepton was required in the central region with p_⊥,ℓ > 30 GeV and |y_ℓ| < 2.5. In selecting leptons we did not make any distinction between leptons and antileptons.
• The lepton had to be isolated from both the jets and the photons by ∆R(ℓ, i) > 0.4, i ∈ {γ, j}, measured in the rapidity-azimuthal angle plane.
• Around both photons only a minimal hadronic activity was allowed in a cone of R = 0.4, such that E_⊥,had^max = 3 GeV.

Figure 13: The invariant mass and rapidity-azimuthal separation of the diphoton system at the hadron level for the signal (tt H, red) and for the background (tt γ γ, blue), such that the signal sits on top of the background. Further details of the calculation can be found in the main text.
In order to reach reasonable statistics even for the signal, we turned off all decay channels of the Higgs boson but the diphoton one and rescaled the event weights with the H → γ γ branching ratio. The most interesting distribution is the invariant mass of the two-photon system; along with this, a couple more distributions are depicted in Figs. 13 and 14. The distributions make clear that the cross section drops into the attobarn range. Due to the narrow width of the Higgs boson, the signal appears as a single, well-defined peak in the m_γγ spectrum with an excellent signal-to-background ratio. In all the other distributions the background overwhelms the signal.

Conclusions
We presented the first predictions for the hadroproduction of the tt γ γ final state both at NLO accuracy and at NLO matched with parton shower. The predictions at NLO were computed by two programs, PowHel and MADGRAPH5/aMC@NLO, and complete agreement was found.
While the predictions at NLO accuracy can only be made using the smooth isolation prescription of Frixione for the hard photons, the matched predictions are based on LHEs that can be exposed to the usual cone-type isolation of the experiments. This is achieved by generating the events using generation cuts that are sufficiently loose, so that the predictions do not depend on them when the usual physical isolations are applied.
We found that, using half of the sum of the transverse masses of the final-state particles as the central scale, the NLO corrections are about 25 % and decrease the dependence on the renormalization and factorization scales significantly. The generated events are stored and can be downloaded from http://www.grid.kfki.hu/twiki/bin/view/DbTheory/TtaaProd. We generated similar events for the tt H final state, to which the tt γ γ process is an irreducible background in the H → γγ decay channel. We compared some kinematic distributions of the signal and background processes and found that the invariant mass of the two hard photons shows a clearly visible narrow peak, but with a small cross section, in the attobarn range for the LHC at 13 TeV. For the other kinematic distributions the signal and background have rather similar shapes.