The ATLAS hadronic tau trigger

With the high luminosities of proton-proton collisions achieved at the LHC, triggering strategies have become more important than ever for physics analysis. Naive inclusive single-tau triggers now suffer from severe rate limitations. To enable a broad program of physics analyses with taus, tau signatures must be combined with other objects, including electrons, muons and missing transverse energy (MET). These combined triggers open many opportunities to study new physics beyond the Standard Model and to search for the Standard Model Higgs boson. We present the status and performance of the hadronic tau trigger in ATLAS. We explain how hadronic tau decays are identified at trigger level using the ATLAS calorimeter and tracking system. Results from performance studies of the tau trigger are also shown, including measurements of the trigger efficiency using Z → ττ events.


Introduction
Hadronic tau decays play an important role in precision measurements of Standard Model (SM) processes and in searches for physics beyond the SM. Important searches in ATLAS involving taus include the Higgs boson, supersymmetric particles and exotic particles such as W′ and Z′ bosons.
The decay length of the tau is small compared to the radius of the LHC beam pipe; taus are therefore identified through their decay products measured in the detector. A tau decays into an electron or a muon and two neutrinos about 35% of the time. The remaining decays produce a neutrino and prompt charged hadrons (mostly pions). These hadronic decays can be identified in the detector as a jet of collimated calorimeter clusters with a low track multiplicity.
An overwhelming rate of jets from quarks or gluons (QCD) presents a great challenge, yet the ATLAS experiment has designed a dedicated tau trigger to identify the hadronic tau decays [1]. This has proven fruitful for many ATLAS analyses [2][3][4][5].

The ATLAS trigger system
The ATLAS trigger system [6] is a three-tiered system, hardware-based at the first level (L1). The remaining two levels, jointly called the high-level trigger (HLT), are software-based. At L1, spatial (∆η × ∆φ) regions-of-interest (RoIs) are identified using the calorimeter and muon systems and passed to the HLT. At the second level (L2), further identification is performed, taking into account all detector subsystems. At the third and final level, the event filter (EF), the algorithms closely match those used for offline reconstruction. The latencies at the first, second and third levels are approximately 2.5 µs, 40 ms and 4 s, respectively.

The ATLAS tau trigger
The ATLAS tau trigger selects hadronic tau decays, chiefly characterised by a narrow jet with one (single-prong) or multiple (multi-prong) charged tracks. At L1, electromagnetic (EM) and hadronic (HAD) calorimeter trigger towers are used to calculate the energy in the core and isolation regions. The trigger towers have a granularity of ∆η × ∆φ = 0.1 × 0.1. The core region is a two-by-two collection of trigger towers, while the isolation region is the four-by-four perimeter surrounding the core. Tracking and calorimeter information is used at the HLT to discriminate taus from QCD jets; it is advantageous to optimize single-prong and multi-prong taus separately. To maximize efficiency with respect to the offline selection, the EF uses algorithms similar to those used in offline reconstruction [7]. For a more thorough discussion of the algorithms used in 2011, please refer to [8].
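The core and isolation energy sums described above can be illustrated with a short sketch. This is not the ATLAS implementation: the grid layout, indexing convention and function name are assumptions for illustration, and detector effects such as the periodic wrap-around in φ are ignored.

```python
def l1_tau_energies(towers, i, j):
    """Illustrative L1 tau core/isolation energy sums.

    `towers` is a hypothetical 2D list of trigger-tower transverse
    energies on the 0.1 x 0.1 (eta, phi) grid; (i, j) indexes the
    top-left tower of the 2x2 core. Returns (core E_T, isolation E_T),
    where the isolation region is the 4x4 perimeter around the core.
    Note: phi wrap-around is ignored in this simplified sketch.
    """
    # 4x4 window centred on the 2x2 core
    window = [row[j - 1:j + 3] for row in towers[i - 1:i + 3]]
    total = sum(sum(row) for row in window)
    core = sum(sum(row[1:3]) for row in window[1:3])
    return core, total - core  # perimeter = 12 towers
```

A cut on the core energy selects the candidate, while a cut on the perimeter sum enforces isolation.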

Improvements of the ATLAS tau trigger for 2012
Changing running conditions in 2012, including an increase of the centre-of-mass energy to 8 TeV and a doubling of the peak luminosity, have resulted in a dramatic increase in pile-up. One way to measure the pile-up is to count the number of primary vertices identified in a bunch crossing. Performance studies with 2011 data indicated that the trigger efficiency with respect to offline identification had a severe pile-up dependence, as can be seen in figure 1. Figure 1 shows the efficiency of the tau20 medium trigger with respect to offline identified taus, using preliminary 2011 data corresponding to 2.5 fb⁻¹. The trigger efficiency is measured with a tag-and-probe analysis of Z → τµτh events, following the offline tau identification efficiency measurement [7]. The symbol τµ denotes taus decaying leptonically to muons, while τh denotes taus decaying hadronically. Offline candidates are identified by a Boosted Decision Tree (BDT) multivariate (MV) algorithm. The term tau20 medium indicates a 20 GeV threshold on the transverse energy at the EF and a medium selection on shower-shape variables. The efficiency is plotted after applying each successive level of the trigger (L1, L2, and finally EF). This loss of efficiency was deemed unacceptable, presenting a pressing need to revise the algorithms for 2012 to combat the pile-up dependence.
Figure 2 shows the marginal efficiencies for a cut on the L2 EM radius calculated with clusters within two cone sizes, ∆R = √((∆η)² + (∆φ)²) of 0.2 and 0.4, with respect to the L1 RoI. The marginal efficiency is the efficiency with respect to L1 of that particular requirement, in this case the L2 EM radius. The EM radius is the electromagnetic-energy-weighted radius of the L2 tau candidate. The term tau20 medium1 restricts the number of tracks to at most three, a tighter requirement on the number of tracks with respect to tau20 medium.
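As an illustrative sketch of the EM-radius variable, the following computes the EM-energy-weighted ∆R of calorimeter cells within a cone around the RoI axis. The function name and the `(eta, phi, em_et)` cell representation are assumptions, not the ATLAS code.

```python
import math

def em_radius(cells, axis_eta, axis_phi, cone=0.2):
    """EM-energy-weighted radius of an L2 tau candidate (illustrative).

    `cells` is a hypothetical list of (eta, phi, em_et) tuples; only
    cells within Delta-R < `cone` of the RoI axis contribute. The 0.2
    default reflects the reduced cone size adopted for 2012.
    """
    num = den = 0.0
    for eta, phi, et in cells:
        deta = eta - axis_eta
        # wrap the phi difference into (-pi, pi]
        dphi = math.atan2(math.sin(phi - axis_phi), math.cos(phi - axis_phi))
        dr = math.hypot(deta, dphi)
        if dr < cone and et > 0.0:
            num += et * dr  # energy-weighted radius
            den += et
    return num / den if den > 0.0 else 0.0
```

A narrow genuine tau gives a small EM radius, while broader QCD jets (and extra pile-up energy in a large cone) push it upward, which is why shrinking the cone from 0.4 to 0.2 reduces the pile-up sensitivity.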
Figure 2 indicates an improvement in the pile-up dependence if the cone size is reduced to 0.2 for 2012 running; 0.4 was used in 2011. A smaller cone size decreases the number of tracks coming from pile-up interactions as well as the amount of pile-up energy summed in the calorimeter. A useful discriminating variable at L2 is the ratio of the scalar sum of the pT of tracks in the core region to that in the isolation region. An additional criterion for the tracks entering this calculation was introduced in 2012. The track variable ∆Z0 is defined as the distance along the z-axis between a given track and the highest-pT track in the RoI. Tracks in the RoI of an L2 tau candidate are now required to have |∆Z0| < 2 mm. This new track requirement serves to discriminate tracks coming from hadronic tau decays from pile-up tracks.
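A minimal sketch of this track-based ratio, combining the ∆Z0 requirement with the core/isolation cones, could look as follows. The cone radii, the dictionary-based track representation and the function name are assumptions for illustration only.

```python
import math

def core_iso_pt_ratio(tracks, axis_eta, axis_phi,
                      r_core=0.1, r_iso=0.3, dz0_max=2.0):
    """Illustrative core-to-isolation track pT ratio at L2.

    `tracks`: hypothetical list of dicts with keys eta, phi, pt, z0
    (z0 in mm). Tracks must satisfy |z0 - z0(leading track)| < dz0_max,
    mimicking the 2012 |Delta-Z0| < 2 mm pile-up rejection. The cone
    radii r_core and r_iso are assumed values, not the ATLAS settings.
    """
    if not tracks:
        return 0.0
    lead = max(tracks, key=lambda t: t["pt"])  # highest-pT track in RoI
    core = iso = 0.0
    for t in tracks:
        if abs(t["z0"] - lead["z0"]) >= dz0_max:
            continue  # reject tracks from pile-up vertices
        dphi = math.atan2(math.sin(t["phi"] - axis_phi),
                          math.cos(t["phi"] - axis_phi))
        dr = math.hypot(t["eta"] - axis_eta, dphi)
        if dr < r_core:
            core += t["pt"]
        elif dr < r_iso:
            iso += t["pt"]
    return core / iso if iso > 0.0 else float("inf")
```

A genuine hadronic tau concentrates its pT in the core and yields a large ratio, while QCD jets spread pT into the isolation annulus.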
In 2011, cut-based selections were used at the EF; for 2012 these were replaced by MV algorithms. Two MV algorithms, a BDT and a log-likelihood (LLH), take pile-up-robust variables as input. These were optimized, and operating points were chosen to give 80% and 85% signal efficiency for the medium selection for multi-prong and single-prong taus, respectively, with respect to offline tau candidates. The BDT algorithm was chosen for online running given its higher background rejection. Figures 3 and 4 show the multi-jet background rejection as a function of signal efficiency for single-prong and multi-prong taus.

Performance of the ATLAS tau trigger in 2012 Data
Beginning in May 2012, ATLAS has collected data using the algorithms described in [8] with the additional changes outlined above. As expected, and as evidenced in figures 5 and 6, the tau trigger performs well at high pile-up. Comparing figure 1 and figure 5, the severe pile-up dependence is mitigated. Figures 5 and 6 show the efficiency of the tau20 medium1 trigger, with respect to tau candidates identified by the offline BDT algorithm, as a function of the number of primary vertices and of the pT of the offline tau candidate, respectively. It is important to note that the pT threshold has remained unchanged in 2012 with respect to 2011, despite the changing running conditions.
Figure 5. Efficiency of the tau20 medium1 trigger, with respect to offline tau candidates identified by the BDT algorithm, as a function of the number of vertices, measured in 2012 data [9].
Figure 6. Efficiency of the tau20 medium1 trigger, with respect to offline tau candidates identified by the BDT algorithm, as a function of offline tau pT, measured in 2012 data [9].

Conclusions
The ATLAS tau trigger successfully collected data during 2011 LHC running; however, an expected increase in pile-up presented the opportunity to revise the algorithms for 2012 running. This opportunity was met by implementing pile-up-robust selections, including a decrease in cone size and the introduction of the ∆Z0 track variable. Coupled with a move to MV classification algorithms, these changes have proven successful in 2012.