An overview of the trigger system at the CMS experiment

Pallabi Das, on behalf of the CMS Collaboration

Published 11 April 2022 © 2022 IOP Publishing Ltd
Focus on New Frontiers in Physics - Selected Papers from ICNFP 2021
Phys. Scr. 97 (2022) 054008
DOI: 10.1088/1402-4896/ac6302

Abstract

The trigger system of the Compact Muon Solenoid (CMS) experiment at CERN has been evolving continuously since the startup of the LHC. While the base of the current configuration will remain in use for the next LHC running period (Run 3, starting in 2022), new features and algorithms are already being developed to cope with the higher data loads caused by the increasing LHC luminosity and pileup, as well as with new experimental signatures to be investigated, in particular displaced decay vertices stemming from relatively long-lived particles created in proton-proton collisions. Beyond this period, the trigger system will undergo a major upgrade to prepare for the high-luminosity LHC (HL-LHC) operations, which will deliver a luminosity of 5–7.5 times the design value. This corresponds to 140–200 pileup events, defined as overlapping proton-proton interactions in the same or nearby bunch crossings. During the HL-LHC, information from the silicon pixel and strip tracker will already be available at the Level-1 trigger, and the detector granularity and pseudorapidity coverage will increase. Trigger rates will rise by a factor of about 7.5 both at Level-1 (to 750 kHz) and at the High Level Trigger (to 7.5 kHz), and the latency, i.e. the processing time available for arriving at the Level-1 trigger decision, will increase significantly from 3.8 μs to 12.5 μs, allowing for the use of more sophisticated algorithms at the Level-1 trigger.


1. Introduction

The Higgs boson, the last ingredient of the Standard Model (SM) of particle physics, postulated nearly 60 years ago, was discovered by the LHC experiments in 2012 [1–3]. This observation came at the cost of major advancements in accelerator and detector technology, and in data analysis techniques, over five decades. One of the most crucial aspects of the experiments at hadron machines is the trigger system [4, 5], which provides the first decision to accept or reject a collision event in real time. During the LHC Run 2 operations (2016–2018), the instantaneous luminosity of the proton-proton collisions reached as high as 2 × 10³⁴ cm⁻² s⁻¹. However, not all collision events are interesting. On average, the interacting partons carry about 1/10th of the proton energies; hence the majority of interactions in hadron colliders are soft. The probability of interesting hard collisions at TeV energies is small, depending on the parton distribution functions (PDFs). Constraints on online data-processing bandwidth and storage capacity demand a judicious decision for selecting potentially interesting collision events in real time.

In the case of the CMS detector [6], each subsystem, except the silicon pixel and strip tracker, has a dedicated trigger algorithm at Level-1 (L1). A combination of these individual triggers delivers the global decision to forward the event to the High Level Trigger (HLT) processing. There, the final trigger decision is based on a complete event reconstruction, as close as possible to the offline one, with events identified using the final-state kinematics of interesting physics processes. This article describes the simplified subsystem algorithms of L1 [7] and a few selected, highly specific HLT algorithms.

After the successful Run 1 and Run 2 operations, the next LHC collisions are scheduled to take place during 2022–2024 (Run 3). Beyond this period, the CMS experiment will undergo the Phase-2 upgrade, which will result in an entirely new detector compatible with the high-luminosity collisions planned to start in 2028 [8]. The envisioned improvements to the trigger system that complement this change [9, 10] are briefly reported here.

2. The CMS detector trigger system

A two-level system for event selection is implemented online to reduce resource usage, by reconstructing only potentially interesting collisions at the HLT after they have been preselected at L1. This provides the flexibility to run overlapping triggers and to adjust thresholds as the luminosity changes, while keeping the data-transfer bandwidth fixed.

  • L1 trigger: Highly sophisticated custom hardware is used at the first level of trigger selection, which operates synchronously with the LHC collisions, meaning a decision is taken every 25 ns. Various particle signatures at the subdetectors, such as energy deposits in the calorimeters, track segments in the muon detectors, etc, are denoted as trigger primitives (TPs). The TPs are first sent to a pre-processor that performs energy calibration and clustering, and assigns transverse momentum (pT) or energy (ET) to the tracks or clusters. Position coordinates in terms of pseudorapidity (η) and azimuthal angle (ϕ) are also encoded in the TPs. The second processor identifies the candidate objects as jets, electrons, photons, hadronic τs, energy sums, and muons. These objects are then sorted with respect to their pT or ET and the leading twelve of each category are sent to the global trigger. The final decision is the OR of the subsystem triggers, ensuring high efficiency of event selection through trigger redundancy (see the sketch after this list). The latency of the L1 decision is 3.8 μs, including propagation delay. By design, one out of every 400 collision events is selected at L1, the output rate being 100 kHz.
  • HLT: The second level of the trigger system runs asynchronously with the LHC collisions on commercial CPUs. The events selected by L1 are fully reconstructed, with a performance as close to offline as possible, but using only about 1/100th of the processing time. The complete detector information, including the tracker, is analyzed, which requires immense resources. In order to minimize resource utilization, a modular approach is taken: alternating event-building and filtering steps quickly reject uninteresting data before proceeding to the complete reconstruction. Algorithms, known as paths, then analyze the event to identify several hundred final-state signatures and deliver the accept/reject signal for each path. A single event can trigger multiple paths, and the event is recorded if at least one HLT condition is satisfied. The output bandwidth of the HLT decision is 2 GB/s with a rate of 1 kHz, and the latency is ∼300 ms.
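
To make the L1 decision flow described above concrete, the following is a minimal, illustrative sketch rather than CMS firmware or software: hypothetical candidate lists are sorted by pT or ET, truncated to the leading twelve per category, and a few example seed conditions (whose names and thresholds are invented here) are combined with a logical OR.

```python
from dataclasses import dataclass

# Purely illustrative sketch of an L1-style global decision; not CMS code.

@dataclass
class L1Object:
    pt: float    # transverse momentum or energy in GeV
    eta: float
    phi: float

def leading_twelve(candidates):
    """Sort candidates by pT (or ET) and keep only the leading twelve,
    mimicking the truncation applied before the global trigger."""
    return sorted(candidates, key=lambda o: o.pt, reverse=True)[:12]

def global_trigger(objects_by_category):
    """Return the logical OR of a few example seed conditions."""
    egs   = leading_twelve(objects_by_category.get("egamma", []))
    muons = leading_twelve(objects_by_category.get("muon", []))
    jets  = leading_twelve(objects_by_category.get("jet", []))

    # Hypothetical seed names and thresholds, for illustration only.
    seeds = {
        "SingleEG30":  any(o.pt > 30 for o in egs),
        "SingleMu22":  any(o.pt > 22 for o in muons),
        "DoubleJet60": sum(o.pt > 60 for o in jets) >= 2,
    }
    return any(seeds.values()), seeds

accept, fired = global_trigger({
    "egamma": [L1Object(35.2, 0.4, 1.1)],
    "muon":   [L1Object(8.0, -1.7, 2.9)],
    "jet":    [L1Object(72.0, 0.1, -0.5), L1Object(64.5, 1.3, 2.2)],
})
print(accept, fired)   # the single-e/gamma and double-jet seeds fire
```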

The primary goal of the CMS experiment was to study high-pT physics, including the detection of the Higgs boson. However, it is also highly efficient in detecting low-pT objects. To study unknown physics processes, the experiment also records minimum bias events without any trigger requirement, randomly selecting one out of every 10⁶ events.
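
Such a prescaled random selection can be sketched as follows; the snippet is purely illustrative and not the CMS implementation.

```python
import random

# Illustrative prescaled selection of minimum bias events: accept a random
# 1-in-10^6 fraction of collision events, independent of any trigger logic.
MINBIAS_PRESCALE = 1_000_000

def accept_minimum_bias(rng=random):
    """Return True for roughly one out of every 10^6 calls."""
    return rng.randrange(MINBIAS_PRESCALE) == 0
```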

3. Level-1 trigger performance

3.1. Algorithms

The trigger algorithms at L1, based on calorimeter and muon system information, are described in the following, along with their performance. L1 objects are used as seeds for the HLT reconstruction. The calorimeter-based objects utilize trigger towers (TTs), combining electromagnetic calorimeter (ECAL) crystal energies and hadron calorimeter (HCAL) tower energies, while the muon system provides charge accumulation or hit information.

  • Electrons and photons: At L1, electrons and photons cannot be distinguished, as the tracker information is not available; they are denoted together as e/γ objects. Using dynamic clustering, the algorithm first identifies a local energy-deposit maximum in a TT and then adds the neighboring energies. The resulting cluster shape and electromagnetic energy fraction (from the ECAL) are used to discriminate against jets. The algorithm calculates an energy-weighted position of the objects, and further determines whether the object is isolated or not. Figure 1 shows the efficiency turn-on curve against the offline reconstructed ET of e/γ objects on the left, while the right plot shows the stability of the efficiency against pileup in different data-taking periods during Run 2 [11].
  • Hadronic taus: The tau lepton reconstruction using calorimeter energies is similar to that of e/γ objects at L1. However, the discrimination against jets using cluster shapes is optimized separately for tau-like deposits. Multiple clusters are merged to obtain the hadronic τ candidate, since it may decay to one, two, or three pions and the energy deposits are spread more widely across the detector. The left plot in figure 2 shows the efficiency to identify hadronic taus at L1 as a function of the offline reconstructed pT, for different thresholds applied online [12].
  • Jets and energy sums: A sliding-window algorithm first looks for a local energy-deposit maximum, then constructs a 9 × 9 TT square area around it, corresponding to a jet cone radius of 0.4 in the η-ϕ plane offline (a simplified sketch of this sliding-window clustering is given after this list). The jet energy is the sum of the constituent TT energies, from which an estimated energy due to pileup is subtracted. The scalar sum of jet energies (HT) and the imbalance of calorimeter energy measured in the transverse plane (ETmiss) are event variables that are also used as L1 seeds, important for identifying certain physics processes. The right plot in figure 2 shows the jet reconstruction efficiencies at L1 as functions of the offline reconstructed pT, for different thresholds applied online [12].
  • Muons: The three partially overlapping muon subdetectors of the CMS experiment use different techniques for muon identification. Based on the η of the TPs, the track-finder algorithm utilizes extrapolation methods (∣η∣ < 0.83) or pattern matching (0.83 < ∣η∣ < 2.4). The pT assignment of the tracks is done using the difference in ϕ coordinates within ∣η∣ < 0.83, pattern matching within 0.83 < ∣η∣ < 1.2, and Boosted Decision Tree (BDT) regression techniques within 1.2 < ∣η∣ < 2.4. Figure 3 shows the efficiency of reconstructing muons at L1 up to an offline pT of 1 TeV [13].
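
As an illustration of the sliding-window clustering referred to in the jets bullet above, the sketch below runs a 9 × 9 window search over a toy trigger-tower grid: a tower is kept as a jet seed if it is the local maximum of its window, the window energies are summed, and a crude flat pileup estimate is subtracted. The grid dimensions, energies, thresholds, and the pileup estimate are invented for illustration; this is not the CMS L1 firmware.

```python
import numpy as np

# Illustrative sliding-window jet finding on a toy trigger-tower (TT) grid.
# Grid size, seed threshold, and pileup estimate are simplified assumptions.

N_ETA, N_PHI = 40, 72                 # toy grid dimensions
rng = np.random.default_rng(0)
tt_energy = rng.exponential(0.5, size=(N_ETA, N_PHI))   # soft, pileup-like noise (GeV)
tt_energy[20, 36] += 80.0             # inject one hard deposit

def window(grid, ieta, iphi, half=4):
    """Return the 9x9 window around (ieta, iphi); phi wraps around, eta is clipped."""
    etas = list(range(max(0, ieta - half), min(grid.shape[0], ieta + half + 1)))
    phis = [(iphi + d) % grid.shape[1] for d in range(-half, half + 1)]
    return grid[np.ix_(etas, phis)]

def find_jets(grid, seed_threshold=5.0):
    jets = []
    pileup_per_tower = np.median(grid)          # crude per-tower pileup estimate
    for ieta in range(grid.shape[0]):
        for iphi in range(grid.shape[1]):
            e_seed = grid[ieta, iphi]
            win = window(grid, ieta, iphi)
            if e_seed < seed_threshold or e_seed < win.max():
                continue                        # not a seed / not a local maximum
            e_jet = win.sum() - pileup_per_tower * win.size
            jets.append((max(e_jet, 0.0), ieta, iphi))
    return sorted(jets, reverse=True)

print(find_jets(tt_energy)[:3])   # leading pileup-subtracted jet candidates
```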

Figure 1. Left: Efficiency to reconstruct e/γ objects at L1 as a function of offline ET , different curves denoting selection requirements at L1. Right: Reconstruction efficiency of e/γ objects satisfying tight isolation criteria as a function of pileup during Run 2. ET thresholds of the L1 and offline matched objects are indicated in the labels [11]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.

Figure 2. Reconstruction efficiency turn-on curves for hadronic taus (left) and jets (right) as functions of offline reconstructed tau pT and jet ET , respectively. Different curves denote different thresholds applied online [12]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.

Figure 3. Efficiency of reconstructing muons at L1 as a function of offline reconstructed muon pT [13]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


3.2. Phase-2 upgrade

The high-luminosity collisions will see an increase in the L1 accept rate up to 750 kHz, while the latency will increase to 12.5 μs. The average pileup is estimated to be 140–200, challenging the detector capabilities. The high granularity calorimeter (HGCAL) [14] will address the limitations of the current calorimeter system in the high-pileup scenario, bringing a significant improvement in the position resolution of calorimeter energy deposits. The muon system coverage will extend to ∣η∣ = 2.8, improving the reconstruction performance in the forward region. Most importantly, the tracker subdetector information will be available at L1, together with an additional processing layer implementing the particle-flow (PF) algorithm [15]. Below are two examples out of the many new L1 algorithms exploiting the upgraded detector.

Figure 4 shows the PF-jet reconstruction performance in terms of the kinematic variables. The firmware and emulator outputs are nearly identical, demonstrating great potential for high-luminosity data-taking.

Figure 4. Comparison between the firmware and emulator output of jet pT , η, ϕ for the PF-jet algorithm [9]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


An excellent example of the PF-based reconstruction is the hadronic tau reconstruction, which has been studied using three different approaches. The left plot in figure 5 shows the direct comparison of the single-tau trigger efficiency of a calorimeter-based, a track + e/γ based, and a neural network (NN) based reconstruction, as functions of the generator-level tau pT, for a fixed rate at 200 average pileup events. The new algorithms allow for an earlier turn-on and hence a lower pT threshold for selecting the offline tau, compared to the standard calorimeter-based reconstruction. The input objects to the NN-based tau reconstruction are clustered using the pileup per particle identification (PUPPI) algorithm [16], providing better pileup resiliency.

Figure 5. Left: improved trigger turn-on for track-based and PF-based tau identification compared to a simple calorimeter-based algorithm. For the same trigger rate, the PF-based algorithm utilizing a Neural Network (NN) for tau identification shows the best overall performance. Right: the muon trigger algorithm based on a Kalman filter (KBMTF) is observed to be ten times more efficient than the Barrel Muon Track Finder (BMTF) at high impact parameter values, making it suitable for identifying displaced muons [9]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


To extend the physics reach beyond the SM (BSM), final states with long-lived particles may be studied. Such particles may have muon-like features, which can be distinguished by displaced tracks in the muon system. An algorithm using a Kalman filter has been developed that improves the efficiency to trigger on muons in general within ∣η∣ < 0.83, and especially that of displaced muons, by an order of magnitude. The trigger efficiencies as a function of the impact parameter in the transverse plane, dxy, are shown in the right plot of figure 5.
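
For orientation, the transverse impact parameter dxy used in the right plot of figure 5 can be illustrated in a straight-line approximation: a track produced at (x0, y0) with azimuthal direction ϕ has a distance of closest approach to the beamline of dxy = y0 cos ϕ − x0 sin ϕ in the transverse plane. The snippet below is only this simplified illustration, neglecting track curvature in the magnetic field, and is not the KBMTF algorithm.

```python
import math

def dxy_straight_line(x0, y0, phi):
    """Transverse impact parameter of a straight-line track passing through
    (x0, y0) [cm] with azimuthal direction phi, relative to the beamline at
    the origin. Track curvature in the magnetic field is neglected."""
    return y0 * math.cos(phi) - x0 * math.sin(phi)

# A muon from the primary vertex: dxy is (close to) zero.
print(dxy_straight_line(0.0, 0.0, 1.2))
# A muon from a hypothetical displaced vertex at (10 cm, 5 cm): sizeable dxy.
print(round(dxy_straight_line(10.0, 5.0, 0.3), 2))
```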

4. HLT trigger performance

4.1. Algorithms

At the HLT, hundreds of algorithms or paths exist that utilize the entire event information, with a reduced level of detail suitable for running online. A broad selection of event topologies is targeted; a selected few are described below.

As the tracker information, in particular from the pixel subdetector, is available at the HLT, electrons can be distinguished from photons using hits in the pixel layers. Figure 6 shows the trigger efficiencies for selecting single electrons as functions of the offline reconstructed electron pT on the left, while the right plot shows their variation against pileup during different data-taking periods of Run 2 [17]. The stability of the efficiency with respect to pileup is achieved by requirements on shower-shape and isolation variables within the barrel region.

Figure 6. Left: efficiencies for triggering on electrons in different η regions of the detector, measured in data. The bottom panel shows the ratio of the data to Drell-Yan simulation. Right: variation of single electron trigger efficiency with pileup [17]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


Many BSM predictions require high-pT photons in the final state, for example the gauge-mediated supersymmetry (SUSY) breaking mechanism, assuming R-parity conservation [18]. Therefore, studying the photon trigger performance is of prime importance. The photon reconstruction algorithms are designed to be stable against pileup through identification requirements. The trigger efficiency for selecting a photon with pT > 200 GeV is shown in figure 7 [19].

Figure 7. The efficiency of a HLT algorithm that requires a photon with pT higher than 200 GeV, passing the loose identification criteria online [19]. The distributions of the offline photon pT in events with (numerator) and without (denominator) the trigger requirement are also shown. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


Currently, the Higgs boson mass is measured at per-mil level precision [20], driven by the clean H → γγ decay signature. The trigger used for this final-state selection requires the presence of two photons, with at least one photon matched to an L1 e/γ candidate, called seeded. The HLT decision also assesses the ratio of energy deposits in the HCAL and ECAL, shower-shape and isolation variables, and the mass of the di-photon system. Figure 8 shows the efficiencies of selecting the seeded and unseeded photons as functions of the electron ET in Z → ee events, with the electron veto in the identification inverted [21].
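
The di-photon mass used in this trigger can be computed from the photon kinematics with the standard massless-particle formula m(γγ) = sqrt(2 ET1 ET2 (cosh Δη − cos Δϕ)). The snippet below is a generic illustration with made-up kinematics, not the HLT implementation.

```python
import math

def diphoton_mass(et1, eta1, phi1, et2, eta2, phi2):
    """Invariant mass of two (massless) photons from their ET, eta, phi."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)   # wrap to [-pi, pi]
    deta = eta1 - eta2
    return math.sqrt(2.0 * et1 * et2 * (math.cosh(deta) - math.cos(dphi)))

# Made-up kinematics giving a mass close to 125 GeV:
m = diphoton_mass(70.0, 0.5, 0.2, 55.0, -0.4, 2.4)
print(f"m_gg = {m:.1f} GeV")
```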

Figure 8. Efficiency of the trigger used in the SM H → γγ analyses, as a function of the photon (probe electron) ET, where one of the two photons is L1 seeded (left) and the other one is identified without seeding (right) [21]. The labels denote different categories of photons, based on being located inside the barrel (EB) or endcap (EE) region and on the value of R9, defined as the ratio of the energy contained in a cluster of 5 × 5 ECAL crystals over the total supercluster energy. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


HLT taus and muons are reconstructed with almost 100% efficiency, as shown in figure 9 [22, 23]. The muon algorithm was updated near the end of LHC Run 2 to generate more seeds for building tracks, and an additional iterative tracking step was included. Hadronic tau (τh) triggers are important for studying H → ττ decays, which have been observed recently at the LHC [24]. The hadrons-plus-strips (HPS) tau reconstruction [25], which provides a higher efficiency at high pileup compared to the existing cone-based algorithm, was deployed at the end of Run 2.

Figure 9. Left: Muon reconstruction efficiency as a function of the offline reconstructed pT [22]. Right: efficiencies for the hadronic leg of the μτh trigger as a function of the offline reconstructed tau pT [24]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


HLT jets are first reconstructed using calorimeter information, and then tracking and PF information are added. Several thresholds of the single-jet trigger exist, the performance of which is shown in the left plot of figure 10 [26]. B-tagging of jets uses regional tracking information at the HLT, showing very good efficiency. This algorithm is used in several analyses, for example for a boosted Higgs boson decaying to a pair of b jets. As the decay products are collimated into a single jet with a larger cone radius of 0.8 (called an AK8 or fat jet), the HLT algorithm uses a multivariate discriminant to determine the likelihood of the jet containing two b quarks. The triggering efficiency as a function of the fat-jet pT is shown in the right plot of figure 10 [27].

Figure 10. Left: jet trigger efficiency as a function of offline reconstructed jet pT , for different thresholds applied online [26]. Right: Double b-tagger efficiency as a function of the offline reconstructed leading fat jet pT [27]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


4.2. Phase-2 upgrade

The inclusion of the HGCAL translates into a manyfold increase in the number of detector readout channels. The present CPU-based system will not be able to meet the resulting demand for computing resources. In order to mitigate this problem, a different computing configuration involving GPUs has been studied. A heterogeneous system using both types of processors will provide the necessary computing power at an affordable price. Additionally, it will be used to implement more sophisticated algorithms, such as 3-dimensional shower reconstruction, pixel track and vertex reconstruction, and pileup suppression. The estimated output rate during high-luminosity operation is 7.5 kHz, with an increased bandwidth of 61 GB/s.

The left plot of figure 11 shows the energy resolution of electrons reconstructed using the HGCAL. The reconstruction performance is observed to be stable over the complete η range of the endcaps. During the HL-LHC, the reconstruction of jets and missing transverse energy/momentum will suffer from the large pileup. The PF algorithm must be complemented with pileup-mitigation techniques for optimal performance; examples of such methods are the charged-hadron subtraction (CHS) [15] and the PUPPI algorithm [16]. The missing transverse momentum (pTmiss) or MET resolution is shown in the right plot of figure 11, which indicates that the PUPPI pTmiss is highly pileup resilient at the HLT.
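
As a simple illustration of the quantity shown in the right plot, the missing transverse momentum is the magnitude of the negative vector sum of the particle transverse momenta; in a PUPPI-like approach each particle enters this sum with a weight between 0 and 1 that reflects how likely it is to originate from the hard interaction. The particles and weights below are hypothetical, and the sketch is not the CMS PF/PUPPI code.

```python
import math

def missing_pt(particles):
    """Magnitude of the negative vector sum of weighted particle pT.
    Each particle is (pt, phi, weight); with weight = 1 for all particles
    this is plain PF-style MET, while PUPPI-like weights in [0, 1]
    de-emphasize particles likely to come from pileup."""
    px = -sum(pt * w * math.cos(phi) for pt, phi, w in particles)
    py = -sum(pt * w * math.sin(phi) for pt, phi, w in particles)
    return math.hypot(px, py)

# Hypothetical event: two hard particles plus soft, pileup-like particles.
event = [
    (45.0, 0.1, 1.0),    # hard candidate, weight 1
    (38.0, 2.9, 1.0),    # hard candidate, weight 1
    (6.0, -1.5, 0.1),    # soft, likely pileup: small weight
    (4.0, 1.0, 0.05),
]
print(f"weighted MET = {missing_pt(event):.1f} GeV")
```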

Figure 11. Left: energy resolutions of electrons reconstructed using HGCAL information. Right: pT miss resolutions for different reconstruction algorithms [10]. Copyright [2022] CERN for the benefit of the CMS Collaboration. Reproduction of this figure or parts of it is allowed as specified in the CC-BY-4.0 license.


5. Preparation for Run 3

Some of the improvements planned for Phase-2 are already being implemented in the trigger system for Run 3. A few of the displaced-particle triggers will be implemented at both L1 and the HLT, including the Kalman filter algorithm for muons. HCAL depth and timing information will be utilized for triggering on displaced jets. At the HLT, the heterogeneous computing architecture involving GPUs, described in [10], will be tested. A reduction of ∼100 ms in processing time, compared to using the CPU system alone, is foreseen.

As the HLT rate and bandwidth are still limited by the current hardware constraints, a few workaround strategies have been developed. First, the size of the stored events can be reduced by saving only the HLT-reconstructed objects, in a method called scouting, which has been used in CMS since Run 1 [28]. The second method reduces the computing resources by not reconstructing the collision events at all and instead parking them for later processing. The parked data can be analyzed during the shutdown period when resources are free; this method has proved to be very useful for studying low-pT b-physics events.

6. Summary

The trigger system of the CMS experiment performs the crucial job of identifying interesting events in real time. Both L1 and the HLT need to perform in the best way possible to maximize the physics reach while staying within the resource limitations. In addition to maintaining the performance with increasing pileup and detector degradation, future plans involve augmenting the system with new hardware and software. Novel ideas to select highly specific final states need both state-of-the-art processors and complex algorithms capable of being implemented in firmware. The near-term aim of the CMS experiment is to test new ideas in the trigger system before leaping toward the high-luminosity collisions.

Data availability statement

No new data were created or analysed in this study.
