Performance of the CMS Level-1 trigger in proton-proton collisions at √s = 13 TeV

At the start of Run 2 in 2015, the LHC delivered proton-proton collisions at a center-of-mass energy of 13 TeV. During Run 2 (2015–2018) the LHC eventually reached a luminosity of 2.1 × 10³⁴ cm⁻²s⁻¹, almost three times that reached during Run 1 (2009–2013) and a factor of two larger than the LHC design value, leading to events with a mean of up to about 50 simultaneous inelastic proton-proton collisions per bunch crossing (pileup). The CMS Level-1 trigger was upgraded prior to 2016 to improve the selection of physics events in the challenging conditions posed by the second run of the LHC. This paper describes the performance of the CMS Level-1 trigger upgrade during the data-taking period of 2016–2018. The upgraded trigger implements pattern recognition and boosted decision tree regression techniques for muon reconstruction, includes pileup subtraction for jets and energy sums, and incorporates pileup-dependent isolation requirements for electrons and tau leptons. In addition, the new trigger calculates high-level quantities such as the invariant mass of pairs of reconstructed particles. The upgrade reduces the trigger rate from background processes and improves the trigger efficiency for a wide variety of physics signals.


Introduction
The CERN LHC collides bunches of particles in the CMS and ATLAS experiments at a maximum rate of about 40 MHz, with the bunches spaced 25 ns apart. Of these events, only about 1000 per second can be recorded for further analysis. The Level-1 trigger system uses custom hardware processors to select up to 100 kHz of the most interesting events with a latency of 4 µs. The High Level Trigger (HLT) then performs a more detailed reconstruction, including particle tracking, on a commodity computing processor farm, reducing the rate by another factor of 100 within a few hundred milliseconds. Events passing the HLT selection are sent to a separate computing farm for more accurate event reconstruction and storage.
The LHC operation is organized into periods of physics production, in which protons or heavy ions are collided, and periods of shutdown, during which repairs and upgrade work are performed. The original CMS trigger system performed efficiently during LHC Run 1 (2009–2013) and in 2015. Its design is described in ref. [1] and its performance in ref. [2]. In 2015 the LHC increased the proton-proton center-of-mass collision energy from 8 to 13 TeV. The instantaneous luminosity steadily increased throughout Run 2, which ended in 2018. These changes were designed to provide a larger data set for studies of rare interactions and searches for new physics, but they also presented several challenges to the trigger system. Improved trigger algorithms were needed to enhance the separation of signal and background events and to provide more accurate energy reconstruction in the presence of a larger number of simultaneous collisions per LHC bunch crossing (pileup).
The CMS Collaboration undertook a major upgrade to the Level-1 trigger system (Phase 1) between Run 1 and Run 2, and plans a second upgrade (Phase 2) after Run 3 ends (expected in 2024). The Phase 1 upgrade replaced all of the Level-1 trigger hardware, cables, electronics boards, firmware, and software, as described in the Technical Design Report for the Level-1 trigger upgrade [3]. Despite higher instantaneous luminosity, energy, and pileup, the upgraded Level-1 trigger maintained or increased its efficiency to separate the chosen signal events from background, because of finer detector input granularity, enhanced object reconstruction (e.g., µ, e/γ , jet, τ, and energy sums), and correlated multi-object triggers targeting specific physics signatures. This paper describes the trigger algorithms of the Phase 1 Level-1 trigger upgrade and reports their performance, measured using Run 2 data. A brief overview of the CMS detector is given in section 2. Section 3 describes the performance of the LHC and its impact on the CMS trigger system in Run 2. Section 4 provides an overview of the large collection of algorithms used to select events for physics measurements. Section 5 describes the design of the upgraded Level-1 trigger, including updates since ref. [3]. The reconstruction algorithms, along with their performance, are described in detail for each subdetector: the muon trigger in section 6, and the calorimeter trigger in section 7; appendix A provides details on a calorimeter trigger issue that affected Run 2 data. Section 8 provides two examples of new multi-object global trigger algorithms, while section 9 describes how the data quality of the Level-1 trigger is monitored in real time. Section 10 summarizes and draws conclusions regarding the achievements of the upgraded Level-1 trigger in Run 2.

The LHC in Run 2
Trigger performance depends on the running conditions of the LHC, such as instantaneous luminosity, number of colliding bunches, and even the structure of the filling scheme. The LHC was designed to collide protons with a center-of-mass energy of 14 TeV and an instantaneous luminosity of 1.0 × 10³⁴ cm⁻²s⁻¹, but it initially operated at lower energies and intensities. During Run 1 the center-of-mass energy was increased in steps up to 8 TeV, with a peak instantaneous luminosity near 8.0 × 10³³ cm⁻²s⁻¹. At that time, the LHC operated with a longer minimum bunch spacing of 50 ns, instead of the originally foreseen 25 ns.
During the first long shutdown of the LHC in 2013–2014, the accelerator was modified to provide safe operation at 13 TeV with 25 ns bunch spacing, and the CMS experiment underwent upgrades [5] to prepare for a dramatic increase in collision rate. Run 2 of the LHC lasted from 2015 until the end of 2018, with peak instantaneous luminosities of about 2.1 × 10³⁴ cm⁻²s⁻¹. A typical filling scheme for the LHC in Run 2 comprised 2556 proton bunches per beam out of 3564 possible bunch locations. The bunches were grouped in "trains" of 48 bunches with 25 ns spacing, with larger gaps between trains. Of these, 2544 bunches collided at the CMS interaction point. In the second long shutdown of the LHC (2019–2020), upgrades to the accelerator are planned, possibly increasing the center-of-mass energy for Run 3 (foreseen to start in late 2021 or early 2022) and allowing the LHC to sustain a maximum instantaneous luminosity of 2.0 × 10³⁴ cm⁻²s⁻¹ for longer periods of time.
In 2017 the LHC suffered frequent beam dumps. These were caused when an electron cloud, generated by tightly packed bunches, interacted with frozen gas in the beam pipe. The gas had become trapped in one area of the LHC during the year-end technical stop between 2016 and 2017 [6]. To mitigate this effect, the LHC moved to a special "8b4e" filling scheme in September 2017, in which the standard 48-bunch trains are replaced by mini-trains of 8 filled bunches followed by 4 empty slots, suppressing the formation of electron clouds. Since the 8b4e filling scheme allows a maximum of 1916 filled bunches in the LHC, the peak instantaneous luminosity was leveled to ≈1.55 × 10³⁴ cm⁻²s⁻¹ so that the average pileup would not exceed 60. The LHC delivered 41.0 and 49.8 fb⁻¹ of proton-proton collisions to CMS in 2016 and 2017, respectively, during which 35.9 and 41.5 fb⁻¹ of good-quality data were recorded.
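The quoted leveling target can be cross-checked with a rough back-of-the-envelope estimate. The sketch below assumes an inelastic proton-proton cross section of about 80 mb at 13 TeV and an LHC revolution frequency of about 11245 Hz; neither number appears in the text above.

```python
# Rough estimate of mean pileup under luminosity leveling in the 8b4e scheme.
# Assumptions (not from the text): inelastic pp cross section ~80 mb at 13 TeV,
# LHC revolution frequency ~11245 Hz.
SIGMA_INEL_CM2 = 80e-27  # 80 mb expressed in cm^2
F_REV_HZ = 11245.0       # LHC revolution frequency
lumi = 1.55e34           # leveled instantaneous luminosity, cm^-2 s^-1
n_bunches = 1916         # maximum filled bunches in the 8b4e scheme

# Mean number of inelastic collisions per bunch crossing:
pileup = lumi * SIGMA_INEL_CM2 / (n_bunches * F_REV_HZ)
print(f"average pileup ~ {pileup:.0f}")  # close to the quoted ceiling of 60
```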
In 2018 the beam dump issues had largely been mitigated, so a return to the preferred nominal filling scheme was possible. The advantage of this scheme is the use of a larger number of colliding bunches, providing higher instantaneous luminosity without increasing the pileup. The peak luminosity of about 2.0 × 10³⁴ cm⁻²s⁻¹ led to an average pileup of 55, similar to that at the start of 2017. The LHC ran smoothly in 2018 and delivered an integrated luminosity of 68.0 fb⁻¹ to CMS, which recorded 59.7 fb⁻¹ of good-quality data.
The LHC periodically provides short special runs, such as the van der Meer scans, with nonstandard beam settings, which require dedicated triggers and calibrations. The precise measurement of the integrated luminosity recorded by CMS is a necessary ingredient for most of the CMS physics results, and the CMS experiment has several detectors dedicated to this measurement. The van der Meer scans provide data necessary to calibrate these measurements.
During the van der Meer scans, the LHC beams are scanned across each other to provide an accurate luminosity calibration. The trigger system is used to measure the rate of the beam collisions, which is used to calculate the luminosity, as described in ref. [7]. For some periods of the van der Meer scan, the Level-1 trigger system records, with high rate, only events from selected bunch crossings in the LHC orbit bunch structure to improve the precision of the luminosity calibration.

The physics program and the trigger menu
The CMS physics program targets many areas of interest to the high-energy physics community. After the discovery of the Higgs boson [8][9][10], measuring its properties, which are currently compatible with the standard model (SM) predictions [11], became of central importance. Searches for supersymmetric and exotic particles, together with candidates for dark matter, are also central to the CMS physics program and they require a high-performance trigger. Such a high-performance trigger also enables precision measurements of SM properties in the electroweak, top quark, and quantum chromodynamics (QCD) sectors, with special attention to the physics of bottom quarks, where triggering objects often have low transverse momentum (p T ). Heavy ion collisions are included in the CMS physics program, expanding our knowledge of quark-gluon plasma dynamics.
The Level-1 trigger information from the muon and calorimeter detectors with coarse granularity and precision is used to select collision events for investigations in all of the previously mentioned physics areas. The selection is performed using a list of algorithms (known as "seeds"), which check events against predetermined criteria, that are collectively called the "menu". Any event that satisfies the conditions of at least one seed in the menu is accepted for further processing in the trigger chain. This initiates a readout of the complete detector information from the data acquisition system, and the data are sent to the HLT. The broad range of menu algorithms reflects the wide variety of research interests of the CMS Collaboration. The Level-1 menu evolves with shifting CMS physics priorities and adapts to changes in beam or detector performance.
The most straightforward trigger algorithms consist of criteria applied to one or more objects of a single type, such as muons, hadronic jets, tau leptons, photons or electrons, the scalar sum of jet transverse momenta (H_T), and the missing transverse momentum (E_T^miss), the magnitude of the negative vector sum of transverse momenta. Typical criteria include thresholds on the transverse component of the object's energy E_T (or momentum) and on its η. Signal processes with massive particles typically produce objects at high p_T and low |η| values (central in the detector), whereas the vast majority of background objects have low p_T and tend to have higher |η|. Single- and double-object seeds form the majority of the menu and cover about 75% of the available rate. Muon and electron thresholds are chosen to efficiently select leptonic W and Z boson decays, and ττ thresholds are set to maximize the Higgs boson acceptance in this decay channel.

Figure 1. Fractions of the 100 kHz rate allocation for single- and multi-object triggers and cross triggers in a typical CMS physics menu during Run 2.
The "cross" seeds combine physics objects of different types, for example a muon and a jet, allowing lower thresholds that target a diverse range of signals. More complex algorithms using correlations between multiple objects select highly specific signal events, such as hadrons decaying to muons, or Higgs bosons produced via vector boson fusion (VBF). Finally, a small fraction of events passing less restrictive algorithms are collected to calibrate the detectors and measure trigger efficiencies. Figure 1 shows the "proportional rate", the fraction of the maximum Level-1 trigger rate allocated to single-, multi-(same type), and cross-(different type) object seeds. In the proportional rate calculation, events triggered by N different seeds are weighted by 1/N to ensure that the total sums to 100%.
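The 1/N weighting used in the proportional rate can be illustrated with a short sketch; the seed names and triggered events below are invented for the example.

```python
# Illustrative "proportional rate" bookkeeping: an event that fires N seeds
# contributes 1/N to each seed, so the per-seed fractions sum to 100%.
# Seed names and events are invented for this example.
from collections import defaultdict

events = [
    {"SingleMu22"},
    {"SingleMu22", "DoubleEG25_12"},
    {"ETMiss120"},
    {"SingleMu22", "ETMiss120"},
]

weighted = defaultdict(float)
for fired in events:
    for seed in fired:
        weighted[seed] += 1.0 / len(fired)  # share the event among its seeds

total = sum(weighted.values())
fractions = {seed: w / total for seed, w in weighted.items()}
assert abs(sum(fractions.values()) - 1.0) < 1e-12  # fractions sum to 100%
```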
The menu algorithms are designed using a simulation of the Level-1 object reconstruction using either Monte Carlo (MC) simulated collision events or, where possible, previously collected data. The seed thresholds are adjusted to achieve a total menu rate that is less than 100 kHz, estimated with data collected with a trigger that requires only a crossing of proton bunches, referred to as a zero-bias trigger. The detection of the crossing of bunches consists of the coincidence of two simultaneous signals from the two beam pick-up monitors installed at the opposite ends of CMS along the beam line.
Trigger algorithm rates depend on the ability of the trigger reconstruction to discriminate signal objects, arising in hard collisions, from background or misidentified objects. This becomes more difficult as pileup increases. Figure 2 shows the rate of some benchmark trigger seeds targeting leptons (left) and hadrons (right) as a function of pileup. Rate and pileup are measured in a time interval called a "luminosity section", corresponding to 2¹⁸ LHC orbits or 23.3 seconds of data taking. In this and subsequent figures, error bars on the data points represent their statistical uncertainty only. Single-object trigger rates generally increase linearly with pileup, whereas double-object paths may have a higher-order dependence. The largest dependence on pileup is shown by the seeds based on the missing transverse energy. The Level-1 trigger reconstruction cannot distinguish between objects generated by different collisions within the same bunch crossing. In offline reconstruction, by contrast, objects can be associated with the different reconstructed vertices originating from different collisions; this requires tracking information, which is not available in the Level-1 trigger. The rate of an algorithm can be reduced by applying a "prescale" that determines what fraction of the events selected by the seed will pass the trigger. A prescale of N means that only one in every N events satisfying the condition is accepted. Prescale values can only be positive integers.

Figure 2. Level-1 trigger rates as a function of pileup for some benchmark seeds targeting leptons (left) and hadrons (right). Rates are measured using data recorded during the 2018 LHC run. Definitions of the seed names are given in table 1. The curves represent fits to the data points that are quadratic and constrained to pass through the origin.

Figure 3. Total Level-1 menu rates as a function of pileup for three sets of algorithms, or "prescale columns", defined in the text. The rates were recorded during an LHC fill with 2544 proton bunches. The instantaneous luminosities of 2.0, 1.7, and 1.5 × 10³⁴ cm⁻²s⁻¹ correspond to an average pileup of 55, 47, and 42, respectively. The curves represent fits to the data points that are quadratic and constrained to pass through the origin.
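As a quick arithmetic check of the luminosity-section duration quoted above (2¹⁸ orbits, with 3564 bunch slots spaced 25 ns apart per orbit):

```python
# Cross-check of the "luminosity section" duration: 2^18 LHC orbits,
# each orbit consisting of 3564 bunch slots spaced 25 ns apart.
orbit_s = 3564 * 25e-9           # one LHC orbit ~ 89.1 microseconds
lumi_section_s = 2**18 * orbit_s
print(f"{lumi_section_s:.1f} s")  # ~23.4 s, consistent with the quoted 23.3 s
```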
With a prescale of two, for example, only half of the events selected by the seed will propagate to the HLT. A "prescale column" is a set of prescale values applied to each of the seeds in a particular menu. During an LHC fill the beam intensity decreases with time, so multiple prescale columns with decreasing prescale values are used, to maximize signal efficiency while keeping the rate under 100 kHz.
Trigger algorithms used for most physics analyses have a prescale value of one in all columns, whereas high rate calibration triggers generally have prescale values that are greater than one. Figure 3 shows the trigger rate as a function of pileup, defined as for figure 2, for a few benchmark prescale columns of the trigger menu. These were tuned to reach a total Level-1 trigger rate of 100 kHz for three different target instantaneous luminosity values. The prescale columns for luminosities of 1.5 × 10 34 cm −2 s −1 and 1.7 × 10 34 cm −2 s −1 , represented by the black dots and red squares, respectively, were not used to collect data at the highest pileup, but were activated only when their corresponding Level-1 trigger rate was lower than 100 kHz. Although quadratic functions fit the data points well, the very small quadratic coefficient of these fits indicates a mostly linear dependence of the rate on pileup suggesting a negligible contamination from pileup events.
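A minimal sketch of the prescale behavior described above; this counter-based form is illustrative only, since the actual hardware implementation is not described in this text.

```python
# Minimal sketch of prescale logic: a prescale of N passes one of every N
# events that satisfy the seed condition. A "prescale column" is simply one
# such integer per seed in the menu; the values here are illustrative.
class PrescaledSeed:
    def __init__(self, prescale: int):
        assert prescale >= 1, "prescales are positive integers"
        self.prescale = prescale
        self._counter = 0

    def accept(self) -> bool:
        """Called once per event satisfying the seed condition."""
        self._counter += 1
        if self._counter == self.prescale:
            self._counter = 0
            return True
        return False

seed = PrescaledSeed(prescale=2)
passed = sum(seed.accept() for _ in range(1000))
print(passed)  # 500: half of the selected events propagate to the HLT
```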
The total number of algorithms in the CMS Level-1 menu used in proton-proton collisions is between 350 and 400; the system architecture is limited to 512, a factor of 4 larger than the Run 1 system. There were about 150 unprescaled seeds in the menu at the end of 2018, of which approximately 100 were "contingency" seeds with stricter selection requirements. The remaining 50 were responsible for collecting data for all physics analyses that used the full integrated luminosity delivered by the LHC. The other 250 algorithms were prescaled and used for calibrations, monitoring, trigger efficiency measurements, and other ancillary measurements. Tables 2 and 3 show the unprescaled algorithms and their corresponding thresholds. Algorithms with no p_T threshold for muons have an effective minimum p_T that varies as a function of |η|, because very low-p_T muons do not reach the muon chambers.

The Level-1 trigger architecture
During the first LHC long shutdown and extending into 2015, the new CMS Level-1 trigger was installed to run in parallel with the Run 1 (legacy) Level-1 trigger, and eventually replaced it. The upgraded Level-1 trigger is described in detail in ref. [3], with the exception of two new muon systems: the concentrator and preprocessor fanout (CPPF) and the TwinMux, which are described in section 6. Section 5 summarizes the overall design of the upgraded trigger, shown in figure 4.
In contrast to the Run 1 system that used the Versa Module Eurocard (VME) standard and many parallel electrical cables for the interconnects, the upgraded trigger uses Advanced Mezzanine Cards (AMC) based on MicroTCA technology [12] and multi-Gb/s serial optical links for data transfer between modules. The MicroTCA crate provides a high-bandwidth backplane, system monitoring capabilities, and redundant power modules. The number of distinct electronics board types is greatly reduced because many components are based on common hardware designs.
The calorimeter trigger consists of two layers: Layer-1 receives, calibrates, and sorts the local energy deposits ("trigger primitives") sent to the trigger by the ECAL and HCAL; Layer-2 uses these calibrated trigger primitives to reconstruct and calibrate physics objects such as electrons, tau leptons, jets, and energy sums. The calorimeter trigger follows a time-multiplexed trigger design [13], illustrated in figure 5. Each main processing node has access to a whole event with a granularity in ∆η×∆φ of 0.087×0.087 (where φ is the azimuthal angle, measured in radians) in most of the calorimeter acceptance; a slightly coarser granularity is used at high |η|. A demultiplexer (DeMux) board then reorders, reserializes, and formats the events for processing by the global trigger (µGT, pronounced micro-GT to emphasize the connection to the MicroTCA technology used in this upgrade). Because the volume of incoming data and the algorithm latency are fixed, the position of all data within the system is fully deterministic and no complex scheduling mechanism is required. The benefits of time multiplexing include the removal of regional boundaries for object reconstruction and full granularity when computing energy sums. The multiplicity of processing nodes provides the flexibility to add nodes as required by complex trigger algorithms. These algorithms are fully pipelined and start processing as soon as the minimum amount of data is received. The muon trigger system includes three muon track finders (MTF), which reconstruct muons in the barrel (BMTF), overlap (OMTF), and endcap (EMTF) regions of the detector, and the global muon trigger (µGMT, pronounced micro-GMT) for final muon selection. The µGT finally collects muons and calorimeter objects and executes every algorithm in the menu in parallel for the final trigger decision.
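The deterministic routing of a time-multiplexed design can be pictured as a simple round-robin assignment of bunch crossings to processing nodes; the node count below is illustrative and not taken from the text.

```python
# Illustrative round-robin routing in a time-multiplexed trigger: each bunch
# crossing (event) is sent, in full, to one of N processing nodes, so every
# node sees the complete event. The node count is invented for the example.
N_NODES = 9

def node_for_bx(bx: int) -> int:
    # Fully deterministic: no scheduling mechanism is needed.
    return bx % N_NODES

# Consecutive crossings land on consecutive nodes...
assert [node_for_bx(bx) for bx in range(4)] == [0, 1, 2, 3]
# ...and each node receives a new whole event once every N_NODES crossings,
# giving it N_NODES bunch-crossing periods to process one event.
assert node_for_bx(0) == node_for_bx(N_NODES)
```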
The following definitions apply to the requirements listed in the tables:
- Medium quality: muons with hits in at least 2 different muon stations.
- Non-colliding BX: the requirement selects beam-empty events.
- OS: opposite sign (of electric charge).
- E_T: scalar sum of p_T of calorimeter deposits.
- H_T: scalar sum of p_T of jets.
- Isolation and loose isolation: the isolation requires an upper limit on the transverse calorimeter energy surrounding the candidate. The limit depends on the pileup, the Level-1 candidate E_T, and |η|. Details are given in sections 7.2 and 7.3.
Table 3. List of the most used cross-object unprescaled Level-1 trigger algorithms (seeds) during Run 2 and their corresponding requirements (p_T, E_T, m_µµ, and m_jj in GeV), grouped into two-, three-, and five-object seeds.
In the upgraded trigger, the BMTF, µGMT, µGT, and Layer-2 use the same type of processor card. The OMTF and EMTF electronic boards similarly share a common design, whereas Layer-1, TwinMux, and CPPF each use a different design. All processor cards, however, use a Xilinx Virtex-7 Field Programmable Gate Array (FPGA). Thus many firmware and control software components, e.g., data readout and link monitoring, can be reused by several systems, reducing the workload for development and maintenance.

An advanced mezzanine card called the AMC13 [14] provides fast control signals from the trigger control and distribution system to the trigger AMCs over the MicroTCA backplane. If an event is selected, the trigger AMCs send their data over the backplane to the AMC13, which also connects to the central CMS data acquisition system via 10 Gb/s optical links. More details on the hardware can be found in ref. [3].

Figure 5. The time-multiplexed trigger architecture of the upgraded CMS calorimeter trigger.

The Level-1 muon trigger and its performance
The CMS muon detector is composed of three partially overlapping subdetectors (CSCs, DTs, and RPCs, as described in section 2), whose signals are combined together into "trigger primitives" (TPs) to reconstruct muons and measure their p T . Trigger primitives provide coordinates, timing, and quality information from detector hits. Figure 6 shows the geometrical arrangement of the three muon subdetectors in a quadrant of the CMS detector.
In the legacy trigger, data from each of the three subdetectors were used separately to build independent muon tracks, which were combined by a global muon trigger. The upgraded Level-1 trigger combines information from all available subdetectors to reconstruct tracks in three distinct pseudorapidity regions, improving the muon reconstruction efficiency and resolution while reducing the misidentification rate.
The BMTF takes inputs from DT and RPC chambers in the barrel; all three muon subsystems contribute to the OMTF tracks in the overlap between barrel and endcap; and the EMTF uses CSC and RPC information to reconstruct endcap muons. Detector symmetry allows each track finder to run the same algorithm in parallel for different regions in φ. The BMTF is segmented into twelve sectors of 30° each, and both the OMTF and EMTF are segmented into 12 sectors of 60°, six on each end of the experiment. A single board builds tracks in one sector, plus 20–30° of overlap to account for muon bending in φ.
Figure 6. An R-z slice of a quadrant of the CMS detector [15]. The origin of the axes represents the interaction point. The proton beams travel along the z-axis and cross at the interaction point. The three CMS muon subdetectors are shown: four stations of DTs in yellow, labelled MB; four stations of CSCs in green, labelled ME; and four stations of RPCs in blue, labelled RB or RE.
The track finders use muon detector TPs to build muon track candidates, assign a quality to each, and measure the charge and the p T of each candidate from the bending in the fringe field of the magnet yoke. Each track finder uses muon finding and p T assignment logic optimized for its region, and assigns the track quality corresponding to the estimated p T resolution.
Each track finder transmits up to 36 muons to the µGMT, which resolves duplicates from different boards, and sends the data for a maximum of eight muons of highest rank (a linear combination of p T and a quality value) to the µGT, where they are used in the final Level-1 trigger decision.

Barrel muon trigger primitives
The DT and RPC barrel systems consist of four cylindrical stations wrapped around the solenoid, each split into 12 wedges in φ and 5 wheels along the beam direction. In the upgraded Level-1 trigger, a new layer called the TwinMux merges DT trigger primitives and RPC hits from the same station (i.e., detector layer) into "superprimitives". Superprimitives combine the better spatial resolution of the DT and the more precise timing of the RPC. Each superprimitive is assigned a quality, which depends on the location of its inputs, η and φ coordinates, and an internal bending angle φ_b. The TwinMux then sends superprimitives to the BMTF. The TwinMux also transmits unmerged DT TPs and RPC hits to the OMTF. In both cases the TwinMux increases the bandwidth of the data links used to transmit TPs, thus reducing the number of data links. Merging DT and RPC hits also improves the TP efficiency and timing in each station, which results in improved BMTF performance. The TwinMux is described in detail in ref. [16].

Endcap RPC trigger primitives
The CPPF consists of eight MicroTCA boards with FPGA processors, designed to concentrate endcap RPC TPs for transmission onto higher-bandwidth optical links. The CPPF clusters RPC hits in adjacent strips into a single TP, and computes their θ and φ coordinates before transmitting up to two clusters per 10° chamber to the EMTF. The CPPF was commissioned in 2017. A detailed description is given in ref. [17].

Barrel muon track finder
The BMTF reconstructs muons in the barrel region (|η| < 0.83). The BMTF track finding and p T assignment algorithms are similar to their predecessors running on the DTTF [2,18]. Look-up tables (LUTs) use the bending angle and the quality of the superprimitives of an inner station to form an acceptance window for the outer station through an extrapolation unit. Each extrapolation unit receives superprimitives from one thirty-degree sector/wheel and its five neighbors, i.e. the two adjacent sectors in the same wheel and the corresponding three in the neighboring wheel. The track assembler unit receives the paired superprimitives for all stations and combines them. Tracks with more stations, especially inner stations where the magnetic field is stronger, are assigned higher quality.
The assignment unit uses LUTs to assign the p_T, φ, and η of a track. For the majority of tracks, the p_T value is assigned based on the difference of the φ coordinates of TPs in neighboring stations, ∆φ. However, ∆φ alone cannot distinguish high- from low-p_T tracks, because the magnetic field in the return yoke points opposite to the field in the inner solenoid region and inverts the track curvature. For this reason two LUTs encode the p_T value for either the high- or the low-p_T case, and the internal bending angle of the superprimitive, φ_b, is used to select the appropriate result. A LUT based purely on the bending angle φ_b augments the p_T assignment for tracks reconstructed from only two superprimitives, where at least one of the TPs is assigned good quality by the TwinMux. The p_T assigned by this LUT is compared with the one obtained using the TP ∆φ, and the smaller value is selected.
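The dual-LUT selection described above can be sketched as follows; the LUT contents, binning, and φ_b threshold are invented for illustration and are not the actual BMTF values.

```python
# Sketch of a BMTF-style p_T assignment: two LUTs map the station-to-station
# bending dphi to a p_T value, one tuned for the high-p_T branch and one for
# the low-p_T branch, and the superprimitive bending angle phi_b selects the
# branch. All numbers below are invented for illustration.

LUT_HIGH_PT = [140, 90, 60, 40, 30]  # GeV; small dphi -> straighter -> higher p_T
LUT_LOW_PT = [20, 12, 8, 5, 3]       # GeV; same dphi bins, low-p_T branch
PHIB_HIGH_PT_MAX = 0.05              # |phi_b| below this -> high-p_T branch

def assign_pt(dphi_bin: int, phi_b: float) -> int:
    if abs(phi_b) < PHIB_HIGH_PT_MAX:
        return LUT_HIGH_PT[dphi_bin]
    return LUT_LOW_PT[dphi_bin]

# Same dphi bin, different bending angle: the phi_b branch choice resolves
# the high/low-p_T ambiguity caused by the field inversion in the yoke.
print(assign_pt(1, phi_b=0.01), assign_pt(1, phi_b=0.20))  # 90 12
```

For tracks built from only two superprimitives, a φ_b-only LUT result would additionally be compared with the ∆φ-based one, keeping the smaller value, as described in the text.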

Overlap muon track finder
The OMTF receives data from three DT and five RPC stations in the barrel, plus four CSC and three RPC stations in the endcap, giving 18 total "layers" that are used to build tracks (since each DT station has two layers). Track reconstruction occurs independently in each sector in φ. The OMTF uses detector hits directly from the RPC system and trigger primitives from the DT and CSC systems. In the following section the word "hits" is used to indicate either. Each track is constructed starting from a single reference hit in one layer, so the first step is to select up to four reference hits, favoring hits from inner layers and those with good φ resolution. Up to two reference hits may come from the same layer, enabling efficient reconstruction of nearby muons.
The algorithm uses patterns generated from simulated events to associate hits in other layers with the reference hit. For each muon charge there are twenty-six patterns corresponding to different p_T ranges, from 2 to 140 GeV. Each pattern encapsulates information about the average muon track propagation between layers and the probability density function of the hit spread in φ in each layer, with respect to the reference hit. The patterns differ depending on the reference layers used. When multiple patterns match a given hit, a statistical estimator based on the φ distribution of the hits resolves the ambiguity, preferring patterns with a larger number of matched layers. The OMTF reconstruction algorithm can be regarded as a naive Bayes classifier.
Properties of the best matched patterns, together with the reference hit φ, are passed to the internal muon sorter, which removes possible duplicates from a single muon producing multiple reference hits. The three best muons per board are transmitted to the µGMT, giving a maximum of 36 muons. A more detailed description of the algorithm is found in ref. [19].
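The naive Bayes view of the pattern matching can be illustrated with a toy version: each pattern stores a per-layer probability density for the hit deviation in φ from the reference hit, and the best pattern maximizes the summed log-likelihood. The patterns, widths, and hit values below are invented.

```python
# Toy sketch of OMTF-style pattern matching as a naive Bayes classifier:
# each p_T pattern stores, per layer, a probability density for the hit's
# phi deviation from the reference hit; the pattern score is the sum of
# per-layer log-likelihoods (layers treated as independent). All numbers
# here are invented for illustration.
import math

def gauss_logpdf(x, mean, sigma):
    return -0.5 * ((x - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Two toy patterns: a low-p_T muon bends more between layers than a high-p_T one.
PATTERNS = {
    "low_pT": {"means": [0.00, 0.08, 0.15], "sigma": 0.02},
    "high_pT": {"means": [0.00, 0.02, 0.04], "sigma": 0.02},
}

def best_pattern(dphi_hits):
    scores = {
        name: sum(
            gauss_logpdf(d, m, p["sigma"]) for d, m in zip(dphi_hits, p["means"])
        )
        for name, p in PATTERNS.items()
    }
    return max(scores, key=scores.get)

# A gently bending track matches the high-p_T hypothesis:
print(best_pattern([0.00, 0.03, 0.05]))  # high_pT
```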

Endcap muon track finder
The EMTF builds muon tracks from CSC and RPC TPs in the endcap. Both detectors are composed of four stations separated in z and covering 360° in φ. The CSCs have complete four-station coverage in the pseudorapidity range 1.2 < |η| < 2.4, in two or three concentric rings of detectors per station, whereas the endcap RPCs cover approximately 1.2 < |η| < 1.7 in two rings of detectors per station. The CSCs deliver up to two local charged tracks per BX from each 10° or 20° chamber in each station and ring, with ≈1/16° precision in φ and ≈1/4° precision in θ. The RPCs send hits from chambers with similar geometry, which are clustered by the CPPF into TPs with ≈1/4° precision in φ and ≈1° precision in θ.
The EMTF builds tracks using at most one TP (CSC or RPC) per station. The algorithm first looks for CSC TPs correlated in φ in multiple stations consistent with the presence of a muon track, matching at least one of the five predefined patterns. The pattern recognition runs in parallel in four zones in θ. After the patterns are found, the CSC or RPC TP in each station closest to the pattern is taken for further processing. Resulting tracks are ranked according to their straightness and the number of stations with hits. Stations 1 and 2 are prioritized because the magnetic field is much stronger between stations 1 and 2 than beyond station 2. A muon track with TPs in these two stations therefore has a more precise p T assignment. The three hit patterns with highest quality from each sector are kept for the p T assignment, and the others are discarded.
The bending angles in φ and θ of the muon track are used to calculate the track p_T. However, this relationship is complicated by several factors. At low p_T, muons can experience significant multiple scattering and energy loss, and at high p_T they can initiate electromagnetic showers. In addition, the CMS magnetic field strength and direction vary with η outside the solenoid, so muons of similar momenta can behave differently in the more central region (|η| < 1.55) than in the more forward region (|η| > 2.1). These complicated dependencies make this an ideal case for machine learning. A boosted decision tree (BDT) regression technique is used to provide an estimate of the track p_T, taking these dependencies into account. The BDT input variables are compressed into 30 bits, and the training parameters are optimized using MC simulation of single-muon events. The BDT output values are pre-evaluated and stored in a LUT loaded in a ≈1 GB memory module of the EMTF for fast determination. Additional details about the design, training, and implementation of the BDT can be found in ref. [20].
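The pre-evaluation of a regression into a LUT can be sketched as follows. A toy function stands in for the trained BDT, and a 12-bit address is used instead of the real 30-bit one so that the table stays small; with 2³⁰ one-byte entries the full table size matches the quoted ≈1 GB.

```python
# Sketch of pre-evaluating a regression into a look-up table, as done for
# the EMTF p_T assignment: the compressed input address indexes a
# precomputed array. The toy_bdt function and the address layout are
# invented; the real EMTF address is 30 bits (2^30 entries ~ 1 GB at one
# byte per entry).
ADDRESS_BITS = 12  # small for the example; the EMTF uses 30

def toy_bdt(address: int) -> int:
    # Stand-in for the trained BDT evaluated on the decoded track variables;
    # returns a p_T estimate in GeV.
    dphi = address & 0x3F  # pretend the low 6 bits encode a bending angle
    return max(3, 255 // (dphi + 1))

# Fill the LUT once, offline:
LUT = bytes(min(255, toy_bdt(a)) for a in range(2**ADDRESS_BITS))
print(len(LUT))  # 4096 entries, one byte each

# Online, the p_T assignment reduces to a single memory read:
assert LUT[7] == toy_bdt(7)
```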

Global muon trigger
The µGMT receives up to 108 muon candidates (3 per sector) sent from the three muon track finders. The µGMT sorts the muons and identifies and removes duplicates, sending up to eight muons to the µGT. Such duplicate muons would significantly increase the trigger rate for multimuon trigger algorithms and must be removed while keeping a high efficiency for events with two genuine muons. In parallel to the duplicate removal and sorting stage, the µGMT also corrects the spatial coordinates of each muon by extrapolating the track from the muon stations back to the interaction region.
The µGMT uses the p T and the quality of input muons to define an initial ranking, separately sorting muons from the positive and negative η sides of the OMTF and EMTF, as well as from the BMTF. It keeps the four highest ranked muons coming from each endcap of the OMTF and EMTF, along with the highest ranked eight BMTF muons. The second sorting stage compares the ranks of muons coming from the first stage and selects the eight with the highest rank.
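The two-stage sort can be sketched as follows, with a single scalar rank standing in for the actual combination of p T and quality:

```python
# Simplified sketch of the two-stage µGMT sort; a scalar rank stands in
# for the real ranking built from pT and quality bits.
def top(muons, n):
    # Each muon is a (rank, label) pair; keep the n highest-ranked.
    return sorted(muons, key=lambda m: m[0], reverse=True)[:n]

def ugmt_sort(bmtf, omtf_pos, omtf_neg, emtf_pos, emtf_neg):
    # Stage 1: keep the best 8 BMTF muons and the best 4 from each
    # eta half of the OMTF and EMTF.
    stage1 = (top(bmtf, 8) + top(omtf_pos, 4) + top(omtf_neg, 4) +
              top(emtf_pos, 4) + top(emtf_neg, 4))
    # Stage 2: the eight highest-ranked muons overall go to the µGT.
    return top(stage1, 8)
```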
Because of the overlap between adjacent wedges or sectors of the track finders (TFs), a muon traversing the detector in these overlap regions can be found by the TF processors on both sides of the overlap. In addition to this overlap in φ, the regional TFs also overlap in η, where a muon can be found by both the BMTF and OMTF, or by the OMTF and EMTF. Two different methods are used for the identification of duplicates. The first method makes use of the "track address" of the muon, which encodes the TPs used to build the muon track, to find duplicates between BMTF wedges. The second method uses the muon track coordinates to find duplicates between adjacent sectors in the OMTF and the EMTF, and between different regional TFs. For the second method, simulated events are used to determine the optimal size and shape of the regions in which tracks should be marked as duplicates.
Because the TF systems measure the muon coordinates within the muon systems, the µGMT extrapolates all input muon track parameters back to the collision point. The extrapolation corrections are derived from MC simulation as a function of p T , φ, η, and charge of the muon, and are stored in a LUT. The corrections have a coarse granularity since they are limited to 4 bits: they have steps of 0.05 radians in ∆φ and 0.01 in ∆η and are applied to muons with p T < 64 GeV. These corrected coordinates are then propagated to the µGT to improve the performance of trigger algorithms relying on the invariant mass or difference in spatial coordinates between multiple muons.
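The coarse-grained correction can be illustrated with the following sketch, in which the correction values and the sign handling are hypothetical; only the 4-bit packing, the quoted step sizes, and the 64 GeV cutoff follow the text:

```python
import math

PHI_STEP, ETA_STEP = 0.05, 0.01  # LUT step sizes quoted in the text
MAX_CODE = (1 << 4) - 1          # corrections are limited to 4 bits

def quantize(correction, step):
    # Encode the correction magnitude on a 4-bit scale (saturating).
    return min(MAX_CODE, round(abs(correction) / step))

def extrapolate(phi, eta, pt, dphi_corr, deta_corr):
    # Toy application of the LUT-style correction; the charge dependence
    # of the real firmware is omitted here.
    if pt >= 64.0:  # corrections are applied only below 64 GeV
        return phi, eta
    phi += math.copysign(quantize(dphi_corr, PHI_STEP) * PHI_STEP, dphi_corr)
    eta += math.copysign(quantize(deta_corr, ETA_STEP) * ETA_STEP, deta_corr)
    return phi, eta
```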
The µGMT also transmits the track quality to the µGT as a selection option for specific trigger paths. The quality is also used to decide which candidate to cancel when duplicates are found. Muons passing the "tight" quality criteria have good p T resolution and are used in single-muon seeds. All BMTF tracks pass the tight criteria, thanks to the strong magnetic bending in the barrel region, whereas OMTF and EMTF tracks must have TPs in at least three layers, and in the EMTF one of those TPs must be in the innermost layer. The "medium" and "loose" criteria are used in the OMTF and EMTF to increase the trigger efficiency for events with multiple muon tracks by including tracks with fewer TPs, or without a TP in the first layer.

Performance
The data recorded since the start of Run 2 are used to study the performance of the upgraded muon trigger. The performance studies presented in this section use data collected during 2018; data collected during 2016 and 2017 give similar results. Figure 7 shows the correlation between the inverse of the muon p T assigned at Level-1, proportional to the track curvature, and the inverse of the offline reconstructed muon p T for the three η regions of interest. The correlation is linear but slightly off-diagonal, because Level-1 muon p T values are scaled up to provide 90% efficiency at any given trigger p T threshold. The barrel shows better resolution because the orientation of the magnetic field with respect to the muon track causes less bending in the forward regions. The figure uses a data set triggered by a single isolated muon, with two oppositely charged muons consistent with a Z boson decay.
The efficiency measurements use a tag-and-probe [21] technique with offline reconstructed muons from preselected Drell-Yan events. The tag muon is reconstructed with the CMS particle-flow algorithm [22], and is required to have p T > 26 GeV and to be isolated, such that nearby calorimeter energy deposits sum to less than 15% of the muon p T . The tag muon must match within a cone of ∆R = √[(∆η)² + (∆φ)²] < 0.1 to a muon reconstructed by the single isolated muon HLT algorithm with p T > 24 GeV. The HLT muon must be seeded by the single-muon Level-1 trigger with a p T threshold of 22 GeV.
The numerator of the efficiency measurement includes events where a Level-1 muon from the triggering bunch crossing matches a probe muon, reconstructed using the particle-flow information, within ∆R < 0.2. The denominator includes all events with a tag muon. The tag and the probe muons must be separated by ∆R > 0.4, which guarantees that the tag and the probe are two different muons. Figure 8 shows trigger efficiencies measured for a single-muon trigger with a p T threshold of 22 GeV as a function of the offline reconstructed muon p T . At the threshold value the efficiency reaches about 86% of the plateau, which is measured to be ≈93%. A more detailed description of the trigger performance at high muon p T , where radiative showering complicates the reconstruction, is given in ref. [23]. Figure 9 shows the efficiency as a function of the reconstructed p T of the probe muon, p offline T , for the three track finder regions (left), and as a function of η (right). The three track finders reach an efficiency plateau above 90% at the same p offline T value, with the barrel track finder exhibiting the sharpest turn-on curve. Figure 10 includes efficiency measurements for different quality thresholds versus muon p T and η. The detector geometry is responsible for the reduction of trigger efficiency in certain η regions. Figure 11 shows the efficiency in different |η| regions as a function of the number of pileup vertices and muon φ. In events with high pileup, extra tracks can confuse the endcap muon reconstruction, causing the trigger efficiency to drop by a few percent in the far forward region.
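The ∆R matching underlying the numerator can be sketched as follows; the helper names are illustrative, not CMS software:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the azimuthal difference into (-pi, pi] before combining.
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def probe_matched(probe, l1_muons, max_dr=0.2):
    # A probe enters the numerator if any Level-1 muon from the
    # triggering bunch crossing lies within dR < 0.2 of it.
    return any(delta_r(probe[0], probe[1], m[0], m[1]) < max_dr
               for m in l1_muons)

def efficiency(probes, l1_per_event, max_dr=0.2):
    # probes and l1_per_event: per-event (eta, phi) probe and Level-1 lists.
    passed = sum(probe_matched(p, l1, max_dr)
                 for p, l1 in zip(probes, l1_per_event))
    return passed / len(probes)
```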
In comparison to the legacy trigger system, the efficiency of the upgraded muon trigger is similar or higher, depending on the η region, as seen in figure 12. Figure 13 overlays the re-emulated Run 1 (legacy) single-muon algorithm rates and the Run 2 (upgrade) rates as a function of Level-1 muon p T (left) and η (right). The muon trigger rate was studied with an unbiased Run 2 data sample taken with a prescaled trigger that only required colliding bunches. For the single-muon trigger with a 22 GeV threshold, the rate is approximately a factor of 2 lower than for the legacy trigger system, as estimated from studies with simulated events. The rate reduction improves at higher trigger thresholds, giving flexibility for tuning in higher instantaneous luminosity conditions. The use of more sophisticated p T assignment algorithms, exploiting the multivariate analysis tools allowed by the more powerful trigger firmware and hardware, results in a significant rate reduction compared to the legacy system.

The Level-1 calorimeter trigger and its performance
The calorimeter trigger was partially upgraded before data taking in the spring of 2015, and was completed in March 2016.
It is organized in two layers: Layer-1 collects and calibrates the trigger primitives coming from the calorimeters, while Layer-2 receives the output from Layer-1 and reconstructs and calibrates physics objects such as electrons, photons, tau leptons, jets, and energy sums. The following sections describe the algorithms developed to reconstruct and identify electrons and photons, tau leptons, and hadron jets, and to assign accurate energies and positions to each.

Input calorimeter trigger primitive processing
Calorimeter trigger towers (TTs) group 5×5 crystals in the ECAL barrel (EB) along with the HCAL barrel (HB) tower directly behind them, with a ∆η×∆φ size of 0.087×0.087. In the endcaps (EE crystals, HE, and HF), the grouping logic is more complicated because of the layout of the crystals, which results in TTs with ∆η×∆φ sizes of up to 0.17×0.17. Look-up tables are implemented in Layer-1 to calibrate electromagnetic energy deposits in the ECAL, as well as hadronic energy deposits in both ECAL and HCAL towers. This calibration is performed in addition to calibrations already applied by the ECAL and HCAL electronics, and accounts for the changing calorimeter response over time, in particular from radiation damage. An unforeseen timing effect of the changing crystal response is discussed in appendix A. The Layer-1 calibrations compensate for various effects including, but not limited to, the average particle energy loss in the tracker material in front of the calorimeters. The calibration factors for ECAL (HCAL) are binned in η and E T , and are derived from single-photon (single-pion) simulations. Figure 14 shows the scale factors derived for both ECAL and HCAL trigger tower inputs, as a function of η, for various bins in E T ; the calibration factors increase with η.
The ECAL and HCAL TT information sent to Layer-2 contains the combined ECAL plus HCAL energy sum, the ECAL/HCAL energy ratio, and additional flags, such as the fine-grain veto bit described in section 7.2, and a minimum-bias collision bit based on the HF detector used for some special runs. The TT information, which constitutes the calorimeter trigger primitives, is streamed with a 9-fold time multiplexing, and sent via asynchronous 10 Gb/s optical links to the Layer-2 trigger.

The electron and photon trigger algorithm
Electrons (e) and photons (γ) are indistinguishable to the Level-1 trigger since tracking information is not available. The e/γ reconstruction algorithm proceeds by clustering total (ECAL plus HCAL) energy deposits around a "seed" trigger tower, defined as a local energy maximum above E T = 2 GeV. Clusters are built dynamically, i.e., including surrounding towers above 1 GeV without any predetermined cluster shape requirement, and are further trimmed to include only contiguous towers, to match the electron footprint in the calorimeter and optimize the trigger response. The trimming process results in various candidate shapes that can be categorized and used for identification purposes. As illustrated in figure 15, the maximum size of the clusters is limited to 8 TTs to minimize the impact of pileup energy deposits, while including most of the electron or photon energy. An extended region in the φ direction is used to obtain better coverage of the shower, since the electron energy deposit spreads along φ because of the magnetic field and bremsstrahlung.

Figure 15. The Level-1 e/γ clustering algorithm and isolation definition. A candidate is formed by clustering neighboring towers (orange and yellow) if they can be linked to the seed tower (red). Each square represents a trigger tower. A candidate is considered isolated if the E T in the isolation region (blue) is smaller than a given value. Details are given in the text.

Figure 16. The pseudorapidity position of Level-1 e/γ candidates with respect to the offline reconstructed electron position, separately for the barrel and endcap regions (left). The relative transverse energy of the Level-1 e/γ candidates with respect to the offline reconstructed electron transverse energy, also separately for the barrel and endcap regions (right). The fits use a symmetric Crystal Ball function with two-sided tails (left) and a combination of a Gaussian and an asymmetric Crystal Ball function with a one-sided tail (right).

The e/γ candidate position is the energy-weighted position of the cluster towers. Figure 16 shows the position and transverse energy compared with those for objects reconstructed offline. Better position resolution improves the computation of more sophisticated variables, such as invariant masses at the µGT level. The data used consist of events triggered by a single electron trigger and tag-and-probe selections, which makes the sample pure in Z → ee candidates, with the corresponding p T spectrum. The resolution of the offline position is driven by the tracker track uncertainty.
To reduce background rates, a shape veto is defined to reject the clusters least compatible with a genuine e/γ candidate, such as pileup-induced energy deposits. Additional identification criteria are also defined:
• The fine-grain veto bit. This veto is used in the barrel to quantify the compactness of the electromagnetic shower within the seed tower and discriminates against hadron-induced showers.
• The H/E veto. This veto requires a low ratio of HCAL to ECAL energy in the seed tower. Different thresholds are used in the barrel and the endcap regions.
These identification variables are optimized to reduce the rate of misidentified electrons while maintaining the maximum trigger efficiency for genuine electrons, and are removed for candidates with E T > 128 GeV.
Isolation requirements are added to the identification criteria to produce a collection of isolated Level-1 e/γ candidates. The isolation transverse energy E iso T corresponds to the E T deposited in the 6×9 TT region in η×φ around the seed tower, from which the e/γ E T is subtracted (illustrated in figure 15). To determine whether an e/γ candidate is isolated, a threshold stored in a LUT is applied to E iso T , depending on the e/γ E T , the η position, and a pileup estimator called n TT . The latter is obtained by counting the number of TTs with E T ≥ 0.5 GeV in the eight central η rings of the calorimeters (|η| ≤ 0.34). The isolation threshold is optimized to target a specific rate and efficiency for certain E T ranges. Two working points were derived using Z → ee collision events, with a zero bias trigger sample used to estimate the rate. A loose set of isolation requirements is used for candidates in trigger algorithms with intermediate E T thresholds (between 20 and 30 GeV), which are typically dielectron and cross-trigger algorithms. For single-electron algorithms, which apply E T thresholds above 30 GeV and target events with a Z or W boson, a tighter set of isolation requirements is implemented.
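The n TT estimator and the LUT-style isolation decision can be sketched as follows; the threshold parametrization is a hypothetical stand-in for the optimized LUT contents, and the η dependence is omitted for brevity:

```python
# Sketch of the nTT pileup estimator and an isolation decision.
def n_tt(towers, et_min=0.5, eta_max=0.34):
    # towers: (eta, E_T) pairs; count towers above 0.5 GeV in the
    # central |eta| <= 0.34 rings, as described in the text.
    return sum(1 for eta, et in towers if abs(eta) <= eta_max and et >= et_min)

def iso_threshold(et_egamma, ntt):
    # Hypothetical linear model: allow more isolation energy for harder
    # candidates and for busier (high-pileup) events. The real LUT also
    # depends on the eta position of the candidate.
    return 2.0 + 0.10 * et_egamma + 0.05 * ntt

def is_isolated(e_iso, et_egamma, ntt):
    return e_iso <= iso_threshold(et_egamma, ntt)
```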
The sum of the E T of the seed and clustered towers is the raw E T of the e/γ candidate. An additional energy calibration is performed in the Layer-2 trigger with the scale factors derived from Z → ee collision events. The raw energy is scaled with factors depending on the η position of the seed tower, the cluster shape, and the cluster E T .
The trigger efficiency of the upgraded e/γ algorithm is shown in figure 17, for both the nonisolated and the isolated Level-1 e/γ triggers. The studies are performed using a tag-and-probe technique based on Z → ee events recorded in 2018 by an HLT trigger path requiring a tight electron with p T > 32 GeV. Both the tag and the probe are offline electrons required to be within the ECAL fiducial volume (|η| < 1.4442, or 1.566 < |η| < 2.5) and to pass the loose electron identification criteria. In addition, the tag is required to have a p T above 30 GeV and to be geometrically matched within ∆R < 0.3 to the HLT electron triggering the event. All other reconstructed electrons in the event passing the loose identification criteria are probe electrons. They are geometrically matched to Level-1 e/γ candidates with ∆R < 0.3 and are used to evaluate the Level-1 e/γ trigger efficiency. The tag and probe electrons in the pair must not be within ∆R < 0.6 of each other. The invariant mass of the tag-and-probe electron system is required to be between 60 and 120 GeV. Figure 17 (right) shows the efficiency for an E T threshold of 34 GeV in black, and of 28 GeV with the tight set of isolation requirements in red (as discussed in the text); the efficiency curve for the logical OR of the two algorithms is shown in blue. The fits use a cumulative Crystal Ball function convolved with a polynomial or exponential function in the low-E T region. The trigger efficiency as a function of the number of offline reconstructed vertices is shown in figure 18. The left plot shows the Level-1 isolated e/γ trigger efficiency for a 32 GeV threshold as a function of the number of offline reconstructed vertices; the efficiency is also shown for the tight set of isolation requirements.
The right plot shows in black (red) the Level-1 trigger rate, measured using an unbiased data set with an average pileup of 49, for a single e/γ algorithm as a function of the E T threshold applied to the candidate without (with) the tight set of isolation requirements. The same plot shows in blue (yellow) the Level-1 trigger rate for a double e/γ algorithm as a function of the E T threshold applied to the subleading e/γ candidate without (with) the tight set of isolation requirements on the leading e/γ candidate (the E T threshold on the leading candidate is always 10 GeV higher). The rates of the seeds with and without isolation converge at high Level-1 e/γ E T because the isolation criteria are relaxed with increasing E T .

The hadronic tau lepton trigger algorithm
The hadronically decaying τ lepton trigger algorithm efficiently reconstructs τ lepton decays to one, two, or three charged or neutral pions (τ h ). These pions may produce more than one cluster, spatially separated in φ because of the magnetic field. Although the τ h energy deposit is typically more spread out than that of an electron, the dynamic clustering developed for the e/γ trigger is adapted to reconstruct these individual clusters, which can subsequently be merged.

Figure 19. The Level-1 τ clustering algorithm and isolation definition. The e/γ dynamic clustering is used to reconstruct single clusters around local maxima or seeds (yellow and green), which can then be merged into a single τ h candidate. Each square represents a trigger tower where the ECAL and HCAL energies are summed. A candidate is considered isolated if the E T in the isolation region (white) is smaller than a chosen value.

Figure 19 illustrates the τ lepton reconstruction algorithm, which merges two neighboring clusters under certain proximity conditions. Hadronically decaying τ leptons typically produce low-multiplicity jets with less surrounding hadronic activity than QCD-induced jets. The candidate position is computed as an energy-weighted average centered on the seed tower of the main cluster, giving four times better resolution than the Run 1 τ lepton trigger algorithm. An isolation threshold, which depends on the E T and η of the τ lepton and on the n TT variable (as discussed in section 7.2), is applied to discriminate genuine τ leptons from QCD-induced jets. The isolation requirement is loosened at high n TT to ensure a constant τ lepton identification efficiency as a function of pileup. A relaxation of the isolation with E T is also implemented to achieve the maximum efficiency at high E T .
The isolation thresholds are stored in a LUT that can be optimized to target a specific rate and efficiency for a given p T range, e.g., for a τ lepton pair from a Higgs boson decay. With the intense LHC running conditions during Run 2, the working point for isolation is adjusted to provide optimum efficiency even at the peak instantaneous luminosity of 2.1 × 10 34 cm −2 s −1 . The isolation optimization is performed on simulated Z → ττ samples to evaluate the signal efficiency and on unbiased data to estimate the rate.
The τ lepton E T is calibrated using corrections that depend on the raw E T and η of the candidate, the presence of a merged cluster, and an estimate of the H/E fraction. The upgraded Level-1 τ lepton trigger energy resolution for barrel and endcap separately is shown in figure 20 (left).
By using a smaller number of TTs to reconstruct the energy deposit footprint of the τ lepton more precisely, the upgraded algorithm is more resilient against pileup and allows more precisely adjustable thresholds for physics. Figure 20 (right) shows the energy resolution of the upgraded τ trigger algorithm as a function of p T .
The performance of the Level-1 τ algorithm is measured in Run 2 data for τ leptons from Z → τ µ τ h decays using a tag-and-probe technique, where τ µ denotes a τ lepton decaying to a muon and neutrinos. The measurement is performed in events that satisfy the single-muon HLT path with a 27 GeV threshold on the muon p T . The events contain a well-identified and isolated µ-τ h pair satisfying a transverse mass m T (E miss T , µ) < 30 GeV and a visible mass 40 < m vis (τ h , µ) < 80 GeV, where the computation of m vis (τ h , µ) only includes the visible decay products of the τ h . The tag muon is required to match the HLT muon within ∆R < 0.5. The probe hadronically decaying τ leptons are reconstructed using the standard hadrons-plus-strips algorithm [24], selected using the "medium" isolation criteria [24], and required to satisfy p T > 20 GeV and |η| < 2.1; discriminators are also applied to reduce the contamination from muons and electrons. The details of the offline τ lepton reconstruction are described in ref. [24]. The probes are matched to Level-1 hadronic τ candidates within ∆R < 0.5 and used for efficiency measurements. The trigger efficiency, plotted as a function of the offline reconstructed τ lepton p T , is shown in figure 21 for nonisolated and isolated Level-1 τ candidates. The relaxation of the isolation criteria with E T ensures that the efficiency reaches a plateau value of 100% at high E T . The turn-on curves are obtained by geometrically matching the τ candidates reconstructed offline that pass all the identification and isolation requirements of the H → ττ analysis with their Level-1 counterparts. The stability of the efficiency with respect to pileup is illustrated in figure 22 (left). Figure 22 (right) shows the double-τ rate as a function of the E T threshold applied to both of the Level-1 τ candidates. The rate is measured in an unbiased data sample.
For typical thresholds of ≈30 GeV, a significant rate reduction is achieved by using the isolation requirement.

The jet and energy sum trigger algorithms
The Level-1 jet reconstruction algorithm is based on a square-jet approach similar to that used in Run 1, but uses a 9×9 TT sliding window centered on a local maximum, the jet seed, with E T > 4 GeV. In the barrel, the window size matches the anti-k T [25] distance parameter of 0.4 used in the offline jet reconstruction. A jet candidate must have a seed energy greater than that of the TTs in the triangle above the diagonal of the 9×9 window, and greater than or equal to that of the TTs in the triangle below the same diagonal. This veto condition, antisymmetric along the diagonal of the window, avoids double counting and prevents TTs with the same energy from vetoing one another when considered as jet seeds. The jet candidate energy is the sum of all TT energies in the 9×9 window. In addition to reconstructed jets, the total scalar sum of transverse energy over all TTs, E T , and the magnitude of the vector sum of transverse energy over the same TTs, E miss T , are computed at trigger tower granularity. The total scalar transverse energy of all jets, H T , and the corresponding magnitude of the vector sum, H miss T , are computed using Level-1 jets.
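The sliding-window seed condition can be sketched as follows on a toy TT grid; the boundary handling and φ wrapping of the real firmware are omitted:

```python
R = 4  # half-width of the 9x9 sliding window

def is_jet_seed(grid, i, j, seed_min=4.0):
    # Toy seed test on a rectangular TT grid; assumes (i, j) lies at
    # least R towers away from the grid edge (no phi wrapping).
    e_seed = grid[i][j]
    if e_seed < seed_min:
        return False
    for di in range(-R, R + 1):
        for dj in range(-R, R + 1):
            if di == 0 and dj == 0:
                continue
            # Antisymmetric split: for every offset, exactly one of
            # (di, dj) and (-di, -dj) falls on the strict side, so two
            # equal-energy TTs can never veto each other.
            strict = di > dj or (di == dj and di > 0)
            e = grid[i + di][j + dj]
            if strict:
                if not e_seed > e:   # strictly greater on this side
                    return False
            elif not e_seed >= e:    # greater-or-equal on the other side
                return False
    return True

def jet_et(grid, i, j):
    # Jet candidate energy: sum of all TT energies in the 9x9 window.
    return sum(grid[i + di][j + dj]
               for di in range(-R, R + 1) for dj in range(-R, R + 1))
```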
Figure 22. The integrated Level-1 selection efficiency for the isolated τ trigger with E T ≥ 30 GeV, matched to an offline reconstructed and identified τ lepton with p T > 50 GeV, as a function of the number of offline reconstructed vertices (left). The Level-1 double-τ trigger rate, as a function of the E T threshold, for τ candidates with and without an isolation requirement applied (right). The rate is measured requiring two τ candidates with E T larger than the bin value, in an unbiased data set with an average pileup of 55.

The estimated E T from pileup, which is subtracted from each jet, is computed locally on a jet-by-jet basis in each bunch crossing to respond dynamically to fluctuating pileup conditions. The chosen pileup subtraction algorithm provides a significant rate reduction while maintaining efficiency. Figure 23 shows the regions used to estimate the local pileup energy to be subtracted from the jet energy. The pileup is estimated using four 3×9 outer regions, one on each side of the 9×9 jet square. The pileup E T is calculated as the energy sum of the three lowest-energy regions, so the E T from an adjacent jet in the remaining outer region does not bias the subtraction. Since this area for subtraction (3 of the 4 outer regions) equals the jet area, the implementation is simple. To ensure a consistent jet energy response, Level-1 jets are calibrated in bins of jet p T and η, since any loss or mismeasurement depends on the energy of the jet and the material it traverses. A dedicated LUT, derived from a QCD multijet simulation, returns a p T scale factor that is applied to each jet. The LUT is derived by matching Level-1 jets to generator-level jets within ∆R < 0.25, then fitting, in bins of jet η, correction curves of the inverse response (E L1 T /E gen T )⁻¹ as a function of E L1 T .
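The local pileup estimate can be sketched as:

```python
# Sketch of the local pileup subtraction: of the four 3x9 regions
# flanking the 9x9 jet window, the three lowest-energy sums form the
# pileup estimate (the highest may contain an adjacent jet).
def pileup_et(side_sums):
    lowest_three = sorted(side_sums)[:3]
    return sum(lowest_three)

def corrected_jet_et(raw_et, side_sums):
    # Three 3x9 regions cover the same area as the 9x9 window, so the
    # estimate can be subtracted without any area rescaling.
    return max(0.0, raw_et - pileup_et(side_sums))
```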
Figure 24 shows the performance of the Level-1 jet triggers in the combined barrel and endcap region and in the forward region, measured using an independent data sample collected with a single-muon trigger. The efficiencies show a sharp turn-on and high efficiency for a number of thresholds, representative of those used in Run 2 for various single-jet and multijet seeds. Figure 25 shows the efficiency curves for the Level-1 H T and E miss T triggers. The E miss T trigger efficiency is measured using events triggered by and reconstructed with a single muon, and is plotted as a function of offline E miss T , which is the magnitude of the negative vector sum of the p T of all calorimeter energy deposits, with |η| ≤ 5.0.
Toward the end of the 2016 data taking, an increase in the instantaneous luminosity revealed a significantly nonlinear dependence of the E miss T rates on event pileup. For the 2017 and 2018 data taking, pileup mitigation was implemented and applied on an event-by-event basis to the E miss T algorithm. The event pileup is estimated with the variable n TT (described in section 7.2) and is used, along with the TT η, to retrieve from a LUT a pileup- and η-dependent E T threshold below which TTs do not enter the calculation of E miss T . The LUT was derived using functions encoding the pileup estimate, the TT η, and the TT width in η, since the pileup energy per TT increases with |η| and the TT size. The functional form and corresponding constant factors were optimized to give the best trigger efficiency, measured in single-muon triggered data, for a fixed rate calculated from unbiased data. An alternative LUT, derived by calculating the average TT E T for each value of η from unbiased data, gave a performance similar to that of the function-based LUT.
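The pileup-dependent TT threshold can be sketched as follows, with a hypothetical linear threshold model standing in for the optimized LUT:

```python
import math

def tt_threshold(ntt, abs_eta):
    # Hypothetical linear form: a higher threshold for busier events and
    # for wider, more forward towers (the real LUT is optimized in data).
    return 0.5 + 0.01 * ntt + 0.4 * abs_eta

def met(towers, ntt):
    # towers: (E_T, eta, phi); only towers above the pileup- and
    # eta-dependent threshold enter the vector sum.
    px = py = 0.0
    for et, eta, phi in towers:
        if et > tt_threshold(ntt, abs(eta)):
            px += et * math.cos(phi)
            py += et * math.sin(phi)
    return math.hypot(px, py)
```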

The improvement of the E miss T trigger efficiency with the pileup mitigation algorithm is shown in figure 26, for events from 2018 single-muon triggered data with pileup between 50 and 60. The rate of the Level-1 E miss T trigger with a threshold of 80 (120) GeV with pileup mitigation enabled is the same as the rate for a threshold of 118 (155) GeV with pileup mitigation switched off. Also shown in figure 26 is the pileup dependence for fixed thresholds of the Level-1 E miss T algorithm, with and without pileup mitigation. The rate is calculated from unbiased data with 2855 filled bunches for Level-1 thresholds of 80 and 120 GeV, where the pileup shown is the average pileup per luminosity section. Applying pileup mitigation, by excluding low-energy TTs in events with significant pileup and reducing the contribution from the wider TTs at large |η|, provided a significant rate reduction while maintaining trigger efficiency. This allowed the Level-1 E miss T threshold to be reduced, increasing sensitivity to a range of important physics channels.

Adjustments for heavy ion collisions
In heavy ion (HI) lead-lead collisions, a large variation in particle multiplicity is observed: while peripheral collisions can produce only a few particles per interaction, central events can produce multiplicities equivalent to pp collisions with a pileup of 200-300. Although most of the algorithms developed for pp collisions were reused, the wide range of multiplicity required optimization of some of the Level-1 algorithms, and a few were developed specifically for HI collisions.
To select low-p T hadronic collisions efficiently, a minimum bias trigger was developed based on a coincidence of energy deposits in the positive and negative η sides of the HF calorimeter. Using the same principle, an ultra-peripheral collision (UPC) trigger was designed to be activated only in a specific low-energy region. A high multiplicity UPC algorithm was also developed, based on the imbalance between the positive and negative η sides of the sum of trigger tower E T in the barrel calorimeter.
In addition, the parameters of the e/γ algorithm were adapted by removing the H/E constraint and adjusting the fine grain bit threshold. For optimal performance in the HI environment, the jet pileup subtraction algorithm used for proton collisions was replaced with an alternative, based on the average energy in φ-rings of the calorimeter.

The global trigger
The µGT combines information from the µGMT and the calorimeter Layer-2, and forms a trigger decision based on a menu of sophisticated algorithms, as described in section 4. The µGT is made compact and reliable by merging the functionality formerly distributed across multiple distinct boards into a single processor board type. The µGT distributes its processing across up to six of these common boards, working independently of each other. The outputs of the processing boards are merged before being sent to the HLT.
The µGT began operation with one processing board in 2016 and was extended to its final form of six processing boards by the beginning of 2017. The use of multiple processing boards with larger FPGAs permitted the computation of more high-level quantities, such as invariant or transverse masses, by using LUTs and digital signal processors. In this way, it is possible to migrate increasingly higher-level quantities from the HLT into the Level-1 trigger.
Occasionally, the LHC running parameters change on short notice, making it operationally challenging to reoptimize the Level-1 trigger menu. The µGT calculates preview rates for each prescale column, so that the shift crew can avoid prematurely enabling prescale columns that would raise the Level-1 rate above the limit.
A unique classification of certain physics objects input to the µGT can be difficult. For example, a hadronic jet could be reconstructed as both a τ lepton and a jet by the Layer-2 trigger, which poses a problem for algorithms looking for both jets and τ leptons. The µGT implements a dedicated treatment to resolve such ambiguities for all possible combinations of Level-1 objects, such as τ leptons and jets. For example, in an event with two jets, each having E T > 35 GeV, and one τ lepton with E T > 45 GeV, both jets must be separated by ∆R > 0.2 from the τ candidate, which ensures that such an event contains at least three nonoverlapping objects.

Dedicated analysis triggers
The large processing power available in the µGT permits the implementation of sophisticated analysis-targeted trigger algorithms. In this section, three types of such algorithms are discussed. The first type selects vector boson fusion (VBF) events using the invariant mass of jet pairs. The second type targets the production of low-mass dimuon resonances (e.g., Υ decays), and the third tags b jet candidates using jet-muon coincidences.

Dedicated vector boson fusion trigger.
Higgs boson production via VBF occurs through the interaction of two W or Z bosons. The incoming quarks lose only a small fraction of their energy in the interaction. After hadronizing, the outgoing quarks typically form jets in the forward direction, with a large invariant mass and a large separation in η. The VBF algorithm looks for at least two jets with E_T > 115 and E_T > 35 GeV, and at least one pair of jets with E_T > 35 GeV each and an invariant mass greater than 620 GeV. In the µGT, half of the squared mass is computed as

m_j1j2^2 / 2 = E_T^j1 E_T^j2 [cosh(∆η_j1j2) − cos(∆φ_j1j2)],

where cosh(∆η_j1j2) and cos(∆φ_j1j2) are obtained through dedicated LUTs using the η and φ of the jets as inputs. The algorithm can select 2- or 3-jet topologies, depending on whether the jet with E_T > 115 GeV enters a pair with m_j1j2 > 620 GeV. The performance of the Level-1 VBF trigger algorithm was measured in 2017 data, using an unbiased sample collected with a single-muon trigger. Figure 27 shows that the efficiency, as a function of the offline leading jet p_T and of the maximum dijet invariant mass, reaches a high plateau for VBF-like events, making the algorithm suitable as a lower-rate, high-efficiency trigger for VBF-like topologies. The Level-1 VBF trigger algorithms were used to seed HLT paths in 2017 and 2018, increasing the signal acceptance, especially for invisible Higgs boson decays and H → ττ [26].
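The half-squared-mass computation can be sketched as below. Computing m²/2 instead of m avoids a square root in firmware; a mass threshold m_jj > 620 GeV then becomes a comparison against 620²/2. The LUT emulation here (a simple input quantization with an assumed step) is only illustrative; the real µGT LUT binning and precision differ.

```python
import math

def lut(f, x, step=0.01):
    """Crude LUT emulation: quantize the input before evaluating f.
    The 0.01 step is an illustrative assumption, not the µGT granularity."""
    return f(round(x / step) * step)

def half_m_squared(et1, eta1, phi1, et2, eta2, phi2):
    """Half of the squared invariant mass of two massless objects:
    m²/2 = E_T1 E_T2 (cosh ∆η − cos ∆φ)."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return et1 * et2 * (lut(math.cosh, eta1 - eta2) - lut(math.cos, dphi))

def passes_mass_cut(et1, eta1, phi1, et2, eta2, phi2, m_min=620.0):
    """Compare against m_min²/2, so no square root is needed."""
    return half_m_squared(et1, eta1, phi1, et2, eta2, phi2) > m_min**2 / 2
```

Two back-to-back 100 GeV jets separated by ∆η = 4 give m²/2 ≈ 10⁴ × (cosh 4 + 1) ≈ 2.8 × 10⁵ GeV², well above the 620²/2 ≈ 1.9 × 10⁵ GeV² threshold.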
Low-mass dimuon triggers. The p_T thresholds of the usual dimuon triggers are not well adapted to recording dimuon resonances with masses below 20 GeV. These thresholds are typically 15 GeV on the leading muon and 5 GeV on the subleading muon, so they select only very boosted low-mass dimuon resonances. To collect inclusive low-mass dimuon pairs at low enough rates, the µGT can compute the dimuon invariant mass m_µµ, using the same technique described above for the VBF trigger. Seeds requiring 3 < m_µµ < 9 GeV and 5 < m_µµ < 17 GeV are included in the menu, as shown in table 2. Figure 28 shows the Level-1 and the offline m_µµ spectra in Run 2 data collected with multimuon triggers. The 9.46 GeV Υ meson peak can be isolated quite distinctly after the muon coordinates are extrapolated to the nominal vertex, as described in section 6.6. A recent example of a successful low-mass trigger is the observation, with a significance of 5.6 standard deviations, of B0s → µ+µ− with a branching fraction of (2.9 ± 0.7 ± 0.2) × 10^−9, together with a limit on the B0 → µ+µ− branching fraction of 3.6 × 10^−10 at 95% confidence level [27].
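A mass-window seed of this kind can be sketched as follows, again in the massless approximation. The opposite-charge requirement and the 3–9 GeV window match the text; the muon representation is a hypothetical stand-in for the Level-1 muon word.

```python
import math

def m_mumu(pt1, eta1, phi1, pt2, eta2, phi2):
    """Dimuon invariant mass, massless approximation:
    m = sqrt(2 pT1 pT2 (cosh ∆η − cos ∆φ))."""
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.sqrt(2 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(dphi)))

def in_mass_window(mu1, mu2, lo=3.0, hi=9.0):
    """Illustrative seed: oppositely charged muons with lo < m_µµ < hi (GeV)."""
    if mu1["q"] * mu2["q"] >= 0:
        return False
    return lo < m_mumu(mu1["pt"], mu1["eta"], mu1["phi"],
                       mu2["pt"], mu2["eta"], mu2["phi"]) < hi
```

Cutting on the pair mass rather than on high single-muon p_T thresholds is what allows the muon momentum requirements to be lowered while keeping the rate acceptable.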
b jet tagging using muons. A significant fraction of b hadron decays produce muons, which are often close in direction to the rest of the b hadron decay products. The Level-1 trigger includes a simple b-tagging algorithm based on the proximity of a muon to a jet. For example, the µGT implements seeds looking for events with one muon with p_T > 3 GeV and two jets with E_T > 16 GeV, where the muon is within ∆R < 0.4 of one of the jets. This new feature improves the efficiency and reduces the rate of the previously available b jet tagging seeds, which were limited by the use of uncorrelated ∆η and ∆φ information between jets and muons.

Data certification and validation
The Level-1 trigger performance is monitored online by physicists working in shifts, trained to recognize and solve trigger problems, who provide nonstop operational support during data taking. Trigger rates are continuously displayed for each algorithm, as well as occupancy plots and energy distributions for each physics object. Unexpected discrepancies with respect to the reference distributions are investigated promptly by Level-1 object experts, who determine the appropriate course of action. The Level-1 trigger system uses a two-step process to certify the collected data. "Express certification" is typically performed within 24 hours, and identifies any anomalous behavior of the trigger that may have passed unnoticed during data taking. In the "final certification", high-quality data are selected for physics analyses. The certification is performed for both collision and cosmic ray data taking.

Figure 28. The offline and Level-1 m_µµ spectra of oppositely charged muons, with and without extrapolation of the Level-1 track parameters to the nominal vertex, using a data set of low-mass dimuons. The highest-mass resonance corresponds to the Υ mesons, and is clearly identifiable both offline and at Level-1 after extrapolation. The Level-1 m_µµ spectrum is shifted higher compared with the offline spectrum because of p_T offsets designed to make the Level-1 muon trigger 90% efficient at any given p_T threshold.
During express certification, the time evolution of the total output rate of the Level-1 trigger is examined, taking into account information about the beam conditions, prescale values applied, status of each subdetector, and dead time (the recording time lost because the readout system is not ready to accept new events). Individual rates of different trigger seeds targeting physics objects are compared with reference rates as a function of pileup.
For each run, data quality monitoring (DQM) plots are produced, including occupancy of muon and calorimeter trigger systems, physics object variables (such as muon η and φ), and the timing of trigger seeds. The data are also compared with an emulation of the Level-1 trigger reconstruction. The DQM system performs statistical tests to identify distributions that differ from expectations. Any abnormal rates or DQM distributions may indicate incorrect functioning of some part of the Level-1 trigger system, which will be studied, corrected (when possible), and taken into account in the final certification.
The final Level-1 trigger certification is based on the comparison of the efficiency and resolution measured for each type of Level-1 object to the corresponding offline quantities, combined with the information from express certification. The efficiencies are calculated for different types of trigger seeds using a tag-and-probe method, and the resolutions are determined by comparing the trigger-level kinematic variables with their offline reconstructed counterparts, similarly to the performance studies presented in this paper. If the efficiencies and resolutions show no significant deviation from the expected performance, and the results of the express certification indicate that the trigger operated successfully, the data are certified as valid for physics analyses from the point of view of the Level-1 trigger.
If a certain run does not pass the certification criteria, the source of the performance loss is identified and analyzed. In general, trigger performance losses are caused either by a malfunctioning Level-1 trigger subsystem itself, or by missing or corrupted input from other detector subsystems. In case of a severe performance loss, the data must be discarded independently of the origin of the problem. To minimize the data loss, the certification is performed per luminosity section.
In 2018, 1.36% of the collision data collected by CMS was certified as "bad" at Level-1, but only 0.016% was invalidated solely because of Level-1 trigger issues; the remaining cases also involved a significant malfunction of another detector component.

Summary and conclusions
The CMS Level-1 trigger system was upgraded for Run 2 of the LHC. The system improved in performance and flexibility using high-bandwidth serial I/O links for data transfer and large, modern field-programmable gate arrays for reconfigurable algorithms. Maintainability improved through increased standardization, with the use of the MicroTCA telecommunications standard and common hardware designs for its components.
The new trigger hardware provides improved e/γ isolation performance, substantially more efficient τ lepton identification, improved muon transverse momentum resolution, and the ability to reconstruct jets with finer calorimeter granularity. New features, such as pileup subtraction and invariant mass calculations, expand the trigger design possibilities. These improvements help to control trigger rates and keep thresholds at lower levels than would be required with the previous system despite the significantly increased LHC energy, luminosity, and pileup in Run 2. The adoption of more powerful trigger processors led to the deployment of more advanced trigger algorithms, targeting specific analyses, resulting in significant improvements in physics capability compared to Run 1.
The upgraded Level-1 trigger system operated during Run 2 with high efficiency for all physics objects, and adapted to the rapidly changing LHC running conditions. As a result, the trigger efficiency was stable and independent of the evolving LHC parameters. Special LHC running conditions and heavy-ion data taking were accommodated effectively as well, exploiting the full capability and flexibility of the trigger system.
The upgraded system improved the energy and momentum resolution, and the identification efficiency and background rejection of the Level-1 physics objects. This significantly lowered the rate at a given threshold compared with the Run 1 system, thereby allowing similar trigger requirements to fit within the unchanged Level-1 rate limit.
An analysis of Run 2 data shows that the trigger rate reduction and efficiency gain benefited the physics program of the CMS Collaboration under conditions of increased LHC energy, luminosity, and pileup. One example is the H → ττ analysis [28], which shows a significant improvement in trigger efficiency; analyses of other Higgs boson decay channels maintained a similar trigger efficiency despite the harsher beam conditions. Moreover, all analyses looking for large missing transverse energy (E_T^miss), including searches for dark matter, supersymmetry [29], and invisible Higgs boson decays [26], were only possible in Run 2 because of the improved resolution of the Level-1 E_T^miss and the pileup mitigation algorithm. Searches for low-mass dimuon resonances exploited the invariant mass requirement to reduce the rate and lower the muon momentum requirement [27].

Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies:

A Level-1 trigger prefiring
Since the beginning of Run 2, a slowly developing shift in the shape of the ECAL pulses was observed. This effect, which manifests itself as an increasing offset in the timing calibration of the pulses, is radiation-induced and is related to the transparency loss of the ECAL crystals. Because of this, the endcap crystals at the highest pseudorapidity are most affected. This timing calibration offset is compensated offline via regular pulse shape and timing calibration measurements, but was not corrected online in the formation of the ECAL TPs. With time, the accumulated offset brought the endcap pulses to the limit of the region where the trigger bunch-crossing assignment would be affected. Once this was realized, in early 2018, the endcap timing delays in the ECAL front-end electronics were corrected, and the pulse synchronization was optimized. However, in 2016–2017, a gradually increasing fraction of ECAL TPs at |η| > 2.5 wrongly associated an energy deposit with the previous bunch crossing (BX −1). Such a misassignment causes several effects on the data. First, it may lead the Level-1 trigger system to "prefire", i.e., to accept the earlier collision in BX −1, whereas the collision in BX 0 is the one of interest. Second, when the misassigned TP energy is not large enough to pass the trigger condition, it induces a bias in the energy measurement of calorimeter deposits in the trigger chain and offline.
Prefiring happens, e.g., when an ECAL TP whose E_T exceeds the threshold of the single-electron trigger is assigned to BX −1, or when the misassignment of an ECAL TP leads to a large E_T^miss reconstructed at Level-1 in BX −1. Prefiring of Level-1 triggers becomes a problem through its combined effect with the CMS trigger rules, the conditions enforced to prevent buffer overflows in special cases. Trigger rules are enforced immediately after the final decision of the global trigger (µGT). The most commonly enforced trigger rules prevent the issuance of more than one Level-1 trigger acceptance decision in three consecutive bunch crossings, or more than two Level-1 trigger acceptances in 25 consecutive BXs. Thus, when a trigger accepts the event in BX −1, the interesting event in BX 0 will not be accepted. The event read out in BX −1 will likely be rejected by the HLT, since it is unlikely to contain any interesting reconstructed physics objects. The main consequence of prefiring is therefore an inefficiency in recording potentially interesting events.
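The two trigger rules quoted above can be expressed as a simple check, applied before issuing a new Level-1 accept. This sketch is an illustration of the stated rules only, not the actual trigger-throttling firmware.

```python
def violates_trigger_rules(accept_bxs, new_bx):
    """Return True if issuing an L1 accept at bunch crossing new_bx would
    violate either rule from the text, given the BXs of earlier accepts:
      - no more than 1 accept in any 3 consecutive BXs
      - no more than 2 accepts in any 25 consecutive BXs
    """
    within_3 = sum(1 for bx in accept_bxs if 0 < new_bx - bx < 3)
    within_25 = sum(1 for bx in accept_bxs if 0 < new_bx - bx < 25)
    return within_3 >= 1 or within_25 >= 2
```

With an accept in BX −1 (from a prefire), the first rule vetoes BX 0 and BX +1, which is exactly how a prefired event causes the loss of the interesting collision in BX 0.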
The measurement of the prefiring rate requires the use of a special set of events called "unprefirable" events. An event in BX 0 is unprefirable when the event in BX −3 was accepted by the Level-1 trigger, since the trigger rules then veto accepts in BX −2 and BX −1. For every triggered event, all Level-1 objects and µGT decision bits are stored in a window of ±2 BX. Therefore, from a set of selected unprefirable events, the prefiring probability can be computed for a specific analysis selection. The rate of unprefirable events is very small compared with the total number of events in any given primary data set, about 0.1%. Ad hoc corrections are applied at the analysis level to correct for this effect. One of the most affected analyses is the search for invisible decays of a Higgs boson produced via VBF, with energetic forward jets. Prefiring probabilities measured in an unbiased data sample result in a correction of about 1% at m_jj of 200 GeV and up to 20% for m_jj larger than 3.5 TeV [26].
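The logic of this measurement can be sketched as follows: select the unprefirable subset, then ask how often the stored ±2 BX decision window shows the seed of interest firing in BX −1. The field names are illustrative, not CMS data-format names.

```python
def prefiring_probability(events):
    """Estimate the prefiring probability from unprefirable events.

    Each event is a dict with two illustrative boolean fields:
      'l1a_bx_m3':  True if the L1 trigger accepted BX -3, which makes
                    BX 0 unprefirable (the trigger rules veto accepts
                    in BX -2 and BX -1 after that accept);
      'fired_bx_m1': True if the stored ±2 BX window shows the seed of
                     interest firing in BX -1 for this event.
    """
    unprefirable = [e for e in events if e["l1a_bx_m3"]]
    if not unprefirable:
        return 0.0
    prefired = sum(e["fired_bx_m1"] for e in unprefirable)
    return prefired / len(unprefirable)
```

Because an unprefirable event could not actually have been stolen by a BX −1 accept, the fraction of such events in which the seed fired early is an unbiased estimate of the probability that a normal event would have prefired.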
A secondary effect of the TP time shift is a potential bias in the energy measurement of the calorimeter deposits in the trigger chain. If the energy of an early TP is large enough to create a Level-1 object that prefires a Level-1 trigger path, the event in BX 0 is lost. In contrast, if BX −1 is not accepted, a residual effect on BX 0 is still present, because the information about the TPs associated with BX −1 is lost. This residual effect biases the energy of several Level-1 objects and degrades the trigger efficiency turn-on curves. Standard trigger efficiency measurements and scale factors generally applied in physics analyses account for this effect.
A second bias arises because of the impact on the ECAL selective readout logic. The TP inputs are used by the ECAL selective readout units to decide whether a certain region of the detector needs to be read out or not (zero-suppressed). Crystals associated with an early TP will be read out by the ECAL data acquisition system in zero-suppression mode, injecting a bias into the HLT/offline energy measurement. For high-p_T jets this effect is expected to be small because the zero-suppression thresholds are low. This energy bias is mostly recovered by the residual jet energy corrections applied at the analysis level.