The ALICE Transition Radiation Detector: construction, operation, and performance

The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While primarily aimed at providing electron identification and triggering, the TRD also contributes significantly to the track reconstruction and calibration in the central barrel of ALICE. In this paper the design, construction, operation, and performance of this detector are discussed. A pion rejection factor of up to 410 is achieved at a momentum of 1 GeV/c in p-Pb collisions, and the resolution at high transverse momentum improves by about 40% when including the TRD information in track reconstruction. The triggering capability is demonstrated for jets, light nuclei, and electrons.


Introduction
A Large Ion Collider Experiment (ALICE) [1,2] is the dedicated heavy-ion experiment at the Large Hadron Collider (LHC) at CERN. In central high-energy nucleus-nucleus collisions a high-density deconfined state of strongly interacting matter, known as the quark-gluon plasma (QGP), is expected to be created [3-5]. ALICE is designed to measure a large set of observables in order to study the properties of the QGP. Among the essential probes are several involving electrons, which originate, e.g., from open heavy-flavour hadron decays, virtual photons, and Drell-Yan production, as well as from decays of the ψ and ϒ families. The identification of these rare probes requires excellent electron identification, also in the high-multiplicity environment of heavy-ion collisions. In addition, the rare probes need to be enhanced with triggers in order to accumulate the statistics necessary for differential studies. The latter requirement concerns not only probes involving the production of electrons, but also rare high transverse momentum probes such as jets (collimated sprays of particles) with and without heavy flavour. The ALICE Transition Radiation Detector (TRD) fulfils these two tasks and thus extends the physics reach of ALICE.
Transition radiation (TR), predicted in 1946 by Ginzburg and Frank [6], occurs when a charged particle crosses the boundary between two media with different dielectric constants. For highly relativistic particles (γ ≳ 1000), the emitted radiation extends into the X-ray domain for a typical choice of radiator [7-9]. The radiation is extremely forward-peaked relative to the particle direction [7]. As the TR photon yield per boundary crossing is of the order of the fine structure constant (α ≈ 1/137), many boundaries are needed in detectors to increase the radiation yield [10]. The absorption of the emitted X-ray photons in high-Z gas detectors leads to a large energy deposition compared to the specific energy loss by ionisation of the traversing particle.
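For orientation, the momentum dependence of this effect can be made explicit with a short numerical sketch (simple kinematics, not code from the experiment): the Lorentz factor γ = √(p² + m²)/m determines whether TR is emitted, so at the momenta relevant here only electrons radiate.

```python
# Simple kinematics sketch (not from the paper's analysis code): the Lorentz factor
# gamma = sqrt(p^2 + m^2)/m decides whether TR is emitted (gamma of order 1000 is
# needed), so at the same momentum electrons radiate while pions do not.
import math

M_ELECTRON = 0.000511  # GeV/c^2
M_PION     = 0.13957   # GeV/c^2

def lorentz_gamma(p_gev, mass_gev):
    """Lorentz factor of a particle with momentum p (GeV/c) and mass m (GeV/c^2)."""
    return math.sqrt(p_gev**2 + mass_gev**2) / mass_gev

for p in (1.0, 2.0):
    print(f"p = {p} GeV/c: gamma(e) = {lorentz_gamma(p, M_ELECTRON):.0f}, "
          f"gamma(pi) = {lorentz_gamma(p, M_PION):.1f}")
# -> gamma(e) is a few thousand (TR emitted), gamma(pi) is of order 10 (no TR)
```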
Since their development in the 1970s, transition radiation detectors have proven to be powerful devices in cosmic-ray, astroparticle and accelerator experiments [10-20]. The main purpose of the transition radiation detectors in these experiments was the discrimination of electrons from hadrons via, e.g. cluster counting or total charge/energy analysis methods. In a few cases they provided charged-particle tracking. The transition radiation photons are in most cases detected either by straw tubes or by multiwire proportional chambers (MWPC). In some experiments [10,13,16,21] and in test setups [22-25], short drift chambers (usually about 1 cm) were employed for the detection. Detailed reviews on the transition radiation phenomenon, detectors, and their application to particle identification can be found in [10, 26-28].
The ALICE TRD, which covers the full azimuth and the pseudorapidity range −0.84 < η < 0.84 (see next section), is part of the ALICE central barrel. The TRD consists of 522 chambers arranged in 6 layers at radial distances from 2.90 m to 3.68 m from the beam axis. Each chamber comprises a foam/fibre radiator followed by a Xe-CO2-filled MWPC preceded by a drift region of 3 cm. The extracted temporal information represents the depth in the drift volume at which the ionisation signal was produced and thus allows the contributions of the TR photon and the specific ionisation energy loss dE/dx of the charged particle to be separated. The former is preferentially absorbed at the entrance of the chamber and the latter is distributed uniformly along the track. Electrons can be distinguished from other charged particles because they produce TR and have a higher dE/dx due to the relativistic rise of the ionisation energy loss. The usage of the temporal information further enhances the electron-hadron separation power. Due to the fast read-out and online reconstruction of its signals, the TRD has also been successfully used to trigger on electrons with high transverse momenta and on jets (3 or more high-pT tracks). Last but not least, the TRD improves the overall momentum resolution of the ALICE central barrel by providing additional space points at large radii for tracking, and tracks anchored by the TRD will be a key element to correct the space charge distortions expected in the ALICE TPC in LHC RUN 3 [29]. A first version of the correction algorithm is already in use for RUN 2. In this article the design, construction, operation, and performance of the ALICE TRD are described. Section 2 gives an overview of the detector and its construction. The gas system is detailed in Section 3. The services required for the detector are outlined in Section 4. In Section 5 the read-out of the detector is discussed and the Detector Control System (DCS) used for reliable operation and monitoring of the detector is presented in Section 6. The detector commissioning and its operation are discussed in Section 7. Tracking, alignment, and calibration are described in detail in Sections 8, 9, and 10, while various methods for charged hadron and electron identification are presented in Section 11. The use of the TRD trigger system for jets, electrons, light nuclei, and cosmic-ray muons is described in Section 12.

The global coordinate system is a right-handed Cartesian system with the x_lab-axis pointing radially inwards to the centre of the LHC ring, the y_lab-axis pointing upwards, and the z_lab-axis coinciding with the direction of one beam and pointing in the direction opposite to the muon spectrometer. According to the (anti-)clockwise beam directions, the muon spectrometer side is also called the C-side, the opposite side the A-side.
The design of the TRD is a result of the requirements and constraints discussed in the Technical Design Report [44]. It has a modular structure and its basic component is a multiwire proportional chamber (MWPC). Each chamber is preceded by a drift region to allow for the reconstruction of a local track segment, which is required for matching of TRD information with tracks reconstructed with ITS and TPC at high multiplicities. TR photons are produced in a radiator mounted in front of the drift section and then absorbed in a xenon-based gas mixture. A schematic cross-section of a chamber and its radiator is shown in Fig. 2. The shown local coordinate system is a right-handed orthogonal Cartesian system, similar to the global coordinate system, rotated such that the x-axis is perpendicular to the chamber. Six layers of chambers are installed to enhance the pion rejection power. An eighteen-fold segmentation in azimuth (ϕ), with each segment called 'sector', was chosen to match that of the TPC read-out chambers.
In the longitudinal direction (z lab ), i.e. along the beam direction, the coverage is split into five stacks, resulting in a manageable chamber size. The five stacks are numbered from 0 to 4, where stack 4 is at the C-side and stack 0 at the A-side. Layer 0 is closest, layer 5 farthest away from the collision point in the radial direction. In each sector, 30 read-out chambers (arranged in 6 layers and 5 stacks) are combined in a mechanical casing, called a 'supermodule' (see Fig. 3 and Section 2.3).
In total the TRD can host 540 read-out chambers (18 sectors × 6 layers × 5 stacks). However, in order to minimise the material in front of the PHOS detector, the chambers in the middle stack were not installed in three sectors (sectors 13-15; for numbering see Fig. 1). This results in a system of 522 individual read-out chambers. The main parameters of the detector are summarised in Table 1.
At the start of the first LHC period (RUN 1) in 2009 the TRD participated with seven supermodules. Six further supermodules were built and integrated into the experiment during the short winter shutdowns of the accelerator, three each in the shutdowns of 2010 and 2011. Full coverage in azimuth was accomplished for the second LHC period (RUN 2) starting in 2015.

Read-out chambers
The size of the read-out chambers changes radially and along the beam direction (see Fig. 3). The active area per chamber thus varies from 0.90 m × 1.06 m to 1.13 m × 1.43 m (x × z). The optimal design of a read-out chamber (see Fig. 2) was found considering the requirements on precision and mechanical stability, and minimisation of the amount of material.
The construction of the radiator, discussed in the following sub-section, is essential for the mechanical stability of the chamber. The drift electrode, an aluminised mylar foil (25 µm thick), is an integral part of the radiator. To ensure a uniform drift field throughout the entire drift volume, a field cage with a voltage divider chain is employed [44]. The  1.45 m. The maximum deformation of the chamber frame was 150 µm under the wire tension indicated, leading to a maximum 10% loss in wire tension. Even with an additional 1 mbar overpressure in the gas volume (see Section 3), the deformation of the drift electrode can be kept within the specification of less than 1 mm. The segmented cathode pad plane is manufactured from thin Printed Circuit Boards (PCBs) and glued onto a light honeycomb and carbon fibre sandwich to ensure planarity and mechanical stiffness. The design goal of a maximum deviation from planarity of 150 µm was achieved, with only a few chambers slightly exceeding this value. The PCBs of the pad plane were produced in two or three pieces. The PCBs are segmented into 12 (stack 2) or 16 pads along the z-direction, and 144 pads in the direction of the anode wires (rϕ). The pad area varies from 0.635 cm × 7.5 cm to 0.785 cm × 9 cm [45] to achieve a constant granularity with respect to the distance from the interaction point. The pad width of 0.635 cm to 0.785 cm in the rϕ direction was chosen such that the induced charge is shared between adjacent pads (typically three); this charge sharing is quantified by the pad response function (PRF) [46]. As a consequence, the position of the charge deposition can be reconstructed in the rϕ-direction with a spatial resolution of 400 µm [46]. In the longitudinal direction, the coarser segmentation is sufficient for the track matching with the inner detectors. In addition, the pads are tilted by ±2° (sign alternating layer-by-layer) as shown in Fig. 4, which improves the z-resolution during track reconstruction without compromising the rϕ resolution. For clusters confined within one pad row, a z position at the row centre is assumed, z_cluster = z_0. The honeycomb structure also acts as a support for the read-out boards. The pads are connected to the read-out boards by short polyester ribbon cables via milled holes in the honeycomb structure.
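The use of charge sharing for sub-pad position reconstruction can be illustrated with a minimal sketch. It is not the AliRoot implementation: the pad width and charges are illustrative values within the quoted range, a simple centre-of-gravity estimate is used, and the actual reconstruction is based on the pad response function [46].

```python
# Minimal sketch of sub-pad position reconstruction from charge sharing over
# (typically) three adjacent pads. Centre-of-gravity estimate for illustration only;
# the real reconstruction uses the pad response function [46].
def cluster_y(q_left, q_centre, q_right, pad_width_cm=0.725):
    """Hit position (cm) relative to the centre of the central pad."""
    q_tot = q_left + q_centre + q_right
    return pad_width_cm * (q_right - q_left) / q_tot

# Example: an asymmetric charge split pulls the reconstructed position towards
# the pad with the larger charge.
print(f"{cluster_y(20.0, 100.0, 45.0):.3f} cm")   # ~0.11 cm off the centre-pad axis
```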
The original design of the TRD was conceived such that events with a multiplicity of dN_ch/dη = 8000 would have led to an occupancy of 34% in the detector [44]. The fast read-out and processing of such data on 1.15 × 10^6 read-out channels required the design and production of fully customised front-end electronics (see Section 5).
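The quoted channel count follows directly from the segmentation described above; the short consistency sketch below is plain arithmetic based on the numbers in the text, not code from the experiment.

```python
# Consistency check of the ~1.15e6 read-out channels quoted above.
pads_per_row     = 144              # pads along the anode-wire (r-phi) direction
rows_regular     = 16               # pad rows per chamber (stacks 0, 1, 3, 4)
rows_stack2      = 12               # pad rows per chamber in stack 2
chambers_stack2  = (18 - 3) * 6     # three sectors have no stack-2 chambers
chambers_regular = 522 - chambers_stack2

channels = (chambers_regular * rows_regular + chambers_stack2 * rows_stack2) * pads_per_row
print(channels)                     # 1150848, i.e. about 1.15e6
```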
The positive signal induced on the cathode pad plane is amplified using a charge-sensitive PreAmplifier-ShAper (PASA) (see Section 5) and the signals on the cathode pads are sampled in time bins of 100 ns inside the TRAcklet Processor (TRAP, see Section 5). For LHC RUN 1 and RUN 2 running conditions (see Section 7.2), the probability for pile-up events is small. The averaged time evolution of the signal is shown in Fig. 5 for pions and electrons, with and without radiator. In the amplification region (early times), the signal is larger, because the ionisation from both sides of the anode wires contributes to the same time interval. The contribution of TR is seen as an increase in the measured average signal at times corresponding to the entrance of the chamber (around 2.5 µs in Fig. 5), where the TR photons are preferentially absorbed. At large times (beyond 2.5 µs), the effect of the slow ion movement becomes visible as a tail. Various approximations of the time response function, the convolution of the long tails with the shaping of the PASA, were studied in order to optimally cancel the tails in data, see Section 8.
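For orientation, the 100 ns sampling can be related to a depth in the drift volume. The sketch below assumes a nominal drift velocity for the Xe-CO2 mixture at the nominal drift field; this is an assumed round number, not a calibration value from this paper, and the reconstruction uses calibrated, chamber-wise drift velocities (see Section 10).

```python
# Rough mapping of time bins to drift depth. The drift velocity is an assumed
# nominal value for Xe-CO2 at 700 V/cm, not a calibrated number from this paper.
V_DRIFT_CM_PER_US = 1.56   # assumed nominal drift velocity (cm/us)
TIME_BIN_US       = 0.1    # 100 ns sampling in the TRAP

depth_per_bin_cm = V_DRIFT_CM_PER_US * TIME_BIN_US   # ~0.16 cm of drift per time bin
bins_for_drift   = 3.0 / depth_per_bin_cm            # ~19 time bins span the 3 cm drift region
print(f"{depth_per_bin_cm:.2f} cm per bin, ~{bins_for_drift:.0f} bins for the drift region")
```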
The knowledge of the ionisation energy loss is important for the control of the detector performance and for tuning the Monte Carlo simulations. A set of measurements was performed with prototype read-out chambers with detachable radiators for pions and electrons at various momenta [48]. An illustration of the measured data is shown in Fig. 6 for pions and electrons with a momentum of 2 GeV/c. The simulations describe the Landau distribution of the total ionisation energy deposition, determined from the calibrated time-integrated chamber signal. A compilation of such measurements over a broad momentum range including data obtained with cosmic-ray muons and from collisions recorded with ALICE is shown in Section 11, Fig. 37.
Measurements of the position resolution in the rϕ-direction (σ_y) and of the angular resolution (σ_ϕ), conducted with prototype chambers, established that the required performance of the detector and electronics (σ_y ≤ 400 µm and σ_ϕ ≤ 1°) is reached for signal-to-noise values of about 40, which corresponds to a moderate gas gain of about 3500 [46].
The production of a chamber was performed in several steps [49] and completed in one week on average. First, the aluminium walls of the chamber were aligned on a precision table and glued to the radiator panel. The glueing table was custom-built to ensure the required mechanical precision and time-efficient handling of the components. For almost all junctions the two-component epoxy glue Araldite® AW 116 with hardener HV 953BD was used. In a few places, where a higher viscosity glue was needed, Araldite® AW 106 was applied. In a second step, the cathode and anode wires were wound on a custom-made winding machine and glued onto a robust aluminium frame in order to keep the wire tension. This aluminium frame was subsequently placed on top of the chamber body, and the cathode and anode wires were transferred to the G10 ledges glued to the chamber body. After gluing of the anode and cathode wire planes, the tension of each wire was checked by moving a needle valve with pressurised air across the wires. The induced resonance frequency in each wire was determined by measuring the reflected light of an LED [50]. Afterwards the pad plane and honeycomb structure were placed on top of the chamber body. Following this production process, each chamber was subjected to a series of quality control tests with an Ar-CO2 (70-30) gas mixture. The tests were performed once before the chamber was sealed with epoxy (only closed with clamps) and repeated after chamber validation and final glueing. In the following the requirements are described [51]. The anode leakage current was required not to exceed a value of 10 nA. The gas leak rate was determined by flushing the chamber with the Ar-CO2 gas mixture and measuring the O2 content of the outflowing gas. It was required to be less than 1 mbar · l/h. In addition, the leak conductance was measured at an underpressure of 0.4-0.5 mbar in the chamber. The underpressure test was only introduced at a later stage of the mass production after viscous leaks were found, see Section 3.4.1 for more details. Comparisons of the anode current induced by a 109Cd source placed at 100 different positions across the active area allowed the gain uniformity to be determined. The step size for this two-dimensional scan was about 10 cm in both directions and the measured values were required to be within ±15% of the median. Electrically disconnected wires were detected by carrying out a one-dimensional scan perpendicular to the wires with a step size of 1 cm. This scan clearly identified any individual wire that was not connected, due to the visible gas gain anomaly in the vicinity of this wire, and allowed for repair. For one position the absolute gas gain was determined by measuring the anode current and by counting the pulses of the 109Cd source. The stability of the gas gain was then characterised by monitoring it in intervals of 15 minutes over a period of 12 hours.
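The ±15% gain-uniformity criterion of the 109Cd scan can be expressed as a simple acceptance test; the helper below is an illustrative sketch, not part of the actual quality-assurance software.

```python
# Illustrative acceptance test for the 100-point 109Cd gain-uniformity scan:
# all measured anode currents must lie within +-15% of the median.
from statistics import median

def gain_scan_passes(anode_currents, tolerance=0.15):
    """True if every scan point lies within the tolerance band around the median."""
    med = median(anode_currents)
    return all(abs(i - med) <= tolerance * med for i in anode_currents)

print(gain_scan_passes([1.00, 1.02, 0.98, 1.01]))   # True: uniform response
print(gain_scan_passes([1.00, 1.02, 0.98, 1.20]))   # False: one point ~19% above the median
```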

Radiator
The design of the radiator is shown in Fig. 7. Polypropylene fibre mats of 3.2 cm total thickness are sandwiched between two plates of Rohacell® foam HF71, which are mechanically reinforced by lamination of carbon fibre sheets of 100 µm thickness. Aluminised kapton foils are glued on top, to ensure gas tightness and also to serve as the drift electrode. For mechanical reinforcement, cross-bars of Rohacell® foam of 0.8 cm thickness are glued between the two foam sheets of the sandwich, with a pitch of 20-25 cm depending on the chamber size. After construction the transmission of the full radiator was measured using the Kα line of Cu at 8.04 keV to ensure the homogeneity of the radiators [52]. This line was chosen as its energy is close to the most probable value of the TR spectrum (see Fig. 8).
Measurements with prototypes [53] indicated that such a sandwich radiator produces 30-40% less TR compared to a regularly spaced foil radiator. However, constructing a large-area detector with radiators made out of 100 regularly spaced foils each is infeasible. The impact of various radiators constructed from fibres and/or foam on, e.g. particle identification is discussed in [47,53]. Based on these measurements the fibre/foam sandwich radiator design was chosen for the final detector.
The spectrum of TR produced by electrons with a momentum of 2 GeV/c, as measured with the ALICE TRD sandwich radiator, is shown in Fig. 8. Such a measurement is important for the tuning of simulations in the ALICE setup. As the production of TR is not included in GEANT3 [54], which is used to propagate generated particles through the ALICE apparatus for simulations, we have explicitly added it to our simulations in AliRoot [55], the ALICE offline framework for simulation, reconstruction and analysis. An effective parameterisation of the irregular radiator in terms of a regular foil radiator is employed as an approximation. The simulations describe the data satisfactorily, including the momentum dependence [53].

Supermodule
The detector is installed in the spaceframe (the common support structure for most of the central barrel detectors) in 18 supermodules, each of which can host 30 read-out chambers arranged in 5 stacks and 6 layers (see Fig. 3). The overall shape of the supermodule is a trapezoidal prism with a length of 7.02 m (8 m including services). Its height is 0.78 m and the shorter (longer) base of the trapezoid is 0.95 m (1.22 m). The weight of a supermodule with 30 read-out chambers is about 1.65 t. Mechanical stability is provided by a hull of aluminium profiles and sheets, connected with stainless steel screws. The materials were chosen to minimise the interference with the magnetic field in the solenoid magnet. In front of PHOS, where minimal radiation length is required, the aluminium sheets of the short and long base of the trapezoid were replaced by carbon-fibre windows.
All service connections must be routed internally to the end-caps of the supermodule. Those that introduce a significant amount of material (in terms of radiation length) are placed at the sidewalls, outside the active area of the TRD and of most other detectors in ALICE. This includes the low-voltage power distribution bus bars as well as other copper wires for the Detector Control System (DCS) board power, network and high-voltage (HV) connections between the fanout boxes and read-out chambers, and the rectangular cooling pipes (see Section 4 for more details).
Low-voltage (LV) power for the read-out boards is provided via copper power bus bars (2 for each layer and voltage as described in Table 3) with a cross-section of 6 mm × 6 mm (per channel) running along the sidewalls of the supermodule. Each read-out board is connected directly to the power bus bars. Heat generated by ohmic losses in the power bus bars is partially transferred to the adjacent cooling pipes (see Section 4.2). The power bus bars protrude about 30 cm from each side of the supermodule hull, where they are equipped with capacitors for voltage stabilisation. On one end-cap of the supermodule the power-bus bars are connected via a low-voltage patch panel to the long supply lines to the power supplies outside of the magnet.
Each read-out chamber is equipped with 6 or 8 read-out boards (see Section 2.1) and one DCS board (see Section 4.4). Power is provided and controlled separately for each DCS board by a power distribution box. The DCS boards are connected via twisted-pair cables to Ethernet patch panels at the end-caps and the boards of two adjacent layers are connected via flat-ribbon cables in a daisy chain loop to provide low-level Joint Test Action Group (JTAG) access to neighbouring boards.
For each chamber, three optical fibres are routed to the end-cap on the C-side. Two fibres connect the optical read-out interfaces to a patch panel, where they are linked via the Global Tracking Unit (GTU) (see Section 5) to the Data AcQuisition (DAQ) systems. One trigger fibre connects the DCS board to the trigger distribution box (see Section 5.1), which receives the trigger signals from the pretrigger system or its back-up system and splits them into 30 fibres (+ 2 spares).
The supermodules were constructed from 2006 to 2014. In the following, we discuss the sequence of required steps. After the construction of the supermodule hulls, the power bus bars and patch panels for the distribution of low voltage for the read-out boards and the cooling bars for the water cooling were mounted on the sidewalls. Next the power distribution box (DCS board power), the box for trigger signal distribution, a patch panel for the optical read-out fibres, and the high-voltage distribution boxes were installed at the end-caps.
Before integrating the read-out chambers into a supermodule, they were equipped with electronics (readout boards, DCS boards) and cooling pipes. After a series of tests were performed to ensure stable operation [56,57], the chambers were then inserted layer by layer. The first connection established during the installation was the gas link between the chambers (using polyether ether ketone connectors). The chambers were fixed to the hull with three screws on each of the long sides after performing a manual physical alignment. As demonstrated by later measurements (Section 9), the alignment in rϕ between the chambers is of the order of 0.6-0.7 mm (r.m.s.).
The cables to and from the read-out boards used for JTAG, low-voltage sensing, Ethernet, and DCS power were routed along one side of the chambers. The cable lengths in the active area on top of the chambers were minimised, avoiding that cables from the read-out pads cross. On the other side of the chambers, only the high voltage cables were routed. They were soldered to two separate HV distribution boxes for the anode and drift voltages at one end-cap of the supermodule. Each read-out board (38 per layer) was connected to the power bus bars (low voltage) using pre-mounted cables. The cooling pipes (4 per read-out board) were connected by small Viton tubes. In the z-direction across the read-out chambers, only optical fibres for the trigger distribution (1 per chamber) and data read-out (2 per chamber) were routed.
In addition to layer-wise tests during installation, a final test was done after completion. The test setup consisted of low-voltage and high-voltage supplies, a cooling plant, a gas system [58], as well as a full trigger setup and read-out equipment. In addition, a trigger for cosmic rays was built and installed [59,60]. It was used for first measurements of the gas gain and the chamber alignment, and also to study the zero suppression during assembly [50, 61-65].
After transport to CERN, pre-installation tests were performed (see Section 7.1 and [66]) and the supermodules were installed in the space frame with a precision of 1 cm (r.m.s.) in the z_lab-direction. The maximum tolerance in ϕ is 2 cm due to constraints given by the space frame.
In addition to the sequential assembly and installation, four supermodules were completely disassembled again in 2008 and 2009. The initial tests were not sensitive to viscous leaks of the read-out chambers and thus the supermodules were rebuilt after improving the gas tightness (see Section 3.4.1). Furthermore, in 2013 during LS 1, one supermodule was disassembled in order to improve the high-voltage stability of the read-out chambers (see Section 7.3).

Material budget
A precise knowledge of the material budget of the detector is important to obtain a precise description of the detector in the Monte Carlo simulations, which are used, e.g. to compute the track reconstruction efficiencies.
The TRD geometry, as implemented in the simulation part of AliRoot, consists of the read-out chambers, the services, and the supermodule frame. All these parts are placed inside the space frame volume. The material of a read-out chamber is modelled by combining its individual material components. A general overview of the various components is given in Table 2.
The material budget in the simulation was adjusted to match the estimate based on measurements during the construction phase of the final detector. The supermodule frames consist of the aluminium sheets on the sides, top, and bottom of a supermodule together with the traversing support structures, such as the LV power bus bars and cooling arteries. Additional electronics equipment is represented by aluminium boxes that contain the corresponding copper layers to mimic the present material. The services are also introduced, including, e.g. the gas distribution boxes, cooling pipes, power and read-out cables, and power connection panels.

Fig. 9: The radiation length map in units of X/X0 in a zoomed-in part of the active detector area as a function of the pseudorapidity and the azimuthal angle, calculated from the geometry in AliRoot (the colour scale has a suppressed zero). The positions of the MCMs and the cooling pipes are visible as hot spots. The radiation length was calculated for particles originating from the collision vertex; therefore the cooling pipes of the six layers overlap for small, but not for large, η.

Table 2: Parts of one read-out chamber (radiator, electronics, etc.) and their average contribution to the radiation length in the active area for particles with normal incidence.

Figure 9 shows the resulting material map, quantified in units of radiation length (X/X0), in a zoomed-in part of the active detector area. It is clearly visible that the Multi-Chip Modules (MCMs) on the read-out boards (see Section 5) and the cooling pipes introduce hot spots in X/X0. After averaging over the shown area, the mean value is found to be X/X0 = 24.7% for a supermodule with aluminium profiles and sheets and 30 read-out chambers (6 chambers per stack with the material budget as indicated in Table 2). The reduced material budget of the supermodules in front of the PHOS detector (carbon fibre inserts instead of aluminium sheets and no read-out chambers in stack 2) is likewise modelled in the simulation. In regions directly in front of PHOS, X/X0 is only 1.9%.
The total weight of a single fully equipped TRD supermodule as described in the AliRoot geometry, including all services, is 1595 kg, which is about 3.3% less than its real weight. This discrepancy can be attributed to material of service components, such as the gas manifold (see Section 3.3) and the patch panel, outside the active area, which were not introduced in the AliRoot geometry.

Gas
At atmospheric pressure, a total of 27 m3 of a xenon-based gas mixture must be circulated through the TRD. This expensive gas cannot simply be flushed through, but rather has to be re-circulated in a closed loop by using a compressor and independent pressure and flow regulation systems. The gas system of the TRD follows a pattern in construction, modularisation, control, and supervision which is common to all LHC gaseous detectors, with emphasis on the regulation of a very small overpressure in the read-out chambers and on the minimisation of leaks. The basic modules, such as mixer, purification, pump, exhaust, and analysis, are based on a set of common templates applied to the hardware and the software. A Programmable Logic Controller (PLC) controls each system and the user interacts with it through a supervision panel. Upon a global command, the PLC executes a sequence that configures all elements of the gas system for a given operation mode and continuously regulates the active elements of the system. In this manner the modules and operational conditions can be customised to the specific requirements of each detector, from the control of the stability of the overpressure in the detectors, the circulation flow, and the gas purification, recuperation and distillation, to the monitoring of the gas composition and quality (Xe-CO2, with as little O2, H2O and N2 as possible).

Gas choice
As well as being an array of tracking drift chambers, the TRD is an electron identification device, relying on the detection of TR photons. In order to efficiently absorb these several-keV photons, a high-Z gas is necessary. Figure 10 shows, for three noble gases, the absorption length of photons with energies in the range of typical TR production. At around 10 keV the absorption length in Xe is less than 1 cm, whereas for Kr it is several cm. This argues for the choice of Xe as the noble gas for the operating mixture. CO2 is selected as the quenching gas, since hydrocarbons are excluded for flammability and ageing reasons. The choice of the exact composition is in this case rather flexible, since the design of the wire chambers leaves enough freedom in the choice of the drift field and anode potential. The best compromise for the CO2 concentration corresponds to the mixture Xe-CO2 (85-15), which ensures a very good efficiency of TR photon absorption by Xe and provides the detector with stability against discharges.
Furthermore, this mixture exhibits good stability of the drift velocity at the nominal drift field, even with the inevitable contamination by small amounts of N2 that accumulate in the gas through leaks (see Section 3.2). The drift velocity of the Xe-CO2 (85-15) mixture, pure and with substantial admixtures of N2, is shown as a function of the drift field in Fig. 11 (left). The drift velocity does not depend on the N2 contamination at the nominal drift field of 700 V/cm. On the other hand, as illustrated in Fig. 11 (right), the anode voltage would need a readjustment of 50 V to keep the gain constant when increasing the concentration of N2 in the mixture by 10%. It should be noted that intakes of less than 5% N2 are typically observed in one year of operation. After 2-3 years of operation, the N2 is cryogenically separated from the Xe (see Section 3.3.9). The operation of the chambers in a magnetic field of 0.5 T, perpendicular to the electric drift field (700 V/cm), forces the drifting electrons onto a trajectory that is inclined with respect to the electric field. The so-called Lorentz angle is about 9° for this gas mixture (see Section 10).
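The order of magnitude of the Lorentz angle can be obtained from the simple Langevin relation tan θ_L ≈ v_d B/E. The sketch below uses an assumed nominal drift velocity and ignores the detailed electron transport in the mixture, which is why it comes out somewhat below the quoted value of about 9°.

```python
# Order-of-magnitude estimate of the Lorentz angle, tan(theta_L) ~ v_drift * B / E.
# The drift velocity is an assumed nominal value; the naive Langevin picture
# underestimates the measured ~9 degrees, but gives the right order of magnitude.
import math

v_drift = 1.56e4   # m/s (assumed ~1.56 cm/us at the nominal field)
B = 0.5            # T
E = 700e2          # V/m (700 V/cm)

theta_L = math.degrees(math.atan(v_drift * B / E))
print(f"naive Lorentz angle: {theta_L:.1f} deg")   # ~6 deg
```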
For commissioning purposes, where TR detection is not necessary, the read-out chambers are flushed with Ar-CO 2 , which is available in a premixed form at low cost.

Requirements and specifications
The TRD consists of read-out chambers with an area of about 1 m 2 which are built with low material budget. This poses a severe restriction on the maximum overpressure that the detector can hold. Therefore, while in operation, the pressure of each supermodule is regulated by the gas system to a fraction of a mbar above atmospheric pressure and the safety bubblers, installed close to the supermodules, are adjusted to release gas at about 1.3 mbar overpressure. The detector can hold an overpressure in excess of 5 mbar.
Another tight constraint arises from the highly disadvantageous surface-to-volume ratio of the detector, which makes it challenging to keep the gas losses through leaks to a minimum. Cost considerations drive the criterion for the maximum allowable leak rate of the system: a reasonable target is to lose less than 10% of the total gas volume through leaks in one year. This translates into a total leak conductance of 1 ml/h per supermodule at 0.1 mbar overpressure. As a result, unlike in other gas systems, gas is not continuously vented to the atmosphere. Furthermore, the filling and emptying of the system must be performed with marginal losses of xenon; adequate gas separation and cryogenic distillation techniques are therefore implemented. In addition, any pulse-height measuring detector must be operated with a gas free of electronegative substances, such as O2, which is continuously removed from the gas stream. Precautions are taken by chromatographic analyses of both the supply xenon and the air inside the volume of the solenoid magnet to avoid any SF6 contamination of the gas through gas supply cylinders or from neighbouring detectors.

Description of the gas system
The TRD gas system follows the general architecture of all closed loop systems of the LHC detectors, but is customised to meet the requirements specified above. The various modules of the gas system are distributed, as shown schematically in Fig. 12, on the surface, in a location halfway down the cavern shaft, and in the cavern. The gas is circulated by compressors that draw the gas from the detector and compress it to a high pressure. This pumping action is regulated to keep the desired overpressure at the detector. In the high-pressure part of the system, at the surface, gas purification, mixing, and other operations are carried out. On its way to the cavern, the gas is distributed to individual supermodules using pressure regulators. The gas circulates through the detector, and at the outlet of each sector a gas manifold is used to return the gas through a single line and to hold the pressure regulation hardware. Halfway to the surface, a set of pneumatic valves is used to regulate the flow from each supermodule in order to keep the desired overpressure. The gas is then compressed into a high pressure buffer prior to circulation back to the surface.

Fig. 12: Schematic view of the TRD gas system. The gas circulates in a closed loop pushed by a compressor. The flow for each supermodule is determined by the pressure set at individual pressure reducers in the inlet distribution modules. The overpressure is regulated with individual pneumatic valves at the return modules. The gas is purified at the surface and, when needed, supply gas is mixed and added to the loop. For the filling and the removal of the expensive xenon, semipermeable membranes are used to separate it from the CO2. The recovered xenon can be treated in a cryogenic plant in order to remove accumulated N2, prior to storage.

Distribution
Xenon is a heavy gas: its density at ambient conditions is 5.76 kg/m3, 4.7 times that of air. This means that over the 7 m height span of the TRD in the experiment, the total hydrostatic pressure difference between the top and the bottom supermodules would be about 2.8 mbar. In order to overcome this, gas is circulated separately through each supermodule (except the top three and the bottom three, which are installed at similar heights) and the pressure is thus individually regulated to equal values everywhere. In addition, due to the different heights of the supermodules, the gas, supplied from the surface, would flow unevenly through the different supermodules, the lower ones being favoured over the higher ones. This second inconvenience is overcome by supplying the gas to each supermodule from the distribution area (halfway down the cavern shaft) through narrow lines (4 mm diameter) over a length of about 100 m. The pressure drop of the circulating gas in these lines, of several tens of mbar, is much larger than the difference in hydrostatic pressure between supermodules, and therefore nearly equal flow, at equal overpressure, is ensured in all supermodules.
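The quoted hydrostatic pressure difference can be checked with a back-of-the-envelope calculation. The densities below are assumed ambient-condition values; what matters for the regulation of the overpressure is the density difference between the gas mixture inside the detector and the air outside.

```python
# Back-of-the-envelope check of the ~2.8 mbar hydrostatic difference over the
# 7 m height span. Densities are assumed round ambient-condition values.
g = 9.81                        # m/s^2
h = 7.0                         # m, height span of the TRD
rho_xe, rho_co2 = 5.76, 1.84    # kg/m^3
rho_air = 1.20                  # kg/m^3

rho_mix = 0.85 * rho_xe + 0.15 * rho_co2           # Xe-CO2 (85-15)
delta_p_mbar = (rho_mix - rho_air) * g * h / 100.0
print(f"{delta_p_mbar:.1f} mbar")                  # ~2.7 mbar, consistent with the text
```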
The six layers of each supermodule are supplied from one side (A-side) with three inlet lines, each of them serving two consecutive layers. Small bypass bellows connect two consecutive layers on the opposite side. On the A-side, a manifold arrangement is used to connect the gas outlets, a common safety bubbler, pressure sensors, and back-up gas. The return outlets of each supermodule are connected together into one line which returns to the pump module. The three top supermodules are connected to a single common return line, and likewise the three bottom ones. This arrangement results in 14 independently regulated circulation loops. Each supermodule has its own two-way bubbler, which provides the ultimate safety against over- or underpressure.

Pump
In the distribution area, the flow through each return line is regulated by one pneumatic valve per loop, driven by the pressure sensors located at the detector. In this area the gas is kept at a pressure slightly below atmospheric pressure, and it is stored in a 0.8 m3 buffer container before it is compressed by two pumps which operate at a constant frequency. The compressor module drives a bypass valve in order to maintain a calculated pressure set point at its inlet. In this manner, a dual regulation concept is used to handle the 14 loops. The role of the inlet buffer is to act as a damper of possible regulation oscillations. This pressure regulation system keeps the overpressure in the supermodules stable at the set point of 0.1 mbar above atmospheric pressure to within 0.03 mbar.
A 0.93 m3 high-pressure buffer at the compressor outlet is used as a storage volume. Its content varies according to the atmospheric pressure, either by providing gas to the detectors or by receiving it from them. The overpressure in this buffer typically ranges between 0.8 and 2 bar. Knowledge of all the system volumes allows the pressure in the buffer to be predicted for any atmospheric pressure value. Gas leaks ultimately result in a reduction of this pressure; in that case the dynamic regulation of the high pressure triggers the injection of fresh gas from the mixer until the high pressure is restored. From this buffer, the pressurised gas is circulated up to the gas building at the surface.
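The coupling between atmospheric pressure and buffer pressure can be sketched with an ideal-gas estimate. The volumes are taken from the text, while neglecting all other system volumes and assuming isothermal conditions are simplifications.

```python
# Ideal-gas sketch of the buffer response: at fixed detector overpressure, a change
# of the atmospheric pressure changes the amount of gas needed in the (large)
# detector volume, which the (small) high-pressure buffer must supply or absorb.
V_DETECTOR = 27.0   # m^3, gas volume kept at (atmospheric + 0.1 mbar)
V_BUFFER   = 0.93   # m^3, high-pressure buffer

def buffer_pressure_change(delta_p_atm_mbar):
    """Approximate change of the buffer pressure (mbar) for a given atmospheric change."""
    return -delta_p_atm_mbar * V_DETECTOR / V_BUFFER

# A +10 mbar weather change pulls roughly 0.3 bar out of the buffer, so typical
# atmospheric swings account for much of the quoted 0.8-2 bar operating range.
print(f"{buffer_pressure_change(+10.0):.0f} mbar")   # about -290 mbar
```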

Purifier
The purifier module consists of two 3-litre cartridges, each filled with a copper catalyst which removes oxygen chemically by oxidising the copper and removes water mechanically by absorption. Upon saturation, the PLC switches between cartridges at the pre-defined frequency and launches an automatic regeneration cycle, in which the copper oxide is reduced at high temperature with a flow of H2 diluted in argon. As the detector is rather gas tight, the O2 intake through leaks is moderate, and the purifier keeps it between 0 and 3 ppm. However, H2O diffusion, probably through the aluminised mylar foil which constitutes the drift electrode of every read-out chamber, makes it necessary to switch between purifiers about every 3.5 days in order to keep the H2O content below a few hundred ppm.

Recirculation
The surface module is used to recirculate the gas at high enough pressure to the distribution modules in the cavern shaft area. It also contains provisions for extracting gas samples for analysis, and a bypass loop to allow for the installation of containers such as a krypton source for gain calibration (see Section 10).

Mixer
Under normal operation and since the gas is only exhausted through leaks, gas injection into the system happens only if the pressure in the high pressure buffer falls below a dynamic threshold, as explained above. On such occasions, the mixer is activated and injects the nominal gas mixture at a rate of a few tens of l/h until the high pressure buffer is replenished. The amount of gas injected by the mixer during a given period provides a direct measurement of the leak rate.
In addition, a second set of mass flow controllers provides flows in the m 3 /h range and is used for filling and emptying the detector.

Backup system
When the gas system is in stop mode, e.g. during a power failure, the safety bubbler installed on each supermodule ensures that the detector pressure always remains within about ±1.3 mbar relative to atmospheric pressure. To prevent air, i.e. oxygen, from entering the detector, the external side of the bubbler is connected to a continuous flow of a neutral gas, in this case N2, which flows through the bubbler in case of a large detector underpressure. The choice of N2 is driven by the small influence this admixture has on the gas properties (see Fig. 11). The full TRD is served by three independent backup lines, each with connections to six supermodule bubblers, and arranged such that the flow points downwards. In this way, if the xenon mixture is exhausted through the bubblers, it falls down the backup line, relieving its high hydrostatic pressure. A differential pressure transmitter measures the pressure difference between the detector and the backup gas.

Analysis
The control of the gas quality is perhaps the most demanding aspect of running detectors where both signal amplitude and drift time information are important. This control is even more crucial for the ALICE TRD, where accurate and uniform drift velocity and gain values are needed for triggers based on online tracking and particle identification. Thus, in addition to effective tightness of the system and continuous removal of O2 and H2O, constant monitoring of the gas composition, and in particular of the N2 content, is necessary. Although for a large-volume system such as that of the TRD the changes in composition are slow, the precision and stability requirements of the measuring instruments are quite challenging. Furthermore, continuously measuring analysers, such as O2, H2O and CO2 sensors, must be installed in the gas loop, since xenon must not be exhausted. They must therefore be free of outgassing of contaminants into the gas.
The analysis module samples the return gas from individual supermodules in a bypass mode, before it is compressed. For this, a fraction of the gas is pushed through the analysis chain by a small pump, and returned to the loop at the compressor inlet. Usually, the PLC is programmed to continuously sample one supermodule after the other, for about 10 minutes each.
An external gas chromatograph is used to periodically measure the gas composition. This device is not in the gas loop; rather, the gas is exhausted while purging and sampling a small stream for a few seconds every few hours.

Membranes
One system volume of xenon is injected for operation and, typically every two or three years, removed for cleaning and storage. This means that it must be possible to separate CO 2 from Xe. This separation is achieved with a set of two semipermeable membrane cartridges. Each cartridge consists of a bundle of capillary polyimide tubes through which the mixture flows. The bundle is in turn enclosed in the cartridge case. While the CO 2 permeates through the polyimide walls, most of the xenon is contained and continues to flow into the loop. The permeating gas can be circulated through the second membrane cartridge to further separate and recover most of the Xe.
During the filling, the detector is first flushed with CO 2 and then, in closed-loop circulation, the xenon is injected as the CO 2 is removed through the membranes. The reverse process is used for the recuperation of the xenon into a cryogenic plant.

Recuperation
N 2 inevitably builds up in the gas through small leaks and cannot be removed by the purifier cartridges. Therefore, after each long period (2-3 years) of operation, the N 2 is cryogenically separated from the Xe.
A cryogenic buffer is filled with xenon after separating it from CO 2 . At the same time, CO 2 is injected into the gas system in order to replace the removed gas.
The cryogenically isolated buffer is surrounded by a serpentine pipe with a regulated flow of liquid nitrogen (LN2) in order to keep its temperature at −170 °C, just above the N2 boiling point (−195.8 °C). At this temperature Xe (and CO2) freezes, whereas N2 stays in the gaseous phase. Once the buffer is full, the remaining gas (mostly N2) is pumped away. After this, the buffer is heated up in a regulated way, and the evaporating Xe is compressed into normal gas cylinders. The resulting Xe typically has a N2 contamination of <1%, and the total Xe loss (due to the efficiency of the membranes and the cryogenic recovery process) is about 1 m3 for a full recovery operation.

Operational challenges
The gas system has been operating reliably over several years in several modes, but mainly in so-called run mode. Aside from minor incidents, a number of important leaks have been dealt with, which deserve a brief description.

Viscous leaks
As part of the standard quality assurance procedure, a leak test was performed on each chamber prior to installation in the supermodule. The leak test consisted of flushing the chamber with gas and measuring the O2 contamination at the exhaust, where the overpressure was typically about 1 mbar. It was found, however, that a supermodule would lose gas even if the O2 content was very low. The reason turned out to be the particular construction of the pad planes, which are glued to a reinforcement honeycomb panel with a carbon fibre sheet. Viscous leaks would develop between the glued surfaces and gas would find its way out through the cut-outs for the signal connections machined in the honeycomb sandwich. The impedance of this kind of leak is large enough that gas can escape the detector with no intake of air through back-diffusion. The concerned read-out chambers were then extracted and repaired, and the leak tests on subsequent chambers were modified such that the O2 was measured both at over- and underpressure in the read-out chamber, resulting in a tight system.

Argon contamination
At one point, the routine gas analysis with the gas chromatograph showed increasing levels of Ar in the Xe-CO 2 mixture. This elusive leak came from a faulty pressure regulator which was pressurised with argon on the atmospheric side. Occasionally, depending on the pressure, the membrane of the regulator would leak and let Ar enter the gas volume. A total of 1% Ar accumulated in the mixture and was removed by cryogenic distillation, together with N 2 .

Leak in pipe
The last major leak in the system was detected when the pressure in the high-pressure buffer suddenly started to decrease steadily. Any leak of the system appears, while running, as a decrease of the pressure in the high-pressure buffer, because the system always ensures the correct overpressure at the read-out chambers. By stopping the system and isolating all of its modules, it was found that the source of the leak was a long stainless steel pipe which connected the compressor module, halfway down the cavern shaft, to the surface, where the gas, still at high pressure, is cleaned and recirculated. It was not possible to find the exact location of the leak, and the problem was solved by replacing the pipe with a spare.

Services
The supermodules installed in the space frame require service infrastructure for their operation. To reduce the weight, the connections (low and high voltage, cooling, gas, read-out, and control lines) are  routed via dedicated frames on the A-and C-side, respectively. Both frames are 2 m extensions of the space frame with similar geometry, but mechanically independent except for the flexible services. Most of the equipment, such as the low-voltage power supplies, is placed in the cavern underground and thus inaccessible during beam operation. Some devices are situated in counting rooms in the cavern shaft, which are supervised radiation areas but accessible.

Low voltage
During RUN 1 operation, several low-voltage connections on the supermodules showed increased resistivity, resulting in excessive heat dissipation, which in some cases required switching off part of the detector until the problem could be fixed during an access. Later, during LS 1, the affected supermodules were pulled out of the experiment and the connections were reworked in the cavern. The supermodules were re-inserted and re-commissioned immediately after the rework. The complete procedure took about one day per supermodule.

Cooling
The complexity of the cooling system, whose cooling medium is deionised water, is driven by the large number of heat sources (more than 100 000) distributed over the complete active area of the detector. Heat is produced by the MCMs and the Voltage Regulators (VR) on the read-out boards, by the DCS boards, and by the power bus bars. The total heat dissipation in a supermodule amounts to about 3.3 kW, of which about 2.6 kW are produced in the FEE; the remaining 700 W originate from the voltage regulators and the bus bars. The DCS boards contribute about 130 W per supermodule. Overall, the rate of heat to be carried away during detector operation amounts to 55 kW and 70 kW in Pb-Pb and pp collisions, respectively, due to the different read-out rates. Apart from the power bus bars, the heat sources are positioned on top of the read-out boards.
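Scaling the per-supermodule numbers quoted above to the full detector provides a quick consistency check; the sketch below is plain arithmetic, not a value taken from the paper.

```python
# Consistency check of the heat budget: per-supermodule numbers from the text,
# scaled to 18 supermodules.
P_FEE_KW     = 2.6                     # kW, front-end electronics per supermodule
P_REG_BUS_KW = 0.7                     # kW, voltage regulators and power bus bars
P_SM_KW      = P_FEE_KW + P_REG_BUS_KW # ~3.3 kW per supermodule
P_TOTAL_KW   = 18 * P_SM_KW            # ~59 kW for the full detector
print(f"{P_SM_KW:.1f} kW per supermodule, {P_TOTAL_KW:.0f} kW total")  # within the quoted 55-70 kW
```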
In the cooling system the pressure is kept below atmospheric pressure; thus a leak leads to air entering the system, but no water is spilled onto the detector. The cooling plant [67] consists of a 1500 l storage tank, positioned at the lowest point outside the solenoid magnet and able to contain all the water of the installation, the circulation pump, the 18 individual circuits that supply cooling water to the 6 layers of each supermodule, and the heat exchanger connected to the CERN chilled water network. The reservoir is kept at 300-350 mbar below atmospheric pressure by means of a vacuum pump that also removes any air collected through small leaks. In addition, the pressure of the circulation pump (1.8 bar) and the diameters of all pipes are chosen such that a sub-atmospheric pressure is maintained everywhere in the detector, despite a difference in height of about 7 m between the lowest and the highest supermodule. Each circuit is equipped with individual heaters and balancing valves in order to control the temperature and the flow in each loop separately. The heaters are regulated by a proportional-integral-derivative controller. A temperature stability of the cooling water of ±0.2 °C is achieved. The typical water flow is about 1300 l/h per supermodule. To avoid corrosion, a fraction of the total water flow is passed through a deioniser to keep the water conductivity low. As the water is in contact with similar materials (stainless steel and aluminium), the TRD cooling system also supplies the water to the cooling panels of the thermal screening between TPC and TRD [31].
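With the quoted heat load and water flow per supermodule, the warming of the coolant from inlet to outlet can be estimated; the water properties below are assumed standard values.

```python
# Estimate of the coolant temperature rise across one supermodule.
P_SM_W       = 3300.0   # W dissipated per supermodule (from the text)
FLOW_L_PER_H = 1300.0   # l/h cooling water per supermodule (from the text)
C_P          = 4186.0   # J/(kg K), specific heat of water (assumed)
RHO_KG_PER_L = 1.0      # kg/l (assumed)

mass_flow = FLOW_L_PER_H * RHO_KG_PER_L / 3600.0   # kg/s
delta_t   = P_SM_W / (mass_flow * C_P)             # K
print(f"{delta_t:.1f} K")                          # ~2 K inlet-to-outlet warming
```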
The loop regulation and the cooling plant control are done by a PLC. Warnings and alarms are issued by the PLC if parameters are outside the allowed intervals and are read out by the Detector Control System (see Section 6). Two independent safety levels were implemented in each loop. The first continuously monitors the pressure of each loop and stops the water circulation of the cooling plant if any value reaches atmospheric pressure. Secondly, large safety valves were installed at the entrance of each supermodule. They open in case an overpressure of 50 mbar is reached, providing a low-resistance path for the water evacuation in case of emergency.
The cold water is supplied at the lowest point of each supermodule and the warm water is collected at the highest point in order to have a more homogeneous water flow in all pipes. A water manifold at one end-cap of the supermodule distributes the water in parallel to the 6 layers inside each supermodule, and on the opposite side a similar manifold collects the warm water. In each layer, two rectangular pipes along the z-direction (65 × 8 × 7500 mm) supply (collect) water to (from) the meanders, 76 individual cylindrical aluminium pipes (3 mm in diameter) running in the y-direction where the heat sources are. A total of 17 meander types were designed for the system. To bring the water from the rectangular pipes to the individual meanders, the rectangular pipe has small stainless steel pipes (3 mm diameter and 5 cm length) soldered at the proper position for each MCM row. A Viton tube of about 2 cm length is used to connect the small stainless steel pipes and the meanders, as well as for the connections between the two meanders (one per ROB) in the y-direction. A total of about 25 000 Viton tube connectors were used in the system. This kind of connector was previously used in the CERES/NA45 leakless cooling system [68] because of its low price and reliability.

High voltage
The high voltage distribution for the drift field and the anode-wire plane is made separately for each chamber, reducing the affected area to one chamber in case of failure. The power supplies for the drift channels and anode-wires were purchased from ISEG [69] (variants of the model EDS 20025). Each module has 32 channels, which are grouped in independent 16-channel boards. Each channel is independently controllable in terms of the voltage setting and current limit as well as monitoring of current and voltage. Eight modules are placed into each crate and remotely controlled via CANbus (Controller Area Network) from DCS (see Section 6). The HV crates are placed in one of the counting rooms in the cavern shaft, which allows access even during beam operation.
For each of the 30 read-out chambers in a supermodule one power supply is needed for the drift field and one for the anode-wire plane. A multiwire HV cable connects the 32 channel HV module with a 30 channel HV fanout box (patch box) located at one end of the supermodule, where the output is redistributed to single wire HV cables (see Section 2.3). The individual HV cables are then connected to a HV filter box, mounted along the side of the read-out chamber. The HV filter box supplies the HV to the 6 anode segments and the drift cathode of the read-out chamber, and in addition it allows connection of the HV ground to the chamber ground. It consists of a network of a resistor and capacitors (2.2 nF and 4.7 nF) to suppress load-induced fluctuations of the voltages in the chamber.
The HV crates are equipped with an Uninterruptible Power Supply (UPS) and a battery to bridge short term power failures. In case of a longer power failure (> 10 s) a controlled ramp-down is initiated, i.e. the HV of the individual drift and anode-wire channels is slowly ramped down. Details on maximum applied voltages, channel equalisation, ramp speed as well as high-voltage instability observed during data taking are discussed in Sections 6 and 7.3.

Slow control network
The slow control of the TRD is based on Detector Control System (DCS) boards [70]. They communicate with the DCS (see Section 6) via a 10 Mbit/s Ethernet interface, mostly using the Distributed Information Management (DIM) protocol for information exchange. The use of Ethernet allows the use of standard network equipment, but a dedicated network restricted to the ALICE site is used. The DCS boards are used as end points for the DCS to interact with subsystems of the detector. Later sections discuss how the DCS boards are used as interface to the various components, e.g. the front-end electronics or the GTU.
The DCS boards were specifically designed for the control of the detector components and are used by several detectors in ALICE. At its core, each board hosts an Altera Excalibur EPXA1 (ARMv4 core + FPGA), which runs a Linux operating system on the processor and implements user logic in the FPGA fabric, depending on the specific usage of the board. The DCS board also contains the Trigger and Timing Control receiver (TTCrx) for clock recovery and trigger reception. The Ethernet interface is implemented with a hardware PHY (physical layer) and a soft Media Access Controller (MAC) in the FPGA fabric.
In case of the boards mounted on the detector chambers, the FPGA also contains the Slow Control Serial Network (SCSN) master used to configure the front-end electronics. Further general-purpose I/O lines are used, e.g. for JTAG and I$^2$C communication.
Since the Ethernet connections are used for configuration and monitoring of the detector components, reliable operation is crucial. All DCS boards are connected to standard Ethernet switches installed in the experimental cavern outside of the solenoid magnet. Because of the stray magnetic field and the special Ethernet interface of the DCS board (no inductive coupling), there are limitations on the usable switches.
Since the failure of an individual switch would result in the loss of connectivity to a large number of DCS boards, a custom-designed Ethernet multiplexer was installed in front of the switches in the second half of RUN 1. This allows the connection of each DCS board to be remotely switched between two different switches with separate uplinks to the DCS network. The multiplexers themselves are implemented with fully redundant power supplies and control interfaces.

Read-out
The read-out chain transfers both raw data and condensed information for the level-1 trigger. While the former requires sufficient bandwidth to minimise dead time, the latter depends on a low latency, i.e. a short delay of the transmission. The data from the detector are processed in a highly parallelised read-out tree. Figure 13 provides an overview and relates entities of the read-out system to detector components.
In the detector-mounted front-end electronics, the data are processed in Multi-Chip Modules grouped on Read-Out Boards (ROB) and eventually merged per half-chamber. Then, they are transmitted optically to the Track Matching Units (TMU) as the first stage of the Global Tracking Unit (GTU). The data from all stacks of a supermodule are combined on the SuperModule Unit (SMU) and eventually sent to the Data AcQuisition system (DAQ) through one Detector Data Link (DDL) per supermodule.
The read-out of the detector is controlled by trigger signals distributed to both the FEE and the GTU. The ALICE trigger system is based on three hardware-level triggers (level-0, 1, 2) and a High Level Trigger (HLT) [72] implemented as a computing farm. In addition to these levels, the FEE requires a dedicated wake-up signal as described in the next subsection.

Pretrigger and LM system
Both FEE and GTU must receive clock and trigger signals, which are provided by the Central Trigger Processor (CTP) [73] using the Trigger and Timing Control (TTC) protocol over optical fibres. While the GTU only needs the level-0/1/2 triggers and is directly connected to the CTP, the FEE requires a more complicated setup. To reduce power consumption, it remains in a sleep mode when idle and requires a fast wake-up signal before the reception of a level-0 trigger to start the processing. During RUN 1, an intermediate pretrigger system was installed within the solenoid magnet [74,75]. Besides passing on the clock and triggers received from the CTP, it generated the wake-up signal from copies of the analogue V0 and T0 signals (reproducing the level-0 condition) and distributed it to the front-end electronics. In addition, the signals from TOF were used to generate a pretrigger and level-0 trigger on cosmic rays. Because of limitations of this setup, the latencies of the contributing trigger detectors at the CTP were reduced for RUN 2 (also by relocating the respective detector electronics) such that the functionality of the pretrigger system could be integrated into the CTP. The latter now issues an LM (level minus 1) trigger for the TRD before the level-0 trigger. An interface unit (LTU-T) was developed for protocol conversion [76] in order to meet the requirements of the TRD front-end electronics. A comparison of the two designs is shown in Fig. 14.

Front-end electronics
The FEE is mounted on the back-side of the read-out chamber. It consists of MCMs which are connected to the pads of the cathode plane with flexible flat cables. An MCM comprises two ASICs, a PASA and a TRAP, which feature a large number of configuration settings to adapt to changing operating conditions. The signals from 18 pads are connected to the charge-sensitive inputs of the PASA on one MCM. An overview of the connections is shown in Fig. 15.
The very small charges induced on the read-out pads (typically 7 µA during 1 ns) are not amenable to direct signal processing. Therefore, the signal is first integrated and amplified by a Charge Sensitive Amplifier (CSA). Its output is a voltage signal with an amplitude proportional to the total charge. The CSA has a relatively long decay time, which makes it vulnerable to pile-up. A differentiator stage removes the low frequency part of the pulse. The exponential decay of the CSA feedback network, in combination with the differentiator network, leads to an undershoot at the shaper output with the same time constant as the CSA feedback network. A Pole-Zero network is used to suppress the undershoot. A shaper network is required to limit the bandwidth of the output signal and avoid aliasing in the subsequent digitisation process. At the same time the overall signal-to-noise ratio must be optimised. These  objectives are achieved by a semi-Gaussian shaper, implemented with two low-pass filter stages. Each stage consists of two second-order bridged-T filters connected in cascade. The second shaper consists of a fully differential amplifier with a folded cascode configuration and a common-mode feedback circuit. This circuit network was implemented to prevent the output of the fully differential amplifier from drifting to either of the two supply voltages. It establishes a stable common-mode voltage. The last stage in the chain comprises a pseudo-differential amplifier with a gain of 2. This stage adapts the DC voltage level of the PASA output to the input DC-level of the TRAP ADC [77].
The differential PASA outputs are fed into the ADCs of the TRAP, the second ASIC on the same MCM. The PASA and TRAP parameters are listed in Table 4. The TRAP is a custom-designed digital chip produced in the UMC 0.18 µm process. The TRAP comprises 10-bit ADCs for 21 channels, a digital filter chain, a hardware preprocessor, four two-stage pipelined CPUs with individual single-port, Hamming-protected instruction memories (IMEM, 4k × 24 bit), about 400 configuration registers usable by the hardware components, a quad-port Hamming-protected data memory (DMEM, 1k × 32 bit), and an arbitrated Hamming-protected data bank (DBANK, 256 × 32 bit) [78]. The three additional ADC channels are fed with the amplified analogue signals from the two adjacent MCMs to avoid tracking inefficiencies at the MCM boundaries. The signals of all 21 channels are sampled and processed in time bins of 100 ns. The number of time bins to be read out can be configured in the FEE. At the beginning of RUN 1, 24 time bins were read out as a conservative choice. At a later stage, the number of time bins was reduced to 22 in order to reduce the read-out time and the data volume.
The first step in the TRAP is the digitisation of the incoming analogue signals. In order to avoid rounding effects, the ADC outputs are extended by two binary digits and fed into the digital filter chain. First, the pedestal of the signal is adjusted to a configurable baseline value. Then, a gain filter is used to correct for local variations of the gain, arising either from detector imperfections or from the electronics itself. A tail cancellation filter can be used to suppress the ion tails. The filtered data are fed into a pre-processor which contains hardware units for the cluster finding. The four CPUs (MIMD architecture) are used for the further processing. The local tracking procedure is discussed in detail in Section 12.1.
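As an illustration of these three filter stages, the following sketch applies a pedestal shift, a multiplicative gain correction, and a single-exponential tail cancellation to the ADC samples of one channel. The filter constants, the baseline estimate, and the single-exponential tail model are assumptions chosen for readability; the TRAP implements these stages with configurable fixed-point parameters.

```python
import numpy as np

def filter_chain(adc, target_pedestal=10.0, gain_correction=1.0,
                 tail_amplitude=0.1, tail_decay=0.7):
    """Apply a simplified pedestal, gain, and tail cancellation filter to the
    ADC time-bin samples of a single channel (all parameters are assumed)."""
    adc = np.asarray(adc, dtype=float)
    # 1) pedestal filter: shift the estimated baseline to a configurable value
    baseline = np.median(adc[:4])                 # crude baseline estimate
    out = adc - baseline + target_pedestal
    # 2) gain filter: multiplicative correction of channel-to-channel gain
    out = target_pedestal + gain_correction * (out - target_pedestal)
    # 3) tail cancellation: subtract an exponential tail of each sample
    #    from the subsequent samples (single-exponential ion-tail model)
    corrected = np.empty_like(out)
    tail = 0.0
    for i, sample in enumerate(out):
        signal = sample - target_pedestal - tail
        corrected[i] = target_pedestal + signal
        tail = tail * tail_decay + tail_amplitude * max(signal, 0.0)
    return corrected
```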
The MCMs are mounted on the ROB. On each board, 16 chips are used to sample and process the detector signals. A full detector chamber is covered by 8 ROBs (6 for chambers in stack 2). The read-out is organised in a multi-level tree. First, the data from four chips are collected by so-called column merger chips. The latter, in addition to processing the data from their own inputs, receive the data from three more MCMs. The data are merged and forwarded to the board merger, which combines the data from all chips of one ROB. One ROB per half-chamber carries an additional MCM which acts as half-chamber merger (without processing data of its own). It forwards the data to the Optical Read-out Interface (ORI) from where it is transmitted through an optical link (DDL) to the GTU. The link is operated at 2.5 Gbit/s and is implemented for uni-directional transmission without handshaking, i.e. the receiving side must be able to handle the incoming data for a complete event as it arrives. As the FEE does not provide multi-event buffering, the detector is busy until the transmission from the FEE is finished. The slowest half-chamber determines the contribution to the dead time of the full detector.

Global Tracking Unit
The GTU receives data via 1044 links from the FEE. The aggregate net bandwidth amounts to 261 GB/s. The two main tasks of the GTU are the calculation of level-1 trigger contributions from a large number of track properties in about 2 µs and the preparation of the event data for read-out. Accordingly, the data processing on the GTU features a trigger path, which is optimised for low latency, and a data path, which equips the detector with the capability to buffer up to 4 events (multi-event buffering, MEB). The derandomisation of the incoming data rate fluctuations with multiple event buffers minimises the readout related dead time. The data transfer from the GTU to the DAQ contributes to the dead time only when the read-out rate approaches the rate which saturates the output bandwidth as shown in Fig. 16.
The GTU consists of three types of FPGA-based processing nodes organised in a three-layer hierarchy (see Fig. 17). The data from one stack are received by the corresponding TMU. Each TMU implements the global online tracking, which combines pre-processed track segments into tracks traversing the corresponding detector stack, as the first stage of the trigger processing (see Section 12). The TMUs furthermore implement the initial handling and buffering of incoming events as a pipelined data-push architecture. Input shaper units monitor the structural integrity of the incoming data and, if necessary, restore it to a form that allows for stable operation of all downstream entities. Dual-port, dual-clock BRAMs in the FPGA are utilised to pack the data of the 12 incoming link streams into dense, wide lines suitable for storage in the SRAM. The SRAM provides buffer space for multiple events and its controller implements the required write-over-read prioritisation to ensure that data can be handled at the full receiver bandwidth. On the read side, an interface is provided to read out or discard stored events according to the control signals generated by the segment control on the SMU. The top-level TGU consolidates the status of the segments, which operate independently in terms of read-out, as well as the segment-level trigger contributions. It constitutes the interface to the CTP, to which it communicates the detector busy status and the TRD-global trigger contributions for various signatures (see Section 12).

Detector Control System
The purpose of the DCS is to ensure safe detector conditions, to allow fail-safe, reliable and consistent monitoring and control of the detector, and to provide calibration data for offline reconstruction. In addition it provides detailed information on subsystem conditions and full functionality for expert monitoring and detector operation. Tools were implemented to reduce the operational complexity and the information on detector conditions to a level that allows operators to monitor and handle the detector in an intuitive and safe way. The TRD DCS is integrated with the rest of the ALICE detector control systems into one system which is operated by one operator.

Architecture
The hardware architecture of the DCS can be divided into three functional layers. The field layer contains the actual hardware to be controlled (power supplies, FEE, etc). The control layer consists of devices which collect and process information from the field layer and make it available to the supervisory layer. Finally, the devices of the control layer receive and process commands from the supervisory layer and distribute them to the field layer.
The software on the supervisory layer is distributed over 11 server computers. It is based on the commercial Supervisory Control and Data Acquisition (SCADA) system PVSS II from the company ETM [79], now called Simatic WinCC [80]. The implementation uses the CERN JCOP control framework [81], shared by all major LHC experiments. This framework provides high flexibility and allows for easy integration of separately developed components in combination with dedicated software developed for the TRD, including Linux-based processes.
The software architecture is a tree structure that represents (sub-)systems of the detector and its devices, as shown in Fig. 18. The entities at the bottom of the hierarchy represent the devices (device units), logical entities are represented by control units. The DCS system monitors and controls 89 low voltage (LV) power supplies with more than 200 channels, and 1044 high voltage channels. The system also monitors the electronics configuration of more than one million read-out channels, the GTU, and the cooling and gas systems.

Detector safety
To ensure the safety of the equipment, nominal operating conditions are maintained by a hierarchical structure of alerts and interlocks. Whenever applicable, internal mechanisms of devices (e.g. power supply trip) are used to guarantee the highest level of reliability and security. Thresholds and status of the interlocks are controlled by the system, but the functioning of the device is independent of the communication between hardware and software. The possible range of applied settings (e.g. anode channel high voltage) is limited to a nominal range to prevent potential damage due to operator errors.
In addition, the system employs a three-level alert system, which is used to warn operators and detector experts of any unusual detector condition.
On the control and supervisory layer, cross system interlocks protect the devices and ensure consistent detector operation. A few examples are:
-In case of a failure of the cooling plant for the FEE, a PLC-based interlock disables the LV power supplies.
-The temperature of the FEE is monitored at the control and supervisory level and interlocked with the PCU to switch off the devices in case of overheating or loss of communication to the SCADA system.
-In case of a single LV channel trip, the corresponding FEE channels are consistently switched off.
-Unstable LHC beam conditions, e.g. during injection or adjustment of the beam optics, pose a potential danger to gas-filled detectors. Therefore the HV settings are adapted to the LHC status (see Section 7.2). At injection, the anode voltages are decreased automatically to an intermediate level to reduce the chamber gain. Restoring the nominal gain is inhibited until the LHC operators declare stable beams via a data interchange protocol.

High voltage
The HV system comprises 36 HV modules in 5 crates. The 1044 HV channels, one channel of each polarity providing the anode and drift voltage of each chamber, are controlled via a 250 kbit/s CAN bus through a dedicated Linux-based DIM server [82]. The published DIM services, commands and remote procedure calls (RPC) resemble the logical structure of items used in commercial process control servers: the command to change a setting is confirmed by the server via a read-back setting. In addition, the actual measured value from the device is published. Update rates for different services can be adjusted independently.
The HV gain and drift velocity are equilibrated for each chamber individually to compensate for small differences in the chamber geometry. Changes of environmental conditions (atmospheric pressure and temperature) as well as small variations of the gas composition cause changes in gas gain and drift velocity. To ensure stable conditions for the level-1 trigger (see Section 12), these dynamic variations are compensated by automatic adjustments of the anode and drift voltages which are performed in between runs. These and other automatic actions on the HV are described in Section 7.2.

Detector operation
The DCS employs a dedicated Graphical User Interface (GUI) and a Finite State Machine (FSM). The FSM allows experts and operators intuitive monitoring and operation of the detector. The FSM hierarchy reflects the structure of subsystems and devices shown in Fig. 18. Detector conditions are mapped to FSM states, and these are propagated from the device level upwards to the FSM top node. Standard operational procedures (configuration of read-out and trigger electronics, ramping voltages etc.) are carried out via FSM commands which propagate down to the devices and cause a transition to a different state.
The GUI for detailed monitoring and expert operation comprises a dedicated panel for each node in the FSM tree. An example is shown in Fig. 19. Detector subsystem 'ownership', i.e. the right to execute FSM commands and change the detector state, is only granted to a single operator at a time, and is represented by symbolic 'locks'. Operators can work on-site or access the DCS system remotely through appropriate gateways.
The monitoring data acquired by the DCS system are stored in dedicated databases. Dedicated trending GUIs allow the experts to visualise the time dependence of the detector conditions. During data taking, the monitoring data needed for detector calibration is queried and made available for offline analysis (see Section 7.3).

Operation
In this section, the commissioning of the detector and the required infrastructure is described first, followed by the operation and performance for the different collision systems.

Commissioning
The service connections in the cavern were prepared and tested in parallel to the construction of the supermodules. The low-voltage connections were tested with dummy loads and the leak tightness of the cooling loops was verified. The Ethernet connections were checked using both cable testers and standalone DCS boards. The optical fibres for the read-out were checked for connectivity and correct mapping. These tests were crucial in order to identify connection problems prior to the detector installation, when all connections were still well accessible.
The supermodules were installed in different installation blocks as described in Section 2. Prior to the installation the supermodules were tested at the surface site. They were rotated around the $z_{\mathrm{lab}}$-axis to the orientation corresponding to their foreseen installation position (e.g. relevant for cooling). A test setup provided all relevant services (low/high voltage, cooling, Ethernet, read-out, ...) to allow a full system test of each supermodule. The testing procedure included basic functionality tests, such as water and gas tightness, front-end electronics stress tests, read-out tests as well as checks of the noise level [66].
After successful surface testing, the supermodules were installed into the space frame in the cavern (see Section 2.3). Subsequently, the services were connected and the basic tests described above were repeated to verify operation in the final setup. At this stage, also the full read-out of the detector with the experiment-wide trigger and data acquisition systems was commissioned. To check the data integrity of the read-out chain, test pattern data, generated either in the FEE or in the GTU, were used. Errors observed during those tests, e.g. bitflips on individual connections on a read-out board, were cured by switching to spare lines or by masking channels from the read-out if a correction was not possible. After establishing the read-out, pedestal runs (without zero suppression) were recorded to determine the baseline and noise of each channel. If needed, further data were recorded to perform a Fourier analysis in order to identify and fix noise sources, e.g. caused by missing ground connections. In addition, these runs were used to identify inactive channels which cannot be read out.
After the installation of each block of new supermodules, a dedicated calibration run was performed before the actual data taking. The detector is read out with radioactive $^{83\mathrm{m}}$Kr distributed through the gas system (see Section 10.2). Since this was usually the first high-rate data taking after the end-of-year shutdown (and installation), these runs and the preparations for them were an important step in getting ready for physics data taking.
Before each physics production run, periods of cosmic-ray data taking were scheduled to study the performance of the detector system, to align individual detector components (see Section 9.1) and to provide reference spectra for particle identification (see Section 11). Data were obtained with and without magnetic field. A two-level trigger condition was used to ensure sufficient statistics in the detector acceptance, even when only the first supermodules were installed in the horizontal plane (see Section 12.3).

High voltage operation
To avoid HV trips during critical LHC phases such as beam injection, the anode voltages are automatically reduced to an intermediate level (see Section 6). Based on measurements in pp, p-Pb and Pb-Pb collisions in RUN 1, the time-averaged current per chamber was estimated to be about 200 nA. This corresponds to a total accumulated charge of less than 0.2 mC per cm of wire for RUN 1. As the chambers were validated for accumulated charges above 10 mC/cm, no ageing effects are expected during the operational lifetime of the TRD. Indeed, no deterioration of the tracking, track matching, and energy resolution performance has been observed so far.
The average anode current as a function of the interaction rate, as measured by the T0 detectors used for the ALICE luminosity measurement, has a linear dependence with a slope of 1/200 nA/Hz for p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV. The slope parameter was obtained from different LHC fills ranging from minimum-bias data taking up to high interaction rate running, where the LHC background conditions can differ. Under the vacuum conditions in RUN 1, about 1/3 of the current was due to the background rate, which is nearly negligible in RUN 2.
The measured current shows the expected dependence on the detector occupancy. The probability for pile-up events in, e.g. p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV at 200 kHz interaction rate is about 14% when averaged over time, with a maximum of ∼24%, as calculated from the bunch spacing and the number of bunch crossings in the LHC filling scheme [2,83] as well as the integration time of the read-out chamber (drift length/drift velocity).
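Such a pile-up estimate can be sketched with Poisson statistics applied to the colliding bunch crossings that fall within the integration window. In the sketch below the revolution frequency is the nominal LHC value, while the number of colliding bunch pairs, the interaction rate, and the treatment of additional interactions as an independent Poisson process are simplifying assumptions, so the printed numbers are not the ones quoted above.

```python
import math

# Illustrative pile-up estimate; all inputs are assumptions, not the actual
# LHC filling scheme behind the quoted 14% / ~24% values.
F_ORBIT = 11245.0     # LHC revolution frequency in Hz
N_COLLIDING = 100     # assumed number of colliding bunch pairs
RATE = 200e3          # interaction rate in Hz

# mean number of interactions per colliding bunch crossing
mu = RATE / (N_COLLIDING * F_ORBIT)

def pileup_probability(n_other_crossings_in_window: int) -> float:
    """Probability of at least one additional interaction within the chamber
    integration window, for a triggered crossing with the given number of
    other colliding crossings inside that window (simplified Poisson model)."""
    return 1.0 - math.exp(-mu * (1 + n_other_crossings_in_window))

for n_other in (0, 1, 2):
    print(f"{n_other} neighbouring crossings in window: "
          f"{pileup_probability(n_other):.1%}")
```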
For the level-1 trigger it is crucial to reduce the time dependence of the drift velocity and the gain to a minimum. The former impacts the track matching, the latter the electron identification. To ensure the required stability, the anode and drift voltages are adjusted to compensate for pressure changes (the temperature is sufficiently stable). The parameters for the correction were obtained by correlating the calibration constants with pressure (see Section 10). A relative pressure change dp/p results in a relative change of the gain of dG/G = (−6.76 ± 0.04) dp/p and of the drift velocity of d$v_{\mathrm{d}}$/$v_{\mathrm{d}}$ = (−1.41 ± 0.01) dp/p [84]. In addition, the dependences of the gain and drift velocity on the anode and drift voltage, respectively, as obtained from test beam measurements [85] were used (from RUN 2 onwards the dependence of the gain on the voltage was taken from the krypton calibration runs). This results in voltage changes of about 0.83 V and 1.4 V for a pressure change of 1 mbar. During RUN 1 the gain and the drift velocity could be kept constant within about 2.5% and 1%, respectively. These values include the precision of the determination of the calibration constants (see Section 10). The variations can be further reduced by measuring and correcting for the gas composition using a gas chromatograph installed during LS 1.
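A minimal sketch of the automatic correction, assuming the two quoted slopes apply to the anode and drift voltages respectively and that the voltages are raised when the pressure rises:

```python
def hv_corrections(delta_p_mbar: float) -> tuple:
    """Voltage corrections for a pressure change delta_p (in mbar), using the
    approximate slopes quoted above; the assignment of 0.83 V/mbar to the
    anode and 1.4 V/mbar to the drift voltage and the sign convention are
    assumptions made for this sketch."""
    return 0.83 * delta_p_mbar, 1.4 * delta_p_mbar

# example: a 5 mbar pressure increase
d_anode, d_drift = hv_corrections(5.0)
print(f"anode: {d_anode:+.1f} V, drift: {d_drift:+.1f} V")
```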
During RUN 1, 10% of the anode and 5.5% of the drift channels turned out to be problematic (see Fig. 35). The respective channels had either to be operated at a reduced anode voltage or to be switched off. As the detector is segmented into 5 stacks along the beam direction and 6 layers in the radial direction, the loss of a single chamber in a stack is tolerable and excellent performance is still achieved for tracking and particle identification (see Sections 8 and 11). Most of the problematic chambers showed anomalous current behaviour as a function of time. The de-installation of a supermodule and the disassembly of the individual read-out chambers, followed by detailed tests, revealed that the inspected problematic anode and drift channels had broken filter capacitors (4.7 nF/3 kV). Thus, the 4.7 nF capacitors (see Section 4.3) were removed from the resistor chain in the last supermodules built and installed during LS 1 (5 supermodules).

In-beam performance
After commissioning with cosmic-ray tracks and krypton calibration runs in 2009, the detector went into operation and worked reliably during the first collisions at the LHC on 6 December 2009. Since then, the detector has participated in data taking for all collision systems and energies provided by the LHC [2]:
-pp collisions from $\sqrt{s} = 0.9$ to 13 TeV at low interaction rates (minimum-bias data taking) and high intensities (minimum-bias data taking and rare triggering) with a maximum interaction rate of 200-500 kHz. During the rare trigger periods, the detector contributed level-1 triggers on high-$p_{\mathrm{T}}$ electrons and jets (see Section 12).
-p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV and 8 TeV with interaction rates at the level of 10 kHz (minimum-bias data taking) and at maximum 200 kHz (rare triggering). The detector contributed the same triggers as in the pp running scenario.
-Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV and 5.02 TeV with maximum interaction rates of up to 8 kHz (minimum-bias and rare triggering).
At the beginning of a fill, once all detectors within ALICE are ready for data taking, a global physics run is started. A run is defined in ALICE as an uninterrupted period of data taking, during which the conditions (trigger setup, participating detectors, etc.) do not change. A run can last from a few minutes to several hours until either the experimental setup or conditions have to be changed or the beam is dumped. An additional end-of-run (EOR) reason is given by the occurrence of a problem related to a given detector or system. The detector parameters measured during a run, such as the voltages and currents of the anode and drift channels as well as temperatures of the FEE, are dumped at the EOR to the Offline Conditions Database (OCDB) via the Shuttle framework [86,87]. The relevant parameters can then be used in the offline reconstruction and analysis.
In order to ensure sufficiently stable conditions during a run, any change, such as the failure of a part of the detector, e.g. due to a LV/HV trip, triggers the ending of the run. In order to avoid too frequent interruptions, the failure of a single chamber within a stack is ignored. Technically, this is realised using the so-called Majority Unit within DCS.
All subcomponents of the TRD detector (infrastructure and gas system) are monitored via DCS (see Section 6). In case any entity deviates from nominal running conditions by pre-defined thresholds a warning is issued. The single entity is either recovered by the DCS operator in the ALICE Run Control Centre or by an expert intervention. During RUN 1 data taking, most interventions were related to the recovery of single event upsets (SEU) and HV trips of problematic channels by re-configuration of the FEE or ramping up of the anode/drift channels. For RUN 2 an automatic recovery of the FEE and HV was put in place.

Fig. 20: Event size vs. charged-particle multiplicity for various collision systems for one supermodule. To obtain the charged-particle multiplicity, global tracks (see Section 8) fulfilling minimum tracking quality criteria were counted on an event-by-event basis.

Read-out performance
The event size depends on the charged-particle multiplicity. It is therefore influenced by the collision system and the background conditions of the LHC. The event size vs. charged-particle multiplicity is shown for various collision systems for one supermodule in Fig. 20. For the most central Pb-Pb collisions an event size of 800 kB per supermodule is found.
The dead time per event is composed of the front-end processing and transmission time to the GTU and a potential contribution from the shipping to DAQ. On average, the former scales approximately linearly with the event size, while the latter is suppressed by the MEB as long as the read-out data rate stays sufficiently below the effective link bandwidth. The typical event sizes of 7 kB, 14 kB, and 200 kB in minimum-bias data taking for pp, p-Pb, and Pb-Pb collisions result in front-end contributions of 20 µs, 25 µs, and 50 µs, respectively. These numbers do not include the contribution from the data transfer to the DAQ. However, as illustrated by the Pb-Pb case shown in Fig. 16, the detector is typically operated in the linear range of the curve, indicating that input rate fluctuations are absorbed by the MEB and that the read-out does not contribute significantly to the dead time.
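In the regime where the MEB absorbs the rate fluctuations, the dead-time fraction is then approximately the product of the read-out rate and the per-event front-end busy time. A minimal sketch using the typical numbers quoted above and in the next paragraph (the pairing of rates and busy times is an assumption for illustration):

```python
def deadtime_fraction(readout_rate_hz: float, busy_time_s: float) -> float:
    """Approximate fraction of time the detector is busy, valid while the
    product of rate and busy time is small and buffering hides the transfer."""
    return readout_rate_hz * busy_time_s

# assumed pairings of minimum-bias read-out rates and front-end busy times
for system, rate_hz, busy_s in [("pp", 850.0, 20e-6),
                                ("p-Pb", 850.0, 25e-6),
                                ("Pb-Pb", 350.0, 50e-6)]:
    print(f"{system}: {deadtime_fraction(rate_hz, busy_s):.1%}")
```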
The read-out rate during RUN 1 and so far in RUN 2 ranged from about 100 Hz in rare-trigger periods to about 850 Hz in minimum-bias data taking in pp and p-Pb collisions. For minimum-bias Pb-Pb data taking, the read-out rate was about 100 Hz in RUN 1 and up to 350 Hz so far in RUN 2.

Radiation effects
The radiation load on the TRD during RUN 1 and RUN 2 (until the end of 2016) was rather low, both in terms of flux and dose. The following radiation estimates for the inner radius of the TRD are based on simulations with the FLUKA transport code [88], taking into account the measured multiplicities of Pb-Pb, p-Pb and pp collisions [89][90][91][92][93][94] as well as the running scenarios (luminosities, running time, and interaction rate). For the indicated time range, the Total Ionisation Dose (TID) and the Non-Ionising Energy Loss (NIEL), the latter quoted as 1-MeV neutron equivalent fluence, were $7\cdot10^{-3}$ krad and $2\cdot10^{9}$ cm$^{-2}$, respectively. The flux of hadrons is highest in Pb-Pb collisions, because it is proportional to the product of the interaction rate and the particle multiplicity. For Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV, the flux of hadrons with energies above 20 keV and of charged particles is about $3.8\cdot10^{-2}$ kHz/cm$^{2}$ and $2.5\cdot10^{-2}$ kHz/cm$^{2}$, respectively. The radiation load in terms of flux and dose is far below the values for which the experiment was designed [1].
In the radiation environment described above, very few SEUs are observed in the electronics. The most affected device is the DCS board, for which SEUs result in occasional reboots (a few DCS boards per LHC fill). The DCS board is needed for control and monitoring but is not part of the read-out chain, meaning that the reboots do not affect the data taking. The external RAM on the DCS board can be monitored for SEUs by writing and verifying known patterns in unused areas of the ∼13 MB memory per chamber. During 2.5 months of pp data taking at LHC luminosities of about $5\cdot10^{30}$ cm$^{-2}$s$^{-1}$, 20 SEUs were observed in the external RAM (see Fig. 21), i.e. a negligible number compared to the occasional reboots of a few DCS boards.
The memories of the TRAPs are Hamming-protected and, thus, resilient to SEUs. However, the configuration registers are not protected and can be affected by radiation. Therefore, the configuration is compressed and written to a Hamming-protected memory area. In this way, the registers can be checked (and corrected) against the compressed configuration.
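A minimal sketch of this check-and-correct scheme, assuming a compressed reference blob and generic read_register/write_register accessors (both the data layout and the accessor functions are hypothetical, not the TRAP interface):

```python
import zlib

def make_reference(config: dict) -> bytes:
    """Pack and compress a register configuration (assumed: address -> value)."""
    blob = b"".join(addr.to_bytes(2, "big") + value.to_bytes(4, "big")
                    for addr, value in sorted(config.items()))
    return zlib.compress(blob)

def verify_and_correct(read_register, write_register, reference: bytes) -> int:
    """Compare the live registers against the stored reference and rewrite any
    mismatch; returns the number of corrected registers."""
    blob = zlib.decompress(reference)
    corrected = 0
    for i in range(0, len(blob), 6):
        addr = int.from_bytes(blob[i:i + 2], "big")
        value = int.from_bytes(blob[i + 2:i + 6], "big")
        if read_register(addr) != value:
            write_register(addr, value)
            corrected += 1
    return corrected
```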

Data quality assurance
The Data Quality Monitoring framework (DQM) provides online feedback on the data and allows problems to be quickly spotted and identified during data taking. The Automatic MOnitoRing Environment (AMORE) was developed for ALICE [95] and allows run-based, detector-specific analyses on the raw data. The results are visualised in a dedicated user interface. The monitored observables, such as noise level, event size per supermodule, trigger timing, FEE not sending data, are compared with reference values or diagrams (depending on the data taking scenario). Deviations from the references indicate a problem to the operator. Based on the information obtained from the online DQM all runs are directly marked with a quality flag, both globally and for the individual ALICE subdetectors. For the offline physics analyses, lists of runs are selected based on these flags according to the physics case under study.

Pretrigger performance
A dedicated wake-up signal is required for the FEE (see Section 5.1). It should reflect the level-0 trigger condition as closely as possible. However, as it needs to be generated before the actual level-0 trigger, it cannot use the same information. This introduces some inefficiency into the TRD read-out. In the early RUN 1 LHC filling schemes (e.g. during the LHC ramp-up in 2009) with only a few colliding bunches per orbit, it was possible to send a wake-up signal for all of the bunch crossings with potential interactions. This resulted in a fully efficient operation [71]. During this time, the pretrigger system was commissioned to use the V0 and T0 signals as inputs. They could then also be used for filling schemes with many bunches. The trigger condition was configured as closely as possible to the ALICE level-0 interaction trigger, i.e. a coincidence of either the V0 or the T0 detectors (simultaneous signals on the A- and C-side, see Section 2). The efficiency of the V0- and T0-derived wake-up signals depends on the discrimination thresholds used for those detectors and on the inherent dead time between the pretrigger and the abort or end of the read-out (see Section 5). The latter is particularly important when subsequent collisions are close in time, e.g. in LHC filling schemes that have bunch trains with 25 or 50 ns bunch spacing [96]. For runs taken at low interaction rates the pretrigger efficiency is above 97%; for higher rates the efficiency depends on the colliding bunch structure of the filling scheme and reaches average values down to about 83% in RUN 1 [71]. These inefficiencies were avoided with the LM system used in RUN 2 (see Section 5.1).
The analysis of electrons from heavy-flavour hadron decays in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV in events satisfying the pretrigger condition showed no bias compared to results from events triggered with the ALICE level-0 minimum-bias interaction trigger [97].

Tracking
The charged-particle tracking in the ALICE central barrel is based on Kalman filtering [98]. Track finding and fitting are performed simultaneously [2]. The algorithm operates on clusters of track hits from the individual detectors. The clusters carry position information and, depending on the detector, the amount of charge from the ionisation signal. The cluster parameters are calculated locally from the raw data, implying that the cluster finding can be parallelised.
The global tracking starts from seed clusters at the outer radius of the TPC (see Fig. 1). During the first inward propagation of the tracks, previously unassigned TPC clusters are attached while the track parameterisation is updated at the same time. If possible, the track is further propagated to the ITS. Subsequently, an outward propagation adds information from TRD, TOF, and HMPID. A second inward propagation is used to obtain the final track parameters, which are stored at a few important detector positions, most importantly at the primary vertex.
The TRD contributes to the tracking in various ways. First, it adds roughly 70 cm to the lever arm, which significantly improves the momentum resolution for high-$p_{\mathrm{T}}$ tracks. Second, it increases the precision and efficiency of assigning clusters from the detectors at larger radii, in particular the TOF, to propagated tracks. In addition, the TRD is used as a reference to obtain correction maps for distortions in the TPC, which arise from the build-up of space charge at high interaction rates. For this, the TRD and ITS track segments are reconstructed using the TPC tracks as seeds (with relaxed tolerances to account for potential distortions). Then, the estimate of the real track position is built as a weighted average of the ITS and TRD refitted tracks (without TPC information). The TPC distortions are deconvoluted from the residuals between these interpolations and the measured TPC cluster positions.
The tracking in the TRD can be subdivided into the formation of tracklets (track segments within one read-out chamber) from clusters and the updating of the global tracks based on the tracklets. These steps are performed layer-by-layer. The chambers within a layer can be treated in parallel. For each layer, a seed track is prepared by propagation from the TPC and used to calculate the intersection with a chamber. Based on this information a tracklet is formed from the clusters in the vicinity of this intersection and then the track parameterisation is updated accordingly. In the following, details of the individual steps will be given.
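Updating the global track with a tracklet measurement follows the standard Kalman-filter formalism. The one-dimensional sketch below shows the weighting of the propagated track position with the tracklet position and its uncertainty; the real implementation updates the full track state vector with its covariance matrix, so this is only a schematic illustration.

```python
def kalman_update_1d(y_track: float, var_track: float,
                     y_tracklet: float, var_tracklet: float):
    """Combine a propagated track position (y_track, variance var_track) with
    a tracklet measurement (y_tracklet, variance var_tracklet)."""
    gain = var_track / (var_track + var_tracklet)         # Kalman gain
    y_updated = y_track + gain * (y_tracklet - y_track)   # weighted position
    var_updated = (1.0 - gain) * var_track                # reduced uncertainty
    return y_updated, var_updated

# example: a 1 mm track uncertainty combined with a 0.6 mm tracklet measurement
print(kalman_update_1d(0.0, 1.0**2, 0.5, 0.6**2))
```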

Clusterisation
Primary ionisation in the detector gas leads to a signal that spreads over several pads. Because of the slower ion drift, the charge carries over into subsequent time bins, resulting in a correlation between time bins (see Section 2.1). The cluster algorithm combines the data from adjacent pads in the same time bin, producing clusters with information on position and total charge. The former is calculated from the weighted mean of the charge shared between adjacent pads (up to three). Look-Up Tables (LUT) are used to relate the measured charge distribution to the actual position. These LUTs are the result of calculations for the different pad widths, based on measurements in a test beam [46]. The cluster position can deviate from the LUT values because of detector parameters which are subject to calibration (see Section 10), most importantly the drift velocity $v_{\mathrm{d}}$ and the time offset $t_{0}$ (the time corresponding to the position of the anode wires, see Fig. 30). In addition, a correction for the E × B effect is applied. The complete position characterisation also includes the estimated uncertainty, which determines the weight for updating the global track. The uncertainties are derived from differential analyses of Monte Carlo simulations. Cluster properties such as the deposited energy, time bin, and reconstructed position relative to the pad with the maximum charge are taken into account, as well as particle-level characteristics such as the electric charge and the incident angle. A linear model relates these conditions to the uncertainty assigned to a cluster.
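A simple centre-of-gravity estimate over the three adjacent pads illustrates the first step of the position reconstruction; in the actual reconstruction, the measured charge sharing is translated into a position through the test-beam-derived LUTs rather than the analytic expression used in this sketch.

```python
def cluster_position_cog(q_left: float, q_centre: float, q_right: float,
                         pad_width: float) -> float:
    """Centre-of-gravity position relative to the centre of the pad with the
    maximum charge; a LUT would map this raw estimate to the true position."""
    return pad_width * (q_right - q_left) / (q_left + q_centre + q_right)

def cluster_charge(q_left: float, q_centre: float, q_right: float) -> float:
    """Total cluster charge from the three adjacent pads."""
    return q_left + q_centre + q_right
```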

Track reconstruction
For the preparation of the TPC-based track seed used to match with the TRD clusters, the Kalman parameterisation (at the outer radius of the TPC) is propagated to the radial position of the anode wires of a given chamber. At this radius the position is least affected by variations in calibration parameters. If a chamber is rotated with respect to the tracking frame, the radial position of the anode wires depends on the intersection point of the track in the y-z plane. As this is only known after the propagation, the preparation of the track seed is an iterative process.
The clusters that are assigned to the seed track in a given layer are combined into tracklets. A straight-line fit is sufficient for their description, since the sagitta of the trajectory within a chamber is negligible (of the order of tens of microns).
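The tracklet fit can thus be sketched as an ordinary least-squares straight line of the cluster y positions versus their radial coordinate; the variable names below are placeholders for illustration.

```python
import numpy as np

def fit_tracklet(x_radial, y_cluster):
    """Straight-line fit y = slope * x + offset of the cluster positions within
    one chamber; slope gives the local inclination, offset the position."""
    slope, offset = np.polyfit(np.asarray(x_radial), np.asarray(y_cluster), deg=1)
    return slope, offset
```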
Since in the read-out chamber the electrons drift in the radial direction, i.e. approximately parallel to the track, and because of the long ion tails, the signals pile up. The measured charges, sampled in time intervals of 100 ns, are therefore correlated between different time samples. Since such correlations degrade the angular resolution, a tail cancellation correction is applied [46]. It subtracts an exponential tail, proportional to the current signal, from the subsequent samples of each read-out pad.
The number of pads on the read-out plane onto which a track is projected depends on the track incident angle. For decreasing transverse momentum, more pads carry a signal. The Lorentz angle also affects this spread. For negatively (positively) charged particles the Lorentz drift is along (opposite to) the track inclination, independent of the polarity of the magnetic field. On average, negatively charged particles are thus spread over fewer pads than positively charged ones. In the right panel of Fig. 22, an example of a positively charged particle with $p_{\mathrm{T}} = 0.5$ GeV/$c$ (worst case) is shown. Its projection spans 6 pads.
The procedure to find candidates for seeds involves a preliminary stage in which clusters are searched for in the neighbourhood of the propagated seed. Figure 23 shows the mean and width of the residuals in ∆y between the resulting tracklets and the seed in layer 0, as a function of the seed $p_{\mathrm{T}}$. The imperfect tail cancellation results in different position biases for tracklets from positive and negative tracks, the signal spreading over more pads for the former.

Performance
The relative frequencies of the number of tracklets assigned to a track are shown in Fig. 24 for pp collisions at $\sqrt{s} = 13$ TeV. Tracks consisting of 6 layers account for more than 50% (60%) for $p_{\mathrm{T}} < 1$ GeV/$c$ ($p_{\mathrm{T}} > 1$ GeV/$c$). Tracks with 4 and 5 layers are mainly produced by particles crossing dead areas of the detector.
A crucial figure of merit for the tracking is the fraction of global tracks matched to the TRD. This includes acceptance effects between the TPC and the TRD as well as between the TRD and the TOF detector. The momentum dependence is shown in Fig. 25 for tracks with at least 4 layers (about 75% of all tracks). For positively charged particles, the Lorentz drift of the electrons is opposite to the track inclination, which (together with the tail cancellation) results in a slightly higher efficiency.
A systematic analysis of the position resolution in the bending plane (rϕ) is presented in Fig. 26. The resolution ($\sigma_{\Delta y}$) is expressed as the width of a Gaussian fit to the difference between the position reconstructed via tracklets and different references (∆y). It is shown as a function of the inverse transverse momentum scaled with the particle charge ($q/p_{\mathrm{T}}$). First, the ideal position resolution is derived from Monte Carlo simulations by comparing the reconstructed tracklet position with the true particle position. The tracks from the simulation yield a slightly worse resolution than the theoretical limit, since the latter does not consider the pad tilting. It is worth noting that the simulated position resolution describes the measured dependence reasonably well. Effects of remaining miscalibration and misalignment of all central barrel detectors lead to a degradation of the resolution in the TRD of about 500 µm.
The good position resolution of the TRD can be exploited in the central barrel tracking of ALICE to improve the transverse momentum resolution of reconstructed particles. Figure 27 shows the $q/p_{\mathrm{T}}$ resolution of the combined ITS-TPC tracking with and without the TRD for various running scenarios. In all considered cases the TRD was also used as reference to obtain the correction maps for the distortions in the TPC. The inclusion of the TRD in the tracking additionally improves the resolution by about 40% at high transverse momentum for pp collisions recorded at both low (12 kHz) and high (230 kHz) interaction rates. For example, in the low interaction rate scenario of pp collisions, the achieved $q/p_{\mathrm{T}}$ resolution is 3% at 40 GeV/$c$. In addition, the inclusion of the TRD in the track reconstruction improves the impact parameter resolution and the reconstruction of tracks that pass at the edges of the TPC sectors, thus increasing the acceptance of the experiment.

Alignment
The physical alignment of the detectors during installation (see Section 2.3) has a finite precision of the order of 1 mm for chambers within a supermodule and of 1 cm for supermodules in the spaceframe. The subsequent software alignment, i.e. accounting for the actual positions of supermodules and chambers in the reconstruction and simulation software, is the subject of this section. The alignment parameters (three shifts and three rotation parameters per alignable volume) are deduced from optical survey data and/or from reconstructed tracks. In the latter case, the obtained values have to be added to those already used during the reconstruction. The obtained alignment sets are stored in the OCDB and used in the subsequent reconstructions. The different alignment steps are described in the following subsections. The alignment is checked and, if necessary, redone after shutdown periods and/or interventions that may affect the detector positions, e.g. installations of new supermodules.

Internal alignment of chambers with cosmic-ray tracks
The internal detector alignment, i.e. the relative alignment of the read-out chambers within one stack, is performed with cosmic-ray tracks recorded without magnetic field (Fig. 28, left). During the minimisation the positions of the outermost chambers of a stack (L0 and L5) are kept fixed; the bias resulting from this constraint is removed later during the stack alignment. Chamber tilts are neglected. The typical spread (Gaussian σ) of the residual between tracklet and straight track is about 1 mm for a single chamber (see Table 5). The initial chamber misalignments of 0.6-0.7 mm are reduced to 0.2-0.3 mm (r.m.s.). The minimum required statistics is $O(10^{3})$ tracks per read-out chamber (i.e. per stack). For a few stacks, located around ϕ = 0° and ϕ = 180°, with low statistics of cosmic-ray tracks, charged tracks from pp collisions taken without magnetic field are used instead.

Table 5: Typical width of the tracklet-to-track residuals in y observed during the internal alignment procedure. The residuals are between a tracklet (measured by a single chamber) and a track (defined by the remaining chambers of the stack). L0-L5 refer to the six TRD chambers within a stack. The L0 and L5 resolutions are given only for comparison purposes as the positions of these two chambers are fixed during the minimisation.
The internal y alignment sets deduced from cosmic-ray tracks and from pp collisions agree within 0.18 mm (Gaussian σ). From this, the accuracy of the internal alignment is estimated to be about $\Delta y = 0.18\,\mathrm{mm}/\sqrt{2} = 0.13$ mm. Similar agreement exists between cosmic-ray runs taken in different periods.

Survey-based alignment of supermodules
The supermodules are subject to an optical survey after installation and, subsequently, after every hardware intervention that may affect the geometry of the detector. For this measurement, survey targets are inserted into precision holes existing at each end of every supermodule.
Because of poor accessibility of the muon-arm side, the supermodules are only surveyed on one side (A-side). Four of the six alignment parameters, x, y, z shifts and the rotation around the z-axis, are then determined for each supermodule by fitting the survey results. The typical survey precision is 1 mm. The survey-based alignment procedure reduces the supermodule misalignment from its initial value of 1-2 cm to a few mm.

External alignment with tracks from beam-beam collisions
The external alignment, i.e. the alignment of TRD volumes with respect to the TPC, is performed with charged-particle tracks recorded with magnetic field (Fig. 28, right). Only tracks with $p_{\mathrm{T}} > 1.5$ GeV/$c$ are used. First, all six alignment parameters of each TRD supermodule are varied to minimise the residuals. Subsequently, the alignment of each stack is refined by adjusting its x and y positions and its rotation around the z-axis. The tracklet-to-track residuals in y before and after alignment are shown in Fig. 29 for two supermodules. As can be seen, the initial misalignment and the degree of improvement vary from supermodule to supermodule. The typical width of the residuals (Gaussian σ) is about 2 mm (see Table 6). For a low number of tracks per stack, $N_{\mathrm{track}}$, the alignment precision is dominated by the statistical uncertainty; for larger track samples, systematic effects start to dominate. Figure 29 shows the effect of an alignment procedure applied to the same data set from which it was deduced. However, one single alignment set is used for the runs of a complete year. This raises the question of the universality and temporal stability of the alignment, which can be addressed by comparing alignment sets deduced from various portions of data. Separate analyses of positive and negative tracks yield two alignment sets that agree within 1 mm (r.m.s. of the y shifts). A larger difference (2 mm) is seen between the two magnetic field polarities. Such differences can result from mechanical displacements and/or from the fact that the TPC calibration is performed separately for the two polarities. The presence of a step at z = 0, in the middle of the central TRD stack in Fig. 29, indicates the latter. Several iterations of the TRD-to-TPC alignment and of the TPC calibration with respect to the TRD are needed to achieve the best possible precision. In order to address this entanglement of the alignment and calibration of the central barrel detectors, an alternative approach was developed during LS 1. It is based on a combined alignment and calibration fit performed using the Millepede algorithm [99]. The new method allows for a simultaneous alignment and calibration of the ITS, TRD, and TOF, followed by the calibration of the TPC. The procedure is being used successfully in RUN 2.

Calibration
The ALICE calibration scheme is explained in [2]. Here the calibration procedures for the TRD are described. The four basic calibration parameters for the TRD (time offset, drift velocity, gain, and noise) are illustrated in Fig. 30: the average pulse-height distribution as a function of drift time exhibits a peak (at the time corresponding to the position of the anode wires) and an edge (around 2.8 µs, corresponding to the entrance window). Since the calibrated time represents the distance from the anode wires, the position of the anode peak provides the time offset. The time span between the anode peak and the entrance-window edge is inversely proportional to the drift velocity. The mean pulse height is proportional to the gain and the width of the pedestal is proportional to the pad noise.

Table 7: Calibration parameters and the input data from which they are determined.
-pedestal runs: pad noise, pad status
-runs with $^{83\mathrm{m}}$Kr in the gas: relative pad gain
-physics runs (cpass0/1): chamber status, time offset, drift velocity, Lorentz angle, gain
While ionisation electrons are attracted to the anode wires by an electric field E, the presence of a magnetic field perpendicular to it, |E × B| > 0, leads to a Lorentz angle of about 9° between the electron drift direction and the direction of the electric field. Knowledge of the Lorentz angle is necessary for the reconstruction of the tracklets, described in Section 8.2 (see Fig. 22 and Fig. 46).
The complete list of the calibration parameters, organised according to the source from which they are determined, is given in Table 7. Once determined for a given run, the calibration parameters are stored in the OCDB and used in the subsequent reconstructions. In the following, the methods used to determine the values of the calibration parameters are discussed.

Pad noise and pad status calibration using pedestal runs
Short pedestal runs are taken roughly once per month during data taking. In these runs, events are triggered at random instants and the data are recorded without zero suppression. At the end of the run, an automatic analysis of the pedestal data is performed on the computers of the DAQ system [100]. One hundred events are sufficient to calculate the position of the baseline of the analogue pre-amplifier and shaper output (pedestal) and its fluctuation (noise) for all electronics channels. The results are subsequently collected by the Shuttle system [87] and transported to the OCDB. The mean noise is 1.2 ADC counts, corresponding to an equivalent of 1200 electrons. The pad-by-pad r.m.s. value is 0.17 counts. The precision of the measurement is 0.015 counts (r.m.s.). Pads that have a faulty connection to the FEE, are connected to a non-working FEE channel, have excessive noise, or are bridged with a neighbour are marked in the OCDB and treated correspondingly during the data taking and reconstruction chain (pad status).
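Conceptually, this analysis amounts to computing the mean and the spread of the ADC samples of each channel over the randomly triggered events, as in the sketch below (the array layout is an assumption).

```python
import numpy as np

def pedestal_and_noise(adc_samples):
    """adc_samples: array of shape (n_events, n_timebins) for one channel,
    recorded without zero suppression; returns (pedestal, noise) in ADC counts."""
    samples = np.asarray(adc_samples, dtype=float).ravel()
    return samples.mean(), samples.std()
```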

Pad gain calibration using 83m Kr decays
Pad-by-pad gain calibration of the TRD chambers is performed after every installation of new supermodules. It is done by injecting radioactive gas into the chambers and measuring the signals of the decay electrons. The method, developed by ALEPH [101,102] and DELPHI [103], is also used to calibrate the ALICE TPC [31].
Solid $^{83}$Rb decays by electron capture into gaseous krypton and populates, among others, the isomeric state $^{83\mathrm{m}}$Kr with an excitation energy of 41.6 keV and a half-life of 1.8 hours. The radioactive krypton is injected into the gas circulation system and is distributed over the sensitive volumes of all installed chambers. The krypton nuclei decay to their ground state by electron emission. The decay energy, comparable to the energy lost by a minimum-ionising particle traversing the sensitive volume of a read-out chamber (20-30 keV), is deposited within 1 cm of the decay point. For each decay, the total signal is calculated by integrating over y (pad column), z (pad row) and x (drift time), and filled into the histogram associated with the pad of maximum signal.
With three gas inlets to each supermodule (see Section 3), groups of 10 chambers are connected in series.
The difference between the decay rates seen in the first and last chamber of the chain was reduced to a factor of ∼3 by increasing the gas flow during the krypton calibration run. With an $^{83}$Rb source intensity of 5 MBq and a measurement time of one week, the collected statistics is of the order of a thousand counts per pad. This is sufficient to identify the expected decay lines in the distribution. An example is shown in Fig. 31. The histogram of each pad is fitted by stretching the reference distribution horizontally. The stretching factor is the measure of the pad gain. The energy resolution at 41.6 keV is 10%.
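A minimal sketch of such a fit, assuming the reference spectrum and the pad histogram are available as binned arrays (the least-squares criterion and the bounds are choices made for this illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def stretched_reference(ref_x, ref_y, stretch, x):
    """Evaluate the reference spectrum, stretched horizontally by 'stretch',
    at the bin centres x of the pad histogram."""
    return np.interp(np.asarray(x) / stretch, ref_x, ref_y, left=0.0, right=0.0)

def fit_pad_gain(pad_x, pad_y, ref_x, ref_y):
    """Return the stretch factor (relative pad gain) that best matches the pad
    spectrum to the reference spectrum in a least-squares sense."""
    pad_y = np.asarray(pad_y, dtype=float)
    def cost(stretch):
        model = stretched_reference(ref_x, ref_y, stretch, pad_x)
        if model.sum() > 0:
            model *= pad_y.sum() / model.sum()   # compare shapes, not yields
        return np.sum((pad_y - model) ** 2)
    return minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded").x
```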
The resulting pad gain factors for one particular chamber are shown in Fig. 32. The short-range variations of up to 10% reflect the differences between electronics channels. The long-range inhomogeneities originate from chamber geometry and are typically within ±15% (peak to peak). A detailed description of the krypton calibration can be found in [104] and [105].
The improvement of the chamber resolution achieved by the krypton-based pad-by-pad calibration is presented in Fig. 33. The histograms show the pulse height spectrum before calibration, after one and after two iterations (calibrations performed in consecutive years), respectively.

Chamber calibration using physics data
The anode and drift voltages of the individual chambers are adjusted periodically (once a year) to equalise the chamber gains and drift velocities. Moreover, an automatic procedure is in place that continuously adjusts the voltages depending on the atmospheric pressure, compensating the impact of the environment on the gas properties (see Section 7.2). This is important because the pulse height and the tracklet angle are used for triggering (see Section 12). In order to achieve the ultimate resolution for physics data analysis, the chamber status, time offset, drift velocity, Lorentz angle, and gain are calibrated run-by-run offline, using global tracks from physics runs. A sample of events of each run is reconstructed for this purpose. The required statistics is equivalent to $10^{5}$ pp interaction events. The first reconstruction pass (cpass0) provides input for the calibration. The second pass (cpass1) applies the calibration and the reconstructed events are used as input for the data quality assurance analysis, and for the second iteration of the calibration. The read-out chamber status and the chamber-wide time offset, drift velocity, Lorentz angle, and gain values are extracted from cpass0 and updated after cpass1. The time offset is obtained as indicated in Fig. 30. The drift velocity and the Lorentz angle are derived from the correlation between the derivative of the local tracking y coordinate with respect to the drift time, and the azimuthal inclination angle of the global track (see Fig. 34). The former represents the uncalibrated estimate of the tracklet angle. The latter is obtained from the extrapolation of the global track to the TRD. The correlation is fitted by a straight line. The effect of the pad tilt (dy/dz = tan(α), α = ±2°, see Section 2) is taken into account by adding the respective term to the global track inclination. The slope and the offset parameters give the drift velocity and the Lorentz angle, respectively.
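A minimal sketch of this correlation fit, with the fit quantities as placeholder arrays; following the text, the fitted slope yields the drift velocity, while the conversion of the offset into the Lorentz angle depends on the drift model and is therefore not spelled out here.

```python
import numpy as np

def fit_vdrift_lorentz(tan_phi_track, dy_dt_tracklet):
    """Straight-line fit of the uncalibrated tracklet slope dy/dt versus the
    tilt-corrected global-track inclination tan(phi): the slope gives the
    drift velocity, the offset carries the Lorentz-angle information."""
    slope, offset = np.polyfit(np.asarray(tan_phi_track),
                               np.asarray(dy_dt_tracklet), deg=1)
    return slope, offset
```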
The gain calibration factor is determined by histogramming, for each chamber, the deposited charge divided by the path length and taking the mean of this distribution. The last stage of the chamber calibration is to identify chambers for which a satisfactory calibration cannot be obtained or whose parameter values deviate strongly from the mean. These chambers are masked in the data analysis and in the respective simulation. The typical mean values, chamber-by-chamber variations, stability, and precision of the calibration parameters are given in Table 8. The chamber-by-chamber variation is quantified by the r.m.s. of the chamber distribution within one run. The stability is described via the maximum variations observed in one read-out chamber during half a year of running. The precision is defined as 1/√2 of the r.m.s. difference between the calibration parameters deduced from two high-statistics data sets taken under identical conditions.
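A minimal sketch of the straight-line fit used for the drift velocity and Lorentz angle extraction is given below; the input values are toy numbers, and the conversion of the slope and offset into physical units, which follows the procedure described above, is omitted.

#include <cstdio>
#include <vector>

struct LineFit { double offset, slope; };

// Unweighted least-squares fit y = offset + slope * x.
LineFit fitLine(const std::vector<double>& x, const std::vector<double>& y) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    const double d = n * sxx - sx * sx;
    return { (sy * sxx - sx * sxy) / d, (n * sxy - sx * sy) / d };
}

int main() {
    // x: tan(inclination) of matched global tracks (pad-tilt term already added),
    // y: uncalibrated dy/dt of the corresponding tracklets (toy values).
    std::vector<double> x = {-0.3, -0.1, 0.0, 0.1, 0.2, 0.3};
    std::vector<double> y = {-0.52, -0.21, -0.05, 0.11, 0.26, 0.42};
    const LineFit f = fitLine(x, y);
    std::printf("slope (-> drift velocity) %.3f, offset (-> Lorentz angle) %.3f\n",
                f.slope, f.offset);
    return 0;
}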

Quality assurance
As described before, during cpass1 the reconstructed events are subjected to a quality assurance (QA) analysis in which control histograms monitoring the quality of the calibrated data are filled. The analogous monitoring of raw data, performed online, is described in Section 7.3. As an example, two such QA histograms, representing the efficiency and the mean number of layers in each stack (equivalent to the number of active layers) for one particular run of the pp data taking in 2015, are shown in Fig. 35; the uninstrumented stacks in front of the PHOS detector are visible.

Particle identification
The TRD provides electron and charged-hadron identification based on the measurement of the specific energy loss and transition radiation. The total integrated charge measured in a tracklet [107], normalised to the tracklet length, is shown in Fig. 36 for electrons and pions in p-Pb collisions at √ s NN = 5.02 TeV. The electron and pion samples were obtained by selecting tracks originating from γ → e⁺e⁻ conversions in material and from the decay K⁰S → π⁺π⁻ via topological cuts and particle identification (PID) with the TPC and the TOF. The obtained electron sample has an impurity of less than 1%. Due to the larger specific energy loss and the transition radiation, the average charge deposit of electrons is higher than that of pions. Charge deposit distributions recorded in test beam measurements at the CERN PS in 2004 for electrons and pions in the momentum range 1 to 10 GeV/c [47,106] describe the results from collision data well (see Fig. 36), and can thus also be used as references for particle identification.
The measured charge deposit distributions can be fitted by a modified Landau-Gaussian convolution: (Exponential × Landau) * Gaussian [108,109], where the Landau distribution is weighted by an exponential dampening (Landau(x) → e^{kx}·Landau(x)). This function describes the specific charge deposit distributions for pions (dE/dx) and electrons (dE/dx + TR) well and can thus be used to extract the most probable energy loss. The dependence of the most probable signal on β γ is shown in Fig. 37. The data have been extracted from measurements (i) in a beam test at the CERN PS in 2004 (pions and electrons) [106], (ii) with pp collisions at √ s = 7 TeV (protons, pions and electrons) [107] and (iii) with a cosmic-ray trigger in the ALICE setup (muons) [108]. The selection of the flight direction of the cosmic-ray muons allows either the specific energy loss (dE/dx) alone or the summed signal (dE/dx + TR) to be measured, by selecting muons that first traverse the drift region and then the radiator, and vice versa [108,109]. To improve the momentum reconstruction of very high p T cosmic-ray muons, a dedicated track fitting algorithm [108,109] was developed, combining the clusters of the two individual tracks in the two hemispheres of the TPC. This improves the momentum resolution by about a factor of 10; e.g. at 1 TeV/c the 1/p T resolution is 8.1·10⁻⁴ (GeV/c)⁻¹ [108,109].
The onset of the TR production is visible for β γ ≳ 800, both for electrons and for high-energy (TeV scale) cosmic-ray muons. The signals for muons are consistent with those from electrons at the same β γ. The most probable signal (MPV) of the energy loss due to ionisation only, normalised to that of minimum ionising particles (mip), is well described by the parameterisation proposed by the ALEPH Collaboration [101,110] (Eq. 1, shown in Fig. 37). Minimum ionising particles are at a β γ value of 3.5 and the dE/dx in the relativistic limit is 1.8 times the minimum ionisation value. To describe the dE/dx + TR signal, a parameterised logistic function (Eq. 2), normalised to the signal for minimum ionising particles, is needed in addition.
The saturated TR yield in the relativistic limit is 0.7 times the minimum ionisation value. At β γ = 2.4·10³ the logistic function reaches half its maximum value.
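As an illustration of this functional form, the following stand-alone sketch evaluates the sum of the commonly quoted ALEPH parameterisation and a logistic TR term. The ALEPH parameter values are placeholders chosen only to roughly reproduce the quoted minimum (about 1 at β γ = 3.5) and plateau (about 1.8); the TR saturation (0.7) and half-rise point (β γ = 2.4·10³) are taken from the text, while the logistic steepness is an assumption. These are not the fitted values of Eqs. 1 and 2.

#include <cmath>
#include <cstdio>

// Commonly quoted ALEPH form: f(bg) = p1/beta^p4 * (p2 - beta^p4 - ln(p3 + bg^-p5)).
double alephDedx(double bg, double p1, double p2, double p3, double p4, double p5) {
    const double beta = bg / std::sqrt(1. + bg * bg);
    const double bpow = std::pow(beta, p4);
    return p1 / bpow * (p2 - bpow - std::log(p3 + std::pow(bg, -p5)));
}

// Logistic TR term, normalised to the mip signal: saturates at 'plateau' and
// reaches half of it at bg = bgHalf (logistic in log10 of beta*gamma).
double trLogistic(double bg, double plateau = 0.7, double bgHalf = 2.4e3, double slope = 3.0) {
    return plateau / (1. + std::exp(-slope * (std::log10(bg) - std::log10(bgHalf))));
}

int main() {
    const double p1 = 0.20, p2 = 3.0, p3 = 1.0e-3, p4 = 2.0, p5 = 2.0;  // placeholder values
    for (double bg : {3.5, 1e2, 8e2, 2.4e3, 1e4})
        std::printf("betagamma = %7.1f  dE/dx = %.2f  dE/dx+TR = %.2f\n", bg,
                    alephDedx(bg, p1, p2, p3, p4, p5),
                    alephDedx(bg, p1, p2, p3, p4, p5) + trLogistic(bg));
    return 0;
}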

Truncated mean method
The TRD provides both electron identification (described in the next section) and hadron identification. For the hadron identification, the truncated mean of the energy loss (+TR) signals stored in the clusters is calculated (see Section 8) [108]. For the particle identification, the deviation from the expected most probable signal for a given species is then used, after normalisation to the expected resolution of the truncated mean signal for the track under study.
In order to obtain an approximately Gaussian shape, the long tail of the Landau distribution needs to be eliminated or at least strongly suppressed, which can be realised through a truncated-mean procedure.
The PID signal of a charged hadron passing through the detector is calculated using all M clusters along the up to six layers (see Section 8). The truncated mean is then calculated as the average over the N lowest values: N = f·M. The truncation fraction f = 0.55 was chosen in order to maximise the separation power between minimum ionising pions with p = 0.5 GeV/c and electrons with p = 0.7 GeV/c. The different momenta were chosen to maximise the statistics of the electron sample [108]. However, the cluster signal strength depends on the radial position of the cluster within the read-out chamber (see Fig. 5). Therefore, the cluster amplitudes are first weighted with time-bin-dependent calibration factors, found and applied during the cpass0/cpass1 calibration steps (see Section 10). For example, for the cosmic-ray data sample, the weights are determined for tracks within the interval 1.65 ≤ log₁₀(β γ) ≤ 2.5 to eliminate kinematic dependences; these β γ are far below the onset of TR. After applying this procedure, some non-uniformity over time bins remains (±15%), which is due to the TR component [108]. Figure 38 shows the truncated mean signal as a function of momentum for p-Pb collisions at √ s NN = 5.02 TeV. The curves represent the expected signals for various particle species. These parameterisations were obtained by fitting the truncated mean signal (dE/dx + TR) of electrons from conversion processes, pions from K⁰S and protons from Λ decays as a function of β γ = p/m with a sum of the ALEPH parameterisation (Eq. 1) and the logistic function (Eq. 2), see above.
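As a minimal sketch of the truncated-mean computation described above, assuming the time-bin-dependent weights have already been applied to the cluster charges, the following function keeps the fraction f = 0.55 of lowest cluster charges:

#include <algorithm>
#include <cstddef>
#include <vector>

double truncatedMean(std::vector<double> clusterCharges, double f = 0.55) {
    if (clusterCharges.empty()) return 0.;
    // Keep the N = f*M lowest cluster charges to suppress the Landau tail.
    std::sort(clusterCharges.begin(), clusterCharges.end());
    std::size_t n = static_cast<std::size_t>(f * clusterCharges.size());
    if (n == 0) n = 1;
    double sum = 0.;
    for (std::size_t i = 0; i < n; ++i) sum += clusterCharges[i];
    return sum / n;
}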
The resolution of the truncated mean signal is shown in Fig. 39 as a function of the number of clusters (N_cls). It is described by a function of the form σ_trunc(N_cls) = σ_stat/√N_cls ⊕ σ_sys (the two terms added in quadrature), where σ_sys describes systematic uncertainties due to, e.g. residual calibration effects. The fit shows that the resolution is, as expected, mainly driven by the statistical scaling σ_trunc ∝ 1/√N_cls. The results demonstrate a resolution of the truncated mean signal of 12% for tracks with signals in all six layers. At low momenta an excellent separation power is achieved; at high momenta the separation power is about 2 for π/K and 1 for K/p.

Electron identification
For the electron identification (eID), the temporal evolution of the signal is also used. For each TRD chamber, the signal amplitudes of the clusters along a tracklet are redistributed into seven slices during the track reconstruction (see Section 8). Each slice corresponds to about 5 mm of detector thickness for a track with normal incidence. The ratio of the average signal of electrons to that of pions as a function of the slice number is shown in Fig. 41 for p-Pb collisions at √ s NN = 5.02 TeV. At large slice numbers, i.e. long drift times, the TR contribution is visible because the TR photon is predominantly absorbed at the entrance of the drift region.
The eID performance is expressed in terms of the electron efficiency (the probability to correctly identify an electron) and the corresponding pion efficiency (the fraction of pions that are incorrectly identified as electrons); the inverse of the pion efficiency is the pion rejection factor. The following methods are in use: (i) the truncated mean (see previous section); (ii) likelihood methods of different dimensionality (one-dimensional, LQ1D, based on the total integrated charge [107]; two-dimensional, LQ2D, using two charge bins [111]; etc.); (iii) neural networks (NN) [112][113][114].
For the LQ2D method the signal is evaluated in two charge bins, i.e. the integrated signals of the first four slices and of the last three slices are each averaged; the latter bin contains most of the TR contribution. For the LQ3D method, the signals of the slices are combined as sums of the first three, the next two and the final two. The LQ7D and NN methods both utilise 7 charge bins and thus benefit from the complete information contained in all 7 slices. While individual slices may be empty, each charge bin must contain a charge deposition. In physics analyses, this selection criterion does not introduce a loss of electrons when the LQ1D or LQ2D methods are applied, but it reduces the number of electrons by about 40% when the LQ7D method is used. The clean samples of electrons and pions described above are used to obtain references in momentum bins for particle identification. For each particle traversing the TRD, the likelihood values for electrons, pions, muons, kaons and protons are then calculated for each chamber via interpolation between adjacent momentum references. The particle identification for the global track is finally determined as the product of the single-layer likelihood values. In physics analyses, hadrons (e.g. pions) can be rejected with the TRD by applying either a fixed cut on the electron likelihood or a pre-calculated momentum-dependent cut; the latter provides a specified electron efficiency that is constant versus momentum. To cross-check the references and determine systematic uncertainties, electrons from photon conversions can be studied. In Pb-Pb collisions the mean of the charge deposit distributions shows a centrality (event multiplicity) dependence of about 15% between central and peripheral collisions [111]; therefore, centrality-dependent references were introduced.
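The combination of the per-layer information into a global electron likelihood can be sketched as follows. The reference probability densities below are placeholder functions standing in for the measured, momentum-interpolated references, and only two species are considered for brevity.

#include <cmath>
#include <cstdio>
#include <vector>

// Placeholder reference PDFs for the (normalised) layer charge of electrons
// and pions; in the real procedure these are interpolated between adjacent
// momentum bins of the stored reference distributions.
double refElectron(double q) { return std::exp(-std::pow(q - 1.4, 2) / 0.5); }
double refPion(double q)     { return std::exp(-std::pow(q - 1.0, 2) / 0.3); }

// Product of single-layer likelihoods, normalised to the two-species case.
double electronLikelihood(const std::vector<double>& layerCharges) {
    double pe = 1., ppi = 1.;
    for (double q : layerCharges) { pe *= refElectron(q); ppi *= refPion(q); }
    return pe / (pe + ppi);
}

int main() {
    std::vector<double> track = {1.3, 1.5, 1.2, 1.6, 1.4, 1.5};  // toy layer signals
    std::printf("electron likelihood = %.3f\n", electronLikelihood(track));
    return 0;
}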
The references can only be created after the relative gain calibration of the individual pads and the time-dependent gain calibration of the chambers, as described in Section 10. After these steps, the detector response is uniform across the acceptance and in time, and it can thus be studied in detail by combining all chambers and the full statistics of 1-2 months of data taking. Since the reference creation requires such a large data sample, the reference distributions are only produced after the full physics reconstruction pass, i.e. during data analysis rather than already during reconstruction. The references for the truncated mean and the likelihood methods are stored for this purpose in the Offline Analysis Database (OADB) and read from there in the initialisation phase of the analysis tasks [115].
The pion efficiency for 1 GeV/c tracks is shown in Fig. 42 as a function of the electron efficiency and as a function of the number of detector layers providing signals, for the various methods. For all methods, the pion rejection factor decreases as expected with a decreasing number of contributing layers, and a lower electron selection efficiency corresponds to a better pion rejection factor.
A pion rejection factor of about 70 is obtained at a momentum of 1 GeV/c in p-Pb collisions with the LQ1D method, the simplest identification algorithm. The LQ2D method yields a pion rejection factor far better than the design goal of 100 at 90% electron efficiency that was found in test beams with prototypes [106]. When the temporal evolution of the signal is used, an even better performance is achieved, reaching a rejection of up to 410. Figure 43 shows the momentum dependence of the pion efficiency for the different methods. At low momenta, the pion rejection with the LQ1D method improves with increasing momentum because of the onset of the transition radiation. From 1-2 GeV/c upwards, the electron-pion separation power gradually decreases due to the saturation of the TR production and the relativistic rise of the specific energy loss of pions. The other methods, which make use of the temporal evolution of the signal, provide substantial improvements, in particular at low and intermediate momenta. At high momenta (beyond 2 GeV/c), the limited statistics of the reference distributions is reflected in the rather modest improvements in the pion rejection for the multi-dimensional methods. The similar momentum dependence of the likelihood methods is partly due to the use of the same data sample for the reference creation. The best performance is achieved with the LQ7D and NN methods. However, these methods are sensitive to a residual miscalibration of the drift velocity, while the truncated-mean and LQ1D methods are more robust against small miscalibration effects. At low momentum, where the energy loss dominates the signal, the truncated-mean method provides very good pion rejection. The rejection power of this method decreases at higher momenta, because the TR contribution, which yields higher charge deposits, is likely to be removed in the truncation [108].
To visualise the strength of the TRD LQ2D electron identification, Fig. 44 shows the difference, in units of standard deviations, between the measured TPC energy loss of a given track and the expected energy loss of an electron, for tracks with TOF and with TOF+TRD particle identification. The results are compared for tracks with a momentum of 1.9-2.1 GeV/c within the TRD acceptance. In this momentum interval electrons cannot be discriminated from pions using TOF-only electron identification. After applying the TRD electron identification at 90% electron efficiency with the LQ2D method, hadrons are suppressed by about a factor of 130. The electron identification capabilities of the TRD thus allow a very pure electron sample to be selected. This is important, e.g. for the measurement of electrons from heavy-flavour hadron decays. Details on the usage of the electron identification for the latter measurement in pp collisions at √ s = 7 TeV can be found in [116].
In the Bayesian approach within ALICE [117], where the identification capabilities of several detectors are combined, the TRD particle identification contributes with its estimate of the probability for a given particle to belong to a given species. For this purpose, transverse momentum dependent 'propagation factors' for the priors, which represent the expected abundance of each particle species within the ITS and TPC acceptance, are calculated and stored in the analysis framework.

Trigger
ALICE features a trigger system with three hardware levels and an HLT farm [2]. Apart from the contributions of the pretrigger system (see Section 5.1), the TRD contributes to physics triggers at level-1.
These triggers are based on tracks reconstructed online in the GTU (see Section 5.3) from online tracklets (track segments corresponding to one read-out chamber), which are calculated locally in the FEE of each chamber. The local tracking in the FEE and the global online tracking in the GTU are discussed in the following.
As the trigger decision is based on individual tracks, a variety of signatures can be implemented, only limited by the complexity of the required calculations and the available time. In the following, the triggers on cosmic-ray muons, electrons, light nuclei, and jets are discussed.

Local online tracking
The local online tracking is carried out in parallel in the FEE (see Section 5.2). Each of the 65 000 MCMs processes the data from 21 pads, 3 of which are cross-fed from the neighbouring chips to avoid inefficiencies at the chip boundaries (see Fig. 15). For accurate online tracking, all relevant corrections and calibration steps must be applied online. After appending two additional digits to avoid rounding imprecisions, the digitised data are propagated through a chain of filters. First, a pedestal filter compensates for variations in the baseline. A gain filter corrects for local gain variations, caused either by the chamber or by the electronics; this equilibration is important for the evaluation of the specific energy loss, which is used for the online particle identification, and uses correction factors derived from the krypton calibration (see Section 10.2). A tail cancellation filter can be used to reduce the bias from the ion tails of signals in preceding time bins; this improves the reconstruction of the radial cluster positions and of the deflection in the transverse plane. The offline reconstruction takes the already applied online corrections into account: all configuration settings are stored in the OCDB and are therefore known during the offline processing.
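A much simplified, stand-alone sketch of such a filter chain for one read-out channel is given below. The single-exponential ion-tail model and its parameters are illustrative assumptions; the real filters run in fixed-point arithmetic in the MCMs.

#include <cstddef>
#include <vector>

// Simplified filter chain: pedestal subtraction, multiplicative gain
// correction and an exponential tail-cancellation step.
std::vector<double> filterChannel(const std::vector<double>& adc,
                                  double pedestal, double gain,
                                  double tailFraction = 0.12, double tailDecay = 0.7) {
    std::vector<double> out(adc.size());
    double tail = 0.;  // running estimate of the ion-tail contribution
    for (std::size_t t = 0; t < adc.size(); ++t) {
        const double signal = (adc[t] - pedestal) * gain;  // pedestal and gain filters
        const double corrected = signal - tail;            // subtract the accumulated tail
        // Each corrected signal adds an exponentially decaying tail that would
        // otherwise bias the following time bins and is therefore subtracted there.
        tail = tail * tailDecay + corrected * tailFraction;
        out[t] = corrected;
    }
    return out;
}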
After the filtering, the data of one event are searched time-bin-wise for clusters by a hardware preprocessor. A cluster is found if the charge on three adjacent pads exceeds a configurable threshold and the central channel carries the largest charge (see Fig. 45). For each MCM and time bin, transverse positions are calculated for up to six clusters. They are used to calculate and store the (channel-wise) sums required for a linear regression.
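The cluster search can be sketched as follows; the threshold value and the centre-of-gravity position estimate are illustrative simplifications (the hardware uses a position look-up table and stores at most six clusters per time bin).

#include <cstddef>
#include <cstdio>
#include <vector>

struct Cluster { int centrePad; double position; double charge; };

std::vector<Cluster> findClusters(const std::vector<double>& pads, double threshold = 50.) {
    std::vector<Cluster> clusters;
    for (std::size_t i = 1; i + 1 < pads.size(); ++i) {
        const double sum = pads[i - 1] + pads[i] + pads[i + 1];
        const bool centreIsMax = pads[i] >= pads[i - 1] && pads[i] >= pads[i + 1];
        if (sum > threshold && centreIsMax) {
            // Centre-of-gravity of the three pads, in pad units relative to pad i.
            const double cog = (pads[i + 1] - pads[i - 1]) / sum;
            clusters.push_back({static_cast<int>(i), i + cog, sum});
        }
    }
    return clusters;
}

int main() {
    std::vector<double> pads(21, 2.);                // mostly baseline noise
    pads[9] = 30.; pads[10] = 80.; pads[11] = 25.;   // one charge deposit
    for (const Cluster& c : findClusters(pads))
        std::printf("cluster at pad %.2f with charge %.0f\n", c.position, c.charge);
    return 0;
}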
After the processing of all time bins, up to four channels with a minimum number of found clusters are processed further (if more than four channels fulfil this condition, the four with the largest number of clusters are used). For the selected channels, a straight-line fit is computed from the precalculated sums. The fit provides the local transverse position y, the deflection in the bending plane dy, the longitudinal position z, and a PID value. The transverse position and the deflection are calculated from the fit, the longitudinal position is derived from the MCM position, and the PID value is taken from a look-up table using the accumulated charge as input.
The reconstructed values for y and dy are corrected for systematic shifts caused by the Lorentz drift and the pad tilt. An example of a reconstructed tracklet is shown in Fig. 46. Eventually, the values (in fixed-point representation) are packed into one 32-bit word per tracklet for read-out.
A software emulation of the local online tracking is used in Monte Carlo productions based on event generators, but can also be run on data recorded with the actual detector. This allows hardware and simulation to be cross-validated and the effect of parameter changes on the tracklet finding to be studied. Monte Carlo simulations are therefore well suited to study the performance of the online tracking algorithm for a given set of configuration options, since tracklets can be compared to track references (track positions from the Monte Carlo truth information). This allows tracklet efficiencies to be determined. An example is displayed in Fig. 47, which shows the efficiency of the tracklet finding for a typical set of parameters as a function of y and q/p T . The efficiency drops for large y and negative q/p T ; the asymmetry in y is caused by a combination of the Lorentz correction and the numerical range available for the deflection. The efficiency is close to 100% in the regime relevant for triggering. Furthermore, shifts in y and dy are calculated with respect to the expectation from the Monte Carlo information. Besides a small systematic shift due to the uncorrected misalignment, the distributions show widths of about 300 µm and 1700 µm in y and dy [71], respectively.

Global online tracking
The global online tracking in the GTU operates stack-wise on the tracklets reconstructed and transmitted by the FEE. It is divided into a track matching and a reconstruction stage. The algorithm used for the matching of the tracklets is optimised for the high-multiplicity environment of Pb-Pb collisions [118]. Global online tracks consist of at least four matching tracklets. The reconstruction stage uses the positions of the contributing tracklets to calculate a straight-line fit (see Fig. 48). The computation is simplified by the use of pre-calculated and tabulated coefficients, which depend on the layer mask. The approximation of a straight line is adequate for the trigger-relevant tracks above 2 GeV/c. The transverse offset a from the nominal vertex position is then used to estimate the transverse momentum [118]. The PID value of the track is calculated as the average over the contributing tracklets. A precise simulation of all the tracking steps was implemented and validated in AliRoot; it was used for systematic studies of the tracking performance, see below.
Figure 49 shows the timing of the online tracking together with the constraints for the trigger contributions. Between interactions, the FEE is in a sleep mode [78], in which only the ADCs, the digital filters, and the pipeline stages are active. The latter make it possible to process the data from the full drift time upon arrival of a wake-up signal (see Section 5.1). The processing is aborted if the wake-up signal is not followed by a level-0 trigger; in this case, a clear sequence is executed to reset the FEE and put it back into sleep mode. If a level-0 trigger was received, the processing continues and the tracklets are sent to the GTU. There, the track matching and reconstruction run as the tracklets arrive. The tracks are used to evaluate the trigger conditions (see the next sections) until the contribution to the level-1 trigger must be issued to the CTP (about 6 µs after the level-0 trigger). The tracking can continue beyond the contribution time for the trigger; the resulting tracks are ignored for the decision but are available for offline analysis (flagged as out-of-time).
The efficiency of the global online tracking is shown in Fig. 51. In order to separate the efficiency of the online tracking from acceptance and geometrical limitations, the normalisation is done once for all primary tracks and once for those which are findable, i.e. which have at least 4 tracklets assigned in one stack in the offline tracking (TRD acceptance). The efficiency starts to rise at about 0.6 GeV/c, reaches half of its asymptotic value at 1 GeV/c, and saturates above about 1.5 GeV/c. Lower transverse momenta are not relevant for the trigger operation and the corresponding tracks are suppressed at various stages. For comparison, the curve obtained from an ideal Monte Carlo simulation shows slightly higher efficiencies. The difference is caused by non-operational parts of the real detector (see Section 7) not being reflected in the ideal simulation.
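The reconstruction stage described above can be sketched as a straight-line fit through the tracklet positions of one stack, with the transverse momentum estimated from the fitted offset at the nominal vertex. The offset-to-p_T conversion constant and the toy tracklet positions below are placeholders; in the real system the conversion follows from the magnetic field and the layer radii.

#include <cmath>
#include <cstdio>
#include <vector>

struct Tracklet { double x, y; };  // radial and transverse position (cm)

double estimatePt(const std::vector<Tracklet>& tracklets, double convConst = 10.0) {
    // Offset of an unweighted straight-line fit y = a + b*x over >= 4 tracklets.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(tracklets.size());
    for (const Tracklet& t : tracklets) {
        sx += t.x; sy += t.y; sxx += t.x * t.x; sxy += t.x * t.y;
    }
    const double d = n * sxx - sx * sx;
    const double a = (sy * sxx - sx * sxy) / d;  // offset at x = 0 (nominal vertex)
    // For high-pT tracks from the vertex, |a| grows with the curvature, i.e. with 1/pT.
    return convConst / std::fabs(a);             // GeV/c, illustrative conversion constant
}

int main() {
    // Toy tracklets in six layers at radii around 3 m for a slightly curved track.
    std::vector<Tracklet> trk = {{295, 10.0}, {308, 10.5}, {321, 11.1},
                                 {334, 11.7}, {347, 12.4}, {360, 13.1}};
    std::printf("online pT estimate: %.1f GeV/c\n", estimatePt(trk));
    return 0;
}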
The correlation of the inverse transverse momentum from online and offline tracking is established by matching global online tracks to global offline tracks, reconstructed with ITS and TPC, based on a geometrical distance measure. An example for pp collisions at √ s = 8 TeV is shown in Fig. 52. The online estimate correlates well with the offline value in the transverse momentum range relevant for the trigger thresholds, i.e. 2-3 GeV/c. The width of the correlation corresponds to an online measured resolution of about 10% for momenta of 1.5 − 5 GeV/c.
The p T resolution is crucial for the trigger since it determines the sharpness of the threshold. It is shown in Fig. 53 for a p T threshold of 3 GeV/c, where a width (10-90%) of about 0.6 GeV/c is found. This is also well reproduced by simulations.
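A minimal sketch of such a turn-on parameterisation is given below, using a Fermi function whose steepness is chosen only to reproduce a 10-90% width of about 0.6 GeV/c; the functional form and parameter values are illustrative, not the fitted ones.

#include <cmath>
#include <cstdio>

// Fermi-function parameterisation of the trigger turn-on curve.
double turnOn(double pt, double ptThresh, double w) {
    return 1. / (1. + std::exp(-(pt - ptThresh) / w));
}

int main() {
    const double ptThresh = 3.0;   // GeV/c, trigger threshold
    const double w = 0.14;         // GeV/c, illustrative steepness parameter
    // 10-90% width of a Fermi function: pT(90%) - pT(10%) = 2 * w * ln 9.
    std::printf("10-90%% width = %.2f GeV/c\n", 2. * w * std::log(9.));
    for (double pt : {2.0, 2.5, 3.0, 3.5, 4.0})
        std::printf("pT = %.1f GeV/c  efficiency = %.2f\n", pt, turnOn(pt, ptThresh, w));
    return 0;
}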
As a further development, the online tracking can benefit from taking the chamber alignment into account in the local tracking and from enabling the tail cancellation filter in the FEE. This will allow the use of tighter windows for the track matching and, thus, a reduction of the combinatorial background while maintaining the same tracking efficiency, which is relevant for the online tracking in the high-multiplicity environment of Pb-Pb collisions. At the time of writing, these improvements are under development.

Trigger on cosmic-ray muons
Cosmic-ray tracks are used for several purposes in the experiment, e.g. for detector alignment after installation and before physics runs (see Section 9). Recording sufficient statistics requires an efficient and clean trigger, in particular for tracks passing the experiment horizontally, for which the rates are very low. Therefore, the first level-1 trigger in ALICE was contributed by the TRD (even before the LHC start-up) in order to select events containing tracks from cosmic rays. It was operated on top of a level-0 trigger from the TOF (back-to-back coincidence). At first, when the online tracking was still being commissioned, the selection was based on coincident charge depositions in multiple layers of any stack. Later, it used the full tracking infrastructure, with the condition requiring the presence of at least one track in the event. This was sufficient to suppress the background from the impure level-0 input from the TOF.
Fig. 53: Turn-on curve of the trigger with a p T threshold of 3 GeV/c for positively and negatively charged particles, in comparison to the same quantity computed in a simulation with a realistic detector geometry (active channels). Also shown is the corresponding distribution for an ideal detector geometry (ideal simulation, not considering misalignment). The onset is characterised by a fit with a Fermi function.

Trigger on jets
Jets are commonly reconstructed by algorithms which cluster tracks that are close in pseudorapidity and azimuth (η-ϕ plane). The area covered by a TRD stack roughly corresponds to that of a jet cone of radius R = 0.2. This allows the presence of several tracks above a p T threshold within one stack to be used as a signature for a high-p T jet. The TRD is only sensitive to the charged tracks of the jet, which is also the part that is reconstructed using global offline tracking in the central barrel detectors.
In pp collisions at √ s = 8 TeV and p-Pb collisions at √ s NN = 5.02 TeV, the trigger sampled the anticipated integrated luminosities of about 200 nb⁻¹ and 1.4 nb⁻¹, respectively, during RUN 1. Figure 54 shows the rejection observed in pp collisions (√ s = 8 TeV) for the condition of a certain number of global online tracks above a p T threshold within any stack. As a compromise between rejection and efficiency, 3 tracks above 3 GeV/c were chosen as the jet trigger condition. This results in a very good rejection of about 1.5·10⁻⁴. The jet trigger was also used in p-Pb collisions, where a good performance was achieved as well; however, the higher multiplicity slightly reduces the rejection.
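A minimal sketch of this stack-wise trigger condition (a minimum number of online tracks above a p_T threshold in any one stack) could look as follows; the track structure and counting logic are illustrative only.

#include <map>
#include <utility>
#include <vector>

struct OnlineTrack { int sector; int stack; double pt; };  // 18 sectors x 5 stacks

bool jetTrigger(const std::vector<OnlineTrack>& tracks,
                int minTracks = 3, double ptThreshold = 3.0) {
    std::map<std::pair<int, int>, int> nHighPt;  // per-stack counter of high-pT tracks
    for (const OnlineTrack& t : tracks)
        if (t.pt > ptThreshold &&
            ++nHighPt[{t.sector, t.stack}] >= minTracks)
            return true;  // fire as soon as one stack fulfils the condition
    return false;
}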
In Fig. 55 the jet p T spectrum from the TRD-triggered data sample is shown. The jets were reconstructed using the anti-k_T jet finder from the FastJet package [119] with a resolution parameter of R = 0.4. As expected, the spectrum extends to significantly larger jet p T than that of the minimum-bias data sample. In order to judge the bias on the shape of the spectrum, it is compared to an EMCal-triggered sample. At sufficiently high p T , above about 50 GeV/c, the shapes of the spectra agree.
To further judge the bias on the fragmentation, the raw fragmentation function reconstructed from the jets in the TRD-triggered data sample is shown in Fig. 56. The commonly used variable ξ is defined as ξ = ln(p T,jet / p T,track ). For the lower jet p T intervals, a clear distortion is visible at ξ values corresponding to the trigger p T threshold (in the given jet p T interval). It disappears for higher jet p T , and agreement with fragmentation functions obtained from an EMCal-triggered sample is found for jet p T above about 80 GeV/c [71].
In order to improve the efficiency of the jet trigger, the counting of tracks can be extended across stack boundaries, thus avoiding the acceptance gaps between sectors and stacks. Corresponding studies are ongoing.

Trigger on electrons
During the tracklet reconstruction stage, an electron likelihood is assigned to each tracklet, allowing for electron identification (see Section 12.1). It was calculated using a one-dimensional look-up table based on the total accumulated charge (the hardware also allows a two-dimensional look-up table). The tracklet length is taken into account as a correction factor applied to the charge, making the actual look-up table universal across the detector. The look-up table is created from reference charge distributions of clean electron and pion samples obtained through topological identification (see Section 11).
In order to select electrons at the trigger level, a combination of a p T threshold and a PID threshold is used. The thresholds were optimised for different physics cases. For electrons from semileptonic decays of heavy-flavour hadrons, the goal was to extend the p T reach to high values; thus, a p T threshold of 3 GeV/c was chosen and the PID threshold was adjusted to achieve a rejection of minimum-bias events by a factor of about 100. For the measurement of quarkonia in the electron channel, a p T threshold of 2 GeV/c was chosen to cover most of the total cross-section, and the PID threshold was increased. The main background of the electron triggers is caused by the conversion of photons in the detector material at large radii, just in front of or at the beginning of the TRD. The emerging electron-positron pairs appear as high-p T tracks and are likely to be identified as electrons as well. This background is suppressed by requiring (in addition to the thresholds explained above) at least five tracklets, one of which must be in the first layer. The background can be further reduced by requiring that the online track can be matched to a track in the TPC. However, this cannot be done during the online tracking, but only in the offline analysis or in the HLT during data taking.
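A simplified sketch of the per-track electron-trigger selection is given below. The PID scale and threshold value, the layer-mask representation and the late-conversion flag are assumptions for illustration, while the p_T threshold, the five-tracklet requirement with one tracklet in the first layer, and the late-conversion rejection follow the description above and in the next paragraphs.

#include <cstdint>

struct GtuTrack {
    double pt;                // online transverse momentum (GeV/c)
    int pid;                  // online electron-likelihood value (assumed 0-255 scale)
    std::uint8_t layerMask;   // bit i set if layer i contributed a tracklet
    bool lateConversion;      // tagged by the online sagitta cut (RUN 2)
};

inline int countLayers(std::uint8_t mask) {
    int n = 0;
    for (; mask; mask >>= 1) n += mask & 1;
    return n;
}

bool electronTrigger(const GtuTrack& t, double ptThreshold = 3.0, int pidThreshold = 144) {
    if (t.pt < ptThreshold) return false;            // momentum condition
    if (t.pid < pidThreshold) return false;          // electron-likelihood condition (placeholder value)
    if (countLayers(t.layerMask) < 5) return false;  // at least five tracklets
    if (!(t.layerMask & 0x1)) return false;          // tracklet required in the first layer
    if (t.lateConversion) return false;              // suppress late photon conversions
    return true;
}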
To judge the performance of the triggers, electron candidates are identified using the signals from the TPC, TOF, and TRD. For the TPC and TOF, the selection is based on $n_{\sigma,e}$, i.e. the deviation of the measured signal from the expected signal for electrons, normalised to the expected resolution. Figure 57 shows the distribution of this variable for the TPC as a function of the track momentum p. The data sample was obtained using an electron trigger with a p T threshold of 3 GeV/c and cleaned in the offline analysis by requiring a match with TPC tracks, i.e. rejecting electrons from photon conversions. Above 3 GeV/c the enhancement of electrons is clearly visible in the region around $n_{\sigma,e}^{\mathrm{TPC}} = 0$.
The enhancement due to the TRD electron trigger relative to the minimum-bias trigger is also clearly visible in Fig. 58, which shows a projection of $n_{\sigma,e}^{\mathrm{TPC}}$ in a momentum interval for both data samples. A further suppression of hadrons can be achieved by exploiting the offline PID of the TRD (see Section 11). Figure 59 shows the p T spectra of electron candidates with signals in six layers, identified using the TPC and the TOF, in the minimum-bias and in the triggered data sample. The expected onset at the trigger threshold of 3 GeV/c is observed for the triggered events, which show, in comparison to the corresponding spectrum from minimum-bias collisions, an enhancement of about 700.
The dominant background for the electron triggers, i.e. the conversion of photons at large radii close to the TRD entrance and in the first part of the TRD, was addressed before RUN 2. The p T reconstruction in the online tracking assumes tracks originating from the primary vertex, which results in an overestimated momentum for electrons and positrons from 'late conversions', as shown in Fig. 60. An online rejection based on the calculation of the sagitta in the read-out chambers was implemented and validated. For a sagitta cut of Δ(1/p T ) = 0.2 c/GeV, the rejection was increased by a factor of 7 at the same efficiency in pp collisions at √ s = 13 TeV [120]. With this selection criterion, about 90% of the late conversions are removed, while about 70% of the good tracks are kept. This improvement makes it possible to use only those tracks for the electron trigger which are not tagged as late conversions. This setting was already used successfully in RUN 2.

Trigger on nuclei
A trigger on light nuclei was used for the first time in the high-interaction-rate p-Pb and Pb-p data taking at √ s NN = 5.02 TeV in 2016. It exploits the much larger charge deposition of multiply charged particles. The trigger mainly enhances the statistics of doubly charged particles (Z = 2), i.e. ³He and ⁴He. It was operated with an estimated efficiency of about 30% at a rejection factor of about 600.
This trigger is also used in the pp data taking at 13 TeV during RUN 2 to significantly enhance the sample of light nuclei. It enhances not only the sample of particles with Z = 2, but also those of deuterons, tritons and hypertritons (a bound state of a proton, a neutron and a Λ hyperon, which decays weakly into a ³He nucleus and a pion). This will allow a precise determination of the mass and the lifetime of the latter.

Summary
The physics objectives of the TRD, together with the challenging LHC environment, led to an ambitious detector design and required the development of a new chamber design with radiator and dedicated electronics. After extensive tests of individual components and of the full system, as well as commissioning with cosmic-ray tracks, the detector was ready for data taking with the first collisions provided by the LHC in 2009. During RUN 1, the original setup of 7 installed supermodules was further extended, reaching a maximum coverage of 13/18 in azimuth. The detector was completed in LS 1, before RUN 2; since then it provides coverage of the full azimuthal acceptance of the central barrel. Read-out and trigger components were also upgraded. The gas system, services and infrastructure, read-out electronics, and the Detector Control System developed for the TRD allow the successful operation of the detector. The xenon-based gas mixture (over 27 m³), essential for the detection of the TR photons, is re-circulated through the detector in order to reduce costs. To minimise the dead time and to cope with the read-out rates for heavy-ion data taking in RUN 2, the data from the detector are processed in a highly parallelised read-out tree using a multi-event buffering technique, with link speeds to the DAQ of about 4 Gbit/s. Fail-safe and reliable detector operation and monitoring were achieved. The resulting running efficiencies are about 100% at read-out rates ranging from 100 Hz to 850 Hz in pp and p-Pb collisions, and up to 350 Hz in Pb-Pb collisions.
Robust schemes for calibration, alignment and tracking were established. The TRD adds roughly 70 cm to the lever arm of the other tracking detectors in ALICE; the q/p T resolution of high transverse momentum tracks at 40 GeV/c is thus improved by about 40%. In addition, the TRD increases the precision and efficiency of the track matching with the detectors that lie behind it. Tracks anchored to the TRD are essential to correct for the space-charge distortions in the ALICE TPC.
Several hadron and electron identification methods were developed. The electron identification performance is overall better than the design value. At 90% electron efficiency, a pion rejection factor of about 70 is achieved at a momentum of 1 GeV/c for simple identification algorithms. When using the temporal evolution of the signal, a pion rejection factor of up to 410 is obtained.
The complex and efficient design of the trigger allows triggers based on transverse momentum and electron identification to be provided within about 6 µs after the level-0 trigger. These triggers successfully provide enriched samples of high-p T electrons, light nuclei, and jets in pp and p-Pb collisions. In pp collisions at √ s = 8 TeV, for example, the jet trigger efficiently sampled the foreseen integrated luminosity of about 200 nb⁻¹ during RUN 1 with a constant rejection of around 1.5·10⁻⁴. The TRD will continue to contribute to the physics output of the experiment in various areas, providing enriched samples of electrons, light nuclei and jets through its trigger capabilities as well as through its contributions to tracking and particle identification.