Development of a read out driver for ATLAS micromegas based on the Scalable Readout System

With future LHC luminosity upgrades, part of the ATLAS muon spectrometer has to be replaced to cope with the increased flux of uncorrelated neutron and gamma background. Micromegas detectors were chosen as the precision tracker for the New Small Wheels, which will replace the current Small Wheel muon detector stations during the LHC shutdown foreseen for 2018. To read out these detectors together with all other ATLAS subsystems, a readout driver was developed that integrates the micromegas detectors into the ATLAS data acquisition infrastructure. The readout driver is based on the Scalable Readout System; its tasks include trigger handling, slow control, event building and data transmission to the high-level readout systems. This article describes the layout and functionality of this readout driver and its components, as well as a test of its functionality in the cosmic ray facility of Ludwig-Maximilians University Munich.


Micromegas detectors in the ATLAS New Small Wheel
The ATLAS muon spectrometer consists of a barrel and two end-cap regions. Resistive Plate Chambers (RPCs) and Thin Gap Chambers (TGCs) are used for triggering on muons, Monitored Drift Tube (MDT) chambers for precise reconstruction of muon tracks. When the LHC luminosity is upgraded by a factor of 2-3 beyond its design value of 1 × 10^34 cm^−2 s^−1 during an upgrade in 2018, the performance in the end-cap region would degrade massively without modifications to the muon spectrometer, due to the increased rate of uncorrelated background hits in the detectors close to the beam pipe.
To overcome this limitation, as well as to increase the triggering capabilities, the innermost end-cap stations, called Small Wheels, will be replaced by New Small Wheels [1] during the second long shutdown (LS2) in 2018.
These New Small Wheels will be built from micromegas [2] and sTGC (small strip Thin Gap Chamber) detector technologies, both known to be capable of high tracking and triggering performance even in high background irradiation environments.
Prototype detectors of these technologies have been, and continue to be, subject to thorough tests under laboratory conditions, as well as tests within the ATLAS structure during LHC running. This article presents the status and results of the efforts to integrate micromegas detectors into the existing ATLAS data acquisition chain, making use of the Scalable Readout System (SRS) [3], developed within the RD51 [4] framework.

The Scalable Readout System
The Scalable Readout System has been developed as a flexible and powerful data acquisition system for various applications. Designed around modern Virtex-5 and Virtex-6 FPGAs, it can read out a variety of different frontend chips on different detector technologies. In this work, the SRS is used to read out micromegas detectors equipped with APV25 [5] analogue frontend chips, connected to FrontEnd Concentrator (FEC) boards in groups of sixteen. Each FEC board is accompanied by an ADC board, which performs the analogue-to-digital conversion of the frontend signals, as illustrated in figure 1.
An optional SRS component, the Scalable Readout Unit (SRU), allows the connection of several FEC boards to collect their data and perform further event processing. Here, the SRU is used as an ATLAS Read Out Driver (ROD), whose tasks are defined by the sequence of processing steps of the ATLAS data acquisition chain. The large amount of logic resources available in the SRU FPGA leaves room for additional functionality or event data processing at later stages of the firmware development.

ATLAS SRS readout driver functionalities
Every ATLAS subsystem makes use of individual Read Out Drivers, which serve as the interface between the detector-specific infrastructure and the ATLAS-wide common Read Out System (ROS). The SRU as a micromegas ROD has to fulfill a variety of tasks, all of which are implemented within its hardware and FPGA firmware.

Trigger, timing and control
The LHC bunch-crossing clock, used to synchronize the detectors with the particle collisions, as well as the level 1 trigger signal, are distributed to each ATLAS subsystem via optical fibers. The SRU is equipped with a standard TTCrx receiver ASIC [6], developed by the CERN Microelectronics group. It allows the SRU to synchronize with the LHC bunch-crossing clock of 40.08 MHz and to receive the level 1 trigger signals of the central trigger processor, as well as the bunch counter and event counter reset signals, which are necessary to build valid ATLAS event data fragments.
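The counter handling described above can be illustrated with a purely schematic model (this is not the SRU firmware, just a sketch of the bookkeeping): the bunch counter advances with every tick of the 40.08 MHz clock and wraps at the LHC orbit length of 3564 bunch crossings, while the event counter advances on every level 1 accept; both counters can be reset by the corresponding TTC signals, and their current values tag each event fragment.

```python
# Schematic model of the TTC-driven counters used to tag event fragments.
# Illustration only; the real logic lives in the SRU FPGA firmware.
class TTCCounters:
    BUNCHES_PER_ORBIT = 3564  # LHC bunch crossings per orbit

    def __init__(self):
        self.bcid = 0  # bunch counter, advanced by the 40.08 MHz clock
        self.l1id = 0  # event counter, advanced on each level 1 accept

    def clock_tick(self):
        """Advance the bunch counter by one bunch crossing (wraps each orbit)."""
        self.bcid = (self.bcid + 1) % self.BUNCHES_PER_ORBIT

    def bunch_counter_reset(self):
        """BCR signal: realign the bunch counter with the orbit."""
        self.bcid = 0

    def event_counter_reset(self):
        """ECR signal: reset the level 1 event counter."""
        self.l1id = 0

    def level1_accept(self):
        """Return the (event id, bunch id) pair tagging this fragment."""
        tag = (self.l1id, self.bcid)
        self.l1id += 1
        return tag
```

A fragment whose counter values disagree with the central trigger processor would indicate a loss of synchronization, which is exactly what the reset signals guard against.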

Detector data collection
The connectivity between the FEC boards and the SRU is realized with DTCC [7] (Data, Trigger, Clock and Control) point-to-point links, using standard S/FTP (Shielded Foiled Twisted Pair) category 6/7 cables. This solution offers a high-bandwidth path for detector data in one direction, and low-skew synchronization for clock and trigger information in the other. In this way, the frontend chips connected to the FEC boards run synchronously to the LHC bunch-crossing clock received on the SRU.
The SRU hardware allows the connection of up to 40 FEC boards with 2048 channels each in the case of APV25 frontend chips. The current implementation of the bi-directional DTCC links has been tested with up to eight FEC boards connected to a single SRU; nevertheless, this number can easily be increased if necessary. Alternatively, additional SRU units can be used for large detector systems, in case other SRU resources, such as memory or output bandwidth, limit the number of FEC boards that can be read out by a single unit.
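The channel counts quoted above follow directly from the APV25 channel count of 128 and the sixteen chips per FEC board; a back-of-the-envelope check:

```python
# Channel-count arithmetic for the SRU/FEC configuration described in the text.
APV25_CHANNELS = 128   # channels per APV25 frontend chip
APVS_PER_FEC = 16      # APV25 chips connected to one FEC board
MAX_FECS_PER_SRU = 40  # DTCC link limit of one SRU

channels_per_fec = APV25_CHANNELS * APVS_PER_FEC            # 2048, as stated above
max_channels_per_sru = channels_per_fec * MAX_FECS_PER_SRU  # 81920 channels in total
```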

Event building
Before the data from the different ATLAS subdetectors can be merged, they have to be formatted in a defined way to ensure that all necessary information about the event is contained and no loss of synchronization can occur. After reception of the frontend data from each FEC board, the SRU firmware runs an event-building algorithm to comply with the ATLAS data format [8] concerning headers, trailers and trigger information.
The APV25 frontend chip supports the readout of several consecutive analogue sampling steps, which allows the reconstruction of the analogue signal pulse. To be able to use time-projection techniques to reconstruct the projected track in 3D, it is necessary to read 10 to 20 consecutive time samples from the APV chips and determine the arrival time of electron clusters on the individual readout strips. Since it takes about 4 µs to transmit a single time bin, it is not possible to read analogue data for every level 1 trigger, as triggers are generated at a rate of up to 100 kHz. To cope with this unavailability of detector data, the SRU firmware implements a bookkeeping logic, which takes note of every trigger that could not be served by the APV25 frontend chips. To comply with the expectations of the Read Out System, the SRU firmware inserts a dummy event into the output data stream for every trigger missed in this way, maintaining overall data order and integrity.
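This bookkeeping can be sketched as a simple busy-window model (a deliberate simplification, not the actual firmware logic): reading N time samples occupies the frontend for roughly N × 4 µs, and any trigger arriving inside that window is answered with a dummy fragment instead of detector data, so that the output stream still contains exactly one fragment per trigger, in order.

```python
# Simplified model of the dummy-event bookkeeping described above.
# Not the SRU firmware; for illustration only.
TIME_BIN_US = 4.0  # approximate transfer time of one APV25 time sample

def build_fragments(trigger_times_us, n_samples=20):
    """Return a list of (trigger_time, kind), kind being 'data' or 'dummy'."""
    readout_time = n_samples * TIME_BIN_US  # e.g. 80 us for 20 samples
    busy_until = float('-inf')              # frontend idle at start
    fragments = []
    for t in trigger_times_us:
        if t >= busy_until:
            fragments.append((t, 'data'))   # frontend free: real event data
            busy_until = t + readout_time
        else:
            fragments.append((t, 'dummy'))  # frontend busy: dummy fragment
    return fragments
```

With 20 samples, triggers at 0, 30 and 100 µs yield data, dummy and data fragments respectively, since the second trigger falls inside the 80 µs readout window of the first.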

Data transmission
The data acquisition elements following the Read Out Drivers (RODs) are PC-based nodes forming the Read Out System (ROS). The data transmission is realized with the ATLAS standard SLINK [9] over optical fiber.
The current ATLAS subsystem Read Out Drivers use separate mezzanine boards (HOLA), with a dedicated FPGA and serializer chip, to create the SLINK data stream containing the ATLAS event format. The Virtex-6 FPGA on the SRU board has more than enough resources to integrate this functionality, using its internal logic capabilities and GTX transceivers. If the amount of detector data exceeds the SLINK bandwidth, additional SLINKs can be added through firmware upgrades. Each SLINK uses one of the SRU onboard industry-standard SFP+ cages for conversion to optical fiber.

Slow control
The SRU features a slow control path over optical or copper-based gigabit Ethernet to read and write configuration and calibration registers. As an alternative to SLINK, the detector data path can be switched to Ethernet for fast online preview, calibration and debugging. All communication is based on UDP (User Datagram Protocol) packets.
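A UDP register write of this kind can be sketched as follows. The packet layout here (a request id followed by 32-bit address/value pairs) and the host address and port are illustrative placeholders only; the actual SRS slow-control packet format is defined in the SRS documentation.

```python
# Sketch of a UDP-based register write in the spirit of the SRS slow-control
# path. Packet layout, host and port are illustrative assumptions.
import socket
import struct

def build_write_request(request_id, pairs):
    """Pack a request id and (address, value) pairs as big-endian 32-bit words."""
    payload = struct.pack('>I', request_id)
    for addr, value in pairs:
        payload += struct.pack('>II', addr, value)
    return payload

def send_request(payload, host='10.0.0.2', port=6007):
    """Send the request to the SRU as a single UDP datagram (placeholder endpoint)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (host, port))
```

Since UDP provides no delivery guarantee, a real slow-control client would additionally wait for a reply datagram and retry on timeout.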

Test of the system with cosmic muons
The SRS system (one SRU and one FEC unit) has been installed in the Ludwig-Maximilians University (LMU) cosmic ray facility, together with the L1 chamber, the first resistive-strip micromegas detector with a size of about one square meter. The detector uses 16 APV25 chips to read the charge information from 2048 detector strips. The cosmic ray facility uses two full-sized ATLAS BOS MDT (monitored drift tube) chambers, covering about 9 m², for tracking of cosmic muons with a resolution on the order of 40 µm. With that information, the L1 micromegas chamber, which is sandwiched between the MDT chambers, can be investigated for homogeneity of efficiency and spatial resolution by comparing its measured data with the MDT tracks. Figure 2 shows the cosmic ray facility, and figure 3 shows the measured difference (residuals) between the precision coordinate of the MDT muon track and the micromegas measurement. The data integrity of the Read Out Driver is validated, as the residuals are distributed closely around zero over the whole micromegas chamber. One can also see a dead channel around y = −150 mm and a small gap between the two readout boards at y = 0 mm.
Since the MDT reference chambers are read out by a complete ATLAS-like data acquisition structure, including a TTC trigger source and a SLINK data reception path, this is an excellent test bench for the SRS system within the ATLAS readout scheme. Tests of the system are ongoing, and the SRU-based Read Out Driver shows the expected performance; no problems have occurred so far. A comparison of the Read Out Driver's data with data from test beam measurements, performed with other data acquisition configurations, shows no difference in data integrity. The SRS performance fully complies with the high rate and bandwidth demands of an ATLAS Read Out System.

(JINST 9 C01038)

Conclusion and outlook
The presented ATLAS Read Out Driver, based on the SRU unit of the Scalable Readout System, shows the expected performance and fulfills all demands on an ATLAS ROD. It interfaces to all infrastructure necessary for integration into the ATLAS data acquisition system.
In the presented implementation it makes use of APV25 frontend chips, but due to the modular nature of the Scalable Readout System, the frontend electronics can be changed with only minor modifications to the FEC hardware and firmware, and without altering the ATLAS-specific SRU firmware. The desired frontend for the ATLAS New Small Wheel is the future VMM3 chip, whose predecessor prototype, the VMM2, is currently in production; it offers a number of additional features such as onboard digitization and pulse-shape feature extraction.
During the LS1 shutdown in 2013 and 2014, a four-layer micromegas detector of about one square meter, equipped with VMM2 frontend chips, will be installed parasitically on one of the current ATLAS Small Wheels. In the run period until the LS2 shutdown, it will provide the unique opportunity to test the NSW micromegas technology within the actual ATLAS environment.
To compare the muon track measurements of the micromegas detector with those of the full ATLAS muon system on an event-by-event basis, the micromegas will be integrated into the ATLAS data acquisition infrastructure and read out with the presented SRU-based Read Out Driver. Due to its moderate channel count of 8192, a single SRU unit is sufficient, but the system can easily be expanded to read out much larger detectors such as the New Small Wheel.