Detector Control System for the GE1/1 slice test

Gas Electron Multiplier (GEM) technology, in particular the triple-GEM design, was selected for the upgrade of the CMS endcap muon system after several years of intense R&D effort. The triple-GEM chambers (GE1/1) are being installed at station 1 during the second long shutdown, with the goal of reducing the Level-1 muon trigger rate and improving the tracking performance in the harsh radiation environment foreseen for future LHC operation [1]. A first installation of a demonstrator system started at the beginning of 2017: 10 triple-GEM detectors were installed in the CMS muon system with the aim of gaining operational experience and demonstrating the integration of the GE1/1 system into the trigger. In this context, a dedicated Detector Control System (DCS) has been developed to control and monitor the installed detectors and to integrate them into CMS operation. This paper presents the slice test DCS, describing in detail the different parts of the system and their implementation.


Introduction
This paper presents the Detector Control System (DCS) developed for the GE1/1 slice test. The GE1/1 station and the slice test are described, followed by a detailed presentation of the different parts of the DCS system and the operational procedures adopted for the slice test.
The GE1/1 station. The GE1/1 station is the first new muon station being installed in the CMS muon system during Long Shutdown 2 (2019-2020) as a part of the muon system upgrade project. It is located in a region close to the beam pipe and covers the pseudo-rapidity interval 1.6 < |η| < 2.4, previously only partially instrumented due to the high hit rate. The aim of the GE1/1 muon station is to improve the L1 muon trigger performance before the installation of a new silicon tracker and its associated track trigger [1] in LS3. A detailed description of the physics motivations of the GE1/1 station can be found in [2].
The detector technology selected for the GE1/1 station is the triple-GEM design that was developed over the last five years to satisfy the performance requirements outlined in [2].
In the GE1/1 muon station, a pair of triple-GEM chambers is combined to form a superchamber that provides two measurement planes in the muon endcap, complementing the existing ME1/1 detectors and maximizing the detection efficiency.
The superchambers alternate between the so-called long (1.55 < |η| < 2.18) and short (1.61 < |η| < 2.18) versions, as required by the mechanical envelope of the existing structure. Each endcap holds 18 long and 18 short superchambers, for a total of 72 ten-degree chambers per endcap.
Components for the GE1/1 slice test were installed in 2017. The goals of the slice test were to acquire expertise in installation and commissioning, to establish the operational conditions, and to integrate the readout into the CMS online framework. Four superchambers were positioned almost vertically, at six o'clock, and a fifth was positioned at three o'clock, instrumenting in total a 50° region in the negative endcap. In the slice test, a GE1/1 superchamber covering a 10° region with two layers of triple-GEM detectors is referred to as a Gemini, replacing the term superchamber used in production activities. The two triple-GEM detectors of each Gemini are identified as layer1 and layer2, layer1 being the closest to the interaction point.

High voltage
The main difference between the implementation of the vertical detectors and the horizontal one was the high voltage supply. Each layer of a vertical Gemini was powered by a single high voltage channel, whose voltage was distributed to the detector electrodes through a ceramic divider. The horizontal Gemini was powered by a multi-channel supply providing 14 HV channels (7 per layer) to power the seven electrodes of each Gemini layer independently.

Readout system and low voltage
The readout system used for all Gemini in 2017 was based on VFAT2 ASICs [3]. At the beginning of 2018, the horizontal Gemini detectors alone were replaced with new ones equipped with the most recent readout electronics, based on VFAT3 ASICs [4]. The two versions are referred to here as version 2 and version 3, respectively.
A schematic representation of the layout of the version 2 GEM DAQ electronics is shown in figure 2. The readout of a single chamber is segmented into twenty-four sectors, each read out by its own ASIC chip. The on-chamber electronics are mounted on the GEM Electronics Board (GEB), a large multilayer PCB covering the entire detector. The GEB carries 24 VFAT2 hybrids and an Opto-Hybrid (OH) board with a Virtex-6 FPGA providing the data acquisition and slow control functionality for the ASIC chips. On the back-end, a micro-TCA (μTCA) crate hosts an AMC13 advanced mezzanine card for the propagation of Trigger, Timing and Control (TTC) signals to the CTP7 mezzanine card, also hosted in the μTCA crate, which is linked to the OH board via optical fibers. The AMC13 is also responsible for sending DAQ event fragments from the GEM AMCs to the cDAQ event builder.
The main differences between the version 2 and version 3 electronics are the generation of the ASIC chips, a new GEB design, and a new OH board. The VFAT3 ASIC has improved performance, with a communication protocol and data-formatting block especially designed to accommodate the operational conditions of the high-luminosity LHC and to meet the constraints of the upgraded CMS detector. The latest GEB has been redesigned in two parts in order to simplify the manufacturing process and to reduce mechanical issues during assembly. The OH board has been redesigned to work as a bridge between the two parts of the GEB and is located in the middle of the chamber. The back-end electronics are unchanged.

Gas system
The slice test chambers were operated with an Ar:CO2 70:30 gas mixture. In total, three gas lines were used for the whole system. The gas lines were continuously monitored by two flowcells per gas line, which measured input and output flow, and by pressure sensors.

RadMon
RadMon is a universal dosimetry device, based on p-channel MOS transistors, used to measure the energy deposition from ionizing radiation. These devices are used to monitor the radiation background in the GE1/1 region due to neutrons, photons, and charged hadrons. A complete description of the device can be found in [5]. During the slice test, only one Gemini was equipped with RadMon sensors.

Fiber Bragg Grating sensors
Fiber Bragg Grating (FBG) sensors were installed to monitor the temperature of the detectors during their operation. An FBG is a type of distributed Bragg reflector consisting of a short segment of optical fiber that reflects particular wavelengths and transmits all others. Its functioning is well described in [12].
In total, 6 FBG sensors were installed on each chamber: 2 on the readout board, 2 on the GEB, and 2 on the cooling plate, for a total of 12 FBG sensors per Gemini, connected in series. The FBG sensors were connected to an interrogator interfaced to a PC through TCP/IP, allowing their remote control over the CMS network using the DIM protocol [10] to communicate with the DCS.

DCS software
The GEM subdetector is being added to the original eight subdetectors of the CMS experiment: Pixel detector, silicon strip Tracker, Preshower and Electromagnetic Calorimeter (ECAL), Hadronic Calorimeter (HCAL), Drift Tubes (DT), Resistive Plate Chambers (RPC), and Cathode Strip Chambers (CSC). Each subdetector has its own DCS, designed according to its specific needs.
The CMS DCS is based on SIEMENS SIMATIC WinCC Open Architecture (WinCC OA) [6], previously known as Prozess Visualisierungs und Steuerungs System (PVSS). WinCC OA is a SCADA [7] system for the visualization and operation of processes and machines, supporting customized functionalities, designed for large-scale applications, and widely used in industrial and commercial settings. It supports multi-user and redundant systems, which are necessary for CMS use. The Joint Controls Project team (JCOP) [8], a collaboration between the four LHC experiments and the CERN IT/CO group, built the JCOP controls framework [9], which is based on WinCC OA and integrates the requirements of the LHC experiments. The JCOP framework provides a set of guidelines as well as software tools for the developers of the control systems for CERN experiments. The GEM DCS represents an example of a CMS subdetector control system built from scratch in recent years. The local GEM DCS can be conceptually divided into three main parts: one part to monitor the gas system, one part to monitor and control the single-channel HV and LV, and one part to monitor and control the multichannel HV. The following sections give an overview of the basic features of the main parts of the DCS.

High voltage and low voltage systems
Panels for monitoring and control of the low voltage and single-channel high voltage are shown in figure 3. Each layer is graphically represented by a trapezoid (figure 3a) that displays the actual voltage and current on the HV channels and the three LV channels used to power that detector layer and its electronics. A standard user has access only to the buttons in the lower part of the trapezoid that switch the HV and LV channels on or off, and cannot modify any other setting.
Expert users also have access to the panels shown in figures 3b and 3c, for changing the settings of each voltage channel.
For the multichannel HV system, the A1515TG board can be operated in two modes, called GEM and FREE. In GEM mode, when one of the seven channels connected to the same detector is turned on (or off), the other six channels are also turned on (or off) according to the OnOrder (OffOrder) settings; each setting is an integer specifying the powering order of its channel. In FREE mode, the seven channels can be turned on or off independently of each other.
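The GEM-mode sequencing described above can be sketched as follows. This is a minimal illustration, not the actual board firmware: the electrode names and OnOrder values are hypothetical, chosen only to mimic the seven electrodes of a triple-GEM layer.

```python
# Sketch of A1515TG power sequencing. Electrode names and OnOrder values
# are illustrative assumptions, not the actual board configuration.

def switch_on(channels, target, mode="GEM"):
    """Switch on HV channel `target` of one detector layer.

    GEM mode: all seven channels are switched on, in increasing OnOrder.
    FREE mode: only the requested channel is affected.
    """
    if mode == "GEM":
        for name in sorted(channels, key=lambda n: channels[n]["OnOrder"]):
            channels[name]["on"] = True
    else:
        channels[target]["on"] = True

# Hypothetical seven electrodes of a triple-GEM layer:
layer = {
    "Drift":  {"OnOrder": 1, "on": False},
    "G1Top":  {"OnOrder": 2, "on": False},
    "G1Bot":  {"OnOrder": 3, "on": False},
    "G2Top":  {"OnOrder": 4, "on": False},
    "G2Bot":  {"OnOrder": 5, "on": False},
    "G3Top":  {"OnOrder": 6, "on": False},
    "G3Bot":  {"OnOrder": 7, "on": False},
}
switch_on(layer, "G2Top")            # GEM mode: all seven channels turn on
assert all(ch["on"] for ch in layer.values())
```

The analogous OffOrder sequence runs in the same way when a channel is switched off.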
The first step in the development of the DCS to monitor and control the multichannel HV was the implementation of the panels shown in figures 4 and 5. It was necessary that all monitorable values and settings be accessible, so that any action on the board and its channels was possible and any parameter was visible from the panels. Ideally, the error LEDs should always be grey and the status LEDs should always be green, since during normal operation the detectors are always powered and only the voltage setting may change according to the LHC state. One "child" panel (selectable from a "parent" panel) was dedicated to the manual setting of all the relevant HV parameters, while another one (figure 5) is opened by the Monitor button from the main panel. The panel in figure 5 is a passive panel, meaning that no change to the system can be performed from it. This panel offers an exhaustive view of the actual values of all the existing parameters, organized by detector layer. An important feature is the matrix of LEDs located in the upper right part of the panel. A red LED indicates a channel in an error condition and the type of error. Dedicated LEDs indicate the most common errors: overcurrent, overvoltage, undervoltage, and trip. Other types of errors are identified from the Status value, shown on the right of the LED matrix. A legend for the Status values is accessible from the panel.

Gas system
The gas system provides the proper gas mixture for the operation of the detectors. It controls the composition of the mixture, the pressure, and the flow rate to the chambers. The gas system is operated and maintained by a CERN central team. Each CMS subsystem is responsible for monitoring the operation and is allowed to adjust the gas flow in each individual line. The gas system section of the DCS is devoted to monitoring the mixer, rack, and flowcells.
All the variables related to the gas system are published by the gas group using the DIP protocol [11] and then read by the DCS.
The mixer monitoring window contains all parameters related to the gas mixture. The most important ones are the ratio of each component (i.e. the actual percentage of each component in the mixture), the input pressure of each component, and the output pressure of the mixture.
The rack monitoring window shows the pressure of the gas entering the pre-distribution and distribution racks, and the pressure of the gas exiting from the detectors. These three values and the mixture component percentages mentioned above are the most important parameters of the system. The pressures indicate that there is gas present in the system, and the component percentages ensure that the correct gas mixture is supplied to the detectors.
The last part of the monitoring system is focused on the flowcells (see figure 6), whose role is to measure the input and output flows of each gas line. For the slice test there were three gas lines, each with a pair of input and output flowcells. The DCS could host up to six pairs of flowcells. The difference between input and output flow is calculated online by the DCS. Since the resolution of the flowcells is of the order of 1 l/h, the flowcells cannot be used for monitoring the gas tightness of the chambers.
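The online flow-difference calculation can be sketched as follows. The 1 l/h resolution figure comes from the text; the example readings and the helper names are illustrative assumptions.

```python
# Sketch of the online flow-difference monitoring for one gas line.
# Example readings are illustrative; the 1 l/h resolution is from the text.

FLOWCELL_RESOLUTION = 1.0  # l/h, order of magnitude of the flowcell resolution

def flow_difference(flow_in, flow_out):
    """Input-output flow difference for one gas line, in l/h."""
    return flow_in - flow_out

def is_significant(diff):
    """A difference below the flowcell resolution is indistinguishable from
    zero, which is why the flowcells cannot assess the gas tightness."""
    return abs(diff) > FLOWCELL_RESOLUTION

diff = flow_difference(10.0, 9.4)   # one input/output flowcell pair
assert not is_significant(diff)     # 0.6 l/h is below the resolution
```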

Environmental parameters
As the behaviour of gas detectors can be deeply affected by environmental conditions, the GEM DCS includes two sections devoted to monitoring environmental parameters. One section is focused on the cavern conditions and the other is specific to one Gemini detector.
The pressure and temperature in the service and experimental caverns were monitored, as shown in figure 7. The sensor data are available as DIP publications [11], read by the DCS in a way similar to that used for the gas values.
Monitoring was implemented for one of the Gemini detectors (see figure 8), which was instrumented with the Fiber Bragg Grating (FBG) temperature sensors [12] described in section 2.5. The FBG sensor data were important in understanding which components (electronics, HV circuit, etc.) contribute to heating the detector, and the effectiveness of the cooling circuit.

RadMon
The RadMon device installed in the slice test contains five on-board sensors that are described in table 1. The GEM DCS calculates the temperature and radiation doses and displays them in a dedicated panel.

LHC and magnet
The slice test DCS includes panels for monitoring the status of the LHC machine and the CMS magnet.
The LHC Info panel, shown in figure 9, monitors the status of the machine, beam, and handshake modes and information about the fill, including fill number, energy, instantaneous luminosity, collision rate, and integrated luminosity. Time-dependent plots of the energy, collision rate, and instantaneous luminosity are shown at the bottom of the panel.
The magnet panel reports the following status indicators:
• Cryo State describes the status of the cryogenic system.
• Dewar State describes the status of the liquid nitrogen dewar.
• Emergency reports any dangerous condition of the magnet system.
• Ramping and Steady indicate whether the magnet current is increasing/decreasing or stable.
The four fields on the right provide details on the operational status:
• Current is the value of the magnet current.
• Dewar Level is the level of the liquid nitrogen.
• Field shows the magnetic field value.
• Vacuum shows the pressure in the vacuum tank.
A complete description of the CMS magnet can be found in [13].

Automation
Manual monitoring and intervention features of the DCS were described in the previous section. Most of the time, however, the system is operated automatically. In periods of data taking, and also in other machine conditions, the LHC goes through a sequence of states to which all the experiments and their subsystems must react properly. The experiments must be ready to take data efficiently during collisions and to go to a safe state in case of potentially harmful LHC conditions. Automation is achieved through two different tools: the DCS Finite State Machine (FSM) and the DCS Detector Protection.
The FSM enables control of entire subdetectors as single objects through pre-defined global actions reacting to the LHC changes of state. The FSM can have only one state at a time, among a finite number of possible states. The CMS FSM has a tree hierarchy, whose nodes can be either logical units, control units or, if they are at the lowest level of the tree, device units. The state of a logical or control node is defined by the states of its children. It is possible to give a command to such a node, which will consequently send a command to its children. As a result, the state of the children may be re-evaluated and changed, and the state of the parent node will be re-evaluated according to the new state of its children. The number and names of the states, the rules defining the state transitions, the list of available commands in each state and their effect on children are all programmed by the developer. At the bottom of the FSM tree are the device nodes, which do not propagate commands to any child but usually act on a piece of hardware. If a node or device unit in a particular state receives a command that is not declared among the state's actions, that command is ignored. A simple example of an FSM structure for an LHC experiment is shown in figure 11.
An important feature of the FSM structure is that commands propagate only down the tree, while states propagate only upwards. In order to send commands, a user takes control of one node of the FSM. This user then controls the tree from there down, either sending commands from the highest node owned by that user, which are propagated down the tree, or sending commands directly to a node or device at a lower position. Only one person at a time can own an FSM tree.
This mechanism is the basis for the automation of CMS subdetectors. Each subdetector has designed its own DCS with its FSM. The upper node of each subdetector can be included in the central CMS FSM, controlled from a single operator station in the CMS control room. The subdetectors' upper nodes must follow a set of conventions and rules on the states and commands, to be compatible with the central DCS and to communicate with it correctly. The specific needs of each subdetector are implemented down its FSM tree. By taking control of the CMS node, the operator owns all the subdetectors and has control of the entire CMS detector. The operator can send pre-defined global actions to the subdetectors, whose effects have been specified and programmed by the subdetector experts. If necessary, the operator can partition out the tree of a subsystem, giving control of its FSM tree to a subdetector expert. When there is a change in the LHC or beam state, automated actions, not manually sent by the shifter, are propagated down the FSM tree according to each subdetector's automation matrix. In order for a command to reach a subdetector and be propagated down its tree, its FSM must be centrally owned by the shifter in the control room.
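The command-down/state-up behaviour can be sketched with a toy tree. This is a minimal illustration under simplifying assumptions: the node names, states, and the one-command interface are hypothetical, not the actual CMS FSM configuration.

```python
# Toy FSM hierarchy: commands propagate down the tree, states propagate up.
# Node names and states are illustrative, not the real CMS FSM.

class FSMNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "OFF"

    def command(self, cmd):
        """Send a command down the tree; device units act on 'hardware'."""
        if not self.children:                    # device unit (leaf)
            self.state = "ON" if cmd == "SWITCH_ON" else "OFF"
        else:                                    # logical/control unit
            for child in self.children:
                child.command(cmd)
            self.evaluate()

    def evaluate(self):
        """Re-evaluate this node's state from its children's states."""
        states = {c.state for c in self.children}
        self.state = states.pop() if len(states) == 1 else "MIXED"

# Hypothetical slice of the GEM sub-tree:
gem = FSMNode("GEM", [
    FSMNode("GEM_HV", [FSMNode("Gemini01_HV"), FSMNode("Gemini27_HV")]),
    FSMNode("GEM_LV", [FSMNode("Gemini01_LV"), FSMNode("Gemini27_LV")]),
])
gem.command("SWITCH_ON")    # propagates down; states then propagate up
assert gem.state == "ON"
```

A real FSM additionally defines, per state, which commands are accepted; a command not declared among a state's actions would simply be ignored.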
Automation is required to ensure that all subdetectors are locked in a safe mode whenever a potentially dangerous situation exists. For this purpose the FSM is not an adequate tool, as the real effect of a propagated command depends on several factors, including whether the subdetector is under central control or not when an LHC or beam change of state occurs, or whether the subdetector top node is in a state able to process a command. Detector protection is used when it is desired that the subsystems always react, regardless of any other factor.

FSM
The scheme of the FSM realized for the slice test is shown in figure 12. It is organized according to the subdetector parts: the low voltage system (GEM_LV), the high voltage system (GEM_HV), and the gas system.
The structures of the low voltage and high voltage sub-trees are similar, with the Gemini chambers at the first level and the two chamber layers at the second level. Currently, in the LV sub-tree, all layers of the horizontal Gemini have three low voltage channels, which are necessary to power their on-board readout system. The LV of a vertical Gemini currently uses only one voltage channel; hence the sub-trees of Gemini01L1_LV and Gemini01L2_LV have only one voltage channel. Similarly, there are some differences in the structures of the GEM_HV sub-trees between the Gemini27_HV to Gemini30_HV branches (vertical Gemini) and the Gemini01_HV branch (horizontal Gemini). The sub-trees of the chambers equipped with the voltage divider and powered with a single HV channel are simple, as each detector layer is a device unit acting directly on the HV channel powering that layer. For each chamber layer supplied with the multichannel power supply, seven different HV channels power a detector layer, each one with its own device unit in the FSM tree.
The Gas System FSM sub-tree. The GEM Gas System sub-tree has a substantial difference with respect to the LV and HV sub-trees of the FSM: no actions are programmed at any level of its tree. Hence it is not used to perform any action and does not receive any command following the automation matrix; it is used only to monitor the gas system components.
The GEM Gas System sub-tree is divided into two main parts, the GEM Mixer and the Gas Distribution. The device units at the end of their sub-trees are connected to several parameters, whose values can be in a good, warning, or bad range. The device units called Stepper are connected to a status integer representing the state of a different part, rack, or mixer of the gas system. For each gas line used in the slice test (Channel2, Channel3 and Channel4 in figure 12), the input and output flow of the line, together with their difference, are monitored. The Argon and CO2 device units in the GEM Mixer sub-tree are connected to the percentages of the gases in the gas mixture. The states of the device units that depend on the aforementioned parameters are propagated up the tree to the main node of the gas system. As a result, the latter can be in three different states:
• Running, if all the parameters are in a good range;
• Warning, if any of them is in a suspicious range;
• Not Running, if any of them is in a bad range.
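The state aggregation just described can be sketched as follows. The range values and parameter names are illustrative assumptions; only the three top-node states and their precedence (bad over warning over good) come from the text.

```python
# Sketch of gas-system state aggregation. The 70 +/- 1 / +/- 2 ranges for
# Argon are illustrative assumptions, not the real DCS thresholds.

def device_state(value, good, warning):
    """Classify one parameter: good range -> OK, warning range -> WARNING,
    anything else -> BAD."""
    if good[0] <= value <= good[1]:
        return "OK"
    if warning[0] <= value <= warning[1]:
        return "WARNING"
    return "BAD"

def gas_system_state(device_states):
    """Top-node state of the gas sub-tree: any BAD wins over any WARNING,
    which wins over all-OK."""
    if "BAD" in device_states:
        return "Not Running"
    if "WARNING" in device_states:
        return "Warning"
    return "Running"

# Hypothetical Argon percentage reading of the mixer:
states = [device_state(70.4, good=(69.0, 71.0), warning=(68.0, 72.0))]
assert gas_system_state(states) == "Running"
```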

Protection system
The operation of the protection system relies on the existence of an input condition, i.e. a boolean input variable that is set to true when a situation requiring a protected action occurs, and is set back to false when the situation is over. In addition, it is necessary to identify one or more variables, called output Data Point Elements (DPEs), to be set to a particular value and locked to that value for as long as the input condition remains true (or fired). Locking a variable means that the associated DPEs cannot be changed, neither through the FSM, nor from the DCS panels, nor anywhere else in the DCS. For example, the detector protection is typically used to set the high voltage to STANDBY when the LHC starts injection, as programmed by the automation matrix. In such a case, one typically chooses the v0 setting of the HV channels as the output DPE to be locked to the standby voltage. It is possible to create more than one detector condition depending on the same input variable. After a condition has been fired and the output DPEs have been locked, a verification is executed: some DPEs and an expected value must be defined. These DPEs, called verify DPEs, are usually not the same ones used as output DPEs. In the previous example, one could set the vMon value of the HV channels as verify DPEs and expect them to be smaller than the standby voltage (plus some tolerance).
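The v0/vMon example above can be sketched as follows. The voltage values, tolerance, and class interface are illustrative assumptions; the fire-lock-verify sequence follows the description in the text.

```python
# Sketch of detector-protection logic: firing the input condition sets and
# locks the output DPEs (v0); verification then checks independent verify
# DPEs (vMon). Voltage values and tolerance are illustrative assumptions.

class ProtectionCondition:
    def __init__(self, standby_v0, tolerance=5.0):
        self.standby_v0 = standby_v0   # value the output DPEs are locked to
        self.tolerance = tolerance     # allowed excess of vMon over standby

    def fire(self, hv_channels):
        """Input condition became true: set and lock each channel's v0."""
        for ch in hv_channels:
            ch["v0"] = self.standby_v0
            ch["v0_locked"] = True     # further writes to v0 are rejected

    def verify(self, hv_channels):
        """Check the verify DPEs (vMon) against the expected standby value."""
        return all(ch["vMon"] <= self.standby_v0 + self.tolerance
                   for ch in hv_channels)

channels = [{"v0": 3200.0, "vMon": 3200.0, "v0_locked": False}]
prot = ProtectionCondition(standby_v0=700.0)
prot.fire(channels)                 # v0 set to standby and locked
channels[0]["vMon"] = 702.0         # channel ramps down toward standby
assert prot.verify(channels)
```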
The protected actions defined in the GEM automation matrix are all PROTECTED STANDBY, triggered by the same input variable. The latter is not controlled by the local GEM DCS but is provided and controlled by the central DCS. As the high voltage is supplied to the detectors in two different ways, either with the single-channel or the multichannel supply, it was necessary to develop two different detector protection systems to obtain the protected standby of the high voltage. The two protection systems are triggered by the same central input condition. Our policy was to lock only the v0 setting of the high voltage channels, but not the onOff setting. In this way, if a channel is powered, the only allowed value is the standby voltage; on the other hand, it is always possible to switch off a chamber during a protected standby.

Conclusions and future perspectives
The GE1/1 slice test DCS was developed from scratch to handle the operation of 10 detectors in the CMS environment. A local version, including HV and LV control as well as gas system monitoring, has been deployed since 2017, allowing the operation of the detectors by GEM expert users.
For brevity, in the GEM_LV and GEM_HV sub-trees only some of the Gemini chambers are shown, and only the sub-trees of Gemini27L1_LV, Gemini01_LV, Gemini27L1_HV and Gemini01L1_HV have been expanded down to the device units. The trees not expanded in the scheme have the same structure. The same applies to the gas channels in the Gas System sub-tree, where only the sub-tree of Channel2 has been expanded down to the device units.
The completion of a dedicated FSM and detector protection system led to the inclusion of the GEM system in the central CMS DCS at the end of 2017. Since the beginning of the 2018 run, the GEM system has been allowed to operate unattended, responding to LHC activities like the other subsystems. Since its deployment, the DCS, even while still under development, has allowed stable and safe operation of the detectors.
The slice test DCS is currently being used as a starting point for the development of the DCS for the entire GE1/1 station, which is being installed during Long Shutdown 2 of the LHC.