The Astrophysics with Italian Replicating Technology Mirrors (ASTRI) Mini-Array is an international collaboration led by the Italian National Institute for Astrophysics (INAF) and devoted to imaging atmospheric Cherenkov light for very-high-energy γ-ray astronomy.
1. Introduction

Astrophysics with Italian Replicating Technology Mirrors (ASTRI)1,2 is a project aimed at developing the next generation of imaging atmospheric Cherenkov technique (IACT) telescopes for ground-based γ-ray astronomy in the energy band between 1 and several hundred TeV. It was initially funded as a flagship project by the Italian Ministry of University and Research and is now one of the most significant ground-based astronomy projects led by INAF, focusing on both technological and scientific advancements. The ASTRI-Horn prototype telescope has a diameter of 4 m and is located at the Serra la Nave site on the slopes of Mount Etna in Sicily. It is managed by INAF-OACt and was named after Guido Horn D'Arturo, an Italian astronomer who invented telescopes with tiled mirrors. This telescope is the first of its kind in Cherenkov astronomy, with a wide field of view of 10 deg and a compact, aplanatic, two-mirror optical configuration of the Schwarzschild-Couder type. In addition, it has a camera equipped with silicon photomultiplier (SiPM) sensors. The ASTRI collaboration has gained valuable experience from the implementation of the first phase of the project and is currently working on the second phase, which involves the installation of nine Cherenkov telescopes, known as the ASTRI Mini-Array, at the Teide Observatory in Tenerife, Canary Islands, Spain. These telescopes, similar to ASTRI-Horn but with several improvements, are spaced hundreds of meters apart. The project is supported on-site by the Instituto de Astrofísica de Canarias (IAC) and the Fundación Galileo Galilei (FGG), which is governed by INAF. Other international institutions, such as the Universidade de São Paulo (USP) in Brazil, North-West University in South Africa, and the Université de Genève in Switzerland, are also contributing at various levels. 
The ASTRI-Horn and ASTRI Mini-Array telescopes are prototypes of the small-size telescopes (SST) that will be installed at the southern site of the Cherenkov Telescope Array Observatory (CTAO) in Chile, at Paranal, which is expected to be operational by 2027. The ASTRI Mini-Array aims to conduct stereoscopic observations in Cherenkov light from 2025 onwards.3 Thanks to the IACT,4 it is possible to infer the direction and spectrum of γ-ray photons with energies from a few hundred GeV to 200 TeV and beyond arriving at the Earth from astrophysical sources. This will be the first time such observations are performed with wide-field telescopes. Thanks to its precise angular and energy resolutions, the ASTRI Mini-Array will complement the high-altitude direct particle detectors in the Northern Hemisphere, such as LHAASO and HAWC, which are already monitoring the sky in the same band. The ASTRI telescope optics are based on a primary mirror made of reflecting segments and a monolithic secondary mirror, arranged in the Schwarzschild-Couder configuration. The Cherenkov UV-optical light produced by atmospheric particle cascades (air showers), initiated by the primary γ-ray photons entering the atmosphere, is focused onto a compact camera (just 50 cm in diameter) with a large (10.5 deg) field of view, thanks to a small plate scale. The camera is a fast (tens of ns timescale) SiPM system developed by INAF adopting the CITIROC ASICs.5 The collected data are recorded, and the array trigger is managed off-line by combining the data taken by the different telescopes once a proper time stamp has been assigned by the White Rabbit common timing system.6 Appropriate data analysis methods are employed to reduce the background level and allow efficient detection of γ-rays coming from astrophysical sources. 
Besides the γ-ray scientific program, the ASTRI Mini-Array will also perform stellar Hanbury Brown intensity interferometry studies and cosmic-ray detection in the PeV region through Cherenkov light analysis. Stellar Hanbury Brown intensity interferometry observations7 are possible because each telescope of the ASTRI Mini-Array will be equipped with an ad hoc, very fast camera for intensity interferometry. The ASTRI Mini-Array layout, with its very long baselines (hundreds of meters), will allow us to obtain angular resolutions down to 50 micro-arcsec, making it possible to reveal details on the surface of bright stars and of their surrounding environment and to open new frontiers in some of the major topics in stellar astrophysics. Cosmic-ray measurements are also possible because 99% of the observable component of the Cherenkov light has a hadronic nature. Although the main challenge in detecting γ-rays is to distinguish them from the much higher background of hadronic cosmic rays, this background, recorded during normal γ-ray observations, can be used to perform measurements and detailed studies of the cosmic rays themselves.8 The ASTRI Mini-Array telescopes, including the Cherenkov Camera,5 are an updated version of the ASTRI-Horn Cherenkov Telescope9 operating at Serra La Nave (Catania, Italy) on Mount Etna. The software developed by INAF for the ASTRI-Horn telescope, including development, testing, and production environments, is partially reused in the ASTRI Mini-Array context. The ASTRI Mini-Array Software System presented in this paper manages observing projects, observation handling, remote array control and monitoring, data acquisition, archiving, processing, and simulations of the Cherenkov and intensity interferometry observations, including science tools for the scientific exploitation of the ASTRI Mini-Array data. 
The ASTRI Mini-Array Software System10 is under development by INAF teams and other Italian research institutions (including other public research institutions, such as the University of Perugia and INFN), foreign institutes (the University of Geneva), and private partners (the Advanced Center for Electrical and Electronic Engineering, AC3E, at Santa Maria University, Valparaiso, Chile). INAF is in charge of software management and coordination, requirements specifications, top-level architecture definition, development, integration, verification, validation, and deployment of the overall software, with AC3E as an external contractor managing some of these activities.

1.1. ASTRI Mini-Array System

The ASTRI Mini-Array system1,2 is geographically distributed across three main sites. The Array Observing Site (AOS) at the Teide Observatory is operated by the Instituto de Astrofisica de Canarias (IAC); there, the nine telescopes and the rest of the observing site system are under installation, and the AOS includes a data center for computing and networking resources. Several array operation centers (AOCs) are planned, each equipped with a control room. These AOCs are located remotely at various INAF institutes in Italy, at the IAC facilities in La Laguna (Tenerife), and at the Teide site for use during the installation and commissioning phases. A primary control room will allow the operator to supervise and carry out the scheduled observations and calibrations during the night, commanding the ASTRI Mini-Array, while an astronomer on duty (AoD) supports and manages the observations; additional control rooms allow monitoring of the night operations. Finally, the ASTRI Data Center in Rome handles data archiving, processing and quick-look, simulation, and science user support. The ASTRI Mini-Array Software System runs at the AOS (the on-site software, called supervisory control and data acquisition, SCADA) and in the ASTRI Data Center in Rome (the off-site software). 
The on-site software controls and monitors the observing site system, the site service system, and the safety and security system installed at the AOS. The array operator and AoD can remotely connect to the on-site software from AOCs through a web interface named operator human machine interface (operator HMI) that allows remote access, monitoring, and control of the on-site systems. The observing site system is composed of all subsystems aimed at performing the observation:
The site service system is composed of all subsystems that provide services required to support the observing site system. The main subsystems are
Finally, the safety and security system does not depend on any other site-installed system other than power. The functional safety actions are the detection of interlock requests and emergency stops. In case of hazardous faults, the system interlocks any other system that could be in a hazardous situation because of that fault. The safety and security system will be connected to the site emergency stop (E-stop) system that, if activated, shall trigger an emergency stop function. Emergency stop devices must be a backup to other safeguarding measures, not a substitute. E-stop devices shall be appropriately distributed throughout the site (e.g., local control room, service cabinet) to facilitate quick activation from different locations in an emergency. Each hardware assembly has a local control system, i.e., a hardware/software system used to switch on/off, control, and configure all parts of the assembly and to retrieve their status, monitoring points, and alarms; the related software is called local control software (LCS). Each LCS could have a local engineering human machine interface (engineering HMI). LCSs can be delivered as part of an externally contracted subsystem or developed by the INAF team (e.g., for subsystems developed by INAF such as the optical camera and the UVSiPM). Each LCS implements an interface to the ASTRI Mini-Array Software System based on the IEC 62541 standard for the OPC Unified Architecture protocol (OPC-UA),16 one of the most important communication protocols for Industry 4.0 and the Internet of Things. OPC-UA allows access to machines, devices, and other systems in a standardized way and enables uniform, manufacturer-independent data exchange. An interface control document describes the interface between an LCS and the on-site software. 
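As an illustration of the kind of interface an LCS exposes, the sketch below models an assembly's switch-on/off commands, monitoring points, and status as plain Python objects. The class and point names are hypothetical; a real LCS would publish these as nodes in an OPC-UA address space rather than as in-process attributes.

```python
from dataclasses import dataclass, field


@dataclass
class MonitoringPoint:
    """A single value exposed by an LCS (e.g., a temperature or a motor state)."""
    name: str
    value: float
    unit: str


@dataclass
class LocalControlSoftware:
    """Illustrative stand-in for an LCS: switch on/off, configure, and read
    back status and monitoring points (names here are hypothetical)."""
    assembly: str
    status: str = "OFF"
    points: dict = field(default_factory=dict)

    def switch_on(self):
        self.status = "ON"

    def switch_off(self):
        self.status = "OFF"

    def update_point(self, name, value, unit):
        # In a real LCS this would be a write to an OPC-UA node
        self.points[name] = MonitoringPoint(name, value, unit)

    def read(self, name):
        return self.points[name].value
```

In the real system, the interface control document fixes which nodes each LCS must expose; the sketch only conveys the shape of that contract.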
Two subsystems use a different protocol: the on-site ICT system uses the simple network management protocol (SNMP), and the power management system uses the MODBUS protocol.17 The INAF team is also in charge of developing the assembly, integration, and verification (AIV) software used during the AIV activities, which can be connected with an LCS via the OPC-UA interface (see Sec. 4.4.7).

2. Observing Cycle

The ASTRI Mini-Array observing cycle is the main driver for developing the ASTRI Mini-Array software architecture. The ASTRI Mini-Array Software System is envisioned to handle the observing cycle, i.e., the end-to-end control and data flow, and the information and operations required to conduct all tasks from the time an observing project (a description of a scientific project to observe a target) is created until the resulting data are acquired and analyzed. The main actors that interact with the software system are the following:
A schematic representation of the global information flow is given in Fig. 1, where the observing cycle's main phases and related functions are shown. The observing cycle is divided into four main phases: (i) observation preparation, (ii) observation execution, (iii) data processing, and (iv) dissemination. The observation preparation is the first phase of the observing cycle. The observing cycle is initiated by a science user submitting an observing project. Once the ASTRI Mini-Array Science Team has selected and approved a list of observing projects, the support astronomer, with the help of an observation scheduler tool (see Sec. 4.4.2), turns them into a list of scheduling blocks (SBs) containing all the information required to perform the corresponding observations, including time and telescope constraints and the telescope configuration. SBs are divided into observing blocks (OBs), i.e., the smallest sequences of observing instructions that can be scheduled, which depend on the observation mode chosen by the submitter of the proposal. For example, if the Wobble observation mode is chosen, a single SB will be divided into a sequence of OBs: a calibration run as the first OB, followed by 2 or 4 OBs with alternating wobble target positions. SBs are scheduled in long- and short-term observation plans and stored in the archive system. The short-term observation plan (the list of SBs that must be observed during the night) is transferred on-site. The next step of the observing cycle is the observation execution. The central control executes the short-term observation plan of the observing night, carrying out setups (with an appropriate set of configuration parameters), calibrations, and target observations necessary to ensure that the acquired data are correctly calibrated and used in the construction of the final data product. 
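The Wobble-mode expansion of an SB into OBs described above can be sketched as follows. The dictionary fields, the default 0.5 deg offset, and the alternation scheme are illustrative assumptions, not the actual ASTRI data model.

```python
def wobble_observing_blocks(target, n_wobble=4, offset_deg=0.5):
    """Expand a scheduling block into observing blocks for Wobble mode:
    one calibration run first, then OBs with alternating wobble offsets
    (field names and offset value are illustrative)."""
    obs = [{"type": "CALIBRATION", "target": target}]
    for i in range(n_wobble):
        # Alternate the pointing on opposite sides of the target
        sign = 1 if i % 2 == 0 else -1
        obs.append({"type": "SCIENCE", "target": target,
                    "wobble_offset_deg": sign * offset_deg})
    return obs
```

For a 4-wobble SB this yields five OBs: the calibration run followed by four science runs whose offsets alternate in sign around the target.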
The array operator remotely supervises operations at the ASTRI Mini-Array AOS via a remote operator HMI. The on-site software starts the array elements, checks the array's status, assesses the environmental conditions and atmosphere characterization (e.g., NSB level), performs the array calibration, and checks the observation data quality. The array operator can also manually change the schedule, check the status of assemblies, and administer other resources. Changes in environmental conditions, atmosphere characterization, or array status can change the kinds of observations that can be carried out; SBs are scheduled or stopped according to the current conditions. At the end of an OB, the data are transferred off-site; this starts the data processing phase. The data processing produces calibrated and reconstructed data (the final event list), applying the necessary corrections. Monte Carlo simulations are performed to optimize the reconstruction of the Cherenkov events. Automated scientific analysis is performed on reconstructed data. If an external science alert is received, the short-term observation plan is modified to follow up on the interesting astrophysical multi-messenger (GW or neutrino) and multi-wavelength events. Data and science tools are distributed to the science users for the scientific analysis of the observing projects: this is the data dissemination phase. Science tools can be used to produce images and spectra and to detect γ-ray sources. High-level data and data products [event lists and instrument response functions (IRFs)] are released to the ASTRI Mini-Array Science Team. Storing all persistent information in the archive system makes the system less coupled, so these phases can work independently as long as they maintain the information flow to and from the archive system.

3. Main Requirements

To reduce overall operation costs and workforce, the following top-level requirements are considered for the definition of the software architecture:
The ASTRI Mini-Array software developed by the ASTRI collaboration and used during operations, with only a few exceptions regulated by industrial contracts, is primarily governed by the Lesser General Public License (LGPL) from the Free Software Foundation. The software will become open source as soon as a fully operational version is released.

4. General Software Architecture

4.1. 4+1 Architectural View Model

The primary goal of software architecture is to illustrate the organization of the software system, delineate its structural components and their functionalities, and integrate these components into broader subsystems. The architectural approach used by the ASTRI team is the 4+1 architectural view model19 illustrated in Fig. 2, which consists of looking at the system through different views, represented with unified modeling language (UML)20 diagrams: (i) the use-case view describes the system's interaction with actors by developing use cases; a use case is a list of actions or event steps typically defining the interactions between an actor and a system to achieve a goal, where the actor can be a human or another hardware or software system; (ii) the logical view is a functional decomposition of the system with the description of the global information flow based on the analysis of use cases and data models; (iii) the process view deals with the dynamic aspects of the system; (iv) the implementation/development view represents the detailed design of the implemented system; (v) the physical/deployment view depicts the system from a system engineer's point of view: the physical view is more concerned with the system's physical layer, while the deployment view deals with allocating computing resources on physical nodes, i.e., the topology of software components on the physical layer and their physical connections. 
The ASTRI team adopted the 4+1 view model because it allows a deep integration of the domain experts (e.g., scientists, instrument developers) with the software developer team. For example, it allowed them to participate actively in the requirement definition, directly developing use cases to integrate their knowledge of γ-ray astrophysics, astronomical observatories, and instrument development and operations into the overall definition of the software architecture. Experts and scientists have also actively participated in the definition of the logical and process views.

4.2. Requirement Engineering

The main purpose of the requirement engineering process is to produce functional and quality (a.k.a. non-functional) requirements. The requirement inception is the first step, collecting the requirements from users and other stakeholders to
The requirement inception process is challenging because several problems arise during this phase:21 (i) problem of scope: the user specifies technical details, and the boundary of the system is not well defined; (ii) problem of understanding: the users do not have a complete understanding of the problem domain, have trouble communicating needs to the system/software engineers, omit information that is believed to be "obvious," specify requirements that conflict with the needs of other customers/users, and use a glossary with terms having different meanings; (iii) requirements volatility: the requirements change over time. The development of some views is part of this process; in particular, the use-case, logical, and process views are used to define the scope and the main functions of the software system. An initial definition of some top-level requirements (listed in Sec. 3) has been provided to address the problem of understanding, coupled with a glossary and a high-level definition of the data model of the ASTRI project. In this way, many ambiguities have been removed from the beginning of the project, facilitating the requirement inception phase. To keep the problems depicted above (scope, understanding, and volatility) under control, we have adopted an iterative process for the definition of a set of top-level software documents to develop the views "so that solutions can be reworked in the light of increased knowledge."22

4.3. Top Level Software Documents

The content of the top-level software documents (also called software system engineering documents) is summarized in this contribution. These documents passed a Concept Design Review (CoDR, see Sec. 5.3) with a panel of external reviewers in June 2020. These documents include (i) the top level use-case document, (ii) the top level software architecture document, (iii) the top level data model document, (iv) the product breakdown structure (PBS), and (v) a global glossary at project level. 
The main inputs for defining these documents were the ASTRI science and system requirements, the ASTRI operation concept,2 and the ASTRI science use cases. The top level data model document provides a conceptual view of the ASTRI Mini-Array data model, describing data products, data models, and their relationships, referring to data streams in architectural diagrams without ambiguity, and defining a short identifier for each data product. The concepts and definitions described in this document and in the glossary are references for all software documents developed by the ASTRI team. The top level use-case document captures the greatest possible number of stakeholders' points of view analyzed during the requirements inception phase. This document contains observation-related use cases that describe, from a user's point of view, how to perform observations from the proposal to the scientific exploitation of the acquired data, and the commonalities of all the science-related use cases, according to the observing cycle described in Sec. 2. This category includes calibration and other technical use cases. This document covers the use-case view of the system and is the starting point for the development of detailed use-case documents at the subsystem level. The iterative process adopted by the ASTRI team allowed using these use cases as a high-level process view, including human actors and some top-level system actors defined in the top level architecture document. The top level architecture document provides a comprehensive architectural overview of the ASTRI Mini-Array Software System and the hardware installed at Teide from a logical perspective, providing a complete functional decomposition and the main requirements of the software. It covers the logical view, partially the process view, and the deployment view; it depicts various aspects of the software using different views and describes the most significant architectural decisions. 
Use cases coupled with the functional view provide a complete description of the functional requirements of the software. The functional decomposition described in the top level architecture has been used to develop the whole PBS of the software system, which is used to manage interfaces and to define the specification tree, i.e., the hierarchical relationship of all technical aspects of the software system, which is the basic structure for requirements traceability. The PBS has also been used to define the project's work breakdown structure (WBS), allowing an organization of the work based on the customer-supplier relationship described in Sec. 5.1. The top-level documents serve as the foundation for a more comprehensive requirement elicitation phase. The requirement elicitation phase comes after inception and involves gathering detailed requirements. It aims to uncover specific needs, features, and constraints by interacting with stakeholders and users. This phase involves the development of detailed use-case documents and software requirements for each software subsystem within the ASTRI Mini-Array software architecture (refer to Sec. 4.4). In addition, these requirements, along with the top-level documents, are used to create detailed design documents for each software subsystem. This process ensures traceability between the subsystems and the top-level use cases and architectural elements, effectively constructing the complete specification tree of the ASTRI Mini-Array Software System.

4.4. ASTRI Mini-Array Software Main Systems

The general architecture of the ASTRI Mini-Array Software System is derived from the use cases, data models, and data flow definitions and consists of the top-level systems described in this section. 
Figure 3 shows the context view with the main software systems: the archive system, the science support system, the SCADA system, the data processing system (DPS), the simulation system, the on-site startup system, and the AIV and engineering software. The following sections provide an overview of these systems, with a short description of the main functionalities and a link to the observing cycle phases.

4.4.1. Archive system

The archive system (see Fig. 4 with the connected data models) provides a central repository for all persistent information of the ASTRI Mini-Array, such as observing projects, observation plans, raw and reduced scientific data, monitoring data, system configuration data, and logs of all operations and schedules. The main archives are
4.4.2. Science support system

The science support system manages the observing projects, the preparation of observation plans, the handling of science alert events, the dissemination of scientific data, and the science tools for their analysis. It is the main interface for science users to the ASTRI Mini-Array system. It provides them with an easy-to-use science support system HMI for the detailed specification of observations. The main products generated by this system are the observation plans. The science support system also contains the science gateway, the web interface through which the science user accesses high-level science-ready data and data products delivered by the DPS. This system supports the observation preparation and dissemination phases of the observing cycle. The main functions are (see Fig. 5):
The transient handler is responsible for submitting a new observing project to the observation scheduler whenever an interesting external alert is received and flagged as observable. Upon receiving this trigger from the transient handler, the observation scheduler generates a new short-term observation plan for the ToO observation and provides it to SCADA/central control, which is triggered to execute the new short-term observation plan.

4.4.3. Supervisory control and data acquisition system

The SCADA system controls all operations at the AOS. SCADA's central control system interfaces and communicates with all assemblies and dedicated software installed at the site. It is responsible for the execution of the short-term observation plan to perform observations. SCADA shall be supervised by the operator but performs the operations in an automated way. It shall provide scientific data, logging, monitoring, alarm, and online observation quality information to help assess data quality during the acquisition. This system supports the day and night observation execution and maintenance phases. The main functions (see Fig. 6) are:
Each SCADA subsystem could provide an engineering HMI, i.e., a dedicated graphical user interface for development, troubleshooting, and test purposes. SCADA is developed using the ALMA Common Software (ACS).29 ACS30 is a container-component framework designed for distributed systems, with standardized paradigms for logging, alarms, and location transparency, and support for multiple programming languages: Java, C++, and Python. ACS has been used successfully for the Atacama Large Millimeter Array (ALMA) Observatory, which manages an array of 66 antennas on the Chajnantor plateau in Chile. ACS has also been used for ASTRI-Horn and the Sardinia Radio Telescope31 and is also used for CTA.32 Most of the Mini-Array software developers in INAF are, therefore, familiar with the use of ACS.

4.4.4. Data processing system

The DPS33 (see Fig. 7) performs the calibration of scientific data, data reduction, and analyses. It also checks the quality of the final data products. Its primary role is to process data retrieved from the archive system as soon as enough data have been acquired to make such reduction meaningful. Typically, processing will be performed on data sets arising from an SB. This system supports the data processing phase of the observing cycle. The main functions are: (i) the stereo event builder,34 to perform the off-line software stereoscopic event reconstruction of Cherenkov data; (ii) the Cherenkov data pipeline, including the calibration software pipeline,33 for data calibration, reconstruction, selection, and automated scientific analysis of Cherenkov data; (iii) the intensity interferometry data reconstruction and (iv) scientific analysis pipeline,11 for reconstruction and analysis of the stellar intensity interferometry data.

4.4.5. Simulation system

The simulation system provides Monte Carlo simulated scientific data for developing reconstruction algorithms and characterizing real observations. 
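The container-component paradigm at the heart of ACS (Sec. 4.4.3) can be illustrated with a toy registry in which clients retrieve components by name without knowing where they run. This is a conceptual sketch only, not the ACS API; the class and component names are hypothetical.

```python
class Container:
    """Toy sketch of the container-component idea: components are registered
    by name and retrieved by clients without knowledge of their location
    (location transparency). This is an illustration, not the ACS API."""
    def __init__(self):
        self._components = {}

    def activate(self, name, component):
        self._components[name] = component

    def get_component(self, name):
        return self._components[name]


class TelescopeControl:
    """Hypothetical component exposing a single operation."""
    def park(self):
        return "parked"
```

A client would then call `container.get_component("TelescopeControl").park()` without caring which host actually runs the component; in ACS this indirection is provided by CORBA-based containers rather than an in-process dictionary.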
4.5. Data Capture

The ASTRI Mini-Array System's software can be divided into the telescope domain and the science domain. The telescope domain is instrument-centric, while the science domain is scientific-observation-centric. The science support system and the DPS are part of the science domain. SCADA is the bridge between the two domains. The data capture, part of the central control system, takes the instrument-centric, time-ordered data stream, collects and extracts the items needed in the science domain, and re-organizes them; it is responsible for collecting the metadata associated with the OB execution (the run) into the data capture report. The data capture report is necessary to reduce and analyze the scientific data. The SDM describes the content of this metadata and provides links to the two domains. Figure 8 provides more details and links data capture with the data models in the telescope and science domains.

4.6. Operation of the MA Software System

This section provides a sketch of the architectural process view of the ASTRI Mini-Array Software System. Figure 9 summarizes the workflow and the main operations; the numbering sequence is reported in the following paragraphs, where the workflow of the main software systems is described. The science support system manages observing projects submitted by a science user (1) and provides support to prepare the observation plan and associated SBs stored in the archive system (1.1). At the beginning of the night, the validated short-term observation plan with all the relevant information (e.g., target and pointing coordinates, observing mode, OB duration) is uploaded from the science archive. The observation selection is performed automatically by the central control system or manually by the operator (2), who quickly cross-checks the array's status and environmental conditions through the operator HMI. 
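The data-capture step just described can be sketched as a pass over a timestamped, instrument-centric stream that keeps only the items belonging to a given run, grouped by kind. The field names ("pointing," "weather," etc.) are illustrative and do not come from the ASTRI data model.

```python
def data_capture_report(run_id, stream):
    """Sketch of data capture: scan the instrument-centric, time-ordered
    stream and collect the items relevant to one run into a report,
    grouped by kind (field names are illustrative)."""
    report = {"run_id": run_id, "pointing": [], "weather": []}
    for item in sorted(stream, key=lambda x: x["time"]):  # time-ordered
        if item.get("run_id") != run_id:
            continue  # belongs to another run
        kind = item["kind"]
        if kind in report:
            report[kind].append(item["value"])
    return report
```

The resulting report plays the role of the metadata bundle that the science domain needs to reduce and analyze the run's raw data.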
The validated short-term observation plan for the night is retrieved and executed manually or by setting the central control system to an automated mode (2.1). The central control system manages the observation, fetching the current OB from the archive (2.2). The central control system configures the array assemblies and starts the array data acquisition system (2.3) and the online observation quality system (2.4). The alarm and monitoring systems are always running to provide full-time monitoring of the site. When the hardware systems are ready, the operator starts the observation (3), and the central control system manages the list of OBs in an automated way. A run is the execution of an OB with an associated identifier. During the observation, the array data acquisition system acquires and saves raw data in the local bulk repository (3.1), while the online observation quality system focuses on ongoing problems in data quality (3.2) and sends a report to the operator HMI. During the observation, the data capture of the central control system prepares the observation summary report (see Sec. 4.5), i.e., collects all the engineering and auxiliary information needed by the DPS to reduce and analyze the raw scientific data. The operator checks the observation status through the operator HMI. The central control system sends information about the observation status (3.3), providing feedback to the operator. The logging system (3.4) and the monitoring system (3.5) send information to the operator HMI. The alarm system sends alarms to the operator HMI (3.6). The observation summary report is stored in the science archive (3.7), and the raw data are stored in the bulk archive (3.8). At the beginning of the night, the DPS (4) is also started. When a run is finished, the raw data (4.1) and the observation summary report (4.2) are transferred off-site in an automated way. 
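The central-control loop described above can be sketched as follows: each OB execution becomes a run with its own identifier, and per-run data acquisition and quality checks are started. The callback names are hypothetical stand-ins for the array data acquisition and online observation quality systems.

```python
import itertools

# Monotonic run identifiers (illustrative; the real system assigns its own)
_run_ids = itertools.count(1)


def execute_observing_blocks(obs, systems):
    """Execute a list of OBs: each execution is a run with an associated
    identifier; per-run acquisition and quality checks are started through
    the illustrative callbacks in `systems`."""
    runs = []
    for ob in obs:
        run_id = next(_run_ids)
        systems["data_acquisition"](run_id, ob)  # raw data -> bulk repository
        systems["quality_check"](run_id, ob)     # online observation quality
        runs.append(run_id)
    return runs
```

Wiring in real subsystems would replace the callbacks with calls into SCADA; the sketch only shows the run-per-OB bookkeeping.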
A short-term analysis is performed at the end of the data transfer of a run (4.3) to produce preliminary science products, which are stored in the archive system (4.4). The operator checks some results of the DPS through the operator HMI. The long-term data analysis starts when data are ready in the off-site archive. The DPS pipeline retrieves from the archive system the raw data and metadata (the observation summary report), as well as the calibration coefficients (CAL1), look-up tables, and IRFs needed for Cherenkov data characterization and scientific analysis, and performs the complete data reduction. The DPS pipeline generates the final science-ready data and automatic science products and stores them in the archive system. Before the Cherenkov data analysis, a stereo event-building procedure performs an offline stereoscopic event reconstruction. This step is essential to exploit the stereoscopic capability of the array. When science-ready data and science products are computed, the science support system makes them available from the archive system to the science user (1.2).

5. Software Engineering Approach

The ASTRI software engineering office is part of the ASTRI system engineering activities of the ASTRI project office. It interacts with all ASTRI work packages by delivering coordination and integration services for developing the ASTRI software. The ASTRI software engineering team, coordinated by a software system engineer, defines guidelines and planning for the ASTRI software development and deployment. These activities are coordinated with the ASTRI project office, which is responsible for all aspects of the project. The software engineering team coordinates its activities with the ASTRI quality assurance team, the safety team, and the science team. 
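The reduction chain just described can be sketched as follows, with the stereo event building reduced to a simple time-coincidence grouping across telescopes. The multiplicative calibration model, the 1 microsecond window, and all field names are illustrative assumptions, not the actual ASTRI algorithms.

```python
def dps_pipeline(raw_events, cal1):
    """Sketch of the off-site reduction chain: calibrate raw events with the
    CAL1 coefficients, then build stereo events by grouping signals from
    different telescopes inside a coincidence window (all illustrative)."""
    # Calibration: apply a per-telescope coefficient to the raw ADC counts
    calibrated = [{"telescope": e["telescope"], "time": e["time"],
                   "signal": e["adc"] * cal1[e["telescope"]]}
                  for e in raw_events]
    calibrated.sort(key=lambda e: e["time"])

    # Stereo event building: a true stereo event needs >= 2 telescopes
    # triggering within the (hypothetical) 1-microsecond window.
    stereo, window = [], 1e-6
    i = 0
    while i < len(calibrated):
        group = [calibrated[i]]
        while (i + 1 < len(calibrated)
               and calibrated[i + 1]["time"] - group[0]["time"] < window):
            i += 1
            group.append(calibrated[i])
        if len({e["telescope"] for e in group}) >= 2:
            stereo.append(group)
        i += 1
    return stereo
```

Isolated single-telescope triggers are dropped, which is the essence of exploiting the array's stereoscopic capability; the real reconstruction of course does far more (image cleaning, shower geometry, energy estimation).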
In the following sections, we describe the software life cycle and the organization of the developer teams, which is based on a tailoring of the European Cooperation for Space Standardization (ECSS)35 standards integrated with Agile software development practices.

5.1. Customer-Supplier Relationship

The production of the ASTRI Mini-Array Software System requires the cooperation of several INAF work groups and external organizations that share the common objective of providing a software system satisfying the overall scientific and technical requirements of the ASTRI Mini-Array. To organize the overall team, a customer-supplier relationship model has been adopted: the customer accepts the software from one or more software suppliers, which must develop and deliver it according to the customer’s requirements. This relationship is recursive, i.e., a customer can in turn be a supplier to a higher-level customer. The suppliers of the ASTRI Mini-Array Software System are INAF teams from different institutes and other research institutions, such as the University of Perugia, INFN, the University of Geneva (Switzerland), and AC3E, which supplies part of the software of the SCADA system. INAF oversees software management and coordination, requirements specification, and top-level architecture definition. Each supplier is responsible for developing, integrating, and verifying the products of all its sub-work-packages (sub-WPs). AC3E is also responsible for SCADA integration, verification of the integrated system, and delivery and deployment, and it supports the validation of the SCADA system. This organization defines a complex customer-supplier chain, which requires overall project management following a structured approach throughout all stages of the software life cycle and at all levels of the chain. Management, engineering, and product assurance activities are integrated for the execution of the project.
The software system engineer is the top-level customer of the customer-supplier chain for the software. The software coordinator and the deputy software coordinator are the suppliers of the software system engineer and must provide the software systems identified in Sec. 4. Each software subsystem coordinator (SCADA, archive, simulation, data processing, science user support, and on-site startup) is a supplier of the software coordinator, and in turn manages the effort provided by ASTRI developers, external contractors, and research institutes acting as suppliers.

5.2. Tools and Standards

The software is designed with UML; requirements and design are managed and documented using the Enterprise Architect tool.36 Released documents are managed using the DMS plugin of Redmine.37 The code is fully managed in the GitLab38 INAF repository,39 including continuous integration (CI) at the subsystem level using the GitLab CI environment for automated subsystem verification. SonarQube40 has been connected to the GitLab projects: each new commit triggers the Sonar scanner, which produces a quality report and a pass/fail tag according to well-defined quality metrics. These tests are performed in a testing environment. Docker containers41 and an official ASTRI virtual machine are used for development, CI, and deployment.

5.3. Software Development Life Cycle

The software system engineering team has defined a software development plan that integrates aspects of Agile development methodologies,42 including (i) frequent iterations and releases; (ii) feature-driven development; (iii) unit and component tests created with the source code by the development teams during each iteration; (iv) automated testing and CI; and (v) distributed configuration management. The software system engineering team has also developed verification and validation plans, and the quality assurance team has defined the quality assurance plan for the software.
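The pass/fail tag attached to each commit can be thought of as a quality gate: a set of thresholds that the scanned metrics must satisfy. The sketch below illustrates that idea only; the metric names and threshold values are invented for this example, not the quality metrics actually configured in the ASTRI SonarQube instance:

```python
# Illustrative quality gate mirroring the pass/fail tag attached to each
# commit by the scanner; metric names and thresholds are invented here.
def quality_gate(metrics: dict) -> str:
    thresholds = {
        "coverage_pct": ("min", 80.0),        # at least 80% test coverage
        "duplicated_lines_pct": ("max", 3.0), # at most 3% duplication
        "critical_issues": ("max", 0),        # no critical issues allowed
    }
    for key, (kind, limit) in thresholds.items():
        value = metrics[key]
        if kind == "min" and value < limit:
            return "fail"
        if kind == "max" and value > limit:
            return "fail"
    return "pass"

print(quality_gate({"coverage_pct": 85.0,
                    "duplicated_lines_pct": 1.2,
                    "critical_issues": 0}))  # prints "pass"
```

Gating every commit this way is what makes the CI feedback loop short: a developer learns within one pipeline run whether the change meets the agreed quality bar, instead of discovering problems at integration time.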
All suppliers of the ASTRI software follow these plans. The supplier performs the verification procedures to test the system as a white box; the customer conducts the validation with the system as a black box to accept the delivered software. The following major reviews are foreseen in the ASTRI Mini-Array software life cycle:
The project started with a general CoDR and some subsystem PDR reviews to provide a general decomposition of the project and a preliminary design. To integrate this first phase with a set of development iterations, a V-model that embeds the management of iterations and incremental deliveries has been adopted for the entire software life cycle, as shown in Fig. 10; note that the adopted V-model does not imply that the development process is a waterfall method. In detail, the project started with the following phases:
At the end of each subsystem’s PDR, development starts iteratively and incrementally. The number and size of the iterations depend on the subsystem; iterations are agreed upon between customer and supplier and are based on the milestones foreseen by the ASTRI Mini-Array project in connection with hardware procurement and deployment. Each subsystem development iteration (blue boxes in Fig. 10) is divided into the following phases:
All subsystem releases are aligned with a software system (e.g., SCADA) milestone; the purpose of each milestone is defined at the system level. When all subsystems release the software for a specific milestone of a software system, the software integration, delivery, and deployment iteration starts and is divided into
At the end of this process, the software is used for system operations. This process is not linear and sometimes requires synchronization points between subsystems. After some iterations, we realized that a general internal CDR for the SCADA subsystems developed by INAF was necessary. Its primary purpose was to align the internal SCADA interfaces, verify the consistency of the documentation and its compliance with the top-level documents, incorporate the lessons learned during the iterations of the other subsystems, and update the risk analysis after one year of development. This review was conducted in the spring of 2022 with the software system engineering team as the review panel. In our approach, we have adopted the 12 principles of the Agile methodology. These principles are highly useful for developing the ASTRI Mini-Array Software System, especially when dealing with the need to synchronize the development of many teams and the deployment of the integrated software with the on-site hardware. These principles are applied in this context in the following way:
For the overall planning, strict integration and collaboration with the system engineering team to adapt the schedule to the actual on-site hardware is part of these principles. This collaboration ensures that the software development aligns with the system engineering requirements and adjusts to any changes in the hardware deployment schedule, facilitating a smoother integration process. Given the geographically distributed nature of the project and the need for synchronization during on-site deployment, Agile principles that include frequent communication ensure that each team remains aligned with the overarching project goals.

5.4. Testing Environment

The testing environment of the software comprises two test beds able to reproduce the ICT infrastructure at the AOS,15 including simulators of the hardware assemblies. One test bed hosts the same version of the software installed at the Teide site and is used for verification activities before deployment, while the other test bed runs the software under integration. With this infrastructure, we can emulate the on-site ICT infrastructure and install, run, and verify the software before the on-site deployment, including all hardware and services. The test bed is based on a virtualization system (ProxMox44) running virtual machines that emulate the on-site ICT, including all the interconnection functions of its local area network (LAN) and the necessary network services: domain name server, network address translator (NAT), and routing. The test bed supports manual verification and validation procedures and also supports automated tests performed using the GitLab CI framework based on Docker containers.

5.5. Release Management

Release management concerns the whole software development life cycle. As presented in the previous sections, we provide many releases according to the project schedule.
Any release, in addition to the implemented software, shall include the specific document versions of the requirements specification, detailed design, verification test plan, verification test report, and the user manuals related to the latest developed features. A validation test plan and validation test reports are foreseen for the acceptance of the software. Finally, the release document, which collects all the deliverables of a release, shall be published and used for personnel training.

5.6. Software Quality Assurance Approach

In accordance with the ASTRI Mini-Array Product Assurance Plan,45 we also released a software product assurance plan (SPAP) to establish the goals, processes, and responsibilities needed to implement effective quality assurance functions for the ASTRI Mini-Array software. The SPAP provides the framework necessary to ensure a consistent approach to software quality assurance throughout the project life cycle. It defines the approach that will be used by the product assurance manager, the product assurance responsible for the software, and all the actors involved to monitor and assess software development processes and products.

6. Conclusions

This paper outlines the software architecture and engineering approach used for the ASTRI Mini-Array Software System. Its primary function is to manage observing projects for the array, using both the Cherenkov cameras (for celestial γ-ray and cosmic-ray investigations) and the stellar intensity interferometry detectors. The system is responsible for various tasks, such as observation handling, array control and monitoring, data acquisition, archiving, data processing, and simulations. It also supports users conducting Cherenkov and intensity interferometry observations and provides scientific tools for exploiting the observational data. The development plan for the software implementation covers all the project phases, from construction to operations and dissemination.
This paper has outlined the primary requirements and constraints influencing the software’s definition. To this end, the architecture, its various views, different aspects of the ASTRI Mini-Array software, and the significant architectural decisions have been discussed in the text. The ASTRI Mini-Array project is also being developed to pave the way to participation in CTAO. In this respect, the ASTRI Mini-Array may be considered a pathfinder of CTAO for INAF and the other international partners involved in the project. In particular, for the SST sub-array of CTAO, not only will the telescopes’ optomechanical structure be very similar, but the telescope control system, including the engineering HMI, will in practice be the same (or, at least, that of the ASTRI Mini-Array will be largely reused). In addition, the two projects share several technological and conceptual similarities, which is also part of the innovative and collaborative nature of the field. Many of the authors of this paper are actively involved in both projects, contributing their expertise to develop software for the CTAO and ASTRI Mini-Array projects. They make use of standard tools and technologies, such as ACS and OPC-UA, which fosters knowledge exchange between the two collaborations. On the other hand, there are some significant differences between the CTAO and ASTRI Mini-Array projects, which led to different choices in terms of the respective software architectures:
Finally, the high network bandwidth between the data center in Italy and the Teide Observatory has provided many benefits for the ASTRI Mini-Array, positively impacting the software architecture. The data are transferred off-site as soon as an SB is closed, allowing us to perform the initial data reconstruction, the data quality assessment, and a scientific quick-look within a few minutes after the completion of the SB. Moreover, moving the data processing and scientific quick-look to the off-site data center simplifies the on-site ICT and software architecture. A full and automated synchronization of the archive system between on-site and off-site is performed, increasing the system’s reliability. On the other hand, the telescope control is managed entirely on-site, but without the need for personnel at the site, since the operators can work remotely. The lessons learned from the ASTRI Mini-Array project may also be very valuable to the CTAO software development (and, of course, the other way around). As both projects advance, we foresee several opportunities for code reuse, optimization, and collaboration, with mutual benefits. In light of these connections, we acknowledge the importance of a continued dialogue between the ASTRI Mini-Array project and the CTAO software development efforts. Managing the software life cycle (design, development, verification, integration, validation, delivery, and deployment) for the ASTRI Mini-Array project is challenging. The development involves different software suppliers (INAF, with its institutes distributed across Italy; the University of Geneva in Switzerland; AC3E in Chile; and IAC in the Canary Islands), with a customer-supplier chain operating at different levels within the project. Our management approach follows standard procedures based on the ECSS, properly adapted to the Mini-Array case. An Agile, iterative, and incremental process is pursued, with selected use cases assumed as the baselines for each iteration.
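The event-driven flow around SB closure — transfer off-site as soon as the SB is closed, then reconstruction and quick-look within minutes — can be sketched as a simple callback. All names here are invented for illustration; the real system dispatches these as jobs at the off-site data center:

```python
# Minimal sketch (all names invented) of the flow described above:
# closing a scheduling block (SB) immediately triggers the off-site
# transfer, followed by reconstruction and a scientific quick-look.
actions = []  # records what happens, in order

class SchedulingBlock:
    def __init__(self, sb_id):
        self.sb_id = sb_id
        self.closed = False

def on_sb_closed(sb):
    # In the real system these are automated off-site jobs, producing a
    # quick-look within a few minutes of SB completion.
    actions.append(f"transfer:{sb.sb_id}")        # raw data to off-site archive
    actions.append(f"reconstruction:{sb.sb_id}")  # initial data reconstruction
    actions.append(f"quicklook:{sb.sb_id}")       # quality check + quick-look

def close_sb(sb):
    sb.closed = True
    on_sb_closed(sb)  # transfer starts as soon as the SB is closed

close_sb(SchedulingBlock("SB-001"))
```

The design choice worth noting is that the trigger is the SB closure itself, not a nightly batch schedule: this is what the high-bandwidth link makes possible, and what keeps the on-site software limited to acquisition and control.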
This approach allows us to manage the complexity of a geographically distributed organization and to effectively support the incremental development of the ASTRI Mini-Array system at the Teide Observatory.

Code and Data Availability

Data sharing is not applicable to this article, as no new data were created or analyzed. The ASTRI Mini-Array software developed by the ASTRI collaboration and used during operations is, with only a few exceptions regulated by industrial contracts, primarily governed by the GNU Lesser General Public License (LGPL) from the Free Software Foundation. The software will become open source as soon as a future fully operational version is released. The code will be available at https://www.ict.inaf.it/gitlab/astri/

Acknowledgments

This work was conducted in the context of the ASTRI Project thanks to the support of the Italian Ministry of University and Research (MUR) as well as the Ministry for Economic Development (MISE), with funds explicitly assigned to the Italian National Institute of Astrophysics (INAF). We acknowledge the support of the Brazilian funding agency FAPESP (Grant No. 2013/10559-5), the South African Department of Science and Technology through Funding Agreement 0227/2014 for the South African Gamma-Ray Astronomy Program, and the ANID-Basal Fund, Project FB0008 (AC3E). IAC is supported by the Spanish Ministry of Science and Innovation (MICIU). This work was also partially supported by H2020-ASTERICS, a project funded by the European Commission Framework Programme Horizon 2020 Research and Innovation action under Grant Agreement No. 653477. The ASTRI project is becoming a reality thanks to Giovanni “Nanni” Bignami and Nicolò “Nichi” D’Amico, two outstanding scientists who, in their capacity as INAF Presidents, provided continuous support and invaluable guidance. While Nanni was instrumental in starting the ASTRI telescope, Nichi transformed it into the Mini-Array in Tenerife.
Now the project is being built owing to the unfaltering support of Marco Tavani, the current INAF President. Paolo Vettolani and Filippo Zerbi, the past and current INAF Science Directors, and Massimo Cappi, the Coordinator of the High Energy branch of INAF, have also been very supportive of our work. We are very grateful to all of them. Unfortunately, Nanni and Nichi passed away, but their vision still guides us. We are very grateful to Ismam Abu, Alessandro Carosi, Luca Castaldini, Elena Fedorova, Federico Fiordoliva, Michele Mastropietro, Francesco Visconti, and Georgios Zacharis for their support of the software architecture and development. We thank Joe Schwarz, one of the main authors of the present paper, who passed away during the review process. Joe was crucial in defining the control software architecture for the CTAO and ASTRI projects, and his guidance was essential in selecting ACS as the environment. Joe initiated various development activities in this regard. Despite facing severe health issues, he remained informed of the progress of the different activities until the end and actively participated in the ASTRI meetings. This article has gone through the internal ASTRI review process.

References

1. G. Pareschi, “The implementation of the ASTRI Mini-Array gamma-ray experiment at the Observatorio del Teide, Tenerife,” Proc. SPIE 12182, 121820J (2022). https://doi.org/10.1117/12.2630241
2. S. Scuderi et al., “The ASTRI Mini-Array of Cherenkov telescopes at the Observatorio del Teide,” J. High Energy Astrophys. 35, 52–68 (2022). https://doi.org/10.1016/j.jheap.2022.05.001
3. S. Vercellone et al., “ASTRI Mini-Array core science at the Observatorio del Teide,” J. High Energy Astrophys. 35, 1–42 (2022). https://doi.org/10.1016/j.jheap.2022.05.005
4. J. Hinton and W. Hofmann, “Teraelectronvolt astronomy,” Annu. Rev. Astron. Astrophys. 47(1), 523–565 (2009). https://doi.org/10.1146/annurev-astro-082708-101816
5. G. Sottile et al., “ASTRI-Horn Cherenkov camera: improvements on the hardware and software components,” Proc. SPIE 12188, 1218830 (2022). https://doi.org/10.1117/12.2629634
6. R. Hanbury Brown, The Intensity Interferometer: Its Application to Astronomy, Taylor and Francis, London (1974).
7. D. Kieda, S. Swordy and S. Wakely, “A high resolution method for measuring cosmic ray composition beyond 10 TeV,” Astropart. Phys. 15(3), 287–303 (2001). https://doi.org/10.1016/S0927-6505(00)00159-6
8. G. Pareschi et al., “The ASTRI SST-2M prototype and mini-array for the Cherenkov Telescope Array (CTA),” Proc. SPIE 9906, 99065T (2016). https://doi.org/10.1117/12.2232275
9. A. Bulgarelli et al., “The software architecture and development approach for the ASTRI Mini-Array gamma-ray air-Cherenkov experiment at the Observatorio del Teide,” Proc. SPIE 12189, 121890D (2022). https://doi.org/10.1117/12.2629164
10. L. Zampieri et al., “A stellar intensity interferometry instrument for the ASTRI Mini-Array telescopes,” Proc. SPIE 12183, 121830F (2022). https://doi.org/10.1117/12.2629270
11. G. Bonanno et al., “Focal plane detector and front-end electronics of the stellar intensity interferometry instrument for the ASTRI Mini-Array telescopes,” Proc. SPIE 12183, 1218322 (2022). https://doi.org/10.1117/12.2629293
12. S. Germani et al., “The pointing monitoring camera hardware and software systems for the ASTRI Mini-Array project,” Proc. SPIE 12188, 1218835 (2022). https://doi.org/10.1117/12.2629528
13. D. Impiombato, “UVSiPM: a light auxiliary detector to measure the night sky background seen by the ASTRI Mini-Array Cherenkov telescopes at the Observatorio del Teide,” Proc. SPIE 12191, 121910X (2022). https://doi.org/10.1117/12.2629875
14. F. Gianotti et al., “ASTRI Mini-Array on-site information and communication technology infrastructure,” Proc. SPIE 12189, 121891D (2022). https://doi.org/10.1117/12.2629831
15. N. Parmiggiani et al., “The online observation quality system software architecture for the ASTRI Mini-Array project,” Proc. SPIE 12189, 121892H (2022). https://doi.org/10.1117/12.2629278
16. P. Kruchten, “The 4+1 view model of architecture,” IEEE Software 12(6), 42–50 (1995). https://doi.org/10.1109/52.469759
17. A. Bulgarelli et al., “The Cherenkov Telescope Array Observatory: top level use cases,” Proc. SPIE 9913, 991331 (2016). https://doi.org/10.1117/12.2232224
18. L. Macaulay et al., “USTM: a new approach to requirements specification,” Interact. Comput. 2(1), 92–118 (1990). https://doi.org/10.1016/0953-5438(90)90017-C
19. F. Russo et al., “The telescope control system for the ASTRI Mini-Array of imaging atmospheric Cherenkov telescopes,” Proc. SPIE 12189, 121892I (2022). https://doi.org/10.1117/12.2629943
20. M. Corpora et al., “Design and development of the supervisor software component for the ASTRI Mini-Array Cherenkov camera,” Proc. SPIE 12189, 121891Z (2022). https://doi.org/10.1117/12.2629350
21. V. Conforti et al., “The array data acquisition system software architecture of the ASTRI Mini-Array project,” Proc. SPIE 12189, 121890N (2022). https://doi.org/10.1117/12.2626600
22. V. Pastore et al., “Array data acquisition system interface for online distribution of acquired data in the ASTRI Mini-Array project,” Proc. SPIE 12189, 1218924 (2022). https://doi.org/10.1117/12.2629922
23. F. Incardona et al., “The monitoring, logging, and alarm system of the ASTRI Mini-Array gamma-ray air-Cherenkov experiment at the Observatorio del Teide,” Proc. SPIE 12189, 121891E (2022). https://doi.org/10.1117/12.2629887
24. ACS documentation, https://confluence.alma.cl/display/ICTACS/ACS+documentation (accessed 8 January 2024).
25. G. Chiozzi et al., “CORBA-based common software for the ALMA project,” Proc. SPIE 4848, 43–54 (2002). https://doi.org/10.1117/12.461036
26. I. Prandoni et al., “The Sardinia Radio Telescope: from a technological project to a radio observatory,” Astron. Astrophys. 608, A40 (2017). https://doi.org/10.1051/0004-6361/201630243
27. I. Oya et al., “The array control and data acquisition system of the Cherenkov Telescope Array,” in 17th Int. Conf. Accel. and Large Exp. Phys. Control Syst. (2020).
28. S. Lombardi et al., “The data processing, simulation, and archive systems of the ASTRI Mini-Array project,” Proc. SPIE 12189, 121890P (2022). https://doi.org/10.1117/12.2629362
29. S. Germani et al., “The Stereo Event Builder software system of the ASTRI Mini-Array project,” Proc. SPIE 12189, 121891R (2022). https://doi.org/10.1117/12.2629466
30. N. La Palombara et al., “The product assurance programme of the ASTRI Mini-Array project,” Proc. SPIE 12187, 121871I (2022). https://doi.org/10.1117/12.2629261
Biography

Andrea Bulgarelli has experience in system and software processes and in instrument development and simulation for X- and gamma-ray telescopes such as AGILE, the ASTRI Mini-Array, and the CTA Observatory. He has performed technological and scientific research on multi-wavelength and multi-messenger astrophysics. He has gained experience in requirements engineering, interface management, and co-engineering activities with system engineers and scientists to define the requirements and architecture of large astrophysical projects.