1 Introduction

Concepts such as cybersecurity and cyberdefense are becoming increasingly present in a society dominated by digital technology (Dashti et al., 2017; Bongiovanni, 2019; Eslamkhah & Hosseini Seno, 2019). In fact, in an ever-changing world, where digitization reaches all areas, cybersecurity issues are among the main threats to the privacy of individuals, to the sustainability of companies, and to the protection of their assets (Mortazavi & Safi-Esfahani, 2019). As a result, some authors stress that organizations have to cope with increased risk due to threats to their Information and Communication Technologies (ICT), which compromises their very survival (Gritzalis et al., 2018). In this context, data and information systems are critical assets that need to be adequately protected (Szabó, 2017; Yoseviano & Retnowardhani, 2018), but this is nevertheless a complex objective to fulfill (Akinwumi et al., 2018), requiring a clear commitment and awareness on the part of organizations (Sardjono & Cholik, 2018; Ahmad et al., 2021), as well as personnel and financial resources that in most cases are not available (Mortazavi & Safi-Esfahani, 2019).

According to ISO/IEC 27001, an Information Security Management System (ISMS) is part of the overall management structure focused on preserving information security within organizations. This management structure includes the definition of security policies and procedures that involve people, processes, and technology, aligned with the business strategy. To be effective, the implementation of an ISMS requires a considerable investment of resources (Ahmed & Nibouche, 2018) and a detailed plan defining how to respond to security incidents (Proença & Borbinha, 2018). In fact, a key element of every ISMS is the security risk assessment and management strategy (Hariyanti et al., 2018; Szwaczyk et al., 2018; Ruan, 2017; Alshawabkeh et al., 2019), but security risks affect not only the ICT components of organizations, but also their business processes and even the organizational and strategic level (Ross et al., 2019). Therefore, effective risk management helps top managers to make optimal decisions (Tiganoaia et al., 2019; Wolf & Serpanos, 2020), as security incidents can have harsh consequences at different levels of the organization (Debnath et al., 2020).

Currently, risk assessment and management solutions have numerous open issues that complicate their applicability and effectiveness. First of all, most security incidents are caused by a general lack of risk awareness or by inaccurate risk assessment (Turskis et al., 2019). In addition, risks are dynamic by nature, as they are related to constantly evolving threats and vulnerabilities, but unfortunately mainstream approaches provide only a static picture of risk (Paltrinieri & Reniers, 2017). Moreover, existing risk assessment methods rely heavily on the experience of risk experts (Sun & Xie, 2019), so new methods that exploit knowledge reuse are needed to provide effective and objective risk management and to limit the associated costs (Alhawari et al., 2012). In this scenario, cybersecurity incidents are on the rise, both in intensity and impact (Glantz et al., 2017), so the scientific community is calling for the development of appropriate methodologies and tools that enable companies to address, understand and manage their cybersecurity risk, overcoming these drawbacks (Thakur et al., 2015; Wang et al., 2018).

The aim of this paper is to contribute to the resolution of a specific problem of security incident response, which is a fundamental aspect of an ISMS and, in particular, a part of risk management, responsible for reacting to incidents by applying controls to reduce damage and efficiently restore systems (Bhardwaj & Sapra, 2021). The problem we address is how to find and select the minimum set of incidents that must be undertaken to cover all the controls involved in a given period of time, taking into account the severity of these incidents and the set of controls that have been affected by them. In the normal production operation of an information system, a large volume of security incidents may occur daily and need to be reviewed and corrected. Moreover, these incidents are not isolated from each other; in many cases there are interdependencies, so that several security events can affect the same security controls. For example, consider a baseline scenario with 4 unresolved incidents that globally affect 2 security controls. By optimizing the set of incidents to be resolved, it may be that resources need only be allocated to two of the incidents, with the other two being resolved directly after the corresponding controls have been reinforced, thus saving time and resources. This is an optimization problem that is easily solved with traditional algorithms when the number of incidents and associated controls is small, but as the number of incidents grows, the problem becomes intractable from a traditional perspective, because the algorithmic complexity is exponential; we must therefore find other approaches. In this paper we explore another paradigm to solve this problem: quantum algorithms.

In fact, although quantum computing is still in its early stages, it has already left the purely research stage and is ready for industrial use, making it a prime candidate for solving certain types of highly complex problems for which even supercomputers fall short (Mueck, 2017). In particular, this new paradigm is already being applied to certain types of problems for which it is particularly suitable (Hidary, 2019), such as optimization (Lucas, 2014) or machine learning (Wittek, 2014), which are widely used today. Moreover, the emergence of quantum computers has great implications for computer security, due to the weakness of current cryptographic systems against the computational power of quantum systems (Shor, 2002) and the consequent need for a new post-quantum cryptography (Mailloux et al., 2016). In particular, the problem addressed in this paper concerns the optimization of incident response in a risk analysis and management system, where incident response can be optimized by selecting the appropriate controls to act on, a problem that grows exponentially with the number of incidents. For this reason, and given that the optimization of the response may not converge on a classical computer, a solution based on a quantum annealing algorithm has been proposed to find the optimal configuration for the problem. This solution has been successfully programmed and tested on a D-Wave quantum computer.

In previous works we developed MARISMA (Rosado et al., 2021), a comprehensive and extensible framework that is being applied to carry out risk assessment and management for many different companies, and which addresses most of the drawbacks of current approaches. Through our experience applying this framework to real cases, we identified the above problem in the security incident response process. Thanks to this framework, we were also able to validate the proposed quantum algorithm on real cases.

The article continues in Section 2 by analyzing the background and related work on security incidents and quantum optimization; Section 3 shows the approach we follow in MARISMA to manage the response to security incidents; Section 4 presents the algorithm proposed using quantum programming to solve the problem posed, together with an analysis and comparison of the results obtained by applying the quantum algorithm versus the classical algorithm; finally, Section 5 presents the main conclusions obtained during the research and the future work to be carried out.

2 Background and related work

This section includes background content on the three research topics addressed in this paper: quantum programming, quantum optimization, and security incident response. In particular, the first subsection discusses the foundations on which quantum computing is based, and the second subsection presents how this programming paradigm can be applied to optimization problems. In the third subsection, we provide an overview of the security incident response process and discuss some open research problems.

2.1 Quantum programming

Quantum computing, a paradigm that exploits the quantum physical aspects of reality, promises to have a huge impact in computing (IBM: The Quantum Decade, 2021). However, to have real applications of quantum computing, programming languages are needed that provide structured and high-level descriptions of quantum algorithms, without reference to the underlying hardware (Clairambault et al., 2019).

The discovery of efficient quantum algorithms by Shor (2002) and Grover (1997) sparked a lot of interest in the field of quantum programming. However, it remains a very difficult task to find new quantum algorithms, mainly because quantum programs are very low-level: they are usually represented as quantum circuits, or in some combinator language that results in functional circuits (Altenkirch & Grattage, 2005). The first aspect that distinguishes quantum programming from classical programming is the use of quantum bits (qubits) instead of bits (Sánchez & Alonso, 2021).

The way quantum programmers work with qubits is through quantum circuits and quantum gates. Computation in quantum programming is performed, under the circuit representation of quantum programs (QP), by means of gates, which provide the primitive operations to manipulate the magnitude and phase of the system qubits (Sánchez & Alonso, 2021). Quantum circuits and gates can be represented graphically, as in Fig. 1, but also through syntax-based notations provided by a wide variety of quantum programming languages (e.g., Q#, QASM, Cirq, pyQuil, QCL, among many others) which have been proposed to make it easier to specify quantum algorithms. These quantum algorithms are usually a translation of the quantum circuit into code, i.e., a sequence of textual programming statements.

Fig. 1 Example of quantum circuit
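As an illustration of such a syntax-based notation, the following is a minimal sketch of our own (not the circuit of Fig. 1) showing how a small two-qubit circuit can be expressed textually with Qiskit:

```python
# A minimal sketch (not the circuit of Fig. 1) of how a quantum circuit
# can be expressed textually instead of graphically, using Qiskit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)     # two qubits and two classical bits
qc.h(0)                       # Hadamard gate: puts qubit 0 into superposition
qc.cx(0, 1)                   # CNOT gate: entangles qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])    # measure both qubits into the classical bits
print(qc.draw())              # textual rendering of the same circuit
```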

Since the first practical quantum programming language QCL was introduced, many other languages have appeared (Heim et al., 2020), some of them more oriented to quantum circuits (like the “classical” assembler), while others are closer to high-level languages. The design of subsequent quantum programming languages was influenced by the QRAM (quantum random access machine) model (Knill, 1996) in which the quantum system is controlled by a classical computer. Various quantum programming languages have been released in the last few years including LIQUi|⟩ (Wecker & Svore, 2014), Quipper (Green et al., 2013), Scaffold (Abhari et al., 2012), and, more recently, Q# (Svore et al., 2018) or Q|SI⟩ (Liu et al., 2018). All of these languages propose answers to the fundamental questions of quantum programming and were designed with the aim of addressing the challenges of practical quantum computing. In particular, all of these languages make it possible to express and reason about quantum algorithms of the size and type expected in real-world applications of quantum computing. In doing so, quantum programming environments can play an essential role in turning quantum computers from objects of science into instruments of scientific discovery (Gyongyosi & Imre, 2019).

Many quantum programming languages have been designed and implemented in terms of different types of language paradigms for programming quantum computers (Zhao, 2020). A first and major semantic distinction is between imperative and functional languages. Imperative languages are described by specifying how the execution of a given program modifies a global state. On the other hand, programs in functional languages map inputs to outputs, and more complex programs are built out of elementary functions (Heim et al., 2020). Nowadays, the main imperative quantum programming languages are Q# (Svore et al., 2018), Q|SI⟩ (Liu et al., 2018), ProjectQ (Steiger et al., 2018) and Qiskit (Aleksandrowicz et al., 2019), among others. As concerns functional languages, there are not many proposals, but we can find quantum lambda calculi (Maymin, 1996), Quipper (Green et al., 2013), and LIQUi|⟩ (Wecker & Svore, 2014) among the principal contributions. Finally, we can highlight qASM (Pakin, 2016) and Quil (Smith et al., 2016) among other quantum programming language paradigms.

2.2 Quantum optimization

Quantum computing technology offers fundamentally different solutions to computational problems and enables more efficient problem solving than is possible with classical computations (Gyongyosi & Imre, 2019).

A qubit is usually realized physically with the spin of an electron, the polarization of a photon, or other properties of subatomic particles. A qubit is a two-state quantum system, i.e., it is not restricted to the value zero or one like a classical bit; both values exist at the same time. Thus, a qubit can be zero or one with a certain probability (this is known as superposition and is the key to the high computational power). The actual value of a qubit is only known once it is measured; at that point, the qubit collapses and cannot be used anymore without being reset. As a result, the philosophy of quantum programming is oriented toward exploring and searching for optimal solutions in a probabilistic space (Piattini et al., 2021).
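In standard Dirac notation (a textbook formulation added here for clarity, not specific to any particular framework), such a superposition can be written as:

$$\begin{aligned} |\psi \rangle = \alpha |0\rangle + \beta |1\rangle , \qquad |\alpha |^2 + |\beta |^2 = 1 \end{aligned}$$

where \(|\alpha |^2\) and \(|\beta |^2\) are the probabilities of obtaining 0 and 1, respectively, when the qubit is measured.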

Many quantum optimization algorithms are based on search algorithms using the well-known Grover’s (1997) algorithm, which performs a search in an unstructured search space by encoding the solution requirements by means of a quantum oracle. These quantum oracles (Sutor, 2019; Johnston et al., 2019) are a sort of black box that can be assimilated to the function concept of high-level languages and that help in the construction of these search algorithms, which provide a quadratic speedup over classical exhaustive search.

In addition, other quantum environments, such as Quantum Leap from the quantum computer manufacturer D-Wave, provide optimization environments for NP-hard combinatorial problems using adiabatic quantum optimization (Farhi et al., 2001; Das & Chakrabarti, 2008). This type of programming is based on specifying the system to be optimized as a Hamiltonian that represents both the objective and the constraints of the system; the quantum computer is then responsible for finding the solution that provides the lowest energy of the system. Some approaches to this kind of quantum optimization, based on Ising expressions, can be found in Lucas (2014). There are also alternatives based on quantum gate programming, such as the one found in the Qiskit textbook (Asfaw et al., 2020), which implements the quantum approximate optimization algorithm (QAOA) (Farhi et al., 2014).

Quantum adiabatic computing is a great step forward on the path of optimization algorithms. Apart from the classical search algorithms (with several limitations in efficiency and effectiveness), such as backtracking, dynamic programming, heuristic search such as A*, adversarial search such as Minimax, or branch and bound algorithms, new algorithms and approaches have been identified and developed that have gradually improved the efficiency of this type of technique. Among the more recent improvements we can highlight genetic algorithms (Rocke, 2000), classical annealers such as simulated annealing (Kirkpatrick et al., 1983), or benchmark function algorithms (Dieterich & Hartke, 2012). However, all these solutions usually have limitations when dealing with local minima and do not usually give good results for really large or complex problems. In this sense, adiabatic quantum computation is probably one of the great promises for solving complex NP-complete optimization problems in polynomial time (Černý, 1993).

The usual process of quantum annealing algorithms is to specify the problem to be solved as qubits in a superposition state and, through the annealing process, collapse the qubits to a classical state that is either 0 or 1 and represents the lowest-energy solution to the proposed problem. As can be seen in Fig. 2, the process starts with an energy state that corresponds to the superposition state of the qubits, in which there is only one valley (a); as the annealing process progresses, the energy possibilities separate, generating a double-well potential state (b). At the end of the process, one of the valleys corresponds to the minimum energy that stabilizes the system, and a deeper valley corresponding to that solution is generated (c).

Fig. 2 Quantum annealing process

In our work we apply a quantum computing approach to optimize incident response management in the context of a risk assessment and management framework. This quantum computing approach will specify incidents with their associated threats and controls and search for the minimum energy state that represents the best solution for incident resolution in the shortest possible time.

2.3 Managing security incidents

As mentioned in Section 1, security incidents are undesired events that impact the different dimensions of the valuable assets that make up a company’s information systems (Mahima, 2021). These incidents are caused by failures in the implementation of the security controls that protect these assets, i.e., by vulnerabilities that exist in the information systems. These vulnerabilities are exploited by threats to reach these assets and cause damage to them (Dion, 2020).

In order to minimize the damage of these incidents, organizations try to apply the most appropriate incident response methods (Prasad & Rohokale, 2020). In fact, the management of security incidents and the correlation of these events is a topic of great interest to the scientific community (Salvi et al., 2022). Many organizations have focused on managing risks through integrated services in Computer Security Incident Response Teams (CSIRT), as these have proven to be one of the best solutions to improve cybersecurity by collaborating with each other, sharing knowledge and learning from each other's experiences (Tanczer et al., 2018). However, the implementation of a CSIRT comes at a considerable cost, which makes it suitable only for large organizations, hence the need to create simpler and more effective incident management systems for small and medium-sized enterprises (Pleta et al., 2020).

Security incident management and response can be considered a hot research topic with some relevant open questions (Grispos et al., 2017). One of the most relevant questions is how to achieve reasonable situational awareness of vulnerabilities, threats and possible security incidents (Ahmad et al., 2021). There has been intense recent research in this area, for example proposing models to explain how organizations should achieve cybersecurity situational awareness (Ahmad et al., 2020), arguing that providing a rapid and efficient response to security incidents clearly supports cybersecurity awareness and improves the overall cybersecurity performance of companies (Naseer et al., 2021), or considering misinformation as one of the key reasons for the lack of situational awareness (Ahmad et al., 2019). Indeed, it is often claimed that attackers take advantage of the lack of corporate communication following cybersecurity incidents (Knight & Nurse, 2020) and of the failure to learn from previous incidents (Ahmad et al., 2020, 2015, 2012).

It is therefore necessary for any type of company to have adequate and efficient tools to support incident management processes and, above all, utilities and processes that provide mechanisms to facilitate decision-making when selecting and prioritizing the security incidents to be resolved (Ahmad et al., 2015). This is mainly because the incident workload can be very high throughout the lifecycle of an information system, especially in typical cases such as the release of a new version of an application or an operating system upgrade. Thus, there is a strong need to take into account the specific efficiency and effectiveness requirements of these new incident management support systems (van der Kleij et al., 2021).

In this work, however, the most relevant problem we focus on is the agility of organizations in managing and responding to security incidents (Tam et al., 2021). This agility translates into the need to respond to these incidents in the shortest possible time (van der Kleij et al., 2021; He et al., 2022). But this problem is becoming increasingly difficult to address, due to the growing number of incidents and their interconnection. When systems receive hundreds of events, incident response teams must decide which are the top incidents to start analyzing. In this sense, a key factor in organizing and prioritizing the incidents to be resolved is the possible relationships between them. An information system has different security controls dedicated to protecting the system's assets against potential threats, or even to fixing vulnerabilities inherent to certain assets (such as software or operating systems). Thus, the most common scenario is that several reported incidents affect the same set of controls. The correct selection of the controls to be strengthened may therefore mean that resolving a single incident automatically resolves several related incidents. Consequently, by appropriately prioritizing the resolution of incidents, it is possible to optimize both the use of resources and the time spent in the overall process. But this prioritization cannot be done manually, as it would delay decision-making. According to some researchers, security incident response requires complex event processing (to capture, process, integrate and analyze data in real time), as well as investigation of the cause-effect relationships between incidents (Naseer et al., 2021).

Most current research related to security incidents has therefore concluded that agility in responding to security incidents is the basis for the correct management of an information system (Aoyama et al., 2020). However, very little research has focused on solving the problems arising from the computational complexity of analyzing large numbers of events in short periods of time while also taking into account the possible relationships between the different security incidents to be solved. And it is this agility in the analysis that allows the right decisions to be made within reasonable timeframes (Srinivas et al., 2019).

3 MARISMA framework for managing security risks and incidents

In this section we present the MARISMA framework (Rosado et al., 2021), our approach to dynamic risk analysis and management. We begin by presenting the aim and the main components of this framework, and then we detail the process we carry out for the management of security incidents and the subsequent processing of these incidents to generate useful knowledge that helps in the company’s decision-making, ending by showing the computational limitations that currently prevent its efficient use.

3.1 MARISMA architecture

MARISMA is our risk analysis and management framework, which we have been developing, improving, extending and applying to many types of companies and technologies over the last decade. As can be seen in Fig. 3, the framework consists of three parts: a methodology supported by a metadata structure, an extensibility mechanism, and an automated tool that supports the methodology and implements the extensions.

Fig. 3 General architecture of the MARISMA framework

The core element of our framework is a methodology that sets out a comprehensive and detailed process for carrying out the entire risk assessment and management lifecycle for an enterprise or part of it, including the necessary activities to configure the appropriate reusable data structures, the semi-automatic generation of risk data and, finally, dynamic risk management, which includes specific tasks for security incident response. The methodology is supported by a set of Key Risk Indicators (KRI) and by a metadata structure (the Risk Meta-Pattern in Fig. 3), which defines the components and relationships that allow for maximum customization and automation of the risk assessment and management process. Being aware that different sectors or technological environments may need a different risk assessment and management strategy (e.g., because they are affected by different types of threats, or because they have assets of a different nature), or even that risk may be considered at different abstraction levels (e.g., information system risks or business process risks), we offer the possibility of instantiating our metadata structure in specific contexts (the Specific Patterns in Fig. 3), such as ISO/IEC 27001, Big Data-based systems, Cyber-Physical Systems (CPS), and business processes.

The last component of our framework is the e-MARISMA tool, developed following a Software-as-a-Service architecture in the cloud using the Java technology stack. This tool implements all the processes of the methodology and can be configured to support any pattern representing a particular context. It offers a rich set of services, not only related to pattern configuration and administration, but also focused on the risk assessment and management processes carried out by our customers. The main objective of this tool is to perform fast, cheap, visual, and accurate risk assessment, as well as efficient and effective risk management, so we exploit reusability as much as possible. In addition, the tool learns from the knowledge gathered from the occurrence of security incidents and can consequently make automated decisions by correlating incidents.

This framework has been applied to different types of companies (electric, hydrocarbons, governments, health, shipbuilding, chemical industry, etc.) in more than eight European and Latin American countries.

Table 1 Datasets of incidents

3.2 Incident response in MARISMA

As mentioned in the previous section, security incident management and response is a critical activity carried out within the dynamic risk management process of our framework. Once an incident is identified, we need to collect, categorize and analyze the incident context information, and some relevant parameters need to be adjusted in our system (level of risk and control compliance, involved controls, probability of threat occurrence, etc.). This parameter tuning depends on the set of concepts and relationships defined in our risk meta-pattern, and on its specific instantiation through a pattern, which includes the components selected through the process shown in Fig. 4 and formally defined in Definition 1.

Fig. 4 Security incident management process

Definition 1

A Security Incident for MARISMA. Let \(si_i\) be a security incident. Together with \(si_i\), the tuple \(\langle T,AG,A,RD,C \rangle\) is defined as additional information, where T represents a set of n threat types {\(t_1\), \(t_2\), \(\ldots\), \(t_n\)} that have caused the security incident, and AG is a set of m asset groups {\(ag_1\), \(ag_2\), \(\ldots\), \(ag_m\)} affected by these threats. Each asset group is related to A, a set of l assets {\(a_1\), \(a_2\), \(\ldots\), \(a_l\)}, and each asset has an associated set RD of k risk dimensions {\(rd_1\), \(rd_2\), \(\ldots\), \(rd_k\)}. Finally, each group of assets is affected by the failure of C, a set of j controls {\(c_1\), \(c_2\), \(\ldots\), \(c_j\)}.
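For illustration only, the tuple of Definition 1 could be represented in a program as follows; this is our own sketch, not eMARISMA's actual data model, and the sample values are hypothetical except for the two controls, which are taken from the example discussed with Table 1:

```python
# Illustrative sketch of the security incident tuple <T, AG, A, RD, C> of
# Definition 1. Field names and sample values are assumptions, not eMARISMA's
# real data model; the two control names come from the example in Table 1.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecurityIncident:
    incident_id: str
    threats: List[str] = field(default_factory=list)          # T: threat types causing the incident
    asset_groups: List[str] = field(default_factory=list)     # AG: asset groups affected by those threats
    assets: List[str] = field(default_factory=list)           # A: assets belonging to the asset groups
    risk_dimensions: List[str] = field(default_factory=list)  # RD: risk dimensions associated with each asset
    controls: List[str] = field(default_factory=list)         # C: controls whose failure enabled the incident

si_1 = SecurityIncident(
    incident_id="1",
    threats=["Service capacity failure"],
    asset_groups=["Servers"],
    assets=["Backup server"],
    risk_dimensions=["Availability"],
    controls=["[12.1.3] Capacity management", "[12.3.1] Information backups"],
)
```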

This process is fully implemented by eMARISMA, which provides a workflow to (i) enter the security incident information (a description, the cause, the responsible person, and the time limits within which it must be solved), (ii) select from the stored information, according to the data relationships defined by the risk pattern, the hierarchy of elements involved in the security incident (threats, assets and controls), define other related information such as the severity of the incident, and quarantine the affected controls by temporarily lowering their coverage level while the incident is resolved, and finally, once the incident is solved, (iii) support knowledge management and learning from the security incidents that occurred by recording the lessons learned, the incident resolution costs and some concluding remarks. Obviously, when a security incident occurs and is recorded, a chain of changes is automatically applied to the risk components according to the stored meta-information. This is because the level of compliance with security controls is penalized if a threat has compromised a control, which affects the risk level of many other assets and implies that those controls need to be reviewed and strengthened.

However, a typical scenario in incident management involves a heavy workload for organizing and prioritizing incidents in order to define the most efficient way to resolve them in the shortest possible time while using the available resources appropriately. The organization and efficient distribution of incidents becomes even more complicated at activity peaks, such as the first production start-up of a system or the registration of a new service, when the number of incidents can reach large amounts, with the added complication of managing them properly. Thus, it is common to have to prioritize and plan dozens (or even hundreds) of incidents in a short period of time, which involves complex calculations, a high level of difficulty, and a high cost in time. This scenario is further complicated by the fact that incidents are not usually isolated elements, but are often related to each other to a greater or lesser extent, so the order of incident resolution is important to address problems in an optimized way. It is therefore important to have systems capable of managing these large volumes of security incidents in order to obtain the best action plans.

To illustrate this problem, consider the following example based on a typical dataset of reported incidents to be handled according to the incident management structure used by eMARISMA (see Table 1), which defines the following attributes: (i) IdIncident: unique identifier of the incident, (ii) IdThreat: threat code according to the definition of the pattern used, (iii) Threat: description of the threat that has caused the incident, (iv) Severity: qualitative assessment of the severity of the incident (between 1 and 5), (v) IdControl: control code according to the definition of the pattern used, (vi) Control: description of the control that has been affected by the threat, and (vii) Estimated Time: estimated number of hours required to resolve the incident.

As can be seen in Table 1, we consider that each incident involves a single threat, which is usually the most frequent scenario. Each incident may affect one or more controls whose implementation must be reviewed and corrected to resolve the incident and prevent its recurrence. In addition, it is common for several incidents to be related to the same control. For example, control [12.1.3] Capacity management has been affected by both security incidents 1 and 6. Similarly, control [12.3.1] Information backups is affected by incidents 1 and 9. In this way, by prioritizing the resolution of incident 1, we reinforce the two affected controls, and incidents 6 and 9 are also resolved, with the consequent savings in time and resources.
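As a simple illustration (a sketch under assumed column names, not eMARISMA code), the control-to-incidents relationships used later for the optimization can be derived from such a dataset by grouping the rows by IdControl:

```python
# Sketch (assumed record layout, not eMARISMA's actual export format) of how the
# control -> incidents mapping can be derived from a dataset like Table 1.
from collections import defaultdict

rows = [
    # (IdIncident, IdControl, EstimatedTime); times are illustrative, but the
    # control/incident relationships are the ones mentioned in the text.
    (1, "12.1.3", 8), (1, "12.3.1", 8),
    (6, "12.1.3", 5),
    (9, "12.3.1", 2),
]

controls = defaultdict(set)   # IdControl -> set of related IdIncident
times = {}                    # IdIncident -> estimated resolution time

for incident, control, hours in rows:
    controls[control].add(incident)
    times[incident] = hours

# e.g., controls["12.1.3"] == {1, 6}: resolving incident 1 also covers incident 6's control
```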

This optimization problem, which consists of selecting the minimum set of incidents from among those identified that cover the entire set of affected controls, is easy to solve by means of adequate planning for small sets of incidents, but becomes enormously complicated when working with volumes of hundreds of incidents, requiring a large amount of time and resulting in planning that is not very efficient in many cases. In this sense, quantum computing emerges as a powerful mechanism to solve this identified problem.

4 Quantum algorithm for incident response optimization

In this section, we first show the algorithmic solution proposed for the problem posed, using quantum algorithms, and after that, we compare the results obtained using classical algorithms versus the proposed solution using quantum algorithms.

4.1 Algorithm definitions

In order to correctly plan the algorithmic solution of the proposed problem, it is necessary to specify the variables and entities that are part of the algorithm. These variables can be defined as follows:

Definition 2

Let i be a unique identifier of an incident, corresponding to the IdIncident of Table 1.

Definition 3

Let k be a control identifier, representing the unique control code IdControl.

Definition 4

Let C\(_k\) be the set of incidents related to the control with IdControl k.

Definition 5

Let t\(_i\) be the estimated time in hours needed to solve the incident whose IdIncident is equal to i, corresponding to the Estimated Time value of Table 1.

Definition 6

Let x\(_i\) be a binary variable that determines, in the algorithm solution, whether incident i is selected to be addressed.

Definition 7

Let P be a penalty coefficient, which serves to modulate the weight of the constraints in the algorithm definition. It can be set empirically to the highest estimated time among all the incidents plus 1, so that the penalty term always outweighs the objective and thus affects the whole solution.

Based on these definitions we can express algebraically the objective pursued by executing the quantum optimization algorithm that will be sent to the quantum computer.

4.2 Algorithm approach

As we can observe in Section 3, the present problem is a variation of the Minimum Vertex Cover problem, in which the input to the algorithm is a series of incidents identified by their ID. Each incident has a severity and an estimated resolution time. In addition, it has a series of associated controls that have to be reviewed and strengthened in order to consider the incident resolved. These controls can be associated with the resolution of several incidents, so that if we resolve an incident that shares controls with another one, we resolve the other incident at the same time. To solve the problem, we must select the minimum set of incidents to be addressed so that all the controls are covered, which in turn resolves the remaining incidents. This must be achieved in the shortest possible time.

As discussed in Section 2, there are several good approaches for solving this kind of problem, such as genetic algorithms or classical annealers, but they lack the ability to solve complex optimization problems in polynomial time. Regarding solutions based on quantum computation, we basically have two main options: quantum gate-based circuits and adiabatic quantum algorithms. While it is true that some approaches based on quantum gates, such as the QAOA algorithm, allow the resolution of optimization problems with an approximation similar to quantum annealers, their formulation and, above all, their implementation as a quantum circuit is much more complex and extensive than the formulation of the Hamiltonian of quantum annealers and its representation as an Ising or QUBO model, which is much simpler to understand and independent of the underlying quantum platform.

In order to solve the problem, we model it as a Quadratic Unconstrained Binary Optimization (QUBO) problem, also known as unconstrained binary quadratic programming (UBQP). This model represents the objectives and constraints of our problem and can be sent to the solver of the adiabatic quantum computer to find the minimum energy state, which coincides with the combination of variables, i.e., incidents, that must be selected to obtain an optimal result for our problem.

All optimization problems following the QUBO pattern are specified on the basis of a Hamiltonian, which in the form of a summation indicates the objective and the constraints to be met by the solution. This Hamiltonian is expressed as a Binary Quadratic Model (BQM) and is converted into a BQM matrix which is the one we will pass to the adiabatic solver.

Our main objective is to minimize the total time of the incidents that are part of the solution. In the form of a BQM expression, we can specify it as follows:

$$\begin{aligned} \sum _{i=1} ^N (x_i \times t_i) \end{aligned}$$
(1)

Here, \(x_i\) is the binary variable that determines whether or not incident i is selected, and \(t_i\) is the estimated time associated with incident i. It should be noted that our goal is to minimize this objective.

The constraints are somewhat more complicated to model, since an incident can be fulfilled either because it has been selected, or because the set of controls involved in it has already been addressed by one or more previously selected incidents. A possible solution could be to build a graph relating all the incidents that share controls and to select a node from each of the subgraphs. Unfortunately, this solution would only work if the relationships between incidents and controls were one-to-one. In a real case, several controls may be necessary to resolve an incident and, therefore, a simple graph is not able to represent such information. If we wanted to extend such a graph to represent the complexity of the relationships between controls and incidents, the graph would be unmanageable and would not be useful for solving the problem. Therefore, it is necessary to take a new approach, as stated next.

In order to find a possible solution, let’s analyze a small example in Table 2:

Table 2 Incidents and controls example

In Table 2 we can see that to solve our problem it is not necessary to solve all the incidents {A,B,C} since by attending to a subset of them, e.g., {A,B}, we cover all the necessary controls.

Looking at this small example in Table 2, we can see that building a graph that represents all the dependencies between incidents and controls is not easy, because the controls needed by each incident are not the same. The graph would be too complicated and would have to represent two types of edges: those representing dependencies between controls and incidents, and those representing relationships between the controls of different incidents; moreover, these relationships would be partial.

Therefore, the resolution of this problem by means of a graph of this type is not a good approach, and we have to model the restrictions in another way. The solution is to focus on the controls and verify that all the incidents are solved through the completeness of the necessary controls.

In this problem, we want all the incidents to be solved, and to determine whether an incident is solved we look at whether its controls have been covered or not. In other words, we want every control to have at least one related incident that has been selected. If all the controls have been covered, we know that all the incidents will be solved as well. This constraint is formulated as follows:

$$\begin{aligned} \sum _{i \in C_k} (x_i) \ge 1 \end{aligned}$$
(2)

Where \(C_k\) is the set of incidents related to control k. With this expression we ensure that at least one of the incidents related to k is selected. Converting this inequality into a quadratic penalty term and summing it over all the controls, we obtain:

$$\begin{aligned} \sum _k \Big ( \sum _{i \in C_k} x_i -1 \Big )^2 \end{aligned}$$
(3)

In order to construct the final QUBO equation, we need to add the penalty coefficient P, which serves to modulate the weight of the constraints in the Hamiltonian expression. Empirically, this coefficient P can be set to the highest estimated time among all the incidents plus 1, so that the penalty always outweighs the objective. The final QUBO equation is as follows:

$$\begin{aligned} \sum _{i=1} ^N (x_i \times t_i) + P \times \sum _k \Big ( \sum _{i \in C_k} x_i -1 \Big )^2 \end{aligned}$$
(4)

Expanding the part of the expression that represents the constraints, we obtain:

$$\begin{aligned} \begin{aligned}&P \times \sum _k \Big ( \sum _{i \in C_k} x_i - 1 \Big )^2 = \\&P \times \sum _k \Big ( \sum _{i \in C_k} x_i^2 + 2\sum _{i<j \in C_k} x_i x_j - 2\sum _{i \in C_k} x_i + 1 \Big ) \end{aligned} \end{aligned}$$
(5)

Considering that \(x_i\) can only take the values 0 and 1, we have \(x_i^2 = x_i\), so the square can be removed:

$$\begin{aligned} P \times \sum _k \Big ( \sum _{i \in C_k} x_i + 2\sum _{i<j \in C_k} x_i x_j - 2\sum _{i \in C_k} x_i + 1 \Big ) \end{aligned}$$
(6)

Simplifying:

$$\begin{aligned} \begin{aligned}&P \times \sum _k \Big ( -\sum _{i \in C_k} x_i + 2\sum _{i<j \in C_k} x_i x_j + 1 \Big ) = \\&\sum _k \Big ( -P\sum _{i \in C_k} x_i + 2P\sum _{i<j \in C_k} x_i x_j + P \Big ) \end{aligned} \end{aligned}$$
(7)

We can eliminate the constant part, since it does not modify the solution:

$$\begin{aligned} \sum _k \Big ( -P\sum _{i \in C_k} x_i + 2P\sum _{i<j \in C_k} x_i x_j \Big ) \end{aligned}$$
(8)

So our BQM (QUBO) expression of the initial Hamiltonian is finally as follows:

$$\begin{aligned} \begin{aligned}&\sum _{i=1} ^N (x_i \times t_i) + P \times \sum _k \Big ( \sum _{i \in C_k} x_i -1 \Big )^2 = \\&\sum _{i=1} ^N (x_i \times t_i) + \sum _k \Big ( -P\sum _{i \in C_k} x_i + 2P\sum _{i<j \in C_k} x_i x_j \Big ) \end{aligned} \end{aligned}$$
(9)

The final equation gives us a linear part (the \(x_i t_i\) and \(-Px_i\) terms) and a quadratic part (the \(2Px_ix_j\) terms), which are sent to the quantum annealing solver as a two-dimensional matrix generated from the above expression.

Based on the definition of the previous Hamiltonian, the Python code shown in Listing 1 fills in the QUBO matrix to be sent to the quantum annealing sampler. This algorithm creates an upper triangular matrix, which defines the QUBO matrix of the Binary Quadratic Model (BQM) for the previous Hamiltonian.

Listing 1 Python code that builds the QUBO matrix for the BQM
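Since the original listing is not reproduced here, the following is a minimal sketch (our assumption of how Listing 1 could look, using the coefficients derived in Eq. (9); the `incidents` and `controls` inputs are illustrative) of how the upper triangular QUBO matrix can be filled in:

```python
# Minimal sketch (not the authors' original Listing 1) of how the QUBO matrix of
# Eq. (9) could be built. The `incidents` and `controls` inputs are illustrative.
from collections import defaultdict

# incidents: {IdIncident: estimated time in hours}
incidents = {1: 8, 2: 3, 6: 5, 9: 2}
# controls: {IdControl: set of IdIncident related to that control}
controls = {"12.1.3": {1, 6}, "12.3.1": {1, 9}, "9.4.1": {2}}

P = max(incidents.values()) + 1          # penalty coefficient (Definition 7)

Q = defaultdict(float)                   # upper triangular QUBO matrix

# Objective: sum_i t_i * x_i  (linear terms on the diagonal)
for i, t in incidents.items():
    Q[(i, i)] += t

# Constraints: every control must be covered by at least one selected incident
for related in controls.values():
    related = sorted(related)
    for idx, i in enumerate(related):
        Q[(i, i)] += -P                  # linear term -P * x_i
        for j in related[idx + 1:]:
            Q[(i, j)] += 2 * P           # quadratic term 2P * x_i * x_j
```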

4.3 Quantum algorithm execution

Using the code shown in Listing 1, which generates the input matrix for the quantum annealing sampler, we tested the algorithm with a small real example, as shown in Table 3. Using Listing 1 we generated the upper triangular QUBO matrix Q, and we sent it to the sampler with the code shown in Listing 2.

Listing 2 Python code that submits the QUBO matrix to the quantum annealing sampler
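Again as a minimal sketch (our assumption of what Listing 2 could look like, using the D-Wave Ocean SDK, not the authors' original code), the matrix Q can be submitted to the annealer as follows:

```python
# Minimal sketch (not the authors' original Listing 2) of how the QUBO matrix Q
# can be submitted to a D-Wave quantum annealer through the Ocean SDK.
from dwave.system import DWaveSampler, EmbeddingComposite

# Q: upper triangular QUBO dictionary; illustrative values consistent with the
# sketch shown after Listing 1.
Q = {(1, 1): -10.0, (6, 6): -4.0, (9, 9): -7.0, (1, 6): 18.0, (1, 9): 18.0}

sampler = EmbeddingComposite(DWaveSampler())        # embeds the problem onto the QPU
sampleset = sampler.sample_qubo(Q, num_reads=1000)  # run the annealing 1000 times

best = sampleset.first.sample                        # lowest-energy sample found
selected = [i for i, x in best.items() if x == 1]    # incidents to be addressed
print(sampleset)                                     # energies and occurrences
print("Incidents selected:", selected)
```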

After executing the code, we obtain the sampling results as a text file in which we can observe the output of the algorithm and the energy of each of the solutions found. The solution with the minimum energy level is the one that fulfills the requirements and goals of our problem. The output of the algorithm for the data shown in Table 3 is shown next:

[Sampler output: qubit assignments, energies and number of occurrences of each solution found]
Table 3 Quantum annealer execution example

The output can also be seen graphically: Fig. 5 shows the configuration of the qubits in the quantum processor, in which each point is a qubit representing one of the incidents to be managed. The lines of the generated graph that link the qubits are the constraints and control associations that exist between the different incidents. The same graph shows the final configuration of the values (0 or 1) of each qubit, such that the system remains in the lowest-energy configuration.

Fig. 5 Quantum qubits in a D-Wave quantum processor after quantum annealing

Figure 6 shows the output of the algorithm graphically; we can see that incidents [3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15] were selected for processing, since the controls used to address incident 6 also solve incidents 1 and 9, as occurs with incidents 12 and 8 and with incidents 4 and 2.

Fig. 6 Solution inspector in a D-Wave quantum annealer

Figure 7 shows the histogram of the energies of the returned samples. In this figure we can see the number of occurrences of each of the solutions found by the quantum processor and its associated energy, so it can be seen visually that the result returned by the algorithm is the final configuration of qubit states with the lowest energy, which also occurred the greatest number of times during the quantum annealing process. In this case, the best solution was found with an energy value of -100 and was also the most frequently repeated solution.

Fig. 7 Lowest energy in a quantum annealer

4.4 Empirical results

The classical algorithms that solve this type of problem are usually based on backtracking, dynamic programming or branch and bound, all of which have exponential computational complexity. However, adiabatic optimization algorithms, due to their quantum nature and thanks to the concept of superposition, achieve processing similar to multithreaded processing in constant or linear time, depending on the algorithm implemented.

In order to assess the computational improvement of the proposed algorithm, we ran it with sample sets of different sizes using a D-Wave 2000Q lower-noise system, with a DW_2000Q_6 quantum processor providing 2048 qubits in a [16,16,4] Chimera topology. We also tested a classical backtracking algorithm written in Python running on a macOS system with a 3.2 GHz Intel Core i7 and 64 GB of DDR4 RAM (a sketch of this kind of baseline is shown below). As can be seen in Table 4, the experiments showed a real behavior similar to the expected one, with a runtime that is roughly constant and independent of the number of incidents to process (around 3 s), while the time taken by the backtracking algorithm to solve the problem grows exponentially with the number of incidents, not even converging with one hundred incidents.

Table 4 Classical vs quantum algorithm execution times (seconds)
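For reference, the following is a minimal sketch of the kind of classical backtracking baseline assumed here (our illustration, not the exact implementation used in the experiments):

```python
# Minimal sketch (not the exact baseline used in the experiments) of a classical
# backtracking search for the minimum-time set of incidents covering all controls.
def backtracking_cover(incidents, controls):
    """incidents: {id: estimated_time}; controls: {control_id: set of incident ids}."""
    ids = list(incidents)
    best = {"time": float("inf"), "selection": None}

    def explore(idx, selected, elapsed):
        if elapsed >= best["time"]:
            return                                   # prune: already worse than the best found
        if all(rel & selected for rel in controls.values()):
            best["time"], best["selection"] = elapsed, set(selected)
            return                                   # every control is covered
        if idx == len(ids):
            return                                   # no more incidents to consider
        i = ids[idx]
        explore(idx + 1, selected | {i}, elapsed + incidents[i])  # branch: select incident i
        explore(idx + 1, selected, elapsed)                       # branch: skip incident i

    explore(0, set(), 0)
    return best["selection"], best["time"]
```

The search explores, in the worst case, every subset of incidents, which is what makes the classical approach exponential in the number of incidents.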

In light of these results, we consider that the adiabatic quantum approach for solving optimization problems in the context of security incident management is highly efficient and a clear improvement over previous management based on classical optimization algorithms.

Fig. 8 Classical vs quantum algorithm execution times (seconds)

4.5 Economic considerations

As can be seen in Fig. 8, from 25 security incidents that are interrelated through their controls, classical computers are no longer efficient in solving the problem and start to take an increasing amount of time. Between 25 and 40 events, classical systems may still be able to solve the problem, albeit with increasing machine consumption. From 40 events onwards, the complexity of the problem is so high that it cannot be solved by classical computers and quantum computing must be used.

But another question we must ask ourselves is when it would become profitable from a cost point of view, taking into account current prices. For this purpose, a cost study has been carried out:

  • According to the Quantum Computing Report (AndreSaraiva, 2022), each qubit-second currently costs approximately $0.05 USD (June 2022).

  • On the other hand, the power consumption of a 3.2 GHz Intel Core i7 processor is around 205 W (Cutress, 2021).

  • The estimated cost of energy in Europe in 2022 was $0.3071 per kWh (EuroStat, 2022).

Table 5 Classical vs quantum algorithm execution cost ($ USA/seconds)

Therefore, the cost will be:

  • For the quantum computer, only 15 qubits were used. Therefore, the cost was:

    $$\begin{aligned} \begin{aligned}&(0.05\; \$/(Q \cdot sc) \times 15\; Q) = 0.75\; \$/sc \\&\$ = United\;States\;Dollars; \;sc = seconds; \;Q = qubits \end{aligned} \end{aligned}$$
    (10)
  • For the traditional computer, it was:

    $$\begin{aligned} \begin{aligned}&(205Wh \times 0.0003071\$Wh)/60sc = 0.00104926 \$Ws \\&Wh = Watts/hour; \;\$Wh = USD \times Watts/hour \\&Ws = Watts/seconds;\; \$Ws = USD \times Watts/seconds \\&sc = Seconds \end{aligned} \end{aligned}$$
    (11)

If we apply these costs to the results of Table 4, we can see that in this case the economic break-even point is reached in the interval between 30 and 40 incidents. From this point onwards, the cost skyrockets for traditional computers. It cannot be calculated from 100 incidents onwards, as the complexity prevents them from finding a solution, and even if they were to find one, its cost would be much higher than the use of a quantum computer (see Table 5).

Fig. 9 Classical vs quantum algorithm cost (USD/seconds)

Therefore, we can see how quantum computing, in addition to allowing us to solve problems in less time, also allows us to do so at a lower cost (see Fig. 9). Moreover, the costs associated with quantum computing are expected to continue falling at a faster rate than those of traditional computers, which will make its use to solve complex problems, such as the one discussed in this article, increasingly efficient.

5 Conclusions

In recent years, security management, risk analysis and, in particular, risk management based on the correct handling of and learning from security incidents have become increasingly important.

In this regard, the time it takes to respond to incidents and re-establish system security is a crucial aspect. However, the response time offered by classic solutions grows exponentially as the number of incidents increases, making them unsuitable for real-world scenarios.

Our risk analysis and management framework MARISMA, through the use of the automated cloud-based tool eMARISMA, has allowed us to identify this need through its application to a large number of companies analyzing their risks. We realized that this was a major limitation to offering increasing value to these companies, so the need arose to look for solutions outside traditional technologies.

We have designed and implemented an algorithm based on the new paradigm of quantum programming and, after performing a complete set of tests and executions, we can conclude that its results are correct and, as expected given the nature of quantum annealing, the execution time obtained is very low. Thus, we have shown how this quantum algorithm solves the problem in almost constant time, while classical algorithms have an exponential time cost.

Therefore, we can state that, although today there are numerous open problems related to security incident management, especially when dealing with large volumes of data, some of them can be solved using quantum algorithms. In fact, part of our future work is to further investigate quantum algorithms and swarm intelligence applied to the exploitation of our dataset of security risks and incidents from many organizations, in order to correlate security incidents in real time, providing a global and much more efficient way of responding to security incidents.