A robust statistical framework for cyber-vulnerability prioritisation under partial information in threat intelligence

Proactive cyber-risk assessment is gaining momentum due to the wide range of sectors that can benefit from the prevention of cyber-incidents by preserving the integrity, confidentiality, and availability of data. The rising attention to cybersecurity also results from the increasing connectivity of cyber-physical systems, which generates multiple sources of uncertainty about emerging cyber-vulnerabilities. This work introduces a robust statistical framework for quantitative and qualitative reasoning under uncertainty about cyber-vulnerabilities and their prioritisation. Specifically, we take advantage of mid-quantile regression to deal with ordinal risk assessments, and we compare it to current alternatives for cyber-risk ranking and graded responses. For this purpose, we identify a novel accuracy measure suited to rank invariance under partial knowledge of the whole set of existing vulnerabilities. The model is tested on both simulated and real data from selected databases that support the evaluation, exploitation, or response to cyber-vulnerabilities in realistic contexts. Such datasets allow us to compare multiple models and accuracy measures, discussing the implications of partial knowledge about cyber-vulnerabilities for threat intelligence and decision-making in operational scenarios.


Introduction
Cyber-vulnerabilities of devices, networks, or other information and communication technologies (ICTs) can generate system failures or pave the way for different types of cyber-attacks, including denial-of-service, malware injection, and data exfiltration. Social engineering can also enhance these incidents, while cascading effects in complex ICTs or systems-of-systems (Fortino et al., 2020) can compromise or interrupt service supply, undermining the operational continuity of critical infrastructures. In turn, cyber-incidents lead to economic losses, safety risks, reputational damage, and violations of personal rights such as privacy, the right-to-be-anonymous, and the proper use of personal or sensitive data. The effect of these damages is not always measurable due to the intangible nature of social and reputational effects and the lack of high-quality data, which are often kept secret to prevent additional reputational issues (Giudici and Raffinetti, 2021).
New vulnerabilities emerge from the increasing number of connections between digital systems, which now include personal devices, Internet-of-Things (IoT) sensors, cloud computing or storage services, and even vehicles (Barletta et al., 2023), which represent access points to other information systems through privilege escalation. The latter amplifies the severity of cyber-vulnerabilities and represents a weakness when local access points may lead to violations of classified information at the national level, as in the case of public administration (Catalano et al., 2021).
To prevent cyber-incidents, proactive cyber-risk assessment keeps evolving through new methods, standards, approaches, and good practices aimed at informed decision-making in the management of cyber domains, in particular cyber-vulnerabilities. Currently, cyber-risk assessment standards are based on severity levels assessed by institutions, such as the National Institute of Standards and Technology (NIST) and national Computer Security Incident Response Teams (CSIRTs). Although NIST provides a harmonised approach to evaluating the general impact of a cyber-vulnerability, contextual factors (e.g., exposure to a vulnerable technology and its identifiability) may influence exploitability. Available information on these factors may affect the perceived likelihood of a cyber-attack exploiting a cyber-vulnerability, influencing both offensive and defensive interventions and resource usage. Such information is often stored in reserved reports, data collections, or expert evaluations that are not disclosed. In addition to this limited knowledge, multiple cyber-vulnerabilities can be relevant to individuals and organisations, which have to prioritise them to better allocate their cybersecurity (economic, temporal, and professional) resources based on accessible information and personal criteria.
These issues prompt a deeper analysis of the way risk about cyber-vulnerabilities is perceived and evaluated based on available information: this leads to the following research questions (RQs): (RQ1) How to assess cyber-risk based on partial information on known vulnerabilities without relying on specific statistical properties (e.g., their distributional assumptions) that could hardly be verified?
(RQ2) How to measure the accuracy of such an assessment while also taking into account the presence of unknown vulnerabilities?
To answer these questions, we propose a new statistical framework that addresses the need for flexible and interpretable models for cyber-vulnerability assessment and prioritisation, thereby supporting adaptive decision-making. Flexibility is required to allow different users to adapt the framework based on the information they have access to, e.g., by adding explanatory variables or considering different response variables based on their own ranking. Interpretability is needed to prompt appropriate interventions, e.g., counteractions to fix vulnerabilities or prevent their exploitation.
This work focuses on vulnerabilities rather than actual incidents, which requires appropriate models to deal with the two types of uncertainty connected to the research questions in terms of both estimation procedures and accuracy measures. Specifically, to address RQ1, we adopt mid-quantile regression (Geraci and Farcomeni, 2022) as a means to provide robust estimates of ordinal (quantitative and qualitative) risk assessments of known cyber-vulnerabilities dependent on available information. Regarding RQ2, we introduce a new accuracy measure that meets an invariance requirement for cyber-vulnerability priority rankings with respect to unobserved or unknown vulnerabilities.
These proposals are tested on both simulated and real data; the former allow us to explore multiple scenarios and test the sensitivity of the assessment performance to hyperparameters and model assumptions, while the latter inform us on actual cyber-vulnerabilities, the extent to which they adhere to or deviate from parametric models, and the way the different methods perform under such deviations. We summarise the main contributions of this work as follows:
• The first methodological contribution is a family of mid-quantile-based statistical models to treat qualitative variables with quantitative methods. This proposal allows for overcoming the dependence on statistical assumptions, enabling the prediction of both qualitative and quantitative priority measures. Along with robust quantile regression estimates, these models return conditional probability estimates for an ordinal response variable, so they may serve as a basis for novel probabilistic modelling of cyber-threat assessment and risk analysis relying on likelihood estimations associated with a given impact (Crotty and Daniel, 2022) if an appropriate set of explanatory variables is available.
• As the focus of this work is on cyber-vulnerability prioritisation, the second theoretical contribution is the proposal of a new accuracy index for rank prediction. The definition of this index is grounded in the inherent uncertainty of unknown vulnerabilities. By relying on both simulated and real data, we can explore the properties of the new accuracy measure, in particular its ability to discriminate between different ranking models in terms of prediction accuracy, depending on hyperparameters (e.g., the number of priority levels) or deviations from widely adopted statistical assumptions.
• Along with the methodological contributions, we carry out a data collection procedure to test our proposals, integrating information from multiple datasets, discussing the results in relation to recent studies, and pointing out implications in cyber-vulnerability prioritisation for research in threat intelligence.
While the statistical approach presented here is flexible enough to include other threat sources, the data we consider in this work do not involve factors such as social engineering, insider threats, or physical effects (e.g., overload of ICT capacities). However, it is worth stressing that such factors may be as critical as cyber-vulnerabilities and may combine with them in the execution of a cyber-attack (Catalano et al., 2022). The paper is organised as follows: the notions of cybersecurity and cyber-vulnerabilities that are relevant for this work are described in Section 2, where we also present an overview of recent advances in related works and introduce the required preliminaries on the statistical models used in the paper. Our proposal is presented and motivated in Section 3, where we also discuss the appropriate index to assess performance and model comparison suited to our research questions in the cyber-risk domain. Section 4 describes the data sources that are used for the specification and validation of the proposed model. In Section 5, following a descriptive analysis of the data, we summarise and comment on the results of simulations and the exploration of the real dataset in terms of prioritising cyber-vulnerabilities. After the discussion of the results in Section 6, conclusions are drawn in Section 7, where we point out future work and applications of the present proposal.

Related work
Cyber-risk assessment is a well-recognised issue that plays a key role in different domains, e.g., the management of critical infrastructures (Paté-Cornell et al., 2018) and industrial sectors (Corallo et al., 2020). Cyber-physical systems and personal devices require adequate solutions to ensure data protection, and the diffusion of IoT is opening the way to new sources of cyber-risk (Radanliev et al., 2018, Tsiknas et al., 2021). A variety of cyber-risk models have been introduced to support risk assessment and prioritisation, but their effectiveness in operational scenarios is affected by domain-specific aspects and requires an appropriate trade-off between the assessment's validity and its usability for decision-making (Paté-Cornell et al., 2018).

Cyber-risk assessment and modelling
The scope of the cyber-risk assessment should be clarified by first specifying the objective of the analysis (e.g., proactive prevention or forensic investigation), the object of the analysis, and, consequently, the methodology adopted. This work is focused on proactive prevention, where one should distinguish between cyber-vulnerability and cyber-incident: a vulnerability is an access point, but it does not necessarily entail a cyber-incident, that is, actual (intentional or not) damage to a digital system. This distinction is relevant for decision-makers, namely, cybersecurity experts and ICT managers, security operational centres, or national agencies. Cyber-incident analysis is fundamental to cyber-forensic activities. Still, the prevention of new cyber-incidents in operational scenarios should use all usable information to better manage security resources and take appropriate counteractions.
Each known cyber-vulnerability is uniquely identified by a Common Vulnerabilities and Exposures (CVE) code. In the NIST classification, the CVE acts as a primary key to retrieve both the impacts in terms of CIA dimensions (confidentiality, integrity, and availability) and the severity assessment of relevant intrinsic characteristics of the vulnerability. Focusing on cyber-vulnerabilities as the object of our assessment, the standard approach to scoring emergent vulnerabilities is driven by the NIST's methodology (Sharma and Singh, 2018, Jung et al., 2022).
In addition to such intrinsic features of cyber-vulnerabilities, other extrinsic factors affect cyber-risk and threats, in particular a technology's exposure, which refers to the number of exposed hosts (devices or systems) where a given vulnerability, labelled by a CVE, has been recognised. Exposure contributes to defining targets and feasible attacks along with exploits and their cost; an exploit is defined as a software component, a process, or any human or physical resource that can be directly executed to perform a cyber-attack. In this work, we primarily deal with software exploits, but related work also addresses the role of interactions between malicious software and human factors in the definition of new attack techniques (Tommasi et al., 2022). We talk about a 0-day when the vulnerability has not been disclosed before and there are no available solutions to patch it.
Proactive defence aims at increasing resilience at the individual and network level (preventing criticalities), supporting efficient management of resources and ICT maintenance, and preserving individual and community rights in cyber-space such as privacy, compliance with the General Data Protection Regulation (GDPR), and the right-to-be-anonymous. In particular, proactive defence is needed to choose appropriate counteractions that mitigate the occurrence of cyber-incidents arising from cyber-vulnerabilities. There are several techniques to enhance cybersecurity, including vulnerability assessment, penetration testing, and static or dynamic analysis of applications. However, proactive defence is subject to bounded resources: time constraints, verification costs (Srinidhi et al., 2015, Gao et al., 2022), a specific effort for proprietary software, limits to automation, and contextual security analysis in highly connected systems. Therefore, accurate methods to support experts in risk assessment are a relevant premise for prioritising interventions and, hence, making better use of resources. In this regard, (semi-)automatic tools and applications based on AI, especially deep learning, are gaining increasing attention as practical support to detect malware (Cui et al., 2018). Unfortunately, they do not provide complete protection against malware attacks; a recent study (Catalano et al., 2022) showed that classification based on convolutional neural networks could be deceived by masking malware with a goodware component to bypass automatic controls. This approach is called polymorphism and is a software property often used in cyber-guerrilla attacks (Van Haaster et al., 2016). Furthermore, Macas et al. (2023) conducted a detailed review and categorisation of cyber-attacks taking advantage of adversarial learning. On the other hand, these works outline potential counteractions to mitigate cyber-risks in relation to such applications of deep learning. Also, new approaches are being investigated to benefit from deep learning while overcoming some of its limitations, e.g., enhancing explainability (Keshk et al., 2023, Sharma et al., 2023).
Moving to risk assessment methodologies and modelling, different research streams are investigated to support cybersecurity experts through different methodological or algorithmic techniques. Qualitative approaches supporting cyber-risk management, including risk matrices, are recommended in international standards. However, the validity of such approaches is limited by methodological issues that can lead to inconsistencies, misleading interpretations, and a lack of focus on potential correlations among risk factors (see, e.g., Crotty and Daniel (2022) and references therein).
On the other hand, partial information in the cybersecurity domain is a serious obstruction to quantitative analysis, which explains its limited adoption compared to qualitative or semi-qualitative methods based on risk matrices. In fact, limited data accessibility has been widely recognised as a relevant issue (Giudici and Raffinetti, 2021), with an economic impact on estimates (Anderson et al., 2013) and consequent effects on insurance (Carfora et al., 2019). Among the main factors leading to data scarcity or non-availability, we mention resource limitations for conducting vulnerability assessments and non-disclosure policies to avoid sharing confidential information on cyber-threats and reputational losses. These aspects should be considered along with the lack of harmonisation between different quantitative methodologies, which hinders the assessments' comparability (Crotty and Daniel, 2022, Facchinetti et al., 2023).
A central topic in quantitative risk analysis is the way the likelihood and impact of a cyber-incident are estimated. Probability estimation is subject to various uncertainty sources and limitations in different quantitative methods (Allodi and Massacci, 2017), and available assessments provided by cybersecurity agencies should be integrated with external information. For example, several studies adopt the CVSS as a means to evaluate the probability of a cyber-vulnerability's exploitation leading to a cyber-attack; see, e.g., the references in He et al. (2019, p. 168207). Similar approaches are questioned by other works, which suggest that the CVSS alone does not directly link to a cyber-attack's likelihood; instead, the CVSS should be combined with external information regarding exploits and available resources on the black market (Allodi and Massacci, 2014).
A general approach to data-driven updates of probability distributions by combining different information sources about cyber-vulnerabilities is given by Bayesian statistics and related computational techniques. The Factor Analysis of Information Risk (FAIR) model is a prominent example based on a well-established information security risk ontology; FAIR allows evaluating risk through the specification of a class of prior distributions and Monte Carlo simulations (Crotty and Daniel, 2022). Even in this case, the model's applicability is limited by the adherence of specific scenarios in the cyber-domain to the model's distributional assumptions, and recent works have tested and relaxed such assumptions (Wang et al., 2020). Related to this work, network-based approaches have been applied to cyber-risk modelling in different ways, starting with network analysis of connected hosts (Gil et al., 2014) and including knowledge graphs (Zhao et al., 2023) and Bayesian networks or machine learning (e.g., random forest) algorithms (Facchinetti et al., 2023, Kia et al., 2024). Knowledge graphs allow encoding semantic structures and have strict relations with cybersecurity ontologies (Zhao et al., 2023, Sec. 2), providing practical support in knowledge retrieval, reporting, and analysis in combination with statistical or machine learning algorithms. Bayesian networks are a powerful approach to exploring causal relations or dependences, for example, in attack chains; furthermore, they are also used to enhance the integration of qualitative frameworks and regulatory aspects that can affect cyber-risk (Shin et al., 2015). Bayesian networks can be integrated with other techniques, including taxonomic models based on the frequency and magnitude of threats and losses, such as the FAIR model mentioned above (Wang et al., 2020). Estimation techniques in Bayesian networks rely on distributional assumptions or the knowledge of distribution parameters, and they can be affected by uncertainty about the dependence structure connecting vulnerabilities, devices, and attacks. Therefore, even for this class of methods, deviations from distributional assumptions or a lack of information to identify the probabilistic or statistical models could undermine the validity of the approach, as current studies point out (Allodi and Massacci, 2017, Woods and Böhme, 2021, Kia et al., 2024).
Aiming at fostering automatic assessments and reducing subjective experts' bias, new supervised methods for cyber-risk prediction based on CVEs have been recently proposed, where natural language processing and topic detection help predict vulnerabilities' likelihood and impact (Kia et al., 2024). Motivated by the same need to infer the likelihood and impact of a cyber-vulnerability's exploitation, fuzzy logic has been considered too (Dondo, 2008). The role of uncertainty in the cyber-domain is also relevant for the development of fuzzy techniques applied to intrusion detection systems (Javaheri et al., 2023), game-theoretic modelling of allocation and sharing of cyber-defence resources (Gao et al., 2022), copula-based risk modelling for time series analysis of cyber losses (Zängerle and Schiereck, 2023), and stochastic processes for evaluating the resilience of a system based on Markov chains (Zhang and Malacaria, 2021).

Preliminaries on statistical models
In line with the research questions stated in the Introduction, here we focus on interpretable statistical modelling and recently proposed applications to promote proper cyber-risk assessment and cybersecurity analysis. Before discussing the two specific models addressed in this work in the cybersecurity domain, we briefly review the ordered logit (OrdLog) model as a benchmark for regression with ordinal responses (McCullagh, 1980).

Ordered logit model
The OrdLog model is a Generalised Linear Model (GLM) suited to cumulative probability distributions for ordinal responses conditioned on explanatory variables. GLMs have proven useful with count response data as a means to predict the number of intrusions (Leslie et al., 2018) or other count data related to cyber-attacks. These statistical models can support testing the distributional assumptions underlying such count data. Leslie et al. (2018) stress some issues already mentioned above, namely, the subjectivity of vulnerability scoring systems, the issues posed by a qualitative, rather than quantitative, structure, the partial knowledge about existing vulnerabilities, and the dependence on the adopted technology.
The OrdLog model is specified as follows: let y_1, ..., y_n be a sample of n ordinal responses, and X be a vector of explanatory variables (or regressors). The OrdLog model aims at describing the effect of the regressors on the log-odds

log [ P(y ≤ h | X) / P(y ≥ h+1 | X) ] = α_h + β^T X,   (2.1)

where P(y ≤ h | X) (respectively, P(y ≥ h | X)) is the left (respectively, right) cumulative probability associated with the h-th level of the response and conditioned on the observed values X. The fit procedure estimates the model parameters, which are the level-specific intercepts α_h and the β coefficients that quantify the effects of the regressors on the log-odds. This formulation assumes the proportional odds hypothesis, namely, the log-ratio of the odds on the left-hand side of (2.1) depends on the ordinal level h only through the coefficient α_h, which does not depend on the variables X.
Despite the wide applicability of ordered logit or probit models, more general approaches can be envisaged to overcome limitations arising from the potential violation of model assumptions (in this case, the proportional odds hypothesis). Another motivation for new methodologies to deal with ordinal responses is the reduced interpretability of parameter estimates of GLMs with respect to simpler linear regression. This aspect is relevant in operational scenarios, where decision-makers should be able to interpret and quantify the impact of an explanatory variable without assuming background knowledge of the underlying statistical model. For this reason, we briefly present a recent proposal regarding the use of a regression model with ordinal responses in cyber-risk assessment.

Rank transform in linear regression
A recent approach in Giudici and Raffinetti (2021) involves a linear regression model (which we refer to as LinReg) for data regarding cyber-incidents and is based on the rank transform of an n-dimensional ordinal variable Y with k levels, that is, the set of ranks for each observation with a given prescription to handle ties (Iman and Conover, 1979). Formally, we move from the ordinal response Y to the rank-transformed variable R(Y) defined by the midranks

R(h+1) = Σ_{j=1}^{h} #Y^(−1)({j}) + ( #Y^(−1)({h+1}) + 1 ) / 2,   (2.2)

where #Y^(−1)({h+1}) denotes the number of observations of Y whose value is h+1. The fit of the regression model

R(Y) = β_0 + β^T X + ε,  ε ∼ N(0, σ²),   (2.3)

where N(0, σ²) is the centred normal distribution with variance σ² estimated from the data, is evaluated through the Rank Graduation Accuracy (RGA) (Giudici and Raffinetti, 2021)

RGA = [ Σ_{i=1}^{n} ( (1/(n ȳ)) Σ_{j=1}^{i} y_{r̂_j} − i/n ) ] / [ Σ_{i=1}^{n} ( (1/(n ȳ)) Σ_{j=1}^{i} y_{(j)} − i/n ) ],   (2.4)

where test data y_1, ..., y_n have mean ȳ, y_(j) denotes the j-th smallest observation, and the ordering r̂ follows the estimated ranks obtained by fitting (2.3).
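A minimal sketch of the rank-transform step can clarify the LinReg pipeline. The example below assumes midranks (average ranks for ties) as the tie-handling prescription and uses synthetic data; the variable names are illustrative only.

```python
# Sketch of the LinReg approach on synthetic data: the ordinal response is
# replaced by midranks (average ranks for ties, in the spirit of Iman and
# Conover, 1979) and regressed on the covariate by ordinary least squares.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
# Ordinal response with 5 levels (0..4), monotonically related to x
y = np.digitize(1.5 * x + rng.normal(size=n), bins=[-2.0, -0.5, 0.5, 2.0])

r = rankdata(y, method="average")     # rank transform R(Y) with midranks

# OLS fit R(Y) = b0 + b1 * x + error; r_est gives the estimated ranking
A = np.column_stack([np.ones(n), x])
b0, b1 = np.linalg.lstsq(A, r, rcond=None)[0]
r_est = b0 + b1 * x
```

The fitted values `r_est` play the role of the estimated ranks used to prioritise observations.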
As anticipated, the choice of model (2.2)-(2.3) is argued to provide more interpretable results supporting decision-making compared to GLMs. However, the use of linear regression with the rank transform may not be suited to dealing with cyber-vulnerabilities; contrary to actual cyber-incidents, vulnerabilities are subject to the different types of uncertainty mentioned above, especially in the cyber-guerrilla context (Van Haaster et al., 2016).
From a methodological perspective, this means that several assumptions underlying the linear regression model may not be fulfilled when dealing with cyber-vulnerabilities. In particular, linear models rely on the normality assumption for the residuals, which may not be met in networks of digital systems; in fact, evidence shows that some relevant features of data breach datasets are well described by heavy-tail distributions (Edwards et al., 2016). Even the homoscedasticity assumption may not be fulfilled, and class unbalancing could make the linear model more sensitive to this violation, while quantile regression does not assume homoscedasticity.

Quantile regression: remarks for cyber-risk assessment
Both the OrdLog and the LinReg models rely on assumptions that may be unverifiable in real datasets: unbalanced classes, deviations from normality, and a lack of complete knowledge of the space of potential vulnerabilities (unknown ones or 0-days) may reduce the effectiveness of the aforementioned regression methods. In the cyber-domain, such hypotheses may actually not be verifiable due to the already-mentioned confidentiality and restrictions on data sharing. For this reason, we consider distribution-free approaches to make the analysis more robust against violations of statistical assumptions and concentrate on quantile regression (Koenker and Hallock, 2001).
Let Q_τ := inf{y : τ ≤ F(y)} be the τ-th quantile of a random variable with cumulative distribution function (CDF) F. Quantile regression estimates the conditional quantile as a linear function of the regressors,

Q_τ(Y | X = x) = x^T β(τ).   (2.5)

Parameter estimates β̂(τ) ∈ R^p come from the minimisation of the loss function (Koenker and Hallock, 2001)

L_τ(β) = Σ_{i=1}^{n} ρ_τ(y_i − x_i^T β),  with ρ_τ(u) = u ( τ − I(u < 0) ),

where I(A) is the characteristic function of a subset A ⊆ R, so that I(u < 0) stands for the characteristic function of (−∞, 0) evaluated at u.
In addition to increased robustness against model misspecification, the choice of quantile regression leads to a new parameter τ that naturally relates to the notion of Value-at-Risk (VaR) (also see Radanliev et al. (2018), Carfora et al. (2019) for a discussion of VaR in the cybersecurity context), in line with the purposes of this work.
Different estimates can arise from different choices of the quantile level, which lets us compare different rankings or prioritisations at different quantile levels by looking at the parameters associated with the regressors. However, this aspect may lead to ambiguities if it is not properly linked to risk evaluation and decision-making, e.g., when ranking the attributes represented by the regressors (Angelelli and Catalano, 2022). This leads us to consider quantile regression where the response explicitly refers to a vulnerability's priority.

Mid-quantile regression
Dealing with an ordinal response, we have to extend the quantile regression approach to discrete variables; for this purpose, we take advantage of mid-quantile (MidQR hereafter) regression methods. Recent work by Geraci and Farcomeni (2022) applies mid-quantile regression (Parzen, 2004) to discrete data: starting with a random variable Y described by a categorical distribution with density

f(y) = Σ_{j=1}^{k} π_j δ(y − y_j),  π_j = P(Y = y_j),  y_1 < ... < y_k,

where δ(•) is the Dirac distribution, the mid-CDF is defined as G_Y(y) := F(y) − (1/2) P(Y = y). Introducing π_0 = 0, π_{k+1} = 1, y_0 = y_1, and y_{k+1} = y_k, we can define the mid-quantile function as the piecewise linear interpolation of the points (G_Y(y_j), y_j), j ∈ {0, ..., k+1}. Setting F(y) := P(Y ≤ y) as before, estimators for unconditioned MidQR are obtained naturally, i.e., by substituting the sample estimates into the expression of the mid-quantile function. Such estimators enjoy good asymptotic consistency and normality of the sampling distribution; see Ma et al. (2011), Geraci and Farcomeni (2022), and references therein.
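The unconditional mid-quantile construction can be sketched in a few lines: the mid-CDF takes the value G(y_j) = F(y_j) − π_j/2 at each distinct level y_j, and the mid-quantile function interpolates linearly between the points (G(y_j), y_j). The sample values below are purely illustrative.

```python
# Sketch of the unconditional mid-quantile for a discrete sample,
# following Parzen's mid-distribution idea.
import numpy as np

def mid_quantile(sample, tau):
    vals, counts = np.unique(sample, return_counts=True)
    pi = counts / counts.sum()          # estimated level probabilities
    G = np.cumsum(pi) - pi / 2          # mid-CDF at each distinct value
    # np.interp clamps tau outside [G[0], G[-1]] and interpolates inside
    return float(np.interp(tau, G, vals.astype(float)))

sample = np.array([1, 1, 2, 2, 2, 3, 4, 4])
m = mid_quantile(sample, 0.5)
```

Note that, unlike the ordinary sample quantile, the mid-quantile can take values between the observed levels because of the interpolation step.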
For a given link function h(•), we can consider a conditional mid-quantile function H_{Y|X}(τ) specified through h(H_{Y|X}(τ)) = x^T β(τ), and estimate Ĝ_{Y|X}(y|x) from samples (x_i, y_i), i ∈ {1, ..., n}, through a non-parametric estimator that can encompass both continuous and discrete regressors (Li and Racine, 2008):

F̂_{Y|X}(y|x) = (1/n) Σ_{i=1}^{n} K_λ(X_i, x) I(y_i ≤ y) / δ̂_X(x),

where K_λ(X_i, x) is a kernel function with bandwidth λ, δ̂_X(x) is the kernel estimator of the marginal density of the explanatory variables X, and Ĝ_{Y|X}(y|x) = F̂_{Y|X}(y|x) − (1/2) P̂(Y = y | X = x). Estimates of the coefficients β follow from the minimisation of the following quadratic loss function:

L(β) = Σ_{i=1}^{n} ( τ − Ĝ_{Y|X}( h^{−1}(x_i^T β) | x_i ) )².   (2.10)

The estimation and fitting procedures can be carried out using the R package Qtools developed by Geraci and Farcomeni (2022).

Contribution and proposed methodology
The previous discussion points out the need to facilitate the transfer of qualitative structures and assessments into quantitative models, as both have practical advantages and limitations. Qualitative assessments are widely adopted in standards and guidelines and allow encoding experts' evaluations even when sufficient data for quantitative analyses are not available; on the other hand, they may give rise to inconsistencies and embed subjective factors or biases, especially in the assessment of probabilities related to cyber-events (De Smidt and Botzen, 2018). Quantitative methods enhance the assessments' accuracy and reduce ambiguity, but their implementation requires sensitive information or confidential data that are generally not available. Furthermore, the validity of those methods may rely on distributional assumptions or the knowledge of parameters or dependencies, which may be limited for the same reasons.
A way to combine the two approaches is to adopt quantitative models to analyse ordinal assessments of qualitative variables; specifically, mid-quantile methods involve fitting (mid-)conditional distribution functions for cyber-vulnerability priority levels based on available information, so we can convert CVSS qualitative information, in conjunction with other relevant risk factors (Allodi and Massacci, 2014), into probabilistic models. Starting with an ordinal response variable, we can also move from cyber-vulnerabilities' priority to ranking, enabling the comparison of different methodologies such as the LinReg model mentioned above. The non-parametric approach that we adopt avoids methodological issues that could compromise the validity of the analysis, making the estimated probability usable in multiple settings. Finally, an appropriate accuracy index is proposed to enhance the compatibility of ranking predictions with the original ordinal structure and the uncertainty related to unknown cyber-vulnerabilities.

Estimation: MidQR for robust cyber-vulnerability assessment
For our purposes, MidQR is used to provide estimates of the conditional quantile given a set of regressors that includes both intrinsic vulnerability characteristics and external variables (exposure and exploit availability), with a qualitative priority assessment as our ordinal response variable. In addition to quantile estimates, we are interested in the mid-cumulative distribution function that describes the conditional probability of priority levels, as it helps to identify where a lack of complete information may have an effect. Such a conditional distribution concerns the quantity

P(Y ≤ y | X = x) = P(Y ≤ y ∧ X = x) / P(X = x),   (3.1)

where we focus on regressors X with a non-zero probability mass. The quantity (3.1) can be seen as a balance of the joint occurrence of a given impact level with cyber-vulnerability features (P(Y ≤ y ∧ X = x)) and the features' likelihood (P(X = x)). The different forms of uncertainty mentioned in the Introduction, such as underreported vulnerabilities, affect the evaluation of (3.1) starting from the measurements x, as we have limited knowledge of the sample space due to unknown vulnerabilities. As a subsequent step, the resulting estimates are used to predict the priority level of new vulnerabilities at a given quantile level and, then, prioritise them. This last step should enjoy some invariance properties for the predicted values to mitigate the effect of the aforementioned uncertainty on the ranking accuracy. This requirement has a practical effect in regression models dealing with both estimated rankings (LinReg) and, more generally, distributions of ordinal variables (such as MidQR). In the scope of this work, the performance index we introduce in the next section complements the estimation phase by taking into account the effects of partial knowledge about vulnerabilities on rankings.
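For a discrete regressor, the conditional probability in (3.1) can be estimated directly as a ratio of relative frequencies. The sketch below is illustrative only; the data and the function name `cond_cdf` are invented for the example.

```python
# Sketch: empirical estimate of P(Y <= y | X = x) for a discrete regressor,
# computed as the ratio of joint and marginal relative frequencies.
import numpy as np

def cond_cdf(y_sample, x_sample, y, x):
    mask = (x_sample == x)
    if not mask.any():
        raise ValueError("x has zero empirical probability mass")
    joint = np.mean((y_sample <= y) & mask)   # estimates P(Y <= y and X = x)
    marginal = np.mean(mask)                  # estimates P(X = x)
    return joint / marginal

y_sample = np.array([1, 2, 2, 3, 1, 3, 2, 1])
x_sample = np.array([0, 0, 1, 1, 0, 1, 0, 1])
p = cond_cdf(y_sample, x_sample, 1, 0)
```

The guard on the empirical mass of x mirrors the restriction to regressors with non-zero probability mass.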
Experts' subjectivity in the assessment of regressors extracted from the attack vector is another source of uncertainty (Kia et al., 2024). Even if this work does not involve measurement errors for the explanatory variables X in the regression models, we point out that Bayesian methods are a viable approach to dealing with a mixture of experts and grouping multiple regression models in the context of cyber-vulnerability assessment (Angelelli et al., 2022).

A new performance index for cyber-risk prediction under uncertainty
The uncertainty about the sample spaces, with its consequent effects on the estimation of the priority assessment, is a major driver prompting our search for a new approach to evaluating the accuracy of the assessment.
Specifically, the use of quantitative values in (2.4) should take into account the nature of the variables in the model. The evaluation of (2.4) assumes an algebraic structure, formally, the semiring (N, +, •, 0, 1) of natural numbers for rankings or the ordered field (R, +, •, 0, 1) for regression, which is not necessarily linked to the original ordinal variables assessing the priority of a cyber-vulnerability. This algebraic structure is an artefact suited to the regression model and, hence, to the estimated variables (let them be the rank transform or the mid-quantile); the only effect derived from the ordinal variables is the order defining the summands in (2.4). It is worth noting that a similar observation also applies in other frameworks for uncertainty modelling, e.g., when dealing with structural representations of epistemic uncertainty in data-driven initiatives (Angelelli et al., 2024).
Motivated by these considerations, we introduce a novel prediction accuracy index to accommodate the characteristics of cyber-vulnerability data. We consider a reverse RGA index defined as RGA(r_tr, r_est), namely, we exchange the roles of the estimated ranking r_est and the "true" ranking r_tr. We refer to such an index as the Agreement of Grounded Rankings (AGR) to stress the focus on the reference frame in the ranking, namely, the order structure and the limited knowledge of the set of cyber-vulnerabilities to be ranked.
To better appreciate the need for appropriate use of the RGA index for unconventional cyber-risk assessment, we consider the case of sub-sampling, i.e., known subsets of an unknown family of cyber-vulnerabilities. This emulates the partial knowledge available due to 0-days.
Example 1. We can consider three 5-dimensional rank vectors c_est, c_tr,1, and c_tr,2, where c_est derives from a given estimation procedure, while c_tr,u, u ∈ {1, 2}, are two "true" rankings obtained from different knowledge about the state of a digital system and its sample space. Although they are different, the rankings c_tr,1 and c_tr,2 are consistent with the same attribution of ordinal levels: for the sake of concreteness, we can assume that the components of both c_tr,1 and c_tr,2 are generated by ranking the same ordinal assessment ("10", "6", "8", "8", "3"), where priority levels are ordered from "10" to "1". In this case, the differences between c_tr,1 and c_tr,2 can arise from the existence of other elements in the two ranked sample spaces beyond those associated with the components of c_tr,1 and c_tr,2. The evaluation of RGA(c_est, c_tr,u) for u ∈ {1, 2} following the definition (2.4) does not satisfy invariance under changes in rankings that are generated by the same ordinal assessment: the two evaluations differ, whereas the corresponding AGR values coincide. This shows that the AGR index resolves the lack of invariance under sub-sampling in RGA. The favourable invariance of the AGR index under rank transformations that are compatible with the same underlying ordinal assessment is in line with Luce's axiom of Independence of Irrelevant Alternatives (Luce, 2005), while some algebraic conditions related to this type of symmetry have been discussed in reasoning under uncertainty (Angelelli et al., 2024). Practically, this invariance is required when dealing with partial information about the space of potential cyber-vulnerabilities, which is the general situation faced by a decision-maker due to the occurrence of unknown vulnerabilities not yet exploited, 0-days, and unconventional cyber-attacks (Van Haaster et al., 2016, Tommasi et al., 2022).
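The mechanism behind Example 1 can be reproduced with a short script: the same ordinal assessment yields different "true" rank vectors depending on whether hidden vulnerabilities belong to the ranked sample space. The two hidden severities (9 and 7) are hypothetical values chosen for illustration, not taken from the paper.

```python
def avg_ranks(values, pool):
    # Average (mid-)ranks of `values` computed within the reference pool:
    # rank(v) = #{w in pool : w < v} + (#{w in pool : w == v} + 1) / 2.
    return [
        sum(w < v for w in pool) + (sum(w == v for w in pool) + 1) / 2
        for v in values
    ]

# Ordinal priority assessment of the five observed vulnerabilities (Example 1).
observed = [10, 6, 8, 8, 3]
# Hypothetical unobserved vulnerabilities (e.g., 0-days) in the full space.
hidden = [9, 7]

c_tr_1 = avg_ranks(observed, observed)           # ranks within the known set
c_tr_2 = avg_ranks(observed, observed + hidden)  # ranks within the full space

print(c_tr_1)  # [5.0, 2.0, 3.5, 3.5, 1.0]
print(c_tr_2)  # [7.0, 2.0, 4.5, 4.5, 1.0]
```

The two rank vectors differ component-wise, yet they induce the same ordering of the observed vulnerabilities; an index invariant under such sub-sampling (like AGR) therefore rates a prediction identically in both scenarios.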

Databases
Several databases can be used to assess the cybersecurity of a digital system. Among the most widely used by practitioners are the following ones:
• The National Vulnerability Database (NVD) includes assessments of vulnerabilities' severity by the NIST in terms of data impact dimensions (Confidentiality, Integrity, and Availability) and three additional technical features describing the accessibility prompted by the cyber-vulnerability, namely, Access Vector (AV), Access Complexity (AC), and Authentication (Au). The severity assessments of these six components compose the attack vector.
• The CSIRT database reports relevant updates on vulnerabilities in line with the evaluation by NIST. Such reports are communicated by the Italian CSIRT, which is established within the National Cybersecurity Agency.
• The Shodan database reports exposed hosts or IP addresses affected by known vulnerabilities, which may represent a relevant driver for attackers' intervention. The Shodan database can be queried by specifying a CVE and the country of the exposed hosts. Data are collected by the Shodan monitor platform by combining different techniques, such as crawling, IP lookups, and metadata analysis.
• Reported exploits for CVEs can be extracted from ExploitDB. Information about exploits can be further refined from VulnDB, a database that collects information on the price range of exploits associated with a CVE. The fields extracted from VulnDB include the 0-day price range, the price at the time of querying, and the exploitability.
• Tenable's assessment interprets CVSS scores and assigns an ordinal risk priority through threat and vulnerability analysis. It contains qualitative risk information in Tenable's Vulnerability Priority Rating (VPR) assessment, which is obtained through machine learning algorithms that process information collected from the dark web, social media, code repositories, and reports. This index is the result of a threat intelligence activity that incorporates exploits' code maturity and extracts features to monitor the impact of a cyber-vulnerability in terms of actual and predicted threats.
For all these databases, we prepared Python scripts in order to extract the required data automatically through APIs:
• We started by selecting vulnerabilities identified in Italy through Shodan to obtain a base set of CVEs. Then, the shodan API was used to extract the exposure data.
• Subsequently, the scripts were adapted to extract the attack vectors associated with these CVEs from the NVD database through a request that returned a JSON file, which was inspected to get the CVSS scores.
• Then, we checked the availability of the exploits from ExploitDB and VulnDB. For ExploitDB, we used CVE Searchsploit (Fioraldi, 2017) to obtain the exploits for the selected CVEs.
• Finally, a dedicated script was used to obtain Tenable's VPR assessment of the CVEs under consideration; in this case too, we collected these data by inspecting the output of a request for the selected CVEs.
After running these Python scripts, the final dataset for model validation consists of n = 714 units. This data extraction procedure is graphically depicted in Figure 1 as a component of the overall analysis to validate the proposal and investigate its scope of applicability.
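As an illustration of the NVD step, the following sketch parses the attack-vector components from a response shaped like the NVD API 2.0 / CVSS v2 JSON schema; the sample payload is invented for the example, and the field names and layout should be verified against the live service before use.

```python
# Illustrative shape of an NVD API 2.0 response for one CVE (hypothetical
# payload; field names follow the CVSS v2 JSON schema but should be checked
# against the live API).
SAMPLE_RESPONSE = {
    "vulnerabilities": [{
        "cve": {
            "id": "CVE-2014-0160",
            "metrics": {"cvssMetricV2": [{"cvssData": {
                "accessVector": "NETWORK",
                "accessComplexity": "LOW",
                "authentication": "NONE",
                "confidentialityImpact": "PARTIAL",
                "integrityImpact": "PARTIAL",
                "availabilityImpact": "NONE",
            }}]}
        }
    }]
}

# The six components (C, I, A, AV, AC, Au) that compose the attack vector.
ATTACK_VECTOR_FIELDS = [
    "confidentialityImpact", "integrityImpact", "availabilityImpact",
    "accessVector", "accessComplexity", "authentication",
]

def extract_attack_vector(response):
    # Inspect the JSON returned by the request and keep only the attack vector.
    cve = response["vulnerabilities"][0]["cve"]
    cvss = cve["metrics"]["cvssMetricV2"][0]["cvssData"]
    return {field: cvss[field] for field in ATTACK_VECTOR_FIELDS}

print(extract_attack_vector(SAMPLE_RESPONSE))
```

In the actual pipeline, `SAMPLE_RESPONSE` would be the parsed body of an HTTP request to the NVD endpoint for each CVE in the Shodan-derived base set.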

Data description
The above data manipulation procedure leads to a dataset with the following variables:
1. Components of the attack vector obtained from the NIST vulnerability assessment constitute ordinal regressors.
2. Exposure is a numerical variable that counts exposed hosts, but the variety of such count data lets us consider a continuous approximation of this variable.
3. For each CVE, the existence or absence of an exploit is encoded in a dichotomous variable.
4. Tenable's priority rating is the ordinal response (dependent variable) that is linked to the previous explanatory variables through MidQR.
For the present investigation, we selected p = 7 explanatory variables returned by the procedure described above, whose interpretation is summarised in Table 1.
Table 1: Main attributes of the variables and their interpretation for statistical modelling. For each set of variables, the data source is provided in the leftmost column. The quantifications for the ordinal assessments of the components X_C, X_I, X_A, X_AV, X_AC of the attack vector (rightmost column) are provided by NVD experts.

Descriptive analysis of the dataset
Data extracted from the databases described in Section 4 yield n = 714 cyber-vulnerabilities in Italy. The time span of the CVEs is 1999-2021. We concentrate on a single country to take into account local (country-wise) factors that could generate differences in cyber-risk and threat analyses (Crotty and Daniel, 2022) and carry out the analysis within a known context. In our study, this choice may help to control contextual covariates that are not involved in this analysis, e.g., regulatory aspects and governance factors affecting both technological adoption and cyber-threats at a national level. We emphasise that this choice can be customised for other countries or extended on a cross-national scale based on the specific research design and assessment objectives.
Regarding the time span, while the attack vector's components are intrinsic and, hence, do not change with time, the VPR and exposure are dynamically monitored and adapted, so they reflect the current state of the vulnerability within its limited life-cycle, also considering technology updating and cyber-vulnerability patching or fixing. By taking the exploit variable as dichotomous (existence or absence), we overcome potential temporal effects related to the number of exploits, which fall beyond the scope of the present analysis. However, we stress that the aforementioned regression models can capture temporal factors through relations between independent variables (in particular, exposure and exploit availability) and the dependent response (Tenable's VPR assessment). A dedicated study of these relations could align with and complement time-series analysis of the information in CVE scores and descriptions (Kia et al., 2024).
We note that each variable in the attack vector is characterised by manifest unbalancing among the different levels, as shown in Figures 2a-2b. When the response in a regression model is well approximated by a continuous variable, then unbalancing could make linear regression more sensitive to deviations from homoscedasticity; hence, quantile regression could be favourable. This is the case when the exposure of vulnerable hosts is related to intrinsic features of the vulnerabilities (Angelelli and Catalano, 2022): it is easily checked from the QQ-plots in Figures 3a-3b that the residuals of the exposure N_exp and its log-transform 10 • log_10(1 + N_exp), considered as responses in a linear model with regressors (X_C, X_I, X_A, X_AV, X_AC), show strong deviations from normality.
(Figure 3 panels: (a) model with regressors (X_C, X_I, X_A, X_AV, X_AC); (b) free model.)
This remark also entails that linear regression would not fit the distributional assumptions when a proxy of cyber-risk, such as exposure, is used as the response. We also note that even the residuals of the "free model", i.e., the QQ-plot of the exposure N_exp itself, violate the normality assumption (see Figure 3b). The use of the transform N_exp → 10 • log_10(1 + N_exp) in the previous QQ-plots slightly reduces the deviation from normality; more importantly, it highlights multimodality in the distribution of exposure, as is manifest in the histograms depicted in Figures 4a-4b. This suggests the need to go beyond linear models for an appropriate description of the external characteristics of cyber-vulnerabilities, starting from their intrinsic (attack vector) and extrinsic (exposure, exploits) features as regressors.
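The effect of the transform N_exp → 10 • log_10(1 + N_exp) can be sketched on synthetic counts; the lognormal parameters below are illustrative stand-ins for real exposure data, chosen only to mimic a heavy right tail.

```python
import math
import random

def exposure_transform(n_exp):
    # y = 10 * log10(1 + N_exp): maps 0 to 0 and compresses heavy right tails.
    return 10 * math.log10(1 + n_exp)

def skewness(xs):
    # Sample skewness (third standardised moment).
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

random.seed(0)
# Synthetic host counts with a heavy right tail, standing in for N_exp.
counts = [int(random.lognormvariate(2.0, 1.5)) for _ in range(1000)]
transformed = [exposure_transform(c) for c in counts]
print(skewness(counts), skewness(transformed))
```

The transformed counts are far less skewed, which is the behaviour the QQ-plots probe; as noted above, the residual multimodality is what the transform cannot remove.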

Simulation study
In contrast to the analysis of real data, in this simulation study we can control the data generation mechanism, so we can compare both estimation and accuracy measurement in relation to the data-generating model (OrdLog). Furthermore, we can conduct different tests to evaluate the models' performance at varying hyperparameters, in particular the number of ordinal levels in the response variable and the randomness of the probabilities in the OrdLog model.
We start by specifying a preliminary simulation study to provide a general comparative analysis between the model presented in Giudici and Raffinetti (2021) and MidQR.
• We used n_tr = 320 units for training and n_test = 80 units for testing the accuracy performance of the models. We started with a response variable having k = 4 levels, in line with Tenable's priority rating that is used in the analysis of real data. However, we also tested k ∈ {3, 6, 8} to evaluate the behaviour and performance of the different models when the number of levels of the response variable changes.
• Two continuous and two factor explanatory variables were considered, each of the latter having three categories. This induced P := 2 + 2 • (3 − 1) = 6 regressors after moving to ANOVA variables.
• This scheme was iterated to obtain n iter = 100 samples of the response variable Y .
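The data-generating scheme above can be sketched as follows; the coefficients and cutpoints of the ordered logit generator are illustrative placeholders, not the values used in the study.

```python
import math
import random

random.seed(1)
K = 4                      # ordinal levels of the response
N = 400                    # 320 training + 80 test units, as in the study
BETA = [0.8, -0.5, 0.4, -0.3, 0.6, 0.2]   # illustrative coefficients
ALPHA = [-1.0, 0.0, 1.0]   # illustrative cutpoints alpha_1 < ... < alpha_{K-1}

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_unit():
    # Two continuous regressors plus two 3-level factors dummy-coded into
    # 2 * (3 - 1) = 4 columns: P = 6 regressors in total.
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    for _ in range(2):
        level = random.randrange(3)
        x += [1.0 if level == 1 else 0.0, 1.0 if level == 2 else 0.0]
    eta = sum(b * xi for b, xi in zip(BETA, x))
    # Ordered logit: P(Y <= j | x) = logistic(alpha_j - eta); draw Y by
    # inverting this CDF with a uniform variate.
    u = random.random()
    y = 1
    for a in ALPHA:
        if u <= logistic(a - eta):
            break
        y += 1
    return x, y

sample = [one_unit() for _ in range(N)]
```

Iterating this scheme n_iter = 100 times yields the simulated samples on which the coefficient estimates and their SEs are averaged.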
In this way, we obtained the coefficient estimates and the mean, over the simulation runs, of the standard error (SE) estimates for each coefficient. For MidQR, we adapted a function in Qtools to overcome computational issues in the estimation of the conditional (mid-)CDF, which involves the kernel method based on Li et al. (2013). Specifically, we acted on the estimated covariance matrix of the coefficients to make its computation compatible with cases where the quantile level lies outside the range of the sample mid-CDF.
Then, we move to different numbers of levels in order to better assess the behaviour of the different methods in different decision scenarios. We address this aspect starting with k = 3: this is a typical scale in several operational or tactical decisions, where levels are generally interpreted as "low", "medium", and "high", respectively. The outcomes of this set of simulations are presented in Table 5, and the corresponding RGA and AGR indices are shown in Table 6. Even in this case, we provide a graphical representation of these outcomes in Figure 6. Finally, we complete the simulation study by considering more than 4 levels in the response variable. Specifically, we report the results at k = 6 (Table 7) and k = 8 (Table 8). The boxplots corresponding to the RGA and AGR indices summarised in Table 9 are displayed in Figure 7.

Real dataset analysis
In parallel with the investigation of the simulated data, we report the study of the dataset whose construction has been described in Section 4. In particular, we present the same type of indicators considered for the simulations. However, here we stress that multiple datasets are constructed from the original one through its random splitting into a training set (n_tr = 664) and a test set (n_test = 50). This splitting of the dataset takes into account the imbalance of cyber-vulnerability characteristics, since a smaller percentage of observations in the training set could cause the models, in principle, to miss relevant information about rare events. This aspect also occurs in other statistical analyses of cybersecurity (Giudici and Raffinetti, 2021).
We generated 100 random extractions of test sets, whose complements return the associated training sets, to evaluate averaged parameter estimates, standard errors, and predictive performance indices; 16 quantile levels equally spaced between 0.1 and 0.9 are considered in this case.
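The splitting scheme and the quantile grid can be sketched as follows (the random seed is arbitrary):

```python
import random

N, N_TEST, N_ITER = 714, 50, 100
# 16 equally spaced quantile levels from 0.1 to 0.9.
taus = [0.1 + i * 0.8 / 15 for i in range(16)]

random.seed(42)
splits = []
for _ in range(N_ITER):
    # Random test set; its complement is the associated training set.
    test_idx = set(random.sample(range(N), N_TEST))
    train_idx = [i for i in range(N) if i not in test_idx]
    splits.append((train_idx, sorted(test_idx)))
```

Note that every third level of this grid gives the values {0.1, 0.26, 0.42, 0.58, 0.74, 0.9} displayed in the boxplots of Figure 8.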
We start with the parameter estimates, which are shown in Table 10. Here, the whole set of variables described in Table 1 is used to implement the regression models. Then we restrict these models by considering only technical (X_AC, X_AV) and contextual (exposure, exploit) variables; the corresponding outcomes are presented in Table 11.
Moving to the performance indices, both RGA and AGR for all the regression models under examination are reported in Table 12. In addition, we provide two graphical representations regarding the behaviour of the predictive performance at different quantile levels: the boxplots in Figure 8 and the plots of average RGA and AGR for all 16 quantile levels in Figure 9.
In order to investigate the robustness of the analysis according to the aforementioned settings, we conducted parallel analyses with different partitionings (n tr = 574 and n test = 140), a different number of iterations, or scaling of the numerical regressor.The results and overall performance in the different scenarios are similar to those we have presented above, revealing a satisfactory robustness of the proposed approach.

Discussion
In line with the search for flexibility, interpretability, and robustness in cyber-risk assessments, a quantile-based approach can extract relevant information beyond means to examine rare events, which is a primary need for the continuity of a network or critical infrastructure. The AGR index lets us evaluate predictive performance without relying on a quantitative structure for the ordinal responses. Here, we discuss the outcomes of the analysis of synthetic and real data.

Table 10: Parameter estimates from data regarding real cyber-vulnerabilities. All the variables have been used as regressors.

AGR as an appropriate measure of predictive accuracy. From simulations, we see that the data-generating models are generally associated with a higher AGR value, while their RGA is often worse than that of other models (see Figures 5, 6, and 7). It is plausible that the specific model underlying the data generation process provides better predictive performance compared to other models. This criterion identifies AGR as a more appropriate performance index for our purposes since it better distinguishes the data-generating model in terms of predictive capacity, as is manifest from the above-mentioned figures.
In addition, AGR enjoys the invariance property under sub-sampling, as discussed in Section 3, which is desirable since the measure is not affected by other (possibly unknown) vulnerabilities.In this way, we can better prioritise the vulnerabilities under consideration without incurring order reversal due to new vulnerabilities not previously detected.From a different perspective, such new information may be needed to update individual priority ratings and adapt to the dynamic behaviour of cyber-space, as is discussed in the following paragraph.
MidQR and probabilistic risk modelling. We already pointed out the distinction between cyber-incidents and cyber-vulnerabilities. Recalling that the analysis in Giudici and Raffinetti (2021) focuses on the former, the comparison of the regression models that we have carried out is purely methodological, and the tests we conducted on cyber-vulnerability data set a common ground to compare the characteristics of the methods in terms of the RGA and AGR indices. By the same token, the rank transform has been used to enhance the comparability of the responses produced by the two models.
In this regard, while rankings are the primary outcome of LinReg, mid-quantile models produce cumulative probability estimates for ordinal responses. A potential extension of this research is the comparison of different conditional (mid-)probabilities extracted from mid-quantile methods obtained with different sets of regressors; the information divergence between such distributions, e.g., through entropy-based methods, can support the quantification of the information content provided by the vulnerability's characteristics. In this way, our proposal can support the search for new models for cyber-risk analysis based on probability and impact (Allodi and Massacci, 2017).
While the present work uses Tenable's VPR for the analysis, each decision-maker can customise the model (as well as the quantile level), adapt it in time to get new estimates and quantile effects, or compare different risk factors derived from different criteria in terms of predictive power. This opportunity stimulates further studies to take advantage of probability estimates from mid-quantile methods in specific scenarios or case studies. Indeed, networks of connected organisations could carry out the analysis using their own threat assessment as the response variable; therefore, such probability estimates could help conduct risk analysis in conjunction with Bayes update rules and graphical models, e.g., Bayesian networks (Shin et al., 2015), providing an alternative to the assignment of standard values for probabilities starting from qualitative experts' opinions. We also stress that the proposed approach can be extended to quantitative response variables too; indeed, we can choose a different set of regressors related to cyber-vulnerabilities' characteristics and severity, considering the frequency of related cyber-incidents as a response variable, if available. In this way, the fitted mid-cumulative distribution functions could represent a robust alternative to estimating or predicting the number of cyber-incidents or cyber-intrusions (Leslie et al., 2018).
Real and synthetic data. Referring to Table 12, two different models are considered: the full one (involving all the relevant variables in the dataset derived from Table 1) and a restricted one, where the "CIA" components of attack vectors are excluded. This choice is driven by the aim of better understanding the role of the CVSS impact dimensions in vulnerability prioritisation and cyber-threat analysis (Allodi and Massacci, 2014, 2017). Table 12 suggests that different regression models provide different information regarding the role of the CIA attributes, where OrdLog generates larger deviations (outliers) with high accuracy that seriously affect the average accuracy performance; clearly, the quantile-based indices depicted in Figure 8 are more robust with regard to such anomalies. Furthermore, the two models show different behaviours at varying quantile levels, as exhibited in Figure 9.
By comparing the full and partial models, we observe that AGR leads to higher discrimination than RGA does. Formally, let us consider the ratios ϱ of the average values of RGA and AGR evaluated for the technical and full models, respectively. For the LinReg model, AGR leads to higher discrimination than RGA does (ϱ_RGA = 1.043 and ϱ_AGR = 0.862). Focusing on MidQR, we also see that AGR is more sensitive than RGA with respect to the choice of the quantile level in terms of model discrimination. Indeed, ϱ_RGA ∈ [0.845; 1.076], while ϱ_AGR ∈ [0.490; 1.002], and ϱ_AGR < 0.8 for quantile levels τ_1 to τ_9. In fact, ϱ_AGR tends to increase with the quantile level, which suggests a non-trivial contribution of the CIA attributes in combination with information about exposure or exploits, which also depends on the choice of the quantile level.
While the LinReg and MidQR models considered in this work are comparable in terms of RGA performance on real data, using AGR, we can see that OrdLog performs poorly since the predicted values are restricted to the set {1, . . ., k}. When the dataset has low variability, the estimated values collapse to a typical value, which contains no information and drastically reduces predictive performance. This also suggests a severe deviation from the OrdLog model assumptions in the present cyber-vulnerability dataset.
Another indirect test of the deviation of real data from the OrdLog model comes from the relative magnitude of RGA and AGR. In Tables 2-8, which refer to data simulated starting from the ordered logit model, AGR is comparable with RGA (i.e., of the same order of magnitude), and at low values of k, especially at k = 3, AGR is larger than RGA when we focus on MidQR and the data-generating model. On the other hand, real data lead to different behaviour: calculating the ratios AGR/RGA within each iteration, their median lies in [0.218, 0.402] for the 16 quantile levels in the full model and in [0.174, 0.213] in the partial model; looking at the ratios AGR/RGA of the mean values shown in Table 12, they range in [0.201, 0.357] for the full model and in [0.187, 0.221] for the partial model. These ratios are useful as an additional check of the deviation from the OrdLog model used in simulations; nevertheless, the AGR and RGA indices for the same model should not be compared directly, as they measure different performance aspects of a given model.
Dependence of the MidQR performance on k. MidQR performs better when the number of levels k of the response variable is small (less than 6), as can be seen by comparing Figures 5-6 with Figure 7. In the latter, AGR highlights a divergence between the data-generating model (OrdLog) and the alternative models (LinReg or MidQR); on the other hand, RGA returns a performance comparable to that of LinReg and MidQR.
SE of the estimates. As remarked in the previous section, an arbitrary choice of the quantile level may lead to overestimating the parameter SE through the kernel approach adopted in Geraci and Farcomeni (2022) and based on Li and Racine (2008); this is confirmed by the outputs of the simulations. When this overestimation happens, the remaining indices (i.e., the regular SE and the MCSE) provide a more informative picture of the sampling distribution.
Implications for cyber-threat intelligence and secure information disclosure.As a practical consequence of the observations in the last paragraphs, we draw attention to the information the individual decision-maker has, uses, and communicates about cyber-risk.
Agencies such as NIST share their evaluation through dedicated information channels; however, this information can also be acquired by potential attackers, who can use it to prioritise their own objectives. Indeed, resources are also needed by attackers (e.g., costs for exploit acquisition, time and effort for detection of vulnerable hosts, integration of multiple components to avoid countermeasures), and information on risk factors from different organisations can be useful to suggest relevant criticalities.
Our proposal addresses this issue in two ways: first, as already recalled, MidQR enhances robustness against violations of assumptions in parametric methods and allows for the analysis of different types of explanatory or response variables; this makes MidQR suited to compare models with different sets of explanatory variables and then choose an appropriate trade-off between predictive ranking accuracy and limited information to be shared. The second contribution involves the invariance property of the AGR index, which avoids inconsistency in rankings obtained from different sets of cyber-vulnerabilities in the sense of Example 1; this reduces the need to share information on relevant cyber-vulnerabilities to achieve a given value of accuracy in rank estimation.
These observations are mainly related to cybersecurity data and their usefulness for distinct decision-making stages, which led us to select the databases described in Section 4. The information granularity in data from cyber-incidents often does not suffice to extract useful insights into current threats. This leads to data aggregation and censoring that may not allow cybersecurity operational experts to prioritise current vulnerabilities, as is the case in the classification of attack techniques reported in Giudici and Raffinetti (2021), where multiple types of attacks are grouped together (e.g., SQL injection is a particular attack model upon which malware can be based, and malware can exploit one or more 0-days). Similarly, the use of ordered logit or other GLMs is a well-established approach to carrying out inference about probabilities, even in the cyber-risk domain (Mukhopadhyay et al., 2019), but the present analysis has shown that it is not suited to the collected cyber-vulnerability data. However, this should be interpreted as complementarity between the analyses of cyber-incidents and the present one: they serve different phases (strategic, tactical, or operational) of a process with a common objective, and each phase should identify appropriate data for its scope.

Conclusion and Future Work
This work investigated statistical modelling for threat intelligence, with particular attention to the information resources regarding cyber-vulnerabilities. Since fixing vulnerabilities is resource-expensive, decision-makers have to allocate their resources based on their current state of knowledge and their risk perception. The statistical model and the index proposed for cyber-vulnerability assessment complement other approaches developed in the cyber-risk literature. These models are not mutually exclusive and could be considered in parallel to highlight distinct aspects of relevance to decision-makers.
The actual realisation of cyber-attacks relies on several information sources that can enhance or inhibit them. It is plausible that indirect access to information plays a more important role than expected: along with limited data disclosure and underreporting, even prioritisation data communicated by organisations to prevent cyber-incidents can guide cyber-attackers, as discussed in Section 6. The present work opens the way to further applications supporting secure information disclosure on cyber-vulnerabilities, since the advantages of the framework discussed in the previous sections can highlight the effects of both information sources (in terms of available regressors) and cyber-risk perception or severity assessments (e.g., a suitable data-generating model). A more accurate evaluation of such effects is a necessary premise to avoid the indirect and unintended communication of information.
A deeper investigation is needed for the emergence of multiple prioritisations due to different decision criteria and uncertainty sources, which may occur when different experts or organisations conduct separate analyses based on their own choices for response and explanatory variables.Various approaches could be explored to formalise compatibility conditions for ordinal structures under uncertainty (Angelelli et al., 2024) in continuity with the arguments that led to the AGR index in Section 3.2.A dedicated study to identify information-theoretic, fuzzy, or relational criteria to encompass and quantify specific uncertainty sources in cyber-space could support individuals or groups in contextualising risk assessment about shared digital resources.
Despite the generality of the methodology, a limitation of this work is that it does not explicitly consider context-specific data that could affect cyber-vulnerability prioritisation.Risk factors may vary due to internal priorities in the organisation and the evolution of the overall digital system (new products, legislation).Patterns extracted within Tenable's VPR processing contain information about risks posed by cyber-threats, but contextual factors should also be explored when adapting this analysis to specific case studies or operational scenarios, including governance requirements, tools for the development of secure digital products (Baldassarre et al., 2020a), privacy (Baldassarre et al., 2020b), and behavioural factors that can influence the perception of the exploitability of a cyber-vulnerability.Future work will explore complementary approaches for estimating behavioural latent traits, including Bayesian methods, and connecting them to relevant parameters in risk assessment (e.g., the choice of the quantile level).These factors require specific measurement models and evaluation methods, and, in line with the adoption of graphical methods in cyber-risk assessment, structural equation models (Woods and Böhme, 2021) could be a valid option to extend our research directions into the study of behavioural risk perception.

Figure 1:
Figure 1: Graphical description of the experiments to validate the efficiency of mid-quantile regression for priority estimates and AGR as an accuracy index of predicted risk levels.

Figure 2:
Figure 2: Distribution of levels of variables from the cyber-vulnerability dataset. (a) Distribution of impact dimension levels. (b) Distribution of features (AV, AC) and VPR.

Figure 3:
Figure 3: QQ-plots of the theoretical (normal) quantiles compared to the empirical quantiles of residuals of y = 10 · log10(1 + Nexp), derived from the exposure Nexp of cyber-vulnerabilities.

Figure 8:
Figure 8: Boxplots of RGA and AGR for real data. Boxplots refer, from left to right of the x-axis, to OrdLog, LinReg, and MidQR with τ taking values in {0.1, 0.26, 0.42, 0.58, 0.74, 0.9}. To improve the readability of Figure 8c, the range has been restricted, excluding 9 extreme outliers for the OrdLog model and one for MidQR(τ1).

Figure 9:
Figure 9: Behaviour of average RGA and AGR for real data and the 16 quantile levels τ under consideration. Circles and triangles denote the index estimates for the full and partial models, respectively. The y-intercepts of the dotted and dot-dashed lines represent the value of the index from the ordered logit and the linear regression on rank-transformed variables, respectively.

Table 2:
Coefficient estimates from simulations with k = 4 levels for the response variable. The parameters in the generative model are tuned to obtain a uniform probability distribution over the k possible response levels.

Table 3:
Coefficient estimates from simulations with k = 4 levels for the response variable. Generic parameters in the generative model lead to a non-uniform probability distribution over the k possible response levels.

Table 5:
Coefficient estimates from simulations with k = 3 levels for the response variable.

Table 7:
Coefficient estimates from simulations with k = 6 levels for the response variable.

Table 8:
Coefficient estimates from simulations with k = 8 levels for the response variable.

Table 9:
RGA and AGR from simulations with a higher number of levels for the response variable: k = 6 (columns 2-5) and k = 8 (columns 6-9). The last row corresponds to the reference value, namely the index RGA or AGR evaluated at (rtrue, rtrue).

Table 11:
Parameter estimates from data on real cyber-vulnerabilities. Only technical and contextual variables have been used as regressors.

Table 12:
RGA and AGR indices from the real data analysis. Columns 2-5 refer to models with the full set of regressors; columns 6-9 follow from the restriction to technical (AV, AC) and contextual (exposure, exploit) variables as regressors.