Web Applications Vulnerability Management using a Quantitative Stochastic Risk Modeling Method

The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while providing users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The method is based on the research done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork, and it was tested both in vulnerability assessment activities on real production systems and in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.


Introduction
Web-based business applications are becoming the preferred way of conducting day-to-day business activities as part of a digital business model. As such, there is a growing concern related to the data breaches that affected major digital businesses in the past years. An updated list of the world's biggest data breaches is available at World's Biggest Data Breaches [1]. A cybersecurity report published by Ernst & Young in 2017 shows that poor risk assessment and management practices are a major cause of data breaches and cyberattacks: "With the quality of reporting being so low, it is no surprise that 52% of responders think their boards are not fully knowledgeable about the risks the organization is taking and the measures that are in place. In other words, our survey suggests that about half of all boards [...]" [2]. Most organizations use superficial qualitative risk management methodologies when assessing IT-related risks, with formulas that require a lot of guesswork and allow a high level of uncertainty with low assurance. The aim of this research is to propose a quantitative risk modeling method that reduces the guesswork and uncertainty in the vulnerability and risk assessment activities of web-based applications, while providing users the flexibility to assess risk according to their risk appetite and tolerance with a high degree of assurance. The method is based on the research done by the OWASP Foundation on this subject, but their risk rating methodology needed debugging and updates in key areas that are presented in this paper. The modified risk modeling method uses Monte Carlo simulations to model risk characteristics that cannot be determined without guesswork, and it was tested both in vulnerability assessment activities on real production systems and in theory, by assigning discrete uniform assumptions to all risk characteristics (risk attributes) and evaluating the results after 1.5 million rounds of Monte Carlo simulations.

A1 Injection
Injection flaws, such as SQL, OS, XXE, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.

A2 Broken Authentication and Session Management
Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities (temporarily or permanently).

A3 Cross-Site Scripting (XSS)
XSS flaws occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create JavaScript. XSS allows attackers to execute scripts in the victim's browser, which can hijack user sessions, deface web sites, or redirect the user to malicious sites.

A4 Broken Access Control
Restrictions on what authenticated users are allowed to do are not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data, such as access other users' accounts, view sensitive files, modify other users' data, change access rights, etc.

A5 Security Misconfiguration
Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, platform, etc. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.

A6 Sensitive Data Exposure
Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII data. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection, such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.

A7 Insufficient Attack Protection
The majority of applications and APIs lack the basic ability to detect, prevent, and respond to both manual and automated attacks. Attack protection goes far beyond basic input validation and involves automatically detecting, logging, responding, and even blocking exploit attempts. Application owners also need to be able to deploy patches quickly to protect against attacks.

A8 Cross-Site Request Forgery (CSRF)
A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web application. Such an attack allows the attacker to force a victim's browser to generate requests the vulnerable application thinks are legitimate requests from the victim.

A9 Using Components with Known Vulnerabilities
Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defenses and enable various attacks and impacts.

A10 Underprotected APIs
Modern applications often involve rich client applications and APIs, such as JavaScript in the browser and mobile applications that connect to an API of some kind (SOAP/XML, REST/JSON, RPC, GWT, etc.). These APIs are often unprotected and contain numerous vulnerabilities.
The OWASP Risk Rating Methodology assumes that a discovered vulnerability will impact a web application in 2 main areas: technical and business. To determine the probability that a vulnerability will be exploited, it measures the likelihood of an attack by determining the vulnerability profile (vulnerability assessment) and the threat agent profile (what type of agent will use that vulnerability in an attack). By design, the OWASP Risk Rating Methodology asks the cybersecurity specialist to determine, alone or with the help of peers, how a discovered vulnerability should score on each of the 4 risk subcomponents. Each of the 4 risk subcomponents has 4 attribute categories, presented in Table 7 through Table 22, and each attribute category has a 1-9 scored scale that measures the severity of that attribute. For each attribute category, the global OWASP community collectively determined how many positions are to be measured. Each category attribute receives a score from 1 (0.125) to 9 (1.125), and then the risk subcomponents and risk components are quantitatively determined using the formulas from Table 4, while the qualitative scale is presented in Table 5 and the qualitative risk matrix is presented in Table 6. For each attribute category, the OWASP community determined the measure descriptions, and although arguments can be made for and against how each measure description was chosen, in the end the OWASP community reached a consensus. The risk modeling methodology used in this research is based on the OWASP Risk Rating Methodology (presented above), with the following modifications to eliminate inherited errors and bias:
- the score scale presented in Table 2 uses only the 1-9 scored scale, while the original OWASP Risk Rating Methodology uses a 0-9 scored scale, which in my opinion is not needed, since the discovery of a vulnerability can't have a 0 score from any risk methodology perspective;
- the classification rationale for the 4 risk subcomponents is changed from the OWASP Risk Rating Methodology by assuming that the Threat Agent Profile and the Business Impact subcomponents are difficult to determine for a discovered vulnerability without bias, Table 3;
- all the attributes that are difficult to determine through rational human thinking and experience are determined using Monte Carlo simulations in the Oracle Crystal Ball software [5].
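The scoring arithmetic described above can be sketched in Python. The 1-9 score to 0.125-1.125 value mapping is stated in the text; the aggregation rules below (a component score as the mean of its attribute scores, and risk = impact x likelihood / 8) are assumptions made for illustration, since Table 4 itself is not reproduced here, though the combination rule is consistent with the Case 1 means reported later (5.21 x 6.91 / 8 ≈ 4.50).

```python
def attribute_value(score: int) -> float:
    """Map a 1-9 attribute score to its 0.125-1.125 value (score * 0.125)."""
    if not 1 <= score <= 9:
        raise ValueError("attribute scores range from 1 to 9")
    return score * 0.125

def component_score(attribute_scores) -> float:
    """Assumed aggregation: average the 1-9 scores of a component's
    attribute categories (the paper's exact Table 4 formulas may differ)."""
    return sum(attribute_scores) / len(attribute_scores)

def risk_score(impact: float, likelihood: float) -> float:
    """Assumed combination rule, consistent with the reported Case 1 means."""
    return impact * likelihood / 8

print(round(risk_score(5.21, 6.91), 2))  # -> 4.5
```

The numeric scores would then be mapped to the qualitative LOW/MEDIUM/HIGH classes via the thresholds of Table 5.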

Methodology validation
The methodology and risk modeling formulas were verified using the Oracle Crystal Ball software by assigning to each attribute category a discrete uniform assumption between the declared measures, with an equal probability for each measure. For example, using the measures in Table 22, the discrete uniform assumption is that there is a 0.25 chance (there are 4 measures, and 0.25 x 4 = 1 on the 0-1 probability scale) that one of the values [0.375, 0.625, 0.875, 1.125] will occur during a Monte Carlo simulation round. The validation exercise tested the validity of the formulas presented in Table 4 with 1,500,000 rounds of Monte Carlo simulations; the exercise results are presented in Table 23, with the distribution graph presented in Fig. 1. The methodology was then applied in penetration testing activities for each of the 3 entities in the past 6 months. The automated tool Acunetix Web Vulnerability Scanner [6] was used to perform the initial vulnerability discovery and Kali Linux [7] was used to manually validate and exploit the findings.
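The validation procedure can be sketched without Crystal Ball, for example in Python with NumPy. The discrete uniform assumption over the Table 22 measures is taken from the text; the use of a single measure list for all 16 categories, the per-component means, and the combination rule risk = impact x likelihood / 8 are simplifying assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)  # fixed seed so the run is reproducible
ROUNDS = 150_000  # the paper used 1,500,000; reduced here for a quick run

# Discrete uniform assumption, as in the Table 22 example: measures
# [0.375, 0.625, 0.875, 1.125] with probability 0.25 each. Real categories
# each have their own measure lists; one list stands in for all 16 here.
scores = np.array([0.375, 0.625, 0.875, 1.125]) * 8  # 1-9 scale: 3, 5, 7, 9

# Draw all 16 attribute scores for every simulation round at once.
draws = rng.choice(scores, size=(ROUNDS, 16))

likelihood = draws[:, :8].mean(axis=1)  # threat agent + vulnerability profile
impact = draws[:, 8:].mean(axis=1)      # technical + business impact
risk = impact * likelihood / 8          # assumed combination rule

# Theoretical mean under these assumptions: 6 * 6 / 8 = 4.5
print(round(risk.mean(), 2))
```

With enough rounds, the simulated mean converges on the theoretical expectation, which is the kind of stability check the validation exercise performs.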

Case 1 - Electronic Banking System - Time-Based SQL Injection Vulnerability
In the web-based portal for an electronic banking system, a time-based SQL injection vulnerability was discovered and validated using the tool SQLMAP [9]. SQL injection is a vulnerability that allows an attacker to alter backend SQL statements by manipulating the user input. An SQL injection occurs when a web application accepts user input that is directly placed into a SQL statement and doesn't properly filter out dangerous characters [10].
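The mechanics of the flaw just described can be illustrated with a minimal, self-contained example (generic SQL injection, not the time-based variant found in this case), using Python's standard sqlite3 module:

```python
import sqlite3

# Tiny in-memory database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated into the statement; the payload
# rewrites the WHERE clause so it matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: a parameterized query binds the input as data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # -> [('s3cret',)]
print(safe)        # -> []
```

The same concatenation flaw is what tools like SQLMAP detect and exploit automatically.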
The vulnerability was successfully exploited and the client database was extracted from the electronic banking system portal. In Table 25, all the known assumptions for the attribute categories were determined, and then, using Oracle Crystal Ball, we calculated the risk profile for the time-based SQL injection vulnerability. The mean impact score is 5.21 (MEDIUM), the mean likelihood score is 6.91 (HIGH) and the mean risk score is 4.50 (MEDIUM), as presented in Table 26. Although the quantitative analysis calculated a mean risk score of 4.50, using the percentile assurance values from Table 27 the organization can eliminate or keep a level of uncertainty depending on its risk appetite. In this case, the organization had a very low risk appetite, given the importance of the electronic banking system to its business model and business reputation, and it wanted 100% assurance, meaning that the vulnerability risk score was officially 6.25 (HIGH), with an impact score of 6.25 (HIGH) and a likelihood score of 8.00 (HIGH). The organization immediately allocated the resources, time and budget to repair the vulnerability.

Case 2 - Supply Chain Ordering System - Joomla! Core Remote Code Execution Vulnerability
In the web-based application for a supply chain ordering system, a Joomla! Core Remote Code Execution (1.5.0 - 3.4.5) vulnerability was discovered and validated using various Metasploit modules. Joomla! is prone to a remote code execution vulnerability because it fails to sufficiently sanitize user-supplied input. Successful exploitation may allow attackers to execute arbitrary commands with the privileges of the user running the application, to compromise the application or the underlying database, to access or modify data, or to compromise a vulnerable system [11]. In Table 28, all the known assumptions for the attribute categories were determined, and then, using Oracle Crystal Ball, we calculated the risk profile for the Joomla! core remote code execution vulnerability. The mean impact score is 6.31 (HIGH), the mean likelihood score is 7.03 (HIGH) and the mean risk score is 5.57 (MEDIUM), as presented in Table 29. Although the quantitative analysis calculated a mean risk score of 5.57, using the percentile assurance values from Table 30 the organization can eliminate or keep a level of uncertainty depending on its risk appetite.
In this case, the organization had a low risk appetite, given the importance of the supply chain ordering system to its business model and business reputation, and it wanted 80% assurance, meaning that the vulnerability risk score was officially 6.10 (HIGH), with an impact score of 6.75 (HIGH) and a likelihood score of 7.50 (HIGH). The results were presented to the organization's management, who allocated the resources, time and budget to repair the vulnerability.
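The percentile-based assurance step used in both cases can be sketched as follows: given a simulated risk-score distribution, the score reported at X% assurance is the X-th percentile of the simulation results, with 100% assurance corresponding to the worst simulated outcome. The distribution below is synthetic, for illustration only; it is not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a Monte Carlo risk-score distribution
# (NOT the paper's data), clipped to the 1-9 score scale.
risk = rng.normal(loc=5.5, scale=0.6, size=100_000).clip(1, 9)

for assurance in (50, 80, 100):
    # The score at X% assurance is the X-th percentile of the simulation.
    score = np.percentile(risk, assurance)
    print(f"{assurance}% assurance -> risk score {score:.2f}")
```

Reading the score off a higher percentile is why the officially reported scores in both cases sit above the simulated means: a lower risk appetite buys down uncertainty at the cost of a more pessimistic score.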

Conclusions
The experimental results show that the proposed web application risk modeling method can deliver reliable quantitative risk assessment results that enable users to make better decisions on how to manage their web application risks. The available literature on this particular subject is very limited, with most of the research revolving around comparative studies between risk management frameworks, like the Core Unified Risk Framework [13], or around improving the IT risk assessment model proposed by NIST [14]. Since the most widespread risk assessment model relies on the annual loss expectancy (ALE) and single loss expectancy (SLE) formulas [15], which in practice involve accepting a high level of uncertainty, most organizations rely solely on qualitative risk assessment models and classify risk in ranges from low to high or critical. The quantitative risk modeling method described in this research considerably reduces the guesswork and uncertainty in risk assessment activities, while providing users sufficient flexibility to model risks according to their risk posture (appetite and tolerance) by using high numbers of Monte Carlo simulation rounds on 1 and up to 16 risk attributes.
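For contrast, the ALE/SLE model referenced above follows the standard textbook formulas (general definitions, not taken from this paper): single loss expectancy SLE = asset value x exposure factor, and annual loss expectancy ALE = SLE x ARO (annualized rate of occurrence). All figures below are hypothetical.

```python
# Hypothetical figures for illustration only.
asset_value = 200_000.0  # value of the affected asset
exposure_factor = 0.25   # fraction of the asset value lost per incident
aro = 0.5                # annualized rate of occurrence: one incident / 2 years

sle = asset_value * exposure_factor  # single loss expectancy
ale = sle * aro                      # annual loss expectancy
print(sle, ale)  # -> 50000.0 25000.0
```

Each input is a single-point guess, which is exactly the uncertainty the Monte Carlo approach replaces with a full distribution of outcomes.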

[Recovered table column headers: Motive Score, Intrusion Detection Score, Ease of Discovery Score, Loss of Accountability Score, Opportunity Score, Financial Damage Score, Ease of Exploitation Score, Awareness Score, Loss of Availability Score, Loss of Integrity Score, Reputation Damage Score, Skill Score, Relationship Score, Privacy Violation Score, Loss of Confidentiality Score, Non-Compliance Score]
DOI: 10.12948/issn14531305/21.3.2017.02

Table 3. Risk subcomponents ease of classification without stochastic modeling

Table 4. Risk modeling formulas

Table 5. Risk components qualitative classification

Table 6. Risk score qualitative classification matrix

Table 7. Attribute Category - Threat Agent Profile (Skill Level)

Table 11. Attribute Category - Vulnerability Profile (Ease of discovery)

Table 12. Attribute Category - Vulnerability Profile (Ease of exploitation)

Table 15. Attribute Category - Technical Impact (Loss of confidentiality)

Table 16. Attribute Category - Technical Impact (Loss of integrity)

Table 17. Attribute Category - Technical Impact (Loss of availability)

Table 18. Attribute Category - Technical Impact (Loss of accountability)

Table 19. Attribute Category - Business Impact (Financial Impact)

Table 22. Attribute Category - Business Impact (Privacy violation)

Table 23. Methodology validation results using 1,500,000 rounds of Monte Carlo simulations (formulas from Table 4; results distribution in Fig. 1)

Table 24. The risk score template that was used for the risk modeling experiments