Abstract
Despite the large investments in information security technologies and research over the past decades, the information security industry remains immature when it comes to vulnerability management. In particular, the prioritization of remediation efforts within vulnerability management programs predominantly relies on a mixture of subjective expert opinion and severity scores. Compounding the need for prioritization is the growth in the number of vulnerabilities the average enterprise must remediate. This article describes the first open, data-driven framework for assessing vulnerability threat, that is, the probability that a vulnerability will be exploited in the wild within the first 12 months after public disclosure. The scoring system has been designed to be simple enough for practitioners to implement without specialized tools or software, yet it provides accurate estimates of exploitation (ROC AUC = 0.838). Moreover, the implementation is flexible enough that it can be updated as more, and better, data become available. We call this system the Exploit Prediction Scoring System (EPSS).
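The abstract's claim that the system can be implemented "without specialized tools or software" follows from its form: a probability produced by a logistic model over observable vulnerability features, computable by hand or in a few lines of code. The sketch below illustrates that shape only; the feature names and weights are hypothetical placeholders, not the published EPSS coefficients.

```python
import math

# Illustrative EPSS-style scoring: a logistic model over binary
# vulnerability features. These names and weights are hypothetical
# placeholders for illustration, NOT the published EPSS model.
WEIGHTS = {
    "intercept": -4.0,
    "exploit_code_published": 2.5,
    "remote_code_execution": 1.2,
    "vendor_widely_deployed": 0.8,
}

def epss_style_score(features):
    """Return an estimated P(exploited within 12 months of disclosure)
    for a dict mapping feature name -> 0/1 indicator."""
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# A vulnerability with public exploit code and RCE scores far higher
# than one exhibiting none of the risk signals.
high = epss_style_score({"exploit_code_published": 1,
                         "remote_code_execution": 1,
                         "vendor_widely_deployed": 0})
low = epss_style_score({"exploit_code_published": 0,
                        "remote_code_execution": 0,
                        "vendor_widely_deployed": 0})
```

Because the score is a sum of per-feature weights passed through the logistic function, a practitioner can evaluate it with a lookup table and a calculator, which is exactly the simplicity property the abstract highlights.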
Index Terms
- Exploit Prediction Scoring System (EPSS)