DOI: 10.1145/2857705.2857750
CODASPY conference proceedings, short paper

To Fear or Not to Fear That is the Question: Code Characteristics of a Vulnerable Function with an Existing Exploit

Published: 09 March 2016

ABSTRACT

Not all vulnerabilities are equal. Recent studies have shown that only a small fraction of reported vulnerabilities are actually exploited. Since finding and addressing potential vulnerabilities in a program can take considerable time and effort, recent work has sought to identify code that is more likely to be vulnerable. This paper aims to identify the attributes of code containing a vulnerability that make the code more likely to be exploited. We examine 183 vulnerabilities from the National Vulnerability Database for the Linux kernel and the Apache HTTP Server, including eighty-two vulnerabilities that have an exploit according to the Exploit Database. We characterize the vulnerable functions with and without an exploit using eight software metrics. The results show that the difference between a vulnerability with an exploit and one without can potentially be characterized using the chosen metrics. However, predicting the exploitation of vulnerabilities is more complex than predicting merely the presence of vulnerabilities, and further research is needed on metrics that incorporate security domain knowledge to enhance the predictability of vulnerability exploits.
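The comparison described above can be sketched as a two-sample location test on a metric's distribution across the two groups of vulnerable functions. The sketch below is purely illustrative: the Mann-Whitney U statistic is a standard choice for the skewed distributions typical of software metrics, but the function and the metric values are assumptions for demonstration, not the paper's actual analysis or data.

```python
def mann_whitney_u(sample_a, sample_b):
    """Count pairs (x, y) with x from sample_a and y from sample_b
    where x > y; ties contribute 0.5. A larger U means sample_a
    tends to take larger values than sample_b."""
    u = 0.0
    for x in sample_a:
        for y in sample_b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical cyclomatic-complexity values of vulnerable functions
exploited = [12, 15, 9, 20]    # vulnerabilities with a known exploit
unexploited = [4, 7, 5, 11]    # vulnerabilities with no known exploit

u = mann_whitney_u(exploited, unexploited)
# Common-language effect size: probability that a randomly chosen
# exploited function scores higher than a randomly chosen unexploited one.
effect = u / (len(exploited) * len(unexploited))
print(u, effect)  # 15.0 0.9375
```

An effect size near 0.5 would indicate no separation between the groups on that metric; values near 1.0 (as in this toy data) would suggest the metric distinguishes exploited from unexploited vulnerabilities.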


Published in:
CODASPY '16: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy
March 2016, 340 pages
ISBN: 9781450339353
DOI: 10.1145/2857705
Copyright © 2016 ACM
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States




Acceptance Rates
CODASPY '16 paper acceptance rate: 22 of 115 submissions, 19%. Overall acceptance rate: 149 of 789 submissions, 19%.
