
Directed Dynamic Symbolic Execution for Static Analysis Warnings Confirmation

Published in: Programming and Computer Software

Abstract

There is currently no doubt among experts in program certification and quality assurance that automated program analysis methods should be used to find bugs that lead to security vulnerabilities. The national standard for secure software development requires source code static analysis as one of the software quality assurance measures at the development stage, and dynamic analysis and fuzz testing at the qualification testing stage. Fundamental limitations of automated program analysis and testing make it impossible to perform an analysis that is simultaneously exhaustive and precise. Therefore, current research aims to reduce the effect of these limitations on the quality and productivity of automated error detection. This paper discusses an approach that combines source code static analysis with dynamic symbolic execution in order to increase the efficiency of program error detection.
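As a rough illustration of the directed part of such an approach (a minimal sketch, not the paper's implementation), the fragment below prioritizes control-flow paths by their distance to a static-analyzer warning site: states closest to the warning are explored first, and a path that reaches the site serves as evidence that the warning is reachable. The control-flow graph, node names, and search strategy here are all hypothetical.

```python
from collections import deque
import heapq

# Toy control-flow graph: node -> successors (hypothetical program).
CFG = {
    "entry": ["a", "b"],
    "a": ["c"],
    "b": ["d", "e"],
    "c": ["exit"],
    "d": ["warn"],        # "warn" models the static-analyzer warning site
    "e": ["exit"],
    "warn": ["exit"],
    "exit": [],
}

def distances_to(target, cfg):
    """Shortest edge distance from every node to `target` (reverse BFS)."""
    rev = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            rev[s].append(n)
    dist = {target: 0}
    queue = deque([target])
    while queue:
        n = queue.popleft()
        for p in rev[n]:
            if p not in dist:
                dist[p] = dist[n] + 1
                queue.append(p)
    return dist

def directed_search(start, target, cfg):
    """Always expand the frontier path whose tip is closest to the warning."""
    dist = distances_to(target, cfg)
    frontier = [(dist.get(start, float("inf")), [start])]
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == target:
            return path          # concrete path: the warning site is reachable
        for s in cfg[node]:
            heapq.heappush(frontier, (dist.get(s, float("inf")), path + [s]))
    return None                  # unreachable: the warning is a likely false positive

path = directed_search("entry", "warn", CFG)
print(path)  # ['entry', 'b', 'd', 'warn']
```

In a real dynamic symbolic execution engine the frontier holds symbolic states rather than bare paths, and reaching the target additionally requires the constraint solver to produce a concrete input; the distance-first ordering shown here is only the scheduling heuristic.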

Fig. 1.



Author information

Correspondence to A. Yu. Gerasimov.

Additional information

Translated by O. Pismenov


About this article


Cite this article

Gerasimov, A.Y. Directed Dynamic Symbolic Execution for Static Analysis Warnings Confirmation. Program Comput Soft 44, 316–323 (2018). https://doi.org/10.1134/S036176881805002X

