DOI: 10.1145/3555776.3577713

An Efficient Black-Box Support of Advanced Coverage Criteria for Klee

Published: 7 June 2023

ABSTRACT

Dynamic symbolic execution (DSE) is a powerful test generation approach based on an exploration of the path space of the program under test. Well adapted to path coverage, this approach is, however, less efficient for condition coverage, decision coverage, more advanced coverage criteria (such as multiple condition coverage, weak mutation, and boundary testing), and user-provided test objectives. While theoretical solutions for adapting DSE to a large set of criteria have been proposed, they have never been integrated into publicly available testing tools. This paper presents a first integration of an optimized test generation strategy for advanced coverage criteria into a popular open-source DSE-based testing tool, namely Klee. The integration is performed in a fully black-box manner and can therefore be easily transposed to other similar tools. The resulting version of the tool, named Klee4labels, is publicly available. We present the design of the proposed technique and evaluate it on several benchmarks. Our results confirm the benefits of the proposed tool for advanced coverage criteria.
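To make the idea concrete, here is a rough illustration (a hypothetical sketch, not code from the paper): in the labels formalism that underlies this line of work, each test objective of an advanced criterion is encoded as a predicate attached to a program point, and a black-box encoding turns each label into an ordinary conditional that a DSE engine such as Klee can cover like any other branch. The pc_label helper below is an assumed marker modeled on the LTest convention; the actual Klee4labels instrumentation may differ.

    #include <stdio.h>

    /* Assumed label marker in the spirit of the labels/LTest approach:
     * a label (loc, p) is covered when execution reaches this call
     * with p true. Encoding it as a real branch lets an unmodified
     * DSE engine target labels through its usual path exploration. */
    static void pc_label(int cond, int id) {
        if (cond) {
            printf("label %d covered\n", id);
        }
    }

    /* Program under test, instrumented for multiple condition coverage
     * of the decision (a && b): one label per truth assignment. */
    int f(int a, int b) {
        pc_label(a && b, 1);
        pc_label(a && !b, 2);
        pc_label(!a && b, 3);
        pc_label(!a && !b, 4);
        return (a && b) ? 1 : 0;
    }

    int main(void) {
        return f(1, 1); /* a test covering label 1 */
    }

Note that such a naive branch encoding can double the explored path space at every label; the optimized strategy integrated into Klee4labels is designed precisely to avoid this blow-up while keeping the engine itself untouched.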


Published in

SAC '23: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing
March 2023, 1932 pages
ISBN: 9781450395175
DOI: 10.1145/3555776
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article

Acceptance Rates

Overall Acceptance Rate: 1,650 of 6,669 submissions, 25%