ABSTRACT
Dynamic symbolic execution (DSE) is a powerful test generation approach based on an exploration of the path space of the program under test. While well adapted to path coverage, this approach is less efficient for condition, decision, and more advanced coverage criteria (such as multiple condition coverage, weak mutation, and boundary testing) or user-provided test objectives. While theoretical solutions for adapting DSE to a large set of criteria have been proposed, they have never been integrated into publicly available testing tools. This paper presents a first integration of an optimized test generation strategy for advanced coverage criteria into Klee, a popular open-source testing tool based on DSE. The integration is performed in a fully black-box manner and can therefore inspire similarly easy integrations into other tools. The resulting version of the tool, named Klee4labels, is publicly available. We present the design of the proposed technique and evaluate it on several benchmarks. Our results confirm the benefits of the proposed tool for advanced coverage criteria.
Title: An Efficient Black-Box Support of Advanced Coverage Criteria for Klee