
A Characterisation Schema for Software Testing Techniques

Published in: Empirical Software Engineering

Abstract

One of the major problems in the software testing area is how to get a suitable set of cases to test a software system. This set should assure maximum effectiveness with the least possible number of test cases. Numerous testing techniques are now available for generating test cases. However, many are never used, while a few are used over and over again. Testers have little (if any) information about the available techniques, their usefulness and, generally, how well suited they are to the project at hand, on which to base their decision about which testing techniques to use. This paper presents the results of developing and evaluating an artefact (specifically, a characterisation schema) to assist with testing technique selection. When instantiated for a variety of techniques, the schema provides developers with a catalogue containing enough information for them to select the techniques best suited to a given project. This ensures that the decisions they make are based on objective knowledge of the techniques rather than on perceptions, suppositions and assumptions.
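The abstract does not list the schema's actual attributes, so the following is purely an illustrative sketch of the idea it describes: a catalogue of per-technique characterisations that can be queried against project constraints. Every name here (`cost_of_application`, `tool_support`, `defect_types_targeted`, and the example techniques' values) is a hypothetical stand-in, not the paper's schema.

```python
from dataclasses import dataclass, field

# Hypothetical characterisation record: the paper's real schema has its own
# attributes; these three are illustrative placeholders only.
@dataclass
class TechniqueCharacterisation:
    name: str
    cost_of_application: str            # e.g. "low", "medium", "high"
    tool_support: bool
    defect_types_targeted: list = field(default_factory=list)

# A catalogue is a collection of instantiated schemas, one per technique.
catalogue = [
    TechniqueCharacterisation("branch testing", "medium", True,
                              ["control-flow defects"]),
    TechniqueCharacterisation("all-uses data-flow testing", "high", True,
                              ["data-flow defects"]),
    TechniqueCharacterisation("random testing", "low", True,
                              ["any"]),
]

def select(catalogue, acceptable_costs, required_defect_type):
    """Return names of techniques whose cost is acceptable for the project
    and that target the required defect type ('any' matches everything)."""
    return [t.name for t in catalogue
            if t.cost_of_application in acceptable_costs
            and (required_defect_type in t.defect_types_targeted
                 or "any" in t.defect_types_targeted)]

# A project with a limited budget looking for control-flow defects:
print(select(catalogue, {"low", "medium"}, "control-flow defects"))
# → ['branch testing', 'random testing']
```

The point of the sketch is the workflow the abstract describes: selection becomes a query over recorded technique properties rather than a guess based on familiarity.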



Author information

Correspondence to Sira Vegas.


About this article

Cite this article

Vegas, S., Basili, V. A Characterisation Schema for Software Testing Techniques. Empir Software Eng 10, 437–466 (2005). https://doi.org/10.1007/s10664-005-3862-1

