ABSTRACT
In recent years I have regularly delivered an evolving keynote presentation under the title “Building on success – Beyond the obvious”. During this keynote I try to indicate which basic testing practices are, based on my personal experiences, often key and sometimes even sufficient to “survive” in real-life projects. Being honest and looking at day-to-day practice, I often notice that many structured testing practices, as defined by TMap [1], TMMi [2] and/or ISTQB [3], are not applied, or at most only partly. I often encounter a meaningless test plan, test design techniques not being applied, reviews not being performed, and testers not trained and prepared for their job. And this is the case more than 30 years after the release of the best-seller “Testing according to TMap”, and also more than 20 years after the release of the basic ISTQB Foundations in Software Testing syllabus!
The contradiction here is that despite not applying the proposed testing practices, most of us are still releasing systems. However, the release is often (a bit) too late, at much higher costs, and often not fully according to expectations. At the project retrospective, management typically at first firmly state that they are unsatisfied with the result and the situation, and that performance must be better next time. In practice, next time nothing has changed, and often the result and the situation are the same. I can only conclude that this is apparently acceptable to management, since they do not really act (although they say differently). My personal observation is that there is a sort of minimum set of testing practices that is, in practice, often just enough to get the job done in a project. In this paper, we will explore and present a minimum set of testing practices, starting from the concept of “good enough testing”.
- [1] M. Pol, R. Teunissen and E. van Veenendaal (2002), Software Testing – A Guide to the TMap Approach, Addison Wesley
- [2] E. van Veenendaal and B. Wells (2012), Test Maturity Model integration (TMMi) – Guidelines for Test Process Improvement, UTN Publishing
- [3] D. Graham, R. Black and E. van Veenendaal (2019), Foundations of Software Testing – ISTQB Certification (4th edition), Cengage
- [4] B. Hetzel (1984), The Complete Guide to Software Testing, QEB Information Sciences Inc.
- [5] G.J. Myers (1979), The Art of Software Testing, Wiley-Interscience
- Building on Success – Beyond the Obvious: A Closer Look at Good Enough Testing