DOI: 10.1145/2786805.2786843
research-article

When, how, and why developers (do not) test in their IDEs

Published: 30 August 2015

ABSTRACT

The research community in Software Engineering, and in Software Testing in particular, builds many of its contributions on a set of mutually shared expectations. Despite the fact that they form the basis of many publications as well as open-source and commercial testing applications, these common expectations and beliefs are rarely ever questioned. For example, Frederick Brooks' statement that testing takes half of the development time seems to have manifested itself within the community since he first made it in the "Mythical Man-Month" in 1975. With this paper, we report on the surprising results of a large-scale field study with 416 software engineers whose development activity we closely monitored over the course of five months, resulting in over 13 years of recorded work time in their integrated development environments (IDEs). Our findings question several commonly shared assumptions and beliefs about testing and might be contributing factors to the observed bug proneness of software in practice: the majority of developers in our study do not test; developers rarely run their tests in the IDE; Test-Driven Development (TDD) is not widely practiced; and, last but not least, software developers only spend a quarter of their work time engineering tests, whereas they think they test half of their time.

References

  1. P. Runeson, "A survey of unit testing practices," IEEE Software, vol. 23, no. 4, pp. 22–29, 2006.
  2. A. Begel and T. Zimmermann, "Analyze this! 145 questions for data scientists in software engineering," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 12–13, ACM, 2014.
  3. L. S. Pinto, S. Sinha, and A. Orso, "Understanding myths and realities of test-suite evolution," in Proceedings of the Symposium on the Foundations of Software Engineering (FSE), pp. 33:1–33:11, ACM, 2012.
  4. A. Zaidman, B. Van Rompaey, A. van Deursen, and S. Demeyer, "Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining," Empirical Software Engineering, vol. 16, no. 3, pp. 325–364, 2011.
  5. A. Bertolino, "Software testing research: Achievements, challenges, dreams," in Proceedings of the International Conference on Software Engineering (ICSE), Workshop on the Future of Software Engineering (FOSE), pp. 85–103, 2007.
  6. F. Brooks, The Mythical Man-Month. Addison-Wesley, 1975.
  7. G. Meszaros, xUnit Test Patterns: Refactoring Test Code. Addison-Wesley, 2007.
  8. R. L. Glass, R. Collard, A. Bertolino, J. Bach, and C. Kaner, "Software testing and industry needs," IEEE Software, vol. 23, no. 4, pp. 55–57, 2006.
  9. A. Bertolino, "The (im)maturity level of software testing," SIGSOFT Software Engineering Notes, vol. 29, pp. 1–4, Sept. 2004.
  10. J. Rooksby, M. Rouncefield, and I. Sommerville, "Testing in the wild: The social and organisational dimensions of real world practice," Computer Supported Cooperative Work, vol. 18, pp. 559–580, Dec. 2009.
  11. P. Runeson, M. Host, A. Rainer, and B. Regnell, Case Study Research in Software Engineering: Guidelines and Examples. Wiley, 2012.
  12. M. Beller, G. Gousios, and A. Zaidman, "How (much) do developers test?," in Proceedings of the 37th International Conference on Software Engineering (ICSE), NIER Track, pp. 559–562, IEEE, 2015.
  13. P. Muntean, C. Eckert, and A. Ibing, "Context-sensitive detection of information exposure bugs with symbolic execution," in Proceedings of the International Workshop on Innovative Software Development Methodologies and Practices (InnoSWDev), pp. 84–93, ACM, 2014.
  14. S. S. Shapiro and M. B. Wilk, "An analysis of variance test for normality (complete samples)," Biometrika, vol. 52, no. 3-4, pp. 591–611, 1965.
  15. J. L. Devore and N. Farnum, Applied Statistics for Engineers and Scientists. Duxbury, 1999.
  16. W. G. Hopkins, A New View of Statistics. 1997. http://newstatsi.org, accessed 16 March 2015.
  17. V. I. Levenshtein, "Binary codes capable of correcting deletions, insertions, and reversals," in Soviet Physics Doklady, vol. 10, pp. 707–710, 1966.
  18. J. C. Munson and S. G. Elbaum, "Code churn: A measure for estimating the impact of code change," in Proceedings of the International Conference on Software Maintenance (ICSM), p. 24, IEEE, 1998.
  19. K. Beck, Test Driven Development: By Example. Addison-Wesley, 2003.
  20. H. Munir, K. Wnuk, K. Petersen, and M. Moayyed, "An experimental evaluation of test driven development vs. test-last development with industry professionals," in Proceedings of the International Conference on Evaluation and Assessment in Software Engineering (EASE), pp. 50:1–50:10, ACM, 2014.
  21. Y. Rafique and V. B. Misic, "The effects of test-driven development on external quality and productivity: A meta-analysis," IEEE Transactions on Software Engineering, vol. 39, no. 6, pp. 835–856, 2013.
  22. J. E. Hopcroft, R. Motwani, and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation. Prentice Hall, 2007.
  23. G. Rothermel and S. Elbaum, "Putting your best tests forward," IEEE Software, vol. 20, pp. 74–77, Sept. 2003.
  24. A. Patterson, M. Kölling, and J. Rosenberg, "Introducing unit testing with BlueJ," ACM SIGCSE Bulletin, vol. 35, pp. 11–15, June 2003.
  25. M. Beller, A. Bacchelli, A. Zaidman, and E. Juergens, "Modern code reviews in open-source projects: Which problems do they fix?," in Proceedings of the Working Conference on Mining Software Repositories (MSR), pp. 202–211, ACM, 2014.
  26. E. Derby, D. Larsen, and K. Schwaber, Agile Retrospectives: Making Good Teams Great. Pragmatic Bookshelf, 2006.
  27. C. Marsavina, D. Romano, and A. Zaidman, "Studying fine-grained co-evolution patterns of production and test code," in Proceedings of the International Working Conference on Source Code Analysis and Manipulation (SCAM), pp. 195–204, IEEE, 2014.
  28. M. Gligoric, S. Negara, O. Legunsen, and D. Marinov, "An empirical evaluation and comparison of manual and automated test selection," in Proceedings of the International Conference on Automated Software Engineering (ASE), pp. 361–372, ACM, 2014.
  29. L. Ponzanelli, G. Bavota, M. Di Penta, R. Oliveto, and M. Lanza, "Mining StackOverflow to turn the IDE into a self-confident programming prompter," in Proceedings of the Working Conference on Mining Software Repositories (MSR), pp. 102–111, ACM, 2014.
  30. A. Clauset, C. R. Shalizi, and M. E. Newman, "Power-law distributions in empirical data," SIAM Review, vol. 51, no. 4, pp. 661–703, 2009.
  31. J. G. Adair, "The Hawthorne effect: A reconsideration of the methodological artifact," Journal of Applied Psychology, vol. 69, no. 2, pp. 334–345, 1984.
  32. L. Hattori and M. Lanza, "Syde: A tool for collaborative software development," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 235–238, ACM, 2010.
  33. R. Robbes and M. Lanza, "Spyware: A change-aware development toolset," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 847–850, ACM, 2008.
  34. S. Negara, N. Chen, M. Vakilian, R. E. Johnson, and D. Dig, "A comparative study of manual and automated refactorings," in Proceedings of the 27th European Conference on Object-Oriented Programming (ECOOP), 2013.
  35. R. Minelli, A. Mocci, M. Lanza, and L. Baracchi, "Visualizing developer interactions," in Proceedings of the Working Conference on Software Visualization (VISSOFT), pp. 147–156, IEEE, 2014.
  36. P. Kochhar, T. Bissyande, D. Lo, and L. Jiang, "An empirical study of adoption of software testing in open source projects," in Proceedings of the International Conference on Quality Software (QSIC), pp. 103–112, IEEE, 2013.
  37. T. D. LaToza, G. Venolia, and R. DeLine, "Maintaining mental models: A study of developer work habits," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 492–501, ACM, 2006.
  38. R. Pham, S. Kiesling, O. Liskin, L. Singer, and K. Schneider, "Enablers, inhibitors, and perceptions of testing in novice software teams," in Proceedings of the International Symposium on Foundations of Software Engineering (FSE), pp. 30–40, ACM, 2014.
  39. A. N. Meyer, T. Fritz, G. C. Murphy, and T. Zimmermann, "Software developers' perceptions of productivity," in Proceedings of the International Symposium on Foundations of Software Engineering (FSE), pp. 19–29, ACM, 2014.
  40. P. D. Marinescu, P. Hosek, and C. Cadar, "Covrig: A framework for the analysis of code, test, and coverage evolution in real software," in Proceedings of the International Symposium on Software Testing and Analysis (ISSTA), pp. 93–104, ACM, 2014.
  41. R. Feldt, "Do system test cases grow old?," in Proceedings of the International Conference on Software Testing, Verification and Validation (ICST), pp. 343–352, IEEE, 2014.
  42. M. Greiler, A. van Deursen, and M. Storey, "Test confessions: A study of testing practices for plug-in systems," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 244–254, IEEE, 2012.
  43. V. Hurdugaci and A. Zaidman, "Aiding software developers to maintain developer tests," in Proceedings of the European Conference on Software Maintenance and Reengineering (CSMR), pp. 11–20, IEEE, 2012.

Published in

ESEC/FSE 2015: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering
August 2015, 1068 pages
ISBN: 9781450336758
DOI: 10.1145/2786805
Copyright © 2015 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Overall acceptance rate: 112 of 543 submissions, 21%
