DOI: 10.1145/2393596.2393669
Research article

Recalling the "imprecision" of cross-project defect prediction

Published: 11 November 2012

ABSTRACT

There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!
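To make the tradeoff-based evaluation concrete, the following is a minimal sketch, not the authors' exact setup: it assumes scikit-learn, hypothetical per-file metric arrays X_src/y_src (training project) and X_tgt/y_tgt (target project), and a recall_at_budget helper introduced here for illustration. A logistic regression is trained on one project and then scored on another by asking what fraction of the target project's defective files is caught when only the top 5%, 10% or 20% of files, ranked by predicted defect probability, are inspected.

# Minimal sketch (assumed setup, not the paper's exact pipeline): cross-project
# defect prediction evaluated at fixed inspection budgets rather than at a
# single classification threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recall_at_budget(y_true, scores, fraction):
    # Inspect the top `fraction` of files by predicted risk and report the
    # share of all defective files found within that budget.
    n_inspected = max(1, int(round(fraction * len(scores))))
    ranked = np.argsort(-scores)          # highest predicted risk first
    inspected = ranked[:n_inspected]
    return y_true[inspected].sum() / max(1, y_true.sum())

# X_src, y_src: per-file metrics and defect labels from the training project
# X_tgt, y_tgt: per-file metrics and defect labels from the target project
# (hypothetical numpy arrays standing in for the metric data)
model = LogisticRegression(max_iter=1000).fit(X_src, y_src)
risk = model.predict_proba(X_tgt)[:, 1]

for budget in (0.05, 0.10, 0.20):
    print(f"recall at {budget:.0%} of files inspected:",
          recall_at_budget(y_tgt, risk, budget))

Under this kind of budget-based measure, a ranking that places defective files near the top scores well even if no single probability threshold yields good precision and recall, which is the distinction the abstract draws.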


Published in

FSE '12: Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering
November 2012, 494 pages
ISBN: 9781450316149
DOI: 10.1145/2393596

Copyright © 2012 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Published: 11 November 2012

