
Will this localization tool be effective for this bug? Mitigating the impact of unreliability of information retrieval based bug localization tools

Empirical Software Engineering

Abstract

Information retrieval (IR) based bug localization approaches process a textual bug report and a collection of source code files to find buggy files. They output a ranked list of files sorted by their likelihood of containing the bug. Recently, several IR-based bug localization tools have been proposed. However, no tool can successfully localize the fault within a small number of the most suspicious program elements for every input bug report. It is therefore difficult for developers to decide which tool would be effective for a given bug report, and for some bug reports no bug localization tool is useful at all. Even a state-of-the-art bug localization tool outputs many ranked lists in which the buggy files appear very low, which can cause developers to distrust bug localization tools. In this work, we build an oracle that automatically predicts whether a ranked list produced by an IR-based bug localization tool is likely to be effective. We consider a ranked list effective if a buggy file appears in the top-N positions of the list. If a ranked list is unlikely to be effective, developers need not waste time checking the recommended files one by one; instead, they can fall back on traditional debugging methods or request further information to localize the bug. To build this oracle, our approach extracts features in four categories: score features, textual features, topic model features, and metadata features. We build a separate prediction model for each category and combine them into a composite prediction model, which serves as the oracle. We name this solution APRILE, which stands for Automated PRediction of IR-based Bug Localization's Effectiveness. We further integrate APRILE with two other components that are learned using our bagging-based ensemble classification (BEC) method, and refer to this extension as APRILE+. We have evaluated APRILE+ on predicting the effectiveness of three state-of-the-art IR-based bug localization tools over more than three thousand bug reports from AspectJ, Eclipse, SWT, and Tomcat. APRILE+ achieves an average precision, recall, and F-measure of 77.61%, 88.94%, and 82.09%, respectively. Furthermore, APRILE+ outperforms a baseline approach by Le and Lo (2013) and APRILE by up to a 17.43% and 10.51% increase in F-measure, respectively.
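
To make the prediction task concrete, the following minimal Python sketch illustrates the two ideas above: labeling a ranked list as effective when a buggy file appears in its top-N positions, and combining per-category prediction models into a composite model. This is our own illustration rather than the authors' released implementation (available via the aprile_plus repository in note 7); the class and function names, the choice of N = 10, the bagged decision-tree base learners, and probability averaging as the combination rule are all assumptions made for the example.

```python
# Minimal sketch of APRILE-style effectiveness prediction (illustrative only).
# Assumptions, not from the paper's code: N = 10; scikit-learn's
# BaggingClassifier with its default decision-tree base learners stands in
# for the paper's bagging-based ensemble; per-category probabilities are
# averaged to form the composite prediction.
import numpy as np
from sklearn.ensemble import BaggingClassifier

TOP_N = 10  # a ranked list is "effective" if a buggy file is in the top N


def is_effective(ranked_files, buggy_files, top_n=TOP_N):
    """Ground-truth label for training: does any known buggy file
    appear within the first top_n entries of the ranked list?"""
    return any(f in buggy_files for f in ranked_files[:top_n])


class CompositeEffectivenessModel:
    """One bagged classifier per feature category; the composite model
    averages the per-category probabilities of the 'effective' class."""

    CATEGORIES = ("score", "textual", "topic_model", "metadata")

    def __init__(self, n_estimators=10):
        self.models = {c: BaggingClassifier(n_estimators=n_estimators)
                       for c in self.CATEGORIES}

    def fit(self, features, labels):
        # features: dict mapping category name -> (n_reports, d_c) array
        for c, model in self.models.items():
            model.fit(features[c], labels)
        return self

    def predict(self, features):
        # Average the "effective"-class probability across categories,
        # then threshold at 0.5 to get a yes/no prediction per report.
        probs = np.mean([m.predict_proba(features[c])[:, 1]
                         for c, m in self.models.items()], axis=0)
        return probs >= 0.5
```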


Notes

  1. https://bugcenter.googlecode.com/files/BugLocator.zip

  2. http://dev.mysql.com/doc/refman/5.1/en/fulltext-stopwords.html

  3. https://www.st.cs.uni-saarland.de/ibugs/

  4. http://goo.gl/Ojqrrp

  5. https://bugcenter.googlecode.com/files/swt-3.1.zip

  6. http://svn.apache.org/repos/asf/tomcat/trunk/

  7. https://github.com/lebuitienduy/aprile_plus

  8. http://nlp.stanford.edu/software/tmt/tmt-0.4/

References

  • Abreu R, Zoeteweij P, Golsteijn R, Van Gemund AJ (2009) A practical evaluation of spectrum-based fault localization. J Syst Softw 82(11):1780–1792

  • Antoniol G, Ayari K, Di Penta M, Khomh F, Guéhéneuc YG (2008) Is it a bug or an enhancement?: A text-based approach to classify change requests. In: Proceedings of the 2008 Conference of the Center for Advanced Studies on Collaborative Research: Meeting of Minds, ACM, New York, NY, USA, CASCON ’08, pp 23:304–23:318

  • Ayewah N, Pugh W (2010) The Google FindBugs fixit. In: Proceedings of the 19th International Symposium on Software Testing and Analysis, ACM, pp 241–252

  • Bachmann A, Bernstein A (2009) Software process data quality and characteristics: a historical view on open and closed source projects. In: Proceedings of the joint international and annual ERCIM workshops on principles of software evolution (IWPSE) and software evolution (Evol) workshops, ACM, pp 119–128

  • Bauer E, Kohavi R (1999) An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Mach Learn 36(1-2):105–139

  • Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  • Bowring JF, Rehg JM, Harrold MJ (2004) Active learning for automatic classification of software behavior. In: Proc. 2004 Int. Symp. on Software Testing and Analysis (ISSTA’04), Boston, MA, pp 195–205

  • Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140. doi:10.1007/BF00058655

  • Broomhead DS, Lowe D (1988) Multivariable functional interpolation and adaptive networks. Complex Syst 2:321–355

  • Brun Y, Ernst MD (2004) Finding latent code errors via machine learning over program executions. In: Proc. 26th Int. Conf. on Software Engineering (ICSE’04), Edinburgh, Scotland

  • Cleve H, Zeller A (2005) Locating causes of program failures. In: Proceedings of the 27th International Conference on Software Engineering, ACM, New York, NY, USA, ICSE ’05, pp 342–351

  • Cronen-Townsend S, Zhou Y, Croft WB (2002) Predicting query performance. In: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 299–306

  • Han J, Kamber M (2006) Data Mining Concepts and Techniques, 2nd edn. Morgan Kaufmann

  • He B, Ounis I (2004) Inferring query performance using pre-retrieval predictors. In: String processing and information retrieval, Springer, pp 43–54

  • Heckman S, Williams L (2011) A systematic literature review of actionable alert identification techniques for automated static code analysis. Inf Softw Technol 53(4):363–387

  • Hovemeyer D, Pugh W (2004) Finding bugs is easy. ACM SIGPLAN Notices 39(12):92–106

  • Jalbert N, Weimer W (2008) Automated duplicate detection for bug tracking systems. In: IEEE International Conference on Dependable Systems and Networks with FTCS and DCC (DSN 2008), pp 52–61

  • Johnson B, Song Y, Murphy-Hill E, Bowdidge R (2013) Why don’t software developers use static analysis tools to find bugs? In: 2013 35th International Conference on Software Engineering (ICSE), IEEE, pp 672–681

  • Johnson S (1978) Lint, a C program checker

  • Jones JA, Harrold MJ (2005) Empirical evaluation of the Tarantula automatic fault-localization technique. In: Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, ACM, pp 273–282

  • Kim S, Ernst MD (2007) Which warnings should I fix first? In: Proceedings of the 6th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ACM, pp 45–54

  • Kochhar PS, Tian Y, Lo D (2014) Potential biases in bug localization: do they matter? In: ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Västerås, Sweden, September 15-19, 2014, pp 803–814

  • Kochhar PS, Xia X, Lo D, Li S (2016) Practitioners’ expectations on automated fault localization. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ACM, pp 165–176

  • Kohavi R, Wolpert DH et al (1996) Bias plus variance decomposition for zero-one loss functions. In: ICML, vol 96, pp 275–283

  • Lamkanfi A, Demeyer S, Giger E, Goethals B (2010) Predicting the severity of a reported bug. In: 7th IEEE Working Conference on Mining Software Repositories (MSR), IEEE, pp 1–10

  • Lamkanfi A, Demeyer S, Soetens QD, Verdonck T (2011) Comparing mining algorithms for predicting the severity of a reported bug. In: 15th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, pp 249–258

  • Le TD, Lo D (2013) Will fault localization work for these failures? An automated approach to predict effectiveness of fault localization tools. In: 2013 IEEE International Conference on Software Maintenance, Eindhoven, The Netherlands, September 22-28, 2013, pp 310–319

  • Le TD, Lo D, Thung F (2014a) Should I follow this fault localization tool’s output? Empir Softw Eng, pp 1–38. doi:10.1007/s10664-014-9349-1

  • Le TD, Thung F, Lo D (2014b) Predicting effectiveness of IR-based bug localization techniques. In: 25th IEEE International Symposium on Software Reliability Engineering, ISSRE 2014, Naples, Italy, November 3-6, 2014, pp 335–345

  • Le TD, Oentaryo RJ, Lo D (2015) Information retrieval and spectrum based bug localization: better together. In: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, Bergamo, Italy, August 30 - September 4, 2015. doi:10.1145/2786805.2786880, pp 579–590

  • Le TD, Lo D, Le Goues C, Grunske L (2016) A learning-to-rank based fault localization approach using likely invariants. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ACM, pp 177–188

  • Lemmens A, Croux C (2006) Bagging and boosting classification trees to predict churn. J Mark Res 43(2):276–286

  • Lucia, Lo D, Jiang L, Thung F, Budi A (2014) Extended comprehensive study of association measures for fault localization. J Softw: Evol Process 26(2):172–219

  • Lukins SK, Kraft NA, Etzkorn LH (2010) Bug localization using latent Dirichlet allocation. Inf Softw Technol 52(9):972–990

  • Manning CD, Raghavan P, Schütze H (2008) Introduction to information retrieval. Cambridge University Press, New York

  • Marcus A, Maletic JI (2003) Recovering documentation-to-source-code traceability links using latent semantic indexing. In: Proceedings of the 25th International Conference on Software Engineering, May 3-10, 2003, Portland, Oregon, USA, pp 125–137

  • Menzies T, Marcus A (2008) Automated severity assessment of software defect reports. In: 24th IEEE International Conference on Software Maintenance (ICSM 2008), September 28 - October 4, 2008, Beijing, China, pp 346–355

  • Mitchell T (1997) Machine Learning. McGraw Hill

  • Mothe J, Tanguy L (2005) Linguistic features to predict query difficulty. In: ACM SIGIR Conference on Research and Development in Information Retrieval, Predicting Query Difficulty - Methods and Applications workshop, pp 7–10

  • Parnin C, Orso A (2011) Are automated debugging techniques actually helping programmers? In: Proceedings of the 20th International Symposium on Software Testing and Analysis, ISSTA 2011, Toronto, ON, Canada, July 17-21, 2011, pp 199–209

  • Porter MF (1980) An algorithm for suffix stripping. Program 14(3):130–137

  • Prasad AM, Iverson LR, Liaw A (2006) Newer classification and regression tree techniques: bagging and random forests for ecological prediction. Ecosystems 9(2):181–199

  • Rao S, Kak AC (2011) Retrieval from software libraries for bug localization: a comparative study of generic and composite text models. In: Proceedings of the 8th International Working Conference on Mining Software Repositories, MSR 2011 (co-located with ICSE), Waikiki, Honolulu, HI, USA, May 21-28, 2011, pp 43–52

  • Saha RK, Lease M, Khurshid S, Perry DE (2013) Improving bug localization using structured information retrieval. In: 2013 28th IEEE/ACM International Conference on Automated Software Engineering, ASE 2013, Silicon Valley, CA, USA, November 11-15, 2013, pp 345–355

  • Seo H, Kim S (2012) Predicting recurring crash stacks. In: IEEE/ACM International Conference on Automated Software Engineering, ASE ’12, Essen, Germany, September 3-7, 2012, pp 180–189

  • Shihab E, Ihara A, Kamei Y, Ibrahim WM, Ohira M, Adams B, Hassan AE, Matsumoto K (2010) Predicting re-opened bugs: a case study on the Eclipse project. In: 17th Working Conference on Reverse Engineering, WCRE 2010, October 13-16, 2010, Beverly, MA, USA, pp 249–258

  • Shihab E, Ihara A, Kamei Y, Ibrahim WM, Ohira M, Adams B, Hassan AE, Matsumoto K (2013) Studying re-opened bugs in open source software. Empir Softw Eng 18(5):1005–1042

  • Shtok A, Kurland O, Carmel D (2009) Predicting query performance by query-drift estimation. In: Advances in Information Retrieval Theory, Springer, pp 305–312

  • Shtok A, Kurland O, Carmel D (2010) Using statistical decision theory and relevance models for query-performance prediction. In: Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 259–266

  • Sisman B, Kak AC (2012) Incorporating version histories in information retrieval based bug localization. In: Proceedings of the 9th IEEE Working Conference on Mining Software Repositories, IEEE Press, pp 50–59

  • Tantithamthavorn C, Ihara A, Matsumoto K (2013) Using co-change histories to improve bug localization performance. In: 14th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, SNPD 2013, Honolulu, Hawaii, USA, July 1-3, 2013, pp 543–548

  • Tassey G (2002) The economic impacts of inadequate infrastructure for software testing. National Institute of Standards and Technology, Planning Report 02-3

  • Thomas SW, Nagappan M, Blostein D, Hassan AE (2013) The impact of classifier configuration and classifier combination on bug localization. IEEE Trans Softw Eng 39(10):1427–1443

  • Tian Y, Lo D, Sun C (2012a) Information retrieval based nearest neighbor classification for fine-grained bug severity prediction. In: 19th Working Conference on Reverse Engineering (WCRE), IEEE, pp 215–224

  • Tian Y, Sun C, Lo D (2012b) Improved duplicate bug report identification. In: 16th European Conference on Software Maintenance and Reengineering, CSMR 2012, Szeged, Hungary, March 27-30, 2012, pp 385–390

  • Valdivia Garcia H, Shihab E (2014) Characterizing and predicting blocking bugs in open source projects. In: Proceedings of the 11th Working Conference on Mining Software Repositories, ACM, pp 72–81

  • Vinay V, Cox IJ, Milic-Frayling N, Wood K (2006) On ranking the effectiveness of searches. In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, ACM, pp 398–404

  • Wang S, Lo D (2014) Version history, similar report, and structure: Putting them together for improved bug localization. In: Proceedings of the 22nd International Conference on Program Comprehension, ACM, pp 53–63

  • Xia X, Bao L, Lo D, Li S (2016) “Automated debugging considered harmful” considered harmful – a user study revisiting the usefulness of spectra-based fault localization techniques with professionals using real bugs from large systems. In: Proceedings of the 32nd International Conference on Software Maintenance and Evolution (ICSME)

  • Xie X, Chen TY, Kuo FC, Xu B (2013a) A theoretical analysis of the risk evaluation formulas for spectrum-based fault localization. ACM Trans Softw Eng Methodol 22(4):31

  • Xie X, Kuo FC, Chen TY, Yoo S, Harman M (2013b) Provably optimal and human-competitive results in SBSE for spectrum based fault localisation. In: International Symposium on Search Based Software Engineering, Springer, pp 224–238

  • Xuan J, Monperrus M (2014) Learning to combine multiple ranking metrics for fault localization. In: Proceedings of the 2014 IEEE International Conference on Software Maintenance and Evolution, IEEE Computer Society, pp 191–200

  • Yoo S (2012) Evolving human competitive spectra-based fault localisation techniques. In: International Symposium on Search Based Software Engineering, Springer, pp 244–258

  • Zeller A (2002) Isolating cause-effect chains from computer programs. In: Proceedings of the Tenth ACM SIGSOFT Symposium on Foundations of Software Engineering 2002, Charleston, South Carolina, USA, November 18-22, 2002, pp 1–10

  • Zeller A, Hildebrandt R (2002) Simplifying and isolating failure-inducing input. IEEE Trans Softw Eng 28(2):183–200

  • Zhou J, Zhang H, Lo D (2012) Where should the bugs be fixed? More accurate information retrieval-based bug localization based on bug reports. In: 34th International Conference on Software Engineering, ICSE 2012, June 2-9, 2012, Zurich, Switzerland, pp 14–24

  • Zimmermann T, Nagappan N, Gall HC, Giger E, Murphy B (2009) Cross-project defect prediction: a large scale experiment on data vs. domain vs. process. In: Proceedings of the 7th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2009, Amsterdam, The Netherlands, August 24-28, 2009, pp 91–100

Author information

Correspondence to Tien-Duy B. Le.

Additional information

Communicated by: Lin Tan

About this article

Cite this article

Le, TD.B., Thung, F. & Lo, D. Will this localization tool be effective for this bug? Mitigating the impact of unreliability of information retrieval based bug localization tools. Empir Software Eng 22, 2237–2279 (2017). https://doi.org/10.1007/s10664-016-9484-y

