
On the relative value of cross-company and within-company data for defect prediction


Abstract

We propose a practical defect prediction approach for companies that do not track defect-related data. Specifically, we investigate the applicability of cross-company (CC) data for building localized defect predictors using static code features. First, we analyze the conditions under which CC data can be used as-is; these conditions turn out to be rare. We then apply principles of analogy-based learning (i.e. nearest neighbor (NN) filtering) to CC data in order to fine-tune these models for localization. We compare the performance of these models with that of defect predictors learned from within-company (WC) data. As expected, we observe that defect predictors learned from WC data outperform those learned from CC data. However, our analyses also yield defect predictors learned from NN-filtered CC data with performance close to, but still not better than, that of WC predictors. Therefore, we perform a final analysis to determine the minimum number of local defect reports needed to learn WC defect predictors. We demonstrate in this paper that the minimum number of data samples required to build effective defect predictors can be quite small and can be collected quickly, within a few months. Hence, for companies with no local defect data, we recommend a two-phase approach that allows them to start the defect prediction process immediately. In phase one, companies should use NN-filtered CC data to initiate the defect prediction process and simultaneously start collecting WC (local) data. Once enough WC data is collected (i.e. after a few months), organizations should switch to phase two and use predictors learned from WC data.


Notes

  1. Throughout the paper, the following notation is used: a defect predictor (or predictor) means a binary classification method that categorizes software modules as either defective or defect-free; data refers to M×N matrices of raw measurements of N metrics from M software modules; these N metrics are referred to as features (an illustrative sketch follows these notes).

  2. Therefore, throughout the paper, the term “data” refers to static code features.

  3. We should carefully note that we do not make use of any conceptual similarities, since our analysis is based on static code features. As to the issue of conceptual connections within the code, we refer the reader to the concept location and cohesion work of Marcus et al. (2008).

  4. http://promisedata.org/repository

  5. E.g., given an attribute’s minimum and maximum values, replace a particular value n with (n − min)/((max − min)/10). For more on discretization, see Dougherty et al. (1995). A worked sketch follows these notes.

  6. In other languages, modules may be called “functions” or “methods”.

  7. SOFTLAB data were not available at that time.

  8. Details of this issue are beyond the scope of this paper. For more, please see Table 1 in Domingos and Pazzani (1997).

  9. Caveat: we did not optimize the value of k for each project; we simply used a constant k = 10. We leave dynamically setting the value of k for a given project as future work (Baker 2007). A sketch of the NN filter follows these notes.

  10. In order to reflect use in practice, we do not use the remaining 90% of the same project for training; rather, we use a random 90% of data from other projects (see the sketch following these notes). Please note that all WC analyses in this paper are within-company, not within-project, simulations. Since the SOFTLAB data are collected from a single company, learning a predictor on some projects and testing it on a different one does not violate the within-company simulation.

  11. That panel supported neither Fagan’s claim (Fagan 1986) that inspections can find 95% of defects before testing nor Shull’s claim that specialized directed inspection methods can catch 35% more defects than other methods (Shull et al. 2000).

  12. TR(a,b,c) is a triangular distribution with min/mode/max of a, b, c (a sampling sketch follows these notes).

  13. Please note that we can only compare the defect detection properties of automatic vs. manual methods. Unlike automatic defect prediction via data mining, the above manual inspection methods do not just report “true/false” for a module; rather, they also provide specific debugging information. Hence, a complete comparison of automatic vs. manual defect prediction would have to include both an analysis of the time to detect potential defects and the time required to fix them. Manual methods might score higher than automatic methods since they can offer more clues back to the developer about what is wrong with the module. However, such an analysis is beyond the scope of this paper. Here, we focus only on the relative merits of different methods for predicting error-prone modules.
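To make the notation in note 1 concrete, here is a minimal, illustrative Python sketch that is not taken from the paper: the feature values, the module count, and the choice of scikit-learn's Gaussian Naive Bayes classifier are all our own assumptions, used only to show what an M×N data matrix and a binary defect predictor look like in code.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical M x N data matrix: M = 5 modules (rows), N = 3 static code
# features (columns), e.g. lines of code, cyclomatic complexity, Halstead effort.
# All numbers are made up for illustration.
X = np.array([
    [120, 14, 3500.0],
    [ 30,  2,  410.0],
    [510, 41, 9800.0],
    [ 75,  6,  900.0],
    [220, 19, 4100.0],
])
y = np.array([1, 0, 1, 0, 1])  # labels: 1 = defective, 0 = defect-free

# A "defect predictor" in the sense of note 1 is any binary classifier
# trained on such a matrix.
predictor = GaussianNB().fit(X, y)
print(predictor.predict([[100, 10, 2000.0]]))  # e.g. array([0]) or array([1])
```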
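The equal-width discretization of note 5, sketched in Python. The clamp to the top bin and the constant-attribute guard are our additions to keep the example well defined; they are not part of the original description.

```python
def discretize(values, bins=10):
    """Replace each value n with (n - min) / ((max - min) / bins), per note 5."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # guard: constant attribute, single bin
        return [0] * len(values)
    width = (hi - lo) / bins
    # int() would place the maximum into bin `bins`; clamp so bins run 0..bins-1.
    return [min(int((v - lo) / width), bins - 1) for v in values]

print(discretize([1, 2, 5, 7, 10]))   # -> [0, 1, 4, 6, 9]
```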
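A sketch of the nearest-neighbor (NN) filtering of CC data referred to in the abstract and in note 9, assuming Euclidean distance over the static code features and the union of each local module's k = 10 nearest CC neighbors as the filtered training set; the function and variable names are ours.

```python
import numpy as np

def nn_filter(cc_X, cc_y, local_X, k=10):
    """For every local (WC) module, keep its k nearest cross-company (CC)
    modules by Euclidean distance; train only on the union of the kept rows."""
    selected = set()
    for row in local_X:
        dists = np.linalg.norm(cc_X - row, axis=1)   # distance to every CC module
        selected.update(np.argsort(dists)[:k].tolist())
    idx = sorted(selected)
    return cc_X[idx], cc_y[idx]
```

The filtered (cc_X, cc_y) subset would then be passed to whatever classifier plays the role of the defect predictor.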
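Our reading of the within-company setup in note 10, as a hedged sketch only: the test set is a random 10% of one project and the training set is a random 90% of the data pooled from the company's other projects. The exact sampling details below are assumptions, not a verbatim transcription of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def wc_trial(projects, test_project):
    """projects: dict mapping project name -> (X, y) arrays from one company."""
    X, y = projects[test_project]
    test_idx = rng.permutation(len(y))[: max(1, len(y) // 10)]          # 10% test
    rest = [p for p in projects if p != test_project]
    pool_X = np.vstack([projects[p][0] for p in rest])
    pool_y = np.concatenate([projects[p][1] for p in rest])
    train_idx = rng.permutation(len(pool_y))[: int(0.9 * len(pool_y))]  # 90% train
    return pool_X[train_idx], pool_y[train_idx], X[test_idx], y[test_idx]
```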
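For completeness, TR(a, b, c) from note 12 can be sampled with Python's standard library; note that random.triangular takes its arguments as (low, high, mode), so the mode is passed last. The example parameters are arbitrary.

```python
import random

def TR(a, b, c):
    """Triangular distribution with minimum a, mode b, maximum c (note 12)."""
    return random.triangular(a, c, b)   # random.triangular(low, high, mode)

samples = [TR(2, 5, 9) for _ in range(1000)]   # arbitrary a/b/c, illustration only
```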


Author information


Corresponding author

Correspondence to Burak Turhan.

Additional information

Editor: James Miller


Cite this article

Turhan, B., Menzies, T., Bener, A.B. et al. On the relative value of cross-company and within-company data for defect prediction. Empir Software Eng 14, 540–578 (2009). https://doi.org/10.1007/s10664-008-9103-7

