DOI: 10.1145/3036290.3036323
Research article

Investigating the Accuracy of Test Code Size Prediction using Use Case Metrics and Machine Learning Algorithms: An Empirical Study

Published: 13 January 2017

ABSTRACT

Software testing plays a crucial role in software quality assurance. It is, however, a time- and resource-consuming process. It is therefore important to predict, as early as possible, the effort required to test software, so that testing activities can be planned and resources optimally allocated. Test code size, in terms of Test Lines Of Code (TLOC), is an important testing effort indicator used in many empirical studies. In this paper, we empirically investigate the early prediction of TLOC for object-oriented software using use case metrics. We used several machine learning algorithms (linear regression, k-NN, Naïve Bayes, C4.5, Random Forest, and Multilayer Perceptron) to build the prediction models. We performed an empirical study using data collected from five Java projects. The use case metrics were compared to the well-known Use Case Points (UCP) method. Results show that the use case metrics-based approach yields more accurate TLOC predictions than the UCP method.
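To illustrate the two approaches being compared, the sketch below computes Karner's Use Case Points from its standard inputs and fits a simple one-predictor linear regression of TLOC on a use case metric. All metric values, project data, and the choice of "number of transactions" as the predictor are invented for illustration; they are not the study's data or its exact model.

```python
# Hypothetical sketch (not the paper's data): Karner's UCP formula and a
# minimal linear-regression TLOC predictor built on one use case metric.

def use_case_points(uucw, uaw, tcf=1.0, ecf=1.0):
    """Karner's UCP: (unadjusted use case weight + unadjusted actor weight)
    scaled by the technical and environmental complexity factors."""
    return (uucw + uaw) * tcf * ecf

def fit_linear(xs, ys):
    """Ordinary least squares for a single predictor; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented training data: a use case metric (here, number of transactions)
# per project, paired with observed test code size in TLOC.
transactions = [40, 55, 70, 90, 120]
tloc = [4100, 5600, 7050, 9100, 12000]

slope, intercept = fit_linear(transactions, tloc)
predicted = slope * 100 + intercept  # predicted TLOC for 100 transactions
```

In the study itself, prediction accuracy is what separates the two approaches: the regression is fitted directly on use case metrics measured from the projects, whereas UCP aggregates weights and adjustment factors fixed in advance.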


Published in:
ICMLSC '17: Proceedings of the 2017 International Conference on Machine Learning and Soft Computing
January 2017, 233 pages
ISBN: 9781450348287
DOI: 10.1145/3036290
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

