
Subjective evaluation of software evolvability using code smells: An empirical study

Published in: Empirical Software Engineering

Abstract

This paper presents the results of an empirical study on the subjective evaluation of code smells, which identify poorly evolvable structures in software. We propose the use of the term software evolvability to describe the ease of further developing a piece of software, and outline the research area from four different viewpoints. Furthermore, we describe the differences between human evaluations and automatic program analysis based on software evolvability metrics. The empirical component is based on a case study in a Finnish software product company, in which we studied two topics. First, we looked at the effect of the evaluator when subjectively evaluating the existence of smells in code modules. We found that using smells for code evaluation can be difficult because of the conflicting perceptions of different evaluators; however, the evaluators' demographics partly explain the variation. Second, we applied selected source code metrics to identify four smells and compared the results to the subjective evaluations. The metrics based on automatic program analysis and the human-based smell evaluations did not fully correlate. Based on our results, we suggest that organizations make decisions regarding software evolvability improvement based on a combination of subjective evaluations and code metrics. Given the limitations of the study, we also recognize the need for more refined studies and experiments in the area of software evolvability.
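To illustrate the metric-based smell detection that the abstract contrasts with subjective evaluation, here is a minimal sketch. The function names and the 50-line threshold are hypothetical choices for illustration, not the metrics or thresholds used in the study.

```python
# Illustrative only: flag a "Long Method" smell candidate using a simple
# size metric (non-blank, non-comment lines of code). The threshold below
# is a hypothetical value, not one taken from the paper.

def loc(source: str) -> int:
    """Count non-blank, non-comment lines in a method body."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

def flag_long_method(source: str, threshold: int = 50) -> bool:
    """Flag the body as a Long Method candidate when LOC exceeds the threshold."""
    return loc(source) > threshold

# A 60-line body exceeds the 50-line threshold.
body = "\n".join(f"x{i} = {i}" for i in range(60))
print(flag_long_method(body))  # → True
```

A real detector would combine several metrics (e.g., complexity and coupling measures) and, as the study's results suggest, its verdicts would still need to be weighed against human judgment.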


Figure 1
Figure 2


Notes

  1. http://www.m-w.com/

  2. Unfortunately we have not been able to obtain a copy of the report. A brief summary can be found in (Pigoski 1996) on page 288.

  3. A recent study (Robillard et al. 2004) of program modification tasks also showed that programmers who performed their modifications inside few methods were less successful than the ones who distributed their solutions to several methods.

  4. http://sourceforge.net/projects/same

  5. http://www.sdmetrics.com/

  6. The 1989 version of the pamphlet is no longer available, but the more recent edition (AFOTEC 1996) is.

References

  • AFOTEC (1996) Software maintainability evaluation guide. Department of the Air Force, HQ Air Force Operational Test and Evaluation Center

  • Arnold RS (1989) Software restructuring. Proc IEEE 77:607–617

  • Balazinska M, Merlo E, Dagenais M, Lague B, Kontogiannis K (2000) Advanced clone-analysis to support object-oriented system refactoring. Proceedings of Seventh Working Conference on Reverse Engineering, pp 98–107.

  • Bandi RK, Vaishnavi VK, Turk DE (2003) Predicting maintenance performance using object-oriented design complexity metrics. IEEE Trans Softw Eng 29:77–87

  • Bansiya J, David CG (2002) A hierarchical model for object-oriented design quality. IEEE Trans Softw Eng 28:4–17

  • Beck K, Beedle M, van Bennekum A, Cockburn A, Cunningham W, Fowler M, Grenning J et al (2001) Manifesto for agile software development. [cited 8/21 2003]. Available from http://agilemanifesto.org/

  • Briand LC, Daly JW, Wüst JK (1997) A unified framework for cohesion measurement in object-oriented systems. Proceedings of the Fourth International Software Metrics Symposium, pp 43–53

  • Briand LC, Daly JW, Wüst JK (1999) A unified framework for coupling measurement in object-oriented systems. IEEE Trans Softw Eng 25:91–121

  • Brown WJ, Malveau RC, McCormick HW, Mowbray TJ (1998) AntiPatterns: refactoring software, architectures, and projects in crisis. Wiley, New York

  • Chidamber SR, Kemerer CF (1994) A metrics suite for object oriented design. IEEE Trans Softw Eng 20:476–493

  • Chidamber SR, Darcy DP, Kemerer CF (1998) Managerial use of metrics for object-oriented software: an exploratory analysis. IEEE Trans Softw Eng 24:629–639

  • Chikofsky EJ, Cross JH (1990) Reverse engineering and design recovery: a taxonomy. IEEE Softw 7:13–17

  • Coleman D, Ash D, Lowther B, Oman PW (1994) Using metrics to evaluate software system maintainability. Computer 27:44–49

  • Coleman D, Lowther B, Oman PW (1995) The application of software maintainability models in industrial software systems. J Syst Softw 29:3–16

  • Cusumano MA, Selby RW (1995) Microsoft secrets. Free Press, USA

  • Cusumano MA, Yoffie DB (1998) Design strategy. In: Competing on internet time. Free Press, New York, USA, pp 180–198

  • Ducasse S, Rieger M, Demeyer S (1999) A language independent approach for detecting duplicated code. Proceedings of the International Conference on Software Maintenance, Oxford, England, UK, pp 109–118

  • Fowler M (2000) Refactoring: improving the design of existing code, 1st edn. Addison-Wesley, Boston

  • Fowler M, Beck K (2000) Bad smells in code. In: Refactoring: improving the design of existing code, 1st edn. Addison-Wesley, Boston, pp 75–88

  • Garvin DA (1984) What does “product quality” really mean? Sloan Manage Rev 26:25–43

  • Grady RB (1994) Successfully applying software metrics. Computer 27:18–25

  • Halstead MH (1977) Elements of software science. Elsevier, New York

  • Harrison R, Counsell SJ, Nithi RV (1998) An evaluation of the MOOD set of object-oriented software metrics. IEEE Trans Softw Eng 24:491–496

  • Henderson-Sellers B (1996) Object-oriented metrics. Prentice Hall, Upper Saddle River, New Jersey

  • Hitz M, Montazeri B (1996) Chidamber and Kemerer's metrics suite: a measurement theory perspective. IEEE Trans Softw Eng 22:267–271

  • IEEE (1998) IEEE standard for software maintenance. The Institute of Electrical and Electronics Engineers, Inc, New York

  • IEEE (1990) IEEE standard glossary of software engineering terminology. The Institute of Electrical and Electronics Engineers, Inc, New York

  • Iio K, Furuyama T, Arai Y (1997) Experimental analysis of the cognitive processes of program maintainers during software maintenance. Proceedings of International Conference on Software Maintenance, pp 242–249

  • Kafura DG, Reddy GR (1987) The use of software complexity metrics in software maintenance. IEEE Trans Softw Eng 13:335–343

  • Kataoka Y, Ernst MD, Griswold WG, Notkin D (2001) Automated support for program refactoring using invariants. Proceedings of International Conference on Software Maintenance, Florence, Italy, pp 736–743

  • Kataoka Y, Imai T, Andou H, Fukaya T (2002) A quantitative evaluation of maintainability enhancement by refactoring. Proceedings of the International Conference on Software Maintenance, Montreal, Canada, pp 576–585

  • Kendall M (1948) The problem of m rankings. In: Rank correlation methods, 5th edn. Edward Arnold, London, pp 117–143

  • Kitchenham BA, Pfleeger SL (1996) Software quality: the elusive target. IEEE Softw 13:12–21

  • Kitchenham BA, Pfleeger SL (2002a) Principles of survey research part 2: designing a survey. ACM SIGSOFT Softw Eng Notes 27:18–20

  • Kitchenham BA, Pfleeger SL (2002b) Principles of survey research part 4: questionnaire evaluation. ACM SIGSOFT Softw Eng Notes 27:20–23

  • Kitchenham BA, Pfleeger SL (2002c) Principles of survey research: part 3: constructing a survey instrument. ACM SIGSOFT Softw Eng Notes 27:20–24

  • Kitchenham BA, Pfleeger SL (2002d) Principles of survey research: part 5: populations and samples. ACM SIGSOFT Softw Eng Notes 27:17–20

  • Lehman MM (1980) On understanding laws, evolution, and conservation in the large-program life cycle. J Syst Softw 1:213–221

  • Li W, Henry SM (1993) Object-oriented metrics that predict maintainability. J Syst Softw 23:111–122

  • Lorenz M, Kidd J (1994) Object-oriented software metrics. Prentice Hall, Upper Saddle River, New Jersey

  • Mäntylä MV, Vanhanen J, Lassenius C (2003) A taxonomy and an initial empirical study of bad smells in code. Proceedings of the International Conference on Software Maintenance, Amsterdam, The Netherlands, pp 381–384

  • Marinescu R (2004) Detection strategies: metrics-based rules for detecting design flaws. In: Proceedings of Software Maintenance, Chicago, Illinois, USA, pp 350–359

  • Maruyama K, Shima K (1999) Automatic method refactoring using weighted dependence graphs. Proceedings of the International Conference on Software Engineering, Los Angeles, California, USA, pp 236–245

  • McCabe TJ (1976) A complexity measure. IEEE Trans Softw Eng 2:308–320

  • McConnell S (1993) Code complete. Microsoft, Redmond, Washington

  • McConnell S (2004) High-quality routines. In: Code complete 2, 2nd edn. Microsoft, Redmond, Washington, pp 161–186

  • Mens T, Tourwe T (2004) A survey of software refactoring. IEEE Trans Softw Eng 30:126–139

  • Muthanna S, Stacey B, Kontogiannis K, Ponnambalam K (2000) A maintainability model for industrial software systems using design level metrics. Proceedings of Seventh Working Conference on Reverse Engineering, Brisbane, Australia, pp 248–256

  • Oman PW, Hagemeister J (1994) Constructing and testing of polynomials predicting software maintainability. J Syst Softw 24:251–266

  • Oman PW, Hagemeister J, Ash D (1991) A definition and taxonomy for software maintainability. Software Engineering Test Lab, University of Idaho, Report 91-08

  • Pfleeger SL, Kitchenham BA (2001) Principles of survey research. Part 1. Turning lemons into lemonade. ACM SIGSOFT Softw Eng Notes 26:16–18

  • Pigoski TM (1996) Practical software maintenance. Wiley, New York

  • Rajlich VT, Bennett KH (2000) A staged model for the software life cycle. Computer 33:66–71

  • Robillard MP, Coelho W, Murphy GC (2004) How effective developers investigate source code: an exploratory study. IEEE Trans Softw Eng 30:889–903

  • Rombach DH (1987) Controlled experiment on the impact of software structure on maintainability. IEEE Trans Softw Eng 13:344–354

  • Schwanke RW, Hanson SJ (1994) Using neural networks to modularize software. Mach Learn 15:137–168

  • Shepperd MJ (1990) System architecture metrics for controlling software maintainability. IEE Colloquium on Software Metrics 4/1–4/3

  • Shneiderman B (1980) Software psychology: human factors in computer and information systems. Winthrop, Cambridge, Massachusetts

  • Siegel S (1956) Nonparametric statistics for the behavioral sciences, 1st edn. McGraw-Hill, New York

  • Simon F, Steinbruckner F, Lewerentz C (2001) Metrics based refactoring. Proceedings Fifth European Conference on Software Maintenance and Reengineering, Lisbon, Portugal, pp 30–38

  • Sommerville I (2001) Software engineering. Addison-Wesley, Reading, Massachusetts

  • Stevens W, Myers G, Constantine L (1974) Structured design. IBM Syst J 13:115–139

  • Subramanyam R, Krishnan MS (2003) Empirical analysis of CK metrics for object-oriented design complexity: implications for software defects. IEEE Trans Softw Eng 29:297–310

  • Succi G, Pedrycz W, Djokic S, Zuliani P, Russo B (2005) An empirical exploration of the distributions of the Chidamber and Kemerer object-oriented metrics suite. Empirical Software Engineering 10:81–104

  • Sun Microsystems (1999) Code conventions for the Java programming language [online; cited 7/20 1999]. Available from http://java.sun.com/docs/codeconv/

  • Szulewski PA, Budlong FC (1996) Metrics for Ada 95: focus on reliability and maintainability. CrossTalk: The Journal of Defense Software Engineering

  • Tourwé T, Mens T (2003) Identifying refactoring opportunities using logic meta programming. Proceedings of the Seventh European Conference on Software Maintenance and Reengineering, 2003, Benevento, Italy, pp 91–100

  • Wake WC (2003) Refactoring workbook, 1st edn. Addison-Wesley

  • Welker KD, Oman PW, Atkinson GG (1997) Development and application of an automated source code maintainability index. J Softw Maint Res Pract 9:127–159

  • Yu H, Ikeda M, Mizoguchi R (1994) Helping novice programmers bridge the conceptual gap. Proceedings of International Conference on Expert Systems for Development, Bangkok, Thailand, pp 192–197


Author information

Corresponding author

Correspondence to Mika V. Mäntylä.


Cite this article

Mäntylä, M.V., Lassenius, C. Subjective evaluation of software evolvability using code smells: An empirical study. Empir Software Eng 11, 395–431 (2006). https://doi.org/10.1007/s10664-006-9002-8
