
Introduction of static quality analysis in small- and medium-sized software enterprises: experiences from technology transfer


Abstract

Today, small- and medium-sized enterprises (SMEs) in the software industry face major challenges. Their resource constraints require high efficiency in development. Furthermore, quality assurance (QA) measures need to be taken to mitigate the risk of additional, expensive effort for bug fixes or compensations. Automated static analysis (ASA) can reduce this risk because it promises low application effort. SMEs, however, seem to take little advantage of this opportunity; instead, they still rely mainly on the dynamic analysis approach of software testing. In this article, we report on our experiences from a technology transfer project. Our aim was to evaluate the results static analysis can provide for SMEs as well as the problems that occur when introducing and using static analysis in SMEs. We analysed five software projects from five collaborating SMEs using three different ASA techniques: code clone detection, bug pattern detection and architecture conformance analysis. Following the analysis, we applied a quality model to aggregate and evaluate the results. Our study shows that the effort required to introduce ASA techniques in SMEs is small (mostly below one person-hour each). Furthermore, we encountered only a few technical problems. By means of the analyses, we were able to detect multiple defects in production code. The participating companies perceived the analysis results to be a helpful addition to their current QA and will include the analyses in their QA process. With the help of the Quamoco quality model, we could efficiently aggregate and rate static analysis results. However, we also encountered a partial mismatch with the opinions of the SMEs. We conclude that ASA and quality models can be a valuable and affordable addition to the QA process of SMEs.
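
To make the abstract's notion of bug pattern detection concrete: such tools report suspicious code idioms without executing the program. The following minimal Java class is a purely illustrative, hypothetical example (it is not taken from the studied projects) of two patterns that detectors such as FindBugs typically flag, namely a reference comparison of strings and a possible null dereference.

    // Illustrative only: two defect patterns commonly reported by bug pattern detectors.
    public class DiscountService {

        // Pattern 1: string compared with '==' instead of equals(), so equal
        // values that are not the same object are not recognised.
        public boolean isGoldCustomer(String customerType) {
            return customerType == "GOLD";
        }

        // Pattern 2: possible null dereference; lookup() may return null,
        // but the result is used without a check.
        public int discountFor(String customerId) {
            Customer customer = lookup(customerId);
            return customer.discountPercent();
        }

        private Customer lookup(String customerId) {
            return null; // stub for illustration
        }

        interface Customer {
            int discountPercent();
        }
    }

Because such findings are derived from source or byte code alone, the application effort mentioned in the abstract stays low: no test environment or running system is required.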


Notes

  1. http://www.openmrs.org.

  2. http://www.conqat.org.

  3. http://www.ccfinder.net.

  4. http://www.semanticdesigns.com/Products/Clone.

  5. http://www.axivion.com.

  6. http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis.

  7. http://splint.org.

  8. http://cppcheck.sourceforge.net.

  9. http://findbugs.sourceforge.net.

  10. http://msdn.microsoft.com/en-us/library/bb429476.aspx.

  11. http://www.hello2morrow.com/products/sonarj.

  12. http://www.headwaysoftware.com.

  13. http://source.valtech.com/display/dpm/Dependometer.

  14. http://www.quamoco.de/.

  15. http://www.in.tum.de/webportal/explorer.html.

  16. http://pmd.sourceforge.net.

  17. http://www.mono-project.com/Gendarme.

  18. http://www.in.tum.de/webportal/explorer.html.

  19. http://www.antlr.org.

  20. Following Wagner et al. (2008), three released bugs would suffice to justify ASA efforts.
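
As a rough, hedged sketch of the idea behind the clone detection tools listed in notes 2–4 (it does not reproduce the actual implementation of ConQAT or CCFinder, which operate on normalised token streams rather than raw lines), the following self-contained Java program hashes fixed-size windows of trimmed source lines and reports windows that occur more than once:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal, illustrative line-window clone detector; file names are passed as arguments.
    public class CloneSketch {

        public static void main(String[] args) throws IOException {
            final int window = 5; // minimal clone length in (non-empty) lines
            Map<String, List<String>> occurrences = new HashMap<>();

            for (String file : args) {
                // Normalisation step: trim whitespace and drop empty lines.
                List<String> normalised = new ArrayList<>();
                for (String line : Files.readAllLines(Paths.get(file))) {
                    String trimmed = line.trim();
                    if (!trimmed.isEmpty()) {
                        normalised.add(trimmed);
                    }
                }
                // Record every window of consecutive normalised lines.
                for (int i = 0; i + window <= normalised.size(); i++) {
                    String key = String.join("\n", normalised.subList(i, i + window));
                    occurrences.computeIfAbsent(key, k -> new ArrayList<>())
                               .add(file + ", line " + (i + 1));
                }
            }

            // Every window that occurs at more than one location is a clone candidate.
            occurrences.forEach((fragment, locations) -> {
                if (locations.size() > 1) {
                    System.out.println("Possible clone at: " + locations);
                }
            });
        }
    }

A production-quality detector additionally normalises identifiers and literals, merges overlapping windows into maximal clones, and filters generated code; these refinements are what make the tools above usable with little manual effort.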

References

  • Ahsan, S. N., Ferzund, J., & Wotawa, F. (2009). Are there language specific bug patterns? Results obtained from a case study using Mozilla. In Proceeding of the fourth international conference on software engineering advances (ICSEA’09) (pp. 210–215). IEEE Computer Society.

  • Al-Kilidar, H., Cox, K., & Kitchenham, B. (2005). The use and usefulness of the ISO/IEC 9126 quality standard. In Proceedings of the international symposium on empirical software engineering (ISESE’05) (pp. 126–132). IEEE Computer Society.

  • Ayewah, N., Hovemeyer, D., Morgenthaler, J. D., Penix, J., & Pugh, W. (2008). Using static analysis to find bugs. IEEE Software, 25, 22–29. doi:10.1109/MS.2008.130.

  • Ayewah, N., Pugh, W., Morgenthaler, J. D., Penix, J., & Zhou, Y. (2007). Evaluating static analysis defect warnings on production software. In Proceedings of the 7th workshop on program analysis for software tools and engineering (PASTE ’07) (pp. 1–8). ACM Press. doi:10.1145/1251535.1251536.

  • Baca, D., Carlsson, B., & Lundberg, L. (2008). Evaluating the cost reduction of static code analysis for software security. In Proceedings of the third ACM SIGPLAN workshop on programming languages and analysis for security (PLAS ’08) (pp. 79–88). New York, NY: ACM. doi:10.1145/1375696.1375707.

  • Bansiya, J., & Davis, C. G. (2002). A hierarchical model for object-oriented design quality assessment. IEEE Transactions on Software Engineering, 28(1), 4–17. doi:10.1109/32.979986.

  • Beizer, B. (1990). Software testing techniques (2nd ed.). New York, NY: Thomson.

  • Bessey, A., Block, K., Chelf, B., Chou, A., Fulton, B., Hallem, S., et al. (2010). A few billion lines of code later: Using static analysis to find bugs in the real world. Communications of the ACM, 53(2), 66–75. doi:10.1145/1646353.1646374.

  • Bijlsma, D., Ferreira, M. A., Luijten, B., & Visser, J. (2012). Faster issue resolution with higher technical quality of software. Software Quality Journal, 20(2), 265–285.

  • Boehm, B. W., Brown, J. R., Kaspar, H., Lipow, M., Macleod, G. J., & Merritt, M. J. (1978). Characteristics of software quality. Amsterdam: Van Nostrand Reinhold.

  • Boogerd, C., & Moonen, L. (2009). Evaluating the relation between coding standard violations and faults within and across software versions. In Proceedings of the 6th IEEE international working conference on mining software repositories (MSR) (pp. 41–50). doi:10.1109/MSR.2009.5069479.

  • Chandra, P., Chess, B., & Steven, J. (2006). Putting the tools to work: How to succeed with source code analysis. IEEE Security & Privacy, 4(3), 80–83. doi:10.1109/MSP.2006.77.

  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

  • de Moor, O., Verbaere, M., Hajiyev, E., Avgustinov, P., Ekman, T., Ongkingco, N., et al. (2007). QL for source code analysis. In Proceedings of the seventh IEEE international working conference on source code analysis and manipulation (SCAM 2007) (pp. 3–16). IEEE Computer Society.

  • Deissenboeck, F., Feilkas, M., Heinemann, L., Hummel, B., & Juergens, E. (2010a). ConQAT book. Technische Universität München, Institut für Informatik, Software & Systems Engineering, v2.6 edn. http://conqat.cs.tum.edu/index.php/ConQAT.

  • Deissenboeck, F., Heinemann, L., Herrmannsdoerfer, M., Lochmann, K., & Wagner, S. (2011). The Quamoco tool chain for quality modeling and assessment. In Proceedings of the 33rd international conference on software engineering.

  • Deissenboeck, F., Heinemann, L., Hummel, B., & Juergens, E. (2010b). Flexible architecture conformance assessment with ConQAT. In Proceedings of the 32nd ACM/IEEE international conference on software engineering (Vol. 2, pp. 247–250). ACM Press. doi:10.1145/1810295.1810343.

  • Deissenboeck, F., Heinemann, L., Hummel, B., & Wagner, S. (2012). Challenges of the dynamic detection of functionally similar code fragments. In T. Mens, A. Cleve, & R. Ferenc (Eds.), CSMR (pp. 299–308). IEEE.

  • Deissenboeck, F., Juergens, E., Lochmann, K., & Wagner, S. (2009). Software quality models: Purposes, usage scenarios and requirements. In Proceedings of the ICSE workshop on software quality.

  • Deissenboeck, F., Wagner, S., Pizka, M., Teuchert, S., & Girard, J. F. (2007). An activity-based quality model for maintainability. In Proceedings of the IEEE international conference on software maintenance.

  • Dromey, R. G. (1995). A model for software product quality. IEEE Transactions on Software Engineering, 21(2), 146–162.

  • Elva, R., & Leavens, G. T. (2012). Jsctracker: A semantic clone detection tool for java code. Orlando, FL: University of Central Florida.

  • European Commission. (2003). Commission recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises. Official Journal of the European Union L 124, 36–41.

  • Feilkas, M., Ratiu, D., & Juergens, E. (2009). The loss of architectural knowledge during system evolution: An industrial case study. In Proceedings of the IEEE 17th international conference on program comprehension (ICPC’09) (pp. 188–197). IEEE Computer Society.

  • Ferzund, J., Ahsan, S. N., & Wotawa, F. (2008). Analysing bug prediction capabilities of static code metrics in open source software. In Proceedings of the international conferences on software process and product measurement (IWSM/Metrikon/Mensura ’08) (vol. 5338, pp. 331–343). Springer, LNCS.

  • Fiutem, R., & Antoniol, G. (1998). Identifying design-code inconsistencies in object-oriented software: A case study. In Proceedings of the international conference on software maintenance (ICSM’98). IEEE Computer Society.

  • Foster, J., Hicks, M., & Pugh, W. (2007). Improving software quality with static analysis. In Proceedings of the 7th ACM SIGPLAN-SIGSOFT workshop on program analysis for software tools and engineering (PASTE’07) (pp. 83–84). ACM Press.

  • Gleirscher, M., Golubitskiy, D., Irlbeck, M., & Wagner, S. (2012). On the benefit of automated static analysis for small and medium-sized software enterprises. In Lecture Notes in business information processing (vol. 94, pp. 14–38), previously accepted at: 1st Research Track at Software Quality Days, Vienna, 2012.

  • Heitlager, I., Kuipers, T., & Visser, J. (2007). A practical model for measuring maintainability. In Proceedings of the 6th international conference on quality of information and communications technology.

  • Hofer, C. (2002). Software development in Austria: Results of an empirical study among small and very small enterprises. In Proceedings of the 28th Euromicro conference (pp. 361–366). IEEE Computer Society. doi:10.1109/EURMIC.2002.1046219.

  • ISO/IEC 9126. (2003). Software engineering—product quality—quality model. International Standard.

  • ISO/IEC 25010. (2011). Systems and software engineering—systems and software quality requirements and evaluation (SQuaRE)—system and software quality models. International Standard.

  • Juergens, E. (2011). Why and how to control cloning in software artifacts. PhD thesis, Technische Universitaet Muenchen.

  • Juergens, E., Deissenboeck, F., & Hummel, B. (2009a). CloneDetective—A workbench for clone detection research. In Proceedings of the 31st international conference on software engineering (ICSE’09) (pp. 603–606). IEEE Computer Society. doi:10.1109/ICSE.2009.5070566.

  • Juergens, E., Deissenboeck, F., Hummel, B., & Wagner, S. (2009b). Do code clones matter? In Proceedings of the 31st international conference on software engineering (ICSE’09) (pp. 485–495). IEEE Computer Society.

  • Juergens, E., & Göde, N. (2010). Achieving accurate clone detection results. In Proceedings of the 4th international workshop on software clones (pp. 1–8). ACM Press.

  • Kautz, K. (1999). Making sense of measurement for small organizations. IEEE Software, 16, 14–20.

  • Kautz, K., Hansen, H. W., & Thaysen, K. (2000). Applying and adjusting a software process improvement model in practice: The use of the ideal model in a small software enterprise. In Proceedings of the 22nd international conference on Software engineering (ICSE ’00) (pp. 626–633). New York, NY: ACM. doi:10.1145/337180.337492.

  • Kienle, H., Kraft, J., & Nolte, T. (2012). System-specific static code analyses: A case study in the complex embedded systems domain. Software Quality Journal, 20, 337–367. doi:10.1007/s11219-011-9138-7.

  • Kitchenham, B., & Pfleeger, S. L. (1996). Software quality: The elusive target. IEEE Software, 13(1), 12–21.

  • Knodel, J., & Popescu, D. (2007). A comparison of static architecture compliance checking approaches. In Proceedings of the IEEE/IFIP working conference on software architecture (WICSA’07) (pp. 12–12). IEEE Computer Society.

  • Koschke, R. (2007). Survey of research on software clones. In Duplication, redundancy, and similarity in software, Schloss Dagstuhl.

  • Koschke, R., & Simon, D. (2003). Hierarchical reflexion models. In Proceedings of the 10th working conference on reverse engineering (WCRE’03) (p. 368). IEEE Computer Society.

  • Kremenek, T. (2008). From uncertainty to bugs: Inferring defects in software systems with static analysis, statistical methods, and probabilistic graphical models. PhD thesis, Dept. of Computer Science, Stanford University.

  • Lague, B., Proulx, D., Mayrand, J., Merlo, E. M., & Hudepohl, J. (1997). Assessing the benefits of incorporating function clone detection in a development process. In Proceedings of the international conference on software maintenance (ICSM’97) (pp. 314–321). IEEE Computer Society.

  • Lanubile, F., & Mallardo, T. (2003). Finding function clones in web applications. In Proceedings of the 7th European conference on software maintenance and reengineering (CSMR 2003) (pp. 379–388). IEEE Computer Society.

  • Littlewood, B., Popov, P. T., Strigini, L., & Shryane, N. (2000). Modeling the effects of combining diverse software fault detection techniques. IEEE Transactions on Software Engineering, 26, 1157–1167. doi:10.1109/32.888629. http://portal.acm.org/citation.cfm?id=358134.357482

  • Lochmann, K. (2010). Engineering quality requirements using quality models. In Proceedings of 15th international conference on engineering of complex computer systems (ICECCS’10). IEEE Computer Society, St. Anne’s College, University of Oxford, United Kingdom.

  • Lochmann, K. (2012). A benchmarking-inspired approach to determine threshold values for metrics. In Proc. of the 9th International Workshop on Software Quality (WoSQ’12). ACM, Research Triangle Park, Cary; (to appear in November 2012).

  • Lochmann, K., & Goeb, A. (2011). A unifying model for software quality. In Proceedings of the 8th international workshop on software quality (WoSQ’11). Szeged: ACM.

  • Mattsson, A., Lundell, B., Lings, B., & Fitzgerald, B. (2007). Experiences from representing software architecture in a large industrial project using model driven development. In Proceedings of the second workshop on sharing and reusing architectural knowledge architecture, rationale, and design intent (SHARK-ADI ’07). IEEE Computer Society. doi:10.1109/SHARK-ADI.2007.7.

  • McCall, J. A., Richards, P. K., & Walters, G. F. (1977). Factors in software quality. National Technical Information Service.

  • Mishra, A., & Mishra, D. (2006). Software quality assurance models in small and medium organisations: A comparison. International Journal of Information Technology and Management, 5(1), 4–20.

  • Passos, L., Terra, R., Valente, M. T., Diniz, R., & das Chagas Mendonca, N. (2010). Static architecture-conformance checking: An illustrative overview. IEEE Software, 27, 82–89. doi:10.1109/MS.2009.117.

  • Pino, F. J., Garcia, F., & Piattini, M. (2008). Software process improvement in small and medium software enterprises: A systematic review. Software Quality Journal, 16(2), 237–261. doi:10.1007/s11219-007-9038-z.

  • Pino, F. J., Garcia, F., & Piattini, M. (2009). Key processes to start software process improvement in small companies. In Proceedings of the 2009 ACM symposium on applied computing (SAC ’09) (pp. 509–516). New York, NY: ACM. doi:10.1145/1529282.1529389.

  • Plösch, R., Gruber, H., Körner, C., Pomberger, G., & Schiffer, S. (2009). A proposal for a quality model based on a technical topic classification. In Tagungsband des 2. Workshops zur Software-Qualitätsmodellierung und -bewertung.

  • Plösch, R., Gruber, H., Körner, C., & Saft, M. (2010). A method for continuous code quality management using static analysis. In Proceedings of the seventh international conference on the quality of information and communications technology (QUATIC) (pp. 370–375). IEEE Computer Society.

  • Pusatli, O., & Misra, S. (2011). A discussion on assuring software quality in small and medium software enterprises: An empirical investigation. Technical Gazette, 18(3), 447–452.

  • Richardson, I., & VonWangenheim, C. (2007). Guest editors’ introduction: Why are small software organizations different? IEEE Software, 24(1), 18–22. doi:10.1109/MS.2007.12. http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4052546.

  • Rosik, J., Le Gear, A., Buckley, J., & Babar, M. (2008). An industrial case study of architecture conformance. In Proceedings of the 2nd ACM-IEEE international symposium on empirical software engineering and measurement (ESEM ’08) (pp. 80–89). ACM Press.

  • Roy, C. K., & Cordy, J. R. (2007). A survey on software clone detection research. Tech. rep., Queen’s University at Kingston.

  • Ruthruff, J. R., Penix, J., Morgenthaler, J. D., Elbaum, S., & Rothermel, G. (2008). Predicting accurate and actionable static analysis warnings: An experimental approach. In Proceedings of the 30th international conference on Software engineering (ICSE ’08) (pp. 341–350). New York, NY: ACM. doi:10.1145/1368088.1368135.

  • Sangal, N., Jordan, E., Sinha, V., & Jackson, D. (2005). Using dependency models to manage complex software architecture. In: Proceedings of the 20th annual ACM SIGPLAN conference on object-oriented programming, systems, languages, and applications (OOPSLA ’05) (pp. 167–176). ACM Press. doi:10.1145/1094811.1094824.

  • Sjøberg, D. I. K., Anda, B., & Mockus, A. (2012). Questioning software maintenance metrics: A comparative case study. In P. Runeson, M. Höst, E. Mendes, A. A. Andrews, & R. Harrison (Eds.), ESEM (pp. 107–110). ACM.

  • Wagner, S. (2008). Defect classification and defect types revisited. In Proceedings of the 2008 workshop on defects in large software systems (DEFECTS 2008) (pp. 39–40). ACM Press.

  • Wagner, S., Deissenboeck, F., Aichner, M., Wimmer, J., & Schwalb, M. (2008). An evaluation of two bug pattern tools for java. In Proceedings of the first international conference on software testing, verification, and validation (ICST 2008) (pp. 248–257). IEEE Computer Society.

  • Wagner, S., Juerjens, J., Koller, C., & Trischberger, P. (2005). Comparing bug finding tools with reviews and tests. In Proceedings of the 17th international conference on testing of communicating systems (TestCom ’05), LNCS (vol. 3502, pp. 40–55).

  • Wagner, S., Lochmann, K., Heinemann, L., Kläs, M., Lampasona, C., Trendowicz, A., et al. (2013). Practical product quality modelling and assessment: The Quamoco approach. Submitted manuscript.

  • Wagner, S., Lochmann, K., Heinemann, L., Kläs, M., Trendowicz, A., Plösch, R., et al. (2012a). The Quamoco product quality modelling and assessment approach. In Proceedings of the 34th international conference on software engineering.

  • Wagner, S., Lochmann, K., Winter, S., Goeb, A., & Klaes, M. (2009). Quality models in practice: A preliminary analysis. In Proceedings of the 3rd international symposium on empirical software engineering and measurement. doi:10.1109/ESEM.2009.5316003.

  • Wagner, S., Lochmann, K., Winter, S., Goeb, A., Kläs, M., & Nunnenmacher, S. (2012b). Software quality models in practice. Technical Report TUM-I129, Technische Universität München, Institut für Informatik.

  • von Wangenheim, C. G., Anacleto, A., & Salviano, C. F. (2006). Helping small companies assess software processes. IEEE Software, 23, 91–98.

  • Zheng, J., Williams, L., Nagappan, N., Snipes, W., Hudepohl, J. P., & Vouk, M. A. (2006). On the value of static analysis for fault detection in software. IEEE Transactions on Software Engineering, 32, 240–253. doi:10.1109/TSE.2006.38.

Download references

Acknowledgments

We would like to thank Christian Pfaller, Bernhard Schätz and Elmar Jürgens for their technical and organisational support throughout the project. The authors owe sincere gratitude to Klaus Lochmann for his advice and support on issues related to quality models. We thank all involved companies as well as the OpenMRS lead developers for their irreproachable collaboration and assistance. Last but not least, we thank Veronika Bauer, Georg Hackenberg, Maximilian Junker and Kornelia Kuhle as well as our anonymous peer reviewers for many helpful remarks.

Author information

Corresponding author

Correspondence to Mario Gleirscher.

Appendix: Results of the two questionnaires and the quality model

See Tables 9, 10, 11, 12 and 13.

Table 9 Summary of closed answers to the questionnaire for RQ 2.2 (five results; contents and answers have been translated from German to English)
Table 10 Summary of comments and open answers to the questionnaire for RQ 2.2 (five results; contents and answers have been translated from German to English; SS = study subject; except for SS3, SSx corresponds to SOx)
Table 11 Quality model results matched with individual ASA results for RQ 3.1 (focus on the three worst-rated characteristics; measures are ordered by their weighted impact)
Table 12 Results of the quality model for RQ 3.1, rounded to one decimal place; ratings are given as German school grades (1: excellent, 6: insufficient)
Table 13 Results of the questionnaire: comparison between the results of the quality model and the study participants' opinions (RQ 3.2)
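
Table 12 expresses the assessment results as German school grades between 1 (excellent) and 6 (insufficient). As a hedged sketch of how such grades can be obtained from aggregated analysis results (the characteristic names, weights, scores and the simple linear mapping below are made up for illustration and do not reproduce the calibrated Quamoco evaluation functions), consider:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative only: weighted aggregation of normalised quality scores
    // (1.0 = best, 0.0 = worst) followed by a linear mapping to school grades.
    public class GradeSketch {

        static double aggregate(Map<String, Double> scores, Map<String, Double> weights) {
            double weightedSum = 0.0;
            double totalWeight = 0.0;
            for (Map.Entry<String, Double> entry : scores.entrySet()) {
                double weight = weights.getOrDefault(entry.getKey(), 1.0);
                weightedSum += weight * entry.getValue();
                totalWeight += weight;
            }
            return totalWeight == 0.0 ? 0.0 : weightedSum / totalWeight;
        }

        // Map a score in [0, 1] linearly to a grade in [1, 6]: 1.0 -> 1, 0.0 -> 6.
        static double toSchoolGrade(double score) {
            double clamped = Math.max(0.0, Math.min(1.0, score));
            return 6.0 - 5.0 * clamped;
        }

        public static void main(String[] args) {
            Map<String, Double> scores = new LinkedHashMap<>();
            scores.put("Maintainability", 0.72);
            scores.put("Functional suitability", 0.85);
            scores.put("Security", 0.60);

            Map<String, Double> weights = new LinkedHashMap<>();
            weights.put("Maintainability", 2.0);
            weights.put("Functional suitability", 1.0);
            weights.put("Security", 1.0);

            double overall = aggregate(scores, weights);
            System.out.printf("Overall score %.2f corresponds to school grade %.1f%n",
                    overall, toSchoolGrade(overall));
        }
    }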


Cite this article

Gleirscher, M., Golubitskiy, D., Irlbeck, M. et al. Introduction of static quality analysis in small- and medium-sized software enterprises: experiences from technology transfer. Software Qual J 22, 499–542 (2014). https://doi.org/10.1007/s11219-013-9217-z

