Impact of Architectural Smells on Software Performance: an Exploratory Study

Research article · Open Access
DOI: 10.1145/3593434.3593442
Published: 14 June 2023

ABSTRACT

Architectural smells have been studied in the literature from several angles, such as their impact on maintainability as a source of architectural debt, their correlation with code smells, and their evolution over the history of complex projects. This paper extends the study of architectural smells from a different perspective: we focus on software performance, and we aim to quantify the impact of architectural smells in order to explain the root causes of system performance hindrances. Our method consists of a study design that matches the occurrence of architectural smells with performance metrics. We exploit state-of-the-art tools for architectural smell detection, software performance profiling, and testing of the systems under analysis. Removing the architectural smells produces new versions of the systems, from which we derive observations on design changes that improve or worsen performance metrics. Our experimentation considers two complex open-source projects. Results show that detecting and removing two common types of architectural smells yields lower response times (up to ) with a large effect size, i.e., for – of the hotspot methods. Median memory consumption is also lower (up to ), with a large effect size for all the services.
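To make the reported "large effect size" concrete, the sketch below (our illustration, not code from the paper; the sample data and the choice of statistic are assumptions) shows how response-time measurements of a hotspot method, taken before and after smell removal, could be compared with the Vargha-Delaney A12 statistic, a nonparametric effect size commonly used in empirical software engineering.

# Illustrative sketch, not the authors' code: comparing response-time samples
# from a system version before and after architectural-smell removal using the
# nonparametric Vargha-Delaney A12 effect size. All data below is hypothetical.

def a12(before, after):
    # Probability that a sample drawn from `before` exceeds one drawn from
    # `after` (ties count half). Common interpretation thresholds:
    # ~0.56 small, ~0.64 medium, ~0.71 large.
    greater = sum(1 for b in before for a in after if b > a)
    ties = sum(1 for b in before for a in after if b == a)
    return (greater + 0.5 * ties) / (len(before) * len(after))

# Hypothetical response times (ms) of one hotspot method in both versions.
before = [118, 125, 131, 122, 140, 128]   # with the architectural smell
after = [96, 102, 99, 107, 94, 101]       # after the smell is removed

print(f"A12 = {a12(before, after):.2f}")  # close to 1.0 => large effect

On this toy data every "before" sample exceeds every "after" sample, so A12 = 1.0; in the study, such statistics are reported per hotspot method for response time and per service for memory consumption.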


Published in

EASE '23: Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering
June 2023, 544 pages
ISBN: 9798400700446
DOI: 10.1145/3593434
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers

• Research article
• Refereed limited

Acceptance Rates

Overall acceptance rate: 71 of 232 submissions, 31%