DOI: 10.1145/2961111.2962631

Is effectiveness sufficient to choose an intervention? Considering resource use in empirical software engineering

Published: 08 September 2016

ABSTRACT

Context: Software Engineering (SE) research with a scientific foundation aims to influence SE practice to enable and sustain efficient delivery of high-quality software. Goal: To improve the impact of SE research, one objective is to help practitioners choose empirically vetted interventions. Method: Literature from evidence-based medicine, economic evaluations in SE, and software economics is reviewed. Results: In empirical SE research, the emphasis has been on substantiating claims about the benefits of proposed interventions. However, to support informed decision making by practitioners regarding technology adoption, we must present a business case for these interventions, comprising not just effectiveness but also evidence of cost-effectiveness. Conclusions: This paper highlights the need to investigate and report the resources required to adopt an intervention. It also provides guidelines and examples to better support practitioners in decisions regarding technology adoption.
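
To make the resource-use argument concrete, the comparison the abstract calls for can be framed as an incremental cost-effectiveness ratio, a device used in the evidence-based medicine literature the paper draws on. The Python sketch below is a minimal illustration, not material from the paper; the intervention names, person-hour costs, and defect counts are invented assumptions.

# Minimal illustrative sketch (not from the paper): comparing a candidate
# intervention against a baseline on both effectiveness and resource use,
# via an incremental cost-effectiveness ratio (ICER) as used in
# health-economic evaluations. All figures are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per additional unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical baseline: current review practice.
baseline = {"cost_person_hours": 120.0, "defects_removed": 30}
# Hypothetical candidate: a new inspection technique.
candidate = {"cost_person_hours": 180.0, "defects_removed": 42}

ratio = icer(candidate["cost_person_hours"], baseline["cost_person_hours"],
             candidate["defects_removed"], baseline["defects_removed"])

# Prints 5.0: roughly five extra person-hours per additional defect removed,
# a figure a practitioner can weigh rather than judging effectiveness alone.
print(f"Incremental cost per extra defect removed: {ratio:.1f} person-hours")

Reporting the cost and effect figures separately, rather than only a verdict of "effective", is what lets a reader transfer such a result to their own cost structure.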


Published in: ESEM '16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, September 2016, 457 pages. ISBN: 9781450344272. DOI: 10.1145/2961111. Publisher: Association for Computing Machinery, New York, NY, United States. Copyright © 2016 ACM.

