An Instrument for Assessing Software Measurement Programs

Abstract

This paper reports on the development and validation of an instrument for the collection of empirical data on the establishment and conduct of software measurement programs. The instrument is distinguished by a novel emphasis on defining the context in which a software measurement program operates. This emphasis is perceived to be the key to 1) generating knowledge about measurement programs that can be generalised to various contexts, and 2) supporting a contingency approach to the conduct of measurement programs. A pilot study of thirteen measurement programs was carried out to trial the instrument. Analysis of this data suggests that collecting observations of software measurement programs with the instrument will lead to more complete knowledge of program success factors that will provide assistance to practitioners in an area that has proved notoriously difficult.

Cite this article

Berry, M., Jeffery, R. An Instrument for Assessing Software Measurement Programs. Empirical Software Engineering 5, 183–200 (2000). https://doi.org/10.1023/A:1026534430984
