Chapter One - Comparing Reuse Strategies in Different Development Environments

https://doi.org/10.1016/bs.adcom.2014.10.002

Abstract

There is a debate in the aerospace industry over whether lessons from reuse successes and failures in nonembedded software can be applied to embedded software. This chapter analyzes and compares reuse successes and failures in embedded versus nonembedded systems. A survey of the literature identifies empirical studies of reuse that can be used to compare reuse outcomes in embedded versus nonembedded systems. Reuse outcomes include amount of reuse, effort, quality, performance, and overall success. We also differentiate between types of development approaches to determine whether and how they influence reuse success or failure. In particular, for some development approaches, quality improvements and effort reduction are low for embedded systems. This is particularly relevant to the aerospace industry, which has been subject to reuse mandates for its many embedded systems.

Introduction

Reuse supposedly reduces development time and errors. “If a software package has been executing error-free in the field for an extended period, under widely varying, perhaps stressful, operating conditions, and it is then applied to a new situation, one strongly expects that it should work error-free in this new situation [1]” and “In theory, reuse can lower development cost, increase productivity, improve maintainability, boost quality, reduce risk, shorten life cycle time, lower training costs, and achieve better software interoperability [2].” The aerospace industry was an early advocate of reuse. Chapter 8 of Ref. [1] discusses reuse in aerospace, including potential savings in quality, cost, and productivity. In July 1992, the DoD released the DoD Software Reuse Initiative: Vision and Strategy [3]. The government invested heavily in reuse, e.g., the Control Channel Toolkit (CCT) in 1997 and the Global Broadcasting Service (GBS) beginning in 1998. Hence, one would expect large-scale planned reuse. Many government requests for proposals contain a requirement for quantifying expected savings from reuse.

However, from the beginning, there has been a debate on reuse success. In aerospace, not all reuse experiences have been as successful as expected. Some major projects experienced large overruns in schedule and budget, as well as inferior performance, at least in part due to the wide gap between reuse expectations and reuse outcomes [4,5]. This seemed to be especially the case for embedded systems. In many US government customer shops, reuse became a red flag when awarding contracts.

Many engineers believed that one root cause of the disconnect between reuse expectations and reuse realization was that estimated savings from reuse too often came from projects that differed from the target project. In particular, reuse estimates for nonembedded systems were applied to embedded systems. There was an ongoing debate as to whether embedded systems and nonembedded systems were similar as far as reuse was concerned. Many systems and software engineers, especially those who worked on embedded systems, claimed that reuse could not succeed in their systems because the code was optimized for particular processors that might not be used in the new project. Many embedded systems software engineers claimed that reusing software was more costly than building the system from scratch. Further, many claimed that model-based (MB) development (derived from the Initiative) in particular was more costly than other development approaches. Our major motivation was to investigate whether empirical studies exist that either support or contradict these opinions. Our research questions are:

  • Are embedded systems different with respect to reuse?

  • Do embedded systems employ different development approaches?

  • Does the development approach have an impact on reuse outcomes?

  • What types of empirical studies exist that analyze and/or compare reuse in embedded and nonembedded systems? Given that empirical studies can vary greatly in their rigor, this should give an indication of how “hard” the evidence is.

  • To what extent is there solid quantitative data paired with appropriate analysis?

  • Are there studies that either deal with aerospace projects or can reasonably be generalized for this domain?

  • What are the limitations of the current empirical evidence related to reuse?

In 2007, Mohagheghi and Conradi [6] conducted a survey assessing reuse in industrial settings. They studied the effects of software reuse in industrial contexts by analyzing peer-reviewed journals and major conferences between 1994 and 2005. Their paper's guiding question was “To what extent do we have evidence that software reuse leads to significant quality, productivity, or economic benefits in industry?” Reference [6] is a major step forward in identifying and measuring reuse effectiveness. Unfortunately, their work does not distinguish embedded versus nonembedded systems. By contrast, this chapter

  • Compares reuse effectiveness in embedded versus nonembedded systems

  • Compares reuse effectiveness for different development strategies

Section 2 describes these development strategies and defines embedded systems and types of empirical studies. Section 3 explains the review process used and the criteria for including or excluding papers. Section 4 analyzes reuse experiences for embedded versus nonembedded systems, the development approaches used, and the type of evidence available. Section 5 summarizes measures of reuse outcomes reported in these studies and compares differences between the types of metrics collected for embedded versus nonembedded systems. Section 6 analyzes the metrics in an attempt to answer the research questions above by comparing size, development approach, and reuse success for embedded versus nonembedded systems. Section 7 discusses threats to validity. Section 8 summarizes results and limitations, and makes suggestions for improvements.

Section snippets

Development Approaches

Reference [6] defines software reuse as “the systematic use of existing software assets to construct new or modified assets. Software assets in this view may be source code or executables, design templates, freestanding Commercial-Off-The-Shelf (COTS) or Open Source Software (OSS) components, or entire software architectures and their components forming a product line (PL) or product family. Knowledge is also reused, reflected in the reuse of architectures, templates, or processes.” To simplify

Review Process and Inclusion Criteria

The search considered studies published in peer-reviewed journals and conferences, industry forums such as SEI, industry seminars, symposia and conferences, and industry- and government-funded studies. Industry sources were especially useful for PL development, since academic sources rarely have the need or ability to develop a PL for evaluation purposes. Additional sources were monographs and technical reports (for example, Ref. [23]).

We searched the ACM digital library and IEEE Xplore,

Reuse and Development Approaches for Embedded versus Nonembedded Systems

We classified papers by type of system (embedded and nonembedded), development approach, and type of empirical research.

Some empirical studies covered combinations of two or more approaches. While the academic definitions of some approaches subsume other approaches, it was not clear that this was happening. One argument against this is that embedded systems tend to include performance and reliability models, MATLAB models, etc. Further, over time, some parts of the system may have switched

Metrics Reported

Next, we turn to the papers’ reporting of metrics. The types of metrics included size, reuse levels, quality, effort, performance, and programmatic (such as staff, institutionalized process, or schedule). We noticed that, while all of the metrics reported fit into these categories, the way the metrics were collected and reported differed. For example, size could mean the size of the project, the software size, the model size, the size of the system, the size of the software staff, or the size
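To make the notion of a reuse-level metric concrete, the sketch below shows one common way of expressing it: reused source lines as a fraction of total delivered source lines. The function name and figures are illustrative assumptions, not the metric definitions used by the surveyed papers.

```python
# Hedged sketch: "reuse level" as reused SLOC over total delivered SLOC.
# Different studies count size differently (project, software, model, staff),
# which is exactly why the reported metrics are hard to compare directly.

def reuse_level(reused_sloc: int, total_sloc: int) -> float:
    """Fraction of delivered code taken from existing assets."""
    if total_sloc <= 0:
        raise ValueError("total_sloc must be positive")
    return reused_sloc / total_sloc

# Illustrative example: a 50 KSLOC system containing 30 KSLOC of reused code.
level = reuse_level(30_000, 50_000)
print(f"reuse level: {level:.0%}")  # prints "reuse level: 60%"
```

Two studies reporting "60% reuse" may still be incomparable if one measured model size and the other software size, which motivates the per-category comparison in the following sections.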

Analysis of Outcomes

Unlike other surveys, which analyzed papers reporting on multiple studies but recorded results as a single data point per paper (e.g., Ref. [6]), we scored each project individually. For example, one study included 27 different projects with different results for different projects [87]. These are scored as 27 individual data points. When a study reported on reuse in both embedded and nonembedded systems, we included the individual projects in both categories, as appropriate. The remainder of
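The per-project scoring described above can be sketched as a simple flatten-and-tally step. The paper and project records below are illustrative assumptions, not the authors' actual data set.

```python
# Hedged sketch of per-project scoring: each paper may report several
# projects, and each project becomes its own data point (rather than
# collapsing a paper into a single data point, as in Ref. [6]).
from collections import Counter

papers = [
    {"ref": "[87]", "projects": [{"embedded": True,  "success": True},
                                 {"embedded": False, "success": False},
                                 {"embedded": True,  "success": True}]},
    {"ref": "[6]",  "projects": [{"embedded": False, "success": True}]},
]

# Flatten: one data point per project, not per paper.
points = [p for paper in papers for p in paper["projects"]]

# Tally outcomes separately for embedded and nonembedded systems.
tally = Counter((p["embedded"], p["success"]) for p in points)
print(len(points), dict(tally))
```

This weighting means a paper reporting 27 projects contributes 27 data points, so large multi-project studies influence the totals proportionally to the evidence they contain.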

Threats to Validity

Since much of the analysis is better classified as qualitative, we assess the following types of validity: descriptive validity, interpretive validity, theoretical validity, generalizability, and evaluative validity [95].

Descriptive Validity. Descriptive validity relates to the quality of what the researcher reports having seen, heard, or observed. Because observations are important, we needed to include grey literature. Since these are not always peer reviewed, the rigor of data collection or

Conclusion and Future Work

We analyzed empirical studies of reuse dating from the DoD's release of the DoD Software Reuse Initiative: Vision and Strategy (1992). We considered five development approaches to reusing software in both embedded and nonembedded types of software systems: ontology-based, PL, MB, component-based (CB), and unspecified (where the development approach was unknown). We considered eight different study types. Out of 84 candidate papers, only 43 had enough usable empirical content to enable a comparison of reuse outcomes in embedded

Acknowledgments

This material is based upon work supported in part by the National Science Foundation under grant # 0934413 to the University of Denver.

Appendix A: Years of Publication

One question was how empirical results on development strategies were evolving over time. Our study collected research since 1992. Empirical research on reuse has increased since 2001, after a short spike in the late 1990s. Nineteen of the 24 case studies were conducted since 2002, as were three of the five surveys. While in 1997 all of the empirical papers on development strategies were expert opinion, in 2007 and 2009 (the years with the most empirical papers on reuse) the types of studies were


References (95)

  • A. Orrego et al.

    On the relative merits of software reuse

  • L. Brown

    DoD Software Reuse Initiative: Vision and Strategy, Technical report, Department of Defense, 1225 Jefferson Davis Highway, Suite 910, Arlington, VA 22202–4301

    (July 1992)
  • T. Young

    Report of the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force on Acquisition of National Security Space Programs, Technical report, Office of the Under Secretary of Defense For Acquisition

    (May 2003)
  • M. Schwartz

    The Nunn-McCurdy Act: Background, Analysis, and Issues for Congress, Technical report

    (2010)
  • P. Mohagheghi et al.

    Quality, productivity and economic benefits of software reuse: a review of industrial studies

    Empirical Softw. Eng.

    (2007)
  • X. Cai et al.

    Component-based software engineering: technologies, development frameworks, and quality assurance schemes

  • J. Estefan

    Survey of candidate model-based systems engineering (MBSE) methodologies

  • SEI, Software Product Lines, Technical report, Software Engineering Institute (SEI) (2010). URL...
  • P.C. Clements et al.

    Software Product Lines: Practices and Patterns

    (2001)
  • Y. Peng et al.

    An ontology-driven paradigm for component representation and retrieval

  • R. Studer et al.

    Knowledge engineering: principles and methods

    IEEE Trans. Data Knowl. Eng.

    (1998)
  • S. Lee et al.

    Reusable software requirements development process: embedded software industry experiences

  • International Organization for Standardization, Information Technology—Programming Languages, Their Environments and...
  • E.A. Lee

    CPS foundations

  • C. Wohlin et al.

    Experimentation in Software Engineering

    (2012)
  • R.K. Yin

    Case Study Research Design and Methods

    (2008)
  • B.A. Kitchenham

    Procedures for Performing Systematic Reviews, Technical report TR/SE-0401, Software Engineering Group, Department of Computer Science, Keele University, NSW 1430

    (2004)
  • C. Zannier et al.

    On the success of empirical studies in the international conference on software engineering

  • E.K. Jackson et al.

    Towards a formal foundation for domain specific modeling languages

  • S. Hallsteinsen et al.

    Experiences in Software Evolution and Reuse: Twelve Real World Projects

    (1997)
  • E. Heinz et al.

    Experimental evaluation in computer science: a quantitative study

    J. Syst. Softw.

    (1995)
  • J. Guojie et al.

    Enhancing software reuse through application-level component approach

    J. Softw.

    (2011)
  • N. Ilk et al.

    Semantic Enrichment Process: An Approach to Software Component Reuse in Modernizing Enterprise Systems

    Inform. Syst. Front.

    (2011)
  • V. Koppen et al.

    An architecture for interoperability of embedded systems and virtual reality

    IETE Tech. Rev.

    (2009)
  • C.A. Welty et al.

    A formal ontology for re-use of software architecture documents

  • E.A. Lee

    Embedded software

  • G.L. Zuniga

    Ontology: its transformation from philosophy to information systems, in: FOIS ’01: Proceedings of the International Conference on Formal Ontology in Information Systems

    (2001)
  • A.M. de Cima et al.

    The design of object-oriented software with domain architecture reuse

  • R. Kamalraj et al.

    Stability-based component clustering for designing software reuse repository

    Int. J. Comput. Appl.

    (2011)
  • D.H. Zhang et al.

    A reference architecture and functional model for monitoring and diagnosis of large automated systems

  • S. Henninger

    An evolutionary approach to constructing effective software reuse repositories

    ACM Trans. Softw. Eng. Methodol.

    (1997)
  • S. Winkler et al.

    A survey of traceability in requirements engineering and model-driven development

    Softw. Syst. Model.

    (2010)
  • S. Bhatia et al.

    Remote specialization for efficient embedded operating systems

    ACM Trans. Program. Lang. Syst.

    (2008)
  • B. Graaf et al.

    Evaluating an embedded software reference architecture—industrial experience report

  • R. Holmes et al.

    Systematizing pragmatic software reuse

    ACM Trans. Softw. Eng. Methodol.

    (2013)
  • Eichmann, Factors in Reuse and Reengineering of Legacy Software, Tech. rep., Repository Based Software Engineering...

    Julia Varnell-Sarjeant received a B.S. in Economics from University of Missouri, Columbia, her Masters in Computer Science from the University of Colorado, Denver, and her Ph.D. from the University of Denver, Denver, Colorado. She has worked in the aerospace industry since 1981. She has been involved with numerous aerospace projects, including the Hubble Space Telescope, Mars Observer, NPOESS, GOES, GPS, Orion, and many classified programs. In 2011, she was chief engineer on a reference architecture project for AFRL. She developed company-wide classes in architecture fundamentals and DoDAF.

    Anneliese Amschler Andrews holds an M.S. and Ph.D. from Duke University and a Dipl.-Inf. from the Technical University of Karlsruhe. She served as Editor in Chief of the IEEE Transactions on Software Engineering. She has also served on several other editorial boards including the IEEE Transactions on Reliability, the Empirical Software Engineering Journal, the Software Quality Journal, the Journal of Information Science and Technology, and the Journal of Software Maintenance. She is Professor of Computer Science at the University of Denver. Dr. Andrews is the author of a text book and over 200 articles in the area of Software and Systems Engineering, particularly software testing, system quality, and reliability.
