Modeling Software Multi Up-Gradations with Error Generation and Fault Severity

The world is moving very fast, and with each new step our desire for something more is increasing in all aspects of life. Keeping in mind society's growing demand for highly equipped software, suppliers tend to come up with up-gradations very frequently, hoping to make an impression in the market. Each organization is battling to showcase that its product is an enhanced version. As a result, companies are coming up with innovative functionalities at regular intervals. In order to gain an upper hand among competitors, the judgement on the apt release time of a software product is extremely sensitive. This decision is largely governed by the company's expertise in removing all kinds of faults from the software. A multi-release upgraded software reliability growth model is developed to trace the effects of the faults queued for the existing software and of those remaining in the software at different intervals. The model ascertains the bugs remaining in the software during its operational phase while the new code is being tested, in other words, the bugs that occur in the course of adding new features to the current software. Further, the case of three types of faults existing in the software, namely simple, hard and complex, has been incorporated to model the fault removal phenomenon. A data analysis is given to support the results.


Introduction
The global economy of the present world is highly interconnected, and to ensure efficacy and advancement companies depend heavily on their IT departments, which in turn rely on numerous software products. A software product must therefore be reliable if it is to serve its designated purpose in the business. Lately, software has had a great impact on our day-to-day lives, so software developers are focusing more on developing technologies that can build high-quality computer programs. As a result, software companies are striving to bring out upgraded versions from time to time. The operation of a new technology over its lifetime is described by the S-shaped (sigmoid) curve, signifying that more effort is required in the initial phase. After achieving the anticipated level of operational reliability set by the firm, the software is upgraded, replacing the prevailing product in the market with a version offering enhanced functionalities.
In order to survive strong market pressure, firms tend to keep upgrading their software and re-release it with new features. The upgraded version is then able to improve the software's performance and functionality. Incorporating this modern concept of multi-up-gradation, many software reliability models have been proposed. Kapur et al. (2010a) discussed a multi-release software model which identifies the faults when the software is in its operational phase and is being tested for additional features. The additional features can make the software highly complex and thus lead to faults being generated. Singh et al. (2012a) proposed two multi-up-gradation software reliability growth models for multiple releases. The SRGMs were modeled using the Normal and Logistic distributions to identify the faults of the previous as well as the next release. The models discussed above assumed a perfect debugging environment. There can be many instances when the testing team is not able to debug perfectly, which affects the eventual fault count either through incomplete fault removal (i.e. the imperfect fault removal phenomenon) or through the introduction of some new flaws (Aggarwal et al., 2011; Kapur et al., 2010b; Singh et al., 2014c).
Another stream of research has focused on the impact of fault severity within the fault removal process for multi-upgraded software. This categorization is based upon the time taken for fault isolation and removal after observation. On this basis, faults can be categorized as 'simple faults' if the time between their observation, isolation and removal is negligible; 'hard faults', for which more effort and time are required to observe and isolate the faults; and 'complex faults', for which a considerable amount of both effort and time is required for the faults to be observed, identified and debugged (Kapur et al., 2011a).
Several researchers have incorporated this concept into the fault removal phenomenon for software with multiple releases. Singh et al. (2014a) considered a fault-severity-based multi-up-gradation software reliability growth model, with a trend line showing the reliability growth across the various releases of the software. Another stream has focused on the inclusion of testing effort in the fault removal process: Singh et al. (2012b) developed a multi-up-gradation software reliability model with testing effort. They described the procedure for incorporating the Weibull testing-effort function into software multi-up-gradation reliability models, and further used an Exponential Power distribution to identify the faults in the software and predict the faults in its successive release. As an extension of this work, Singh et al. (2015a) considered the severity of the faults induced along with the testing effort consumed when multiple releases of the software take place. The faults of the previous release as well as of the current release are considered to understand the effect of faults on the software, and the fault removal rate is taken to depend on the testing effort put in throughout the testing process. Singh et al. (2014b) used different fault removal processes based on the generalized Erlang model for the different releases of multi-upgraded software. Singh et al. (2014c) discussed an imperfect debugging environment in the multiple releases of the software; their model describes fault detection and correction as a two-step process and applies it to the two types of imperfect debugging, i.e. incomplete fault removal and error generation while removing a fault. Kapur et al. (2014) considered a Distributed Development Environment (DDE) in modeling software multi-up-gradations, using probability distribution functions for the removal of faults in newly developed and reused components.
Singh et al. (2015b) modeled fault detection as a stochastic process with continuous state space in a multi-up-gradation model, considering fault severity and the effect of learning. The simple faults are removed at an exponential rate, whereas the hard faults are removed according to the Yamada model with a learning-effect function (Kapur et al., 2011a).
International Journal of Mathematical, Engineering and Management Sciences, Vol. 3, No. 4, 429-437, 2018. https://dx.doi.org/10.33889/IJMEMS.2018.3.4-030

Enormous work has been done in the field of software engineering to model the fault removal process under the concept of multi-up-gradation with the inclusion of fault severity, imperfect debugging, testing effort and more (Anand et al., 2015; Garmabaki et al., 2014; Singh et al., 2011). Moving with the same trend, in this paper emphasis is laid on modeling the mean value function for multi-upgraded software under the impact of fault severity, with the assumption that not all faults are removed perfectly, i.e. new faults might be added. In order to differentiate between the types of faults based on their severity, three different forms of distribution function have been considered. The rest of the paper is organized as follows: the notation is explained in section 2, the software reliability modeling and multi-up-gradation modeling framework are demonstrated in section 3, and model validation and the conclusion are stated in sections 4 and 5.

Software Reliability Modeling
Vast literature exists in the field of multi up-gradation to model the Fault Removal Phenomenon (FRP) for any software system using the Non-Homogeneous Poisson Process (NHPP). Before we begin with the FRP, it is important to have insight into the unified approach to modeling the FRP (Kapur et al., 2011a). Under error generation, the total fault content at time t is the initial number of faults that have to be detected plus the new faults that might get introduced during debugging:

a(t) = a + α·m(t)    (1)

where a is the initial fault content, m(t) is the expected number of faults removed by time t, and α (0 ≤ α < 1) is the error-generation rate. Assuming that the rate at which faults are removed is directly proportional to the number of faults remaining in the system, the differential equation for the mean value function, incorporating error generation and the hazard-rate approach, is:

dm(t)/dt = [f(t)/(1 - F(t))]·[a(t) - m(t)]    (2)

where F(t) is the fault-detection distribution with density f(t). Solving Eq. (2) under the initial condition that at the start of the FRP there is zero fault detection or correction, i.e. m(0) = 0, we have:

m(t) = [a/(1 - α)]·[1 - (1 - F(t))^(1-α)]    (3)

The above equation represents a general equation for an FRP. In order to include the impact of fault severity, equation (3) can be modified as:

m(t) = Σ_{i=1}^{3} [a_i/(1 - α_i)]·[1 - (1 - F_i(t))^(1-α_i)]    (4)

where i = 1, 2, 3 indexes the simple, hard and complex fault classes respectively.
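The closed form of equation (3) is straightforward to evaluate numerically. The following is a minimal sketch, assuming an exponential (G-O) detection distribution; the parameter values (a = 100, b = 0.1, α = 0.2) are illustrative assumptions, not estimates from the paper.

```python
import math

def mvf(t, a, alpha, F):
    """Unified mean value function with error generation:
    m(t) = a/(1 - alpha) * [1 - (1 - F(t))**(1 - alpha)]."""
    return a / (1.0 - alpha) * (1.0 - (1.0 - F(t)) ** (1.0 - alpha))

def F_exp(t, b=0.1):
    """Exponential (G-O) detection distribution, F(t) = 1 - e^(-b*t)."""
    return 1.0 - math.exp(-b * t)

# With alpha = 0 (perfect debugging) the expression reduces to the
# classical NHPP form m(t) = a * F(t).
```

For instance, mvf(t, 100.0, 0.2, F_exp) tends to a/(1 - α) = 125 expected removals as t grows, reflecting the extra faults introduced by imperfect debugging.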
Here the emphasis is given to model the case of successive releases of software discussed below.

Release I
The competitive environment in the market has pushed software firms to release multiple versions of their software at certain time points. Software based on a product family is upgraded by adding new, tested features in order to sustain itself in the market. A newly founded software product is more prone to failure under usage; therefore, the software testing team has to pay extra attention during the test-case execution process. The test cases executed for this release should capture as many faults as possible and provide positive assurance about the quality of the software. Due to complexity in the software code, there exist cases in which debugging the faults underlying the system might introduce some new faults. In this modeling framework, we have considered that the fault removal process behaves according to the existence of various types of faults and newly generated faults. Here we consider three types of faults: simple, hard and complex. The total number of bugs is then resolved by considering an exponential rate (G-O model) for simple faults, the Yamada (two-stage) model for hard faults and the Erlang three-stage model for complex faults (Goel and Okumoto, 1979; Kapur et al., 2011a; Yamada et al., 1983). The mathematical expression for release I is given as follows:

m_1(t) = Σ_{i=1}^{3} [a_{i1}/(1 - α_{i1})]·[1 - (1 - F_{i1}(t))^(1-α_{i1})],  0 ≤ t ≤ t_1    (5)

where a_{i1} and α_{i1} are the initial fault content and error-generation rate of severity class i (i = 1: simple, i = 2: hard, i = 3: complex), and the detection distributions are:

F_{11}(t) = 1 - e^(-b_{11}·t),
F_{21}(t) = 1 - (1 + b_{21}·t)·e^(-b_{21}·t),
F_{31}(t) = 1 - (1 + b_{31}·t + (b_{31}·t)^2/2)·e^(-b_{31}·t)    (6)
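The release-I mean value function, summing the three severity classes with the G-O, Yamada two-stage and Erlang three-stage detection distributions, can be sketched numerically as follows; the parameter values are illustrative assumptions, not the paper's estimates.

```python
import math

def F_simple(t, b):
    """Exponential (G-O) detection distribution for simple faults."""
    return 1.0 - math.exp(-b * t)

def F_hard(t, b):
    """Yamada two-stage detection distribution for hard faults."""
    return 1.0 - (1.0 + b * t) * math.exp(-b * t)

def F_complex(t, b):
    """Erlang three-stage detection distribution for complex faults."""
    return 1.0 - (1.0 + b * t + (b * t) ** 2 / 2.0) * math.exp(-b * t)

DISTS = (F_simple, F_hard, F_complex)

def m_release1(t, a, alpha, b):
    """Release-I mean value function: each severity class i has its own
    initial fault content a[i], error-generation rate alpha[i] and
    detection-rate parameter b[i]."""
    return sum(
        a[i] / (1.0 - alpha[i])
        * (1.0 - (1.0 - DISTS[i](t, b[i])) ** (1.0 - alpha[i]))
        for i in range(3)
    )

# Illustrative parameters in the order (simple, hard, complex)
a = (40.0, 30.0, 20.0)
alpha = (0.05, 0.10, 0.15)
b = (0.30, 0.20, 0.10)
```

As t grows, m_release1 approaches sum(a[i]/(1 - alpha[i])), i.e. the eventual fault content of each class inflated by its error-generation rate.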

Release II
The second release exists mainly to meet the enhanced user requirements gathered from feedback on the preceding release, to respond to the competitive environment and to provide new functionality based on the market scenario. The accumulation of new code results in an increase in the fault content, and the testing team has to design corresponding test cases to deal with the new functionality and its effects. The testing team for the second release will monitor and debug the upgraded version at a different rate. The test cases executed are designed to capture the discovery and removal of faults due to the added functionality as well as the remaining faults of the previous release. During the debugging process, under the impact of error generation, the various types of faults based on their severity level, whether of the current release or left over from the previous release, are removed with the newly allocated detection rates of simple, hard and complex faults respectively, i.e.

m_2(t) = Σ_{i=1}^{3} [(a_{i2} + ā_{i1})/(1 - α_{i2})]·[1 - (1 - F_{i2}(t - t_1))^(1-α_{i2})],  t_1 < t ≤ t_2    (7)

where ā_{i1} = a_{i1}·(1 - F_{i1}(t_1))^(1-α_{i1}) denotes the fault content of severity class i left undetected at the first release time t_1. In equation (7), it is part of our consideration that the left-over simple, hard and complex faults of the previous release are debugged together with the correspondingly generated faults of the same severity. The same set of equations can be extended for software with n releases, which can be given as follows:

m_n(t) = Σ_{i=1}^{3} [(a_{in} + ā_{i,n-1})/(1 - α_{in})]·[1 - (1 - F_{in}(t - t_{n-1}))^(1-α_{in})],  t_{n-1} < t ≤ t_n    (8)

with ā_{i,n-1} = (a_{i,n-1} + ā_{i,n-2})·(1 - F_{i,n-1}(t_{n-1} - t_{n-2}))^(1-α_{i,n-1}) and ā_{i0} = 0.
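Equation (7) can be sketched numerically as follows. The left-over content of each severity class is computed from the release-I parameters, and all parameter values below are illustrative assumptions rather than the paper's estimates.

```python
import math

def F_simple(t, b):
    """Exponential (G-O) detection distribution for simple faults."""
    return 1.0 - math.exp(-b * t)

def F_hard(t, b):
    """Yamada two-stage detection distribution for hard faults."""
    return 1.0 - (1.0 + b * t) * math.exp(-b * t)

def F_complex(t, b):
    """Erlang three-stage detection distribution for complex faults."""
    return 1.0 - (1.0 + b * t + (b * t) ** 2 / 2.0) * math.exp(-b * t)

DISTS = (F_simple, F_hard, F_complex)

def m_release2(t, t1, a1, alpha1, b1, a2, alpha2, b2):
    """Release-II mean value function for t > t1: the left-over faults of
    each severity class from release I are debugged together with the new
    faults of the same class, at the release-II detection rates."""
    total = 0.0
    for i in range(3):
        # Content of class i still undetected at the first release time t1
        left = a1[i] * (1.0 - DISTS[i](t1, b1[i])) ** (1.0 - alpha1[i])
        total += (a2[i] + left) / (1.0 - alpha2[i]) * (
            1.0 - (1.0 - DISTS[i](t - t1, b2[i])) ** (1.0 - alpha2[i])
        )
    return total

# Illustrative parameters for the two releases (simple, hard, complex)
t1 = 20.0
a1, alpha1, b1 = (40.0, 30.0, 20.0), (0.05, 0.10, 0.15), (0.30, 0.20, 0.10)
a2, alpha2, b2 = (25.0, 15.0, 10.0), (0.05, 0.10, 0.15), (0.35, 0.25, 0.15)
```

The n-release case follows the same pattern, carrying each class's undetected content forward from one release to the next.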

Model Validation and Data Analysis
The proposed model has been analyzed on a real-life data set. Two releases of the Tandem data set have been used to validate the model (Wood, 1996), and the parameters have been estimated using the SAS software package (SAS, 2004).
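The estimation step can be illustrated with a simple least-squares fit of the G-O mean value function to cumulative fault counts. The counts below are purely illustrative (not the actual Tandem data of Wood, 1996), and a coarse grid search stands in for the nonlinear regression carried out in SAS.

```python
import math

# Illustrative weekly cumulative fault counts (NOT the Tandem data)
weeks = list(range(1, 11))
cum_faults = [12, 22, 30, 37, 43, 48, 52, 55, 57, 59]

def m_go(t, a, b):
    """G-O mean value function m(t) = a * (1 - e^(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors between the model and the observed counts."""
    return sum((m_go(t, a, b) - y) ** 2 for t, y in zip(weeks, cum_faults))

# Coarse grid search for the least-squares estimates of (a, b)
a_hat, b_hat = min(
    ((a, i / 100.0) for a in range(50, 101) for i in range(1, 51)),
    key=lambda p: sse(*p),
)
```

In practice, goodness of fit would then be judged against each release's data with measures such as MSE or R^2.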

Conclusion
The cut-throat competition in the market has given birth to multiple versions of software. There is relentless pressure to beat the competitors in the daily grind, resulting in the sporadic up-gradation of the software across various releases. The need to improve functionality and remove bugs from earlier versions is the principal reason for these up-gradations. The errors in the upgraded version can either be generated from the newly added code or be errors remaining from before. The categorization into three kinds of errors, namely simple, hard and complex, is associated with the severity of the errors. Those that are convenient to remove are known as simple; those that are difficult and time-consuming are known as hard; and those that are comparatively more time-consuming and effort-intensive, and require considerable expertise, are known as complex. The testing team articulates different strategies for the different forms of errors rooted in the software, sometimes resulting in the addition of more faults. It is virtually impossible to eliminate all the bugs at once, as some bugs are passed on from the previous release. In this article, embracing the concepts of fault severity and error generation, we have proposed a mathematical framework of multi-up-gradation wherein all the faults generated are captured and concurrently removed along with the various kinds of faults remaining from the earlier release. Here, we have assumed that the remaining simple (hard or complex) faults will be removed with the simple (hard or complex) faults of the succeeding release.