Stochastic Formulation of a Fault-Severity-Based Multi-Release SRGM Using the Effect of Logistic Learning

In today's environment, software reliability is one of the major concerns for software firms. Many software reliability growth models (SRGMs) have been developed and many more are under development. To meet consumer requirements and to excel in a competitive environment, companies are coming up with multiple add-ons. We design the model as a stochastic process with a continuous state space because, for a large software system, the number of observed failures is huge, so the change in the number of errors detected/removed in each debugging step is small compared with the initial error content at the beginning of testing. This study adds to the software reliability literature by developing multi-release SRGMs in which each release depends on the previous releases. Errors are divided into categories by the severity of their removal, and a one-stage, two-stage, or three-stage fault removal process is applied accordingly in an environment of irregular fluctuations.


Introduction
In today's world, the computer is a vital part of our day-to-day activities. Since software is embedded in everything, reliable software is needed, and this need gives birth to software reliability engineering. The demand for reliable, long-lived, and increasingly complex systems motivates researchers to design tools and techniques not only to evaluate software quantitatively but also to estimate important measures such as software reliability, mean time to failure, the number of remaining faults, and failure intensity during the testing and operational phases; this activity is known as software reliability growth modeling (SRGM). Numerous SRGMs have been proposed and validated under various assumptions by many researchers across the world. Goel and Okumoto (1979) proposed an SRGM that describes the fault detection process as a non-homogeneous Poisson process (NHPP) and hypothesized that the fault removal rate is proportional to the number of remaining faults. Over the last two decades, many SRGMs have illustrated that the relation between testing time and the corresponding number of faults removed is exponential, S-shaped, or a mix of the two (Pham, 2006). Because of software augmentation at the consumer's end, conventional software reliability growth models fail to capture the error growth. Consequently, a new, upgraded version of the software is announced in the market once the software reaches the level of operational reliability desired by the firm.
To maintain rapport with their customers, renowned software firms such as IBM, Adobe, and Wipro work tirelessly. Their R&D departments keep a close watch on market strategies at the consumer's end and maintain their market presence not only by correcting errors in the present software but also by upgrading it, i.e., by regularly adding novel features during the useful-life phase. New add-ons or features are introduced into the present software on the basis of consumer requirements. On each upgrade, the software experiences an increase in failure rate; the failure rate then declines steadily as errors are found and corrected. Fig. 1 below portrays the increase in failure rate on account of the addition of novel functionality to the software (Kapur et al., 2006a, 2006b, 2010a). The enhanced and present software may differ with respect to execution, interface, serviceability, etc. Despite these differences, developers enhance the software so as to improve the product, in spite of the possibility that the upgraded version deteriorates. To upgrade the present software, only selected parts of the system are altered while the others remain the same. Consequently, the error content increases, which in turn motivates the testing team to detect the errors in the software. A secure enhancement not only improves the behavior of the system but also sustains the firm's market, whereas a vulnerable upgrade can create critical faults in the software. Kapur et al. (2006a) proposed a multi-release SRGM which states that the errors multiplying in each generation depend on all earlier releases and are corrected with certainty. The proposed model is grounded on the assumption that the total error correction process of a new release involves not only the errors reported from the first release onward but also the errors spawned as a result of the amalgamation of novel features into the present software. Lastly, the knowledge acquired by the testing team during testing is incorporated into the model as a learning factor, and, most importantly, the errors are differentiated on the basis of severity, i.e., simple, hard, and complex.

Assumptions
(i) The error identification/removal process is modeled as a non-homogeneous Poisson process (NHPP).
(ii) The number of errors identified at any time is proportional to the residual errors in the software.
(iii) The original error content is finite.
(iv) The error identification process is modeled as a stochastic process with a continuous state space.
(v) During testing, the number of errors (simple, hard, and complex) declines steadily.
(vi) Because of leftover errors in the system, the software encounters failures during execution.

SDE Based Modeling of Up-Gradations for Each Release
An SRGM is created for four consecutive releases, portraying the software failure occurrence phenomenon during the testing and operational phases by applying stochastic and statistical theory. The mathematical equations portray the behavior of errors corrected during testing. It is presumed that novel features are amalgamated into the software at each upgrade, so new code is written, which in turn creates new errors that are then identified during testing. The number of errors eradicated in each detection/correction step is very small compared with the original error content, a consequence of the large software system; this motivates treating the error detection/correction procedure as random. We implement a stochastic model of Itô type with a continuous state space to portray the fluctuating nature of the error detection process. Many SDE-based SRGMs of Itô type have been designed by Yamada et al. (2003), such as the exponential, delayed S-shaped, and inflection S-shaped stochastic differential equation models. A generalized SRGM based on an Itô-type SDE that includes three distinct error types, i.e., simple, hard, and complex, was proposed by Kapur et al. (2006b).
The number of errors detected in the software system up to testing time t is represented by a random variable {m(t), t >= 0}, where m(t) takes continuous real values. The NHPP model treats the software fault detection procedure during testing as having a discrete state space. The corresponding differential equation is given by (Kapur et al., 2010a; Kapur et al., 2010b; Kapur et al., 2006b; Pham and Zhang, 2003):

\[ \frac{dm(t)}{dt} = r(t)\,[a - m(t)] \]

Due to random environmental effects the rate is not completely known, so we have:

\[ \frac{dm(t)}{dt} = \{ r(t) + \sigma \gamma(t) \}\,[a - m(t)] \]

where r(t) is the time-dependent error identification/removal rate.

Let \( \gamma(t) \) be standard Gaussian white noise and let \( \sigma \) be a positive constant representing the intensity of the irregular fluctuations, as used in the noise term above. The equation can then be transformed into the following Itô-type stochastic differential equation:

\[ dm(t) = \Big\{ r(t) - \frac{\sigma^{2}}{2} \Big\}\,[a - m(t)]\,dt + \sigma\,[a - m(t)]\,dW(t) \]

where W(t), the one-dimensional Wiener process, is formally defined as the integration of the white noise \( \gamma(t) \) with respect to time t. The Wiener process W(t) is a Gaussian process with the following properties:

\[ \Pr[W(0) = 0] = 1, \qquad E[W(t)] = 0, \qquad E[W(t)W(t')] = \min(t, t'). \]

Since Brownian motion (the Wiener process) follows a normal distribution, the density function of W(t) (Singh et al., 2011) is portrayed as:

\[ f(W(t)) = \frac{1}{\sqrt{2\pi t}} \exp\!\left( -\frac{W(t)^{2}}{2t} \right) \]

As a result, the mean number of identified errors (Singh et al., 2011) is given as:

\[ E[m(t)] = a\left( 1 - e^{-\int_{0}^{t} r(s)\,ds \,+\, \sigma^{2} t / 2} \right) \]

This paper uses three distinct detection rates: simple errors are eradicated at an exponential rate, whereas hard and complex errors are detected/eradicated by a two-stage and a three-stage error eradication process, respectively. All three error identification/correction processes also incorporate the learning factor that results from the knowledge gained by the testing team (Pham, 2006).
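The Itô-type SDE above can be checked numerically against the closed-form mean. The sketch below (Python, standard library only) simulates the SDE by the Euler-Maruyama method under the illustrative assumption of a constant detection rate r(t) = b; the parameter values are arbitrary, not estimates from this study.

```python
import math
import random

def simulate_sde(a=100.0, b=0.1, sigma=0.05, T=50.0, n_steps=2000, n_paths=100, seed=1):
    """Euler-Maruyama simulation of the Ito SDE
    dm = (b - sigma^2/2)(a - m) dt + sigma (a - m) dW,
    assuming a constant detection rate r(t) = b. Returns the
    Monte Carlo estimate of E[m(T)] over n_paths sample paths."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        m = 0.0  # initial condition m(0) = 0
        for _ in range(n_steps):
            dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
            m += (b - sigma ** 2 / 2.0) * (a - m) * dt + sigma * (a - m) * dW
        total += m
    return total / n_paths

def mean_m(t, a=100.0, b=0.1, sigma=0.05):
    """Analytic mean E[m(t)] = a(1 - exp(-b t + sigma^2 t / 2))
    for the constant-rate case."""
    return a * (1.0 - math.exp(-b * t + sigma ** 2 * t / 2.0))
```

With these values the simulated mean should track the analytic mean closely, illustrating that the sigma^2/2 correction in the drift is what keeps the two consistent.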
Simple errors are designed with a one-stage (exponential) removal rate and logistic learning:

\[ F_{1}(t) = \frac{1 - e^{-b_{1} t}}{1 + \beta e^{-b_{1} t}} \]

Hard errors are designed as a two-stage process (Singh et al., 2010):

\[ F_{2}(t) = \frac{1 - (1 + b_{2} t)\,e^{-b_{2} t}}{1 + \beta e^{-b_{2} t}} \]

Complex errors are designed as a three-stage process:

\[ F_{3}(t) = \frac{1 - \left(1 + b_{3} t + \frac{b_{3}^{2} t^{2}}{2}\right) e^{-b_{3} t}}{1 + \beta e^{-b_{3} t}} \]

Applying the initial condition m(0) = 0 while solving the above equations, we get the mean number of removed faults for each severity class as m_i(t) = a_i F_i(t), with the stochastic fluctuation entering through the sigma^2 t / 2 term as above.
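The three severity classes can be written directly as code. The sketch below uses the standard one-, two-, and three-stage removal functions with a logistic learning denominator, as commonly written in this literature; treat the exact functional forms as assumptions, since the original equations are cited rather than quoted here.

```python
import math

def F_simple(t, b, beta):
    """One-stage (exponential) removal with logistic learning."""
    return (1.0 - math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

def F_hard(t, b, beta):
    """Two-stage (delayed S-shaped) removal with logistic learning."""
    return (1.0 - (1.0 + b * t) * math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

def F_complex(t, b, beta):
    """Three-stage (Erlang) removal with logistic learning."""
    poly = 1.0 + b * t + (b * t) ** 2 / 2.0
    return (1.0 - poly * math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

def mean_faults(t, a, b, beta, kind="simple"):
    """Mean removed faults m(t) = a * F(t) for the chosen severity class."""
    F = {"simple": F_simple, "hard": F_hard, "complex": F_complex}[kind]
    return a * F(t, b, beta)
```

Note the severity ordering this encodes: at any fixed time, the fraction of removed simple faults exceeds that of hard faults, which exceeds that of complex faults, because each extra stage delays removal.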

Modeling of Four Releases

Release-1
In release 1, simple errors are extracted at an exponential rate, hard faults by Yamada's delayed S-shaped identification/correction rate, and complex faults by the Erlang method, with the learning function incorporated into all three error detection/correction rates.

Release-2
The amalgamation of novel features into the present software triggers alteration of the existing code, which further increases the error content. While testing the newly formed code, the team not only eradicates certain errors (simple, hard, complex) of the previous release that lay dormant and were not detected/eradicated in the previous software version, but also considers the dependency and effect of the subsumed new features on the existing system. This whole procedure is completed before the upgraded version of the software is released in the market. During testing in this release, the leftover simple errors of release 1 are removed with the novel release-2 removal rate; similarly, the remaining hard and complex errors from the first release interact with the novel hard and complex identification/correction rates. Further errors are created on account of the feature enhancement, and a fraction of these errors is also eradicated during testing with the novel exposure rates, i.e., F_21(t - t_1) for simple errors, F_22(t - t_1) for hard errors, and F_23(t - t_1) for complex errors. The variation in the error detection rate is caused not only by the change in time and testing strategy but also by the change in error complexity resulting from the amalgamation of novel functionality. The resulting mean value function for release 2 (Tickoo et al., 2015) can be written compactly, for t_1 < t <= t_2, as:

\[ m_{2}(t) = \sum_{j} a_{2j}\, F_{2j}(t - t_{1}) + \sum_{j,k} p_{jk}\, \big( a_{1k} - m_{1k}(t_{1}) \big)\, F_{2j}(t - t_{1}) \]

where a_{2j} is the content of type-j faults introduced in release 2, a_{1k} - m_{1k}(t_1) is the number of type-k faults of release 1 left undetected at the release time t_1, and p_{jk} is the fraction of those leftover faults removed by the release-2 type-j removal rate.
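A minimal sketch of the release-2 idea, collapsed to a single severity class for readability: the release-2 removal rate acts both on the faults introduced by the new code and on a fraction p of the release-1 faults still latent at release time t1. All parameter names and values here are illustrative assumptions, not the paper's estimates.

```python
import math

def F_logistic(t, b, beta):
    # One-stage removal function with logistic learning (illustrative form).
    return (1.0 - math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

def release2_mean(t, t1, a1, a2, b1, b2, beta, p):
    """Schematic release-2 mean value function for t > t1:
    new release-2 faults plus the fraction p of release-1 faults
    left undetected at time t1, both removed with the release-2 rate.
    A simplified single-severity sketch of the multi-release idea."""
    leftover = a1 * (1.0 - F_logistic(t1, b1, beta))  # release-1 faults latent at t1
    return (a2 + p * leftover) * F_logistic(t - t1, b2, beta)
```

As t grows far beyond t1, the mean value approaches a2 + p * leftover, i.e., eventually all new faults and the claimed fraction of the carried-over faults are removed.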

Release-3
Likewise, for release 3 we consider not only the novel errors created in the third release but also the remnant errors (simple, hard, and complex) from the first and second releases. Because the proposed model has more parameters in release 3 than the number of available data points in the Tandem data, we increase the number of data points by taking the series mean of the available numbers of detected faults in the Tandem data, which is 37.92. The proposed model for release 3 (Tickoo et al., 2015) takes the same form as that of release 2, extended so that the leftover faults of both releases 1 and 2 are removed with the release-3 removal rates F_3j(t - t_2), for t_2 < t <= t_3.

Release-4
The amalgamation of new functionality is a continuous process which persists for as long as the software remains in use by consumers. During testing and integration of the code, this phenomenon helps detect/eradicate more and more errors, which firstly improves the quality and secondly the reliability of the software product under consideration. The new features are supplemented in the software for the third time, and the resulting model for the fourth release (Tickoo et al., 2015) follows the same pattern: the novel release-4 faults and the leftover faults of releases 1 to 3 are removed with the release-4 removal rates F_4j(t - t_3), for t_3 < t <= t_4.

Model Validation, Data Set and Data Analysis
To illustrate software reliability and to validate the proposed model, it is tested on the Tandem Computers data for four releases. For parameter estimation, the non-linear least squares technique is applied to the proposed model.
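Parameter estimation by non-linear least squares can be sketched as follows. Real studies use iterative NLLS solvers (e.g., SciPy's `curve_fit` or statistical packages); the crude grid search below only illustrates the sum-of-squares objective being minimized, with the one-stage logistic model assumed as the mean value function.

```python
import math

def F_simple(t, b, beta):
    # One-stage removal with logistic learning (assumed mean value form).
    return (1.0 - math.exp(-b * t)) / (1.0 + beta * math.exp(-b * t))

def sse(params, data):
    """Sum of squared errors between observed and predicted cumulative faults."""
    a, b, beta = params
    return sum((m_obs - a * F_simple(t, b, beta)) ** 2 for t, m_obs in data)

def fit_grid(data):
    """Crude non-linear least squares via grid search over (a, b, beta).
    Only an illustration of the objective; not a production estimator."""
    best, best_sse = None, float("inf")
    m_max = max(m for _, m in data)
    for a in [m_max * s for s in (1.0, 1.1, 1.25, 1.5, 2.0)]:
        for b in [0.02 * k for k in range(1, 26)]:
            for beta in (0.0, 0.5, 1.0, 2.0, 4.0):
                s = sse((a, b, beta), data)
                if s < best_sse:
                    best, best_sse = (a, b, beta), s
    return best, best_sse
```

The grid over a is anchored at the largest observed cumulative count, since the total fault content must be at least the number of faults already seen.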

Conclusion
This paper suggested a noise-based multi-upgradation model with three types of fault severity, incorporating the logistic learning function. Since every piece of software is hindered by bugs, software testing is imperative and is an ongoing process. While testing newly created code, there is the possibility of detecting/correcting errors of previously released software, which further enhances the reliability of the software product under consideration. The suggested multi-upgradation model is estimated on a four-release data set. In future, the proposed model can be extended further.

Fig. 1. Failure rate curve due to feature enhancements for software systems.

Figs. 2, 3, 4, and 5 depict the graphical representation of the goodness-of-fit curves of actual versus predicted values for releases 1 to 4, respectively.

Notation:
t_i: time of the i-th release (i = 1 to 4).
p_ik: fraction of the leftover faults of the previous k-th release removed by the corresponding fault removal rate of the new i-th release (e.g., simple faults removed by the new simple removal rate, simple faults removed by the new complex removal rate, hard faults removed by the new hard removal rate, or hard faults removed by the new complex removal rate).

Table 1 and Table 2 portray the estimated parameter values and the comparison criteria for the four software releases, respectively. The performance of the proposed design is analyzed with four common criteria, i.e., Bias, Variation, Root Mean Square Prediction Error (RMSPE), and Mean Square Error (MSE), on the basis of the data provided in Table 1, as follows: