A Comparison of Start-Up Demonstration Test Procedures Based on a Combinatorial Approach

A comparative study is presented of various start-up demonstration procedures based on a combinatorial approach. The expected number of required tests and the probability of accepting the tested unit are derived using a set of auxiliary functions. A constrained optimization problem is solved for minimizing the number of required tests subject to some confidence level requirements. The variables for this optimization include the total numbers of successes and failures and the maximal lengths of runs of successes and failures. Extensions are included to scan statistic-based and weighted tests, and also to the testing of several units in parallel. The alternative Markov Chain embedding approach may in some cases involve the inversion of very large matrices; this disadvantage does not exist here.


Abbreviations

TSTF - Total Successes Total Failures.
CSTF - Consecutive Successes Total Failures.
TSCSTF - Total Successes Consecutive Successes Total Failures.
TSCSTFCF - Total Successes Consecutive Successes Total Failures Consecutive Failures.
i.i.d. - independent and identically distributed.

Notation

N - the total number of start-ups (trials) until termination of the experiment.
E{N} - the expected value of N.
SD - the standard deviation of N.
Pa - the probability of acceptance of the unit.
u[.] - the unit step function.

Introduction
Start-up demonstration tests are set up to demonstrate the reliability of various kinds of equipment, such as lawn mowers, batteries, power generators, and so on. A variety of procedures has been developed for this purpose during the last decades. In general, the results of these tests lead to the decision whether to accept or to reject the unit under test. The underlying theory is the theory of runs. Runs are sets of consecutive successes or failures, and they are related to the well-known consecutive k-out-of-n systems. A survey of the theory may be found, for instance, in the book by Kuo and Zuo (2003) and in the paper by Eryilmaz (2010).
There exist several ways of handling start-up demonstration tests, including generating functions, the Markov Chain Embedding (MCE) technique, and the present combinatorial approach. An extensive survey of the various start-up procedures has been presented by Balakrishnan et al. (2014). The CS (Consecutive Successes) procedure was introduced first, by Hahn and Gage (1983) and by Viveros and Balakrishnan (1993). Accordingly, the tested unit is accepted if there exists a run of kcs consecutive successes. Thereafter, Gera (2004) presented the TSCS (Total Successes Consecutive Successes) model. Further on, adding the number of failures into the procedure, the CSTF procedure evolved: the equipment is accepted if there exists a run of kcs consecutive successes, and it is rejected if kf failures have appeared before (Balakrishnan and Chan, 2000; Eryilmaz and Chakraborti, 2008; Martin, 2004, 2008; Smith and Griffith, 2005, 2008). A generalization of these models to the TSCSTF (Total Successes Consecutive Successes Total Failures) and TSCSTFCF (Total Successes Consecutive Successes Total Failures Consecutive Failures) procedures was carried out by Gera (2010). Accordingly, the tested unit is accepted if either ks successes or a run of kcs consecutive successes is encountered before kf failures have been counted and before a run of kcf consecutive failures appears; otherwise, the unit is rejected. At first, these models involved simple i.i.d. binary tests (success or failure). Thereafter, previous-sum-dependent tests have been handled, where the probability of success of each test depends on the total number of previous successes (Velaisamy et al., 1996, 2007; Yalcin and Eryilmaz, 2012). Extensions have also been carried out to multi-state tests (Rakitzis and Antzoulakos, 2015; Smith and Griffith, 2011; Zhao et al., 2015).
The attribution of weights to consecutive components has been known before (Eryilmaz et al., 2009, 2014; Kamalija and Amrutkar, 2014). Recently, it has been suggested to use a procedure in which a weight is attached to each test. Accordingly, the sum of the weights of consecutive successes replaces the actual number of consecutive successes within a run, and the same goes for runs of failures.
The scan statistic-based CSDF procedure and its generalized version, the TSCSTFDF (Total Successes Consecutive Successes Total Failures Distant Failures) procedure, have been handled by Antzoulakos et al. (2009) and Zhao et al. (2015). Accordingly, introducing an additional parameter r, the tested equipment is accepted if there exists a run of successes of a fixed length (kcs) or a total number of successes (ks) before either a total number of kf failures or the occurrence of two failures that have fewer than r-1 successes between them (so that they are too close to each other).
The above single-dimensional concept may be generalized to the two-dimensional case (Gera, 2014; Zuo, 1993). For the purpose of designing a set of demonstration tests, two quantities are of interest. The basic one is the number of tests (N) that will be involved, represented by the expected number E{N}. In addition, we are interested in the probability of accepting the tested unit (Pa). The methods of calculating these quantities include generating functions and the finite Markov Chain Embedding approach.
The present paper indicates that in some cases it is preferable to use the combinatorial approach to evaluate the quantities of interest. It consists of defining auxiliary probability functions through which the parameters of interest are calculated. There are cases for which the Markov Chain Embedding approach involves the inversion of very large matrices; sometimes it is impractical, or even impossible, to carry out this task. This is evident when handling the option of running several units in parallel (the two-dimensional case). Our way requires in some instances less CPU time and, as will be seen, nearly no storage space. Further on, an optimization problem of minimizing the expected number of required tests subject to some confidence level constraints is solved. The present technique has also proved useful for solving related problems.

Basic Quantities
For i.i.d. binary tests, Table 1 presents some closed-form expressions for the two quantities mentioned above in relation to various basic procedures. Let N be the random variable representing the total number of required tests, E{N} its expected value, and Pa the probability of accepting the tested unit. To evaluate E{N} in more complicated cases, we first determine the values of the function h(n) = P{N>n} with the aid of auxiliary functions that will be presented further on. Then,

P{N=n} = P{N>n-1} - P{N>n}    (1)

and thus we obtain the value of E{N} from E{N} = Σn≥0 P{N>n}. Regarding the probability of acceptance Pa, acceptance may result from one of two occurrences: either the required total number of successes (ks) is encountered, or a specified run of kcs consecutive successes is achieved, before meeting either a certain total number of failures (kf) or a specified run of kcf failures. Pa,1 will denote the probability of the first case and Pa,2 that of the second. Then,

Pa = Pa,1 + Pa,2    (2)

This scheme is applied to the various procedures in the following section.
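As an illustration of this scheme, the following sketch (in Python; the function name is ours) computes h(n) = P{N>n} for the simple CS procedure, i.e., acceptance upon a run of kcs consecutive successes, and accumulates E{N} = Σn≥0 P{N>n}. It assumes i.i.d. trials with success probability p; the state vector tracks the probability of each current run length.

```python
def expected_tests_cs(p, kcs, tol=1e-12):
    """E{N} for the CS procedure: stop on the first run of kcs successes.

    Illustrative sketch only; uses E{N} = sum over n >= 0 of P{N > n}.
    """
    # state[j] = P{no run of kcs successes yet, current run length = j}
    state = [0.0] * kcs
    state[0] = 1.0
    en = 0.0
    h = 1.0                      # h = P{N > n}, starting with P{N > 0} = 1
    while h > tol:
        en += h
        new = [0.0] * kcs
        new[0] = h * (1.0 - p)   # a failure resets the run, wherever we were
        for j in range(kcs - 1):
            new[j + 1] = state[j] * p   # a success extends the run
        state = new              # mass reaching run length kcs is absorbed
        h = sum(state)
    return en
```

As a check, the result agrees with the known closed-form value (1 - p^kcs)/((1 - p) p^kcs); with p = 0.5 and kcs = 2 both give 6 expected tests.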

Auxiliary Functions
The above probability functions of interest, (1) and (2), are calculated with the aid of some auxiliary functions. Table 2 presents these various functions, and the interrelations between them are given in Table 3. Some additional notation is needed for each procedure, as follows. Adding the possibility of degradation as a third possible result of each test, we introduce an example of a multi-state set of tests with the additional notation:

p0 - probability of failure of each trial.
p1 - probability of full success of each trial.
p2 - probability of degradation of each trial.
ksd - the number of successes and/or degradations required for acceptance.
kcsd - the number of consecutive full successes and/or degradations required for acceptance.
Lns - the length of the longest run of full successes throughout n tests.
Lnsd - the length of the longest run of full successes and/or degradations throughout n tests.
Tns - the number of fully successful start-ups throughout n tests.
Tnd - the number of degraded start-ups throughout n tests.
Tnsd - the number of fully successful and/or degraded start-ups throughout n tests.
According to the procedure that involves distant failures (TSCSTFDF), the tested equipment is accepted if there exists a run of successes of a fixed length (kcs) or a total number of successes (ks) before either a total number of kf failures or the occurrence of two failures that have fewer than r-1 successes between them (so that they are too close to each other). It involves the following notation:

Dnf - the minimal spacing between adjacent failures throughout n tests.
r-2 - the maximal number of successes between failures causing rejection.

We can also attach a certain relative weight to each test and then observe the sums of the weights of consecutive successes and/or failures (the TSCSTFCFw procedure). Accordingly, the tested unit is accepted if either the total weight of successes is at least kws or a run of consecutive successes has a weight of at least kcws, before meeting a set of failures with total weight kwf and before meeting a run of consecutive failures with total weight of at least kcwf (and vice versa). The following notation is then needed:

w(n) - the weight of the n'th test.
pt, qt - probability of success, failure of the t'th trial (=p, q for i.i.d. tests).
kws - total weight of successes required for acceptance.
kwf - total weight of failures yielding rejection.
kcws - weight of a run of consecutive successes required for acceptance.
kcwf - weight of a run of consecutive failures yielding rejection.
Lnws - weight of the longest run of consecutive successes till n (inclusive).
Lnwf - weight of the longest run of consecutive failures till n (inclusive).
Tnws - total weight of successes till n (inclusive).
Tnwf - total weight of failures till n (inclusive).
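The weighted acceptance/rejection logic just described can be made concrete with a short sketch (the function name and argument layout are ours): it scans one observed outcome sequence, maintaining total and run weights of successes and failures, and reports the decision.

```python
def weighted_decision(outcomes, weights, kws, kcws, kwf, kcwf):
    """Apply the TSCSTFCFw rule to one observed sequence (illustrative sketch).

    outcomes: list of 1 (success) / 0 (failure); weights: per-test weights.
    Accept when total success weight >= kws or a success-run weight >= kcws;
    reject when total failure weight >= kwf or a failure-run weight >= kcwf.
    """
    tws = twf = rws = rwf = 0.0
    for x, w in zip(outcomes, weights):
        if x:
            tws += w
            rws += w
            rwf = 0.0            # a success breaks the failure run
            if tws >= kws or rws >= kcws:
                return "accept"
        else:
            twf += w
            rwf += w
            rws = 0.0            # a failure breaks the success run
            if twf >= kwf or rwf >= kcwf:
                return "reject"
    return "undecided"           # sequence ended before either criterion
```

For instance, with outcomes [1, 1, 0, 1], weights [1, 2, 1, 3] and kcws = 3, the success-run weight already reaches 3 at the second test, so the unit is accepted there.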
The following generalization to the two-dimensional (planar) case has been presented (Gera, 2014). The tested unit is accepted if either there is a total of ks successes or if M units have successes along the same kcs consecutive tests (so that we have a rectangular grid of M×kcs successes). It is rejected if, before fulfilling these criteria for success, either there is a total of kf failures or the M units fail at the same kcf consecutive tests (a rectangular grid of M×kcf failures).
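For the planar case, a Monte Carlo sketch is perhaps the simplest illustration (all names are ours; the criteria follow the description above, with acceptance checked before rejection at each stage):

```python
import random

def simulate_planar(p, M, ks, kcs, kf, kcf, rng):
    """One run of the planar TSCSTFCF test with M parallel units (sketch).

    Accept on ks total successes or on kcs consecutive stages in which all
    M units succeed; reject on kf total failures or on kcf consecutive
    stages in which all M units fail. Returns (accepted, number of stages).
    """
    ts = tf = run_all_s = run_all_f = 0
    n = 0
    while True:
        n += 1
        outcomes = [rng.random() < p for _ in range(M)]
        good = sum(outcomes)
        ts += good
        tf += M - good
        run_all_s = run_all_s + 1 if good == M else 0
        run_all_f = run_all_f + 1 if good == 0 else 0
        if ts >= ks or run_all_s >= kcs:
            return True, n
        if tf >= kf or run_all_f >= kcf:
            return False, n
```

Averaging over many such runs estimates Pa and E{N}. As a sanity check, with p = 1 (and ks set large) the unit is accepted exactly at stage kcs, for any M.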
The difference equations that are valid for these auxiliary functions, together with the additional notation involved (r = 0, 1), have been presented in the references cited above (Gera, 2010, 2014), including the multi-state case.
Some reasonable boundary conditions were added to these sets of difference equations.

The Probability Functions
Using the auxiliary functions and referring to Tables 1 and 2, the following probability functions are derived.
a. For binary tests (TSCSTFCF): instead of evaluating Pa,1 and Pa,2 directly, it is more convenient to handle the probabilities of rejection Pr,1 and Pr,2, which are expressed in terms of the previously defined auxiliary functions. Summing over the relevant f functions gives the two rejection probabilities and their total,

Pr = Pr,1 + Pr,2

and the probability of acceptance is then simply

Pa = 1 - Pr.
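These relations may be illustrated by a direct numerical sketch (the function name is ours): a forward recursion over the state (total successes, total failures, current success run, current failure run) lets rejected mass leave the system, so that Pa is obtained from the accepted mass and E{N} from the sum of the survival probabilities P{N > n}.

```python
from collections import defaultdict

def tscstfcf_metrics(p, ks, kcs, kf, kcf):
    """Pa and E{N} for the i.i.d. binary TSCSTFCF rule (illustrative sketch)."""
    # state: (total succ, total fail, succ-run length, fail-run length) -> prob
    states = {(0, 0, 0, 0): 1.0}
    pa, en = 0.0, 0.0
    while states:
        en += sum(states.values())          # adds P{N > n} for the current n
        nxt = defaultdict(float)
        for (s, f, rs, rf), pr in states.items():
            if s + 1 >= ks or rs + 1 >= kcs:
                pa += pr * p                # accepted on this success
            else:
                nxt[(s + 1, f, rs + 1, 0)] += pr * p
            if not (f + 1 >= kf or rf + 1 >= kcf):
                nxt[(s, f + 1, 0, rf + 1)] += pr * (1 - p)
            # mass hitting a rejection criterion simply leaves the system
        states = dict(nxt)
    return pa, en
```

For example, with p = 0.5 and ks = kcs = kf = kcf = 2 one obtains Pa = 0.5 and E{N} = 2.5, which is easy to verify by enumerating the possible outcome sequences of length up to three.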

The Optimization Problem
As mentioned above, two important quantities of interest are the expected number of required tests (E{N}) and the probability of acceptance of the tested equipment (Pa). Actually, we choose two probability values, an upper value pU and a lower value pL. Then, the tested unit should be accepted if the probability of success p of each individual test is higher than pU, and it should be rejected if p is lower than pL. This sets up a confidence level on the acceptance of the tested equipment, and we consider here the two types of error: Type I (rejection when p>pU) and Type II (acceptance when p<pL).
Considering the basic TSCSTFCF procedure, the optimization problem to be handled is the minimization of the number of required tests (through minimizing their expected number) subject to the above-mentioned confidence limits on the probability of acceptance. Explicitly, determine the values of ks, kcs, kf, kcf that minimize E{N} while satisfying the inequalities (30) on the probability of acceptance. It is problematic to provide a closed-form solution even in the simplest cases of the TSTF or CSCF procedures, since the unknowns appear as exponents within the above inequalities and simple closed-form expressions for their solution do not normally exist.
The optimization problem may be solved in various ways, for instance using generating functions or with the aid of the Markov Chain Embedding technique. Here we present an approach based on the above combinatorial scheme. Further on, extensions to dependent and multi-state tests have been carried out. The parameters of various other procedures mentioned above, like TSCSTFDF (distant failures) and the weighted TSCSTFCF, have also been optimized using the same approach. Finally, the optimization problem has been extended to the two-dimensional TSCSTFCF procedure based on the same technique.
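A brute-force version of this search can be sketched as follows (all names and the search ranges are ours; `metrics` repeats the forward recursion over the state (total successes, total failures, run lengths), and E{N} is evaluated at pU, one common choice):

```python
from collections import defaultdict

def metrics(p, ks, kcs, kf, kcf):
    """Pa and E{N} for the i.i.d. binary TSCSTFCF rule (sketch)."""
    states = {(0, 0, 0, 0): 1.0}
    pa, en = 0.0, 0.0
    while states:
        en += sum(states.values())
        nxt = defaultdict(float)
        for (s, f, rs, rf), pr in states.items():
            if s + 1 >= ks or rs + 1 >= kcs:
                pa += pr * p
            else:
                nxt[(s + 1, f, rs + 1, 0)] += pr * p
            if not (f + 1 >= kf or rf + 1 >= kcf):
                nxt[(s, f + 1, 0, rf + 1)] += pr * (1 - p)
        states = dict(nxt)
    return pa, en

def optimize(pU, pL, alpha, beta, kmax=6):
    """Minimize E{N} at pU subject to Pa(pU) >= 1 - alpha, Pa(pL) <= beta.

    Brute force over ks, kcs <= ks, kf, kcf <= kf up to kmax (a heuristic
    reduction, since a run bound above the total bound is inactive).
    """
    best, best_en = None, float("inf")
    for ks in range(1, kmax + 1):
        for kcs in range(1, ks + 1):
            for kf in range(1, kmax + 1):
                for kcf in range(1, kf + 1):
                    pa_u, en_u = metrics(pU, ks, kcs, kf, kcf)
                    if pa_u < 1.0 - alpha:
                        continue
                    pa_l, _ = metrics(pL, ks, kcs, kf, kcf)
                    if pa_l > beta:
                        continue
                    if en_u < best_en:
                        best, best_en = (ks, kcs, kf, kcf), en_u
    return best, best_en
```

For mild requirements such as pU = 0.9, pL = 0.4, α = β = 0.2 this already returns a feasible design within the small search box; the same loop structure extends to the weighted and multi-state variants at the cost of a larger state space.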

General Results
Numerical results have been obtained for the values of the various parameters involved in each procedure, the minimal expected number of required tests (together with its standard deviation Sd{N}), and the probabilities of acceptance at the upper and lower probability values pU, pL.
Obviously, these are dependent on the required level of confidence (α, β). We summarize those results in Tables 4 to 8. The calculation time of the optimum is also an important factor and is therefore added within these tables (see also Zhao et al., 2010). In some cases it has been observed that the calculations were significantly shorter using the present approach than using the Markov Chain Embedding technique (MCE). Also, since the MCE involves the inversion of large matrices, it cannot always be accomplished in a simple way. An example is presented in Table 7, where it seems that the only way to obtain a solution is by using the combinatorial approach.

International Journal of Mathematical, Engineering and Management Sciences, Vol. 3, No. 3, 195-219, 2018. ISSN: 2455-7749

The second row in Table 4 presents improved results in comparison to those shown in a previous paper. The procedure that involves a third state (not binary) requires a relatively long CPU time to arrive at an optimum, compared to the rather short calculation time needed for the binary procedures. As for the distant failures model, it was impossible to find parameters that cope with the constraints; it is thus assumed that an optimum does not exist.
The weighted procedure is observed to yield the least time-consuming set of tests (for testing a single unit). Regarding the planar case, as anticipated, the more units we test in parallel, the larger the saving in the duration of the tests; the standard deviation shrinks as well. In general, loosening the restriction on the confidence level yields lower values of the optimal E{N} and thus shortens the testing time. It is observed that the lowest value is achieved by the binary TSCSTFCF procedure.
The CPU time for the CSTF plan is longer than that of the more general TSCSTFCF plan, due to applying the general-purpose program to that special case. The compromise between shortening the testing time and the required confidence level is further observed here, as the values of the optimum are even lower than in the previous table.
Again, the binary TSCSTFCF procedure gives the minimal value of the optimum. Calculation times are not essentially different from those of the previous cases. The confidence level interval has been squeezed from (pU,pL)=(0.9,0.6) in Table 4 to (0.85,0.65) in Table 7, with the same confidence level values. Evidently, the more restrictive requirements in this table yield much more time-consuming sets of tests. Thus, the compromise between the need for tight requirements and the duration of the tests should be considered.
Regarding (1*), according to the Markov chain embedding approach, the optimal parameters in this case involve a large matrix of about 32000 × 32000 elements, which cannot be inverted in a simple way. Probably the only way of achieving the results in this case is by using the present combinatorial technique. This seems true also for the other procedures in this example.
The results in the first row (for CSTF) show an improvement with respect to those previously presented (Smith and Griffith, 2005), and likewise for the second row (TSCSTFCF).
As to the multi-state procedure (third row), no values are given since it required too long a computation time. Again, it seems that the distant failures procedure has no optimum in this case that copes with the constraints. In practice, the question is whether to adopt a weighted procedure or to use the normal binary TSCSTFCF model. The planar model is observed to reduce the testing time nearly linearly with respect to the number of tested units (M). The effect of loosening the constraints on the testing time can be seen in Fig. 1, where it is assumed that pU=0.9, pL=0.6. Obviously, the more restrictive requirements demand more tests. The question of which procedure to use evidently depends on these restrictions.

Additional Results
At first, we wish to examine the rapidity of convergence of the series that is used for calculating E{N} (Zhao et al., 2015).

Fig. 3. E{N}, Pa,U and runtime versus the upper limit K (31): TSCSTFCF, α=β=0.05, pU=0.99, pL=0.95.

Also in this case, the choice of K=300 will be satisfactory.
Often the question of the sensitivity of an optimum to variations in some of the variables is considered (Zhao et al., 2010). Some examples are now provided in relation to such variations.
Example C: TSCSTFCF, i.i.d., α=0.05, pU=0.9, pL=0.6. The variation of the optimum with respect to changes in β is presented in Table 9, and its variation with respect to changes in pL is given in Table 10. Again, shrinking the confidence interval requires many more tests.
The effect of varying each of the parameters on the expected number of tests is now analyzed. To this end, we observe normalized values of the various parameter variations and of E{N}.
Example E: TSCSTFCF, pU=0.9, pL=0.6, α=0.05, β=0.05. The variations of E{N} and Pa with respect to the four parameter variations are presented in Fig. 4. It is thus observed that E{N} is most sensitive to kcs variations, whereas Pa is most sensitive to changes in kcf and kf in this case.
The variations due to changes in the choice of pU,pL are now considered. Actually, it seems preferable to discuss the changes versus the discrimination factor pU/pL. Obviously, the more we 'squeeze' the discrimination interval, the more tests are required.

Conclusion
A comparison of procedures based on the combinatorial approach for handling start-up demonstration tests has been presented. The expected number of required tests and the probability of accepting the tested unit are derived using a set of auxiliary functions. A constrained optimization problem is solved, minimizing the number of required tests subject to some confidence level requirements. The variables for this optimization include the total numbers of successes and failures and the maximal lengths of runs of successes and failures. We have also considered the possibility of assigning weights to the tests and observing the total weights of runs of successes and failures. The single-dimensional theory has been generalized to include the possibility of testing a number of units in parallel, which considerably shortens the testing time.
The alternative Markov Chain Embedding (MCE) approach involves the inversion of large matrices. Thus, there exist examples for which that technique could not be applied, and it was necessary to use the present combinatorial approach. In some cases, the results were obtained in much shorter time than with MCE.
All in all, the underlying technique appears to be simple, efficient, and practical for application.