A Genetic Algorithm Based Hybrid Approach for Reliability-Redundancy Optimization Problem of a Series System with Multiple-Choice

The goal of this paper is to introduce an application of a hybrid algorithm to reliability optimization problems for a series system with parallel redundancy and multiple-choice constraints, both to maximize the system reliability subject to the system budget and to minimize the system cost subject to a minimum level of system reliability. Both problems are solved by using a penalty function technique for dealing with the constraints and a hybrid algorithm in which the well-known real-coded Genetic Algorithm is combined with the Self-Organizing Migrating Algorithm. As special cases, both problems are formulated and solved considering a single component without redundancy. Finally, the proposed approach is illustrated by some numerical examples and the computational results are discussed.


Introduction
In the present highly competitive business scenario, the reliability of a system (including industrial systems) is widely regarded as an extremely important and crucial design measure. Consequently, the techniques/theories for the enhancement of system reliability play a pivotal role in the growth, development and improvement of power systems, telecommunication systems, manufacturing systems (Nourelfath and Nahas, 2003), advanced semiconductors, memory integrated circuits and nano systems (Ha and Kuo, 2006). The introduction of redundancy allocation is a commonly accepted technique, well known for its effectiveness in improving the reliability of a system. The problem associated with this method is known as the redundancy allocation problem. The choice of the optimal combination of components of a system during the design phase is guided by several factors, such as cost, performance, weight, size and technology. Over the last few decades, a large number of researchers have explored this field of research. In this connection, one may refer to the works of Ghare and Taylor (1969), Tillman et al. (1977, 1980), Nakagawa et al. (1978), Chern (1992), Kuo (2001), Sun and Li (2002), Ha and Kuo (2006), Gupta et al. (2009) and others.
Due to the development of advanced technology and competitive market situations, various technologies are available for each component of a reliability system. These technologies differ among themselves in terms of cost and reliability. This type of problem is known as the reliability optimization problem with multiple-choice constraints. In this area, the works of Nauss (1978), Sinha and Zoltners (1979), Sung and Lee (1994), Sung and Cho (2000), Nourelfath and Nahas (2003), Nahas and Nourelfath (2005) and others are worth mentioning. In their works, they did not consider redundancy for each component. However, redundancies play an important role in reliability systems; mainly, they are used to increase the system reliability. On the other hand, they also did not consider the cost optimization problem.
Genetic Algorithm (GA) is a very efficient and powerful heuristic search optimization method based on the mechanics of natural genetics and natural selection, which mimics Charles Darwin's evolutionary principle of "survival of the fittest". Prof. J. H. Holland (1975) first developed the concept of this algorithm. Thereafter, much work has been done on the development of this subject; detailed accounts are presented in Goldberg (1989), Michalewicz (1996), Sakawa and Kato (2002) and others.
Research on hybridization has received significant interest in recent years for solving real-world problems (see Renders and Flasse, 1996; Salhi and Queen, 2004; Fan et al., 2006; Pedamallu and Ozdamar, 2008). GA works more efficiently when it is combined with other algorithms or local search methods than as a simple Genetic Algorithm (Chelouah and Siarry, 2003). To enhance the efficiency of GA, most researchers have proposed hybrid algorithms combining GA with various other algorithms.
Recently, a new stochastic optimization algorithm, viz. the Self-Organizing Migrating Algorithm (SOMA), has been developed by Zelinka and Lampinen (2000). This is a population-based stochastic search algorithm depending on the self-organizing behaviour of a group of individuals in a social environment. Like an Evolutionary Algorithm, it works with a population of individuals (in optimization, each solution is referred to as an individual). In SOMA, the individuals change their positions during a migration loop (iteration). This change is directed towards the best individual, starting from the other individuals, in a random fashion. This algorithm has seldom been used in solving optimization problems. In this connection, one may refer to the recent works of Zelinka (2004), Nolle et al. (2005), Coelho (2009), Coelho and Alotto (2009), Coelho and Mariani (2010), Senkerik et al. (2010) and others.
This paper deals with a series system having several subsystems (with parallel redundancies), for each of which various technologies with different costs and reliabilities are available. For this system, two problems have been formulated and solved. In the first problem, the system reliability is maximized subject to a budget constraint. In the second problem, the system cost is minimized subject to a minimum level of system reliability. In both cases, the problems are formulated as nonlinear integer programming problems and solved by using a penalty function technique and a hybrid algorithm developed by combining a real-coded Genetic Algorithm and the Self-Organizing Migrating Algorithm. As special cases, both problems have been formulated and solved considering a single component without redundancy. Finally, to illustrate the proposed approach, some numerical examples have been solved and the computational results have been discussed.

Nomenclature
n      number of subsystems of the main series system
M_j    number of technologies available for subsystem j
m_ij   number of redundant components arranged in parallel in subsystem j when technology i is adopted (i.e., the number of redundancies provided by technology i for subsystem j)
r_ij   reliability of each component arranged in parallel in technology i for subsystem j
c_ij   cost of each component arranged in parallel in technology i for subsystem j (the c_ij's are assumed to be known)

Mathematical Formulation of Reliability-Redundancy Optimization Problem
The goal of the reliability-redundancy optimization problem is to determine an optimal redundancy allocation so as to maximize the overall system reliability under limited resource constraints. Reliability-redundancy optimizations are useful for system designs that are largely assembled and manufactured using off-the-shelf components and also have high reliability requirements (Coelho, 2009).
A well-known series system with n independent subsystems has been considered. As depicted in Fig. 1, different technologies are available for each of these n subsystems. Each technology, when used for a subsystem, employs its own components arranged in parallel with one another to form the subsystem. When the same technology is used for a given subsystem, the components are identical in terms of cost and reliability. However, a component's reliability and cost may vary for different subsystems when the technology is given. Also, a component's reliability may vary for different technologies when the subsystem is given. For each subsystem, only one technology can be adopted. From the above situation, the following two decision-making problems may arise: (i) to study and select the best combination of technologies along with the optimum number of redundant components in each subsystem, so that the system reliability is maximized subject to a budget constraint; (ii) to study and select the best combination of technologies along with the optimum number of redundant components in each subsystem, so that the total cost is minimized subject to a given fixed level of system reliability.
With the help of the earlier mentioned notations, the mathematical formulations of the two decision-making problems (i) and (ii) discussed above are as follows (here x_ij = 1 if technology i is adopted for subsystem j and 0 otherwise; B denotes the available budget and R_0 the minimum required level of system reliability):

Problem (1):
Maximize R_s = ∏_{j=1}^{n} Σ_{i=1}^{M_j} x_ij [1 − (1 − r_ij)^{m_ij}]
subject to Σ_{j=1}^{n} Σ_{i=1}^{M_j} x_ij c_ij m_ij ≤ B,
Σ_{i=1}^{M_j} x_ij = 1 for j = 1, 2, ..., n, x_ij ∈ {0, 1}, m_ij a positive integer.    (1)

Problem (2):
Minimize C_s = Σ_{j=1}^{n} Σ_{i=1}^{M_j} x_ij c_ij m_ij
subject to ∏_{j=1}^{n} Σ_{i=1}^{M_j} x_ij [1 − (1 − r_ij)^{m_ij}] ≥ R_0,
Σ_{i=1}^{M_j} x_ij = 1 for j = 1, 2, ..., n, x_ij ∈ {0, 1}, m_ij a positive integer.    (2)

These two problems (1) and (2) belong to the category of constrained integer nonlinear optimization problems.
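The reliability and cost expressions of these models can be sketched in Python (an illustrative sketch only; the function and variable names are ours, and the paper's own implementation was coded in C/C++):

```python
import math

def subsystem_reliability(r_ij, m_ij):
    # A subsystem with m_ij identical parallel components, each of
    # reliability r_ij, fails only if all its components fail.
    return 1.0 - (1.0 - r_ij) ** m_ij

def system_reliability(r, choice, m):
    # Series system: the product of the subsystem reliabilities.
    # choice[j] is the index of the technology adopted for subsystem j,
    # m[j] is the number of redundant components in subsystem j.
    return math.prod(subsystem_reliability(r[j][choice[j]], m[j])
                     for j in range(len(choice)))

def system_cost(c, choice, m):
    # Each subsystem pays m[j] copies of the unit cost of its technology.
    return sum(c[j][choice[j]] * m[j] for j in range(len(choice)))
```

For instance, two subsystems with component reliabilities 0.9 and 0.8 and redundancy levels 2 and 1 give a system reliability of (1 − 0.1²) × 0.8 = 0.792.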

Constraint Handling Technique of Constrained Integer Nonlinear Optimization Problems
In the application of evolutionary algorithms to the given constrained integer nonlinear optimization problem, an important question arises: how does the algorithm handle the constraints of the optimization problem? During the last few decades, several methods have been proposed for handling constraints when solving constrained optimization problems with evolutionary algorithms (Michalewicz and Schoenauer, 1996; Koziel and Michalewicz, 1999; Deb, 2000; Coello, 2002). Among these methods, the penalty function method is very popular. In this method, the constrained optimization problem is converted to an unconstrained one in which the reduced objective function involves the original objective function and a penalty for violating the constraints. Recently, Gupta et al. (2009) proposed a penalty function approach to handle the constraints. In this approach, to convert the constrained optimization problem to an unconstrained one, a large negative value (say, −M) is blindly assigned to the objective function for an infeasible solution (for a maximization problem). In this case, if the constrained optimization problem is

Maximize f(x) subject to g_i(x) ≤ 0, i = 1, 2, ..., m,

then the reduced unconstrained optimization problem is as follows:

Maximize f̂(x) = f(x) if x ∈ S, and f̂(x) = −M otherwise,

where S = {x : g_i(x) ≤ 0, i = 1, 2, ..., m} is the feasible space of the optimization problem.
For a minimization problem, it is to be noted that +M is considered instead of −M. For solving the earlier mentioned constrained integer nonlinear optimization problems (1) and (2), we have proposed a new hybrid algorithm, viz. C-RCSOMGA, which is discussed in the next section.
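This penalty technique amounts to a simple wrapper around the objective function. A minimal Python sketch (the names and the concrete value of M are illustrative assumptions; constraints are written here in the form g(x) ≤ 0):

```python
BIG_M = 1e12  # the large constant M of the penalty technique

def penalized(f, constraints, maximize=True):
    # Each constraint is a callable g with g(x) <= 0 meaning "satisfied".
    # Infeasible solutions receive -M (maximization) or +M (minimization).
    def f_hat(x):
        if all(g(x) <= 0 for g in constraints):
            return f(x)
        return -BIG_M if maximize else BIG_M
    return f_hat
```

Because the penalty is a fixed constant rather than a weighted violation measure, no penalty parameters need tuning, which is why the approach is called parameter-free.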

Hybrid Algorithm C-RCSOMGA, Based on RCGA and SOMA
For the purpose of solving the reliability-redundancy optimization problems (1) and (2), we have developed a hybrid real-coded Self-Organizing Migrating Genetic Algorithm (C-RCSOMGA) combining two different algorithms, RCGA and SOMA. In this algorithm, we have used tournament selection, modified uniform crossover and modified mutation operators for RCGA, as well as our proposed modified strategy for SOMA.

Real Coded Genetic Algorithm (RCGA)
Genetic Algorithm is a stochastic search method based on natural evolution and natural genetics. In this algorithm, a population of solutions is initially generated by a Random Number Generator (RNG). Then this population is updated from iteration (generation) to iteration with respect to the fitness values through different well-known genetic operators (viz. selection, crossover and mutation) until a termination criterion is satisfied. For the implementation of GA in the proposed algorithm, the following basic components are considered:

Initialization of GA Parameters and Bounds of Variables
Genetic Algorithm depends on some parameters, viz. the population size (Pop_size), the maximum number of generations (T), the crossover probability (p_cross) and the mutation probability (p_mute). However, there is no hard and fast rule for choosing the values of these parameters. From the literature (Goldberg et al., 1989; Michalewicz and Schoenauer, 1996; Jabeen and Bhunia, 2006), it is seen that difficulties arise if the parameter values are not chosen within a reasonable range. If the value of Pop_size is taken very large, then the computational cost is high, and storing the intermediate data of GA in the computer may create difficulties at the time of computation. Again, if the value of Pop_size is taken very small, then the good properties of the genetic operators cannot drive the evolution of the population. T varies from problem to problem and depends upon the number of variables of the problem. From natural genetics, it is obvious that the value of p_cross is always greater than that of p_mute.

Representation and Initialization of Solution
After the initialization of the GA parameters and the bounds of the variables, a successful implementation of GA depends on the representation of solutions and the initialization of an appropriate population. In this work, we have used real numbers to represent the components of a solution.

Evaluation of the Fitness Function
After getting a population of solutions, GA carries out the evaluation stage. In this stage, the objective function value of each solution of the initial or improved population is evaluated. In this work, we have considered the objective function value as the fitness value of a solution.

Selection Process
In the selection process, we have handled the constraints of the optimization problem by the tournament selection method (Brindle, 1981; Goldberg et al., 1989). In this method, two or more solutions are chosen randomly from the population and the best solution of this group is selected as a parent for the next iteration. This process is repeated; if both solutions have an equal constraint violation, any one of them is selected.
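A minimal sketch of a k-way tournament in Python (the paper's constraint-violation tie-breaking is reduced here to comparison by fitness alone, on the assumption that infeasible solutions already carry the penalized fitness):

```python
import random

def tournament_select(pop, fitness, k=2):
    # Choose k solutions at random and return the fittest of the group.
    group = random.sample(pop, k)
    return max(group, key=fitness)

def select_population(pop, fitness, k=2):
    # Build the mating pool for the next iteration by repeated tournaments.
    return [tournament_select(pop, fitness, k) for _ in range(len(pop))]
```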

Crossover
After the selection process, the surviving solutions take part in the crossover operation. The main objective of this operation is to create new (probably better) offspring by recombining the features of two or more randomly selected parent solutions. The crossover of two parents is inspired by the natural genetic process. In this work, we have used the modified uniform crossover operation (Sahoo et al., 2012). An expected ⌊p_cross * Pop_size⌋ (where * denotes the product and ⌊.⌋ denotes the integral value) number of solutions takes part in this operation. The different steps of this operation at the t-th generation are as follows:
Step-1: Find the integral value of p_cross * Pop_size.
Step-2: Select two chromosomes v_i^(t) and v_j^(t) randomly from the population.
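A plain uniform crossover can be sketched as follows (this is the unmodified textbook variant; the modified operator of Sahoo et al. (2012) differs in details given in that reference):

```python
import random

def uniform_crossover(parent1, parent2, swap_prob=0.5):
    # Each gene position is swapped between the two parents
    # independently with probability swap_prob.
    child1, child2 = list(parent1), list(parent2)
    for g in range(len(child1)):
        if random.random() < swap_prob:
            child1[g], child2[g] = child2[g], child1[g]
    return child1, child2
```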

Mutation
Inspired by genetic diversity in nature, the mutation operation in GA introduces random variations into the population. Sometimes it helps to recover information lost in earlier iterations. An expected ⌊p_mute * Pop_size * n⌋ (where * denotes the product and ⌊.⌋ denotes the integral value) number of genes/components takes part in the mutation operation. According to Michalewicz (1996), mutation may also be applied to the whole solution vector rather than to a single component of it. Basically, it is responsible for the fine-tuning capability of the system. In this work, we have used one-neighborhood mutation (Bhunia et al., 2010).
The computational steps of this operation at the t-th generation are as follows:
Step-1: Find the integral value of the product p_mute * Pop_size * n.
Step-2: Select a non-mutated gene v randomly.
Step-3: Create the new gene v' as follows: v' = v + 1 if r < 0.5 and v' = v − 1 otherwise, where r is a uniformly distributed random number in [0, 1].
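A sketch of one-neighborhood mutation for an integer gene (our reading of the operator of Bhunia et al. (2010); the equal-probability ±1 move and the clamping to the variable's bounds are assumptions):

```python
import random

def one_neighborhood_mutate(v, lower, upper):
    # Move the integer gene v to one of its neighbours v+1 or v-1
    # with equal probability, clamped to the variable's bounds.
    step = 1 if random.random() < 0.5 else -1
    return min(upper, max(lower, v + step))
```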

Termination Criterion
In GA, the selection process, crossover, mutation and evaluation are performed repeatedly until a predefined termination criterion is met (in this work, the termination criterion is that the number of iterations (generations) reaches T).
The RCGA flow can be summarized as follows:

Algorithm-I
Step-1: Initialize the parameters of GA.
Step-2: Set t = 0 [t, the number of the current iteration].
Step-3: Generate the initial population P(t) randomly.
Step-4: Evaluate the fitness function value of each solution of P(t).
Step-5: Find the best solution from P(t).
Step-6: Do the following until the termination condition is satisfied (substeps (a)-(f)):
Step-7: Print the best solution.

Self-Organizing Migrating Algorithm (SOMA)
SOMA is a relatively new stochastic evolutionary algorithm based on the social behaviour of cooperating solutions and self-organization (e.g., a herd of animals looking for food). From the existing literature, it is evident that this algorithm has the ability to converge towards the global optimum (Zelinka et al., 2001). It starts with a population of solutions initialized randomly over the search space and then improves the population in loops (called migration loops). In each loop, considering the solution with the highest fitness as the leader (L), all other solutions (called active solutions (a_i)) traverse in the direction of the leader. Whether a solution travels a certain distance (called the path length) towards the leader in N_s steps (number of steps) of defined length or not depends upon a perturbation parameter. This perturbation works like the mutation operator of a Genetic Algorithm. If the path length is greater than one, then the active solution will overshoot the leader (Zelinka, 2004). The other main control parameters are Step (size of a migration step), PRT (the perturbation parameter, which determines whether a solution will travel directly towards the leader or not) and Migrations (number of iterations). The suggested values (Zelinka, 2004) for the SOMA parameters are given in Table 1.

Parameter name Suggested range
Pop_size      10 to any integer number
n             Problem dependent
PathLength    1.1 to 3
Step          0.11 to PathLength
PRT           0 to 1
Migrations    10 to any integer number

Table 1. Suggested values for the SOMA parameters

Variations of SOMA
Based on different strategies, different versions of SOMA have been developed. So far, Zelinka (2004) has proposed five strategies: (i) All-To-One, (ii) All-To-All, (iii) All-To-All Adaptive, (iv) All-To-Rand and (v) Clusters.
Here we shall discuss SOMA with the All-To-One strategy in detail.

SOMA with All-To-One Strategy
The evolution in SOMA is performed with perturbation (equivalent to mutation in GA) and a crossover operation. Here the movement of the active solutions in the search space is perturbed, not mutated. The perturbation depends on the controlling parameter PRT and the perturbation vector (PRTVector) (Zelinka, 2004). For each component of a solution, SOMA generates a random number from the interval (0, 1) and compares it with the PRT parameter. If PRT equals 0, then the perturbation is not performed, and if PRT equals 1, then the stochastic nature of SOMA vanishes.
The PRTVector is created by the following condition:

PRTVector_j = 1 if r_1 < PRT, and PRTVector_j = 0 otherwise, for j = 1, 2, ..., n,

where r_1 is a random number between 0 and 1. Before a solution starts its journey over the search space towards the leader, this PRTVector_j is created for each component of the solution (see Fig. 2).
SOMA creates a new solution at the (t+1)-th iteration by the following special operation (Zelinka, 2004) (known as crossover in the case of SOMA):

x_j^(t+1) = x_{j,start}^(t) + (x_{L,j}^(t) − x_{j,start}^(t)) · s · PRTVector_j,  s = Step, 2·Step, ..., PathLength,    (3)

where x_{j,start}^(t) denotes the j-th component of the active solution at the start of the migration and x_{L,j}^(t) denotes the j-th component of the leader. The different steps of SOMA are as follows:
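The journey of a single active solution under Eq. (3) can be sketched as follows (an illustrative Python sketch with our own names; evaluating the candidates and keeping the best visited position is left to the caller):

```python
import random

def prt_vector(n, prt):
    # PRTVector_j = 1 if a fresh random number falls below PRT, else 0.
    return [1 if random.random() < prt else 0 for _ in range(n)]

def soma_journey(active, leader, path_length, step, prt):
    # Enumerate the candidate positions of one active solution moving
    # towards the leader in increments of `step` up to `path_length`.
    n = len(active)
    pv = prt_vector(n, prt)
    candidates = []
    s = step
    while s <= path_length:
        candidates.append([active[j] + (leader[j] - active[j]) * s * pv[j]
                           for j in range(n)])
        s += step
    return candidates
```

With prt = 1 every component moves; with path_length > 1 the last candidates lie beyond the leader, which is the overshooting mentioned above.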

Algorithm-II
Step-1: Generate initial population of solutions.
Step-2: Repeat the following for the specified number of migrations:
(i) Generate the PRTVector.
(ii) Evaluate the fitness function value of each solution of P(t).
(iii) Find the best solution of the population and consider it as the leader (L).
Step-3: Print the best solution.

Proposed Real Coded Self-Organizing Migrating Genetic Algorithm (C-RCSOMGA)
In this algorithm, initially a population is created from randomly generated solutions and is evaluated through the fitness function. After that, in each iteration, the GA and SOMA operators are applied consecutively to the population to improve it. At the end of the successful application of the GA operators, the best solution is selected according to the fitness value. Then, considering the best solution as the leader (L) and the others as active solutions (a_i), the SOMA operators are applied. In this application, a perturbation vector (PRTVector) is created first; then, for each active solution, a set of new solutions is created by Eq. (3) along the path from the active solution to the leader at equal step lengths. After this, for each active solution, the best solution (called the Perturbed Leader, PL) is selected from the created new solutions and the previous leader. In the next migration loop, this PL works as the leader (L). This process continues until the termination criterion is satisfied. In this connection, we call this new strategy the All-To-One Adaptive strategy, which is incorporated in the SOMA part of our proposed C-RCSOMGA (see Fig. 3). The computational steps of C-RCSOMGA are as follows:
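The top-level control flow described above can be sketched as follows (a schematic reading with the GA and SOMA phases abstracted as callables; this is our illustration, not the authors' C/C++ implementation):

```python
def c_rcsomga(init_pop, fitness, ga_step, soma_step, max_iter):
    # In every iteration, the GA operators (selection, crossover,
    # mutation) and then the SOMA All-To-One Adaptive migration are
    # applied consecutively to the same population.
    pop = list(init_pop)
    for _ in range(max_iter):
        pop = ga_step(pop, fitness)            # GA phase
        leader = max(pop, key=fitness)         # best solution leads
        pop = soma_step(pop, leader, fitness)  # migration towards leader
    return max(pop, key=fitness)
```

Concrete `ga_step` and `soma_step` functions would be built from the operators sketched in the previous sections.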

Algorithm-III
Step-1: Initialize the bounds of decision variables, parameters of GA and SOMA, and different parameters of the optimization problems.
Step-2: Set t = 0 [t, the number of the current iteration].
Step-3: Generate the initial population P(t) randomly.
Step-4: Evaluate the fitness function value of each solution of P(t).
Step-5: Find the best solution from P(t).
Step-6: Increase the value of t by 1.
Step-7: Select P(t) from P(t−1) by the tournament selection process.
Step-8: Apply the modified uniform crossover with probability p_cross.
Step-9: Apply the modified one-neighbourhood mutation with probability p_mute.
Step-10: Evaluate the fitness function value of each solution of P(t).
Step-11: Find the best fitted solution (leader, L) of P(t) and consider all other solutions as active solutions of this population.
Step-12: Apply SOMA with the All-To-One Adaptive strategy as follows:
(i) Do the following for each solution of the population (substeps (a)-(d) are listed after Fig. 3):
- Find the best solution (Perturbed Leader, PL) from the created new population.
(e) If PL is better than L, replace L by PL.
Step-13: If the termination criterion is satisfied, go to Step-14; otherwise, go to Step-6.
Step-14: Print the best result.

Numerical Results and Discussion
To illustrate the proposed algorithm and also to compare its results with those of the existing algorithms, we have solved four examples. The data for these four examples have been taken from Nahas and Nourelfath (2005) and are shown in the Appendix (see Tables A.1, A.2, A.3 and A.4). However, Nahas and Nourelfath (2005) solved these examples without considering redundancy. In these examples, the available budgets are $1000, $900, $1000 and $1400, respectively. First of all, we have solved the examples for reliability optimization without redundancy and compared the results with those obtained from the existing algorithms of Nahas and Nourelfath (2005), Algorithm-1 (AS+Alg1) and Algorithm-2 (AS+Alg1+Local). These results are shown in Tables 4 and 5. Using the same numerical data of Examples 1 to 4, the reliability optimization problem with redundancy has been solved for different budgets. On the other hand, the cost optimization problems with/without redundancy have been solved considering different lower bounds of reliability. The computational results are shown in Tables 4-15.
Due to the stochastic nature of the proposed algorithm, 100 independent runs have been made for each problem, considering different sets of random numbers. The proposed algorithm has been coded in a C/C++ environment and the simulation has been done on a PC with an Intel Core-i3 (2.5 GHz) processor in a LINUX environment. The stopping criterion used in the proposed C-RCSOMGA is the maximum number of allowable iterations (T). The values of the parameters of C-RCSOMGA are given in Table 2 and Table 3. From Table 4, it is observed that for all the examples, our proposed algorithm gives better solutions than the existing algorithms (Nahas and Nourelfath, 2005) for reliability optimization without redundancy. It is also seen that, among the 100 runs, the maximum, minimum and average (mean) values of reliability are the same and, obviously, the standard deviation is zero. In Table 5, the best found solutions (selected technologies) with their reliabilities are shown for the existing algorithms (Nahas and Nourelfath, 2005) and our proposed algorithm for the different examples. From this table, it is observed that for all the examples our algorithm gives the better result.
In Table 6, the average, maximum and minimum values of reliability, along with the standard deviation, are shown for different values of the budget for the reliability optimization problem with redundancy. In each case, the number of feasible solutions obtained is also displayed. It is observed that for a higher budget (i.e., a higher cost), the maximum and average reliabilities are greater, and all the runs yield feasible results.
In Table 7, the selected technologies and numbers of redundancies, with the reliability corresponding to the best found solution among all runs, are displayed for different budget costs for the different examples. From this table, it is observed that the system reliability is higher for a higher cost. However, for Examples 2, 3 and 4, feasible solutions are not obtained for budget costs of $1000, $1000 and $3000, respectively.
In Tables 8-11, the computational results of the cost optimization problem without redundancy are displayed for the different examples for different lower bounds of system reliability. On the other hand, the computational results of the cost optimization problem with redundancy are shown in Tables 12-15. From Tables 8, 9 and 11, it is seen that feasible solutions of the cost optimization problem without redundancy are not available among all runs when the lower bounds of system reliability are 0.9000, 0.9500, 0.9900 and 0.9990 for Example 1, 0.9990 for Example 2 and 0.9990 for Example 4, respectively.

Concluding Remarks
This paper deals with two reliability optimization problems for a series system with parallel redundancy incorporating multiple-choice constraints. In the first problem, the system reliability is maximized subject to a budget constraint, whereas in the second problem, the system cost is minimized subject to a minimum level of system reliability. These problems are NP-hard. To solve them, we have developed a hybrid heuristic approach based on a parameter-free penalty technique, RCGA and SOMA. Here, the penalty function is used to obtain feasible solutions.
In this penalty technique, only a large negative value (in the case of a maximization problem) or a very large positive value (in the case of a minimization problem) is assigned to infeasible solutions. From the experimental results, it is observed that an optimal or near-optimal solution (though optimality cannot be tested analytically) can be obtained quickly.

Fig. 1 .
Fig. 1. Series system with n subsystems

(a) Initialization of GA parameters and bounds of variables.
(b) Representation and initialization of solutions.
(c) Evaluation of the fitness function.
(d) Selection process.
(e) Genetic operators (crossover and mutation, to create new offspring for improvement of the population).
(f) Termination criterion.
(a) Increase t by 1.
(b) Select P(t) from P(t−1) by the selection process.
(c) Perform the crossover operation with probability p_cross.
(d) Perform the mutation operation with probability p_mute.
(e) Evaluate the fitness function value of each solution of the newly created P(t).
(f) Find the best solution from P(t).
The main control parameters used in SOMA are Pop_size (number of solutions in the population), n (number of decision variables of the objective function of the optimization problem), PathLength (distance of movement of the active solutions) and Step (size of a migration step).

Fig. 2. Movement of active solutions

(iv) Do the following for each active solution of the population:
(a) Set the active solution as the best solution.
(b) Do the following for the specified number of steps:
- Find the new solution towards the position of the leader (L), starting from the active solution, by Eq. (3).
- Evaluate the fitness function for the new solution.
- If the fitness of the new solution is better than the fitness of the previous best solution, replace the best solution by the new one.
(c) Replace the active solution by the best solution obtained in Step-2 (iv) (b).

Fig. 3 .
Fig. 3. The principle of C-RCSOMGA with the All-To-One Adaptive strategy for a two-dimensional search space


(a) If the solution is infeasible, then go to Step-12 (i) (c).
(b) If the absolute value of the difference between the objective function value of the leader (L) and the objective function value of the solution is less than ε (in this work, ε = 0.001), then go to Step-12 (i).
(c) Generate the PRTVector.
(d) Do the following for the specified number of steps:
- Create a new population with the help of Eq. (3).
- Evaluate the fitness of each solution of that new population.

Table 3 .
Experimental setup for C-RCSOMGA: values of the allowable maximum number of iterations (T) for different problems

Table 4 .
Comparison between the results of reliability optimization without redundancy for different algorithms. N/A indicates the non-availability of results in the literature.

Table 5 .
Comparison between the best solutions of reliability optimization without redundancy for different algorithms

Table 6 .
Computational results of reliability optimization with redundancy for different budgets for hybrid C-RCSOMGA. * indicates that no feasible solution of the corresponding problem has been obtained.

International Journal of Mathematical, Engineering and Management Sciences, Vol. 2, No. 3, 185-212, 2017. ISSN: 2455-7749

Table 7 .
Computational results of the corresponding best solutions of reliability optimization with redundancy for different budgets for hybrid C-RCSOMGA. * indicates that no feasible solution of the corresponding problem has been obtained.

Table 8 .
Computational results of the cost optimization problem without redundancy for different technologies of each subsystem for hybrid C-RCSOMGA for Example 1. * indicates that no feasible solution of the corresponding problem has been obtained.

Table 9 .
Computational results of the cost optimization problem without redundancy for different technologies of each subsystem for hybrid C-RCSOMGA for Example 2. * indicates that no feasible solution of the corresponding problem has been obtained.

Table 10 .
Computational results of cost optimization problem without redundancy for different technology of each subsystem for hybrid C-RCSOMGA for example 3

Table 11 .
Computational results of the cost optimization problem without redundancy for different technologies of each subsystem for hybrid C-RCSOMGA for Example 4. * indicates that no feasible solution of the corresponding problem has been obtained.

Table 12 .
Computational results of cost optimization problem with redundancy for different technology of each subsystem for hybrid C-RCSOMGA for example 1

Table 13 .
Computational results of cost optimization problem with redundancy for different technology of each subsystem for hybrid C-RCSOMGA for example 2

Table 14 .
Computational results of cost optimization problem with redundancy for different technology of each subsystem for hybrid C-RCSOMGA for example 3

Table 15 .
Computational results of cost optimization problem with redundancy for different technology of each subsystem for hybrid C-RCSOMGA for example 4