A hybrid of genetic algorithm and Fletcher-Reeves for bound constrained optimization problems

Article history: Received July 10, 2014; received in revised format January 4, 2015; accepted January 5, 2015; available online January 5, 2015.

In this paper, a hybrid algorithm for solving bound constrained optimization problems with continuously differentiable objective functions, combining the Fletcher-Reeves method and an advanced Genetic Algorithm (GA), is proposed. In this approach, a GA with advanced operators computes the step length along the feasible direction in each iteration of the Fletcher-Reeves method. This idea is then extended from a single-point approximation to a set of multi-point approximations, to prevent convergence of the existing method to a local optimum, and a new method, called the population based Fletcher-Reeves method, is proposed to find the global optimum or a point close to it. Finally, to study the performance of the proposed method, several multi-dimensional standard test functions having continuous partial derivatives have been solved. The results have been compared with those of a recently developed hybrid algorithm with respect to different comparative factors.


Introduction
Due to the globalization of the market economy and competitive market situations, optimization has been an active research area during the last few decades. In every sector of real life, decision making problems are becoming more and more complex. Most of these problems are multimodal and nonlinear in nature, and their domains are non-convex. So, it is very challenging to find the global solution of these problems. Since these problems are non-linear, non-convex and multimodal, powerful optimization techniques are required to solve them. In this connection, one may apply fast search techniques such as traditional gradient based iterative methods, which require the derivative information of the objective function. However, these methods have some difficulties: they (i) depend on the initial guess, and (ii) converge to the local optimum nearest to the initial guess. On the other hand, to solve the same problems, an efficient and well known heuristic method, the Genetic Algorithm (GA), may be applied. However, this algorithm does not always reach the peak, due to premature convergence as well as the randomness of the algorithm. So, to overcome the difficulties of gradient based methods as well as of the genetic algorithm, researchers have been motivated to develop an efficient algorithm combining both methods. This type of algorithm is known as a hybrid algorithm.
Since the end of the twentieth century, to increase efficiency, researchers have been motivated to develop hybrid algorithms combining two or more algorithms. Chelouah and Siarry (2000) developed a continuous GA designed for the global optimization of multimodal functions. Later, Yiu et al. (2004) proposed a hybrid descent method for solving global optimization problems. Pal et al. (2005) introduced an application of real coded GA for mixed integer non-linear programming in the area of inventory problems. Yao et al. (2006) provided an overview of some recent advances in evolutionary computation, covering a wide range of topics in optimization, learning and design using evolutionary approaches and techniques. Deep and Thakur (2007) investigated a new crossover operator for real coded GA in the area of optimization problems. Karaboga and Basturk (2008) introduced the artificial bee colony (ABC) algorithm as an optimization algorithm for solving different multi-dimensional optimization problems. Luo et al. (2008) developed a hybrid approach for solving systems of nonlinear equations by combining the global search capabilities of chaos optimization with the high local convergence rate of the quasi-Newton method. Bhunia et al. (2011) proposed a hybrid real coded GA for an inventory model of a two-warehouse system with some realistic conditions. Recently, Rao and Patel (2012) introduced an elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems; teaching-learning-based optimization (TLBO) is one of the recently proposed population based algorithms, simulating the teaching-learning process of the classroom. Ibrahim et al. (2014) proposed a new hybrid method (known as the BFGS-CG method) by combining the search directions of conjugate gradient methods and quasi-Newton methods.
Nowadays, most real life problems, i.e., problems in business, engineering, medicine, etc., are more and more complex. Most of these problems are multimodal and nonlinear, and their domains are non-convex. So it is very challenging to find the global optimum of these problems. Since these problems are very complex, some powerful optimization algorithms are needed. One such technique is the gradient based iterative method, which uses the derivative information of a function and is thus a fast search technique. Among all the gradient based methods, the steepest descent method of Cauchy (1847) is the most widely known optimization scheme for bound constrained optimization problems having continuously differentiable objective functions. However, this method converges in a zig-zag way. To overcome this difficulty, an improved method, the Conjugate Gradient (Fletcher-Reeves) method, is used in this paper.
In the proposed method, the step length of the Fletcher-Reeves method in each iteration is evaluated by GA. This idea is then applied to a set of initial points instead of a single point, to prevent convergence to a local optimum, yielding a new method called the population based Fletcher-Reeves method (PBFRM). Finally, to study the performance of the proposed method, a set of test functions having continuous partial derivatives has been solved. The results have been compared with those of a recently developed hybrid method.

Fletcher-Reeves Method
The well-known steepest descent method due to Cauchy is one of the oldest, most widely known and simplest methods for solving the unconstrained minimization problem defined as: minimize $f(x)$, $x \in \mathbb{R}^n$. It is an iterative method which generates a sequence of points $x^{(k)}$ converging to the optimum $x^*$. An improved method with a better search direction (better in the sense that this direction produces a sequence of iterates converging faster to $x^*$) was developed by Fletcher and Reeves. This method is known as the Fletcher-Reeves method. The different steps of this method are given in Algorithm-1.
Algorithm-1
Step-1: Start with an arbitrary initial point $x^{(1)}$,
Step-2: Find the search direction $d^{(1)} = -\nabla f(x^{(1)})$,
Step-3: Find the point $x^{(2)} = x^{(1)} + \lambda_1 d^{(1)}$, where $\lambda_1$ is the optimal step length in the direction $d^{(1)}$,
Step-4: Set $k = 2$,
Step-5: Find the search direction $d^{(k)} = -\nabla f(x^{(k)}) + \dfrac{\|\nabla f(x^{(k)})\|^2}{\|\nabla f(x^{(k-1)})\|^2}\, d^{(k-1)}$,
Step-6: Compute the optimal step length $\lambda_k$ in the direction $d^{(k)}$ and set $x^{(k+1)} = x^{(k)} + \lambda_k d^{(k)}$,
Step-7: Test the optimality of the point $x^{(k+1)}$. If $x^{(k+1)}$ is optimum, stop the process. Otherwise, set $k = k + 1$ and go to step-5.
To test the optimality of the point $x^{(k+1)}$, any one of the following conditions can be used to terminate the iterative process in Algorithm-1:
(i) the change in the function value in two consecutive iterations is very small, i.e., $\left|f(x^{(k+1)}) - f(x^{(k)})\right| \le \varepsilon_1$;
(ii) the magnitudes of all the partial derivatives (components of the gradient) of $f$ are small, i.e., $\left|\partial f/\partial x_i\right| \le \varepsilon_2$ for $i = 1, 2, \ldots, n$;
(iii) the magnitude of the gradient is small, i.e., $\left\|\nabla f(x^{(k+1)})\right\| \le \varepsilon_3$;
(iv) the change in the design vector in two consecutive iterations is small, i.e., $\left\|x^{(k+1)} - x^{(k)}\right\| \le \varepsilon_4$;
where $\varepsilon_i$ ($i = 1, 2, 3, 4$) are very small positive numbers.

In the Fletcher-Reeves method, the main task is to find the optimal step length for obtaining the next better approximation of the decision variables in each iteration. In the $k$-th iteration, this step length is computed by solving another optimization problem: minimize $\varphi(\lambda) = f\left(x^{(k-1)} + \lambda d^{(k)}\right)$, where $x^{(k-1)}$ is the $(k-1)$-th approximation. A necessary condition for $\lambda$ to be optimal is $\varphi'(\lambda) = 0$, which is a non-linear equation in $\lambda$ and can be solved by any method, like the Newton-Raphson, Regula Falsi or fixed point iteration method. The main disadvantage of these methods is that the roots of this non-linear equation must be located in every iteration of the Fletcher-Reeves method. To overcome this difficulty, we shall use a genetic algorithm (GA) for finding the best found step length (the value of $\lambda$ obtained by GA is either optimal or near-optimal; we call it the best found value, as its optimality cannot be proved analytically). The different steps for finding the best found value of $\lambda$ in each iteration of the Fletcher-Reeves method are given in the following algorithm:

Algorithm-2
Step-1: Initialize the parameters of GA: the population size (p_size), the maximum number of generations (m_gen), the probability of crossover (p_cross) and the probability of mutation (p_mute),
Step-2: Set t = 0 [t represents the generation counter],
Step-3: Initialize the population P(t) [P(t) represents the current generation],
Step-4: Evaluate the fitness value of each individual of P(t),
Step-5: Find the best individual from P(t),
Step-6: If the termination condition is not satisfied, go to Step-7; otherwise, go to Step-11,
Step-7: Recombine P(t) to produce offspring C(t) using genetic operators (crossover and mutation),
Step-8: Evaluate the fitness value of each individual of C(t),
Step-9: Select P(t + 1) from P(t) and C(t) and set t = t + 1,
Step-10: Go to Step-6,
Step-11: Print the result,
Step-12: Stop.
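As a rough illustration of Algorithm-2, the following C sketch evolves a population of candidate step lengths for the one-dimensional problem of minimizing $\varphi(\lambda)$. The bounds LAMBDA_LO and LAMBDA_HI and the helper phi are illustrative assumptions, and the plain uniform mutation and elitism used here are simplifications of the operators described in the next section (exponential ranking selection and non-uniform mutation).

```c
#include <stdlib.h>

#define P_SIZE    30      /* GA population size used in the experiments */
#define M_GEN    100      /* maximum number of generations (assumed)    */
#define LAMBDA_LO 0.0     /* assumed search interval for the step length */
#define LAMBDA_HI 5.0

static double urand(void) { return rand() / (RAND_MAX + 1.0); }

double ga_step_length(double (*phi)(double))  /* phi(lambda) = f(x + lambda*d) */
{
    double pop[P_SIZE], best = 0.0, best_val = 1e300;

    for (int i = 0; i < P_SIZE; i++)            /* Step-3: random population */
        pop[i] = LAMBDA_LO + urand() * (LAMBDA_HI - LAMBDA_LO);

    for (int t = 0; t < M_GEN; t++) {           /* Steps 6-10: generation loop */
        for (int i = 0; i < P_SIZE; i++) {      /* Steps 4-5: evaluate, keep best */
            double v = phi(pop[i]);
            if (v < best_val) { best_val = v; best = pop[i]; }
        }
        for (int i = 0; i + 1 < P_SIZE; i += 2) {   /* Step-7: recombination */
            double a = urand();
            double c1 = a * pop[i] + (1.0 - a) * pop[i + 1];
            double c2 = (1.0 - a) * pop[i] + a * pop[i + 1];
            pop[i] = c1;
            pop[i + 1] = c2;
            if (urand() < 0.15)                 /* p_mute = 0.15 */
                pop[i] = LAMBDA_LO + urand() * (LAMBDA_HI - LAMBDA_LO);
        }
        pop[0] = best;                          /* Step-9 (simplified): elitism */
    }
    return best;                                /* best found step length */
}
```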

Genetic Algorithm
To solve an optimization problem by GA (Holland, 1975; Goldberg, 1989; Michalewicz, 1996; Mitchell, 1996; Gen et al., 2000; Sakawa, 2002), it is very important to design an appropriate chromosome representation of the solution. There are different types of representations, among which binary and real coding representations are popular. In the binary coding representation, each variable is represented as a binary substring with the desired precision. In this case, the string length of an individual becomes large and GA performs poorly. In the real coding representation, each chromosome is encoded as a vector of floating point numbers of the same length as the solution vector. This type of representation is easy to handle and is capable of representing quite large domains (or unknown domains). In this representation, a vector (x1, x2, ..., xn) is used as an individual (chromosome) to represent a solution of the optimization problem. The next step is to initialize the chromosomes which will take part in the artificial genetic operations, mimicking natural genetics. In this way, p_size chromosomes are produced, in which each element is initially selected randomly within the desired domain. Among the many processes for selecting a random number, here we have used the uniform distribution.
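A minimal C sketch of this initialization step, assuming the bound intervals are supplied as arrays lo and hi (all names are illustrative):

```c
#include <stdlib.h>

/* Each gene of each chromosome is drawn uniformly from [lo[k], hi[k]]. */
void init_population(int p_size, int n, double pop[p_size][n],
                     const double lo[n], const double hi[n])
{
    for (int i = 0; i < p_size; i++)      /* one chromosome per row */
        for (int k = 0; k < n; k++)       /* one gene per variable  */
            pop[i][k] = lo[k] + (rand() / (RAND_MAX + 1.0)) * (hi[k] - lo[k]);
}
```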

Evaluation
After initialization of the population in any generation of GA, we have to find the fitness value of each chromosome. This fitness value can be obtained in different ways by considering different fitness functions. Generally, the objective function itself is taken as the fitness function.

Selection
The main objective of the selection operator is to select good solutions for the next generation of GA, replacing the bad solutions while keeping the population size constant, based on Charles Darwin's well known evolutionary principle of "survival of the fittest". Some commonly used selection procedures are roulette wheel selection, truncation selection, stochastic universal sampling, steady state selection, ranking selection (linear or exponential), tournament selection, etc. In this work, we have used the well known exponential ranking selection.
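The paper does not spell out its exact exponential ranking formula, so the following C sketch uses one common formulation as an assumption: individuals sorted from worst (rank 0) to best (rank n-1) receive weights $c^{\,n-1-\text{rank}}$ with $0 < c < 1$, and a roulette spin over these weights returns the selected rank.

```c
#include <math.h>
#include <stdlib.h>

int exp_rank_select(int n, double c)           /* returns a rank in [0, n-1] */
{
    double total = 0.0;
    for (int r = 0; r < n; r++)
        total += pow(c, n - 1 - r);            /* sum of all rank weights */

    double u = (rand() / (RAND_MAX + 1.0)) * total;
    for (int r = 0; r < n; r++) {
        u -= pow(c, n - 1 - r);                /* roulette spin over weights */
        if (u <= 0.0) return r;
    }
    return n - 1;                              /* numerical safety fallback */
}
```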

Crossover
In each generation, crossover produces improved offspring by combining the features of the parents. In this operation, an expected N (the integral part of p_cross × p_size, p_cross being the probability of crossover) number of chromosomes takes part. In our work, we have used whole arithmetic crossover, in which two different linear combinations of two parent chromosomes (vectors) are considered to produce the offspring. In any generation, if the parent chromosomes $S_v$ and $S_w$ are selected randomly from the population for the crossover operation, then the produced offspring are $S_v' = a S_v + (1 - a) S_w$ and $S_w' = (1 - a) S_v + a S_w$, where $a$ is a proper fraction chosen randomly.
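A minimal C sketch of whole arithmetic crossover as defined above; the function and array names are illustrative:

```c
#include <stdlib.h>

/* The two offspring are the complementary convex combinations of v and w. */
void whole_arithmetic_crossover(int n, const double v[], const double w[],
                                double c1[], double c2[])
{
    double a = rand() / (RAND_MAX + 1.0);      /* random proper fraction a */
    for (int k = 0; k < n; k++) {
        c1[k] = a * v[k] + (1.0 - a) * w[k];
        c2[k] = (1.0 - a) * v[k] + a * w[k];
    }
}
```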

Mutation
Like the other genetic operations, the mutation operation helps to maintain genetic diversity in the population. The variations it introduces are small, and it is performed with a low probability (the mutation probability or mutation rate). Normally, the offspring are mutated after crossover. The probability of mutating a variable is inversely proportional to the number of variables; thus the mutation rate is independent of the size of the population. Some methods of mutation are uniform mutation, non-uniform mutation, boundary mutation, etc. In our work, we have used non-uniform mutation. If the gene (element) $v_{ik}$ of chromosome $V_i$ is selected for this operation and the domain of $v_{ik}$ is the interval $[l_k^0, l_k^1]$, then the mutated value of $v_{ik}$ is $v_{ik}' = v_{ik} + \Delta(t,\, l_k^1 - v_{ik})$ if a random digit is 0, and $v_{ik}' = v_{ik} - \Delta(t,\, v_{ik} - l_k^0)$ if a random digit is 1, where $k \in \{1, 2, \ldots, n\}$ and $\Delta(t, y)$ returns a value in the range $[0, y]$. Here, we have taken $\Delta(t, y) = y\left(1 - r^{(1 - t/m\_gen)^b}\right)$, where $r$ is a random value in $[0, 1]$, $m\_gen$ and $t$ represent the maximum generation number and the current generation respectively, and $b$ is a constant called the non-uniform mutation parameter.
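A C sketch of this non-uniform mutation in the Michalewicz formulation assumed above (the 50/50 choice of direction plays the role of the random digit):

```c
#include <math.h>
#include <stdlib.h>

static double urand(void) { return rand() / (RAND_MAX + 1.0); }

/* Delta(t, y) = y * (1 - r^((1 - t/m_gen)^b)), a value in [0, y] that
   shrinks as the generation counter t approaches m_gen. */
static double delta(int t, int m_gen, double y, double b)
{
    return y * (1.0 - pow(urand(), pow(1.0 - (double)t / m_gen, b)));
}

double nonuniform_mutate(double v, double lo, double hi,
                         int t, int m_gen, double b)
{
    if (urand() < 0.5)                           /* random digit 0 */
        return v + delta(t, m_gen, hi - v, b);   /* move toward upper bound */
    else                                         /* random digit 1 */
        return v - delta(t, m_gen, v - lo, b);   /* move toward lower bound */
}
```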

Population Based Fletcher-Reeves Method (PBFRM)
The solution obtained from the genetic algorithm based Fletcher-Reeves method may not always converge to the global optimum, because the method is sensitive to the initial approximation. To overcome this difficulty, we propose a new method (which we call the population based Fletcher-Reeves method) by extending the idea of a single-point approximation search to a multi-point approximation search. The multiple approximations produce a set of search paths, among which at least one is expected to converge to the global optimum. In this method, all the chromosomes are improved by the Fletcher-Reeves method, while the step length is computed by the genetic algorithm. The different steps of this method are given in the following algorithm:

Algorithm-3
Step-1: Set $k = 0$,
Step-2: Create an initial population $\{x_i^{(k)}\}$ ($i = 1, 2, \ldots, p\_size$) by generating each component/gene of each individual/chromosome randomly from the search space (in the case of unconstrained optimization problems, a large space is considered as the search space) [$p\_size$ denotes the population size],
Step-3: Compute the function values $f(x_i^{(k)})$, $i = 1, 2, \ldots, p\_size$,
Step-4: Find the best value of $f$ from all $f(x_i^{(k)})$,
Step-5: Increase the value of $k$ by unity, i.e., set $k = k + 1$,
Step-6: Set $i = 1$,
Step-7: If $k = 1$, compute the steepest descent direction $d_i^{(k)} = -\nabla f(x_i^{(k-1)})$; otherwise, go to step-10,
Step-8: Find the best found value of the step length $\lambda_i^{(k)}$ in the direction $d_i^{(k)}$ using Algorithm-2 and store this value in $\lambda_i^{(k)}$,
Step-9: Set $x_i^{(k)} = x_i^{(k-1)} + \lambda_i^{(k)} d_i^{(k)}$ and go to step-13,
Step-10: Compute the conjugate direction $d_i^{(k)} = -\nabla f(x_i^{(k-1)}) + \dfrac{\|\nabla f(x_i^{(k-1)})\|^2}{\|\nabla f(x_i^{(k-2)})\|^2}\, d_i^{(k-1)}$,
Step-11: Compute the best found value of the step length $\lambda_i^{(k)}$ in the direction $d_i^{(k)}$ by using Algorithm-2 and store this value in $\lambda_i^{(k)}$,
Step-12: Set $x_i^{(k)} = x_i^{(k-1)} + \lambda_i^{(k)} d_i^{(k)}$,
Step-13: Compute the function value $f(x_i^{(k)})$,
Step-14: Increase the value of $i$ by unity, i.e., set $i = i + 1$,
Step-15: If $i \le p\_size$, go to step-7,
Step-16: Find the best value of $f$ from all $f(x_i^{(k)})$,
Step-17: If the termination criterion is satisfied, go to step-19; otherwise, go to step-18,
Step-18: Replace the current population by the improved one and then go to step-5,
Step-19: Print the result and stop the process.
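A high-level C sketch of PBFRM, reusing (with simplified signatures) the hypothetical helpers sketched earlier; MAX_ITER, EPS and the stopping test on the best function value are illustrative assumptions, not the paper's exact termination criterion.

```c
#include <float.h>
#include <math.h>

#define P_SIZE   50       /* PBFRM population size (varies with n in the paper) */
#define MAX_ITER 1000     /* assumed iteration cap   */
#define EPS      1e-8     /* assumed tolerance       */

/* Hypothetical helpers sketched earlier, with simplified signatures. */
extern double f(const double *x, int n);
extern void fr_iteration(int n, double x[], double g[], double g_prev[], double d[]);
extern void init_population(int p, int n, double pop[p][n],
                            const double lo[n], const double hi[n]);

void pbfrm(int n, const double lo[n], const double hi[n], double best_x[n])
{
    double pop[P_SIZE][n];                     /* multi-point approximations */
    double g[P_SIZE][n], gp[P_SIZE][n], d[P_SIZE][n];
    double best = DBL_MAX;

    init_population(P_SIZE, n, pop, lo, hi);   /* Steps 1-2 */
    for (int i = 0; i < P_SIZE; i++)           /* zero history: first direction */
        for (int j = 0; j < n; j++)            /* becomes steepest descent      */
            g[i][j] = gp[i][j] = d[i][j] = 0.0;

    for (int k = 1; k <= MAX_ITER; k++) {      /* Steps 5-18: outer loop */
        double prev_best = best;

        for (int i = 0; i < P_SIZE; i++)       /* Steps 6-15: improve each chromosome */
            fr_iteration(n, pop[i], g[i], gp[i], d[i]);

        for (int i = 0; i < P_SIZE; i++) {     /* Step-16: best of the population */
            double v = f(pop[i], n);
            if (v < best) {
                best = v;
                for (int j = 0; j < n; j++) best_x[j] = pop[i][j];
            }
        }
        if (k > 1 && fabs(prev_best - best) < EPS)   /* Step-17 (illustrative) */
            break;
    }
}
```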

Numerical Illustration
To test the performance of the proposed hybrid algorithm, numerical experiments have been carried out on 13 test problems (bound constrained optimization problems) given in Appendix-1. For solving these test problems, the population based Fletcher-Reeves method has been coded in C and run on a Pentium IV 2.66 GHz PC with 1 GB RAM under LINUX. Each test problem has been solved for different values of n (the number of decision variables) and, in each case, 30 independent runs have been performed and the corresponding results collected. For the statistical analysis of the results, the following characteristics have been computed and shown in Tables 1-13:
(i) best and worst function values over the 30 runs,
(ii) mean and standard deviation (S.D.) of the function values over the 30 runs.
It is to be noted that the population size (p_size) of the GA used for calculating the step length in each iteration of the Fletcher-Reeves method is taken as 30 for every problem, whereas the maximum number of generations (m_gen) of the GA and the population size of the proposed method differ for different values of n, the number of variables of each problem; these values are mentioned in Tables 1-13. In all cases, the probabilities of crossover (p_cross) and mutation (p_mute) are taken as 0.9 and 0.15 respectively. To compare the performance of our proposed method (PBFRM) with the method HX-NUM-GA of Deep and Thakur (2007), we have carried out a comparative analysis with respect to different characteristics, such as the average number of function evaluations, the average execution time, and the mean and standard deviation of the objective function value; the results are shown in Table 14. For each test problem listed in Appendix-1, Table 14 reports the mean and standard deviation of the best objective function value, the average number of function evaluations and the average execution time of the two methods. It is observed from this table that, for each test problem, the average number of function evaluations of PBFRM is significantly smaller than that of the HX-NUM-GA method. It is also found that, out of the 13 test problems, PBFRM takes less execution time in 9 cases, and in one case the times are almost equal. Again, in most of the cases the best and worst objective function values obtained by our proposed method coincide with the global objective function values shown in Appendix-1; for test problem 11, however, the best and worst objective function values are only very close to the global objective function value.

Concluding Remarks
In the proposed method, the well known Fletcher-Reeves method has been applied repeatedly to each chromosome of a population generated initially from the search domain of the problem. The proposed method is a multi-point approximation method; as a result, it requires more execution time and more function evaluations than single-point approximation methods. Due to the random selection of the initial approximations from the search space, the proposed method possesses the merits of global exploration and fast convergence. As the proposed method is gradient based, it is applicable only to problems where the search space is continuous and the objective function is differentiable in $\mathbb{R}^n$, n being the number of decision variables. The proposed method may be applied to real life decision making problems in areas such as structural optimization, inventory control, robotics and circuit design.

Table 1
Computational Results of Problem 1

Table 3
Computational Results of Problem 3

Table 4
Computational Results of Problem 4

Table 6
Computational Results of Problem 6

Table 7
Computational Results of Problem 7

Table 8
Computational Results of Problem 8

Table 14
Comparative Study between the methods HX-NUM-GA and PBFRM