High dimensional real parameter optimization with teaching learning based optimization

Article history: Received 12 April 2012; Received in revised form 12 June 2012; Accepted 22 June 2012; Available online 27 June 2012

In this paper, a new optimization technique known as Teaching-Learning-Based Optimization (TLBO) is implemented for solving high dimensional function optimization problems. Although several other approaches address this class of problems, their computational cost grows rapidly with problem dimension. In this work we simulate TLBO on high dimensional benchmark function optimization problems and compare its results with two widely used alternative techniques, Differential Evolution (DE) and Particle Swarm Optimization (PSO). The results clearly reveal that TLBO is able to address the computational cost issue for all simulated functions, up to large dimensions, better than the other two techniques. © 2012 Growing Science Ltd. All rights reserved


Introduction
Even though there is a huge amount of work dealing with global optimization, there are still few powerful techniques available for dense high-dimensional functions. One of the main reasons is the high computational cost involved. Approaches that solve the global optimization problem reliably are usually computationally expensive: very often they require many function evaluations, iterations, and arithmetic operations within the optimization code itself. In practical optimization applications, the evaluation of f is often very expensive to compute, so a large number of function evaluations may not be feasible. In the recent past, there has been a growing demand for evolutionary computation techniques for solving global function optimization problems. Among them, the Genetic Algorithm (Holland, 1975), Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995), Differential Evolution (DE) (Storn & Price, 1997; Price et al., 2005) and the Artificial Bee Colony (ABC) (Karaboga & Basturk, 2007; Karaboga & Akay, 2009) are widely used. These techniques and their several variants have been applied to many benchmark global constrained and unconstrained function optimization problems (Karaboga & Basturk, 2007; Karaboga & Akay, 2009; Mezura-Montes & Coello, 2005; Zavala et al., 2005; Becerra & Coello, 2006; Huang et al., 2007). However, solving high dimensional problems with reasonably few function evaluations remains a great challenge. Recently, a new optimization technique based on a teaching-learning approach, known as Teaching-Learning-Based Optimization (TLBO) (Rao et al., 2011), was reported to produce better results with regard to convergence speed. Rao et al. (2012) simulated many constrained and unconstrained real parameter optimization problems. Rao and Patel (2012) described the elitist TLBO algorithm for solving complex constrained optimization problems. The authors provided complete details of the TLBO algorithm regarding its algorithm-specific parameter-less concept and the effect of common control parameters such as elite size, population size and number of generations on the performance of the algorithm. The points raised by Črepinšek et al. (2012) were already addressed by Rao and Patel (2012).
In this paper we simulate TLBO on various benchmark problems with dimensions ranging from 10 to 500 to establish the effectiveness of TLBO over the very popular classical PSO and DE.
The rest of the paper is organized as follows. In Section 2, we explain TLBO in brief. Section 3 provides the benchmark functions, parameter settings and simulation results. The conclusion and future enhancements are discussed in Section 4.

Teaching-learning-based optimization
This optimization method is based on the effect of the influence of a teacher on the output of learners in a class. It is a population-based method and, like other population-based methods, it uses a population of solutions to proceed towards the global solution. A group of learners constitutes the population in TLBO. Any optimization problem has a number of different design variables. The different design variables in TLBO are analogous to the different subjects offered to learners, and a learner's result is analogous to the 'fitness' in other population-based optimization techniques. As the teacher is considered the most learned person in the society, the best solution found so far is analogous to the teacher in TLBO. The process of TLBO is divided into two parts: the 'Teacher Phase', meaning learning from the teacher, and the 'Learner Phase', meaning learning through interaction between learners. In the subsections below we briefly discuss the implementation of TLBO.

Initialization
The following notations are used for describing TLBO:
N : number of learners in a class, i.e. the "class size"
D : number of courses offered to the learners
G : maximum number of allowable iterations
The population is randomly initialized within the bounded search space as a matrix of N rows and D columns. The j-th parameter of the i-th learner is assigned a value randomly using the equation

x(i,j) = x_j^min + rand × (x_j^max − x_j^min),  (1)

where rand represents a uniformly distributed random variable within the range (0, 1), and x_j^min and x_j^max represent the minimum and maximum values for parameter j. The parameters of the i-th learner for generation g are given by

X_i^g = [x(i,1)^g, x(i,2)^g, …, x(i,j)^g, …, x(i,D)^g].  (2)
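The initialization step can be sketched in Python/NumPy as follows; this is a minimal illustration, and the function name and argument layout are our own choices rather than anything prescribed by TLBO.

```python
import numpy as np

def initialize_population(N, D, x_min, x_max, seed=None):
    """Randomly initialize N learners (rows) over D course parameters
    (columns): x[i, j] = x_min[j] + rand * (x_max[j] - x_min[j]),
    with rand drawn uniformly from (0, 1)."""
    rng = np.random.default_rng(seed)
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.random((N, D)) * (x_max - x_min)
```

For example, `initialize_population(50, 10, [-100] * 10, [100] * 10)` produces a 50 × 10 class of learners within [−100, 100] in every dimension.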

Teacher Phase
The mean value M_j^g of each subject j across the learners in the class at generation g is given as

M_j^g = (1/N) Σ_(i=1..N) x(i,j)^g,  (3)

and M^g = [M_1^g, M_2^g, …, M_D^g] denotes the vector of these means. The learner with the minimum objective function value is considered as the teacher X_teacher^g for the respective iteration. The teacher phase makes the algorithm proceed by shifting the mean of the learners towards its teacher. To obtain a new set of improved learners, a random weighted differential vector is formed from the current mean and the desired mean parameters and added to the existing population of learners:

Xnew_i^g = X_i^g + rand × (X_teacher^g − T_F M^g),  (4)

where T_F is the teaching factor, which decides the value of mean to be changed. The value of T_F can be either 1 or 2 and is decided randomly with equal probability as

T_F = round[1 + rand(0, 1)].  (5)

It may be noted here that T_F is not a parameter of the TLBO algorithm. The value of T_F is not given as an input to the algorithm; its value is decided randomly by the algorithm using Eq. (5). After conducting a number of experiments on many benchmark functions, it was concluded that the algorithm performs better if the value of T_F is between 1 and 2. However, the algorithm is found to perform much better if the value of T_F is either exactly 1 or 2, and hence, to simplify the algorithm, the teaching factor is suggested to take either 1 or 2 depending on the rounding criterion given by Eq. (5). If Xnew_i^g is found to be a superior learner to X_i^g in generation g, it replaces the inferior learner in the matrix.
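A minimal sketch of the teacher phase just described, following Eqs. (4) and (5), assuming minimization and greedy replacement of inferior learners; the function name and state layout are illustrative.

```python
import numpy as np

def teacher_phase(pop, fit, f, rng):
    """One TLBO teacher phase for minimization: shift every learner by a
    random weighted difference between the teacher (current best) and
    T_F times the class mean, keeping a move only if it improves."""
    teacher = pop[np.argmin(fit)]                  # best learner acts as teacher
    mean = pop.mean(axis=0)                        # subject-wise class mean, Eq. (3)
    T_F = int(rng.integers(1, 3))                  # teaching factor: 1 or 2, Eq. (5)
    cand = pop + rng.random(pop.shape) * (teacher - T_F * mean)   # Eq. (4)
    cand_fit = np.apply_along_axis(f, 1, cand)
    better = cand_fit < fit                        # greedy selection
    pop[better] = cand[better]
    fit[better] = cand_fit[better]
    return pop, fit
```

The greedy acceptance guarantees that no learner's fitness ever worsens in this phase.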

Learner Phase
In this phase the learners interact with one another. The process of mutual interaction tends to increase the knowledge of a learner. For a given learner X_i^g, another learner X_r^g (r ≠ i) is randomly selected. The i-th learner of the matrix in the learner phase is then updated as

Xnew_i^g = X_i^g + rand × (X_i^g − X_r^g)  if f(X_i^g) < f(X_r^g),
Xnew_i^g = X_i^g + rand × (X_r^g − X_i^g)  otherwise,  (6)

and, as in the teacher phase, Xnew_i^g replaces X_i^g only if it yields a better objective function value.
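The learner phase of Eq. (6) can be sketched similarly; again this is a minimization-oriented illustration with greedy acceptance, not a prescribed implementation.

```python
import numpy as np

def learner_phase(pop, fit, f, rng):
    """One TLBO learner phase: each learner i interacts with a random
    peer j != i, moving toward j if j is better and away otherwise;
    the move is kept only if it improves learner i."""
    N, D = pop.shape
    for i in range(N):
        j = int(rng.integers(N - 1))
        if j >= i:
            j += 1                                 # ensures j != i
        step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        trial = pop[i] + rng.random(D) * step      # Eq. (6)
        trial_fit = f(trial)
        if trial_fit < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, trial_fit
    return pop, fit
```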

Algorithm Termination
The algorithm is terminated after G iterations are completed. Further details of TLBO can be referred to in (Rao et al., 2012).
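Putting the phases together, a complete TLBO run terminating after a fixed number of iterations might look like the following self-contained sketch (minimization; the bound clipping is our own addition, and the default population size and iteration budget are arbitrary choices):

```python
import numpy as np

def tlbo(f, x_min, x_max, pop_size=50, max_iter=200, seed=None):
    """Minimal TLBO loop: random initialization, then a teacher phase and
    a learner phase per generation, stopping after max_iter generations."""
    rng = np.random.default_rng(seed)
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    D = x_min.size
    pop = x_min + rng.random((pop_size, D)) * (x_max - x_min)
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(max_iter):
        # Teacher phase: shift learners toward the current best learner.
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        T_F = int(rng.integers(1, 3))              # teaching factor: 1 or 2
        cand = np.clip(pop + rng.random(pop.shape) * (teacher - T_F * mean),
                       x_min, x_max)
        cand_fit = np.apply_along_axis(f, 1, cand)
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]
        # Learner phase: pairwise interaction with a random peer.
        for i in range(pop_size):
            j = int(rng.integers(pop_size - 1))
            if j >= i:
                j += 1                             # ensures j != i
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            trial = np.clip(pop[i] + rng.random(D) * step, x_min, x_max)
            trial_fit = f(trial)
            if trial_fit < fit[i]:
                pop[i], fit[i] = trial, trial_fit
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

Note that TLBO uses two function evaluations per learner per generation (one per phase), which is why function-evaluation budgets rather than generation counts are compared in Section 3.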

Benchmark functions, parameter settings and simulation results
In this work we have simulated two classes of functions: unimodal and multimodal. Further, we have chosen separable and non-separable functions in both of these classes. Descriptions of all simulated functions are presented in Table 1.
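Table 1 itself is not reproduced here; as an illustration, the following are the standard definitions of two of the simulated functions, Sphere (unimodal, separable) and Rastrigin (multimodal, separable), assuming the usual forms of these benchmarks.

```python
import numpy as np

def sphere(x):
    """Unimodal, separable: f(x) = sum_i x_i^2; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal, separable: f(x) = 10 n + sum_i (x_i^2 - 10 cos(2 pi x_i));
    global minimum 0 at the origin, with many regularly spaced local minima."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```

Both functions are defined for any dimension n, which is what allows the experiments below to scale the same benchmarks from 10 up to 500 dimensions.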

Parameter settings
In all simulations of our work, the values of the common parameters used in each algorithm, such as the population size and the total evaluation budget, were chosen to be the same. The population size is 50 and the maximum number of fitness function evaluations is fixed at 100,000 for all functions. The other algorithm-specific parameters are given below. PSO settings: the cognitive and social components, c1 and c2, are constants that weight personal and population experience, respectively. In our experiments both the cognitive and social components are set to 2 (Kennedy & Eberhart, 1995). The inertia weight, which determines how the previous velocity of a particle influences its velocity in the next iteration, is 0.5 (Kennedy & Eberhart, 1995).
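One PSO iteration with exactly these settings (w = 0.5, c1 = c2 = 2) can be sketched as follows; the function name and state layout are our own, and no bound handling is shown.

```python
import numpy as np

def pso_step(pos, vel, pbest, pbest_fit, f, w=0.5, c1=2.0, c2=2.0, rng=None):
    """One PSO update (minimization): the new velocity blends the previous
    velocity (inertia w) with random pulls toward each particle's personal
    best (cognitive c1) and the swarm's global best (social c2)."""
    rng = np.random.default_rng(rng)
    gbest = pbest[np.argmin(pbest_fit)]            # best personal best so far
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.apply_along_axis(f, 1, pos)
    improved = fit < pbest_fit                     # update personal bests greedily
    pbest = pbest.copy()
    pbest_fit = pbest_fit.copy()
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    return pos, vel, pbest, pbest_fit
```

Because personal bests only ever improve, the swarm's best-so-far fitness is non-increasing across iterations.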

DE Settings:
In DE, F is a real constant which scales the differential variation between two solutions, and it is set to F = 0.5 × (1 + rand(0, 1)), where rand(0, 1) is a uniformly distributed random number within the range [0, 1]. In our simulations the crossover rate R, which controls the change of the diversity of the population, is chosen as R = (Rmax − Rmin) × (MAXIT − iter) / MAXIT, where Rmax = 1 and Rmin = 0.5 are the maximum and minimum values of R, iter is the current iteration number and MAXIT is the maximum number of allowable iterations, as recommended in (Storn & Price, 1997).
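A DE/rand/1/bin generation using exactly these schedules — F = 0.5 × (1 + rand) per generation and the linearly decaying crossover rate R — might be sketched as follows; the binomial crossover and greedy selection are the standard DE ingredients, assumed here rather than spelled out in the text above.

```python
import numpy as np

def de_generation(pop, fit, f, it, max_it, rng):
    """One DE/rand/1/bin generation (minimization) with the parameter
    schedules quoted above: F = 0.5 * (1 + rand(0, 1)) and
    R = (Rmax - Rmin) * (max_it - it) / max_it with Rmax = 1, Rmin = 0.5."""
    N, D = pop.shape
    F = 0.5 * (1.0 + rng.random())                 # scale factor, redrawn per generation
    R = (1.0 - 0.5) * (max_it - it) / max_it       # decaying crossover rate
    for i in range(N):
        others = [k for k in range(N) if k != i]
        a, b, c = rng.choice(others, size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])    # differential mutation
        cross = rng.random(D) < R                  # binomial crossover mask
        cross[rng.integers(D)] = True              # at least one mutant component
        trial = np.where(cross, mutant, pop[i])
        trial_fit = f(trial)
        if trial_fit < fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, trial_fit
    return pop, fit
```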
TLBO Settings: TLBO has no such algorithm-specific constants to set.
We have simulated each function with different dimensions for each algorithm, with dimensions ranging from 10 to 500. The simulated results are presented in Tables 2 to 5. The fitness values and the number of function evaluations for the six unimodal functions are shown in Table 2 and Table 3, respectively; similarly, those for the multimodal functions are shown in Table 4 and Table 5. The results are taken over 30 independent runs: the means and standard deviations are calculated for the obtained global minimum values and for the number of function evaluations of each algorithm at different dimensions.

Discussion of Results
In this work, each function is run for up to 100,000 function evaluations (FEs), and a simulation is terminated when it reaches the maximum number of evaluations or the global minimum value of the test function. TLBO requires considerably fewer function evaluations than DE and PSO, particularly at increasing dimensions, as shown in Table 5. In general, for all the functions experimented with in our work, TLBO outperforms the other two approaches. Figs. 1 to 12 present the fitness curves of all tested functions for all algorithms.
It is clearly evident from the above figures that TLBO is able to locate global minimum values for all functions even in high dimensions. PSO and DE, however, fail to obtain optimal values even after 100,000 function evaluations, whereas TLBO finds the best global minimum values for all dimensions up to 500 within 100,000 FEs. This establishes the ability of TLBO to optimize high dimensional functions over the two other well-known optimization techniques, DE and PSO.

Conclusion and future research
In this paper, a new optimization technique called Teaching-Learning-Based Optimization (TLBO) is implemented for solving high dimensional real parameter optimization benchmark functions. The results are compared with those of two other very popular optimization techniques, Differential Evolution (DE) and Particle Swarm Optimization (PSO). From the simulation results, it is clearly noted that TLBO is a very powerful optimization technique for handling high dimensional functions. Twelve benchmark functions belonging to the unimodal and multimodal categories are simulated with dimensions ranging up to 500. For all functions, TLBO is able to locate the global minimum function values with fewer function evaluations (FEs) than DE and PSO. This clearly demonstrates the capability of TLBO as a candidate for solving very high dimensional industrial application problems. As further research, it can be investigated how TLBO addresses multi-objective function optimization problems.

Table 2
Global Minimum values of Unimodal functions

Table 3
Function Evaluations for Unimodal functions

Table 4
Global minimum values for Multimodal Functions

Table 5
Function Evaluations for Multimodal functions

From Tables 2 and 4 it is clear that, as the dimension increases, it becomes difficult for both PSO and DE to locate the global best position and those algorithms become trapped at local points, but this is not the case for TLBO. In other words, increasing the dimension does not affect the search for the global best position in TLBO. From Table 3 it can be verified that the number of FEs is considerably lower for the Sphere and SumSquare functions in TLBO compared to the other two algorithms. For the Rastrigin, Noncontinuous Rastrigin, Griewank and Multimod functions, TLBO finds the optimal global values with fewer FEs than DE and PSO.