DSSO: Directional Shrinking Search Optimization

Researchers continue to develop new heuristic search algorithms in the field of optimization. In this paper, a novel optimization technique is proposed that aims to maximize the effectiveness of both exploration and exploitation in the search for the global optimum. In the proposed algorithm, the points in the search region are directed towards the current best solution while, at the same time, the search region shrinks exponentially. A stochastic step is added at the end of each iteration to provide better exploitation of the search region. The performance of the algorithm is compared with several well-known optimization techniques, and the empirical results on standard benchmark functions clearly demonstrate the high performance of the proposed algorithm.


INTRODUCTION
With advances in engineering and technology, the complexity and size of practical problems keep growing. Researchers are continuously looking for methodologies that can solve these problems successfully. Many problems are so complex (NP-hard problems [1]) that traditional algorithms cannot solve them. Hence, researchers continue to develop algorithms that solve global numerical optimization problems in better ways. Floudas and Gounaris [2] have surveyed recent advances in solving global optimization problems. In the last two decades many algorithms have been developed by emulating the behavior of nature. Some of the popular nature-inspired algorithms are the Genetic Algorithm (GA) [3]-[8], Particle Swarm Optimization (PSO) [9]-[12], the Differential Evolution algorithm (DE) [13]-[17], the Firefly Algorithm (FFA) [18]-[20], the Gravitational Search Algorithm (GSA) [21], and many more. These nature-inspired algorithms have their own advantages and disadvantages when searching for the global optimum solution in a search space.
Every optimization algorithm has to balance two distinct operations while searching for the global optimum in a problem domain: exploration and exploitation. Exploration refers to how exhaustively the algorithm can scan the search space, while exploitation refers to its ability to refine the best solution found so far. When an algorithm performs an exhaustive search of the space, it consumes more time, so the computational cost increases. On the other hand, if exploitation is too pronounced, the chance of getting trapped in a local optimum is higher. Hence, striking a balance between these two criteria is essential. The performance of an algorithm can be judged by the accuracy with which it finds the global optimum in the least computational time.
In this paper we propose a stochastic iterative optimization technique termed the Directional Shrinking Search Optimization (DSSO) algorithm. The algorithm gradually constricts the size of the search space during the exploration process and finally obtains a highly probable region in which to exploit the optimum solution. This reduces the computational time of the algorithm.

PROPOSED ALGORITHM
In most optimization problems the search region grows exponentially with the dimension, so classical techniques often cannot provide a practical solution. In the proposed technique, at any instant all randomly generated solution points are made to move towards the best solution observed so far. The directional movement of the points is controlled by a factor derived from the unit vectors between the respective points and the best solution point, so that during each iteration every point drifts gradually towards the best solution. To provide high exploration capability, a stochastic move is applied to every point through a Levy flight. Afterwards, the fitness values of the points in the search space indicate how well the points are converging towards a better solution. To reduce the computational requirement of the algorithm, the minimum and maximum value of the search points in each dimension are calculated. These values act as the new boundary, shrinking the search region while retaining a high probability that the best solution lies within it. In every iteration, the same process of directional movement with a stochastic perturbation, followed by shrinking of the search region, takes place. This process not only facilitates better exploration but also exploits the best solution in the search space. Hence the global optimum solution is found in less computational time.
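The paper does not give the exact form of the Levy flight used for the stochastic move. A common way to draw Levy-distributed steps is Mantegna's algorithm, sketched below; the function name `levy_step` and the default exponent `beta = 1.5` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-distributed step vector using Mantegna's algorithm.

    The step is u / |v|^(1/beta), where u ~ N(0, sigma_u^2) and
    v ~ N(0, 1); sigma_u is chosen so the ratio follows a Levy
    distribution with exponent beta.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```

Because the Levy distribution is heavy-tailed, most steps are small (local exploitation) while occasional large jumps help the points escape local optima, which is why such flights are popular for the exploration phase of metaheuristics.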
The algorithm for the proposed optimization technique is given below:
1. Given the objective function f(x), where x = (x_1, x_2, ..., x_d).
12. Again calculate the fitness value for all points in the population and update the minimum fitness value using a greedy technique.
13. Limit the population within the new search region by using the above boundary condition.
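Since the paper does not reproduce the full update equations, the loop described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the parameter names (`n_points`, `step_scale`, `levy_scale`) are assumptions, and a Cauchy perturbation is used as a simple heavy-tailed stand-in for the Levy flight.

```python
import numpy as np

def dsso(f, lower, upper, n_points=40, n_iter=150,
         step_scale=0.5, levy_scale=0.1, seed=0):
    """Sketch of Directional Shrinking Search Optimization (DSSO)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    dim = lower.size
    # Randomly initialise the population inside the search region.
    X = rng.uniform(lower, upper, (n_points, dim))
    fit = np.array([f(x) for x in X])
    i = fit.argmin()
    best_x, best_f = X[i].copy(), fit[i]
    for _ in range(n_iter):
        # Directional move: each point drifts towards the current best
        # along the unit vector between them, by a random fraction of
        # its distance to the best point.
        d = best_x - X
        norm = np.linalg.norm(d, axis=1, keepdims=True)
        unit = np.where(norm > 0, d / np.where(norm == 0, 1.0, norm), 0.0)
        X = X + step_scale * rng.random((n_points, 1)) * norm * unit
        # Stochastic move: heavy-tailed Cauchy noise stands in for the
        # Levy flight, scaled by the current extent of the region.
        X = X + levy_scale * rng.standard_cauchy((n_points, dim)) * (upper - lower)
        # Shrink the search region to the bounding box of the clipped
        # population (per-dimension minima and maxima of the points).
        X = np.clip(X, lower, upper)
        lower = X.min(axis=0)
        upper = X.max(axis=0)
        # Greedy update of the best solution found so far.
        fit = np.array([f(x) for x in X])
        i = fit.argmin()
        if fit[i] < best_f:
            best_f, best_x = fit[i], X[i].copy()
    return best_x, best_f
```

Because the best solution is kept greedily, the returned fitness can only improve over the iterations, while the shrinking bounding box progressively concentrates the search around the most promising region.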

EXPERIMENTAL RESULTS AND DISCUSSION
The performance of our proposed Directional Shrinking Search Optimization (DSSO) algorithm is evaluated using standard benchmark functions discussed in the literature. The algorithm was coded in MATLAB (R2012a), and the computational experiments were conducted on a Windows 8.1 PC with an Intel(R) Core(TM) i3-4000M 2.4 GHz CPU and 4 GB RAM. The benchmark functions used for performance evaluation are shown in detail in Table-. Moreover, the convergence curves of these benchmark functions, shown in the figures below, clearly depict that the computation time taken by DSSO is very small. It has been observed that all the functions converge to the global optimum within 50 to 100 iterations.

CONCLUSION
Recent years have witnessed the development of many heuristic optimization algorithms. Most of these algorithms are inspired by nature, physics, and so on. The common point among all of them is the drifting of points towards the global optimum using a mathematical expression that emulates some natural phenomenon. Here the drifting of points is done through a directional vector while, at the same time, the reduction of the search region improves the exploration and exploitation capability of the DSSO algorithm. According to the no free lunch theorem [22], no single algorithm can solve all optimization problems well. Hence, researchers may use this algorithm to solve a larger number of complex optimization functions, or hybridize it with other optimization algorithms, which may lead to more fruitful results in future applications.