Neurocomputing

Volume 221, 19 January 2017, Pages 123-137

Collective decision optimization algorithm: A new heuristic optimization method

https://doi.org/10.1016/j.neucom.2016.09.068

Abstract

Recently, inspired by nature, diverse successful and effective optimization methods have been proposed for solving complex and challenging applications in different domains. This paper proposes a new meta-heuristic technique, the collective decision optimization algorithm (CDOA), and applies it to training artificial neural networks. CDOA simulates the social decision-making behavior of humans through five phases: an experience-based phase, an others'-based phase, a group-thinking-based phase, a leader-based phase and an innovation-based phase, with a corresponding operator designed for each phase. Experimental results on a comprehensive set of benchmark functions and two nonlinear function approximation examples demonstrate that CDOA is competitive with other state-of-the-art optimization algorithms.

Introduction

Global optimization refers to the process of finding, among all possible solutions, the values that maximize or minimize the objective function of a given system or mathematical model. The increasing complexity of optimization tasks in different real-world fields makes the development of optimization techniques more significant and interesting than before. Therefore, over the past two decades, various optimization methods have been proposed based on diverse inspirations. According to the number of candidate solutions, they can be grouped into two categories: individual-based and population-based methods. In the former case, the optimization process starts from a single random solution, which is improved over the course of iterations. Thus, individual-based techniques need less computation and fewer function evaluations, but suffer from notable drawbacks such as reliance on derivative information and premature convergence. In contrast, in the latter case, a set of solutions is generated randomly and improved from generation to generation. In this way, population-based methods have a high ability to avoid local optima, since the exchange of information between solutions assists them in overcoming different difficulties of the search space. However, they also incur higher computational cost and more function evaluations.
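
To make the distinction concrete, the following minimal Python sketch contrasts a generic individual-based scheme (one solution, greedily perturbed) with a generic population-based scheme (a set of solutions sharing information through the current best member). It is an illustrative toy, not any specific algorithm from the literature; the sphere objective, bounds and step sizes are arbitrary choices.

```python
import random

def sphere(x):
    """Simple test objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def individual_based_search(f, dim, iters=1000, step=0.1):
    """Single-solution scheme: one candidate is perturbed and kept if it improves."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(iters):
        cand = [v + random.gauss(0, step) for v in x]
        fc = f(cand)
        if fc < fx:            # greedy acceptance; cheap but prone to local optima
            x, fx = cand, fc
    return x, fx

def population_based_search(f, dim, pop_size=30, iters=200, step=0.1):
    """Population scheme: members exchange information (here, via the best member)."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        best = min(pop, key=f)
        new_pop = []
        for x in pop:
            # move each member partway toward the current best, plus random noise
            cand = [xi + random.random() * (bi - xi) + random.gauss(0, step)
                    for xi, bi in zip(x, best)]
            new_pop.append(min((x, cand), key=f))   # keep the better of old/new
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)

print(individual_based_search(sphere, 5)[1])
print(population_based_search(sphere, 5)[1])
```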

An important feature of population-based stochastic search techniques is the division of the search process into two main phases: diversification and intensification [1]. The former refers to the phase in which candidate solutions change more freely and explore promising regions as broadly as possible. In contrast, the latter promotes convergence toward the best values obtained during exploration. In other words, favoring diversification results in better local-optima avoidance, whereas emphasizing intensification yields a faster convergence rate. Recently, heuristic optimization techniques have exhibited remarkable performance in a wide variety of problems from diverse fields owing to the following advantages: simplicity, flexibility, a derivative-free mechanism and local-optima avoidance. For this reason, the field has expanded tremendously. Inspired by different natural phenomena, scholars have proposed many successful and effective optimization methods. According to their inspiration, these existing paradigms can be classified into three main categories: evolution-based, physics-based and swarm-based methods.

Evolution-based methodologies are inspired by the laws of biological evolution. The most popular approach in this category is the Genetic Algorithm (GA) [2], which imitates the theory of Darwinian evolution. Other examples include Biogeography-Based Optimization (BBO), based on natural biogeography [3], and the Bird Mating Optimizer (BMO), based on natural evolution [4].

Physics-based algorithms are those that mimic the physical laws of the universe, such as Simulated Annealing (SA), based on the metallurgical annealing process [5]; Ray Optimization (RO), based on Snell's law of light refraction [6]; the Gravitational Search Algorithm (GSA), based on the law of gravity and mass interactions [7]; and Black Hole (BH), based on the black hole phenomenon [8].

Swarm-based techniques simulate various kinds of animal or human collective behaviors. Examples include Particle Swarm Optimization (PSO), based on the foraging behavior of bird flocking [9]; the Flower Pollination Algorithm (FPA), based on the pollination process of flowers [10]; Cuckoo Search (CS), based on the brood parasitism of cuckoo species [11]; the Crow Search Algorithm (CSA), based on the behavior of crows [12]; the Interior Search Algorithm (ISA), based on interior design and decoration [13]; Artificial Bee Colony (ABC), based on the foraging behavior of honey bee swarms [14]; and Harmony Search (HS), based on the principle of music improvisation [15].

Despite the great number of new algorithms and their applications in different fields, one may ask whether more techniques are still needed. The answer is positive according to the No Free Lunch (NFL) theorem [16], which logically proves that no single optimization method is best suited for all optimization tasks. This theorem encourages the proposal of new methods in the hope of solving a wider range of problems, or some specific types of problems that remain unsolved. In addition, it is well known that population-based algorithms update their candidate solutions based on fitness values, that is, information about the search space. Undoubtedly, accurate spatial information is greatly beneficial for finding a rough approximation of the global optimum. However, the great majority of population-based methods share a common characteristic: each agent generates only one new individual per iteration. Consequently, the search space cannot be systematically exploited or explored, owing to a paucity of new solutions robust enough to guide the search. This motivates us to study whether a new heuristic algorithm can be designed by combining a new vector-generation strategy with an increase in the number of candidate solutions carrying valuable spatial information; in other words, whether the search space around each member can be sampled systematically by increasing the number of selectable solutions. Motivated by these perspectives, the authors attempt to draw inspiration from the decision-making behavior of humans and design a new optimization methodology.
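
The following minimal sketch illustrates this motivating idea: a hypothetical helper, propose_candidates, builds several trial positions for one agent from different guiding locations (plus a few random perturbations) and keeps the fittest one. The choice of guides, weights and the number of random samples are assumptions for illustration, not the operators defined later in the paper.

```python
import random

def propose_candidates(agent, guides, f, n_random=3, sigma=0.1):
    """Build several trial positions for a single agent and return the best one
    (illustrative sketch of the idea, not the paper's exact generation rule)."""
    trials = [list(agent)]                      # keep the current position as a fallback
    for g in guides:                            # e.g. personal best, a peer, the group mean
        trials.append([a + random.random() * (gi - a) for a, gi in zip(agent, g)])
    for _ in range(n_random):                   # extra randomly perturbed samples
        trials.append([a + random.gauss(0, sigma) for a in agent])
    return min(trials, key=f)                   # select the fittest trial (minimization)
```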

In line with this motivation, this paper proposes a new heuristic paradigm, the collective decision optimization algorithm (CDOA), which is inspired by the decision-making behavior of humans. Like other nature-inspired methods, CDOA is a population-based search technique that uses a population of candidate solutions to approach the global optimum. At each iteration, guided by different locations, an agent generates several promising solutions through a multi-step position-selection scheme [17], in which the new location produced by the previous movement is taken as the starting point for the next movement, guided by another position. For the global best solution, it is beneficial to perturb its position slightly by a random walk, which works as a local search: several candidate solutions are randomly generated and placed around the current global best. As a result, increasing the number of comparable solutions for each agent helps to sample the search space systematically and to locate the global optimum more reliably.
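
The sketch below gives one possible reading of this description. The chained sub-moves and the random walk are hedged approximations: the actual CDOA operators, phase ordering and coefficients are defined in Section 2, and the function names (multi_step_update, random_walk_around_best) are hypothetical.

```python
import random

def multi_step_update(x, experience, peer, group_mean, leader, f, sigma=0.05):
    """Illustrative multi-step position-selection scheme: each sub-move starts from
    the position produced by the previous one and is guided by a different location
    (experience, another member, the group mean, the leader), followed by a random
    'innovation' perturbation. Weights and ordering are assumptions, not the
    published CDOA operators."""
    pos = list(x)
    for guide in (experience, peer, group_mean, leader):
        pos = [p + random.random() * (g - p) for p, g in zip(pos, guide)]
    pos = [p + random.gauss(0, sigma) for p in pos]      # innovation-like random jitter
    return pos if f(pos) < f(x) else list(x)             # greedy acceptance

def random_walk_around_best(best, f, n_samples=5, sigma=0.02):
    """Local search around the current global best: sample several nearby points
    and keep the best of them."""
    cands = [list(best)] + [[b + random.gauss(0, sigma) for b in best]
                            for _ in range(n_samples)]
    return min(cands, key=f)
```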

The rest of this paper is organized as follows. Section 2 reviews the inspiration and principles of the proposed algorithm. In Section 3, numerical experiments are carried out on a comprehensive set of benchmark functions. Section 4 investigates the effectiveness of the proposed CDOA methodology in training artificial neural networks. The main conclusions of this study are given in Section 5.

Section snippets

Collective decision optimization algorithm (CDOA)

In this section, to describe the proposed methodology intuitively, the inspiration of CDOA is first reviewed briefly. Then, the basic calculation model and pseudo code are outlined in detail.

Numerical experiments

Although these observations suggest that the proposed method is theoretically able to improve the initial random solutions and converge to a better point in the search space, it is necessary to verify the performance of CDOA in practice.
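
A typical way to carry out such a verification is to run the stochastic optimizer independently many times on standard benchmark functions and report summary statistics, as in the sketch below. The Rastrigin function and the 30-run protocol are common choices assumed here for illustration, not necessarily the exact setup of the paper.

```python
import math
import statistics

def rastrigin(x):
    """Classic multimodal benchmark with global minimum 0 at the origin; shown as an
    illustrative example, since the paper's exact benchmark suite is not listed here."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def evaluate_optimizer(optimizer, f, dim, runs=30):
    """Common empirical protocol: repeat a stochastic optimizer over independent
    runs and report the mean and standard deviation of the best fitness found."""
    finals = [optimizer(f, dim)[1] for _ in range(runs)]
    return statistics.mean(finals), statistics.stdev(finals)

# Example (using the population_based_search sketch from the Introduction):
# mean_best, std_best = evaluate_optimizer(population_based_search, rastrigin, dim=30)
```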

Problem statement

Feedforward neural networks (FNNs), the simplest type of artificial neural network devised, consist of densely interconnected adaptive simple processing elements [36]. Recently, many practical applications in different fields, such as pattern recognition, function approximation and data compression, have been successfully addressed by FNNs with three layers (one input layer, one hidden layer, and one output layer). In addition, it has been proven that three-layer FNNs can approximate any continuous function to arbitrary accuracy, given a sufficient number of hidden neurons.
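
In this setting, training a three-layer FNN with a metaheuristic amounts to flattening all connection weights and biases into one decision vector and minimizing an approximation error such as the mean squared error. The sketch below shows this encoding for a hypothetical 1-5-1 network approximating sin(x); the sigmoid hidden layer, linear output and network size are assumptions for illustration, not the paper's exact experimental configuration.

```python
import math
import random

def unpack(theta, n_in, n_hid, n_out):
    """Split a flat parameter vector into weight matrices and bias vectors."""
    i = 0
    W1 = [[theta[i + r * n_in + c] for c in range(n_in)] for r in range(n_hid)]
    i += n_hid * n_in
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = [[theta[i + r * n_hid + c] for c in range(n_hid)] for r in range(n_out)]
    i += n_out * n_hid
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def fnn_forward(theta, x, n_in, n_hid, n_out):
    """Three-layer FNN: sigmoid hidden layer, linear output layer."""
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid, n_out)
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]

def mse_fitness(theta, data, n_in, n_hid, n_out):
    """Fitness of one candidate weight vector: mean squared approximation error."""
    err = 0.0
    for x, y in data:
        out = fnn_forward(theta, x, n_in, n_hid, n_out)
        err += sum((o - t) ** 2 for o, t in zip(out, y))
    return err / len(data)

# Example: approximate y = sin(x) with a 1-5-1 network; any population-based
# optimizer (CDOA, PSO, etc.) can then minimize mse_fitness over theta.
data = [([x], [math.sin(x)]) for x in [i * 0.1 for i in range(-30, 31)]]
dim = 5 * 1 + 5 + 1 * 5 + 1                      # total number of weights and biases
theta0 = [random.uniform(-1, 1) for _ in range(dim)]
print(mse_fitness(theta0, data, 1, 5, 1))
```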

Conclusion

In this paper, the decision-making behavior of humans is modelled to propose a new stochastic population-based method, called the collective decision optimization algorithm (CDOA). The methodology is equipped with several operators to explore and exploit the solution space. In order to evaluate the performance of CDOA thoroughly, a comprehensive set of benchmark functions and the training of artificial neural networks are used. The extensive comparative study reveals that the proposed technique performs more competitively than several state-of-the-art optimization algorithms.

Acknowledgment

The authors express their sincere thanks to Dr. Seyedali Mirjalili for providing the codes. The authors would also like to thank Dr. Q.Y. Jiang and the anonymous reviewers for their generous assistance and constructive suggestions. This work is partly supported by the National Natural Science Foundation of China under Project code (61075032), the Postdoctoral Foundation of China under Project code (2014M561817) and the Anhui Provincial Natural Science Foundation under Project code (

References (38)

  • Q.Y. Jiang et al., The performance comparison of a new version of artificial raindrop algorithm on global numerical optimization, Neurocomputing (2016).
  • X.S. Yang, Nature-Inspired Metaheuristic Algorithms (2010).
  • J.H. Holland, Genetic algorithms, Sci. Am. (1992).
  • S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science (1983).
  • R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International...
  • X.S. Yang, Flower pollination algorithm for global optimization, Unconv. Comput. Nat. Comput. (2012).
  • X.S. Yang et al., Engineering optimization by cuckoo search, Int. J. Math. Model. Numer. Optim. (2010).
  • D. Karaboga et al., A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. (2007).
  • Z.W. Geem et al., A new heuristic optimization algorithm: harmony search, Simulation (2001).

Qingyang Zhang received the B.S. degree in mathematics from Beihua University, Jilin, China, in 2011, and the M.S. degree in applied mathematics from Beifang University of Nationalities, Yinchuan, China, in 2015. He is currently pursuing the Ph.D. degree at the School of Computer and Information, Hefei University of Technology, Hefei, China. His current research interests include meta-heuristics, evolutionary computation and image processing.

Ronggui Wang received the M.S. degree in mathematics from Anhui University, China, in 1997, and the Ph.D. degree in computer science from Hefei University of Technology, Hefei, China, in 2005. He is currently a Professor with the School of Computer and Information, Hefei University of Technology. His research interests include digital image processing, artificial intelligence and data mining.

Juan Yang received the B.S. and M.S. degrees in mathematics from Hefei University of Technology, Hefei, China, in 2004 and 2008, respectively, and the Ph.D. degree from the School of Computer and Information, Hefei University of Technology, where she is currently a lecturer. Her research interests include image processing and intelligent visual surveillance.

Kai Ding received the B.S. degree in computer science from Hainan Normal University, China, in 2014. He is currently pursuing the M.S. degree at the School of Computer and Information, Hefei University of Technology. His research interests include digital image processing, artificial intelligence and data mining.

Yongfu Li received the B.S. degree in mathematics from Anhui University of Science and Technology, Huainan, China, in 2014. His research interests include image processing, artificial intelligence and intelligent visual surveillance.

Jiangen Hu received the B.S. degree in mathematics from Anhui Architecture University, China, in 2014. He is currently pursuing the M.S. degree at the School of Computer and Information, Hefei University of Technology. His research interests include image processing, artificial intelligence and video analysis.
