A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization
Introduction
In recent years, many researchers have worked on optimization, which is central to areas such as computer science, high-performance computing, industrial engineering and mechanical engineering. Optimization problems are often NP-hard, complicated and time consuming (Torn and Zilinskas, 1989, Talbi, 2009). In this work, unconstrained global optimization problems (Schäffler, 2012), a subclass of global optimization, are tackled. Methods for solving these problems fall into four basic classes: exact methods, approximation algorithms, metaheuristic algorithms and greedy algorithms. Exact methods guarantee the optimal solution, whereas metaheuristic algorithms find the optimal solution, or one close to it, within a reasonable amount of time. Exact methods are rarely used in practice because solving multi-dimensional, multimodal real-world problems with them is very time consuming. Heuristic methods, on the other hand, are widely used in practice because they find a good solution in a reasonable time.
Many metaheuristic algorithms have been developed to solve optimization problems, and new ones continue to be proposed. Swarm intelligence and evolutionary computation are two popular subclasses of metaheuristic algorithms. Swarm intelligence includes particle swarm optimization (Kennedy and Eberhart, 1995), the artificial ant colony algorithm (Dorigo and Stützle, 2004), the artificial bee colony algorithm (Karaboga and Basturk, 2007), the grey wolf optimizer (Mirjalili et al., 2014), etc. Evolutionary computation, on the other hand, includes the genetic algorithm (Goldberg, 1989), the memetic algorithm (Neri et al., 2011), the gene expression algorithm (Ferreira, 2006), etc.
As mentioned above, metaheuristic algorithms are very efficient at solving optimization problems. Particle swarm optimization (PSO), developed by Kennedy and Eberhart (1995), is one of them: a population-based metaheuristic optimization technique inspired by the social behavior of bird and fish flocks. Each individual in the population, called a particle, represents a potential solution. Particles scan the search space by following the current best solutions in the population and thus converge toward the global optimum. Thanks to its success and popularity, PSO has been used in many areas such as logistics and transportation (Wu and Tan, 2009), bioinformatics (Correa et al., 2006), business (Yang et al., 2011), finance (Kendall and Su, 2005), data mining (Grosan et al., 2006), product design and manufacturing (Yıldız, 2009), the automotive industry (Yildiz, 2012) and so on.
Rapid advances in science and technology trigger developments in computing, as in all areas. As a result, the problems to be solved by computers become larger and more complex, producing ever larger volumes of data. Parallel computing and parallel algorithms are therefore needed to solve these problems and to process large-scale data.
In parallel computing, a task is divided into subtasks that run concurrently on multiple processors to obtain results quickly (Grama, 2003). With parallel computing, the performance of an algorithm increases and large-scale problems are solved in a shorter time. Parallel computing is therefore used in many areas, such as medical image processing (Zhu and Cochoff, 2010), bioinformatics (Zomaya, 2006), data mining (Zaki and Ho, 2000) and finance (Hong et al., 2010), as data grows day by day. In this work, the JADE software framework (Bellifemine et al., 2007), a middleware, is used to develop the proposed parallel algorithm. Middleware is a software layer between the operating system and applications that supports heterogeneous computers, networks and operating systems and allows parallel applications to be developed (Tanenbaum and Van Steen, 2007).
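The idea of splitting a task into subtasks that run concurrently on multiple processors can be sketched in a few lines. The example below is illustrative only, using Python's standard multiprocessing pool rather than the JADE middleware the authors actually used; the objective function is a stand-in.

```python
from multiprocessing import Pool

def fitness(x):
    # Stand-in objective: the Sphere function, f(x) = sum of x_i^2.
    return sum(v * v for v in x)

if __name__ == "__main__":
    # The population of candidate solutions is split across worker
    # processes; each worker evaluates its share concurrently.
    population = [[1.0, 2.0], [0.0, 0.0], [3.0, -1.0]]
    with Pool(processes=2) as pool:
        scores = pool.map(fitness, population)
    print(scores)  # [5.0, 0.0, 10.0]
```

Fitness evaluation is a natural unit of parallelism in population-based metaheuristics, since each candidate can be evaluated independently.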
Metaheuristic optimization algorithms usually have a sequential structure. Although their use reduces time complexity, solving some real-world problems arising in academia and industry, such as in aerospace (Hasenjäger et al., 2005, Olhofer et al., 2001) and chemistry (Lucasius and Kateman, 1991), still takes too long. Thus, parallel computing is used together with metaheuristic algorithms both to decrease the search time and to increase the quality of the solutions (Alba, 2005). The objective of this work is to develop a new parallel metaheuristic algorithm to solve unconstrained global optimization problems, especially large-scale ones.
A literature review shows that the CLPSO algorithm (Liang et al., 2006) performs better than the other PSO variants. As previously mentioned, even when optimization techniques are used, some problems remain difficult and take too long to solve. To face such difficulties, this work aims to improve the performance of the CLPSO algorithm by means of parallel computing, and we propose a new parallel multi-swarm CLPSO algorithm (PCLPSO). The performance of PCLPSO is demonstrated on function optimization problems. The main contributions of this article are that the proposed algorithm (i) uses a new cooperation strategy, (ii) significantly speeds up the search, (iii) improves the quality of the obtained solutions, (iv) improves robustness and (v) also solves large-scale problems.
This article is organized as follows: Section 2 presents a detailed review on the PSO variants and the parallel metaheuristic optimization approaches. The PSO algorithm, CLPSO algorithm and PCLPSO algorithm are presented in Section 3. Section 4 reveals the experimental results and analysis of PCLPSO in solving the unconstrained global optimization problems. Finally, the article is concluded in Section 5.
Section snippets
Related work
To obtain better solutions to global optimization problems, researchers have worked to improve the PSO algorithm and have proposed many PSO variants. Shi and Eberhart (1998) introduced a new parameter, called the inertia weight, which balances the global search ability against the local search ability. The inertia weight improves the performance of PSO because it increases the chance of finding the global optimum within a reasonable number of…
PSO
The PSO algorithm works as follows. Each particle in PSO represents a bird and offers a solution. Each particle has a fitness value computed by a fitness function. Particles carry velocity information which guides them through the search space. PSO starts with a certain number of randomly generated particles. The particles search for the best solution in the search space by updating their velocity and position using Eqs. (1), (2), respectively. To update the position of a…
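The velocity-then-position update described above can be sketched as follows. This is a minimal sketch of the standard PSO update with inertia weight; the parameter values shown are common defaults from the PSO literature, not necessarily those used in this article, and the full equations appear in the article as Eqs. (1) and (2).

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    """One update for a single particle.

    v_new[i] = w*v[i] + c1*r1*(pbest[i] - x[i]) + c2*r2*(gbest[i] - x[i])
    x_new[i] = x[i] + v_new[i]
    where r1, r2 are uniform random numbers in [0, 1).
    """
    new_v, new_x = [], []
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()
        vi = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest[i] - x[i])
        new_v.append(vi)
        new_x.append(x[i] + vi)
    return new_x, new_v
```

The inertia weight w scales the previous velocity: larger values favor global exploration, smaller values favor local refinement around the personal best (pbest) and global best (gbest) positions.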
Experimental results
In this section, we present computational results for the PCLPSO algorithm. Two unimodal and 12 multimodal benchmark functions are selected to test the performance of PCLPSO and to compare it with other work. These functions are well known to the global optimization community and are commonly used to test optimization algorithms. The formulas of the 14 functions are shown below. The global optimum values and the search and initialization ranges of these 14 functions are given in Table 2. All…
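Two functions typical of such benchmark suites are shown below as a sketch: one unimodal (Sphere) and one multimodal (Rastrigin). These are standard definitions from the global optimization literature; the article's exact 14 functions and their ranges are given in its Table 2, which is not reproduced here.

```python
import math

def sphere(x):
    # Unimodal benchmark: global minimum f(0, ..., 0) = 0.
    return sum(v * v for v in x)

def rastrigin(x):
    # Multimodal benchmark with many regularly spaced local minima;
    # global minimum f(0, ..., 0) = 0.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Unimodal functions test convergence speed, while multimodal functions test the ability to escape local optima, which is exactly what multi-swarm cooperation is meant to help with.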
Conclusions
This article presents a parallel metaheuristic algorithm, called PCLPSO, based on PSO, to solve unconstrained global optimization problems. The algorithm, based on the master-slave paradigm, has multiple swarms which work cooperatively and concurrently on distributed computers. Each swarm runs the algorithm independently. During cooperation, the swarms exchange their own local best particles with each other in every migration process. Thus, the diversity of the solutions increases through the…
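The migration step, in which swarms exchange their local best particles, can be sketched as below. This is a hypothetical ring-topology exchange for illustration; the article's exact cooperation strategy and exchange topology may differ.

```python
def migrate(swarm_bests):
    """Ring-style exchange of each swarm's local best particle.

    swarm_bests: list of (position, fitness) tuples, one per swarm.
    Each swarm receives the best particle of its predecessor in the
    ring, injecting outside information without halting local search.
    """
    n = len(swarm_bests)
    return [swarm_bests[(i - 1) % n] for i in range(n)]
```

Periodic migration of best particles is a common way to raise solution diversity in multi-swarm schemes: each swarm keeps searching independently between migrations, so the exchange adds cooperation at low communication cost.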
References (64)
- et al., "Communication latency tolerant parallel algorithm for particle swarm optimization", Parallel Comput. (2011)
- et al., "Genetic algorithms for large-scale optimization in chemometrics: an application", TrAC Trends Anal. Chem. (1991)
- et al., "Grey wolf optimizer", Adv. Eng. Softw. (2014)
- et al., "MPI-based parallel synchronous vector evaluated particle swarm optimization for multi-objective design optimization of composite structures", Eng. Appl. Artif. Intell. (2012)
- et al., "A survey on parallel ant colony optimization", Appl. Soft Comput. (2011)
- "Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms", BioSyst. (1996)
- et al., "Multi-strategy adaptive particle swarm optimization for numerical optimization", Eng. Appl. Artif. Intell. (2015)
- Parallel Metaheuristics: A New Class of Algorithms (2005)
- et al., "Simulated annealing and parallel processing: an implementation for constrained global design optimization", Eng. Optim. (2000)
- et al., "Parallel ant colony optimization on graphics processing units", J. Parallel Distrib. Comput. (2013)
- Developing Multi-agent Systems with JADE
- "Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach", Eng. Optim.
- "The particle swarm: explosion, stability, and convergence in a multidimensional complex space", IEEE Trans. Evol. Comput.
- "An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms", Soft Comput.
- Ant Colony Optimization
- Swarm Intelligence
- "A parallel particle swarm optimization algorithm for multi-objective optimization problems", Eng. Optim.
- "Dynamic multi-swarm particle swarm optimizer using parallel PC cluster systems for global optimization of large-scale multimodal functions", Eng. Optim.
- Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence (Studies in Computational Intelligence)
- "Restart particle swarm optimization with velocity modulation: a scalability test", Soft Comput.
- Genetic Algorithms in Search, Optimization and Machine Learning
- "Parallel swarms oriented particle swarm optimization", Appl. Comput. Intell. Soft Comput.
- Introduction to Parallel Computing
- Swarm Intelligence in Data Mining
- "Distributed evolutionary algorithms with hierarchical evaluation", Eng. Optim.
- "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm", J. Global Optim.
Cited by (90)
- Chaotic heterogeneous comprehensive learning PSO method for size and shape optimization of structures, Engineering Applications of Artificial Intelligence (2023)
- A particle swarm optimizer with dynamic balance of convergence and diversity for large-scale optimization, Applied Soft Computing (2023)
- Grammar-based autonomous discovery of abstractions for evolution of complex multi-agent behaviours, Swarm and Evolutionary Computation (2022)
- Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application, Swarm and Evolutionary Computation (2021)
- Application of binary PSO for public cloud resources allocation system of video on demand (VoD) services, Applied Soft Computing (2021)