Information Sciences

Volume 294, 10 February 2015, Pages 456-477

Ant algorithms with immigrants schemes for the dynamic vehicle routing problem

https://doi.org/10.1016/j.ins.2014.10.002

Abstract

Many real-world optimization problems are subject to dynamic environments, which require an optimization algorithm to track the optimum as changes occur. Ant colony optimization (ACO) algorithms have proved to be powerful methods for combinatorial dynamic optimization problems (DOPs), once they are enhanced properly. The integration of ACO algorithms with immigrants schemes has shown promising performance on different DOPs. The principle of immigrants schemes is to introduce new solutions (called immigrants) that replace a small portion of the current population. In this paper, immigrants schemes are specifically designed for the dynamic vehicle routing problem (DVRP). Three immigrants schemes are investigated: random, elitism-based and memory-based. They differ in the way immigrants are generated, e.g., random immigrants are generated randomly, whereas elitism- and memory-based immigrants are generated from the best solution retrieved from previous environments. Random immigrants aim to maintain population diversity in order to avoid premature convergence. Elitism- and memory-based immigrants aim to maintain population diversity and, simultaneously, transfer knowledge from previous environments to enhance the adaptation capabilities. The experiments are based on a series of systematically constructed DVRP test cases, generated from a general dynamic benchmark generator, to compare and benchmark the proposed ACO algorithms integrated with immigrants schemes against other peer ACO algorithms. A sensitivity analysis of some key parameters of the proposed algorithms is also carried out. The experimental results show that the performance of ACO algorithms depends on the properties of the DVRPs and that immigrants schemes improve the performance of ACO in tackling DVRPs.

Introduction

Ant colony optimization (ACO) algorithms have been successfully applied to solve different combinatorial optimization problems, e.g., vehicle routing problems (VRPs) [12], [15]. Traditionally, researchers have focused their attention on stationary optimization problems, where the environment remains fixed during the execution of an algorithm [23], [48]. However, many real-world applications are subject to dynamic environments. Dynamic optimization problems (DOPs) are challenging because the aim of an algorithm is not only to find the optimum of the problem quickly, but also to track the moving optimum efficiently when changes occur [29]. A dynamic change in a DOP may involve factors such as the objective function, input variables, problem instance, constraints, and so on.

Conventional ACO algorithms have been designed for stationary optimization problems [14], e.g., to converge quickly to the global (or a near-optimum) solution, and may face a serious challenge in tackling DOPs. This is because the pheromone trails of the previous environment may bias the population towards the old optimum, making it difficult to track the moving optimum. As a result, ACO will not adapt well once the population has converged to an optimum. Considering that a DOP can be taken as a series of stationary problem instances, a simple way to tackle it is to re-initialize the pheromone trails and treat every dynamic change as the arrival of a new problem instance that needs to be solved from scratch [37]. However, this restart strategy is generally not efficient.

In contrast, once ACO algorithms are enhanced properly, they are able to adapt to dynamic changes since they are inspired by nature, which is a continuously changing process [2], [3], [29]. Recently, ACO algorithms have been successfully applied to combinatorial optimization problems with dynamic environments since they are able to reuse knowledge from previously generated pheromone trails [25], [26], [38]. More precisely, when successive environments are similar, the pheromone trails of the previous environment may provide knowledge that speeds up the optimization process in the new environment. However, the algorithm needs to be flexible enough to accept the knowledge transferred from the pheromone trails, or to eliminate old, unused pheromone trails, in order to adapt better to the new environment.

Several strategies have been proposed and integrated with ACO to shorten the re-optimization time while efficiently maintaining a high-quality output. These strategies can be categorized as: increasing diversity after a dynamic change [25], [37]; maintaining diversity during the execution [16], [38]; memory-based schemes [26], [27], [36]; and hybrid/memetic algorithms [39].

Among these strategies, immigrants schemes have shown promising results on binary DOPs [55], [61], dynamic traveling salesman problems [38], and, more recently, dynamic vehicle routing problems (DVRPs) [35], [36]. Within immigrants schemes, a small portion of newly generated ants, called immigrant ants, replaces the worst ants in the current population. Immigrants schemes differ in the way immigrant ants are generated, e.g., random immigrants represent random solutions of the problem [24], whereas elitism- or memory-based immigrants represent solutions that differ slightly from the best solution of a previous environment [54], [55]. In this paper, we focus on immigrants schemes for the DVRP, and thus each immigrant ant represents a feasible VRP solution.
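
To make this mechanism concrete, the following Python sketch shows how random and elitism-based immigrants could be injected into a population of routing solutions. The solution representation (a plain visiting order with capacity handling omitted), the inversion-based perturbation and the replacement ratio are illustrative assumptions, not the exact operators used by the algorithms in this paper.

    import random

    def random_immigrant(customers):
        """Random immigrant: a random visiting order of all customers.
        (Capacity constraints, i.e. route splitting, are omitted in this sketch.)"""
        tour = customers[:]
        random.shuffle(tour)
        return tour

    def elitism_immigrant(elite, strength=0.2):
        """Elitism-based immigrant: a copy of the best solution of the previous
        environment, slightly perturbed (here by random segment inversions)."""
        tour = elite[:]
        for _ in range(max(1, int(strength * len(tour)))):
            i, j = sorted(random.sample(range(len(tour)), 2))
            tour[i:j + 1] = reversed(tour[i:j + 1])
        return tour

    def apply_immigrants(population, cost, customers, elite=None, ratio=0.2):
        """Replace the worst `ratio` of the population with immigrant ants."""
        population.sort(key=cost)                    # best solutions first
        n_imm = max(1, int(ratio * len(population)))
        for k in range(n_imm):
            imm = random_immigrant(customers) if elite is None \
                  else elitism_immigrant(elite)      # RIACO- vs EIACO-style
            population[-(k + 1)] = imm               # overwrite the worst ants
        return population

A memory-based variant would simply select `elite` from a small archive of best solutions stored for previously seen environments, instead of the single best of the last environment.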

Random immigrants ACO (RIACO) and elitism-based immigrants ACO (EIACO) [35] were previously applied only to DVRPs where the pattern of dynamic changes is random, denoted as random DVRPs, whereas memory-based immigrants ACO (MIACO) [36] was applied only to DVRPs where the pattern of dynamic changes is cyclic, denoted as cyclic DVRPs. However, these algorithms were extended from earlier developments proposed for dynamic traveling salesman problems [38], and thus their behavior was unexpected in most dynamic test cases. In this paper, RIACO, EIACO and MIACO are re-designed specifically for the DVRP and their performance is investigated on both random and cyclic DVRPs generated by a different benchmark generator. The proposed algorithms differ from their previous versions [35], [36] as follows: (1) the way random, elitism- and memory-based immigrant ants are generated; (2) the selection of the ant used as the base to generate elitism-based immigrants; and (3) the ants selected to replace other ants in the memory in order to generate memory-based immigrants.

The main issue with the different dynamic benchmark generators for the DVRP currently used in the literature [30], [35], [36], [41] is that the optimum value is not known during the dynamic changes. The same holds for the DVRPs considered in the initial developments of RIACO, EIACO and MIACO [35], [36]. Therefore, it is impossible to observe how close to the optimum each algorithm converges after a change. In binary and continuous optimization functions, algorithms are benchmarked on dynamic generators where the optimum value is known during the dynamic changes [5], [32], [53], [56]. Comprehensive surveys regarding benchmark generators for DOPs are available in [9], [42]. In this paper, the dynamic benchmark generator for permutation problems (DBGP) [40] is mainly used, which can generate DVRPs with a known optimum over the environmental changes and hence facilitates the observation of how close to the optimum an algorithm performs. Based on the DVRPs generated by DBGP, this paper benchmarks and compares the performance of the re-designed algorithms with other peer ACO algorithms. In addition, the algorithms are compared on the DVRP with traffic factors [38], which models a real-world scenario.

To summarize, the contributions of the paper are as follows: (1) RIACO, EIACO and MIACO, which were developed previously, are re-designed specifically to address the DVRP; (2) the dynamic test cases are generated from the recently proposed DBGP [40]; and (3) the experimental studies are extended to both random and cyclic DVRPs for all ACO algorithms. A sensitivity analysis of key parameters of the algorithms is also carried out.

The rest of the paper is organized as follows. Section 2 describes the basic VRP and its stationary and dynamic extensions. Section 3 briefly reviews existing work on ACO for DVRPs. Section 4 describes the benchmark generator used which can generate DVRPs where the optimum value is known over dynamic changes. Section 5 describes the algorithms proposed in this paper for addressing the DVRP. Section 6 gives the experimental results, including the statistical tests, and analysis. Finally, Section 7 concludes this paper with discussions on relevant future work.

Section snippets

Basic VRP description

The basic VRP can be described as follows: we need to route a number of vehicles with a fixed capacity to satisfy the demands of all the customers, starting from and finishing at the depot [10]. Hence, a VRP without the capacity constraint and with one vehicle can be seen as a traveling salesman problem. The basic problem is also known as the capacitated VRP and belongs to the class of NP-hard combinatorial problems [31]. The important symbols used in this section and for the remaining paper
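
For reference, a minimal statement of the capacitated VRP can be sketched as follows; the notation used here (n customers with demands d_i, vehicle capacity Q, travel costs c_ij, m routes, depot 0) is the standard one and may differ from the symbol table of the paper:

    \min \sum_{k=1}^{m} \sum_{(i,j) \in R_k} c_{ij}
    \quad \text{s.t.} \quad \sum_{i \in R_k} d_i \le Q \;\; \forall k \in \{1,\dots,m\},

where each route R_k starts and ends at the depot 0 and every customer i \in \{1,\dots,n\} is visited exactly once by exactly one route. Dropping the capacity constraint and setting m = 1 recovers the traveling salesman problem, as noted above.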

ACO applications for the vehicle routing problem

ACO is a metaheuristic developed especially for combinatorial optimization problems. ACO algorithms are inspired by the foraging behavior of real ant colonies, where ants communicate via pheromone trails to optimize their paths. Research on ACO has mainly focused on applications in stationary environments, such as the traveling salesman problem [14], the VRP [20], the quadratic assignment problem [22], the capacitated arc routing problem [52], and many others [15].
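
As background, the standard ant system decision rule selects the next customer j for an ant located at customer i with probability proportional to \tau_{ij}^{\alpha} \eta_{ij}^{\beta}, where \tau_{ij} is the pheromone on arc (i, j) and \eta_{ij} = 1/c_{ij} is the heuristic information. The Python sketch below illustrates this generic rule only; it is not the exact construction procedure of the algorithms proposed in this paper.

    import random

    def select_next(current, unvisited, tau, cost, alpha=1.0, beta=2.0):
        """Roulette-wheel selection of the next customer, with probability
        proportional to pheromone^alpha * (1/cost)^beta."""
        weights = [(tau[current][j] ** alpha) * ((1.0 / cost[current][j]) ** beta)
                   for j in unvisited]
        pick = random.uniform(0.0, sum(weights))
        acc = 0.0
        for j, w in zip(unvisited, weights):
            acc += w
            if acc >= pick:
                return j
        return unvisited[-1]   # numerical fallback

An ant builds a route by applying this rule repeatedly, returning to the depot whenever the remaining vehicle capacity would otherwise be exceeded.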

Regarding the VRP application,

Generate dynamic test cases

In order to generate dynamic routing problems, we have recently proposed the DBGP [40], which can convert any static permutation-encoded benchmark problem instance into a dynamic environment. In cases where the optimum of the benchmark problem instance is known, it remains known during the environmental changes. DBGP forces the population of the algorithm to search in a new location of the fitness landscape. Other existing DVRP benchmark generators, e.g., the DVRP with stochastic demands and
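
The exact DBGP operators are defined in [40]; as an illustration only, the known-optimum property can be pictured as relabelling (pairwise swapping) a fraction of the customer indices and permuting the cost matrix and demands consistently, so that the optimal objective value is unchanged while the solutions currently favoured by the algorithm decode to different, generally worse, tours. The Python sketch below is based on that assumption; the `magnitude` parameter, i.e. the fraction of swapped customers, is hypothetical.

    import random

    def dynamic_change(cost, demand, magnitude=0.25, rng=random):
        """Illustrative environment change: swap pairs of customer indices and
        permute the cost matrix and demands consistently. The optimum VALUE is
        preserved, but previously good solutions now evaluate differently."""
        n = len(demand)                        # index 0 is the depot
        perm = list(range(n))
        for _ in range(max(1, int(magnitude * n / 2))):
            i, j = rng.sample(range(1, n), 2)  # never relabel the depot
            perm[i], perm[j] = perm[j], perm[i]
        new_cost = [[cost[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        new_demand = [demand[perm[i]] for i in range(n)]
        return new_cost, new_demand

Calling such a routine every f iterations with magnitude m would correspond to the change frequency f and change magnitude m commonly used to characterize DOP benchmarks.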

Framework

Conventional ACO algorithms are constructive heuristics. The solutions generated by the ants are not stored in an actual population, but only in the pheromone trails, which are used by the ants of the next iteration. In contrast, evolutionary algorithms consist of an actual population of feasible solutions, which is directly transferred from one iteration to the next using selection [28], [34]. Search operators (i.e., crossover and mutation) are used in evolutionary algorithms to generate the
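
For completeness, the generic pheromone update, i.e. evaporation followed by a deposit from the retained solutions, can be sketched in Python as follows. Whether the depositing ants come from the current iteration only or from a stored population of best ants is a design choice of each particular algorithm, so the `population` argument here is an assumption for illustration.

    def update_pheromone(tau, population, cost_of, rho=0.2):
        """Evaporate all trails, then let each retained ant deposit an amount
        inversely proportional to its tour cost on the arcs it used."""
        n = len(tau)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)                 # evaporation
        for tour in population:
            deposit = 1.0 / cost_of(tour)
            for i, j in zip(tour, tour[1:] + tour[:1]):  # arcs of a closed tour
                tau[i][j] += deposit
                tau[j][i] += deposit                     # symmetric instance
        return tau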

Experimental setup

In the experiments, all the algorithms were tested on DVRP instances constructed from three stationary benchmark VRP instances taken from real-life vehicle routing applications, which are visually illustrated in Fig. 4. The F-n45-k4, F-n72-k4 and F-n135-k7 benchmarks have optimum values of 724, 237 and 1162, respectively [17]. Problems F-n45-k4 and F-n135-k7 represent grocery deliveries from the Peterboro and Bramalea, Ontario terminals,

Conclusions

The DVRP has attracted less attention than the stationary VRP. In general, combinatorial optimization problems with dynamic environments have attracted less attention than DOPs in other domains. Existing benchmark generators for the DVRP generate DOPs where the optimum is unknown during the environmental changes, e.g., the DVRP with traffic factors. In this paper, we mainly use our recently proposed generator, i.e., the DBGP [40], which can generate DVRPs with known optimum over the

Acknowledgement

The authors would like to thank the anonymous reviewers for their thoughtful suggestions and constructive comments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/K001310/1.

References (61)

  • P.A.N. Bosman, Learning, anticipation and time-deception in evolutionary online dynamic optimization, in: Proc. of the...
  • J. Branke, Memory enhanced evolutionary algorithms for changing optimization problems, in: Proc. of the 1999 IEEE...
  • B. Bullnheimer et al., Applying the ant system to the vehicle routing problem
  • B. Bullnheimer et al., An improved ant system algorithm for the vehicle routing problem, Ann. Oper. Res. (1999)
  • B. Bullnheimer et al., A new rank-based version of the ant system: a computational study, Central Eur. J. Oper. Res. Econ. (1999)
  • C. Cruz et al., Optimization in dynamic environments: a survey on problems, methods and measures, Soft Comput. (2011)
  • G.B. Dantzig et al., The truck dispatching problem, Management Sci. (1959)
  • H. Dawid et al., Ant systems to solve operational problems
  • M. Dorigo et al., Ant algorithms for discrete optimization, Artif. Life (1999)
  • M. Dorigo et al., Ant colony system: a cooperative learning approach to the traveling salesman problem, IEEE Trans. Evol. Comput. (1997)
  • M. Dorigo et al., Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst., Man Cybern. Part B: Cybern. (1996)
  • C. Eyckelhof et al., Ant systems for a dynamic TSP
  • M. Fisher, Optimal solution of vehicle routing problems using minimum k-trees, Oper. Res. (1994)
  • L.M. Gambardella, A.E. Rizzoli, F. Oliverio, N. Casagrande, A. Donati, R. Montemanni, E. Lucibello, Ant colony...
  • L.M. Gambardella, E.D. Taillard, C. Agazzi, MACS-VRPTW: a multicolony ant colony system for vehicle routing problems...
  • L.M. Gambardella et al., Ant colonies for the quadratic assignment problem, J. Oper. Res. Soc. (1999)
  • K. Goyal et al., Applying swarm intelligence to design the reconfigurable flow lines, Int. J. Simul. Modell. (2013)
  • J.J. Grefenstette, Genetic algorithms for changing environments, in: Proc. of the 2nd Int. Conf. on Parallel Problem...
  • M. Guntsch et al., Pheromone modification strategies for ant algorithms applied to dynamic TSP