Choice of a PISA selector in a hybrid algorithmic structure for the FJSSP

This paper analyzes the choice of a PISA selector for a hybrid algorithm that integrates a Multi-Objective Evolutionary Algorithm (MOEA) with a path-dependent search algorithm. The interaction between these components provides an efficient procedure for solving Multi-Objective Problems (MOPs) in operations scheduling. As candidate selectors we consider NSGA and SPEA as well as their successors, NSGAII and SPEAII; NSGAII and SPEAII turn out to be the most efficient candidates. For the path-dependent search performed at the end of each evolutionary phase, we use a multi-objective version of Simulated Annealing.


Introduction
One of the main purposes of production planning is improving the efficiency of processes (Bihlmaier et al., 2009). A good plan in an industrial firm can be seen as a solution to a Job-Shop Scheduling Problem (JSSP) (Chao-Hsien & Han-Chiang, 2009), even though this problem belongs to the NP-Hard class (Ullman, 1975; Papadimitriou, 1994). The JSSP involves the allocation of limited resources to jobs in order to optimize some given objectives (Armentano & Scrich, 2000; Storer et al., 1992). Evolutionary procedures have been designed to address such multi-objective problems (Deb et al., 2002; Coello Coello et al., 2006). Most of the work on the JSSP has addressed its single-objective version, but in real-world cases multiple goals are highly frequent (Chinyao & Yuling, 2009). As indicated by T'kindt and Billaut (2006), a genuine scheduling problem requires the optimization of several simultaneous goals. Along these lines, we present the result of searching for an appropriate PISA selector among a small class of candidate Multi-Objective Evolutionary Algorithms (MOEAs) which, jointly with a local search procedure (MOSA, Multi-Objective Simulated Annealing), addresses the flexible instance of the JSSP (Cortés Rivera et al., 2003; Park et al., 2003; Tsai & Lin, 2003; Wu et al., 2004). In this sense, this paper provides a methodological ground for the design of such a hybrid algorithm, NSGAII+MOSA, as presented in (Frutos et al., 2010). We claim that the combination of these algorithms yields a metaheuristic tool that provides a good approximation to the Pareto frontier of multi-objective JSSPs without the shortcomings of the underlying MOEA; in particular, that NSGAII fares better than the alternative candidates.

Approaches to the JSSP
The large body of work on the JSSP exhibits different solution strategies, ranging from priority rules to parallel branch-and-bound algorithms. While Muth and Thompson (1964) introduced the current form of the JSSP, Jackson (1956) presented solution procedures generalizing Johnson's (1954). Akers and Friedman (1955) provided a Boolean representation of the problem, which was later simplified as a disjunctive graph by Roy and Sussman (1964), while Balas (1959) profited from this representation to yield another solution to the JSSP. In more contemporary times, the complexity of the JSSP prompted alternative formulations (Li et al., 2011, 2013; Della Croce et al., 2014), which allowed the application of particular algorithms like Clonal Selection (Cortés Rivera et al., 2003), Hybrid Artificial Bee Colony (Li et al., 2011), Multi-Population Interactive Coevolutionary (Xing et al., 2011), Priority Rules (Panwalker & Iskander, 1977), Shifting Bottlenecks (Adams et al., 1988; Mönch & Zimmermann, 2011), etc. The efficiency of these meta-heuristic procedures leaves room for further improvement (De Giovanni & Pezzella, 2010; Al-Hinai & ElMekkawy, 2011; Shin et al., 2008).

Multi-Objective Optimization: Basic Concepts
Let us assume that several goals (objectives) $f_1, \dots, f_k$ have to be minimized. A vector $x^* = (x_1^*, \dots, x_n^*)$ of $n$ decision variables (real numbers) is sought, satisfying $q$ inequalities $g_i(x) \le 0$, $i = 1, \dots, q$, as well as $p$ equations $h_i(x) = 0$, $i = 1, \dots, p$. The family of decision vectors satisfying the $q$ inequalities and the $p$ equations is denoted $\Omega$, and each $x \in \Omega$ is a feasible alternative. A vector $x^* \in \Omega$ is Pareto optimal if there is no $x \in \Omega$ such that $f_i(x) \le f_i(x^*)$ for every $i = 1, \dots, k$ with $f_j(x) < f_j(x^*)$ for at least one $j$. This means that no feasible $x$ can improve one goal without worsening the others. The image of the set of Pareto-optimal vectors is the Pareto frontier $FP^*$. The main goal of Multi-Objective Optimization is to find (an approximation of) the corresponding $FP^*$: a good approximation should yield a few feasible alternatives close enough to the frontier (Frutos & Tohmé, 2009).
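The dominance relation defined above can be made concrete with a short sketch (minimization of every objective; the function names are ours):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: minimize both coordinates.
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(pareto_front(pts))  # (3, 3) and (5, 5) are dominated by (2, 2)
```

The surviving points are exactly the discrete approximation of the frontier that the algorithms below try to produce.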

The Flexible Job-Shop Scheduling Problem
The Job-Shop Scheduling Problem amounts to organizing the execution of a class of $n$ jobs on $m$ machines. Each job is described as a sequence of tasks that must be performed in order: $J_j = \langle S_1^j, \dots, S_{n_j}^j \rangle$ (since the order of tasks is known, we write $S_i \in J_j$). We denote by $O_{ijk}$ the fact that task $S_i$ of job $J_j$ is performed on machine $M_k$. $O_{ijk}$ requires the use of machine $M_k$ for a period $\tau_{ijk} \ge 0$ (the processing time) at a cost $c_{ijk}$. The family of operations to be run on a machine $M_k$ is denoted $E_k$. In the Flexible JSSP (FJSSP), each $O_{ijk}$ can be processed by any of the machines in $M$. A key issue here is the scheduling of activities, i.e. the determination of the starting time $t_{ijk}$ of each $O_{ijk}$ (Table 1, FJSSP MF01 (Frutos et al., 2010)). At the start of the process each machine is available, and each machine can carry out only one operation at a time. Furthermore, no job can use a machine more than once, and each job has to wait until the next machine is available (Lin et al., 2011). All the setup and waiting times are included in the initial data, and machines can remain unused at any step of the plan. The final state is reached when each job has completed its last operation (Heinonen & Pettersson, 2007). The FJSSP involves, in turn, two subproblems: the allocation of the operations $O_{ijk}$ to the different machines $M_k$, and the determination of the best way of sequencing them, guided by the goals to be reached. That is, to find optimal levels of Processing Time (Makespan) ($f_1$), stated in Eq. (1) for each job $J_j$, and Total Operation Costs ($f_2$), stated in Eq. (2).
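The two objectives can be illustrated on a toy FJSSP fragment. The instance below is a hypothetical example of ours (not problem MF01): each operation lists its (machine, processing time, cost) alternatives, and a plan fixes one machine per operation together with the processing order.

```python
# Hypothetical FJSSP fragment: operations[(job, step)] lists the
# (machine, processing_time, cost) alternatives for that operation.
operations = {
    ("J1", 1): [("M1", 3, 5), ("M2", 4, 3)],
    ("J1", 2): [("M2", 2, 4)],
    ("J2", 1): [("M1", 2, 2), ("M2", 3, 6)],
}

def evaluate(plan):
    """Compute (makespan f1, total cost f2) for a feasible plan.
    plan: ordered list of (job, step, machine) respecting job order."""
    machine_free = {}   # time at which each machine becomes available
    job_free = {}       # time at which each job finishes its previous step
    total_cost = 0
    for job, step, mach in plan:
        dur, cost = next((d, c) for m, d, c in operations[(job, step)] if m == mach)
        # An operation starts once both its machine and its job are free.
        start = max(machine_free.get(mach, 0), job_free.get(job, 0))
        machine_free[mach] = job_free[job] = start + dur
        total_cost += cost
    return max(machine_free.values()), total_cost

print(evaluate([("J1", 1, "M1"), ("J2", 1, "M1"), ("J1", 2, "M2")]))  # (5, 11)
```

Reassigning an operation to a different machine changes both objectives at once, which is why the two subproblems cannot be solved independently.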

A Multi-Objective Hybrid Evolutionary Algorithm
Evolutionary algorithms have been intensively applied to optimization problems (Coello Coello et al., 2006; Gao et al., 2008; Chiang & Lin, 2013; Rabiee et al., 2012). But for the FJSSP, the high rate of convergence of some of them increases the evaluation costs on multi-objective instances, leading to low diversity in the solutions. So, poorly distributed Pareto frontiers are sometimes obtained under these procedures. But if efficient local search procedures are added to the process, very few evaluations of the fitness functions yield acceptably distributed Pareto frontiers (see Fig. 1). Our take on this issue is to present a Multi-Objective Hybrid Evolutionary Algorithm (MOHEA) for the FJSSP combining a Multi-Objective Evolutionary Algorithm (MOEA) and Multi-Objective Simulated Annealing (MOSA) (Varadharajan & Rajendran, 2005).

The Evolutionary Phase
Individuals are represented by means of a variant of the encoding of Wu et al. (2004). Given that the FJSSP has two subproblems, our MOHEA operates over two chromosomes. The first one represents the allocation of operations to machines. The second one encodes, with values between 0 and n!-1, the sequence of jobs at a given machine. That is, for n = 3, we may have 0→1│2│3, 1→1│3│2, 2→2│1│3, 3→2│3│1, 4→3│1│2 and 5→3│2│1 (Table 2). The initial values are generated at random from uniform distributions: integer numbers between 0 and m-1 for the allocation chromosome, and between 0 and n!-1 for the sequencing chromosome. After that, a crossover and a mutation operator are applied segment-wise on the population of combined allocation-sequencing chromosomes. After some preliminary runs, we selected the Uniform Crossover operator because it yields the best results. The mutation operator is needed because crossover alone does not allow reaching certain areas of the search space of the FJSSP. We chose the Two-Swap mutation operator, which takes the chain of integers corresponding to two chromosomes and selects two genes at random, swapping their positions.
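The encoding and the two operators can be sketched as follows (a sketch under the ranges stated above; the helper names are ours):

```python
import math
import random

def random_individual(n_ops, n_machines, n_jobs):
    """Allocation chromosome: a machine index in [0, m-1] per operation.
    Sequencing chromosome: a permutation index in [0, n!-1] per machine."""
    alloc = [random.randrange(n_machines) for _ in range(n_ops)]
    seq = [random.randrange(math.factorial(n_jobs)) for _ in range(n_machines)]
    return alloc, seq

def uniform_crossover(p1, p2):
    """Uniform crossover: each gene comes from either parent with prob. 1/2."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

def two_swap(chrom):
    """Two-swap mutation: pick two gene positions at random and swap them."""
    c = chrom[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c
```

Note that two-swap only permutes gene values, so it preserves the multiset of genes while still reaching regions that crossover alone cannot.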

Simulated Annealing as a Local Search Process
Simulated Annealing provides a search procedure based on thermodynamic principles. To avoid the local optimum traps that tend to arise with traditional local search algorithms, random jumps to (possibly worse) alternative solutions are allowed. Simulated Annealing controls the frequency of the jumps by means of the probability function $p(\delta, T) = e^{-\delta / T}$, where $\delta$ is the difference between values of the objective function and $T$ is the "temperature" at the k-th iteration, starting at a high value (the initial temperature) $T_i$ that cools down according to $T_{k+1} = \alpha T_k$ until a final temperature $T_f$ is reached. Since higher temperatures increase the probability of accepting poor solutions, the procedure diversifies them in its initial phase but improves them in the final stages. At the k-th iteration a class of close neighbors $M(T, \omega)$ is obtained, depending on the temperature and a control parameter $\omega$. Each time a neighbor is generated, an acceptance criterion determines whether the current solution is kept or not. In the case of N objectives there exist several alternative definitions of $\delta$; we take $\delta$ as the normalized maximum deviation over the objectives. If a new solution is rejected, a slight variant is tried. The probability of accepting a bad solution makes the algorithm less prone to get caught in a local minimum. On the other hand, during the execution $T$ decreases according to a cooling velocity $\alpha$, lowering the chances of upward displacements in the space of solutions and keeping the alternatives close to the optimal ones. The algorithm stops if no improvement has been obtained after a certain number of tries or if the final temperature $T_f$ has been reached. Van Laarhoven et al. (1992) show that, under appropriate conditions, the algorithm explores the neighborhood of the current solution efficiently. Our version of the MOSA algorithm (Multi-Objective Simulated Annealing) generates, from a given solution, a class of close-enough alternative solutions by taking one of the genes of the chromosome and changing its value at random (Frutos et al., 2010), representing the exchange of several operations on a single machine. This procedure is applied M times. The pseudo-code of the MOSA used here is presented in Fig. 2.
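The acceptance test and the cooling schedule described above can be sketched as follows (a sketch: we read the normalized maximum deviation as the largest relative worsening over the N objectives; the function names are ours):

```python
import math
import random

def mosa_accept(f_cur, f_new, T):
    """Multi-objective acceptance test. delta is taken as the normalized
    maximum deviation over the objectives (an assumption of this sketch)."""
    delta = max((fn - fc) / fc for fc, fn in zip(f_cur, f_new))
    if delta <= 0:          # no objective gets worse: always accept
        return True
    return random.random() < math.exp(-delta / T)  # p(delta, T) = e^(-delta/T)

def cool(T, alpha=0.95):
    """Geometric cooling schedule: T_{k+1} = alpha * T_k."""
    return alpha * T
```

At high T the exponential is close to 1 and almost any neighbor is accepted (diversification); as T approaches Tf the test degenerates into pure improvement (intensification), matching the behavior described above.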

Combining the Algorithms
We focus here on how the aforementioned pieces are assembled (see Fig. 3). First, the memetic procedure generates the initial population. Then, to evaluate the fitness of the individuals in the population, the value of each goal is computed and a binary tournament selection is performed. The selected candidates are subject to the genetic operators and create a new, smaller population. Then the simulated annealing procedure performs a local search on each individual, replacing it with a new one. This is repeated until a given generation number is reached.
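The assembly just described reduces to a generic loop, sketched below (the hook names are ours, not the paper's; each hook would be filled with the operators of the previous sections):

```python
def mohea(init_pop, evaluate, select, crossover, mutate, local_search, generations):
    """Skeleton of the hybrid loop: evaluate -> tournament selection ->
    genetic operators -> simulated-annealing local search, repeated
    for a fixed number of generations."""
    pop = init_pop
    for _ in range(generations):
        fitness = [evaluate(ind) for ind in pop]
        parents = select(pop, fitness)                      # binary tournament
        offspring = [mutate(crossover(*pair)) for pair in parents]
        pop = [local_search(ind) for ind in offspring]      # MOSA step
    return pop
```

The point of the structure is that the local search runs inside every generation, so the evolutionary phase never accumulates unpolished individuals.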

Fig. 2 (fragment). Pseudo-code of MOSA:
  take an initial solution x
  change x and obtain x'
  if f1 and f2 improve, then keep x'

Implementation and Design of Experiments
The whole algorithm was implemented on PISA (A Platform and Programming Language Independent Interface for Search Algorithms) (Bleuler et al., 2003), an algorithm interface that distinguishes between two modules: variator and selector. The former captures all the specificities of the problem at hand in order to code and decode the solutions (and compute their fitness values). The selector module is independent of the problem and acts by selecting candidates. These modules exchange messages, coded as text files, independently of the programming language and the platform on which the algorithm runs. PISA provides a library of evaluation modules as well as statistical tools that allow evaluating and comparing alternative optimization methods (Knowles et al., 2005). For this work we considered the following MOEAs: the Non-dominated Sorting Genetic Algorithm (NSGA) (Srinivas, 1994), the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler & Thiele, 1999), and their successors, the Non-dominated Sorting Genetic Algorithm II (NSGAII) (Deb et al., 2002) and the Strength Pareto Evolutionary Algorithm II (SPEAII) (Zitzler et al., 2002).

Fig. 3 (fragment). Pseudo-code of the Multi-Objective Hybrid Evolutionary Algorithm:
  0. generate an initial population (P0) of size N
  [...]
  decodify and evaluate f1(x) and f2(x) on every individual
  end for; end

NSGA classifies the individuals in layers, grouping all the non-dominated individuals in a single front comprising the individuals with the same fitness value. This value is proportional to the size of the population, providing reproduction potential for all the individuals in the front. The procedure is repeated on the remaining individuals until all the individuals in the population are classified. Since the candidates in the first front have higher fitness, they get more attention than the rest of the individuals. NSGAII is a more efficient version of NSGA that applies an elitist replacement strategy, choosing the best individuals from the union of the parent and child generations. All the solutions are ranked in terms of their degree of non-dominancy, the better ones being those with lowest rank. SPEA is an algorithm that at each generation keeps the non-dominated individuals in an external memory and deletes those that become dominated. For each individual in the external set, a strength value is computed, proportional to the number of solutions that it dominates. The fitness of a member of the current population is computed by adding the strengths of the external non-dominated solutions that dominate it. SPEAII, instead, applies a fine-tuning procedure according to which the fitness of an individual is obtained as a balance between the number of solutions that it dominates and the number that dominate it. Besides, it uses the "nearest neighbor" for evaluating the density of feasible solutions, leading to a more efficient search. In Figure 4 we can see the PISA architecture adapted to the FJSSP.
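The layering into fronts used by NSGA and NSGAII can be sketched with a simple O(N²) non-dominated sort (a sketch, minimization; the real NSGAII implementation is more efficient and adds crowding distance):

```python
def nondominated_sort(points):
    """Assign each objective vector a front rank: 0 for the non-dominated
    layer, 1 for the layer that becomes non-dominated once front 0 is
    removed, and so on."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = list(range(len(points)))
    ranks = [None] * len(points)
    front = 0
    while remaining:
        # Current front: points not dominated by any other remaining point.
        current = [i for i in remaining
                   if not any(dominates(points[j], points[i])
                              for j in remaining if j != i)]
        for i in current:
            ranks[i] = front
        remaining = [i for i in remaining if i not in current]
        front += 1
    return ranks

print(nondominated_sort([(1, 5), (2, 2), (3, 3), (5, 5)]))  # [0, 0, 1, 2]
```

Lower rank means better: NSGAII's elitist replacement keeps the lowest-ranked individuals from the union of parents and children.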

Experiments and results
A preliminary analysis of the improvement process showed that it tended to become stable around the 200th generation. We then chose a limit of 250 generations, to leave room for any later improvement. The parameters and the characteristics of the computing equipment used in these experiments were as follows: size of the population: 200; type of crossover: uniform; probability of crossover: 0.90; type of mutation: two-swap; probability of mutation: 0.01; type of local search: simulated annealing (Ti: 850, Tf: 0.01, α: 0.95, ω: 10); probability of local search: 0.01; CPU: 3.00 GHz; RAM: 1.00 GB. Initially we consider four solutions, two dominated (see Table 3 and Table 5) and two non-dominated (see Table 4 and Table 6), for problem MF01 (Frutos et al., 2010). The procedure has been applied to problems MF01 through MF05 (Figs. 5-7).
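For reference, the experimental settings above can be collected into a single configuration object (a convenience sketch; the key names are ours, the values are those reported in the text):

```python
# Experimental configuration of the runs reported in this section.
params = {
    "generations": 250,
    "population_size": 200,
    "crossover": {"type": "uniform", "prob": 0.90},
    "mutation": {"type": "two-swap", "prob": 0.01},
    "local_search": {"type": "simulated_annealing", "prob": 0.01,
                     "Ti": 850, "Tf": 0.01, "alpha": 0.95, "omega": 10},
}
```

With Ti = 850, Tf = 0.01 and α = 0.95, the geometric schedule T ← αT needs roughly log(Tf/Ti)/log(α) ≈ 221 cooling steps to reach the final temperature.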

Table 7
Mean running times of the algorithms. Each algorithm is iterated 30 times.

The fronts obtained, from which all the dominated solutions have been eliminated, are shown in Fig. .

Comparison Procedure
In order to compare the results of the algorithms and establish the better option for the FJSSP, several tests were applied to the solutions. On problems MF01, MF04 and MF05, NSGAII showed statistically significant differences with NSGA and SPEA at the α = 0.05 level. On MF02 and MF03, NSGAII and SPEAII had differences with NSGA and SPEA at an overall significance level of α = 0.05. Thus, NSGAII and SPEAII address the FJSSP better. As a further step in the analysis, we establish the percentage of contribution of each algorithm to the Approximate Pareto Frontier (see Table 13). From this we can conclude that NSGAII is the best selector we can apply to our problem.

Conclusions
We presented a Multi-Objective Hybrid Evolutionary Algorithm (MOHEA) to solve the Flexible Job-Shop Scheduling Problem (FJSSP). The application of the MOHEA required the calibration of parameters to yield valid values. Our algorithm integrates two meta-heuristic procedures: a Multi-Objective Evolutionary Algorithm (MOEA) and a Multi-Objective Simulated Annealing (MOSA) algorithm. Individuals are coded in a way that facilitates the application of two basic genetic operators. Different MOEAs were tested for this task. It was shown that the performance of NSGAII is at least as good as that of SPEAII and that it improves largely over NSGA and SPEA, validating the results in (Frutos et al., 2010). We are currently running a comparison between the MOHEA presented in this paper and recently developed approaches like the Hybrid Artificial Bee Colony Algorithm (Li et al., 2011) and the Multi-Population Interactive Co-evolutionary Algorithm (Xing et al., 2011). Furthermore, we plan to explore the performance of the MOHEA on other MOPs in the future. We believe that it provides a strong and efficient approach to this kind of problem.
That is, the starting time $t_{ijk}$ of an operation $O_{ijk}$ should be greater than or equal to the completion time of the preceding operation $O_{i-1,jh}$ of the same job and that of the preceding operation $O_{spk}$ on the same machine.

Table 1
A Flexible Job-Shop Scheduling Problem: MF01, a 3 × 4 problem with 8 operations (flexible)

Table 13
Percentage of solutions contributed by NSGAII, NSGA, SPEAII and SPEA to the Approximate Pareto Frontier