Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In MMALs, several products are made on the same assembly line, and the similarity of these products is so high that several types of products can be assembled simultaneously without any additional setup times. The problem has some particular features, such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small-sized problems, and the performance of the two algorithms is compared based on several comparison metrics. NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, while MOPSO is better with respect to others. Finally, conclusions and directions for future research are provided.


PUBLIC INTEREST STATEMENT
In today's highly competitive industrial environment, increasing the flexibility of systems to meet customers' demands is crucial. An assembly line is a flow of materials through workstations. In mixed model assembly lines, several products are made on one assembly line, and the similarity of these products is so high that several types of products can be assembled and produced simultaneously in reasonable time. The flexibility of this type of assembly line is therefore high enough to produce different products at high volume based on customers' demand. This paper deals with the mixed model assembly line balancing problem with parallel workstations in dynamic periods. Dynamic periods means that each period affects the next period, and this is used to increase the efficiency of the line. Two algorithms are used to solve the proposed mathematical model and their results are compared with each other.

Introduction
An assembly line is a flow of materials and components through workstations placed in sequence. As parts move through this line of production, they are assembled together, or onto the main part, to form the final products. One of the models used in this kind of line is a hybrid model called the mixed model assembly line (MMAL) (Soman, van Donk, & Gaalman, 2004). In MMALs, several products are made on one assembly line, and the similarity of these products is so high that the setup time for stations when changing from one product to another is assumed to be negligible. As a result, it is possible to assemble several types of products simultaneously, following the orders entering the line, without any additional setup times. A variety of similar models in an MMAL can meet various customer demands. In today's highly competitive manufacturing environment, increasing the flexibility of systems according to customers' desires is crucial. The MMAL has several advantages, such as manufacturing products based on customer demand and increasing the flexibility of the line; as a result, it is an important topic of study (Manavizadeh, Hosseini, Rabbani, & Jolai, 2013).
As noted above, if several products with high workloads arrive at a workstation consecutively, they may increase the cycle time. To prevent this, the concepts of balancing and sequencing are used. Assembly line sequencing determines the order in which products enter the assembly line; whenever several high-volume models follow one another at a station, the station's cycle time may be exceeded (Boysen, Fliedner, & Scholl, 2009). Assembly line balancing arises when designing an assembly line and consists of finding a feasible assignment of tasks to workstations such that the assembly costs are minimized, the demand is met, and the constraints of the assembly process are satisfied (Boysen, Fliedner, & Scholl, 2007). The aim of assembly line balancing is thus the optimal assignment of work to stations.
There are several variants of the assembly line balancing problem. The first, which is used in this study, is called the assembly line balancing problem of type-I. It consists of assigning tasks to workstations such that the number of workstations is minimized for a given cycle time. The second variant is type-II, which occurs when the number of workstations is fixed but the cycle time is unknown (Akpınar & Bayhan, 2011).
The rest of the paper is organized as follows: in Section 2, the related literature is reviewed. The mathematical model and the problem definition are presented in Section 3. The concept of a non-dominated solution is illustrated in Section 4. In Section 5, the methodology for tackling this problem is described. Numerical results and a comparison of the two metaheuristics are presented in Section 6. Finally, the study ends with conclusions in Section 7. McMullen and Frazier (1998) found that using equal weights for the objectives of a multi-objective assembly line balancing problem with parallel workstations gives better results when the problem is solved with a simulated annealing algorithm. Simaria and Vilarinho (2004) solved the MMAL balancing problem of type-II with a genetic algorithm to find the best balance with parallel workstations under zoning constraints. McMullen and Tarasewich (2003) used ant techniques to solve the MMAL balancing problem with parallel workstations and probabilistic task times. They compared ant techniques with several other heuristics, such as simulated annealing, and found this approach competitive with the other methods. Simaria and Vilarinho (2009) solved the two-sided assembly line balancing problem with an ant colony optimization algorithm to minimize the number of workstations and balance the workload under precedence, capacity, zoning, and synchronism constraints. They found that their proposed procedure performed well. Rabbani, Kazemi, and Manavizadeh (2012) proposed a new approach to minimize the number of workstations when lines intersect in the mixed model U-line balancing type-I problem and solved it with a genetic algorithm. Hamzadayi and Yildiz (2012) used a genetic algorithm and simulated annealing to solve the MMAL balancing type-I problem with sequencing in a U-line with parallel workstations and zoning constraints.
Boysen, Kiel, and Scholl (2011) focused on the workload added to stations in MMAL sequencing and tried to minimize the number of work overload situations. They introduced different exact and heuristic algorithms, such as branch and bound, and tested their model on several instances. Mamun, Khaled, Ali, and Chowdhury (2012) proposed a genetic algorithm for balancing a mixed model assembly line of type-I with features such as parallel workstations, zoning constraints, and sequencing with limited resources. They defined a reassignment process to increase flexibility in task allocation. Tiacci (2012) evaluated operational objectives of the MMAL problem with parallel stations, stochastic task times, sequencing, and buffers between workstations. He solved the model with the Assembly Line Simulator (ALS), which can quickly model and simulate intricate assembly lines. Akpınar, Bayhan, and Baykasoglu (2013) proposed a new hybrid algorithm, which combines ant colony optimization with a genetic algorithm, for the MMAL balancing problem of type-I with features such as parallel workstations, zoning constraints, and sequencing. Kellegöz and Toklu (2015) proposed a new method for balancing assembly lines with parallel multi-manned workstations under zoning constraints and sequencing. They used a branch and bound algorithm for a comparative evaluation.

Literature review
Tiacci (2015a) solved the buffer allocation problem and the assembly line balancing problem simultaneously for the first time. He considered stochastic task times, parallel workstations, and buffers between workstations, and solved the problem with a genetic algorithm. Yang and Gao (2016) presented an MMAL problem with adjacent workforce cross-training, where the skill for each task can be learned by two workers in adjacent stations, so that tasks can be reallocated when demand varies. They solved this problem with a branch, bound, and remember algorithm. Sivasankaran and Shahabudeen (2014) reviewed assembly line balancing problems. They classified problems as single model or multi model, with deterministic or probabilistic task times, by the type of assembly line (straight line or U-line), and by the algorithms with which the problems were solved. Finally, they identified which types of problems had not yet been addressed. Based on this review, an overview of approaches in the literature on the MMAL balancing problem of type-I (MMALBP-I) is shown in Table 1.
Following these studies, this paper considers a bi-objective MMAL balancing problem with parallel workstations in a dynamic situation. Two multi-objective metaheuristic algorithms are used to solve the problem, and the performance of the NSGA-II and multi-objective particle swarm optimization (MOPSO) algorithms is compared. Details of the objectives and assumptions of this problem are given in the next section.

Problem description
In this paper, we consider the layout of a MMAL including parallel workstations in dynamic periods. Using parallel workstations has several benefits: two or more replicas of a workstation can perform the same set of tasks on different assemblies when required, and cycle times are allowed to be shorter than the longest task time. This can increase the production rate, provides better flexibility in designing the assembly line, and increases the number of tasks performed by each worker (Vilarinho & Simaria, 2002).
Each work center (WC) comprises either one workstation, in the non-parallel case, or multiple parallel workstations. A WC with multiple workstations is considered busy if every workstation inside it is busy. An example of parallel workstations is shown in Figure 1. In this figure, one operator is assigned to each workstation. There are seven workstations, two of which are parallel with each other; these two parallel workstations form one WC. The other five workstations have no parallel counterparts, so each of them forms a WC of its own, giving five more WCs. In parallel workstations, the tasks assigned to the WC are not shared among the workstations; instead, every workstation can perform each of the tasks (Tiacci, 2015b).
Workstations are replicated only when the processing time of at least one task exceeds the cycle time for at least one model, and the number of replicas is bounded by the number of tasks whose task times exceed the cycle time (Akpınar & Bayhan, 2011; Vilarinho & Simaria, 2002). According to the literature, this research has the following objectives.
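The replication rule above can be sketched in code. `min_replicas` is a hypothetical helper, not part of the paper's model; it assumes a workstation needs just enough replicas for its longest task to fit within the replicated cycle-time capacity.

```python
import math

def min_replicas(task_times, cycle_time):
    """Smallest number of parallel replicas a workstation needs so that
    its longest assigned task fits the capacity:
    ceil(longest task time / cycle time).
    Illustrative sketch only; the paper caps replicas at three."""
    longest = max(task_times)
    return max(1, math.ceil(longest / cycle_time))
```

For example, a workstation whose longest task takes 90 time units under a cycle time of 40 would need three replicas, whereas a workstation whose tasks all fit within the cycle time needs none.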

Objectives
In this research, there are two objectives: the first is minimizing the number of workstations (Akpınar & Bayhan, 2011; Manavizadeh et al., 2013; Vilarinho & Simaria, 2002), and the second is maximizing the workload smoothness between workstations (Akpınar & Bayhan, 2011; Mamun et al., 2012; Vilarinho & Simaria, 2002). The second objective aims to balance the line, meaning that for each model the idle time is distributed equally among the workstations (Mamun et al., 2012; Vilarinho & Simaria, 2006).

Assumptions
The common assumptions of the problem are listed below:
• The assembly line balancing problem of type-I is used.
• Precedence constraints and precedence diagrams for each product are known. Precedence constraints indicate that a task may be allocated to a workstation only if it has no predecessors, or if all of its predecessors have already been assigned to that workstation or its previous workstations.
• Task processing times of each product are known.
• Operators working in each workstation of each line are multi-skilled and can be assigned to work at any station (flexible workers).
• Setup times between models are assumed to be negligible.
• All the operators are permanent and we do not have overtime periods.
• The number of workstations is variable.
• Cycle time is given and we have dynamic periods to determine all variables in different periods to achieve efficient solutions.
• The line is a straight line with parallel workstations, where work can also be performed on each side of the line. Workstations along the line can be replicated to create parallel workstations.
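The precedence assumption can be checked programmatically. The sketch below assumes workstations are indexed in line order and that `predecessors` maps each task to its set of immediate predecessors; all names are illustrative, not from the paper's implementation.

```python
def is_precedence_feasible(assignment, predecessors):
    """Check that every task sits at the same workstation as, or a later
    workstation than, each of its predecessors.

    assignment:   dict task -> workstation index (1-based along the line)
    predecessors: dict task -> set of immediate predecessor tasks
    """
    for task, preds in predecessors.items():
        for p in preds:
            # A predecessor assigned downstream of its successor violates
            # the precedence constraint.
            if assignment[p] > assignment[task]:
                return False
    return True
```

For instance, assigning task 1 to station 1, task 2 to station 1, and task 3 to station 2 is feasible for the chain 1 → 2 → 3, while placing a predecessor downstream of its successor is not.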

Mathematical model
Idle time occurs whenever a workstation has to wait for the completion of its predecessors. Minimizing the idle time of the line is equivalent to minimizing the number of workstations (Manavizadeh et al., 2013). However, minimizing the number of workstations can reduce the weighted line efficiency (WLE), so maximizing the WLE is another objective of this research to prevent this. The weighted objective function is given in Equation (1).

Notations and parameters
The objective of minimizing the number of workstations is represented by the first term of the objective function, and the objective of smoothing the workload between the workstations is the second term (Akpınar & Bayhan, 2011).
In the objective function, α and β are coefficients indicating the weight of each objective; they are determined by management, and their sum must equal 1. Since minimizing the number of workstations is more important than the second objective, its weight must be higher (α > β) (Vilarinho & Simaria, 2006). The constraints of the model are formulated as follows. Constraint (2) ensures that all tasks are assigned to a station and that each task is assigned only once (Gökҫen, Ağpak, & Benzer, 2006).
Equality (3) represents the dynamic periods: it models the gap between two periods and describes the stock available at the end of period t, which is used at the beginning of period t + 1. The dynamic environment is considered for the objectives and the other constraints. Constraint (4) describes the precedence constraint: before task i is assigned to station k, all of its successors must be assigned to either station k or later stations; in this constraint, task b is a successor of task i. Constraints (5a)-(5c) ensure that the total time of the tasks assigned to each workstation does not exceed its time capacity, which equals the cycle time if the workstation is not replicated. The maximum number of replicas of a workstation in a WC is bounded by the number of tasks whose task time exceeds the cycle time for at least one model, and only workstations whose assigned task times exceed the cycle time can be duplicated. In constraint (5c), λ is a very large positive integer (Akpınar & Bayhan, 2011).
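Constraints (2) and (5a)-(5c) can be illustrated with a small feasibility check. The representation here (one list of tasks per station, plus a replica count per station) is an assumption for illustration, not the paper's mathematical formulation.

```python
def check_assignment(tasks, stations, task_time, cycle_time, replicas):
    """Sketch of constraints (2) and (5): each task is assigned exactly
    once, and the workload of each station fits its replicated capacity.

    tasks:      list of all task ids
    stations:   list of lists; stations[k] holds the tasks of station k
    task_time:  dict task -> processing time
    replicas:   list; replicas[k] parallel copies of station k
    """
    # Constraint (2): every task appears in exactly one station.
    assigned = [t for s in stations for t in s]
    if sorted(assigned) != sorted(tasks):
        return False
    # Constraints (5a)-(5c): station workload within replicated capacity.
    for k, s in enumerate(stations):
        if sum(task_time[t] for t in s) > replicas[k] * cycle_time:
            return False
    return True
```

With a cycle time of 40, a station holding a 50-unit task is infeasible with one copy but feasible with two parallel copies.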

Non-dominated solution
Multi-objective problems (MOPs) appear in most areas of current research, and treating a problem as a MOP is more complete and sensible in many settings (Deb, Agrawal, Pratap, & Meyarivan, 2000). A general single-objective problem is defined as optimizing f(x) subject to g_i(x) ≤ 0 for i = 1, ..., m and h_j(x) = 0 for j = 1, ..., p, where Ω includes all feasible x. A general MOP instead optimizes a vector of objectives F(x) = (f_1(x), f_2(x), ..., f_k(x)) subject to the same kinds of constraints. Because there are several objectives, we look for good trade-offs rather than a single solution, so the notion of an optimal solution is replaced by Pareto optimality. A solution x ∈ Ω is Pareto optimal if and only if there is no x′ ∈ Ω for which v = F(x′) = (f_1(x′), f_2(x′), ..., f_k(x′)) dominates u = F(x) = (f_1(x), f_2(x), ..., f_k(x)). In Pareto dominance (for minimization), a vector u dominates a vector v if and only if u is no worse than v in every objective and strictly better in at least one.
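The dominance rule above can be stated directly in code. This is a generic minimization sketch with illustrative function names, not the paper's implementation.

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every objective
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For two objectives, the point (2, 3) is dominated by (2, 2), so it drops out of the front, while mutually incomparable points such as (1, 3) and (3, 1) both remain.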

Methodology
Two solution algorithms are designed to discover good Pareto solutions: the first is based on the non-dominated sorting genetic algorithm (NSGA-II) and the other on multi-objective particle swarm optimization (MOPSO). In this section, these metaheuristic approaches are described. The steps of the NSGA-II are as follows (Akpınar & Bayhan, 2011):

Initial population
The initial population is created randomly; each individual is represented by three chromosomes. The first chromosome contains an order of the different periods. The second chromosome contains an ordered sequence of all tasks for each model, feasible with respect to the precedence diagram; its order is the order in which the tasks are assigned to the WCs. The third chromosome stores the workstation each task is assigned to; if a workstation in a WC is replicated, the number of that workstation is repeated in the following genes, once for each parallel station (Tiacci, 2015b). In this way, the tasks assigned to the workstations required in each period are determined.
A simple example of the creation of the initial population is shown in Figure 2. According to this figure, each solution has three chromosomes that specify its properties: period, task, and workstation, respectively. First, the order of these chromosomes is generated randomly. Then they are transformed into meaningful chromosomes that represent solutions of the problem. In this example, WC 1 has two parallel workstations, WC 2 has one workstation, and WC 3 has two parallel workstations. In Figure 2, the genes aligned below one another together form one answer in the population; that is, each task in each period is assigned to a WC. Accordingly, task 1 in period "a" is assigned to WC 3, task 3 in period "c" is assigned to WC 2, and task 2 in period "b" is assigned to WC 1.
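Generating one random individual with the three chromosomes described above can be sketched as follows. All names are illustrative, and for brevity the task order is simply shuffled rather than repaired to a precedence-feasible sequence as the paper requires.

```python
import random

def random_individual(periods, tasks, stations):
    """One individual = three chromosomes: a random order of periods,
    an order of tasks (shuffled here; the paper keeps it
    precedence-feasible), and the workstation assigned to each task."""
    period_chrom = random.sample(periods, len(periods))
    task_chrom = random.sample(tasks, len(tasks))
    station_chrom = [random.choice(stations) for _ in tasks]
    return period_chrom, task_chrom, station_chrom
```

Each call yields one candidate solution; repeating it population-size times produces the initial population.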

Crossover
The single-point crossover divides each parent into two parts (head and tail); the crossover point is generated randomly. Recombining parent one with parent two generates new offspring. The first offspring keeps the head part of the first parent, and its tail part is filled with all missing tasks in the order in which they appear in the second parent. Likewise, the head of the second offspring is built from the head part of the second parent, and its tail part is filled with all missing tasks in the order in which they appear in the first parent.
Each individual is represented by three chromosomes (i.e. task order, period order, and workstation order), so we must decide which chromosomes are selected for the crossover. For this purpose, we generate one integer random number between one and seven, i.e. in the range [1, 7]. This number defines which chromosome(s) is (are) chosen for applying the crossover operator.
Note that "7" is the number of possible non-empty selections from the three chromosomes (2³ − 1 = 7). State 1 occurs when the first chromosome of each parent is chosen for crossover and the other chromosomes remain unchanged, as shown in Figure 3. State 2 occurs when both the first and the second chromosomes of each parent are chosen for crossover and the third chromosome remains unchanged. State 3 occurs when the first and the third chromosomes of each parent are chosen for crossover and the second chromosome remains unchanged, and so on. Both generated offspring are feasible because their head and tail parts are filled according to a precedence-feasible order. Figure 3 shows state one, in which the first chromosome of each parent is chosen for crossover and the other chromosomes remain unchanged. According to this figure, the period chromosomes of both parents are exchanged in the offspring while the other chromosomes remain unchanged. The head part of the first parent is placed in the head part of the first offspring, and the tail part is filled with the unused genes from the second parent: genes a, b, and c from the head of the first parent are kept unchanged in the head of the first offspring, while genes d, e, and f, which have not yet been used in the first offspring, are taken from the second parent and placed in its tail part, in order.
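The head-and-tail recombination described above can be sketched as an order-preserving crossover on a single chromosome; `order_crossover` is a hypothetical name, not the paper's code.

```python
import random

def order_crossover(p1, p2, point=None):
    """Single-point order-preserving crossover: the offspring keeps the
    head of the first parent and fills its tail with the remaining genes
    in the order they appear in the second parent."""
    if point is None:
        point = random.randint(1, len(p1) - 1)
    head = p1[:point]
    tail = [g for g in p2 if g not in head]
    return head + tail
```

With parents a-b-c-d-e-f and b-d-f-a-c-e and a cut after the third gene, the offspring keeps a-b-c and appends d, f, e in the order they occur in the second parent. Swapping the parents' roles yields the second offspring.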

Mutation
We use a swap operation for mutation: two genes are selected randomly and their positions in the chromosome are exchanged in the new offspring, while the other genes of the parent are copied unchanged. Since each individual is represented by three chromosomes, the same seven states [1-7] as in the crossover can occur. In the first state, which is shown in Figure 4, the changes occur in the period chromosome and the other two chromosomes remain unchanged. As shown in the figure, the second gene of the head part and the fifth gene of the tail part of the parent's period chromosome are selected randomly and their places are exchanged, while the other genes of this chromosome remain unchanged. Thus, the arrangement of the genes in this chromosome changes from a-b-c-d-e-f to a-e-c-d-b-f.
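The swap operation on one chromosome can be sketched as follows; the function name is illustrative.

```python
import random

def swap_mutation(chrom):
    """Swap mutation: pick two gene positions at random and exchange
    them; all other genes are copied to the offspring unchanged."""
    child = list(chrom)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child
```

Applied to a-b-c-d-e-f with positions 2 and 5 drawn, it produces a-e-c-d-b-f, matching the example in the text.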
Crossover and mutation define how the next generation is created. The replacement strategy determines which individuals stay in the population and which are replaced: the new generation may contain individuals from the previous generation together with the offspring produced by crossover and mutation.

MOPSO algorithm
Particle swarm optimization is a metaheuristic algorithm originally designed for continuous problems. It is inspired by the behavior of birds searching for food: each individual adjusts its movement speed toward the food according to the best behavior observed (Parsopoulos & Vrahatis, 2002). Multi-objective particle swarm optimization brings the obtained Pareto front closer to the true Pareto front, and its performance is better than that of some other multi-objective algorithms (Coello & Lechuga, 2002). The steps of the MOPSO algorithm are as follows: Step 1: Generate the initial population of particles.
Step 2: Each particle's velocity is generated and saved in the velocity vector, which determines the direction in which the particle should move to improve its position.
Step 3: The new position of each particle is saved in the particle's memory as the personal best, which records the best position of each individual. The new position of each particle is calculated as the sum of its previous position and its velocity vector. If the new position of a particle is better than its previous position, the personal best is updated to the new position; if the new position is dominated by the previous one, the previous position is kept in memory.

Step 4: Leaders, which are the positions of the non-dominated particles, are saved in an external memory called the global best. Leaders guide the other particles towards better areas of the search space and are considered for all individuals.
Step 5: Repeat Steps 2-4 until the number of iterations reaches the maximum number of iterations.
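The particle move in Steps 2 and 3 can be sketched with the standard PSO update rule. This is a generic continuous-space sketch (the paper's encoding is discrete); C1 = C2 = 2 matches the paper's parameter settings, while the inertia weight w is an assumed value.

```python
import random

def update_particle(position, velocity, pbest, gbest,
                    w=0.7, c1=2.0, c2=2.0):
    """One MOPSO move: the velocity is pulled toward the particle's
    personal best and a leader from the external archive (global best),
    then the position is advanced by the velocity."""
    new_v = [w * v
             + c1 * random.random() * (pb - x)
             + c2 * random.random() * (gb - x)
             for v, x, pb, gb in zip(velocity, position, pbest, gbest)]
    new_x = [x + v for x, v in zip(position, new_v)]
    return new_x, new_v
```

If a particle already sits on both its personal best and the leader with zero velocity, the update leaves it in place, which is the expected fixed point of the rule.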

Numerical results
The performance of the NSGA-II and MOPSO algorithms is investigated and the related results are analyzed. The algorithms are coded in MATLAB R2013b and run on a personal computer with an Intel Core i7 2.30 GHz processor and 6 GB of RAM.
A numerical example is used to validate the model with GAMS 23.5 software, and the results of the NSGA-II and MOPSO algorithms are compared with each other. In this example, the precedence diagram, shown in Figure 5 and taken from Akpınar and Bayhan (2011), is known and contains 30 tasks. For the small-sized problems, a subset of these tasks is considered. The initial experiment is performed on small-sized problems, comprising 10 sample test problems of different sizes; another 10 sample test problems of different sizes are used for the large-sized problems. In this example, tasks are performed over a planning horizon of 400 time units. A workstation can be replicated if the processing time of one of its tasks exceeds the cycle time. The maximum number of replicas of a workstation in a WC is assumed to be three (Akpınar & Bayhan, 2011).

Meta-heuristic methods
For the NSGA-II and MOPSO algorithms, we classified test problems as small-sized and large-sized with respect to the number of periods, tasks, workstations, and models included in them.
The results of the comparison of the NSGA-II and MOPSO algorithms are shown in Tables 5-14. Problem characteristics in Tables 5-14 are listed in the order of the number of periods, tasks, workstations, and models they include. The parameters of the algorithms are set as follows.
General parameters:
• The population size for both algorithms is set to 100.
• Experiments for both algorithms are repeated 20 times.
• The maximum number of iterations for both algorithms in each run is set to 50.
NSGA-II parameters:
• The crossover rate and mutation rate are 0.8 and 0.3, respectively.
MOPSO parameters:
• The repository size is set to 100.
• Values of C 1 (personal learning coefficient) and C 2 (global learning coefficient) are set to 2.
• Gamma, the extra repository member selection pressure parameter, is set to 2.
In order to compare the efficiency of these algorithms, five comparison metrics were used: the number of Pareto solutions, the diversification metric (DM), the spacing metric (SM), the mean ideal distance (MID), and the mean square index (MSI).
DM measures the extension of the solution set and is computed from the distances between the extreme non-dominated solutions in each objective. The results of our computations for this metric are shown in Tables 5 and 6.
Tables 7 and 8 report the computation results for SM, which is based on the Euclidean distance between adjacent solutions of the Pareto front and therefore describes the uniformity of the distribution of the solutions obtained by each algorithm. It is computed as SM = Σ |d̄ − d_i| / ((n − 1) · d̄), where d_i is the Euclidean distance between solution i and the nearest solution in the Pareto set of solutions, d̄ is the average of all d_i, and n is the number of Pareto solutions.
The MID metric is the mean distance between the obtained solutions f_i (where i indexes the Pareto solutions) and the ideal answer f_best for each objective function. The ideal answer is the point that attains the best value of every objective function in the problem; since all objectives here should be minimized, it is taken as the point (0, 0). In the normalized form, f_1,total^max is the maximum value of the first objective function and f_1,total^min is its minimum value. Smaller values of this metric are better. The results are shown in Tables 9 and 10.
MSI, shown in Tables 11 and 12, is the mean square of the difference between the maximum and minimum values of each objective function.
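Under the definitions above, SM and MID can be computed as in this sketch (minimization, ideal point (0, 0)); the normalization terms of the paper's exact MID formula are omitted, and the function names are illustrative.

```python
import math

def spacing_metric(front):
    """SM: uniformity of the spacing along the Pareto front. d_i is the
    Euclidean distance from solution i to its nearest neighbour; a value
    of 0 means perfectly uniform spacing."""
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / len(d)
    return sum(abs(d_bar - di) for di in d) / ((len(d) - 1) * d_bar)

def mid_metric(front, ideal=(0.0, 0.0)):
    """MID: mean Euclidean distance of the front from the ideal point,
    taken as (0, 0) because both objectives are minimized."""
    return sum(math.dist(p, ideal) for p in front) / len(front)
```

For a front of evenly spaced points the spacing metric is zero, and for points at distance 5 from the origin the mean ideal distance is 5.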
The number of Pareto solutions measures the ability of the algorithms to find efficient points; this metric and the computational times are summarized in Table 13 for small-sized problems and Table 14 for large-sized problems, respectively.
The DM measure shows that in most examples MOPSO performs better than NSGA-II for both problem sizes, because it attains higher values than NSGA-II. For the SM measure, the average value of NSGA-II is higher than that of MOPSO for both small and large problem sizes, so MOPSO is superior to NSGA-II. The MID measure does not show superiority for either of them: the average value for NSGA-II in the small sizes is less than that of MOPSO, so NSGA-II performs better on the small sizes, while on the large scale MOPSO is better than NSGA-II. MOPSO is superior to NSGA-II in the MSI measure. The computational time of MOPSO is almost always better than that of NSGA-II, but in most examples NSGA-II is better than MOPSO at producing Pareto solutions.

Conclusion
This research deals with the MMALBP-I, which has particular features such as parallel workstations and precedence constraints, with the aims of reducing the number of workstations and maximizing the workload smoothness between workstations for a given cycle time. The model has been implemented in a dynamic situation.
In the first step, GAMS software was used to solve the small-scale problem. In the second step, NSGA-II and multi-objective particle swarm optimization were used to solve the numerical example on small and large scales, and the results obtained from the two algorithms were compared with each other. The results show that in most examples of the DM and MSI measures MOPSO is superior to NSGA-II, whereas for SM NSGA-II performs better than MOPSO. The MID measure shows that on the small scale NSGA-II is better than MOPSO and on the large scale MOPSO is better than NSGA-II, so this measure does not show superiority for either of them. The computational time of MOPSO is almost always better than that of NSGA-II, but in most examples NSGA-II is better than MOPSO at producing Pareto solutions.
For future research, the MMAL balancing problem of type-II with a U-line could be studied in a dynamic situation where the cycle time is unknown and the number of workstations is known, using constraints on the number of operators and their skill levels.