Two-Stage Assembly Scheduling with Batch Setup Times, Time-Dependent Deterioration, and Preventive Maintenance Activities Using Meta-Heuristic Algorithms

This article considers a two-stage assembly scheduling problem (TSASP) with batch setup times, time-dependent deterioration, and preventive maintenance activities (PMAs). The objective is to simultaneously determine the optimal component-manufacturing sequence (CMS), product-assembly sequence (PAS), number of setups, and the number and positions of PMAs (PPMA). First, a novel mixed integer linear programming (MILP) model of the proposed problem is derived to determine the optimal solution. Then, a standard genetic algorithm (SGA), hybrid genetic algorithm (HGA), standard harmony search (SHS), hybrid harmony search (HHS), and harmony-search-based evolutionary algorithm (HSEA) are proposed, owing to the intractability of finding the optimal solution for large-scale problems. SGA and SHS use a chromosome that represents a complete solution comprising all three decisions (CMS, PAS, and PPMA). HGA, HHS, and HSEA use a chromosome that represents only a partial solution (PAS); the CMS and PPMA are then derived from this partial solution by an effective local search heuristic. Finally, a computational experiment is conducted to evaluate how the problem factors affect the performance of the proposed algorithms.


Introduction
The two-stage assembly scheduling problem (TSASP) is one of the most widely studied scheduling problems in the literature and has many industrial applications. The problem has been studied under many different settings reflecting a wide range of application environments [1][2][3]. However, most previous studies on the TSASP did not consider the delay of tasks due to deterioration and maintenance activities. In this article, we consider the TSASP with batch setup times, time-dependent deterioration, and preventive maintenance activities (PMAs). In the first stage, a single machining machine produces components used to assemble products. During the machining process, a batch setup time occurs whenever a new component is processed or the component type is switched on the machining machine. Each component has a different deterioration rate, and the deterioration accumulates continuously during the production of the component. A PMA restores the deteriorated processing times of tasks to their original values through a cleaning or maintenance process on the machine. In the second stage, a single assembly machine can start assembling an ordered product as soon as all of its required components are available from the first stage. The target manufacturing system is described in Figure 1, where the two sets of symbols represent the types of products and the types of components, respectively.
The TSASP, like other scheduling problems, has received wide attention in the literature [4][5][6][7]. Owing to the complexity of the problem, many studies have proposed metaheuristic algorithms. Allahverdi and Al-Anzi [3] proposed a particle-swarm optimization algorithm (PSO), a tabu-search algorithm (TS), and a local search heuristic for a two-stage distributed database application; their computational analysis indicates that PSO achieves the smallest average error. Al-Anzi and Allahverdi [8] proposed a self-adaptive differential evolution heuristic (SADEH) for the TSASP with respect to the maximum lateness criterion, where setup times are treated as separate from processing times. Yan et al. [9] proposed a hybrid variable neighborhood search (VNS) algorithm to minimize the weighted sum of the makespan, earliness, and lateness on parallel manufacturing machines and a single assembly machine in the TSASP. Komaki et al. [10] proposed an artificial immune system algorithm (AIS) to solve a two-stage hybrid flow shop followed by a single assembly machine to minimize the makespan, and Deng et al. [11] proposed a competitive memetic algorithm (CMA) to minimize the makespan with parallel manufacturing machines and a single assembly machine in the TSASP. Komaki and Kayvanfar [12] proposed a grey wolf optimizer algorithm to solve the TSASP with release times on parallel manufacturing machines and a single assembly machine. Jung et al. [13] proposed two hybrid genetic algorithms (HGAs) for a two-stage assembly scheduling problem processing products with dynamic-component sizes and a setup time. Their problem has the same two-stage assembly structure as ours; the main difference is that they considered neither job deterioration nor preventive maintenance activities in the manufacturing stage.
Meanwhile, many scheduling studies on single or parallel machines have considered not only simple manufacturing environments but also richer ones, such as settings involving deterioration and preventive maintenance [14][15][16][17][18]. According to Gupta and Gupta [19] and Browne and Yechiali [20], in many practical environments the processing times of tasks are not constant but increase over time, depending on the sequence of the tasks or their starting times. This phenomenon is generally known as deterioration of resources in scheduling. As mentioned earlier, various types of deterioration occur in practice owing to workers' fatigue, mal-positioning of tools, and scraps from operations; scheduling problems with deterioration are therefore common in industrial applications. The deteriorated processing times of tasks can be restored to their original values by a cleaning or maintenance process on the machines; an activity that changes the production efficiency of a machine in this way is called a preventive maintenance activity (PMA) [21]. Building on this prior work, many studies have addressed scheduling problems with deterioration and PMAs. Mosheiov and Sarig [14] considered a single-machine scheduling and due-window assignment problem with a PMA, where the objective is to schedule the jobs, the due window, and the PMA so as to minimize the total cost consisting of earliness, tardiness, and due-window starting time and size. Cheng et al. [15] considered a two-agent scheduling problem in which, given a schedule, the actual processing time of a task of the first agent is a function of position-based learning, whereas that of a task of the second agent is a function of position-based deterioration, with the objective of minimizing the makespan. Lee et al.
[16] considered the total earliness and tardiness penalty when scheduling with deterioration effects and maintenance activities on unrelated parallel machines. Yang [17] considered unrelated parallel machines with simultaneous consideration of deterioration effects and multiple maintenance activities for minimizing the makespan; they examined two models of scheduling with deterioration, namely, the job-dependent and position-dependent deterioration model and the time-dependent deterioration model. Ji et al. [18] considered single-machine scheduling with a common due window and a deteriorating PMA, assuming that the processing time of a task is a function of the amount of resource allocated to it, its position in the processing sequence, and its aging effect. Cheng [22] introduced a single-machine two-agent scheduling problem with a truncation learning effect, where the objective is to find an optimal schedule minimizing the makespan; a branch-and-bound algorithm and three heuristic-based genetic algorithms were used to solve the problem. Wu et al. [23] proposed a branch-and-bound algorithm and several ant colony algorithms for a two-agent single-machine scheduling problem involving sum-of-processing-times-based learning and deteriorating effects. Wu et al. [24] integrated the two-agent and time-dependent processing-time scheduling problems to minimize the total weighted number of tardy jobs of the first agent, with the restriction that the maximum lateness of any job of the second agent is bounded from above; they also proposed a TS for finding near-optimal solutions. Ruiz-Torres et al. [25] proposed a set of list scheduling algorithms and a simulated annealing algorithm (SA) for unrelated parallel machines with deterioration and multiple maintenance activities.
The results of their paper indicate that SA is capable of producing high-quality solutions for a wide range of instances. Joo and Kim [26] and Chung and Kim [27] considered a single-machine scheduling problem with various deterioration models and multiple PMAs for minimizing the makespan; to solve the problem, they proposed a mathematical model and hybrid genetic algorithms.
To the best of our knowledge, no effective metaheuristic algorithms are available that generate near-optimal solutions for the TSASP while simultaneously considering batch setup times, time-dependent deterioration, and PMAs. The remainder of this article is organized as follows. In Section 2, we describe a mathematical model of the problem. Section 3 describes the five metaheuristic algorithms proposed in this article. First, the standard genetic algorithm (SGA) and standard harmony search (SHS) are proposed with a complete solution indicating the component-manufacturing sequence (CMS), product-assembly sequence (PAS), and positions of PMAs (PPMA). Second, a hybrid genetic algorithm (HGA) and hybrid harmony search (HHS) with a local search heuristic are proposed with a partial solution indicating the PAS; the remaining parts of the solution, the CMS and PPMA, are provided by the local search heuristic. Finally, a harmony-search-based evolutionary algorithm (HSEA) that combines an evolutionary algorithm, harmony search, and the local search heuristic is proposed with a partial solution (PAS). In Section 4, a computational experiment is conducted on randomly generated instances to evaluate the impacts of the problem factors on the performance of the proposed algorithms. Finally, conclusions and future studies are discussed in Section 5.

Figure 2: Example for representing dynamic-component size.

Mixed Integer Linear Programming (MILP) Model
In this section, a novel MILP model is developed to minimize the makespan for the TSASP with batch setup times, deterioration, and PMAs. The problem assumes a dynamic-component size, meaning that the kinds and numbers of components differ for each product; this is depicted in detail in Figure 2. According to Figure 2, for instance, one unit of component 1, one unit of component 2, and one unit of component 4 are required to assemble product 3. As such, various combinations of components are required for the different product types; the set of components thus changes dynamically according to the associated product, which is what we call a dynamic-component size. The TSASP models a realistic production process with a manufacturing stage and an assembly stage. In the first stage, a single machining machine produces the components needed to assemble the products. During the machining process, a batch setup time occurs whenever the machining machine switches to a different component type.
Each component has a different deterioration rate, and the deterioration accumulates continuously during the production of the component. A PMA restores the deterioration, and it should be scheduled where it is most effective in order to minimize the makespan. In the second stage, a single assembly machine starts to assemble a product whenever all of its required components are available from the first stage.
The notations, decision variables, basic assumptions, and the mathematical model for the proposed problem are as follows. Among other quantities, the model tracks the required number of units of each component for assembling each product, the cumulated number of units of each component manufactured up to each position, and the cumulated number of units of each component already used for assembly. The objective function in (1) is to minimize the makespan of the TSASP.
The constraints in (2) ensure that the machining machine produces exactly one component at each position of the machining sequence. The constraints in (3) and (4) ensure that the assembly machine produces exactly one product at each position of the assembly sequence.
The constraints in (5) ensure that the number of units produced of each component corresponds to the number of units required to assemble all the products. The constraints in (6) define the batch relationship determining whether two consecutive components are of the same type.
The constraints in (7) and (8) define the completion time of each component. As shown in (7), the completion time at the first machining position is calculated by adding the batch setup time, the machining time of the component assigned to that position, and the PMA time, which is included only if a PMA is conducted there. The constraints in (8) then compute, for each subsequent position, the batch setup time (incurred only when the component differs from that at the previous position), the deteriorated processing time of the current component, and the PMA time if a PMA is conducted.
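To make this recursion concrete, the sketch below computes component completion times on the machining machine under an assumed linear time-dependent deterioration model; the function name, data layout, and the reset-on-PMA bookkeeping are our illustrative assumptions, not the paper's notation.

```python
def machine_completion_times(cms, p, det, setup, pma_time, ppma):
    """Sketch of the completion-time recursion in constraints (7)-(8).

    cms: component type at each machining position.
    p[k]: base machining time of component type k.
    det[k]: assumed linear deterioration rate of type k, applied to the
            machining time elapsed since the last PMA.
    ppma[i]: 1 if a PMA is performed just before position i.
    """
    completion = []
    age = 0.0        # machine time accumulated since the last PMA
    clock = 0.0
    prev = None
    for i, comp in enumerate(cms):
        if ppma[i]:                       # PMA restores original times
            clock += pma_time
            age = 0.0
        if prev is None or comp != prev:  # batch setup on a type switch
            clock += setup
        actual = p[comp] + det[comp] * age  # deteriorated processing time
        clock += actual
        age += actual
        completion.append(clock)
        prev = comp
    return completion
```

For example, with two units of one component followed by a type switch, the second unit takes longer than the first because of the accumulated deterioration, and the switch incurs another setup.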
The constraints in (9) and (10) define the completion time of each product. As shown in (9), the completion time at the first assembly position is calculated by adding the earliest time at which all required components are available and the assembly time of the product. The constraints in (10) additionally consider the completion time of the previous assembly position, since the assembly machine can process only one product at a time.
The constraints in (11) and (12) define the number of components manufactured up to each position and the cumulated number of required components, respectively.
The constraints in (13) and (14) define the cumulative batch setup time, which is used to calculate the pure (setup-free) manufacturing time of a component under deterioration.
Both are used to define a shortage of components at each position, as stated in the constraints in (15). Note that the shortage term determines whether any required component is unavailable.
The constraints in (16) determine the available assembly time for a product at each position by taking a significant penalty value for any shortage into account.
The constraints in (17) ensure that the corresponding binary indicator equals 1 if the associated sum is nonnegative.
Finally, the constraints in (18) define the makespan of the TSASP. As a result, the optimal component-manufacturing sequence (CMS), product-assembly sequence (PAS), and positions of preventive maintenance activities (PPMA) are determined by the corresponding decision variables.
To solve the problem with CPLEX, the equations containing nonlinear min and max functions, namely (6), (9), (10), (13), (14), and (15), must be linearized. Some substitution variables are additionally defined for this purpose. The additional equations and variables related to the linearization are as follows:
Constraint (9a) defines the completion time of each product at the first assembly position. Constraint (9b) limits its upper bound to the minimum component-availability time. Furthermore, because this variable is part of the objective, the solver drives it toward its lower bound; constraints matching the lower bound with the upper bound are therefore essential, and they are given by Constraints (9c) and (9d). Note that the associated binary variables are set to 1 to keep Constraints (9c) and (9d) feasible. The max function in (10) is linearized in the same manner through Constraints (10b) and (10c).
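For concreteness, a term of the form $y = \min(a, b)$ can be linearized in the standard big-M way; the symbols $M$ (a sufficiently large constant) and the auxiliary binary $\delta$ here are illustrative, not the paper's notation:

```latex
y \le a, \qquad y \le b, \qquad
y \ge a - M(1-\delta), \qquad y \ge b - M\delta, \qquad \delta \in \{0,1\}.
```

A max term is handled symmetrically by reversing the inequalities, which is the pattern applied to (10).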
To validate the MILP model, an illustrative example with four components and two product types is provided, with its detailed parameters, in Figure 2 and Table 1, respectively. With the optimal CMS, PAS, and PPMA, the component-manufacturing completion times, product-assembly completion times, and the makespan are calculated in Figure 3. In accordance with the optimal CMS, the latest-finishing component among those of product 2 is component 3, which is completely manufactured at 124.51; therefore, product 2 can start to be assembled at 124.51. The available assembly times of the remaining products follow the same procedure. The completion time of the final product in the PAS is the makespan of the problem, and it is determined by CPLEX as 297.11. The Gantt chart of the optimal schedule for the example is illustrated in Figure 3.

Metaheuristics
The MILP model proposed in this article is not tractable for large TSASP instances because the problem is a typical combinatorial optimization problem. To solve large-scale problems, we focus on developing effective metaheuristic algorithms and propose SGA, SHS, HGA, HHS, and HSEA.

3.1. Batching Heuristic Reducing Setup Time and Total Deterioration Time.
In this section, a batching heuristic that reduces the number of setups and the total deterioration time (BSD) is proposed. The heuristic groups components of the same type together to reduce the number of setups and inserts PMAs to reduce the total deterioration time of the components. Thus, by constructing an effective CMS and PPMA, BSD reduces the total manufacturing time in the first stage and thereby the completion times that constrain assembly in the second stage.
First, BSD uses the predefined PAS to find the components required by each product. Then the first product in the PAS is selected, and the component with the highest requirement for that product is chosen; if several components tie on the requirement, the component with the longest processing time is selected. When all components of the product are scheduled, the same process is repeated for the next product in the PAS, and so on until all products in the PAS are scheduled. As a result, the CMS corresponding to the current PAS is constructed. Then, to assign an effective PPMA, the PMA time is compared with the total deterioration time accumulated by the components since the previous PMA. If the PMA time is less than this total deterioration time, a PMA is inserted and the current component is manufactured after the PMA; otherwise, the current component is manufactured without a PMA. The process is repeated until the PPMA is constructed for the whole CMS. The pseudocode of BSD is shown in Algorithm 1.

Input: PAS
Output: complete CMS (V) and PPMA (z)
Initialization: compute the requirement of each component type for each product type; V ← empty; z ← empty
(1) Repeat for each product in the PAS
(2)   Repeat
(3)     Select the component with the highest remaining requirement for the product
(4)     If several components tie on the requirement
(5)       Select the component with the longest processing time
(6)     End If
(7)     Insert the selected component into V
(8)   Until all required components of the product are inserted
(9) Until the end of the PAS
(10) Set the length of z equal to the length of V
(11) Repeat for each position of V
(12)   If the completion time of the position with a PMA ≤ the completion time without a PMA
(13)     z(position) ← 1
(14)   Else
(15)     z(position) ← 0
(16)   End If
(17) Until the end of z
End

Algorithm 1: Pseudocode for BSD.
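A minimal sketch of BSD, assuming a dictionary-based requirement matrix and a simple linear proxy for the accumulated deterioration; the greedy PMA placement rule here approximates the completion-time comparison in Algorithm 1.

```python
def bsd(pas, req, p, det, pma_time):
    """Sketch of the BSD batching heuristic (Algorithm 1).

    pas: product-assembly sequence.
    req[j][k]: units of component k needed by product j.
    p[k]: base machining time; det[k]: assumed linear deterioration rate.
    Returns the component-manufacturing sequence and PMA positions.
    """
    cms = []
    for prod in pas:
        # highest requirement first; ties broken by longest processing time
        comps = sorted(req[prod], key=lambda k: (-req[prod][k], -p[k]))
        for k in comps:
            for _ in range(req[prod][k]):
                # insert behind the last same-type component to save setups
                idx = max((i for i, c in enumerate(cms) if c == k), default=-1)
                if idx >= 0:
                    cms.insert(idx + 1, k)
                else:
                    cms.append(k)
    # place a PMA whenever accumulated deterioration exceeds the PMA time
    ppma, acc = [], 0.0
    for k in cms:
        if acc > pma_time:
            ppma.append(1)
            acc = 0.0
        else:
            ppma.append(0)
        acc += det[k] * p[k]   # crude proxy for accumulated deterioration
    return cms, ppma
```

On the worked example of Section 2, the insert-behind-same-type rule reproduces the grouped sequences such as {4 - 4 - 3 - 2 - 1}.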
An illustrative example based on Figure 2 and Table 1 describes BSD. Suppose that the PAS is {product 2 - product 3 - product 1}, which means one unit of product 2, one unit of product 3, and one unit of product 1 are sequentially assembled in the second stage. Let the CMS initially be empty. To assemble product 2 at the first position of the PAS, one unit of component 4 and one unit of component 3 are sequentially scheduled in the first stage, because the longest-processing-time (LPT) based assignment rule is applied for the CMS. Once all components of product 2 are scheduled, the current CMS is updated to {4 - 3}. Next, to assemble product 3 at the second position of the PAS, one unit each of components 2, 4, and 1 must be ready, ordered by the LPT rule. They are sequentially assigned to positions behind the same component type to reduce the setup time, and the resulting CMS is updated to {4 - 4 - 3 - 2 - 1}. Finally, to assemble product 1 at the third position of the PAS, one unit each of components 2, 4, and 1 must again be ready by the LPT rule.
They are also sequentially assigned to positions behind the same component type to reduce the setup time, and the CMS is updated accordingly. The PPMA is then constructed by comparing the PMA time with the total deterioration time of the components since the previous PMA. For this CMS, the corresponding PPMA is 1-1-1-0-1-0-0-0, where each value determines whether a PMA is executed at that position.

3.2. Genetic Algorithms. The genetic algorithm (GA), initially developed by Holland [28], is one of the most powerful and broadly applicable metaheuristic algorithms, based on the principles of evolution theory. The GA is generally an effective and efficient algorithm for large-scale combinatorial optimization problems [29]. In this article, two GAs with different solution representations (chromosomes) are compared for the TSASP. First, SGA uses a chromosome representing a complete solution consisting of the CMS, PAS, and PPMA. SGA can explore a wide solution area because its chromosomes are randomly generated. However, SGA may be vulnerable when searching for the optimal solution if the problem has exploitable optimality characteristics.
To overcome this vulnerability, the BSD of Section 3.1 is combined with the GA to form HGA. Thus, HGA uses a chromosome representing one partial solution (PAS), and the other partial solutions (CMS and PPMA) are determined by BSD from the PAS decoded from the chromosome. The chromosome structure for the TSASP in this article must simultaneously represent a CMS and a PPMA in the first stage, as well as a PAS in the second stage. The CMS represents the manufacturing sequence of the components with consideration of deterioration, the PAS represents the assembly sequence of the products, and the PPMA represents the positions of PMAs. The CMS and PAS use the actual machining and assembly sequences for the chromosome without an additional decoding process. The value of each gene in the PPMA is binary and determines whether a PMA is executed (0 or 1). The size of the PPMA equals the size of the CMS because, by the basic assumption of this problem, deterioration occurs only for components on the machining machine. The detailed structure of the chromosomes is depicted in Figure 4.
According to Figure 4, the PAS is {product 3 - product 4 - product 1 - product 2}, and the corresponding CMS and PPMA are shown alongside it. An initial population is randomly generated for the first generation, and each chromosome of the population is evaluated through the makespan. New chromosomes for the next generation are generated by two genetic operations, crossover and mutation. In HGA, one-point crossover and swap mutation are used as the genetic operations. The main advantage of these operations is that they avoid producing infeasible solutions. The one-point crossover and swap mutation are applied to all chromosomes equally, and they are illustrated for the CMS in Figure 5.
The chromosome of SGA has a three-dimensional string array that represents the CMS, PAS, and PPMA, as shown in Figure 4. The chromosome of HGA has a one-dimensional string array that represents a PAS and the other partial solutions (CMS and PPMA) are determined by BSD using the partial solution decoded from the chromosome. In both SGA and HGA, the chromosomes for the next generation are composed of the top 10% of the chromosomes directly copied from the current generation and the remaining chromosomes are stochastically selected by the roulette-wheel selection from the current generation. The next generation is evaluated, and this process is repeated until the termination criterion is satisfied. The pseudocode of the GA is described in Algorithm 2.
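The two genetic operators can be sketched as follows; the prefix-and-fill repair that keeps one-point crossover feasible for permutation chromosomes is a common choice and an assumption on our part, not necessarily the paper's exact mechanism.

```python
import random

def one_point_crossover(a, b):
    """Permutation-preserving one-point crossover: copy a prefix of
    parent a, then fill the remaining positions with parent b's genes
    in their order of appearance. Note: the membership filter assumes
    a permutation chromosome such as the PAS; sequences with repeated
    genes (like the CMS) would need multiset-aware repair."""
    cut = random.randint(1, len(a) - 1)
    head = a[:cut]
    tail = [g for g in b if g not in head]
    return head + tail

def swap_mutation(a):
    """Swap two randomly chosen positions of the chromosome."""
    a = a[:]
    i, j = random.sample(range(len(a)), 2)
    a[i], a[j] = a[j], a[i]
    return a
```

Because both operators only reorder genes, every offspring remains a valid sequence, which is the feasibility property the text attributes to these operations.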
3.3. Harmony Search Algorithms. The harmony search algorithm (HS) was initially studied by Geem et al. [30]. The HS algorithm is inspired by the natural musical performance process in which a musician searches for a better state of harmony. In the HS algorithm, each solution is called a "harmony" and is represented by an m-dimensional real-number vector. An initial population of harmony vectors is randomly generated and stored in the harmony memory (HM). A new candidate harmony is then generated by three operators, namely, HM consideration, pitch adjustment, and random selection. The HM is updated by comparing the candidate with the worst harmony vector in the memory.
In this article, two types of HS with different solution representations are compared for the proposed TSASP. First, SHS uses a solution structure representing a complete solution, that is, the CMS, PAS, and PPMA. Similar to HGA, HHS and HSEA combine the HS algorithm with the local search batching heuristic, BSD; thus, HHS and HSEA construct a solution structure representing one partial solution, the PAS, and the other partial solutions, the CMS and PPMA, are determined by BSD from the PAS. In the HS algorithm, the HM holds a set of harmony vectors, where the i-th harmony vector, x^i = {x^i(1), x^i(2), ..., x^i(m)}, is an m-dimensional real-number vector. Each harmony therefore has to be converted into a solution before the objective function can be evaluated.
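This conversion can be sketched as a random-key decoding; whether the keys are ranked in non-increasing or non-decreasing order is inferred from Figure 6, so treat the direction as an assumption.

```python
def decode_pas(harmony):
    """Random-key style decoding of a real-number harmony vector into a
    product-assembly sequence: product j is ranked by its key value,
    largest first (the 'non-increasing order' suggested by Figure 6)."""
    return sorted(range(len(harmony)), key=lambda j: -harmony[j])
```

For example, the vector [0.2, 0.9, 0.5] decodes to the sequence [1, 2, 0], i.e., the product with the largest key is assembled first.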
In this article, three solution structures, the PAS, CMS, and PPMA, are required to evaluate the makespan. To perform this conversion, a harmony vector is converted only into a PAS. Similar to HGA, the other partial solutions, the CMS and PPMA, are determined by BSD.

Input: crossover probability, mutation probability, population size, and maximum number of generations
Output: makespan
Initialization: let the PAS, CMS, and PPMA be empty
Begin
(1) If this algorithm is SGA
(2)   Randomly generate an initial population for the first generation based on three single-dimensional arrays
(3) End If
(4) If this algorithm is HGA
(5)   Randomly generate an initial population for the first generation based on a single-dimensional array
(6) End If
(7) Repeat
(8)   Repeat
(9)     If this algorithm is SGA
(10)      Calculate the objective value through the CMS, PAS, and PPMA
(11)    End If
(12)    If this algorithm is HGA
(13)      Calculate the objective value through the PAS and the CMS and PPMA derived by BSD
(14)    End If
(15)    Randomly select two chromosomes in the current population
(16)    If a random probability ≤ the crossover probability
(17)      Do the one-point crossover operation
(18)    End If
(19)    If a random probability ≤ the mutation probability
(20)      Do the swap mutation operation
(21)    End If
(22)    Calculate the fitness value
(23)  Until the population size is reached
(24)  Clone the top 10% of the chromosomes to the next generation and fill the remainder by roulette-wheel selection
(25) Until the maximum number of generations is reached
End

Algorithm 2: Pseudocode of the GAs.

Figure 6: An initial PAS encoded with keys at regular intervals and decoded in non-increasing order.

According to Figure 6, there are 4 products, and an initial harmony vector is randomly generated. The initial HM of the first iteration contains randomly generated harmony vectors, each representing a PAS. In SHS, the CMS and PPMA are generated directly as an actual machining sequence. The PAS is obtained through a decoding process that reverses the encoding at each iteration. The solution structure of SHS consists of a three-dimensional string array representing a CMS, PAS, and PPMA. Each iteration of SHS improvises a new harmony, and the improvising process is composed of HM consideration (HMC), random selection (RS), and pitch adjustment (PA).
For an equal operator effect, HMC, RS, and PA are used in both SHS and HHS. First, a uniformly distributed random number is generated. If the random number is less than the HM consideration probability, the new variable x_new(j) is taken from any harmony in the set {1, 2, ..., h} (h being the size of the HM); otherwise, it is generated by random selection, which draws a new real number within the search bounds. Second, if another random number is less than the pitch adjustment probability, the variable is adjusted by the pitch adjustment operator; otherwise, the process proceeds to the next variable. The pitch adjustment operator is x_new(j) = x(j) ± rand(0, 1) × bw, where rand(0, 1) is a uniformly generated random number between 0 and 1 and bw is the bandwidth. HMC, PA, and RS are illustrated for the PAS in Figure 7. According to Figure 7, the selected harmony variable in the improvising process is 0.5. If the random number is less than the HMC probability, x_new(1) is set to 0.5; otherwise, x_new(1) is 0.7, a value randomly generated through RS. Next, if the random number is less than the PA probability, x_new(1) is adjusted to 0.4 or 0.6 (from 0.5), or to 0.6 or 0.8 (from 0.7). The sign is determined by a random number (if the random number > 0.5, then PA+; otherwise, PA-). The HM is updated by comparing the fitness of the new harmony with that of the worst harmony vector in the HM: the worst harmony is eliminated from the HM and the new harmony becomes a member. When the improvisation is finished, the process repeats until the termination criterion is satisfied.
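The improvisation step (HMC, RS, and PA) can be sketched as follows; the parameter names hmcr, par, and bw follow common HS notation rather than the paper's symbols, and the clamping to the search bounds is an assumption.

```python
import random

def improvise(hm, hmcr, par, bw, lo=0.0, hi=1.0):
    """One harmony search improvisation step.

    hm: harmony memory, a list of equal-length real-number vectors.
    hmcr: harmony memory consideration probability.
    par: pitch adjustment probability.
    bw: pitch adjustment bandwidth; lo/hi: search bounds.
    """
    m = len(hm[0])
    new = []
    for j in range(m):
        if random.random() < hmcr:            # harmony memory consideration
            x = random.choice(hm)[j]
            if random.random() < par:         # pitch adjustment
                x += random.uniform(-1.0, 1.0) * bw
                x = min(max(x, lo), hi)       # keep within search bounds
        else:                                  # random selection
            x = random.uniform(lo, hi)
        new.append(x)
    return new
```

After improvisation, the new vector would replace the worst member of the HM whenever its objective value is at least as good, as described above.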
Similar to HGA, the harmony vector of HHS consists of a one-dimensional string array that represents a PAS and the other partial solutions (CMS and PPMA) are determined by BSD using the partial solution decoded from the solution structure. The pseudocode of HSs is shown in Algorithm 3.
3.4. Harmony-Search-Based Evolutionary Algorithm. The harmony-search-based evolutionary algorithm (HSEA) combines the advantages of the EA and HS in order to solve large-scale TSASP instances effectively. Similar to HS, the initial solutions of HSEA for the PAS are generated randomly as real-number vectors. To obtain a PAS, a decoding procedure converts each real-number vector into a PAS, and the other partial solutions (CMS and PPMA) are determined by BSD from the decoded PAS. Next, the objective value of each initial solution is calculated from the complete solution. According to a random probability, HSEA performs a harmony operator (HMC, PA, or RS) and improvises a new solution vector. If the new vector does not enter the HM and the regeneration index (RGI) has not reached the regeneration point (RGP), RGI is increased by 1; otherwise, the HM is updated and RGI is reset to 0. This process is repeated until the harmony-memory-size (HMS) criterion is satisfied. When RGI equals RGP, the solutions yielding the top 10% of objective values are preserved and the remaining solutions are regenerated randomly; this step is similar to the selection operator of the GA in this article. For solution diversity, another randomly generated solution consisting of an actual sequence is combined with the decoded current solution. The genetic operators are applied to the combined solution set, and the objective values are calculated to evaluate the fitness. Thereafter, the dominant solutions are preserved up to the original set size. To continue HSEA, the PAS of each updated solution is encoded back into a real-number vector. This process is repeated until the termination criterion is satisfied. A flow chart of HSEA is illustrated in Figure 8.
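The regeneration step of HSEA might be sketched as below; the 10% retention follows the text, while the function name, the minimization convention, and the use of fresh uniform random-key vectors for the remainder are our assumptions.

```python
import random

def hsea_regenerate(hm, fitness, keep_frac=0.1):
    """HSEA-style regeneration: keep the best fraction of harmonies
    (minimization, so lower fitness is better) and replace the rest
    with fresh random real-number vectors."""
    hm = sorted(hm, key=fitness)              # best solutions first
    n_keep = max(1, int(len(hm) * keep_frac))
    m = len(hm[0])
    kept = hm[:n_keep]
    fresh = [[random.random() for _ in range(m)]
             for _ in range(len(hm) - n_keep)]
    return kept + fresh
```

This mirrors the elitist selection of the GA while injecting diversity, which is the stated motivation for combining the two frameworks.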

Computational Results
To evaluate the performance of the proposed metaheuristic algorithms, computational experiments were conducted using randomly generated test problems, organized into a small-sized problem group and a large-scale problem group. Since the complexity of an instance depends heavily on the length of the CMS, several instances of the two groups were randomly generated according to the CMS length. To demonstrate the solvability of the mathematical model, we compared the best solutions of the metaheuristics with the optimal solutions using CMS lengths of 8, 10, and 12 for the small-sized problem group; 8 instances were randomly generated for each length. Since the computing time of CPLEX grows sharply as the CMS length increases, we imposed a 7200-second limit, and a run was terminated even if an optimal solution had not been found. Meanwhile, to evaluate the relative performance of the metaheuristic algorithms and gain insight into their behavior as the CMS length changes, we compare the best solutions of SGA, SHS, HGA, HHS, and HSEA using CMS lengths of 160, 400, and 640 for the large-scale problem group; 8 instances were again randomly generated for each length.

Input: harmony memory size, number of iterations, probability of harmony memory consideration, and probability of pitch adjustment
Output: makespan
Initialization: let the solution vectors for the PAS be empty
Begin
(1) If this algorithm is SHS
(2)   Randomly generate an initial solution for the first iteration based on three single-dimensional arrays
(3) End If
(4) If this algorithm is HHS
(5)   Randomly generate an initial solution for the first iteration based on a single-dimensional array
(6) End If
(7) Repeat
(8)   If this algorithm is SHS
(9)     Calculate the objective value through the PAS and the CMS and PPMA generated as an actual sequence
(10)  End If
(11)  If this algorithm is HHS
(12)    Calculate the objective value through the PAS and the CMS and PPMA derived by BSD
(13)  End If
(14) Until the harmony memory size is reached
(15) Rank the harmonies by objective value
(16) Repeat
(17)   Repeat
(18)     If a random probability ≤ the harmony memory consideration probability
(19)       Apply the harmony memory consideration operation
(20)     End If
(21)     If a random probability ≤ the pitch adjustment probability
(22)       Apply the pitch adjustment operation
(23)     End If
(24)   Until the size of the solution vector is reached
(25)   Calculate the objective value of the new solution vector
(26)   If the objective value of the new solution vector ≤ the objective value of the lowest-ranked harmony
(27)     Replace the lowest-ranked harmony with the new solution vector
(28)   End If
(29) Until the number of iterations is reached
End

Algorithm 3: Pseudocode of the HSs.
The generating conditions of the problem-instance parameters for each group are summarized in Table 2. The scheduling period was assumed to be 480 min (8 h × 1 day) for the small-sized problem group and 3600 min (12 h × 5 days) for the large-scale problem group. The number of products for each component count was 2, 3, and 4 for the small-sized group and 40, 60, and 80 for the large-scale group. The batch setup times, preventive maintenance activity times, and deterioration rates were each generated in two cases, a low case and a high case; the low-case batch setup time was fixed at 5 and the high-case at 20, and the low- and high-case values of the PMA times and deterioration rates are given in Table 2.
All metaheuristic algorithms were executed with 30 replications. For a fair comparison of SGA, HGA, SHS, HHS, and HSEA, the population size was set to 2 × the number of components, and the termination time was set to the elapsed time at which SGA, as a base heuristic, converged with no improvement or reached 1,000 generations. The crossover, mutation, harmony memory consideration, and pitch adjustment rates were 0.8, 0.2, 0.7, and 0.3, respectively; these parameters were determined through extensive preliminary experiments. The MILP model presented in Section 2 was coded in ILOG CPLEX 12.5, and all metaheuristic algorithms were coded in C#. All experiments were run on a PC with an Intel Core i7-4770 CPU, 4 GB RAM, and Windows 7. The test results of the small-sized and large-scale problem groups are summarized in Tables 3 and 4, which report the optimal solution obtained by CPLEX and, for each metaheuristic, the mean objective function value of all replications (Mean), the relative percentage deviation (RPD), and the mean absolute deviation (MAD).
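The instance-generation scheme above can be sketched as follows. Only the batch setup values (5 and 20) come from the text; the processing-time and deterioration-rate ranges are placeholders, since Table 2 is not reproduced here.

```python
import random

def generate_instance(n_components, n_products, setup_case="low", seed=0):
    """Randomly generate one test instance (illustrative sketch; the numeric
    ranges below are assumptions, except the fixed setup times 5 and 20)."""
    rng = random.Random(seed)
    setup_time = 5 if setup_case == "low" else 20  # fixed values from the text
    components = [{
        "type": rng.randrange(n_products),          # product this component belongs to
        "base_time": rng.randint(1, 10),            # assumed processing-time range
        "deterioration": rng.uniform(0.01, 0.05),   # assumed deterioration-rate range
    } for _ in range(n_components)]
    assembly_times = [rng.randint(1, 10) for _ in range(n_products)]  # assumed range
    return {"setup": setup_time, "components": components, "assembly": assembly_times}
```

Eight such instances per component count (8, 10, 12 for the small group; 160, 400, 640 for the large group) would reproduce the experimental design described above.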
The RPD and MAD are computed as

RPD = 100 × (F − Best) / Best,    MAD = (1/R) Σ_{r=1}^{R} |F_r − F̄|,

where F is the objective function value of a replication, F̄ is the mean objective function value of all R replications obtained by each metaheuristic, and Best is the best objective function value over all experiments (CPLEX, SGA, HGA, SHS, HHS, and HSEA) for each test problem. Best equals the optimal solution whenever CPLEX obtains it.
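The two performance measures defined above translate directly into code; the formulas are reconstructed from the textual definitions (the original equations were lost in extraction), so the exact form of MAD is an assumption.

```python
def rpd(value, best):
    """Relative percentage deviation of one replication from the best-known value."""
    return 100.0 * (value - best) / best

def mad(values):
    """Mean absolute deviation of the replication values from their mean."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values) / len(values)
```

For example, a replication with makespan 110 against a best-known value of 100 gives an RPD of 10.0.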
For the small-sized problem group, CPLEX was unable to derive the optimal solution for most test problems with 12 or more components within 7200 s; for those problems, the RPDs and MADs were computed with respect to the best solution found. The results in Table 3 indicate that all proposed metaheuristic algorithms perform well. In particular, HSEA came very close to the optimal solution and achieved the highest frequency of finding the optimum among the metaheuristic algorithms. The computational times of all metaheuristics for the small-sized group were under 1 s. These results show that HSEA is a very effective algorithm with low variation for the small-sized problem group.
For the large-scale problem group, the RPDs and MADs of HSEA were much better than those of the other metaheuristic algorithms in all test problem groups, indicating that HSEA significantly improves performance: its average RPD and MAD are 2.2 and 0.9, respectively. Figure 9 shows the mean plots and Tukey HSD intervals at the 95% confidence level for all problems in Table 4 solved by SGA, HGA, SHS, HHS, and HSEA; it clearly illustrates statistically significant differences in RPD among the five algorithms. Figure 10 shows the pattern of the average RPDs of SGA, HGA, SHS, HHS, and HSEA when three significant performance factors are varied: batch setup time, PMA time, and deterioration rate. HSEA clearly shows a low average RPD in all combinations of the factors, indicating that it is robust to these factors. The average RPDs of SGA, HGA, SHS, and HHS (but not HSEA) are highest (worst) in the case of high PMA time, low deterioration rate, and low batch setup time. The main reason for this phenomenon is that the makespan becomes sensitive to small changes in the number of PMAs as the PMA time increases.
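The factor analysis behind Figure 10 amounts to averaging the RPDs over instances grouped by algorithm and factor level. A small sketch of that aggregation (the record layout is an assumption, not the authors' data format):

```python
from collections import defaultdict

def mean_rpd_by_factor(records, factor):
    """Average RPD per (algorithm, factor level), as plotted per factor in Figure 10.

    records: iterable of dicts with keys 'alg', 'rpd', and one key per factor
             (e.g. 'setup', 'pma', 'deterioration'), each set to 'low' or 'high'.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for r in records:
        key = (r["alg"], r[factor])
        sums[key][0] += r["rpd"]
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}
```

Running this once per factor (setup time, PMA time, deterioration rate) yields the per-level averages whose patterns the figure compares across the five algorithms.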

Conclusions
In this article, a novel MILP model was developed for the TSASP with batch setup times, time-dependent deterioration, and preventive maintenance activities under dynamic component sizes, with the objective of minimizing the makespan. In the TSASP, products ordered by various customers are scheduled. In the first stage, the components required to assemble the corresponding products in the second stage must be manufactured. Once all the required components for an ordered product have been made, the assembly machine in the second stage can immediately assemble them into the product. Since the MILP model was not tractable within a reasonable computing time for problems with 12 or more components, the metaheuristic algorithms SGA, HGA, SHS, HHS, and HSEA were proposed. Computational experiments were executed with two groups, a small-sized problem group and a large-scale problem group. Based on the experimental results, HSEA was found to be a very effective algorithm, attaining low RPDs in reasonable computational time for the TSASP and proving robust across various combinations of the significant performance factors.
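The two-stage structure recapped above can be made concrete with a small makespan-evaluation sketch. The linear deterioration form p × (1 + rate × time-since-maintenance), the fixed PMA duration, and the data layout are assumptions for illustration, not the paper's exact model.

```python
def makespan(cms, pas, pma_after, comps, assembly, setup):
    """Evaluate the makespan of one schedule (illustrative model).

    cms: component-manufacturing sequence (indices into comps)
    pas: product-assembly sequence
    pma_after: set of CMS positions after which a PMA is performed
    comps: list of dicts with 'type', 'base_time', 'deterioration'
    assembly: dict product -> (assembly_time, set of required component indices)
    setup: batch setup time, incurred whenever the component type switches
    """
    pma_time = 10.0          # assumed PMA duration
    t = 0.0                  # clock on the stage-1 machining machine
    since_pma = 0.0          # machining time elapsed since the last PMA
    done = {}                # component -> stage-1 completion time
    prev_type = None
    for pos, c in enumerate(cms):
        comp = comps[c]
        if comp["type"] != prev_type:          # batch setup on a type switch
            t += setup
            prev_type = comp["type"]
        # Assumed linear time-dependent deterioration, reset by each PMA.
        p = comp["base_time"] * (1.0 + comp["deterioration"] * since_pma)
        t += p
        since_pma += p
        done[c] = t
        if pos in pma_after:                   # preventive maintenance activity
            t += pma_time
            since_pma = 0.0
    # Stage 2: the single assembly machine starts a product once all its
    # required components are available.
    at = 0.0
    for prod in pas:
        a_time, required = assembly[prod]
        at = max(at, max(done[c] for c in required)) + a_time
    return at
```

A routine like this is what the metaheuristics would call to score each candidate CMS/PAS/PPMA triple.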
Future research can proceed in three directions. First, the study can be extended to other kinds of setup and preventive maintenance activity, such as sequence-dependent setup times and position-based preventive maintenance activities. Second, the study should be extended to deterioration of the assembly process in the second stage, because manual assembly, in which human fatigue causes deterioration, remains common. Finally, the study should be extended to other manufacturing environments, such as the three-stage assembly scheduling problem and flexible flow shop scheduling problems.

Data Availability
All data can be accessed in the Computational Results section of this article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.