A Genetic Algorithm for a Two-Machine Flowshop with a Limited Waiting Time Constraint and Sequence-Dependent Setup Times

This study considers a two-machine flowshop with a limited waiting time constraint between the two machines and sequence-dependent setup times on the second machine. These characteristics are motivated by semiconductor manufacturing systems. The objective of this scheduling problem is to minimize the total tardiness. A mixed-integer linear programming formulation is provided to define the problem mathematically and is used to find optimal solutions with a mathematical programming solver, CPLEX. Because the problem is NP-complete, CPLEX requires significantly long computation times, so a genetic algorithm (GA) is proposed to solve the problem within a short computation time. Computational experiments were performed to evaluate the performance of the proposed algorithm, and the suggested GA outperformed the other heuristics considered in the study.


Introduction
The two-machine flowshop scheduling problem with a limited waiting time constraint and sequence-dependent setup times investigated in this study is motivated by semiconductor manufacturing systems. In a two-machine flowshop, all jobs are processed first on machine 1 and then on machine 2; a schedule in which the job sequence is the same on both machines is called a permutation schedule. Owing to the limited waiting time constraint, each job must start on the second machine within a limited waiting time after its completion on the first machine. Additionally, sequence-dependent setup times are incurred between two consecutive jobs on the second machine. The objective function of this scheduling problem is the minimization of the total tardiness. Thus, this problem can be denoted F2 | max-wait, s ij | Σ i T i in the three-field notation suggested by [1], where max-wait and s ij refer to the limited waiting time constraint and the sequence-dependent setup times, respectively. T i is the tardiness of job i, defined as T i = max(C i − d i , 0), where C i and d i are the completion time and due date of job i, respectively.
This scheduling problem is NP-complete because the typical two-machine flowshop total tardiness problem is NP-complete [2]. The considered limited waiting time constraint is common in semiconductor wafer fabrication, which involves many chemical processes. One example is a flowshop consisting of cleaning and diffusion processes [3]. If the surface of a wafer that has completed the cleaning process is exposed to air for a long time before the next process (the diffusion process), the likelihood of contamination increases and quality decreases. To prevent this, a limited waiting time constraint is imposed between the cleaning and diffusion processes. Similarly, if a chemical treatment is effective only when the treated wafer is processed on the next machine within a certain time after the treatment, the waiting time must be limited. If a wafer violates this time limit, the first operation must be reworked, or the wafer discarded in cases of severe contamination. Violations of the time constraint usually occur when production volume is high, reducing productivity and causing management losses.
Setup time refers to the time required for preparation before processing a job. If the setup time of a succeeding job varies depending on the preceding job, it is called a sequence-dependent setup time; in other words, the setup time depends on the sequence of the jobs. In semiconductor manufacturing systems, wafers for different product types may require different chemicals or operating temperatures even when they are processed on the same machine. Therefore, when sequence-dependent setup times are present, it is important to schedule jobs so as to minimize the total setup time.
A two-machine flowshop scheduling problem with sequence-dependent setup times (SDSTs) to minimize the makespan is closely related to the classical traveling salesman problem (TSP) [4], and dynamic programming methods [5,6] and branch and bound algorithms [7,8] have been proposed to obtain optimal solutions for the problem. However, because finding optimal solutions of the flowshop problem with SDSTs requires considerably long computation times, recent studies have focused on the development of metaheuristic algorithms: genetic algorithms [9-11], a variable neighborhood search algorithm [12], a migrating birds optimization algorithm [13], a discrete artificial bee colony algorithm [14], iterated greedy algorithms [15-17], and a local search based heuristic algorithm [18].
A two-machine flowshop scheduling problem with a limited waiting time (LWT) constraint to minimize the makespan has been proven to be NP-hard [19]. For this problem, branch and bound algorithms have been suggested to find optimal solutions [19,20], and several dominance properties of optimal solutions have been proposed [21]. The study in [22] considers a situation in which some jobs skip the first machine in a two-machine flowshop and suggests an approximation algorithm to minimize the waiting time variation of jobs. The study in [23] considers a three-machine flowshop with overlapping waiting time constraints and proposes a branch and bound algorithm to minimize the makespan. The studies in [24,25] consider flowshop problems with time lag constraints and develop a simulated annealing algorithm and a constructive heuristic algorithm with an insertion mechanism, respectively. For a flowshop problem with LWT and the objective of minimizing total tardiness, a heuristic algorithm and a Lagrangian relaxation-based lower bound strategy have been developed [26].
Only a few studies have considered SDSTs and LWT together, although both are common and important in semiconductor manufacturing systems. The study in [3] suggests a branch and bound algorithm for the objective of minimizing the makespan. In addition, no-wait flowshop problems with SDSTs have been studied recently: [27] proposes a branch and bound algorithm for the no-wait flowshop, [28] proposes a pairwise iterated greedy algorithm, and [29] suggests an insertion neighborhood search algorithm to minimize the total flowtime. Note that this study is the first attempt to consider the two-machine flowshop with SDSTs and LWT for the objective of minimizing the total tardiness.
In this study, a mixed-integer programming formulation of the considered scheduling problem is provided to describe the problem precisely and to obtain optimal schedules with CPLEX. Additionally, because the problem is NP-complete, heuristic algorithms are developed to obtain good (near-optimal) schedules in a short computation time.
Three types of heuristics are proposed: list scheduling methods, a constructive heuristic, and a genetic algorithm. To demonstrate the performance of the proposed heuristic algorithms, computational experiments were performed on randomly generated instances.
In the remainder of this paper, Section 2 describes the problem considered in this study in more detail including several assumptions and presents a mixed-integer linear programming formulation. Section 3 proposes heuristic algorithms. Section 4 describes the computational experiments and shows the results. Finally, Section 5 concludes the paper with a short summary.

Problem Description
In this section, the considered scheduling problem is described in more detail, and a mixed-integer linear programming formulation is presented. There are n jobs to be processed in the two-machine flowshop with a limited waiting time constraint and sequence-dependent setup times on the second machine. Only permutation schedules are considered here; that is, the job sequences on the two machines are the same. Although permutation schedules are not dominant for this problem, they are common in practice because of the simplicity of work management, flexibility in material handling, and limited buffer space. Note that permutation schedules are dominant in a typical two-machine flowshop with regular measures, including total tardiness. Other assumptions are as follows: (i) All jobs are ready to start at time zero. (ii) Information on the jobs to be scheduled is given; that is, processing times, due dates, limited waiting times, and sequence-dependent setup times are given in advance. (iii) Preemption is not allowed. (iv) Each machine can process only one job at a time, and a job can be processed on only one machine at a time. In this paper, the symbols in Table 1 are used to describe the mathematical formulation and the proposed algorithms.
Completion times and tardiness of the jobs in a given sequence are computed with recursive equations (1)-(5). The mixed-integer linear programming (MILP) formulation of the problem is given as objective function (6) subject to constraints (7)-(19), which include the binary domain constraints x ir , y ijr ∈ {0, 1}, ∀i, j, r.
Objective function (6) minimizes the total tardiness. Constraints (7) and (8) ensure that each job is placed at exactly one position in the sequence and that only one job is placed at each position. Constraints (9)-(12) define the start times of the jobs in the considered flowshop. Constraints (13)-(15) define the relationship between x ir and y ijr for the sequence-dependent setup times. Constraints (16) and (17) compute the completion times and tardiness of the jobs. Constraints (18) and (19) define the domains of the decision variables.
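To illustrate how a permutation schedule is evaluated under the two constraints, the following sketch computes completion times and the total tardiness for a given job sequence. It assumes the common convention that a job's start on machine 1 is delayed just enough to respect its waiting limit; the paper's recursive equations (1)-(5) may differ in detail, so this is an illustrative sketch rather than the authors' exact recursion.

```python
def evaluate(seq, p1, p2, setup, wait, due):
    """Total tardiness of permutation `seq` in a two-machine flowshop with
    a limited waiting time (LWT) between machines and sequence-dependent
    setup times (SDST) on machine 2."""
    c1 = 0          # completion time of the previous job on machine 1
    c2 = 0          # completion time of the previous job on machine 2
    prev = None
    total_tardiness = 0
    for j in seq:
        c1 = c1 + p1[j]                      # finish job j on machine 1
        s = setup[prev][j] if prev is not None else 0
        b2 = max(c1, c2 + s)                 # earliest start on machine 2
        if b2 - c1 > wait[j]:                # LWT violated: delay the start
            c1 += (b2 - c1) - wait[j]        # of job j on machine 1
        c2 = b2 + p2[j]
        total_tardiness += max(c2 - due[j], 0)
        prev = j
    return total_tardiness
```

Delaying a job on machine 1 pushes back the machine-1 completion times of all following jobs, which the forward recursion handles automatically.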

Heuristic Algorithms
In this section, three types of heuristic algorithms are proposed to solve the considered problem: list scheduling rules, a modified NEH algorithm (MNEH), and a genetic algorithm (GA). As only permutation schedules are considered in this study, these heuristics produce feasible permutation schedules, and the completion times and tardiness of each job are computed with equations (1)-(5) provided in Section 2.

List Scheduling Rules.
List scheduling rules are widely used in practice (e.g., in semiconductor manufacturing systems) because they are intuitive and easy to develop and apply. Four list scheduling rules are used, and schedules are obtained by sorting the jobs in nondecreasing order of the value computed by each rule.

For reference, the notation of Table 1 is as follows: [r] denotes the index of the job at the rth position in a schedule; x ir is 1 if job i is assigned to the rth position in a schedule and 0 otherwise; y ijr is 1 if jobs i and j are assigned consecutively to positions r and r + 1 and 0 otherwise; b [r]k and C [r]k are the start and completion times of the rth job on machine k; T [r] is the tardiness of the rth job; σ is a partial schedule; σi is the partial schedule obtained by appending job i to σ; U is the set of unscheduled jobs; and C k (σ) is the completion time of partial schedule σ on machine k.

Modified NEH Algorithm.
The heuristic algorithm proposed in this subsection is modified from the constructive NEH algorithm [30], which is known to work well in flowshop scheduling problems and has been modified by many researchers for various scheduling problems. The procedure of MNEH is summarized as follows.

Procedure: MNEH
(i) Step 0. Obtain a seed sequence from a list scheduling method. Let σ be this sequence and set σ* = ∅, r = 1. Go to Step 1. (ii) Step 1. Select the job in the rth position of σ. Go to Step 2. (iii) Step 2. For each i = 1, ..., |σ*| + 1, insert the selected job into the ith position of σ* and compute the total tardiness. Go to Step 3. (iv) Step 3. Among the (|σ*| + 1) partial schedules obtained in Step 2, select one that results in the minimum total tardiness. Let σ* be the selected partial schedule. Go to Step 4. (v) Step 4. r ← r + 1. If r ≤ n, go to Step 1; otherwise, go to Step 5. (vi) Step 5. Set i = 1 and j = 2. Go to Step 6. (vii) Step 6. Generate a new sequence by interchanging the jobs in the ith and jth positions of σ*. If the total tardiness of the new sequence is less than that of σ*, replace σ* with the new sequence and go to Step 5; otherwise, go to Step 7. (viii) Step 7. Let j ← j + 1. If j ≤ n, go to Step 6; otherwise, let i ← i + 1 and j ← i + 1. If i < n, go to Step 6; otherwise, terminate. The final σ* is the solution of this heuristic.
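The insertion phase of MNEH (Steps 0-4) can be sketched as follows. Here `evaluate` is assumed to be a function returning the total tardiness of a (partial) sequence, and the interchange phase of Steps 5-7 is omitted for brevity.

```python
def mneh(seed, evaluate):
    """Insertion phase of the modified NEH heuristic: jobs are taken from
    the seed sequence one at a time and placed in the position of the
    partial schedule that minimizes the total tardiness."""
    partial = []
    for job in seed:                        # Step 1: next job of the seed
        best_seq, best_val = None, float("inf")
        for i in range(len(partial) + 1):   # Step 2: try every position
            cand = partial[:i] + [job] + partial[i:]
            val = evaluate(cand)
            if val < best_val:              # Step 3: keep the best insertion
                best_seq, best_val = cand, val
        partial = best_seq
    return partial
```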

Genetic Algorithm.
The GA was first introduced in [31] as an optimization method that mimics natural evolution, and it is one of the most popular optimization methods because of its powerful search mechanism and problem-solving ability. The effectiveness of a GA depends on its procedure design, operators, and parameters. Thus, the following subsections describe the design of the GA proposed in this study.

Solution Representation.
In the proposed GA, solutions are encoded as chromosomes, and the performance of the algorithm may depend on how a chromosome is represented. In this study, a chromosome is simply the sequence of jobs because only permutation schedules are considered.

Initial Population.
To generate an initial population, list scheduling methods and the MNEH algorithm are used.
That is, four solutions are generated by the list scheduling methods, and these are used to generate four further solutions by MNEH. However, Steps 5-7 of MNEH are not applied here because these steps require a long computation time for large-sized problems. The remainder of the initial population is generated randomly.

Fitness Evaluation.
Fitness is the evaluated objective value of a solution; that is, it represents how good a solution is. Here, the total tardiness, the objective function of the considered problem, is used for fitness evaluation. Hence, the lower the total tardiness, the better the solution.

Selection.
Selection is to choose a set of parents from the population to generate the offspring. In general, selection is based on the quality of the solutions and randomness. In the proposed GA, the two most widely used schemes in the literature (tournament and roulette) are used for selection.
In the tournament method, four chromosomes are selected randomly from the population, and pairwise comparisons are performed to choose two chromosomes as parents. That is, the better of the first two chromosomes and the better of the last two chromosomes are selected as the parents.
For the roulette method, the inverse of the objective function value of each chromosome i in the population is computed as f i = 1/(Σ r T [r] + 1). Note that 1 is added to the denominator to avoid division by zero because the objective function value (the total tardiness) can be zero. The selection probability of each chromosome in the population is then prob i = f i / Σ j f j . Based on these selection probabilities, two chromosomes are selected randomly as parents.
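The roulette scheme above can be sketched as follows; the helper name and the use of `random.choices` are our own choices.

```python
import random

def roulette_select(population, total_tardiness, rng=random):
    """Roulette-wheel selection: fitness f_i = 1 / (total tardiness + 1),
    so a lower total tardiness gives a higher selection probability;
    the +1 avoids division by zero when the tardiness is 0."""
    fitness = [1.0 / (total_tardiness(ch) + 1.0) for ch in population]
    total = sum(fitness)
    probs = [f / total for f in fitness]               # prob_i = f_i / sum_j f_j
    return rng.choices(population, weights=probs, k=2)  # draw two parents
```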

Crossover.
The purpose of the crossover operation is to generate better offspring by exchanging the information of the selected parents. In the proposed GA, nine types of crossover are considered: one-point order crossover (OX1), two-point order crossover (OX2), order-based crossover (OBX), position-based crossover (PBX), partially matched crossover (PMX), similar job crossover (SJX), two-point similar job crossover (SJX2), similar block crossover (SBX), and two-point similar block crossover (SBX2).
For a description of these crossovers, let parent 1 and parent 2 be the selected parents to generate offspring.
(i) Procedure: OX1 Step 0. Let point 1 be a randomly selected cutoff point from parent 1 .
Step 1. Offspring inherits the front part (from the first to point 1 ) of parent 1 .
Step 2. Delete jobs that are already placed in offspring from parent 2 .
Step 3. Place the jobs in parent 2 into the unplaced positions of offspring in the order in which they appear in parent 2 .
(ii) Procedure: OX2 Step 0. Let point 1 and point 2 be two randomly selected cutoff points from parent 1 .
Step 1. Offspring inherits the middle part (between point 1 and point 2 ) of parent 1 .
Step 2. Delete jobs that are already placed in offspring from parent 2 .
Step 3. Place the jobs in parent 2 into the unplaced positions of offspring from left to right in the order in which they appear in parent 2 .
(iii) Procedure: OBX Step 0. Let set J be a set of jobs selected at random from parent 1 .
Step 1. Let set P be the set of positions of the jobs of set J in parent 2 .
Step 2. Place the jobs in set J into the positions in set P of offspring from left to right in the order in which the jobs appear in parent 1 .
Step 3. Delete jobs that are already placed in offspring from parent 2 .
Step 4. Place the jobs in parent 2 into the unplaced positions of offspring from left to right in the order in which they appear in parent 2 .
(iv) Procedure: PBX Step 0. Select a set of positions from parent 1 at random.
Step 1. Offspring inherits the jobs placed in the positions of the set from parent 1 .
Step 2. Delete jobs that are already placed in offspring from parent 2 .
Step 3. Place the jobs in parent 2 into the unplaced positions of offspring from left to right in the order in which they appear in parent 2 .
(v) Procedure: PMX Step 0. Let point 1 and point 2 be two randomly selected cutoff points from parent 1 .
Step 1. Offspring inherits the middle part (between point 1 and point 2 ) of parent 1 .
Step 2. For each unplaced position in offspring, the job in the same position in parent 2 is placed.
Step 3. Delete jobs that are already placed in offspring from parent 2 .
Step 4. If duplicate jobs occur before point 1 and after point 2 , these jobs are exchanged with the remaining jobs in parent 2 , considering the order of parent 2 .
(vi) Procedure: SJX Step 0. Offspring inherits jobs in the same position in the two parents.
Step 1. Let point 1 be a randomly selected cutoff point from parent 1 .
Step 2. Offspring inherits the front part (from the first to the point 1 ) of parent 1 .
Step 3. Delete jobs that are already placed in offspring from parent 2 .
Step 4. Place the jobs in parent 2 into the unplaced positions of offspring in the order in which they appear in parent 2 .
In SJX2, Steps 1 and 2 of SJX are modified to consider two cutoff points, as in OX2. Additionally, SBX and SBX2 are very similar to SJX and SJX2, respectively. In SBX and SBX2, a block means two consecutive jobs, and the only difference is that the offspring inherits blocks in the same position in the two parents in Step 0. Thus, the detailed procedures of SJX2, SBX, and SBX2 are omitted.
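As an illustration, the OX2 operator described above (the crossover later selected by the calibration experiment) can be sketched as follows; the `rng` parameter is our own addition for testability.

```python
import random

def ox2(parent1, parent2, rng=random):
    """Two-point order crossover (OX2): the offspring inherits the segment
    between two random cutoff points from parent1; the remaining positions
    are filled left to right with the missing jobs in parent2's order."""
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]       # Step 1: inherit the middle part
    placed = set(child[a:b + 1])
    fillers = iter(j for j in parent2 if j not in placed)  # Steps 2-3
    for i in range(n):
        if child[i] is None:
            child[i] = next(fillers)
    return child
```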

Mutation.
A mutation is an operation to escape from local optima and to increase the variability and diversity of the population. This operation partially modifies a chromosome to generate an offspring with a new chromosome. In the proposed GA, three types of mutation are considered: interchange, insertion, and swap.
(i) Interchange: two randomly selected jobs are interchanged. (ii) Insertion: a randomly selected job is inserted into a randomly selected position. (iii) Swap: a randomly selected job is swapped with the next job.
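The three mutation schemes can be sketched as follows; the function names are ours.

```python
import random

def interchange(seq, rng=random):
    """Interchange: two randomly selected jobs exchange positions."""
    s = seq[:]
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, rng=random):
    """Insertion: a randomly selected job is removed and reinserted at a
    randomly selected position."""
    s = seq[:]
    job = s.pop(rng.randrange(len(s)))
    s.insert(rng.randrange(len(s) + 1), job)
    return s

def swap_adjacent(seq, rng=random):
    """Swap: a randomly selected job is swapped with the job after it."""
    s = seq[:]
    i = rng.randrange(len(s) - 1)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s
```

All three operators preserve the permutation property of the chromosome.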

Improvement (Local Search).
Local search methods are commonly used in genetic algorithms, as in other (meta)heuristics, to improve solutions. A local search based on insertion is also used in the proposed GA. This local search is triggered when the current best objective function value has not improved for a predetermined number of consecutive generations. When triggered, the insertion move is applied 3n times to the best solution in the population, where n is the number of jobs. If a new solution is better than the current best solution, the best solution is replaced with the new solution. Note that because excessive local search may consume a long computation time, the number of insertion attempts is limited to 3n.
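The stagnation-triggered local search can be sketched as follows; `evaluate` is assumed to return the total tardiness of a sequence.

```python
import random

def local_search(best_seq, evaluate, rng=random):
    """Insertion-based local search: apply the insertion move 3n times to
    the best solution, keeping only improving moves."""
    n = len(best_seq)
    best_val = evaluate(best_seq)
    for _ in range(3 * n):                  # at most 3n insertion attempts
        cand = best_seq[:]
        job = cand.pop(rng.randrange(n))    # remove a random job ...
        cand.insert(rng.randrange(n), job)  # ... and reinsert it elsewhere
        val = evaluate(cand)
        if val < best_val:                  # keep only improving moves
            best_seq, best_val = cand, val
    return best_seq, best_val
```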

Restart.
The goal of the restart procedure is to avoid premature convergence of the population and to improve the solution quality obtained by the GA. The restart procedure in the proposed GA is based on that of Ruiz and Maroto [32]. It is triggered when the current best objective function value has not improved for a predetermined number of consecutive generations and is summarized below.
(i) Procedure: restart Step 0. Sort the chromosomes in the population in nondecreasing order of total tardiness (best first).
Step 1. Keep the first 20% of chromosomes and remove the remaining 80% of the chromosomes from the population.
Step 2. For 20% of the population size, chromosomes are selected randomly from the retained top 20%, and each selected chromosome is mutated once with the insertion operation.
Step 3. For another 20% of the population size, chromosomes are selected randomly from the retained top 20%, and each selected chromosome is mutated once with the interchange operation.

Step 4. The remaining 40% of the chromosomes are generated randomly.
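Steps 0-4 of the restart can be sketched as follows, assuming the population size is a multiple of five; the inlined mutation helpers are our own.

```python
import random

def restart(population, total_tardiness, rng=random):
    """Restart on stagnation: keep the best 20%, build 20% by insertion
    mutation and 20% by interchange mutation of the elites, and draw the
    remaining 40% as random permutations."""
    def _insertion(seq):
        s = seq[:]
        job = s.pop(rng.randrange(len(s)))
        s.insert(rng.randrange(len(s) + 1), job)
        return s

    def _interchange(seq):
        s = seq[:]
        i, j = rng.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
        return s

    size = len(population)
    ranked = sorted(population, key=total_tardiness)   # Step 0: best first
    elite = [ch[:] for ch in ranked[:size // 5]]       # Step 1: keep top 20%
    new_pop = elite[:]
    new_pop += [_insertion(rng.choice(elite)) for _ in range(size // 5)]
    new_pop += [_interchange(rng.choice(elite)) for _ in range(size // 5)]
    while len(new_pop) < size:                         # Step 4: 40% random
        p = elite[0][:]
        rng.shuffle(p)
        new_pop.append(p)
    return new_pop
```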

Termination Criterion.
The termination criterion of the proposed GA is a maximum CPU time; that is, when the maximum CPU time has elapsed from the start of the GA, the procedure stops.

Whole Procedure of the Proposed GA.
The procedure of the proposed GA, using the operators described above, is summarized below.
(i) Procedure: proposed GA (ii) Step 0. Let POP, P size , p c , and p m be the population, population size, crossover probability, and mutation probability, respectively. (iii) Step 1. Generate an initial population with P size chromosomes. Let POP be the initial population. (iv) Step 2. Set i = 1. While i < P size , repeatedly generate offspring by applying the selection, crossover, and mutation operators described above, with the local search, restart, and termination criteria applied as described in the preceding subsections.
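The procedure above is abbreviated in the text, so the following is only a minimal sketch of a generational loop consistent with the described operators (roulette selection, OX2 crossover, insertion mutation, CPU-time termination). The elitist replacement and the omission of population seeding, local search, and restart are simplifications of ours, not the authors' exact procedure.

```python
import random
import time

def genetic_algorithm(n_jobs, evaluate, max_cpu_seconds,
                      pop_size=50, p_c=0.35, p_m=0.6, rng=random):
    """Minimal GA sketch: roulette selection, OX2 crossover with
    probability p_c, insertion mutation with probability p_m, and
    elitist replacement, run until the CPU-time budget is spent."""
    pop = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    start = time.time()
    while time.time() - start < max_cpu_seconds:    # termination: CPU time
        fit = [1.0 / (evaluate(c) + 1.0) for c in pop]
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = rng.choices(pop, weights=fit, k=2)   # roulette
            child = p1[:]
            if rng.random() < p_c:                        # OX2 crossover
                a, b = sorted(rng.sample(range(n_jobs), 2))
                mid = set(p1[a:b + 1])
                fillers = iter(j for j in p2 if j not in mid)
                child = [p1[i] if a <= i <= b else next(fillers)
                         for i in range(n_jobs)]
            if rng.random() < p_m:                        # insertion mutation
                j = child.pop(rng.randrange(n_jobs))
                child.insert(rng.randrange(n_jobs), j)
            offspring.append(child)
        pop = sorted(pop + offspring, key=evaluate)[:pop_size]  # elitism
    return min(pop, key=evaluate)
```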

Test Instance and Environment.
Computational experiments were performed with randomly generated problems to evaluate the performance of the proposed algorithms. All algorithms were coded in Java, and the experiments were performed on a PC with a 3.2 GHz Intel Core i7-8700 CPU.
To generate problem instances, job information (processing times, sequence-dependent setup times (SDSTs), and limited waiting times (LWTs)) was generated according to the method presented in [3]. The processing times were generated from U(1, 50), where U(a, b) denotes the discrete uniform distribution on [a, b]. SDSTs and LWTs were generated from U(0, 50) and U(1, 100), respectively. The due dates of the jobs were generated from U(X(1 − T − R/2), X(1 − T + R/2)), where X is a lower bound on the makespan, and T and R are the tardiness factor and the due-date range parameter, respectively. To obtain X, the SDST of each job i was set to min(s ij | ∀j) and the LWTs were assumed to be infinite; X was then obtained by Johnson's algorithm [33], which gives the minimum makespan in the typical two-machine flowshop. For the distribution of due dates, three levels were used for each of T and R: T = (0.2, 0.4, 0.6) and R = (0.2, 0.5, 0.8).
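The instance generator can be sketched as follows. Note that the text obtains X via Johnson's algorithm, whereas this sketch substitutes a cruder makespan lower bound (total machine-2 work plus the smallest machine-1 processing time); that substitution is our simplification.

```python
import random

def generate_instance(n, T, R, rng=random):
    """Instance generation following the ranges in the text: processing
    times U(1,50), setups U(0,50), waiting limits U(1,100), and due dates
    U(X(1-T-R/2), X(1-T+R/2)) for tardiness factor T and range R."""
    p1 = [rng.randint(1, 50) for _ in range(n)]
    p2 = [rng.randint(1, 50) for _ in range(n)]
    setup = [[rng.randint(0, 50) for _ in range(n)] for _ in range(n)]
    wait = [rng.randint(1, 100) for _ in range(n)]
    X = sum(p2) + min(p1)        # crude makespan lower bound stand-in
    lo, hi = X * (1 - T - R / 2), X * (1 - T + R / 2)
    due = [rng.uniform(lo, hi) for _ in range(n)]
    return p1, p2, setup, wait, due
```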

GA Calibration.
The performance of a GA depends significantly on its operators and parameters. Hence, a calibration experiment is needed to select the best ones. The proposed GA has eight operators and parameters (the selection, crossover, and mutation schemes and the parameters P size , p c , p m , P l , and P r ), and all possible combinations were evaluated to find the best among them; in total, 46,656 combinations are possible. To simplify the experiment, the test was divided into two parts. The first experiment selected the best combination of selection, crossover, and mutation, and the second found the best combination of the remaining parameters.
For the evaluation and selection of the GA operators in the first experiment, a full factorial design of selection, crossover, and mutation was considered; hence, 54 different algorithm variants were tested. The remaining parameters were fixed to (P size , p c , p m , P l , P r ) = (50, 0.5, 0.4, 10, 50). Four levels were considered for the number of jobs (n = 20, 50, 100, and 200), and five instances were generated for each combination of (T, R, n); that is, 180 problem instances were generated. As a termination criterion, the maximum CPU time was set to 30n milliseconds (ms). To compare the GA variants on each instance, the relative deviation index (RDI) was used as a measure, defined as RDI = (Alg sol − Min sol )/(Max sol − Min sol ), where Alg sol is the objective function value of the current algorithm, and Min sol and Max sol are the minimum and maximum objective function values obtained by the compared algorithms, respectively. When Max sol = Min sol , the RDI is defined to be 0 for all algorithms. To see the effect of the GA operators on performance, an analysis of variance (ANOVA) was performed using SPSS; the ANOVA table is shown in Table 2. As can be seen in the table, the performance of the proposed GA was significantly affected by the operators at a significance level of 0.05, and according to the F-values, all three operators had a significant effect on the RDI. Figure 1 illustrates the main-effect plot used to find the operators with lower RDI values; as shown in the figure, roulette, OX2, and insertion showed better performance.
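The RDI measure, including the convention for the case Max = Min, is straightforward to implement:

```python
def rdi(alg_sol, all_sols):
    """Relative deviation index: (Alg - Min) / (Max - Min) over the
    objective values of all compared algorithms; defined as 0 when every
    algorithm returns the same value (Max == Min)."""
    lo, hi = min(all_sols), max(all_sols)
    if hi == lo:
        return 0.0
    return (alg_sol - lo) / (hi - lo)
```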
The advantage of OX2 is possibly because changing the front and rear parts of sequences can lead to better sequences in this scheduling problem with sequence-dependent setup times. For mutation, insertion was the best, which agrees with various flowshop scheduling studies that regard insertion as the best mutation operator [32]. Therefore, roulette, OX2, and insertion were used in the following experiments. The second experiment was performed to find the best combination of parameters for the proposed GA. Considering all levels of (P size , p c , p m , P l , P r ), 864 combinations were evaluated using the same approach as for the operators.
The ANOVA results are summarized in Table 3, which shows that p c , p m , and P r are the parameters that most significantly affect GA performance. As shown in Figure 2, which illustrates the main-effect plot of the GA parameters, the RDI was lowest at (P size , p c , p m , P l , P r ) = (50, 0.35, 0.6, 5, 75). Bathtub curves are observed for all parameters except P l ; that is, values both higher and lower than the selected ones yielded worse results, validating the range of values chosen for these parameters. For P l , frequent local searches were more effective when the best solution in the population was not improving. Notably, the selected mutation probability is larger than the crossover probability, probably because a small change made by a mutation is more likely to lead to a better sequence than a crossover.
That is, because limited waiting time constraints and sequence-dependent setup times are considered in this study, solutions may be improved by small changes to a job sequence. Thus, with the selected parameters, the proposed GA attempts to find better solutions by keeping the sequences of good solutions and making a few changes. In summary, the proposed GA uses roulette, OX2, and insertion as its operators and P size = 50, p c = 0.35, p m = 0.6, P l = 5, and P r = 75 as its parameters.

Performance Evaluation.
First, the performance of the proposed MILP formulation was evaluated using CPLEX 12.9. To avoid excessive computation times, the maximum CPU time was limited to 3,600 seconds (s) per instance. For the evaluation of the MILP, four levels of the number of jobs (n = 5, 10, 15, and 20) were considered, and five instances were generated for each combination of (T, R, n); that is, 36 combinations were considered, and 180 problem instances were generated. The results are summarized in Table 4, which shows the average computation time (ACPUT) and the number of instances (NI) for which no optimal solution was found within the time limit (3,600 s). As can be seen in the table, CPLEX obtained optimal solutions for all problems up to n = 10 but could not find optimal solutions for eight problems with n = 15. Furthermore, for n = 20, no instance was solved to optimality within the time limit.
In general, due dates significantly affect due-date-related measures (e.g., the total tardiness objective of the scheduling problem considered in this study). Hence, the impact of the due dates on the performance of CPLEX was analyzed. The computation times of CPLEX with respect to the due-date parameters (T, R) for n = 15 are summarized in Table 5. As can be seen from the table, the performance of CPLEX was significantly affected by the values of T and R. In particular, instances with T = 0.2 or 0.4 took significantly long computation times, whereas instances with T = 0.6 were solved within shorter computation times.
That is, CPLEX can solve problems quickly when the due dates of jobs are tight. Regarding the due-date range R, problems in which the due dates are widely distributed were solved in shorter computation times.
Next, the performances of the proposed list scheduling rules and MNEH algorithms were evaluated by comparison with the solutions obtained from CPLEX. Here, let MNEH x be the MNEH algorithm that starts from the initial solution obtained by list scheduling rule x. Note that, for the instances not solved within the time limit, the best solutions found at the time limit were used. Table 6 presents the average gaps between the heuristics and CPLEX, (Heu sol − CPLEX sol )/CPLEX sol , and the number of instances in which a heuristic found a solution equal to or better than that of CPLEX. As can be observed in the table, MEDD and Greedy performed well among the list scheduling rules, and MNEH MEDD was the best-performing heuristic algorithm. However, the average gaps from CPLEX were still large. These results explain why a metaheuristic, the GA, was needed to find good solutions within a short computation time in this study.
To evaluate the performance of the suggested GA, a comparison with other algorithms was required. However, to the best of our knowledge, there are no existing algorithms for the exact problem considered in this study. Thus, the evaluation was performed by comparing the proposed GA with CPLEX on small problem instances (n = 5, 10, and 15) and with the proposed heuristic algorithms on medium and large instances (n = 20, 50, 100, and 200). Through these comparisons, the effectiveness of the solutions from the proposed GA was evaluated. In these experiments, the maximum computation time of the GA was set to four levels: 50n, 100n, 500n, and 1,000n ms. Let GA x denote the GA whose maximum computation time is set to x·n ms. The effectiveness of the GA solutions compared with those of CPLEX is shown in Table 7. As can be seen from the table, the suggested GA found optimal solutions for almost all problem instances. In particular, the GA outperformed CPLEX for n = 20; a negative value means that the objective value of the GA solution is less than that of CPLEX.
This is probably because CPLEX fails to obtain optimal solutions within the maximum CPU time when n = 20.
For the medium and large problem instances, the GA solutions were compared with those of the MNEH algorithms. The results are summarized in Table 8, which shows RDI values and the number of instances in which each heuristic found the best solution among the heuristics. The table shows that the GAs outperformed the MNEH algorithms, although the GAs took longer CPU times. Moreover, where GA 50 and MNEH Greedy had similar computation times (approximately 10 s), GA 50 still performed better. Overall, although the GA requires a longer CPU time, it solves the problems better, and the CPU time of GA 50 is reasonably short.
Additionally, the effectiveness of the proposed GA was evaluated through comparison with a simulated annealing (SA) algorithm, another widely used metaheuristic for various optimization problems. SA attempts to improve an initial solution repeatedly by making small alterations until further improvements cannot be obtained. The SA algorithm can avoid entrapment in a local optimum by allowing occasional uphill moves that deteriorate the objective function value. In this experiment, a simple SA algorithm was devised for comparison with the proposed GA. In this SA algorithm, a neighborhood solution σ′ of a solution σ is generated using the insertion method, which was found to be the best mutation in the previous experiment. Let Δ = T(σ′) − T(σ), where T(x) denotes the total tardiness of solution x. If Δ < 0, σ is replaced with σ′; otherwise (Δ ≥ 0), the replacement is made with probability e^(−Δ/τ), where τ is a parameter called the temperature. This replacement procedure is repeated L times at each temperature, where L is the epoch length, set to n². The temperature is initially set to τ 0 = 100 and decreases gradually by the cooling function τ x+1 = τ x − α, where α is set to 0.025 in this algorithm. The algorithm terminates when the temperature reaches zero or the computation time reaches the maximum computation time.
The procedure of this SA algorithm is summarized below.

Procedure: SA
Step 0. Set initial values for the parameters used in the algorithm (τ0 = 100, L = n^2, and α = 0.025). Let τ = τ0 and l = 0.
Step 1. Generate a neighborhood solution σ′ of the current solution σ using the insertion method and compute Δ, the difference in total tardiness between σ′ and σ. If Δ < 0, replace σ with σ′; otherwise, replace σ with σ′ with probability e^(−Δ/τ). Let l = l + 1.
Step 2. If l = L, let τ = τ − α and l = 0.
Step 3. If τ ≤ 0 or the maximum computation time is reached, stop; otherwise, go to Step 1.

For a fair comparison, the same maximum computation time is set for SA. Let SA500 and SA1000 be the simulated annealing algorithms in which the maximum CPU times are set to 500n and 1000n, respectively. Here, GA500 is compared with SA500, and GA1000 with SA1000. The results of the comparisons with the SA algorithms are summarized in Table 9, which shows the average rate Alg_sol/Max_sol, where Alg_sol is the objective function value obtained by the algorithm under consideration and Max_sol = max{GA_x, SA_x} for x = 500 and 1000. Note that because the objective value can be zero, the maximum value is used as the denominator rather than the minimum to avoid division by zero; if Max_sol is zero, the rate is defined as zero. Under this measure, algorithms with lower rate values are better. As can be seen from Table 9, the proposed GA outperformed the simple SA except when n = 50, and the advantage is clearer for large-sized problems. Although the SA devised for this test was simple, the results validate the effectiveness of the proposed GA. Finally, to examine the effects of the due date parameters (T and R) on the performance of the GA, convergence rate charts are presented in Figure 3. Instances generated with large T values converged more quickly in the early generations; that is, when the due dates of jobs are tight, the GA can find good solutions in early generations. This result is similar to that observed with CPLEX. On the other hand, instances with R = 0.5 converged in later generations; that is, when the range of due dates is moderate, the GA requires more computation time to find good solutions. These results suggest that instances with a moderate range of due dates are harder than the other cases.
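The comparison measure of Table 9, including the zero-denominator convention described above, can be written as a small helper (the function name is ours):

```python
def comparison_rate(alg_sol, other_sol):
    """Rate = Alg_sol / Max_sol, where Max_sol is the larger of the two
    objective values. The maximum is used as the denominator because a
    total tardiness of zero (all jobs on time) is possible; if even the
    maximum is zero, the rate is defined as zero."""
    max_sol = max(alg_sol, other_sol)
    if max_sol == 0:
        return 0.0
    return alg_sol / max_sol
```

Averaging this rate over instances gives the values reported in Table 9; lower is better, and the algorithm that produced Max_sol on an instance scores exactly 1 there.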

Conclusion
In this study, we considered a two-machine flowshop scheduling problem with a limited waiting time constraint and sequence-dependent setup times, which is important in semiconductor manufacturing systems. A mixed-integer programming formulation was provided to define the problem precisely and was used to obtain optimal solutions with CPLEX. However, CPLEX could not obtain optimal solutions when n = 20 because the considered problem is NP-complete. Thus, to obtain good solutions in a short computation time, three types of heuristic algorithms were proposed: list scheduling methods, a constructive heuristic algorithm, and a genetic algorithm (GA). To find the best configuration of the GA, 46,656 different combinations of its components were evaluated, and the resulting GA outperformed the other heuristics. Additionally, the GA found optimal solutions for all instances with n = 5, 10, and 15 and showed better performance than CPLEX when n = 20. Thus, the GA obtains acceptable solutions within a reasonably short computation time.
Although the limited waiting time constraint and sequence-dependent setup times are important characteristics of semiconductor manufacturing systems, only a few studies have considered both characteristics together. Thus, more effective and efficient methodologies, including other types of metaheuristics, should be developed. Moreover, since it is very difficult to find optimal solutions for the considered problem, effective lower bounding strategies are needed to evaluate the solutions obtained by heuristic algorithms. Different waiting time constraints, such as the overlapping waiting time constraints in [23], can also be considered in future research. Finally, since the problem considered in this study is a simple type of flowshop, it can be extended to more practical settings such as hybrid flowshops, in which there are parallel machines in each stage.

Data Availability
All data for the computational experiments were generated randomly; the data generation method is described in Section 4.1 of the paper.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.