Article

A Novel Parallel Simulated Annealing Methodology to Solve the No-Wait Flow Shop Scheduling Problem with Earliness and Tardiness Objectives

1 AN-EL Anahtar ve Elektrikli Ev Aletleri Sanayi A.S., Istanbul 34896, Turkey
2 Industrial Engineering Doctorate Programme, Institute of Pure and Applied Sciences, Marmara University, Istanbul 34722, Turkey
3 Department of Industrial Engineering, Marmara University, Istanbul 34854, Turkey
* Author to whom correspondence should be addressed.
Processes 2023, 11(2), 454; https://doi.org/10.3390/pr11020454
Submission received: 20 December 2022 / Revised: 20 January 2023 / Accepted: 28 January 2023 / Published: 2 February 2023

Abstract

In this paper, the no-wait flow shop scheduling problem with earliness and tardiness objectives is considered. The problem is proven to be NP-hard. Recent no-wait flow shop studies have focused on familiar objectives, such as makespan, total flow time, and total completion time, whereas solution approaches covering the joint use of earliness and tardiness objectives remain scarce. A novel parallel simulated annealing methodology is proposed to solve this problem, overcoming the runtime drawback of classical simulated annealing and enhancing its robustness. Well-known flow shop datasets from the literature are utilized to benchmark the proposed algorithm against classical simulated annealing, variants of tabu search, and particle swarm optimization algorithms. Statistical analyses were performed to compare the runtime and robustness of the algorithms. The results revealed the enhancement of the classical simulated annealing algorithm in terms of time consumption and solution robustness via parallelization. It is also concluded that the proposed algorithm can outperform the benchmark metaheuristics even when they are run in parallel. The proposed algorithm has a generic structure that can easily be adapted to many combinatorial optimization problems.

1. Introduction

Flow shop scheduling problems (FSSPs) consist of $n$ jobs that should be processed on $m$ different machines. In the classical FSSP, it is assumed that an unlimited buffer exists between two successive machines. The no-wait flow shop scheduling problem (NWFSSP) is a special case of the generic FSSP in which each operation of a job has to be processed immediately after the preceding operation, without any delay. Consequently, the problem has a permutation schedule that should be processed on machines without buffers [1]. The permutation constraint ensures that all jobs are processed in the same order on all machines [2]. The NWFSSP is fundamental to applications in real-life production environments, especially for jobs whose products or components face a quick obsolescence risk. Examples include canned food processing, critical metal-forging operations, and production involving chemical reactions [3].
Graham et al. [4] introduced the $\alpha \mid \beta \mid \gamma$ notation to define a scheduling problem, where $\alpha$ stands for the machine environment, $\beta$ denotes any special constraint, and $\gamma$ represents the objective function or functions of the problem. The machine environment describes the shopfloor setup, such as parallel machines, flow shop, and job shop. The constraints capture special cases of processing, such as permutation (prmu), no-wait (nwt), blocking (block), and recirculation (rcrc). The last field hosts the objectives, e.g., total completion time, makespan, flow time, earliness, and tardiness. In this study, the NWFSSP is considered with the objective of minimizing total earliness and tardiness, denoted by $F_m \mid nwt \mid \sum E_j + \sum T_j$. NWFSSPs with $m$ machines have been proven to be NP-hard in the strong sense for $m$ greater than 2 [5]. A considerable number of published studies have dealt with the NWFSSP under makespan, total completion time, or tardiness-only objectives. Correspondingly, total earliness and tardiness objectives may be encountered in many studies dealing with scheduling problems. In some studies, the due date was included in the model as a constraint rather than an objective. The survey by Allahverdi [6] covered over 300 papers in the literature containing no-wait constraints. The problem $F_m \mid nwt \mid \sum T_j$, which minimizes only total tardiness, was recently studied by Aldowaisan and Allahverdi [7], Liu et al. [8], and Ding et al. [9]. However, a relatively small body of literature is concerned with the NWFSSP minimizing total earliness and tardiness. Some studies [10,11,12,13,14,15,16] formulated the problem to minimize multiple objectives by integrating due date constraints with objectives such as makespan, total flow time, and resource consumption. Huang et al. [17] studied the flexible FSSP to minimize total weighted earliness and tardiness under a due-window constraint. Some of these studies pursued minimizing total tardiness and earliness with minor nuances. Arabameri and Salmasi [18] handled the weighted objective under a sequence-dependent setup time constraint, providing a mixed-integer linear programming (MILP) model and a timing algorithm, and comparing the performance of customized particle swarm optimization (PSO) and tabu search (TS) metaheuristics. Schaller and Valente [19] studied the NWFSSP minimizing total earliness and tardiness and compared dispatching heuristics; they compared the dispatching heuristics under an additional time-allowed constraint in another study [20]. Guevara-Guevara et al. [21] proposed a genetic algorithm (GA) for the problem under a sequence-dependent setup time constraint and compared it with dispatching heuristics. Zhu et al. [22] implemented a discrete learning fruit fly algorithm for the distributed NWFSSP with weighted objectives under a common due date constraint and compared it only to an iterated greedy algorithm with idle-time insertion evaluation. Qian et al. [23] applied the matrix-cube-based estimation of distribution algorithm (MCEDA) under sequence-dependent setup time and release time constraints, benchmarked against seven metaheuristic algorithms.
Ingber [24] identified the primary shortcoming of the SA algorithm as its time-consuming computational steps. Due to its oscillation during the search for alternative solutions, the algorithm requires extensive time to produce reasonable incumbent solutions. Many parallelized versions have been developed to overcome this disadvantage. Figure 1 shows the taxonomy produced by Greening [25] in his doctoral dissertation.
Greening [25] divided parallel simulated annealing (PSA) algorithms into two main categories. Synchronous algorithms share information during runtime, while asynchronous algorithms have limited or no communication. Synchronous algorithms calculate the same cost function. Asynchronous algorithms work faster by ignoring synchronization and allowing errors but result in lower outcome quality. Synchronous algorithms are divided into two subcategories: serial-like and altered generation. Serial-like algorithms generate new states as applied in a sequentially running algorithm. Altered generation applies different strategies for state generation.
Over the years, many studies have been proposed to overcome the disadvantages of the SA algorithm. Fast simulated annealing (FSA) by Szu and Hartley [26] and very fast simulated reannealing (VFSA) by Ingber [27] are among the prominent studies that altered the cooling process for more rapid convergence to the global minimum. Malek et al. [28] proposed a parallel SA strategy for the TSP in which parallel algorithms compare their solutions at certain timepoints and restart from the incumbent solution with the initial temperature. Building on these studies, Yao [29] proposed a new simulated annealing algorithm (NSA) that is exponentially faster than VFSA. With a different approach, Roussel-Ragot and Dreyfus [30] suggested a general parallel form with two different temperature regimes, in which parallel processors communicate to update the approved current solution globally whenever a move is accepted. Mahfoud and Goldberg [31] integrated SA into the genetic algorithm to enrich the population at each iteration. Lee and Lee [32] evaluated various moving schemes for parallel synchronous and asynchronous SA strategies with multiple Markov chains. Wodecki and Bożejko [33] implemented a parallel SA version for the flow shop problem with a makespan objective in which parallel threads run simultaneous searches; the incumbent solutions of all threads are replaced whenever a thread finds a better solution than the current best. In an approach similar to this study, Bożejko and Wodecki [34] developed a procedure that runs a master thread and multiple slave threads. The slave threads iteratively search for better solutions; upon finding a better incumbent solution, a thread performs intensification iterations and reports the result, and all slave threads run again on the new incumbent solution with different temperatures. The results of that study showed average values far from the optimum even for 20-job instances. A possible root cause of this failure is premature convergence due to the greedy intensification of the threads, which is inconsistent with the cooling process of the SA algorithm. Czapiński [35] worked on the permutation flow shop problem to reduce total flow time using master and worker nodes: worker nodes report their results to the master node after a certain number of iterations, the incumbent solution is updated with the best solution shared with the master node, and the worker nodes run again using the updated incumbent solution as the initial solution. Ferreiro et al. [36] implemented parallel-running asynchronous graphics processing unit (GPU) threads beginning with the same instructions but different initial solutions. Sonuc et al. [37] coded another GPU algorithm on the Compute Unified Device Architecture (CUDA) platform that runs independent threads to solve the binary knapsack problem. Richie and Ababei [38] provided a synchronous methodology with a managing thread that distributes the calculations to worker threads. Turan et al. [39] suggested the multithread simulated annealing (MTSA) algorithm with master and slave threads, where each created slave thread continues running until a defined number of non-improving iterations (iterations in which the global best solution is not improved) is reached; upon completion of the thread runs, the solutions are gathered into a pool.
Vousden [40] compared the performance of asynchronous and synchronous SA implementations with regard to race conditions. Zhou et al. [41] benchmarked multiple SA threads without communication, initialized with different solutions. Coll et al. [42] built synchronous GPU threads that communicate at predefined time intervals and resume their runs from the best incumbent solution so far. Yildirim [43] deployed a multithread methodology with a hybrid structure in which SA is fed by optimizer sub-threads. Although many further studies exist in the literature, this review covers the principal parallelization approaches.
In the remainder of this paper, an MILP model is introduced to define the research problem. A simulated annealing (SA) metaheuristic variant, namely, the simulated annealing multithread (SAMT) algorithm, is proposed to solve the problem. The contributions of this algorithm and study are as follows:
  • Improving the runtime drawback of the SA algorithm;
  • Enhancing its robustness to converge to the global optimum solution;
  • Providing a new solution approach to NWFSSP, minimizing total earliness and tardiness;
  • Enabling parallel processing without excessive resource allocation.
The motivation behind this study is to contribute to improvements in the field of optimization, more specifically in metaheuristics. Even though metaheuristics were introduced after intense study and analysis, there are still open issues to address. A good example is the study by Deng et al. [44], where the authors improved the mutation strategy of the differential evolution algorithm and compared the novel methodology even to previously improved differential evolution methodologies. Similarly, Cai et al. [45] worked on the quantum-inspired evolutionary algorithm to improve multiple shortcomings, such as poor runtime, limited search capability, and difficulty in assigning rotation angles. With the same focus, this study introduces a novel methodology that improves the original SA without requiring excessive computational resources. Section 2 of the paper states the research problem and details the proposed and benchmark metaheuristic algorithms. Section 3 introduces the benchmark results and comparative analysis. Section 4 includes the discussion, conclusion, and future prospects.

2. Methodology

The NWFSSP imposes processing of each job without interruption from its start on the first machine until its completion on the last machine. Since the objective function also includes minimization of earliness in addition to tardiness, adding a delay between jobs may improve the objective function value (OFV) by avoiding the early completion of a job. However, unforced idleness would not be practical in most cases of the NWFSSP [46]; the disadvantages may include operational costs, such as machine running, setup, and buffer costs, that exceed any cost incurred due to earliness. As a result, the problem is proposed to have a non-delay schedule. An MILP model is suggested to formulate the problem. To ensure model integrality, two dummy jobs with zero processing times on all machines are added to the model as the first and last jobs. Thus, the MILP model has $n' = n + 2$ jobs and $m$ machines. Each job may be processed on only one machine at a time; similarly, each machine can process only one job at a time. The proposed MILP model, its parameters, and its decision variables are outlined below.
  • Parameters:
    • $p_{ik}$ = processing time of Job $k$ on Machine $i$.
    • $d_j$ = due date of Job $j$.
    • $g_{ijk}$ = slack time between the completion time of Job $j$ and the starting time of Job $k$ on Machine $i$, if Job $k$ is processed immediately after Job $j$.
    • $M$ = a very large number.
  • Decision variables:
    • $E_j$ = earliness of Job $j$.
    • $T_j$ = tardiness of Job $j$.
    • $C_{ij}$ = completion time of Job $j$ on Machine $i$.
    • $x_{jk}$ = 1 if Job $k$ is processed immediately after Job $j$, and 0 otherwise.
The MILP model of the NWFSSP minimizing total earliness and tardiness is as follows:

$$\min z = \sum_{j=2}^{n+1} E_j + \sum_{j=2}^{n+1} T_j, \tag{1}$$

subject to:

$$C_{ik} + M(1 - x_{jk}) \ge C_{ij} + g_{ijk} + p_{ik}; \quad i = 1,\dots,m; \; j,k = 1,\dots,n+2; \; j \ne k, \tag{2}$$

$$C_{mk} + M(1 - x_{1k}) \ge \sum_{i=1}^{m} p_{ik}; \quad k = 1,\dots,n+2, \tag{3}$$

$$C_{mj} - d_j - T_j + E_j = 0; \quad j = 1,\dots,n+2, \tag{4}$$

$$\sum_{j=1}^{n+2} x_{jk} = 1; \quad k = 1,\dots,n+2, \tag{5}$$

$$\sum_{k=1}^{n+2} x_{jk} = 1; \quad j = 1,\dots,n+2, \tag{6}$$

$$x_{(n+2),1} = 1, \tag{7}$$

$$x_{jk} \in \{0,1\}; \quad j,k = 1,\dots,n+2; \; j \ne k, \tag{8}$$

$$C_{ij}, T_j, E_j, g_{ijk}, p_{ik} \ge 0; \quad i = 1,\dots,m; \; j = 1,\dots,n+2, \tag{9}$$
where the objective function in Equation (1) minimizes the sum of total earliness and tardiness; the constraint in Equation (2) ensures that the completion time of a job on a machine is at least the completion time of the preceding job plus the slack time between the two jobs and the processing time of the job; the constraint in Equation (3) guarantees that the completion time of the first job equals its total processing time on all machines; the constraint in Equation (4) determines the earliness or tardiness by comparing the completion time and due date of each job; and the constraints in Equations (5) and (6) ensure that each job has exactly one predecessor and one successor, respectively. The model consists of $n + 2$ jobs to satisfy the constraints in Equations (5) and (6). Equation (7) is a virtual constraint that closes the sequence by assigning the last (dummy) job as the predecessor of the first (dummy) job. To align with real-world applications, it is assumed that each job is processed as early as possible, provided that no operation of the predecessor job causes a violation of no-wait processing. Thus, the values of the slack time $g_{ijk}$ are fixed for any pair of consecutive jobs, independent of their position in the schedule. The calculation of $g_{ijk}$ is explained in detail by Röck [47].
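Because every job is a rigid no-wait block, a permutation alone determines all completion times, so a candidate schedule can be evaluated without solving the MILP. The following C# sketch (a hypothetical helper, not the authors' published code) computes the total earliness and tardiness of a sequence by deriving, for each consecutive pair of jobs, the minimum start-time gap that preserves the no-wait property; this gap plays the role of the slack-plus-processing offset enforced by the constraint in Equation (2).

```csharp
using System;

static class NoWaitEvaluation
{
    // p[i, j]: processing time of job j on machine i (m x n matrix).
    // Returns the sum of E_j + T_j for the given permutation, assuming a
    // non-delay schedule that starts at time zero.
    public static long TotalEarlinessTardiness(int[] sequence, int[,] p, int[] dueDate)
    {
        int m = p.GetLength(0);
        long start = 0, objective = 0;

        for (int s = 0; s < sequence.Length; s++)
        {
            int job = sequence[s];
            if (s > 0)
            {
                int prev = sequence[s - 1];
                // Smallest start-time gap between prev and job such that no
                // operation of job ever waits between machines.
                long gap = 0, prevDone = 0, jobOffset = 0;
                for (int i = 0; i < m; i++)
                {
                    prevDone += p[i, prev];                    // prev leaves machine i at start(prev) + prevDone
                    gap = Math.Max(gap, prevDone - jobOffset); // job reaches machine i at start(job) + jobOffset
                    jobOffset += p[i, job];
                }
                start += gap;
            }

            long completion = start;
            for (int i = 0; i < m; i++) completion += p[i, job];

            // Exactly one of earliness/tardiness is nonzero, so E_j + T_j = |C_j - d_j|.
            objective += Math.Abs(completion - dueDate[job]);
        }
        return objective;
    }
}
```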

2.1. Neighborhood Operations

Neighborhood operations are applied to generate neighbor solutions for metaheuristic algorithms. The types of neighborhood operations influence the performance of a metaheuristic; thus, selecting appropriate operations is crucial for exploring the search space wisely. Three different types, implemented in the proposed or benchmark algorithms, are considered in this paper.

2.1.1. Insert Operation

The insert operation removes a job from the sequence and inserts it at a specified position. Two positions must be determined for this operation; usually, they are generated randomly or tried successively in order. Let $i$ and $j$ be integers from the set $\{1,\dots,n\}$. In an insert operation, Job $i$ is removed from the sequence and inserted into the position prior to Job $j$; the positions of the jobs after Job $j$ slide toward the end of the schedule. An example is shown in Figure 2, where Job 2 is inserted into Position 4.

2.1.2. Swap Operation

As with the insert operation, two positions must be determined for a swap operation. Let $i$ and $j$ be integers from the set $\{1,\dots,n\}$. In the swap operation, the positions of Job $i$ and Job $j$ are exchanged; the positions of the remaining jobs do not change. The example in Figure 3 shows the swap of Job 2 and Job 4.

2.1.3. Sub-Interchange Operation

Defined by Arabameri and Salmasi [18], the sub-interchange operation exchanges two adjacent jobs, Job $i$ and Job $i+1$. Running sub-interchange operations after updating the incumbent solution is believed to potentially yield a better solution.
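For concreteness, the three operators can be implemented over job permutations as in the following C# sketch (hypothetical method names; the insertion position convention follows one reasonable reading of the description above).

```csharp
using System;
using System.Linq;

static class Neighborhood
{
    // Remove the job at position i and re-insert it before the job currently at position j.
    public static int[] Insert(int[] seq, int i, int j)
    {
        var list = seq.ToList();
        int job = list[i];
        list.RemoveAt(i);
        list.Insert(j > i ? j - 1 : j, job); // account for the shift caused by removal
        return list.ToArray();
    }

    // Exchange the jobs at positions i and j; all other positions are unchanged.
    public static int[] Swap(int[] seq, int i, int j)
    {
        var copy = (int[])seq.Clone();
        (copy[i], copy[j]) = (copy[j], copy[i]);
        return copy;
    }

    // Exchange two adjacent jobs at positions i and i + 1 (sub-interchange).
    public static int[] SubInterchange(int[] seq, int i) => Swap(seq, i, i + 1);
}
```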

2.2. Proposed Algorithm—Simulated Annealing Multithread (SAMT)

The SAMT algorithm is based on the simulated annealing (SA) metaheuristic. Proposed by Kirkpatrick et al. [48], SA simulates the annealing of solids to solve combinatorial optimization problems. In physics, annealing is the process in which a solid is heated rapidly to a maximum temperature and then cooled slowly in heat baths, allowing its particles to arrange themselves in the ground state of the solid [49]. The algorithm is likewise initialized with a high temperature and cooled slowly; accepting occasional worsening moves during the cooling process prevents the algorithm from becoming trapped in local optima [50]. The flow of the classical SA algorithm is stated in Algorithm 1 [51].
Algorithm 1 Simulated Annealing
1: Select an initial solution $s \in S$ in the solution space
2: Set the initial temperature $T$
3: Set the temperature iteration counter $i = 0$
4: Set the maximum number of repetitions $n_{max}$
5: While (stopping criteria are not met)
6:  Set $n = 0$
7:  While ($n < n_{max}$)
8:   Generate a new solution $s' \in S$
9:   Calculate $\Delta = f(s') - f(s)$
10:   If ($\Delta < 0$)
11:    Set $s = s'$
12:   Else
13:    If ($Uniform(0,1) < e^{-\Delta/T}$)
14:     Set $s = s'$
15:   Set $n = n + 1$
16:  End While
17:  Set $i = i + 1$
18:  Set $T = T(i)$
19: End While
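The acceptance rule in lines 10–14 is the Metropolis criterion: improving moves are always taken, while worsening moves are accepted with probability $e^{-\Delta/T}$. A minimal C# sketch of this step (illustrative only):

```csharp
using System;

static class Metropolis
{
    static readonly Random Rng = new Random();

    // Returns true if a candidate with cost difference delta = f(s') - f(s)
    // should replace the current solution at temperature T.
    public static bool Accept(double delta, double temperature) =>
        delta < 0 || Rng.NextDouble() < Math.Exp(-delta / temperature);
}
```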
A review of the literature suggests that there is still room for parallel implementations of the SA algorithm. In this study, a novel approach utilizing multiple threads is proposed: the simulated annealing multithread (SAMT) algorithm, intended to overcome the time disadvantage of the classical SA algorithm and boost the convergence process. The algorithm is named after its pragmatic structure, in which a classical SA runs on a main thread supported by a sub-thread that iteratively updates the incumbent solution of the main thread with much faster SA runs. Rather than distributing the calculations of a single algorithm across available processors, this parallelism runs different SAs simultaneously to reduce the consumed time and search the solution space intelligently for the global optimum. In Greening's taxonomy [25], the SAMT algorithm falls into the altered-generation class of synchronous algorithms with shared memory. The novel methodology flow is demonstrated in Algorithm 2.
Algorithm 2 Simulated Annealing Multithread (SAMT)
1: Select initial solutions $s_m = s_t \in S$ in the solution space
2: Set the initial temperature $T_0^m$
3: Set the temperature iteration counter $i_m = 0$
4: Set the maximum number of repetitions $n_{max}$
5: Set the operator selection probability $ProbOperation \in (0,1)$
6: Set the fast sub-thread's initial temperature $T_0^{ft}$
7: Set the slow sub-thread's initial temperature $T_0^{st}$
8: Set the fast sub-thread iteration counter $i_{ft} = 0$
9: Set the slow sub-thread iteration counter $i_{st} = 0$
10: Set the thread selection probability coefficient $c \in (0,1)$
11: MAIN THREAD:
12: While (stopping criteria are not met)
13:  If (SUB-THREAD is not running)
14:   Calculate $\Delta = f(s_t) - f(s_m)$
15:   If ($\Delta < 0$)
16:    Set $s_m = s_t$
17:   Else
18:    Set $s_t = s_m$
19:   Run (SUB-THREAD)
20:  Set $n_m = 0$
21:  While ($n_m < n_{max}$)
22:   If ($Uniform(0,1) < ProbOperation$)
23:    $s_m' = Insert(s_m)$
24:   Else
25:    $s_m' = Swap(s_m)$
26:   Calculate $\Delta = f(s_m') - f(s_m)$
27:   If ($\Delta < 0$)
28:    Set $s_m = s_m'$
29:   Else
30:    If ($Uniform(0,1) < e^{-\Delta/T_m}$)
31:     Set $s_m = s_m'$
32:   Set $n_m = n_m + 1$
33:  End While
34:  Set $T_m = T_m(i_m + 1)$
35:  Set $i_m = i_m + 1$
36: End While
37: Return $s_m$ and $f(s_m)$
38: SUB-THREAD:
39:  If ($c + (1 - 2c)\,T_m(i_m)/T_m(0) < Uniform(0,1)$) (slow sub-thread)
40:   While (stopping criteria are not met)
41:    Set $n_t = 0$
42:    While ($n_t < n_{max}^{st}$)
43:     If ($Uniform(0,1) < ProbOperation$)
44:      $s_t' = Insert(s_t)$
45:     Else
46:      $s_t' = Swap(s_t)$
47:     Calculate $\Delta = f(s_t') - f(s_t)$
48:     If ($\Delta < 0$)
49:      Set $s_t = s_t'$
50:     Else
51:      If ($Uniform(0,1) < e^{-\Delta/T_{st}}$)
52:       Set $s_t = s_t'$
53:     Set $n_t = n_t + 1$
54:    End While
55:    Set $T_{st} = T_{st}(i_{st} + 1)$
56:    Set $i_{st} = i_{st} + 1$
57:   End While
58:  Else (fast sub-thread)
59:   While (stopping criteria are not met)
60:    Set $n_t = 0$
61:    While ($n_t < n_{max}^{ft}$)
62:     If ($Uniform(0,1) < ProbOperation$)
63:      $s_t' = Insert(s_t)$
64:     Else
65:      $s_t' = Swap(s_t)$
66:     Calculate $\Delta = f(s_t') - f(s_t)$
67:     If ($\Delta < 0$)
68:      Set $s_t = s_t'$
69:     Else
70:      If ($Uniform(0,1) < e^{-\Delta/T_{ft}}$)
71:       Set $s_t = s_t'$
72:     Set $n_t = n_t + 1$
73:    End While
74:    Set $T_{ft} = T_{ft}(i_{ft} + 1)$
75:    Set $i_{ft} = i_{ft} + 1$
76:   End While
The methodology is initialized by setting the parameters. Prior to finalizing this methodology, several alternative policies were considered, compared, and tested in terms of simplicity, runtime, efficiency, and implementation complexity. These policies included different numbers of threads with distinct completion times and different solution-update schemes. The threads in the proposed methodology have different roles. The fast thread decreases the runtime by jumping to better solutions in the vicinity of the current best solution with faster steps; it may also steer the search toward different “hills” if it provides a better solution. The slow thread has different aims, such as enabling moves over longer distances in the vicinity, jumping to similar solutions near other optima, or diverging from local optima. Possible jumps after the completion of a sub-thread are presented in Figure 4.
If the sub-thread results in a solution sequence with a better OFV than the global incumbent solution, the global incumbent solution is updated. After experimental runs with different parameters, an adaptive strategy with a single sub-thread provided the best solutions; the results also showed that the initial temperature and runtime of the sub-thread should be adjusted adaptively to achieve elite results. Therefore, the methodology was revised to have a single master (main) thread and a slave (sub) thread, with the speed of the sub-thread (slow or fast) assigned according to a predefined parameter. This strategy seeks a larger space and decreases the probability of becoming stuck in local optima due to premature convergence. Both insert and swap operators work well on the NWFSSP, and both are utilized in the proposed methodology to benefit from each and to avoid becoming trapped in a cycle. At each iteration, the algorithm runs the insert operator with probability $ProbOperation$ or the swap operator with probability $1 - ProbOperation$. The temperature-dependent selection function in the first line of the sub-thread determines the type of sub-thread depending on the temperature of the main thread. The value of the constant $c$ sets the initial probabilities of selecting the fast or slow thread. When $c < 0.50$, the slow thread has a lower probability of being selected at the beginning of the run, and this probability increases as $T_m$ decreases; the fast thread thus starts with the advantage. When $c > 0.50$, the reverse holds. The value $c = 0.50$ grants equal probability throughout the run.
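For illustration, the selection rule in line 39 of Algorithm 2 reduces to a single predicate. The following C# sketch (illustrative, not the authors' code) makes the probability behavior explicit: at $T_m = T_0$ the slow thread is chosen with probability $c$, and as $T_m$ approaches zero this probability moves linearly to $1 - c$.

```csharp
using System;

static class ThreadSelector
{
    static readonly Random Rng = new Random();

    // Returns true when the slow sub-thread should be launched.
    // At currentTemp == initialTemp the slow-thread probability is c;
    // as currentTemp -> 0 it moves linearly to 1 - c (line 39 of Algorithm 2).
    public static bool UseSlowThread(double c, double currentTemp, double initialTemp)
    {
        double threshold = c + (1 - 2 * c) * currentTemp / initialTemp;
        return threshold < Rng.NextDouble();
    }
}
```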

2.3. Benchmark Algorithms

The variants of tabu search (TS) and particle swarm optimization (PSO) from the study by Arabameri and Salmasi [18] were selected as benchmark algorithms since these metaheuristics were applied to the same research problem. The benchmark algorithms are introduced in Appendix A: Appendix A.1 introduces the TS and its variants, namely TS with the EDD initial solution and insertion neighborhood (TSEI), TS with a random initial solution and insertion neighborhood (TSRI), TS with the EDD initial solution and swap neighborhood (TSES), and TS with a random initial solution and swap neighborhood (TSRS); Appendix A.2 describes the PSO algorithm supported by two integrated local search algorithms: insertion (PSOI) and variable neighborhood structure (PSOV).

2.4. Test Problems

The test problems are selected from datasets already introduced by different studies and widely used in the literature for benchmarking scheduling problems. The proposed and benchmark algorithms are verified on Carlier's dataset [52], which has eight small-sized problems with 7–14 jobs and 4–9 machines. The process times in the dataset are generated with a pattern of sorting the digits adjacently. Another benchmark dataset consists of the scheduling problems defined by Reeves [53], who generated it to test his proposed genetic algorithm for the FSSP after failing to find a publicly shared dataset for this purpose, there being some evidence that process times cannot be completely random [54]. Reeves generated this dataset following the suggestions and parameters of Rinnooy [55]. The dataset ranges from 20 jobs and 5 machines up to 75 jobs and 20 machines. The well-known Taillard dataset [56] is also included for comparison of the algorithms. This dataset comprises problems ranging from 20 to 500 jobs and from 5 to 20 machines, and Taillard deliberately included hard problems. The process times are drawn randomly from the uniform distribution $U(1, 100)$. Due to the high number of problems, only the first problem of each job–machine combination in the Reeves and Taillard datasets is considered.
Due dates for the problems are created according to the rule proposed by Arabameri and Salmasi [18]; a due date may be randomly drawn from the interval in Equation (10):
$$d_j \sim U\left[LB \times \left(1 - TF - \frac{R}{2}\right),\; LB \times \left(1 - TF + \frac{R}{2}\right)\right], \tag{10}$$
where $LB$ is the lower bound of the makespan, $TF$ is the tightness factor, and $R$ is the range parameter. The $TF$ and $R$ values are selected from the set $\{0.2, 0.5, 0.8\}$. To avoid the possibility of creating negative due dates, the $(TF, R)$ combinations $(0.8, 0.5)$ and $(0.8, 0.8)$ are excluded. Hence, a total of 27 different problems are solved with seven different due date schemes, resulting in 189 combinations. Among the methods in the literature for calculating $LB$, the method by Taillard [56] is preferred due to its concrete and meaningful structure. The $LB$ can be calculated according to Equation (11):
$$LB = \max\left\{ \max_i \left(\alpha_i + \beta_i + \gamma_i\right),\; \max_j \sum_{i=1}^{m} p_{ij} \right\}, \tag{11}$$

where $\alpha_i$, $\beta_i$, and $\gamma_i$ are formulated according to Equations (12)–(14), respectively:

$$\alpha_i = \min_j \sum_{k=1}^{i-1} p_{kj}, \tag{12}$$

$$\beta_i = \sum_{j=1}^{n} p_{ij}, \tag{13}$$

$$\gamma_i = \min_j \sum_{k=i+1}^{m} p_{kj}. \tag{14}$$
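For illustration, the lower bound of Equations (11)–(14) and the due date sampling of Equation (10) can be computed as in the following C# sketch (a hypothetical helper; uniform sampling over the interval is assumed):

```csharp
using System;

static class DueDates
{
    // Taillard's makespan lower bound, Equations (11)-(14).
    public static long LowerBound(int[,] p) // p[i, j]: machine i, job j
    {
        int m = p.GetLength(0), n = p.GetLength(1);

        long machineBound = 0;
        for (int i = 0; i < m; i++)
        {
            long alpha = long.MaxValue, gamma = long.MaxValue, beta = 0;
            for (int j = 0; j < n; j++)
            {
                long before = 0, after = 0;
                for (int k = 0; k < i; k++) before += p[k, j];
                for (int k = i + 1; k < m; k++) after += p[k, j];
                alpha = Math.Min(alpha, before);   // Eq. (12)
                gamma = Math.Min(gamma, after);    // Eq. (14)
                beta += p[i, j];                   // Eq. (13)
            }
            machineBound = Math.Max(machineBound, alpha + beta + gamma);
        }

        long jobBound = 0;
        for (int j = 0; j < n; j++)
        {
            long total = 0;
            for (int i = 0; i < m; i++) total += p[i, j];
            jobBound = Math.Max(jobBound, total);
        }
        return Math.Max(machineBound, jobBound);   // Eq. (11)
    }

    // Sample a due date uniformly from the interval of Equation (10).
    public static double Sample(Random rng, long lb, double tf, double r) =>
        lb * (1 - tf - r / 2) + rng.NextDouble() * lb * r;
}
```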
The problems are divided into three groups according to their sizes considering the number of jobs. Small-sized problems in Group 1 have up to 20 jobs to be processed. Medium-sized problems in Group 2 consist of 20–100 jobs. Large-sized problems in Group 3 consist of over 100 jobs.

3. Results

3.1. Results of the Benchmark Runs

All of the metaheuristics involved in this study have several parameters that should be tuned to achieve favorable solutions. Additionally, the common parameters for benchmark runs should be determined. The maximum runtime of any run is limited by a function considering the number of jobs, as stated in Table 1. All metaheuristics except the TSEI and TSES algorithms are initialized with a random sequence to avoid any bias. The TSEI and TSES algorithms are initialized with the sequence provided by the EDD rule in which the jobs with earlier due dates are processed earlier.
The initial temperature of the SA algorithm should be set to a value that allows acceptance of worse solutions with a determined probability that decreases systematically. Since the test runs of all algorithms are terminated according to a runtime criterion, the temperature of the SA algorithm is decreased as a function of elapsed time according to Equation (15) in this study:
$$T(i+1) = T(i) \times \left(1 - \frac{t_{cur}}{t_{max}}\right). \tag{15}$$
The parameter $t_{cur}$ in Equation (15) denotes the elapsed time, whereas $t_{max}$ is the total assigned runtime. The algorithm stops when the temperature drops to zero. Similarly, the temperatures of the main and support threads of the SAMT algorithm should be managed for smooth synchronization; the time limits of the sub-threads are arranged within the time limit of the main thread. Tuning the parameters has a high influence on the performance of a metaheuristic. The guideline by LaTorre et al. [57] identifies several methods that have recently gained attention for tuning the parameters of metaheuristic algorithms. The focus iterative local search (FocusILS) methodology [58] was preferred for tuning the parameters of the SAMT algorithm. $T_0^m$ was selected from a set increasing in steps of 50 up to 500 degrees. The set $\{0.25, 0.50, 0.75\}$ was used to find the best values of the constant $c$ and $ProbOperation$. $T_0^{ft}$ and $T_0^{st}$ were tuned by dividing $T_0^m$ by factors of up to 10. The initial temperature of the SAMT algorithm may be selected as half that of the SA algorithm for better and faster convergence. Insert and swap operators are selected with equal probability at each iteration. Running the fast thread more frequently in the initial phase of the algorithm eases discovering the vicinity of the current solution, while rare searches with the slower thread in this phase produce faster jumps toward the optimal region when possible. The probability of using the slow thread should increase toward the end of the runtime; this change allows the sub-thread to jump from local optima to different optimal regions by climbing the hills. Setting the parameter $c$ to 0.25 assigns an initial probability of 0.75 to the fast thread and 0.25 to the slow thread. The probability of the fast thread decreases and that of the slow thread increases linearly during runtime, until the probability of selecting the fast thread stabilizes at 0.25 and that of the slow thread at 0.75. The parameters of the SAMT algorithm are shown in Table 2.
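A minimal C# sketch of the time-based cooling in Equation (15), assuming the elapsed time is tracked with a stopwatch:

```csharp
using System.Diagnostics;

static class Cooling
{
    // T(i+1) = T(i) * (1 - t_cur / t_max); the factor shrinks as runtime
    // elapses, and the temperature reaches zero when the budget is exhausted.
    public static double Next(double temperature, Stopwatch clock, double maxMillis) =>
        temperature * (1 - clock.ElapsedMilliseconds / maxMillis);
}
```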
The PSO algorithm depends on various parameters and variables that directly affect its performance. Tuning these parameters with ill-suited strategies may lead to disadvantages, such as premature convergence or failure to converge to the region near the optimum solution. The constants and variables are set to the values suggested by Arabameri and Salmasi [18] to achieve a consistent benchmark environment. The settings for the PSO algorithm are shown in Table 3.
While the $w$ value for the PSOV algorithm is set to a fixed value, the value for the PSOI algorithm is updated at each iteration depending on the maximum number of iterations $t_{max}$ and the current iteration $t_{cur}$.
The metaheuristics are compared using the percentage gap (PG), the relative percentage deviation of an algorithm's solution from the best solution found. PG is calculated according to Equation (16):
$$PG = \frac{AlgOFV - MinOFV}{MinOFV} \times 100, \tag{16}$$
where $AlgOFV$ is the OFV of the selected algorithm, and $MinOFV$ is the minimum OFV obtained by any algorithm for the corresponding problem. Since the metaheuristics are stochastic, each metaheuristic was run three times for each benchmark problem, and the result tables show the average PG of the three runs for each problem. SAMT runs a main thread and a sub-thread; for the sake of fairness, each benchmark algorithm was run twice for each instance and the better solution was selected, as if it had run two asynchronous threads. The algorithms were coded in the C# programming language, and the code was compiled and run on a single computer with an Intel(R) Core(TM) i7-6500U CPU and 8 GB of RAM.
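For reference, Equation (16) amounts to a one-line helper:

```csharp
static class Metrics
{
    // Percentage gap (PG) of an algorithm's OFV relative to the best OFV
    // found by any algorithm on the same problem, Equation (16).
    public static double PercentageGap(double algOfv, double minOfv) =>
        (algOfv - minOfv) / minOfv * 100.0;
}
```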
The accuracy of the metaheuristic algorithms was verified by comparison with the MILP results on the Carlier dataset. Each problem in the dataset was assigned seven due date schemes, resulting in 56 test problems. The problems were solved to optimality using the Gurobi solver, and every algorithm found the optimal result for every problem.
The results for the Reeves dataset are presented in Table A1 of Appendix B, which reports solutions for 49 problems. The last digit in each problem name is the index of the due date scheme. The results for the Taillard dataset are shown in Table A2 of Appendix B, which contains 84 problems. The indices and the corresponding TF and R values for due date creation are shown in Table 4.

3.2. Comparative Analyses

The algorithms are evaluated under identical conditions and on the same test problems. Therefore, comparative analyses are performed through analysis of variance (ANOVA) on the benchmark solutions in terms of the robustness of the results. The two-way ANOVA results in Table 5 reveal that the factors number of jobs and algorithm, as well as their interaction, have a significant effect on the PG values.
Due to this significance, post hoc analyses were conducted to reveal the differences in the group × algorithm interaction with pairwise comparisons. A concern during this analysis is keeping the family-wise error rate under control [57]. Tukey's HSD (honestly significant difference) and Scheffe tests were applied for pairwise comparisons; Tukey's HSD has high sensitivity for pairwise comparisons under balanced conditions, while the Scheffe method keeps the family-wise error rate under strict control [59]. A p-value lower than 0.05 indicates that the instances are significantly different. Small-sized problems, in terms of the number of jobs, are not considered in the evaluation since all algorithms return the same OFV for every problem in this group.
The Tukey's HSD comparison results for medium-sized problems are shown in Table A3 of Appendix C. The results reveal that the TSRS algorithm performs significantly worse than the PSOI, PSOV, SA, and SAMT algorithms, whereas the TSES algorithm performs worse than the SA and SAMT algorithms in terms of OFV. The remaining values indicate no significant difference among the SA, SAMT, TSRI, TSEI, PSOI, and PSOV algorithms in this group. As a result, only the TS algorithms with the swap operator separate from the group, providing considerably lower-quality solutions. Tukey's HSD results for large-sized problems in Table A4 of Appendix C show that the algorithms yield different solution qualities as the problem size increases. Only three comparisons have similar means according to Table A4: PSOV vs. PSOI, TSRI vs. PSOI, and TSRI vs. TSEI. The SAMT algorithm notably outperforms the remaining metaheuristics with consistent solution quality at all levels. Figure 5 shows the increasing separation in solution robustness among the algorithms in terms of mean PG vs. the number of jobs. The post hoc results for PG according to the Scheffe method in Table A5 of Appendix C align with those of Tukey's HSD. Requiring higher p-values to avoid type I errors, this analysis suggests that SAMT performs significantly better than TSRS on medium-sized problems, in addition to outperforming all algorithms on large-sized problems.
Apart from the robustness of the solutions, the runtime to find the best OFV is another characteristic that should be assessed to determine the performance of the algorithms. Another two-way ANOVA is established to determine whether there is a significant difference among the mean runtimes of the algorithms.
The ANOVA results in Table 6 reveal that at least one mean of the runtimes is not equal to the remaining means. Only Tukey’s HSD comparisons that are significantly different are listed in Table A6 of Appendix C, and the significant results from the Scheffe method are shown in Table A7.
The p-values of Tukey's HSD test in Table A6 prove that the mean runtimes to find the best solution of the SA and SAMT algorithms are significantly different for problems with at least 50 jobs. The SAMT algorithm also provides a runtime advantage compared to all benchmark metaheuristics for problems with 200 and 500 jobs. Evaluating the results from the Scheffe method in Table A7 as a group suggests that SAMT can find the best OFV significantly more rapidly than SA, TSEI, PSOI, and PSOV for problems with 200 jobs, and more rapidly than all algorithms when the number of jobs increases to 500. According to Figure 6, the SAMT algorithm needs a shorter runtime than the SA algorithm to find the best OFV while providing better or equal objective function values.
Figure 7 demonstrates the improvement of SAMT over the classical SA. For fairness, SAMT is compared to the average of two asynchronous runs of SA. In the case of PG, both the Scheffe and Tukey's HSD post hoc analyses suggest that SAMT is significantly better for the problems in the large-sized set. The PG difference increases nearly linearly with the number of jobs, as seen in Figure 7a. Considering the runtime to find the best solution, Tukey's HSD suggests that SAMT is better for problems with 75 or more jobs, while the more conservative Scheffe method requires at least 200 jobs to establish significance. Figure 7b shows that the runtime difference gradually increases with an exponential trend. The post hoc analyses and Figure 7 confirm that the proposed SAMT outperforms the classical SA in terms of both runtime and solution robustness, even when the SA runs two asynchronous threads and the better OFV of the two is selected for comparison.

4. Discussion and Conclusions

In this paper, a novel parallel metaheuristic methodology named SAMT, based on SA, was proposed. The motivation of the study was to improve the poor runtime performance and search capability of the classical algorithm. In the methodology, a sub-thread runs in parallel to adaptively update the search direction of the main thread, aiming to increase the capability of classical SA to find better solutions in shorter runtimes. The NWFSSP with earliness and tardiness objectives, $F_m \mid nwt \mid \sum E_j + \sum T_j$, is considered for benchmarking. The literature review revealed that earliness and tardiness objectives for the NWFSSP have not been widely studied. The most common practice for solving the research problem relies on dispatching rules and heuristics; a major problem with these is their inability to update themselves during runtime, in contrast to metaheuristics. The study by Arabameri and Salmasi [18] was selected as the reference for benchmarking since it included two important metaheuristic algorithms with different parameters.
The test runs and comparative analyses revealed that the SAMT algorithm provides more robust solutions than the classical SA algorithm, the PSO variants, and the TS algorithms; the solutions of the SAMT algorithm were slightly better in most cases of medium-sized problems and in all cases of large-sized problems, even when the benchmark algorithms ran double asynchronous threads. Another contribution of this study was the enhancement of the runtime required to provide a better solution in comparison to the SA algorithm: the SAMT algorithm consumed less time than the benchmark algorithms to find the best solution in large problems. Unlike distributed parallel computing, the proposed SAMT algorithm introduces independent parallel threads to enhance the robustness of the solution and overcome the runtime disadvantage of the classical SA algorithm. As intended, the multiple threads of the SAMT algorithm grant a divergence–convergence strategy that enables the algorithm to explore the solution space more thoroughly and rapidly. The adaptive search strategy with a single slave thread is the novelty of this study: a temperature-dependent function stochastically determines the speed of the sub-thread at each run. Thus, the algorithm is adapted to jump through the solution space to decrease runtime and increase robustness.
The number of threads and the SA parameters of each thread directly affect the performance of the SAMT algorithm. Hence, it is important to fine-tune each parameter systematically; an easy implementation for a single computer is the FocusILS parameter tuning tool. A further contribution of the study is the adaptive parameter tuning strategy of a single slave thread, established after analysis with the design of experiments (DOE). The purpose of the study was to show that the SA algorithm can be improved in terms of both runtime and result performance without excessive resource requirements, and that the newly proposed algorithm is a robust methodology for solving the NWFSSP with the total earliness and tardiness objective. In future studies, the method may be enhanced by deploying it on multiple CPUs/GPUs with a distributed programming methodology. The method will also be evaluated on different combinatorial optimization problems to confirm its efficiency. Another direction will be to adapt different types of metaheuristics to a parallel methodology for the NWFSSP and compare them with the SAMT algorithm.

Author Contributions

Conceptualization, I.K., O.S. and S.B.; methodology, I.K., O.S. and S.B.; software, I.K.; validation, I.K. and O.S.; formal analysis, I.K. and O.S.; investigation, I.K.; resources, I.K.; data curation, I.K.; writing—original draft preparation, I.K.; writing—review and editing, O.S. and S.B.; visualization, I.K.; supervision, O.S. and S.B.; project administration, S.B. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the company AN-EL Anahtar ve Elektrikli Ev Aletleri Sanayi A.S.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Scheduling problems were selected from the datasets introduced by Carlier [52], Reeves [53], and Taillard [56]. Due dates were randomly created as explained in the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

This appendix includes the descriptions of benchmark algorithms.

Appendix A.1. Tabu Search (TS)

The TS algorithm was proposed by Glover [60] to solve integer programming problems. Furthermore, Glover et al. [61] published a user guide introducing perspectives for implementing TS on combinatorial or nonlinear problems. TS has been widely applied to scheduling problems. The algorithm iteratively improves the incumbent solution until the termination criteria are met. Short-term memory is utilized to restrict recent moves in the neighborhood, exploring better solutions and avoiding entrapment in local optima, while long-term memory helps to update the neighborhood dynamically for intensification [62]. Recent moves are added to the tabu list (TL) and restricted for a defined number of iterations. Following the study by Arabameri and Salmasi [18], the size of the TL is determined according to Equation (A1):
$$|TL| = 7 + \frac{n}{15}. \tag{A1}$$
An aspiration criterion should be defined to enable a better move to be confirmed even if it is in the tabu list. The aspiration criterion for this study is set as the objective value of the incumbent solution. Any move that has a better objective function than the current best solution is accepted regardless of the TL.
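A brief C# sketch of the tenure and aspiration logic described above (illustrative names; moves are keyed by strings here, and integer division is assumed in Equation (A1)):

```csharp
using System.Collections.Generic;

static class TabuRules
{
    // Tabu list size from Equation (A1), with n the number of jobs.
    public static int Tenure(int n) => 7 + n / 15;

    // A move is admissible if it is not tabu, or if it beats the best
    // OFV found so far (the aspiration criterion).
    public static bool Admissible(HashSet<string> tabuList, string move,
                                  double moveOfv, double bestOfv) =>
        !tabuList.Contains(move) || moveOfv < bestOfv;
}
```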
The TS algorithm is an incremental metaheuristic that iteratively updates an initial solution. Arabameri and Salmasi [18] evaluated three types of initial solution-creation mechanisms. Earliest due date (EDD) orders the jobs by their due dates in ascending order. The longest tardiness/earliness rate (LTER) rule considers the ratio of the earliness and tardiness weights, which are both assumed to be 1 in this study. A random initial solution is a sequence in which jobs are placed in their positions randomly. Only two of these mechanisms are benchmarked in this study, since LTER cannot provide a distinct initial solution when all weights are equal to 1. The four TS combinations are TS with the EDD initial solution and insertion neighborhood (TSEI), TS with a random initial solution and insertion neighborhood (TSRI), TS with the EDD initial solution and swap neighborhood (TSES), and TS with a random initial solution and swap neighborhood (TSRS). Until the stopping criteria are met, a defined number of moves is evaluated at each iteration, and the best move is selected as the new current solution if it improves on the current solution or meets the aspiration criterion. To improve the robustness of the TS solutions, n moves are compared at each iteration to balance the tradeoff between the number of moves per iteration and the total number of iterations. The pseudocode of the TS algorithm is shown in Algorithm A1.
Algorithm A1 Tabu Search
1: Find the initial sequence $S_{init}$ by a construction heuristic or randomly
2: Set the best sequence $S_{best} = S_{init}$
3: Set the current sequence $S_{curr} = S_{init}$
4: Set the tabu list $TL = \emptyset$
5: Set $k_{max}$ = the maximum number of iterations
6: Set $k = 1$
7: While ($k \le k_{max}$)
8:  Create the set $N(S_{curr})$ of neighbor solutions of $S_{curr}$
9:  Find the best solution $S'$ in the set $N(S_{curr})$
10:  If ($S' \notin TL$)
11:   If (OFV $v(S') < v(S_{curr})$)
12:    Set $S_{curr} = S'$
13:    Set $v(S_{curr}) = v(S')$
14:    If (OFV $v(S') < v(S_{best})$)
15:     Set $S_{best} = S'$
16:     Set $v(S_{best}) = v(S')$
17:  Else
18:   If (OFV $v(S') < v(S_{best})$) (Aspiration)
19:    Set $S_{best} = S'$
20:    Set $v(S_{best}) = v(S')$
21:    Set $S_{curr} = S'$
22:    Set $v(S_{curr}) = v(S')$
23:  Set $k = k + 1$
24: End While
25: Return $S_{best}$ and $v(S_{best})$

Appendix A.2. Particle Swarm Optimization (PSO)

Kennedy and Eberhart [63] proposed the PSO algorithm as a social method for solving continuous nonlinear functions. The PSO algorithm is a population-based metaheuristic that iteratively updates its individuals. These individuals (namely, particles) represent solution instances that move with varying velocities and directions toward better solutions. The velocity and direction of each particle are determined by the positions of both the global best solution and the particle's own best solution, together with its previous velocity and direction. The position of the $i$-th particle at the $k$-th iteration may be represented as $P_i^k = [P_{i1}^k, P_{i2}^k, \dots, P_{in}^k]$, and $V_i^k = [V_{i1}^k, V_{i2}^k, \dots, V_{in}^k]$ denotes its velocity.
The best position of a particle up to the $k$-th iteration, called p-best, may be denoted as $B_i^k = [B_{i1}^k, B_{i2}^k, \dots, B_{in}^k]$. The global best particle at the $k$-th iteration (namely, g-best) is denoted $G^k = [G_1^k, G_2^k, \dots, G_n^k]$. The particles move toward p-best and g-best at each iteration to explore better solutions. Hence, the velocity of a particle is updated according to Equation (A2) at each iteration as a function of the previous velocity, p-best, and g-best:
$$V_i^{k+1} = w \times V_i^k + c_1 \times rand_1 \times (B_i^k - P_i^k) + c_2 \times rand_2 \times (G^k - P_i^k), \tag{A2}$$
where $w$ is the inertia weight that scales the impact of the previous velocity, $c_1$ and $c_2$ are acceleration coefficients, and $rand_1$ and $rand_2$ are uniform random numbers from the interval $[0, 1]$. Upon calculation of $V_i^{k+1}$, the position may be updated according to Equation (A3) at each iteration:
$$P_i^{k+1} = P_i^k + V_i^{k+1}. \tag{A3}$$
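Equations (A2) and (A3) translate directly into a per-dimension update; a minimal C# sketch for continuous positions (illustrative only):

```csharp
using System;

static class PsoUpdate
{
    static readonly Random Rng = new Random();

    // One velocity/position update for a particle, Equations (A2)-(A3).
    public static void Step(double[] position, double[] velocity,
                            double[] pBest, double[] gBest,
                            double w, double c1, double c2)
    {
        for (int d = 0; d < position.Length; d++)
        {
            double r1 = Rng.NextDouble(), r2 = Rng.NextDouble();
            velocity[d] = w * velocity[d]
                        + c1 * r1 * (pBest[d] - position[d])
                        + c2 * r2 * (gBest[d] - position[d]);  // Eq. (A2)
            position[d] += velocity[d];                         // Eq. (A3)
        }
    }
}
```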
Being a powerful metaheuristic, the PSO algorithm nevertheless has drawbacks that limit convergence to the global best solution under some conditions. Commonly, the algorithm becomes trapped in local optima in high-dimensional spaces, and its convergence rate in the iterative process is very low [64]. Hence, many improved, hybrid, or integrated versions have been suggested by researchers. In line with the study by Arabameri and Salmasi [18], the PSO algorithm is supported by integrating two different local search algorithms: insertion (PSOI) and variable neighborhood structure (PSOV). The flow of the PSOI algorithm is shown in Algorithm A2.
Algorithm A2 PSOI Local Search
1: Set the iteration number $k = 1$
2: Set $k_{max}$ = the maximum number of iterations
3: Calculate the OFV $v$ of sequence $S$
4: While ($k \le k_{max}$)
5:  Pick random integers $i$ and $j$ from the set $\{1,\dots,n\}$
6:  Set a new sequence $S'$ by executing $Insert(i, j)$ on sequence $S$
7:  Calculate the OFV $v'$ of sequence $S'$
8:  If ($v' < v$)
9:   Set $S = S'$
10:   Set $v = v'$
11:  Set $k = k + 1$
12: End While
13: Return $S$ and $v$
The PSOI algorithm has a simple local search strategy that iteratively compares the solutions obtained by random insert moves with the incumbent solution and updates it whenever the new sequence returns a better OFV. The PSOV algorithm constitutes a relatively more complex local search methodology, summarized in Algorithm A3. The PSOV local search attempts to discover better solutions by applying insertion and swap operations sequentially. Upon finding a better OFV, the sub-interchange operation is executed to investigate the solution space further and improve the solution. The local search terminates after a defined maximum number of iterations.
Algorithm A3 PSOV Local Search
1: Set the iteration number $k = 1$
2: Set $k_{max}$ = the maximum number of iterations
3: Calculate the OFV $v$ of sequence $S$
4: While ($k \le k_{max}$)
5:  Pick random integers $i$ and $j$ from the set $\{1,\dots,n\}$
6:  Set a new sequence $S'$ by executing $Insert(i, j)$ / $Swap(i, j)$ on sequence $S$
7:  Calculate the OFV $v'$ of sequence $S'$
8:  If ($v' < v$)
9:   Set $S = S'$
10:   Set $v = v'$
11:   Set a new sequence $S'$ by finding the best solution from Sub-interchange on sequence $S$
12:   Calculate the OFV $v'$ of sequence $S'$
13:   If ($v' < v$)
14:    Set $S = S'$
15:    Set $v = v'$
16:  Else
17:   Set a new sequence $S'$ by executing $Swap(i, j)$ / $Insert(i, j)$ on sequence $S$
18:   Calculate the OFV $v'$ of sequence $S'$
19:   If ($v' < v$)
20:    Set $S = S'$
21:    Set $v = v'$
22:  Set $k = k + 1$
23: End While
24: Return $S$ and $v$

Appendix B

This appendix contains the results of the benchmark datasets. Table A1 shows the results of the Reeves dataset.
Table A1. Results of the Reeves dataset (values under each algorithm are the average PG over three runs; Minimum OFV is the best objective value found by any algorithm).

Problem | Size | Minimum OFV | SAMT | SA | TSEI | TSES | TSRI | TSRS | PSOV | PSOI
REC01-1 | 20 × 5 | 6044 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-2 | 20 × 5 | 5113 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-3 | 20 × 5 | 3567 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-4 | 20 × 5 | 7412 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-5 | 20 × 5 | 6904 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-6 | 20 × 5 | 7124 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC01-7 | 20 × 5 | 12,076 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-1 | 20 × 10 | 7636 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-2 | 20 × 10 | 6690 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-3 | 20 × 10 | 5603 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-4 | 20 × 10 | 11,882 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-5 | 20 × 10 | 11,155 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-6 | 20 × 10 | 13,542 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC07-7 | 20 × 10 | 19,193 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-1 | 20 × 15 | 9896 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-2 | 20 × 15 | 9711 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-3 | 20 × 15 | 9468 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-4 | 20 × 15 | 16,141 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-5 | 20 × 15 | 16,733 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-6 | 20 × 15 | 15,970 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC13-7 | 20 × 15 | 25,294 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-1 | 30 × 10 | 17,490 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-2 | 30 × 10 | 13,630 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-3 | 30 × 10 | 11,977 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.13 | 0.00 | 0.00
REC19-4 | 30 × 10 | 23,013 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-5 | 30 × 10 | 20,592 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-6 | 30 × 10 | 22,911 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC19-7 | 30 × 10 | 38,065 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC25-1 | 30 × 15 | 21,567 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC25-2 | 30 × 15 | 19,609 | 0.00 | 0.00 | 0.00 | 0.13 | 0.00 | 0.00 | 0.00 | 0.00
REC25-3 | 30 × 15 | 14,718 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.17 | 0.00 | 0.00
REC25-4 | 30 × 15 | 31,156 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC25-5 | 30 × 15 | 32,517 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC25-6 | 30 × 15 | 30,218 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC25-7 | 30 × 15 | 49,750 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00
REC31-1 | 50 × 10 | 41,984 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00
REC31-2 | 50 × 10 | 35,382 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00
REC31-3 | 50 × 10 | 28,949 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC31-4 | 50 × 10 | 54,628 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 | 0.01 | 0.00
REC31-5 | 50 × 10 | 47,261 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC31-6 | 50 × 10 | 48,125 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.01 | 0.00 | 0.00
REC31-7 | 50 × 10 | 84,550 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
REC37-1 | 75 × 20 | 127,794 | 0.00 | 0.19 | 0.79 | 0.97 | 0.86 | 1.00 | 0.69 | 0.59
REC37-2 | 75 × 20 | 107,509 | 0.00 | 0.25 | 0.79 | 0.93 | 0.86 | 0.79 | 0.64 | 0.64
REC37-3 | 75 × 20 | 95,511 | 0.00 | 1.26 | 1.46 | 2.12 | 1.36 | 2.76 | 1.73 | 1.68
REC37-4 | 75 × 20 | 174,195 | 0.00 | 0.23 | 1.22 | 1.50 | 1.20 | 1.21 | 0.42 | 0.24
REC37-5 | 75 × 20 | 172,490 | 0.00 | 0.36 | 1.99 | 1.92 | 1.45 | 1.62 | 0.60 | 0.58
REC37-6 | 75 × 20 | 158,079 | 0.00 | 0.11 | 1.25 | 1.51 | 0.83 | 1.14 | 0.36 | 0.33
REC37-7 | 75 × 20 | 253,315 | 0.00 | 0.18 | 0.86 | 1.14 | 0.77 | 0.81 | 0.30 | 0.29
Table A2 presents the results of Taillard’s problems.
Table A2. Results of Taillard dataset.

| Problem | Size | Minimum | SAMT | SA | TSEI | TSES | TSRI | TSRS | PSOV | PSOI |
|---|---|---|---|---|---|---|---|---|---|---|
| TAI001-1 | 20 × 5 | 5337 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-2 | 20 × 5 | 4097 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-3 | 20 × 5 | 2910 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-4 | 20 × 5 | 7118 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-5 | 20 × 5 | 6016 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-6 | 20 × 5 | 6219 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI001-7 | 20 × 5 | 11,210 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-1 | 20 × 10 | 8415 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-2 | 20 × 10 | 7553 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-3 | 20 × 10 | 6180 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-4 | 20 × 10 | 12,358 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-5 | 20 × 10 | 11,900 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-6 | 20 × 10 | 11,108 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI011-7 | 20 × 10 | 19,275 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-1 | 20 × 20 | 11,434 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-2 | 20 × 20 | 11,256 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-3 | 20 × 20 | 9593 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-4 | 20 × 20 | 19,362 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-5 | 20 × 20 | 19,276 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-6 | 20 × 20 | 21,420 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI021-7 | 20 × 20 | 31,878 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| TAI031-1 | 50 × 5 | 29,891 | 0.00 | 0.20 | 1.13 | 2.44 | 1.37 | 4.24 | 0.00 | 0.73 |
| TAI031-2 | 50 × 5 | 24,332 | 0.00 | 0.00 | 0.52 | 4.63 | 1.37 | 2.34 | 0.35 | 0.44 |
| TAI031-3 | 50 × 5 | 15,589 | 0.00 | 0.00 | 0.81 | 0.53 | 0.00 | 4.64 | 0.80 | 1.81 |
| TAI031-4 | 50 × 5 | 31,761 | 0.00 | 0.00 | 0.00 | 0.13 | 1.10 | 1.32 | 0.00 | 0.00 |
| TAI031-5 | 50 × 5 | 26,324 | 0.00 | 0.04 | 0.81 | 0.77 | 2.70 | 5.89 | 0.61 | 0.00 |
| TAI031-6 | 50 × 5 | 17,286 | 0.00 | 0.00 | 0.92 | 5.94 | 0.75 | 1.45 | 0.00 | 0.00 |
| TAI031-7 | 50 × 5 | 53,175 | 0.00 | 0.19 | 0.35 | 0.43 | 1.14 | 1.41 | 0.41 | 0.26 |
| TAI041-1 | 50 × 10 | 43,470 | 0.00 | 0.36 | 0.90 | 1.19 | 0.55 | 4.29 | 0.73 | 0.42 |
| TAI041-2 | 50 × 10 | 35,949 | 0.00 | 0.00 | 0.00 | 0.66 | 0.10 | 0.41 | 0.22 | 0.61 |
| TAI041-3 | 50 × 10 | 26,059 | 0.00 | 0.00 | 0.41 | 1.67 | 0.00 | 0.92 | 0.31 | 0.00 |
| TAI041-4 | 50 × 10 | 55,980 | 0.00 | 0.63 | 1.74 | 1.75 | 0.45 | 1.04 | 0.80 | 0.52 |
| TAI041-5 | 50 × 10 | 51,615 | 0.00 | 0.09 | 0.32 | 1.00 | 0.46 | 1.28 | 0.03 | 0.09 |
| TAI041-6 | 50 × 10 | 50,136 | 0.00 | 0.07 | 1.50 | 0.45 | 1.69 | 2.85 | 0.08 | 0.46 |
| TAI041-7 | 50 × 10 | 86,223 | 0.00 | 0.00 | 0.72 | 0.85 | 0.97 | 1.04 | 0.00 | 0.00 |
| TAI051-1 | 50 × 20 | 69,200 | 0.00 | 0.24 | 0.27 | 0.32 | 0.22 | 0.77 | 0.33 | 0.23 |
| TAI051-2 | 50 × 20 | 58,340 | 0.00 | 0.21 | 0.42 | 0.84 | 0.97 | 0.68 | 0.29 | 0.39 |
| TAI051-3 | 50 × 20 | 51,722 | 0.00 | 0.53 | 0.92 | 1.34 | 0.50 | 0.80 | 0.72 | 0.82 |
| TAI051-4 | 50 × 20 | 92,524 | 0.00 | 0.22 | 0.93 | 0.75 | 0.43 | 0.78 | 0.32 | 0.29 |
| TAI051-5 | 50 × 20 | 95,055 | 0.00 | 0.34 | 1.03 | 1.69 | 0.54 | 1.55 | 0.30 | 0.49 |
| TAI051-6 | 50 × 20 | 89,895 | 0.00 | 0.00 | 1.21 | 1.22 | 0.13 | 2.34 | 0.58 | 0.87 |
| TAI051-7 | 50 × 20 | 140,138 | 0.00 | 0.06 | 0.71 | 1.16 | 1.01 | 0.99 | 0.15 | 0.50 |
| TAI061-1 | 100 × 5 | 129,568 | 0.00 | 0.74 | 0.42 | 1.92 | 1.63 | 1.24 | 0.83 | 0.37 |
| TAI061-2 | 100 × 5 | 99,123 | 0.00 | 0.94 | 1.86 | 2.22 | 1.21 | 3.74 | 1.14 | 1.05 |
| TAI061-3 | 100 × 5 | 66,640 | 0.00 | 0.25 | 4.62 | 3.34 | 2.04 | 3.02 | 0.78 | 1.62 |
| TAI061-4 | 100 × 5 | 134,753 | 0.00 | 0.74 | 1.40 | 1.48 | 1.97 | 2.75 | 1.35 | 1.52 |
| TAI061-5 | 100 × 5 | 111,145 | 0.00 | 0.53 | 3.01 | 2.46 | 1.21 | 1.88 | 0.87 | 0.78 |
| TAI061-6 | 100 × 5 | 88,702 | 0.00 | 0.85 | 1.99 | 4.14 | 2.27 | 5.32 | 1.34 | 0.53 |
| TAI061-7 | 100 × 5 | 210,576 | 0.00 | 0.47 | 0.93 | 1.20 | 1.13 | 1.47 | 0.74 | 0.71 |
| TAI071-1 | 100 × 10 | 167,093 | 0.00 | 0.28 | 0.52 | 0.94 | 0.71 | 1.60 | 0.36 | 0.35 |
| TAI071-2 | 100 × 10 | 126,832 | 0.00 | 0.64 | 0.82 | 1.19 | 1.24 | 2.29 | 0.58 | 0.94 |
| TAI071-3 | 100 × 10 | 94,769 | 0.00 | 0.87 | 0.91 | 3.37 | 1.32 | 2.30 | 0.49 | 0.94 |
| TAI071-4 | 100 × 10 | 200,061 | 0.00 | 0.37 | 1.33 | 1.58 | 0.96 | 1.58 | 0.87 | 0.77 |
| TAI071-5 | 100 × 10 | 177,140 | 0.00 | 0.46 | 1.91 | 1.38 | 0.82 | 1.09 | 0.94 | 0.62 |
| TAI071-6 | 100 × 10 | 164,182 | 0.00 | 0.66 | 1.96 | 3.28 | 2.05 | 2.92 | 0.74 | 0.99 |
| TAI071-7 | 100 × 10 | 306,976 | 0.00 | 1.12 | 1.02 | 2.18 | 2.50 | 3.59 | 0.92 | 1.10 |
| TAI081-1 | 100 × 20 | 237,085 | 0.00 | 0.56 | 1.31 | 1.84 | 0.82 | 1.57 | 0.49 | 0.97 |
| TAI081-2 | 100 × 20 | 215,998 | 0.00 | 0.87 | 1.06 | 2.31 | 1.84 | 1.53 | 1.24 | 1.30 |
| TAI081-3 | 100 × 20 | 186,009 | 0.00 | 0.49 | 1.32 | 2.12 | 1.61 | 3.60 | 0.89 | 1.21 |
| TAI081-4 | 100 × 20 | 313,809 | 0.00 | 0.75 | 1.26 | 1.44 | 1.42 | 2.16 | 0.82 | 1.23 |
| TAI081-5 | 100 × 20 | 300,851 | 0.00 | 0.27 | 1.29 | 2.28 | 0.73 | 2.46 | 0.48 | 0.88 |
| TAI081-6 | 100 × 20 | 310,201 | 0.00 | 0.80 | 2.15 | 2.91 | 3.03 | 3.07 | 1.54 | 1.13 |
| TAI081-7 | 100 × 20 | 442,923 | 0.00 | 1.02 | 1.94 | 2.34 | 1.94 | 2.51 | 1.08 | 1.15 |
| TAI091-1 | 200 × 10 | 629,975 | 0.00 | 0.63 | 2.30 | 2.82 | 1.78 | 3.19 | 1.37 | 1.26 |
| TAI091-2 | 200 × 10 | 491,187 | 0.00 | 0.96 | 2.06 | 0.53 | 2.13 | 1.11 | 1.30 | 1.54 |
| TAI091-3 | 200 × 10 | 375,423 | 0.00 | 0.83 | 3.47 | 2.51 | 2.68 | 2.06 | 1.76 | 2.13 |
| TAI091-4 | 200 × 10 | 737,822 | 0.00 | 1.25 | 1.89 | 2.53 | 1.88 | 2.74 | 2.07 | 1.75 |
| TAI091-5 | 200 × 10 | 650,818 | 0.00 | 1.42 | 1.87 | 2.47 | 2.15 | 2.24 | 1.62 | 1.87 |
| TAI091-6 | 200 × 10 | 579,224 | 0.00 | 0.98 | 3.17 | 2.76 | 2.55 | 2.92 | 1.87 | 1.76 |
| TAI091-7 | 200 × 10 | 1,105,327 | 0.00 | 0.70 | 2.37 | 1.77 | 1.88 | 1.31 | 1.32 | 1.27 |
| TAI101-1 | 200 × 20 | 893,397 | 0.00 | 0.41 | 1.20 | 1.14 | 1.19 | 0.94 | 0.80 | 1.14 |
| TAI101-2 | 200 × 20 | 773,905 | 0.00 | 1.75 | 2.76 | 3.04 | 2.58 | 3.39 | 1.95 | 2.35 |
| TAI101-3 | 200 × 20 | 655,942 | 0.00 | 1.08 | 2.33 | 3.02 | 2.56 | 2.42 | 1.62 | 1.52 |
| TAI101-4 | 200 × 20 | 1,124,882 | 0.00 | 0.57 | 2.28 | 2.20 | 2.03 | 2.24 | 1.23 | 1.26 |
| TAI101-5 | 200 × 20 | 1,054,993 | 0.00 | 1.80 | 3.53 | 3.00 | 3.24 | 3.67 | 2.64 | 2.36 |
| TAI101-6 | 200 × 20 | 1,000,185 | 0.00 | 1.64 | 1.34 | 2.73 | 2.40 | 3.92 | 2.47 | 2.77 |
| TAI101-7 | 200 × 20 | 1,589,543 | 0.00 | 0.93 | 1.65 | 1.93 | 1.70 | 2.11 | 1.28 | 1.34 |
| TAI111-1 | 500 × 20 | 5,577,501 | 0.00 | 0.69 | 1.80 | 2.88 | 1.84 | 4.07 | 1.62 | 1.64 |
| TAI111-2 | 500 × 20 | 4,773,182 | 0.00 | 1.26 | 2.24 | 3.42 | 2.29 | 4.53 | 2.21 | 2.23 |
| TAI111-3 | 500 × 20 | 4,037,988 | 0.00 | 2.20 | 3.20 | 4.29 | 3.24 | 5.45 | 3.20 | 3.17 |
| TAI111-4 | 500 × 20 | 6,817,377 | 0.00 | 1.52 | 2.55 | 3.63 | 2.54 | 4.71 | 2.55 | 2.54 |
| TAI111-5 | 500 × 20 | 6,535,252 | 0.00 | 1.92 | 2.97 | 4.03 | 2.97 | 5.30 | 2.93 | 2.95 |
| TAI111-6 | 500 × 20 | 6,092,434 | 0.00 | 0.56 | 1.65 | 2.70 | 1.66 | 4.02 | 1.60 | 1.62 |
| TAI111-7 | 500 × 20 | 9,449,083 | 0.00 | 0.21 | 1.23 | 2.28 | 1.25 | 3.47 | 1.22 | 1.23 |

Appendix C

This appendix contains the results of the post hoc analyses.
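For reference, pairwise Tukey HSD comparisons like those in Tables A3 and A4 can be reproduced with statsmodels. The sketch below is illustrative only; the data frame layout (columns algorithm, group, and pg) and the file name are our assumptions, not the authors' artifacts.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout: one row per run, with columns
# "algorithm" (e.g., "SAMT"), "group" (2 or 3) and "pg".
df = pd.read_csv("pg_results.csv")
group2 = df[df["group"] == 2]
result = pairwise_tukeyhsd(endog=group2["pg"],
                           groups=group2["algorithm"], alpha=0.05)
print(result.summary())  # mean difference, CI bounds, adjusted p per pair
```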
Table A3. Tukey’s HSD results for algorithm × group for Group 2 considering PG.

| Algorithm | Group | Algorithm | Group | Difference | Lower | Upper | P Adj |
|---|---|---|---|---|---|---|---|
| PSOV | 2 | PSOI | 2 | −0.0168 | −0.3354 | 0.3018 | 1.0000 |
| SA | 2 | PSOI | 2 | −0.0831 | −0.4017 | 0.2355 | 1.0000 |
| SAMT | 2 | PSOI | 2 | −0.1462 | −0.4648 | 0.1724 | 0.9935 |
| TSEI | 2 | PSOI | 2 | 0.1173 | −0.2013 | 0.4359 | 0.9998 |
| TSES | 2 | PSOI | 2 | 0.2931 | −0.0254 | 0.6117 | 0.1221 |
| TSRI | 2 | PSOI | 2 | 0.1148 | −0.2038 | 0.4334 | 0.9998 |
| TSRS | 2 | PSOI | 2 | 0.4113 | 0.0927 | 0.7299 | 0.0007 |
| SA | 2 | PSOV | 2 | −0.0663 | −0.3849 | 0.2523 | 1.0000 |
| SAMT | 2 | PSOV | 2 | −0.1294 | −0.4480 | 0.1892 | 0.9989 |
| TSEI | 2 | PSOV | 2 | 0.1342 | −0.1844 | 0.4527 | 0.9980 |
| TSES | 2 | PSOV | 2 | 0.3100 | −0.0086 | 0.6286 | 0.0687 |
| TSRI | 2 | PSOV | 2 | 0.1317 | −0.1869 | 0.4503 | 0.9985 |
| TSRS | 2 | PSOV | 2 | 0.4281 | 0.1095 | 0.7467 | 0.0003 |
| SAMT | 2 | SA | 2 | −0.0631 | −0.3817 | 0.2555 | 1.0000 |
| TSEI | 2 | SA | 2 | 0.2005 | −0.1181 | 0.5190 | 0.8193 |
| TSES | 2 | SA | 2 | 0.3763 | 0.0577 | 0.6949 | 0.0042 |
| TSRI | 2 | SA | 2 | 0.1980 | −0.1206 | 0.5166 | 0.8358 |
| TSRS | 2 | SA | 2 | 0.4944 | 0.1758 | 0.8130 | 0.0000 |
| TSEI | 2 | SAMT | 2 | 0.2635 | −0.0551 | 0.5821 | 0.2864 |
| TSES | 2 | SAMT | 2 | 0.4393 | 0.1207 | 0.7579 | 0.0001 |
| TSRI | 2 | SAMT | 2 | 0.2610 | −0.0576 | 0.5796 | 0.3047 |
| TSRS | 2 | SAMT | 2 | 0.5575 | 0.2389 | 0.8761 | 0.0000 |
| TSES | 2 | TSEI | 2 | 0.1758 | −0.1428 | 0.4944 | 0.9434 |
| TSRI | 2 | TSEI | 2 | −0.0025 | −0.3211 | 0.3161 | 1.0000 |
| TSRS | 2 | TSEI | 2 | 0.2939 | −0.0247 | 0.6125 | 0.1190 |
| TSRI | 2 | TSES | 2 | −0.1783 | −0.4969 | 0.1403 | 0.9349 |
| TSRS | 2 | TSES | 2 | 0.1181 | −0.2005 | 0.4367 | 0.9997 |
| TSRS | 2 | TSRI | 2 | 0.2964 | −0.0222 | 0.6150 | 0.1097 |
Table A4. Tukey’s HSD results for algorithm × group for Group 3 considering PG.

| Algorithm | Group | Algorithm | Group | Difference | Lower | Upper | P Adj |
|---|---|---|---|---|---|---|---|
| PSOV | 3 | PSOI | 3 | −0.0644 | −0.5334 | 0.4045 | 1.0000 |
| SA | 3 | PSOI | 3 | −0.5441 | −1.0130 | −0.0751 | 0.0057 |
| SAMT | 3 | PSOI | 3 | −1.4248 | −1.8937 | −0.9558 | 0.0000 |
| TSEI | 3 | PSOI | 3 | 0.5017 | 0.0327 | 0.9706 | 0.0206 |
| TSES | 3 | PSOI | 3 | 0.9945 | 0.5255 | 1.4634 | 0.0000 |
| TSRI | 3 | PSOI | 3 | 0.4557 | −0.0133 | 0.9247 | 0.0697 |
| TSRS | 3 | PSOI | 3 | 1.6113 | 1.1424 | 2.0803 | 0.0000 |
| SA | 3 | PSOV | 3 | −0.4796 | −0.9486 | −0.0107 | 0.0378 |
| SAMT | 3 | PSOV | 3 | −1.3603 | −1.8293 | −0.8914 | 0.0000 |
| TSEI | 3 | PSOV | 3 | 0.5661 | 0.0971 | 1.0351 | 0.0028 |
| TSES | 3 | PSOV | 3 | 1.0589 | 0.5899 | 1.5278 | 0.0000 |
| TSRI | 3 | PSOV | 3 | 0.5201 | 0.0512 | 0.9891 | 0.0120 |
| TSRS | 3 | PSOV | 3 | 1.6757 | 1.2068 | 2.1447 | 0.0000 |
| SAMT | 3 | SA | 3 | −0.8807 | −1.3496 | −0.4117 | 0.0000 |
| TSEI | 3 | SA | 3 | 1.0457 | 0.5768 | 1.5147 | 0.0000 |
| TSES | 3 | SA | 3 | 1.5385 | 1.0696 | 2.0075 | 0.0000 |
| TSRI | 3 | SA | 3 | 0.9998 | 0.5308 | 1.4687 | 0.0000 |
| TSRS | 3 | SA | 3 | 2.1554 | 1.6864 | 2.6244 | 0.0000 |
| TSEI | 3 | SAMT | 3 | 1.9264 | 1.4575 | 2.3954 | 0.0000 |
| TSES | 3 | SAMT | 3 | 2.4192 | 1.9502 | 2.8882 | 0.0000 |
| TSRI | 3 | SAMT | 3 | 1.8805 | 1.4115 | 2.3494 | 0.0000 |
| TSRS | 3 | SAMT | 3 | 3.0361 | 2.5671 | 3.5050 | 0.0000 |
| TSES | 3 | TSEI | 3 | 0.4928 | 0.0238 | 0.9618 | 0.0264 |
| TSRI | 3 | TSEI | 3 | −0.0460 | −0.5149 | 0.4230 | 1.0000 |
| TSRS | 3 | TSEI | 3 | 1.1097 | 0.6407 | 1.5786 | 0.0000 |
| TSRI | 3 | TSES | 3 | −0.5388 | −1.0077 | −0.0698 | 0.0067 |
| TSRS | 3 | TSES | 3 | 0.6169 | 0.1479 | 1.0858 | 0.0005 |
| TSRS | 3 | TSRI | 3 | 1.1556 | 0.6867 | 1.6246 | 0.0000 |
Table A5. Scheffe method’s significant results for algorithm × group considering PG.

| Algorithm | Group | Algorithm | Group | Difference | Lower | Upper | P Adj |
|---|---|---|---|---|---|---|---|
| TSRS | 2 | SAMT | 2 | 0.5575 | 0.0377 | 1.0772 | 0.0139 |
| SAMT | 3 | PSOI | 3 | −1.4248 | −2.1898 | −0.6597 | 0.0000 |
| TSES | 3 | PSOI | 3 | 0.9945 | 0.2294 | 1.7595 | 0.0001 |
| TSRS | 3 | PSOI | 3 | 1.6113 | 0.8463 | 2.3763 | 0.0000 |
| SAMT | 3 | PSOV | 3 | −1.3603 | −2.1253 | −0.5953 | 0.0000 |
| TSES | 3 | PSOV | 3 | 1.0589 | 0.2939 | 1.8239 | 0.0000 |
| TSRS | 3 | PSOV | 3 | 1.6757 | 0.9107 | 2.4408 | 0.0000 |
| SAMT | 3 | SA | 3 | −0.8807 | −1.6457 | −0.1157 | 0.0026 |
| TSEI | 3 | SA | 3 | 1.0457 | 0.2807 | 1.8107 | 0.0000 |
| TSES | 3 | SA | 3 | 1.5385 | 0.7735 | 2.3035 | 0.0000 |
| TSRI | 3 | SA | 3 | 0.9998 | 0.2348 | 1.7648 | 0.0000 |
| TSRS | 3 | SA | 3 | 2.1554 | 1.3904 | 2.9204 | 0.0000 |
| TSEI | 3 | SAMT | 3 | 1.9264 | 1.1614 | 2.6914 | 0.0000 |
| TSES | 3 | SAMT | 3 | 2.4192 | 1.6542 | 3.1842 | 0.0000 |
| TSRI | 3 | SAMT | 3 | 1.8805 | 1.1154 | 2.6455 | 0.0000 |
| TSRS | 3 | SAMT | 3 | 3.0361 | 2.2711 | 3.8011 | 0.0000 |
| TSRS | 3 | TSEI | 3 | 1.1097 | 0.3446 | 1.8747 | 0.0000 |
| TSRS | 3 | TSRI | 3 | 1.1556 | 0.3906 | 1.9206 | 0.0000 |
Table A6. Tukey’s HSD results for algorithm × job considering time to find the best OFV.

| Algorithm | Jobs | Algorithm | Jobs | Difference | Lower | Upper | P Adj |
|---|---|---|---|---|---|---|---|
| SAMT | 50 | PSOI | 50 | −54.4286 | −89.8820 | −18.9752 | 0.0000 |
| TSRI | 50 | PSOI | 50 | −49.5000 | −84.9534 | −14.0466 | 0.0000 |
| TSRS | 50 | PSOI | 50 | −40.2500 | −75.7034 | −4.7966 | 0.0040 |
| SAMT | 50 | PSOV | 50 | −54.7500 | −90.2034 | −19.2966 | 0.0000 |
| TSRI | 50 | PSOV | 50 | −49.8214 | −85.2748 | −14.3680 | 0.0000 |
| TSRS | 50 | PSOV | 50 | −40.5714 | −76.0248 | −5.1180 | 0.0033 |
| SAMT | 50 | SA | 50 | −40.6429 | −76.0963 | −5.1895 | 0.0032 |
| TSRI | 50 | SA | 50 | −35.7143 | −71.1677 | −0.2609 | 0.0443 |
| SAMT | 75 | PSOI | 75 | −80.0000 | −150.9068 | −9.0932 | 0.0047 |
| SAMT | 75 | PSOV | 75 | −77.7143 | −148.6211 | −6.8075 | 0.0089 |
| SAMT | 75 | SA | 75 | −72.0000 | −142.9068 | −1.0932 | 0.0386 |
| SAMT | 100 | PSOI | 100 | −89.9048 | −130.8428 | −48.9667 | 0.0000 |
| TSEI | 100 | PSOI | 100 | −55.5714 | −96.5095 | −14.6334 | 0.0000 |
| TSES | 100 | PSOI | 100 | −49.0000 | −89.9381 | −8.0619 | 0.0011 |
| TSRI | 100 | PSOI | 100 | −50.0476 | −90.9857 | −9.1095 | 0.0006 |
| SAMT | 100 | PSOV | 100 | −90.5238 | −131.4619 | −49.5857 | 0.0000 |
| TSEI | 100 | PSOV | 100 | −56.1905 | −97.1285 | −15.2524 | 0.0000 |
| TSES | 100 | PSOV | 100 | −49.6190 | −90.5571 | −8.6810 | 0.0008 |
| TSRI | 100 | PSOV | 100 | −50.6667 | −91.6047 | −9.7286 | 0.0004 |
| SAMT | 100 | SA | 100 | −75.5238 | −116.4619 | −34.5857 | 0.0000 |
| TSEI | 100 | SA | 100 | −41.1905 | −82.1285 | −0.2524 | 0.0451 |
| TSRS | 100 | SAMT | 100 | 61.0000 | 20.0619 | 101.9381 | 0.0000 |
| SAMT | 200 | PSOI | 200 | −150.0000 | −200.1387 | −99.8613 | 0.0000 |
| SAMT | 200 | PSOV | 200 | −143.4286 | −193.5673 | −93.2899 | 0.0000 |
| SAMT | 200 | SA | 200 | −134.9286 | −185.0673 | −84.7899 | 0.0000 |
| TSEI | 200 | SAMT | 200 | 141.7857 | 91.6470 | 191.9244 | 0.0000 |
| TSES | 200 | SAMT | 200 | 128.4286 | 78.2899 | 178.5673 | 0.0000 |
| TSRI | 200 | SAMT | 200 | 129.7857 | 79.6470 | 179.9244 | 0.0000 |
| TSRS | 200 | SAMT | 200 | 114.0000 | 63.8613 | 164.1387 | 0.0000 |
| PSOV | 500 | PSOI | 500 | 138.4286 | 67.5218 | 209.3354 | 0.0000 |
| SA | 500 | PSOI | 500 | 133.4286 | 62.5218 | 204.3354 | 0.0000 |
| SAMT | 500 | PSOI | 500 | −222.7143 | −293.6211 | −151.8075 | 0.0000 |
| TSEI | 500 | PSOI | 500 | 127.4286 | 56.5218 | 198.3354 | 0.0000 |
| TSES | 500 | PSOI | 500 | 115.7143 | 44.8075 | 186.6211 | 0.0000 |
| TSRI | 500 | PSOI | 500 | 115.0000 | 44.0932 | 185.9068 | 0.0000 |
| TSRS | 500 | PSOI | 500 | 140.5714 | 69.6646 | 211.4782 | 0.0000 |
| SAMT | 500 | PSOV | 500 | −361.1429 | −432.0497 | −290.2360 | 0.0000 |
| SAMT | 500 | SA | 500 | −356.1429 | −427.0497 | −285.2360 | 0.0000 |
| TSEI | 500 | SAMT | 500 | 350.1429 | 279.2360 | 421.0497 | 0.0000 |
| TSES | 500 | SAMT | 500 | 338.4286 | 267.5218 | 409.3354 | 0.0000 |
| TSRI | 500 | SAMT | 500 | 337.7143 | 266.8075 | 408.6211 | 0.0000 |
| TSRS | 500 | SAMT | 500 | 363.2857 | 292.3789 | 434.1925 | 0.0000 |
Table A7. Scheffe method’s significant results for algorithm × job considering time to find the best OFV.

| Algorithm | Jobs | Algorithm | Jobs | Difference | Lower | Upper | P Adj |
|---|---|---|---|---|---|---|---|
| SAMT | 200 | PSOI | 200 | −150.0000 | −284.9514 | −15.0486 | 0.0005 |
| SAMT | 200 | PSOV | 200 | −143.4286 | −278.3799 | −8.4772 | 0.0048 |
| SAMT | 200 | SA | 200 | −134.9286 | −269.8799 | 0.0228 | 0.0503 |
| TSEI | 200 | SAMT | 200 | 141.7857 | 6.8343 | 276.7371 | 0.0080 |
| SAMT | 500 | PSOI | 500 | −222.7143 | −413.5643 | −31.8642 | 0.0000 |
| SAMT | 500 | PSOV | 500 | −361.1429 | −551.9929 | −170.2928 | 0.0000 |
| SAMT | 500 | SA | 500 | −356.1429 | −546.9929 | −165.2928 | 0.0000 |
| TSEI | 500 | SAMT | 500 | 350.1429 | 159.2928 | 540.9929 | 0.0000 |
| TSES | 500 | SAMT | 500 | 338.4286 | 147.5785 | 529.2786 | 0.0000 |
| TSRI | 500 | SAMT | 500 | 337.7143 | 146.8642 | 528.5643 | 0.0000 |
| TSRS | 500 | SAMT | 500 | 363.2857 | 172.4357 | 554.1358 | 0.0000 |

References

1. Pinedo, M. Scheduling; Springer: New York, NY, USA, 2012; Volume 29.
2. Miyata, H.H.; Nagano, M.S.; Gupta, J.N. Integrating preventive maintenance activities to the no-wait flow shop scheduling problem with dependent-sequence setup times and makespan minimization. Comput. Ind. Eng. 2019, 135, 79–104.
3. Emmons, H.; Vairaktarakis, G. Flow Shop Scheduling: Theoretical Results, Algorithms, and Applications; Springer Science & Business Media: New York, NY, USA, 2012; Volume 182.
4. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Kan, A.H.G.R. Optimization and approximation in deterministic sequencing and scheduling: A survey. Ann. Discret. Math. 1979, 5, 287–326.
5. Giaro, K. NP-hardness of compact scheduling in simplified open and flow shops. Eur. J. Oper. Res. 2001, 130, 90–98.
6. Allahverdi, A. A survey of scheduling problems with no-wait in process. Eur. J. Oper. Res. 2016, 255, 665–686.
7. Aldowaisan, T.; Allahverdi, A. Minimizing total tardiness in no-wait flowshops. Found. Comput. Decis. Sci. 2012, 37, 149–162.
8. Liu, G.; Song, S.; Wu, C. Some heuristics for no-wait flowshops with total tardiness criterion. Comput. Oper. Res. 2013, 40, 521–525.
9. Ding, J.; Song, S.; Zhang, R.; Gupta, J.N.; Wu, C. Accelerated methods for total tardiness minimisation in no-wait flowshops. Int. J. Prod. Res. 2015, 53, 1002–1018.
10. Javadi, B.; Saidi-Mehrabad, M.; Haji, A.; Mahdavi, I.; Jolai, F.; Mahdavi-Amiri, N. No-wait flow shop scheduling using fuzzy multi-objective linear programming. J. Frankl. Inst. 2008, 345, 452–467.
11. Tavakkoli-Moghaddam, R.; Rahimi-Vahed, A.R.; Mirzaei, A.H. Solving a multi-objective no-wait flow shop scheduling problem with an immune algorithm. Int. J. Adv. Manuf. Technol. 2008, 36, 969–981.
12. Abdollahpour, S.; Rezaian, J. Two new meta-heuristics for no-wait flexible flow shop scheduling problem with capacitated machines, mixed make-to-order and make-to-stock policy. Soft Comput. 2017, 21, 3147–3165.
13. Gao, F.; Liu, M.; Wang, J.-J.; Lu, Y.-Y. No-wait two-machine permutation flow shop scheduling problem with learning effect, common due date and controllable job processing times. Int. J. Prod. Res. 2018, 56, 2361–2369.
14. Li, Z.; Zhong, R.Y.; Barenji, A.V.; Liu, J.J.; Yu, C.X.; Huang, G.Q. Bi-objective hybrid flow shop scheduling with common due date. Oper. Res. 2021, 21, 1153–1178.
15. Lv, D.-Y.; Wang, J.-B. Study on resource-dependent no-wait flow shop scheduling with different due-window assignment and learning effects. Asia-Pac. J. Oper. Res. 2021, 38, 2150008.
16. Allali, K.; Aqil, S.; Belabid, J. Distributed no-wait flow shop problem with sequence dependent setup time: Optimization of makespan and maximum tardiness. Simul. Model. Pract. Theory 2022, 116, 102455.
17. Huang, R.-H.; Yang, C.-L.; Liu, S.-C. No-wait flexible flow shop scheduling with due windows. Math. Probl. Eng. 2015, 2015, 456719.
18. Arabameri, S.; Salmasi, N. Minimization of weighted earliness and tardiness for no-wait sequence-dependent setup times flowshop scheduling problem. Comput. Ind. Eng. 2013, 64, 902–916.
19. Schaller, J.; Valente, J.M. Minimizing total earliness and tardiness in a no-wait flow shop. Int. J. Prod. Econ. 2020, 224, 107542.
20. Schaller, J.; Valente, J.M.S. Scheduling in a no-wait flow shop to minimise total earliness and tardiness with additional idle time allowed. Int. J. Prod. Res. 2022, 60, 5488–5504.
21. Guevara-Guevara, A.F.; Gómez-Fuentes, V.; Posos-Rodríguez, L.J.; Remolina-Gómez, N.; González-Neira, E.M. Earliness/tardiness minimization in a no-wait flow shop with sequence-dependent setup times. J. Proj. Manag. 2022, 7, 177–190.
22. Zhu, N.; Zhao, F.; Wang, L.; Ding, R.; Xu, T.; Jonrinaldi. A discrete learning fruit fly algorithm based on knowledge for the distributed no-wait flow shop scheduling with due windows. Expert Syst. Appl. 2022, 198, 116921.
23. Qian, B.; Zhang, Z.-Q.; Hu, R.; Jin, H.-P.; Yang, J.-B. A matrix-cube-based estimation of distribution algorithm for no-wait flow-shop scheduling with sequence-dependent setup times and release times. IEEE Trans. Syst. Man Cybern. Syst. 2022, 1–12.
24. Ingber, L. Simulated annealing: Practice versus theory. Math. Comput. Model. 1993, 18, 29–57.
25. Greening, D.R. Simulated Annealing with Errors. Ph.D. Thesis, University of California, Los Angeles, CA, USA, 1995.
26. Szu, H.; Hartley, R. Fast simulated annealing. Phys. Lett. A 1987, 122, 157–162.
27. Ingber, L. Very fast simulated re-annealing. Math. Comput. Model. 1989, 12, 967–973.
28. Malek, M.; Guruswamy, M.; Pandya, M.; Owens, H. Serial and parallel simulated annealing and tabu search algorithms for the traveling salesman problem. Ann. Oper. Res. 1989, 21, 59–84.
29. Yao, X. A new simulated annealing algorithm. Int. J. Comput. Math. 1995, 56, 161–168.
30. Roussel-Ragot, P.; Dreyfus, G. A problem independent parallel implementation of simulated annealing: Models and experiments. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 1990, 9, 827–835.
31. Mahfoud, S.W.; Goldberg, D.E. Parallel recombinative simulated annealing: A genetic algorithm. Parallel Comput. 1995, 21, 1–28.
32. Lee, S.-Y.; Lee, K.G. Synchronous and asynchronous parallel simulated annealing with multiple Markov chains. IEEE Trans. Parallel Distrib. Syst. 1996, 7, 993–1008.
33. Wodecki, M.; Bożejko, W. Solving the flow shop problem by parallel simulated annealing. In International Conference on Parallel Processing and Applied Mathematics; Springer: Berlin/Heidelberg, Germany, 2001; pp. 236–244.
34. Bożejko, W.; Wodecki, M. The new concepts in parallel simulated annealing method. In International Conference on Artificial Intelligence and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2004; pp. 853–859.
35. Czapiński, M. Parallel simulated annealing with genetic enhancement for flowshop problem with Csum. Comput. Ind. Eng. 2010, 59, 778–785.
36. Ferreiro, A.M.; García, J.A.; López-Salas, J.G.; Vázquez, C. An efficient implementation of parallel simulated annealing algorithm in GPUs. J. Glob. Optim. 2013, 57, 863–890.
37. Sonuc, E.; Sen, B.; Bayir, S. A parallel approach for solving 0/1 knapsack problem using simulated annealing algorithm on CUDA platform. Int. J. Comput. Sci. Inf. Secur. 2016, 14, 1096.
38. Richie, J.E.; Ababei, C. Optimization of patch antennas via multithreaded simulated annealing based design exploration. J. Comput. Des. Eng. 2017, 4, 249–255.
39. Turan, H.H.; Kosanoglu, F.; Atmış, M. A multi-skilled workforce optimisation in maintenance logistics networks by multi-thread simulated annealing algorithms. Int. J. Prod. Res. 2021, 59, 2624–2646.
40. Vousden, M.; Bragg, G.M.; Brown, A.D. Asynchronous simulated annealing on the placement problem: A beneficial race condition. J. Parallel Distrib. Comput. 2022, 169, 242–251.
41. Zhou, X.; Ling, M.; Lin, Q.; Tang, S.; Wu, J.; Hu, H. Effectiveness Analysis of Multiple Initial States Simulated Annealing Algorithm, A Case Study on the Molecular Docking Tool AutoDock Vina. Available online: https://ssrn.com/abstract=4120348 (accessed on 19 December 2022).
42. Coll, N.; Fort, M.; Saus, M. Coverage area maximization with parallel simulated annealing. Expert Syst. Appl. 2022, 202, 117185.
43. Yildirim, G. A novel hybrid multi-thread metaheuristic approach for fake news detection in social media. Appl. Intell. 2022, 1–21.
44. Deng, W.; Xu, J.; Song, Y.; Zhao, H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl. Soft Comput. 2021, 100, 106724.
45. Cai, X.; Zhao, H.; Shang, S.; Zhou, Y.; Deng, W.; Chen, H.; Deng, W. An improved quantum-inspired cooperative co-evolution algorithm with muli-strategy and its application. Expert Syst. Appl. 2021, 171, 114629.
46. Valente, J.M.; Alves, R.A. Beam search algorithms for the early/tardy scheduling problem with release dates. J. Manuf. Syst. 2005, 24, 35–46.
47. Röck, H. The three-machine no-wait flow shop is NP-complete. J. ACM 1984, 31, 336–345.
48. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
49. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1987; pp. 7–15.
50. Nikolaev, A.G.; Jacobson, S.H. Simulated annealing. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2010; pp. 1–39.
51. Bagherlou, H.; Ghaffari, A. A routing protocol for vehicular ad hoc networks using simulated annealing algorithm and neural networks. J. Supercomput. 2018, 74, 2528–2552.
52. Carlier, J. Ordonnancements à contraintes disjonctives. RAIRO Oper. Res. 1978, 12, 333–350.
53. Reeves, C.R. A genetic algorithm for flowshop sequencing. Comput. Oper. Res. 1995, 22, 5–13.
54. Amar, A.D.; Gupta, J.N.D. Simulated versus real life data in testing the efficiency of scheduling algorithms. IIE Trans. 1986, 18, 16–25.
55. Rinnooy Kan, A.H. Machine Scheduling Problems: Classification, Complexity, and Computations. Ph.D. Thesis, University of Amsterdam, Amsterdam, The Netherlands, 1976.
56. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
57. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973.
58. Hutter, F.; Hoos, H.H.; Leyton-Brown, K.; Stuetzle, T. ParamILS: An automatic algorithm configuration framework. J. Artif. Intell. Res. 2009, 36, 267–306.
59. Lee, S.; Lee, D.K. What is the proper way to apply the multiple comparison test? Korean J. Anesthesiol. 2018, 71, 353–360.
60. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549.
61. Glover, F.; Taillard, E.; de Werra, D. A user’s guide to tabu search. Ann. Oper. Res. 1993, 41, 1–28.
62. Glover, F.; Laguna, M. Tabu search. In Handbook of Combinatorial Optimization; Springer: Boston, MA, USA, 1998; pp. 2093–2229.
63. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
64. Li, M.; Du, W.; Nian, F. An adaptive particle swarm optimization algorithm based on directed weighted complex network. Math. Probl. Eng. 2014, 2014, 434972.
Figure 1. Taxonomy of parallel simulated annealing methodologies [25].
Figure 2. Insert operation of Job 2 into Position 4.
Figure 3. Swap operation of Job 2 and Job 4.
Figure 4. Possible jumps by sub-threads on OFV curve.
Figure 5. Average percentage gap vs. no. of jobs.
Figure 6. Comparison of average runtimes to find the best solution.
Figure 7. Average PG (a) and runtime to find the best solution (b). Comparison of SA and SAMT for significantly different cases.
Table 1. Runtime limits considering the size of the problem.

| Problem Size | Runtime Limit (in s) |
|---|---|
| Small | n² |
| Medium | n² |
| Large | n³ |
Table 2. The settings of the SAMT algorithm.

| Parameter | Value |
|---|---|
| T₀ᵐ | Number of jobs |
| iₘ | Following the total assigned runtime |
| nₘₐₓ | 10 |
| ProbOperation | 0.50 |
| T₀ᶠᵗ | T₀ᵐ/5 |
| T₀ˢᵗ | T₀ᵐ/3 |
| Fast Thread Runtime | Master Thread Runtime/150 |
| Slow Thread Runtime | Master Thread Runtime/60 |
| c | 0.25 |
Table 3. The settings of PSO algorithms.

| Variable/Parameter | Value/Formula |
|---|---|
| c₁ | 2.05 |
| c₂ | 2.05 |
| w (PSOI) | 0.9 − 0.5 × (t_cur/t_max) |
| w (PSOV) | 1.0 |
| p_size (PSOI) | 30 |
| p_size (PSOV) | 20 |
| Number of local search iterations | 10 × n |
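As a reading aid for Table 3, the sketch below shows how these constants typically enter the classical PSO velocity update of Kennedy and Eberhart [63], with the inertia weight decreasing linearly for PSOI and fixed at 1.0 for PSOV. The update form is the textbook one; all identifiers are ours.

```python
import random

def pso_velocity(v, x, pbest, gbest, t_cur, t_max,
                 c1=2.05, c2=2.05, linear_inertia=True):
    # Inertia weight per Table 3: 0.9 - 0.5 * (t_cur/t_max) for PSOI,
    # constant 1.0 for PSOV.
    w = 0.9 - 0.5 * (t_cur / t_max) if linear_inertia else 1.0
    r1, r2 = random.random(), random.random()
    # Classical velocity update: inertia + cognitive + social terms.
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```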
Table 4. Due date creation scheme.

| Index | TF | R |
|---|---|---|
| 1 | 0.20 | 0.20 |
| 2 | 0.20 | 0.50 |
| 3 | 0.20 | 0.80 |
| 4 | 0.50 | 0.20 |
| 5 | 0.50 | 0.50 |
| 6 | 0.50 | 0.80 |
| 7 | 0.80 | 0.20 |
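The TF (tardiness factor) and R (due date range) pairs of Table 4 are consistent with the due-date generation scheme common in the scheduling literature, in which each due date is drawn uniformly from an interval around the expected schedule length. The following is a minimal sketch under that assumption; the makespan estimate P and all names are ours, not necessarily the authors' exact procedure.

```python
import random

def generate_due_dates(processing_times, tf, r, seed=0):
    """Draw one due date per job from U[P(1-TF-R/2), P(1-TF+R/2)],
    where P is a rough estimate of the schedule length."""
    rng = random.Random(seed)
    # Crude estimate: total processing load over all jobs and machines.
    p = sum(sum(job_ops) for job_ops in processing_times)
    lo = p * (1.0 - tf - r / 2.0)
    hi = p * (1.0 - tf + r / 2.0)
    return [rng.uniform(lo, hi) for _ in processing_times]

# e.g., index 1 of Table 4: generate_due_dates(times, tf=0.20, r=0.20)
```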
Table 5. ANOVA table for PG.

| Level | DoF | Sum of Sq. | Mean Sq. | F Value | Prob. (>F) |
|---|---|---|---|---|---|
| Algorithm | 7 | 115.8 | 16.54 | 47.57 | 0.0000 |
| Group | 2 | 581.3 | 290.65 | 835.75 | 0.0000 |
| Algorithm × group | 14 | 162.5 | 11.61 | 33.37 | 0.0000 |
| Residual | 1488 | 517.5 | 0.35 | | |
Table 6. ANOVA table for runtime.

| Level | DoF | Sum of Sq. | Mean Sq. | F Value | Prob. (>F) |
|---|---|---|---|---|---|
| Algorithm | 7 | 277,595 | 39,656 | 42.9 | 0.0000 |
| Job | 13 | 123,270,879 | 9,482,375 | 10,258.8 | 0.0000 |
| Algorithm × job | 91 | 1,017,397 | 11,180 | 12.1 | 0.0000 |
| Residual | 1400 | 1,294,042 | 924 | | |
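Two-way ANOVA tables of this form can be produced with statsmodels; below is a minimal sketch for the algorithm × job analysis of Table 6. The column names and the input file are hypothetical, not the authors' artifacts.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical layout: one row per run, with columns
# "algorithm", "jobs" and "runtime".
df = pd.read_csv("runtime_results.csv")
model = smf.ols("runtime ~ C(algorithm) * C(jobs)", data=df).fit()
print(anova_lm(model, typ=2))  # DoF, sums of squares, F and p values
```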
