A Robust Two-Machine Flow-Shop Scheduling Model with Scenario-Dependent Processing Times

In many scheduling studies, researchers treat the processing times of jobs as constant numbers. This assumption is sometimes at odds with practical manufacturing processes because of several sources of uncertainty arising in real-life situations, such as changing working environments, machine breakdowns, and tool quality variations and unavailability. In light of the scenario-dependent processing times observed in many applications, this paper incorporates scenario-dependent processing times into a two-machine flow-shop environment with the objective of minimizing the total completion time. The problem under consideration has not been explored before. To solve it, we first derive a lower bound and two optimality properties to enhance the search efficiency of a branch-and-bound method. We then propose 12 simple heuristics and their counterparts improved by a pairwise interchange method. Furthermore, we use the 12 simple heuristics as initial seeds to design 12 variants of a cloud theory-based simulated annealing (CSA) algorithm. Finally, we conduct simulations and report the performances of the proposed branch-and-bound method, the 12 heuristics, and the 12 variants of the CSA algorithm.


Introduction
In classical scheduling models, the processing times of jobs are often assumed to be constant numbers, but in practice many sources of uncertainty arise, such as changes in the working environment, machine breakdowns, tool quality variations and unavailability, unstable worker performance, and complex external factors. Moreover, scenario-dependent processing times were not addressed in the scheduling literature until recently [1,2]. Motivated by these sources of uncertainty, in this study we introduce scenario-dependent processing times into a two-machine flow-shop setting and seek a robust schedule that minimizes the maximum total completion time over all scenarios.
This problem is NP-hard because the same problem without scenario-dependent processing times is already NP-hard (see Gonzalez and Sahni [3]).
In the literature, processing times are often estimated from statistical data, and the variation in these data can be large, so the estimated underlying distributions may be inaccurate. Researchers therefore consider the worst-case performance measure to be more important than the average error performance. For the case in which any one of two probable scenarios may occur, Kouvelis and Yu [4] suggested adopting a robust approach that guards against the worst case. For details on different scenarios of job processing times, readers may refer to Aissi et al. [5], Aloulou and Della Croce [6], De Farias et al. [7], Kasperski and Zielinski [8], Yang and Yu [9], and so on.
Regarding studies on the total completion time in the two-machine flow-shop scheduling problem without scenario-dependent processing times, Kohler and Steiglitz [10] proposed some heuristics for finding near-optimal solutions. Following the NP-hardness result of Gonzalez and Sahni [3], Cadambi and Sathe [11], Pan and Wu [12], Wang et al. [13], and Della Croce et al. [14] in turn utilized branch-and-bound methods incorporating several dominance properties to obtain exact solutions. In two related follow-up studies, Hoogeveen and Kawaguchi [15] presented complexity and approximation results for special cases of this problem, while Della Croce et al. [16] improved the lower bound based on Lagrangian relaxation. For flow shops with more than two machines, readers may refer to Gupta et al. [17], Xiang et al. [18], Allahverdi and Aldowaisan [19], Chung et al. [20], Rajendran and Ziegler [21], Zhang et al. [22], Tasgetiren et al. [23], Gao et al. [24], Shivakumar and Amudha [25], Dong et al. [26,27], and Marichelvam et al. [28].
Meanwhile, simulated annealing (SA), proposed by Kirkpatrick et al. [29], has been widely and successfully used to solve many discrete combinatorial problems. However, the method has some drawbacks. First, in implementations the temperature is discrete and unchangeable during the annealing course, which does not match the continuous decrease in temperature of actual physical annealing processes. Second, at high temperatures SA readily accepts deteriorating solutions and does not converge quickly. Third, at low temperatures it is hard to escape from local minima, and the search accuracy is low. To overcome these disadvantages, Lv et al. [30] gave a theoretical discussion of a cloud theory-based simulated annealing (CSA) algorithm and showed that CSA is superior to SA in terms of convergence speed, search ability, and robustness. Following that, Torabzadeh and Zandieh [31] applied CSA to two-stage assembly flow-shop problems in an m-machine environment to minimize a weighted sum of makespan and mean completion time. They further compared CSA and SA and showed that CSA performed better than SA. For more applications of the CSA algorithm, readers may refer to Geng et al. [32] for an SVR-based load forecasting model, to Pourvaziri and Pierreval [33] for a dynamic facility layout problem based on an open queuing network, and to Wu et al. [34] for a two-stage assembly scheduling problem with learning consideration.
In light of these observations, in this study we consider a robust two-machine flow-shop scheduling problem with scenario-dependent processing times, with the goal of minimizing the total completion time of jobs. In the same problem setting with constant job processing times, the literature shows that this problem is NP-hard under the adopted objective. The main contributions of this study are as follows. (1) We address a robust two-machine flow-shop scheduling problem with scenario-dependent processing times. (2) We derive a lower bound and two new dominance properties to be embedded in a branch-and-bound method for finding optimal solutions efficiently. (3) We propose 12 simple heuristics and their counterparts improved by a pairwise interchange method. (4) We design 12 variants of a CSA algorithm. The rest of this study is organized as follows. In Section 2, we introduce the notation and problem formulation. In Section 3, we derive a lower bound and two dominance properties to be used in a branch-and-bound method. In Section 4, we propose 12 simple heuristics, their counterparts improved by a pairwise interchange process, and 12 variants of the CSA algorithm. In Section 5, we tune the parameters of the CSA algorithm, and in Section 6, we conduct simulation studies to evaluate the performances of all the proposed methods.

Notations and Problem Description
The notation adopted in this study is as follows:
n: number of jobs
J_1, J_2, ..., J_n: the n jobs
{M_1, M_2}: the two machines
σ_1, σ_2: job schedules
δ, δ′: a determined partial schedule of k jobs and an undetermined partial schedule of the remaining n − k jobs
[j]: the job scheduled in position j
t_1^(s), t_2^(s): the starting times on M_1 and M_2 under scenario s
C_ij^(s)(σ): the completion time of job J_j on machine M_i under scenario s for the job schedule σ
T_i: the initial temperature in CSA
T_f: the final temperature in CSA
λ: the annealing index with 0 < λ < 1 in CSA
N_r: the number of improvement repetitions in CSA
The studied two-machine flow-shop scheduling problem with two scenarios (or states) is described as follows. There are n jobs, each consisting of two operations with processing times a_j^(s) and b_j^(s), j = 1, 2, ..., n, under scenario s, s = 1, 2, on machines M_1 and M_2, respectively. There are precedence constraints between the operations, i.e., each job is first processed on M_1 and then on M_2. Additionally, no machine can process more than one job at a time, a job cannot be interrupted once its processing starts, and no idle time is allowed on machine M_1. As the value of the objective function is scenario-dependent, the robustness criterion of this study is absolute robustness [4,6]; that is, we seek a schedule achieving min_{σ∈Π} max_{s=1,2} Σ_{j=1}^{n} C_{2[j]}^{(s)}(σ), where Π is the set of all possible permutations of the jobs.
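To make the objective concrete, the following sketch (an illustrative Python snippet of ours, not the authors' code, with hypothetical example data) evaluates the worst-case total completion time of a schedule over the two scenarios:

```python
# a[s][j], b[s][j]: hypothetical processing times of job j on M1, M2 in scenario s

def total_completion_time(sigma, a, b):
    """Sum of M2 completion times for one scenario with times a (M1), b (M2)."""
    c1 = 0  # completion time of the current job on M1
    c2 = 0  # completion time of the current job on M2
    total = 0
    for j in sigma:
        c1 += a[j]                # no idle time is allowed on M1
        c2 = max(c1, c2) + b[j]   # M2 waits for M1 and for the previous job
        total += c2
    return total

def worst_case_objective(sigma, a, b):
    """max over scenarios s = 1, 2 of the total completion time."""
    return max(total_completion_time(sigma, a[s], b[s]) for s in range(2))

# Example with n = 3 jobs; rows index the two scenarios.
a = [[3, 1, 4], [2, 2, 3]]
b = [[2, 5, 1], [4, 1, 2]]
print(worst_case_objective([1, 0, 2], a, b))  # → 23
```

The inner recursion is exactly the completion-time definition used throughout the paper; the robust objective simply takes the larger of the two scenario totals.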

Methodology
Given that the problem without scenario-dependent processing times is already NP-hard (see Gonzalez and Sahni [3]), the branch-and-bound approach is a commonly and widely used tool for obtaining optimal solutions, and deriving a good lower bound for a partial schedule is essential to the branch-and-bound (B&B) method. In what follows, we build a lower bound based on the idea of Ignall and Schrage [35]. Let σ = (δ, δ′) be a schedule in which the order of the first k jobs in the subschedule δ has been determined while the remaining l = n − k jobs in the subschedule δ′ have not. By the definition of completion times, if job J_i from the unscheduled set δ′ is placed in position k + 1, then for scenario s

C_{1i}^{(s)}(σ) = C_{1[k]}^{(s)}(δ) + a_i^{(s)},
C_{2i}^{(s)}(σ) = max{C_{1i}^{(s)}(σ), C_{2[k]}^{(s)}(δ)} + b_i^{(s)},

and the completion times of a job J_j placed in position k + 2 follow from the same recursion. In general, the M_1 completion time of the job in position k + r is at least C_{1[k]}^{(s)}(δ) plus the sum of the r smallest a-values among the unscheduled jobs, and its M_2 completion time is at least that quantity plus its own b-value. Therefore, letting a_{(1)}^{(s)} ≤ a_{(2)}^{(s)} ≤ ... ≤ a_{(l)}^{(s)} denote the sorted a-values of the jobs in δ′, we have, for s = 1, 2,

Σ_{j=k+1}^{n} C_{2[j]}^{(s)}(σ) ≥ l·C_{1[k]}^{(s)}(δ) + Σ_{r=1}^{l} (l − r + 1)·a_{(r)}^{(s)} + Σ_{j∈δ′} b_j^{(s)}.  (4)

On the other hand, the M_2 completion time of the job in position k + r is at least max{C_{2[k]}^{(s)}(δ), C_{1[k]}^{(s)}(δ) + min_{j∈δ′} a_j^{(s)}} plus the sum of the r smallest b-values among the unscheduled jobs. Therefore, with b_{(1)}^{(s)} ≤ ... ≤ b_{(l)}^{(s)} the sorted b-values of the jobs in δ′, we have, for s = 1, 2,

Σ_{j=k+1}^{n} C_{2[j]}^{(s)}(σ) ≥ l·max{C_{2[k]}^{(s)}(δ), C_{1[k]}^{(s)}(δ) + min_{j∈δ′} a_j^{(s)}} + Σ_{r=1}^{l} (l − r + 1)·b_{(r)}^{(s)}.  (7)

According to inequalities (4) and (7), for each scenario s the total completion time of any completion of the partial schedule δ is at least

LB^{(s)} = Σ_{j=1}^{k} C_{2[j]}^{(s)}(δ) + max{right-hand side of (4), right-hand side of (7)},

and therefore we have the following lower bound for the node δ:

LB = max_{s=1,2} LB^{(s)}.

In what follows, we also derive two more properties used in our proposed branch-and-bound method. The first one is based on the idea of Cadambi and Sathe [11]. To explain the details of Property 1, let σ_1 = (δ, i, j, δ′) and σ_2 = (δ, j, i, δ′) denote two schedules in which δ and δ′ are partial sequences.

Proof. To show that σ_1 is no worse than σ_2, it suffices to establish, for s = 1, 2, that

C_{2j}^{(s)}(σ_1) ≤ C_{2i}^{(s)}(σ_2) and C_{2i}^{(s)}(σ_1) + C_{2j}^{(s)}(σ_1) ≤ C_{2j}^{(s)}(σ_2) + C_{2i}^{(s)}(σ_2),

since the first inequality guarantees that no job in δ′ completes later under σ_1, and the second that the contribution of jobs J_i and J_j to the total completion time is no larger. Both inequalities follow by applying the completion-time recursion successively to the two orders of the pair (J_i, J_j) under the assumptions of Property 1. Combining them yields max_{s=1,2} Σ_{j=1}^{n} C_{2[j]}^{(s)}(σ_1) ≤ max_{s=1,2} Σ_{j=1}^{n} C_{2[j]}^{(s)}(σ_2).
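The lower-bound idea based on Ignall and Schrage [35] can be sketched as follows. This is an illustrative Python snippet of ours under that idea, not the paper's implementation; all variable names are our own, and the bound is shown for a single scenario (the overall bound takes the maximum over the two scenarios):

```python
def scenario_lower_bound(c1, c2, sum_c2, rest_a, rest_b):
    """Lower bound on the total completion time for one scenario.
    c1, c2  : completion times of the partial schedule on M1 and M2
    sum_c2  : sum of M2 completion times of the already-scheduled jobs
    rest_a, rest_b : M1 and M2 processing times of the unscheduled jobs
    """
    l = len(rest_a)
    if l == 0:
        return sum_c2
    # Bound 1: M1 is never idle; sequence the a-values in nondecreasing order.
    a_sorted = sorted(rest_a)
    lb1 = sum_c2 + l * c1 + sum((l - i) * a_sorted[i] for i in range(l)) + sum(rest_b)
    # Bound 2: M2 starts no earlier than max(c2, c1 + min a); sort the b-values.
    b_sorted = sorted(rest_b)
    start2 = max(c2, c1 + min(rest_a))
    lb2 = sum_c2 + l * start2 + sum((l - i) * b_sorted[i] for i in range(l))
    return max(lb1, lb2)

# Tiny example: empty partial schedule, two unscheduled jobs.
print(scenario_lower_bound(0, 0, 0, [2, 1], [3, 1]))  # → 8
```

For this tiny instance the bound is tight: sequencing the second job first gives M2 completion times 2 and 6, a total of 8.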

Proof.
This proof can be obtained similarly to the proof of Property 1.
Note that, in general, a sequence σ_1 of jobs is said to be no worse than another sequence σ_2 if max_{s=1,2} Σ_{j=1}^{n} C_{2[j]}^{(s)}(σ_1) ≤ max_{s=1,2} Σ_{j=1}^{n} C_{2[j]}^{(s)}(σ_2). Properties 1 and 2 are two special cases.

Some Heuristics and Some Variants of the CSA
In light of the scenario-dependent processing times, we consider a two-stage method to solve this problem. It is well known that the Johnson rule minimizes the makespan criterion for the two-machine flow shop. Thus, we apply the Johnson rule to several different combinations of the scenario-dependent processing times to find initial seeds. Even though the rule is not optimal for the total completion time criterion, it yields good seeds. Following that, we improve these seeds by a pairwise interchange method and also use them as initial solutions in the CSA variants, respectively.
In the research community, Johnson's rule [36] has been commonly utilized to solve the makespan minimization problem in the two-machine flow-shop setting. However, it does not necessarily yield an optimal schedule for our model when scenario-dependent processing times are present. The main idea of Johnson's rule is the following: "There exists an optimal sequence in which job J_i is scheduled before job J_j if min{a_i, b_j} ≤ min{a_j, b_i} holds." To explore the performance of this rule, 12 simple heuristics and their counterparts improved by a pairwise interchange process are provided below. The details of the steps for the 12 heuristics in stage one are described as follows.
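The rule described above can be stated compactly in code. The following is a short illustrative implementation of ours in Python (not the authors' code); the example data are hypothetical:

```python
def johnson_order(a, b):
    """Johnson's rule for a two-machine flow shop with times a (M1), b (M2).

    Jobs with a[j] <= b[j] come first in nondecreasing order of a[j];
    the remaining jobs follow in nonincreasing order of b[j].
    """
    n = len(a)
    first = sorted((j for j in range(n) if a[j] <= b[j]), key=lambda j: a[j])
    last = sorted((j for j in range(n) if a[j] > b[j]), key=lambda j: b[j],
                  reverse=True)
    return first + last

print(johnson_order([3, 5, 1, 7], [6, 2, 4, 7]))  # → [2, 0, 3, 1]
```

In the heuristics, the arrays a and b are built from the two scenario values of each job (for example, a convex combination of a_i^(1) and a_i^(2)), and the resulting order is then used as a seed.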
For stage 1, the following 12 heuristic algorithms are based on the Johnson rule [36]. Furthermore, to obtain high-quality approximate solutions, we take the solutions found by these 12 heuristics as 12 seeds, which are improved by a pairwise interchange process and are also used as seeds in the cloud theory-based simulated annealing (CSA) algorithm, respectively. A summary of the 12 proposed initial heuristics, their 12 (pairwise) improved counterparts, and the 12 variants of the CSA algorithm is given below.
Following the idea of Johnson's rule, we propose the 12 heuristics H_1, H_2, ..., H_12 as follows. In H_1, step 1 forms a combined processing time a_i for job J_i on machine 1 and b_i on machine 2 from the two scenario values a_i^(1), a_i^(2) and b_i^(1), b_i^(2); steps 2-5 apply Johnson's rule to {a_i, b_i}, i = 1, ..., n, and output the final schedule σ_1 and its objective value. For the H_2 and H_3 heuristics, in step 1 we replace the combinations used as the processing times for job J_i on the two machines, keeping steps 2-5 the same as in H_1. We then record H_4, H_5, and H_6 for the convex combinations with θ = 0.25, 0.5, 0.75. In a similar way, we record H_7 to H_9 and H_10 to H_12 for the analogous combinations. The problem under study is NP-hard because the same problem without scenario-dependent processing times is NP-hard. In order to quickly find good-quality near-optimal solutions and save CPU time, in stage 3 we utilize the cloud theory-based simulated annealing (CSA) algorithm proposed by Torabzadeh and Zandieh [31]. Furthermore, we adopt the solutions yielded by the 12 first-stage heuristics as the 12 seeds of the CSA algorithm. Important parameters of CSA, such as the initial temperature (T_i), the final temperature (T_f), the number of improvement repetitions (N_r), and the annealing index (λ), need to be explored and tuned. The 12 variants of the CSA algorithm are summarized as follows (Algorithm 1).
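To illustrate how one such variant proceeds, the following is a heavily simplified sketch of ours, not the authors' CSA: it uses a heuristic seed, pairwise-interchange neighbours, and a Gaussian-perturbed ("cloud drop") temperature in the acceptance test. The exact normal-cloud generator of Lv et al. [30] is replaced here by random.gauss purely for illustration, and the toy objective is hypothetical:

```python
import math
import random

def anneal(seed, objective, t_i=0.9, t_f=1e-8, lam=0.99, n_r=20):
    """Simplified annealing search seeded with a heuristic schedule."""
    best = cur = list(seed)
    best_val = cur_val = objective(cur)
    t = t_i
    while t > t_f:
        for _ in range(n_r):                      # N_r improvement repetitions
            p, q = random.sample(range(len(cur)), 2)
            cand = list(cur)
            cand[p], cand[q] = cand[q], cand[p]   # pairwise interchange move
            cand_val = objective(cand)
            # cloud-style perturbed temperature for the acceptance test
            t_drop = abs(random.gauss(t, 0.1 * t)) or t
            if cand_val < cur_val or \
               random.random() < math.exp(-(cand_val - cur_val) / t_drop):
                cur, cur_val = cand, cand_val
                if cur_val < best_val:
                    best, best_val = cur, cur_val
        t *= lam                                  # annealing index λ
    return best, best_val

def _toy_obj(s):
    """Toy objective: weighted sum, minimized by sorting in descending order."""
    return sum((i + 1) * s[i] for i in range(len(s)))

best_seq, best_val = anneal(list(range(5)), _toy_obj)
```

In the actual variants, the objective would be the worst-case total completion time over the two scenarios and the seed would come from one of the 12 heuristics.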
In addition to the above developments, our proposed branch-and-bound algorithm adopts a depth-first policy in the branching tree and assigns jobs in a forward manner from the first position to the last [34,37]. At each branching node, we determine whether the node should be cut by using Property 1 and Property 2; we compute the lower bound for an active node and the objective value for a complete node. We adopt the best solution among the proposed heuristics as the initial upper bound and update the upper bound whenever a complete node is generated.
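The depth-first scheme just described can be sketched as follows. This is a compact illustration of ours, not the authors' implementation: it is restricted to a single scenario, uses only a crude lower bound, and omits Properties 1 and 2 and the heuristic upper bound; all names are our own:

```python
def bb(a, b):
    """Minimize the total completion time (sum of M2 completions) for one scenario."""
    n = len(a)
    best = {"val": float("inf"), "seq": None}

    def extend(seq, used, c1, c2, total):
        if len(seq) == n:
            if total < best["val"]:               # complete node: update bound
                best["val"], best["seq"] = total, list(seq)
            return
        rest = [j for j in range(n) if not used[j]]
        # crude bound: every remaining job finishes no earlier than
        # max(c2, c1 + smallest remaining a) plus its own b-time
        floor2 = max(c2, c1 + min(a[j] for j in rest))
        lb = total + sum(floor2 + b[j] for j in rest)
        if lb >= best["val"]:
            return                                # cut the node
        for j in rest:                            # assign jobs forward, depth first
            nc1 = c1 + a[j]
            nc2 = max(nc1, c2) + b[j]
            used[j] = True
            extend(seq + [j], used, nc1, nc2, total + nc2)
            used[j] = False

    extend([], [False] * n, 0, 0, 0)
    return best["seq"], best["val"]

seq, val = bb([3, 1, 4], [2, 5, 1])  # hypothetical 3-job instance
```

The full method additionally evaluates the bound under both scenarios and applies the dominance properties before branching.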

... and the values of the parameters
(8) Randomly choose two integers p and q with 1 ≤ p, q ≤ n and p ≠ q
(9) Interchange the jobs in the pth and qth positions of σ_i to generate another new schedule σ_t
(10) Compute the value of the objective function for σ_t, obj(σ_t)

Tuning the Parameters
Before performing intensive computational simulation experiments, four initial parameters of the CSA algorithm need to be tuned: the initial temperature (T_i), the final temperature (T_f), the number of improvement repetitions (N_r), and the annealing index (λ). Based on experience from some preliminary simulations, the final temperature (T_f) was set at 10^-8. A one-factor-at-a-time experiment was run to tune the parameters in the order T_i, N_r, and λ. In these computer experiments, the number of jobs was set at n = 10; the processing times a_j^(s), b_j^(s) followed a uniform distribution over the integers 1 to 20; N_r was initially set to 10; and λ was initially set to 0.98. For each parameter combination, 100 problem instances were generated, the mean of the objective function was computed, and the B&B method was also run to obtain the optimal objective value. The average error percentage (AEP) was recorded, where AEP = 100 × (O_j − O*)/O* averaged over the instances, O_j is the final objective value output by a heuristic or algorithm, and O* is the optimal value from the B&B method. We tested T_i at 0.9, 0.99, 0.1, 0.01, 10^-3, 10^-4, and 10^-5. As shown in Figure 1, at T_i = 0.9 the mean AEP attains its minimum, 0.0257. Next, the parameter N_r was increased from 1 to 25 while the other parameters were fixed at T_i = 0.9 and λ = 0.98. As shown in Figure 2, at N_r = 20 the mean AEP attains a minimum of 0.18%. According to the tuning results, the parameters were finally set as follows for the later computational experiments: T_i = 0.9, N_r = 20, λ = 0.99, and T_f = 10^-8.

Figure 4: The boxplots of the 12 heuristics and 12 CSA_Hs algorithms (small n).
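The average error percentage used in the tuning experiments can be computed as in the following short sketch (an illustration of ours with hypothetical values, not the authors' code):

```python
def aep(o, o_star):
    """Mean of 100 * (O_j - O*) / O* over the problem instances."""
    return sum(100.0 * (oj - os) / os for oj, os in zip(o, o_star)) / len(o)

# Three hypothetical instances: heuristic values vs. optimal B&B values.
print(aep([110, 105, 100], [100, 100, 100]))  # → 5.0
```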

Simulation Studies
Intensive computational simulation experiments were carried out to evaluate the performance of the branch-and-bound method and the efficacy of the 12 heuristics and the 12 variants of the CSA algorithm (recorded as 12 CSA_Hs). The heuristics and algorithms were coded in FORTRAN (Compaq Visual Fortran, version 6.6). The experiments were executed on a PC with an Intel Xeon CPU E5-1620 3.60 GHz and 4.00 GB RAM, running 32-bit Windows 7. For small numbers of jobs (n = 8, 9, 10, 12), the mean AEPs of the 12 heuristics and the 12 variants of the CSA algorithm were recorded; for large numbers of jobs (n = 150, 200), the mean RPDs were reported, where RPD = 100 × (O_j − O*)/O*, O_j is the final objective value yielded by a heuristic or algorithm, and O* is the best value among the 12 heuristics and 12 algorithms. The job processing times were generated using several different uniform distributions following Kouvelis et al. [2]. The integer processing times a_j^(s), b_j^(s) on machine M_i follow the discrete uniform distribution 10·β_i·U(0, 1) + (40·α·β_i + 1), where α takes the values 0.2, 0.6, and 1.0 and (β_1, β_2) takes the values type T_1 = (1.0, 1.0), type T_2 = (1.2, 1.0), and type T_3 = (1.0, 1.2). There are nine combinations of α and the types of (β_1, β_2). For each parameter combination, 100 problem instances were generated. The results found by the B&B method are listed in Table 1. As can be seen from Table 1, at n = 12 the number of nodes in the B&B method exceeds 10^8. The number of nodes and the CPU times also increase as the number of jobs n and α increase. The simulation results for small numbers of jobs are shown in Tables 2 and 3 for the 12 Hi + PIs and 12 CSA_Hs. The AEPs of the CSA_Hs are much smaller than those of the Hi + PIs. The comparison between the Hi + PIs and the CSA_Hs is further shown in Figure 4.
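The instance generation just described can be sketched as follows. This is an illustrative Python snippet of ours interpreting the stated distribution 10·β_i·U(0, 1) + (40·α·β_i + 1) as a discrete uniform draw; the rounding convention and all names are our assumptions:

```python
import random

def gen_times(n, alpha, beta):
    """Integer processing times for one machine (one scenario) with parameter beta."""
    # rounding each term down to an integer is our assumption
    return [int(10 * beta * random.random()) + int(40 * alpha * beta + 1)
            for _ in range(n)]

# One instance for alpha = 0.6 and type T2 = (1.2, 1.0): a-values use beta_1,
# b-values use beta_2, drawn independently for each scenario s = 1, 2.
a = [gen_times(8, 0.6, 1.2) for s in range(2)]
b = [gen_times(8, 0.6, 1.0) for s in range(2)]
```

Under this reading, larger α shifts the whole range upward, and the β parameters make one machine systematically slower than the other for types T_2 and T_3.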
In light of the boxplots in Figure 4, the group of CSA_Hs performs better than the group of Hi + PIs in both the mean and the variance of AEP; i.e., the CSA_Hs are more accurate and more stable.
Regarding the performances of the 12 heuristics and 12 CSA_Hs, Tables 4 and 5 present the numerical simulation results. The performances of the 12 Hi + PIs and 12 CSA_Hs are further summarized in Figure 5. The boxplots in Figure 5 indicate that the 12 CSA_Hs have a better capability of finding near-optimal solutions than the 12 Hi + PIs, and there appears to be no difference within the group of CSA_Hs or within the group of Hi + PIs. The following analysis applies statistical methods to the simulation data to provide scientific evidence on whether the performance difference between the group of 12 heuristics and the group of 12 CSA_Hs is statistically significant, and whether the differences within the 12 heuristics or within the 12 CSA_Hs are statistically insignificant.
To establish the difference in performance between the 12 heuristics and the CSA_Hs for small n, a linear model was fitted to the AEPs with the factors n, α, and the type of (β_1, β_2), using SAS 9.4. Four normality tests of the error term in the fitted linear model were adopted. The p values of these tests are less than 0.05 and are listed in Table 6 (columns 3 and 4); for example, the p value obtained from the Kolmogorov-Smirnov test is less than 0.01. Thus, at the commonly used significance level of 0.05, the normality assumption for the AEPs is invalid. Therefore, instead of a parametric statistical method, the DSCF procedure (Dwass-Steel-Critchlow-Fligner [38]), a nonparametric method, was utilized for multiple comparisons among the 12 Hi + PIs and 12 CSA_Hs, in keeping with the characteristics of the simulation observations. Table 7 exhibits the comparative outcomes. The performance differences between every pair consisting of one of the 12 heuristics and one of the 12 CSA_Hs are statistically significant. Moreover, no pairwise difference in AEP among the 12 heuristics is statistically significant, and no pairwise difference in AEP among the 12 CSA_Hs is statistically significant.
Regarding large numbers of jobs, another linear model was fitted to the RPDs with the factors n, α, and the type of (β_1, β_2). The p values of the four normality tests of the error term in this fitted linear model are less than 0.05 and are listed in Table 6 (columns 5 and 6). Table 8 illustrates the results of another multiple comparison of the 12 heuristics and 12 CSA_Hs for large n.
The conclusions about the performances within the 12 heuristics and within the 12 CSA_Hs are almost the same as those from the comparisons for small n; we do not repeat them here because only some of the p values differ slightly between Tables 7 and 8.

Conclusions
This paper studies scenario-dependent processing times in a two-machine flow-shop environment in which the performance measure is the total completion time. Because of the NP-hardness, we designed a branch-and-bound (B&B) method and developed a lower bound and two dominance properties to enhance the search efficiency of the B&B method. We then proposed 12 simple heuristics improved by a pairwise interchange method, coded as Hi + PIs, and further devised a cloud theory-based simulated annealing (CSA) algorithm. The CSA algorithm, fed with the initial solutions produced by the 12 simple heuristics (without any further improvement), forms 12 versions of the CSA algorithm, coded as CSA_Hs. Computer experiments were conducted to investigate the efficiency of the 12 heuristics, the Hi + PIs, and the 12 CSA_Hs. We observed that the CSA_Hs produce better solutions than the 12 heuristics. In addition, statistical methods were applied to analyze the experimental data, and we verified the significant difference between the groups of Hi + PIs and CSA_Hs.
As for future investigation, other characteristics of a scheduling problem, such as the release times of jobs, may also involve this uncertain feature.
Thus, a potential future research direction is to develop structural properties and algorithm designs that incorporate this aspect. Deriving more heuristics based on the total completion time criterion, as in references [39-42], is also a worthy future issue.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.