Efficient priority rules for dynamic sequencing with sequence-dependent setups

Article history: Received November 4, 2015. Received in revised format December 21, 2015. Accepted February 1


Introduction
Dynamic scheduling problems with sequence-dependent setups are significantly less well-studied than their static counterparts. For example, the comprehensive survey of scheduling problems with setups by Allahverdi et al. (2008) does not cover the single-machine case with dynamic arrivals. The review of scheduling problems with sequence-dependent setups by Zhu and Wilhelm (2006) likewise does not address systems with dynamic arrivals.
In this paper, the problem of sequencing jobs on identical parallel machines in the presence of sequence-dependent setups is addressed. Jobs arrive dynamically to the system with random times between arrivals. The processing time and due date of a job become known upon its arrival to the system. A machine needs to be set up when switching from one job type to another. In this setting, generating complete schedules in advance is not possible, since new information arrives continuously to the system. The system operates under a completely reactive scheduling policy, i.e. the controller does not construct a schedule at all but simply selects the next job to be processed on an available machine at every decision-making epoch. A decision-making epoch is a time point where at least one machine is available and there is more than one queueing job in the input buffer.
In this environment decisions must be made in real time, so the use of dispatching rules for scheduling is a reasonable practice, primarily because of their ease of implementation and minimal computational requirements. The performance of a given dispatching rule varies with the objective function considered and the configuration of the system to which it is applied. Seventeen dispatching rules published in the relevant literature were considered in this study. The set of rules includes standard heuristics such as EDD and SPT, as well as setup-oriented rules (e.g. Vinod & Sridharan, 2008). For the analysis of the system, discrete-event simulation was used. An extended series of simulation experiments was conducted in which the effect of categorical and continuous system parameters on four performance metrics was investigated. The insights gained from the analysis of the simulation experiments led to the development of an efficient, parameterized priority rule that was tested extensively. The simulation output was analyzed using rigorous statistical tests and the proposed rule was found to produce excellent results for the metrics of mean WIP, mean cycle time, and mean tardiness.
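Dispatching rules of this kind can be viewed as priority-index functions evaluated over the queueing jobs at each decision-making epoch. The following sketch (hypothetical code, not taken from the study; the `Job` fields and function names are illustrative) shows the idea with the standard EDD and SPT rules:

```python
# A dispatching rule maps each queueing job to a priority index;
# at a decision epoch the job with the minimum index is selected.
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float      # arrival time of the job
    proc_time: float    # processing time
    due_date: float     # due date

def edd(job):
    """Earliest Due Date: smaller due date = higher priority."""
    return job.due_date

def spt(job):
    """Shortest Processing Time: smaller processing time = higher priority."""
    return job.proc_time

def select_next(queue, rule):
    """At a decision-making epoch, pick the job minimizing the rule's index."""
    return min(queue, key=rule)

queue = [Job(0.0, 4.0, 9.0), Job(1.0, 2.0, 12.0), Job(2.0, 3.0, 7.0)]
print(select_next(queue, edd).due_date)   # EDD picks the job due at 7.0
print(select_next(queue, spt).proc_time)  # SPT picks the job taking 2.0
```

Setup-oriented rules fit the same interface once the index is allowed to depend on the machine's current setup state as well.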
The structure of the paper is the following. Section 2 contains a brief review of related publications. In Section 3 the dynamic scheduling problem is described. The results from the computer-generated experiments and their analysis are presented in Section 4. The new priority rule is presented and tested in Section 5. The concluding remarks are given in Section 6.

Related work
The scheduling approach examined in this paper is completely dynamic and inherently different from the approach of breaking down the dynamic scheduling problem into a series of static scheduling problems (refer to Adibi et al. (2010), for example). Dynamic scheduling problems that involve standard dispatching rules have been studied in Dominic et al. (2004) and Huq et al. (2009), among others. More elaborate approaches that consider the development of composite rules and intelligent methods for priority rule selection can be found in the works of Korytkowski et al. (2013) and Xanthopoulos et al. (2013).
The publication most relevant to this paper is the work by Ang et al. (2009), where a single-machine dynamic sequencing problem with setups is considered. The two papers differ in that Ang et al. (2009) consider alternative shop configurations, sets of performance metrics and priority rules. In their study, emphasis is given to standard rules (not setup-oriented) which are combined to form composite rules that exhibit good behavior in terms of more than one performance metric. In this research, we consider both standard and setup-oriented rules (refer to Table 2). The standard rules EDD and LS are included because they are widely used in the literature as benchmarks for due date related metrics (Rajendran & Holthaus, 1999). The SPT and CR rules are known to be effective for flow time and due date related metrics, respectively (Kim & Bobrowski, 1997). Regarding the setup-oriented priority rules, note that the most basic rule of that category, the Similar Setup rule (Gavett, 1965), is not included in this study because it has been shown to be insufficient when flow time and tardiness performance is considered (Wilbrecht & Prescott, 1969). Instead, a modified similar setup rule (MMS) has been considered, based on its positive performance reported in Arzi and Raviv (1998). The FCFSNS and DK rules are commonly used as benchmarks in problems with setups (e.g. Frazier, 1996). More sophisticated rules such as WORK, MJ, and SLK utilize information on the entire queueing job set of each job type (Nomden et al., 2008; Mahmoodi & Dooley, 1991). The EDDNS, SPTNS, LSNS and CRNS rules have been studied, in the context of job-shop scheduling, in the works of Vinod and Sridharan (2008), Chan et al. (2006) and Kim and Bobrowski (1997). However, note that there are no definitively conclusive results regarding their performance in the literature. For example, SPTNS has been reported in Jacobs and Bragg (1988) to outperform the SPT rule with respect to minimization of flow time, a result which contradicts those of Vinod and Sridharan (2008) and Wemmerlov (1992). Finally, note that two rules included in this study (SPSU and CRNS) have been shortlisted in a comprehensive survey of setup-oriented priority rules (Pickardt & Branke, 2012) as the more promising candidates among other alternatives.

System description
The description of the dynamic scheduling problem is presented in this section. The system has n identical parallel machines that process m types/groups of jobs (Wang & Liu, 2014) and an input buffer where incoming jobs are stored until they are released for processing. The assumptions made regarding the machines/buffer system are the following: i) there are neither machine failures nor order cancellations, ii) the machines can process only one job at a time, iii) the input buffer has infinite capacity, iv) a job that has completed its processing exits the system immediately, i.e. the machines are never blocked. Table 1 summarizes the nomenclature used in the remainder of the paper.
Table 1 (continued): arrival time of the i-th queueing (k-th completed on machine j) job; processing time of the i-th queueing (k-th completed on machine j) job; due date of the i-th queueing (k-th completed on machine j) job; s_{i,j}, setup time for the i-th queueing ((k+1)-th completed) job on machine j; g(s_{i,j}), penalty for assigning queueing job i to machine j (0 if no setup is required, b otherwise); WIP at time t, i.e. the number of jobs that exist in the system at time t.

At time 0, all machines are idle (σj = 0 for all j) and the input buffer is empty (l = 0). Jobs arrive dynamically to the system at random time intervals with mean λ. Processing times of jobs are also stochastic, with mean μ. The type, processing time, and due date of a job are assumed to become known upon the arrival of that job to the system. The due date of the i-th queueing job satisfies Eq. (1). Sequence-dependent setups (Xanthopoulos et al., 2013) take place when switching from one job type to another on each machine. The length of the setup interval for the (k+1)-th completed job on machine j is computed according to Eq. (2). A job is released for processing whenever a machine is available and the input buffer contains at least one job, i.e.
no planned idle time is allowed. Once a machine starts processing a job it cannot be interrupted. At the moment that the currently processed job on machine j is completed, the machine's state shifts from operating to idle. Sequencing is completely reactive, meaning that it is triggered by system-related events. The event that causes a re-sequencing epoch is the transition of a machine's state from operating to idle, provided that the input buffer contains at least two jobs. In a re-sequencing epoch all job priority indices are updated to account for newly arrived jobs. The job priority indices are determined by a dispatching rule that belongs to the set of rules listed in Table 2. One such rule operates as follows: if there is a queueing job i' with negative slack that belongs to a job type different from the one currently processed on machine j, 1 - select the job with minimum processing time that belongs to the same family as i'; otherwise, 2 - select the job with minimum processing time that does not require a setup on machine j.

Queueing jobs are assigned to the machines according to the following rules:
 if there is only one queueing job (l = 1) and only one idle machine (with σj = 0), assign the queueing job to machine j;
 if there is only one queueing job and more than one idle machine, assign the queueing job to a machine that does not require a setup (if more than one machine requires no setup, ties are broken arbitrarily);
 if there is more than one queueing job (l > 1) and only one idle machine (with σj = 0), assign the job with the highest priority according to the dispatching rule in use to machine j;
 if there is more than one queueing job and more than one idle machine, assign the job with the highest priority for idle machine j to machine j. If a queueing job has the highest priority for more than one idle machine, assign it to a machine that does not require a setup (if more than one machine requires no setup, ties are broken arbitrarily).

The performance of the system is captured by four metrics: mean WIP, mean cycle time, mean tardiness, and mean percentage of tardy jobs, where |·| denotes the cardinality-of-set operator.
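The assignment rules above can be sketched in a few lines. In this hedged illustration (not the paper's code), `priority(job, machine)` returns the dispatching rule's index (lower = higher priority, possibly machine-dependent for setup-oriented rules) and `needs_setup(job, machine)` tells whether a setup would be incurred; both names and the job representation are hypothetical.

```python
def assign_one(queue, idle_machines, priority, needs_setup):
    """Pick one (job, machine) pair following the four assignment rules."""
    if len(queue) == 1:
        # a single queueing job goes to a machine that avoids a setup,
        # if one exists; ties are broken arbitrarily (list order here)
        job = queue[0]
        no_setup = [m for m in idle_machines if not needs_setup(job, m)]
        return job, (no_setup or idle_machines)[0]
    # several queueing jobs: best pair by the priority index, preferring
    # a machine that requires no setup when the indices tie
    return min(
        ((j, m) for j in queue for m in idle_machines),
        key=lambda jm: (priority(*jm), needs_setup(*jm)),
    )

# usage with a toy state: machine "B" is already set up for type "y"
setup_state = {"A": "x", "B": "y"}
jobs = [{"type": "x", "p": 3.0}, {"type": "y", "p": 2.0}]
needs = lambda j, m: j["type"] != setup_state[m]
prio = lambda j, m: j["p"]   # an SPT-like index, for the example only
job, machine = assign_one(jobs, ["A", "B"], prio, needs)
# the shortest job (type "y") is assigned to "B", avoiding a setup
```

The tuple key mirrors the fourth rule: the priority index decides first, and the no-setup preference only breaks ties.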

Comparison of existing priority rules
The methods described in section 6.1 of Xanthopoulos et al. (2013) were used in order to obtain statistically significant results while controlling the computational cost of the simulation experiments.
For each replication of the simulation models, a warm-up period corresponding to 300 completed jobs was used. The parameters of the procedure that controls the length of each independent replication were set to n0 = 150 and g = 0.001, while the parameters of the procedure that controls the number of replications for each simulation model were h = 0.05 and Kmax = 40 (refer to section 6.1 of Xanthopoulos et al. (2015) for a description of the aforementioned procedures).
We examined 18 simulation cases which are described in Table 3 and Table 4. Cases 1 -9 pertain to single-machine configurations whereas cases 10-18 concern configurations with three parallel machines.

Table 3
Parameters for simulation cases 1-9. The definition of the symbols is given in Table 1. The simulation cases are differentiated in terms of the mean time between arrivals, the mean processing time and the setup parameter. The parameters of the simulation cases were selected so that the behavior of existing priority rules could be investigated for alternative shop configurations, levels of workload imposed on the system and setup intensity.

Comparison of priority rules for varying arrival rates -single machines cases
In this section a comparative evaluation of the seventeen priority rules described in Table 2 is conducted for single machine cases with varying job arrival rates. Figs. 1-4 show the mean WIP, mean cycle time, mean tardiness and mean percentage of tardy jobs, respectively (y axis), obtained by the 17 priority rules considered in this study, versus the mean time between arrivals λ (x axis), for simulation cases 1-5.
It is observed that the mean WIP, mean cycle time, mean tardiness and mean percentage of tardy jobs decrease monotonically as the mean time between arrivals increases. This is expected, since the greater the mean time between arrivals, the more moderate the workload imposed on the system. In cases where the frequency of arrivals is high, intense queueing effects are observed and the average time that incoming jobs spend in the system is relatively high. Moreover, exiting jobs are more likely to be tardy than early, and so the metrics of mean tardiness and mean percentage of tardy jobs are also at relatively high levels. The differences between the various priority rules are more evident in cases where the system is under heavy workload. This is an indication that the performance of the system is mostly affected by the sequencing rule when the system operates close to its maximum attainable throughput rate. The best rule for minimizing mean WIP, mean cycle time and mean tardiness is SPSU for simulation cases 1-5. The best priority rule for minimizing mean percentage of tardy jobs was found to be SPTNS for cases 2-5 and MMS for the first simulation case.

Comparison of priority rules for varying levels of setup -single machines cases
In this section a comparative evaluation of the seventeen priority rules is presented for single machine cases with varying levels of the setup factor β. Figs. 5-8 show the performance metrics considered in this study (y axis) obtained by the 17 priority rules, versus the setup factor β (x axis), which spans from 0.1 to 0.5 with step size 0.1 (simulation cases 4, 6, 7, 8, 9). It is observed that the due date-oriented priority rules (EDD, LS, LSSU) do not perform satisfactorily in general, whereas the setup-aware rules (EDDNS, SPTNS, LSNS, CRNS, SPSU) excel with respect to all objective functions. The rules SPT and CR define a somewhat intermediate situation between these two extremes. Increasing the setup magnitude slows down the production process, as the average total processing time is increased. As a result, the mean length of the pending jobs queue, the mean cycle time, and the mean tardiness also increase. Moreover, priority rules that form batches of jobs of the same class for processing are naturally well-suited to this type of production scenario, as they tend to minimize the frequency of setups.

With respect to the mean WIP and mean cycle time metrics, SPSU is the best rule for cases 4 and 6, and SPTNS for cases 7-9. Regarding mean tardiness, the best rule is SPSU for simulation cases 4 and 6, SPTNS for cases 7 and 9, and MMS for case 8. Finally, the mean percentage of tardy jobs is minimized by MMS in cases 6-9 and by SPTNS in case 4. It is evident that the MMS rule is a good option for due date related performance metrics in situations where the amount of setup is considerable. This can be attributed to the fact that the MMS rule takes into consideration both the required setup for each queueing job and information on the composition of the set of waiting jobs as a whole.

Comparison of priority rules for varying arrival rates -cases with three parallel machines
In this section the results regarding simulation cases 10-14 are presented. It is reiterated that these cases consider production systems with three parallel machines, and the output of the systems for varying arrival rates is investigated. The performance of the 17 existing priority rules considered in this study is presented in Tables 5-7.
From Tables 5-7 it is observed that the performance of the LSNS rule matches that of the FCFSNS rule. Note that this is a consequence of the due date assignment scheme (Eq. (1)) and not a general property of these two rules.
Simulation cases 10-14 are analogous to single machine cases 1-5, but they are also differentiated in terms of the magnitude of the setup operations. Recall from Eq. (2) that the setup is proportional to the processing time of a job, and that the mean processing time of jobs for cases 10-14 is 3.0, compared to a mean processing time of 1.0 for cases 1-5. As a result, the setups in the three machine cases 10-14 are lengthier than those of the single machine cases 1-5. For that reason there are some differences as to which rule is best for each performance metric in cases 10-14 in comparison to cases 1-5. More specifically, the SPTNS rule has the best results in terms of minimizing mean WIP, mean cycle time, and mean tardiness for simulation case 10, where the system operates under heavy workload. For cases 11-14 the SPSU rule achieves the best results regarding the aforementioned metrics. With respect to the mean percentage of tardy jobs metric, MMS is the best rule for cases 10, 11, 13 and 14, and SPTNS for case 12. The performance of the CRNS rule matches that of MMS in case 14 as regards the mean percentage of tardy jobs.

Comparison of priority rules for varying levels of setup -cases with three parallel machines
The results of the three machine simulation cases 15-18 are presented in this section. The performance of the 17 priority rules considered in this study for parallel machine systems with varying levels of the setup factor β is presented in Tables 8-9. From Table 8 and Table 9 it can be seen that more sophisticated rules such as WORK, MJ, and SLK, which utilize information on the entire queueing job set of each job type, perform rather poorly. The rationale of the WORK rule is that the frequency of setups is reduced by selecting the job type which will occupy a machine for the longest time. However, this rule tends to prioritize jobs with long processing times, which explains its poor performance. The MJ rule also attempts to reduce the setup frequency, by choosing the job type with the largest number of queueing jobs; however, as new jobs arrive to the system this practice might actually have a negative effect on the system's performance. The SLK rule is a truncated version of the SPTNS rule in which machines switch between job types based on the slacks of the waiting jobs in the input buffer; nevertheless, the experimental results show this intuitive rule to be ineffective in practice.
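The quantity driving the WORK rule, the total queued processing time per job type, can be sketched as follows (illustrative code with a hypothetical job representation, not the paper's implementation). The example also makes the rule's bias visible: a type accumulates work through long jobs just as easily as through many short ones.

```python
# Total queued work per job type; the WORK rule selects the type that
# would keep a machine busy (and setup-free) for the longest time.
from collections import defaultdict

def type_with_most_work(queue):
    total = defaultdict(float)
    for job in queue:
        total[job["type"]] += job["p"]   # accumulate processing time
    return max(total, key=total.get)

queue = [{"type": "x", "p": 1.0}, {"type": "x", "p": 1.5},
         {"type": "y", "p": 2.0}]
print(type_with_most_work(queue))  # "x": 2.5 units of queued work vs 2.0
```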
Simulation cases 15-18 are characterized by considerable setup levels, and it can be argued that this is the reason why the SPTNS rule dominates the set of alternative heuristics for the metrics of mean WIP, mean cycle time and mean tardiness. The SPTNS rule is outperformed only in cases 16 and 18, regarding the mean percentage of tardy jobs, by the MMS rule.

An effective priority rule for dynamic sequencing with sequence-dependent setups

The SPSU rule guarantees that the shortest queueing job is selected for processing at all times. In many cases reported in section 4 and its sub-sections this quality proved to be sufficient for producing the best results. However, the myopic selection of the shortest queueing job is not the best choice in the long run in cases with relatively high setup times, as demonstrated by the results presented in section 4. In many such cases, eliminating setups whenever possible and breaking ties based on the minimum processing time in the queue of incoming jobs (the SPTNS rule) was found to be the best policy. Furthermore, SPTNS was found to perform very well in minimizing the mean percentage of tardy jobs. These remarks pave the way for the introduction of a more efficient priority rule, which is presented in this section. The proposed rule PR sequences queueing jobs according to a criterion that combines the processing time of the i-th queueing job with a penalty on the setup s_{i,j} incurred if the i-th job is selected for processing on machine j, where b is a real positive parameter of the penalty function.
The proposed rule PR is a combination of SPSU and SPTNS. The priority index assigned to a queueing job by SPSU and SPTNS is the sum of its processing time plus a penalty that is a function of the setup needed to process the job. SPSU uses a linear penalty function, and SPTNS uses a step function; the step function eliminates setups provided that its maximum value is sufficiently high. The PR rule, on the other hand, utilizes an exponential penalty function. This is demonstrated with an example in Fig. 9.
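The three penalty shapes can be contrasted with a small sketch. The exact functional forms used in the paper are not reproduced here; the linear, step and exponential penalties below (normalized so that a job needing no setup gets zero penalty) are assumptions chosen only to show the qualitative difference between SPSU-, SPTNS- and PR-style indices.

```python
B = 5.0  # the tuned value of parameter b ultimately selected in the paper

def penalty_linear(s):           # SPSU-like: grows linearly with the setup
    return s

def penalty_step(s):             # SPTNS-like: flat penalty whenever s > 0
    return B if s > 0 else 0.0

def penalty_exp(s):              # PR-like: exponential in the setup length
    return B ** s - 1.0          # equals 0 when no setup is needed

def priority_index(p, s, penalty):
    """Index = processing time plus a penalty on the required setup."""
    return p + penalty(s)

# a job with processing time 2.0 under increasing setup requirements
for s in (0.0, 0.3, 1.0):
    print(s,
          priority_index(2.0, s, penalty_linear),
          priority_index(2.0, s, penalty_step),
          priority_index(2.0, s, penalty_exp))
```

The step penalty jumps to its full value for any nonzero setup, the linear penalty grows slowly, and the exponential penalty interpolates between the two: small setups are tolerated while large ones are strongly discouraged.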

Comparison of proposed rule PR to priority rules SPSU, MMS and SPTNS
In this section a comparative evaluation of the PR rule is presented. The methods and parameters used to conduct the additional simulation experiments are the same as those reported in section 4. For brevity, the PR rule is compared only to the priority rules which were found to produce the best results (SPSU, SPTNS and MMS) in the simulated cases described in sections 4.1-4.4. The results for the cases with varying arrival rates and varying setup intensities are presented compactly in Table 10 and Table 11. In order to adjust parameter b of the PR rule, the values e (the base of the natural logarithm), 3.0, 4.0 and 5.0 were tested, and b = 5.0 was ultimately selected. Note that with trivial effort for parameter tuning the PR rule produced very encouraging results. Regarding the mean WIP criterion, it is observed that the proposed rule is the best in 9 cases and second best in 5 cases, out of the 18 simulation cases in total.

Table 10
Comparison between SPSU, SPTNS, MMS and PR (continues in Table 11). The minimum values for each performance metric and simulation case are denoted in bold.
Single machine cases (m = 5, μ = 1.0, δ = 1.0); 3 parallel machine cases (m = 5, μ = 3.0, δ = 1.0).

With respect to the mean cycle time and the mean tardiness metrics, the PR rule ranks first and second in 8 and 6 cases, respectively. The proposed rule performs rather inadequately in terms of the mean percentage of tardy jobs criterion; nevertheless, it outperforms the SPSU rule in all simulation cases except two.

Table 11
Comparison between SPSU, SPTNS, MMS and PR (continued from Table 10). The minimum values for each performance metric and simulation case are denoted in bold.

In order to obtain a more conclusive assessment of the performance of PR compared to the SPSU, MMS and SPTNS rules, rigorous statistical tests are conducted. For each performance measure and shop configuration, a two-way analysis of variance (ANOVA) at the 95% confidence level is carried out to test the effects of the main factors on the response variable. The factors are A) the simulation case (9 levels) and B) the priority rule (4 levels). The ANOVA tests calculate the p-values for the null hypotheses H0A and H0B that observations at all levels of factor A and factor B, respectively, are drawn from the same population. The results are given in Table 12 and Table 13. In these tables, the column labeled 'source' describes the source of the variation of the data, where 'case' is factor A (the simulation case), 'rule' is factor B (the priority rule) and 'error' is the variability that cannot be attributed to either of the two main factors. The columns labeled 'SS', 'df' and 'F' contain the sums of squares, degrees of freedom and F statistics, respectively. Finally, the column labeled 'p-value' contains the p-values for the null hypotheses on the main effects. It is observed that the null hypotheses are rejected in all simulation cases (p-values < 0.05).
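The sum-of-squares decomposition behind such a two-way ANOVA can be sketched for a cases-by-rules table with one observation per cell and no interaction term. The data below are made up, and in practice the p-values would be obtained from the F distribution (e.g. `scipy.stats.f.sf`), which is omitted here.

```python
def two_way_anova(table):
    """table[i][j]: response of case i under rule j; returns SS and F terms."""
    r, c = len(table), len(table[0])
    grand = sum(map(sum, table)) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_total = sum((x - grand) ** 2 for row in table for x in row)
    ss_case = c * sum((m - grand) ** 2 for m in row_means)   # factor A
    ss_rule = r * sum((m - grand) ** 2 for m in col_means)   # factor B
    ss_err = ss_total - ss_case - ss_rule                    # residual
    df_err = (r - 1) * (c - 1)
    f_case = (ss_case / (r - 1)) / (ss_err / df_err)
    f_rule = (ss_rule / (c - 1)) / (ss_err / df_err)
    return ss_case, ss_rule, ss_err, f_case, f_rule

# toy 2x2 table: two cases (rows) observed under two rules (columns)
ss_a, ss_b, ss_e, f_a, f_b = two_way_anova([[1.0, 2.0], [3.0, 5.0]])
```

Large F statistics relative to the F distribution's critical value (small p-values) lead to rejecting the hypotheses that the case or the rule has no effect, as in Tables 12-13.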
In order to determine which pairs of means are significantly different, the ANOVA tests are followed up by multiple comparison procedures (Tukey's honestly significant difference criterion) at the 95% confidence level. The results are illustrated in Figs. 10-13.
The plots in Figs. 10-13 depict the population marginal means (Milliken & Johnson, 1992) together with comparison intervals; marginal means with non-overlapping comparison intervals are significantly different. The results of the follow-up tests for the single machine cases are the following. From the left-side subplot of Fig. 10 it is seen that the PR rule produces significantly better results than the MMS rule, and that its performance is statistically identical to that of the SPTNS and SPSU rules for the mean WIP metric. By observing the right-side (left-side) subplot of Fig. 10 (Fig. 11) it is inferred that the PR rule outperforms SPSU, SPTNS and MMS in a statistically significant way with respect to the mean cycle time (mean tardiness) metric. Finally, the PR priority rule is significantly better than the SPSU rule and worse than the SPTNS and MMS rules when compared in terms of the mean percentage of tardy jobs, as can be observed in Fig. 11. With respect to the results of the follow-up tests for the three machine cases, Figs. 12-13 indicate that the proposed priority rule achieves significantly better results than the SPSU and MMS rules for the mean WIP, mean cycle time and mean tardiness metrics. For the aforementioned metrics the performance of PR is statistically equal to that of SPTNS. Similarly to the single machine cases, the PR rule is significantly better than SPSU and worse than SPTNS and MMS with respect to the mean percentage of tardy jobs for the simulation cases with three machines, too.
Fig. 13. Results from multiple comparison follow-up tests for mean tardiness and mean percentage of tardy jobs (three machine cases). Multiple comparison tests, Tukey honestly significant difference criterion, 95% confidence; marginal means with comparison intervals.

Conclusions
The problem of real-time scheduling on parallel machines with dynamic arrivals, stochastic processing times, due dates and sequence-dependent setups was examined. Discrete-event simulation models were used for the analysis of the system.
A series of simulation experiments involving seventeen dispatching rules was conducted, in which four performance metrics were considered. The SPSU rule exhibited very good performance in minimizing mean WIP, mean cycle time and mean tardiness in cases with relatively low setup times, whereas in cases with high setup times the SPTNS rule excelled with respect to the aforementioned metrics. The SPTNS and MMS rules also yielded the best results in minimizing mean percentage of tardy jobs in all cases.
Based on the interpretation of the experimental results, a parametric priority rule that combines SPSU and SPTNS was introduced and tested in multiple experiments. The simulation output was analyzed using statistical tests, and the proposed rule was found to produce significantly better results for the metrics of mean cycle time and mean tardiness in the single machine simulation cases. The proposed rule's output also had statistically insignificant differences from that of the best priority rule in minimizing mean WIP in all cases, and mean cycle time and mean tardiness in the three machine cases.
The encouraging results of the proposed priority rule were obtained with very little effort regarding its fine-tuning.A plausible direction for future research would be to incorporate a self-adaptive element to the proposed heuristic so as to adjust more effectively to various shop configurations.

Fig. 10. Results from multiple comparison follow-up tests for mean WIP and mean cycle time (single machine cases).
Fig. 11. Results from multiple comparison follow-up tests for mean tardiness and mean percentage of tardy jobs (single machine cases).
Fig. 12. Results from multiple comparison follow-up tests for mean WIP and mean cycle time (three machine cases).

Table 1 Notation
β: setup factor of jobs
σj: state of machine j (0 and 1 symbolize an idle and a working machine, respectively)
l: number of queueing jobs
i: i-th queueing job in the input buffer
fi: number of queueing jobs that belong to the same family as queueing job i
w: job type with maximum total processing time in the input buffer
k, j: k-th completed job on machine j

Table 4
Parameters for simulation cases 10-18. The definition of the symbols is given in Table 1.

Fig. 4. Mean percentage of tardy jobs versus λ; the best rule is MMS in the first case and SPTNS in all other cases.
Fig. 5. Mean WIP for simulation cases 4, 6, 7, 8, 9.

Table 5
Results for simulation cases 10 and 11. The minimum values for each performance metric and simulation case are denoted in bold.

Table 6
Results for simulation cases 12 and 13

Table 8
Results for simulation cases 15 and 16. The minimum values for each performance metric and simulation case are denoted in bold.

Table 9
Results for simulation cases 17 and 18. The minimum values for each performance metric and simulation case are denoted in bold.

Table 12
Two-way ANOVA results for each performance metric (single machine cases).

Table 13
Two-way ANOVA results for each performance metric (three machine cases)