Polynomial-time algorithm for minmax scheduling with a common due-window and proportional-linear shortening processing times

This article deals with common due-window assignment and single-machine scheduling with proportional-linear shortening processing times. The objective is of minmax type, that is, the maximal cost among all processed jobs is minimized. Our goal is to determine an optimal schedule together with the optimal starting time and size of the due-window that minimize the worst cost, which consists of four parts: earliness, tardiness, the starting time of the due-window, and its length. Optimality properties of the problem are derived, and a polynomial-time algorithm is proposed to solve it.


Introduction
In practice, the processing times of jobs often vary with their starting times; this is known as time-dependent processing times [1][2][3]. In recent years, more and more researchers have studied time-related deterioration. Huang [4] studied a single-machine bicriterion problem in which the processing time and group setup time are linear functions of the starting time, and showed that it can be solved optimally. Li and Lu [5] studied single-machine parallel-batch scheduling with deterioration effects. Under the constraint that the total rejection cost cannot exceed a given constant, they showed that minimizing the total weighted completion time (makespan) is NP-hard. Liang et al. [6] showed that minimizing the weighted sum of makespan and resource cost with deterioration effects and group technology remains polynomially solvable. Huang et al. [7] studied a common due-window assignment problem in which the processing time is a proportional-linear function of the starting time. They proved that two different non-regular problems are polynomially solvable.
On the other hand, many scholars have studied minmax scheduling problems with due-date or due-window assignment, where minmax means that the maximal cost is minimized. Mosheiov [8] considered minmax scheduling with common due-date assignment on parallel identical machines. The goal was to find the job schedule and due-date assignment that minimize the cost of the worst scheduled job, and an efficient heuristic algorithm was proposed. Mosheiov and Sarig [9] studied due-window assignment scheduling problems in which the objective function is of minmax type and contains earliness, tardiness, the starting time, and the size of the due-window. They proved that the problem remains polynomially solvable when the processing times of the jobs are constant. Gerstl and Mosheiov [10] investigated single-machine scheduling with due-date assignment and demonstrated that the minmax minimization can be solved in polynomial time. Mosheiov [11] studied due-window assignment problems of minmax type, considering earliness penalties, tardiness penalties, the cost of the window position, and the size of the due-window. The author showed that these scheduling problems can be solved in polynomial time on a single machine, and provided an LPT-based heuristic (the LPT rule schedules the job with the largest processing time first) for the NP-hard case of parallel identical machines. Mor [12] considered minmax minimization with position-dependent processing times. For both the common due-date and the common due-window, the author showed that the problems are polynomially solvable by transforming them into assignment problems; numerical examples were given for all the problems. Mosheiov et al. [13] studied due-window assignment problems with position-dependent processing times and job rejection on a flow shop, and showed that the problem remains polynomially solvable.
More studies on scheduling with due-window (due-date) assignment can be found in [14][15][16][17][18][19][20].
In actual processing environments, the processing times of jobs often change with time. In this paper, we study minmax scheduling problems with proportional-linear shortening processing times (denoted by PLSPT). The objective function has four components: earliness, tardiness, the starting time, and the size of the due-window. The contributions of this article are as follows. First, when the position and size of the due-window are known, the optimal schedule and the optimal value can be found. Then, the optimal schedule and the optimal objective value can be found when only the size or only the position of the due-window is known. Building on this analysis, the optimal schedule is derived when both the size and position of the due-window are unknown. We prove that these problems with PLSPT remain polynomially solvable with complexity O(n), identical to that of the classical version (without PLSPT), where n is the number of jobs.
The rest of this study is organized as follows: Section 2 introduces the problem. Section 3 considers the scheduling problem on a single machine, discussed in four cases according to whether the location and the size of the due-window are known. Computational experiments are given in Section 4. The last section concludes the paper.

Problem definition
We investigate a set of n jobs Q = {J_1, J_2, ..., J_n} to be processed on a single machine without interruption. All the jobs are available for processing at time s (s ≥ 0). The general linear shortening model is as follows: the actual processing time of job J_j is p_j = a_j − b_j s_j, where a_j, b_j, and s_j represent the normal processing time (the processing time without any shortening), the shortening rate (the decreasing rate), and the starting time of job J_j, respectively. It is assumed that the shortening rates b_j satisfy 0 ≤ b_j < 1 and b_j(s + Σ_{i=1}^{n} a_i − a_j) < a_j (see [21,22]). In this article, a special case (b_j = θa_j for some θ > 0) will be studied; that is, p_j = a_j(1 − θs_j), where θ(s + Σ_{j=1}^{n} a_j − a_min) < 1 and a_min = min{a_j | j = 1, 2, ..., n}.
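As a quick illustration of the shortening model, a minimal Python sketch (the helper name completion_times and the example data are ours, not the paper's) generates completion times sequentially from p_j = a_j(1 − θs_j):

```python
def completion_times(a_seq, theta, s=0.0):
    """Completion times when jobs run in the order a_seq,
    the first job starting at time s, under p_j = a_j * (1 - theta * s_j)."""
    # feasibility condition of the model: theta * (s + sum(a) - a_min) < 1
    assert theta * (s + sum(a_seq) - min(a_seq)) < 1
    times, c = [], s
    for a in a_seq:
        c = c + a * (1 - theta * c)  # the actual time shrinks as the start grows
        times.append(c)
    return times

print(completion_times([2, 3], 0.1))  # C1 = 2.0, C2 = 2 + 3*(1 - 0.2) ≈ 4.4
```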
We suppose that all of the jobs share a common due-window [d_1, d_2], where d_1, d_2, and D = d_2 − d_1 represent the starting time, the finishing time, and the size of the due-window, respectively. Let C_j be the completion time of J_j, and let E_j = max{d_1 − C_j, 0} and T_j = max{C_j − d_2, 0} denote the earliness and tardiness of J_j. In this article, we consider minimizing the maximum cost over all jobs (including earliness penalties, tardiness penalties, the cost of the window starting time, and the cost of the window size):

Z = max_{1≤j≤n} {max{λE_j, βT_j} + γd_1 + δD},

where λ, β, γ, and δ represent the unit penalty costs of earliness, tardiness, the window starting time, and the window size, respectively. Following the three-field notation of Gawiejnowicz [3], we denote the scheduling problem by 1 | p_j = a_j(1 − θs_j), CONW | max_{1≤j≤n} {max{λE_j, βT_j} + γd_1 + δD}, where CONW denotes the common due-window.

Main results
Lemma 1. Let [ξ] denote the job scheduled in the ξth position. If the first job starts at time s, then

C_[ξ] = (1/θ)[1 − (1 − θs) ∏_{i=1}^{ξ} (1 − θa_[i])], ξ = 1, 2, ..., n.

Proof. By induction on ξ: C_[1] = s + a_[1](1 − θs) = (1/θ)[1 − (1 − θs)(1 − θa_[1])], and the recursion C_[ξ] = C_[ξ−1] + a_[ξ](1 − θC_[ξ−1]) preserves the product form.
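The closed form C_[ξ] = (1/θ)[1 − (1 − θs)∏_{i=1}^{ξ}(1 − θa_[i])] can be checked numerically against the recursion C_[ξ] = C_[ξ−1] + a_[ξ](1 − θC_[ξ−1]); a small self-contained sketch (helper names are ours):

```python
from math import prod, isclose

def recursive_C(a_seq, theta, s=0.0):
    # direct simulation of the shortening recursion
    c = s
    for a in a_seq:
        c = c + a * (1 - theta * c)
    return c

def closed_form_C(a_seq, theta, s=0.0):
    # Lemma 1: C = (1/theta) * (1 - (1 - theta*s) * prod(1 - theta*a_i))
    return (1 - (1 - theta * s) * prod(1 - theta * a for a in a_seq)) / theta

a, theta, s = [5.0, 3.0, 2.0, 7.0], 0.02, 1.0
assert isclose(recursive_C(a, theta, s), closed_form_C(a, theta, s))
# the product over all n jobs is order-independent, so C_[n] is the same
# for every schedule with the same job set and start time s
```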

The d 1 and D are known
When d_1 and D (thus d_2) are known, the maximum earliness max E_j (tardiness max T_j) is clearly attained by the first (last) processed job. Hence we have

Lemma 2. In the case that d_1 and D (thus d_2) are known, the optimal schedule processes the job with the largest normal processing time first, and the order of the remaining jobs is arbitrary.
Proof. Let a_max = max{a_j | 1 ≤ j ≤ n}. Since the actual and normal processing times of any job are non-negative, we obtain θs − 1 < 0. By Lemma 1, C_[n] does not depend on the job order, so the maximum tardiness is fixed; placing the job with a_max first maximizes C_[1] and hence minimizes the maximum earliness.

Algorithm 1
Step 1. Find the job with the largest normal processing time (i.e., a_max = max{a_j | j = 1, 2, ..., n}) and process it in the first position (the remaining jobs are scheduled in any order).
Step 2. Set the optimal starting time of the first scheduled job to s* = max{s, 0}, where s balances the earliness cost of the first job against the tardiness cost of the last job; its maximum cost Y* can be calculated from (6).
Theorem 1. If d_1 and D are given constants, then Algorithm 1 solves the problem in O(n) time.

Proof.
Step 1 needs O(n) time; Step 2 runs in constant time; hence, the total computational complexity is O(n).
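Algorithm 1 can be sketched in Python as follows. Since the paper's closed-form expression for s* is not reproduced here, Step 2 is replaced by a numerical ternary search over s, valid because the cost is convex piecewise-linear in s (each completion time is affine increasing in s); all names are illustrative, not the paper's:

```python
def algorithm1(a, theta, d1, D, lam, beta, gamma, delta):
    # Step 1 (Lemma 2): the job with the largest normal time goes first;
    # the order of the remaining jobs is arbitrary.
    seq = sorted(a, reverse=True)

    def cost(s):
        c, worst = s, 0.0
        for x in seq:
            c = c + x * (1 - theta * c)
            worst = max(worst, lam * max(d1 - c, 0.0),
                        beta * max(c - (d1 + D), 0.0))
        return worst + gamma * d1 + delta * D

    # Step 2 (numeric stand-in for the paper's closed form): earliness
    # decreases in s and tardiness increases, so cost(s) is convex; for
    # s > d1 no job can be early, hence the minimum lies in [0, d1].
    lo, hi = 0.0, d1
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) <= cost(m2):
            hi = m2
        else:
            lo = m1
    s_star = (lo + hi) / 2
    return s_star, cost(s_star)
```

The paper's Algorithm 1 obtains the same s* in O(n) time via a closed-form expression; the search above is only a cross-check.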

The D is known
If D is known, we first determine the starting time d_1 of the due-window, and then determine the optimal schedule and optimal value of the jobs. Without loss of generality, all the jobs begin processing at time 0.
According to Lemma 2, the job with the largest normal processing time is processed first, so C_[1] = a_max. For any given schedule, the maximum completion time of the n jobs is constant, i.e., C_[n] = (1/θ)[1 − ∏_{j=1}^{n} (1 − θa_j)]. This minmax minimization can be formulated as the following linear program (LP):

min Y
s.t. Y ≥ λ(d_1 − C_[1]) + γd_1 + δD,
     Y ≥ β(C_[n] − d_1 − D) + γd_1 + δD,
     d_1 ≥ 0.
To facilitate the discussion of the optimal location of the due-window, suppose D is known. Because λC_[1], δD, βC_[n], and (δ − β)D are constants, we only need to discuss the relationship between γ and β. In most cases, it is more realistic to consider D ≤ C_[n]. Case 1: If γ > β, then γ − β > 0, so the cost is increasing in d_1 and d_1 takes its minimum value, i.e., d*_1 = 0.
If D ≤ C_[n] − C_[1], we have d_1 ≥ C_[1], so in this case there may be jobs completed before as well as after the due-window.

Algorithm 2
Step 1. Find the job with the largest normal processing time (i.e., a_max = max{a_j | j = 1, 2, ..., n}) and process it in the first position (the remaining jobs are scheduled in any order).
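The case when D is known can be sketched as follows (a Python sketch with illustrative names; Case 1 follows the analysis above, while Case 2 is handled by a numerical ternary search instead of the paper's closed-form analysis, exploiting that the cost is convex piecewise-linear in d_1):

```python
def best_window_start(a, theta, D, lam, beta, gamma, delta):
    # completion times with the largest job first (Lemma 2) and s = 0
    C, c = [], 0.0
    for x in sorted(a, reverse=True):
        c = c + x * (1 - theta * c)
        C.append(c)

    def Y(d1):
        worst = max(max(lam * max(d1 - cj, 0.0),
                        beta * max(cj - d1 - D, 0.0)) for cj in C)
        return worst + gamma * d1 + delta * D

    if gamma > beta:              # Case 1: every cost term grows with d1
        return 0.0, Y(0.0)
    lo, hi = 0.0, C[-1]           # Case 2: ternary search on the convex cost
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if Y(m1) <= Y(m2):
            hi = m2
        else:
            lo = m1
    d1 = (lo + hi) / 2
    return d1, Y(d1)
```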

The d 1 is known
In the case where d_1 is known, we first determine the size of the due-window, and then determine the optimal schedule of the jobs. Without loss of generality, all jobs arrive and begin processing at time 0.
According to Lemma 2, the job with the largest normal processing time is processed first, and the minmax minimization can be formulated analogously. Suppose that d_1 is known; we should determine the optimal D. Because λC_[1], (λ + γ)d_1, βC_[n], and (γ − β)d_1 are constants, we only need to discuss the relationship between δ and β.

Case 1: If δ ≥ β, then δ − β ≥ 0. Because δD > 0 and (δ − β)D ≥ 0 for D > 0, it is optimal to set D* = 0. In this case, the maximum cost function is Y*(d_1) = βC_[n] + (γ − β)d_1.

Case 2: If δ < β, then δ − β < 0. Because δD > 0 and (δ − β)D < 0, it is optimal to make the earliness penalty of the job with the largest normal processing time equal to the tardiness penalty of the last job, i.e., λE_[1] = βT_[n], which yields

D* = C_[n] − d_1 − (λ/β)(d_1 − C_[1]).

The maximum cost function follows accordingly.

Algorithm 3
Step 1. Find the job with the largest normal processing time (i.e., a_max = max{a_j | j = 1, 2, ..., n}) and process it in the first position (the remaining jobs are scheduled in any order).
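The case when d_1 is known can be coded directly (a sketch; function and variable names are ours). When δ < β, the window size is set by balancing λE_[1] = βT_[n], clipped at zero:

```python
def best_window_size(a, theta, d1, lam, beta, gamma, delta):
    C, c = [], 0.0
    for x in sorted(a, reverse=True):   # largest normal time first (Lemma 2)
        c = c + x * (1 - theta * c)
        C.append(c)

    def Y(D):
        worst = max(max(lam * max(d1 - cj, 0.0),
                        beta * max(cj - d1 - D, 0.0)) for cj in C)
        return worst + gamma * d1 + delta * D

    if delta >= beta:   # Case 1: widening the window never pays off
        return 0.0, Y(0.0)
    # Case 2: balance earliness of the first job and tardiness of the last:
    # lam*(d1 - C[0]) = beta*(C[-1] - d1 - D)  =>  solve for D, clip at 0
    D = max(0.0, C[-1] - d1 - lam * max(d1 - C[0], 0.0) / beta)
    return D, Y(D)
```

For example, with d_1 = 0 and δ < β there is no early job, so the window is stretched until the last job is no longer tardy, i.e., D* = C_[n].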

The d 1 and D are unknown
In this case, d_1 and D are both unknown, and we should find the optimal starting time and size of the due-window together with the job schedule such that max_{1≤j≤n} {max{λE_j, βT_j} + γd_1 + δD} is minimized. Without loss of generality, all jobs arrive and begin processing at time 0; similarly, we have the following LP:

min Y
s.t. Y ≥ λ(d_1 − C_[1]) + γd_1 + δD,
     Y ≥ β(C_[n] − d_1 − D) + γd_1 + δD,
     d_1 ≥ 0, D ≥ 0.
If δ < β, then δ − β < 0. Similarly, it is necessary to take the maximum value of the size of the due-window, i.e., D* = C_[n], yielding d*_1 = 0 and Y* = δC_[n]. Case 2: If γ ≤ β, then γ − β ≤ 0. Moreover, if C_[n] − C_[1] < D ≤ C_[n], we have 0 ≤ d_1 < C_[1]. In this case, there are no early jobs but only tardy jobs. Assuming that d_2 does not change, we find the optimal position of the due-window and let its optimal cost function be βT_[n]. Then, according to the relationship between γ and δ, the optimal value of d_1 can be obtained.

If γ ≥ δ, then γ − δ ≥ 0, and it is optimal to take the minimum value of d_1, i.e., d*_1 = 0. Because there are no early jobs, the objective is minimized by minimizing the tardiness penalty; in other words, d*_2 = C_[n]. In this case, d*_1 = 0, D* = C_[n], and the optimal cost function is Y* = δC_[n].
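When both d_1 and D are free, the cost is jointly convex piecewise-linear in (d_1, D), so a nested ternary search gives a simple numerical cross-check of the case analysis (slower than the paper's O(n) algorithm; all names are ours):

```python
def best_window(a, theta, lam, beta, gamma, delta):
    C, c = [], 0.0
    for x in sorted(a, reverse=True):   # largest normal time first (Lemma 2)
        c = c + x * (1 - theta * c)
        C.append(c)
    Cn = C[-1]

    def Y(d1, D):
        worst = max(max(lam * max(d1 - cj, 0.0),
                        beta * max(cj - d1 - D, 0.0)) for cj in C)
        return worst + gamma * d1 + delta * D

    def argmin(f, lo, hi):              # ternary search on a convex function
        for _ in range(100):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if f(m1) <= f(m2):
                hi = m2
            else:
                lo = m1
        return (lo + hi) / 2

    # outer search over d1; the partial minimum over D stays convex in d1
    d1 = argmin(lambda x: Y(x, argmin(lambda D: Y(x, D), 0.0, Cn)), 0.0, Cn)
    D = argmin(lambda D: Y(d1, D), 0.0, Cn)
    return d1, D, Y(d1, D)
```

On an instance with γ ≥ δ and γ ≤ β, the search recovers d*_1 = 0, D* = C_[n], and Y* = δC_[n], matching the case analysis above.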
Solution. According to Algorithm 4, the job with the largest normal processing time is J_6; process it in the first position (the remaining jobs are scheduled in any order). Since γ < β and γ < δ, the corresponding case of Algorithm 4 applies.

In the computational experiments, the normal processing times a_j are uniformly distributed over [1, 100] such that θ(s + Σ_{j=1}^{n} a_j − a_min) < 1, and the coefficients λ, β, γ, and δ are uniformly distributed over [1, 10].
The computational experiments for Algorithms 1-4 are summarized as follows. The maximum and average CPU times (ms) required to find the optimal solutions are given in Table 1. From Table 1, we can observe that Algorithms 1-4 are very efficient and fast, and their CPU time increases steadily as n increases from 100 to 1000.
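The experimental setup can be reproduced with a small driver (a sketch; the generator follows the parameter ranges stated above, and the timed kernel is the O(n) work common to Algorithms 1-4: one maximum scan plus one pass of completion times; the choice of θ = 0.5/(Σa_j − a_min) is ours, made so the feasibility condition holds):

```python
import random
import time

def random_instance(n, rng):
    a = [rng.uniform(1, 100) for _ in range(n)]
    # pick theta so that theta * (sum(a) - a_min) < 1 holds (s = 0)
    theta = 0.5 / (sum(a) - min(a))
    costs = [rng.uniform(1, 10) for _ in range(4)]   # lam, beta, gamma, delta
    return a, theta, costs

def timed_kernel(a, theta):
    start = time.perf_counter()
    big = max(range(len(a)), key=lambda j: a[j])     # Step 1: largest job first
    order = [big] + [j for j in range(len(a)) if j != big]
    c = 0.0
    for j in order:                                  # one O(n) completion pass
        c = c + a[j] * (1 - theta * c)
    return c, (time.perf_counter() - start) * 1000.0  # makespan, elapsed ms

rng = random.Random(1)
for n in (100, 1000):
    a, theta, _ = random_instance(n, rng)
    cmax, ms = timed_kernel(a, theta)
```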

Conclusions
In this article, we investigated minmax minimization with CONW assignment and PLSPT. The aim was to minimize max_{1≤j≤n} {max{λE_j, βT_j} + γd_1 + δD}. Polynomial-time algorithms were proposed for the scenarios in which d_1 and D are known or unknown. Future research may focus on minmax scheduling with resource allocation, problems with general deterioration effects, or minmax scheduling with learning effects (see [23][24][25][26][27]).