
Bin stretching with migration on two hierarchical machines

Original Article, published in Mathematical Methods of Operations Research.

Abstract

In this paper, we consider semi-online scheduling with migration on two hierarchical machines, with the purpose of minimizing the makespan. The meaning of two hierarchical machines is that one of the machines can run any job, while the other machine can only run specific jobs. Every instance also has a fixed parameter \(M \ge 0\), known as the migration factor. Jobs are presented one by one. Each new job has to be assigned to a machine when it arrives, and at the same time it is possible to modify the assignment of previously assigned jobs, such that the moved jobs have a total size not exceeding M times the size of the new job. The semi-online variant studied here is called bin stretching. In this problem, the optimal offline makespan is provided to the scheduler in advance. This is still a non-trivial variant for any migration factor \(M > 0\). We prove tight bounds on the competitive ratio for any migration factor M. The design and analysis is split into several cases, based on the value of M, and on the resulting competitive ratio. Unlike the online variant with migration for two hierarchical machines, this case allows an online fully polynomial time approximation scheme.



Author information


Corresponding author

Correspondence to Leah Epstein.

Ethics declarations

Conflict of interest

There are no conflicts of interest or competing interests for this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Examples for the action of all algorithms

We provide a large number of examples for our algorithms. The examples cover a multitude of cases occurring in the execution.

1.1 Algorithm A, where \(\varvec{M \ge 2.5}\)

In this section, we provide three examples for Algorithm A. Two of the examples use the migration factor \(M=3\); in both of them, step 4 is applied once. In the first example, only step 2 is applied after step 4, while in the second example, both step 3 and step 2 are applied. The third example uses \(M=20\), and there step 4 is performed several times.

Fig. 4

The schedules produced by Algorithm A with migration factor \( M = 3\) for \(I_1\) (see Sect. A.1.1) after three, four, and six jobs have been presented, respectively, and an optimal solution (with makespan 1) for that input. The final makespan for the algorithm is 1.12

1.1.1 Examples for Algorithm A with \(\varvec{M=3}\)

In the two examples of this section, we show the action of Algorithm A with a migration factor of \( M = 3\). For this value of M, we have \(\mu =\frac{2}{9} \approx 0.2222\), the value used by the algorithm in step 2 is \(1-\mu =\frac{7}{9} \approx 0.7778\), and the value used in step 3, which is also the competitive ratio for this algorithm, is \(1+\mu =\frac{11}{9}\approx 1.2222\).
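Both parameter sets quoted in this appendix (for \(M=3\) here and for \(M=20\) in Sect. A.1.2) are consistent with the closed form \(\mu = \frac{2}{2M+3}\). This closed form is our inference from the two quoted values, not a formula restated in this appendix; the sketch below merely checks that it reproduces the quoted fractions exactly:

```python
from fractions import Fraction

def mu(M):
    # Inferred closed form: mu = 2 / (2M + 3).
    # It matches mu = 2/9 for M = 3 and mu = 2/43 for M = 20 as quoted.
    return Fraction(2, 2 * M + 3)

for M, expected in [(3, Fraction(2, 9)), (20, Fraction(2, 43))]:
    m = mu(M)
    assert m == expected
    # The algorithm's thresholds: 1 - mu (step 2) and 1 + mu (step 3 / ratio).
    print(M, m, 1 - m, 1 + m)
```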

Input \(I_1\) is defined as follows:

$$\begin{aligned} J_1= & {} (0.19,2),\ J_2=(0.38,2),\ J_3=(0.39,1),\ J_4=(0.68,2),\\ J_5= & {} (0.32,2),\ J_6=(0.03,1). \end{aligned}$$

The algorithm schedules the first two jobs on the second machine in step 3, and it schedules the third job on the first machine in step 2. We get \(x_3=0.39\), \(y_3=0.57\), and \(z_3=0\). When the algorithm receives the fourth job, it reaches step 4 because \(1.25=y_3+p_4>1+\mu \) holds. In step 4, the algorithm updates W to be the set of the two jobs \(J_1\) and \(J_4\), and it schedules these jobs on the second machine. The algorithm schedules the other jobs with GoS 2 that are not in W on the first machine (this set consists of a single job, which is \(J_2\)), and we get \(x_4=0.39,\ y_4=0.87,\ z_4=0.38\). Afterwards, the algorithm schedules \(J_5\) on the first machine in step 2, because the load of the second machine is already not smaller than \(1-\mu \), i.e. \(y_4 \ge \frac{7}{9}\). Job \(J_6\) is assigned by step 2 as well, because both \(g_6=1\) and \(y_5 \ge \frac{7}{9}\) hold. Consequently, the final load of the first machine is 1.12, and the load of the second machine is 0.87. See Fig. 4 for an illustration of the process of execution of the algorithm, and an optimal solution.
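The bookkeeping of this example can be replayed mechanically. The sketch below only re-checks the arithmetic of the final assignment described above; it does not implement the decision rules of Algorithm A:

```python
from fractions import Fraction as F

# Input I_1 as (size, GoS); GoS-1 jobs can only run on the first machine.
I1 = [(F("0.19"), 2), (F("0.38"), 2), (F("0.39"), 1),
      (F("0.68"), 2), (F("0.32"), 2), (F("0.03"), 1)]

# Final assignment reached by Algorithm A with M = 3, as described above:
# machine 2 ends with W = {J_1, J_4}; everything else is on machine 1.
on_machine2 = {0, 3}          # 0-based indices of J_1 and J_4
load1 = sum(p for i, (p, g) in enumerate(I1) if i not in on_machine2)
load2 = sum(p for i, (p, g) in enumerate(I1) if i in on_machine2)

assert load1 == F("1.12") and load2 == F("0.87")
assert max(load1, load2) <= F(11, 9)   # within the ratio 11/9 of OPT = 1
```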

Fig. 5

The schedules produced by Algorithm A with migration factor \( M = 3\) for \(I_2\) (see Sect. A.1.1), and an optimal solution (with makespan 1) for that input. The final makespan for the algorithm is 1.05

In the second example, we use input \(I_2\), defined as follows:

$$\begin{aligned} J_1=(0.6,2),\ J_2=(0.65,2),\ J_3=(0.4,2),\ J_4=(0.35,1). \end{aligned}$$

The algorithm schedules the first job on the second machine in step 3, and we get \(x_1=0,\ y_1=0.6,\ z_1=0\). When the algorithm receives the second job, it reaches step 4 because \(1.25=y_1+p_2>1+\mu \) holds. In step 4, the algorithm updates W to be the set \(\{J_2\}\), schedules it on the second machine, and migrates the first job to the first machine. We get \(x_2=0,\ y_2=0.65,\ z_2=0.6\). The algorithm schedules the third job on the second machine in step 3 because \(y_2+p_3 \le 1+\mu \) holds, and the fourth job is scheduled on the first machine in step 2 because \(g_4=1\) (and additionally, at this time, it holds that \(y_3 \ge 1-\mu \)). Consequently, the load of the first machine is 0.95, and the load of the second machine is 1.05. See Fig. 5 for an illustration of the process of execution of the algorithm, and an optimal solution.

1.1.2 An example for Algorithm A with \(\varvec{M=20}\)

In this example, we show an input where step 4 is applied three times. We define input \(I_3\) for Algorithm A with a migration factor of \(M=20\). For this value of M, we have \(\mu =\frac{2}{43} \approx 0.04651\), the value used by the algorithm in step 2 is \(1-\mu =\frac{41}{43} \approx 0.95349\), and the value used in step 3, which is also the competitive ratio for this algorithm, is \(1+\mu =\frac{45}{43}\approx 1.04651\).

Fig. 6

The schedules produced by Algorithm A with migration factor \(M = 20\), after three, four, five, six, seven, and eight jobs were presented, for input \(I_3\) of Sect. A.1.2. There are migrations of \(J_1\) (when \(J_5\) arrives), of \(J_5\) (when \(J_6\) arrives), and of three jobs when \(J_7\) arrives. The final makespan for the algorithm is 1, and the obtained solution is optimal

Input \(I_3\) is defined as follows:

$$\begin{aligned} J_1= & {} (0.25,2),\ J_2=(0.28,2),\ J_3=(0.29,2),\ J_4=(0.1,1),\\ J_5= & {} (0.26,2),\ J_6=(0.27,2),\ J_7=(0.49,2),\ J_8=(0.06,1). \end{aligned}$$

The algorithm schedules the first three jobs on the second machine in step 3, and it schedules \(J_4\) on the first machine in step 2. We get \(x_4=0.1,\ y_4=0.82,\ z_4=0\). When the algorithm receives the fifth job, it reaches step 4 because \(1.08=y_4+p_5>1+\mu \) holds. In step 4, the algorithm updates W to be \(\{J_2, J_3, J_5\}\) (the subset of \(Z\cup Y\cup \{J_5\}\) of maximum total size not exceeding 1), it migrates \(J_1\) to the first machine, and it schedules W on the second machine. We get: \(x_5=0.1,\ y_5=0.83,\ z_5=0.25\). Afterwards, the algorithm repeats the same process for job \(J_6\) (because \(1.1=y_5+p_6>1+\mu \) holds), it updates W to be \(\{J_2, J_3, J_6\}\), it migrates \(J_5\) to the first machine and schedules W on the second machine, and we get: \(x_6=0.1,\ y_6=0.84,\ z_6=0.51\). The algorithm migrates \(J_1\) and \(J_5\) a second time when it receives job \(J_7\), since \(1.33=y_6+p_7>1+\mu \) holds. Here the algorithm updates W to be \(\{J_1, J_5, J_7\}\), where \(w_7=1\), it migrates all jobs in Y to the first machine, and it schedules W on the second machine. We get: \(x_7=0.1,\ y_7=1,\ z_7=0.84\). Job \(J_8\) is assigned to the first machine by step 2 because both \(g_8=1\) holds (and additionally, \(y_7\ge 1-\mu \)), and we get: \(x_8=0.16,\ y_8=1,\ z_8=0.84\). Consequently, the loads of both machines are equal to 1. Thus, in this specific case, the obtained solution is optimal. See Fig. 6 for an illustration of the process of execution of the algorithm.
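The whole trace can be sanity-checked by replaying the described events: each time a job arrives, the jobs moved between machines must have total size at most \(M\) times the size of the arriving job, and the final loads should both equal 1. A sketch replaying the description (not an implementation of Algorithm A):

```python
from fractions import Fraction as F

M = 20
sizes = [F(s) for s in ("0.25", "0.28", "0.29", "0.1",
                        "0.26", "0.27", "0.49", "0.06")]

# Jobs moved when each job arrives, per the text (0-based indices):
# J_5 arrives -> J_1 moves; J_6 arrives -> J_5 moves;
# J_7 arrives -> J_2, J_3, J_6 move to machine 1 and J_1, J_5 move back.
moves = {4: [0], 5: [4], 6: [1, 2, 5, 0, 4]}

for arriving, moved in moves.items():
    moved_total = sum(sizes[i] for i in moved)
    assert moved_total <= M * sizes[arriving]   # migration factor respected

# Final state quoted in the text: machine 2 holds W = {J_1, J_5, J_7}.
machine2 = {0, 4, 6}
load2 = sum(sizes[i] for i in machine2)
load1 = sum(sizes) - load2
assert load1 == load2 == 1                      # both loads equal OPT = 1
```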

1.2 Algorithm B, where \(\varvec{0.75 \le M < 2.5}\)

We provide four examples for Algorithm B. In the first one, we show the action of this algorithm when it receives input \(I_1\) from Sect. A.1.1. In the second one, we show the action of this algorithm in step 4 by using input \(I_4\) (which we define). In the third and fourth examples, we show the action of the algorithm in step 5 on inputs \(I_5\) and \(I_6\), which we define; the actual assignment is performed by step 5.1 and step 5.2, respectively, for these two inputs.

Recall that the competitive ratio for this algorithm is 1.25, and the values used in steps 2 and 3 are based on this value.

1.2.1 An example for Algorithm B, such that there is no migration

Algorithm B schedules input \(I_1\) (see Sect. A.1.1) using only steps 2 and 3, i.e. it does not migrate any jobs. This holds because scheduling the fourth job on the second machine does not exceed the threshold of 1.25, and afterwards the load of that machine is at least 0.75, i.e. \(0.75 \le y_4 =1.25\), which keeps the machines balanced with respect to loads. Consequently, \(x_6=0.42,\ y_6=1.25,\ z_6=0.32\), the load of the first machine is 0.74, and the load of the second machine is 1.25. See Fig. 7 for an illustration and additional details.

Fig. 7

The schedules produced by Algorithm B for input \(I_1\) of Sect. A.1.1: (1) The algorithm schedules the first two jobs on the second machine in step 3, and the third job is scheduled on the first machine in step 2. (2) The algorithm schedules the fourth job on the second machine in step 3 and we get \(0.75 \le y_4\). (3) All other jobs will be scheduled on the first machine in step 2, and we have \(c_{B}(I_1)=1.25\) and \(c^*(I_1)=1\) (which holds by assigning \(J_4\) and \(J_5\) to the second machine, and the other jobs to the first machine)

1.2.2 An example for Algorithm B, such that step 4 is applied

In this example, we define input \(I_4\) and in particular, we exhibit the action of the algorithm in step 4 when the fifth job of \(I_4\) arrives. The input is:

$$\begin{aligned} J_1= & {} (0.01,2),\ J_2=(0.36,1),\ J_3=(0.2,2),\ J_4=(0.36,2),\\ J_5= & {} (0.79,2),\ J_6=(0.28,2). \end{aligned}$$

The first four jobs are scheduled in steps 2 (job \(J_2\)) and 3 (the other three jobs), so we get: \(x_4=0.36,\ y_4=0.57\), and \(z_4=0\). In the process of assignment of the fifth job, the algorithm reaches step 4 because \(y_4+p_5>1.25\) and \(p_5\ge 0.75\) hold. We have \(0.75\cdot p_5=0.5925\). In step 4, the algorithm updates W to be the set of the three jobs \(J_1\), \(J_3\), and \(J_4\) (whose total size is 0.57), it migrates W to the first machine, and it schedules \(J_5\) on the second machine, because \(y_4-w_5+p_5 = p_5 \le 1.25\). We get: \(x_5=0.36,\ y_5=0.79\), and \(z_5=0.57\). The algorithm schedules the last job on the first machine in step 2 because \(y_5 \ge 0.75\) holds, and it updates the variables as follows: \(x_6=0.36,\ y_6=0.79,\ z_6=0.85\). Consequently, the load of the first machine is 1.21, and the load of the second machine is 0.79. See Fig. 8.
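The conditions and loads of this example can be re-checked directly (a replay of the description above, not an implementation of Algorithm B):

```python
from fractions import Fraction as F

sizes = {1: F("0.01"), 2: F("0.36"), 3: F("0.2"),
         4: F("0.36"), 5: F("0.79"), 6: F("0.28")}
p5 = sizes[5]
Y = [1, 3, 4]                     # jobs on machine 2 before J_5 arrives
y4 = sum(sizes[j] for j in Y)     # 0.57

assert y4 + p5 > F("1.25") and p5 >= F("0.75")   # step 4 is reached
assert y4 <= F("0.75") * p5                      # migrating all of Y is allowed
assert p5 <= F("1.25")                           # J_5 alone fits under the bound

# Final loads quoted in the text:
assert y4 + sizes[2] + sizes[6] == F("1.21")     # machine 1: W plus J_2 and J_6
assert p5 == F("0.79")                           # machine 2: J_5 alone
```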

Fig. 8

The schedules produced by Algorithm B for input \(I_4\) of Sect. A.2.2 (from the moment when the fourth job had arrived), and an optimal solution for that input. We have \(c_{B}(I_4)=1.21\) and \(c^*(I_4)=1\)

1.2.3 An example for Algorithm B, such that step 5.1 is applied

In this example, we use a new input \(I_5\) to show the action of Algorithm B. In particular, we exhibit the action of the algorithm in step 5 when the fourth job arrives. Input \(I_5\) is:

$$\begin{aligned} J_1= & {} (0.2,2),\ J_2=(0.36,2),\ J_3=(0.07,2),\\ J_4= & {} (0.73,2),\ J_5=(0.28,1),\ J_6=(0.36,2). \end{aligned}$$

The first three jobs are scheduled on the second machine in step 3, and we get: \(x_3=0,\ y_3=0.63,\ z_3=0\). In the process of assignment of the fourth job, the algorithm reaches step 5, because \(1.36=y_3+p_4>1.25\) and \(0.73=p_4<0.75\) hold. The largest job of the current set Y is \(J_2\), and \(p_4^{\max Y}=p_2=0.36\), and together with the fourth job, it holds that \(p_4+p_4^{\max Y}=1.09<1.25\), so W will be computed. Since \(p_4^{\max Y}=p_2=0.36 \ge \frac{y_3}{2}=0.315\) holds, the algorithm defines W in step 5.1, and it is defined to be the set of the two jobs \(J_1\) and \(J_3\). The algorithm migrates W to the first machine and schedules \(J_4\) on the second machine, and we get: \(x_4=0,\ y_4=1.09\), and \(z_4=0.27\). The algorithm schedules the last two jobs on the first machine in step 2 (for both jobs it holds that the load of the second machine is sufficiently large, and for the fifth job the GoS is 1), and we get: \(x_6=0.28,\ y_6=1.09,\ z_6=0.63\). As a result, the load of the first machine is 0.91, the load of the second machine is 1.09. See Fig. 9.
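In this example, W turns out to be Y without its largest job; whether that is the general rule of step 5.1 is not restated in this appendix, so the sketch below only verifies the quoted conditions and loads for this particular input:

```python
from fractions import Fraction as F

sizes = [F("0.2"), F("0.36"), F("0.07")]        # J_1, J_2, J_3 on machine 2
p4 = F("0.73")
y3 = sum(sizes)                                 # 0.63
p_max = max(sizes)                              # p_2 = 0.36

assert y3 + p4 > F("1.25") and p4 < F("0.75")   # step 5 is reached
assert p4 + p_max <= F("1.25")                  # so W is computed at all
assert p_max >= y3 / 2                          # 0.36 >= 0.315: step 5.1 applies

# W = {J_1, J_3}: here, Y without its largest job; the largest job stays put.
w = y3 - p_max                                  # total size 0.27 migrates
assert w == F("0.27")
assert y3 - w + p4 == F("1.09")                 # new load of machine 2
```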

Fig. 9

The schedules produced by Algorithm B for input \(I_5\) of Sect. A.2.3 (from the moment when the third job had arrived), and an optimal solution for that input. We have \(c_{B}(I_5)=1.09\) and \(c^*(I_5)=1\)

1.2.4 An example for Algorithm B, such that step 5.2 is applied

In this example, we use a new input \(I_6\) to show the action of Algorithm B. We exhibit the action of the algorithm in step 5 when the fourth job arrives. Input \(I_6\) is:

$$\begin{aligned} J_1= & {} (0.17,2),\ J_2=(0.2,2),\ J_3=(0.26,2),\ J_4=(0.7,2),\\ J_5= & {} (0.1,1),\ J_6=(0.57,2). \end{aligned}$$

The first three jobs are scheduled on the second machine in step 3, and we get: \(x_3=0,\ y_3=0.63,\ z_3=0\). For the fourth job, the algorithm reaches step 5 because \(1.33=y_3+p_4>1.25\) and \(0.7=p_4<0.75\) hold. The largest job of the current set Y is \(J_3\), and \(p_4^{\max Y}=p_3=0.26\), and together with the fourth job, it holds that \(p_4+p_4^{\max Y}=0.96<1.25\), so W will be computed. Since \(p_4^{\max Y}=p_3=0.26\) while \(y_3=0.63\) holds, the algorithm defines W in step 5.2. The set W is defined to be the set \(\{J_3\}\). Specifically, step 5.2 is applied because \(0.25 \le p_4^{\max Y} < \frac{y_3}{2}=0.315\) holds. The algorithm migrates W to the first machine, and it schedules \(J_4\) on the second machine. We get: \(x_4=0,\ y_4=1.07\), and \(z_4=0.26\). Similarly to the previous examples, the algorithm schedules the last two jobs on the first machine in step 2, and it defines its new values as follows: \(x_6=0.1,\ y_6=1.07,\ z_6=0.83\). As a result, the load of the first machine is 0.93, the load of the second machine is 1.07. See Fig. 10.

Fig. 10

The schedules produced by Algorithm B for input \(I_6\) of Sect. A.2.4 (from the moment when the third job had arrived), and an optimal solution for that input. We have \(c_{B}(I_6)=1.07\) and \(c^*(I_6)=1\)

1.3 Algorithm C, where \(\varvec{0.5 \le M < \frac{2}{3} }\)

Here, we provide three examples for Algorithm C. In the first example, we show the action of the algorithm when it receives input \(I_1\) of Sect. A.1.1. In the second one, we show the action of the algorithm in step 4 by using a new input \(I_7\). In the third example, we show the action of the algorithm in step 5 when it receives input \(I_8\), which we define.

In all three examples for this algorithm, we use \(M=0.6\), and thus the value used in step 2 is 0.6, and the value used in step 3 (and the competitive ratio) is 1.4.

1.3.1 An example for Algorithm C, using \(\varvec{I_1}\)

Algorithm C with migration factor \(M = 0.6\) schedules \(I_1\) of Sect. A.1.1 using only steps 2 and 3, i.e. it does not migrate any jobs. This is because the load of the second machine after scheduling the fourth job on this machine does not exceed the upper bound (competitive ratio) \(2-M= 1.4\), and this keeps the loads relatively balanced. Consequently, \(x_6=0.42,\ y_6=1.25,\ z_6=0.32\), the load of the first machine is 0.74, and the load of the second machine is 1.25 (see the output in Fig. 7; although that figure depicts the run of a different algorithm on the same input, the output is identical).

1.3.2 An example for Algorithm C, such that step \(\varvec{4}\) is applied

In this example, we exhibit the action of the algorithm in step 4 when the third job of our new input arrives. Input \(I_7\) is:

$$\begin{aligned} J_1=(0.5,2),\ J_2=(0.09,2),\ J_3=(0.82,2),\ J_4=(0.18,2),\ J_5=(0.41,1). \end{aligned}$$

Recall that \(M=0.6\). The first two jobs are scheduled on the second machine in step 3, and we get: \(x_2=0,\ y_2=0.59,\ z_2=0\). When the third job arrives, the algorithm reaches step 4 because \(p_3^{\max Y}=p_1=0.5 > M\cdot p_3=0.492\) holds. In step 4, the algorithm schedules \(J_3\) on the first machine, and we get: \(x_3=0,\ y_3=0.59\), and \(z_3=0.82\). The algorithm schedules the fourth job on the second machine in step 3 because \(0.77=y_3+p_4 \le 1.4=2-M\) holds, and the last job on the first machine in step 2 because \(g_5=1\) holds. As a result, the load of the first machine is 1.23, and the load of the second machine is 0.77. See Fig. 11.
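The step-4 condition and the resulting loads can be verified directly (a replay of this example, not an implementation of Algorithm C):

```python
from fractions import Fraction as F

M = F("0.6")
sizes = [F("0.5"), F("0.09")]        # J_1, J_2 on machine 2
p3 = F("0.82")
p_max = max(sizes)                   # p_1 = 0.5

# Step 4 is reached: the largest job of Y is too big relative to M * p_3.
assert p_max > M * p3                # 0.5 > 0.492

# J_3 therefore goes to machine 1 without any migration.
y3 = sum(sizes)                      # machine 2 stays at 0.59
assert y3 + F("0.18") <= 2 - M       # J_4 still fits on machine 2: 0.77 <= 1.4
assert p3 + F("0.41") == F("1.23")   # final load of machine 1: J_3 plus J_5
```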

Fig. 11

The schedules produced by Algorithm C with migration factor \( M =0.6\) for input \(I_7\) of Sect. A.3.2 (from the moment when the second job had arrived), and an optimal solution for that input. We have \(c_{C}(I_7)=1.23\) and \(c^*(I_7)=1\)

1.3.3 An example for Algorithm C, such that step 5 is applied

In this example, we exhibit the action of the algorithm in step 5 when the fifth job of the input arrives. Input \(I_8\) is:

$$\begin{aligned} J_1= & {} (0.35,2),\ J_2=(0.11,2),\ J_3=(0.13,2),\ J_4=(0.41,1),\\ J_5= & {} (0.82,2),\ J_6=(0.18,2). \end{aligned}$$

Recall that \(M=0.6\). The first three jobs are scheduled on the second machine in step 3, and the fourth job is scheduled on the first machine in step 2. At this time, we get: \(x_4=0.41,\ y_4=0.59,\ z_4=0\). When the fifth job arrives, the algorithm reaches step 5, because \(p_5^{\max Y}=p_1=0.35 \le M \cdot p_5=0.492\). It updates W to be \(\{J_1\}\) (the minimum length prefix with total size at least \(p_5+y_4-(2-M)=0.01\)), it migrates W to the first machine, and it schedules \(J_5\) on the second machine. We get: \(x_5=0.41,\ y_5=1.06\), and \(z_5=0.35\). The algorithm schedules the last job on the first machine in step 2 and updates: \(x_6=0.41,\ y_6=1.06,\ z_6=0.53\). As a result, the load of the first machine is 0.94, and the load of the second machine is 1.06. See Fig. 12.
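The prefix computation of step 5 can be sketched directly from the description above (the prefix is taken over the jobs of Y in arrival order; this is a replay of the example, not a full implementation of Algorithm C):

```python
from fractions import Fraction as F

M = F("0.6")
Y = [F("0.35"), F("0.11"), F("0.13")]   # J_1, J_2, J_3 in arrival order
p5, y4 = F("0.82"), sum(Y)              # y4 = 0.59

assert max(Y) <= M * p5                 # 0.35 <= 0.492: step 5 is reached

# W: minimum-length prefix of Y whose total size is at least
# p5 + y4 - (2 - M) = 0.01, per the text.
need = p5 + y4 - (2 - M)
assert need == F("0.01")
W, total = [], F(0)
for p in Y:
    if total >= need:
        break
    W.append(p)
    total += p
assert W == [F("0.35")]                 # W = {J_1}
assert y4 - total + p5 == F("1.06")     # new load of machine 2
```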

Fig. 12

The schedules produced by Algorithm C with migration factor \( M =0.6\) for input \(I_8\) of Sect. A.3.3 (from the moment when the fourth job had arrived), and an optimal solution for that input. We have \(c_{C}(I_8)=1.06\) and \(c^*(I_8)=1\)

1.4 Algorithm D, where \(\varvec{\frac{2}{3} \le M < 0.75}\)

In this part we provide six examples for Algorithm D. In the first example, we show the action of the algorithm when it receives input \(I_1\) of Sect. A.1.1. In the second and third examples, we show the action of the algorithm in step 4, by using inputs \(I_9\) and \(I_{10}\), which we define. In the last three examples, we show the action of the algorithm in step 5 for three inputs \(I_{11}\), \(I_{12}\), and \(I_{13}\), which we also define here.

In all six examples, we use \(M=0.7\). As a result, the value used by the algorithm in step 2 is \(M=0.7\), and the value used in step 3, which is also the competitive ratio for this algorithm, is \(2-M=1.3\).

1.4.1 An example for Algorithm D, using \(\varvec{I_1}\)

Recall that \(M=0.7\). Algorithm D schedules input \(I_1\) of Sect. A.1.1 only using the two steps 2 and 3, i.e. it does not migrate any jobs. This holds because after scheduling the fourth job on the second machine we get: \(x_4=0.31,\ y_4=1.25,\ z_4=0\), i.e. the load of the second machine is between the two bounds \(M=0.7\) and \(2-M=1.3\), so any job that arrives after \(J_4\) will be scheduled on the first machine. Consequently, \(x_6=0.42,\ y_6=1.25,\ z_6=0.32\), and the makespan is 1.25 (see Fig. 7 again).

Fig. 13

The schedules produced by Algorithm D with migration factor \( M =0.7\) for input \(I_9\) of Sect. A.4.2 (from the moment when the third job had arrived), and an optimal solution for that input. We have \(c_{D}(I_9)=1.08\) and \(c^*(I_9)=1\)

1.4.2 An example for Algorithm D, such that step 4.2 is applied

In this example, we exhibit the action of the algorithm in step 4 where \(y_{j-1}-w_j+p_j \le 2-M\) holds, which is tested when the fourth job in input \(I_9\) arrives. Input \(I_9\) is:

$$\begin{aligned} J_1=(0.54,2),\ J_2=(0.14,2),\ J_3=(0.32,1),\ J_4=(0.78,2),\ J_5=(0.22,2). \end{aligned}$$

Recall that \(M=0.7\). The first two jobs are scheduled on the second machine in step 3, and the third job on the first machine in step 2. We get: \(x_3=0.32,\ y_3=0.68\), and \(z_3=0\). When the fourth job arrives, the algorithm reaches step 4, because \(1.46=y_3+p_4 >2-M=1.3\) and \(p_4=0.78\ge M=0.7\) hold. In step 4, the algorithm updates W to be \(\{J_1\}\) (the maximum length prefix with total size at most \(M \cdot p_4=0.7 \cdot 0.78=0.546\), where \(M\cdot p_4 \ge p_1=0.54\), but \(p_1+p_2=0.68>0.546\)). We have \(y_3-w_4+p_4 = 0.68-0.54+0.78 = 0.92 \le 1.3=2-M\). Thus, the algorithm continues to step 4.2, and it migrates W to the first machine. The algorithm schedules \(J_4\) on the second machine, and we get: \(x_4=0.32,\ y_4=0.92\), and \(z_4=0.54\). The algorithm schedules the fifth job on the first machine in step 2 and it has \(x_5=0.32,\ y_5=0.92\), and \(z_5=0.76\). As a result, the load of the first machine is 1.08, and the load of the second machine is 0.92. See Fig. 13.

1.4.3 An example for Algorithm D, such that step 4.1 is applied

In this example, we exhibit the action of the algorithm in step 4 where \(y_{j-1}-w_j+p_j > 2-M\) holds, which holds when the fourth job arrives. Input \(I_{10}\), which is used for this example, is:

$$\begin{aligned} J_1=(0.54,2),\ J_2=(0.14,2),\ J_3=(0.32,1),\ J_4=(0.71,2),\ J_5=(0.29,2). \end{aligned}$$

Recall that \(M=0.7\). The first two jobs are scheduled on the second machine in step 3, and the third job is scheduled on the first machine in step 2. At this time we have \(x_3=0.32,\ y_3=0.68\), and \(z_3=0\). When the fourth job arrives, the algorithm reaches step 4, because \(p_4=0.71\ge M=0.7\) holds, and it did not apply earlier steps because \(1.39=y_3+p_4 >2-M=1.3\) holds while \(y_3<M=0.7\). In step 4, the algorithm updates W to be an empty set (the maximum length prefix with total size at most \(M \cdot p_4=0.7 \cdot 0.71=0.497 < p_1=0.54\)), and we get: \(y_3-w_4+p_4 = 0.68-0+0.71 = 1.39 > 1.3=2-M\), so the algorithm schedules \(J_4\) on the first machine, and we get: \(x_4=0.32,\ y_4=0.68\), and \(z_4=0.71\). The algorithm schedules the fifth job on the second machine in step 3 and it has \(x_5=0.32,\ y_5=0.97\), and \(z_5=0.71\). As a result, the load of the first machine is 1.03, the load of the second machine is 0.97. See Fig. 14.
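The maximum-length-prefix rule of step 4 can be checked on both \(I_9\) (Sect. A.4.2) and \(I_{10}\) with one helper; this is a sketch replaying the two examples, not an implementation of Algorithm D:

```python
from fractions import Fraction as F

M = F("0.7")

def max_prefix(Y, limit):
    # Maximum-length prefix of Y with total size at most limit (per step 4).
    W, total = [], F(0)
    for p in Y:
        if total + p > limit:
            break
        W.append(p)
        total += p
    return W, total

Y = [F("0.54"), F("0.14")]      # J_1, J_2 on machine 2 in both inputs

# I_9: p_4 = 0.78, budget 0.546 -> W = {J_1}; step 4.2 migrates W.
W, w = max_prefix(Y, M * F("0.78"))
assert W == [F("0.54")]
assert sum(Y) - w + F("0.78") == F("0.92") <= 2 - M

# I_10: p_4 = 0.71, budget 0.497 < p_1 -> W is empty; step 4.1 sends
# J_4 to the first machine instead.
W, w = max_prefix(Y, M * F("0.71"))
assert W == [] and sum(Y) - w + F("0.71") > 2 - M
```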

Fig. 14

The schedules produced by Algorithm D with migration factor \( M =0.7\) for input \(I_{10}\) of Sect. A.4.3 (from the moment when the third job had arrived), and an optimal solution for that input. We have \(c_{D}(I_{10})=1.03\) and \(c^*(I_{10})=1\)

1.4.4 An example for Algorithm D, such that both steps 5.1 and 5.3 are applied

In this example, we exhibit the action of the algorithm in step 5 where \(w_j > \min \{\frac{2\,M}{3}, M\cdot p_j\}\) and \(y_{j-1}-w_j+p_j \le 2-M\) hold, which happens when the fourth input job arrives. The input \(I_{11}\) used for this example is:

$$\begin{aligned} J_1=(0.54,2),\ J_2=(0.14,2),\ J_3=(0.32,1),\ J_4=(0.69,2),\ J_5=(0.31,2). \end{aligned}$$

Recall that \(M=0.7\). The first two jobs are scheduled on the second machine in step 3, and the third job is scheduled on the first machine in step 2. At this time, we get: \(x_3=0.32,\ y_3=0.68,\ z_3=0\). When the fourth job arrives, the algorithm reaches step 5 because \(1.37=y_3+p_4 >2-M=1.3\) and \(p_4=0.69 < M=0.7\) hold. In step 5 the algorithm updates W twice. First, W is defined as \(\{J_1\}\) (the minimum length prefix with total size at least \(\frac{M}{3}=\frac{7}{30}\approx 0.2333 \le p_1=w_4=0.54\)). At this time, we have: \(w_4 > \min \{\frac{2\,M}{3},\ M\cdot p_4 \} = \min \{\frac{7}{15}\approx 0.46667,\ 0.483\}\). The algorithm updates W in step 5.1 again to be \(Y \setminus W = \{J_2\}\). In step 5.3, the algorithm migrates W to the first machine, and it schedules \(J_4\) on the second machine. Step 5.2 is not applied because \(y_3 - w_4+p_4 = 1.23 \le 1.3 = 2-M\) holds. We get: \(x_4=0.32,\ y_4=1.23\), and \(z_4=0.14\). The algorithm schedules the last job on the first machine in step 2 and we get: \(x_5=0.32,\ y_5=1.23\), and \(z_5=0.45\). As a result, the load of the first machine is 0.77, and the load of the second machine is 1.23. See Fig. 15.
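The two W-updates of this example can also be verified numerically. The sketch below transcribes only the conditions quoted above (the minimum-length prefix rule, the \(\min \{\frac{2\,M}{3}, M\cdot p_j\}\) test, and the replacement \(W := Y \setminus W\)); it is not a full implementation of step 5.

```python
from itertools import accumulate

M, p4, p5 = 0.7, 0.69, 0.31
Y = [0.54, 0.14]              # jobs on the second machine (J1, J2)
x3 = 0.32                     # third job, assigned to the first machine

# Step 5: W is the minimum-length prefix of Y with total size at least M / 3.
w = next(t for t in accumulate(Y) if t >= M / 3)   # 0.54, so W = {J1}
if w > min(2 * M / 3, M * p4):                     # 0.54 > 0.4667: step 5.1
    w = sum(Y) - w                                 # W := Y \ W = {J2}, w = 0.14
y4 = sum(Y) - w + p4                               # 1.23 <= 1.3: step 5.3
assert y4 <= 2 - M
first = x3 + w + p5           # migrated J2 and the fifth job join machine 1
print(round(first, 2), round(y4, 2))   # 0.77 1.23
```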

Fig. 15

The schedules produced by Algorithm D with migration factor \( M =0.7\) for input \(I_{11}\) of Sect. A.4.4 (from the moment when the third job had arrived), and an optimal solution for that input. We have \(c_{D}(I_{11})=1.23\) and \(c^*(I_{11})=1\)

1.4.5 An example for Algorithm D, such that step 5.3 is applied

In this example, we exhibit the action of the algorithm in step 5 where \(w_j \le \min \{\frac{2\,M}{3}, M\cdot p_j\}\) and \(y_{j-1}-w_j+p_j \le 2-M\) hold, and this happens when the third input job arrives. The input \(I_{12}\) used for this example is:

$$\begin{aligned} J_1=(0.45,2),\ J_2=(0.24,2),\ J_3=(0.65,2),\ J_4=(0.31,1),\ J_5=(0.35,2). \end{aligned}$$

Recall that \(M=0.7\). The first two jobs are scheduled on the second machine in step 3, and we get: \(x_2=0,\ y_2=0.69,\ z_2=0\). When the third job arrives, the algorithm reaches step 5 because \(1.34=y_2+p_3 >2-M\) and \(p_3=0.65 < M=0.7\) hold. In step 5, the algorithm defines W to be \(\{J_1\}\) (the minimum length prefix with total size at least \(\frac{M}{3}=\frac{7}{30}\approx 0.2333 \le p_1=w_3=0.45\)). Here, we get: \(w_3 \le \min \{\frac{2\,M}{3},\ M\cdot p_3 \} = \min \{\frac{7}{15},\ 0.455\}\), so step 5.1 is not applied. In this case \(0.89=y_2 - w_3+p_3 \le 2-M=1.3\) holds, so the algorithm only applies step 5.3, and it migrates W to the first machine while scheduling \(J_3\) on the second machine. We get: \(x_3=0,\ y_3=0.89\), and \(z_3=0.45\). The algorithm schedules the last two jobs on the first machine in step 2 and it updates its variables as follows: \(x_5=0.31,\ y_5=0.89,\ z_5=0.8\). As a result, the load of the first machine is 1.11, and the load of the second machine is 0.89. See Fig. 16.
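Here the \(\min\)-test succeeds and no second update of W is needed, so the corresponding check is shorter. As before, this sketch only reproduces the example's arithmetic under the conditions quoted above.

```python
from itertools import accumulate

M, p3 = 0.7, 0.65
Y = [0.45, 0.24]              # jobs on the second machine (J1, J2)

# Step 5: W is the minimum-length prefix of Y with total size at least M / 3.
w = next(t for t in accumulate(Y) if t >= M / 3)   # 0.45, so W = {J1}
assert w <= min(2 * M / 3, M * p3)  # 0.45 <= 0.455: step 5.1 is skipped
y3 = sum(Y) - w + p3                # 0.89 <= 1.3: step 5.3 applies
assert y3 <= 2 - M
first = 0.31 + w + 0.35       # J4 (grade 1) and J5 join the migrated J1
print(round(first, 2), round(y3, 2))   # 1.11 0.89
```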

Fig. 16

The schedules produced by Algorithm D with migration factor \( M =0.7\) for input \(I_{12}\) of Sect. A.4.5 (from the moment when the second job had arrived), and an optimal solution for that input. We have \(c_{D}(I_{12})=1.11\) and \(c^*(I_{12})=1\)

Fig. 17

The schedules produced by Algorithm D with migration factor \( M =0.7\) for input \(I_{13}\) of Sect. A.4.6, and an optimal solution for that input. We have \(c_{D}(I_{13})=1.07,\ c^*(I_{13})=1\)

1.4.6 An example for Algorithm D, such that both steps 5.1 and 5.2 are applied

In this example, we exhibit the action of the algorithm in step 5 where \(w_j > \min \{\frac{2\,M}{3}, M\cdot p_j\}\) and \(y_{j-1}-w_j+p_j > 2-M\) hold, and this happens when the second input job arrives. The input \(I_{13}\) used for this example is:

$$\begin{aligned} J_1=(0.69,2),\ J_2=(0.62,2),\ J_3=(0.38,2),\ J_4=(0.31,1). \end{aligned}$$

Recall that \(M=0.7\). The first job is scheduled on the second machine in step 3, and we get: \(x_1=0,\ y_1=0.69,\ z_1=0\). When the second job arrives, the algorithm reaches step 5 because \(1.31=y_1+p_2 >2-M=1.3\) and \(p_2=0.62 < M=0.7\) hold. In step 5, the algorithm defines W to be \(\{J_1\}\) (the minimum length prefix with total size at least \(\frac{M}{3}=\frac{7}{30}\approx 0.2333 \le p_1=w_2=0.69\)). Here, we get: \(w_2 > \min \{\frac{2\,M}{3},\ M\cdot p_2 \} = \min \{\frac{7}{15},\ 0.434\}\), so the algorithm updates W again in step 5.1 to be \(Y \setminus W = \{\}\). Since \(y_1 - w_2+p_2 = 1.31 > 1.3 = 2-M\) holds, the algorithm schedules \(J_2\) on the first machine in step 5.2, and we get: \(x_2=0,\ y_2=0.69,\ z_2=0.62\). The algorithm schedules the third job on the second machine in step 3, and it schedules the last job on the first machine in step 2. We get: \(x_4=0.31,\ y_4=1.07\), and \(z_4=0.62\). As a result, the load of the first machine is 0.93, the load of the second machine is 1.07. See Fig. 17.
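In this last example the replacement \(W := Y \setminus W\) empties W, and the step 5.2 condition fires. The sketch below again retraces only this example's arithmetic, using the conditions quoted above rather than a full specification of the algorithm.

```python
from itertools import accumulate

M, p2 = 0.7, 0.62
Y = [0.69]                     # J1, the only job on the second machine

# Step 5: W is the minimum-length prefix of Y with total size at least M / 3.
w = next(t for t in accumulate(Y) if t >= M / 3)   # 0.69, so W = {J1}
if w > min(2 * M / 3, M * p2):                     # 0.69 > 0.434: step 5.1
    w = sum(Y) - w                                 # W := Y \ W = {}, w = 0
assert sum(Y) - w + p2 > 2 - M  # 1.31 > 1.3: step 5.2, J2 goes to machine 1
first = 0.31 + p2               # J4 (grade 1) joins J2 on the first machine
second = sum(Y) + 0.38          # J3 joins J1 on the second machine
print(round(first, 2), round(second, 2))   # 0.93 1.07
```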

Cite this article

Akaria, I., Epstein, L. Bin stretching with migration on two hierarchical machines. Math Meth Oper Res 98, 111–153 (2023). https://doi.org/10.1007/s00186-023-00830-3
