Adaptive appointment scheduling with periodic updates

The most prominent conclusion is that, typically, even with relatively few updates, costs can be reduced drastically. Our experiments, however, also reveal that one can construct instances for which increasing the rescheduling frequency does not guarantee a cost reduction; we provide an in-depth analysis of this remarkable phenomenon. The work has broad application potential, e.g., in healthcare and for delivery companies.


Introduction
In many service systems appointment scheduling plays a pivotal role. A schedule is (in its most basic form) an increasing sequence of arrival times t_1, …, t_n at which the n clients are supposed to arrive at the service facility. These times should be chosen such that the interests of the service provider and the clients are properly balanced. It is customary to measure the service provider's cost in terms of her idle time, and the clients' (aggregate) cost in terms of the sum of their waiting times. On the one hand, the service provider wishes to run the system efficiently, in the sense that there is always a client in service; on the other hand, the clients should be provided a sufficiently high level of service, i.e., their waiting times should be as low as possible. Minimizing a cost function that encompasses both these idle times and waiting times over the arrival times provides us with a schedule that optimally balances the interests of both the service provider and her clients.
Classically, one works with what could be called 'a priori schedules': by minimizing the given objective function the arrival times t_1, …, t_n are determined, these are announced to the clients, and they are not adjusted while delivering the service. It is clear, however, that such static schedules cannot respond to disruptions, such as (in the healthcare context) no-shows, walk-ins, availability of certain equipment, etc., and (in the delivery context) the fact that in practice one typically works with delivery windows rather than with delivery times.
In the setup we consider in this paper, the cost function we focus on is the sum of the mean idle times and the mean waiting times, but in principle, one could work with any cost function that is based on distributional properties of the idle times and waiting times (second moments, certain quantiles, etc.). Even though our approach allows us to reschedule at any time we want, for reasons of transparency we mainly consider periodic updates (i.e., the update epochs are equidistant in time). We in addition analyze alternative mechanisms, such as rescheduling after each client's start of service and rescheduling after each client arrival. Other practically relevant extensions that we consider are ones in which we impose restrictions on the set of admissible adapted schedules. This includes a study of the setting in which one cannot adjust the appointments that lie in a given time interval immediately after the update (e.g., within the first 30 minutes after the rescheduling epoch).
We proceed by discussing the problem at a more technical level. At any rescheduling epoch, the state information that is available is (i) the number of clients who are currently waiting, (ii) the elapsed service time of the client in service (if any), and (iii) the number of clients who have not entered the system yet. We consider the (natural) setting in which we are not given the full service-time distributions, but just their means and variances. Following the approach advocated in e.g., Kuiper et al. (2015, 2022) and Tijms (1994), we identify phase-type distributions with this mean and variance. Phase-type distributions have various attractive properties: most notably, they can be used to fit any distribution on [0, ∞) arbitrarily closely (Asmussen, 2003, Theorem III.4.2), and they allow for relatively straightforward computations. They are defined as the absorption time of a suitably constructed continuous-time Markov chain with a given initial distribution α and transition rate matrix T.
The framework developed thus combines two attractive features. In the first place, as pointed out above, whereas in the existing literature one predominantly focuses on static schedules, we here focus on adaptive schedules. We propose a procedure to update the appointment schedule, typically leading to a substantial reduction of the objective function. In the second place, our approach works with service-time distributions with any mean and variance, whereas in many appointment scheduling studies, exponentially distributed service times are assumed (Pegden and Rosenshine, 1990; Stein and Côté, 1994). This exponentiality assumption is often imposed to ease the analysis; indeed, the memoryless property facilitates symbolic computation of the objective function. The idea is that, by using a phase-type fit, a relatively explicit evaluation is still possible, i.e., in terms of a matrix-valued recursive scheme for the mean waiting and idle times, in the spirit of the one presented in Wang (1997).
Contributions. The paper has three main contributions. In the first place, we propose an adaptive scheduling framework, through which one can assess the gain due to rescheduling. More specifically, in the setup considered the schedule is periodically updated based on the state information available, viz. the number of clients in the system and the elapsed service time of the client in service (if any), besides the number of clients still to be scheduled.
In the second place, so as to determine the adapted schedule, a prerequisite is that we are able to compute the objective function given the state information. An important property that we rely upon is that the residual service time (given that the elapsed service time is, say, w) is again of phase type. The main implication of this fact is that the complexity of determining a schedule update is as high as the complexity of evaluating a static (i.e., a priori determined) schedule. We point out how the parameters of the phase-type distribution of the residual service time can be determined; concretely, it has the same subintensity matrix T as the service time itself, but an adjusted initial distribution. We in addition show how the methodology developed in Wang (1997), which assumes homogeneous service times, can be generalized so as to also cover heterogeneous service times.
In the third place, we have performed a series of numerical experiments that primarily aim at quantifying the efficiency gain that can be achieved by adaptive scheduling (that is, relative to working with the static schedule). We provide a publicly available web tool by which adaptive schedules can be determined in real time. In our experiments we in particular study the impact of the rescheduling frequency. Generally, the more often the schedule is adapted, the higher the gain, as expected; remarkably, even with relatively few updates, typically a substantial gain is achieved. One has to be careful though, in that one can construct instances in which a higher rescheduling frequency negatively impacts the gains; we provide an in-depth analysis of this counterintuitive phenomenon. We in addition investigate how the gain depends on (i) the variability of the service times and (ii) the cost function (in terms of the weight associated with the mean idle times relative to that of the mean waiting times). Then we consider several more realistic variants of our basic adaptive scheduling procedure. Most notably, we assess the variant in which one cannot adjust the appointments that were scheduled directly after the rescheduling epoch. Also, an extra cost component is introduced to penalize deviations from the previously given schedule. Finally, we perform a comparison with the dynamic-programming-based rescheduling technique of Mahes et al. (2023), the most important conclusion being that our adaptive scheduling approach performs just slightly worse, despite the fact that in the approach of Mahes et al. (2023) one optimizes over a larger decision space.
Literature. We proceed by providing a brief account of the literature in this area. We do not aim to provide an exhaustive overview; see e.g., Ahmadi-Javid et al. (2017) for a comprehensive, and relatively recent, survey. In Ahmadi-Javid et al. (2017), the authors distinguish between three levels: the strategic level (concerning design decisions, such as the choice of the number of servers), the tactical level (concerning planning decisions, such as the allocation of capacity to patient groups), and the operational level (e.g., focusing on determining the precise schedules). The theme of the current paper, the adaptation of schedules, clearly belongs to the operational measures, but is hardly covered by Ahmadi-Javid et al. (2017).
The static scheduling problem, in which a schedule is determined a priori and not adapted, has been studied intensively. A methodology to find the optimal static schedule under specific distributional assumptions was presented in Wang (1997), while an extension to multiple servers can be found in Kuiper and Lee (2022). Various realistic features have been incorporated into the appointment scheduling problem. For example, Chen and Robinson (2014) and Erdoğan and Denton (2013) study a combination of routine clients, who are assigned an appointment time in advance, and last-minute clients, seeking an appointment on the same day. A highly general framework, modeled as a nonlinear integer program, simultaneously covering no-shows, non-punctuality, and walk-ins, can be found in Zacharias and Yunes (2020). Another substantial part of the literature focuses on the client sequencing problem, i.e., optimizing the order of arrivals of the clients (which is assumed to be fixed in the framework studied in our paper) (Berg et al., 2014; Mak et al., 2015). Concretely, ordering the clients in increasing variance of their service-time distributions appears to be the most common rule to produce a good sequence (de Kemp et al., 2021; Kong et al., 2016). The exact optimal sequencing policy is still unknown when there are three or more clients (Mak et al., 2014).
The present paper is related to the dynamic programming approach developed in Mahes et al. (2023). There a setting is studied in which at each client arrival the arrival time of the next client is determined. It is shown that this results in a significant reduction in cost as compared to static scheduling. In the current paper, we comment on the performance of the adaptive scheduling approach relative to the one of Mahes et al. (2023).
Unlike in our work, the term 'adaptive appointment scheduling' in the literature frequently refers to updating schedules by sequentially finding an available time slot adapted to the clients' needs. For example, Wang and Fung (2014, 2015) and Wang and Gupta (2011) dynamically learn clients' preferences as appointment requests come in, while Erdoğan et al. (2015) uses stochastic integer programming and Doğru and Melouk (2019) proposes a simulation optimization approach. In all these papers, existing appointments are not modified when a schedule gets updated; rather, available slots are filled according to some policy.
Even though most work on appointment scheduling focuses on healthcare applications, it can be applied in a substantially broader context. Other settings in which it can be applied include that of clients and a consulting professional, automobiles and a service center, and legal cases and a courtroom (Robinson and Chen, 2003). Over the past few years, due to the increasing presence of delivery services of various sorts, there has been a strong focus on appointment scheduling in a spatial setting, i.e., involving a routing component. These problems are intrinsically hard, as they combine all complications arising in appointment scheduling with those of the traveling salesman problem; see e.g., Liu et al. (2019) and Zhan et al. (2021). Moreover, during the delivery process, the driver might choose to take a different path from the prescribed route, a challenge addressed in Ghosh et al. (2023). In addition, Decerle et al. (2018) considers a routing and scheduling problem with time window constraints, based on the clients' availability. A way to assign such time windows before creating a vehicle routing schedule is described in e.g., Spliet and Desaulniers (2015) and Spliet and Gabor (2014).
Organization. The structure of this paper is as follows. In Section 2 we formally define the cost function and the objective. The evaluation of this cost function is described in Section 3. The performance of the method is assessed in Section 4. The paper is concluded with some final remarks.

Adaptive scheduling procedure
In this section, we formally describe the problem that is considered in this paper, as well as the algorithm that we propose. The structure of the section is as follows. In Section 2.1 we describe an auxiliary scheduling problem that will form the main building block of our algorithm. A crucial step is that we express our cost function in terms of the clients' mean sojourn times. Then, in Section 2.2, we describe our algorithm, in which we adapt the schedule at equidistant points in time. It is worth noting that in Section 3 we will point out how the objective function, featured in the algorithm, can be evaluated.

An auxiliary scheduling problem
We start by explaining a specific static scheduling problem that will, in Section 2.2, serve as the basis of our adaptive scheduling algorithm. To this end, we consider a sequence of n ∈ N clients with service times represented by the non-negative, independent random variables B_1, …, B_n; note that these are not necessarily identically distributed. Our goal is to find a schedule t_1, …, t_n, where t_i represents the arrival time of the i-th client (i.e., we implicitly require 0 ⩽ t_1 ⩽ ⋯ ⩽ t_n).
In this paper, we optimize an objective function that balances the interests of both the clients and the service provider. Concretely, we consider the problem of minimizing, for ω ∈ (0, 1),

f(t_1, …, t_n) := Σ_{i=1}^{n} ( ω E I_i + (1 − ω) E W_i ),

where I_i and W_i are the idle and waiting time associated with client i, respectively; we refer to Fig. 1 for an illustration. Note that the entries of the sequences (E I_i)_{i=1}^{n} and (E W_i)_{i=1}^{n} are implicit functions of the arrival times t_1, …, t_n. In the conventional version of this problem, the goal is to select t_1, …, t_n that minimize the above cost function. The setting of our auxiliary scheduling problem, however, is different in the sense that at every rescheduling epoch, we are given the following state information:
• The number of clients k ∈ {0, 1, …, n} who have already entered the system at time 0. This thus means that, if k > 0, then t_1 = ⋯ = t_k = 0. It is clear that also if k = 0, then we should take t_1 = 0: bearing in mind the objective function that we wish to minimize, it is pointless to let client 1 enter after time 0.
• The value of the elapsed service time w of client 1 if k > 0 (if k = 0, then we set w ≡ 0). This means that the remaining service time of the client in service is distributed as B_1 − w conditional on B_1 > w. As we hold w fixed (in this subsection), we will sometimes denote this remaining service time simply by B_1 (or by B_1(w), to stress the dependence on w).
• In total, there are n clients that remain to be served, of which the first k already entered the system. This means that there are n − k clients to be scheduled.
It turns out to be convenient to somewhat rewrite the objective function. Let S_i denote the sojourn time of client i, i.e., her waiting time W_i increased by her service time B_i. Note that the time at which client i leaves the system is equal to the sum of all service and idle times corresponding to all clients up to and including this client; hence

t_i + S_i = Σ_{j=1}^{i} (I_j + B_j);

a pictorial illustration of this fact is provided by Fig. 1. The above entails that we can express the (expected) waiting and idle times in terms of the (expected) sojourn times: naturally, E W_1 = E I_1 = 0, and, for i = 2, …, n,

E W_i = E S_i − E B_i,   E I_i = (t_i − t_{i−1}) + E S_i − E B_i − E S_{i−1},

the latter of which can be interpreted as the expected time difference between the service completion of client i − 1 and the start of service of client i. As the expected service times E B_i are given numbers, we have now rewritten our problem in terms of the expected sojourn times of the clients only, a fact that we will extensively exploit in this paper. In the sequel, we (informally) denote the above optimization procedure by

(t_{k+1}, …, t_n) = Schedule(n, k, w),

where, as mentioned, the distribution of (the remaining part of) B_1 depends on the elapsed service time w of client 1. In Section 3 we point out how the routine Schedule can be evaluated; in Section 2.2 we explain how we can use Schedule in the key problem studied in this paper, namely that of periodically adapting the appointment schedule.
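The rewrite in terms of expected sojourn times translates directly into code. The following sketch (function name hypothetical, not from the paper) evaluates the objective from given values of E S_i and E B_i:

```python
def cost_from_sojourns(t, ES, EB, omega):
    """Objective value from expected sojourn times.

    t:     arrival times t_1, ..., t_n (with t[0] == 0)
    ES:    expected sojourn times E[S_i]
    EB:    expected service times E[B_i]
    omega: weight of the idle times, in (0, 1)
    """
    total = 0.0
    for i in range(1, len(t)):  # client 1 neither waits nor causes idleness
        EW = ES[i] - EB[i]                       # E[W_i] = E[S_i] - E[B_i]
        EI = (t[i] - t[i - 1]) + EW - ES[i - 1]  # gap between services
        total += omega * EI + (1 - omega) * EW
    return total
```

For instance, with two deterministic unit service times and t_2 = 1.5, client 2 never waits and the server idles for 0.5, so the cost is 0.5ω.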

Periodically updating the schedule
The main idea is to update the schedule every Δ time units, for some predefined interval length Δ > 0. At the j-th update, at time T_j := jΔ, the state information available is the number of clients k_j in the system at time T_j, as well as (if k_j > 0) the elapsed service time w_j of the client in service. The underlying thought is that this allows us to respond to situations in which we are ahead of or behind schedule. One would expect that the smaller the value of Δ, the more frequently we adjust the schedule, and the lower the cost; we explore this relationship in great detail in the numerical experiments of Section 4.
Below we include pseudocode for our periodically updated schedule. The main idea is to rerun the routine Schedule, as introduced above, at the times T_j, with the current state information as input. It uses the following functions:
• Q(t) provides the number of clients in the system at time t ⩾ 0, i.e., covering waiting clients (if any) as well as the client in service (if any).
• D(t) provides the number of service completions in [0, t] for t ⩾ 0.
• E(t) provides the elapsed service time of the client in service at time t ⩾ 0 (and 0 if there is no client in service).
Algorithm 1 describes, in self-evident pseudocode, how we update the schedule at the time epochs T_j. In the algorithm, the variable q keeps track of the current number of clients in the system, w denotes the elapsed service time of the current client in service (if any, and otherwise w := 0), and d is the number of clients served so far. In addition, 1_m is the all-ones vector of dimension m ∈ N.

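The periodic-update loop can be sketched as follows; the helper signatures `schedule(k, w, m)` (the routine of Section 2.1, returning arrival times relative to the update epoch) and `state_at(t)` (returning the observed state triple) are hypothetical stand-ins, not the paper's own interfaces:

```python
def adaptive_schedule(n, delta, schedule, state_at):
    """Periodically recompute the appointment schedule.

    schedule(k, w, m) -> arrival times (relative to now) for the m
        clients not yet served, given k clients present and elapsed
        service time w of the client in service.
    state_at(t) -> (q, d, w): clients in system, service completions
        so far, and elapsed service time at time t.
    """
    times = schedule(0, 0.0, n)   # initial (static) schedule
    j = 1
    while True:
        t_j = j * delta
        q, d, w = state_at(t_j)   # observe state at the update epoch
        if d >= n:                # all clients served: stop
            break
        # reschedule the n - d clients still to be served; the q clients
        # already present keep (relative) arrival time 0
        tail = schedule(q, w, n - d)
        times = times[:d] + [t_j + s for s in tail]
        j += 1
    return times
```

The loop terminates at the first update epoch at which all n clients have completed service, mirroring the while-loop structure referred to in Section 4.2.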
In the above description, for ease we let the T_j correspond to equidistant points in time, but it is clear that in principle non-equidistant updates are possible, too. In particular, it is conceivable that early in the schedule the amount of uncertainty is still modest, so there is a good reason to take T_1 relatively large. We do not explore such non-equidistant updates in this paper.

Evaluation of the objective function
Now that we have described our adaptive scheduling approach, we continue by pointing out how our objective function f(t_1, …, t_n | k, w) can be evaluated. In our setup, we suppose that we know the first two moments of the service times, or, equivalently, the pairs (E B_i, Var(B_i)) for i = 1, …, n.
We do so following a well-established approach: fitting so-called phase-type distributed random variables (Kuiper et al., 2015, 2022), and then applying a technique in the spirit of the one developed in Wang (1997) to compute the expected sojourn times E S_i. A complication is that the procedure relied on in Kuiper et al. (2015, 2022) must be adapted to incorporate the k clients present at time 0 and, if k > 0, the elapsed service time w of the client in service. In Section 3.1 we briefly recall the way to map a pair (E B_i, Var(B_i)) on a phase-type distribution; as we will see, two special classes of phase-type distributions play a key role here. Section 3.2 describes how to evaluate the cost function f(t_1, …, t_n | k, w) for a given schedule (t_1, …, t_n) (where, evidently, the components of this vector are non-decreasing), thus solving the abovementioned complication. In Section 3.3 we comment on minimizing the cost function over (t_1, …, t_n).

Phase-type fit
If the service times were exponentially distributed, then it would be relatively straightforward to devise a procedure to compute the expected sojourn times E S_i, essentially owing to the fact that the number of clients in the system is then a continuous-time Markov chain. For an exponentially distributed service time B_i, the squared coefficient of variation (in the sequel abbreviated to SCV) equals 1, entailing that necessarily the mean and standard deviation match. In various application domains, however, data analysis has revealed that service times substantially deviate from being exponential.

In particular, in medical applications (Çayırlı et al., 2006) the service times are often relatively deterministic, reflected in the corresponding SCV being smaller than 1. Clearly, assuming exponential service times would then provide suboptimal scheduling rules.
The above complication is remedied by working with specific convenient phase-type distributions by which we cover all values of the SCV. Phase-type distributions can be seen as generalizations of the exponential distribution that still allow a fairly explicit analysis. Formally, a phase-type distribution can be characterized as follows. Consider a continuous-time Markov chain {X_t}_{t⩾0} with state space {1, …, d + 1}, where states 1, …, d are transient and state d + 1 is absorbing. The initial state X_0 ∈ {1, …, d} is sampled according to a probability (row) vector α ∈ R^d, i.e., its entries are non-negative and sum to 1. The process has a transition rate matrix of the form

( T        t⁰ )
( 0_{1×d}  0  ),

with T ∈ R^{d×d}, 0_{m×n} denoting an all-zeroes matrix of dimension m × n, exit rate vector t⁰ := −T 1_d, and 1_d a d-dimensional all-ones (column) vector.
The time it takes to reach the absorbing state, i.e., inf{t ⩾ 0 : X_t = d + 1}, is called a phase-type distributed random variable with initial distribution α and subintensity matrix T, denoted by X ∼ PH_d(α, T). The exponentially distributed times spent in each of the states 1, …, d are typically referred to as phases, explaining the terminology 'phase-type'.
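For a PH_d(α, T) random variable X, the mean satisfies the standard identity E X = −α T^{−1} 1_d (used again in Section 3.2). A minimal sketch, using a hand-rolled linear solve so that no libraries are assumed:

```python
def ph_mean(alpha, T):
    """Mean of a PH(alpha, T) random variable: E[X] = -alpha T^{-1} 1.

    Solves T x = -1 by Gauss-Jordan elimination, then returns alpha . x.
    """
    d = len(alpha)
    # augmented system [T | -1]
    A = [list(row) + [-1.0] for row in T]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]        # partial pivoting
        for r in range(d):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    x = [A[r][d] / A[r][r] for r in range(d)]
    return sum(a * xi for a, xi in zip(alpha, x))
```

For example, an Erlang(2) distribution with rate 2 (subintensity [[−2, 2], [0, −2]], α = (1, 0)) has mean 2/2 = 1.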
The phase-type distribution owes its popularity to two features. In the first place, as already mentioned above, phase-type distributions are attractive from a computational point of view: relying on standard tools from linear algebra, the corresponding densities and distribution functions can be numerically evaluated. In the second place, any distribution on the positive half-line can be approximated arbitrarily closely by a phase-type distribution (Asmussen, 2003, Theorem III.4.2). As extensively motivated in Tijms (1994), and used in e.g., Kuiper et al. (2015) and Mahes et al. (2023), two specific subclasses are of particular interest: mixtures of Erlang distributions and hyperexponential distributions. More concretely, these two subclasses jointly cover all values of the SCV, while at the same time they have the attractive feature that they are low-dimensional (in terms of the number of parameters). In particular, in the context of appointment scheduling, numerical evaluation has revealed that the error introduced by replacing the true service-time distribution by these specific subclasses is negligible; cf. the findings reported in Kuiper (2016, pp. 110-111) and Mahes et al. (2023).
We continue by briefly discussing mixtures of Erlang distributions and hyperexponential distributions, and in particular, we describe how to map the mean and SCV of a random variable B on the corresponding parameters. A more extensive account of this two-moment fit is provided in e.g., Kuiper et al. (2015, 2022).
-Case 1: SCV smaller than 1. If the SCV is below 1, we approximate the non-negative random variable B by a mixture of Erlang distributions (or: a weighted Erlang distribution). To this end, denote by E(K, μ) an Erlang distributed random variable with shape parameter K and scale parameter μ, and by U an independent uniform random variable on [0, 1], and work with

B ≈ 1{U ⩽ p} E(K, μ) + 1{U > p} E(K + 1, μ).

In other words: with probability p the random variable B equals an Erlang-distributed random variable with K phases, and with probability 1 − p an Erlang-distributed random variable with K + 1 phases. The parameters are uniquely determined (Mahes et al., 2023; Tijms, 1994): with S(B) denoting the SCV of B, we choose K such that 1/(K + 1) < S(B) ⩽ 1/K, and

p = ( (K + 1) S(B) − ((K + 1)(1 − K S(B)))^{1/2} ) / (S(B) + 1),   μ = (K + 1 − p) / E B.

In addition, the initial distribution is given by α = (1, 0, …, 0).
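Since these closed-form expressions are standard (Tijms, 1994), the fit can be implemented in a few lines; the function name and the guard against floating-point rounding are our own choices, not the paper's:

```python
import math

def fit_mixed_erlang(mean, scv):
    """Two-moment mixed-Erlang fit for 0 < SCV <= 1 (Tijms, 1994):
    Erlang(K, mu) w.p. p, Erlang(K + 1, mu) w.p. 1 - p.
    Returns (K, p, mu)."""
    assert 0.0 < scv <= 1.0
    K = max(1, math.ceil(1.0 / scv) - 1)        # 1/(K+1) <= scv <= 1/K
    inner = max(0.0, (K + 1) * (1.0 - K * scv))  # clip tiny negative rounding
    p = ((K + 1) * scv - math.sqrt(inner)) / (scv + 1.0)
    mu = (K + 1 - p) / mean
    return K, p, mu
```

One can verify the fit by recomputing the first two moments of the mixture: the mean is (K + 1 − p)/μ and the second moment is (p K(K+1) + (1 − p)(K+1)(K+2))/μ².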
-Case 2: SCV larger than 1. If the SCV is above 1, we approximate the non-negative random variable B by a hyperexponential distribution. For some μ_1, μ_2 > 0 and p ∈ [0, 1], in self-evident notation,

B ≈ 1{U ⩽ p} Exp(μ_1) + 1{U > p} Exp(μ_2).

Hence, B now equals with probability p an exponentially distributed random variable with mean μ_1^{−1}, and with probability 1 − p an exponentially distributed random variable with mean μ_2^{−1}. To fit the mean and SCV, the parameters are not uniquely determined. This can be solved by imposing the balanced means condition (Kuiper et al., 2015; Tijms, 1994), i.e., μ_1 = 2pν and μ_2 = 2(1 − p)ν for some ν > 0. Using this, we find

p = ½ ( 1 + ((S(B) − 1)/(S(B) + 1))^{1/2} ),   ν = 1 / E B.

It is left to characterize B as a phase-type random variable. The subintensity matrix T ∈ R^{2×2} is

T = ( −μ_1   0   )
    (  0    −μ_2 ),

and α = (p, 1 − p).
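Analogously, the balanced-means hyperexponential fit can be sketched as follows (function name hypothetical):

```python
import math

def fit_hyperexponential(mean, scv):
    """Balanced-means hyperexponential fit for SCV > 1 (Tijms, 1994):
    Exp(mu1) w.p. p, Exp(mu2) w.p. 1 - p, with p/mu1 == (1 - p)/mu2.
    Returns (p, mu1, mu2)."""
    assert scv > 1.0
    p = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
    mu1 = 2.0 * p / mean          # mu1 = 2 p nu with nu = 1 / mean
    mu2 = 2.0 * (1.0 - p) / mean  # mu2 = 2 (1 - p) nu
    return p, mu1, mu2
```

Again the fit is easily checked: the mean is p/μ_1 + (1 − p)/μ_2 and the second moment is 2p/μ_1² + 2(1 − p)/μ_2².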
We proceed by explaining how these phase-type distributions are used in our framework. The main idea is to replace each of the service times B_i by a phase-type distributed random variable, following the recipe described above, leading to a description of the form B_i ∼ PH_{d_i}(α_i, T_i). In case the number of clients in the system at an observation epoch T_j, previously denoted by k, is positive, however, we wish to take into account the elapsed service time w of the client in service. This can be done by the following procedure; we again distinguish between the service time being a mixture of Erlangs and hyperexponential.
It can be seen that in both cases the distribution of B_i − w conditional on B_i > w, for some w > 0, is still of phase type, with the same T_i as that of B_i itself, but with a different initial distribution, which now depends on the elapsed service time w (and is therefore denoted by α_i(w)). Let {X_{i,t}}_{t⩾0}, for i = 1, …, n, be the (d_i + 1)-dimensional continuous-time Markov chain corresponding to B_i ∼ PH_{d_i}(α_i, T_i). Our objective is to find an expression for the m-th entry of α_i(w), in the sequel denoted by α_{i,m}(w). Note that α_{i,m}(w) can be interpreted as P(X_{i,w} = m | B_i > w).
First consider the case that B_i is mixed Erlang, say, with parameters K_i, μ_i and p_i (so that d_i = K_i + 1). As argued in Mahes et al. (2023), and as can be verified in a straightforward manner, the m-th entry of α_i(w) equals

α_{i,m}(w) = π_m(w) / (π_1(w) + ⋯ + π_{K_i+1}(w)),

where π_m(w) := e^{−μ_i w} (μ_i w)^{m−1}/(m − 1)! for m = 1, …, K_i, and π_{K_i+1}(w) := (1 − p_i) e^{−μ_i w} (μ_i w)^{K_i}/K_i!. Now consider the case that B_i is hyperexponential, say, with parameters μ_1, μ_2 and p_i (so that d_i = 2). Again writing p_1 := p_i and p_2 := 1 − p_i, we obtain

α_{i,m}(w) = p_m e^{−μ_m w} / (p_1 e^{−μ_1 w} + p_2 e^{−μ_2 w}),   m ∈ {1, 2}.

Observe that when the elapsed service time equals 0, in both the mixed Erlang and hyperexponential case, we obtain that α_i(0) = α_i, as it should.
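Both conditional initial distributions amount to an application of Bayes' rule over the phases; a sketch consistent with the expressions above (function names hypothetical):

```python
import math

def residual_alpha_hyperexp(p, mu1, mu2, w):
    """Initial distribution of B - w | B > w for hyperexponential B."""
    a = p * math.exp(-mu1 * w)
    b = (1.0 - p) * math.exp(-mu2 * w)
    return a / (a + b), b / (a + b)

def residual_alpha_mixed_erlang(K, p, mu, w):
    """Same for mixed-Erlang B (phases 1..K+1): being in phase m <= K at
    time w means m - 1 Poisson(mu*w) phase completions; phase K + 1
    additionally requires surviving the absorption branch (prob 1 - p)."""
    pois = [math.exp(-mu * w) * (mu * w) ** j / math.factorial(j)
            for j in range(K + 1)]
    weights = pois[:K] + [(1.0 - p) * pois[K]]
    s = sum(weights)
    return [x / s for x in weights]
```

Setting w = 0 recovers (p, 1 − p) and (1, 0, …, 0), respectively, as claimed above.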
Recall that we managed to rewrite the objective function in terms of the expected values of the sojourn times S_i, i = 1, …, n. As will become clear below, we actually derive expressions for the full distribution function of the S_i, from which the means E S_i can be found in an evident manner. The procedure presented borrows elements from the one developed in Wang (1997). Importantly, Wang (1997) covers the case of i.i.d. service times only; below we point out how it extends to the case of heterogeneous (but still independent) clients.
Given the schedule t_1, …, t_n, let x_i := t_i − t_{i−1} denote the i-th interarrival time, where we set x_1 := t_1. Denote by N_i(u) the number of clients in the system at shifted time u ∈ [0, x_{i+1}), that is, u time units after the arrival of client i. Also, let J_i(u) be the phase the client in service is in at the same (shifted) time u. If the system is idle, i.e., if no client is in service, set J_i(u) = 0. Denote, for i = 1, …, n, ℓ = 1, …, i and m = 1, …, d_{i−ℓ+1}, the probability that the system is in state (ℓ, m) at shifted time u by P^{(i)}_{ℓ,m}(u) := P(N_i(u) = ℓ, J_i(u) = m); observe that if N_i(u) = ℓ, then the index of the client in service is i − ℓ + 1.
Note that we define this probability for all states except state (0, 0), in which the system is idle, i.e., the only case in which at time u the i-th client has been served. Using this observation, the sojourn-time distribution F_i(u) of the i-th client equals

F_i(u) = 1 − Σ_{ℓ,m} P^{(i)}_{ℓ,m}(u),

where the sum ranges over all states (ℓ, m) other than (0, 0).
As the first client immediately gets served at time t_1 = 0, it directly follows that, for u ∈ [0, x_2), P(S_1 > u) = α_1 exp(T_1 u) 1_{d_1} and E S_1 = −α_1 T_1^{−1} 1_{d_1}; see e.g., Neuts (1994). When the second client arrives at time t_2, two scenarios are possible. Either the first client is still in service and the second client needs to wait, or the service of the first client has been completed (with probability F_1(t_2)) and the second client immediately goes into service. The service process can now be represented by the subintensity matrix

( T_1   (−T_1 1_{d_1}) α_2 )
( 0     T_2                );

when checking the compatibility of the vectors and matrices, realize that α_2 represents a d_2-dimensional row vector, entailing that (−T_1 1_{d_1}) α_2 is of dimension d_1 × d_2. Proceeding recursively in this manner, one obtains, for i = 2, …, n, the probabilities P^{(i)}_{ℓ,m}(u) for u ∈ [0, x_{i+1}), and hence the distribution functions F_i. The expected sojourn time of client i then equals E S_i = ∫_0^∞ (1 − F_i(u)) du.

Optimizing the objective function
Now that we know how to evaluate the objective function, we briefly comment on optimizing it over the schedule t_1, …, t_n. A first remark is that this optimization problem can be expressed in terms of a convex programming problem, as has been rigorously established in Kuiper et al. (2022); this in particular entails that there is just one local minimum (which therefore is the global minimum as well). As a consequence, standard (quasi-)Newton minimization routines can be used to efficiently identify the minimum of f(t_1, …, t_n | k, w) over time epochs t_1, …, t_n such that 0 = t_1 = ⋯ = t_k ⩽ t_{k+1} ⩽ ⋯ ⩽ t_n; here we set the arrival times of the first k clients to 0, as they have already entered the system. Essentially all standard numerical packages have implementations of state-of-the-art minimization routines that can quickly and accurately determine the corresponding minimizer; in our software, we have used the Sequential Least Squares Programming (SLSQP) solver implemented in the open-source Python package SciPy.
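To illustrate the kind of call involved, the following sketch feeds a toy convex objective (a simple quadratic stand-in, not the actual cost function f) together with the ordering constraints to SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

def toy_cost(t):
    """Toy convex stand-in for f(t_1, ..., t_n | k, w)."""
    target = np.array([0.0, 1.0, 2.5])
    return float(np.sum((t - target) ** 2))

n = 3
cons = (
    [{"type": "eq", "fun": lambda t: t[0]}]              # t_1 = 0
    + [{"type": "ineq", "fun": lambda t, i=i: t[i + 1] - t[i]}
       for i in range(n - 1)]                            # t_i <= t_{i+1}
)
res = minimize(toy_cost, x0=np.zeros(n), method="SLSQP", constraints=cons)
```

Since the toy objective is convex and its unconstrained minimizer already satisfies the constraints, SLSQP recovers it; in the actual application, `toy_cost` is replaced by the evaluation routine of Section 3.2.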
Remark 1. In our setup, appointment times can take continuous values, which is a leading paradigm in the appointment scheduling literature; see e.g., Kuiper et al. (2015), Kuiper and Lee (2022), Mak et al. (2015) and Wang (1997), as well as many of the approaches discussed in the overview (Ahmadi-Javid et al., 2017). In some practical single-server appointment systems, however, clients are assigned to time slots (i.e., intervals whose lengths are multiples of some given granularity, for instance, five minutes). An elementary way to convert continuous-time schedules into slotted counterparts is by rounding off, but one could resort to more sophisticated procedures as well (e.g., by searching a discrete set of grid points around the optimal continuous-time schedule).

Numerical evaluation
In this section, we assess the performance of our approach through a series of numerical experiments. We start by describing, in Section 4.1, the interface of the applet we developed. Assessment of the standard variant is covered by Section 4.2, more practical variants are considered in Section 4.3, and a comparison with alternative rescheduling methods is given in Section 4.4.

Applet
We have developed a user-friendly applet that optimizes our objective function using the open-source solver SLSQP mentioned in Section 3.3. By this noncommercial applet, any user can adopt our methodology without having to do any coding. It enables users to compute, essentially in real time, the optimal schedule for any instance. Here, an instance is a combination of the number of clients to be served n, the mean E B_i and SCV S(B_i) of the service time of each client i = 1, …, n, the weight ω ∈ (0, 1), and the state information, viz. the number of clients in the system k ∈ {0, …, n − 1} and the elapsed service time w ⩾ 0 of the client in service (if any). Note that this applet allows a user to update her schedule not only periodically, but also at any other desired time.
In practical situations, one may not want to adapt the appointments that lie in the interval immediately after the rescheduling epoch (for instance in situations in which clients may already be on their way to the service location). To deal with this issue, we provide the option to leave the schedule of all clients fixed until a certain time τ ⩾ 0 (see Experiment 4). In this case, the user also enters the appointment times of the clients to show up before time τ as generated by the existing schedule, where the (current) time at which we observe the state information is considered to be time 0. The interface of the applet is displayed in Fig. 2. It can be accessed through https://adaptiveschedule.eu.pythonanywhere.com.

Standard variant
In this subsection, we consider the standard variant of our model, with the objective of quantifying the efficiency gain of our adaptive method relative to static scheduling. It is important to note that the value of the cost function for the adaptive approach cannot be directly computed, as opposed to the cost of static scheduling. More precisely,
• the cost of the static schedule can be directly obtained by numerically solving the optimization problem formulated in Eq. (1), applying the techniques developed in e.g., Kuiper et al. (2015) and Wang (1997), which essentially amounts to executing the algorithm with only one iteration of the while-loop;
• the adaptive schedule, however, is recomputed at the time epochs kΔ, and therefore depends on the precise state of the system at these epochs. With our procedure, at every such epoch, only the cost of the remaining schedule is evaluated, as if the schedule will not be updated again.
To evaluate the cost of the adaptive schedule, we rely on Monte Carlo simulation. We note, however, as discussed in greater detail in Appendix A, that in a few simple cases (assuming exponentially distributed service times and a low number of clients) a somewhat more explicit approach can be followed. We proceed by providing a more detailed discussion of the abovementioned simulation-based approach. In every simulation run the service times B_1 up to B_n are sampled from their (phase-type) distributions, and at the epochs kΔ the schedule is reevaluated, depending on the state information available at these moments. In the end, given the final schedule, the realized idle and waiting times, and therefore the value of the cost function for this run, can be computed using the Lindley recursion (Lindley, 1952). The pseudocode for a simulation run can be found in Appendix B. Performing a sizeable number of runs, and averaging the realized costs, we can accurately estimate the cost corresponding to our adaptive approach. In this section, we denote the cost of the static method by C_stat, the cost of the adaptive method (with rescheduling time Δ) by C_adapt(Δ), and the efficiency gain by G(Δ) = (C_stat − C_adapt(Δ)) ∕ C_stat.
For reasons of transparency, the experiments are organized such that we vary the parameters as much as possible in an isolated manner. This way, we obtain insight into the impact of each of these parameters on the efficiency gain. Unless otherwise stated, experiments are based on N = 10^6 simulation runs and are performed on the National Supercomputer Snellius supported by SURF (www.surf.nl).
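The Lindley-recursion evaluation can be sketched as follows for a static schedule (the adaptive variant additionally recomputes the remaining appointment times at every epoch kΔ, which we omit here). Exponential service times and the two-client instance are illustrative choices, not the paper's general phase-type setup.

```python
# Monte Carlo cost evaluation of a fixed (static) schedule via the
# Lindley recursion: average the weighted realized idle and waiting times.
import math
import random

def simulate_static_cost(arrivals, omega, runs=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        idle = wait = 0.0
        free_at = 0.0                      # epoch at which the server frees up
        for t in arrivals:
            idle += max(t - free_at, 0.0)  # server idle until the arrival
            w = max(free_at - t, 0.0)      # Lindley: wait if server still busy
            wait += w
            free_at = t + w + rng.expovariate(1.0)  # departure epoch
        total += omega * idle + (1.0 - omega) * wait
    return total / runs

# Two clients, the second scheduled at ln(1/omega); the exact cost is
# omega*(x - 1 + e^{-x}) + (1 - omega)*e^{-x}, roughly 0.322 for omega = 0.2.
omega, x = 0.2, math.log(5.0)
est = simulate_static_cost([0.0, x], omega)
print(round(est, 3))
```

Because the two-client instance has a closed-form cost, the simulation output can be validated directly.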
Experiment 1 (Effect of the Parameters). We start by evaluating how the rescheduling period Δ affects the cost of the schedule. One would expect that the cost of adaptive schedules decreases monotonically in the rescheduling frequency (i.e., increases monotonically in the rescheduling time Δ). While this expectation generally holds true, we start by presenting an elementary instance in which it does not.
We first consider a situation of three clients with exponentially distributed service times, where we normalize the mean service time to 1. The number of clients n is equal to 3, and the weight ω is equal to 0.5 (i.e., the idle and waiting times are of equal importance). Note that in this setting, with the SCV of the service times being equal to 1, there is no impact of the elapsed service time, due to the memoryless property of the exponential distribution. As a result, the model can (to some extent) be analyzed explicitly. In Fig. 3, we assess the influence of the rescheduling time Δ on the cost by simulation. It is observed that in this instance, the cost is not monotonically increasing. Also, for various values of Δ there are discontinuities. An extensive explicit analysis of this counterintuitive behavior can be found in Appendix A.
Now we focus, in instances with a more realistic number of clients n, on the impact of the weight ω (i.e., the importance of the idle time relative to the waiting time). In Table 1, we consider three values of the weight ω. The cost of static scheduling corresponds to the instance Δ = ∞. The main conclusion from the table is that, in particular for low ω and Δ, there is a strong cost reduction. Even for Δ = 8, the efficiency gain is still around 10%, which is significant considering that in most cases the schedule will be updated only once (since there are fifteen clients).
Additionally, it can be seen that both the sum of the idle times and the sum of the waiting times substantially decrease each time we halve the rescheduling time. We also observe that increasing the rescheduling frequency has more impact on the waiting time than on the idle time. When the waiting time is of higher importance than the idle time, i.e., for low values of the weight ω, the waiting time of each of the clients can even be completely eliminated by rescheduling at a high frequency.
The benefit of frequent rescheduling (i.e., the gain) is more noticeable when the relative importance of the waiting time is higher. In those settings rescheduling tends to avoid excessive waiting times, i.e., it becomes less likely that many clients are waiting in the system. In Fig. 4 we observe that, for any fixed rescheduling length Δ, the total cost of the schedule depends effectively linearly on the number of clients to schedule. Next, we examine how the number of times the schedule gets updated and the cost of the schedule depend on the rescheduling length Δ, for various values of the weight ω. In Fig. 5 we observe, for homogeneous exponentially distributed service times, that the number of updates increases sharply as Δ decreases. The higher the importance of the waiting times (i.e., the lower ω), the more often the schedule is updated, which can be reasoned as follows. When the weight assigned to the waiting time (i.e., 1 − ω) increases, each appointment is given a longer time interval. This means that the time it takes to execute all appointments (i.e., the makespan) goes up. As a result, the number of schedule updates increases.
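The chain of reasoning above (lower ω, longer gaps, longer makespan, more updates) can be illustrated with a back-of-the-envelope computation, under the simplifying assumptions that every gap equals the single-interval optimum ln(1/ω) for unit-mean exponential services and that one update occurs every Δ time units until the session ends.

```python
# Back-of-the-envelope illustration: lower omega -> longer optimal gaps
# -> longer makespan -> more periodic updates. The floor(makespan / Delta)
# update count is a deliberate simplification.
import math

def approx_updates(n, omega, delta):
    spacing = math.log(1.0 / omega)      # optimal gap for one Exp(1) service
    makespan = (n - 1) * spacing + 1.0   # last arrival plus one mean service
    return int(makespan // delta)

for omega in (0.2, 0.5, 0.8):
    print(omega, approx_updates(15, omega, delta=3.0))
```

With n = 15 and Δ = 3 this rough count already decreases markedly as ω grows, mirroring the trend in Fig. 5.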
Table 1 has shown that, generally, increasing the rescheduling frequency (i.e., decreasing Δ) leads to a reduction in both the sum of idle times and the sum of waiting times. However, this does not provide any insight into the effect on the individual idle and waiting times. To examine this behavior in greater detail, Figs. 6 and 7 present the mean values of idle and waiting time for each client, respectively, for ω = 0.2 and ω = 0.8.
• In line with e.g., Kuiper (2016, Figure 2.9.b), we note that in the static case (Δ = ∞) the mean idle times exhibit a dome shape across the clients, mimicking the optimal individual interarrival times. Conversely, the mean waiting time increases with a client's position, i.e., clients scheduled later are expected to experience longer waiting times; cf. Kuiper (2016, Figure 2.10.b).
• Overall, updating the schedule leads to shorter idle and waiting times for future clients. This effect is particularly visible for ω = 0.8, as clients after the first rescheduling moment have significantly lower mean idle and waiting times compared to the static case. For ω = 0.2, although the average idle and waiting times are generally lower than in the static case, a zigzag pattern emerges in the mean idle and waiting times of future arrivals. This can be attributed to 'asynchronizations', where the rescheduling period does not align with the arrival of a client: when rescheduling occurs just before a client's arrival, the average idle time is more likely to increase while the waiting time for this client is likely to decrease, still resulting in notable cost savings for that specific client. This zigzag effect becomes less pronounced as the rescheduling frequency increases, eventually leading to relatively constant average idle and waiting times across all clients.
Fig. 8 shows the influence of the rescheduling time on the cost for ω = 0.2 and ω = 0.8. While we again keep the mean service times equal to 1, we now also vary the SCV. It is observed that the cost is increasing in S(B), which is in line with the findings of Mahes et al. (2023). We have connected the dots, but clearly, between the data points discontinuities can occur. While the figures show that at a more global level increasing the rescheduling frequency leads to a cost reduction, they also display the local non-monotone behavior that we already encountered in the elementary instance above. We in particular observe that for low values of Δ, the cost locally exhibits sharply decreasing behavior. As shown in the appendix, the observed peaks (in particular for high SCV values and ω = 0.2) are located at the scheduled arrival times. In fact, the cost function is not continuous and makes upward jumps at t_1, t_2, …. Therefore, it is better to choose an update interval Δ such that updates do not coincide with arrival epochs.
Additionally, the impact of the service time variability on the efficiency gain is worth considering. Our experiments reveal that higher values of E[B] or S(B) correspond not only to higher cost but also to larger efficiency gains. The gain being increasing in E[B] can be explained by the fact that longer service times lead to more rescheduling opportunities, assuming a fixed rescheduling length Δ. The gain is also increasing in S(B): if there is higher variability in the service times, the effect of relatively extreme scenarios (i.e., many or no clients in the system) can be neutralized more effectively when scheduling adaptively.
Experiment 2 (Heterogeneous Service Times). While in the previous experiment we worked with homogeneous service times, in this experiment we assess the impact of heterogeneity. First, we do this in the setting of exponentially distributed service times (i.e., S(B_i) = 1) with different parameters (and Δ = 3 and ω = 0.5 fixed). We consider a situation in which the service times are ordered such that the mean service times E[B_i] ∈ {0.5, 0.6, …, 1.5} (and hence the corresponding variances) are increasing, three in which the service times are permuted randomly, and one in which the mean service times are ordered decreasingly. The results are depicted in Table 2. The experiments confirm that the cost is lower in the case of increasing mean service times (and hence increasing variances), in line with the findings of de Kemp et al. (2021) and Kong et al. (2016). Secondly, we keep E[B_i] = 1 fixed, but take S(B_i) ∈ {0.5, 0.6, …, 1.5}. Again, from Table 2 it is observed that the cost is lowest when the variances of the jobs are increasing. These conclusions are particularly relevant in applications where the service-time distributions can vary substantially, confirming that clients with short (expected) service times should be scheduled first.

More practical variants
In reality, there are good reasons to keep the adapted schedule relatively close to the existing one. Clients may be on their way to the service facility, for instance, or may perceive frequent schedule updates negatively. In this subsection, three mechanisms that deal with such considerations are proposed and evaluated. Each case starts with the static schedule that is obtained by solving optimization problem (1). Then, the schedule is updated using a policy based on the minimization of a different objective function. Finally, to assess the cost of the adaptive schedules we use simulation, where in each simulation run the weighted sum of the realized idle and waiting times is determined.

Table 2
Cost of adaptive schedule for n = 11, ω = 0.5 and Δ = 3. In the left table, we take E[B_i] ∈ {0.5, 0.6, …, 1.5} equidistant, while in the right table the S(B_i) ∈ {0.5, 0.6, …, 1.5} are varied in the same manner.
As before, when evaluating the cost of adaptive schedules we use N = 10^6 simulation runs, unless otherwise stated.

Experiment 3 (Optimizing Δ by Penalizing Adaptations). This experiment discusses a way to determine a suitable update interval Δ. Clearly, in practical situations, there are various reasons why one would wish to avoid frequent adaptations. One could incorporate this in various ways, for instance by penalizing the number of adaptations. A possible way to achieve this is by choosing the rescheduling time Δ such that, for some given weight γ ∈ [0, 1], a new cost function is minimized that weighs the cost of the adaptive schedule by γ against the number of adaptations by 1 − γ. In our experiment, we consider the setting in which the service times are independent identically distributed exponential random variables, and the weight ω = 0.5 is held fixed. We are interested in which Δ minimizes the considered cost function for various values of γ. The results are shown in Fig. 9. It is observed that the cost function is relatively flat around its minimum. More importantly, a lower weight γ represents a more substantial penalization of adaptations, and thus results in a higher optimal rescheduling time, as expected.
Experiment 4 (Tradeoff with Fixed Interval). In this experiment, we consider the variant in which the appointment times within a time interval of length τ after each of the rescheduling epochs are held fixed. This means that only the clients that are supposed to arrive at least τ time units later can undergo schedule updates. To make this variant even more realistic, we also impose that these clients will still arrive at least τ time units later. We call the resulting cost C_adapt(Δ | τ). We again focus on homogeneous exponential service times, with the weight ω = 0.5 being fixed. In Fig. 10 we present the combinations of Δ and τ that lead to the same cost, so as to obtain insight into the tradeoff between these two parameters. The figure confirms an evident property: the higher the fixed schedule time length τ, the lower the rescheduling time Δ should be in order to achieve the same cost.
In Table 3, we present the gain G(Δ | τ) achieved by adaptive scheduling with a fixed schedule time length τ ∈ {0, 1, 2} over static scheduling; this gain is quantified analogously to Eq. (2). Note that τ = 0 corresponds with the standard variant of adaptive scheduling as considered in Experiment 1. We conclude from the table that the gain achieved can be substantial. At the same time, for evident reasons, choosing τ relatively large rules out a strong cost reduction (compared to static scheduling, that is).
Experiment 5 (Penalizing Deviations). We then consider a variant in which deviations relative to the previous schedule are penalized. Concretely, in self-evident notation, at any adaptation we minimize an objective function h that augments the usual cost with a penalty on the deviation from the previous schedule, governed by some weight β ∈ [0, 1]. Working with homogeneous exponentially distributed service times, Fig. 11 expresses the dependence of the objective function on the weight β. The case β = 0 corresponds to penalizing any deviation from the initial static schedule, and thus leads to the same cost as the static schedule, i.e., C_stat. The higher β, the less the deviations relative to the previous schedule are penalized, leading to more flexible schedules and thus lower cost. We see that the cost when using objective function h gradually decreases to that of the scenario of rescheduling after each Δ = 3 time units without penalizing any deviation from the old schedule. A reasonably smooth curve is already achieved for N = 50,000 simulation runs.
Experiment 6 (Including Overtime). Various extensions are possible, for instance one in which overtime is penalized. Overtime, for a given horizon T > 0, is defined as (E − T)^+, where E := t_n + W_n + B_n is to be interpreted as the session end time, i.e., the departure epoch of the last client. In this setup, the problem of finding an optimal schedule could include a penalty for the amount of time the horizon T is exceeded. This concretely means that the objective function is extended, for some weight δ > 0, with the term δ E[(E − T)^+]. The overtime being increasing in t_n, the new term in the objective function gives an extra incentive to have low idle times. The term E[(E − T)^+] can be evaluated using the machinery presented above, in the terminology of Proposition 1. Also for this type of objective function, one could work with an adaptive scheduling approach, so as to reduce the objective function.
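A sketch of how the overtime term E[(E − T)^+] could be estimated by simulation; the equidistant schedule and exponential service times are illustrative assumptions, not the paper's configuration.

```python
# Monte Carlo estimate of the overtime term E[(E - T)^+], where E is the
# departure epoch of the last client (session end).
import random

def mean_overtime(arrivals, horizon, runs=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        free_at = 0.0
        for t in arrivals:
            start = max(t, free_at)               # Lindley-type recursion
            free_at = start + rng.expovariate(1.0)
        total += max(free_at - horizon, 0.0)      # (E - T)^+
    return total / runs

schedule = [1.5 * i for i in range(10)]           # illustrative schedule
overtime = {h: mean_overtime(schedule, h) for h in (10.0, 15.0, 20.0)}
for h, o in overtime.items():
    print(h, round(o, 3))
```

As the text argues, the estimate is decreasing in the horizon T: the larger T, the less of the session-end distribution exceeds it.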
To demonstrate the impact of including overtime, Fig. 12 illustrates the relationship between the cost and the horizon T. The experiment retains the same configuration as the previous one. As anticipated, the objective function is decreasing in the horizon T: the higher the value of T, the lower the overtime, and hence the lower the cost. Setting a high horizon T essentially corresponds to not imposing any overtime constraint, resulting in a cost function in which the overtime component plays no role. Our numerical output shows that the cost of the adaptive schedules behaves similarly to that of the static schedule, with the absolute cost difference between the two showing an almost negligible dependence on the horizon T. In the example, for small values of T, adaptive scheduling leads to a 12% cost reduction, whereas for large T, this cost reduction is as high as 28%.

Comparison with alternative rescheduling methods
In this subsection, we compare our adaptive approach with a number of alternative rescheduling schemes. We limit ourselves to rescheduling only at moments when the system is not empty, making it unlikely that the next scheduled client shows up right away (as this would enforce immediate waiting time).
Experiment 7 (Rescheduling at Start of Service). First, we consider a strategy in which one reschedules at every moment when a client's service starts. The results are presented in Table 4, where we vary the value of the weight ω and the SCV. Here, G is the gain achieved by rescheduling as defined in Eq. (2). It is observed that in all cases, substantial gains can be achieved. For a small value of ω, the gain is roughly constant, but a higher gain is achieved for SCVs further away from 1. The higher the relative importance of the idle time, the more can be gained by scheduling adaptively, in particular for small values of the SCV.
Experiment 8 (Rescheduling at Arrivals). Suppose that one reschedules at every moment that a client enters the system. This is in principle the same mechanism as the one analyzed in Mahes et al. (2023), with the crucial difference that the dynamic programming approach 'exploits' the fact that, when rescheduling, one knows that there will be more future rescheduling moments (so that the mechanism analyzed in Mahes et al. (2023) necessarily leads to lower cost). While using this knowledge yields truly optimal results, the enormous state space makes real-time computation of schedules using the dynamic programming approach computationally challenging. This experiment serves to quantify the difference between both algorithms.
Denote by C_dyn the cost of the dynamic programming method studied in Mahes et al. (2023). Then we define the loss of scheduling using the proposed method, rather than following the dynamic programming method of Mahes et al. (2023), by L = (C_adapt − C_dyn) ∕ C_dyn. Also, let G be the gain achieved by rescheduling. Table 5 contains the results for different values of ω and the SCV. It is observed that scheduling adaptively at each client arrival leads to nearly the same cost as the theoretical optimum achieved by scheduling as in Mahes et al. (2023). More specifically, the loss incurred by the proposed method is always less than 3%, and can therefore be considered negligible. This is good news: when using our computationally inexpensive adaptive approach, instead of the computationally heavy dynamic programming approach of Mahes et al. (2023), hardly any efficiency is lost. In line with the findings reported in Mahes et al. (2023), the gains achieved by rescheduling at each client arrival are always substantial, and more pronounced when the values of ω and the SCV are high.

Using real-world data
The following experiment provides an illustrative example of our method's performance on a real-world data set.
Experiment 9 (Scheduling Arrivals for Parcel Delivery). The available data consist of (anonymized versions of) 60 traces, stemming from a parcel delivery company. A trace corresponds to a route along which a driver has delivered parcels. Using the first 50 traces we have estimated the mean and variance of the service-time distribution (corresponding to the travel time between two subsequent customers, increased by the time required to hand over the parcel). Then our algorithm is run on the last 10 traces, i.e., the traces not used in the estimation.
Across the first 50 routes, a total of 4,930 parcels have been successfully delivered. The service times associated with these deliveries have an estimated mean of 2.152 min. and an estimated squared coefficient of variation (SCV) of 0.738. Then the two-moment fit outlined in Section 3.1 is employed. As we have an SCV smaller than 1, the weighted Erlang distribution is used. Fitting the mean and SCV, we arrive at a mixture that is Erlang(1) with probability 0.433 and Erlang(2) with probability 0.567, with rate 0.728 per phase. Fig. 13 presents a histogram of the distribution of the service times, accompanied by the corresponding phase-type fit. The per-route number of parcels to be delivered varies between 33 and 129, with an average of 98.6 parcels per route.
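While the exact recipe of Section 3.1 is not reproduced here, the classical two-moment fit of this kind (Tijms' mixture of Erlangs for SCV ≤ 1, a balanced-means hyperexponential for SCV > 1) can be sketched as follows; applied to the estimates above it returns a mixing probability of about 0.433 and a rate of about 0.728, matching the values reported.

```python
# Sketch of a standard two-moment phase-type fit (Tijms-style), matching a
# given mean and SCV exactly. Not necessarily identical to Section 3.1.
import math

def fit_phase_type(mean, scv):
    if scv <= 1.0:
        # choose k with 1/k <= scv <= 1/(k-1); mix Erlang(k-1) and Erlang(k)
        k = math.ceil(1.0 / scv)
        p = (k * scv - math.sqrt(k * (1.0 + scv) - k * k * scv)) / (1.0 + scv)
        mu = (k - p) / mean               # common rate per phase
        return ("weighted_erlang", k - 1, p, mu)
    # SCV > 1: hyperexponential with balanced means
    p1 = 0.5 * (1.0 + math.sqrt((scv - 1.0) / (scv + 1.0)))
    return ("hyperexponential", p1, 2.0 * p1 / mean, 2.0 * (1.0 - p1) / mean)

print(fit_phase_type(2.152, 0.738))
```

The fit matches the first two moments exactly, which can be verified by recomputing the mean and SCV from the returned parameters.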
Once the parameters of the service-time distribution have been determined based on the data from the initial 50 routes, we can generate schedules for the subsequent 10 routes. In our example, we have worked with a weight parameter of ω = 0.5 (signifying equal importance assigned to idle and waiting times). The schedules are updated every Δ = 15 min. A comparison is made between the cost incurred on these routes using the adaptive approach on the one hand, and the cost if one would not do any schedule updates on the other hand. The outcome of this comparison is presented in Table 6.
The adaptive scheduling approach consistently leads to a substantial reduction of the overall cost on all 10 routes, with an average gain of over 10%. By adapting the schedule for the routes, which consist of an average of 101.5 parcels, approximately 21 updates are made, indicating that the schedule is adjusted roughly after every 5 parcels delivered. The waiting times show a similar relative decrease.
Intentionally, we kept this experiment as elementary as possible; the main goal was to show our approach's potential to reduce the cost significantly. In our setup, all traces were 'treated equally', in that it was implicitly assumed that service times stem from the same distribution, but one can further refine the approach by classifying routes based on specific features (weather, driver, etc.). In that case, one estimates the mean and SCV of the per-class service times and uses these when generating schedules.

Concluding remarks
In this work, we have introduced a methodology to generate appointment schedule updates at any time. Given the first two moments of the clients' service times, a phase-type fit is applied, thus facilitating a fast evaluation of the objective function, which is a weighted combination of mean idle times and mean waiting times. In the update procedure that we propose, the cost function takes into account the current state information, namely the number of clients in the system as well as the elapsed service time of the client in service (if any). The evaluation of the cost function that uses this state information, as well as its optimization over the appointment times, have the same complexity as their counterparts pertaining to the conventional, static case.
Our results include an applet that can be used immediately to generate and update schedules. Experiments show that updating can lead to substantially lower cost, even if we do not update very frequently and/or leave the first appointments in the current schedule unchanged.
We envisage various directions for follow-up research. First, the cost function can be modified in various ways, a natural extension being a cost function that incorporates appointment windows rather than appointment times. This can be useful, for example, in the context of a service in which clients are served at home (such as home delivery services). While in the present paper we have worked with mean idle and waiting times, other functional forms are worth studying, too; one can for instance consider a quadratic cost function to penalize large outcomes more strongly.
Second, in this paper, we focused predominantly on updating the schedule periodically, even though with the proposed framework a schedule could in principle be updated at any time. One can therefore think about the question of when one should update (given the available state information), for instance if there is a maximum on the number of updates.
Case 2 (i.e., Δ < t_3^opt) is now more complex, since there are three subcases that we need to distinguish:
(2a) The service times B_1 + B_2 have elapsed at time t = Δ. This happens with probability 1 − e^{-Δ} − Δe^{-Δ}, which is simply the probability that an Erlang(2, 1) random variable is less than Δ. In this subcase, the idle time is I_3 = Δ − B_1 − B_2 and the third client should enter immediately. Since we also have cost related to the waiting time W_2, the total cost consists of both contributions.
(2b) Here we consider the subcase B_1 > Δ, which happens with probability e^{-Δ}. Then we can reschedule, and the cost is simply equal to what we originally would have had, plus the waiting time of client 2 so far (equal to Δ). Obviously, the cost E[C^{3,2}_adapt(Δ)] is still unknown at this stage, but we can solve a simple equation to find it after the third and last subcase.
(2c) This last subcase occurs when B_1 < Δ < B_1 + B_2, which happens with probability Δe^{-Δ}. The cost so far is equal to (1 − ω) times the conditional waiting time of client 2 so far. Now the system has, at rescheduling epoch t = Δ, reduced to the system with n = 2 clients, with k = 1 client already present at the start. Therefore, the remaining cost after rescheduling is equal to C^{2,1}_adapt(Δ) = C^{2,0}_adapt(Δ), which we already computed in Eq. (3).
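The three subcase probabilities can be sanity-checked numerically (unit-rate exponential service times, as in the analysis above).

```python
# Consistency check of the Case 2 subcase probabilities:
# P(B1 + B2 < Delta) = 1 - e^{-Delta}(1 + Delta)   (Erlang(2,1) below Delta),
# P(B1 > Delta)      = e^{-Delta},
# P(B1 < Delta < B1 + B2) = Delta * e^{-Delta};  they must sum to one.
import math
import random

delta = 2.0
p2a = 1.0 - math.exp(-delta) * (1.0 + delta)   # both services finished
p2b = math.exp(-delta)                         # first service still running
p2c = delta * math.exp(-delta)                 # exactly one service finished
print(round(p2a + p2b + p2c, 12))

# Monte Carlo cross-check of the Erlang(2,1) probability
rng = random.Random(0)
runs = 200_000
hits = sum(rng.expovariate(1.0) + rng.expovariate(1.0) < delta
           for _ in range(runs))
print(round(p2a, 3), round(hits / runs, 3))
```

The three events partition the sample space, so the closed-form probabilities sum to one, and the simulated Erlang(2, 1) frequency matches the first of them.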
Combining the two main cases leads to an explicit expression for the cost, with t_3^opt = −W_{-1}(−ω∕e) − 1, where W_{-1} denotes the secondary branch of the Lambert W function. Fig. 15 shows a plot of the cost versus Δ for this model for ω = 0.2. Note the discontinuities at Δ = 1.609, the optimal arrival time of client 2 in the n = 2, k = 0 model, and at Δ = 2.994, the optimal arrival time of client 3 in the n = 3, k = 2 model. The final model in this appendix is the standard model with n = 3 clients. This model is particularly relevant because it is the smallest instance where the cost may decrease while increasing the rescheduling time Δ on certain intervals.
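The Lambert-W expression t_3^opt = −W_{-1}(−ω∕e) − 1 can be verified numerically without external libraries; the bisection below solves w e^w = −ω/e on the branch w ≤ −1.

```python
# Numerical check of t3_opt = -W_{-1}(-omega/e) - 1, with W_{-1} the
# secondary real branch of the Lambert W function, via plain bisection.
import math

def t3_opt(omega):
    target = -omega / math.e
    # f(w) = w * e^w decreases from about 0 down to -1/e as w runs
    # from -infinity to -1, so a root with f(w) = target is bracketed.
    lo, hi = -50.0, -1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > target:
            lo = mid          # still above target: root lies to the right
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    return -w - 1.0

# For omega = 0.2 this reproduces the value 2.994 quoted above, while
# ln(1/omega) = 1.609 is the optimal arrival time of client 2.
print(round(t3_opt(0.2), 3), round(math.log(1.0 / 0.2), 3))
```

The returned value can also be checked against the defining equation of the W function directly.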
Lindley's recursion gives the optimal static arrival times; in particular, t_2^opt = 1.826 and t_3^opt := 3.699 when ω = 0.2. The corresponding cost is equal to 0.693. Instead of providing the complete analysis to find E[C^{3,0}_adapt(Δ)], we immediately show how this cost behaves as a function of Δ by plotting it in Fig. 16. In the right panel we focus only on the cost in the interval 1.826 < Δ < 3.699, i.e., between the optimal arrival times of clients 2 and 3. This is probably the most surprising part of Fig. 16, in particular the part where the cost decreases as Δ increases. This pattern has also been observed in Experiment 1 (in particular Fig. 3), and we would like to gain more insight into this surprising behavior.
To analyze the cost in the interval t_2^opt < Δ < t_3^opt, we first note that in this interval, client 2 has arrived at time t_2 = 1.826 and client 3 is currently scheduled at t_3 = 3.699. To determine the cost, we need to distinguish five different cases:
(a) B_1 < t_2^opt and t_2^opt + B_2 < Δ. In this case, both services have finished and there is a conditional total idle time of Δ − B_1 − B_2. There are no waiting times so far. Client 3 can be scheduled immediately and no further cost is going to occur.
(b) t_2^opt < B_1 < Δ and B_1 + B_2 < Δ. In this case, both services have finished and there is a conditional total idle time of Δ − B_1 − B_2. Additionally, client 2 experienced a waiting time of B_1 − t_2^opt. Client 3 can be scheduled immediately and no further cost is going to occur.
(c) B_1 < t_2^opt and t_2^opt + B_2 > Δ. In this case, client 2 had no waiting time and there was an idle time of t_2^opt − B_1. To determine when to optimally schedule client 3, note that the system now reduces to the n = 2, k = 1 system (which is equivalent to the n = 2, k = 0 system discussed earlier).
(d) t_2^opt < B_1 < Δ and B_1 + B_2 > Δ. In this case, client 2 had a waiting time of B_1 − t_2^opt and there was no idle time. As in Case (c), the system now reduces to the n = 2, k = 1 system.
(e) B_1 > Δ. The conditional waiting time of client 2 is equal to Δ − t_2^opt and the system reduces to the n = 3, k = 2 system discussed before. This explains the discontinuity at Δ = 2.994.
So why does the cost decrease in Fig. 16 between Δ = 1.826 and Δ = 2.8? To understand this, we need to look at the five contributions (a) to (e) to the cost. Without going into too much detail, it can be verified that two of the parts are decreasing, while the others are increasing; in this numerical example, the decrease of the former is stronger than the increase of the latter. Let us zoom in on Case (c), which happens with probability (1 − e^{-t_2^opt}) e^{-(Δ − t_2^opt)}. It is readily seen that this contribution depends on Δ only through the factor e^{-Δ}, which is clearly decreasing exponentially.
The main takeaway message, however, is that the cost function makes an upward jump whenever Δ is equal to one of the clients' arrival times. This implies that rescheduling at arrival times is, in principle, never optimal. The reason is that when rescheduling at (or very soon after) a client's arrival, one loses the option to postpone the arrival if needed. In many real-life applications, obviously, the situation is more subtle, because it is not realistic (or not appreciated) to reschedule clients right before they are supposed to arrive. In these cases, a tradeoff should be sought, for example by fixing the appointment times within a time interval of length τ after each rescheduling epoch, as discussed in Experiment 4.

Appendix B. Pseudocode of the simulation
The following pseudocode has been used to simulate an experiment where we periodically reschedule with rescheduling time Δ.

Fig. 4. Cost of adaptive schedule for ω = 0.5 and different values of n and Δ.


Fig. 12. Cost of static and adaptive scheduling with horizon T, for n = 15 clients with weight ω = 0.5 and Δ = 3. The simulations are based on N = 10^5 runs.

Fig. 16. Cost as a function of Δ, for n = 3, k = 0 and ω = 0.2, with homogeneous exponential service times. At the right, a zoomed-in version to focus on the interval t_2^opt < Δ < t_3^opt.

Table 1
Cost and efficiency gain of adaptive schedule, n = 15.

Table 4
Cost of adaptive and static schedule when rescheduling at each start of service, n = 15 and E[B] = 1.

Table 5
Cost of adaptive and dynamic schedule when rescheduling at each client arrival, n = 15 and E[B] = 1. The simulations are based on N = 10^6 runs.