
Managing mobile production-inventory systems influenced by a modulation process

  • Original Research
  • Published in Annals of Operations Research

Abstract

The objective of this paper is to investigate the potential added value of being able to relocate production capacity, relative to fixed production capacity, in a network of multiple, geographically distributed manufacturing sites. There is a growing number of examples of production capacity that can be geographically relocated with a modest amount of effort, e.g., 3D printers, bioreactors for cell and gene manufacturing, and modular units for pharmaceutical intermediates. Such a capability shows promise for enabling fast fulfillment across a distributed network with less total inventory and less total production capacity than a comparable network with fixed production capacity, without sacrificing customer service levels or total system resilience. Allowing also for transshipment, we model a production-inventory system with L production sites and Y units of relocatable production capacity, develop efficient and effective heuristic solution methods for dynamic relocation and multi-location inventory control, and analyze the potential added value and implementation challenges of being able to relocate production capacity. We describe the (L, Y) problem as a problem of sequential decision making under uncertainty to determine transshipment, mobile production capacity relocation, and replenishment decisions at each decision epoch. To enhance model realism, we use a partially observed stochastic process, the modulation process, to model the exogenous and partially observable forces (e.g., the macro-economy) that affect demand. We then model the (L, Y) problem as a partially observed Markov decision process. Due to the considerable computational challenges of solving this model exactly, we propose two efficient, high-quality heuristics.
We show for an instance set with five locations that production capacity mobility and transshipment can improve system performance by as much as 41% on average over the no-flexibility case, and that production capacity mobility can yield as much as 10% more savings than when only transshipment is permitted.


Fig. 1
Fig. 2



Author information

Correspondence to Satya S. Malladi.


Appendix

A1 provides the foundational results for the \(L = 1\) case on which bounds presented in Sect. 4 for the general (L, Y) case are based. A2 presents a proof of Proposition 1, A3 presents a heuristic that is analogous to the heuristic LSF, and A4 presents additional tables of computational results.

1.1 A1 Analysis for the \(L=1\) Case

Assume \(v_0 = 0\), \(v_{n+1} = Hv_n\), define \(\mathcal {G}_n(\varvec{x},y)= \mathcal {G}(\varvec{x}, y, v_n)\) for all n, and let \(y_n^*(\varvec{x}, C)\) be the smallest value that minimizes \(\mathcal {G}_n(\varvec{x}, y)\) with respect to y. We remark that

$$\begin{aligned} v_{n+1}(\varvec{x},s,C) = {\left\{ \begin{array}{ll} \mathcal {G}_n(\varvec{x}, s) &{} \text { if } s\ge y_n^*(\varvec{x},C) \\ \mathcal {G}_n(\varvec{x},s+C) &{} \text { if } s\le y_n^*(\varvec{x},C) - C \\ \mathcal {G}_n(\varvec{x},y_n^*(\varvec{x},C)) &{} \text { otherwise. } \end{array}\right. } \end{aligned}$$
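The three-case recursion above can be checked numerically on a toy single-location instance. The sketch below is illustrative only: the holding cost h, backorder cost b, discount factor beta, capacity C, and the demand pmf are hypothetical choices, and the modulation state \(\varvec{x}\) is suppressed for brevity.

```python
# Toy capacitated base-stock recursion. h, b, beta, C, and the demand pmf
# are hypothetical; the modulation state x of the paper is suppressed.
h, b, beta, C = 1.0, 4.0, 0.9, 3
pmf = {0: 0.3, 1: 0.4, 2: 0.3}
GRID = list(range(-10, 15))        # truncated inventory-position grid
LO, HI = GRID[0], GRID[-1]

def one_period(y):
    """Expected one-period holding/backorder cost at order-up-to level y."""
    return sum(p * (h * max(y - d, 0) + b * max(d - y, 0)) for d, p in pmf.items())

v = {s: 0.0 for s in GRID}         # v_0 = 0
for _ in range(20):                # v_{n+1} = H v_n
    # G plays the role of G_n; states below LO are clamped to LO
    G = {y: one_period(y)
            + beta * sum(p * v[max(LO, y - d)] for d, p in pmf.items())
         for y in GRID}
    y_star = min(GRID, key=lambda y: (G[y], y))   # smallest minimizer y_n*
    # direct minimization of G over the feasible interval s <= y <= s + C
    v = {s: min(G[y] for y in GRID if s <= y <= s + C) for s in GRID}

# Away from the grid boundary, the direct minimization agrees with the
# three-case formula for v_{n+1} in terms of G_n and y_n*.
for s in range(-6, HI - C + 1):
    if s >= y_star:
        assert abs(v[s] - G[s]) < 1e-9
    elif s <= y_star - C:
        assert abs(v[s] - G[s + C]) < 1e-9
    else:
        assert abs(v[s] - G[y_star]) < 1e-9
```

The agreement of the direct minimization with the three cases is exactly the convexity argument of Proposition 3 below: once \(\mathcal {G}_n\) is convex in y, the constrained minimum is determined by where s sits relative to \(y_n^*\).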

We now present claims for structured results with respect to \(\mathcal {G}_n\), \(v_n\), and \(y_n^*\) based on results in Federgruen and Zipkin (1986) and Malladi et al. (2018).

Proposition 3

For all n, \(\varvec{x}\), and C,

  1. (i)

    \(\mathcal {G}_n(\varvec{x},y)\) is convex in y

  2. (ii)

    \(v_n(\varvec{x},s,C)\) is:

    1. (a)

      convex in s,

    2. (b)

      non-decreasing for \(s\ge y_n^*(\varvec{x},C)\),

    3. (c)

      non-increasing for \(s\le y_n^*(\varvec{x},C) - C\),

    4. (d)

      equal to \(v_n(\varvec{x},y_n^*(\varvec{x},C), C)\) otherwise

  3. (iii)

    \(v_{n+1}(\varvec{x},s,C) \ge v_n(\varvec{x},s,C)\) for all s.

Proof of Proposition 3

The convexity of \(\mathcal {G}_0(\varvec{x},y)\) in y for all \(\varvec{x}\) follows from the definitions and assumptions. Assume \(\mathcal {G}_n(\varvec{x},y)\) is convex in y for all \(\varvec{x}\). It is then straightforward to show that item (ii) holds for \(n+1\) and all \((\varvec{x},C)\). We remark that the function \(g(y) = w(f(y))\) is convex and non-decreasing (non-increasing) if w is convex and non-decreasing (non-increasing) and if f is linear and non-decreasing. Hence, \(\mathcal {G}_{n+1}(\varvec{x},y)\) is convex in y for all \(\varvec{x}\), and item (i) and item (ii) hold for all n by induction. Since \(v_1(\varvec{x},s,C) \ge v_0(\varvec{x},s,C)\), a standard induction argument guarantees that item (iii) holds. \(\square \)

Let \(v_n(\varvec{x},s) = v_n(\varvec{x},s,C)\), \(v_n'(\varvec{x},s) = v_n(\varvec{x},s,C')\), \(\mathcal {G}_n(\varvec{x},y) = \mathcal {G}(\varvec{x},y,v_n)\), and \(\mathcal {G}_n'(\varvec{x},y) = \mathcal {G}(\varvec{x},y,v_n')\).

Proposition 4

Assume \(C\le C'\), and that \(y_n^*(\varvec{x},C) - d \le y_n^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}),C)\) for all n and all \((\varvec{d},\varvec{z},\varvec{x})\). Then for all n, \(\varvec{x}\), and s,

  1. (i)

\(v_n(\varvec{x},s,C') \le v_n(\varvec{x},s,C)\)

  2. (ii)

    If \(y\le y' \le y_n^*(\varvec{x},C)\), then \(\mathcal {G}_n(\varvec{x},y') - \mathcal {G}_n(\varvec{x},y) \le \mathcal {G}_n'(\varvec{x},y') - \mathcal {G}_n'(\varvec{x},y)\)

  3. (iii)

If \(s\le s' \le y_n^*(\varvec{x},C)\), then \( v_{n+1}(\varvec{x},s',C) - v_{n+1}(\varvec{x},s,C) \le v_{n+1}'(\varvec{x},s',C) - v_{n+1}'(\varvec{x},s,C). \)

  4. (iv)

    \(y_n^*(\varvec{x},C') \le y_n^*(\varvec{x},C)\).

Proof of Proposition 4

Proof of item (i) is straightforward. Regarding items (ii)–(iv), note item (ii) holds for \(n=0\); assume item (ii) holds for n. Then item (iv) also holds for n. We now outline the proof that item (iii) holds for \(n+1\). Recall

$$\begin{aligned} v_{n+1}(\varvec{x},s,C) = {\left\{ \begin{array}{ll} \mathcal {G}_n(\varvec{x}, s+C) &{} \text { if } s\le y_n - C \\ \mathcal {G}_n(\varvec{x},s) &{} \text { if } s\ge y_n \\ \mathcal {G}_n(\varvec{x},y_n) &{} \text { otherwise,}\end{array}\right. } \end{aligned}$$

where \(y_n = y_n^*(\varvec{x},C)\), and

$$\begin{aligned} v_{n+1}'(\varvec{x},s,C) = {\left\{ \begin{array}{ll} \mathcal {G}_n'(\varvec{x}, s+C') &{} \text { if } s\le y_n' - C' \\ \mathcal {G}_n'(\varvec{x},s) &{} \text { if } s\ge y_n' \\ \mathcal {G}_n'(\varvec{x},y_n') &{} \text { otherwise,}\end{array}\right. } \end{aligned}$$

where \(y_n' = y_n^*(\varvec{x},C')\). Similar to the proof of Proposition 5 and the proof of Federgruen and Zipkin (1986, Theorem 3), there are two cases: (1) \(y_n - C \le y_n'\), (2) \(y_n' \le y_n - C\), which are more completely described as

$$\begin{aligned}&y_n' - C' \le y_n - C \le y_n' \le y_n, \\&y_n' - C' \le y_n' \le y_n- C \le y_n, \end{aligned}$$

respectively. For each case, there are 10 different sets of inequalities that the pair \((s,s')\) can satisfy. Showing that item (iii) holds for \(n+1\) for each of the 20 sets of inequalities is tedious but straightforward. We now show that for \(s \le s'\),

$$\begin{aligned} v_{n+1}(\varvec{x},s',C)-v_{n+1}(\varvec{x},s,C) \le v_{n+1}'(\varvec{x},s',C) - v_{n+1}'(\varvec{x},s,C) \end{aligned}$$

implies that for \(y\le y' \le y_n\), \(\mathcal {G}_{n+1}(\varvec{x},y') - \mathcal {G}_{n+1}(\varvec{x},y) \le \mathcal {G}_{n+1}'(\varvec{x},y') -\mathcal {G}_{n+1}'(\varvec{x},y)\). Note

$$\begin{aligned}&v_{n+1}(\varvec{\lambda }(d,z,\varvec{x}),y'-d,C) - v_{n+1}(\varvec{\lambda }(d,z,\varvec{x}),y-d,C) \\&\le v_{n+1}'(\varvec{\lambda }(d,z,\varvec{x}),y'-d,C) - v_{n+1}'(\varvec{\lambda }(d,z,\varvec{x}),y-d,C) \end{aligned}$$

for \(y-d \le y'-d \le y_n^*(\varvec{\lambda }(d,z,\varvec{x}),C)\), which implies

$$\begin{aligned} \mathcal {G}_{n+1}(\varvec{x},y') - \mathcal {G}_{n+1}(\varvec{x},y) \le \mathcal {G}_{n+1}'(\varvec{x},y') - \mathcal {G}_{n+1}'(\varvec{x},y) \end{aligned}$$

for all \(y \le y' \le y_{n+1}^*(\varvec{x},C)\) assuming \(y_{n+1}^*(\varvec{x},C) - d \le y_{n+1}^*(\varvec{\lambda }(d,z,\varvec{x}), C)\) for all \((d,z,\varvec{x})\). A standard induction argument completes the proof. \(\square \)

Proposition 5

Assume \(y_n^*(\varvec{x},C)-d_l \le y_n^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}), C)\) for all n and all \((\varvec{d},\varvec{z},\varvec{x})\). Then for all n, \(s\le s' \le y_n^*(\varvec{x},C)\) implies:

  1. (i)

    \(v_n(\varvec{x},s',C) - v_n(\varvec{x},s,C) \ge v_{n+1}(\varvec{x},s',C) - v_{n+1}(\varvec{x},s,C)\),

  2. (ii)

    \(\mathcal {G}_n(\varvec{x},s')-\mathcal {G}_n(\varvec{x},s) \ge \mathcal {G}_{n+1}(\varvec{x},s') - \mathcal {G}_{n+1}(\varvec{x},s)\),

  3. (iii)

    \(y_n^*(\varvec{x},C) \le y_{n+1}^*(\varvec{x},C)\).

Proof of Proposition 5

We note item (i) holds when \(n=0\). Assume item (i) holds for \(n-1\). Let \(y\le y' \le y_{n-1}^*(\varvec{x}, C)\), implying that \(y-d \le y'-d \le y_{n-1}^*(\varvec{x},C)-d \le y^*_{n-1}(\varvec{\lambda }(d,z,\varvec{x}),C)\) for all \((d,z,\varvec{x})\). Hence,

$$\begin{aligned}&v_{n-1} (\varvec{\lambda }(d,z,\varvec{x}),y'-d,C) - v_{n-1}(\varvec{\lambda }(d,z,\varvec{x}),y-d, C) \\&\quad \ge v_n (\varvec{\lambda }(d,z,\varvec{x}),y'-d, C) - v_n(\varvec{\lambda }(d,z,\varvec{x}),y-d, C), \end{aligned}$$

and thus item (ii) holds for \(n-1\) for all \(y\le y' \le y_{n-1}^*(\varvec{x},C)\). Letting \(y' = y_{n-1}^*(\varvec{x},C)\), we observe

$$\begin{aligned}&0 \ge \mathcal {G}_{n-1}(\varvec{x},y^*_{n-1}(\varvec{x},C)) - \mathcal {G}_{n-1}(\varvec{x},y) \\&\quad \ge \mathcal {G}_n(\varvec{x}, y_{n-1}^*(\varvec{x},C)) - \mathcal {G}_n(\varvec{x},y); \end{aligned}$$

hence, item (iii) holds for \(n-1\).

We now outline a proof that \(s\le s' \le y_n^*(\varvec{x},C)\) implies

$$\begin{aligned} v_n(\varvec{x},s') - v_n(\varvec{x},s) \ge v_{n+1}(\varvec{x},s') - v_{n+1}(\varvec{x},s). \end{aligned}$$
(13)

Following an argument in the proof of Federgruen and Zipkin (1986, Theorem 2) we consider two general cases: (1) \(y_n^*(\varvec{x},C)-C \le y_{n-1}^*(\varvec{x},C)\) and (2) \(y_{n-1}^*(\varvec{x},C) \le y_n^*(\varvec{x},C)-C\). Letting the dependence on \((\varvec{x},C)\) be implicit, cases (1) and (2) are more completely described as

$$\begin{aligned}&y_{n-1}^*- C \le y_n^* - C \le y_{n-1}^* \le y_n^* \\&y_{n-1}^*- C \le y_{n-1}^* \le y_n^* - C \le y_n^*, \end{aligned}$$

respectively. For each case, there are 10 different sets of inequalities that the pair \((s,s')\) can satisfy. The values \(v_n(\varvec{x},s'), v_n(\varvec{x},s), v_{n+1}(\varvec{x},s')\), and \(v_{n+1}(\varvec{x},s)\) are well defined for each of these inequalities in terms of \(\mathcal {G}_{n-1}\) and \(\mathcal {G}_n\). Showing that (13) holds for each of these 20 different sets of inequalities is again tedious but straightforward.

A standard induction argument completes the proof of the proposition. \(\square \)

We now claim that \(v(\varvec{x},s,C)\) is convex in C.

Proposition 6

  1. (i)

If \(y\in A(s,C)\) and \(y' \in A(s, C')\), then \(\lambda y + (1-\lambda )y' \in A(s, \lambda C+(1-\lambda )C')\).

  2. (ii)

    If \(\xi \in A(s, \lambda C+(1-\lambda )C')\), then there is a \(y \in A(s,C)\) and a \(y' \in A(s,C')\) such that \(\xi =\lambda y+ (1-\lambda )y'\).

  3. (iii)

    For real-valued and continuous v,

    $$\begin{aligned}&\min \{v(\xi ): \xi \in A(s, \lambda C+(1-\lambda )C') \} \\&= \min \{ v(\lambda y+(1-\lambda )y'):y\in A(s,C) \\&\text{ and } y' \in A(s, C')\}. \end{aligned}$$
  4. (iv)

    For all \((\varvec{x},s)\) and n, \(v_n(\varvec{x},s,C)\) is convex in C.

Proof of Proposition 6

  1. (i)

    \(y\in A(s,C)\) and \(y'\in A(s,C')\) imply \(\lambda s \le \lambda y\le \lambda (s+C)\) and \((1-\lambda )s \le (1-\lambda )y' \le (1-\lambda ) (s+C')\); summing terms implies the result.

  2. (ii)

    Let \(X = (\lambda C + (1-\lambda )C'+ s) \) and \(\varDelta ^S = (X-\xi )/(X-s)\). Note \(\varDelta ^S \in [0,1]\) and \(\xi = \varDelta ^S s + (1-\varDelta ^S) X\). Let \(y=\varDelta ^S s + (1-\varDelta ^S)(s+C)\) and \(y' = \varDelta ^S s + (1-\varDelta ^S)(s+C')\). Then, \(y\in A(s, C), y' \in A(s, C')\), and \(\lambda y + (1-\lambda )y' = \xi \).

  3. (iii)

    Proof by contradiction follows from items (i) and (ii).

  4. (iv)

    From item (iii) and the convexity of \(\mathcal {G}_n(\varvec{x},y)\) in y for all n and y (by Proposition 3 item (i)), it follows that

    $$\begin{aligned}&v_n(\varvec{x},s,\lambda C + (1-\lambda )C') \\&\quad = \min \{ \mathcal {G}_n(\varvec{x}, \lambda y + (1-\lambda )y'): y\in A(s,C), y' \in A(s,C')\} \\&\quad \le \min \{ \lambda \mathcal {G}_n(\varvec{x},y) + (1-\lambda )\mathcal {G}_n(\varvec{x},y'): \\&\quad \quad y\in A(s,C), y' \in A(s,C')\} \\&\quad = \lambda v_n(\varvec{x},s,C) + (1-\lambda ) v_n(\varvec{x},s,C'). \end{aligned}$$

\(\square \)
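The constructive step in item (ii) is easy to verify numerically. The sketch below takes \(A(s,C)\) to be the order-up-to interval \([s, s+C]\) as in the proof; the sampling ranges are arbitrary illustrative choices.

```python
import random

# Numeric check of the construction in Proposition 6(ii):
# given xi in A(s, lam*C + (1-lam)*C'), build y in A(s, C) and
# y' in A(s, C') with lam*y + (1-lam)*y' = xi.
random.seed(0)
for _ in range(1000):
    s = random.uniform(-5.0, 5.0)
    C, Cp = random.uniform(0.1, 4.0), random.uniform(0.1, 4.0)
    lam = random.random()
    Cmix = lam * C + (1.0 - lam) * Cp
    xi = random.uniform(s, s + Cmix)           # xi in A(s, Cmix)
    X = s + Cmix
    delta = (X - xi) / (X - s)                 # Delta^S in [0, 1]
    y = delta * s + (1.0 - delta) * (s + C)    # y in A(s, C)
    yp = delta * s + (1.0 - delta) * (s + Cp)  # y' in A(s, C')
    assert -1e-12 <= delta <= 1.0 + 1e-12
    assert s - 1e-9 <= y <= s + C + 1e-9
    assert s - 1e-9 <= yp <= s + Cp + 1e-9
    assert abs(lam * y + (1.0 - lam) * yp - xi) < 1e-9   # recombines to xi
```

The identity holds because \(\lambda y + (1-\lambda )y' = s + (1-\varDelta ^S)C_{\text {mix}} = \xi \), which is the algebra behind item (ii).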

Clearly, the assumption that \(y_n^*(\varvec{x},C) - d_l \le y_n^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}),C)\) for all n and all \((\varvec{d},\varvec{z},\varvec{x})\) is in general a challenge to verify a priori. Arguments in Federgruen and Zipkin (1986) suggest that as n gets large, \(y_n^*(\varvec{x},C)\) may converge in some sense to a function \(y^*_{\infty }(\varvec{x},C)\). From Malladi et al. (2018), \(y_0^*(\varvec{x},C)\) is straightforward to determine. Let \(\hat{y}(\varvec{x},C) \ge y_{\infty }^*(\varvec{x},C) \ge y_n^*(\varvec{x},C)\) for all n and \(\varvec{x}\). Then \(\hat{y}(\varvec{x},C)- d_l \le y_0^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}),C)\) for all \((\varvec{d},\varvec{z},\varvec{x})\) implies the above assumption holds. Determination of a function \(\hat{y}\) for the general case is a topic for future research. We present a special case where \(y_0^* = y_n^*\) for all n below (Proposition 7).

We point out two key differences between the infinite capacity and finite capacity cases when the reorder cost \(K'\) is zero. First, when C is infinite, the smallest optimal base stock level \(y_n^*(\varvec{x})\) is independent of the number of successive approximation steps, making it (relatively) easy to determine. Unfortunately, this result may not hold when C is finite, except for the situation considered below in Proposition 7. This fact has implementation implications for the controllers at the locations; e.g., determining the base stock levels for the capacitated case will in general be more difficult than for the infinite capacity case.

Second, Propositions 4 and 6 state that \(v(\varvec{x}, s, C)\) is non-decreasing and convex in C. We also know that \(v(\varvec{x}, s, C)\) is convex in s (from Proposition 3, which is also true for the infinite capacity case) and concave and possibly piecewise linear in \(\varvec{x}\) (from earlier cited results, which is also true for the infinite capacity case). We showed in Sect. 6.2 that these structural results can be computationally useful in determining solutions to the stock and production module relocation problem. The relocation problem for determining \((\varvec{\varDelta ^S}, \varvec{\sigma }, \varvec{u}')\), given \((\varvec{x},\varvec{s},\varvec{u})\), requires knowing \(v_l(\varvec{x},s_l', u_l')\) for all l. We now consider approaches to compute or approximate \(v(\varvec{x}, s, C)\), following the presentation of a special case where \(y_0^* = y_n^*\) for all n.

Proposition 7

Assume that for all \((d, z, \varvec{x})\), \( y_0^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}), C) - C \le y_0^*(\varvec{x},C) - d \le y_0^*(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}),C).\) Then, \(y_n^*(\varvec{x},C) = y_0^*(\varvec{x},C)\) for all n.

We remark that the left inequality in Proposition 7 essentially implies that although capacity may be finite, it is always sufficient to ensure that the inventory level after replenishment can be \(y^*_0(\varvec{x},C)\).

Proof of Proposition 7

By induction. Assume \(y_n^*(\varvec{x},C) = y_0^*(\varvec{x},C)\). Note therefore,

$$\begin{aligned} v_{n+1}(\varvec{x},s,C) = {\left\{ \begin{array}{ll} \mathcal {G}_n(\varvec{x},s+C), &{} s \le y_0^*(\varvec{x},C) - C \\ \mathcal {G}_n(\varvec{x},s), &{} s \ge y_0^*(\varvec{x},C) \\ \mathcal {G}_n (\varvec{x},y_0^*(\varvec{x},C)) &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$

Note

  1. (i)

\( \min _y \mathcal {G}_{n+1}(\varvec{x},y) \le \mathcal {G}_{n+1}(\varvec{x},y_0^*(\varvec{x},C))\)

  2. (ii)

    \( \min _y \mathcal {G}_{n+1}(\varvec{x},y) \ge \min _y \mathcal {L}(\varvec{x},y)+ \beta \sum _{d,z} \sigma (d,z,\varvec{x}) \min _y v_{n+1} (\varvec{\lambda }(d,z,\varvec{x}), y-d).\)

The minimum of \(v_{n+1}(\varvec{\lambda }(d,z,\varvec{x}), y-d, C)\) with respect to y is attained at a y satisfying \(y_0^*(\varvec{\lambda }(d,z,\varvec{x}),C) - C \le y - d \le y_0^*(\varvec{\lambda }(d,z,\varvec{x}),C)\). By assumption, \(y = y_0^*(\varvec{x},C)\) satisfies these inequalities. Thus,

$$\begin{aligned}&\min _y \mathcal {G}_{n+1}(\varvec{x},y) \ge \mathcal {L}(\varvec{x}, y_0^*(\varvec{x},C)) \\&\qquad + \beta \sum _{d,z} \sigma (d,z,\varvec{x}) v_{n+1} (\varvec{\lambda }(d,z,\varvec{x}), y_0^*(\varvec{x},C)-d, C) \\&\quad = \mathcal {G}_{n+1}(\varvec{x},y_0^*(\varvec{x},C)), \end{aligned}$$

and hence \(y_{n+1}^*(\varvec{x},C) = y_0^*(\varvec{x},C)\). \(\square \)
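Proposition 7 can be illustrated on a stationary toy instance with no modulation, so that \(\varvec{\lambda }(\varvec{d},\varvec{z},\varvec{x}) = \varvec{x}\) and the assumed inequalities reduce to \(0 \le d \le C\). In the sketch below, the capacity C = 3 exceeds the largest demand realization (2); the cost parameters and demand pmf are hypothetical.

```python
# Stationary instance (no modulation): the Proposition 7 condition
# reduces to 0 <= d <= C, which holds since C = 3 >= max demand = 2.
# h, b, beta, and the pmf are illustrative choices.
h, b, beta, C = 1.0, 4.0, 0.9, 3
pmf = {0: 0.3, 1: 0.4, 2: 0.3}
GRID = list(range(-10, 15))
LO = GRID[0]

def one_period(y):
    """Expected one-period holding/backorder cost at order-up-to level y."""
    return sum(p * (h * max(y - d, 0) + b * max(d - y, 0)) for d, p in pmf.items())

v = {s: 0.0 for s in GRID}
levels = []                        # smallest minimizers y_n*, n = 0, 1, ...
for _ in range(15):
    G = {y: one_period(y)
            + beta * sum(p * v[max(LO, y - d)] for d, p in pmf.items())
         for y in GRID}
    levels.append(min(GRID, key=lambda y: (G[y], y)))
    v = {s: min(G[y] for y in GRID if s <= y <= s + C) for s in GRID}

# The base stock level never moves across successive approximation steps.
assert len(set(levels)) == 1
```

Here the smallest minimizer stays at \(y_0^*\) throughout, as Proposition 7 predicts; shrinking C below the largest demand would break the condition and, in general, the invariance.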

1.2 A2 Proof of Proposition 1

Proof

Let \(v_0(\varvec{x},s,C ) = \hat{v}_0(\varvec{x},s,C) = 0\). Consider \(\varvec{d} =(d_l, \varvec{d}_{j\ne l})\), where \(\varvec{d}_{j \ne l}\) can be considered as additional observation data z. Let \(\sum _z \sigma (d_l,z,\varvec{x}) = \sigma (d_l,\varvec{x})\).

$$\begin{aligned}&v_1 (\varvec{x},s,C) \\&\quad = \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sigma (d_l,\varvec{x}) \left[ c(y,d_l) \right] \bigg \} \\&\quad = \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sum _i x_i\sum _j \text {Pr}(j\mid i) \text {Pr}(d_l\mid j) \left[ c(y,d_l) \right] \bigg \} \\&\quad = \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sum _i x_i \bigg ( \text {Pr}(d_l\mid i) \\&\qquad + \sum _j \text {Pr}(j\mid i) \text {Pr}(d_l\mid j) - \text {Pr}(d_l\mid i) \bigg ) \left[ c(y,d_l) \right] \bigg \} \\&\quad \ge \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sum _i x_i \bigg ( \text {Pr}(d_l\mid i) -\max _k \text {Pr}(d_l\mid k) \\&\qquad + \min _k \text {Pr} (d_l\mid k)\bigg ) \left[ c(y,d_l) \right] \bigg \} \\&\quad \ge \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sum _i x_i \text {Pr}(d_l\mid i) c(y,d_l) \\&\qquad - \sum _{d_l}\big (\max _k \text {Pr}(d_l\mid k) - \min _k \text {Pr} (d_l\mid k) \big ) c(y,d_l) \bigg \} \\&\quad \ge \min _{s\le y\le s+C} \bigg \{ \sum _{d_l}\sum _i x_i \text {Pr}(d_l\mid i) c(y,d_l) \bigg \} \\&\qquad + \min _{s\le y\le s+C} \bigg \{ -\sum _{d_l}\big (\max _k \text {Pr}(d_l\mid k) \\&\qquad - \min _k \text {Pr} (d_l\mid k) \big ) c(y,d_l) \bigg \} \\&\quad = \hat{v}_1(\varvec{x},s,C) + \min _{s\le y\le s+C} \big \{ -\sum _{d_l} k(d_l) c(y,d_l) \big \} \\&\quad = \hat{v}_1(\varvec{x},s,C) - \max _{s\le y\le s+C} \big \{ \sum _{d_l} k(d_l) c(y,d_l) \big \} \\&\quad = \hat{v}_1(\varvec{x},s,C) - \sum _{d_l} k(d_l) c(\hat{y},d_l) \\&\quad = \hat{v}_1(\varvec{x},s,C) - u, \text { where } u = \sum _{d_l} k(d_l) c(\hat{y},d_l) \\&\quad \quad \text { and } \hat{y} \in \{s,s+C\} \text { due to convexity of } c({y},d_l) \\&\quad \quad \forall \ y, d_l, \text { where } k(d_l) = \big (\max _k \text {Pr}(d_l\mid k) - \min _k \text {Pr} (d_l\mid k) \big ). \end{aligned}$$

By induction and summation of the resulting geometric series,

$$\begin{aligned}&v_n(\varvec{x},s,C) \ge \hat{v}_n(\varvec{x},s,C) - u(1+\beta +\dots +\beta ^n); \\&\quad v(\varvec{x},s,C) \ge \hat{v}(\varvec{x},s,C) - u/(1-\beta ) . \end{aligned}$$
\(\square \)
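The telescoped bound above accumulates the one-step gap u geometrically. A quick numeric sanity check, with arbitrary illustrative values of \(\beta \) and u:

```python
# Finite-horizon gap u(1 + beta + ... + beta^n) versus the limit u/(1 - beta);
# beta and u are illustrative values, not taken from the paper.
beta, u = 0.95, 2.0
n = 50
partial = u * sum(beta ** k for k in range(n + 1))
limit = u / (1 - beta)
assert partial < limit                 # finite sums stay below the limit
# closed-form remainder: u * beta^(n+1) / (1 - beta) = limit * beta^(n+1)
assert abs((limit - partial) - limit * beta ** (n + 1)) < 1e-9
```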

1.3 A3 The Heuristic LARRO

We now present a heuristic for large instances with low computational overhead. LARRO stands for lookahead of rollout for relocations only.

$$\begin{aligned}&\text {LARRO:}\nonumber \\&\min _{\varvec{\varDelta ^S}, \varvec{u}'} \sum _l \bigg \{ K^{S+}_l \varDelta ^{S+}_l + K^{S-}_l \varDelta ^{S-}_l \nonumber \\&\quad + K^M |u_l-u_l' |/2 + (\zeta _l + \eta _l)/2 \bigg \},\nonumber \\&\text {subject to }\nonumber \\&\zeta _l \ge \gamma _j^l (s_l + \varDelta ^{S+}_l - \varDelta ^{S-}_l) + \hat{\gamma }_j^l \ \forall \ (\gamma _j^l, \hat{\gamma }_j^l) \in \varGamma ^l_{t+1}(u_l) \ \forall \ l \nonumber \\&\eta _l \ge \theta _j^l u_l' + \hat{\theta }_j^l \ \ \ \ \forall \ ( \theta _j^l, \hat{\theta }_j^l) \in \varTheta ^l_{t+1}(s_l) \ \forall \ l \nonumber \\&\sum _l u_l' = Y \nonumber \\&\sum _l \varDelta ^{S+}_l = \sum _l \varDelta ^{S-}_l,\nonumber \\&0\le u_l' \le Y_l', \ \forall \ l\nonumber \\&0\le \varDelta ^{S+}_l \le \sum _{k\ne l} (s_k)^+, \ \forall \ l\nonumber \\&0\le \varDelta ^{S-}_l \le (s_l)^+, \ \forall \ l\nonumber \\&u_l',\ \varDelta ^{S+}_l, \varDelta ^{S-}_l \in \mathbb {Z}, \ \ \eta _l, \ \zeta _l \in \mathbb {R} \ \ \forall \ l \end{aligned}$$
(14)

Proposition 8

LARRO can be solved exactly by relaxing the integrality constraints.

1.4 A4 Results: Additional Tables

We now present additional numerical results that complement Sect. 6.2.

See Tables 10, 11, 12, 13, 14 and 15.

Table 10 Variation of average savings due to RRO over DNF with varying \(\theta \) for Instance Set A
Table 11 Variation of average savings due to LSF over DNF across \(\theta \) for Instance Set A
Table 12 Variation of average savings due to RSF over DNF across \(\theta \) for Instance Set A
Table 13 Variation of average savings due to heuristics over DNF across G for \(\theta = 0.2\) on a shorter horizon \(T=10\) instead of \(T=30\) for Instance Set A
Table 14 Value of mobility (% savings over DNF) using LSF with \(\theta = 0.2\) across varying \( K^S\) and \(K^M\) for \(G=1\) instances of Instance Set A
Table 15 Value of mobility (% savings over DNF) using LSF with \(\theta = 0.2\) across varying \( K^S\) and \(K^M\) for \(G=5\) instances of Instance Set A


Cite this article

Malladi, S.S., Erera, A.L. & White, C.C. Managing mobile production-inventory systems influenced by a modulation process. Ann Oper Res 304, 299–330 (2021). https://doi.org/10.1007/s10479-021-04193-y
