
A Distributed Method for Optimal Capacity Reservation

Published in Journal of Optimization Theory and Applications

Abstract

We consider the problem of reserving link capacity in a network in such a way that any of a given set of flow scenarios can be supported. In the optimal capacity reservation problem, we choose the reserved link capacities to minimize the reservation cost. This problem reduces to a large linear program, with the number of variables and constraints on the order of the number of links times the number of scenarios. We develop a scalable, distributed algorithm for the problem that alternates between solving (in parallel) one flow problem per scenario and coordination steps, which connect the individual flows and the reservation capacities.



References

  1. Gomory, R.E., Hu, T.C.: An application of generalized linear programming to network flows. J. Soc. Ind. Appl. Math. 10(2), 260–283 (1962)

  2. Gomory, R.E., Hu, T.C.: Synthesis of a communication network. J. Soc. Ind. Appl. Math. 12(2), 348–369 (1964)

  3. Labbé, M., Séguin, R., Soriano, P., Wynants, C.: Network synthesis with non-simultaneous multicommodity flow requirements: bounds and heuristics. Tech. rep., Institut de Statistique et de Recherche Opérationnelle, Université Libre de Bruxelles (1999)

  4. Minoux, M.: Optimum synthesis of a network with non-simultaneous multicommodity flow requirements. N.-Holl. Math. Stud. 59, 269–277 (1981)

  5. Petrou, G., Lemaréchal, C., Ouorou, A.: An approach to robust network design in telecommunications. RAIRO-Oper. Res. 41(4), 411–426 (2007)

  6. Chekuri, C., Shepherd, F.B., Oriolo, G., Scutellà, M.G.: Hardness of robust network design. Networks 50(1), 50–54 (2007)

  7. Ben-Ameur, W., Kerivin, H.: New economical virtual private networks. Commun. ACM 46(6), 69–73 (2003)

  8. Ben-Ameur, W., Kerivin, H.: Routing of uncertain traffic demands. Optim. Eng. 6(3), 283–313 (2005)

  9. Poss, M., Raack, C.: Affine recourse for the robust network design problem: between static and dynamic routing. Networks 61(2), 180–198 (2013)

  10. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)

  11. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  12. Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2014)

  13. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  14. MOSEK ApS: MOSEK Optimizer API for Python 8.0.0.64 (2017). http://docs.mosek.com/8.0/pythonapi/index.html

  15. Diamond, S., Boyd, S.: CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17(83), 1–5 (2016)

  16. Domahidi, A., Chu, E., Boyd, S.: ECOS: an SOCP solver for embedded systems. In: Proceedings of the 12th European Control Conference, pp. 3071–3076. IEEE (2013)



Correspondence to Nicholas Moehle.

Additional information

Paul I. Barton.


Appendices

Solution of the Reservation Update Subproblem

Here, we give a simple method for solving the reservation update problem (12). This section closely follows [12, §6.4.1].

We can express (12) as

$$\begin{aligned} \begin{array}{ll} \text{ minimize } &{} \beta t + (1/2)\Vert x-u\Vert ^2 \\ \text{ subject } \text{ to } &{} x_i \le t, \quad i = 1,\ldots ,K. \end{array} \end{aligned}$$
(16)

Here, the variables are \(x\in {\mathbb {R}}^K\) and \(t\in {\mathbb {R}}\). The variable x corresponds to \(f_j\) in (12), and the parameters u and \(\beta \) correspond to \(\alpha f_j + (1-\alpha ){\tilde{f}}_j+\pi _j/\rho \) and \(p_j/\rho \), respectively.

The optimality conditions are

$$\begin{aligned} x^\star _i \le t^\star , \quad \mu ^\star _i \ge 0, \quad \mu _i^\star (x^\star _i - t^\star ) = 0, \quad (x^\star _i - u_i) + \mu ^\star _i = 0, \quad {\mathbf {1}}^T \mu ^\star = \beta , \end{aligned}$$

where \(x^\star \) and \(t^\star \) are the optimal primal variables and \(\mu ^\star \) is the optimal dual variable. If \(x^\star _i < t^\star \), the third condition implies that \(\mu ^\star _i = 0\), and if \(x_i^\star = t^\star \), the fourth condition implies that \(\mu ^\star _i = u_i - t^\star \). Because \(\mu _i^\star \ge 0\), this implies \(\mu ^\star _i = (u_i - t^\star )_+\). Substituting for \(\mu _i^\star \) in the fifth condition gives

$$\begin{aligned} \sum _{i=1}^K (u_i - t^\star )_+ = \beta . \end{aligned}$$

We can solve this equation for \(t^\star \) by first finding an index k such that

$$\begin{aligned} u_{[k+1]} - u_{[k]} \le ( u_{[k]} - \beta ) / k \le u_{[k]} - u_{[k-1]}, \end{aligned}$$

where \(u_{[i]}\) is the sum of the i largest elements of u. This can be done efficiently by first sorting the elements of u and then computing the cumulative sums in descending order until the left inequality above holds. Note that there is no need to check the right inequality, as it always holds. (Sorting u requires \(K\log K\) operations, making it the most computationally expensive step of the solution.) With this index k computed, we have

$$\begin{aligned} t^\star = ( u_{[k]} - \beta ) / k. \end{aligned}$$

We then recover \(x^\star \) as \(x^\star _i = \min \{t^\star , u_i\}.\)
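The method above can be sketched in a few lines of Python. This is an illustrative implementation only; the function name and interface are not taken from the paper's code.

```python
import numpy as np

def reservation_update(u, beta):
    """Solve (16): minimize beta*t + (1/2)*||x - u||^2 subject to x_i <= t.

    Sort-and-scan method described in the text; names are illustrative.
    Returns the optimal (x, t).
    """
    u = np.asarray(u, dtype=float)
    K = len(u)
    v = np.sort(u)[::-1]      # elements of u in descending order
    s = np.cumsum(v)          # s[k-1] = u_[k], the sum of the k largest elements
    for k in range(1, K + 1):
        t = (s[k - 1] - beta) / k
        # stop once the left inequality holds, i.e., once t is at least
        # the (k+1)-th largest element of u
        if k == K or t >= v[k]:
            break
    x = np.minimum(t, u)      # recover x* elementwise
    return x, t
```

For example, with \(u = (3, 1, 2)\) and \(\beta = 1\), the scan stops at \(k = 1\), giving \(t^\star = 2\) and \(x^\star = (2, 1, 2)\); one can check that \(\sum _i (u_i - t^\star )_+ = 1 = \beta \).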

Iterate Properties

Here, we prove that, for any \(l=1,2,\dots \), if the \(\pi ^{(k)}\) are taken to be the columns of \(\varPi (l)\), then optimality condition 1 is satisfied, i.e., \(\varPi (l){\mathbf {1}}= p\) and \(\varPi (l) \ge 0\). Additionally, if the \(f^{(k)}\) are taken to be the columns of \({\tilde{F}}(l)\), then condition 2 is also satisfied, i.e.,

$$\begin{aligned} \sum _{j=1}^m \sum _{k=1}^K \varPi (l)_{jk} {\tilde{F}}(l)_{jk} = p^T \max {\tilde{F}}(l). \end{aligned}$$

To see this, we first note that, by defining \(h:{\mathbb {R}}^{m\times K}\rightarrow {\mathbb {R}}\) such that \(h(F) = p^T \max F\), then the subdifferential of h at F is

$$\begin{aligned} \partial h(F) = \{ \varPi \in {\mathbb {R}}^{m\times K} \mid \varPi {\mathbf {1}}= p, \; \varPi \ge 0, \; \varPi _{jk} > 0 \implies (\max F)_j = F_{jk} \}, \end{aligned}$$

i.e., it is the set of matrices whose columns are scenario prices and whose nonzero elements coincide with the elements of F that are equal to the row-wise maximum value.

Now, from the reservation update equation, we have

$$\begin{aligned} {\tilde{F}}(l+1)= & {} \mathop {\hbox {argmin}}\limits _{{\tilde{F}}} p^T \max ({\tilde{F}}) + (\rho /2) \Vert \alpha F(l+1) + (1-\alpha ){\tilde{F}}(l) \\&+ (1/\rho )\varPi (l) - {\tilde{F}}\Vert _F^2. \end{aligned}$$

The optimality conditions are

$$\begin{aligned} \varPi (l) + \rho \big ( - {\tilde{F}}(l+1) + \alpha F(l+1) + (1-\alpha ) {\tilde{F}}(l) \big ) \in \partial h\big ({\tilde{F}}(l+1) \big ). \end{aligned}$$

Using the price update equation, this is

$$\begin{aligned} \varPi (l+1) \in \partial h\big ({\tilde{F}}(l+1)\big ). \end{aligned}$$

This shows that the columns of \(\varPi (l+1)\) are indeed valid scenario prices. Additionally, we have (dropping the indices \((l+1)\) for \(\varPi (l+1)\) and \({\tilde{F}}(l+1)\))

$$\begin{aligned} \sum _{j=1}^m \sum _{k=1}^K \varPi _{jk} {\tilde{F}}_{jk} = \sum _{j=1}^m (\max {\tilde{F}})_j \sum _{k=1}^K \varPi _{jk} = \sum _{j=1}^m (\max {\tilde{F}})_j p_j = p^T \max {\tilde{F}}. \end{aligned}$$

The first equality follows from the fact that the only elements of \(\varPi (l+1)\) that are positive, and thus the only terms that contribute to the sum, are those for which the corresponding element of \({\tilde{F}}(l+1)\) equals the maximum in its row. The second equality follows from the fact that \(\varPi (l+1){\mathbf {1}}= p\).
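The identity above is easy to verify numerically. The following sketch (with an arbitrary small example; the matrices and names are illustrative, not from the paper's code) constructs a matrix \(\varPi \in \partial h(F)\) by spreading each \(p_j\) uniformly over the row-wise maximizers of F, then checks both optimality condition 1 and the identity.

```python
import numpy as np

# Illustrative example: m = 2 rows (links), K = 3 columns (scenarios).
F = np.array([[1.0, 3.0, 3.0],
              [2.0, 0.0, 1.0]])
p = np.array([0.5, 1.5])

# Build Pi in the subdifferential of h(F) = p^T max F: each row of Pi is
# nonnegative, sums to p_j, and is supported on the row-wise maxima of F.
mask = (F == F.max(axis=1, keepdims=True))
Pi = mask * (p / mask.sum(axis=1))[:, None]

assert np.allclose(Pi.sum(axis=1), p)                  # Pi 1 = p, condition 1
assert (Pi >= 0).all()                                 # Pi >= 0
assert np.isclose((Pi * F).sum(), p @ F.max(axis=1))   # the displayed identity
```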

Choice of Parameter \(\mu \)

Here, we provide some supplemental details regarding the selection of the ADMM algorithm parameter \(\rho \) (see Eq. (14)). A suitable range of values for the parameter \(\mu \) was found on the basis of numerical experiments. In particular, we varied n, m, and K, and generated graphs and scenario flows according to the method of Sect. 6. We then studied the number of iterations required for convergence to a tolerance of 0.01 (problem instances taking longer than 200 iterations were terminated). Some representative examples are shown in Fig. 7. The best choice of \(\mu \) is reasonably consistent across problem dimensions; an appropriate range is around 0.05 to 0.1.

Fig. 7: Number of iterations required for convergence, as a function of the algorithm parameter \(\mu \). The legend shows the triple (n, m, K)



Cite this article

Moehle, N., Shen, X., Luo, ZQ. et al. A Distributed Method for Optimal Capacity Reservation. J Optim Theory Appl 182, 1130–1149 (2019). https://doi.org/10.1007/s10957-019-01528-5
