Abstract
We consider the problem of reserving link capacity in a network in such a way that any of a given set of flow scenarios can be supported. In the optimal capacity reservation problem, we choose the reserved link capacities to minimize the reservation cost. This problem reduces to a large linear program, with the number of variables and constraints on the order of the number of links times the number of scenarios. We develop a scalable, distributed algorithm for the problem that alternates between solving (in parallel) one flow problem per scenario, and coordination steps, which connect the individual flows and the reservation capacities.
References
Gomory, R.E., Hu, T.C.: An application of generalized linear programming to network flows. J. Soc. Ind. Appl. Math. 10(2), 260–283 (1962)
Gomory, R.E., Hu, T.C.: Synthesis of a communication network. J. Soc. Ind. Appl. Math. 12(2), 348–369 (1964)
Labbé, M., Séguin, R., Soriano, P., Wynants, C.: Network synthesis with non-simultaneous multicommodity flow requirements: bounds and heuristics. Tech. rep., Institut de Statistique et de Recherche Opérationnelle, Université Libre de Bruxelles (1999)
Minoux, M.: Optimum synthesis of a network with non-simultaneous multicommodity flow requirements. N.-Holl. Math. Stud. 59, 269–277 (1981)
Petrou, G., Lemaréchal, C., Ouorou, A.: An approach to robust network design in telecommunications. RAIRO-Oper. Res. 41(4), 411–426 (2007)
Chekuri, C., Shepherd, F.B., Oriolo, G., Scutellà, M.G.: Hardness of robust network design. Networks 50(1), 50–54 (2007)
Ben-Ameur, W., Kerivin, H.: New economical virtual private networks. Commun. ACM 46(6), 69–73 (2003)
Ben-Ameur, W., Kerivin, H.: Routing of uncertain traffic demands. Optim. Eng. 6(3), 283–313 (2005)
Poss, M., Raack, C.: Affine recourse for the robust network design problem: between static and dynamic routing. Networks 61(2), 180–198 (2013)
Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. 1(3), 123–231 (2014)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
MOSEK ApS: MOSEK Optimizer API for Python 8.0.0.64 (2017). http://docs.mosek.com/8.0/pythonapi/index.html
Diamond, S., Boyd, S.: CVXPY: A Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17(83), 1–5 (2016)
Domahidi, A., Chu, E., Boyd, S.: ECOS: An SOCP solver for embedded systems. In: Proceedings of the 12th European Control Conference, pp. 3071–3076. IEEE (2013)
Appendices
Solution of the Reservation Update Subproblem
Here, we give a simple method for solving the reservation update problem (12). This section largely follows [12, §6.4.1].
We can express (12) as
$$\begin{aligned} \begin{array}{ll} \hbox {minimize} &{} \beta t + (1/2)\Vert x - u\Vert _2^2 \\ \hbox {subject to} &{} x \le t{\mathbf {1}}. \end{array} \end{aligned}$$
Here, the variables are \(x\in {\mathbb {R}}^K\) and \(t\in {\mathbb {R}}\). (The variable x corresponds to \(f_j\) in (12), and the parameters u and \(\beta \) correspond to \(\alpha f_j + (1-\alpha ){\tilde{f}}_j+\pi _j/\rho \) and \(p_j/\rho \), respectively.)
The optimality conditions are
$$\begin{aligned} x^\star \le t^\star {\mathbf {1}}, \qquad \mu ^\star \ge 0, \qquad \mu ^\star _i (x^\star _i - t^\star ) = 0, \qquad x^\star - u + \mu ^\star = 0, \qquad {\mathbf {1}}^T \mu ^\star = \beta , \end{aligned}$$
where \(x^\star \) and \(t^\star \) are the optimal primal variables and \(\mu ^\star \) is the optimal dual variable. If \(x^\star _i < t^\star \), the third condition implies that \(\mu ^\star _i = 0\), and if \(x_i^\star = t^\star \), the fourth condition implies that \(\mu ^\star _i = u_i - t^\star \). Because \(\mu _i^\star \ge 0\), this implies \(\mu ^\star _i = (u_i - t^\star )_+\). Substituting for \(\mu _i^\star \) in the fifth condition gives
$$\begin{aligned} \sum _{i=1}^K (u_i - t^\star )_+ = \beta . \end{aligned}$$
We can solve this equation for \(t^\star \) by first finding an index k such that
$$\begin{aligned} u_{[k]} - k u_{(k+1)} \ge \beta \ge u_{[k]} - k u_{(k)}, \end{aligned}$$
where \(u_{(i)}\) is the ith largest element of u (with \(u_{(K+1)} = -\infty \)) and \(u_{[i]}\) is the sum of the i largest elements of u. This can be done efficiently by first sorting the elements of u and then computing the cumulative sums in descending order until the left inequality in the above holds. Note that in this way there is no need to check the second inequality, as it will always hold. (The number of operations required to sort u is \(K\log K\), making sorting the most computationally expensive step of the solution.) With this index k computed, we have
$$\begin{aligned} t^\star = \frac{u_{[k]} - \beta }{k}. \end{aligned}$$
We then recover \(x^\star \) as \(x^\star _i = \min \{t^\star , u_i\}.\)
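The procedure above (sort, scan the cumulative sums, compute \(t^\star \), then clip) can be sketched in code. The following is a minimal illustration using plain Python lists; the function name and data representation are ours, not the paper's:

```python
def reservation_update(u, beta):
    """Solve  minimize  beta*t + (1/2)*||x - u||^2  subject to  x <= t*1,
    by the sort-and-cumulative-sum procedure described above."""
    K = len(u)
    s = sorted(u, reverse=True)          # s[i-1] = u_(i), the ith largest element
    # Find the largest k with u_[k] - k*u_(k) <= beta by scanning the
    # cumulative sums in descending order.
    cum = s[0]                           # u_[1]
    k = 1
    for i in range(2, K + 1):
        c = cum + s[i - 1]               # u_[i]
        if c - i * s[i - 1] > beta:      # inequality fails at i; keep k = i - 1
            break
        cum, k = c, i
    t = (cum - beta) / k                 # t_star = (u_[k] - beta) / k
    x = [min(t, ui) for ui in u]         # recover x_star by clipping u at t_star
    return x, t
```

For instance, with u = [3.0, 1.0] and beta = 1.0 this returns x = [2.0, 1.0] and t = 2.0, and one can check that \(\sum _i (u_i - t^\star )_+ = \beta \) holds at the returned solution.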
Iterate Properties
Here, we prove that, for any \(l=1,2,\dots \), if \(\pi ^{(k)}\) are taken to be the columns of \(\varPi (l)\), then optimality condition 1 is satisfied, i.e., \(\varPi (l){\mathbf {1}}= p\) and \(\varPi (l) \ge 0\). Additionally, if \(f^{(k)}\) are taken to be the columns of F(l), then condition 2 is also satisfied, i.e.,
$$\begin{aligned} \sum _{k=1}^K \big (\pi ^{(k)}\big )^T f^{(k)} = p^T \max F(l). \end{aligned}$$
To see this, we first note that, defining \(h:{\mathbb {R}}^{m\times K}\rightarrow {\mathbb {R}}\) by \(h(F) = p^T \max F\), where the maximum is taken row-wise, the subdifferential of h at F is
$$\begin{aligned} \partial h(F) = \big \{ \varPi \in {\mathbb {R}}^{m\times K} : \varPi {\mathbf {1}}= p,\ \varPi \ge 0,\ \varPi _{ij} = 0 \hbox { whenever } F_{ij} < (\max F)_i \big \}, \end{aligned}$$
i.e., it is the set of matrices whose columns are valid scenario prices and whose nonzero elements coincide with the elements of F that are equal to the row-wise maximum value.
Now, from the reservation update equation, we have
$$\begin{aligned} {\tilde{F}}(l+1) = \mathop {\hbox {argmin}}_{{\tilde{F}}} \Big ( h({\tilde{F}}) + (\rho /2)\big \Vert {\tilde{F}} - \alpha F(l+1) - (1-\alpha ){\tilde{F}}(l) - \varPi (l)/\rho \big \Vert _F^2 \Big ). \end{aligned}$$
The optimality conditions are
$$\begin{aligned} \varPi (l) + \rho \big ( \alpha F(l+1) + (1-\alpha ){\tilde{F}}(l) - {\tilde{F}}(l+1) \big ) \in \partial h\big ({\tilde{F}}(l+1)\big ). \end{aligned}$$
Using the price update equation, this is
$$\begin{aligned} \varPi (l+1) \in \partial h\big ({\tilde{F}}(l+1)\big ). \end{aligned}$$
This shows that the columns of \(\varPi (l+1)\) are indeed valid scenario prices. Additionally, we have (dropping the indices \((l+1)\) for \(\varPi (l+1)\) and \({\tilde{F}}(l+1)\))
$$\begin{aligned} \sum _{k=1}^K \big (\pi ^{(k)}\big )^T {\tilde{f}}^{(k)} = \sum _{i=1}^m \big (\max {\tilde{F}}\big )_i \sum _{j=1}^K \varPi _{ij} = p^T \max {\tilde{F}}. \end{aligned}$$
The first step follows from the fact that the only elements of \(\varPi \) that are positive, and thus the only terms that contribute to the sum, are those for which the corresponding element of \({\tilde{F}}\) is equal to the maximum in that row. The second step follows from the fact that \(\varPi {\mathbf {1}}= p\).
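As a numerical sanity check of this complementarity identity, the following toy example (our own numbers, not from the paper) builds a matrix \(\varPi \) with nonnegative entries, row sums equal to p, and support only on the row-wise maxima of \({\tilde{F}}\), and verifies that \({{\,\mathrm{tr}\,}}(\varPi ^T {\tilde{F}}) = p^T \max {\tilde{F}}\):

```python
# Toy example: m = 2 links (rows), K = 3 scenarios (columns).
F = [[1.0, 3.0, 3.0],
     [2.0, 2.0, 0.0]]
p = [1.0, 5.0]
# Pi has nonnegative entries, row sums equal to p, and is zero wherever
# the corresponding entry of F is below the row-wise maximum.
Pi = [[0.0, 0.4, 0.6],
      [2.0, 3.0, 0.0]]

lhs = sum(Pi[i][j] * F[i][j] for i in range(2) for j in range(3))  # tr(Pi^T F)
rhs = sum(p[i] * max(F[i]) for i in range(2))                      # p^T max F
assert abs(lhs - rhs) < 1e-12   # both equal 13.0
```

Perturbing any zero entry of `Pi` to a positive value (while keeping the row sums equal to p) breaks the identity, since a below-maximum entry of F would then contribute to the left-hand side.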
Choice of Parameter \(\mu \)
Here, we provide some supplemental details regarding the selection of the ADMM algorithm parameter \(\rho \) (see Eq. (14)). A suitable range of values for the parameter \(\mu \) was found on the basis of numerical experiments. In particular, we varied n, m, and K, and generated graphs and scenario flows according to the method of Sect. 6. We then studied the number of iterations required for convergence to a tolerance of 0.01 (problem instances requiring more than 200 iterations were terminated). Some representative examples are shown in Fig. 7. We see that the best choice of \(\mu \) is reasonably consistent across problem dimensions; an appropriate range for \(\mu \) is roughly 0.05 to 0.1.
Moehle, N., Shen, X., Luo, ZQ. et al. A Distributed Method for Optimal Capacity Reservation. J Optim Theory Appl 182, 1130–1149 (2019). https://doi.org/10.1007/s10957-019-01528-5