
Competitive online algorithms for resource allocation over the positive semidefinite cone

  • Full Length Paper
  • Series B
Mathematical Programming

Abstract

We consider a new and general online resource allocation problem, where the goal is to maximize a function of a positive semidefinite (PSD) matrix with a scalar budget constraint. The problem data arrives online, and the algorithm needs to make an irrevocable decision at each step. Of particular interest are classic experiment design problems in the online setting, with the algorithm deciding whether to allocate budget to each experiment as new experiments become available sequentially. We analyze two greedy primal-dual algorithms and provide bounds on their competitive ratios. Our analysis relies on a smooth surrogate of the objective function that needs to satisfy a new diminishing returns (PSD-DR) property (that its gradient is order-reversing with respect to the PSD cone). Using the representation for monotone maps on the PSD cone given by Löwner’s theorem, we obtain a convex parametrization of the family of functions satisfying PSD-DR. We then formulate a convex optimization problem to directly optimize our competitive ratio bound over this set. This design problem can be solved offline before the data start arriving. The online algorithm that uses the designed smoothing is tailored to the given cost function, and enjoys a competitive ratio at least as good as our optimized bound. We provide examples of computing the smooth surrogate for D-optimal and A-optimal experiment design, and demonstrate the performance of the custom-designed algorithm.
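To make the setting concrete, here is a minimal sketch of a smoothing-guided greedy online allocation in the spirit described above, specialized to a D-optimal-style objective. It is illustrative only: the surrogate \(H_S(U) = \log \det (I + U)\), the fixed price parameter `theta`, and the simple take-or-skip rule are assumptions for exposition, not the primal-dual updates or the optimized smoothing designed in the paper.

```python
import numpy as np

def greedy_online_allocation(stream, budget, n, theta=1.0):
    """Illustrative greedy allocation guided by a smooth surrogate.

    stream : iterable of (A_t, c_t), with A_t an (n, n) PSD matrix and c_t > 0.
    budget : scalar budget; decisions are irrevocable once made.
    theta  : fixed "price" per unit cost, a crude stand-in for the dual
             variable maintained by the paper's primal-dual algorithms.
    """
    U = np.zeros((n, n))        # accumulated information matrix, sum_s A_s x_s
    spent = 0.0                 # accumulated cost, sum_s c_s x_s
    decisions = []
    for A_t, c_t in stream:
        Y = np.linalg.inv(np.eye(n) + U)       # gradient of H_S(U) = log det(I + U)
        marginal = float(np.trace(A_t @ Y))    # first-order gain from taking A_t
        x_t = 1.0 if (marginal >= theta * c_t and spent + c_t <= budget) else 0.0
        U += x_t * A_t
        spent += x_t * c_t
        decisions.append(x_t)                  # irrevocable choice for step t
    return decisions
```

For example, feeding in rank-one experiments A_t = v_t v_t^T with unit costs gives a simple online sensor-selection heuristic.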


Notes

  1. To simplify the notation in the rest of the paper, assume \(H(0) = 0\) by replacing h(u) with \(h(u) - h(0)\).

  2. Note that we could choose, for instance, \(u_{\max } = b'\max _{t} c_t^{-1}\lambda _{\max }(A_t)\), but for certain classes of problems better bounds may be available.

  3. We can extend the domain of \(G_S\) to negative reals by letting \(G_S = 0\) on \(\mathbf {R}_{-}\) to satisfy the technical assumption on the domain of \(G_S\) in Assumption (2).

  4. Note that we could choose, for instance, \(u_{\max } = b'\max _t c_t^{-1}\lambda _{\max }(A_t)\), but for certain classes of problems better bounds may be available (a short numerical sketch of this bound follows these notes).
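A one-line numerical sketch of the bound in notes 2 and 4, assuming the data are available as NumPy arrays (here `b_prime` stands for the constant \(b'\) in the formula):

```python
import numpy as np

def u_max_bound(b_prime, As, cs):
    """u_max = b' * max_t lambda_max(A_t) / c_t (the bound from notes 2 and 4)."""
    return b_prime * max(np.linalg.eigvalsh(A)[-1] / c for A, c in zip(As, cs))
```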


Acknowledgements

The authors thank Omid Sadeghi-Meibodi for helpful comments. The work of MF and RE was supported in part by Grants ONR N000141612789, NSF CCF 1409836, NSF Tripods 1740551, and ONR MURI N000141612710. Part of this work was done while RE and MF were visiting the Simons Institute for the Theory of Computing, partially supported by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through NSF Grant CCF-1740425.

Author information

Corresponding author

Correspondence to Reza Eghbali.

A Additional proofs

Here, we provide additional proofs not given in detail in the body of the paper.

Proof

(of Lemma 1) By the definition of \(D_{\text {sim}}\), the definition of \(\tilde{x}_t\), and the concavity of \(H_S\) and \(G_S\), we have that

$$\begin{aligned} D_{\text {sim}}&= \sum \limits _{t=1}^{m}\left[ \langle {A_t\tilde{x}_t} , {\tilde{Y}_t} \rangle + c_t\tilde{x}_t\tilde{z}_t\right] - H^*(\tilde{Y}_m) - G^*(\tilde{z}_m)\\&\le \sum \limits _{t=1}^{m}\left[ H_S\left( \sum \limits _{s=1}^{t}A_s\tilde{x}_s\right) - H_S\left( \sum \limits _{s=1}^{t-1}A_s\tilde{x}_s\right) + G_S\left( \sum \limits _{s=1}^{t}c_s\tilde{x}_s\right) - G_S\left( \sum \limits _{s=1}^{t-1}c_s\tilde{x}_s\right) \right] - H^*(\tilde{Y}_m) - G^*(\tilde{z}_m)\\&= H_S\left( \sum \limits _{s=1}^{m}A_s\tilde{x}_s\right) + G_S\left( \sum \limits _{s=1}^{m}c_s\tilde{x}_s\right) - H^*(\tilde{Y}_m) - G^*(\tilde{z}_m). \end{aligned}$$
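To spell out the first step, each summand is bounded using the first-order (concavity) inequality for \(H_S\) at \(\sum _{s=1}^{t}A_s\tilde{x}_s\), together with the identity \(\tilde{Y}_t = \nabla H_S\left( \sum _{s=1}^{t}A_s\tilde{x}_s\right) \) (recalled in the proof of Lemma 2) and the analogous statement for \(G_S\) and \(\tilde{z}_t\):

$$\begin{aligned} \langle {A_t\tilde{x}_t} , {\tilde{Y}_t} \rangle = \left\langle \nabla H_S\left( \sum \limits _{s=1}^{t}A_s\tilde{x}_s\right) ,\; \sum \limits _{s=1}^{t}A_s\tilde{x}_s - \sum \limits _{s=1}^{t-1}A_s\tilde{x}_s \right\rangle \le H_S\left( \sum \limits _{s=1}^{t}A_s\tilde{x}_s\right) - H_S\left( \sum \limits _{s=1}^{t-1}A_s\tilde{x}_s\right) , \end{aligned}$$

and similarly \(c_t\tilde{x}_t\tilde{z}_t \le G_S\left( \sum _{s=1}^{t}c_s\tilde{x}_s\right) - G_S\left( \sum _{s=1}^{t-1}c_s\tilde{x}_s\right) \).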

The inequality follows from the concavity of \(H_S\) and \(G_S\), applied term by term as just displayed. The final equality holds by telescoping the sum and using \(H_S(0) = G_S(0) = 0\). For the sequential algorithm we can write:

$$\begin{aligned} D_{\text {seq}}&= \sum \limits _{t=1}^{m}\left[ \langle {A_t\hat{x}_t} , {\hat{Y}_{t-1}} \rangle + c_t\hat{x}_t\hat{z}_{t-1}\right] - H^*(\hat{Y}_m) - G^*(\hat{z}_m)\\&= \sum \limits _{t=1}^{m}\left[ \langle {A_t\hat{x}_t} , {\hat{Y}_{t}} \rangle + c_t\hat{x}_t\hat{z}_{t}\right] - H^*(\hat{Y}_m) - G^*(\hat{z}_m)\\&\qquad + \sum \limits _{t=1}^{m}\left[ \langle {A_t\hat{x}_t} , {\hat{Y}_{t-1} - \hat{Y}_{t}} \rangle + c_t\hat{x}_t\left( \hat{z}_{t-1} - \hat{z}_{t}\right) \right] . \end{aligned}$$

The rest of the argument now follows the same steps as in the simultaneous case. \(\square \)

Proof

(of Lemma 2) We write out the argument for the inequality \(D_{\text {sim}} \ge D^\star \). The argument showing that \(D_{\text {seq}} \ge D^\star \) is identical. We first show that the PSD-DR assumption on \(H_S\) implies

$$\begin{aligned} \sum _{t=1}^{m}\left( \langle {A_t} , {\tilde{Y}_t} \rangle +c_t\tilde{z}_t\right) _+ \ge \sum _{s=1}^{m}\left( \langle {A_s} , {\tilde{Y}_m} \rangle +c_s\tilde{z}_m\right) _+. \end{aligned}$$
(28)

Since \(A_s \in S_+^n\) and \(\tilde{x}_s \ge 0\) for all \(s\in [m]\), it follows that \(\sum _{s=1}^{t}A_s \tilde{x}_s \preceq \sum _{s=1}^{m}A_s\tilde{x}_s\) for all \(t\in [m]\). Since \(\tilde{Y}_t = \nabla H_S\left( \sum _{s=1}^{t}A_s\tilde{x}_s\right) \), if \(H_S\) satisfies the PSD-DR assumption then \(\tilde{Y}_t \succeq \tilde{Y}_m\) for all \(t\in [m]\). By a similar argument, since \(G_S\) is concave (so its derivative is nonincreasing), \(\tilde{z}_t \ge \tilde{z}_m\) for all \(t\in [m]\). Since \(A_t \in S_+^n\) and \(c_t \ge 0\) for all \(t\in [m]\),

$$\begin{aligned} \langle {A_t} , {\tilde{Y}_t} \rangle + c_t\tilde{z}_t \ge \langle {A_t} , {\tilde{Y}_m} \rangle + c_t\tilde{z}_t \ge \langle {A_t} , {\tilde{Y}_m} \rangle + c_t\tilde{z}_m \end{aligned}$$

for all \(t\in [m]\). Taking the positive part and then summing establishes (28). To conclude that \(D_{\text {sim}}\ge D^\star \), we need only observe that \(D^\star \) is a lower bound on the dual objective (4) evaluated at \((\tilde{Y}_m,\tilde{z}_m)\). \(\square \)
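As an aside, inequality (28) is easy to verify numerically for a concrete smoothing. The following is a small sanity check under illustrative assumptions: \(H_S(U) = \log \det (I+U)\), whose gradient \((I+U)^{-1}\) is order reversing on the PSD cone (so PSD-DR holds), a concave scalar \(G_S(u) = \sqrt{1+u}\), and random allocations \(\tilde{x}_t\). These choices are for illustration only and are not the smoothings designed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 30
A = [V @ V.T for V in (rng.standard_normal((n, 2)) for _ in range(m))]  # PSD data A_t
c = rng.uniform(0.5, 2.0, size=m)          # costs c_t
x = rng.uniform(0.0, 1.0, size=m)          # allocations x_t

U = np.zeros((n, n))
u = 0.0
Y, z = [], []
for t in range(m):
    U = U + x[t] * A[t]
    u = u + c[t] * x[t]
    Y.append(np.linalg.inv(np.eye(n) + U))  # Y_t = grad H_S at sum_{s<=t} A_s x_s
    z.append(0.5 / np.sqrt(1.0 + u))        # z_t = G_S'(sum_{s<=t} c_s x_s)

lhs = sum(max(np.trace(A[t] @ Y[t]) + c[t] * z[t], 0.0) for t in range(m))
rhs = sum(max(np.trace(A[t] @ Y[-1]) + c[t] * z[-1], 0.0) for t in range(m))
assert lhs >= rhs - 1e-9                    # inequality (28) on this instance
print(lhs, rhs)
```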


Cite this article

Eghbali, R., Saunderson, J. & Fazel, M. Competitive online algorithms for resource allocation over the positive semidefinite cone. Math. Program. 170, 267–292 (2018). https://doi.org/10.1007/s10107-018-1305-1

