Amenable cones: error bounds without constraint qualifications

  • Full Length Paper
  • Series A
  • Mathematical Programming

Abstract

We provide a framework for obtaining error bounds for linear conic problems without assuming constraint qualifications or regularity conditions. The key aspects of our approach are the notions of amenable cones and facial residual functions. For amenable cones, it is shown that error bounds can be expressed as a composition of facial residual functions. The number of compositions is related to the facial reduction technique and the singularity degree of the problem. In particular, we show that symmetric cones are amenable and compute facial residual functions. From that, we are able to furnish a new Hölderian error bound, thus extending and shedding new light on an earlier result by Sturm on semidefinite matrices. We also provide error bounds for the intersection of amenable cones; this is used to prove error bounds for the doubly nonnegative cone. Finally, we list some open problems.

Notes

  1. Note that if \( {\mathcal {F}}\mathrel {\unlhd } {{\mathcal {K}}}\) and \( {\mathcal {F}}\subsetneq {{\mathcal {K}}}\), then \(\dim {\mathcal {F}}< \dim {{\mathcal {K}}}\).

  2. A cone is homogeneous if for every \(x,y \in \mathrm {ri}\, {{\mathcal {K}}}\) there is a linear bijection Q such that \(Q(x) = y\) and \(Q( {{\mathcal {K}}}) = {{\mathcal {K}}}\).

  3. In more detail, we have \( {\mathcal {F}}= \{v \in V(c,1)\mid \langle u , v \rangle \ge 0, \forall u \in {\mathcal {F}}\}\). Then, let \(v \in {\mathcal {F}}\) be arbitrary. Since \(z \in {\mathcal {F}}^*\), we have \(\langle v , z \rangle = \langle v , z_1 \rangle \ge 0 \), due to the orthogonality among V(c, 0), V(c, 1 / 2) and V(c, 1). This shows that \(z_1 \in {\mathcal {F}}\). Similarly, we can show that \( {\mathcal {F}}\cap \{z\}^\perp = {\mathcal {F}}\cap \{z_1\}^\perp . \) (A numerical illustration of this orthogonality, in the symmetric-matrix case, is sketched after these notes.)

  4. Rigorously, the argument so far only shows that \(z_1 \in \mathrm {ri}\,( {\mathcal {F}}^*\cap {\hat{ {\mathcal {F}}}}^\perp )\). However, since \(z_1 \in V(c,1)\), we can put “\(\mathrm {ri}\,\)” outside and conclude that \(V(c,1)\cap \mathrm {ri}\,( {\mathcal {F}}^*\cap {\hat{ {\mathcal {F}}}}^\perp ) = \mathrm {ri}\,( {\mathcal {F}}^*\cap {\hat{ {\mathcal {F}}}}^\perp \cap V(c,1))\). Therefore, as remarked, \(z_1 \in \mathrm {ri}\,{\hat{ {\mathcal {F}}}}^{\varDelta }\). Furthermore, since \(z_1 \in {{\mathcal {K}}}\) and \({\hat{c}} \in {\hat{ {\mathcal {F}}}}\), we have \(\langle {\hat{c}} , z_1 \rangle = 0\). By item (iii) of Proposition 29, we have \( {{\hat{c}} \circ z_1 } = 0\) and \(z_1 \in {\hat{V}}({\hat{c}}, 0)\) as claimed.

  5. Let \(u \in {{\mathcal {K}}}\) be such that \( {\mathrm {dist}\,}(x, {{\mathcal {K}}}) = \Vert x-u\Vert \). Decompose u according to the same decomposition as x, so that \(u = u_{11} + u_{12} + u_{13} + u_2 + u_3\). By item (i) of Proposition 32, we have that \(u_{13} \in {\hat{ {\mathcal {F}}}} ^{\varDelta }\). Therefore \( {\mathrm {dist}\,}(x_{13}, {\hat{ {\mathcal {F}}}} ^{\varDelta }) \le \Vert x_{13} - u_{13}\Vert \le \Vert x-u\Vert \le \epsilon \). Similarly, we have \( {\mathrm {dist}\,}(x_{1}, {\mathcal {F}}) \le \Vert x_{1} - u_{1}\Vert \le \epsilon \).

  6. If \(x_1 \in {\mathcal {F}}\), then we have \(x_1 + \epsilon c \in {\mathcal {F}}\). If not, then \(\lambda _{\min }(x_1) < 0\). Here, we are considering the minimum eigenvalue of \(x_1\) with respect to the algebra V(c, 1). In this case, from Proposition 20, we have that \(\epsilon \ge {\mathrm {dist}\,}(x_{1}, {\mathcal {F}})\ge -\lambda _{\min }(x_1)\). Then, since c is the identity in V(c, 1), adding \(\epsilon c\) to \(x_1\) has the effect of adding \(\epsilon \) to \(\lambda _{\min }(x_1)\); a numerical sketch of this fact, in the symmetric-matrix case, is given after these notes.

  7. The subtlety here is that \(x_{13} + (\epsilon +\alpha )(c-{\hat{c}}) \) and its inverse, seen as elements of \({\hat{V}}({\hat{c}},0)\), have no zero eigenvalues, since they belong to \(\mathrm {ri}\,{\hat{ {\mathcal {F}}}}^{\varDelta }\). If we see them as elements of \({\mathcal {E}}\), zero eigenvalues might appear, but the corresponding idempotents certainly do not belong to \({\hat{V}}({\hat{c}},0)\).

  8. With this convention, \(\lambda _{\min }((x_{13} + (\epsilon +\alpha )(c - {\hat{c}}) )^{-1})\) refers to the minimum eigenvalue in the algebra \({\hat{V}}({\hat{c}}, 0)\), which is also why we can use (33) at the end.
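
To make the orthogonality used in Note 3 concrete, the sketch below specializes to the Euclidean Jordan algebra of real symmetric matrices with product \(x \circ y = (xy+yx)/2\) and the idempotent \(c = \mathrm {diag}(I_k, 0)\); the Peirce spaces V(c, 1), V(c, 1/2) and V(c, 0) are then the upper-left block, the off-diagonal blocks and the lower-right block, which are mutually orthogonal in the trace inner product. This is only an illustrative numerical check in that special case (it is not part of the paper) and it assumes numpy is available.

```python
import numpy as np

n, k = 5, 2
rng = np.random.default_rng(0)

# Idempotent c = diag(I_k, 0) in the Jordan algebra of n x n symmetric matrices.
c = np.zeros((n, n))
c[:k, :k] = np.eye(k)

def peirce_components(x, c):
    """Split a symmetric matrix x into its Peirce components with respect to c:
    V(c,1) = upper-left block, V(c,1/2) = off-diagonal blocks, V(c,0) = lower-right block."""
    p, q = c, np.eye(len(c)) - c
    return p @ x @ p, p @ x @ q + q @ x @ p, q @ x @ q

x = rng.standard_normal((n, n))
x = (x + x.T) / 2
x1, x12, x0 = peirce_components(x, c)

# The components add up to x and are pairwise orthogonal in the trace inner product,
# which is the orthogonality invoked in Note 3.
assert np.allclose(x1 + x12 + x0, x)
for a, b in [(x1, x12), (x1, x0), (x12, x0)]:
    assert abs(np.trace(a @ b)) < 1e-12
```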
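
Similarly, the eigenvalue-shift fact used in Note 6 reduces, in the symmetric-matrix case where c is the identity matrix of the subalgebra, to \(\lambda _{\min }(x_1 + \epsilon I) = \lambda _{\min }(x_1) + \epsilon \). A minimal numerical sketch of this fact (again assuming numpy; not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.3

# A random symmetric matrix standing in for x_1, viewed inside a subalgebra
# whose identity element c is the identity matrix.
x1 = rng.standard_normal((4, 4))
x1 = (x1 + x1.T) / 2

lam_min = np.linalg.eigvalsh(x1)[0]                            # smallest eigenvalue of x_1
lam_min_shifted = np.linalg.eigvalsh(x1 + eps * np.eye(4))[0]  # smallest eigenvalue of x_1 + eps*c

# Adding eps * c shifts every eigenvalue by eps, in particular the smallest one.
assert np.isclose(lam_min_shifted, lam_min + eps)
```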

References

  1. Arima, N., Kim, S., Kojima, M., Toh, K.-C.: Lagrangian-conic relaxations, part I: a unified framework and its applications to quadratic optimization problems. Pac. J. Optim. 14(1), 161–192 (2018)

  2. Arima, N., Kim, S., Kojima, M., Toh, K.-C.: A robust Lagrangian-DNN method for a class of quadratic optimization problems. Comput. Optim. Appl. 66(3), 453–479 (2017)

  3. Baes, M., Lin, H.: A Lipschitzian error bound for monotone symmetric cone linear complementarity problem. Optimization 64(11), 2395–2416 (2015)

  4. Barker, G.P.: Perfect cones. Linear Algebra Appl. 22, 211–221 (1978)

  5. Bauschke, H.H., Borwein, J.M., Li, W.: Strong conical hull intersection property, bounded linear regularity, Jameson’s property (G), and error bounds in convex optimization. Math. Program. 86(1), 135–160 (1999)

  6. Borwein, J.M., Wolkowicz, H.: Facial reduction for a cone-convex programming problem. J. Aust. Math. Soc. (Ser. A) 30(03), 369–380 (1981)

  7. Borwein, J.M., Wolkowicz, H.: Regularizing the abstract convex program. J. Math. Anal. Appl. 83(2), 495–530 (1981)

  8. Borwein, J.M., Wolkowicz, H.: Characterizations of optimality without constraint qualification for the abstract convex program. In: Guignard, M. (ed.) Optimality and Stability in Mathematical Programming, pp. 77–100. Springer, Berlin (1982)

  9. Cheung, Y.-L., Schurr, S., Wolkowicz, H.: Preprocessing and regularization for degenerate semidefinite programs. In: Bailey, D.H., Bauschke, H.H., Borwein, P., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Computational and Analytical Mathematics. Springer Proceedings in Mathematics and Statistics, vol. 50, pp. 251–303. Springer, New York (2013)

  10. Chua, C.B.: Relating homogeneous cones and positive definite cones via T-algebras. SIAM J. Optim. 14(2), 500–506 (2003)

  11. Chua, C.B., Tunçel, L.: Invariance and efficiency of convex representations. Math. Program. 111, 113–140 (2008)

  12. Drusvyatskiy, D., Li, G., Wolkowicz, H.: A note on alternating projections for ill-posed semidefinite feasibility problems. Math. Program. 162(1), 537–548 (2017)

  13. Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and euclidean distance matrices. SIAM J. Optim. 25(2), 1160–1178 (2015)

  14. Drusvyatskiy, D., Wolkowicz, H.: The many faces of degeneracy in conic optimization. Technical report, University of Washington (2017)

  15. Faraut, J., Korányi, A.: Analysis on Symmetric Cones. Oxford Mathematical Monographs. Clarendon Press, Oxford (1994)

  16. Faybusovich, L.: On Nesterov’s approach to semi-infinite programming. Acta Appl. Math. 74(2), 195–215 (2002)

  17. Faybusovich, L.: Jordan-algebraic approach to convexity theorems for quadratic mappings. SIAM J. Optim. 17(2), 558–576 (2006)

  18. Faybusovich, L.: Several Jordan-algebraic aspects of optimization. Optimization 57(3), 379–393 (2008)

  19. Friberg, H.A.: A relaxed-certificate facial reduction algorithm based on subspace intersection. Oper. Res. Lett. 44(6), 718–722 (2016)

  20. Gowda, M.S., Sznajder, R.: Schur complements, Schur determinantal and Haynsworth inertia formulas in Euclidean Jordan algebras. Linear Algebra Appl. 432(6), 1553–1559 (2010)

  21. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bur. Stand. 49(4), 263–265 (1957)

  22. Ioffe, A.D.: Variational Analysis of Regular Mappings: Theory and Applications. Springer Monographs in Mathematics. Springer, Berlin (2017)

  23. Ito, M., Lourenço, B.F.: A bound on the Carathéodory number. Linear Algebra Appl. 532, 347–363 (2017)

  24. Ito, M., Lourenço, B.F.: The \(p\)-cones in dimension \(n\ge 3\) are not homogeneous when \(p\ne 2\). Linear Algebra Appl. 533, 326–335 (2017)

  25. Ito, M., Lourenço, B.F.: The automorphism group and the non-self-duality of p-cones. J. Math. Anal. Appl. 471(1), 392–410 (2019)

  26. Kim, S., Kojima, M., Toh, K.-C.: A Lagrangian-DNN relaxation: a fast method for computing tight lower bounds for a class of quadratic optimization problems. Math. Program. 156(1), 161–187 (2016)

  27. Koecher, M.: The Minnesota Notes on Jordan Algebras and Their Applications. Lecture Notes in Mathematics, vol. 1710. Springer, Berlin (1999)

  28. Lewis, A.S., Pang, J.-S.: Error bounds for convex inequality systems. In: Crouzeix, J.-P., Martínez-Legaz, J.-E., Volle, M. (eds.) Generalized Convexity. Generalized Monotonicity: Recent Results, pp. 75–110. Springer, New York (1998)

  29. Liu, M., Pataki, G.: Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming. Math. Program. 167(2), 435–480 (2018)

  30. Lourenço, B.F., Muramatsu, M., Tsuchiya, T.: Solving SDP completely with an interior point oracle. arXiv e-prints arXiv:1507.08065 (2015)

  31. Lourenço, B.F., Muramatsu, M., Tsuchiya, T.: Facial reduction and partial polyhedrality. SIAM J. Optim. 28(3), 2304–2326 (2018)

  32. Lourenço, B.F., Fukuda, E.H., Fukushima, M.: Optimality conditions for problems over symmetric cones and a simple augmented Lagrangian method. Math. Oper. Res. 43(4), 1233–1251 (2018)

  33. Luo, Z., Sturm, J.F.: Error analysis. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic Publishers, Dordrecht (2000)

  34. Luo, Z., Sturm, J.F., Zhang, S.: Duality results for conic convex programming. Technical report, Econometric Institute, Erasmus University Rotterdam, The Netherlands (1997)

  35. Pang, J.-S.: Error bounds in mathematical programming. Math. Program. 79(1), 299–332 (1997)

  36. Pataki, G.: The geometry of semidefinite programming. In: Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.) Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic Publishers, Dordrecht (2000)

  37. Pataki, G.: On the connection of facially exposed and nice cones. J. Math. Anal. Appl. 400(1), 211–221 (2013)

  38. Pataki, G.: Strong duality in conic linear programming: facial reduction and extended duals. In: Bailey, D.H., Bauschke, H.H., Borwein, P., Garvan, F., Théra, M., Vanderwerff, J.D., Wolkowicz, H. (eds.) Computational and Analytical Mathematics, vol. 50, pp. 613–634. Springer, New York (2013)

  39. Permenter, F.: Private Communication (2016)

  40. Permenter, F., Friberg, H.A., Andersen, E.D.: Solving conic optimization problems via self-dual embedding and facial reduction: a unified approach. SIAM J. Optim. 27(3), 1257–1282 (2017)

  41. Permenter, F., Parrilo, P.: Partial facial reduction: simplified, equivalent SDPs via approximations of the PSD cone. Math. Program. 171, 1–54 (2017)

  42. Pólik, I., Terlaky, T.: Exact duality for optimization over symmetric cones. AdvOL Report 2007/10, McMaster University, Advanced Optimization Lab, Hamilton, Canada. http://www.optimization-online.org/DB_HTML/2007/08/1754.html (2007)

  43. Renegar, J.: “Efficient” subgradient methods for general convex optimization. SIAM J. Optim. 26(4), 2649–2676 (2016)

  44. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  45. Roshchina, V.: Facially exposed cones are not always nice. SIAM J. Optim. 24(1), 257–268 (2014)

  46. Roshchina, V., Tunçel, L.: Facially dual complete (nice) cones and lexicographic tangents. SIAM J. Optim. 29(3), 2363–2387 (2019). https://doi.org/10.1137/17M1126643

  47. Sturm, J.F.: Error bounds for linear matrix inequalities. SIAM J. Optim. 10(4), 1228–1248 (2000)

  48. Sturm, J.F.: Similarity and other spectral relations for symmetric cones. Linear Algebra Appl. 312(1–3), 135–154 (2000)

  49. Sung, C.-H., Tam, B.-S.: A study of projectionally exposed cones. Linear Algebra Appl. 139, 225–252 (1990)

  50. Tam, B.-S.: A note on polyhedral cones. J. Aust. Math. Soc. 22(4), 456–461 (1976)

  51. Tunçel, L., Wolkowicz, H.: Strong duality and minimal representations for cone optimization. Comput. Optim. Appl. 53(2), 619–648 (2012)

  52. Waki, H., Muramatsu, M.: Facial reduction algorithms for conic optimization problems. J. Optim. Theory Appl. 158(1), 188–215 (2013)

  53. Yamashita, H.: Error bounds for nonlinear semidefinite optimization. Optimization Online (2016). http://www.optimization-online.org/DB_HTML/2016/10/5682.html

  54. Yoshise, A., Matsukawa, Y.: On optimization over the doubly nonnegative cone. In: IEEE International Symposium on Computer-Aided Control System Design (CACSD), pp 13–18 (2010). https://doi.org/10.1109/CACSD.2010.5612811

  55. Zhu, Y., Pataki, G., Tran-Dinh, Q.: Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs. Math. Program. Comput. 11(3), 503–586 (2019)

Acknowledgements

We thank the editors and four referees for their insightful comments, which helped to improve the paper substantially. In particular, the discussion on tangentially exposed cones and subtransversality was suggested by Referee 1. Also, comments from Referees 1 and 2 motivated Remark 10. We would like to thank Prof. Gábor Pataki for helpful advice and for suggesting that we take a look at projectionally exposed cones; incidentally, this was also suggested by Referee 4. Referee 4 also suggested the remark on the tightness of the error bound for doubly nonnegative matrices. Feedback and encouragement from Prof. Tomonari Kitahara, Prof. Masakazu Muramatsu and Prof. Takashi Tsuchiya were highly helpful, and they provided the official translation of “amenable cone” to Japanese (kyoujunsui). This work was partially supported by the Grant-in-Aid for Scientific Research (B) (18H03206) and the Grant-in-Aid for Young Scientists (19K20217) from the Japan Society for the Promotion of Science.

Author information

Corresponding author

Correspondence to Bruno F. Lourenço.

A Miscellaneous proofs

A.1 Proof of Proposition 11

(i):

Let \( {\mathcal {F}}\) be a face of \( {{\mathcal {K}}}^1\times {{\mathcal {K}}}^2\). We have \( {\mathcal {F}}= {\mathcal {F}}^1 \times {\mathcal {F}}^2\), where \( {\mathcal {F}}^1\) and \( {\mathcal {F}}^2\) are faces of \( {{\mathcal {K}}}^1\) and \( {{\mathcal {K}}}^2\) respectively. From our assumptions in Sect. 2, Eq. (2) and the amenability of \( {{\mathcal {K}}}^1\) and \( {{\mathcal {K}}}^2\), it follows that there are positive constants \(\kappa _1, \kappa _2\) such that

$$\begin{aligned} {\mathrm {dist}\,}((x_1,x_2), {\mathcal {F}}) \le \sqrt{\kappa _1^2 {\mathrm {dist}\,}(x_1, {{\mathcal {K}}}^1)^2 + \kappa _2^2 {\mathrm {dist}\,}(x_2, {{\mathcal {K}}}^2)^2}, \end{aligned}$$

whenever \(x_1 \in \mathrm {span}\, {\mathcal {F}}^1\) and \(x_2 \in \mathrm {span}\, {\mathcal {F}}^2\). Therefore,

$$\begin{aligned} {\mathrm {dist}\,}((x_1,x_2), {\mathcal {F}})&\le \max \{\kappa _1,\kappa _2\}\sqrt{ {\mathrm {dist}\,}(x_1, {{\mathcal {K}}}^1)^2 + {\mathrm {dist}\,}(x_2, {{\mathcal {K}}}^2)^2}\\&= \max \{\kappa _1,\kappa _2\} {\mathrm {dist}\,}((x_1,x_2), {{\mathcal {K}}}^1\times {{\mathcal {K}}}^2), \end{aligned}$$

whenever \((x_1,x_2) \in \mathrm {span}\,( {\mathcal {F}}^1\times {\mathcal {F}}^2) = (\mathrm {span}\, {\mathcal {F}}^1) \times (\mathrm {span}\, {\mathcal {F}}^2)\).
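
As a sanity check of the last inequality, the following sketch (not part of the paper; it assumes numpy) takes both \( {{\mathcal {K}}}^1\) and \( {{\mathcal {K}}}^2\) to be nonnegative orthants with coordinate faces, for which the constants \(\kappa _1 = \kappa _2 = 1\) can be checked directly, and verifies the product bound on random points of \(\mathrm {span}\,( {\mathcal {F}}^1\times {\mathcal {F}}^2)\).

```python
import numpy as np

rng = np.random.default_rng(2)

def dist_orthant(x):
    """Distance from x to the nonnegative orthant."""
    return np.linalg.norm(np.minimum(x, 0.0))

def dist_face(x, zero_idx):
    """Distance from x to the face F = {y >= 0 : y[i] = 0 for i in zero_idx}."""
    y = np.maximum(x, 0.0)
    y[zero_idx] = 0.0
    return np.linalg.norm(x - y)

# K^1 = R^3_+ with face F^1 = {y >= 0 : y[2] = 0}; K^2 = R^4_+ with face F^2 = {y >= 0 : y[0] = y[1] = 0}.
kappa1 = kappa2 = 1.0  # constants that can be checked directly for these polyhedral faces
for _ in range(1000):
    x1 = rng.standard_normal(3)
    x1[2] = 0.0            # x1 in span F^1
    x2 = rng.standard_normal(4)
    x2[:2] = 0.0           # x2 in span F^2
    lhs = np.hypot(dist_face(x1, [2]), dist_face(x2, [0, 1]))
    rhs = max(kappa1, kappa2) * np.hypot(dist_orthant(x1), dist_orthant(x2))
    assert lhs <= rhs + 1e-12
```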

(ii):

If \( {{\mathcal {A}}}\) is the zero map, we are done, since \(\{0\}\) is amenable. So, suppose that \( {{\mathcal {A}}}\) is a nonzero injective linear map. Then, the faces of \( {{\mathcal {A}}}( {{\mathcal {K}}})\) are images of faces of \( {{\mathcal {K}}}\) by \( {{\mathcal {A}}}\). Accordingly, let \( {\mathcal {F}}\mathrel {\unlhd } {{\mathcal {K}}}\). Because \( {{\mathcal {K}}}\) is amenable, there is \(\kappa \) such that

$$\begin{aligned} {\mathrm {dist}\,}(x, {\mathcal {F}}) \le \kappa {\mathrm {dist}\,}(x, {{\mathcal {K}}}), \quad \forall x \in \mathrm {span}\, {\mathcal {F}}. \end{aligned}$$
(41)

As \( {{\mathcal {A}}}\) is a linear map, we have \(\mathrm {span}\, {{\mathcal {A}}}( {\mathcal {F}}) = {{\mathcal {A}}}(\mathrm {span}\, {\mathcal {F}})\). Let \(\sigma _{\min }, \sigma _{\max }\) denote, respectively, the minimum and maximum singular values of \( {{\mathcal {A}}}\). We have

$$\begin{aligned} \sigma _{\min } = \min \{\Vert Ax\Vert \mid \Vert x\Vert = 1 \}, \quad \sigma _{\max } = \max \{\Vert Ax\Vert \mid \Vert x\Vert = 1 \}. \end{aligned}$$

They are both positive since \( {{\mathcal {A}}}\) is injective and nonzero. Now, let \(x \in \mathrm {span}\, {\mathcal {F}}\), then

$$\begin{aligned} {\mathrm {dist}\,}( {{\mathcal {A}}}(x), {{\mathcal {A}}}( {\mathcal {F}}))&\le \sigma _{\max } {\mathrm {dist}\,}(x, {\mathcal {F}})&\\&\le {\kappa }{\sigma _{\max }} {\mathrm {dist}\,}(x, {{\mathcal {K}}})&\text {(From } (41))\\&\le \frac{{\kappa }{\sigma _{\max }}}{\sigma _{\min }} {\mathrm {dist}\,}( {{\mathcal {A}}}(x), {{\mathcal {A}}}( {{\mathcal {K}}})).&\end{aligned}$$

\(\square \)
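
The chain of inequalities in part (ii) can likewise be checked numerically in a polyhedral special case. The sketch below (not from the paper) takes \( {{\mathcal {K}}}= {\mathbb {R}}^3_+\), for which \(\kappa = 1\) works for the chosen face, together with a random linear map that is injective with probability one; it assumes numpy and scipy, using nonnegative least squares to compute distances to the image cones.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

m, n = 6, 3
A = rng.standard_normal((m, n))   # injective with probability one
sig = np.linalg.svd(A, compute_uv=False)
sig_max, sig_min = sig[0], sig[-1]

def dist_to_image_cone(b, B):
    """Distance from b to B(R^k_+), computed via min ||B y - b|| subject to y >= 0."""
    _, residual = nnls(B, b)
    return residual

kappa = 1.0            # constant that can be checked directly for this face of R^3_+
free_idx = [0, 2]      # the face F = {y >= 0 : y[1] = 0}
for _ in range(200):
    y = rng.standard_normal(n)
    y[1] = 0.0         # y in span F, hence A(y) lies in span A(F)
    b = A @ y
    lhs = dist_to_image_cone(b, A[:, free_idx])                   # dist(A(y), A(F))
    rhs = kappa * (sig_max / sig_min) * dist_to_image_cone(b, A)  # bound from the proof
    assert lhs <= rhs + 1e-8
```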

A.2 Proof of Proposition 12

Proof

\((i) \Rightarrow (ii)\)

Let \(x \in {\mathcal {E}}\) and let \(u \in {\mathcal {E}}\) be such that \(x+u\in \mathrm {span}\, {\mathcal {F}}\) and \(\Vert u\Vert = {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})\). Since \( {\mathrm {dist}\,}(\cdot , {{\mathcal {K}}})\) and \( {\mathrm {dist}\,}(\cdot , {\mathcal {F}})\) are sublinear functions, (4) implies that

$$\begin{aligned} {\mathrm {dist}\,}(x, {\mathcal {F}})&\le {\mathrm {dist}\,}(-u, {\mathcal {F}}) + {\mathrm {dist}\,}(x+u, {\mathcal {F}}) \nonumber \\&\le {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}}) + \kappa ( {\mathrm {dist}\,}(x+u, {{\mathcal {K}}})) \nonumber \\&\le {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}}) + \kappa ( {\mathrm {dist}\,}(x, {{\mathcal {K}}})+ {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}}))\nonumber \\&\le (1+\kappa )( {\mathrm {dist}\,}(x, {{\mathcal {K}}})+ {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})), \quad \forall x \in {\mathcal {E}}. \end{aligned}$$
(42)

Here we used the fact that \( {\mathrm {dist}\,}(-u, {\mathcal {F}}) \le \Vert -u\Vert \), since \(0 \in {\mathcal {F}}\). This shows that \((i) \Rightarrow (ii)\).
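
Inequality (42) can be checked numerically in a small example; the sketch below (not part of the paper, assuming numpy) uses \( {{\mathcal {K}}}= {\mathbb {R}}^2_+\) and the face \( {\mathcal {F}}= \{(t,0) \mid t \ge 0\}\), for which \(\kappa = 1\) works, and verifies the bound over random points of \( {\mathcal {E}}\).

```python
import numpy as np

rng = np.random.default_rng(4)
kappa = 1.0   # amenability constant that works for this face F of K = R^2_+

def dist_K(x):
    """Distance to K = R^2_+."""
    return np.linalg.norm(np.minimum(x, 0.0))

def dist_span_F(x):
    """Distance to span F, the horizontal axis."""
    return abs(x[1])

def dist_F(x):
    """Distance to F = {(t, 0) : t >= 0}."""
    return np.linalg.norm(x - np.array([max(x[0], 0.0), 0.0]))

# Check dist(x, F) <= (1 + kappa) * (dist(x, K) + dist(x, span F)) on random points, as in (42).
for _ in range(1000):
    x = 10 * rng.standard_normal(2)
    assert dist_F(x) <= (1 + kappa) * (dist_K(x) + dist_span_F(x)) + 1e-12
```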

\((ii) \Rightarrow (i)\) Since \( {{\mathcal {K}}}\) and \(\mathrm {span}\, {\mathcal {F}}\) intersect subtransversally at 0, there are \(\delta > 0\) and \(\kappa > 0\) such that

$$\begin{aligned} {\mathrm {dist}\,}(z, {\mathcal {F}}) \le \kappa ( {\mathrm {dist}\,}(z, {{\mathcal {K}}})+ {\mathrm {dist}\,}(z,\mathrm {span}\, {\mathcal {F}})), \quad \forall z \text { with } \Vert z\Vert \le \delta . \end{aligned}$$

Therefore, if \(x \in {\mathcal {E}}\) is nonzero, we have

$$\begin{aligned} {\mathrm {dist}\,}\left( \delta \frac{x}{\Vert x\Vert }, {\mathcal {F}}\right) \le \kappa \left( {\mathrm {dist}\,}\left( \delta \frac{x}{\Vert x\Vert }, {{\mathcal {K}}}\right) + {\mathrm {dist}\,}\left( \delta \frac{x}{\Vert x\Vert },\mathrm {span}\, {\mathcal {F}}\right) \right) . \end{aligned}$$

Now, we recall that if C is a convex cone, then \( {\mathrm {dist}\,}(\alpha x, C) = \alpha {\mathrm {dist}\,}(x,C)\) for every positive \(\alpha \). We conclude that

$$\begin{aligned} {\mathrm {dist}\,}(x, {\mathcal {F}}) \le \kappa ( {\mathrm {dist}\,}(x, {{\mathcal {K}}})+ {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})), \quad \forall x \in {\mathcal {E}}. \end{aligned}$$

Therefore, if \(x \in \mathrm {span}\, {\mathcal {F}}\), then \( {\mathrm {dist}\,}(x, {\mathcal {F}}) \le \kappa {\mathrm {dist}\,}(x, {{\mathcal {K}}})\).

\((i) \Rightarrow (iii)\) Since \(a + b \le 2\max (a,b)\) for nonnegative a and b, the inequality in (42) shows that

$$\begin{aligned} {\mathrm {dist}\,}(x, {\mathcal {F}}) \le (2+2\kappa )\max ( {\mathrm {dist}\,}(x, {{\mathcal {K}}}), {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})), \quad \forall x \in {\mathcal {E}}. \end{aligned}$$

Therefore, \( {{\mathcal {K}}}\) and \(\mathrm {span}\, {\mathcal {F}}\) are boundedly linearly regular.

\((iii) \Rightarrow (ii)\) Let \(U = \{x \in {\mathcal {E}}\mid \Vert x\Vert \le 1 \}\). Then, there exists \(\kappa _U\) such that

$$\begin{aligned} {\mathrm {dist}\,}(x, {\mathcal {F}})&\le \kappa _U\max ( {\mathrm {dist}\,}(x, {{\mathcal {K}}}), {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})) \\&\le \kappa _U ( {\mathrm {dist}\,}(x, {{\mathcal {K}}})+ {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}})), \quad \forall x \in U. \end{aligned}$$

Therefore, \( {{\mathcal {K}}}\) and \(\mathrm {span}\, {\mathcal {F}}\) intersect subtransversally at 0. \(\square \)

A.3 Proof of Proposition 17

  1.

    Suppose that \(x \in \mathrm {span}\, {{\mathcal {K}}}\) satisfies the inequalities

    $$\begin{aligned} {\mathrm {dist}\,}(x, {{\mathcal {K}}}) \le \epsilon , \quad \langle x , z \rangle \le \epsilon , \quad {\mathrm {dist}\,}(x, \mathrm {span}\, {\mathcal {F}}) \le \epsilon . \end{aligned}$$
    (43)

    Note that

    $$\begin{aligned} {\mathcal {F}}\cap \{z\}^{\perp } = ( {\mathcal {F}}^{1} \cap \{z_1\}^\perp ) \times ( {\mathcal {F}}^{2} \cap \{z_2\}^\perp ). \end{aligned}$$

    Also, due to our assumptions (Sect. 2.1), we have

    $$\begin{aligned} \Vert x-y\Vert ^2 = \Vert x_1-y_1\Vert ^2 + \Vert x_2-y_2\Vert ^2 \end{aligned}$$

    for every \(x,y \in {\mathcal {E}}^1\times {\mathcal {E}}^2\). Thus we have the following implications:

    $$\begin{aligned} {\mathrm {dist}\,}(x, {{\mathcal {K}}}) \le \epsilon \quad&\Rightarrow \quad {\mathrm {dist}\,}(x_1, {{\mathcal {K}}}^1) \le \epsilon , \quad {\mathrm {dist}\,}(x_2, {{\mathcal {K}}}^2) \le \epsilon \end{aligned}$$
    (44)
    $$\begin{aligned} {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}}) \le \epsilon \quad&\Rightarrow \quad {\mathrm {dist}\,}(x_1,\mathrm {span}\, {\mathcal {F}}^1) \le \epsilon , \quad {\mathrm {dist}\,}(x_2,\mathrm {span}\, {\mathcal {F}}^2) \le \epsilon \end{aligned}$$
    (45)

    The first step is showing that there are positive constants \(\kappa _1\) and \(\kappa _2\) such that for all \(x \in {\mathcal {E}}^1\times {\mathcal {E}}^2\), we also have

    $$\begin{aligned} x \text { satisfies } (43)&\Rightarrow \quad \langle x_1 , z_1 \rangle \le \kappa _1\epsilon \quad \text {and} \quad \langle x_2 , z_2 \rangle \le \kappa _2\epsilon . \end{aligned}$$
    (46)

    Suppose x satisfies (43). By (45), we have \( {\mathrm {dist}\,}(x_1, \mathrm {span}\,{ {\mathcal {F}}^1}) \le \epsilon \). Therefore, there exists \(y_1 \in {\mathcal {E}}^1\) such that \(x_1 + y_1 \in \mathrm {span}\,{ {\mathcal {F}}^1} \) and \(\Vert y_1\Vert \le \epsilon \). Due to (44) and the amenability of \( {{\mathcal {K}}}^1\), there exists \({\hat{\kappa }}_1\) (not depending on \(x_1\)) such that

    $$\begin{aligned} {\mathrm {dist}\,}(x_1 + y_1, {\mathcal {F}}^1) \le {\hat{\kappa }}_1 {\mathrm {dist}\,}(x_1 + y_1, {{\mathcal {K}}}^1) \le 2\epsilon {\hat{\kappa }}_1. \end{aligned}$$

    Therefore, there exists \(v_1 \in {\mathcal {E}}^1\) such that \(\Vert v_1\Vert \le 2\epsilon {\hat{\kappa }}_1\) and

    $$\begin{aligned} x_1 + y_1 + v_1 \in {\mathcal {F}}^1. \end{aligned}$$

    In a completely analogous manner, there is a constant \({\hat{\kappa }}_2> 0\) and there are \(y_2,v_2 \in {\mathcal {E}}^2\) such that

    $$\begin{aligned} x_2 + y_2 + v_2 \in {\mathcal {F}}^2, \end{aligned}$$

    with \(\Vert y_2\Vert \le \epsilon \) and \(\Vert v_2\Vert \le 2\epsilon {\hat{\kappa }}_2\). It follows that

    $$\begin{aligned} \langle (x_1+y_1+v_1,x_2+y_2+v_2) , (z_1,z_2) \rangle \le M\epsilon , \end{aligned}$$

    for \(M = 1 + \Vert z_1\Vert (1+2{\hat{\kappa }}_1) + \Vert z_2\Vert (1+2{\hat{\kappa }}_2)\). Since \(\langle x_1+y_1+v_1 , z_1 \rangle \ge 0\) and \(\langle x_2+y_2+v_2 , z_2 \rangle \ge 0\), we get

    $$\begin{aligned} \langle x_i+y_i+v_i , z_i \rangle \le M\epsilon , \end{aligned}$$

    for \(i = 1,2\). We then conclude that

    $$\begin{aligned} \langle x , z \rangle \le \epsilon \quad \Rightarrow \quad \langle x_i , z_i \rangle \le \kappa _i\epsilon , \end{aligned}$$
    (47)

    whenever x satisfies (44) and (45), where \(\kappa _i = M+ \Vert z_i\Vert + 2{\hat{\kappa }}_i\Vert z_i\Vert \). Now, let \(\psi _{ {\mathcal {F}}_1,z_1}\) and \(\psi _{ {\mathcal {F}}_2,z_2}\) be arbitrary facial residual functions for \( {\mathcal {F}}_1,z_1\) and \( {\mathcal {F}}_2,z_2\), respectively. We positively rescale \(\psi _{ {\mathcal {F}}_1,z_1}\) and \(\psi _{ {\mathcal {F}}_2,z_2}\) so that

    $$\begin{aligned} {\mathrm {dist}\,}(x_i, {{\mathcal {K}}}^i) \le \epsilon ,\quad \langle x_i , z_i \rangle \le \kappa _i\epsilon , \quad {\mathrm {dist}\,}(x_i, \mathrm {span}\, {\mathcal {F}}^i ) \le \epsilon \end{aligned}$$

    implies \( {\mathrm {dist}\,}(x_i,{\hat{ {\mathcal {F}}}}_i) \le \psi _{ {\mathcal {F}}_i,z_i}(\epsilon ,\Vert x_i\Vert )\), for \(i = 1,2\).

    Finally, from (44), (45), (47) and using the fact that \(\psi _{ {\mathcal {F}}_1,z_1}\) and \(\psi _{ {\mathcal {F}}_2,z_2}\) are monotone nondecreasing in the second argument, we conclude that whenever x satisfies (43) we have

    $$\begin{aligned} {\mathrm {dist}\,}(x,{\hat{ {\mathcal {F}}}})&= \sqrt{ {\mathrm {dist}\,}(x_1,{\hat{ {\mathcal {F}}}}^1)^2 + {\mathrm {dist}\,}(x_2,{\hat{ {\mathcal {F}}}}^2)^2 }\\&\le { {\mathrm {dist}\,}(x_1,{\hat{ {\mathcal {F}}}}^1)} + { {\mathrm {dist}\,}(x_2,{\hat{ {\mathcal {F}}}}^2)} \\&\le \psi _{ {\mathcal {F}}_1,z_1}(\epsilon ,\Vert x\Vert ) + \psi _{ {\mathcal {F}}_2,z_2}(\epsilon ,\Vert x\Vert ). \end{aligned}$$

    Therefore, \(\psi _{ {\mathcal {F}}_1,z_1}+\psi _{ {\mathcal {F}}_2,z_2}\) is a facial residual function for \( {\mathcal {F}},z\).

  2.

    The proposition is true if \( {{\mathcal {A}}}\) is the zero map, so suppose that \( {{\mathcal {A}}}\) is a nonzero injective linear map. First, we observe that

    $$\begin{aligned} ( {{\mathcal {A}}}( {\mathcal {F}}))\cap \{z\}^\perp = {{\mathcal {A}}}( {\mathcal {F}}\cap \{ {{\mathcal {A}}}^\top z \}^\perp ). \end{aligned}$$

    Let \({\hat{ {\mathcal {F}}}} = {\mathcal {F}}\cap \{ {{\mathcal {A}}}^\top z \}^\perp \). Let \(\psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z}\) be a facial residual function for \( {\mathcal {F}}\) and \( {{\mathcal {A}}}^\top z\). Let \(\sigma _{\min }\) denote the minimum singular value of \( {{\mathcal {A}}}\). We note that \(\sigma _{\min }\) is positive because \( {{\mathcal {A}}}\) is injective. We positively rescale \(\psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z}\) so that whenever x satisfies

    $$\begin{aligned} {\mathrm {dist}\,}(x, {{\mathcal {K}}}) \le \frac{1}{\sigma _{\min }}\epsilon , \quad \langle x , {{\mathcal {A}}}^\top z \rangle \le \epsilon , \quad {\mathrm {dist}\,}(x, \mathrm {span}\, {\mathcal {F}}) \le \frac{1}{\sigma _{\min }}\epsilon \end{aligned}$$

    we have:

    $$\begin{aligned} {\mathrm {dist}\,}(x, {\hat{ {\mathcal {F}}}}) \le \psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z} (\epsilon , \Vert x\Vert ). \end{aligned}$$

    Then, we have the following implications:

    $$\begin{aligned} {\mathrm {dist}\,}( {{\mathcal {A}}}(x), {{\mathcal {A}}}( {{\mathcal {K}}})) \le \epsilon \quad&\Rightarrow \quad {\mathrm {dist}\,}(x, {{\mathcal {K}}}) \le \frac{1}{\sigma _{\min }}\epsilon \\ \langle {{\mathcal {A}}}(x) , z \rangle \le \epsilon \quad&\Leftrightarrow \quad \langle x , {{\mathcal {A}}}^\top z \rangle \le \epsilon \\ {\mathrm {dist}\,}( {{\mathcal {A}}}(x),\mathrm {span}\, {{\mathcal {A}}}( {\mathcal {F}})) \le \epsilon \quad&\Rightarrow \quad { {\mathrm {dist}\,}(x,\mathrm {span}\, {\mathcal {F}}) \le \frac{1}{\sigma _{\min }}\epsilon }\\ {\mathrm {dist}\,}( {{\mathcal {A}}}(x), {{\mathcal {A}}}({\hat{ {\mathcal {F}}}})) \le \sigma _{\max }\psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z} (\epsilon , \Vert {{\mathcal {A}}}x\Vert /\sigma _{\min }) \quad&\Leftarrow \quad {\mathrm {dist}\,}(x, {\hat{ {\mathcal {F}}}}) \le \psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z} (\epsilon , \Vert x\Vert ), \end{aligned}$$

    where \(\sigma _{\max }\) is the maximum singular value of \( {{\mathcal {A}}}\). This shows that we can use

    $$\begin{aligned} {\tilde{\psi }} _{ {{\mathcal {A}}}( {\mathcal {F}}),z}(\epsilon , t) = \sigma _{\max }\psi _{ {\mathcal {F}}, {{\mathcal {A}}}^\top z}(\epsilon , t/\sigma _{\min }) \end{aligned}$$

    as a facial residual function for \( {{\mathcal {A}}}( {\mathcal {F}})\) and z. \(\square \)

Cite this article

Lourenço, B.F. Amenable cones: error bounds without constraint qualifications. Math. Program. 186, 1–48 (2021). https://doi.org/10.1007/s10107-019-01439-3
