The Ground State Energy and Concentration of Complexity in Spherical Bipartite Models

Published in Communications in Mathematical Physics.

Abstract

We establish an asymptotic formula for the ground-state energy of the spherical pure \(( p,q) \)-spin glass model for \(p,q\ge 96\). We achieve this by understanding the concentration of the complexity of critical points with values within a region of the ground-state energy. More specifically, we show that the second moment of this count coincides with the square of the first moment up to a sub-exponential factor.


References

  1. Adler, R.J., Taylor, J.E.: Random Fields and Geometry. Springer Monographs in Mathematics. Springer, New York (2007)

  2. Agliari, E., Barra, A., Bartolucci, S., Galluzzi, A., Guerra, F., Moauro, F.: Parallel processing in immune networks. Phys. Rev. E 87, 042701 (2013)

  3. Agliari, E., Barra, A., Galluzzi, A., Guerra, F., Moauro, F.: Multitasking associative networks. Phys. Rev. Lett. 109, 268101 (2012)

  4. Ajanki, O.H., Erdős, L., Krüger, T.: Stability of the matrix Dyson equation and random matrices with correlations. Probab. Theory Related Fields 173(1–2), 293–373 (2019)

  5. Amit, D.: Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press, Cambridge (1992)

  6. Anderson, G.W., Guionnet, A., Zeitouni, O.: An Introduction to Random Matrices. Cambridge Studies in Advanced Mathematics, vol. 118. Cambridge University Press, Cambridge (2010)

  7. Auffinger, A., Ben Arous, G., Černý, J.: Random matrices and complexity of spin glasses. Commun. Pure Appl. Math. 66(2), 165–201 (2013)

  8. Auffinger, A., Chen, W.-K.: Free energy and complexity of spherical bipartite models. J. Stat. Phys. 157(1), 40–59 (2014)

  9. Azaïs, J.-M., Wschebor, M.: Level Sets and Extrema of Random Processes and Fields. Wiley, Hoboken, NJ (2009)

  10. Baik, J., Lee, J.O.: Free energy of bipartite spherical Sherrington–Kirkpatrick model. Ann. Inst. Henri Poincaré Probab. Stat. 56(4), 2897–2934 (2020)

  11. Barra, A., Agliari, E.: A statistical mechanics approach to autopoietic immune networks. J. Stat. Mech: Theory Exp. 2010(7), 07004 (2010)

  12. Barra, A., Contucci, P.: Toward a quantitative approach to migrants integration. EPL (Europhys. Lett.) 89(6), 68001 (2010)

  13. Barra, A., Genovese, G., Guerra, F., Tantari, D.: How glassy are neural networks? J. Stat. Mech: Theory Exp. 2012(07), P07009 (2012)

  14. Barra, A., Genovese, G., Sollich, P., Tantari, D.: Phase diagram of restricted Boltzmann machines and generalized Hopfield networks with arbitrary priors. Phys. Rev. E 97, 022310 (2018)

  15. Ben Arous, G., Bourgade, P., McKenna, B.: Exponential growth of random determinants beyond invariance (2021). arXiv:2105.05000

  16. Ben Arous, G., Subag, E., Zeitouni, O.: Geometry and temperature chaos in mixed spherical spin glasses at low temperature: the perturbative regime. Commun. Pure Appl. Math. 73(8), 1732–1828 (2020)

  17. Cammarota, V., Marinucci, D., Wigman, I.: On the distribution of the critical values of random spherical harmonics. J. Geom. Anal. 26(4), 3252–3324 (2016)

  18. Cammarota, V., Wigman, I.: Fluctuations of the total number of critical points of random spherical harmonics. Stochastic Process. Appl. 127(12), 3825–3869 (2017)

  19. Chen, W.-K.: The Aizenman–Sims–Starr scheme and Parisi formula for mixed \(p\)-spin spherical models. Electron. J. Probab. 18(94), 14 (2013)

  20. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Stochastic Modelling and Applied Probability, vol. 38. Springer-Verlag, Berlin (2010). Corrected reprint of the second (1998) edition

  21. Durlauf, S.N.: How can statistical mechanics contribute to social science? Proc. Natl. Acad. Sci. 96(19), 10582–10584 (1999)

  22. Erdős, L., Krüger, T., Schröder, D.: Random matrices with slow correlation decay. Forum Math. Sigma 7, Paper No. e8, 89 (2019)

  23. Guerra, F.: Broken replica symmetry bounds in the mean field spin glass model. Commun. Math. Phys. 233(1), 1–12 (2003)

  24. Ledoux, M.: Concentration of measure and logarithmic Sobolev inequalities. In: Séminaire de Probabilités, XXXIII. Lecture Notes in Math., vol. 1709. Springer, Berlin (1999)

  25. McKenna, B.: Complexity of bipartite spherical spin glasses (2021). arXiv:2105.05043

  26. Nicolaescu, L.I.: Critical sets of random smooth functions on compact manifolds. Asian J. Math. 19(3), 391–432 (2015)

  27. Nicolaescu, L.I.: Critical points of multidimensional random Fourier series: variance estimates. J. Math. Phys. 57(8), 083304, 42 (2016)

  28. Panchenko, D.: The Parisi formula for mixed \(p\)-spin models. Ann. Probab. 42(3), 946–958 (2014)

  29. Panchenko, D.: The free energy in a multi-species Sherrington–Kirkpatrick model. Ann. Probab. 43(6), 3494–3513 (2015)

  30. Panchenko, D.: Free energy in the mixed \(p\)-spin models with vector spins. Ann. Probab. 46(2), 865–896 (2018)

  31. Panchenko, D.: Free energy in the Potts spin glass. Ann. Probab. 46(2), 829–864 (2018)

  32. Parisi, G.: A simple model for the immune network. Proc. Natl. Acad. Sci. USA 87(1), 429–433 (1990)

  33. Simon, M.K.: Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers, Scientists and Mathematicians. Springer-Verlag, Berlin, Heidelberg (2006)

  34. Subag, E.: The complexity of spherical \(p\)-spin models - a second moment approach. Ann. Probab. 45(5), 3385–3450 (2017)

  35. Talagrand, M.: Free energy of the spherical mean field model. Probab. Theory Related Fields 134(3), 339–382 (2006)

  36. Talagrand, M.: The Parisi formula. Ann. Math. (2) 163(1), 221–263 (2006)


Acknowledgements

The author would like to thank their advisor, Antonio Auffinger, for introducing them to this problem and for many comments on earlier versions of this manuscript.

Author information

Correspondence to Pax Kivimae.

Additional information

Communicated by S. Chatterjee.


Research partially supported by NSF Grants DMS-2202720, DMS-1653552 and DMS-1502632.

Appendices

Covariance Computations and Conditional Densities

In this appendix we will study the covariance structure of the Gaussian vector

$$\begin{aligned} (h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}), \nabla ^2 h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)),\nabla h_N({{\textbf {n}}}(r,t))),\end{aligned}$$
(242)

after which we will investigate some of its conditional density functions.

We will be interested specifically in the case where the orthonormal frame field \((E_i)_{i=1}^{N-2}\) is induced by pulling back a choice of orthonormal frame field on each component of the product. That is, let us write \(E^i=E_{i+N_1-1}\), and assume that \((E_i)_{i=1}^{N_1-1}\) is obtained from an orthonormal frame field on the factor \(S^{N_1-1}\), and similarly that \((E^i)_{i=1}^{N_2-1}\) is obtained from an orthonormal frame field on the factor \(S^{N_2-1}\). We observe that in this case, the action of \(E^i\) commutes with that of \(E_j\) for each i and j.

With \(\delta _{ij}\) denoting the standard Kronecker \(\delta \), we denote \(\delta _{ijk}=\delta _{ij}\delta _{jk}\), \(\delta _{ij\ne 1}=\delta _{ij}(1-\delta _{i1})\), etc. In addition, we employ the shorthand \(x_*=\sqrt{1-x^2}\).

Lemma 14

For \(p,q\ge 2\), \(0<\gamma <1\) and \(r,t\in [-1,1]\) there exists an orthonormal frame field E such that:

$$\begin{aligned}{} & {} {\mathbb {E}}[h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]=r^p t^q \end{aligned}$$
(243)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[h_N({{\textbf {n}}})E_i h_N({{\textbf {n}}}(r,t))]=pr^{p-1}r_*t^{q}\delta _{1i} \end{aligned}$$
(244)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[h_N({{\textbf {n}}})E^i h_N({{\textbf {n}}}(r,t))]=qt^{q-1}t_*r^p\delta _{1i} \end{aligned}$$
(245)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i h_N({{\textbf {n}}})E_j h_N({{\textbf {n}}}(r,t))]=t^q(pr^{p-1}\delta _{ij\ne 1}+\delta _{ij1}pr^{p-2}(pr^2-(p-1))) \end{aligned}$$
(246)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i h_N({{\textbf {n}}})E^j h_N({{\textbf {n}}}(r,t))]=r^p(qt^{q-1}\delta _{ij\ne 1}+\delta _{ij1}qt^{q-2}(qt^2-(q-1))) \end{aligned}$$
(247)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i h_N({{\textbf {n}}})E_j h_N({{\textbf {n}}}(r,t))]={\mathbb {E}}[E_i h_N({{\textbf {n}}})E^j h_N({{\textbf {n}}}(r,t))]=-pqt^{q-1}r^{p-1}r_*t_*\delta _{ij1}\nonumber \\ \end{aligned}$$
(248)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i E_j h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]={\mathbb {E}}[h_N({{\textbf {n}}})E_i E_j h_N({{\textbf {n}}}(r,t))]\nonumber \\{} & {} \quad =t^q(-pr^p\delta _{ij}+p(p-1)r^{p-2}r_*^2\delta _{ij1}) \end{aligned}$$
(249)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i E^j h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]={\mathbb {E}}[h_N({{\textbf {n}}})E^i E^j h_N({{\textbf {n}}}(r,t))]\nonumber \\{} & {} \quad =r^p(-qt^q\delta _{ij}+q(q-1)t^{q-2}t_*^2\delta _{ij1}) \end{aligned}$$
(250)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i E_j h_N({{\textbf {n}}})h_N({{\textbf {n}}}(r,t))]={\mathbb {E}}[h_N({{\textbf {n}}})E^i E_j({{\textbf {n}}}(r,t))]=pqr^{p-1}t^{q-1}r_*t_*\delta _{ij1} \end{aligned}$$
(251)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i E^j h_N({{\textbf {n}}})E_k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E_k h_N({{\textbf {n}}})E^i E^j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(252)
$$\begin{aligned}{} & {} \quad =-pr_*r^{p-1}\delta _{k1}(-\delta _{ij}t^q q+\delta _{ij1}q(q-1)t^{q-2}t_*^2) \end{aligned}$$
(253)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i E_j h_N({{\textbf {n}}})E^k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E^k h_N({{\textbf {n}}})E_i E_j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(254)
$$\begin{aligned}{} & {} \quad =-qt_*t^{q-1}\delta _{k1}(-\delta _{ij}r^p p+\delta _{ij1}p(p-1)r^{p-2}r_*^2) \end{aligned}$$
(255)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i E^j h_N({{\textbf {n}}})E_k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E_k h_N({{\textbf {n}}})E_i E^j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(256)
$$\begin{aligned}{} & {} \quad =q\delta _{j1}t^{q-1}t_*(pr^{p-1}\delta _{ik\ne 1}+\delta _{ik1}pr^{p-2}(pr^2-(p-1))) \end{aligned}$$
(257)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i E_j h_N({{\textbf {n}}})E^k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E^k h_N({{\textbf {n}}})E^i E_j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(258)
$$\begin{aligned}{} & {} \quad =p\delta _{j1}r^{p-1}r_*(qt^{q-1}\delta _{ik\ne 1}+\delta _{ik1}qt^{q-2}(qt^2-(q-1))) \end{aligned}$$
(259)
$$\begin{aligned}{} & {} {\mathbb {E}}[E_i E_j h_N({{\textbf {n}}})E_k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E_k h_N({{\textbf {n}}})E_i E_j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(260)
$$\begin{aligned}{} & {} \quad =t^q[\delta _{k\ne 1}(\delta _{i1}\delta _{jk}+\delta _{j1}\delta _{ik})p(p-1)r_*r^{p-2}+\delta _{k1}(\delta _{ij}p^2r^{p-1}r_*\nonumber \\{} & {} \qquad +\delta _{ij1}p(p-1)r^{p-3}r_*(2r^2-(p-2)r^2_*))] \end{aligned}$$
(261)
$$\begin{aligned}{} & {} {\mathbb {E}}[E^i E^j h_N({{\textbf {n}}})E^k h_N({{\textbf {n}}}(r,t))]=-{\mathbb {E}}[E^k h_N({{\textbf {n}}})E^i E^j h_N({{\textbf {n}}}(r,t))] \end{aligned}$$
(262)
$$\begin{aligned}{} & {} \quad =r^p[\delta _{k\ne 1}(\delta _{i1}\delta _{jk}+\delta _{j1}\delta _{ik})q(q-1)t_*t^{q-2}+\delta _{k1}(\delta _{ij}q^2t^{q-1}t_*\nonumber \\{} & {} \qquad +\delta _{ij1}q(q-1)t^{q-3}t_*(2t^2-(q-2)t^2_*))] \end{aligned}$$
(263)

Proof

The proof will proceed similarly to Lemma 30 of [34]. Let us denote \(r_1=r\) and \(r_2=t\). Let \(P_{{{\textbf {n}}},N}:S^{N-1}\rightarrow {\mathbb {R}}^{N-1}\) denote projection away from the first coordinate, let \(\theta _i\in [-\pi /2,\pi /2]\) denote the angles such that \(\sin (\theta _i)=r_i\), and let \(R_{\varphi ,N}:S^{N-1}\rightarrow S^{N-1}\) denote the rotation mapping

$$\begin{aligned} R_{\varphi ,N}(x_1,\dots , x_N)=(\cos (\varphi )x_1+\sin (\varphi )x_2,-\sin (\varphi )x_1+\cos (\varphi )x_2,x_3,\dots , x_N).\nonumber \\ \end{aligned}$$
(264)

      For \(i=1,2\), take a neighborhood \(U_i\) of \({{\textbf {n}}}_{N_i}\) in \(S^{N_i-1}\) such that the restriction of \(P_{{{\textbf {n}}},N_i}\) to \(U_i\) is a diffeomorphism onto its image. Similarly take a neighborhood \(V_i\) of \({{\textbf {n}}}_{N_i}(r_i)\) in \(S^{N_i-1}\) such that \(P_{{{\textbf {n}}},N_i}\circ R_{-\theta _i,N_i}\) is a diffeomorphism onto its image. We denote these images as \({\bar{U}}_i\) and \({\bar{V}}_i\), respectively. We define functions \({\bar{h}}_1:{\bar{U}}_1\times {\bar{U}}_2\rightarrow {\mathbb {R}}\) and \({\bar{h}}_2:{\bar{V}}_1\times {\bar{V}}_2\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} {\bar{h}}_{1}=h_N\circ (P_{{{\textbf {n}}},N_1}\times P_{{{\textbf {n}}},N_2})^{-1};\;\;\;{\bar{h}}_{2}=h_N\circ (P_{{{\textbf {n}}},N_1}\circ R_{-\theta _1,N_1}\times P_{{{\textbf {n}}},N_2}\circ R_{-\theta _2,N_2})^{-1}.\nonumber \\ \end{aligned}$$
(265)

      In the proof of Lemma 30 of [34] (more specifically, in their footnote 5), it is shown that for each \(N\ge 2\) and \(\theta \in [-\pi /2, \pi /2]\), there is an orthonormal frame field, \(E^{N,\theta }=(E_i^{N,\theta })_{i=1}^{N-1}\), on \(S^{N-1}\) with the following property: for any smooth function \(f:S^{N-1}\rightarrow {\mathbb {R}}\), if we denote

$$\begin{aligned} \sin (\theta )=r,\;\;\;{\bar{f}}_1=f\circ P_{{{\textbf {n}}},N}^{-1},\;\;\; {\bar{f}}_2=f\circ (P_{{{\textbf {n}}},N}\circ R_{-\theta ,N})^{-1},\end{aligned}$$
(266)

then for \(1\le i,j\le N-1\) we have that

$$\begin{aligned} E^{N,\theta }_i f({{\textbf {n}}})= & {} \partial _i{\bar{f}}_1(0),\;\;E^{N,\theta }_i E^{N,\theta }_j f({{\textbf {n}}})=\partial _i \partial _j {\bar{f}}_1(0), \end{aligned}$$
(267)
$$\begin{aligned} E^{N,\theta }_i f({{\textbf {n}}}(r))= & {} \partial _i {\bar{f}}_2(0),\;\;E^{N,\theta }_i E^{N,\theta }_j f({{\textbf {n}}}(r))=\partial _i \partial _j {\bar{f}}_2(0), \end{aligned}$$
(268)

where \(\partial _i\) denotes the i-th standard Euclidean partial derivative. We define our orthonormal frame field E on \(S^{N_1-1}\times S^{N_2-1}\) by letting \((E_i)_{i=1}^{N_1-1}=(E_i^{N_1,\theta _1})_{i=1}^{N_1-1}\), where \(E_i^{N_1,\theta _1}\) acts on the coordinates of the first sphere in the product, and \((E^i)_{i=1}^{N_2-1}=(E_i^{N_2,\theta _2})_{i=1}^{N_2-1}\), where \(E_i^{N_2,\theta _2}\) acts on the coordinates of the second sphere. Let us write the i-th Euclidean coordinate for \({\bar{U}}_1\subseteq {\mathbb {R}}^{N_1-1}\) as \(x_i\) and the i-th Euclidean coordinate for \({\bar{U}}_2\subseteq {\mathbb {R}}^{N_2-1}\) as \(y_i\). Then employing (267) and (268) we see that for \(1\le i,j\le N_1-1\) and \(1\le k,l\le N_2-1\)

$$\begin{aligned}{} & {} (h_N({{\textbf {n}}}),E_i h_N({{\textbf {n}}}), E^k h_N({{\textbf {n}}}), E_i E_j h_N({{\textbf {n}}}), E_i E^k h_N({{\textbf {n}}}), E^k E^l h_N({{\textbf {n}}})) \end{aligned}$$
(269)
$$\begin{aligned}{} & {} \quad =({\bar{h}}_{1}(0), \frac{d}{d x_i}{\bar{h}}_{1}(0), \frac{d}{d y_k}{\bar{h}}_1(0), \frac{d^2}{d x_i d x_j}{\bar{h}}_{1}(0), \frac{d^2}{d y_k d x_i}{\bar{h}}_1(0), \frac{d^2}{d y_k d y_l} {\bar{h}}_1(0)).\nonumber \\ \end{aligned}$$
(270)

A similar relationship holds between the derivatives of \(h_N\) at \({{\textbf {n}}}(r,t)\) and those of \({\bar{h}}_2\) at 0.
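The chart identities (267) and (268) at the origin can be illustrated with a short symbolic computation. The sketch below checks only the projection-chart construction for first derivatives, with an arbitrary made-up test function; it does not reproduce the specific frame field \(E^{N,\theta }\) of [34]:

```python
import sympy as sp

u, v, x1, x2, x3 = sp.symbols('u v x1 x2 x3')

# An arbitrary smooth test function on R^3, restricted to S^2
# (chosen purely for illustration)
f = x1*x2 + x3**2

# Pullback through the projection chart at n = (1, 0, 0):
# P^{-1}(u, v) = (sqrt(1 - u^2 - v^2), u, v)
fbar = f.subs({x1: sp.sqrt(1 - u**2 - v**2), x2: u, x3: v})

# Euclidean partials of the pullback at the chart origin
du = sp.diff(fbar, u).subs({u: 0, v: 0})
dv = sp.diff(fbar, v).subs({u: 0, v: 0})

# Ambient directional derivatives of f at n along the tangent
# directions e_2 and e_3; these are the tangential derivatives of
# the restriction of f to the sphere
grad = [sp.diff(f, w) for w in (x1, x2, x3)]
at_n = {x1: 1, x2: 0, x3: 0}
assert du == grad[1].subs(at_n)  # direction e_2
assert dv == grad[2].subs(at_n)  # direction e_3
```

At the chart origin the radial direction drops out, which is why the first-order chart derivatives agree with the ambient tangential ones.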

If we now define

$$\begin{aligned} \rho _{r,N}(x,y)=\sum _{i=2}^{N-1}x_iy_i+rx_{1}y_{1}+r_*x_{1}\Vert y\Vert _*-r_*y_{1}\Vert x\Vert _*+r\Vert x\Vert _*\Vert y\Vert _*,\end{aligned}$$
(271)

and let \(C(x,y,z,w):={\mathbb {E}}[{\bar{h}}_1(x,y){\bar{h}}_2(z,w)]\), then for \(x,z\in {\mathbb {R}}^{N_1-1}\) and \(y,w\in {\mathbb {R}}^{N_2-1}\), we have

$$\begin{aligned} C(x,y,z,w)=(\rho _{r,N_1}(x,z))^p(\rho _{t,N_2}(y,w))^q.\end{aligned}$$
(272)

We recall (5.5.4) of [1], which states that for an arbitrary centered Gaussian field on \({\mathbb {R}}^n\), h, with smooth covariance function, C, we have that

$$\begin{aligned} {\mathbb {E}}[\frac{d^k}{dx_{i_1}\dots dx_{i_k}}h(x)\frac{d^l}{dy_{j_1}\dots dy_{j_l}}h(y)]=\frac{d^k}{dx_{i_1}\dots dx_{i_k}}\frac{d^l}{dy_{j_1}\dots dy_{j_l}}C(x,y).\nonumber \\ \end{aligned}$$
(273)

Routine calculation using these formulas now yields the desired results. \(\square \)
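As a sanity check on applying (273) to the covariance (272), a few of the formulas of Lemma 14 can be verified symbolically in a hypothetical low-dimensional case with two chart coordinates per sphere; the values of p, q, r, t below are arbitrary:

```python
import sympy as sp

p, q = 3, 3
r, t = sp.Rational(1, 2), sp.Rational(1, 3)
rs = sp.sqrt(1 - r**2)  # r_*

x1, x2, z1, z2, y1, y2, w1, w2 = sp.symbols('x1 x2 z1 z2 y1 y2 w1 w2')

def norm_star(u1, u2):
    # the x_* notation applied to a chart point: sqrt(1 - ||x||^2)
    return sp.sqrt(1 - u1**2 - u2**2)

def rho(s, u1, u2, v1, v2):
    # rho_{s,N}(x, y) from (271), here with N - 1 = 2 coordinates
    ss = sp.sqrt(1 - s**2)
    return (u2*v2 + s*u1*v1 + ss*u1*norm_star(v1, v2)
            - ss*v1*norm_star(u1, u2) + s*norm_star(u1, u2)*norm_star(v1, v2))

# C(x, y, z, w) from (272)
C = rho(r, x1, x2, z1, z2)**p * rho(t, y1, y2, w1, w2)**q
at0 = {vv: 0 for vv in (x1, x2, z1, z2, y1, y2, w1, w2)}

def deriv_at0(*vs):
    return sp.diff(C, *vs).subs(at0)

# (243): E[h_N(n) h_N(n(r,t))] = r^p t^q
assert sp.simplify(C.subs(at0) - r**p * t**q) == 0
# (244): E[E_1 h_N(n) h_N(n(r,t))] = p r^{p-1} r_* t^q
assert sp.simplify(deriv_at0(x1) - p*r**(p-1)*rs*t**q) == 0
# (246), with i = j = 1 and with i = j = 2
assert sp.simplify(deriv_at0(x1, z1) - t**q*p*r**(p-2)*(p*r**2 - (p-1))) == 0
assert sp.simplify(deriv_at0(x2, z2) - t**q*p*r**(p-1)) == 0
```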

From this result we observe the following corollary.

Corollary 2

Let us take \(r,t\in [-1,1]\) and the choice of frame field in Lemma 14. Then all the following random variables are independent.

  • \(E_i E_j h_N({{\textbf {n}}})+p\delta _{ij}h_N({{\textbf {n}}})\) for \(1<i\le j\le N_1-1\).

  • \(E^i E^j h_N({{\textbf {n}}})+q\delta _{ij}h_N({{\textbf {n}}})\) for \(1<i\le j\le N_2-1\).

  • \(E_i E^j h_N({{\textbf {n}}})\) for \(1<i\le N_1-1,1<j\le N_2-1\).

  • \((E_{1}E_i h_N({{\textbf {n}}}),E^1 E_i h_N({{\textbf {n}}}),E_i h_N({{\textbf {n}}}),E_i h_N({{\textbf {n}}}(r,t)))\) for any \(1<i\le N_1-1\).

  • \((E_{1}E^i h_N({{\textbf {n}}}),E^1 E^i h_N({{\textbf {n}}}),E^i h_N({{\textbf {n}}}),E^i h_N({{\textbf {n}}}(r,t)))\) for any \(1<i\le N_2-1\).

  • \((E_{1}E_1h_N({{\textbf {n}}}),E_{1}E^1h_N({{\textbf {n}}}),E^{1}E^1h_N({{\textbf {n}}}),h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t))\),

    \(E_1h_N({{\textbf {n}}}),E^1h_N({{\textbf {n}}}),E_1h_N({{\textbf {n}}}(r,t)),E^1h_N({{\textbf {n}}}(r,t)))\).

We will now turn our attention to the computation of the relevant conditional densities, beginning with that of \((\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t)))\). In what follows we will repeatedly use the formula for the conditional law of Gaussian distributions (see for example, (1.2.7) and (1.2.8) of [1]). We observe that for \(1<i\le N_1-1\) and \(1<j\le N_2-1\)

$$\begin{aligned}{} & {} {\mathbb {E}}[(E_i h_N({{\textbf {n}}}), E_i h_N({{\textbf {n}}}(r,t)))(E_i h_N({{\textbf {n}}}), E_i h_N({{\textbf {n}}}(r,t)))^t]=\begin{bmatrix} p &{} p r^{p-1} t^q\\ p r^{p-1} t^q &{} p \end{bmatrix}, \nonumber \\ \end{aligned}$$
(274)
$$\begin{aligned}{} & {} {\mathbb {E}}[(E^j h_N({{\textbf {n}}}), E^j h_N({{\textbf {n}}}(r,t)))(E^j h_N({{\textbf {n}}}), E^j h_N({{\textbf {n}}}(r,t)))^t]=\begin{bmatrix} q &{} q r^{p} t^{q-1}\\ q r^{p} t^{q-1} &{} q \end{bmatrix}. \nonumber \\ \end{aligned}$$
(275)

From these we see that the vectors \((E_i h_N({{\textbf {n}}}), E_i h_N ({{\textbf {n}}}(r,t)))\) and \((E^j h_N({{\textbf {n}}}), E^j h_N({{\textbf {n}}}(r,t)))\) are non-degenerate unless \(|r|=|t|=1\). The remaining entries are more complicated. We may compute that

$$\begin{aligned}{} & {} {\mathbb {E}}[(E_1h_N({{\textbf {n}}}),E^1h_N({{\textbf {n}}}))(E_1h_N({{\textbf {n}}}(r,t)),E^1h_N({{\textbf {n}}}(r,t)))^t]\end{aligned}$$
(276)
$$\begin{aligned}{} & {} \quad =\begin{bmatrix} t^q p r^{p-2}(p r^2-(p-1))&{} -pqr^{p-1}t^{q-1}r_* t_*\\ -pqr^{p-1}t^{q-1}r_* t_* &{} r^p q t^{q-2}(q t^2-(q-1)) \end{bmatrix}=:{\bar{\varSigma }}_{L}(r,t), \end{aligned}$$
(277)

so that if we denote the covariance matrix of the vector

$$\begin{aligned} (E_1h_N({{\textbf {n}}}),E^1h_N({{\textbf {n}}}),E_1h_N({{\textbf {n}}}(r,t)),E^1h_N({{\textbf {n}}}(r,t)))\end{aligned}$$
(278)

as \(\varSigma _L(r,t)\) then we have that

$$\begin{aligned} \varSigma _L(r,t):= \begin{bmatrix} {\bar{\varSigma }}_{L}(1,1)&{} {\bar{\varSigma }}_{L}(r,t)\\ {\bar{\varSigma }}_{L}(r,t)&{} {\bar{\varSigma }}_{L}(1,1) \end{bmatrix}.\end{aligned}$$
(279)

We note for convenience that

$$\begin{aligned} {\bar{\varSigma }}_{L}(1,1)=\begin{bmatrix}p &{} 0 \\ 0 &{} q \end{bmatrix}. \end{aligned}$$
(280)

We observe that when \(\varSigma _L(r,t)\) is invertible, the density of the vector (278) at zero is

$$\begin{aligned} \varphi _{E_1h_N({{\textbf {n}}}),E^1h_N({{\textbf {n}}}),E_1h_N({{\textbf {n}}}(r,t)),E^1h_N({{\textbf {n}}}(r,t))}(0,0,0,0)=\frac{1}{(2\pi )^2\sqrt{\det (\varSigma _{L}(r,t))}}=:f_L(r,t).\nonumber \\ \end{aligned}$$
(281)

Proof of Lemma 9 and Lemma 10

By Lemma 17 below, for any \(r,t\in (-1,1)\) we have that the vector

$$\begin{aligned} (\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t)))\end{aligned}$$
(282)

is non-degenerate. Equation (143) now follows from (274) and (275) and the formula for the conditional density of a Gaussian random variable. We will use the notation

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}[f]={\mathbb {E}}[f|\nabla h_N({{\textbf {n}}})=\nabla h_N({{\textbf {n}}}(r,t))=0].\nonumber \\ \end{aligned}$$
(283)

We define

$$\begin{aligned} \varSigma _U(r,t):={\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}[(h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)))(h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)))^t].\nonumber \\ \end{aligned}$$
(284)

Then Lemma 17 guarantees that \(\varSigma _U(r,t)\) is strictly positive-definite, and the remaining claims follow from Lemma 17 and the formulas for the conditional law of Gaussian distributions. Together these results complete the proof of Lemma 9.

The proof of Lemma 10 follows from Lemma 14 and Corollary 2 and the same conditional formulas. \(\square \)
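The Gaussian conditioning formula invoked here ((1.2.7) and (1.2.8) of [1]) says that for a centered, jointly Gaussian pair \((X,Y)\), the law of X given \(Y=0\) is centered Gaussian with covariance \(\varSigma _{XX}-\varSigma _{XY}\varSigma _{YY}^{-1}\varSigma _{YX}\). A minimal numerical sketch, using an arbitrary made-up joint covariance rather than the \(\varSigma _U\) of the text, checked against the equivalent precision-matrix identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary strictly positive-definite joint covariance for (X, Y),
# with dim(X) = 2 and dim(Y) = 3
M = rng.standard_normal((5, 5))
Sigma = M @ M.T + 5 * np.eye(5)
Sxx, Sxy = Sigma[:2, :2], Sigma[:2, 2:]
Syx, Syy = Sigma[2:, :2], Sigma[2:, 2:]

# Conditional covariance of X given Y = 0 (a Schur complement)
cond_cov = Sxx - Sxy @ np.linalg.inv(Syy) @ Syx

# Equivalent identity: the Schur complement is the inverse of the
# X-block of the precision matrix Sigma^{-1}
precision = np.linalg.inv(Sigma)
assert np.allclose(cond_cov, np.linalg.inv(precision[:2, :2]))
```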

The next result shows that the only points where \(\det (\varSigma _L(r,t))=0\) have \(|r|=|t|=1\).

Lemma 15

For \(p,q\ge 5\), there is a constant \(C>0\), such that for \(r,t\in [-1,1]\), we have that

$$\begin{aligned} f_{L}(r,t)\le C(2-r^2-t^2)^{-1}.\end{aligned}$$
(285)

Proof

Appealing to Lemma 17 as before, for \(r,t\in (-1,1)\) we have that the vector

$$\begin{aligned} (\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t)))\end{aligned}$$
(286)

is non-degenerate, so that in particular \(\det (\varSigma _L(r,t))>0\). From this and continuity, we also see that \(\det (\varSigma _L(r,t))\ge 0\) for all \(r,t\in [-1,1]\). In particular, it suffices to show that there is \(c>0\) such that

$$\begin{aligned} \det (\varSigma _L(r,t))\ge c(2-r^2-t^2)^2.\end{aligned}$$
(287)

We observe that

$$\begin{aligned} {\bar{\varSigma }}_{L}(r,\pm 1)=(\pm 1)^{q}\begin{bmatrix} pr^{p-2}(pr^2-(p-1))&{} 0\\ 0 &{} qr^{p} \end{bmatrix}.\end{aligned}$$
(288)

We note that for \(|r|<1\), we have by the AM-GM inequality that

$$\begin{aligned} \frac{1}{(p-1)}\frac{1-r^{2p-2}}{1-r^2}=\frac{1}{(p-1)}\sum _{i=0}^{p-2}r^{2i}> |r|^{p-2},\end{aligned}$$
(289)

so in particular, \((p-1)r^{p-2}(1-r^2)<1\) and so \(1+r^{p-2}(pr^2-(p-1))\ge 1-(p-1)r^{p-2}(1-r^2)>0\). As \(|r|<1\), we more easily see that \(1-r^{p-2}(pr^2-(p-1))\ge 1-r^{p-2}>0\) and that \(|r|^p<1\). In particular, we see that if \(|r|<1\) then

$$\begin{aligned} \det (\varSigma _L(r,\pm 1))=p^2q^2(1-r^{p-2}(pr^2-(p-1)))(1+r^{p-2}(pr^2-(p-1)))(1-r^{2p})>0.\nonumber \\ \end{aligned}$$
(290)

Similarly we see that if \(|t|<1\) then \(\det (\varSigma _L(\pm 1,t))>0\). Thus what remains is to study the order of vanishing of \(\det (\varSigma _L(r,t))\) around the points \(|r|=|t|=1\). We observe that \(\det (\varSigma _L(r,t))\) is even in both r and t, so with the prior positivity results, it is sufficient to show that

$$\begin{aligned} \liminf _{(r,t)\rightarrow 1}\frac{\det (\varSigma _L(r,t))}{(2-r^2-t^2)^2}>0\end{aligned}$$
(291)

where the limit here (and below) is taken with \(r,t\in (-1,1)\). Let us denote \(\delta =(2-r^2-t^2)\). Then for \(r,t\in (-1,1)\) sufficiently close to (1, 1), we note that

$$\begin{aligned}{} & {} {\bar{\varSigma }}_L(r,t)=O(\delta ^2)+\begin{bmatrix} p&{} 0\\ 0&{} q\\ \end{bmatrix} \end{aligned}$$
(292)
$$\begin{aligned}{} & {} \quad {+}\begin{bmatrix} pq(t{-}1){+}p(p^2{-}(p{-}1)(p{-}2))(r{-}1)&{} {-}pq r_*t_*\\ {-}pq r_*t_*&{} pq(r{-}1){+}q(q^2{-}(q{-}1)(q{-}2))(t{-}1) \end{bmatrix},\nonumber \\ \end{aligned}$$
(293)

so that we have that

$$\begin{aligned} \det (\varSigma _L(1,1))=0,\nabla \det (\varSigma _L(1,1))=0.\end{aligned}$$
(294)

Finally a long but direct computation yields that

$$\begin{aligned} \nabla ^2 \det (\varSigma _L(1,1))=p^2q^2\begin{bmatrix} 4p(6p-4)&{} 12(p-1)(q-1)-4\\ 12(p-1)(q-1)-4&{} 4q(6q-4)\end{bmatrix}.\nonumber \\\end{aligned}$$
(295)

Let us denote the matrix on the right-hand side as \(H_L(p,q)\). We see that by Taylor’s theorem

$$\begin{aligned} \liminf _{(r,t)\rightarrow 1}\frac{\det (\varSigma _L(r,t))}{(2-r^2-t^2)^2}=\liminf _{(r,t)\rightarrow 1}\frac{(1-r,1-t)H_L(p,q)(1-r,1-t)^t}{2(2-r^2-t^2)^2}.\nonumber \\ \end{aligned}$$
(296)

On the other hand for \(p,q\ge 2\) all entries of \(H_L(p,q)\) are positive, so for small enough \(c>0\) and \(r,t\in (-1,1)\)

$$\begin{aligned}{} & {} (1-r,1-t)H_L(p,q)(1-r,1-t)^t\ge c[(1-r)^2+2(1-r)(1-t)+(1-t)^2]\nonumber \\{} & {} \quad =c(2-r-t)^2.\end{aligned}$$
(297)

Combining this with the observation that

$$\begin{aligned} \lim _{r,t\rightarrow 1}\frac{(2-r-t)^2}{(2-r^2-t^2)^2}=\frac{1}{4}\end{aligned}$$
(298)

then completes the proof. \(\square \)
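The bound (287) can also be checked numerically by assembling \(\varSigma _L(r,t)\) from (277) and (279). The sketch below uses the arbitrary choice p = q = 5 and an arbitrary grid:

```python
import numpy as np

def sigma_bar_L(p, q, r, t):
    # \bar{Sigma}_L(r, t) from (277)
    rs, ts = np.sqrt(1 - r**2), np.sqrt(1 - t**2)
    off = -p * q * r**(p-1) * t**(q-1) * rs * ts
    return np.array([
        [t**q * p * r**(p-2) * (p*r**2 - (p-1)), off],
        [off, r**p * q * t**(q-2) * (q*t**2 - (q-1))],
    ])

def sigma_L(p, q, r, t):
    # Sigma_L(r, t) from (279), using \bar{Sigma}_L(1, 1) = diag(p, q)
    A = np.diag([float(p), float(q)])
    B = sigma_bar_L(p, q, r, t)
    return np.block([[A, B], [B, A]])

# det(Sigma_L(r, t)) / (2 - r^2 - t^2)^2 should stay bounded below
p, q = 5, 5
grid = np.linspace(-0.999, 0.999, 81)
ratios = [np.linalg.det(sigma_L(p, q, r, t)) / (2 - r**2 - t**2)**2
          for r in grid for t in grid]
assert min(ratios) > 0
```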

Remark 4

By Lemma 15, (274), (275), and Corollary 2 we see that for \(r,t\in [-1,1]\) such that either \(|r|<1\) or \(|t|<1\), the vector \((\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t)))\) is a non-degenerate Gaussian random vector. As the covariance function for \(h_N\) is isotropic, we see, by applying a rotation, that as long as either \(N^{-1}_1|(\sigma ,\sigma ')|<1\) or \(N^{-1}_2|(\tau ,\tau ')|<1\), the law of

$$\begin{aligned} (\nabla h_N(\sigma ,\tau ),\nabla h_N(\sigma ',\tau '))\end{aligned}$$
(299)

is non-degenerate.

We will also need a result that controls the rate at which the entries of the Hessian vanish as \(r,t\rightarrow 1\).

Lemma 16

There is a fixed constant \(C>0\), depending only on p and q, such that for \(1\le i\le N_1-1\), we have that

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}\bigg [\left( E_i E_1h_N({{\textbf {n}}})\sqrt{1{-}r^2}{+}E_i E^1h_N({{\textbf {n}}})\sqrt{1{-}t^2}\right) ^2\bigg ]{\le } C(2{-}t^2{-}r^2)^2,\nonumber \\ \end{aligned}$$
(300)

and for \(1\le i\le N_2-1\), we have that

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}\bigg [\left( E^i E_1h_N({{\textbf {n}}})\sqrt{1-r^2}+E^i E^1h_N({{\textbf {n}}})\sqrt{1-t^2}\right) ^2\bigg ]\le C(2-t^2-r^2)^2.\nonumber \\ \end{aligned}$$
(301)

Proof

We only show (300), the case of (301) being similar. We also note that by symmetry and continuity, we may assume that \(r,t\in (0,1)\). Moreover, as the left-hand side is bounded while the right-hand side is bounded below by a positive constant on \([0,1-\epsilon ]^2\), we may additionally assume that \(r,t\in (1-\epsilon ,1)\) for any fixed \(\epsilon >0\).

Let \({\bar{h}}_1\) and \({\bar{h}}_2\) be as in Lemma 14, and let us denote \(v_{N}(s)=(\sqrt{1-s^2},0,\dots ,0)\in {\mathbb {R}}^{N-1}\) and \(v(r,t)=(v_{N_1}(r),v_{N_2}(t))\). As both \(\nabla {\bar{h}}_2(0)\) and \(\nabla {\bar{h}}_1(v(r,t))\) are coordinate expressions of the gradient with respect to (potentially different) orthonormal frames, they are related at a fixed point by an invertible matrix. In particular, we see that the linear relation \(\nabla h_N({{\textbf {n}}}(r,t))=\nabla {\bar{h}}_2(0)=0\) is equivalent to the linear relation \(\nabla {\bar{h}}_1(v(r,t))=0\). If we denote \({\bar{g}}={\bar{h}}_1\), we see then that

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}[f]={\mathbb {E}}[f|\nabla {\bar{g}}(0)=\nabla {\bar{g}}(v(r,t))=0].\end{aligned}$$
(302)

With this notation, we see from the proof of Lemma 14 that we may rewrite (300) as

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}\left[ \left( (\sqrt{1-r^2},\sqrt{1-t^2})^t(\frac{d^2}{dx_1dx_i}{\bar{g}}(0),\frac{d^2}{dy_1dx_i}{\bar{g}}(0))\right) ^2\right] .\nonumber \\ \end{aligned}$$
(303)

We will write \((x,y)\in {\mathbb {R}}^{N_1-1}\times {\mathbb {R}}^{N_2-1}\) to denote the standard Euclidean coordinates. Let us denote, for \(1\le i\le N_1-1\)

$$\begin{aligned} H_{r,t}^i=\begin{bmatrix}\frac{d^3}{dx_1^2dx_i}{\bar{g}}(v(r,t))&{}\frac{d^3}{dx_1dy_1dx_i}{\bar{g}}(v(r,t))\\ \frac{d^3}{dx_1dy_1dx_i}{\bar{g}}(v(r,t))&{}\frac{d^3}{dy_1^2dx_i}{\bar{g}}(v(r,t))\\ \end{bmatrix}.\end{aligned}$$
(304)

We observe that by Taylor’s Theorem, we have that

$$\begin{aligned}{} & {} |\frac{d}{dx_i}{\bar{g}}(v(r,t))-\frac{d}{dx_i}{\bar{g}}(0)-\frac{d^2}{dx_idx_1}{\bar{g}}(0)\sqrt{1-r^2}-\frac{d^2}{dx_idy_1}{\bar{g}}(0)\sqrt{1-t^2}| \nonumber \\ \end{aligned}$$
(305)
$$\begin{aligned}{} & {} \quad \le (\sup _{u,v\in [-1,1]^2}\Vert H_{u,v}^i\Vert )\Vert (\sqrt{1-r^2},\sqrt{1-t^2})\Vert ^2. \end{aligned}$$
(306)

Observing that \(\Vert (\sqrt{1-r^2},\sqrt{1-t^2})\Vert ^2=2-r^2-t^2\), we thus see by (302) that

$$\begin{aligned}{} & {} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}\left[ \left( (\sqrt{1-r^2},\sqrt{1-t^2})^t(\frac{d^2}{dx_1dx_i}{\bar{g}}(0),\frac{d^2}{dy_1dx_i}{\bar{g}}(0))\right) ^2\right] \nonumber \\ \end{aligned}$$
(307)
$$\begin{aligned}{} & {} \quad \le (2-r^2-t^2)^2{\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}[\sup _{u,v\in [-1,1]^2}\Vert H_{u,v}^i\Vert ^2]. \end{aligned}$$
(308)

Employing that \(\Vert H_{u,v}^i\Vert \le \Vert H_{u,v}^i\Vert _{HS}\), and that conditioning on a jointly Gaussian vector can only decrease the variance, we see that

$$\begin{aligned} {\mathbb {E}}_{\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t))}[\sup _{u,v\in [-1,1]^2}\Vert H_{u,v}^i\Vert ^2]\le {\mathbb {E}}[\sup _{u,v\in [-1,1]^2}\Vert H_{u,v}^i\Vert _{HS}^2].\end{aligned}$$
(309)

As the covariances of the entries of \(H_{u,v}^i\) are clearly bounded (uniformly in u, v, and N), we see that the right-hand side of (309) is bounded by a fixed constant by the Borell-TIS inequality (see Theorem 2.1.1 of [1]). In view of (303), this completes the proof. \(\square \)

Finally, we end this section by computing a useful expression involving \(\varSigma _U(r,t)\) that is used in the main text. We fix \(r,t\in (-1,1)\) for the remainder of the section. We first note by Corollary 2 and (284) that

$$\begin{aligned} \varSigma _U(r,t)={\mathbb {E}}[(h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)))(h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)))^t|E_1h_N({{\textbf {n}}}(r,t))=\end{aligned}$$
(310)
$$\begin{aligned} E^1h_N({{\textbf {n}}}(r,t))=E_1h_N({{\textbf {n}}})=E^1h_N({{\textbf {n}}})=0].\nonumber \\ \end{aligned}$$
(311)

To simplify this further, we observe that by direct computation from Lemma 14 the vector

$$\begin{aligned} \frac{1}{\sqrt{2}}\left( h_N({{\textbf {n}}})+h_N({{\textbf {n}}}(r,t)),E_1 h_N({{\textbf {n}}})-E_1 h_N({{\textbf {n}}}(r,t)),E^1 h_N({{\textbf {n}}})-E^1 h_N({{\textbf {n}}}(r,t))\right) \nonumber \\ \end{aligned}$$
(312)

and the vector

$$\begin{aligned} \frac{1}{\sqrt{2}}\left( h_N({{\textbf {n}}})-h_N({{\textbf {n}}}(r,t)),E_1 h_N({{\textbf {n}}})+E_1 h_N({{\textbf {n}}}(r,t)),E^1 h_N({{\textbf {n}}})+E^1 h_N({{\textbf {n}}}(r,t))\right) \nonumber \\ \end{aligned}$$
(313)

are independent. The covariance matrix of the vector (312) is given by

$$\begin{aligned}{} & {} \varSigma _Y(r,t)\nonumber \\{} & {} \quad {:=}\begin{bmatrix} 1+r^pt^q&{} -pr^{p-1}t^qr_* &{} -q r^p t^{q-1}t_*\\ -pr^{p-1}t^qr_* &{} p(1-r^pt^q+(p-1)r^{p-2}t^qr_*^2)&{} pq r^{p-1}t^{q-1}r_*t_*\\ -q r^p t^{q-1}t_* &{} pq r^{p-1}t^{q-1}r_*t_*&{} q(1-r^pt^q+(q-1)r^{p}t^{q-2}t_*^2) \end{bmatrix}.\nonumber \\ \end{aligned}$$
(314)

By the above independence, we see that

$$\begin{aligned} -\frac{1}{2}(E,E)\varSigma _U(r,t)^{-1}(E,E)^t=-[\varSigma _Y(r,t)^{-1}]_{11}E^2.\end{aligned}$$
(315)

We now proceed to compute this quantity. We first note that by direct computation, we have that

$$\begin{aligned} \det (\varSigma _Y(r,t))= & {} pqb_{p,q}(r,t), \end{aligned}$$
(316)
$$\begin{aligned} \;b_{p,q}(r,t):= & {} (1-r^{p}t^q)(1-r^{2p-2}t^{2q-2})+(p-1)(1-r^2)r^{p-2}t^{q}(1-r^{p}t^{q-2}) \nonumber \\ \end{aligned}$$
(317)
$$\begin{aligned}{} & {} +(q-1)(1-t^2)r^{p}t^{q-2}(1-r^{p-2}t^{q}). \end{aligned}$$
(318)

We will also need a term from the adjugate matrix, namely

$$\begin{aligned}{} & {} [\varSigma _Y(r,t)]_{22}[\varSigma _Y(r,t)]_{33}-[\varSigma _Y(r,t)]_{23}^2=pq b_{p,q}(r,t)\nonumber \\{} & {} \quad -pq r^p t^{q}(1-r^{p-2}t^{q})(1-r^{p}t^{q-2}).\end{aligned}$$
(319)

Together these yield that

$$\begin{aligned}{}[\varSigma _Y(r,t)^{-1}]_{11}= & {} \frac{1}{\det (\varSigma _Y)}([\varSigma _Y(r,t)]_{22}[\varSigma _Y(r,t)]_{33}-[\varSigma _Y(r,t)]_{23}^2)\end{aligned}$$
(320)
$$\begin{aligned}= & {} 1-\frac{ r^p t^{q}(1-r^{p-2}t^{q})(1-r^{p}t^{q-2})}{b_{p,q}(r,t)}, \end{aligned}$$
(321)

so altogether, we see that

$$\begin{aligned} -\frac{1}{2}(1,1) \varSigma _U(r,t)^{-1}(1,1) ^t=-1+\frac{ r^p t^{q}(1-r^{p-2}t^{q})(1-r^{p}t^{q-2})}{b_{p,q}(r,t)}.\end{aligned}$$
(322)

Proof of Lemma 8

In this section, we provide a proof of Lemma 8. The proof will be almost identical to that of Lemma 4 of [34], which may be immediately adapted to this case if we prove the following result.

Lemma 17

For \(p,q\ge 5\), and \(r,t\in (-1,1)\), we have that the Gaussian vector

$$\begin{aligned} (h_N({{\textbf {n}}}),h_N({{\textbf {n}}}(r,t)),\nabla h_N({{\textbf {n}}}),\nabla h_N({{\textbf {n}}}(r,t)),\nabla ^2h_N({{\textbf {n}}}),\nabla ^2h_N({{\textbf {n}}}(r,t)))\end{aligned}$$
(323)

is non-degenerate, up to degeneracies required by symmetries of the Hessian. That is, the vector is non-degenerate if one only takes elements of the Hessians above and on the main diagonal.

Proof of Lemma 8

One may proceed almost identically to the proof of Lemma 4 of [34], replacing their Lemma 32 with our Lemma 17. \(\square \)

The proof of Lemma 17 will follow from two lemmas, both proven at the end of this section. To state the first of these, we first recall the (normalized) spherical \(\ell \)-spin glass model (see (1.1) and (4.1) of [34]). For \(\ell \ge 1\) and \(N\ge 2\), the (normalized) \(\ell \)-spin glass model is a smooth centered Gaussian random field \(h_{N,\ell }:S^{N-1}\rightarrow {\mathbb {R}}\), with covariance

$$\begin{aligned} {\mathbb {E}}[h_{N,\ell }(\sigma )h_{N,\ell }(\sigma ')]=(\sigma ,\sigma ')^\ell .\end{aligned}$$
(324)

We see that this covariance is related to that of the \((p,q)\)-spin glass model in that

$$\begin{aligned} {\mathbb {E}}[h_{N_1,N_2,p,q}(\sigma ,\tau )h_{N_1,N_2,p,q}(\sigma ',\tau ')]={\mathbb {E}}[h_{N_1,p}(\sigma )h_{N_1,p}(\sigma ')]{\mathbb {E}}[h_{N_2,q}(\tau )h_{N_2,q}(\tau ')].\nonumber \\ \end{aligned}$$
(325)

For \(r\in (-1,1)\), let us denote \({{\textbf {n}}}(r)={{\textbf {n}}}_N(r)=(r,\sqrt{1-r^2},0,\dots ,0)\) and \({{\textbf {n}}}={{\textbf {n}}}(1)\). Furthermore, let \((E_i)_{i=1}^{N-1}\) denote the choice of (piece-wise) smooth orthonormal frame field on \(S^{N-1}\) defined in Lemma 30 of [34], and define \((\nabla h_\ell ,\nabla ^2 h_\ell )\) as above.

Lemma 18

For \(\ell \ge 5\), and \(r\in (-1,1)\), the Gaussian vector

$$\begin{aligned} (h_{N,\ell }({{\textbf {n}}}),h_{N,\ell }({{\textbf {n}}}(r)),\nabla h_{N,\ell }({{\textbf {n}}}),\nabla h_{N,\ell }({{\textbf {n}}}(r)),\nabla ^2 h_{N,\ell }({{\textbf {n}}}),\nabla ^2 h_{N,\ell }({{\textbf {n}}}(r))) \nonumber \\ \end{aligned}$$
(326)

is non-degenerate, up to degeneracies required by symmetries of the Hessian.

Remark 5

This is similar to (and reliant on) Lemma 32 of [34], which establishes that for \(\ell \ge 3\) and \(r\in (-1,1)\), the Gaussian vector

$$\begin{aligned} (\nabla h_{N,\ell }({{\textbf {n}}}),\nabla h_{N,\ell }({{\textbf {n}}}(r)),\nabla ^2 h_{N,\ell }({{\textbf {n}}}),\nabla ^2 h_{N,\ell }({{\textbf {n}}}(r)))\end{aligned}$$
(327)

is non-degenerate, up to degeneracies required by symmetries of the Hessian.

We observe that the E defined in the proof of Lemma 14 is such that \((E_i)_{i=1}^{N_1-1}\) and \((E^i)_{i=N_1}^{N-2}\) are given by the extension to \(S^{N_1-1}\times S^{N_2-1}\) of derivations acting only on \(S^{N_1-1}\) and \(S^{N_2-1}\), respectively. Moreover, by construction, these are chosen so that their restrictions coincide with the E defined above in Lemma 18 with \(\ell =p\), \(N=N_1\), and \(r=r\) and with \(\ell =q\), \(N=N_2\), and \(r=t\), respectively. In particular, we see that Lemma 17 follows from Lemma 18 and Lemma 19 below.

Lemma 19

For \(1\le i\le 2\), let \(h_i\) denote a smooth centered Gaussian field defined on an open subset \(U_i\subseteq {\mathbb {R}}^{n_i}\), with smooth covariance function \(f_i\). Let h denote the smooth centered Gaussian field defined on \(U_1\times U_2\) with covariance function \(f_1 f_2\). For some \(\ell \ge 1\) and \(1\le i\le 2\), let us choose some sequence of points \(r_1^i,\dots r_\ell ^i\in U_i\), and let us denote \(r_{k,l}=(r_k^1,r_l^2)\). Then if we have, for both \(l=1,2\), that the Gaussian vector comprised of entries

$$\begin{aligned} (h_l(r_i^l),\nabla h_l(r_i^l),\nabla ^2 h_l(r_i^l))_{i=1}^{\ell }\end{aligned}$$
(328)

is non-degenerate up to degeneracies required by symmetries of the Hessian, then the Gaussian vector comprised of entries

$$\begin{aligned} (h(r_{ij}),\nabla h(r_{ij}),\nabla ^2 h(r_{ij}))_{1\le i,j\le \ell }\end{aligned}$$
(329)

is non-degenerate up to degeneracies required by symmetries of the Hessian.

We now complete this section by giving the proofs of Lemmas 18 and 19.

Proof of Lemma 18

For the duration of this proof we will denote \(h_N:=h_{N,\ell }\). To establish the desired claim, it is sufficient to show both that the vector

$$\begin{aligned} (h_{N}({{\textbf {n}}}),h_{N}({{\textbf {n}}}(r)),\nabla h_{N}({{\textbf {n}}}),\nabla h_{N}({{\textbf {n}}}(r)))\end{aligned}$$
(330)

is non-degenerate, and that the law of \((\nabla ^2 h_{N}({{\textbf {n}}}), \nabla ^2 h_{N}({{\textbf {n}}}(r)))\) conditioned on the event

$$\begin{aligned} (h_{N}({{\textbf {n}}})=h_{N}({{\textbf {n}}}(r))=0,\nabla h_{N}({{\textbf {n}}})=\nabla h_{N}({{\textbf {n}}}(r))=0)\end{aligned}$$
(331)

is non-degenerate.

For the convenience of the reader, we begin by recalling all the results of [34] that we will use. For clarity, their \((f,p)\) coincides with our \((h_N,\ell )\). In their Lemma 12 they show that the law of \((\nabla h_{N}({{\textbf {n}}}),\nabla h_{N}({{\textbf {n}}}(r)))\) is non-degenerate, and that the covariance matrix of \((h_{N}({{\textbf {n}}}),h_{N}({{\textbf {n}}}(r)))\), conditional on the event \((\nabla h_{N}({{\textbf {n}}})=\nabla h_{N}({{\textbf {n}}}(r))=0)\), is given by a matrix \(\varSigma _U(r)\), which they show is invertible in Remark 31. Together these show that (330) is non-degenerate. Now conditional on (331), their Lemma 13 shows that the only non-trivial correlations between entries of \((\nabla ^2h_N ({{\textbf {n}}}),\nabla ^2h_N ({{\textbf {n}}}(r)))\) are between \((E_{i} E_j h_N({{\textbf {n}}}),E_{i} E_j h_N({{\textbf {n}}}(r)))\), and those enforced by symmetry of the Hessian. Thus we are reduced to showing that the conditional law of \((E_{i} E_j h_N({{\textbf {n}}}),E_{i} E_j h_N({{\textbf {n}}}(r)))\) is non-degenerate for each \(1\le i\le j\le N-1\). The case of \(i,j<N-1\) is clear from their description in item (2) of Lemma 13, and the case of \(i<N-1\) and \(j=N-1\) follows from their description in item (3) and their proof of Lemma 32, where they show that \(\varSigma _Z(r)\) is invertible. The remaining case is \(i=j=N-1\), where the conditional covariance matrix is proportional to the matrix \(\varSigma _Q(r)\) they define in (10.2).

In principle, one should be able to derive the invertibility of \(\varSigma _Q(r)\) directly from this expression, but due to its technicality, we instead employ a more abstract argument. To do this, we need only observe that the expression for \(\varSigma _Q(r)\) is N-independent, so to show that \(\varSigma _Q(r)\) is invertible, it is sufficient to show that the conditional law of \((E_{N-1}E_{N-1}h_N({{\textbf {n}}}),E_{N-1}E_{N-1}h_N({{\textbf {n}}}(r)))\) is non-degenerate in the case of \(N=2\). Henceforth, we assume that \(N=2\) and omit N from the notation.

We observe that \(h_\ell \) coincides in law with the Gaussian field defined for \((x,y)\in S^1\) by

$$\begin{aligned} f(x,y)=\sum _{i=0}^{\ell }a_ix^iy^{\ell -i},\end{aligned}$$
(332)

where here \((a_i)_{i=0}^{\ell }\) are independent centered Gaussian random variables with \({\mathbb {E}}[a_i^2]=\left( {\begin{array}{c}\ell \\ i\end{array}}\right) \). We observe that, by continuity of the derivatives of f as a function of \((a_i)_{i=0}^\ell \), if the matrix \(\varSigma _Q(r)\) is degenerate, any (deterministic) homogeneous polynomial of degree \(\ell \) on \(S^1\) satisfying the linear relation (331) must satisfy a fixed non-trivial linear relationship between the elements of \((E_{1} E_1 f({{\textbf {n}}}),E_1 E_1 f({{\textbf {n}}}(r)))\). On the other hand, the polynomials

$$\begin{aligned} f_{1}(x,y)=x^2(y\sqrt{1-r^2}-x r)^3,\;\;f_{2}(x,y)=x^3(y\sqrt{1-r^2}-x r)^2\end{aligned}$$
(333)

satisfy (331) but in addition have

$$\begin{aligned} E_1 E_1 f_2({{\textbf {n}}})=E_1 E_1 f_1({{\textbf {n}}}(r))=0,\;\;\; E_1 E_1 f_1({{\textbf {n}}})=E_1 E_1 f_2({{\textbf {n}}}(r))=2(1-r^2)^{3/2}.\nonumber \\ \end{aligned}$$
(334)

Thus, no such relationship is possible, which shows that the law of

$$\begin{aligned} (E_{N-1}E_{N-1}h_N({{\textbf {n}}}),E_{N-1}E_{N-1}h_N({{\textbf {n}}}(r)))\end{aligned}$$
(335)

is non-degenerate when \(N=2\) and thus in general. \(\square \)
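The representation (332) can be verified symbolically: with \({\mathbb {E}}[a_i^2]=\left( {\begin{array}{c}\ell \\ i\end{array}}\right) \), the covariance of f is \(\sum _i \left( {\begin{array}{c}\ell \\ i\end{array}}\right) (xx')^i(yy')^{\ell -i}=(xx'+yy')^{\ell }\), which is the covariance (324) restricted to \(S^1\). A minimal check with sympy:

```python
import sympy as sp

x, y, xp_, yp_ = sp.symbols("x y x' y'")
l = 5  # any degree works; the lemma needs l >= 5

# Covariance of f(x, y) = sum_i a_i x^i y^(l - i) with independent a_i and
# E[a_i^2] = binomial(l, i), as in (332).
cov = sum(sp.binomial(l, i) * (x * xp_)**i * (y * yp_)**(l - i)
          for i in range(l + 1))

# By the binomial theorem this equals ((x, y), (x', y'))^l, i.e. (324) on S^1.
assert sp.expand(cov - (x * xp_ + y * yp_)**l) == 0
```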

Proof of Lemma 19

This proof will consist of showing that the covariance matrix of (329), after removing duplicate entries of the Hessians, is a submatrix of the Kronecker product of the covariance matrices of (328), where again we have removed duplicate entries. Indeed, as the Kronecker product of two strictly positive definite matrices is strictly positive definite, the lemma then follows from the interlacing property for the eigenvalues of a submatrix of a symmetric matrix.
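The two linear-algebra facts invoked here — that the Kronecker product of strictly positive definite matrices is strictly positive definite, and that any principal submatrix of a positive definite matrix is positive definite — can be illustrated numerically; the matrices below are arbitrary examples, not the covariances of the lemma:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary strictly positive definite matrices (Gram matrices plus a shift).
F1, F2 = rng.standard_normal((6, 4)), rng.standard_normal((7, 5))
A = F1.T @ F1 + 0.1 * np.eye(4)
B = F2.T @ F2 + 0.1 * np.eye(5)

# The eigenvalues of the Kronecker product are the pairwise products of the
# eigenvalues of A and B, so it is again strictly positive definite.
K = np.kron(A, B)
assert np.min(np.linalg.eigvalsh(K)) > 0

# Any principal submatrix of a PD matrix is PD (Cauchy eigenvalue interlacing).
idx = [0, 2, 3, 7, 11, 16]  # an arbitrary set of retained rows/columns
sub = K[np.ix_(idx, idx)]
assert np.min(np.linalg.eigvalsh(sub)) > 0
```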

To demonstrate that one is a submatrix of the other, we will make repeated use of the identity (273) above, which expresses the covariance between derivatives of a Gaussian field in terms of the derivatives of the covariance function. For ease of notation, we introduce the vector-valued operator \(\nabla ^{U}\), defined for a function f in a neighborhood of \(x\in {\mathbb {R}}^N\) as

$$\begin{aligned} \nabla ^{U}f(x)=\left( \frac{d^2}{dx_idx_j}f(x)\right) _{1\le i\le j\le N},\end{aligned}$$
(336)

considered as an \(N(N+1)/2\)-dimensional vector with the lexicographic ordering on (i,j). Then for \(l=1,2\), we see from (273) that the covariance of the vector \((h_l,\nabla h_l,\nabla ^{U} h_l)\) evaluated at two points \(x,y\in {\mathbb {R}}^{n_l}\) is given by

$$\begin{aligned} \varSigma ^{l}(x,y):=\begin{bmatrix} f_l(x,y)&{} \nabla _1 f_l(x,y) &{} \nabla ^{U}_1 f_l(x,y)\\ \nabla _2 f_l(x,y)&{} \nabla _2 \nabla _1 f_l(x,y) &{} \nabla _2 \nabla ^{U}_1 f_l(x,y)\\ \nabla ^{U}_2 f_l(x,y)&{} \nabla ^{U}_2 \nabla _1 f_l(x,y) &{} \nabla ^{U}_2 \nabla ^{U}_1 f_l(x,y)\\ \end{bmatrix}, \end{aligned}$$
(337)

where here \(\nabla _1 f_l\) and \(\nabla _{2} f_l\) denote the gradient of \(f_l:U_l\times U_l\rightarrow {\mathbb {R}}\) taken with respect to the first and second copy of \(U_l\), respectively (and similarly for \(\nabla ^{U}\)), and where the matrix indices are such that the index of \(\nabla _2\) and \(\nabla ^{U}_2\) gives the row number and the index of \(\nabla _1\) and \(\nabla ^{U}_1\) gives the column number. With this notation, we see that the covariance matrix of (328), with repeated entries removed, is up to reordering given by

$$\begin{aligned} \varSigma ^l:=\begin{bmatrix}\varSigma ^l(r^l_i,r^l_j) \end{bmatrix}_{1\le i,j\le \ell }.\end{aligned}$$
(338)

To specify the covariance matrix of (329), we note that the covariance function of h is given by \(f_1f_2:U_1\times U_2\times U_1\times U_2\rightarrow {\mathbb {R}}\); explicitly, for \((x,y,z,w)\in U_1\times U_2\times U_1\times U_2\) we have that

$$\begin{aligned} {\mathbb {E}}[h(x,y)h(z,w)]=f_1(x,z)f_2(y,w).\end{aligned}$$
(339)

We are interested in the covariance matrix of the vector \((h,\nabla h,\nabla ^{U} h)\) at two points \((x,y),(z,w)\in U_1\times U_2\). We will denote by \((\nabla _I,\nabla _{II})\) the decomposition of the gradient on \(U_1\times U_2\) into the first and second factors, and similarly for \((\nabla _{I}^{U},\nabla _{II}^{U})\). Using this notation, we will also reorder and write the vector \((h,\nabla h,\nabla ^{U} h)\) as \((h,\nabla _I h,\nabla ^{U}_I h,\nabla _{II} h,\nabla _{I}\nabla _{II} h,\nabla ^{U}_{II} h)\). With respect to this decomposition, we may write the covariance matrix in terms of the induced 6-by-6 block decomposition as

$$\begin{aligned}{} & {} \varSigma (x,y,z,w):=\nonumber \\ \end{aligned}$$
(340)
$$\begin{aligned}{} & {} \begin{bmatrix} f_1f_2&{} \nabla _1 f_1f_2 &{} \nabla ^{U}_1 f_1f_2&{} f_1\nabla _1 f_2 &{} \nabla _1 f_1\nabla _1 f_2 &{} f_1\nabla ^U_1 f_2\\ \nabla _2 f_1f_2&{} \nabla _{12} f_1f_2 &{} \nabla ^{U}_1 \nabla _2 f_1f_2&{} \nabla _2f_1\nabla _1 f_2 &{} \nabla _{12} f_1\nabla _1 f_2 &{} \nabla _2 f_1\nabla ^U_1 f_2\\ \nabla _2^U f_1f_2&{} \nabla _1\nabla _{2}^U f_1f_2 &{} \nabla ^{U}_1 \nabla _2^U f_1f_2&{} \nabla _2^U f_1\nabla _1 f_2 &{} \nabla _1\nabla _{2}^U f_1\nabla _1 f_2 &{} \nabla _2^U f_1\nabla ^U_1 f_2\\ f_1\nabla _2f_2&{} \nabla _1 f_1\nabla _2f_2 &{} \nabla ^{U}_1 f_1\nabla _2f_2&{} f_1\nabla _{12} f_2 &{} \nabla _1 f_1\nabla _{12} f_2 &{} f_1\nabla ^U_1\nabla _2 f_2\\ \nabla _2 f_1\nabla _2f_2&{} \nabla _{12} f_1\nabla _2f_2 &{} \nabla ^{U}_1 \nabla _2 f_1\nabla _2f_2&{} \nabla _2 f_1\nabla _{12} f_2 &{} \nabla _{12} f_1\nabla _{12} f_2 &{} \nabla _2 f_1\nabla ^U_1\nabla _2 f_2\\ f_1\nabla _2^U f_2&{} \nabla _1 f_1\nabla _2^U f_2 &{} \nabla ^{U}_1 f_1\nabla _2^U f_2&{} f_1\nabla _{1}\nabla _2^U f_2 &{} \nabla _1 f_1\nabla _{2}^U\nabla _1 f_2 &{} f_1\nabla ^U_1\nabla _2^U f_2\\ \end{bmatrix},\nonumber \\ \end{aligned}$$
(341)

where here \(f_1=f_1(x,z)\), \(f_2=f_2(y,w)\) (and similarly for their derivatives), \(\nabla _{12}=\nabla _1\nabla _2\), and \(\nabla _i f_l\) is given by its expression above. The covariance matrix of (329), with degenerate entries removed, is then given, up to reordering, by

$$\begin{aligned} \varSigma :=\begin{bmatrix}\varSigma (r_{ij},r_{kl}) \end{bmatrix}_{1\le i,j,k,l\le \ell }.\end{aligned}$$
(342)

Now the matrix \(\varSigma ^1(x,z)\otimes \varSigma ^2(y,w)\), where \(\otimes \) denotes the Kronecker product, may be decomposed into blocks labeled by (i,j) with \(1\le i,j\le 3\), by employing the block decomposition of (337). We observe then that \(\varSigma (x,y,z,w)\) is equivalent to the submatrix of \(\varSigma ^1(x,z)\otimes \varSigma ^2(y,w)\) consisting of blocks with indices (i,j) with \(i+j\le 4\). As the choice of indices does not depend on the choice of (x,y,z,w), we see as well that \(\varSigma \) is a submatrix of \(\varSigma ^1\otimes \varSigma ^2\). \(\square \)

Proof of Eqs. 3 and 4

In this section, we will show that formulas (3) and (4) coincide with their counterparts in Theorem 2.1 of [25]. In particular, we need to relate our \(\mu _{p,q,\gamma }\) to the “\(\mu _{\infty }(u)\)” defined in Remarks 2.2 and 2.3 of [25], and in addition, we must relate the quantity “\(E_{\infty }(p,q,\gamma )\)”, defined in their Theorem 2.1 and Remark 2.3, to our \(E_{p,q,\gamma ;\infty }\). To avoid confusion, let us denote these quantities by \(\mu _{p,q,\gamma ,u}^{Mc}\) and \(E_{p,q,\gamma ;\infty }^{Mc}\), respectively. Specifically, to verify the desired claims of (3) and (4) we need to show that

$$\begin{aligned} E_{p,q,\gamma ;\infty }{} & {} =E_{p,q,\gamma ;\infty }^{Mc};\;\;\int \log (|x|)\mu _{p,q,\gamma ,u}^{Mc}(dx)\nonumber \\{} & {} =\int \log (|x-u|)\mu _{p,q,\gamma }(dx)+2C_{p,q,\gamma }-1,\end{aligned}$$
(343)

as well as verifying that the constructions given in Sect. 1.1 are indeed possible. In particular, given (343), we show that the complexity function \(\varSigma _{p,q,\gamma }\), given in (14) and (15), coincides with the function given in [25].

Before proceeding, we will explain the reason for the difference between our expressions and summarize the method used to demonstrate (343). The difference is due to our rescaling of the rows and columns of the Hessian appearing in the Kac-Rice formula before passing to the limiting spectral measure. This trick, specific to the pure case, allows us to realize all the limiting empirical measures as translations of a single fixed measure. On the other hand, while our rescaling has an obvious effect on the determinant, it is less clear what effect it has on the limiting spectral measure, as there appears to be no direct way to compare the solution of our (13) to the solutions of (2.6) in [25]. Instead, we will use the fact that this rescaling does not affect any of the proofs in [25], and so we obtain an alternative expression for the complexity, which, when combined with the corresponding expression in [25], gives (343).

To begin, we recall some results used in the proof of Theorem 2.1 in [25]. In particular, we need the following results, which follow from their application of the Kac-Rice formula in Lemma 3.2 (and noting that their \(\alpha _1=\alpha _2=0\) in the pure case, so that \(H_N(u)\) only depends on \(u_0\)): for nice \(B\subseteq {\mathbb {R}}\),

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log ({\mathbb {E}}[{\textrm{Crit}}_N(B)])=\frac{1}{2}[1+\gamma \log (\gamma /p)+(1-\gamma )\log ((1-\gamma )/q)] \nonumber \\ \end{aligned}$$
(344)
$$\begin{aligned}{} & {} \quad +\lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N(u))|]du), \end{aligned}$$
(345)
$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log ({\mathbb {E}}[{\textrm{Crit}}_{N,0}(B)])=\frac{1}{2}[1+\gamma \log (\gamma /p)+(1-\gamma )\log ((1-\gamma )/q)] \nonumber \\ \end{aligned}$$
(346)
$$\begin{aligned}{} & {} \quad +\lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N(u))|I(H_N(u)\ge 0)]du), \end{aligned}$$
(347)

where here \(H_N(u)\) is an \((N-2)\)-by-\((N-2)\) Gaussian symmetric random matrix with independent entries, satisfying

$$\begin{aligned} {\mathbb {E}}[(H_N(u))_{ij}]= & {} u\delta _{ij}(\frac{N}{N_1}p\delta _{i\le N_1-1}+\frac{N}{N_2}q\delta _{i>N_1-1})\end{aligned}$$
(348)
$$\begin{aligned} {\textrm{Cov}}(H(u)_{ij})= & {} {\left\{ \begin{array}{ll} \frac{Np(p-1)}{N_1^2}(1+\delta _{ij});\;\;1\le i,j\le N_1-1\\ \frac{Npq}{N_1N_2};\;\;1\le i\le N_1-1<j\le N-2\\ \frac{Nq(q-1)}{N_2^2}(1+\delta _{ij});\;\; N_1-1< i,j\le N-2 \end{array}\right. }. \end{aligned}$$
(349)

Following their proof of Theorem 2.1 we see that

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N(u))|]du) \end{aligned}$$
(350)
$$\begin{aligned}{} & {} \quad =\sup _{u\in B}\left( -\frac{u^2}{2}+\int \log (|x|)\mu _{p,q,\gamma ,u}^{Mc}(dx)\right) \end{aligned}$$
(351)
$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N(u))|I(H_N(u)\ge 0)]du) \end{aligned}$$
(352)
$$\begin{aligned}{} & {} \quad =\sup _{u\in (-\infty ,-E_{p,q,\gamma ;\infty }^{Mc})\cap B}\left( -\frac{u^2}{2}+\int \log (|x|)\mu _{p,q,\gamma ,u}^{Mc}(dx)\right) \end{aligned}$$
(353)

Let us define the matrix

$$\begin{aligned} {\bar{D}}_{N}=\begin{bmatrix}\sqrt{N_1/(Np)}I_{N_1-1}&{}0\\ 0&{}\sqrt{N_2/(Nq)}I_{N_2-1}\end{bmatrix},\end{aligned}$$
(354)

and define

$$\begin{aligned} H^D_N(u)={\bar{D}}_{N}H_N(u){\bar{D}}_{N},\end{aligned}$$
(355)

If we denote \(H^D_N=H^D_N(0)\), we note that \(H^D_N(u)\) coincides with \(H_N^D-uI\) in law. We also note that

$$\begin{aligned} |\det (H_N^D-uI)|{\mathop {=}\limits ^{d}}\det ({\bar{D}}_{N})^2|\det (H_N(u))|=\exp (N(2C_{p,q,\gamma }{-}1{+}o(1)))|\det (H_N(u))|.\nonumber \\ \end{aligned}$$
(356)

Noting as well that the index is invariant under congruence transformations, we obtain the following result, which we formulate as a self-contained lemma for usage above.

Lemma 20

For \(p,q\ge 2\), \(0<\gamma <1\), and nice \(B\subseteq {\mathbb {R}}\) we have that

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log ({\mathbb {E}}[{\textrm{Crit}}_N(B)])=C_{p,q,\gamma }\nonumber \\{} & {} +\lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N^D-uI)|]du), \nonumber \\ \end{aligned}$$
(357)
$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log ({\mathbb {E}}[{\textrm{Crit}}_{N,0}(B)]) \end{aligned}$$
(358)
$$\begin{aligned}{} & {} \quad = C_{p,q,\gamma }+\lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N^D-uI)|I(H_N^D\ge uI)]du),\nonumber \\ \end{aligned}$$
(359)

where \(H^{D}_N\) is a symmetric random matrix with independent centered Gaussian entries with covariances for \(1\le i,j\le N-2\) satisfying

$$\begin{aligned} {\mathbb {E}}[(H_N^D)_{ij}^2]=\frac{1}{N} {\left\{ \begin{array}{ll} p^{-1}(p-1)(1+\delta _{ij});\;\;1\le i,j\le N_1-1\\ 1;\;\;1\le i\le N_1-1<j\le N-2\\ q^{-1}(q-1)(1+\delta _{ij});\;\; N_1-1< i,j\le N-2 \end{array}\right. }. \end{aligned}$$
(360)

Now, proceeding through Section 3 of [25] with \(H_N(u)\) replaced by \(H_N^D(u)\) and terms rescaled as necessary, their proofs give the following result.

Lemma 21

Fix \(p,q\ge 2\) and \(0<\gamma <1\). Then there exists a unique compactly supported probability measure \(\mu _{p,q,\gamma }\) on \({\mathbb {R}}\) with continuous bounded density, specified by the procedure in (13). Moreover, for nice \(B\subseteq {\mathbb {R}}\)

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N^D-uI)|]du)=\sup _{u\in B}\varTheta _{p,q,\gamma }(u), \end{aligned}$$
(361)
$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{1}{N}\log (\int _B e^{-Nu^2/2}{\mathbb {E}}[|\det (H_N^D-uI)|I(H_N^D-uI\ge 0)]du) \end{aligned}$$
(362)
$$\begin{aligned}{} & {} \quad =\sup _{u\in (-\infty ,-E_{p,q,\gamma ;\infty })\cap B}\varTheta _{p,q,\gamma }(u). \end{aligned}$$
(363)

Now, employing this and comparing the terms in Lemma 20 with equations (351) and (353), we may conclude (343).


Kivimae, P. The Ground State Energy and Concentration of Complexity in Spherical Bipartite Models. Commun. Math. Phys. 403, 37–81 (2023). https://doi.org/10.1007/s00220-023-04733-6
