1 Introduction

A textbook result, due to K. Oka, states that a domain of holomorphy \(\Omega \) in \(\mathbb C^{n}\), not equal to the whole space, is (Hartogs) pseudoconvex, i.e., the function \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic in \(\Omega \), where \(d_{\partial \Omega }\) denotes the Euclidean distance to \(\partial \Omega \).

The classical proof of Oka’s lemma (see, for example, [8], Chapter IX, Theorem D.4) is by contradiction. More precisely, the failure of \(-\log (d_{\partial \Omega }(z))\) to be plurisubharmonic yields a pathological family of holomorphic discs near a boundary point of \(\Omega \), which leads to a contradiction. In [14] the question was raised whether a direct proof is possible. The main step would then be to prove the result for smoothly bounded domains of holomorphy, which are well known to be Levi pseudoconvex. In this direction the following result was established:

Theorem 1

(Herbig-McNeal, [14]) Let \(\Omega \) be a smoothly bounded Levi pseudoconvex domain in \(\mathbb C^{n}\). Then, there is a neighborhood U of \(\partial \Omega \), such that \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic on \(\Omega \cap U\).

Of course, the main interest in Oka’s lemma, just as in Theorem 1, lies close to the boundary, as the signed distance function

$$\begin{aligned} \delta (z)={\left\{ \begin{array}{ll}-d_{\partial \Omega }(z),\ \ z\in \Omega ;\\ d_{\partial \Omega }(z),\ \ z\in \mathbb C^{n}\setminus \Omega \end{array}\right. } \end{aligned}$$
(1)

is, still for a smoothly bounded domain \(\Omega \), a defining function with excellent analytic properties. It is nevertheless interesting to obtain a direct proof covering plurisubharmonicity on the whole of \(\Omega \) rather than just close to the boundary. It should be pointed out that, in contrast to [14], non-smooth analysis has to be applied in such a project, as the function \(-\log (d_{\partial \Omega }(z))\) is never smooth on all of \(\Omega \) for a bounded \(\Omega \).

The aim of this note is to provide a direct proof of the full version of Oka’s lemma, that is the statement:

Theorem 2

(Oka) Let \(\Omega \subsetneq \mathbb C^{n}\) be a domain. Then, \(\Omega \) is pseudoconvex if and only if the function \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic on the whole domain \(\Omega \).

This extends the result of Herbig and McNeal. In fact, we show that the result is a simple consequence of the modern development of the theory of the distance function, suitably coupled with a recent removal-of-singularities theorem from [3]. Additionally, we hope to advertise these developments, which do not seem to be well known in the complex-analytic community.

The following result is undoubtedly known, yet we could not find a reference in the literature. We provide a proof, parallel to the proof of the main theorem, just to illustrate the method.

Theorem 3

Let \(\Omega \ne \mathbb R^{m}\) be any domain in \(\mathbb R^{m}\). Then, the function \(-\log (d_{\partial \Omega }(x))\) if \(m=1,2\) and \((d_{\partial \Omega }(x))^{2-m}\) if \(m>2\) is subharmonic on \(\Omega \). Moreover, if \(\Omega \) is smoothly bounded and \(m>2\) then \(-\log (d_{\partial \Omega }(x))\) is subharmonic in \(\Omega \) if the mean curvature H of the boundary satisfies \(H\ge \frac{-1}{(m-2) R}\) where R is the inradius of \(\Omega \), that is, the radius of the largest ball contained in \(\Omega \). The function \(-\log (d_{\partial \Omega }(x))\) is subharmonic near the boundary for any smoothly bounded \(\Omega \).

For \(m=1\) this boils down to the trivial fact that for \(a<x<b\) the function \(\max \{-\log (x-a),-\log (b-x)\}\) is convex. For \(m=2\) this result reflects the well-known fact that every planar domain is pseudoconvex.

For general domains \(\Omega \subsetneq \mathbb R^{m},\, m>2\) the function \(-\log (d_{\partial \Omega }(x))\) may fail to be subharmonic in the whole \(\Omega \). For example, if r is small (it is enough to assume \(0<r<\frac{m-2}{m}\)) and \(\Omega \) is the annular region

$$\begin{aligned} \Omega :=\{ x\in \mathbb R^{m}\ |\ r<\Vert x\Vert <1\}=B(0,1)\setminus \overline{B(0,r)} \end{aligned}$$

then \(d_{\partial \Omega }(x)=\Vert x\Vert -r\) for all x such that \(r<\Vert x\Vert <\frac{1+r}{2}\). One can check that

$$\begin{aligned} \Delta (-\log (\Vert x\Vert -r))=\frac{(m-1)r-(m-2)\Vert x\Vert }{\Vert x\Vert (\Vert x\Vert -r)^2} \end{aligned}$$

which is negative for all x such that \(\frac{m-1}{m-2}r<\Vert x\Vert <\frac{1+r}{2}\). In particular, since \(B(0,1)\setminus \overline{B(0,r)}= \{x\ |\ \Vert x \Vert ^d + (1+r-\Vert x \Vert )^d< 1+r^d\}\) for any \(d<2-m\), subharmonicity may fail even for domains which are smooth sublevel sets of smooth strongly subharmonic functions, see also Remark 2.
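The displayed Laplacian can be verified numerically. The sketch below (Python; the parameters \(m=4\), \(r=0.3\) and the radius \(\rho =0.55\) are illustrative choices, not taken from the text) compares a finite-difference Laplacian of \(-\log (\Vert x\Vert -r)\) with the closed form above and confirms that it is negative in the indicated range:

```python
import math

def laplacian(f, x, h=1e-4):
    # central finite-difference Laplacian of f at the point x
    m = len(x)
    fx = f(x)
    total = 0.0
    for i in range(m):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(xp) + f(xm) - 2.0 * fx) / h**2
    return total

m, r = 4, 0.3                      # illustrative choices with r < (m-2)/m
u = lambda x: -math.log(math.sqrt(sum(c * c for c in x)) - r)

rho = 0.55                         # (m-1)r/(m-2) = 0.45 < rho < (1+r)/2 = 0.65
x = [rho, 0.0, 0.0, 0.0]
closed_form = ((m - 1) * r - (m - 2) * rho) / (rho * (rho - r) ** 2)

assert abs(laplacian(u, x) - closed_form) < 1e-3
assert closed_form < 0             # -log(d) is not subharmonic at this point
```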

For pseudoconvex domains the inequality \(H\ge \frac{-1}{(m-2) R}\) may also fail (again, the planar domain \(B(0,1)\setminus \overline{B(0,r)}\) is an example) but \(-\log (d_{\partial \Omega }(x))\) is still subharmonic as a consequence of Oka’s lemma. We provide a direct proof of this, not appealing to the plurisubharmonicity of \(-\log (d_{\partial \Omega }(x))\). This follows from a geometric property of pseudoconvex domains which is of independent interest and which, we believe, has not been observed so far:

Theorem 4

Let \(\Omega \subset \mathbb C^{n}\) be a bounded pseudoconvex domain with \(C^2\) smooth boundary. Then at any point \(w\in \partial \Omega \) one can choose \(2n-2\) out of the \(2n-1\) principal curvatures of \(\partial \Omega \) so that their sum

$$\begin{aligned} \kappa _1+\cdots +\kappa _{j-1}+\kappa _{j+1}+\cdots +\kappa _{2n-1} \end{aligned}$$

is non-negative. The sum is positive unless \(\partial \Omega \) is Levi flat at w. Moreover, at least \(n-1\) of the principal curvatures are non-negative (positive if \(\partial \Omega \) is strongly pseudoconvex at w).

We remark that on smoothly bounded pseudoconvex domains it is not possible to bound from below, and hence to control, the sum of all \(2n-1\) principal curvatures, nor any single principal curvature, as the above example demonstrates.

It is known that a stronger property than in Theorem 3, namely that \(-d_{\partial \Omega }(x)\) is subharmonic outside a prescribed compact singular subset of \(\Omega \), called the cut locus \(\Sigma \) below, is equivalent to the so-called mean convexity of a smoothly bounded domain \(\Omega \), see [9, 18, 24], a result that can be traced back to Gromov, see [12], pp. 18-19. The mean convexity of a domain is the property that the mean curvature of the boundary \(\partial \Omega \), considered as a hypersurface, is non-negative at each boundary point. This equivalence is utilized in [18] and [24] to characterize the domains with \(C^2\) boundary on which Hardy’s inequality holds with the optimal constant. Furthermore, [9, 18], and [24] use distribution theory to prove that mean convexity is equivalent to the non-negativity of \(\Delta (-d_{\partial \Omega }(x))\) in the distributional sense, which in turn is known to be equivalent to the subharmonicity of \(-d_{\partial \Omega }(x)\). For more on the notion of mean convexity, its significance in geometry and applications, see [1, 21], and for an introductory-level exposition see [12].

We observe that the same argument as in the proof of our main theorem allows one to restate the equivalence theorem of [9, 18], and [24] in a more transparent way, without using distribution theory:

Theorem 5

A smoothly bounded domain \(\Omega \subset \mathbb R^{m}\) is mean convex if and only if \(-d_{\partial \Omega }(x)\) is subharmonic throughout \(\Omega \).

It follows that, by Theorem 5, one can meaningfully define mean convexity for not necessarily smooth domains by the property that \(-d_{\partial \Omega }(x)\) is subharmonic or, equivalently, that \(\Omega \) can be exhausted by mean convex domains with smooth boundary.

2 Preliminaries

In this section we gather the notions and basic results used in our proof of Theorem 2.

1. Complex analytic notions. Below we recall the main notions related to complex convexity that will be utilized later on. For more details we refer to [8] or [16].

Definition 1

A domain \(\Omega \subset \mathbb C^{n}\) is said to be a domain of holomorphy if the following holds: there are no open sets \(\Omega _1,\Omega _2\) such that \(\Omega _2\) is connected, \(\Omega _2\not \subset \Omega \), \(\varnothing \ne \Omega _1\subset \Omega _2\cap \Omega \) and for any holomorphic function u on \(\Omega \) there is a holomorphic function \(u_2\) on \(\Omega _2\), such that \(u=u_2\) on \(\Omega _1\).

It is a classical result in complex analysis (see [16]) that the above definition is equivalent to holomorphic convexity:

Definition 2

A domain \(\Omega \subset \mathbb C^{n}\) is said to be holomorphically convex if for any compact set \(K\subset \Omega \) its holomorphic hull

$$\begin{aligned} {\hat{K}}:=\left\{ z\in \Omega \ |\ \forall f\in {\mathcal {O}}(\Omega )\ |f(z)|\le \sup _K|f|\right\} \end{aligned}$$

is also compact in \(\Omega \).

If the boundary of \(\Omega \) is at least \(C^2\) smooth the notion of Levi pseudoconvexity can be introduced:

Definition 3

A smoothly bounded domain \(\Omega \subset \mathbb C^{n}\) is Levi pseudoconvex if for any \(p\in \partial \Omega \) and any local defining function \(\rho \) of \(\partial \Omega \) near p the inequality

$$\begin{aligned} {\mathcal {L}}(\rho (p))(X,X):=\sum _{j,k=1}^n\frac{\partial ^2\rho }{\partial z_j\partial {\bar{z}}_k}(p)X_j{\overline{X}}_k\ge 0 \end{aligned}$$

holds for any complex tangent vector \(X=\sum _{j=1}^nX_j\frac{\partial }{\partial z_j}\in T_{p}^{\mathbb C}\partial \Omega \). The expression \({\mathcal {L}}(\rho (p))(X,X)\) is the Levi form of \(\rho \) at the boundary point p evaluated on the vector X.

It is a simple exercise to check that the inequality in the definition above does not depend on the choice of the local defining function. It is also invariant under a holomorphic change of variables near p.

A classical result in complex analysis (see [8], Chapter IX, Section B) is that domains of holomorphy in \(\mathbb C^{n}\) with \(C^2\) smooth boundaries are Levi pseudoconvex.

2. The distance function. The distance function and its signed version are classical objects of study. Unfortunately, the basic facts of the associated theory are scattered in the literature; we refer to [2, 5, 6, 13, 15, 17, 19, 20, 22] for a (necessarily incomplete) list of old and new contributions. As the distance function does not depend on the complex structure, we identify \(\mathbb C^{n}\) with \(\mathbb R^{m}\) for \(m=2n\) in this section and recall the relevant definitions and facts in the real setting:

Definition 4

Let \(\Omega \subset \mathbb R^{m}, \Omega \ne \mathbb R^{m}\) be a domain. The set of points \(x\in \Omega \) such that there is more than one point on \(\partial \Omega \) realizing the distance from x to \(\partial \Omega \) is called the medial axis of \(\partial \Omega \). We denote this set by \(\Gamma \).

As \(d_{\partial \Omega }(x)\) is 1-Lipschitz and differentiable on \(\Omega \setminus \Gamma \) (see [6]; at such points one has \(\nabla d_{\partial \Omega }(x)=\frac{x-w}{\Vert x-w\Vert }\), with w being the unique closest boundary point), it follows that \(\Gamma \) has Lebesgue measure zero. Much more is in fact true: P. Erdős proved in [5] that \(\Gamma \) is \((m-1)\)-rectifiable (see also [13] for a modern treatment). Topologically, \(\Gamma \) need not be a closed set, as the example of a planar ellipse shows.
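The gradient formula can be illustrated on the simplest example where everything is explicit. In the sketch below (Python; the choice of the unit ball, where \(d_{\partial \Omega }(x)=1-\Vert x\Vert \) and \(w=x/\Vert x\Vert \), is an assumption made only for this check) a numerical gradient reproduces \(\frac{x-w}{\Vert x-w\Vert }\):

```python
import math

def grad(f, x, h=1e-6):
    # central finite-difference gradient of f at x
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

# distance to the boundary for the unit ball Omega = B(0,1)
d = lambda x: 1.0 - math.sqrt(sum(c * c for c in x))

x = [0.3, -0.2, 0.4]
nx = math.sqrt(sum(c * c for c in x))
w = [c / nx for c in x]                          # unique closest boundary point
nw = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, w)))
expected = [(a - b) / nw for a, b in zip(x, w)]  # (x - w)/||x - w||

g = grad(d, x)
assert all(abs(a - b) < 1e-6 for a, b in zip(g, expected))
```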

Definition 5

The Euclidean closure of \(\Gamma \) in \(\Omega \) is called the cut locus. We denote the cut locus by \(\Sigma \).

A beautiful example of Mantegazza and Mennucci (see [22]) shows that \(\Sigma \) can have positive Lebesgue measure even if \(\partial \Omega \) is \(C^{1,1}\) smooth. The crucial fact we shall need is that this does not happen if we add a little to the boundary regularity:

Lemma 6

(Crasta-Malusa, [2]) Let \(\Omega \subset \mathbb R^{m}\) be a bounded domain with \(C^2\)-smooth boundary. Then \(\Sigma \) is of zero Lebesgue measure.

Remark 1

If \(C^3\) boundary regularity is assumed, Li and Nirenberg have proved that the \((m-1)\)-dimensional Hausdorff measure of \(\Sigma \) is finite, which is optimal (see [20]). In our proof we can use either result.

Under the \(C^2\) boundary assumption, \(\Sigma \) is also known to be a strong deformation retract of \(\Omega \), see [26], which yields that \(\Sigma \) encodes many topological properties of \(\Omega \).

Our main interest towards \(\Gamma \) and \(\Sigma \) lies in the regularity properties of \(d_{\partial \Omega }\) in their complement. The following result is classical.

Lemma 7

[6, 17, 19] If \(\partial \Omega \) is \(C^{1,1}\) smooth and bounded then \(\Gamma \) is relatively compact in \(\Omega \).

Geometrically, the lemma above follows from the fact that such domains satisfy the interior ball condition. In particular, there is a neighborhood U of \(\partial \Omega \) (the very same neighborhood used in Theorem 1) where \(d_{\partial \Omega }(z)\) is differentiable.

On the set \(\Omega \setminus U\) it remains to observe that \(d_{\partial \Omega }\) is differentiable on \(\Omega \setminus \Gamma \) and continuously differentiable on \(\Omega \setminus \Sigma \), as there (and not only on \(\Omega \cap U\)) each point has a unique closest point on \(\partial \Omega \). Further regularity of \(d_{\partial \Omega }\) follows from the classical lemma of Gilbarg and Trudinger [10], see also [17]. We formulate the relevant result in a way suitable for our needs. The computation can be found in Gilbarg–Trudinger [10, Lemma 14.17] for points near the boundary and in [18] in full generality.

Lemma 8

Let \(\Omega \subset \mathbb R^{m}\) be a bounded domain with \(C^k\)-smooth boundary, \(k\ge 2\). Then \(d_{\partial \Omega }\) is \(C^k\) smooth on \({\overline{\Omega }}\setminus \Sigma \). Choose local real orthogonal coordinates \((t_1,\ldots ,t_{m})=(t',t_{m})\), called principal coordinates, such that \(\frac{\partial }{\partial t_{m}}\) is the inward normal at the origin, near the origin \(\partial \Omega \) is the graph \((t',h(t'))\) with \(h\in C^k,\ h(0')=0,\ \nabla h(0')=0'\), and

$$\begin{aligned} \frac{\partial ^2 h}{\partial t_i\partial t_j}(0')=\kappa _i\delta _{ij},\quad i,j=1,\cdots ,m-1 \end{aligned}$$

(here \(\kappa _j\)’s are the principal curvatures of \(\partial \Omega \) at the origin) the real Hessian of \(d_{\partial \Omega }\) at \((0',t_{m})\) reads

$$\begin{aligned} \frac{\partial ^2 d_{\partial \Omega }}{\partial t_i\partial t_j}(0',t_{m})=\begin{pmatrix} \frac{-\kappa _{i}}{1-t_{m}\kappa _{i}}\delta _{ij} & \begin{matrix} 0\\ \vdots \end{matrix}\\ \begin{matrix} 0 & \cdots \end{matrix} & 0 \end{pmatrix}_{i,j=1}^{m}. \end{aligned}$$

We remark that the formula holds along the ray spanned by \(\frac{\partial }{\partial t_{m}}\) as long as \((0',0)\) remains the unique closest boundary point. Another remark is that the denominators \(1-t_{m}\kappa _{i}\) remain positive whenever the formula holds; in particular, they provide a bound on the location of the nearest point of \(\Sigma \) along the ray.
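For the unit ball (a test case chosen only for this illustration: there \(d_{\partial \Omega }(x)=1-\Vert x\Vert \) and all principal curvatures equal 1) the Hessian formula of Lemma 8 can be checked by finite differences; the tangential diagonal entries are \(-\kappa _i/(1-t_{m}\kappa _i)\) and the entry in the normal direction vanishes:

```python
import math

def hessian(f, x, h=1e-4):
    # central finite-difference Hessian of f at x
    m = len(x)
    H = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            acc = 0.0
            for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                y = list(x)
                y[i] += si * h
                y[j] += sj * h
                acc += si * sj * f(y)
            H[i][j] = acc / (4.0 * h * h)
    return H

m, t = 3, 0.25
d = lambda x: 1.0 - math.sqrt(sum(c * c for c in x))  # distance for the unit ball
x0 = [0.0] * (m - 1) + [1.0 - t]    # the point (0', 1-t), at distance t from the boundary

H = hessian(d, x0)
kappa = 1.0                          # every principal curvature of the unit sphere
for i in range(m - 1):
    assert abs(H[i][i] - (-kappa / (1.0 - t * kappa))) < 1e-3
assert abs(H[m - 1][m - 1]) < 1e-3   # the Hessian vanishes in the normal direction
```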

3. Generalized Laplacians and Removal of Singularities. Recall the definition of the generalized upper Laplace parameter (or the upper Privalov operator as it is sometimes called):

Definition 6

Let \(\Omega \subset \mathbb R^{m}\) be a domain and let \(u\ne -\infty \) be an upper semicontinuous function on \(\Omega \). For \(x\in \Omega \) and r such that the ball \(\overline{B(x,r)}\subset \Omega \) one defines the integral mean of u over the ball by

$$\begin{aligned} M_u(x,r):=\frac{\int _{B(x,r)} u d\lambda ^{m} }{\lambda ^{m}(B(x,r))}\in [-\infty ,\infty ), \end{aligned}$$

(with \(\lambda ^{m}\) denoting the m-dimensional Lebesgue measure).

Then, the generalized upper Laplace parameter is defined by

$$\begin{aligned} {\overline{\Delta }} u(x):=2(m+2)\limsup _{r\rightarrow 0^+}\frac{M_u(x,r)-u(x)}{r^2} \end{aligned}$$

for any \(x\in \Omega \setminus \{u=-\infty \}\).

For a \(C^2\) function u, \({\overline{\Delta }} u(x)\) is the classical Laplacian of u at x, hence \({\overline{\Delta }}\) is a kind of generalized Laplacian. A modern exposition of these matters, with some recent improvements, is given in [23, 25]. We will need two results about \({\overline{\Delta }}\) which can be found there. The first relates the subharmonicity of u to the values of the generalized upper Laplace parameter.
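The consistency of \({\overline{\Delta }}\) with the classical Laplacian can be checked numerically. The sketch below (Python; the function \(u(x)=\Vert x\Vert ^2\) in \(\mathbb R^{2}\), whose classical Laplacian is 4, and the grid resolution are illustrative choices) approximates the ball mean \(M_u(x,r)\) by a midpoint rule and recovers \(2(m+2)(M_u(x,r)-u(x))/r^2\approx 4\):

```python
def ball_mean(u, x, r, n=400):
    # midpoint-rule average of u over the disc B(x, r) in R^2
    s = 0.0
    cnt = 0
    for i in range(n):
        for j in range(n):
            p = (x[0] - r + (2 * i + 1) * r / n,
                 x[1] - r + (2 * j + 1) * r / n)
            if (p[0] - x[0]) ** 2 + (p[1] - x[1]) ** 2 <= r * r:
                s += u(p)
                cnt += 1
    return s / cnt

m = 2
u = lambda p: p[0] ** 2 + p[1] ** 2     # classical Laplacian is 4 in R^2
x, r = (0.7, -0.3), 0.05

quotient = 2 * (m + 2) * (ball_mean(u, x, r) - u(x)) / r**2
assert abs(quotient - 4.0) < 0.2
```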

Theorem 9

(Blaschke-Privalov criterion) An upper semicontinuous function \(u\ne -\infty \) on \(\Omega \) is subharmonic if and only if \({\overline{\Delta }} u(x)\ge 0\) for any \(x\in \Omega \setminus \{u=-\infty \}\).

The second one, which is a stronger result, reads:

Theorem 10

(Privalov) An upper semicontinuous function \(u\ne -\infty \) on \(\Omega \) is subharmonic if and only if \({\overline{\Delta }} u(x)\ge 0\) almost everywhere in \(\Omega \) with respect to the m-dimensional Lebesgue measure \(\lambda ^{m}\) and \({\overline{\Delta }} u(x)> -\infty \) for any \(x\in \Omega \setminus (\{u=-\infty \}\cup P)\), where P is a polar set.

We will need the following simple observation:

Lemma 11

Let \(\Omega \subset \mathbb R^{m}\) be a domain and let \({\mathcal {F}}\) be a family of upper semicontinuous functions on \(\Omega \) which are locally uniformly bounded above and such that for any \(\varphi \in {\mathcal {F}}\) one has \({\overline{\Delta }} \varphi \ge \psi \), where \(\psi \) is a fixed lower semicontinuous function on \(\Omega \). Then, the upper semicontinuous regularization of the supremum of the family, defined as

$$\begin{aligned} \phi ^{*}(x):=\limsup _{\Omega \ni y\rightarrow x} \phi (y),\quad \text { where } \phi (y):=\sup _{\varphi \in {\mathcal {F}}} \varphi (y) \end{aligned}$$

also satisfies

$$\begin{aligned} {\overline{\Delta }} \phi ^{*}\ge \psi . \end{aligned}$$

Proof

Fix \(x\in \Omega \) and \(\varepsilon >0\) such that \(\overline{B(x,\varepsilon )}\subset \Omega \). Let \(C_{\varepsilon }:=\min _{y\in \overline{B(x,\varepsilon )}}\psi (y)\). Take any \(\varphi \in {\mathcal {F}}\). As \({\overline{\Delta }}\left( \varphi (y)-\frac{C_{\varepsilon }}{2m}\Vert y\Vert ^2\right) \ge 0\) throughout \(B(x,\varepsilon )\) (note that \(\Delta \left( \frac{C_{\varepsilon }}{2m}\Vert y\Vert ^2\right) =C_{\varepsilon }\)), the functions \(\varphi (y)-\frac{C_{\varepsilon }}{2m}\Vert y\Vert ^2\) are subharmonic in \(B(x,\varepsilon )\), by the theorem of Blaschke-Privalov. But then \(\phi ^{*}(y)-\frac{C_{\varepsilon }}{2m}\Vert y\Vert ^2\) is also subharmonic and again by the Blaschke-Privalov theorem one has \({\overline{\Delta }} \phi ^{*}\ge C_{\varepsilon }\) in \(B(x,\varepsilon )\). Finally,

$$\begin{aligned} {\overline{\Delta }} \phi ^{*}(x)\ge \lim _{\varepsilon \rightarrow 0^{+}} C_{\varepsilon }=\lim _{\varepsilon \rightarrow 0^{+}}\min _{y\in \overline{B(x,\varepsilon )}}\psi (y)=\psi (x), \end{aligned}$$

by the lower semicontinuity of \(\psi \). \(\square \)

The following theorem from [3] will be crucial:

Theorem 12

Let \(\Omega \subset \mathbb C^{n}\) be a domain and \(E\subset \Omega \) be a closed set of zero Lebesgue measure. Let u be a subharmonic function on \(\Omega \) which is plurisubharmonic on \(\Omega \setminus E\). Then u is plurisubharmonic in the whole domain \(\Omega \).

4. Principal curvatures and the mean curvature. The notions and formulas in this part are standard in differential geometry, although in the vast majority of the references they are presented only for surfaces in \(\mathbb R^3\). We feel, however, that for people from other areas it is far from obvious why the mean curvature is given by an expression as in (6) or (7) below. For the benefit of the reader we provide the details. The computations are also helpful to understand the geometric background of the problem.

For a general \(C^2\) hypersurface M in \(\mathbb R^{m}\) the principal curvatures at points of M are locally defined \((m-1)\)-tuples of scalars, whose signs depend on the choice of the “inside” and “outside” of the hypersurface, that is, on the direction of the normal vector. If M is orientable (strictly speaking, coorientable) then the notion can be made global.

Definition 7

The principal curvatures \(\kappa _1, \kappa _2,\ldots ,\kappa _{m-1}\) at a point \(w\in M\) of an \((m-1)\)-dimensional hypersurface M in \(\mathbb R^{m}\) are the eigenvalues of the second fundamental form \(\mathrm {I\!I}(w)\) of M at w, that is, of the \((m-1)\times (m-1)\) matrix

$$\begin{aligned} A(w):=\begin{pmatrix} \mathrm {I\!I}(w)(T_1,T_1) & \cdots & \mathrm {I\!I}(w)(T_1,T_{m-1}) \\ \vdots & \ddots & \vdots \\ \mathrm {I\!I}(w)(T_{m-1},T_1) & \cdots & \mathrm {I\!I}(w)(T_{m-1},T_{m-1}) \end{pmatrix}, \end{aligned}$$

where \(T_{j},\, j=1,\ldots ,m-1\) form an orthonormal basis of the tangent space \(T_wM\) (orthonormality is with respect to the inner product on \(T_wM\), which is the restriction of the standard inner product on \(T_w\mathbb R^{m}\cong \mathbb R^{m}\)), \(\mathrm {I\!I}(w)(X,Y)\) is the bilinear form \(\mathrm {I\!I}(w)(X,Y)=-\langle d \nu (w) (X),Y\rangle _{T_wM}\), and \(\nu \) is the Gauss map \(w\rightarrow \nu (w)\) which maps the point \(w\in M\) to the unit normal vector to M at w directed “inside”.

Alternatively, \(\kappa _1, \kappa _2,\ldots ,\kappa _{m-1}\) are given as the eigenvalues of the matrix

$$\begin{aligned} B(w):=\begin{pmatrix} \textrm{I}_{11} & \cdots & \textrm{I}_{1(m-1)}\\ \vdots & \ddots & \vdots \\ \textrm{I}_{(m-1)1} & \cdots & \textrm{I}_{(m-1)(m-1)} \end{pmatrix}^{-1}\begin{pmatrix} \mathrm {I\!I}_{11} & \cdots & \mathrm {I\!I}_{1(m-1)}\\ \vdots & \ddots & \vdots \\ \mathrm {I\!I}_{(m-1)1} & \cdots & \mathrm {I\!I}_{(m-1)(m-1)} \end{pmatrix}, \end{aligned}$$

where \( \textrm{I}_{jk}\) and \(\mathrm {I\!I}_{jk}\) are the coefficients of the first and second fundamental forms respectively in a given, not necessarily orthonormal, basis of the tangent space at w.

The principal directions are tangent directions of M at w given by the eigenvectors corresponding to \(\kappa _1,\ldots ,\kappa _{m-1}\).

The coefficients of the first fundamental form in the not necessarily orthonormal basis \(e_1,\ldots , e_{m-1}\) of \(T_wM\) are given by \(\textrm{I}_{jk}=\langle e_j,e_k\rangle _{T_wM}\). This is just the Gram matrix of the basis. So, the first fundamental form can be thought of as the Riemannian metric induced on the hypersurface by the Euclidean metric of \(\mathbb R^{m}\). In the same basis the coefficients of the second fundamental form are given by \(\mathrm {I\!I}_{jk}=\mathrm {I\!I}(e_j,e_k)\).

The negative of the differential of the Gauss map, that is \(-d\nu \), is called the Weingarten map or the shape operator. Note that formally \(-d\nu \) sends \(T_w M\) to the tangent space of the unit sphere at \(\nu (w)\). It is, however, parallel to, and hence can be identified with \(T_wM\). Thus, at \(w\in M\) the shape operator can be thought of as a linear transformation \(S:T_wM\rightarrow T_wM\). The formula \(\mathrm {I\!I}(X,Y)=\textrm{I}(S(X),Y)\) is well-known and sometimes useful.

If the hypersurface is locally parameterized by

$$\begin{aligned} \mathbb R^{m-1}\supset U\ni (t_1,\ldots ,t_{m-1})=t\rightarrow \varphi (t)=(\varphi _{1}(t),\ldots ,\varphi _{m}(t))\in M\subset \mathbb R^{m} \end{aligned}$$

in some local coordinate system \((t_1,\ldots ,t_{m-1})\), where we assume that the Jacobian of \(\varphi \) is of maximal rank at \(\varphi ^{-1}(w)\), then the Gauss map is \(\nu (w)=\nu (\varphi (t))=(n_1(w),\ldots ,n_{m}(w))^{T}\) where

$$\begin{aligned} n_k(w)=\frac{(-1)^{k+m}\det \begin{pmatrix} \frac{\partial \varphi _{1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{1}}{\partial t_{m-1}}\\ \vdots & \ddots & \vdots \\ \frac{\partial \varphi _{k-1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{k-1}}{\partial t_{m-1}}\\ \frac{\partial \varphi _{k+1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{k+1}}{\partial t_{m-1}}\\ \vdots & \ddots & \vdots \\ \frac{\partial \varphi _{m}}{\partial t_1} & \cdots & \frac{\partial \varphi _{m}}{\partial t_{m-1}} \end{pmatrix}}{\sqrt{{\displaystyle \sum _{j=1}^{m}}\left( \det \begin{pmatrix} \frac{\partial \varphi _{1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{1}}{\partial t_{m-1}}\\ \vdots & \ddots & \vdots \\ \frac{\partial \varphi _{j-1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{j-1}}{\partial t_{m-1}}\\ \frac{\partial \varphi _{j+1}}{\partial t_1} & \cdots & \frac{\partial \varphi _{j+1}}{\partial t_{m-1}}\\ \vdots & \ddots & \vdots \\ \frac{\partial \varphi _{m}}{\partial t_1} & \cdots & \frac{\partial \varphi _{m}}{\partial t_{m-1}} \end{pmatrix}\right) ^2}},\quad k=1,\ldots ,m. \end{aligned}$$

Thus, computing the vector-valued differential form \(d\nu =(d n_1(w),\ldots , dn_{m}(w))^{T}\), and hence A(w) or B(w), is very involved if \(m>3\).

What is important is that the principal curvatures do not depend on the choice of a particular local parameterization. If the parameterization is that of a graph of a \(C^2\) function \(\psi \):

$$\begin{aligned} \mathbb R^{m-1}\supset U\ni (t_1,\ldots ,t_{m-1})=t\rightarrow \varphi (t)=(t_1,\ldots ,t_{m-1},\psi (t_1,\ldots ,t_{m-1}))\in M\subset \mathbb R^{m}, \end{aligned}$$

so that the “inside” is the epigraph (or “above”), the computations simplify significantly.

The Gauss map then reads:

$$\begin{aligned} \nu (w)=\left( \frac{-\frac{\partial \psi }{\partial t_1}}{\sqrt{1+\Vert \nabla \psi \Vert ^2}},\cdots ,\frac{-\frac{\partial \psi }{\partial t_{m-1}}}{\sqrt{1+\Vert \nabla \psi \Vert ^2}},\frac{1}{\sqrt{1+\Vert \nabla \psi \Vert ^2}}\right) ^{T}. \end{aligned}$$

(we use column notation for vectors and “T” denotes transposition).

The second fundamental form reads:

$$\begin{aligned} \mathrm {I\!I}(w)(X,Y)&=-\langle d \nu (w) (X),Y\rangle \\ &=\sum _{i=1}^{m-1}\sum _{j=1}^{m}\frac{\frac{\partial ^2 \psi }{\partial t_{j}\partial t_{i}}}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{i}- \sum _{i=1}^{m-1}\sum _{j=1}^{m}\frac{\frac{\partial \sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}{\partial t_{j}} \frac{\partial \psi }{\partial t_{i}}}{{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{i}\\ &\quad -\sum _{j=1}^{m}\frac{\frac{\partial 1 }{\partial t_{j}}}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{m}+ \sum _{j=1}^{m}\frac{\frac{\partial \sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}{\partial t_{j}} }{{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{m}. \end{aligned}$$

The third sum vanishes and the second and fourth add to zero, because

$$\begin{aligned} -\sum _{i=1}^{m-1}\sum _{j=1}^{m}\frac{\frac{\partial \sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}{\partial t_{j}} \frac{\partial \psi }{\partial t_{i}}}{{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{i} + \sum _{j=1}^{m}\frac{\frac{\partial \sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}{\partial t_{j}} }{{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j}Y_{m} \end{aligned}$$
$$\begin{aligned} =\left( \sum _{j=1}^{m} \frac{\frac{\partial \sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}{\partial t_{j}} }{{1+ \Vert \nabla \psi (t) \Vert ^2}} X_{j} \right) \left\langle \left( -\frac{\partial \psi }{\partial t_{1}},\ldots ,-\frac{\partial \psi }{\partial t_{m-1}},1\right) ^{T} ,Y\right\rangle =0, \end{aligned}$$

as Y is a tangent vector. Finally,

$$\begin{aligned} \mathrm {I\!I}(w)(X,Y)=\frac{{\mathcal {H}} (\psi (t))({\tilde{X}},\tilde{Y})}{\sqrt{{1+ \Vert \nabla \psi (t) \Vert ^2}}}, \end{aligned}$$

where \(\tilde{X}=(X_1,\ldots ,X_{m-1})\) for \(X=(X_1,\ldots ,X_m)\), and similarly for \({\tilde{Y}}\). Above, \(\frac{\partial \psi }{\partial t_{m}}\) is understood as zero, \({\mathcal {H}} (\psi (t))\) is the real Hessian of \(\psi \) at \(t=\varphi ^{-1}(w)\), and the bilinear form \({\mathcal {H}} (\psi (t))({\tilde{X}},{\tilde{Y}})\) is identical with the expression \({\tilde{Y}}^{T}{\mathcal {H}} (\psi (t)){\tilde{X}}\). Observe that in the basis \(e_1=\frac{\partial \varphi }{\partial t_1},\ldots ,e_{m-1}=\frac{\partial \varphi }{\partial t_{m-1}}\) one has \({\tilde{e}}_1=(1,0,\ldots , 0)^{T},\ldots , \tilde{e}_{m-1}=(0,\ldots ,0,1)^{T}\), so this is the standard basis of \(\mathbb R^{m-1}\). Hence,

$$\begin{aligned} \begin{pmatrix} \mathrm {I\!I}_{11} & \cdots & \mathrm {I\!I}_{1(m-1)}\\ \vdots & \ddots & \vdots \\ \mathrm {I\!I}_{(m-1)1} & \cdots & \mathrm {I\!I}_{(m-1)(m-1)} \end{pmatrix}&=\begin{pmatrix} \frac{{\mathcal {H}} (\psi (t))({\tilde{e}}_1,{\tilde{e}}_1)}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}} & \cdots & \frac{{\mathcal {H}} (\psi (t))({\tilde{e}}_1,{\tilde{e}}_{m-1})}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}\\ \vdots & \ddots & \vdots \\ \frac{{\mathcal {H}} (\psi (t))({\tilde{e}}_{m-1},{\tilde{e}}_1)}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}} & \cdots & \frac{{\mathcal {H}} (\psi (t))({\tilde{e}}_{m-1},{\tilde{e}}_{m-1})}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}} \end{pmatrix}\\ &=\frac{{\mathcal {H}} (\psi (t))}{\sqrt{1+ \Vert \nabla \psi (t) \Vert ^2}}. \end{aligned}$$
(2)

In the same basis \(\frac{\partial \varphi }{\partial t_1},\cdots ,\frac{\partial \varphi }{\partial t_{m-1}}\) the matrix of the first fundamental form has coefficients

$$\begin{aligned} \textrm{I}_{ij}=\left\langle \left( \delta _{i1},\ldots ,\delta _{i(m-1)} ,\frac{\partial \psi }{\partial t_i}\right) ^{T},\left( \delta _{j1},\ldots ,\delta _{j(m-1)} ,\frac{\partial \psi }{\partial t_j}\right) ^{T}\right\rangle =\delta _{ij}+\frac{\partial \psi }{\partial t_i}\frac{\partial \psi }{\partial t_j} \end{aligned}$$

and hence its inverse satisfies:

$$\begin{aligned} \left( Id_{(m-1)\times (m-1)}+\nabla \psi \nabla \psi ^{T}\right) ^{-1}=Id_{(m-1)\times (m-1)}-\frac{\nabla \psi \nabla \psi ^{T}}{1+\Vert \nabla \psi \Vert ^2}, \end{aligned}$$
(3)

by the simplest form of the Sherman-Morrison formula for the matrix inverse.
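The rank-one identity invoked here is easily verified by direct multiplication. In the sketch below (Python; the vector v is an arbitrary illustrative choice) the product of \(Id+vv^{T}\) with the claimed inverse is the identity up to rounding:

```python
n = 3
v = [0.5, -1.0, 2.0]                 # an arbitrary illustrative vector
norm2 = sum(c * c for c in v)

eye = lambda i, j: 1.0 if i == j else 0.0
# A = Id + v v^T and its claimed Sherman-Morrison inverse
A = [[eye(i, j) + v[i] * v[j] for j in range(n)] for i in range(n)]
Ainv = [[eye(i, j) - v[i] * v[j] / (1.0 + norm2) for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        entry = sum(A[i][k] * Ainv[k][j] for k in range(n))
        assert abs(entry - eye(i, j)) < 1e-12   # A @ Ainv is the identity
```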

If the hypersurface is locally (in some neighborhood V of w) implicitly given as the set of solutions

$$\begin{aligned} M\supset \{(x_1,\ldots ,x_{m})= x\in V\subset \mathbb R^{m} | F(x)=0\} \end{aligned}$$

of some equation \(F=0\), with \(w\in M\) and \(\nabla F(w)\ne 0\) then the Gauss map is:

$$\begin{aligned} \nu (w)=-\frac{\nabla F(w)}{\Vert \nabla F(w)\Vert }. \end{aligned}$$

Here we assume that the “inside” of the hypersurface is where \(F<0\). The second fundamental form reads:

$$\begin{aligned} \mathrm {I\!I}(w)(X,Y)&=-\langle d \nu (w) (X),Y\rangle =\sum _{i=1}^{m}\sum _{j=1}^{m}\frac{\frac{\partial ^2 F }{\partial x_{j}\partial x_{i}}}{\Vert \nabla F(w) \Vert } X_{j}Y_{i}- \sum _{i=1}^{m}\sum _{j=1}^{m}\frac{\frac{\partial \Vert \nabla F\Vert }{\partial x_{j}} \frac{\partial F }{\partial x_{i}}}{\Vert \nabla F(w) \Vert ^2} X_{j}Y_{i}\\ &=\frac{{\mathcal {H}}(F(w))(X,Y)}{\Vert \nabla F(w)\Vert } \end{aligned}$$
(4)

because

$$\begin{aligned} \sum _{i=1}^{m}\sum _{j=1}^{m}\frac{\frac{\partial \Vert \nabla F\Vert }{\partial x_{j}} \frac{\partial F }{\partial x_{i}}}{\Vert \nabla F(w) \Vert ^2} X_{j}Y_{i}=\left( \sum _{j=1}^{m}\frac{\frac{\partial \Vert \nabla F\Vert }{\partial x_{j}} X_{j}}{\Vert \nabla F(w) \Vert ^2}\right) \langle \nabla F(w),Y\rangle =0, \end{aligned}$$

as Y is a tangent vector.
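For the sphere \(F(x)=\Vert x\Vert ^2-R^2\) (a convenient test case, not taken from the text) the formula gives \(\mathrm {I\!I}(w)(X,X)={\mathcal {H}}(F(w))(X,X)/\Vert \nabla F(w)\Vert =2\Vert X\Vert ^2/(2R)\), so every unit tangent vector sees the curvature \(1/R\); the sketch below confirms this with a finite-difference Hessian:

```python
import math

R = 2.0
F = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2 - R * R   # F < 0 inside the ball

def hessian_form(f, x, X, h=1e-4):
    # finite-difference value of the bilinear form H(f)(X, X) at x
    xp = [a + h * b for a, b in zip(x, X)]
    xm = [a - h * b for a, b in zip(x, X)]
    return (f(xp) + f(xm) - 2.0 * f(x)) / h**2

w = [R * 2 / 3, R * 2 / 3, R / 3]                # a boundary point: ||w|| = R
grad_norm = 2.0 * R                               # ||grad F(w)|| = 2||w|| = 2R

X = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]    # unit tangent vector: <w, X> = 0
II = hessian_form(F, w, X) / grad_norm

assert abs(II - 1.0 / R) < 1e-5   # the curvature of the sphere of radius R
```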

To compute the coefficients of the first fundamental form we assume \(\frac{\partial F}{\partial x_m}\ne 0\) and solve \(F=0\) for the last variable, that is, we find \(\psi \) such that \(F(x_1,\ldots ,x_{m-1},\psi (x_1,\ldots ,x_{m-1}))=0\). The implicit function theorem gives \(\frac{\partial \psi }{\partial x_j}=-\frac{\frac{\partial F}{\partial x_j}}{ \frac{\partial F}{\partial x_m}}\). Thus, as above, the coefficients of the first fundamental form in the basis \(\frac{\partial \varphi }{\partial t_1},\ldots ,\frac{\partial \varphi }{\partial t_{m-1}}\) are

$$\begin{aligned}{} & {} \textrm{I}_{ij}=\left\langle \left( \delta _{i1},\ldots ,\delta _{i(m-1)} ,\frac{\partial \psi }{\partial t_i}\right) ^{T},\left( \delta _{j1},\ldots ,\delta _{j(m-1)} ,\frac{\partial \psi }{\partial t_j}\right) ^{T}\right\rangle \nonumber \\{} & {} =\delta _{ij}+\frac{\partial \psi }{\partial t_i}\frac{\partial \psi }{\partial t_j}=\delta _{ij}+\frac{\frac{\partial F}{\partial x_i}}{ \frac{\partial F}{\partial x_m}}\frac{\frac{\partial F}{\partial x_j}}{ \frac{\partial F}{\partial x_m}} \end{aligned}$$

and hence its inverse satisfies

$$\begin{aligned} \left( Id_{(m-1)\times (m-1)}+\nabla \psi \nabla \psi ^{T}\right) ^{-1}=&Id_{(m-1)\times (m-1)}-\frac{\nabla \psi \nabla \psi ^{T}}{1+\Vert \nabla \psi \Vert ^2}\nonumber \\=&Id_{(m-1)\times (m-1)}-\frac{\left( \frac{\partial F}{\partial x_1},\cdots ,\frac{\partial F}{\partial x_{m-1}}\right) ^{T}\left( \frac{\partial F}{\partial x_1},\cdots ,\frac{\partial F}{\partial x_{m-1}}\right) }{\Vert \nabla F\Vert ^2}. \end{aligned}$$
(5)

The principal curvatures do not depend on the choice of a particular F which defines the hypersurface locally.

Definition 8

One defines the mean curvature at the point \(w\in M\) as the average

$$\begin{aligned} H=H(w):=\frac{\kappa _1+\kappa _2+\cdots +\kappa _{m-1}}{m-1}. \end{aligned}$$

The definition is extrinsic, meaning that it depends on how M is embedded in \(\mathbb R^{m}\) and is not invariant with respect to smooth diffeomorphisms of the ambient space. For example, a scaling by a factor r results in H transforming to \(\frac{1}{r}H\). Moreover, one has to specify which side of the hypersurface is outer.

From linear algebra we know that the sum of the eigenvalues is the trace of a matrix, so

$$\begin{aligned} H(w)=\frac{\kappa _1+\kappa _2+\cdots +\kappa _{m-1}}{m-1}=\frac{1}{m-1}{\text {tr}} A(w)=\frac{1}{m-1}{\text {tr}} B(w). \end{aligned}$$

As above, if the hypersurface is locally parameterized by the graph of a \(C^2\) function \(\psi \)

$$\begin{aligned} \mathbb R^{m-1}\supset U\ni (t_1,\ldots ,t_{m-1})=t\rightarrow \varphi (t)=(t_1,\ldots ,t_{m-1},\psi (t_1,\ldots ,t_{m-1}))\in M\subset \mathbb R^{m} \end{aligned}$$

then by the definition of B(w), (2) and (3) we have

$$\begin{aligned} H(w)=&\frac{1}{m-1}{\text {tr}} B(w)=\frac{1}{m-1}{\text {tr}} \left( Id_{(m-1)\times (m-1)}-\frac{\nabla \psi \nabla \psi ^{T}}{1+\Vert \nabla \psi \Vert ^2}\right) \frac{{\mathcal {H}} (\psi (t))}{\sqrt{{1+ \Vert \nabla \psi (t) \Vert ^2}}}\nonumber \\ =&\frac{1}{m-1}\frac{{\displaystyle \sum _{i=1}^{m-1}\sum _{j=1}^{m-1}}\left( \delta _{ij}-\frac{\frac{\partial \psi }{\partial t_i}\frac{\partial \psi }{\partial t_j}}{1+\Vert \nabla \psi \Vert ^2}\right) \frac{\partial ^2\psi }{\partial t_i\partial t_j}}{\sqrt{1+\Vert \nabla \psi \Vert ^2}}. \end{aligned}$$
(6)

If further the direction of \(\nu (w)\) coincides with the m-th axis, that is if \(\nabla \psi =0\) at \(\varphi ^{-1}(w)\), then (6) simplifies to \(H(w)=\frac{\Delta \psi (t)}{m-1}\).
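Formula (6) can be tested numerically on the unit sphere in \(\mathbb R^3\), parameterizing the lower hemisphere as the graph of \(\psi (t)=-\sqrt{1-\Vert t\Vert ^2}\); the evaluation point in the sketch below is an arbitrary choice, and the mean curvature should have absolute value 1 (the sign depends on the orientation of the normal).

```python
# Check of formula (6) on the unit sphere in R^3: the lower hemisphere is the graph
# of psi(t) = -sqrt(1 - |t|^2), and the mean curvature should have absolute value 1.
# The evaluation point t is an arbitrary choice inside the unit disc.
t = (0.3, -0.2)
s2 = 1.0 - t[0] ** 2 - t[1] ** 2
s = s2 ** 0.5
grad = [t[0] / s, t[1] / s]                                   # psi_i = t_i / s
hess = [[(s2 * (i == j) + t[i] * t[j]) / s ** 3 for j in range(2)] for i in range(2)]
g = 1.0 + grad[0] ** 2 + grad[1] ** 2                          # 1 + |grad psi|^2

num = sum(((1.0 if i == j else 0.0) - grad[i] * grad[j] / g) * hess[i][j]
          for i in range(2) for j in range(2))
H = num / (2 * g ** 0.5)                                       # formula (6) with m = 3

assert abs(abs(H) - 1.0) < 1e-9
```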

If the hypersurface is locally implicitly given as the set of solutions

$$\begin{aligned} M\supset \{ (x_1,\ldots ,x_{m})= x\in V\subset \mathbb R^{m} | F(x)=0\} \end{aligned}$$

of some equation \(F=0\), with \(w\in M\) and \(\nabla F(w)\ne 0\) we can simplify the calculations by fixing an orthonormal basis \(T_1,\ldots ,T_{m-1}\) of the tangent space \(T_wM\). Then \(T_1,\ldots ,T_{m-1}, \nu (w)\) is an orthonormal basis of \(\mathbb R^{m}\) and hence by the definition of A(w) and (4) we have

$$\begin{aligned} H(w)=&\frac{1}{m-1}{\text {tr}} A(w)=\frac{1}{m-1}\sum _{j=1}^{m-1}\mathrm {I\!I}(w)(T_j,T_j)=\frac{1}{m-1}\sum _{j=1}^{m-1}\frac{{\mathcal {H}}(F(w))(T_j,T_j)}{\Vert \nabla F(w)\Vert }\nonumber \\=&\frac{1}{m-1}\left( {\text {tr}}\frac{{\mathcal {H}}(F(w))}{\Vert \nabla F(w)\Vert } -\frac{{\mathcal {H}}(F(w))(\nu (w),\nu (w))}{\Vert \nabla F(w)\Vert } \right) \nonumber \\=&\frac{1}{m-1}\left( \frac{\Delta F(w)}{\Vert \nabla F(w)\Vert }-\frac{\frac{-\nabla F(w)}{\Vert \nabla F(w)\Vert }^{T} {\mathcal {H}}(F(w))\frac{-\nabla F(w)}{\Vert \nabla F(w)\Vert }}{\Vert \nabla F(w)\Vert }\right) \nonumber \\ =&\frac{1}{m-1}\frac{\nabla F(w)^{T} ( (\Delta F(w))Id_{m\times m}-{\mathcal {H}} (F(w)))\nabla F(w)}{\Vert \nabla F(w)\Vert ^3}. \end{aligned}$$
(7)

We used the fact that if Q is the matrix of the orthogonal transformation sending the j-th vector \(e_j\) of the standard basis of \(\mathbb R^{m}\) to the j-th vector of the basis \(T_1,\ldots ,T_{m-1}, \nu (w)\) then \(Q^{T}=Q^{-1}\) by orthogonality and

$$\begin{aligned} \left( \sum _{j=1}^{m-1} T_j^{T}{\mathcal {H}}(F(w)) T_j\right) +\nu (w)^{T}{\mathcal {H}}(F(w)) \nu (w)=\sum _{j=1}^{m}e_j^{T}Q^{T}{\mathcal {H}}(F(w))Qe_j \end{aligned}$$
$$\begin{aligned} =\sum _{j=1}^{m}e_j^{T}Q^{-1}{\mathcal {H}}(F(w))Qe_j={\text {tr}} Q^{-1}{\mathcal {H}}(F(w))Q ={\text {tr}} {\mathcal {H}}(F(w)), \end{aligned}$$

as the traces of similar matrices are equal.

Finally, note that if one has a graph parameterization

$$\begin{aligned} (t_1,\ldots ,t_{m-1})=t\rightarrow \varphi (t)=(t_1,\ldots ,t_{m-1},\psi (t_1,\ldots ,t_{m-1})) \end{aligned}$$

then \(F\) can be chosen as \(F(x_1,\ldots ,x_m)=\psi (x_1,\ldots ,x_{m-1})-x_{m}\), and one can obtain (7) directly from (6) and vice versa.
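The implicit formula (7) can likewise be sanity-checked on the sphere of radius R in \(\mathbb R^3\), taking \(F(x)=\Vert x\Vert ^2-R^2\) (so that \(F<0\) inside, in line with the convention above); the radius and the base point in the sketch are arbitrary choices, and the classical value \(H=\frac{1}{R}\) should be recovered.

```python
# Check of the implicit formula (7) on the sphere of radius R in R^3, with
# F(x) = |x|^2 - R^2 (so F < 0 inside, matching the convention above); the
# classical value H = 1/R should be recovered. R and w are arbitrary choices.
R = 2.0
w = (R, 0.0, 0.0)                                  # a point on the sphere
grad = [2.0 * c for c in w]                        # nabla F = 2x
hess = [[2.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # Hess F = 2 Id
lap = sum(hess[i][i] for i in range(3))            # Delta F = 6
norm = sum(g * g for g in grad) ** 0.5

M = [[lap * (1.0 if i == j else 0.0) - hess[i][j] for j in range(3)] for i in range(3)]
quad = sum(grad[i] * M[i][j] * grad[j] for i in range(3) for j in range(3))
H = quad / ((3 - 1) * norm ** 3)                   # formula (7) with m = 3

assert abs(H - 1.0 / R) < 1e-12
```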

The notions of principal curvatures and mean curvature can be generalized to higher codimensional submanifolds of \(\mathbb R^{m}\) and to submanifolds of Riemannian manifolds. For more on these matters see [4], which is one of the few places where the above formulas are derived explicitly.

3 Proof of the Main Result and of Theorem 3

The "if" part of Oka's lemma (Theorem 2) is well known and does not need separate treatment. Indeed, if \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic then it is a continuous plurisubharmonic exhaustion function, the existence of which guarantees holomorphic convexity. The proof of the "only if" part, as well as the considerations on subharmonicity properties, will be divided into several steps:

Step 1: Reduction to the case of a bounded domain with smooth boundary.

As this is completely standard we shall be brief. Fix a domain of holomorphy \(\Omega \subsetneq \mathbb C^{n}\). Recall that this implies that \(\Omega \) is holomorphically convex, which in turn implies that there is a smooth strictly plurisubharmonic exhaustion function \(\psi \) on \(\Omega \) (see [16]). By Sard’s theorem there is a sequence \(\lbrace t_j\rbrace _{j=1}^\infty \subset \mathbb R\), \(t_j\nearrow \infty \) such that the domains \(\Omega _j:=\lbrace z\in \Omega \ |\ \psi (z)<t_j\rbrace \) are bounded and their boundary is \(C^{\infty }\) smooth. As \(\psi \) is plurisubharmonic, the domains \(\Omega _j\) are also Levi pseudoconvex. Of course, they are even strictly pseudoconvex but we shall not use this fact later on.

It now suffices to prove that \(-\log (d_{\partial \Omega _j}(z))\) is plurisubharmonic in \(\Omega _j\) for each j, as \(-\log (d_{\partial \Omega }(z))\) is then the pointwise decreasing limit of the (plurisubharmonic) functions \(-\log (d_{\partial \Omega _j}(z))\) and hence plurisubharmonic. We fix j in Step 2 and for notational brevity we suppress this index.

The subharmonic counterpart of the above reasoning is not so widely known. It is a theorem of Greene and Wu (see [11]) that any connected non-compact Riemannian manifold admits a smooth strongly subharmonic exhaustion function. Just take the manifold to be any fixed domain \(\Omega \subsetneq \mathbb R^{n}\) and the metric to be the Euclidean one. There is no completeness requirement. The rest is as above, just read subharmonic instead of plurisubharmonic.

Remark 2

In the subharmonic case we will not use the fact that a domain is a sublevel set of a subharmonic function, but just that it has a smooth boundary. Moreover, such sublevel domains do not seem to exhibit clear distinctive geometric features. For example, the hyperboloid \(\left\{ \frac{x^2}{a^2}+ \frac{y^2}{a^2}-\frac{ z^2}{c^2}= d\right\} \subset \mathbb R^3\), \(\frac{2}{a^2}>\frac{1}{c^2}\), has either positive or negative Gaussian curvature, depending on the choice of d, and if \(d>0\) the mean curvature is positive in some regions and negative in others.

Step 2: Smooth analysis on a smoothly bounded domain \(\Omega \).

We shall prove the following claim (compare with Theorem 1):

Claim: Let \(\Omega \) be a smoothly bounded Levi pseudoconvex domain in \(\mathbb C^{n}\). If \(-\log (d_{\partial \Omega }(z))\) is smooth near \(p\in \Omega \) then it is plurisubharmonic near p in the sense that the Levi form is semi-positive definite on the whole \({\mathbb {C}}^{n}\).

Note that the claim implies that \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic in \(\Omega \setminus \Sigma \).

Essentially, the claim follows from the reasoning in [14] once one realizes that the analysis there holds in \(\Omega \setminus \Sigma \) and not only in a small collar around \(\partial \Omega \). We provide an alternative proof for the sake of completeness.

From Lemma 8 we know the full real Hessian of \(d_{\partial \Omega }\)

$$\begin{aligned} \frac{\partial ^2 d_{\partial \Omega }}{\partial t_i\partial t_j}(0',t_{2n})=\begin{pmatrix} { \frac{-\kappa _{i}}{1-t_{2n}\kappa _{i}}\delta _{ij}}&{}\begin{matrix}0 \\ \vdots \end{matrix} \\ \begin{matrix}0 &{} \cdots \end{matrix}&0 \end{pmatrix}_{i,j=1}^{2n} \end{aligned}$$

and hence the problem reduces to a computation. The main issue is that the real coordinates \((t_1,\cdots ,t_{2n})\) need not be well adapted to the ambient complex coordinates.

Fix a point p in \(\Omega \setminus \Sigma \) so that there is a unique \(q\in \partial \Omega \) such that \(d_{\partial \Omega }(p)=\Vert p-q\Vert \). Changing the coordinates near q as in Lemma 8 we identify q with the coordinate origin. In these coordinates we have \(p=(0',d_{\partial \Omega }(p))\).

Let J denote the (standard) complex structure operator computed in the basis \(\frac{\partial }{\partial t_j},\ j=1,\cdots ,2n\). Then, given \(N=\sum _{j=1}^{n}n_j\frac{\partial }{\partial t_{2j}}\) (\(n_j\in \mathbb R\)), the vector \(Z=N-iJ(N)\) is a complex vector of type (1, 0) with respect to J. The following formula for any \(C^2\) function u is well-known (see [14]):

$$\begin{aligned} {\mathcal {L}}(u(p))(Z,Z)=\frac{1}{4}({\mathcal {H}}(u(p))(N,N)+\mathcal H(u(p))(J(N),J(N))), \end{aligned}$$
(8)

where \({\mathcal {H}}\) is the real 2n-dimensional Hessian of u.

Specializing to \(-\log (d_{\partial \Omega })\) we compute at p

$$\begin{aligned} {\mathcal {H}}(-\log (d_{\partial \Omega })(p))(X,X)=&-\frac{{\mathcal {H}}(d_{\partial \Omega }(p))(X,X)}{d_{\partial \Omega }}+\frac{|\langle \nabla d_{\partial \Omega },X\rangle |^2}{(d_{\partial \Omega })^2}\nonumber \\ =&X^T\begin{pmatrix} \frac{\kappa _{i}}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{i})}\delta _{ij}&{}\begin{matrix}0\\ \vdots \\ 0\end{matrix}\\ \begin{matrix}0&{}&{}\cdots &{}&{} 0\end{matrix}&\frac{1}{(d_{\partial {\Omega }}(p))^2} \end{pmatrix}X. \end{aligned}$$
(9)

In order to exploit the Levi pseudoconvexity of \(\partial \Omega \) we have to specify the complex tangent directions at q. To this end define the vector \(X:=-J\left( \frac{\partial }{\partial t_{2n}}\right) \) (so that \(J(X)=\frac{\partial }{\partial t_{2n}}\) is the inner normal at q). Fix any vector W which is orthogonal to both X and J(X). Then both W and J(W) belong to \(T_q\partial \Omega \) and \(V:=W-iJ(W)\) is a complex tangent vector of type (1, 0). Such vectors span the complex tangent space at q and together with \(Z=X-iJ(X)\) they span the whole \(\mathbb C^{n}\).

Any complex vector of type (1, 0) can then be written as \(V+\alpha Z\) with W and X as above, \(\alpha \in \mathbb C\). Multiplying the vector by \({\overline{\alpha }}\), which affects neither the sign of the Levi form in the direction \(V+\alpha Z\) nor the way V is obtained, we can assume that the coefficient in front of Z and hence in front of X (still called \(\alpha \)) is real. In conclusion, it remains to check the positivity of

$$\begin{aligned} A:=&{\mathcal {H}}(-\log (d_{\partial \Omega })(p))(W+\alpha X,W+\alpha X)\nonumber \\&+{\mathcal {H}}(-\log (d_{\partial \Omega })(p))(J(W)+\alpha J(X),J(W)+\alpha J(X)), \end{aligned}$$
(10)

where \(W=\sum _{s=1}^{2n-1}w_s\frac{\partial }{\partial t_s}\), \(J(W)=\sum _{s=1}^{2n-1}\omega _s\frac{\partial }{\partial t_s}\) and \(X=\sum _{s=1}^{2n-1}\chi _s\frac{\partial }{\partial t_s}\), \(J(X)=\frac{\partial }{\partial t_{2n}}\).

Recalling now (9), the expression (10) becomes

$$\begin{aligned} A=\sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2+\alpha ^2\chi _s^2+2\alpha w_s\chi _s)\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}+\frac{\alpha ^2}{(d_{\partial \Omega }(p))^2} \end{aligned}$$

(note that \({\mathcal {H}}(d_{\partial \Omega }(p))(J(W),J(X))\) vanishes). Exploiting the fact that \(\sum _{s=1}^{2n-1}\chi _s^2=1\) and completing the squares we obtain

$$\begin{aligned} A&=\sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2+\alpha ^2\chi _s^2+2\alpha w_s\chi _s)\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}+\sum _{s=1}^{2n-1}\frac{\alpha ^2\chi _s^2}{(d_{\partial \Omega }(p))^2} \\&=\sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2)\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}\\&+\sum _{s=1}^{2n-1}\left[ \frac{\alpha ^2\chi _s^2}{(d_{\partial \Omega }(p))^2(1-d_{\partial \Omega }(p)\kappa _{s})}+\frac{2\alpha w_s\chi _s\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}\right] \\&\ge \sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2)\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}\nonumber \\&-\sum _{s=1}^{2n-1}\frac{w_s^2\kappa _s^2}{1-d_{\partial \Omega }(p)\kappa _{s}}=\sum _{s=1}^{2n-1}\frac{\omega _s^2\kappa _s}{d_{\partial \Omega }(p)(1-d_{\partial \Omega }(p)\kappa _{s})}\\&+\sum _{s=1}^{2n-1}\frac{w_s^2\kappa _s}{d_{\partial \Omega }(p)}. \end{aligned}$$

Observe now the crucial fact: the elementary inequality \(\frac{\kappa _s}{1-d_{\partial \Omega }(p)\kappa _s}\ge \kappa _s\) holds regardless of the sign of \(\kappa _s\), since \(\frac{\kappa _s}{1-d_{\partial \Omega }(p)\kappa _s}-\kappa _s=\frac{d_{\partial \Omega }(p)\kappa _s^2}{1-d_{\partial \Omega }(p)\kappa _s}\ge 0\) whenever \(1-d_{\partial \Omega }(p)\kappa _s>0\) (we learned this trick from [7]). Hence, A is further bounded from below by

$$\begin{aligned} \sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2)\kappa _s}{d_{\partial \Omega }(p)}. \end{aligned}$$

The latter expression is easily seen to be equal to

$$\begin{aligned} \frac{{\mathcal {H}}((h(t')-t_{2n})(0',0))(W,W)+{\mathcal {H}}((h(t')-t_{2n})(0',0))(J(W),J(W))}{d_{\partial \Omega }(p)}\\ =4\frac{{\mathcal {L}}((h(t')-t_{2n})(0',0))(V,V)}{d_{\partial \Omega }(p)}. \end{aligned}$$

As \(h(t')-t_{2n}\) is a local defining function for \(\partial \Omega \), the last expression is non-negative from the very definition of Levi pseudoconvexity. This finishes the proof of the Claim.

The subharmonic counterpart is much easier. By the same computation as above, and because the Laplacian is the trace of the Hessian, one has:

$$\begin{aligned} \Delta (-\log (d_{\partial \Omega }(x)))=\frac{1}{(d_{\partial \Omega }(x))^2}+\sum _{i=1}^{m-1}\frac{\kappa _{i}}{d_{\partial \Omega }(x)(1-d_{\partial \Omega }(x)\kappa _{i})} \end{aligned}$$

and

$$\begin{aligned} \Delta (d_{\partial \Omega }(x))^{2-m}=\frac{(m-2)(m-1)}{(d_{\partial \Omega }(x))^m}+(m-2)\sum _{i=1}^{m-1}\frac{\kappa _{i}}{(d_{\partial \Omega }(x))^{m-1}(1-d_{\partial \Omega }(x)\kappa _{i})}. \end{aligned}$$

Recall that \(1-d_{\partial \Omega }(x)\kappa _{i}>0\) for any \(x\in \Omega \setminus \Sigma \) and any i (see also Lemma 2.2 in [18]). Observe that the function \(x\rightarrow \frac{x}{1-dx}\) is convex for \(x\in \left( -\infty , \frac{1}{d}\right) \), as its second derivative is \(\frac{2d}{(1-dx)^3}> 0\). Thus, by the Jensen inequality for convex functions, one has

$$\begin{aligned}{} & {} \Delta (-\log (d_{\partial \Omega }(x)))\ge \frac{1}{(d_{\partial \Omega }(x))^2}+(m-1)\frac{\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}}{d_{\partial \Omega }(x)\left( 1-d_{\partial \Omega }(x)\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}\right) }\\{} & {} \quad =\frac{1}{(d_{\partial \Omega }(x))^2}+\frac{(m-1)\sum _{i=1}^{m-1}\kappa _{i}}{d_{\partial \Omega }(x)\left( m-1-d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}\right) }=\frac{m-1+(m-2)d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}}{(d_{\partial \Omega }(x))^2\left( m-1-d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}\right) } \end{aligned}$$

and respectively

$$\begin{aligned} \Delta (d_{\partial \Omega }(x))^{2-m}&\ge \frac{(m-2)(m-1)}{(d_{\partial \Omega }(x))^m}+(m-2)(m-1)\frac{\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}}{(d_{\partial \Omega }(x))^{m-1}\left( 1-d_{\partial \Omega }(x)\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}\right) }\\&=(m-2)(m-1)\left( \frac{1}{(d_{\partial \Omega }(x))^m}+\frac{\sum _{i=1}^{m-1}\kappa _{i}}{(d_{\partial \Omega }(x))^{m-1}\left( m-1-d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}\right) }\right) \\&=\frac{(m-2)(m-1)^2}{(d_{\partial \Omega }(x))^m\left( m-1-d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}\right) }>0. \end{aligned}$$

In the former case, the subharmonicity is reduced to the non-negativity of

$$\begin{aligned} m-1+(m-2)d_{\partial \Omega }(x)\sum _{i=1}^{m-1}\kappa _{i}=(m-1)(1+(m-2)d_{\partial \Omega }(x)H). \end{aligned}$$

This is positive near the boundary. If H is non-negative then the subharmonicity follows. If \(H<0\) then the subharmonicity holds if \(d_{\partial \Omega }(x)\le \frac{1}{(m-2)|H|}\). Observe also that if R is the inradius of \(\Omega \) and \(H\ge \frac{-1}{(m-2)R}\) then \(1+(m-2)d_{\partial \Omega }(x)H\ge 1-\frac{d_{\partial \Omega }(x)}{R}\ge 0\), as \(d_{\partial \Omega }(x)\le R\).
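The Jensen step above admits a direct numerical illustration: for the convex function \(f(x)=\frac{x}{1-dx}\) on \(\left( -\infty ,\frac{1}{d}\right) \), the sum of the values at sample curvatures dominates the number of samples times the value at their mean. In the sketch below the value of d and the sample curvatures are arbitrary choices subject to \(1-d\kappa _i>0\).

```python
# Numerical illustration of the Jensen step: for the convex function
# f(x) = x / (1 - d*x) on (-inf, 1/d), the sum over sample curvatures dominates
# (m-1) times the value at their mean. d and the kappas are arbitrary choices
# subject to 1 - d*kappa > 0.
d = 0.3
f = lambda x: x / (1.0 - d * x)
kappas = [-1.5, 0.2, 2.0, 3.0]
mean = sum(kappas) / len(kappas)

assert all(1.0 - d * k > 0 for k in kappas)
assert sum(f(k) for k in kappas) >= len(kappas) * f(mean)
```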

Finally, to deal with the smoothly bounded pseudoconvex case we assume Theorem 4, where without loss of generality the smallest eigenvalue is \(\kappa _{2n-1}\), and split the sum

$$\begin{aligned} \Delta (-\log (d_{\partial \Omega }(x)))&=\frac{1}{(d_{\partial \Omega }(x))^2}+\sum _{i=1}^{2n-1}\frac{\kappa _{i}}{d_{\partial \Omega }(x)(1-d_{\partial \Omega }(x)\kappa _{i})}\\&=\frac{1}{(d_{\partial \Omega }(x))^2}+\frac{\kappa _{2n-1}}{d_{\partial \Omega }(x)(1-d_{\partial \Omega }(x)\kappa _{2n-1})}\\&\quad +\sum _{i=1}^{2n-2}\frac{\kappa _{i}}{d_{\partial \Omega }(x)(1-d_{\partial \Omega }(x)\kappa _{i})}\\&\ge \frac{1-d_{\partial \Omega }(x)\kappa _{2n-1}+d_{\partial \Omega }(x)\kappa _{2n-1}}{(d_{\partial \Omega }(x))^2(1-d_{\partial \Omega }(x)\kappa _{2n-1})}\\&\quad +(2n-2)\frac{\frac{\sum _{i=1}^{2n-2}\kappa _{i}}{2n-2}}{d_{\partial \Omega }(x)\left( 1-d_{\partial \Omega }(x)\frac{\sum _{i=1}^{2n-2}\kappa _{i}}{2n-2}\right) }\\&\ge \frac{1}{(d_{\partial \Omega }(x))^2(1-d_{\partial \Omega }(x)\kappa _{2n-1})}>0, \end{aligned}$$

by the Jensen inequality and the fact that \(\sum _{i=1}^{2n-2}\kappa _{i}\ge 0\), which follows from Theorem 4 as \(\kappa _{2n-1}\) is the smallest principal curvature.

Remark 3

We note that the inequality

$$\begin{aligned} \sum _{i=1}^{m-1}\frac{\kappa _{i}}{1-d_{\partial \Omega }(x)\kappa _{i}}\ge (m-1)\frac{\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}}{1-d_{\partial \Omega }(x)\frac{\sum _{i=1}^{m-1}\kappa _{i}}{m-1}} \end{aligned}$$

is presented as a much deeper fact in [18], see Proposition 2.6 there and Proposition 2.5.3 in [1], for the proof of which one needs the Newton inequality for elementary symmetric polynomials.

Step 3: Non-smooth analysis on \(\Omega \).

It remains to prove that \(-\log (d_{\partial \Omega }(\cdot ))\) extends (pluri)subharmonically past \(\Sigma \) for each \(\Omega =\Omega _j\). As \(\partial \Omega \) is now smooth it follows from Lemma 6 that \(\Sigma \) is of zero Lebesgue measure. We start with the subharmonic case.

We observe that

$$\begin{aligned} -\log (d_{\partial \Omega }(x))=\sup _{ w\in \partial \Omega } -\log \Vert x-w\Vert . \end{aligned}$$

The functions \(x\rightarrow -\log \Vert x-w\Vert \) are not subharmonic for \(m>2\), yet the following lower bound for their Laplacian is available

$$\begin{aligned}{} & {} -\Delta \log \Vert x-w\Vert =-\sum _{k=1}^{m}\frac{\partial ^2 \frac{1}{2}\log \Vert x-w\Vert ^2}{\partial x_k\partial x_{k}}=-\sum _{k=1}^{m}\frac{\Vert x-w\Vert ^2-2(x_k-w_k)^2}{\Vert x-w\Vert ^4}\\{} & {} \quad = \frac{-(m-2)}{\Vert x-w\Vert ^2}. \end{aligned}$$

So,

$$\begin{aligned} {\overline{\Delta }} (-\log \Vert x-w\Vert )=\Delta (-\log \Vert x-w\Vert )\ge \frac{-(m-2)}{(d_{\partial \Omega }(x))^2} \end{aligned}$$

for any \(w\in \partial \Omega \). Then, by Lemma 11, the same estimate holds for the regularized pointwise supremum: \({\overline{\Delta }} (-\log (d_{\partial \Omega }(x)))^{*}\ge \frac{-(m-2)}{(d_{\partial \Omega }(x))^2}\). As the pointwise supremum is continuous there is no need to take the regularization and so \({\overline{\Delta }} (-\log (d_{\partial \Omega }(x)))\ge \frac{-(m-2)}{(d_{\partial \Omega }(x))^2}>-\infty \) everywhere in \(\Omega \). Also, by the subharmonicity of \(-\log (d_{\partial \Omega }(x))\) on \(\Omega \setminus \Sigma \) established in Step 2, one has \({\overline{\Delta }} (-\log (d_{\partial \Omega }(x)))\ge 0\) there, that is, almost everywhere. By the theorem of Privalov, \(-\log (d_{\partial \Omega }(x))\) is subharmonic on \(\Omega \). When we consider \((d_{\partial \Omega }(x))^{2-m}\) the proof is the same: \((d_{\partial \Omega }(x))^{2-m}=\max _{w\in \partial \Omega }\Vert x-w\Vert ^{2-m}\) and \(\Delta \Vert x-w\Vert ^{2-m}=0\).
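The identity \(-\Delta \log \Vert x-w\Vert =\frac{-(m-2)}{\Vert x-w\Vert ^2}\) underlying these estimates can be checked by finite differences; in the sketch below \(m=3\), \(w=0\), and the base point and step size are arbitrary choices.

```python
import math

# Finite-difference check of Delta(-log|x - w|) = -(m-2)/|x - w|^2 in R^3
# (with w = 0, so the right-hand side is -1/|x|^2). The base point and the
# step size h are arbitrary choices.
m, h = 3, 1e-4
x = (0.7, -0.4, 1.1)
u = lambda p: -math.log(math.sqrt(sum(c * c for c in p)))

lap = 0.0
for k in range(m):
    xp = tuple(c + h * (i == k) for i, c in enumerate(x))
    xm = tuple(c - h * (i == k) for i, c in enumerate(x))
    lap += (u(xp) - 2.0 * u(x) + u(xm)) / h ** 2   # second difference in direction k

assert abs(lap + (m - 2) / sum(c * c for c in x)) < 1e-5
```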

The proof in the subharmonic case is over. We now turn to the plurisubharmonic one. Since \(\Sigma \) is of zero Lebesgue measure, \(-\log (d_{\partial \Omega }(z))\) is plurisubharmonic on \(\Omega \setminus \Sigma \) by Step 2, and subharmonic in \(\Omega \) by Step 3 above, we may invoke Theorem 12 to directly obtain the claimed plurisubharmonicity of \(-\log (d_{\partial \Omega }(z))\) on the whole \(\Omega \).

Remark 4

As a simple corollary to Theorems 2 and 3 one can observe that the function \(\frac{1}{(d_{\partial \Omega }(z))^s}\) for any \(s>0\) is plurisubharmonic for pseudoconvex \(\Omega \), and \((d_{\partial \Omega }(x))^{t}\) for any \(t\le 2-m\) is subharmonic on \(\Omega \). Likewise, \(\frac{1}{(d_{\partial \Omega }(x))^s}\), \(s>0\), is subharmonic near the boundary of any smoothly bounded \(\Omega \). This follows from the fact that subharmonicity and plurisubharmonicity are preserved under composition with a convex increasing function (\(y\rightarrow y^{\frac{t}{2-m}}\) and \(y\rightarrow e^{sy}\), respectively).

4 Proof of Theorem 5

This is now very brief. Observe that \({{\overline{\Delta }}}(-d_{\partial \Omega }(x))\ge \frac{-(m-1)}{d_{\partial \Omega }(x)}\) because \(\Delta (-\Vert x-w\Vert )=\frac{-(m-1)}{\Vert x-w\Vert }\ge \frac{-(m-1)}{d_{\partial \Omega }(x)}\) for \(w\in \partial \Omega \), so the reasoning in Step 3 above can be repeated. By assumption \(-d_{\partial \Omega }(x)\) is subharmonic on \(\Omega \setminus \Sigma \), that is, almost everywhere, since \(\Sigma \) is a Lebesgue null set. Hence, \(-d_{\partial \Omega }(x)\) is subharmonic in the whole \(\Omega \) by the Privalov theorem.

5 Proof of Theorem 4

Essentially, the proof boils down to linear algebra.

First, we have to specify what will be called "the real part of a complex hyperplane". Note that given a complex vector space with a fixed complex structure J there is no natural or canonical choice of a splitting into real and imaginary parts - it depends on the choice of an antilinear involution \(v\rightarrow c(v)={\overline{v}}\). However, once such a splitting is fixed, one can induce a compatible splitting on any complex hyperplane. To see this, let V be a complex vector space with \(\dim _{\mathbb C}V=n\), let J be the complex structure, and let \(Re V\) and \(Im V =J(Re V)\) be the splitting, so that \(V=Re V\oplus Im V\) and \(\dim _{\mathbb R} Re V= \dim _{\mathbb R}Im V=n\). Let H be a complex hyperplane in V. From

$$\begin{aligned}{} & {} n+2n-2=\dim _{\mathbb R}Re V+ \dim _{\mathbb R} H=\dim _{\mathbb R} Re V\cap H \nonumber \\{} & {} \quad + \dim _{\mathbb R} Re V + H\le \dim _{\mathbb R} Re V\cap H +2n \end{aligned}$$

we see that \(\dim _{\mathbb R} Re V\cap H\) is either \(n, n-1\) or \(n-2\). The first possibility is ruled out as there is a linear bijection \( Re V\cap H\ni v\rightarrow J(v)\in Im V\cap H\), meaning that \( Im V\cap H= Im V\) and hence \(H= V\), a contradiction. So there is a \(v\in Re V\setminus H\). But now \(J(v)\in Im V\setminus H\) and hence \(J(v)\not \in Re V + H\). So, \(\dim _{\mathbb R} Re V + H\le 2n-1\) and \(\dim _{\mathbb R} Re V\cap H\ne n-2\) either. Thus, \(\dim _{\mathbb R} Re V\cap H=n-1\) and we define this subspace to be the real part of H.

As \(\Omega \) is a pseudoconvex domain in \(\mathbb C^{n}\) we first split (say in the standard way) \(\mathbb C^{n}= Re\, \mathbb C^{n}\oplus Im\,\mathbb C^{n}\) and for any tangent complex hyperplane we define its real part accordingly.

The setting is now as in Step 2 above. Let \(U\subset \mathbb R^{2n-1}=\mathop {span} \left\{ \frac{\partial }{\partial t_1},\cdots ,\frac{\partial }{\partial t_{2n-1}} \right\} \) be the real part of the complex tangent space at q. This subspace is perpendicular to \(X=-J\left( \frac{\partial }{\partial t_{2n}}\right) \) and for any vector \(W\in U\), \(W-iJ(W)\) is a complex vector of type (1, 0) which is tangent to \(\partial \Omega \) at q. Likewise, J(U) is the imaginary part of the complex tangent space at q. As above, \(\dim _{\mathbb R} U= \dim _{\mathbb R} J(U)=n-1\) and \(\dim _{\mathbb R} U\oplus J(U)=2n-2\) since J is an orthogonal transformation.

We set \(K:=\begin{pmatrix}\kappa _{1}&{} 0 &{}\cdots &{} 0\\ 0&{}\kappa _{2}&{}\cdots &{}0\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 0&{}0&{}\cdots &{}\kappa _{2n-1} \end{pmatrix}\). Note that at the end of the proof of the Claim in Step 2 above we obtained the inequality

$$\begin{aligned} \sum _{s=1}^{2n-1}\frac{(w_s^2+\omega _s^2)\kappa _s}{d_{\partial \Omega }(p)}\ge 0. \end{aligned}$$

Disregarding the denominator, the latter is equivalent to (in matrix notation):

$$\begin{aligned} \begin{pmatrix}W^{T}&J(W)^{T}\end{pmatrix}\begin{pmatrix}K&{} \text{0 }_{(2n-1)\times (2n-1)}\\ \text{0 }_{(2n-1)\times (2n-1)} &{} K \end{pmatrix}\begin{pmatrix} W\\ J(W)\end{pmatrix}\ge 0 \end{aligned}$$
(11)

and

$$\begin{aligned} \begin{pmatrix}J(W)^{T}&W^{T}\end{pmatrix}\begin{pmatrix}K&{} \text{0 }_{(2n-1)\times (2n-1)}\\ \text{0 }_{(2n-1)\times (2n-1)} &{} K \end{pmatrix}\begin{pmatrix} J(W)\\ W\end{pmatrix}\ge 0 \end{aligned}$$
(12)

for any \(W\in U\), and where by abusing notation J is the restriction of the complex structure to \(\{X, JX\}^{\perp }\subset \mathbb R^{2n-1}\).

We use the following standard fact from linear algebra (a real version of the so-called Ky Fan maximum principle): for a symmetric real \(m\times m\) matrix A the sum of the \(j\le m\) greatest eigenvalues of A is equal to \(\max _{B}{\text {tr}} B^{T}AB\) where the maximum is taken over all real matrices B with m rows and j columns such that \(B^{T}B\) is the \(j\times j\) identity matrix.
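This fact admits a minimal numerical illustration for a diagonal matrix A, with orthonormal columns produced by Gram-Schmidt on random data; all sizes and numbers in the sketch below are arbitrary choices.

```python
import random

# Illustration of the Ky Fan maximum principle for a diagonal matrix A = diag(a):
# for any B with j orthonormal columns, tr(B^T A B) <= sum of the j largest
# entries of a. Orthonormal columns are produced by Gram-Schmidt on random
# vectors; all sizes and numbers are arbitrary choices.
random.seed(1)
m, j = 6, 3
a = [random.uniform(-2.0, 2.0) for _ in range(m)]

cols = []
for _ in range(j):
    v = [random.gauss(0.0, 1.0) for _ in range(m)]
    for c in cols:                                   # orthogonalize against previous columns
        dot = sum(vi * ci for vi, ci in zip(v, c))
        v = [vi - dot * ci for vi, ci in zip(v, c)]
    nrm = sum(vi * vi for vi in v) ** 0.5
    cols.append([vi / nrm for vi in v])

trace = sum(a[i] * sum(c[i] ** 2 for c in cols) for i in range(m))  # tr(B^T A B)
top_j = sum(sorted(a, reverse=True)[:j])

assert trace <= top_j + 1e-12
```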

If \(e_1,\ldots ,e_{n-1}\) is any orthonormal basis of U, so that the \(2n-2\) vectors \(e_1,\ldots ,e_{n-1},J(e_1),\ldots ,J(e_{n-1})\) are orthonormal in \(\mathbb R^{2n-1}\), it follows from the block structure of the matrix that

$$\begin{aligned}{} & {} 2\max _{j\in \{1,\ldots , 2n-1\}}\kappa _1+\cdots +\kappa _{j-1}+{\hat{\kappa }}_j+\kappa _{j+1}+\cdots +\kappa _{2n-1}\\{} & {} \ge {\text {tr}} B^{T}\begin{pmatrix}K&{} \text{0 }_{(2n-1)\times (2n-1)}\\ \text{0 }_{(2n-1)\times (2n-1)} &{} K \end{pmatrix}B \end{aligned}$$

where

$$\begin{aligned} B= \begin{pmatrix} e_1 &{}\cdots &{} e_{n-1}&{}J(e_1)&{} \cdots &{}J(e_{n-1})\\ J(e_1)&{} \cdots &{}J(e_{n-1})&{}e_1 &{}\cdots &{} e_{n-1}\end{pmatrix}, \end{aligned}$$

a matrix with \(2n-2\) columns and \(4n-2\) rows (each entry above is a column vector in \(\mathbb R^{2n-1}\)).

But from (11), (12) all the diagonal entries of the product matrix above are non-negative (positive if the Levi form is non-degenerate on the corresponding vector \(e_s-iJ(e_s)\)). Hence, we get the first part of the result.

Also, the matrix \(\begin{pmatrix}K&{} \text{0 }_{(2n-1)\times (2n-1)}\\ \text{0 }_{(2n-1)\times (2n-1)} &{} K \end{pmatrix}\), or rather the real bilinear form represented by it, is semi-positive definite when restricted to the following \(2n-2\) dimensional subspace of \(\mathbb R^{4n-2}\):

$$\begin{aligned} {span}\left\{ \begin{pmatrix} e_1\\ J(e_1) \end{pmatrix},\cdots ,\begin{pmatrix} e_{n-1}\\ J(e_{n-1}) \end{pmatrix},\begin{pmatrix} J(e_{1})\\ e_{1} \end{pmatrix},\cdots ,\begin{pmatrix} J(e_{n-1})\\ e_{n-1} \end{pmatrix} \right\} . \end{aligned}$$

It follows from linear algebra (or from the theory of Krein spaces) that at least \(n-1\) of the eigenvalues, that is of the numbers \(\kappa _1,\ldots , \kappa _{2n-1}\), are non-negative. In the strongly pseudoconvex case one has positive definiteness instead of semi-positive definiteness above and hence positive eigenvalues.

Remark 5

Theorem 4 is sharp in the sense that one can find a pseudoconvex domain with exactly \(n-1\) non-negative principal curvatures at some point of its boundary. Just take a close enough approximation of the product of n copies of the planar annulus \(B(0,1)\setminus \overline{B(0,r)}\), which can be obtained from a smooth exhaustion.

Remark 6

In the geometrically ideal situation, when the principal directions of the second fundamental form coincide with the coordinate axes with respect to the standard coordinates of \(\mathbb C^{n}\), we can choose \(W=\frac{\partial }{\partial t_{2s+1}},\, s=0,\ldots ,n-2\) to obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} \kappa _1+\kappa _2&{}\ge 0\\ \kappa _3+\kappa _4&{}\ge 0\\ &{}\vdots \\ \kappa _{2n-3}+\kappa _{2n-2}&{}\ge 0\\ \end{array}\right. }.\end{aligned}$$
(13)

The same observation can be found in [12], p.24.