Abstract
The essential variety is an algebraic subvariety of dimension 5 in real projective space \(\mathbb R\mathrm P^{8}\) which encodes the relative pose of two calibrated pinhole cameras. The 5-point algorithm in computer vision computes the real points in the intersection of the essential variety with a linear space of codimension 5. The degree of the essential variety is 10, so this intersection consists of 10 complex points in general. We compute the expected number of real intersection points when the linear space is random. We focus on two probability distributions for linear spaces. The first distribution is invariant under the action of the orthogonal group \(\textrm{O}(9)\) acting on linear spaces in \(\mathbb R\mathrm P^{8}\). In this case, the expected number of real intersection points is equal to 4. The second distribution is motivated by computer vision and is defined by choosing 5 point correspondences in the image planes \(\mathbb R\mathrm P^2\times \mathbb R\mathrm P^2\) uniformly at random. A Monte Carlo computation suggests that, with high probability, the expected value lies in the interval \((3.95 - 0.05,\ 3.95 + 0.05)\).
1 Introduction
The mathematical abstraction of a pinhole camera is a projective linear map
where \(C\in \mathbb R^{3\times 4}\) is a matrix of rank 3. The camera is called calibrated when \(C=[R, \textbf{t}]\), where \(R\in \textrm{SO}(3)\) is a rotation matrix and \(\textbf{t}\in \mathbb R^3\) is a translation vector.
The relative-pose problem is the problem of computing the relative position of two cameras in 3-space; see [8, Section 9]. Suppose that we have two calibrated cameras given by two matrices \(C_1\) and \(C_2\) of rank 3. Since we are only interested in relative positions, we can assume \(C_1=[\textrm{1}_3, \textbf{0}]\) and \(C_2=[R, \textbf{t}]\). If \(\textbf{x}\in \mathbb R\mathrm P^3\) is a point in 3-space, \(\textbf{u}=C_1\textbf{x}\in \mathbb R\mathrm P^2\) and \(\textbf{v}=C_2\textbf{x}\in \mathbb R\mathrm P^2\) are called a point-correspondence. Any point-correspondence \((\textbf{u},\textbf{v})\) satisfies the algebraic equation
and \([\textbf{t}]_\times \) is the matrix acting by \([\textbf{t}]_\times \textbf{x}= \textbf{t}\times \textbf{x},\) the cross-product in \(\mathbb R^3\). The set of all such matrices is denoted \(\widehat{\mathcal {E}}:= \{E(R,\textbf{t})\mid R \in \textrm{SO}(3), \textbf{t}\in \mathbb {R}^3\}\). This is an algebraic variety defined by the 10 cubic and homogeneous polynomial equations \(\det (E)=0,\; 2EE^TE - \textrm{Tr}(EE^T)E=0\); see [7, Section 4]. Therefore, if \(\pi : \mathbb {R}^{3\times 3} \rightarrow \textrm{P}(\mathbb {R}^{3\times 3})\cong \mathbb R\mathrm P^8\) denotes the projectivization map, \(\widehat{\mathcal {E}}\) is the cone over the projective variety
which is called the essential variety.
In the following we view elements in \(\mathbb R\mathrm P^8\) as real \(3\times 3\) matrices up to scaling. The essential variety \(\mathcal E\) is of dimension \(5 = \dim \textrm{SO}(3) + \dim \mathbb {R}^3 - 1\). Demazure showed that its complexification has degree 10; see [6, Theorem 6.4]. Denote by \(\mathbb G:=G(3,\mathbb R\mathrm P^8)\) the Grassmannian of 3-dimensional linear spaces in \(\mathbb R\mathrm P^8\). By (1.1), every point correspondence induces a linear equation on \(\mathcal E\). For 5 general point correspondences \((\textbf{u}_1,\textbf{v}_1),\ldots ,(\textbf{u}_5,\textbf{v}_5)\in \mathbb R\mathrm P^2\times \mathbb R\mathrm P^2,\) the linear space
is general in \(\mathbb G\). Thus
That is, the relative pose problem can be solved by computing the real zeros of a system of polynomial equations that has 10 complex zeros in general. Once we have computed \(E=E(R,\textbf{t})\) we can recover the relative position of the two cameras from E. The process of recovering the relative pose of two calibrated cameras from five point correspondences is known as the 5-point algorithm; see [12].
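For illustration, here is a minimal numerical sketch of this setup, assuming the convention \(E(R,\textbf{t})=[\textbf{t}]_\times R\) from Sect. 2 (so that a correspondence with \(\textbf{u}\) from the first camera and \(\textbf{v}\) from the second satisfies \(\textbf{v}^TE\textbf{u}=0\); the precise ordering of the pair is fixed by (1.1)). It checks this constraint together with the cubic equations \(\det (E)=0\), \(2EE^TE - \textrm{Tr}(EE^T)E=0\).

```python
import numpy as np

def skew(t):
    """The matrix [t]_x with skew(t) @ x = t x x (cross product in R^3)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def random_rotation(rng):
    """A rotation matrix, obtained from the QR decomposition of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q   # -Q is also orthogonal; det flips sign in odd dimension

rng = np.random.default_rng(0)
R = random_rotation(rng)
t = rng.standard_normal(3)
t /= np.linalg.norm(t)
E = skew(t) @ R                                # essential matrix E(R, t) = [t]_x R

# the cubic equations cutting out the cone over the essential variety
assert abs(np.linalg.det(E)) < 1e-10
assert np.allclose(2 * E @ E.T @ E - np.trace(E @ E.T) * E, 0)

# a synthetic point correspondence and its epipolar constraint
x = rng.standard_normal(3)                     # a point in 3-space (affine chart of RP^3)
u = x                                          # image under C1 = [1_3, 0]
v = R @ x + t                                  # image under C2 = [R, t]
assert abs(v @ E @ u) < 1e-10                  # the correspondence satisfies the constraint
```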
The system of polynomial equations that we need to solve as part of the 5-point algorithm has 10 complex zeros in general, but the number of real zeros depends on L. Often, one computes all complex zeros and sorts out the real ones. Whether or not this is an efficient approach depends on how likely it is to have many real zeros out of 10 complex ones. Motivated by this observation, in this paper we study the average degree \(\mathop {\mathrm {{\mathbb {E}}}}\limits \# (\mathcal E\cap L)\) for random L.
Consider \(L=U\cdot L_0\), where \(L_0\in \mathbb G\) is fixed and \(U\sim \textrm{Unif}(\textrm{O}(9))\). Then, with respect to the Haar measure on \(\mathbb G\), we in fact have \(L\sim \textrm{Unif}(\mathbb G)\); see [10, 13]. Our first result shows that with this uniform distribution, we expect 4 of the 10 complex intersection points to be real.
Theorem 1.1
Let \(L\sim \textrm{Unif}(\mathbb G)\). Then
This result is in fact quite surprising: we get an integer, although there is no a priori reason why the expected value should even be a rational number (see also [3, Remark 2]).
To work within the computer vision framework, we need a distribution different from the one used in Theorem 1.1: that distribution is \(\textrm{O}(9)\)-invariant, yet linear equations of the type \(\textbf{u}^TE\textbf{v}=0\) are not preserved by the \(\textrm{O}(9)\)-action. These special linear equations are, however, \(\textrm{O}(3)\times \textrm{O}(3)\)-invariant under the group action \((U,V).(\textbf{u},\textbf{v}):=(U\textbf{u},V\textbf{v})\). The corresponding invariant probability distribution is given by the random point \(\textbf{a}=U\cdot \textbf{a}_0\in \mathbb R\mathrm P^2\), where \(U\sim \textrm{Unif}(\textrm{O}(3))\) and \(\textbf{a}_0\in \mathbb R\mathrm P^2\) is fixed. We denote this by \(\textbf{a}\sim \textrm{Unif}(\mathbb R\mathrm P^2)\).
Remark 1.2
The definition of \(\textrm{Unif}(\mathbb G)\) does not depend on the choice of \(L_0\), and the definition of \(\textrm{Unif}(\mathbb R\mathrm P^2)\) does not depend on the choice of \(\textbf{a}_0\).
We write \(L\sim \psi \), where \(L=\{E\in \mathbb R\mathrm P^8 \mid \textbf{u}_1^T E\textbf{v}_1 = \cdots = \textbf{u}_5^T E\textbf{v}_5 = 0\}\in \mathbb G\) is the random linear space given by i.i.d. points \(\textbf{u}_1,\textbf{v}_1,\ldots ,\textbf{u}_5,\textbf{v}_5\sim \textrm{Unif}(\mathbb R\mathrm P^2)\). We have the following result.
Theorem 1.3
With the distribution \(\psi \) defined above,
where \(\textbf{z}_1,\textbf{z}_2,\textbf{z}_3,\textbf{z}_4, \textbf{z}_5\sim \textbf{z}\) are i.i.d.,
and \(a,b,r,s\sim N(0,1)\), \(\theta \sim \textrm{Unif}([0,2\pi ))\) are independent.
We were not able to determine the exact value of the integral in this theorem. Yet, we can independently sample N random matrices of the form \(\begin{bmatrix} \textbf{z}_1&\textbf{z}_2&\textbf{z}_3&\textbf{z}_4&\textbf{z}_5\end{bmatrix}\) and compute their absolute determinants. This gives an empirical average value \(\mu _N\). An experiment with sample size \(N=5\cdot 10^9\) gives an empirical average of
In fact, \(\mu _N\) is itself a random variable and we have \(P\left( \ \vert \mu _N - \mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\vert \ge \varepsilon \ \right) \le \frac{\pi ^6}{16}\cdot \frac{\sigma ^2}{N\cdot \varepsilon ^2}\) by Chebychev’s inequality, where \(\sigma ^2\) is the variance of the absolute determinant. We show in Proposition 4.4 below that \(\sigma ^2\le 360\). Using this in Chebychev’s inequality we get
(in fact, since 360 is an extremely coarse upper bound, the true probability should be much smaller). Therefore, it is likely that \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\) is strictly smaller than 4; i.e., it is likely that the expected value in Theorem 1.3 is less than the one in Theorem 1.1. See Fig. 1.
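For illustration, the Monte Carlo experiment can be sketched as follows; here `sample_z` is a hypothetical placeholder for the distribution of \(\textbf{z}\) defined in (4.7), and the estimate is rescaled by the constant \(\tfrac{\pi ^3}{4}\) from Theorem 1.3 so that it is directly comparable with \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\).

```python
import numpy as np

def sample_z(rng):
    """Hypothetical placeholder: return one sample of the vector z in R^5 from (4.7);
    its explicit formula in terms of a, b, r, s ~ N(0,1) and theta ~ Unif[0, 2pi)
    is given in Sect. 4 and has to be filled in here."""
    raise NotImplementedError

def estimate(N, rng):
    """mu_N = (pi^3/4) * empirical mean of |det[z_1 ... z_5]| over N samples,
    together with the empirical variance of the absolute determinant."""
    vals = np.empty(N)
    for k in range(N):
        Z = np.column_stack([sample_z(rng) for _ in range(5)])
        vals[k] = abs(np.linalg.det(Z))
    return (np.pi ** 3 / 4) * vals.mean(), vals.var()

def chebychev_bound(N, eps, sigma2=360.0):
    """P(|mu_N - E#(E cap L)| >= eps) <= (pi^6/16) * sigma^2 / (N * eps^2),
    with the coarse variance bound sigma^2 <= 360 from Proposition 4.4."""
    return (np.pi ** 6 / 16) * sigma2 / (N * eps ** 2)

print(chebychev_bound(N=5e9, eps=0.05))   # ~ 0.0017 for the sample size used above
```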
The distribution of zeros shown in Fig. 1 gives rise to further questions of interest in computer vision. When applying the 5-point algorithm it is important to know when there are no real solutions. In Fig. 1, for 1000 sampled spaces, the distribution with respect to \(\textrm{Unif}(\mathbb {G})\) had 10 instances with no real solutions, and the distribution with respect to \(\psi \) had only 1 instance with no real solutions. The experiments indicate that having no real solutions is a relatively rare occurrence, but further work is needed to quantify and geometrically characterize these occurrences with respect to different distributions.
We remark that the distributions \(\textrm{Unif}(\mathbb G)\) and \(\psi \) are different in the following sense. For \(L\sim \textrm{Unif}(\mathbb G)\), every nonempty open subset of \(\mathbb G\) has positive probability. But when \(L\sim \psi \), L must be defined by 5 linear equations that are given by rank-one matrices of size 3. The Segre variety of rank-one matrices of size 3 in \(\mathbb R\mathrm P^8\) has dimension 4 (see Footnote 1 and [11, Section 4.3.5]), so a general linear space of codimension \(4=9-5\) in \(\mathbb R\mathrm P^8\), spanned by 5 general \(3\times 3\) matrices, intersects the Segre variety in finitely many points. There is a Euclidean open subset of \(\mathbb G\) on which this intersection has strictly fewer than 5 points. Hence, there is a measurable subset \(\mathcal W\subset \mathbb G\) such that \(P_{L\sim \textrm{Unif}(\mathbb G)}(L\in \mathcal W)>0\) but \(P_{L\sim \psi }(L\in \mathcal W)=0\).
In Sect. 5 we use a result by Vitale [16] to express the expected value in Theorem 1.3 through the volume of a certain convex body \(K\subset \mathbb {R}^5\). Namely,
where K is defined by its support function \(h_K(\textbf{x})= \tfrac{1}{2}\mathop {\mathrm {{\mathbb {E}}}}\limits _{\textbf{z}} \vert \textbf{x}^T\textbf{z}\vert \) and \(\textbf{z}\in \mathbb {R}^5\) is as above. The body K is a zonoid, and we call it the essential zonoid. We use this to prove a lower bound for the expected number of real points \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\) in Theorem 5.1.
The two probability distributions in Theorem 1.1 and Theorem 1.3 are geometric, meaning that they are not biased towards preferred points in \(\mathbb G\) or \(\mathbb R\mathrm P^2\), respectively. In applications, however, one might be interested in other distributions, like for instance taking the \(\textbf{u}_i\) and \(\textbf{v}_i\) uniformly in a box (see Examples 4.1 and 4.3 below). For such a case, we do not get concrete results like Theorem 1.1 or Theorem 1.3. Nevertheless, in Theorem 4.2 below we give a general integral formula.
1.1 Outline
In Sect. 2 we give preliminaries. We recall the integral geometry formula in projective space and study the geometry of the essential variety. In Sect. 3 we prove Theorem 1.1 by computing the volume of the essential variety. In Sect. 4 we prove Theorems 1.3 and 4.2. In the last section, Sect. 5, we study the essential zonoid.
2 Preliminaries
Let us start by setting up our notation as well as making note of many key volume computations used throughout the paper. We consider the Euclidean space \(\mathbb R^n\) with the standard metric \(\langle \textbf{x}, \textbf{y}\rangle = \textbf{x}^T\textbf{y}\). The norm of a vector \(\textbf{x}\in \mathbb R^n\) will be denoted by \(\Vert \textbf{x}\Vert :=\sqrt{\langle \textbf{x},\textbf{x}\rangle }\) and the unit sphere by \(\mathbb {S}^{n-1}:=\{\textbf{x}\in \mathbb R^n \mid \Vert \textbf{x}\Vert = 1\}\). The Euclidean volume of the sphere is
In particular \({\textrm{vol}}(\mathbb {S}^1) =2\pi \) and \({\textrm{vol}}(\mathbb {S}^2) = 4\pi \). The standard basis vectors in \(\mathbb {R}^n\) are denoted \(\textbf{e}_i\) for \(1\le i\le n\). The space of real \(n\times n\) matrices \(\mathbb R^{n\times n}\) is also endowed with a Euclidean structure
We denote the identity matrix \(\textrm{1}_n\in \mathbb R^{n\times n}\) and the zero matrix \(0_n\). The orthogonal group will be denoted by \(\textrm{O}(n)\), while the special orthogonal group is \(\textrm{SO}(n)\). Both the orthogonal and special orthogonal group are Riemannian submanifolds of \(\mathbb R^{n\times n}\). Volumes of the two manifolds are
see [9, Equation (3-15)]. For instance, \(\textrm{vol}(\textrm{SO}(2)) = 2\pi \) and \(\textrm{vol}(\textrm{SO}(3)) = 8\pi ^2\).
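For later use, these closed-form volumes are easily tabulated; a short sketch based on the standard formula \(\textrm{vol}(\mathbb {S}^{n-1})=2\pi ^{n/2}/\Gamma (n/2)\) and on \(\textrm{vol}(\mathbb R\mathrm P^{n-1})=\tfrac{1}{2}\textrm{vol}(\mathbb {S}^{n-1})\):

```python
from math import gamma, pi

def vol_sphere(n):
    """Volume (surface measure) of the unit sphere S^{n-1} in R^n."""
    return 2 * pi ** (n / 2) / gamma(n / 2)

def vol_rp(n):
    """Volume of RP^{n-1} with the metric induced by the 2:1 cover S^{n-1} -> RP^{n-1}."""
    return vol_sphere(n) / 2

assert abs(vol_sphere(2) - 2 * pi) < 1e-12    # vol(S^1) = 2*pi
assert abs(vol_sphere(3) - 4 * pi) < 1e-12    # vol(S^2) = 4*pi
assert abs(vol_rp(6) - pi ** 3 / 2) < 1e-12   # vol(RP^5) = pi^3/2, used in Sect. 3
```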
2.1 Integral Geometry
The real projective space of dimension \(n-1\) is defined to be \(\mathbb R\mathrm P^{n-1}:= (\mathbb R^{n}\setminus \{0\})/\sim \), where the equivalence relation is \(\textbf{x}\sim \textbf{y}\Leftrightarrow \exists \lambda \in \mathbb R: \textbf{x}=\lambda \textbf{y}\). The projection \(\pi :\mathbb {S}^{n-1} \rightarrow \mathbb R\mathrm P^{n-1}\) that maps \(\textbf{x}\) to its class is a 2 : 1 cover. It induces a Riemannian structure on \(\mathbb R\mathrm P^{n-1}\) by declaring \(\pi \) to be a local isometry.
Let now \(X\subseteq \mathbb R\mathrm P^{n-1}\) be a submanifold of dimension m and \(L\subseteq \mathbb R\mathrm P^{n-1}\) be a linear space of codimension m. Howard [9] proved that for almost all \(U\in \textrm{O}(n)\) we have that \(X\cap U \cdot L\) is finite and
see [9, Theorem 3.8 & Corollary 3.9]. This formula will be used for proving Theorem 1.1.
2.2 The Coarea Formula
The proof of (2.2) is based on the coarea formula, which we will also need. In order to state the formula we need to introduce the normal Jacobian. Let M, N be Riemannian manifolds with \(\dim (M)\ge \dim (N)\) and let \(F:M\rightarrow N\) be a surjective smooth map. Fix a point \(\textbf{x}\in M\). The normal Jacobian \(\textrm{NJ}(F,\textbf{x})\) of F at \(\textbf{x}\) is
where J is the matrix representation of the derivative \(\mathrm D_\textbf{x}F\) relative to orthonormal bases in \(T_\textbf{x}M\) and \(T_{F(\textbf{x})}N\). Then for any integrable function \(h:M\rightarrow \mathbb {R}\)
See, e.g., [9, Section A-2].
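For a map between Euclidean spaces with \(\dim (M)\ge \dim (N)\) the normal Jacobian is \(\sqrt{\det (JJ^T)}\), and it can be approximated with a finite-difference Jacobian; a small sketch illustrating the definition on a toy example:

```python
import numpy as np

def normal_jacobian(F, x, h=1e-6):
    """Approximate NJ(F, x) = sqrt(det(J J^T)) for F: R^m -> R^n with m >= n,
    where J is the (n x m) Jacobian, via central finite differences."""
    x = np.asarray(x, dtype=float)
    m, n = x.size, np.asarray(F(x)).size
    J = np.empty((n, m))
    for i in range(m):
        e = np.zeros(m); e[i] = h
        J[:, i] = (np.asarray(F(x + e)) - np.asarray(F(x - e))) / (2 * h)
    return np.sqrt(np.linalg.det(J @ J.T))

# Example: F(x) = ||x|| has a unit gradient away from 0, so NJ = 1.
F = lambda x: np.array([np.linalg.norm(x)])
print(normal_jacobian(F, np.array([1.0, 2.0, 2.0])))   # ~ 1.0
```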
2.3 The Geometry of the Essential Variety
In this subsection, we study in more detail the geometry of the essential variety \(\mathcal E\). Recall from (1.2) that \(\mathcal E\) is the projection of the cone \(\widehat{{\mathcal {E}}}\) to projective space \(\mathbb R\mathrm P^8\). We can also project \(\widehat{{\mathcal {E}}}\) to the sphere. This defines the spherical essential variety
Recall from (1.1) the definition of \(E(R, \textbf{t})\).
Lemma 2.1
The map \(E: \textrm{SO}(3)\times \mathbb {S}^2 \rightarrow \mathbb {R}^{3\times 3}, (R,\textbf{t})\mapsto E(R,\textbf{t})\) is 2:1 and \({\text {im}}(E) =\mathcal E_{\mathbb S}\).
Proof
Let \((R,\textbf{t})\in \textrm{SO}(3)\times \mathbb {S}^2 \). The matrix description of \([\textbf{t}]_\times \) is
In particular, this shows \(\textrm{Tr}\left( [\textbf{t}]_\times [\textbf{t}]_\times ^{T}\right) = 2\Vert \textbf{t}\Vert ^2 = 2\). Then, the norm squared of \(E(R,\textbf{t})\) is
Therefore, \({\text {im}}(E) =\mathcal E_{\mathbb S}\). Let \(M\in \textrm{SO}(3)\) be a matrix such that \(M\textbf{t}= \textbf{t}\) and \(M\textbf{x}=-\textbf{x}\) for all \(\textbf{x}\) orthogonal to \(\textbf{t}\), then we have \(M[-\textbf{t}]_\times = [\textbf{t}]_\times \) and we can write the following
This means that E is at least 2:1. To show it is at most 2 : 1, we consider the following
for some rotation M and \(\lambda \in \{\pm 1\}\). We want to check how many different rotation matrices M satisfy this equation. We have the following chain of implications
We see that the columns of \(1_3 - \lambda M\) are multiples of \(\textbf{t}\); therefore we can write \(1_3 - \lambda M = c\,\textbf{t}\textbf{t}^T\) for some \(c \in \mathbb {R}\). We make use of the fact that \(\det (M)=1\). First we compute the determinant
where we have used that \(\textbf{t}^T\textbf{t}=1\). This implies \(\lambda ^{3} = 1 - c\). If \(\lambda = 1\), then \(c=0\). If \(\lambda = -1\), then we have \(c = 2\). Thus, either \(M = 1_3\) or \(M = 2\textbf{t}\textbf{t}^T - 1_3\).
This is Rodrigues’ formula for a 180-degree rotation about the axis spanned by \(\textbf{t}\). Additionally, it is worth mentioning that this symmetry of the essential variety is exactly the twisted pair described in [8].
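The twisted-pair symmetry is easy to check numerically; a small sketch, assuming the convention \(E(R,\textbf{t})=[\textbf{t}]_\times R\), verifies that \(E(R,\textbf{t})=E(MR,-\textbf{t})\) for \(M=2\textbf{t}\textbf{t}^T-1_3\).

```python
import numpy as np

def skew(t):
    """The matrix [t]_x of the cross product with t."""
    return np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q           # some rotation matrix
t = rng.standard_normal(3)
t /= np.linalg.norm(t)
M = 2 * np.outer(t, t) - np.eye(3)              # rotation by 180 degrees about the axis t

assert np.allclose(M @ t, t) and np.isclose(np.linalg.det(M), 1.0)
assert np.allclose(skew(t) @ R, skew(-t) @ (M @ R))   # E(R, t) = E(M R, -t)
```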
Next, we show the invariance properties of the map E. For \(U,V\in \textrm{SO}(3)\) we denote
In particular, the next lemma shows that this defines a group action on \(\mathcal E_{\mathbb {S}}\).
Lemma 2.2
For orthogonal matrices \(U,V\in \textrm{SO}(3)\) and \((R,\textbf{t})\in \textrm{SO}(3)\times \mathbb {S}^2\) we have
Proof
We have \(E(URV^T, U\textbf{t}) = [U\textbf{t}]_\times URV^T = ([U\textbf{t}]_\times UR)V^T\). Moreover, the cross product satisfies \((U\textbf{t})\times (U\textbf{x})=U(\textbf{t}\times \textbf{x})\) for all \(\textbf{x}\in \mathbb R^3\), which means \([U\textbf{t}]_\times U = U[\textbf{t}]_\times \); hence \(E(URV^T, U\textbf{t}) = U[\textbf{t}]_\times R V^T = U\,E(R,\textbf{t})\,V^T\).
With the above lemma, we deduce the following result on \(\mathcal {E}_{\mathbb {S}}\).
Corollary 2.3
\(\mathcal E_{\mathbb S}\) is a homogeneous space for \(\textrm{SO}(3)\times \textrm{SO}(3)\) acting by left and right multiplication. In particular, \(\mathcal E_{\mathbb S}\), and hence also \(\mathcal E\), is smooth.
We now denote the following special matrix in \(\mathcal E\):
(recall that \(\textbf{e}_1\) denotes the first standard basis vector \((1,0,0)^T\)).
Lemma 2.4
The stabilizer group of \(E\in \mathcal E_{\mathbb {S}}\) under the \(\textrm{SO}(3)\times \textrm{SO}(3)\) action has volume equal to \(2\sqrt{2} \cdot \textrm{vol}(\textrm{SO}(2))\).
Proof
Since the \(\textrm{SO}(3)\times \textrm{SO}(3)\) action on \(\mathcal E_{\mathbb {S}}\) is transitive, all stabilizer groups are conjugate and hence have the same volume. We compute the stabilizer group of \(E_0\). By Lemma 2.1, the map \(E(R,\textbf{t})\) is 2:1 and by (2.4) we have
where \(M=\left[ {\begin{matrix} 1 &{} 0 &{} 0 \\ 0 &{} -1 &{} 0 \\ 0 &{} 0&{}-1\end{matrix}}\right] \). Therefore, \((U,V).E_0 = E_0\) if and only if \(U\textbf{e}_1 = \textbf{e}_1\) and \(UV^T = 1_3\), or \(U\textbf{e}_1 = -\textbf{e}_1\) and \(UV^T = M\); i.e., \(MU=V\). That is, \(\textrm{stab}(E_0)\) is realized as the image of the map \({F: \textrm{SO}(2)\times \{-1,1\}\rightarrow \textrm{SO}(3)\times \textrm{SO}(3)}\) such that
The normal Jacobian of F at every point is \(\sqrt{2}\): for fixed \(\varepsilon \), \(\textrm{SO}(2)\times \{\varepsilon \}\) is a homogeneous space under the action of \(\textrm{SO}(2)\) on itself, and this action is transitive and preserves the inner product, so the normal Jacobian is constant and it suffices to compute it at \((1_2,\varepsilon )\). To do so, recall that the tangent space to \(\textrm{SO}(3)\) at the identity is
for \(F_{i,j}=\textbf{e}_i\textbf{e}_j^T - \textbf{e}_j\textbf{e}_i^T\). Thus an orthogonal basis for the tangent space of \(\textrm{SO}(3)\times \textrm{SO}(3)\) at \((\mathrm 1_3,\mathrm 1_3)\) is given by
Indeed, with respect to this basis and identifying the tangent space of \(\textrm{SO}(2) \times \{-1,1\}\) with \(\mathbb {R}\), we have \(D_{(\textrm{1}_2, \varepsilon )}F = \begin{bmatrix} 0&0&\varepsilon&0&0&1\end{bmatrix}^T\) and thus
We conclude by using the coarea formula (2.3) for \(M= \textrm{SO}(2) \times \{-1,1\},\) \(N= \textrm{stab}(E_0)\), \(h \equiv \sqrt{2}\), and \(F^{-1}(y)\) a single point by injectivity to obtain \(\textrm{vol}(\textrm{stab}(E_0)) = 2\sqrt{2} \cdot \textrm{vol}(\textrm{SO}(2)).\)
Next, we compute an orthonormal basis of the tangent space \(T_{E_0}\mathcal {E}\) at \(E_0\).
Lemma 2.5
An orthonormal basis of \(T_{E_0}\mathcal E\) is given by the following five matrices
Proof
First, we observe that the five matrices above are pairwise orthogonal and all of norm one. Since \(\dim \mathcal E=5\), it therefore suffices to show that \(B_1,\ldots ,B_5\in T_{E_0}\mathcal E = T_{E_0}\mathcal E_{\mathbb {S}}\). The derivatives of E evaluated at \((\textrm{1}_3, \dot{\textbf{t}})\) and \((\dot{R}, \textbf{e}_1)\), respectively, are
We have \(T_{\textbf{e}_1} \mathbb {S}^2 = \textrm{span}\{\textbf{e}_2, \textbf{e}_3\}\) and \(T_{\textrm{1}_3}\textrm{SO}(3)=\textrm{span}\{F_{1,2}, F_{1,3}, F_{2,3}\}\), where \(F_{i,j}=\textbf{e}_i\textbf{e}_j^T - \textbf{e}_j\textbf{e}_i^T\) as above. Therefore, the following five matrices are in \(T_{E_0}\mathcal E\):
Each of the \(B_i\) above can be expressed as a linear combination of these five matrices, which shows \(B_i\in T_{E_0}\mathcal E\).
Alternatively, to prove Lemma 2.5 we consider the derivative of the smooth surjective map \(\gamma : \textrm{SO}(3)\times \textrm{SO}(3) \rightarrow \mathcal {E}_{\mathbb {S}}, (U, V)\mapsto (U, V).E_0\). Since the basis for the tangent space of \(\textrm{SO}(3)\times \textrm{SO}(3)\) at \((\mathrm 1_3,\mathrm 1_3)\) is given as in (2.6), the tangent space \(T_{E_0}\mathcal E\) is also spanned by the following six matrices
3 The Volume of the Essential Variety
In this section, we prove Theorem 1.1. The strategy is as follows. By Corollary 2.3, \(\mathcal E\) is a smooth submanifold of \(\mathbb R\mathrm P^8\). We can apply the integral geometry formula (2.2) to get
Thus, to prove Theorem 1.1 we can compute the volume of \(\mathcal E\). We do this in the next theorem. Notice that the result of the theorem, when plugged into (3.1), immediately proves Theorem 1.1.
Theorem 3.1
The volume of the essential variety is
We give two different proofs of this theorem. Since \({\textrm{vol}}(\mathcal {E})=\tfrac{1}{2}\,{\textrm{vol}}(\mathcal {E}_{\mathbb {S}})\), it is enough to compute the latter volume.
Proof 1
By Lemma 2.1, we realize \(\mathcal {E}_\mathbb {S}\) as the image of the smooth map \((R,\textbf{t})\mapsto E(R,\textbf{t})\), which we now view as a surjective map onto its image. By Lemma 2.2, \(\textrm{NJ} (E, (R,\textbf{t}))\) is invariant under the action by \(\textrm{SO}(3)\times \textrm{SO}(3)\). Applying the coarea formula (2.3) over the 2-element fibers of E, we get that
This implies
Recall, \(F_{i,j}=\textbf{e}_i\textbf{e}_j^T - \textbf{e}_j\textbf{e}_i^T\). With respect to the orthonormal basis \(\{B_i\}\) computed in Lemma 2.5 and the orthonormal basis \(\{(0_3,\textbf{e}_2), (0_3,{\textbf{e}_3}), ( F_{1,2}, \textbf{0}), (F_{1,3},\textbf{0} ), ( F_{2,3},\textbf{0})\}\) computed for \( T_{\mathrm 1_3}\textrm{SO}(3)\times T_{\textbf{e}_1}\mathbb {S}^2\), the columns of the matrix J associated to the derivative of E at \((\mathrm 1_3,\textbf{e}_1)\) are the basis elements of \( T_{\mathrm 1_3}\textrm{SO}(3)\times T_{\textbf{e}_1}\mathbb {S}^2\) written as a combination of the basis given by Lemma 2.5:
So, we have that \(\textrm{NJ} (E, (\mathrm 1_3,\textbf{e}_1)) = \sqrt{\det JJ^T} = \frac{1}{4}\), and consequently \({\textrm{vol}}(\mathcal {E}_{\mathbb {S}}) = 4\pi ^3\). Therefore, we have \({\textrm{vol}}(\mathcal {E}) = 2\pi ^3\). By (2.1), \(\textrm{vol}(\mathbb R\mathrm P^5) = \frac{1}{2}\cdot \textrm{vol}(\mathbb {S}^5) = \frac{\pi ^3}{2}\), so \(\textrm{vol}(\mathcal E) = 4 \cdot \textrm{vol}(\mathbb R\mathrm P^5)\).
Proof 2
By Corollary 2.3, \(\mathcal {E}_{\mathbb {S}}\) is a homogeneous space under the action of \(\textrm{SO}(3)\times \textrm{SO}(3)\). We therefore have the surjective smooth map \(\gamma : \textrm{SO}(3)\times \textrm{SO}(3) \rightarrow \mathcal {E}_{\mathbb {S}}, (U,V)\mapsto (U,V).E_0\) with fibers that satisfy \(\textrm{vol}(\gamma ^{-1}(E)) = 2\sqrt{2}\cdot \textrm{vol}(\textrm{SO}(2))\) for all \(E\in \mathcal E_{\mathbb {S}}\); see Lemma 2.4. The coarea formula from (2.3) implies
By Lemma 2.2, the map \(\gamma \) is equivariant with respect to the \(\textrm{SO}(3)\times \textrm{SO}(3)\) action. This implies that the value of the normal Jacobian does not depend on (U, V). Therefore, we have \(\textrm{vol}(\mathcal {E}_{\mathbb {S}})\cdot 2\sqrt{2}\cdot \textrm{vol}(\textrm{SO}(2)) = \textrm{NJ}(\gamma , (\mathrm 1_3,\mathrm 1_3)) \cdot \textrm{vol}(\textrm{SO}(3))^2,\) and so
We compute the normal Jacobian. Recall the notation \(F_{i,j}=\textbf{e}_i\textbf{e}_j^T - \textbf{e}_j\textbf{e}_i^T\).
With respect to the orthonormal basis computed in Lemma 2.5 and the orthonormal basis as in (2.6) for the tangent space of \(\textrm{SO}(3)\times \textrm{SO}(3)\) at \((\mathrm 1_3,\mathrm 1_3)\), the columns of the matrix J associated to the derivative of \(\gamma \) at \((\mathrm 1_3,\mathrm 1_3)\) are given by writing the matrices in (2.7) with respect to the basis in Lemma 2.5:
Taking the determinant we obtain \( \textrm{NJ}(\gamma , (\mathrm 1_3,\mathrm 1_3)) = \sqrt{\det JJ^T} = \frac{1}{\sqrt{8}}.\) We get \(\textrm{vol}(\mathcal E_{\mathbb {S}}) = 4\pi ^3.\) As above, this implies \(\textrm{vol}(\mathcal E) = 4 \cdot \textrm{vol}(\mathbb R\mathrm P^5)\).
Another important notion in the context of relative pose problems in computer vision is the so-called fundamental matrix; see, e.g., [8, Section 9]. While essential matrices encode the relative pose of calibrated cameras, fundamental matrices encode the relative position of uncalibrated cameras. Fundamental matrices are precisely the matrices of rank two. So, similar to Theorem 3.1, the average degree of fundamental matrices is given by the normalized volume of the manifold of rank two matrices \(\mathcal F \subset \mathbb R\mathrm P^8\). The volume was computed by Beltrán in [1]: \(\textrm{vol}(\mathcal F) = \frac{\pi ^4}{3} = 2\cdot \textrm{vol}(\mathbb R\mathrm P^7).\) Notice that \(\dim \mathcal F =7\). We get
(here, \(L=U\cdot L_0, U\sim \textrm{Unif}(\mathrm O(9))\), is a random uniform line in \(\mathbb R\mathrm P^8\)).
Thus, the average degree of the manifold of fundamental matrices is 2, while the degree of its complexification is 3.
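This average degree is easy to confirm by a quick Monte Carlo sketch: the span of two i.i.d. Gaussian \(3\times 3\) matrices A and B is a uniformly distributed projective line, and its real intersection points with \(\mathcal F\) correspond to the real roots of \(\det (A+sB)=0\), i.e., to the real eigenvalues of \(-B^{-1}A\).

```python
import numpy as np

rng = np.random.default_rng(3)
N, total = 100_000, 0
for _ in range(N):
    A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    # real points of {det = 0} on the projective line spanned by A and B
    # are the real roots of det(A + s*B) = 0, i.e. real eigenvalues of -B^{-1}A
    ev = np.linalg.eigvals(-np.linalg.solve(B, A))
    total += np.sum(np.abs(ev.imag) <= 1e-8 * (1 + np.abs(ev)))
print(total / N)   # ~ 2.0
```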
4 Average Number of Relative Poses
In this section we prove Theorem 1.3. Let \(\Psi :(\mathbb R\mathrm P^2)^{\times 10}\rightarrow \mathbb {R}\) be a measurable function and denote \(\textbf{p}:= (\textbf{u}_1,\textbf{v}_1,\ldots ,\textbf{u}_5,\textbf{v}_5)\in (\mathbb R\mathrm P^2)^{\times 10},\) where \((\mathbb R\mathrm P^2)^{\times 10}\) denotes the Cartesian product of \(\mathbb R\mathrm P^2\) with itself 10 times. We consider the following expected value for the number of real solutions to the relative pose problem
For \(\Psi (\textbf{p})=1\), the constant one function, \(\mu =\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\). In the general case, \(\mu \) is the expected value of \(\# (\mathcal E\cap L)\) for a probability distribution with probability density \(\Psi (\textbf{p})\).
Example 4.1
We regard \(\mathbb {R}^2\) as a subset of \(\mathbb R\mathrm P^2\) by using the embedding \(\phi :\mathbb {R}^2\rightarrow \mathbb R\mathrm P^2\) such that \( \textbf{u}:=\phi (\textbf{y})=[\textbf{y}: 1]\). Consider the case when \(\textbf{y}\in \mathbb {R}^2\) is chosen uniformly in the box \(B:=[a,b]\times [c,d]\subset \mathbb {R}^2\). We compute the probability density of \(\textbf{u}\) relative to the uniform measure on \(\mathbb R\mathrm P^2\). The probability density of \(\textbf{y}\) relative to the Lebesgue measure in \(\mathbb {R}^2\) is \(\frac{1}{(b-a)(d-c)}\cdot \delta _B(\textbf{y})\), where \(\delta _B(\textbf{y})\) is the indicator function of the box B. Let \(W\subset \phi (B)\) be a measurable subset, then \({P}(\textbf{u}\in W)={P}(\textbf{y}\in \phi ^{-1}(W))=\int _{\phi ^{-1}(W)}\frac{1}{(b-a)(d-c)}\; \textrm{d}\textbf{y}\). Using the coarea formula (2.3) we express the probability of W as
Therefore, the probability density of \(\textbf{u}\) is \(\left( (b-a)(d-c)\cdot \textrm{NJ}(\phi ,\textbf{y})\right) ^{-1}\). Let us compute the normal Jacobian of the map \(\phi \). Since we can work locally, we compute the derivative of the map \( \textbf{y}\mapsto \textbf{s}:= \frac{1}{\sqrt{y_1^2+y_2^2+1}}(y_1,y_2,1) \in \mathbb {S}^2\). The derivative of this map relative to the standard basis in \(\mathbb {R}^2\) and \(\mathbb {R}^3\) is expressed by the matrix
The tangent space of the sphere is \(T_{\textbf{s}}\mathbb {S}^2=\textbf{s}^\perp \). Let \(P_{\textbf{s}}=\textrm{1}_3-\textbf{s}\textbf{s}^T\) be the projection onto \(\textbf{s}^\perp \). To get the derivative relative to an orthonormal basis of \(\textbf{s}^\perp \), we have to multiply the above matrix from the left with \(P_{\textbf{s}}\). We get
We have \(\sqrt{\det M^TM}=\vert \langle \textbf{s}, \textbf{e}_3 \rangle \vert ^3\). This implies that the probability density of \(\textbf{u}\) is given by
where \(\alpha \) is the angle between the lines through \(\textbf{u}\) and \(\textbf{e}_3\).
Let us write \(g(\textbf{u}):=\frac{u_1^2+u_2^2+u_3^2}{u_3^2} \cdot \frac{1}{\cos {\alpha }}\). If for \(1\le i\le 5\) we choose independently \(\textbf{u}_i\) from the box \([a_i,b_i]\times [c_i,d_i]\) and \(\textbf{v}_i\) from the box \([a_i',b_i']\times [c_i',d_i']\) we obtain the density \(\Psi (\textbf{p})\) with
when \(\textbf{p}\) is in the product of boxes, and \(\Psi (\textbf{p})=0\) otherwise.
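The normal Jacobian in this example can be double-checked numerically; a small sketch comparing a finite-difference value of \(\textrm{NJ}(\phi ,\textbf{y})\) with \(\vert \langle \textbf{s},\textbf{e}_3\rangle \vert ^3 = \cos ^3\alpha \):

```python
import numpy as np

def emb(y):
    """y in R^2  ->  unit vector on S^2 representing the point [y_1 : y_2 : 1]."""
    v = np.array([y[0], y[1], 1.0])
    return v / np.linalg.norm(v)

def nj(F, x, h=1e-6):
    """sqrt(det(J^T J)) for a finite-difference Jacobian J; since the columns of J
    span the tangent plane of the image, this equals NJ w.r.t. orthonormal bases."""
    J = np.column_stack([(F(x + h * e) - F(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])
    return np.sqrt(np.linalg.det(J.T @ J))

y = np.array([0.7, -1.3])
s = emb(y)
print(nj(emb, y), abs(s[2]) ** 3)   # both equal cos^3(alpha), up to ~1e-9
```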
We will also denote \(\Psi :(\mathbb {R}^3\setminus \{0\})^{\times 10}\rightarrow \mathbb {R}\) defined by \(\Psi (\textbf{u}_1,\ldots ,\textbf{v}_5):=\Psi (\pi (\textbf{u}_1),\ldots ,\pi (\textbf{v}_5)),\) where \(\pi :\mathbb {R}^3\setminus \{0\}\rightarrow \mathbb R\mathrm P^2\) is the projection. It will be convenient to replace the uniform random variables in \(\mathbb R\mathrm P^2\) by Gaussian random variables in \(\mathbb R^3\), see [5, Remark 2.24]:
Again, \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\) is recovered by setting \(\Psi (\textbf{p})=1\) in (4.1). We denote the Gaussian density by \(\Phi (\textbf{p})=(2\pi )^{-15} \exp (-\tfrac{1}{2} \sum _{i=1}^5\Vert \textbf{u}_i\Vert ^2 + \Vert \textbf{v}_i\Vert ^2 )\).
The proof of Theorem 1.3 consists of three steps, separated into three subsections. In the first two subsections we compute the normal Jacobian and apply the coarea formula; this alone does not yet yield an explicit, practical formula. In the final subsection we therefore pass to a new parametrization, which leads to the expression stated in Theorem 1.3.
4.1 The Incidence Variety
The incidence variety is
This is a real algebraic subvariety of \(({{\mathbb R^3{\setminus } \{0\}}})^{\times 10}\times \mathcal E\). Recall from Lemma 2.2 that \(\textrm{SO}(3)\times \textrm{SO}(3)\) acts transitively on \(\mathcal E\) by left and right multiplication. This extends to a group action on \(\mathcal I\) via \((U,V).(\textbf{p},E):= (U\textbf{u}_1,V\textbf{v}_1,\ldots ,U\textbf{u}_5,V\textbf{v}_5,\ UEV^T).\) Let \(E_0:=E(\mathrm 1_3, \textbf{e}_1)\) be as in (2.5) and let us denote the quadric
where \(\textbf{u}=(u_1,u_2,u_3)^T\) and \(\textbf{v}=(v_1,v_2,v_3)^T\). We denote its zero set by
Since \(\mathcal E\) is an orbit of the \(\textrm{SO}(3)\times \textrm{SO}(3)\) action, \(\mathcal I = \bigcup _{(U,V) \in \textrm{SO}(3)\times \textrm{SO}(3)} \; (U,V).(Q^{\times 5}\times \{E_0\}).\) Let us denote \(\widetilde{Q}:=\{(\textbf{u},\textbf{v})\in Q\mid \textbf{u},\textbf{v}\not \in \mathbb R\textbf{e}_1\}\). This is a Zariski open subset of Q. Let
We prove that \(\widetilde{{\mathcal {I}}}\) is smooth by showing that the Jacobian matrix of the system of equations \(\textbf{u}_i^T{E}\textbf{v}_i=0,\) for \(i=1,\ldots ,5\) has full rank at every point in \(\widetilde{{\mathcal {I}}}\); see, e.g., [5, Theorem A.9].
The Jacobian matrix of q is the \(1\times 6\) matrix \(J(\textbf{u},\textbf{v}):= \begin{bmatrix} 0&-v_3&v_2&0&u_3&-u_2\end{bmatrix}\). Denote
For \((\textbf{p}, E_0)\in \widetilde{{\mathcal {I}}}\) the matrix A has full rank. Since the image of A is contained in the image of the Jacobian matrix of \(\textbf{u}_i^T{E}\textbf{v}_i=0, i=1,\ldots ,5\), we see that the latter has full rank. Therefore, \(\widetilde{{\mathcal {I}}}\) is smooth.
4.2 Computing the Normal Jacobian
On \(\mathcal I\) we have the two coordinate projections \(\pi _1: \mathcal I \rightarrow (\mathbb R^3{\setminus }\{0\})^{\times 10}\) and \(\pi _2:\mathcal I\rightarrow \mathcal E\). The projection \(\pi _2\) is surjective, but \(\pi _1\) is not, since out of the 10 complex solutions of the system of equations \(\textbf{u}_i^T{E}\textbf{v}_i=0, i=1,\ldots ,5\), there can be 0 real solutions. Notice that \({\text {im}}(\pi _1)\) is a full-dimensional semi-algebraic set. Let \(\mathcal U\) be the interior of \({\text {im}}(\pi _1)\). Then, \(\mathcal U\) is an open set, hence measurable. Integrating over \({\text {im}}(\pi _1)\) is the same as integrating over \(\mathcal U\). We therefore have, using (4.1),
Let us also denote \(\widetilde{{\mathcal {U}}}:=\pi _1(\widetilde{{\mathcal {I}}})\). Consider a point \(\textbf{p}\in \mathcal U\setminus \widetilde{{\mathcal {U}}}\) and suppose that \((\textbf{p}, E)\in \mathcal I\). Let \((U,V)\in \textrm{SO}(3)\times \textrm{SO}(3)\) such that \((U,V).E=E_0\). Since \(\widetilde{Q}\) is Zariski open in Q, every neighborhood of \((U,V).\textbf{p}\) intersects \(\widetilde{Q}\). Consequently, every neighborhood of \(\textbf{p}\) intersects \(\widetilde{{\mathcal {U}}}\). This means that \(\widetilde{{\mathcal {U}}}\) is dense in \(\mathcal U\) in the Euclidean topology. Hence, in (4.3) we can replace \(\mathcal U\) by \(\widetilde{{\mathcal {U}}}\) to get
We have shown in the previous subsection that \(\widetilde{{\mathcal {I}}}\) is a smooth manifold. We may therefore apply the coarea formula from (2.3) twice, first to \(\pi _1\) and then to \(\pi _2\), to get
Let now \((U,V)\in \textrm{SO}(3)\times \textrm{SO}(3)\) such that \(UEV^T = E_0\). It follows from Lemma 2.2 that \(\pi _1,\pi _2\) are equivariant, which implies that \(\textrm{NJ}(\pi _{i},(\textbf{p},E)) = \textrm{NJ}(\pi _{i}, (U,V).(\textbf{p},E)), i=1,2\). Furthermore, the Gaussian density \(\Phi (\textbf{p})\) is also invariant under the \(\textrm{SO}(3)\times \textrm{SO}(3)\) action. The fiber over \(E_0\) is \(\pi _2^{-1}(E_0) = \widetilde{Q}^{\times 5}\times \{E_0\}\), which is open dense in \(Q^{\times 5}\times \{E_0\}\). So,
where \((U,V)\in \textrm{SO}(3)\times \textrm{SO}(3)\) is such that \(E=(U,V).E_0\). The ratio of normal Jacobians is computed next.
Recall from (4.2) the definition of the matrix \(A\in \mathbb {R}^{5\times 30}\). For \(B_1,\ldots ,B_5\) the basis from Lemma 2.5 we denote
Then, the tangent space of \(\widetilde{{\mathcal {I}}}\) at \((\textbf{p}, E_0)\) is defined by the linear equation \(A\dot{\textbf{p}} + B\dot{E}=0\). Therefore, when B is invertible, \(-B^{-1}A\) is a matrix representation for \(\mathrm D_{(\textbf{p},E_0)}\pi _2\circ (\mathrm D_{(\textbf{p},E_0)}\pi _1)^{-1}\) with respect to orthonormal bases. So,
When B is not invertible, \(\textrm{NJ}(\pi _1,(\textbf{p},E_0))=0\) and the formula in (4.5) also holds.
4.3 Integration on the Quadric
We plug (4.5) into (4.4) and obtain
We denote \(f(\textbf{u},\textbf{v}):=u_2^2 + u_3^2 + v_2^2 + v_3^2\) for \(\textbf{u}=(u_1,u_2,u_3), \textbf{v}=(v_1,v_2,v_3)\). Then,
We have \((\textbf{u},\textbf{v})\in Q\) if and only if \((u_2,u_3)\) is a multiple of \((v_2,v_3)\). Therefore, we have the following 2 : 1 parametrization:
The Jacobian matrix of \(\phi \) is
Then, \(\textrm{NJ}(\phi , (a,b,r,s,\theta )) = \sqrt{\det (J^TJ)}\) and
Let us denote \(\textbf{a}:=(a_i,b_i,r_i,s_i,\theta _i)_{i=1}^5\). We get:
Notice that \(\Phi (\phi (\textbf{a})) = \tfrac{1}{(2\pi )^{5}}\,\tfrac{1}{(2\pi )^{10}}\,\exp (-\tfrac{1}{2}\sum _{i=1}^5 (a_i^2 + b_i^2 + r_i^2 + s_i^2))\) is the probability density, such that \(a_i,b_i,r_i,s_i\) are all standard normal and \(\theta _i\) is uniform in \([0,2\pi )\) for every i, and all variables are independent. We can therefore rephrase (4.6) as
The rows of B are all of the form
This shows that \(\vert \det (B)\vert \sim 4 \cdot \vert \det \begin{bmatrix} \textbf{z}_1&\ldots&\textbf{z}_5\end{bmatrix}\vert \), where \(\textbf{z}_1,\ldots , \textbf{z}_5\sim \textbf{z}\) i.i.d. for
We state a general integral formula.
Theorem 4.2
With the notation above, we have that the expected value \(\mu = \mathop {\mathrm {{\mathbb {E}}}}\limits \#(\mathcal E\cap L)\), where the distribution of L is defined by a nonnegative measurable function \(\Psi :(\mathbb R\mathrm P^2)^{\times 10}\rightarrow \mathbb {R}\), is given by
where \((U,V)\in \textrm{SO}(3)\times \textrm{SO}(3)\) is such that \(E=(U,V).E_0\), and the first expected value is over the uniform distribution on \(\mathcal E\). The second expected value is over \(\textbf{a}=(a_i,b_i,r_i,s_i,\theta _i)_{i=1}^5\) with \(a_i,b_i,r_i,s_i\sim N(0,1)\), \(\theta _i\sim \textrm{Unif}([0,2\pi ))\), all independent.
We continue Example 4.1 by computing the distribution and approximating the mean value.
Example 4.3
As in Example 4.1 we consider the case when the points in \(\mathbb R^2\) defining the \(\textbf{u}_i\) and \(\textbf{v}_i\) are sampled i.i.d. from the box \([-5,5]\times [-5,5]\subset \mathbb R^2\). Figure 2 shows the empirical distribution of the number of real zeros and an empirical mean of \(\approx 3.788\). We can also approximate the average number of real zeros using Theorem 4.2.
We sample from the probability density \(\Psi ((U,V).\phi (\textbf{a}))\) in Theorem 4.2 using the basic version of the Metropolis-Hastings algorithm (see, e.g., [14]). For this, we use the proposal density for \((E,\textbf{a})\) such that \(\textbf{a}\) is as above and \(E\in \mathcal E\) is uniform. We computed a corresponding Markov chain with \(10^6\) states. The Metropolis-Hastings algorithm rejected all but 796 of those states. The empirical mean computed from the 796 states is \(\approx 3.5563\).
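For reference, a generic sketch of such an independence Metropolis-Hastings sampler; `sample_proposal` and `weight` are hypothetical placeholders for the proposal distribution on \((E,\textbf{a})\) and for the (unnormalized) density \(\Psi ((U,V).\phi (\textbf{a}))\) of the target relative to the proposal.

```python
import numpy as np

def independence_mh(sample_proposal, weight, n_steps, rng):
    """Independence Metropolis-Hastings: propose from a fixed distribution and
    accept a move x -> x' with probability min(1, weight(x') / weight(x)),
    where weight(x) is the (unnormalized) density of the target relative to
    the proposal distribution."""
    x = sample_proposal(rng)
    w = weight(x)
    chain, accepted = [x], 0
    for _ in range(n_steps - 1):
        x_new = sample_proposal(rng)
        w_new = weight(x_new)
        if rng.uniform() * w < w_new:   # accept with probability min(1, w_new / w)
            x, w = x_new, w_new
            accepted += 1
        chain.append(x)
    return chain, accepted
```

In the setting above one would take `sample_proposal` to draw \(E\in \mathcal E\) uniformly and \(\textbf{a}\) from the Gaussian/uniform distribution described before, and `weight` proportional to \(\Psi ((U,V).\phi (\textbf{a}))\).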
Let us now work towards proving Theorem 1.3. In the setting of Theorem 1.3 we have \(\Psi (\textbf{p})=1\) and thus, by Theorem 4.2, \( \mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L) = 2^{-3}\cdot \textrm{vol}(\mathcal E)\cdot \mathop {\mathrm {{\mathbb {E}}}}\limits \left| \det \begin{bmatrix} \textbf{z}_1&\textbf{z}_2&\textbf{z}_3&\textbf{z}_4&\textbf{z}_5\end{bmatrix}\right| . \) We have shown in Theorem 3.1 that \({\textrm{vol}}(\mathcal E) = 4\cdot \textrm{vol}(\mathbb R\mathrm P^5) = 2 \pi ^3\). Consequently,
as stated in Theorem 1.3.
We close this section by giving an (extremely coarse) upper bound on the variance of the random determinant. This bound is used for applying Chebychev’s inequality in the introduction.
Proposition 4.4
\(\textrm{Var}\left( \ \left| \det \begin{bmatrix} \textbf{z}_1&\textbf{z}_2&\textbf{z}_3&\textbf{z}_4&\textbf{z}_5\end{bmatrix}\right| \ \right) \le 360\).
Proof
Let D denote the random absolute determinant. We have \(\textrm{Var}(D)\le \mathop {\mathrm {{\mathbb {E}}}}\limits D^2\). Expanding the determinant with Laplace expansion, multiplying out the square, and taking the expected value we see that all mixed terms (that is, all terms which are not a square) average to 0 because the distributions of a, b, r, s are symmetric around 0. This implies
where we have used that \(\mathop {\mathrm {{\mathbb {E}}}}\limits _\theta \cos ^2\theta = \mathop {\mathrm {{\mathbb {E}}}}\limits _\theta \sin ^2\theta = \tfrac{1}{2}\).
5 The Essential Zonoid
Vitale [16] showed that the expected absolute determinant of a random matrix can be expressed as the volume of a convex body, more specifically of a zonoid. Zonoids are limits of zonotopes in the Hausdorff topology on the space of all convex bodies, and zonotopes are Minkowski sums of line segments; see [15] for more details.
Notice that the probability distribution of \(\textbf{z}\) from (4.7) is invariant under multiplying by \(-1\); i.e., \(\textbf{z}\sim -\textbf{z}\). In this case, based on Vitale’s result, it was shown in [2, Theorem 5.4] that \(\mathop {\mathrm {{\mathbb {E}}}}\limits \left| \det \begin{bmatrix} \textbf{z}_1&\textbf{z}_2&\textbf{z}_3&\textbf{z}_4&\textbf{z}_5\end{bmatrix}\right| = 5!\cdot \textrm{vol}(K)\), where \(K\subset \mathbb R^5\) is the convex body with support function \(h_K(\textbf{x}) = \tfrac{1}{2}\mathop {\mathrm {{\mathbb {E}}}}\limits \vert \langle \textbf{x},\textbf{z}\rangle \vert \). So
We call K the essential zonoid.
In the remainder of this section, we bound \(h_K(\textbf{x})\) from below to find a convex body whose volume gives a lower bound for \(\textrm{vol}(K)\). This gives, using (5.1), the following result.
Theorem 5.1
\(\displaystyle \mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L) \ge 0.93\).
Note that the proof of this lower bound relies on numerical computations.
Remark 5.2
The value of 0.93 is not close to the experimental value of 3.95 from the introduction. To get a lower bound closer to 3.95 one would need to understand the support function of K at points \(\textbf{x}=(x_1,\ldots ,x_5)\in \mathbb R^5\) where all entries are nonzero. In the computation below we always either have \(x_1=x_2=0\) or \(x_3=x_4=0\). For such points we can work with the function that maps \(\textbf{x}\) to the vector of norms \(\varvec{\rho }=(\rho _1,\rho _2,\rho _3)\), where \(\rho _1=\sqrt{x_1^2+x_2^2}, \rho _2 = \sqrt{x_3^2+x_4^2}\) and \(\rho _3 = \vert x_5\vert \). However, if all entries of \(\textbf{x}\) are nonzero, then the angle between the two points \((x_1,x_2),(x_3,x_4)\in \mathbb R^2\) also plays a role, not just their norms. We were not able to find a lower bound for \(h_K(\textbf{x})\) in this case. We nevertheless prove Theorem 5.1 for completeness.
We will need the following lemma.
Lemma 5.3
We have
(1) \(\displaystyle \mathop {\mathrm {{\mathbb {E}}}}\limits _{\xi \sim N(0,\sigma ^2)} \vert \xi \vert =\sigma \sqrt{\tfrac{2}{\pi }}\);
(2) \(\displaystyle \mathop {\mathrm {{\mathbb {E}}}}\limits _{\theta \sim \textrm{Unif}([0,2\pi ))} \vert \cos \theta \vert = \tfrac{2}{\pi }.\)
Proof
The first formula is proved by using \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{\xi \sim N(0,1)} \vert \xi \vert = 2\int _{0}^\infty x \cdot \tfrac{1}{\sqrt{2\pi }}e^{-\frac{1}{2}x^2}\;\mathrm d x = \sqrt{\tfrac{2}{\pi }},\) and then \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{\xi \sim N(0,\sigma ^2)} \vert \xi \vert =\sigma \mathop {\mathrm {{\mathbb {E}}}}\limits _{\xi \sim N(0,1)} \vert \xi \vert \). The second is \(\mathop {\mathrm {{\mathbb {E}}}}\limits \vert \cos \theta \vert = 4\int _0^{\frac{\pi }{2}} \cos \theta \cdot \tfrac{1}{2\pi }\; \mathrm d \theta = \tfrac{2}{\pi }.\)
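Both constants are quickly confirmed by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal(10**6)                 # xi ~ N(0, 1)
theta = rng.uniform(0.0, 2 * np.pi, 10**6)      # theta ~ Unif[0, 2pi)
print(np.abs(xi).mean(), np.sqrt(2 / np.pi))    # ~ 0.7979 each
print(np.abs(np.cos(theta)).mean(), 2 / np.pi)  # ~ 0.6366 each
```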
Let us have a closer look at the support function.
where C is the \(2\times 2\) matrix
Let \(\sigma _1\ge \sigma _2\ge 0\) denote the two singular values of C. The Gaussian vectors (a, r) and (b, s) are invariant under rotations. Therefore, \(h_K(\textbf{x}) = \frac{1}{2}\mathop {\mathrm {{\mathbb {E}}}}\limits \vert \sigma _1ab + \sigma _2rs\vert \). The law of adding Gaussians implies that for fixed a, r and random b, s we have \(\sigma _1 ab + \sigma _2 rs\sim N(0, \sigma _1^2a^2 + \sigma _2^2r^2)\). We now keep a, r fixed and take the expectation with respect to b, s. This gives, using the first formula from Lemma 5.3,
For \(\textbf{x}\in \mathbb {R}^5\) let us write
From (5.2) we have \(h_K(\textbf{x}) \ge \frac{1}{\sqrt{2\pi }}\,\mathop {\mathrm {{\mathbb {E}}}}\limits _{a,\theta } \vert \sigma _1a\vert \) as \(\sigma _2^2r^2\ge 0\). Since \(\sigma _1\) does not depend on a and \(a,\theta \) are independent, this gives \(h_K(\textbf{x})\ge \frac{1}{\sqrt{2\pi }}\,\mathop {\mathrm {{\mathbb {E}}}}\limits _{a} \vert a\vert \mathop {\mathrm {{\mathbb {E}}}}\limits _\theta \vert \sigma _1\vert \). Using Lemma 5.3 we get
The larger singular value \(\sigma _1\) can be expressed as
Therefore,
the last equality by rotational invariance and Lemma 5.3. Similarly, \(h_K(\textbf{x}) \ge \tfrac{2}{\pi ^2}\rho _2,\) and also \(h_K(\textbf{x}) \ge \tfrac{1}{\pi } \rho _3\).
We recall the definition of the elliptic integral of the second kind
and define
Then, we have
Similarly, we have \(h_K(\textbf{x}) \ge F(\rho _2,\rho _3).\)
Let \(L\subset \mathbb R^3\) be the convex body whose support function is
and define \(\varphi : \mathbb R^5\rightarrow \mathbb R^3_{\ge 0},\; \textbf{x}\mapsto \varvec{\rho }\), and
We have thus shown that \(h_K(\textbf{x})\ge h_{\varphi ^{-1}(L)}(\textbf{x}).\) Since
this means \(\varphi ^{-1}(L)\subset K\).
For every point \(\textbf{x}\in \mathbb R^5\) we have \(\textrm{NJ}(\varphi ,\textbf{x})=\rho _1\cdot \rho _2\). For a fixed \(\varvec{\rho }\in \mathbb R^3\) the fiber \(\varphi ^{-1}(\varvec{\rho })\) consists of the product of two circles (all points \(\textbf{x}\) with \(\sqrt{x_1^2+x_2^2}=\rho _1\) and \(\sqrt{x_3^2+x_4^2}=\rho _2\)) and the two points \(x_5=\pm \rho _3\). Therefore, the fibers of \(\varphi \) have volume \(2 \textrm{vol}(\mathbb {S}^1)^2 = 2(2\pi )^2\). Then, by the coarea formula (2.3),
where \(\delta _{\varphi ^{-1}(L)}\) is the indicator function of the interior of \(\varphi ^{-1}(L)\).
We have \(\textbf{0}\in L\). Since \(\langle \tfrac{2}{\pi ^2}\textbf{e}_1, \varvec{\rho }\rangle = \tfrac{2}{\pi ^2} \rho _1 \le h_L(\varvec{\rho })\) for all \(\varvec{\rho }\ne \textbf{0}\), we also have, by (5.3),
Using Mathematica [17] we prove that
where \(\lambda _1=0.73, \lambda _2 = 0.86, \lambda _3 = 0.85, \lambda _4=0.966, \lambda _5 = 0.957.\)
By convexity, L contains the convex hull of all these points. We define
(see Fig. 3). Then, using the coarea formula (2.3) we have
We evaluate this integral using Mathematica [17] and get \(\int _P\rho _1\cdot \rho _2\;\mathrm d\varvec{\rho }\ge 0.0236165\).
Proof of Theorem 5.1
By (5.1), we have \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L) = 5!\cdot \frac{\pi ^3}{4}\cdot \textrm{vol}(K)\). Above we have shown
So, \(\mathop {\mathrm {{\mathbb {E}}}}\limits _{L\sim \psi } \# (\mathcal E\cap L)\ge 5! \cdot \frac{2^5}{\pi ^4} \cdot 0.0236165 \ge 0.93\).
Data availability
We do not have any data.
Notes
In [11] one can find a formula for the dimension of the complex Segre variety. The real Segre variety is Zariski dense in the complex Segre variety, so their real and complex dimensions coincide.
References
Beltrán, C.: Estimates on the condition number of random rank-deficient matrices. IMA J. Numer. Anal. 31(1), 25–39 (2009)
Breiding, P., Bürgisser, P., Lerario, A., Mathis, L.: The zonoid algebra, generalized mixed volumes, and random determinants. Adv. Math. (2022)
Breiding, P., Kozhasov, K., Lerario, A.: On the geometry of the set of symmetric matrices with repeated eigenvalues. Arnold Math. J. 4(3), 423–443 (2018)
Breiding, P., Timme, S.: HomotopyContinuation.jl: a package for homotopy continuation in Julia. In: Mathematical Software – ICMS 2018, pp. 458–465. Springer International Publishing, Cham (2018)
Bürgisser, P., Cucker, F.: Condition: The Geometry of Numerical Algorithms. Grundlehren Math. Wiss., vol. 349. Springer, Berlin (2013)
Demazure, M.: Sur deux problèmes de reconstruction. Rapports de Recherche 882 (1988)
Faugeras, O.D., Maybank, S.: Motion from point matches: multiplicity of solutions. Int. J. Comput. Vision 4(3), 225–246 (1990)
Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. With a foreword by Olivier Faugeras. Cambridge University Press, Cambridge (2004)
Howard, R.: The kinematic formula in Riemannian homogeneous spaces. Mem. Am. Math. Soc. 106(509), vi+69 (1993)
Kassel, A., Lévy, T.: Determinantal probability measures on Grassmannians. Ann. Inst. Henri Poincaré D (2019)
Landsberg, J.M.: Tensors: geometry and applications. Graduate Studies in Mathematics, vol. 128. AMS, Providence, Rhode Island (2012)
Nistér, D.: An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 26(6), 756–770 (2004)
Pausinger, F.: Uniformly distributed sequences in the orthogonal group and on the Grassmannian manifold. Math. Comput. Simul. 160, 13–22 (2019)
Roberts, G., Rosenthal, J.: General state space Markov chains and MCMC algorithms. Prob. Surv. 2, 20–71 (2004)
Schneider, R.: Convex Bodies: The Brunn-Minkowski Theory. Encyclopedia of Mathematics and its Applications, vol. 151, expanded edn. Cambridge University Press, Cambridge (2014)
Vitale, R.A.: Expected absolute random determinants and zonoids. Ann. Appl. Probab. 1(2), 293–300 (1991)
Wolfram Research, Inc.: Mathematica, Version 13.1. Champaign, IL (2022)
Funding
Open Access funding enabled and organized by Projekt DEAL. The implementations of all numerical computations made in this contribution can be found at https://mathrepo.mis.mpg.de/average_degree/index.html