A Minimal Closed-form Solution for the Perspective Three Orthogonal Angles (P3oA) Problem: Application To Visual Odometry

Published in: Journal of Mathematical Imaging and Vision

Abstract

We provide a simple closed-form solution to the Perspective Three Orthogonal Angles (P3oA) problem: given the projection of three orthogonal lines in a calibrated camera, find their 3D directions. Building upon this solution, an algorithm for estimating the relative camera rotation between two frames is proposed. The key idea is to detect triplets of orthogonal lines in a hypothesize-and-test framework and use all of them to compute the camera rotation in a robust way. This approach is well suited to human-made environments, where numerous groups of orthogonal lines exist. We evaluate the numerical stability of the P3oA solution and the estimation of the relative rotation with synthetic and real data, comparing our results to other state-of-the-art approaches.

References

  1. Haralick, B.M., Lee, C.-N., Ottenberg, K., Nölle, M.: Review and analysis of solutions of the three point perspective pose estimation problem. International Journal of Computer Vision 13(3), 331–356 (1994)

  2. Lepetit, V., Moreno-Noguer, F., Fua, P.: EPnP: An Accurate O(n) Solution to the PnP Problem. International Journal of Computer Vision 81, 155–166 (2008)

  3. Hesch, J.A., Roumeliotis, S.I.: A direct least-squares (DLS) method for PnP. In: IEEE international conference on computer vision (ICCV), pp. 383–390. (2011)

  4. Li, S., Xu, C., Xie, M.: A robust O(n) solution to the perspective-n-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(7), 1444–1450 (2012)

  5. Mirzaei, F.M., Roumeliotis, S.I.: Globally optimal pose estimation from line correspondences. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5581–5588. (2011)

  6. Zhang, L., Xu, C., Lee, K.-M., Koch, R.: Robust and efficient pose estimation from line correspondences. In: Asian Conference on Computer Vision (ACCV), 2012, pp. 217–230. Springer, (2013)

  7. Ramalingam, S., Bouaziz, S., Sturm, P.: Pose estimation using both points and lines for geo-localization. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4716–4723. (2011)

  8. Hartley, R.I.: Lines and points in three views and the trifocal tensor. International Journal of Computer Vision 22(2), 125–140 (1997)

  9. Bartoli, A., Sturm, P.: Structure-from-motion using lines: Representation, triangulation, and bundle adjustment. Computer Vision and Image Understanding 100(3), 416–441 (2005)

  10. Košecká, J., Zhang, W.: Extraction, matching, and pose recovery based on dominant rectangular structures. Computer Vision and Image Understanding 100(3), 274–293 (2005)

  11. Schindler, G., Krishnamurthy, P., Dellaert, F.: Line-based structure from motion for urban environments. In: Third International Symposium on 3D Data Processing, Visualization, and Transmission, IEEE, pp. 846–853. (2006)

  12. Förstner, W.: Optimal vanishing point detection and rotation estimation of single images from a legoland scene. In: ISPRS Commission III Symposium of Photogrammetric Computer Vision and Image Analysis, pp. 157–162. (2010)

  13. Mirzaei, F.M., Roumeliotis, S.I.: Optimal estimation of vanishing points in a Manhattan world. In: IEEE International Conference on Computer Vision (ICCV), IEEE, pp. 2454–2461. (2011)

  14. Bazin, J.C., Seo, Y., Demonceaux, C., Vasseur, P., Ikeuchi, K., Kweon, I., Pollefeys, M.: Globally optimal line clustering and vanishing point estimation in Manhattan world. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 638–645. (2012)

  15. Elqursh, A., Elgammal, A.: Line-based relative pose estimation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3049–3056. (2011)

  16. Finotto, M., Menegatti, E.: Humanoid gait stabilization based on omnidirectional visual gyroscope. In: Workshop on Humanoid Soccer Robots (Humanoids’ 09). (2009)

  17. Rondon, E., Carrillo, L.R.G., Fantoni, I.: Vision-based altitude, position and speed regulation of a quadrotor rotorcraft. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 628–633. (2010)

  18. Barnard, S.T.: Choosing a basis for perceptual space. Comput. Vis. Graph. Image Process. 29(1), 87–99 (1985)

  19. Kanatani, K.I.: Constraints on length and angle. Comput. Vis. Graph. Image Process. 41(1), 28–42 (1988)

  20. Wu, Y., Iyengar, S.S., Jain, R., Bose, S.: A new generalized computational framework for finding object orientation using perspective trihedral angle constraint. IEEE Trans. Pattern Anal. Mach. Intell. 16(10), 961–975 (1994)

  21. Dhome, M., Richetin, M., Lapreste, J.-T., Rives, G.: Determination of the attitude of 3d objects from a single perspective view. IEEE Trans. Pattern Anal. Mach. Intell. 11(12), 1265–1278 (1989)

  22. Criminisi, A., Reid, I., Zisserman, A.: Single view metrology. Int. J. Comput. Vis. 40(2), 123–148 (2000)

  23. Antone, M.E., Teller, S.: Automatic recovery of relative camera rotations for urban scenes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 282–289. (2000)

  24. Košecká, J., Zhang, W.: Video compass. In: European Conference on Computer Vision (ECCV), pp. 476–490, Springer, (2002)

  25. Denis, P., Elder, J., Estrada, F.: Efficient edge-based methods for estimating Manhattan frames in urban imagery. Springer, Heidelberg (2008)

  26. Bazin, J.C., Pollefeys, M.: 3-line RANSAC for orthogonal vanishing point detection. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4282–4287. (2012)

  27. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)

  28. Nistér, D., Naroditsky, O., Bergen, J.: Visual odometry for ground vehicle applications. J. Field Robot. 23(1), 3–20 (2006)

  29. Hartley, R., Trumpf, J., Dai, Y., Li, H.: Rotation averaging. Int. J. Comput. Vis. 103(3), 267–305 (2013)

  30. Carlone, L., Dellaert, F.: Duality-based verification techniques for 2D SLAM. In: International Conference on Robotics and Automation (ICRA). (2015)

  31. Gower, J.C., Dijksterhuis, G.B.: Procrustes problems, vol. 3. Oxford University Press, Oxford (2004)

  32. Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-d point sets. IEEE Trans. Pattern Anal. Mach. Intell. 5, 698–700 (1987)

  33. Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark for the evaluation of RGB-D SLAM systems. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 573–580. (2012)

  34. Handa, A., Whelan, T., McDonald, J., Davison, A.: A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In: IEEE International Conference on Robotics and Automation, ICRA. (Hong Kong, China), (2014)

  35. Grompone von Gioi, R., Jakubowicz, J., Morel, J.-M., Randall, G.: LSD: a line segment detector. Image Process. Online 2, 35–55 (2012)

  36. Zhang, L., Koch, R.: An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 24(7), 794–805 (2013)

  37. Schindler, G., Dellaert, F.: Atlanta world: an expectation maximization framework for simultaneous low-level edge grouping and camera calibration in complex man-made environments. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. vol. 1, pp. 203–209. (2004)

  38. Straub, J., Rosman, G., Freifeld, O., Leonard, J.J., Fisher, J.W.: A mixture of Manhattan frames: beyond the Manhattan world. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3770–3777. (2014)

Acknowledgments

We would like to thank Ali Elqursh for providing us with the source code of the odometry method of Elqursh and Elgammal [15]. This work has been funded by the Spanish Ministerio de Ciencia e Innovacion under the projects “TAROTH: New developments toward a Robot at Home” (Contract DPI 2011-25483) and “PROMOVE: Advances in mobile robotics for promoting independent life of elders” (Contract DPI 2014-55826-R). Both projects are co-funded by the European Regional Development Fund (ERDF).

Author information

Corresponding author

Correspondence to Jesus Briales.

Appendices

Appendix 1: Reduction of Pencil Basis Parameter

The bilinear map \( \varvec{B}_{ij} \) is greatly simplified when pencil bases are used. The substitution of \( \varvec{\varOmega }_k \) defined in (7) into (5) gives

since \( \left\| \varvec{t}\right\| = 1 \Rightarrow \varvec{t}^\top \varvec{t}= 1 \) and, by definition of the cross product, \( \varvec{t}^\top \left( \varvec{t}\times \varvec{n}\right) = 0 \) for any vector \( \varvec{n}\).

The remaining parameter

$$\begin{aligned} \alpha _{ij}= - \left( \varvec{t}\times \varvec{n}_i\right) ^\top \left( \varvec{t}\times \varvec{n}_j\right) \end{aligned}$$

can be further simplified to a single scalar product by applying some properties of the cross product. Firstly, the skew matrix representation for the cross product as well as the property \( \left[ \varvec{t}\right] _\times ^\top = - \left[ \varvec{t}\right] _\times \) allows us to write

$$\begin{aligned} \alpha _{ij}= \varvec{n}_i^\top \left[ \varvec{t}\right] _\times \left[ \varvec{t}\right] _\times \varvec{n}_j \end{aligned}$$

and then rewriting the product of skew matrices in the equivalent form

$$\begin{aligned} \left[ \varvec{a}\right] _\times \left[ \varvec{b}\right] _\times = \varvec{b}\varvec{a}^\top -(\varvec{a}\cdot \varvec{b})\mathbf I _3 \end{aligned}$$

the expression finally reduces to

$$\begin{aligned} \alpha _{ij}= \varvec{n}_i^\top \left( \varvec{t}\varvec{t}^\top - (\varvec{t}^\top \varvec{t})\cdot \mathbf I _3 \right) \varvec{n}_j = -\varvec{n}_i^\top \varvec{n}_j \end{aligned}$$

where it is used that \( \varvec{t}\perp \varvec{n}_k \) by the definition in (6).
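As a quick sanity check of this reduction, the two expressions for \( \alpha _{ij} \) can be compared numerically. The following is a minimal NumPy sketch (not part of the paper's implementation; variable names are illustrative), which draws a random unit vector \( \varvec{t}\) and two normals orthogonal to it:

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal(3)
t /= np.linalg.norm(t)                        # ||t|| = 1

def unit_normal_orthogonal_to(t, rng):
    """Random unit vector orthogonal to t (stand-in for an interpretation-plane normal)."""
    v = rng.standard_normal(3)
    v -= (v @ t) * t                          # remove the component along t
    return v / np.linalg.norm(v)

n_i = unit_normal_orthogonal_to(t, rng)
n_j = unit_normal_orthogonal_to(t, rng)

alpha_full = -np.cross(t, n_i) @ np.cross(t, n_j)   # -(t x n_i)^T (t x n_j)
alpha_reduced = -n_i @ n_j                           # simplified expression
print(abs(alpha_full - alpha_reduced))               # ~1e-16
```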

Appendix 2: Relations between Interpretation Planes for the Degenerate P3oA Problem

Given a trihedron formed by three intersecting orthogonal lines, if the projections of two lines \( \mathbf l _i \) and \( \mathbf l _j \) become the same, the normals of the interpretation planes of the lines fulfill \(\varvec{n}_i \parallel \varvec{n}_j \perp \varvec{n}_k\).

The corresponding proof is developed in a sequence of minor steps:

  • If the lines \(\mathbf l _i\) and \(\mathbf l _j\) are parallel, so are the corresponding normals \(\varvec{n}_i\) and \(\varvec{n}_j\): \( \mathbf l _i \parallel \mathbf l _j \Rightarrow \varvec{n}_i \parallel \varvec{n}_j \).

  • If the normals \(\varvec{n}_i\) and \(\varvec{n}_j\) are parallel, the camera center must lie in the IJ plane defined by the lines \(L_i\) and \(L_j\).

  • If the camera center lies in the IJ plane, the normal \(\varvec{n}_k\) is orthogonal to \(\varvec{n}_i=\varvec{n}_j\).

1.1 (a) Parallelism of \(n_i\) and \(n_j\)

Let \( \mathbf l _i \) and \( \mathbf l _j \) be the homogeneous vectors corresponding to the projection of the lines into the camera image. Since the projective entities \( \mathbf l _i, \mathbf l _j \in \mathbb {P}^2 \) are defined up to scale, we use the similarity operator \( \mathbf l _i \sim \mathbf l _j \) to represent the equality of both variables in the projective space \( \mathbb {P}^2 \). Two vectors are then equivalent if they are parallel, or stated otherwise,

$$\begin{aligned} \mathbf l _i \times \mathbf l _j = \varvec{0}. \end{aligned}$$

The homogeneous lines in the image are related to the normal of the corresponding interpretation plane through the intrinsic calibration matrix \(\varvec{K}\) [27]

$$\begin{aligned} \varvec{n}\sim \varvec{K}^\top \mathbf l , \end{aligned}$$

so the equivalency relation is transmitted to the normals:

$$\begin{aligned} \varvec{n}_i \times \varvec{n}_j&= (\varvec{K}^\top \mathbf l _i) \times (\varvec{K}^\top \mathbf l _j)\\&= \det (\varvec{K}) \varvec{K}^{-1} (\mathbf l _i \times \mathbf l _j) = \varvec{0}\Rightarrow \varvec{n}_i \sim \varvec{n}_j \end{aligned}$$
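The transfer of parallelism from coincident image lines to their interpretation-plane normals can be illustrated with a short sketch (assumed pinhole intrinsics and an arbitrary image line; not the paper's code):

```python
import numpy as np

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])        # assumed calibration matrix

l_i = np.array([0.3, -1.0, 50.0])            # homogeneous image line
l_j = 2.5 * l_i                              # same projective line, different scale

n_i = K.T @ l_i                              # n ~ K^T l
n_j = K.T @ l_j
print(np.linalg.norm(np.cross(n_i, n_j)))    # ~0: the normals are parallel
```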

1.2 (b) Position of the Camera to Fulfill \(n_i \parallel n_j\)

Let us assume, without loss of generality, that the three intersecting lines lie along the axes of the canonical coordinate system. As a result, the directions of the lines in this particular case are given by the canonical vectors \( \{\varvec{e}_k\}_{k=1}^3 \). Denote the position and orientation of the camera, as seen from this coordinate system, by \( \varvec{t}\) and \( \varvec{R}\), respectively. The normal to the interpretation plane of each line \( L_k \), as seen from the camera, is then equal (up to scale) to

$$\begin{aligned} \varvec{n}_k \sim \varvec{R}^\top (\varvec{e}_k \times \varvec{t}). \end{aligned}$$
(42)

From the equivalency of the lines i and j, it then follows that

$$\begin{aligned} \varvec{n}_i \sim \varvec{n}_j&\Rightarrow (\varvec{R}^\top (\varvec{e}_i \times \varvec{t})) \times (\varvec{R}^\top (\varvec{e}_j \times \varvec{t}))\\&= \varvec{R}^\top \left( (\varvec{e}_i \times \varvec{t}) \times (\varvec{e}_j \times \varvec{t})\right) \\&= \varvec{0}\Rightarrow (\varvec{e}_i \times \varvec{t}) \times (\varvec{e}_j \times \varvec{t}) = \varvec{0}\end{aligned}$$

and this expression is symbolically equivalent to

$$\begin{aligned} (\varvec{e}_i \times \varvec{t}) \times (\varvec{e}_j \times \varvec{t})&= \left[ \varvec{e}_i \times \varvec{t}\right] _\times \left[ \varvec{e}_j\right] _\times \varvec{t}\\&= ( \varvec{t}\varvec{e}_i^\top - \varvec{e}_i \varvec{t}^\top ) \left[ \varvec{e}_j\right] _\times \varvec{t}\\&= (\varvec{e}_k^\top \varvec{t}) \; \varvec{t}\end{aligned}$$

where the last step uses \( \varvec{t}^\top (\varvec{e}_j \times \varvec{t}) = 0 \) and \( \varvec{e}_i^\top (\varvec{e}_j \times \varvec{t}) = \varvec{e}_k^\top \varvec{t}\) for the cyclic triplet \( (i,j,k) \). So, it is concluded that the equivalency \(\varvec{n}_i \sim \varvec{n}_j\) is only possible if \(\varvec{t}= \varvec{0}\) or \(\varvec{e}_k^\top \varvec{t}= 0\). The first solution places the camera at the intersection point of the three lines, which is a degenerate configuration of no practical interest. The second solution implies that, for \( L_i \) and \( L_j \) to project into the same image line \( \mathbf l _i \sim \mathbf l _j \), the camera center must lie in the plane defined by lines \( L_i \) and \( L_j \).
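The identity \( (\varvec{e}_i \times \varvec{t}) \times (\varvec{e}_j \times \varvec{t}) = (\varvec{e}_k^\top \varvec{t}) \, \varvec{t}\) obtained above can be verified numerically; below is a minimal NumPy sketch (the value of \( \varvec{t}\) is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
e1, e2, e3 = np.eye(3)                       # canonical directions e_1, e_2, e_3
t = rng.standard_normal(3)                   # arbitrary camera position

lhs = np.cross(np.cross(e1, t), np.cross(e2, t))
rhs = (e3 @ t) * t
print(np.linalg.norm(lhs - rhs))             # ~0 for any t
```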

1.3 (c) Orthogonality of Normals when the Camera Lies in the IJ Plane

Now, we will prove that if the camera center lies in the IJ plane, the normal of the interpretation plane for the remaining line \( L_k \) is orthogonal to both \( \varvec{n}_i \sim \varvec{n}_j \). The orthogonality of \( \varvec{n}_k \) and \( \varvec{n}_i \) is equivalent to stating that \(\varvec{n}_k^\top \varvec{n}_i = 0\). Using the relation in (42)

$$\begin{aligned} \varvec{n}_k^\top \varvec{n}_i&= (\varvec{e}_k \times \varvec{t})^\top \varvec{R}\varvec{R}^\top (\varvec{e}_i \times \varvec{t})\\&= \varvec{t}^\top \left[ \varvec{e}_k\right] _\times ^\top \left[ \varvec{e}_i\right] _\times \varvec{t}\\&= - \varvec{t}^\top ( \varvec{e}_i \varvec{e}_k^\top - (\varvec{e}_k^\top \varvec{e}_i) \mathbf I _3 ) \varvec{t}\\&= - (\varvec{t}^\top \varvec{e}_i) (\varvec{e}_k^\top \varvec{t})\\&= - (\varvec{e}_i^\top \varvec{t}) (\varvec{e}_k^\top \varvec{t}) = 0 \end{aligned}$$

and the condition above is then fulfilled only if the camera center lies in the JK plane, the IJ plane, or both. Similarly, if \( \varvec{n}_k \) and \( \varvec{n}_j \) are orthogonal (necessary from \( \varvec{n}_i \sim \varvec{n}_j \)), the camera center lies in the IJ plane, the IK plane, or both. As a result, we see that if the camera lies in the IJ plane as assumed, the orthogonality constraints are fulfilled.

In conclusion, we see that if the projections of two lines, \( \mathbf l _i \) and \( \mathbf l _j \), are coincident, then the camera center lies in the plane formed by \( L_i \) and \( L_j \). This, in turn, implies that \( \varvec{n}_k \perp \varvec{n}_i \), or equivalently, \( \varvec{n}_k \perp \varvec{n}_j \).
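Parts (b) and (c) can also be checked together with a small numerical sketch (an arbitrary rotation \( \varvec{R}\) and a camera centre placed in the IJ plane are assumed; this is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.linalg.det(q)                     # arbitrary proper rotation (camera orientation)

e = np.eye(3)                                # canonical line directions
t = np.array([1.3, -0.8, 0.0])               # camera centre in the IJ plane: e_3^T t = 0

# Interpretation-plane normals from (42): n_k ~ R^T (e_k x t)
n = [R.T @ np.cross(e[k], t) for k in range(3)]

print(np.linalg.norm(np.cross(n[0], n[1])))  # ~0  ->  n_i parallel to n_j
print(abs(n[2] @ n[0]))                      # ~0  ->  n_k orthogonal to n_i
print(abs(n[2] @ n[1]))                      # ~0  ->  n_k orthogonal to n_j
```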

Appendix 3: Reparameterization of Radical Equation as a Quadratic Form

The equation

$$\begin{aligned} \varvec{n}_k^\top \varvec{t}\cdot \bar{\alpha }+ \alpha _{ij} \cdot \varvec{n}_k^\top \left[ \varvec{t}\right] _\times \varvec{n}_* = 0 \end{aligned}$$

defined in (18) is non-linear with respect to \( \varvec{n}_* \) due to the square root operation in

$$\begin{aligned} \bar{\alpha }&= \pm \,\sqrt{ \alpha _{*i}\alpha _{ij}\alpha _{j*}}\\&= \pm \,\sqrt{ - \left( \varvec{n}_*^\top \varvec{n}_i\right) \left( \varvec{n}_i^\top \varvec{n}_j\right) \left( \varvec{n}_j^\top \varvec{n}_*\right) } \end{aligned}$$

which makes the equation radical. As usual for radical equations, we separate the two terms of the sum and square them to obtain an almost-equivalent quadratic equation

$$\begin{aligned} \left( \varvec{n}_k^\top \varvec{t}\right) ^2 \cdot \left( \pm \sqrt{\alpha _{*i} \alpha _{ij} \alpha _{j*}} \right) ^2&= \alpha _{ij}^2 \cdot (\varvec{n}_k^\top \left[ \varvec{t}\right] _\times \varvec{n}_*)^2\\ \frac{\left( \varvec{n}_k^\top \varvec{t}\right) ^2}{\alpha _{ij}} \cdot \alpha _{*i} \alpha _{j*}&= (\varvec{n}_k^\top \left[ \varvec{t}\right] _\times \varvec{n}_*)^2 \end{aligned}$$

Then, we substitute \( \alpha _{*i} \) and \( \alpha _{j*} \) and arrange the matrix operations

$$\begin{aligned} \frac{(\varvec{n}_k^\top \varvec{t})^2}{\alpha _{ij}} \cdot \varvec{n}_*^\top \left( \varvec{n}_i \varvec{n}_j^\top \right) \varvec{n}_*&= (\varvec{n}_k^\top \left[ \varvec{t}\right] _\times \varvec{n}_*)^\top \varvec{n}_k^\top \left[ \varvec{t}\right] _\times \varvec{n}_*\\&= \varvec{n}_*^\top \left[ \varvec{t}\right] _\times ^\top \left( \varvec{n}_k \varvec{n}_k^\top \right) \left[ \varvec{t}\right] _\times \varvec{n}_* \end{aligned}$$

so that the condition above can be encoded by a quadratic form

$$\begin{aligned} \varvec{n}_*^\top \left( \frac{(\varvec{n}_k^\top \varvec{t})^2}{\alpha _{ij}} \cdot \varvec{n}_i \varvec{n}_j^\top - \left[ \varvec{t}\right] _\times ^\top \varvec{n}_k \varvec{n}_k^\top \left[ \varvec{t}\right] _\times \right) \varvec{n}_* = 0 \end{aligned}$$

Among all the (infinitely many) matrix representations available for the quadratic form given above, we choose the symmetric one

$$\begin{aligned} \varvec{Q}= \frac{1}{2} \frac{(\varvec{n}_k^\top \varvec{t})^2}{\alpha _{ij}} \left( \varvec{n}_i \varvec{n}_j^\top + \varvec{n}_j \varvec{n}_i^\top \right) + \left[ \varvec{t}\right] _\times \left( \varvec{n}_k \varvec{n}_k^\top \right) \left[ \varvec{t}\right] _\times \end{aligned}$$

which permits us to readily diagonalize the quadratic form by eigenvalue decomposition in a unique way.

Then, the original problem in (18) is equivalent to solving

$$\begin{aligned} \varvec{n}_*^\top \varvec{Q}\,\varvec{n}_* = 0 \end{aligned}$$

for the defined \( \varvec{Q}\).
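The construction of the symmetric matrix \( \varvec{Q}\) and its equivalence, as a quadratic form, with the non-symmetric representation can be sketched as follows (illustrative NumPy code with arbitrary test vectors, not the paper's implementation; in the actual problem the normals and \( \varvec{t}\) come from the P3oA configuration and \( \alpha _{ij} \ne 0 \)):

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(1)
n_i, n_j, n_k, t = (rng.standard_normal(3) for _ in range(4))
alpha_ij = -n_i @ n_j

c = (n_k @ t) ** 2 / alpha_ij
# Non-symmetric representation of the quadratic form
M = c * np.outer(n_i, n_j) - skew(t).T @ np.outer(n_k, n_k) @ skew(t)
# Symmetric choice Q (the symmetric part of M)
Q = 0.5 * c * (np.outer(n_i, n_j) + np.outer(n_j, n_i)) \
    + skew(t) @ np.outer(n_k, n_k) @ skew(t)

x = rng.standard_normal(3)
print(abs(x @ M @ x - x @ Q @ x))            # ~0: same quadratic form
w, V = np.linalg.eigh(Q)                     # unique diagonalization of Q
print(w)                                     # eigenvalues used to solve x^T Q x = 0
```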

Appendix 4: Necker Duality and Parallax Effect

The two distinct solutions of the P3oA problem for the case of meeting lines are mirrored with respect to the plane through the intersection point whose normal is the back-projection direction \( \varvec{t}\). This relation can be easily expressed through a reflection or Householder matrix [27]. Let \( \varvec{V}\) be the real solution and \( \varvec{V}^* \) the mirrored one. The following relation holds:

$$\begin{aligned} \varvec{V}= \varvec{H}_{\varvec{t}} \, \varvec{V}^* \end{aligned}$$
(43)

with \( \varvec{H}_{\varvec{t}} = \mathbf I _3 -2\,\varvec{t}\,\varvec{t}^\top /\varvec{t}^\top \varvec{t}\). This relation only holds when the three lines meet in a single point; otherwise, there exists no reflection fulfilling relation (43), even though the duality still exists (see green configuration in Fig. 4).

Without loss of generality, we will use the case of meeting lines to prove that both dual solutions provide the same rotation in the case of zero baseline (pure rotation). In such a case, taking the pair of false configurations produces the relative rotation

$$\begin{aligned} \varvec{R}^*&= \varvec{V}_1^* \, (\varvec{V}_2^*)^\top \nonumber \\&= \varvec{H}_{p_1} \varvec{V}_1 \, \varvec{V}_2^\top \varvec{H}_{p_2}^\top \nonumber \\&= \varvec{H}_{p_1} \, \varvec{R}\, \varvec{H}_{p_2}^\top \end{aligned}$$
(44)

where \( p_i \) stands for the 3D coordinates of the intersection point in the i-th image, expressed with respect to the camera frame. A rigid transformation exists between \( p_1 \) and \( p_2 \)

$$\begin{aligned} p_1 = \varvec{R}\, p_2 + \varvec{t}_{\text {rel}}\end{aligned}$$

so that, in the case of zero baseline we have

$$\begin{aligned} \varvec{H}_{p_1} = \mathbf I -2 \frac{\varvec{R}p_2 p_2^\top \varvec{R}^\top }{p_2^\top p_2} = \varvec{R}\varvec{H}_{p_2} \varvec{R}^\top \end{aligned}$$

and substituting in expression (44) reveals that

$$\begin{aligned} \varvec{R}^* = \varvec{R}\varvec{H}_{p_2} \varvec{R}^\top \varvec{R}\, \varvec{H}_{p_2}^\top = \varvec{R}\end{aligned}$$

In conclusion, in the case of pure rotation the true and false solutions coincide. Conversely, the greater the parallax effect due to a non-zero baseline \( \left\| \varvec{t}_{\text {rel}}\right\| \), the bigger the difference between the two possible solutions.
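The zero-baseline argument above can be reproduced numerically with a short sketch (random rotation and intersection point assumed; not the paper's code):

```python
import numpy as np

def householder(v):
    """Reflection H_v = I - 2 v v^T / (v^T v), as in (43)."""
    v = np.asarray(v, dtype=float)
    return np.eye(3) - 2.0 * np.outer(v, v) / (v @ v)

rng = np.random.default_rng(2)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.linalg.det(q)                     # true relative rotation (arbitrary)

p2 = rng.standard_normal(3)                  # intersection point in the second frame
p1 = R @ p2                                  # zero baseline: t_rel = 0

R_star = householder(p1) @ R @ householder(p2).T
print(np.linalg.norm(R_star - R))            # ~0: the dual solutions give the same rotation
```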

About this article

Cite this article

Briales, J., Gonzalez-Jimenez, J. A Minimal Closed-form Solution for the Perspective Three Orthogonal Angles (P3oA) Problem: Application To Visual Odometry. J Math Imaging Vis 55, 266–283 (2016). https://doi.org/10.1007/s10851-015-0620-x
