Abstract
This article describes a fast and stable camera tracking method aimed at real-time augmented reality applications. By tracking the features of a known marker in a single frame, we estimate the camera's position and orientation parameters. The entire pose estimation process is linear, and no initial estimates are required. As an experimental setup, we implemented a video augmentation system that replaces detected markers with virtual 3D graphical objects. Experimental results showed that the proposed camera tracking method is robust, fast, and applicable to interactive augmented reality applications.
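The abstract describes linear, initialization-free pose estimation from a single view of a known planar marker. The paper itself is not reproduced here, so as an illustration only, the following sketch shows one standard linear approach to this problem: a DLT homography between the marker plane and normalized image coordinates, decomposed into rotation and translation. The function name and the assumption of pre-normalized (intrinsics-removed) image points are mine, not the authors'.

```python
import numpy as np

def estimate_marker_pose(obj_pts, img_pts):
    """Linear pose estimation for a planar (z = 0) marker.

    obj_pts: (4, 2) marker corner coordinates in the marker plane.
    img_pts: (4, 2) corresponding image points in normalized camera
             coordinates (camera intrinsics already removed).
    Returns (R, t): camera rotation matrix and translation vector.
    """
    # Build the DLT system A h = 0 for the homography mapping
    # marker-plane points to normalized image points.
    A = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography is the right-singular vector with smallest
    # singular value; no initial estimate is needed.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    # For a z = 0 plane, H ~ [r1 r2 t]; recover the scale from the
    # (unit-norm) rotation columns.
    H /= np.sqrt(np.linalg.norm(H[:, 0]) * np.linalg.norm(H[:, 1]))
    if H[2, 2] < 0:          # keep the marker in front of the camera
        H = -H
    r1, r2, t = H[:, 0], H[:, 1], H[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalize R: with noisy points the linear solution is
    # only approximately a rotation.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

In an augmentation loop, the recovered (R, t) would be used as the model-view transform for rendering a virtual 3D object in place of the detected marker.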
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this chapter
Lee, B.J., Park, J.S. (2006). Fast Vision-Based Camera Tracking for Augmented Environments. In: Huang, D.S., Li, K., Irwin, G.W. (eds) Intelligent Computing in Signal Processing and Pattern Recognition. Lecture Notes in Control and Information Sciences, vol 345. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-37258-5_130
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-37257-8
Online ISBN: 978-3-540-37258-5
eBook Packages: Engineering (R0)