The role of fixation in visual motion analysis

Abstract

How does the ability of humans and other primates to fixate on environmental points in the presence of relative motion help their visual systems solve various tasks? To state the question more formally, we investigate in this article the following problem: Suppose that we have an active vision system, that is, a camera resting on a platform and controlled through motors by a computer that has access to the images sensed by the camera in real time. The platform can move freely in the environment. If this machine can fixate on targets that are in relative motion with it, can it solve visual tasks efficiently and robustly? Restricting our attention to a set of navigational tasks, we find that such an active observer can solve the problems of 3-D motion estimation, egomotion recovery, and estimation of time-to-contact very efficiently, using as input the spatiotemporal derivatives of the image-intensity function (or normal flow). Fixation over time changes the input (the motion field) in a controlled way, and from this change additional information is derived that makes these tasks easier to solve.
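To make the input concrete, the sketch below (illustrative only, not the authors' implementation; the function and variable names are our own) computes a normal-flow field from two grayscale frames. Under the brightness-constancy constraint Ix*u + Iy*v + It = 0, only the component of the image motion along the local intensity gradient can be recovered pointwise; its magnitude is -It / |grad I|.

# Illustrative sketch (assumed names; not the paper's code): computing the
# normal flow that serves as input. Brightness constancy gives
#     Ix*u + Iy*v + It = 0,
# so only the flow component along the image gradient is known locally,
# with magnitude -It / |grad I|.
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Per-pixel normal-flow vectors between two grayscale frames."""
    I0 = np.asarray(frame0, dtype=np.float64)
    I1 = np.asarray(frame1, dtype=np.float64)

    Iy, Ix = np.gradient(I0)        # spatial derivatives (rows = y, cols = x)
    It = I1 - I0                    # temporal derivative (frame difference)

    grad_mag = np.sqrt(Ix**2 + Iy**2) + eps
    magnitude = -It / grad_mag      # flow component along the gradient

    # The normal-flow vector points along the unit gradient direction.
    un = magnitude * Ix / grad_mag
    vn = magnitude * Iy / grad_mag
    return un, vn

The full optical flow cannot be recovered from this local measurement alone (the aperture problem), which is why working directly from normal flow, as advocated here, avoids an intermediate flow-estimation step.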




Cite this article

Fermüller, C., Aloimonos, Y. The role of fixation in visual motion analysis. Int J Comput Vision 11, 165–186 (1993). https://doi.org/10.1007/BF01469227


Keywords

Navigation