Design issues for vision-based computer interaction systems

ABSTRACT
Computer vision and other direct-sensing technologies have progressed to the point where we can detect many aspects of a user's activity reliably and in real time. Simply recognizing the activity is not enough, however. If perceptual interaction is going to become part of the user interface, we must turn our attention to the tasks we wish to perform and to methods for performing them effectively. This paper attempts to further our understanding of vision-based interaction by looking at the steps involved in building practical systems, giving examples from several existing systems. We classify the types of tasks well suited to this form of interaction as pointing, control, and selection, and discuss interaction techniques for each class. We address the factors affecting the choice of control action and the various types of control signals that can be extracted from visual input. We present our design for widgets that perform different types of tasks, along with techniques, similar to those used with established user interface devices, that give users the kind of control they need to perform each task well. Finally, we look at ways to combine individual widgets into Visual Interfaces that allow the user to perform these tasks both concurrently and sequentially.
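To make the widget idea concrete, the sketch below shows one way the pieces could fit together. It is an illustration of the concepts only, not code from the systems described in the paper: it assumes a vision front end that reports a single tracked (x, y) position each frame, and all names (TaskClass, Widget, VisualInterface) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List, Tuple

class TaskClass(Enum):
    """The three task classes discussed in the paper."""
    POINTING = auto()   # continuous 2-D position, e.g. steering a cursor
    CONTROL = auto()    # continuous 1-D parameter, e.g. adjusting a slider
    SELECTION = auto()  # discrete choice, e.g. triggering a button

@dataclass
class Widget:
    """A screen region that turns a tracked position into an interface action."""
    task: TaskClass
    region: Tuple[int, int, int, int]          # x, y, width, height in image coordinates
    on_event: Callable[[float, float], None]   # called with position normalized to [0, 1)

    def contains(self, x: float, y: float) -> bool:
        rx, ry, rw, rh = self.region
        return rx <= x < rx + rw and ry <= y < ry + rh

class VisualInterface:
    """Routes each frame's tracked position to every widget it falls inside,
    so several widgets can be active concurrently."""
    def __init__(self, widgets: List[Widget]):
        self.widgets = widgets

    def update(self, x: float, y: float) -> None:
        for w in self.widgets:
            if w.contains(x, y):
                rx, ry, rw, rh = w.region
                w.on_event((x - rx) / rw, (y - ry) / rh)

# Usage: a slider (control) and a button (selection) sharing one tracker.
volume = Widget(TaskClass.CONTROL, (0, 0, 200, 40),
                lambda u, v: print(f"volume -> {u:.2f}"))
play = Widget(TaskClass.SELECTION, (0, 60, 80, 40),
              lambda u, v: print("play pressed"))
ui = VisualInterface([volume, play])
ui.update(150, 20)   # hand seen inside the slider: volume -> 0.75
ui.update(40, 80)    # hand seen inside the button: play pressed
```

A real system would also need per-widget activation logic, such as a dwell time before a SELECTION widget fires, to avoid accidental triggering; the paper's discussion of choosing control actions and control signals speaks to exactly that design decision.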
REFERENCES
1. Communications of the ACM, Vol. 43, No. 3, March 2000, M. Turk, ed.
2. Crowley, J., Coutaz, J., and Berard, F., "Things that See: Machine Perception for Human Computer Interaction," Communications of the ACM, Vol. 43, No. 3, pp. 54-64, March 2000.
3. Kjeldsen, F., "Visual Recognition of Hand Gesture as a Practical Interface Modality," PhD Dissertation, Columbia University, 1997.
4. Kjeldsen, R., "Facial Pointing," presented at the 4th International Workshop on Gesture and Sign Language based Human-Computer Interaction (Gesture Workshop 2001), proceedings forthcoming.
5. Kjeldsen, R., "Head Gestures for Computer Control," in Proceedings of the Workshop on Recognition and Tracking of Face and Gesture - Real Time Systems (RATFG-RTS), Vancouver, BC, Canada, July 2001.
6. Pinhanez, C., et al., "Transforming Surfaces into Touch-Screens," submitted to CHI 2001 (system demonstrated in the Emerging Technology section of SIGGRAPH '01, Los Angeles, CA).
7. Proceedings of the Fourth International Conference on Automatic Face and Gesture Recognition, 28-30 March 2000, Grenoble, France. IEEE Computer Society Order Number PR00580.
8. Proceedings of the 1998 Workshop on Perceptual User Interfaces (PUI '98), San Francisco, CA, Nov. 1998, M. Turk, ed.
9. Rutledge, J. and Selker, T., "Force-to-Motion Functions for Pointing," in Proceedings of the IFIP TC 13 Third International Conference on Human-Computer Interaction (Interact '90), August 1990.
10. Toyama, K., "Look, Ma - No Hands! Hands-Free Cursor Control with Real-Time 3D Face Tracking," in [8].
11. Wu, Y. and Huang, T., "Vision-based Gesture Recognition: A Review," in Lecture Notes in Artificial Intelligence, Vol. 1739, 1999.
12. Wu, Y. and Toyama, K., "Wide-Range, Person- and Illumination-Insensitive Head Orientation Estimation," in [7].
13. TouchFree Switch, a camera-based switch for physically disabled users, released by Edmark. See http://www.edmark.com/prod/tfs/