Abstract
Modern wearable computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines allow assistive technology to be brought where it is needed most by the handicapped: everyday mobile environments. This paper describes a research effort to build a wearable computer that can recognise (with the eventual goal of translating) sentence-level American Sign Language (ASL) using only a baseball-cap-mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.
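The recogniser described in the paper scores video feature sequences against per-word hidden Markov models. The sketch below illustrates that core idea with the forward algorithm over discrete observation symbols; the two toy word models, their parameters, and the symbol alphabet are hypothetical illustrations, not the paper's actual hand-tracking features or trained models.

```python
# Toy sketch of HMM-based isolated-sign classification: score an observation
# sequence under each word's HMM with the forward algorithm, pick the argmax.
# Models and symbols here are made up for illustration only.

def forward_prob(obs, pi, A, B):
    """P(obs | model) via the forward algorithm (discrete emissions)."""
    n = len(pi)
    # Initialise with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Propagate through the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)

# Two hypothetical 2-state left-to-right word models over 3 feature symbols.
# Each entry is (initial probs pi, transition matrix A, emission matrix B).
MODELS = {
    "HELLO":  ([0.9, 0.1],
               [[0.7, 0.3], [0.0, 1.0]],
               [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
    "THANKS": ([0.9, 0.1],
               [[0.7, 0.3], [0.0, 1.0]],
               [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]),
}

def classify(obs):
    """Return the word whose HMM assigns the sequence the highest likelihood."""
    return max(MODELS, key=lambda w: forward_prob(obs, *MODELS[w]))

print(classify([0, 0, 2, 2]))  # prints "HELLO"
print(classify([1, 1, 2, 2]))  # prints "THANKS"
```

The paper's system uses continuous feature vectors and Viterbi decoding over connected word models (via HTK) rather than this isolated-word, discrete-symbol toy, but the likelihood-scoring principle is the same.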
Cite this article
Starner, T., Weaver, J. & Pentland, A. A Wearable Computer-Based American Sign Language Recogniser. Personal Technologies 1, 241–250 (1997). https://doi.org/10.1007/BF01682027