A wearable computer-based American Sign Language recogniser


Abstract

Modern wearable computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed most for the handicapped: everyday mobile environments. This paper describes a research effort to make a wearable computer that can recognise (with the possible goal of translating) sentence-level American Sign Language (ASL) using only a baseball-cap-mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.
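The recogniser described in this work is built on hidden Markov models trained over hand features tracked by the cap-mounted camera. As a rough, hypothetical illustration only, and not the authors' implementation, the sketch below trains one Gaussian HMM per sign on synthetic feature sequences and scores per-word accuracy in the same spirit as the 97%-on-40-words figure quoted above. The hmmlearn library, the toy four-word vocabulary, the feature dimensionality and the synth_sequences generator are all assumptions made for the example.

```python
# Hypothetical sketch (not the authors' system): one Gaussian HMM per sign,
# classification by maximum log-likelihood over the word models.
# Assumes the third-party `hmmlearn` library; synthetic sequences stand in
# for camera-derived hand features (e.g. position, shape, motion).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
WORDS = ["i", "want", "go", "store"]   # toy stand-in for a 40-word lexicon
N_FEATURES = 8                         # assumed per-frame feature dimension

def synth_sequences(word_id, n_seqs=20, length=30):
    """Generate toy feature sequences; each word gets a distinct mean."""
    base = np.linspace(0.0, 1.0, N_FEATURES) + word_id
    return [base + 0.1 * rng.standard_normal((length, N_FEATURES))
            for _ in range(n_seqs)]

# Train a small HMM per word (4 states, diagonal covariances).
models = {}
for wid, word in enumerate(WORDS):
    seqs = synth_sequences(wid)
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=25)
    m.fit(X, lengths)
    models[word] = m

def classify(seq):
    """Pick the word whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda w: models[w].score(seq))

# Per-word accuracy on held-out synthetic sequences.
correct = total = 0
for wid, word in enumerate(WORDS):
    for seq in synth_sequences(wid, n_seqs=5):
        correct += classify(seq) == word
        total += 1
print(f"per-word accuracy: {correct / total:.2%}")
```

Note that this isolated-word classifier is a simplification: the system described in the abstract performs continuous, sentence-level recognition, where word models are concatenated and decoded against a grammar rather than scored one sign at a time.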



Author information


Correspondence to Thad Starner.


Cite this article

Starner, T., Weaver, J. & Pentland, A. A wearable computer-based American Sign Language recogniser. Personal Technologies 1, 241–250 (1997). https://doi.org/10.1007/BF01682027
