
Extraction of Displayed Objects Corresponding to Demonstrative Words for Use in Remote Transcription

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 6180))

Abstract

A previously proposed system that extracts the target objects displayed during lectures, using demonstrative words and phrases together with pointing gestures, has now been evaluated. The system identifies pointing gestures by analyzing the trajectory of the stick pointer and extracts the objects to which the speaker points. The extracted objects are displayed on the transcriber's monitor at a remote location, helping the transcriber translate the demonstrative word or phrase into a short description of the object. Testing with video of an actual lecture showed that the system had a recall of 85.7% and a precision of 84.8%. In a further test using two extracted scenes, transcribers replaced significantly more demonstrative words with short descriptions of the target objects when the extracted objects were displayed on their screens. A transcriber using this system can thus transcribe speech more easily and produce more meaningful transcriptions for hearing-impaired listeners.
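The abstract describes detecting pointing gestures from the trajectory of a stick pointer. One common way to flag such gestures is dwell detection: the pointer tip lingering near one spot for several consecutive frames. The sketch below illustrates that idea only; the trajectory format, pixel radius, and frame thresholds are assumptions for illustration, not the parameters or method used by the system in the paper.

```python
# Illustrative sketch: flag a pointing gesture when the tracked tip of a
# stick pointer dwells near one spot for several consecutive video frames.
# All thresholds and the (x, y)-per-frame trajectory format are assumptions.
import math

def detect_dwell_points(trajectory, radius=15.0, min_frames=10):
    """Return (frame_index, (x, y)) anchors where the tip stays within
    `radius` pixels of its starting point for at least `min_frames` frames."""
    anchors = []
    i = 0
    while i < len(trajectory):
        x0, y0 = trajectory[i]
        j = i + 1
        # Extend the window while the tip remains inside the dwell radius.
        while j < len(trajectory) and math.hypot(trajectory[j][0] - x0,
                                                 trajectory[j][1] - y0) <= radius:
            j += 1
        if j - i >= min_frames:
            anchors.append((i, (x0, y0)))
        # Skip past the dwell window, or advance one frame if none was found.
        i = j if j > i + 1 else i + 1
    return anchors

# A fast sweep across the slide followed by a 12-frame dwell at (300, 200):
traj = [(20 * t, 10 * t) for t in range(10)] + [(300, 200)] * 12
print(detect_dwell_points(traj, radius=15.0, min_frames=12))
# → [(10, (300, 200))]
```

In a full pipeline, each detected anchor point would then be intersected with the bounding boxes of objects on the displayed slide to decide which object the speaker is pointing at.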





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Takeuchi, Y., Ohta, H., Ohnishi, N., Wakatsuki, D., Minagawa, H. (2010). Extraction of Displayed Objects Corresponding to Demonstrative Words for Use in Remote Transcription. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds) Computers Helping People with Special Needs. ICCHP 2010. Lecture Notes in Computer Science, vol 6180. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14100-3_24

  • DOI: https://doi.org/10.1007/978-3-642-14100-3_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14099-0

  • Online ISBN: 978-3-642-14100-3

  • eBook Packages: Computer Science (R0)
