DOI: 10.1145/3267242.3267274
research-article

Improving ultrasound-based gesture recognition using a partially shielded single microphone

Published: 08 October 2018

ABSTRACT

We propose a method that improves ultrasound-based in-air gesture recognition by altering the acoustic characteristics of a microphone. The Doppler effect is often used to recognize ultrasound-based gestures; however, increasing the number of recognizable gestures is difficult because the Doppler effect yields only limited information. In this study, we partially shield a microphone with a 3D-printed cover. The cover alters the sensitivity of the microphone and thus the characteristics of the observed Doppler effect. Because the proposed method uses only a 3D-printed cover together with the single microphone and speaker already embedded in a device, it requires no additional electronic hardware to improve gesture recognition. We design four different microphone covers and evaluate the performance of the proposed method on six gestures with eight participants. The evaluation results confirm that the proposed method increases recognition accuracy by 15.3%.
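The Doppler-based sensing that the abstract builds on works by emitting an inaudible pilot tone and measuring how hand motion shifts the frequency of the reflected signal (a hand approaching the microphone raises the echo's frequency; a receding hand lowers it). The following is a minimal illustrative sketch of that idea, not the paper's implementation: the function name, the 20 kHz pilot, and the ±500 Hz search band are assumptions chosen for the example.

```python
import numpy as np

def doppler_shift_hz(signal, fs, pilot_hz, band_hz=500):
    """Estimate the Doppler shift of a reflected ultrasound tone.

    Finds the strongest spectral peak within +/- band_hz of the emitted
    pilot frequency and reports its offset from the pilot. This is a
    simplified sketch of Doppler-based gesture sensing; real systems
    track the shift over time to classify gestures.
    """
    window = np.hanning(len(signal))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs > pilot_hz - band_hz) & (freqs < pilot_hz + band_hz)
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return peak_hz - pilot_hz

# Synthetic check: an echo of a 20 kHz pilot shifted up by 120 Hz,
# as if the hand were moving toward the microphone.
fs = 48000                                    # common audio sample rate
t = np.arange(0, 0.1, 1.0 / fs)               # 0.1 s analysis window
echo = np.sin(2 * np.pi * (20000 + 120) * t)
print(round(doppler_shift_hz(echo, fs, pilot_hz=20000)))
```

With a 0.1 s window at 48 kHz the FFT bin spacing is 10 Hz, so the estimated shift lands on the nearest bin; finer motion resolution requires longer windows or interpolation around the peak.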


Supplemental Material

p9-watanabe.mp4 (mp4, 42.8 MB)


Published in

ISWC '18: Proceedings of the 2018 ACM International Symposium on Wearable Computers
October 2018
307 pages
ISBN: 9781450359672
DOI: 10.1145/3267242

    Copyright © 2018 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States




Acceptance Rates

Overall acceptance rate: 38 of 196 submissions, 19%

