ABSTRACT
We propose a method to improve ultrasound-based in-air gesture recognition by altering the acoustic characteristics of a microphone. The Doppler effect is often utilized to recognize ultrasound-based gestures; however, increasing the number of recognizable gestures is difficult because of the limited information obtained from the Doppler effect. In this study, we partially shield a microphone with a 3D-printed cover. The cover alters the sensitivity of the microphone and thereby the characteristics of the observed Doppler effect. Since the proposed method requires only a 3D-printed cover together with the single microphone and speaker already embedded in a device, no additional electronic hardware is needed to improve gesture recognition. We design four different microphone covers and evaluate the performance of the proposed method on six gestures with eight participants. The evaluation results confirm that the proposed method increases recognition accuracy by 15.3%.
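To illustrate the sensing principle described above, the following is a minimal sketch of Doppler-based gesture sensing: a device emits an ultrasonic tone, and hand motion shifts the frequency of the reflection by 2v/c times the carrier. All constants here (a 20 kHz carrier, 48 kHz sample rate, the bin-scanning detector) are illustrative assumptions, not values from the paper.

```python
import math

FS = 48_000      # sample rate in Hz (assumed)
F0 = 20_000.0    # emitted ultrasonic tone in Hz (assumed)
C = 343.0        # speed of sound in air, m/s

def doppler_shift(v):
    """Frequency shift of a tone reflected off a hand moving at v m/s
    toward the device: f' - F0 = (2*v/C) * F0."""
    return 2.0 * v / C * F0

def goertzel_power(samples, freq):
    """Signal power at a single frequency via the Goertzel algorithm."""
    k = 2.0 * math.cos(2.0 * math.pi * freq / FS)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

def estimate_shift(samples, max_shift=400.0, step=25.0):
    """Scan frequency bins around F0 and return the offset with most power."""
    best_off, best_p = 0.0, -1.0
    off = -max_shift
    while off <= max_shift:
        p = goertzel_power(samples, F0 + off)
        if p > best_p:
            best_off, best_p = off, p
        off += step
    return best_off

# Simulate a reflection from a hand moving 1.5 m/s toward the microphone.
shift = doppler_shift(1.5)   # about 175 Hz
sig = [math.sin(2 * math.pi * (F0 + shift) * n / FS) for n in range(4096)]
print(estimate_shift(sig))
```

A real implementation would run this over sliding audio windows and classify the sequence of shifts; the paper's contribution is that a partial microphone cover makes these shift patterns more distinguishable across gestures without extra electronics.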
Supplemental material is available for download.