DOI: 10.1145/3610977.3634935

Presentation of Robot-Intended Handover Position using Vibrotactile Interface during Robot-to-Human Handover Task

Published: 11 March 2024

ABSTRACT

Advancements in robot autonomy and safety have enabled close interactions with humans, such as object handovers. During robot-to-human handovers in assembly tasks, the robot considers the state of the human to determine its optimal handover position and timing. However, humans may struggle to focus on their primary tasks because they must track the robot's movement. This study develops a vibrotactile interface that helps humans maintain focus on their primary tasks while receiving objects. The interface conveys the robot-intended handover position on the human forearm by displaying, via vibrotactile cues, the angular direction and distance of that position relative to the human hand. The experimental results demonstrated that the interface allowed participants to receive objects with faster reaction and completion times and with reduced head rotation toward the robot. Participants also subjectively perceived improved performance and reduced mental workload compared with the condition without the interface.
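The encoding described in the abstract (angular direction plus distance, rendered on the forearm) belongs to the family of phantom-sensation vibrotactile guidance. As a rough illustration only, the following Python sketch shows one plausible way such a cue could be computed. The six-motor ring layout, the 0.5 m working range, and the distance-to-intensity mapping are assumptions made for this sketch, not details taken from the paper.

```python
# Hypothetical sketch: encode a robot-intended handover position as
# per-motor vibration amplitudes on a forearm ring. Illustrative only;
# motor count, range, and intensity mapping are assumptions.
import math

NUM_MOTORS = 6        # assumed ring of 6 vibration motors around the forearm
MAX_DISTANCE = 0.5    # assumed working range in meters

def encode_handover_cue(dx, dy):
    """Map a target offset (dx, dy) from the hand to motor amplitudes.

    Direction is rendered as a phantom sensation interpolated between
    the two motors adjacent to the target angle; distance scales the
    overall intensity.
    """
    angle = math.atan2(dy, dx) % (2 * math.pi)
    distance = min(math.hypot(dx, dy), MAX_DISTANCE)

    # Locate the two motors bracketing the target angle.
    sector = angle / (2 * math.pi) * NUM_MOTORS
    i = int(sector) % NUM_MOTORS
    j = (i + 1) % NUM_MOTORS
    frac = sector - int(sector)

    # Linear amplitude interpolation between motors i and j produces a
    # phantom sensation at the target direction; intensity grows with
    # distance so the cue fades as the hand nears the handover point
    # (one possible convention; the paper may use another).
    intensity = distance / MAX_DISTANCE
    amplitudes = [0.0] * NUM_MOTORS
    amplitudes[i] = (1.0 - frac) * intensity
    amplitudes[j] = frac * intensity
    return amplitudes

# Example: target 20 cm ahead and 10 cm to the left of the hand.
print(encode_handover_cue(0.2, 0.1))
```

In a real interface, the amplitude vector would presumably be streamed to the motor drivers at a fixed update rate as the robot's planned handover position changes.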

References

  1. Arash Ajoudani, Andrea Maria Zanchettin, Serena Ivaldi, Alin Albu-Sch"affer, Kazuhiro Kosuge, and Oussama Khatib. 2018. Progress and prospects of the human--robot collaboration. Autonomous Robots , Vol. 42 (2018), 957--975.Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. Jean-David Boucher, Ugo Pattacini, Amelie Lelong, Gerard Bailly, Frederic Elisei, Sascha Fagel, Peter Dominey, and Jocelyne Ventre-Dominey. 2012. I Reach Faster When I See You Look: Gaze Effects in Human--Human and Human--Robot Face-to-Face Cooperation. Frontiers in Neurorobotics , Vol. 6 (2012). https://doi.org/10.3389/fnbot.2012.00003Google ScholarGoogle ScholarCross RefCross Ref
  3. Elizabeth Cha, Naomi T. Fitter, Yunkyung Kim, Terrence Fong, and Maja J. Matarić. 2018. Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI '18). Association for Computing Machinery, New York, NY, USA, 434--442. https://doi.org/10.1145/3171221.3171285Google ScholarGoogle ScholarDigital LibraryDigital Library
  4. Vincent Duchaine and Clément Gosselin. 2009. Safe, stable and intuitive control for physical human-robot interaction. In 2009 IEEE International Conference on Robotics and Automation. IEEE, 3383--3388.Google ScholarGoogle ScholarCross RefCross Ref
  5. Jonggi Hong, Alisha Pradhan, Jon E. Froehlich, and Leah Findlater. 2017. Evaluating Wrist-Based Haptic Feedback for Non-Visual Target Finding and Path Tracing on a 2D Surface. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, Maryland, USA) (ASSETS '17). Association for Computing Machinery, New York, NY, USA, 210--219. https://doi.org/10.1145/3132525.3132538Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. Yukiko Iwasaki, Kozo Ando, and Hiroyasu Iwata. 2019. Haptic feedback system of the additional hand position for multiple task situations using a wearable robot arm. In 2019 IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, 247--252.Google ScholarGoogle ScholarCross RefCross Ref
  7. Zhenyu Liao, Jose V. Salazar Luces, and Yasuhisa Hirata. 2020. Human Navigation Using Phantom Tactile Sensation Based Vibrotactile Feedback. IEEE Robotics and Automation Letters, Vol. 5, 4 (2020), 5732--5739. https://doi.org/10.1109/LRA.2020.3010447Google ScholarGoogle ScholarCross RefCross Ref
  8. Jose V. Salazar Luces, Kanako Ishida, and Yasuhisa Hirata. 2019. Human Position Guidance Using Vibrotactile Feedback Stimulation Based on Phantom-Sensation. In 2019 IEEE International Conference on Cyborg and Bionic Systems (CBS). 235--240. https://doi.org/10.1109/CBS46900.2019.9114479Google ScholarGoogle ScholarCross RefCross Ref
  9. Simone Macciò, Alessandro Carf`i, and Fulvio Mastrogiovanni. 2022. Mixed Reality as Communication Medium for Human-Robot Collaboration. In 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2796--2802.Google ScholarGoogle Scholar
  10. Matteo Melchiorre, Leonardo Sabatino Scimmi, Stefano Mauro, and Stefano Paolo Pastorelli. 2021. Vision-based control architecture for human--robot hand-over applications. Asian Journal of Control, Vol. 23, 1 (2021), 105--117.Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. Muhammad Akmal Bin Mohammed Zaffir and Takahiro Wada. 2023. Presenting Human-Robot Relative Hand Position using a Multi-Step Vibrotactile Stimulus for Handover Task. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. 426--430.Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. AJung Moon, Daniel M. Troniak, Brian Gleeson, Matthew K.X.J. Pan, Minhua Zheng, Benjamin A. Blumer, Karon MacLean, and Elizabeth A. Croft. 2014. Meet Me Where i'm Gazing: How Shared Attention Gaze Affects Human-Robot Handover Timing. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (Bielefeld, Germany) (HRI '14). Association for Computing Machinery, New York, NY, USA, 334--341. https://doi.org/10.1145/2559636.2559656Google ScholarGoogle ScholarDigital LibraryDigital Library
  13. Rhys Newbury, Akansel Cosgun, Tysha Crowley-Davis, Wesley P. Chan, Tom Drummond, and Elizabeth A. Croft. 2022. Visualizing Robot Intent for Object Handovers with Augmented Reality. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 1264--1270. https://doi.org/10.1109/RO-MAN53752.2022.9900524Google ScholarGoogle ScholarDigital LibraryDigital Library
  14. Alessia Noccaro, Luigi Raiano, Mattia Pinardi, Domenico Formica, and Giovanni Di Pino. 2020. A novel proprioceptive feedback system for supernumerary robotic limb. In 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob). IEEE, 1024--1029.Google ScholarGoogle ScholarCross RefCross Ref
  15. Jumpei Okimoto and Mihoko Niitsuma. 2020. Effects of Auditory Cues on Human-Robot Collaboration. In 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE). 1572--1577. https://doi.org/10.1109/ISIE45063.2020.9152413Google ScholarGoogle ScholarCross RefCross Ref
  16. Max Pascher, Til Franzen, Kirill Kronhardt, and Jens Gerken. 2022. HaptiX: Extending Cobot's Motion Intention Visualization by Haptic Feedback. arXiv preprint arXiv:2210.16027 (2022).Google ScholarGoogle Scholar
  17. Mattia Pinardi, Alessia Noccaro, Luigi Raiano, Domenico Formica, and Giovanni Di Pino. 2023. Comparing end-effector position and joint angle feedback for online robotic limb tracking. Plos one, Vol. 18, 6 (2023), e0286566.Google ScholarGoogle ScholarCross RefCross Ref
  18. M. Pinardi, L. Raiano, A. Noccaro, D. Formica, and G. Di Pino. 2021. Cartesian Space Feedback for Real Time Tracking of a Supernumerary Robotic Limb: a Pilot Study. In 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER). 889--892. https://doi.org/10.1109/NER49283.2021.9441174Google ScholarGoogle ScholarCross RefCross Ref
  19. Jose Salazar, Keisuke Okabe, and Yasuhisa Hirata. 2018. Path-Following Guidance Using Phantom Sensation Based Vibrotactile Cues Around the Wrist. IEEE Robotics and Automation Letters, Vol. 3, 3 (2018), 2485--2492. https://doi.org/10.1109/LRA.2018.2810939Google ScholarGoogle ScholarCross RefCross Ref
  20. Jose V. Salazar Luces, Keisuke Okabe, Yoshiki Murao, and Yasuhisa Hirata. 2018. A Phantom-Sensation Based Paradigm for Continuous Vibrotactile Wrist Guidance in Two-Dimensional Space. IEEE Robotics and Automation Letters, Vol. 3, 1 (2018), 163--170. https://doi.org/10.1109/LRA.2017.2737480Google ScholarGoogle ScholarCross RefCross Ref
  21. Valay A Shah, Nicoletta Risi, Giulia Ballardini, Leigh Ann Mrotek, Maura Casadio, and Robert A Scheidt. 2018. Effect of Dual Tasking on Vibrotactile Feedback Guided Reaching -- A Pilot Study. In Haptics: Science, Technology, and Applications, Domenico Prattichizzo, Hiroyuki Shinoda, Hong Z Tan, Emanuele Ruffaldi, and Antonio Frisoli (Eds.). Springer International Publishing, Cham, 3--14.Google ScholarGoogle Scholar
  22. Keita Suzuki, Yuichi Muramatsu, Trygve Thomessen, and Mihoko Niitsuma. 2013. Phantom sensation on a forearm using a vibrotactile interface. In Multimodal man-machine communication 2013, 4th IEEE International Conference on Cognitive Infocommunications.Google ScholarGoogle Scholar
  23. Georgios Tsamis, Georgios Chantziaras, Dimitrios Giakoumis, Ioannis Kostavelis, Andreas Kargakos, Athanasios Tsakiris, and Dimitrios Tzovaras. 2021. Intuitive and safe interaction in multi-user human robot collaboration environments through augmented reality displays. In 2021 30th IEEE international conference on robot & human interactive communication (RO-MAN). IEEE, 520--526.Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. Michael Walker, Hooman Hedayati, Jennifer Lee, and Daniel Szafir. 2018. Communicating Robot Motion Intent with Augmented Reality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI '18). Association for Computing Machinery, New York, NY, USA, 316--324. https://doi.org/10.1145/3171221.3171253Google ScholarGoogle ScholarDigital LibraryDigital Library
  25. Wei Yang, Chris Paxton, Maya Cakmak, and Dieter Fox. 2020. Human grasp classification for reactive human-to-robot handovers. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 11123--11130.Google ScholarGoogle ScholarCross RefCross Ref
  26. Minhua Zheng, AJung Moon, Elizabeth A Croft, and Max Q.-H. Meng. 2015. Impacts of Robot Head Gaze on Robot-to-Human Handovers. International Journal of Social Robotics, Vol. 7, 5 (2015), 783--798. https://doi.org/10.1007/s12369-015-0305-z ioGoogle ScholarGoogle ScholarCross RefCross Ref

Published in

HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024, 982 pages
ISBN: 9798400703225
DOI: 10.1145/3610977

Copyright © 2024 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article

Acceptance Rates

Overall Acceptance Rate: 242 of 1,000 submissions, 24%
Article Metrics

• Downloads (last 12 months): 82
• Downloads (last 6 weeks): 65
