Existing human-robot interaction (HRI) technologies are not intuitive or effective enough for disabled users, many of whom can neither express explicit intent (through voice, posture, touch, etc.) nor communicate their implicit intent. We propose a novel HRI framework for implicit intent reasoning based on eye tracking. The framework is designed for users with normal vision; it tracks and analyzes their eye movements to infer their intents in a smart home environment. These intents are then sent to an assistive robot, guiding it to accomplish specific tasks such as using keys to open the door. Two experiments were carried out to validate the effectiveness of the framework. In the implicit intent classification task, a Support Vector Machine (SVM) distinguished the two implicit intents (task-free visual browsing and task-oriented visual search) with an average accuracy of 84.43%. In the implicit intent reasoning task, Naive Bayesian (NB) networks were used to build the intent knowledge base, reaching a peak reasoning accuracy of 98%. These results indicate that our framework could help the elderly and the disabled better adapt to a smart home environment in their daily lives.
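To illustrate the kind of intent inference the abstract describes, the sketch below classifies two hypothetical eye-movement features (mean fixation duration and saccade amplitude) into the browsing vs. search intents using a hand-rolled Gaussian Naive Bayes model. The feature choices, the training values, and the use of Gaussian NB in place of the paper's SVM/NB-network pipeline are all illustrative assumptions, not details taken from the paper.

```python
import math

# Hypothetical features per trial: (mean fixation duration in ms, saccade amplitude in deg).
# Labels: 0 = task-free visual browsing, 1 = task-oriented visual search.
# Values are illustrative only, not data from the paper.
train = [
    ((320.0, 2.1), 0), ((340.0, 1.8), 0), ((300.0, 2.4), 0),
    ((180.0, 5.2), 1), ((200.0, 4.8), 1), ((170.0, 5.6), 1),
]

def fit_gaussian_nb(data):
    """Estimate per-class feature means, variances, and class priors."""
    stats = {}
    for label in {y for _, y in data}:
        rows = [x for x, y in data if y == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-6  # smoothing
                     for col, m in zip(zip(*rows), means)]
        stats[label] = (means, variances, n / len(data))
    return stats

def predict(stats, x):
    """Return the class with the highest log-posterior under the Gaussian model."""
    def log_post(label):
        means, variances, prior = stats[label]
        ll = math.log(prior)
        for v, m, var in zip(x, means, variances):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return ll
    return max(stats, key=log_post)

model = fit_gaussian_nb(train)
print(predict(model, (330.0, 2.0)))  # long fixations, small saccades -> 0 (browsing)
print(predict(model, (190.0, 5.0)))  # short fixations, large saccades -> 1 (search)
```

In a full system, the predicted intent label would then be passed to the reasoning stage and on to the assistive robot as a task trigger.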