Editorial

Guest Editors’ Introduction: Multimodal Technologies and Interaction in the Era of Automated Driving

Andreas Riener 1 and Myounghoon Jeon 2
1 Technische Hochschule Ingolstadt, Faculty of Electrical Engineering and Computer Science, Esplanade 10, D-85049 Ingolstadt, Germany
2 Virginia Tech, Grado Department of Industrial and Systems Engineering, Blacksburg, VA 24061, USA
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2019, 3(2), 41; https://doi.org/10.3390/mti3020041
Submission received: 24 May 2019 / Accepted: 10 June 2019 / Published: 12 June 2019

1. Motivation and Background

Recent advancements in automated vehicle technologies present numerous opportunities and challenges in supporting the diverse facets of user needs. These users range from inexperienced, thrill-seeking young novice drivers, to risk-averse, safety-conscious elderly drivers, to those who traditionally have not been considered drivers at all, each with their own limitations and preferences [1]. In the future, the driving task will increasingly be shared between the driver and the vehicle (Level 3, according to the Society of Automotive Engineers (SAE) standard J3016 [2]), or the driver will be pushed into a (passive) passenger or occupant role (Level 4). In the long term, full automation (Level 5) will require entirely new user interface concepts, as there will be no need (or even any possibility) for drivers/passengers to control the vehicle. We therefore need to put our efforts into the design of radically new automotive user interfaces that support drivers/passengers across automation levels and activities. Drivers/passengers will gain additional time to spend on what they want to do within an isolated (but still connected) space. However, acceptance of these new technologies will depend strongly on several aspects, such as the reliability of automated driving systems (ADSs), trust in the technology, and the successful communication of the vehicle’s intention and behavior to users. This special issue features a collection of current, state-of-the-art user interface concepts for tomorrow’s vehicles. We hope that it can serve as a positive step in the direction in which we want to move.

2. New Opportunities in Multimodal Technologies and Interaction for Futuristic Vehicles

Ongoing technological development will place completely new demands on the design of interaction inside and outside the vehicle. To illustrate, interactions may include vehicle to occupants (i.e., Intelligent Vehicle (IV)), vehicle/occupants to infrastructure (V2I), vehicle/occupants to other vehicles or their occupants (V2V), vehicle/occupants to business (V2B) [3], and vehicle/occupants to pedestrians [4]. With increasing automation, the goals of these interactions will also vary. In-vehicle interactions in this new era will promote driver situation awareness, trust [5], and better user experiences, as well as usability and safety. Following Wickens’ Multiple Resource Theory [6], in-vehicle displays have adopted multimodal technologies to ease the physical and mental workload of drivers. As ADSs become pervasive, the use of multimodal displays and controls is expected to increase, expanding user interfaces beyond traditional graphical user interfaces with auditory [7], tactile/haptic [8], gesture [9], wearable [10], and Augmented Reality (AR)/Virtual Reality (VR)/Mixed Reality (MR) technologies [11]. To ensure a more natural interaction, not only does the driver/user have to monitor the vehicle, but the vehicle also has to monitor the driver/user. To achieve higher accuracy in these interactions, vehicles are increasingly equipped with multimodal sensing technologies as well [12]. Moreover, such vehicles attempt to estimate and detect not only cognitive workload but also various other driver states, including fatigue and drowsiness [13], emotions [14], and mind wandering [15]. Along the same lines, the articles in this special issue address diverse multimodal technologies for automated driving situations.

3. Submissions and Review Process

The articles submitted to this special issue underwent a rigorous peer-review process, and each manuscript was reviewed by (on average) three independent reviewers. The guest editors performed meta-reviews on the papers in each round of review (up to three), and finally, based on an objective score sheet, four articles were selected for publication. None of the accepted papers received a review below the second-highest ranking category.

4. Summary of Contributions

All papers accepted for the special issue on “User Interfaces to Pave the Way for Interaction with Tomorrow’s Vehicles” address topics related to automated driving from different angles. The first article [16] investigates the characteristics of in-vehicle voice agents that increase technology acceptance and improve the perceived ease of use of automated vehicle technology. The next article [17] addresses the issue of driver emotions that may have a negative influence on road safety; the authors suggest employing affective computing for emotional state detection and for parametrizing empathic digital assistants to improve driver emotions and, as a consequence, enable safer driving. The authors of the third article [18] likewise focus on perceived ease of use, examining the effect of feedback when interacting with automated driving systems at lower levels of automation (SAE Levels 0, 2, and 3), and conclude that perceived ease of use can serve as a diagnostic measure in interaction with automated vehicles. In the last article, Nanjappan et al. [19] investigate the design space of textile-based wearable user interfaces for in-vehicle secondary interactions and conclude with a list of design recommendations for fabric-based wrist interfaces that could help designers produce their own wearable interfaces for the vehicle context.
In more detail, the contributions (ordered by publication date) are as follows:
In their article, “The Voice Makes the Car: Enhancing Autonomous Vehicle Perceptions and Adoption Intention through Voice Agent Gender and Style” [16], Sanguk Lee, Rabindra Ratan, and Taiwoo Park explore how the design of voice agents for automated vehicles influences passengers’ intentions to adopt those vehicles. Using an online experiment, the authors examined the role of gender stereotypes in responses to a voice agent in an automated vehicle, with respect to the technology acceptance model (TAM) and its constructs of perceived ease of use (PEOU) and perceived usefulness (PU). The findings indicate that voice agent characteristics consistent with stereotypical expectations of the social role (e.g., informative male and social female voice agents) foster greater PEOU and PU than inconsistent conditions (social male and informative female voice agents). The results offer theoretical implications regarding the technology acceptance model, PEOU, and PU in the context of automated technology, as well as practical implications for the design of in-vehicle voice agents. The authors conclude that interactions with voice agents have the potential to influence perceptions beyond the immediate media use, and they therefore invite designers not to reinforce existing social role stereotypes, but to shape the social norms that guide social role expectations.
The basic assumption of the authors of “Improving Driver Emotions with Affective Strategies” [17] is that sad or angry drivers tend to perform worse at driving, which, in turn, decreases overall road safety. To counteract this, Braun et al. suggest using affective computing tools and methods to detect drivers’ (negative) emotional states. This information could then be used to build a system that reacts to potentially dangerous driver states and influences the driver to drive more safely. Results from a driving simulator study with different conditions suggest that an emotional voice assistant with the ability to empathize with the driver is the most promising approach, as it best improves negative states and is rated most positively by drivers. The authors conclude that digital assistants are a valuable platform for improving driver emotions in automotive environments and thereby have the potential to enable safer driving.
In the third article, “Tell Them How They Did: Feedback on Operator Performance Helps Calibrate Perceived Ease of Use in Automated Driving” [18], Forster et al. investigate the effect of feedback to drivers on interaction performance when interacting with automated driving systems at SAE Levels 0, 2, and 3. The paper addresses a timely issue: as automated driving technology proliferates, we need to understand a great deal more about the interaction between users, technologies, and use cases at various levels of automation. It also tackles an important research question, since users need correct calibration to develop an appropriate level of trust in the system. The central hypothesis of the paper is that providing feedback about actual performance on automated driving tasks will improve users’ calibration of perceived ease of use (PEOU). The authors conclude that their results support the application of PEOU as a diagnostic measure in interaction with automated vehicles and that interface evaluation can benefit from supporting feedback to obtain more conservative results. These results are in line with recent findings from Frison et al. [20], who found that, in addition to actual system behavior, the aesthetics and perceived usability of driver-vehicle interfaces have an impact on the perception of automated driving (e.g., perceived system performance and trustworthiness). The paper has implications for the acceptance of automated driving technology, the effect of feedback on operator perception, and, possibly, operator performance.
In the last article of this special issue, “Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study” [19], Nanjappan et al. discuss the potential of clothing-based textile wearable interfaces for interacting with in-vehicle controls. To this end, they conducted a user elicitation study with a non-functional, fabric-based wearable input device worn on the wrist to control a mobile phone, a music player, and map navigation in the car. The main aim was to investigate the design space of wearables for in-vehicle secondary interactions and to learn what kinds of gestures users would like to use to control different devices and apps. The study results suggest that in-vehicle interactions using fabric-based interfaces are simple, natural, intuitive, and convenient to perform while steering the car as the primary task. Nanjappan et al. conclude with a list of design recommendations for fabric-based wrist interfaces, which should help designers produce textile-based wearable interfaces for the vehicle context.

5. Conclusion and Outlook

Technologies for automated vehicles continue to develop, and with increasing levels of automation, users face different roles than in manual driving, gaining the ability to use their time in the vehicle more effectively and efficiently by engaging in non-driving-related tasks. Nevertheless, for the foreseeable future, users will have to monitor vehicle/traffic behavior and be ready to take over at short notice. In the meantime, technological progress (e.g., sophisticated assistance systems and non-deterministic machine learning algorithms) and detachment from the driving task (e.g., deskilling and lack of situation awareness) will require new ways to “keep the driver in the loop—when required” and to guarantee road safety in (conditional) automation. The design and representation of the user interface is therefore considered one of the most critical issues for effective and efficient interaction. In this special issue, the authors of the four accepted articles have contributed to this challenging field with novel and adaptive interfaces, emotional voice assistants, and fabric-based wearables that increase technology acceptance, improve ease of use, and allow for simple and convenient interaction. To create an optimized interaction framework for future vehicles, more research is needed on integrating monitoring technologies (including neuroergonomic devices), exploring further modalities (e.g., olfactory and ambient displays), and establishing closed-loop interaction with just-in-time feedback.
Finally, we greatly appreciate the hard work of the authors and reviewers and their contributions to shaping this special issue. We cordially invite you on a journey through the collection of high-quality research articles compiled in this special issue on “User Interfaces to Pave the Way for Interaction with Tomorrow’s Vehicles”.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jeon, M.; Politis, I.; Shladover, S.; Sutter, C.; Terken, J.; Poppinga, B. Towards Life-Long Mobility: Accessible Transportation with Automation. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’16, Ann Arbor, MI, USA, 24–26 October 2016. [Google Scholar] [CrossRef]
  2. SAE (Society of Automotive Engineers). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016 Ground Vehicle Standard); SAE: Warrendale, PA, USA, 2018. [Google Scholar] [CrossRef]
  3. Jeon, M.; Riener, A.; Lee, J.-H.; Schuett, J.; Walker, B.N. Cross-Cultural Differences in the Use of In-Vehicle Technologies and Vehicle Area Network Services: Austria, USA, and South Korea. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’12, Portsmouth, NH, USA, 17–19 October 2012. [Google Scholar] [CrossRef]
  4. Dey, D.; Habibovic, A.; Klingegård, M.; Lundgren, V.M.; Andersson, J.; Schieben, A. Workshop on Methodology: Evaluating Interactions between Automated Vehicles and Other Road Users—What Works in Practice? In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’18, Toronto, ON, Canada, 23–25 September 2018. [Google Scholar] [CrossRef]
  5. Wintersberger, P.; Frison, A.-K.; Riener, A.; von Sawitzky, T. Fostering User Acceptance and Trust in Fully Automated Vehicles: Evaluating the Potential of Augmented Reality. Presence Teleoper. Virtual Environ. 2019, 27, 46–62. [Google Scholar]
  6. Wickens, C.D. Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 2002, 3, 159–177. [Google Scholar] [CrossRef]
  7. Jeon, M.; FakhrHosseini, M.; Vasey, E.; Nees, M.A. Blueprint of the auditory interactions in automated vehicles: Report on the workshop and tutorial. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’17, Oldenburg, Germany, 24–27 September 2017. [Google Scholar] [CrossRef]
  8. Harrington, K.; Large, D.R.; Burnett, G.; Georgiou, O. Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User Interfaces. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’18, Toronto, ON, Canada, 23–25 September 2018. [Google Scholar] [CrossRef]
  9. Sterkenburg, J.; Landry, S.; Jeon, M. Design and evaluation of auditory-supported air gesture controls in vehicles. J. Multimodal User Interfaces 2019, 13, 55–70. [Google Scholar] [CrossRef]
  10. Tippey, K.G.; Sivaraj, E.; Ardoin, W.J.; Roady, T.; Ferris, T.K. Texting while Driving Using Google Glass: Investigating the Combined Effect of Heads-Up Display and Hands-Free Input on Driving Safety and Performance. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications: Los Angeles, CA, USA, 2014. [Google Scholar] [CrossRef]
  11. Riener, A.; Kun, A.L.; Gabbard, J.; Brewster, S.; Riegler, A. ARV 2018: 2nd Workshop on Augmented Reality for Intelligent Vehicles. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’18, Toronto, ON, Canada, 23–25 September 2018. [Google Scholar] [CrossRef]
  12. Riener, A.; Jeon, M.; Alvarez, I.; Frison, A.K. Driver in the loop: Best practices in automotive sensing and feedback mechanisms. In Automotive User Interfaces: Human-Computer Interaction Series; Meixner, G., Mueller, C., Eds.; Springer International Publishing AG: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  13. Kundinger, T.; Riener, A.; Sofra, N.; Weigl, K. Drowsiness Detection and Warning in Manual and Automated Driving: Results from Subjective Evaluation. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’18, Toronto, ON, Canada, 23–25 September 2018. [Google Scholar] [CrossRef]
  14. Vasey, E.; Ko, S.; Jeon, M. In-Vehicle Affect Detection System: Identification of Emotional Arousal by Monitoring the Driver and Driving Style. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications—AutomotiveUI’18, Toronto, ON, Canada, 23–25 September 2018. [Google Scholar] [CrossRef]
  15. Yanko, M.R.; Spalek, T.M. Driving with the wandering mind: The effect that mind-wandering has on driving performance. Hum. Factors 2014, 56, 260–269. [Google Scholar] [CrossRef] [PubMed]
  16. Lee, S.; Ratan, R.; Park, T. The Voice Makes the Car: Enhancing Autonomous Vehicle Perceptions and Adoption Intention through Voice Agent Gender and Style. Multimodal Technol. Interact. 2019, 3, 20. [Google Scholar] [CrossRef]
  17. Braun, M.; Schubert, J.; Pfleging, B.; Alt, F. Improving Driver Emotions with Affective Strategies. Multimodal Technol. Interact. 2019, 3, 21. [Google Scholar] [CrossRef]
  18. Forster, Y.; Hergeth, S.; Naujoks, F.; Krems, J.; Keinath, A. Tell Them How They Did: Feedback on Operator Performance Helps Calibrate Perceived Ease of Use in Automated Driving. Multimodal Technol. Interact. 2019, 3, 29. [Google Scholar] [CrossRef]
  19. Nanjappan, V.; Shi, R.; Liang, H.N.; Lau, K.K.-T.; Yue, Y.; Atkinson, K. Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study. Multimodal Technol. Interact. 2019, 3, 33. [Google Scholar] [CrossRef]
  20. Frison, A.-K.; Wintersberger, P.; Riener, A.; Schartmüller, C.; Boyle, L.N.; Miller, E.; Weigl, K. In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI’19, Glasgow, UK, 4–9 May 2019. [Google Scholar] [CrossRef]

