
User-Driven Adaptation: Tailoring Autonomous Driving Systems with Dynamic Preferences


Abstract

In the realm of autonomous vehicles, dynamic user preferences are critical yet challenging to accommodate. Existing methods often misrepresent these preferences, either by overlooking their dynamism or by overburdening users, who often find it difficult to express their objectives mathematically. The previously introduced framework, which interprets dynamic preferences as inherent uncertainty and includes a "human-on-the-loop" mechanism that lets users give feedback when dissatisfied with system behaviors, addresses this gap. In this study, we further evaluate the approach through a user study with 20 participants, focusing on aligning system behavior with user expectations through feedback-driven adaptation. The findings affirm the approach's ability to effectively merge algorithm-driven adjustments with user complaints, leading to improved subjective satisfaction with autonomous systems among participants.


1 INTRODUCTION

In the evolution of software systems, particularly in areas like autonomous vehicles and cyber-physical systems, the need to understand and prioritize user preferences has become paramount. User preferences typically involve determining the relative importance of conflicting quality attributes, such as efficiency, safety, and privacy [15]. However, one-size-fits-all or standardized approaches often do not adequately meet individual user needs, leading to dissatisfaction and anxiety [4, 19]. Thus, integrating user preferences effectively is crucial in user-centric design.

User preferences are dynamic and context-dependent. Consider an autonomous vehicle: initially, a user might prioritize efficiency, favoring the shortest routes to their destination. However, in certain scenarios, such as during a vacation, their emphasis might shift towards ride comfort, favoring smoother and more leisurely routes to enjoy the view, even if they are longer. Similarly, customizable self-tracking tools permit frequent adjustments to display aesthetics and functionalities [7], catering to unique user needs, with some users making changes multiple times daily. This variability underlines the necessity for systems to adapt their decision-making strategies swiftly to align with evolving user expectations [13, 14, 24].

Existing research in autonomous driving often misrepresents user preferences, either by pre-setting preferences before deployment without considering individual variability and dynamism [5], or by assuming that users can clearly understand and articulate their preferences [2, 9, 10]. Addressing these issues, Li et al. [15] developed a novel "human-on-the-loop" framework to accommodate the inherent uncertainty in dynamic human preferences. This framework was inspired by the recognition that users are adept at identifying and expressing dissatisfaction with system behaviors [10, 22]. Central to this framework is the incorporation of real-time user dissatisfaction, expressed as complaints, into the fitness function of a genetic algorithm. This algorithm is designed to detect and adapt to nuanced changes in user preferences over time. Consequently, the system's decision-making process is continuously updated and refined by an evolving understanding of user preferences, thereby enhancing the alignment between system behaviors and user desires. However, the effectiveness of this framework in adapting to changing human preferences and optimizing system performance has so far been validated only through theoretical experiments. These experiments, while informative, were conducted with predefined preferences and did not involve direct user interaction, which limits the validation of the framework's practical applicability and effectiveness in real-world scenarios.

Expanding upon this theoretical foundation, this work applies the previously discussed framework to an autonomous driving system and incorporates a practical user study with 20 participants. The study addresses a critical gap: real-world application and validation. It concentrates on essential quality attributes, including efficiency, riding comfort, and landscape aesthetics. Utilizing the Unreal Engine simulator [6], we created several 3D autonomous driving scenarios to evaluate the framework's effectiveness in harmonizing system operations with user expectations, particularly in refining route selections based on participant feedback. The results of our user study illustrate that the framework effectively aligns algorithm-derived system behaviors with self-reported user preferences, while also significantly enhancing user satisfaction and reducing the frequency of complaints about system behaviors. This improvement is a testament to the framework's ability to adjust dynamically to user preferences in real-time scenarios, offering a more personalized and satisfying user experience in autonomous driving.

The rest of the paper is organized as follows: Section 2 delves into related work. Section 3 lays out an exploration scenario centered on route choice in an autonomous driving setting and briefly introduces the preference adaptation approach in use. Section 4 details our practical user study, followed by the study results and an in-depth analysis in Section 5. Section 6 concludes the paper, touching on limitations and possible avenues for future exploration.


2 RELATED WORK ON AUTONOMOUS DRIVING AND PREFERENCE ADAPTATION

The field of autonomous driving encompasses various challenges and developments. Chu et al. examine the role of safety drivers in the autonomous vehicle industry, emphasizing their impact on risk management and professional development [3]. Tener and Liu highlight the ongoing need for human assistance in autonomous vehicles (AVs), despite technological advancements [23]. Schneider et al. explore how system transparency in AVs affects user experience and safety perception, providing design guidelines for integrating user experience with autonomous driving [20]. Dillen et al. focus on how fixed driving styles in AVs can conflict with passenger expectations, affecting comfort and anxiety [5]. These studies collectively highlight the diverse aspects of autonomous driving, from user experience to system design.

In the rapidly evolving landscape of transportation intelligence, the necessity to understand and integrate driver preferences into decision-making scenarios for AVs is becoming increasingly critical. Pan et al. utilized inverse reinforcement learning (IRL) to understand taxi driver preferences in their passenger-search actions, offering insights into the dynamic nature of driver preferences [18]. In typical autonomous system development, users are required to specify their preferences by ranking various quality attributes before the system’s deployment [21, 24]. However, a significant challenge arises as users often struggle to precisely quantify these preferences in numerical terms [1, 2, 9, 10, 16], a critical aspect in customizing AV algorithms for individual user experience.

To emphasize the dynamic adaptation of preferences during runtime, adjusting utility functions to mirror the preference evolution is key [13]. Interactive techniques that assist users in adjusting rankings and understanding decision impacts enhance this adaptability [8, 12, 17]. Wohlrab et al. and Song et al. have each contributed to this area by focusing on weight adjustments in response to contextual changes or direct user inputs [21, 24]. A notable study in AVs proposed voice-guided input for drivers to express their driving style preferences, highlighting the importance of integrating user preference in AV control systems [11]. These studies operate under the assumption that users have a clear understanding of their preferences and can articulate them in mathematical terms.


3 EXPLORATION SCENARIO AND GA-BASED PREFERENCE ADAPTATION

To illustrate the preference adaptation framework, we focus on a route choice scenario for an autonomous driving system. Fig. 1(a) shows an autonomous vehicle transporting the user from a start point to an end point, with multiple options: 1. The Shortest Route: located at the bottom, it features a rough stone road with considerable noise; 2. The Middle Route: this path offers a fine stone road bordered by bushes, but is not immune to noise disturbances; 3. The Third Route: predominantly a fine stone road; 4. The Scenic Route: the longest and most winding, it is enveloped by tree canopies, offering a smooth and flat ride within a tranquil and scenic setting.

Figure 1: The Motivating Scenario and User Study built on Unreal Engine.

Key quality attributes for this route selection include Efficiency (the vehicle's travel time), Aesthetic Appeal (the visual and auditory impact of the surroundings, with factors like trees enhancing aesthetic appeal and ambient noise reducing it), and Road Condition (the quality of the road infrastructure and its maintenance, which significantly influences ride smoothness). Notably, the nature of the terrain, such as a bumpy road, can greatly impact passenger comfort [25]¹.

The vehicle selects routes based on distinct user preferences. For example, a preference weighting of ⟨ 0.333 (Efficiency), 0.333 (Aesthetic Appeal), 0.333 (Road Condition) ⟩ indicates an equal emphasis on the three attributes. A route is evaluated by aggregating the utility values associated with each quality attribute, defined as \(U(route) = \sum \limits _{i=1}^{n} w_i \times u_i\), where \(w_i\) stands for the preference value assigned to a particular quality attribute i and \(u_i\) denotes the utility value for that attribute. For Efficiency, the utility is based on travel distance, assessed by the number of segments traversed. In the case of Road Condition, rougher roads are assigned lower utility due to increased discomfort, while smoother roads have higher utility, indicating a more comfortable and less wearing journey. Similarly, Aesthetic Appeal is evaluated based on environmental factors like noise levels and greenery. Areas with less aesthetic appeal, such as noisy or treeless zones, incur lower utility, while more serene and green environments have higher utility. The optimal route is determined by maximizing the aggregated utility function, which can be implemented through various algorithms such as A*. In the typical scenario of Fig. 1-(a), the best choice for the given user preference is route 3. Details of the utility assignments for the different quality attributes and of the decision-making process for selecting the optimal route based on user preferences are not the main focus of this paper; readers looking for a deeper understanding of the underlying calculations can refer to Li et al. [15].
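To make this aggregation concrete, the following Python sketch scores each route under a preference vector and selects the maximizer. The per-route utility values and route names are illustrative assumptions of ours, not figures from the paper; the actual assignments follow Li et al. [15].

```python
# Preference-weighted route scoring: U(route) = sum_i w_i * u_i (utilities are assumed).
ATTRIBUTES = ["efficiency", "aesthetic_appeal", "road_condition"]

routes = {  # hypothetical utility values in [0, 1] for the four routes of Fig. 1-(a)
    "shortest": {"efficiency": 1.0, "aesthetic_appeal": 0.2, "road_condition": 0.1},
    "middle":   {"efficiency": 0.8, "aesthetic_appeal": 0.5, "road_condition": 0.5},
    "third":    {"efficiency": 0.7, "aesthetic_appeal": 0.7, "road_condition": 0.9},
    "scenic":   {"efficiency": 0.1, "aesthetic_appeal": 1.0, "road_condition": 1.0},
}

def route_utility(utilities: dict, preference: dict) -> float:
    """Aggregate utility of one route under a preference weighting."""
    return sum(preference[a] * utilities[a] for a in ATTRIBUTES)

preference = {"efficiency": 0.333, "aesthetic_appeal": 0.333, "road_condition": 0.334}
best = max(routes, key=lambda r: route_utility(routes[r], preference))
print(best)  # with these illustrative utilities, the balanced preference selects "third"
```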

In our research, preference adaptation transforms into an optimization problem seeking ideal weight configurations by iteratively refining attribute weights in light of user feedback. We aim to achieve three key objectives through these updates: (1) Preference Divergence, aiming to minimize the variance between initial and updated preferences while ensuring user satisfaction, based on the premise that preference shifts occur incrementally over time [18]; (2) Complaint Avoidance, ensuring that the optimal trajectory derived from updated preferences does not intersect with states previously identified as problematic by users; and (3) Implicit Constraints, which involves maintaining specific hierarchical relationships within the attribute weights, ensuring that certain attributes are prioritized based on user feedback such as "excessive noise!" or "excessive bumpiness!". These objectives are encapsulated within a unified fitness function encoding user complaints, formulated as \(f(p) = \lambda_1 f_1 + \lambda_2 f_2 + \lambda_3 f_3\), where \(\lambda_1, \lambda_2, \lambda_3\) are positive coefficients, \(f_1\), \(f_2\), and \(f_3\) represent the objectives mentioned, and \(p = \langle w_1, \ldots, w_n \rangle\) denotes the preference vector.
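As a minimal sketch, the fitness of a candidate preference vector could be assembled from the three component scores as follows; the coefficient values are assumptions of ours, and the components f1, f2, f3 are computed as detailed in the appendix (Eqs. (1)-(5)).

```python
# Weighted combination f(p) = λ1·f1 + λ2·f2 + λ3·f3; the λ values below are illustrative.
LAMBDA_1, LAMBDA_2, LAMBDA_3 = 1.0, 1.0, 1.0

def combined_fitness(f1: float, f2: float, f3: float) -> float:
    """Combine the preference-divergence, complaint-avoidance, and implicit-constraint scores."""
    return LAMBDA_1 * f1 + LAMBDA_2 * f2 + LAMBDA_3 * f3

# Example: a candidate whose optimal path avoids all complained states (f2 = 0) and
# satisfies the implicit constraint (f3 = 0) is scored only by its divergence term f1.
print(combined_fitness(-0.98, 0.0, 0.0))
```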

To align user preferences with these objectives, the approach employs a Genetic Algorithm (GA) framework, renowned for its efficacy in solving such search problems. Within this framework, the fitness function corresponds to the objective function described earlier. Each attribute weight configuration is conceptualized as an individual within the GA population, represented as vectors like \(p_1 = \langle w_1, w_2, \ldots, w_n \rangle\) and \(p_2 = \langle w_1^{\prime}, w_2^{\prime}, \ldots, w_n^{\prime} \rangle\). In this context, each weight \(w_i\) functions as a gene within these individuals. The crossover operation in GA entails exchanging a subset of weights between individuals, generating novel combinations. Post-crossover, individuals exceeding the total weight constraint are either normalized or discarded to maintain validity. The mutation operation involves minor, random adjustments to a weight, with compensatory changes in other weights to preserve the sum constraint. The specifics of these GA operations are further elucidated in the appendix. By employing GA, we establish a systematic, evolutionary method to iteratively refine and optimize preference vectors, adhering to the predefined objectives and user feedback.


4 USER STUDY

To empirically assess the efficacy of the proposed preference adaptation framework driven by user complaints, our user study is designed to answer the following research questions. Informed consent forms were distributed prior to participation in the experiment, and data were processed in line with the recommendations of the ethics board of the researchers' university.

RQ1: Quantitative Analysis of Preference Alignment. Does the framework, employing a complaints feedback mechanism and genetic algorithm for preference adaptation, effectively update preferences to align more closely with users’ self-reported preferences?

RQ2: Qualitative Assessment of User Satisfaction. How does the integration of the framework, particularly its adaptation of preferences based on user complaints, impact user satisfaction with system behaviors?

4.1 Procedure

Our user study commenced with a baseline assumption: all users have equal preference values across the three quality attributes. These initial settings were treated as starting, or 'outdated', preferences (i.e., ⟨ 0.333, 0.333, 0.334 ⟩). Utilizing Unreal Engine [6], a 3D computer graphics game engine, we developed a series of maps, shown in Fig. 1, as the experimental environment. As the study progressed, we focused on dynamically updating these preferences by inferring from each user's complaints about the routes recommended by the system. This adaptation process is a practical application of 'dynamic updating' of preferences, illustrating the system's ability to evolve and align with real-time changes in user preferences. The study was structured into three sequential phases: a pre-experiment questionnaire, the main experiment, and a post-experiment questionnaire complemented by an interview.

Pre-experiment Questionnaire. Before delving into the main experiment, the session began with an interactive Q&A segment managed by the organizer. Participants were tasked with filling out an initial questionnaire, encompassing: (1) essential personal and demographic data; and (2) a self-reported preference after viewing a sample video. This video depicted a driver’s first-person perspective under various road environments, such as routes characterized by trees, bushes, fine stones, rough stones, and noise. These preferences were expected to remain consistent throughout the duration of the study, serving as a benchmark against which any shifts in preferences could be measured and analyzed.

Main Experiment. Within the UE simulator, we designed three tailored maps, with each map comprising four distinct driving routes similar to Fig. 1-(a). For each of these maps, participants underwent a structured set of tasks: (1) Viewing the Algorithm-Recommended Route: Initiated by the organizer, participants watched a screen-shared video, representing an autonomous-driven journey along the route recommended by the system. Recommendations for subsequent maps were modified, taking into account feedback from the previous map’s experience. (2) Complaints and Route Scoring: After viewing the suggested route, participants were given the opportunity to voice complaints of general discontent and specific discontent. Available options encompassed: dislike of the road, excessive noise, excessive bumpiness, excessive distance, or no complaints. We focused on these complaint types as they precisely capture users’ dissatisfaction stemming from system behavior, providing a clearer insight into explicit user intentions over preference-based grievances. Additionally, based on their experience with the route, they were requested to quantify their experience, rating their satisfaction on a standardized scale from very dissatisfied to very satisfied. (3) Viewing and Scoring Alternative Routes: Subsequently, participants were shown videos of the other three available routes within that particular map. After each video, they were instructed to provide a rating, reflecting their satisfaction with the route, using the same scale.

Post-experiment Questionnaire and Interview. Upon completion of the primary tasks, participants were guided to a concluding assessment phase, which included a questionnaire and supplementary interviews. This assessment encapsulated: (1) a refreshed self-report on their preferences after interacting with the simulation; (2) an opportunity to provide feedback or complaints about the route calculated based on their initial self-reported preferences in the third map, especially if it differed from the route suggested by the system after updating preferences with user feedback; (3) open-ended questions designed to delve deeper into their experiences and collect personalized and detailed feedback; and (4) a targeted interview for select participants, specifically those who demonstrated significant variances between their pre and post-experiment self-reported preferences were engaged in individual interviews to better understand their perspectives and rationale.

4.2 Participants

We recruited 20 participants, comprising 12 males and 8 females, from a university campus. Each participant was presented with high-definition (1080p) video portrayals of the autonomous driving experience, generated from the UE simulator (see Fig. 1-(b)). All participants possessed normal hearing and either normal vision or vision corrected to normal standards. Participant ages spanned from 19 to 49 years, with a mean age of 27.5 and a standard deviation of 4.1. Participation was on a voluntary basis, and no financial incentives were offered.

4.3 Metrics and Data Processing

For clarity, we delineate the metrics associated with RQ1 and RQ2, along with the associated data collection and processing methods.

RQ1: Addressing RQ1, our primary metric centers on evaluating the preference similarity. This involves a detailed comparison between the user preferences dynamically refined by our algorithm and the users’ self-reported preferences as gathered from the post-experiment questionnaire. Initially, each participant’s preference is set to a default vector of ⟨0.333, 0.333, 0.334⟩. This preference vector undergoes sequential adaptations after each map experiment, contingent upon the first, second, and third instances of user complaints, if any. To quantitatively assess these preference similarities, we represent the preferences as three-dimensional vectors and employ Cosine similarity as our metric for evaluation.
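Concretely, the similarity between an algorithm-updated preference vector and a self-reported one can be computed as below; the self-reported vector in the example is hypothetical.

```python
import math

def cosine_similarity(p, q):
    """Cosine similarity between two three-dimensional preference vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

default = (0.333, 0.333, 0.334)   # initial preference vector used in the study
self_reported = (0.2, 0.3, 0.5)   # hypothetical self-reported preference, for illustration
print(round(cosine_similarity(default, self_reported), 3))
```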

RQ2: For RQ2, we considered three key metrics: (1) Frequency of complaints: The recurrence of user complaints serves as an insightful indicator of user satisfaction with the system’s behaviors. We counted the number of user complaints about the recommended routes across the three maps; (2) Self-assessed satisfaction: To glean insights into user satisfaction, participants were presented with a Likert scale query in the post-experiment questionnaire: "Do you agree: The system-recommended routes increasingly satisfy you?" The response spectrum was articulated on a five-point Likert scale, ranging from "Strongly Disagree" to "Strongly Agree;" (3) Scoring and ranking of recommended routes: As part of the experimental procedure, participants were prompted to score each route on a Likert scale from 1 to 5 post-viewing, with the scale defined as follows: 1 - Very Dissatisfied, 2 - Dissatisfied, 3 - Neutral, 4 - Satisfied, 5 - Very Satisfied. This facilitated the extraction of both the rank (relative to the other three routes within the same map) and numerical evaluation for the recommended routes. To ensure comparability across diverse scenarios, user satisfaction values for each map were subjected to min-max normalization.
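For metric (3), a plain min-max rescaling of the 1 to 5 scores can be used. The sketch below normalizes one participant's four route scores within a single map, which is one reading of the per-map normalization described above; the score values are invented for illustration.

```python
def min_max_normalize(scores):
    """Rescale Likert scores to [0, 1] within one map."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                          # guard: all scores identical
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(min_max_normalize([2, 3, 5, 4]))    # -> [0.0, 0.333..., 1.0, 0.666...]
```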


5 RESULTS AND ANALYSIS

Figure 2: User Study Results.

Fig. 2.(a) and (b) trace the temporal evolution of preference similarity, with Fig. 2.(a) delineating the distribution of these similarities. The median values for these similarities are 0.926 (IQR: 0.895 - 0.949), 0.982 (IQR: 0.927 - 0.993), 0.988 (IQR: 0.970 - 0.995), and 0.990 (IQR: 0.970 - 0.995). This pattern signifies a notable trend: the initial balanced setting of preference weights contributed to the initial similarity score (0.926), as it closely aligns with the general tendency of users to distribute their preferences evenly across multiple attributes. As user complaints accumulate, there is a progressive alignment between the algorithmically updated preferences and the users' self-reported preferences, pushing the cosine similarity metric closer to 1. It is also worth highlighting the presence of three outliers within this data. Remarkably, in two of these outlier instances, the preferences updated by the algorithm yield the same route recommendations as the users' self-reported preferences. This observation underscores the nuanced efficacy of the algorithm in adapting to user feedback and aligning with user preferences over time.

In our analysis of user complaints across the three maps, we observed a decreasing trend: 20 complaints for Map 1, 11 for Map 2, and only 2 for Map 3. This reduction in complaints can be attributed to the preference adaptation. After addressing the initial set of complaints from Map 1, the algorithm adjusted the recommended routes, which resulted in a significant drop in user dissatisfaction, as evidenced by the decrease in complaints from 20 to 11. Further refinements based on subsequent feedback led to an even more pronounced decline in complaints, down to just 2 in Map 3. Regarding participant satisfaction with the system recommendation, the response was overwhelmingly positive. Out of the participants, 12 expressed “Strongly Agree”, and 8 chose “Agree” when asked “if the system-recommended routes increasingly met their expectations”. Notably, there were no neutral or negative responses (“Neutral”, “Disagree”, or “Strongly Disagree”).

Fig. 2.(c) and (d) offer insights into the satisfaction ranking and scores of the recommended routes. As depicted in Fig. 2.(c), for map 1, based on the algorithm's initial default preferences, the route rankings were as follows: 4 participants ranked the recommended route first, 4 second, 10 third, and 2 fourth. However, subsequent maps showed significant improvements; in map 2, 12 participants ranked the recommended route first and 8 second. In map 3, 19 participants ranked the recommended route first and 1 second. This progression highlights the effectiveness of preference adaptation, demonstrating that the system-recommended routes increasingly aligned with users' best options. Delving into the scores shown in Fig. 2.(d), a notable increase in median satisfaction scores is observed across the maps: from 0.5 (IQR: 0.35 - 0.5) in map 1, the score rose to 0.75 (IQR: 0.5 - 0.75) for map 2, and further to 0.75 (IQR: 0.75 - 1) for map 3. This trend underscores that, as accumulated complaints are incorporated, users' scores for the system's recommended routes progressively improve.

The results of our study underscore the importance of considering the dynamic nature of user preferences in the development of user-centric systems. Specifically, they validate the effectiveness of our GA-based preference adaptation, which adeptly integrates a feedback mechanism via user complaints. While our approach uses self-reported preferences as a comparative benchmark and the results indicate a gradual convergence of the algorithmically updated preferences towards these self-reported preferences, it is crucial to note that relying solely on direct user input for setting preferences may not be entirely effective, especially in real-world scenarios. This is supported by the following considerations:

Inaccuracy and Contradictions in Self-Reported Preferences: Our analysis revealed that users might not always provide an accurate reflection of their true preferences, which can sometimes be confusing or contradictory. This was apparent when we examined the route choices in Map 3. We compared routes based on two different sources: participants’ self-reported preferences and those generated by our algorithm. Remarkably, in 85% (17 out of 20) of cases, the routes suggested by both methods were consistent. However, among the remaining three participants, one found equal satisfaction with both routes, indicating potential inconsistencies in their self-reported preferences. The second participant showed a clear preference for the algorithm-suggested route, while the third participant voiced complaints about the self-reported preference route. These observations align with findings that humans often struggle to articulate their objectives accurately [2, 9, 10]. As such, the gradual reduction in user complaints over time becomes a more reliable and complementary indicator of the system’s effectiveness in aligning with users’ evolving preferences.

Challenges in Timely Setting of User Preferences: Determining the optimal moments to solicit updated preferences from users poses a significant challenge, given the fluidity of user preferences. Our pre- and post-experiment analysis of self-reported preferences revealed that only 35% (7 out of 20) of participants retained identical preferences (as indicated by a cosine similarity of 1). Notably, 25% (5 out of 20) exhibited a similarity score of less than 0.9, with one participant's score falling below 0.5. A post-experiment interview with this participant highlighted the reasons behind this significant shift. Initially, the participant was averse to longer routes. However, after viewing videos during the experiment, which depicted high levels of noise and bumpiness, the participant's preference changed due to the discomfort these factors caused.


6 CONCLUSION AND FUTURE WORK

To tackle the complexity and dynamism of user preferences, our study builds on a framework underpinned by the "human-on-the-loop" concept. This framework enables the system to adapt to human preferences using a genetic algorithm driven by user dissatisfaction, thereby aligning system behavior more closely with human expectations. While the preference adaptation in our user study of an autonomous driving scenario (transitioning from initial settings toward human self-reported preferences) is reactive in nature, the results demonstrate its quantitative effectiveness in aligning preferences and its qualitative enhancement of user satisfaction.

In our study, conducted within the Unreal Engine environment, we effectively simulated three key quality attributes relevant to route choice scenarios. The aim of selecting these particular attributes was to ensure that our study results are coherent and rooted in the tangible experiences of participants. Although this was instrumental in achieving our immediate research goals, we acknowledge the necessity of including a broader range of attributes in future studies to gain a more comprehensive understanding of user preferences in autonomous driving contexts. In addition, a notable limitation concerns the perception and impact of Efficiency. The duration of each route in the simulation ranged from 40 to 80 seconds. This narrow time range, coupled with the absence of significant consequences for longer travel times, may have inadequately captured the practical implications of efficiency in route selection. Consequently, this limitation might have influenced the participants' ability to discern and prioritize efficiency as a critical attribute, posing a threat to the validity of our findings regarding efficiency preferences.

To optimize our framework and further align it with user needs, future enhancements could focus on implementing more refined feedback mechanisms. A key area of improvement involves developing a system capable of discerning varying degrees of user concerns. For example, differentiating between "slightly noisy" and "extremely noisy" environments would offer a more detailed understanding of user preferences, thereby enabling a more nuanced response to their specific complaints. Additionally, the integration of large language models holds promise for augmenting our framework’s capacity to parse and analyze user feedback. These models are particularly adept at processing natural language, which ranges from specific complaints to general comments. Their inclusion would enable our system to more effectively adapt the fitness function within the genetic algorithm, ensuring that it responds more precisely to the issues and preferences identified in user feedback.


ACKNOWLEDGMENTS

This work is supported by the Fundamental Research Funds for the Central Universities (No. SWU-KQ23005). We are grateful to Chenyu Hu, Enhong Mou, Yaozhong Zhang, Qinxin Chen, and others for participating in the user study.

A APPENDIX

A.1 Complaint Encoding into Fitness Function

Figure 3: An Illustration of Crossover and Mutation Operations in GA-based Preference Update.

We leverage genetic algorithms (GA) to facilitate the preference update process. In this GA-based approach: 1) each preference configuration is treated as an individual within the GA (two representative individuals can be written as \(p_1 = \langle w_1, w_2, \ldots, w_n \rangle\) and \(p_2 = \langle w_1^{\prime}, w_2^{\prime}, \ldots, w_n^{\prime} \rangle\)); 2) every weight, denoted by \(w_i\), acts as a gene within the corresponding individual; 3) a group of these individuals forms what is termed a "population". By adopting this structure, the GA allows for a systematic and evolutionary approach to updating and optimizing preference vectors based on a given set of criteria.

Within the GA, the fitness of each individual in the population is assessed in every iteration. The primary objective of updating preferences in our study is to increase user satisfaction and minimize user complaints, leading us to embed user complaints into the fitness function.

The function is formulated as \(f(p) = \lambda_1 f_1 + \lambda_2 f_2 + \lambda_3 f_3\), where \(\lambda_1, \lambda_2, \lambda_3\) are positive coefficient parameters and \(p = \langle w_1, \ldots, w_n \rangle\) is the vector of preferences. The functions \(f_1\), \(f_2\), and \(f_3\) quantitatively characterize three distinct aspects: (1) Preference Divergence: minimizing the discrepancy in preferences pre- and post-update while maintaining user satisfaction. The basic idea behind this objective is that each individual changes their preference in small steps within a certain time frame [18]. (2) Complaint Avoidance: assessing whether the optimal path, derived from the given preference, traverses any complained-about states (ideally it should not). (3) Implicit Constraints: addressing constraints tied to the four complaint categories (General Discontent, Specific Discontent, General Preference Discontent, Specific Preference Discontent), such as requiring an attribute weight to increase or one attribute's weight to overtake another's. The meaning of these four complaint categories is as follows: 1. General Discontent, e.g., "I don't like this road." 2. Specific Discontent, e.g., "the road is too bumpy" or "it's too noisy." 3. General Preference Discontent, e.g., "road condition takes precedence over trip efficiency." 4. Specific Preference Discontent, e.g., "the weight for road condition should be 0.8!" It is evident that both Preference Divergence and Complaint Avoidance are universally relevant across complaint categories, so \(f_1\) and \(f_2\) remain consistent throughout. However, given the inherent uniqueness of each complaint type, a specific \(f_3\) must be designed for each.

Function \(f_1\) represents the negative cosine similarity between two preference configurations, p and p′. \(f_2\) accounts for user complaint avoidance, with \(\rho_1\) acting as a negative penalty parameter, τ being the optimal path determined by \(\text{PATH}\_\text{FINDING}(map, p)\), and \(\mathbb{C}\) denoting the set of complained-about states.
(1) \(f_1 = -{\rm\small COSSIM}(p, p^{\prime}) = -\dfrac{p \cdot p^{\prime}}{\Vert p\Vert \times \Vert p^{\prime}\Vert} = -\dfrac{\sum\limits_{i=1}^{n} w_i \times w_i^{\prime}}{\sqrt{\sum\limits_{i=1}^{n} w_i^2} \times \sqrt{\sum\limits_{i=1}^{n} w_i^{\prime 2}}}\)
(2) \(f_2 = \left\lbrace \begin{array}{ll} \rho_1, & \text{if } \mathbb{C} \cap \tau \ne \emptyset \\ 0, & \text{otherwise} \end{array}\right.\)
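A Python sketch of these two components follows. The penalty value ρ1 is an assumption of ours, and the path states are expected to come from the path-finding routine, which is not shown here.

```python
import math

RHO_1 = -100.0   # assumed negative penalty ρ1 for crossing a complained-about state

def f1_preference_divergence(p, p_prev):
    """Eq. (1): negative cosine similarity between the candidate and previous preferences."""
    dot = sum(a * b for a, b in zip(p, p_prev))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in p_prev))
    return -dot / norm

def f2_complaint_avoidance(path_states, complained_states):
    """Eq. (2): penalty ρ1 if the optimal path τ intersects the complained-about set C."""
    return RHO_1 if set(path_states) & set(complained_states) else 0.0
```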

For the design of \(f_3\), various considerations emerge depending on the complaint category. For General Discontent complaints, like "dislike of the road", \(f_3\) is simply set to the constant value 0, given the absence of supplemental constraints. For Specific Discontent, we order the complaints by attribute as ⟨ Road Condition, Efficiency, Aesthetic Appeal ⟩. For instance, the complaint "excessive noise!" maps to attribute id = 3, "excessive bumpiness!" to id = 1, and "excessive distance!" to id = 2. The structure of \(f_3\) in this category leverages a negative penalty parameter, \(\rho_2 < 0\), coupled with a fuzzy-logic representation. This induces a smooth transition between the conditions \(w_{id}\le w^{\prime}_{id}\) and \(w_{id}\gt w^{\prime}_{id}\). The bias b is derived from \(b=\min(\phi, 1-w^{\prime}_{id})\), where the hyperparameter ϕ represents the size of this transition, usually lying between 0.1 and 0.2. A detailed representation is given in Eq. (3).
(3) \(f_3 = \left\lbrace \begin{array}{ll} \rho_2, & \text{if } w_{id} \in [0, w^{\prime}_{id}] \\ \rho_2\big(\frac{w^{\prime}_{id} - w_{id}}{b} + 1\big), & \text{if } w_{id} \in (w^{\prime}_{id}, w^{\prime}_{id} + b] \\ 0, & \text{if } w_{id} \in (w^{\prime}_{id} + b, 1] \end{array}\right.\)
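A sketch of Eq. (3) follows; the values of ρ2 and ϕ are our own illustrative choices within the ranges stated above.

```python
RHO_2 = -10.0    # assumed negative penalty ρ2
PHI = 0.15       # transition-size hyperparameter ϕ, typically between 0.1 and 0.2

def f3_specific_discontent(w_id: float, w_id_prev: float) -> float:
    """Eq. (3): fuzzy penalty pushing the complained attribute's weight above its previous value."""
    b = min(PHI, 1.0 - w_id_prev)
    if w_id <= w_id_prev:                    # w_id in [0, w'_id]
        return RHO_2
    if w_id <= w_id_prev + b:                # transition zone (w'_id, w'_id + b]
        return RHO_2 * ((w_id_prev - w_id) / b + 1.0)
    return 0.0                               # w_id in (w'_id + b, 1]
```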

When addressing General Preference Discontent, two pivotal attributes, id1 and id2, emerge from the complaint, with the former taking precedence over the latter. This is captured in Eq. (4). Lastly, for Specific Preference Discontent, the complaint pinpoints a particular attribute, id, and stipulates an ideal preference value, \(w_{opt}\); the corresponding \(f_3\) penalizes deviation from this value, as shown in Eq. (5).
(4) \(f_3 = \left\lbrace \begin{array}{ll} \rho_2, & \text{if } w_{id1} \lt w_{id2} \\ 0, & \text{if } w_{id1} \ge w_{id2} \end{array}\right.\)
(5) \(f_3 = \left\lbrace \begin{array}{ll} \rho_2, & \text{if } w_{id} \ne w_{opt} \\ 0, & \text{if } w_{id} = w_{opt} \end{array}\right.\)
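The two remaining cases can be sketched directly from Eqs. (4) and (5). Here ρ2 is the same assumed penalty as in the previous sketch, and a small tolerance (our addition) replaces exact floating-point equality in Eq. (5).

```python
RHO_2 = -10.0    # assumed negative penalty ρ2, as above

def f3_general_preference(w_id1: float, w_id2: float) -> float:
    """Eq. (4): penalize configurations where the dominant attribute id1 is weighted below id2."""
    return RHO_2 if w_id1 < w_id2 else 0.0

def f3_specific_preference(w_id: float, w_opt: float, tol: float = 1e-6) -> float:
    """Eq. (5): penalize deviation of the attribute's weight from the stipulated value w_opt."""
    return 0.0 if abs(w_id - w_opt) <= tol else RHO_2
```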

The condition \(\rho_1 \ll \rho_2 < 0\) is essential because complaint avoidance conveys critical user feedback, directing the system to steer clear of complained-about states. This feedback must always be prioritized, while the other considerations arising from implicit constraints and preference divergence can be treated with more discretion.

A.2 GA-Based Preference Adaptation

The GA-based preference adaptation starts with a random population initialization and then progresses through successive generations using selection, crossover, and mutation operations. The process continues until a termination criterion is met, either by exceeding a set number of iterations or when the population’s best fitness remains unchanged for a certain period.
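A compact sketch of this loop is shown below. The population size, generation cap, and stall limit are illustrative assumptions, and the select, crossover, and mutate callables correspond to the operator sketches in the remainder of this appendix.

```python
import random

def random_individual(n: int) -> list:
    """Random preference vector: non-negative weights that sum to 1."""
    raw = [random.random() for _ in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def adapt_preferences(n_attrs, fitness, select, crossover, mutate,
                      pop_size=50, max_gen=200, stall_limit=20):
    """Evolve preference vectors until max_gen generations, or stall_limit generations without improvement."""
    population = [random_individual(n_attrs) for _ in range(pop_size)]
    best, best_fit, stall = None, float("-inf"), 0
    for _ in range(max_gen):
        fits = [fitness(p) for p in population]
        gen_best = max(fits)
        if gen_best > best_fit:
            best_fit, best, stall = gen_best, population[fits.index(gen_best)], 0
        else:
            stall += 1
            if stall >= stall_limit:                       # best fitness unchanged for too long
                break
        parents = select(population, fitness)              # roulette-wheel selection, Eq. (6)
        population = []
        for p1, p2 in zip(parents[::2], parents[1::2]):    # pair parents two by two
            c1, c2 = crossover(p1, p2)
            population += [mutate(c1), mutate(c2)]
    return best
```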

Selection operation: After calculating the fitness of each individual in the population, individuals are selected (with potential repetition) according to the following probability:
(6) \(P(p) = \dfrac{f(p) - f_{min}}{\sum_{p^{\prime} \in Pop}\big(f(p^{\prime}) - f_{min}\big)}\)
where \(f_{min} = \min_{p^{\prime} \in Pop} f(p^{\prime})\), Pop is the population, and the operation can be implemented with roulette-wheel selection.
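A sketch of this selection step using the standard library is shown below; random.choices normalizes the weights internally.

```python
import random

def select(population, fitness):
    """Roulette-wheel selection with weights f(p) - f_min, following Eq. (6)."""
    fits = [fitness(p) for p in population]
    f_min = min(fits)
    weights = [f - f_min for f in fits]
    if sum(weights) == 0:                 # all individuals equally fit: sample uniformly
        return [random.choice(population) for _ in population]
    return random.choices(population, weights=weights, k=len(population))
```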

Crossover operation: The selected individuals constitute a new population. Within this new population, individuals are paired two by two for crossover operations. The process is depicted in Fig. 3: given two individuals \(p_1\) and \(p_2\), a crossover point k ∈ {1, 2, ..., n} is selected randomly. Then, the genes from 1 to k of individual \(p_2\) (i.e., \(w^{\prime}_1\) to \(w^{\prime}_k\)) and the genes from k + 1 to n of individual \(p_1\) (i.e., \(w_{k+1}\) to \(w_n\)) are combined to form a new individual \(p^{\prime}_1\). Likewise, the genes from 1 to k of \(p_1\) (i.e., \(w_1\) to \(w_k\)) and the genes from k + 1 to n of \(p_2\) (i.e., \(w^{\prime}_{k+1}\) to \(w^{\prime}_n\)) are combined to form a new individual \(p^{\prime}_2\). To ensure the legitimacy of \(p^{\prime}_1\) and \(p^{\prime}_2\), if a crossover results in the condition \(\sum_{i=1}^k w_i^{\prime} + \sum_{i=k+1}^n w_i \gt 1\), then \(p_1^{\prime}\) is directly set to \(p_1\) and \(p_2^{\prime}\) is directly set to \(p_2\).
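The sketch below implements one-point crossover. Unlike the strict check described above, it takes the normalization option mentioned in Section 3, since two parents whose weights each sum to one almost always produce one child whose weights sum to more than one.

```python
import random

def _normalize(p):
    """Rescale weights so they sum to 1 (the 'normalize' option from Section 3)."""
    total = sum(p)
    return [w / total for w in p]

def crossover(p1, p2):
    """One-point crossover: swap the first k genes of the two parents, then normalize both offspring."""
    n = len(p1)
    k = random.randint(1, n - 1)     # crossover point; the degenerate full swap (k = n) is skipped
    c1 = p2[:k] + p1[k:]             # w'_1..w'_k followed by w_{k+1}..w_n
    c2 = p1[:k] + p2[k:]             # w_1..w_k followed by w'_{k+1}..w'_n
    return _normalize(c1), _normalize(c2)
```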

Mutation operation: After obtaining the new individuals from the crossover, a mutation operation is performed. The procedure is illustrated in Fig. 3: for each individual, a mutation point i ∈ {1, 2, ..., n} and a mutation step δ are selected randomly. Subsequently, the individual's gene \(w_i\) is modified to \(w_i + \delta\), while every other gene \(w_k\) (k ≠ i) is modified to \(w_k - \frac{\delta}{n-1}\). Let \(w_k\) denote the weights before mutation and \(w^{\prime}_k\) the weights after mutation. If the mutation results in \(\exists k \in \lbrace 1, 2, \ldots, n\rbrace.\, w^{\prime}_k \lt 0\) or \(\exists k \in \lbrace 1, 2, \ldots, n\rbrace.\, w^{\prime}_k \gt 1\), then δ is trimmed as \(\delta = \min \big(\delta,\ \min_{k \in \lbrace 1, \ldots, n\rbrace} |w_k - 0|,\ \min_{k \in \lbrace 1, \ldots, n\rbrace} |w_k - 1| \big)\).
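A sketch of the mutation operator is given below; the upper bound on the step δ is an assumption of ours.

```python
import random

MAX_STEP = 0.1   # assumed upper bound for the mutation step δ

def mutate(p):
    """Add δ to one weight and subtract δ/(n-1) from each other weight, trimming δ to keep weights in [0, 1]."""
    n = len(p)
    i = random.randrange(n)                      # mutation point
    delta = random.uniform(0.0, MAX_STEP)        # mutation step (assumed drawn uniformly)
    # Trim δ as described above so that no weight leaves [0, 1] after the update.
    delta = min(delta,
                min(abs(w) for w in p),          # distance of the closest weight to 0
                min(abs(w - 1.0) for w in p))    # distance of the closest weight to 1
    return [w + delta if k == i else w - delta / (n - 1) for k, w in enumerate(p)]
```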

Footnotes


  1. Autonomous driving scenarios span a broad spectrum of attributes, including safety, energy consumption, interaction with other vehicles, and more. However, to illustrate the framework and the subsequent user study effectively, we narrowed our focus to three attributes. This decision was guided by two key factors. First, these attributes are directly perceptible: in user studies, participants can quickly discern the implications of travel time (Efficiency), visual surroundings (Aesthetic Appeal), and ride comfort (Road Condition). Second, they are easily simulated in Unreal Engine and quantifiable, allowing clear representation and measurement of user experience in our study. While crucial, attributes like energy consumption are not as immediately noticeable in short user studies; in contrast, the impact of picturesque routes or the feel of a bumpy road is immediately evident.

Supplemental Material

A demo video and a talk video (MP4) accompany this paper.

References

  1. Ryotaro Abe, Jinyu Cai, Tianchen Wang, Jialong Li, Shinichi Honiden, and Kenji Tei. 2024. Towards Enhancing Driver's Perceived Safety in Autonomous Driving: A Shield-based Approach. In Intelligent Systems Design and Applications. Springer.
  2. Betty H. C. Cheng, Rogério de Lemos, Holger Giese, Paola Inverardi, and Jeff Magee (Eds.). 2009. Software Engineering for Self-Adaptive Systems [outcome of a Dagstuhl Seminar]. Lecture Notes in Computer Science, Vol. 5525. Springer.
  3. Mengdi Chu, Keyu Zong, Xin Shu, Jiangtao Gong, Zhicong Lu, Kaimin Guo, Xinyi Dai, and Guyue Zhou. 2023. Work with AI and Work for AI: Autonomous Vehicle Safety Drivers' Lived Experiences. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023). ACM, 753:1-753:16.
  4. Nicole Dillen, Marko Ilievski, Edith Law, Lennart E. Nacke, Krzysztof Czarnecki, and Oliver Schneider. 2020. Keep Calm and Ride Along: Passenger Comfort and Anxiety as Physiological Responses to Autonomous Driving Styles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, 1-13. https://doi.org/10.1145/3313831.3376247
  5. Nicole Dillen, Marko Ilievski, Edith Law, Lennart E. Nacke, Krzysztof Czarnecki, and Oliver Schneider. 2020. Keep Calm and Ride Along: Passenger Comfort and Anxiety as Physiological Responses to Autonomous Driving Styles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, 1-13.
  6. Epic Games. 2024. Unreal Engine - The Most Powerful Real-Time 3D Creation Tool. https://www.unrealengine.com/en-US Accessed: 2024-01-23.
  7. Rúben Gouveia and Daniel A. Epstein. 2023. This Watchface Fits with my Tattoos: Investigating Customisation Needs and Preferences in Personal Tracking. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023). ACM, 327:1-327:15. https://doi.org/10.1145/3544548.3580955
  8. Samuel Gratzl, Alexander Lex, Nils Gehlenborg, Hanspeter Pfister, and Marc Streit. 2013. LineUp: Visual Analysis of Multi-Attribute Rankings. IEEE Trans. Vis. Comput. Graph. 19, 12 (2013), 2277-2286.
  9. Giannis Karagiannakis, Anna Baccaglini-Frank, and Yiannis Papadatos. 2014. Mathematical learning difficulties subtypes classification. Frontiers in Human Neuroscience 8 (2014).
  10. Jeffrey Kephart. 2021. Viewing Autonomic Computing through the Lens of Embodied Artificial Intelligence: A Self-Debate. Keynote at the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2021).
  11. Keunwoo Kim, Minjung Park, and Youn-kyung Lim. 2021. Guiding preferred driving style using voice in autonomous vehicles: An on-road wizard-of-oz study. In Designing Interactive Systems Conference 2021. 352-364.
  12. Caitlin Kuhlman, MaryAnn Van Valkenburg, Diana Doherty, Malika Nurbekova, Goutham Deva, Zarni Phyo, Elke A. Rundensteiner, and Lane Harrison. 2018. Preference-driven Interactive Ranking System for Personalized Decision Support. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018). ACM, 1931-1934.
  13. Veronika Lesch, Marius Hadry, Samuel Kounev, and Christian Krupitzer. 2021. Utility-based Vehicle Routing Integrating User Preferences. In 19th IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops 2021). IEEE, 263-268.
  14. Jialong Li, Zhenyu Mao, Zhen Cao, Kenji Tei, and Shinichi Honiden. 2021. Self-adaptive Hydroponics Care System for Human-hydroponics Coexistence. In 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech). 204-206.
  15. Nianyu Li, Mingyue Zhang, Jialong Li, Eunsuk Kang, and Kenji Tei. 2023. Preference Adaptation: user satisfaction is all you need!. In 18th IEEE/ACM Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2023). IEEE, 133-144.
  16. Jiali Ling, Jialong Li, Kenji Tei, and Shinichi Honiden. 2021. Towards Personalized Autonomous Driving: An Emotion Preference Style Adaptation Framework. In 2021 IEEE International Conference on Agents (ICA). 47-52. https://doi.org/10.1109/ICA54137.2021.00015
  17. Stephan Pajer, Marc Streit, Thomas Torsney-Weir, Florian Spechtenhauser, Torsten Möller, and Harald Piringer. 2017. WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making. IEEE Trans. Vis. Comput. Graph. 23, 1 (2017), 611-620. https://doi.org/10.1109/TVCG.2016.2598589
  18. Menghai Pan, Weixiao Huang, Yanhua Li, Xun Zhou, Zhenming Liu, Rui Song, Hui Lu, Zhihong Tian, and Jun Luo. 2020. DHPA: Dynamic Human Preference Analytics Framework: A Case Study on Taxi Drivers' Learning Curve Analysis. ACM Trans. Intell. Syst. Technol. 11, 1 (2020), 8:1-8:19.
  19. So Yeon Park, Dylan James Moore, and David Sirkin. 2020. What a Driver Wants: User Preferences in Semi-Autonomous Vehicle Decision-Making. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, 1-13.
  20. Tobias Schneider, Joana Hois, Alischa Rosenstein, Sabiha Ghellal, Dimitra Theofanou-Fülbier, and Ansgar R. S. Gerlicher. 2021. ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, 161:1-161:12.
  21. Hui Song, Stephen Barrett, Aidan Clarke, and Siobhán Clarke. 2013. Self-adaptation with End-User Preferences: Using Run-Time Models and Constraint Solving. In Model-Driven Engineering Languages and Systems (MODELS 2013), Lecture Notes in Computer Science, Vol. 8107. Springer, 555-571.
  22. Craig A. N. Soules and Gregory R. Ganger. 2003. Why Can't I Find My Files? New Methods for Automating Attribute Assignment. In Proceedings of HotOS'03: 9th Workshop on Hot Topics in Operating Systems. USENIX, 115-120.
  23. Felix Tener and Joel Lanir. 2022. Driving from a Distance: Challenges and Guidelines for Autonomous Vehicle Teleoperation Interfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). ACM, 250:1-250:13.
  24. Rebekka Wohlrab, Rômulo Meira-Góes, and Michael Vierhauser. 2022. Run-Time Adaptation of Quality Attributes for Automated Planning. In International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2022). ACM/IEEE, 98-105.
  25. C. Yang and M. Mesbah. 2013. Route choice behaviour of cyclists by stated preference and revealed preference. Australasian Transport Research Forum (ATRF 2013) Proceedings.
