Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent
Section snippets
AI and racism
Just as Weizenbaum (1976) anticipated biased algorithms, implementations of AI have shown group-based implications, with many cases in which race and gender have been a factor. The issue began to gain public attention with the advent of Microsoft's Twitter chatbot, Tay, which notoriously used offensive language in 2016 (Beran, 2018). More recently, Buolamwini and Gebru (2018) showed that the accuracy of facial recognition programs varies by race and gender, finding that commercial face…
The autonomy of artificial intelligence
The purpose of creating AI is to produce computer programs that function autonomously to find the best possible answers to questions (Russell & Norvig, 2010). Studies investigating reactions to the autonomy of machines have found that this perception arises from two distinct responses: trustworthiness and threat. On the one hand, one study found that the autonomy of an AI agent influences the perception of the agent's trustworthiness (Lee, Kim, Lee, & Shin, 2015). On the other hand, Złotowski, Yogeeswaran,…
Attribution theory
Regardless of why AI and computers make such unethical decisions, this study examines how people react to information that these technologies are being unfair. In particular, it investigates whether artificial intelligence, which is often perceived as an autonomous technology (Weng et al., 2001; Zgrzebnicki, 2017), is held culpable, by comparing the same case with a human counterpart. To see the relationship between the level of blame and the identity of a…
Seriousness of crime and acceptance of authority
Finally, crime predictors are expected to receive less blame when the defendant in a given scenario committed a more serious crime. Because the severity of a crime has a positive relationship with perceived dangerousness (Sanderson, Zanna, & Darley, 2000), crimes with more serious outcomes may induce more trust in decisions by an authority. Authoritarianism derives precisely from the relationship between the perception of threat and the acceptance of authority (Feldman &…
Computers Are Social Actors (CASA)
Computers Are Social Actors (CASA) is a theoretical framework for examining how people perceive computers. According to Nass and Moon (2000), people tend to perform everyday social behaviors in their human-computer interactions, as if the computer were another person. In CASA, these social norms are applied mindlessly, as a heuristic shortcut, yet they shape our opinions of computers. Importantly, people tend to see computers as relatively autonomous, and do not focus on their…
Methods
In order to test the hypotheses, a 2 × 2 experiment was designed and conducted, varying both the kind of predictor (human or AI) and the seriousness of the crime (high or low). The dependent variables for this study are the perceived autonomy of the crime predictor and the responsibility assigned to the predictor.
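To make the design concrete, the following is a minimal analysis sketch for such a 2 × 2 factorial experiment, assuming a two-way ANOVA in Python with statsmodels; the data frame, column names, and ratings are hypothetical placeholders, not the study's actual materials or pipeline.

```python
# Illustrative sketch only: two-way ANOVA for a 2 x 2 design
# (predictor type x crime seriousness). All data below are
# hypothetical placeholders, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical blame ratings on a 7-point scale, balanced
# across the four experimental cells.
df = pd.DataFrame({
    "predictor":   ["human", "human", "AI", "AI"] * 10,
    "seriousness": ["low", "high"] * 20,
    "blame":       [4, 5, 4, 6, 3, 5, 4, 6, 4, 5] * 4,
})

# Main effects of each factor plus their interaction.
model = smf.ols("blame ~ C(predictor) * C(seriousness)", data=df).fit()
print(anova_lm(model, typ=2))
```

The interaction term tests whether the effect of predictor type on blame depends on crime seriousness, which is the kind of question a 2 × 2 design is built to answer.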
Results
To verify the efficacy of the manipulations, responses to "Do you think the type of crime mentioned in the reading is a serious crime?" and "To what extent do you think the crime predictor has human characteristics (not humanistic)?" were analyzed and compared across scenarios using t-tests. The "seriousness of crime" manipulation showed a significant difference between the low-seriousness (M = 5.03, SD = 1.33) and high-seriousness (M = 5.54, SD = 1.28) crime…
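For readers who want to check figures like these, the reported means and standard deviations are enough to rerun an independent-samples t-test. The sketch below uses scipy's ttest_ind_from_stats; the per-condition sample sizes are assumptions, since the excerpt does not report n.

```python
# Illustrative sketch: independent-samples t-test from the reported
# summary statistics (M = 5.03, SD = 1.33 vs. M = 5.54, SD = 1.28).
# The group sizes are assumed placeholders; the excerpt does not
# report the actual n per condition.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=5.03, std1=1.33, nobs1=100,   # low-seriousness condition (n assumed)
    mean2=5.54, std2=1.28, nobs2=100,   # high-seriousness condition (n assumed)
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

With the assumed n = 100 per condition, the difference of 0.51 scale points comes out significant, consistent with the manipulation check reported above; the exact t and p depend on the true sample sizes.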
Discussion
The results of this experimental study show strong correlations between autonomy and blame in both the human and AI crime predictor cases. These outcomes support CASA theory because they illustrate that people blame AI at similar rates as they blame humans for a racist decision. Because these outcomes are correlational, further studies using experiments to investigate causal relationships between autonomy and the level of blame in human-computer interaction…
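As a minimal sketch of the kind of correlational analysis described here, a Pearson correlation between perceived autonomy and assigned blame could be computed as follows; the variable names and ratings are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch: Pearson correlation between perceived autonomy
# and assigned blame. The ratings below are hypothetical placeholders.
from scipy.stats import pearsonr

autonomy = [3, 5, 4, 6, 2, 7, 5, 4, 6, 3]  # perceived autonomy ratings
blame    = [2, 5, 4, 7, 2, 6, 5, 3, 6, 4]  # blame ratings

r, p = pearsonr(autonomy, blame)
print(f"r = {r:.2f}, p = {p:.4f}")
```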
References (62)
- et al. Impertinent mobiles: Effects of politeness and impoliteness in human-smartphone interaction. Computers in Human Behavior (2019).
- et al. Evaluations of an artificial intelligence instructor's voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior (2019).
- Public's responses to an oil spill accident: A test of the attribution theory and situational crisis communication theory. Public Relations Review (2009).
- et al. It's OK if "my brain made me do it": People's intuitions about free will and neuroscientific prediction. Cognition (2014).
- et al. Identification, situational constraint, and social cognition: Studies in the attribution of moral responsibility. Cognition: International Journal of Cognitive Science (2006).
- et al. Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies (2017).
- et al. Predicting behavior. IEEE Intelligent Systems (2015).
- et al. Machine bias. ProPublica.
- et al. Politics of prediction: Security and the time/space of governmentality in the age of big data. European Journal of Social Theory (2017).
- The unfairness of risk-based possession offences. Criminal Law and Philosophy (2011).
- An attitude towards an artificial soul? Responses to the "Nazi chatbot". Philosophical Investigations.
- How white engineers built racist code – and why it's dangerous for black people.
- Gender shades: Intersectional accuracy disparities in commercial gender classification.
- Motivational biases in the attribution of responsibility for an accident: A meta-analysis of the defensive-attribution hypothesis. Psychological Bulletin.
- Using Dreyfus' legacy to understand justice in algorithm-based processes. AI & Society.
- Will it be possible for artificial intelligence robots to acquire free will and believe in God? Beytulhikme: An International Journal of Philosophy.
- Inside trends and forecast for the $3.9T AI industry. Forbes.
- Algorithmic accountability. Digital Journalism.
- The accuracy, fairness, and limits of predicting recidivism. Science Advances.
- Effects of automating recidivism risk assessment on reliability, predictive validity, and return on investment. Criminology & Public Policy.
- Constructing recidivism risk. Emory Law Journal.
- Perceived threat and authoritarianism. Political Psychology.
- Social cognition.
- False positives, false negatives, and false analyses: A rejoinder to "Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks." Federal Probation.
- Improving judicial administration through implementation of an automated sentencing guidelines system. Criminal Justice Policy Review.
- A closed circuit technological vision: On Minority Report, event detection and enabling technologies. Surveillance and Society.
- The culture of fear: Why Americans are afraid of the wrong things.
- An attributional analysis of punishment goals and public reactions to O. J. Simpson. Personality and Social Psychology Bulletin.
- Predictive policing as a new tool for law enforcement? Recent developments and challenges. European Journal on Criminal Policy and Research.
- The ugly truth about ourselves and our robot creations: The problem of bias and social inequity. Science and Engineering Ethics.
- Descriptive statistics for modern test score distributions: Skewness, kurtosis, discreteness, and ceiling effects. Educational and Psychological Measurement.
Cited by (31)
- The blame shift: Robot service failures hold service firms more accountable. Journal of Business Research (2024).
- Should the chatbot "save itself" or "be helped by others"? The influence of service recovery types on consumer perceptions of recovery satisfaction. Electronic Commerce Research and Applications (2022). Citation excerpt: "For example, the world's first AI hotel, the Henn na Hotel in Japan, has received a massive number of consumer complaints because the facility's AI and robotic systems were not able to handle service failures independently, resulting in a great deal of lost business and a negative reputation among consumers (Gale and Mochizuki, 2019). Outcomes of this nature should be avoided to prevent a series of serious consequences, such as customers canceling their services or abandoning the usage of certain functions (Kuang et al., 2022), increasing dissatisfaction with the platform (Hong and Williams, 2019), or negative effects on reputation through word-of-mouth (Crisafulli and Singh, 2017), and so forth, caused by service failures of computational service agents. Moreover, due to practical considerations such as cost and human labor requirements, chatbots must independently carry out proper self-recovery operations in a timely manner."
- Effects of different service failure types and recovery strategies on the consumer response mechanism of chatbots. Technology in Society (2022). Citation excerpt: "A chatbot is a computer program that can fulfill user requests through text conversations with customers [10]. Because they can mimic human behavior in a conversation, consumers may assume that robots can perform tasks similar to humans [26]. However, robots are not always perfect [1], and chatbot service failure is common [6]."
- The role of inference in AI: Start S.M.A.L.L. with mindful modeling. AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI (2022).
- AI, you can drive my car: How we evaluate human drivers vs. self-driving cars. Computers in Human Behavior (2021). Citation excerpt: "When behavior performed by AI appears similar to human behavior, it is likely that people apply their schemata and evaluate AI similarly to humans. For instance, people attribute responsibility to AI agents when they engage in ethical violations as they would to human violators of the same behavior (Hong & Williams, 2019; Shank & Desanti, 2018). A similar argument was made by Computers-Are-Social-Actors (CASA)."