Computers in Human Behavior

Volume 100, November 2019, Pages 79-84

Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent

https://doi.org/10.1016/j.chb.2019.06.012

Highlights

  • People think that an AI crime predictor has significantly less autonomy than a human crime predictor.

  • Neither the type of crime predictor nor the seriousness of the crime had a significant effect on the responsibility assigned to the predictor.

  • A clear positive relationship between autonomy and responsibility was found in both human and AI crime predictor scenarios.

Abstract

This study employs an experiment to test subjects' perceptions of an artificial intelligence (AI) crime-predicting agent that produces clearly racist predictions. It used a 2 (human crime predictor/AI crime predictor) × 2 (high/low seriousness of crime) design to test the relationship between the level of autonomy and the responsibility assigned for the unjust results. The seriousness of the crime was manipulated to examine the relationship between perceived threat and trust in the authority's decisions. Participants (N = 334) responded to an online questionnaire after reading one of four scenarios with the same story depicting a crime predictor unjustly reporting a higher likelihood of subsequent crimes for a black defendant than for a white defendant who committed a similar crime. The results indicate that people think an AI crime predictor has significantly less autonomy than a human crime predictor. However, neither the identity of the crime predictor nor the seriousness of the crime had a significant effect on the level of responsibility assigned to the predictor. In addition, a clear positive relationship between autonomy and responsibility was found in both the human and the AI crime predictor scenarios. The implications of the findings for applications and theory are discussed.

Section snippets

AI and racism

Just as Weizenbaum (1976) anticipated biased algorithms, implementations of AI have produced group-based harms, including many cases in which race and gender were a factor. The issue began to gain public attention in 2016 with the Microsoft Twitter chatbot Tay, which notoriously used offensive language (Beran, 2018). More recently, Buolamwini and Gebru (2018) showed that the accuracy of facial recognition programs varies by race and gender, finding that commercial facial analysis systems misclassify darker-skinned women at far higher rates than lighter-skinned men.

The autonomy of artificial intelligence

The purpose of creating AI is to produce computer programs that function autonomously to find the best possible answers to questions (Russell & Norvig, 2010). Studies investigating reactions to the autonomy of machines have found that this perception stems from two dissimilar feelings: trustworthiness and threat. On the one hand, one study found that the autonomy of an AI agent influences the perception of the agent's trustworthiness (Lee, Kim, Lee, & Shin, 2015). On the other hand, Złotowski, Yogeeswaran, and Bartneck (2017) found that people perceive autonomous robots as more threatening than non-autonomous ones.

Attribution theory

Whatever the reason AI and computers make such unethical decisions, this study examines how people react to information that these technologies are being unfair. In particular, it investigates whether artificial intelligence, which is often perceived as an autonomous technology (Weng et al., 2001; Zgrzebnicki, 2017), is held culpable, by comparing the AI case with an identical case involving a human counterpart. To see the relationship between the level of blame and the identity of a

Seriousness of crime and acceptance of authority

Finally, crime predictors are expected to receive less blame if the defendant in a given scenario committed a more serious crime. Because the severity of a crime has been found to have a positive relationship with perceived dangerousness (Sanderson, Zanna, & Darley, 2000), crimes with more serious outcomes may induce more trust in an authority's decisions. Authoritarianism derives precisely from the relationship between the perception of threat and the acceptance of authority (Feldman & Stenner, 1997).

Computers Are Social Actors (CASA)

Computers Are Social Actors (CASA) is a theoretical framework for examining how people perceive computers. According to Nass and Moon (2000), people tend to perform regular social behaviors in their human-computer interactions, as if the computer were another person. In CASA, these social norms are applied mindlessly as a heuristic shortcut, yet they still shape our opinions of computers. Importantly, people tend to see computers as relatively autonomous, and do not focus on the programmers behind them.

Methods

In order to test the hypotheses, a 2 × 2 experiment was designed and conducted, varying both the kind of predictor (human or AI) as well as the seriousness of the crime (high or low). The dependent variables for this study are the perception of the autonomy of the crime predictor and the responsibility assigned to the predictor.
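
To illustrate how a 2 × 2 between-subjects design of this kind is typically analyzed, the sketch below runs a two-way ANOVA on simulated autonomy ratings in Python. The cell sizes, column names, and generated data are illustrative assumptions, not the authors' actual dataset or analysis code.

    # Hedged sketch: two-way ANOVA for a 2 (predictor: human/AI) x 2
    # (crime seriousness: low/high) between-subjects design.
    # All data below are simulated for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    n_per_cell = 84  # assumption: N = 334 split roughly evenly over four cells

    rows = []
    for predictor in ("human", "AI"):
        for seriousness in ("low", "high"):
            # Simulated 1-7 autonomy ratings; the AI predictor is assumed
            # to receive lower ratings, mirroring the stated hypothesis.
            mean = 5.5 if predictor == "human" else 4.5
            ratings = np.clip(rng.normal(mean, 1.2, n_per_cell), 1, 7)
            rows.append(pd.DataFrame({
                "predictor": predictor,
                "seriousness": seriousness,
                "autonomy": ratings,
            }))
    data = pd.concat(rows, ignore_index=True)

    # Two-way ANOVA with the predictor x seriousness interaction term.
    model = ols("autonomy ~ C(predictor) * C(seriousness)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))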

Results

To verify the efficacy of the manipulations, the responses to “Do you think the type of crime mentioned in the reading is a serious crime?” and “To what extent do you think the crime predictor has human characteristics (not humanistic)?” were analyzed and compared across the scenarios using t-tests. The “seriousness of crime” manipulation showed a significant difference between the low-seriousness (M = 5.03, SD = 1.33) and high-seriousness (M = 5.54, SD = 1.28) crime conditions.
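
Because the group means and standard deviations are reported, the manipulation-check t-test can be reproduced from summary statistics alone. The sketch below does so with SciPy, under the assumption (ours, not stated in the paper) that the 334 participants were split evenly between the low- and high-seriousness conditions.

    # Hedged sketch: independent-samples t-test on the "seriousness of crime"
    # manipulation check, using only the reported summary statistics.
    # The per-group sample sizes (167 each) are an assumption.
    from scipy.stats import ttest_ind_from_stats

    t_stat, p_value = ttest_ind_from_stats(
        mean1=5.03, std1=1.33, nobs1=167,  # low-seriousness condition
        mean2=5.54, std2=1.28, nobs2=167,  # high-seriousness condition
        equal_var=False,                   # Welch's t-test
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")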

Discussion

The results of this experimental study show strong correlations between autonomy and blame in both the human and the AI crime predictor cases. These outcomes support CASA theory because they illustrate that people blame AI at rates similar to those at which they blame humans for a racist decision. Because these outcomes are correlational, further experimental studies are needed to probe the causal relationship between autonomy and the level of blame in human-computer interaction.
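
The autonomy-blame association reported here is a bivariate correlation, which can be computed separately for each condition. The sketch below shows the calculation with placeholder ratings; the values and variable names are invented for illustration and do not reproduce the study's data.

    # Hedged sketch: Pearson correlation between perceived autonomy and
    # assigned responsibility, per crime predictor condition.
    # The ratings below are placeholders, not the study's data.
    import numpy as np
    from scipy.stats import pearsonr

    conditions = {
        "human": {
            "autonomy":       np.array([6, 5, 7, 4, 6, 5, 7, 3]),
            "responsibility": np.array([6, 5, 6, 4, 7, 5, 6, 3]),
        },
        "AI": {
            "autonomy":       np.array([4, 3, 5, 2, 4, 5, 3, 4]),
            "responsibility": np.array([4, 4, 5, 2, 5, 5, 3, 4]),
        },
    }

    for label, scores in conditions.items():
        r, p = pearsonr(scores["autonomy"], scores["responsibility"])
        print(f"{label}: r = {r:.2f}, p = {p:.3f}")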

References (62)

  • O. Beran

    An attitude towards an artificial soul? Responses to the “Nazi chatbot”

    Philosophical Investigations

    (2018)
  • A. Breland

    How white engineers built racist code – and why it's dangerous for black people

    (2017)
  • J. Buolamwini et al.

    Gender shades: Intersectional accuracy disparities in commercial gender classification

    (2018)

  • J. Burger

    Motivational biases in the attribution of responsibility for an accident: A meta-analysis of the defensive-attribution hypothesis

    Psychological Bulletin

    (1981)
  • D. Casacuberta et al.

    Using Dreyfus' legacy to understand justice in algorithm-based processes

    AI & Society

    (2018)
  • M. Cevik

    Will it be possible for artificial intelligence robots to acquire free will and believe in God?

    Beytulhikme - An International Journal of Philosophy

    (2017)
  • L.D. Coleman

    Inside trends and forecast for the $3.9T AI industry

    Forbes

  • N. Diakopoulos

    Algorithmic accountability

    Digital Journalism

    (2014)
  • J. Dressel et al.

    The accuracy, fairness, and limits of predicting recidivism

    Science Advances

    (2018)
  • G. Duwe et al.

    Effects of automating recidivism risk assessment on reliability, predictive validity, and return on investment

    Criminology & Public Policy

    (2017)
  • J. Eaglin

    Constructing recidivism risk

    Emory Law Journal

    (2017)
  • S. Feldman et al.

    Perceived threat and authoritarianism

    Political Psychology

    (1997)
  • S.T. Fiske et al.

    Social cognition

    (1991)
  • A.W. Flores et al.

    False positives, false negatives, and false analyses: A rejoinder to “Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks.”

    Federal Probation

    (2016)
  • D. Fox et al.

    Improving judicial administration through implementation of an automated sentencing guidelines system

    Criminal Justice Policy Review

    (2018)
  • C. Gad et al.

    A closed circuit technological vision: On Minority Report, event detection and enabling technologies

    Surveillance and Society

    (2013)
  • B. Glassner

    The culture of fear: Why Americans are afraid of the wrong things

    (1999)
  • S. Graham et al.

    An attributional analysis of punishment goals and public reactions to O. J. Simpson

    Personality and Social Psychology Bulletin

    (1997)
  • W. Hardyns et al.

    Predictive policing as a new tool for law enforcement? Recent developments and challenges

    European Journal on Criminal Policy and Research

    (2018)
  • A. Howard et al.

    The ugly truth about ourselves and our robot creations: The problem of bias and social inequity

    Science and Engineering Ethics

    (2017)
  • A. Ho et al.

    Descriptive statistics for modern test score distributions: Skewness, kurtosis, discreteness, and ceiling effects

    Educational and Psychological Measurement

    (2015)