1 Introduction

In 2021, Alphabet CEO Sundar Pichai declared that artificial intelligence (AI) was “the most profound technology humanity will ever work on”—a larger driver of societal change than “fire or electricity or the internet” (Steiner 2021). Pichai’s pronouncement is not unique. Future trend-spotters in transportation, healthcare, and warfare all foresee an autonomous future in which autonomous cars, robotic surgeons, and drones change the way humans interact, compete, and survive (Fryer-Biggs 2019; Gupta 2021; The Medical Futurist 2021).

However, while artificial intelligence and the autonomous applications it enables may be diffusing across societies, technology does not diffuse without human intervention. It is human behavior, not technology in a vacuum, that dictates how technology seeps into and changes societies (Herrera 2006; Jasanoff 2004; Lee et al. 2013; MacKenzie 1993; Slayton 2013). Whether artificial intelligence is adopted in consumer goods, infrastructure, or national security, it is quite often the consumer, the citizen, the taxpayer, and the soldier who dictate the use and growth of new technologies. As Hall and Khan argue, “it is diffusion rather than invention or innovation” (Hall and Khan 2004) that ultimately determines the impact of technologies (Horowitz 2010). If artificial intelligence is as ubiquitous or as profound as its proponents claim, then its reach across economies and societies makes for a fascinating phenomenon that shapes how individuals vote, how economies and markets evolve, how regimes govern, and when and why states go to war (Bissell 2018; Helbing et al. 2019; Horowitz 2018; Levy 2018; Zhang et al. 2008).

To better understand technological adoption and the spread of AI-enabled autonomous technologies today, we examine representative samples of US adult public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four uses of AI-enabled autonomy, which span transportation, medicine, and national security, we exploit the inherent variation between use cases. These include uses of AI with greater salience for the public (self-driving vehicles), applications relevant to individual well-being (robotic surgery), and both offensive and defensive military applications (autonomous weapon systems and cyber defense).

We theorize that support for AI-enabled autonomous technologies depends in part on familiarity and trust, even across use cases (Schepman and Rodway 2020). We further theorize that there are delegation effects, whereby people who have already made the decision to delegate specific tasks, such as driving, to other humans may be more supportive of delegating those same tasks to artificial intelligence technologies. Additionally, the variation between the 2018 and 2020 waves allows us to examine directly how attitudes about AI adoption change over time and what factors might drive those shifts.

We find that those with familiarity and expertise with AI and similar technologies were more likely than those with a limited understanding of the technology to support all of the autonomous applications we tested, except weapons. We find support for the theorized delegation effect when it comes to autonomous vehicles, but less support for the effect when the technology automated tasks that individuals were not accustomed to performing themselves. Finally, opposition to AI-enabled military applications slightly increased over time. Our findings suggest a complicated relationship between users and AI-enabled technologies in which familiarity with AI may instill trust, but only up to a point. The old saying that “familiarity breeds contempt” could help explain why users are less likely to adopt automated versions of technologies they are already accustomed to operating without AI intervention, or to accept machine intervention when accustomed to more direct human involvement.

Below, we first introduce existing data on public support for AI-enabled autonomous systems. We then use the literature on technology diffusion to examine existing theories about the determinants of technological adoption. In doing so, we identify a series of hypotheses that we then test in our subsequent data section. We then turn to a discussion of demographic and intervening variables before concluding.

2 Theory

For the purposes of this paper, we define artificial intelligence as the capability of machines to conduct tasks once thought to require human intelligence (Russell and Norvig 2020). Artificial intelligence methods like machine learning are one way to program autonomous systems, or systems that operate with minimal or no human oversight. Past research finds that public attitudes toward AI-enabled autonomous systems tend to vary based on the technology’s application. For example, support for higher-risk technologies, such as autonomous vehicles, has remained relatively stable over time (West 2018), with higher levels of support among young, high-income males in the tech field (Bansal et al. 2016; Hulse et al. 2018; Payre et al. 2014). Similarly, support for the development of AI technologies for use in warfare has remained relatively low at 30% (though support increases to 45% when adversaries develop similar weapons; West 2018). More general questions about AI in the same survey reveal a conflicted public: when asked whether AI “is a good thing/bad thing for society,” 44% of US adults said it was a good thing, while 47% said it was a bad thing.

Existing surveys and polls provide valuable snapshots of support for AI technologies and its change over time. However, what these surveys struggle to explain is what drives support for the adoption of AI-enabled autonomous systems in the first place. Are changes in support due simply to greater exposure to and awareness of the technology as it develops and becomes more prevalent? Or, alternatively, is support tied to changes in how the technology is used, including how much control an individual has over the system? The existing literature on technology adoption and diffusion suggests a series of hypotheses that drive at the heart of this puzzle.

2.1 Prior experience, familiarity, and knowledge

One of the primary factors that might influence support for the use of AI-enabled technology is familiarity with the technology. Existing research suggests that understanding the application of algorithms in the real world—and greater familiarity with autonomy in general—might lead to both a recognition of possibilities and an appreciation of technical limits (Chau 1996; King and He 2006; Marangunić and Granić 2015; Yarbrough and Smith 2007). For example, research in the medical field shows that doctors’ familiarity with computing in the late 1990s and early 2000s made the adoption of computerized healthcare processes more likely (Austin et al. 2006; Lapinsky et al. 2004). Similarly, research on the adoption of autonomous vehicles suggests that those with careers in high-technology fields were more likely to support the development of fully autonomous vehicles (Bansal et al. 2016; Moody et al. 2020; Payre et al. 2014) and that those with more direct familiarity with autonomous vehicles were more likely to find them safe (Penmetsa et al. 2019; Sanbonmatsu et al. 2018).

More broadly, behavioral psychology research illustrates the link between personal experience, familiarity, and support for technologies (Taylor and Todd 1995). Direct experience influences how people process information. When individuals believe they have experience with a concept or application, they become more sympathetic to it and ultimately view it in a positive light (Fazio et al. 1978, 1981). Prior experience also makes it easier to rely on one’s own judgment when making an assessment, rather than on the opinions of others (Burnkrant and Cousineau 1975). This is particularly true for information systems, where exposure generates favorable attitudes toward future adoption (Hartwick and Barki 1994), because personal experience and increased familiarity can generate a greater sense of knowledge about, and confidence in, the use of a given technology. Prior survey research of the general public shows that individuals are more comfortable adopting new technologies once they are familiar with them: 52% of the public prefers using familiar brands and products, only 35% wants to try new technologies without additional evidence of effectiveness, and 39% describe themselves as preferring to wait until they hear about others’ experiences before trying something new (Kennedy and Funk 2016). The pattern also holds in research specific to artificial intelligence, which shows that factors such as comfort with specific applications are often stronger predictors of general attitudes toward AI than the perceived capability of the AI itself (Schepman and Rodway 2020). Similarly, other attempts to test confidence in AI systems have found that different types of direct experience with AI (whether positive or negative) have a significant impact not only on how humans approach using AI systems, but also on their self-confidence in completing a task (Chong et al. 2022).

Hypothesis 1: Greater familiarity with AI, through knowledge and self-reported use, should lead to greater support for uses of AI.

2.2 Delegation

Despite strong evidence that familiarity leads to increased public support for AI-enabled systems, this is not always enough to lead to adoption. We theorize that people’s attitudes toward AI-enabled technologies are also determined by a variable that interacts with familiarity—whether individuals are already comfortable with delegating decision-making for the task. Many AI-enabled technologies require individuals to delegate some degree of decision-making power—whether selecting grocery produce, making smart banking choices, or driving around town. What makes an individual more or less willing to delegate decision-making to a machine?

In general, previous research suggests that people are more likely to delegate to AI-enabled technologies in situations where they have already delegated authority or control over the activity to another human or technology (Miller and Parasuraman 2007). When individuals have already ceded some decision-making powers—for example, to ridesharing app drivers—they have already made the decision to trust another agent. Therefore, the decision to delegate to AI-enabled technologies should be easier for those individuals than for others who have not already delegated (whether to human or machine).

When individuals currently conduct the task themselves, they may be less willing to delegate responsibility to a machine. Their familiarity and experience with operating the technology make them more distrustful of machine intervention. For example, research on the adoption of autonomous vehicles finds that experienced drivers often question whether autonomous vehicles are safe enough to adopt (König and Neumayr 2017). They value the control, even though it imposes a greater cognitive load on them (Miller and Parasuraman 2007). More abstractly, research on AI shows that despite a potential aversion to entrusting strategic decisions to algorithms (Leyer and Schneider 2019), delegation to an algorithm is easier if someone has already transferred control of the task in the first place, because using the algorithm only requires trusting the algorithm, not making the initial decision to delegate (Heber and Schneider 2020). Part of the logic here involves direct experience with the task, since “in general, any form of task delegation—whether to automation or other humans—must necessarily result in added unpredictability if it offloads tasks” (Miller and Parasuraman 2007).

In our cases, we can evaluate delegation and support for AI-enabled autonomous systems in a few ways. First, we can measure whether the individual already delegates driving via the use of ridesharing apps. We would predict individuals who have delegated control to ridesharing apps to be more likely to support autonomous vehicles than those who have not.

Hypothesis 2a: Those who used ridesharing apps prior to the pandemic should be more supportive of autonomous vehicles than those who did not.

Second, individuals already delegate responsibility when undergoing surgery in hospitals. Even those who attempt to manage their own healthcare decisions have to trust others when it comes to the operation itself. Therefore, since people have already decided to delegate surgery to a doctor, the decision to trust an algorithm or machine may be easier than trusting an algorithm with something they currently do themselves.

Hypothesis 2b: Support for autonomous surgery should be higher than support for autonomous vehicles.

2.3 Defense applications: AI-enabled weapons vs. AI-enabled cyber defense

Autonomous vehicles and surgeries use artificial intelligence for tasks society sees as generally beneficial. Can the same theories of familiarity and delegation explain public support for AI-enabled weapons and cyber defense? These are tough cases. The public is certainly familiar with the idea of remotely piloted aircraft and AI-enabled weapons. The Campaign to Stop Killer Robots and public figures like Elon Musk have raised public and elite awareness about the potential dangers surrounding highly autonomous weapon systems. Popular science fiction TV, movies, and books also make it easier to imagine the worst-case possibilities of developing and using autonomous weapon systems. Previous surveys have found the US public and AI experts alike to be wary about the use of autonomy and artificial intelligence in offensive military operations (Horowitz 2016; Ipsos 2023; Zhang et al. 2021).

However, even though the average American may be familiar with AI-enabled weaponry as presented in the media, they have little to no familiarity with operating or experiencing these technologies in their own lives. This makes AI-enabled weapons different from other technologies like autonomous cars or even autonomous surgeries, which ordinary Americans are more likely to experience or engage with in their day-to-day lives. When forming opinions about delegating tasks to AI-enabled weaponry, the public’s perception of familiarity is tempered by their actual lack of experience using these technologies. Thus, unlike other uses of AI-enabled technology, which should see a general increase in support based on familiarity with AI technologies, we do not expect this pattern to hold for AI-enabled weapons.

Further, public support for the use of force, and the public’s willingness to delegate to the military decisions about the use of force on its behalf, is complicated (Feaver and Gelpi 2011; Jentleson 1992). The American public is generally concerned about civilian collateral damage, and weapons seen as more likely to harm civilians are thus more likely to be met with public disapproval (Schneider and Macdonald 2016; Walsh 2015). Therefore, with AI-enabled weapons, we introduce a case with potentially high perceived familiarity, but low actual familiarity and correspondingly high concerns about delegation.

This is a particularly interesting case for exploring familiarity and delegation, because it tests whether these concepts help explain technologies that the public will likely never be familiar with or experienced in using in their day-to-day lives. Unlike autonomous vehicles or even autonomous surgery, the average American already delegates their defense to the military, but in a much more extreme fashion, given that the average person has no control or direct recourse when it comes to national defense. While this might initially suggest greater levels of support, public debate and concern over the ethics and safety of introducing AI into military contexts (such as the Campaign to Stop Killer Robots) likely overwhelm any potential delegation effect. Given limited familiarity with non-sensationalized uses of the technology, and a general reluctance to delegate responsibility to machines in war because of concerns about the high potential for accidents or collateral damage, we expect that, in general, support for AI in weapon systems will be lower than for AI use in public utility functions such as autonomous vehicles or AI-enabled surgeries.

Hypothesis 3: Support for AI will be lowest when applied to autonomous weapon systems.

2.4 Policy support vs. personal use

While our baseline questions about AI-enabled autonomous system adoption focus on support for the use of AI in a particular arena, such as autonomous vehicles, support for the use of AI as a matter of public policy may differ from personal beliefs about it or willingness to use it. People process information differently when it involves their own experiences or potential experiences, especially when it involves risk to themselves. Specifically, support for the use of AI as a matter of public policy in areas such as autonomous vehicles may be higher than the willingness of the same respondents to ride in an autonomous vehicle. For instance, in a 2014 study of autonomous vehicle adoption, researchers found a significant disparity between participants’ general support for fully autonomous vehicles and their actual willingness to buy these vehicles (Payre et al. 2014), with trust and risk playing the most important role in distinguishing between general support and willingness to pay for the technology (Liu et al. 2019). Essentially, as people shift from thinking about adoption from a societal perspective—a public policy judgment—to thinking about adoption from an individual perspective—their own use of AI in a particular area—safety and reliability concerns are likely to grow, leading to greater opposition.

This concept of a support-use gap is reinforced by existing research on how confidence and trust influence human–machine relationships (Macdonald and Schneider 2019). Trust inherently involves a degree of uncertainty, since it requires relying on another. It is “the willingness to make oneself vulnerable to another based on a judgment of similarity of intentions or values” (Siegrist et al. 2005). Trust is important in facilitating choices in situations characterized by uncertainty, vulnerability, and perceived risk, where the “motives, intentions, and prospective actions of others” are unknown (Josang and Presti 2004; Kramer 1999).

AI is a newer technology that elicits a degree of public concern. Studies of American public opinion show that, in general, only about 18% of those surveyed are “more excited than concerned” about “the increased use of AI in daily life” (Rainie et al. 2022). In terms of public policy preferences, we would expect this to translate into stronger support for AI adoption for society overall than for personal use, with the gap magnified for applications that pose higher levels of risk.

Hypothesis 4: Support for broad AI adoption will be higher, on average, than willingness to personally use AI, across all AI applications.

3 Research design

We test our hypotheses about support for AI-enabled autonomous systems by evaluating questions in the 2018 and 2020 waves of the Cooperative Congressional Election Study (CCES), now called the Cooperative Election Study (CES) (Schaffner et al. 2019, 2021).

Both the 2018 and 2020 samples are representative of the US adult public, based on the CCES/CES methodology (Schaffner et al. 2019, 2021). Each survey was fielded to 1,000 individuals in two phases, before and after the November general elections in the United States. A module in the 2018 CCES featured questions about attitudes surrounding the adoption of autonomous systems and artificial intelligence across the areas described above. We then included the same questions in the 2020 CES, with additional covariates to test the hypotheses above. The study was preregistered using the Open Science Framework.

As there was no substantial change in AI-enabled autonomous systems salient to the general public between the two waves, there should not be a technology-based driver of a shift in attitudes. We further control for the impact of demographic factors and partisanship in regression models (see the appendix). We present the results below without team sample weights, but we show in the appendix that the results are identical when adding team weights designed to make the sample even more representative.

The dependent variables come from four sets of questions asking respondents about their support for the adoption of AI-enabled autonomous systems: autonomous vehicles, autonomous surgery, autonomous cyber defense, and autonomous weapon systems. Each support question is measured on a four-point scale, where 1 represents very unsupportive and 4 represents very supportive. Full details on the coding of each item are available in the appendix. We describe our key independent variables of interest and control variables below; all come from the CCES/CES data unless explicitly described otherwise. We include a number of individual difference variables, such as age and level of education, in the list below because we use them as control variables in some of the regression models, even though we do not theorize about them in the hypotheses above.

  • Sex (1 if Female, 0 if Male)

  • Age (Count)

  • Race (1 if a respondent identified as White, 0 otherwise)

  • Prior Military Service (1 if yes, 0 otherwise)

  • Level of education (1–6, where 1 = did not complete high school and 6 = graduate degree)

  • Partisanship (1–7, where 1 = strong Democrat and 7 = strong Republican)

  • Use of Ridesharing Apps (1 if respondent used ridesharing apps before the COVID-19 pandemic, 0 otherwise)

  • Drive (1 if respondent has a driver’s license, 0 otherwise)

  • Urbanization (1–4, where 1 = living in a city and 4 = living in a rural area)

  • Self-reported level of prior experience with AI (0–5 scale where 0 is lowest and 5 is highest) (2020 version)

  • Self-reported level of prior experience with AI (0–2 scale where 0 is lowest and 2 is highest) (2018 version)

The measure of prior knowledge and experience with AI contains three parts in the 2020 survey, and we can decompose them to see what kinds of prior self-reported experience actually lead to more positive attitudes about AI-enabled autonomous systems. The first part of the measure asks respondents whether they use AI at home, at work, both, or neither; this measures whether people have exposure to algorithms in their daily lives. The second part asks people whether they think of themselves as using algorithms when using services that make media suggestions based on user history, like the Netflix selection algorithm. The third part consists of two questions testing respondent knowledge about artificial intelligence methods such as machine learning. We aggregate these into an index. The index score is 0 if someone answers no to the home/work question and the music/movies question and gets both of the knowledge questions wrong. The index score is 5 if someone answers yes to everything and gets both knowledge questions correct. The 2018 survey only asked the first question, so its distribution is very different, running from 0 to 2, and we therefore do not compare the impact of AI knowledge from 2018 to 2020. For the 2020 survey, however, we can generate a combined index of self-reported prior use and knowledge that should lead to greater support for AI adoption, following the literature on AI support indices (Parasuraman and Colby 2014; Schepman and Rodway 2020).
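To make the index construction concrete, the sketch below shows one way such a 0–5 score could be computed. The data and column names (e.g., `uses_ai_home`, `knowledge_q1`) are hypothetical placeholders rather than the actual CCES/CES variable names; the scoring assumes one point each for AI use at home, AI use at work, the media-recommendation question, and each of the two knowledge questions.

```python
import pandas as pd

# Illustrative sketch of the 0-5 AI familiarity index described above.
# Column names and data are hypothetical, not the actual CES variables.
df = pd.DataFrame({
    "uses_ai_home":  [1, 0, 1],  # 1 = reports using AI at home
    "uses_ai_work":  [1, 0, 0],  # 1 = reports using AI at work
    "uses_ai_media": [1, 0, 1],  # 1 = recognizes media suggestions as algorithmic
    "knowledge_q1":  [1, 0, 0],  # 1 = first AI/ML knowledge question correct
    "knowledge_q2":  [1, 0, 1],  # 1 = second AI/ML knowledge question correct
})

# Sum the five binary components: 0 = no reported use and both knowledge
# questions wrong; 5 = reports use everywhere and answers both correctly.
components = ["uses_ai_home", "uses_ai_work", "uses_ai_media",
              "knowledge_q1", "knowledge_q2"]
df["ai_index"] = df[components].sum(axis=1)
print(df["ai_index"].tolist())  # [5, 0, 3]
```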

Table 1 highlights the distribution of our key demographic variables across the 2018 and 2020 CCES.

Table 1 Summary statistics (combined 2018 and 2020 samples)

4 Results

We start by assessing average levels of support for AI-enabled autonomous systems. Figure 1 illustrates the mean level of support, concern over safety, and willingness to use these applications in 2018 and 2020 among the US adult public. Overall, support slightly decreased for most AI-enabled autonomous systems, though the results are broadly stable—surgery dropped from 2.482 to 2.374, weapon systems from 2.199 to 2.03, and cyber defense from 2.568 to 2.362. Support for autonomous vehicles increased slightly, from 2.354 to 2.411, but the change was not statistically significant. There is substantial variation in the magnitude of support depending on the application of the technology, with cyber defense, surgery, and vehicles generally receiving more support than autonomous weapon systems.

Responses to the “willingness to use” question followed a similar pattern: respondents were less likely to opt into autonomous surgery in 2020 (2.078) than in 2018 (2.319), and less likely to support the use of autonomous weapon systems (down from 2.373 to 2.313) or cyber defense systems (down from 2.457 to 2.227) in high-importance missions. Willingness to ride in an autonomous vehicle, however, increased by a small margin, from 2.14 to 2.186.

Despite the change in support and willingness to use, overall concern about the safety of these technologies very slightly decreased from 2018 to 2020. Individuals were most concerned about the potential impact of autonomous weapon systems on civilians, with average safety concern scores of 1.572 and 1.591 in 2018 and 2020, respectively (note that a lower mean indicates more concern and a higher mean indicates less concern). In 2018, individuals also appeared more concerned about the safety of autonomous surgery, with a mean level of concern of 1.871, which lessened to an average of 1.96 in 2020, putting it more on par with the level of concern for militaries using autonomous weapon systems as well as autonomous vehicles, all of which hovered between 1.718 and 1.851. Respondents were least concerned about the safety of autonomous cyber defense, which remained constant at 1.96.
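As a rough illustration of how such wave-to-wave comparisons can be checked, the sketch below computes the mean and a 95% confidence interval for a 1–4 item in each wave and runs a Welch two-sample t-test on the difference. The responses are synthetic placeholders, not the CCES/CES data.

```python
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    """Mean and normal-approximation confidence interval for a 1-4 item."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))
    z = stats.norm.ppf(0.5 + level / 2)
    return m, (m - z * se, m + z * se)

# Placeholder responses on the four-point support scale, one array per wave.
rng = np.random.default_rng(0)
support_2018 = rng.integers(1, 5, size=1000)
support_2020 = rng.integers(1, 5, size=1000)

for year, x in [(2018, support_2018), (2020, support_2020)]:
    m, (lo, hi) = mean_ci(x)
    print(f"{year}: mean={m:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")

# Welch's t-test for the difference in means across waves.
t, p = stats.ttest_ind(support_2018, support_2020, equal_var=False)
print(f"Welch t={t:.2f}, p={p:.3f}")
```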

Fig. 1: Mean support, willingness to use, and concern for safety for all applications, 2018 vs 2020

4.1 Type of application

We also find support for hypothesis 3. The results confirm existing research (Horowitz 2016; Young and Carpenter 2018) showing that autonomous weapon systems are controversial and face opposition from the general public. Figure 2 compares the support, use, and concern averages for autonomous weapon systems and another non-civilian use case—AI cyber defense—in both 2018 and 2020.

Fig. 2: Mean support, willingness to use, and concern for safety for autonomous cyber defense and autonomous weapons, 2018 vs 2020

Why is support for autonomous weapon systems so low relative not only to civilian applications of AI but also to other potential military applications? Two factors potentially at play are general ethical concerns and pop-culture portrayals of AI-enabled weapons as dangerous, for example in movies such as The Terminator. Since weapons have an inherent potential for violence, automating them raises more substantial concerns about uncontrollable, dangerous technology.

The scenario in which support for autonomous weapon systems is highest is when we ask respondents specifically about a situation of high importance for US national security. In that scenario, support for autonomous weapon systems rises to 2.31, substantially higher than the 2.03 average level of support for autonomous weapon systems in general. AI weapons are perhaps seen as a necessity in severe cases, which could justify setting aside ethical concerns.

4.2 Prior experience and knowledge

Hypothesis 1 focuses on how familiarity and prior experience with AI technologies and applications, measured via self-reported use and tested knowledge, should lead to more positive sentiments: less concern about, more support for, and a greater willingness to use AI-enabled autonomous systems (Table 2).

Table 2 Distribution of self-reported AI knowledge

Most of the responses are clustered toward the lower end of the scale, with 65% of respondents reporting little to no prior use of AI. However, almost 30% report a mid-level of prior experience/knowledge, and only a small number answer yes to all of the experience questions and answer one or both of the knowledge questions correctly.

The results are broadly supportive of hypothesis 1—there is a statistically significant, positive relationship between the level of familiarity with AI and support for the adoption and use of autonomous systems for all applications except for autonomous weapon systems. The lack of a relationship between experience with AI and support for autonomous weapon systems is consistent with prior research on attitudes about LAWS from the general public and AI/ML experts (Horowitz 2016; Young and Carpenter 2018; Zhang et al. 2021). Tables A1–A5 in the appendix and Fig. 3 display these results.

Fig. 3: Willingness to personally use a given AI application, relative to the level of familiarity with AI (on a scale from 0 = no experience or knowledge of AI to 5 = has substantial machine learning knowledge and uses AI in multiple contexts)

To better understand the effects of prior knowledge and AI experience in context, we estimate OLS regression models of the relative effect of prior knowledge of and experience with AI on support for AI-enabled autonomous systems. The results, displayed in Figs. 9 and 10 in the discussion, show generally strong substantive effects for prior AI knowledge. Moving from a low to a high level of prior AI knowledge generates a 9% increase in support for autonomous vehicles, a 10% increase in support for autonomous surgery, and a 6% increase in support for autonomous cyber defense, with all of these increases statistically significant at the 0.05 level or better. The lack of significance for the relationship between AI knowledge and experience and support for autonomous weapon systems is explained above.
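Below is a minimal sketch of this kind of OLS specification, using statsmodels on synthetic data. The variable names are illustrative rather than the actual codebook names, and the low-to-high effect computation (coefficient times the 0–5 index range, expressed as a share of the four-point scale) is one plausible reading of the percentages reported here, not necessarily the authors' exact calculation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data mimicking the survey structure; names are placeholders.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "support_vehicles": rng.integers(1, 5, size=n),  # 1-4 support scale
    "ai_index": rng.integers(0, 6, size=n),          # 0-5 familiarity index
    "female": rng.integers(0, 2, size=n),
    "age": rng.integers(18, 90, size=n),
    "education": rng.integers(1, 7, size=n),
    "partisanship": rng.integers(1, 8, size=n),
})

# OLS of support on the AI familiarity index plus demographic controls.
model = smf.ols(
    "support_vehicles ~ ai_index + female + age + education + partisanship",
    data=df,
).fit()
print(model.summary())

# Substantive "low to high" effect: predicted change in support as ai_index
# moves from 0 to 5, as a share of the 1-4 scale's width (3 points).
effect = model.params["ai_index"] * 5 / 3
print(f"Low-to-high familiarity effect: {effect:+.1%} of the scale range")
```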

To test which of the measures of AI knowledge and experience are driving the results, we re-run the main models shown in Figs. 9 and 10, substituting in each of the components of the AI index in turn. The results, displayed in Tables A1–A4 in the appendix, show that the experience variables drive the results much more than the knowledge variables. Self-reported use of AI at home or work is positively and significantly associated (\(p<0.05\)) with support for autonomous vehicles, surgery, and weapons, and is positive but not significant for cyber defense. The use of AI to select music and movies is also positively and significantly associated (\(p<0.05\)) with support for autonomous vehicles, surgery, and cyber defense, but not autonomous weapon systems. Meanwhile, the AI knowledge questions were not statistically associated with greater support for any of the AI-enabled autonomous systems. What explains this result? One possibility is that the knowledge questions, as displayed in the appendix, were too difficult: they asked respondents to identify what did and did not qualify as AI and machine learning, which may have been too challenging. Future research should build on new attempts to test AI knowledge and awareness in the general public (Schepman and Rodway 2020).

4.3 Delegation

We now evaluate our theory about delegation in the context of support for AI-enabled autonomous systems, especially autonomous vehicles and surgery. We directly test this theory by looking at how those who used ridesharing apps prior to the COVID-19 pandemic feel about autonomous vehicles. Ridesharing users, after all, have already made the decision to delegate driving to someone else, so they should be more supportive of self-driving cars than those who did not use ridesharing apps.

Table 3 Pre-COVID ridesharing use and attitudes about autonomous vehicles

In Table 3, we show those somewhat or very supportive in each category as a percentage of the total number of respondents in that category of ridesharing use. 42% of the 564 respondents who never used ridesharing were somewhat or very supportive of autonomous vehicles, but that percentage jumps above 60% for all categories of respondents who used ridesharing prior to the COVID-19 pandemic. Similarly, those who used ridesharing are substantially less likely to be concerned about the safety of autonomous vehicles, and more likely to report that they would personally use them.
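The sketch below shows one way to reproduce the Table 3 computation: the share of respondents who are somewhat or very supportive of autonomous vehicles within each ridesharing category. The data and column names are hypothetical placeholders, not the survey variables.

```python
import pandas as pd

# Placeholder data: ridesharing category and 1-4 support for autonomous
# vehicles, where 3-4 means somewhat or very supportive.
df = pd.DataFrame({
    "rideshare_use": ["never", "never", "monthly", "weekly", "monthly", "never"],
    "support_av":    [1, 4, 3, 4, 4, 2],
})

# Proportion supportive within each ridesharing category.
supportive = df["support_av"] >= 3
print(supportive.groupby(df["rideshare_use"]).mean())
```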

This provides initial support for hypothesis 2a, which is reinforced in Fig. 4. Support for autonomous vehicles rises from an average of 2.39 for all respondents to 2.75 for those who used ridesharing pre-COVID. Similarly, personal willingness to use autonomous vehicles grows from an average of 2.15 for all respondents to 2.54 for those who used ridesharing pre-COVID. These gaps show that those who delegated driving to others prior to the COVID-19 pandemic are more supportive of autonomous vehicles, as predicted.

Fig. 4: Impact of pre-COVID-19 ridesharing use on willingness to support or use autonomous vehicles. Average and 95% confidence interval. Note: higher numbers indicate a higher degree of support

Support for hypothesis 2a is further confirmed when we shift to a regression context, based on the regression analysis described above in the context of hypothesis 1 and displayed in Fig. 9. Prior use of ridesharing apps has a large substantive effect—leading to an increase of almost 20% in support for autonomous vehicles even when controlling for a range of demographic factors and prior AI knowledge and experience.

We also test for delegation effects for autonomous vehicles by looking at the sub-population of respondents who do not have a driver’s license. By definition, they have already delegated driving to someone else. There are 123 respondents without a driver’s license. They are more supportive of autonomous vehicles (average support \(= 2.52\)) than those with a driver’s license (average support \(= 2.37\)), but the difference is not statistically significant at the \(p<0.05\) level. The results for the personal use question are similar. The large confidence interval is likely driven by the small sample of non-drivers, so future research that over-samples non-drivers could help address this issue. Alternatively, there might be health or mobility reasons why some people do not have a driver’s license that also limit the utility of autonomous vehicles for them, confounding any findings.
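A sketch of this drivers-versus-non-drivers comparison appears below: a Welch two-sample t-test on mean support for autonomous vehicles, with synthetic data sized to reflect the small non-driver subsample described above. The group means and sample sizes are placeholders drawn from the text, not the underlying responses.

```python
import numpy as np
from scipy import stats

# Placeholder draws centered on the group means reported in the text,
# with 123 non-drivers against the remainder of a 1000-person sample.
rng = np.random.default_rng(2)
non_drivers = rng.normal(2.52, 1.0, size=123)
drivers = rng.normal(2.37, 1.0, size=877)

# Welch's t-test; with only 123 non-drivers, wide confidence intervals
# (and a non-significant difference) are plausible even for real gaps.
t, p = stats.ttest_ind(non_drivers, drivers, equal_var=False)
print(f"t={t:.2f}, p={p:.3f}")
```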

The results do not support hypothesis 2b concerning the relationship between autonomous surgery and vehicles. As Fig. 4 shows, for the support and use questions, excluding those who used ridesharing apps prior to COVID-19, there is no statistically significant difference between the averages for autonomous surgery and autonomous vehicles. In fact, for the use question, approval of autonomous vehicles, even among those who did not use ridesharing apps, is higher than approval of autonomous surgery, though the difference is not statistically significant.

What explains the lack of a hypothesized result? One potential explanation is that the type of delegation is not the same across these use cases, since one involves the delegation of a “daily” activity and one involves the delegation of a “rare” activity. Daily activities are things such as driving: even if not everyone drives every day, driving is commonplace for people with a driver’s license. Driving is a regular activity for most people, and though it is quite dangerous, given the number of accidents and accident-related deaths and injuries per year, it is probably perceived as less dangerous because it is familiar (Guerin 1994; Shariff et al. 2021). For driving, delegation is a decision that can be adapted or changed dynamically, in the moment, based on circumstances and the comfort level of the individual doing the delegating or driving. Rare activities are those such as surgery, which is often also perceived as inherently dangerous. In the surgery case, the human cannot manage the risk themselves and is sometimes not awake or cognitively aware of the act of surgery itself. Additionally, whereas nearly all adults have experience as both a driver and a passenger, with surgery, unless you are a surgeon, you have no experience on the other side of the patient–surgeon interaction. Thus, with surgery, the decision to delegate is a forced choice and an unfamiliar experience, and so not entirely parallel to that of autonomous vehicles. Similarly, individuals cannot individually manage the risk themselves when it comes to national defense decisions.

Furthermore, one could argue that the surgery and defense cases do not involve true delegation, since prior to any actual action being taken, the responsibility and procedures are already clearly established, with clear norms, guidelines, expectations, and requirements such as attending medical school or joining the military. As other research has highlighted, “when this division of labor is done by a designer prior to operation, it is a part of the design for that system”; however, when it is done by a supervisor, human team, or individual dynamically, such as in the case of driving, “the process may be called ‘delegation’ or, more generally, ‘tasking’ and task management” (Miller and Parasuraman 2007). Thus, delegation as imagined in the theory section may not be the appropriate frame, as delegation of a low-barrier, everyday activity such as driving is not comparable to infrequent activities that require specialized knowledge, membership, or access, such as surgery or defense. This is an important avenue for future research.

4.4 Policy support vs. use

We now turn to assessing whether people are more inclined to support these technologies in theory than to actually use them themselves. The results displayed below in Figs. 5, 6, 7, and 8 largely support hypothesis 4. There is a gap between support for development and willingness to use most of the autonomous systems we evaluate. Support for the systems as a matter of public policy is almost always higher than the willingness of individuals to use them.

On average, respondents demonstrate a much lower personal willingness to undergo autonomous surgery than general policy support for it. The same phenomenon is visible for vehicles. This suggests individuals might be excited by the broader societal benefits of these applications but wary about the risks to themselves. However, for military rather than civilian autonomous applications (weapon systems and cyber defense), the gap between willingness to use and general support diminished. Interestingly, for autonomous weapon systems in particular, individuals were on average more supportive of their “use to carry out a military mission of high importance to US national security” than of their general development more broadly.

Fig. 5: Percentage support for/willingness to use autonomous surgery

Fig. 6: Percentage support for/willingness to use autonomous vehicles

Fig. 7: Percentage support for/willingness to use autonomous cyber defense

Fig. 8: Percentage support for/willingness to use autonomous weapon systems

5 Demography and ideology

We now explore individual-level covariates and their relationship to support for and use of these AI-enabled autonomous technologies. While we do not theorize about them, we include them as control variables, and we discuss their importance, or lack thereof, below to lay the groundwork for future research and to contribute to the literature in a more descriptive fashion. In general, research suggests that emerging technologies are more likely to be adopted by younger, male, high-income individuals who work in technological fields (Bansal et al. 2016; Haboucha et al. 2017; Kadylak and Cotten 2020; Moody et al. 2020; Payre et al. 2014; Wang et al. 2020). It is no surprise that this demographic is the most likely to use autonomous vehicles, especially as it also displays a higher risk propensity for technology adoption in general (Hulse et al. 2018). Moreover, attitudes about emerging technologies and science and technology issues are often polarized (Drummond and Fischhoff 2017; Gauchat 2012; Guber 2013). What do our results show? We use OLS regression models, where the dependent variable is the level of support and the independent variables are the covariates described in the Research Design section. We employ team weights to ensure population representation. The results are consistent across ordered logit models, logit models, and models without team weights.
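For concreteness, here is a minimal sketch of a survey-weighted specification of this kind, using weighted least squares in statsmodels on synthetic data. The covariate and weight names are illustrative placeholders, not the actual CES variables, and the published models may differ in functional form.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data mimicking the covariates listed in the Research Design.
rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "support_weapons": rng.integers(1, 5, size=n),   # 1-4 support scale
    "female": rng.integers(0, 2, size=n),
    "age": rng.integers(18, 90, size=n),
    "white": rng.integers(0, 2, size=n),
    "military_service": rng.integers(0, 2, size=n),
    "education": rng.integers(1, 7, size=n),
    "partisanship": rng.integers(1, 8, size=n),
    "team_weight": rng.uniform(0.5, 2.0, size=n),    # placeholder survey weights
})

# Weighted least squares with team weights as analytic weights.
model = smf.wls(
    "support_weapons ~ female + age + white + military_service"
    " + education + partisanship",
    data=df,
    weights=df["team_weight"],
).fit()
print(model.params)
```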

Fig. 9: Drivers of support for autonomous vehicles and surgery

The results show support for AI-enabled vehicles and surgery is substantially lower for women than men, with effect sizes that suggest a 40% relative decline in support. Age is negative, but the substantive effects are very small, while higher levels of education, consistent with the literature, lead to stronger support for AI-enabled vehicles and surgery. Being in an urban area is not significantly associated with support for vehicles or surgery.

There are partisanship effects for vehicles, but not surgery. Republicans are significantly less likely, all else equal, to support autonomous vehicles, but there is no significant effect for autonomous surgery. Those who live in top-ten auto-manufacturing states like Michigan are substantially less likely to support AI-enabled autonomous vehicles, with a 20% drop in support, though the confidence interval is quite large. Those in the top-ten healthcare employment states are more likely to support AI-enabled autonomous surgery.

Fig. 10: Drivers of support for autonomous weapon systems and cyber defenses

The results for AI-enabled military systems differ in some ways from the vehicle and surgery results. There are strong gender effects for cyber defenses, with women less likely to support them than men; for autonomous weapon systems, the coefficient is negative but the gender gap is not statistically significant (perhaps because men support weapon systems less than any other AI-enabled autonomous system). There are no age effects, but there are race effects: non-white respondents are substantially less likely to support autonomous weapon systems and autonomous cyber defenses, which we did not anticipate and which requires further investigation.

Higher levels of education, unlike for vehicles and surgery, do not lead to stronger support for autonomous weapon systems and cyber defenses. Prior military service is also not associated with stronger support. The only other clear effect comes from partisanship. Republicans are more likely to support autonomous weapon systems than Democrats (though not autonomous cyber defenses). This potentially reflects stronger Republican support, on average, for military systems.

6 Conclusion

This paper provides important new context for how familiarity with technology and previous delegation of related life decisions may influence the politics of support for AI-enabled autonomous systems. Across two representative surveys of US adults, we find that individuals with more experience using AI-related technology, and those who had delegated the responsibility of driving via ridesharing apps prior to COVID-19, were more likely to support the adoption and use of AI-enabled autonomous systems in most cases.

We also show that individuals are more willing to support the development of these technologies than to actually use them themselves, suggesting that while the public is interested in these technologies and the benefits they might provide, people may not yet be familiar enough with, or convinced by, the current state of the technology to use it comfortably in their daily lives. Finally, we found that support for, interest in, and openness to these AI-enabled autonomous systems differ by use case. In particular, there exists a persistent, strong aversion to autonomous weapon systems across key demographic categories.

There are limitations to these findings and the research design that could inform future research. We survey only US adult respondents; a larger, more global sample could show whether these perceptions of AI-enabled autonomous systems are general or specific to certain contexts and cultures. Future research could also integrate more sub-populations to examine how their views differ from those of the general public, such as healthcare providers’ views on autonomous surgery, or how those who work in the military view autonomous cyber defense and weapon systems. Evaluating sub-populations with a close view of specific applications of AI-enabled autonomous systems will allow researchers to further explore how familiarity, and the ability to perceive the benefits, risks, and uses of a given application, alter support for its development.