Abstract
Robots are becoming an increasingly prominent part of society. Despite their growing importance, there exists no overarching model that synthesizes people’s psychological reactions to robots and identifies what factors shape them. To address this, we created a taxonomy of affective, cognitive and behavioural processes in response to a comprehensive stimulus sample depicting robots from 28 domains of human activity (for example, education, hospitality and industry) and examined its individual difference predictors. Across seven studies that tested 9,274 UK and US participants recruited via online panels, we used a data-driven approach combining qualitative and quantitative techniques to develop the positive–negative–competence model, which categorizes all psychological processes in response to the stimulus sample into three dimensions: positive, negative and competence-related. We also established the main individual difference predictors of these dimensions and examined the mechanisms for each predictor. Overall, this research provides an in-depth understanding of psychological functioning regarding representations of robots.
Main
Various projections indicate that robots will soon become a constituent part of society and will need to be increasingly integrated into it1,2,3,4,5. This trend highlights the importance of understanding people’s psychological processes (for example, feelings, thoughts and actions) towards robots. Indeed, these processes form the basis of human–robot relationships and are therefore likely to shape the dynamics of the new world permeated by robots6,7,8,9,10. In this respect, although various processes have been investigated6, this research area is still in its infancy for several reasons.
First, scholars have not synthesized psychological processes towards robots into an overarching framework that clarifies how they function as a whole and allows for building theories that would explain them. Second, it is unclear whether, and how many, important psychological processes remain hidden due to the lack of systematic research on this topic. Third, previous studies have mainly focused on specific robot types (for example, social6) rather than examining the full content space of robots across all domains of human activity (for example, education, hospitality and industry). Finally, most research has been conducted outside of psychology (for example, healthcare and robotics6,11,12). Consequently, there has been little effort to integrate people’s responses to robots with important constructs from psychology in a way that would allow the field to study this topic more systematically and establish a coherent research stream around it.
To address this, the present research has two objectives: (1) to develop an integrative and comprehensive taxonomy of psychological processes in response to robots from all domains of human activity that organizes these processes into dimensions; and (2) to establish which individual differences widely studied in psychology are the most important predictors of these dimensions and to understand the mechanisms behind their relationships.
In this context we use the term ‘psychological processes’ in reference to people’s affective (that is, feelings towards robots), cognitive (that is, thoughts about robots) and behavioural responses (that is, actions towards them). This rule-of-thumb classification is often used to summarize and investigate psychological processes in an all-encompassing way13,14,15, because an official taxonomy does not exist. We adopt it because it is useful as a guiding principle when (1) eliciting diverse psychological processes and (2) identifying and organizing previous literature, considering that psychological functioning involving robots is typically not studied as a uniform construct and comprises studies from numerous areas.
Next, we briefly review previous research on psychological processes regarding robots in terms of affective, cognitive and behavioural responses (for a detailed review see Supplementary Notes). Before this review, we first clarify how we define robots because their definition is often confined to various specific types (for example, autonomous and social6,16,17,18,19,20), and describing them as an overarching category can be less straightforward19,20,21.
We adopt a general definition proposed by the Institute of Electrical and Electronics Engineers (IEEE22), according to which robots are devices that can act in the physical world to accomplish different tasks and are made of mechanical and electronic parts. These devices can be autonomous or subordinated to humans or software agents acting on behalf of humans. Robots can also form groups (that is, robotic systems) in which they cooperate to accomplish collective goals (for example, car manufacturing).
Previous research has documented diverse affective responses to robots, which can be classified as negative or positive6. Regarding negative feelings, fear and anxiety are typically experienced concerning robots taking people’s jobs23,24,25,26,27. Individuals can also find robots creepy if they are designed to be human-like but look unnatural and inconsistent with human appearance28. Regarding positive feelings, individuals can experience happiness, amazement, amusement, enjoyment, pleasure, warmth and empathy towards robots6,10,25,26,29,30,31,32,33,34,35,36,37,38. Interestingly, people can also become emotionally attached, feel attracted and be in love with robots39,40,41,42,43,44. Although these romantic feelings are perceived by many as taboo, they are becoming increasingly common42.
In terms of cognitive responses, people’s thoughts about robots can be organized into several themes. A key theme is the level of competence displayed by robots concerning tasks in which they are specialized19,30,45,46. For example, robots are often seen as efficient and accurate in what they do and as having greater physical endurance than humans19,47,48. Individuals can also consider robots helpful and appreciate their effectiveness in accomplishing various tasks, from household chores to carrying heavy loads49,50,51,52. Another important theme is anthropomorphism (that is, ascribing human characteristics to non-living entities7). For instance, people may perceive robots as sentient beings that have feelings53,54,55,56,57,58,59 but they may also see them as distinct from humans (for example, cold or soulless19,60,61,62) and question whether robots can be trusted in their capacities as companions, coworkers and other roles they assume63,64,65.
In terms of behavioural responses, actions towards robots can be classified as either approach (for example, engaging with them) or avoidance (for example, evading them)10,66,67,68. Common approach behaviours involve communicating, cooperating, playing and requesting information10,26,69,70. More negative approach behaviours have also been documented, including several instances of robot abuse71,72,73. In contrast to approach, avoidance behaviours (for example, hiding from robots) are infrequently mentioned in the literature and may typically occur in environments where robots could potentially injure humans74,75.
Overall, the reviewed literature indicates that various psychological responses to robots have been observed. However, because this topic is not studied under a common umbrella of psychological processes but in relation to diverse topics (for example, anthropomorphism, robotic job replacement or robot acceptance7,49,76), it is unclear how these processes are interlinked, what shapes them and whether all important processes have been discovered.
For these reasons, our research adopted a data-driven rather than a theory-driven approach77,78,79. Contrary to theory-driven studies that are inherently deductive because they test hypotheses deduced from general principles (that is, theory), data-driven research is inductive because it starts with empirical observations that are not guided by hypotheses and can progressively evolve into theory77,78,79,80,81,82.
A data-driven approach is recommended if (1) a construct is in its early stages of development and/or (2) its theoretical foundations have not been established77,78,79,80,82,83. Based on this, a data-driven approach is optimal for our research for both reasons. First, as previously indicated, the conceptual bases of our topic are at an early stage because different affective, cognitive and behavioural responses to robots have not been studied under an all-encompassing construct (that is, psychological processes). Second, theoretical foundations have not yet been developed, because encapsulating the entirety of psychological functioning regarding robots by identifying, organizing and predicting the psychological processes triggered by robots is beyond the scope of existing models of human–technology relationships. To illustrate this, the technology acceptance model84,85,86 and its extensions—the unified theory of acceptance and use of technology87,88,89 and the Almere model90—examine the factors that make people accept technology (for example, perceived usefulness, ease of use or social influence) whereas the media equation91,92,93 examines whether people interact with media (for example, computers) similarly to how they interact with other humans.
Data-driven approaches have three main benefits. First, they allow the study of novel topics without engaging in premature theorizing that can lead to post hoc hypothesizing and false-positive findings77,78,94,95,96,97. Second, because the emphasis is on inferences from data that are not constrained by previous theories and findings, these approaches can diversify knowledge of human psychology and spark unexpected insights79,81,98. Third, they can provide stronger corroboration of previous research on a topic than deductive approaches directly informed by that research. In behavioural sciences, failed replications are common, and researchers examining the same research questions and hypotheses, even with identical data, can often obtain different findings99,100,101,102,103. Therefore, if a data-driven study produces a finding consistent with previous research and theorizing, despite using a methodological approach that is solely guided by data and not constrained by prior assumptions, this is a compelling case of support for the previous work. It is thus important to emphasize that using a data-driven approach does not imply conducting a research project that disregards previous literature. Quite the contrary, it is essential to comprehensively evaluate and discuss how the findings are linked to previous work to illuminate how the present research has extended this work and moved the field forward—a process labelled inductive integration77.
Drawing on data-driven approaches, our research objectives—(1) establishing a taxonomy of psychological processes involving robots and (2) examining its individual difference predictors—are achieved in three phases comprising seven studies (Fig. 1; for participant information see Table 1).
Phase 1 consisted of two studies that undertook an in-depth examination of the construct of robots that was necessary to build the taxonomy. In Study 1 we developed an all-encompassing general definition of robots. In Study 2 we used this definition to identify all domains of human activity in which robots operate.
Phase 2 consisted of three studies aimed at creating the taxonomy. In Study 3 we sampled a comprehensive content space of people’s psychological processes involving robots across the domains identified in Phase 1 to develop items assessing each process. In Study 4 we determined the main dimensions of these processes using exploratory factor analyses (EFAs104,105,106). In Study 5 we further confirmed these dimensions using exploratory structural equation modelling (ESEM107,108) and developed the psychological responses to robots (PRR) scale that can assess psychological processes towards any robot.
Phase 3 consisted of two studies that focused on determining the most important individual difference predictors of the psychological responses and testing the mechanisms behind these relationships. In Study 6 we used machine learning109,110 to identify the key predictors of the main dimensions of the PRR scale. In Study 7 we probed the mechanisms behind these predictors.
To achieve our research objectives, we used representations (that is, images and descriptions) of robots (Supplementary Table 7) as stimuli, drawn from an exhaustive set of 28 domains of human activity in which robots operate (Table 2). This comprehensive approach allowed us to minimize the chance that our findings are driven by idiosyncrasies of a sample that is small in size and/or variety of robot types, which could compromise replicability111,112. Despite the wide variety of our stimulus sample, it is unclear to what degree this sample is representative of the general population of robots because (1) there are no established recommendations on what variables would need to be measured to accurately define this population, (2) the type of data used to quantify general characteristics of human populations is not available for robots and (3) the field of robotics is rapidly evolving. Therefore, in the context of our research we use the term ‘robot/s’ in reference to our specific stimulus sample and we do not imply that our insights extend to the general population (that is, all physical robots).
Results
In this section we briefly present the results (for a detailed description see Supplementary Results).
Phase 1: mapping a comprehensive content space of robots
Phase 1 aimed to establish a comprehensive content space that encompasses a wide range of robots by identifying all domains of human activity in which robots operate, to ensure that our taxonomy developed in Phase 2 is not biased towards only a few robot types111,112.
The first step in this endeavour was to devise a general definition of robots in Study 1, because robot definitions are typically proposed by experts6,21,22,113 and it is less well known whether these reflect how people more broadly perceive robots. Because any robot definition is essentially a set of characteristics that describe robots (for example, made of mechanical parts, autonomous6,22,113), to develop a general definition we first recruited Sample 1 and asked them to generate robot characteristics. Using this approach, 277 characteristics were identified (Supplementary Table 3). We then recruited Sample 2 and asked them to group these characteristics into common categories. Using hierarchical cluster analysis114,115,116, the following main clusters of robot characteristics were identified: (1) characteristics conveying the degree of robot–human similarity; (2) positive characteristics; (3) characteristics conveying robots’ composition; (4) negative characteristics; and (5) characteristics conveying robots’ ability to perform various tasks (Supplementary Table 3).
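The sorting-and-clustering step described above follows a generic pattern: convert participants' co-sorting frequencies into distances and cut a hierarchical tree. The sketch below is a minimal scipy illustration, not the authors' analysis code; the six characteristic labels and co-sorting proportions are made up for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical fraction of participants who sorted each pair of six robot
# characteristics into the same pile (values are illustrative only).
labels = ["human-like", "helpful", "metallic", "scary", "efficient", "wired"]
co_sort = np.array([
    [1.0, 0.2, 0.3, 0.1, 0.2, 0.3],
    [0.2, 1.0, 0.1, 0.0, 0.8, 0.1],
    [0.3, 0.1, 1.0, 0.1, 0.1, 0.9],
    [0.1, 0.0, 0.1, 1.0, 0.0, 0.1],
    [0.2, 0.8, 0.1, 0.0, 1.0, 0.1],
    [0.3, 0.1, 0.9, 0.1, 0.1, 1.0],
])
dist = 1.0 - co_sort                    # more co-sorting -> smaller distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=4, criterion="maxclust")   # cut tree into 4 clusters
print(dict(zip(labels, clusters)))
```

With these toy proportions, characteristics that participants frequently sorted together (for example, "helpful" and "efficient") end up in the same cluster, mirroring how thematic clusters of robot characteristics were derived.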
The general definition of robots that we subsequently developed by linking the themes of each cluster is available in Table 2. It is important to emphasize that we did not always translate each cluster theme into a separate part of the definition, because the definition was more succinct and coherent when certain themes were combined within the same parts.
In Study 2 we used this robot definition to identify a comprehensive list of domains in which robots operate. Participants were presented with the definition and asked to generate all such domains they could think of. To develop an extensive inventory of domains, we analysed their responses using inductive content analysis117,118,119,120,121. Additionally, to ensure we did not miss any domains that participants were unable to identify, we consulted various other resources (for example, articles from the literature review of this paper and classifications detailed in Methods). The final list of domains, accompanied by the example items generated by participants, is available in Table 2.
Phase 2: creating the taxonomy of psychological processes
To develop the taxonomy, it was first necessary to identify a comprehensive range of psychological processes involving robots in Study 3. We instructed participants to write about any feelings, thoughts and behaviours they could think of concerning robots from the domains developed in Study 2 (Table 2)—each participant was randomly allocated to one of five domains. Participants were not provided with specific robot examples for a given domain, because we expected that reliance on their own reflections and experiences would cover a broader spectrum of robots and therefore increase the diversity of psychological processes reported (for a similar methodological approach see ref. 82). Table 3 contains the final list of psychological processes derived from participants’ responses using iterative categorization122.
In Study 4 we then created items for each of these processes (Table 3) and asked participants to answer the items about an example of a robot (Supplementary Table 7) from one of the 28 domains (Table 2) to which they were randomly allocated. To develop the taxonomy from participants’ responses we used maximum-likelihood EFAs104 with Kaiser123 normalized promax rotation106,124. EFAs were appropriate because the Kaiser–Meyer–Olkin measure of sampling adequacy was 0.983 and 0.984 for Samples 1 and 2, respectively, and Bartlett’s test of sphericity was significant (for both samples, P < 0.001)125.
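The sampling-adequacy checks reported here can be computed directly from a correlation matrix. The following numpy/scipy sketch (an illustration, not the authors' code) derives the Kaiser–Meyer–Olkin measure and Bartlett's test of sphericity for synthetic one-factor data; the sample size, item count and loadings are assumptions for the example.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
n, p = 500, 6
latent = rng.normal(size=(n, 1))
X = latent @ np.full((1, p), 0.7) + 0.5 * rng.normal(size=(n, p))  # toy one-factor data

R = np.corrcoef(X, rowvar=False)
S = np.linalg.inv(R)
# Anti-image (partial) correlations from the inverse correlation matrix
A = -S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
off = ~np.eye(p, dtype=bool)
kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (A[off] ** 2).sum())

# Bartlett's test of sphericity: H0 is that R is an identity matrix
stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
df = p * (p - 1) / 2
p_value = chi2.sf(stat, df)
print(f"KMO = {kmo:.3f}, Bartlett chi2 = {stat:.1f}, p = {p_value:.3g}")
```

For strongly interrelated items, KMO approaches 1 and Bartlett's test is highly significant, which is the pattern that licenses factor analysis in the study.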
To select the most appropriate factor solution we used the following procedure. We first consulted parallel analysis126,127, very simple structure128, Velicer’s minimum average partial (MAP) test129, optimal coordinates130, acceleration factor130, Kaiser rule131 and visual inspection of scree plots132, which indicated that extraction of between one and 19 factors (Sample 1) and between two and 18 factors (Sample 2) could be optimal. Next, we evaluated the largest factor solutions (that is, 19 factors for Sample 1 and 18 for Sample 2) against several statistical and semantic benchmarks. If the benchmarks were not met we decreased the number of factors by one and evaluated these new solutions. This procedure was continued until the benchmarks were met. Concerning statistical benchmarks, a factor solution was required to produce only valid factors—those that have at least three items with standardized loadings ≥0.5 and cross-loadings of <0.32 (refs. 105,125,133,134). Semantically, a solution was required to make sense conceptually by having factors that are coherent and easy to interpret135,136.
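The statistical benchmark for a valid factor—at least three items with primary loadings ≥0.5 and all cross-loadings <0.32—is mechanical and can be expressed as a simple check over a loading matrix. The toy loadings below are illustrative, not values from the study.

```python
import numpy as np

def valid_factors(loadings, primary=0.50, cross=0.32, min_items=3):
    """Check the Study 4 benchmark: every factor needs at least `min_items`
    items loading >= `primary` on it while loading < `cross` on all other
    factors."""
    L = np.abs(np.asarray(loadings, dtype=float))
    ok = []
    for f in range(L.shape[1]):
        others = np.delete(L, f, axis=1)
        clean = (L[:, f] >= primary) & (others < cross).all(axis=1)
        ok.append(clean.sum() >= min_items)
    return all(ok)

# Toy loading matrices: 6 items, 2 factors (illustrative values)
good = [[.7, .1], [.6, .2], [.8, .0], [.1, .7], [.2, .6], [.0, .8]]
bad  = [[.7, .4], [.6, .2], [.8, .0], [.1, .7], [.2, .6], [.4, .5]]
print(valid_factors(good), valid_factors(bad))
```

In the `bad` matrix, one item cross-loads at 0.4, so its factor retains only two clean items and the whole solution is rejected—mirroring why larger factor solutions were discarded in favour of the three-factor model.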
For Samples 1 and 2, three-factor solutions emerged as optimal. These met the statistical criteria and had semantically coherent factors denoting positive, negative and competence-related psychological processes (Table 3). Therefore, the taxonomy was labelled the positive–negative–competence (PNC) model of psychological processes regarding robots. None of the larger factor solutions met the statistical criteria.
We aimed to further validate the PNC model by confirming its dimensions and thereby developing the PRR scale that measures them. To do this, in Study 4 we selected a representative subset of PNC items (bold items in Table 3) and subjected them to ESEM107 using the maximum-likelihood with robust standard errors (MLR) estimator137,138 and target rotation with all cross-loadings as targets of zero139,140. For both samples, fit indices showed good to excellent fit (that is, SRMR < 0.05, CFI > 0.90, RMSEA < 0.06 (refs. 141,142,143)): Sample 1, χ2(558) = 1,953.820, P < 0.001, SRMR = 0.026, CFI = 0.939, RMSEA = 0.041, 90% confidence interval (CI) [0.039, 0.043]; Sample 2, χ2(558) = 1,850.880, P < 0.001, SRMR = 0.025, CFI = 0.944, RMSEA = 0.039, 90% CI [0.037, 0.041].
Subsequently, in Study 5 we recruited two additional samples and asked participants to answer these items about one of the two robot examples (Supplementary Table 7) from one of the 28 domains to which participants were randomly allocated. The ESEM models for both samples had a good to excellent fit (Table 4). Moreover, items previously classified under a specific dimension (that is, positive, negative or competence) by EFAs in Study 4 (Table 3) had the highest loadings for this dimension whereas the cross-loadings were <0.32. To ensure that the model comprising the three dimensions was the most appropriate we tested several alternative models, which were all rejected due to poor fit (Supplementary Results).
To show that our model has equivalent factor structure, loadings and intercepts regardless of participants’ country, robot examples used and several key demographic characteristics, we tested configural, metric and scalar measurement invariance144,145,146. As shown in Table 5, measurement invariance was demonstrated in all cases: the configural model demonstrated good to excellent fit (SRMR < 0.05, CFI > 0.90, RMSEA < 0.06 (refs. 141,142,143)), and changes in SRMR, CFI and RMSEA were, respectively, ≤0.030, 0.010 and 0.015 for the metric model and ≤0.015, 0.010 and 0.015 for the scalar model144. Because we could not analyse measurement invariance for participants who did versus did not use robots at work in Study 5 (the number of those who did was insufficient; Table 1), we tested this in Study 6, where sample sizes were larger. In Study 6 we also computed measurement invariance for additional participant characteristics assessed in that study (educational attainment, income, being liberal versus conservative, ethnic identity and relationship status). Measurement invariance was demonstrated in all cases (Supplementary Table 10).
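The change-in-fit cutoffs used to judge metric and scalar invariance amount to a simple decision rule, sketched below. The fit indices in the example are illustrative placeholders, not values from the paper's Table 5.

```python
def invariance_holds(base, constrained, level):
    """Apply the change-in-fit cutoffs described in the text: for the metric
    model dSRMR <= .030, dCFI <= .010 and dRMSEA <= .015; for the scalar
    model dSRMR <= .015 with the same dCFI/dRMSEA cutoffs.
    `base` and `constrained` are dicts of fit indices."""
    d_srmr = constrained["srmr"] - base["srmr"]
    d_cfi = base["cfi"] - constrained["cfi"]      # CFI drops as constraints are added
    d_rmsea = constrained["rmsea"] - base["rmsea"]
    srmr_cut = 0.030 if level == "metric" else 0.015
    return d_srmr <= srmr_cut and d_cfi <= 0.010 and d_rmsea <= 0.015

# Illustrative fit indices for a configural and a metric model
configural = {"srmr": 0.026, "cfi": 0.940, "rmsea": 0.041}
metric     = {"srmr": 0.031, "cfi": 0.936, "rmsea": 0.042}
print(invariance_holds(configural, metric, "metric"))  # → True
```

The fit indices themselves would come from fitting the constrained models in dedicated SEM software; the sketch only encodes the comparison rule.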
Overall, the structure of the PNC model and its validity across different subgroups of participants were confirmed.
Phase 3: examining individual difference predictors
In Study 6, to identify the main predictors of the PNC model we followed the analytic strategy described in Methods. We first computed 11 common machine learning models (for example, linear least squares, lasso109,110) for the positive, negative and competence dimensions separately. The key predictors in each model were 79 personality measures that were found to be conceptually or theoretically relevant to the PNC dimensions. We selected these measures by examining several comprehensive psychological scale databases (for example, Database of Individual Differences Survey Tools147). All measures and their justifications are available in Supplementary Table 11.
We then identified the most predictive models, which were the same across all PNC dimensions: conditional random forest (r.m.s.e.Positive = 0.919; r.m.s.e.Negative = 0.988; r.m.s.e.Competence = 0.778), linear least squares (r.m.s.e.Positive = 0.929; r.m.s.e.Negative = 1.000; r.m.s.e.Competence = 0.795), ridge (r.m.s.e.Positive = 0.921; r.m.s.e.Negative = 0.994; r.m.s.e.Competence = 0.787), lasso (r.m.s.e.Positive = 0.921; r.m.s.e.Negative = 0.993; r.m.s.e.Competence = 0.784), elastic net (r.m.s.e.Positive = 0.921; r.m.s.e.Negative = 0.993; r.m.s.e.Competence = 0.784) and random forest (r.m.s.e.Positive = 0.925; r.m.s.e.Negative = 0.995; r.m.s.e.Competence = 0.781).
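Comparing candidate models by r.m.s.e. follows a generic cross-validation pattern. The sklearn sketch below illustrates the procedure with simulated data standing in for the personality measures; it is not the study's code, the simulated coefficients are arbitrary, and conditional random forests (available in R's party package) are omitted because sklearn has no equivalent.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, k = 600, 20                      # stand-in for participants x personality scales
X = rng.normal(size=(n, k))
y = X[:, :3] @ np.array([0.5, 0.4, 0.3]) + rng.normal(size=n)  # toy outcome

models = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.05),
    "elastic_net": ElasticNet(alpha=0.05, l1_ratio=0.5),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
rmse = {
    name: -cross_val_score(m, X, y, cv=5,
                           scoring="neg_root_mean_squared_error").mean()
    for name, m in models.items()
}
for name, val in sorted(rmse.items(), key=lambda kv: kv[1]):
    print(f"{name:>13}: RMSE = {val:.3f}")
```

Models are then ranked by mean cross-validated r.m.s.e., as in the comparison reported above.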
Subsequently we determined all individual differences that were among the top 30 predictors across these six models and that were also statistically significant in the linear least-squares model after applying the false-discovery rate148 correction (Supplementary Tables 12–15). Several variables met these criteria and were therefore deemed the main individual difference predictors of PNC dimensions. For the positive dimension these were general risk propensity (GRP149), anthropomorphism (IDAQ150) and parental expectations (FMPS_PE151); for the negative dimension these were trait negative affect (PANAS_TNA152), psychopathy (SD3_P153), anthropomorphism (IDAQ150) and expressive suppression (ERQ_ES154); and for the competence dimension these were approach temperament (ATQ_AP155) and security-societal (PVQ5X_SS156). According to the most interpretable model (that is, the linear least squares) these most predictive individual differences were positively associated with the corresponding PNC dimensions.
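The false-discovery rate step can be illustrated with the Benjamini–Hochberg procedure as implemented in statsmodels; the ten p-values below are hypothetical stand-ins for a subset of the predictor coefficients.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values for 10 predictor coefficients (illustrative only)
pvals = np.array([0.0001, 0.003, 0.004, 0.02, 0.03, 0.04, 0.20, 0.45, 0.60, 0.90])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, q, r in zip(pvals, p_adj, reject):
    print(f"p = {p:.4f}  q = {q:.4f}  significant after FDR: {r}")
```

Predictors surviving this correction (here, the four smallest p-values) would then be retained as candidate main predictors.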
In Study 7, to replicate the findings we measured the most predictive individual differences in wave 1 and used linear regressions to show that they significantly predicted PNC dimensions in wave 2 (Table 6), consistent with Study 6. Furthermore, we examined various potential mediators of the relationship between each predictor and a PNC dimension using parallel mediation analyses157 percentile-bootstrapped with 10,000 samples (for mediators and mediated effects see Table 7; the rationale behind each mediator and detailed mediation analyses are available in Supplementary Table 17 and Supplementary Results, respectively).
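A percentile-bootstrapped indirect effect can be sketched as follows. This is a simplified single-mediator version with simulated data (the study used parallel mediation models with multiple mediators); the sample size and path coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# Simulated mediation chain: predictor -> mediator -> outcome (toy coefficients)
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # path a: slope of m ~ x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]      # path b: y ~ m, controlling x
    return a * b

boot = np.empty(10_000)
for i in range(10_000):
    idx = rng.integers(0, n, n)                      # resample participants
    boot[i] = indirect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A percentile confidence interval that excludes zero indicates a significant mediated effect, which is the criterion applied to the mediators in Table 7.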
To aid the interpretation of the mechanisms, below we summarize the mediated effects from Table 7 that successfully explain a portion of the relationship between the key individual differences and PNC dimensions.
For the positive dimension, GRP149 was a positive predictor because people scoring higher on this trait valued the risks associated with robot adoption (GRP_M3) and were curious to see how robots would change the world (GRP_M4). Moreover, IDAQ150 was a positive predictor because people scoring higher on this trait generally felt positive towards inanimate entities with human features (IDAQ_M3), and because interaction with such entities helped them fulfil the need to experience strong emotions regularly (IDAQ_M2). FMPS_PE151 was also a positive predictor due to its association with valuing robots because they were closer to perfection than humans (FMPS_PE_M1), because they could help humans fulfil their own high expectations (FMPS_PE_M2) and because they could help humans cope with those expectations (FMPS_PE_M6).
For the negative dimension, PANAS_TNA152 was a positive predictor because people scoring high on this trait were more likely to be in a state of activated displeasure (for example, feeling scared and upset; 12-PAC_AD158). Furthermore, SD3_P153 was a positive predictor because people scoring high on this trait were also more likely to be in the state of activated displeasure (12-PAC_AD158), had negative feelings towards other people’s inventions (SD3_P_M2) and felt inferior towards technologies in which they were not proficient (SD3_P_M3). For ERQ_ES154 and IDAQ150, we were unable to identify the mechanisms behind their relationships with the negative dimension.
For the competence dimension, ATQ_AP155 was a positive predictor because people scoring high on this trait were more likely to value exceptional skills and competencies (ATQ_AP_M5). PVQ5X_SS156 was also a positive predictor because it was associated with people linking advanced technology (for example, robots and machines) with how powerful society is (PVQ5X_SS_M4).
Discussion
In this section we first discuss (1) our findings and their contributions in relation to previous research to achieve inductive integration77 and then (2) the main limitations (for a detailed discussion see Supplementary Discussion).
Starting with Phase 1, we first discuss the robot definition (Table 2) and then the domains (Table 2). Regarding the definition, ours and that of IEEE22 both conceptualize robots as devices or entities that can perform different tasks (Part 1, Table 2), emphasize that robots can have different degrees of autonomy (Part 2, Table 2) and include robots’ composition (Part 5, Table 2). However, the two definitions also have unique elements: ours includes robots’ durability (Part 3, Table 2) and positive/negative attributes (Part 4, Table 2) whereas the IEEE definition includes robots’ capability to form robotic systems. Overall, although our definition is somewhat more nuanced, both definitions are remarkably aligned, which indicates that experts and lay individuals perceive robots similarly.
Regarding the domains in which robots operate we have identified 28 (Table 2), which is more than professional organizations usually propose (for example, the IEEE lists 18 domains on their website, https://robots.ieee.org/learn/types-of-robots/). However, this is not surprising because our list was intentionally nuanced to enable the identification of a comprehensive sample of robots, and we hope that other scholars will adopt it in their research for this purpose. It is important to emphasize that, despite the meticulous procedure used to develop the list, it is possible that (1) we failed to identify more niche domains and (2) the number of domains might increase as technology advances.
Continuing with Phase 2, we first compare the psychological processes of the PNC model (Table 3) against those reported in previous research and then discuss the model (Tables 3 and 4) more specifically. In general, participants evoked the processes identified in the literature reviewed in the Introduction, including positive feelings such as happiness10,33 (Item 32); negative feelings such as anxiety27 (Item 13); performance48 and usefulness19 (Items 4 and 5); anthropomorphism36,159 (Items 24 and 27); and various approach66 (Items 22 and 40) or avoidance26 (Items 11 and 52) behaviours (Table 3). Importantly, participants also described many infrequent or previously unidentified processes. For example, they indicated that robots contribute to human degeneration (Item 142); lead to existential questioning (Item 148); make people feel dehumanized (Item 25); help humans self-improve (Item 132); and restrict freedom (Item 114).
One of the main contributions of our research is showing that these seemingly highly diverse psychological processes fall under three dimensions: positive (P), negative (N) and competence (C) (Tables 3 and 4). In general, previous research on human–robot relationships and interactions has focused on studying and measuring specific psychological reactions to robots (for example, safety, anthropomorphism, animacy, intelligence, likeability and various social attributes159,160) but has not attempted to identify all these reactions and investigate them under an all-encompassing construct of psychological processes. In that regard, the PNC model can be seen as an integrative framework that links and organizes an exhaustive list of psychological processes, both those that researchers have already studied separately and the less common ones generated by our participants. We believe that our model moves the field forward, not only through this integration but also by enabling researchers to systematically study psychological processes regarding robots by (1) using the PNC as a guide to inform the design of future research on these processes and (2) employing the PRR scale to measure them.
One of the most interesting insights spawned by the PNC model stems from comparing it with the stereotype content model (SCM161,162). According to the SCM, people form impressions of other humans along two dimensions: warmth (that is, positive and negative social characteristics) and competence (that is, a person’s ability to successfully accomplish tasks). Although our model is broader than the SCM because it comprises all psychological processes rather than only social and intellectual characteristics, the competence dimensions from the two models are thematically comparable whereas the positive and negative attributes from the SCM’s warmth dimension are broadly aligned with our positive and negative dimensions. These comparisons suggest that (1) people use similar criteria when forming impressions of robots and humans and (2) robots’ similarity to humans does not play a role in this regard, because many of our stimuli depicted non-humanoid robots (Supplementary Table 7).
Ending with Phase 3, we discuss our findings on individual difference predictors (Tables 6 and 7) in relation to the previous relevant literature. In this respect, researchers found that extraversion, openness and anthropomorphism predicted positive responses to robots163,164,165,166; the need for cognition predicted lower negative attitudes towards robots167; and animal reminder disgust, neuroticism and religiosity predicted experiencing robots as eerie168. Among these, our research corroborated only the positive relationship between anthropomorphism and positive responses (Table 6).
We also went beyond previous research by discovering many relationships not easily anticipated by theory. For example, although we had a sound rationale behind each predictor (Supplementary Table 11) it would have been difficult to foresee psychopathy as the most robust predictor of the negative dimension (Table 6)153. We also did not expect that one of the main mechanisms behind negative robot perceptions would be negative feelings towards other people’s creations and the state of activated displeasure, which mediated the relationship between psychopathy and the negative PNC dimension (Table 7). Therefore, using a data-driven approach allowed us to generate unexpected insights, thus diversifying the body of knowledge on psychological reactions to robots79,81,98.
There are several limitations to this research. First, the stimuli were not physical robots but their depictions. These stimuli hold ecological validity because people often interact with robots indirectly (for example, via social media or various websites), and many psychological processes may therefore be shaped in this manner. Nonetheless, previous research showed that direct interaction with robots impacts people’s experiences27,169,170. Therefore, the present findings cannot establish whether our taxonomy applies to the physical counterparts of the robots depicted by our stimuli, and investigating this is currently impractical because many of these robots are inaccessible for in-person research owing to their size, cost, limited production or potential use as weapons (for example, industrial and military robots). However, this research may become possible in the future if such robots become more accessible.
Second, participants were from Western, educated, industrialized, rich and democratic171 countries (United Kingdom and United States). Because our research proposed and investigated a construct (that is, psychological processes regarding robots) from scratch, our priority was to establish its foundations. Combining the investigation of cultural differences with this agenda using equally meticulous methods would have exceeded the scope of a single article. Nevertheless, because measurement invariance analyses showed that the PNC model applies to individuals regardless of their income, age, education, use of robots at work, political orientation, ethnic identity and relationship status, it is plausible that the model would generalize to countries that differ from the United Kingdom or United States on these population characteristics. Conducting an in-depth examination of this question will be a crucial step as this research topic progresses.
Third, we recruited online participants, who tend to be more confident with technology. Although this might have influenced the findings, alternative modes of recruitment (for example, laboratory) would have yielded smaller and less representative participant samples172,173,174,175,176. Furthermore, to reduce the chance of technological proficiency biasing the findings, all machine learning models controlled for a variable indicative of technological proficiency (that is, previous frequency of interaction with robots; Supplementary Tables 11 and 12).
Finally, rapid technological development might produce robots with human-like embodiment that can perform and simulate all human activities, thereby substantially changing how people perceive robots. However, because our comparison of the PNC model and SCM161,162 indicates that people form impressions of robots and humans in a similar manner, it is unlikely that robots becoming more like humans will have a notable impact on the structure of our model. Even if it does, the PNC can be updated via the same methodological procedures we used.
Methods
This research complies with the ethics policy and procedures of the London School of Economics and Political Science (LSE), and has also been approved by its Research Ethics Committee (no. 20810). Informed consent was obtained from all participants and they were compensated for their participation. Table 1 summarizes key participant information. In Studies 4–6, participants were recruited to be reasonably representative of the UK/US populations for age, gender and geographical region, and in Study 1 (Sample 1) for gender only. More comprehensive breakdowns of participant information and the criteria used for representative sampling are available in Supplementary Tables 1 and 2.
To be included in analyses, participants had to pass seriousness checks177, instructed-response items (for example, please respond with ‘somewhat disagree’)178,179,180, understanding checks in which they identified the main research topic (that is, robots) amongst dummy topics (for example, animals or art) and completely automated public Turing tests to tell computers and humans apart (CAPTCHAs), used to safeguard against bots181. The number of these quality checks varied per study. For seriousness checks: Study 1, two (one per sample); Study 2, one; Study 3, one; Study 4, two (one per sample); Study 5, two (one per sample); Study 6, one; and Study 7, two (one per wave). For instructed-response items: Study 1, six (two in Sample 1 and four in Sample 2); Studies 2 and 3, none; Study 4, eight (four per sample); Study 5, four (two per sample); Study 6, three; and Study 7, three (two in wave 1 and one in wave 2). For understanding checks: Study 1, two (one per sample); Study 2, one; Study 3, one; Study 4, two (one per sample); Study 5, two (one per sample); Study 6, one; and Study 7, none. For CAPTCHA: Study 1, one (in Sample 2); Study 2, one; Study 3, one; Study 4, two (one per sample); Study 5, two (one per sample); Study 6, one; and Study 7, two (one per wave).
In Studies 4–7, which were quantitative, we employed pairwise deletion for missing data because various simulations have shown that this does not bias the types of analyses we used when missing data are infrequent (≤5%), even in smaller participant samples (for example, 240), and larger samples are generally more robust to missing data182,183. In our analyses, the percentage of participants with missing data never exceeded 1.95%.
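Pairwise deletion can be illustrated with a toy example (hypothetical data, not the study's): pandas computes correlations this way by default, using all rows where both variables in a pair are observed.

```python
import numpy as np
import pandas as pd

# Toy data with a few missing values
df = pd.DataFrame({
    'x': [1.0, 2.0, 3.0, 4.0, np.nan, 6.0],
    'y': [2.0, 4.0, 6.0, 8.0, 10.0, 12.0],
    'z': [1.0, 1.0, 2.0, 3.0, 5.0, np.nan],
})

# DataFrame.corr() performs pairwise deletion by default: each coefficient
# is computed from the rows where both variables in that pair are present,
# so a missing value in 'z' does not discard a row from the x-y correlation.
r = df.corr()
```

Because x and y are perfectly linear wherever both are observed, their coefficient is 1 despite the missing cell elsewhere in the same rows.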
The analyses using machine learning models (Study 6) did not rely on distributional assumptions due to cross-validation184, and neither did the mediation analyses (Study 7) due to bootstrapped confidence intervals used to test mediated effects157. All other quantitative analyses assumed a normal distribution of data. Because formal normality tests are sensitive to small deviations that do not bias findings134, we assumed variables to be normal if they had skewness between −2 and 2 and kurtosis between −7 and 7 (refs. 185,186,187). All the required variables met these criteria (Supplementary Tables 18–23). Given the large sample sizes we used, even severe deviations from normality would not compromise the validity of statistical inferences157,188,189.
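The skewness and kurtosis screen is simple to reproduce. A minimal Python sketch, assuming the kurtosis bound applies to excess kurtosis and using moment-based estimators (the paper's exact software may differ):

```python
import numpy as np

def passes_normality_screen(x, skew_bound=2.0, kurt_bound=7.0):
    """Flag a variable as acceptably normal if |skewness| < 2 and
    |excess kurtosis| < 7 (moment-based estimators)."""
    x = np.asarray(x, float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    skewness = np.mean(d ** 3) / m2 ** 1.5
    excess_kurtosis = np.mean(d ** 4) / m2 ** 2 - 3.0
    return abs(skewness) < skew_bound and abs(excess_kurtosis) < kurt_bound

rng = np.random.default_rng(0)
normal_like = rng.standard_normal(1000)          # should pass the screen
heavy_tailed = rng.standard_t(df=1, size=1000)   # Cauchy-like, should fail on kurtosis
```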
Next, we succinctly describe the methods of the studies in each phase (for a more comprehensive description, see Supplementary Methods). Study 7 was preregistered on 12 December 2021 via the Open Science Framework (OSF) and can be accessed at https://osf.io/nejvm?view_only=79b6eeee42e24cb2a977927712bdcdd2. There were no deviations from the preregistered protocol. Data and analysis code for all studies are also publicly available via the OSF at https://osf.io/2ntdy/?view_only=2cacc7b1cf2141cf8c343f3ee28dab1d.
Phase 1: mapping a comprehensive content space of robots
Study 1
Sample size. To determine the size of Sample 1, we relied on previous work showing that, in qualitative research, samples of 30–50 participants tend to reach the point of data saturation, meaning that the addition of further participants produces little new information190,191,192,193,194,195. We recruited a considerably larger sample (266; Table 1) to ensure that the study detected all important robot characteristics, because the robot definition we wanted to develop was essential for all subsequent studies. For Sample 2, we recruited 100 participants (Table 1), which is comparable to other studies using hierarchical clustering196,197, given the lack of guidelines on optimal sample sizes for this technique (for additional insights based on simulations, see Supplementary Methods).
Procedure. In Sample 1, participants first completed the consent form, after which they were presented with three items that elicited robot characteristics. In the following order, they were asked to: (1) state the first thing that comes to mind when they think about a robot; (2) define in their own words what a robot is; and (3) list as many characteristics as they could think of that they associate with robots. At the end, we assessed participant information, including gender, age, employment status and use of robots at work (Table 1). In Sample 2, after completing the consent form, participants were exposed to 277 robot characteristics produced by Sample 1 (Supplementary Table 3) and were asked to sort them into groups based on similarity. To this end, participants were provided with up to 60 empty boxes representing different groups into which they could drag the characteristics they perceived as similar. At the end, participant information was assessed as for Sample 1.
Analytic approach. We first extracted robot characteristics generated by Sample 1 participants for the three questions described in the Study 1 procedure and then rephrased those that were stated vaguely (for example, ‘appearance of thought’) into a more precise formulation (for example, ‘appears to think on its own’). Next, we deleted all characteristics that were identical and therefore redundant. However, we included many items that were overlapping or similar (for example, ‘performs actions’ and ‘performs certain actions’) to ensure that the potential content space of robot characteristics was sampled in detail (for the final list of 277 characteristics see Supplementary Table 3). The characteristics, as sorted into categories by Sample 2 participants, were subjected to hierarchical cluster analysis for categorical data114,115,116: a dissimilarity matrix was computed using Gower’s distance198,199, clusters were produced using Ward’s linkage method200,201 and the optimal number of clusters was determined via the mean silhouette width approach using the partitioning around medoids algorithm114,202,203. The five clusters that emerged were then arranged into the robot definition (Table 2).
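The clustering pipeline can be illustrated in Python on toy card-sort data, with a simple-matching dissimilarity standing in for Gower's distance on purely categorical sorts, Ward linkage, and mean silhouette width used to choose the number of clusters (the partitioning-around-medoids step is omitted for brevity; all data below are hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy card sorts: each row is one participant's grouping of 6 characteristics,
# coded as group labels.
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 1, 1, 2],
    [0, 0, 1, 1, 2, 2],
])

n_items = sorts.shape[1]
# Dissimilarity between two characteristics: the share of participants who
# placed them in different groups (a simple-matching analogue of Gower's
# distance for purely categorical sort data).
D = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        D[i, j] = np.mean(sorts[:, i] != sorts[:, j])

# Agglomerative clustering with Ward's linkage on the condensed matrix.
Z = linkage(squareform(D, checks=False), method='ward')

def mean_silhouette(D, labels):
    """Mean silhouette width from a precomputed dissimilarity matrix
    (singleton clusters contribute 0 by convention)."""
    n = len(labels)
    widths = []
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        if not same.any():
            widths.append(0.0)
            continue
        a = D[i, same].mean()
        b = min(D[i, labels == k].mean() for k in set(labels) if k != labels[i])
        widths.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return float(np.mean(widths))

# Pick the cluster count that maximizes the mean silhouette width.
best_k = max(range(2, n_items),
             key=lambda k: mean_silhouette(D, fcluster(Z, k, criterion='maxclust')))
```

On these toy sorts, the three pairs of characteristics that participants consistently grouped together emerge as three clusters.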
Study 2
Sample size. To determine sample size we followed the same guidelines as for Study 1 (Sample 1) that considered the point of data saturation in qualitative research.
Procedure. After completing the consent form, participants were presented with the robot definition developed in Study 1. They were then asked to think about and list any domains that came to mind in which humans can encounter and/or interact with robots. It was explained that, by ‘domains’, we mean any area of human life and human activity in which people encounter, interact with, use, are helped by and/or are substituted by robots. At the end, participant information was assessed as in Study 1 (Table 1).
Analytic approach. To identify the domains we performed an inductive qualitative content analysis on participants’ responses117,118,119,120,121: we first created a list of all domain items identified by participants (see Supplementary Results, subsection ‘Additional analysis output’) and then arranged these items into common categories that correspond to the domains of robot use. The first author created the initial list of categories from the domain items. The list was revised by the remaining authors and, eventually, it was consolidated by all three authors. To ensure that no important domains had been omitted we also consulted the classification of robots proposed by IEEE (https://robots.ieee.org/learn/types-of-robots/), the list of industries and sectors endorsed by the International Labour Organization (https://www.ilo.org/global/industries-and-sectors/lang--en/index.htm) and the articles from our literature review.
Phase 2: creating the taxonomy of psychological processes
Study 3
Sample size. To determine sample size we followed the same guidelines as for Study 1 (Sample 1) and Study 2. Because Study 3 aimed to identify a comprehensive range of psychological processes towards robots, which was a crucial step of our research, we recruited a substantially larger sample than required (350; Table 1) to ensure that even highly infrequent processes were detected.
Procedure. Participants first completed the consent form and were then randomly allocated to five out of the 28 domains we developed (Table 2). After reading the definition of robots (Table 2), we prompted them to think about robots from the allocated domains by writing about interactions they had with such robots, or else about interactions they could imagine or were exposed to via media. To assess participants’ psychological processes, we then asked them to list and describe feelings they had experienced (for affective responses), thoughts they had (for cognitive responses) and actions they engaged in (for behavioural responses) when they interacted with any robots they could think of from each domain, or to write about feelings, thoughts and actions they could conceive of if they had never interacted with these robots. At the end, participant information was assessed as in Studies 1 and 2 (Table 1).
Analytic approach. We implemented iterative categorization122. This qualitative analysis involved first splitting participants’ responses to questions assessing their psychological processes into key points (that is, separate issues or thoughts; for example, ‘I think this will be the future’) and then grouping these points into themes based on similarity. Of the 334 participants included in analyses (Table 1), only four produced meaningless responses that could not be analysed; the remaining 330 generated 10,332 valid key points (approximately 31 per participant) that were analysed.
Study 4
Sample size. Because power analyses are difficult to implement for exploratory factor analyses (EFAs) before any parameters are known, to determine the sizes of Samples 1 and 2 we consulted various resources that estimate the optimal sample size for EFAs (for a more comprehensive description, see Supplementary Methods). Because a size of 1,500 met all the estimates, we recruited the samples required to reach this number after accounting for exclusions (Table 1).
Procedure. The procedure for both samples was identical. After completing the consent form, participants were randomly allocated to a domain (Table 2) and received a specific example of a robot from that domain (Supplementary Table 7) that included an image and a description approximately eight lines long. For the sex domain, two robot examples were created (one male and one female) and participants assigned to this domain were randomly allocated to one of them. Participants were then asked to answer 149 items (Table 3), presented in a randomized order, about the robot in question. At the end, participant information was assessed as in Studies 1, 2 and 3 (Table 1).
Analytic approach. For both samples, we planned several steps to determine the optimal factor structure. First, the Kaiser–Meyer–Olkin measure of sampling adequacy and Bartlett’s test of sphericity were required to show that our data were suitable for EFAs125. Second, to determine the preliminary number of factors to examine in EFAs, we used parallel analysis126,127,204, very simple structure128, Velicer’s MAP test129, optimal coordinates130, acceleration factor130, the Kaiser rule131 and visual inspection of scree plots132. This was advisable because consulting several criteria makes it possible to understand the range within which the optimal number of factors potentially lies82,135,136,205,206. Next, we aimed to evaluate the largest factor solution identified in the previous step against several statistical benchmarks using maximum-likelihood EFAs104,105 with Kaiser-normalized123 promax rotation106,124. Namely, to be accepted, a factor solution was required to produce only valid factors (that is, those with at least three items having loadings ≥0.5 and cross-loadings <0.32)105,125,133,134. If these criteria were not met, we aimed to decrease the number of factors by one and evaluate the new solution; this procedure would continue until a satisfying solution was identified. Finally, the accepted factor structure also had to have factors that are coherent and easy to interpret135,136. Importantly, this approach to selecting the best structure is not only statistically and semantically viable but has precedent in previous taxonomic research82,205.
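Of these criteria, Horn's parallel analysis is the most easily sketched: factors are retained when their observed eigenvalues exceed those obtained from random data of the same dimensions. A Python illustration on simulated two-factor data (not the study's data or software):

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=100, quantile=95):
    """Horn's parallel analysis on a (participants x items) matrix: retain
    factors whose observed correlation-matrix eigenvalues exceed the chosen
    quantile of eigenvalues from same-sized random normal data."""
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        rnd = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(rnd, rowvar=False))[::-1]
    threshold = np.percentile(sims, quantile, axis=0)
    return int(np.sum(obs > threshold))

# Toy data: two latent factors driving six items with moderate noise.
factors = rng.standard_normal((500, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
items = factors @ loadings.T + 0.5 * rng.standard_normal((500, 6))
n_factors = parallel_analysis(items)
```

With this clean two-factor structure, the analysis recovers two factors.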
Study 5
Sample size. To determine sample size we used Monte Carlo simulations207 based on the data from Samples 1 and 2 (Study 4). Details are available in Supplementary Methods.
Procedure. The procedure for both samples was identical. After completing the consent form, participants were randomly allocated to one robot example. The randomization procedure was the same as in Study 4 except that there were two (rather than one) possible robot examples per domain (Supplementary Table 7). The sex domain had four examples—two male and two female robots. The descriptions of robots were also consistent with Study 4. Participants were then asked to answer the 37 selected items (Table 4), presented in a randomized order, about the robot in question. At the end, participant information was assessed as in Studies 1, 2, 3 and 4 (Table 1).
Analytic approach. The maximum-likelihood estimator with robust standard errors137,138 was implemented using ESEM82,107,108,208 to test model fit. Target rotation with all cross-loadings specified as targets of zero was chosen139,140. The following fit criteria were used141,142,143: SRMR < 0.05, excellent fit; SRMR = 0.05–0.08, good fit; SRMR > 0.08, poor fit; CFI > 0.95, excellent fit; CFI = 0.90–0.95, good fit; CFI < 0.90, poor fit; RMSEA < 0.06, excellent fit; RMSEA = 0.06–0.10, good fit; and RMSEA > 0.10, poor fit. For testing of configural measurement invariance the same fit criteria were used. For metric invariance, changes in SRMR, CFI and RMSEA were required to be ≤0.030, 0.010 and 0.015, respectively, and, for scalar invariance, ≤0.015, 0.010 and 0.015, respectively144,146.
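These cutoffs can be expressed as a small helper; the thresholds below are taken directly from the text, while the function names are ours:

```python
def rate_fit(srmr, cfi, rmsea):
    """Classify each fit index using the cutoffs stated in the text."""
    return {
        'SRMR':  'excellent' if srmr < 0.05 else 'good' if srmr <= 0.08 else 'poor',
        'CFI':   'excellent' if cfi > 0.95 else 'good' if cfi >= 0.90 else 'poor',
        'RMSEA': 'excellent' if rmsea < 0.06 else 'good' if rmsea <= 0.10 else 'poor',
    }

def invariance_holds(d_srmr, d_cfi, d_rmsea, level='metric'):
    """Check metric/scalar invariance via the maximum permitted (absolute)
    changes in SRMR, CFI and RMSEA between nested models."""
    limits = {'metric': (0.030, 0.010, 0.015),
              'scalar': (0.015, 0.010, 0.015)}[level]
    return d_srmr <= limits[0] and d_cfi <= limits[1] and d_rmsea <= limits[2]
```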
Phase 3: examining individual difference predictors
Study 6
Sample size. For machine learning algorithms combined with cross-validation, there are no straightforward guidelines for computing power analyses. Simulations showed that, for the tenfold cross-validations we were planning to use, a sample of 2,000 leads to high generalizability (that is, a likelihood that the results will apply to other samples from the same population) without inflating the time taken to run the models209. Therefore, we aimed to recruit a sample that would have approximately 2,200 participants after accounting for exclusions, in case of any additional missing data.
Procedure. After completing the consent form, participants were randomly allocated to one robot example as in Study 5 and asked to answer the PRR scale items (Table 4) presented in a randomized order. They then completed measures that assessed the 79 individual differences we tested as predictors (Supplementary Table 11), ranging from general personality traits, such as the Big Five (ref. 210) or approach temperament155, to more specific ones, such as psychopathy153. We also measured covariates for inclusion in the models alongside the individual differences (that is, familiarity with the robot, frequency of interaction, descriptive norms, injunctive norms, age, income and political orientation; Supplementary Table 11). Finally, participant information was assessed as in the previous studies, with the addition of education level, ethnic identity and relationship status (Supplementary Table 1).
Analytic approach. We implemented a rigorous multistep procedure to select the most predictive individual differences. Using the caret package109,110 in R, we computed the following 11 machine learning models for each PNC dimension separately: linear least squares, ridge, lasso, elastic net, k-nearest neighbours, regression trees, conditional inference trees, random forest, conditional random forest, neural networks and neural networks with a principal component step. For each model, tenfold cross-validation184,211,212,213,214 was implemented and all 79 individual differences plus covariates were used as predictors.
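For readers less familiar with this setup, tenfold cross-validation fits a model on nine folds and scores it on the held-out tenth, rotating through all folds. A minimal Python sketch with ordinary least squares standing in for any of the 11 algorithms (simulated data; the study itself used the caret package in R):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ np.array([0.5, -0.3, 0.2, 0.0, 0.0]) + rng.standard_normal(n)

def cv_rmse(X, y, k=10):
    """Per-fold r.m.s.e. of ordinary least squares under k-fold cross-validation."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    rmses = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)                       # all cases outside the fold
        design = np.column_stack([np.ones(len(train)), X[train]])
        beta = np.linalg.lstsq(design, y[train], rcond=None)[0]
        pred = np.column_stack([np.ones(len(fold)), X[fold]]) @ beta
        rmses.append(np.sqrt(np.mean((y[fold] - pred) ** 2)))
    return np.array(rmses)

fold_rmse = cv_rmse(X, y)
```

Each model's vector of fold-level errors is what the selection step below compares.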
The most predictive models were selected using root-mean-square error (r.m.s.e.)109,110,184. For each PNC dimension, the model with the lowest r.m.s.e. was identified and the remaining models were compared with it using paired-samples t-tests (a Bonferroni-corrected α of 0.00167 was used as the significance criterion). Ultimately, the model with the lowest r.m.s.e. and those not significantly different from it were identified as the most predictive models. For each of these models, we first identified the 30 most important predictors using the VarImp function in R110 and then identified the individual differences that appeared in the top 30 across all models.
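This selection step can be sketched as follows, with hypothetical fold-level r.m.s.e. values and scipy's paired t-test; the most predictive model is the one with the lowest mean error, and the Bonferroni correction here is simplified relative to the α = 0.00167 used in the study:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
# Hypothetical per-fold r.m.s.e. values from tenfold cross-validation
fold_rmse = {
    'ridge':         0.80 + 0.02 * rng.standard_normal(10),
    'lasso':         0.81 + 0.02 * rng.standard_normal(10),
    'random_forest': 0.95 + 0.02 * rng.standard_normal(10),
}

# The most predictive model minimizes mean r.m.s.e. across folds ...
best = min(fold_rmse, key=lambda m: fold_rmse[m].mean())

# ... and any model whose fold-level errors do not differ significantly from
# it (paired t-test, Bonferroni-corrected alpha) is retained alongside it.
alpha = 0.05 / (len(fold_rmse) - 1)
selected = [best] + [
    m for m in fold_rmse
    if m != best and ttest_rel(fold_rmse[m], fold_rmse[best]).pvalue >= alpha
]
```

Here the clearly worse model is eliminated, while models statistically indistinguishable from the best survive.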
Based on the linear least-squares model—which is in essence a linear regression algorithm combined with cross-validation and thus outputs P values—we retained only the most important individual differences identified in the previous step that were also statistically significant after applying false-discovery rate correction148. We used this approach because in Study 7 we aimed to replicate the selected predictors using linear regressions; therefore, we wanted to further minimize the likelihood that these predictors are false positives.
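The false-discovery-rate step can be illustrated with a generic Benjamini–Hochberg sketch, which may differ in detail from the exact correction cited (ref. 148):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg false-discovery-rate correction: returns a boolean
    mask of the p values that remain significant at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Step-up thresholds q * rank / m compared against the sorted p values.
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    keep = np.zeros(m, bool)
    if below.any():
        cutoff = np.max(np.where(below)[0])   # largest rank still under threshold
        keep[order[:cutoff + 1]] = True
    return keep
```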
Study 7
Sample size. Because this study tested the key predictors identified in Study 6, sample size was estimated using power analyses215 based on the parameters from that study (Supplementary Methods).
Procedure. The study consisted of two waves. In wave 1, participants first completed the consent form and were then presented with, in a randomized order, the measures assessing the most predictive individual differences identified in Study 6 (Table 6). Finally, participant information was assessed as in Studies 1–5. Approximately 4 days after completing wave 1, participants were invited to participate in wave 2. They first completed the consent form and were then presented with the items measuring the mediators (Table 7) in a randomized order. Subsequently, they were randomly allocated to a robot example as in Studies 5 and 6 and asked to answer the PRR scale items (Table 4), presented in a randomized order.
Analytic approach. To test whether the key individual differences predicted the relevant PNC dimensions, we used linear regressions, one per predictor (Table 6). Furthermore, to identify the most important mediators, we used the Process package (Model 4 (ref. 157)) to perform parallel mediation analyses (that is, with all potential mediators analysed together for the relevant predictor; Table 7), percentile-bootstrapped with 10,000 samples. In line with the Benjamini–Yekutieli correction216,217, the significance criterion was 0.01 for the regression analyses, whereas for the mediated effects we used 99% confidence intervals, which are equivalent to this criterion.
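The bootstrapped test of a mediated effect can be illustrated on simulated single-mediator data with percentile confidence intervals (a generic sketch, not the Process macro used in the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.standard_normal(n)                       # predictor
m = 0.5 * x + rng.standard_normal(n)             # mediator (true a = 0.5)
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)   # outcome (true b = 0.4)

def indirect_effect(x, m, y):
    """a*b indirect effect: a from regressing m on x, b from regressing
    y on m while controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement, recompute a*b.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boots, [0.5, 99.5])       # 99% percentile CI (alpha = 0.01)
significant = not (lo <= 0.0 <= hi)
```

Because the true indirect effect is 0.5 × 0.4 = 0.2, the 99% interval excludes zero and the mediated effect is flagged as significant.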
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The data that support the findings from all studies, as well as the materials used, are publicly available via the OSF (https://osf.io/2ntdy/?view_only=2cacc7b1cf2141cf8c343f3ee28dab1d), except for the stimuli used in Studies 4–7, which can be obtained from the corresponding author on request.
Code availability
The code for all the analyses in the studies conducted is publicly available via the OSF at https://osf.io/2ntdy/?view_only=2cacc7b1cf2141cf8c343f3ee28dab1d.
References
Miller, M. R. & Miller, R. Robots and Robotics: Principles, Systems, and Industrial Applications (McGraw-Hill Education, 2017).
Smith, A. & Anderson, J. AI, robotics, and the future of jobs. Pew Research Center https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/ (2014).
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B. & Tanaka, F. Social robots for education: a review. Sci. Robot. 3, eaat5954 (2018).
Abdi, J., Al-Hindawi, A., Ng, T. & Vizcaychipi, M. P. Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open 8, e018815 (2018).
Munde, S. Robotics Market Research Report: Forecast Till 2030 (2021); https://www.marketresearchfuture.com/reports/robotics-market/toc
Broadbent, E. Interactions with robots: the truths we reveal about ourselves. Annu. Rev. Psychol. 68, 627–652 (2017).
Epley, N., Waytz, A. & Cacioppo, J. T. On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114, 864–886 (2007).
Furlough, C., Stokes, T. & Gillan, D. J. Attributing blame to robots: I. The influence of robot autonomy. Hum. Factors 63, 592–602 (2021).
Schermerhorn, P., Scheutz, M. & Crowell, C. R. Robot social presence and gender: do females view robots differently than males? In Proc. 3rd ACM/IEEE International Conference on Human robot interaction 263–270 (2008); https://doi.org/10.1145/1349822.1349857
Stock-Homburg, R. Survey of emotions in human–robot interactions: perspectives from robotic psychology on 20 years of research. Int. J. Soc. Robot. 14, 389–411 (2021).
Kuo, C. M., Chen, L. C. & Tseng, C. Y. Investigating an innovative service with hospitality robots. Int. J. Contemp. Hosp. Manag. 29, 1305–1321 (2017).
Murphy, R. R., Nomura, T., Billard, A. & Burke, J. L. Human–robot interaction. IEEE Robot. Autom. Mag. 17, 85–89 (2010).
Chen, S. X. et al. Conceptualizing psychological processes in response to globalization: components, antecedents, and consequences of global orientations. J. Pers. Soc. Psychol. 110, 302–331 (2016).
Dolan, R. J. Emotion, cognition, and behavior. Science 298, 1191–1194 (2002).
Cacioppo, J. T. & Decety, J. What are the brain mechanisms on which psychological processes are based? Perspect. Psychol. Sci. 4, 10–18 (2009).
Bartneck, C. & Forlizzi, J. A design-centred framework for social human-robot interaction. In RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication 591–594 (IEEE, 2004); https://doi.org/10.1109/ROMAN.2004.1374827
Bendel, O. SSML for sex robots. In International Conference on Love and Sex with Robots 1–11 (Springer, 2017); https://doi.org/10.1007/978-3-319-76369-9_1
Herath, D., Kroos, C. & Stelarc. Robots and Art: Exploring an Unlikely Symbiosis (Springer, 2016).
Kamide, H., Takubo, T., Ohara, K., Mae, Y. & Arai, T. Impressions of humanoids: the development of a measure for evaluating a humanoid. Int. J. Soc. Robot. 6, 33–44 (2014).
Young, J. E., Hawkins, R., Sharlin, E. & Igarashi, T. Toward acceptable domestic robots: applying insights from social psychology. Int. J. Soc. Robot. 1, 95–108 (2009).
Lo, K.-H. in Love and Sex with Robots (eds. Cheok, A. D. & Levy, D.) 83–95 (Springer International Publishing, 2018).
IEEE Standard Ontologies for Robotics and Automation (2015); https://doi.org/10.1109/IEEESTD.2015.7084073
Jackson, J. C., Castelo, N. & Gray, K. Could a rising robot workforce make humans less prejudiced? Am. Psychol. 75, 969–982 (2020).
McClure, P. K. ‘You’re fired’, says the robot: the rise of automation in the workplace, technophobes, and fears of unemployment. Soc. Sci. Comput. Rev. 36, 139–156 (2018).
Savela, N., Oksanen, A., Pellert, M. & Garcia, D. Emotional reactions to robot colleagues in a role-playing experiment. Int. J. Inf. Manag. 60, 102361 (2021).
Broadbent, E., MacDonald, B., Jago, L., Juergens, M. & Mazharullah, O. Human reactions to good and bad robots. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems 3703–3708 (2007); https://doi.org/10.1109/IROS.2007.4398982
Nomura, T., Kanda, T., Suzuki, T. & Kato, K. Prediction of human behavior in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots. IEEE Trans. Robot. 24, 442–451 (2008).
MacDorman, K. F. & Chattopadhyay, D. Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition 146, 190–205 (2016).
Bonarini, A., Clasadonte, F., Garzotto, F., Gelsomini, M. & Romero, M. Playful interaction with Teo, a mobile robot for children with neurodevelopmental disorders. In Proc. 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion 223–231 (Association for Computing Machinery, 2016); https://doi.org/10.1145/3019943.3019976
Liu, S. X., Shen, Q. & Hancock, J. Can a social robot be too warm or too competent? Older Chinese adults’ perceptions of social robots and vulnerabilities. Comput. Hum. Behav. 125, 106942 (2021).
Shank, D. B., Graves, C., Gott, A., Gamez, P. & Rodriguez, S. Feeling our way to machine minds: people’s emotions when perceiving mind in artificial intelligence. Comput. Hum. Behav. 98, 256–266 (2019).
Sawabe, T. et al. Robot touch with speech boosts positive emotions. Sci. Rep. 12, 6884 (2022).
Smith, E. R., Sherrin, S., Fraune, M. R. & Šabanović, S. Positive emotions, more than anxiety or other negative emotions, predict willingness to interact with robots. Pers. Soc. Psychol. Bull. 46, 1270–1283 (2020).
Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S. & Eimler, S. C. An experimental study on emotional reactions towards a robot. Int. J. Soc. Robot. 5, 17–34 (2013).
Suzuki, Y., Galli, L., Ikeda, A., Itakura, S. & Kitazaki, M. Measuring empathy for human and robot hand pain using electroencephalography. Sci. Rep. 5, 15924 (2015).
Riek, L. D., Rabinowitch, T. C., Chakrabarti, B. & Robinson, P. How anthropomorphism affects empathy toward robots. In Proc. 4th ACM/IEEE International Conference on Human–Robot Interaction 245–246 (ACM, 2009).
Seo, S. H., Geiskkovitch, D., Nakane, M., King, C. & Young, J. E. Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot. In Proc. 10th Annual ACM/IEEE International Conference on Human–Robot Interaction 125–132 (Association for Computing Machinery, 2015); https://doi.org/10.1145/2696454.2696471
Darling, K., Nandy, P. & Breazeal, C. Empathic concern and the effect of stories in human–robot interaction. In 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 770–775 (2015); https://doi.org/10.1109/ROMAN.2015.7333675
You, S. & Robert, L. Emotional attachment, performance, and viability in teams collaborating with embodied physical action (EPA) robots. J. Assoc. Inf. Syst. 19, 377–407 (2017).
Weiss, A., Wurhofer, D. & Tscheligi, M. ‘I love this dog’—children’s emotional attachment to the robotic dog AIBO. Int. J. Soc. Robot. 1, 243–248 (2009).
Döring, N. & Poeschl, S. Love and sex with robots: a content analysis of media representations. Int. J. Soc. Robot. 11, 665–677 (2019).
McArthur, N. & Twist, M. L. The rise of digisexuality: therapeutic challenges and possibilities. Sex. Relat. Ther. 32, 334–344 (2017).
Szczuka, J. M. & Krämer, N. C. Not only the lonely—how men explicitly and implicitly evaluate the attractiveness of sex robots in comparison to the attractiveness of women, and personal characteristics influencing this evaluation. Multimodal Technol. Interact. 1, e51–e55 (2017).
Woodward, S. Digisexuality, erotobotics and the future of intimacy. N. Z. Sociol. 35, 99–119 (2020).
Scheunemann, M. M., Cuijpers, R. H. & Salge, C. Warmth and competence to predict human preference of robot behavior in physical human–robot interaction. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) 1340–1347 (IEEE, 2020).
Spatola, N. et al. National stereotypes and robotsʼ perception: the “made in” effect. Front. Robot. AI 6, 21 (2019).
Spatola, N. & Urbanska, K. God-like robots: the semantic overlap between representation of divine and artificial entities. AI Soc. 35, 329–341 (2020).
Puntoni, S., Reczek, R. W., Giesler, M. & Botti, S. Consumers and artificial intelligence: an experiential perspective. J. Mark. 85, 131–151 (2021).
de Graaf, M. M. A. & Ben Allouch, S. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 61, 1476–1486 (2013).
Pandey, A., Kaushik, A., Jha, A. K. & Kapse, G. A technological survey on autonomous home cleaning robots. Int. J. Sci. Res. Publ. https://www.ijsrp.org/research-paper-0414/ijsrp-p2852.pdf (2014).
Ray, C., Mondada, F. & Siegwart, R. What do people expect from robots? In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems 3816–3821 (IEEE, 2008).
Söderlund, M. Service robots with (perceived) theory of mind: an examination of humans’ reactions. J. Retail. Consum. Serv. 67, 102999 (2022).
Blut, M., Wang, C., Wünderlich, N. V. & Brock, C. Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J. Acad. Mark. Sci. 49, 632–658 (2021).
Damiano, L. & Dumouchel, P. Anthropomorphism in human–robot co-evolution. Front. Psychol. 9, 468 (2018).
Yam, K. C. et al. Robots at work: people prefer—and forgive—service robots with perceived feelings. J. Appl. Psychol. 106, 1557–1572 (2021).
Yam, K. C. et al. When your boss is a robot: workers are more spiteful to robot supervisors that seem more human. J. Exp. Soc. Psychol. 102, 104360 (2022).
Gray, H. M., Gray, K. & Wegner, D. M. Dimensions of mind perception. Science 315, 619 (2007).
Li, Y. & Wang, C. Effect of customer’s perception on service robot acceptance. Int. J. Consum. Stud. 46, 1241–1261 (2022).
Ötting, S. K., Masjutin, L., Steil, J. J. & Maier, G. W. Let’s work together: a meta-analysis on robot design features that enable successful human–robot interaction at work. Hum. Factors 64, 1027–1050 (2020).
Brondi, S., Pivetti, M., Battista, S. & Sarrica, M. What do we expect from robots? Social representations, attitudes and evaluations of robots in daily life. Technol. Soc. 66, 101663 (2021).
Szollosy, M. Freud, Frankenstein and our fear of robots: projection in our cultural perception of technology. AI Soc. 32, 433–439 (2017).
Kamide, H., Mae, Y., Takubo, T., Ohara, K. & Arai, T. Development of a scale of perception to humanoid robots: PERNOD. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems 5830–5835 (IEEE, 2010); https://doi.org/10.1109/IROS.2010.5648955
Coeckelbergh, M. Can we trust robots? Ethics Inf. Technol. 14, 53–60 (2012).
Naneva, S., Sarda Gou, M., Webb, T. L. & Prescott, T. J. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 12, 1179–1201 (2020).
Plaks, J. E., Rodriguez, L. B. & Ayad, R. Identifying psychological features of robots that encourage and discourage trust. Comput. Hum. Behav. 134, 107301 (2022).
Birnbaum, G. E. et al. Machines as a source of consolation: robot responsiveness increases human approach behavior and desire for companionship. In 2016 11th ACM/IEEE International Conference on Human–Robot Interaction (HRI) 165–172 (IEEE, 2016); https://doi.org/10.1109/HRI.2016.7451748
Elliot, A. J. Approach and avoidance motivation and achievement goals. Educ. Psychol. 34, 169–189 (1999).
Elliot, A. J., Gable, S. L. & Mapes, R. R. Approach and avoidance motivation in the social domain. Pers. Soc. Psychol. Bull. 32, 378–391 (2006).
Conchinha, C. & Freitas, J. C. Robots & NEE: learning by playing with robots in an inclusive school setting. In 2015 International Symposium on Computers in Education (SIIE) 86–91 (IEEE, 2015); https://doi.org/10.1109/SIIE.2015.7451654
Grau, A., Indri, M., Bello, L. L. & Sauter, T. Robots in industry: the past, present, and future of a growing collaboration with humans. IEEE Ind. Electron. Mag. 15, 50–61 (2020).
Brščić, D., Kidokoro, H., Suehiro, Y. & Kanda, T. Escaping from children’s abuse of social robots. In Proc. 10th Annual ACM/IEEE International Conference on Human–Robot Interaction (HRI’15) 59–66 (ACM Press, 2015); https://doi.org/10.1145/2696454.2696468
Nomura, T., Kanda, T., Kidokoro, H., Suehiro, Y. & Yamada, S. Why do children abuse robots? Interact. Stud. 17, 347–369 (2016).
Salvini, P. et al. How safe are service robots in urban environments? Bullying a robot. In 2010 IEEE RO-MAN 1–7 (IEEE, 2010).
Haddadin, S., Albu-Schäffer, A. & Hirzinger, G. Requirements for safe robots: measurements, analysis and new insights. Int. J. Robot. Res. 28, 1507–1527 (2009).
Robla-Gómez, S. et al. Working together: a review on safe human–robot collaboration in industrial environments. IEEE Access 5, 26754–26773 (2017).
Granulo, A., Fuchs, C. & Puntoni, S. Psychological reactions to human versus robotic job replacement. Nat. Hum. Behav. 3, 1062–1069 (2019).
Locke, E. A. The case for inductive theory building. J. Manage. 33, 867–890 (2007).
Locke, E. A. Theory building, replication, and behavioral priming: where do we need to go from here? Perspect. Psychol. Sci. 10, 408–414 (2015).
Woo, S. E., O’Boyle, E. H. & Spector, P. E. Best practices in developing, conducting, and evaluating inductive research. Hum. Resour. Manag. Rev. 27, 255–264 (2017).
Eisenhardt, K. M. & Graebner, M. E. Theory building from cases: opportunities and challenges. Acad. Manag. J. 50, 25–32 (2007).
Janiszewski, C. & van Osselaer, S. M. J. The benefits of candidly reporting consumer research. J. Consum. Psychol. 31, 633–646 (2021).
Parrigon, S., Woo, S. E., Tay, L. & Wang, T. CAPTION-ing the situation: a lexically-derived taxonomy of psychological situation characteristics. J. Pers. Soc. Psychol. 112, 642–681 (2017).
Cronbach, L. J. & Meehl, P. E. Construct validity in psychological tests. Psychol. Bull. 52, 281–302 (1955).
Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340 (1989).
Davis, F. D., Bagozzi, R. P. & Warshaw, P. R. User acceptance of computer technology: a comparison of two theoretical models. Manag. Sci. 35, 982–1003 (1989).
Marangunić, N. & Granić, A. Technology acceptance model: a literature review from 1986 to 2013. Univers. Access Inf. Soc. 14, 81–95 (2015).
Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. User acceptance of information technology: toward a unified view. MIS Q. 27, 425–478 (2003).
Venkatesh, V., Thong, J. Y. L. & Xu, X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 36, 157–178 (2012).
Williams, M. D., Rana, N. P. & Dwivedi, Y. K. The unified theory of acceptance and use of technology (UTAUT): a literature review. J. Enterp. Inf. Manag. 28, 443–488 (2015).
Heerink, M., Kröse, B., Evers, V. & Wielinga, B. Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int. J. Soc. Robot. 2, 361–375 (2010).
Reeves, B. & Nass, C. I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (Cambridge Univ. Press, 1996).
Nass, C., Steuer, J. & Tauber, E. R. Computers are social actors. In Proc. SIGCHI Conference on Human Factors in Computing Systems 72–78 (Association for Computing Machinery, 1994); https://doi.org/10.1145/191666.191703
Gambino, A., Fox, J. & Ratan, R. A. Building a stronger CASA: extending the computers are social actors paradigm. Hum. Mach. Commun. 1, 71–85 (2020).
Bishop, D. Rein in the four horsemen of irreproducibility. Nature 568, 435 (2019).
Kerr, N. L. HARKing: hypothesizing after the results are known. Personal. Soc. Psychol. Rev. 2, 196–217 (1998).
Murayama, K., Pekrun, R. & Fiedler, K. Research practices that can prevent an inflation of false-positive rates. Personal. Soc. Psychol. Rev. 18, 107–118 (2014).
Rubin, M. When does HARKing hurt? Identifying when different types of undisclosed post hoc hypothesizing harm scientific progress. Rev. Gen. Psychol. 21, 308–320 (2017).
Jack, R. E., Crivelli, C. & Wheatley, T. Data-driven methods to diversify knowledge of human psychology. Trends Cogn. Sci. 22, 1–5 (2018).
Botvinik-Nezer, R. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature 582, 84–88 (2020).
Nosek, B. A. et al. Replicability, robustness, and reproducibility in psychological science. Annu. Rev. Psychol. 73, 719–748 (2022).
Schweinsberg, M. et al. Same data, different conclusions: radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis. Organ. Behav. Hum. Decis. Process. 165, 228–249 (2021).
Silberzahn, R. et al. Many analysts, one data set: making transparent how variations in analytic choices affect results. Adv. Methods Pract. Psychol. Sci. 1, 337–356 (2018).
Breznau, N. et al. Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proc. Natl Acad. Sci. USA 119, e2203150119 (2022).
Schmitt, T. A. Current methodological considerations in exploratory and confirmatory factor analysis. J. Psychoeduc. Assess. 29, 304–321 (2011).
Costello, A. B. & Osborne, J. W. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 10, 1–9 (2005).
Schmitt, T. A. & Sass, D. A. Rotation criteria and hypothesis testing for exploratory factor analysis: Implications for factor pattern loadings and interfactor correlations. Educ. Psychol. Meas. 71, 95–113 (2011).
Asparouhov, T. & Muthén, B. Exploratory structural equation modeling. Struct. Equ. Modeling 16, 397–438 (2009).
Marsh, H. W., Morin, A. J., Parker, P. D. & Kaur, G. Exploratory structural equation modeling: an integration of the best features of exploratory and confirmatory factor analysis. Annu. Rev. Clin. Psychol. 10, 85–110 (2014).
Kuhn, M. Building predictive models in R using the caret package. J. Stat. Softw. 28, 1–26 (2008).
Kuhn, M. caret: Classification and regression training. R package (2023); https://CRAN.R-project.org/package=caret
Westfall, J., Judd, C. M. & Kenny, D. A. Replicating studies in which samples of participants respond to samples of stimuli. Perspect. Psychol. Sci. 10, 390–399 (2015).
Westfall, J., Kenny, D. A. & Judd, C. M. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli. J. Exp. Psychol. Gen. 143, 2020–2045 (2014).
ISO 8373:2021 Robotics — Vocabulary (International Organization for Standardization, 2021).
Kaufman, L. & Rousseeuw, P. J. Finding Groups in Data: an Introduction to Cluster Analysis (John Wiley & Sons, 2005).
Nielsen, F. in Introduction to HPC with MPI for Data Science (ed. Nielsen, F.) 195–211 (Springer, 2016).
Šulc, Z. & Řezanková, H. Comparison of similarity measures for categorical data in hierarchical clustering. J. Classif. 36, 58–72 (2019).
Elo, S. & Kyngäs, H. The qualitative content analysis process. J. Adv. Nurs. 62, 107–115 (2008).
Elo, S. et al. Qualitative content analysis: a focus on trustworthiness. SAGE Open 4, 2158244014522633 (2014).
Hsieh, H. F. & Shannon, S. E. Three approaches to qualitative content analysis. Qual. Health Res. 15, 1277–1288 (2005).
Mayring, P. Qualitative content analysis. Companion Qual. Res. 1, 159–176 (2004).
Vaismoradi, M., Turunen, H. & Bondas, T. Content analysis and thematic analysis: implications for conducting a qualitative descriptive study. Nurs. Health Sci. 15, 398–405 (2013).
Neale, J. Iterative categorization (IC): a systematic technique for analysing qualitative data. Addiction 111, 1096–1106 (2016).
Kaiser, H. F. The varimax criterion for analytic rotation in factor analysis. Psychometrika 23, 187–200 (1958).
Hendrickson, A. E. & White, P. O. Promax: a quick method for rotation to oblique simple structure. Br. J. Stat. Psychol. 17, 65–70 (1964).
Beavers, A. S. et al. Practical considerations for using exploratory factor analysis in educational research. Pract. Assess. Res. Eval. 18, 6 (2013).
Dinno, A. Exploring the sensitivity of Horn’s parallel analysis to the distributional form of simulated data. Multivar. Behav. Res. 44, 362–388 (2009).
Horn, J. L. A rationale and test for the number of factors in factor analysis. Psychometrika 30, 179–185 (1965).
Revelle, W. & Rocklin, T. Very simple structure: an alternative procedure for estimating the optimal number of interpretable factors. Multivar. Behav. Res. 14, 403–414 (1979).
Zwick, W. R. & Velicer, W. F. Comparison of five rules for determining the number of components to retain. Psychol. Bull. 99, 432–442 (1986).
Raîche, G., Walls, T. A., Magis, D., Riopel, M. & Blais, J.-G. Non-graphical solutions for Cattell’s scree test. Methodology 9, 23–29 (2013).
Luo, L., Arizmendi, C. & Gates, K. M. Exploratory factor analysis (EFA) programs in R. Struct. Equ. Modeling 26, 819–826 (2019).
Cattell, R. B. The scree test for the number of factors. Multivar. Behav. Res. 1, 245–276 (1966).
Schmitt, T. A., Sass, D. A., Chappelle, W. & Thompson, W. Selecting the ‘best’ factor structure and moving measurement validation forward: an illustration. J. Pers. Assess. 100, 345–362 (2018).
Tabachnick, B. G., Fidell, L. S. & Ullman, J. B. Using Multivariate Statistics (Pearson, 2019).
Gorsuch, R. L. in Handbook Of Psychology: Research Methods In Psychology, Vol. 2 (eds. Schinka, J. A. & Velicer, W. F.) 143–164 (Wiley, 2003).
Gorsuch, R. L. Factor Analysis: Classic Edition (Routledge, 2014); https://doi.org/10.4324/9781315735740
Muthén, L. K. & Muthén, B. O. Mplus User’s Guide (Muthén & Muthén, 2017).
Wang, J. & Wang, X. Structural Equation Modeling: Applications Using Mplus (John Wiley & Sons, 2019).
Browne, M. W. An overview of analytic rotation in exploratory factor analysis. Multivar. Behav. Res. 36, 111–150 (2001).
Xiao, Y., Liu, H. & Hau, K. T. A comparison of CFA, ESEM, and BSEM in test structure analysis. Struct. Equ. Modeling 26, 665–677 (2019).
Hu, L. T. & Bentler, P. M. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Modeling 6, 1–55 (1999).
Hooper, D., Coughlan, J. & Mullen, M. R. Structural equation modelling: guidelines for determining model fit. Electron. J. Bus. Res. Methods 6, 53–60 (2008).
Jackson, D. L., Gillaspy, J. A. Jr & Purc-Stephenson, R. Reporting practices in confirmatory factor analysis: an overview and some recommendations. Psychol. Methods 14, 6–23 (2009).
Chen, F. F. Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Modeling 14, 464–504 (2007).
Chen, F. F. What happens if we compare chopsticks with forks? The impact of making inappropriate comparisons in cross-cultural research. J. Pers. Soc. Psychol. 95, 1005–1018 (2008).
Putnick, D. L. & Bornstein, M. H. Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Dev. Rev. 41, 71–90 (2016).
Condon, D. Database of individual differences survey tools. Harvard Dataverse https://doi.org/10.7910/DVN/T1NQ4V (2019).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B Methodol. 57, 289–300 (1995).
Zhang, D. C., Highhouse, S. & Nye, C. D. Development and validation of the general risk propensity scale (GRiPS). J. Behav. Decis. Mak. 32, 152–167 (2019).
Waytz, A., Cacioppo, J. & Epley, N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 5, 219–232 (2010).
Frost, R. O., Marten, P., Lahart, C. & Rosenblate, R. The dimensions of perfectionism. Cogn. Ther. Res. 14, 449–468 (1990).
Watson, D., Clark, L. A. & Tellegen, A. Development and validation of brief measures of positive and negative affect: the PANAS scales. J. Pers. Soc. Psychol. 54, 1063–1070 (1988).
Jones, D. N. & Paulhus, D. L. Introducing the short dark triad (SD3): a brief measure of dark personality traits. Assessment 21, 28–41 (2014).
Gross, J. J. & John, O. P. Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. J. Pers. Soc. Psychol. 85, 348–362 (2003).
Elliot, A. J. & Thrash, T. M. Approach and avoidance temperament as basic dimensions of personality. J. Pers. 78, 865–906 (2010).
Schwartz, S. H. et al. Refining the theory of basic individual values. J. Pers. Soc. Psychol. 103, 663–688 (2012).
Hayes, A. F. Introduction to Mediation, Moderation, and Conditional Process Analysis: a Regression-Based Approach (Guilford Press, 2018).
Yik, M., Russell, J. A. & Steiger, J. H. A 12-point circumplex structure of core affect. Emotion 11, 705–731 (2011).
Bartneck, C., Kulić, D., Croft, E. & Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1, 71–81 (2009).
Carpinella, C. M., Wyman, A. B., Perez, M. A. & Stroessner, S. J. The Robotic Social Attributes Scale (RoSAS): development and validation. In Proc. 2017 ACM/IEEE International Conference on Human–Robot Interaction 254–262 (Association for Computing Machinery, 2017); https://doi.org/10.1145/2909824.3020208
Fiske, S. T., Cuddy, A. J. & Glick, P. Universal dimensions of social cognition: warmth and competence. Trends Cogn. Sci. 11, 77–83 (2007).
Cuddy, A. J. C., Fiske, S. T. & Glick, P. in Advances in Experimental Social Psychology Vol. 40, 61–149 (Academic Press, 2008).
Esterwood, C., Essenmacher, K., Yang, H., Zeng, F. & Robert, L. P. A meta-analysis of human personality and robot acceptance in human–robot interaction. In Proc. 2021 CHI Conference on Human Factors in Computing Systems 1–18 (Association for Computing Machinery, 2021); https://doi.org/10.1145/3411764.3445542
Morsunbul, U. Human-robot interaction: how do personality traits affect attitudes towards robot? J. Hum. Sci. 16, 499–504 (2019).
Robert, L. P. Jr et al. A review of personality in human–robot Interactions. Found. Trends Inf. Syst. 4, 107–212 (2020).
Reich, N. & Eyssel, F. Attitudes towards service robots in domestic environments: the role of personality characteristics, individual interests, and demographic variables. Paladyn 4, 123–130 (2013).
Nicolas, S. & Agnieszka, W. The personality of anthropomorphism: how the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots. Comput. Hum. Behav. 122, 106841 (2021).
MacDorman, K. F. & Entezari, S. O. Individual differences predict sensitivity to the uncanny valley. Interact. Stud. 16, 141–172 (2015).
Paetzel-Prüsmann, M., Perugia, G. & Castellano, G. The influence of robot personality on the development of uncanny feelings. Comput. Hum. Behav. 120, 106756 (2021).
Wullenkord, R., Fraune, M. R., Eyssel, F. & Šabanović, S. Getting in touch: how imagined, actual, and physical contact affect evaluations of robots. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 980–985 (IEEE, 2016); https://doi.org/10.1109/ROMAN.2016.7745228
Henrich, J., Heine, S. J. & Norenzayan, A. Most people are not WEIRD. Nature 466, 29 (2010).
Buhrmester, M. D., Kwang, T. & Gosling, S. D. Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect. Psychol. Sci. 6, 3–5 (2011).
Buhrmester, M. D., Talaifar, S. & Gosling, S. D. An evaluation of Amazon’s Mechanical Turk, its rapid rise, and its effective use. Perspect. Psychol. Sci. 13, 149–154 (2018).
Hauser, D. J. & Schwarz, N. Attentive Turkers: Mturk participants perform better on online attention checks than do subject pool participants. Behav. Res. Methods 48, 400–407 (2016).
Casler, K., Bickel, L. & Hackett, E. Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Comput. Hum. Behav. 29, 2156–2160 (2013).
Newman, A., Bavik, Y. L., Mount, M. & Shao, B. Data collection via online platforms: challenges and recommendations for future research. Appl. Psychol. 70, 1380–1402 (2021).
Aust, F., Diedenhofen, B., Ullrich, S. & Pie, J. Seriousness checks are useful to improve data validity in online research. Behav. Res. Methods 45, 527–535 (2013).
Meade, A. W. & Craig, S. B. Identifying careless responses in survey data. Psychol. Methods 17, 437–455 (2012).
Kung, F. Y., Kwok, N. & Brown, D. J. Are attention check questions a threat to scale validity? Appl. Psychol. 67, 264–283 (2018).
Thomas, K. A. & Clifford, S. Validity and Mechanical Turk: an assessment of exclusion methods and interactive experiments. Comput. Hum. Behav. 77, 184–197 (2017).
Storozuk, A., Ashley, M., Delage, V. & Maloney, E. A. Got bots? Practical recommendations to protect online survey data from bot attacks. Quant. Methods Psychol. 16, 472–481 (2020).
McNeish, D. Exploratory factor analysis with small samples and missing data. J. Pers. Assess. 99, 637–652 (2017).
Enders, C. K. Applied Missing Data Analysis (Guilford Publications, 2022).
de Rooij, M. & Weeda, W. Cross-validation: a method every psychologist should know. Adv. Methods Pract. Psychol. Sci. 3, 248–263 (2020).
Curran, P. J., West, S. G. & Finch, J. F. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychol. Methods 1, 16–29 (1996).
Hair, J. F., Black, W. C., Babin, B. J. & Anderson, R. E. Multivariate Data Analysis (Prentice Hall, 2010).
Ryu, E. Effects of skewness and kurtosis on normal-theory based maximum likelihood test statistic in multilevel structural equation modeling. Behav. Res. Methods 43, 1066–1074 (2011).
Knief, U. & Forstmeier, W. Violating the normality assumption may be the lesser of two evils. Behav. Res. Methods 53, 2576–2590 (2021).
Schmidt, A. F. & Finan, C. Linear regression and the normality assumption. J. Clin. Epidemiol. 98, 146–151 (2018).
Faulkner, S. L. & Trotter, S. P. in The International Encyclopedia of Communication Research Methods 1–2 (Wiley, 2017).
Fugard, A. J. B. & Potts, H. W. W. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int. J. Soc. Res. Methodol. 18, 669–684 (2015).
Guest, G., Namey, E. & Chen, M. A simple method to assess and report thematic saturation in qualitative research. PLoS ONE 15, e0232076 (2020).
Hennink, M. & Kaiser, B. N. Sample sizes for saturation in qualitative research: a systematic review of empirical tests. Soc. Sci. Med. 292, 114523 (2022).
Mayring, P. Qualitative content analysis: demarcation, varieties, developments. Forum Qual. Soz. (2019); https://www.researchgate.net/publication/215666096_Qualitative_Content_Analysis
van Rijnsoever, F. J. (I Can’t Get No) Saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS ONE 12, e0181689 (2017).
Maruskin, L. A., Thrash, T. M. & Elliot, A. J. The chills as a psychological construct: content universe, factor structure, affective composition, elicitors, trait antecedents, and consequences. J. Pers. Soc. Psychol. 103, 135–157 (2012).
Weidman, A. C., Cheng, J. T. & Tracy, J. L. The psychological structure of humility. J. Pers. Soc. Psychol. 114, 153–178 (2018).
Gower, J. C. A general coefficient of similarity and some of its properties. Biometrics 27, 857–871 (1971).
Struyf, A., Hubert, M. & Rousseeuw, P. J. Integrating robust clustering techniques in S-PLUS. Comput. Stat. Data Anal. 26, 17–37 (1997).
Murtagh, F. & Contreras, P. Algorithms for hierarchical clustering: an overview. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2, 86–97 (2012).
Murtagh, F. & Legendre, P. Ward’s hierarchical agglomerative clustering method: which algorithms implement Ward’s criterion? J. Classif. 31, 274–295 (2014).
Schubert, E. & Rousseeuw, P. J. Faster k-medoids clustering: improving the PAM, CLARA, and CLARANS algorithms. In International Conference on Similarity Search and Applications 171–187 (Springer, 2019).
Šulc, Z., Cibulková, J., Procházka, J. & Řezanková, H. Internal evaluation criteria for categorical data in hierarchical clustering: optimal number of clusters determination. Adv. Methodol. Stat. 15, 1–20 (2018).
Dinno, A. paran: Horn’s test of principal components/factors (2018); https://cran.r-project.org/web/packages/paran/
Rauthmann, J. F. et al. The situational eight DIAMONDS: a taxonomy of major dimensions of situation characteristics. J. Pers. Soc. Psychol. 107, 677–718 (2014).
Gorsuch, R. L. Factor Analysis (Erlbaum, 1983).
Muthén, L. K. & Muthén, B. O. How to use a Monte Carlo study to decide on sample size and determine power. Struct. Equ. Modeling 9, 599–620 (2002).
Hopwood, C. J. & Donnellan, M. B. How should the internal structure of personality inventories be evaluated? Personal. Soc. Psychol. Rev. 14, 332–346 (2010).
Song, Q. C., Tang, C. & Wee, S. Making sense of model generalizability: a tutorial on cross-validation in R and Shiny. Adv. Methods Pract. Psychol. Sci. 4, 2515245920947067 (2021).
Lang, F. R., John, D., Lüdtke, O., Schupp, J. & Wagner, G. G. Short assessment of the Big Five: robust across survey methods except telephone interviewing. Behav. Res. Methods 43, 548–567 (2011).
Jacobucci, R., Brandmaier, A. M. & Kievit, R. A. A practical guide to variable selection in structural equation modeling by using regularized multiple-indicators, multiple-causes models. Adv. Methods Pract. Psychol. Sci. 2, 55–76 (2019).
McNeish, D. M. Using lasso for predictor selection and to assuage overfitting: a method long overlooked in behavioral sciences. Multivar. Behav. Res. 50, 471–484 (2015).
Orrù, G., Monaro, M., Conversano, C., Gemignani, A. & Sartori, G. Machine learning in psychometrics and psychological research. Front. Psychol. 10, 2970 (2020).
Sheetal, A., Feng, Z. & Savani, K. Using machine learning to generate novel hypotheses: Increasing optimism about COVID-19 makes people less willing to justify unethical behaviors. Psychol. Sci. 31, 1222–1235 (2020).
Faul, F., Erdfelder, E., Buchner, A. & Lang, A. G. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160 (2009).
Benjamini, Y. & Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 29, 1165–1188 (2001).
Narum, S. R. Beyond Bonferroni: less conservative analyses for conservation genetics. Conserv. Genet. 7, 783–787 (2006).
Cohen, J. Statistical Power Analysis for the Behavioral Sciences (Lawrence Erlbaum Associates, 1988).
Acknowledgements
This research was supported by the LSE Research Support Fund awarded to J.E.B. and D.K. It was also supported by internal LSE departmental funding awarded by the Department of Management to J.E.B. and by the Department of Psychological and Behavioural Science to D.K. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
Author information
Contributions
D.K. was responsible for conceptualization (lead), data curation (lead), formal analysis (lead), funding acquisition (lead), investigation (lead), methodology (lead), project administration (lead), validation (lead), visualization (lead), writing of the original draft (lead) and writing – review and editing (lead). J.E.B. was responsible for conceptualization (supporting), formal analysis (supporting), funding acquisition (lead), investigation (supporting), methodology (lead), validation (lead) and writing – review and editing (supporting). A.D. was responsible for conceptualization (lead), formal analysis (supporting), funding acquisition (supporting), investigation (lead), methodology (lead), validation (lead), visualization (lead), writing of the original draft (supporting) and writing – review and editing (lead).
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Human Behaviour thanks Andrea Bonarini, André Pereira and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Notes, Methods, Results, Discussion, Tables 1–23 and References.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Krpan, D., Booth, J.E. & Damien, A. The positive–negative–competence (PNC) model of psychological responses to representations of robots. Nat Hum Behav 7, 1933–1954 (2023). https://doi.org/10.1038/s41562-023-01705-7