Exposure to automation explains religious declines

Significance The rise of working robots and artificial intelligence (AI) is changing how humans work and live. Many studies have examined the economic impact of automation on unemployment, income inequality, and trade. However, far less research has considered the broader cultural implications of robotics and AI. Here, we show that exposure to robots and AI explains religious declines across national cultures, regions within a nation, members of a community, and employees in an organization. These effects hold when controlling for wealth, socioeconomic status (SES), exposure to science, political conservatism, and other technological advances. Our findings suggest that the rise of automation could accelerate secularization throughout the 21st century in many world regions.


Supplemental Theoretical Information
Religious decline is a major area of study across the human sciences. We view our research as most relevant to research on religious decline from a cultural evolutionary perspective. Here we expand on this theorizing and situate our research within this broader space.

Cultural Evolutionary Theories of Religion and Religious Decline
There are many models of cultural evolution, but we subscribe to a dual inheritance model in which human behavior and belief systems arise jointly from (a) genetic information, which individuals inherit from their parents via reproduction, and (b) cultural information, which individuals inherit from their society via social learning (1). Therefore, humans might be religious either because they have genetically inherited psychological profiles that make them prone to belief in supernatural agents, or because they have socially learned these beliefs from their parents, peers, and other members of their society.
Findings on the genetic transmission of religion remain mixed (2). Some research has proposed that inherited psychological traits such as intuitive thinking style (3,4), theory of mind (5), and death anxiety (6) predispose individuals to religion. However, large studies have failed to support several of these patterns (7-9), and these bio-psychological mechanisms are poorly suited for explaining cross-cultural variation in religion given their focus on mechanisms shared by all humans. Most comparative studies of religious decline have therefore focused on cultural transmission via social learning.
One class of social learning mechanisms involves the context in which people learn about religion, including the person from whom they learn about religion. For example, people are more likely to maintain belief into adulthood if they learn about religion from highly devoted caregivers who regularly signal religious commitment through fasting, attending services, and donating to their religious community (7,10,11). These signals are alternatively described as "credibility enhancing displays" (CREDs) or "honest signals" because they would be extremely costly if people did not authentically hold their beliefs, and they therefore provide an honest cue to religious devotion (12). Childhood exposure to CREDs can explain variation in religiosity both inside and outside the United States and can even predict deconversion age among people who grow up in religious households (10). Declining CREDs are therefore an important component of religious decline.
A second class of social learning mechanisms focuses on the content of religious beliefs. Early content-based hypotheses focused on qualities that made supernatural agents easy to remember and discuss (13-15). For example, the minimally counterintuitive (MCI) hypothesis suggested that people would be more likely to remember gods who selectively violated lay expectations about physics (e.g., by walking on water), biology (e.g., by gaining immortality), or psychology (e.g., by reading minds) than gods who violated none of these lay expectations or who violated all of them (14). Despite widespread interest in the MCI hypothesis, studies supporting the hypothesis have been critiqued because they confound minimally counterintuitive violations with emotionality (e.g., gods possess traits which are emotionally galvanizing), fitness relevance (e.g., gods possess traits that are relevant to survival), and existential relevance (e.g., gods possess traits that threaten or prolong the lives of believers) (16). A more general critique of the MCI hypothesis is that it only explains which agents people remember; it does not explain why people become devoted to gods and spirits. This critique is called the "Mickey Mouse" problem because the MCI hypothesis is equally well suited for explaining the evolution of non-worshipped agents like Mickey Mouse and Santa Claus as it is for explaining the evolution of worshipped agents like Jesus or Allah (16). The related "Zeus" problem is that the MCI hypothesis cannot explain why some gods are worshipped in a particular time and place (e.g., the Christian God) whereas others cease to be worshipped (e.g., Zeus) (17).
More recent analyses of content-biased transmission have tried to directly address the Zeus and Mickey Mouse problems. For example, Swan and Halberstadt (18) directly compared characteristics of active gods and other fictional agents (e.g., Mickey Mouse, Zeus), finding that active gods were ascribed more superhuman powers, especially helpful psychological powers, compared to other fictional agents (Mickey Mouse may be unusual, but he does not have special psychological capacities and will not solve your problems). Purzycki and McNamara (19) have argued that gods are usually associated with specific community functions that make them deserving of worship and ritual, such as resource management and social norm regulation. Epley and colleagues (20) made a similar claim by showing that people project their own communal concerns onto gods more than onto other people (see also Purzycki (21)).
These findings are useful because they show how specific features of gods, namely the possession of superhuman powers and the willingness to use these powers to help resolve human problems, are essential for encouraging demonstrations of religious commitment. In other words, most people perceive religion to have an instrumental function. When people appeal to supernatural agents to help them solve problems, these appeals in turn act as CREDs that inspire the next generation of religious adherents. Content-based dual inheritance models of religious evolution are therefore compatible with context-based models.

Contribution of Our Work
We propose that people's religious worship does not only depend on how they perceive gods and religious role models; it also depends on how people perceive other means of problem solving. If people believe that automation can address needs that they usually depend on religion to address, they may be less likely to seek out supernatural help through petitionary prayer or ritual participation. In turn, this decline in the frequency of religious displays may lead to broader religious declines. People do not need to explicitly compare automation and religion within this dynamic (a comparison that might be highly threatening for a religious individual). Rather, religion may simply be salient to people who have no secular means of solving their problems, and these situations may be rarer for people who have access to automation technology.
Our hypothesis is supported by three well-established premises. The first premise is that people perceive automation and religion as sharing similar features and abilities. For example, people see Google as having uniquely high agency, shared only by Christians' perceptions of God (22), and implicitly and explicitly associate robots and AI with gods more than with humans (23). Our own studies (Study 5, Study S1) support this idea further by showing that people view automation as operating outside laws of nature, and that people feel humans can "break" laws of nature after attending a seminar on AI (Study S1). The evidence in our research and past studies suggests that people think, at least implicitly (23), that automation can fulfill many of the needs that they have previously entrusted to supernatural agents.
The second premise is that people are most likely to engage in religious displays when in need of supernatural aid. For example, prayer and ritual participation increase during natural disasters (24,25) and warfare (26,27), situations in which human science has a limited capacity to help people. Conversely, rises in wealth and stability, which reduce dependence on supernatural aid, have foreshadowed declines in religion throughout the 20th century (28,29).
The third and final premise is that declines in religious displays can lead to loss of faith among observers. This hypothesis is well-supported by the CREDs literature (11). Studies of CREDs mostly focus on intergenerational transmission, but declining participation in religious displays may also affect the beliefs of people's peers, or people's own beliefs. Multiple lines of research in social psychology suggest that people use behavioral displays to gauge their own beliefs (30,31). If people pray and attend religious services less frequently, this should have a negative effect on their own beliefs.
As noted throughout the main text, we view automation as only one mechanism of religious decline. Other complementary mechanisms are also plausible. For example, whereas our work focuses on people's perceived need for religion, other research has focused on people's perception of religious institutions. Scandals involving the Catholic Church have served as credibility "undermining" displays (CRUDs), which have turned people away from their religious communities and churches (32,33). These value-based perspectives complement our needs-based perspective, which focuses more on the perceived instrumental benefits of religion.

More Information about Variables in Study 1
Share of Population with Electricity Access. Data on access to electricity are collected from several sources: mostly nationally representative household surveys (including national censuses). Survey sources include Demographic and Health Surveys (DHS), Living Standards Measurement Surveys (LSMS), Multi-Indicator Cluster Surveys (MICS), the World Health Survey (WHS), other nationally developed and implemented surveys, and various government agencies (for example, ministries of energy and utilities). Given the low frequency and the regional distribution of some surveys, several countries have gaps in available data. To develop the historical evolution and starting point of electrification rates, a simple modeling approach was adopted to fill in missing data points around 1990, around 2000, and around 2010. A country can therefore have from zero to three data points. For the 42 countries with zero data points, the weighted regional average was used as an estimate for electrification in each of the data periods. For the 170 countries with between one and three data points, missing data were estimated using a model with region, country, and time variables. The model keeps the original observation if data are available for any of the time periods. This modeling approach allowed the estimation of electrification rates for 212 countries over these three time periods (indicated as "Estimate"). The notation "Assumption" refers to the assumption of universal access in countries classified as developed by the United Nations. Data begin from the year in which the first survey data are available for each country.
Share of Population with Mobile Phone Access. Mobile cellular telephone subscriptions are subscriptions to a public mobile telephone service that provide access to the public switched telephone network (PSTN) using cellular technology. The indicator includes (and is split into) the number of postpaid subscriptions and the number of active prepaid accounts (i.e., those used during the last three months). The indicator applies to all mobile cellular subscriptions that offer voice communications. It excludes subscriptions via data cards or USB modems, subscriptions to public mobile data services, private trunked mobile radio, telepoint, radio paging, and telemetry services.
Discrepancies between global and national figures may arise when countries use a different definition than the one used by the International Telecommunication Union (ITU). For example, some countries do not include the number of ISDN channels when calculating the number of fixed telephone lines. Discrepancies may also arise in cases where the end of a fiscal year differs from that used by the ITU, which is the end of December of every year. A number of countries have fiscal years that end in March or June. Data are usually not adjusted for these discrepancies; differences in definition, reference year, or breaks in comparability between years are noted in a data note. Missing values are estimated by the ITU.

Results by Type of Robot in Study 1
Study 1 measured log-transformed estimates of industrial robot operational stock from the International Federation of Robotics (IFR). The IFR provides yearly estimates of industrial robots installed across all sectors, and also provides the number of robots installed in construction, electricity, manufacturing, mining, and agriculture. Our main text analyses focused on overall stock; here we break down the operational stock by sector. These analyses, presented in Tables S2-S6, show the same significant findings as our main text across sectors.

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. All estimates have been standardized here so that effect sizes can be compared. * p < .05; ** p < .01; *** p < .001.

Note. Estimates are presented outside parentheses, and exact p values are presented inside parentheses. All estimates have been standardized here so that effect sizes can be compared. * p < .05; ** p < .01; *** p < .001.

Figure S1. Robotics and Global Religious Decline. Panel A) The cross-sectional association between the operational stock of industrial robots and religious conviction. Nodes represent nations, node size represents population size, and node color represents GDP per capita. Panel B) Yearly religious decline across nations by bottom third, middle third, and top third of industrial robot stock. Line and node color indicate industrial robot stock.

Additional Robustness Checks for Study 1
Table S8 displays two additional Study 1 models with further robustness checks. Column 1 displays a model in which we interact year with all control variables. Column 2 further includes fixed effects for continents, which controls for spatial autocorrelation, a common source of Type 1 error in cross-cultural surveys. The cross-sectional and longitudinal effects of robotics exposure reach significance in both models. No other factor interacts with time to predict religious decline in these models; choice norms were associated with general levels of religiosity, but not with change in religiosity.

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. * p < .05; ** p < .01; *** p < .001.

Table S9 reproduces Table 1 without majority-Muslim nations, which we defined as nations in which at least 50% of the religiously identified population were Muslim. All key main effects and interactions replicated.

Table S9. Prevalence of Robot Workers and Global Religious Decline Excluding Muslim Countries

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. * p < .05; ** p < .01; *** p < .001.

Table S10 reproduces Table 1 with an alternative measure of individualism: Hofstede's measure of individualism, which was available for 64 of the 68 countries in our sample, whereas Inglehart's measure of individual choice norms was available for only 49. The results are substantively identical with either measure; we present the choice norms measure in the main text because it is more up to date (Hofstede's measure was based on data collected in the 1980s).

Table S10. Prevalence of Robot Workers and Global Religious Decline with Hofstede Individualism

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. * p < .05; ** p < .01; *** p < .001.

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. All estimates have been standardized for presentation, so that effect sizes can be compared. % Unemployed is only displayed as a main effect because models failed to converge when % unemployed was interacted with year. * p < .05; ** p < .01; *** p < .001.
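The cross-level interaction at the heart of these Study 1 growth models (religiosity regressed on robot stock, time, and their interaction) can be illustrated with simulated data. The sketch below is a deliberately simplified OLS version: the published models are multilevel, with country random effects and the full covariate set, and every variable name and coefficient here is illustrative rather than taken from the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel in the spirit of the Study 1 models: observations of
# nations across years, where religiosity declines faster at higher
# robot stock. All names and effect sizes are illustrative.
n = 500
robots = rng.standard_normal(n)                # z-scored robot stock
year = (rng.integers(0, 15, n) - 7.0) / 4.3    # centered, roughly z-scored year
relig = 5.0 - 0.2 * year - 0.3 * robots * year + 0.1 * rng.standard_normal(n)

# Design matrix: intercept, main effects, and the robots x year interaction
X = np.column_stack([np.ones(n), robots, year, robots * year])
beta, *_ = np.linalg.lstsq(X, relig, rcond=None)
# beta[3] recovers the negative cross-level interaction (about -0.3),
# i.e., steeper religious decline where robot stock is higher
```

Because the predictors are standardized, the interaction coefficient is directly comparable across sectors, which is the logic behind the standardized estimates reported in Tables S2-S6.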

AI Interest and Religiosity Interest in the United States Over Time
Study 2 tested how robotics growth related to religiosity across various American regions. In this study, as in Study 1, robotics growth was associated with religious decline. However, neither Study 1 nor Study 2 featured intensive longitudinal data, which meant that we could not develop pseudocausal models to test whether time-specific increases in robotics growth preceded time-lagged declines in religiosity. We therefore conducted an exploratory analysis in which we used Google Trends to test for the relationship between interest in AI and interest in religion over ten years (2011-2020).
Google Trends is quickly growing as a research tool for quantifying regional and intertemporal variation in preferences and values. One specific advantage of Google Trends is that it is available at the monthly level, whereas many historical surveys (e.g., Gallup World Poll) and corpora (e.g., Google Books) are typically analyzed at the level of the year. Google Trends has also recently been applied to track changes in religious beliefs. Bentzen (34) recently used variation in searches for "prayer" to claim that religious beliefs increased during the COVID-19 pandemic. "Prayer" was a relatively useful keyword in this study because searches for "prayer" are more likely to reflect genuine religious belief than academic interest in religion (e.g., atheists may frequently search for "God") (34).
We built on Bentzen's (34) study by tracking variation in prayer interest alongside three keywords which connoted interest in artificial intelligence: "AI," "coding," and "computer coding." To reduce researcher degrees of freedom, we pre-registered the study characteristics (e.g., time window and set of terms) before downloading data and running analyses. Our pre-registration is available at https://osf.io/stby4/. A Cronbach's alpha analysis confirmed that our AI search terms showed high internal consistency (α = .88), indicating that months with high search volume for "AI" also had high search volume for "computer coding" and vice versa. We therefore collapsed our AI search terms into a single index, leaving two time series representing (a) religion interest and (b) AI interest. Figure S3 displays these time series over our 10-year sample window.
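Cronbach's alpha for a set of search-term time series can be computed directly from the item variances and the variance of their sum. A minimal sketch, assuming a months × terms matrix of search volumes:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Identical columns yield alpha = 1.0; items that covary weakly pull alpha toward zero, which is why a high value licenses collapsing the terms into one index.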
We conducted two time series analyses to determine whether there was a negative lagged relationship between AI interest and religiosity interest. The first analysis was a pre-whitened cross-correlation, which visualizes the correlation between the two variables at a variety of lags (e.g., how do changes in AI interest correlate with changes in religiosity interest 5 months later?). "Pre-whitening" refers to a procedure in which a time series model is fitted to the x-variable and then used to filter (residualize) both series, which removes spurious lagged effects arising from autocorrelation or other interdependence in the time series. The second analysis was a vector autoregression (VAR), which models both autoregressive effects (how changes in variable x at time t predict changes in variable x at time t+1) and cross-lagged effects (how changes in variable x at time t predict changes in variable y at time t+1). For both models, we determined the maximum lag (12 months) using a data-driven function that selects the best maximum lag based on AIC fit. For the VAR model, we differenced the time series before analyzing their bidirectional relationship, so as to remove any monotonic trend that could yield a spurious positive correlation between interest in religiosity and interest in AI. Figure S3 presents the results of the cross-correlation, and Table S13 presents the results of the VAR model. Both models showed bi-directional negative lagged effects between religiosity interest and AI interest: religiosity interest predicted future declines in AI interest at an optimal lag of 7 months, and AI interest predicted future declines in religiosity interest at an optimal lag of 10 months. The coefficients of these negative effects are displayed on the y-axis of Figure S3 and in Table S13.
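The pre-whitened cross-correlation logic can be sketched compactly. This is a simplified stand-in that uses an AR(1) pre-whitening filter, whereas a full analysis would fit the best ARIMA model to the x-series; the function and variable names are illustrative.

```python
import numpy as np

def ar1_coef(x: np.ndarray) -> float:
    """Least-squares AR(1) coefficient of a demeaned series."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

def prewhitened_ccf(x: np.ndarray, y: np.ndarray, max_lag: int = 12) -> dict:
    """Cross-correlation of y with x at lags 0..max_lag (x leading y),
    after filtering BOTH series with the AR(1) model fitted to x."""
    phi = ar1_coef(x)
    xw = x[1:] - phi * x[:-1]          # whitened x
    yw = y[1:] - phi * y[:-1]          # y filtered with the same model
    xw = xw - xw.mean()
    yw = yw - yw.mean()
    denom = np.sqrt((xw @ xw) * (yw @ yw))
    ccf = {0: (xw @ yw) / denom}
    for lag in range(1, max_lag + 1):
        # corr(x at time t, y at time t + lag)
        ccf[lag] = (xw[:-lag] @ yw[lag:]) / denom
    return ccf
```

Running the same function with the arguments swapped gives the reverse direction (y leading x), which is how a bidirectional pattern like the 7-month and 10-month lags reported above would be detected.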
Overall, this analysis uses pseudocausal time series methods to support our Study 2 conclusion that rises in AI exposure predict declines in religiosity. It also adds an interesting wrinkle to that finding: religiosity also appears to foreshadow declines in AI interest over time, consistent with a negative reciprocal relationship. However, this is a highly exploratory finding and should be taken with more caution than the results we present in the main text.

Descriptive Statistics for Study 3 Variables
Table S14 displays the mean of religious identification and God belief at each wave of Study 3. Table S15 shows the mean of each occupational science exposure variable at each wave of the survey.

Incorporating Third-Order Lagged Terms in Study 3
In the main text, Table 3 (Model 3) presents a lagged analysis of God belief and occupational AI exposure. We include the first-order and second-order lags, but no higher-order lags, because doing so reduced our sample to only a fraction (n = 8,262) of the total population. Nevertheless, when we included third-order lags, the third-order lagged effect reached statistical significance, b = -.24, SE = .09, OR = .79, t = -2.70, p = .007, 95% CIs [.67, .94], but with a smaller effect size than the first- and second-order lagged terms (bs = -.24 vs. -.28 and -.71, respectively).

Table S16. AI Exposure and God Belief in a Community Sample with Exact p-Values

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. All occupational exposure variables have been standardized via z-scoring for presentation. * p < .05; ** p < .01; *** p < .001.
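The coefficient-to-odds-ratio conversion for these logistic models follows OR = exp(b), with a Wald confidence interval from exp(b ± 1.96·SE); a small helper makes the arithmetic explicit:

```python
import math

def logit_to_or(b: float, se: float, crit: float = 1.96):
    """Odds ratio and Wald confidence interval for a logistic coefficient."""
    return math.exp(b), (math.exp(b - crit * se), math.exp(b + crit * se))

or_, (lo, hi) = logit_to_or(-0.24, 0.09)
# or_ is about 0.79, with a Wald interval of roughly (0.66, 0.94), close to
# the reported OR = .79, 95% CI [.67, .94] (small differences can arise
# from the critical value used by the fitting software)
```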

Additional Robustness Tests for Study 3
We conducted several additional analyses to ensure the robustness of Study 3's findings. These included (a) replicating our central models controlling for general education rather than exposure to specific scientific fields, and (b) replicating the relationship between AI exposure and God belief among participants who took part in different subsets of NZAVS waves, to control for attrition. We describe each approach below.
Controlling for General Education. Our central models control for occupational exposure to biology, chemistry, mathematics, and medicine/dentistry to ensure that knowledge about science did not confound the relationship between AI exposure and God belief. In supplemental models, we replicated this key relationship controlling for education level as a more general proxy for scientific knowledge. Each participant reported their level of education according to the New Zealand Qualifications Framework (NZQF), which provides ten levels, ranging from certificates to doctoral degrees, communicating different levels of educational qualification. For example, a level 1 certificate communicates "Basic general and/or foundational knowledge" and is earned during secondary education, level 7 indicates "Specialized technical or theoretical knowledge with depth in a field of work or study" and connotes a bachelor's degree, and level 10 indicates "Knowledge at the most advanced frontier of a field of study or professional practice" and connotes a doctoral degree. Simultaneously controlling for education and scientific exposure introduced multicollinearity into our models, so we instead present Table S18 below, which controls for general education (scored from 1-10 based on the NZQF codes) rather than exposure to specific scientific fields.

Note. Estimates are presented outside parentheses, and standard errors are presented inside parentheses. All occupational exposure variables have been standardized via z-scoring for presentation. * p < .05; ** p < .01; *** p < .001.
Exploring Possible Attrition Effects. The models depicted in our main text included all participants, regardless of how many time-points of the NZAVS they completed. To ensure that our effects were not driven by attrition, we re-estimated the relationship between occupational AI exposure and God belief with all covariates (see Table 3, Model 2) for participants who reported complete religion and occupation data for (a) at least two time-points, (b) at least three time-points, (c) at least four time-points, (d) at least five time-points, and (e) at least six time-points. Figure S4 shows the effect size of the key relationship between AI exposure and God belief in each of these models; error bars represent the standard error of each model. The figure shows that the effect is statistically significant and similar in magnitude regardless of whether we exclude participants who completed a smaller fraction of survey waves. We therefore consider it unlikely that attrition drove our findings.

Figure S4. The relationship between AI exposure and God belief across participants who completed varying numbers of time-points. Nodes represent the effect size. Error bars represent standard error. These estimates control for age, gender, income, timepoint, political conservatism, and exposure to other scientific disciplines.
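The attrition subsets described above can be built by counting complete waves per participant. A sketch assuming a long-format data frame; the column names (`id`, `wave`, `god_belief`, `ai_exposure`) are illustrative, not the actual NZAVS variable names:

```python
import pandas as pd

def complete_wave_subset(df: pd.DataFrame, min_waves: int,
                         id_col: str = "id", wave_col: str = "wave",
                         required=("god_belief", "ai_exposure")) -> pd.DataFrame:
    """Keep participants with complete data on `required` columns in at
    least `min_waves` distinct survey waves."""
    complete = df.dropna(subset=list(required))
    counts = complete.groupby(id_col)[wave_col].nunique()
    keep = counts[counts >= min_waves].index
    return complete[complete[id_col].isin(keep)]
```

Re-fitting the same model on `complete_wave_subset(df, k)` for k = 2..6 yields the five estimates plotted in Figure S4.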

Ruling out Alternative Modeling Strategies in the Study 3 Dataset
We considered a multi-level modeling approach in which we interacted each person's mean AI exposure with survey timepoint. This approach would be similar to Studies 1-2, which tested whether countries and states with high levels of robotics experienced greater religious decline throughout the 21st century. However, the approach is less appropriate for Study 3 because countries and states almost never experience declining automation, whereas individuals frequently shift between jobs with different levels of exposure to AI. This means that someone's mean level of AI exposure across the entire NZAVS survey is not a very informative statistic. We therefore considered it more appropriate to track how a person's occupational AI exposure at a given time-point t correlated with their religiosity at that time-point, and also with their religiosity at future time-points, using lagged terms. We also considered (and pre-registered as an alternative analysis) a cross-lagged panel model with random intercepts. However, this model did not converge, so we elected to use multi-level models (which we also pre-registered) to test our hypotheses.
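Within-person lagged predictors of the kind used in these models can be constructed with a grouped shift. This is a generic sketch; the column names below are illustrative stand-ins for the actual NZAVS variables:

```python
import pandas as pd

def add_lags(df: pd.DataFrame, var: str = "ai_exposure",
             id_col: str = "id", wave_col: str = "wave",
             n_lags: int = 2) -> pd.DataFrame:
    """Add within-person lagged versions of `var` (lag 1..n_lags).

    Sorting by person and wave before shifting ensures each lag refers
    to that person's own earlier observation, never a neighbor's.
    """
    df = df.sort_values([id_col, wave_col]).copy()
    grouped = df.groupby(id_col)[var]
    for k in range(1, n_lags + 1):
        df[f"{var}_lag{k}"] = grouped.shift(k)
    return df
```

The resulting `ai_exposure_lag1` and `ai_exposure_lag2` columns (and a `lag3` column for the third-order analysis above) then enter the multilevel model as predictors of current religiosity.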

Study 4 Workplace Behavior Measures
Our main text focuses on the measures at timepoint 1 (T1) and timepoint 2 (T2), which test our main hypothesis. However, we also investigated the downstream consequences, measured at timepoint 3 (T3), of a negative association between automation and religiosity in our Study 4 dataset. Past research has linked religiosity to a host of positive outcomes. Studies on religion and social behavior have found that religious people often behave more prosocially (35,36) and honestly (37,38), and that they are trusted more than non-religious people (39,40). Other studies have described more cognitive correlates of religiosity, such as higher levels of self-control (41,42) and conscientiousness (43). If automation leads to religious decline, could it also encourage declines in these positive attributes?
We measured workplace behaviors that resembled outcomes linked to religiosity in previous research on religion and prosociality (35,39,44) (e.g., unethical behavior, trust, incivility, and organizational citizenship behavior, which are face-valid indicators of prosociality) and on religion and cognitive control (41,42) (e.g., goal progress and task proficiency, which have previously been linked to self-control (45)). The fact that these measures were supervisor-reported helped mitigate self-report biases such as social desirability concerns (46). Finally, we measured age, gender, socioeconomic status, education, and tenure with the company as pre-registered covariates.
These workplace behavior measures are listed below, and Table S19 displays the descriptive statistics of all measures.

Goal Progress (T3). We used the Goal Progress scale developed by Wanberg, Zhu, and Van Hooft (47). Each participant was rated by their supervisor on a 1 (Strongly agree) to 5 (Strongly disagree) scale across six items, including "___ was productive at work" and "___ made good progress at work". Each item started with "Over the last week" to make the scale sensitive to the context of the data collection.
Task Proficiency (T3). We used the Task Proficiency scale developed by Mitchell and colleagues (48). Each participant was rated by their supervisor on a 1 (Strongly agree) to 5 (Strongly disagree) scale across three items, including "___ carried out the core parts of his/her job well" and "___ completed his/her core tasks well using the standard procedures." Each item started with "Over the last week" to make the scale sensitive to the context of the data collection.

Organizational Citizenship Behavior (T3). We used the Organizational Citizenship Behavior scale developed by Lee and Allen (49). Each participant was rated by their supervisor on a 1 (Very low) to 7 (Very high) scale across three items, including "Overall level of effort of ___" and "Overall willingness to do what it takes to successfully complete assigned tasks of ___." Each item started with "Over the last week" to make the scale sensitive to the context of the data collection.
Counterproductive Workplace Behavior (T3). We used the Counterproductive Workplace Behavior scale developed by Bennett and Robinson (50). Each participant was rated by their supervisor on a 1 (Strongly disagree) to 5 (Strongly agree) scale across seven items. Items highlighted behaviors such as cursing, pranking, or publicly embarrassing colleagues for ethical failings at work.

Instigated Incivility (T3). We used the Instigated Incivility scale from Koopman and colleagues (51). Each participant was rated by their supervisor on a 1 (Strongly disagree) to 5 (Strongly agree) scale across three items, including "___ put a co-worker down or acted condescendingly towards them" and "___ paid little attention to a co-worker's statement or showed little interest in their opinion." Each item started with "Over the last week" to make the scale sensitive to the context of the data collection.

Unethical Behavior (T3). We used the Unethical Behavior scale from Welsh, Bush, Thiel, and Bonner (52). Each participant was rated by their supervisor on a 1 (Strongly disagree) to 5 (Strongly agree) scale across four items, including "___ cuts corners to complete work assignments more quickly" and "___ alters performance numbers to appear more successful." Each item started with "Over the last week" to make the scale sensitive to the context of the data collection.
Perceived Trust (T3). We used the Perceived Trust scale developed by Robinson (53). Each participant was rated by their supervisor on a 1 (Strongly disagree) to 5 (Strongly agree) scale across six items, including "I believed that ___ has high integrity" and "___ was open and upfront with me."

Study 4 Full Multiple Regression Models
Our main text reports the results of regression models in which T1 AI exposure was negatively associated with T2 religiosity, even controlling for T1 religious fundamentalism and intrinsic religiosity. In Table S20, we summarize the full statistics associated with this relationship.
These models also found an interaction between T1 AI exposure and T1 intrinsic religiosity, such that the negative link between T1 AI exposure and T2 religiosity was stronger for participants who began the study lower (vs. higher) in intrinsic religiosity. Table S19 displays the coefficients from these regression models. These models also controlled for age, gender, education, SES, and organizational tenure, which were our pre-registered covariates.
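The moderation above corresponds to a regression with a product term. Below is a minimal sketch with synthetic data (all variable names and simulated effect sizes are hypothetical illustrations, not the study's estimates), fitting ordinary least squares with an interaction column:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic stand-ins for the study variables (hypothetical, not real data).
ai_exposure = rng.normal(size=n)   # T1 AI exposure, centered
intrinsic = rng.normal(size=n)     # T1 intrinsic religiosity, centered
# Simulate a negative AI-exposure effect that weakens at higher intrinsic religiosity.
religiosity_t2 = (-0.4 * ai_exposure + 0.3 * intrinsic
                  + 0.2 * ai_exposure * intrinsic + rng.normal(size=n))

# Design matrix: intercept, both main effects, and the product (interaction) term.
X = np.column_stack([np.ones(n), ai_exposure, intrinsic,
                     ai_exposure * intrinsic])
beta, *_ = np.linalg.lstsq(X, religiosity_t2, rcond=None)
# beta = [intercept, b_AI, b_intrinsic, b_interaction]
```

With the interaction coefficient positive and the AI-exposure coefficient negative, the fitted simple slope of AI exposure is more strongly negative at lower levels of intrinsic religiosity, mirroring the pattern reported above.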

Study 4 Supplemental Analyses
With these data, we were able to test whether AI-linked declines in religiosity predict subsequent variation in workplace behaviors. We did this using a structural equation model. We began by modeling the preregistered mediation dynamic (AI exposure → religiosity → workplace behaviors) and then added additional paths using a data-driven approach with modification indices. We also chose to incorporate intrinsic religiosity as a moderator of the AI exposure → religiosity path, given the significant interaction we observed in our initial models. This SEM (see Figure S5) reproduced the negative lagged association between T1 AI exposure and T2 religiosity, as well as the moderation by T1 intrinsic religiosity. It also reproduced many of the associations documented in previous research involving prosociality and cognitive control. Religiosity was positively associated with organizational citizenship behavior, goal progress, and task proficiency, and was negatively associated with counterproductive workplace behavior, unethical behavior, and incivility. Past studies have shown that religious people can show pronounced social desirability biases in self-report surveys (54), but we measured T3 workplace behavior using supervisor-report, increasing the dependability of these associations.
Our model also revealed two unexpected findings. First, we found that T2 religiosity was not associated with T3 perceived trust. Previous research on religion and trust has sampled people living in religious communities (39,55), whereas our sample of employees in a manufacturing plant may not have relied on religion to the same extent to gauge trust. Second, AI exposure was linked to T3 supervisor-reports of higher perceived trust and task proficiency, and lower unethical behavior. This suggests that, at least in the workplace, adopting AI technology can carry benefits, leading employees to be perceived as more proficient, ethical, and trustworthy. However, AI exposure showed no significant association with the remaining supervisor-reported workplace behaviors (organizational citizenship, goal progress, counterproductive workplace behavior, and incivility), suggesting that it had a narrower range of benefits compared to religious belief. Table S22 summarizes these statistics in full.

Figure S5. A Structural Equation Model Displaying Study 4 Results. AI exposure and intrinsic religiosity have been centered. Coefficients have been standardized and can be interpreted as effect sizes. Variances and covariances are not shown here for display purposes, but they are listed in Table S22. This model includes covariates (age, gender, SES, education, tenure in organization), which are omitted for display purposes. * p < .05; ** p < .01; *** p < .001.

Stimuli in Study 5
In our main text Methods, we summarize the experimental design of Study 5, which involved reading about three innovations in language, medicine, and agriculture. Below we reproduce the text that participants read about each advance, along with a link to the source of each paragraph. Some paragraphs have been slightly adapted for spelling, grammar, and length.

AI, Language
ChatGPT is a natural language processing tool that allows users to have human-like conversations with an AI chatbot. Users can ask ChatGPT all kinds of questions and receive straightforward, uncluttered responses in return. For example, you can use the tool as an encyclopedia and ask it to "define Newton's laws of motion" or to "write a poem," which it will do instantly. Additionally, you may ask ChatGPT to design a computer program which performs simple or complex tasks such as solving anagrams or detecting animals using data about their average height and weight. ChatGPT is so good at emulating human language that many people cannot distinguish between ChatGPT and written responses by real people.
Source: https://emeritus.org/blog/ai-ml-what-is-chatgpt/

AI, Medicine
Sharing medical data between laboratories and medical experts is important for medical research. However, data sharing is often highly complex, and sometimes even impossible, due to the strict data regulatory legislation in Europe. Researchers addressed this problem and developed an artificial neural network that creates synthetic x-ray images that can fool even medical experts.
Source: https://www.sciencedaily.com/releases/2022/11/221117102821.html

AI, Agriculture
The Internet of Things (IoT) is a new method of giving physical objects "minds," with sensors, memory, processing ability, and communication ability. These devices can communicate with each other, adapt their behavior, and predict the future without an internet connection. IoT is revolutionizing agriculture because it allows surveillance cameras, tractors, sprinklers, and other agricultural tools to exchange data on temperature, humidity, waste, wind speed, and pest infestation in order to control plant treatment (including pesticide control and water volume) without farmers actively making decisions.
Source: https://www.forbes.com/sites/louiscolumbus/2021/02/17/10-ways-ai-has-thepotential-to-improve-agriculture-in-2021/?sh=3f5727f17f3b

Science, Language
Over 70 million deaf people use sign languages as their preferred form of communication. Although sign languages access similar brain structures as spoken languages, the brain regions that process both forms of language equally had not been identified. Scientists have now discovered that Broca's area in the left hemisphere, which is central for spoken languages, is also crucial for sign languages. This is where grammar and meaning are processed, regardless of whether the language is spoken or signed.
Source: https://www.sciencedaily.com/releases/2022/12/221220112426.htm

Science, Medicine
Fewer cases of melanoma were observed among regular users of vitamin D supplements than among non-users, a new study finds. People taking vitamin D supplements regularly also had a considerably lower risk of skin cancer, according to estimates by experienced dermatologists. The study included nearly 500 people with an increased risk of skin cancer.

Robustness Analyses for Study 5
Here we report two robustness analyses for Study 5. The first series of analyses reports the main effects of Study 5 without controlling for perceived impressiveness and technological sophistication. Our second series of analyses reports the main effects of Study 5 without including participants from our pilot data. Results were substantively identical to our main text analyses in both cases.
Without controlling for impressiveness and technological sophistication, participants viewed AI advances as less associated with laws of nature than scientific innovations, b = -1.44.

Study S1
We ran this study during a 7-hour MBA seminar in Singapore, in which senior executives from Taiwan learned about workplace applications of AI. The module introduced easy-to-use computing techniques in R and Python for implementing AI machine learning solutions. We randomly assigned participants to either the control or experimental condition. Participants in the control condition (n = 35) received their questions at the beginning of the seminar (before AI exposure); participants in the experimental condition (n = 43) received their questions at the end of the seminar (after AI exposure). Among participants who reported their gender, there were 27 men and 30 women, and the median age range was 40-50.
The first questions in the survey served as a manipulation check, measuring whether attending the seminar actually increased confidence in automation (we reasoned that even anticipating the seminar could increase these perceptions). Participants separately rated the promise of automation (AI and robotics), medicine, biology, chemistry, and mathematics on a 1 ("No Promise") to 7 ("Very Promising") scale. Participants then rated three "playing God" items using a 1 ("Strongly Disagree") to 7 ("Strongly Agree") scale. These items were: (a) Artificial intelligence and robotics allow humans to "break" the laws of nature, (b) Artificial intelligence and robotics allow humans to do things that we have never been able to do before, (c) Artificial intelligence and robotics give humans "superhuman" abilities. Finally, participants rated three items measuring religiosity: (a) Belief in God has an important role in the workplace, (b) Prayer has an important role in the workplace, and (c) Religious service attendance has an important role in the workplace. Participants also reported their religious identity and rated the importance of religion in their life using a 1 (Not at all Important) to 7 (Very Important) scale during the demographics section of the survey.
The manipulation succeeded such that participants in the AI condition rated AI as more promising for the future than participants in the control condition, b = 0.38, SE = 0.17, t = 2.19, p = 0.03, but did not rate any other scientific discipline as more promising for the future (ps > 0.05). The manipulation also increased participants' confidence that they could "play God" with AI technology. Participants in the experimental condition agreed significantly more with the items "Artificial intelligence and robotics allow humans to 'break' the laws of nature," b = 1.18, SE = 0.44, t = 2.67, p = 0.009, and "Artificial intelligence and robotics allow humans to do things that we have never been able to do before," b = 0.93, SE = 0.32, t = 2.93, p = 0.005, and marginally more with the item "Artificial intelligence and robotics give humans 'superhuman' abilities," b = 0.64, SE = 0.36, t = 1.79, p = 0.08. See Figure S6 for an illustration of these effects.
There was no significant main effect of experimental condition on any of the three religion items (ps > 0.05). However, condition interacted with religious importance on ratings of the workplace importance of prayer and religious service attendance. Figure S6 breaks down the effect of condition at each level of religious importance to display this moderation.

Figure S6. (Top) Estimates of playing God items in the control and experimental conditions from a general linear model. (Bottom) Estimates of the workplace importance of religious activities (prayer and service attendance) from a general linear model in which condition is moderated by religiosity. We present prayer and services together because their effects were nearly identical.
In sum, this field experiment showed that exposure to AI through an intensive one-day seminar increased senior business executives' belief that artificial intelligence allows humans to "play god," break the laws of nature, and do things that humans have never done before. Among highly religious individuals, exposure to AI also decreased the importance of prayer and service attendance for workplace behavior, although it did not change perceptions of God's importance. Nevertheless, the items measuring prayer and service attendance are interesting in their own right, given these are two forms of petitionary religious appeals.
Despite this study's small sample, it offers valuable evidence that automation leads people to feel unconstrained by laws of nature, and that exposure to automation may reduce people's perceived importance of religious appeals, at least among the highly religious.

Study S2a-b
In two highly related supplemental studies (Studies S2a-b), we examined the cross-sectional relationship between AI favorability and religiosity in two different datasets. We reasoned that non-religious people would be more favorable towards automation than religious people for at least two reasons. First, favorability towards automation may lead to religious decline (producing a negative correlation with religiosity). Second, religious people may perceive automation as more of a threat to their worldview than non-religious people.
We examined this relationship with two datasets. The first "international" dataset sampled participants from many nations; Table S24 displays the religious demographics of the sample. In this dataset, not all participants answered all questions (some participants indicated responses of "don't know" or "no opinion," which were scored as missing in our analyses), so degrees of freedom vary across analyses.

Table S24. Religious Identification of Participants in Study S2a
Republican, 714 people leaned Democrat) and college attainment (2,183 participants had earned their college degree).
Both datasets contained measures of people's AI favorability. In the international dataset, participants rated whether (a) robots and (b) AI were a good or bad thing for society using a binary scale. We averaged these responses into a composite variable because the two items correlated strongly, r(25,102) = .45, p < .001, and because the composite gave us more statistical power than either item in isolation, since participants did not always respond to both items. In the USA dataset, participants rated their enthusiasm about "the possibility that computers and robots could do most of the work currently done by humans," using a scale of 1-4 anchored at 1 ("Very enthusiastic") and 4 ("Not at all enthusiastic"). We reverse-scored this item so that higher values represented greater enthusiasm.
Both datasets also contained measures of religiosity. In the international dataset, participants responded to the item "how important is religion in your life," rated on a 1-4 scale ranging from 1 ("Very Important") to 4 ("Not at all important"). We reverse-scored the scale so that higher values represented greater importance. In the USA dataset, participants rated their frequency of service attendance using a 1-6 scale ranging from "never" to "more than once per week." These different measures allowed us to test how attitudes towards AI were linked to religious behaviors as well as self-reported religious importance.
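The reverse-scoring and item-averaging described above can be expressed in a few lines. The sketch below (item values hypothetical) uses the standard transformation `reversed = (low + high) - x` and skips missing responses when compositing:

```python
def reverse_score(x, low, high):
    """Flip a rating so that `high` maps to `low` and vice versa."""
    return (low + high) - x

def composite(items):
    """Average the answered items, treating None ("don't know") as missing."""
    answered = [v for v in items if v is not None]
    return sum(answered) / len(answered) if answered else None

# USA dataset: the 1-4 enthusiasm item, reversed so 4 = most enthusiastic.
assert reverse_score(1, 1, 4) == 4

# International dataset: reversed religious importance, then the two-item
# AI favorability composite (good = 1, bad = 0), skipping missing responses.
importance = [reverse_score(v, 1, 4) for v in [1, 2, 4]]
print(importance)            # [4, 3, 1]
print(composite([1, 0]))     # 0.5
print(composite([1, None]))  # 1.0
```

Averaging over only the answered items is what lets the composite retain participants who responded to just one of the two favorability items.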
Our international dataset also allowed us to measure people's favorability towards science more generally, and towards other technological innovations, using response formats similar to those used to rate robots and AI. Specifically, participants rated their favorability towards science with the item "Overall, would you say developments in science have had a mostly ____ effect on society?" with response options of (1) mostly positive effect, (2) mostly negative effect, and (3) equal positive and negative effects. We recoded these options so that values were ordered in terms of positivity, with 3 representing "mostly positive effect" and 1 representing "mostly negative effect."

How did people's attitudes about AI relate to their religiosity? A zero-order correlation found that AI favorability was negatively linked with religiosity in the international dataset, r(30,202) = -.08, p < .001. This negative correlation remained after controlling for sex, age, favorability towards science, and favorability towards space travel in a regression model where intercepts randomly varied across nations (Table S25, Model 1). Favorability towards science was negatively associated with religiosity, but its association was far weaker than the negative association between AI favorability and religiosity. Favorability towards space travel was positively associated with religiosity in the model. Similarly, in the USA dataset, AI favorability was negatively associated with religiosity, r(4,110) = -.09, p < .001, and this negative association persisted controlling for age, gender, education, and political orientation (Table S25, Model 2). Participants from around the world and across the United States with more favorable attitudes towards AI had lower levels of religiosity, and this association could not be reduced to education, age, gender, political orientation, or favorability towards science and technology.

Study S3
In Study S3, we tested whether religious individuals perceived religion and automation as compatible. A well-established finding in research on science and religion is that religious people view science and religion as highly compatible (56,57). This is because people view science and religion as fulfilling different capacities and meeting different needs. Science involves the human exploration and application of laws of nature, whereas religion involves "supernatural" agents and principles that transcend these laws (58). We predicted that, because many people also believe that automation can operate outside these laws of nature, religious people would view religion as less compatible with automation than science.
We designed a pre-registered within-subjects experiment in which 498 religious individuals rated AI, robotics, and other branches of science (biology, chemistry, mathematics, and medicine; the same disciplines that we measured in Study 3) on 12 bipolar and unipolar items. The bipolar items were (1) "The field of ___ is focused on HOW [vs. WHY] to solve problems," (2) "The field of ___ is focused on concrete observable information [vs. abstract ideas and principles]," and (3) "The field of ___ is focused on abstract ideas and principles [vs. intuition]." The unipolar items, anchored at 1 ("Strongly Disagree") and 7 ("Strongly Agree"), were (4) "The field of ___ involves agents with cognitive or physical abilities that surpass human abilities," (5) "People who work in ___ are playing God," (6) "People who work in ___ are doing things that should be left to God," (7) "The field of ___ involves providing social support to people," (8) "The field of ___ involves providing connection to a community," (9) "The field of ___ involves applying laws of nature," (10) "The field of ___ involves discovering laws of nature," (11) "God works through ___," (12) "Religion is compatible with ___." We pre-registered five latent factors underlying these items, and a promax factor analysis suggested that five factors each explained > 10% of variance in the items, with a cumulative variance explained of 69%. As pre-registered, there was a construal level factor (items 1-3), a playing God factor (items 4-6), a communality factor (items 7-8), a laws of nature factor (items 9-10), and a compatibility with religion factor (items 11-12). All item loadings were greater than .30, with no cross-loadings above .30. We averaged the items into these five indices for analyses.
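The pre-registered item-to-index mapping can be applied mechanically. A minimal sketch follows (item numbering matches the list above; the example ratings and the function name are hypothetical illustrations):

```python
# Pre-registered factor structure: item numbers (1-12) -> index name.
FACTORS = {
    "construal_level": [1, 2, 3],
    "playing_god": [4, 5, 6],
    "communality": [7, 8],
    "laws_of_nature": [9, 10],
    "religion_compatibility": [11, 12],
}

def score_indices(responses):
    """Average a participant's 12 item ratings into the five indices.

    `responses` maps item number (1-12) to a 1-7 rating.
    """
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in FACTORS.items()
    }

# One hypothetical participant rating a single discipline.
ratings = {i: r for i, r in
           enumerate([4, 5, 3, 6, 2, 2, 3, 4, 6, 7, 5, 4], start=1)}
print(score_indices(ratings))
```

Each participant then contributes one score per index per rated discipline, which feeds directly into the multilevel models described next.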
After computing these indices, we fit a multilevel model in which each dimension was regressed on discipline dummy-codes, contrasted against automation (the average score of AI and robotics). We combined robotics and AI into a single index for the sake of parsimony; this choice had little impact on our findings, as our results replicated regardless of whether we modeled robotics or AI individually.
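Contrasting disciplines against automation amounts to building dummy codes with automation as the reference level, so that each discipline's coefficient is its mean difference from automation. A minimal sketch (the coding scheme shown is one standard setup, not necessarily the authors' exact implementation):

```python
# Disciplines rated in Study S3; "automation" averages the AI and robotics ratings.
DISCIPLINES = ["automation", "biology", "chemistry", "mathematics", "medicine"]
REFERENCE = "automation"

def dummy_code(discipline):
    """One 0/1 indicator per non-reference discipline; automation = all zeros."""
    return [int(discipline == d) for d in DISCIPLINES if d != REFERENCE]

print(dummy_code("automation"))  # [0, 0, 0, 0]
print(dummy_code("chemistry"))   # [0, 1, 0, 0]
```

Because automation is coded as all zeros, each dummy's regression coefficient directly estimates how much higher (or lower) that discipline is rated than automation on the 1-7 index.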
Our first model found that people rated automation as less compatible with religion than all other scientific disciplines (see Table S26). We next investigated the possible mechanisms of this effect by examining whether automation was distinct from all other disciplines in other respects. Our models showed that automation was not viewed as significantly different from other disciplines in terms of prosociality or construal level (ps > .10). However, automation was seen as less associated with laws of nature than any other scientific discipline. Automation was also seen as significantly more associated with playing God than all other disciplines. Statistics from these key models are displayed in Table S26. Beta coefficients represent the mean difference between ratings of automation vs. other disciplines on 1-7 scales, which we describe in the Methods.
In sum, religious people perceived religion as less compatible with automation than with science. These perceptions were partly explained by the view that automation is less associated with laws of nature and that it encourages people to play God. This study supports our finding from Studies S2a-b that religious people feel more negative towards automation than other scientific disciplines, partly because automation gives people capacities that have historically been unique to God.

Study S4
Study S4 explored a different mechanism by which automation could lead to religious decline. In our introduction, we focus on how people may see automation as filling the same functional niche as supernatural agents. But automation may lead to religious decline not because of functional overlap between automated agents and gods, but simply because of the lifestyles, activities, and challenges that are inherent to working in AI and robotics occupations. Working in AI and robotics may involve challenges that require more concrete (vs. abstract) construal, and lead people to reflect less on their religious values and supernatural beliefs compared to working in other scientific disciplines. We tested this hypothesis with a correlational study.
We ran a pre-registered study in which we asked 196 participants from Amazon Mechanical Turk to rate whether 21 activities were characteristic of AI, medicine (a science control), and telecommunications technology (a technology control). Participants responded to the prompt, "Consider the functions of _______. When you use ________, what kinds of challenges are you most frequently trying to solve?" using a 1 ("Very Rarely") to 7 ("Very Frequently") scale. We also asked a separate sample of 199 religious participants from Amazon Mechanical Turk to rate the same activities using the prompt "What kinds of challenges lead you to feel that religion is important in your life?" using a 1 ("Not at All") to 7 ("Very Much") scale. The challenges in this study are listed in Table S27.

Figure S7. Nodes represent challenges, and the trendline represents the relationship between these challenges' associations with different disciplines and their likelihood of inspiring religious devotion. The error shading represents standard error around these relationships.
These findings suggest that the problems people face when working with automated agents may be uniquely unlikely to inspire religious devotion or strengthen people's religious conviction. This may be one reason why Studies 3-4 (main text) found that exposure to AI professions is associated with more religious decline than exposure to other scientific disciplines.

Study S5
Our main text shows that entering a profession in AI is associated with greater religious decline than entering a profession in another scientific field like medicine. In a final pre-registered supplemental study, we tested whether religious individuals could anticipate this religious decline when imagining entering an AI-focused occupation vs. an occupation in medicine.

We recruited 402 religiously identified participants for this study using Amazon Mechanical Turk through the CloudResearch platform. Participants in this study were told to imagine that they were accepting a job either in AI or in medicine, with the following prompts:

AI Condition. We would like you to imagine that you are accepting a new job in computer science where you frequently use artificial intelligence. What kinds of activities do you think you would engage in as part of this job?
Medicine Condition. We would like you to imagine that you are accepting a new job in medicine where you frequently use tools of modern medicine. What kinds of activities do you think you would engage in as part of this job?
Participants were then told that they would be involved in three different activities as part of this new job: (1) diagnosing illnesses faster, (2) developing new medication, and (3) creating treatment plans. We selected these activities because they are obviously in the domain of medicine, but they also have experienced high levels of AI infiltration in recent years.
For each activity, participants responded to the item "using AI [medicine] to solve this problem would strengthen my religious conviction" using a 1 ("Would Not Strengthen") to 9 ("Would Strengthen") scale.
After rating the individual activities, participants also responded to the general item: "As part of your job in _____ [AI/Medicine], how important of a role do you think that religion would play in your life" using a 1 ("Not Important") to 7 ("Very Important") scale.
Participants in the AI condition anticipated less religious conviction when evaluating disease diagnosis, b = -1.15.

This study therefore provides evidence that participants are aware that entering an occupation involving automation might prompt less religiosity. Could this mean that our prior studies involved selection effects, such that non-religious people were simply selecting into AI professions? We find this possibility plausible as a partial explanation of our results, but not a complete explanation. In Study 3, change in religiosity happened within individuals: working in AI was associated with decreased religiosity over time in the same people. Similarly, participants in Study 4 did not select into working with AI: they were assigned more AI work as their organization integrated AI technology.