The public perceptions of algorithmic decision-making systems

Algorithmic decision-making (ADM) systems are heavily used by businesses, governments, and the nonprofit sector. Their adoption by the public is important for their optimal performance, but the impact of user perceptions on ADM system adoption is not well understood. We develop a theoretical model that examines the effect of a user's transparency concern on perceived fairness, accountability, and privacy; it then captures the effect of these on trust in, usefulness of, and, finally, intention to adopt an ADM system. We use results from a large-scale survey among 2612 Dutch citizens to test our research model. Our results shed light on the role of transparency concern in shaping trust and perceived usefulness through its impact on perceived fairness, accountability, and privacy. The survey data of a large representative group enables us to capture public views on ADM systems. Finally, the study provides a comparative view of ADM system perceptions for different application scenarios. Our insights on differences between scenarios can help organizations prioritize relevant measures to improve the adoption of their ADM systems.


Introduction
Today, algorithmic decision-making (ADM) systems are heavily used by businesses, governments, and the nonprofit sector (Singh et al., 2016). ADM systems typically employ artificial intelligence algorithms, such as machine learning algorithms, to come up with decisions or suggestions based on data (Zerilli et al., 2019). They have become a vital part of our everyday lives by improving the efficiency and reliability of existing services such as medical diagnosis (Hoffmann et al., 2020), enabling new services such as assisted and autonomous driving (Yurtsever et al., 2020), and enhancing experiences by presenting personalized recommendations on, for example, e-commerce and social media platforms.
To reap the benefits of ADM systems, it is essential that public and individual users feel comfortable with and adopt these systems (Kern, 2022). Strategies are needed to overcome ethical and public concerns leading to aversive attitudes that prevent ADM system adoption (Zerilli et al., 2022). Transparency concern is particularly prominent in public and policy discussions (Shin and Park, 2019) and is thus addressed in various regulatory guidelines such as the European General Data Protection Regulation (GDPR).
The rest of the paper is organized as follows. The next section discusses the literature and develops our research model. Section 3 explains the survey, and Section 4 reports the results of the hypothesis tests. We discuss the results along with theoretical and practical implications, limitations, and future directions in Section 5.

Literature review and research model development
In the following sections, we first provide a brief definition of an ADM system and then develop hypotheses and the research model to answer RQ1 and RQ2. To answer RQ3, in Section 3 (Method), we develop five scenarios representing different ADM system use contexts and test the research model comparatively for those contexts.

Algorithmic Decision-Making (ADM) systems and transparency
An algorithm in an ADM system encapsulates a predictive or prescriptive mathematical model using rules that it dynamically generates or adjusts based on input data (Newell and Marabelli, 2015). This is in contrast to systems that use pre-defined rules "built in" to the system from the outset (Zerilli et al., 2019), such as decision-support systems (Watson, 2017). By using user and other data as input and analyzing the data, ADM systems provide suggestions for making certain decisions (e.g., tumor detection in radiology images, parole decisions, and fraud detection) or make automated decisions (e.g., mortgage decisions and ranking posts on social media) (Merkert et al., 2015). One can distinguish two types of ADM users: decision-makers and decision targets (Utz et al., 2021). Decision targets are the end users, such as hospitalized patients, for whom decisions are provided. The output of ADM systems may be visible or invisible to the decision targets, with diverse levels of impact. Decision-makers rely on ADM system outputs to make decisions about end users, such as determining allocations of resources and preferences in treatment scheduling (Utz et al., 2021).
In the above examples, ADM systems are used in significant decisions that have a life-altering and immediate impact on the affected individuals, such as receiving treatment, buying a home, or receiving parole. In the ethics literature, this is often referred to as "hard" impact. Other systems have provided new services that are not particularly deemed essential while still having a large but hard-to-discern delayed impact on public life. For example, recommender systems on social media have been criticized in numerous countries for distorting democratic debate during election cycles (K. Martin, 2019). This is referred to in the ethics literature as "soft" impact (Swierstra and te Molder, 2012). Though debates about the potential ethical issues relating to new technologies have usually focused on "hard" impacts, ADM systems' potential "soft" impacts are becoming increasingly prominent. Therefore, public perceptions of ADM systems that may have both soft and hard impacts on individuals' lives are worth investigating.
ADM systems are often opaque or unreadable, poorly understood, and not always predictable in their performance (Diakopoulos, 2014; Veale and Edwards, 2018). Transparency can be about procedures, i.e., how a system works in general, or outcomes, i.e., how a decision is made for a particular case (Zhao and Benbasat, 2019). Transparency of an ADM system for a user is a subjective perception of the extent to which the user is clear or unclear about the reasoning of the system (Shin and Park, 2019). Transparency issues of intelligent systems have long been considered in academic studies in various ways (Mueller, 2019). Computer scientists have recognized the issue of transparency by referring to ADM systems as black boxes. To open the black box, explanations have been developed to make opaque algorithms technically more transparent (Storey, 2022). Focusing on human interaction, other researchers have evaluated user perceptions of ADM systems when certain explanations were provided (Bauer et al., 2021; Schmidt et al., 2020). For example, explanations in an e-commerce recommendation agent were shown to improve trust only when users felt the system to be personalized (Zhang and Curley, 2018), while explanations did not impact trust when expectations were already met in a work promotion scenario (Kizilcec, 2016). These studies contribute to making ADM systems more transparent through explanations and to understanding individuals' interactions with systems when explanations are presented. However, although explainability provides a means of improving transparency, the concept of transparency, either as a perception or a concern, has not been handled as an individual construct in these studies. Second, legal and ethical researchers have investigated how to theoretically interpret the concept of transparency (Edwards and Veale, 2018; Selbst and Powles, 2017). Although such studies have provided extensive argumentation on the ways to achieve transparency, including governmental policies that can be followed, they have not empirically evaluated public concerns and perceptions around transparency.
There is a growing body of empirical literature on the perceptions and attitudes of individuals toward ADM systems. First, a stream of research has investigated the algorithmic aversion attitudes of decision-makers (Burton et al., 2020) and the preference for human vs. ADM system decision-makers due to system performance (Saragih and Morrison, 2022). Second, attitudes towards ADM systems have been analyzed by measuring user acceptance (Gursoy et al., 2019; Park and Mo Jones-Jang, 2022; Shin et al., 2019; Sohn and Kwon, 2020) and other attitudes and behaviors such as perceived usefulness, risk, emotional responses, satisfaction, and information seeking (Araujo et al., 2020; Lee, 2018; Shin, 2020b, 2022a, 2022b). These studies recognize that factors unique to ADM systems come into play and that different acceptance behaviors can be observed for such systems than for typical technological artifacts (Shin et al., 2019; Sohn and Kwon, 2020). Although most of these studies acknowledge the centrality of transparency for ADM systems, many have not explicitly considered transparency as a factor in acceptance (e.g., Araujo et al., 2020; Gursoy et al., 2019; Lee, 2018; Park and Mo Jones-Jang, 2022). Those involving transparency have investigated perceived transparency as a factor similar to fairness, accountability, and privacy or have considered transparency expectations about a system (Shin, 2020a, 2020b, 2022b; Shin and Park, 2019; Shin et al., 2019). Furthermore, these studies have not always included trust and perceived usefulness as related factors. Among those, Shin et al. (2022) developed the concept of transparent fairness, which refers to the interwoven nature of transparency and fairness, but focused on understanding user sensemaking of the system rather than its adoption. Therefore, there is a gap in the literature on the impact of transparency concern on ADM system adoption that also takes into consideration other ADM-specific factors that can impact trust in and perceived usefulness of these systems.

Relation of transparency to perceived fairness, accountability, and privacy in ADM systems
User perceptions of ADM systems affect the user adoption process (Shin et al., 2019). ADM systems are typically evaluated in terms of fairness, accountability, and privacy (Brkan, 2017; Edwards and Veale, 2017; Gal et al., 2022; Jobin et al., 2019; Lepri, 2017; Ribeiro et al., 2016). Transparency can be seen as a facilitator of improved perceptions of these three values by enabling cognitive judgment of these concepts (Mittelstadt, 2016; Rothenberger, 2015). Transparency, fairness, accountability, and privacy are also the most frequent principles mentioned in ethics guidelines on ADM systems (Jobin et al., 2019). As an example, even recommender systems on e-commerce platforms, which are rarely involved in life-critical decisions, raise questions about the degree to which these systems constrain our freedom without our knowledge, though usually with our consent, as well as how personal data resulting from the use of such systems is collected, analyzed, shared, or sold. At the far end of this line of thinking, an ADM system helping us select clothes or the next news item to read could, by some accounts, be seen as undermining the traditional epistemic operations of western liberal democracy by constraining our options based on personal profiling (Sunstein, 2017).
Fairness is about the balance between the impact of user characteristics and the goal of giving everybody equal chances. However, what counts as a fair outcome depends on the context. For example, an ADM system that decides on the maximum mortgage amount might be considered unfair if the educational level of the applicants has a strong influence on the amount, although educational level might be a reliable predictor of household income. The sense of fairness can be moderated through increased transparency, e.g., by justifying the discriminatory outcome in the context of positive discrimination (Zarsky, 2016). Thus, concern about the transparency of a system typically impacts users' fairness perception by changing how users make judgments about the system, as shown in diverse experimental studies (Bolton et al., 2003; Rothenberger, 2015; Shulner-Tal et al., 2022). Accordingly, we pose the following hypothesis:

H1a: Transparency concern negatively affects a user's perceived fairness of an ADM system.
Accountability is important for cases where the use of an ADM system may cause harm to the user, in which case users need to know who is responsible and how to seek redress (Diakopoulos, 2014; Selbst and Powles, 2017). System accountability is highly dependent on transparency since, to achieve accountability, the provider of an ADM system needs to clarify and justify actions, including mitigation and oversight mechanisms for correcting and redressing (Vedder and Naudts, 2017; Zouave and Marquenie, 2017). Thus, following longstanding arguments in the literature on democratic governance (Hood, 2010), we maintain that transparency is a precondition and facilitator of accountability (Diakopoulos and Koliska, 2017) and subsequently impacts perceived accountability. We posit that lower transparency concern for an ADM system would contribute to an improved perception of accountability with the following hypothesis:

H1b: Transparency concern negatively affects a user's perceived accountability of an ADM system.

Privacy issues relate to any kind of use or dissemination of private information other than for the purpose of the algorithmic service. They have been established as a major factor in the adoption of online systems in general (Yun et al., 2018). ADM system users may see privacy infringements as a more pronounced issue because it becomes even more difficult for users to understand how their data is used when it is outside of the use context of the initial data collection (Zhao and Benbasat, 2019). The relation between transparency and privacy is different from that of accountability since an ADM system may protect privacy without making it explicit. Nevertheless, by enhancing the transparency of a system, users can sense that they can track where their private data is being used (Hedbom et al., 2011). It may follow that concerns about limited transparency do raise concerns about privacy via a hermeneutics of suspicion (Felski, 2011); when users fear a lack of transparency, they may well be more likely to feel that the system conceals or hides something about its privacy policies, regardless of whether it actually does or not. Accordingly, the next hypothesis suggests a relation between transparency concern and the perception of risk that user privacy is not being appropriately respected:

H1c: Transparency concern negatively affects a user's perceived privacy of an ADM system.

Relation of perceived fairness, accountability, and privacy to ADM system trust and perceived usefulness
Trust plays a major role in settings where individuals interact with technological systems (Söllner et al., 2016). The level of trust identifies how much a system user is willing to rely on the system. Previous literature has identified various antecedents that explain how trust in a system is shaped depending on the type of system and the context in which it is used. In general, trust formation is related to the perceptions of potential loss or costs the user finds in the relation (Hoff and Bashir, 2015). For example, in the case of e-commerce systems, losing control of personal data is a cost that is assessed based on the perceived level of system privacy, which, in turn, determines trust (Dinev and Hart, 2006). For ADM systems, issues relating to fairness, such as being exposed to discriminatory treatment; privacy, such as losing control of personal data; or accountability, such as accruing damage from a broken system because responsibility was not clear, are weighed as costs in users' mental processes. Perceptions of the possibility of encountering such issues thus influence how users build trust in ADM systems (Kieslich et al., 2022; Shin et al., 2019). For example, increasing the perceived transparency of an ADM system through explanations has been shown to improve fairness perceptions, resulting in improved trust (Binns, 2018).
Accordingly, we posit the following hypotheses:

H2a: Higher perceived fairness positively affects a user's trust in an ADM system.

H2b: Higher perceived accountability positively affects a user's trust in an ADM system.
H2c: Higher perceived privacy positively affects a user's trust in an ADM system.

The perceived usefulness of a system, i.e., the utility obtained from using the system as expected by users (Benbasat and Wang, 2005), has been identified as a key factor for technology acceptance (Tamilmani et al., 2021). Researchers have identified various factors that affect the perceived usefulness of systems. The technology acceptance literature suggests that perceptions of a system directly impact how users perceive usefulness (Marangunić and Granić, 2015). Accordingly, for ADM systems, we argue that perceived fairness, accountability, and privacy will have a substantial effect on users' utility perceptions with the following hypotheses:

H3a: Higher perceived fairness positively affects a user's perceived usefulness of an ADM system.

H3b: Higher perceived accountability positively affects a user's perceived usefulness of an ADM system.

H3c: Higher perceived privacy positively affects a user's perceived usefulness of an ADM system.

Perceptions of fairness, accountability, and privacy have emerged as important factors in the algorithmic adoption process, though mostly the indirect effects of these characteristics on utility expectations have been investigated (Shin, 2020b). This study differs from this literature by investigating the direct impact of fairness, accountability, and privacy on perceived usefulness and trust.

Perceived usefulness, trust, and adoption of ADM systems
The technology acceptance literature has established perceived usefulness of and trust in a system as major factors for the adoption of a system (Marangunić and Granić, 2015). When users trust a system more, they develop stronger positive beliefs about the efficiency of the system and the continuity of its services (Gefen et al., 2003), resulting in increased perceived usefulness. Perceived usefulness is one of the main determinants of the intention to adopt a system (Benbasat and Wang, 2005; Sohn and Kwon, 2020). The other common factor in technology acceptance, perceived ease of use, is not a typical aspect of an ADM system since ADM systems are usually used to make decisions on behalf of the user, or they run automatically; thus, the end user, the decision target, does not engage in direct contact (Merkert et al., 2015). Thus, for ADM systems, the intention to adopt indicates that end users are willing to let these systems be used for significant decisions about them. Perceived usefulness, trust, and the relation between them are important for system adoption in the context of ADM systems as well (Shin et al., 2019). In addition to its impact on perceived usefulness, trust has also been shown to directly influence the intention to adopt a system (Benbasat and Wang, 2005). Furthermore, the assessment of perceived usefulness and trust constitutes an important mental process for user adoption (Gefen et al., 2003). Accordingly, we propose the final three hypotheses:

H4: Higher trust positively affects a user's perceived usefulness of an ADM system.

H5: Higher trust positively affects a user's intention to adopt an ADM system.

H6: Higher perceived usefulness positively affects a user's intention to adopt an ADM system.

This completes our research model, which we illustrate in Fig. 1. It represents the variables and their relations as proposed by the twelve hypotheses.

Sample and data collection
We designed a survey to investigate the applicability of our research model in Fig. 1. Participants were recruited through the LISS panel, which is formed on the basis of a true probability sample of Dutch households and is representative of the Dutch population (Scherpenzeel, 2009). The LISS panel is managed by Centerdata, an independent nonprofit research institute. The panel ensures that survey participants have diverse demographic characteristics in terms of education level, age, gender, and location of living (urban vs. exurban) (Knoef and de Vos, 2009). This is essential for our study to capture public perceptions about ADM systems. We received 2700 responses to the survey. We analyzed the data using IBM SPSS 26. We cleaned the data for straight-lined and overly fast answers, which resulted in 2612 responses for data analysis.
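The cleaning step described here, dropping straight-lined and implausibly fast responses, can be sketched as follows. The column names (`duration_sec`) and the minimum-duration threshold are illustrative assumptions, not those used in the study:

```python
import pandas as pd

def clean_responses(df: pd.DataFrame, item_cols: list[str],
                    min_seconds: float) -> pd.DataFrame:
    """Drop straight-lined and overly fast survey responses."""
    # Straight-lining: identical answers across all Likert items.
    straight = df[item_cols].nunique(axis=1) == 1
    # Speeding: total completion time below a plausible minimum.
    fast = df["duration_sec"] < min_seconds
    return df[~(straight | fast)].reset_index(drop=True)
```

In practice the threshold would be calibrated against the pilot, e.g., a fraction of the median completion time.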

Survey instrument
On the first page of the survey, we provided a brief definition of an ADM system and showed one of the five ADM systems at random, as described in the following subsection. Thus, each participant answered questions for one of the ADM systems. Then, on two separate pages, we showed the survey questions in random order to measure the constructs explained in the section below. Finally, a list of demographic questions was presented. The survey was administered in Dutch. We originally designed the survey text in English and then translated it into Dutch. We asked three independent researchers to check and compare the translations to ensure that the Dutch version correctly reflects the intended survey design. The survey material can be found in Appendix 3.

Scenarios for different ADM systems
To test our research model and answer research question 2, comparing transparency concern and perceived fairness, accountability, and privacy among different ADM system use contexts, we designed five scenarios, or vignettes, that incorporate ADM systems that individuals interact with in daily life as decision targets, as shown in Table 1. Using vignettes in survey research design is an accepted technique to collect data about the judgments of participants in certain situations (E. Martin, 2004) and is also used in algorithmic systems research (e.g., Miller, 2019). In this technique, participants are presented with scenarios that describe situations to which they are required to respond. Scenarios were selected so that a large portion of the public may have experienced or could experience them in the near future as decision targets. Furthermore, we aimed for the scenarios to cover diverse organization types.

Measures
We used validated measures from previous studies with a 7-point Likert scale. Initially, we designed a survey with four measures per variable in the research model (Fig. 1). We performed a pilot study with 60 participants to choose the two highest-loading measures per variable. Reducing the number of measurement items is suggested to deal with response behavior and data quality problems (Cheah, 2018; Drolet and Morrison, 2001), especially for participants from the general public (Messer et al., 2012). Issues related to response behavior could have been even more pronounced in our setting since the survey was conducted as part of a panel presenting multiple surveys to the participants at once (Scherpenzeel, 2009). Nevertheless, using this panel is justified since it enables us to reach a broad audience with demographics highly representative of the whole population. We used the same measures for all ADM systems. We adopted measures from the literature investigating the perceptions of automated and AI systems and from the information privacy, decision support, and technology acceptance literature. Where necessary, we adapted the questions to the context of the ADM system with a generic statement since the questions were the same for all the scenarios we used. The selected measurement items per construct are listed in Table 2. The transparency concern measures were defined based on Wang and Benbasat (2016) by adapting the statements for revealing concerns as done in the information privacy literature (Sutanto et al., 2013). The trust measures were derived from Jian et al. (2000) and Lee (2018), and the perceived fairness measures were also from Lee (2018). We used the measures from Bansal et al. (2015) for perceived privacy. Since we could not find established measures for perceived accountability, we developed our own measures by relying on the definitions of ADM system accountability given by Kacianka and Pretschner (2021) and Wieringa (2020). We used the literature on technology acceptance to define the measures for perceived usefulness and intention to adopt the system (Gefen et al., 2003). We used self-reported knowledge of ADM systems, age, and education as control variables.
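The pilot-based item selection described above amounts to ranking each construct's candidate items by their pilot loadings and keeping the top two. A minimal sketch with hypothetical item names and loading values (not the actual pilot data):

```python
def top_items(loadings: dict[str, dict[str, float]],
              k: int = 2) -> dict[str, list[str]]:
    """Keep the k highest-loading candidate items per construct."""
    return {construct: sorted(items, key=items.get, reverse=True)[:k]
            for construct, items in loadings.items()}

# Hypothetical pilot loadings for one construct:
pilot = {"trust": {"T1": 0.81, "T2": 0.64, "T3": 0.88, "T4": 0.72}}
selected = top_items(pilot)  # retains the two strongest items
```

With these illustrative numbers, items T3 and T1 would be retained for the main survey.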

Descriptives of the demographic profile and variables for the complete dataset
The descriptives show a balanced data set in terms of the demographic properties of the participants and reasonable experience with answering survey questions, supporting the reliability of the answers. The complete descriptive information on the demographic profile of participants is provided in Appendix 1, including gender, age, education, net annual income, and self-reported ADM knowledge. We analyzed the descriptive statistics for the variables in our research model. As shown in Table 3, perceived fairness, accountability, and privacy have the same median values but show different distribution structures. 46.2 % of the participants were male, and 53.8 % were female.
Regarding age groups, 32.4 % were 44 or younger, 36.3 % were between 45 and 64, and 31.5 % were older than 65. 19.4 % of the participants completed basic or secondary education, 33.9 % high school or vocational school, and 41.9 % university or higher education. Self-reported ADM knowledge was mostly at a medium level (3-5 on a 7-point Likert scale) for 57.3 % of participants, while 22 % reported a low and 21 % a high level of knowledge.

Descriptives for ADM systems
Separate descriptive statistics for the five ADM systems in each scenario are shown in Fig. 2. Transparency concern is observed to be steady and high across all ADM systems, with mean values between 4.84 and 5.33 (on a 7-point Likert scale). The values for perceived fairness, accountability, and privacy follow an increasing trend from ADM system 1 through 5, with the exception of perceived accountability, which has the same value for systems 4 and 5. For the variables trust, perceived usefulness, and intention to adopt, the values are lowest for ADM system 1 (social media), higher for ADM systems 2 (HR) and 3 (insurance), and highest for ADM systems 4 (medical) and 5 (tax). As reported in Appendix 2, the knowledge level for ADM system 1 (social media) is considerably higher than for the other ADM systems (4.46), while the other systems have similar values for this variable (between 3.51 and 4.19).

Statistical analysis
We used the partial least squares structural equation modeling (PLS-SEM) technique with the software SmartPLS 3.3.5 to test the hypotheses of our research model. The use of PLS-SEM as a statistical tool has become widespread in the social and behavioral sciences and particularly in information systems research (Benitez et al., 2020). It is particularly relevant when the research aims to test a complex theoretical model composed of multiple constructs and relations among them (Hair et al., 2019).

Measurement model assessment
Following the standard PLS-SEM analysis method (Hair et al., 2019), we first tested the measurement (outer) model for item and scale reliability, discriminant validity, and multicollinearity as a check for common method bias (CMB). Table 4 depicts the factor loadings for each item. For item reliability, item loadings higher than 0.70 are typically expected. All items but one had loadings higher than this value (between 0.712 and 0.963). One of the items for transparency concern showed a lower loading (0.573). It is suggested that items in the range of 0.40-0.70 not be dropped if dropping them does not improve the composite reliability (Garson, 2016), which also preserves the conceptual coverage of the construct (Benitez et al., 2020). We kept this item since the composite reliability indeed did not improve when it was dropped.
Table 5 summarizes the statistics for the reliability and validity indicators. Internal consistency reliability was confirmed by examining the composite reliability (CR) scores. CR values between 0.760 and 0.909 exceeded the suggested value of 0.70 (Hair et al., 2019). Convergent validity of each construct was assured by the values for average variance extracted (AVE), which exceeded the suggested value of 0.50 for each construct (values between 0.628 and 0.833) (Hair et al., 2019). We established discriminant validity via the square root of the AVE, also known as the Fornell-Larcker criterion, ensuring that this value for each construct is higher than its correlations with the other constructs (Garson, 2016), as depicted in Table 5. The test of multicollinearity through the variance inflation factor (VIF) is suggested as a comprehensive procedure for checking whether CMB may be a threat (Kock, 2015). The VIFs were lower than 1.793 and thus below the recommended maximum threshold of 5.0 for PLS-SEM analysis (Garson, 2016) and the threshold of 3.3 proposed to rule out CMB (Kock, 2015).
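The indicators above follow standard formulas: for standardized loadings λ, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = mean(λ²), while the VIF of a predictor is 1/(1 − R²) from regressing it on the other predictors. A minimal numeric sketch (not the SmartPLS implementation; the data below is illustrative):

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum λ)² / ((sum λ)² + sum(1 - λ²)) for standardized loadings."""
    s = loadings.sum()
    return float(s**2 / (s**2 + (1 - loadings**2).sum()))

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean squared standardized loading."""
    return float((loadings**2).mean())

def vif(X: np.ndarray) -> np.ndarray:
    """VIF_j = 1 / (1 - R²_j), regressing column j on the remaining columns."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1 / (1 - r2))
    return np.array(out)
```

For example, a two-item construct with both loadings at 0.8 yields CR ≈ 0.78 and AVE = 0.64, just clearing the 0.70 and 0.50 cutoffs.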

Structural model assessment
We evaluated the structural (inner) model with bootstrapping with a sample size of 5000, which is the accepted standard (Benitez et al., 2020). Fig. 3 shows the results of the structural model evaluation, including the path coefficients and p-values (in parentheses) shown on the lines, with line thickness adjusted for the value of the path coefficient. Adjusted R² values, which indicate the percentage of variance explained for the dependent variables, are shown inside the circles representing the variables (all values are significant at p < 0.0001). All our hypotheses other than H3b are supported at the level of p < .01. H3b (Perceived Accountability → Perceived Usefulness) is supported at the level of p < .05. Based on the R² values, the explanatory power of the model is moderate to substantial for trust, perceived usefulness, and intention to adopt; moderate for privacy; and weak for perceived fairness and perceived accountability (Hair et al., 2019). The control variables of self-reported knowledge of ADM systems, age, and education all have very low path coefficients (-0.047, 0.082, and -0.077, respectively). Thus, we assume they do not have a considerable effect on the outcome variable.
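Bootstrapping in PLS-SEM resamples respondents with replacement, re-estimates the model on each resample, and uses the spread of the resampled path coefficients to judge significance. A simplified illustration for a single standardized path, with a correlation standing in for the PLS path estimate (this is a sketch, not the SmartPLS algorithm):

```python
import numpy as np
from math import erf, sqrt

def bootstrap_path(x: np.ndarray, y: np.ndarray,
                   n_boot: int = 5000, seed: int = 0):
    """Bootstrap a standardized path estimate; return it with a two-sided
    p-value from the t = estimate / bootstrap-SE normal approximation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.corrcoef(x, y)[0, 1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample respondents with replacement
        boots[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    t = est / boots.std(ddof=1)
    p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
    return est, p
```

In the full model, each resample would re-run the entire PLS estimation rather than a single correlation, but the logic of deriving standard errors from resampled coefficients is the same.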
All path coefficients are significant; however, their strength varies. The strongest impacts are found for H1c (Transparency Concern → Perceived Privacy), H2a (Perceived Fairness → Trust), and H6 (Perceived Usefulness → Intention to Adopt), all with coefficients above 0.5. Next, we observe path coefficients at a medium level for H1a (Transparency Concern → Perceived Fairness), H3a (Perceived Fairness → Perceived Usefulness), H4 (Trust → Perceived Usefulness), and H5 (Trust → Intention to Adopt). These coefficients range between 0.26 and 0.40. The remaining coefficients, apart from H3b, are weak: H1b (Transparency Concern → Perceived Accountability), H2b (Perceived Accountability → Trust), H2c (Perceived Privacy → Trust), and H3c (Perceived Privacy → Perceived Usefulness). These values are between 0.150 and 0.176. Finally, for H3b (Perceived Accountability → Perceived Usefulness), the coefficient is very weak at 0.039. We modeled the control variables as single-indicator composites to account for the role of different ADM system knowledge levels, education, and age in explaining the intention to adopt ADM systems (Benitez et al., 2020).
Overall, the survey data shows that transparency concern has a strong impact particularly on perceived privacy and fairness, but also on accountability, although with a low coefficient. The perceptions of all three factors affect trust, but perceived fairness is particularly impactful. The path coefficients related to perceived accountability are all weak, whereas those related to perceived fairness are strong. Finally, trust impacts perceived usefulness, and both trust and perceived usefulness emerge as important determinants of the intention to adopt an ADM system.

Comparison of ADM systems
In this section, we compare the results of the hypothesis tests for each ADM system, as displayed in Fig. 4. We also report the results of the PLS-SEM multi-group analysis (MGA) (Sarstedt et al., 2011) that we ran to identify significant differences in the path coefficients across ADM systems.
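The idea behind such a group comparison can be sketched as a permutation test on the difference in a path coefficient between two groups. This is not the authors' procedure or data: the sketch uses synthetic observations, a bivariate standardized slope as a stand-in for a structural path, and hypothetical group sizes; it only illustrates the resampling logic behind MGA-style p-values.

```python
# Illustrative sketch (not the study's MGA): permutation test for the
# difference in a path coefficient between two groups, e.g., two ADM
# scenarios. All data are synthetic; group sizes are assumptions.
import numpy as np

rng = np.random.default_rng(7)

def std_slope(x, y):
    """Standardized slope (Pearson r) as a stand-in for a structural path."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(zx @ zy) / len(x)

# Two groups with genuinely different true paths
n = 400
x1 = rng.normal(size=n)
y1 = 0.55 * x1 + rng.normal(scale=0.8, size=n)
x2 = rng.normal(size=n)
y2 = 0.25 * x2 + rng.normal(scale=0.8, size=n)

observed = abs(std_slope(x1, y1) - std_slope(x2, y2))

# Null hypothesis: group membership is irrelevant. Shuffle labels and
# recompute the absolute difference each time.
x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
n_perm = 2000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(2 * n)
    xa, ya = x[perm[:n]], y[perm[:n]]
    xb, yb = x[perm[n:]], y[perm[n:]]
    if abs(std_slope(xa, ya) - std_slope(xb, yb)) >= observed:
        count += 1

p_value = (count + 1) / (n_perm + 1)
print(f"coefficient difference: {observed:.3f}, permutation p = {p_value:.4f}")
```

PLS-SEM tools implement several MGA variants (permutation-based and bootstrap-based); this sketch corresponds to the permutation flavor applied to a single path.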
Regarding hypotheses H1a, H1b, and H1c (Fig. 4.1), the effect of transparency concern on perceived privacy and fairness is high for all ADM systems (note the negative sign of the path coefficients). Transparency concern has less impact on perceived fairness for system 1 social media and system 2 HR (path coefficients -0.31 and -0.34), which have lower levels of trust and perceived usefulness. The impact is strongest for system 5 tax (path coefficient -0.43), the system with the highest levels of trust and perceived usefulness. The impact of transparency concern on perceived privacy is consistently high, with path coefficients ranging from -0.54 to -0.59.
The findings on the effect of perceived fairness, accountability, and privacy on trust (H2a, H2b, and H2c, Fig. 4.2) also indicate a high impact of perceived fairness for all systems (path coefficients between 0.55 and 0.59). Perceived accountability has a slightly higher impact on trust for system 1 social media (0.21). The MGA findings indicate that, for the path coefficient of Perceived Accountability → Trust, the difference between system 1 social media and system 4 medical is significant (p = .027). Lastly, perceived privacy has a weak impact on trust for all ADM systems; the effect is slightly lower for system 1 social media and system 2 HR (0.15 and 0.13) and larger for system 3 insurance (0.22).
For the effect of perceived fairness, accountability, and privacy on perceived usefulness (H3a, H3b, and H3c, Fig. 4.3), fairness has a strong impact particularly for system 1 social media (0.48) and a low one for system 2 HR (0.26). According to the MGA, the path coefficient differs significantly between these two systems (p = .010). The impact of perceived privacy on perceived usefulness is low for system 1 social media (0.10) and relatively higher for the other systems (between 0.15 and 0.19). The impact of perceived accountability was not found to be significant for individual systems, so it is shown with a dashed line in the graph.
The impact of trust on perceived usefulness (H4, Fig. 4.4) is high for system 2 HR (0.35) and low for system 1 social media and system 5 tax (0.21 and 0.19). This is interesting since one of these systems (social media) has the lowest trust and perceived usefulness levels, whereas the other (tax) has the highest. A pattern can be identified regarding H3 and H4 for system 1 social media: across all ADM systems, perceived fairness has the highest impact, and perceived privacy and trust have the lowest impact, on the perceived usefulness of this system.
Lastly, the impact of trust and perceived usefulness on the intention to adopt (H5 and H6, Fig. 4.4) is strong across all systems. For system 4 medical, interestingly, the impact of perceived usefulness is highest (0.56), whereas the impact of trust is lowest (0.29). System 5 tax shows a different trend, with the impacts of perceived usefulness (0.439) and trust (0.411) jointly being high. This is reflected in the MGA results as a significant difference in the path coefficients of systems 4 and 5 for the impact of trust (p = .041) and perceived usefulness (p = .007) on the intention to adopt these systems.

Discussion
We set out to understand the impact of transparency concern on the intention to adopt ADM systems by performing a large-scale survey and analyzing the responses through PLS-SEM to test the research model. In this model, we examine the impact of transparency concern on perceived fairness, accountability, and privacy; the impact of these factors on perceived usefulness and trust; and the relation of the latter to the intention to adopt.
Our findings point to interesting facts about perceptions of ADM systems. First, we establish transparency concern as a factor affecting perceived fairness, accountability, and privacy. In our survey, users with high transparency concerns for a system also perceived the system to have low levels of fairness, privacy, and accountability, which in turn led them to have lower trust in, and lower perceived usefulness of, the system. This supports the belief, frequently suggested in the literature but rarely empirically validated (Arrieta et al., 2020), that the "black box" character of ADM systems matters to users and that, consequently, mechanisms are needed to alleviate transparency concerns about ADM systems. We find that the impact of transparency concern is particularly strong on perceived privacy and fairness, while it is rather weak on perceived accountability. Thus, by decreasing users' concerns about transparency, organizations can especially improve users' perceptions of the privacy and fairness of ADM systems.
Next, when we investigate how perceived fairness, accountability, and privacy affect trust in and the perceived usefulness of ADM systems, perceived fairness emerges as the most prominent factor. Interestingly, perceived accountability does not have a significant impact on usefulness. A possible reason is that users focus on the direct value they obtain from using an ADM system rather than considering situations in which the system causes them harm. This might also be a reason why accountability has been studied less than other factors, such as fairness (K. Martin, 2019). In line with previous information privacy literature, perceived privacy is confirmed to impact trust and perceived usefulness (Li, 2012; Smith, 2011).
Third, our results on the relations among trust, perceived usefulness, and intention to adopt are in line with previous findings for technological artifacts (Söllner et al., 2016), now in the context of ADM systems. Trust is shown to have a medium-level impact on perceived usefulness, and both trust and perceived usefulness are found to strongly affect the intention to adopt an ADM system.
Fourth, our empirical findings allow us to compare perceptions of ADM systems across different use contexts. Most prominently, the level of transparency concern is consistently high for all ADM systems. Even for the ADM systems to which users attributed higher levels of trust and perceived usefulness (e.g., medical treatment), users' transparency concerns were almost equally high.
When the relations among variables are compared across systems, we observe that the effect of transparency concern on perceived fairness increases as the levels of trust in and perceived usefulness of the ADM system increase, though the differences among systems are not significant. Interestingly, the impact of transparency concern on perceived accountability and privacy does not show such a trend. Considering the relation of perceived fairness, accountability, and privacy to perceived usefulness, the impact of perceived fairness is significantly stronger for social media than for HR systems. Social media recommendation, the system with the lowest attributed levels of trust and perceived usefulness, differs from the rest: perceived fairness has a stronger impact, and perceived privacy a weaker impact, on the perceived usefulness of this system than of all other systems. Thus, organizations providing ADM systems for social media recommendation may prefer to prioritize transparency about fairness over transparency about privacy so that their users find the system more useful.
Concerning the relations between trust, perceived usefulness, and intention to adopt, there is some diversity among ADM systems that does not seem to follow a trend with respect to the levels of trust and perceived usefulness. The systems that showed a relatively higher impact of trust on perceived usefulness, i.e., social media, insurance, and tax, are also the ones with a less severe impact on users' lives. This could indicate that, for systems that do not have a life-altering impact on users' lives, or for which the impact is less immediate, building higher trust becomes more important for users to find them useful. Nevertheless, trust and, even more strongly, perceived usefulness seem to be the determining factors of ADM system adoption in all contexts.

Theoretical contribution
This research contributes to the literature in four major ways. First, we situate transparency concern as a construct that affects perceived fairness, accountability, and privacy, which in turn impact trust and perceived usefulness. Previous literature on ADM system adoption has investigated the relation of each of these factors, i.e., transparency, fairness, accountability, and privacy, to ADM system perceptions in the same way, e.g., by investigating each factor's relation to trust (Shin et al., 2019) or satisfaction (Shin and Park, 2019). Although transparency, fairness, accountability, and privacy are all among the prominent ethical principles identified for ADM systems (Kieslich et al., 2022), their roles in determining trust in, perceived usefulness of, and intention to adopt a system may differ. Indeed, transparency can be argued to have an impact by enabling users' judgments of fairness, accountability, and privacy (Mittelstadt, 2016; Rothenberger, 2015). Our findings support this role of transparency (Zerilli et al., 2019).
Second, with respect to the technology acceptance literature, our findings confirm the importance of perceived usefulness as a core determinant of technological system adoption (Marangunić and Granić, 2015) in the context of ADM systems. Thus, despite the differences between ADM systems and other technological artifacts, and the controversies surrounding ADM systems, perceived usefulness appears to be a prominent factor for the adoption of ADM systems as well. Our study furthermore indicates that the relations among trust, perceived usefulness, and intention to adopt apply to ADM systems as they do to other technological artifacts (Söllner et al., 2016; Tamilmani et al., 2021).
Third, concerning the information privacy literature, our findings confirm the relevance of perceived privacy for users' trust and usefulness perceptions, as well as for the intention to adopt ADM systems, as is the case for other technological systems (Smith, 2011). The strong impact of transparency concern on perceived privacy bears out the applicability of a hermeneutics of suspicion (Felski, 2011): when users have transparency concerns, this raises suspicion that the system conceals or hides information about privacy policies. Thus, our study confirms perceived privacy as an important factor for ADM system acceptance. This matters because the increased collection, aggregation, and analysis of seemingly mundane and insensitive data resulting from the growing public use of ADM systems has potentially "sweeping consequences" for privacy (Nissenbaum, 2010), and because privacy, though prominent in the public debate, has remained relatively under-researched in algorithmic acceptance research (e.g., Shin et al., 2019).
Fourth, while differences can be observed in perceptions of ADM systems across use contexts, our findings identify transparency concern as salient for all systems. Only a few relations among the variables differ significantly across use contexts, indicating that the identified relations among constructs should be applicable to diverse ADM systems.

Practical implications
We identify three salient practical contributions of this study. First, there is an ongoing discussion on the use of ADM systems in popular media, and it is not clear how the public actually reacts to these discussions. It is important to understand how individuals think about various ADM systems so that these systems can be designed according to public values and expectations, and so that society can reap the benefits and avoid the risks associated with these systems to the highest possible extent (Lozano et al., 2021; Wirtz et al., 2019). With our findings, we answer the call for empirical research on public perceptions of various ADM systems and, specifically, on how transparency concern impacts perceptions of these systems (Burrell, 2016) in relation to ethically and socially salient principles and issues. We introduce transparency as a mechanism that shapes the way users make judgments of fairness, accountability, and privacy. Although perceived transparency is an important aspect of an ADM system in its own right, the ways to improve transparency are vast and involve a trade-off with increasing complexity and confusion for users (Binns, 2018). Our findings can help practitioners develop transparency-improving strategies relevant to the target system and users. If, for example, fairness rather than privacy or accountability is the critical issue for a particular system, the goal should be to improve transparency related to fairness. In this way, we contribute to filling the gap in information systems research regarding the impact of ADM system perceptions on the tendency to use such systems as part of daily life (Newell and Marabelli, 2015).
Second, policymakers are increasingly concerned with taking the proper steps in shaping the future of ADM systems at the regulatory and societal levels. Public institutions, of which we have highlighted the European regulatory bodies, develop guidelines and regulations that aim to reduce the risk of future damage to individuals or society at large while not stifling innovation or unnecessarily diminishing possible productivity gains. It is important for these institutions to have clarity about individuals' perceptions so that they can better map public understanding and attitudes onto policy debates (Cath, 2018). However, there is little research on public perceptions of ADM systems (Kieslich et al., 2022). Our study takes a step toward supporting policymakers in better understanding public perceptions of and expectations about ADM systems. For example, since accountability is not found to be essential from a user perspective, policymakers may themselves need to exercise extra vigilance concerning accountability to prevent harm to individual users or communities.
Third, organizations are continuously challenged by the risks related to the acceptance of the ADM systems they offer to their customers. When implementing ADM systems, they need to deal with societal concerns such as discrimination, loss of private data, or a lack of due process when individuals are harmed (Aysolmaz et al., 2020; Markus, 2017). However, there is little research to help organizations understand different stakeholder views on this issue (Galliers et al., 2017). By uncovering perceptions of ADM systems in varying contexts, our findings can help organizations take the actions necessary to prevent these risks while developing ADM systems (Galliers et al., 2017). For example, in high-stakes decisions, organizations should consider developing human-in-the-loop systems for decision-making and complement the system with effective communication strategies to decrease transparency concern (Kern, 2022). The differences we identified among ADM use contexts can help developers define requirements to improve the perceptions relevant to a certain system. For example, when developing a medical treatment system, focusing on perceived accountability appears to have the least impact on improving trust; the organization should instead identify system requirements that improve perceived fairness.

Limitations and future research directions
Our study comes with some limitations. One possible limitation is the generalizability of the results due to the use of a Dutch population sample. The behavior and perceptions of the public regarding ADM systems vary across cultures (Grzymek and Puntschuh, 2019; Nitto et al., 2017), and understanding these differences in a more fine-grained fashion requires further comparative research. Recent studies of perceptions toward ADM systems around the world show that Asian publics have more positive perceptions of ADM systems, whereas attitudes in the Netherlands are roughly equivalent to those in other European states with a similar socio-economic profile (Johnson and Tyson, 2020). While our study provides some comparison among different ADM contexts, we lack a robust understanding of whether and how transparency concerns and perceptions of ADM systems differ across cultures. We randomly presented one of the five ADM systems to each participant and controlled for knowledge level to address the limitation that participants may have difficulty accurately assessing systems with which they are not sufficiently familiar. The knowledge level, like the other control variables, did not show a significant impact on the intention to adopt. Still, the study could also be designed to present an ADM system a participant has previously experienced; in that case, participants' potential bias towards a system they already chose to use should be considered. Field experiments would be a natural next step to measure actual ADM system use behavior for specific ADM applications, and our study could inform the design of such experiments through the findings of the current research model for five ADM contexts. More ADM contexts should also be considered to improve generalizability.
Our study highlights the importance of transparency concerns for ADM system adoption by the decision targets, i.e., end-users. More research is therefore needed into transparency concerns and the design of transparency mechanisms that can meet the specific needs of diverse users for different ADM systems. Our findings can guide further research in designing such mechanisms by indicating which factors related to perceived fairness, accountability, or privacy are important in a specific ADM context and, in this way, help organizations gain the capabilities needed to create business value from ADM systems (Korsten et al., 2022). Further research should also consider the perspective of decision-makers and different ways of using the decisions provided by the ADM system, e.g., for partial or full decision delegation (Leyer et al., 2021).
In our investigation of transparency perception, we use transparency concern as a construct, i.e., concern about perceived levels of transparency. Transparency concern allows us to address both the perceived level of transparency and the concern (or absence of concern) about that level in an ADM system. The relation between transparency concern and perceived transparency, however, needs further investigation. For example, it is valid to question to what degree a perceived lack of transparency indeed results in transparency concerns (Zhao, Benbasat, and Cavusoglu, 2019) and how this may change depending on the use context. Furthermore, we measured perceptions of fairness, accountability, and privacy rather than concerns about them. We believe, however, that our construct is in line with other findings indicating that beliefs, e.g., concerns, can influence perceptions as a result of motivated reasoning (Bavel et al., 2018), irrespective of an objective basis for the belief or concern (Tilley and Hobolt, 2011). As we show that concerns about transparency impact perceptions of fairness, accountability, and privacy, more research is needed to better understand the potential role of motivated reasoning in these relations, especially as the use of ADM systems in some contexts can be highly contentious. Since there are also societal and ethical concerns around fairness, accountability, and privacy (Aysolmaz et al., 2020), it is also relevant to investigate their impact on ADM adoption. Similar extensions are also required to better understand the differences between ADM systems.
Beyond the factors investigated in our study, many other factors can help explain ADM system adoption and may differ among application domains. It would be a relevant research direction to investigate such factors, many of which would be relevant only for specific applications, e.g., hedonic motivation and anthropomorphism, as studied by Gursoy et al. (2019). Perceptions of other prominent ethical principles for ADM systems, in addition to fairness, accountability, and privacy, can also be tested, such as accuracy, explainability, and autonomy (Jobin et al., 2019; Kieslich et al., 2022).
We could not use established measures for the perceived accountability variable; relevant measures are difficult to find since this aspect has been studied less (K. Martin, 2019). Our finding that perceived accountability does not significantly affect perceived usefulness suggests that the value of the system is visible in regular use cases rather than in exceptional situations in which the system causes harm. While transparency is recognized as a tool to generate accountability, researchers warn that the relationship between the two, and how they are perceived, is complex and requires caution and further investigation (Ananny and Crawford, 2018; Fox, 2007). Therefore, future work should aim to develop and test measures for perceived accountability and further investigate the role of accountability for ADM systems. Furthermore, we could use only a limited number of items per construct due to the time restrictions of the large-scale survey. Although we performed a pilot study to choose the most relevant items and confirmed them through measurement model assessment, more items should be used in follow-up studies to strengthen the assessment of the research model.

B. Aysolmaz et al.

Table 1. The description of the ADM systems provided in the survey.

Table 2. Final set of measurement items and their sources.

Table 3. Descriptives for research model variables.

Table 4. Factor loadings of the measurement items.