Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland

The use of artificial intelligence (AI) in companies is advancing rapidly. Consequently, multidisciplinary research on AI in business has developed dramatically during the last decade, moving from the focus on technological objectives towards an interest in human users’ perspective. In this article, we investigate the notion of employees’ trust in AI at the workplace (in the company), following a human-centered approach that considers AI integration in business from the employees’ perspective, taking into account the elements that facilitate human trust in AI. While employees’ trust in AI at the workplace seems critical, so far, few studies have systematically investigated its determinants. Therefore, this study is an attempt to fill the existing research gap. The research objective of the article is to examine links between employees’ trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). A quantitative study conducted on a sample of 428 employees from companies of the energy and chemical industries in Poland allowed the hypotheses to be verified. The hypotheses were tested using structural equation modeling (SEM). The results indicate the existence of a positive relationship between general trust in technology and employees’ trust in AI in the company as well as between intra-organizational trust and employees’ trust in AI in the company in the surveyed firms.


Introduction
When the concept of artificial intelligence (AI) emerged in the 1950s, it was not expected to cover such a wide range of applications. Today, AI-based solutions can be found in almost all areas of human life: at home (smart home) [1,2], in the city (smart city) [3,4], at work [5], in education [6], in communication [7], transport [8], health care [9], and entertainment [10]. Ever more advanced technologies and solutions using AI are emerging, with a growing impact on the lives of individuals and the functioning of societies as a whole [11,12]. Artificial intelligence is also increasingly important in business, where it has a growing range of applications (from relatively simple "chatbots" used in customer service to more complex analytical solutions based on deep learning) [13][14][15][16][17][18][19].
Artificial intelligence, as the most advanced form of technology development to date, is an object of interest for managers who see in its applications the possibility of increasing their companies' competitive advantage. In simple terms, AI is a system or machine that mimics human intelligence in the performance of specific tasks and, in addition, can iteratively improve (learn) from the information it gathers [20,21]. Today's businesses need to take advantage of the latest technology to grow and compete globally [22][23][24][25]. Effective and efficient implementation of AI solutions in companies requires not only significant financial outlays [26,27] but also the trust of employees in their usability and functionality [28][29][30].
Because artificial intelligence is an extremely broad concept, encompassing various technologies and modern solutions that operate on the basis of diverse and often very complicated algorithms, it is difficult to indicate one commonly accepted definition of AI. Definitions vary depending on the context in which the concept is used [21,31]. Regardless of these definitional differences, the key feature that qualifies an advanced technology as artificial intelligence is the ability of a system to make decisions or perform specific tasks with at least a partial representation of human intelligence, as well as the ability to learn and improve on the basis of the information collected.
The presence of artificial intelligence solutions in business has become a fact. The real impact on the functioning of many organizations and the related benefits from the use of AI systems are driving the growing interest of companies in the latest technologies supporting their development. For most of them, this is only the beginning of a path of change that will soon revolutionize business. Implementation of AI solutions in companies is not possible without the acceptance of their employees. As highlighted by past research, employees' adjustment to advanced technologies (such as AI), which leads to their acceptance and use, is a key factor in translating technological advances into business revenue [28,32,33]. Going further, employees' trust in artificial intelligence seems to be one of the key factors determining the level of this acceptance, and thus influencing the scale and effectiveness of the implementation and use of artificial intelligence solutions in companies. This trust becomes particularly important in situations of uncertainty and the related need to reduce risk, and therefore in the conditions in which most organizations operate today, including those operating on the Polish market of chemical and energy industry companies.
The trust of employees in artificial intelligence is a special category of trust in the broadest sense. It is a complex, multifaceted and multidimensional variable [34]. In addition, it is latent and difficult to measure directly. As a result, it is difficult both to define and to measure, and it is also difficult to describe its relationships with other variables. A specific feature of this category of trust, however, is that its object is neither people nor the organizations that people create, but technology, i.e., artificial intelligence, which can be considered the most advanced form of technology development to date. Moreover, it refers to employees of companies [34], which means a significant narrowing of the group of entities (individuals) to which the term can be applied.
The existing research on trust in technology has mostly focused on automation, automated systems or e-commerce systems, e.g., [29,30,[35][36][37][38]. In contrast, research on employees' trust in AI in the company is scarce in the subject literature; thus, there is a need for more in-depth analysis in this area. In particular, there is a niche regarding research on the relation between employees' trust in AI in the company and other variables, including those relating to the organizational system in which the employee works and the employee's individual characteristics. This article is therefore an attempt to fill an existing research gap in this area, the subject of our interest being a specific category of this trust, namely employees' trust in AI in the company. Given the pioneering character of this research, we see it as contributing substantially to the knowledge of the field as well as to the practice of contemporary companies dependent on advanced technologies, and especially on AI solutions.
The aim of the article is to examine links between employees' trust in AI in the company and three other latent variables proposed by us (i.e., general trust in technology, intra-organizational trust, individual competence trust). The empirical study was conducted based on a sample of 428 employees from companies of the energy and chemical sectors in Poland. During the research process, we formulated three hypotheses which were tested using structural equation modeling (SEM).
The article consists of two parts: theoretical and empirical. In the first part of the paper, we review the literature on trust. We are particularly interested in issues concerning employee trust in the organization in the context of AI implementation. Our considerations also include the above-mentioned categories of trust, which can influence employees' trust in AI (general trust in technology, intra-organizational trust, and individual competence trust). These considerations are the starting point for the formulation of research hypotheses. Next, we present the research methodology and discuss in more detail the analysis methods used (SEM models). In the next part of the paper, we present the research results and discuss them in the context of the previously presented literature. In the final part, we formulate the main conclusions drawn from our research; we present the contribution and limitations of the study as well as possible future research directions.

Nature of Trust
Trust is a complex, multifaceted and multidimensional category [34,[39][40][41]. As a result, it is difficult to define the concept of trust in an unambiguous and commonly accepted way. Trust, as the basis of social relations, is an object of interest for representatives of various disciplines of the social sciences, including psychology, sociology, political science, economics, and management sciences [42][43][44][45]. Trust is also a subject of interest for humanists, primarily philosophers and ethicists [46,47]. While the concept of trust is understood and defined in many different ways, it can be assumed that trust is some kind of belief (and sometimes even certainty) that one party holds regarding the future behavior or states of the object of trust [48][49][50]. Trust is often considered a personality trait, and therefore it can be considered an interpersonal variable [39,[51][52][53][54]. The objects of trust are not only people but also institutions and companies that are created by people.
More and more frequently, the category of trust is also considered in the context of technological development, and thus refers not only to the social or institutional system but also to technology [54][55][56][57][58][59]. This is due to the gradual transformation, at least in part, of existing relationships between people into human-technology relationships. This transformation is the result of the extremely dynamic pace of technology development, which has diffused and penetrated into almost every sphere of human life [11]. One such sphere, in which the human-human relationship is gradually being displaced in favor of the human-technology relationship, is that related to the performance of professional activity. Many employees in their workplaces today encounter a situation where their partners in the achievement of tasks are not only other people but also intelligent solutions from the area of advanced technology.

Trust in the Context of Implementing AI in a Company
The presence of artificial intelligence solutions in business is a fact. Artificial intelligence can take over many activities from employees, and this is a great advantage for entrepreneurs looking to implement AI solutions in their own companies. The main positive effects of using artificial intelligence in a company include cost savings (e.g., by reducing employment or through intelligent production quality control), increasing the effectiveness of business processes (e.g., by improving accounting or facilitating employee recruitment), and eliminating so-called human error in the performance of specific tasks [60,61].
The positive effects of the implementation of AI are also considered in the literature from the perspective of company employees. It is noted, for example, that employees, thanks to artificial intelligence, are able to perform tasks that were previously too complicated or dangerous, and this grants people easier access to information and significant time savings [60].
On the other hand, however, the implementation of AI solutions may also lead to negative effects, either at the level of the entire company or at the individual (employee) level. Doubts that arise in connection with the implementation of AI systems in business may concern, among other things, a lack of technological readiness of the company to implement such solutions, or a loss of competitive advantage resulting from faster implementation of artificial intelligence by competitors. This may be accompanied by employees' doubts about the effects of implementing such solutions (mainly the fear of losing their current jobs), their reluctance to change, and even resistance to it [62,63].
The perceived benefits and risks associated with the implementation of AI in companies may therefore vary depending on who makes the assessment. What constitutes an advantage for the employer (e.g., reduction of labor costs) may be perceived by employees as a real threat related to the loss of job (replacement of the employee by artificial intelligence solutions). Reducing these concerns will only be possible if the artificial intelligence is developed and implemented in companies in a way that allows employees to gain trust. It seems that implementation of AI solutions in companies is not possible without the acceptance of their employees. In this context, it can be assumed that employees' trust in AI in the company is one of the key factors determining the level of this acceptance and thus influencing the scale and effectiveness of implementation and use of AI solutions in companies. This trust becomes particularly important in the situation of uncertainty and the related need to reduce risk, and therefore under the conditions in which most organizations operate today, including chemical and energy industry companies operating on the Polish market.
Taking into account the above, the article attempts to describe the trust under consideration, as well as to analyze its relations with three other variables, described in further parts. The proposal to describe them is based on our literature review [54,58,[64][65][66][67][68][69][70] and interviews with experts (see Section 3.2 for more details).

Employees' Trust in Artificial Intelligence in the Company (TrAICom)
Our understanding of employees' trust in AI in the company is derived from the definition of trust in technology. We assume that artificial intelligence is the most advanced form of technology development to date. As it is defined in the literature, trust in technology manifests itself in people's readiness to be influenced by technology, resulting from its usefulness, predictability of its effects and credibility of its suppliers [54,55,58,59,67,71]. The concept of trust in technology (and thus in AI) refers to the belief that the other side of the relationship, i.e., technology (and in our case, artificial intelligence) will work in a functional, helpful and reliable way, providing positive results [54]. The functionality reflects the expectation that the technology is capable of performing the intended task. Helpfulness includes the adequacy and responsiveness of the help function built into the technology. Reliability refers to the expectation that a given technology will work in a predictable and consistent manner. Similarly, according to Hardré [72], trust in technology is "the degree to which people believe in the veracity or effectiveness of a tool or system to do what it was created for and is purported to do".
The existing studies support the notion that trust in technology is based on the interpersonal trust concept. In many definitions, trust in technology reflects beliefs about the favorable attributes of a specific technology. Just like in interpersonal trust, research on trust is founded on the perceived qualities of the trustee's trustworthiness [39,55]. Researchers have used these approaches mainly because people tend to anthropomorphize technologies and attribute human motivation or human qualities to them [73][74][75].
Employees' trust in technology in the company, and hence employees' trust in AI in the company, which is the main subject of our studies, does not develop in a vacuum. Instead, it evolves in a complexity of contexts within a company [35]. Therefore, while analyzing employees' trust in AI in the company, we consider the following contexts: general trust in technology, the characteristics of the organizational support for trust in technology (intra-organizational trust), as well as the context defined by the characteristics of the individual employee who uses particular AI solutions (individual competence trust).

General Trust in Technology (GenTrTech)
In the context of describing employees' trust in AI, one of the variables that we are interested in is general trust in technology. Because technology lacks volition and moral agency, trust in technology reflects beliefs about a technology's characteristics rather than its motives or will [53]. According to the relevant literature, general trust in technology refers to people's (among them employees') assessment of whether, in their opinion, the suppliers of technology have the knowledge and resources necessary to implement particular solutions [64,67,71]. Furthermore, as noted by McKnight et al. [53], general trust in technology refers to people's perception of whether technological solutions are consistent, reliable, functional, and provide the help needed.
General trust in technology is closely related to the issue of the ethical governance of new technologies. Ethical governance in terms of technology means transparency of process as well as transparency of product itself [76,77]. In particular, ethical issues in regard to technology involve the following: identification of potential harms, providing guidelines on safe design, creating measures protecting the safety of new technology or privacy concerns [58,78]. Another aspect referring to ethical governance of technology and its impact on general trust in technology is ensuring the confidentiality of data and information provided by the technology user. Therefore, creating data privacy policies and procedures that enhance user's trust in technology nowadays seems to be particularly significant for creating general trust in technology [77,78].
Given the context of our research, general trust in technology seems to be a variable of high importance, as we follow McKnight et al. [53], who claim that people's general trusting beliefs regarding the attributes of technology shape individual decisions to use technology and influence individual technology acceptance and post-adoption behavior. Some research provides an interesting point of view, arguing that general trust in technology has the nature of confidence, meaning that a technology user deliberately entrusts himself or herself to the technology [57,79]. Taking such a point of view, Kiran and Verbeek [79] argue that, rather than being perceived as risky or useful, technology in general is approached by its users as trustworthy. This perspective relates to evidence, mentioned by several authors, that people's technology knowledge strongly affects the adoption as well as the acceptance of technology [80,81].
Taking the above into account, we assume that people's general trust in technology is one of the primary factors shaping the trust in AI in the company felt by an employee working with a particular AI solution. Therefore, we hypothesize the following: Hypothesis H1. Employees' general trust in technology has a positive impact on their trust in AI in the company.

Intra-Organisational Trust (InOrgTr)
Intra-organizational trust is a special kind of trust, distinguished by its positive influence on phenomena and processes taking place in the organization. It concerns relations between employees (horizontal intra-organizational trust) and relations between employees and superiors (vertical intra-organizational trust) [82][83][84]. The concept of intra-organizational trust also includes a category that concerns employee trust in the organization as a whole (institutional trust) [82,85,86]. Intra-organizational trust depends on building relations in the organization based on positive expectations regarding the behavior and intentions of the parties (subordinates, superiors, colleagues, the organization as a whole). These relationships manifest themselves, among others, in mutual kindness, credibility, honesty of the parties to the relationship and willingness to provide support and assistance [87].
Intra-organizational trust (created by both its interpersonal and institutional components) is an extremely important element of support for the success of strategic changes introduced in companies [91,94]. Such changes certainly include the implementation of advanced technology solutions, including artificial intelligence. A high level of intra-organizational trust reduces employees' fear and uncertainty about the future and fosters a positive climate of change and acceptance of novelties [97,98]. It is also important to note that a positive effect of a high level of intra-organizational trust is that the most talented employees, who are an authority for others, can be retained in the organization [99,100]. Their strong motivation to work, willingness to learn, and openness to change are conducive to implementing strategic solutions. Mutual trust of employees in each other and in their superiors fosters the building of positive relations within the team. Thanks to this trust, the organization can freely exchange information and share knowledge [101][102][103]. The positive impact of intra-organizational trust is therefore visible in the performance of both individual employees and entire teams [104][105][106][107], which, in turn, translates into the effectiveness and competitiveness of the organization as a whole [108,109].
Taking into account the above-described importance of intra-organizational trust in the processes of introducing strategic changes in companies, which, as we assume, include the introduction of AI solutions, in our study we propose the following hypothesis: Hypothesis H2. Intra-organizational trust has a positive impact on employees' trust in AI in the company.

Individual Competence Trust (IndComTr)
Because trust is a complex process, there is a variety of factors determining the extent to which humans trust particular objects of trust. Even if the object of trust is not a person or a social group, the subject extending the trust is always a person. Thus, it can be expected that trust is a variable closely related to many individual characteristics that can be used to describe a person. Studies on trust (as a general construct) in this respect refer, among others, to such individual traits as a person's propensity to trust [110] and a person's specific history of interactions [52,111].
Similar research is conducted in relation to trust in technology or similar constructs, i.e., trust in differently defined objects related to technological development. In this area, researchers attempt to identify and describe influential variables that refer to the individual context and guide the formation of such trust or are associated with it. The relatively abundant literature, supported by research results, mainly concerns factors relating to the category of "trust in automation" or "trust in automated systems". The variables considered to influence the formation of such trust include age [112][113][114]; gender [115]; personality traits, such as extroversion and introversion [116] or intuitive and sensing personality [117]; the individual's emotional state/mood [118]; self-confidence [35,119]; a person's past or current experience with the same or a similar object of trust [38,56]; pre-existing attitudes and expectations towards an object of trust [37,120]; and knowledge about the purpose of an automated system or how it functions [38].
Introducing new technological solutions in companies, especially in the area of advanced technology, often requires the employee to acquire new knowledge and qualifications. Furthermore, it can be a difficult challenge, requiring the employee to quickly adapt to changes that he or she does not accept or understand, and to cope with the related stress. Introducing a new technology can disrupt employees' current patterns of behavior. Such changes may involve the modification of job responsibilities, an added workload, and additional training. Bringing a new technology into a company can be particularly intimidating for employees who are content doing their work as they have always done it, or for employees who possess specific skills and abilities that are no longer needed to the same extent as before and who are unable to quickly develop new ones. For such employees, technological changes in a company may be seen as a threat to their positions and a factor that undermines their job competence. Such changes may create feelings of uncertainty, and this uncertainty can intensify employees' resistance to accepting the changes [121,122].
Taking the above into account, in our study we assume that an important role in shaping employees' trust in AI in the company may be played by their trust in their own competences, resulting from such features as confidence in job-related knowledge; openness to the need to acquire new job-related knowledge; openness to challenges in the workplace that exceed their existing skills; the ability to quickly adapt their behavior to changing situations, including the ability to cope with stressful situations; and acceptance of the risk involved in change, together with the ability to convince others. In our study, we refer to this complex latent variable as "individual competence trust". We assume that the higher the value of the factors that make up this construct, the higher its level will be, and this, in turn, will have a positive impact on the employee's trust in the artificial intelligence solutions present in their workplace. Therefore, we propose the following hypothesis: Hypothesis H3. Employees' individual competence trust positively impacts their trust in AI in the company.

Method and Participants
Data used to verify the proposed hypotheses were collected with the use of a self-completion questionnaire. The survey was conducted between February and April 2020. Both the selection of companies and the selection of individual respondents were purposive and resulted directly from the research aim. In the case of companies, the key selection criterion was their size determined by the number of employees (large companies only). It could be assumed that large companies, acting under conditions of strong competition, have developed R&D departments and/or use advanced technology (including AI systems), and thus their employees, thanks to their contact with AI solutions, have the opportunity to form their own opinion about them. The selection of respondents in each of the companies was made with the support of people employed in these entities. The respondents were people who, as part of their professional duties, have contact (direct or indirect) with high-tech solutions, including artificial intelligence. In the conducted survey, data were obtained from 792 persons meeting the described selection criterion [123]. The survey was carried out in large industrial enterprises operating in Poland. The sample included companies operating in the food, electrical-machinery, cement, fuel and energy, light, lumber, and mineral industries.
In the article, we present the results obtained from a part of the examined sample, i.e., from the employees of the chemical and energy industries. The total number of respondents was 428. These were employees from various departments of the surveyed companies: research and development departments were the most represented in the sample (27.1%), followed by marketing and sales, production, technical, finance and accounting, administration, procurement and logistics, IT departments and others. These individuals were employed in a variety of positions, the vast majority of them in non-managerial ones (67.8%), and they had different seniority within the company, with a prevalence of people with seniority ranging from 6 up to 15 years (29.7%) and a significant, almost equal share of people in the ranges of up to 5 years (22.2%) and 16-25 years (21.7%). The majority of respondents were 41-50 years old (39.5%), and the share of women (48.4%) was almost identical to that of men (50%), with 1.6% of answers missing.

Variables and Measures
All four variables included in the proposed hypotheses are theoretical, hypothetical constructs whose realizations are not directly observable in a given sample and must be measured through a set of observable indicator variables. For this purpose, a set of statements was proposed for each variable in the survey questionnaire. In the course of the measurement, respondents were asked to respond to these statements by selecting a specific response category on a scale ranging from 0 to 10, where 0 meant "I completely disagree" and 10 meant "I completely agree". Due to the original nature of the proposed hypotheses, it was not possible to find ready-made measurement scales that could be used to measure our four latent variables. Hence, when proposing particular statements, we decided to use a mixed approach consisting of two steps. The first step included the adaptation of ready-made scales that have already been used by other researchers for the measurement of similar variables in similar research:

• The starting point for the construction of the statements attributed to the variable "Employees' trust in artificial intelligence in the company" (TrAICom) was a measurement scale proposed by researchers from the State University of New York at Buffalo, who originally used it to measure trust in automated systems [65].
• The starting point for the construction of the statements attributed to the "General trust in technology" (GenTrTech) variable were the measurement scales proposed by Ganesan [64], Seppänen et al. [67], McKnight et al. [54] and Ejdys [58].
• The starting point for the construction of the statements assigned to the "Intra-organizational trust" (InOrgTr) variable were the measurement scales proposed by Hacker and Willard [66] and Ellonen, Blomqvist and Puumalainen [68].
• The starting point for the construction of the statements attributed to the "Individual competence trust" (IndComTr) variable were primarily the solutions proposed by Jurek and Olech in a publication of the Polish Ministry of Labour and Social Policy [70]. Additional support in this respect was provided by Zeffane's publication [69].
The second step of our approach was aimed at supplementing the scales adapted from the literature on the basis of opinions obtained from experts during the pilot study.

The Analysis Method Applied
Data obtained from the study were analyzed using Structural Equation Modeling (SEM) [124][125][126][127][128][129]. SEM is a statistical approach to testing hypotheses on the relationship between observed and unobservable variables [128], derived from the following two main techniques: Confirmatory Factor Analysis (CFA) [124] and Multidimensional Regression and Path Analysis [130]. SEMs allow testing of elaborate theoretical models taking into account different relationships among variables and the analysis of both direct and indirect relationships [126].
The SEM structure consists of a model describing the relationships between latent variables, called the latent variable (structural) model [124]:

η = Bη + Γξ + ζ

and a model for the measurement of the exogenous and endogenous unobservable variables, referred to as the external model (measurement model):

y = Λ_y η + ε,    x = Λ_x ξ + δ

where: η (m×1) and ξ (k×1) are the latent endogenous and exogenous variables; B (m×m) and Γ (m×k) are the coefficient matrices for the latent endogenous and exogenous variables; ζ (m×1) is the vector of latent errors in the equations; y (p×1) and x (q×1) are the observed indicators of the latent endogenous and exogenous variables; Λ_y and Λ_x are the coefficient (loading) matrices for those indicators; and ε (p×1) and δ (q×1) are their measurement errors.

The estimation of CFA and SEM model parameters boils down to finding the parameter values for which the postulated model reproduces the observed covariance matrix as closely as possible [126]. The most commonly used estimators are ML (Maximum Likelihood), GLS (Generalized Least Squares), ULS (Unweighted Least Squares), and WLS (Weighted Least Squares) [124,126,131]. Evaluation of the obtained model is an ambiguous procedure with many variants [132]; therefore, it is necessary to assess model fit on the basis of several measures simultaneously. Many different fit indices are proposed in the literature; see, for instance, [126,131,133]. For statistical inference, the only available test is the χ² test, a traditional measure of the overall fit of the model that assesses the magnitude of the discrepancy between the observed covariance matrix and the one implied by the model. The remaining fit measures are descriptive and are divided into general fit measures (e.g., CFI, the Comparative Fit Index; TLI, the Tucker-Lewis Index; GFI, the Goodness of Fit Index) and residual-based measures (RMSEA, the Root Mean Square Error of Approximation; SRMR, the Standardized Root Mean Square Residual).
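The measurement model above can be illustrated numerically. The following is a minimal sketch of a one-factor model with hypothetical standardized loadings (illustrative values only, not the estimates reported in this study), showing how the model-implied covariance matrix Σ = ΛΦΛ' + Θ is computed; estimation seeks parameter values that bring Σ as close as possible to the observed covariance matrix:

```python
import numpy as np

# Hypothetical one-factor measurement model x = Lambda * xi + delta
# (illustrative loadings only, not the estimates from this paper)
Lambda = np.array([[0.8], [0.7], [0.6], [0.5]])   # loadings of 4 items on 1 factor
Phi = np.array([[1.0]])                           # variance of the latent variable
Theta = np.diag(1.0 - Lambda[:, 0] ** 2)          # unique (error) variances

# Model-implied covariance matrix: Sigma = Lambda Phi Lambda' + Theta
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma, 2))
```

With standardized loadings, the diagonal of Σ equals 1 and each off-diagonal entry is the product of the two items' loadings (e.g., 0.8 × 0.7 = 0.56).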
In the literature, there is no consensus on the recommended values of the individual measures. On the basis of the available lists of recommendations, e.g., [124,125,128,129,131,134-137], only minimum acceptable values of the individual indicators can be given: CFI, TLI, and GFI should be greater than 0.9, while RMSEA and SRMR should be less than 0.1.
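As a minimal illustration of applying these thresholds, the helper below checks a set of fit indices against the minimum acceptable values quoted above (CFI/TLI/GFI greater than 0.9, RMSEA/SRMR less than 0.1). The function name and dictionary interface are our own, and the thresholds are kept as parameters precisely because, as noted, the literature offers no single recommendation.

```python
def fit_acceptable(indices, min_comparative=0.9, max_residual=0.1):
    """Check SEM fit indices against minimum acceptable thresholds.

    indices: dict with keys among "CFI", "TLI", "GFI" (larger is better)
    and "RMSEA", "SRMR" (smaller is better). Default thresholds follow
    the values quoted in the text; they are not universal rules.
    """
    comparative_ok = all(
        indices[k] > min_comparative for k in ("CFI", "TLI", "GFI") if k in indices
    )
    residual_ok = all(
        indices[k] < max_residual for k in ("RMSEA", "SRMR") if k in indices
    )
    return comparative_ok and residual_ok
```

For example, a model reporting CFI = 0.95, TLI = 0.94, GFI = 0.93, RMSEA = 0.04, and SRMR = 0.05 would pass, while RMSEA = 0.12 would fail the residual criterion.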

Results
The Structural Equation Modeling was developed in two stages. In the first stage, a confirmatory factor analysis was carried out, and then the SEM models were built. The CFA model allowed us to determine how the latent variables are identified and explained by the observable variables (items). The structural models, in turn, allowed us to determine the relationships between the latent variables.
Our proposed items for the particular constructs are presented in Appendix A; they were the starting point for building the CFA model. Of all 428 collected observations, those containing missing data (32) or multivariate outliers (106) were removed; the Mahalanobis distance was used to detect the outliers [129,137]. The collinearity and normality of the distribution of the analyzed factors were then examined. The factor indicators turned out to exhibit low to moderate collinearity and did not meet the assumption of multivariate normality. Therefore, at the next stage of the analysis, the CFA and SEM models were estimated using Robust Maximum Likelihood (RML), which applies the Satorra-Bentler correction [138] to the traditional test statistics and standard errors. The model parameters, together with the evaluation of the stochastic structure, were estimated using the R software.
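The outlier-screening step described above can be sketched as follows, assuming the conventional rule of flagging observations whose squared Mahalanobis distance exceeds a chi-square critical value (df = number of variables). The paper does not report the exact cutoff used, so the significance level here is illustrative, and the sketch uses Python/SciPy rather than the R software actually used.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_screen(X, alpha=0.001):
    """Flag multivariate outliers via squared Mahalanobis distance.

    X: (n_obs x n_vars) data matrix. Rows whose D^2 exceeds the
    chi2 critical value at 1 - alpha (df = n_vars) are flagged.
    alpha = 0.001 is a common convention, used here as an assumption.
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # D^2_i = (x_i - mu)' Sigma^{-1} (x_i - mu), computed row-wise
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2, d2 > cutoff
```

In practice the flagged rows are dropped before estimating the CFA model, exactly as the 106 outlying observations were removed here.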
The applied research procedure assumes that the measurement model (CFA) should be correct in terms of the reliability, consistency, and validity of its measures. Meeting these conditions required modifying the number of items per latent variable: for each latent variable (TrAICom, GenTrTech, InOrgTr, IndComTr), two items were removed, so that each variable consists of four items (see Table 1). The results for construct correctness and the CFA fit quality measures are shown in Table 2; additionally, the correlation coefficients between the latent variables are shown below the main diagonal in Table 2.
The reliability and validity of the theoretical constructs were assessed using the following measures: Cronbach's alpha, Composite Reliability (CR), Average Variance Extracted (AVE), and correlation coefficients. For all constructs, Cronbach's alpha is above 0.88 and CR is above 0.92, indicating high reliability and internal consistency of the items included in each of our proposed constructs. At the same time, the AVE values are smaller than CR and exceed 0.5, which means that the items we assigned to a given construct are well related to the other items of the same construct (convergent validity). The factor loadings, which quantify the direct effects of the latent variable on its items, are statistically significant and indicate a good fit of the model elements. Their standardized values exceed 0.78 (see Figure 1) and the Fornell-Larcker criterion is met, see, for example, [129,139].
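For reference, the reliability and validity measures named above can be computed from raw item scores and standardized loadings as sketched below. These are the textbook formulas; the helper names are our own, and this is an illustration rather than the exact code used in the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR from standardized loadings; error variance = 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

def fornell_larcker_ok(aves, corr):
    """Fornell-Larcker: sqrt(AVE) of each construct must exceed its
    absolute correlations with every other construct."""
    root = np.sqrt(np.asarray(aves, dtype=float))
    corr = np.asarray(corr, dtype=float)
    n = len(root)
    return all(
        abs(corr[i, j]) < min(root[i], root[j])
        for i in range(n) for j in range(n) if i != j
    )
```

For instance, four items loading uniformly at 0.8 give AVE = 0.64 and CR ≈ 0.88, consistent in magnitude with the values reported above.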
The fit of the measurement model was evaluated by means of the χ² test, selected comparative measures, and general fit measures. While the result of the χ² test is not satisfactory (the null hypothesis should be rejected), the normed χ² (χ²/df = 1.6 < 2) is satisfactory. The values of the comparative measures (CFI, TLI, and GFI) are greater than 0.93, and the general fit measures are smaller than 0.05; these measures take acceptable values. The results obtained indicate that the latent variables are well explained by the selected items and can therefore be used to verify the research hypotheses (H1, H2, H3) we formulated. The simultaneous impact of all three latent variables (GenTrTech, InOrgTr, IndComTr) on employees' trust in AI in the company (TrAICom) was examined by applying SEM. The results of the model describing this relationship are presented in Figure 2 and Table 3.
The fit to the empirical data achieved by the SEM model is satisfactory. The normed χ² is less than 2, and the CFI, TLI, and GFI measures are greater than 0.93. A very good level of fit was achieved for both RMSEA and SRMR (both less than 0.05).

Discussion
The research results show that, among all the constructs examined as potential determinants of employees' trust in AI in the company (TrAICom), the one with the strongest impact is general trust in technology (GenTrTech) (β = 0.639, p < 0.001). The results obtained for the GenTrTech variable indicate that there are no grounds to reject hypothesis H1, according to which employees' general trust in technology has a positive impact on their trust in AI in the company. The study confirmed that this impact is positive: as employees' overall trust in technology increases, so does their trust in AI in the company (TrAICom).
Indeed, the impact of general trust in technology on employees' trust in the AI they use at their workplace was expected. Our findings align with the evidence found in the relevant literature. The cause-and-effect relationship between general trust in technology (GenTrTech) and employees' trust in AI in the company (TrAICom) refers to the observation, made by several researchers, that people's general trusting beliefs regarding the qualities of technology influence individual technology acceptance and adoption behavior [28,33,53]. In line with prior research, general trust in technology is said to be of a cognitive and confidence-based nature, which generally means that it relies on people's rational thinking grounded in their general knowledge about technology and its attributes, prior experience, propensity to trust in technology, and self-confidence [33,57,79-81]. Thus, general trust in technology is explained by users' willingness to take factual information or advice and act on it, as well as by their perception of the technology as helpful, competent, or useful [33]. As highlighted by some researchers, in contrast to the low trust that initially exists between unfamiliar humans, new technologies may produce optimistic beliefs regarding their abilities and functionality from the very moment they are introduced to the market [33,140]. Moreover, an individual's propensity to trust, combined with the user's self-confidence in handling technology, has been recognized as having a positive impact on trusting behavior in novel situations, such as new AI solutions applied in the workplace [53,55,79,141]. Given the above-mentioned dependencies as well as our study results, it is increasingly important to develop a climate of trust in technology in general in daily life at the workplace. In modern work environments, AI solutions are used frequently, and sometimes they even become employees' daily companions.
That is why, while moving toward the new reality of Industry 4.0, companies should pay considerable attention to providing employees with knowledge of new technologies (e.g., through training) as well as technical and organizational support, in order to enhance their general trust in technology, which in turn increases their trust in AI in the company they work for [28]. Furthermore, some authors focus on the need to intensify activities related to the transparency of new technologies, which seems to be an imperative for business in the near future [71,77,142]. This appears particularly important because an increase in employees' trust in AI in the company can bring several benefits, such as higher job performance, improved job safety, fewer errors, and greater company efficiency [28].
The results of the study indicate that intra-organizational trust (InOrgTr) also has a significant and positive impact on employees' trust in AI in the company (TrAICom) (β = 0.216, p < 0.001). The results for the InOrgTr variable indicate that there are no grounds for rejecting hypothesis H2, according to which intra-organizational trust has a positive effect on employees' trust in AI in the company. The strength of this factor, however, is somewhat smaller than that of the general trust in technology discussed above.
It was to be expected that intra-organizational trust would have a significant and positive impact on employees' trust in AI in the company, as this trust is an important factor supporting strategic changes in the organization, and such changes include the implementation of technological solutions using artificial intelligence. One aspect of intra-organizational trust is support for employees at every stage of their development, especially when they need to assimilate new knowledge, skills, and competences. Intra-organizational trust significantly reduces employees' fear of, and uncertainty about, newly introduced solutions. It makes them feel that, should any problems arise in performing their duties, they will receive appropriate support, e.g., in the form of training [88,143]. Moreover, employees' mutual trust in each other, their superiors, and the organization in which they work fosters positive relations within employee teams, which significantly improves the exchange of information and knowledge sharing within the organization [101,103,144,145]. In the context of implementing AI solutions, it is also important that a high level of intra-organizational trust is conducive to retaining the best employees in the organization, who, on the one hand, provide substantive support for others and, on the other, function as authorities [99,100]. Their strong motivation to work, commitment, and openness to change promote the implementation of strategic solutions.
At the same time, the results of the study indicate that individual competence trust (IndComTr) is a statistically insignificant factor (β = 0.056, p-value = 0.157) with respect to its impact on employees' trust in AI in the company (TrAICom). This result necessitates rejecting hypothesis H3, which assumed that employees' individual competence trust positively impacts their trust in AI in the company.
The obtained result may be surprising, because H3 was proposed on the basis of research conducted on similar trust categories ("trust in technology", "trust in automation", or "trust in automated systems"), in which numerous influential variables referring to a person's individual context have been identified as guiding the formation of such trust. Among them are variables that appear closely related to the items we assigned to the IndComTr construct, for example "self-confidence", which is discussed in the context of similar categories of trust by, for instance, Case, Sinclair, and Rani [119] and Lee and See [35]. On the other hand, it is worth noting that in our study the starting point for constructing the statements assigned to the IndComTr variable was primarily the solutions proposed by Jurek and Olech [70] in the self-assessment questionnaire of competences in the personal and organizational area, which is part of the so-called IE-TC Catalogue of Competent Action. In our opinion, these could be related to the IndComTr construct we examined, but it should be remembered that they were created for completely different purposes (the measurement scale proposed by these authors was not designed to measure the variable we define as "individual competence trust"). Moreover, we modified the solutions proposed by Jurek and Olech (both the number of items and their content) based on the expert interviews conducted during the pilot study. All these facts could have reduced the expected validity of the scale we proposed.
Another explanation for the nonconfirmation of hypothesis H3 may be that the competences indicated in the proposed items do not necessarily translate directly into the formation of employees' trust in AI in the company under study, or at least did not do so for the specific group of employees who constituted our research sample. These were employees of production companies operating in only two industries (energy and chemical) who, additionally, had direct or indirect contact with advanced technology solutions, including artificial intelligence, and therefore had relatively high and relatively uniform competences related to the use of AI in the workplace. The obtained result may also suggest that there are other individual characteristics of employees, not taken into account in our study, that have a more decisive influence on the formation of employees' trust in AI in the company.
The above-mentioned possible explanations for the nonconfirmation of H3 also point to interesting directions for further research. It is worth considering, among other things, conducting analogous surveys among employees with more diverse competences in the area of using AI solutions at their workplaces, or among employees of companies operating in other industries/sectors of the economy. The latter postulate was partly realized by the authors, because the research project referred to in this article also covered employees of companies operating in the food, electrical-machinery, cement, fuel and energy, light, lumber, and mineral industries. The authors are currently analyzing the data obtained from this part of the sample, and the results of their analytical work will be presented in subsequent publications. Should H3 not be confirmed in the future research postulated above, a natural next step would be to modify and validate the proposed scale for measuring "individual competence trust", or to search for other individual characteristics of employees that positively correlate with the construct of "employees' trust in AI in the company".

Conclusions
The aim of the article was to examine links between employees' trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). The conducted analysis allowed us to verify the hypotheses that have been formulated in the research process. The developed structural equation model shows the existence of a positive relationship between general trust in technology and employees' trust in AI in the company as well as between intra-organizational trust and employees' trust in AI in the company in the surveyed firms.
Given the growing use of AI in business as well as companies' dependence on employees' interactions with advanced technologies, among them AI, it is necessary to understand the factors fostering employees' trust in the AI used in their companies. The present research provides one of the first empirical explorations and validations of key variables for employees' trust in AI at the workplace. We therefore perceive it as contributing to theory and as having important managerial implications. The article contributes to the trust literature by adding to the existing debate on employees' trust in AI. Specifically, the findings contribute to a better understanding of human-AI collaboration and dynamics, as well as of the nature of employees' trust in AI in companies of the energy and chemical sectors and its antecedents.
Moreover, this study contributes to practice in three ways. First, the findings may have implications for managers responsible for implementing advanced technology solutions, including AI, by providing them with guidelines on how to build employees' trust in this area. This is of high importance because nowadays there is little doubt that the trust employees develop in AI will be central to determining its role in companies moving forward. Second, this knowledge may be of great importance for producers and suppliers of AI solutions because, according to the findings, one of the variables influencing employees' trust in AI in the company is general trust in technology, which refers to people's assessment of whether the suppliers of the technology have the knowledge and resources necessary to implement these solutions. Third, considering that the governments of most countries treat artificial intelligence as a future main driver of economic growth and job creation, knowledge of the factors building employees' trust in AI may be invaluable for public institutions involved in supporting commercial pilot projects as well as research and development projects in the field of advanced technology and/or artificial intelligence. This study also raises questions and may open up new avenues for further research on employees' trust in AI in the company.
Nevertheless, we are aware that our research is not free from limitations. Due to the exploratory nature of the study and the nonrandom sample selection, the results obtained cannot be treated as representative of all employees of the chemical and energy industry companies operating in Poland. However, they may be helpful in operationalizing the considered latent variables as well as in determining the directions of subsequent research steps conducive to their measurement.
Providing recommendations for further research is an important outcome of any research study. The conducted study invites in-depth investigations of employees' trust in AI in the company. It would be interesting to extend the empirical analysis by including mediators in the research model. Furthermore, it could also be interesting to compare the factors influencing employees' trust in tangible AI, which has some kind of physical representation, and virtual AI, characterized by having no physical presence. As mentioned above, it is also worth investigating companies representing other industries in order to find out whether the specificity of the industry affects the research results regarding employees' trust in AI at the workplace. Moreover, a survey among employees with more diverse competences in the area of using AI solutions in their workplaces may shed new light on the issues examined.