The appropriation of conversational AI in the workplace: A taxonomy of AI chatbot users ☆

The use of AI applications in organisations is growing exponentially. Due to the machine learning capability of artificial intelligence (AI) applications, it is critical that such systems are used continuously in order to generate rich use data that allow them to learn, evolve and mature into a better fit for their users and organisational context. This research focuses on the actual use of conversational AI, in particular AI chatbots, as one type of workplace AI application to answer the research question: how do employees experience the use of an AI chatbot in their day-to-day work? Through a qualitative case study of a large international organisation and an inductive analysis, the research uncovers the different ways in which users appropriate the AI chatbot and identifies two key dimensions that determine their type of use: the dominant mode of interaction and the understanding of the AI chatbot technology. Based on these dimensions, a taxonomy of users is presented, which classifies users of AI chatbots into four types: early quitters, pragmatics, progressives and persistents.


Introduction
The adoption of artificial intelligence (AI) in organisations is increasing at a rapid pace. AI technology is an umbrella term that covers robotics, natural language processing, computer vision, and machine learning, among other smart capabilities (Elbanna, 2020; Elbanna et al., 2020). A recent report shows that 85 % of the organisations surveyed were either considering or already using AI-based applications (Magoulas & Swoyer, 2020). Conversational AI in particular is growing in popularity. The size of the global conversational AI market is expected to increase from USD 6.8 billion in 2021 to USD 18.4 billion by 2026, at a compound annual growth rate of 21.8 % (MarketsandMarkets, 2020). It is also reported that conversational AI is on track to become a mainstream technology in almost every market vertical (Lis, 2022). This acceleration and intensification of the adoption and use of conversational AI is partially attributed to employers' need to facilitate employees' direct access to corporate information to support working from home during the pandemic, and hybrid working in its aftermath (Gkinko & Elbanna, 2022). Indeed, the response to and recovery from Covid-19 have significantly increased the volume of interactions handled by conversational AI systems (including AI chatbots and voice assistants) by an estimated 250 % in multiple industry sectors (Comes et al., 2021; MarketWatch, 2022).
Conversational AI is an interactive class of software applications that engage in dialogue with their users by utilising natural language (Dale, 2019). Conversational AI can have a voice- or text-based interface and can be used by organisations externally to support customers or internally to support employees. This study focuses on the internal use of AI chatbots: a type of conversational AI that is gaining popularity in the workplace. In the workplace context, it is argued that AI chatbot technology improves productivity and employees' experiences (Gkinko & Elbanna, 2020a, 2020b). It can improve productivity through facilitating various business processes and services at a low cost. For example, AI chatbots are used in organisations to assist employees with accessing corporate documents and information online, to provide translation services, to compile information from different sources, and to format compiled information to fit organisational templates. AI chatbots are argued to improve employees' experiences by making work resources easily accessible, discoverable and manageable (Marikyan et al., 2022), while providing a convenient natural language interface for users. It is also argued that using AI chatbots can reduce stress and support users in avoiding information overload (Brandtzaeg & Følstad, 2017; Kimani et al., 2019; Meyer von Wolff et al., 2019). They can also offer employees a unique and more personalised experience that may appeal to new generations of workers as they enter the workplace (Gkinko & Elbanna, 2020b).
☆ Associate Research Fellow, The UK Economic and Social Research Council (UK-ESRC), Digital Future at Work Research Centre, Sussex, UK.
Most studies of AI chatbots have investigated developers' and customers' perspectives, and very few have explored the perspective of employees (Jang et al., 2021). This is despite the distinct characteristics of conversational AI systems that differentiate them from other conventional enterprise systems in the workplace (Elbanna, 2006, 2013). Indeed, conversational AI systems in workplace settings are distinct in the following ways: (1) they exhibit human-like competence in conducting specific tasks while conversing with users in natural language; (2) they exhibit personality and hold anthropomorphic characteristics (Bavaresco et al., 2020); (3) their development is continuous, because they consistently learn from the content of users' requests and the way in which users interact with them; and (4) their continuous use is essential for their further improvement because they employ machine learning technology, which depends on use to learn and enhance the AI system's operational capabilities so it can evolve to better fit its users and organisational context and in turn attract further use. These differences make understanding how employees use this technology of particular theoretical and practical importance.
Since the continuous improvement of AI chatbots is dependent on their use, attention should be paid to an in-depth understanding of users' behaviour and patterns of use (Brandtzaeg & Følstad, 2017). Scholars have recently acknowledged that there is a lack of understanding of the use of AI chatbots in organisations from the perspective of employees, and have called for more research in this regard (Brachten et al., 2021; Dwivedi et al., 2019; Jang et al., 2021; Marikyan et al., 2022). These calls for more research provide another motivation for this study.
Against this backdrop, this research explores the use of an AI chatbot in the workplace from the perspective of employees to answer the following research question: how do employees experience the use of an AI chatbot in their day-to-day work? To answer the research question, we adopted a qualitative case study approach. We examined the use of an AI chatbot in a large international organisation and collected rich data through interviews, document reviews, and observations. An inductive data analysis revealed that users appropriated the AI chatbot in different ways. Accordingly, we followed an inductive taxonomy-building approach to capture these differences and categorise them into types of AI chatbot users. Inductive taxonomy-building begins with empirical data and analysis, followed by the extraction of abstract categories (Bailey, 1994; Nickerson et al., 2013). Consequently, we identified and defined four types of users: early quitters, pragmatics, progressives and persistents. By developing a taxonomy of users in relation to their AI chatbot use, this research lays the foundation for further work and theory-building on the use of AI in general and conversational AI in particular (Nickerson et al., 2013; Szopinski et al., 2019).
Following the introduction, we briefly discuss research on the use and appropriation of AI chatbots and technology in Section 2. In Section 3, we explain the research methodology, including a description of the case study site, the data-collection method, and the taxonomy-building method. In Sections 4 and 5, we present the research findings and discuss their theoretical and practical implications. Finally, in Section 6 we conclude the study, discuss its limitations and suggest avenues for future research.

The use of AI chatbots in the workplace
An AI chatbot is a type of interactive technology that provides on-demand 24/7 service to employees. AI chatbots differ from basic chatbots in many ways, as summarised in Table 1. Basic chatbots employ a rule-based approach and typically follow a retrieval-based model (Makasi et al., 2021). While basic rule-based chatbots operate on the basis of keywords and lists of pre-defined answers (similar to FAQs) to match users' queries with answers, AI chatbots are able to compile answers to users' queries by using natural language processing, natural language understanding, natural language response generation, and machine learning. In addition, AI chatbot systems are scalable, dynamic, and context-based and they are broad in scope, capable of extending a search to multiple channels and using multiple languages (REFS). The characteristics of AI chatbots can be divided into categories of form and function, where form includes the interface, visual appearance, and aesthetics, and function includes performance, utility and functionality (Botzenhardt et al., 2016; Gkinko & Elbanna, 2022).
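The contrast between keyword matching and learned intent recognition summarised in Table 1 can be sketched in a few lines. This is a minimal illustration only: the intents, keywords, and example utterances are invented for the example and do not reflect the implementation of any system discussed here.

```python
# Minimal contrast between a rule-based chatbot and a learning-style intent matcher.
# All intents, keywords, and utterances are invented for illustration.

RULES = {  # rule-based: a keyword maps to one canned answer (FAQ style)
    "password": "To reset your password, visit the self-service portal.",
    "vpn": "Install the VPN client from the software centre.",
}

def rule_based_reply(utterance: str) -> str:
    """Return the first canned answer whose keyword appears in the utterance."""
    for keyword, answer in RULES.items():
        if keyword in utterance.lower():
            return answer
    return "Sorry, I did not understand that."

# An AI chatbot instead scores the utterance against labelled training examples;
# here a simple bag-of-words overlap stands in for a trained language model.
TRAINING = {
    "reset_password": ["i forgot my password", "cannot log in to my account"],
    "vpn_setup": ["how do i connect remotely", "set up the vpn client"],
}

def ml_style_intent(utterance: str) -> str:
    """Pick the intent whose examples share the most words with the utterance."""
    words = set(utterance.lower().split())
    def score(intent: str) -> int:
        return max(len(words & set(ex.split())) for ex in TRAINING[intent])
    return max(TRAINING, key=score)
```

The key difference the sketch shows is that the rule-based reply fails on any phrasing that misses the exact keyword, whereas the intent matcher can handle paraphrases such as "cannot log in", which contains none of the rule keywords.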
AI chatbot systems are used in a variety of industry sectors (such as healthcare, finance, retail and education) to support customers, and there has been a recent proliferation of their use in workplace settings to support employees. Previous research on AI chatbots has adopted the perspective of either designers or customers, paying less attention to the perspective of employees. This earlier research revolves around two streams: technical and behavioural. The technical stream is the dominant and older stream. It focuses on advancing learning algorithms and approaches to AI chatbot design related to customisation, anthropomorphism (Fotheringham & Wiles, 2022) and personalisation (Ciechanowski et al., 2019; Go & Sundar, 2019). Research in this stream has focused on improving algorithms for more accurate results (Thakkar et al., 2018) and extending functionalities to include sentiment analysis, for example (Sidaoui et al., 2020). The behavioural stream is narrowly focused on determining the factors that impact the intention to adopt and the intention to use an AI chatbot, and the potential impact of this intended adoption and/or use. This stream lies largely in the marketing domain and focuses on customer intentions to adopt and use an AI chatbot at different points in the customer journey (Ashfaq et al., 2020; Cheng & Jiang, 2020; Eren, 2021) and its potential impact on customer engagement (Youn & Jin, 2021), satisfaction with the brand (Chung et al., 2020), intention to purchase (Myin & Watchravesringkan, 2021) and continuous use (Li et al., 2021). Beyond marketing and sales, other research in this stream considers domains such as healthcare and personal monitoring devices (Huang & Rust, 2021), but there is scant examination of the use of AI chatbots in the organisational context (Brachten et al., 2021).
Research into the internal adoption and use of AI chatbots in organisations from the employee perspective continues to be thin on the ground (Jang et al., 2021; Marikyan et al., 2022). Although some conceptual studies have been carried out (Huang & Rust, 2021), there are very few empirical studies on the actual implementation and use of AI chatbots (Bavaresco et al., 2020; Rapp et al., 2021). Research by Jang et al. (2021) explores managers' perspectives on implementing AI chatbots and highlights the challenges that managers face with the technology, their suppliers, and the creation of a new service model. Gkinko and Elbanna (2022) study the impact of AI chatbots' form and functional characteristics on employees. They find that the characteristics of an AI chatbot influence users' emotions in the workplace and trigger a unique set of emotions that differentiate the use of AI chatbots in the workplace from the use of conventional enterprise systems. They group these emotions as connective and amusement emotions (Gkinko & Elbanna, 2022). There are also suggestions that AI technologies in general could serve as teammates (Seeber et al., 2020) and that organisations could create new human-machine intelligence (Hu et al., 2021; Zhou et al., 2021). In the context of AI chatbots, such human-AI augmentation requires continuous use, which stands in contrast to conventional enterprise systems (Dellermann et al., 2019) and places particular importance on the need to investigate actual AI chatbot use by employees in organisations.

Table 1
Characteristics of rule-based chatbots and AI chatbots.

Use of technology
The use of technology is a key area in information systems research, and it has been studied in many different ways and from different theoretical perspectives. Although we recognise approaches such as adaptive structuration theory (Poole & DeSanctis, 1989), technology affordances (Leonardi, 2011), actor network theory (Latour, 2007) and sociomateriality (Orlikowski, 2007) along with their applications (Elbanna, 2016), two theories resonated with our inductive data analysis: technology appropriation and technological frames. We review these theories in this section; however, we remain faithful to our inductive approach.
Technological frames theory adopts a social construction and social cognition perspective to explain that people's understanding of new technology determines its use (Elbanna & Linderoth, 2015). Technological frames are defined as "knowledge and expectations that guide actors in their interpretations and actions related to IT" (Davidson, 2006, p.24). Technological frames theory maintains that individuals and groups assign meanings to, and develop mental models for, new technology based on three aspects (Orlikowski & Gash, 1994): (1) the nature of the technology; (2) the technology implementation strategy; and (3) technology-in-use in day-to-day work. It focuses on how members of an organisation make sense of technologies and how their interpretations influence their actions (Davidson, 2006). Research that has adopted technological frames theory has shown how the nature of technology, the narratives surrounding its implementation, and the narratives surrounding its use are interpreted, and that these interpretations differ among various organisational stakeholders, which results in incongruent outcomes (Orlikowski & Gash, 1994; Ovaska et al., 2005).
The theory of appropriation also adopts a social construction philosophy in understanding individuals' use of technology after its initial adoption (Carroll et al., 2003; Dourish, 2003). It postulates that individuals adopt new technology and adapt it to fit their working practices (Carroll et al., 2003). Specifically, users' appropriation and social shaping of technology become integral to its use and complement the original design of the technology as users adapt its use to suit their understanding and needs. Hence, user appropriation plays a key role in the success of the deployment of a technology and its lasting use. When the technology is in use, people may redefine its functional purpose, customise it and assign symbolic meanings to make the technology their own, or they may reject and discard the technology. Based on this theory, the trajectory of individuals' evaluation and use of a technology has three levels (Carroll & Fidock, 2011; Carroll et al., 2003), as shown in Table 2.

Case study: approach and description
We adopted a single case study approach for this research in order to provide an in-depth understanding of users (Yin, 2013). By focusing on one AI chatbot in one organisation, we eliminated variations in system configurations and reduced the impact of variations in organisational contexts. We selected the case study organisation following the recommendation that valuable insights can be gained from companies that are rather advanced in using the system under investigation (Benbasat et al., 1987).
We studied one department in a large global organisation. The department that we investigated has more than 30,000 employees worldwide. The AI chatbot that we studied had initially been developed as an internal system to act as an IT helpdesk responding to employees' enquiries related to IT issues; later, it was expanded to cover other services, including translation, sentiment analysis of text, and booking holidays. Initially, the organisation's main motivations for creating the AI chatbot were to reduce costs and enable employees to be more self-sufficient, but over time the aims evolved to include providing a seamless work experience and personal assistance. The AI chatbot was developed based on the Microsoft Bot Framework and Azure Cloud Services.

Research approach and data-collection methods
We adopted a qualitative interpretive approach to explore employees' lived experiences of using the AI chatbot in their work. This approach enabled us to gain an in-depth understanding of participants' experiences by engaging with them in a natural setting (Creswell and Creswell, 2018; Klein & Myers, 2001). Interpretivist research represents a move away from the deterministic explanation of human behaviour to pay attention to the subjective interpretations that actors ascribe to a phenomenon (Goldkuhl, 2012; Johnson et al., 2006; Leitch et al., 2010). When following an interpretivist approach, the purpose is to understand the subjective meaning created by the participants and their personal experiences and viewpoints. Knowledge of reality is considered to be a social construct created by human actors (Walsham, 2006). Interpretive information systems research maintains that the social world is not an objective truth but is constructed through actors' perceptions and actions (Klein & Myers, 1999; Orlikowski & Baroudi, 1991; Walsham, 1995). In our research, this provided a fundamental grounding to understanding users' experiences, where the unit of analysis is the user's lived experience.
The data collection comprised semi-structured go-along interviews and document reviews (Bell & Bryman, 2007; Myers & Newman, 2007). We conducted 46 semi-structured interviews virtually with unique participants: 44 by using the organisation's audio/video platform, and 2 by email exchange. We had developed a good rapport with some of the participants through long-term engagement, and we were not complete strangers to some of the other interviewees (Myers & Newman, 2007). In these cases, it was possible (and in line with participants' expectations) to limit unnecessarily lengthy introductions and closures and focus directly on the subject (Mann & Stewart, 2016). This resulted in variation in the length of the interviews. In line with observations in previous research, we found that virtual interviews tend to be shorter than in-person interviews, because the online format reduced the number of typical office disruptions and provided a quieter space for the interviewer-interviewee interaction (Gkinko & Elbanna, 2022; Gray et al., 2020; Salmons, 2014). We adopted the go-along interview method, which combines interview and observation techniques to provide opportunities for researchers to account for more than what the participants say during the interview (Stiegler, 2021). In this way, go-along interviews allow for a better understanding of participants' perceptions of day-to-day interactions in context (Kusenbach, 2003). They often entail researchers moving around social environments with participants while engaging in conversation (Castrodale, 2018). Therefore, in line with this method, we encouraged the participants to share their screens so we could observe their interactions as they spoke about their use of the AI chatbot while showing us examples of this use and highlighting different features. The rich data from go-along interviews and observations of events and actions as they occur in a local setting form a critical part of the case study data and contribute to the validity of that data (Erickson, 2012; Yin, 2013).

Table 2
Levels of technology evaluation (adapted from Carroll et al., 2003; Carroll & Fidock, 2011).

Level 1: Initial judgements are made at the user's first encounter with a new technology. The outcome of this encounter is either that the user is no longer interested in the technology and decides not to appropriate it, or that the user decides to explore and evaluate the technology further, thus continuing the process of appropriation.
Level 2: The user explores the technology in depth. The outcome of this process is either the appropriation of the technology, where the user takes possession of its capabilities to satisfy their needs, or the disappropriation of the technology, where the user rejects it at some stage of the appropriation process.
Level 3: The technology is appropriated and integrated into the user's everyday practices through long-term use. However, changes in the user's evaluation of the technology may still lead to disappropriation.

Fig. 1. Data structure.
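The three-level appropriation trajectory of Table 2 can be read as a small decision model. The sketch below is our own shorthand for Carroll et al.'s levels; the state and event names are invented for illustration and are not terminology from the original framework.

```python
# Table 2's appropriation trajectory as a simple state model.
# State and event names are illustrative shorthand, not Carroll et al.'s terms.

TRANSITIONS = {
    # (current state, user decision) -> next state
    ("level1_encounter", "lose_interest"): "non_appropriation",
    ("level1_encounter", "explore"): "level2_exploration",
    ("level2_exploration", "appropriate"): "level3_in_use",
    ("level2_exploration", "reject"): "disappropriation",
    ("level3_in_use", "negative_reevaluation"): "disappropriation",
}

def trajectory(decisions):
    """Walk a user's successive decisions from the first encounter onwards."""
    state = "level1_encounter"
    for decision in decisions:
        state = TRANSITIONS[(state, decision)]
    return state
```

For instance, `trajectory(["lose_interest"])` ends in non-appropriation at Level 1, while `trajectory(["explore", "appropriate"])` reaches long-term use at Level 3.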
In line with the inductive research process, we followed up the interviews with emails and further communications (when needed) to clarify or expand on points made during the interviews. The interviews took place between December 2019 and April 2022, during which time we witnessed the progress of the AI chatbot in the organisation. The interviews lasted between 20 min and 1 h. The users were selected randomly from different teams among those who agreed to participate in the study, and their industry experience ranged from a few months to over 15 years. The backgrounds and professional roles of the participants are presented in Appendix 1. The rounds of interviews reached closure when a point of saturation had been reached and no new information was emerging from the interviews (Braun & Clarke, 2021; Eisenhardt, 1989).
The documents we reviewed included project documents, system manuals, newsletters, emails and internal links, and they were accessed on the basis of a confidentiality agreement. Documents are an important source of data in information systems research (Punch, 2013; Trauth, 2001). The documents we reviewed enhanced our understanding of the context of the AI chatbot's implementation and situated the participants' interactions in the context of the technology-as-designed, the initial purpose of the chatbot, and the strategic roadmap.

Data analysis and taxonomy development
All the interviews were recorded and transcribed verbatim. To maintain confidentiality, all data were anonymised. During the interviews, we observed that users appropriated the AI chatbot in different ways; hence, we embarked on identifying and classifying common patterns. There are two broad methods of developing a classification scheme: taxonomy, which is inductive and based on primary data; and typology, which is deductive and based on secondary sources (Bailey, 1994; Miller & Roth, 1994; Paré et al., 2015). We selected the former, given the different patterns of use observed during the data analysis, the novelty of the phenomenon, and the insufficiency of existing research on the actual use of AI chatbots. Taxonomies can be developed in a variety of ways, and the literature does not recommend one particular method. We adopted Nickerson et al.'s (2013) guidelines for developing a taxonomy. Szopinski et al. (2019) conducted an extensive review of 196 taxonomy-related papers in the information systems (IS) domain and concluded that Nickerson et al.'s (2013) taxonomy-building guidelines are "a widely accepted method in IS". This method is strongly rooted in Bailey's (1994) seminal and widely recognised work on taxonomy-building. The method asserts that taxonomy-building comprises inductive analysis to extract patterns, dimensions, and characteristics upon which a taxonomy is constructed, followed by iterative cycles of combined inductive and deductive development (for detailed guidelines see: Nickerson et al., 2013).
For the inductive analysis, we followed the method used by Gioia et al. (2013). This method provides consistent and rigorous analysis across three stages, as shown by the data structure in Fig. 1 (Gioia et al., 2013). In the first stage, we used excerpts from the interviews to identify the emerging codes that best represented the participants' voices while avoiding theoretical bias. In the second stage, we conceptually categorised the first-order codes into four themes: emotion-based use, function-based use, perception of the AI chatbot as an enterprise tool, and perception of the AI chatbot as a potential personal assistant. In the third stage, we aggregated the four themes into two dimensions: dominant mode of interaction and understanding of the AI chatbot technology (Gioia, 2020). These two dimensions served as the basis for developing the taxonomy, and accordingly we classified the users into four types. Finally, we evaluated the resulting taxonomy conceptually, to ensure that it met the criteria of mutual exclusion and exhaustion (Nickerson et al., 2013), and empirically, by partly gearing the last five interviews towards validating the taxonomy as part of a wider research programme. There are already several taxonomies of technology adoption, and Rogers's (2010) taxonomy is one of the most commonly used. However, Rogers's work is based on classifying the uptake of a technology from the moment of its introduction (Rogers, 2010). In contrast, we focused on the actual use of the technology on a day-to-day basis: how users engaged with it, rather than its initial adoption.
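Once each case is coded on the two dimensions, Nickerson et al.'s ending conditions of mutual exclusion and exhaustion can be checked mechanically. The sketch below illustrates such a check; the dimension value labels mirror the paper's aggregate themes, but the interviewee codings are invented for the example.

```python
# Checking two of Nickerson et al.'s (2013) ending conditions on coded cases:
# each case must take exactly one value per dimension (mutual exclusion), and
# every combination of dimension values must be observed (exhaustion).
# Dimension labels follow the paper's themes; the case codings are invented.
from itertools import product

INTERACTION = ("emotion_based", "function_based")          # dominant mode of interaction
UNDERSTANDING = ("enterprise_tool", "personal_assistant")  # understanding of the AI chatbot

coded_cases = [  # (interviewee, interaction, understanding) -- illustrative only
    ("I11", "emotion_based", "enterprise_tool"),
    ("I13", "function_based", "enterprise_tool"),
    ("I17", "function_based", "personal_assistant"),
    ("I19", "emotion_based", "personal_assistant"),
]

def mutually_exclusive(cases):
    """Every case carries one recognised value on each dimension."""
    return all(i in INTERACTION and u in UNDERSTANDING for _, i, u in cases)

def exhaustive(cases):
    """Every combination of dimension values is observed in at least one case."""
    observed = {(i, u) for _, i, u in cases}
    return observed == set(product(INTERACTION, UNDERSTANDING))
```

With the four illustrative cases above, both conditions hold; removing any quadrant's case would make `exhaustive` fail, signalling another design iteration.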

Taxonomy of AI chatbot users in the workplace
Through the inductive data analysis, two main dimensions were identified within which users appropriated the AI chatbot. The first dimension is employees' understanding of the AI chatbot technology. This dimension includes aggregate themes that range from considering the AI chatbot as just another enterprise system for retrieving information or automating help with IT issues to considering it as a human-like entity that, in some cases, can even be regarded as a colleague. The second dimension is employees' dominant mode of interaction with the AI chatbot. This dimension includes aggregate themes that range from interaction influenced by emotion to interaction influenced by rationality and function. The following excerpt illustrates how an understanding of the technology's learning capability shaped users' expectations: "[…] that it requires to learn a lot from data, and when it is new you don't have a lot of data to work with so it's obvious that you are not going to get the best answers right away." Interviewee 27.
These two dimensions served as the basis for creating the taxonomy, as shown in Fig. 2. As a result, four distinct types of AI chatbot users were identified. Each user type comprises a unique combination of characteristics (Bailey, 1994; Nickerson et al., 2013), which are explained in the sections that follow.
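The resulting 2×2 structure can be written down directly. In the sketch below, the assignment of cells to the four named types follows our reading of the descriptions in the findings (e.g. pragmatics combine function-based interaction with a view of the chatbot as an enterprise tool); the dimension value labels are shorthand for the aggregate themes, not terms fixed by the taxonomy itself.

```python
# The two dimensions combine into the four user types of the taxonomy.
# Cell-to-type assignment reflects our reading of the findings sections.

USER_TYPES = {
    ("emotion_based", "enterprise_tool"): "early quitters",
    ("function_based", "enterprise_tool"): "pragmatics",
    ("function_based", "personal_assistant"): "progressives",
    ("emotion_based", "personal_assistant"): "persistents",
}

def classify(interaction: str, understanding: str) -> str:
    """Map a user's dominant mode of interaction and understanding to a type."""
    return USER_TYPES[(interaction, understanding)]
```

Because the two dimensions are each coded into two values, the four cells are mutually exclusive and jointly exhaustive by construction.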

Early quitters
Early quitters represent a type of user who stopped using the AI chatbot after their first encounter with it (non-appropriation, Level 1). Early quitters perceived the AI chatbot as a tool (just another conventional enterprise system to support their day-to-day work) and did not recognise the capability of the AI technology behind it. Hence, their expectations and associated mental models were focused on finding similarities with enterprise systems they already knew and used, ignoring any further potential of the AI chatbot. The following interview excerpt encapsulates this view: "If you ask me right now, I see it simply as a search engine." Interviewee 22.
Early quitters did not observe the evolving nature of the AI chatbot nor its capability for dynamic learning; instead, they treated the AI chatbot as a fixed and finished product. They saw themselves as playing a passive role in its use and tended to place full responsibility on the AI chatbot.
In terms of interaction, early quitters tended to be cynical or had preconceived negative expectations of the technology and its potential in the workplace. These preconceptions were developed on the basis of personal experience with what they perceived as similar technology that they had used in other social settings. This type of user became frustrated easily if the chatbot did not respond in the way they expected, and they simply assigned the dysfunction to the AI chatbot without considering the possibility of adapting the language they use, adjusting the keywords, or considering how they could help to train the AI chatbot. This view is succinctly summarised as follows: "Because in your daily life you are also confronted with these bots, right? So, if you go on an internet page and then you see some kind of smart robot on a certain web page 'Can I help you?', you know it's just a robot, you never get anything out of it. But I typed some questions and what I could recall is that it never really gave me the answer I was looking for." Interviewee 11.
In line with this failure to adapt, early quitters stopped using the chatbot as soon as they got an incorrect answer or response. A single error was perceived as confirmation of their negative expectations of the technology. Rather than attempting to phrase something differently, they ended their use. Interviewee 14 "gave up" after the first try, as the following excerpt shows: "I think my question was not that complex, that's why I stopped it. I could have also tried it with a different query maybe. I gave up maybe too easily." Interviewee 14.

Progressives
This type of user embraced new technologies. These users were 'enthusiasts' who tried to find ways of incorporating technology into every aspect of their lives, so that using technology was part of their lifestyle. In a workplace setting, they experimented with the AI chatbot technology and, in many cases, found different ways to use the AI chatbot to assist them with day-to-day work (technology-in-use). Progressives continuously experimented with the AI chatbot, trying to find added value and capabilities in the technology as well as possibilities for its use. This view is illustrated by the excerpts below: "From time to time I will challenge the bot and see if it gives me any answer or any solution for my questions." Interviewee 6.
"I'm trying to use it for everything to see where it's actually really good at and where it's not so good." Interviewee 17.
Because progressives acted on the basis of rational assessment and understanding of the functionality of the AI chatbot, they were less driven by emotions and rarely referred to the anthropomorphic features and social presence of the AI chatbot. They perceived the AI chatbot as an enhancement of employee-assisted services. In the following excerpt, Interviewee 5 explained the benefits of interacting with the chatbot: "So, that's better than the human interaction because it is actually giving you the links and pages and everything that you need. So that way, it is more helpful when you really want information about something." Interviewee 5.
Progressives also aimed to understand how the AI chatbot worked and tried interacting with it in different ways.
"I ran a couple of times in the loop because the chat run out of options and so I kept trying to sort of go back to the previous sort of branch of decision-making in the chatbot and try other tree options, at least in my mind I think about it as a decision tree." Interviewee 33.
In terms of their understanding of the technology, progressives considered the AI chatbot not as a mere tool but as a virtual personal assistant. The following excerpt encapsulates this viewpoint: "I really try to imagine the chatbot as my personal virtual assistant." Interviewee 19.
Progressives tended to be technology-savvy and played an active role when interacting with the AI chatbot. In cases where the chatbot did not answer (correctly), they assumed that they should have tried using alternative expressions, search terms, or language; when the chatbot's answers were still not correct, they assumed that the information they were looking for had not been fed into the chatbot and that the chatbot could not be blamed for this. Therefore, this type of user did not attribute responsibility to either the chatbot or themselves, which led to their continued appropriation. The following excerpt illustrates this non-attribution of responsibility to the chatbot for failure and highlights a rational interaction: "Sometimes maybe it's to blame that this information is not available in a sense that I would expect them. I think this is not to blame on the on the bot." Interviewee 17.

Pragmatics
Pragmatics represent a type of user who perceived the AI chatbot as a tool to replace the delivery of services that were previously provided by humans, such as an IT helpdesk. For example, Interviewee 9 referred to the AI chatbot as a machine and compared it with the alternative of assistance from a person: "I spent like 10 mins trying to explain to the bot that my delegate is having this problem, but if I had somebody over a call, probably would have been faster. So, if the bot can be trained in a way that it's giving better results, more efficient results, then it's fine; otherwise, it's better to talk to a person." Interviewee 9.
In terms of interaction, this type of user tended to hold a rational view of the AI chatbot that was primarily focused on its functionality. These users highlighted trust as key to their use. For this type of user, the depth and breadth of the AI chatbot's functions and its operational dynamics enabled them to trust it. Interviewee 23 provided some reasoning for the trust criteria they considered when interacting with the AI chatbot: "So, to be able to trust the chatbot, I would personally also like to understand and see what information sources it covers. So how much can I really trust, how deep does the chatbot go into the different pages? So, I would need to first understand how detailed its searches are and how widespread it is looking [how widely it searches] to be able to trust it." Interviewee 23.
Pragmatics expressed a shared responsibility with the AI chatbot when they did not get a correct answer; however, they remained passive in terms of playing a part. This shared attribution of responsibility is illustrated by Interviewee 13, who, on the basis of function, tried to provide a reason for the chatbot's failure: "Saying it didn't answer, I mean it can be multiple causes. Maybe you didn't search for the right thing, maybe the information is not there, so it's like you would ask a person but he cannot answer; you think of both [yourself and the person]." Interviewee 13.
Therefore, based on their understanding of the AI chatbot technology and their patterns of use, pragmatics do appropriate the AI chatbot. However, in their appropriation, they tended to quickly frame the chatbot's capabilities and its possibilities for use, and then interacted with it within that frame, hence limiting their experience. In accordance with their perception of the technology, they may have avoided it for some uses and limited their use to particular functions without trying other possibilities. This is eloquently explained by Interviewee 22 as follows: "I would go to the chatbot to search for acronyms and stuff like that, because everything else seems to be a bit too complicated for it." Interviewee 22.

Persistents
Persistents are a type of user who did not easily give up on the AI chatbot (level 3 of the appropriation process). They persisted with using it and kept rephrasing their questions if the chatbot did not respond correctly. In terms of their understanding of the AI chatbot, this user type paid more attention to its anthropomorphic features and liked to treat the AI chatbot as a colleague. The excerpts below are examples of the view expressed by some participants that interacting with the chatbot is like communicating with a person: "I always chat [with the bot] as if I am chatting to a person; it's not like I am searching for anything." Interviewee 16.
"When you are actually communicating with somebody, when you are actually talking to somebody, even if it's like AI, it's much easier because you have the feeling that somebody is listening to you." Interviewee 5.
Persistents observed the AI chatbot's social cues, including its icon, animation, friendly messages and notes, in order to teach it. This influenced users' perception of the AI chatbot as being 'human-like' and led to connective emotions when they interacted with the AI chatbot (Gkinko & Elbanna, 2022). Interviewee 6 shared this notion and personified the AI chatbot by referring to it as "him": "The chatbot also needs to learn. It asks me at the end 'Was this helpful or how did I find it?' and also to teach him, and when I tell him 'No', I feel bad for the bot." Interviewee 6.
Persistents attributed responsibility for the chatbot's failure to themselves (by mentioning that they were not asking the right questions) or reasoned that the right information had not been fed into the chatbot. Interviewee 27 assumed that the chatbot did not have the information needed, as demonstrated by the excerpt below: "I understand the nature of it, that it requires to learn a lot from data, and when it is new you don't have a lot of data to work with so it's obvious that you are not going to get the best answers right away." Interviewee 27.
This type of user maintained their appropriation of the chatbot because they were motivated by the notion of advancing the chatbot, and they relied on the chatbot's distinct capability of learning. As a consequence, when they saw the chatbot's advancement, their appropriation was reinforced.

Discussion
The research presented in this article focused on the use of AI chatbot systems in the workplace from the perspective of employees. The aim was to understand employees' lived experiences of using this unique technology in their work. It answered the following research question: how do employees experience the use of AI chatbots in their day-to-day work? We adopted an inductive qualitative approach to the research, and our analysis revealed that employees' use of the AI chatbot is heterogeneous and varies from one user to the next. We identified two key dimensions underlying this variation in use: users' dominant mode of interaction with the chatbot and their understanding of AI chatbot technology. These two dimensions served as the basis for classifying AI chatbot users and creating a taxonomy of user types. In doing so, we followed the guidelines for inductive taxonomy-building detailed in Nickerson et al. (2013). Accordingly, we identified four user types: early quitters, progressives, pragmatics, and persistents (see Fig. 2).
In terms of user types, we found that users' understanding of an AI chatbot includes many aspects, which we grouped under the main categories of 'tool' and 'virtual personal assistant'. Understanding an AI chatbot as a tool involves overlooking its AI capabilities and regarding it as a fixed technology similar to conventional systems. When this perception is combined with emotions, it can lead to a user terminating their use of the technology. However, when it is combined with rational reflection on instances of use, it can lead to a user appropriating the AI chatbot for particular functions only, which may narrow the technology's potential and possibilities. When a user considers an AI chatbot as a virtual personal assistant, they will continue using it. When this perception is combined with emotions stemming from the AI chatbot's social cues, users will persist in searching for alternative language, expressions, and keywords to try. Furthermore, when the perception is combined with use patterns that are dominated by a rational search for potential uses of the technology, users will keep searching for more possibilities for its use. Table 3 summarises the key differences between the four identified user types across the aspects within each of the two aggregate dimensions.
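For readers who find a schematic useful, the 2×2 structure of the taxonomy can be expressed as a minimal classification sketch. This is purely illustrative: the enum names and the `classify_user` function are our own labels for the two dimensions and four user types described above, not part of the study's materials.

```python
from enum import Enum

class InteractionMode(Enum):
    """Dominant mode of interaction with the AI chatbot."""
    EMOTIONAL = "emotional"  # driven by social cues and feelings
    RATIONAL = "rational"    # driven by assessment of functionality

class Understanding(Enum):
    """Understanding of the AI chatbot technology."""
    TOOL = "tool"                        # a fixed, conventional technology
    VIRTUAL_ASSISTANT = "virtual assistant"  # a learning, personal assistant

def classify_user(mode: InteractionMode, understanding: Understanding) -> str:
    """Map the two technological-frame dimensions to one of the four user types."""
    taxonomy = {
        (InteractionMode.EMOTIONAL, Understanding.TOOL): "early quitter",
        (InteractionMode.RATIONAL, Understanding.TOOL): "pragmatic",
        (InteractionMode.EMOTIONAL, Understanding.VIRTUAL_ASSISTANT): "persistent",
        (InteractionMode.RATIONAL, Understanding.VIRTUAL_ASSISTANT): "progressive",
    }
    return taxonomy[(mode, understanding)]
```

For example, a user who interacts rationally and regards the chatbot as a virtual personal assistant falls into the 'progressive' cell, matching the description above.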

Implications for theory
This research contributes to the emerging literature on the use of AI chatbots by providing a case study of their use and appropriation in an organisational setting and by creating a taxonomy of users. In doing so, it responds to calls for research on the use of AI chatbots in organisations (Brachten et al., 2021;Dwivedi et al., 2019;Jang et al., 2021;Marikyan et al., 2022).
First, the study expands the behavioural stream of AI chatbot research beyond its current narrow focus on intentions. By identifying types of use of the same AI chatbot in the same department of the same organisation, this research extends the literature on AI chatbots and user experience and demonstrates that the use of an AI chatbot in the workplace is not homogeneous among individual employees. Our results provide deeper insights into employees' experience of using an AI chatbot by analysing the effects of their perceptions and patterns of use. In particular, the study shows that specific types of users play an active role and therefore have a greater impact on the AI chatbot's advancement. Consequently, it addresses the recent research gap on the use of AI chatbots in organisations from the perspective of employees (Brachten et al., 2021; Dwivedi et al., 2019; Jang et al., 2021; Marikyan et al., 2022).
Second, the analysis shows that employees' knowledge and expectations of the AI chatbot, in addition to their dominant mode of interaction, impact employees' actions with regard to appropriating the AI chatbot and, hence, its continuous use. This is in line with the technological frames theory. However, the research also demonstrates some incongruences between employees' technological frames and their resulting appropriation and use of the AI chatbot. The technological frames theory argues that congruence in technological frames in organisations supports the use of technology, while incongruence threatens the use of technology and has a negative impact on the benefits derived from it: efficiency and effectiveness (Davidson, 2006; Orlikowski & Gash, 1994). Research that has adopted this theory has consistently examined different occupational groups to show the incongruence in their technological frames, how these incongruences occur, and how they can be resolved (Khalil et al., 2017; Lin & Silva, 2005; Olesen, 2014). Some research has shown that divergence across groups is not problematic as long as group members agree to differ; when this condition is met, some divergence does not negatively affect the use of the technology in the workplace (Mazmanian, 2013). Our findings show that incongruences in technological frames do occur among users of the same AI chatbot within the same occupational group and that these incongruences are made possible by the individualised nature of the technology. This suggests that the type of technology under study has an impact on the congruence or incongruence of technological frames. In a similar vein, Mazmanian (2013) investigated the use of mobile-based email devices, which are a social technology, and found that their use in the workplace was dependent on users' collective understanding and evaluation of when and how frequently they could expect to receive a reply by email from a colleague.
Therefore, having a collective conception of how the technology is used among an occupational group may be important, and group members may need to agree to differ in their use if they are to succeed in appropriating a particular technology for day-to-day use (Elbanna, 2007). However, given that the AI chatbot under study in our research is a type of technology that is individualised and personalised, attaining agreement within a group (or across multiple groups) on how to use it is not key to its appropriation and use. Instead, our findings show that an individual's understanding of the AI chatbot technology and their dominant mode of interaction with it are what determine how the technology is used.
Third, our research found that when the use of an AI chatbot is dominated by emotions that stem from its form of social presence and its anthropomorphic features, employees persist in using the AI chatbot and keep looking for new possibilities for its use and trying different options. On the other hand, when these emotions stem from the functions of the AI chatbot, employees might stop using it when they first encounter an error. By including emotions as part of the technological frames that users build around the AI chatbot, this research contributes to the application of the technological frames theory. This confirms previous research on the importance of understanding users' emotions in the context of AI chatbots (Gkinko & Elbanna, 2022). The findings invite further investigation into the form characteristics of AI chatbots, specifically their social presence and anthropomorphic features, and how these influence employees' use in their organisational context.

Implications for practice
Our research also informs the design and implementation of AI chatbots in organisations in the following ways.
First, it is recognised that "taxonomies provide parsimonious descriptions which are useful in discussion, research and pedagogy" (Miller & Roth, 1994, p.286) and they are foundational for the advancement in knowledge (Bailey, 1994). In the information systems field, taxonomy-building is acknowledged as a key step in theory-building (Nickerson et al., 2013) and structuring domains (Glass & Vessey, 1995;Paré et al., 2015;Szopinski et al., 2020), in addition to supporting design science work (Gregor & Hevner, 2013;Kuechler & Vaishnavi, 2008) and serving pedagogical needs (Miller & Roth, 1994). The proposed taxonomy of AI chatbot users reveals insights into the different types of users in the workplace. It also goes beyond description and classification to provide analysis of the underlying technological frames that determine the appropriation and continuous use of AI chatbots in the workplace. This facilitates a more deeply nuanced understanding of the use and appropriation of AI chatbots by employees in an organisational setting (Carroll et al., 2003;Dourish, 2003). The taxonomy developed in this article could serve as a foundation for future research into the design, implementation, and use of AI chatbots in the workplace.
Second, by identifying the dimensions and themes through which AI chatbot technology is evaluated by employees and their impact on actual use and actual continuity of use, the research deconstructs employees' evaluation of usefulness and satisfaction (Nocera et al., 2007) and provides a detailed understanding beyond the current quantitative surveys that examine intentions. Understanding the use of AI chatbots in organisations is especially important given that use plays a substantial role in the learning and advancement of such applications (Gkinko & Elbanna, 2020a, 2020b). In showing that users' previous experience of chatbots in other settings could influence their perception and use of an AI chatbot in the workplace, the study highlights that for organisations and designers to succeed with implementing an AI chatbot in the workplace, they must tackle employees' assumptions, expectations and understanding (Treem et al., 2015) and differentiate AI chatbots from the menu and rule-based chatbots commonly used in marketing and on websites.
Finally, the findings provide insights into how particular types of users play an active role in the learning of AI chatbots. They highlight the specific features on which users base their judgement when coming to treat the chatbot as a colleague. These findings can help sociotechnical designers enhance users' active involvement by taking into account the AI chatbot's features as well as the organisational design and context that surround its implementation.

Conclusion, limitations and future research
There has been a rapid acceleration in the adoption of AI-based applications in the workplace in recent years. AI chatbot systems differ from older enterprise systems because of their distinct characteristics: they consistently learn from patterns of use and the ways in which users interact with them. This research revealed that employees appropriate an AI chatbot in different ways. It identified two key dimensions that determine the type of use: the dominant mode of interaction, and users' understanding of the AI chatbot technology. Accordingly, a taxonomy of user types was developed, which classified employees into four user types: early quitters, progressives, pragmatics, and persistents. The research also detailed the characteristics of each type of user.
The research focused on the internal use of one AI chatbot in an organisational setting. As a result, the findings from this study cannot be generalised. However, future research could expand on this to explore other organisational settings. The classification scheme presented in this study is based on the appropriation of technology as influenced by technological frames, so future research could explore other theories upon which to base the classification. Future research could also explore other angles and dimensions, as well as the possibility of including other taxa. The taxonomy developed in this research is inductive and qualitative, and it was validated qualitatively; future research could adopt a quantitative approach to test its validity. Future research could also examine whether the technological frames related to AI chatbot use in organisations change over time, and in what ways they change.
In conclusion, this study explores the use of AI chatbots in the workplace from the perspective of employees. It provides new insights concerning the different types of users and their lived experience, and offers a taxonomy of users that future research can build on and advance. We hope that this study inspires future research into employees' use of AI chatbots in the workplace.