Academic communication with AI-powered language tools in higher education: From a post-humanist perspective

AI-powered language tools (AILTs) are commonly used by university students, yet there is a limited understanding of how students utilise and perceive these tools in everyday academic communication practice. Employing a post-humanist lens and based on over 1700 open-ended comments from a nationwide student survey, this qualitative study examined students' lived AILT experiences to explicate the impact of AILTs on academic communication in higher education learning and assessment. Thematic analysis of the data shows that students' academic writing is realised by assemblages of distributed spatial and personal linguistic repertoires, underscoring AILTs' role in enhancing students' communicative performance and personal language development. AILTs are also conducive to transforming the academic writing process into an additional learning space. Students have developed a new identity as spatially advised learners, enabling them to assert their agency in terms of language development and subject-content knowledge while also holding critical perspectives on the limitations of AI. Furthermore, the findings point to divergent and eclectic student viewpoints on the ethical concerns of AILTs in assessment in the absence of university instructions. The study discusses implications for university policymaking and pedagogy in developing teaching and assessment methods that match students' stances and needs in AI-mediated academic communication.


Introduction
In recent decades, the growing availability of artificial intelligence (AI) applications in higher education has ushered in a new era of teaching and learning (Churi et al., 2022; Crompton & Burke, 2023; Zawacki-Richter et al., 2019). Notably, the educational landscape has been significantly reshaped by the proliferation of AI-powered language tools (AILTs), which have been widely used by university students for their engagement in academic communication. AILTs are software programmes/applications that use AI methods to analyse or generate human language, including but not limited to writing assistants, machine translators, speech-to-text transcribers, and text generators (chatbots). These technologies have introduced both new opportunities and challenges to students' interactions with course materials, peers, and instructors (Jiang et al., 2020; Tsai, 2019).
The integration of AILTs in higher education has piqued the interest of language education scholars and practitioners, who have mostly investigated the use of various writing assistance tools in second/foreign language classrooms (see a recent review in Alharbi, 2023). While research has increasingly acknowledged the pedagogical value of AILTs in facilitating students' academic language development, their effectiveness is still debated (e.g., O'Neill & Russell, 2019), as are the ethical implications of using AILTs in assessment, especially in light of the recent progress in generative AI, exemplified by innovations like ChatGPT (Atlas, 2023; Dwivedi et al., 2023; Liebrenz et al., 2023). Nonetheless, as Alharbi (2023) points out, it has become "an inescapable fact" (p. 2) that university students will use AILTs to engage in communication and learning regardless of their effectiveness and controversies, since these tools are so widely available and accessible. As technology continues to advance, it is incumbent upon educators and researchers to discover effective and appropriate methods to enable students to make optimal use of these tools. Hence, it is imperative to gain a comprehensive understanding of students' real-world encounters and interactions with AILTs, not only confined to academic language classrooms but, more pertinently, within their everyday academic communication practice.
This study seeks to address this pressing need by investigating the lived experiences of university students in Sweden with AILTs, with a particular focus on their usage and perceptions of AILTs for learning purposes. The study is based on a thematic analysis of survey responses from 1703 Swedish university students and adopts a post-humanist conceptualisation of academic communication to examine the students' experiences and perspectives. This approach views AILTs as an integrated part of the repertoire assemblage (Ou & Malmström, 2023) that constitutes university students' academic communication competence. It allows for a close and situated examination of the role AILTs play in university students' academic learning and communication practice, with equal emphasis placed on the human and non-human agency of meaning-making. The study will provide in-depth insights into how students position their own language and thinking in relation to AI technology and its impact on academic communication, as well as the broader implications for learning and assessment in higher education. By extension, the study's findings will inform university stakeholders about the specific educational and social implications of AI and offer practical suggestions for adapting higher education systems to better incorporate AILTs into educational practices.

AILT in higher education
Despite the widespread attention given to AI, precise definitions of it, not least in applied linguistics, are lacking (Bearman et al., 2022; Holmes & Tuomi, 2022). Therefore, it is essential to establish a clear understanding of what we mean by artificial intelligence when we refer to AILTs and their usage in academic communication. While the science of AI has grown drastically since the 1950s, contemporary scholarship largely retains Turing's philosophy by comparing machine behaviours to human intelligence and cognition. For instance, within the realm of education, Baker et al. (2019) offer a broad definition of AI as "computers which perform cognitive tasks, usually associated with human minds, particularly learning and problem-solving" (p. 10). They further assert that AI for education (AIED) does not describe a single technology but encompasses an array of technologies, such as machine learning, natural language processing, and algorithms, all designed to serve students' learning needs (learner-facing), automate teachers' duties (teacher-facing), and support school administration and management (system-facing).
In the last decade, amidst the accelerated expansion of AI in the global education market coupled with substantial commercial interest, a burgeoning body of research, policy deliberations, and working documents has emerged to explore the optimal use of AIED and address the risks associated with its deployment in higher education (Crompton & Burke, 2023; Miao et al., 2021; Vincent-Lancrin & Van der Vlies, 2020; Zawacki-Richter et al., 2019). Learner-facing AIED has received widespread recognition for its transformative impact on augmenting human cognition in learning (Holmes & Tuomi, 2022). However, researchers (e.g., Bearman et al., 2022; Humble & Mozelius, 2022) also criticise the widespread over-optimism regarding the potential of AIED, while calling out conceptual ambiguity, especially in terms of the weak connection of many AIED applications to theoretical pedagogical perspectives and the unclear social implications they have brought to higher education.
Over the last decade, AILTs (e.g., automated writing evaluation and corrective feedback tools, machine translation applications, chatbots) have been increasingly used in higher education, supporting students' academic communication and literacy development. Intelligent computer-assisted language learning (ICALL) systems, a substantial subgroup of AILTs designed for second/foreign language education, have consistently attracted attention within the field of AIED. ICALL systems have evolved from early iterations like Intelligent Tutoring Systems (e.g., Dodigovic, 2007; Tschichold, 1999) to later products such as Grammarly (Barrot, 2022; Koltovskaia, 2020; O'Neill & Russell, 2019) and Wordtune (Zhao, 2022). Extensive research has been devoted to probing their language teaching functions, including emulating standard language users, correcting grammatical errors, providing automated feedback, and improving (English) language learners' writing skills (ibid.). Additionally, some AILTs originally developed for non-educational purposes have garnered attention within education contexts. For example, chatbots (originally chatterbots) have gained popularity in language teaching due to their capability to interact with users in the target language (Fryer et al., 2019; Schmulian & Coetzee, 2019). Similarly, speech-to-text/speech recognition technology has been introduced in the education domain for its unique assistance to students with cognitive or physical disabilities (e.g., hearing impairments) in learning, and to address technological challenges (i.e., poor audio communication quality) in synchronous online classrooms, among other uses (Shadiev et al., 2014).
However, some AILTs, such as machine translators, are widely accessible to students in their daily communication, yet their role in education remains underexplored (Medvedev, 2016; Van Lieshout & Cardoso, 2022). Likewise, while 'traditional' chatbots were largely considered useful (and usable) for language learning (Huang et al., 2022), recent chatbot technology (large language models like ChatGPT) has sparked discussions in (higher) education primarily over concerns about their risks for teaching and learning, academic integrity and traditional assessment systems, rather than their potential benefits for academic communication (Atlas, 2023; Dwivedi et al., 2023; Liebrenz et al., 2023; Rudolph et al., 2023). Despite the extensive attention devoted to the ethical implications of ChatGPT and similar chatbots in higher education, empirical evidence remains scarce (an exception is Firat, 2023). There is also insufficient knowledge about university students' actual use of AILTs like ChatGPT for everyday academic communication and their perspectives on the impacts of these new technologies on their language use, competence, and their learning and assessment in higher education at large. A recent quantitative survey by Liu and Ma (2023) studied Chinese students' acceptance of ChatGPT in their self-directed English language learning and attested to largely positive attitudes towards, and frequent use of, the technology. Nevertheless, more qualitative research is needed to gain a more comprehensive understanding of students' lived experiences with AILTs in higher education.
The present study seeks to bridge these knowledge gaps by investigating the following two research questions.

RQ1. How do university students in Sweden utilise AILTs for meaning-making in their academic learning activities?

RQ2. How do students perceive the potential, benefits, and challenges of engaging in academic communication with AILTs in higher education?

A post-humanist view of AILT in academic communication
As Bearman et al. (2022) point out in their critical review of discourses of AI in higher education, AIED research has long suffered from mythic language with respect to AI and technology in general; for example, AI has been portrayed as the 'ghost in the machine' (p. 12). The lack of conceptual clarity makes it challenging for scholars and educators to comprehend how AI functions in educational contexts and how students, teachers and pedagogies should respond to today's AI-mediated communication and learning environments. In order to demythologise AI, we suggest a post-humanist concept of academic communication competence for an explicit understanding of the role of AI in students' language practice. Focusing on three notions (distributed cognition, spatial repertoire, and assemblage), we discuss an expansive and polycentric framework of communicative repertoire embracing the distinctive interactive role of AILTs for students to participate in academic communication.
In a very broad sense, academic communication encompasses the literacy and practice necessary for engaging in academic learning-related activities. While traditionally students' use and development of academic language were treated as a purely linguistic and text-based matter, Haneda (2014) provides an expansive account of academic communication by recognising the multimodal nature of meaning-making in academic contexts and including artefacts and material tools in "a toolkit of mediational means" (p. 134) that students use to achieve their communicative goals. This sociocultural view, while incorporating technologies into students' broad repertoire, mainly emphasises their scaffolding role in the classroom to enable academic language development.
The rapid development of technology in recent years has altered our understanding of the relationship between our material surroundings and human thinking and capacity, thus leading to the development of a critical and post-humanist view of language (Pennycook, 2018). Post-humanism has gained momentum in applied linguistics as it foregrounds the material essence inherent in species and objects and a non-human-centric way of thinking, enabling linguists and language educators to discern the profound impacts of digitality and multimodality on language/languaging and to recognise the co-agency of humans and non-humans (e.g., materials and technologies) in shaping language practices and literacy (Pennycook, 2018; Toohey, 2019). A major intellectual foundation of post-humanist applied linguistics is the distributed view of thinking (Clark, 2008; Norman, 1992). The notion of distributed cognition, according to Hutchins (1995), suggests that the ability of thinking is distributed both across social community members and in the material environment where external cognitive artefacts are involved. For example, a calculator or computer can "amplify" one's ability to do arithmetic tasks by providing an external set of functional skills. From this perspective, an individual's mental function is part of a larger cognitive system, which enables human thought and action through the "coordination of internal and external structures" (Hollan et al., 2000, p. 176). Distributed cognition transcends the boundary between individuals and their embedded socio-material space to understand one's capacity for thinking by highlighting the features of extension and interconnectedness. As stated in Mitchell's (2003) famous quote, "I construct, and I am constructed, in a mutually recursive process that continually engages my fluid, permeable boundaries and my endlessly ramifying networks; I am a spatially extended cyborg" (p. 39).
Accordingly, the post-humanist view of language is spatial and distributed; it challenges the human mind as the only locus of language and advocates "breaking down distinctions between interiority and exteriority" (p. 446) to allow for an expansive, integrative, and polycentric understanding of communicative competence that is distributed across people, places, and objects. This highlights the powerful and dynamic role of the material world in shaping one's competence and the outcome of communication. Apart from the semiotic repertoires (languages, bodies and signs) individuals possess (cf. Kusters et al., 2017), communicative and social activities are also enabled by spatial repertoires, including all forms of "semiotic resources used by previous interlocutors for that activity in that setting … and become sedimented to shape similar communicative activities associated with that place later" (Canagarajah, 2021, p. 7). Spatial repertoire incorporates extensive non-human and material resources distributed in the contexts of communication and aligns with human language to co-construct meanings. In interactions, the interlocutors' personal repertoires and spatial repertoires work together in assemblage rather than in a cumulative manner. Assemblage, for Deleuze and Guattari (1987), who coined the term, emphasises a concrete collection of heterogeneous materials that can lead to change. From a post-humanist perspective, the notion refers to an entanglement of agency, language and cognition from both human and non-human entities that connect and interact in complex ways, resulting in novel and unanticipated effects (cf. Pennycook, 2017). For example, Ou and Malmström (2023) show how an assemblage of a student's own multilingual competence, social networks, and machine translation functions gave the student an ad hoc and situated reading ability in a foreign language and thus allowed him to successfully participate in a group project.
A post-humanist approach to academic communication therefore promotes considering students' language competence for participation in learning activities as repertoire assemblages (cf. Ou & Malmström, 2023). It highlights the importance of recognising the agentive power of non-human things (e.g., technology) to generate meaning in human interactions and impact human language. This helps open up an ecocentric discussion (as opposed to a human-centric viewpoint) of the role of AILTs in academic communication: when we situate academic communication in its embedded material world, we are able to interpret interaction as not only a socialisation process of interlocutors, as was formulated in Haneda (2014), but more importantly a socio-material process (Canagarajah, 2021) in which the dynamic relations between individual language and semiosis, AI, and academic activities are explored. In other words, rather than focusing on an individual's language capability or how an AILT mediates (or impedes) its development process, our study from a post-humanist perspective prioritises the analysis of emerging assemblages of human and non-human communication resources and investigates the interconnectedness of all parties involved in the communication, with equal attention paid to human and AI language functions. It is important to examine what communicative resources are accessible in a learning activity, how they relate to one another, and what influence this has on communication outcomes. Through this approach we may gain substantial insights into how AILTs and students engage with each other in their everyday communicative practice, and what this means for academic communication and learning in today's higher education system.

Method
The present study adopts a qualitative research design, using open-ended comment data obtained from a nationwide survey conducted among university students in Sweden in 2023. The primary objective of the survey was to gather insights into university students' use of and attitudes towards AI for learning purposes. In addition to scaled questions, the survey incorporated an optional comment section at the end, where participants were prompted to add any comments about the use of AI language tools (including AI chatbots) that they would like to share with the researchers. While a summary of the statistical data has been published in Malmström et al. (2023), this study focuses exclusively on the qualitative comment data.
To administer the survey, the research team employed the Questback platform and disseminated it through various digital channels (i.e., Meta platforms, LinkedIn, and Mecenat) and personal networks at different universities across Sweden. The survey was available from 5 April to 7 May 2023, in both English and Swedish. Through a convenience sampling approach, the survey generated a total of 5894 responses from students across a majority of Sweden's universities (28 universities each contributed at least 1% of the total participant responses). Twenty-nine per cent of the respondents contributed to the optional comment section, resulting in a total of 1703 valid responses, amounting to 718,027 characters. Demographic information on the respondents is provided in Table 1. The sample is representative of the diversity of the underlying student population in Sweden's higher education, for example, in terms of gender (with a representative balance between genders), academic level, and language background. Additionally, the pattern of familiarity and frequency of use of AILTs among the comment respondents is highly consistent with that of the whole survey dataset, wherein almost all the respondents (96.88%) are familiar with ChatGPT and their knowledge and usage of other AILTs, particularly language translation tools, is widespread.
Upon collection, the Swedish-language comments were first translated into English so that all comments could be uniformly processed in a single language (necessary for the software analysis) and because English is the common language used by the authors. All the data, both the original and translated versions, were then entered into NVivo for organisation and analysis. The data analysis followed an inductive, bottom-up thematic analysis approach (Patton, 1990). Through an iterative and recursive process of reading and re-reading the data sources, as well as grouping data into meaningful units, this method allows themes and categories to emerge gradually from the unstructured and extensive textual data.
Our analysis adhered to the six-step reflexive coding process suggested by Braun and Clarke (2022). First, we familiarised ourselves with the data by reading all the comments, proofreading the translated comments, and making notes of coding ideas (Step 1); this was followed by the generation of initial data-driven codes (Step 2). Subsequently, we generated initial themes based on the identified codes (Step 3) and reviewed them (Step 4), through a recursive process of engaging and re-engaging with the coded data extracts (both the Swedish original data and the English translations) to identify representative opinions and perspectives within the comments. Special attention was paid to the interconnections and relationships between various codes. Additionally, to facilitate the identification and triangulation of prevalent themes, we conducted a word frequency query in NVivo, which helped detect frequently used words by informants that index specific themes, such as "writing," "text," and "cheating." A partial set of the word frequency query results can be found in Appendix 1. Following Step 4, we identified the essence of each theme, including its relationships with sub-themes, defined the major themes and refined the specifics of each theme (Step 5). In this theme-naming process, the post-humanist constructs of language and communication were employed to assist the conceptualisation of themes, i.e., providing conceptual vocabularies for precise descriptions of the identified themes and for theory building. Throughout the data analysis process, Amy independently coded the data; upon completion of the initial analysis, Christian and Hans (both Swedish-speaking researchers) reviewed the findings, specifically instances of translated data, confirming the accuracy of themes and contributing to a collective understanding of the data. In the end, our analysis resulted in three main themes that contribute to a new conceptualisation of the relationship between AILTs, academic communication, learning and assessment in higher education (Step 6). The coding scheme is attached in Appendix 2. [Table 1 note: Language = the major language used in each original comment.]
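For readers who wish to approximate the word frequency step outside NVivo, a minimal sketch in Python is given below. This is an illustration under stated assumptions, not NVivo's actual algorithm: the sample comments, the stop-word list, and the function name `word_frequencies` are invented for demonstration, and real query tools typically add stemming and synonym grouping.

```python
from collections import Counter
import re

def word_frequencies(comments, stopwords, top_n=10):
    """Count word occurrences across free-text comments, excluding stop words."""
    counts = Counter()
    for comment in comments:
        # Lowercase and extract letter runs; a crude stand-in for tokenisation.
        words = re.findall(r"[a-z']+", comment.lower())
        counts.update(w for w in words if w not in stopwords)
    # most_common returns (word, count) pairs sorted by descending count.
    return counts.most_common(top_n)

# Illustrative comments only; not actual survey data.
comments = [
    "AI helps me optimise my writing and check my text.",
    "I use ChatGPT when writing an essay or report.",
    "Using AI for an exam would be cheating.",
]
stopwords = {"i", "me", "my", "an", "or", "and", "when", "for",
             "would", "be", "a", "the"}
print(word_frequencies(comments, stopwords, top_n=3))
```

On this toy input, "ai" and "writing" each appear twice and dominate the top of the list, mirroring how theme-indexing terms such as "writing" surfaced in the study's query.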

Findings
In this section, we delve into the three identified themes that elucidate university students' attitudes and usage of AILTs in their academic learning pursuits, informed by post-humanist applied linguistics.The first theme, addressing RQ1, illuminates how students incorporate AILTs into spatial repertoire for academic writing.Responding to RQ2, the second theme reveals an emerging spatially advised learner identity, where students view themselves as independent and critical subject agents of learning shaped through their alignment with AILTs in academic communication.The third theme, also addressing RQ2, explores the heterogeneous student perspectives on AILTs in higher education assessment.

AILTs used as part of spatial repertoire for academic writing
Most respondents who left open-ended comments did so with detailed accounts of how they personally used different AILTs as well as their perspectives on appropriate usage of them for learning; these comments indicate that many students in Sweden are familiar with AILTs, and that they actively and regularly use them for academic communication. Their comments centre on several prominent AILTs, including ChatGPT, Google Translate, and writing assistance tools like Grammarly, and they mostly allude to the benefits of these AI tools in enhancing their academic writing outcomes. According to the results of the word frequency inquiry, "writing" is the most often used term introduced by respondents,¹ and it is usually followed by terms such as "texts" (the second most frequently used word in all comments) and "essay," "assignment," "report," "exam," "paragraph," and "code." Below are some examples of such comments:

No. 179: AI helps me optimise my grammar and coherence with my key ideas. Besides using ChatGPT to optimise my academic writing, I use Grammarly and Quillbot to perform the same function.
No. 556: With AI tools such as Google Translate, and other learning tools, I think that they have been fantastic for me and for fellow friends who, like me, are foreigners living in Sweden and studying and having to both read books and produce written assignments in Swedish. Although my Swedish is now at a much stronger level of fluency, I have written my own assignments many times in English and then used Google Translate to help me put my own writing into Swedish, which I then go and look over and fix any mistakes (as can so easily happen and often happens when it comes to machine translation) myself. I feel like it would take me double or even triple the amount of time if I were to try to write my assignments in Swedish directly instead of writing first in English, plus my writing would come out sounding much more infantile and be at a much lower level of Swedish than the university-level of writing that I can produce in English.
The respondents' experiences demonstrate how academic writing today relies on assemblages of distributed language competence across individual students' multilingual repertoires and the multiple AILTs available to them (Ou & Malmström, 2023). In these assemblages, students align their own languages, writing skills and thinking with the algorithm-based language processes (e.g., lexical, grammatical, and textual corrections, word choice suggestions, language translation) within AI chatbots, writing assistance tools, and machine translation to optimise the outcomes of their academic writing. Thanks to these spatial repertoires (language affordances from the digital environment of academic communication), students are able to make use of an expanded academic language capacity beyond what they could achieve on their own. In this sense, students have become "spatially extended cyborg[s]" (Mitchell, 2003) whose academic communication competence is amplified in the networked learning environment with AILTs.
Furthermore, the action of doing academic writing and the assemblages of AILTs and individual languages in this process could also encourage personal development in academic language:

No. 338: I have weak dyslexia and mainly have difficulty with spelling and seeing my own mistakes when I proofread my own texts. By using e.g., Grammarly, I can more easily detect these errors and correct them, as well as get help with grammar and some internal structure. I also feel that this makes me aware of common mistakes I make in my writing, and by being more aware of it, I make fewer mistakes later on. I believe that I have become a better writer with this tool.

¹ "Writing" was placed seventh in the overall list, but the six words before it are either common general terms (use, think) or words central to the survey's themes (AI, tools, student, learning).
No. 1124: I use both ChatGPT and Grammarly daily for inspiration and it is very useful. I find that my English writing skills have improved a lot by using Grammarly, even when I don't use the tool.
By reflecting on AI-generated texts and their own work in regular writing practice, students can improve their writing abilities outside of the language classroom.In this sense, students not only gain an emergent and situated communicative ability from the spatial repertoire, but also become spatially advised language learners whose thinking and knowledge systems are impacted by AI.
Our analysis shows that utilising AILTs as part of the spatial repertoire for academic communication grants students agency and authority over learning. This is particularly relevant and meaningful to a group of students who cite various kinds of learning difficulties, such as dyslexia, ADD, ADHD, and autism. A substantial number of comments, many in the form of shared anecdotal narratives, document how AI chatbots and other language tools serve as a remedy for these students' inherent communication challenges, thus empowering them to engage in learning in higher education on a more equitable basis. For instance:

No. 181: I always post-process my texts and mainly use AI as an inspiration for my own writing. Much of this is due to my ADHD diagnosis which makes it difficult for me to start writing. AI gives me the ability to start projects and later I can, based on my own language use, formulate my exams.
No. 621: For me with dyslexia, tools like Grammarly and ChatGPT have been really useful. … enabling me to understand a text that I find difficult to read and get a simpler explanation, or get help with my spelling and sentence structure has been a great help to me given my difficulties with this particular issue.
These comments indicate the great value of AILTs in catering to the diverse learning needs of university students. Equality in higher education can be supported by recognising the multimodal nature of meaning-making in academic writing, embracing AI-powered communicative resources as part of students' competence for academic communication, and providing instructions for students to incorporate these resources properly and effectively.

"ChatGPT is my teacher": an emerging spatially advised learner identity
Different from other AILTs, ChatGPT is perceived by most respondents as a more versatile, potent, and hence more controversial tool for academic writing. The comments show that students have used ChatGPT for a wide range of communication purposes, including seeking information, summarising and elaborating academic concepts, making summaries of lectures and readings in different languages, creating outlines and notes, writing code and debugging in programming, and more. In particular, students understand that chatbots not only help with the linguistic aspects of writing (i.e., spelling, grammar, and text organisation) but are also able to automate writing based on thoughts and ideas that are not the students' own, as suggested by the following response:

No. 726: The reason why I think language tools are good, as opposed to chatbots, is that they can develop an already written text for the better. The chatbots can create completely new texts, which makes it difficult for teachers to ensure that the students have actually written the submitted texts.
Comments like this highlight awareness of the independent "cognitive capacity" of generative AI, which makes GPT-model chatbots problematic from the point of view of academic integrity.When academic communication today draws on distributed practice from both human and non-human intelligence, students must consider how to position their own thinking in relation to AI in their language use.
In this regard, while slightly different stances are present (as seen in the various analogies given by respondents when explaining their personal usage of ChatGPT), a large proportion of students subscribe to an interactive relationship between human thinking and AI in academic communication. These comments assign ChatGPT numerous humanised roles, most frequently "teacher," followed by "tutor," "mentor," and occasionally "study buddy/partner" or "fellow/peer." For example:

No. 377: AI is great for discussing and learning things as it becomes like a study buddy with whom you can "brainstorm" questions, which leads me to learn more for exams.
No.628: It helps students to do tasks and learn faster if they have questions … They are like a digital teacher that you have all the time.
No.1441: I think that ChatGPT is a great tool to use as a tutor for questions about structure, ideas, suggestions for improvement of texts, or to ask if a text I have written explains what I want in a clear way. In this way, I can reduce the time I have to take from human tutors, as long as I myself think critically about what I get as an answer. … Everything I can get help from a supervisor with, I can ask ChatGPT about first.
These respondents emphasised the interaction between themselves and ChatGPT, which allows for knowledge-sharing, concept-checking, brainstorming, discussions, and other forms of learning. In their academic writing process, AI is not only a language aid but also plays a pedagogical role by providing writing guidance and suggestions, explaining difficult concepts and subjects, offering feedback, and so on. The students who perceive ChatGPT as a knowledgeable academic mentor demonstrated a rather high level of trust in and dependence on AI-generated thoughts, yet some of them, such as No.1441, mentioned the significance of their own critical thinking.
This indicates that the engagement of AI in academic communication enables a multi-sited and multi-centric form of learning in higher education. Knowledge construction happens not only in formal education discourses (e.g., in classroom interactions with teachers and peers, through reading textbooks and learning materials) but also during students' everyday academic writing practice and through their interactions with generative AILTs, which facilitate learning based on the vast amount of publicly available information they were trained on. A few respondents even perceived ChatGPT as a substitute for their teachers: No.565: Teachers rarely have time to discuss or even respond. This is where ChatGPT does an excellent job. This virtual-teacher view highlights a permeable boundary between spatial repertoire and personal repertoire in the assemblage of academic communication. In the academic writing process, AI chatbots act as agentive social actors, mediating and shaping students' own thinking and language use and thus potentially developing personal competence. Indeed, the comments frequently mention that the primary value of AI lies in assisting students' own learning rather than freeing students from their writing process or simply offering a shortcut to good exam results: No.33: … Chatbots should not be seen as a magic formula for giving correct answers, but more as an assistant that can help you identify where to start.
No.37: … Using AI tools can be very effective in improving our knowledge if they are used correctly. For example, it is bad to ask a chat AI to write a whole essay for me. But I can write my essay and use an AI tool to correct it or to give advice about how to improve my essay so that I can learn what I did badly, and I can understand how to do better. This way, I don't have to wait for my teacher to look over my work, correct it or write feedback, and send it back for improvement.
No.1687: No matter what the AI may write "for" you, you would still need to understand what it is. I have used it to generate some samples of programming code, not to let it write for me, but rather to give me examples of how certain problems might be solved. Some answers have seemed difficult to grasp, forcing me to cross-reference the code with other sources. Thus, I learn something new.
An emerging learner identity becomes evident from the analysis thus far. By attributing a virtual instructor role to chatbots, students seem to use AI to reproduce, in their academic writing process, learning environments that mirror their past learning experiences involving human-to-human interactions, such as those between students and teachers and among peers. In this learning environment, chatbots are expected to provide heuristics, methods, and suggestions, facilitating the students' knowledge development in both academic literacy and other subject contents. The students are not satisfied to simply gain AI-amplified performance in academic writing (i.e., a spatially-extended-cyborg stance) but see themselves as spatially advised learners whose own language competence and subject knowledge can be enhanced through human-AI interactions.
Some respondents even warned about the risks of the spatially-extended-cyborg attitude towards AI usage, such as undermining confidence in one's own thinking and writing, inhibiting one's own creative thinking and learning, reducing source criticism, generating superficial and streamlined knowledge, and more. As spatially advised learners, however, individual students remain the subject agents of thinking and learning, as one respondent noted: No.336: You should be able to do the same things as AI; it just helps you do them. You shouldn't use a calculator if you don't know what the plus sign on it does. Furthermore, considering AI-mediated academic communication as an additional site for learning does not entail fully accepting AI as the authority of knowledge. Certain students noted the limitations of AI-generated knowledge and the associated risks for people's learning and development: No.186: Isn't it very difficult to see which sources are used when you get a reply from an AI chat? Then the whole point of reading different opinions and papers to be able to describe a topic in an essay, for example, disappears.
No.737: I use ChatGPT primarily for linguistic and academic formulation purposes. I tried to ask academic questions, and sometimes very detailed questions, but the answers were almost superficial and invalid for an academic level. Furthermore, it might also be useful in my case for providing a specific style of reference. My ultimate statement to ChatGPT is "this (academic work) in APA style." However, many times, it provides wrong answers.
No.1273: The concern is that the technology will be seen as authoritative in the same way that spell checks and other features presently found in computers and mobile devices are. People, in my experience, are willing to comply with "corrections," which do not work well in the context of the Swedish language and complex terms. Texts generated by, for example, ChatGPT are given without doubt, and if the user understands nothing about what is written, mistakes may go unnoticed.
As shown in the comments above, students' concerns about ChatGPT include, but are not limited to, the superficial and sometimes untrustworthy content it produces (particularly in academic and specialised fields of knowledge), poorer performance in languages other than English, and a lack of transparency in the sources of information. Some are also concerned about the security of (personal) data, since the method by which this tool collects information is not entirely clear.
With these considerations in mind, respondents called for the development of critical information-analysis skills when using ChatGPT as part of the spatial repertoire for academic writing and as an additional source of learning. Some emphasised that students must be able to examine the reliability of AI-generated answers by cross-referencing and scrutinising the sources. Some also suggested that teachers take the responsibility to "explain the benefit and cost of using and relying on such tools (how they actually function and understanding that they may not 'understand' as much as they seem to)" (No.537).

Heterogeneous student perspectives on assessment with AILTs
Our analysis also reveals that students' academic writing practice, based on distributed competencies from persons and AILTs, poses challenges to the present assessment system in higher education, which is founded on a human-centric theorisation of academic communication. As the word frequency inquiry shows, "cheating" is one of the most frequently referenced terms in the comments. Most respondents argued that the engagement of AILTs in academic communication for assessment is a complex issue, and a range of student perspectives emerged from our data.
First of all, the student perspectives suggest a conditional usage of AILTs in examinations. One condition widely agreed upon is that AILTs, as tools that can alter a person's performance in academic writing, should not be permitted in examinations designed to assess one's (academic) language ability: No.1053: I think AI language tools can count as cheating in certain exams, i.e., actual language exams where language skills are put to the test. If there are exams for a completely different subject, I see no problem with a tool correcting the language. Word, for example, has always done this.
Another commonly held "rule" is that AILTs, especially ChatGPT, should not be used to automate a full-text response for an assignment submission: No.370: In my eyes, using AI chatbots when doing an assignment would be a lot like asking a friend for help. You could have them do the entire assignment for you, and that would be cheating. However, you could also have them do the assignment and then use what they give you as a foundation or inspiration for your own assignment, which would be more akin to getting feedback on your work than cheating.
No.1184: … It depends on the task and what you use AI for. If I use AI to improve my English and correct spelling mistakes in a technical report, is that cheating? Absolutely not, in my opinion. Is it cheating to use ChatGPT to generate an abstract for my thesis? Yes, at least unethical.
No.1303: Using tools to write a text instead of doing it yourself is clearly cheating. With translation tools, such as Google Translate, I think it depends on what you use them for. I use them to translate words in the course literature; I don't consider that cheating. If I submit a whole text for translation and then hand it in, it would definitely be cheating, even if I wrote the original text myself. It wouldn't develop my language skills either.
No.1707: I think it depends entirely on how you use AI when it comes to assignments. I personally see no problem if individuals use it as a language aid in individual sentences and paragraphs. But to have entire paragraphs or entire works written by AI, I think, is cheating and takes away from the purpose of learning. When it comes to coding, I have no problem, because we have been googling solutions to code for a long time.
There is consensus among the respondents that students' written assignments and exams should primarily reflect students' own thinking processes and knowledge development. As a result, AILTs cannot be used in language-targeted assessment, but they may be legitimate for generating some of the text (e.g., improved vocabulary and grammar, fundamental ideas) for assessment in non-language subjects. However, divergent opinions also emerge from the comments regarding the nature and scope of potential support provided by AILTs for assessed academic writing. For example, while many students hold that it is ethical to use chatbots for inspiration, i.e., to generate fundamental ideas and background information in assignments, some tend to believe that AILTs can only be used as a language aid but never for generating content: No.25: AI should only be a tool used to help students express their own original thoughts, that is, it should only be used as a writing enhancement tool.
The scope of language support AILTs should be allowed to provide is still open to question, i.e., whether language enhancement can be applied to the entire text or only to selected sentences and paragraphs. Furthermore, while some students (e.g., No.1707 above) distinguish between AI automating essays and AI automating code, seeing the latter as ethical in assessment, other comments argue that using ChatGPT-generated code directly in examinations is plagiaristic.
The comments on ChatGPT, in particular, reflect varied perspectives on whether the technology should be allowed in examinations at all. A minority of the respondents believe that ChatGPT, although a useful language aid and an additional source for learning, should be prohibited during assessment. These remarks mostly highlight the inadequacy and unpreparedness of present higher education curricula, assessment techniques, and plagiarism detection technologies for including AI chatbots in learning assessment. Thus, prohibiting the use of ChatGPT in assessment can help to ensure fairness among students.
A more dominant group of respondents, however, support the acceptance and integration of AI chatbots in assessment, expressing a variety of opinions on the subject. Some argue that existing AI technology is not yet a threat to assessment since, in terms of content generation, it is far from producing trustworthy and sophisticated reasoning that meets higher education learning standards. Given the popularity of ChatGPT in their personal and professional lives, many think that prohibiting its use in assessment would be counterproductive and nearly impossible. Some respondents connect AI to earlier technical developments such as calculators, the internet, Google, and Wikipedia, and suggest that the higher education system should adapt and "modernise" (No.7) itself to incorporate the usage of AI, for example: No.515: The development of AI is inevitable, and trying to stop it is pointless. Instead, it should be integrated into education, teaching students how to use it effectively and how to use chatbots to accelerate their learning rather than inhibit it.
Many respondents indicate a need to be informed and directed about the use of AILTs in the learning process and in examinations. Some propose that universities create new AI-related courses and curricula to teach effective, responsible, and ethical usage of AILTs for academic communication. Some suggest that universities develop and implement guidelines and instructions to regulate the use of AI in teaching, learning, and evaluation. Aside from this, a few students also provide specific recommendations for adjusting assessment techniques for AI-integrated academic communication based on repertoire assemblage, such as diversifying assessment methods, adding oral examinations, and employing project-based assignments rather than tests.

Discussion
The post-humanist conception of language and communication proved useful in demystifying the discourse of AI in higher education (cf. Bearman et al., 2022). The holistic insight of this study revealed what AILTs signify for university students' academic communication, as well as for the larger learning and assessment realm of higher education. First, by viewing academic communication as a spatial activity that draws on distributed practice from the spatial repertoires embedded in the AI-infused material environments of communication, this study has highlighted the role of collective agency from students and AILTs in explaining the dynamic repertoire assemblages that shape students' academic writing practice. The notion of assemblage enabled us to focus on "all semiotic resources working together, gaining equal importance, and generating different forms of synergy for meaning-making" (Canagarajah, 2018, p. 271). This was particularly helpful in identifying the affordances that various AILTs provide to students' academic communication. On the one hand, for students who study in a second language (e.g., English, and Swedish for some students in this study), AILTs, as external language functions "outside" an individual, could assist students in attaining communicative success by providing improved language outputs at the lexical, grammatical, and discourse levels. This form of spatial repertoire was particularly acknowledged by a minority group of students with learning difficulties, who valued the significance of such an AI-powered "extended mind" (Clark, 2008) in supporting educational equality.
Additionally, the assemblage of individual language and AI also involves the interaction of students' reflective thinking with AI language outputs, which leads to the growth of individuals' language competency. AI-mediated academic writing has created an interactive learning community (Garrison & Arbaugh, 2007) where students could ask questions and gain suggestions and feedback from AILTs. By reflecting on their own language use and mistakes, students might expand their vocabularies, learn more sophisticated writing skills, and build metacognitive strategies for academic writing. This finding extends previous AIED and ICALL research (e.g., Alharbi, 2023; Fryer et al., 2019) by recognising the potential educational value of AILTs, particularly chatbots, in out-of-classroom contexts, such as students' regular academic writing process.
The wide accessibility of AILTs within students' daily academic communication milieu has transformed higher education learning into a multi-sited process, i.e., students live in a networked learning society with "multiple sets of overlapping relationships, cycling among different networks" (Mitchell, 2003, p. 17). Consequently, knowledge construction is no longer confined to conventional classroom interactions or to following a linear trajectory; rather, it manifests the potential for intricate and rhizomatic learning trajectories (Canagarajah, 2018) of both academic language and subject content knowledge through everyday academic communication practices. Student participants in our study widely acknowledge the teacher role of GPT-model chatbots in facilitating their learning. However, according to the students, few educators have come to this insight and dedicated time and effort to researching and experimenting with AI as a teaching resource. This pedagogical oversight is conspicuously evident in our study: among the 1703 comments, merely five references alluded to instances where teachers introduced or discussed the use of AILTs with their students. More research is needed to explore the pedagogical potential encapsulated within AILTs, with a view to offering guidance to front-line stakeholders.
Our findings also revealed an important identity agenda in the era of AI. Many university students were interested in how they should position their own thinking in relation to artificial intelligence when engaging with AILTs as part of the spatial repertoire for academic communication. The students in our study distinguished between the spatially-extended-cyborg and spatially-advised-learner perspectives on AI, with the former accentuating the amplifying effect of AILTs on performance and the latter underscoring personal competence development fostered by interactions with AILTs. In both learning and assessment contexts, the students were inclined towards the spatially-advised-learner positioning, emphasising the pivotal role of individual students as the subject agents of learning and owners of knowledge. This identity is accompanied by a discernible degree of critical reflection on the limited accuracy, reliability, and professionalism of AI-generated content.
Given the prevalence of AI in higher education, it is imperative that this existing spatially-advised-learner identity is acknowledged by university teachers and administrators. This recognition serves as a foundation for the design of AI-related policies for education and assessment, which should pivot towards optimising students' learning experiences by effectively integrating AI into higher education rather than merely attempting to prevent potential academic misconduct through AI. Our findings suggest that an automatic assumption that students use AI to cheat in examinations is unwarranted, since most students appear oriented towards ethical academic language practices with AI. Furthermore, students exhibit a virtual-teacher view of AILTs, coupled with a profound sense of ownership over their knowledge. Teachers can harness this perspective as a valuable resource to facilitate AI-mediated classroom interactions. For example, teachers can design learning activities involving AI-student interactions that encompass discussions on specific topics with AILTs and subsequent reflections on AI-generated content, mimicking the dynamics of peer-to-peer interactions and reviews.
Lastly, it is noteworthy that although consistent usage patterns and perceptions of AILTs among students were evident in their everyday academic writing practices, AILTs in high-stakes academic communication, namely assessment and examination, received heterogeneous and eclectic viewpoints. Student participants in our study exhibited a conditional stance towards the ethical use of AILTs in assessment, contingent upon the specific subjects and the methods by which these tools were used in assignments. Furthermore, no clear consensus could be reached on fundamental questions such as whether AILTs should be prohibited from being used in assessment. Also, there was uncertainty on detailed implementation issues, such as what aspects of language support AILTs should provide, how much is appropriate, and how it should be applied in different types of assignments in different disciplines if AILTs are allowed in assessment.

Conclusions and implications
This study used 1703 open comments from Swedish university students to explore their lived experiences with AILTs for academic communication. The findings provide empirical evidence on students' self-reported frequent engagement with various prominent AILTs (i.e., ChatGPT, Grammarly, and Google Translate) in academic writing, showing their utility as part of a spatial repertoire in enhancing students' academic communication performance and facilitating personal language development. The student perspectives highlight an interactive, virtual-teacher view of generative AI. This underscores the substantial social implications of AI for academic communication and learning in higher education. The proliferation of AILTs has transformed students' everyday academic writing process into an additional learning space and bestowed upon students a novel identity of spatially advised learners, empowering them to acknowledge AI's facilitating role in personal competence enhancement while remaining aware of its inherent limitations.
Therefore, we advocate for university administrators and educators to acknowledge students' expanded academic repertoires in the AI era and consider their spatially advised learner identity when developing and implementing AI-related language-and-education policies, curricula, and assessment methods. Higher education initiatives should prioritise cultivating essential skills that enable students to adeptly navigate AI-mediated academic communication, fostering informed, efficient, and responsible academic writers and learners. Additionally, with the growing exploration of pedagogies incorporating AILTs in language classrooms to enhance student motivation and learning effectiveness (e.g., Su et al., 2023), we emphasise embracing the post-humanist, repertoire-assemblage view of students' academic communicative competence. This perspective can help teachers recognise the full range of communicative resources available to students and acknowledge students' engagement with generative AI in daily academic writing practices as a social interaction experience through which students can achieve language development in a rhizomatic manner.
Divergent and conditional opinions among students were found regarding the usage of AILTs in the context of assessment. The impassioned student responses concerning the ethical line between "cheating" and legitimate use highlight a formidable challenge for assessment introduced by the evolving employment of AILTs in students' academic communication practices. Students are concerned about the communicative competences and proficiencies that higher education expects from them in the era of AI, and they are equally apprehensive about how their learning progress will be evaluated fairly. Universities today have a compelling need to better understand these student concerns and to provide applicable policies, instructions, and education to help students navigate learning and assessment in the new educational landscape characterised by the integration of AI. This not only includes policies regulating the use (or non-use) of AI in assessment; it also necessitates a fundamental rethinking of learning in light of the external cognitive function that AILTs play in the repertoire assemblages of students' academic communication, as well as an associated adaptation of pedagogical and assessment methodologies for constructive alignment (Biggs, 2014).
While this study has provided valuable insights into the perceptions and experiences of students regarding AILT usage in tertiary-level academic communication, the limitations stemming from potential sampling bias, temporal factors, and contextual specificity necessitate caution when interpreting and applying the results. Firstly, the study's reliance on convenience sampling introduced a potential source of bias, as the survey may primarily have attracted responses from students who possess a heightened interest in the subject matter and a greater familiarity with AI. Consequently, the resulting dataset might overrepresent opinions and experiences that are more positive or more critical of AILTs. Secondly, the survey was conducted during the spring of 2023, a few months after the release of ChatGPT. This temporal proximity to the technology's introduction likely influenced respondents' perceptions, causing an emphasis on ChatGPT-related comments. The survey participants' limited exposure to ChatGPT also means our findings cannot fully encapsulate the long-term implications and evolving dynamics of ChatGPT within educational settings, which highlights the need for future longitudinal investigations on the same subject. Finally, it is imperative to acknowledge the contextual limitation of this data source, drawn exclusively from university students in Sweden, a country recognised for its advanced position with respect to the digitalisation of higher education. Therefore, the student perceptions and experiences in this study might not be applicable to other regions, such as the Global South. More local investigations in other parts of the world are needed.

Table 1
Demographic information for comment participants.