The AIED 2022 conference held in Durham, UK, included a panel discussion entitled “AIED: Coming of Age?” The panel began with the following provocation:

The AIED community has researched the application of AI in educational settings for more than forty years. Today, many AIED successes have been commercialised – such that, around the world, there are as many as thirty multi-million-dollar-funded AIED corporations, and a market expected to be worth $6 billion within two years. At the same time, AIED has been criticised for perpetuating poor pedagogic practices, encouraging datafication, and introducing classroom surveillance. The commercialisation and critique of AIED presents the AIED academic community with a conundrum. Does it carry on regardless, continuing its traditional focus, researching AI applications to support students in ever finer detail? Or does it seek a new role? Should the AIED community reposition itself, building on past successes but opening new avenues of research and innovation that address pedagogy, cognition, human rights, and social justice?

Because of the level of interest generated by the discussion in Durham, which raised a multitude of issues and challenges centred on the futures of AIED and of the AIED research community, the panellists were invited to publish their thoughts as opinion pieces in this journal. This Special Issue, “AIED: Coming of Age?”, is the result. I am very grateful to the IJAIED co-Editors-in-Chief, Judy Kay and Vincent Aleven, for giving us this opportunity and for their ongoing support.

Before continuing, I should acknowledge that I am the author of the provocation. For those of you who don’t know me, while I’ve been part of the AIED community since 2015, I’m neither a computer scientist nor a cognitive scientist. Instead, I am a social scientist with some expertise in education, human rights, and ethics. So, you might ask, what right do I have to question the AIED community? Well, now that the AIED community’s work is far more visible (because of its successes, but also partly because of its increasing commercialisation and, perhaps most recently, because of ChatGPT), my argument is that the community increasingly shares a responsibility to examine, unpack and question everything that is usually taken for granted in its work – every claim, and every implication. The provocation’s aim was simply to encourage that process and the necessary debate.

The Contributors

The contributors to this Special Issue include the original panellists, all of whom have been leading members of the AIED community for many years. In alphabetical order: Ben du Boulay (University of Sussex), Art Graesser (University of Memphis), Ken Koedinger (Carnegie Mellon University), Danielle McNamara (Arizona State University), and Maria Mercedes (Didith) T. Rodrigo (Ateneo de Manila University). In addition, because of their notable contributions as members of the Durham panel’s audience, Peter Brusilovsky (University of Pittsburgh), René Kizilcec (Cornell University), and Kaśka Porayska-Pomsta (University College London) were also invited to contribute opinion pieces. Finally, in order to further open the discussion and to introduce some alternative voices into the debate – in other words, to prevent the Special Issue becoming an AIED echo chamber – several leading researchers from outside the community, all of whom have published in related areas, were also invited to contribute opinion pieces: Rebecca Eynon (University of Oxford), Caroline Pelletier (University College London), Jen Persson (DefendDigitalMe), Neil Selwyn (Monash University), Ilkka Tuomi (Meaning Processing), Ben Williamson (University of Edinburgh), and Li Yuan (Beijing Normal University).

All told, then, there are fifteen opinion pieces representing a spectrum of views, some complementary, others not, both from within and from outside the AIED community.

Ben du Boulay (Emeritus Professor of Artificial Intelligence at the University of Sussex and Visiting Professor at University College London) urges the AIED community to continue its traditional focus. While he acknowledges there are legitimate concerns about commercial applications of AI in education, he argues that it is incorrect to blame AIED for perpetuating poor pedagogic practices, and that AIED is already researching human rights and social justice.

Peter Brusilovsky (Professor of Information Science and Intelligent Systems, School of Computing and Information, University of Pittsburgh) sets out a case for learner control and user–AI collaboration as being key to ensuring human-centred AI in education. He focuses on learner control over the AIED technologies integrated into the learning process, especially for personalised content selection, often by means of open models.

Rebecca Eynon (Professor of Education, the Internet and Society, University of Oxford) examines the knowledge tradition in terms of which technology, education and research are conceptualised by the AIED community. She sets out, and then questions, the assumption that AIED research is by definition characterised by generalisable standards and impartiality, and proposes an alternative centred on the “ethics of care”.

Art Graesser (Emeritus Professor of Psychology at the Institute for Intelligent Systems at the University of Memphis), Xiangen Hu (Professor in the Department of Psychology, Department of Electrical and Computer Engineering and Computer Science Department at The University of Memphis), John Sabatini (Distinguished Research Professor at the Institute for Intelligent Systems and in the Department of Psychology at the University of Memphis), and Colin Carmon (Doctoral Student in the Department of Psychology and the Institute for Intelligent Systems at the University of Memphis) begin with two important statements: first, AIED systems do help students learn; and second, research on AIED systems follows ethical guidelines. Now, they contend, it is essential to scale up AIED – to meet the needs of diverse populations, while challenging the many public misconceptions about AIED and addressing the ethical uncertainties within AIED corporations.

René Kizilcec (Assistant Professor of Information Science at Cornell University) calls for more research on AIED from a social-psychological perspective. While most AIED research has focused on technological improvements, such as creating more accurate algorithms, he argues that it is also important to understand what factors shape the way educators perceive, trust, and use AIED in their teaching practice.

Ken Koedinger (Hillman Professor of Computer Science in the Human–Computer Interaction Institute at Carnegie Mellon University) acknowledges the two sides of each issue raised in the provocation. For example, he notes, while continuous monitoring can facilitate dynamic assessment, which has been shown to be more accurate than standardised testing, it might also be experienced as surveillance: being constantly observed by an unseen authority. This type of tension, he argues, needs to be addressed by the AIED community.

Danielle S. McNamara (Professor of Psychology and Executive Director of the Learning Engineering Institute at Arizona State University) examines in detail the three criticisms of AIED mentioned in the provocation: poor pedagogic practices, datafication, and surveillance. For each, she robustly defends the AIED community. She then highlights the promise of learning engineering, a multidisciplinary, pluralistic approach that embraces the complexities of learning.

Caroline Pelletier (Reader in Culture and Communication at the Faculty of Education and Society, University College London) criticises what she calls the “celebration of personalisation”, as embodied in many AIED systems. She questions the supposedly unquestionable aims of personalised learning, asking: what is the real problem for which AI-assisted personalised learning is proposed as a solution, and where is the evidence that such a solution is credible or worth having?

Jen Persson (Founder of the NGO Defend Digital Me, UK, which advocates for safe, fair and transparent data processing in education) begins with the purpose of AIED – to influence the developing brains of children – and argues that AIED researchers therefore have a duty of care: to ensure that AIED tools are safe and that they provide a high-quality education. In particular, teachers need to be helped to understand the evidence so that they can evaluate commercial AIED tools.

Kaśka Porayska-Pomsta (Professor of Artificial Intelligence in Education at the Faculty of Education and Society, University College London) reminds readers that AIED tools are purposefully created to change and enhance human thinking and behaviour, and that AI research early on recognised and addressed the need for transparency, accountability and flexibility. She acknowledges several remaining blind spots, and concludes with a manifesto for a pro-actively responsible AIED.

Maria Mercedes T. Rodrigo (Professor in the Department of Information Systems and Computer Science at the Ateneo de Manila University, Philippines) points out that the criticisms of AIED mentioned in the provocation are first-world problems – because countries such as the Philippines do not have the infrastructure to deploy AI-based educational applications at scale. However, such countries should still participate in these debates, to help ensure that the application of AI in education is genuinely ethical, inclusive and safe.

Neil Selwyn (Professor in the School of Education, Culture & Society, Monash University, Melbourne) argues that the AIED community would be well advised to engage pro-actively with the “growing pushback against the presence of AI technologies in education.” He recognises that there is a complex array of agendas in play (corporate, social, educational, and technical), and he calls for an approach to the futures of AIED that goes far beyond the technology itself, being co-produced by its proponents and its critics.

Ilkka Tuomi (Founder and Chief Scientist at Meaning Processing Ltd, Finland) takes us back to the fundamentals, asking what education is for and how it might best be organised. He speculates that instead of automating and sequencing knowledge delivery, new avenues in AIED research should build on existing research into open learner models, metacognitive support, and self-regulated learning. In summary, AIED should move beyond prioritising efficiencies, to focus instead on helping learners in the process of learning.

Ben Williamson (Chancellor's Fellow and Senior Lecturer, Centre for Research in Digital Education, University of Edinburgh) questions the foundational assumption that AI will transform the future of education for the better, and unpacks what he calls the ‘social life’ of AIED – the economic, political, ethical and regulatory control questions underpinning its role in education. He concludes by arguing for acknowledging the potential risks AIED could bring, including the real possibility that it could worsen rather than solve educational problems.

Li Yuan (Professor in the College of Education for the Future and Director of the Centre for Connective Intelligence in Education at Beijing Normal University) closes this collection of opinion pieces. She explores a critical approach to AIED, making the complexities of the AIED ecosystem more visible, before focusing on a specifically Chinese historical, political, economic, and ethical perspective. She concludes with a call for more research at scale, to ensure that AIED addresses the real needs of students and teachers.

Challenges

In the remainder of this introduction, I am going to take advantage of the opportunity to raise – although, for reasons of space, not to explore in sufficient detail – six challenges with which I believe the AIED community should seriously engage. The aim, again, is to provoke a discussion. However, before doing so, I first want to reiterate (cf. Holmes et al., 2021) that raising these challenges is not a criticism of the AIED community’s work to date. Instead, it is a recognition of the community’s successes alongside the fact that the world has changed. In my opinion, and to misquote Alvy Singer (of Woody Allen’s Annie Hall), an academic field is like a shark: it has to constantly move forward or it dies.

Generative AI

Months after the release of ChatGPT, there is wide recognition of an urgent need to rethink many areas in which AI is being or will be applied, with AI in education a prominent case. In my opinion, the exciting world of generative AI is both a gift and a major challenge for the AIED community. It is a gift for at least two reasons. First, thanks to the dramatic arrival of ChatGPT, policymakers worldwide are finally beginning to consider the possibilities of AI for education – albeit mostly in terms of how LLMs might help students to cheat and possibly lead to the collapse of education as we know it. Nonetheless, we would be wise to take advantage of this wave of interest, to engage policymakers and wider society in discussions about the more nuanced applications of AI in education that the AIED community has researched. Second, generative AI might itself have huge potential for education, enhancing AIED applications in many ways that, despite the flood of academic papers, are yet to be properly explored (indeed, ChatGPT has already been embedded in products by increasing numbers of AIED commercial players around the world).

However, from another perspective, this raises an important challenge: how do we ensure that AIED does not become a synonym for generative AI in education, watering down or displacing existing avenues of research, or dominating future ones? In addition, just as the potential of generative AI for education remains unclear, so do the risks. This is an urgent area on which, in my opinion, the AIED community should focus (in addition to, not instead of, its traditional focuses). Questions about the disconnect between the appearance of accuracy and the reality, about intellectual property theft, and about the exploitation of ghost “guardrail” workers in Global South countries are probably only the tip of the ethical iceberg.

“I’m Just an Engineer”

Ever since the Manhattan Project, the relationship between researchers and the exploitation of their research has been an uneasy one. What responsibility do research scientists have for the ways in which their research is applied by others? While AIED is not as perilous as the atomic bomb, presumably we all agree that, given that it is designed to change human minds and to shape the future of education, AIED does have important consequences. The problem is that, even with the best of intentions, some of those consequences might turn out to be bad – especially when profits are prioritised over the human rights of students. Consider the typical practice of commercial AIED players of exploiting the data generated by students’ use of their products to build their business models (a practice that raises important issues around data privacy and data ownership).

However, rather than suggesting that individual researchers have direct ethical liability when their research is commercialised, the question is: what is the collective responsibility of the AIED community as a whole? In my opinion, it is critical that the community considers the potential misuses or abuses of its research by the commercial sector… by design. In other words, it should become standard practice for AIED researchers to begin by asking themselves how their research might be misappropriated and what they can do to mitigate that possibility (which will only happen when AIED faculty members give a lead). Strategies might include (i) prioritising ethical research into AIED (which has slowly started to emerge, but which requires far more effort); (ii) carefully selecting the topics of AIED research (avoiding controversial and, I would argue, indefensible applications such as e-proctoring) and properly justifying such decisions (avoiding techno-solutionism); (iii) exploring approaches that empower (not disempower) teachers and enhance (not undermine) student agency (i.e., by not automating poor pedagogic practices, such as didactic instruction); (iv) exercising due diligence by identifying potential unintended consequences and working proactively to prevent or at least minimise them; (v) engaging with a spectrum of stakeholders, especially policymakers; and (vi) advocating for transparency, accountability, public scrutiny, robust guidelines and appropriate regulations. All of this would help establish a level playing field between the research community and the commercial players, while helping to protect learners (their education, human rights, and agency) throughout the lifecycle of AI in education (research, development and commercial deployment).

AI Literacy

Another challenge for the AIED community is the need to include research on what is increasingly being called AI Literacy – for everyone, not just for students studying AI-specific topics (elsewhere, we have combined AIED and AI Literacy under the acronym AI&ED; Holmes et al., 2022). In fact, worldwide there are many examples of AI curricula being developed and implemented in schools (Miao & Shiohira, 2022), with two leading examples both coming from the US: AI4K12 and MIT’s DAILy workshop. The problem is that such curricula typically focus on how AI works and how to create it (the technological dimension of AI), and rarely spend much time on its impact on, or its social justice implications for, humans and wider society (the human dimension of AI), which includes ethical questions centred on power and political motivations. Yes, frequently there is a nod to the ethics of AI (usually instantiated as biases), but often almost as an afterthought, once the ‘sexier’ topics (e.g., machine learning and large language models) have been studied.

I will add three things, at least one of which might ruffle some feathers. First, I would argue that the human dimension of AI should be given equal billing to the technological dimension, and that the two dimensions should be interwoven throughout any course. For example, if a student is to study how emotion detection works, they should at the same time study the potential impact of emotion detection (both when it does and when it does not work) on people – including issues such as surveillance – especially when the technology is being used in classrooms. Second, if only because AI is impacting on all aspects of society, I think it is essential that teachers in particular develop an appropriate level of AI Literacy in both the human and technological dimensions of AI, for two complementary reasons: so that they can best support their students’ developing AI Literacy, and so that they are empowered to evaluate whether an AIED tool might be useful, effective, and ethical in their specific context. Teachers from every discipline should also be supported to think about and to teach about AI in terms of their own discipline (for example, what are the implications of generative AI for literature and art, and how do these technologies affect what it is to be human?). Third…, well, I will leave the feather ruffling to my next challenge for the AIED community.

The AIED Community Needs to Be More Open to Other Fields of Expertise

I do not believe that computer science teachers (in general) are best placed to teach the human dimension of AI. Of course, there are many computer scientists (in and outside the AIED community) who have made important contributions to the human questions, but it remains the case that most computer scientists have neither the interest nor the necessary expertise to teach it. This is not a criticism – computer scientists are experts in computer science – but an observation with resonances for the wider AIED community. My argument here is that the AIED community itself needs to open up far more to researchers, ideas and criticisms from other disciplines. Yes, the AIED community has always engaged with domain and pedagogy experts, and it has a tradition of linking with and drawing on areas like cognitive science and the learning sciences; in addition, there are a few of us social scientists who have snuck in under the radar. But, if AIED is to address existing and future challenges, we need to open the community far more widely, ensuring that other voices are heard at the highest levels. And then we need to work together, challenging each other and encouraging each other to question the received wisdom – asking why we are researching something, not just how; exploring the impact of AIED on classroom culture, not just on learning – in order to push the field far further, while always being ethical by design.

Ethics and Human Rights

I noted earlier that ethical research into AIED is slowly starting to emerge (for example, see the many insightful contributions in Holmes & Porayska-Pomsta, 2023), but I also argued that far more effort is needed. As we concluded in Holmes et al. (2021, p. 522):

Clearly, many AIED researchers do recognise the importance and value of engaging with the ethics of their work (indeed, there is no evidence of AIED work that is deliberately unethical). However, as the responses reported here have demonstrated, this engagement now needs to be surfaced, the nuances of opinion need to be discussed in depth, and issues around data, human cognition, and choices of pedagogy need to be investigated, challenged and resolved. In particular, the AIED community needs to debate the value and usefulness of developing an ethical framework and practical guidelines, to inform our ongoing research, and to ensure that the AIED tools that we develop and the approaches that we take are, in the widest sense, ethical by design.

For example, in the same paper, we highlighted “the need to differentiate between doing ethical things and doing things ethically” (p. 505), a distinction that warrants further attention. While I have no doubt that AIED researchers strive to and mostly succeed in doing things ethically (e.g., working within university ethics constraints, and prioritising data privacy in tutoring systems), in my opinion AIED researchers need to step outside the AIED bubble to achieve a higher perspective and to ask whether the things that they do are themselves ethical. Building on the previous example, given personalisation’s connections to Silicon Valley’s reification of the individual, and the parallel downgrading of collaborative learning and collective intelligence, is personalised tutoring an ethical (let alone efficacious) ambition at all?

With regard to AIED and human rights, because of space limitations I will here just name (and give example questions for) some key human rights that certain aspects of AIED might challenge, and invite readers to engage with our Council of Europe report, in which more details are explored (Holmes et al., 2022): the right to education (Is it acceptable, when few human teachers are available, to rely on AIED tools?), the right to dignity (Is it acceptable to delegate educational decisions to AI-enabled systems?), the right to autonomy (Is the profiling of children by AI-enabled systems acceptable?), the right to be heard (Should AIED systems prioritise student agency?), and the right to be protected from economic exploitation (Is it acceptable for commercial players to commercialise the data created by students in their engagement with an AIED tool?). Unfortunately, although these human rights are clearly necessary and for the common good, they are not always properly addressed: “British children using Google products at school risk commercial exploitation and data-related risks” (The Digital Futures Commission, 2022, p. 8). Accordingly, I encourage the AIED community to seize the opportunity to take a lead – to build on its history, and to show the wider AI research community that the application of AI in social settings can and should respect and promote human rights (for AIED, those of students, parents and teachers), all the while still being both effective and useful.

As mentioned earlier, another important question with which the AIED community should engage centres on the need for regulation. In the Council of Europe project (Holmes et al., 2022), we are working towards a legal instrument designed to govern the application of AI in educational settings (teaching with AI), for which we are drawing on the medical model of clinical trials. Our logic is that, given that society rightly insists on medical interventions being thoroughly tested before being made widely available (because of their safety and efficacy implications for the human body), we should insist on educational technologies being thoroughly tested before being made available in classrooms (because of their safety and efficacy implications for the developing human mind). The Council of Europe project is also working towards a recommendation that teaching about AI (i.e., to facilitate AI Literacy) should include both the human and technological dimensions.

Language

I’m grateful that you have read so far, but this might be where I lose you – as I am now going to argue that, for AIED to genuinely come of age, its anthropomorphised language also needs to change (Watson, 2019). Let’s start with the big one: “intelligence”. Now I get that intelligence is in the name of the field, Artificial Intelligence in Education, but in the wider field of Artificial Intelligence (“a name that we capitalise to highlight that it is a specific field of inquiry and development, and not simply a type of intelligence that is artificial”; Holmes & Tuomi, 2022, p. 2) the word has long been identified as a problem (Crawford, 2021; Dreyfus et al., 1986; Tucker, 2022): “AI is not intelligence – it is prediction” (Firth-Butterfield, 2023, p. 9).

The issue is that, by adopting these “pragmatic ‘weak’ metaphors” (Rehak, 2021, p. 89), we can all too easily slip from positing intelligence as the target of research to assuming that the tools that have been developed are in some way themselves intelligent, thus accidentally or otherwise attributing to them capabilities that they do not actually have. I suspect it will be no surprise if I point out that, quite the contrary, no AI system today is intelligent, no AI system is smart, and no AI system (including today’s LLMs) understands anything – which is not to say that they are not useful. The same is true in AIED, particularly with the so-called Intelligent Tutoring Systems – which would be better named for what they are, “Automated Adaptive Tutoring Systems”. Perhaps not as catchy, but hey. The other big one is “learning”, with AI systems frequently being said to ‘learn’ from data. However, while the performance of some “machine learning” algorithms does improve based on data, the process is fundamentally different from the learning experienced by humans (and other higher-order animals), which by definition requires some form of awareness, whether conscious or unconscious: “true intelligence requires consciousness” (Penrose, 1999, p. 526). In other words, students might learn, while AIED systems (like non-sentient things, such as trees) might instead adapt.

But why make a fuss about all this? Well, the problem is that continuing to use anthropomorphic words – confusing, for example, the appearance of intelligence with actual intelligence – can mislead people (the public, policymakers, and ourselves) into thinking that AIED tools can do more than they actually can: for example, that they understand what we mean or feel, and that they can (or will soon be able to) substitute for humans. Admittedly, at first, anthropomorphisms can be useful, because they build on what people already know and can help make things understandable. However, all too soon they can result in the false assumption that the technology/human interaction is symmetric – that the machine (the AIED tool) and the user (the learner or teacher) have a more or less equal reciprocal relationship. This is contrary to most AIED research, which typically argues that the relationship is not equal, and that the human should remain in full control (that it is the computer, not the human, that should be “in the loop”). So, in conclusion, given that AIED researchers want AIED to be considered a rigorous applied science, we should stop anthropomorphising and should instead challenge these misleading metaphors wherever they are used (in research or in the commercial sector), to help assure the field’s scientific credentials.

In Conclusion

In the preceding paragraphs, I have introduced six challenges that I see for the AIED community, the intention being to provoke (or to continue) a debate. The opinion pieces that follow in this Special Issue introduce many other challenges, and all make important contributions to the future of AIED (my sincere thanks to all the authors). And so, I will finish with a plea. While all of us might gravitate towards reading the opinion pieces written by authors we already know and respect, I urge you to engage with the papers by authors whom you don’t yet know. Hopefully, you will be challenged by the ideas you encounter – some of which you might strongly disagree with, but all of which might usefully inform your future work and so contribute to AIED’s coming of age.