The Anticipated Impact of Artificial Intelligence on US Higher Education: A National Study

Since the rise of generative artificial intelligence (GenAI) in late 2022, many scholars and thought leaders have wondered about its impact on higher education. This study used a survey methodology (three multiple-choice questions and one open-ended question) to explore the perspectives of a nationally representative sample of 1,327 U.S. administrators and faculty, asking how much change they anticipate as a result of advancements in artificial intelligence (AI) technology, how prepared their institution is for such change, and what aspects of higher education they expect to change. The researchers used Kranzberg's Laws of Technology as a lens to interpret the findings and guide the subsequent discussion about how AI might impact higher education. The findings showed that the vast majority of participants expect that AI will change their institution over the next five years and that the majority of participants do not feel that their institution is ready for change. The comments left in response to the open-ended question fell into one of four themes: concerns about academic integrity and rigor, issues related to AI integration (e.g., anticipated benefits, practices in teaching and learning, issues related to preparedness, and the expected scope of change), the feeling that the current AI discourse is merely hype, and feelings of uncertainty. Ultimately, AI has the potential to be both advantageous and disadvantageous to teaching and learning, with the benefits and challenges of its use varying by context.

Online Learning Journal -Volume 28 Issue 3 -September 2024

"Technology is neither good nor bad; nor is it neutral."
~Kranzberg's First Law
Since the launch of ChatGPT in late 2022, conversations about the impact of artificial intelligence (AI) on higher education have gained prominence. Mollick (2024) recently wrote that "we are at an inflection point where AI will reshape how we teach and learn," adding that "they [AI] will destroy the way we teach before they improve it" (p. 160). He goes on to elaborate on the tensions surrounding AI use within education: namely, that AI can be used by students to cheat, yet it also offers substantial promise in providing individual-level support to students.
The role of AI in our systems of education and society, in general, is a cross-disciplinary issue encompassing many aspects (e.g., applications, benefits, drawbacks, policy, innovation, funding, adoption), more than is possible to address in detail in a single paper. In this paper, we focus on macro-level observations of the anticipated impact of AI situated within the U.S. while acknowledging that there are nuances and context-specific phenomena that may not be captured in our discussion. When presenting a high-level view of a complex and pervasive technology, one risks missing the forest for the trees. To avoid becoming overly focused on the minutiae of how AI is being implemented and the many potential applications and challenges, we used Kranzberg's laws of technology (Kranzberg, 1986) as a theoretical framework to synthesize our findings and shape our discussion about the broader implications.
Our study builds upon previous work done by Bay View Analytics related to how higher education might change in the years to come (e.g., Johnson et al., in press; Veletsianos et al., 2021). In spring 2023, Bay View Analytics surveyed faculty about anticipated change at their institution and in higher education more broadly over the next five years (Johnson et al., in press). One of the topics that emerged in the open-ended responses was AI, which led the research team to wonder about participant expectations and feelings about the future specific to AI. Our present study focuses explicitly on the adoption and impact of AI technologies on higher education in the U.S.
We also recognize that the term AI is somewhat ambiguous and includes a variety of technologies that have been used over decades, such as machine learning, intelligent tutoring, and (more recently) generative AI (GenAI). Although the questions asked of participants in our study used the term "AI" rather than asking about a specific type of AI technology, our findings show that participants tended to focus on GenAI in their responses. Thus, this paper focuses primarily on the anticipated impact of GenAI on higher education.
The purpose of this study is to investigate the anticipated impact of AI, particularly GenAI, on higher education institutions among faculty and administrators. The following research questions guided our work:
• How much change might AI bring to institutions over the next five years?
• Are institutions prepared for such change?
• What aspects of higher education are expected to change due to AI technologies, particularly GenAI?

Review of the Relevant Literature
Scholarly reviews of literature related to AI use in education (also referred to as AIED) point out that this is not a new topic: AI has been used over time to support learning through various applications that help with assessment, provide individualized student support, and assist with administrative tasks (e.g., Chiu et al., 2023; Crompton & Burke, 2023; Zawacki-Richter et al., 2019). While past research showing how AI has been used is helpful for understanding its potential applications, the purpose of this study is to investigate the anticipated impact of AI going forward. Our review of the literature first focuses on publications relevant to three key elements of our study: anticipated change over the next five years, preparedness for such change, and aspects that are expected to change. We conclude the section with a description of the theoretical framework used to guide our interpretation of the findings and discussion.

Extent of Anticipated Change
Scholarly discourse and anecdotal discussions within academia indicate that AI use within higher education is growing. Within the U.S., reports from well-respected sources in the higher education sector have highlighted the increasing prominence of AIED and provided insights on the state of the landscape and recommendations for AI strategy development (e.g., Fink, 2024; Gunder, 2024; Pelletier et al., 2024; Schroeder, 2024). There is an overarching sentiment within these reports that increased AI use brings both opportunities, such as improving support for students with disabilities, and challenges, such as academic integrity and algorithmic biases that may disadvantage some students (Fink, 2024).
At the same time, Staudt Willet and Na (2024) cautioned against overestimating the use (and growth) of AI in higher education. Their study examining 25 education-related subreddits (on the Reddit platform) revealed that the volume of conversations related to ChatGPT was low; however, when it was discussed, engagement was high. These subreddit conversations mostly centered on students and topics like academic integrity and the effect of ChatGPT on teaching practices. The growth of AI use in education is a generally accepted notion; how much change will result from this growth tends to be the topic of debate.

Preparedness for Change
When questioning whether institutions are prepared for change, one needs to consider faculty, students, and administration. Salhab (2024) argued that efforts to improve AI literacy among faculty and students are needed to "bridge the gap between market needs and the skills and competencies delivered by university programs" (p. 17). At the administrative level, Gunder (2024), in her framework for AI integration in higher education, recommended that decision-makers view AI as a tool that extends across multiple contexts within an institution and impacts various interest groups. The framework emphasizes the importance of garnering support and collaboration from administrators, faculty, and students to understand how AI impacts factors like strategic planning, governance, ethics, infrastructure, instruction, assessment, accessibility, and personalized support. Despite these calls for improved AI literacy and strategic planning for AI-related change, there is an overall lack of research exploring how prevalent feelings of preparedness and unpreparedness are at institutions and what would help address feelings of unpreparedness for AI use in education.

Aspects of Higher Education Expected to Change
The 2024 Horizon Report (Pelletier et al., 2024), an annual report on higher education trends, dedicated a special section to plausible societal impacts related to AI (e.g., AI will impact how some jobs are performed, which may affect workforce needs). The authors added that "AI tools have the potential to reshape pedagogy and student experiences" as "more and more uses for AI in the classroom are emerging" (p. 19). As noted by Salhab (2024), market needs may drive curriculum change as students need AI-specific skills to succeed in the workforce.
Studies using speculative methods (Ross, 2017) to forecast possible scenarios that may occur because of advancements in AIED technology also provide insight into the aspects of higher education that might change. Bozkurt et al. (2023) compiled diverse viewpoints from a global group of expert scholars into a collective reflection on GenAI. These scholars expressed concerns about academic integrity and a "fear of the unknown and concerns about its [AI's] power" (p. 58). Another study using speculative methods, by Veletsianos et al. (2024), asked respondents to imagine AI's future role in education from a relational perspective. Some of the respondents in Veletsianos et al.'s study anticipated that AI would be used primarily as a tool programmed to perform specified tasks. Others imagined AI as more of a collaborator that would work in a relationship with humans to achieve a goal together. The authors argued that rethinking the epistemology of our engagement with AI is important for challenging Western biases and worldviews that might otherwise be programmed into AI as normative standards. The speculative research reiterates the key sentiments expressed in other reports: there is a tension between a desire to take advantage of potential benefits while remaining keenly aware of the potential harms of a rapidly evolving technology.
Conflicting opinions about the extent to which AI will impact education, and about which aspects of teaching and learning will be affected, are present throughout the broader AIED literature and scholarly discourse. According to UNESCO (2024), AI presents "innovative opportunities to enrich and transform educational experiences" (para. 12). Conversely, AI may also widen the digital divide, exacerbate global inequities, and perpetuate biases. One key recommendation put forth by UNESCO is that AI should be used "to complement, rather than replace, the human elements of teaching" (para. 17). Bozkurt and Bae (2024) discussed the "dual nature of generative AI in educational contexts" (p. 1): the power it offers to humanity and the power it could potentially wield over us if we do not learn from our past experiences with so-called disruptive technologies. For example, Popenici and Kerr's (2017) discussion of the implications of AI use in education raises several important points worth acknowledging. Writing prior to the launch of GenAI, they identified many of the same potential uses for AI as the literature reviews cited above. In discussing the potential challenges, they use Massive Open Online Courses (MOOCs) as an example of a highly lauded technological solution that some believed would be a major disruptor through its potential to deliver free education at a mass scale. As Popenici and Kerr (2017) noted, MOOCs were much like AI in that they offered a novel solution with little empirical evidence to support decision-making (amidst pressure to make decisions fast). They stated that the takeaway lesson from MOOCs, which should be applied to any pressure to adopt AI, was "that a limited focus on one technology solution without evidence-based arguments can become a distraction for education and a perilous pathway for the financial stability of these institutions" (p. 9).
The key overarching narrative within the literature, whether from scholarly discourse, speculative perspectives, or studies on AI use in practice, is that AI use in higher education holds the potential to be beneficial. There is also an acknowledgement that critical challenges must be overcome for AI to be advantageous. At the same time, the empirical literature, especially studies focused on the perspectives of educators, is limited. One of the key aims of our study is to link theoretical and speculative discussions about AIED, especially its benefits and challenges, with empirical findings that show how AIED practices are unfolding in actuality from an educational perspective.
Theoretical Framework

Kranzberg's Laws provide theoretical focal points for inferring what might come to pass due to technological innovation: they are not laws in the sense that they can be tested like those of math or physics, but serve as maxims that can guide how we think about technology integration (Kranzberg, 1986; Pitt et al., 2023). Table 1 lists Kranzberg's Laws and briefly explains each one (Kranzberg, 1986). Pitt et al. (2023) used Kranzberg's Laws as a conceptual lens for understanding the implications of AI use in marketing. We follow a similar process, using Kranzberg's Laws to explore the discourse (amongst scholars and within our findings) surrounding AI use in higher education. Technological innovation, at present and historically, has been driven by human needs and exists for human use. Technology must be designed in such a way that humans understand its use and purpose, and even then, we must understand that a lesser- or non-technological approach may be preferred in some contexts.

Kranzberg emphasized the importance of recognizing the possible implications and impacts of technology through a historical lens. Further, his laws are underpinned by the understanding that technology adoption, use, and (dis)advantage vary by context and by human needs and preferences within that context.

Methods
The study used a survey research design. Bay View Analytics conducted the survey in partnership with Cengage, the Association of Community College Trustees (ACCT), the United States Distance Learning Association (USDLA), and the Association of College and University Educators (ACUE).

Participants
This study targeted higher education administrators, faculty, and trustees in the U.S. There were 1,327 survey participants in total: 451 administrators, 675 faculty, and 201 trustees. Trustees were not asked the full set of questions, including the open-ended question that provided context for the quantitative items; therefore, our analysis focuses only on the 1,126 responses from administrators and faculty. Participants were invited through announcements in the newsletters and communications of the partner organizations, as well as by direct email outreach. Data collection was conducted between September and November of 2023. The faculty and administrator samples were designed to be nationally representative of all degree-granting higher education institutions in the U.S. The resulting sample was compared to the distribution of higher education institutions in the National Center for Education Statistics' (NCES) Integrated Postsecondary Education Data System (IPEDS) to ensure representativeness. Trustee survey responses were solicited through direct outreach by the ACCT to its members.
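A representativeness check like the one described above can be illustrated with a goodness-of-fit comparison between a sample's distribution across institution types and the population distribution. The sketch below is not the authors' actual procedure, and every count and share in it is hypothetical (the paper does not report its IPEDS comparison figures); it only shows the general technique with a chi-square statistic computed by hand.

```python
# Hedged sketch: comparing a sample's institutional-type distribution
# against a population distribution (e.g., one drawn from IPEDS).
# All category counts and shares below are hypothetical illustrations.

def chi_square_gof(observed, expected_shares):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E."""
    total = sum(observed)
    expected = [share * total for share in expected_shares]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical sample counts by institution type (sums to 1,126)
observed = [520, 410, 196]                # e.g., public 4-yr, private 4-yr, 2-yr
population_shares = [0.46, 0.36, 0.18]    # hypothetical population shares

stat = chi_square_gof(observed, population_shares)
CRITICAL_5PCT_DF2 = 5.991  # chi-square critical value, df = 2, alpha = 0.05
print(f"chi-square = {stat:.3f}; "
      f"no significant mismatch at 5%: {stat < CRITICAL_5PCT_DF2}")
```

A small statistic (well below the critical value) indicates the sample's distribution is statistically consistent with the population's; a large one would flag a category that is over- or under-represented.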

Materials
The survey instrument was designed to be short in length to reduce the survey burden and maximize the number of responses. The survey consisted of three multiple-choice questions and an opportunity to leave an open-ended comment (see Table 2). Administrators and faculty were asked the full set of questions, whereas trustees were asked only the questions on the anticipated impact of AI over the next five years and whether they felt their institution was prepared for AI-related changes. The open-ended prompt read: "We welcome any comments you may have on how you believe AI will impact your teaching or the future of your institution."

Procedures
The survey was open from September 22 to November 27, 2023. It was distributed through a combination of outreach from partner organizations to their members and direct email invitations using leased mailing lists representative of all degree-granting higher education institutions.

Quantitative Procedures
All data were checked for completeness and inconsistent responses and to ensure that no duplicate responses were included. There were no mandatory survey questions; respondents were free to skip any item. Very few respondents skipped the multiple-choice questions; incompletion rates were under 2% for all questions, and typically just under 1%. The survey labeled all long-form open-ended questions as "Optional." The specific wording of the primary question used for this analysis was "[Optional] We welcome any comments you may have on how you believe AI will impact your teaching or the future of your institution."

Qualitative Procedures
The final survey question (a prompt to elicit an open-ended response) provided the qualitative data for this study. Of the total number of participants, 322 administrators and faculty provided an open-ended comment. One comment was removed from the analysis as irrelevant, leaving 321 comments included in the qualitative analysis.
A constant comparative approach was used to analyze the data and identify prominent themes related to the anticipated impact of AI on higher education. One researcher performed an initial analysis to code the data and organize the codes into themes. The other two researchers reviewed the qualitative analysis to ensure consensus among all team members. All 321 comments are represented in the themes; however, any comments quoted in this report come only from participants who consented to be quoted.

Validity and Reliability Measures
The survey questions used for this study employed a structure and question ordering used and tested in multiple prior studies. The specific wording for the current research used team-modified survey questions that had been validated in past studies exploring administrator and faculty feelings about the future more broadly (Johnson et al., in press). The modifications were made to elicit responses specific to feelings about AI use in higher education while retaining the same structure and question order as the previous work. The primary open-ended question used to solicit respondents' thoughts on AI use in higher education used slightly modified wording from similar questions in past studies.

Limitations and Delimitations
Several limitations should be noted when considering the findings of this study. As is true of all survey research, the results are subject to sampling error. The 95% confidence interval for administrator and faculty responses is +/- 4.7%. Given the exploratory nature of this study, the research team placed an emphasis on identifying broad trends. Our participants represent an array of diverse contexts, and our study does not capture the contextual nuances that may drive differences in responses. More research is needed to examine the extent of contextual variance when administrators and faculty from varying fields and institutional types are asked to anticipate the impact of AI on higher education.
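The reported sampling error can be read against the standard worst-case margin-of-error formula for a sample proportion, m = z * sqrt(p(1-p)/n), which is largest at p = 0.5. The sketch below is illustrative only: the n values come from the sample sizes reported in the Participants section, and the paper does not state how its +/- 4.7% figure was derived, so it may additionally reflect subgroup sizes, design effects, or weighting.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a sample proportion of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes reported in the Participants section:
# 1,126 = administrators + faculty combined; 451 = administrators alone.
for n in (1126, 451):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f}%")
```

For the combined group the formula gives roughly +/- 2.9%, and for the smaller administrator subgroup roughly +/- 4.6%, which suggests the reported +/- 4.7% is conservative relative to the full combined sample.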

Results
In presenting the findings, we briefly discuss the quantitative results as an anchoring point for a deeper discussion of the qualitative data, which are the focal point of this report. The themes identified through the findings then inform our discussion of the anticipated impact of AI on higher education as we consider how the themes within the data relate to Kranzberg's Laws.

Quantitative Findings
The first two quantitative survey questions asked about the extent to which participants anticipated AI tools would change their institution over the next five years and whether they felt their institution was prepared for AI-related changes. The responses from administrators and faculty were similar, with most participants from these two groups expecting a small to moderate amount of change (Figure 1) while stating that their institution was unprepared (Figure 2).

Administrators and Faculty: Perceptions of Institutional Preparedness
Over three-quarters of administrators and faculty do not feel their institution is prepared for AI-related change, which is noteworthy considering that the majority do not expect any drastic change. At the same time, the data analysis also revealed that perceptions of institutional preparedness are inversely related to the extent of anticipated change. Figure 3 shows that the administrators and faculty who expected massive change were the most likely to indicate that their institution was unprepared for AI-related change.

Relationship Between Perceptions of Institutional Preparedness and Extent of Anticipated Change
The survey also asked administrators and faculty what areas of their institution they expected AI to change. Most participants (87% of administrators and 89% of faculty) expected that students would use AI to complete homework and assignments, and roughly three-quarters of faculty and administrators expected that students would use AI to cheat. The third-most common response was that faculty would use AI tools to prepare course materials. Additionally, more than half of administrators and faculty expected that AI would change student access to academic information outside of class and that students would use AI for tutoring.

Before performing a constant comparative analysis to identify common themes, the comments were sorted by each participant's response to the first question about the extent of expected change. The distribution of comments by the extent of expected change is listed in Table 3. There were 23 survey participants who did not answer the question related to the extent of anticipated change yet chose to leave a comment. The 298 participants who both answered that question and left a comment were slightly more likely to anticipate greater change than the total group of participants (see Figure 1). In other words, those who expected small changes or little to no change were less likely to leave a comment. Despite fewer comments from the groups expecting less change, a sufficient number of responses represent these perspectives.

Academic integrity and rigor (n = 113): The impact of AI on academic integrity and the quality of student work.

AI integration (n = 143): Observations about how AI is (or could be) integrated into higher education, implications of AI integration (benefits and concerns), barriers to integration, and preparing for AI integration.

Hype (n = 19): The sentiment that the discourse surrounding AI is exaggerated and/or that concerns about AI are overblown.

Uncertainty (n = 36): Expressions of uncertainty about the impact AI might have.
Additionally, 29 participants wrote comments that did not fit within the major themes and were coded as "vague statements" by the research team. These comments did not provide information about the participant's opinion or expectations for the future. For example, one participant wrote, "AI will definitely have an impact on my teaching and the future of my institution and other colleges and universities all over the world." That participant did not provide any further insight as to how they thought AI would impact their teaching or institutions worldwide. Another example of a vague statement comes from a participant who wrote, "Higher education is going to have to use AI or it will be over," without providing any context as to "what" might "be over."

Academic Integrity and Rigor
Many participants (n = 113) commented on the impact of AI on academic integrity and rigor. The comments within this theme were overwhelmingly negative, with participants expressing concerns that students were using (or would use) GenAI to cheat or as a crutch that would reduce students' capabilities to "understand complex issues" and "stunt creativity." One participant wrote, "AI is a crutch students currently rely upon and don't learn from the use, just find a way to shortcut the learning process. The attitude of getting the grade is more important than learning." A second participant shared a similar sense of despondency about the long-term impact of GenAI, saying, "It will be a race to the bottom where everyone gives up on real learning so we can more easily award credentials that are increasingly meaningless." Some participants remarked that they had become hypervigilant about implementing measures to verify that students had completed any submitted work entirely on their own. These participants mentioned reverting to "blue book assignments" and making "students write short papers in pen or pencil" to prevent cheating. One participant who had previously redesigned their assignments to prevent cheating in response to the rise of various internet technologies added, "I suspect that soon, some idiot will create AI tools that will simply read them [course materials] for the students and answer the questions, and at that point, we have to return students to hard lockdown in the class, or else, what is the point?"
A handful of participants also spoke of a need for a changed approach to assessment, and their comments carried a more positive tone as they suggested improving the instructional design of learning experiences rather than restricting the use of technology. One person posited that "well-designed assignments won't have to worry about AI being used to cheat," and another wrote, "Students will cheat-but only if the assignments are poorly designed and not meaningful." Within the conversation about academic integrity and rigor, a few comments asserted that changes to teaching practices and assessment were necessary to effectively prepare students for the workforce. One participant noted, "Our course content will change as the marketplace adopts AI tools, forcing us to teach and prepare students to use AI tools in their vocations." Whether participants chose to permit AI use or restrict technology use, the comments suggested that the widespread availability of GenAI tools to students would impact faculty workload. Below, we contrast two participant responses related to academic integrity and workload. The first participant places the onus on faculty to figure out how to effectively incorporate AI into coursework to create a positive benefit for students, while the second essentially chooses to reject AI to reduce their workload (while noting that restricting technology also brings extra work). Highlighting such differences in participant responses and their feelings toward AI is important for understanding the underlying tensions that may affect how AI impacts higher education over the long term.
It's a new technology, and as people are aware of its benefits, I hope that we can see some positives here, like using for tutoring, time management, creation of rubrics, meeting agendas, little things that take a lot of time. For students, yes, there will be cheating (there is always cheating), but I think that places the workload on faculty to create more interactive lessons, and incorporate AI in some facet.

I've had to switch from term papers students do at home to in-class handwritten exams. This stinks and makes things worse for both me and my students. However, I will not make any more effort to use AI tools for more creative teaching or whatever because my aim is to make things as easy as I can for myself and minimize the time and effort I devote to teaching. I'm a professor, not a teacher.

AI Integration
AI integration into higher education was the largest theme; nearly half of the participants who left an open-ended comment (45%) made remarks that fit within it. Comments related to AI integration focused on whether or how AI could be integrated into higher education. Within the overarching theme of AI integration, four overlapping sub-themes emerged: benefits, practices, preparedness, and scope of change. Table 5 provides a description of each sub-theme along with the associated number of comments.

Benefits. Comments related to AI integration within the sub-theme of benefits centered upon the positives that could result from implementing AI applications in teaching and learning. A primary benefit mentioned was the potential for AI to make teaching "more efficient," with participants stating that AI would "make my workflow easier" and "save time for me by taking over rote administrative tasks." One participant commented, "I do not intend to use AI very often, but I am continually more and more aware that the option can save time and energy when I feel under-resourced in these areas." Other participants suggested that AI should be viewed as a gift that offers "an opportunity to engage students." "We've focused almost entirely on the threat of AI and only to a very small degree on opportunities," remarked one participant. In contrast to the concerns expressed in the comments on academic integrity, participants who mentioned the benefits of AI integration tended to view AI positively as a "tool to help us improve the quality of our programs." Several participants expressed hope that AI would "change the focus of education," shifting the emphasis to creativity and deeper, more critical thinking about concepts. One participant wrote, "I just think that AI will force changes in how assignments are put together. Hopefully, it will lead to more thoughtful learning outcomes and assignments."
Practices. Comments that focused on the practical ways that AI was being, or could be, integrated into higher education were grouped into the sub-theme of practices. Some participants described how they had begun to use AI or intended to use it going forward. These participants made remarks indicating that they were learning how to "use it [GenAI] effectively, efficiently, and ethically" alongside students so they could help students leverage AI as a learning tool and set them up for success in the workforce. One participant acknowledged the role that AI could play when students graduate and begin their careers, saying, "I realize that AI tools will be used in the workplace, and I embrace students' use of AI tools to jump-start project plans and generate ideas. I am working at modeling the instructional flow to how one would approach problems in the workplace." Some comments relating to practices mentioned simple ways that AI could help their institution, suggesting AI could perhaps write a user manual for the LMS, enrich or create course materials, and help students become better writers (e.g., "students use AI to create an essay or policy or homework and then critique AI's skill or lack of skill"). Participants also noted that AI could provide individualized student support, assist in research, and help with enrollment processes. It could also be used by students and faculty alike to generate new ideas. One participant noted that they had "started to explore its use to refine assignments or come up with new class examples where I already know what I want them to learn, but need slightly new or updated material." Similar to the discussion about academic integrity, some participants noted that they had changed their teaching practices to reduce the likelihood of cheating while permitting students to use GenAI. "AI has forced me to move away from answer-based teaching to experiential learning because I allow students to use AI," said a participant. Another said, "I have already completely redesigned my courses with AI in mind. As AI constantly changes, I will follow those changes with reorganized courses. I'm concerned that my fellow [faculty] members have not put in the time and effort to completely re-organize their courses." Lastly, a small group of participants noted that their intended practice with AI technology would be to opt out of using it. For instance, one person said, "I, for one, plan to opt out. I don't have any desire to talk to robots nor let them speak for me." Two other participants similarly commented that they would be retiring soon and had no interest in changing their approach or "learning how to deal with AI." Ultimately, participants whose comments focused on practices for AI integration appeared to hold the underlying belief that AI technologies would persist over time and that the onus was on institutions to adapt. As one participant aptly stated:

AI is here to stay, so our institution is already exploring how best to incorporate it sensibly into coursework. It can be a useful tool for both students and faculty (and, I assume, administrators). Like all tools, it can be misused, and that will have to be guarded against, but AI will not be the end of the world as we know it. (♫ And I feel fine. ♫)

Preparedness. The participants who commented on preparedness concerning AI integration focused mainly on sharing that they felt ill-prepared for AI or that their institution was not ready. Several participants used the term "behind the curve" to describe how their institution addressed AI. Despite an overwhelming feeling of not being ready, the comments within this sub-theme carried the sense that AI technologies would continue to be a persistent force and that action was needed to overcome a perceived lack of preparedness. Multiple participants within this group mentioned a need for training or professional development, either for themselves or for faculty in general, at their
institution. One participant wrote, "I'm nervous and excited to learn about AI and know that I will need extensive training to keep up with the technology." Another echoed that statement: "My perception is that most of the faculty at my institution, including myself, feel unprepared to use AI tools effectively." Others described actions such as a "cross-institutional task force studying the issues" and called for more "policies and procedures on how to correctly guide students on how they might use these tools to enhance their educational experience." The notion of preparedness included ensuring that faculty and students were equipped and institutional policies were put in place to "use the changes that occur in technology for the good they can do, and educate (our actual calling) our students in understanding the ethics and morality of that technology." Several participants noted frustration with their institution (or higher education more generally) for showing "zero interest in learning about AI or in even creating policies for best practices when using AI." In the words of one participant: My institution has no comprehensive plans to address AI to benefit student learning or the operations of the college. The posting of a draft statement on a library website does not constitute preparation for incorporation of AI in the academic enterprise.
Ultimately, there was an overarching sentiment that it is incumbent upon institutions to ready themselves to integrate AI technologies and undertake the "responsibility to prepare students for an AI future." Rather than discussing concerns about cheating, participants who spoke about preparedness tended to emphasize a need to "guide students on how they might use these tools to enhance their educational experience" and "think ahead and incorporate AI into class, homework, and assignments."

Scope of Change. Within the scope of change subtheme, the comments (n = 43) focused on the extent to which participants anticipated AI would be integrated into higher education. Similar to the subtheme of preparedness, participants held an underlying expectation that AI use will persist into the future. Some of the participants who mentioned the scope of change positioned AI as an initial disruptor that will eventually become a useful and inevitable part of our daily lives. They compared its widespread use to other technologies that have emerged over time, such as books, the calculator, the internet, spellchecking and grammar-checking software, and laptop computers. Within these comparisons to other technologies was the sense that AI is, at its essence, a tool and "over time, we will get a grip on it, but it will be very disruptive in the next few years." Other participants linked the scope of change to the workforce. They discussed how AI use in certain professions would impact academic disciplines, with some fields of study being impacted more than others. Comments like "our course content will change as the marketplace adopts AI tools" and "AI will change medical care, engineering, computer science, research . . ." suggest that the scope of change within higher education may be driven in part by the extent to which AI impacts the workforce. It is important to note that these comments appear to refer to AI more generally, rather than GenAI.
The notion that AI "will become deeply embedded in our work lives" extended beyond speculation about the impact on student career paths and into academia. "AI will take over most jobs, including mine as a lecturer, and result in a degraded educational experience," one participant lamented. Several others also brought up concerns, saying, "only the best in-person teachers will be retained," "they could replace me with a bot," and "the need for me as a teacher, and the desire I'll have for a teacher, will both decrease." Another participant added, "each of us will be required to consistently demonstrate the intrinsic and explicit value we provide to the community." In other words, these participants anticipated that AI would reduce the need for their personal role in higher education and that it could potentially replace them.
Hype. The smallest theme that emerged in the comments was hype (n = 19); however, it was notable that it presented mostly among the participants who expected minimal change. The participants who mentioned hype described the narrative that AI would largely impact higher education as "overblown." These participants argued that AI was "not a major problem" and that "we are all freaking out too early." One participant likened AI to "the 'crisis' of calculators in the early 1980s," and another said, "AI is just Wikipedia on steroids."

Uncertainty. Participants whose comments fell under the theme of uncertainty (n = 36) mainly expressed that they did not know what would happen. They used phrases like "I don't know," "no idea," and "not sure" in their responses. Several illustrative comments related to feelings of uncertainty are listed below:

AI is changing so fast it is hard to even imagine the impact on our institution.

I think that AI has already had a huge impact, and we really don't know how big the change will be, but it is major.
It is too soon to know the real impact of these technologies.
Honestly, I think the impact of AI is very hard to predict.The selections I made above are nothing more than "gut feelings".
Anyone who says they know how AI will impact academia is delusional.
We don't know what we don't know.
Summary

Overall, the findings show a range of perspectives, including some competing perspectives, about what the future may hold. The thematic analysis showed the prevalence of different narratives about the potential impact of AI and provided some explanatory insight as to why people may hold widely differing views. Together, the quantitative and qualitative analyses demonstrate that while there are different opinions about how AI use will manifest itself and impact higher education, there is a general consensus about two things: (1) AI is here to stay, and (2) higher education is not ready for the changes it will bring.

Discussion and Implications
To achieve our objective of connecting theory to day-to-day manifestations of AIED, this section will focus on revisiting our research questions, discussing the themes that emerged in the qualitative findings (academic integrity and rigor, issues related to AI integration, hype, and uncertainty) in relation to Kranzberg's Laws, and presenting recommendations for research and practice going forward.

Extent of Anticipated Change and Preparedness
Our study answered our first research question, about the perceived extent of change to institutions over the next five years, in a relatively straightforward way. Nearly all administrators and faculty surveyed expected that AI would impact higher education to some extent over the next five years; however, most anticipated moderate to small amounts of change. This finding aligns with research by Staudt Willet and Na (2024), who investigated reactions to ChatGPT by analyzing conversations within education subreddits. They found that conversations about ChatGPT on these subreddits were relatively minimal and gave no indication that most posters expected mass disruption.
The answer to our second research question was also clear: most administrators and faculty do not believe that their institution is prepared for AI-related change. Within the open-ended comments, the overall sense that AI is here to stay appeared to drive a desire for professional development and guidance from institutional leaders on how to effectively manage and integrate AI at the course level. Both Salhab (2024) and Bozkurt and Bae (2024) reiterate the importance of developing AI literacy as a critical skill.
What Aspects of Higher Education Are Expected to Change?

Our third research question was more complex: we asked what aspects of higher education are expected to change due to AI technologies, particularly GenAI. Overall, participant comments align well with the various sentiments observed in the literature, especially concerning the tensions surrounding AI use in teaching and learning. Although it is true (both in our findings and in the literature) that some people think AI will be beneficial and others believe it will be detrimental, acknowledging this fact does not help us draw conclusions about the anticipated impact of AI on higher education. In reality, there are multiple potential futures that scholars and participants envision for higher education, as suggested by the literature related to speculative futures (Bozkurt et al., 2023; Veletsianos et al., 2024). In the following section, we apply Kranzberg's Laws to add shape and structure to these varying perspectives, which exist in the literature and are echoed in our findings, about what might come to pass.
Interpreting the Findings and Their Implications Using Kranzberg's Laws

Will Change Be Good or Bad?
Before addressing what might change, it is important to acknowledge that such conversations tend to quickly bypass the "what" and jump into determining whether anticipated changes will be good or bad. Some have expressed overarching concerns that AI will have a negative influence on teaching and learning (e.g., students cheating, reduction of critical thinking, increased faculty workload). At the same time, some academics (e.g., Mollick, 2024) and participants in our study have expressed hope that AI will improve the higher education system by creating opportunities to better support and engage students. According to Kranzberg's first law, "Technology is neither good nor bad; nor is it neutral" (p. 545). Elaborating on this statement, Kranzberg (1986) argued that technology may be used differently in different contexts. Depending on how a technology is used in relation to other contextual factors, it may be either beneficial or harmful (or neither, or both) in the short or long term (or both). In other words, it is entirely feasible for a technological innovation, such as AI, to hold the potential for both positive and negative impacts (e.g., Bozkurt et al., 2023; Popenici & Kerr, 2017; Zawacki-Richter et al., 2019).
When applying Kranzberg's first law as a lens to examine participants' tendency toward a binary way of thinking (e.g., AI will be either good or bad for higher education), it becomes clear that attempting to judge the impact of AI as good or bad is an unproductive exercise. Rather, Kranzberg urges us to examine how AI will affect different academic contexts differently. Both the participant who said AI would "stunt creativity" and the participant who said AI would "improve the quality of our programs" make points that may be fully valid and true (even though they are in opposition), because it is the context, more than the technology itself, that determines whether a technological innovation will have a positive or negative impact. Similarly, participants who remarked that AI would increase workload and participants who said it could reduce their workload may both be correct, depending on context.

Non-Technological Factors May Influence What Aspects Will Change
The importance of context in understanding the impact of AI also ties into Kranzberg's fourth law, which highlights the importance of non-technical factors, such as sociopolitical factors, in technological decision-making. Or, as restated by Pitt et al. (2023), "the evaluation of any technology, according to Kranzberg, is largely influenced by the public's perception of risk, rather than risk itself" (p. 85). The key implication is that different institutional contexts may be affected differently by AI, depending on institutional culture, values, and leadership. For example, the findings within the subtheme of preparedness showed that the actions of institutional leaders in appointing task forces, developing policies, and supporting faculty and students in ethical AI use (or choosing not to do these things) affect whether or how AI will be implemented at an institution. Over time, there is likely to be an ever-widening rift between institutions that prepared themselves for AI use and those that did not. It is reasonable to infer that the action or inaction taken by institutions based on sociopolitical factors alone could feasibly lead to very different contextual outcomes in the future.
History Provides Insight into the Future

According to Kranzberg, understanding technological innovation in relation to history is critical. Kranzberg's fifth law, "all history is relevant, but the history of technology is most relevant" (p. 553), and sixth law, "technology is a very human activity-and so is the history of technology" (p. 557), underscore the importance of reviewing the impact of other types of disruptive technologies. Popenici and Kerr's (2017) discussion of the rise and fall of MOOCs serves as a reminder that decision-making based upon limited empirical research holds considerable risk. As one participant said, "We don't know what we don't know." Educational research in an array of contexts is critical for guiding implementation choices and policy in these early stages of widespread AI use. Bozkurt and Bae (2024) cautioned that "the educational technology community often fails to learn from history" (p. 4) and is often swept up in the excitement of the transformative potential of technologies without pausing to reflect pragmatically upon past innovations deemed "the next best thing that will save educators" (p. 4). Several participant comments within the theme of hype also serve as reminders that the advent of the internet and the use of calculators in mathematics were once considered massive disruptors that have now become the status quo. In a decade or two, we may come to realize that AI was, indeed, a massive disruptor, or we might find that it was not (or we may see that it massively disrupted some educational contexts and left others relatively untouched).
But What Exactly Might Change?

Kranzberg's second law, "invention is the mother of necessity," tells us that, as AI permeates higher education, we must address the needs that arise as a result. In identifying these needs, we gain insight into the specific aspects of education that might change. Essentially, "every technical invention seems to require additional technical advances in order to make it fully effective" (p. 548). Kranzberg's third law, "technology comes in packages, big and small," adds further insight to the notion that AI integration will create new needs, reminding us that these needs will be an interconnected mix of large and small ones. As an example, concerns about students using AI to cheat have led participants to employ new strategies, whether that be moving away from technology use during testing altogether (e.g., reverting to blue book tests), using plagiarism detectors (Fink, 2024), or changing the nature of their assignments and assessments so that AI can be used in a complementary way. Policy frameworks, like the one developed by Gunder (2024), are being used to help institutions recognize an array of needs and come up with solutions.
Finally, Kranzberg's sixth law states that "technology is a very human activity-and so is the history of technology." The potential benefits and challenges of AIED are reflections of our human needs. When we use AI as an efficiency tool, it reflects a human need for greater efficiency in higher education. When we use AI relationally, as a co-collaborator in our work (e.g., Veletsianos et al., 2024), it reflects our human need for technology to take on a role that includes a human-like way of interacting. When students use AI to cheat, it reflects our human need to avoid the negative societal outcomes that may result from failure. The history of educational technology, including the integration of AI into teaching and learning in recent years (Chiu et al., 2023; Crompton & Burke, 2023; Zawacki-Richter et al., 2019), is ultimately a narrative of our desire to make our systems of education accessible, high quality, and beneficial to the lives of learners. That was the unrealized goal of MOOCs (Popenici & Kerr, 2017), and these same values are repeated in the responses of participants and scholars advocating for AI use in education (e.g., Mollick, 2023). The hoped-for impact of AI, and the reason for trialing its use, is an education system that is objectively better than the system currently in place.

Recommendations
With the understanding that AI use is likely to become a permanent fixture in education and the knowledge that it is only in its infancy (Bozkurt & Bae, 2024), how should we react? Given the importance of context in relation to impact, we recommend moving away from broader debates about whether AI will be good or bad and toward questions that spark context-specific discussions: How might AI integration benefit a certain subset of learners? What might be the unintended consequences of banning AI use for assignments in a specific course? How will future workers in a certain field of study be expected to use AI in the workforce?

Conclusion
To conclude, we revisit our final research question: What aspects of higher education are expected to change due to AI technologies, particularly GenAI? After conducting this investigation and examining the findings in relation to the literature, we are hesitant to make claims about the future with a bold sense of certainty. Perhaps, after reading the many differing perspectives and ascertaining through Kranzberg's Laws that different contexts may make opposing viewpoints equally valid, we find our perspectives mirroring those of the participants who expressed uncertainty. The impact of AI will likely be varied, with a yet-to-be-determined combination of context-dependent positive and negative outcomes. In the meantime, we can assert with confidence that more research exploring the impact of AI use in practice (across multiple different contexts) is needed.

Ethics and Consent
As an independent research organization, Bay View Analytics follows strict internal ethics and privacy protocols and is not required to gain approval from an institution's ethics review board. The invitation message to all survey participants noted, "Privacy is of the utmost concern, and all respondents' data will be de-identified as the first step in the analysis. Only researchers holding current certification in human subjects research will have access to individual-level survey responses. Individual responses are not shared with the survey partners." The invitation listed the identities of the organizations conducting the research, the identity of all sponsoring organizations, who would have access to the data, and the intended research deliverables. Open-ended comments were reviewed to remove personally identifying information and were only quoted if the respondent provided specific approval.

Figure 1
Administrator and Faculty: Extent of Anticipated Change

Figure 4
Areas of Anticipated Change

Table 1
Kranzberg's Laws
"Technology is a very human activity-and so is the history of technology." (p. 557)

Table 2
Survey Instrument
What areas of your institution do you expect AI to change? (Administrators and faculty)

Table 3
Distribution of Comments by Extent of Expected Change
Four major themes emerged from the 321 open-ended responses (117 administrators and 204 faculty): academic integrity and rigor, AI integration, hype, and uncertainty. Table 2 describes each theme and the number of comments it includes. The themes overlap, and 21 comments address more than one theme.

Table 4
Themes Related to the Perceived Impact of AI

Table 5
Sub-themes Related to AI Integration