1 Introduction

A growing number of universities around the world are experiencing a proliferation of artificial intelligence (AI) across various domains of teaching and learning [1]. In particular, higher education institutions (HEIs) use generative pre-trained transformers such as ChatGPT to produce text that emulates human writing, respond to queries accurately, and generate education-related content for research and learning purposes [2]. However, the use of AI has raised issues that are yet to be fully resolved in HEIs [3, 4]. Key concerns revolve around finding the right equilibrium between leveraging AI's potential in education and effectively addressing the ethical, privacy, equity and acceptability issues that emerge during its implementation [5,6,7]. The broad domains of education, including teaching, research, and learning, are undergoing substantial change as a result of the emergence of AI in teaching and learning [8].

Regardless of the technology at hand and the benefits it promises, its acceptance and utilization by diverse educational stakeholders remain a subject of significant interest for researchers and practitioners. This study focuses on how to understand, and potentially promote, the acceptance of emerging technologies such as AI-ChatGPT within education, to maximize their benefits for students. Notably, despite the increasing attention that AI has garnered, how these technologies can or should be integrated into HEIs remains unclear.

Previous studies offer only limited insights, primarily due to the nascent stage of research in this field. While AI is not a new phenomenon, its widespread application in education is recent [7]. In its broader scope, AI is expected to exert a transformative influence on research practices, as it fundamentally questions a range of deeply ingrained assumptions, particularly those related to technology acceptance [9].

Research on students' acceptance of AI has advanced significantly in developing countries [10, 11]; however, minimal attention has been given to lecturers' acceptance of AI for their students in these same countries. Given that lecturers are pivotal decision-makers when it comes to selecting the technology and tools for classroom use, their acceptance is of paramount importance for the advancement of AI-powered tools.

In developing countries, few studies have examined the intentions of lecturers or faculty members to accept AI for their students' classroom activities. For example, a study among higher education students [12] found that 69.9% of them indicated acceptance of AI, and that the factors influencing their acceptance included social influence, innovation characteristics, perceived usefulness and psychological needs assessment. There is thus a dearth of studies in sub-Saharan Africa on lecturers' intention to accept AI for their students and on the factors that influence that acceptance. Accordingly, this study aims to investigate the acceptability of AI to lecturers and to examine the factors that influence lecturers' acceptance of AI for students. Examining and understanding lecturers' intentions and use behaviour helps researchers, information systems developers and educators initiate and develop suitable measures to promote AI adoption and acceptance for students [7].

2 Literature review

There are various opportunities for AI to enhance teaching, research and learning practices across all educational levels. AI is poised to revolutionize education by transforming how we manage classrooms, foster faculty collaboration, and deliver learning through innovative AI-powered teaching methods [12]. Extensive research has demonstrated that AI technology holds the potential to enhance students' learning and cognitive capabilities, as well as to raise the efficiency of teaching and learning processes through personalised learning experiences [13].

Specifically, in the context of higher education institutions (HEIs), AI-ChatGPT can play a pivotal role in fostering the development of soft skills [14] and enhancing students' overall educational experience in several ways, such as providing immediate access to information, adapting to individual student needs and learning styles, assisting with research, and even supporting students with disabilities through text-to-speech capabilities. For international students, it can prove instrumental in facilitating language acquisition and the mastery of subject-specific writing styles [15].

Su et al. [16] conducted a meta-review of 14 research papers on AI curriculum implementation in the Asia–Pacific region and identified content knowledge, tools, platforms, activities, models, assessment methods, and learning outcomes as essential components of AI acceptance.

Furthermore, [17] conducted a comprehensive review spanning from 2016 to 2022, covering AI instruction from kindergarten to university levels. Their examination encompassed pedagogical methods, teaching tools, learning content, and assessment strategies. They also recommended the utilization of P21's framework for twenty-first-century learning to define the essential AI literacy skills and knowledge required for students to excel in both their professional and personal pursuits.

This clearly shows that user acceptance of AI is fundamental to its progressive use in higher education. For AI to realize its potential benefits for a wide range of users, acceptance and adoption are crucial. Low acceptance rates can lead to reduced AI adoption, causing wasted resources, an oversupply of AI devices, and a potential slowdown in technological innovation, all to the detriment of lecturers and students [18].

In the realm of education, teachers' beliefs, organizational support and policies are recognized as vital factors influencing their classroom behaviour [18]. Numerous studies have aimed to establish connections between teachers' beliefs and their classroom conduct, exploring diverse categories of these beliefs [18], including perceived complexity and usability [19], sociocultural factors [20], technology readiness [21, 22] and pedagogical beliefs and practices [23], in relation to their use of technology in educational settings.

Despite numerous studies underscoring the importance of AI in education and investigating the interaction between humans and AI, there remains a gap in our understanding regarding lecturers' willingness to embrace AI for the benefit of students and also the factors that influence their acceptance of AI.

2.1 Conceptual model

To understand lecturers' acceptance of AI for students' use, this study draws on affordance theory, the technology-organisation-environment (TOE) framework and the technology acceptance model (TAM). The research model of AI acceptance was developed by combining these acceptance theories, with the intention of furthering our knowledge of how lecturers perceive AI-powered tools as enhancing teaching practices. The research model (see Fig. 1) comprises four constructs necessary to understand AI acceptance by lecturers in universities:

Fig. 1 Research model

Perceived pedagogical affordances (PPA): Incorporating aspects of "pedagogical" affordance theory [24], this construct examines lecturers' perceptions of how AI-powered tools can enhance teaching practices. In this study, PPA involves assessing whether lecturers believe that AI offers benefits such as personalized learning, improved student engagement, or more effective assessment methods. It also includes assessing lecturers' enthusiasm, anxiety, self-efficacy, and emotional responses to concerns or challenges associated with AI, such as job security or data privacy.

Socio-cultural context (SCC): As per the technology-organisation-environment (TOE) framework [25], this construct involves examining the broader social and cultural factors that influence technology acceptance among lecturers. In this study, SCC encompasses the role of colleagues, school leadership, professional communities, and educational policies in shaping lecturers' attitudes and behaviours towards AI.

Perceived complexity and usability (PCU): Derived from the theoretical orientation of the technology acceptance model (TAM), this construct involves assessing lecturers' perceptions of the complexity and usability [19] of AI-powered tools. In this study, PCU includes factors such as ease of use, user interface design, and the learning curve associated with adopting new technology.

Organisational policies and incentives (OPI): Within the TOE framework [25, 26], this construct assesses how university policies, incentives, and leadership support influence the adoption of AI. In this study, OPI involves examining whether the institution has policies that encourage the incorporation of AI and whether lecturers receive incentives or recognition for incorporating technology into their teaching practices; it also reveals the importance of support mechanisms and professional development opportunities available to lecturers [27]. OPI further includes evaluating the availability of training programs, peer support networks, and resources that can help lecturers build confidence and competence in using AI.

As shown in Fig. 1, the adapted research model consists of perceived pedagogical affordances, socio-cultural context, perceived complexity and usability, and organizational policies and incentives, which together influence AI acceptance by lecturers.

3 Methodology

3.1 Study design

This study is cross-sectional research conducted among lecturers in Ghanaian universities, employing convenience and snowball sampling techniques. An online self-administered questionnaire was created using a Google Form and shared on WhatsApp and through the university's email portal with friends and colleagues. The questionnaire development was informed by the works of [24] (PPA), [25] (SCC), [19] (PCU), and [25, 26] (OPI). Inclusion criteria required participants to be lecturers at a public or private university at the time of the survey.

3.2 Study setting

Ghana, a lower-middle-income country situated in West Africa, shares borders with Cote d’Ivoire to the west, Togo to the east, Burkina Faso to the north, and the Gulf of Guinea to the south. As of 2021, Ghana has a population of 30.8 million people distributed across its 16 administrative regions, with 50.7% being female and 49.3% male [28].

Ghana has 15 public universities, 10 technical universities and over 50 private universities. Its capital, Accra, is located in the Greater Accra region, which is also the most populous: its share of the national population increased from 16.3% in the 2010 census to 17.7% in the 2021 census [28]. The largest university is the University of Ghana, with a population of 67,914 students, of whom 34,772 (51.2%) are male and 33,142 (48.8%) female.

3.3 Data analysis

Descriptive statistics were analysed using the Statistical Package for the Social Sciences (SPSS) v27. Frequencies and percentages were used to describe lecturers' socio-demographic characteristics, their AI experience in education and their beliefs about AI-powered tools in education. Pearson's chi-square test was used to determine the relationship between lecturers' acceptance of AI for students and these socio-demographic characteristics, AI experiences and beliefs. Variables with p-values < 0.05 were deemed statistically significant.
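
As an illustration of this step, the following minimal Python sketch runs a chi-square test of independence on a hypothetical 2x2 table of AI training against AI acceptance. The counts and table layout are invented for illustration only; the study itself ran this test in SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (counts are illustrative only):
# rows = received AI training (yes, no); columns = accepts AI (yes, no)
observed = np.array([[34, 9],
                     [40, 11]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
# As in the study, a p-value below 0.05 would be read as a statistically
# significant association between the two variables.
```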

Furthermore, SmartPLS v3 was used for structural equation modelling (SEM) to analyse the factors (perceived pedagogical affordances, socio-cultural context, perceived complexity and usability, and organizational policies and incentives) that influence lecturers' acceptance of AI for students. The purpose of the SEM was to estimate the measurement model and assess the reliability and validity of the constructs.
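
The PLS algorithm itself was run in SmartPLS; purely as a hedged illustration of the structural part, the sketch below averages each construct's items into a composite score and regresses acceptance on the four composites with ordinary least squares. This is a simplification, not the iterative PLS weighting scheme, and all data and item counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 94  # sample size in the study

# Hypothetical standardized item responses; in practice these come from the survey.
ppa = rng.normal(size=(n, 4))  # perceived pedagogical affordances items
scc = rng.normal(size=(n, 4))  # socio-cultural context items
pcu = rng.normal(size=(n, 3))  # perceived complexity and usability items
opi = rng.normal(size=(n, 4))  # organisational policies and incentives items
acc = rng.normal(size=n)       # AI acceptance score

# Composite (mean) scores per construct -- a stand-in for PLS outer weights.
X = np.column_stack([block.mean(axis=1) for block in (ppa, scc, pcu, opi)])
X = np.column_stack([np.ones(n), X])  # add intercept

beta, *_ = np.linalg.lstsq(X, acc, rcond=None)
pred = X @ beta
r2 = 1 - ((acc - pred) ** 2).sum() / ((acc - acc.mean()) ** 2).sum()
print("path estimates:", np.round(beta[1:], 3), " R^2 =", round(r2, 3))
```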

3.4 Data collection

The data for the study were collected from lecturers over two months, concluding in July 2023. The online survey questionnaire covered five areas: lecturers' socio-demographic characteristics, their AI experience in education, their beliefs about AI-powered tools in education, their acceptance of AI for students, and the determinants of AI acceptance.

The questions on lecturers' socio-demographic characteristics, AI experience in education and beliefs about AI-powered tools in education were adapted from previous studies and modified in light of the faculty members' context. The constructs for perceived pedagogical affordances were adapted from [24], socio-cultural context from [25, 29], perceived complexity and usability from [19], and organizational policies and incentives from [25, 26].

Participation in the study was entirely voluntary, and written informed consent was obtained from all participants. The study complied with all ethical regulations outlined by the Research Ethics Committee (REC/0001) of Heritage Christian University, Amasaman.

4 Descriptive results

4.1 Lecturers' acceptance of artificial intelligence—ChatGPT for students

A total of 94 lecturers took part in this study. A large majority (84%) indicated their acceptance of artificial intelligence, specifically ChatGPT, for their students, while 16% stated non-acceptance of AI for students. The main reasons cited for non-acceptance included a lack of trust in AI's accuracy and reliability (33%), concern about the loss of personalized learning (14.9%), concern that the integration of AI could lead to job replacement (i.e. job security; 9.6%), and concern that overreliance on technology can hinder students' development of essential skills such as critical thinking (9.6%) (Table 1).

Table 1 Main reasons for lecturers’ non-acceptance of AI for students

4.2 Socio-demographic characteristics of participants

Participants’ characteristics: A total of 94 lecturers or faculty members completed the online survey. Two-thirds (n = 62, 66%) were aged 35–45. The majority were male (n = 59, 62.8%) and PhD holders (n = 65, 69.1%), with 60.6% being married. Regarding teaching experience, 43.6% (n = 41) had taught for 6–10 years, and a correspondingly high proportion of these lecturers (75.6%; p > 0.05) indicated acceptance of AI for teaching. Most participants were located in the Greater Accra Region (41.5%), with the fewest respondents from the Central Region (8.5%). Interestingly, 45.7% (n = 43) of the lecturers indicated that they had received no AI training and development; nevertheless, 79.1% (n = 34; p > 0.05) of them were willing to accept AI for teaching. Concerning attitudes toward AI, 66% (n = 62) had positive attitudes, while 11.7% (n = 11) were unsure of their attitudes towards AI. Further, most lecturers (77.7%) taught classes of fewer than 200 students; however, lecturers whose classes exceeded 200 students were more willing (90.5%) to accept AI for teaching (see Table 2). Since AI is a nascent research area in educational studies, demographic factors such as age, gender, marital status and regional balance are important to capture because of their bearing on user experience customization, cultural sensitivity and adaptation, and individual perspectives on ethical considerations.

Table 2 Socio-demographic characteristics

4.3 Artificial intelligence (ChatGPT) experience in education

Table 3 shows the AI–ChatGPT experiences of lecturers. The results reveal that more than two-thirds of lecturers are familiar with AI tools and encounter AI-powered tools in their daily work, while 28.7% are not familiar with AI. Interestingly, seven out of ten lecturers have never used AI-powered tools to teach, while more than half of them (52.1%) hold favourable perceptions of and attitudes towards AI's impact. Similarly, a plurality (48.9%) do not use AI tools in learning, research or teaching. In addition, more than half (56.4%) of the respondents reported that their universities had not integrated AI into their work role, while 61.7% reported that their universities had not decided on AI integration for teaching and learning. Nevertheless, a high proportion (70.7%; p < 0.05) of lecturers who reported no university decision on AI indicated acceptance of AI in education for their students.

Table 3 Lecturers’ AI experience in education

Regarding privacy and ethics, 83% reported these to be a concern for their students. Aside from these concerns, however, more than half of the respondents (52.6%; p < 0.05) indicated acceptance of AI-powered tools for students. Also, a majority (53.2%) reported that they did not know, or could not tell, whether they had encountered bias or unfairness in AI systems. The study also found that students are far more likely to use AI at home (79.8%) than in classroom settings (19.1%).

4.4 Beliefs about artificial intelligence-powered tools in education

Concerning beliefs about AI-powered tools in education, 69.1% of lecturers agreed with the statement “Once AI tools are available and approved by the university, it will be safe”. Among respondents who disagreed with this statement, a higher proportion (69.2%; p < 0.01) accepted AI-powered tools for their students than among those who neither disagreed nor agreed (62.5%; p < 0.001), while those who agreed showed the highest acceptance (87.7%; p < 0.001). Correspondingly, 75.5% believe AI-powered tools help improve efficiency in education, and 84% agreed that AI is a tool for personalising learning experiences. However, those who neither disagreed nor agreed on the personalising power of AI indicated a higher proportion (77.8%; p < 0.001) of acceptance of AI tools than those who disagreed (66.7%; p < 0.01). Nearly three-quarters (73.4%) of lecturers agreed that AI is an enabler of new technologies and can enhance learning outcomes (agreed by 76.6%) by offering adaptive learning platforms. While 22.3% of respondents neither disagreed nor agreed that AI is a powerful tool for solving complex problems, a majority (61.7%) agreed with AI's capability to solve them. Two-thirds (66%) of the participants believe that integrating AI tools can help students develop digital literacy and skills, while 20.2% neither disagreed nor agreed; of the latter, a high proportion (73.7%; p < 0.001) indicated AI acceptance for students. Further, 11.7% of lecturers disagreed that AI tools can be tailored to support local languages and fit the cultural context; nevertheless, most respondents (72.7%; p < 0.001) accepted AI for students.

The majority (75.5%) of the respondents agreed that connectivity issues and a lack of lecturer training hinder AI tools, and a higher proportion (78.9%; p < 0.05) of those who agreed indicated acceptance of AI for students. Also, 36.2% of the respondents disagreed that AI adoption can exacerbate existing inequalities or ensure equitable access to AI tools. Interestingly, 33% of the lecturers neither disagreed nor agreed that AI development tools can address teacher shortages. Regarding job displacement, 73.4% agreed that AI would lead to job displacement; nevertheless, a higher proportion (72.5%; p < 0.001) of lecturers indicated acceptance of AI for students.

5 PLS-SEM analysis and discussion

5.1 Structural equation modelling of acceptance of artificial intelligence

To examine the factors that influence lecturers’ acceptance of AI for students, two main analyses are conducted in this section: estimation of the measurement model and assessment of the structural model. For the statistical analysis, the study used partial least squares-structural equation modelling (PLS-SEM) as outlined by Ringle et al. [30].

5.2 Model assessment

From the estimation of the measurement model, as shown in Table 4, Cronbach's alpha (CrA) values ranged from 0.727 to 0.940, indicating good scale reliability. The composite reliability (CR) values were all above the recommended minimum threshold of 0.7, ranging from 0.719 to 0.943 (rho_a) and from 0.800 to 0.962 (rho_c). Furthermore, the average variance extracted (AVE) ranged from 0.572 to 0.893. As shown in Table 4, all AVE values exceeded the recommended 0.5 threshold, implying adequate convergent validity of the constructs. To check for multicollinearity, the variance inflation factor (VIF) was estimated [31]. From Table 5, the VIF values were all below 5 (i.e. within the acceptable range), indicating that the model is free of common method bias and collinearity issues.
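
For reference, these indices follow standard formulas; the sketch below computes composite reliability (rho_c) and AVE from a construct's standardized outer loadings, and Cronbach's alpha from raw item scores. The loading values are hypothetical and only illustrate the thresholds discussed above.

```python
import numpy as np

def composite_reliability(loadings):
    """rho_c = (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + (1 - l ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of the squared loadings."""
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of raw item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

loadings = [0.82, 0.79, 0.88, 0.75]  # hypothetical outer loadings
print(round(composite_reliability(loadings), 3))  # should exceed 0.7
print(round(ave(loadings), 3))                    # should exceed 0.5
```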

Table 4 Construct reliability and validity
Table 5 Discriminant validity with heterotrait-monotrait ratio of correlations

The heterotrait-monotrait ratio of correlations (HTMT) criterion was used to examine the discriminant validity of the constructs. As a rule of thumb, an HTMT value below 0.9 between two constructs indicates that discriminant validity is established [32]. As shown in Table 5, all HTMT ratios of the constructs ranged from 0.246 to 0.641, well below the 0.9 threshold.
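
For completeness, HTMT can be computed directly from the item correlation matrix: the mean absolute correlation between items of two different constructs, divided by the geometric mean of the mean within-construct item correlations. A minimal sketch (the index lists in the usage comment are hypothetical):

```python
import numpy as np

def htmt(item_corr, idx_i, idx_j):
    """HTMT between two constructs given the full item correlation matrix.

    item_corr : (p, p) correlation matrix of all measurement items
    idx_i, idx_j : item indices belonging to constructs i and j
    """
    # Heterotrait-heteromethod: mean |r| between items of different constructs.
    hetero = np.abs(item_corr[np.ix_(idx_i, idx_j)]).mean()

    # Monotrait-heteromethod: mean |r| among items of the same construct.
    def mono(idx):
        block = np.abs(item_corr[np.ix_(idx, idx)])
        k = len(idx)
        return (block.sum() - k) / (k * (k - 1))  # exclude the diagonal of 1s

    return hetero / np.sqrt(mono(idx_i) * mono(idx_j))

# Usage (hypothetical item groupings): a value of
# htmt(corr, [0, 1, 2, 3], [4, 5, 6]) below 0.9 would indicate
# discriminant validity between the two constructs.
```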

5.3 Structural model

During the analysis of the structural model, the coefficient of determination (R²) was estimated to determine the model's predictive power. According to Hair et al. [31], R² values of 0.25, 0.50 and 0.75 represent weak, moderate and substantial levels respectively. From Table 6, the R² value of 0.248 indicates weak explanatory power: the independent variables explain only 24.8% of the variation in AI acceptance (see Fig. 2 for the PLS results), with the remaining 75.2% due to factors outside the model. Future studies may examine such factors to better account for AI acceptance by lecturers; Tables 3 and 7 point to additional AI experiences and beliefs that may contribute to this unexplained variation. Notwithstanding, it is important to note that R² only indicates the strength of the relationship, not causality between the variables.

Table 6 R-Square
Fig. 2
figure 2

PLS results

Table 7 Beliefs about artificial intelligence-powered tools in education

Furthermore, Cohen’s f² was used to assess the effect size of each path in the model. As a rule of thumb, an f² below 0.02 indicates no effect, while values of 0.02, 0.15 and 0.35 represent small, medium and large effect sizes respectively.
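
For reference, f² for a given predictor is computed from the R² of the structural model estimated with and without that predictor:

$$ f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}} $$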

Goodness of fit was examined using the standardized root mean square residual (SRMR) and the normed fit index (NFI). SRMR is an absolute measure of model fit and in this study ranged from 0.061 to 0.074. NFI values range between 0 and 1; the closer the NFI is to 1, the better the model fit [32]. Table 8 shows the estimated SRMR and NFI values.
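
In standard notation (both indices are reported directly by SmartPLS; this is a sketch of the definitions), SRMR averages the squared residuals between the sample correlations $s_{ij}$ and the model-implied correlations $\hat{\sigma}_{ij}$ over the $p(p+1)/2$ unique entries, while NFI compares the fitted model's chi-square with that of the null (independence) model:

$$ \mathrm{SRMR} = \sqrt{\frac{\sum_{i \le j}\left(s_{ij} - \hat{\sigma}_{ij}\right)^2}{p(p+1)/2}}, \qquad \mathrm{NFI} = \frac{\chi^2_{\text{null}} - \chi^2_{\text{model}}}{\chi^2_{\text{null}}} $$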

Table 8 Model fit

Importantly, results from Table 9 indicate that all the hypothesised paths were significant. Organizational policies and incentives significantly predicted AI acceptability, as did perceived complexity and usability. Similarly, perceived pedagogical affordances and socio-cultural context both predicted AI acceptability in universities. Appendix A shows the cross-loadings and measurement items of the constructs. The next section discusses the implications of these results for AI acceptance in the classroom.

Table 9 Path co-efficient

5.4 Discussion

First, the study found a significant relationship between organisational policies and incentives and lecturers' acceptance of AI for students. This finding aligns with prior studies that emphasise the importance of institutional support for technology adoption [33]. Studies suggest that clear policies outlining expectations, guidelines for using AI tools, and professional development opportunities can foster teacher buy-in [33, 34]. Furthermore, providing incentives such as reduced workload or recognition for effective AI integration can motivate exploration and experimentation [34].

Second, the study found a significant relationship between perceived complexity and usability and lecturers' acceptance of AI for students. Teachers are more likely to embrace AI if they perceive it as user-friendly and not overly complex. Research [35] suggests that intuitive interfaces, clear instructions, and readily available technical support can enhance teachers' confidence and willingness to adopt AI tools [36]. Conversely, complex and cumbersome systems can lead to frustration and hinder acceptance [36].

Third, a strong relationship was found between perceived pedagogical affordances and AI acceptance. This finding underscores the importance of AI tools aligning with teachers' pedagogical goals. If teachers perceive AI as enhancing their teaching practices, improving student learning outcomes (e.g., personalised feedback, adaptive learning), or streamlining administrative tasks, they are more likely to accept it [37]. Conversely, if AI is seen as a replacement for teachers or not adding value to their pedagogy, resistance may arise [38].

Lastly, the study found that socio-cultural context positively influences teachers’ AI Acceptance. This highlights the crucial role of considering the social and cultural environment within an institution. Factors like faculty culture, leadership attitudes towards innovation, and existing technology infrastructure can all influence teacher receptiveness to AI [39]. A collaborative and supportive environment that encourages the exploration of new technologies can foster AI acceptance, whereas a resistant culture might hinder it [40].

The study findings contribute significantly to the ongoing conversation about integrating AI in higher education. This paper emphasizes the need for a multifaceted approach that considers not just the technology itself but also the human element. By addressing teacher concerns through supportive policies, user-friendly interfaces, and alignment with pedagogical goals, higher education institutions can create a more fertile ground for AI adoption.

6 Conclusion

The study findings revealed that lecturers in universities are willing to accept AI for their students. The factors that influence lecturers' acceptance of AI include organizational policies and incentives (including support and professional development), perceived complexity and usability, perceived pedagogical affordances, and socio-cultural context. Therefore, universities must attend to these factors to increase AI use for students. However, concerns were raised regarding connectivity issues and lack of training, the absence of AI approval by university governing bodies, job displacement, AI bias and fairness, privacy and ethics, and how AI tools can be tailored to support local languages and cultural contexts. Further, AI acceptance is influenced by lecturers' teaching experience, attitude towards technology, institutional support, and AI training and development. In this regard, to increase lecturers' acceptance of AI for their students' benefit, university management should consider these factors when developing and implementing AI-powered tools. Specifically, more training, workshops and periodic sensitization on the benefits and ethical considerations of AI tools would be beneficial. Policy guidelines are also needed to spearhead AI use in universities. Currently, university management in Ghana has taken no bold decision on AI use and acceptance for academic work, hence the slow pace of adoption. The researchers hope this work sets the tone for AI adoption and acceptance in developing countries.

6.1 Limitations

This study has some limitations, primarily stemming from the non-representative nature of the study population: it relied on convenience and snowball sampling techniques and therefore may not accurately reflect the population of public universities. Furthermore, the study was cross-sectional and as such cannot establish causality.

The online survey link was disseminated through WhatsApp and university email portals, and its anonymity feature meant respondents could potentially complete the survey multiple times. Despite these limitations, this research is one of the earliest studies on lecturers' acceptance of AI for their students in sub-Saharan Africa. Consequently, its findings contribute to the literature on artificial intelligence acceptance and hesitancy in developing countries. Importantly, future studies may explore students' greater use of AI at home compared with the classroom, since universities are seen as places of innovation and growth. Finally, future studies could explore the long-term effects of AI integration on teaching practices and student learning outcomes. In conclusion, the researcher hopes this study will ignite further exploration in AI education research.