A model for assessments in higher education institutions

The rapid transition to online learning during the Covid-19 pandemic has necessitated a re-evaluation of assessment practices in higher education. This research article presents an assessment model that informs policymakers and practitioners of innovative approaches to address these challenges effectively. Using an action design research methodology, the study examines the problem of online assessments in Private Higher Education Institutions (PHEIs). By considering the contextual factors of load shedding, the unavailability of Wi-Fi, the requirement for cheat-proof evaluations, and the paramount importance of addressing learning outcomes, the study sought to design a model that enables effective assessments in a post-Covid world. The designed artefact contributes to the growing body of knowledge on assessment practices in higher education by providing practical insights and recommendations for adapting to the unique challenges faced post-Covid. The model can guide policymakers and practitioners in developing a framework for innovative assessment designs in the future.


Introduction
The overall success of Private Higher Education Institutions (PHEIs) is based upon continual and consistent student throughput rates and the successful completion of qualifications. These factors are the guiding principles of best practice regarding student success in higher education. PHEIs are currently at a crucial crossroads in their learning and teaching strategy, where they must consider the goalposts of the future (Mentz, 2023). Furthermore, PHEIs must determine what quality entails and what will be sustainable, replicable, and achievable. The key focus will be on quality assessments.
Student success has recently come under the spotlight because of the potential causal link between emergency interventions implemented during Covid-19 and the marked improvements in student throughput rates during the 2020/2021 academic cycles (Khumalo & Makibinyane, 2021). The Covid-19 pandemic disrupted higher education in many ways.
Teaching, learning, group work, practical work, and assessments moved online and took on new modes. Questions about these changes' sustainability, efficacy, and integrity, and their impact on student success, still need to be answered and explored. There is a need for further reflection and research as HEIs navigate their way out of the pandemic and into a post-Covid world.
The way HEIs implemented assessments had already started to show fractures pre-Covid-19. However, the move to emergency remote teaching revealed that assessments could be done differently. This disruption acted as a stimulus for the reconceptualisation of current practices and of how higher education conducts assessments. Although different types of assessments are used to assess students' learning, written examinations are the most common approach HEIs use (Fynn & Mashile, 2022). The emphasis on assessment has become more important because modern society demands high-quality learning. According to Mawa et al. (2019), educators know very little about the assessment of learning or assessment practices in higher education.
In fact, a recent study by Dos Reis et al. (2022) found that most of the summative assessments they evaluated were at a low cognitive level, only testing understanding and application according to Bloom's taxonomy. They concluded that no explicit national policy guides HEIs on using Bloom's or any other taxonomy to assess students at the appropriate National Qualification Framework (NQF) level. Moreover, they identified a need for a national assessment policy framework to guide HEIs in assessing students at the different cognitive levels required by the qualification authorities.
Apart from the quality of assessments, Kakepoto et al. (2021) revealed that poor computer literacy, electricity load shedding, slow internet speed, expensive internet packages, and a lack of interaction between students and lecturers are barriers to online learning and assessments. A huge challenge in the country is keeping the electricity on. This is a fundamental problem in South Africa, and it affects online learning for students. Kakepoto et al. (2021) confirmed it is not only a South African problem; most developing countries struggle with the same challenge. Pakistan, for example, sometimes has up to 12 hours of electricity load shedding per day. While all online exams rely on electricity, there is no way to program a power outage into the examination tools so that the examination is terminated or paused. Basic requirements include allowing the exam to recommence once the power is back and bracketing out the load-shedding time (Dawson, 2021). Similarly, not all online exam technologies are resilient to a loss of internet connectivity. According to Kakepoto et al. (2021), expensive data creates a financial burden for students, and because they cannot afford data, they miss online classes. Online workshops sometimes last around four to six hours a day. For students who can hardly afford the tuition fees, expensive data creates an extra barrier. On most campuses, there are student centres where students can access free data, but travelling there creates its own challenges because of affordability. On the other hand, cheating is also a concern, especially for sit-down online assessment sessions, because students have their textbooks and notes with them. They also create WhatsApp groups where they share answers, and the use of artificial intelligence (AI) to monitor exams raises questions about privacy and ethics. More and more students also use AI, such as ChatGPT, to write assignments and answer assessment questions on their behalf (Dianati & Laudari, 2023). These are key obstacles to effective online learning, with major implications for online examinations. However, students still consider online learning helpful and positive.
Connectivity has changed the spaces and times where learning and assessments occur (Gros, 2016). The learning and assessment context has been, and still is, rapidly changing.
Approaches that have worked effectively for decades are being challenged, and it seems as if they are no longer appropriate to meet the expectations and needs of the Fourth Industrial Revolution (4IR). To be constantly connected is a way of life, and it has serious implications for learning and assessment. Both can happen anywhere and anytime and can be scheduled around one's lifestyle, habits, or preferences. According to Gros (2016), the current buzzword is "personalisation". Not every person has the same approach to learning, and technology supports this situation.
Students live in a world where everybody is connected 24/7, types on keypads, has permanent access to all information, and struggles with load shedding and affordable data.
However, when it comes to exams and assessments, students are forced back into a world with no access to information: writing with a pen on paper, or sitting down for an online examination that mirrors the traditional one, and memorising facts instead of applying them to new situations while googling the information they need. Because of this old-school assessment paradigm, stress and anxiety build up, as students must memorise and recall information and are mostly evaluated on what, and how well, they can remember (Sharma, 2024). People are used to googling everything they want to know while living in the Fourth Industrial Revolution, which is characterised by the fusion of digital, biological and physical technologies that is fundamentally altering the way people live and work. They are now en route to the Fifth Industrial Revolution (5IR), where collaboration between humans and advanced technologies such as AI and robotics will create sustainable, personalised and ethical innovations.
Assessment methods have developed more slowly than the world around the examination centre (Schutte, 2024). Examinations happen in a context that needs to be coordinated and in step with the world in which the student lives. These worlds must meld not only to accommodate the availability of electricity and data and the movement towards asynchronous learning, but also to turn assessments into practical knowledge about the subject, where the student can gain experience and learn valuable new skills during the assessment, instead of merely parroting the textbook and memorising class notes. To address these burning issues, this paper follows design science research (DSR) as methodology to create an artefact that provides a solution to the problem. DSR, as a type of research, invents a new purposeful artefact to address a generalised problem and evaluates its utility for solving problems of that type (Venable et al., 2012). Amongst the different methods for conducting DSR, the research reported in this paper employs the action design research (ADR) methodology. Action design research, as presented in the seminal paper by Sein et al. (2011), provides an insightful, structured process model for combining the activities of action research and DSR (Mullarkey & Hevner, 2018) so that DSR researchers work together with practitioners or stakeholders for mutual benefit and better results.
Following DSR, this paper investigates the problem of online assessments in PHEIs in a post-Covid world and formulates an assessment model that is not affected by load shedding or the unavailability of Wi-Fi, that is AI- and cheat-proof, and that still addresses learning outcomes. The paper attempts to develop a new, theoretically informed and practical model for this problem.

Necessity for assessments
Assessments are the standard practice for measuring how much of the knowledge and skills set out in a programme's learning outcomes has been mastered. Mawa et al. (2019) state that the purpose of assessment is to measure, certify, and report the level of students' learning so that reasonable decisions can be made about students' future programmes or placements, but also to assess the success of teaching and the overall value of a programme and system. Assessment is a standard tool to investigate the relationship between subject knowledge and skills gained. But, for Quinn (2015), it is a pity that the dominant discourse in many institutions is still the assessment of learning. She advocates a shift towards assessments that facilitate learning, and she challenges lecturers to extend their focus beyond the immediate classroom context, contemplating how assessment strategies can equip students for lifelong learning and professional success.
Assessment through written examination is a traditional method, universally practised in most educational institutions. It is a system in which questions are created following the subject content (Jabbar & Omar, 2015). This is done to evaluate whether learning took place and whether students are competent when measured against the learning outcomes of a subject. However, how effectively this testing is done is an issue that needs to be addressed (Kembo, 2020).
Despite the importance and regularity of testing, many institutions of higher learning still need to train their lecturers to assess students effectively. According to the evaluations of examination papers done by Dos Reis et al. (2022), most papers do not align with the appropriate NQF level, questions are not crafted in suitable, well-formulated language, and in many cases the same examination questions are repeated over several years. Kembo (2020) confirmed this. If this is how students are trained to respond, they will not be able to think critically and innovatively. The assessment paper should stretch and challenge students rather than merely test memory (Dos Reis et al., 2022).
It is taken for granted that someone who holds a PhD or master's degree is qualified to teach and construct examination papers (Kembo, 2020). However, a study by Momsen et al. (2013) showed that over 60% of the questions asked stayed at lower levels of cognition and did not test students' abilities to reorganise information, use it in different ways, synthesise it or apply it to novel situations. The absence of these higher-order questions indicates that examinations do not adequately develop students with the best skills and competencies; they merely identify students with better memories. The same is true of lecturers' technological competence. It is assumed that they know how to use technology effectively in class and during assessments, but many either struggle with or avoid the use of technology (Schutte, 2024). According to Dos Reis et al. (2022), the assessment problem relates to the curriculum problem. Some curricula have remained unchanged for a decade or more, and the academic staff who ought to lead curriculum innovation often lack the skills and tools required.

Different ways of assessing
At many institutions, examination papers assess the student's competency against the learning outcomes. The prescribed textbooks and study guides are the main materials that students use to prepare for examinations. The textbook or outcomes do not change every year.
Because of this, a practice has developed in which students use past assessments as a deductive tool to predict future areas that will be assessed, as well as certain characteristics, such as the difficulty level of question papers (Ontong & Bruwer, 2020). According to Ontong and Bruwer (2020), relying on past papers inhibits the development of critical thinking skills. The repetitive nature of the items assessed may produce students who can pass assessments but cannot demonstrate the critical thinking skills required by module outcomes.
For Mawa et al. (2019), the assessment of learning is an ongoing process, as it is conducted continually in various forms. These forms and methods may include tests and examinations as well as a wide variety of products and demonstrations of learning, such as portfolios, exhibitions, performances, presentations, simulations, multimedia projects, and other written, oral, and visual methods. However, written assessments remain dominant. According to Quinn (2015), assessments must focus not only on content but also on the process of learning and on creativity, so that students gain skills while being assessed.
Critical thinking is a highly valued skill in the current Fourth Industrial Revolution.
However, the traditional assessment methods often used by educational institutions, such as repetitive past assessments and focusing on previous examination papers during lectures and study sessions, do not effectively stimulate critical thinking or encourage the analysis of new scenarios. This approach creates a comfortable space for students because they know what questions to expect and what answers to prepare, but it fails to challenge them to think critically and become problem-solvers. HEIs should adopt test-enhanced learning and an appropriate learning approach to produce students who can think critically and solve problems instead of just regurgitating academic content (Ontong & Bruwer, 2020).
According to Mawa et al. (2019), critics argue that written examinations are limited because they only test students' verbal ability and, in a sense, their ability to memorise and remember. This method of assessing is usually a one-time measure based on a student's achievement on a particular day, with a single correct answer per question. This one intervention then adds up to 50-70% of the total year mark, omitting the student's demonstration of overall knowledge and thought processes. The practice of formative and summative assessments needs some rethinking. The enquiry into other assessment methods therefore also identifies the need to measure what students can do with what they know, rather than merely what they know. Other, authentic forms of assessment can encourage students to use higher-order cognitive skills and to use their knowledge creatively; encourage them to analyse, synthesise and evaluate (the highest orders in Bloom's 1956 cognitive taxonomy); and prepare them better for life in a demanding digitalised world where problem-solving, critical thinking and decision-making are in high demand.
A whole new way of thinking about assessment is necessary. A burning question is the introduction of peer, group, and self-assessment in designing assessment processes (Quinn, 2015). This is important for developing students' capacity to make judgements about their own and others' work. Being able to do this realistically and ethically is likely to be important for all graduates in their future professions and workplaces. But to assess and evaluate, one needs criteria against which this can be done. In some cases, students are involved in designing their own assessment criteria. This, as well as the process of evaluating peers or their work, contributes to a deepening understanding of how work is assessed and what is valued in a specific discipline.

E-examinations
Online examinations are still trying to find an identity of their own. Examinations also moved online during the COVID-19 pandemic's emergency remote teaching phase. These examinations were not initially designed as online examinations but as sit-down ones that quickly moved online. Since the first COVID-19 examinations, institutions have experimented with different forms of online examinations. Currently, online assessments at many institutions are more suitable for multiple-choice questions, questions with short answers, or questions that can be machine-marked. Question order and selection are varied to avoid cheating, and each student might receive a separate set of randomly selected questions. Thus, some students may get more difficult questions, and a fairness issue can arise. Technology to prevent cheating has also been developed and used, such as web-lock software, webcams, fingerprint readers and biometric machines (Kakepoto et al., 2021). Researchers such as Dawson (2021) are focused on finding solutions for e-cheating. However, technology constantly creates new opportunities for learners to have more control over how and where their learning occurs, and this includes some assessments. The e-learning environment incorporates collaboration, interaction, and engagement. It opens up various possibilities, including assessment through simulations and gamification.

Technology post-Covid
The Covid-19 pandemic has changed the world, particularly how education across the world is seen and accepted. Some HEIs were on the verge of changing their learning philosophy and were propelled into, and prepared for, what became an essential technology-driven learning strategy. Even though some institutions implemented 'emergency protocols' to continue learning based on technological approaches, the underlying education and learning models were based on archaic and outdated models and policies.
Technology did not stagnate during Covid-19; it grew, and the Fifth Industrial Revolution is now a reality. Technology will continue to grow post-Covid, and AI and chatbots are anticipated to transform communication; it follows that they will also change the world of digital literacy (Mills, 2023). Digital literacy and online learning will be catalysts for ensuring that future generations of learners are employable and ready to work in a world that includes new and innovative technological methods. Technology will also influence students' behaviours in the future (Mills, 2023). Therefore, HEIs must address how they approach learning and, particularly, how they will assess learners in the future. One of the positive outcomes of online learning during Covid-19 was that learners, particularly adult learners, could adapt to what became known as the 'new normal'.
The future of education will be hybrid; however, online will be the money spinner, and technology will drive this evolution. Therefore, the institutions that gain momentum will be those that stay abreast of learning methodologies and interconnect with technology to ensure that they remain relevant and at the forefront of education. The ability to adapt and thrive will not be a catchphrase; it will be the bedrock of education and of assessing learners.
Higher education needs to remain relevant, authentic and adaptable. A key component will be changing the overall model of assessment to one that allows learners to embrace technology, that makes learning transferable and relatable and, most importantly, that enables learners to be employable and to demonstrate true, high competency in their area of study. Practical and innovative assessment strategies, based on models that include flexibility and lifelong learning components, will be how future learners thrive in a highly competitive world.

Research design and process
This paper aims to create a new artefact to address the general problem regarding online assessments. DSR is suited to this study because the methodology aims to develop knowledge that can be used to design solutions for problems experienced in a specific field. It has largely been developed in information systems research, but it has been used successfully in other fields, such as business and education (Winter et al., 2015; Bakker, 2018). DSR projects typically undertake four main activities: problem diagnosis, purposeful artefact invention, purposeful artefact evaluation and design theorising (Venable, 2010). ADR, as presented in the seminal paper by Sein et al. (2011), provides a structured and insightful process model that combines the activities of DSR (Hevner et al., 2004) and Action Research (Susman & Evered, 1978). In the end, ADR requires a contribution that proposes a solution for a specific real-world problem through the building of an artefact (Haj-Bolouri et al., 2018), which, in terms of this study, is a model that aims to guide assessment designs, policies and practical execution in higher education. To this end, this study follows the four stages of the ADR process model (Figure 1), namely the formulation of the problem, intervening and evaluating the problem, reflecting and learning from various collaborative interactions, and finally, formulating the learning into a solution. The interactions with educators and students were conducted as semi-structured interviews. The artefact was tested at meetings, structured workshops, brainstorming sessions, and a mini-conference. After each intervention, the artefact was improved and additional elements were added according to the data collected during the intervention. The artefact's design was based on the literature available on online assessments, assessment in general, pedagogy, technology in education, AI, and the challenges of the twenty-first century. The design and creation of the artefact involved multiple evaluation, reflection and intervention sessions during and in between the workshops, meetings, mini-conference, and interviews. The data collected for this artefact came from interaction with educators, students, assessment officers and academics from several private higher education institutions in South Africa. Seven interviews were held with educators and nine with students. The mini-conference had 34 attendees. Eleven people attended the brainstorming session, and the workshop was attended by fourteen educators who actively participated. All of these activities took place online during scheduled TEAMS meetings.
The final model was evaluated by 26 participants from four different institutions during an online workshop. During the design of each pillar, evaluation was done, and comments were made on what was available at that stage. Immediate improvements were implemented. During the workshop, the researcher presented and explained the model. Participants were then asked to evaluate its overall comprehensiveness and usability as a model to inform assessment policy and practice. Most participants made positive and appreciative comments on the comprehensiveness and guidance the model provides.

Design of an artefact
This section entails ADR Principle 2 and explains the thinking process leading to the chosen model for online assessments. According to the ADR methodology framework (Haj-Bolouri et al., 2016), a theory-ingrained artefact for online assessments needs to find a balance between science, practicality, and technology. The artefact will inform policymakers and practitioners on handling innovative assessments in the future, including in an online environment.

Basic design and labelling of the artefact
Learning outcomes. Learning outcomes (ELOs) are the minimum range of standards for a level within a module or qualification (CHE, 2011). They should be specified in behavioural and measurable terms and, according to Bell (2022), should be listed in order of content-based outcomes, cognitive and affective outcomes, and application outcomes. The balance between the three will vary, depending on their role in the module and programme. All the participants stated that this is a non-negotiable pillar of any form of assessment. Assessments determine whether learners are competent when measured against the criteria set by the learning outcomes.
It is, therefore, an essential part of the designed artefact.

Graduate attributes.
During the interviews, brainstorming sessions, and meetings with groups of practitioners and participants from different institutions, the importance of graduate attributes in curriculum design and assessment was raised several times. Graduate qualities have been widely debated internationally using terms such as key competencies, core skills, and transferable skills. The term 'graduate attributes' has been widely used to describe these qualities (James et al., 2004; Barrie, 2007; Barrie et al., 2009; Holmes, 2013). A baseline study of South African graduates from the employers' perspective (Griesel & Parker, 2009) also embraces the term.
Graduate attributes have several points of reference. Some are shared by the higher education sector (such as attributes relating to academic authenticity); some emanate from the specific mission, values and ethos of the awarding institution and its commitment to student graduateness; others are shaped by the disciplinary context and knowledge in which they are conceptualised and taught (Jones, 2009). Graduate attributes must form part of the qualification standards of the Council on Higher Education (CHE) in South Africa (CHE, 2011). The CHE (2022) distinguishes two categories of graduate attributes: knowledge attributes and skills attributes. The CHE also suggests assessing progression towards the attainment of the attributes. Coetzee (2012) from Unisa emphasised embedding graduate attributes and employability in curriculum and assessment design. According to her, employers seek discipline-specific intellectual capabilities (learning outcomes) and transferable graduate attributes in candidates pursuing employment. The quality of graduates' personal growth and intellectual development must be portrayed by the skills and attributes they bring to the workplace. Graduate attributes thus enable and promote employability. According to Coetzee (2012), university education has a formative function, cultivating a specific set of transferable graduate attributes that constitute a graduate's graduateness and employability. Therefore, these attributes must be assessed to ensure they have been transferred, which is why they are included in the artefact.

Institutional differentiators.
During the workshop and conversations at the mini-conference, the concepts of institutional differentiators and niche focus areas were mentioned a few times. Two participants in the semi-structured interviews also stated that the study guides at their institutions are written from their unique focus, and that their assessment papers are internally moderated to ensure that their unique perspectives and differentiators are present or implied in the questions. When asked, all participants from all the different institutions confirmed that they have unique differentiators that act as their "competitive advantage", setting them apart from other institutions in the private education market. All participants also confirmed that they promote these differentiating perspectives and factors by incorporating them into their study material. The majority also confirmed that there is a focused effort to embed their unique niche in their assessments.
The CHE (2022) encourages institutions to find their differentiators and unique niches, guided by their institutional vision and mission, and make it part of their qualifications.
According to Bell (2022), learning outcomes may, apart from core skills, also include values and competencies that contribute to the ideals and focus of a specific institution. It is thus important to include these in an assessment artefact.
Challenges with online assessments. During the COVID-19 pandemic, lecturers employed various assessment methods through the online mode of teaching and learning. Currently, at the beginning of the end of the pandemic, some lecturers are back to practising offline assessment methods or, where they kept the assessments online, are tempted to mirror face-to-face strategies and practices. Only one of the institutions interviewed for this study confirmed that it is back to practising sit-down examinations exactly as before the pandemic. The remaining participants confirmed that they kept their assessments online and that their students preferred it. Numerous challenges and barriers to effective online assessments were discussed during the interviews with students and educators. Not all institutions have the latest and best digital student platforms to facilitate seamless and smooth online student experiences. Many institutions find the available options too expensive, especially PHEIs in South Africa, which do not receive government subsidies. Institutions that build their own student platforms to customise the student experience undergo teething problems that sometimes affect students' use of the platform.
All of these need to be taken into account when designing a model.

Multiple modes of assessment.
During the pandemic, online assessments at most institutions mirrored face-to-face strategies and practices. This was called emergency remote teaching, and most students had a positive experience with it (Schutte, 2021). Different assessments were suggested during the mini-conference, and a strong plea was made for multiple modes of assessment. During brainstorming sessions, lecturers listed a variety of assessment practices, such as peer assessments, group work and presentations. In the interviews, lecturers from the different institutions shared ways of assessing they are currently experimenting with, such as journaling, reflective essays, gamification, project-based assessments and assignments as summative assessments. This is not a new or a post-Covid debate: various assessment practices have been debated in faculty development programmes for more than two decades (Zhang & Burry-Stock, 2003; Sikka et al., 2007).
During the interviews, two participants from two different institutions confirmed that they do not hold formal examinations on a specific day as an online option. All their assessments are assignments with due dates that students must upload to the system. One institution still sets an assessment paper for a specific date, but students can download it at the scheduled time. They then have 12 hours to complete it and upload it again before the deadline.
They follow this route to give students affected by load shedding a fair opportunity to write the exam. Most other institutions set an assessment for a specific date online. The system opens the paper at a set time and, after three or four hours, as indicated by the assessor of the paper, automatically closes, by which time all papers must have been submitted. Some institutions use multiple-choice questions because the system marks them and results are immediately available. To avoid cheating, one institution combines the assessment with reflective essays in which students must explain, in their own words, what they learned from the module. Another institution, in an attempt to avoid cheating, expects students to upload a PowerPoint presentation with a video of themselves delivering it. Institutions see e-portfolios and experiential learning reports as ways of ensuring students refrain from cheating or using AI to complete the assessment. A recent study by Alzubi et al. (2022), in which data were collected from 62 educators, confirmed that quizzes and presentations were highly useful and commonly used modes of online assessment. Their participants are experimenting with e-portfolios, journaling, and project-based group assignments, which they also consider options for the future.
During one of the interviews, a conversation began on traditional African ways of assessment, which led to the question of the decolonisation of assessments. The researcher then embarked on a detour and interviewed academic colleagues from Zambia, Nigeria, Zimbabwe, Ghana, Malawi, Botswana, and South Africa on traditional ways of assessment in African cultures. Data saturation was reached after the first interview because all participants agreed that memorising and assessing facts are not part of the traditional African way of assessing. When taught, learners can ask questions to get more information or gain deeper insight, but the educator does not question them to see whether they understand. When learners apply their knowledge, the educator can see whether they are competent and whether they understand. Skills are learned by working alongside someone who knows the trade until the learner can do it independently. One of the participants remarked that what Western terminology calls experiential learning and work-integrated learning was born in Africa and is a decolonised form of assessment. He mentioned a university in Nigeria where, if the workplace passes the student on the experiential learning report, the university automatically passes the student, because the application of knowledge in the workplace confirms that the student is competent. According to most participants, project-based, problem-based, and experiential learning forms of assessment are rooted in a decolonised form of assessment. Since the question of the decolonisation of assessments has been raised by Godsell (2021), and the debate on the decolonisation of universities, curriculum, and pedagogy (Schutte, 2019) is far from done, these inputs need to be taken seriously when designing an assessment artefact.

The model as an artefact
From the narratives of the participants, it became clear that higher education needs a way of assessing students that is proof against load-shedding, the unavailability of Wi-Fi, AI, and cheating, but that addresses learning outcomes, graduate attributes, the unique niche of institutions, and employability skills, and that can be done asynchronously. During the model's basic design and labelling phase, it became clear that learning outcomes are the overarching concept.
Without assessing the learning outcomes, there is no assessment. This is the basic criterion. It was also confirmed by all participants, and tested during the artefact evaluation sessions, that graduate attributes, institutional differentiators, and employability skills must be incorporated into the study materials. These aspects must form part of formal assessments to confirm that they have been transferred. During the second phase of the ADR model, the artefact needs to be built and evaluated. The first pillar in the design of an assessment model is therefore visually presented in figure 2. The design of the second pillar of the model focused on how to address the credibility of online assessments. During the data collection phase, numerous participants mentioned the problem of cheating and of students searching either their textbooks or the internet for answers.
ChatGPT, other AI technology and WhatsApp groups were also mentioned as cheating tools.
From all the different interactions with participants, the researcher concluded, based on the majority of suggestions made, that problem-based and project-based assessments might be the most credible because they minimise the chances of cheating and of copying and pasting an answer from the internet. What was mentioned during the decolonisation conversations can also assist here: experiential learning or work-integrated learning and the application of skills. Research assignments as formative and summative assessments were also mentioned during an interview. To address the problem of load-shedding and Wi-Fi availability, the asynchronous assessments discussed by the participants were summarised in figure 3, allowing students either to download an assessment and work on it offline or to work on the assessment when they have electricity and access to Wi-Fi. The assessment artefact informs the creation of a framework for revisiting the policy and current practice regarding assessments at HEIs.
After the evaluative comments had been incorporated, the final artefact is presented in figure 6.

Conclusion and further research
This research aimed to design an artefact that could serve as a model informing assessment frameworks and policies at HEIs. The research followed the DSR paradigm and applied the ADR methodology. Initially, the intent was to design a model for online assessment, but it became clear that it could also be used for offline assessments. The initial idea was to design the artefact for use by private higher education institutions, but the final model can also be generalised and applied to public institutions. Using ADR was extremely useful in designing the model because of the continuous development and evaluation process. Each interview, workshop, brainstorming session, or meeting either contributed to the creation of the model or, when the model was shared during these interactions with participants, served to evaluate it; the model was then immediately adjusted or expanded based on the comments that surfaced.
This model is only the first of more tools needed to inform policymakers and educational practitioners about the need to innovate the design of assessments to address the challenges of the times. Further themes that need research are the potential and possibilities technology holds for innovating credible, reliable, and valid assessments. How assessments can be a learning experience that enhances employability and 4IR skills also needs to be investigated further. Two more questions must be explored: 'What structural conditions must HEIs implement to support students towards successful online assessments?' and 'What types of assessments will contribute to creativity, problem-solving, critical thinking, analytical skills, and other expertise needed in our ever-changing world?'

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This work was not supported by any funding.

Figure 1 ADR
Figure 1 Challenges identified during workshops, brainstorming sessions, meetings, and the mini-conference. In summary, these can be documented as:
- Load-shedding: an issue exists when load-shedding kicks in during or in the middle of an exam, or while downloading or uploading an assessment.
- Expensive data: some assessments take three or four hours.
- Unavailability or constant dipping of Wi-Fi.
- Cheating by students who write online: they keep open textbooks next to them and create WhatsApp groups to share questions and answers.
- Artificial Intelligence: students ask ChatGPT to give them the answers, do assignments, or write essays on their behalf; they Google the answers.
- Expensive transport for rural students travelling to the nearest centre where they can access Wi-Fi on examination days.
- Not all lecturers have the technological skills and knowledge to maximise the potential of online assessments.