Rethinking online assessment from university students’ perspective in COVID-19 pandemic

Abstract The recent COVID-19 pandemic prompted the implementation of online teaching and online assessment. Online assessment can be challenging to both teachers and students due to technical, academic, and ethical issues. In this survey, we adopted both qualitative and quantitative approaches to evaluate (1) the perceived effectiveness of online assessment; (2) barriers and problems of using online assessment; and (3) suggestions for improvement. The online survey was conducted in May 2020, and 752 full-time undergraduate and postgraduate students completed the questionnaires. Forty-three undergraduate students attended an individual interview between May and June 2020. A total of 739 (98.3%) students had experience of taking online assessment during the COVID-19 outbreak. The survey results revealed that only 16.6% of students were satisfied with their online assessment arrangements. The major difficulty that students encountered was technical problems (52.6%). The majority of students (72.6%) agreed that online assessments were more affected by computer problems and internet connection than traditional examinations. Students expressed that teachers' feedback was essential for their learning, and they wished to receive timely and detailed feedback on their performance. Students suggested that technical support should be provided and standardized measures should be taken to ensure academic honesty.


Introduction
Assessment plays a crucial role in education and is considered the core component of effective learning (Gikandi et al., 2011). Assessment provides feedback to teachers and students, and the results serve as guidelines for correction (Bloom, 1968). There are numerous assessment methods available, such as examinations, tests, quizzes, written assignments, individual projects, and group projects. The assessment methods used should be in line with the course objectives, and they often vary among programs (Organisation for Economic Co-operation and Development, 2013). Traditionally, the majority of assessment methods require face-to-face contact, such as an end-of-term final examination or in-class presentation (Vyas et al., 2021).
Following the outbreak of the Coronavirus Disease 2019 (COVID-19) in January 2020, the Hong Kong government implemented a series of preventive measures. Classes were suspended and social gatherings were discouraged (Centre for Health Protection, 2021). The university switched the majority of its teaching and learning activities to online mode to reduce human-to-human contact and to cater for the needs of overseas students. The COVID-19 pandemic persisted throughout the semester (World Health Organization, 2021). Thus, online teaching remained the preferred option and online assessment became inevitable.
While many local universities have stated their plans and strategies for adopting technology to promote e-learning, most teachers and students had only limited experience of conducting online assessment (Coniam et al., 2014; Evans et al., 2020; Foung & Chen, 2019). Under COVID-19, modification of assessment methods was often needed (Carrillo & Flores, 2020). Common strategies included shifting from traditional summative assessment to formative assessment, changing the final written examination to a project or term paper, or hosting online examinations (Cleland et al., 2020; Nic Dhonncha & Murphy, 2021). In our university, three types of online examinations were adopted: (i) synchronous or asynchronous online examinations with a lockdown browser, webcam, and video analysis to detect and prevent cheating (Aguirre & Selampinar, 2020); (ii) synchronous online examinations with any online quiz system and video-conferencing tool; and (iii) paper-form synchronous online examinations under invigilation via a video-conferencing tool.
Nevertheless, with such sudden and massive changes in assessment methods, both teachers and students raised many concerns. There were worries that technical problems might arise during online examinations, that a project or assignment might not be as effective as an examination in assessing students' learning, and that there would be difficulties in ensuring academic integrity and preventing plagiarism or cheating during online assessments (Darling-Hammond & Hyler, 2020; Deranek & Parnther, 2015). The current study aimed to collect students' opinions, identify the barriers and problems, and provide suggestions for improvement on online assessment.

Literature review
Chickering and Gamson suggested seven principles of good practice for teaching and learning in undergraduate education (Chickering & Gamson, 1987): encouraging contact between students and faculty, developing cooperation among students, encouraging active learning, giving prompt feedback, emphasizing time on task, communicating high expectations, and respecting diverse ways of learning. These principles have been widely adopted as performance criteria to evaluate the effectiveness of assessment, including online assessment in web-based virtual classrooms (Gorsky & Blau, 2009; Tartavulea et al., 2020). Morgan and O'Reilly proposed ten key qualities of online assessment (Morgan & O'Reilly, 2005): a clear rationale and consistent pedagogical approach; explicit values, aims, criteria, and standards; relevant, authentic, and holistic tasks; awareness of students' learning contexts and perceptions; sufficient and timely formative feedback; a facilitative degree of structure; an appropriate volume of assessment; validity and reliability; certifiability as students' own work; and continuous improvement via evaluation and quality enhancement. Both Chickering and Morgan put great emphasis on giving feedback and engaging students in the assessment.
It was suggested that effective feedback was essential for stimulating students to learn and to develop effective studying strategies (Gikandi et al., 2011). The greatest advantages of online assessment were its flexibility and accessibility (Rolim & Isaias, 2019). The flexibility around the time and place of taking an assessment enhanced students' learning experience. Technology also helped consolidate the reliability and validity of online assessment by providing timely interactive feedback (Bajzek et al., 2008). Students generally received more prompt feedback from peer assessment and computer-marked assessment than from teacher-marked ones (Ogange et al., 2018). Nevertheless, there were challenges in the design, development, and delivery of online assessment and evaluation. The quality of the feedback generated from online assessment could be a concern; reported problems included unclear feedback from tutors and vague suggestions on improvement (Higgins et al., 2001; Weaver, 2006). Choices of assessment methods should be in line with the course objectives, but assessment options might be limited by the online setting (Hickman et al., 2005). Securing and proctoring online tests was crucial (Howlett & Hewett, 2005). Teachers should pay special attention to question design when traditional testing is delivered in distance learning in order to maintain academic integrity. Students' acceptance of online assessment was another issue. Inadequate peer support could affect motivation and confidence in online assessment (Webb & Jones, 2009).
As the mode of communication and the learning paradigm shifted, assessment practices in the online environment should be adjusted to direct teaching and promote learning (Bartley, 2005). Teachers' awareness of using technology to transform the way students were assessed was rising (Sampson et al., 2014). Previous studies revealed that multiple factors apart from personal ability could affect students' performance, such as the quality of teaching, the level of the course, familiarity with the examination format, and the quality of examination items (Inuwa et al., 2012). These multiple factors affecting students' perspectives should be considered when researching students' preferences regarding assessment methods.

Study design and study population
Study population included undergraduate and postgraduate students from all eight faculties at the participating university.
The current study consisted of two parts.
The first part was a cross-sectional study which aimed at collecting quantitative data. Students were invited to fill in an online questionnaire in May 2020. An invitation was sent to all eligible students through email. The questionnaire contained five sections, namely (1) types of online assessment methods used; (2) evaluation of the overall effectiveness of online assessment from students' perspectives; (3) problems encountered using online assessments, including written assignments, presentations, tests, quizzes, and examinations; (4) students' perception of teachers' competency in the implementation of online assessment; and (5) obstacles or difficulties in online assessment. A five-level Likert scale with answers from strongly agree to strongly disagree was used. For challenges and suggestions, open-ended questions were used. The questionnaire was pre-tested on 15 students in April 2020 before large-scale distribution. Minor rephrasing of the questions was done after the pilot testing.
The second part aimed to collect qualitative data on students' opinions and suggestions on online assessment through individual interviews. Students were invited to participate in an in-depth individual online interview between May and June 2020. An invitation was sent to all eligible students through email. Data collected from the online questionnaire were used as reference when developing the questions for the individual interviews. Findings from the questionnaire and the interviews would be reported separately.

Data processing and statistical analysis
Descriptive statistics such as means, standard deviations, and percentages were used to report the results from the questionnaire. The chi-squared test was used to determine any difference among faculties in students' perception of difficulties in different types of online assessments. A coding scheme was developed based on all the collected qualitative data. Two researchers did the coding separately, then discussed and agreed on the coding results. The coding reliability for the open-ended questions was almost 99%. Students' responses to the semi-structured questions, such as the types of barriers and difficulties students encountered in online assessments and the support needed, were summarized according to the main themes using a content analysis technique. Based on the latest official information reported by the University Grants Committee in 2020, 20,464 full-time students from the eight faculties of the participating university were eligible for participation in this research. The sample size required to achieve a 95% confidence level (level of significance of 0.05) with a margin of error of 5% was estimated to be 378 students. IBM SPSS Statistics (version 26) was used to analyse the questionnaire data.
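The reported minimum sample of 378 students from an eligible population of 20,464 is consistent with a standard proportion-based sample-size calculation with a finite population correction. The paper does not name the formula it used, so the sketch below (Cochran's formula with the conservative assumption p = 0.5) is an illustrative reconstruction rather than the authors' actual procedure:

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size for estimating a proportion.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    most conservative assumed proportion (maximizes the variance).
    """
    # Sample size for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction for the eligible population
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(20464))  # 378, matching the estimate reported above
```

With 752 completed questionnaires, the achieved sample comfortably exceeds this minimum.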
The individual interviews were conducted by one researcher. The interviews were audio-recorded and transcribed by student helpers. A general inductive approach was adopted in the data analysis. The transcripts were read several times to identify themes and categories. All the transcripts were first read and checked for accuracy by two researchers independently. A coding frame was developed after discussion. The transcripts were coded by one researcher and checked by another researcher. If new codes emerged, the coding frame was changed and the transcripts were reread according to the new structure. This process was used to develop categories, which were then summarized and classified into broad themes. The themes were categorised into experience, opinions, and suggestions on online assessment.

Ethics
The current study was approved by the University Survey and Behavioural Research Ethics Committee (Reference No. SBRE-19-624). All students had provided their informed consent before filling in the questionnaire or joining the interview.

Results
A total of 752 students completed the online survey. The majority (86.6%) of the respondents were undergraduate students. Respondents came from all eight faculties of the university, namely, the Faculty of Arts, the Faculty of Business, the Faculty of Education, the Faculty of Engineering, the Faculty of Law, the Faculty of Medicine, the Faculty of Science, and the Faculty of Social Science. Table 1 showed the demographic distribution of the participants in the online survey. In addition, 43 students from the eight faculties attended the individual interviews. Table 2 summarized students' engagement in online assessment during COVID-19. A total of 739 (98.3%) students had taken online assessment during the COVID-19 outbreak. The most frequently used methods included attempting an online test, quiz, or examination at a fixed time and place within a fixed period (69.0%), followed by doing an open-book online test, quiz, or examination (59.7%), giving an online presentation (54.1%), and doing group projects and collaborative writing through an online platform (50.3%). There were 526 (69.9%) students who attempted online assessments at least 5 times, and 222 (29.5%) students who attempted online assessments at least 10 times. On average, students took online assessments 6.5 times. Table 3 summarized students' opinions on online assessment. A total of 125 (16.6%) students were satisfied with their online assessment arrangements, while 348 students (46.3%) were dissatisfied.
There were 546 students (72.6%) who agreed that online assessments were more affected by computer problems and internet connection than traditional examinations; 435 (57.8%) thought it was more convenient when they were allowed to do the online assessments at their preferred location and at a desired time slot; 422 (56.1%) agreed that feedback given by the teacher was prompt and easy to understand; 418 (55.6%) agreed that teachers gave good instructions and guidance for their online assignments; and 393 (52.3%) agreed that inexperience with the new format made online assessments more stressful for students. Table 4 summarized teachers' and students' competency with online assessment. The major barrier that students faced was technical problems (52.6%). There were higher percentages of students having no difficulty with written assignments (P < 0.01) and presentations (P = 0.02) than having difficulty. For online examinations, by contrast, there were higher percentages of students having difficulties than not (P < 0.01). The majority of students agreed that their teachers could make use of an online discussion board to encourage questions and discussions (83.9%) and that their teachers would look for opportunities to provide feedback (71.5%). Around half of the students thought that their teachers were capable of assisting self and peer assessments (54.9%) and that their teachers were skilful in using synchronous technologies to communicate in real time (48.9%). Only 21.8% of students agreed that their teachers were skilful in using formative assessment, such as ungraded quizzes, to check students' learning. Table 5 summarized students' comments on online assessment and their suggestions for improvement from the individual interviews. Students expressed that they preferred taking traditional assessments for professional or practicum courses.
They thought their learning could be facilitated if teachers returned students' course work earlier and provided more timely and detailed feedback. To avoid cheating, students suggested that a lockdown browser or camera should be used for examination invigilation. The examination format could be adjusted by setting more challenging and application-oriented questions to test students' understanding of fundamental concepts and to prevent students from copying answers directly from online sources. The assessment rubric for presentations should also be revised in accordance with the virtual context. At the university level, students hoped a set of rules could be developed to standardize practice across departments. They also hoped that technical problems with online assessment could be addressed by being allowed to take online examinations on campus or by getting real-time support from the university's information technology service unit.

Discussion
COVID-19 pandemic has caused a paradigm shift in many aspects of citizens' lifestyle (Khan & Jawaid, 2020). One major change is the implementation of online teaching in primary, secondary, and tertiary institutions (Moorhouse, 2020; Ng et al., 2020). While there are debates on the pros and cons of online teaching, issues with online assessment have also attracted teachers' and students' attention (Elzainy et al., 2020). Assessment is a crucial component of teaching and learning, as it evaluates students' achievement of the course learning outcomes (Gorsky & Blau, 2009; Tartavulea et al., 2020). Online assessment tools have been available for a long time. However, they are not often adopted to conduct major assessments of students due to controversies over validity, reliability, and dishonesty (Guangul et al., 2020).

Table 4 (extract). Students' agreement with statements on teachers' competency in online assessment#

• My teacher will look for opportunities to provide feedback to the entire class through sending an online announcement or e-mail summary of discussion done in an online class or observed from student assignments to help students consolidate their learning. (71.5%)
• My teacher has used self and peer assessments to improve student learning experiences and build community. (54.9%)
• My teacher was skilful in using synchronous technologies, when appropriate, to communicate with students in real time. (46.9%)
• Teachers have difficulty in providing feedback and comments to the students when we have submitted our assignments online, as they liked to print out students' assignments and mark them on the paper directly. (23.5%)
• If the course material or contents were very complicated and technical, my teacher will use ungraded, self-check quizzes to check students' learning. (21.8%)

#Number and proportion of students who agreed with the statement.
Lee et al., Cogent Education (2022)
When comparing computer-based examinations with paper-and-pencil examinations, some studies suggested that the performance results were similar (Martinez et al., 2009). However, some students reported that their performance was adversely affected by the online environment (Dillon & Clyman, 1992). Detailed studies revealed that multiple factors could affect student performance (Inuwa et al., 2012). In the current study, fewer than 20% of students were satisfied with the online assessment arrangements. Nearly half of the students did not like online examinations and preferred the traditional paper-and-pencil examination. The major barrier in online assessment that students encountered was technical problems: 72.6% of students thought they were more affected by computer problems and internet connection than in a traditional examination. Around 40% of students found it more difficult to focus on the test when working in an online assessment environment. Over 50% of students also expressed that inexperience with the new format had made online assessments more stressful. The current study echoes literature findings that students' performance in online assessment might be affected by multiple factors and that students might find online assessment more stressful (Stowell & Bennett, 2010).
Ensuring academic honesty is one of the major challenges in online assessment (Deranek & Parnther, 2015). In a recent study conducted at a university in Jordan, nearly 45% of students admitted misconduct or dishonesty during remote online examinations, including seeking help from friends or searching for answers from all possible sources (Elsalem et al., 2021). Academic dishonesty concerns not only faculty members but also students (McGee, 2013; Spaulding, 2009). Students who attended the individual interviews pointed out that it was crucial for the university to consolidate a set of rules and regulations to standardize practice across departments. Common strategies that students proposed included changing closed-book examinations to open-book examinations, making the use of a lockdown browser and camera for examination invigilation compulsory, and adjusting the examination format by setting more challenging and practical questions. The decision to change assessment methods would require serious consideration, especially for professional programs (Sagoo et al., 2020). Nevertheless, it is clear that program-specific modifications are inevitable as the COVID-19 pandemic persists.

Table 5. Students' opinion and suggestions from individual interviews

Opinion on online assessments
• Preferred traditional types of assessment for professional courses, as they could assess students' face-to-face practicum, skills in communication with clients, and application of subject knowledge.
• Some traditional classroom assessments could hardly be replaced by online assessment, for example, producing a design portfolio which combined proper drawings, model photos, renderings, and text.
• The assessment rubric for presentations may not apply well to hand gestures, eye contact, and postures.
• Had fewer chances to exchange thoughts and receive feedback from other classmates, as they only needed to present to teachers.

From teacher's level
• Return students' course work earlier and provide more prompt and detailed feedback.
• Provide clearer grading rubrics or instructions.
• Provide a mock examination to avoid the occurrence of technical problems and to allow students to become more familiar with the online assessment arrangement.
• Randomly distribute questions to each student to avoid cheating.
• Adjust the examination format by setting more challenging and application questions.
• Increase the proportion of formative assessment or replace the final examination with an assignment.
• Replace laboratory work and field trips with virtual reality devices.
• Consider the needs of international students in different time zones.

From university's level
• Draft a set of rules for all the departments.
• Use a lockdown browser or camera for examination invigilation to prevent cheating.
• Provide real-time support by information technology service staff.
• Allow students to attend online examinations on campus, and provide equipment or a venue for students to take an online examination or do a presentation with a stable internet connection.
Changing the assessment format by increasing the proportion of formative assessment is often proposed in online assessment. Formative assessment serves as a tool to boost students' achievement and identify learning gaps (Hayes, 2015). Effective integration of formative assessment with sustained interactions among learners and teachers supports higher-order deep learning and fosters the formation of a meaningful learning community (Sorensen & Takle, 2005). Successful feedback can be identified in terms of frequency and detail; it should focus on students' performance, be timely, be appropriate to students' conception of knowledge, and meet learning objectives (McCarthy, 2017). Computer-marked assessment could provide more prompt feedback to students (Ogange et al., 2018). Interactive formative feedback was also significant and useful in helping students deal with shyness in expressing themselves (Ogange et al., 2018). In the current study, students expressed that getting feedback was essential to their learning. While most students thought that feedback given by the teacher was fast and easy to understand, they had less interaction and received less feedback from their classmates. Students' comments correlated with previous findings that the common difficulties faced in implementing online formative assessments were the lack of a supportive peer learning environment and the lack of high-quality feedback (Webb & Jones, 2009). Future development of online assessment should include strategies to engage discussion among students. In a study conducted in a Korean online university in 2019, six factors were found to have a direct impact on student engagement in the e-learning environment: psychological motivation, peer collaboration, cognitive problem solving, interactions with instructors, community support, and learning management (Lee et al., 2019).
Besides, engaged learners demonstrated good communication skills with proficient cooperative and self-directed learning by utilising online technology effectively (Dixson, 2015). Therefore, future research on how to overcome the barriers in online assessment could focus on increasing students' engagement in a collaborative learning environment by boosting their competencies in online learning and shaping high-quality learning outside the traditional classroom (Golladay et al., 2000).
The current study has several limitations. Firstly, the survey was conducted in a single tertiary educational institution, so the applicability of these findings to other universities worldwide may be limited. Secondly, this survey only covered students' perspectives on common types of online assessments; more complicated types, such as practicums, laboratory testing, or microteaching, were not covered. Thirdly, a detailed investigation of the distribution of online formative and summative assessments across faculties, and of the difficulties faced in implementing different types of online assessments, was not covered in this research. Nevertheless, the current research is a call to the different stakeholders in higher education to pay attention to the difficulties students have faced in doing both formative and summative online assessments. Through this research, different stakeholders in the university will be encouraged to reconsider the possibilities of allowing more online courses in different faculties. Also, future research should focus on ways to establish a comprehensive system of validation for university examinations to be taken online.

Conclusion
In conclusion, students' satisfaction with online assessment was low. Their major concern was related to technical problems during assessment. The current study has revealed an urgent need to explore how to develop a safe, valid, and reliable online assessment system that can meet the needs of higher education in Hong Kong. The current research project is only the beginning of the exploration of how students in different faculties responded to online assessment during the pandemic and a summary of their major problems and difficulties. Future research should focus on the accreditation process of online assessment and on how universities could provide universal, formal, third-party recognition of students' competence in the performance of specific tasks.