Senior medical students' perception and acceptance of the Objective Structured Clinical Examination as an assessment tool at King Abdulaziz University

Introduction: The Faculty of Medicine at King Abdulaziz University (KAU) introduced the Objective Structured Clinical Examination (OSCE) as a tool for assessing medical students in the last decade. In our study, we aimed to assess the perception and acceptance of the OSCE among students and to interpret their rating of the OSCE in relation to other assessment methods. Methodology: A cross-sectional survey using a validated electronic questionnaire was distributed through several channels, including Short Message Service (SMS), a social media website, and posters. The questionnaire contained various domains on students' perception of OSCE validity and reliability, and on rating the OSCE in relation to other assessment methods. Results: Among the 246 students who responded to the survey, 52% denied that the OSCE provided an opportunity to learn real-life scenarios. Interestingly, more than 80% of students expressed concern about inter-evaluator and inter-patient variability as bias factors that could affect their scores. As 77% of students reported, passing or failing the OSCE was not considered a true measure of clinical skills. Conclusion: Although the OSCE is supposed to be standardized and fair to students, our survey raised concerns regarding the conduct of the OSCE, especially regarding inter-evaluator and inter-patient variability.


Introduction
The assessment of clinical competence is fundamental to ensuring that graduate practitioners are able to carry out their duties in patient care (Awaisu et al. 2007).
Since Harden et al. introduced the objective structured clinical examination (OSCE) in the 1970s as a means of assessing clinical competency by direct observation, it has been used increasingly for both under- and postgraduate students (Chisnall et al. 2015). As the OSCE has developed over the years, there has been debate about its applicability as an assessment tool, especially in terms of its validity and reliability (Rushforth 2007).
In the last decade, the Faculty of Medicine at King Abdulaziz University in Saudi Arabia introduced the OSCE as an assessment tool for undergraduate medical students.
In 2008, a study in our faculty concluded that the OSCE can be used for the evaluation of clinical skills, like the Traditional Oral Clinical Examination (TOCE), but with better objectivity and reliability (Bakhsh et al. 2009).
In this paper, we assessed the perception and acceptance of the OSCE and explored its strengths and weaknesses among senior medical students. We also aimed to interpret students' rating of the OSCE in relation to other assessment methods, such as Multiple Choice Questions (MCQs) and clerkship.

Methodology
The OSCE has become one of the main tools of assessment in all clinical rotations at King Abdulaziz University. The clinical rotation chosen for this study comprised a circuit of ten stations divided over two days, each station lasting five minutes. It involved completing a number of tasks, including focused history taking, physical examination, counseling, and interpretation of laboratory data and radiographs.
A cross-sectional survey was conducted using a validated 46-item questionnaire with various domains, modified from Pierre et al. (2004). The questionnaire collected demographic data of the respondents and included questions evaluating the nature, organization, content, and structure of the examination, in addition to the perception of OSCE validity and reliability. A five-point Likert-type scale indicating degree of agreement was used to assess most of the dimensions in the questionnaire, and a three-point scale was used for rating the OSCE in relation to other assessment methods. Open-ended questions were also included to generate qualitative data.
In contrast to many studies that assessed students' feedback immediately after the exam, and to avoid confounding factors (e.g., stress), we began data collection later, at a dispassionate time, on a voluntary basis. Students were assured that the information they provided would remain confidential. The questionnaire was ethically approved by the biomedical ethics unit and the medical education department at KAU.
A self-administered electronic questionnaire was chosen to conduct the survey. The survey was structured and collected using Google™ Forms. Three different links were created with the Google™ URL Shortener service, each directing students to our study survey. Each link was distributed through a different method: Short Message Service (SMS), a social media website (Facebook™), or posters. For SMS, we sent a message containing the survey link to all of our target students; a database of students' mobile numbers was obtained from group leaders, and a reminder message was sent a week later. The link created for social media was distributed through a private senior students' group on Facebook. Fifteen posters were placed in different locations in our college, each carrying a barcode that led to the survey. The data were analyzed with IBM SPSS V22.0, and qualitative data were analyzed manually by thematic content analysis.

Results
A clinical rotation was chosen to survey OSCE perception and acceptance. The faculty of medicine divided senior medical students into three batches across the year. Survey questionnaires were sent to all batches after they completed the rotation.
A link to the questionnaire was distributed to students via SMS, a social media website (Facebook™), and posters. Students could choose which method to use to complete the survey, and we identified which method each student used to access the link. We found that 51.75% of students accessed the questionnaire via SMS and 46.8% via the Facebook website, while only 1% accessed it through the posters' barcode. For more details, see table 1. Two hundred forty-six out of 355 senior medical students responded to the questionnaire (a response rate of 69.2%). Of the respondents, 131 (53.3%) were males and 115 (46.7%) were females. The numbers of respondents from the three batches were 81 (32.9%), 73 (29.7%), and 92 (37.4%), respectively.
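The reported proportions can be checked with simple arithmetic; the sketch below (illustrative only, with counts taken directly from the figures above) reproduces the response rate and batch percentages:

```python
# Sketch verifying the survey's summary percentages.
# Counts come directly from the reported results.
respondents = 246
invited = 355

# Overall response rate: 246/355 rounds to 69.3%
# (the text reports 69.2%, i.e., truncated rather than rounded).
response_rate = round(100 * respondents / invited, 1)

# Respondents per batch, as a share of all respondents.
batches = [81, 73, 92]
batch_pct = [round(100 * n / respondents, 1) for n in batches]

print(response_rate)  # 69.3
print(batch_pct)      # [32.9, 29.7, 37.4]
```

The batch percentages match the reported values exactly, confirming that they were computed against the 246 respondents rather than the 355 invited students.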

Table 1: Methods of questionnaire distribution.
The majority of students, 212 (86.2%), found the OSCE a stressful method of assessment. In addition, 165 (67.1%) denied that the OSCE was less stressful than other methods of assessment. Furthermore, 166 (67.5%) students reported that the OSCE was an intimidating method of assessment.
Fifty-one students (20.8%) disagreed that the exam covered a wide range of knowledge. However, when we stratified answers according to students' GPA, we found that 35% of students whose grades were A+ and A (28 out of 80) were fully aware of the level of information needed for the exam. In contrast, 39% of students whose grades were below A (65 out of 166) were not fully aware of the level of information needed. Other pertinent information on students' perception is shown in table 2. One hundred twenty-two students (49.6%) denied that the instructions were clear and unambiguous. Additionally, seven students commented in the open-ended questions that the instructions should be formulated to be short and clear.
Similarly, students were asked about the time allocated to each station: 138 (55.8%) students disagreed that the time was adequate, 64 (25.9%) agreed that it was adequate, and the rest gave neutral responses. Moreover, time per station was a major concern among respondents, with 35 comments asking for more time to complete the tasks.
Furthermore, 69 (28.1%) students agreed that the OSCE provided an opportunity to learn real-life scenarios, but 130 (52.8%) disputed this. Nine comments highlighted that the OSCE was a useful method for simulating real-life scenarios, and eight students noted its value for working under stressful conditions. More detailed information on the quality of performance testing is given in table 3.
Regarding pre-exam arrangements specific to our college, most of the students, 217 (88.2%), believed that the long waiting time before the exam, which may extend up to five hours in a closed room, may affect their attention and concentration. There were also 10 comments in the open-ended questions identifying the long pre-exam waiting time as a weakness that needs improvement. A large number of examinees on the same day may also affect performance, as 200 (81.3%) students reported. Lastly, students were asked whether a formative OSCE might improve their performance in the summative OSCE; 174 (70.7%) students found it a useful method to improve their skills.

Discussion
In our study, this clinical rotation was chosen because it involved several types of assessment (clerkship, MCQs, and OSCE) compared with the other rotations that senior medical students had completed.
Our questionnaire was electronic in order to reach our target sample more easily and quickly (Hunter 2012). Beyond the main scope of our study, we also aimed to find out which survey distribution method was most effective among medical students. To make the survey accessible to all of our target students through the different methods (SMS, social media website, and posters), we created a separate link for each method, which showed us which distribution method was most used by the students. We found that about half of the students accessed the questionnaire via SMS and the other half via the Facebook website, while only 1% accessed it through the posters' barcode. These results reflect the current trend toward social media and technology, and should be considered in future studies.
Many studies have found that students perceive the OSCE as a stressful method of assessment (Allen et al. 1998). The majority of students (86.2%) in our study showed the same perception. Compared with other methods of assessment, including MCQs and clerkship, the OSCE was rated the most stressful.
Regarding the students' assessment of the quality of their performance on the OSCE, we noticed that a large number of students (53%) denied that the OSCE provided an opportunity to learn real-life scenarios. This contrasts with the study by Pierre et al. (2004), which considered the OSCE a helpful and practical tool for simulating real-life scenarios. In response to the open-ended questions, some students suggested using real patients rather than simulated cases.
Although a few reports in the literature have described the OSCE as a fair method of assessment (Duffield and Spencer 2002; Pierre et al. 2004), 55% of students felt that the OSCE was an unfair tool. Even in relation to other methods of assessment, the OSCE was rated the least fair. In our opinion, a few bias factors could have altered the perceived fairness of the exam among students. These factors include inter-patient variability and inter-evaluator variability.
The majority of students (81.3%) agreed that inter-patient variability may be a source of bias and could affect scores. Similarly, inter-evaluator variability could introduce inconsistency in scoring. A study by Austin et al reported the same concern from students, namely that such variability might adversely affect their academic standing.
The personality, ethnicity, and gender of examiners represented additional factors, as reported by 69% of the students; a study by Awaisu et al. (2007) reached the same conclusion. The same factors on the examinees' side might also affect students' scores. Regarding gender differences, we found that male students were slightly more concerned about these factors than female students.
Regarding timing, the time allocated per station remains a point that needs better standardization. Many students (55.8%) felt that stations needed more time to complete the task. Some students commented that the instructions should be short and clear in order to leave more time for the task itself (Stowe and Gardner 2005).
Regarding the validity and reliability of the OSCE, we found that 77.7% of students did not consider passing or failing the exam a true measure of clinical skills. This contrasts with a study conducted at the International Islamic University Malaysia (IIUM), in which most students considered the exam a measure of their skills.
About one third of students thought that the OSCE did not assess the skills they had learnt. A formative OSCE may be one tool to help them become familiar with the general environment of the exam and improve their skills, as 70.7% of students acknowledged.
The bulk of respondents (81.3%) thought that a large number of examinees on the same day might affect their attention and concentration. In addition, 88.2% of students thought that waiting too long prior to starting the exam would affect their performance.

Conclusion
Although the OSCE is supposed to be standardized and fair to students, our survey raised concerns regarding the conduct of OSCE stations, especially regarding time allocation per station and pre-exam waiting time. Other concerns were inter-evaluator and inter-patient variability, which may affect students' scores.

Take Home Messages
The OSCE is an effective and widely used method of assessment around the world, but in some settings, according to students, it does not live up to expectations. The OSCE, and the way it is delivered at KAU, needs further evaluation and assessment to achieve the purpose of this method. Although this survey assesses students' perceptions, those perceptions should be taken into consideration, especially on the most significant parameters.

Notes On Contributors
Khalid Alghamdi, medical student, Faculty of Medicine, King Abdulaziz University.
Bassel Katib, medical student, Faculty of Medicine, King Abdulaziz University.
Abdulaziz AlHoqail, medical student, Faculty of Medicine, King Abdulaziz University.
Talal Al-khatib, MD, FRCSC, Chairman of the Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, King Abdulaziz University.