Medical students’ understanding of cost effectiveness in feedback delivery

Introduction: Feedback is an important influence on student achievement, yet students report that it is lacking in both quantity and quality, and there is an unexplained mismatch between students' and staff's perceptions of the adequacy of the feedback offered. Despite the financial constraints on higher education providers, there is little evidence about students' understanding of the costs involved in delivering educational resources. We therefore investigated students' views on feedback, focusing our analysis on feasibility and cost effectiveness.

Methods: An online questionnaire was delivered to students in the first, third and fifth (final) year of a UK undergraduate medical programme over two academic years. Students were asked to identify the 'main problem' with feedback and any positive aspects of feedback. A thematic analysis was undertaken to analyse the data collected.

Results: A total of 690 responses were received, representing a 38.3% response rate. A number of themes were identified, with students highlighting areas for improvement related to numerous facets of feedback delivery but often focusing on the number of opportunities for feedback. Numerous suggestions were also made. There was little acknowledgement of the resources required to improve feedback delivery and no mention of the cost implications of making such improvements.

Conclusions: Students appear unaware of the practical aspects of assessment and feedback delivery and, as a result, make suggestions which are very demanding within the resource constraints of contemporary higher education. Students believe these requests are reasonable and are therefore likely to become frustrated when they are not fulfilled. Engaging students in the discourse of cost effectiveness may enable students and staff to develop a shared vision for feasible improvements to feedback.

Mahmood F, Hope D, Cameron H. MedEdPublish. https://doi.org/10.15694/mep.2019.000026.1


Introduction
Feedback is an important influence on student educational achievement, similar in effect size to direct instruction, prior cognitive ability and reciprocal teaching (Hattie 1999). Kluger and DeNisi (1996) note that while feedback typically has a moderately positive effect on performance, poorly delivered interventions can reduce student achievement: in their meta-analysis, 38% of all attempted interventions negatively impacted student performance. Ten Cate et al (2013) argue that such negative effects occur because feedback interferes with feelings of competence, which, along with autonomy and a sense of belonging or relatedness, are necessary for the maintenance of intrinsic motivation according to self-determination theory (Deci 1971). The ideal feedback cycle therefore requires more than an adept teacher who provides targeted feedback in a considered manner. Student commentary on feedback provision is critical in allowing the development of an effective feedback system (Seldin 1989), and there is increasing emphasis on students participating in curriculum development as part of a wider drive towards accountability and quality improvement in higher education (Hendry & Dean 2010). However, there is conflicting evidence about the effectiveness of student reports in improving teaching (Cohen 1980; Kember et al. 2010). We therefore need to periodically evaluate how effectively we are using student contributions, and to consider how to build a stronger partnership and shared vision so that those contributions lead to better outcomes.
Students view feedback delivery as poor and lacking in quantity (Liberman et al. 2005; Duffield & Spencer 2002). Feedback is often regarded as inconsistent (Bevan & Badge 2008), late and irrelevant (Gil et al. 1984). Doan (2013, p. 6) found that 64% of 206 students surveyed agreed that 'Students are more interested in their grade and pay little attention to feedback,' suggesting that students' views on feedback are unlikely to be grounded in the pedagogical aspects of education. Delivering feedback of the standard students expect is beyond most medical schools in all but the most exceptional cases (Gibbs & Simpson 2004).
Students are rarely made aware of the pedagogy and practical aspects of feedback, including the often contrasting definitions and purposes of feedback in the literature (Ende 1983;Hattie & Timperley 2007). Students must therefore use ad-hoc definitions (Scott 2014) that differ from those of experts. Despite this, they are asked to make sweeping judgments on feedback in medical education.
Reducing costs is a priority in medical education (Altbach et al. 2010, p. xii), despite recognition that there is already insufficient resource available for its provision (Lowry 1992). Prystowsky and Bordage (2001) highlight the importance of considering cost in the current era of 'cost containment', in which students perceive themselves as customers (Finney & Finney 2010) of the university 'business' (Knapp & Siegel 2009). Levin (2001) notes that cost effectiveness in education is understudied and suggests this is because of a lack of both supply (clinical educators are not skilled at performing cost effectiveness analyses) and demand (policymakers are uninterested in the outcomes of such analyses, relying instead on their own discretion). Although factors such as student performance and satisfaction are commonly investigated in the literature, cost is considered as an outcome measure in only approximately 2% of studies (Prystowsky & Bordage 2001; Zendejas et al. 2013). There is limited evidence that students understand or appreciate the costs involved in developing and delivering educational resources; indeed, Taplin et al (2013) found that less than half of students were willing to pay $5 to download digital versions of lectures for an entire course unit.
Students' views on feedback development are strongly at odds with those of their educators (Gil et al. 1984). They request interventions that educators view as unfeasible (Carless 2006), are unaware of the extensive pre-existing literature surrounding feedback (Scott 2014), and there is no evidence that they have a mechanism to meaningfully appraise cost when making their judgments. Added to the subjective interpretation of many factors around assessment and feedback (O'Donovan et al. 2004), it is not surprising that student proposals differ so markedly from those suggested by educators.
Students' limited view of relevant factors interferes with their ability to engage constructively with the feedback delivery process. Notably, student feedback is biased by numerous factors, including the grade received (Zabaleta 2007), the ease of the topic and the teacher's attractiveness or entertainment value (Davison & Price 2009). 'Survey fatigue' can follow repeated requests for feedback; students are more likely to engage with such surveys if they feel their contributions are making a significant difference (Porter et al. 2004). Furthermore, there is little consensus on what questions to ask of students (Aleamoni & Spencer 1973; Coffey & Gibbs 2010; Spencer & Aleamoni 1970), as well as concern regarding the validity of existing questionnaires (Kember & Leung 2008).
So although student contributions to improving feedback are important and should be prioritised (Seldin 1989), there are significant challenges. Students need to better understand feedback, both its principles and its practicalities, to make those contributions as constructive and useful as possible. In order to better understand students' perspectives, we investigated their views on feedback, focusing our analysis on feasibility and cost effectiveness, with the intention of identifying potential training opportunities for students and ultimately promoting better engagement with feedback improvement.

Methodology
We conducted a qualitative study using a phenomenology-based approach, in an attempt to understand the lived experiences of the students, identifying positive and negative views. We adopted the transcendental phenomenology approach described by Moustakas (1994), which focuses on the experiences of the research participants in an attempt not to colour them with the investigator's views. Moustakas describes key steps including:
- identifying a phenomenon
- collecting data through broad, open-ended questions
- analysing the data and reducing it into significant statements or themes
- further analysis to convey the essence of the subjects' experiences.
Ethical approval for the study was obtained from the relevant University Student Ethics Committee.

Study design
An online questionnaire was delivered through the students' virtual learning environment. The questionnaire comprised 80 questions in a mixture of formats, including a personality inventory (Goldberg 1992) and a questionnaire investigating student locus of control. The final component asked students: "Could you please summarise in your own words what you see as the main problem with feedback to students?" and "Please summarise in your own words what you think is good about the feedback you receive." This study focuses on these two items only, with the remainder of the questionnaire reviewed as part of a larger study. The questionnaire was offered to medical students in the first, third and final year of the 2010/11 and 2011/12 academic sessions of a UK MBChB. Answers to the second question, regarding positive aspects of feedback, were only available for the second academic year, as this question was added to the questionnaire in 2011/12. We thus gathered information from approximately six cohorts, with a small number of possible duplicate responses due to students repeating a year.

Analysis
Responses were imported into QSR NVivo 10 (QSR International 2012), which was used to code the data. Prior to encoding, the dataset was read through in its entirety to allow for familiarisation and development of an understanding of the context of responses.
In coding, the approach described by Boyatzis (1998) was followed: an important comment is identified and coded as such, prior to interpretation. A properly defined code captures the "qualitative richness of the phenomenon" (Boyatzis 1998, p. 31). Coding was undertaken line by line, allowing broad themes to emerge from comments in which they repeatedly featured. A further round of coding was then undertaken by reviewing the dataset.
Themes were split into three broad categories. Having identified positive and negative themes, we developed a number of core concepts.
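As a purely illustrative sketch, the tallying of theme frequencies across manually coded responses can be expressed as follows. This is not the interpretive coding performed in NVivo; the response identifiers and theme labels are hypothetical examples.

```python
from collections import Counter

# Hypothetical data: each free-text response has been manually assigned
# one or more theme codes, as happens during coding in NVivo.
coded_responses = [
    {"id": 1, "themes": ["timeliness", "interactivity"]},
    {"id": 2, "themes": ["timeliness", "suggestions for improvement"]},
    {"id": 3, "themes": ["reasons for limitations"]},
]

# Tally how often each theme features across all responses,
# so that repeatedly featuring themes can be identified.
theme_counts = Counter(t for r in coded_responses for t in r["themes"])
print(theme_counts.most_common())
```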

Results/Analysis
A response rate of 38.3% was recorded across both questions, with 453 responses received from a potential 1184 respondents. Responses to the first question averaged 62 words, whilst responses to the second question averaged 27 words.
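The reported response rate follows directly from these counts; as a minimal arithmetic check, using only the figures stated above:

```python
# Response-rate arithmetic from the counts reported above.
responses = 453    # free-text responses received
potential = 1184   # potential respondents across the sampled cohorts

rate = 100 * responses / potential
print(f"{rate:.1f}%")  # → 38.3%
```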
A total of 11 themes were identified. The themes were classified into practical and theoretical types, as illustrated below (Table 1). In responding to the question regarding the 'main problem' with feedback, students made numerous suggestions on how to improve feedback and these were coded as a separate theme to allow for review.
We chose to focus on the practical themes, since these provided insight into students' appreciation of cost effectiveness and of practical means of improving feedback delivery. Students admitted to feeling frustrated, suggesting that 'adequate' feedback could significantly improve their learning experiences. They indicated that they felt more resources were required to allow feedback delivery; their comments did not address costs in relation to this.

Timeliness
Some students identified timeliness as the primary problem, suggesting that by the time feedback was received, they had either forgotten the assignment in question or already completed the assessment for the block, rendering the feedback useless: "Feedback is not timely enough; by the time feedback arrives the exam/essay is no longer fresh so often there is a lack of motivation to go back and improve." There was no evidence that students understood the reasons for the timing of feedback, or the staff resource required to deliver immediate feedback.

Interactivity
Although this theme did not feature as heavily as others, it was an issue raised by a number of students in all years. Students described being unable to 'go through problems' or 'understand why the marker has found the work poor or excellent.' One described the occasional opportunity to interact with a teacher on feedback as 'invaluable.' Others described being unsure as to whom to consult when seeking feedback, as in this comment: "Feedback is woefully underprovided and there is rarely indication of where to go for additional feedback, I feel there are barriers to asking directly for feedback as most of the time I never even know who has marked my work!" The students' comments did not refer to the resources and logistics required to put such a scheme into operation with hundreds of tutors spread geographically and approximately 250 students in each cohort.

Reasons for limitations
This theme was explored to ascertain what students perceived to be the limitations in delivering high quality feedback, allowing a contrast with educationalists' perceptions. One theory was that the ratio of students to staff was too high to allow tutors to familiarise themselves with students. Others suggested that the large number of assignments left tutors with insufficient time to provide quality feedback. One particularly interesting comment highlighted that tutors were not incentivised to provide good feedback. In a similar vein, students called for accountability from tutors providing feedback, critiquing the lack of standardisation in the marking of their work. This issue was further compounded by the difficulty of identifying who had provided the feedback.
Some responses showed insight into difficulties surrounding feedback provision, acknowledging that it was important not to provide answers to part of a year group when another group had yet to sit an assessment. Others commented that there were insufficient assignments to obtain feedback.

Suggestions for improvement
Most student suggestions were quantitative rather than qualitative in nature, suggesting that it is the quantity of feedback provision which students feel is lacking. A large proportion of these suggestions related to universal access to feedback through having a tutor available to discuss one's performance. One comment suggested regular timetabled sessions for individual feedback, perhaps with a director of studies.
Some students suggested that large group sizes caused problems and that reducing the number of students per group to 2-3 would be beneficial. Another student opined: "The reasons given for not having better feedback are either because of "technical difficulties in doing so" or "time/staff issues" which in my opinion could very easily be overcome." Others indicated that they felt staff should be held accountable for providing quality feedback, perhaps through a feedback proforma or minimum feedback requirements. An alternative suggestion was to double mark all assignments. Students also requested feedback on both weaknesses and strengths, and one student wished to have open access to all data regarding their performance. No comments mentioned the cost implications of enacting such requests.
The students' views were well summarised by one respondent, who stated: "Good feedback is given soon enough after the assessment as to be relevant and inform practice while memories are fresh; is preferably given face-to-face; is comprehensive (e.g. a paragraph or two of written points); and discusses a number of strengths and weaknesses, and how to improve for next time."

Discussion
The responses from students demonstrate limited engagement with the practical aspects of assessment and feedback delivery at medical school, and limited understanding of the context in which feedback is delivered. Some comments suggest that students feel problems related to feedback are 'easily surmountable.' It is recognised (Altbach et al. 2010) that higher education institutions worldwide are subject to financial pressures, leading to larger class sizes and the employment of part-time faculty instead of full-time academic staff. As the cost burden of higher education shifts from governments to students, many universities are being forced to diversify their revenue streams to ensure their financial survival (Johnstone 2004). Yet most student suggestions require substantial additional expenditure, would be impossible with typical staffing levels or would require individual members of staff to develop expertise across multiple domains. There is little evidence that students are aware of the logistical difficulties of their suggestions within current budget limitations. They believe their requests are reasonable and are inevitably frustrated when they are not fulfilled. If medical schools do not raise universal resource and cost issues with students, such as the significant non-modifiable costs surrounding assessment processes, it is not surprising that students' lack of understanding leads to frustration.
Students displayed perceptive insight into reasons why feedback provision might be difficult: a few commented that the ratio of students to staff was too high, limiting tutors' ability to familiarise themselves with individual students. However, there were no comments on why it might be challenging to improve this ratio, especially in terms of cost.
Some suggestions, such as providing a proforma for feedback or tutor guidance, are normal practice at the institution. Given that students have no knowledge of the job plans or commitments of their tutors, they have limited understanding of the competing demands on their time beyond teaching (Cotten & Wilson 2006) -whether these be in terms of clinical care or administrative work. Therefore, teaching staff may view students' requests for more teaching time as unreasonable given the bulk of their contracted time is dedicated towards non-teaching activities.
The increase in student to staff ratios (Parliamentary Select Committee on Education and Employment 2001), coupled with the use of part-time teaching staff (Altbach et al. 2010), has also made it difficult for tutors to familiarise themselves with their students. As such, tutors see little to no return on their investment in feedback: they are unlikely to encounter these students once their attachments are complete and cannot develop meaningful relationships with them, leaving students feeling disenfranchised (Watson 1999). One particularly insightful student comment noted that the quality of feedback received improved significantly when student and tutor had a shared goal, such as a journal publication. Giving tutors information on the return on their investment in feedback, through anonymised student results or comparison of tutor performance across taught units, or allowing students longer attachments with individual tutors to build familiarity, may encourage tutors to deliver more meaningful and effective feedback. There is evidence of such efforts in the form of the nomination of individual tutors for excellence awards (Thompson & Zaitseva 2012), but such awards reward a limited number of tutors and rely on motivated students to nominate them.

O'Donovan (2004) suggests that for students to develop a meaningful understanding of standards and criteria, they need to engage with or use those standards. Previous work (Gil et al. 1984; Duffield & Spencer 2002; Bevan & Badge 2008) has identified that students often find feedback lacking in quantity, in contrast to the opinions of staff members. We have expanded on these findings by asking students for constructive criticism with a view to improving feedback delivery. In the process, we have identified gaps in students' knowledge of the contemporary context of higher and medical education, which ultimately limit their ability to provide workable solutions.

Strengths and limitations
A number of factors add to the credibility and reliability of the study, including the collection of data from three separate year groups over consecutive years, resulting in a large dataset. Triangulation of responses across different year groups also adds to reliability. A large number of responses were obtained, although the overall response rate for these items was slightly below 40% of the population sampled. It should be noted that the data collected here take the form of free-text responses, which require more time and thought than Likert-scale answers, and this may have lowered the response rate.
There are some limitations to this work. It was a single-centre study based on a single programme, within a specific context with respect to fees and the funding of higher education, and these factors may limit transferability. However, our focus on evaluating students' perspectives on feedback delivery is broadly transferable across higher education, and students have been found to make similar requests in other areas of research (Doan 2013). The data were not linked to candidate performance, which would have allowed a more detailed analysis. As participation was voluntary, response bias may also have had an influence. We have attempted to overcome our own biases arising from our experiences as both students and teachers. Nevertheless, we have undoubtedly presented our own perspectives, and it is quite possible that these differ from what the students intended or from what others might find. The transferability of the findings can also be debated: some of the problems and potential solutions described will be familiar to medical schools internationally; others may not be equally applicable.

Conclusion
Students are dissatisfied with many aspects of feedback and make suggestions for change. However, they fail to demonstrate awareness of costs which in turn limits their engagement with cost effectiveness as a key criterion for good feedback. If students lack this understanding, developments in feedback may ignore local student commentary and suggestions, or be poorly implemented due to resource limitations.
Medical schools should evaluate the cost effectiveness of their approaches to feedback, identify the local determinants of cost-effective feedback and train tutors in the most effective ways.
Future work should explore the ways in which tutors are incentivised to invest in their students including offering high quality individual feedback and the impact such investment may have on students' perception of feedback as well as their academic performance.
Staff need to help students develop their assessment literacy and in particular to engage them in the discourse of cost effectiveness in approaches to feedback in order to develop students as informed partners in their own education, and to create a shared vision of affordable, effective, and enjoyable education.

Take Home Messages
1. There is little evidence that students consider cost effectiveness when critiquing feedback.
2. Students' suggestions to improve feedback are sometimes logistically difficult.
3. Medical schools should develop means of measuring the cost effectiveness of their educational interventions.
4. Medical schools should engage students in discussions around cost effectiveness.
5. Future research should explore incentivising tutors to invest in their students.

Mr Fahd Mahmood is a Specialist Registrar in Trauma & Orthopaedics and holds an MSc in Clinical Education.
Dr David Hope is a psychometrician whose work includes investigating the academic and personal correlates of feedback satisfaction.
Professor Helen Cameron is Dean of Medical Education at Aston University. She is particularly interested in assessment that encourages and supports effective learning.