An analysis of two evaluative models for a university MA English for Communication course

Choosing the most appropriate evaluation model for a university course can be a challenging task. This paper builds a RUFDATA framework to enable the presentation, analysis, application and comparison of Developmental Evaluation and Utilisation-focused evaluation models as applied to a French university language course. A tailored, integrated model is then detailed, which embraces the suitable aspects of both models and uses a business digital evaluation tool to increase efficiency in the given teaching context. The conclusion highlights the need for a personalised solution to every course evaluation and provides an example on which other teachers can base their own evaluation decisions.

the course and myself. There may be an additional element of evaluation for 'accountability', as I would like to demonstrate to my superiors how beneficial the course is and how much students like it.
Positive results will certainly help me secure the course in future terms.

RUFDATA
Using online surveys or 'like sheets' and informal feedback conversations is a useful way of gathering opinions, but it is not a sufficient basis for serious evaluation or course development policy.
A RUFDATA process, on the other hand, is used to develop the evaluation from an intention into an actionable process. 'RUFDATA is an acronym for the procedural decisions that would shape evaluation activity' (Saunders, 2000, p. 15). It begins by looking at the reasons and purposes (R) for the evaluation and its uses (U), and then focuses (F) on what is to be evaluated. This is followed by considering how the data (D) will be analysed, who the audience (A) of the results will be, and finally when the evaluation will occur (T) and who will conduct it (A). Completing a RUFDATA analysis gives evaluators a valuable pre-evaluation 'reflective moment' in which to create a framework and road map for their evaluation. Saunders (2011) argues that RUFDATA 'involves a process of reflexive questioning during which key procedural dimensions of an evaluation are addressed leading to an accelerated induction to key aspects of evaluation design' (Saunders, 2011, p. 16). For Saunders, RUFDATA is a 'meta-evaluative' tool that provides the missing planning link between objectives and the actual evaluation. Forss et al. (2002) note that 'more learning takes place before and during the evaluation, than after' (Forss et al., 2002, p. 40).
Therefore, we can see RUFDATA analysis not just as an evaluation planning tool but also as a guide for conducting the process; from an evaluator development viewpoint, it is fair to assume the 'learning' also applies to the people involved.

Reasons and aims
The reason for the analysis is simple: continuous improvement, to make the course fulfilling for students. I have three aims in conducting an evaluation: (1) to measure how effectively the course helps students practise and improve their English speaking skills; (2) to assess how much students enjoy the course; (3) to evaluate how well the course prepares students to pass the test.

Uses
I hope to improve the course so as to provide better lessons and tasks that enable students to speak and help them raise their English speaking skills and level in an enjoyable environment. I also wish to adapt or rebuild the test so that there is better cohesion between the taught lessons and the assessment at the end of term. The two need to work seamlessly so that the course not only helps students develop but also supplies them with the tools to pass the test.

Foci
The evaluation aims firstly to uncover each student's opinion of how effective the course is at providing them with opportunities to speak English and how far they feel their speaking skills have improved as a result.
Secondly, it will look at their opinions of how much they have (or have not) enjoyed the course, and lastly at the test results, which will be analysed to establish how many students passed and what the overall scores were.
I hope to understand what the students feel about the course and whether they believe it functions in relation to the test, i.e. whether the former prepares them for the latter and whether the latter is an accurate representation of the course content.

Data
The data for the first two objectives will be collected via open and more closed questions delivered through a written or spoken survey, and possibly one-to-one interviews. The last objective requires statistical data on test scores: the percentages of students who passed and failed, and with what marks. This is obtainable from the final lesson, in which the test takes place and scores are awarded.
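To give a concrete, if simplified, picture of what the analysis for objective 3 might look like, the short sketch below computes a pass rate and a basic score summary in Python. The scores, the 20-point scale and the pass mark are all hypothetical placeholders for whatever grading scheme the final test actually uses.

    from statistics import mean, median

    # Hypothetical end-of-term test scores out of 20; the real figures would
    # come from the final lesson in which the test is taken and marked.
    scores = [14, 9, 17, 12, 8, 15, 11, 16, 10, 13]
    PASS_MARK = 10  # assumed pass threshold, not the course's actual one

    passed = [s for s in scores if s >= PASS_MARK]
    pass_rate = len(passed) / len(scores) * 100

    print(f"Pass rate: {pass_rate:.0f}%")              # share of students at or above the pass mark
    print(f"Mean score: {mean(scores):.1f} / 20")      # overall average
    print(f"Median score: {median(scores):.1f} / 20")  # middle of the score distribution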

Audience
The results of the evaluation are mainly intended for myself, as I design and teach the course. They will also be made available to my boss, to any students or colleagues who are involved or interested, and to the administration department.

Timescale
The course has only ten lessons and the final one usually includes the test, so I generally use session nine to revise and prepare students for the exam. As a result, the first two objectives could be assessed anywhere from lessons one or two through to lesson nine, or after the test. Objective 3 can only be measured after lesson ten. I will aim to begin the evaluation process at the start of the course.

Agency
The evaluation process will be conducted by myself and will include any other relevant teachers or department members who are suitable and available. The exact make-up of a possible evaluation group depends on the type of evaluation chosen at the end of this assignment and on the availability of colleagues.
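One practical way to keep the six RUFDATA decisions in view while the evaluation runs is to hold them in a single structured record. The sketch below is illustrative only: the RufdataPlan class, its field names and the condensed field values are my own summary of the sections above, not part of Saunders' framework.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RufdataPlan:
        """Illustrative container for the six RUFDATA decisions (Saunders, 2000)."""
        reasons_and_purposes: str
        uses: str
        foci: List[str]
        data_and_evidence: List[str]
        audience: List[str]
        timing: str
        agency: str

    # A condensed version of the plan set out in the sections above.
    course_evaluation = RufdataPlan(
        reasons_and_purposes="Continuous improvement of the speaking course",
        uses="Improve lessons and tasks; align the course with the end-of-term test",
        foci=[
            "Effectiveness for practising and improving speaking",
            "Student enjoyment of the course",
            "Test results and pass rates",
        ],
        data_and_evidence=["Written or spoken surveys", "One-to-one interviews", "Test scores"],
        audience=["Course designer/teacher", "Line manager", "Students", "Administration"],
        timing="Lessons one to nine for objectives 1 and 2; after lesson ten for objective 3",
        agency="Course teacher plus available colleagues",
    )

    print(course_evaluation.foci)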

METHODOLOGY
The epistemological position adopted in this paper is grounded in a general constructivist perspective, which argues that we build our understanding of reality internally through what Piaget (1977) terms the 'construction of meaning'. The process is ongoing, as we learn and change our understanding through interaction in the social world. Vygotsky (1978) refers to it as 'a developmental process deeply rooted in the links between individual and social history' (Vygotsky, 1978, p. 16). By embracing the influence of the external world and of other people, we can adopt the term 'social constructivism' as used by Lodico et al. (2010), which reflects a relativist ontology in which reality is human experience and human experience is reality (Levers, 2013). This paper acknowledges the importance of experience but maintains the focus on the cognitive or 'brain-based' construction of meaning, as opposed to a more interpretivist perspective. To encapsulate all these theories, we can label the theoretical standpoint of this paper as Personal Social Constructivism.

DEVELOPMENTAL EVALUATION

Developmental Evaluation (DE) is not only an evaluative solution for an innovative and adapting context but also a tool to enhance such innovation. Scriven (1996) ventures further by defining DE as 'an evaluation-related exercise, not as a different kind of evaluation' (Scriven, 1996, p. 158). Therefore, we can see that DE can be much more than just an evaluative solution.
A DE on its own is not always appropriate for every context, even in extremely innovative situations, so it is often used in conjunction with other forms of evaluation. DE's process-based, responsive nature can clash with the more standard 'end result' evaluation which, in many organisations, is required for formal evaluations and for measurable, actionable results. DE cannot be clearly defined or pre-planned, and its outcomes are not predictable. Like the context it was created to evaluate, it is complex.
Thus, a DE can be challenging to legitimise at the evaluation planning stage, its return on investment can be hard to demonstrate at the end of the process, and even success indicators are difficult to establish mid-process. Gamble (2008) explains the journey of DE as 'we move methodically from assessing the situation to gathering and analysing data, formulating a solution and then implementing that solution' (Gamble, 2008, p. 18). This is a five-step process, but what is omitted is the subsequent loop between implementation at the end of one cycle and the next round of data collection or feedback that begins the following iteration. Patton (2011)

General observations
Utilisation-focused evaluation (UFE) concentrates on the use of the evaluation, which in this setting is the assessment of my course to establish its quality. According to Patton (2003), 'Evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use' (Patton, 2003, p. 223).
UFE aims to create the best evaluation possible through a carefully planned, step-by-step design approach. It has a long-term vision, with each stage clearly marked out to reach the final goal of creating the ideal evaluation for that setting. According to Ramírez and Brodhead (2013), the attention is constantly on the intended use by intended users. We can therefore consider UFE a bottom-up, 'by the people for the people' approach to evaluation design. Based on the constructivist perspective that everyone has different experiences and thus different constructions, we can deduce that every UFE-produced evaluation will be unique. The result is a product of the needs of the users but also of their knowledge, skills and teamwork. Patton (2003) advocates an 'active-reactive-adaptive' relationship between the lead evaluator and the team members/intended users, which implies the contribution of the team members but also the overarching managerial role of the leader: to lead and manage (active), to listen and respond to member contributions (reactive), and to change steps, the evaluation itself and the style of the process (adaptive). Patton further highlights the decisions members make by stating that UFE 'is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation' (Patton, 2003, p. 223). He proposes a logical five-step procedure for conducting a UFE process which is ideal for newcomers to evaluation.
1. Identify who needs the evaluation.

Objectives compatibility
UFE can enable a final evaluation to reach all my objectives, as long as the team members agree to cover them and the data collection methods are suitable. The timing of the actual evaluation is perhaps the main challenge, as there will be a great deal of influence from objective 3 on the questions created for objectives 1 and 2. This can be termed the 'post-test results influence'. For example, a student earning a good grade will reply positively to questions about 'help' (objective 1) and 'enjoyment' (objective 2). Students will be naturally biased by the experience of their results, especially if those results did not confirm their expectations. Thus, there will be significant differences of opinion between those who expected and got the grades they wanted, those who did not, and those who actually performed better than expected. Objectives 1 and 2 could alternatively be assessed before the test via surveys, and objective 3 after it, as the latter relies on test results; in this sense the design process perhaps only needs to focus on the first two objectives. A further challenge is the potential loss of primary intended users during the process, the infamous 'Achilles heel of Utilisation-focused evaluation' (Patton, 2003, pp. 232-233).
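If the surveys were nevertheless run after the test, one way to check for the 'post-test results influence' described above would be to group the objective 1 and 2 ratings by whether each student's grade met their expectations. The sketch below is a hedged illustration only; the 1-5 rating scale, the outcome labels and the sample data are assumptions rather than part of any of the models discussed.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical post-test survey records: (grade outcome vs expectation,
    # 1-5 rating for "the course helped my speaking", 1-5 rating for enjoyment).
    responses = [
        ("met expectations", 4, 5),
        ("below expectations", 2, 3),
        ("above expectations", 5, 5),
        ("met expectations", 4, 4),
        ("below expectations", 3, 2),
    ]

    groups = defaultdict(list)
    for outcome, helped, enjoyed in responses:
        groups[outcome].append((helped, enjoyed))

    # Compare average ratings across the outcome groups to gauge how strongly
    # test results are colouring the answers to objectives 1 and 2.
    for outcome, ratings in groups.items():
        helped_avg = mean(r[0] for r in ratings)
        enjoyed_avg = mean(r[1] for r in ratings)
        print(f"{outcome}: helped={helped_avg:.1f}, enjoyed={enjoyed_avg:.1f}")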

COMPARISON OF DE AND UFE
The comparison of the two models points towards a tailored, integrated evaluation cycle that embraces the suitable aspects of both. Among its twelve steps, the cycle includes:
5. Share reports with the team and analyse them.
6. Share results with the students and discuss them.
7. Make changes to the next lesson.
Repeat steps 3 to 7 until the final test.
11. Share results with the students and analyse them.
12. Conduct a meta-evaluation.
This process surveys the students every two weeks, and each cycle will provide information to help improve the subsequent questions. For instance, question types that draw few responses should perhaps be avoided, more concise wording may attract more attention, and students may prove reluctant to discuss feedback. It is important to vary the survey designs, as using the same questions five times would probably produce fewer answers because students would not see the point. The process therefore not only evaluates the students against the objectives but also inherently assesses itself, which is why I believe every cycle's results must be shared and discussed with the students: it involves them as stakeholders in the process.
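As a rough sketch of how this self-assessing cycle could be operationalised, the outline below loops through five two-week cycles, drops question types that attract too few responses, and shares each cycle's summary. It is schematic only and assumes a generic survey tool; collect_responses and share_and_discuss are invented placeholders, not Officevibe functions.

    # Schematic outline of the repeated two-week cycle (steps 3 to 7), assuming
    # five cycles before the final test. Both functions are hypothetical
    # placeholders for whatever survey tool is used (for example Officevibe).
    CYCLES = 5
    MIN_RESPONSES = 8  # assumed threshold below which a question type is dropped

    def collect_responses(questions):
        """Placeholder: return {question: number of responses} from the survey tool."""
        return {q: 10 for q in questions}

    def share_and_discuss(summary):
        """Placeholder: report to the team and discuss the results with the students."""
        print("Cycle summary:", summary)

    questions = ["open comment on speaking tasks", "1-5 enjoyment rating"]
    for cycle in range(1, CYCLES + 1):
        counts = collect_responses(questions)  # run the survey and gather the data
        share_and_discuss(counts)              # steps 5-6: analyse, report, discuss
        # Step 7: adapt the next cycle - drop question types that drew too few
        # answers and vary the wording so students still see a point in replying.
        questions = [q for q, n in counts.items() if n >= MIN_RESPONSES]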

CONCLUSION
Evaluations are both an interesting and a challenging activity for evaluators. The RUFDATA analysis in this paper demonstrates the need to create a framework for any and every possible evaluation, while the analysis and comparison of DE and UFE highlight the significant differences between just two types of evaluation. Even though Patton (2016) notes that DE is actually classified as a UFE, since it involves a focus on use by (current) users, this paper highlights significant differences which must be addressed when choosing a suitable evaluative solution. There is no 'one size fits all', which is why an integrated version tailored to the situation will successfully press more evaluative buttons than an 'off the shelf' model. In this paper, the proposed outcome is a 12-step model which will presumably evolve over time as it is applied to evaluations, as will the use of the Officevibe app as a tool for data collection and analysis. There must also be an element of evaluating the evaluation, i.e. meta-evaluation, as highlighted in this paper. Change is inherent in the evaluation context: the people involved are constantly creating understanding, and factors such as lesson length, attendance and student levels can, and in this context do, change constantly. A good evaluation seeks results, and those results are more important than how they were achieved, be that through a pure DE, a UFE, a mix of the two, or even a completely new type of evaluation.