Journal of New Approaches in Educational Research


Daniel Martos-Garcia, Oidui Usabiaga, Alexandra Valencia-Peris

Abstract

The aim of this study was to evaluate differences in physical education students’ perceptions of an educational innovation based on formative and peer assessment through the blogosphere. The sample was made up of 253 students from two Spanish universities. Data were collected using a self-reported questionnaire, and t tests were employed to find differences among student groups. Results show significant differences in almost all of the items on which the students were questioned. Basque students were more satisfied with the assessment tool than the Valencian students. Students found the blogosphere more active, meaningful, functional and motivating than other, traditional evaluation methods, and felt that it fostered collaborative learning. They also voiced disapproval of the demands regarding attendance and continuity and of the greater effort required. On future occasions, negotiation of the assessment criteria with the students should be implemented right at the start of the course.


Keywords
FORMATIVE ASSESSMENT; PEER ASSESSMENT; BLOG; E-LEARNING; PHYSICAL EDUCATION

INTRODUCTION

Higher education is going through a period of deep change as it responds to the new needs of society. At this crossroads, the implementation of the European Higher Education Area (EHEA) has helped to shift the focus of attention from teaching to learning (Gijón & Crisol, 2012) and to the students’ own experiences (López-Pastor, Pintor, Muros, & Webb, 2013). For these and other reasons, education is experiencing a period of renewal that affects all of its defining elements.

Changes in assessment

One of the areas undergoing in-depth re-conceptualisation is assessment, which is considered a determining factor in education, and not without reason, as it conditions the learning process (Boud & Associates, 2010). Thus, in recent years there has been a clear increase in the involvement of students in their own assessment (Brew, Riley, & Walta, 2009; Falchikov & Goldfinch, 2000; Tan, 2008; Van der Berg, Admiraal, & Pilot, 2006b), in the interest of democratising the learning process and in coherence with constructivist conceptions of learning. As Lorente-Catalán and Kirk (2012) argue, the adoption of alternative assessment models is not limited to the adoption of new techniques; it also implies a repositioning of power relations, since the authority traditionally vested in teachers is called into question. Thus, although they go by different names (alternative, democratic or authentic assessment, assessment for learning, collaborative, participative or shared assessment), there are now many and varied experiences of formative assessment on the educational agenda (López-Pastor et al., 2013), as such assessment is one of the requirements for coherent, quality assessment within the EHEA (Bretones, 2008). Formative assessment is assessment that takes place during the learning process and provides feedback on practice itself (Santos Guerra, 1993), enabling students to become aware of their strong and weak points (Brew et al., 2009). Collaborative assessment, for its part, emphasises self-assessment and peer assessment and responds to students’ need to judge their own practice and to increase their independence and self-management (Boud & Associates, 2010). For our particular case, adopting the concepts proposed by López-Pastor, Castejón, Sicilia-Camacho, Navarro-Adelantado, and Webb (2011), we define “peer” assessment as assessment carried out between two people in the same situation, for example two students, distinguishing it from co-assessment, which is carried out jointly by a teacher and a student.
The adoption of these assessment methods, however, should not merely respond to formal, standard changes; it should rest on the guarantees provided by research results, which in fact already show clear improvements in learning. Studies by López-Pastor et al. (2013), Gutiérrez-García, Pérez-Pueyo, and Pérez-Gutiérrez (2013) and Lorente-Catalán and Kirk (2014) cite numerous publications in which this type of proposal is associated with higher motivation to learn, a deeper level of understanding, greater confidence, an improved emotional response arising from the new social relationships that are formed, and even improved academic performance and marks. In degrees related to teacher training, developing assessment skills in situ seems even more necessary, since graduates will soon be required to evaluate their own future students fairly (Brew et al., 2009). On the other hand, the application of various formative and collaborative assessment proposals has produced some negative results that should be borne in mind: the feeling of a greater workload (López-Pastor et al., 2013), the differences between what students and teachers perceive with regard to assessment (Gutiérrez-García et al., 2013), the traditional resistance of teaching staff to changes that question their position of authority (Lorente-Catalán & Kirk, 2012), opinions against student participation in assessment (Davies, 2010), and growing student numbers per group. Consequently, such practices should not be taken as positive per se (Lorente-Catalán & Kirk, 2014); they require continuous critical evaluation in their own right. It may therefore be necessary, as stated above, to go beyond the technical and isolated use of such practices and move towards more coherent pedagogical proposals aligned with existing ones (Hay & Penney, 2009).

Information and Communication Technologies

Among other changes in recent decades are the development of Information and Communication Technologies (ICT) and their widespread introduction into education systems the world over. Europe is no exception, and in 1998 the Sorbonne Declaration already encouraged recognition of this fact. We should not, therefore, be surprised that the traditional teaching space, the classroom, has been extended into other places, public and private, work-related or informal (Coutinho, 2007). This illustrates how a virtual environment can become an excellent teaching space (Harasim, Hiltz, Teles, & Turoff, 2000). It is not without reason that the Internet is an interactive means of communication (Castells, 2009) that can provide a more flexible and accessible learning experience (Cebreiro & Fernández, 2003). Although the use of ICTs in higher education did not begin with the advent of the EHEA, and the introduction of technology in education was at first not well regarded, the appearance of the Internet in the mid-90s made interest in it grow within the institution (Rubia & Guitert, 2014). This phenomenon led to the development of new educational experiences (Collis & Moonen, 2011), encouraged by the new framework.
As part of the development of ICT, the blog has become one of the most popular tools (Namwar & Rastgoo, 2008), as witnessed by its more than 112 million users (as per 2008 figures) (Castells, 2009). The key may lie in the interactive possibilities it offers (Williams & Jacobs, 2004), which are also present in its educational version, the “edublog”. Many benefits of its use have been cited, such as its versatility (Williams & Jacobs, 2004), its suitability for working together even when students are physically separated (Namwar & Rastgoo, 2008) and the opportunity it provides for communication with others (Coutinho, 2007). The implementation of such technology, however, should take place within an adequate teaching framework (Lombillo, López, & Zumeta, 2012; O’Donnell, 2006), taking into account interactivity, competency development and flexibility when the corresponding educational tools are designed and improved (González & García, 2011). As with all ICTs, blogs are not infallible and, like all innovations, they need to be accompanied by other educational changes (Salinas, 2004); edublogs can therefore only be seen as a first step towards resolving certain educational problems (González & García, 2011). Alongside the advantages, there are difficulties to surmount, such as the effort required to design and maintain them or the excessive amount of information they contain, which tends to complicate their general adoption.

Assessment, blogs and physical education

Taking the above into account, and following Brown’s (2015) arguments, collaborative assessment in virtual settings may be one of the areas with the greatest potential for growth in the use of technology to support assessment. This idea becomes even more important when the use of new technologies in education allows more dynamic teaching and participation methods that improve the quality of university education (Laurillard, 2002). This implies a great transformation of the different teaching methods used, allowing the design of new learning settings that can complement the traditional ones (Salinas, 2004). We are currently seeing great advances in collaborative learning processes in virtual environments, given how easy it is for students and teachers to work together and to generate a learning process that is at once active, autonomous and thought-provoking (Salinas & Viticcioli, 2008).
Even though the use of blogs in higher education has grown (Molina, Valenciano, & Valencia-Peris, 2015), leading to considerable research and publication on the subject (Molina, Antolín, Pérez-Samaniego, Devís-Devís, & Villamón, 2013), the physical education area seems to be lagging somewhat behind (Gómez-Gonzalvo, Devís-Devís, Pérez-Samaniego, & Atienza, 2012); it has recently been shown, however, that the educational use of blogs in the Physical Activity and Sport Sciences, or in physical education for primary education, is possible (Usabiaga, Martos-García, & Valencia-Peris, 2014). Fortunately, the situation with regard to assessment is somewhat better, even though its development in physical education (PE hereafter) is noticeably lower than in other areas of education (Hay, 2006). To this effect, the National Network for Formative and Shared Assessment in Higher Education was created in 2005; among its aims, it plans to analyse and improve assessment processes, moving away from those traditionally used, to break away from professional isolation, to learn from other experiences and to collaborate in assessment innovations (Buscà, Pintor, Martínez, & Peire, 2010; López-Pastor et al., 2011).
Understanding how students experience collaborative assessment in virtual environments is, as McConnell (2002) points out, not just about assessing the students but also about helping to develop and improve the processes themselves. We should also ask ourselves whether their perceptions are in line with our own (Gutiérrez-García, Pérez-Pueyo, Pérez-Gutiérrez, & Palacios-Picos, 2011). The purpose of this paper is therefore twofold: first, to determine the students’ perception of the assessment tool employed (the blogosphere) in terms of advantages and disadvantages; second, to examine whether perceptions differ depending on the university of origin. It is also important to find out whether the tool enjoys the same level of acceptance among students of both universities.

METHODOLOGY

Sample

The study sample was taken from two Spanish state universities (Table 1). The sample was a non-probability, incidental (convenience) sample, given that we approached students enrolled in the groups taught by the research lecturers. It was made up of the students enrolled in the four participating groups, a total of 253 students, of whom 61.5% of the 2013/14 class and 53.5% of the 2014/15 class were male. This represents a 79.8% response rate among the 317 students invited to participate in the study. The students from Valencia University (M=23.5; SD=3.6) were older than those from the Basque Country University (M=19.2; SD=3.2).

Table 1. Distribution of the study sample

University                            Course 2013/2014   Course 2014/2015   Total
                                      n (%)              n (%)              n (%)
A. University of the Basque Country   72 (28.5%)         44 (17.4%)         116 (45.85%)
B. University of Valencia             63 (24.9%)         74 (29.2%)         137 (54.15%)
Total                                 135 (53.4%)        118 (46.6%)        253 (100%)

Description of the assessment experience

The innovation that led to this research arose from the collaboration between the two teachers in charge of two subjects (Table 2) taught at the two universities, and from their participation both in the National Network for Formative and Shared Assessment in Higher Education and in a project carried out by the Theory and Teaching of Physical Education and Sport Teaching Innovation Group.

Table 2. Participating students’ profile

University A: First degree in Physical Activity and Sport; subject: Fundamentals of Basque Pilota and Tennis; 1st course; academic years 2013-2014 and 2014-2015; 1 group per academic year.
University B: First Degree in Primary Education (Physical Education specialisation); subject: Teaching of Games and Sporting Activities; 4th course; academic years 2013-2014 and 2014-2015; 2 groups per academic year.

Composition of the groups and choice of topic

Students were divided into groups of four, to whom the basic characteristics of the study were presented (goals, assessment criteria, steps to follow and schedule). They were also offered a range of topics in the fields of Basque pilota and Valencian pilota respectively (two traditional sports), different for each course, from which each group had to choose one for its blog. Furthermore, the teacher designed a central blog, common to the subjects and different for each course. This blog was to contain the basic information for each subject and was to become the hub of the subsequent blogosphere.

Design and creation of the blog

In the next stage, each group designed its own blog and started working on the chosen topic, guided partly by common basic standards applicable to all the subjects and partly by the assessment criteria for each block of subjects. The blog content had to include audio-visual material such as pictures, videos or other ICT resources. The students themselves administered the different blogs.

Linking the group blogs in the Blogosphere

Once the blogs were designed and the topics decided upon, each group had to send its blog URL to the subject teacher who, using a gadget, configured a list of blogs for each university. All the blogs were then linked to the central blog, so that every student had access to the blogs of all the other groups.

Inter-university blog collaboration: peer assessment and teacher-led assessment

The blogs administered by the various groups, together with their topics and content, were evaluated and graded both by the teacher (hetero-assessment), who assessed all the blogs corresponding to his or her university’s students, and by classmates (peer assessment) from both universities. Each student individually assessed four pieces of work, two from each university, by leaving comments on the corresponding blogs following the assessment criteria set out on the blogosphere, and suggested a mark between 0 and 10. The students had 10 days to do this, after which it was the teachers’ turn to write their comments. The mark for each piece of work was calculated from the average of the grades given by the students on the blog (50% of the final mark) and the mark awarded by the teacher (50% of the final mark). The assessment criteria were divided into two parts: general criteria applicable to all the blogs (for example, number of information sources used, correctly referenced material, list of materials for the given topic, use of ICTs and their educational value, blog structure, proof of collaborative work and richness of language) and specific criteria proposed for each topic (structure, game action and context of the specialities presented, history and characteristics of the facilities, teaching appropriateness, etc.).
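To make the grading arithmetic concrete, the short Python sketch below illustrates the weighting described above; the function name and the example marks are hypothetical, introduced only for illustration.

    # Minimal sketch of the grading scheme described above.
    # The marks are hypothetical example values, not data from the study.

    def final_mark(peer_marks, teacher_mark):
        """Combine the mean of the peer marks (50%) with the teacher's mark (50%)."""
        peer_mean = sum(peer_marks) / len(peer_marks)
        return 0.5 * peer_mean + 0.5 * teacher_mark

    # Example: marks (0-10) left as comments on one group's blog by classmates,
    # plus the hetero-assessment mark awarded by the subject teacher.
    print(final_mark([7.0, 8.5, 6.5, 8.0], 7.5))  # -> 7.5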

Formative assessment

Once the assessment comments had been published, each group was given a set period of time, different at each of the two universities, to make the changes they felt relevant in response to the assessments received, in keeping with the formative assessment proposal (Brew et al., 2009) and thereby improving their blogs and topics.

Instruments and procedure

Data were collected using the Perception Scale on Participative Methodologies and Formative Evaluation, which had already been validated for university students (Castejón, Santos, & Palacios, 2015). This scale measures how students perceive methodology and assessment during their initial training; it also measures their level of satisfaction with its implementation and with the attainment of the desired learning outcomes.
The questionnaire has 17 questions (101 items) on a Likert-type scale (0=None/Never – 4=A lot). Given the objective of the study, only the questions related to the students’ perceptions of the assessment system used are presented here, along with those relating to the level of satisfaction with the subject and the assessment model.
The data were collected at the end of the first four-month period of the 2013/14 and 2014/15 academic years, coinciding with the end of the subjects. Beforehand, the students had been informed about the purpose of the questionnaire and the research and the voluntary nature of participation, and they were given guarantees that the results would be used ethically and confidentially.

DATA ANALYSIS

The analyses centred on assessing the inter-university formative and collaborative assessment system implemented in the subjects. They were carried out with the quantitative analysis software SPSS v22.0. The assumption of normal distribution was checked and, since some variables did not meet it, the values were transformed by calculating the square root. As the fundamental aim was to identify differences between the universities, t tests for independent samples were run on the quantitative variables (Likert scale transformed to 1 to 5) and hypothesis contrasts for proportions were applied to the comparison of qualitative variables (Yes/No). Differences were considered significant when p<0.05.
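As a minimal illustration of this pipeline (the study used SPSS; the Python/SciPy sketch below is ours, and the randomly generated scores are stand-ins, not the study’s data), each item can be analysed by checking normality per group, square-root-transforming when the assumption fails, and running an independent-samples t test at the 0.05 level.

    # Minimal sketch of the analysis described above, using SciPy instead of SPSS.
    # The Likert scores are randomly generated stand-ins, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores_a = rng.integers(1, 6, size=116).astype(float)  # University A, Likert 1-5
    scores_b = rng.integers(1, 6, size=137).astype(float)  # University B, Likert 1-5

    # Check the normality assumption (Shapiro-Wilk); transform if it is violated.
    if min(stats.shapiro(scores_a).pvalue, stats.shapiro(scores_b).pvalue) < 0.05:
        scores_a, scores_b = np.sqrt(scores_a), np.sqrt(scores_b)

    # Independent-samples t test; differences are flagged when p < 0.05.
    t, p = stats.ttest_ind(scores_a, scores_b)
    print(f"t = {t:.3f}, p = {p:.3f}, significant: {p < 0.05}")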

RESULTS

Firstly, half of the students asked thought that collaborative assessment processes had been used in the subjects they studied (A: 114 (48.9%); B: 119 (51.1%)). In this sense, the memory of the experience is related to favourable opinions of it if we look at the general data. In relation to the advantages provided by the assessment system, the students showed general agreement with the potential offered by the process (Table 3). Significant differences can be seen, however (p<0.05), in all items in favour of University A students, except for the item on the existence of a previous, negotiated and agreed contract regarding the assessment system, where the scores were fairly similar. Among the advantages most highly valued by the students from both universities were that learning was active, that collaborative teamwork was possible and that theory and practice were interrelated. On the other hand, several advantages received lower ratings, especially from University B students: improvement in academic tutoring, functional learning, improvement in the quality of the work produced and the existence of feedback were the main ones.
The disadvantages the students identified (Table 4) received lower scores than the advantages described in Table 3, especially among the students from University A. Significant differences were observed (p<0.05) in all the items except those related to the difficulty of working in groups, which was of little significance in both educational contexts. For the rest, University B students gave higher scores than University A students to the questions related to formative and continuous assessment (obligatory attendance and continuity). In both cases, despite some differences, the students do not associate the proposal with the need to make a greater effort, with prior comprehension problems or with the difficulties of group work. Nor do they seem to see the assessment model as a more complicated (or less clear) process than others, or as one that generates uncertainty and insecurity. Even though the assessment system used may contain errors, which can in any case be corrected in future editions, 79.1% of the students from University B and 97.4% from University A were fairly satisfied with the assessment system used (χ2(4)=28.481; p<0.001; V=0.351).

Table 3. Student perception related to the advantages found in the assessment process used in subjects (1=none – 5=a lot)

1. It offers alternatives for all students. A: 4.52 (0.69); B: 4.09 (0.96)
2. There is a previous contract, agreed and negotiated, regarding the evaluation system. A: 4.40 (0.74); B: 4.36 (0.96)
3. It is centred on the process, the importance of daily work. A: 4.47 (0.67); B: 4.03 (0.86)
4. The student performs active learning. A: 4.65 (0.54); B: 4.32 (0.70)
5. Teamwork is conceived in a collaborative manner. A: 4.69 (0.59); B: 4.16 (0.80)
6. The student is more motivated, and the learning process is more motivational. A: 4.56 (0.63); B: 4.02 (0.91)
7. Grades are fairer. A: 4.53 (0.66); B: 4.10 (0.89)
8. Improves academic tutelage (follow-up and help for students). A: 4.42 (0.66); B: 3.78 (0.84)
9. Allows functional learning. A: 4.40 (0.68); B: 3.85 (0.86)
10. Generates significant learning. A: 4.29 (0.75); B: 4.02 (0.89)
11. Much more is learnt. A: 4.65 (0.62); B: 4.07 (0.88)
12. Improves the quality of requested essays. A: 4.55 (0.65); B: 3.86 (0.88)
13. There is a correlation between theory and practice. A: 4.68 (0.61); B: 4.20 (0.82)
14. There is feedback and the possibility to correct mistakes in essays and activities. A: 4.47 (0.75); B: 3.76 (1.04)

Values are M (SD), where M=Mean and SD=Standard Deviation. Differences between the universities were significant (p<0.05) for all items except item 2. Square-root-transformed values were used in the analysis, but non-transformed values are presented in the table.
Table 4. Student perception related to the disadvantages found in the assessment process used in subjects (1=none – 5=a lot)

1. Demands compulsory and active attendance. A: 3.25 (1.24); B: 4.12 (1.00)
2. It has a work dynamic that is not widely understood and with which students are unfamiliar. A: 2.24 (1.08); B: 2.79 (1.35)
3. Demands continuity. A: 3.93 (0.82); B: 4.23 (0.75)
4. It needs to be explained beforehand. A: 3.17 (0.99); B: 3.62 (1.08)
5. Demands a greater effort. A: 3.15 (0.97); B: 3.62 (1.01)
6. It is difficult to work in teams. A: 2.27 (1.18); B: 2.46 (1.20)
7. A lot of work may accumulate towards the end. A: 2.41 (1.08); B: 3.09 (1.29)
8. The work/credits ratio is disproportionate. A: 2.06 (1.14); B: 3.01 (1.36)
9. The process is more complex and sometimes unclear. A: 1.77 (0.91); B: 2.45 (1.21)
10. Generates uncertainty and insecurity, doubts about what is to be done. A: 1.83 (0.98); B: 2.32 (1.28)

Values are M (SD), where M=Mean and SD=Standard Deviation. Differences between the universities were significant (p<0.05) for all items except item 6. Square-root-transformed values were used in the analysis, but non-transformed values are presented in the table.

DISCUSSION

In general, the data show widespread satisfaction with the assessment system used, with the majority of the students satisfied with the experience; this coincides with other research on formative and/or collaborative assessment (Hortigüela, Pérez-Pueyo, & Abella, 2015; Van der Berg, Admiraal, & Pilot, 2006a), including studies that use blogs (Coutinho, 2007; Molina et al., 2015). It is also encouraging that many of the students in the four groups forming the basis of this study recognised the assessment system as collaborative. However, the results call for some nuance. Bearing in mind that the questionnaires were distributed at the end of the four-month period, and ruling out the possible forgetfulness factor mentioned by Gutiérrez-García et al. (2011), it is striking that a large proportion of the students, about half, did not recognise the proposal as collaborative. Unfortunately, this is not a new finding, as students’ perception is not always akin to that of their teachers; in effect, they tend to see the assessment system only in its final, summative version (Hortigüela et al., 2015). The general problem derived from this is that the experience can lose educational value if it is not recognised as originally intended and is seen merely as yet another form of assessment. It may be that, in spite of the efforts made to implement formative and collaborative assessment systems, students continue to see final examinations as the most important component (Gutiérrez-García et al., 2011).
Perhaps, in our case, an effort needs to be made to align the teaching sense of the assessment proposal with that of the subject itself, as many authors demand (Hay & Penney, 2009; Hawe, 2007), in a bid to strengthen the links between assessment and active methodologies (Lorente-Catalán & Kirk, 2014), so that the former is not seen as an add-on to the subject that merely makes the final mark easier to determine. The idea is for students to understand the real sense of collaborative and formative assessment, not just to recognise its advantages over other assessment systems. This becomes all the more pertinent if the students, as future teachers, are going to have to design and apply assessment procedures in schools; this requires them to think about the meaning of their education and its very nature (Hay & Penney, 2009). On this point we are self-critical and accept the responsibility arising from the difficulty of making the same assessment proposal coherent for students of different subjects, at different levels and at universities far apart from each other.
Another reason why half of the students may not have found the experience collaborative could be the use of the blog and the fact that the collaboration in question took place in a virtual environment. This would contrast with the positive reports of many authors with regard to ICTs in general and blogs in particular, as far as collaboration is concerned (González & García, 2011; McConnell, 2002; Namwar & Rastgoo, 2008) or in relation to teaching and learning assistance (Williams & Jacobs, 2004). Given the results, however, the words of Van der Berg et al. (2006a, 2006b) take on new relevance when they emphasise the need to combine written and oral feedback with visual contact. It would perhaps be a good idea to design new strategies combining formative and peer assessment with efficient use of the blog, in order to explore the benefits both of face-to-face assessment and of the virtual environment.
In relation to the benefits and positive aspects that the students attribute to this system of collaborative and formative assessment, the scores awarded were generally high, which leads us to believe that the students are indeed aware of the improvements the system implies. Those advantages have been demonstrated time and again in the existing literature (Boud & Falchikov, 2007; Davies, 2010; López-Pastor et al., 2013; Lorente-Catalán & Kirk, 2014). In our study the students made special reference to active participation, also emphasised in other studies (e.g., López Meneses, 2009; Salinas & Viticcioli, 2008), which contributes to generating interactivity between the students themselves; to collaborative learning, also described by Huffaker (2004) and Farmer, Yue, and Brooks (2008), among others; and to the interrelationship between theory and practice (e.g., Tekinarslan, 2008; Williams & Jacobs, 2004).
The fact that the students are aware of this is already positive because, in a way, it can help to reduce their traditional resistance to taking part in educational experiments (Lorente & Kirk, 2012). This perception is also vital when trying to create an environment of trust, because the students need to see that the system is fair (Brown, 2015). The scores obtained for items related to the learning process were also notable, given that collaborative learning is, in general, coherent with the proposals.
As far as the differences are concerned, the least enthusiasm was found in the answers given by University B students. These students were in their 4th year and somewhat older than those from University A, although this does not seem to be related to having more experience of this type of assessment, as shown by Falchikov and Goldfinch (2000). A possible explanation, to which we shall return later, is that as final-year students they may feel the pressure of saturation and of the excess work to be completed before finishing their initial training, or a combination of these perceptions. It also seems that the University A students understood the proposal better, which puts the spotlight on the role of the teachers and their explanations of the process; González and García (2011) emphasise this role in collaborative virtual experiments. The teachers’ role thus remains critical (Lorente-Catalán & Kirk, 2012), and this needs to be borne in mind in future research.
In general, as far as the perceived difficulties are concerned, the disadvantages received lower scores than the advantages, which is a positive finding for peer assessment. What stands out here is the higher score given by University B students, compared with University A students, to the items related to the accumulation of work at the end of the four-month period and to the feeling that the amount of work required is disproportionate to the subject credits. Once again, we can suggest that the proposal was not explained sufficiently clearly, or that students in their final year are more likely to be anxious as the process draws to a close. This can lead, as pointed out by López-Pastor et al. (2013), to students feeling that their workload accumulates, especially at the end of the four-month period. This is relevant because, in line with the study by Domingo, Martínez, Gomariz and Gámiz (2014), the mere perception of having too much work can lead students to undertake the assessment in an automatic way, without necessarily making much effort. Formative assessment requires continuous work and, in many cases, obligatory attendance, as the students noticed, especially those at University B, a result which coincides with other similar research (Hortigüela et al., 2015); this could mean that the requirement is perceived as an unmanageable obligation, or even as unfair. In our case, this perception also contrasts with the use of the blog, which is more in line with an education received in various contexts beyond the classroom, an “extended classroom” in accordance with Coutinho’s (2007) arguments. The students’ perception may thus be due to the fact that the peer-assessment proposal was understood as an independent exercise added on to the rest, rather than as assessment in itself. In any case, to advance towards a more shared and formative type of assessment, more empirical studies are needed to measure the influence of pedagogical interventions aimed at such an outcome in university studies, seeking, for example, coherence between our teaching discourse and our practice (López-Pastor & Sicilia-Camacho, 2015).

CONCLUSIONS

From our point of view, the objective of this research has been met, given that it aimed to discover students’ perception of a formative and peer assessment system implemented via the blogosphere. Furthermore, we found the students’ opinions valuable for evaluating the usefulness of the system designed and, based on their perceptions, the disadvantages they observed can be corrected. To achieve this, we need to explain the intentions and aims of the assessment system, so that students understand its usefulness and are encouraged to participate actively. The involvement of students in their own assessment is a fundamental question for their future careers, especially if they are going into the teaching profession and intend to put this type of system into practice. To establish a real collaborative learning process, it is also advisable to go beyond the limits of a mere assessment proposal, so that students do not perceive the experience as an improvised succession of tests. For example, we see great potential in negotiating assessment aspects, such as grading criteria, with the students.
An idea for the future is to develop more research projects around specific and original innovations that expand the pedagogical limits. We believe that only in this way can we respond to new social demands, such as the use of ICTs for collaborative and formative assessment. To this end, it will be necessary to incorporate qualitative research, which was lacking in our study; a better understanding of this area may be obtained from more case studies and further action research.

ACKNOWLEDGEMENTS

Funded by: Ministry of Economy and Competitiveness, Spain.
Funder Identifier: http://dx.doi.org/10.13039/501100003329
Award: EDU 2013-42024-R

Funded by: University of Valencia, Spain.
Funder Identifier: http://dx.doi.org/10.13039/501100003508
Award: UV-SFPIE FO13-147376, UV-SFPIE FO14-223314

This work was supported by the State Programme of Research, Development and Innovation Facing the Challenges of the Society, 2013–2016. (Ministry of Economy and Competitiveness, Spain) under grant [EDU 2013-42024-R] and by the University of Valencia (Spain), Call for grants to educational innovation projects, 2013-2014 [UV-SFPIE FO13-147376] and 2014-2015 [UV-SFPIE FO14-223314].

REFERENCES

  1. Boud, D., & Associates (2010). Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney, Australia: Australia Learning and Teaching Council. Retrieved from http://www.uts.edu.au/sites/default/files/Assessment-2020_propositions_final.pdf
  2. Boud, D., & Falchikov, N. (2007). Rethinking Assessment in Higher Education. Learning for the long term. London: Routledge.
  3. Bretones, A. (2008). Participación del alumnado de Educación Superior en su evaluación. Revista de Educación, 347, 181–202. Retrieved from http://www.mecd.gob.es/dctm/revista-de-educacion/articulosre347/re34709.pdf?documentId=0901e72b81236771
  4. Brew, C., Riley, P., & Walta, C. (2009). Education students and their teachers: comparing views on participative assessment practices. Assessment & Evaluation in Higher Education, 34(6), 641–657. doi: 10.1080/02602930802468567
  5. Brown, S. (2015). A review of contemporary trends in higher education assessment. @tic, 14, 43–49.
  6. Buscà, F., Pintor, P., Martínez, L., & Peire, T. (2010). Sistemas y procedimientos de Evaluación Formativa en docencia universitaria: resultados de 34 casos aplicados durante el curso académico 2007-2008. Estudios sobre Educación, 18, 255–276. Retrieved from http://dadun.unav.edu/bitstream/10171/9829/2/ESE_18_11.pdf
  7. Castejón, J., Santos, M. L., & Palacios, A. (2015). Questionnaire on methodology and assessment in physical education initial training. Revista Internacional de Medicina y Ciencias de la Actividad Física y el Deporte, 15(58), 245–267.
  8. Castells, M. (2009). Comunicació i poder. Barcelona: UOC.
  9. Cebreiro, B., & Fernández, C. (2003). Las tecnologías de la comunicación en el espacio europeo para la educación superior. Comunicar, 21, 57–61. Retrieved from http://www.revistacomunicar.com/index.php?contenido=detalles&numero=21&articulo=21-2003-08
  10. Collis, B., & Moonen, J. (2011). Flexibility in Higher Education: Revisiting Expectations. Comunicar, 37, 15–25. doi:10.3916/C37-2011-02-01
  11. Coutinho, C. (2007). Cooperative learning in higher education using weblogs: A study with undergraduate students of education in Portugal. World Multiconference on Systemics, Cybernetics and Informatics, 11(1), 60–64. Retrieved from http://repositorium.sdum.uminho.pt/bitstream/1822/6721/1/Webblogs.pdf
  12. Davies, P. (2010). Computerized Peer Assessment. Innovations in Education & Training International, 37(4), 346–355. doi:10.1080/135580000750052955
  13. Domingo, J., Martínez, H., Gomariz, S., & Gámiz, J. (2014). Some Limits in Peer Assessment. Journal of Technology and Science Education, 4(1), 12–24. Retrieved from http://www.jotse.org/index.php/jotse/article/view/90/118
  14. Falchikov, N., & Goldfinch, J. (2000). Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks. Review of Educational Research, 70(3), 287–322. doi:10.3102/00346543070003287
  15. Farmer, B., Yue, A., & Brooks, C. (2008). Using blogging for higher order learning in large-cohort university teaching: A case study. Australasian Journal of Educational Technology, 24(2), 123–136. doi:10.14742/ajet.1215
  16. Gijón, J., & Crisol, E. (2012). La internacionalización de la Educación Superior. El caso del EEES. Revista de Docencia Universitaria, 10(1), 389–414. Retrieved from http://dialnet.unirioja.es/descarga/articulo/4020541.pdf
  17. Gómez-Gonzalvo, F., Devís-Devís, J., Pérez-Samaniego, V., & Atienza, R. (2012). Los blogs en ciencias de la actividad física y el deporte. Una aproximación desde la óptica del alumnado. In D. Cobos, E. López, A. Jaén, A. H. Martín, & L. Molina (Dirs.), Actas del Congreso. I Congreso Virtual Innovagogía 2012. Congreso Virtual sobre Innovación Pedagógica y Praxis educativa (pp. 1334–1343). Sevilla: AFOE.
  18. González, R., & García, F. E. (2011). Recursos eficaces para el aprendizaje en entornos virtuales en el Espacio Europeo de Educación Superior: análisis de los edublogs. Estudios Sobre Educación, 20, 161–180. Retrieved from http://dadun.unav.edu/bitstream/10171/18416/2/ESE%20161-180.pdf
  19. Gutiérrez-García, C., Pérez-Pueyo, A., & Pérez-Gutiérrez, M. (2013). Percepciones de profesores, alumnos y egresados sobre los sistemas de evaluación en estudios universitarios de formación del profesorado en educación física. Ágora para la Educación Física y el Deporte, 15(2), 130–151. Retrieved from http://dialnet.unirioja.es/descarga/articulo/4493406.pdf
  20. Gutiérrez-García, C., Pérez-Pueyo, A., Pérez-Gutiérrez, M., & Palacios-Picos, A. (2011). Percepciones de profesores y alumnos sobre la enseñanza, evaluación y desarrollo de competencias en estudios universitarios de formación de profesorado. Cultura y Educación, 23(4), 499–514. doi:10.1174/113564011798392451
  21. Harasim, L., Hiltz, S. R., Teles, L., & Turoff, F. (2000). Redes de aprendizaje. Guía para la enseñanza y el aprendizaje en red. Barcelona: Gedisa.
  22. Hay, P. J. (2006). Assessment for learning in physical education. In D. Kirk, D. Macdonald, & M. O’Sullivan (Eds.), Handbook of Physical Education (pp. 312–325). London: Sage. doi:10.4135/9781848608009.n18
  23. Hay, P., & Penney, D. (2009). Proposing conditions for assessment efficacy in physical education. European Physical Education Review, 15(3), 389–405.
  24. Hawe, E. (2007). Student teachers' discourse on assessment: form and substance. Teaching in Higher Education, 12(3), 323–335. doi:10.1080/13562510701278666
  25. Hortigüela, D., Pérez-Pueyo, A., & Abella, V. (2015). Perspectiva del alumnado sobre la evaluación tradicional y la evaluación formativa. Contraste de grupos en las mismas asignaturas. REICE, 13(1), 35–48. Retrieved from http://www.rinace.net/reice/numeros/arts/vol13num1/art3.pdf
  26. Huffaker, D. (2004). The educated blogger: Using weblogs to promote literacy in the classroom. AACE Journal, 13(2), 91–98. doi: 10.5210/fm.v9i6.1156
  27. Laurillard, D. (2002). Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies. London: Routledge. doi:10.4324/9780203304846
  28. Lombillo, I, López, A., & Zumeta, E. (2012). Didactics of the use of ICT and traditional teaching aids in municipal higher education institutions. Journal of New Approaches in Educational Research, 1(1), 33–40. doi:10.7821/naer.1.1.33-40
  29. López Meneses, E. (2009). Innovar con blogs en las aulas universitarias. Revista DIM: Didáctica, Innovación y Multimedia, 14, 1–6. Retrieved from http://ddd.uab.cat/pub/dim/16993748n14/16993748n14a2.pdf
  30. López-Pastor, V. M., & Sicilia-Camacho, A. (2015). Formative and shared assessment in higher education. Lessons learned and challenges for the future. Assessment & Evaluation in Higher Education, 42(1), 77–97. doi:10.1080/02602938.2015.1083535
  31. López-Pastor, V. M., Castejón, J., Sicilia-Camacho, A., Navarro-Adelantado, V., & Webb. G. (2011). The process of creating a cross-university network for formative and shared assessment in higher education in Spain and its potential applications. Innovations in Education and Teaching International, 48(1), 79–90. doi:10.1080/14703297.2010.543768
  32. López-Pastor, V. M., Pintor, P., Muros, B., & Webb, G. (2013). Formative assessment strategies and their effect on student performance and on student and tutor workload: the results of research projects undertaken in preparation for greater convergence of universities in Spain within the European Higher Education Area (EHEA). Journal of Further and Higher Education, 37(2), 163–180. doi:10.1080/0309877X.2011.644780
  33. Lorente, E., & Kirk, D. (2012). Alternative democratic assessment in PETE: an action-research study exploring risks, challenges and solutions. Sport, Education and Society, 1(20), 77–96.
  34. Lorente-Catalán, E., & Kirk, D. (2014). Making the case for democratic assessment practices within a critical pedagogy of physical education teacher education. European Physical Education Review, 20(1), 104–119. doi:10.1177/1356336X13496004
  35. McConnell, D. (2002). The Experience of Collaborative Assessment in e-Learning. Studies in Continuing Education, 24(1), 73–92. doi:10.1080/01580370220130459
  36. Molina, J. P., Valenciano, J., & Valencia-Peris, A. (2015). Los blogs como entornos virtuales de enseñanza y aprendizaje en Educación Superior. Revista Complutense de Educación, 26, 15–31. Retrieved from http://revistas.ucm.es/index.php/RCED/article/download/43791/45929
  37. Molina, J. P., Antolín, L., Pérez-Samaniego, V., Devís-Devís, J., & Villamón, M. (2013). Uso de blogs y evaluación continua del aprendizaje del alumnado universitario. Edutec, 43. Retrieved from http://www.edutec.es/revista/index.php/edutec-e/article/download/335/71
  38. Namwar, Y., & Rastgoo, A. (2008). Weblogs as Learning Tool in Higher Education. Turkish Online Journal of Distance Education, 9(3), 176–185. Retrieved from http://tojde.anadolu.edu.tr/yonetim/icerik/makaleler/432-published.pdf
  39. O’Donnell, M. (2006). Blogging as pedagogic practice: Artefact and ecology. Asia Pacific Media Educator, 17, 5–19. Retrieved from http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1018&context=apme
  40. Rubia, B., & Guitert, M. (2014). ¿La revolución de la enseñanza? El aprendizaje colaborativo en entornos virtuales (CSCL). Comunicar, 42, 10–14.
  41. Salinas, J. (2004). Innovación docente y uso de las TIC en la enseñanza universitaria. Revista Universidad y Sociedad del Conocimiento, 1(1), 1–15. Retrieved from http://dialnet.unirioja.es/descarga/articulo/1037290.pdf
  42. Salinas, M. I., & Viticcioli, S. M. (2008). Innovar con blogs en la enseñanza universitaria presencial. EDUTEC. Revista Electrónica de Tecnología Educativa, 27, 1–22. Retrieved from http://www.edutec.es/revista/index.php/edutec-e/article/download/464/197
  43. Santos Guerra, M. A. (1993). La evaluación: un proceso de diálogo, comprensión y mejora. Archidona: Aljibe.
  44. Tan, K. H. K. (2008). Qualitatively different ways of experiencing student self-assessment. Higher Education Research & Development, 27(1), 15–29. doi:10.1080/07294360701658708
  45. Tekinarslan, E. (2008). Blogs: A Qualitative Investigation into an Instructor and Undergraduate Students' Experiences. Australasian Journal of Educational Technology, 24(4), 402–412. doi:10.14742/ajet.1200
  46. Usabiaga, O., Martos-García, D., & Valencia-Peris, A. (2014). Propuesta de innovación educativa para el futuro profesorado de educación física a través de una blogosfera. Revista Española de Educación Física y Deportes, 406(3), 85–92. Retrieved from http://www.reefd.es/index.php/reefd/article/viewFile/29/31
  47. Van der Berg, I., Admiraal, W., & Pilot, A. (2006a). Peer assessment in university teaching: Evaluating seven course designs. Assessment & Evaluation in Higher Education, 31(1), 19–36. doi:10.1080/02602930500262346
  48. Van der Berg, I., Admiraal, W., & Pilot, A. (2006b). Designing student peer assessment in higher education: Analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135–147. doi:10.1080/13562510500527685
  49. Williams, J. B., & Jacobs, J. (2004). Exploring the use of blogs as learning spaces in the higher education sector. Australasian Journal of Educational Technology, 20(2), 232–247. doi:10.14742/ajet.1361