Elsevier

Computers in Human Behavior

Volume 75, October 2017, Pages 81-91

Full length article
Promoting collaborative learning through regulation of guessing in clickers

https://doi.org/10.1016/j.chb.2017.05.001

Highlights

  • The announcement of a penalty for guessing has a negative effect on promoting collaborative learning.

  • Questions that require higher-order thinking skills promote collaborative learning to a greater extent.

  • Creating mixed level groups of students seems advisable to enhance collaborative learning and to decrease pure guessing.

Abstract

Collaborative learning is a promising avenue in education research. Learning from others and with others can foster deeper learning on multiple-choice assignments, but it is hard to control the level of students' pure guessing. This paper addresses the problem of promoting collaborative learning through the regulation of guessing when students use clickers to answer multiple-choice questions of various levels of difficulty. The study aims to identify how task difficulty and students' levels of knowledge influence the degree of partial guessing. To answer this research question, we developed two research models and validated them by testing 84 students with regard to the students' level of knowledge and the announcement of a penalty. The findings of this research reveal that: a) the announcement of a penalty has a negative effect on promoting collaborative learning even though it reduces pure guesses in test results; b) questions that require higher-order thinking skills promote collaborative learning to a greater extent; c) creating mixed-level groups of students seems advisable to enhance learning from collaboration and, thus, to decrease the degree of pure guessing.

Introduction

Collaborative learning is a pedagogical approach that helps to enhance learning performance (Blasco-Arcas et al., 2013, McDonough and Foote, 2015). In-depth research indicates that this type of learning environment leads to deeper learning, as students teach each other by addressing misunderstandings and clarifying misconceptions. In a collaborative learning environment, students gain different perspectives and, thus, articulate and defend their own ideas. It was Lev Vygotsky who laid the foundations for collaborative learning (Vygotsky, 1978). His concept of the zone of proximal development cast doubt on knowledge-based tests as a proper means of measuring students' level of knowledge. Vygotsky contended that, in order to gauge the level of true knowledge, one must examine the ability to solve problems both independently and in a group. But measuring the knowledge of students who are working in a group is a complicated problem.

One way of stimulating peer collaboration and, at the same time, measuring individual performance is using clickers (Brady et al., 2013, Chien et al., 2016, Cook and Calkins, 2013, Lantz and Stawiski, 2014, Lantz, 2010, Mayer et al., 2009). Some studies highlight the effectiveness of this method because it promotes active learning through student engagement (McDonough & Foote, 2015). For instance, in their research, Smith et al. (2009) used clickers to test in-class concept questions. At first, students were asked to answer a question after a peer discussion. Then they were posed a similar clicker question, but this time they were instructed to answer independently. Smith et al. (2009) analyzed the improvement in the percentage of correct answers after peer discussion. The authors offered two possible explanations for the higher grades: genuine conceptual understanding, or simply choosing the answer most supported by more knowledgeable peers. The authors concluded that peer discussion led to better understanding even when none of the students initially knew the correct answer. Although this research shed light on the major problem of distinguishing actual learning from collaboration from the influence of more prepared students on their peers, problems remained with assessing learning performance accurately.

This assessment problem partly results from the limitations imposed by the testing format: clickers are traditionally used in multiple-choice testing (Little & Bjork, 2016). There are two major issues with multiple-choice testing, which are widely debated in the literature.

The first issue is designing questions which go beyond Bloom's lower-order thinking levels (recalling, understanding, and applying) to the higher-order levels (analyzing, evaluating, and creating) (Anderson et al., 2001, Bloom et al., 1956). On the one hand, some studies (Anderson et al., 2001, Mayer, 2002, Ventouras et al., 2010, Ventouras et al., 2011, Thelwall, 2000) point out that it is possible to design multiple-choice quizzes that test higher-order thinking skills. On the other hand, some researchers argue that multiple-choice assignments measure only factual recall (Butler and Roediger, 2008, Nickerson et al., 2015, Nicol, 2007). Consequently, many instructors adopt the easiest way to manipulate test difficulty, i.e. varying the number of multiple-choice alternatives (Butler and Roediger, 2008, Dehnad et al., 2014, Lesage et al., 2013, Tarrant and Ware, 2010). But an increase in the number of distractors may lead to a decrease in the proportion of correct responses. Students are likely to acquire false knowledge instead of enhancing retention of the material. As a result, such a test format may increase students' exposure to misinformation. Butler and Roediger (2008) indicate that a distractor has the most detrimental effect unless proper feedback is provided (Nicol, 2007).
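As a side note, the effect of varying the number of alternatives on pure guessing is easy to quantify: with k options per item, a blind guess succeeds with probability 1/k. The following sketch is purely illustrative (the function name and test sizes are our own, not from the paper):

```python
# Expected number of correct answers obtained by pure guessing on a
# multiple-choice test with k alternatives per item. A blind guess
# succeeds with probability 1/k, so adding distractors lowers the
# expected score from guessing alone.
def expected_guess_score(num_questions: int, k: int) -> float:
    """Expected correct answers from blind guessing: n * (1/k)."""
    return num_questions / k

# On a 20-item test, moving from 3 to 5 options per item:
for k in (3, 4, 5):
    print(k, expected_guess_score(20, k))
```

This is why adding distractors lowers scores even when students' knowledge is unchanged: the baseline contributed by chance shrinks from a third to a fifth of the items.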

An opposing view is suggested in Bjork's recent research (Bjork et al., 2014, Bjork et al., 2015, Little and Bjork, 2015), which states that multiple-choice testing can promote deep learning and increase long-term retention even when no corrective feedback is given. According to these studies, multiple-choice testing can stimulate the type of retrieval processes known to improve learning (Bjork et al., 2015). In this case, instructors need to provide students with a metacognitive strategy to encourage more complex thinking. This strategy is aimed at considering all the alternatives: reasoning not only about why the selected answer is correct, but also about why the distractors are incorrect. Moreover, students should engage in this metacognitive strategy even if they are certain which answer is correct.

However, applying metacognitive strategies may pose another serious assessment problem: if students can eliminate some responses based on critical analysis, they can reach the correct answer by partial guessing, the level of which is often difficult to assess correctly (Ben-Simon et al., 1997, Kubinger et al., 2010). An extensive body of literature puts forward different scoring procedures to examine partial guessing (Arnold and Arnold, 1970, Bereby-Meyer et al., 2003, Espinosa and Gardeazabal, 2010, Lord, 1980). The primary purpose of these methods is to alleviate pure guessing effects on multiple-choice items and, thus, to reveal students' true knowledge. For instance, Ghafournia (2013) approached this problem by analysing test-taking strategies in answering multiple-choice tests at three levels of English proficiency. The author studied the following subcategories of strategies: time management, error avoidance, guessing, and intent consideration (Ghafournia, 2013). The findings of this research demonstrate significant differences only in the use of guessing strategies across the three levels of proficiency. While the higher level students used the error avoidance strategy and the time management strategy more frequently, the lower level students employed the guessing strategy less regularly. In contrast to the lower level group and the higher level group, the intermediate level students used the guessing strategy to a much greater extent. These results could be interpreted as follows. The higher level students have a sufficient level of knowledge to answer questions, so they do not need to rely heavily on the guessing strategy. By contrast, the lower level students take pure guesses, as they may not have enough knowledge to adopt guessing as a strategy. Finally, the intermediate level students have only partial knowledge. As a result, they demonstrate some partial guessing in an attempt to avoid distractors.
Consequently, the level of guessing depends not only on the order of thinking skills, but also on the level of students' knowledge.
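One classical scoring procedure from the correction-for-guessing literature (e.g. Lord, 1980) is formula scoring: a fraction of the wrong answers is subtracted so that pure guessing yields an expected score of zero. A minimal sketch, with the function name and the worked numbers chosen for illustration rather than taken from the paper:

```python
def formula_score(right: int, wrong: int, k: int) -> float:
    """Correction for guessing: S = R - W / (k - 1).

    With k alternatives, a blind guesser expects one right answer per
    (k - 1) wrong ones, so the penalty cancels lucky hits on average.
    """
    return right - wrong / (k - 1)

# A student who blindly guesses all 20 four-option items expects
# 5 right and 15 wrong, giving a formula score of 5 - 15/3 = 0.
print(formula_score(5, 15, 4))   # 0.0
print(formula_score(14, 6, 4))   # 12.0 for a mostly knowledgeable student
```

Note that this correction only neutralizes pure guessing on average; partial guessing after eliminating some distractors still yields a positive expected score, which is exactly why its level is hard to assess.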

What is not specifically tackled in the studies reviewed above is how the levels of cognition and students' levels of knowledge influence the degree of guessing. This is the research question raised in this study. Addressing this gap with regard to collaborative learning, we set the objective of examining the problem of promoting collaborative learning through the regulation of guessing in answering clicker questions. Firstly, we support the idea that clickers can be seen as an effective instrument for promoting deeper understanding and improved student performance via collaboration. Clickers can help to develop students' critical thinking skills (Blasco-Arcas et al., 2013, Brady et al., 2013, Levesque, 2011), especially when the designed questions are based on a taxonomy to encourage higher-order thinking (Bode et al., 2009, Bruff, 2009, Cook and Calkins, 2013). Secondly, the process of collaboration is not limited to applying only cognitive and metacognitive strategies. It also involves such aspects as social and metasocial interaction (Wang, Wallace, & Wang, 2017). Consequently, the regulation of this process is crucial for creating an effective learning environment. Though there is research into different types of regulation (De Backer et al., 2016, Jarvela and Hadwin, 2013, Jarvela and Hadwin, 2015, Jarvela et al., 2016, Raes et al., 2016, van Leeuwen et al., 2015, Winne, 2015), primarily focused on developing skills of self-regulation (Grau & Whitebread, 2012), co-regulation (Chan, 2012) and socially shared regulation (De Backer et al., 2014, Isohatala et al., 2017, Jarvela and Hadwin, 2015, Malmberg et al., 2015), little attention seems to be paid to the problem of guessing regulation.

To achieve our research aim, we tested two control groups of students: lower level students and higher level students. They were given a set of clicker questions, increasing in difficulty and involving both lower-order thinking skills (LOTS) and higher-order thinking skills (HOTS). During the tests, all the students were encouraged to collaborate. However, some of them were told there would be a penalty for guessing, while the others had no penalty. In addition, the students were given bonus points for answering the clicker questions correctly, so they had an incentive to take the questions seriously.
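The incentive structure described above can be reasoned about in expected-value terms: a bonus rewards answering, while a penalty raises the confidence a rational student needs before a guess pays off. The sketch below is an illustration under assumed point values (the paper does not specify the bonus or penalty magnitudes):

```python
def answer_expected_value(p_correct: float, bonus: float, penalty: float) -> float:
    """Expected payoff of submitting an answer: p*bonus - (1-p)*penalty.

    A risk-neutral student should guess only when this is positive, so
    announcing a penalty raises the confidence threshold at which
    guessing becomes worthwhile.
    """
    return p_correct * bonus - (1.0 - p_correct) * penalty

# Pure guess on a four-option item (p = 0.25) with a 1-point bonus:
print(answer_expected_value(0.25, 1.0, 0.0))        # 0.25: guessing pays with no penalty
print(answer_expected_value(0.25, 1.0, 1.0 / 3.0))  # 0.0: a 1/3-point penalty breaks even
```

Under these assumed values, any penalty above 1/3 of the bonus makes pure guessing a losing proposition on a four-option item, while partial guessing (higher p after eliminating distractors) can still pay off.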

This paper is organised as follows. Section 2 presents the hypotheses and the proposed research models. It also describes the tests, participants, and procedure used to support the present research. Section 3 reveals the results of descriptive statistics in support of the hypotheses. Section 4 summarises the findings of this study and answers the raised research question.


Method and materials

This section discusses the hypotheses formulated to examine the research question and the research models created to visually represent the logic behind the hypotheses. Then, we provide a description of tests, participants, and procedure used to support the present research.

Results

This section demonstrates the results of descriptive statistical analysis in support of the hypotheses.

Discussion

The present results are fully consistent with the previous studies that reported positive findings for clicker activities through students' collaboration (McDonough & Foote, 2015). McDonough and Foote (2015) examined the impact of individual and shared clicker use on students' collaborative learning. Their results showed that shared clicker activities produce more collaborative reasoning than individual clicker activities. Moreover, shared clickers elicited better learning outcomes though it may

Limitations and future directions

There are three main limitations that need to be addressed regarding this research before broad generalizations may be made. First, the study presents a restricted sample of university students majoring in engineering sciences. Second, even though the small sample collected helped us to support the hypotheses with statistically significant estimates, a larger sample size would be more reliable. Finally, the group categorization was non-blinded. But estimates' bias may be significantly reduced

Conclusions

The present study was aimed at analyzing the problem of guessing regulation to promote collaborative learning. For this purpose, we tested two control groups of students, "hl-" and "ll-". They were asked to answer clicker questions which increased in difficulty and required the application of both LOTS and HOTS. To investigate the relationship between the influence of guessing and the impact of collaboration, two subgroups of students ("hl-p" and "ll-p") were announced a penalty while the

Acknowledgements

This work was supported by the Ministry of Education and Science of the Russian Federation, grant 074-U01. The authors would like to thank the reviewers for the valuable comments and suggestions. We are also grateful to Raoul Kessels for sharing the TeX code (Kessels, 2012) used to draw the emoticons in Fig. 2.

References (75)

  • N. Ghafournia (2013). The relationship between using multiple-choice test-taking strategies and general language proficiency level. Procedia - Social and Behavioral Sciences.
  • V. Grau et al. (2012). Self and social regulation of learning during collaborative activities in the classroom: The interplay of individual and group cognition. Learning and Instruction.
  • K.C. Huff (2015). The comparison of mobile devices to computers for web-based assessments. Computers in Human Behavior.
  • J. Isohatala et al. (2017). Socially shared regulation of learning and participation in social interaction in collaborative learning. International Journal of Educational Research.
  • S. Jarvela et al. (2015). Promoting and researching adaptive regulation: New frontiers for CSCL research. Computers in Human Behavior.
  • S. Jarvela et al. (2016). Recognizing socially shared regulation by using the temporal sequences of online chat and logs in CSCL. Learning and Instruction.
  • D. Kemmerer (2005). The spatial and temporal meanings of English prepositions can be independently impaired. Neuropsychologia.
  • L.F. Kleiner (2005). The semantics of English prepositions. Book review. Journal of Pragmatics.
  • M.E. Lantz (2010). The use of 'clickers' in the classroom: Teaching innovation or merely an amusing novelty. Computers in Human Behavior.
  • M.E. Lantz et al. (2014). Effectiveness of clickers: Effect of feedback and the timing of questions on learning. Computers in Human Behavior.
  • A. van Leeuwen et al. (2015). Teacher regulation of multiple computer-supported collaborative groups. Computers in Human Behavior.
  • E. Lesage et al. (2013). Scoring methods for multiple choice assessment in higher education - is it still a matter of number right scoring or negative marking? Studies in Educational Evaluation.
  • J.-J. Lo et al. (2004). Effects of confidence scores and remedial instruction on prepositions learning in adaptive hypermedia. Computers & Education.
  • J. Malmberg et al. (2015). Promoting socially shared regulation of learning in CSCL: Progress of socially shared regulation among high- and low-performing groups. Computers in Human Behavior.
  • R.E. Mayer (2002). A taxonomy for computer-based assessment of problem-solving. Computers in Human Behavior.
  • R.E. Mayer et al. (2009). Clickers in college classrooms: Fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology.
  • K. McDonough et al. (2015). The impact of individual and shared clicker use on students' collaborative learning. Computers & Education.
  • C.M. Mueller (2011). English learners' knowledge of prepositions: Collocational knowledge or knowledge based on meaning? System.
  • B.H. Ngu et al. (2006). Evaluating a CALL software on the learning of English prepositions. Computers & Education.
  • A. Raes et al. (2016). Promoting metacognitive regulation through collaborative problem solving on the web: When scripting does not work. Computers in Human Behavior.
  • E.E. Rigdon (2016). Choosing PLS path modeling as analytical method in European management research: A realist perspective. European Management Journal.
  • M. Ronkko et al. (2015). On the adoption of partial least squares in psychological research: Caveat emptor. Personality and Individual Differences.
  • J.R. Stowell (2015). Use of clickers vs. mobile devices for classroom polling. Computers & Education.
  • S. Streukens et al. (2016). Bootstrapping and PLS-SEM: A step-by-step guide to get more out of your bootstrap results. European Management Journal.
  • M. Tarrant et al. (2010). A comparison of the psychometric properties of three- and four-option multiple-choice questions in nursing assessments. Nurse Education Today.
  • M. Thelwall (2000). Computer-based assessment: A versatile educational tool. Computers & Education.
  • N. Valaei et al. (2017). Modelling continuance intention of citizens in government Facebook page: A complementary PLS approach. Computers in Human Behavior.