Issue 1, 2020

Introducing randomization tests via an evaluation of peer-led team learning in undergraduate chemistry courses

Abstract

The methodological limitations education researchers face in evaluating reformed instruction have led to debates over the evidence base underpinning evidence-based practices. To conduct more effective research, methodological pluralism in the evaluation of educational reforms can be used to complement the strengths and limitations of the corpus of literature informing the impact of an evidence-based practice. This study introduces randomization tests, a nonparametric statistical analysis incorporating a random-assignment component that can be applied to a single-subject (N = 1) research design, as a methodology to be counted among evaluations of instructional reforms. To demonstrate the utility of this approach, peer-led team learning (PLTL) was evaluated with randomization tests in classes of second-semester general chemistry spanning 7 semesters. The design contributes novel understandings of PLTL, including differences in effectiveness across instructors, trends in effectiveness over time, and a perspective on the appropriateness of statistical independence assumptions in educational settings. At the research setting, four instructors (each constituting an individual case) alternated between implementing lecture-based instruction and PLTL by term. Across these four instructors, the treatment effects of peer-led team learning relative to lecture-based instruction ranged in impact from d = 0.233 to 2.09. For two instructors, PLTL provided a means to significantly reduce the performance differences observed between students with variable preparation in mathematics, thereby advancing the equitability of their courses.
Implications of this work include the incorporation of single-subject research designs in establishing evidence-based instructional practices, the interpretation of PLTL's effectiveness within a methodologically pluralistic reading of the research literature, and the enactment of equity measures when gauging the success of instructional reforms in science. Further, this introduction to randomization tests offers another methodology for evaluating instructional reforms that is more widely applicable in educational settings with small sample sizes (e.g., reforms conducted within a single classroom or upper-level courses with small class sizes).
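The logic of a randomization test as described above can be illustrated with a short sketch. This is not the authors' analysis: the data, term labels, and the simplifying assumption that every permutation of the treatment schedule was equally likely are hypothetical; the sketch only shows the core mechanism of comparing an observed mean difference to a distribution obtained by reshuffling treatment labels.

```python
import random
import statistics

def randomization_test(scores, labels, n_perm=10000, seed=0):
    """One-sided randomization (permutation) test on a mean difference.

    scores: one outcome per term (e.g., a class mean exam score).
    labels: treatment label per term ("PLTL" or "lecture").
    Simplification: permuting labels freely assumes all assignments of
    treatments to terms were equally likely under the randomization scheme.
    Returns (observed mean difference, permutation p-value).
    """
    rng = random.Random(seed)

    def mean_diff(lbls):
        treat = [s for s, l in zip(scores, lbls) if l == "PLTL"]
        ctrl = [s for s, l in zip(scores, lbls) if l == "lecture"]
        return statistics.mean(treat) - statistics.mean(ctrl)

    observed = mean_diff(labels)
    # Count permutations whose statistic is at least as extreme as observed.
    extreme = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if mean_diff(shuffled) >= observed:
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical single-case data: seven terms for one instructor,
# alternating lecture-based instruction and PLTL.
scores = [72, 80, 70, 83, 74, 81, 79]
labels = ["lecture", "PLTL", "lecture", "PLTL", "lecture", "PLTL", "PLTL"]
obs, p = randomization_test(scores, labels)
```

Because inference rests only on the randomization actually performed, no assumptions about normality or about sampling from a larger population are required, which is what makes the approach viable for a single-subject (N = 1) design.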

Article information

Article type
Paper
Submitted
23 Aug 2019
Accepted
17 Oct 2019
First published
28 Oct 2019

Chem. Educ. Res. Pract., 2020, 21, 287–306



V. Rosa and S. E. Lewis, Chem. Educ. Res. Pract., 2020, 21, 287–306. DOI: 10.1039/C9RP00187E

