
Building, Using, and Maximizing the Impact of Concept Inventories in the Biological Sciences: Report on a National Science Foundation–sponsored Conference on the Construction of Concept Inventories in the Biological Sciences

    Published Online: https://doi.org/10.1187/cbe.07-05-0031

    Abstract

    The meeting “Conceptual Assessment in the Biological Sciences” was held March 3–4, 2007, in Boulder, Colorado. Sponsored by the National Science Foundation and hosted by the University of Colorado, Boulder's Biology Concept Inventory Team, the meeting drew together 21 participants from 13 institutions, all of whom had received National Science Foundation funding for biology education. Topics of interest included Introductory Biology, Genetics, Evolution, Ecology, and the Nature of Science. The goal of the meeting was to organize and leverage current efforts to develop concept inventories for each of these topics. These diagnostic tools are inspired by the success of the Force Concept Inventory, developed by the community of physics educators to identify student misconceptions about Newtonian mechanics. By working together, participants hope to lessen the risk that groups might develop competing rather than complementary inventories.

    INTRODUCTION

    For many years, both public demand and our own convictions have pressed us to improve student learning experiences in the biological sciences at the postsecondary level. Although the particular demands (and often our own desires for improvement) vary, they generally agree on two points. First, students need to learn material at a deeper level than rote memory, and second, student learning assessments and course program evaluation should be improved, made meaningful, and integrated into course structure. Essentially, we are asked to help students learn at the conceptual level so that they leave our courses knowing considerably more than how to list, define, and label correctly. For example, the National Research Council's (NRC) report BIO 2010: Transforming Undergraduate Education for Future Research Biologists asked us to teach in ways that address students' misconceptions (NRC, 2003). Research on addressing misconceptions in the sciences suggests that a new concept cannot be learned until the student is forced to confront the paradoxes, inconsistencies, and limitations of the mental model that already exists in the student's mind. Students persist in their erroneous beliefs because those beliefs often seem more reasonable and/or useful to them (e.g., Mayer, 1987; Schneps, 1994).

    While there are teaching approaches designed to facilitate student learning by addressing student misconceptions, faculty also require a reliable means of assessing student learning at the conceptual level. However, assessments that measure concept-level learning are fundamentally different from those used to measure the traditional, rote learning that we (and often our students) are most familiar with. This is where instruments such as concept inventories (CIs) enter the picture: they are research-based instruments designed to measure student conceptual understanding in areas where students are known (through rigorous research) to hold common misconceptions.

    CIs are characterized by the following:

    • They resemble typical multiple-choice tests, but distracters (the “wrong” answers) are based on research findings indicating misconceptions commonly held by students

    • Distracters diagnose or map a particular level of student conceptual understanding

    • Each distracter reveals where student understanding has gone astray or become “stuck”

    A CI is therefore an instrument that not only tells us how many students do not understand a concept, but also (through its individually researched distracters) which conceptual picture they hold instead. Armed with this detailed knowledge of where the students' thinking went astray, instructors can address problem areas more effectively, using appropriate teaching techniques (which have also been identified through research on learning).
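
    To make the diagnostic role of distracters concrete, the short Python sketch below tallies a class's responses to a single, hypothetical CI item whose distracters are keyed to researched misconceptions. The item, answer choices, and misconception labels are illustrative assumptions only, not drawn from any published inventory.

        # Minimal sketch (hypothetical item and labels, not from any published CI):
        # each distracter is keyed to a researched misconception, so tallying the
        # responses yields a diagnosis of student thinking rather than just a score.
        from collections import Counter

        # Hypothetical item: answer choice -> misconception it was written to detect.
        # "C" is the scientifically accepted response.
        ITEM_KEY = {
            "A": "traits acquired during an organism's life are inherited",
            "B": "selection acts for the good of the species",
            "C": None,  # correct response
            "D": "individual organisms adapt on purpose",
        }

        def diagnose(responses):
            """Count how many students selected each misconception-linked distracter."""
            tally = Counter()
            for choice in responses:
                misconception = ITEM_KEY.get(choice)
                if misconception is not None:
                    tally[misconception] += 1
            return tally

        # Example: responses from a class of eight students.
        print(diagnose(["A", "C", "D", "D", "B", "C", "D", "A"]))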

    There are many ways to assess students' conceptual understanding (e.g., concept mapping, learning portfolios, oral exams, etc.). We focus here on CIs for a number of reasons, but primarily because the physics community has demonstrated that a widely adopted CI can successfully provide a compelling argument and catalyst for change in the way teaching and learning are approached. The first such instrument, the Force Concept Inventory (FCI; see, e.g., Hestenes et al., 1992), covered approximately half a semester of a two-semester introductory physics course (Kinematics and Dynamics) but nevertheless ushered in the new field of physics education research that revolutionized the teaching of physics. Widespread adoption of even a single CI also encourages the adoption of other alternative means of learning assessment and program evaluation.

    Although CIs can be very useful in helping faculty identify common conceptual difficulties, their adoption can itself present challenges. Because a CI's format resembles that of a typical multiple-choice test, it can create confusion among both faculty and students, who hold widespread expectations about what tests should or should not look like. Learning environments, like any social setting, are characterized by many (often unarticulated) rules. For example, by the time our students enter a science course, they have already been socialized to believe that the subject matter is based entirely on “facts.”

    Science, technology, engineering, and mathematics (STEM) disciplines are still considered by most students to be “black and white” rather than innovative, creative, or in need of further exploration. Students are rarely exposed to opposing theories on a given subject, and the content is typically treated as things we know or certainties (facts or theories that can be treated as facts). Students are rarely asked to explain, for example, how an older theoretical model was developed from the observations and information available at an earlier time and is then modified or discarded when new information shows that the model no longer explains what is observed. This socialization is reinforced by the most common assessment model: when we assess STEM knowledge, we tend to focus on whether or not students have selected the one correct answer amid other clearly wrong responses. Students and faculty alike come to know what a test “looks like” and what the expectations are for those taking it.

    Because CIs look like a typical multiple-choice test, students and faculty both find it difficult to accept that the inventory represents levels of knowledge and that the distracters are not so much wrong as markers of particular places where students commonly become “stuck” in their understanding: levels of understanding that they must move beyond in order to completely understand and make use of conceptual knowledge. This leads to common misconceptions about CIs. Faculty and students alike often believe that they are “trick” questions rather than questions whose potential responses help inform us of a student's level of conceptual understanding. It is often easier to explain to students what the purpose and format of the questions represent than it is to assist faculty in truly understanding them.

    An additional difficulty in introducing CIs, one unique to the biological sciences, is the relatively fragmented nature of the field's conceptual landscape. Administrative fragmentation alone means that the biological sciences are often housed in several separate units on a campus. This can hamper the development of CIs capable of catalyzing the degree of educational reform seen in other STEM communities, most notably the physics education research community. Over the past 10 years, a number of discussions, workshops, and meetings have included the topic of CIs in the biological sciences. However, the majority of these meetings have focused either on internal needs and uses (e.g., Massachusetts Institute of Technology's development of the Biology Concept Framework; see Khodor et al., 2004) or on the broader context of educational reform (e.g., The National Academies Summer Institute on Undergraduate Education in Biology, 2003).

    The Faculty Institutes for Reforming Science Teaching (FIRST I and FIRST II; 1999–2005) included a component on assessing student-centered learning. As part of those projects, Kathleen Fisher and the San Diego Group held a working group session focused specifically on the assessment of conceptual understanding in the biological sciences. At the same time, the overall goals of these and other such initiatives were much broader: educational reform and the institutionalization of student-centered learning approaches in the biological and other sciences (e.g., the reports and work generated by the National Science Foundation (NSF)/Johnson Foundation Workshop on Bringing Research on Learning to the Geosciences, July 8–10, 2002; NSF, 2002).

    For the first time, then, several groups are focused on efforts to develop concept inventory instruments in the biological sciences. As a result, the NSF elected to sponsor a meeting, hosted by the Biology Concept Inventory (BCI) development team at the University of Colorado, Boulder, for those currently working on CIs in the biological sciences. The meeting included both biologists and researchers in STEM education. It was designed to leverage these efforts so that the resulting instruments complement rather than compete with one another, and to help each group develop and cultivate cogent, persuasive means of communicating the value and appropriate uses of CIs to others who teach in the biological sciences.

    MEETING STRUCTURE

    Although many STEM disciplines (e.g., physics or chemistry) are housed in a single department, it is not uncommon to find multiple departments of biological sciences at a single institution. When we pair this type of fragmentation with the lack of communication typical among specialists in any discipline, it becomes clear why pedagogically focused research in ecology or physiology is often unfamiliar to those working to improve teaching and learning in cell and molecular biology. Thus, a critical goal for this meeting was to ensure that all group members became closely aware of the CI development work being carried out by others in the biological sciences.

    Mike Klymkowsky and Kathy Garvin-Doxas of the University of Colorado organized the meeting. The NSF Directorate for Biological Sciences (Nancy Pelaez and Dan Udovic) provided the impetus for the meeting as well as the initial list of current and recent NSF grantees in the subject area. The final list of participants was arrived at through an iterative process in which the meeting organizers, the NSF program managers, and the researchers on the original list identified current and recent projects that might be relevant and individuals who might have the needed expertise.

    To attain our primary meeting goal, each participant submitted a short paper before the meeting. These papers are available on the BCI website (www.bioliteracy.net). In addition, individual authors were encouraged to seek independent publication in the venue most suitable to their particular discipline. During the meeting each group representative summarized their work in CI development, use, and dissemination (see Figure 1). The 14 papers are as follows, in order of presentation:

    • Teaching and Learning Biology at the Undergraduate Level

      Joyce Parker,1 Andy Anderson,1 and John Merrill1; Michigan State University

    • Cataloging Physiology Misconceptions

      Joel Michael1; Rush Medical College (also, Physiology Educational Research Consortium [PERC])

    • Developing Assessments of Conceptual Understanding Using “Big Ideas”

      Terry P. Vendlinski,1 Joan L. Herman, Sam Nagashima, and Eva L. Baker; University of California, Los Angeles (also, National Center for Research on Evaluation, Standards, and Student Testing [CRESST])

    • C-Tools: Concept-Connector Tools for Online Learning in Science

      Douglas Luckie1 and Diane Ebert-May1; Michigan State University

    • Teaching and Learning Ecology in Undergraduate Courses

      Nancy Stamp1; Binghamton University, State University of New York

    • Changing Teaching Practice: Much More Than a Diagnostic Test

      Charlene D'Avanzo1; Hampshire College (also, Teaching Issues and Experiments in Ecology [TIEE])

    • Genetics Concepts Inventory

      Susan Elrod1; California Polytechnic State University, San Luis Obispo

    • Drawing Out Misconceptions: Assessing Student Mental Models in Biology

      William J. Hoese1 and Merri Lynn Casem1; California State University, Fullerton

    • Building the Biology Concept Inventory

      Kathy Garvin-Doxas1 and Michael Klymkowsky1; University of Colorado, Boulder

    • Learning Gains in a Lecture-Based and a Web-Enhanced, Interactive Introductory Biology Course

      Carl N. McDaniel, Bradford C. Lister,1 Michael Hanna, and Harry Roy; Rensselaer Polytechnic Institute

    • Inventorying Conceptual Understanding of Basic Biology Ideas

      Kathleen Fisher, San Diego State University; Kathy Williams,1 San Diego State University; Dianne Anderson, Point Loma Nazarene University; and Mike Smith,1 Mercer University

    • Thinking Ahead: the FIRST Assessment Database

      Diane Ebert-May,1 Michigan State University; Everett Weber, Michigan State University; Mark Urban-Lurain, Michigan State University; Ryan McFall, Hope College, Michigan; and Matt Jones, University of California-Santa Barbara

    • Preinstructional Assessment of the Basic Chemical and Molecular Literacy of Biochemistry Students

      Duane W. Sears1 and Scott E. Thompson; University of California, Santa Barbara

    • Tree-Thinking Research in Evolution Education (TREE)

      Sam Donovan1; University of Pittsburgh

    Figure 1. Meeting participants: Joyce Parker, Andy Anderson, John Merrill, Joel Michael, Terry P. Vendlinski, Douglas Luckie, Diane Ebert-May, Nancy Stamp, Charlene D'Avanzo, Susan Elrod, William J. Hoese, Merri Lynn Casem, Kathy Garvin-Doxas, Michael Klymkowsky, Bradford C. Lister, Kathy Williams, Duane W. Sears, and Sam Donovan.

    In addition to the participants noted above, Nancy Pelaez and Dan Udovic represented the NSF, and Julie Schneider of the University of Colorado's BCI team kept the meeting minutes.

    The paper presentations took place during the morning of the first day of the meeting. That afternoon, the participants formed breakout groups, based on shared interests, for further discussion of five areas: Introductory Biology, Genetics, Nature of Science, Evolution, and Ecology. Results of the small-group discussions were presented at the end of the first day. In addition, most participants shared their extensive experience with the challenges of introducing new pedagogical approaches, including those that focus on students' conceptual understanding, to faculty members. For example, the website Teaching Issues and Experiments in Ecology (TIEE, http://tiee.ecoed.net/) offers a wide variety of strategies for teaching particular concepts in ecology as well as a wide range of faculty development material that can be applied to any course content.

    The next day began with a review presentation by Julie Schneider of what we had discussed thus far, converging on the common themes and concerns identified through this first face-to-face meeting (her review presentation can also be found on the BCI website at bioliteracy.net). Although the meeting focused on CI development and related issues, no CI instrument exists in a vacuum; sharing experiences of faculty development and classroom implementation proved highly beneficial as well. For example, in its most recent phase, the FIRST project has been compiling a large database of assessment information that makes meta-data available for secondary research purposes. Meta-data describe what the assessment data represent: demographics, type of instrument, pedagogical approach, and so on (cf. Ebert-May, Weber, Urban-Lurain, McFall, and Jones, meeting paper). The database includes tools for autoloading class statistics and makes provision for both classroom management data and assessment data, as well as separate learning-styles and subject-matter classification schemes. We ended the meeting by planning our next steps.

    THEMES THAT EMERGED FROM THE MEETING

    Although the papers themselves provide the most complete information about the range of topics and discussion covered at the meeting, a number of common themes, issues, and challenges emerged. It is very challenging to identify concepts that correspond to commonly held student misconceptions. Even projects that began with the results of prior research (e.g., see the Elrod meeting paper) and then worked to incorporate those results into a CI instrument found that many of the areas identified as concepts were actually topics or were fact based (meaning they are things that students can memorize effectively).

    This is not to argue that memorizing facts (or theories that can be treated as facts) is not important in learning. However, although facts are critical to learning in any discipline, a CI instrument is not the appropriate way to assess fact-based student learning. Thus, an issue common among CI development projects is that “digging” past rote learning to identify concepts is often difficult. This challenge is seen at all levels of postsecondary education in the biological sciences (not just at the introductory level).

    As a consequence, CIs may end up covering more than commonly held student misconceptions in a particular area, to the detriment of their validity. If students are able to use memorized knowledge to rule out possible responses, this test-taking strategy can lower the validity of the CI as a measure of conceptual understanding; such memorized knowledge should be assessed with some other tool. In general, there are fewer concepts in any discipline than there are topics typically taught.

    Another issue that we face is our ability to communicate effectively about CIs. Most important is clarity about what those of us building CIs mean when we use the term “concept” and what others in the biological sciences believe we mean when we use that term. Any lasting and important change in teaching and learning in the biological sciences requires, in addition to assessment and program evaluation, a strong and meaningful vocabulary to communicate the need for and reasonableness of change. In many ways, the term “concept” has either become ubiquitous or remains too broad to be consistently meaningful, both within the biology education community and as we reach out to others. A number of group members replaced the term “concept” with the phrase “big ideas.” Thus, when communicating both among those who focus on enhancing students' conceptual understanding and with those who are new to this area, it may be more helpful to discuss our work in terms of identifying and finding ways to diagnose or inventory the “big ideas” in the biological sciences.

    This issue was raised in part because it is difficult to separate the notion of topics (which is how most postsecondary material is currently organized) from what we mean by concepts. One of the most challenging tasks any CI developer has is to explain to others who teach in the same area why not all topic areas listed on the syllabus (for example) are represented in the content of the inventory. Part of this difficulty comes from the fact that, although it seems easy to distinguish between rote memorization and higher levels of learning, a great deal of any STEM discipline really does require learning vocabulary definitions (much of which can be memorized), the ability to label correctly (again, something that can be memorized), and lists (which can be, and usually are, memorized).

    Although these abilities and skills are absolutely essential to students' ability to progress in a discipline, they do not represent concept-level understanding, nor should they be assessed using CIs. So, when we talk about a vernacular misconception, it is not the same as not knowing the vocabulary definitions. For example, as recently as 15 years ago, students regularly confounded the “survival of the fittest” portion of natural selection with survival of the “strongest” or, even more often, assumed that the “weak” would fail to survive. This type of misconception often does not surface until students are in later courses: courses that require them to apply what they understand about natural selection to a problem that they have not seen before. It also prevents them from moving beyond what we currently know about natural selection because it provides a deep-seated “blinder” that keeps the student from looking at new information (or even current or old information) in new ways. Perhaps discussing the “big ideas” in the biological sciences (rather than “identifying the concepts”) both addresses the tendency of professors to “think” about their discipline in terms of topic areas (there are many more topics than there are concepts) and helps them to better understand the purpose and uses of CIs, particularly because CIs “look like” multiple-choice tests even though that is not what they are or how they are used.

    When CI developers pursue dissemination and adoption, they find that faculty likewise misunderstand the distinction between a topic and a concept. Noting the absence of a topic that they teach, faculty might assume that the CI is incomplete or that it devalues topics they find important. Electing to use a CI and to teach students in a way that focuses on concepts does not necessarily mean eliminating course content. CIs are simply one tool that can provide faculty with an overview of where their particular students' conceptual understanding is strongest (so they can make choices about where to spend the majority of their class time based on actual student need). CIs can also provide a pre- and postinstruction comparison that can be used to determine how well a new course intervention or a regular teaching approach has worked.
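
    One common way to summarize such a pre/post comparison, borrowed from how the physics education community reports FCI results rather than prescribed by any of the meeting papers, is the class normalized gain: the fraction of the possible improvement that was actually achieved. A minimal sketch, assuming class mean scores expressed as percentages:

        # Normalized gain <g> = (post% - pre%) / (100 - pre%), i.e., the fraction of
        # the possible improvement a class achieved. Offered only as an illustration
        # of one common pre/post metric, not as the meeting's prescribed analysis.
        def normalized_gain(pre_mean, post_mean):
            """Compute <g> from class mean scores given in percent."""
            if pre_mean >= 100.0:
                return 0.0  # no room left to improve
            return (post_mean - pre_mean) / (100.0 - pre_mean)

        # Example: class averages of 45% before and 70% after instruction.
        print(round(normalized_gain(45.0, 70.0), 2))  # 0.45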

    Although CI instruments most commonly use a multiple-choice format based on word problems, they can also ask students conceptually oriented questions using pictorial representations or hierarchical structures. Perhaps the best-known alternative to the multiple-choice format is concept mapping. Recent, technology-based concept maps can be used both as an automated assessment tool and as a student learning tool. For example, some groups are using them to help students learn to make connections among processes (see the Luckie paper). When used this way, the focus is not on students' ability to fill in the major categories of, for example, a process (something that can be learned by rote), but rather on their ability to explain how those processes are connected to one another (a concept-level task).
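
    As a rough illustration of how such a concept-mapping task can be scored automatically (a sketch of the general idea only, not the implementation used in C-Tools or any other specific tool), a map can be treated as a set of propositions, that is, concept-link-concept triples, and a student's map can then be compared with an expert's:

        # Illustrative sketch only (not the C-Tools implementation): a concept map is
        # represented as a set of propositions (concept, linking phrase, concept), and
        # a student's map is scored by which expert connections it reproduces.
        EXPERT_MAP = {
            ("glycolysis", "produces", "pyruvate"),
            ("pyruvate", "enters", "Krebs cycle"),
            ("Krebs cycle", "generates", "electron carriers"),
        }

        student_map = {
            ("glycolysis", "produces", "pyruvate"),
            ("glycolysis", "produces", "ATP"),
        }

        matched = student_map & EXPERT_MAP   # expert connections the student made
        missing = EXPERT_MAP - student_map   # connections the student has not yet made
        extra = student_map - EXPERT_MAP     # links to review by hand (insight or misconception)

        print("matched %d of %d expert propositions" % (len(matched), len(EXPERT_MAP)))
        print("missing:", sorted(missing))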

    To complete the meeting, we held a number of small-group discussions designed to develop and facilitate future collaborations among attendees. A framework presented by Parker, Anderson, and Merrill of Michigan State University consisted of three major themes in understanding biological systems: the processing and storage of Energy, Matter, and Information. Three of the subgroups considered this formulation. The group discussing Introductory Biology preferred a larger number of concepts: Laws of Biological Thermodynamics (conservation, energy, mass); self-regulating systems (beyond homeostasis, steady-state systems, and molecular feedback); evolution; and history. The Genetics group found the framework more useful and has since followed up with additional meetings to organize the Genetics CI around concepts of the storage and processing of information. The group discussing Ecology CIs benefited from the discipline's longer engagement with projects aimed at conceptual levels of learning; they examined how the “big ideas” in ecology fit with the framework, but also discussed broader goals.

    The group discussing the Nature of Science felt that it was itself a “big idea” that all students must appreciate in any course in the biological sciences. They considered whether a separate set of questions addressing that concept could be integrated into all CI instruments.

    The Evolution group recognized a unique problem: the need to take into consideration cultural and other beliefs that may represent commonly held misconceptions and that could be presented as distracters in CI questions. Ontology (people's conception of reality) can interfere with scientific learning, since the ontology of science assumes that reality is constant. From the standpoint of science, students' ways of coming to know the truth, their epistemological beliefs, may be at fault if they subscribe to any other ontological system.

    NEXT STEPS

    Our first meeting of biologists and STEM education researchers concluded with planning for next steps. In addition to pursuing further collaborations among group members and continuing our current work, we decided that continued progress in building, using, and maximizing CIs as one means of improving student learning in the biological sciences depends on holding at least one more face-to-face meeting before another year passes. To further this effort, Susan Elrod volunteered to organize and host our next meeting in San Luis Obispo. We agreed on goals for the next nine months and for the meeting itself at the end of that time.

    We decided that it would be critical for each research group planning to attend the next meeting to produce another premeeting paper. This paper will focus on clearly articulating the following:

    1. what each group believes a CI actually is (its goals, purposes, approach, etc.);

    2. the “big ideas” that each CI measures;

    3. how each CI goes about measuring the “big ideas” identified in goal 2; and

    4. how the data collected and measured by each CI are and can be used to inform and improve teaching and learning.

    In addition, papers prepared for this next meeting will discuss the elements of our individual work that are ready for dissemination to the broader community of educators in the biological sciences. These premeeting papers will provide the foundation for the next meeting, the goal of which is to conduct a meta-cognitive analysis of the individual papers so that we can more clearly articulate the state of the art of CI development in the biological sciences and how these instruments can be used to improve student learning (meta-cognitive analyses study the process of creating knowledge as well as the knowledge itself, asking essentially “how does this field know what it knows?”). These documents, along with the results of our next meeting, will enable us to communicate clearly how CIs connect with teaching and learning goals and where to find materials that support concept-based learning connected to particular CIs.

    FOOTNOTES

    This article is a supplement to the Building a Basic Biology Concept Inventory (BCI) Project, awarded by the NSF.

    1 Meeting participant.

    REFERENCES

  • Hestenes D., Wells M., Swackhamer G. (1992). Force concept inventory. Physics Teacher 30, 141-158.
  • Khodor J., Halme D. G., Walker G. C. (2004). A hierarchical biology concept framework: a tool for course design. Cell Biol. Educ. 3(2), 111-121.
  • Mayer R. E. (1987). Educational Psychology: A Cognitive Approach, New York: Harper Collins.
  • National Academies (2003). Undergraduate Education in Biology: Pilot Summer Institute, accessed 7 December 2006, http://dels.nas.edu/summerinst/agenda_2003.shtml.
  • National Research Council (2003). BIO 2010: Transforming Undergraduate Education for Future Research Biologists, Washington, DC: National Academies Press.
  • National Science Foundation/Johnson Foundation Workshop (2002). Bringing Research on Learning to the Geosciences, accessed 7 December 2006, http://serc.carleton.edu/research_on_learning/workshop02/.
  • Schneps M. H. (1994). A Private Universe [video, running time 18 minutes], Santa Monica, CA: Project Star, Harvard University/Pyramid Film & Video.