Children’s knowledge of the earth: A new methodological and statistical approach

https://doi.org/10.1016/j.jecp.2008.03.004

Abstract

In the field of children’s knowledge of the earth, much debate has concerned the question of whether children’s naive knowledge—that is, their knowledge before they acquire the standard scientific theory—is coherent (i.e., theory-like) or fragmented. We conducted two studies with large samples (N = 328 and N = 381) using a new paper-and-pencil test, denoted the EARTH (EArth Representation Test for cHildren), to discriminate between these two alternatives. We performed latent class analyses on the responses to the EARTH to test mental models associated with these alternatives. The naive mental models, as formulated by Vosniadou and Brewer, were not supported by the results. The results indicated that children’s knowledge of the earth becomes more consistent as children grow older. These findings support the view that children’s naive knowledge is fragmented.

Introduction

By late childhood, most children, at least in industrialized countries, know that the earth is a sphere that orbits the sun even though it appears to be flat and motionless. But how do they acquire this counterintuitive theory of the earth? Theoretical perspectives on this question differ greatly, and the answer remains unclear. It is, at least, unlikely that children adopt this theory without direct instruction or otherwise culturally transmitted information (Nobes, Martin, & Panagiotaki, 2005). However, children’s own observations and intuitions may also play a role.

The issue of children’s knowledge of the earth boils down to the question of whether children’s naive knowledge—that is, their knowledge before they acquire the standard scientific theory—is coherent (i.e., theory-like) or fragmented. This question of coherence versus fragmentation may be raised more generally in relation to conceptual development. The coherence side of this debate is occupied by the theory theorists, who claim that children’s understanding is organized into coherent and consistent theories that structure everyday thinking and are resistant to fluctuations (Wellman & Gelman, 1998). The fragmentation side is occupied by the adherents of the knowledge-in-pieces view, who maintain that children’s naive ideas are fragmented. DiSessa (1988) stated that “intuitive physics is a fragmented collection of ideas, loosely connected and reinforcing, having none of the commitment or systematicity that one attributes to theories” (p. 50). These fragments (i.e., phenomenological primitives or “p-prims”) are small, self-explanatory knowledge structures that are abstracted from experience; they “simply happen.” Both sides continue to produce empirical support for their respective views (for a review, see diSessa, 2006; diSessa et al., 2004).

Our current aim is to advance this debate in the field of children’s knowledge of the earth by identifying relevant methodological and statistical limitations of previous studies in this field and by demonstrating solutions to these limitations in two new experiments. We first discuss the two competing views of children’s knowledge of the earth. Subsequently, we identify and discuss several methodological and statistical limitations of the existing empirical studies in this area. We then present two experiments with a new paper-and-pencil test that we call the EARTH (EArth Representation Test for cHildren).

Vosniadou and colleagues are theory theorists in that they hypothesize that children’s naive knowledge of the earth is theory-like, that is, coherent and internally consistent (e.g., Vosniadou & Brewer, 1992). They argued that children construct coherent mental models of the earth; we refer to their view as the mental model account. There are several views concerning mental models in the psychological literature (e.g., Goodwin & Johnson-Laird, 2005; Halford, 1993; Halford & Busby, 2007; Held et al., 2006; Holland et al., 1986; Johnson-Laird, 1983, 2004; Vosniadou, 2002). A mental model is defined as a mental representation that is analogous to the state of affairs the model represents (Johnson-Laird, 1983). The concept of mental models is often used in reference to people’s reasoning (Johnson-Laird, 2004), such as conditional reasoning (Markovits & Barrouillet, 2002). It is assumed that people construct one or several mental models when asked to produce deductive inferences. Vosniadou and colleagues (e.g., Vosniadou, Skopeliti, & Ikospentaki, 2004) also argued that mental models are dynamic, situation-specific representations formed on the spot for the purpose of answering questions. However, children are assumed to have only one model at a given moment. This model is assumed to have predictive and explanatory power and may mediate the interpretation of information and theory revision. Vosniadou (2002) claimed that these mental models play an important role in conceptual development and change. Children’s construction of a mental model is based on their observations and everyday cultural influences. Such observations and cultural influences are, however, subject to the constraints of the underlying conceptual structures (Vosniadou, 2002). In this study, we specifically tested the conceptualization of mental models as proposed by Vosniadou and colleagues. Given their account, children should construct one coherent model on the spot (e.g., during a test) to answer the questions consistently. The consistency of responses across a range of questions is then a measure of the coherence of children’s knowledge.

Vosniadou and colleagues (Vosniadou & Brewer, 1992; Vosniadou et al., 2004) found that 80 to 85% of the children form either initial, synthetic, or scientific mental models of the earth. According to Vosniadou (1994), the initial and synthetic models are embedded in a naive theory of physics, with the initial models being most strongly influenced by certain “entrenched presuppositions” that apply to physical objects in general. These presuppositions are that the earth is flat (i.e., flatness constraint) and that unsupported objects fall down (i.e., support constraint). The initial models are the rectangular earth model and the disk earth model, in which the earth is a flat object, supported by the ground, shaped like a rectangle or disk. When children are increasingly exposed to culturally accepted information about the scientific view of the earth, they try to assimilate this information into their own naive theoretical framework and, thus, form synthetic models (Samarapungavan, Vosniadou, & Brewer, 1996). The synthetic models are subdivided into three classes: the hollow sphere model, in which the earth is a hollow sphere with people living inside on a flat surface; the dual earth model, in which there are two earths: a spherical earth up in the sky and a flat earth on which people live; and the flattened sphere model, in which the earth is round on the sides but flat on the top and bottom, which is where people live. At some point in their development, children adopt the scientific model of a spherical earth with people living all around the world. According to Vosniadou (1994), the acquisition of the scientific model involves a major conceptual reorganization that proceeds through the revision and rejection of children’s presuppositions. The initial and synthetic models discussed above are the most prevalent mental models found in studies with children from Western countries. Although the flatness and support constraints are assumed to be universal, significant cultural variations were found in the manifestation of these presuppositions. For example, Indian children were found to have a model of a flat earth that floats on water, consistent with aspects of lay culture (Samarapungavan et al., 1996).

In contrast, Nobes and colleagues claimed that children have no strong presuppositions of flatness and support. Rather, children are initially theoretically neutral (Nobes et al., 2003). According to this view, the development of children’s understanding of the earth comes about through the gradual accumulation of fragments of cultural information about the earth. These fragments come piece by piece and may be mutually inconsistent. Nobes and colleagues claimed that children’s knowledge is fragmented and unsystematic until they acquire the coherent cultural scientific theory of the earth. We refer to this view as the fragmentation account, which resembles the view of diSessa (1988) in that children’s naive knowledge is thought to be fragmented. However, whereas diSessa claimed that fragmented knowledge is phenomenological, Nobes and colleagues emphasized the influence of culture on children’s knowledge. Children can acquire the correct scientific view relatively early if the cultural information is provided.

Several studies (Nobes et al., 2003, 2005; Siegal et al., 2004) produced little evidence for strongly entrenched presuppositions or for the presence of initial and synthetic models given that even the youngest children seemed to possess some scientific knowledge. Rather, these studies supported the idea that children have fragmented and incoherent knowledge of the earth. In addition, the studies showed the importance of culture in children’s knowledge acquisition. For example, Siegal and colleagues (2004) found that Australian children, who are given early instruction in cosmological concepts, have considerably more scientifically correct knowledge than do English children.

Why are the studies inconsistent, with some results supporting the mental model account (e.g., Vosniadou & Brewer, 1992; Vosniadou et al., 2004) and other results supporting the fragmentation account (e.g., Nobes et al., 2003, 2005; Siegal et al., 2004)? Below we review the research methods that have been used to assess children’s knowledge of the earth. Characteristics of these research methods might explain the diverging outcomes of the different studies. Based on this review, we argue that the preferred method for investigating children’s knowledge of the earth is a paper-and-pencil test with forced-choice items in which all mental models found in industrialized (Western) cultures are represented.

Vosniadou and colleagues (2004) argued that the best way to investigate children’s knowledge of the earth is to use methods that require children to make productive use of the information to which they have been exposed. For this reason, Vosniadou and colleagues prefer interviews with open-ended and so-called generative questions. They noted that factual questions can be answered by simply repeating culturally transmitted knowledge. In addition, children were asked to produce a drawing or a clay model of the earth. Vosniadou and colleagues argued that generative questions and the tasks of drawing or making clay models encourage children to make generative use of the scientific information they have at their disposal and encourage the on-the-spot formation of mental models.

Nobes et al. (2003, 2005) and Siegal et al. (2004) criticized the drawing method of Vosniadou and colleagues. Children may draw an incorrect picture of the earth simply because they are unable to draw a sphere (Blades & Spencer, 1994; Ingram & Butterworth, 1989) or because they have a bias toward orienting drawings of figures to a vertical or horizontal baseline (Pemberton, 1990). Asking children to make a clay model of the earth (Samarapungavan et al., 1996; Vosniadou et al., 2004) is open to similar criticism and may also be difficult for children (Siegal et al., 2004). Moreover, pictures and clay models, as well as interview responses, are difficult to score objectively. Although interrater reliability within a research group may be very high (Samarapungavan et al., 1996; Vosniadou & Brewer, 1992), the reliability between research groups might be low, a well-known phenomenon in the scoring of conservation classifications (Brainerd, 1973).

The interview method used by Vosniadou and colleagues (e.g., Vosniadou & Brewer, 1992; Vosniadou et al., 2004) may also be problematic. During the interview, similar questions (i.e., questions addressing the same issue) were asked to clarify children’s answers and to arrive at a full understanding of the underlying conceptual structures (Vosniadou & Brewer, 1992). This results in a prolonged procedure of repeated questioning. The everyday conventions concerning conversation do not apply in such situations, and this may confuse young children (Siegal, 1997, 1999; Siegal & Surian, 2004). Another potential problem of the interview method is that children might not be familiar with the words that are used (Siegal, 1997). Both research concerning verbal conservation tasks (Bijstra et al., 1989; Donaldson, 1978; Elbers, 1989; McGarrigle & Donaldson, 1975; Rose & Blank, 1974; Siegal et al., 1988) and research concerning the eyewitness testimony of young children (Ceci & Bruck, 1993, 1998; Poole & White, 1991) showed that the above-mentioned problems are a considerable source of error; as a result, both the extent and the consistency of children’s knowledge may be under- or overestimated. In summary, although the open-ended interview method can be useful during an exploratory phase of the study of children’s knowledge of the earth and the different models they construct, a judgment-only (i.e., forced-choice) test is preferred for further research.

Adherents of the fragmentation account (Nobes et al., 2003; Siegal et al., 2004) have used mainly interviews with forced-choice questions that require children to choose between a “scientific” alternative and an “intuitive” one. In addition, children needed to choose the correct shape of the earth from several three-dimensional models. The forced-choice questionnaires elicited more scientific responses than did the open-ended interview method of Vosniadou and Brewer (1992). The forced-choice interviews have the advantages of structure and objective scoring, but the complex social interaction may still confound the results.

Vosniadou and colleagues (2004) raised three points of criticism against the forced-choice questionnaires. First, they argued that responses to the forced-choice questionnaires may be biased toward the scientific view because the choice between a scientific answer and an intuitive one is limited and because children are not asked to justify their answers. However, empirical evidence with respect to conservation of volume (Kingma, 1984; van der Maas & Molenaar, 1996), the class inclusion task (Chapman & McBride, 1992; Hodkin, 1987; Thomas, 1995; Thomas & Horton, 1997), and the balance scale task (Jansen & van der Maas, 1997, 2002; Siegler, 1981) has demonstrated that, if the construction of the test is based on extensive research on possible alternative strategies, it is possible to design a forced-choice task that does not overestimate children’s knowledge and that allows for the detection of alternative strategies.

The second point of criticism by Vosniadou and colleagues (2004) is that forced-choice methods inhibit the generation of mental models other than the scientific model because these methods encourage children to reason on the basis of the scientific model. A test that contains all mental models found in previous research carried out in industrialized (Western) countries may overcome this problem. Moreover, as demonstrated below, the hypothesis that open-ended interviews and drawing tasks encourage the formation of mental models can be tested by assessing whether children show more consistent knowledge after being tested with either a drawing task or an open-ended interview.

A final point of criticism by Vosniadou and colleagues (2004) is that children perform better on forced-choice questionnaires because they need only recognize scientifically correct information instead of retrieving or constructing this knowledge on their own. The advantage of recall tasks (e.g., open-ended interviews) is the assessment of spontaneous thinking, whereas recognition tasks (e.g., forced-choice tests) have the advantages of a more standardized assessment procedure and an objective scoring method. Nonetheless, both responses to a forced-choice task and explanations (i.e., answers to open-ended questions) are indirect measurements of the knowledge of interest, and there is no reason to assume that explanations reflect knowledge better than responses to a forced-choice task do. Moreover, if mental models exist and play an important role in the development of children’s knowledge of the earth, we should be able to detect them with both recognition and recall tasks.

Nobes and colleagues (2005) attempted to develop a research method that was more standardized than an open-ended interview and contained a wider choice of mental models than was represented in the forced-choice questionnaires. Children and adults needed to rank 16 pictures according to how well each picture represented the earth. Each picture was a combination of three properties of the earth: shape (sphere, flattened sphere, hollow, or disk), location of people (around or on top), and location of the sky (around or on top). Using this mainly nonverbal test, Nobes and colleagues did not find any support for the existence of mental models or presuppositions of flatness and support.
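
The 16 pictures thus correspond to the full crossing of the three properties (4 shapes × 2 locations of people × 2 locations of the sky). The following minimal Python sketch (our own illustration; the labels are ours, not those of Nobes and colleagues) simply enumerates this design:

    # Enumerate the 4 (shape) x 2 (people) x 2 (sky) = 16 picture types of the ranking test.
    from itertools import product

    shapes = ["sphere", "flattened sphere", "hollow sphere", "disk"]
    people_location = ["all around", "on top"]
    sky_location = ["all around", "on top"]

    pictures = list(product(shapes, people_location, sky_location))
    print(len(pictures))  # 16 combinations, one per picture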

Although the test of Nobes and colleagues (2005) is a promising new approach, it has several limitations. First, the ranking test must be administered in a time-consuming, one-to-one study situation in which aspects of social interaction and linguistic ability come into play. Second, the ranking test is complex, especially for young children. The pictures of the ranking task vary along three dimensions, and research has shown that young children have trouble with seriation problems, especially when multiple dimensions are involved (Siegler, 1998). Therefore, the ranking test of Nobes and colleagues assesses not only knowledge of the earth but also the ability to create correct orderings. Third, children may experience problems ranking pictures that are inconsistent with their mental model and that they therefore consider incorrect. For example, if children think that the earth is flat, they may judge all pictures of differently shaped earths as equally bad representations of the earth. Hence, using the ranking of these pictures as information about children’s knowledge is questionable. The final problem concerns the analysis of the responses given that few suitable methods of statistical analysis are available for this nonstandard test format (i.e., a ranking of 16 pictures).

We suggest using a structured, nonverbal, forced-choice test that can be administered without one-to-one supervision, so that more children can be tested at the same time and the social interaction is minimal. The training of experimenters and the use of complex coding systems are not required. We constructed a paper-and-pencil test (EARTH) in which the most prevalent models found in earlier studies with samples from Western countries (Vosniadou & Brewer, 1992; Vosniadou et al., 2004) are represented given that these models are relevant for children in The Netherlands. Therefore, children have a rather wide choice of answers, and this should enable us to detect the synthetic models found in previous research. In this way, we address the points of criticism by Vosniadou and colleagues (2004) and solve the problems of the open-ended interview, the forced-choice interview, and the ranking test.
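
As an illustration of this design principle, consider a purely hypothetical forced-choice item (not an actual EARTH item); the point is that every answer option is diagnostic of one of the models described above rather than offering only a scientific versus intuitive contrast:

    # Hypothetical item: "Where do people live on the earth?"
    # Each option maps onto one of the mental models found in earlier research.
    where_people_live = {
        "a": "on top of a flat, supported earth",                # initial (rectangular/disk) models
        "b": "inside a hollow sphere, on a flat floor",          # hollow sphere model
        "c": "on a flat earth, below a round earth in the sky",  # dual earth model
        "d": "on the flat top of a rounded earth",               # flattened sphere model
        "e": "all around a spherical earth",                     # scientific model
    }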

Statistical issues also provide explanations for the various results of the different studies. Below we describe and discuss the data-analytic techniques used in previous research. Subsequently, we introduce a technique, latent class analysis (LCA), that has several important advantages compared with these methods.

Several researchers (Samarapungavan et al., 1996; Vosniadou, 1994; Vosniadou & Brewer, 1992, 1994; Vosniadou et al., 2004) used a methodology that resembles Siegler’s rule assessment methodology (Siegler, 1976, 1981). For each mental model, an expected pattern of responses was formulated, and the degree of correspondence between the expected and obtained response patterns was determined. Vosniadou and Brewer (1992) based these expected response patterns on previous research (e.g., Mali & Howe, 1979; Nussbaum, 1979; Nussbaum & Novak, 1976; Sneider & Pulos, 1983). In some of the early studies (e.g., Vosniadou & Brewer, 1992), revisions of the expected response patterns were made after examination of the data, and some deviations from the expected response patterns were tolerated. In later research (Vosniadou et al., 2004), no deviations were allowed. Using this method, many children (80–85%) were classified as having a mental model of the earth.
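
In outline, the rule assessment logic described above can be summarized in a few lines of Python (a sketch of the procedure, not the code used in these studies; the expected patterns and the 80% criterion shown here are hypothetical):

    # Classify an observed response pattern by its percentage of agreement with
    # the response pattern expected under each hypothesized mental model.
    expected_patterns = {                    # hypothetical expected answers (coded 0/1) on six items
        "flat earth":    [0, 0, 0, 0, 0, 0],
        "hollow sphere": [1, 1, 0, 0, 1, 0],
        "scientific":    [1, 1, 1, 1, 1, 1],
    }

    def classify(observed, criterion=0.8):
        """Assign the best-matching model if agreement reaches the (arbitrary) criterion."""
        best_model, best_match = None, 0.0
        for model, expected in expected_patterns.items():
            match = sum(o == e for o, e in zip(observed, expected)) / len(expected)
            if match > best_match:
                best_model, best_match = model, match
        return best_model if best_match >= criterion else "unclassified"

    print(classify([1, 1, 0, 0, 1, 1]))  # deviates on one item; classified as "hollow sphere"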

There are several problems with the rule assessment methodology (Jansen & van der Maas, 1997, 2002; van der Maas & Straatemeier, in press). First, the assignment of response patterns to mental models takes place on the basis of an arbitrary criterion: a minimum percentage of correspondence between observed and expected responses. Moreover, this criterion can be more or less stringent depending on the number of items used in the test, resulting in different classifications (van der Maas & Straatemeier, in press). A second problem is that only models that are formulated beforehand can be detected; hence, the detection of alternative models is not feasible. Third, the rule assessment methodology does not provide statistical information regarding the goodness of fit of the models (i.e., a quantification of how well the model actually accounts for the data).

LCA solves these problems. Because there are many introductions to LCA (e.g., Clogg, 1995; McCutcheon, 1987; Rindskopf, 1987), we provide only a brief explanation here. LCA belongs to the family of latent structure models. Latent structure models make a distinction between manifest variables (the observed responses) and latent variables (the underlying, unobserved variables). Latent structure models differ in the measurement level of these variables; in LCA, both the latent and the manifest variables are categorical. LCA divides a sample of response patterns into a limited number of groups, the so-called latent classes (McCutcheon, 1987). LCA can be used when people are assumed to differ qualitatively from each other. The latent classes may represent qualitatively different styles, personalities, attachment patterns, developmental stages, strategies, and so on. Children’s responses to the EARTH items can be classified by means of LCA to find possible underlying mental models. For every latent class, the technique gives the probability of belonging to that class (the unconditional class probability). Furthermore, every class (or mental model) is characterized by a pattern of conditional probabilities. A conditional probability indicates the probability of giving a particular response to an item given membership in that class. If the mental model account is correct, we should find a limited number of latent classes with conditional probabilities consistent with the mental models formulated by Vosniadou and Brewer (1992). For example, children with a flat earth model should have high probabilities of responding that the earth is shaped like a rectangle or disk and that people can live only on top of the earth.
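
In formal terms (our notation, not that of the articles discussed), an unrestricted latent class model with C classes expresses the probability of a complete response pattern on J items as

    P(X_1 = x_1, \ldots, X_J = x_J) \;=\; \sum_{c=1}^{C} \pi_c \prod_{j=1}^{J} P(X_j = x_j \mid \text{class } c),

where \pi_c is the unconditional probability of belonging to class c and the factors P(X_j = x_j \mid \text{class } c) are the conditional response probabilities that characterize that class.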

An important advantage of LCA is that it offers statistical fit measures that indicate how well a given latent class model accounts for the data. The criterion used to classify children, therefore, is not arbitrary but is amenable to statistical testing. Thus, LCA may falsify a given theoretical account. For example, in the case of children’s knowledge of the earth, if a one-class model fits the data, it becomes more plausible that children cannot be categorized into qualitatively different mental models of the earth. A single-class model would suggest that children show only quantitative differences in their knowledge. A second advantage is that the technique can be used to model error processes. Although a child may have a coherent mental model, he or she may still produce answers to items that differ from the answers of other children with the same mental model. One cause of such response variation is response error. Deviations from the expected response pattern can be modeled in a latent class model because the conditional probabilities for a given answer need not equal 0 or 1. Moreover, LCA can detect clusters of unexpected response patterns; therefore, the mental models do not need to be known beforehand. These unexpected response patterns may suggest unanticipated alternative models. LCA has proven its value in a variety of developmental studies (e.g., Boom et al., 2001; Jansen & van der Maas, 1997, 2001, 2002; Raijmakers et al., 2004; Rindskopf, 1987; van der Maas, 1998).
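
To make the estimation and model comparison concrete, the following self-contained Python sketch (our own illustration; it is not the software used in the studies reported here, and the simulated data are only a placeholder for EARTH item responses) fits latent class models to binary item responses with the EM algorithm and compares solutions with the Bayesian information criterion (BIC):

    # Minimal EM estimation of a latent class model for binary item responses,
    # followed by BIC-based comparison of 1- to 4-class solutions (illustrative only).
    import numpy as np

    def fit_lca(X, n_classes, n_iter=200, seed=0):
        """Fit a latent class model to a binary response matrix X (children x items)."""
        rng = np.random.default_rng(seed)
        n, J = X.shape
        pi = np.full(n_classes, 1.0 / n_classes)              # unconditional class probabilities
        theta = rng.uniform(0.25, 0.75, size=(n_classes, J))  # conditional P(response = 1 | class)

        for _ in range(n_iter):
            # E-step: posterior class-membership probabilities for each child
            log_lik = (X[:, None, :] * np.log(theta) +
                       (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
            log_post = np.log(pi) + log_lik
            log_post -= log_post.max(axis=1, keepdims=True)
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)

            # M-step: update class sizes and conditional probabilities (lightly smoothed)
            pi = np.clip(post.mean(axis=0), 1e-6, None)
            pi /= pi.sum()
            theta = (post.T @ X + 1e-3) / (post.sum(axis=0)[:, None] + 2e-3)

        # Observed-data log-likelihood and BIC for model comparison
        log_lik = (X[:, None, :] * np.log(theta) +
                   (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        a = np.log(pi) + log_lik
        m = a.max(axis=1, keepdims=True)
        ll = (m[:, 0] + np.log(np.exp(a - m).sum(axis=1))).sum()
        n_params = (n_classes - 1) + n_classes * J
        bic = -2 * ll + n_params * np.log(n)
        return pi, theta, bic

    # Hypothetical usage: simulated responses standing in for EARTH item data.
    rng = np.random.default_rng(1)
    X = (rng.random((300, 8)) < 0.7).astype(int)
    for c in range(1, 5):
        _, _, bic = fit_lca(X, c)
        print(f"{c} class(es): BIC = {bic:.1f}")

In applied work, dedicated latent class software additionally provides likelihood-ratio and other fit statistics and handles items with more than two response options; the sketch is intended only to make the unconditional probabilities, the conditional probabilities, and the role of fit measures concrete.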

In what follows, we present two experiments—a pilot and a main experiment—in which we studied children’s knowledge of the earth. With these experiments, we aim to address the coherence versus fragmentation debate in the field of children’s knowledge of the earth. As argued, we use a structured paper-and-pencil test (i.e., EARTH) in which the most prevalent mental models found in previous research are represented, and we apply LCA to detect possible underlying mental models.

Section snippets

Pilot experiment

In the pilot experiment, 328 children (4–11 years of age) from two different primary schools in The Netherlands completed the first version of the paper-and-pencil test, the EARTH-1, followed or preceded by a drawing task in which children needed to produce a drawing of the earth with people, trees, the sky, the sun, and the moon. More details about the scoring of the EARTH-1 and the assessment procedure can be found in the Method section of the main experiment.

Below we discuss the main findings of the pilot experiment.

Main experiment

A new test version, the EARTH-2, was constructed for the main experiment. The items of the EARTH-1 that posed problems in the pilot experiment were revised or removed, and new items were added. Moreover, we used three-dimensional pictures in the EARTH-2. Care was taken to ensure that the mental models of Vosniadou and Brewer (1992) were used consistently throughout the test. We used an open-ended interview, instead of a drawing, as a generative task. Some children in our sample were interviewed before they completed the EARTH-2, which allowed us to test whether this generative task encouraged the formation of mental models and, thus, more consistent responses.

Acknowledgments

This research was supported by a grant from the Netherlands Organization for Scientific Research (NWO). We thank Eric-Jan Wagenmakers and Conor Dolan for thorough reviews of previous drafts of this manuscript, and we thank Joost van der Schee for illustrating the EARTH.

References (70)

  • D. Rindskopf (1987). Using latent class analysis to test developmental models. Developmental Review.
  • A. Samarapungavan et al. (1996). Mental models of the earth, sun, and moon: Indian children’s cosmologies. Cognitive Development.
  • A. Samarapungavan et al. (1997). Children’s thoughts on the origin of species: A study of explanatory coherence. Cognitive Science.
  • M. Siegal et al. (2004). Conceptual development and conservational understanding. Trends in Cognitive Sciences.
  • M. Siegal et al. (1988). Misleading children: Causal attributions for inconsistency under repeated questioning. Journal of Experimental Child Psychology.
  • R.S. Siegler (1976). Three aspects of cognitive development. Cognitive Psychology.
  • H.L.J. van der Maas et al. Catastrophe analysis of discontinuous development.
  • S. Vosniadou et al. (1992). Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology.
  • S. Vosniadou et al. (1994). Mental models of the day and night cycle. Cognitive Science.
  • S. Vosniadou et al. (2004). Modes of knowing and ways of reasoning in elementary astronomy. Cognitive Development.
  • M.S. Aldenderfer & R.K. Blashfield (1984). Cluster analysis (Sage University Paper series on Quantitative...
  • J. Bijstra et al. (1989). Conservation and the appearance–reality distinction: What do children really know and what do they answer? British Journal of Developmental Psychology.
  • C.J. Brainerd (1973). Judgments and explanations as criteria for the presence of cognitive structures. Psychological Bulletin.
  • S.J. Ceci et al. (1993). Suggestibility of the child witness: Historical review and synthesis. Psychological Bulletin.
  • S.J. Ceci et al. Children’s testimony: Applied and basic issues.
  • M. Chapman et al. (1992). Beyond competence and performance: Children’s class inclusion strategies, superordinate class cues, and verbal justifications. Developmental Psychology.
  • C.C. Clogg. Latent class models.
  • I.A. Diakidoy et al. (1997). Conceptual change in astronomy: Models of the earth and of the day/night cycle in American-Indian children. European Journal of Psychology of Education.
  • A.A. diSessa. Knowledge in pieces.
  • A.A. diSessa. A history of conceptual change research: Threads and fault lines.
  • M. Donaldson (1978). Children’s minds.
  • E. Elbers (1989). Het conservatie-experiment en de verwachting van het kind over de interaktie met de proefleider [The conservation experiment and the child’s expectations concerning the interaction with the experimenter]. Tijdschrift voor Ontwikkelingspsychologie.
  • J.L. Fleiss (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin.
  • G.P. Goodwin et al. (2005). Reasoning about relations. Psychological Review.
  • G.S. Halford (1993). Children’s understanding: The development of mental models.