Helping Learners Become Their Own Teachers: The Beneficial Impact of Trained Concept-Mapping-Strategy Use on Metacognitive Regulation in Learning

by Virginia Deborah Elaine Welter 1, Lukas Bernhard Becker 2 and Jörg Großschedl 1,*

1 Institute of Biology Education, University of Cologne, 50931 Cologne, Germany
2 Evangelische Schule Steglitz, 12167 Berlin, Germany
* Author to whom correspondence should be addressed.
Educ. Sci. 2022, 12(5), 325; https://doi.org/10.3390/educsci12050325
Submission received: 23 March 2022 / Revised: 27 April 2022 / Accepted: 3 May 2022 / Published: 5 May 2022

Abstract

Several empirical studies have shown that, during COVID-19-related distance learning, many learners struggled to carry out the extent of self-regulated learning activities required to ensure ongoing learning progress. Due to the significance of self-regulated learning for students' learning success, the construct of metacognition also gained in importance, since the corresponding skills are closely related to successful self-regulation in learning. In our study, we focused on the learning strategy of concept mapping (CM), which is (1) directly associated with beneficial effects on learning and retention performance and (2) considered to have constructive side effects on metacognitive skills and, thus, self-regulated learning. To realize CM's full potential for improving cognition-related learning performance, however, appropriate training in this learning strategy seems to be required. This raised the question of whether and to what extent appropriate CM training is also necessary to improve the metacognitive skills of our participants (N = 73 university students of different majors) in terms of the accuracy of their judgments of learning (JOLs). Although we were able to show, in a previous study, that CM-training intensity did not affect the absolute level of these JOLs, the results of our current study show that there is, nevertheless, a significant effect on the JOLs' accuracy when considering their relationships to objective learning performance. Thus, CM-training intensity affects the competence of metacognitive monitoring. In addition, we found that training conditions including scaffolding and feedback tend to counteract systematic misjudgments, particularly in the domain of conceptual knowledge. Practical implications and recommendations derived from these results are discussed.

1. Introduction

An actively designed and autonomously directed acquisition of knowledge and skills on the part of the learners (i.e., self-regulated learning) is a central element of constructivist learning theories, which emphasize the importance of the individual learner's activity for his or her learning success [1,2]. The far-reaching practical implications of this statement became quite apparent recently. Since the beginning of the COVID-19 pandemic in spring 2020, various containment strategies have been implemented worldwide, including the temporary closure of schools and universities in many countries. The associated shift of school and university students' education from regular on-site to digital distance learning required far-reaching changes in the design and implementation of learning opportunities, in communication between teachers and students, and in the assessment of learning success [3,4,5]. In connection with this, particular interest was attached to learners' autonomy in dealing with learning content. Meanwhile, several empirical studies have shown that many learners struggled to carry out the extent of self-regulated learning activities that was required to ensure ongoing learning progress [6,7,8,9,10]. Accordingly, the academic performance of many learners has declined [11,12,13,14]. Due to the significance of self-regulated learning for students' learning success [6,15,16,17], the successful use of learning strategies and especially the construct of metacognition also gained in importance, since the corresponding skills are considered to be closely related to successful self-regulation in learning [16,18,19,20].
By selecting and applying suitable learning strategies, learners can considerably support their learning progress [21]. In simplified terms, such learning strategies can be divided into cognitive and metacognitive strategies. Some authors name strategic resource management, which fulfills a supporting function for successful learning by helping to manage learning places and times, as an additional third category [21,22]. However, one must keep in mind that the functionalities of these strategy classes often cannot be clearly differentiated from one another [22].
Cognitive strategies support the integration of new information into existing cognitive structures and include different types of strategies [23]: Elaboration strategies (e.g., activating prior knowledge, taking notes, asking questions, developing ideas) primarily promote semantic understanding by linking new information to prior knowledge, thus supporting understanding and retention. In contrast, organization strategies can be used to condense and structure new information to its essentials, for example, by summarizing texts. Whereas these strategies of organization and elaboration belong to the so-called deep-learning strategies, mnemonics and repetition strategies (e.g., multiple reading) are, by contrast, categorized as surface strategies. However, empirical studies have since shown that such surface strategies can also have far-reaching effects on the learning process, especially by supporting elaboration.
In contrast to cognitive strategies, metacognitive strategies support learning success more indirectly, by modulating the process of information integration and adapting it to current requirements (i.e., current situations, tasks, and the learner’s understanding) [23]. This adaptation can take place, for example, by planning, monitoring, evaluating, and regulating learning activities as well as by selecting and using adequate cognitive learning strategies. In this respect, it can be stated that cognitive learning strategies are regulated metacognitively [23,24,25,26,27,28]. Accordingly, the selection and effective use of both cognitive and metacognitive strategies in learning processes require the availability of metacognitive competencies, including metacognitive knowledge (e.g., about learning goals and adequate learning strategies) as well as monitoring and regulative elements [28]. In this context, it is also worth mentioning that cognition and metacognition go hand in hand, continuously and without explicit intention. In other words, metacognition also affects individual learning in an ‘unprompted’ manner [29]. Those who ask themselves whether they feel like doing a certain activity, or whether they can swim at a level where crossing a river is not a problem, or how sufficient their own preparation is to sign up for an exam, or who avoid certain situations because they feel inferior in conversations, etc., are practicing (usually unconscious) metacognition and behavioral regulation. Beyond that, it is of course also possible to make a conscious decision to reflect on one’s own learning process, to consciously seek and consider metacognition, or to use adequate strategies to deepen and elaborate certain (meta-) cognitive insights. However, the required competencies on the part of learners are neither to be taken as a given, nor do they appear on their own, but have to be learned and continuously trained.
Considering the central aspects of metacognitive competencies, it is obvious how closely they are linked to self-regulated learning, in terms of a large overlap between the two constructs. The autonomous activity of learners in practicing self-regulated learning concerns the motivated acquisition of knowledge and skills as well as the goal-oriented adaptation and modification of cognitive activity, e.g., by choosing suitable learning strategies. Thus, self-regulated learning aims at formatively evaluating and optimizing one's own learning process in a strategy-based and goal-oriented manner. For this, learners must be aware of what they already know, what they still have to learn, and how they can best achieve their learning goal [23,30].
In the temporal development from highly guided learning in school to almost self-directed learning in higher education, it is crucial that metacognitive competencies are actually teachable and learnable to help learners become their own teachers. The development of corresponding skills can be supported by promoting an understanding of both the difference and the connections between cognition and metacognition, as well as by instructions for task-specific attentional control or consistent strategy training [31,32,33]. Furthermore, research results suggest that the first-time learning of self-regulatory/metacognitive competencies requires both their explicit introduction and their embedment into a specific subject area, in order to ensure the best possible outcome for learners [23,34,35]. Accordingly, strategy modification and transfer should rather be pursued in subsequent steps [36,37].
Moreover, self-regulated learning is also associated with affective and motivational aspects [28]. Since the relevant metacognitive activities are basically self-communicative actions, they are associated not only with the abilities but also with the willingness to engage in introspection and self-regulatory activities [38,39]. Accordingly, in realizing self-regulated learning, metacognitive knowledge must be necessarily accompanied by the learner’s potential to initiate, direct, and maintain learning-related efforts in an autonomous manner, as well as by a realistically balanced positive self-efficacy expectation [23,40,41]. Consequently, the specific triad of cognition, metacognition, and motivation must be adequately addressed when promoting self-regulated learning [40,42,43,44].

1.1. Judgments of Learning

From the perspective of empirical research, the operationalization and assessment of metacognitive competencies is a particular challenge, as neither thoughts themselves nor processes of becoming aware of them are directly observable, which is why their measurement always depends on self-reports [42,45]. In the context of self-regulated learning, learners are expected to respond to, for example, the rather global metacognitive question of how one's beliefs about learning influence one's own learning, or the rather specific question of how well a particular issue is actually known or understood. A popular method for investigating the extent to which learners know that they have learned something is to ask them to make metacognitive judgments of learning (JOLs) [46,47,48] and to compare these judgments with objective learning performance afterward. Consequently, JOLs are learners' qualitative or quantitative (primarily in terms of percentage) predictions about how successfully certain information has been learned and can probably be remembered in the future [48].
JOLs are effective for self-regulated learning if they are accurate, i.e., if the correlation between the level of confidence to (not) remember an item and the actual (non-) remembering is as close as possible [42,49,50]. Consequently, JOLs are a specific measure of metacognitive monitoring competence that informs learners' decisions for effectively managing their learning process [47,51], e.g., by answering the questions of how much time or what intensity of processing should be assigned to a specific learning content [50,52]. Therefore, the accuracy of monitoring a learning process by using JOLs is crucial for learning efficiency, as any discrepancy between a JOL and the corresponding actual learning performance can lead to less-efficient learning [50,52,53,54,55]. In this context, some empirical studies showed a certain sensitivity of JOLs to specific influencing factors such as the number of revisions of learning material, the delay between learning and JOL, or the degree of abstraction or affective connotation of a learning content [47]. A characteristic overestimation of one's own learning performance, for example, is often found when learners base their judgment only on the ease of information processing, disregarding a basic check of how well they can actually recall this information from memory [49,56]. Nevertheless, most studies on JOLs provided evidence for the reliability and validity of this measure [47].

1.2. Concept Mapping

Given the importance of accurate metacognitive JOLs for self-regulated learning, the question arises of whether and how this accuracy can be promoted, for example, by a learning strategy that explicitly addresses metacognitive skills. A closer look at university learning shows that many fields of study (e.g., the natural science subject of biology) are characterized by highly abstract topics with a large number of complex interrelationships, which have to be learned by the students much more autonomously than in high school [57,58,59,60]. Resulting difficulties for learners, for example, in continuously assembling new pieces of (abstract) information into a profound knowledge base [61,62,63], often lead to uncertainty, stress, exam nerves, declines in performance, and possibly even to premature exmatriculation (i.e., dropping out) in the end [64,65]. The closure of universities to contain the COVID-19 pandemic has certainly exacerbated these potential problems [66,67,68,69]. Interventions that reduce students' perceived stress as well as time and performance pressure can, for example, focus on practicing specific learning strategies, provided these are effective in promoting self-regulated learning [70,71].
Concept mapping (CM) is such a learning strategy, which both stimulates deeper understanding [72,73,74] and promotes metacognitive skills in terms of self-regulated learning [75,76,77]. Concept maps are diagrams constructed from graphical and linguistic elements that represent semantic networks in terms of an externalized concept-oriented knowledge representation [78]. CM can support learners in recognizing their own understanding and especially mis- and non-understanding; for example, when they notice difficulties in selecting relevant concepts from a text, in arranging concepts in a structured way, or in describing the relationships between concepts [79,80]. The steps required to create a concept map (selecting, arranging, and linking concepts) can help learners monitor their learning progress, which directly contributes to the development of metacognitive skills and is, therefore, one of the main arguments for using CM [72,75,76,81].
However, since CM is a comparatively demanding and complex learning strategy [82], it does not tend to support learning without prior training [83,84,85,86,87]. In a previous study, we have already been able to demonstrate the crucial functionality of CM training regarding both CM strategy skills and the acquisition of subject-matter knowledge when using the learning strategy of CM [85]. The quasi-experimental design of this study was based on three different training conditions: the first group (T++) received extensive CM training, including additional scaffolding and feedback elements; the second group (T+) received similar CM training to the first group but without any additional scaffolding or feedback; and the third group (T−) received a control training in applying common non-CM learning strategies and, apart from that, a short theoretical introduction to the CM strategy. After three weeks of training, the participants took part in a one-week learning phase in which they were asked to acquire knowledge about the contents of a learning text on the topic of cell biology by using CM. At the end of this learning phase, we asked the participants to make JOLs on what percentage of the learning text’s content they would probably still be able to remember a week later. Although the two CM-training groups (T++ and T+) numerically predicted slightly higher retention of the material compared to the control group (T−), our analyses showed no statistically significant differences between the three groups, indicating that CM training did not seem to influence the participants’ absolute level (~50% in every group) of metacognitive prediction. 
One week after the learning phase, the two-week test phase began, during which three objective measures of learning success were assessed: (1) declarative knowledge (knowledge about discrete facts), (2) structural knowledge (knowledge about connections between facts regarding a specific topic), and (3) conceptual knowledge (knowledge about the principles of connections between facts at a higher level of abstraction, enabling the transfer of knowledge). Our results showed statistically significant group differences regarding structural and conceptual knowledge, as well as differences by trend regarding declarative knowledge, with the T++ and T+ groups (CM-training groups) consistently outperforming the control group (T−). Furthermore, on average, there were high correlations across all groups between the participants’ JOLs and their declarative as well as structural knowledge, but we found no significant association with their conceptual knowledge. However, these results did not yet answer the question of whether the three groups might also differ in the accuracy of their JOLs. Thus, we would like to address this desideratum in our present report. Since examining metacognitive skills inherently requires a different theoretical contextualization than our previous study’s focus on cognition-related learning outcomes, we considered it appropriate to explore this specific question apart from our previous study report [85].

1.3. Research Question

Considering our previous study's finding of the positive effects of higher CM-training intensity on cognition-based knowledge acquisition [85] against the background of the strong interrelations between cognition and metacognition [23,24,25,26,27,28], one can assume that extensive training is also beneficial for metacognitive skills. If the closeness of the associations between JOLs and objective performance measures in learning with CM differed systematically depending on the CM-training intensity prior to applying CM in learning, this would imply that learners make more- or less-accurate metacognition-based judgments of their own learning depending on their familiarity with the CM learning strategy. Accordingly, we want to answer the question of to what extent the effectiveness of CM as a specifically metacognition-promoting learning tool also depends on previous training. In addition, we want to identify the potential metacognition-promoting functions of scaffolding and feedback elements by comparing different training conditions.

2. Materials and Methods

To answer our research question, we used the data from our previous study [85], whose report provides further methodological details to anyone interested. A description of both the three quasi-experimental training conditions and the study’s procedure was already given above (see Section 1.2), so we confine ourselves at this point to referring to a summarizing timeline (see Figure 1) as well as providing information on our sample.
Figure 1. Timeline of the study’s procedure.
Annotation. By choosing different learning objectives in the training and the learning phase, we wanted to prevent test effects; T++ (n = 27) = CM training including scaffolding and feedback elements; T+ (n = 21) = CM training without any additional scaffolding or feedback elements; T− (n = 25) = non-CM-learning-strategy training including scaffolding and feedback elements.
Our sample consisted of N = 73 university students recruited by offering a voluntary course on learning improvement and the successful application of learning strategies at our university. The grouping was based on the participants’ self-selection, i.e., any interested students could choose one out of three weekdays on which they would take part in our study during the following six weeks. Nevertheless, we reached comparability of the three groups in terms of potentially confounding variables (age, sex, GPA, semester of study, prior knowledge about cell biology), which allowed for stating equal baseline conditions (see Table 1 and Table 2). On average, our participants were 22.6 years old, 78% were female, and 56% were enrolled in a science major, whereas all others were primarily majoring in the humanities.

2.1. Operationalization of Variables

Below, we provide a summarizing overview of our dependent variables’ operationalizations. A detailed description of the measures, including all items and a comprehensive coding manual, can be found within the report on our previous study and its supplementary material [85].
Our participants’ JOLs were assessed at the end of the learning phase by asking them to estimate on a scale from 0 to 100% (discretely graded in 10% intervals) how much of the learning material’s content they would probably remember a week later.
One week after capturing their JOLs, the participants’ declarative knowledge on the topic of cell biology was assessed using a self-designed multiple-choice test. A sample item of this test was “Which of the following general statements about the plasma membrane are correct?”. The homogeneity of the 28 test items was α = 0.80 in our sample.
Furthermore, at this point of measurement, we assessed the participants’ structural knowledge on the topic of cell biology using a self-designed similarity judgment test (SJT). For this purpose, we first selected 11 key terms/concepts of the participants’ learning material on cell biology and generated all pairwise combinations of these afterward (e.g., “Smooth ER—Nucleus”). Thus, the final SJT consisted of 55 items. The task was to rate each of these items regarding the semantic proximity of the two terms/concepts forming the respective pair (scale ranging from 1 = minimally related to 9 = strongly related). In the framework of an expert evaluation (n = 7), our SJT showed excellent interrater reliability, ICC = 0.95, CI95% [0.93, 0.97]. The closeness of agreement between our participants’ responses and the average expert solution was ultimately used as a measure of the participants’ structural knowledge.
Finally, one week after assessing our participants’ declarative and structural knowledge, we captured their conceptual knowledge on the topic of cell biology using a self-designed open-answer test. This test required the flexible use of the acquired knowledge by solving transfer tasks, such as those typically set in examinations. A sample item of this test was “How do you explain the fact that lipophilic hormones act inside cells, but hydrophilic hormones act outside cells? Justify your answer by referring to your knowledge of the structure of plasma membranes”. The participants’ answers were coded as wrong (0 points), partially correct (1 point), or correct (2 points). The homogeneity of the 15 test items was α = 0.79 in our sample.
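The item-homogeneity coefficients reported above (Cronbach's α) can be computed from a per-item score matrix; the following is a minimal sketch with toy data (not the study's items or scores):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list per item,
    with scores aligned across participants."""
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    person_totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_vars / variance(person_totals))

# Toy data: 3 items scored 0-2 (as in the open-answer test), 4 participants
items = [
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 4))  # 0.9375
```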

2.2. Statistical Analyses

To answer our research questions (see Section 1.3), we compared the group-specific average correlation coefficients indicating the associations between the participants’ JOLs and their knowledge tests’ scores.
Since a previous check of the distributional assumptions of our data had revealed minor deviations from the normal distribution in 4 of 12 condition-variable constellations, we decided to estimate bivariate Spearman rank correlations whenever any of these constellations was involved in one of the 9 correlations calculated. In all other cases, bivariate Pearson correlations were estimated.
Subsequently, we applied Fisher-z transformations to the estimated correlation coefficients r. This procedure is applicable not only to Pearson coefficients, but also to Spearman coefficients in the same manner, provided N > 10 [88,89,90]:
$$ Z_r = \frac{1}{2}\ln\!\left(\frac{1+r}{1-r}\right) $$
Transforming correlation coefficients using the Fisher-z procedure serves to normalize the sampling distribution and to stabilize the sampling variance, which is then interpretable as a function of the sample size N [88,89,90]:
$$ \sigma^2_{Z_r} = \frac{1}{N-3} $$
Consequently, this transformation is necessary to compare the magnitudes of different correlation coefficients and test them for significant differences. The statistical comparison of two Fisher-z-transformed correlation coefficients tests the null hypothesis that the two correlation coefficients do not differ from each other (H0: $Z_{r_1} = Z_{r_2}$) [88,89,90]. The procedure does not require the compared correlation coefficients to be statistically significant themselves, i.e., coefficients can also be compared with each other if they have previously proven to be insignificant in bivariate correlations [91,92].
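The comparison procedure can be sketched as follows. The coefficients plugged in below are the structural-knowledge correlations of T++ and T− from Table 4, used purely for illustration; the exact test statistics of the study depend on implementation details reported in [85], and the T− coefficient is a Spearman coefficient, which the transformation tolerates for N > 10:

```python
from math import atanh, sqrt, erfc

def compare_correlations(r1, n1, r2, n2):
    """Two-sided z-test of H0: both samples share the same population
    correlation, based on Fisher-z-transformed coefficients."""
    z1, z2 = atanh(r1), atanh(r2)           # atanh(r) = 0.5 * ln((1 + r) / (1 - r))
    se = sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = erfc(abs(z) / sqrt(2))              # two-sided p-value under the standard normal
    return z, p

# Illustration: JOL-structural-knowledge correlations of
# T++ (r = 0.70, n = 27) and T- (r = 0.31, n = 25)
z, p = compare_correlations(0.70, 27, 0.31, 25)
```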

3. Results

A closer look at the descriptive statistics of the variables considered (see Table 3) reveals that participants in all three groups stated that they would probably remember only about 44% to 58% of the learning material one week later. Participants of the T+ group gave the highest JOLs, i.e., they were the most confident about their retention performance, followed by the T++ group, and, finally, the T− group. The same ranking is found for the domain of declarative knowledge: participants of the group T+ performed slightly better than those of the group T++ and these, in turn, performed slightly better than those of the group T−. However, we find a different pattern of results regarding the two other domains of knowledge, with the two CM-training groups (T++ and T+) performing considerably better than the T− group.
Bivariate correlations (see Table 4) show, first, that for all groups there are high and significant correlations between participants' JOLs and their test results on declarative knowledge. In contrast, there are no significant correlations between JOLs and test results on conceptual knowledge for any of the three groups. Regarding the test scores on structural knowledge, high and significant correlations with JOLs exist only for participants of the two CM-training groups (T++ and T+), but not for those of the T− group. Numerically, a similar pattern results for the domain of declarative knowledge: regardless of the statistical insignificance of the group difference, the correlation with the T− group's participants' JOLs is apparently lower than those of the two CM-training groups (T++ and T+). Finally, the pronounced negative correlation between the T+ group's participants' JOLs and their test scores on conceptual knowledge is striking compared to the corresponding null correlations of the two other groups (T++ and T−), which had additionally received scaffolding and feedback during their training (see Figure 2).
Table 4. Group-specific correlations between JOLs and knowledge tests' scores.

| Group | n | r (JOL—Declarative knowledge) | r (JOL—Structural knowledge) | r (JOL—Conceptual knowledge) |
|-------|----|-------------------------------|------------------------------|------------------------------|
| T++ | 27 | 0.74 *** | 0.70 *** | 0.06 a |
| T+ | 21 | 0.76 *** | 0.71 *** | −0.35 |
| T− | 25 | 0.54 a,** | 0.31 a | −0.02 a |
Annotation. T++ = CM training including scaffolding and feedback elements; T+ = CM training without any additional scaffolding or feedback elements; T− = non-CM-learning-strategy training including scaffolding and feedback elements; a = Spearman correlation coefficient (instead of Pearson); ** p < 0.01, *** p < 0.001.
Figure 2. Graphical representation of the numerical values of the bivariate correlations between JOLs and knowledge tests’ scores of the three training groups.
Annotation. T++ (n = 27) = CM training including scaffolding and feedback elements; T+ (n = 21) = CM training without any additional scaffolding or feedback elements; T− (n = 25) = non-CM-learning-strategy training including scaffolding and feedback elements.
Comparisons of the Fisher-z-transformed correlation coefficients (see Table 5) reveal statistically significant differences only in the case of structural knowledge, with the two CM-training groups (T++ and T+) clearly outperforming the T− group. Regarding the other two knowledge domains, there are, at best, differences by trend, which, however, do not reach the necessary level of significance. In the case of declarative knowledge, the two CM-training groups (T++ and T+) again seem to outperform the T− group, but with respect to conceptual knowledge, the two groups that received additional scaffolding and feedback during their training (T++ and T−) seem to benefit. Whereas these two groups showed at least null correlations between their participants' JOLs and conceptual knowledge, participants of the T+ group (without any scaffolding and feedback) appeared to systematically overestimate themselves (see Table 3 and Table 4).

4. Discussion

With our study, we aimed to answer the question of whether our participants' metacognitive JOL accuracy after learning with CM differed depending on, on the one hand, the intensity of previously received CM training and, on the other hand, the integration of scaffolding and feedback elements into such training (see Section 1.3). Our results (see Section 3) show that a higher CM-training intensity is (by trend) associated with more accurate JOLs regarding declarative and structural knowledge, whereas training conditions that include scaffolding and feedback tend to counteract systematic misjudgments, particularly in the domain of conceptual knowledge.
The latter result can certainly be attributed to the fact that both implicit (scaffolding) and explicit (feedback) external estimates of achievement helped our participants to assess their own learning more accurately. However, it must be specifically noted both that scaffolding and feedback were only part of the training phase, not of the learning phase, and that a different learning objective had been assigned to each of the two phases to avoid test effects. Thus, none of the participants ever received an external assessment of his or her level of knowledge about the learning phase’s topic of cell biology (which was also the target of the test phase). Scaffolding and feedback must, therefore, have already become effective during the training phase on the topic of the psychological construct of intelligence (see Figure 1). A potentially beneficial effect, thus, quite obviously relates to the metacognitive assessment of one’s own learning and not to the simple adoption of any external assessments of one’s own topic-related level of performance. This is in line with previous findings on the beneficial effect of different kinds of performance feedback on self-regulated learning skills [34,93,94,95,96]. However, in our study, the two groups that had received scaffolding and feedback in the training phase showed only a null correlation between their JOLs and their scores on the conceptual-knowledge test. Two things must be considered when interpreting this finding: first, the null correlation must be considered in light of the T+ group’s participants’ systematic overestimation of their own abilities; as such, an overestimation was at least avoided in the scaffolding and feedback groups. And second, due to short-term and unexpected organizational reasons, our participants’ conceptual knowledge had to be assessed one week later than the other knowledge measures. 
Thus, the correlations between JOLs and conceptual-knowledge test scores may have been different, or the differences between the three groups may have been larger, if conceptual knowledge had also been assessed one week after the completion of the learning phase. Since we had, furthermore, clearly stated a one-week retention period when asking the participants to make their JOLs, the pattern of results can only be interpreted to a limited extent in this respect (see Section 4.1).
Beyond that, a closer look at the other two knowledge measures (which could be assessed as scheduled) first reveals a significantly closer relationship between JOLs and structural knowledge in the two CM-training groups (T++ and T+) than in the T− group. This finding is consistent with the characteristics and purpose of CM. A concept map explicitly depicts relevant relationships between concepts [97], which corresponds to the common definition of structural knowledge [98]. In selecting, arranging, and connecting concepts, learners may become aware of the consistency of their structural knowledge, e.g., when they notice difficulties in arranging concepts in a structured way [79,80]. Thus, the intensive use of the CM strategy during the training of groups T++ and T+ may have simultaneously promoted metacognitive skills regarding a more reliable assessment of one’s own structural knowledge. In this respect, our results extend the knowledge base of previous studies that have already provided evidence for the benefit of higher versus lower CM-training intensities [72,73,87,99,100]. The advantage of the two CM-training groups (T++ and T+) is also apparent in the domain of declarative knowledge; here, however, their JOL accuracy is not sufficiently better for the difference from the T− group to reach statistical significance. This can be explained in two ways: On the one hand, our group sizes and, thus, the statistical power are very small, so it is quite possible that the difference would have become statistically significant with larger groups. On the other hand, our sample consisted exclusively of university students, i.e., practiced learners who are periodically required to reliably assess their (primarily declarative) knowledge before taking exams. Consequently, it can be assumed that the participants of the T− group were already skilled to a certain degree in this respect.
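The accuracy analyses discussed above rest on rank correlations between JOLs and test scores, and on comparing such correlations between independent groups (cf. [89,90,91]). The following sketch illustrates the general procedure with hypothetical data; the numbers and group sizes are illustrative only, not those of our study.

```python
# Illustrative sketch (hypothetical data): relative JOL accuracy as a
# Spearman rank correlation between JOLs and test scores, and a Fisher
# z-test comparing two such correlations from independent groups.
from math import atanh, sqrt
from scipy.stats import spearmanr, norm

# Hypothetical JOLs (0-100%) and test scores for five learners
jols = [40, 55, 60, 70, 80]
scores = [3, 5, 6, 8, 9]

# rho = 1.0 here, because the hypothetical data are perfectly monotone
rho, p_value = spearmanr(jols, scores)

def fisher_z_diff(r1, n1, r2, n2):
    """Two-tailed p-value for the difference between two independent
    correlations (Fisher r-to-z transformation)."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return 2 * norm.sf(abs(z))

# E.g., r = .60 (n = 24) in a training group vs. a null correlation
# (n = 24) in a comparison group:
p_diff = fisher_z_diff(0.60, 24, 0.00, 24)
```

Note that the group comparisons in the study itself were run with the cocor package [91]; the Fisher z-test above is merely the simplest variant of such a comparison.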
If we now combine all these results and, at the same time, take into account the specific perspective of our JOL assessment, we can state the following: Altogether, the results most likely reveal something about how the participants individually operationalized the term “retention performance” used in the JOL prompt, i.e., which level of information integration they equated with “retention”. In this regard, declarative knowledge, i.e., the retention of isolated facts, appears to be what participants in all three groups primarily had in mind when making their JOLs. However, in contrast to group T−, the two CM-training groups (T++ and T+) also considered relations and functionalities between thematic concepts and, thus, did not equate “retention” solely with the reproduction of isolated facts. In this respect, the metacognition-enhancing effect of familiarity with CM is most likely based on a kind of “sensitization” to the fact that understanding something can mean more than the accumulation of declarative knowledge. Nevertheless, for none of the three groups can it be asserted that a decontextualized and flexible application of acquired knowledge structures in terms of conceptual knowledge [98] was actually considered. This is not surprising, however, given that our sample consisted of university students, who are most likely confronted with the requirement of continuous reproduction of declarative knowledge (‘bulimic learning’) [101] in their everyday academic life, whereas conceptual knowledge appears to be less required for the successful completion of most university studies [102].

4.1. Limitations

In the previous section, we have already pointed out that unforeseen organizational circumstances compelled us to assess the participants’ conceptual knowledge two weeks (instead of one week) after the end of the learning phase. However, since the JOL prompt was “How much of the information from the text will you remember in one week (0 to 100%)?”, it is conceivable that the correlations between JOLs and conceptual-knowledge test scores, as well as the corresponding group differences, might have turned out differently if conceptual knowledge (just like the other knowledge measures) had actually been assessed one week after the end of the learning phase. Thus, our results in this regard can only be interpreted to a limited extent.
Furthermore, regarding the applied statistical procedures, a post-hoc power analysis showed that our group sizes were too small to reach sufficient statistical power, which was only about 0.35 to 0.40. To detect medium-sized effects at a power of 0.80, each group would have needed to include approximately n = 65 participants. Consequently, it is conceivable that numerically visible but statistically insignificant group differences regarding declarative and conceptual knowledge would have reached the level of statistical significance given a sufficiently large sample. However, it seems difficult to imagine how the intensive supervision of such a large number of participants could have been ensured during the six-week study period, given limited personnel and financial resources.
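The power figures above can be reproduced, at least approximately, with a standard power calculation for an independent-samples t-test. The sketch below assumes a medium effect of Cohen’s d = 0.5, α = 0.05 (two-tailed), and roughly n = 24 per group; these assumptions are ours for illustration and the sketch is not the exact procedure used in the study.

```python
# Sketch of a post-hoc power analysis for a two-tailed independent-samples
# t-test with equal group sizes (assumed: d = 0.5, alpha = .05, n = 24).
from math import sqrt
from scipy.stats import nct, t

def ttest_power(d, n_per_group, alpha=0.05):
    """Power of a two-tailed independent-samples t-test with equal n,
    computed from the noncentral t distribution."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    # probability of landing in either rejection region under H1
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

# Achieved power with group sizes of about 24 (roughly 0.35-0.40):
achieved = ttest_power(d=0.5, n_per_group=24)

# Smallest per-group n that reaches a power of 0.80 (roughly 65):
n_required = next(n for n in range(5, 500) if ttest_power(0.5, n) >= 0.80)
```

The same calculation is available in off-the-shelf tools such as G*Power or statsmodels; the point of the sketch is only to make the relationship between group size and power explicit.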
Overall, against the background of these two limitations, a replication of our study would be desirable that both assesses the knowledge measures in accordance with the JOL operationalization (in terms of time frame) and uses a larger sample in order to detect all relevant effects at the required level of statistical significance.

4.2. Practical Implications and Prospects for Future Research

Despite the previously discussed limitations of our study, our results clearly show that intensive CM training has beneficial effects on metacognitive monitoring skills while learning with CM. In addition, our study provides at least indications that integrating scaffolding and feedback elements into such learning-strategy training can counteract systematic misjudgments of one’s own learning. Thus, realizing comprehensive CM training (such as our T++ setting) in the classroom could be one way to efficiently address critical difficulties in learners’ self-regulation [6,7,8,9,10], since metacognitive skills, and especially the ability to reliably rate one’s own learning, are closely related to successful self-regulated learning [16,19,20,34]. While several studies have already demonstrated positive effects of CM on learning regardless of learners’ grade level (except for very young children below grade 4) [74,99], the extent to which different CM-training designs are effective for different age groups still needs to be examined.
A second interesting question for future studies could relate to the specific triad of cognition, metacognition, and motivation [40,42,43,44] (see Section 1). With their JOLs, our participants predicted that they would retain, on average, only about 50% of what they had learned; thus, in our study, there might have been a lack of learning motivation, probably induced by the experimental context. Consequently, the question arises as to what extent a specific experimental manipulation of motivation may affect our findings. In this respect, various experimental settings exist that can induce specific motivational orientations and, thus, influence the performance behavior of study participants. For example, Murayama and Elliot [103] were able to show that participants in a performance-goal condition (“perform better than other participants”) performed better in a memory test immediately after a learning phase, whereas participants in a mastery-goal condition (“perform better than before”) had an advantage in a follow-up memory test one week later. It therefore seems promising to take up such motivation-related experimental approaches in future studies to test their additional benefits. In this way, further recommendations for the specific design of CM training to promote self-regulated learning could be derived.

Author Contributions

Conceptualization, J.G., L.B.B. and V.D.E.W.; Methodology, J.G., L.B.B. and V.D.E.W.; Formal Analysis, V.D.E.W.; Investigation, L.B.B.; Resources, J.G. and V.D.E.W.; Data Curation, L.B.B. and V.D.E.W.; Writing—Original Draft Preparation, V.D.E.W.; Writing—Review and Editing, J.G., L.B.B. and V.D.E.W.; Visualization, V.D.E.W.; Supervision, J.G.; Project Administration, J.G.; Funding Acquisition, J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Research Foundation (DFG), grant number GR 4763/2-1.

Institutional Review Board Statement

Our study was approved by the DFG (project number: GR 4763/2-1). Before participation, all subjects received detailed written subject information in accordance with the current ethical guidelines laid down by the University of Cologne (Germany) and the German Psychological Society [104], including the following information: aims and course of the investigation, absolute voluntariness of participation, possibility of dropping out of participation at any time, guaranteed protection of data privacy (collection of only anonymized data), possibility of requesting data deletion at any time, no-risk character of study participation, and contact information in case of any questions or problems. Written informed consent was obtained from all participants prior to the study, according to the current version of the Declaration of Helsinki [105,106]. Anonymity was ensured by using individual codes that the students created themselves. Data storage meets current European data protection regulations [107].

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Paris, S.G.; Byrnes, J.P. The Constructivist Approach to Self-Regulation and Learning in the Classroom. In Self-Regulated Learning and Academic Achievement: Theory, Research, and Practice, 1st ed.; Zimmerman, B.J., Schunk, D.H., Eds.; Springer: New York, NY, USA, 1989; pp. 169–200. ISBN 978-1461281801. [Google Scholar]
  2. Vrieling, E.; Stijnen, S.; Bastiaens, T. Successful learning: Balancing self-regulation with instructional planning. Teach. High. Educ. 2018, 23, 685–700. [Google Scholar] [CrossRef] [Green Version]
  3. García-Morales, V.J.; Garrido-Moreno, A.; Martín-Rojas, R. The Transformation of Higher Education After the COVID Disruption: Emerging Challenges in an Online Learning Scenario. Front. Psychol. 2021, 12, 616059. [Google Scholar] [CrossRef] [PubMed]
  4. Huber, S.G.; Helm, C.; Günther, P.S.; Schneider, N.; Schwander, M.; Pruitt, J.; Schneider, J.A. COVID-19: Distance learning from the perspective of school staff in Germany, Austria and Switzerland. PraxisForschungLehrer*innenBildung 2020, 2, 27–44. (In German) [Google Scholar] [CrossRef]
  5. Lauret, D.; Bayram-Jacobs, D. COVID-19 Lockdown Education: The Importance of Structure in a Suddenly Changed Learning Environment. Educ. Sci. 2021, 11, 221. [Google Scholar] [CrossRef]
  6. Berger, F.; Schreiner, C.; Hagleitner, W.; Jesacher-Rößler, L.; Roßnagl, S.; Kraler, C. Predicting Coping with Self-Regulated Distance Learning in Times of COVID-19: Evidence from a Longitudinal Study. Front. Psychol. 2021, 12, 701255. [Google Scholar] [CrossRef] [PubMed]
  7. Biwer, F.; Wiradhany, W.; Egbrink, M.O.; Hospers, H.; Wasenitz, S.; Jansen, W.; de Bruin, A. Changes and Adaptations: How University Students Self-Regulate Their Online Learning During the COVID-19 Pandemic. Front. Psychol. 2021, 12, 642593. [Google Scholar] [CrossRef] [PubMed]
  8. Hensley, L.C.; Iaconelli, R.; Wolters, C.A. “This weird time we’re in”: How a sudden change to remote education impacted college students’ self-regulated learning. J. Res. Technol. Educ. 2022, 54, S203–S218. [Google Scholar] [CrossRef]
  9. Santamaría-Vázquez, M.; Del Líbano, M.; Martínez-Lezaun, I.; Ortiz-Huerta, J.H. Self-Regulation of Motivation and Confinement by COVID-19: A Study in Spanish University Students. Sustainability 2021, 13, 5435. [Google Scholar] [CrossRef]
  10. Sum, C.; Chan, I.; Wong, H. Ready to learn in an uncertain future: Ways to support student engagement. Account. Res. J. 2021, 34, 169–183. [Google Scholar] [CrossRef]
  11. Emmerichs, L.; Welter, V.D.E.; Schlüter, K. University Teacher Students’ Learning in Times of COVID-19. Educ. Sci. 2021, 11, 776. [Google Scholar] [CrossRef]
  12. Engzell, P.; Frey, A.; Verhagen, M.D. Learning loss due to school closures during the COVID-19 pandemic. Proc. Natl. Acad. Sci. USA 2021, 118, e2022376118. [Google Scholar] [CrossRef]
  13. Hammerstein, S.; König, C.; Dreisörner, T.; Frey, A. Effects of COVID-19-Related School Closures on Student Achievement: A Systematic Review. Front. Psychol. 2021, 12, 746289. [Google Scholar] [CrossRef]
  14. Schult, J.; Mahler, N.; Fauth, B.; Lindner, M.A. Did Students Learn Less During the COVID-19 Pandemic? Reading and Mathematics Competencies Before and After the First Pandemic Wave. PsyArXiv 2021. preprint. [Google Scholar] [CrossRef]
  15. Greene, J.A. Self-Regulation in Education, 1st ed.; Routledge: New York, NY, USA, 2018; ISBN 978-1138689107. [Google Scholar]
  16. Karatas, K.; Arpaci, I. The Role of Self-directed Learning, Metacognition, and 21st Century Skills Predicting the Readiness for Online Learning. Contemp. Educ. Technol. 2021, 13, ep300. [Google Scholar] [CrossRef]
  17. Urbina, S.; Villatoro, S.; Salinas, J. Self-Regulated Learning and Technology-Enhanced Learning Environments in Higher Education: A Scoping Review. Sustainability 2021, 13, 7281. [Google Scholar] [CrossRef]
  18. Flavell, J.H. Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. Am. Psychol. 1979, 34, 906–911. [Google Scholar] [CrossRef]
  19. Pintrich, P.R. The Role of Metacognitive Knowledge in Learning, Teaching, and Assessing. Theory Pract. 2002, 41, 219–225. [Google Scholar] [CrossRef]
  20. Williamson, G. Self-regulated learning: An overview of metacognition, motivation and behaviour. J. Initial. Teach. Inq. 2015, 1, 25–27. [Google Scholar] [CrossRef]
  21. Friedrich, H.F.; Mandl, H. Learning strategies: On the structuring of the research field. In Handbook of Learning Strategies, 1st ed.; Mandl, H., Friedrich, H.F., Eds.; Hogrefe: Göttingen, Germany, 2006; pp. 1–23. ISBN 978-3801718138. (In German) [Google Scholar]
  22. Wild, K.-P. Individual learning strategies of university students: Consequences for university didactics and teaching. Beiträge Lehr. 2005, 23, 191–206. (In German) [Google Scholar] [CrossRef]
  23. Hasselhorn, M.; Gold, A. Educational Psychology, 5th ed.; Kohlhammer: Stuttgart, Germany, 2022; ISBN 978-3170397828. (In German) [Google Scholar]
  24. Das, A. Metacognition and Learning: An Overview. Glob. J. Res. Anal. 2016, 5, 64–65. [Google Scholar]
  25. Leopold, C.; Leutner, D. Improving students’ science text comprehension through metacognitive self-regulation when applying learning strategies. Metacogn. Learn. 2015, 10, 313–346. [Google Scholar] [CrossRef] [Green Version]
  26. Roberts, J.A.S. Integrating Metacognitive Regulation into the Online Classroom Using Student-Developed Learning Plans. J. Microbiol. Biol. Educ. 2021, 22, 1–5. [Google Scholar] [CrossRef] [PubMed]
  27. Zhao, N.; Wardeska, J.G.; McGuire, S.Y.; Cook, E. Metacognition: An Effective Tool to Promote Success in College Science Learning. J. Coll. Sci. Teach. 2014, 43, 48–54. [Google Scholar] [CrossRef] [Green Version]
  28. Muijs, D.; Bokhove, C. Metacognition and Self-Regulation: Evidence Review; Education Endowment Foundation: London, UK, 2020; Available online: https://d2tic4wvo1iusb.cloudfront.net/documents/guidance/Metacognition_and_self-regulation_review.pdf (accessed on 18 March 2022).
  29. Norman, E. Why Metacognition Is Not Always Helpful. Front. Psychol. 2020, 11, 1537. [Google Scholar] [CrossRef]
  30. Putnam, A.L.; Roediger, H.L., III. Education and Memory: Seven Ways the Science of Memory Can Improve Classroom Learning. In Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience, 4th ed.; Volume 1 Learning & Memory; Phelps, E.A., Davachi, L., Eds.; Wiley: New York, NY, USA, 2018; pp. 169–213. ISBN 978-1119170167. [Google Scholar]
  31. Jaleel, S.; Premachandran, P. A Study on the Metacognitive Awareness of Secondary School Students. Univers. J. Educ. Res. 2016, 4, 165–172. [Google Scholar] [CrossRef]
  32. Schraw, G. Promoting general metacognitive awareness. Instr. Sci. 1998, 26, 113–125. [Google Scholar] [CrossRef]
  33. Veenman, M.V.J.; Van Hout-Wolters, B.H.A.M.; Afflerbach, P. Metacognition and learning: Conceptual and methodological considerations. Metacogn. Learn. 2006, 1, 3–14. [Google Scholar] [CrossRef]
  34. Hattie, J. Visible Learning: A Synthesis of Meta-Analyses Relating to Achievement, 1st ed.; Routledge: London, UK, 2008; ISBN 978-0415476188. [Google Scholar]
  35. Veenman, M.V.J. Metacognition in Science Education: Definitions, Constituents, and Their Intricate Relation with Cognition. In Metacognition in Science Education: Trends in Current Research, 1st ed.; Zohar, A., Dori, Y.J., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 21–36. ISBN 978-9400721319. [Google Scholar]
  36. Grotzer, T.; Mittlefehldt, S. The Role of Metacognition in Students’ Understanding and Transfer of Explanatory Structures in Science. In Metacognition in Science Education: Trends in Current Research, 1st ed.; Zohar, A., Dori, Y.J., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 79–99. ISBN 978-9400721319. [Google Scholar]
  37. Schuster, C.; Stebner, F.; Leutner, D.; Wirth, J. Transfer of metacognitive skills in self-regulated learning: An experimental training study. Metacogn. Learn. 2020, 15, 455–477. [Google Scholar] [CrossRef]
  38. Ansorge, U. Knowledge in Motion: How Procedural Control of Knowledge Usage entails Selectivity and Bias. J. Knowl. Struct. Syst. 2021, 2, 3–28. [Google Scholar]
  39. Choifer, A.A. New Understanding of the First-Person and Third-Person Perspectives. Philos. Pap. 2018, 47, 333–371. [Google Scholar] [CrossRef] [Green Version]
  40. Efklides, A. Introduction to the Special Section: Motivation and Affect in the Self-Regulation of Behavior. Eur. Psychol. 2005, 10, 173–174. [Google Scholar] [CrossRef]
  41. Schunk, D.H. Social Cognitive Theory and Self-Regulated Learning. In Self-Regulated Learning and Academic Achievement: Theory, Research, and Practice, 1st ed.; Zimmerman, B.J., Schunk, D.H., Eds.; Springer: New York, NY, USA, 1989; pp. 83–110. ISBN 978-0387969343. [Google Scholar]
  42. Higgins, N.L.; Frankland, S.; Rathner, J.A. Self-Regulated Learning in Undergraduate Science. Int. J. Innov. Sci. Math. Educ. 2021, 29, 58–70. [Google Scholar] [CrossRef]
  43. Winne, P.H. Cognition and Metacognition within Self-Regulated Learning. In Handbook of Self-Regulation of Learning and Performance, 2nd ed.; Schunk, D.H., Greene, J.A., Eds.; Routledge: New York, NY, USA, 2017; pp. 36–48. ISBN 978-1315697048. [Google Scholar]
  44. Zimmerman, B.J.; Moylan, A.R. Self-Regulation: Where Metacognition and Motivation Intersect. In Handbook of Metacognition in Education, 1st ed.; Hacker, D.J., Dunlosky, J., Graesser, A.C., Eds.; Routledge: New York, NY, USA, 2009; pp. 299–315. ISBN 978-0203876428. [Google Scholar]
  45. Craig, K.; Hale, D.; Grainger, C.; Stewart, M.E. Evaluating metacognitive self-reports: Systematic reviews of the value of self-report in metacognitive research. Metacogn. Learn. 2020, 15, 155–213. [Google Scholar] [CrossRef]
  46. Arbuckle, T.Y.; Cuddy, L.L. Discrimination of item strength at time of presentation. J. Exp. Psychol. 1969, 81, 126–131. [Google Scholar] [CrossRef]
  47. Rhodes, M.G. Judgments of Learning: Methods, Data, and Theory. In The Oxford Handbook of Metamemory, 1st ed.; Dunlosky, J., Tauber, S.K., Eds.; Oxford University Press: New York, NY, USA, 2016; pp. 65–80. ISBN 978-0199336746. [Google Scholar]
  48. Schraw, G. Measuring Metacognitive Judgments. In Handbook of Metacognition in Education, 1st ed.; Hacker, D.J., Dunlosky, J., Graesser, A.C., Eds.; Routledge: New York, NY, USA, 2009; pp. 415–425. ISBN 978-0203876428. [Google Scholar]
  49. Koriat, A. Monitoring one’s own knowledge during study: A cue-utilization approach to judgments of learning. J. Exp. Psychol. Gen. 1997, 126, 349–370. [Google Scholar] [CrossRef]
  50. Thiede, K.W.; Anderson, M.C.M.; Therriault, D. Accuracy of metacognitive monitoring affects learning of texts. J. Educ. Psychol. 2003, 95, 66–73. [Google Scholar] [CrossRef]
  51. Dunlosky, J.; Mueller, M.L.; Thiede, K.W. Methodology for Investigating Human Metamemory: Problems and Pitfalls. In The Oxford Handbook of Metamemory, 1st ed.; Dunlosky, J., Tauber, S.K., Eds.; Oxford University Press: New York, NY, USA, 2016; pp. 23–37. ISBN 978-0199336746. [Google Scholar]
  52. Nelson, T.O.; Dunlosky, J. When people’s judgments of learning (jols) are extremely accurate at predicting subsequent recall: The “delayed-jol effect”. Psychol. Sci. 1991, 2, 267–271. [Google Scholar] [CrossRef]
  53. Tauber, S.K.; Dunlosky, J.; Rawson, K.A.; Wahlheim, C.N.; Jacoby, L.L. Self-regulated learning of a natural category: Do people interleave or block exemplars during study? Psychon. Bull. Rev. 2013, 20, 356–363. [Google Scholar] [CrossRef] [Green Version]
  54. Yang, C.; Potts, R.; Shanks, D.R. Metacognitive unawareness of the errorful generation benefit and its effects on self-regulated learning. J. Exp. Psychol. Learn. Mem. Cogn. 2017, 43, 1073–1092. [Google Scholar] [CrossRef]
  55. Yang, C.; Sun, B.; Shanks, D.R. The anchoring effect in metamemory monitoring. Mem. Cogn. 2018, 46, 384–397. [Google Scholar] [CrossRef]
  56. Karpicke, J.D.; Grimaldi, P.J. Retrieval-based learning: A perspective for enhancing meaningful learning. Educ. Psychol. Rev. 2012, 24, 401–418. [Google Scholar] [CrossRef]
  57. Akiha, K.; Brigham, E.; Couch, B.A.; Lewin, J.; Stains, M.; Stetzer, M.R.; Vinson, E.L.; Smith, M.K. What types of instructional shifts do students experience? Investigating active learning in science, technology, engineering, and math classes across key transition points from middle school to the university level. Front. Educ. 2018, 2, 68. [Google Scholar] [CrossRef] [Green Version]
  58. Christie, H.; Tett, L.; Cree, V.E.; McCune, V. ‘It all just clicked’: A longitudinal perspective on transitions within university. Stud. High. Educ. 2016, 41, 478–490. [Google Scholar] [CrossRef] [Green Version]
  59. Fazey, D.M.A.; Fazey, J.A. The potential for autonomy in learning: Perceptions of competence, motivation and locus of control in first-year undergraduate students. Stud. High. Educ. 2001, 26, 345–361. [Google Scholar] [CrossRef]
  60. Meaders, C.L.; Smith, M.K.; Boester, T.; Bracy, A.; Couch, B.A.; Drake, A.G.; Farooq, S.; Khoda, B.; Kinsland, C.; Lane, A.K.; et al. What questions are on the minds of stem undergraduate students and how can they be addressed? Front. Educ. 2021, 6, 639338. [Google Scholar] [CrossRef]
  61. Hashem, K.; Mioduser, D. Learning by modeling (lbm): Understanding complex systems by articulating structures, behaviors, and functions. Int. J. Adv. Comput. Sci. App. 2013, 4, 80–86. [Google Scholar] [CrossRef] [Green Version]
  62. Hmelo-Silver, C.E.; Azevedo, R. Understanding complex systems: Some core challenges. J. Learn. Sci. 2006, 15, 53–61. [Google Scholar] [CrossRef]
  63. Yoon, S.A.; Anderson, E.; Koehler-Yom, J.; Evans, C.; Park, M.; Sheldon, J.; Schoenfeld, I.; Wendel, D.; Scheintaub, H.; Klopfer, E. Teaching about complex systems is no simple matter: Building effective professional development for computer-supported complex systems instruction. Instr. Sci. 2017, 45, 99–121. [Google Scholar] [CrossRef] [Green Version]
  64. England, B.J.; Brigati, J.R.; Schussler, E.E.; Chen, M.M. Student anxiety and perception of difficulty impact performance and persistence in introductory biology courses. CBE Life Sci. Educ. 2019, 18, ar21. [Google Scholar] [CrossRef] [Green Version]
  65. Meaders, C.L.; Lane, A.K.; Morozov, A.I.; Shuman, J.K.; Toth, E.S.; Stains, M.; Stetzer, M.R.; Vinson, E.; Couch, B.A.; Smith, M.K. Undergraduate student concerns in introductory stem courses: What they are, how they change, and what influences them. J. STEM Educ. Res. 2020, 3, 195–216. [Google Scholar] [CrossRef]
  66. Aristeidou, M.; Cross, S. Disrupted distance learning: The impact of Covid-19 on study habits of distance learning university students. Open Learn. J. Open Distance e-Learn. 2021, 36, 263–282. [Google Scholar] [CrossRef]
  67. Reyes-Portillo, J.A.; Warner, C.M.; Kline, E.A.; Bixter, M.T.; Chu, B.C.; Miranda, R.; Nadeem, E.; Nickerson, A.; Peralta, A.O.; Reigada, L.; et al. The Psychological, Academic, and Economic Impact of COVID-19 on College Students in the Epicenter of the Pandemic. Emerg. Adulthood 2022, 10, 473–490. [Google Scholar] [CrossRef]
  68. Turner, K.L.; Hughes, M.; Presland, K. Learning Loss, a Potential Challenge for Transition to Undergraduate Study Following COVID19 School Disruption. J. Chem. Educ. 2020, 97, 3346–3352. [Google Scholar] [CrossRef]
  69. Wang, Y.; Xia, M.; Guo, W.; Xu, F.; Zhao, Y. Academic performance under COVID-19: The role of online learning readiness and emotional competence. Curr. Psychol. 2022, 1–14. [Google Scholar] [CrossRef]
  70. Cook, E.; Kennedy, E.; McGuire, S.Y. Effect of teaching metacognitive learning strategies on performance in general chemistry courses. J. Chem. Educ. 2013, 90, 961–967. [Google Scholar] [CrossRef]
  71. Griffin, R.; MacKewn, A.; Moser, E.; VanVuren, K.W. Do learning and study skills affect academic performance? An empirical investigation. Contemp. Issues. Educ. Res. 2012, 5, 109. [Google Scholar] [CrossRef]
  72. Cañas, A.J. A Summary of Literature Pertaining to the Use of Concept Mapping Techniques and Technologies for Education and Performance Support; The Institute for Human and Machine Cognition: Pensacola, FL, USA, 2003; Available online: https://cmapspublic3.ihmc.us/rid=1L403MT99-1T4VWYR-4634/IHMC%20Literature%20Review%20on%20Concept%20Mapping.pdf (accessed on 18 March 2022).
  73. Romero, C.; Cazorla, M.; Buzón, O. Meaningful learning using concept maps as a learning strategy. J. Technol. Sci. Educ. 2017, 7, 313. [Google Scholar] [CrossRef] [Green Version]
  74. Schroeder, N.L.; Nesbit, J.C.; Anguiano, C.J.; Adesope, O.O. Studying and Constructing Concept Maps: A Meta-Analysis. Educ. Psychol. Rev. 2018, 30, 431–455. [Google Scholar] [CrossRef] [Green Version]
  75. Chevron, M.-P. A metacognitive tool: Theoretical and operational analysis of skills exercised in structured Concept Maps. Perspect. Sci. 2014, 2, 46–54. [Google Scholar] [CrossRef] [Green Version]
  76. Kinchin, I.M. Visualising knowledge structures in biology: Discipline, curriculum and student understanding. J. Biol. Educ. 2011, 45, 183–189. [Google Scholar] [CrossRef]
  77. Stevenson, M.P.; Hartmeyer, R.; Bentsen, P. Systematically reviewing the potential of concept mapping technologies to promote self-regulated learning in primary and secondary science education. Educ. Res. Rev. 2017, 21, 1–16. [Google Scholar] [CrossRef]
  78. Cox, R. Representation construction, externalised cognition and individual differences. Learn. Instr. 1999, 9, 343–363. [Google Scholar] [CrossRef]
  79. Hilbert, T.S.; Renkl, A. Concept mapping as a follow-up strategy to learning from texts: What characterizes good and poor mappers? Instr. Sci. 2008, 36, 53–73. [Google Scholar] [CrossRef]
  80. Novak, J.D.; Cañas, A.J. The Theory Underlying Concept Maps and How to Construct and Use Them; Florida Institute for Human and Machine Cognition: Pensacola, FL, USA, 2008; Available online: http://cmap.ihmc.us/docs/pdf/TheoryUnderlyingConceptMaps.pdf (accessed on 18 March 2022).
  81. Redford, J.S.; Thiede, K.W.; Wiley, J.; Griffin, T.D. Concept mapping improves meta-comprehension accuracy among 7th graders. Learn. Instr. 2012, 22, 262–270. [Google Scholar] [CrossRef] [Green Version]
  82. Jüngst, K.L.; Strittmatter, P. Knowledge structure representation: Theoretical approaches and practical relevance. Unterrichtswissenschaft 1995, 23, 194–207. (In German) [Google Scholar] [CrossRef]
  83. Ajaja, O.P. Concept mapping as a study skill. Int. J. Educ. Sci. 2011, 3, 49–57. [Google Scholar] [CrossRef]
  84. Andrews, K.E.; Tressler, K.D.; Mintzes, J.J. Assessing environmental understanding: An application of the concept mapping strategy. Environ. Educ. Res. 2008, 14, 519–536. [Google Scholar] [CrossRef]
  85. Becker, L.B.; Welter, V.D.E.; Aschermann, E.; Großschedl, J. Comprehension-Oriented Learning of Cell Biology: Do Different Training Conditions Affect Students’ Learning Success Differentially? Educ. Sci. 2021, 11, 438. [Google Scholar] [CrossRef]
  86. Salmon, D.; Kelly, M. Using Concept Mapping to Foster Adaptive Expertise: Enhancing Teacher Metacognitive Learning to Improve Student Academic Performance, 1st ed.; Peter Lang: New York, NY, USA, 2015; ISBN 978-1433122705. [Google Scholar]
  87. Schwendimann, B.A. Concept maps as versatile tools to integrate complex ideas: From kindergarten to higher and professional education. Knowl. Manag. E-Learn. 2015, 7, 73–99. [Google Scholar] [CrossRef]
  88. Sánchez-Meca, J.; Marín-Martínez, F. Meta Analysis. In International Encyclopedia of Education, 3rd ed.; Peterson, P., Baker, E., McGaw, B., Eds.; Elsevier: Oxford, UK, 2010; pp. 274–282. ISBN 978-0080448947. [Google Scholar]
  89. Siegel, S. Nonparametric Statistics for the Behavioral Sciences, 1st ed.; McGraw-Hill: New York, NY, USA, 1956; ISBN 978-0070856899. [Google Scholar]
  90. Zar, J.H. Spearman Rank Correlation. In Encyclopedia of Biostatistics, 2nd ed.; Armitage, P., Colton, T., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2005; ISBN 978-0470849071. [Google Scholar]
  91. Diedenhofen, B.; Musch, J. Cocor: A Comprehensive Solution for the Statistical Comparison of Correlations. PLoS ONE 2015, 10, e0121945. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  92. Rosnow, R.L.; Rosenthal, R. Statistical Procedures and the Justification of Knowledge in Psychological Science. Am. Psychol. 1989, 44, 1276–1284. [Google Scholar] [CrossRef]
  93. Butler, D.L.; Winne, P.H. Feedback and Self-Regulated Learning: A Theoretical Synthesis. Rev. Educ. Res. 1995, 65, 245–281. [Google Scholar] [CrossRef]
  94. Chung, Y.B.; Yuen, M. The role of feedback in enhancing students’ self-regulation in inviting schools. J. Invit. Theory Pract. 2011, 17, 22–27. [Google Scholar] [CrossRef]
  95. Hemerda, J.M. Maximizing Feedback for Self-Regulated Learning. Ph.D. Thesis, Walden University, Columbia, MD, USA, 2016. [Google Scholar]
  96. Shih, K.-P.; Chen, H.-C.; Chang, C.-Y.; Kao, T.-C. The Development and Implementation of Scaffolding-Based Self-Regulated Learning System for e/m-Learning. Educ. Technol. Soc. 2010, 13, 80–93. [Google Scholar]
  97. Hager, P.J.; Scheiber, H.J.; Corbin, N.C. Designing & Delivering: Scientific, Technical, and Managerial Presentations, 1st ed.; Wiley: New York, NY, USA, 1997; ISBN 978-0471155645. [Google Scholar]
  98. Anderson, J.R. Cognitive Psychology and Its Implications, 9th ed.; Worth Publishers: New York, NY, USA, 2020; ISBN 978-1319067113. [Google Scholar]
  99. Brod, G. Generative Learning: Which Strategies for What Age? Educ. Psychol. Rev. 2021, 33, 1295–1318. [Google Scholar] [CrossRef]
  100. Roessger, K.M.; Daley, B.J.; Hafez, D.A. Effects of teaching concept mapping using practice, feedback, and relational framing. Learn. Instr. 2018, 54, 11–21. [Google Scholar] [CrossRef]
  101. Coffield, F. Beyond bulimic learning. In Beyond Bulimic Learning: Improving Teaching in Further Education, 1st ed.; Coffield, F., Costa, C., Müller, W., Webber, J., Eds.; Institute of Education Press: London, UK, 2014; pp. 1–21. ISBN 978-1782770732. [Google Scholar]
  102. Erpenbeck, J.; Sauter, W. Stop the Competence Disaster! Paths to a New World of Education, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2019; ISBN 978-3662596777. [Google Scholar]
  103. Murayama, K.; Elliot, A.J. Achievement Motivation and Memory: Achievement Goals Differentially Influence Immediate and Delayed Remember-Know Recognition Memory. Pers. Soc. Psychol. Bull. 2011, 37, 1339–1348. [Google Scholar] [CrossRef] [Green Version]
  104. Professional Ethical Guidelines of the Professional Association of German Psychologists e. V. and the German Psychological Society. Available online: https://www.bdp-verband.de/binaries/content/assets/beruf/ber-foederation-2016.pdf (accessed on 18 March 2022). (In German).
  105. World Medical Association. WMA’s Declaration of Helsinki Serves as Guide to Physicians. J. Am. Med. Assoc. 1964, 189, 33–34. [Google Scholar] [CrossRef]
  106. World Medical Association. Declaration of Helsinki. Ethical Principles for Medical Research Involving Human Subjects. J. Am. Med. Assoc. 2013, 310, 2191–2194. [Google Scholar] [CrossRef] [Green Version]
  107. European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Off. J. Eur. Union 2016, 59, 294. [Google Scholar]
Table 1. Results of cross-table analyses regarding possible baseline differences between the three groups [85].

| Variable | Category | T++ | T+ | T− | χ² Test |
|---|---|---|---|---|---|
| Sex | Female | 18 | 18 | 21 | χ²(2) = 3.28, p = 0.19 |
|  | Male | 9 | 3 | 4 |  |
| University study program | B.A. | 10 | 9 | 13 | χ²(2) = 1.19, p = 0.55 |
|  | B.Sc. | 17 | 12 | 12 |  |

Annotation. Cell entries are observed n per group. T++ = CM training including scaffolding and feedback elements; T+ = CM training without any additional scaffolding or feedback elements; T− = non-CM-learning-strategy training including scaffolding and feedback elements.
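As a cross-check, the χ² statistics in Table 1 can be reproduced from the observed counts alone. The following sketch (plain Python, no external dependencies; an illustrative verification written for this purpose, not the authors' analysis code) computes the test of independence for both contingency tables:

```python
import math

def chi_square_independence(table):
    """Pearson chi-square test of independence for an r x c contingency table.

    Returns (chi2, df, p). The closed-form p-value below is exact only for
    df = 2 (chi-square survival function with 2 degrees of freedom), which
    covers both 2 x 3 tables reported in Table 1.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = sum(
        (obs - rt * ct / n) ** 2 / (rt * ct / n)
        for row, rt in zip(table, row_totals)
        for obs, ct in zip(row, col_totals)
    )
    df = (len(table) - 1) * (len(table[0]) - 1)
    p = math.exp(-chi2 / 2) if df == 2 else float("nan")
    return chi2, df, p

# Sex distribution across T++, T+, T- (rows: female, male)
chi2, df, p = chi_square_independence([[18, 18, 21], [9, 3, 4]])
# chi2 = 3.28, df = 2, p = 0.19 (rounded), matching the first row of Table 1
```

Running the same function on the study-program counts ([[10, 9, 13], [17, 12, 12]]) reproduces χ²(2) = 1.19, p = 0.55.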
Table 2. Descriptive statistics and ANOVA results regarding possible baseline differences between the three groups [85].

| Variable | T++ M (SD) | T+ M (SD) | T− M (SD) | ANOVA |
|---|---|---|---|---|
| Age | 22.93 (5.94) | 22.05 (3.37) | 22.64 (5.92) | F(2, 70) = 0.16, p = 0.85 |
| GPA | 2.02 (0.63) | 2.07 (0.69) | 1.81 (0.55) | F(2, 70) = 1.10, p = 0.34 |
| Semester of study | 3.48 (1.53) | 3.71 (1.85) | 3.59 (2.22) | F(2, 70) = 0.09, p = 0.91 |
| Prior knowledge about cell biology | 7.74 (3.86) | 7.52 (3.22) | 6.64 (2.93) | F(2, 70) = 0.75, p = 0.48 |

Annotation. T++ (n = 27) = CM training including scaffolding and feedback elements; T+ (n = 21) = CM training without any additional scaffolding or feedback elements; T− (n = 25) = non-CM-learning-strategy training including scaffolding and feedback elements.
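Because a one-way ANOVA F statistic depends only on the group sizes, means, and standard deviations, the values in Table 2 can be recovered from the published summaries (up to rounding of the reported means and SDs). A minimal sketch in plain Python; the function name is ours and this is an illustrative reconstruction, not the authors' analysis code:

```python
def f_from_summary(ns, means, sds):
    """One-way ANOVA F statistic from per-group n, mean, and SD.

    F = MS_between / MS_within with df = (k - 1, N - k).
    """
    k, n_total = len(ns), sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Age row of Table 2: reproduces F(2, 70) = 0.16
f_age = f_from_summary([27, 21, 25], [22.93, 22.05, 22.64], [5.94, 3.37, 5.92])

# Prior-knowledge row: reproduces F(2, 70) = 0.75
f_prior = f_from_summary([27, 21, 25], [7.74, 7.52, 6.64], [3.86, 3.22, 2.93])
```

The reported p-values then follow from the F(2, 70) distribution; none of the baseline differences approaches significance, supporting the comparability of the three groups.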
Table 3. Means and standard deviations of the group-specific JOLs and knowledge tests' scores.

| Group | JOL M (SD) | Declarative Knowledge M (SD) | Structural Knowledge M (SD) | Conceptual Knowledge M (SD) |
|---|---|---|---|---|
| T++ | 50.74 (28.14) | 15.19 (6.12) | 0.46 (0.17) | 12.96 (5.79) |
| T+ | 58.10 (21.12) | 17.14 (4.23) | 0.45 (0.18) | 12.62 (4.97) |
| T− | 44.40 (22.00) | 13.80 (4.32) | 0.33 (0.16) | 8.80 (3.39) |

Annotation. T++ (n = 27) = CM training including scaffolding and feedback elements; T+ (n = 21) = CM training without any additional scaffolding or feedback elements; T− (n = 25) = non-CM-learning-strategy training including scaffolding and feedback elements; JOL = judgement of learning.
Table 5. Differences of the group-specific correlations between JOLs and knowledge tests' scores.

| Group Comparison | Declarative: Zr Difference (p) | Structural: Zr Difference (p) | Conceptual: Zr Difference (p) |
|---|---|---|---|
| T++ vs. T+ | 0.15 (0.44) | 0.06 (0.48) | 1.37 (0.09) |
| T++ vs. T− | 1.17 (0.12) | 1.85 (0.03 *) | 0.27 (0.39) |
| T+ vs. T− | 1.23 (0.11) | 1.78 (0.04 *) | 1.09 (0.14) |

Annotation. Zr difference = difference between Fisher-z-standardized correlation coefficients; T++ (n = 27) = CM training including scaffolding and feedback elements; T+ (n = 21) = CM training without any additional scaffolding or feedback elements; T− (n = 25) = non-CM-learning-strategy training including scaffolding and feedback elements; * = statistically significant difference (p < 0.05).
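The comparisons in Table 5 follow the standard Fisher z test for two independent correlations (cf. [91]): each coefficient is z-transformed, and their difference is scaled by the pooled standard error. A minimal sketch in plain Python; the correlation values in the example are invented for illustration (the study's actual coefficients appear in Table 4), and only the group sizes are taken from the article:

```python
import math

def compare_independent_correlations(r1, n1, r2, n2):
    """Fisher z test for the difference between two independent correlations.

    Returns (z_diff, p): the standardized difference between the
    Fisher-z-transformed coefficients and its one-tailed p-value.
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))      # pooled standard error
    z_diff = (z1 - z2) / se
    p = 0.5 * math.erfc(abs(z_diff) / math.sqrt(2))  # one-tailed normal p
    return z_diff, p

# Hypothetical example using the study's group sizes (r values invented):
z, p = compare_independent_correlations(0.6, 27, 0.3, 25)
```

Note that a difference between two correlations of this size, with groups of n = 27 and n = 25, does not reach one-tailed significance; this illustrates why only the largest Zr differences in Table 5 (the structural-knowledge comparisons) cross the p < 0.05 threshold.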
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
