10.1145/3613904.3642150 · CHI Conference Proceedings · Research Article · Free Access

Toward Supporting Adaptation: Exploring Affect’s Role in Cognitive Load when Using a Literacy Game

Published: 11 May 2024

Abstract

Educational technologies have been argued to enhance specific aspects of affect, such as motivation, and, through that, learner experiences and outcomes. Until recently, affect has been considered separately from cognition. In this study, we investigated how learner affect (valence and activation) was tied to learner cognitive load and behaviours during game-based literacy activities. We employed experience sampling as part of a lab-based case study in which 35 English language learners used an adaptive educational game. The results indicated that both positive and negative affect predicted learner cognitive load, with negative affect predicting extraneous (unnecessary) load. These results, together with the newly identified interaction patterns that accompanied learner affect and cognitive-load trajectories, provide insight into the role of affect during learning. They show a need to consider affect when studying cognitive load and have implications for how systems should adapt to learners.


1 INTRODUCTION

The field of affective computing dates back to the 1990s and the importance of affect in learning was posited even earlier [33, 36] with later work exploring affect’s role [11, 52]. Prior studies have demonstrated that emotions play an important role in learning, influencing aspects like attention [14], information processing [54], and decision-making [6, 79]. More recently, many educational technologies have attempted to account for learner emotions and affective states as part of the learning experience [16, 80]. Game-based learning is one approach that has aimed to improve learner motivation and engagement, with mixed results [58, 68].

While we have seen growing attention on learner affect, most systems and studies ignore the dynamics of affective states [65], and there has been little exploration of the trajectory of a learner’s affect when using online educational technologies. This gap may lead to challenges in affect modelling and inhibit the utility of designing interactions around learner affect [55]. Moreover, learner affect is usually studied independently of learner cognitive load, despite cognitive load being central to human information processing and knowledge construction. Recent research has challenged this separation. The integrated cognitive-affective model of learning with multimedia (ICALM) posits a role for emotions in cognition and performance—it argues that emotions cannot be decoupled from cognition [57], yet this coupling is rarely reflected in current learner models. According to studies from neuroscience, affect and cognition rely on similar brain systems [38]. Moreover, some have suggested that affect and its regulation might contribute to changes in cognitive load during learning, given the role that student affective states seem to play in learning [4, 31, 56, 77]. Given the probable interaction between these two constructs, there is a need to investigate how affect may contribute to cognitive load. Understanding this would clarify how affect may impact learning and provide insight into how to design system adaptations that respond to learner affect. Such information could inform the use of artificial intelligence (AI) and machine learning to detect learner affect and adapt to it, which has been a key focus within affective computing [55].
Once we understand the interplay between affect and cognitive load, we can use data from physiological signals (e.g., heart rate or skin conductivity) or various other signals, such as tone of voice and body language [7, 13, 15, 29], to support this adaptation by responding appropriately to learner affect in a way that is supportive of the cognitive demands of the learning task, thus improving learner experience.

In this study, we employed experience sampling to explore learner affect and cognitive load in conjunction with learner interaction patterns. Participants used a self-paced, adaptive English literacy game that provides personalised learning sequences. We measured participant affect and cognitive load while capturing their learning process. To capture detailed learner-system interactions, eye-tracking technology was used to record learners’ gaze information. Analyses were conducted to answer the following research questions:

How do affect and cognitive load vary?

How does affect contribute to the prediction of cognitive load?

How are learner-system interactions connected to trajectories of affect and cognitive load?

In answering these questions, this paper provides insight into the role of affect in predicting cognitive load and it details previously unidentified behavioural patterns in relation to learner affect and cognitive-load trajectories. These insights inform the proposal of guidelines for affective game-based learning systems.


2 RELATED WORK

2.1 Affect and Literacy

The role of affect in literacy has received increasing attention in recent years. Studies examining the effects of affect on laboratory and real-world learning tasks have shown that affect influences cognitive processing [23]. As an essential skill in language education, reading comprehension involves complicated cognitive processes that interact with each other, such as generating ideas, integrating new information with prior knowledge, creating a situational model, and making predictions [5]. Previous studies have shown that emotion can influence such processes, and several theories of emotion have been developed to explain this.

In the resource allocation model [20], emotional events and information are distinctly represented in memory and given higher priority for processing. Hence, they place a burden on our limited cognitive resources by generating thoughts irrelevant to the task at hand, interfering with task performance. This effect is more noticeable when more demanding tasks are involved; it can also depend on the type of emotion being experienced. This effect relates to another model called affect infusion, which proposes that different moods facilitate different types of cognitive processes and strategies. For instance, positive affective states facilitate flexibility and creativity as well as global processing and the use of heuristics, whereas negative affective states facilitate systematic thinking, error avoidance, local processing, and analytical reasoning [23, 45], causing various types of tasks to be differentially influenced by contrasting affective states.

Prior studies have found that positive and negative affect have measurable effects on reading comprehension performance. For narrative text comprehension, Egidi and Gerrig showed that emotion congruency effects arise when participants with induced positive or negative affect read stories with either positive or negative endings [17]. Participants with induced positive affect found positive endings less surprising, and negative endings more surprising, than those with negative affect. This suggests that participants’ emotion primes them for content with a similar emotional valence. For expository text comprehension, a different pattern has emerged. Mills et al. found that participants with induced sadness performed better than participants with induced happiness on deep-reasoning comprehension questions [45]. This could be due to the different processing styles brought about by happiness and sadness, as negative affect facilitates detailed processing and analytical reasoning. Other studies have shown that positive emotions lead to better comprehension performance compared to negative emotions. Scrimin and Mason found that participants with a positive mood scored significantly higher on factual knowledge comprehension questions than those experiencing a negative emotion, but this did not hold for knowledge transfer questions [66]. Ellis et al. showed that participants with an induced depressive emotional state performed worse than a neutral group at correctly identifying contradictions in text and made more false identifications, regardless of motivational factors [21]. This is consistent with the resource allocation model—fewer cognitive resources are available for the task at hand as they are used to process irrelevant thoughts.
Furthermore, the neutral group showed an interaction between the number of correctly identified contradictions and the perceived difficulty of the task, while the depressive group lacked such an interaction. We build on this work by studying language learner affect as it occurred rather than by inducing it. We also expand the literature by relating learner affect to cognitive load theory, a key theory of learning.

2.2 Cognitive Load Theory and Learning

Cognitive load has been shown to have a complicated relationship with learner performance [49]. The construct is grounded in an understanding of the human cognitive architecture that is supported by previous research on working memory [2] and mental effort [64]. Working memory’s capacity and duration are limited when processing new information, yet humans must consciously process new information to learn. They learn by making sense of this new information, constructing new knowledge, and storing that knowledge. Cognitive load is a representation of how a learner’s working memory is used when performing a learning activity. Cognitive load, which originated in educational psychology, is closely related to the construct of mental workload, which comes from the field of ergonomics [82]. Both constructs rest on the assumption that people have limited cognitive resources, which may explain why the terms are often used interchangeably. However, cognitive load differs from mental workload in that it has been formulated through an educational lens, extending the concept in ways that support instructional design.

Cognitive load theory (CLT) [74] has been used to explain learner cognitive processes. CLT outlines three types of load: intrinsic, extraneous, and germane. Intrinsic load refers to the inherent difficulty of the material being learned, while extraneous load is associated with any additional cognitive demands that are unnecessary for learning [43, 73]. Essentially, extraneous load can be increased or decreased through activity design. Germane load, on the other hand, represents the cognitive effort required to process information in a way that promotes learning; recent evidence has led to CLT being updated to reflect that germane load may be influenced by intrinsic and extraneous load [75]. This reformulation of the theory is consistent with the existing evidence and the view that students may struggle to retain information or perform learning activities when cognitive load is over-taxed. The refinement therefore did not change a key instructional design goal: designers want to optimise learning by lowering extraneous cognitive load, thereby allowing people to devote more cognitive resources to learning activities.

Building on CLT, Mayer proposed a theory in multimedia learning where visual and auditory materials are used to convey information [42]. This cognitive theory suggests that multimedia presentations can be effective for promoting learning, but that the design of these presentations is critical. Specifically, the use of unnecessary or distracting multimedia elements can increase extraneous cognitive load and decrease learning. This risk requires careful attention in game-based learning settings.

In addition to multimedia design elements in games, findings from studies of CLT across a variety of settings have found that different activity designs are beneficial for learners with different expertise levels [61]. For example, novice students may benefit from explicit, step-by-step instructions, while experts may benefit from more complex and abstract information. This effect highlights the potential for software to better support learners through adaptation. Furthermore, it appears that incorporating personalised information into instructional materials can reduce extraneous cognitive load and improve learning [71]. Overall, the literature highlights the importance of designing educational applications that optimise learning by supporting the improved allocation of cognitive resources and the minimization of extraneous load. This goal can be partly achieved by using the findings from CLT design work that provide guidance on how to design effective classroom and paper-based learning activities [72, 74]. However, CLT does not fully explain learning success [46, 51]. Learner affect and motivation, which are linked to the reasons why games and multimedia are used in learning processes, are not yet adequately explained in CLT-based models.


3 METHODS

To answer our research questions, we conducted a case study in our lab space. We used experience sampling [27] and combined analysis of learner questionnaire responses with an investigation of their system interactions. The study was approved by our institutional Research Ethics Board (REB).

3.1 Game-based Learning System

An English literacy game was used in this study. It is an online language-learning system designed for learners who can read at a grade two through eight level. This system integrates reading-comprehension activities directly into a base-building game with the aim of increasing learner engagement. A point system is used to tie learning performance with game-play progression. Learners must complete English passage reading and question-answering tasks to gain points that can be used to perform other game activities.

This game is adaptive—it dynamically adjusts learning task difficulty based on its assessment of the learner’s reading skills, termed reading level in this paper. This personalization caters to the needs of each learner. The game allows players to compete against other online players. This competitive aspect is intended to promote both engagement and a sense of community among learners. Below, we outline the specifics of how the system operates and how it dynamically assesses reading level.

3.1.1 Game-play Elements.

In this base-building design, players pretend to live in the world of dreams and are entrusted with defending their virtual homeland from invading “reveries” (in-game creatures that are equivalent to soldiers in other base-building games). The play elements combine single-player campaigns with the ability to challenge other players on the same world map. There are two main types of game-play interaction.

Figure 1:

Figure 1: User interfaces for game play: (a) adding a new building called a "Hatch" (number 7) in the main gaming user interface (UI), (b) reverie training.

Figure 2:

Figure 2: User interface for game play: challenging other players.

Base-building: In this game, players need to construct their home base in a virtual world—see Fig. 1(a).

Reverie Training and Peer Challenge: Players can use their points to train a group of reveries (see Fig. 1(b)) or challenge other players on the same map (see Fig. 2).

3.1.2 Learning Activities.

The provided learning activities target English reading comprehension and related literacy skills. Learners need to complete learning activities to advance gameplay. Learners can attempt learning activities by clicking the "Question" button in the main user interface (UI) (Fig. 1(a)). There are two categories of learning activity, and there are no imposed time limits.

Passage Question Answering: Learners are given reading passages from a range of topics and fields (see Fig. 3(a) for a sample text). The passages have different genres (e.g., realistic fiction, fantasy, horror) to expose students to a variety of types of literature since these may require different reading skills and strategies. After reading each passage, a comprehension question appears (see Fig. 3(b)). If the player answers incorrectly, there is no penalty—they receive feedback and miss an opportunity for gameplay. Correctly answered questions earn points that can be used to support gameplay.

Independent Question Answering: Students are also given questions that are independent of passages (see Fig. 4). Such questions allow students to practice a range of vocabulary, grammar, and other English language arts knowledge. Like with passage questions, players can earn points and are told whether their submitted answer was correct.

Figure 3:

Figure 3: User interfaces for learning activities: (a) a passage for reading and (b) a reading-comprehension question.

Figure 4:

Figure 4: User interface for learning activities: an independent question about similes.

Figure 5:

Figure 5: System feedback when a player attempts a building upgrade when they do not have enough points.

3.1.3 Learning and Game-play Integration.

The integration of learning and gameplay was designed to be seamless. For players to engage in certain game interactions, they must first complete learning exercises to accumulate points, with different game interactions requiring different numbers of points. This requirement ensures that educational elements are a core part of the game experience.

When a player successfully answers a question, the game displays virtual objects representing two types of in-game points (Fig. 1(a)).

To perform learning activities, players can click on the "Question" button (see Fig. 1(a)). This action transitions them from the pure game-play component to the learning component. Additionally, the system prompts the user to complete learning activities if they attempt a game-play move when they do not have enough points (an example can be found in Fig. 5). This prompt provides a direct link to the learning (question) UI, thereby facilitating a smooth transition between learning and gameplay.

The employed design ensures that learning exercises are an integral part of the gaming experience. It encourages players to engage with educational content as a means to enhance their gameplay, creating a symbiotic relationship between learning and play.

3.1.4 Reading Level Detection and Adaptation.

Student reading levels are inferred based on their performance. Reading levels are scaled from 1 to 8.9. Students are assigned a level using a test within the system. For students, this test is indistinguishable from other learning activities. The assigned level is used to select the topics and initial difficulty of learning tasks (i.e., the reading materials and questions).

The system continually assesses student reading levels and covertly adapts the difficulty of the reading content and questions. When students answer more than 75% of questions correctly, their reading level is increased by 0.1. If they answer less than 50% of questions correctly, their reading level is decreased by 0.1.
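The adaptation rule above can be sketched in a few lines. This is an illustrative reading of the described behaviour, not the game's actual implementation; in particular, the clamping to the 1 to 8.9 scale and the function name are assumptions.

```python
def adapt_reading_level(level, correct, attempted):
    """Adjust a learner's reading level (scale 1 to 8.9) from recent
    question-answering accuracy: above 75% correct raises the level by 0.1,
    below 50% lowers it by 0.1, and anything in between leaves it unchanged.
    Clamping to the scale's bounds is an assumption not stated in the paper."""
    accuracy = correct / attempted
    if accuracy > 0.75:
        level = min(8.9, round(level + 0.1, 1))
    elif accuracy < 0.5:
        level = max(1.0, round(level - 0.1, 1))
    return level
```

For example, a learner at level 3.0 who answers 8 of 10 questions correctly would move to 3.1 under this rule.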

3.2 Participants

Because the selected learning game aims to support English literacy practices, we recruited 35 English-language learners from our university. All participants were invited to use the game for around an hour. Each participant was given a new account at the beginning of the study. No participants withdrew. Each participant provided signed consent and received a $30 gift card.

Participants had a range of backgrounds and majors. Of the 35 participants, 51.43% (n = 18) were graduate students, and the remaining 48.57% (n = 17) were undergraduate students. They ranged in age from 17 to 33 years (M = 24.14); 65.7% were female (n = 23) and 34.3% were male (n = 12).

When answering "What language(s) did you speak at home as a child?", 61.5% (n = 24) of participants reported speaking Chinese (Mandarin). The second most common language was Japanese (10.3%, n = 4). Persian (Farsi) was spoken by 3 participants (7.7%). Korean, Gujarati, French, and Spanish were each spoken by one participant (2.6% each).

Prior to this study, none of the participating learners had used the selected game, and 21 had never used an educational game.

Participant working-memory capacity ranged from moderate to high: 17.1% (n = 6) of participants scored 5, 51.4% (n = 18) scored 6, 22.9% (n = 8) scored 7, and 8.6% (n = 3) scored 8. These measures are within the typical range for adults (5-9), as suggested by results from other studies [44]. See Section 3.3.3 for measurement details.

3.3 Data Sources and Measures

3.3.1 Affect Measures.

We measured user affect without manipulating it to capture information about learner experiences. As is common in psychometrics and experience sampling, questionnaires (scales) were used to measure affect [5, 17]. In this study, we identified affect measurement instruments from the perspective of their fit with Russell’s model [62] and their appropriateness for use with our targeted user group. Two widely-used, existing instruments were selected to measure the two primary dimensions of affect: valence and activation.

The first instrument was the international positive and negative affect schedule short-form (I-PANAS-SF) [78]. This instrument is a 10-item questionnaire that uses a 5-point Likert scale (1 - Strongly Disagree and 5 - Strongly Agree). It is composed of two subscales: one measures Positive Affect (PA) and the other measures Negative Affect (NA). In prior research, the PA (α = .78) and NA (α = .76) subscales were shown to be adequately reliable [78]. Each subscale includes 5 items. The score of each subscale is calculated by summing the values across its associated items. Scores range from 5 to 25, with higher scores representing higher levels of positive or negative affect. An overall affect score is calculated by subtracting the NA score from the PA score; potential scores range from −20 to 20.
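The scoring rules above are simple enough to state as code. The item lists follow the published I-PANAS-SF; treat this as a sketch of the scoring procedure rather than the authors' analysis code.

```python
def score_ipanas_sf(responses):
    """Score the I-PANAS-SF from ten 5-point Likert responses.
    `responses` maps item names to values 1-5. The item-to-subscale
    assignment below follows the published instrument (Thompson, 2007)."""
    pa_items = ["active", "determined", "attentive", "inspired", "alert"]
    na_items = ["upset", "hostile", "ashamed", "nervous", "afraid"]
    pa = sum(responses[i] for i in pa_items)  # each subscale ranges 5..25
    na = sum(responses[i] for i in na_items)
    return {"PA": pa, "NA": na, "overall": pa - na}  # overall: -20..20
```

A learner answering 5 on every PA item and 1 on every NA item would score PA = 25, NA = 5, and an overall affect score of 20, the maximum.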

The second affect-measurement instrument was the Self-Assessment Manikin (SAM) [41]. SAM differs from other instruments in that it uses a pictorial representation rather than words and does not assign labels to emotions. The benefit of this format is that it can be used by people with lower literacy levels, and it does not require the same level of abstract thinking as other instruments. In this study, we adopted the version that uses a 9-point rating scale to capture participants’ Activation. In prior research, activation had a Cronbach’s α of .83, indicating adequate reliability [47]. The scale ranges from 1 to 9, where a high score (9) indicates high activation or energy, and a low score (1) indicates low activation or energy.

3.3.2 Cognitive Load Measures.

In this study, we intended to explore cognitive load and how affect might impact specific types of cognitive load. We used Sweller, Ayres, and Kalyuga’s definition of cognitive load [74] that has three sub-types: intrinsic load, extraneous load, and germane load.

Similar to how affect is measured, cognitive load is commonly assessed using psychometric scales. Although CLT is widely recognised as a key theory in instructional design, research on developing highly generalised measures of cognitive load remains limited [34, 48]. Unlike mental workload, for which there are instruments like the NASA Task Load Index (NASA-TLX) that can provide a holistic measure across domains [25], cognitive load is intricately linked with specific learning contexts and tasks. Some holistic measures of mental workload appear to be usable for measuring intrinsic load but are limited in their ability to capture extraneous and germane load [48]. Instruments for cognitive load are usually developed for a specific domain or learning task (e.g., statistics [39], multimedia [50], and art learning [28]), and the questions need to be adjusted for use in other instructional domains. Additional challenges arise when trying to measure the different types of load.

Our review of the literature indicated that there were few, if any, existing scales that measured CLT in language-learning and literacy contexts. Therefore, in this study, we developed our cognitive load instrument by adapting scales from other educational domains. We identified scales (e.g., [28, 39]), which showed adequate reliability (Cronbach’s α was higher than.8) and adapted them to the literacy context. The employed instrument had 10 Likert-scale items (1 - Strongly Disagree and 10 - Strongly Agree) that were used to measure cognitive load and each of its sub-types (see Appendix A). It included 3 items for intrinsic load, 2 items for extraneous load, 2 items for germane load, and 1 for cognitive load as a whole. The score for each type of load is obtained by summing its corresponding items. To determine the cognitive load score, we first averaged the sub-type scores. We then averaged the score for the cognitive load item with the sub-type averages.
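The aggregation described above leaves some room for interpretation, so the sketch below takes one plausible reading: each sub-type score is the sum of its items, while the overall score averages the single whole-load item with the three sub-type means. The function and argument names are illustrative, not from the instrument itself.

```python
from statistics import mean

def score_cognitive_load(intrinsic, extraneous, germane, overall_item):
    """Score the adapted cognitive load instrument (10-point Likert items).
    Sub-type scores are item sums, as stated in the paper; the overall score
    is taken here as the mean of the single whole-load item and the three
    sub-type means. This aggregation is one plausible reading, not a
    confirmed reconstruction of the authors' exact procedure."""
    sub_means = [mean(intrinsic), mean(extraneous), mean(germane)]
    return {
        "intrinsic": sum(intrinsic),
        "extraneous": sum(extraneous),
        "germane": sum(germane),
        "overall": mean([overall_item] + sub_means),
    }
```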

3.3.3 Working-memory Capacity Test.

Because cognitive load is influenced by an individual’s working-memory capacity, we measured capacity using the Corsi block-tapping test [8]. This test is widely used in research and clinical settings for a variety of populations [1]. The test was implemented via PsyToolkit [69].

During the test, participants are shown a set of blocks that are highlighted in sequence. Participants must reproduce the sequence immediately by selecting the blocks in the same order. The task starts with sets of two blocks and progressively increases the set size, challenging the participants’ working-memory capacity. The longest sequence that a participant is able to recall correctly is their working memory score. The score ranges from 2 to 9, where a high score suggests high working-memory capacity.
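The Corsi scoring rule (the longest correctly reproduced sequence) can be sketched as follows; the trial representation is an assumption for illustration.

```python
def corsi_span(trials):
    """Compute a Corsi block-tapping score as the length of the longest
    sequence the participant reproduced correctly. `trials` is a list of
    (shown_sequence, tapped_sequence) pairs, typically in ascending set size."""
    correct = [len(shown) for shown, tapped in trials if shown == tapped]
    return max(correct, default=0)
```

A participant who reproduces sequences of length 2, 3, and 4 correctly but fails at length 5 would score 4 under this rule.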

3.3.4 Eye-gaze Tracking.

Throughout the study, participants wore an open-source eye tracker—a Pupil Core (see Fig. 6)—that records dual eye movements at 200 Hz and a scene video that shows what the participant sees. It weighs about 23 grams and is worn like normal glasses. The eye-gaze tracking data was captured so that we could infer the connections between student interactions with the learning system and their inner states (i.e., affect and cognitive load).

Participants’ eye-gaze data was logged frame by frame. The data extracted from the eye tracker consists of the video showing the participant’s field of view, coordinates describing where the participant was looking (in pixels), and associated timestamps. A filter was used to extract gaze fixations by identifying how long participants looked at a specific point. In these fixation moments, a participant maintained their gaze for a specified period (at least 80 ms) with a maximum dispersion of 1.5 on the object.
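A dispersion-threshold fixation filter of the kind described can be sketched as below. The dispersion metric (the sum of the x and y ranges of the window) and the data layout are assumptions; the paper does not specify the exact algorithm.

```python
def detect_fixations(samples, min_duration_ms=80, max_dispersion=1.5):
    """Dispersion-threshold (I-DT style) fixation detection. `samples` is a
    list of (timestamp_ms, x, y) gaze points. A fixation is a maximal run of
    samples lasting at least `min_duration_ms` whose points stay within
    `max_dispersion`, measured here as (max_x - min_x) + (max_y - min_y).
    Returns (start_ms, end_ms, centroid_x, centroid_y) tuples."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while it stays within the dispersion threshold.
        while j + 1 < len(samples):
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration_ms:
            window = samples[i:j + 1]
            cx = sum(s[1] for s in window) / len(window)
            cy = sum(s[2] for s in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

At a 200 Hz sampling rate, an 80 ms minimum duration corresponds to a window of roughly 16 consecutive samples.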

Figure 6:

Figure 6: The study environment.

3.4 Study Procedures

After obtaining consent, we explained the study procedure in detail. We then guided participants through the system tutorials and device calibration process.

Before starting gameplay, participants completed the working memory test and a baseline measure for affect. Participants then played the game for one hour.

The self-report instruments for affect and cognitive load were administered at intervals of approximately 5 minutes. Sampling occurred after the participant had answered a question or finished a game task. This sampling procedure used a signal to support event-contingent sampling [27] that prevented distraction and interference with the learning task.

No feedback or hints related to the learning task were given during the study. A demographics questionnaire was administered at the end.

3.5 Analysis Procedures

3.5.1 Analysis of Participant Affect and Cognitive Load.

Descriptive statistical analyses were performed to better understand how measures varied. As measures were collected in different scales, we unitised all measures so they ranged from 0 to 1 to ensure that each feature contributed proportionately to the analysis and model fitting process. Violin plots were used to visualise data distributions for each measure. Heatmaps were used to explore potential patterns among the group of affect measures and the group of cognitive load measures.
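The unitisation step is plain min-max scaling; a minimal sketch:

```python
def unitise(values):
    """Min-max scale a series of scores to the [0, 1] range so that each
    measure contributes proportionately to later model fitting. A constant
    series is mapped to all zeros to avoid division by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```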

3.5.2 Mixed Linear Modelling.

After conducting descriptive analyses, we constructed a set of mixed linear models to investigate how affect relates to cognitive load. The dependent variable included cognitive load and each of its sub-components (i.e., extraneous, intrinsic, and germane load). The independent variables included the measures of learner affect (i.e., positive affect, negative affect, activation) and working-memory capacity. Because each learner submitted multiple samples at different time points, we used a combination of two variables, the learner identifier (SID) and time serial (T), as random effects. This is a common practice in such models [3, 67] and accounts for individual variability between learners and the inherent dependencies in sequential measures.

This analysis was conducted using RStudio (version 2023.06.0, built on R version 4.3.0) with the lme4 (version 1.1-33) and lmerTest (version 3.1-3) packages.

3.5.3 Analysis of Experience Trajectories and Learner Interactions.

We conducted a qualitative analysis investigating the potential interplay between participants’ interactions with the system and the trajectories in their concomitant experience for overall affect, extraneous load, and germane load. As intrinsic load is defined by the inherent difficulty of the learning task and should not be influenced by subjective experience [74], its trajectory was excluded from this analysis.

Identifying Experience Trajectories. We first identified the experience trajectories for overall affect, extraneous load, and germane load. The trajectories were determined by identifying the fluctuations in the series of reported results. These trajectories provide dynamic representations of feelings, moving beyond a singular report to describe the flow of learning experiences over time. Trajectories were categorised as follows:

Continued Increase: The values were characterised by an upward slope across three or more reports.

Continued Decrease: The values were characterised by a downward slope across three or more reports.

Transition to Increase: The values were characterised by an upward slope after a period of stable or decreasing values.

Transition to Decrease: The values were characterised by a downward slope after a period of stable or increasing values.

Stable: The values remained constant and no shifts were observed across three or more reports.
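To make the categories concrete, a minimal classifier over the tail of a report series might look like the following. This is an illustrative reading of the category definitions, not the authors' procedure; in particular, how the direction immediately before a run distinguishes the two transition categories is an assumption.

```python
def classify_trajectory(reports):
    """Classify the most recent run in a series of self-report values using
    the trajectory categories defined above. Inspects the last three reports
    and, for transitions, the direction of the value just before them.
    Returns None for series that are too short or match no category."""
    if len(reports) < 3:
        return None
    a, b, c = reports[-3:]
    if a < b < c:
        # Upward slope; a prior stable or decreasing value marks a transition.
        if len(reports) >= 4 and reports[-4] >= a:
            return "Transition to Increase"
        return "Continued Increase"
    if a > b > c:
        if len(reports) >= 4 and reports[-4] <= a:
            return "Transition to Decrease"
        return "Continued Decrease"
    if a == b == c:
        return "Stable"
    return None
```

For example, the series [5, 1, 2, 3] would be labelled a Transition to Increase, since the upward run follows a decrease.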

Analyzing Learner Interactions with the Game. The trajectories were then analyzed together with participants’ interactions with the system. Interactions were identified using the eye-tracker video data.

The analytical process began with the preprocessing of the videos. Initially, we extracted both the world video and the eye-gaze information, subsequently integrating them into a single video. This was followed by segmenting this video by self-report measure, capturing footage from the end of the previous self-report to the beginning of the most recent one. During this phase, data from participant 23 (P23) was excluded because the tracking was unreliable for this participant due to a sensor issue. P23’s data was used in all other analyses.

We then coded the data. This involved two coders (authors 1 and 3) and one reviewer (author 6). In the training phase, the two coders individually reviewed and identified a sequence of participant interactions from several video segments. They then compared their coding results and collaborated to establish a codebook (see Appendix B). After further review by and discussion with the reviewer, each coder analyzed half of the samples.

Coders focused on the actions students took within the game-based learning environment. This included game and learning actions as well as system feedback and their visual attention (e.g., focusing on specific user-interface elements).

After the coding process, we conducted analysis to identify potential behavioural patterns. For each experience trajectory, the first author grouped the samples by trajectory type. He then identified common behavioural patterns that characterised participant interactions. This synthesised analysis was reviewed and augmented by authors 3 and 6, ensuring depth and reliability.

4 RESULTS

Our case study collected 319 samples across 35 participants since each person submitted multiple responses.

4.1 Changes in reading level

Participating learners generally showed an increase (M = 0.1, SD = 0.12) in their reading level. There were 17 students (48.57%) whose reading level increased (M = 0.2, SD = 0.01), 13 (37.14%) whose level remained stable, and 5 (14.29%) whose level decreased slightly (M = −0.1, SD = 0.00).

4.2 How do affect and cognitive load vary?

Figure 7: Violin Plots for the (a) affect measures and (b) cognitive load measures.

Figure 8: Heatmaps for valence and activation.

Figure 9: Heatmap for cognitive load and its sub-types.

Fig. 7 shows that most measures were highly variable, where the largest value was at least twice as large as the minimum. This variation suggests that differences among participants and samples were substantial.

The plots of affect measures (Fig. 7(a)) showcase data concentrated around high positive affect and low negative affect with moderately high activation. These distributions indicate that participants had generally positive affect and moderate activation levels throughout their learning. In addition, the elongated tails extending from the central region of the negative affect and activation plots highlight the presence of exceptional samples of high negative affect and low activation.

The violin plot for cognitive load (Fig. 7(b)) shows data concentrated around the middle, with a subtle shift towards the upper end, indicating that participants typically experienced moderate to slightly elevated cognitive load. This pattern suggests that the case study tasks were generally appropriate for learners’ abilities and did not overtax them. Intrinsic load appears to be more uniformly distributed than the other types of load, suggesting that participants’ perceptions of the task’s difficulty varied. The extraneous load plot shows a large amount of data centred around the median and into the third and fourth quartiles. The germane load plot exhibits a higher central tendency than the other sub-types of cognitive load. This moderately high germane load aligns with the finding that nearly half of the participants exhibited an improved reading level, i.e., they learned.

We then plotted heatmaps showing the mean value of measures for each learner to understand patterns within and across participants. Learner identifiers are shown across the bottom of the heatmap (each column is a participant), and measures are shown on the left from top to bottom (each row is a measure). For example, Fig. 8 shows that Participant 23 had the highest negative affect (darkest negative square), while having low energy levels (white activation square), and a low to moderate level of positive affect.
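The per-learner means behind such a heatmap can be computed with a simple group-by. The data frame below is illustrative: the column names and values are made up for the example, not taken from the study.

```python
import pandas as pd

# Illustrative per-sample data: one row per self-report (values made up).
df = pd.DataFrame({
    "participant": ["P1", "P1", "P2", "P2"],
    "positive_affect": [6, 8, 3, 4],
    "negative_affect": [2, 1, 5, 6],
})

# Mean of each measure per learner, transposed so that rows are measures
# and columns are participants, matching the heatmap layout described above.
heat = df.groupby("participant").mean().T
print(heat)
```

Sorting the columns of `heat` by one row (e.g., mean positive affect) reproduces the participant ordering used in Fig. 8.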

In Fig. 8, participants were ordered by the average value of positive affect from low to high. No obvious associations can be made between negative affect and each of positive affect or activation. The darker colour at the right end of the activation row indicates that students who experienced high positive affect tended to have moderate to high activation levels, suggesting they enjoyed the game-based learning activity.

The cognitive load measures can be seen in Fig. 9, where participants were ordered according to cognitive load. The figure indicates that participants’ cognitive load was consistent with their experiences of extraneous and intrinsic load. This pattern suggests the cognitive demands of the learning task were driven by extraneous and intrinsic load.

4.3 How does affect contribute to the prediction of cognitive load?

In this analysis, we considered how affect contributed to cognitive load during game-based learning using a set of linear mixed models. We analyzed all 354 participant samples.
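A model of this kind can be fit with a mixed-effects formula API. The sketch below uses synthetic data and assumed column names (`participant`, `positive_affect`, etc.), not the study's actual variables; it only illustrates the model structure (fixed effects for the affect measures plus a random intercept per learner).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study's per-sample data; column names are
# assumptions for illustration, not the authors' variable names.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "participant": rng.integers(0, 35, n),
    "positive_affect": rng.uniform(1, 9, n),
    "negative_affect": rng.uniform(1, 9, n),
    "activation": rng.uniform(1, 9, n),
    "working_memory": rng.integers(5, 9, n).astype(float),
})
df["cognitive_load"] = (
    0.3 * df["positive_affect"]
    + 0.2 * df["negative_affect"]
    + rng.normal(0, 1, n)
)

# Fixed effects mirror the terms reported in Table 1, including the
# positive-affect x working-memory interaction; grouping by participant
# adds a random intercept per learner.
model = smf.mixedlm(
    "cognitive_load ~ positive_affect + negative_affect + activation"
    " + working_memory + positive_affect:working_memory",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.params)
```

The ANOVA-style F-tests reported below would then be computed from the fitted fixed effects.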

Table 1 shows the ANOVA results for affect-related factors when predicting cognitive load (R2 =.53). Significant effects of both positive and negative affect were found. We also found an interaction between positive affect and working-memory capacity, which suggests that working-memory capacity moderates the effect of positive affect.

To investigate this interaction, we plotted cognitive load versus positive affect. In Fig. 10, each data point represents a sample, with positive affect on the x-axis and cognitive load on the y-axis. This scatter plot reveals a generally positive trend, indicating that as a learner’s positive affect increases, their cognitive load tends to increase. The subgroup fit lines depict the effects of positive affect for learners with different working memory capacities. The figure shows potential variations in the strength of the effect at different levels of positive affect. For instance, the slope of the fit line for learners with higher working-memory capacity (i.e., 7 and 8) appears to be steeper than that for those with lower working-memory capacity (i.e., 5 and 6). This pattern suggests that the positive relationship between cognitive load and positive affect is stronger for learners with higher working-memory capacity.

Table 1: ANOVA results from the linear mixed model that predicts cognitive load based on affect

Measure                            DF   F      p      η2
Positive Affect                    20   2.17   .023   .58
Negative Affect                    15   4.17   <.001  .62
Activation                          8   1.08   .400   .21
Working Memory                      3   1.15   .362   .21
Positive Affect x Working Memory   39   1.85   .021   .57

Bold font indicates significant results; DF = degrees of freedom

Figure 10: Interaction between positive affect and working-memory capacity when predicting cognitive load.

Table 2: ANOVA results from the linear mixed models that predict each sub-type of cognitive load using affect

Sub-type          Measure                            DF   F      p      η2
Intrinsic Load    Positive Affect                    20   2.835  .004   .65
                  Negative Affect                    15   1.830  .070   .46
                  Activation                          8   1.848  .107   .33
                  Working Memory                      3   1.234  .333   .22
                  Positive Affect x Activation       86   1.680  .021   .74
                  Positive Affect x Working Memory   39   1.856  .023   .61
Germane Load      Positive Affect                    20   2.065  .034   .57
                  Negative Affect                    15   1.478  .163   .44
                  Activation                          8   1.155  .356   .25
                  Working Memory                      3   1.225  .340   .20
                  Positive Affect x Working Memory   39   1.821  .024   .61
Extraneous Load   Positive Affect                    20   1.820  .061   .52
                  Negative Affect                    15   2.959  .003   .54
                  Activation                          8   1.367  .248   .25
                  Working Memory                      3   0.827  .500   .17

Bold font indicates significant results; DF = degrees of freedom

Following this investigation of cognitive load, we considered the contributing components; Table 2 shows the effects of affect and working-memory capacity when predicting each sub-type of cognitive load.

The model for intrinsic load (R2 =.44) identified positive affect and negative affect as predictors. However, positive affect interacted with each of activation and working-memory capacity. According to Fig. 11(a), while all lines show a positive association between intrinsic load and positive affect, the slope of the lines for students with higher working-memory capacity (i.e., 7 and 8) seems to be steeper than that for those with lower working-memory capacity (i.e., 5 and 6). This implies that as positive affect increases, intrinsic load increases more for students with a higher working-memory capacity. According to Fig. 11(b), the interaction between positive affect and activation shows opposite effects. For activation levels between 1 and 8, an increase in intrinsic load was paired with a rise in positive affect. In contrast, as positive affect increased, intrinsic load decreased for those experiencing the highest level of activation (i.e., 9).

The germane load model (R2 =.47) identified positive affect as a predictor. Additionally, an interaction between positive affect and working-memory capacity was found. As shown in Fig. 12, positive affect appeared to have a stronger effect on germane load for students with higher working-memory capacity (i.e., 8) than those with lower capacity (i.e., 5 and 6).

When looking at the model of extraneous load (R2 =.46), we observed that negative affect was a significant predictor.

Figure 11: Interactions between (a) positive affect and working memory and (b) positive affect and activation when predicting intrinsic load.

Figure 12: Interaction between positive affect and working-memory capacity when predicting germane load.

4.4 How are learner-system interactions connected to trajectories of affect and cognitive load?

Our investigation of participant interaction with the learning game during specific affect trajectories and cognitive load trajectories provided insight into how learners engage with game-based learning. Some of the identified patterns were seen across trajectories, while others were specific to one or more trajectories.

4.4.1 General Patterns.

When attempting passage questions, participants often revisited the passage after seeing the associated question. This was especially common when they received a passage they had previously seen with a new question. In such situations, they would immediately proceed to the question before returning to the passage, where they were likely searching for information that would help them answer the question. For long passages and in some other cases, participants rechecked the passage multiple times.

With respect to observed game-play behaviours, base-building activities, including upgrading and adding buildings, were predominant. Other activities included training soldiers and engaging in peer challenges.

4.4.2 Behavioural Patterns and Affect Trajectories.

Increasing Trajectory. The patterns identified for this trajectory include behaviours that were observed when participants transitioned from a stable or decreasing trajectory to an increasing one as well as when they maintained an increasing trajectory. One behaviour pattern involved the balancing of game play and learning. This typically meant that participants had an activity loop that interleaved gameplay with learning. Fig. 13 illustrates one such loop.

Participant game-play interactions in this trajectory tended to involve few peer challenges and frequent base-building activities like upgrading buildings (e.g., P12, P2, P9) or reverie (soldier) training (e.g., P4, P6). This suggests the potential positive impact of these game-design elements.

Participants who were in this trajectory also tended to answer questions correctly.

Figure 13: These pictures show P29 alternating between game play and learning: (a) P29 is training soldiers, (b) then answering an independent question, (c) and adjusting the position of buildings, before (d) answering another independent question. Participant fixation points are shown using yellow circles.

Decreasing Trajectory. Those who were transitioning to or who remained in a decreasing trajectory exhibited similar patterns. We saw little evidence of an attempt by participants to balance their learning and play or to focus on learning tasks when analyzing data from this trajectory. Rather, participant actions tended towards gameplay. In addition to this, participants experiencing decreasing affective trajectories were seen to repeatedly and unsuccessfully attempt game tasks. In these cases, participant game-play processes were blocked because they had not completed enough learning activities. For instance, P6 tried multiple times to play peer challenges and was often prompted to first answer more questions. Similarly, P20 frequently attempted to upgrade buildings, but was prompted to answer more questions because they did not have enough points. This type of repeated failure might arise from a lack of understanding of game mechanics.

There was a larger portion of samples displaying focused learning during decreasing trajectories compared to those with increasing trajectories, indicating a possible association between focused learning and a decline in affect.

Abandoning learning tasks was also seen amongst participants in this trajectory. In these situations, participants switched to performing game-play activities without answering an assigned question. This was evident in samples from P30 and P17.

Consistent with the patterns observed in increasing trajectories, base-building and reverie training were common. Participants seemed inclined to participate in peer challenges. When they did this, they experienced both victory and defeat.

Passage rechecking behaviours were consistent with those seen during increasing trajectories, and it was common for participants to answer more than half of the questions incorrectly.

4.4.3 Behavioural Patterns and Extraneous Load Trajectories.

There were few samples showing a continued increase in extraneous load which limited our ability to identify learner behaviours for this trajectory. The behavioural patterns seen for passage rechecking were consistent with those mentioned in Section 4.4.1.

When investigating the transition of participants’ extraneous cognitive load from either a stable or decreasing trajectory to an increasing one, we saw several patterns. It was common for those experiencing an increase in extraneous load to be predominantly engaged with passage questions. In terms of gaming behaviours, upgrading buildings was a common participant action. Several participants also encountered failed game-play attempts that were followed by prompts encouraging them to complete more learning activities.

When participant extraneous load shifted from a stable or increasing trajectory to a decreasing one, we noticed that they were balancing their pursuit of learning and gaming activities. When in this trajectory, participants tended to complete a mix of both independent and passage questions. Notably, participant extraneous load was consistently decreasing when they were primarily answering independent questions, which have fewer working-memory demands. On the gaming front, training reveries was the dominant activity during this trajectory.

4.4.4 Behavioural Patterns and Germane Load Trajectories.

There were few samples where germane load showed a continued increase or decrease. When participants were transitioning to a decrease in germane load, they were answering independent questions. This was not the case when they were transitioning to an increase in germane load.

5 DISCUSSION

We studied learner interactions with an English literacy game using self-report measures and learner-system interaction information. Unlike other studies where emotions were artificially induced, we recorded participants’ affect as it naturally unfolded, resulting in increased ecological validity. This approach mirrors real-life scenarios, enhancing the reliability of the findings.

Study measures showed a general increase in learner reading levels and positive affect. Both of these align with the objectives of this learning game. These findings are also consistent with existing research underscoring the beneficial impact of game-based learning approaches [58, 68]. Our analyses provide insight into how affect predicts cognitive load and which behavioural patterns occur across different learner affect and cognitive load trajectories. Building on the findings and analytical insights that we discuss below, we present design guidance for supporting affect through system adaptation.

5.1 Affect Can Predict Cognitive Load

In our exploration using linear mixed-effects models, we uncovered that both positive and negative affect predicted cognitive load. This finding highlights the multifaceted role that the valence dimension of affect plays during learning. Our findings are consistent with prior research that emphasises the interplay between affect and cognitive processes [23, 37, 45]. We delve deeper by highlighting the link to cognitive load in a self-paced learning environment. This aspect of the game gives learners the opportunity to regulate their affect by choosing the activities they will perform. Our analysis highlights the importance of considering affect given its influence on cognitive load despite this relationship having been largely ignored in such settings. Prior research in self-paced learning environments has prioritised cognitive dimensions over affective ones under the assumption that affect plays a subsidiary role [83].

Differences in the perceived or identified role of affect between the present study and prior research may be related to how this work has conceptualised and operationalised affect. Studies have often conceptualised affect as a discrete set of emotions [19], rather than considering affect as multidimensional. Our separation of affect into valence and activation enables a more nuanced understanding of the role affect plays during self-paced learning. This allowed us to better characterise which aspects of affect were intertwined with cognitive load, with valence being directly linked to increases in specific types of cognitive load—negative valence predicted increased extraneous load and positive valence predicted germane load. This suggests opportunities for integrating different interventions based on the valence of the affective state a learner is experiencing.

5.1.1 Positive Affect May Benefit Schema Construction.

Our study provides insight into the influence of positive affect on cognitive load. According to cognitive load theory [74], germane load supports the development of schemas, which are cognitive structures that enable our understanding of a phenomenon. Our analysis revealed positive affect predicted increased germane load, suggesting that positive affect is predictive of or beneficial for learning (i.e., schema development). This finding suggests learners were engaged in higher levels of cognitive processing that support learning; it is consistent with earlier research that suggests the important role of positive affect in enhancing attention on learning content [14] and knowledge comprehension [66].

The observed differential effect of positive affect on germane load when accounting for working-memory capacity highlights the importance of considering such individual differences when designing for learner affect. It suggests that promoting positive affect and creating a supportive and engaging learning environment may be particularly beneficial for students with higher working-memory capacity, as it can help optimise their cognitive load. This type of adaptation should support improved learning (as suggested by the reading level increase seen among participants).

5.1.2 Negative Affect is Related to Extraneous Load.

The observed relationship between high negative affect and increased extraneous load suggests that negative affect can be burdensome and impede learning. This strengthens results from prior studies that found negative affective states with higher activation (e.g., anxiety or sadness) might reduce one’s ability to process information [12, 70], while deactivating states like boredom may lead to reduced attention and more superficial processing of information [14, 54]. This insight relates to the resource allocation model [20], suggesting the processing of emotional information can hinder the concurrent or subsequent processing of other information. Regulating negative affect in learning might consume an extra portion of an individual’s limited cognitive resources, resulting in higher extraneous load.

5.1.3 Working Memory as a Moderator.

We did not observe a main effect of working-memory capacity on cognitive load. This seems counter-intuitive given the understanding that cognitive load essentially reflects working memory use. People vary in their working-memory capacities, leading to different thresholds for overload. It has been argued that those with lower capacities could be overloaded more easily [76], making it difficult for them to process and retain new knowledge. Because participants generally experienced moderate levels of cognitive load, we suspect the presented learning task might not have been sufficiently difficult to push their working memory to its limits.

The identified interactions between affect and working-memory capacity have the potential to generalise given the fact that participant working-memory capacity was typical of adults. These interactions suggest nuanced relationships and a need for personalised approaches that adapt to learner working-memory capacity. Ignoring these differences might lead to overgeneralised conclusions or technological interventions that fail to cater to the distinct needs of each learner.

5.2 Activation Was Generally Low During Self-paced Learning

The lack of measurable effects for activation on cognitive load in the present study suggests that activation might play less of a role in self-paced, game-based learning. This finding conflicts with prior research. The literature suggests that strong affect, regardless of its valence, could influence where and how learners allocate their attention [22]. Additionally, preliminary empirical results indicate that activating negative states, such as anxiety or stress, can act as cognitive disruptors that limit learners’ ability to process and retain new information [35, 53].

The mismatch between our findings and existing theory may be the result of the absence of high activation levels among participating learners. Previous research on self-paced learning suggests that learners in this context typically do not experience heightened activation [83]. Self-paced learning allows learners to control the pace and sequence of their learning activities. This approach often leads to distinct affective responses compared to more traditional, instructor-led environments. Since our learners could take breaks, revisit material, or even skip ahead based on their understanding and comfort, they could avoid the emotional extremes (both positive and negative) that can arise in fixed-paced settings [32]. This type of self-management was likely responsible for the lack of variability in learner activation: participants’ observed behaviour patterns showed that learners who were experiencing a shift towards positive affect alternated between learning and game play.

While a self-paced design may result in lower activation levels, it does not mean we can ignore this important aspect or its role in learning processes. A sudden surge in activation could indicate an abnormal situation, suggesting a potential mismanagement of the learning experience. By pinpointing and adapting to these inflection points, educational systems could offer enhanced support, ensuring a smoother learning experience. Given this, future research should explore environments where learner activation is more variable. Understanding the interplay of high activation levels with cognition could be instrumental to designing future affect-aware learning environments.

5.3 Perceived Difficulty and Inherent Difficulty are Different

Our analysis indicates that positive affect may predict reported intrinsic load, confirming that a learner’s affect could influence their perception of task difficulty. This expands prior work on induced emotions [21] to those experienced by learners who did not experience additional manipulation. Results from the present study provide further insight and indicate that this effect seems to be moderated by factors such as working-memory capacity and activation. However, intrinsic load is meant to represent the inherent complexity of the learning task, determined by the complexity of the content and a learner’s prior knowledge [74].

Traditionally and theoretically, affect should not directly impact intrinsic load. We hypothesize that this incongruity arises because self-reported scores are likely capturing perceived difficulty rather than fully reflecting the learning task’s inherent difficulty. While the inherent difficulty posed by a specific content item should be stable, a learner’s affect seems to skew perceived difficulty. These results reflect the risks of relying solely on self-reports for measuring intrinsic load. Distinguishing between inherent and perceived difficulty will be essential to crafting adaptive support. Analytic techniques, such as item-response theory [60], may provide a reasonable proxy for intrinsic load through their estimation of item difficulty.
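As one illustration of the item-response-theory idea, a crude Rasch-style difficulty estimate can be derived from response data alone. This is a sketch under strong simplifying assumptions (learner abilities centred at zero), not the full IRT machinery the cited work describes.

```python
import math

def rasch_difficulty(responses):
    """Crude item-difficulty proxy: the logit of the proportion of
    incorrect answers to an item (1 = correct, 0 = incorrect).

    Under a Rasch (1PL) model with learner ability fixed at zero,
    harder items (answered correctly less often) receive larger
    values.  Illustrative only; real IRT fits abilities jointly."""
    p_correct = sum(responses) / len(responses)
    p_correct = min(max(p_correct, 0.01), 0.99)  # avoid infinite logits
    return math.log((1 - p_correct) / p_correct)
```

An item answered correctly by three of four learners gets a negative difficulty, while an item most learners miss gets a positive one; estimates of this kind could stand in for inherent (rather than perceived) task difficulty.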

Our analysis of interaction effects revealed that learners experiencing the highest activation level tended to perceive tasks as less challenging when they were also experiencing high positive affect. Findings related to the broaden-and-build theory of positive emotions have linked positive emotions to improved creativity and more flexible thinking [24]. The expanded cognitive flexibility may lead to creative solutions and make challenging tasks seem more manageable. Learners with other activation levels showed increases in their intrinsic load as their positive affect increased. These results highlight the need for personalisation when crafting adaptations for learners.

5.4 Cognitive Load Was Driven by Extraneous Load

The heatmap analysis revealed a consistency in the levels of cognitive load and extraneous load experienced. This suggests that the cognitive effort learners invested during the learning process was driven by the amount of effort expended on unnecessary information. One fundamental principle of instructional design emphasises the need to avoid overwhelming a learner’s cognitive capacity [74, 76]. Overburdening this capacity can lead to learners overlooking critical concepts as they grapple with an excess of information. It can also hinder knowledge retention. The observed consistency between extraneous load and cognitive load in this game-based learning environment indicates that refining the learning system to minimise extraneous load could release cognitive resources for use in processing information that directly relates to learning. Coupled with our findings about how negative affect predicts extraneous load, this insight underscores the importance of providing adaptive assistance to help learners regulate their affect, especially when negative affect is increasing.

5.5 Design Guidance for Affective Game-based Learning Systems

5.5.1 Personalised Support that Responds to Learner Affect is Needed.

In self-paced learning contexts, learners often lack the external emotional support that can be obtained from instructors or peers in other settings. Building on our understanding that affect can impact cognitive load and the crucial role of the latter in learning, we believe that there is a need for affect-aware adaptations in self-paced learning. Such adaptations could benefit both the learner’s experience and learning itself. The rapid development of computing technologies and the nature of online learning present opportunities for offering personalised support that adapts to learner affect, aiding learners in regulating their affect and optimizing cognitive load. Our findings on the moderating effect of attributes like working-memory capacity highlight the need to factor in individual differences when conceptualizing affect-aware designs. This attention to individual variation should ensure that each learner’s experience is optimised, fostering a more effective and engaging educational journey.

In the context of reading comprehension and English language arts, the system could auto-select reading materials with tragic or comedic tone to help the learner regulate their affect. Alternatively, the system could select material that aligns with a student’s current emotional state, thus benefiting comprehension by facilitating deeper information processing as is consistent with the emotion congruency effect [18].

5.5.2 Features that Support Learner Self-regulation of Affect Are Needed.

One important insight we gained from the behavioural analysis is that learners tended to loop between learning and game-play activities when experiencing increased affect. This implies that alternating learning and entertainment activities can be an effective self-regulation strategy in self-paced learning environments. Achieving a balance may ensure learners remain in a flow state, where they are fully engaged in the learning process [10]. This insight is consistent with Reinders’s work [59] and confirms that language-learning games could lower affective barriers to increase learner willingness to engage in learning activities.

However, not all learners managed their affect in this way and some experienced continued negative affect. These trajectories and their associated behavioural patterns highlight a need to include features that support learner regulation of their affect. This would involve designing features that help learners recognise, understand, and manage their affect as part of supporting their learning process. One such feature could involve the customization of feedback based on a learner’s performance, progress, and affective states. For example, if a learner shows decreases in affect with an imbalance of activities, the game could suggest a shift to alternative activities where the learner will experience greater success.

5.5.3 Avoid Repeated Negative Feedback and Failure.

We found a connection between a decline in affect and repeated negative feedback from the system. The impact of negative feedback could be influenced by various factors such as delivery method, a learner’s pre-existing beliefs, and the learning environment design [30]. Past studies have highlighted that when not properly handled, negative feedback in game-based learning can demotivate learners [81]. In addition to negative feedback, consistently encountering barriers to progress despite effort can lead to learned helplessness, where learners believe they lack control over outcomes [40]. Some have suggested that negative feedback can impact a learner’s self-concept, potentially leading to feelings of disappointment or doubt about one’s capabilities [9]. Notably, we found certain learners remained in a loop of unsuccessful attempts due to a misunderstanding of the game’s mechanics, yet system messages failed to correct the misunderstanding. This observation suggests deficiencies in the system’s instructions or design. We advocate designs that minimise excessive negative feedback and failure. Moreover, this loop demonstrates a need to identify where ongoing negative feedback or experiences of failure occur, and it suggests that supplementary adaptation is needed to support learner progression.

5.5.4 Design for the Risks of Peer Competition.

The learning system used in our study featured a peer-challenge component designed to bolster learner motivation. Our data showed that learners used this peer-challenge mechanism when exhibiting declining affect trajectories. This observation hints at an unsuccessful attempt by learners to regulate their affect. While competition can spark extrinsic motivation, there is a danger it could overshadow or even diminish intrinsic motivation rooted in genuine personal interest or joy [63]. We recommend the system propose an alternative activity when there is a continued decrease in affect after engaging in competitive elements.

5.6 Limitations

One limitation of this study results from the sampled learner population. Learners were university students residing in Canada. Many participants had an East-Asian background. As such, the results may only generalise to other members of these populations given potential cultural influences on subjective reports [26]. This work needs to be repeated with members of other population groups to see how widely the results generalise.

In addition to this limitation, the voluntary nature of participation with accompanying incentivization introduces the potential for self-selection bias. While offering rewards can indeed boost participation rates and enhance data quality, it might also attract individuals primarily motivated by the incentive rather than genuine interest in the study or learning. The use of other incentive mechanisms or the repetition of this study in a formal educational environment may reduce the impact of such sampling biases. Now that we have found potential effects, other study designs can be used to explore whether this bias impacted them.

6 CONCLUSION

In response to increasing interest in affective learning design and the potential interplay between affect and cognition, we investigated how affect (as a multi-dimensional construct consisting of valence and activation) predicts cognitive load during game-based learning. During the study, participants’ affective states were recorded as they naturally occurred when interacting with the game-based learning system, which differentiates this work from much of that found in the literature. Our analysis indicated that valence, both positive and negative, predicts learner cognitive load. Notably, negative affect was a significant predictor of extraneous load, while positive affect was a significant predictor of germane load. This suggests that systems should be designed to support learner positive affect so that germane load can be maximised to facilitate learning. These results shed light on affect’s multi-faceted role during learning and show a need for considering affect when studying cognitive load.

The modelling results that detail affect’s role in cognitive load were augmented with a qualitative analysis of learner interaction patterns. This analysis identified previously unknown behavioural patterns that were present when learner affect and cognitive load were increasing or decreasing. For instance, increasing affect was often seen when learners alternated between learning and game-play activities in a balanced manner, while decreasing affect was seen when learners were unsuccessful in the game. These and other behavioural patterns hint at potential activity recommendations that could be used to support learner regulation of their affect and cognitive load. Moreover, the observed recurring increase in extraneous load during specific game-play moments suggests potential areas for improvement in the user interface design. Drawing from these insights, we highlight the need for adaptation to better support learner affect. Additionally, we provide recommendations for game-based learning, which can further support system and instructional designers.

ACKNOWLEDGMENTS

This work was supported in part by funding from the Social Sciences and Humanities Research Council of Canada and the Natural Sciences and Engineering Research Council of Canada (NSERC) [RGPIN-2018-03834].

We are also grateful to Shoelace Learning for giving us access to the game and helping us to understand its mechanics.

A COGNITIVE LOAD QUESTIONS

Please take your time to read each question carefully and respond on the presented scale from 1 to 10, where ‘1’ indicates ‘not at all the case’ and ‘10’ indicates ‘completely the case’.

Constructing a general idea of what the texts say required effort.

It took effort to relate the ideas from each of the sentences to each other.

It was easy to identify important information in the texts.

I had to pay a lot of attention to play this game.

It took effort to remember all the information.

The reading enhanced my understanding of the topics covered.

The questions enhanced my understanding of the topics covered.

In general, I found the reading content difficult.

B CODEBOOK

B.1 Learning Related Interactions

Passage reading

Rechecking the passage

Passage Question

Independent Question

General game play

Adding a base building

Upgrading a Building

Exploring Virtual Environment (main game interface)

Asking researcher for explanation of the game design

Filling in self-report measures

Game Tutorial

Gaze out of the screen

Used smart phone

Internet off

Looking at something else

Looking at the researcher

Abandoning a question

Peer Challenge

Researcher tried to help with the server issue (troubleshooting)

Troubleshooting

Server issues occurred

Training soldiers

Tried to play peer challenge but failed

Tried to add a base building but failed

Tried to upgrade a building but failed

Supplemental Material

Video Presentation (mp4, 109.5 MB)

References

  1. Terek Arce and Kyla McMullen. 2021. The Corsi Block-Tapping Test: Evaluating methodological practices with an eye towards modern digital frameworks. Computers in Human Behavior Reports 4 (Aug. 2021), 100099. https://doi.org/10.1016/j.chbr.2021.100099
  2. A. Baddeley, R. Logie, S. Bressi, S. Della Sala, and H. Spinnler. 1986. Dementia and Working Memory. The Quarterly Journal of Experimental Psychology Section A 38, 4 (Nov. 1986), 603–618. https://doi.org/10.1080/14640748608401616
  3. Douglas Bates, Martin Mächler, Ben Bolker, and Steve Walker. 2015. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software 67 (Oct. 2015), 1–48. https://doi.org/10.18637/jss.v067.i01
  4. Sian L. Beilock, Catherine A. Kulp, Lauren E. Holt, and Thomas H. Carr. 2004. More on the Fragility of Performance: Choking Under Pressure in Mathematical Problem Solving. Journal of Experimental Psychology: General 133 (2004), 584–600. https://doi.org/10.1037/0096-3445.133.4.584
  5. Catherine M. Bohn-Gettler and David N. Rapp. 2014. Emotion during reading and writing. In International handbook of emotions in education. Routledge/Taylor & Francis Group, New York, NY, US, 437–457.
  6. Tobias Brosch, Klaus R. Scherer, Didier Maurice Grandjean, and David Sander. 2013. The impact of emotion on perception, attention, memory, and decision-making. Schweizerische medizinische Wochenschrift 143 (2013), w13786. https://doi.org/10.4414/smw.2013.13786
  7. Rafael A. Calvo and Sidney D’Mello. 2010. Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Transactions on Affective Computing 1, 1 (Jan. 2010), 18–37. https://doi.org/10.1109/T-AFFC.2010.1
  8. Philip Michael Corsi. 1972. Human memory and the medial temporal region of the brain. Ph.D. Dissertation.
  9. Martin V. Covington. 1992. Making the grade: A self-worth perspective on motivation and school reform. Cambridge University Press, New York, NY, US. https://doi.org/10.1017/CBO9781139173582
  10. Mihaly Csikszentmihalyi. 1990. Flow: the psychology of optimal experience (first ed.). Harper & Row, New York.
  11. Carrie Demmans Epp. 2016. English language learner experiences of formal and informal learning environments. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK ’16). Association for Computing Machinery, New York, NY, USA, 231–235. https://doi.org/10.1145/2883851.2883896
  12. Florin Dolcos and Gregory McCarthy. 2006. Brain systems mediating cognitive interference by emotional distraction. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 26, 7 (Feb. 2006), 2072–2079. https://doi.org/10.1523/JNEUROSCI.5042-05.2006
  13. Sidney D’Mello, Ed Dieterle, and Angela Duckworth. 2017. Advanced, Analytic, Automated (AAA) Measurement of Engagement During Learning. Educational Psychologist 52, 2 (2017), 104–123. https://doi.org/10.1080/00461520.2017.1281747
  14. Sidney D’Mello and Art Graesser. 2012. Dynamics of affective states during complex learning. Learning and Instruction 22, 2 (April 2012), 145–157. https://doi.org/10.1016/j.learninstruc.2011.10.001
  15. Sidney D’Mello, Blair Lehman, Reinhard Pekrun, and Art Graesser. 2014. Confusion can be beneficial for learning. Learning and Instruction 29 (Feb. 2014), 153–170. https://doi.org/10.1016/j.learninstruc.2012.05.003
  16. Sidney D’Mello, Blair Lehman, Jeremiah Sullins, Rosaire Daigle, Rebekah Combs, Kimberly Vogt, Lydia Perkins, and Art Graesser. 2010. A Time for Emoting: When Affect-Sensitivity Is and Isn’t Effective at Promoting Deep Learning. In Intelligent Tutoring Systems (Lecture Notes in Computer Science), Vincent Aleven, Judy Kay, and Jack Mostow (Eds.). Springer, Berlin, Heidelberg, 245–254. https://doi.org/10.1007/978-3-642-13388-6_29
  17. Giovanna Egidi and Richard J. Gerrig. 2009. How valence affects language processing: Negativity bias and mood congruence in narrative comprehension. Memory & Cognition 37, 5 (July 2009), 547–555. https://doi.org/10.3758/MC.37.5.547
  18. Giovanna Egidi and Howard C. Nusbaum. 2012. Emotional language processing: How mood affects integration processes during discourse comprehension. Brain and Language 122, 3 (Sept. 2012), 199–210. https://doi.org/10.1016/j.bandl.2011.12.008
  19. Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion 6, 3-4 (May 1992), 169–200. https://doi.org/10.1080/02699939208411068
  20. Henry C. Ellis and Patricia W. Ashbrook. 1989. The "State" of Mood and Memory Research: A Selective Review. Journal of Social Behavior and Personality 4, 2 (Jan. 1989), 1–21. https://www.proquest.com/docview/1292254290/citation/509D79B47A494957PQ/1
  21. Henry C. Ellis, Scott A. Ottaway, Larry J. Varner, Andrew S. Becker, and Brent A. Moore. 1997. Emotion, motivation, and text comprehension: The detection of contradictions in passages. Journal of Experimental Psychology: General 126, 2 (1997), 131–146. https://doi.org/10.1037/0096-3445.126.2.131
  22. Michael W. Eysenck, Nazanin Derakshan, Rita Santos, and Manuel G. Calvo. 2007. Anxiety and cognitive performance: attentional control theory. Emotion (Washington, D.C.) 7, 2 (May 2007), 336–353. https://doi.org/10.1037/1528-3542.7.2.336
  23. Klaus Fiedler and Susanne Beier. 2014. Affect and Cognitive Processes in Educational Contexts. In International Handbook of Emotions in Education. Routledge, 36–55.
  24. Barbara L. Fredrickson. 2001. The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. American Psychologist 56, 3 (2001), 218–226. https://doi.org/10.1037/0003-066X.56.3.218
  25. Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology, Peter A. Hancock and Najmedin Meshkati (Eds.). Human Mental Workload, Vol. 52. North-Holland, 139–183. https://doi.org/10.1016/S0166-4115(08)62386-9
  26. Steven J. Heine, Darrin R. Lehman, Kaiping Peng, and Joe Greenholtz. 2002. What’s wrong with cross-cultural comparisons of subjective Likert scales?: The reference-group effect. Journal of Personality and Social Psychology 82, 6 (June 2002), 903–918.
  27. Joel M. Hektner, Jennifer A. Schmidt, and Mihaly Csikszentmihalyi. 2007. Experience sampling method: Measuring the quality of everyday life. Sage Publications, Inc, Thousand Oaks, CA, US.
  28. Jon-Chao Hong, Ming-Yueh Hwang, Mei-Syuan Chen, and Kai-Hsin Tai. 2021. Explorative and Exploitative Learning Affected by Extraneous Cognitive Load and Gameplay Anxiety in a Gestalt Perception Game. Journal of Educational Computing Research 59, 2 (April 2021), 209–229. https://doi.org/10.1177/0735633120961415
  29. Ashish Kapoor and Rosalind W. Picard. 2005. Multimodal affect recognition in learning environments. In Proceedings of the 13th annual ACM international conference on Multimedia (MULTIMEDIA ’05). Association for Computing Machinery, New York, NY, USA, 677–682. https://doi.org/10.1145/1101149.1101300
  30. Avraham N. Kluger and Angelo DeNisi. 1996. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin 119, 2 (1996), 254–284. https://doi.org/10.1037/0033-2909.119.2.254
  31. Lisa Knörzer, Roland Brünken, and Babette Park. 2016. Facilitators or suppressors: Effects of experimentally induced emotions on multimedia learning. Learning and Instruction 44 (Aug. 2016), 97–107. https://doi.org/10.1016/j.learninstruc.2016.04.002
  32. Kenneth R. Koedinger, Albert T. Corbett, and Charles Perfetti. 2012. The Knowledge-Learning-Instruction Framework: Bridging the Science-Practice Chasm to Enhance Robust Student Learning. Cognitive Science 36, 5 (2012), 757–798. https://doi.org/10.1111/j.1551-6709.2012.01245.x
  33. Stephen Krashen. 1987. Principles and practice in second language acquisition. Prentice-Hall International, New Jersey.
  34. Moritz Krell, Kate M. Xu, Günter Daniel Rey, and Fred Paas. 2022. Editorial: Recent Approaches for Assessing Cognitive Load From a Validity Perspective. Frontiers in Education 6 (2022). https://www.frontiersin.org/articles/10.3389/feduc.2021.838422
  35. Kevin S. LaBar and Roberto Cabeza. 2006. Cognitive neuroscience of emotional memory. Nature Reviews Neuroscience 7, 1 (Jan. 2006), 54–64. https://doi.org/10.1038/nrn1825
  36. Eero J. Laine. 1987. Affective Factors in Foreign Language Learning and Teaching: A Study of the "Filter." Jyvaskyla Cross-Language Studies, No. 13. ISBN: 9789516797192. ERIC Number: ED292302.
  37. Richard L. Lamb, Leonard Annetta, Jonah Firestone, and Elisabeth Etopio. 2018. A meta-analysis with examination of moderators of student cognition, affect, and learning outcomes while using serious educational games, serious games, and simulations. Computers in Human Behavior 80, C (March 2018), 158–167. https://doi.org/10.1016/j.chb.2017.10.040
  38. Joseph E. LeDoux and Richard Brown. 2017. A higher-order theory of emotional consciousness. Proceedings of the National Academy of Sciences of the United States of America 114, 10 (March 2017), E2016–E2025. https://doi.org/10.1073/pnas.1619316114
  39. Jimmie Leppink, Fred Paas, Tamara van Gog, Cees P. M. van der Vleuten, and Jeroen J. G. van Merriënboer. 2014. Effects of pairs of problems and examples on task performance and different types of cognitive load. Learning and Instruction 30 (April 2014), 32–42. https://doi.org/10.1016/j.learninstruc.2013.12.001
  40. Steven F. Maier and Martin E. Seligman. 1976. Learned helplessness: Theory and evidence. Journal of Experimental Psychology: General 105, 1 (1976), 3–46. https://doi.org/10.1037/0096-3445.105.1.3
  41. Margaret M. Bradley and P. J. Lang. 1994. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. Journal of Behavior Therapy and Experimental Psychiatry 25, 1 (March 1994), 49–59. https://doi.org/10.1016/0005-7916(94)90063-9
  42. Richard E. Mayer. 2002. Multimedia learning. In Psychology of Learning and Motivation. Vol. 41. Academic Press, 85–139. https://doi.org/10.1016/S0079-7421(02)80005-6
  43. Richard E. Mayer and Roxana Moreno. 2003. Nine Ways to Reduce Cognitive Load in Multimedia Learning. Educational Psychologist 38, 1 (2003), 43–52. https://doi.org/10.1207/S15326985EP3801_6
  44. George A. Miller. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63, 2 (1956), 81–97. https://doi.org/10.1037/h0043158
  45. Caitlin Mills, Jennifer Wu, and Sidney D’Mello. 2019. Being Sad Is Not Always Bad: The Influence of Affect on Expository Text Comprehension. Discourse Processes 56, 2 (Feb. 2019), 99–116. https://doi.org/10.1080/0163853X.2017.1381059
  46. Roxana Moreno. 2010. Cognitive load theory: more food for thought. Instructional Science 38, 2 (March 2010), 135–141. https://doi.org/10.1007/s11251-009-9122-9
  47. Ghasim Nabizadeh Chianeh, Shahram Vahedi, Mohammad Rostami, and Mohammad Ali Nazari. 2012. Validity and Reliability of Self-Assessment Manikin. 6, 2 (Sept. 2012), 52–61. http://rph.khu.ac.ir/article-1-94-en.html
  48. Laura M. Naismith, Jeffrey J. H. Cheung, Charlotte Ringsted, and Rodrigo B. Cavalcanti. 2015. Limitations of subjective cognitive load measures in simulation-based procedural training. Medical Education 49, 8 (Aug. 2015), 805–814. https://doi.org/10.1111/medu.12732
  49. Fred Paas and Paul Ayres. 2014. Cognitive Load Theory: A Broader View on the Role of Memory in Learning and Education. Educational Psychology Review 26, 2 (June 2014), 191–195. https://doi.org/10.1007/s10648-014-9263-5
  50. Fred Paas, Paul Ayres, and Mariya Pachman. 2008. Assessment of cognitive load in multimedia learning: theory, methods and applications. In Recent innovations in educational technology that facilitate student learning, Daniel H. Robinson and Gregory Schraw (Eds.). Information Age Publishing, Charlotte, NC, 11–35.
  51. Fred Paas, Juhani E. Tuovinen, Huib Tabbers, and Pascal W. M. Van Gerven. 2003. Cognitive Load Measurement as a Means to Advance Cognitive Load Theory. Educational Psychologist 38, 1 (March 2003), 63–71. https://doi.org/10.1207/S15326985EP3801_8
  52. Zachary A. Pardos, Ryan S. J. D. Baker, Maria O. C. Z. San Pedro, Sujith M. Gowda, and Supreeth M. Gowda. 2013. Affective states and state tests: investigating how affect throughout the school year predicts end of year learning outcomes. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK ’13). Association for Computing Machinery, New York, NY, USA, 117–124. https://doi.org/10.1145/2460296.2460320
  53. Reinhard Pekrun. 2006. The Control-Value Theory of Achievement Emotions: Assumptions, Corollaries, and Implications for Educational Research and Practice. Educational Psychology Review 18, 4 (Dec. 2006), 315–341. https://doi.org/10.1007/s10648-006-9029-9
  54. Reinhard Pekrun, Thomas Goetz, Wolfram Titz, and Raymond P. Perry. 2002. Academic Emotions in Students’ Self-Regulated Learning and Achievement: A Program of Qualitative and Quantitative Research. Educational Psychologist 37, 2 (Jan. 2002), 91–105. https://doi.org/10.1207/S15326985EP3702_4
  55. Rosalind W. Picard. 2003. Affective computing: challenges. International Journal of Human-Computer Studies 59, 1 (July 2003), 55–64. https://doi.org/10.1016/S1071-5819(03)00052-1
  56. Jan L. Plass and Slava Kalyuga. 2019. Four Ways of Considering Emotion in Cognitive Load Theory. Educational Psychology Review 31, 2 (June 2019), 339–359. https://doi.org/10.1007/s10648-019-09473-5
  57. Jan L. Plass and Ulas Kaplan. 2016. Emotional Design in Digital Media for Learning. In Emotions, Technology, Design, and Learning, Sharon Y. Tettegah and Martin Gartmeier (Eds.). Academic Press, San Diego, 131–161. https://doi.org/10.1016/B978-0-12-801856-9.00007-4
  58. Meihua Qian and Karen R. Clark. 2016. Game-based Learning and 21st century skills: A review of recent research. Computers in Human Behavior 63 (Oct. 2016), 50–58. https://doi.org/10.1016/j.chb.2016.05.023
  59. Hayo Reinders and Sorada Wattana. 2015. Affect and willingness to communicate in digital game-based learning. ReCALL 27, 1 (Jan. 2015), 38–57. https://doi.org/10.1017/S0958344014000226
  60. Steven P. Reise and Tyler M. Moore. 2023. Item response theory. In APA handbook of research methods in psychology: Foundations, planning, measures, and psychometrics, Vol. 1, 2nd ed. American Psychological Association, Washington, DC, US, 809–835. https://doi.org/10.1037/0000318-037
  61. Günter Daniel Rey and Florian Buchwald. 2011. The expertise reversal effect: cognitive load and motivational explanations. Journal of Experimental Psychology. Applied 17, 1 (March 2011), 33–48. https://doi.org/10.1037/a0022243
  62. James A. Russell. 2003. Core affect and the psychological construction of emotion. Psychological Review 110, 1 (2003), 145–172. https://doi.org/10.1037/0033-295X.110.1.145
  63. Richard M. Ryan and Edward L. Deci. 2000. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist 55, 1 (2000), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
  64. Gavriel Salomon. 1984. Television is "easy" and print is "tough": The differential investment of mental effort in learning as a function of perceptions and attributions. Journal of Educational Psychology 76, 4 (1984), 647–658. https://doi.org/10.1037/0022-0663.76.4.647
  65. Klaus R. Scherer and Agnes Moors. 2019. The Emotion Process: Event Appraisal and Component Differentiation. Annual Review of Psychology 70, 1 (2019), 719–745. https://doi.org/10.1146/annurev-psych-122216-011854
  66. Sara Scrimin and Lucia Mason. 2015. Does mood influence text processing and comprehension? Evidence from an eye-movement study. British Journal of Educational Psychology 85, 3 (2015), 387–406. https://doi.org/10.1111/bjep.12080
  67. Judith D. Singer and John B. Willett. 2003. Applied longitudinal data analysis: Modeling change and event occurrence. Oxford University Press, New York, NY, US.
  68. Klaus Dieter Stiller and Silke Schworm. 2019. Game-Based Learning of the Structure and Functioning of Body Cells in a Foreign Language: Effects on Motivation, Cognitive Load, and Performance. Frontiers in Education 4 (2019). https://doi.org/10.3389/feduc.2019.00018
  69. Gijsbert Stoet. 2010. PsyToolkit: A software package for programming psychological experiments using Linux. Behavior Research Methods 42, 4 (Nov. 2010), 1096–1104. https://doi.org/10.3758/BRM.42.4.1096
  70. Justin Storbeck and Gerald L. Clore. 2005. With sadness comes accuracy; with happiness, false memory: mood and the false memory effect. Psychological Science 16, 10 (Oct. 2005), 785–791. https://doi.org/10.1111/j.1467-9280.2005.01615.x
  71. Jerry Chih-Yuan Sun and Shih-Jou Yu. 2019. Personalized Wearable Guides or Audio Guides: An Evaluation of Personalized Museum Guides for Improving Learning Achievement and Cognitive Load. International Journal of Human–Computer Interaction 35, 4-5 (March 2019), 404–414. https://doi.org/10.1080/10447318.2018.1543078
  72. John Sweller. 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science 12, 2 (April 1988), 257–285. https://doi.org/10.1016/0364-0213(88)90023-7
  73. John Sweller. 2010. Element Interactivity and Intrinsic, Extraneous, and Germane Cognitive Load. Educational Psychology Review 22, 2 (June 2010), 123–138. https://doi.org/10.1007/s10648-010-9128-5
  74. John Sweller. 2011. Cognitive Load Theory. In Psychology of Learning and Motivation, Jose P. Mestre and Brian H. Ross (Eds.). Vol. 55. Academic Press, 37–76. https://doi.org/10.1016/B978-0-12-387691-1.00002-8
  75. John Sweller. 2023. The Development of Cognitive Load Theory: Replication Crises and Incorporation of Other Theories Can Lead to Theory Expansion. Educational Psychology Review 35, 4 (Sept. 2023), 95. https://doi.org/10.1007/s10648-023-09817-2
  76. John Sweller, Jeroen J. G. van Merrienboer, and Fred G. W. C. Paas. 1998. Cognitive Architecture and Instructional Design. Educational Psychology Review 10, 3 (Sept. 1998), 251–296. https://doi.org/10.1023/A:1022193728205
  77. Michelle Taub, Roger Azevedo, Ramkumar Rajendran, Elizabeth B. Cloude, Gautam Biswas, and Megan J. Price. 2021. How are students’ emotions related to the accuracy of cognitive and metacognitive processes during learning with an intelligent tutoring system? Learning and Instruction 72 (April 2021), 101200. https://doi.org/10.1016/j.learninstruc.2019.04.001
  78. Edmund R. Thompson. 2007. Development and Validation of an Internationally Reliable Short-Form of the Positive and Negative Affect Schedule (PANAS). Journal of Cross-Cultural Psychology 38, 2 (March 2007), 227–242. https://doi.org/10.1177/0022022106297301
  79. Chai M. Tyng, Hafeez U. Amin, Mohamad N. M. Saad, and Aamir S. Malik. 2017. The Influences of Emotion on Learning and Memory. Frontiers in Psychology 8 (2017), 1454. https://doi.org/10.3389/fpsyg.2017.01454
  80. Kurt VanLehn, Winslow Burleson, Sylvie Girard, Maria Elena Chavez-Echeagaray, Javier Gonzalez-Sanchez, Yoalli Hidalgo-Pontet, and Lishan Zhang. 2014. The Affective Meta-Tutoring Project: Lessons Learned. In Intelligent Tutoring Systems (Lecture Notes in Computer Science), Stefan Trausan-Matu, Kristy Elizabeth Boyer, Martha Crosby, and Kitty Panourgia (Eds.). Springer International Publishing, Cham, 84–93. https://doi.org/10.1007/978-3-319-07221-0_11
  81. Matthew Ventura, Valerie Shute, and Yoon Jeon Kim. 2013. Assessment and Learning of Qualitative Physics in Newton’s Playground. In Artificial Intelligence in Education (Lecture Notes in Computer Science), H. Chad Lane, Kalina Yacef, Jack Mostow, and Philip Pavlik (Eds.). Springer, Berlin, Heidelberg, 579–582. https://doi.org/10.1007/978-3-642-39112-5_63
  82. Mark S. Young, Karel A. Brookhuis, Christopher D. Wickens, and Peter A. Hancock. 2015. State of science: mental workload in ergonomics. Ergonomics 58, 1 (2015), 1–17. https://doi.org/10.1080/00140139.2014.956151
  83. Barry J. Zimmerman. 2002. Becoming a Self-Regulated Learner: An Overview. Theory Into Practice 41, 2 (May 2002), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Published in

CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, 18961 pages. ISBN: 9798400703300. DOI: 10.1145/3613904.

Copyright © 2024 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States.

Published: 11 May 2024.
