DOI: 10.1145/3613904.3642249 — CHI Conference Proceedings
Research Article — Open Access

"Backseat Gaming": A Study of Co-Regulated Learning within a Collegiate Male Esports Community

Published: 11 May 2024

Abstract

Previous work has demonstrated that esports players often leverage insights from other players and communities to learn and improve. However, little research has examined social learning in esports, over time, in granular detail. Understanding the role of others in the esports learning process has implications for the design of computational support systems that can help esports players learn and make the games more accessible. Therefore, we perform an exploration of this topic using Co-Regulated Learning as a theoretical lens. In doing so, we hope to enrich existing knowledge on social learning in esports, provide insights for the future development of computational support, and offer a road-map for future work. Through an interview study of an esports community consisting of 14 college-aged male players, we uncovered 10 themes regarding how Co-Regulated Learning occurs within their teams. Based on these, we discuss three main takeaways and their implications for future research and development.


1 INTRODUCTION

Recent work has illustrated that playing high-level esports – organized, competitive gaming as defined by Formosa et al. [26] – can be beneficial to players, granting them improved critical thinking, teamwork skills, fine motor skills, emotional regulation skills, problem solving skills, and academic performance [42, 43, 59, 80, 87]. Additionally, the esports industry is now worth over 1 billion dollars 1 and esports teams are on the rise in both professional and academic settings [13, 50, 68], with even the armed forces recruiting players 2. With this in mind, there is an interest in, and a need for, better understanding esports, their players, and their communities in order to make them, and their benefits, more accessible to more people.

One element of interest in recent literature is how esports players learn to play [34, 58] and how this can be supported through computational assistance [39, 85]. This is because esports are notoriously difficult to master, and it does not take much for a player to become discouraged and quit a game [6, 23]. While there is already much computational support available [5, 54, 61, 75], it still does not quite equate to the kind of support players can get from other players [38]. However, novice players often do not have access to other players and therefore do not receive the kind of personalized, specialized learning assistance that experienced players have access to [44].

As such, there are suggestions that a more detailed understanding of social learning in esports can inform the design of more advanced computational support systems that behave like real people and better support players’ needs, especially for those who do not have regular access to other people [39, 76]. However, much of the existing work examining learning among esports players focuses on solo learning and only discusses the roles of others in passing [34, 39]. Even when social learning is examined in more detail, it tends to focus more on how players interact during play or competition [69, 78]. Work focused on how esports players help each other learn over time is uncommon [71] and there is not yet a strong enough knowledge-base surrounding how esports players support each others’ learning over time through evaluation, guidance, and feedback to advance the field of computational support for esports.

We take a step towards building a better knowledge-base by conducting an interview study of how Co-Regulated Learning (CoRL) occurs within an esports club at a North American University. CoRL [30, 31] is an extension of the theory of Self-Regulated Learning (SRL), which broadly discusses how learners can direct their own learning process in the absence of an educator [46, 56, 65]. SRL has already been explored in the context of esports [37, 38], but CoRL has not been. According to CoRL, which was first proposed by Hadwin and Oshige [30] and Hadwin et al. [31], social learning is a transitional process where meta-cognitive SRL skills related to planning, monitoring, strategy use, and evaluation are gradually appropriated by the learner from a more experienced other through the regular exchange of input. We chose to leverage the theory of CoRL, specifically, within this work as previous research has suggested that it is relevant to the esports context and can inform the design of more personalized computational support [39] but, to the authors’ knowledge, no work on learning in the context of esports has examined social learning through this lens.

We specifically seek to answer the following research question: How do esports players exchange input with teammates in order to co-regulate one another’s learning? We chose this question as a first step towards understanding CoRL more granularly in esports, as the giving and seeking of input, i.e. comments providing guidance or feedback, which we refer to holistically as input exchange, is the core behavior engaged in by learners operating under a CoRL arrangement.

Here, we explore this phenomenon with a community of 14 college-aged male members who reflect the majority demographics for esports in the United States 3. The ultimate goal is to understand how this phenomenon occurs in esports at large, in order to inform the design of tools to support learning. However, the esports community is not a monolithic entity and individual groups of players who interact with one another are often differentiated by demographics such as gender [52] or culture [53] and their behavior and experiences are likely influenced accordingly. We believe that a general examination of esports players across communities would likely result in insights regarding minority communities being overshadowed and therefore advocate for an exploration of this topic on a community-specific basis, generating a general understanding over time via repetition of the methods with different groups. Through the work presented here, we provide a general method and starting point for comparison and repetition in future work.

We conducted the interview study with a competitive esports club at a university in the Southern United States consisting of 14 male members. We discuss further in the methods the ways in which this community is representative of the general esports community and additionally discuss the limitations of this sample later on. Saturation in the data was seen within the 14 participants, justifying the sample size. Based on the theory of CoRL, players were asked a variety of questions regarding their experiences playing with others and when they would seek or provide input. Results revealed 10 themes and from these we synthesize and discuss three main takeaways and what they mean for the development of computational support tools, such as AI coaches, for esports.

Contributions: There are three important contributions in this work. First, we provide the first look at e-sports gaming through the lens of CoRL to better understand how players help one another learn through input exchange. Second, our analysis reveals three high-level concepts that can direct future work on social learning in esports: The Hierarchical Nature of Input, The Relationship between Input and Failure, and The Three Phases of Gameplay and Learning and we discuss ways these can inform the design of computational support tools for learning. Third, our interview method is informed by existing theory and may be used in future work to explore this topic with different subgroups and broaden our understanding of CoRL in esports.


2 RELATED WORK

2.1 Learning in Esports

In the context of learning to play games, the end goal is often expertise. Despite expertise being recognized as a key component of gameplay [21], studies of expertise in games are often more interested in the skills possessed by expert players than in how those skills are learned. Nowhere is this more apparent than in esports research, which often discusses and formalizes the skills experts use to succeed in gameplay [20, 24, 37, 48, 66] but has been, overall, less interested in studying how novice players gain these skills over time. As Kleinman et al. pointed out [39], this gap in the literature makes it difficult to design computational support for the learning process, as we do not possess a strong understanding of what that process is.

In an attempt to address this gap, Hesketh et al. [34] leveraged grounded theory to examine learning in team esports and identified several preliminary themes around the topic, including several related to how players had to evaluate and adapt their skills to new scenarios. In another example, Pluss et al. [60] performed a larger scale study determining the connections between amount and quality of practice over time and ultimate tournament performance. In doing so, they highlighted how just playing a game did not correlate directly with learning and performance gains and that there were other factors to consider. Building on these findings, Kleinman et al. [39] conducted an interview study specifically investigating the activities that esports players leveraged to try and gain skill in their chosen game and the challenges they faced in that process. They took the topic a step further, however, using their results to inform a set of suggestions for future computational support tools to better meet players’ needs.

While this work has been critical in filling the gap in our understanding of esports learning, it is also noteworthy that both Kleinman et al.’s [39] and Hesketh et al.’s [34] participants emphasized that their learning processes often involved obtaining information from other sources, be it known others or community resources. Neither work, however, places a strong emphasis on exploring this topic, instead suggesting it should be examined further in the future. An older study by Kow and Young [44], however, does look into this phenomenon in far more detail. Their interview study with StarCraft community members revealed the critical role that both personal and community resources play in not only motivating players, but also helping them advance their skills and keep up with changing expectations within the gameplay community. Through this work, they illustrated the central role other people play in an individual player’s learning process, as a means by which knowledge can be leveraged towards one’s own learning goals.

2.2 Social and Co-Regulated Learning in Esports

This leveraging of the knowledge of a more experienced “other” individual (be it a specific person or a community resource) suggests the presence of Co-Regulated Learning (CoRL) [31] practices within esports communities. CoRL is a theory of learning that branched off of Self-Regulated Learning (SRL), suggesting that SRL skills are acquired from other, more experienced individuals through a process of appropriation [30, 31]. Similar processes have been observed in esports teams where new players are mentored by their experienced counterparts [71]. However, as Kow and Young pointed out [44], when it came to leveraging knowledge and learning over time, expert players were far more likely to be able to interact with other people in the process, leaving novice players reliant on public media technologies, community resources, and, likely, computational support tools. Kleinman et al.’s participants [39], however, indicated that existing computational and media technologies cannot replace a real coach or teammate in the learning process because they do not think or act like one. This prompted them to suggest that the research community needed a stronger understanding of CoRL in esports to design better computational support tools for the domain.

CoRL, and even social learning as a process that occurs over time, is, however, understudied in esports. In fact, much of the work on social esports play focuses more on elements of teamwork and how they inform performance and victory than on how team-members help one another learn [55, 57, 62, 78, 79, 88]. Instead, observations of social learning are often included as a side product of a research study otherwise focused on solo learning or a different topic altogether [39, 67]. Richard et al. [69], in their study of collegiate esports players, do extensively examine social learning activities, including group strategy planning and reflection. However, this study looks specifically at how these activities are engaged in during a single competition, and does not examine how input is exchanged and learning is guided by others during the long-term process of gaining expertise in the game. While it provides valuable information about team-based metacognitive processes, it does not tell us much about how these are leveraged over time to help each other learn.

Rusk et al. [71], in contrast, examined social learning over time through mentor-apprentice relationships in esports through a qualitative study of a Counter Strike: Global Offensive team. Their findings indicated how new players were integrated into the team and provided with guidance and support by teammates until they demonstrated enough competence to be considered an individual agent on the team. Unlike the previous example, Rusk et al.’s work does explore how learning occurs long term within an esports context, and in doing so captures how learning and relationships change throughout the process. One of their most interesting findings is the apprentice player’s shift from obedience and excuses to resistance to input and justification of choices once they have achieved enough skill to act independently. Rusk et al.’s work is perhaps one of the most in depth explorations of how social learning occurs over time in esports contexts, with the above paper being one of several examinations of the topic of mentor-apprentice relationships [70]. However, they studied a team with a clearly identifiable apprentice and mentor and, despite looking at how the mentor helped the apprentice learn, did not leverage CoRL as a theory in their work. Here, we build upon this previous work by examining social learning through the lens of CoRL and within a community where the roles of apprentice and mentor are not so clearly defined. In doing so we hope to expand our understanding of social learning in esports and provide actionable insights for the advancement of computational support.

2.3 Computationally Supporting Learning in Esports

Some of the most prominent computational support tools for esports are visualization systems that have been developed to provide players with gameplay data in a human-readable manner to better facilitate learning processes such as reflection [2, 84]. Wallner et al. [85] even conducted a user study examining how players use visualization systems to learn and what features were most useful to them. One such system, Kuan et al.’s [45] spatio-temporal visualization for StarCraft 2, synchronizes data presented across multiple maps and timelines. The user can highlight interesting spans of time in the timeline, which controls what is shown on the map, allowing them to reflect on their gameplay, identify errors, and develop ways to adapt their strategy in the future, all elements of learning discussed in prior work [38, 69].

There have also been similar systems designed for spectator use. For example, Charleer et al. [10] designed a spectator dashboard for League of Legends that presented how much gold each player had at all times, as this was indicated by the player community as an important indicator of victory. Another example is the WEAVR application [41], an esports spectatorship visualization system that can be used by audience members during DotA 2 tournaments to follow the action, identify moments of interest, and anticipate what is to come. These systems help spectators learn from others by better understanding others’ decision-making processes and also help novice or less experienced players become motivated to improve by watching more experienced players, an element of learning observed within existing esports communities [44].

While these visualization systems emphasize supporting reflective processes in both players and spectators, algorithmic systems that can predict what comes next and give players intelligent recommendations are seeing increased interest as a way to support planning and execution-related learning processes, either before or during play [11, 12, 22]. For example, the work of Semenov et al. [74] tested the accuracy of different quantitative methods (Naive Bayes classifiers, Logistic Regression, and Gradient Boosted Decision Trees) on predicting DotA 2 match outcomes based on what heroes were picked during the draft. Predictive systems like this are the foundation of tools that can provide players with in-the-moment information that can help them make informed decisions and learn during play by better connecting their actions to the outcomes they experience [38]. For example, Christiansen et al.’s [15] statistical approach to predicting chances of victory based on actions taken was later turned into a plugin for DotA 2 that displayed to the player what their chances of victory would be if they chose to pursue any of several next possible actions. Prior work has demonstrated that giving recommendations or discussing possible strategies in this manner is a common element of social learning in team esports contexts [38, 44, 69] and by providing this information to players, these tools are taking the first steps towards emulating the behavior of a coach or more experienced teammate.
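To make the prediction-based approach concrete, the following is a minimal, stdlib-only sketch of the general idea behind draft-based outcome prediction: encode each draft as a signed hero-presence vector and fit a logistic regression by gradient descent. This is an illustration, not Semenov et al.'s [74] actual pipeline; the hero pool size, the encoding, and all match data below are synthetic placeholders.

```python
# Illustrative sketch of draft-based outcome prediction (NOT the method of
# Semenov et al. [74]): logistic regression over a signed hero-presence
# vector, trained by batch gradient descent. All data here is synthetic.
import math
import random

random.seed(0)
N_HEROES, N_MATCHES = 20, 400  # toy hero pool; real games have 100+

# Each match: +1 if a hero was drafted by team A, -1 by team B, 0 otherwise.
matches, outcomes = [], []
true_strength = [random.gauss(0, 1) for _ in range(N_HEROES)]  # hidden model
for _ in range(N_MATCHES):
    picks = random.sample(range(N_HEROES), 10)
    x = [0.0] * N_HEROES
    for h in picks[:5]:
        x[h] = 1.0    # team A's five picks
    for h in picks[5:]:
        x[h] = -1.0   # team B's five picks
    score = sum(xi * s for xi, s in zip(x, true_strength))
    outcomes.append(1 if score + random.gauss(0, 0.5) > 0 else 0)
    matches.append(x)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the logistic log-loss.
w = [0.0] * N_HEROES
for _ in range(200):
    grad = [0.0] * N_HEROES
    for x, y in zip(matches, outcomes):
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
        for j in range(N_HEROES):
            grad[j] += err * x[j]
    w = [wi - 0.1 * g / N_MATCHES for wi, g in zip(w, grad)]

predictions = [1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x))) > 0.5 else 0
               for x in matches]
accuracy = sum(p == y for p, y in zip(predictions, outcomes)) / N_MATCHES
print(f"training accuracy: {accuracy:.2f}")
```

A model of this shape, evaluated after a draft, is what allows a tool to surface a win-probability estimate to the player before play begins; the in-play plugins described above extend the same idea by re-scoring candidate next actions.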

Despite the success of prior work and the prevalence of commercially available tools [5, 54, 61, 75], there is still much to be understood about social learning in esports if these tools are going to meet the challenge of teaching like real people and make esports more accessible to more players. Here, we take one such approach to this challenge by leveraging CoRL as a theoretical lens to examine social learning via the exchange of input among an esports community. In doing so, we enrich existing literature by connecting esports learning phenomena back to an existing framework of learning and identify pathways for future work to explore and opportunities to advance the state of the art of computational tools.


3 METHODS

Table 1:
Construct — Interview Questions

Playing with Others
(1) Would you like to provide details regarding how often you train alone or with others?
(2) What is your relationship with the other people you play with?
(3) What do you typically discuss before, during, and after play?
(4) What training techniques do you find most effective when playing with others?
(5) Do you prefer practicing or training with others or solo, and why?

Seeking Input (informed by the theory of CoRL [30, 31] and Prather et al.’s [63] measures of CoRL)
(6) What situations or circumstances prompt you to explicitly seek the input of others?
(7) In relation to when you notice a mistake, especially in-game, when do you specifically seek out others’ input?
(8) Whose help do you seek and why?
(9) Can you describe how they help you?
(10) Can you describe any situations where you would go to community resources rather than known others?
(11) In what situations or circumstances would you not seek the input of others, if any exist?

Providing Input (informed by CoRL [30, 31] and Prather et al.’s [63] measures of CoRL)
(12) Who do you provide input to and why?
(13) Who initiates the interaction?
(14) In what situations or circumstances do you provide input to others?
(15) In what situations or circumstances would you not provide input to others?

Table 1: The questions used in the interviews. All questions were open answer.

The goal of this work was to take a first look at how esports players seek and provide input in order to co-regulate each other’s learning through a qualitative study with a single, representative, esports community using the theoretical lens of CoRL. We conducted semi-structured interviews with 14 participants focused on understanding how these practices manifested among their community.

3.1 Interview Design

We developed a set of questions for semi-structured interviews targeting input exchange practices among esports teams based on the theory of CoRL. The interview structure consists of three parts, with parts two and three informed by and targeting the two input-related constructs of CoRL, as seen in Table 1. Part one of the interview asked for general information about opinions and habits surrounding social play. These questions, seen in Table 1, were not directly linked to any theoretical aspect of CoRL and were included to gauge how participants felt about social learning, what their experiences were with the phenomenon, and how often and with whom they engaged in social learning.

We chose semi-structured interviews as this approach would allow us to focus on a single community at a time, ensuring that communities composed of minority groups, explored via the same method in future work, would be properly represented rather than drowned out in a large-scale examination. By using theory to derive the interview design, we ensure that we are focusing on and targeting the phenomenon we are interested in, input exchange, which exists within the much more complex context of esports play. The semi-structured interview approach further allowed us to capture specific phenomena that may not be directly observable (i.e. emotional states) and to ask follow-up questions in response to anything interesting or unexpected said by the participants.

3.1.1 Leveraging Co-Regulated Learning to Target Input-Exchange.

Interview parts two and three were informed by CoRL [30, 31] and targeted practices of seeking and giving input. As stated previously, CoRL is an extension of SRL, which has previously demonstrated relevance in the context of esports due to its robust consideration of the interconnection between physical tasks and the metacognitive skills that inform them [37]. As a related theory, CoRL is expected to fit the domain similarly. Previous work has additionally observed behaviors from players that appear to fall within the theoretical umbrella of CoRL [38, 39, 71].

Under CoRL, a learner focuses on mastering the physical skills of a task while the metacognitive aspects are handled by a more experienced other. The learner eventually appropriates the metacognitive skills as the physical skills are mastered, ultimately transitioning into a self-regulated arrangement once all physical and metacognitive skills have been mastered. The other’s handling of metacognitive skills manifests in the form of giving input regarding evaluation, planning, monitoring, etc. (all well-documented SRL behaviors [89]). Hadwin and Oshige [30] discussed this in the context of a mother teaching her child to tie shoes. In this example, while the child manipulates the laces, the mother gives input in the form of statements meant to prompt metacognitive processes like “what do you do next?” (planning) or “what did you do wrong?” (evaluation). The child can also take the initiative by asking, for example, what they should do next; in other words, by seeking input.

Based on this theoretical idea of CoRL through input exchange, and informed by the work of Prather et al. [63], who leverage CoRL to understand how social learning occurred in computer science education, we recognize and define the two general constructs of CoRL that we explore in this work as Seeking Input (the learner engages with the experienced other for metacognitive support) and Providing Input (the experienced other engages with the learner to provide metacognitive support). Both constructs describe how metacognitive aid manifests and information is exchanged, differentiated by who takes the initiative. These constructs became parts two and three of our interview. The exact questions were adapted from and built upon those used by Prather et al.’s study [63].

Under Seeking Input, the questions target the interviewee’s experiences as a learner engaging with an experienced other for support. Two questions (6 and 7) deal with when input is sought in terms of what prompts it and timing. Questions 8 and 9 target who the experienced other is and how they help while questions 10 and 11 deal with circumstances where direct input would not be sought.

Under Providing Input, the questions target the interviewee’s experiences as an experienced other providing support to a learner. Questions 12 and 13 deal with the relationship between the learner and the experienced other while questions 14 and 15 deal with circumstances in which input is or is not given. The questions under the two constructs were not exactly the same in order to prevent redundant responses from participants.

We note here that it is not our goal, at this time, to acquire user requirements regarding computational support tools, but instead to glean further insights regarding CoRL that can inform the development of a foundational theory that will, in turn, inform the design of such tools. User requirements gathering and further work in that direction is, as such, left to future work.

3.2 Interview Structure

Interviews were conducted one-on-one, in person, and in a semi-structured manner between October and December 2022. Audio was recorded and transcribed afterward. After receiving informed consent, the interviewer collected demographic information and then asked the questions seen in Table 1. During the interview, the interviewer also collected self-reported information on frequency of team play, frequency of input giving, and frequency of input seeking. All participants were asked all questions. The interviewer would ask follow-up questions or request elaboration when necessary. Participants were allowed to skip any question they did not wish to answer. Interviews ranged from 14 to 41 minutes, with an average duration of 26 minutes. The university’s IRB approved the protocol.

3.3 The Community

For this study, we focused on a competitive esports club at Abilene Christian University, located in the South Western United States. The club had 14 members at the time of the study, each of whom played at least 1 of 4 different games (see Figure 1) with some members playing multiple games. We chose this community because it is representative of a majority group within the overall esports community in the United States. As collegiate players between the ages of 18 and 22 the members of this community sit within the largest age group for esports fans in the US 4. While all male, the community reflects the generally male dominated esports culture where approximately 72% of fans identify as male 5 and fewer than 10% of professional players are female 6. The gender breakdown of the community is also consistent with samples seen in previous work on esports which also tend to be primarily or exclusively male [19, 51, 62, 82]. We discuss the implications of focusing on this community and the need to explore other demographic groups in limitations and future work. Race and exact age information were not collected to protect participant identities.

After each interview, the interviewer made note of the approximate themes that were discussed during the interview. Saturation [29] in these initial themes was observed around participant 10. After the remaining 4 interviews no new themes emerged during this initial review phase. The two researchers who conducted the data analysis, who were separate from the one who conducted the interviews, confirmed saturation in the data-set collected from the 14 community members.

3.4 Data Analysis

The data were analyzed via iterative thematic analysis by two researchers [27, 73]. In an initial pass, both researchers separately reviewed a representative sub-set containing 30% of the data [9] and developed initial themes. The unit of analysis was a single utterance by a participant. The researchers then reconvened to discuss their independent theme lists, collapse overlaps, and generate a combined list of 20 themes. The two researchers then performed an Inter-Rater Reliability (IRR) [16] check with the 20 themes. The resulting kappa value was .79, indicating strong agreement [47]. However, the application results revealed three themes that were never applied by either researcher and three that the researchers never agreed upon, each of which was applied at most once to the data set. Another round of iteration on the code-book saw these six themes collapsed into one another or into other themes accordingly. This resulted in a list of 18 themes organized into three categories that corresponded to the three constructs that informed the interviews. One researcher then applied the 18 themes to the entire data set and performed a final iteration on the code-book based on that application, collapsing and re-organizing themes based on number of appearances within the data set and overlapping meanings. At this point, one theme unrelated to the giving and receiving of input was also removed from the set. The remaining themes were re-organized into four categories based on the higher-level concepts they related to. The final list consisted of 10 themes organized into four categories (see Results).
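For readers unfamiliar with the IRR statistic used above, the following is a minimal sketch of how Cohen's kappa corrects observed agreement for chance agreement. The coder labels below are hypothetical examples, not data from this study; the study's actual check yielded kappa = .79 over the 20-theme code-book.

```python
# Illustrative computation of Cohen's kappa for two raters' theme labels.
# The codings below are made-up examples, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of units the two raters labeled identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_chance = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codings of ten utterances against three themes.
coder_1 = ["T1", "T2", "T2", "T3", "T1", "T1", "T3", "T2", "T1", "T3"]
coder_2 = ["T1", "T2", "T2", "T3", "T1", "T2", "T3", "T2", "T1", "T1"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # prints "kappa = 0.70"
```

Note that kappa can fall well below raw percent agreement when a few labels dominate, which is why it is preferred over raw agreement for code-book checks like the one described above.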


4 RESULTS

4.1 Demographics

The demographic data for the club can be seen in Figure 1. All participants reported having friendly relationships with their fellow club members. Most of the club members reported playing either Rocket League or Rainbow Six Siege as their main game. Both games, along with Valorant, also played by one member of the club, are included in the top-10 esports games for 2022-2023 7. Years and hours of gameplay are reported in general terms, not exclusive to playing with teammates or to any specific game. Figure 2 shows reported frequencies of input giving, input seeking, and playing with others. These were collected via Likert-scale questions asking players “on a scale of 1 to 5, how often do you...”.

Figure 1: An overview of demographic data collected during the study.

Figure 2: An overview of frequency data collected alongside the demographics.

Table 2:
# of Obs. | # of Pls. | Theme

Sources of Input and How they are Chosen
55 | 14 | Other perspectives are important but where you go for input depends on what you need
80 | 14 | Input is expected to come from those of higher authority/skill/knowledge/experience

Failure as a Prompt for Input
23 | 11 | Input focuses on causes of failure and overcoming them
24 | 14 | Players will seek input if they recognize their own failures or mistakes
22 | 13 | The receiver does not need to seek input, it will be given in the case of repeated failure

Avoiding Conflict during Input Exchange
42 | 13 | Input should be tactful and given with care
19 | 9 | Players are expected to take input and act upon it without resistance

When Input is Given and What it is Focused On
18 | 14 | Pre-game input focuses on establishing goals
24 | 12 | In-game focuses on performance, not input
30 | 14 | Post-game input focuses on reflection

Table 2: 10 themes were identified regarding how input is exchanged among the players in the esports club, organized into four categories based on higher-level concepts surrounding the exchange of information via input (i.e. where it comes from or how it is delivered to avoid conflict). # of Obs. indicates how many times each theme was observed, while # of Pls. indicates how many players discussed each theme.

4.2 Interview Results

An overview of the 10 themes and how often they appeared in the data set in terms of the number of total observations and the number of players who brought it up can be seen in Table 2.

4.2.1 Category 1: Sources of Input and How they are Chosen.

This category contains two themes related to where players go for input or where they expect it to come from. These themes highlight the general understanding among players regarding where valuable information can be acquired depending on what is needed.

Other perspectives are important but where you go for input depends on what you need: All players emphasized that getting input from others was valuable, as an external perspective on one’s gameplay could reveal new insights. For example: “we’ll watch what the captain sees mainly and we have to ask him like ‘hey go to this round, check where I was, check who killed me, because I didn’t see it’.” (Participant 2, discussing the value of having his team captain watch his gameplay to help evaluate his decisions). However, players also articulated how different sources of information were more appropriate to different input needs. Players discussed using community resources as a way to gain general information about play, or specific instructions about how to learn a new skill. For example: “There’s a lot of YouTubers out there that break down and analyze the game so watching them has helped me a lot so I recommend that” (Participant 9). In contrast, five players explicitly discussed how they would go to their teammates or coaches for input because it was more personalized and better suited to their specific problems. For example: “If I was trying to get specific help on...either a mechanic or a strategy in the game, I would probably just stick with my teammates and people associated with the team” (Participant 5). This suggests that there are, perhaps unspoken, rules regarding when it is appropriate to request the attention of teammates versus leveraging more general sources of information. From a CoRL perspective, this reinforces that the CoRL behaviors leveraged within the club are viewed as valuable by the players and supports our claim that using CoRL as a framework for creating support systems and environments for social esports learning would be of value. Interestingly, however, we see that the phenomenon of receiving input from an experienced other sometimes occurs when the other is neither personally known to the player nor able to give personalized input (e.g., a community resource such as a stream or blog post).

Input is expected to come from those with higher authority/skill/knowledge/experience: While discussing how input was given and received, an explicit hierarchical structure came to light, with all participants discussing how input should come exclusively from those recognized as having more skill, knowledge, or experience, or being in a place of authority, such as a coach or team captain. For example: “Usually [name]...because he’s the captain of our team, so he knows...the best, like game knowledge of our team. And so he’s the one I usually go to. Or if there’s someone from like the [higher level] team, there’s one of them there then I’ll ask them because they know more than either me or [captain’s name]” (Participant 3). Eight players additionally discussed instances in which a more experienced individual would help a less experienced one set their goals. For example: “One of our players, he is very new to the competitive scene so he’s still getting used to the pacing and the intensity of playing at a competitive level. So I just talk to him and like ‘Hey you need to work on your situational awareness and being able to process a lot of information while under pressure’.” (Participant 9). This is an explicit manifestation of how input works under CoRL: a more experienced other scaffolding and supporting one’s learning process by providing information. Interestingly, the players in the club appeared to adhere rather strictly to the idea that only these more experienced others should perform this role, going so far as to suggest that defying the hierarchy could become a source of conflict. For example: “Like if someone...like in Rocket League...someone’s like, SSL (supersonic legend) and I’m like, champ, I’m like, ‘yeah, you’re gonna know better what...what I’m doing right and what I’m doing wrong.’ So, yes, when they’re better than me. I think sometimes when, which might be a fault of mine, but when people are on my level, I tend to be like, ‘No, I don’t want to get your opinion on this. Like, I know what I’m doing just as much as you do’” (Participant 11).

4.2.2 Category 2: Failure as a Prompt for Input.

This category contains three themes related to the relationship between failure and input. A common trend within the interview data was how input giving or seeking was heavily connected to a player recognizing either their own or someone else’s failure, and how, most of the time, input focused on how to understand and overcome those failures.

Input focuses on causes of failure and overcoming them: 11 players talked about the kind of input they get or give, specifically discussing how it often focuses on helping identify and understand mistakes and how to avoid them in the future. For example: “There are times where they’ll just make a mistake. Just a simple mistake...missed the ball or something. And they’ll ask, like, ‘What should I have done there?’" (Participant 12) and “There were things where I would do something, and I’m like, ‘so this only works because I’m playing against people who are lower skill level than me’. I check to see if like, is it something that people are going to do in higher tiers, so I would ask [coach] some questions that he’d...like ‘No, yeah, what you did was good, and you’re spot on. It only works because you’re playing against a bunch of like, gold and silver players. Here’s what you actually should have done or like here’s...how it usually goes in higher tiers”’ (Participant 4). This practice may be a side effect of input exchange occurring almost exclusively before and after play for the club members (a trend we discuss further below). Under CoRL, the experienced other, when scaffolding the metacognitive learning processes of the learner after the task, is somewhat restricted to the metacognitive learning processes that occur at that point, which include attribution (identifying the cause of failure) and adaptation (overcoming it) [89].

Players will seek input if they recognize their own failures or mistakes: When discussing what would prompt them to seek the input of others, all players suggested that they were usually prompted by either their own recognition of failure or a lack of knowledge about how to handle a negative or problematic situation, possibly combined with feelings of responsibility for that situation occurring during their gameplay. For example: “Mostly in a loss of a match or series. And especially...if it wasn’t a close game. I’ll definitely say ‘hey, listen...something was up. I don’t know what we were doing wrong.’ Every once in a while if it was a close game, and I feel like I cost the loss...specifically, I’ll get their input as well” (Participant 5). Existing work on CoRL suggests that learners will primarily seek input when they are unsure how to progress with a task [30]. Here we see that the club members may additionally be prompted to do so by feelings of inadequacy. This is illustrated by Participant 11, who, discussing when he asks his teammates for input, said: “When I know I’ve messed up - like Rocket League I rotate slow - I know that I did something wrong in order to get back or something". His self-evaluation of “I rotate slow" indicates a sense of inadequacy in relation to one element of his gameplay.

The receiver does not need to seek input, it will be given in the case of repeated failure: In relation to the previous theme, however, 13 players suggested that input does not have to be explicitly sought to be given and that a player may receive unsolicited advice if their teammates notice repeated mistakes. For example: “If I noticed them making the same mistake multiple times in a row, and maybe they don’t realize it, like typically after you make a mistake, that gets you scored on, it will be like, ‘Oh, why did I do that?’ Or something like that. But if I see them make the same, like small mistake a few times and they don’t mention it [I’ll bring it up] just to bring it to light” (Participant 6). In some cases, participants illustrated how this input giving practice helps them become aware of mistakes they otherwise may not recognize on their own. For example: “Sometimes I need people to point it out because...I push too aggressively so I need somebody to tell me ‘you need to stay back a little bit more’" (Participant 10). However, this does raise the question of what happens when the recipient of the input does not agree that they have made a mistake, as existing theories of CoRL do not examine this possibility. This question is somewhat related to the themes presented in the next category.

4.2.3 Category 3: Avoiding Conflict during Input Exchange.

This category contains two themes related to how input is expected to be given and received. The implication is that when these expectations are followed conflicts can be avoided among teammates.

Input should be tactful and given with care: There were several different aspects of the delivery of input that players were cognizant of as ways to avoid conflict with their teammates while exchanging information. Eleven players suggested that there was no need to give further input if the receiving player was already aware of and working on a situation, with some even suggesting that providing further input in this context could cause tension. For example: “If it’s something that they, like, clearly already know what they did wrong, I don’t think it is really helpful to say that again, because then they might just snap at me or something” (Participant 3) and “If it’s a recurring mistake that I make, quite often I would say, you know, ‘I’m working on this. I don’t need any more input. I’m already very aware of the mistake’" (Participant 5). Further, eight players specifically emphasized the importance of tone and clarity of input. For example: “’You suck and you should do this’...like, okay, is there a constructive way you can tell me that? Because you’re really making me not want to play right now” (Participant 4). The delivery of input is not discussed in the existing CoRL literature [30]. These considerations are likely informed by the well-documented toxicity that exists within esports communities [81], which may be present within this particular esports club (though our participants did not explicitly speak of it) and may bias the club members’ views of harshly worded or repetitive input.

Players are expected to take input and act upon it without resistance: While the previous theme dealt with ways to avoid conflict when giving input, this theme relates to ways to avoid conflict when receiving it. Participants explained that a recipient not acting on input that was given was a source of conflict, especially in situations where the input was given but not necessarily sought. For example: “If I’ve already talked about it...and then they’re still making the same mistakes like the next practice? It’s like, okay, why did I even bother saying anything if you’re not going to apply what I said?” (Participant 4). This illustrates an expectation among the players that input received is to be accepted and followed. This expectation is further illustrated by the fact that players will communicate their input and goals in ways that allow their teammates to monitor their progress and hold them accountable for following the advice they received. For example: “we just try and remind ourselves on what we work on throughout the practice. And we help try and remind each other of that during the practice as well" (Participant 5). Again, this idea of accepting the input you are given is not discussed extensively in existing work on CoRL, and may be influenced by an apparent mindset among the players that the performance of the team was more important than the performance of the individual.

4.2.4 Category 4: When Input is Given and What it is Focused On.

This category contains three themes related to how input is given (or not) across three distinct time-points in gameplay: pre-game, in-game, and post-game. Our results illustrated that the players followed distinct practices in terms of how they shared information across these time-points and what information was prioritized.

Pre-game input focuses on establishing goals: All players reported that pre-gameplay time was devoted largely to establishing goals and strategies and that this usually was a social process involving conversations among the entire team. For example: “Besides winning and just maybe...‘Hey, we need to try and rotate better’. We’ll just say...not necessarily like...a number goal or something like that, but just ‘Hey, this is our goal. We need to try and rotate better in these games’.” (Participant 5, discussing goal setting prior to gameplay) and “sometimes...I’ll bring up stuff like ‘Hey, I just developed a new strategy for one of the sites we’re playing; make sure you take a look at that’” (Participant 9, discussing strategizing before play). Goal setting is a metacognitive component of SRL [89], and thus, through these group discussions, more experienced club members could provide targeted input and scaffold this process for their less experienced members, one of the core concepts of CoRL [30] and previously noted in other contexts by Prather et al. [64].

In-game focuses on performance, not input: 12 players emphasized that input giving and seeking did not occur during gameplay. Instead, the in-game conversation was focused on ensuring that the game continued to run smoothly and that performance, morale, and chances of victory could be maintained. For example: “I don’t want to talk while we’re still playing because it’ll mess up the flow of communication. So generally...we try not to...say anything about mistakes until we’re not actively communicating” (Participant 3) and “No, no, that’s actually something that is not ideal, especially in teams...giving advice or correcting something right away in the middle of the game..." (Participant 7). This is an interesting result because the theory of CoRL suggests that metacognitive support, and therefore input, is provided throughout a task [30], but our results suggest that the club members are often too absorbed by high-stakes gameplay and trying to win to give or ask for input. This means that more experienced club members are not providing in-the-moment support while the learner is engaging with the task itself. This may be a finding unique to competitive contexts such as the collegiate club we examined and we discuss this further in the discussion and limitations sections.

Post-game input focuses on reflection: Following the previous theme, all players clarified that a lot of input was given or sought out after gameplay, a moment typically characterized by group reflective practices including evaluation of performance and suggestions for improvement. For example: “If we lose, we’ll talk about what happened. We’ll look at the game and see what we did wrong, what we did right...” (Participant 10) and “There’s a function in the game where we can review matches we’ve already played. So our team captain will go in on that. And then kind of just watch what we’ve been doing basically, and what we did do in that previous game. And then...he’ll use that to tell us, I guess it’s kind of like strategy, but like after the game, kind of for next time, I guess. But basically just still critiquing what we could have done better if we did lose or whatever" (Participant 13). Like the previous discussion of group goal setting before gameplay, this illustrates an explicit manifestation of CoRL practices among the club members. Providing input post-play allows the more experienced players to aid their less experienced companions with the metacognitive processes of evaluation, attribution, and adaptation [30, 89].

Skip 5DISCUSSION AND IMPLICATIONS Section

5 DISCUSSION AND IMPLICATIONS

In this section we discuss three main takeaways from our results and their implications for future work on social learning in esports as well as their implications for the development of computational support tools for esports.

5.1 The Hierarchical Nature of Input

The most prominent takeaway from the results is the emphasis on hierarchical relationships within the club and how one’s position in the hierarchy dictated their role in the co-regulation process. Notably, input was not always exchanged explicitly between club members, but when it was, those club members perceived to be less skilled were lower on the hierarchy and typically the recipients of input from those who were perceived as more skilled. The theory of co-regulated learning dictates how one who is more skilled may play an active role in a learner’s learning process by handling the metacognitive elements of the task (i.e. helping them reflect on what they did or determine their next move) and guiding the learner through those processes with appropriate input [31]. This system of having those with more experience guide those with less mirrors this phenomenon. Further, the existence of this hierarchical relationship is consistent with mentor-apprentice interactions discussed in previous work [70, 71]. However, while previous work examined a context in which the apprentice was easily distinguishable as a new player due to their time on the team, it appeared at first glance that players in our community all stood on the same level. Only through the interviews did we observe that the players still recognized a ladder within their community.

More interesting, however, is how serious the club members were about maintaining this hierarchy. Input was expected to come solely from those higher up and it was expected to be taken by those lower down without resistance, with players going so far as to suggest that failure to adhere to this arrangement could lead to conflict. In the context of using this work to inform the design of computational assistance, this echoes an earlier finding that players tended to not engage with tools they felt did not know the game as well as they themselves did [39]. This connection emphasizes the importance of building assistive tools based on an understanding of how players help each other (or allow each other to help).

The majority of the club members seemed to be aware of these rules, as most participants suggested that they would never even try to give input to those perceived as better than them. While CoRL suggests that this management from the more experienced is dialed back as the learner gains skill [30], our participants did not mention any such phenomenon, suggesting that the more skilled members of the club would always give input to those lower on the hierarchy, regardless of how much skill the lower player gained, and that the lower player was expected to accept and act on this input, not resist it. It may be that the strict adherence to a hierarchy prevented the players from perceiving those below them as ever having gained enough skill to not require their input. It may otherwise be that the more skilled players also believed themselves to be continuing to gain skill, and thus, while the players below them had gotten better, they too had gotten better, and their positions remained the same in relation to one another. This is in contrast to what is discussed in Rusk et al.’s work [70, 71], who found that support and orientation from other players diminished as apprentice players gained skill. This may be the result of cultural differences, as Rusk et al. examined Finnish students in their work, while ours were recruited from an American university, and prior work has demonstrated that gaming culture varies across the globe [18] as do interpersonal relationships and interactions [7, 72]. These differences highlight the need for further study of this topic with other demographic groups to expand our knowledge on the topic.

This strict adherence to a hierarchy is of additional interest given that participants suggested that there are contexts in which one’s status within the club may change. For example, a player who has a lower overall skill level may know how to play a specific character better than another player, which gives them a higher skill level in that specific context and permits them to give input, but only in that context. This flexibility sits in interesting juxtaposition with the otherwise strict hierarchy, as it raises questions regarding how the club members recognized such contexts and shifts in the hierarchical relationship, and what happened when there was disagreement about such a shift. Furthermore, while players indicated their own strict adherence to this hierarchy, many had stories about someone else of a lower rank or status who had provided unsolicited advice. Often this other player would, in their own interview, suggest that they would never do something of the sort, perhaps even accusing the very player who had accused them (participants were not aware of what was discussed in each other’s interviews). This suggests that the club members are aware of the hierarchy and their need to adhere to it, but may not always be aware of whether or not they are actually doing so.

5.1.1 Implications for the Development of Computational Support.

From this takeaway, we can identify two main implications for the future development of computational, AI, or data-driven support tools for esports learning. First, knowing that only certain types of input are sought from teammates can help future tools focus the kind of aid provided by their data-driven or artificially intelligent components. In doing so, developers can save time and resources by designing their tools to leverage or direct players to existing sources of information for input such as basic instructions. Second, our results emphasize the extent to which the player must perceive the tool as being higher on the hierarchy than they are, as only those higher on the hierarchy can give them input. Previous work has shown that such a perception of a tool does not exist by default, with players reporting feeling that they knew the game better than the tool and subsequently uninstalling it [39]. Thus, a challenge is presented to the designers of future tools, who will need to find innovative ways to convince players that the tools know enough to give them input. Insights from explainable AI [3] and trust in technology [4] may prove valuable in addressing this challenge. Alternatively, Open Learner Models (OLMs) may offer valuable guidance. OLMs have already seen extensive use for supporting learning and SRL [1, 35, 86], and many have experimented with incorporating ways to convince learners that their evaluation is correct through visualization [33, 49] and negotiation [14, 36, 77]. Similar techniques may lend themselves to the challenge of convincing esports players to view an assistive tool as higher than them on the hierarchy, but negotiation techniques will likely need to follow the protocols for giving and receiving input that exist among human players.

5.1.2 Implications for Future Work.

In this study, we looked at an esports club consisting of college-aged males playing in a competitive environment, and it is possible that players who do not identify as male, or who belong to a different age group, may not adhere to such a strict hierarchical structure when exchanging input. As such, we recognize and reiterate the need for future work to repeat this study with other demographic groups in order to obtain a more general understanding of the phenomenon. Further, a valuable asset for advancing the development of support tools in this context would be a stronger understanding of how expertise and authority are perceived by players, so that tools can be designed to be perceived as possessing them. Our results found that players’ understandings of their own and others’ skill levels were not always consistent, and potentially not accurate, so further exploration is necessary to advance the understanding of CoRL in esports.

5.2 The Relationship between Input and Failure

Another prominent trend in the results is the notable relationship between input and mistakes or failures. For the members of the club, recognizing a failure or the feeling that a mistake was made would prompt one to seek input while noticing repeated failure would prompt one to give input. It further appeared that being able to recognize one’s mistakes and seek out input was seen as a positive character trait or a sign of skill or growth by the club members. Previous work on CoRL, especially in computing education, saw a high level of social help-seeking behaviors, especially when hitting a wall [63], but CoRL occurring in response to recognized failures is not prominently documented or discussed. Rusk et al. [71], however, did discuss how expectations and responsibility were a major part of an apprentice player’s learning trajectory. They specifically discuss how expectations for the apprentice’s performance and the amount of responsibility they held grew as they gained skill. This may be the reason for the emphasis on failure we see here. As expectations grow, chances to fail do too, and as responsibility grows, the need to take responsibility for failures, by seeking input in order to overcome them, likely grows as well.

We further propose that this emphasis on identifying and overcoming failures is a side effect of input exchange occurring primarily pre- and post-play within the club. Post-gameplay, in particular, is referred to as self-reflection according to Zimmerman’s cyclical phase model and is characterized partially by the identification of mistakes and adaptation of strategies for future play [89]. While we discuss the timing of input further in the next section, here we suggest that post-gameplay input tends to gravitate towards this topic as a result of inherent SRL skills that the players may possess, possibly as a side-effect of playing the game, as suggested in previous work [37]. Further, previous work has discussed how difficult it can be for newer players to recognize their mistakes and identify how to prevent them in future play [38, 39] and it may be that the emphasis on examining failures helps the club onboard new members and move people up the ranks more easily.

5.2.1 Implications for the Development of Computational Support.

While future work is needed to confirm the generalizability of these findings, we argue here that computational support tools for esports can be improved through this work by being fine-tuned to focus on identifying and overcoming mistakes, as this appears to be the aid most frequently exchanged through input. In other words, while existing work has emphasized reflection through visualization [40, 83, 85], explicit evaluations and personalized feedback might better serve players. Further, we can understand from our results that such tools should be careful not to become repetitive in their aid, as players suggested that input of this variety is not necessary when they are already aware of and are working on the problem. Previous work has discussed the value of features that track player progress over time [38] and our results suggest that such features could be beneficial if focused on tracking occurrences of mistakes and progress towards overcoming them. Further, this may be a way to aid players without becoming overly repetitive.

5.2.2 Implications for Future Work.

In addition to future iterations of this study to improve the generalizability of our findings, there are implications here regarding the timing and phrasing of input related to failure, whether it is coming from a computational tool or another person. Our results suggested that such input can be given even if not sought, but that tensions can arise between players in this context. This raises questions regarding affect management, a concept within theories of emotional co-regulation [8, 17, 28], and how esports players navigate the phenomenon. While previous work has looked at emotional regulation in relation to esports play [87], there is little work examining how emotions are navigated by teammates when trying to assist one another’s learning processes, something that can become emotionally charged given how focused it is on identifying each other’s mistakes. Thus, we identify this as a promising opportunity for future work.

5.3 The Three Phases of Gameplay and Learning

Consistent with previous work [37, 38], our findings demonstrate that learning processes occurred differently across the three phases of gameplay (pre-game, in-game, and post-game) during team play among the club members. Specifically, while our participants indicated that pre-game time was dedicated to discussing strategies and setting goals and post-game time was dedicated to reflection and evaluation, they did not discuss any input exchange in-game, during which time they focus on maintaining performance. Richard et al. [69], in their study of collegiate esports players, similarly found that group discussion before and after (or between) games focused on planning and on reflection and adaptation, respectively, and previous work [38] discussed how these three gameplay phases mirror the three phases of learning defined by the Cyclical Phase Model of SRL [89]. However, the previously leveraged Cyclical Phase Model of SRL does not account for social interaction, specifically the seeking or receiving of input within the learning process. Socialization was critical to learning within the esports club we focused on in this work, and if this generalizes to other communities, then the Cyclical Phase Model alone may not be enough to describe learning in esports contexts.

In contrast, Hadwin’s CoRL model [31] proposes four phases: understanding, goal setting, working toward the goal, and adaptation. From our results, it appears that phases one and two take place pre-game, while phase three occurs at least partially in-game and phase four post-game. In phases one and two, the group negotiates their shared understanding of the task and their strategies for accomplishing it. In phase three, the group works collaboratively toward their goal, collectively utilizing multiple cognitive, metacognitive, and motivational strategies. In phase four, the group makes adjustments, ranging from small changes to large-scale pivots in strategies and goals, based on feedback from the task and one another. This model may better apply to social learning in esports, as it better encompasses the critical role of social interaction and input exchange via communication between team-members, though it similarly downplays individual learning processes, which may still be important.

Further, given both of these models’ discussions of metacognitive processes occurring during the task, the lack of in-game learning within the club is interesting. While previous work also suggested that esports players are too focused during gameplay to think about learning [38, 39], it was proposed that they may not be aware of it. Here, the players more explicitly described that there is little to no input being sought or given in-game, as they wish to focus all in-game communication on the gameplay itself and keep negative emotions as low as possible to avoid them impacting play. At the very least, from this we can infer that there may be no social learning in-game within the club and that any subconscious learning that may occur happens at the individual level.

5.3.1 Implications for the Development of Computational Support.

Previous work suggests that in-game support should be kept to a minimum in order to help players focus on gameplay [38]. Our findings go further to suggest that there should be little, if any, input from a computational support tool during gameplay. Instead, it would be better for tools to note important takeaways from the players’ gameplay (such as failures, as discussed previously) and present them after play. This arrangement would better reflect how, at least in the community we studied, input exchange occurs across the multiple phases of gameplay. Here, however, we also acknowledge the difficulty of balancing short-term and long-term learning, and that there may be times when it would be better to interrupt the flow of a game; human coaches do not optimize for winning the present game in every instance. Further, prior work on reflection and learning emphasizes the importance of in-action reflection, performed during the learning task rather than after it [25]. As such, a challenge and opportunity for future development is not only determining how to deliver in-the-moment support, as discussed in prior work [38], but also developing intelligent support tools capable of making the right calls about when such support should be given.

5.3.2 Implications for Future Work.

Given that the Cyclical Phase Model does not effectively account for social interaction, that Hadwin’s model downplays individual learning processes, and that neither model properly reflects what occurred during gameplay for the club members, a unique model of social Self-Regulated Learning may need to be derived for the domain of esports. The development of such a model is beyond the scope of this individual work, as is confirmation of its necessity, but the results we present here lay the groundwork for its future exploration and derivation. Such a model also has the potential to generalize beyond esports and help describe social learning across complex tasks in general, as well as in traditional academic contexts.

6 LIMITATIONS AND FUTURE WORK

Here we acknowledge some limitations of our work. Our primary limitation is the sample: all of our participants identified as male, all were undergraduates at the same university in the USA, and most played Rocket League or Rainbow Six Siege, with a few playing other games. As stated in the introduction, we chose to focus on a single community because we expect different communities to be demographically distinct and to require individual exploration, to avoid minority voices being drowned out. We acknowledge that other demographic groups (female or non-binary players, older players, players from other cultures or countries, non-collegiate players, and non-competitive players) or players of other games could demonstrate different input exchange practices, and we emphasize that a comprehensive understanding of CoRL within esports requires examination of these other demographic groups. As discussed above, the majority of esports fans identify as male and our chosen community fell within the largest age group, justifying a focus on this particular club in this initial work on the topic in order to lay the groundwork for future work to build upon. In terms of generalizing our findings and examining other communities, we present our methodology for use in future studies that, by using the same protocol to explicitly examine these minority groups through targeted research, can expand our knowledge of co-regulation in esports across the entire population. We specifically created our methodology with non-majority voices in mind to enable this.

We additionally acknowledge that our community consisted of only 14 players, but we did see saturation in the data, and this sample size is consistent with similar studies in previous work [32, 39]. In the future, we hope to follow this initial qualitative probe with large-scale survey studies that examine how the themes presented herein apply across larger populations. That being said, an initial interview study with a smaller sample size was necessary to lay the foundation for this future work.

Finally, we acknowledge that we focused on only a single element of CoRL, input exchange, and that the phenomenon as a whole is much larger and more complex. In future work we hope to expand upon these findings and even link them with concepts of emotional co-regulation to develop a more comprehensive understanding of CoRL in esports.

7 CONCLUSION

In this work, we sought to develop a foundational understanding of how social learning occurs in esports teams, which is critical to advancing support for esports learning through educational and computational means. Towards this end, we explored how co-regulated learning occurred within an esports community through an interview study of an esports club, consisting of 14 male players, at an American university. Our results revealed a total of 10 themes around social learning and how it is navigated by the club members. From these, we derived three primary takeaways and discussed what they mean for future research and development. In future work, we hope to expand these findings by replicating the study protocol with other demographic groups and looking more closely at specific elements of the co-regulated learning process, such as the establishment and navigation of hierarchical relationships.

Supplemental Material

Video Presentation (mp4, 285 MB)

References

  1. Solmaz Abdi, Hassan Khosravi, Shazia Sadiq, and Dragan Gasevic. 2020. Complementing educational recommender systems with open learner models. In Proceedings of the tenth international conference on learning analytics & knowledge. 360–365.
  2. Ana Paula Afonso, Maria Beatriz Carmo, and Rafael Afonso. 2021. VisuaLeague: Visual Analytics of Multiple Games. In 2021 25th International Conference Information Visualisation (IV). IEEE, 54–62.
  3. Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI conference on human factors in computing systems. 1–13.
  4. Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S Lasecki, Daniel S Weld, and Eric Horvitz. 2019. Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI conference on human computation and crowdsourcing, Vol. 7. 2–11.
  5. Blitz.gg. 2021. Blitz App. https://blitz.gg/. Online; accessed April 15th 2021.
  6. Robert C Brusso, Karin A Orvis, Kristina N Bauer, and Amanuel G Tekleab. 2012. Interaction among self-efficacy, goal orientation, and unrealistic goal-setting on videogame-based training performance. Military Psychology 24, 1 (2012), 1–18.
  7. Brant R Burleson. 2003. The experience and effects of emotional support: What the study of cultural and gender differences can tell us about close relationships, emotion, and interpersonal communication. Personal relationships 10, 1 (2003), 1–23.
  8. Emily A Butler and Ashley K Randall. 2013. Emotional coregulation in close relationships. Emotion Review 5, 2 (2013), 202–210.
  9. John L Campbell, Charles Quincy, Jordan Osserman, and Ove K Pedersen. 2013. Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research 42, 3 (2013), 294–320.
  10. Sven Charleer, Kathrin Gerling, Francisco Gutiérrez, Hans Cauwenbergh, Bram Luycx, and Katrien Verbert. 2018. Real-time dashboards to support esports spectating. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 59–71.
  11. Zhengxing Chen, Christopher Amato, Truong-Huy D Nguyen, Seth Cooper, Yizhou Sun, and Magy Seif El-Nasr. 2018. Q-deckrec: A fast deck recommendation system for collectible card games. In 2018 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 1–8.
  12. Zhengxing Chen, Truong-Huy D Nguyen, Yuyu Xu, Christopher Amato, Seth Cooper, Yizhou Sun, and Magy Seif El-Nasr. 2018. The art of drafting: a team-oriented hero recommendation system for multiplayer online battle arena games. In Proceedings of the 12th ACM Conference on Recommender Systems. ACM, 200–208.
  13. Alexander Cho, AM Tsaasan, and Constance Steinkuehler. 2019. The building blocks of an educational esports league: lessons from year one in orange county high schools. In Proceedings of the 14th International Conference on the Foundations of Digital Games. 1–11.
  14. Chih-Yueh Chou, K Robert Lai, Po-Yao Chao, Chung Hsien Lan, and Tsung-Hsin Chen. 2015. Negotiation based adaptive learning sequences: Combining adaptivity and adaptability. Computers & Education 88 (2015), 215–226.
  15. Anders Harboell Christiansen, Emil Gensby, and Bryan S Weber. 2019. Resolving Simultaneity Bias: Using Features to Estimate Causal Effects in Competitive Games. In 2019 IEEE Conference on Games (CoG). IEEE, 1–8.
  16. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement 20, 1 (1960), 37–46.
  17. Pamela M Cole, Sarah E Martin, and Tracy A Dennis. 2004. Emotion regulation as a scientific construct: Methodological challenges and directions for child development research. Child development 75, 2 (2004), 317–333.
  18. Małgorzata Ćwil and William T Howe. 2020. Cross-cultural analysis of gamer identity: A comparison of the United States and Poland. Simulation & Gaming 51, 6 (2020), 785–801.
  19. Maarten Denoo, Niels Bibert, and Bieke Zaman. 2021. Disentangling the motivational pathways of recreational esports gamblers: A laddering study. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
  20. Scott Donaldson. 2017. Mechanics and metagame: Exploring binary expertise in league of legends. Games and Culture 12, 5 (2017), 426–444.
  21. Joshua A Eaton, Matthew-Donald D Sangster, Molly Renaud, David J Mendonca, and Wayne D Gray. 2017. Carrying the team: The importance of one player’s survival for team success in League of Legends. In Proceedings of the human factors and ergonomics society annual meeting, Vol. 61. SAGE Publications Sage CA: Los Angeles, CA, 272–276.
  22. Markus Eger and Pablo Sauma Chacón. 2020. Deck Archetype Prediction in Hearthstone. In International Conference on the Foundations of Digital Games. 1–11.
  23. Jose Esteves, Konstantina Valogianni, and Anita Greenhill. 2021. Online social games: The effect of social comparison elements on continuance behaviour. Information & Management 58, 4 (2021), 103452.
  24. Joey R Fanfarelli. 2018. Expertise in professional overwatch play. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS) 10, 1 (2018), 1–22.
  25. Rowanne Fleck and Geraldine Fitzpatrick. 2010. Reflecting on reflection: framing a design landscape. In Proceedings of the 22nd Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction. 216–223.
  26. Jessica Formosa, Nicholas O’Donnell, Ella M Horton, Selen Türkay, Regan L Mandryk, Michael Hawks, and Daniel Johnson. 2022. Definitions of Esports: A Systematic Review and Thematic Analysis. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–45.
  27. Helen Gavin. 2008. Thematic analysis. Understanding research methods and statistics in psychology (2008), 273–282.
  28. James J Gross. 1999. Emotion regulation: Past, present, future. Cognition & emotion 13, 5 (1999), 551–573.
  29. Greg Guest, Emily Namey, and Mario Chen. 2020. A simple method to assess and report thematic saturation in qualitative research. PloS one 15, 5 (2020), e0232076.
  30. Allyson Hadwin and Mika Oshige. 2011. Self-regulation, coregulation, and socially shared regulation: Exploring perspectives of social in self-regulated learning theory. Teachers College Record 113, 2 (2011), 240–264.
  31. Allyson Fiona Hadwin, Sanna Järvelä, and Mariel Miller. 2011. Self-regulated, co-regulated, and socially shared regulation of learning. Handbook of self-regulation of learning and performance 30 (2011), 65–84.
  32. Nour Halabi, Günter Wallner, and Pejman Mirza-Babaei. 2019. Assessing the impact of visual design on the interpretation of aggregated playtesting data visualization. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 639–650.
  33. Danita Hartley and Antonija Mitrovic. 2002. Supporting learning by opening the student model. In International Conference on Intelligent Tutoring Systems. Springer, 453–462.
  34. Joseph Hesketh, Christoph Sebastian Deterding, and Jeremy Gow. 2020. How Players Learn Team-versus-Team Esports: First Results from A Grounded Theory Study. In DiGRA’20 Abstract-Proceedings of the 2020 DiGRA International Conference. York.
  35. Danial Hooshyar, Margus Pedaste, Katrin Saks, Äli Leijen, Emanuele Bardone, and Minhong Wang. 2020. Open learner models in supporting self-regulated learning in higher education: A systematic literature review. Computers & Education 154 (2020), 103878.
  36. Alice Kerly, Richard Ellis, and Susan Bull. 2007. CALMsystem: a conversational agent for learner modelling. In International conference on innovative techniques and applications of artificial intelligence. Springer, 89–102.
  37. Erica Kleinman, Christian Gayle, and Magy Seif El-Nasr. 2021. "Because I’m Bad at the Game!" A Microanalytical Study of Self Regulated Learning in League of Legends. Frontiers in Psychology (2021).
  38. Erica Kleinman, Reza Habibi, Yichen Yao, Christian Gayle, and Magy Seif El-Nasr. 2022. "A Time and Phase for Everything" - Towards A Self Regulated Learning Perspective on Computational Support for Esports. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–27.
  39. Erica Kleinman, Murtuza N Shergadwala, and Magy Seif El-Nasr. 2022. Kills, Deaths, and (Computational) Assists: Identifying Opportunities for Computational Support in Esport Learning. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 519, 13 pages. https://doi.org/10.1145/3491102.3517654
  40. Erica Kleinman, Jennifer Villareale, Murtuza N Shergadwala, Zhaoqing Teng, Andy Bryant, Jichen Zhu, and Magy Seif El-Nasr. 2023. "What else can I do?" Examining the Impact of Community Data on Adaptation and Quality of Reflection in an Educational Game. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–12.
  41. Athanasios Vasileios Kokkinakis, Simon Demediuk, Isabelle Nölle, Oluseyi Olarewaju, Sagarika Patra, Justus Robertson, Peter York, Alan Pedrassoli Pedrassoli Chitayat, Alistair Coates, Daniel Slawson, et al. 2020. DAX: Data-Driven Audience Experiences in Esports. In ACM International Conference on Interactive Media Experiences. 94–105.
  42. Denis Koposov, Maria Semenova, Andrey Somov, Andrey Lange, Anton Stepanov, and Evgeny Burnaev. 2020. Analysis of the reaction time of esports players through the gaze tracking and personality trait. In 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE). IEEE, 1560–1565.
  43. Yubo Kou and Xinning Gui. 2020. Emotion Regulation in eSports Gaming: A Qualitative Study of League of Legends. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–25.
  44. Yong Ming Kow and Timothy Young. 2013. Media technologies and learning in the starcraft esport community. In Proceedings of the 2013 conference on Computer supported cooperative work. 387–398.
  45. Yen-Ting Kuan, Yu-Shuen Wang, and Jung-Hong Chuang. 2017. Visualizing real-time strategy games: The example of starcraft ii. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST). IEEE, 71–80.
  46. Josef Künsting, Joachim Wirth, and Fred Paas. 2011. The goal specificity effect on strategy use and instructional efficiency during computer-based scientific discovery learning. Computers & Education 56, 3 (2011), 668–679.
  47. J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics (1977), 159–174.
  48. Lasse Juel Larsen. 2020. The play of champions: Toward a theory of skill in eSport. Sport, ethics and philosophy (2020), 1–23.
  49. Fotis Lazarinis and Symeon Retalis. 2007. Analyze me: open learner model in an adaptive web testing system. International Journal of Artificial Intelligence in Education 17, 3 (2007), 255–271.
  50. Je Seok Lee and Constance Steinkuehler. 2019. Esports as a catalyst for connected learning: the North America Scholastics Esports Federation. XRDS: Crossroads, The ACM Magazine for Students 25, 4 (2019), 54–59.
  51. Shengmei Liu, Mark Claypool, Atsuo Kuwahara, Jamie Sherman, and James J Scovell. 2021. Lower is better? The effects of local latencies on competitive first-person shooter game players. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–12.
  52. Daniel Madden, Yuxuan Liu, Haowei Yu, Mustafa Feyyaz Sonbudak, Giovanni M Troiano, and Casper Harteveld. 2021. “Why Are You Playing Games? You Are a Girl!”: Exploring Gender Biases in Esports. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
  53. Rick M Menasce. 2017. From casual to professional: how Brazilians achieved eSports success in counter-strike: global offensive. Ph. D. Dissertation. Northeastern University.
  54. mobalytics. 2021. Mobalytics. https://app.mobalytics.gg/. Online; accessed 25 January 2021.
  55. Geoff Musick, Rui Zhang, Nathan J McNeese, Guo Freeman, and Anurata Prabha Hridi. 2021. Leveling Up Teamwork in Esports: Understanding Team Cognition in a Dynamic Virtual Environment. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (2021), 1–30.
  56. Ernesto Panadero. 2017. A review of self-regulated learning: Six models and four directions for research. Frontiers in psychology 8 (2017), 422.
  57. Petr Parshakov, Dennis Coates, and Marina Zavertiaeva. 2018. Is diversity good or bad? Evidence from eSports teams analysis. Applied Economics 50, 47 (2018), 5064–5075.
  58. Ismael Pedraza-Ramirez, Lisa Musculus, Markus Raab, and Sylvain Laborde. 2020. Setting the scientific stage for esports psychology: A systematic review. International Review of Sport and Exercise Psychology 13, 1 (2020), 319–352.
  59. Matthew A Pluss, Andrew R Novak, KJ Bennett, Derek Panchuk, Aaron J Coutts, and Job Fransen. 2020. Perceptual-motor abilities underlying expertise in esports. Journal of Expertise 3, 2 (2020), 133–143.
  60. Matthew A Pluss, Andrew R Novak, Kyle JM Bennett, Derek Panchuk, Aaron J Coutts, and Job Fransen. [n. d.]. The relationship between the quantity of practice and in-game performance during practice with tournament performance in esports: An eight-week study. ([n. d.]).
  61. porofessor.gg. 2021. porofessor.gg. https://porofessor.gg/download. Online; accessed December 9th 2021.
  62. Dylan R Poulus, Tristan J Coulter, Michael G Trotter, and Remco Polman. 2022. A qualitative analysis of the perceived determinants of success in elite esports athletes. Journal of Sports Sciences 40, 7 (2022), 742–753.
  63. James Prather, Lauren Margulieux, Jacqueline Whalley, Paul Denny, Brent N. Reeves, Brett A. Becker, Paramvir Singh, Garrett Powell, and Nigel Bosch. 2022. Getting By With Help From My Friends: Group Study in Introductory Programming Understood as Socially Shared Regulation. In Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1 (Lugano and Virtual Event, Switzerland) (ICER ’22). Association for Computing Machinery, New York, NY, USA, 164–176. https://doi.org/10.1145/3501385.3543970
  64. James Prather, Lauren Margulieux, Jacqueline Whalley, Paul Denny, Brent N Reeves, Brett A Becker, Paramvir Singh, Garrett Powell, and Nigel Bosch. 2022. Getting By With Help From My Friends: Group Study in Introductory Programming Understood as Socially Shared Regulation. In Proceedings of the 2022 ACM Conference on International Computing Education Research-Volume 1. 164–176.
  65. Minna Puustinen and Lea Pulkkinen. 2001. Models of self-regulated learning: A review. Scandinavian journal of educational research 45, 3 (2001), 269–286.
  66. Daniel Railsback and Nicholas Caporusso. 2018. Investigating the human factors in eSports performance. In International Conference on Applied Human Factors and Ergonomics. Springer, 325–334.
  67. Jana Rambusch, Peter Jakobsson, and Daniel Pargman. 2007. Exploring E-sports: A case study of game play in Counter-strike. In 3rd Digital Games Research Association International Conference: "Situated Play", DiGRA 2007, Tokyo, 24 September 2007 through 28 September 2007, Vol. 4. Digital Games Research Association (DiGRA), 157–164.
  68. Jason G Reitman, Reginald Gardner, Kathryn Campbell, Alex Cho, and Constance Steinkuehler. [n. d.]. Academic and Social-Emotional Learning in High School Esports. ([n. d.]).
  69. Gabriela T Richard, Zachary A McKinley, and Robert William Ashley. 2019. Collegiate Esports as Learning Ecologies: Investigating collaboration, reflection and cognition during competitions. Transactions of the Digital Games Research Association 4, 3 (2019).
  70. Fredrik Rusk, Matilda Ståhl, and Kenneth Silseth. 2020. Exploring peer mentoring and learning among experts and novices in online in-game interactions. In Proceedings of the 14th International Conference on Game Based Learning. Academic Conferences International Limited, 461–468.
  71. Fredrik Rusk, Matilda Ståhl, and Kenneth Silseth. 2021. Player Agency, Team Responsibility, and Self-Initiated Change: An Apprentice’s Learning Trajectory and Peer Mentoring in Esports. In Esports research and its integration in education. IGI Global, 103–126.
  72. Richard M Ryan, Jennifer G La Guardia, Jessica Solky-Butzel, Valery Chirkov, and Youngmee Kim. 2005. On the interpersonal regulation of emotions: Emotional reliance across gender, relationships, and cultures. Personal relationships 12, 1 (2005), 145–163.
  73. Johnny Saldaña. 2021. The coding manual for qualitative researchers. SAGE Publications Limited.
  74. Aleksandr Semenov, Peter Romov, Sergey Korolev, Daniil Yashkov, and Kirill Neklyudov. 2016. Performance of machine learning algorithms in predicting game outcome from drafts in dota 2. In International Conference on Analysis of Images, Social Networks and Texts. Springer, 26–37.
  75. SENPAI.gg. 2021. SENPAI.gg. https://senpai.gg/. Online; accessed April 15th 2021.
  76. Murtuza N Shergadwala and Magy Seif El-Nasr. 2021. Esports agents with a theory of mind: towards better engagement, education, and engineering. arXiv preprint arXiv:2103.04940 (2021).
  77. Raja M Suleman, Riichiro Mizoguchi, and Mitsuru Ikeda. 2016. A new perspective of negotiation-based dialog to enhance metacognitive skills in the context of open learner models. International Journal of Artificial Intelligence in Education 26, 4 (2016), 1069–1115.
  78. Evelyn TS Tan, Katja Rogers, Lennart E Nacke, Anders Drachen, and Alex Wade. 2022. Communication Sequences Indicate Team Cohesion: A Mixed-Methods Study of Ad Hoc League of Legends Teams. Proceedings of the ACM on Human-Computer Interaction 6, CHI PLAY (2022), 1–27.
  79. Wanyi Tang. 2018. Understanding esports from the perspective of team dynamics. The Sport Journal 21 (2018), 1–14.
  80. Adam J Toth, Niall Ramsbottom, Christophe Constantin, Alain Milliet, and Mark J Campbell. 2021. The effect of expertise, training and neurostimulation on sensory-motor skill in esports. Computers in Human Behavior 121 (2021), 106782.
  81. Selen Türkay and Sonam Adinolf. 2019. Friending to flame: How social features affect player behaviours in an online collectible card game. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
  82. Selen Turkay, Jessica Formosa, Robert Cuthbert, Sonam Adinolf, and Ross Andrew Brown. 2021. Virtual Reality Esports-Understanding Competitive Players’ Perceptions of Location Based VR Esports. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
  83. Guenter Wallner and Simone Kriglstein. 2016. Visualizations for retrospective analysis of battles in team-based combat games: A user study. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. 22–32.
  84. Günter Wallner and Simone Kriglstein. 2020. Multivariate Visualization of Game Metrics: An Evaluation of Hexbin Maps. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 572–584.
  85. Günter Wallner, Marnix Van Wijland, Regina Bernhaupt, and Simone Kriglstein. 2021. What Players Want: Information Needs of Players on Post-Game Visualizations. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13.
  86. Philip H Winne. 2021. Open learner models working in symbiosis with self-regulating learners: A research agenda. International Journal of Artificial Intelligence in Education 31, 3 (2021), 446–459.
  87. Minerva Wu, Je Seok Lee, and Constance Steinkuehler. 2021. Understanding Tilt in Esports: A Study on Young League of Legends Players. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–9.
  88. Weiwei Zhang, Goran Muric, and Emilio Ferrara. 2022. Individual and Collective Performance Deteriorate in a New Team: A Case Study of CS: GO Tournaments. arXiv preprint arXiv:2205.09693 (2022).
  89. Barry J Zimmerman and Adam R Moylan. 2009. Self-regulation: Where metacognition and motivation intersect. (2009).
