
“Oh My God! It’s Recreating Our Room!” Understanding Children’s Experiences with A Room-Scale Augmented Reality Authoring Toolkit

Published: 11 May 2024

Abstract

Human-Computer Interaction (HCI) and education researchers have applied Augmented Reality (AR) to support spatial thinking in K-12 education. However, fewer studies support spatial thinking through spatial exploration. Room-scale AR, a recent technological development, brings new opportunities not yet researched. We developed NetLogo AR, an AR authoring toolkit that allows children to play with, design, and create room-scale AR experiences that combine AR with computational models. To acquire a deeper and more nuanced understanding of children’s interactions with this new technology, we conducted eight-week participatory design sessions with seven children aged 11-13. We analyzed 48 hours of video data, interview transcripts, and design artifacts. Children were enthusiastic and engaged in spatial thinking activities. We affirmed room-scale AR’s role in spatial exploration by comparing it with the other supported modalities. Building on existing studies, we propose a new AR design framework around spatial movement and exploration that could help inform design decisions.


1 INTRODUCTION

Augmented Reality (AR) has been extensively studied in K-12 learning and education. Many AR learning designs have leveraged AR to support spatial thinking, which is recognized as an essential skill in STEM learning[57]. The development of spatial thinking can be supported by spatial exploration, defined as the exploration behaviors (e.g. movement) during children’s construction of spatial knowledge[22, 50]. Some AR designs, mostly projection-based and geospatial AR, successfully fostered children’s spatial thinking through exploration[21, 42]. However, the bulk of existing literature has focused on marker-based, device-based AR, which supports spatial thinking by enabling learners to view a digital object from multiple perspectives[32]. The content is often disconnected from the environment, making little use of the rich spatial properties of children’s physical surroundings[40]. As such, device-based AR often falls short of children’s expectation that AR systems should spatially recognize and transform their physical surroundings at room scale[68]. It is also unclear what design factors in AR may lead to spatial exploration and create spatial thinking opportunities. A recent design framework suggests that the adoption of markerless AR might be the key[41], yet some studies reported different usage patterns (e.g. [2, 60]).

We present a spatial AR authoring system, NetLogo AR, that leverages device-based, room-scale AR for spatial learning. Recent advances in room-scale AR technologies have opened up new opportunities for learning designs that encourage children’s spatial exploration and thinking. Compared to the AR technology used in most previous studies, room-scale AR requires relatively little effort to align digital content with physical surroundings[1]. The technology works on consumer-level mobile devices and poses a much lower threshold than AR headsets such as Microsoft HoloLens. Room-scale AR confronts authoring tools with a novel challenge: working with unexpected and dynamic physical spaces. However, it also opens up a new perspective on the relationship between digital content and physical surroundings[46, 62]. So far, very few studies have explored the potential of device-based, room-scale AR for learning.

Our design builds on NetLogo, an agent-based programming language that is low-threshold, high-ceiling, naturally spatial, and easy for children to learn[56]. While it has mostly been used in complex systems modeling, studies of NetLogo have also seen success in supporting children’s creative expression, ranging from games to artwork[9, 12]. Informed by a recent framework[41], our design supports two markerless modalities (room-scale, plane-based) to encourage children’s spatial exploration; leverages children’s bodies to help them author spatial AR experiences; and creates spatial thinking opportunities through an easy switch between AR and 2D top-down views.

To understand how children might interact with the novel AR technology and design their AR experiences, we conducted eight-week participatory design sessions in a small, private K-12 school in the U.S. Midwest. Participatory design is a methodology that invites stakeholders as equal partners into design processes[18]. We worked closely with seven children (aged 11-13) during their after-school club time. We collected video recordings, interviews, and design artifacts. Through grounded qualitative analysis, we addressed the following research questions:

(1) What were children’s impressions of and responses toward room-scale AR technology?
(2) What kind of spatial activities did children engage in when playing with, thinking about, and designing with room-scale AR?
(3) Did children’s spatial reactions to and usage preferences for the system differ in the plane-based AR and non-AR modalities?

Our findings show that children were enthusiastic about exploring room-scale AR technology. This interest fostered their bodily spatial exploration and investigation. In many cases, children’s spatial exploration and designs with spatial properties led to their engagement in spatial thinking activities. We compared children’s spatial reactions and usage preferences in the plane-based AR and non-AR modalities with those in room-scale AR. Finally, we explored the potential reasons for some children’s shift in preferred modalities.

The contributions of this paper include:

(1) The design and implementation of NetLogo AR, an AR authoring toolkit that brings computational models of complex systems together with room-scale physical worlds;
(2) A deeper understanding of children’s interaction with NetLogo AR’s room-scale, plane-based, and non-AR modalities, gained by analyzing video streams of seven children’s eight-week play and design engagement;
(3) A proposed AR design framework dedicated to encouraging users’ different spatial movement and exploration behaviors;
(4) A discussion of potential moments and design opportunities for future AR authoring systems to support children’s spatial exploration and thinking.


2 RELATED WORK

2.1 AR in Education

AR has been studied for its educational impact across various K-12 age groups for two decades. It has been applied in many contexts, such as humanities and arts, science, vocational, and programming education[4, 55].

Radu identified three main potential benefits of AR in education: Aligned representations present learning content in 3D spaces and provide better contextualization, resulting in a better understanding of complex phenomena and better visual reasoning[26, 47, 51, 70]. Transformed representations enhance original representations with more media forms, visualize the invisible, reduce cognitive load, and enable a transition from presentation to active exploration and knowledge construction[26, 47, 51, 70]. Embodiment encourages learners to physically and tangibly interact with the learning content, leveraging presence, immediacy, and immersion for deeper learning outcomes[26, 47, 70]. In the K-12 age group, AR designs have successfully targeted many aspects of learning and education, e.g. as interactive textbooks, educational toys, or embodied play[3, 26, 71].

While mobile-based AR has become increasingly available through the support of low-cost smartphones, research in AR design for education is still at an early stage[32]. At this point, a majority of design studies with mobile-based AR in education still use marker-based AR, focusing on presenting digital resources[32, 55]. As a result, AR designs generally under-utilize the spatial properties of physical environments, leading children to focus more on digital content than on their real-world surroundings. For example, a design study by Malinverni et al. found that children aged 9-11 only followed markers during usage, leaving the rest of the physical space unexplored. The finding echoes an earlier literature review: learners may pay too much attention to virtual information, making the AR technology intrusive in learning settings[4].

AR’s current landscape in K-12 education contrasts with what children expect when co-designing AR headsets with researchers[68]. Beyond visually augmenting the physical environment, children expected that AR systems should be able to create immersive environments at room scale by transforming their physical surroundings, where virtual and physical elements can interact with each other. Informed by that study, we designed NetLogo AR to transform children’s physical environments into room-scale AR experiences.

2.2 AR for Spatial Thinking and Exploration

Educational research has assigned great importance to spatial thinking skills, as they serve as a gateway or barrier to STEM learning[57, 58]. Previous literature has identified three main functions of spatial thinking[15]: a descriptive function, where people capture and preserve the appearance of and relationships among objects, and then convey them to others; an analytic function, where people integrate, relate, and structure spatial information into relational representations (i.e. structures); and an inferential function, where people generate answers to questions about the evolution and function of spatial objects. Spatial exploration, which refers to exploration behaviors (e.g. body movement aimed at revealing more information) during children’s construction of spatial knowledge, is known to support the development of spatial abilities[22]. Children’s practice of whole-body movements supports their understanding of spatial structures, a concept coined as "body syntonic learning"[43]. Children start spatial exploration early on[22], and its importance does not diminish in STEAM education aimed at K4-12 children[16, 50]. In certain learning scenarios, such as museum exhibits and botanical gardens, spatial exploration is also important in its own right, as design goals often include familiarization with and exploration of physical spaces[27].

While many AR designs attempted to support spatial thinking through the visualization of spatial structures[8, 10], some projection-based AR designs did so through spatial exploration and whole-body movement. An early study positively associated the spatial learning gains of AR with physical movement[52]. Price and Rogers’s design framework for augmented spaces sees exploration and awareness of the physical world as key benefits of AR. Several later studies used AR for participatory simulation, which allows players to function as different, individual parts of complex systems, usually in a physical space[34, 67]. In such environments, spatial movement is a natural outcome and a crucial resource for disciplinary learning[30]. For example, Enyedy et al. developed learners’ deep understanding of forces and motion, two fundamental physics concepts, by supporting learners’ spatial movement and reasoning with AR. Besides projection-based AR, geospatial AR can also introduce spatial movement, as evidenced by Pokemon Go[13]. In education, while college freshmen successfully navigated the physical world with an abstract representation (maps)[33], students sometimes struggled with the cognitive overload introduced by the combined demands of geospatial navigation and collaborative problem-solving[19].

It has been less clear what design factors in AR may lead to spatial exploration. Projection-based AR may lead to spatial movement, but device-based AR (e.g. children holding a tablet) could also produce it. The problem is that movement does not automatically lead to exploration. For example, Malinverni et al. noticed that both device-based and projection-based AR failed to produce spatial exploration among children. In the study, children aged 9-11 moved across space, following one marker to another. On the other hand, a markerless prototype design in the early phase of the same study stimulated children’s interest in exploring their physical surroundings[40]. While markers provide easy access to digital content, they also direct children’s focus and interest, limiting children’s interaction with physical spaces. Thus, Malinverni et al. suggest that markerless design has the potential to encourage children’s spatial exploration by opening up all spaces as potentially valid for interaction. However, it is unclear whether markerless AR technology is the deciding factor for spatial exploration. While some studies corroborated the hypothesis (e.g. [28, 38]), the marker-based NatureAR design successfully fostered spatial exploration activities among children aged 6-12 by intentionally placing nature markers (e.g. leaves, pinecones) in a nature park for "seek and find"[2].

Recently, with the capability to augment physical surroundings at room scale, AR headsets such as Microsoft HoloLens have opened up new spatial learning opportunities[37]. For example, Mathland MR facilitated users’ physical, spatial perception by spatially linking the physical world with a digitally augmented space, thus supporting spatial understanding of physical rules[31]. AR headsets were also used to support collaborative robot programming, where introducing spatial AR improved users’ spatial task performance[5]. So far, little literature has explored headset-based AR’s potential for spatial exploration, and even fewer studies have explored device-based, room-scale AR.

To better understand the impact of technology and design choices on children’s spatial exploration, in this study, we compare children’s spatial reactions and usage preferences across two markerless, device-based AR modalities supported by NetLogo AR: plane-based AR, which anchors AR content on a single plane (e.g. a wall, floor, or table); and room-scale AR, which anchors AR content in entire rooms.

2.3 AR Authoring Toolkits for Children

While researchers or professionals still create the majority of AR experiences for learning, recent research has started to recognize the importance of empowering K-12 teachers and/or students to create AR experiences[53, 59]. As the adoption of AR in educational settings has been hindered by limited time and technical expertise, lowering the threshold for learning participants to customize AR experiences becomes more important[53].

Moreover, having children create AR experiences could contribute to their deeper learning of spatial thinking and AR technology. Constructionist learning theory builds on the notion of learning as “building knowledge structures”[44] and argues that learning happens felicitously when a learner is consciously engaged in constructing a “public entity”. Following the constructionist tradition, a few toolkits were introduced for children as designers and/or developers of AR experiences[17]. For example, AR Scratch enabled children to project their Scratch programs onto pre-designated markers with live preview[48]; ExposAR enabled children to collaboratively create AR experiences through image and plane recognition[39]; StoryMakAR brought together physical prototyping and AR authoring for children to create stories in their spatial surroundings[24].

However, most toolkits either treat the world as a blank canvas or require precise mapping with real-world environments, making them unable to work with dynamic surroundings meaningfully[62]. For example, a space-agnostic design may project a 3D object in front of the viewer without connecting to the real world, while a marker-based design has to be triggered by the exact marker item. A plane-based system might recognize arbitrary planes, but it can hardly assign them meaning beyond size and height. Professional design teams also report a lack of tool support, as they struggle to iterate on AR designs that will be applied to a remote and distinct environment[35].

Several approaches have recently been developed to tackle this challenge. Building on LiDAR and 3D reconstruction technology, DistanciAR captured the appearance of physical environments so that remote authors could create AR content for a specific site[62]. Going beyond site-specific authoring, ScalAR captured the semantic relationships between physical surroundings to create AR content that potentially works in multiple physical locations[46]. Story CreatAR also focused on site-agnostic AR authoring, with the potential to support authors’ spatial thinking[54]. While those studies informed our technology design, no studies have leveraged room-scale spatial semantics for AR authoring.

NetLogo AR is made possible by recent computer vision and machine learning developments, which give more mobile devices room-scale spatial capabilities. Recently, all iOS devices with LiDAR sensors acquired the capability to recognize building structures and furniture[6]. Instead of relying on predefined markers, images, or dynamic but crudely approximated planes, the RoomPlan SDK makes it possible for adults and children to author room-scale dynamic AR content on many commercially available mobile devices[1]. Unlike the direct 3D reconstruction in DistanciAR, RoomPlan focuses on the semantic and spatial properties of the physical world (i.e. where a physical item is and what kind of item it is).

No longer treating the world as strictly scripted (pre-located markers) or as a blank canvas (space-agnostic), it is now possible to build AR experiences that meaningfully interact with ad-hoc, unpredictable real-world spaces and obstructions. However, few studies have explored the affordances brought by RoomPlan or any similar device-based, room-scale AR technology at the time of writing. No studies, to our knowledge, have explored room-scale AR authoring toolkits, whether for children or otherwise.


3 SYSTEM DESIGN

NetLogo AR is a spatial AR authoring toolkit that combines room-scale AR technology with NetLogo. It is freely distributed as a part of Turtle Universe, the mobile version of NetLogo[11]. An agent-based programming language, NetLogo has been widely used in both scientific research and education, enabling scientists and children to use simple computational rules at the individual level to create complex, emergent phenomena[63, 66]. Aside from complex systems modeling, studies of NetLogo have also seen success in supporting children’s creative expression, ranging from games to artwork[9, 12]. Our design builds on NetLogo because 1) as a descendant of the Logo language[43], NetLogo is a low-threshold, high-ceiling language, naturally spatial, and suitable for K-12 children to learn[56]; 2) the agent-based programming paradigm best suits our design objective (i.e. to combine complex real-world surroundings with digital content), as elements of both can easily be represented as computational agents, dynamically interacting with each other in time and space.
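To give a concrete flavor of this paradigm, the sketch below is our own minimal illustration, not code from any NetLogo library model; the procedure names, turtle count, and rule parameters are all assumptions. Each turtle follows two simple individual-level rules, and clustering emerges at the group level without any global coordination:

;; a minimal agent-based sketch: simple local rules, emergent clusters
to setup
  clear-all
  create-turtles 100 [ setxy random-xcor random-ycor ]
  reset-ticks
end

to go
  ask turtles [
    rt random 30
    lt random 30                                     ; rule 1: wiggle randomly
    let nearby other turtles in-radius 3
    if any? nearby
      [ face min-one-of nearby [ distance myself ] ] ; rule 2: drift toward the nearest neighbor
    fd 0.5
  ]
  tick
end

No turtle is told to form a group, yet groups appear; this individual-rules-to-emergence pattern is what NetLogo AR extends to agents situated in physical rooms.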

3.1 Design Objectives

Encouraging Spatial Exploration through Markerless AR. Spatial exploration is important both for the learning of spatial thinking and often in its own right (see 2.2). To encourage children’s spatial exploration behaviors, we leveraged two modalities of markerless AR: room-scale and plane-based. We chose markerless AR because 1) our goal is to combine diverse real-world spaces with augmented reality, yet marker-based AR needs intensive preparation for each space; and 2) a recent design framework suggests that markerless AR would generate spatial exploration[41]. Since only a small portion of devices currently available in classrooms (including our implementation site) support room-scale AR, we decided to also support plane-based AR, which is available on all ARKit- or ARCore-compatible mobile devices today. Plane-based AR provides more accessible but less powerful capabilities: it can scan certain spatial properties of the physical world (the shape, dimension, and height of planes). However, due to its limited ability to take in real-world information, plane-based AR provides less connection with the real world. Thus, we hypothesized that children would explore less with, and be less inclined to interact with, this AR modality.

Authoring AR with Body Syntonicity in Complex Physical Surroundings. Body syntonicity was the key design principle underlying the Logo programming language, in which children could learn abstract knowledge through their body senses[43]. Instead of thinking about abstract coordinate systems, children could situate themselves in the perspective of a turtle, acting out in their bodies, minds, and programming commands. As we attempt to incorporate spatial properties of the physical world into augmented reality experiences, we leverage the idea of body syntonicity to lower the threshold of authoring in diverse and potentially unexpected physical spaces. Instead of thinking abstractly about how the spaces could be, children can use their bodies to experience the current space, thinking about localized features and rules (e.g. should my agent turn right if something is blocking it). As children reason more about rules at the individual level, their designs might become more adaptive to different physical environments.

Encouraging Spatial Thinking through Easy-Switching Modalities. The immersive, room-scale AR experience can potentially encourage children to think from the embodied perspectives of individual agents[65]. Any physical obstacle - walls, doors, chairs, and tables - could block children’s immersive AR view, leading to children’s ad-hoc, spatiotemporal, and concrete understanding of computational models, artwork, or games. On the other hand, there is also a need for a more abstract, macro-level, unobstructed perspective of emergent phenomena, where the top-down 2D modality has an advantage. Through an easy-switch design that instantly converts between AR and non-AR modalities, we hope to create spatial learning opportunities where children can mentally map between the physical world and abstract 2D views, which was known to be difficult in previous learning studies[58].

3.2 Design Overview

NetLogo AR supports three modalities: room-scale AR, plane-based AR, and non-AR. To introduce NetLogo AR to children and stimulate their design ideas, we presented the Ants (AR) activity to children during the first session. Here, we use this activity to showcase the design and interaction flow of NetLogo AR. (Note that the technical details are in a separate paper, anonymized for review.)

Figure 1: The Ants (AR) Model, in NetLogo AR. A) Room-scale AR; B) Plane-based AR; C) Non-AR.

Building on the classic computational model Ants[64], Ants (AR) (Fig 1) simulates a colony of ants. As simple computational rules drive each ant, the ants collaboratively forage for food, manifesting swarm intelligence. However, there are notable differences between the two versions: 1) instead of living in an ideal virtual world, digital ants now share the same physical world with users; 2) digital ants have to navigate around physical obstacles; and 3) users are invited to play as an ant with body movements such as turning and running around. The merging of physical and computational worlds creates many possibilities unseen in the original model. For instance, while ants in the original model always succeed in fetching food, “physical ants” could be blocked by obstacles and stuck in loops.
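As a rough sketch of this difference (our own illustration, not the model’s actual code: the wiggle angles and the blocked-turn rule are made up, and we assume the ar:can-move? primitive introduced under "Authoring the Experiences" below mirrors the argument signature of NetLogo’s built-in can-move?), a single ant’s movement rule might read:

extensions [ ar ]   ; assumed extension name, inferred from the ar: prefix

to move-ant
  rt random 40
  lt random 40              ; wiggle, as in the original Ants model
  ifelse ar:can-move? 1     ; also checks scanned real-world obstacles
    [ fd 1 ]
    [ rt 90 ]               ; blocked by a wall or furniture: turn away
end

An ant that keeps turning away like this can end up circling behind a piece of furniture, which is one way the “stuck in loops” behavior described above can arise.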

When children enter any AR-infused model in the mobile version of NetLogo[11], a menu shows up with options for room-scale and plane-based scans, based on the device’s capability. On laptops, which some children used to design and develop their projects (for the activity, see 4.2), researchers could also load a pre-scanned room. Otherwise, children would hold the device to finish the scanning process. There is no limit on how large the plane or the room can be. Through network synchronization, children could create, edit, and play individual projects across laptops, tablets, and phones, and in different modalities.

Room-scale AR Modality. If the child chooses room-scale AR (Fig 1 A), their physical position will be mapped into the computational world. The “player ant” (represented as an arrow) will then match the child’s position, overlapping with them in the physical world alongside the other ants and creating a mirroring relationship with the child. Physical surroundings, once recognized, are visualized as semi-translucent boxes. As the child moves around the world, the “player ant” always follows the tablet’s position, as if it follows the child’s feet.

Plane-based AR Modality. If the child chooses plane-based AR (Fig 1 B), a scanned plane (e.g. the floor, a wall, or a table) serves as the stage for the computational model, using the rough, scanned boundary of the physical plane as the boundary of the computational model. The child controls the “player ant” through a virtual “crosshair”[49]: the “player ant” attempts to stay at the center of the tablet’s screen. The crosshair is not the only supported interaction mode; virtual joysticks were used in some children’s designs.

Non-AR Modality. In either AR mode, children can switch to and from the non-AR mode at any moment. The “player ant” follows the center of the screen, and children use touch interactions to pan the perspective, and thus move the “player ant”. As shown in Fig 1 C, the non-AR modality keeps properties from the scanned physical space: walls and boundaries are rendered as white lines; blockades (furniture, etc.) as colored blocks, with each color representing a type.

Authoring the Experiences. In Figure 1, the bottom-right buttons allow children to author the model: the first button for editing input parameters; the second for block-based programming; and the third for text-based programming. A dedicated open-source extension was built for NetLogo to incorporate and visualize the spatial information from AR scans (GitHub link). Each element (e.g. a segment of wall or a desk) of the scanned space is automatically imported as a turtle (agent) with physical properties. Some primitives are added or adapted so that digital agents can easily interact with the imported real-world agents. For example, the primitive “can-move?” checks whether a turtle can move forward; its AR counterpart “ar:can-move?” additionally checks whether any real-world obstacles stand in the way. As a result, both the block-based and text-based versions of the adapted Ants (AR) model can be written and read much like the original model. Once children finish editing any aspect, the changed model takes effect immediately.
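As a concrete (hypothetical) illustration of this read-alike property, the two wander procedures below differ by a single token; the procedure names and angles are ours, under the same signature assumption as before:

to wander          ; original model: only the world's edges stop a turtle
  rt random 50
  lt random 50
  if can-move? 1 [ fd 1 ]
end

to wander-ar       ; AR adaptation: scanned walls and furniture also block
  rt random 50
  lt random 50
  if ar:can-move? 1 [ fd 1 ]
end

This near-identical reading is what lets children familiar with the original model follow, and edit, the AR version.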


4 METHODOLOGY

4.1 Research Site and Participants

We convened a participatory design group[18] within an existing after-school program about AR creation, hosted by a PreK-12 social enterprise school in the U.S. Midwest. We collaborated with the school to conduct the study. All seven children (aged 11-13) in the program gave oral assent; the facilitator and all parents gave written consent. Four researchers on the team facilitated the sessions with the teacher’s assistance. The school provided the children’s demographics (Table 1). Most children had some prior experience with block-based coding, and only one with text-based coding. A university institutional review board approved our study protocol.

Pseudonym | Age | Ethnicity | Gender
Vihaan | 11 | Indian | M
Jayden | 13 | Black | M
Adam | 11 | White | M
Sophia | 11 | White | F
Sullivan | 12 | White | M
Michael | 11 | White | M
Keisha | 11 | Asian and Black | F

Table 1: Children’s Pseudonyms

4.2 Procedure and Data Collection

Phase | Session No. | Activities
Preparation | N/A | Observing children interacting with other AR products.
Design Activities | Session 1 | Researchers introduced NetLogo AR; children and researchers explored together.
Design Activities | Session 2 | Follow-up on children’s investigation of NetLogo AR; children explored NetLogo’s models library to find project inspirations.
Design Activities | Session 3 | Children came up with a personal project idea and communicated it with researchers.
Design Activities | Session 4 | Researchers developed prototypes based on each child’s sketches; children and researchers iterated on those prototypes (GitHub link).
Design Activities | Sessions 5-6 | Children and researchers iterated on projects and peer-tested.
Design Activities | Session 7 | Children and researchers prepared for the final showcase.
Final Showcase | Session 8 | Children showcased to other teachers and students.

Table 2: Overview of Design Sessions

Before the study, four researchers observed the first two weeks of the same after-school program, where children designed AR projects with CoSpaces Edu, an AR authoring toolkit. We then designed our session plans based on what we learned (Table 2). For eight weeks, we implemented weekly one-hour design sessions. The sessions started in a classroom, but children often went to other places on the same floor to design and play. After researchers introduced NetLogo’s models library, children were invited to find inspiration and generate their project ideas. Children designed, coded, and iterated on their individual projects, with input from peers and adults. Researchers and the facilitator offered support in design and programming on an as-needed basis. While we initially planned on having researchers implement NetLogo code for children’s design ideas, most children eventually learned some NetLogo and edited code with support from researchers and the facilitator. In the final showcase week, we had around 20 to 30 fifth- to eighth-grade children and their teachers as the audience in a different classroom.

Our activity design was inspired by the participatory design literature[7, 61, 72]. Children collaborated with adults as testers, informants, designers, and developers for their own and peers’ projects[18]. Children carried out the entire cycle of the design process[23, 29], while adults learned from observing, interacting with, and supporting children. Following the example of[25], we adapted our activity plan within and between sessions using children’s and the facilitator’s feedback. For example, when children expressed interest in seeing their video data, researchers edited two-minute recap videos weekly and played them at the beginning of each session. Honoring children’s requests motivated children to offer more feedback and participate more enthusiastically, enabling us to build a deep, playful working relationship with them.

During the activity, we collected three types of data:

(1) Videos: we recorded all design sessions with five eye-level GoPro camera headbands and one stationary room-wide camera, which resulted in 48 hours of footage.
(2) Design artifacts: we collected children’s design sketches, versions of children’s projects, and feedback cards written to peers.
(3) Interviews: we conducted and recorded seven semi-structured interviews during the sessions. These are contained within the video data and were analyzed through a similar process.

4.3 Data Analysis

After completing the eight design sessions, four researchers iteratively applied a grounded theory approach to the entire dataset, which includes video streams and design artifacts (e.g. design sketches). For video data, researchers coded the 48 hours in two rounds[14]. During the first round:

(1) Four researchers watched and separately open-coded eight hours of video data together. After that, researchers reconvened for an hour-long discussion, which led to the preliminary coding protocol.
(2) Researchers agreed on a preliminary coding protocol that involved chunking the footage by significant events (e.g. a playful activity or a conversation, each lasting between 30 seconds and five minutes), taking observational notes of noteworthy interactions, and recording important verbatim quotes.
(3) Researchers each coded about 12 hours of videos (including interview transcripts). During the process, researchers met every week and clarified the definitions in the codebook. We retrospectively revised the coding results when necessary.

Based on topics that repeatedly emerged from the significant events, researchers decided on three main themes related to 1) children’s reactions to room-scale AR; 2) AR modalities’ impact on children’s engagement patterns; and 3) AR modalities’ impact on children’s design decisions. During the second round:

(1) Each researcher marked up episodes in coded videos that were related to the research questions;
(2) Each of the first three authors took a research question, reviewed the entirety of the coded video data, and organized episodes into codes;
(3) Similarly, the fourth author organized the interview transcripts into codes.

Most of our analysis focuses on a qualitative understanding of what emerged from children’s interactions with NetLogo AR in a relatively naturalistic setting. More structured measurement was used in two cases. First, when measuring children’s preferences among modalities, we looked at each incident in which a child asked for, or turned down, an AR-capable device, and which modality they elected to switch to or stay in, as they frequently re-initiated AR scanning and could choose between room-scale and plane-based AR. Second, to understand the different impacts of modalities, we marked out three types of incidents:

(1) Body movement: when a child held a device and actively played with AR, we marked their body movement patterns. Since children’s movement occurred mostly in room-scale AR, we only coded exceptions (when children moved in plane-based AR or non-AR, or did not move in room-scale AR).
(2) Spatial exploration: when a child held a device, actively played with AR, and moved their body, we marked when they walked around multiple parts of physical spaces.
(3) Self-immersion: when a child talked to digital agents, or to themselves but from digital agents’ perspectives. To avoid over-interpretation, we only focused on children’s explicit expressions.


5 FINDINGS

5.1 Children’s Interest and Exploration of Room-scale AR

5.1.1 Excitement, Questions, and Repeated Tries: Children’s Initial Interest.

Children expressed interest in NetLogo AR through body gestures, verbal cues, asking questions, and repeatedly scanning rooms.

As soon as children realized that NetLogo AR could capture and visualize the dimensions of their physical surroundings, they expressed their excitement through body gestures and verbal cues. For example, when Adam first volunteered to scan the room, his interactions with an iPad were projected on a screen for other children to see. When the facilitators asked, “What is happening here?” Michael stood up from his chair and walked closer to the projected screen with his eyes fixed on it. Sophia audibly drew her breath and murmured “It’s making our room!” At a table behind them, Michael spoke with emphasis, “Oh my god, it’s recreating our room!” Children further expressed excitement when they saw moving digital elements blended with their physical surroundings. When the first author guided Adam to input a NetLogo model of colorful dots into the 3D image of their scanned room, the second author prompted, “What’s new here?” Vihaan excitedly shouted “There’s polka dots!” and clapped his hands. When the first author showed Keisha how to pull dots toward her, Michael said “Ohh that is satisfying.”

Children also showed their interest by asking design-related questions. After the first demonstration, Michael asked about the development of AR experiences: “So how do you do this?” and “Can you make them interact with you?” Adam asked, “What’s the end goal of this app?” While interacting, Sophia asked, “If we went to the art museum and scanned the statues. Would it scan the statues?”

Children repeatedly, and sometimes competitively, used room-scale AR, demonstrating their enthusiasm even as they grew more familiar with it. For example, in Session 2, Sophia asked the facilitators to let her scan the room herself. In Session 3, when Jayden finished his design work, he asked Vihaan and Sophia, “Hey you wanna do a room scan?”, and they promptly agreed. Since only one iPad in our study could perform room scans, children sometimes asked for it from another child, or “protected” their room-scan opportunity. In Session 2, Sophia and Keisha asked Michael for the room-scanning iPad to scan the room. After several requests, Michael reluctantly agreed.

5.1.2 Playful Exploration: Children’s Collaborative Investigation.

Children’s strong interest in room-scale AR led to nine collaborative investigations across four sessions. After the first demonstration, children were intrigued by the AI-driven technology, which seemed capable of recognizing and visualizing the location and dimensions of their physical surroundings. Children were particularly curious whether the AI could recognize their bodies as still objects. They collaboratively formed and tested hypotheses about how to “trick” the AI. During each investigation, one child was responsible for scanning the room and reporting who got scanned. Other participants tested and adjusted their strategies based on that child’s feedback.

Figure 2: Demonstration of three scenes of children’s group investigation. A) Scene One: Keeping Still; B) Scene Two: Blending with Furniture; C) Scene Three: Mimicking Furniture.

Fig 2 presents three representative episodes of such investigation. In each episode, one child constantly held a tablet and reported whether other children were scanned. The other children quickly adapted their hypotheses and tried new strategies. Children’s hypotheses became more sophisticated as time passed, reflected through their body movements. During the first group investigation, children theorized that the technology might only include non-moving objects, so they sat on a couch or stood still in front of a wall. When children realized those strategies did not work, they gradually layered additional conditions onto their hypotheses. By the seventh group investigation, children formed more complex hypotheses. They theorized that the technology might detect non-moving people by recognizing breathing movements, so they held their breath while keeping still. Children also hypothesized that the AI might scan people if they completely blended in with furniture, so they lay on a couch underneath pillows, or on the bay window behind curtains with a chair in front. Such playful group investigations furthered their interest in the AR technology and the AI algorithm that drives the room scan.

Overall, children showed strong interest and engagement in the room-scale AR technology. They were eager to try it out repeatedly, and launched a series of collaborative investigations to explore its capability by forming, testing, and adjusting hypotheses.

5.2 Children’s Spatial Exploration to Spatial Thinking with Room-scale AR

Across eight design sessions, children engaged in spatial activities by playing with and designing room-scale AR experiences. Children’s initial interest-driven spatial exploration sometimes led to spatial thinking activities in all three functions: descriptive, analytical, and inferential[15].

5.2.1 Children’s Design Ideas and Playful Exploration in Physical Spaces.

Most children incorporated dynamic, physical, and spatial configurations into their initial design ideas (Table 3). Five out of six design ideas situated the digital content in physical surroundings: the ants or cars would avoid physical obstacles (Sullivan, Michael, Jayden); the laser beams or guards would limit the player’s movement together with physical obstacles (Sophia, Vihaan). Children’s design ideas necessitated spatial exploration: one has to move across the room to find ants or treasure, or to navigate the car. The only exception (Keisha) was not directly connected with spatial configuration, but still centered around body movement.

Student | Project Name | Initial Idea | Spatial Exploration?
Sullivan | Ants Art | The player moves around the room. The ants follow the player, leaving traces that form an artwork. If two ants collide, they disappear. If one ant stays on another ant’s trace, the trace will be erased. | Yes
Sophia | Stealing Treasure | The player needs to find the treasure box randomly placed in the room, and avoid or disable laser beams that might appear on the screen. The player will lose health points if touched by laser beams. | Yes
Vihaan | Stealing Mullah | The player needs to find treasure in the room and bring it back to home base while avoiding randomly stationed guards. If players get caught by guards, they will say mean words to players. | Yes
Jayden | Driving Virtual Car | The player takes the room as a driveway, and steers a car through heavy traffic to reach the end. | Yes
Keisha | Dance! | The player dances while a virtual character follows the player’s body movements. | No
Michael | Ants Battle | Two teams of ants fight for resources. The player will be the lead ant of the red ants, guide them to fight the blue ants, and lead them to food sources. | Yes
Adam | Absent | Absent; would later build on Sophia’s idea. | N/A

Table 3: Children’s Initial Design Ideas

We found that children’s spatial design ideas did lead to spatial exploration. Children constantly attempted to scan novel places to play with the same project, and almost all of them initiated a room scan in physical spaces beyond the classroom, curious to explore different spaces. For example, in Session 3, when Jayden invited Sophia and Vihaan for a collaborative room scan, Vihaan proposed: “Scan both rooms!” All three children stood up and went to the other room. In Session 4, Jayden attempted to scan the street outside the classroom window to project his car racing project onto a physical road.

Figure 3: Sophia squeezed herself between a desk and a wall.

As scanning and exploring different places brings novel spatial constraints to the virtual world in room-scale AR, digital agents (e.g. ants) often behave differently between one room and another. Because of that, children not only walked around, but often used their whole body to explore the nuances of the space. For example, in Session 5, Sophia was testing her stealing treasure project. As the treasure was blocked from her perspective by physical obstacles, she squeezed between a desk and a wall, a part of classroom space not designed for students to pass, to check whether the treasure was in between (Fig 3). Similar spatial-body interaction also happened in Session 2, when Vihaan attempted to find digital agents under a sink, and in Session 6, when Sullivan and Jayden walked around and tilted their bodies to find the treasure in Adam’s project.

5.2.2 Spatial Thinking Through Playing, Ideation, and Iteration Activities.

The increased spatial exploration, in turn, led to increased spatial thinking opportunities for children. We observed all three functions of spatial thinking: descriptive, analytical, and inferential [15].

Figure 4: Examples of three children sketching ideas on paper with 2D illustrations. a) Sophia explained her idea of Stealing Treasure with a 2D illustration. b) Michael sketched out his idea of the ants model in a 2D illustration, with keywords to symbolize the rules of ants. c) Adam sketched out a 2D illustration of how players would play the game in his version.

Descriptive function. Children leveraged the descriptive function when communicating their project ideas to facilitators. They sketched 2D spatial representations to convey the spatial arrangement of digital objects and the player’s spatial experiences. For example, Sophia explained her thinking process and the game rules of Stealing Treasure to facilitators through an illustration. She began by drawing the top-down view of the Ants (AR) model (room-scale), transforming her real-world experience into 2D illustrations (Fig 4a).

Analytical function. Children utilized the analytical function by leveraging the 2D view to understand the spatial structure of their physical surroundings augmented with digital agents[58]. For example, in Session 2, when Vihaan, Sullivan, and Adam interacted with the AR-infused ants model, they walked around a room but could not find any ants. Vihaan switched to the 2D non-AR modality and realized that the ants and food piles had not been generated yet because the model was paused. He unpaused the model to generate them and switched back to AR. However, he still could not find any ants and switched to 2D again. One child in the group realized: “Oh, wait, they are in the other room” (Fig 5a). Vihaan switched back to the room-scale view and found the digital ants in the other room (Fig 5b).

Figure 5: Children used the 2D view to analyze their position and plan their further movement. a) Children used 2D to check their and other agents’ positions. b) Children switched back to room-scale view. c) Vihaan switched back to 2D view to plan their movement. d) Vihaan pointed to the screen and talked about his planned movement.

Inferential function. Children further used the inferential function to plan their next spatial moves. In the same episode, when the three children finally found the digital ants, they stopped and considered their next steps. Vihaan switched to 2D again and pointed at the screen: “So we’re right here” (Fig 5c). Then, he pointed to another food pile and said, “and I’m going to go over there and lead them there” (Fig 5d). Children actively switched between the more concrete room-scale AR and the more abstract non-AR modalities to plan out their movements (we should move in this direction to the food pile) and infer the possible outcomes (the ants would follow there).

The inferential function was more salient when children iterated on their spatial designs. In particular, children had to consider the mapping between the physical world and the digital, computational world as a two-way street. In Session 4, Adam tested and iterated on Sophia’s project, Stealing Treasure. In the original game, users who touched lasers would lose but could restart from the same position. Adam sketched his first idea on paper: the user’s position would also be reset. He highlighted the starting point of the game (Fig 6a) and the user’s hypothetical location when they touched laser beams (Fig 6b). Soon, Adam realized that this idea from non-AR games might not be readily transferable to a physical world. He pointed at the corresponding locations in the room: where the player stood (Fig 6c) and where the player would touch the laser beams (Fig 6d).

Figure 6: Adam iterated on Sophia’s project and changed game rules. a) Adam sketched the starting point. b) Adam sketched the hypothetical location when they touched the laser beams. c) Adam pointed to the player’s location in the real world. d) Adam pointed to the location where the player touched the laser beams in the real world.

5.3 Room-scale, Plane-based, and Non-AR Modalities

To find the main contributing factors to children’s spatial engagement, we compared children’s interest levels and spatial interactions with the same projects across the three modalities: room-scale, plane-based, and non-AR (for the methodology, see 4.3). Children’s in-depth spatial engagement was more present in room-scale AR. We further investigated why some children shifted their preferred modalities.

5.3.1 Different Modalities, Different Interaction Patterns.

We observed less body movement, spatial exploration, and self-immersion when children engaged with the plane-based AR or non-AR modalities. Here, we excluded cases where children briefly switched back and forth between room-scale AR and other modalities, as children always treated room-scale AR as the main modality in those cases.

Body Movement. In room-scale AR, children, in almost all cases, walked around the space with rich body movements. With the plane-based AR or non-AR modality, aside from one exception, children always stood or sat still (e.g. Fig 7). In plane-based AR (Fig 7a), children mostly tilted tablets up and down at different angles. They did not walk around or explore different spaces. The only exception was Vihaan walking around to find a plane large enough to accommodate his spatial model; after he found the spot, he no longer moved. The body movement pattern in non-AR mode was similar to that with other tablet- or laptop-based technology (Fig 7b): children only moved their fingers (touch screen) or hands (mouse and keyboard).

Figure 7: Children’s typical body language when using a) plane-based AR mode and b) non-AR mode.

Spatial Exploration. While we reported many instances of spatial exploration in room-scale AR, children never explored physical spaces using the plane-based AR or non-AR modalities. For example, after trying room-scale scans several times, Jayden tried to scan the street outside for his car racing project. Once a researcher told him that the tablet did not support room-scale scans, Jayden immediately gave up on a plane-based scan and returned to the classroom. For Jayden, there was no point in exploring (or scanning) physical spaces if the underlying technology could not recognize the properties of those spaces and blend them with his virtual world.

Self-Immersion. Children immersed themselves in individual agents’ perspectives, e.g. as one (leading) agent in the modeling world, when engaging with models across modalities. With room-scale AR, children’s self-immersion was both embodied and verbal. For example, Sullivan designed and developed his project in non-AR mode, mostly at his table. When he first got a chance to test it in room-scale AR, he not only expressed his excitement by shouting “Hooray!”, but also consciously avoided the digital ants projected on the ground by running around. Asked by the facilitator, Sullivan explained: “Because I don’t want them (the ants) to die.” In Sullivan’s project, all ants (including the ant played by himself) leave traces around the world, and ants die where too many traces accumulate in the same place. In comparison, children’s self-immersion behaviors in the plane-based and non-AR modalities were verbal only. For example, when Vihaan was playing his game in plane-based AR, he said to himself: “Where is home… see you boys. Oh, there is home!” However, as he played, his body stayed in his chair, showing no bodily alignment with his thoughts.

5.3.2 Shift of Preferences.

While most children preferred the room-scale AR modality during Sessions 1-3, many children’s preferences shifted after Session 4, when they dived into the iterative design processes. The result is presented in Table 4. We identified three main reasons that might explain the shift in preferences.

Student | Sessions 1-3 | Session 4+, Design | Session 4+, Test | Spatial Project?
Vihaan | Room-scale AR | Plane-based AR | Plane-based AR | Yes
Jayden | Room-scale AR | Non-AR | Non-AR | No
Adam | Room-scale AR | Non-AR | Room-scale AR | Yes
Sophia | Room-scale AR | Room-scale AR | Room-scale AR | Yes
Sullivan | Room-scale AR | Non-AR | Room-scale AR | Yes
Michael | Room-scale AR | Room-scale AR | Room-scale AR (in Session 8: Non-AR) | Yes
Keisha | Unclear | Non-AR | Non-AR | No

Table 4: Children’s Preference of AR/Non-AR Modalities

While all children’s initial design plans were more or less spatial, two children’s final project ideas had few spatial components, rendering the AR modalities out of favor. For example, Jayden’s final project was a side-scrolling game. During the interview, Jayden explained: “I could do it with AR. it’s just I like the other option with no AR, it’s easier for me to design.” When Jayden was working on his first design idea of car racing, he was eager to use room-scale AR to scan the street. The shift in Jayden’s design ideas likely induced his shift of preference. On the other hand, Sophia, who made the room-escape game, clearly expressed her preference for room-scale AR: “AR is better for simulation. I’ve never tried the non-AR version. Plane scans don’t recognize the lines. BIRD-EYE VIEW.” The combination of physical surroundings (furniture, walls, doors) and model-generated obstacles (lasers) was a crucial element of her project, and only room-scale AR could provide that.

Children also chose modalities based on the design tasks at hand. For instance, Adam and Sullivan both used room-scale AR tablets for testing and switched to non-AR laptops for programming and debugging. Naturally, such differences were not observed among children who worked less with code. Sophia expressed similar views during the interview, even though she never used the non-AR modality: “I would probably use non-AR mode if I was changing it (code).”

Finally, the immaturity of the technology might have influenced children’s preferences among modalities. A major issue throughout the activities was misalignment. During our hour-long sessions, people walked around, and furniture was frequently moved aside. Our room-scale AR technology usually succeeded in scanning rooms and aligning the reconstructed AR representations back. However, when a device went to sleep, it often failed to re-align with the changed real-world surroundings, leading to frequent frustration among children. While we could keep devices awake to mitigate this issue, doing so led to overheated devices and lagging experiences. The unpleasant user experience might have driven children to other modalities, making them less willing to return.


6 DISCUSSION

6.1 An AR Design Framework for Spatial Exploration

NetLogo AR was designed to promote children’s spatial movement and exploration. While we initially focused on room-scale AR, not all devices at our implementation site support it. Informed by Malinverni et al.’s conceptual framework, which suggests a link between spatial exploration behaviors and markerless AR, we also supported plane-based markerless AR. Contrary to that hypothesis, while both modalities present similar digital content (agent-based models, games, or artwork), children only engaged in spatial movement and exploration in room-scale AR. Previous studies also showed cases where marker-based AR led to spatial exploration[2]. These findings motivated us to propose a new AR design framework. In this framework, we examine whether AR content is linked with markers or with open-ended spaces, which influences users’ goals of movement; and the certainty of the AR content’s location, which influences users’ destinations of movement.

Figure 8: Our AR design framework for spatial movement and exploration.

Similar to Malinverni et al.’s framework, our goal is to provide designers with practical suggestions that might inform their design decisions. As shown in Fig 8, we situate the perceived spatial properties of AR content in a two-dimensional space, categorize existing AR designs from our study and previous literature, and then describe the corresponding user behaviors. One dimension concerns the AR content’s location: whether or not the content can be triggered with certainty in a given place or places. The other dimension concerns what the AR content is linked with: one or more markers, or an open-ended space. Both dimensions are continuous. We make one implicit assumption: users are already interested in the augmented content. Below, we discuss each quadrant with examples:

Contents linked with markers; Location certain. The two studies reported in Malinverni et al. both match this quadrant. Despite the differences in presentation (device-based vs. projection-based), the AR content in both designs is triggered by markers that learners can easily identify and recognize. As a result, learners moved from one marker to another, with the sole goal of triggering the digital content[40, 41]. Similarly, when finding in-game locations such as "gyms" or "PokéStops", players of the location-based game Pokemon Go (11-56 years old) moved between destinations, as incentivized by the game[13].

Contents linked with markers; Location uncertain. A marker-based design, NatureAR successfully triggered learners’ (aged 6-12) spatial exploration by placing hidden markers all over a yard[2]. The AR content is still linked with markers, but their location becomes uncertain. While children were motivated to explore the natural environment, they constantly focused on finding markers, leading to reduced interactions with other elements of the physical environment[2].

Contents linked with space; Location uncertain. AR Magic Lantern employed a similar design idea: hiding triggers of AR content in a heritage site[27]. Different from NatureAR, triggers of AR content were not marked out in the environment, making the entire space legitimate for visitors to explore. As such, cross-generational participants were motivated to explore different aspects of the physical surroundings while trying to trigger AR content[27]. Our findings around room-scale AR also fit into this quadrant. Children started with uncertainty about the location of augmented content, nor did they have an idea of what kinds of physical items would be augmented. Consequently, children iteratively refined their hypotheses by exploring different parts of the physical world with their bodies (see 5.1). During design and play-testing, digital ants were scattered across physical rooms in unknown places, triggering children’s playful exploration behaviors (see 5.2.1).

Contents linked with space; Location certain. In our plane-based AR modality, the digital ants behaved much as they did in room-scale AR, but the location of the “playground” was certain: plane-based AR gives a full, unobstructed view of the computational model. The digital contents are much less related to the physical world. As a result, children’s interactions with the physical world were fewer and less exploratory. Although we did not design the plane-based AR activity to use the same control method as the room-scale one (i.e., using children’s bodies to control the digital agent), children used at least one additional method (a virtual joystick) besides the crosshair. In both cases, even when children moved around, their behaviors were more about controlling the game than exploring the physical space (see 5.3); children did not need to explore because all digital content had a specific, known location. A similar pattern appeared in another AR design study on spatially manipulating virtual objects: even when AR content covered a large swath of the room, because the content stayed unobstructed and predictable in the same location, users preferred to remain seated and avoided large body movements[60]. The sketch below illustrates this kind of pointer-driven control.
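As a rough analogy for crosshair- or joystick-style control, the following base-NetLogo sketch uses the mouse to stand in for a screen pointer; this is an illustrative assumption, not NetLogo AR’s actual crosshair or joystick implementation. A single agent (the hypothetical hero global) is steered entirely by pointing, which requires no movement through physical space:

    globals [ hero ]

    to setup-hero
      set hero one-of turtles   ;; pick one agent for the child to steer
    end

    to steer-hero  ;; run in a forever button
      ;; pointing replaces walking: the agent turns and moves toward
      ;; wherever the pointer is, while the child's body stays put
      if mouse-down? [
        ask hero [
          facexy mouse-xcor mouse-ycor
          fd 0.1
        ]
      ]
      display
    end

When control is decoupled from bodily position in this way, the incentive to move through the room largely disappears, consistent with the seated behavior reported above[60].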

To summarize, in our proposed framework, the horizontal dimension (certainty of location) is connected to the destination of spatial movement, while the vertical dimension (what the content is linked with) is connected to its goal. When the destination is uncertain (i.e., the content is at uncertain locations), users’ spatial movement becomes spatial exploration of the physical environment. When the goal is also unspecified (i.e., the content is linked with the space), the entire space becomes legitimate for open-ended spatial exploration. In all cases, users were motivated by the desire to reveal augmented content and incentivized by the design, yet different incentive structures led to different patterns of spatial movement and exploration[2, 13, 27, 41, 60]. We therefore suggest that designers first evaluate the spatial movement and exploration patterns they intend for their specific context, and then make design decisions about the location and linking of AR content accordingly.

6.2 Room-scale AR and Spatial Activities

We reported several cases where children moved from spatial exploration into spatial thinking activities. While all of them are directly related to room-scale AR, the technology alone does not generate the desired activities. Rather, the design decision to include a non-AR modality that children could easily switch to and from (see 3.1) contributed to the transition, as evidenced in several episodes in our findings. Here, we discuss four moments that future designers should consider supporting:

(1) When children had an incentive to communicate their designs of spatial projects. In Sophia’s example, spatial thinking happened when she needed to sketch her spatial, 3D design ideas in a 2D medium to communicate with researchers. As such, we believe future designs should create opportunities for children to communicate their design ideas and engage with the descriptive function of spatial thinking.

(2) When children were incentivized to find digital contents, but only had an occluded, partial perspective on the spatial AR design. In Vihaan’s example, spatial thinking happened when the digital ants were in another room, blocked from view by physical walls, and he had to find them. To open up more spatial learning opportunities, we encourage future room-scale AR designs to create dynamic experiences that interact with physical obstacles. Designers could even intentionally spawn digital content beyond children’s perspective, as controlling perspectives is also beneficial for spatial learning[52].

(3) When children had an incentive to plan their movement in the physical space based on the situation in the virtual world. In Vihaan’s example, finding the digital ants was only the first step. He wanted to bring the ants to food, yet the ants only indirectly followed his movement, through chemicals (see the sketch after this list). As such, he had to infer the potential outcomes of his spatial movement. For future designs that aim to create spatial thinking opportunities, planning should be an integral part of the experience. Children’s “escape game”-like design ideas are good examples.

(4) When children were incentivized to design around the alignment or misalignment of the virtual and physical worlds. In Adam’s example, the need to align the physical player with the virtual player motivated him to infer outcomes in different scenarios. As children move from (paper-prototype) designers to developers and testers in a real-world environment, rich, dynamic spatial learning opportunities emerge beyond those baked into the learning activities. To fully harness this learning potential, we suggest that future AR designs for children create more opportunities for children to design and develop their own spatial experiences.
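To make the indirect, chemical-mediated following in moment (3) concrete, the following base-NetLogo sketch follows the spirit of the Ants model[64]. The players breed, deposit amount, and diffusion and evaporation rates are illustrative assumptions rather than NetLogo AR’s actual implementation; in room-scale AR, the child’s physical position would drive the players agent.

    breed [ players player ]
    breed [ ants ant ]
    patches-own [ chemical ]

    to go  ;; assumes setup has created some players and ants
      ;; the player's patch receives scent each tick, so ants trail the
      ;; player's past positions rather than following the player directly
      ask players [ set chemical chemical + 60 ]
      ask ants [
        follow-chemical
        rt random 40 lt random 40   ;; wiggle
        fd 1
      ]
      diffuse chemical 0.5                          ;; spread scent to neighbors
      ask patches [ set chemical chemical * 0.9 ]   ;; evaporate over time
      tick
    end

    to follow-chemical  ;; ant procedure: turn toward the strongest scent
      let ahead scent-at-angle 0
      let scent-left scent-at-angle 45
      let scent-right scent-at-angle -45
      if (scent-right > ahead) or (scent-left > ahead) [
        ifelse scent-right > scent-left [ rt 45 ] [ lt 45 ]
      ]
    end

    to-report scent-at-angle [ angle ]  ;; ant procedure
      let p patch-right-and-ahead angle 1
      report ifelse-value (p = nobody) [ 0 ] [ [chemical] of p ]
    end

Because the ants climb a gradient left by past deposits, a child must plan where to walk, not merely where to stand; the delay between movement and the ants’ response is exactly what pushed Vihaan to infer outcomes before moving.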

6.3 Room-scale AR and Children’s Design

With NetLogo AR, it is possible to transform children’s physical surroundings into immersive, room-scale experiences. Virtual elements can interact with the real world in a dynamic, programmable way that welcomes children’s open-ended exploration and design ideas. Our findings showed that most children passionately played with, designed for, and iterated on room-scale AR technology. Children successfully leveraged spatial thinking skills to iterate on their spatial designs, sometimes with sketches and sometimes with code, in a programming language they first learned during the activities.

NetLogo AR’s capability to materialize computational designs at room scale stimulated children’s imagination well beyond what is currently possible. For example, Keisha’s initial design idea called for virtual characters to dance in the room with the player; Jayden attempted to situate his car-racing game on the street outside the school; and during the final sessions, a child and a teacher tried to physically “stride across” the laser line in Sophia’s design, sparking a discussion about wearable devices that could capture more nuances of whole-body movement. Unfortunately, many of those ideas were not yet supported by the current iteration of NetLogo AR, leading two children to switch design ideas. Still, the enhanced awareness of bodily interaction and the capability to work with larger spaces are directions we should continue to pursue.

On the other hand, the limitations of current technologies negatively affected the design sessions. Two children in our study had to switch to non-spatial design ideas after learning that we could not quickly implement their initial spatial ideas. Some children struggled with misaligned AR content and overheating devices, which shifted their preferred modalities. To achieve Eisenberg’s[20] vision of room-scale children-computer interaction, in which children control, design, and program digital artifacts throughout their physical surroundings, considerable design and development effort remains.


7 LIMITATIONS AND FUTURE WORK

We acknowledge that the limitations of our study necessitate future work. First, while we collected rich video data that captured the nuances of children-computer interaction across eight weeks, we worked with only seven children. The study size was consistent with previous participatory design studies (e.g.,[36, 68, 69]), and the close collaboration allowed us to better understand each child’s thoughts and needs. Nevertheless, future work could investigate whether similar engagement patterns persist with a larger audience through quantitative and/or controlled studies. Second, the study took place in an after-school program at a small school in a Midwestern U.S. city. The children’s socio-economic and cultural backgrounds might be relatively homogeneous, limiting our understanding of room-scale AR with children; future work could engage more diverse groups in other learning spaces. Third, because we designed different interaction methods for the plane-based and room-scale AR modalities, we inadvertently introduced confounding factors into their comparison, particularly concerning body movement patterns. A follow-up comparison study that uses the same interaction method for both modalities may be needed. Finally, our sessions aimed at designing room-scale AR experiences in open-ended contexts without a pre-scripted learning objective. This allowed us to capture children’s perceptions, engagement, and learning gains with the technology in a naturalistic setting, yet it limited our understanding of how to design NetLogo AR-integrated curricula for learning scenarios with more structured learning objectives.


8 CONCLUSION

As AR is increasingly used in formal and informal learning scenarios with children, it is important to understand the spatial learning opportunities afforded by device-based, room-scale AR technology. We presented NetLogo AR, a spatial AR authoring toolkit that represents a first attempt to leverage this novel technology for learning purposes. To better understand children’s perceptions of NetLogo AR, engagement in spatial activities, and preferences among different AR modalities, we closely collaborated with seven children over eight weeks. During the design sessions, we focused on supporting children’s spatial exploration and diverse design ideas. We found that children expressed great interest in room-scale AR, which fostered collaborative investigation of its capabilities and bodily spatial exploration. When children played with or designed spatial AR projects, they often engaged in spatial thinking activities. Based on our findings and related studies, we proposed a novel AR design framework for learning that focuses on users’ spatial movement and exploration behaviors. We identified spatial learning opportunities and current limitations of room-scale AR technology so that future designs can better engage children with spatial thinking activities. Our findings inform future designs of room-scale AR technology for children and learning.


ACKNOWLEDGMENTS

We acknowledge the support of our partners: Andy Rodgers, Frances Judd, and Bennett Day School. We thank Reed Stevens for material support. We also thank the children who participated in our study and design work. We appreciate the intellectual contributions of our anonymous reviewers, whose valuable, insightful, and actionable feedback helped us shape the paper into a better form.

Supplemental Material

Video Presentation (mp4, 193.7 MB)

References

1. 2023. 3D Parametric Room Representation with RoomPlan. https://machinelearning.apple.com/research/roomplan
2. Ismo Alakärppä, Elisa Jaakkola, Jani Väyrynen, and Jonna Häkkilä. 2017. Using nature elements in mobile AR for education with children. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’17). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3098279.3098547
3. Haifa Alhumaidan, Kathy Pui Ying Lo, and Andrew Selby. 2018. Co-designing with children a collaborative augmented reality book based on a primary school textbook. International Journal of Child-Computer Interaction 15 (March 2018), 24–36. https://doi.org/10.1016/j.ijcci.2017.11.005
4. Jorge Bacca-Acosta, Silvia Baldiris, Ramón Fabregat, Sabine Graf, and Kinshuk. 2014. Augmented Reality Trends in Education: A Systematic Review of Research and Applications. Educational Technology and Society 17 (Oct. 2014), 133–149.
5. Daniel Bambušek, Zdeněk Materna, Michal Kapinus, Vítězslav Beran, and Pavel Smrž. 2019. Combining Interactive Spatial Augmented Reality with Head-Mounted Display for End-User Collaborative Robot Programming. In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 1–8. https://doi.org/10.1109/RO-MAN46459.2019.8956315
6. Gilad Baruch, Zhuoyuan Chen, Afshin Dehghan, Tal Dimry, Yuri Feigin, Peter Fu, Thomas Gebauer, Brandon Joffe, Daniel Kurz, Arik Schwartz, and Elad Shulman. 2022. ARKitScenes: A Diverse Real-World Dataset For 3D Indoor Scene Understanding Using Mobile RGB-D Data. arXiv:2111.08897. http://arxiv.org/abs/2111.08897
7. Mathilde Bekker, Julie Beusmans, David Keyson, and Peter Lloyd. 2003. KidReporter: a user requirements gathering technique for designing with children. Interacting with Computers 15, 2 (2003), 187–202.
8. David Birchfield, Harvey Thornburg, M. Colleen Megowan-Romanowicz, Sarah Hatton, Brandon Mechtley, Igor Dolgov, and Winslow Burleson. 2008. Embodiment, multimodality, and composition: Convergent themes across HCI and education for mixed-reality learning environments. Advances in Human-Computer Interaction 2008 (2008).
9. Corey Brady, Melissa Gresalfi, Selena Steinberg, and Madison Knowe. 2020. Debugging for Art’s Sake: Beginning Programmers’ Debugging Activity in an Expressive Coding Context. International Society of the Learning Sciences (ISLS). https://repository.isls.org//handle/1/6319
10. Carlos Carbonell Carrera and Luis Alberto Bermejo Asensio. 2017. Augmented reality as a digital teaching environment to develop spatial thinking. Cartography and Geographic Information Science 44, 3 (2017), 259–270.
11. John Chen and Uri J. Wilensky. 2021. Turtle Universe. https://turtlesim.com/products/turtle-universe/
12. John Chen, Lexie Zhao, Michael Horn, and Uri Wilensky. 2023. The Pocketworld Playground: Engaging Online, Out-of-School Learners with Agent-based Programming. In Proceedings of the ACM Interaction Design and Children (IDC).
13. Ashley Colley, Jacob Thebault-Spieker, Allen Yilun Lin, Donald Degraen, Benjamin Fischman, Jonna Häkkilä, Kate Kuehl, Valentina Nisi, Nuno Jardim Nunes, Nina Wenig, et al. 2017. The geography of Pokémon GO: beneficial and problematic effects on places and movement. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 1179–1192.
14. Juliet M. Corbin and Anselm Strauss. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology 13, 1 (1990), 3–21.
15. National Research Council. 2005. Learning to Think Spatially.
16. Francisco del Cerro Velázquez and Ginés Morales Méndez. 2021. Application in augmented reality for learning mathematical functions: A study for the development of spatial intelligence in secondary education students. Mathematics 9, 4 (2021), 369.
17. Andreas Dengel, Muhammad Zahid Iqbal, Silke Grafe, and Eleni Mangina. 2022. A Review on Augmented Reality Authoring Toolkits for Education. Frontiers in Virtual Reality 3 (2022). https://www.frontiersin.org/articles/10.3389/frvir.2022.798032
18. Allison Druin. 2002. The role of children in the design of new technology. Behaviour and Information Technology 21, 1 (2002), 1–25.
19. Matt Dunleavy, Chris Dede, and Rebecca Mitchell. 2009. Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning. Journal of Science Education and Technology 18, 1 (Feb. 2009), 7–22. https://doi.org/10.1007/s10956-008-9119-1
20. Michael Eisenberg. 2003. Mindstuff: Educational technology beyond the computer. Convergence 9, 2 (2003), 29–53.
21. Noel Enyedy, Joshua Danish, Girlie Delacruz, and Melissa Kumar. 2013. Learning physics through play in an augmented reality environment. International Journal of Computer-Supported Collaborative Learning 7. https://doi.org/10.1007/s11412-012-9150-3
22. Emily K Farran, Mark Blades, Kerry D Hudson, Pascal Sockeel, and Yannick Courbois. 2022. Spatial exploration strategies in childhood; exploration behaviours are predictive of navigation success. Cognitive Development 61 (2022), 101153.
23. Christopher Frauenberger, Judith Good, Geraldine Fitzpatrick, and Ole Sejer Iversen. 2015. In pursuit of rigour and accountability in participatory design. International Journal of Human-Computer Studies 74 (2015), 93–106. https://doi.org/10.1016/j.ijhcs.2014.09.004
24. Terrell Glenn, Ananya Ipsita, Caleb Carithers, Kylie Peppler, and Karthik Ramani. 2020. StoryMakAR: Bringing Stories to Life With An Augmented Reality & Physical Prototyping Toolkit for Youth. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu, HI, USA, 1–14. https://doi.org/10.1145/3313831.3376790
25. Mona Leigh Guha, Allison Druin, and Jerry Alan Fails. 2013. Cooperative Inquiry revisited: Reflections of the past and guidelines for the future of intergenerational co-design. International Journal of Child-Computer Interaction 1, 1 (2013), 14–23.
26. Jeonghye Han, Miheon Jo, Eunja Hyun, and Hyo-jeong So. 2015. Examining young children’s perception toward augmented reality-infused dramatic play. Educational Technology Research and Development 63, 3 (June 2015), 455–474. https://doi.org/10.1007/s11423-015-9374-9
27. Paul Hine, Loza Tadesse Mamo, and Narcis Pares. 2022. AR Magic Lantern: Group-Based Co-Located Augmentation Based on the World-as-Support AR Paradigm. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 207, 5 pages. https://doi.org/10.1145/3491101.3519918
28. Tien-Chi Huang, Chia-Chen Chen, and Yu-Wen Chou. 2016. Animating eco-education: To see, feel, and discover in an augmented reality-based experiential learning environment. Computers & Education 96 (May 2016), 72–82. https://doi.org/10.1016/j.compedu.2016.02.008
29. Ole Sejer Iversen, Rachel Charlotte Smith, and Christian Dindler. 2017. Child as protagonist: Expanding the role of children in participatory design. In Proceedings of the 2017 Conference on Interaction Design and Children. 27–37.
30. Danielle Keifert, Christine Lee, Maggie Dahn, Randy Illum, David DeLiema, Noel Enyedy, and Joshua Danish. 2017. Agency, Embodiment, & Affect During Play in a Mixed-Reality Learning Environment. In Proceedings of the 2017 Conference on Interaction Design and Children. ACM, Stanford, CA, USA, 268–277. https://doi.org/10.1145/3078072.3079731
31. Mina Khan, Fernando Trujano, Ashris Choudhury, and Pattie Maes. 2018. Mathland: Playful Mathematical Learning in Mixed Reality. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3170427.3186499
32. Matjaž Kljun, Vladimir Geroimenko, and Klen Čopič Pucihar. 2020. Augmented Reality in Education: Current Status and Advancement of the Field. In Augmented Reality in Education: A New Technology for Teaching and Learning, Vladimir Geroimenko (Ed.). Springer International Publishing, Cham, 3–21. https://doi.org/10.1007/978-3-030-42156-4_1
33. Eric Klopfer and Kurt Squire. 2008. Environmental Detectives—the development of an augmented reality platform for environmental simulations. Educational Technology Research and Development 56, 2 (April 2008), 203–228. https://doi.org/10.1007/s11423-007-9037-6
34. Eric Klopfer, Susan Yoon, and Luz Rivas. 2004. Comparative analysis of Palm and wearable computers for Participatory Simulations. Journal of Computer Assisted Learning 20, 5 (2004), 347–359. https://doi.org/10.1111/j.1365-2729.2004.00094.x
35. Veronika Krauß, Alexander Boden, Leif Oppermann, and René Reiners. 2021. Current practices, challenges, and design implications for collaborative AR/VR application development. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
36. Kung Jin Lee, Wendy Roldan, Tian Qi Zhu, Harkiran Kaur Saluja, Sungmin Na, Britnie Chin, Yilin Zeng, Jin Ha Lee, and Jason Yip. 2021. The Show Must Go On: A Conceptual Model of Conducting Synchronous Participatory Design With Children Online. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–16. https://doi.org/10.1145/3411764.3445715
37. Simon N. Leonard and Robert N. Fitzgerald. 2018. Holographic learning: A mixed reality trial of Microsoft HoloLens in an Australian secondary school. Research in Learning Technology 26 (2018).
38. Mayank Loonker, Sophia Ppali, Rocio von Jungenfeld, Christos Efstratiou, and Alexandra Covaci. 2022. “I was Holding a Magic Box”: Investigating the Effects of Private and Projected Displays in Outdoor Heritage Walks. In Proceedings of the 2022 ACM Designing Interactive Systems Conference (DIS ’22). Association for Computing Machinery, New York, NY, USA, 1565–1580. https://doi.org/10.1145/3532106.3533468
39. Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Karl-Emil Kjær Bilstrup, Marie-Louise Stisen Kjerstein Sørensen, and Marianne Graves Petersen. 2022. ExposAR: Bringing Augmented Reality to the Computational Thinking Agenda through a Collaborative Authoring Tool. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans, LA, USA, 1–14. https://doi.org/10.1145/3491102.3517636
40. Laura Malinverni, Julian Maya, Marie-Monique Schaper, and Narcis Pares. 2017. The World-as-Support: Embodied Exploration, Understanding and Meaning-Making of the Augmented World. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). Association for Computing Machinery, New York, NY, USA, 5132–5144. https://doi.org/10.1145/3025453.3025955
41. Laura Malinverni, Cristina Valero, Marie-Monique Schaper, and Narcis Pares. 2018. A conceptual framework to compare two paradigms of augmented and mixed reality experiences. In Proceedings of the 17th ACM Conference on Interaction Design and Children. 7–18.
42. Seungjae Oh, Hyo-Jeong So, and Matthew Gaydos. 2018. Hybrid Augmented Reality for Participatory Learning: The Hidden Efficacy of Multi-User Game-Based Simulation. IEEE Transactions on Learning Technologies 11, 1 (Jan. 2018), 115–127. https://doi.org/10.1109/TLT.2017.2750673
43. Seymour Papert. 1980. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books.
44. Seymour Papert and Idit Harel. 1991. Situating constructionism. Constructionism 36, 2 (1991), 1–11.
45. Sara Price and Yvonne Rogers. 2004. Let’s get physical: The learning benefits of interacting in digitally augmented physical spaces. Computers & Education 43, 1-2 (2004), 137–151.
46. Xun Qian, Fengming He, Xiyun Hu, Tianyi Wang, Ananya Ipsita, and Karthik Ramani. 2022. ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans, LA, USA, 1–18. https://doi.org/10.1145/3491102.3517665
47. Iulian Radu. 2012. Why should my students use AR? A comparative review of the educational impacts of augmented-reality. In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 313–314. https://doi.org/10.1109/ISMAR.2012.6402590
48. Iulian Radu and Blair MacIntyre. 2009. Augmented-reality scratch: a children’s authoring environment for augmented-reality experiences. In Proceedings of the 8th International Conference on Interaction Design and Children. ACM, Como, Italy, 210–213. https://doi.org/10.1145/1551788.1551831
49. Iulian Radu, Blair MacIntyre, and Stella Lourenco. 2016. Comparing children’s crosshair and finger interactions in handheld augmented reality: Relationships between usability and child development. In Proceedings of the 15th International Conference on Interaction Design and Children. 288–298.
50. Kay E Ramey, Reed Stevens, and David H Uttal. 2020. In-FUSE-ing STEAM learning with spatial reasoning: Distributed spatial sensemaking in school-based making activities. Journal of Educational Psychology 112, 3 (2020), 466.
51. Dmitry Resnyansky, Emin İbili, and Mark Billinghurst. 2018. The Potential of Augmented Reality for Computer Science Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). 350–356. https://doi.org/10.1109/TALE.2018.8615331
52. Brett E. Shelton and Nicholas R. Hedley. 2004. Exploring a cognitive basis for learning spatial relationships with augmented reality. Technology, Instruction, Cognition and Learning 1, 4 (2004), 323.
53. Manoela Silva, Rafael Roberto, Iulian Radu, Patricia Cavalcante, Bertrand Schneider, and Veronica Teichrieb. 2023. Development of Design Principles for AR Authoring Tools for Education Based on Teacher’s Perspectives. IEEE Transactions on Learning Technologies (2023).
54. Abbey Singh, Matthew Peachey, Ramanpreet Kaur, Peter Haltner, Shannon Frederick, Mohammed Alnusayri, David Choco Manco, Colton Morris, Shannon Brownlee, Joseph Malloch, and Derek Reilly. 2022. Supporting Spatial Thinking in Augmented Reality Narrative: A Field Study. In Interactive Storytelling (Lecture Notes in Computer Science), Mirjam Vosmeer and Lissa Holloway-Attaway (Eds.). Springer International Publishing, Cham, 270–291. https://doi.org/10.1007/978-3-031-22298-6_17
55. Anastasios Theodoropoulos and George Lepouras. 2021. Augmented Reality and programming education: A systematic review. International Journal of Child-Computer Interaction 30 (Dec. 2021), 100335. https://doi.org/10.1016/j.ijcci.2021.100335
56. Seth Tisue and Uri Wilensky. 2004. NetLogo: A simple environment for modeling complexity. In International Conference on Complex Systems, Vol. 21. 16–21.
57. David H Uttal and Cheryl A Cohen. 2012. Spatial thinking and STEM education: When, why, and how? In Psychology of Learning and Motivation, Vol. 57. Elsevier, 147–181.
58. David H. Uttal and Kelly J. Sheehan. 2014. The development of children’s understanding of maps and models: A prospective cognition perspective. Journal of Cognitive Education and Psychology 13, 2 (2014), 188–200.
59. Ana Villanueva, Zhengzhe Zhu, Ziyi Liu, Kylie Peppler, Thomas Redick, and Karthik Ramani. 2020. Meta-AR-App: An Authoring Platform for Collaborative Augmented Reality in STEM Classrooms. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
60. Stephen Voida, Mark Podlaseck, Rick Kjeldsen, and Claudio Pinhanez. 2005. A study on the manipulation of 2D objects in a projector/camera-based augmented reality environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 611–620.
61. Greg Walsh, Elizabeth Foss, Jason Yip, and Allison Druin. 2013. FACIT PD: A framework for analysis and creation of intergenerational techniques for participatory design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2470654.2481400
62. Zeyu Wang, Cuong Nguyen, Paul Asente, and Julie Dorsey. 2021. DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–12.
63. David Weintrop, Elham Beheshti, Michael Horn, Kai Orton, Kemi Jona, Laura Trouille, and Uri Wilensky. 2016. Defining Computational Thinking for Mathematics and Science Classrooms. Journal of Science Education and Technology 25, 1 (Feb. 2016), 127–147. https://doi.org/10.1007/s10956-015-9581-5
64. Uri Wilensky. 1997. NetLogo Models Library: Ants. https://ccl.northwestern.edu/netlogo/models/Ants
65. Uri Wilensky and Mitchel Resnick. 1999. Thinking in levels: A dynamic systems approach to making sense of the world. Journal of Science Education and Technology 8, 1 (1999), 3–19.
66. Uri J. Wilensky. 1999. NetLogo. http://ccl.northwestern.edu/netlogo/
67. Uri J. Wilensky and Walter Stroup. 1999. Learning through participatory simulations: Network-based design for systems learning in classrooms. International Society of the Learning Sciences (ISLS).
68. Julia Woodward, Feben Alemu, Natalia E. López Adames, Lisa Anthony, Jason C. Yip, and Jaime Ruiz. 2022. “It Would Be Cool to Get Stampeded by Dinosaurs”: Analyzing Children’s Conceptual Model of AR Headsets Through Co-Design. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3491102.3501979
69. Julia Woodward, Zari McFadden, Nicole Shiver, Amir Ben-Hayon, Jason C. Yip, and Lisa Anthony. 2018. Using co-design to examine how children conceptualize intelligent interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
70. Hsin-Kai Wu, Silvia Wen-Yu Lee, Hsin-Yi Chang, and Jyh-Chong Liang. 2013. Current status, opportunities and challenges of augmented reality in education. Computers & Education 62 (March 2013), 41–49. https://doi.org/10.1016/j.compedu.2012.10.024
71. Rabia M. Yilmaz. 2016. Educational magic toys developed with augmented reality technology for early childhood education. Computers in Human Behavior 54 (Jan. 2016), 240–248. https://doi.org/10.1016/j.chb.2015.07.040
72. Jason C. Yip, Kiley Sobel, Caroline Pitt, Kung Jin Lee, Sijin Chen, Kari Nasu, and Laura R. Pina. 2017. Examining Adult-Child Interactions in Intergenerational Participatory Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). Association for Computing Machinery, New York, NY, USA, 5742–5754. https://doi.org/10.1145/3025453.3025787
