DOI: 10.1145/3613904.3642145 · CHI Conference Proceedings
Research Article · Open Access · Artifacts Available / v1.1

WAVE: Anticipatory Movement Visualization for VR Dancing

Published: 11 May 2024

Abstract

Dance games are one of the most popular game genres in Virtual Reality (VR), and active dance communities have emerged on social VR platforms such as VR Chat. However, effective instruction of dancing in VR or through other computerized means remains an unsolved human-computer interaction problem. Existing approaches either only instruct movements partially, abstracting away nuances, or require learning and memorizing symbolic notation. In contrast, we investigate how realistic, full-body movements designed by a professional choreographer can be instructed on the fly, without prior learning or memorization. Towards this end, we describe the design and evaluation of WAVE, a novel anticipatory movement visualization technique where the user joins a group of dancers performing the choreography with different time offsets, similar to spectators making waves in sports events. In our user study (N=36), the participants more accurately followed a choreography using WAVE, compared to following a single model dancer.

Figure 1: Our proposed VR dance visualization technique. The user sees several model dancers with different time offsets, effectively becoming part of a crowd "making waves". The user can, therefore, mimic the moves of nearby dancers and anticipate upcoming movements. The image is a 3rd-person mixed-reality visualization with the user added inside the virtual world. See the supplementary video for a 1st-person view captured directly from a VR headset.


1 INTRODUCTION

Dance and rhythm games have emerged as one of the most popular genres in consumer Virtual Reality (VR). A prime example is Beat Saber [14], the best-selling VR title of all time [5], in which the player wields dual lightsabers to slice targets in rhythm. Dance has also organically emerged on social VR platforms such as VRChat [48] in various user-driven communities, where people enjoy social dancing and performance [34].

Despite the success of VR dancing, a fundamental problem remains: dancing can be difficult, and it is challenging to instruct without an expert human teacher. In this paper, we propose and evaluate a new solution to this problem, focusing on a setting where the user follows a choreography in VR on the fly, without prior learning or memorization.

The design challenge is how to present body movement trajectories in a way the user can easily understand and follow. Miller [30] describes the problem as: “we don’t currently have a mechanism for streaming kinesthetic data into the human proprioceptive system in the same way that we stream audio and visual content” (p. 103).

Expecting a user to simply copy a model is not effective, as the user’s reaction time is limited and the human body has considerable inertia, an issue that Miller calls kinesthetic lag [30, p. 121]. Rather, skilled motor control generally requires some way of anticipating the required movements ahead of time [2, 40, 49].

Prior work typically uses some symbolic abstraction of the movement that allows for an anticipatory timeline presentation, e.g., the timeline of arrows indicating dance pad footstep sequences in Dance Dance Revolution [22] and the sparse key poses of Just Dance [46]. In some cases, such symbolic presentation is augmented with a model dancer demonstrating full-body movements for the player to mimic (e.g., Just Dance and Dance Central [43]). Dance Central additionally includes a practice mode that teaches the player to read the symbolic notation and execute the signified movements. However, dance game players may find such practice modes tedious or prefer not to switch modes during a social dance game session [30]. What is missing in dance games, and more generally in VR dancing, is a way to instruct realistic, non-abstracted choreography in real time, without a separate practice mode.

Contribution: We propose and evaluate WAVE, a new solution to the dance instruction problem. WAVE is a novel anticipatory movement visualization technique, illustrated in Figure 1. The core idea is that the user becomes part of a crowd of virtual dancers performing the instructed choreography with different time offsets, similar to spectators making waves in sports events. In dance pedagogy terms, we use the choreographic device of “canon” [16]. Our evaluation data (N=36) indicates that WAVE allows users to anticipate movements as they propagate through multiple virtual dancers, improving users’ accuracy in following the choreography in comparison to following a single model dancer.


2 BACKGROUND AND RELATED WORK

Today, there are multiple different solutions for instructing movement and dance in virtual environments [8]. Below, we review both dance games and non-game dance learning applications, dividing the discussion into non-VR and VR approaches. For a broader overview of HCI in the context of dance performance and related creative processes, we refer the reader to the review by Zhou et al. [53].

A popular approach for using computers to instruct dance is to adapt the mimetic method, an established dance-teaching approach [35]. Most applications using the mimetic method focus on teaching individual moves rather than whole choreographic pieces. Here, we have focused on studies using technologies similar to those available in Oculus Quest 2, the platform used in our application. These technologies include sound, visuals, and motion tracking of the user’s hands and head.

Chan et al. [7] use a motion capture suit for dance training. The student receives three types of feedback. Firstly, the user’s pose, captured by the motion capture suit, is shown in real time next to an animated 3D model performing the desired movements. Secondly, a report displays the joints in which the player’s movements were incorrect. Lastly, a slow-motion replay allows the user to review their performance.

When focused on teaching specific genres of dance, rather than dance movement generally, studies have tended to follow a similar pattern [18, 19, 51]. Teachers’ movements are recorded using motion capture, users try to copy those movements, and then the system evaluates their performance. Some studies of this kind have had promising results using Microsoft Kinect for real-time evaluation of full-body movement [4, 37] and gamified movement instruction using Labanotation [36].

The teaching approaches above rely heavily on information the user gets after performing; in effect, it is assumed that the user will learn gradually through multiple repetitions. In contrast, we strive to provide foresight of the desired movements so that players have a possibility of succeeding on the first try.

Optimizing instruction for the first try is also what commercial dance games appear to aim at. This is reasonable because optimizing the first-time user experience is of high importance in games [29, 33] and players have been found eager to skip tutorials [9]. However, most games simplify the instruction problem by specifying choreography only partially, abstracting away nuances. For instance, Dance Dance Revolution’s [22] arrows only specify footsteps, and the key poses of Just Dance [46] do not indicate how exactly to transition between them. While there is evidence that dance games can teach dance skills [27], instructing realistic and nuanced dancing has remained non-trivial, requiring added complexity like the separate practice mode of Dance Central [43].

2.1 Dance Instruction in VR

VR has proven to be useful for instructing dance. For example, hip-hop students appreciated the way VR dance materials simplified movements and made them clear and easy to follow [47]. Similarly, Eaves et al. [12] found that the information provided to users should not be too detailed: feedback based on only four tracked joints worked better than twelve, in that users were unable to extract the relevant information when presented with too much data.

Some research indicates that learning dance with a partner can be beneficial, elevating users’ interest in learning dance [51] and improving performance [19]. It is thus no surprise that virtual dance partners are used in multiple VR dance studies. Kirakosian et al. [21] used a virtual partner for immersive, real-time person-to-3D-character dance training. Their study did not measure how effective the method was for learning, but the users rated their enjoyment as high and most of them anticipated being more confident to lead someone in real life. Senecal et al. [41] similarly used virtual partners for salsa dance instruction and found that the movement patterns of users without prior dance experience became more similar to those of users with dance experience after using their system. They measured movement patterns using a number of features, including several specifically designed to capture core technical elements of salsa. Studies have also explored using multiple virtual model dancers to support dance instruction, as in the work of Kico et al. [20]. Here, we extend their work by having the virtual model dancers perform in canon instead of unison to provide anticipation of the next movements.


3 DESIGN

The WAVE prototype evaluated in this study has three lines of dancers, including dancers positioned to the left and right of the user, as shown in Figure 1. Naturally, this is only one of many possible configurations of dancers, and Figure 2 shows alternatives tested during development. Below, we explain our design process.

3.1 Problem Definition

Based on our review of related work and its limitations, we defined two key requirements:

(1)

The system can instruct choreography to the same level of full-body detail as can be instructed outside of VR (i.e., in naturalistic dance settings like the studio, stage, or street), instead of relying on symbolic and/or abstracted dance notation.

(2)

The user can follow the instructions on-the-fly, instead of having to first engage in a separate learning or memorization phase.

3.2 Design Principles

We derived design principles to help us satisfy the above requirements. Regarding the first requirement, we hypothesized that we should focus on instructing movements through demonstration. Demonstration is prevalent in dance teaching, and even most non-dancers have engaged in mimicking demonstrated movements at least occasionally, e.g., during childhood. From this point of view, it is natural to focus on using the moving body to instruct the moving body, i.e., using animated dancer characters as a core visualization element. With earlier screen-based systems, choreography was typically limited by the user needing to face forward to see the screen. However, dance choreography generally involves moving and facing in multiple directions, which called for placing model dancers in multiple positions, rather than only immediately in front of the user.

To meet the second requirement, the user should be provided with a capability to anticipate/predict the upcoming movements. Executing the movement and timing demonstrated by a model in real-time is impossible; human reaction time is limited and the body has considerable inertia, so movements need to be planned and initiated ahead of time.

These design principles quite naturally lead to the core WAVE design idea of multiple dancers performing at different time offsets, which provides access to full-fidelity demonstrations of complex full-body movements with enough time for users to anticipate and then execute those movements at the target times.
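As a concrete illustration of this core idea, the per-row time offsets can be sketched as follows. This is a minimal sketch rather than our actual implementation: the function names are illustrative, the 0.7-second per-row spacing matches the delay of the tested WAVE version (Section 5.2), and wrapping the clip with a modulo assumes looped playback.

```python
import numpy as np

def dancer_offsets(n_rows: int, row_spacing_s: float = 0.7) -> np.ndarray:
    """Time offsets (in seconds) for successive rows of dancers.

    Row 0 is level with the user (offset 0); each row further ahead of
    the user performs the choreography earlier by one more spacing step.
    """
    return np.arange(n_rows) * row_spacing_s

def animation_time(global_t: float, offset_s: float, clip_len_s: float) -> float:
    """Playback time of one dancer's animation, wrapped to the clip length."""
    return (global_t + offset_s) % clip_len_s
```

Under these assumptions, a dancer three rows ahead of the user would perform each movement 2.1 seconds early, giving the user that much time to anticipate it.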

3.3 Dance Style and Content

Our study used an 84-second contemporary dance choreography, designed for beginners. The choreography was designed for us by a professional contemporary dance teacher with over 20 years of experience teaching students of different levels and creating choreography for them. We chose contemporary dance as our focus, as it is relatively underexplored in dance games, compared to styles like hip-hop or other forms of street dance.

3.4 Formations of Dancers

We considered the formation using straight parallel lines most promising for two main reasons. First, with straight parallel lines to the left and right, dancers could see the upcoming moves even when turned sideways. Second, with the formations using curved lines, some testers found themselves accidentally following movements too early, possibly because the far-future dancers were more directly visible.

Note that our choreographies have the user mostly facing forward and only occasionally turning around and sideways. We do not expect our chosen formation of virtual dancers to be effective for choreography in which the dancer turns to face the back; future work will need to address this limitation, perhaps having a wave coming towards the user from each direction. However, even our present design is more flexible than traditional dance visualizations requiring the user to face a screen.

3.5 VR Technology

We targeted our system for the Oculus Quest 2 standalone headset, both because it is prevalent on the VR market [24] and because it supports accurate positional tracking of the user’s head and hands. The Quest 2 does not support tracking the user’s feet, but we deemed this an acceptable limitation, as the platform nevertheless has multiple dance and rhythm games, and our test choreography also largely focused on upper body movements.


4 EVALUATION

We conducted a quantitative evaluation (N=36) of our WAVE prototype, comparing against a baseline visualization with a single model dancer showing the movements in real-time. The two compared visualizations are shown in Figure 3.

Figure 3: Screenshots of the two visualizations compared in the user study, taken from the user’s perspective using an Oculus Quest 2 VR headset. A) WAVE with three lines of dancers. B) Baseline with a single model dancer whose movements the player should copy.

4.1 Study design

We used a within-subjects design with two experimental conditions (WAVE & baseline), with the visualization type as the single categorical independent variable. Each participant danced the same 84-second choreography in both conditions. The order of experimental conditions was counterbalanced to mitigate the inevitable order effect caused by participants remembering at least parts of the choreography.

4.2 Hypotheses

We tested two hypotheses about the suitability of the proposed WAVE visualization technique for instructing dance using VR:

H1:

WAVE allows players to perform choreography more accurately than the baseline. As discussed above, following choreographed movements requires the user to be able to anticipate upcoming movements, which WAVE is designed to facilitate.

H2:

WAVE elicits higher subjective assessment of being able to perform the choreography correctly.

4.3 Sample Size

4.4 Participants

36 adult volunteers were recruited among university students and staff, using social media and a testing stand on campus. 17 participants were men, 18 women, and 1 preferred not to specify their gender. Mean participant age was 26 (SD = 5.5, min 20, max 43). The participants were somewhat experienced with VR (29 had tried VR before and 5 owned a VR device of their own).

4.5 Procedure

After the video instructions, the participants put on the VR headset. During the experiment, the facilitator watched the user’s view on the laptop, allowing the facilitator to help the user get into position, if needed.

The VR software prompted the participant to input an ID provided by the facilitator (this ID was not input by the facilitator to avoid having to switch the headset between persons, for hygienic reasons). The participant was then asked to calibrate their height by standing straight and clicking on a virtual button; the height was used to scale the virtual dancers to make the visualizations more appropriate for each participant’s body. The system displayed a text instruction to click a “Done” virtual button after completing the calibration.
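The height-based scaling amounts to a single ratio, sketched below. The 1.75 m reference model height and the function name are illustrative assumptions, not the values used in our prototype:

```python
def dancer_scale(participant_height_m: float, model_height_m: float = 1.75) -> float:
    """Uniform scale factor applied to the virtual dancer models.

    model_height_m is an illustrative reference height for the
    unscaled dancer model, not the value used in the prototype.
    """
    return participant_height_m / model_height_m
```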

The participant then danced in both experimental conditions. At the start of each condition, the system instructed the participant to move to a marked position. Once the user was in the correct position, the system prompted the user to click a virtual button to start the choreography. After performing the choreography, the participant filled in the per-condition questionnaire.

After completing both experimental conditions, the participant removed the VR headset and filled in the final questionnaire.

4.6 Data Collection

The following data was collected:

Demographics: age, gender, VR experience (has used before? owns a device?), dance experience (years of practice?), experience with dance games (total estimated hours played? which games?).

During dancing: the rotation and translation of the player’s head and hands for each game frame were tracked using the VR headset and hand trackers. This data was collected to allow for quantitative comparison between the player’s movements and the desired choreography (see Section 4.7).

At the end of each experimental condition: Users were instructed to indicate how they felt about their performance using two sliders: “I felt I was able to perform the choreography correctly” and “I felt I was able to time my movements correctly”. The sliders used a range from 0% to 100% and the order of the two items was randomized for each participant.

Final questionnaire: The participants were asked which of the two game versions was their favourite and to give justification for their choice. They were also asked for any additional comments or feedback.

Our primary interest in this study was to test whether anticipatory visualizations support users in accurately following the model choreography. While building the prototype, we observed that slow movements are relatively easy to follow, even without extra visual aids. The first part of the choreography used in this experiment only included slow movements, which are less appropriate for testing our hypothesis. Further, first-time users may need time to get used to the visualization and position themselves. For these reasons, we excluded the very slow start of the choreography from our analyses. After this exclusion, 47 seconds of data remained for each participant.

4.7 Methods: How to Measure Movement Accuracy?

Our goal was for users’ movements to accurately reflect the provided choreography, so we considered high error between the target movement and the user’s actual movement as indicating low accuracy. We measured error in two different ways:

Position-based movement error, the mean Euclidean distance between the tracked head and hand positions and their choreographed target positions, measured every frame. In the WAVE condition, the user’s goal is to move as the last dancer of the middle line, as shown in Figure 1 (communicated to the user as described in Section 4.5). Thus, the target timing corresponds to the dancers on the user’s left and right. In the baseline condition, the target timing corresponds to that of the single model dancer.

Direction-based movement error, the mean cosine distance between the tracked and choreographed movement directions of the head and hands (i.e., their per-frame velocities), measured every frame.
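Both error measures can be sketched in a few lines. This is an illustrative reimplementation, not our exact analysis code; the array shapes and the eps guard against stationary frames are our own choices:

```python
import numpy as np

def position_error(tracked: np.ndarray, target: np.ndarray) -> float:
    """Mean Euclidean distance between tracked and choreographed positions.

    Both arrays have shape (frames, joints, 3); joints = head and hands.
    """
    return float(np.linalg.norm(tracked - target, axis=-1).mean())

def direction_error(tracked: np.ndarray, target: np.ndarray,
                    eps: float = 1e-8) -> float:
    """Mean cosine distance between per-frame movement directions.

    Movement directions are finite-difference velocities; eps guards
    against division by zero on (nearly) stationary frames.
    """
    vt = np.diff(tracked, axis=0)  # tracked per-frame velocities
    vg = np.diff(target, axis=0)   # choreographed per-frame velocities
    cos = (vt * vg).sum(axis=-1) / (
        np.linalg.norm(vt, axis=-1) * np.linalg.norm(vg, axis=-1) + eps)
    return float((1.0 - cos).mean())
```

Identical trajectories yield zero error under both measures, while movements in exactly opposite directions yield the maximum direction-based error of 2.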

4.8 Results

Position-based Movement Accuracy. A paired-samples t-test indicated that WAVE (M = 0.47, SD = 0.08) resulted in statistically significantly lower error than the baseline (M = 0.50, SD = 0.08), t = −2.18. The effect size, as measured by Cohen’s d, was d = 0.36, indicating a small effect. Boxplots of the data are shown in Figure 4 A.

Direction-based Movement Accuracy. A paired-samples t-test indicated a statistically significant difference in direction-based movement error, with WAVE (M = 0.40, SD = 0.04) resulting in lower error than the baseline (M = 0.43, SD = 0.04), t = −4.87. The effect size, as measured by Cohen’s d, was d = 0.81, indicating a large effect. Boxplots are shown in Figure 4 B.
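For reference, the paired t statistic and effect size can be computed as sketched below. Note that this sketch uses the d_z variant of Cohen’s d (mean difference divided by the standard deviation of the differences), which is one plausible choice for a within-subjects design:

```python
import numpy as np

def paired_t_and_d(a, b):
    """Paired t statistic and Cohen's d for within-subjects samples a, b.

    Uses the d_z variant: mean difference divided by the sample
    standard deviation of the differences.
    """
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    sd = diff.std(ddof=1)  # sample standard deviation of differences
    t = diff.mean() / (sd / np.sqrt(len(diff)))
    d = diff.mean() / sd
    return t, d
```

With the WAVE condition passed as the first argument, a lower WAVE error produces a negative t, matching the sign convention of the results above.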

Subjective Performance. We used two virtual sliders at the end of each experimental condition to collect data about the participants’ subjective assessment of movement and timing accuracy. Paired-samples t-tests indicated no statistically significant differences between WAVE (Movement: M = 47.39, SD = 22.87; Timing: M = 48.42, SD = 23.03) and the baseline (Movement: M = 46.05, SD = 21.76; Timing: M = 48.05, SD = 22.65); Movement: t = 0.40; Timing: t = 0.10.

Preferred Visualization. In the final questionnaire, participants were asked which approach they preferred. 20 participants preferred the WAVE approach, while 16 preferred the baseline. We observed a clear order effect: 78% of the users preferred the approach they tested later.

Additional analyses. The effect of WAVE on anticipating upcoming movements is visualized in Figure 5. The figure shows how the direction-based movement error changes when the choreography is shifted in time. With the baseline condition, error is minimized with a shift of 0.5 seconds, indicating that the users follow the choreography 0.5 seconds late, on average. With WAVE, users perform slightly ahead of the target time, on average.
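The time-shift analysis behind this figure can be sketched as follows. The frame-alignment convention and the function name are illustrative assumptions; the direction-based error is computed as in Section 4.7:

```python
import numpy as np

def shifted_direction_error(tracked: np.ndarray, target: np.ndarray,
                            shift: int, eps: float = 1e-8) -> float:
    """Direction-based error after shifting the reference choreography.

    A positive shift aligns the user's frame t with the choreography's
    frame t - shift, i.e., it models the user performing `shift` frames
    late; a negative shift models performing early.
    """
    if shift > 0:
        tracked, target = tracked[shift:], target[:-shift]
    elif shift < 0:
        tracked, target = tracked[:shift], target[-shift:]
    vt = np.diff(tracked, axis=0)  # tracked per-frame velocities
    vg = np.diff(target, axis=0)   # choreographed per-frame velocities
    cos = (vt * vg).sum(axis=-1) / (
        np.linalg.norm(vt, axis=-1) * np.linalg.norm(vg, axis=-1) + eps)
    return float((1.0 - cos).mean())
```

Sweeping `shift` over, e.g., one second's worth of frames in each direction and locating the minimum-error shift estimates the average lag (baseline) or lead (WAVE) of the participants.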

Figure 4: Boxplots of (A) position-based movement error and (B) direction-based movement error when compared to the reference choreography.

Figure 5: Mean and standard deviation of movement direction error when shifting the reference choreography in time. In the baseline condition, the error is minimized at a shift of approximately 0.5 seconds, i.e., the participants performed the choreography half a second late, on average. In the WAVE condition, the participants moved slightly ahead of the ideal time.


5 DISCUSSION

5.1 Summary of Results

Our results suggest that WAVE provides a potentially useful visualization approach for VR dance designers. Supporting H1, both the position-based and direction-based movement error analyses indicate that users can match the choreography better when using WAVE than when using the baseline visualization (Figure 4). The effect is small for position-based movement error, but large for direction-based movement error. The majority of participants (20) also preferred WAVE over the baseline. The subjective performance ratings are inconclusive, however, providing no support for H2. This should be investigated in future work, although it may be that the subjective data is simply more noisy than the objective movement-based measures.

5.2 Dancing Ahead of Time

Fig. 5 clearly shows that users are late in following choreography with the baseline visualization, as expected. More surprisingly, with WAVE, the users perform the choreography slightly ahead of time.

We hypothesize two explanations for this. First, in both the user study and the initial testing of different dancer configurations, we noticed that users occasionally tried to follow the “future” dancers instead of the dancers closest to them, which affects Fig. 5 to some degree. We hypothesize that this is an artefact of the user study focusing on first-time use; in our own experience, one may at first instinctively copy the “future” dancers when they make larger and faster movements that steal one’s attention.

Second, it may be that at least some users synchronize their movements with the dancer directly in front of them, instead of the dancers to the left and right, which only become the focus of attention when the choreography requires one to turn sideways. We did not explicitly ask our participants to synchronize with the dancers to the left and right, or to add a small delay in relation to the dancer in front of them. In the tested WAVE version, the correct delay would be 0.7 seconds.

Here, it should be noted that different choreographies and movements might require different formations of virtual dancers, e.g., in all directions around the user.

Making users feel competent is important to facilitate enjoyment and intrinsic motivation [6, 31, 39, 45]. In addition to providing encouraging feedback, another way to support competence could be by manipulating the user’s perception of their own movements so that they appear more capable, e.g., through exaggerated jump height and flexibility [15, 17, 28]. In our system, the user does not have a visual avatar except for small indicators of their current hand positions. In future work, an avatar could be visible in a mirror, which would reflect the real-life experience of many dance studios.

5.3 Wider Applicability

Presently, WAVE is designed for a single user. However, we could imagine applying WAVE in a setting like social VR, allowing dancers to emit their movements as waves that other users can try to follow. This might also mitigate the latency problems inherent in social VR dancing, for example, by matching the wave propagation time between two users to one musical bar, so that even though the “follower” is delayed with respect to the “leader”, the movements of both would feel right with the music.

5.4 Methodological Limitations

We acknowledge that our choice of baseline only allows us to conclude that the WAVE visualization helps in timing and performing movements compared to not using any assistive visualizations at all. It does not allow determining whether WAVE is better than some other visualization technique.

We also tested WAVE with only one choreography, in one specific style of dance. In our own opinion, WAVE works best for relatively slow and continuous movements, whereas the fastest parts of our choreography feel less easy to follow. Hence, it may be that WAVE does not work for some other dance styles, though we hypothesize that careful timing of the wave propagation may support faster movements and should be explored in future work.


6 CONCLUSION

We have proposed and evaluated WAVE, a new VR movement visualization technique aimed at solving the on-the-fly dance instruction problem. We build on a metaphor of the user being part of a crowd making a wave in a sports event—we use multiple model dancers with different time offsets, allowing the player to both mimic the movements of a model dancer close to them and anticipate future movements through seeing other dancers perform those movements ahead of time. To minimize visual occlusion and allow the use of peripheral vision, we render multiple lines of dancers at different locations.

Our study comparing WAVE against a baseline (N=36) provided evidence that WAVE helps users anticipate upcoming movements and perform choreography more accurately, particularly in terms of more-closely matching the velocities of the head and hands as choreographed (e.g., direction-based movement error in Section 4.7). In future work, it should also be possible to extend WAVE to multi-user social VR dancing, e.g., by allowing dancers to emit their own movements as waves for other dancers to follow.


ACKNOWLEDGMENTS

This work has been supported by the European Commission through the Horizon 2020 FET Proactive program (grant agreement 101017779). We also thank Anita Isomettä for creating the dance choreography for the user study.


Supplemental Material

Video Presentation (mp4, 151.7 MB)

WAVE visualization technique: video showcasing the WAVE visualization technique (mp4, 166.6 MB)

References

  1. 2011. Exclusive: Behind The Scenes Of Dance Central. https://www.gamedeveloper.com/production/exclusive-behind-the-scenes-of-i-dance-central-i-. Accessed: 2023-12-12.Google ScholarGoogle Scholar
  2. Salvatore M Aglioti, Paola Cesari, Michela Romani, and Cosimo Urgesi. 2008. Action anticipation and motor resonance in elite basketball players. Nature Neuroscience 11, 9 (2008), 1109–1116.Google ScholarGoogle ScholarCross RefCross Ref
  3. Karan Ahuja, Vivian Shen, Cathy Mengying Fang, Nathan Riopelle, Andy Kong, and Chris Harrison. 2022. Controllerpose: inside-out body capture with VR controller cameras. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–13.Google ScholarGoogle ScholarDigital LibraryDigital Library
  4. Dimitrios S Alexiadis, Philip Kelly, Petros Daras, Noel E O’Connor, Tamy Boubekeur, and Maher Ben Moussa. 2011. Evaluating a dancer’s performance using kinect-based skeleton tracking. In MM ’11: Proceedings of the 19th ACM International Conference on Multimedia. 659–662.Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. Neil Barbour. 2020. Top 10 VR Games By Revenue. https://www.spglobal.com/marketintelligence/en/news-insights/blog/top-10-vr-games-by-revenue. Accessed: 2021-08-17.Google ScholarGoogle Scholar
  6. Bob Carroll and Julia Loumidis. 2001. Childrenís perceived competence and enjoyment in physical education and physical activity outside school. European physical education review 7, 1 (2001), 24–43.Google ScholarGoogle Scholar
  7. Jacky CP Chan, Howard Leung, Jeff KT Tang, and Taku Komura. 2010. A virtual reality dance training system using motion capture technology. IEEE transactions on learning technologies 4, 2 (2010), 187–195.Google ScholarGoogle Scholar
  8. Emiko Charbonneau, Andrew Miller, Chadwick Wingrave, and Joseph J LaViola Jr. 2009. Understanding visual interfaces for the next generation of dance-based rhythm video games. In Proceedings of the 2009 ACM SIGGRAPH Symposium on Video Games. 119–126.Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. Gifford K Cheung, Thomas Zimmermann, and Nachiappan Nagappan. 2014. The first hour experience: how the initial play can engage (or lose) new players. In Proceedings of the first ACM SIGCHI annual symposium on Computer-human interaction in play. 57–66.Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Julia F Christensen, Camilo José Cela-Conde, and Antoni Gomila. 2017. Not all about sex: neural and biobehavioral functions of human dance. Annals of the New York Academy of Sciences 1400, 1 (2017), 8–32.
  11. Chris G Christou and Poppy Aristidou. 2017. Steering versus teleport locomotion for head mounted displays. In Augmented Reality, Virtual Reality, and Computer Graphics: 4th International Conference, AVR 2017, Ugento, Italy, June 12-15, 2017, Proceedings, Part II 4. Springer, 431–446.
  12. Daniel L Eaves, Gavin Breslin, Paul Van Schaik, Emma Robinson, and Iain R Spears. 2011. The short-term effects of real-time virtual reality feedback on motor learning in dance. Presence: Teleoperators and Virtual Environments 20, 1 (2011), 62–77.
  13. Franz Faul, Edgar Erdfelder, Albert-Georg Lang, and Axel Buchner. 2007. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39, 2 (2007), 175–191.
  14. Beat Games. 2019. Beat Saber. https://store.steampowered.com/app/620980/Beat_Saber/.
  15. Antti Granqvist, Tapio Takala, Jari Takatalo, and Perttu Hämäläinen. 2018. Exaggeration of avatar flexibility in virtual reality. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 201–209.
  16. Kathryn Humphreys and Sinéad Kimbrell. 2013. Best Instructional Practices for Developing Student Choreographers. Journal of Dance Education 13 (07 2013), 84–91. https://doi.org/10.1080/15290824.2013.812790
  17. Christos Ioannou, Patrick Archard, Eamonn O’Neill, and Christof Lutteroth. 2019. Virtual performance augmentation in an immersive jump & run exergame. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
  18. Iris Kico, Milan Dolezal, Nikos Grammalidis, and Fotis Liarokapis. 2020. Visualization of folk-dances in virtual reality environments. In Strategic Innovative Marketing and Tourism. Springer, 51–59.
  19. Iris Kico and Fotis Liarokapis. 2019. Comparison of trajectories and quaternions of folk dance movements using dynamic time warping. In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games). IEEE, 1–4.
  20. Iris Kico, David Zelníček, and Fotis Liarokapis. 2020. Assessing the Learning of Folk Dance Movements Using Immersive Virtual Reality. In 2020 24th International Conference Information Visualisation (IV). 587–592. https://doi.org/10.1109/IV51561.2020.00100
  21. Salva Kirakosian, Emmanuel Maravelakis, and Katerina Mania. 2019. Immersive Simulation and Training of Person-to-3D Character Dance in Real-Time. In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games). IEEE, 1–4.
  22. Konami. 1998. Dance Dance Revolution.
  23. Matthew Kyan, Guoyu Sun, Haiyan Li, Ling Zhong, Paisarn Muneesawang, Nan Dong, Bruce Elder, and Ling Guan. 2015. An Approach to Ballet Dance Training through MS Kinect and Visualization in a CAVE Virtual Reality Environment. ACM Transactions on Intelligent Systems and Technology (TIST) 6, 2 (2015), 1–37.
  24. Ben Lang. 2022. Meta’s PC VR Dominance Continues as Quest 2 Passes New Milestone. https://www.roadtovr.com/meta-quest-2-majority-vr-headset-on-steam/. Accessed: 2023-05-30.
  25. Eike Langbehn, Paul Lubos, and Frank Steinicke. 2018. Evaluation of locomotion techniques for room-scale VR: Joystick, teleportation, and redirected walking. In Proceedings of the Virtual Reality International Conference-Laval Virtual. 1–9.
  26. Joseph J LaViola Jr. 2000. A discussion of cybersickness in virtual environments. ACM SIGCHI Bulletin 32, 1 (2000), 47–56.
  27. Ji Seol Lee. 2015. Analysis of Learning and Fun Elements Inherent in Dance Game. Journal of Korea Game Society 15, 1 (2015), 155–170.
  28. Lauri Lehtonen, Maximus D Kaos, Raine Kajastila, Leo Holsti, Janne Karsisto, Sami Pekkola, Joni Vähämäki, Lassi Vapaakallio, and Perttu Hämäläinen. 2019. Movement Empowerment in a Multiplayer Mixed-Reality Trampoline Game. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 19–29.
  29. Pascal Luban. 2020. Best practices for a successful FTUE (First Time User Experience). https://www.gamedeveloper.com/design/best-practices-for-a-successful-ftue-first-time-user-experience-. Accessed: 2023-09-14.
  30. Kiri Miller. 2017. Playable bodies: dance games and intimate media. Oxford University Press. 121–124 pages.
  31. Justin B Moore, Zenong Yin, John Hanes, Joan Duda, Bernard Gutin, and Paule Barbeau. 2009. Measuring enjoyment of physical activity in children: validation of the physical activity enjoyment scale. Journal of Applied Sport Psychology 21, S1 (2009), S116–S129.
  32. Movella. 2019. Xsens, MVN Link. https://www.movella.com/products/motion-capture/xsens-mvn-link.
  33. Falko Weigert Petersen, Line Ebdrup Thomsen, Pejman Mirza-Babaei, and Anders Drachen. 2017. Evaluating the onboarding phase of free-to-play mobile games: A mixed-method approach. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 377–388.
  34. Roosa Piitulainen, Perttu Hämäläinen, and Elisa D Mekler. 2022. Vibing Together: Dance Experiences in Social Virtual Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 188, 18 pages. https://doi.org/10.1145/3491102.3501828
  35. Katerina El Raheb, Marina Stergiou, Akrivi Katifori, and Yannis Ioannidis. 2019. Dance interactive learning systems: A study on interaction workflow and teaching approaches. ACM Computing Surveys (CSUR) 52, 3 (2019), 1–37.
  36. Ioannis Rallis, Apostolos Langis, Ioannis Georgoulas, Athanasios Voulodimos, Nikolaos Doulamis, and Anastasios Doulamis. 2018. An Embodied Learning Game Using Kinect and Labanotation for Analysis and Visualization of Dance Kinesiology. In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games). IEEE, 1–8.
  37. Michalis Raptis, Darko Kirovski, and Hugues Hoppe. 2011. Real-time classification of dance gestures from skeleton animation. In Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. 147–156.
  38. Rebuff Reality. 2023. Dance Dash. https://store.steampowered.com/app/2005050/Dance_Dash.
  39. Richard M Ryan, C Scott Rigby, and Andrew Przybylski. 2006. The motivational pull of video games: A self-determination theory approach. Motivation and Emotion 30, 4 (2006), 344–360.
  40. Richard A Schmidt and Craig A Wrisberg. 2008. Motor learning and performance: A situation-based learning approach. Human Kinetics.
  41. Simon Senecal, Niels A Nijdam, Andreas Aristidou, and Nadia Magnenat-Thalmann. 2020. Salsa dance learning evaluation and motion analysis in gamified virtual reality environment. Multimedia Tools and Applications 79, 33 (2020), 24621–24643.
  42. Alexa Sheppard and Mary C. Broughton. 2020. Promoting wellbeing and health through active participation in music and dance: a systematic review. International Journal of Qualitative Studies on Health and Well-being 15, 1 (2020), 1732526. https://doi.org/10.1080/17482631.2020.1732526
  43. Harmonix Music Systems. 2010. Dance Central.
  44. Harmonix Music Systems. 2019. Dance Central VR. https://www.meta.com/experiences/2453152771391571/.
  45. April Tyack and Elisa D Mekler. 2020. Self-determination theory in HCI games research: Current uses and open questions. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–22.
  46. Ubisoft. 2009. Just Dance.
  47. Yoko Usui, Katsumi Sato, Shinichi Watabe, and Erina Yanagida. 2019. VR Teaching Materials for Dance Practice. In 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI). IEEE, 178–183.
  48. VRChat. 2017. VRChat. https://hello.vrchat.com/.
  49. A Mark Williams and RC Jackson. 2019. Anticipation in sport: Fifty years on, what have we learned and what research still needs to be undertaken? Psychology of Sport and Exercise 42 (2019), 16–24.
  50. Alexander Winkler, Jungdam Won, and Yuting Ye. 2022. QuestSim: Human motion tracking from sparse sensors with simulated avatars. In SIGGRAPH Asia 2022 Conference Papers. 1–8.
  51. Yanghui Xu and Yi Li. 2019. Design and Research of VR System in Latin Dance Teaching. In 2019 International Conference on Advanced Education, Service and Management, Vol. 3. The Academy of Engineering and Education, 59–62.
  52. Jackie Yang, Tuochao Chen, Fang Qin, Monica S Lam, and James A Landay. 2022. HybridTrak: Adding full-body tracking to VR using an off-the-shelf webcam. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–13.
  53. Qiushi Zhou, Cheng Cheng Chua, Jarrod Knibbe, Jorge Goncalves, and Eduardo Velloso. 2021. Dance and Choreography in HCI: A Two-Decade Retrospective. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 262, 14 pages. https://doi.org/10.1145/3411764.3445804
