Article

Into the Rhythm: Evaluating Breathing Instruction Sound Experiences on the Run with Novice Female Runners †

by Vincent van Rheden 1,*, Eric Harbour 2, Thomas Finkenzeller 2 and Alexander Meschtscherjakov 1
1 Department of Artificial Intelligence and Human Interfaces, Paris Lodron University of Salzburg, 5020 Salzburg, Austria
2 Department of Sport and Exercise Science, Paris Lodron University of Salzburg, 5400 Hallein-Rif, Austria
* Author to whom correspondence should be addressed.
This article is an extended version of our paper published in the Proceedings of the 16th International Audio Mostly Conference (AM ’21), Virtual/Trento, Italy, 1–3 September 2021; pp. 16–23.
Multimodal Technol. Interact. 2024, 8(4), 25; https://doi.org/10.3390/mti8040025
Submission received: 1 February 2024 / Revised: 4 March 2024 / Accepted: 7 March 2024 / Published: 22 March 2024

Abstract:
Running is a popular sport throughout the world. Breathing strategies like stable breathing and slow breathing can positively influence the runner’s physiological and psychological experiences. Sonic breathing instructions are an established, unobtrusive method used in contexts such as exercise and meditation. We argue that sound is a viable approach for administering breathing strategies while running. This paper describes two laboratory studies using within-subject designs that investigated the use of sonic breathing instructions with novice female runners. The first study (N = 11) examined the effect of the information richness of five different breathing instruction sounds on adherence and user experience. The second study (N = 11) explored adherence and user experience with sonically enriched sounds, aiming to improve the sonic experience. Results showed that all sounds were effective in stabilizing the breathing rate (study 1 and 2, respectively: mean absolute percentage error = 1.16 ± 1.05% and 1.9 ± 0.11%, percent time attached = 86.81 ± 9.71% and 86.18 ± 11.96%). Information-rich sounds were rated as subjectively more effective than information-poor sounds (mean ratings: 7.55 ± 1.86 and 5.36 ± 2.42, respectively). All sounds scored low (mean < 5/10) on intention to use.

1. Introduction

Running is one of the most commonly practiced exercises in the world [1]. It even gained popularity during the COVID-19 pandemic (e.g., Ref. [2]), as it is easily accessible, requiring little equipment and low organizational effort.
Breathing is a key aspect of the running experience and an important factor in the feeling of a run ’going well’ [3]. Respiration is a key indicator of perceived workload [4] and can also be deliberately adapted to positively affect the running experience [5]. Breathing techniques such as slow-paced breathing and extended exhales show promise for improving psychophysiological aspects of running, and thereby the running experience [6,7]. Deep breathing may help runners avoid exercise-related transient abdominal pain (ETAP), the negative sensations commonly referred to as ‘side stitches’ [8]. These breathing techniques might be particularly beneficial for female runners, who are especially susceptible to respiratory limitations and exertional dyspnoea, or breathlessness, i.e., the subjective awareness of the sensation of uncomfortable breathing [9]. Hence, alleviating such limitations might increase exercise tolerance or help these runners enjoy running. With our research, we set out to support the growing group of female runners by increasing enjoyment and preventing dropout [10,11], and we explore how this can be achieved through sonically guided breathing.
We chose sound-based guidance to instruct runners during activity since the auditory modality, unlike the visual modality, is relatively unencumbered while running [12]. Metronome-like sounds are typically used to instruct breathing in sports science experimental contexts. In other contexts, more elaborate breathing instruction sounds can be found, such as RESPeRATE® (InterCure, Inc., New York, NY, USA, www.resperate.com/MD (accessed on 20 November 2020)), in which the sound fades in and out. The information conveyed by these sounds differs, and the various approaches vary in information richness. We suggest that this information richness affects runners’ adherence to the breathing sounds and consequently the overall running experience. When such sounds are used for an extended period of time (e.g., several weeks), the type of sonification is especially crucial to the user experience. As such, in a follow-up study, we evaluated sonically enhanced instruction sounds, exploring whether these would improve the user experience.
In this paper, we extend earlier work [13]. We present two studies that evaluated how well runners adhere to a breathing rhythm provided through sonified guidance. We were interested in the runners’ ability to follow the instructions and in how much the auditory information richness influences this adherence. Furthermore, we wanted to know how runners experienced running with, and following, the instruction sounds. Thus, we formulated the following research questions:
RQ1: 
To what extent can runners follow auditory breathing instructions while running?
RQ2: 
How does auditory information richness affect adherence to breathing guidance?
RQ3: 
What is the impact of various auditory breathing instructions on runners’ user experience?
Our research contributes to the understanding of sound-guided respiration by studying how the richness of auditory guidance affects adherence and user experience during running. The following section reviews related work in this area, after which we will present the setup and results of two user studies. Both studies evaluate the adherence and user experience (UX) regarding a variety of sounds. The results of both studies will then be discussed in the context of existing literature.

2. Related Work

This section discusses relevant literature on the modalities applied in SportsHCI and the technologies used to guide breathing in various contexts. Feedback modalities in SportsHCI should be carefully considered since they inherently affect understanding of the target information communicated [14,15]. A variety of modalities have been explored within the SportsHCI realm. We reflect on the related work in terms of the applicability of the various modalities, taking the following into account: First, the technology should be suitable for outdoor use, and thus be mobile, compact, and safe. Second, as we aim to scale up the user base for testing our follow-up work on breath guidance, the technology should be low-friction and commonly used amongst runners.

2.1. Feedback Modalities and Sports

Here, we will reflect on a selection of this work, starting with the visual modality.

2.1.1. Visual Feedback

The visual modality in sports is mainly used in stationary sports and exergame situations, e.g., in stationary cycling [16,17], rowing [18,19], and even in-place running [20]. Visuals to aid treadmill running have been explored as well; e.g., Nunes et al. investigated how to motivate treadmill runners [21]. Burr et al. [22] explored visuals to guide the user’s breath during the run. They used a setup consisting of a treadmill and a projector screen to display gamified breath guidance, which stabilized the participants’ breathing frequency and increased breath awareness. Our research aims to provide insights for systems that can be applied to outdoor running; as such, large screens with projectors are not applicable.
Hamada et al. explored the social effects of having a virtual runner augmented in a head-up display (HUD) [23]. Their main aim was to accompany and motivate the user on the run. To do so, they adopted smart glasses, as these were lighter and hence fitted the sports context better. They avoided the use of audio, as it would be too limited to create the experience of a fellow runner being present. With JogAR [24], Tan et al. also explored a HUD in the form of smart glasses to enhance the running experience. They introduced an “audio-first” approach to limit the cognitive load that a runner needs to spend on the real world, emphasizing that this load might increase as the runner fatigues. HUDs designed specifically for runners and cyclists, such as the Recon Jet (e.g., reported on in [25]), integrate a small screen displaying stats as numbers, similar to what sports watches offer visually, or what typical sports apps offer effectively and simply via spoken audio. In line with Refs. [12,23,24], we argue that running requires attention to the real environment, which makes it difficult to utilize the visual modality in this context. Additionally, while HUDs are common in cars, for running they are expensive, cumbersome to use and set up [26], and a highly uncommon technology among runners [27]. As such, we did not deem a HUD a feasible technology that would allow for easy scaling up and testing with a bigger user base in future research.
Mauriello and colleagues explored visual feedback using an e-textile to inform fellow runners in a group context about running statistics [28]; in our context, this would require a running partner always running ahead of the user to provide continuous breath guidance. Colley et al. explored the design space of shoe-integrated displays by placing LEDs on the running shoe and providing colour-coded feedback on the runner’s target pace [29]. This provided an ambient display giving users a relative reference to the target pace, yet runners reported that during competitions they would need more precise feedback. For our research, a continuous interaction is needed to support runners with the breathing rate, and looking at the shoe continuously while running is not possible.
Smart watches are commonly used to deliver information on running statistics, such as pace and mileage. Seuter et al. evaluated the influence of interacting with a smart watch and smart glasses during the run [30]. They concluded that smart watches were not only preferred, but also influenced the running movement significantly less. Regarding conveying information on the smart watch, Jenssen and Mueller suggest that it can be cumbersome to use the screen on a running watch as an interaction platform for guiding running movements [14]. For continuous interaction to guide the breath, as we investigate in this paper, they stress that “continuously addressing the watch during a run, in order to [authors: visually] observe the effects of movement corrections, is inexpedient and ultimately risks inciting an unfavorable running style”.
Visual feedback has proven to be a highly effective modality after the sport activity, e.g., to display stats and graphs or to review motion [31]. For continuous breath guidance during the run, however, we conclude that visual guidance would compete with the demand for visual attention to the real world.

2.1.2. Haptic Feedback

Greinacher and colleagues [32] evaluated haptic versus visual locomotor–respiratory coupling (LRC) guidance in the context of virtual reality rowing. Their haptic feedback consisted of eight vibration motors attached to the back of the rower. They found a significant difference in favor of haptic feedback. Valsted et al. [33] showed that vibrations can be used to support LRC in running. They evaluated two on-step vibration patterns, the first vibrating only on the steps during the inhalation, and the second only on the steps during the exhalation. Participants were six “mainly competitive” runners, who reported that exhale-based instruction was preferred and resulted in a higher success rate. Additionally, the authors evaluated various modes of “temporality”, in which the tactile support (1) was always present, (2) only appeared when the user preferred it (“self-serviced”), or (3) alternated on/off at regular intervals. Overall, they reported that runners found the vibrations irritating. Additionally, the haptic feedback led to relatively low adherence (26% success ratio) compared to Harbour et al. [34] (72% success ratio). We suspect that using vibro-tactile interfaces while running might be difficult, as the running motion introduces substantial tactile noise (impact of steps, movement of limbs, wind, rain, touch of clothing on the body). Additionally, the resolution of vibro-tactile interfaces is low, typically limited to on/off. A promising, but controversial, type of haptic assistance is Electro-Muscular Stimulation (EMS). Hassan et al. showed how EMS-based feedback can support runners in forefoot striking, guiding and correcting the user’s movement via electronic muscle stimulation [35]. Outside of the sports realm, breath has been guided through inflatables. Yu et al. created a computer-mouse-sized inflatable bag on which the user’s hand rests to feel the movement [36]. Another approach is inflatables integrated in a corset; this approach seems non-applicable to the sports context due to its bulky technology consisting of multiple inflatable units, each containing an air pump and solenoid valve [37].

2.1.3. Auditory Feedback

In human–computer interaction (HCI), sound and sonification have been commonly explored within the realm of sports, as shown by a variety of literature reviews [38,39,40].
Sound is very effective in communicating information during motion, e.g., for navigation [41] and in running applications such as Runtastic (https://www.runtastic.com/ (accessed 3 February 2024)) and Strava (https://www.strava.com/ (accessed 3 February 2024)), which provide updates on mileage and pace.
The usage of audio whilst running has been explored in various ways. For example, the smartphone app “Zombies, Run!” utilizes narrative-focused audio to convert a usual run into a game experience in which the user needs to escape from zombies [42]. Mueller et al. took a social angle when introducing an audio-based system that allowed people who are geographically apart to experience running together [43].
Sonification is a popular feedback approach. Within sports HCI, sound aspects like tone or volume are often mapped to the athlete’s movement, a sonification technique also referred to as parameter mapping [44], exemplified by various authors such as Cesarina et al. [45], Dubus et al. [46], Schaffert et al. [47], and Hermann et al. [48]. Another use of sound in sports is to correct the athlete, as shown by Yang et al. [49], who showcased a system that activated a white noise sound when the movement of the hand during a rip curl exercise was too fast. With a focus on running, Godbout et al. [50] explored how to assist runners in real time with a toe-striking technique, utilizing a corrective sonification strategy. Hoffmann et al. [51], Lim et al. [52], and Nijs et al. [53] exemplify that in sports and physical exercise research, sounds are typically used as instruction methods and do not reflect or correct the athlete’s movement. For example, Hoffmann et al. [51] instructed cyclists to pedal at the rate of a metronome.
The spontaneous effects of sound and music on physical activity have also been evaluated. For example, Styns et al. [54] showed that walking tempo and speed increase when dictated by music versus a metronome. Costas and colleagues reported that music elicits a rhythm response, which refers to the effects of musical rhythm, particularly its tempo in BPM (beats per minute). Buhmann et al. [55] showed that changing the relative phase angle of the musical beat can be used to manipulate running cadence, especially in women. Hockman [56] explored a system in which the step rate of the runner was reflected by the beat of the music. Additionally, Dijck et al. [57] showed that runners spontaneously entrain cadence to slightly adapted (up to 3%) music tempo. To support and maintain a proper running stride, researchers have proposed several systems utilizing auditory feedback [31,58]. The use of sound tempo might be highly beneficial for breath guidance as well. Oliver and Flores-Mangas took this notion a step further in MPTrain, a system that automatically selects and plays music based on the runner’s objectives and physiological reactions [59]. The abundance of Spotify (https://www.spotify.com/us/, accessed on 1 February 2024) playlists in various BPMs targeted at runners shows that sound is helpful for pacing.
Humans have a tendency to synchronize not only movement [60,61], but also breathing, to external sounds [62]. Auditory guidance of breathing has been utilized in a variety of HCI research related to meditation and breathing games [63,64,65,66], although those examples typically relied primarily on visual instruction or visual feedback. For example, to evaluate their work, Zafar et al. [67] used the Paced Breathing App (https://play.google.com/store/apps/details?id=com.apps.paced.breathing, accessed on 1 February 2024) as breathing instruction baseline sounds. Focusing on music, Harris et al. showed that sonic feedback through music degradation can support slow breathing in a non-sports context [68]. In the context of health care, RESPeRATE has been used to support users with slow-paced breathing exercises through sonic instruction [69,70,71]. The RESPeRATE device produces sounds that fade in and out analogously to an inhalation or exhalation.
Within the sports context, we point out research by Hoffmann et al. [51,72] and Bardy et al. [73] involving the use of sonic cues for guiding respiration. Hoffmann and colleagues [74] utilized metronome-based breathing rate instruction sounds to evaluate whether a stabilized breathing rate entrained pedal strokes while cycling. Harbour et al. [34] explored how sounds can support the guidance of LRC, a breathing technique in which the breath is coupled to the steps in specific ratios (e.g., two steps during the inhalation and three steps during the exhalation). In line with [75], they showed that sounds are a highly effective way to achieve these rhythms. We distinguish our work in that we solely focus on breathing guidance, whereas [34,75] focused on LRC and thus include steps and step-based guidance through sound.
We conclude that, especially for fast-moving sports, the utilization of sound as an interaction modality is promising since it does not interfere with eyesight, which is crucial for the safety of an athlete [12].

2.2. Types of Breathing Instruction Sounds

Reflecting on the related work presented above, we observe two commonly used types of sound instructions for breathing: first, metronome-like sound cues to instruct the breathing rate, typically through cues on either inhalation or flow reversal; and second, a sound analogous to the airflow during respiration, fading in and out, like RESPeRATE and the Paced Breathing App.
We hypothesized that the information richness, i.e., the information that the user can retrieve from the sounds, influences adherence and perceived experience. For example, the Paced Breathing App provides sound that fades in and out like actual breathing, which may allow users to predict more precisely when the next flow reversal will take place. In contrast, single tones only inform the user when to start an inhale or an exhale, and are perhaps less predictable for the listener.
In our first study, we evaluate the effects of this information richness of sound instructions in the context of running. Then, we present a follow-up study exploring adherence and user experience with sonically enriched instruction sounds designed to improve the sonic experience.

3. Study 1: Exploring the Effects of Sound Information Richness

3.1. Methods

This section describes participants, introduces the sound conditions, and presents a detailed procedure. Then, instruments are introduced, and we detail how the collected data are processed and analyzed.

3.1.1. Participants

A convenience sample was recruited through word of mouth and university mailing lists. Eleven female runners volunteered to participate in this study (no incentives were offered). The selection criteria were as follows:
  • Participants had no prior familiarity with paced breathing during running.
  • Participants self-rated their running proficiency as beginner to intermediate.
  • Participants were able to run continuously for at least 20 min.
Participants were on average 29 ± 4.20 years old. Seven participants reported running 1–2 times per week, two 1–2 times per month, one 2–3 times per week, and one less than once per month. Participants stated that they typically ran between 35 and 60 min (45.0 ± 8.9 min) per session. None followed a specific breathing technique while running. Two participants reported using breathing techniques outside of running, namely slow breathing during yoga and to alleviate stress. Table 1 shows participants’ demographics.

3.1.2. Sound Conditions

For the first study, we devised five breathing instruction sounds that differed from each other in their levels of information richness (see Figure 1).
The first three sounds resembled a metronome. The first two were the least information-rich sounds; they signaled the start of a full breath cycle. Sound 1 (Inhalation Metronome) indicated the start of inspiration, while sound 2 (Exhalation Metronome) indicated the start of expiration. We chose the standard pitch “A3” as the inhalation indicator and “E3” as the exhalation indicator. The third sound, labelled Full Breath Metronome, included both inspiration (“A3”) and expiration (“E3”) cues. Sound 4 (Siren) had a continuous tone for inhalation and exhalation, similar to a siren. It provided the runner with continuous sound feedback and was thus more information-rich than sounds 1 to 3. For sound 5 (Accordion), our metaphor was the accordion instrument. It guided the full breathing cycle and had the highest information density, as the volume faded in and out for both inhalation and exhalation. With the aim of reducing the task complexity for runners, we designed all sounds to instruct a breathing ratio of 50%, meaning that inhalation and exhalation lasted for exactly the same time. This is within the normal range of breathing ratios during exercise of various intensities [76].
Sounds were created in Ableton Live (v. 11.0.2, Ableton AG, Berlin, Germany, https://www.ableton.com/, accessed on 1 February 2024). As a base for the sounds, we used Sine Waveforms.adv (a standard sound in Operator, Ableton Live’s synthesizer) for its simplicity. As described above, we distinguished inhalation from exhalation through the pitch of the tone: for inhalation, the standard pitch “A3”; for exhalation, the pitch “E3”. The sound cues for the Inhalation Metronome, Exhalation Metronome, and Full Breath Metronome lasted 1/8 of a full breath length. The Siren and Accordion lasted the full breath length. The Accordion was faded in and out by adjusting its volume.
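For illustration, the following minimal sketch reconstructs two of these conditions in Python rather than in Ableton Live; it is not the authors’ sound design, and it assumes the standard tunings A3 = 220 Hz and E3 ≈ 164.81 Hz. All function and file names are ours.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100  # audio sample rate (Hz)

def sine(freq_hz, dur_s, amp=0.8):
    t = np.arange(int(SR * dur_s)) / SR
    return amp * np.sin(2 * np.pi * freq_hz * t)

def full_breath_metronome(br_bpm):
    """One breath cycle: a short A3 cue at inhale onset and an E3 cue at
    exhale onset. Cues last 1/8 of the full breath; the breathing ratio is
    50%, i.e., equal inhale and exhale halves."""
    breath_s = 60.0 / br_bpm                # full breath cycle time tB (s)
    cue_s = breath_s / 8.0                  # cue duration, 1/8 of tB
    half = int(SR * breath_s / 2)
    cycle = np.zeros(2 * half)
    cycle[:int(SR * cue_s)] = sine(220.00, cue_s)             # A3: inhale cue
    cycle[half:half + int(SR * cue_s)] = sine(164.81, cue_s)  # E3: exhale cue
    return cycle

def accordion(br_bpm):
    """Continuous tone fading in over the inhale and out over the exhale."""
    breath_s = 60.0 / br_bpm
    half = int(SR * breath_s / 2)
    envelope = np.concatenate([np.linspace(0, 1, half), np.linspace(1, 0, half)])
    tone = np.concatenate([sine(220.00, breath_s / 2), sine(164.81, breath_s / 2)])
    return envelope * tone

# e.g., render four minutes of instruction at 27 bpm, roughly the group mean
cycle = full_breath_metronome(27)
wavfile.write("full_breath_metronome_27bpm.wav", SR,
              np.tile(cycle, 27 * 4).astype(np.float32))
```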

3.1.3. Procedure

For the experiment, we chose a within-subject design. Participants were asked to run on an indoor treadmill following breathing instruction sounds for four minutes. Figure 2 visualizes the experimental setup and instruments and Figure 3 illustrates the procedure steps.
Participants filled in a pre-participation questionnaire (see Appendix A) regarding basic demographics, running and sport behaviour, and breathing experience. Then, participants were informed about the exact study procedure. They were guided through a sound familiarization protocol, which included practicing breathing in the rhythm of the sound cues provided by the breath tool. This was performed while participants were seated.
The familiarization was intended to minimize learning effects during the experiment. The sound familiarization protocol had two phases: first, sounds 1–5 were played in randomized order. A researcher demonstrated when to inhale and exhale via hand movements (up for inhale and down for exhale). In the second phase, participants were asked to breathe according to the sound cues without the assistance of the researcher. All participants successfully completed the breathing tasks for all of the five sounds.
After the familiarization, participants were fitted with a Hexoskin (Carre Technologies, CAN, https://www.hexoskin.com/, accessed on 1 February 2024) (HX) smart shirt according to manufacturer recommendations. A standardized five-minute seated rest phase was performed to allow participants to reduce potential pre-experiment effects of stress on breathing patterns. Then, participants were introduced to the questionnaire that would be used during the experiment between sound conditions. This questionnaire included Borg’s Rating of Perceived Exertion (RPE) [77].
Participants performed a ten-minute warm-up running on the treadmill. First, the treadmill was set to 7.5 km/h. After five minutes, participants were asked to adjust the treadmill to a comfortable speed that they felt confident they could sustain for 30 min. The treadmill speed was then kept the same during the rest of the experiment. To confirm that the pace was comfortable, researchers performed a talk test [78] with the participants during the second half of the warm-up phase. During the last minute of the warm-up, the average breathing rate of the participant was recorded. This measurement was used to adapt the tempo of the breathing sounds, ensuring that the sounds matched participants’ natural breathing rate.
For the main experiment, participants ran for five minutes in each of the five sound conditions. The procedure for each condition was as follows: During the first minute, no sound was played, allowing participants to achieve a stable speed, stride rate, and breathing pattern. Then, a warning and a ten-second countdown were given by the researchers, including the information of whether the respective sound would start on an inhalation or an exhalation. Then, the breathing instruction sound was played for four minutes while participants continued running. To limit potential effects of learning and exertion, the order of the five sounds was randomized for each participant.
To prevent overexertion, we asked participants to verbally rate their perceived exertion (RPE) on the Borg scale (6–20) [77] while running at the end of the warm-up and after each experimental condition. An a priori threshold was set at 16, above which the experiment would be terminated. None of the participants reached this threshold during the experiments.
After each of the five sound conditions, participants rested for three minutes, during which they filled in a shortened meCUE questionnaire (see Section 3.1.4) [79] on user experience. The data from the Hexoskin shirt (HX data) confirmed that participants’ heart rate recovered during the standardized three-minute resting phase to at most 120 beats per minute (bpm) for each participant and condition.
After the experiment on the treadmill, participants took part in a semi-structured interview (see Appendix B for the interview guide), which included questions about the perceived usefulness and usability of and user experience with the different sounds. The total time participants spent in the lab was up to 1.5 h.

3.1.4. Instruments

Figure 2 illustrates the experimental setup. While participants ran on the treadmill, breathing instruction sounds were produced in Ableton Live and played via Bluetooth speakers next to the treadmill. During the run, participants wore the HX shirt to gather respiration data (dual thoracic and abdominal stretch sensors, 2-channel respiratory inductance plethysmography, 16 bit, 128 Hz), heart rate (1-channel ECG, 12 bit), and accelerometry (3-axis, 64 Hz, range ±16 g). Additionally, participants were equipped with a Movesense (Suunto Oy, Vantaa, Finland) (MS) inertial measurement unit (IMU). This was fixed on the right tibia, just above the ankle, to capture tibial acceleration (9-axis, 208 Hz, range ±16 g). Each sound start was tagged manually by researchers within the Movesense mobile application to produce synchronized timestamps. The MS and HX accelerometers were synchronized via three taps before and after the experiment. This allowed us to align the sounds with the HX data.
To evaluate the effects of the five sounds on perceived user experience, we used the meCUE questionnaire [79] (German version). To adapt the questionnaire to the running context, we followed the meCUE handbook. The final questionnaire consisted of six items, which could be answered quickly between runs. They were formulated as statements to be answered on a 10-point scale, ranging from 1 to 10 (with 5.5 being neutral), to achieve a high degree of measurement precision. The chosen items covered effectiveness, efficiency, intention to use, negative emotion, positive emotion, and aesthetics. The statements were as follows (translated from German):
  • Breathing while running can be easily adjusted to this sound specification (effectiveness).
  • I consider the use of this sound during running to achieve regular breathing to be absolutely useful (efficiency).
  • If I could, I would use this sound every day while running (intention to use).
  • Running to this sound annoys me (negative emotion).
  • Running to this sound exhilarates me (positive emotion).
  • I find this sound attractively designed (aesthetics).
The semi-structured interview at the end of the experiment had six parts and covered usability and user experience issues such as the speed and volume of the breathing instruction sounds, ease of use, subjective adherence, and hedonic experiences. Participants were also asked to rank the different sounds with respect to ease of use and aesthetics. Finally, participants were asked about their running routines and their willingness to use such a breathing tool in them.

3.1.5. Data Analysis and Statistical Approach

The processing of HX data was conducted through a custom-developed algorithm in MATLAB (Version 2021a, MathWorks, Inc., Natick, MA, USA) [80]. Timestamped flow reversals were used to segment breath cycle time (tB). The breathing rate (BR, in breaths per minute) was estimated as BR = 60/tB and smoothed using a five-breath rolling average. The coefficient of variation of tB was calculated as a representation of BR quantitative variability (BRV).
HX respiration data were used to estimate adherence to the instructed BR during the experiment. First, Kolmogorov–Smirnov tests were used to confirm a normal distribution of tB for each participant. Adherence was determined by two aspects: (1) main adherence, the percent error between measured BR and instructed BR; and (2) adherence strength, “attachments” and “detachments”. Main adherence was calculated within a window from 60 s to 210 s after sound start. Mean relative (MRPE) and mean absolute percent error (MAPE) were used to quantify adherence to each sound. An a priori criterion of 2.5% MAPE was established for “acceptable” adherence. Then, attachments and detachments were calculated along a five-breath rolling window from sound start to 210 s after sound start, and the length (time) of each was recorded. These were defined a priori as five or more consecutive breaths inside or outside, respectively, of five percent of the instructed BR. For example, for an instructed BR of 30 bpm, an attachment was flagged if five consecutive breaths were between 28.5 and 31.5 bpm. Percent time attached and detached was calculated relative to the entire sound instruction window to quantify adherence strength.
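The adherence computation was implemented in a custom MATLAB algorithm that is not reproduced here; the following Python sketch merely illustrates the metrics as described, with the simplification that percent time attached is approximated by the fraction of breaths flagged as attached rather than an exact time-weighted share. All names are ours.

```python
import numpy as np

def adherence_metrics(t_b, instructed_br, window=5, tol=0.05):
    """Sketch of main adherence (MRPE/MAPE) and adherence strength.

    t_b: array of breath cycle times (s), segmented at flow reversals.
    """
    br = 60.0 / np.asarray(t_b)  # instantaneous breathing rate (bpm)

    # Main adherence: relative/absolute percent error of the five-breath
    # rolling-average BR against the instructed BR
    br_smooth = np.convolve(br, np.ones(window) / window, mode="valid")
    rel_err = (br_smooth - instructed_br) / instructed_br * 100.0
    mrpe = rel_err.mean()
    mape = np.abs(rel_err).mean()

    # Adherence strength: a breath counts toward "attached" when `window`
    # (five) consecutive breaths fall within +/-5% of the instructed BR,
    # e.g., 28.5-31.5 bpm for an instructed 30 bpm
    inside = np.abs(br - instructed_br) <= tol * instructed_br
    run_sums = np.convolve(inside.astype(float), np.ones(window), mode="valid")
    pct_attached = 100.0 * (run_sums == window).mean()

    return mrpe, mape, pct_attached
```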
A generalized model repeated-measures ANOVA (α = 0.05) was used to detect significant differences in main adherence and adherence strength between sounds. A multi-sample comparison of variances was used to determine intra-subject adherence strength differences between sounds.
For user experience, we used the following statistical approach: we performed a one-way repeated-measures ANOVA for each item of the meCUE to separately test the effect of each sound on the UX items of the questionnaire. Mauchly’s test was used to verify whether sphericity could be assumed. When this was the case, no correction was used; when sphericity was not assumed, the Greenhouse–Geisser correction was applied.
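As an illustrative sketch only (the original analysis was not run in Python), the same per-item test can be expressed with the pingouin package, whose correction option runs Mauchly’s test and applies the Greenhouse–Geisser correction only when sphericity is violated. The long-format table with columns participant, sound, and rating is a hypothetical structure.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format ratings for one meCUE item:
# one row per participant x sound condition
df = pd.read_csv("mecue_item_ratings.csv")

# Repeated-measures ANOVA (alpha = 0.05); correction='auto' checks sphericity
# via Mauchly's test and applies Greenhouse-Geisser only when it fails
aov = pg.rm_anova(data=df, dv="rating", within="sound",
                  subject="participant", correction="auto", detailed=True)
print(aov)
```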

3.2. Results of Study 1

3.2.1. Adherence in Study 1

Participants selected speeds between 6.5 and 10 km/h (8.43 ± 0.87) and rated their RPE below 16/20. The analysis showed similar heart rate responses for each running condition, indicating little to no fatigue over the runs. The instructed BR ranged from 22 to 37 bpm (26.91 ± 4.18). Figure 4 shows the BR for S1P02 for all sounds. It exemplifies the stabilizing effect of each sound instruction on BR. Note that before the sound start (t = 0), BR ranges between 22 and 42 bpm; at 25 s after sound start, this has narrowed to between 22 and 25 bpm. Then, at 250 s, the sound is switched off, and BR varies heavily again. Table 2 shows the group average BR and overall adherence statistics. Pooled group data indicated that all participants could follow the instructed BR for each sound (100.76 ± 1.36%). While group adherence was excellent for all sounds (MAPE = 1.16 ± 1.05, percent time attached = 86.81 ± 9.71%), adherence was highly variable between individuals. For example, S1P08 showed very strong adherence to the Inhalation and Exhalation Metronome (96.76% and 97.03% time attached, respectively), whereas adherence to the Full Breath Metronome and Siren was more unstable (74.79% and 88.78% time attached, respectively). While there were no significant differences in adherence between sounds among the entire group, intra-individual comparisons revealed that several runners struggled specifically with the Inhalation Metronome (n = 3, p < 0.05). Attachment plots provided a qualitative assessment of acute attachment/detachment behavior, as exemplified by Figure 5.

3.2.2. User Experience in Study 1

Table 3 contains the means and standard deviations of each UX item. Overall, the sounds scored low to moderately positive on user experience, with the highest mean scores (around 7.00) for the effectiveness item and the lowest mean scores for intention to use (around 2.50). For the first item, covering effectiveness, participants indicated that the sounds including both inhalation and exhalation cues were easier to follow than the single metronome sounds on inhalation or exhalation only (see Figure 6). The ANOVA showed a significant group difference (F = 3.41, ηp² = 0.25, p < 0.05). Significant differences were found between the Inhalation Metronome and the Full Breath Metronome (p < 0.05) and between the Inhalation Metronome and the Accordion (p < 0.05). The differences between the Exhalation Metronome and both the Full Breath Metronome (p < 0.01) and the Accordion (p < 0.05) were also significant.
The second item of the meCUE scale covered the efficiency of the sounds to achieve a stable breathing rate. Overall, the efficiency of the sounds was judged to be above average (means varying from 5.27 to 6.91). Furthermore, a favourable tendency towards the sounds with both inhalation and exhalation cues was observed. Overall, no significant difference was found (p = 0.18).
The intention-to-use item was overall scored below neutral. Here, S5 (Accordion) stood out as the least negatively rated, though no significance was found for this item (p = 0.12).
The experience of annoyance (item: negative emotion) was scored as relatively neutral, with the Full Breath Metronome and the Accordion rated least annoying (means slightly below 4), though no significant difference was found (p = 0.71).
Overall, none of the sounds were found to be exhilarating (item: positive emotion): the most exhilarating sound was the Accordion sound with a mean of 5.27. The least exhilarating was the Exhalation Metronome sound with a mean of 2.82. The ANOVA revealed no significant difference (p = 0.06).
Aesthetics was scored through the item ‘I find this sound attractively designed’. The high standard deviations indicate that whether a sound is deemed attractive is highly personal. The Accordion was rated the most attractively designed, the Siren the least. No significant difference was found (p = 0.29).

4. Study 2: Adherence and User Experience of Sonically Enhanced Sounds

Based on the participant feedback of study 1, we created a new range of breathing instruction sounds. Specifically, we aimed to design sounds that were less mechanical and, like the Accordion, were analogous to the breath. These sounds were evaluated in the user study described in this section.

4.1. Methods

4.1.1. Participants

The same inclusion criteria were used to recruit 11 volunteers aged 18 to 30 years (24.27 ± 3.29). Participants reported running an average of 1–2 sessions per month. The average running distance was 5 km (4–7.5 km), with an average running time of 35 min (25–50 min). Two participants reported counting steps while breathing, indicating LRC, and one participant reported taking extra deep breaths while running. Three participants reported using conscious abdominal breathing for relaxation in a sedentary state. We expected these techniques to have no major effect on the results. Table 4 provides an overview.

4.1.2. Sound Conditions

For this study, three new sounds were created: first, a rock-inspired sound (Rock Sound), consisting of two electric guitar strikes, the first with a higher pitch to indicate the inhale and the second with a lower pitch to indicate the exhale; second, a classical-music-inspired sound (Harp Sound), consisting of two harp arpeggios with similar two-tone indications; and third, an actual recording of breathing (Breath Sound). Next to the three novel sounds, the Full Breath Metronome was included as a sound condition to relate the findings to study 1. We learned from study 1 that generating the sound at a customized playback speed on the spot can be stressful. We therefore prepared a separate sound file for each sound at breathing rates from 18 to 48 bpm, which also fixed the playback time to exactly 4 min.
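A pre-rendering step along these lines (reusing the hypothetical full_breath_metronome() helper sketched in Section 3.1.2) would yield one fixed four-minute file per instructed rate; the loop and file names are our illustration, not the authors’ production pipeline.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100  # audio sample rate (Hz), as in the earlier sketch

# Render one four-minute file per breathing rate from 18 to 48 bpm so that
# no on-the-spot tempo adjustment is needed during the experiment
for bpm in range(18, 49):
    cycle = full_breath_metronome(bpm)  # hypothetical helper, see Section 3.1.2
    wavfile.write(f"metronome_{bpm}bpm.wav", SR,
                  np.tile(cycle, bpm * 4).astype(np.float32))
```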

4.1.3. Setup

Study 2 followed the same procedure and used the same instruments as study 1. Five conditions were tested: (1) no sound, (2) Full Breath Metronome, (3) Rock Sound, (4) Breath Sound, (5) Harp Sound. The no-sound condition was included to capture a baseline breathing rate after the warm-up. As the sounds were designed to have similar information richness, and based on study 1’s finding of no learning effects, the sounds were presented in the fixed order listed above.

4.2. Results of Study 2

This section summarizes the results of study 2. The data analysis and statistical approach followed the same process as described in study 1. We also compared breathing rate quantitative variability between the no-sound condition and the other conditions to quantify the ability of the sounds to stabilize BR. Furthermore, we used a two-sample t-test (α = 0.05) to compare Full Breath Metronome adherence between studies 1 and 2.

4.2.1. Adherence in Study 2

Table 5 shows the group average BR and overall adherence statistics. The instruction sounds successfully stabilized BR, as BR variability was significantly greater in the no-sound condition than with all sounds (d > 3.83, p < 0.001). Pooled group data indicated that all participants could follow the instructed BR for each sound (BR = 101.21 ± 1.90%), and overall adherence was acceptable for all sounds (MAPE = 1.9 ± 0.11, percent time attached = 86.18 ± 11.96%). Figure 7 shows the BR for participant S2P04 for all conditions, and clearly shows the difference in stability between the conditions with instruction sounds and the no-sound condition. The t-test revealed no meaningful difference in adherence to the Accordion Sound between study 1 and study 2 (p = 0.37). Similar to study 1, we found high inter-individual variability in sound adherence. Intra-individual comparisons showed that the Breath Sound (p < 0.05, n = 3) and Harp Sound (p < 0.05, n = 3) were challenging (lower adherence strength) for some runners.

4.2.2. User Experience in Study 2

Table 6 shows the means and standard deviations for each UX item. Overall, the user experience of most sounds was rated below average for most items. A significant group difference was found for the item expressing subjective effectiveness (p ≤ 0.01, ηp² = 0.32, F = 4.78). Figure 8 shows the mean scores for each of the sound conditions. The Breath Sound scored best with a mean of 8.91 ± 1.92, significantly higher than the Accordion (p < 0.05) and the Harp Sound (p < 0.05). No significant differences between the sound conditions were found for the other items (efficiency: p = 0.10, F = 2.28, ηp² = 0.19; intention to use: p = 0.76, F = 0.40, ηp² = 0.04; negative emotion: p = 0.08, F = 2.46, ηp² = 0.20; positive emotion: p = 0.56, F = 0.70, ηp² = 0.07; aesthetics: p = 0.14, F = 1.96, ηp² = 0.16). Nonetheless, the Breath Sound scored relatively better than the other sounds in terms of subjective efficiency (7.55 ± 2.21), where the other sounds scored on average below 6.00. The Breath Sound also scored higher in the mean, although this difference was not significant compared with the other sound conditions. The Accordion Sound scored specifically low for positive emotion and aesthetics in comparison to the other sounds.

5. Discussion

This section discusses the results from study 1 and study 2 along the three research questions outlined in the introduction, followed by the limitations and future work.

5.1. Research Questions

5.1.1. RQ1: To What Extent Can Runners Follow Auditory Breathing Instructions While Running?

The results show that all sounds, regardless of their information richness, are able to instruct and stabilize the breathing rate. This finding is in line with Refs. [34,75], which showed that sound is highly effective in guiding LRC during the run. Perfect adherence is never to be expected; some deviations from the instruction will always occur. Participants mentioned physical and mental reasons for not adhering to the sound at certain times, including the need to take a deep breath, the need to swallow, and drifting attention. This is in line with Ref. [33], which reported that participants were occasionally distracted by external influences. Overall, these results can inform other applications in which breathing plays an important role, such as healthcare products (e.g., RESPeRATE [69,70,71]), or instructing the breathing rate in other (endurance) sports, such as cycling, cross-country skiing, or rowing. Whether the stabilizing effect remains during longer, more exhausting runs should be studied further. In this experiment, the instructed breathing rate was very close to the natural breathing rate; whether all sounds perform as well when instructing breathing rates that deviate from the natural rate should be examined further.

5.1.2. RQ2: How Does Auditory Information Richness Affect Adherence to Breathing Guidance?

Within this work, no significant effect of auditory information richness on adherence was observed. At the same time, the user experience evaluation from study 1 indicates that sounds instructing both inhalation and exhalation were easier to adapt to and created less mental load than single metronome sounds. This was confirmed by the interviews, in which participants indicated that the dual-tone sounds were easier to follow “because you have a signal for inhalation and a signal for exhalation” (S1P10); S1P09 indicated the following: “the one sound [single metronome] that confused me a bit”. The Inhalation and Exhalation Metronome also seemed to induce unnatural breathing behaviour: S1P07 said, “I tried to breathe faster and then just wait till the tone started, and then exhale or inhale”. The Accordion may have cost less effort as it was more analogous to an actual breath: “it was more in rhythm with the body: the first one is more flow with the breathing and the other one is harder or sharper”, indicated S1P07 when comparing the Accordion to the Full Breath Metronome. In study 2, we also observed a high ranking for the effectiveness of sounds that are analogous to the breath, like the Breath Sound or the Accordion; e.g., S2P09 thought that the Breath Sound was easy to follow because “that’s how I breathe”.
Therefore, we advise using information-rich sounds like the Breath Sound or the Accordion when learning to breathe to guiding sounds, and we recommend these types of sounds for experiments in which adherence is important. Sonic qualities also seemed to affect the experienced level of adherence. For example, S1P08 stated about the Accordion: “I think maybe it was too smooth and soft—at some point, I felt like I had trouble sticking to my breathing pattern with it”. Overall, we see that information-rich sounds were subjectively easier to follow, in particular sounds analogous to an actual breath, perhaps because these sounds demanded less mental effort. Hence, it is generally advisable to utilize information-rich sounds that indicate inhalation and exhalation. In particular, in situations that require a high level of adherence in a short amount of time, such as cross-sectional studies, sounds fading in and out analogously to the breath are recommended. As participants reported that mental load decreases over time, metronome-like sound cues might be preferred for longer studies due to their simplicity and less invasive nature.
In some cases, the instruction sounds also appeared to have a stabilizing effect on the step rhythm. As we did not analyze the data on LRC, we cannot objectively confirm this. Nonetheless, some participants mentioned that it occurred spontaneously. Others used it consciously as a strategy to keep the breathing rhythm, such as S1P06: “it was 123 in, 123 out […], especially when there was one beep [S1, S2], I had to count”. This is in line with the findings of Hoffmann et al. [72] that instructing the breathing rate increases the chance of coupled breathing.

5.1.3. RQ3: What Is the Impact of Various Auditory Breathing Instructions on Runners’ User Experience?

Overall, the user experience of all sounds was rated relatively low, with item ratings ranging from below average to well above average and means ranging from 2.18 for intention to use to 8.91 for effectiveness. The low user experience ratings can be explained both by the limited perceived usefulness of stabilized breathing for the participants’ personal contexts and by the simplistic design of the sounds, in study 1 specifically. For study 2, we aimed to introduce sounds that were more pleasing for the participants and explored richer sounds. Comparing the two studies, the sounds in study 2 appear to score (relatively) better on most UX items. Table 7 provides a visual overview, highlighting the items that scored relatively (yellow) and absolutely (green) better compared to the sounds from study 1.
The range of instructional sounds explored in study 1 and study 2 all scored low on perceived aesthetics as well as positive emotion. Specifically in study 1, the Siren was deemed to be annoying as it reminded participants of something stressful, e.g., S1P05 reflected “the sounds […] more or less stressful for me like sirens, […] not relaxing, or not empowering music, which I normally listen to run while I’m running.” In study 1, the Accordion seemed to be the most pleasant: “the sound of [the Accordion] is a bit smoother, is a bit more calming than [the Siren]” (S1P08). The tonality also played a role in the sonic experience: some participants indicated that the lower tones such as in the Exhalation Metronome were less exhilarating and a high tone more exhilarating: “the high tone was more positive mood: the kind of sound is going down…the mood is going down” (S1P02), or “[Inhalation Metronome] just sounded friendlier […] because it was a little higher than [Exhalation Metronome]” (S1P08).
To better understand the sound preferences of the participants, we asked them to rank the sounds. The column chart in Figure 9 provides an overview of the rankings, with each column representing the number of participants who ranked the particular sound from 1 to 4, with 1 being best and 4 being worst. In study 2, most participants (n = 5) found the Rock Sound most pleasing to listen to. As S2P04 mentioned, “this sound was most exciting!” Contrasting this overall tendency, one participant ranked the Rock Sound last, as she found it “exhausting” (S2P08). Four participants preferred the Breath Sound because “it was similar to my own breath” (S2P09) and because it was “relaxing” (S2P02). The Harp Sound was preferred by two participants; S2P11 mentioned that with the “harp, I got into a trance and did not hear the sound consciously anymore”. In study 2, the Accordion seemed to have a strenuous effect on some participants’ experiences; for example, it was found to be “monotonous” (S2P07) and “not motivating” (S2P09). As such, sounds should be designed to fit the running experience better, being more energetic and providing motivation for the run. Within the range of instructional sounds explored in study 2, preference, in line with music preference, is highly individual. As such, we advise offering users multiple sounds to choose from.
The aesthetics of the sounds also played a role in the intention to use; e.g., S1P09 indicated, “I don’t think I would do it, because it’s not a sound I would listen to for 20 or 30 min straight”. Most participants felt the sounds were mainly suitable for training purposes and/or did not see them fitting their normal running routines, which include emptying the mind and listening to music or podcasts; e.g., S1P04: “I’m not training for anything specific, so I wouldn’t use it. I’d rather just listen to an audio book or music”. For breathing instructions to find a place in normal running routines, the instruction sounds should be less obtrusive and less present. Participants often suggested that integration into their music would open them up to using the sounds; e.g., S2P09: “yes, I would use it if integrated in music, otherwise it is very annoying”. Self-serviced instructions, as Valsted et al. [33] propose, would also be an interesting solution, allowing users to alternate between instructions and preferred music. This might also help runners who need more information at the beginning of a run but appreciate a less obtrusive sound once the correct rhythm is found.

5.2. Limitations and Future Work

This study primarily aimed to explore the effects of the different sounds; as such, we limited the variables in the presented studies to the different sounds and did not explore the effects of various breathing rates. Additionally, we acknowledge the small number of participants in each study, and the primary focus on female runners, as limitations. To minimize external influences, we ran the studies indoors on a treadmill, which limits our conclusions to this lab setting. In future work, we aim to explore how breathing instructions via sounds can be designed in a more aesthetically pleasing way and integrated into a holistic sonic experience that complements the running experience. Beyond that, we aim to utilize the findings to instruct slower and faster breathing rates and more complex breathing techniques such as LRC and extended exhales, and to explore the effects on biomechanics and subjective experiences when instructing breathing rates that differ from the preferred breathing rate measured at baseline.

6. Conclusions

In this work, we presented two exploratory studies evaluating the effects of breathing instruction sounds on adherence and user experience during the run. Study 1 explored the effects of information richness of breathing instruction sounds in the context of running. Study 2 expanded on these findings, evaluating sonically enhanced sounds. Based on the two exploratory studies (N = 11 each), we conclude that participants were able to stabilize their breathing rate regardless of the information richness of the sound. Participants indicated that metronome sounds cueing only one point in the breath cycle were more difficult to follow; it is therefore advisable to utilize sounds with a cue on both the inhalation and the exhalation in research contexts and with users who are new to running with breathing cues. Sounds that mimic the breath flow, such as the Accordion or the Breath Sound, seemed relatively easier to adhere to. The aesthetic appreciation of sounds was found to be highly individual, and it is advisable to offer end-users multiple sounds to choose from.

Author Contributions

Conceptualization, V.v.R., E.H., and T.F.; writing—original draft preparation, V.v.R.; writing—review and editing, V.v.R., E.H., T.F., and A.M.; visualization, V.v.R. and E.H.; supervision, A.M. and T.F.; project administration, A.M.; funding acquisition, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology, the Federal Ministry for Digital and Economic Affairs, and the federal state of Salzburg under the research programme COMET—Competence Centers for Excellent Technologies—in the project DiMo-NEXT Digital Motion in Sports, Fitness and Well-being (Project number: FO999904898).

Institutional Review Board Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the University of Salzburg, reference number GZ 13/2021.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Open Access Funding by the University of Salzburg.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BR: Breathing rate (bpm)
BRV: Breathing rate variability (%)
HCI: Human–computer interaction
HR: Heart rate (bpm)
HX: Hexoskin smart shirt
LRC: Locomotor–respiratory coupling
MAPE: Mean absolute percent error
MRPE: Mean relative percent error
tB: Breath cycle time (from inspiration to next inspiration) (s)
tE: Exhale time (s)
tI: Inhale time (s)

Appendix A. Pre-Questionnaire Items (German)

Summary of the questionnaires:
  • Sport biography
  • Run experience
  • Musical experience
  • Breathing experience

Appendix A.1. Sport Biography

See Figure A1 and Figure A2 for items related to the sports biography.
Figure A1. Sports biography questions (part 1).
Figure A2. Sports biography questions (part 2).

Appendix A.2. Run Experience

See Figure A3–Figure A5 for items related to the run experience.
Figure A3. Questions related to the participant’s running experience (part 1).
Figure A4. Questions related to the participant’s running experience (part 2).
Figure A5. Questions related to the participant’s running experience (part 3).

Appendix A.3. Musical Experience

To probe the participant’s musical experience, we used the German version of the Goldsmiths Musical Sophistication Index [81].

Appendix A.4. Breathing Experience

See Figure A6 for items related to the breathing experience.
Figure A6. Questions related to the participant’s breathing experience.

Appendix B. Semi-Structured Interview Guide (English)

We prepared an English and a German version of the questions below. If the participant was comfortable with English, we proceeded in English; otherwise, we proceeded in German.
How did it go in general?
  • Distraction from what was going on around?
  • Breathing speed OK? Was walking on the treadmill disturbing?
  • Volume OK? Any sound from outside?
  • Enjoyable running with the sounds?
  • Reasons not to adhere? Distracted, swallowing, towel, thoughts...
  • Was it easy to do, even when unfocused/multitasking, e.g., looking around?
  • Please rank the sounds according to your preference.
Discussion on sounds
  • Which sound was easiest to follow?
  • Which hardest?
  • What strategies did you use to follow the sound?
  • Did you couple your breath to your steps?
  • How did you try to time your breathing to the sound?
Interpretation of the questionnaires
  • “Sound nervt mich” (“the sound annoys me”) question > how did you interpret it?
  • What does “erschöpft” (exhausted) mean to you?
  • “Nützlich” (useful)?
Discussion on personal routines
  • What normal running routines do you have?
  • Do you usually do any special breathing while running?
  • Music? Podcast?
  • What are factors that make your run enjoyable?
  • What makes your runs less enjoyable? (side stitches/“Seitenstiche”, etc.)
  • Can you imagine using it during running yourself? Would it fit your personal routines?

References

  1. Hulteen, R.M.; Smith, J.J.; Morgan, P.J.; Barnett, L.M.; Hallal, P.C.; Colyvas, K.; Lubans, D.R. Global participation in sport and leisure-time physical activities: A systematic review and meta-analysis. Prev. Med. 2017, 95, 14–25. [Google Scholar] [CrossRef]
  2. Strava. Year in Sport 2020. Available online: https://blog.strava.com/press/yis2020/ (accessed on 1 February 2024).
  3. Hockey, J.; Allen-Collinson, J. Chapter Digging in: The sociological phenomenology of ‘doing endurance’ in distance-running. In Endurance Running: A Socio-Cultural Examination; Routledge: London, UK, 2016; pp. 227–242. [Google Scholar]
  4. Nicolò, A.; Massaroni, C.; Passfield, L. Respiratory Frequency during Exercise: The Neglected Physiological Measure. Front. Physiol. 2017, 8, 922. [Google Scholar] [CrossRef]
  5. Harbour, E.; Stöggl, T.; Schwameder, H.; Finkenzeller, T. Breath Tools: A Synthesis of Evidence-Based Breathing Strategies to Enhance Human Running. Front. Physiol. 2022, 13, 813243. [Google Scholar] [CrossRef]
  6. Matsumoto, T.; Matsunaga, A.; Hara, M.; Saitoh, M.; Yonezawa, R.; Ishii, A.; Kutsuna, T.; Yamamoto, K.; Masuda, T. Effects of the breathing mode characterized by prolonged expiration on respiratory and cardiovascular responses and autonomic nervous activity during the exercise. Jpn. J. Phys. Fit. Sport. Med. 2008, 315–326. [Google Scholar]
  7. Matsumoto, T.; Masuda, T.; Hotta, K.; Shimizu, R.; Ishii, A.; Kutsuna, T.; Yamamoto, K.; Hara, M.; Takahira, N.; Matsunaga, A. Effects of prolonged expiration breathing on cardiopulmonary responses during incremental exercise. Respir. Physiol. Neurobiol. 2011, 178, 275–282. [Google Scholar] [CrossRef] [PubMed]
  8. Morton, D.; Callister, R. Exercise-related transient abdominal pain (ETAP). Sport. Med. 2015, 45, 23–35. [Google Scholar] [CrossRef] [PubMed]
  9. Archiza, B.; Leahy, M.G.; Kipp, S.; Sheel, A.W. An integrative approach to the pulmonary physiology of exercise: When does biological sex matter? Eur. J. Appl. Physiol. 2021, 121, 2377–2391. [Google Scholar] [CrossRef]
  10. Menheere, D.; Janssen, M.; Funk, M.; van der Spek, E.; Lallemand, C.; Vos, S. Runner’s Perceptions of Reasons to Quit Running: Influence of Gender, Age and Running-Related Characteristics. Int. J. Environ. Res. Public Health 2020, 17, 6046. [Google Scholar] [CrossRef]
  11. Fokkema, T.; Hartgens, F.; Kluitenberg, B.; Verhagen, E.; Backx, F.J.; van der Worp, H.; Bierma-Zeinstra, S.M.; Koes, B.W.; van Middelkoop, M. Reasons and predictors of discontinuation of running after a running program for novice runners. J. Sci. Med. Sport 2019, 22, 106–111. [Google Scholar] [CrossRef]
  12. Godbout, A.; Boyd, J.E. Audio Visual Synchronization of Rhythm. In Proceedings of the 2015 International Conference on 3D Vision, Lyon, France, 19–22 October 2015; pp. 589–597. [Google Scholar]
  13. Van Rheden, V.; Harbour, E.; Finkenzeller, T.; Burr, L.A.; Meschtscherjakov, A.; Tscheligi, M. Run, Beep, Breathe: Exploring the Effects on Adherence and User Experience of 5 Breathing Instruction Sounds While Running. In Proceedings of the Audio Mostly 2021 (AM ’21), Virtual/Trento, Italy, 1–3 September 2021; pp. 16–23. [Google Scholar] [CrossRef]
  14. Jensen, M.M.; Mueller, F.F. Running with Technology: Where Are We Heading? In Proceedings of the 26th Australian Computer-Human Interaction Conference on Designing Futures: The Future of Design (OzCHI ’14), New York, NY, USA, 2–5 December 2014; pp. 527–530. [Google Scholar] [CrossRef]
  15. Elvitigala, D.; Karahanoğlu, A.; Matviienko, A.; Vidal, L.; Postma, D.; Jones, M.; Montoya, M.; Harrison, D.; Elbæk, L.; Daiber, F.; et al. Grand Challenges in SportsHCI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), Honolulu, HI, USA, 11–16 May 2024; p. 20. [Google Scholar] [CrossRef]
  16. Max, E.J.; Samendinger, S.; Winn, B.; Kerr, N.L.; Pfeiffer, K.A.; Feltz, D.L. Enhancing aerobic exercise with a novel virtual exercise buddy based on the Köhler effect. Games Health J. 2016, 5, 252–257. [Google Scholar] [CrossRef]
  17. Michael, A.; Lutteroth, C. Race yourselves: A longitudinal exploration of self-competition between past, present, and future performances in a vr exergame. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar]
  18. Feltz, D.; Forlenza, S.; Winn, B.; Kerr, N. Cyber buddy is better than no buddy: A test of the Köhler motivation effect in exergames. Games Health J. 2014, 3, 98–105. [Google Scholar] [CrossRef]
  19. Forlenza, S.; Kerr, N.; Irwin, B.; Feltz, D. Is my exercise partner similar enough? Partner characteristics as a moderator of the Köhler effect in exergames. Games Health J. 2012, 1, 436–441. [Google Scholar] [CrossRef]
  20. Ioannou, C.; Archard, P.; O’Neill, E.; Lutteroth, C. Virtual performance augmentation in an immersive jump & run exergame. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019. [Google Scholar]
  21. Nunes, M.; Nedel, L.; Roesler, V. Motivating people to perform better in exergames: Competition in virtual environments. In Proceedings of the 29th Annual ACM Symposium on Applied Computing, Gyeongju, Republic of Korea, 24–28 March 2014; pp. 970–975. [Google Scholar]
  22. Burr, L.; Betzlbacher, N.; Meschtscherjakov, A.; Tscheligi, M. Breathing Training on the Run: Exploring Users Perception on a Gamified Breathing Training Application During Treadmill Running. In Proceedings of the International Conference on Persuasive Technology, Doha, Qatar, 29–31 March 2022; pp. 58–74. [Google Scholar]
  23. Hamada, T.; Okada, M.; Kitazaki, M. Jogging with a virtual runner using a see-through HMD. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 445–446. [Google Scholar] [CrossRef]
  24. Tan, C.T.; Byrne, R.; Lui, S.; Liu, W.; Mueller, F. JoggAR: A mixed-modality AR approach for technology-augmented jogging. In Proceedings of the SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications (SA 2015), Kobe, Japan, 2–6 November 2015. [Google Scholar]
  25. Zarin, R. Faster. Stronger. Better? Designing for Enhanced Engagement of Extreme Sports. Ph.D. Thesis, Umeå Universitet, Umeå, Sweden, 2017. [Google Scholar]
  26. Cooper, D. Recon Jet Review: Expensive Fitness Glasses With Potential to Be Better. Engadget. 2015. Available online: https://www.engadget.com/2015-07-17-recon-jet-review.html (accessed on 1 February 2024).
  27. Janssen, M.; Walravens, R.; Thibaut, E.; Scheerder, J.; Brombacher, A.; Vos, S. Understanding Different Types of Recreational Runners and How They Use Running-Related Technology. Int. J. Environ. Res. Public Health 2020, 17, 2276. [Google Scholar] [CrossRef]
  28. Mauriello, M.; Gubbels, M.; Froehlich, J. Social fabric fitness: The design and evaluation of wearable E-textile displays to support group running. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 2833–2842. [Google Scholar]
  29. Colley, A.; Woźniak, P.; Kiss, F.; Häkkilä, J. Shoe Integrated Displays: A Prototype Sports Shoe Display and Design Space. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction, Oslo, Norway, 29 September–3 October 2018; pp. 39–46. [Google Scholar] [CrossRef]
  30. Seuter, M.; Pfeiffer, M.; Bauer, G.; Zentgraf, K.; Kray, C. Running with technology: Evaluating the impact of interacting with wearable devices on running movement. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Maui, HI, USA, 11–15 September 2017. [Google Scholar]
  31. Nylander, S.; Jacobsson, M.; Tholander, J. Runright: Real-Time Visual and Audio Feedback on Running. In Proceedings of the CHI ’14 Extended Abstracts on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April 2014; pp. 583–586. [Google Scholar] [CrossRef]
  32. Greinacher, R.; Kojić, T.; Meier, L.; Parameshappa, R.; Möller, S.; Voigt-Antons, J. Impact of tactile and visual feedback on breathing rhythm and user experience in VR exergaming. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QOMEX), Athlone, Ireland, 26–28 May 2020. [Google Scholar]
  33. Valsted, F.M.; Nielsen, C.V.H.; Jensen, J.Q.; Sonne, T.; Jensen, M.M. Strive: Exploring Assistive Haptic Feedback on the Run. In Proceedings of the 29th Australian Conference on Computer-Human Interaction, Brisbane, Australia, 28 November–1 December 2017; pp. 275–284. [Google Scholar] [CrossRef]
  34. Harbour, E.; Van Rheden, V.; Schwameder, H.; Finkenzeller, T. Step-adaptive sound guidance enhances locomotor-respiratory coupling in novice female runners: A proof-of-concept study. Front. Sport. Act. Living 2023, 5, 1112663. [Google Scholar] [CrossRef]
  35. Hassan, M.; Daiber, F.; Wiehr, F.; Kosmalla, F.; Krüger, A. FootStriker: An EMS-Based Foot Strike Assistant for Running. Proc. Acm Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 2. [Google Scholar] [CrossRef]
  36. Yu, B.; Feijs, L.; Funk, M.; Hu, J. Breathe with touch: A tactile interface for breathing assistance system. In Proceedings of the Human-Computer Interaction—INTERACT 2015: 15th IFIP TC 13 International Conference, Bamberg, Germany, 14–18 September 2015; pp. 45–52. [Google Scholar]
  37. Karpashevich, P.; Sanches, P.; Garrett, R.; Luft, Y.; Cotton, K.; Tsaknaki, V.; Höök, K. Touching Our Breathing through Shape-Change: Monster, Organic Other, or Twisted Mirror. ACM Trans. Comput.-Hum. Interact. 2022, 29, 22. [Google Scholar] [CrossRef]
  38. Van Rheden, V.; Grah, T.; Meschtscherjakov, A. Sonification Approaches in Sports in the Past Decade: A Literature Review. In Proceedings of the 15th International Conference on Audio Mostly (AM ’20), Graz, Austria, 15–17 September 2020; pp. 199–205. [Google Scholar] [CrossRef]
  39. Schaffert, N.; Janzen, T.B.; Mattes, K.; Thaut, M.H. A Review on the Relationship Between Sound and Movement in Sports and Rehabilitation. Front. Psychol. 2019, 10, 244. [Google Scholar] [CrossRef] [PubMed]
  40. Karageorghis, C.I.; Priest, D.L. Music in the exercise domain: A review and synthesis (Part I). Int. Rev. Sport Exerc. Psychol. 2012, 5, 44–66. [Google Scholar] [CrossRef]
  41. Zwinderman, M.; Zavialova, T.; Tetteroo, D.; Lehouck, P. Oh music, where art thou? In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, Stockholm, Sweden, 30 August–2 September 2011; pp. 533–538. [Google Scholar]
  42. Witkowski, E. Running from zombies. In Proceedings of the 9th Australasian Conference on Interactive Entertainment: Matters of Life and Death, Melbourne, VIC, Australia, 30 September–1 October 2013; pp. 1–8. [Google Scholar]
  43. Mueller, F.F.; Vetere, F.; Gibbs, M.R.; Edge, D.; Agamanolis, S.; Sheridan, J.G.; Heer, J. Balancing exertion experiences. In Proceedings of the Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 1853–1862. [Google Scholar]
  44. Grond, F.; Berger, J. Parameter mapping sonification. In The Sonification Handbook; Logos Publishing House: Berlin, Germany, 2011. [Google Scholar]
  45. Cesarini, D.; Ungerechts, B.E.; Hermann, T. Swimmers in the loop: Sensing moving water masses for an auditory biofeedback system. In Proceedings of the 2015 IEEE Sensors Applications Symposium (SAS), Zadar, Croatia, 13–15 April 2015; pp. 1–6. [Google Scholar]
  46. Dubus, G. Evaluation of four models for the sonification of elite rowing. J. Multimodal User Interfaces 2012, 5, 143–156. [Google Scholar] [CrossRef]
  47. Schaffert, N.; Mattes, K.; Effenberg, A.O. Examining effects of acoustic feedback on perception and modification of movement patterns in on-water rowing training. In Proceedings of the 6th Audio Mostly Conference: A Conference on Interaction with Sound, Coimbra, Portugal, 7–9 September 2011; pp. 122–129. [Google Scholar]
  48. Hermann, T.; Ungerechts, B.; Toussaint, H.; Grote, M. Sonification of pressure changes in swimming for analysis and optimization. In Proceedings of the 18th International Conference on Auditory Display, Atlanta, GA, USA, 18–21 June 2012. [Google Scholar]
  49. Yang, J.; Hunt, A. Real-time sonification of biceps curl exercise using muscular activity and kinematics. In Proceedings of the 21st International Conference on Auditory Display, Graz, Austria, 8–10 July 2015. [Google Scholar]
  50. Godbout, A.; Thornton, C.; Boyd, J.E. Mobile sonification for athletes: A case study in commercialization of sonification. In Proceedings of the 20th International Conference on Auditory Display (ICAD-2014), New York, NY, USA, 22–25 June 2014. [Google Scholar]
  51. Hoffmann, C.P.; Bardy, B.G. Dynamics of the locomotor–respiratory coupling at different frequencies. Exp. Brain Res. 2015, 233, 1551–1561. [Google Scholar] [CrossRef] [PubMed]
  52. Lim, H.B.; Karageorghis, C.I.; Romer, L.M.; Bishop, D.T. Psychophysiological effects of synchronous versus asynchronous music during cycling. Med. Sci. Sports Exerc. 2014, 46, 407–413. [Google Scholar] [CrossRef] [PubMed]
  53. Nijs, A.; Roerdink, M.; Beek, P.J. Cadence Modulation in Walking and Running: Pacing Steps or Strides? Brain Sci. 2020, 10, 273. [Google Scholar] [CrossRef]
  54. Styns, F.; van Noorden, L.; Moelants, D.; Leman, M. Walking on music. Hum. Mov. Sci. 2007, 26, 769–785. [Google Scholar] [CrossRef]
  55. Buhmann, J.; Moens, B.; Lorenzoni, V.; Leman, M. Shifting the Musical Beat to Influence Running Cadence. In Proceedings of the 25th Anniversary Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2017), Ghent, Belgium, 31 July–4 August 2017. [Google Scholar]
  56. Hockman, J.; Wanderley, M.M.; Fujinaga, I. Real-Time Phase Vocoder Manipulation by Runner’s Pace. In Proceedings of the 9th International Conference on New Interfaces for Musical Expression (NIME 2009), Pittsburgh, PA, USA, 3–6 June 2009; pp. 90–93. [Google Scholar]
  57. Van Dyck, E.; Buhmann, J.; Lorenzoni, V. Instructed versus spontaneous entrainment of running cadence to music tempo. Ann. N. Y. Acad. Sci. 2021, 1489, 91. [Google Scholar] [CrossRef]
  58. Fortmann, J.; Pielot, M.; Mittelsdorf, M.; Büscher, M.; Trienen, S.; Boll, S. PaceGuard: Improving running cadence by real-time auditory feedback. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services Companion, San Francisco, CA, USA, 21–24 September 2012; pp. 5–10. [Google Scholar] [CrossRef]
  59. Oliver, N.; Flores-Mangas, F. MPTrain: A mobile, music and physiology-based personal trainer. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Helsinki, Finland, 12–15 September 2006; pp. 21–28. [Google Scholar] [CrossRef]
  60. Byblow, W.D.; Carson, R.G.; Goodman, D. Expressions of asymmetries and anchoring in bimanual coordination. Hum. Mov. Sci. 1994, 13, 3–28. [Google Scholar] [CrossRef]
  61. Fink, P.W.; Foo, P.; Jirsa, V.K.; Kelso, J.S. Local and global stabilization of coordination by sensory information. Exp. Brain Res. 2000, 134, 9–20. [Google Scholar] [CrossRef] [PubMed]
  62. Haas, F.; Distenfeld, S.; Axen, K. Effects of perceived musical rhythm on respiratory pattern. J. Appl. Physiol. 1986, 61, 1185–1191. [Google Scholar] [CrossRef]
  63. Patibanda, R.; Mueller, F.; Leskovsek, M.; Duckworth, J. Life Tree. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, Amsterdam, The Netherlands, 15–18 October 2017. [Google Scholar] [CrossRef]
  64. Shamekhi, A.; Bickmore, T. Breathe Deep: A Breath-Sensitive Interactive Meditation Coach. In Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare, New York, NY, USA, 21–24 May 2018; pp. 108–117. [Google Scholar] [CrossRef]
  65. Ghandeharioun, A.; Picard, R. BrightBeat: Effortlessly Influencing Breathing for Cultivating Calmness and Focus. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1624–1631. [Google Scholar] [CrossRef]
  66. Morimoto, Y.; van Geer, B. Breathing space: Biofeedback sonification for meditation in autonomous vehicles. In Proceedings of the 25th International Conference on Auditory Display (ICAD 2019), Newcastle, UK, 23–27 June 2019. [Google Scholar]
  67. Zafar, M.A.; Ahmed, B.; Rihawi, R.A.; Gutierrez-Osuna, R. Gaming Away Stress: Using Biofeedback Games to Learn Paced Breathing. IEEE Trans. Affect. Comput. 2020, 11, 519–531. [Google Scholar] [CrossRef]
  68. Harris, J.; Vance, S.; Fernandes, O.; Parnandi, A.; Gutierrez-Osuna, R. Sonic respiration: Controlling respiration rate through auditory biofeedback. In Proceedings of the CHI’14 Extended Abstracts on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 2383–2388. [Google Scholar] [CrossRef]
  69. Gavish, B. Device-guided breathing in the home setting: Technology, performance and clinical outcomes. Biol. Psychol. 2010, 84, 150–156. [Google Scholar] [CrossRef]
  70. Sharma, M.; Frishman, W.; Gandhi, K. RESPeRATE: Nonpharmacological treatment of hypertension. Cardiol. Rev. 2011, 19, 47–51. [Google Scholar] [CrossRef]
  71. Cernes, R.; Zimlichman, R. RESPeRATE: The role of paced breathing in hypertension treatment. J. Am. Soc. Hypertens. 2015, 9, 38–47. [Google Scholar] [CrossRef]
  72. Hoffmann, C.; Villard, S.; Bardy, B. Stabilizing the locomotor-respiratory coupling using a metronome to save energy. In Proceedings of the International Conference SKILLS 2011, Paris, France, 15–16 December 2011; p. 00036. [Google Scholar]
  73. Bardy, B.; Hoffmann, C.; Moens, B.; Leman, M.; Dalla Bella, S. Sound-induced stabilization of breathing and moving. Ann. N. Y. Acad. Sci. 2015, 1337, 94–100. [Google Scholar] [CrossRef]
  74. Hoffmann, C.P.; Torregrosa, G.; Bardy, B.G. Sound stabilizes locomotor-respiratory coupling and reduces energy cost. PLoS ONE 2012, 7, e45206. [Google Scholar] [CrossRef]
  75. Van Rheden, V.; Harbour, E.; Finkenzeller, T.; Meschtscherjakov, A. Breath Tools: Exploring the Effects on Adherence and User Experience of 4 Sounds Assisting Runners with Coupling Breath to Steps. In Proceedings of the CHI’23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023. [Google Scholar] [CrossRef]
  76. Salazar-Martínez, E.; de Matos, T.R.; Arrans, P.; Santalla, A.; Orellana, J.N. Ventilatory efficiency response is unaffected by fitness level, ergometer type, age or body mass index in male athletes. Biol. Sport 2018, 35, 393. [Google Scholar] [CrossRef]
  77. Borg, G. Borg’s Perceived Exertion and Pain Scales; Human Kinetics: Champaign, IL, USA, 1998. [Google Scholar]
  78. Woltmann, M.L.; Foster, C.; Porcari, J.P.; Camic, C.L.; Dodge, C.; Haible, S.; Mikat, R.P. Evidence that the talk test can be used to regulate exercise intensity. J. Strength Cond. Res. 2015, 29, 1248–1254. [Google Scholar] [CrossRef]
  79. Minge, M.; Riedel, L. meCUE-Ein modularer fragebogen zur erfassung des nutzungserlebens. In Mensch & Computer 2013: Interaktive Vielfalt; Oldenbourg: Berlin, Germany, 2013. [Google Scholar]
  80. Harbour, E.; Lasshofer, M.; Genitrini, M.; Schwameder, H. Enhanced breathing pattern detection during running using wearable sensors. Sensors 2021, 21, 5606. [Google Scholar] [CrossRef]
  81. Schaal, N.K.; Bauer, A.K.R.; Müllensiefen, D. Der Gold-MSI: Replikation und validierung eines fragebogeninstrumentes zur messung musikalischer erfahrenheit anhand einer deutschen stichprobe. Music. Sci. 2014, 18, 423–447. [Google Scholar] [CrossRef]
Figure 1. Study 1: The five sounds, arranged in ascending order of information richness from the lowest (top) to the highest (bottom). The waveform illustrates the pattern of respiratory airflow (image taken from Ref. [13]).
Figure 2. Instruments: (A) treadmill, (B) Bluetooth speaker, (C) Hexoskin sensor shirt, (D) phone with Movesense app, (E) Movesense IMU, (F) laptop running Ableton Live (image from Ref. [13]).
Figure 3. Visual overview of the steps of the experimental procedure.
Figure 4. Breathing rate (5-breath rolling mean) and breathing rate variability for all sound trials of study 1 for participant S1P02. Participants ran for 5 min, of which the first minute (t = −60 to 0 s) was without sound. The vertical dashed line indicates the sound start. The horizontal dashed lines represent the upper and lower limits of the adherence area (±5% of the instructed BR) (image based on Ref. [13]).
Figure 5. Breathing rate (five-breath rolling window) in bpm and heart rate (HR) in bpm during the Siren sound condition for S1P04. The vertical dashed line indicates the sound start. The horizontal dashed lines represent the upper and lower limits of the adherence area (±5% of the instructed BR). Note the three attachments interrupted by two detachments.
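The attachment/detachment terminology of Figures 4 and 5 can be expressed compactly in code. The sketch below smooths the breathing rate with a 5-breath rolling mean, marks samples inside the ±5% adherence band, and counts contiguous in-band and out-of-band runs; the episode definition (no minimum run length) and all names are assumptions of this sketch, not the authors' exact procedure.

```python
# Hedged sketch of the adherence-band logic in Figures 4 and 5:
# 5-breath rolling mean vs. a band of +/-5% around the instructed BR.
# Counting every contiguous run as one episode is an assumption here.

def rolling_mean(values, window=5):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def adherence(br_series_bpm, instructed_br_bpm, window=5):
    smoothed = rolling_mean(br_series_bpm, window)
    lo, hi = 0.95 * instructed_br_bpm, 1.05 * instructed_br_bpm
    in_band = [lo <= b <= hi for b in smoothed]
    runs = []  # (state, run length) pairs over the smoothed series
    for state in in_band:
        if runs and runs[-1][0] == state:
            runs[-1][1] += 1
        else:
            runs.append([state, 1])
    attachments = sum(1 for s, _ in runs if s)
    detachments = sum(1 for s, _ in runs if not s)
    pct_attached = 100.0 * sum(in_band) / len(in_band)
    return attachments, detachments, pct_attached

br = [28, 29, 30, 30, 31, 30, 34, 35, 30, 30, 30, 29]  # toy BR data (bpm)
print(adherence(br, instructed_br_bpm=30))  # -> (2, 1, 50.0)
```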
Figure 6. Subjective effectiveness (user experience item 1), rated on a scale of 1 (not easy at all) to 10 (very easy). One star (*) indicates p < 0.05; two stars (**) indicate p < 0.01 (image from Ref. [13]).
Figure 7. Study 2: breathing rate (5-breath rolling mean) of participant S2P04 during all runs. The vertical dashed line indicates the sound start. The horizontal dashed lines represent the upper and lower limits of the adherence area (±5% of the instructed BR).
Figure 8. Study 2: subjective effectiveness (user experience item 1), rated on a scale of 1 (not easy at all) to 10 (very easy). One star (*) indicates p < 0.05.
Figure 9. Study 2: preference rankings of the participants, from rank 1 (best) to rank 4 (worst).
Table 1. Study 1: Participant overview.

| Participant Code | Age (Years) | Running Frequency | Run Duration (min) | Distance (km) | Breathing Technique While Running | Other Breathing Techniques | Music |
|---|---|---|---|---|---|---|---|
| S1P01 | 28 | >2 times/week | 45 | 7 | No | No | No |
| S1P02 | 26 | <1 time/month | 45 | 8 | No | Slow breathing | No |
| S1P03 | 32 | 1–2 times/week | 60 | 8–10 | No | Yoga | No |
| S1P04 | 31 | 1–2 times/week | 45 | 6–10 | No | No | Yes |
| S1P05 | 29 | 1–2 times/week | 35 | 7 | No | No | Yes |
| S1P06 | 40 | 1–2 times/week | 45 | 7 | No | No | No |
| S1P07 | 26 | 1–2 times/week | 45 | 8 | No | No | No |
| S1P08 | 28 | 1–2 times/month | 30 | 5 | No | No | No |
| S1P09 | 26 | 1–2 times/month | 40 | 5 | No | No | Yes |
| S1P10 | 26 | 1–2 times/week | 60 | 10 | No | No | Yes |
| S1P11 | 27 | 1–2 times/month | 45 | 5 | No | No | Yes |
Table 2. Study 1: Adherence statistics for all sound conditions: Breathing Rate (bpm), Breathing Rate Variability, Mean Absolute Percentage Error (%), Mean Relative Percentage Error (%), # of attachments, percent time attached (%), # of detachments, percent time detached (%).

| Condition | Breathing Rate (bpm) | Breathing Rate Variability (%) | MAPE (%) | MRPE (%) | # Attach/Participant | Percent Time Attached (%) | # Detach/Participant | Percent Time Detached (%) |
|---|---|---|---|---|---|---|---|---|
| Inhalation Metr. | 27.26 ± 4.33 | 8.64 ± 2.81 | 1.38 ± 1.21 | 0.56 ± 1.79 | 3.64 ± 1.63 | 81.09 ± 11.66 | 0.55 ± 0.69 | 1.96 ± 2.56 |
| Exhalation Metr. | 27.25 ± 4.24 | 8.08 ± 2.70 | 0.56 ± 0.38 | 0.55 ± 0.39 | 3.00 ± 1.73 | 86.53 ± 14.19 | 0.27 ± 0.90 | 1.15 ± 3.81 |
| Full Breath Metr. | 27.33 ± 4.31 | 9.49 ± 2.55 | 1.51 ± 1.63 | 0.84 ± 2.09 | 2.36 ± 1.36 | 88.17 ± 7.41 | 0.91 ± 0.94 | 3.25 ± 3.15 |
| Siren | 27.44 ± 4.13 | 9.03 ± 2.52 | 1.73 ± 1.66 | 1.33 ± 2.02 | 2.45 ± 1.44 | 89.69 ± 8.88 | 0.55 ± 0.69 | 1.84 ± 2.37 |
| Accordion | 27.23 ± 4.23 | 8.77 ± 3.56 | 0.62 ± 0.38 | 0.51 ± 0.52 | 2.64 ± 1.12 | 88.58 ± 6.40 | 0.73 ± 1.10 | 2.80 ± 4.27 |
Table 3. User experience of running with the sounds in study 1: means and standard deviations.

| Sound Condition | Effectiveness | Efficiency | Intention to Use | Negative Emotion | Positive Emotion | Aesthetics |
|---|---|---|---|---|---|---|
| Inhalation Metronome | 5.36 ± 2.42 | 5.27 ± 1.85 | 2.18 ± 2.52 | 4.36 ± 2.84 | 3.73 ± 2.49 | 4.36 ± 2.38 |
| Exhalation Metronome | 5.00 ± 2.72 | 6.00 ± 2.10 | 2.55 ± 3.27 | 4.55 ± 3.27 | 2.82 ± 2.18 | 4.18 ± 2.14 |
| Full Breath Metronome | 7.09 ± 2.17 | 6.82 ± 1.72 | 3.09 ± 3.08 | 3.55 ± 2.58 | 4.18 ± 2.18 | 5.18 ± 2.18 |
| Siren | 7.00 ± 2.28 | 6.55 ± 2.54 | 2.36 ± 2.38 | 4.82 ± 2.75 | 3.82 ± 1.94 | 3.91 ± 2.66 |
| Accordion | 7.55 ± 1.86 | 6.91 ± 2.59 | 4.18 ± 2.96 | 3.91 ± 3.11 | 5.27 ± 2.72 | 5.91 ± 3.05 |
Table 4. Study 2: Participant overview.

| Participant Code | Age (Years) | Running Frequency | Run Duration (min) | Run Distance (km) | Breathing Technique While Running | Other Breathing Techniques | Music |
|---|---|---|---|---|---|---|---|
| S2P01 | 29 | 1–2/month | 35 | 6 | LRC | Yes | No |
| S2P02 | 24 | >2/week | 30 | 5 | No | No | Yes |
| S2P03 | 24 | 1–2/month | 35 | 5 | No | No | Yes |
| S2P04 | 24 | 1–2/month | 40 | 7 | Deep breathing | Belly breathing | No |
| S2P05 | 22 | <1/month | 30 | 4 | No | No | Yes |
| S2P06 | 21 | <1/month | 25 | 4 | No | No | Yes |
| S2P07 | 21 | 1–2/month | 30 | 5 | No | No | No |
| S2P08 | 21 | <1/month | 50 | 7–8 | No | No | Yes |
| S2P09 | 23 | 1–2/week | 35 | 5 | No | No | Yes |
| S2P10 | 30 | 1–2/month | 40 | 5 | No | No | No |
| S2P11 | 28 | 1–2/month | 40 | 5 | LRC | Belly breathing | No |
Table 5. Study 2: Adherence data for each sound condition: Breathing Rate (bpm), Breathing Rate Variability, Mean Absolute Percentage Error (%), Mean Relative Percentage Error (%), # of attachments, percent time attached (%), # of detachments, percent time detached (%).

| Sound Condition | Breathing Rate (bpm) | Breathing Rate Variability (%) | MAPE (%) | MRPE (%) | # Attach/Participant | Percent Time Attached (%) | # Detach/Participant | Percent Time Detached (%) |
|---|---|---|---|---|---|---|---|---|
| No Sound | 33.50 ± 6.98 | 16.51 ± 5.18 | – | – | – | – | – | – |
| Full Breath Metronome | 32.22 ± 5.92 | 7.93 ± 2.85 | 1.36 ± 1.79 | 1.79 ± 0.28 | 2.30 ± 1.34 | 90.71 ± 4.77 | 0.90 ± 0.74 | 2.76 ± 2.26 |
| Rock Sound | 32.17 ± 6.08 | 8.64 ± 3.91 | 0.85 ± 1.52 | 1.52 ± 0.57 | 3.00 ± 1.70 | 83.95 ± 10.61 | 1.70 ± 1.64 | 5.13 ± 4.47 |
| Breath Sound | 32.16 ± 6.05 | 8.50 ± 3.20 | 1.12 ± 2.06 | 2.06 ± 0.57 | 1.90 ± 1.66 | 88.39 ± 9.07 | 0.80 ± 0.92 | 2.32 ± 2.88 |
| Harp Sound | 32.10 ± 8.27 | 11.72 ± 6.18 | 1.50 ± 2.22 | 2.22 ± 1.00 | 2.45 ± 1.29 | 81.57 ± 22.18 | 1.64 ± 1.91 | 4.52 ± 4.04 |
Table 6. Study 2: user experience of running with the sounds: means and standard deviations.

| Sound Condition | Effectiveness | Efficiency | Intention to Use | Negative Emotion | Positive Emotion | Aesthetics |
|---|---|---|---|---|---|---|
| Accordion | 7.18 ± 3.25 | 5.73 ± 3.61 | 2.82 ± 3.40 | 5.00 ± 3.69 | 2.82 ± 2.79 | 2.64 ± 2.62 |
| Rock Sound | 7.91 ± 2.59 | 5.91 ± 3.24 | 2.73 ± 3.17 | 5.09 ± 3.81 | 3.73 ± 3.23 | 4.09 ± 3.39 |
| Breath Sound | 8.91 ± 1.92 | 7.55 ± 2.21 | 3.45 ± 2.58 | 2.55 ± 1.51 | 3.73 ± 2.33 | 4.36 ± 2.06 |
| Harp Sound | 6.82 ± 2.89 | 5.36 ± 4.08 | 3.00 ± 3.44 | 4.45 ± 3.78 | 3.64 ± 3.07 | 3.91 ± 2.88 |
Table 7. Visual comparison between the UX scores of the Accordion (study 1) and the sounds from study 2. Green- and yellow-highlighted scores represent absolutely higher and relatively higher UX scores compared to study 1, taking the Accordion sound as the relative reference.

| Study | Sound Condition | Effectiveness | Efficiency | Intention to Use | Negative Emotion | Positive Emotion | Aesthetics |
|---|---|---|---|---|---|---|---|
| Study 1 | Accordion | 7.55 ± 1.86 | 6.91 ± 2.59 | 4.18 ± 2.96 | 3.91 ± 3.11 | 5.27 ± 2.72 | 5.91 ± 3.05 |
| Study 2 | Accordion | 7.18 ± 3.25 | 5.73 ± 3.61 | 2.82 ± 3.40 | 5.00 ± 3.69 | 2.82 ± 2.79 | 2.64 ± 2.62 |
| | Rock Sound | 7.91 ± 2.59 | 5.91 ± 3.24 | 2.73 ± 3.17 | 5.09 ± 3.81 | 3.73 ± 3.23 | 4.09 ± 3.39 |
| | Breath Sound | 8.91 ± 1.92 | 7.55 ± 2.21 | 3.45 ± 2.58 | 2.55 ± 1.51 | 3.73 ± 2.33 | 4.36 ± 2.06 |
| | Harp Sound | 6.82 ± 2.89 | 5.36 ± 4.08 | 3.00 ± 3.44 | 4.45 ± 3.78 | 3.64 ± 3.07 | 3.91 ± 2.88 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
