Published March 8, 2023 | Version 1.1.0
Dataset | Open Access

STARSS23: Sony-TAu Realistic Spatial Soundscapes 2023

Description

DESCRIPTION:

The Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23) dataset contains multichannel recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of prominent events belonging to a set of target classes. The dataset was collected in two different countries: in Tampere, Finland by the Audio Research Group (ARG) of Tampere University (TAU), and in Tokyo, Japan by SONY, using a similar setup and annotation procedure. The dataset is delivered in two 4-channel spatial recording formats, a microphone array format (MIC) and a first-order Ambisonics format (FOA). These recordings serve as the development and evaluation datasets for the Sound Event Localization and Detection Task of the DCASE 2023 Challenge.

The STARSS23 dataset is a continuation of the STARSS22 dataset. It extends the previous version with the following:

  • An additional 2hrs 30mins of recordings in the development set, from 5 new rooms, distributed in 47 new recording clips.
  • An additional 1hr 40mins of recordings added to the evaluation set of the dataset.
  • 360° videos spatially and temporally aligned to the audio recordings of the dataset (apart from 12 audio-only clips).
  • Distance labels (in cm) for the spatially annotated sound events, complementing the previous azimuth- and elevation-only labels.

Contrary to the three previous datasets of synthetic spatial sound scenes, TAU Spatial Sound Events 2019 (development/evaluation), TAU-NIGENS Spatial Sound Events 2020, and TAU-NIGENS Spatial Sound Events 2021, associated with previous iterations of the DCASE Challenge, the STARSS22-23 datasets contain recordings of real sound scenes and hence avoid some of the pitfalls of synthetic scene generation. Some key properties that follow from this are:

  • annotations are based on a combination of human annotators for sound event activity and optical tracking for spatial positions,
  • the annotated target event classes are determined by the composition of the real scenes,
  • the density, polyphony, occurrences, and co-occurrences of events and sound classes are not random; they follow the actions and interactions of participants in the real scenes.

The first round of recordings was collected between September 2021 and January 2022. A second round of recordings was collected between November 2022 and February 2023.

Collection of data from the TAU side has received funding from Google.

A demo video combining the different modalities and spatial annotations can be found here.

REPORT & REFERENCE:

If you use this dataset, please cite the following reports describing its design, capturing, and annotation process:

Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji (2023). STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,

found here, and

Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen (2022). STARSS22: A dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE2022), Nancy, France.

found here.

AIM:

The STARSS22-23 dataset is suitable for training and evaluation of machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound-event-localization-and-detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds, and under diverse acoustic conditions.

Specifically, STARSS23 allows evaluation of audiovisual processing methods with a spatial dimension, such as audiovisual source localization or audiovisual object recognition.

SPECIFICATIONS:

General:

  • Recordings are taken in two different sites.
  • Each recording clip is part of a recording session happening in a unique room.
  • Groups of participants, sound making props, and scene scenarios are unique for each session (with a few exceptions).
  • To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sound events, the scenes are loosely scripted.
  • 13 target classes are identified in the recordings and strongly annotated by humans.
  • Spatial annotations for those active events are captured by an optical tracking system (see the label-parsing sketch after this list).
  • Sound events out of the target classes are considered as interference.
  • Occurrences of up to 3 simultaneous events are fairly common, while higher numbers of overlapping events (up to 5) can occur but are rare.
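
Since the annotation files themselves are not detailed here, the following is a minimal parsing sketch, assuming the per-clip CSV layout used in the DCASE SELD tasks: one row per active event per 100 ms frame, with columns frame index, class index, source index, azimuth (degrees), elevation (degrees), and distance (cm, introduced in STARSS23). The file path is hypothetical; consult the included README for the authoritative format.

```python
import csv

# Minimal sketch, assuming the DCASE SELD metadata convention:
# frame_index, class_index, source_index, azimuth_deg, elevation_deg, distance_cm
# (one row per active event per 100 ms frame). The path below is hypothetical.
def load_annotations(csv_path):
    events = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            frame, cls, src, azi, ele, dist = (int(v) for v in row)
            events.append({
                "time_s": frame * 0.1,   # 100 ms label resolution (assumed)
                "class": cls,            # index into the 13 target classes
                "source": src,           # distinguishes simultaneous same-class events
                "azimuth_deg": azi,
                "elevation_deg": ele,
                "distance_cm": dist,     # distance labels introduced in STARSS23
            })
    return events

events = load_annotations("metadata_dev/fold3_room21_mix001.csv")  # hypothetical path
print(len(events), events[0] if events else None)
```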

Volume, duration, and data split:

  • A total of 16 unique rooms captured in the recordings, 4 in Tokyo and 12 in Tampere (development set).
  • 70 recording clips of 30 sec ~ 5 min durations, with a total time of ~2hrs, captured in Tokyo (development dataset).
  • 98 recording clips of 40 sec ~ 9 min durations, with a total time of ~5.5hrs, captured in Tampere (development dataset).
  • 79 recording clips of 40 sec ~ 7 min durations, with a total time of ~3.5hrs, captured in both sites (evaluation dataset).
  • A training-testing split is provided for reporting results using the development dataset (see the file-splitting sketch after this list).
  • 40 recordings contributed by Sony for the training split, captured in 2 rooms (dev-train-sony).
  • 30 recordings contributed by Sony for the testing split, captured in 2 rooms (dev-test-sony).
  • 50 recordings contributed by TAU for the training split, captured in 7 rooms (dev-train-tau).
  • 48 recordings contributed by TAU for the testing split, captured in 5 rooms (dev-test-tau).
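
The split can be applied by parsing the clip file names. The sketch below assumes the fold[F]_room[R]_mix[NNN].wav naming seen in the dataset (e.g. fold3_room21_mix001.wav) and that, as in earlier SELD datasets, fold3 marks dev-train and fold4 marks dev-test; the directory name is also an assumption.

```python
from pathlib import Path

# Minimal sketch: group development clips into train/test by file name.
# Assumes fold[F]_room[R]_mix[NNN].wav naming and that fold3 = dev-train,
# fold4 = dev-test (as in earlier SELD datasets). Directory name assumed.
def split_dev_clips(audio_dir="foa_dev"):
    train, test = [], []
    for wav in sorted(Path(audio_dir).glob("**/*.wav")):
        fold = wav.name.split("_")[0]   # e.g. "fold3"
        if fold == "fold3":
            train.append(wav)
        elif fold == "fold4":
            test.append(wav)
    return train, test

train_clips, test_clips = split_dev_clips()
print(f"{len(train_clips)} training clips, {len(test_clips)} testing clips")
```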

Audio:

  • Sampling rate: 24kHz.
  • Bit depth: 16 bits.
  • Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC).

Video:

  • Video 360° format: equirectangular
  • Video resolution: 1920x960
  • Video frames per second (fps): 29.97
  • All audio recordings are accompanied by synchronised video recordings, apart from 12 audio recordings with missing videos (fold3_room21_mix001.wav - fold3_room21_mix012.wav)

More detailed information on the dataset can be found in the included README file.
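
As a quick sanity check against the specifications above, the following sketch loads one FOA clip and its synchronised video using the soundfile and opencv-python packages; the clip name is hypothetical and the videos are assumed to be delivered as MP4 files.

```python
import soundfile as sf
import cv2

# Minimal sketch: verify one clip against the listed specifications.
# The clip name is hypothetical; videos are assumed to be MP4.
clip = "fold4_room10_mix001"                   # hypothetical clip name
audio, sr = sf.read(f"foa_dev/{clip}.wav")     # shape: (samples, 4 channels)
assert sr == 24000 and audio.shape[1] == 4

cap = cv2.VideoCapture(f"video_dev/{clip}.mp4")
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)      # expected 1920
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)    # expected 960
fps = cap.get(cv2.CAP_PROP_FPS)                # expected ~29.97
cap.release()
print(sr, audio.shape, width, height, fps)
```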

SOUND CLASSES:

13 target sound event classes are annotated. The classes loosely follow the AudioSet ontology.

  0. Female speech, woman speaking
  1. Male speech, man speaking
  2. Clapping
  3. Telephone
  4. Laughter
  5. Domestic sounds
  6. Walk, footsteps
  7. Door, open or close
  8. Music
  9. Musical instrument
  10. Water tap, faucet
  11. Bell
  12. Knock

The content of some of these classes corresponds to events of a limited range of related AudioSet subclasses. For more information see the README file.
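
For convenience, the index-to-label mapping above can be written directly as a Python dictionary, e.g. for turning class indices from the annotation files into readable names:

```python
# The 13 target classes, keyed by the integer indices listed above.
SOUND_EVENT_CLASSES = {
    0: "Female speech, woman speaking",
    1: "Male speech, man speaking",
    2: "Clapping",
    3: "Telephone",
    4: "Laughter",
    5: "Domestic sounds",
    6: "Walk, footsteps",
    7: "Door, open or close",
    8: "Music",
    9: "Musical instrument",
    10: "Water tap, faucet",
    11: "Bell",
    12: "Knock",
}

print(SOUND_EVENT_CLASSES[10])   # "Water tap, faucet"
```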

EXAMPLE APPLICATION:

An implementation of a trainable model performing audio-only joint SELD, trained and evaluated with this dataset, is provided here. This implementation will serve as the baseline method in the DCASE 2023 Sound Event Localization and Detection Task, under the audio-only inference track.

Additionally, an implementation of a trainable model performing audiovisual SELD, trained and evaluated with this dataset, is provided here. This implementation will serve as the baseline method in the DCASE 2023 Sound Event Localization and Detection Task, under the audiovisual inference track.

DEVELOPMENT AND EVALUATION:

The current version (Version 1.1) of the dataset includes the development audio/video recordings with labels, together with the evaluation recordings without labels. These are used by the participants of Task 3 of the DCASE2023 Challenge to train and validate their submitted systems (development set), and to produce system outputs for the challenge evaluation phase (evaluation set).

Researchers who wish to compare their systems against the submissions of the DCASE2023 Challenge will have directly comparable results if they use the evaluation data as their testing set.

DOWNLOAD INSTRUCTIONS:

The file foa_dev.zip corresponds to the audio data of the FOA recording format.
The file mic_dev.zip corresponds to the audio data of the MIC recording format.

The file video_dev.zip contains the common videos for both audio formats.
The file metadata_dev.zip contains the common metadata for both audio formats.

The file foa_eval.zip corresponds to audio data of the FOA recording format for the evaluation dataset.
The file mic_eval.zip corresponds to audio data of the MIC recording format for the evaluation dataset.
The file video_eval.zip contains the common videos for both audio formats of the evaluation dataset.

Download the zip files corresponding to the format of interest and use your favourite compression tool to unzip them.
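
Equivalently, the archives can be extracted programmatically; a minimal sketch with Python's standard zipfile module (here for the FOA development audio plus the shared metadata and video archives):

```python
import zipfile

# Minimal sketch: extract the archives for one format of interest
# into the current directory.
for name in ("foa_dev.zip", "metadata_dev.zip", "video_dev.zip"):
    with zipfile.ZipFile(name) as zf:
        zf.extractall(".")
```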

Files (16.3 GB)

Additional details

References

  • Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen (2022). STARSS22: A dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE2022), Nancy, France.
  • Archontis Politis, Sharath Adavanne, Daniel Krause, Antoine Deleforge, Prerak Srivastava, Tuomas Virtanen (2021). A Dataset of Dynamic Reverberant Sound Scenes with Directional Interferers for Sound Event Localization and Detection.  In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), Barcelona, Spain.
  • Archontis Politis, Sharath Adavanne, and Tuomas Virtanen (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), Tokyo, Japan.
  • Sharath Adavanne, Archontis Politis, and Tuomas Virtanen (2019). A Multi-room reverberant dataset for sound event localization and detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), New York, NY, USA.