ISCA Archive Interspeech 2021

VocalTurk: Exploring Feasibility of Crowdsourced Speaker Identification

Susumu Saito, Yuta Ide, Teppei Nakano, Tetsuji Ogawa

This paper presents VocalTurk, a feasibility study of crowdsourced speaker identification based on our worker dataset collected on Amazon Mechanical Turk. Crowdsourced data labeling is now widely used in speech data processing, but empirical analyses that answer common questions such as "how accurately can workers label speech data?" and "what does a good speech-labeling microtask interface look like?" remain underexplored, which limits the quality and scale of dataset collection. Focusing on the speaker identification task in particular, we conducted two studies on Amazon Mechanical Turk: i) we hired 3,800+ unique workers and measured their accuracy and confidence in answering voice pair comparison tasks, and ii) we additionally assigned more difficult 1-vs-N voice set comparison tasks to 350+ top-scoring workers and measured their accuracy and speed across N = 1, 3, 5. The results revealed positive findings that should motivate speech researchers toward crowdsourced data labeling: the top-scoring workers labeled our voice comparison pairs with 99% accuracy after majority voting, and they were also capable of batch-labeling, which shortened their completion time by up to 34% with no statistically significant degradation in accuracy.
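For illustration, the majority-voting aggregation mentioned above can be sketched as follows. This is a minimal example, not the paper's implementation; the pair IDs, label values, and worker answers are hypothetical placeholders.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate crowdsourced answers for one voice pair by majority vote.

    `labels` is a list of worker answers for a single pair,
    e.g. ["same", "same", "different"]. Returns the most frequent
    answer; ties are broken by first occurrence in Counter order.
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical worker answers for three voice comparison pairs.
pair_labels = {
    "pair_001": ["same", "same", "different"],
    "pair_002": ["different", "different", "different"],
    "pair_003": ["same", "different", "same"],
}

aggregated = {pair: majority_vote(votes) for pair, votes in pair_labels.items()}
print(aggregated)  # {'pair_001': 'same', 'pair_002': 'different', 'pair_003': 'same'}
```

Aggregating redundant labels this way is a standard quality-control step in crowdsourcing pipelines; an odd number of workers per item avoids ties.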


doi: 10.21437/Interspeech.2021-464

Cite as: Saito, S., Ide, Y., Nakano, T., Ogawa, T. (2021) VocalTurk: Exploring Feasibility of Crowdsourced Speaker Identification. Proc. Interspeech 2021, 1723-1727, doi: 10.21437/Interspeech.2021-464

@inproceedings{saito21_interspeech,
  author={Susumu Saito and Yuta Ide and Teppei Nakano and Tetsuji Ogawa},
  title={{VocalTurk: Exploring Feasibility of Crowdsourced Speaker Identification}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={1723--1727},
  doi={10.21437/Interspeech.2021-464}
}