EDITORIAL article

Front. Psychol., 05 May 2014
Sec. Emotion Science
This article is part of the Research Topic "Expression of emotion in music and vocal communication"

Expression of emotion in music and vocal communication: Introduction to the research topic

Anjali Bhatara1,2*, Petri Laukka3 and Daniel J. Levitin4

  • 1Sorbonne Paris Cité, Université Paris Descartes, Paris, France
  • 2Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France
  • 3Department of Psychology, Stockholm University, Stockholm, Sweden
  • 4Department of Psychology, McGill University, Montreal, QC, Canada

In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this. Music is also a complex auditory signal with the capacity to communicate emotion rapidly and effectively, and it often occurs in social situations or ceremonies as an emotional unifier.

Scientists and philosophers have speculated about the common cognitive origins of music and language. Perhaps their common origin lies in their efficacy for emotional expression. Unlike semantic or syntactic aspects of language (and music), many of their acoustic and emotional aspects are shared with sounds made by other species (Fitch, 2006); music and speech share a common acoustic code for expressing emotion (Juslin and Laukka, 2003). Until recently, however, scientists working in the two domains of music and speech rarely communicated, so research was restricted to one domain or the other. The purpose of this Research Topic was to bring these researchers together and encourage cross-talk.

Over 25 groups of researchers contributed their expertise, and the included papers give an overview of the diversity of current research, in terms of both research questions and methodology. Some articles focus on aspects of one of the two domains, whereas other articles directly compare, contrast, or combine music and vocal communication.

Empirical studies on music perception include work by Eerola et al. (2013), in which they systematically manipulated musical cues to determine their effects on perception of emotion, and Droit-Volet et al. (2013), who altered acoustic elements associated with emotion to examine the effect of these changes on time perception. Effects of context on music understanding were also investigated: Spreckelmeyer et al. (2013) examined preattentive processing of emotion, measuring ERPs during the processing of a sad tone within the context of happy tones and the reverse. Schellenberg et al. (2012) demonstrated a listener preference for music that expressed an emotion contrasting with an established context, and Loui et al. (2013) examined the effect of vocals on perceived arousal and valence in songs.

Turning to emotional responses to music, Russo et al. (2013) developed models aimed at predicting the emotion being experienced using information in the listeners' physiological signals, and Altenmüller et al. (2014) used fMRI to investigate the neural basis of episodic memory for arousing film music. Following up on Gabrielsson's (2002) distinction between emotion felt by a listener and emotion expressed by a piece of music, Schubert (2013) provided a review and suggestions for future research on the internal and external loci of musical emotion. There were also two theoretical papers on musical emotions: Flaig and Large (2014) speculated that music may induce affective response by speaking to the brain in its own language by way of neurodynamics, and Allen et al. (2013) presented a view of the general nature of musical emotions based on studies on autism.

In the speech domain, Paulmann et al. (2013) used EEG to investigate influences of arousal and valence on cortical responses to emotional prosody. Rigoulot et al. (2013) used a gating paradigm to demonstrate the importance of utterance-final syllables in emotion recognition. Two papers focused on the role of specific acoustic cues in vocal expression: Weusthoff et al. (2013) discussed the role of fundamental frequency in the success of romantic relationships, and Yanushevskaya et al. (2013) examined the role of loudness, both independently and in conjunction with voice quality.

Several researchers undertook cross-cultural studies of emotion perception in speech and non-verbal vocalizations. Jürgens et al. (2013) examined the perception of German emotional speech tokens across three cultures. Waaramaa and Leisiö (2013) examined the recognition of emotion in Finnish pseudo-sentences by listeners from five countries. There were also three cross-cultural investigations of non-verbal vocalizations: Koeda et al. (2013) examined perception of emotional vocalizations by Canadian and Japanese listeners, Laukka et al. (2013) examined Swedish listeners' perception of vocalizations from four countries, and Sauter (2013) examined the role of motivation in the in-group advantage for emotion recognition by presenting listeners with vocalizations produced by in- or out-group members.

Discussing the similarity between music and speech emotion expression, Juslin (2013) advanced the argument that this similarity lies at the "core" or basic emotion level, and that more complex emotions are more domain-specific. Several authors empirically tested the similarities and contrasts between music and vocal expression. Margulis (2013) posited that the relative preponderance of repetition in music compared to speech contributes to a fundamental difference between the two domains. Quinto et al. (2013) showed differences in the functions of pitch and rhythm between these domains. Weninger et al. (2013) synthesized information from databases including speech, music, and environmental sounds, and thereby took a step toward a holistic computational model of affect in sound. To aid future cross-domain research, Paquette et al. (2013) presented a new validated set of stimuli, a musical equivalent to vocal affective bursts. Bowling (2013) reviewed evidence that the affective character of musical modes is grounded in the biology of human vocal emotion expression, and Bryant (2013) further argued that research on music and emotion might benefit from research on form and function in non-human animal signals.

Three papers examined developmental and lifespan changes. Corbeil et al. (2013) contrasted the perception of speaking and singing in infancy, and found that it is not the domain (music or speech) that matters but rather the level of (positive) emotion. Wang et al. (2013) examined early auditory deprivation, asking children with cochlear implants to imitate happy and sad utterances. Vieillard and Gilet (2013) found an increase in positive responding to music with aging.

In sum, the main contribution of this Research Topic, along with highlighting the variety of research being done already, is to show the places of contact between the domains of music and vocal expression that occur at the level of emotional communication. In addition, we hope it will encourage future dialog among researchers interested in emotion in fields as diverse as computer science, linguistics, musicology, neuroscience, psychology, speech and hearing sciences, and sociology, who can each contribute knowledge necessary for studying this complex topic.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Allen, R., Walsh, R., and Zangwill, N. (2013). The same, only different: what can responses to music in autism tell us about the nature of musical emotions? Front. Psychol. 4:156. doi: 10.3389/fpsyg.2013.00156

Altenmüller, E., Siggel, S., Mohammadi, B., Samii, A., and Münte, T. (2014). Play it again Sam: brain correlates of emotional music recognition. Front. Psychol. 5:114. doi: 10.3389/fpsyg.2014.00114

Bowling, D. L. (2013). A vocal basis for the affective character of musical mode in melody. Front. Psychol. 4:464. doi: 10.3389/fpsyg.2013.00464

Bryant, G. A. (2013). Animal signals and emotion in music: coordinating affect across groups. Front. Psychol. 4:990. doi: 10.3389/fpsyg.2013.00990

Corbeil, M., Trehub, S. E., and Peretz, I. (2013). Speech vs. singing: infants choose happier sounds. Front. Psychol. 4:372. doi: 10.3389/fpsyg.2013.00372

Droit-Volet, S., Ramos, D., Bueno, J. L. O., and Bigand, E. (2013). Music, emotion, and time perception: the influence of subjective emotional valence and arousal? Front. Psychol. 4:417. doi: 10.3389/fpsyg.2013.00417

Eerola, T., Friberg, A., and Bresin, R. (2013). Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front. Psychol. 4:487. doi: 10.3389/fpsyg.2013.00487

Fitch, W. T. (2006). The biology and evolution of music: a comparative perspective. Cognition 100, 173–215. doi: 10.1016/j.cognition.2005.11.009

Flaig, N. K., and Large, E. W. (2014). Dynamic musical communication of core affect. Front. Psychol. 5:72. doi: 10.3389/fpsyg.2014.00072

Gabrielsson, A. (2002). Emotion perceived and emotion felt: same or different? Music. Sci. 5, 123–147. doi: 10.1177/10298649020050S105

Jürgens, R., Drolet, M., Pirow, R., Scheiner, E., and Fischer, J. (2013). Encoding conditions affect recognition of vocally expressed emotions across cultures. Front. Psychol. 4:111. doi: 10.3389/fpsyg.2013.00111

Juslin, P. N. (2013). What does music express? Basic emotions and beyond. Front. Psychol. 4:596. doi: 10.3389/fpsyg.2013.00596

Juslin, P. N., and Laukka, P. (2003). Communication of emotions in vocal expression and music performance: different channels, same code? Psychol. Bull. 129, 770–814. doi: 10.1037/0033-2909.129.5.770

Koeda, M., Belin, P., Hama, T., Masuda, T., Matsuura, M., and Okubo, Y. (2013). Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and Canadian listeners. Front. Psychol. 4:105. doi: 10.3389/fpsyg.2013.00105

Laukka, P., Elfenbein, H. A., Söder, N., Nordström, H., Althoff, J., Chui, W., et al. (2013). Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations. Front. Psychol. 4:353. doi: 10.3389/fpsyg.2013.00353

Loui, P., Bachorik, J. P., Li, H. C., and Schlaug, G. (2013). Effects of voice on emotional arousal. Front. Psychol. 4:675. doi: 10.3389/fpsyg.2013.00675

Margulis, E. H. (2013). Repetition and emotive communication in music versus speech. Front. Psychol. 4:167. doi: 10.3389/fpsyg.2013.00167

Paquette, S., Peretz, I., and Belin, P. (2013). The “musical emotional bursts”: a validated set of musical affect bursts to investigate auditory affective processing. Front. Psychol. 4:509. doi: 10.3389/fpsyg.2013.00509

Paulmann, S., Bleichner, M., and Kotz, S. A. (2013). Valence, arousal, and task effects in emotional prosody processing. Front. Psychol. 4:345. doi: 10.3389/fpsyg.2013.00345

Quinto, L., Thompson, W. F., and Keating, F. L. (2013). Emotional communication in speech and music: the role of melodic and rhythmic contrasts. Front. Psychol. 4:184. doi: 10.3389/fpsyg.2013.00184

Rigoulot, S., Wassiliwizky, E., and Pell, M. D. (2013). Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition. Front. Psychol. 4:367. doi: 10.3389/fpsyg.2013.00367

Russo, F. A., Vempala, N. N., and Sandstrom, G. M. (2013). Predicting musically induced emotions from physiological inputs: linear and neural network models. Front. Psychol. 4:468. doi: 10.3389/fpsyg.2013.00468

Sauter, D. A. (2013). The role of motivation and cultural dialects in the in-group advantage for emotional vocalizations. Front. Psychol. 4:814. doi: 10.3389/fpsyg.2013.00814

Schellenberg, E. G., Corrigall, K. A., Ladinig, O., and Huron, D. (2012). Changing the tune: listeners like music that expresses a contrasting emotion. Front. Psychol. 3:574. doi: 10.3389/fpsyg.2012.00574

Schubert, E. (2013). Emotion felt by listener and expressed by music: a literature review and theoretical investigation. Front. Psychol. 4:837. doi: 10.3389/fpsyg.2013.00837

Spreckelmeyer, K. N., Altenmüller, E. O., Colonius, H., and Münte, T. F. (2013). Preattentive processing of emotional musical tones: a multidimensional scaling and ERP study. Front. Psychol. 4:656. doi: 10.3389/fpsyg.2013.00656

Vieillard, S., and Gilet, A.-L. (2013). Age-related differences in affective responses to and memory for emotions conveyed by music: a cross-sectional study. Front. Psychol. 4:711. doi: 10.3389/fpsyg.2013.00711

Waaramaa, T., and Leisiö, T. (2013). Perception of emotionally loaded vocal expressions and its connection to responses to music. A cross-cultural investigation: Estonia, Finland, Sweden, Russia, and the USA. Front. Psychol. 4:344. doi: 10.3389/fpsyg.2013.00344

Wang, D. J., Trehub, S. E., Volkova, A., and van Lieshout, P. (2013). Child implant users' imitation of happy- and sad-sounding speech. Front. Psychol. 4:351. doi: 10.3389/fpsyg.2013.00351

Weninger, F., Eyben, F., Schuller, B. W., Mortillaro, M., and Scherer, K. R. (2013). On the acoustics of emotion in audio: what speech, music, and sound have in common. Front. Psychol. 4:292. doi: 10.3389/fpsyg.2013.00292

Weusthoff, S., Baucom, B. R., and Hahlweg, K. (2013). The siren song of vocal fundamental frequency for romantic relationships. Front. Psychol. 4:439. doi: 10.3389/fpsyg.2013.00439

Yanushevskaya, I., Gobl, C., and Ní Chasaide, A. (2013). Voice quality in affect cueing: does loudness matter? Front. Psychol. 4:335. doi: 10.3389/fpsyg.2013.00335

Keywords: music, speech, emotion, voice, cross-domain cognition

Citation: Bhatara A, Laukka P and Levitin DJ (2014) Expression of emotion in music and vocal communication: Introduction to the research topic. Front. Psychol. 5:399. doi: 10.3389/fpsyg.2014.00399

Received: 26 March 2014; Accepted: 15 April 2014;
Published online: 05 May 2014.

Edited and reviewed by: Luiz Pessoa, University of Maryland, USA

Copyright © 2014 Bhatara, Laukka and Levitin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: bhatara@gmail.com
