Abstract
It has been suggested that individuals with autism will be less responsive to the emotional content of music than typical individuals. With the aim of testing this hypothesis, a group of high-functioning adults on the autism spectrum was compared with a group of matched controls on two measures of emotional responsiveness to music, one physiological and one verbal. Impairment in participants' ability to verbalize their emotions (type-II alexithymia) was also assessed. The groups did not differ significantly on physiological responsiveness, but the autism group scored significantly lower on the verbal measure. However, including the alexithymia score as a mediator variable nullified this group difference, suggesting that the difference was due not to an absence of underlying emotional responsiveness to music in autism, but to a reduced ability to articulate it.
Acknowledgments
The research reported in this paper was conducted by the first author as part of a program of PhD study, and is largely based on material contained in his PhD thesis. The program of study was self-funded by the first author. The third author was the first author's PhD supervisor. The second author provided substantial assistance in developing the equipment for the GSR experiments and the software for recording the results in a form suitable for subsequent analysis by the first author using standard statistical methods.
Appendices
Appendix 3: Instructions to Participants
I am going to play you six items of music, and then ask you about your responses to them. To begin with, I would like you to take a few minutes to look at this list of words. If you come across a word which you don’t understand, please ask me when you come to it and I will try and explain it.
Please now read the words to yourself, not aloud, and tell me when you have finished.
I will now play the music, one piece at a time. While I am playing the first piece, please look again at the words and decide if any of them describes the way the music makes you feel, or describes a thought that comes to you when listening, even if momentarily or fleetingly. If there are any words like this, please put a tick next to them in column 1, which is this one. You can do this either during the music or after it has finished. It is quite all right if you decide that none of the words applies, and in that case just leave the column blank and tell me when you are ready to move on to the next item of music.
Please note that I only want you to record the thoughts or feelings that the music evokes in you. I am not looking for a description of the music itself, but of what is going on in your mind in response to the music, using the words provided in the list. If you have a thought or feeling that is not on the list, please tell me what it is.
Now I will play you the next item, and I would like you to do the same in column 2. Please remember that the feeling should be one that you have yourself—it may be that this is not the feeling you see expressed by the music. If for example you think the music sounds sad, but you yourself feel pleasure and not sadness (perhaps because it has pleasant memories for you), you should mark ‘pleasure’ and not ‘sad’.
Now please look at the following set of six “bundles” of words that might be used to describe music. I am going to play you six more items of music.
This experiment is different from the last one. The bundles represent typical words used by an earlier group of people to describe the way they felt about the different items of music. I would like you to use your judgement to decide which bundle you think was the one used by the group to describe that piece of music. The words in the bundle don’t necessarily have to represent the way the music makes you feel, or a thought that the music evokes in you.
You do not have to choose different bundles for each item of music. It is fine if you want to choose the same one to describe two or more items.
Appendix 4: GSR Analysis
The program devised by the second author sampled GSR values at a rate of 2 Hz and stored them on computer. It also recorded the exact time at which each GSR reading was taken, and whether a sound stimulus was being presented at that point (and if so, the code number of the stimulus) or whether there was silence. A simple Excel program was written to analyse the data for each individual, and was applied separately to the data recording responses to music and to noise stimuli. The first step executed by the program was to locate each point at which there was a change of stimulus, i.e., where either the sound (music or environmental noise) ceased and a period of silence began, or vice versa (a “transition point”). For each transition point, the program also took the GSR readings at two other points, located 5 s before and 5 s after the transition point (i.e., ten sample readings before and after it): the “bracket points” for that transition point. This interval was found to give a measure of the short-term trend of the readings that was long enough to minimise the distorting effects of random noise, yet short enough to represent the trend prevailing around the transition.
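As an illustrative sketch (not the original program; the data layout and all names here are assumptions), the transition points and their bracket readings could be located as follows, given the 2 Hz samples as a list of resistance readings and a parallel list of stimulus codes with None marking silence:

```python
# Sketch: locate stimulus/silence transition points in a 2 Hz GSR trace.
# Assumptions (not from the original program): readings is a list of
# resistance values in kilohms; stimuli is a parallel list of stimulus
# codes, with None for silence. At 2 Hz, 5 s corresponds to 10 samples.

RATE_HZ = 2
BRACKET_S = 5
BRACKET_SAMPLES = RATE_HZ * BRACKET_S  # 10 samples either side

def find_transitions(stimuli):
    """Indices where sound starts or stops (silence <-> stimulus)."""
    return [i for i in range(1, len(stimuli))
            if (stimuli[i - 1] is None) != (stimuli[i] is None)]

def bracket_readings(readings, i):
    """Readings 5 s before and 5 s after transition index i, if available."""
    lo, hi = i - BRACKET_SAMPLES, i + BRACKET_SAMPLES
    if lo < 0 or hi >= len(readings):
        return None  # transition too close to the start/end of the trace
    return readings[lo], readings[hi]
```

For example, a trace of 6 s of silence, 6 s of music, and 6 s of silence yields transitions at sample indices 12 and 24.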
For each transition point, the program averaged the readings at the two corresponding bracket points and subtracted this average from the reading at the transition point. To give a result in kilohms per second, this quantity was divided by a scaling factor of 2.5 (for the reason, see below). The program then took the absolute value of the resulting number for each transition point and, finally, the overall median of these absolute values. The median was adopted because it is less susceptible than a simple mean to the influence of possible artifacts or other causes of extreme values. Algebraically, if the reading at the transition point at time t was Rt, where t is measured in seconds, then readings were taken at the bracket points corresponding to Rt−5 and Rt+5, the quantity |Rt − (Rt−5 + Rt+5)/2| was calculated for each value of t corresponding to a transition point, and the effect size was finally calculated as effect size = median{|Rt − (Rt−5 + Rt+5)/2|/2.5}, taken over all t marking transition points. The units of GSR measurement were kilohms, so this effect size measure is expressed in kilohms per second.
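The calculation above can be sketched in a few lines (hypothetical helper names; each triple holds the transition reading and its two bracket readings, all in kilohms):

```python
# Sketch of the effect-size calculation described above. The 2.5
# scaling converts the midpoint deviation into kilohms per second
# (see the rationale given in the appendix).
from statistics import median

def transition_deviation(r_t, r_before, r_after, scale=2.5):
    """|Rt - (Rt-5 + Rt+5)/2| / 2.5, in kilohms per second."""
    return abs(r_t - (r_before + r_after) / 2) / scale

def effect_size(triples):
    """Median deviation over all (Rt, Rt-5, Rt+5) transition triples."""
    return median(transition_deviation(*t) for t in triples)
```

For a transition lying on a straight-line trend (readings 10, 15, 20), the deviation is zero; a flat-then-rising trace (10, 10, 20) gives 2.0 kilohms per second.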
The rationale for this is that in the absence of any effect of sound versus silence, one would expect the slope of the trace line at the transition point to be continuous before, during and after the transition point. The most parsimonious hypothesis is that the change from music to silence or vice versa has a first order linear effect on the slope. It is easy to see that if Rt were a linear function of time t (i.e., if the transition had no effect), then Rt − (Rt−5 + Rt+5)/2 would be zero. In fact, if the trace of resistance against time is visualised as a graph with time as the x-axis and resistance as the y-axis, then Rt − (Rt−5 + Rt+5)/2 is the vertical distance between the point represented by coordinates (t, Rt), and the midpoint of the line joining the points (t − 5, Rt−5) and (t + 5, Rt+5). The estimate of rate of change of R with time during the period between the first bracket point and the transition point is (Rt − Rt−5)/5, in units of kilohms per second. The rate of change of R in the second time period is (Rt+5 − Rt)/5, in units of kilohms per second. The difference between these two, or (2Rt − (Rt−5 + Rt+5))/5, is the difference in the slopes of the traces immediately before and after the transition point, and this was taken as the definition of the effect size of the sound/silence or silence/sound transition at that point.
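The identity described in this paragraph is easy to verify numerically. The sketch below (with made-up readings) checks that the midpoint deviation divided by 2.5 equals the difference in slopes, and that a linear trace gives zero:

```python
# Numeric check of the identity in the text: the midpoint deviation
# scaled by 2.5 equals the difference between the slopes of the trace
# over the 5 s intervals before and after the transition point.

def slope_difference(r_t, r_before, r_after):
    """Slope before minus slope after, each over a 5 s interval."""
    return (r_t - r_before) / 5 - (r_after - r_t) / 5

def scaled_deviation(r_t, r_before, r_after):
    """(Rt - (Rt-5 + Rt+5)/2) / 2.5."""
    return (r_t - (r_before + r_after) / 2) / 2.5

# A linear trace (10, 15, 20) shows no slope change at the transition;
# for arbitrary readings the two expressions agree.
```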
Cite this article
Allen, R., Davis, R. & Hill, E. The Effects of Autism and Alexithymia on Physiological and Verbal Responsiveness to Music. J Autism Dev Disord 43, 432–444 (2013). https://doi.org/10.1007/s10803-012-1587-8