Hearing Research

Volume 355, November 2017, Pages 64-69

Research Paper
Standard-interval size affects interval-discrimination thresholds for pure-tone melodic pitch intervals

https://doi.org/10.1016/j.heares.2017.09.008

Highlights

  • Pitch-interval discrimination thresholds were compared at 3 standard interval sizes.

  • Thresholds were compared for nonmusicians, amateur musicians, and expert musicians.

  • Pitch-interval discrimination thresholds changed with increasing interval size.

  • Expert musicians produced the lowest discrimination thresholds.

Abstract

Our ability to discriminate between pitch intervals of different sizes is not only an important aspect of speech and music perception, but also a useful means of evaluating higher-level pitch perception. The current study examined how pitch-interval discrimination was affected by the size of the intervals being compared and by musical training. Using an adaptive procedure, pitch-interval discrimination thresholds were measured for sequentially presented pure-tone intervals with standard intervals of 1 semitone (minor second), 6 semitones (tritone), and 7 semitones (perfect fifth). Listeners were classified into three groups based on musical experience: nonmusicians had less than 3 years of informal musical experience; amateur musicians had at least 10 years of experience but no formal music theory training; and expert musicians had at least 12 years of experience, including 1 year of formal ear training, and were pursuing or had earned a Bachelor's degree with a music major or minor. Consistent with previous studies, discrimination thresholds obtained from expert musicians were significantly lower than those from the other listeners. Thresholds also varied significantly with the magnitude of the reference interval and were higher for conditions with a 6- or 7-semitone standard than with a 1-semitone standard. These data show that interval-discrimination thresholds are strongly affected by the size of the standard interval.

Introduction

Perceiving changes in pitch is important for communication, social interaction, and making sense of an acoustic environment. In speech, pitch changes can indicate emotion, affect, and the linguistic meaning of an utterance. For example, tonal languages rely on the direction of vocal pitch change, or pitch contour, to convey meaning through lexical tone (Ye and Connine, 1999), while pitch contours and vocal pitch levels add emotional valence, arousal, and other nonverbal meaning to speech in non-tonal languages (Bänziger and Scherer, 2005, Grichkovtsova et al., 2012, Scherer, 2003, Scherer et al., 1984). The amount by which a pitch changes – defined as a pitch interval – is also important for conveying information in certain contexts; in non-tonal languages it may be used to emphasize certain emotions (Curtis and Bharucha, 2010), and in tonal languages it differentiates two lexical tones that share the same pitch contour, as in the low-level, mid-level, and high-level tones of Cantonese (Cutler and Chen, 1997, Ma et al., 2006).
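As a point of reference (a standard psychoacoustic convention rather than anything specific to this study), the size of the interval between two frequencies f1 and f2 is expressed on the logarithmic semitone scale:

$$ \Delta_{\mathrm{st}} = 12\,\log_2\!\left(\frac{f_2}{f_1}\right), $$

so that one semitone corresponds to a frequency ratio of $2^{1/12} \approx 1.059$, a change of roughly 6%.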

Pitch changes are also the foundation of musical composition. Musical intervals, which are quantified in semitones, form melodies when combined sequentially and harmonies when combined simultaneously. In Western music theory, different intervals serve different functional roles and convey different emotions. The interval of 7 semitones, called a perfect fifth, is used to create harmonious and consonant sounds, while the 6-semitone interval, called an augmented fourth or tritone, was historically used to create musical tension and dissonance (Cooke, 1959). Pitch-interval perception, and our ability to discriminate between pitch intervals of different sizes, is thus an important aspect of both speech and music perception, and is commonly studied in auditory perceptual research.
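In equal temperament (again a textbook relation, not a result of the present study), these two intervals correspond to the frequency ratios

$$ 2^{7/12} \approx 1.498 \approx \tfrac{3}{2} \quad \text{(perfect fifth)}, \qquad 2^{6/12} = \sqrt{2} \approx 1.414 \quad \text{(tritone)}, $$

and the fifth's near-coincidence with the simple 3:2 ratio is one classical account of its consonant quality.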

Such studies of pitch-interval perception use a variety of paradigms, ranging from the method of adjustment (Demany and Semal, 1990, Plomp and Steeneken, 1968, Ward, 1954) and subjective ratings (Kameoka and Kuriyagawa, 1969, McDermott et al., 2010b, Plomp and Levelt, 1965, Russo and Thompson, 2005, van de Geer et al., 1962) to interval identification and discrimination (Burns and Campbell, 1994, Burns and Ward, 1978, Killam et al., 1975, Siegel and Siegel, 1977a, Zatorre and Halpern, 1979). Interval identification paradigms, which require listeners to name intervals of the Western musical system using labels such as “minor second” and “perfect fifth”, have shown that musicians demonstrate learned categorical perception for the 12 canonical intervals of the Western musical system (e.g., 1 or 2 semitones) (Burns and Campbell, 1994, Burns and Ward, 1978, Siegel and Siegel, 1977a, Siegel and Siegel, 1977b, Zatorre and Halpern, 1979) and have greater difficulty identifying non-canonical intervals such as quarter tones (e.g., 0.5 or 2.5 semitones) without explicit training (Siegel and Siegel, 1977b). Musical training also enhances interval perception by enabling listeners to more easily detect changes made to a single note embedded in a short melody (Dowling and Fujitani, 1971, Dowling, 1978); listeners without musical experience can detect altered intervals in brief melodies only if the contour of those melodies is also altered. Additional studies of interval perception show that relative pitch perception is influenced by a variety of other stimulus factors, including harmonicity (McDermott et al., 2010a, Plomp et al., 1973, Trainor, 1996), timbre (Russo and Thompson, 2005, Zarate et al., 2013), sound level (Thompson et al., 2012), and whether the interval is ascending or descending and presented simultaneously or sequentially (Killam et al., 1975, Luo et al., 2014; for a review, see Burns, 1999).

Due to the highly musical nature of interval perception, many paradigms present stimuli in a musical context (as in the short melodies above) or require a minimal amount of musical experience from the listeners (as in the musical interval identification tasks). Because this often precludes examining this perceptual ability in listeners without formal musical training, many investigators instead use pitch-interval discrimination tasks, which do not rely on musical experience. In a pitch-interval discrimination task, also called an interval discrimination task, listeners are presented with two intervals and asked to judge which is larger. This task is similar to a basic frequency discrimination task, except that listeners identify the larger interval rather than the higher tone. Not surprisingly, such studies show that discrimination performance improves as the difference between the two intervals increases, and that musicians typically perform the task better than non-musicians, even without an explicit musical context (Burns and Ward, 1978, Luo et al., 2014, McDermott et al., 2010a, Zarate et al., 2013, Zarate et al., 2012).
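The following is a minimal sketch of how one trial of such a two-interval forced-choice task might be generated. The sampling rate, tone duration, inter-tone gap, rove range, and interval values are illustrative assumptions, not the stimulus parameters of this study or of the studies cited above.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed for illustration)

def pure_tone(freq_hz, dur_s=0.5, ramp_s=0.01):
    """Pure tone with raised-cosine onset/offset ramps."""
    t = np.arange(int(dur_s * FS)) / FS
    tone = np.sin(2 * np.pi * freq_hz * t)
    n = int(ramp_s * FS)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    tone[:n] *= ramp          # fade in
    tone[-n:] *= ramp[::-1]   # fade out
    return tone

def melodic_interval(base_hz, size_st):
    """Two sequential pure tones spanning `size_st` semitones."""
    upper_hz = base_hz * 2 ** (size_st / 12)
    gap = np.zeros(int(0.2 * FS))  # 200-ms silence between the tones
    return np.concatenate([pure_tone(base_hz), gap, pure_tone(upper_hz)])

# One trial: the standard interval vs. the standard enlarged by delta.
rng = np.random.default_rng()
standard_st, delta_st = 7.0, 0.5              # perfect-fifth standard (example)
bases_hz = 200.0 * 2 ** rng.uniform(0, 1, 2)  # independent one-octave rove (assumed)
sizes = rng.permutation([standard_st, standard_st + delta_st])
observations = [melodic_interval(b, s) for b, s in zip(bases_hz, sizes)]
correct_answer = int(np.argmax(sizes)) + 1    # observation (1 or 2) with the larger interval
```

Roving the base frequency of each observation independently forces listeners to compare interval sizes rather than the absolute frequencies of the tones, which is what makes this a relative-pitch task.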

Yet studies of pitch-interval discrimination report conflicting results about how listeners are affected by the size of the reference interval. Listeners in several studies (Burns and Ward, 1978, McDermott et al., 2010a) produced thresholds that did not differ significantly across different standard intervals: the minimum difference (quantified in semitones) needed to discriminate a 1-semitone interval from a slightly larger interval was the same as that needed to discriminate a 4-semitone interval from a slightly larger one. This held true for standards that were both canonical (1 or 2 semitones) and non-canonical (1.5 or 2.5 semitones) Western music theory intervals (McDermott et al., 2010a). However, a more recent study examining a wide range of standard interval sizes showed that discrimination thresholds varied strongly with standard interval size and were higher for larger standard intervals, increasing by an average of 0.22 semitones for each 1-semitone increase in the standard (Luo et al., 2014).
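To make that slope concrete (simple arithmetic on the figure reported by Luo et al., not a new result), moving from a 1-semitone to a 7-semitone standard would be expected to raise thresholds by roughly

$$ 0.22 \times (7 - 1) \approx 1.3 \ \text{semitones}. $$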

The different effects of standard-interval size across studies may stem from a number of factors, including the extent of the base tones' frequency rove, whether the stimuli were pure or complex tones, and, in particular, the musical experience of the listeners. The existence of an effect of standard interval magnitude for certain listeners may have important implications for our understanding of relative pitch perception. Furthermore, if the effect is influenced by musical training, an examination of this type of perception may shed light on potential differences between the listening strategies of musicians and nonmusicians.

The goal of the current study was thus twofold: to examine how listeners' pitch-interval discrimination thresholds vary with the size of the standard interval across large intervals, and to examine whether previous conflicting reports of the effects of standard-interval size might be due to differences in musical training. To this end, listeners were tested in a melodic pure-tone interval-discrimination task, with a procedure analogous to that used by McDermott et al. (2010a) and Luo et al. (2014), to explore pitch-interval discrimination by nonmusicians, amateur musicians, and professionally trained musicians across three standard-interval sizes: 1, 6, and 7 semitones. These standards are sufficiently large to reveal a possible interval-magnitude effect, but not so large that they introduce problems with frequency roves. We included both the 6- and 7-semitone conditions because, although similar in size, these two intervals play radically different functional and theoretical roles in Western music theory. They are also larger than many standards used previously.

Since musical training is well known to affect pitch and interval perception (Kishon-Rabin et al., 2001, Micheyl et al., 2006, Spiegel and Watson, 1984), and differences in the musical experience of the listeners may explain previous discrepancies in the effect of interval size, both musicians and nonmusicians were tested. Furthermore, because the degree of musical training can vary greatly from musician to musician, we separated the musicians into two sub-groups: those who had formal music theory instruction, including one year of ear training, and those who did not receive such formal training. Formal music theory instruction covers the harmonies, tonalities, and intervals of the Western musical system, and when taught in a university setting is almost always paired with a standardized ear-training/sight-singing curriculum. Ear-training/sight-singing classes teach students to recognize intervals, discriminate between them, and vocally produce them without reference tones. These tasks are practiced either outside of a musical context or with the intervals embedded in a melody, and are intended to develop a musician's sense of relative pitch. It was therefore expected that although musicians would perform better than nonmusicians in general, ear training might lead to additional improvements in discrimination performance and produce thresholds that were consistent across different interval standards.


Listeners

Fourteen adult listeners participated. All reported normal hearing, none had absolute pitch, and none spoke a tonal language. Listeners were classified into three groups based on musical experience. Five nonmusicians (all males, mean (M) = 26 years of age, standard deviation (SD) = 4 years) had less than three years of musical instruction during childhood. Three of the nonmusicians had no musical experience; the other two had 9 months and 3 years of experience, respectively, both at least ten years prior to the study.
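The excerpt does not specify the adaptive rule used to track thresholds. The sketch below assumes a 2-down/1-up transformed up-down staircase (Levitt, 1971), which converges on the 70.7%-correct point; the starting value, step size, and stopping criterion are illustrative assumptions rather than the study's actual parameters.

```python
def staircase_threshold(run_trial, start_delta=2.0, step=0.25,
                        n_reversals=8, min_delta=0.05):
    """2-down/1-up adaptive track (Levitt, 1971), targeting ~70.7% correct.

    `run_trial(delta)` must present one trial at interval difference
    `delta` (in semitones) and return True if the response was correct.
    """
    delta, streak, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if run_trial(delta):
            streak += 1
            if streak == 2:                   # two correct in a row: make it harder
                streak = 0
                if direction == +1:
                    reversals.append(delta)   # track turned downward
                direction = -1
                delta = max(delta - step, min_delta)
        else:                                 # one incorrect: make it easier
            streak = 0
            if direction == -1:
                reversals.append(delta)       # track turned upward
            direction = +1
            delta += step
    # Threshold estimate: mean delta at the last six reversals (a common convention).
    return sum(reversals[-6:]) / len(reversals[-6:])
```

In practice, `run_trial` would present a stimulus pair like the one sketched in the Introduction and return whether the listener correctly identified the larger interval.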

Effect of musical experience

Fig. 1 shows the individual thresholds for each subject in each condition. A between-subjects ANOVA revealed a significant effect of group [F(2,11) = 6.238, p = 0.015], with expert musicians producing the lowest thresholds of the three groups. Independent t-tests showed a significant difference between the thresholds for the non-musicians and the expert musicians [t(9) = 2.90, p = 0.018] and between the amateur musicians and the expert musicians [t(6.360) = 3.74, p = 0.009], but no significant difference between the non-musicians and the amateur musicians.
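The fractional degrees of freedom in the amateur-expert comparison, t(6.360), indicate a Welch (unequal-variance) t-test. A minimal sketch of that computation, using made-up placeholder thresholds rather than the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder per-listener mean thresholds in semitones -- illustrative
# values only, NOT the data reported in this study.
amateur = np.array([1.9, 2.4, 1.6, 2.8])
expert = np.array([0.6, 0.9, 0.7, 1.0, 0.8])

# Welch's t-test (equal_var=False) produces fractional df, as reported.
t_stat, p_val = stats.ttest_ind(amateur, expert, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")
```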

Discussion

Our results showed that, in an interval-discrimination task using pure tones, thresholds varied across different standard-interval magnitudes; for the standard intervals tested in this study, discrimination thresholds were substantially higher for the 6- and 7-semitone standards than for the 1-semitone standard. In agreement with previous studies, we found that thresholds for expert musicians were significantly lower than those for our other listeners, while the amateur musicians produced thresholds that were the same as those of the nonmusicians.

Conclusions

Pitch-interval discrimination thresholds were found to vary with the size of the standard interval. The results suggest a potential added benefit of formal music theory and ear training that is distinct from the benefit of musical experience alone: musicians with formal musical training produced the lowest thresholds, while musicians without this training performed similarly to nonmusicians.

Acknowledgments

I thank Brian C.J. Moore, Kourosh Saberi, Jon Venezia, Sierra Broussard, Kyle Stevens, Ashley Thomas, Barbara Sarnecka, and two anonymous reviewers for helpful comments on earlier drafts of the manuscript. Work was funded by NIH R01 DC009659, and NIH/NIDCD grant #T32 DC010775 through the UCI Center for Hearing Research.

References (41)

  • M.E. Curtis et al. (2010). The minor third communicates sadness in speech, mirroring its use in music. Emotion.
  • A. Cutler et al. (1997). Lexical tone in Cantonese spoken-word processing. Percept. Psychophys.
  • L. Demany et al. (1990). Harmonic and melodic octave templates. J. Acoust. Soc. Am.
  • W.J. Dowling (1978). Scale and contour: two components of a theory of memory for melodies. Psychol. Rev.
  • W.J. Dowling et al. (1971). Contour, interval, and pitch recognition in memory for melodies. J. Acoust. Soc. Am.
  • A. Kameoka et al. (1969). Consonance theory Part I: consonance of dyads. J. Acoust. Soc. Am.
  • R.N. Killam et al. (1975). Interval recognition: identification of harmonic and melodic intervals. J. Music Theory.
  • L. Kishon-Rabin et al. (2001). Pitch discrimination: are professional musicians better than non-musicians? J. Basic Clin. Physiol. Pharmacol.
  • H. Levitt (1971). Transformed up-down methods in psychoacoustics. J. Acoust. Soc. Am.
  • X. Luo et al. (2014). Melodic interval perception by normal-hearing listeners and cochlear implant users. J. Acoust. Soc. Am.