Comparing real-time and transcript-based techniques for measuring stuttering
Introduction
In recent years, there has been considerable debate regarding the measurement of stuttering behaviors. One of the most prominent topics in the literature has been measurement reliability (Cordes, 1994; Cordes & Ingham, 1994a; Curlee, 1981; Kully & Boberg, 1988; MacDonald & Martin, 1973; Tuthill, 1946; Young, 1975, 1984); however, there are a number of other important issues in need of further study (Yaruss, 1997a). For example, recent research has examined whether it is appropriate to count instances of disfluency or instances of stuttering (e.g., Conture, 1990; Cordes & Ingham, 1996a; Costello & Ingham, 1984; Ham, 1989), whether such counts should be based on the number of words or the number of syllables produced (Andrews & Ingham, 1971; Brundage & Bernstein Ratner, 1989; Ham, 1986), or whether counts should be based on instances of speech disruptions or on the time intervals containing disruptions (Cordes & Ingham, 1994b, 1994c, 1995a, 1996b; Cordes et al., 1992; Ingham, Cordes, & Finn, 1993; Ingham, Cordes, & Gow, 1993). It seems likely that clinicians’ varying decisions regarding these variables may contribute to the finding that different clinicians obtain different results when measuring stuttering (Kully & Boberg, 1988; Cordes & Ingham, 1995a).
In addition to these topics, it seems reasonable to assume that there may also be important differences associated with the type of measurement technique a clinician selects. Some methods for measuring stuttering emphasize a more comprehensive analysis of speech (dis)fluency based on a detailed verbatim transcript of a speech sample (e.g., Campbell & Hill, 1987; Rustin, Botterill, & Kelman, 1996), while others emphasize a more rapid but less detailed analysis of the production of speech disfluencies based on real-time (or “on-line,” after Conture, 1990) counting and categorizing of speech disfluencies (e.g., Conture, 1990; Conture & Yaruss, 1993; Riley, 1994; Yaruss, 1998). Certainly, each technique has its advantages and disadvantages. Transcript-based techniques provide more information than can readily be obtained through real-time measures, including more detailed analyses of how a client’s speech and language abilities relate to the production of speech disfluencies. Transcript-based techniques also provide greater opportunities to assess qualitative aspects of speech disfluencies, such as audible and visible tension. Unfortunately, transcript-based analyses are also quite time consuming, so it is generally not feasible to conduct such detailed measurements on a regular basis (e.g., for documenting changes in a client’s progress throughout treatment). Real-time techniques, on the other hand, are much faster to complete, so they provide a method for collecting the objective data necessary to document changes in a client’s stuttering behaviors without requiring a large time commitment. Still, the amount of detail that can be assessed with real-time techniques is somewhat limited.
Based on the strengths and weaknesses of these two approaches, then, one seemingly reasonable measurement strategy would be to utilize transcript-based methods when more detailed data are necessary, such as during a diagnostic evaluation, and real-time methods when it is less feasible to devote a large amount of time to data collection, such as during treatment (Yaruss, 1997a).
Unfortunately, it is not presently clear how the results of real-time and transcript-based measures relate to one another. Given the concerns regarding the reliability of stuttering measurements noted above, it seems reasonable to assume that there may be differences in the results obtained using these different techniques. For example, real-time analyses require rapid judgments of speech behaviors, so subtle behaviors may be less likely to be identified. Conversely, there may be a tendency to “overanalyze” subtle behaviors with transcript-based procedures, because a videotaped speech sample can be viewed repeatedly or in slow motion. As a result, it is not clear whether data obtained using these two techniques can be reasonably compared in a strategy such as that proposed above. Accordingly, the purpose of this study was to examine similarities and differences in the frequency and types of speech disfluencies obtained using a transcript-based and a real-time analysis, in order to determine whether these two types of measurement approaches can be combined in a comprehensive strategy for measuring the speech disfluency behaviors of individuals who stutter.
Section snippets
Speech Samples
Analyses in this study were based on 50 audio/videotaped speech samples, 200 syllables in length, that were drawn from a collection of videotapes at the Northwestern University Speech and Language Clinics. Each speech sample was collected during a comprehensive diagnostic evaluation of the client’s speech and language production designed to determine whether the client was in need of treatment for stuttering, and, if so, what the nature of that treatment should be (for detailed discussions of
Similarities and Differences Between the Techniques
Similarities and differences between the two measurement techniques were assessed in three ways for both more typical and less typical disfluencies: (a) a direct comparison of the frequency counts obtained with each technique; (b) mean differences and paired samples t-tests (Figure 1); and (c) Pearson product-moment correlations (Figure 2).
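For readers who wish to see how comparisons (b) and (c) are computed, the sketch below implements a paired-samples t statistic and a Pearson product-moment correlation from their textbook formulas. The disfluency counts are hypothetical illustration values, not data from this study.

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic: mean difference divided by
    the standard error of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

def pearson_r(x, y):
    """Pearson product-moment correlation between two score sets."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical disfluencies per 100 syllables for five speech samples,
# scored once from a transcript and once in real time
transcript = [4.0, 7.5, 2.0, 10.5, 6.0]
real_time = [3.5, 7.0, 2.5, 9.5, 6.0]

print(paired_t(transcript, real_time))
print(pearson_r(transcript, real_time))
```

A small t value alongside a high correlation would correspond to the pattern reported here: the two techniques yield similar, though not identical, frequency counts.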
Discussion
Results from the present investigation reveal that the frequency of more typical and less typical disfluencies obtained from a transcript-based analysis were quite similar, but not identical, to those obtained from a real-time analysis. As such, these findings provide part of the necessary background for evaluating a comprehensive data-collection strategy in which clinicians utilize a transcript-based approach when more data are needed for making diagnostic decisions and a real-time approach
Acknowledgements
This manuscript was prepared while all authors were at Northwestern University. The authors would like to express their appreciation to Gene Brutten, Ken St. Louis, and two anonymous reviewers for their helpful input on an earlier version of this paper. Portions of this paper were presented at the 1995 Convention of the American Speech-Language-Hearing Association, Orlando, Florida.
References (44)
- et al. (1989). The measurement of stuttering frequency in children’s speech. Journal of Fluency Disorders.
- (1989). “What are we measuring?” Journal of Fluency Disorders.
- (1994). Reporting observer agreement on stuttering event judgments: A survey and evaluation of current practice. Journal of Fluency Disorders.
- et al. (1996). Predictive factors of persistence and recovery: Pathways of childhood stuttering. Journal of Communication Disorders.
- (1997). Clinical implications of situational variability in preschool children who stutter. Journal of Fluency Disorders.
- et al. (1971). Stuttering: Considerations in the evaluation of treatment. British Journal of Disorders of Communication.
- et al. (1985). Comprehensive Stuttering Program.
- Campbell, J. & Hill, D. (1987, Nov.). Systematic disfluency analysis: Accountability for differential evaluation and...
- Campbell, J., Hill, D., & Driscoll, M. (1991, Nov.). Systematic Disfluency Analysis: Using SDA to determine stuttering...
- Campbell, J.H., Hill, D.G., Yaruss, J.S., & Gregory, H.H. (1996, Nov.). Integrating academic and clinical education in...
- Stuttering.
- Handbook for childhood stuttering: A training manual.
- The reliability of observational data: I. Theories and methods of speech-language pathology. Journal of Speech and Hearing Research.
- The reliability of observational data: II. Issues in the identification and measurement of stuttering events. Journal of Speech and Hearing Research.
- Time-interval measurement of stuttering: Effects of interval duration. Journal of Speech and Hearing Research.
- Time-interval measurement of stuttering: Effects of training with highly agreed or poorly agreed exemplars. Journal of Speech and Hearing Research.
- Judgments of stuttered and nonstuttered intervals by recognized authorities in stuttering research. Journal of Speech and Hearing Research.
- Disfluency types and stuttering measurement: A necessary connection? Journal of Speech and Hearing Research.
- Time-interval measurement of stuttering: Establishing and modifying judgment accuracy. Journal of Speech and Hearing Research.
- Time-interval analysis of interjudge and intrajudge agreement for stuttering event judgments. Journal of Speech and Hearing Research.
- An analysis of the relationship among stuttering behaviors. Journal of Speech and Hearing Research.
- Assessment strategies for stuttering.