
Cognitive Brain Research

Volume 22, Issue 2, February 2005, Pages 193-203

Research report
Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners

https://doi.org/10.1016/j.cogbrainres.2004.08.012

Abstract

Recognition of emotional facial expressions is universal for all humans, but signed language users must also recognize certain non-affective facial expressions as linguistic markers. fMRI was used to investigate the neural systems underlying recognition of these functionally distinct expressions, comparing deaf ASL signers and hearing nonsigners. Within the superior temporal sulcus (STS), activation for emotional expressions was right lateralized for the hearing group and bilateral for the deaf group. In contrast, activation within STS for linguistic facial expressions was left lateralized only for signers and only when linguistic facial expressions co-occurred with verbs. Within the fusiform gyrus (FG), activation was left lateralized for ASL signers for both expression types, whereas activation was bilateral for both expression types for nonsigners. We propose that left lateralization in FG may be due to continuous analysis of local facial features during on-line sign language processing. The results indicate that function in part drives the lateralization of neural systems that process human facial expressions.

Introduction

Recognition of facial expressions of emotion is a crucial communication skill relevant for both human and non-human primates. Sensitivity to emotional facial expressions occurs very early in development, and the neural circuitry underlying facial affect recognition is partially independent of neural systems that underlie recognition of other information from faces, such as person identity or gender [13], [17], [34]. Humans have clearly evolved an ability to quickly recognize emotional and socially relevant facial expressions, and this ability appears to be supported by distributed neural circuitry that is generally lateralized to the right hemisphere [8], [24], [31]. We investigate the plasticity and functional organization of this neural circuitry by studying facial expressions that do not convey emotional or social-regulatory information, namely the linguistic facial expressions produced by users of American Sign Language (ASL).

A unique and modality-specific aspect of the grammar of ASL and other signed languages is the use of the face as a linguistic marker. Distinct facial expressions serve to signal different lexical and syntactic structures, such as relative clauses, questions, conditionals, adverbials, and topics [4], [29]. Linguistic facial expressions differ from emotional expressions in their scope and timing and in the facial muscles that are used [28]. Linguistic facial expressions have a clear onset and offset, and are coordinated with specific parts of the signed sentence. These expressions are critical for interpreting the syntactic structure of many ASL sentences. For example, restrictive relative clauses are indicated by raised eyebrows, a slightly lifted upper lip, and a backward tilt of the head. When this combination of head and facial features occurs, the co-occurring lexical items are interpreted as constituting a relative clause [22]. Facial behaviors also constitute adverbials that appear in predicates and carry various specific meanings. For example, the facial expression glossed as MM (lips pressed together and protruded) indicates an action done effortlessly, whereas the facial expression TH (tongue protrudes slightly) means “carelessly” (see Fig. 1A). These two facial expressions accompanying the same verb (e.g., DRIVE) convey quite different meanings (“drive effortlessly” or “drive carelessly”).

Of course, deaf signers also use their face to convey emotional information. When perceiving visual linguistic input, ASL signers must be able to quickly identify and discriminate between different linguistic and affective facial expressions in order to process and interpret signed sentences. Thus, signers have a very different perceptual and cognitive experience with the human face compared to nonsigners. This experience appears to result in specific enhancements in face processing. Several studies have found that both hearing and deaf signers perform significantly better than nonsigners in distinguishing among similar faces (e.g., the Benton Faces Test), in identifying emotional facial expressions, and in discriminating local facial features [2], [7], [15], [16], [23]. It is possible that ASL signers exhibit a somewhat different neural representation for face perception due to their unique experience with human faces.

Recently, it has been argued that cognitively distinct aspects of face perception are mediated by distinct neural representations [17]. We hypothesize that the laterality of these representations can be influenced by the function of the expression conveyed by the face. Linguistic facial expressions are predicted to robustly engage left hemisphere structures only for deaf signers, whereas perception of emotional expressions is predicted to be lateralized to the right hemisphere for both signers and nonsigners.

An early hemifield study by Corina [10] found distinct visual field asymmetries for deaf ASL signers when recognizing linguistic and emotional facial expressions, compared to hearing nonsigners. The visual field effects were contingent upon the order of stimulus presentation. Both emotional and linguistic facial expressions produced significant left visual field (right hemisphere) asymmetries when emotional facial expressions were presented first. In contrast, when deaf signers viewed the linguistic expressions first, no significant visual field asymmetries were observed. Although suggestive, the results do not provide support for a dominant role of the left hemisphere in recognizing linguistic facial expressions. A right visual field (left hemisphere) advantage was not observed for linguistic facial expressions.

Nonetheless, data from lesion studies indicate that damage to the left hemisphere impairs signers' ability to produce ASL linguistic facial expressions [11], [21]. In contrast, damage to the right hemisphere impairs the ability to produce emotional facial expressions, but leaves intact the ability to produce linguistic facial expressions [11]. With respect to perception, a recent study by Atkinson et al. [3] examined the comprehension of non-manual markers of negation in British Sign Language (BSL) by signers with left- or right-hemisphere damage. Non-manual negation in BSL is marked by a linguistic facial expression and an accompanying headshake. Right-hemisphere-damaged signers were impaired in comprehending non-manual negation, in contrast to left-hemisphere-damaged signers who were unimpaired. However, a negative headshake is obligatory for grammatical negation in BSL, and recognition of a headshake is distinct from the recognition of the linguistic facial expressions because (1) a headshake can be used non-linguistically to signal a negative response and (2) a headshake can occur without signing, unlike grammatical facial expressions which are bound to the manual signs and do not occur in isolation. Thus, the neural organization for the recognition of linguistic facial expressions may differ from that for the recognition of headshakes marking negation. With the advent of functional neural imaging, we can now study the brain regions involved in the perception of linguistic and emotional facial expressions in intact deaf signers with much greater anatomical precision than is possible with lesion studies.

Neuroimaging results with hearing subjects indicate that the superior temporal sulcus (STS) is critically involved in processing changeable aspects of the face, such as eye gaze [19], mouth configuration [27], and facial expression [17]. Furthermore, attention to emotional facial expression can modulate activity within the right superior temporal sulcus [24]. We predict that attention to linguistic facial expressions will produce greater activity within the left STS for deaf signers than for hearing nonsigners.

In addition, recognition of facial expressions may modulate activity within the face-responsive areas within inferior temporal cortex. In particular, the fusiform gyrus (FG) has been identified as crucial to the perception of faces and as particularly critical to perceiving invariant properties of faces, such as gender or identity [17], [20]. Activation within the fusiform gyrus in response to faces may be bilateral but is often lateralized to the right hemisphere. We hypothesize that the linguistic content of ASL facial expressions for deaf signers will modulate activity within the fusiform gyrus, shifting activation to the left hemisphere. In contrast, we hypothesize that hearing nonsigners will treat the unfamiliar ASL linguistic expressions as conveying social or affective information, even though these expressions differ from canonical affective expressions [28]. Thus, activation in the fusiform gyrus is expected to be bilateral or more lateralized to the right hemisphere for hearing subjects with no knowledge of ASL.
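To make the notion of a lateralization shift concrete, hemispheric asymmetry is often summarized with a laterality index computed over homologous regions of interest, LI = (L − R)/(L + R), where L and R are activation measures such as suprathreshold voxel counts in the left and right hemisphere. The sketch below is purely illustrative: the index, the region labels, and the voxel counts are our assumptions and are not taken from the analyses reported in this paper.

```python
import numpy as np

def laterality_index(left, right):
    """Laterality index over homologous ROIs.

    +1 indicates fully left-lateralized activation, -1 fully
    right-lateralized, and values near 0 indicate bilateral activation.
    """
    left, right = float(np.sum(left)), float(np.sum(right))
    return (left - right) / (left + right)

# Hypothetical suprathreshold voxel counts in left vs. right fusiform gyrus
# for one subject viewing linguistic facial expressions (illustrative only).
li = laterality_index(left=412, right=198)
print(f"Fusiform LI = {li:.2f}")  # positive -> left-lateralized
```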

In our study, subjects viewed static facial expressions performed by different models who produced either emotional expressions or linguistic expressions (adverbials indicating manner and/or aspect) with or without accompanying ASL verbs. Subjects made same/different judgments to two sequentially presented facial expressions, blocked by expression type. This target task alternated with a control task in which subjects made same/different judgments regarding gender (the models produced neutral expressions with or without verbs). Fig. 1 provides examples of the ‘face only’ condition, and Fig. 2 provides examples from the ‘face with verb’ condition. In this latter condition, models produced ASL verbs with a linguistic, neutral, or emotional facial expression. The ‘face only’ condition was included because most previous face processing studies presented isolated face stimuli. Although emotional facial expressions can be produced without an accompanying manual sign, ASL linguistic facial expressions are bound morphemes (like -ing in English) that must co-occur with a manually produced sign. Therefore, we included a second ‘face with verb’ condition in order to present the linguistic facial expressions in a more natural linguistic context.
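As a rough illustration of this alternating block structure (not the authors' actual run scripting, which is not described here), the sketch below generates a sequence in which expression-matching blocks, blocked by expression type, alternate with gender-judgment control blocks; the block counts, trial numbers, and ordering scheme are hypothetical.

```python
import random

def build_run(condition, expression_types=("emotional", "linguistic"),
              trials_per_block=8, seed=0):
    """Return an alternating target/control block list for one run of the
    given stimulus condition ('face only' or 'face with verb')."""
    rng = random.Random(seed)
    order = list(expression_types)
    rng.shuffle(order)  # counterbalance expression-type order across runs
    run = []
    for expr in order:
        # Target block: same/different judgments on facial expressions.
        run.append({"condition": condition, "task": f"match {expr} expression",
                    "n_trials": trials_per_block})
        # Control block: same/different judgments on gender (neutral faces).
        run.append({"condition": condition, "task": "match gender (control)",
                    "n_trials": trials_per_block})
    return run

for block in build_run("face only"):
    print(block)
```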

Section snippets

Participants

Ten deaf native signers (five male, five female, mean age=29.4±6 years) and 10 hearing nonsigners (five male, five female, mean age=24.2±6 years) participated in the experiment. All of the deaf native signers had deaf parents and learned ASL from birth. All were prelingually deaf with severe to profound hearing loss (90 dB or greater) and used ASL as their preferred means of communication. Hearing nonsigners had never been exposed to ASL. All subjects had attended college (an average of 5.1 and

Behavioral results

All subjects performed the tasks without difficulty. Separate two-factor ANOVAs (2 (subject group)×2 (facial expression type)) were conducted on the accuracy data for the ‘face only’ and the ‘face with verb’ conditions. For the ‘face only’ condition, there were no significant main effects of group or facial expression type and no interaction between subject group and facial expression (F<1). For the emotional expressions, deaf and hearing subjects were equally accurate (81.8% and 80.6%,
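For readers who want to reproduce this kind of analysis, the sketch below runs a 2 (subject group) × 2 (facial expression type) accuracy ANOVA on made-up data, treating group as between-subjects and expression type as within-subjects (a mixed ANOVA via the pingouin package). The subject labels and accuracy values are simulated for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; any mixed-ANOVA routine would do

rng = np.random.default_rng(0)
rows = []
for group in ("deaf", "hearing"):
    for s in range(1, 11):  # 10 subjects per group, as in the study
        for expression in ("emotional", "linguistic"):
            rows.append({
                "subject": f"{group[0]}{s:02d}",
                "group": group,
                "expression": expression,
                # Simulated accuracy near the ~80% level reported above.
                "accuracy": rng.normal(loc=0.80, scale=0.05),
            })
df = pd.DataFrame(rows)

# Mixed ANOVA: group is between-subjects, expression type is within-subjects.
aov = pg.mixed_anova(data=df, dv="accuracy", within="expression",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```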

Discussion

Overall, our results revealed consistent activation in face-related neural regions for the recognition of both emotional and linguistic facial expressions for deaf and hearing subjects. Thus, the neural organization underlying facial expression recognition is relatively robust since these regions are engaged in processing all types of facial information, including linguistic facial expressions. However, the lateralization of activation within these face-related neural regions appears to be

Acknowledgements

This research was supported by National Institutes of Health grant R01 HD13249. We are grateful to Judy Reilly for her help with the FACS analysis and to Cecelia Kemper for technical assistance with MRI scanning. We would also like to thank all of the Deaf and hearing individuals who participated in our study.

References (34)

  • C. Baker et al., ASL: A Teacher's Resource Text on Grammar and Culture (1980)
  • D. Bavelier et al., Visual attention to the periphery is enhanced in congenitally deaf individuals, Journal of Neuroscience (2000)
  • D. Bavelier et al., Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing, Journal of Neuroscience (2001)
  • J. Bettger et al., Enhanced facial discrimination: effects of experience with American Sign Language, Journal of Deaf Studies and Deaf Education (1997)
  • J.D. Cohen et al., PsyScope: a new graphic interactive environment for designing psychology experiments, Behavioral Research Methods, Instruments, and Computers (1993)
  • D. Corina et al., Neuropsychological studies of linguistic and affective facial expressions in deaf signers, Language and Speech (1999)
  • A. Damasio et al., Cortical systems for retrieval of concrete knowledge: the convergence zone framework

Cited by (87)

    • Sign language aphasia

      2022, Handbook of Clinical Neurology
    • Analysis of the visual spatiotemporal properties of American Sign Language

      2019, Vision Research
      Citation Excerpt:

      There are over 40 handshape variants in ASL, that require the observer to attend to fine differences in the configurations of the fingers to distinguish between them (Battison, 1978). Supporting the effects of experience with ASL, there are several studies showing that expert signers who have been signing since infancy (both deaf and hearing) exhibit altered and/or enhanced visual abilities for aspects of visual processing that might be important for sign language, such as categorical perception for facial expressions, visual motion perception, and face discrimination (Bavelier et al., 2000, 2001; Bosworth & Dobkins, 1999, 2002; Brozinsky & Bavelier, 2004; Emmorey & Kosslyn, 1996; Emmorey, Klima, & Hickok, 1998; Emmorey, McCullough, & Brentari, 2003; McCullough & Emmorey, 1997; McCullough, Emmorey, & Sereno, 2005; Poizner, 1983). Given that life-long experience with sign language alters visual processing, it is reasonable to predict that differences in visual processing between signers and non-signers might be greatest for visual stimulus properties that reflect the statistical range encountered in the perceived sign language signal.

    • Language learning in the adult brain: A neuroanatomical meta-analysis of lexical and grammatical learning

      2019, NeuroImage
      Citation Excerpt:

      Since learning in this paradigm may involve both the acquisition of lexical-like information (i.e., extracting specific phonological sequences from a speech stream) and the statistical learning of regularities in language (Karuza et al., 2013; Ullman, 2016), such studies could therefore not be clearly categorized as lexical or grammatical learning. The 205 excluded papers included, among others: review papers that did not include original (empirical) research (criterion 1 above; e.g., Friederici, 2004); papers that did not use appropriate methodology, analyses, or reporting to be included in an ALE analysis (criteria 2–5; e.g., Ghazi Saidi et al., 2013); papers that examined only subject groups with children under 12 or individuals with disorders (criterion 6; e.g., Friedrich and Friederici, 2011); papers that examined sign language (criterion 7; e.g., McCullough et al., 2005); and papers that did not examine the functional neuroanatomy of lexical or grammatical learning (criterion 8; e.g., de Zubicaray et al., 2014; Scott-Van Zeeland et al., 2010; Stewart et al., 2003). Note that many excluded papers, including some listed above, failed multiple inclusion criteria (e.g., Friedrich and Friederici, 2011, tested word learning in infants using ERPs, and thus did not meet criteria 2–6).
