Situating word deafness within aphasia recovery: A case report

Word deafness is a rare neurological disorder often observed following bilateral damage to superior temporal cortex and canonically defined as an auditory modality-specific deficit in word comprehension. The extent to which word deafness is dissociable from aphasia remains unclear given its heterogeneous presentation.

Debate over whether word deafness is dissociable from aphasia specifically spans the history of the field. Although Wernicke (1874) was the first to postulate an aphasia resulting from disruption in ascending afferent auditory pathways, Kussmaul (1877) coined the term "word deafness," describing it as a selective disorder of spoken language input separate from aphasia, and Lichtheim (1885) defined word deafness as a distinct syndrome within the connectionist framework (Case IV). Soon after, anatomical evidence from autopsies of patients with lesions in bilateral temporal cortex (Dejerine & Sérieux, 1898) and left subcortical structures (Liepmann, 1898) was presented. In tandem, an opposing view was put forth by both connectionist and anti-connectionist scholars. Wernicke and Friedlander (1893) asserted that Kussmaul's description instead reflected an incomplete view of sensory aphasia, while Marie (1906) stated that he had never observed such a case. Subsequent research on the topic can thus be understood as primarily aligning with one of two differing perspectives (Poeppel, 2001; Polster & Rose, 1998; Simons & Lambon Ralph, 1999; Spinnler & Vignolo, 1966; Stefanatos et al., 2005): one where word deafness is considered an apperceptive syndrome of auditory linguistic processing, with experimental speech perception tasks predominating the evidence presented in case reports (e.g., Albert & Bear, 1974; Goldstein, 1974; Miceli, 1982; Phillips & Farmer, 1990; Yaqub et al., 1988); and another where word deafness is an associative symptom of aphasia and psycholinguistic measures are used to quantify performance (e.g., Caramazza et al., 1983; Franklin, 1989; Franklin et al., 1994, 1996; Hier & Mohr, 1977; Howard & Franklin, 1988; Saffran et al., 1976).
A compelling yet often overlooked third perspective comes from the stroke literature, where word deafness is situated within aphasia recovery (e.g., Auerbach et al., 1982; Hier & Mohr, 1977; Kirshner et al., 1981; Kleist, 1962; Mohr et al., 1977; Weisenberg & McBride, 1935). Here, the disrupting mechanism need not be exclusively auditory (apperceptive) or linguistic (associative). Rather, damage to the left temporal lobe is sufficient to impair both auditory and linguistic processing to varying degrees, which recover at differing magnitudes and rates over time. The most systematic account of this perspective on word deafness remains a series of autopsy cases by the German neurologist Karl Kleist (1962). He proposed multiple types of word deafness delineated by linguistic unit (phoneme, word, word meaning, sentence) that partially resolve to milder forms or evolve into a sensory (Wernicke's) aphasia of a similar profile. A few case studies on word deafness include longitudinal information in support of this view; however, existing reports are anecdotal (e.g., Slevc et al., 2011; Stefanatos et al., 2005; Wolmetz et al., 2011), incomplete (e.g., Maffei et al., 2017), brief in duration (e.g., first few weeks of recovery; Coslett et al., 1984; Metz-Lutz & Dahl, 1984), or lack reliable neural data (e.g., Horenstein & Bealmear, 1973; Klein & Harper, 1956).
We report a case of a stroke patient with bilateral temporal lobe lesions whose presentation evolved over the course of his recovery: from a severe aphasia acutely to an atypical form of word deafness chronically, in which auditory linguistic processing was disrupted at the sentence level and above. Our research aims were (1) to provide a detailed account of his symptoms through the acute-to-chronic periods of recovery; (2) to characterize his current behavioral presentation relative to his recovery trajectory and in relation to the word deafness and stroke aphasia literature; and (3) to explore possible neurobiological mechanisms underlying his presentation using multimodal neuroimaging.

Case report description and methods
Mr. C (not his real initial) was a 29-year-old right-handed native English speaker at the time of his stroke. During our testing and when he had his stroke, he was unemployed but had previously worked in shipping and information technology. He had attended a high school for students with high academic achievement, completed an associate's degree, and excelled in an educational environment according to his mother. He endorsed enjoying reading, playing video games, and "learning new things" as hobbies. He had no prior neurologic or major psychiatric medical history. He and his mother also reported that his hearing, vision, speech-language, and cognitive function were normal at baseline. Mr. C provided written informed consent for all testing and for review of his medical records, and he was compensated for his time. This study was approved by the institutional review board at Vanderbilt University Medical Center (VUMC) and conducted in accordance with the principles of the 1964 Declaration of Helsinki. No part of the procedures or analyses of this case report was pre-registered prior to the research being conducted.

Stroke recovery over the first 20 months
The following details a reconstruction of Mr. C's recovery from stroke onset until he was evaluated by our research team. Information was obtained from his medical records, to which we had unrestricted access, and corroborated and supplemented through interviews with him and his mother. A summary of his recovery is provided in Table 1. Results from select clinical testing and neuroimaging are presented in Fig. 1.

Acute presentation
Per medical records, Mr. C presented to VUMC in 2019 after a three-day history of altered mental status. He was found to have bilateral middle cerebral artery infarcts (Fig. 1A) and a large luminal thrombus of the left internal carotid artery, most likely from a cardioembolic source. Transthoracic and transesophageal echocardiograms were without obvious abnormalities. Given that he presented >24 h after symptom onset, Mr. C was not a candidate for tissue plasminogen activator or thrombectomy. He was placed on dual antiplatelet therapy and discharged home with his family and home health therapies after a six-day hospital stay.
Treatment notes from his speech-language pathologist (SLP) indicated a severe aphasia acutely, as well as superimposed attentional deficits secondary to delirium, both of which improved rapidly. He initially was unable to follow commands or verbally express himself. Within a few days, he was observed to write, albeit with errors, at the sentence level to communicate wants and needs (e.g., writing "A laptop or ipad would make easier for me."). Auditory comprehension and verbal expression remained notably impaired. By discharge, he continued to have "significant difficulty with auditory comprehension" per an SLP note but could follow a small number of written commands with 100% accuracy. He also had begun speaking at the sentence level (e.g., saying "I used to know how to tie one of those at some point."), albeit inconsistently. Neurology notes stated that sensorimotor function was unaffected during this time, with no deficits on cranial nerve or sensorimotor examinations.

Subacute presentation
Mr. C was briefly seen by a home health SLP, whose notes documented severely impaired word comprehension (i.e., 50% accuracy on a word-to-picture matching task). Mr. C additionally endorsed significant difficulty comprehending words, as well as music and environmental sounds, at the time. His mother described creating an intensive home-based therapy program for him. For approximately 1 h per day, she and Mr. C's father would drill listening, reading, and writing word comprehension tasks with him. She additionally reported that Mr. C's father audio-recorded various environmental sounds for him to practice identifying. A few months after his stroke, both Mr. C and his mother reported a substantial improvement in auditory word comprehension and environmental sound recognition. No occupational or physical therapy services were documented in his chart or reported by Mr. C during this time.

Table 1 note: Acute = first week post-stroke; Subacute = second week to three months post-stroke; Chronic = greater than three months post-stroke; − = impaired functioning; + = spared or recovered functioning. a Impairment was secondary to hospital-acquired delirium.
At approximately one month post-onset, Mr. C was seen by an audiologist at VUMC for concerns regarding his peripheral hearing given his persistent difficulty with spoken language comprehension. Per his audiologist's assessment note, pure tone audiometry, tympanometry, and acoustic reflexes were normal (Fig. 1B). Otoscopy was unremarkable. Speech recognition thresholds were also normal, and word recognition thresholds were judged to be excellent for each ear; however, he required pictorial support for both and was unable to repeat stimuli, even when provided visual cues. Mr. C's examination overall suggested a "central" etiology for his comprehension deficit.

Chronic presentation
Per Mr. C and his mother's report, he continued to experience ongoing albeit gradual progress. His word comprehension deficits resolved, and his ability to both identify environmental sounds and follow a melodic line improved. He endorsed no difficulties with receptive linguistic or affective prosody. When unable to understand spoken language, primarily at the sentence level and beyond, he described what he "heard" as a "wa wa wa" similar to Charlie Brown's teacher in the Peanuts cartoon. At ~19 months post-onset, he was seen for an SLP evaluation with author K. C. at VUMC's outpatient rehabilitation clinic. Per her assessment note, he presented with near-unimpaired performance, receiving an aphasia quotient of 94.6 on the Western Aphasia Battery–Revised (Kertesz, 2006; Fig. 1C). She documented, however, a persistent impairment in auditory sentence comprehension that was partially ameliorated by a slowed speaking rate. At this time, he began weekly treatment with an emphasis on auditory linguistic comprehension using language and attention tasks.

Research testing between 21 and 25 months post-onset

Mr. C was referred for research testing shortly after beginning SLP treatment. He completed testing over the course of four months, including three behavioral sessions and two neuroimaging sessions (Fig. 3D).
Our testing took a psycholinguistic approach, a decision motivated by two key factors. First, there exists a wide range of linguistic processing tasks (both behavioral and for functional MRI) that have undergone rigorous psychometric validation and possess readily available normative data (e.g., Kay et al., 1996; Wilson, Eriksson et al., 2018; Wilson, Yen et al., 2018; Yen et al., 2019), allowing for detailed and reliable characterization of performance in a single case. Second, we aimed to situate Mr. C's performance within his recovery from aphasia; thus, focusing on linguistic characterization allowed for more direct comparison to the larger stroke aphasia literature (e.g., Wilson et al., 2023). A schematic of the psycholinguistic model that acted as a framework for this case report is shown in Fig. 2.

Behavioral testing
The two aims of behavioral testing were (1) to characterize Mr. C's linguistic processing relative to his nonlinguistic auditory processing, and (2) to compare linguistic processing across differing input modalities (i.e., auditory vs nonauditory) and linguistic units (i.e., words vs sentences). A schematic of the behavioral testing framework is shown in Fig. 3A.
Of note, lip reading was allowed unless restricted per standardized test instructions, although Mr. C reported experiencing minimal benefit from it.
2.2.1.1. LINGUISTIC PROCESSING. To obtain an initial profile of performance, we administered the Quick Aphasia Battery (QAB; Wilson, Eriksson et al., 2018), a time-efficient test that assesses language comprehension and production across auditory and nonauditory input modalities. We also administered four tasks (Extra word comprehension, Extra sentence comprehension, Written word comprehension, Writing) from the unpublished extended version of the QAB (available at https://langneurosci.org/qab). To probe Mr. C's sentence comprehension in greater depth, we administered the sentence comprehension task under two additional conditions using analogous stimuli from parallel forms of the QAB: (1) Slow sentence comprehension, where the examiner's speech rate was approximately half (M = 1.85 syllables per second [Hz]) that of the Sentence comprehension task (M = 3.45 Hz); and (2) Written sentence comprehension, where task items were presented as text. Presentation rate was reduced through use of enhanced prosody, exaggerated articulation, and strategic pausing, techniques known to increase comprehension of a spoken message for hearing-impaired listeners (Payton et al., 1994) or for healthy listeners perceiving the speech of individuals with dysarthria (e.g., Liss, 2007; Duffy, 2012; Park et al., 2016). Each of the two conditions was administered twice to ensure reproducibility of performance. To make findings comparable to other tasks, accuracy scores (i.e., 1 = correct, 0 = incorrect) were used in place of the QAB's five-point scoring scale for all tasks except Connected speech and Motor speech, which were interpreted at the item level (see Wilson, Eriksson et al., 2018 for details regarding QAB scoring).
We then evaluated Mr. C using a range of fine-grained word processing tasks. The majority came from the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA; Kay et al., 1996). The first two tasks (PALPA 1, 2) make use of minimally linguistic stimuli (nonword and word minimal pairs) and were designed to rule out the role of impaired auditory processing on performance on other PALPA tasks (Kay et al., 1996; Whitworth et al., 2013). The remaining tasks from the PALPA (5, 35, 36, 44, 46) were designed to isolate different linguistic processes and directly compare performance across differing input modalities. We also administered (1) the Boston Naming Test (BNT; Kaplan et al., 2001), which has the advantage over the QAB Picture naming task of containing more difficult items (del Toro et al., 2011) and extensive normative data (Tombaugh & Hubley, 1997); and (2) a shortened version (Breining et al., 2015) of the Pyramids and Palm Trees test (PPT; Howard & Patterson, 1992) to rule out a central semantic processing deficit.
2.2.1.2. NONLINGUISTIC AUDITORY PROCESSING. We assessed nonlinguistic auditory processing in three primary domains: working memory, environmental sound perception, and music perception. Auditory working memory was evaluated using the digit span forward, digit span backward, digit span sequencing, and number-letter sequencing tasks from the fourth edition of the Wechsler Adult Intelligence Scale (WAIS-IV; Wechsler, 2008). Environmental sound perception was evaluated using the Norms for Environmental Sound Stimuli (NESSTI; Hocking et al., 2013), a free identification test in which examinees wrote the name of the sound associated with auditory stimuli. The top ten items with the highest identification accuracy (98.37–100%) were selected for the purposes of this study. Finally, music perception was assessed using the Scale and Rhythm tasks of the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003), which have been previously used in studies of acquired amusia (e.g., Särkämö et al., 2009).
2.2.1.3. ANALYSES. Crawford–Howell t-tests, which estimate the atypicality of an individual's performance relative to a normative sample (Crawford & Garthwaite, 2012; Crawford & Howell, 1998), were performed for tasks with normative data using the singcar package (Rittmo & McIntosh, 2021) in R. All tests were one-tailed (alternative hypothesis: t < 0) with a conservative p < .02 to better control for inflated Type I error (Crawford et al., 2006); instances of .02 ≤ p < .05 were interpreted as approaching significance and potentially outside the bounds of the normative sample (Crawford et al., 2006). For the WAIS-IV, scaled scores (<8 = atypical performance) were instead interpreted. Performance on all other tasks was interpreted qualitatively.
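To make the analysis concrete, the Crawford–Howell test is a modified one-sample t-test in which the control standard deviation is inflated by a factor of √((n + 1)/n) and the statistic is referred to a t distribution with n − 1 degrees of freedom. A minimal sketch in R with hypothetical data follows; the actual analyses used the singcar implementation, and the control scores below are illustrative, not the normative samples used here.

```r
# Minimal sketch of the Crawford-Howell test of deficit (Crawford & Howell, 1998).
crawford_howell <- function(case, controls) {
  n <- length(controls)
  t_stat <- (case - mean(controls)) / (sd(controls) * sqrt((n + 1) / n))
  p <- pt(t_stat, df = n - 1)  # one-tailed (alternative hypothesis: t < 0)
  c(t = t_stat, df = n - 1, p = p)
}

controls <- c(0.96, 0.98, 1.00, 0.94, 0.97, 0.99, 0.95, 1.00)  # hypothetical control accuracies
crawford_howell(case = 0.80, controls = controls)
# Interpreted against the conservative alpha of p < .02; values in
# .02 <= p < .05 would be read as approaching significance.
```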

Multimodal neuroimaging
Our two aims in obtaining multimodal neuroimaging were to characterize Mr. C's neural activity (1) during linguistic processing across differing linguistic units (words vs narratives), input modalities (spoken vs written), and presentation rates (regular vs slow); and (2) during auditory processing of linguistic stimuli. A secondary aim was to quantify the location and extent of Mr. C's lesions. Conditions and example stimuli of the functional MRI paradigms are shown in Fig. 3B and C.

2.2.2.1. FUNCTIONAL PARADIGMS. Mr. C completed seven unique functional MRI paradigms. Three involved word processing (Adaptive word paradigms) and four engaged narrative processing (Narrative paradigms).
The three Adaptive word paradigms were Written rhyme matching, Written semantic matching, and Spoken semantic matching. All were a simple AB block design with 10 blocks per condition, each containing 4 to 10 trials that varied across seven levels of difficulty (2-down-1-up adaptive staircase procedure). Written rhyme matching and Written semantic matching have undergone previous psychometric validation at both the group and individual levels (Wilson, Yen et al., 2018; Yen et al., 2019). Spoken semantic matching is an auditory version of Written semantic matching but has not been previously validated or published (although see Philips et al., 2023 for data supporting the validity of a non-adaptive version of Spoken semantic matching, including its direct comparability with a non-adaptive version of Written semantic matching).
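As an illustration of the staircase logic, the following sketch steps difficulty up one level after two consecutive correct responses and down one level after any incorrect response, bounded by levels 1 and 7; the starting level and bookkeeping details are assumptions rather than published parameters of these paradigms.

```r
# Minimal sketch of a 2-down-1-up adaptive staircase over seven difficulty levels.
next_level <- function(level, correct, streak) {
  if (correct) {
    streak <- streak + 1
    if (streak == 2) {            # two consecutive correct: step up in difficulty
      level <- min(level + 1, 7)
      streak <- 0
    }
  } else {                        # any incorrect response: step down in difficulty
    level <- max(level - 1, 1)
    streak <- 0
  }
  list(level = level, streak = streak)
}

state <- list(level = 1, streak = 0)  # assumed starting level
for (correct in c(TRUE, TRUE, TRUE, FALSE, TRUE, TRUE)) {
  state <- next_level(state$level, correct, state$streak)
}
state$level  # difficulty level after this hypothetical response sequence
```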
Each paradigm consisted of language and perceptual conditions that required Mr. C to indicate positive responses via a button press. The language conditions, which targeted higher-order linguistic processes, were a pseudoword rhyming task (Written rhyme matching; Yen et al., 2019) and a semantic decision task presented in written (Written semantic matching; Wilson, Yen et al., 2018) and auditory (Spoken semantic matching; not previously published) modalities. For the spoken semantic decision task, stimuli were recorded by a female speaker. Item properties and difficulty levels were modulated similarly to the written version, except that in the spoken task, the interval between the onsets of successive trials ranged from 2.86 to 5 sec and the gap between the word pairs for each trial ranged from 10 to 500 msec according to the difficulty level.
Fig. 2 – Psycholinguistic Model of Auditory Linguistic Processing. Model of the hypothesized processes that underlie successful spoken language comprehension. Boxes and connecting arrows in blue depict linguistic processes; boxes and arrows in gray show auditory processes. This model is a simplified modification of Whitworth et al. (2013) and designed to also express interaction among processes (e.g., Dell et al., 1997; Levelt et al., 1999). Components of this model were ultimately derived from Kleist (1962), a student of Carl Wernicke whose work can be understood within the historical connectionist model (Lichtheim, 1885; Wernicke, 1874). To show this direct comparison, lettering in the top-right corner of the boxes corresponds to that used in Lichtheim (1885), as does the color-coding of the boxes and connecting arrows described above (i.e., those outlined in blue represent centers and pathways that result in select aphasia syndromes if damaged).

The perceptual conditions involved lower-level sensory processes and were matched to the language conditions across visual, motor, and cognitive domains. These were a symbol matching task (Written rhyme matching; Written semantic matching; Wilson, Yen et al., 2018; Yen et al., 2019) and a tone matching task (Spoken semantic matching; not previously published). For the tone matching task, three tones (C4, D4, E4 in scientific pitch notation, corresponding to 261.6 Hz, 293.7 Hz, and 329.6 Hz, respectively) were each sung as "da" by the same female speaker as for the spoken semantic decision task. As with the symbol matching task, presented sequences were either identical (e.g., C4–D4–E4, C4–D4–E4) or different (e.g., C4–D4–E4, C4–D4–C4). Item difficulty was manipulated by the number of tones in each sequence (ranging from 1 to 5, with the length of individual tones varied from 166 to 830 msec), the number of identical tones in different sequences (ranging from 0 to 4), the interval between the onsets of successive trials (ranging from 2.86 to 5 sec), and the gap between the tone sequence pairs for each trial (ranging from 310 to 800 msec).
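For illustration, one way "identical" and "different" tone sequences could be assembled from the three sung tones is sketched below; the sampling scheme and the single-substitution rule for "different" trials are assumptions, not published details of the paradigm.

```r
# Hypothetical construction of a tone matching trial from the three sung tones.
tones <- c(C4 = 261.6, D4 = 293.7, E4 = 329.6)  # fundamental frequencies in Hz

make_trial <- function(n_tones, match) {
  seq1 <- sample(names(tones), n_tones, replace = TRUE)
  seq2 <- seq1
  if (!match) {  # "different" trials: substitute one tone (an assumption)
    i <- sample(seq_len(n_tones), 1)
    seq2[i] <- sample(setdiff(names(tones), seq2[i]), 1)
  }
  list(sequence_1 = seq1, sequence_2 = seq2, match = match)
}

set.seed(1)
make_trial(n_tones = 3, match = FALSE)  # e.g., C4-D4-E4 versus C4-D4-C4
```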
The four Narrative paradigms were Written narrative, Slow written narrative, Spoken narrative, and Slow spoken narrative. All were a simple ABC block design with 10 blocks per condition (block length: M = 15.99 ± 2.89). Spoken narrative was augmented from a previously validated paradigm (Wilson, Yen et al., 2018) to include a rest condition and administered a second time (making for eight paradigms in total) using parallel stimuli to ensure reproducibility of neural activity. Written narrative, Slow spoken narrative, and Slow written narrative were based on Spoken narrative, among other narrative paradigms (e.g., Wilson, Bautista et al., 2018), and created for this case report. Similar paradigms have been shown to produce reliable maps at both the individual and group levels (Wilson, 2014; Wilson, Bautista et al., 2018; Wilson et al., 2017; Wilson, Yen et al., 2018).
Each paradigm consisted of language, perceptual, and rest conditions. As before, language conditions engaged higher-order linguistic processes while perceptual conditions involved lower-level sensory processes and were matched to the language conditions. The language conditions entailed reading (Written narrative, Slow written narrative) or listening to (Spoken narrative, Slow spoken narrative) excerpts from the Who Was? book series for children aged 8–12 years old, which contain simple language and are relatively easy for most individuals with aphasia to follow (Wilson, Yen et al., 2018). The books selected were Who Was William Shakespeare? (Written narrative), Who Was Thomas Edison? (Slow written narrative), Who Was Albert Einstein? and Who Was Amelia Earhart? (Spoken narrative), and Who Were the Beatles? (Slow spoken narrative). For the written narrative paradigms, words were presented at the same rates as Spoken narrative and Slow spoken narrative, respectively, and appeared left-to-right on the screen, remaining for the duration of the block. This offered a more naturalistic reading experience and allowed for direct comparison with the spoken narrative paradigms (Wilson, Bautista et al., 2018). To create the slow language conditions, presentation rate was reduced to approximately half (M = 1.59 Hz) of Written narrative and Spoken narrative (M = 3.30 Hz). For Slow spoken narrative specifically, author M. C. audio-recorded excerpts at the targeted presentation rate using the same facilitative techniques described above.
The perceptual conditions were modifications of the matched language conditions that rendered the message unintelligible. For Written narrative and Slow written narrative, the text was scrambled by replacing all consonants with randomly selected consonants and all vowels with randomly selected vowels (see the sketch following the acquisition details below). For Spoken narrative and Slow spoken narrative, the audio recordings were reversed. Finally, a rest condition was included to characterize auditory processing during the language conditions.

2.2.2.2. ACQUISITION. Scanning sessions were completed on a Philips Achieva 3 T scanner with a 32-channel head coil at the Vanderbilt University Institute of Imaging Science. Structural images (T1-weighted, T2-weighted FLAIR, coplanar T2-weighted) were acquired during the first scanning session. Functional images (T2*-weighted BOLD echo planar) were acquired during both scanning sessions (four paradigms each) using parameters detailed in our prior work (e.g., Yen et al., 2019). Total scan time was 400 sec (200 volumes plus an additional 4 discarded) for each of the Adaptive word paradigms and 480 sec (240 volumes plus an additional 4 discarded) for each of the Narrative paradigms. Visual stimuli were viewed via a mirror attached to the head coil, which reflected text projected onto a screen at the end of the scanner bore. Auditory stimuli were presented via MRI-safe headphones (Nordic NeuroLabs) and confirmed by Mr. C to be audible. All paradigms were implemented in MATLAB's Psychophysics Toolbox (Brainard, 1997; Pelli, 1997), and involved pre-scanning training and in-scanner practice. Behavioral responses for the Adaptive word paradigms were recorded via a button box placed in Mr. C's left hand. A six-item forced-choice written comprehension test for each Narrative paradigm was administered after the corresponding scanning session.
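Returning to the written perceptual conditions, the consonant/vowel scrambling can be sketched as follows; the handling of case, spaces, and punctuation is an assumption.

```r
# Sketch of the scrambling used for the written perceptual conditions: every
# consonant is replaced by a randomly selected consonant and every vowel by a
# randomly selected vowel, leaving the text unintelligible but visually word-like.
scramble_text <- function(text) {
  vowels <- c("a", "e", "i", "o", "u")
  consonants <- setdiff(letters, vowels)
  chars <- strsplit(tolower(text), "")[[1]]
  scrambled <- vapply(chars, function(ch) {
    if (ch %in% vowels) sample(vowels, 1)
    else if (ch %in% consonants) sample(consonants, 1)
    else ch  # spaces and punctuation pass through unchanged (an assumption)
  }, character(1))
  paste(scrambled, collapse = "")
}

set.seed(2)
scramble_text("who was amelia earhart")  # length-matched but unintelligible
```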
2.2.2.3. ANALYSES. Lesioned tissue was delineated manually in native space on structural images, after which all images underwent segmentation and enantiomorphic normalization to bring them into standardized space, as described previously (Wilson et al., 2023). Lesion extent in each hemisphere was calculated, and lesion location was determined via visual inspection.
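As a sketch, lesion extent reduces to counting voxels in the binary lesion mask and scaling by voxel volume; the voxel dimensions here are assumptions rather than the study's acquisition parameters.

```r
# Sketch of lesion extent calculation from a binary lesion mask (TRUE = lesion).
lesion_volume_cm3 <- function(mask, voxel_dims_mm = c(1, 1, 1)) {
  sum(mask) * prod(voxel_dims_mm) / 1000  # mm^3 converted to cm^3
}
# Applied to each hemisphere's mask separately, this yields the per-hemisphere
# volumes reported in the structural findings below.
```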
Fig. 3 caption (fragment) – B–C Conditions and example stimuli for the four Narrative functional MRI paradigms. D Study timeline, including time post-onset and types of behavioral tests or functional MRI paradigms completed. For the second scanning session, all four Narrative paradigms were administered. min. = minimum; comp. = comprehension; no. = number; PALPA = Psycholinguistic Assessments of Language Processing in Aphasia (Kay et al., 1996); QAB = Quick Aphasia Battery (Wilson, Eriksson et al., 2018); PPT = Pyramids and Palm Trees (Howard & Patterson, 1992); BNT = Boston Naming Test (Kaplan et al., 2001); WAIS-IV = Wechsler Adult Intelligence Scale–Fourth Edition (Wechsler, 2008); NESSTI = Norms for Environmental Sound Stimuli (Hocking et al., 2013); MBEA = Montreal Battery of Evaluation of Amusia (Peretz et al., 2003).

Crawford–Howell t-tests for accuracy (percent correct across trials), difficulty (mean difficulty level across trials), and speed (reaction time in seconds for button-press trials only) were completed for behavioral responses from paradigms with normative data. Behavioral responses from the remaining paradigms were interpreted qualitatively. Functional images were processed following previously described procedures (e.g., Schneck et al., 2021; Wilson, Yen et al., 2018; Yen et al., 2019). These were manually inspected for motion artifacts, preprocessed, and co-registered to the structural images. The reference T1-weighted image was then segmented and all images were brought into standardized space. Paradigms were then modeled using a general linear model with the following contrasts: language versus perceptual for all paradigms and language versus rest for the spoken narrative paradigms.
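To make the modeling step concrete, a schematic sketch of the block-design general linear model follows; the block length, TR, and regressor construction are assumptions, and hemodynamic convolution and nuisance terms, which a real analysis would include, are omitted.

```r
# Schematic general linear model for one Narrative paradigm (ABC block design).
n_vol <- 240
vols_per_block <- 8  # assumed: ~16-sec blocks at an assumed 2-sec TR
labels <- rep(rep(c("lang", "perc", "rest"), times = 10),
              each = vols_per_block)[1:n_vol]

X <- cbind(intercept = 1,
           lang = as.numeric(labels == "lang"),
           perc = as.numeric(labels == "perc"))  # rest is the implicit baseline

# Per-voxel least-squares fit with a language-versus-perceptual contrast:
contrast_lang_vs_perc <- function(y, X) {
  beta <- solve(crossprod(X), crossprod(X, y))
  drop(c(0, 1, -1) %*% beta)
}
```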
We then applied a relative threshold (e.g., Gross & Binder, 2014; Wilson et al., 2017; Wilson, Yen et al., 2018) to each contrast map, where the top 5% of t-statistics in clusters 2 cm³ or larger were retained, as well as an absolute threshold of p < .001 uncorrected to ensure that spurious activations resulting from noise were eliminated. The resultant maps were interpreted qualitatively via visual inspection and with reference to normative data for relevant paradigms.
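A sketch of this combined thresholding rule follows, assuming a vectorized t-map and cluster labels from a prior connected-components step (which in practice would come from dedicated neuroimaging tooling); the voxel volume is also an assumption.

```r
# Sketch of the combined relative + absolute threshold: keep the top 5% of
# t-statistics, require p < .001 uncorrected, and retain only clusters of
# 2 cm^3 or larger.
threshold_map <- function(tmap, cluster_labels, df, voxel_vol_mm3 = 8) {
  rel_cut <- quantile(tmap, probs = 0.95, na.rm = TRUE)  # top 5% of t-statistics
  abs_cut <- qt(0.999, df = df)                          # p < .001, one-tailed
  supra <- !is.na(tmap) & tmap >= max(rel_cut, abs_cut)
  min_vox <- ceiling(2000 / voxel_vol_mm3)               # 2 cm^3 in voxels
  sizes <- table(cluster_labels[supra])
  keep <- names(sizes)[sizes >= min_vox]
  supra & (as.character(cluster_labels) %in% keep)
}
```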

Case report findings
Mr. C endorsed strong motivation to participate in all behavioral testing and scanning sessions with no overt indication of disinterest, fatigue, or boredom.

Behavioral testing
The following details the results and our interpretation of the findings, the latter of which was done in the context of his recovery trajectory and the extant evidence on word deafness and aphasia recovery. When relevant, we also discuss the degree to which testing performance aligns with impairment in auditory processing, linguistic processing, or both.

Linguistic processing
Mr. C's performance across QAB tasks was predominantly within the bounds of the normative sample (Table 2). He performed in the impaired range on sentence-level tasks and, to a lesser extent, on auditory language tasks. He was significantly lower (p = .01) or approaching significantly lower (p = .03) than the normative sample on the Sentence comprehension and Extra sentence comprehension tasks, respectively. For each administration, he displayed the same pattern for incorrect items; he missed two items with passive syntactic constructions (e.g., Are doctors treated by patients?), one item with complex aspectual and modal syntactic elements (e.g., If I tell you I used to exercise, do you think I exercise now?), and one item with simple syntax that required interpretation of an action (e.g., Do you cut the grass with an axe?). The items missed were also somewhat longer than those answered correctly (mean of 9.55 syllables compared to 8.85 syllables) and presented at a relatively faster presentation rate (mean of 3.67 Hz compared to 3.32 Hz).
When presentation rate was slowed, his performance improved from marginally above chance (54–66%) to near perfect (92% accuracy for both administrations).
Mr. C's score on the Repetition task also approached significance (p = .02), with reduced performance on longer words and sentences. He made phonemic errors on a multisyllabic word (i.e., /kəntæstrəfi/ for catastrophe) that was readily corrected with an examiner repetition at a slowed presentation rate and on a complex sentence (i.e., The ambitious journalist discovered where we'd be going), which could only be repeated when chunked into 1–2 word segments. In contrast, accuracy was perfect for all other nonauditory language tasks, both at the word and sentence levels, and for auditory language tasks at the word level (Table 2).
During connected speech, Mr. C occasionally produced paragrammatic utterances and experienced anomic episodes, which resulted in a slightly reduced speech rate and an overall perception of mildly impaired communication. His connected speech comprehension, although not quantified on the QAB, appeared markedly affected. During an 11-min conversation, he endorsed difficulty comprehending ~86% of questions (38 out of 44) despite a quiet environment and the use of facilitative techniques by the two examiners (authors M. C., J. L. E.).
Mr. C's performance on fine-grained word processing tasks was relatively strong, mirroring prior clinical reports and QAB performance (Table 3). Of the 21 measures with normative data, only six were significantly lower than the normative sample and another approached significance; all but one of these were auditory language measures. Of the six auditory language measures on which Mr. C scored in the impaired range, one from PALPA 2 required minimal linguistic processing (same judgments: p < .001); three from PALPA 5 required the integration of auditory-phonemic information with lexical representations (low frequency: p < .001, low imageability: p = .01, both: p < .001); and two from PALPA 45 required integration of auditory-phonemic information with graphemic representations (3-letter: p < .001, 6-letter: p = .03). A similar integration is necessary for the one nonauditory language measure on which he scored in the impaired range, where visual-orthographic information is mapped to lexical representations (PALPA 35, exception words: p < .001). No impairments were noted in integrating visual-object information with lexical representations (BNT: p = .51) or nonlinguistic semantic relations (PPT: p = .65).
In contrast, he demonstrated relative preservation of lower-level processing across auditory and nonauditory language tasks. He was within the range of the normative sample for three out of four auditory language measures requiring minimal linguistic processing (PALPA 1, same judgments: p = .12, different judgments: p = .65; PALPA 2, different judgments: p = .67). His performance was similarly strong on nonauditory language measures that differentially tax visual-graphemic processing (PALPA 35, regular words: p = .58; PALPA 36: p = .51–.68; see Coltheart, 2005 for a detailed description of possible linguistic processes for reading different types of words).
3.1.1.1. DISCUSSION. In the chronic period of stroke recovery, Mr. C exhibited a mild yet persistent deficit on linguistic processing tasks, with biases in both input modality (auditory-greater-than-nonauditory impairment) and linguistic unit (sentence-greater-than-word impairment). This pattern of impairment aligns generally with a subtype of word deafness that Kleist (1962) described as "sentence deafness," which he identified in an individual with bilateral temporal lesions.
Compared with his recovery trajectory outlined above, Mr. C's performance represents not only a substantial recovery from aphasia but an evolution of his subacute word deafness profile into a milder form. Similar patterns of recovery have been documented in the word deafness literature (Albert & Bear, 1974; Auerbach et al., 1982; Buchman et al., 1986; Coslett et al., 1984; Denes & Semenza, 1975; Hemphill & Stengel, 1940; Maffei et al., 2017; Stefanatos et al., 2005). Of these cases, all but two (Coslett et al., 1984; Stefanatos et al., 2005) were also in the chronic period of recovery and had experienced improvement in both aphasia and word deafness symptoms.
Mr. C's most salient impairment on linguistic testing was auditory comprehension for sentences that were longer, faster, and more syntactically complex, which comports with observations of his connected speech comprehension and his self-report. Two possible explanations may underlie this pattern of performance. The first is that Mr. C's auditory processing is impaired: syntactically complex sentences have a high proportion of closed class words, which often contain weak syllables and are thus more difficult to extract from the acoustic signal (e.g., Cutler et al., 1997; Cutler & Norris, 1988). These types of sentences often are longer in length, further impeding auditory processing. The effect of reduced presentation rate, both across conditions and within the regular presentation rate condition, on Mr. C's accuracy supports this perspective, where key changes to the acoustic signal (e.g., decreased coarticulation; Browman & Goldstein, 1990) boost saliency. The presentation rate effect has been reported not only in word deafness following bilateral temporal lesions (Albert & Bear, 1974; Brick et al., 1985) but also aphasia (e.g., Albert, 1972; Carmon & Nachshon, 1971; Efron, 1963; Goodglass et al., 1970; Robson et al., 2012, 2013, 2019) and amusia (e.g., Liégeois-Chauvel et al., 1998; Peretz, 1990) following left temporal lesions. This neuroanatomical convergence appears to evidence a special role for the left temporal lobe in rapid temporal resolution for speech perception (Hickok & Poeppel, 2007; Poeppel, 2001). The second explanation is that his linguistic processing is impaired. Here, closed class words remain the problem but of a different source: these words have a relative dearth of semantic associations, limiting the use of top-down processing as a strategy to compensate for other linguistic deficits (e.g., accessing syntactic and lexical representations; Kay et al., 1996; Whitworth et al., 2013; Beeson et al., 2022). In this view, difficulty with closed class words should be observed across input and output modalities. Though less conspicuous, Mr. C displayed hints of a persistent impairment in integrating lexical information across auditory (e.g., reduced performance on auditory lexical decision) and written (e.g., reduced performance on oral reading of exception words) input modalities. His connected speech production was also occasionally paragrammatic and anomic, two behaviors that co-occur in individuals with lexical and syntactic deficits (Casilio et al., 2019; Vermeulen et al., 1989), and are closely associated with left temporal lesions (Casilio et al., 2023). One final piece of evidence supporting a residual linguistic impairment was his near-unimpaired performance on minimal pair discrimination (PALPA 1, 2), tasks that predominantly involve auditory processing. However, strong performance here does not necessarily mean the absence of an auditory processing deficit, but rather that linguistic impairments are contributory to performance on other tasks in the auditory modality, as observed here.

Table 2 – Performance on the Quick Aphasia Battery (Wilson, Eriksson et al., 2018).
In summary, there is reasonable evidence to support both explanations. With regard to auditory processing, it is plausible that his deficit is in rapid temporal resolution, as mentioned above, or potentially in binaural integration, which has also been found to be impaired in word deafness cases resulting from unilateral and bilateral lesions (cf. Miceli & Caccia, 2022). For linguistic processing, the majority of the evidence suggests a disruption to both lexical and syntactic domains, comporting with the "sentence deafness" profile originally described by Kleist (1962). Thus, it appears Mr. C's performance is likely underpinned by residual impairments in both auditory and linguistic processing.

Nonlinguistic auditory processing
Performance on nonlinguistic auditory processing was relatively more variable (Table 4). Scores on all four WAIS-IV tasks of auditory working memory were in the range of the age-matched normative sample. Overall accuracy on the environmental sound stimuli selected from the NESSTI was 50%, representing impaired performance. Performance on the MBEA was significantly worse than the normative sample for the Scale task (p < .001) and approaching significance for the Rhythm task (p = .03).
Compared with his recovery trajectory, Mr. C's performance represents a relatively modest improvement in nonlinguistic auditory processing, although this is ultimately challenging to quantify in the absence of clinical testing in these domains. Recovery of nonlinguistic auditory processing is largely undocumented in word deafness cases (although see Engelien et al., 1995). The stroke aphasia literature is similarly sparse; however, emerging evidence suggests that auditory processing for both linguistic and nonlinguistic stimuli does not improve in Wernicke's aphasia despite partial recovery of overall language comprehension (Robson et al., 2019). Moreover, some have speculated that the individual variation in recovery from Wernicke's aphasia and left temporal lesions is due to poorer recovery of auditory processing more generally as compared with linguistic processing (Hier & Mohr, 1977; Kirshner et al., 1981; Mohr et al., 1977; Wilson et al., 2023).

Multimodal neuroimaging
The following details the neuroimaging results and our interpretation of the findings, the latter of which was done in the context of two hypothesized mechanisms for word deafness, as mentioned in the introduction: (1) an anatomical or functional disconnection of auditory and language brain regions (disconnection mechanism); and (2) damage to regions specialized for auditory and linguistic processing (localization mechanism). We additionally comment where relevant on the degree to which findings align with his behavioral presentation.

Structural scans
Mr. C's T1 structural scan (Fig. 4) confirmed bilateral infarcts to the temporal lobes. In the left hemisphere, the lesion extended from the mid-anterior portion of the superior temporal gyrus and sulcus to the posterior temporoparietal junction, with involvement of the inferior parietal lobe. In the right hemisphere, the lesion was more expansive, involving nearly the entirety of the middle and superior temporal gyri and sulci with an island of spared tissue in the mid-posterior region of the superior temporal gyrus. The lesion also extended discontinuously and superiorly into the parietal and frontal lobes, as well as medio-inferiorly into the insula, basal ganglia, and underlying white matter. There was left-greater-than-right damage to Heschl's gyri and underlying white matter, but spared tissue in these regions was also appreciable bilaterally. Total lesion volume was 72.38 cm³ (left hemisphere: 28.36 cm³; right hemisphere: 44.02 cm³). Given that damage was most predominant in the superior gyri and sulci of the bilateral temporal lobes, the term "superior temporal cortex" will henceforth be used to refer to Mr. C's lesion profile.
Regarding neurobiological mechanisms, Mr. C's lesion profile offers no definitive evidence of an anatomical disconnection, as first hypothesized by Liepmann (1898) and refined by others (e.g., Geschwind, 1965; Poeppel, 2001). Specifically, Heschl's gyri, along with their ipsilateral and contralateral white matter pathways, were relatively preserved bilaterally while superior temporal cortex of the left hemisphere was largely destroyed. Our findings mirror two prior case reports (Maffei et al., 2017; Slevc et al., 2011) of individuals with left-hemisphere lesions, one of whom presented with a sentence deafness profile similar to Mr. C's (Maffei et al., 2017). Although these cases used diffusion tensor imaging, which we could not acquire due to scanner malfunction, high-resolution MRI, as used here, still provides a more detailed view of tissue integrity than most prior reports.
A localization mechanism provides a more cogent interpretation of the neuroanatomical evidence. Although superior temporal cortex bilaterally is involved in auditory-phonemic analysis (Binder et al., 2000; Gutschalk et al., 2015; Hickok & Poeppel, 2000), certain aspects such as rapid temporal resolution do appear to be left lateralized (e.g., Hickok & Poeppel, 2007; Poeppel, 2001). Moreover, language processing depends almost exclusively on left superior temporal cortex (Malik-Moraleda et al., 2022; Price, 2012; Price et al., 2010), which plays a critical role in integrating both auditory-phonemic signals and visual-graphemic symbols with linguistic representations to support language comprehension more generally (Wilson, Bautista et al., 2018). Importantly, Mr. C performed in the impaired range on linguistic processing tasks designed to target this integration. Thus, it appears likely that damage in the left hemisphere is driving not only the linguistic component of his presentation but the auditory one as well; the right hemisphere is likely contributory but may not be critical (see Miceli & Caccia, 2022, 2023 for evidence on the rarity of word deafness following right hemisphere lesions).

Adaptive word paradigms
Mr. C's performance was within the range of the normative sample for all conditions except accuracy on the language condition for Written rhyme matching (p < .001; Table 5). Normative data were not available for Spoken semantic matching; however, performance on the language condition was qualitatively reduced as compared with the language condition of Written semantic matching for accuracy (60% vs 87%), difficulty (2.11 vs 4.15 mean level), and speed (3.32 vs 2.57 sec).
All three language activation maps were strongly left-lateralized and primarily localized to the inferior frontal gyrus; ventral precentral gyrus and sulcus; and ventral temporal region, specifically the posterior inferior temporal gyrus (Fig. 5). Inferior parietal lobe activation was also present in Written rhyme matching and Spoken semantic matching (Fig. 5A and C), and anterior temporal lobe activation was seen in Written semantic matching (Fig. 5B).
3.2.2.1. DISCUSSION. Mr. C's performance across the Adaptive word paradigms was exceptional, particularly given his lesion pattern, and further supports findings of a sentence deafness profile from his behavioral testing, where word-level processing was a relative strength. His in-scanner behavioral responses were unimpaired on Written semantic matching and near-unimpaired on Written rhyme matching. For Spoken semantic matching, scores appeared modestly reduced relative to Written semantic matching; however, in the absence of a normative sample, it is unclear whether this difference reflects systematic measurement bias across the two sets of conditions (e.g., slower response times for auditory vs written input, on average, across individuals) versus a true reduction in performance in the spoken modality.
Regarding neurobiological mechanisms, activation patterns revealed no overt evidence of a functional disconnection mechanism. Neurofunctional performance appeared similar across all three Adaptive word paradigms. Aside from the absence of activation in left superior temporal cortex, which was destroyed, language activation maps were comparable to those of the normative samples for Written rhyme matching (Yen et al., 2019) and Written semantic matching (Wilson, Yen et al., 2018). Qualitatively, activation patterns from Spoken semantic matching were almost identical to Written semantic matching, although again no definitive conclusions can be made in the absence of normative data.
A localization mechanism offers a more viable interpretation. Mr. C's remarkable behavioral performance and near-normal activation patterns suggest that his language network is compensating adequately for the loss of tissue in bilateral superior temporal cortex, meaning that another region within the language network has the functional capacity to integrate auditory-phonemic and linguistic representations. A likely candidate is the left ventral temporal region, which was consistently activated across all three paradigms and in the normative samples (Wilson, Yen et al., 2018; Yen et al., 2019). Notably, prior work has shown this region to be a multisensory hub for supporting lexical and semantic processing (e.g., Büchel et al., 1998; Price & Devlin, 2011; Turken & Dronkers, 2011). Thus, it seems plausible the left ventral temporal region could have the capacity for functional specialization, either premorbidly or in response to tissue loss, of auditory and linguistic processes more commonly associated with left superior temporal cortex and successful spoken language comprehension.

Table 5 note: Given the adaptive nature of the tasks, the N for accuracy and difficulty varies as a function of Mr. C's speed in responding to items; the N for speed reflects only trials in which a button press was logged (i.e., a pair of stimuli were judged to match in some regard); mean level of difficulty ranged from 1 to 7; RT = reaction time (seconds); df = degrees of freedom; * = significant result at p < .02; bold font face = result interpreted as significantly different from the normative sample per Crawford–Howell t-test, scaled score, or item-level/qualitative interpretation.

Narrative paradigms
Mr. C demonstrated perfect accuracy on the Written narrative and Slow written narrative out-of-scanner tests, and he was able to recount several details (e.g., specific family members in the Thomas Edison story) about each narrative when asked to summarize what he recalled. However, he was at chance for both administrations of Spoken narrative and for Slow spoken narrative. Here, he reported that, although he could hear everything adequately, he "caught" only one or two details (e.g., mention of a house in the Amelia Earhart story) but otherwise was not able to follow or understand the larger narrative.
Language activation maps revealed a modality-specific divergence in response to the four Narrative paradigms (Fig. 6), all of which were derived by contrasting their respective language versus perceptual conditions. Specifically, Written narrative (Fig. 6A) and Slow written narrative (Fig. 6B) activation patterns were notably strong, with activity in the left inferior frontal gyrus, left middle precentral gyrus, and left anterior temporal lobe. Written narrative showed some additional activation in the right anterior temporal lobe. In contrast, both administrations of Spoken narrative (Fig. 6C) and Slow spoken narrative (Fig. 6D) showed no activation anywhere in the brain.
When contrasting language versus rest conditions for the three spoken narrative runs combined (i.e., activation had to be present for all three), there was converging activation in Heschl's gyrus bilaterally. Activation occurred in both healthy tissue (e.g., right hemisphere activation at slice y = −20 of Fig. 7) and along the edges of lesioned tissue (e.g., left hemisphere activation at slice y = −30 of Fig. 7).
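This conjunction amounts to a voxelwise intersection of the three thresholded language-versus-rest maps; a minimal sketch follows, where the map objects are hypothetical logical arrays produced by a thresholding step like the one sketched in the methods.

```r
# Voxelwise conjunction across the three spoken narrative runs: a voxel is
# retained only if it survives thresholding in every run. The map objects are
# hypothetical logical arrays of identical dimensions.
conjunction <- Reduce(`&`, list(map_spoken_run1, map_spoken_run2, map_slow_spoken))
```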
3.2.3.1. DISCUSSION. Performance across the Narrative paradigms appeared contingent on modality, further supporting findings from the behavioral testing of a sentence deafness profile. Beyond Mr. C's at-chance accuracy on the out-of-scanner tests for all three spoken narrative runs, his recruitment of language regions in response to the language condition was not statistically different from that of the perceptual condition. In other words, his neural responses to spoken narratives were no different from his neural responses to reversed narratives, which carry no meaningful linguistic information, and this lack of difference was replicated three times across two unique paradigms and two testing sessions. This absence of language-specific activation occurred despite robust engagement of bilateral auditory regions across both administrations of Spoken narrative and Slow spoken narrative. Thus, when considered in conjunction with his strong behavioral and neural performance on Spoken semantic matching, it appears that Mr. C has sufficient residual tissue in bilateral Heschl's gyri to support at least rudimentary auditory processing for spoken language.
Regarding neurobiological mechanisms, Mr. C's performance seems to superficially support a functional disconnection mechanism. He was able to successfully engage key brain regions for both auditory and linguistic processing but only under select conditions. However, a critical challenge to this perspective is the extensive damage to bilateral superior temporal cortex. Although Mr. C does successfully recruit anterior (Written narrative, Slow written narrative) and left ventral (Written rhyme matching, Written semantic matching, Spoken semantic matching) temporal regions during successful language comprehension, this does not necessarily mean these same regions will fill the role of bilateral superior temporal cortex, which was largely destroyed. In particular, the loss of left superior temporal cortex to support both auditory and linguistic processing appears to be critical.

Fig. 5 – A–C Activation maps contrasting language conditions relative to matched perceptual conditions. The color bar reflects the percentile range a thresholded t statistic fell within for a given voxel (lower % indicates greater statistical significance). The smoothed lesion mask is depicted in blue. Left is left.

Fig. 6 – Activation Maps for the Narrative Paradigms. A–B Activation maps contrasting language conditions relative to matched perceptual conditions for the Written narrative and Slow written narrative paradigms. C–D Activation maps contrasting language conditions relative to matched perceptual conditions for the Spoken narrative and Slow spoken narrative paradigms. The activation map from the repeated Spoken narrative paradigm obtained during the second scanning session is not depicted, as it is identical to the one obtained during the first scanning session. The smoothed lesion mask is depicted in blue. The color bar reflects the percentile range a thresholded t statistic fell within for a given voxel (lower % indicates greater statistical significance). Left is left.

In contrast, a localization mechanism is a plausible and more likely interpretation of the findings. Without bilateral superior temporal cortex, Mr. C may be more reliant on spared left frontal regions specialized for linguistic control to facilitate top-down processing during language comprehension (Binder et al., 1997; Davis & Johnsrude, 2003). For the simpler stimuli of the Adaptive word paradigms, this reliance may place minimal processing demands on transmitting neural signals to the more distal and perhaps less specialized left ventral temporal region. However, for the more complex stimuli of the Narrative paradigms, this reliance may result in a processing bottleneck, either in the left ventral temporal region or owing to the destruction of left (versus bilateral) superior temporal cortex, that prevents the language network from coming online. Supporting this view is emerging evidence on the supremacy of left superior temporal cortex in recovery from aphasia, although the role of the right hemisphere may not be entirely ancillary (see Schneck, 2022; Wilson & Schneck, 2021). Another source of evidence may be the observation that Mr. C's left frontal activation in the Written narrative and Slow written narrative paradigms was substantially greater than that of the normative sample (Wilson, Yen et al., 2018). Finally, this converging neural evidence comports with Mr. C's report of hearing but minimally understanding the majority of the spoken narratives while in the scanner.
Although the emphasis of the present case report is on auditory linguistic processing, the left temporal activation during the written narrative paradigms is worth mentioning. Here, there was no recruitment of the left ventral temporal region. Instead, the left anterior temporal region, an area recruited during narrative comprehension specifically due to its important role in integrating orthographic and semantic representations (e.g., Crinion et al., 2003; Dronkers et al., 2004; Wilson et al., 2008), was consistently activated. This pattern further supports a localization mechanism, in that regions with similar or overlapping functional specificity were leveraged to support successful language comprehension.
Finally, we reiterate that, although the Narrative paradigms were adapted to Mr. C's unique presentation, highly similar narrative paradigms in spoken and written modalities have been used to reliably map neural response patterns in individual participants (Wilson, 2014; Wilson et al., 2017; Wilson, Yen et al., 2018), indicating that our paradigms were appropriate measures of spoken narrative processing. However, environmental factors may have influenced the lack of neural recruitment in response to Slow spoken narrative specifically. Here, the substantially reduced presentation rate may have had unintended effects and exacerbated challenges related to the scanner (e.g., degraded acoustic signal due to noise and use of headphones).

General discussion
In this case report on an individual (Mr. C) with bilateral temporal lobe lesions, we aimed (1) to describe his symptoms throughout his recovery, (2) to characterize his current behavioral presentation relative to his recovery trajectory and the relevant literature, and (3) to explore potential neurobiological mechanisms underlying his presentation. Our study is one of only a few reports (e.g., Coslett et al., 1984; Metz-Lutz & Dahl, 1984) to document longitudinal performance across all relevant domains and the first to comprehensively characterize auditory and linguistic processing using functional MRI. Overall, we argue there is converging evidence that Mr. C's current presentation represents an extensive recovery from aphasia and an evolution of word deafness into a milder form (sentence deafness). When viewed through an aphasia recovery lens, his residual impairment reflects disruption in both auditory and linguistic processing and can be most plausibly explained via a localization mechanism. Below we summarize key findings in support of these conclusions, as discussed in greater depth in the interim discussions above.

Word deafness as a stage in aphasia recovery
As first observed by Kleist (1962), the extent to which word deafness is dissociable from aphasia depends on where an individual lies on the recovery continuum. For Mr. C, word deafness was not discernible until the subacute period post-stroke, eventually evolving into sentence deafness in the chronic period. As evidenced by both his recovery trajectory and current presentation, Mr. C experienced greater and faster improvement in linguistic processing relative to auditory linguistic and nonlinguistic auditory processing. Analogous observations have been noted in prior word deafness cases (e.g., Coslett et al., 1984; Horenstein & Bealmear, 1973; Klein & Harper, 1956; Lichtheim, 1885; Maffei et al., 2017; Metz-Lutz & Dahl, 1984; Slevc et al., 2011; Stefanatos et al., 2005; Tanaka et al., 1965; Wirkowski et al., 2006; Wolmetz et al., 2011) and studies on aphasia following stroke (e.g., Kertesz et al., 1993; Pashek & Holland, 1988). Our recent large-scale study on aphasia recovery (Wilson et al., 2023) also revealed a divergence within auditory linguistic processing: word comprehension recovered rapidly and to a greater magnitude than sentence comprehension (Wilson et al., 2023; see their Fig. 5 for overall recovery and their Fig. 3A and B for recovery specific to left temporal lesions), mirroring the evolution of word deafness into sentence deafness seen in Mr. C and other word deafness case reports (e.g., Albert & Bear, 1974; Auerbach et al., 1982; Buchman et al., 1986; Denes & Semenza, 1975; Hemphill & Stengel, 1940; Maffei et al., 2017).

Word deafness as combined auditory and linguistic impairments
When considered in the context of aphasia recovery, word deafness can be understood as reflecting differing degrees of impairment in auditory and linguistic processing. In other words, word deafness need not be classified as an apperceptive or associative deficit, as traditionally done; mutually exclusive categories such as these fail to capture the complexities and inconsistencies reported in the extant literature (Miceli & Caccia, 2022, 2023; Poeppel, 2001; Polster & Rose, 1998; Simons & Lambon Ralph, 1999; Spinnler & Vignolo, 1966; Stefanatos et al., 2005). For Mr. C, his core difficulty with understanding spoken language at the sentence level and beyond was explainable as arising from either form of impairment. Other testing was specific to one form of impairment over the other. In particular, his reduced performance on auditory linguistic (both behaviorally and neurally) and nonlinguistic auditory processing suggested an auditory processing impairment, while his reduced performance on nonauditory linguistic processing, combined with evidence of a severe aphasia acutely, suggested a linguistic processing impairment. Thus, when considered collectively, Mr. C's performance is best explained by a combined impairment in both processes, with the auditory processing deficit appearing the more predominant.
Regardless of reporting patterns in the literature, the view of word deafness as affecting both auditory and linguistic processing is rooted in contemporary cognitive (e.g., Dell et al., 2013; Hickok & Poeppel, 2007; Walker & Hickok, 2016) and neurobiological (e.g., Bhaya-Grossman & Chang, 2022; Gwilliams et al., 2022; Levy & Wilson, 2020; Price et al., 2005; Vaden et al., 2010) accounts of spoken language comprehension, which posit early, interactive interfacing of the acoustic signal with linguistic representations. Thus, the overlapping behavioral presentation between word deafness and aphasia suggests greater convergence than divergence.

Word deafness as explained by a localization mechanism
Although disconnection remains the predominant mechanistic account of word deafness (e.g., Miceli & Caccia, 2022, 2023), those who situate word deafness within aphasia recovery tend to posit a localization mechanism (e.g., Kleist, 1962). In our view, a localization mechanism offers the more cogent account of Mr. C's neuroimaging when all available evidence is considered.
In terms of structural findings, the key source of evidence supporting a localization mechanism for Mr. C is damage to left superior temporal cortex. Nearly all case reports of word deafness converge on this region (Miceli & Caccia, 2022, 2023), which has been shown to be indispensable for processing language (e.g., Malik-Moraleda et al., 2022; Marie, 1906; Price, 2012; Price et al., 2010; Wilson, Bautista et al., 2018; Wilson & Schneck, 2021), environmental sounds (Dick et al., 2007; Giraud & Price, 2001; Humphries et al., 2001; Specht & Reul, 2003; Thierry et al., 2003), and, potentially, music (Hugdahl et al., 1999; Koelsch et al., 2002; Sihvonen et al., 2016; Stefaniak et al., 2021). Evidence on the specialization of left superior temporal cortex for auditory linguistic processing specifically comes from the aphasia recovery literature, where poorer outcomes have been associated with the extent of damage to this region (Naeser et al., 1987, 1990). Notably, our work (Wilson et al., 2023) shows that individuals with left superior temporal cortex lesions, who typically have a persistent auditory sentence comprehension impairment, have the highest concentration of damage not in Heschl's gyri or the underlying white matter but in the posterior temporoparietal junction, an area critical for integrating nonlinguistic auditory and auditory linguistic information (e.g., Buchsbaum et al., 2011; Hickok et al., 2003, 2009) and one that was damaged in Mr. C.
For the functional findings, a compelling source of evidence supporting a localization mechanism is Mr. C's engagement of residual tissue in the left temporal lobe, either ventral or anterior to his lesion, during successful language comprehension (Adaptive word paradigms, Written narrative, Slow written narrative). Importantly, these regions are commonly recruited during these paradigms or highly similar ones (e.g., Wilson, 2014; Wilson, Bautista et al., 2018; Wilson et al., 2017; Wilson, Yen et al., 2018; Yen et al., 2019). Whether activation in these regions represents functional specialization similar to that of left superior temporal cortex, which also activates during these paradigms, or a capacity for reorganization remains to be determined. Nonetheless, in our view the remarkable aspect of Mr. C's case is that it illustrates the extent to which the language network can support successful comprehension despite substantial tissue loss in core regions for both auditory and linguistic processing.
Following this logic, Mr. C's failed language comprehension (Spoken narrative, Slow spoken narrative) may then localize specifically to left superior temporal cortex. In its absence, other left temporal regions may be insufficiently specialized or too resource-limited to bring the language network online, and the absence of right superior temporal cortex likely restricts the brain's options for alternative processing routes. Notably, spoken narrative comprehension failed despite robust recruitment of bilateral Heschl's gyri to support bottom-up auditory processing and despite sparing of left frontal regions to facilitate top-down linguistic control. This observation aligns not only with the aphasia recovery literature already discussed (Schneck, 2022; Wilson & Schneck, 2021) but also with the developmental neuroimaging literature, where the integrity of the left temporal lobe appears necessary for supporting specialization of the language network within the left hemisphere (Brauer et al., 2013; Tuckute et al., 2022).

Limitations
The present case report is not without limitations. First, we took a psycholinguistic approach to behavioral testing with Mr. C. Although this offered multiple advantages, it necessarily came at the cost of evaluating speech perception and auditory processing in depth. Nonetheless, it is our view that this case report still represents a substantial contribution to an otherwise underspecified literature.
Second and relatedly, we used functional MRI to measure neural activity, which offers excellent spatial resolution but does not provide a fine-grained view of the time course of processing. As such, the inclusion of electroencephalography or other neurophysiological methods would be advantageous in future case reports in this area.
Third, we were unable to obtain longitudinal research testing, although we did reconstruct his recovery based primarily on medical record review, and the information herein is more extensive in time course and detail than in other reports. In the future, prospective longitudinal testing would provide valuable information beyond what could be included in this case report.

Conclusion
Word deafness is a controversial disorder with considerable behavioral and anatomical similarities to aphasia, leading some to view it as a stage within aphasia recovery. The case of Mr. C, an individual with bilateral lesions to superior temporal cortex, supports this view. His presentation evolved from a severe aphasia acutely post-stroke to an atypical word deafness (sentence deafness) chronically post-stroke, which was explained by impairments in auditory and linguistic processing that recovered to differing degrees. As shown using functional MRI, a localization mechanism appears responsible for his persistent difficulty with spoken language comprehension. Specifically, recruitment (or lack thereof) of left temporal regions adjacent to his lesion seemed to dictate engagement of the entire language network, providing further evidence that left superior temporal cortex is critical to auditory and linguistic processing.

Fig. 1 – Clinical Behavioral and Neuroimaging Findings. A Representative axial slices from diffusion-weighted magnetic resonance imaging (MRI) obtained at approximately three days post-stroke onset. Images were warped to standardized space. Left is left. B Pure tone testing results from an audiological evaluation approximately one month post-stroke onset, as obtained from medical records. C Scores on the extended version of the Western Aphasia Battery-Revised (WAB-R; Kertesz, 2006) administered ~19 months post-stroke onset, as obtained from medical records.

Fig. 3 – Methodological Details of Research Testing. A Framework and measures for behavioral testing of auditory and language processing. B Conditions and example stimuli for the three Adaptive word functional MRI paradigms. C

Fig. 4 – Lesion-delineated Structural Scan. Axial slices at five-slice intervals from Mr. C's T1-weighted scans spanning the extent of his lesions, which are delineated in blue (unsmoothed to allow for fine-grained review of structural damage). Left is left.

Fig. 5 – Activation Maps for the Adaptive Word Paradigms. A–C Activation maps contrasting language conditions relative to matched perceptual conditions. The color bar reflects the percentile range a thresholded t statistic fell within for a given voxel (lower % indicates greater statistical significance). The smoothed lesion mask is depicted in blue. Left is left.

Fig. 7 – Combined Activation Map of Auditory Processing. Activation map contrasting language conditions relative to rest for the two Spoken narrative and Slow spoken narrative paradigms, where only activation present in a given voxel, subject to thresholding, across all three paradigms is shown. The underlay is coronal slices in two-slice increments from the T1-weighted structural imaging spanning the length of Heschl's gyri bilaterally. Slices with activation of Heschl's gyri in a given hemisphere are denoted with arrows. The lesion mask (unsmoothed to allow for fine-grained interpretation of the activation patterns relative to structural damage) is depicted in blue. Left is left. HG = Heschl's gyri.

Table 1 – Clinical time course of recovery.

Table 3 – Performance on word processing tasks.
Note. df = degrees of freedom; discrim. = discrimination; min. = minimal; *significant result at p < .02; ^result approaching significance at .02 ≤ p < .05; bold font face = result interpreted as significantly different from the normative sample per Crawford-Howell t-test, scale score, or item-level/qualitative interpretation.
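For readers less familiar with the single-case statistics referenced in these notes, the Crawford-Howell t-test (Crawford & Howell, 1998) compares one individual's score against a small normative sample while treating the sample's mean and standard deviation as estimates rather than population parameters. As a point of reference only (the notation below is generic and not tied to any particular task reported here), the statistic is

\[
t = \frac{x^{*} - \bar{x}}{s\,\sqrt{\frac{n+1}{n}}}, \qquad df = n - 1,
\]

where \(x^{*}\) is the case's score, \(\bar{x}\) and \(s\) are the normative sample's mean and standard deviation, and \(n\) is the normative sample size; the resulting \(t\) is then evaluated against the significance thresholds stated above.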

Table 4 – Performance on nonlinguistic auditory processing tasks.
Note. Max. = maximum; df = degrees of freedom; *significant result at p < .02; ^result approaching significance at .02 ≤ p < .05; bold font face = result interpreted as significantly different from the normative sample per Crawford-Howell t-test, scale score, or item-level/qualitative interpretation.

Table 5 – Performance on the adaptive word paradigm conditions.