Language and nonverbal auditory processing in the occipital cortex of individuals who are congenitally blind due to anophthalmia

Individuals with congenital blindness due to bilateral anophthalmia offer a unique opportunity to examine cross-modal plasticity in the complete absence of any stimulation of the 'visual' pathway, even during development in utero. Our previous work has suggested that this complete sensory deafferentation results in different patterns of reorganisation compared with those seen in other early blind populations. Here, we further test the functional specialisation of occipital cortex in six well-studied cases with anophthalmia. Whole brain functional MRI was obtained while these human participants and a group of sighted controls performed two experiments involving phonological and semantic processing of words (verbal experiment) and spatial and identity processing of piano chords (nonverbal experiment). Both experiments were predicted to show a dorsal-ventral difference in activity based on the specific task performed. All tasks evoked activation in occipital cortex in the individuals with anophthalmia but not in the sighted controls. For the verbal experiment, both dorsal and ventral occipital areas were strongly activated by the phonological and semantic tasks in anophthalmia. For the nonverbal experiment, both the spatial and the identity task robustly activated the dorsal occipital area V3a but showed inconsistent activity elsewhere in the occipital lobe. V1 was most strongly activated by the verbal tasks, showing greater activity on the left for the verbal tasks relative to the nonverbal ones. For individual anophthalmic participants, however, activity in V1 was inconsistent across tasks and hemispheres, with many participants showing activity levels within the control range, i.e. not significantly above baseline. Despite the homogeneous nature of the cause of blindness in the anophthalmic group, there remain differences in patterns of activation among the individuals with this condition.
Investigation at the case level might further our understanding of how post-natal experiences shape functional reorganisation in deafferented cortex.


Introduction
Bilateral anophthalmia is a very rare condition in which a failure of the eye globes to develop in utero results in congenital blindness. In anophthalmia, because the retina never develops, there is no possibility of prenatal retinal stimulation either by light or by endogenous activity. As a result, the 'visual' pathways, subcortically and cortically, never receive the stimulation that is considered necessary to initiate functional specialisation. Thus, it is suspected that anophthalmic individuals may show enhanced or different patterns of reorganisation relative to other congenitally blind populations, in which the cause of blindness occurs after a period of normal prenatal or even brief postnatal development, and there is often some minimal residual vision (light perception).
We have used MRI to investigate brain structural and functional reorganisation in a small series of six individuals with anophthalmia who are otherwise neurologically unimpaired (Bridge and Watkins, 2019). Our previous functional MRI work with these anophthalmic individuals revealed activity during a language task in occipital areas, in addition to activation of the expected left inferior frontal and superior temporal cortex (Watkins et al., 2012). The language task involved listening to short phrases, searching the lexicon and covertly retrieving a suitable target word that met this definition (e.g. "bees make it" => "honey"). Task-related activity increased in magnitude along the cortical processing hierarchy from pericalcarine cortex (V1) ventrally to lateral occipital cortex (LO) and dorsally to area V3a. Activity in V1 was not task specific in that it responded equally to the auditory naming task and the control condition, reversed speech. Because of this lack of specialisation, we argued that V1 in anophthalmia maintained its role in early sensory processing but shifted modalities from visual to auditory. In another study, listening to pure tone stimuli of different frequencies evoked a pattern of tonotopic activity in some participants in area V5/hMT+ (an area typically involved in visual motion perception in the sighted occipital cortex), in addition to V1 and primary auditory cortex located on Heschl's gyrus in the temporal lobe (Watkins et al., 2013). We proposed that V5/hMT+, a dorsal stream area, might receive direct subcortical input in anophthalmia. Thus, the evidence so far suggests that V1 maintains its early sensory processing role in anophthalmia, possibly in addition to dorsal region hMT+, while regions in the ventral processing stream, specifically the lateral occipital cortex, show higher-level processing for language.
In contrast, previous studies of other populations with heterogeneous causes of congenital or early blindness show V1 activated by higher-level language tasks more than control conditions, with the strongest responses to tasks involving sentence processing, including semantic and syntactic manipulations (Roder et al., 2002; Bedny et al., 2011; Lane et al., 2015; Bedny, 2017). V1 activity was also correlated with performance on a verbal memory task in congenitally blind participants even when the task did not involve any sensory stimulation (Amedi et al., 2003). Furthermore, interfering with V1 in congenital blindness using transcranial magnetic stimulation (TMS) impaired performance on a verb generation task in blind participants (Amedi et al., 2004), confirming that the V1 activity seen in these blind individuals contributes to language processing and is not simply epiphenomenal. These findings suggest that in non-anophthalmic blind groups V1 contributes to more complex tasks associated with later stages of processing.
In the dorsal visual areas of groups with congenital or early blindness, the V5/hMT+ complex responds to auditory motion (Poirier et al., 2006; Bedny et al., 2010; Lewis et al., 2010; Jiang et al., 2014, 2016). Auditory spatial localisation tasks in blind participants activate right dorsal occipital regions (Weeks et al., 2000; Gougoux et al., 2005; Collignon et al., 2007). TMS over the right dorsal extrastriate cortex in early blind participants interfered with sound localisation, but not pitch or intensity discrimination (Collignon et al., 2007), confirming the contribution of these areas to the processing of dynamic auditory information, analogous with the role of the dorsal stream areas in visual processing in the sighted brain.
Here, we further investigate functional specialisation for language and auditory processing within the occipital cortex of individuals with anophthalmia using a verbal experiment (that differently emphasised semantic and phonological processing of single word stimuli) and a nonverbal experiment (that examined occipital cortex responses for identification and localisation of musical chords).
The verbal experiment was based on a paradigm previously shown to preferentially activate different portions of the inferior frontal gyrus (IFG), namely the anterior portion (BA45, pars triangularis) for semantic processing and the posterior portion (BA44, pars opercularis) for phonological processing (Devlin et al., 2003;Gough et al., 2005). Some adaptations were necessary for use in blind participants. Spoken word stimuli were recorded and presented sequentially (word pairs were presented visually in the original task) and we asked participants to perform a 1-back task to identify words that matched for meaning or syllable number (rather than judge homophones from written forms). These semantic and phonological tasks involve lexical and sublexical processing respectively and should map on to the functions of the two parallel auditory speech processing streams (Hickok and Poeppel, 2007). The ventral processing stream includes ventral portions of the middle and anterior temporal lobe and anterior inferior frontal cortex, whereas the dorsal processing stream includes areas in the inferior parietal cortex, posterior inferior frontal cortex and dorsolateral prefrontal cortex (Saur et al., 2008). Additionally, in anophthalmia, processing the auditory language stimuli was expected to evoke activity in occipital areas, based on our previous findings (Watkins et al., 2012). Specifically, in anophthalmia we hypothesised preferential (and perhaps left lateralised) activation of dorsal visual areas hMT+ and V3a during phonological processing and of ventral visual areas V4 and LO during semantic processing, reflecting the dorsal/ventral split seen in non-occipital regions for auditory speech processing.
The nonverbal experiment was designed to be similar in terms of task requirements to the verbal one. Participants performed a 1-back task attending to whether the piano tones were the same chord (identity processing or What task) or were presented from the same spatial location (spatial processing or Where task). A comparable task was used with early blind individuals and sighted controls previously (Renier et al., 2010; Anurova et al., 2015). Based on previous findings (Collignon et al., 2011), we expected spatial processing of sounds to activate dorsal auditory processing areas in the parietal and superior frontal cortex (right lateralised) and that identity processing would activate the ventral auditory stream in the temporal lobe and inferior frontal lobe (left lateralised). Additionally, in the occipital cortex in anophthalmia, we predicted that we would find preferential activation of dorsal areas including hMT+ and V3a for the Where task, and preferential activity in ventral areas (V4 and LO) for the What task, reflecting the dorsal/ventral processing split seen in the auditory processing streams and the function of these visual streams in sighted participants for processing visual stimuli.
Across both experiments, we hypothesised that if V1 maintains its early sensory processing role in anophthalmia then these auditory stimuli would evoke activity in V1 that was unselective for task. Finally, we predicted that the anophthalmia and sighted controls groups would show no differences in activity in task-related regions outside the occipital lobe.

Participants
Six bilateral anophthalmic participants were recruited (mean age 29.8 years, range 23-38 years, two females). All had participated in previous imaging studies (Bridge et al., 2009; Watkins et al., 2012, 2013; Coullon et al., 2015a, 2015b) and are referred to as cases 1 to 6. Five of the anophthalmic participants (not Case 1) read Braille and none had any neurological history beyond the cause of their blindness. Twelve control participants with normal or corrected-to-normal vision were also recruited (mean age 25.6 years, range 19-30 years, 8 females). This study was granted ethical approval by the Oxford University Central Ethical Committee and all participants gave informed written consent prior to participation.

Neuroimaging experiments
In the scanner, we performed two experiments using a 1-back task: (i) a verbal experiment to compare BOLD responses during semantic (Meaning task) and phonological processing (Syllable task) and (ii) a nonverbal experiment designed to compare BOLD responses during auditory identification (What task) and spatial localisation processing (Where task) (see Fig. 1). Auditory stimuli were presented over MRI-compatible electrostatic stereo headphones (http://www.nordicneurolab.com/products/AudioSystem.html).

Verbal experiment
Participants heard a series of words (one word presented every 2.5 s). Words of one to five syllables were selected based on similar ratings of concreteness, imageability and familiarity (Gough et al., 2005). They were recorded in-house, and audio levels were normalised and background noise was reduced using the software Audacity (Version 2.0.3, www.audacity.sourceforge.net).
In the semantic processing blocks, participants responded when they heard successive words with the same meaning. In the phonological processing blocks, participants responded when they heard successive words with the same number of syllables. Each block lasted 25 s and was preceded by a 5 s interval during which a spoken cue was presented ("meaning" or "syllable") to instruct the participant. A rest period (15 s of silence) separated the blocks (Fig. 1). The blocks (13 for each condition) were presented in a fixed pseudorandom order. Each block contained either two or three target pairs, and the total number of target pairs across the semantic and phonological blocks was matched over the entire experiment.

Nonverbal experiment
Participants heard a series of piano chords that were simulated (using inter-aural level and time difference cues) as coming from different spatial locations and performed either an 'Identity' task (reporting when successive stimuli were the same chord) or a 'Location' task (reporting when successive stimuli were in the same location).
There were four possible piano chords (A major, C major, D major and E major) and four possible spatial locations in virtual auditory space, all on the horizontal azimuth with respect to the participant, who was supine when performing the task in the scanner. The chords were triads with the following peak frequencies: A major 299 Hz, C major 172 Hz, D major 408 Hz and E major 201 Hz. Spatial positions, relative to head position, were −90° (far left), −30° (front left), +30° (front right) and +90° (far right), determined using interaural intensity differences. Stimuli were made in GarageBand '11 (Version 6.0.5, Apple Inc. 2012) on a MacBook Pro laptop (Mac OS X 10.9.4).
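As an illustration of how such virtual locations can be constructed, the sketch below pans a synthetic triad using a purely level-based cue. Everything here is a stand-in rather than the study's method: the actual stimuli were built in GarageBand, the 20 dB maximum interaural level difference is an assumption, and the note frequencies are standard A4/C#5/E5 rather than the peak frequencies reported above.

```python
import numpy as np

FS = 44100  # sample rate in Hz

def chord(freqs, dur=2.5, fs=FS):
    """Sum of equal-amplitude sinusoids standing in for a piano triad."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

def pan_ild(mono, azimuth_deg, max_ild_db=20.0):
    """Place a mono signal in virtual space with an interaural level
    difference scaling linearly with azimuth (-90 = far left,
    +90 = far right). The 20 dB maximum ILD is an assumed value."""
    ild_db = max_ild_db * azimuth_deg / 90.0
    right = mono * 10 ** (+ild_db / 40.0)   # half the ILD to each ear
    left = mono * 10 ** (-ild_db / 40.0)
    return np.stack([left, right], axis=1)

# An A-major triad presented from the front-right (+30 degrees).
stereo = pan_ild(chord([440.0, 554.37, 659.25]), azimuth_deg=30)
```

Writing `stereo` to a stereo WAV file at `FS` samples per second would yield a chord heard towards the right; the time-difference cue mentioned earlier would require an additional per-channel delay.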
Each 25 s block consisted of a series of 10 piano chords (one chord presented every 2.5 s, each followed by a pause). Each block contained either two or three target pairs in which the same chord or the same location was presented in succession. A rest period (15 s of 'silent' baseline) separated each block, followed by a spoken cue ("what", "where" or "rest") to instruct the participant and a 5 s silent pause. Each run contained a total of 13 blocks per condition (in a fixed pseudo-random order), and the two conditions contained the same total number of target pairs.
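The 1-back target structure common to both experiments can be expressed compactly; the example location sequence below is invented for illustration.

```python
def one_back_targets(attributes):
    """Indices of 1-back targets: trials whose attended attribute
    (chord identity, location, meaning, or syllable count) matches
    the immediately preceding trial's attribute."""
    return [i for i in range(1, len(attributes))
            if attributes[i] == attributes[i - 1]]

# A hypothetical 'Where' block of 10 chords at the four azimuths;
# immediately repeated locations are the targets.
locations = [-90, +30, +30, -30, +90, +90, -30, -30, +90, -90]
targets = one_back_targets(locations)   # three target pairs in this block
```

The same function scores a 'What' block (chord labels), a 'Meaning' block (word meanings), or a 'Syllable' block (syllable counts), which is what makes the four tasks comparable in their response demands.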

Acquisition
All images were acquired using a Siemens Trio 3-T whole body MRI scanner and a 12-channel coil at the Oxford Centre for Clinical Magnetic Resonance Research (OCMR, University of Oxford). Structural images were acquired using a T1-weighted MPRAGE sequence (TR = 2040 ms, TE = 4.7 ms, flip angle = 8°, 192 transverse slices, 1-mm isotropic voxels). The fMRI scan was acquired using an echo-planar imaging pulse sequence (TR = 3000 ms, TE = 30 ms, flip angle = 87°, 3-mm isotropic voxels, 45 transverse slices, 390 volumes). Transverse slices were positioned to cover the entire brain.

Preprocessing
Functional MRI data processing was performed using FEAT (FMRI Expert Analysis Tool) Version 6.00, part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl). Pre-processing of images included motion correction, non-brain removal and prewhitening. A high-pass temporal filter cut-off of 100 s was applied to remove low-frequency fluctuations, and spatial smoothing was applied using a Gaussian kernel of 6 mm (full width at half maximum). Motion correction parameters (translations and rotations in x, y and z) were included as covariates of no interest in the general linear model, thereby removing potential motion and physiological noise artefacts from the data. Functional images were initially registered to each participant's T1-weighted structural image using BBR (Boundary-Based Registration) and then to the T1-weighted MNI-152 (Montreal Neurological Institute) standard space template using non-linear registration.
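FEAT's 100 s cut-off corresponds, in the convention used by FSL's `fslmaths -bptf`, to a Gaussian sigma of cutoff/(2 × TR) volumes. The sketch below computes that sigma and then approximates the high-pass filter by subtracting a Gaussian-smoothed copy of a toy time series; the drift and task signals are invented, and this is only a rough stand-in for FEAT's Gaussian-weighted running-line fit.

```python
import numpy as np

TR = 3.0            # repetition time in seconds (see Acquisition)
CUTOFF_S = 100.0    # high-pass cut-off in seconds
sigma = CUTOFF_S / (2 * TR)   # sigma in volumes, fslmaths -bptf convention

def highpass(ts, sigma):
    """Subtract a Gaussian-smoothed (low-frequency) copy of the series."""
    half = int(4 * sigma)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(ts, half, mode="reflect")   # handle the edges
    low = np.convolve(padded, kernel, mode="valid")
    return ts - low

# Toy series: linear scanner drift plus a 40 s task-related oscillation.
n_vols = 390        # volumes per run (see Acquisition)
t = np.arange(n_vols) * TR
drift = 0.01 * t
task = np.sin(2 * np.pi * t / 40.0)
clean = highpass(drift + task, sigma)   # drift suppressed, task retained
```

Frequencies slower than the cut-off (here, the drift) are largely removed, while fluctuations at the block rate survive the filter.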

Region of interest (ROI) selection
The ROIs for V1, V4, and V5 (hMT+) were derived from the Juelich probabilistic histological atlas implemented in FSL, thresholded at 10% (Morosan et al., 2001). The V3a ROI was derived from a probabilistic atlas based on retinotopic mapping in 18 sighted controls (Bridge, 2011), thresholded at 30%. The LO mask was that used in Watkins et al. (2012), derived from comparing activation to objects with that to scrambled pictures in a different group of eight sighted subjects.
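Extracting a mean ROI time course from such probabilistic masks reduces, in outline, to thresholding a probability map and pooling voxels. The arrays below are random stand-ins with toy dimensions (real maps would be loaded with a package such as nibabel); the 10% threshold is the one stated above for V1/V4/V5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a probabilistic atlas map stored as percentages (0-100).
prob_map = rng.uniform(0, 100, size=(4, 4, 4))
mask = prob_map >= 10                    # binary ROI at the 10% threshold

# Stand-in for a preprocessed 4D BOLD run (x, y, z, time).
bold = rng.normal(size=(4, 4, 4, 390))
roi_ts = bold[mask].mean(axis=0)         # mean time course within the ROI
```

Indexing the 4D array with the 3D boolean mask collapses the spatial dimensions to (voxels, time), so the mean over axis 0 gives one time course per ROI.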

Statistical analyses
The two experiments (verbal and nonverbal) were analysed separately at the whole brain level. Group analyses were performed using FLAME (FMRIB's Local Analysis of Mixed Effects). For visualisation purposes, statistical maps were thresholded using cluster-extent-based thresholding with a primary cluster-defining height threshold of Z ≥ 3.1 (p < 0.001) and an extent threshold of p < 0.05 (family-wise error [FWE] rate corrected) and rendered on the group-averaged FreeSurfer inflated brain.
ROI-based analyses were used to address our specific hypotheses. Percentage BOLD signal change relative to the baseline (rest) was calculated within each individual ROI for each task (verbal experiment: Meaning and Syllable; nonverbal experiment: What and Where). Since there was little to no activity in occipital areas for sighted controls, we focussed our analyses on the anophthalmia group only. For each occipital ROI, we ran an analysis of variance (ANOVA) to compare activity within subjects among the four tasks (Meaning, Syllable, What, and Where) and between hemispheres (left vs. right). We adjusted the p-values using a Bonferroni correction to account for the five ANOVAs conducted. Significant interactions were interpreted with post-hoc comparisons (uncorrected p-values reported).

Fig. 1. Schematic diagram of the experimental procedure. In the verbal experiment, participants were presented with a stream of words. A cue prior to the task block indicated that the participant should indicate when two words in succession (1-back task) contained the same number of syllables ('Syllable'; e.g. Present and Concrete) or had the same meaning ('Meaning'; e.g. Gift and Present). In the nonverbal experiment, participants were presented with musical chords at specific locations in relation to the head. The cue 'Where' indicated that participants should respond when two chords in succession were played in the same location, whereas 'What' meant participants responded when two chords were the same.
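The two computations just described, percent signal change relative to rest and a Bonferroni adjustment over the five ROI ANOVAs, can be sketched as follows; the time course, block indices and uncorrected p-values are all invented for illustration.

```python
import numpy as np

def percent_signal_change(roi_ts, task_vols, rest_vols):
    """Percentage BOLD change in a task condition relative to rest."""
    baseline = roi_ts[rest_vols].mean()
    return 100.0 * (roi_ts[task_vols].mean() - baseline) / baseline

def bonferroni(p_values, n_tests=5):
    """Multiply each p-value by the number of tests (five ROIs), cap at 1."""
    return [min(1.0, p * n_tests) for p in p_values]

# Toy ROI time course: rest at 100, one task block at 101 (a 1% change).
ts = np.array([100.0] * 5 + [101.0] * 5 + [100.0] * 5)
psc = percent_signal_change(ts, task_vols=slice(5, 10),
                            rest_vols=np.r_[0:5, 10:15])

corrected = bonferroni([0.004, 0.0008, 0.0008, 0.005, 0.21])
```

A p-value that survives multiplication by five (e.g. 0.004 becoming 0.02) remains significant at the 0.05 level; one that does not (e.g. 0.21) is capped at 1.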

Meaning and Syllable task-evoked activity in sighted controls
The whole brain analysis of sighted control participants revealed the expected patterns of activity in the dorsal and ventral auditory speech processing streams for the Syllable and Meaning tasks respectively (see Fig. 2). Both tasks activated the superior temporal gyrus bilaterally along its full extent, the anterior insula bilaterally, the left dorsal and ventral premotor cortex, and the presupplementary motor area (pre-SMA) medially. Subcortically, we saw activity for both tasks in the left putamen and right cerebellar lobule VI and Crus II. For the Meaning task, additional areas in the ventral auditory processing stream were activated including the left inferior frontal gyrus (IFG) extending through pars opercularis and pars triangularis (Broca's area) to the lateral orbitofrontal cortex. Subcortically, the activity was seen in left caudate nucleus and thalamus as well as left putamen. For the Syllable task, dorsal processing stream areas in the left inferior parietal cortex (supramarginal gyrus), and left pars opercularis in the IFG were additionally activated. There was also activity in the right dorsal and ventral premotor cortex and superior parietal cortex bilaterally. Subcortically, activity was seen in the putamen bilaterally and the cerebellum lobules VI and Crus II bilaterally.

Meaning and Syllable task-evoked activity in anophthalmia
The whole brain analyses for the anophthalmic participants revealed very similar dorsal/ventral patterning of activity to that seen in controls for the Syllable and Meaning tasks, with additional extensive activation of lateral and ventral occipital areas bilaterally for both tasks that was not seen in controls (Fig. 2). For the Meaning task, there was also activity in the superior parietal cortex bilaterally, and premotor and striatal activity was additionally seen on the right (i.e. bilaterally) in anophthalmia. For the Syllable task, the left dorsal premotor cortex (but not the right) was also activated, and subcortically only the right putamen and right Crus II of the cerebellum showed significant activity.

Group differences in Meaning and Syllable task-evoked activity
Activity evoked by each task separately was contrasted between groups to address our hypotheses that the anophthalmia group would show significant occipital lobe activation relative to the controls and that the two groups would not show task-related differences in areas outside of the occipital lobe. Accordingly, the contrast between groups confirmed significantly greater activation in the anophthalmia group relative to the sighted group for the Meaning task in the left ventral pericalcarine cortex, and for both tasks in extensive portions of the extrastriate cortex bilaterally in dorsal and ventral regions, extending along the fusiform gyrus in the ventral occipitotemporal cortex bilaterally (Fig. 3). Dorsally, both tasks activated the left dorsomedial occipital cortex (cuneus) and superior occipital cortex on the lateral surface. There was activity in the posterior part of the middle temporal gyrus bilaterally but more extensively on the right, again for both tasks. Outside of the occipital lobe, there was significantly greater activity in anophthalmia than controls for the Meaning task only, in the right dorsal and ventral premotor cortex and the superior parietal cortex. There were no areas where activity was greater in controls compared with the anophthalmia group. Thus, our first hypothesis, that these auditory language tasks would evoke activity in the occipital cortex of our anophthalmia group, was supported. But our second hypothesis, that they would not show differences in activity in areas outside the occipital cortex, was not upheld.
Differences between Meaning and Syllable task-evoked activity in sighted controls
Contrast of the two tasks in sighted participants revealed the expected left-lateralised pattern of greater activity in the ventral auditory processing stream, namely the anterior left IFG (pars triangularis and orbitalis), left angular gyrus, and superior temporal sulcus for the Meaning task relative to the Syllable task. In addition, the right posterior cerebellum also showed greater activity for the Meaning compared with the Syllable task in sighted controls, as did left ventromedial and dorsomedial frontal areas, dorsolateral prefrontal cortex and retrosplenial cortex. These latter areas were not seen to be active in the whole brain analyses for either task, however, indicating that they reflect differences in subthreshold activity between tasks. For the contrast of Syllable > Meaning, the sighted participants showed differences in dorsal auditory speech processing areas in the left supramarginal gyrus and ventral premotor cortex, as well as a bilateral dorsal network of superior parietal and dorsal premotor cortex and the pre-SMA (Fig. 4).

Differences between Meaning and Syllable task-evoked activity in anophthalmia
The anophthalmia group also showed greater activity for the Meaning task relative to the Syllable one in the ventral auditory speech processing areas of the left anterior IFG, and the angular gyrus and superior temporal sulcus bilaterally (Fig. 4). In addition, there was greater right posterior cerebellar activity, as for the controls. As seen in the anophthalmia group average for each task (Fig. 2), the Meaning task activated the left ventral anterior pericalcarine cortex (lingual gyrus) significantly more than the Syllable task. This significant task difference was seen in the posterior fusiform gyrus bilaterally and extended along its length anteriorly in the left hemisphere. Dorsally, the only task differences seen in favour of the Meaning task were in the left superior lateral occipital cortex (Fig. 4). In contrast to the pattern seen in the controls, no areas showed greater activity for the Syllable task compared with the Meaning task in anophthalmia at the corrected statistical threshold used. At an uncorrected voxel-level threshold of Z > 2.3, there was greater activity for the Syllable task relative to the Meaning task in the posterior middle temporal gyrus bilaterally (the probabilistic location of V5/hMT+ in the Juelich atlas) in the anophthalmia group, along with noisy activity in inferior parietal and frontal regions bilaterally.

What and Where task-evoked activity in sighted controls
The What and Where tasks activated a similar bilateral network of cortical areas in sighted control participants that included the posterior superior temporal cortex, inferior parietal cortex (supramarginal gyrus), superior parietal cortex and posterior superior frontal cortex at the junction with the precentral gyrus (a structural landmark for the frontal eye-fields), the anterior insula, and posterior IFG, in addition to the right ventral premotor cortex (Fig. 5). Medially, the SMA/preSMA complex was activated in both tasks, as were cerebellar lobule VI/Crus I and lobules VIIb and VIIIa. For the Where task, there was additional subcortical activity in the right putamen and thalamus. For the What task, there was additional left putamen activity.

What and Where task-evoked activity in anophthalmia
Analysis of the anophthalmia group revealed a less extensive pattern of activity in these same cortical networks for both tasks, with additional activation of medial and lateral dorsal occipital areas bilaterally for both tasks that was not seen in controls (Fig. 5). As predicted, the anophthalmia group showed additional activity for the Where task in the dorsal occipital cortex extending from the posterior bank of the parieto-occipital sulcus on the medial wall to the lateral occipital surface bilaterally. For the What task, additional activity was seen in the anophthalmia group on the dorsal bank of the calcarine sulcus on the right, extending to the dorsal medial and lateral surfaces of the occipital lobe bilaterally, and in the posterior right middle temporal gyrus. Subcortically, for the Where task, there was no significant activity, whereas for the What task, the left putamen, thalamus bilaterally, and the cerebellar vermis and right lobule VI were also activated in the anophthalmia group.

Group differences in What and Where task-evoked activity
Activity evoked by the What and Where tasks separately was contrasted between groups to address our hypotheses that the anophthalmia group would show significant occipital lobe activation relative to the sighted controls and that the two groups would not show task-related differences in areas outside of the occipital lobe. Accordingly, the contrast between groups confirmed significantly greater activation in the anophthalmia group relative to the controls for both What and Where tasks in the dorsal lateral occipital cortex extending from the medial parieto-occipital sulcus onto the lateral convexity (Fig. 6). For the What task, there was also activation of the posterior part of the middle temporal gyrus bilaterally (overlapping with the probabilistic map of V5 in the Juelich atlas). Outside of the occipital lobe, there was significantly greater activity in anophthalmia in the left posterior cingulate cortex and, subcortically, in the thalamus and hippocampus on the left and in the cerebellar vermis. For the Where condition, there were no additional areas outside the occipital lobe that were more active in anophthalmia than controls. However, the reverse contrast showed greater activity in controls than anophthalmia for the Where task (but none for What) on the medial surface on the parietal bank of the parieto-occipital sulcus in the right hemisphere and more dorsally in medial parietal cortex (precuneus) bilaterally. Thus, our first hypothesis, that these auditory nonverbal tasks would evoke activity in the occipital cortex of our anophthalmia group, was supported. But our second hypothesis, that they would not show differences in activity in areas outside the occipital cortex, was not upheld.

Differences between What and Where task-evoked activity in sighted controls
Contrast of the two tasks in sighted participants revealed greater activity in dorsal areas for the Where task relative to the What task, including medial parietal cortex (cuneus), anterior inferior parietal lobe (supramarginal gyrus), and dorsal prefrontal cortex at the junction of the superior frontal sulcus and precentral sulcus (frontal eye-fields), all bilaterally (Fig. 7A). There were no areas in sighted controls where there was more activity for the What task relative to the Where one (Fig. 7B).

Differences between What and Where task-evoked activity in anophthalmia
In contrast to the sighted group, the anophthalmia group showed greater activity during the What task relative to the Where task in the left calcarine sulcus extending ventrally to the posterior fusiform gyrus. Outside of the occipital lobe, there was additionally greater activity for the What task bilaterally in retrosplenial cortex (Fig. 7A). There were no areas where there was greater activity for the Where task relative to the What task in anophthalmia (Fig. 7B). The activation shown in the top right panel, corresponding to regions responding more to What than Where in the anophthalmia > controls contrast (i.e. the interaction), is located in two regions. The area on the lower bank of the calcarine sulcus shows a group difference because there is greater activity for What than Where in the anophthalmia group but not in the controls, as can be seen in the leftmost panel of Fig. 7A. However, the difference in the cuneus reflects the fact that this region activates less for What than Where in the sighted controls but not in anophthalmia, as seen in Fig. 7B; there is no greater activity for What than for Where in either group in this location.

ROI analyses by task in anophthalmia
As hypothesised, the whole brain analyses highlighted the significant activation of occipital regions in the anophthalmia group evoked by verbal and nonverbal auditory stimuli, which was absent from the sighted control group. To further investigate the patterns of activity in specific 'visual' areas in individual participants, an ROI analysis was performed across pre-selected areas, divided into V1, 'dorsal' and 'ventral' regions. Since the individual sighted controls showed on average no activity in these regions, we plotted the entire range of their data to indicate where there was an overlap between the six participants with anophthalmia and all 12 controls (grey boxes, Fig. 8), and we limited our analysis to the activity in the anophthalmia group. A two-way ANOVA with main effects of task (Meaning, Syllable, What, Where) and hemisphere was performed for each of the five occipital ROIs.
In V1, there was a significant interaction between task and hemisphere (F(3, 20) = 5.9; corrected p = 0.02), reflecting the marginally greater activity in the left hemisphere in the two verbal tasks; the main effects of task and hemisphere were not significant. At the individual level, percent signal change responses in V1 were quite variable. The semantic task showed the strongest responses on the left for most participants, but in each of the other tasks only one or two of the anophthalmic participants showed levels of activation that did not overlap with the control range.
In the ventral visual areas, V4 and LO, both verbal tasks evoked robust activity in nearly all anophthalmic participants that exceeded the control range, while responses to the nonverbal tasks were comparable in the two groups (Fig. 8). Across the anophthalmia group, both areas showed a significant effect of task (V4: F(3, 20) = 8.6, corrected p = 0.004; LO: F(3, 20) = 8.2, corrected p = 0.004). Planned post-hoc tests indicated that, in both areas, this overall difference was driven by greater responses to the Meaning task relative to both the What (p = 0.01) and Where (p < 0.005) tasks, and a greater response to the Syllable task compared to Where (p < 0.05). There were no differences between the two tasks within a given experiment (i.e. no difference between the Meaning and Syllable tasks in the verbal experiment).
In the dorsal visual areas, hMT+ and V3a, both verbal tasks again evoked robust activation, exceeding the control range, in nearly all of the six anophthalmic participants. The What task similarly evoked robust activity in V3a in all anophthalmic participants, as did the Where task in nearly all. V3a showed no significant differences in activity among tasks or between hemispheres, however. Area hMT+ was less reliably activated by the nonverbal tasks. There was a significant difference among tasks (F(3, 20) = 6.1; corrected p = 0.025) but no difference between hemispheres. The task effect reflected greater responses to the Meaning (p = 0.01) and Syllable (p = 0.02) tasks relative to the Where task.
To ensure that any differences in activation did not reflect task performance, the number of correct responses was compared across groups and experiments using a two-way ANOVA. There were no significant effects of group or experiment and no interaction. The group means (standard error) were as follows: verbal experiment: anophthalmia 91.9% (0.02), sighted controls 91.1% (0.01); nonverbal experiment: anophthalmia 96.6% (0.01), sighted controls 88.7% (0.01).
In sum, while the whole-brain analysis showed 'visual' regions that were activated to a greater level in the anophthalmia group than in controls across all tasks, fewer regions showed task-specific activation. Specifically, both ventral areas showed a significantly higher response to the verbal, compared with the nonverbal, tasks in the anophthalmia group. Area hMT+ also showed a significant effect of task, with greater activity in the verbal tasks. At the individual level, all six anophthalmic participants showed activity greater than any sighted control in both dorsal and ventral ROIs on the left for the Meaning task, and nearly all showed activity on the right also. Responses for the Syllable task were similarly robust, with five or six participants showing activity in the left hemisphere dorsal and ventral visual areas and four or five in the right. The ROI analysis for nonverbal auditory processing showed an overall lower level of activation. V1, V4 and LO were not reliably activated by the What and Where tasks in the anophthalmic participants, with many individuals showing negative signal change (deactivation), as seen in the sighted controls (Fig. 8). In the dorsal stream areas, four of the anophthalmic participants showed activity on the right in hMT+ for the Where task and one on the left, whereas only three participants showed activity on the right for the What task and one on the left (Fig. 8). Activity in left and right V3a was seen in all six anophthalmic participants for the What task and in four or five for the Where task.

Fig. 7.
A Regions showing significantly greater activation to the What compared to the Where task in both the Anophthalmia and the Control groups. Only the Anophthalmia group had regions that were significantly more activated by the What task. B Regions showing significantly greater activation to the Where compared with the What task. Only the Control group showed greater activity to Where. Red areas are the statistical maps for the group contrast cluster thresholded at Z > 3.1, p < 0.05 FWE corrected. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Discussion
This study investigated the patterns of neural activity evoked by verbal and nonverbal auditory processing in anophthalmic and sighted participants. Occipital regions showed significantly greater activation in anophthalmic individuals than in sighted control participants across all tasks, consistent with previous findings in anophthalmic and other early blind populations. The dorsal stream visual regions, hMT+ and V3a, showed similar responses across all tasks. In contrast, the ventral stream visual regions were more robustly activated by the verbal tasks, with inconsistent activation across the anophthalmic participants for the nonverbal tasks. Put another way, the verbal tasks strongly activated both ventral and dorsal visual areas in anophthalmia, whereas the nonverbal tasks showed the most reliable activation in dorsal visual area V3a. Below, we summarise our findings and discuss them in the context of findings from other studies of early blind populations.

Dorsal and ventral patterns of processing during the verbal tasks
As seen previously (Poldrack et al., 1999; Burton et al., 2003; Devlin et al., 2003) and predicted by the dual-stream model (Hickok and Poeppel, 2007), the Meaning (lexical, semantic) and Syllable (sublexical, phonological) tasks produced the expected division of processing in the ventral and dorsal auditory speech processing streams, respectively. In the occipital cortex of the anophthalmia group, the whole-brain analysis revealed greater activity for the Meaning task relative to the Syllable task, as hypothesised, in ventral areas extending from the left pericalcarine cortex along the left ventral surface and including the posterior fusiform gyrus bilaterally. Unexpectedly, however, there was also greater activity for the Meaning task compared with the Syllable task in the left lateral dorsal occipital cortex. Our prediction that the dorsal and ventral visual areas would show a similar division of labour for auditory speech processing was therefore not supported. Instead, we found that both verbal tasks activated dorsal and ventral occipital areas and that there was greater activity for the Meaning task relative to the Syllable task in both dorsal and ventral occipital areas in the left hemisphere.

Fig. 8. Percent BOLD signal changes for each 'visual' area in V1, ventral and dorsal regions. In each case the coloured markers indicate individual data points from the participants with anophthalmia, with the black line indicating the mean. For each task, the darker coloured marker represents the left hemisphere and the lighter one the right. The four tasks are Meaning (blue), Syllable (red/pink), What (green) and Where (purple). The grey boxes represent the entire range of control participant responses. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Language processing in occipital cortex in individuals who are congenitally blind
Together with the results of the current study, there is now convincing evidence that wide regions of occipital cortex are involved in semantic processing in both anophthalmic and early blind individuals. These areas have been activated across many different studies that involve semantic processing in various ways. In anophthalmic participants, we previously found activation of V1, V2, V3, V4, V3a and LO bilaterally (though greater on the left) during a language task (Watkins et al., 2012). This activation was language specific (i.e. greater than for the baseline condition of listening to reversed speech) in LO only. In the current study, we saw activation across a very similar set of areas for both the phonological and the semantic tasks, with stronger activation in the semantic task. Findings are very similar for early blind participants. One previous study of small groups of early and late blind participants also specifically examined whether differential responses would be seen in occipital areas for semantic (meaning) and phonological (rhyming) processing of words (Burton et al., 2003). Their results in the early blind group were strikingly similar to those of the current study, with activation across all ventral and dorsal 'visual' areas in occipital cortex that was greater in magnitude and extent for the semantic task than the phonological one. Verb generation and sentence comprehension tasks also elicit widespread occipital cortex activity in blind populations (Bedny, 2017; Abboud and Cohen, 2019). Furthermore, the errors induced by TMS over V1 during verb generation in blind participants were semantic (e.g. "apple" => "jump" rather than "eat") rather than phonological (e.g. "apple" => "eap") (Amedi et al., 2004). Taken together, these findings suggest that the responses to language tasks in both dorsal and ventral occipital cortex of anophthalmic individuals may predominantly reflect semantic processing.
Even though participants' attention was directed to either the semantic or the phonological properties of the words heard, semantic processes remained at play during both tasks. There was no evidence for the predicted specialisation within the dorsal 'visual' areas for phonological processing. It is worth noting, however, that while the sighted group showed significantly greater activity in auditory dorsal stream areas for the phonological task, the anophthalmia group did not. The sample size or the task may have reduced sensitivity to detect this difference.

Dorsal and ventral patterns of processing during the nonverbal tasks
The What (identity) and Where (spatial localisation) tasks produced very similar patterns of activity in the expected portions of posterior superior temporal cortex, supramarginal gyrus, superior parietal cortex and the frontal eye-fields, but the expected dorsal/ventral processing split in auditory processing streams was only partially evident. Specifically, in sighted controls, the Where task showed greater activity than the What task in the dorsal medial parietal cortex, the supramarginal gyrus, and the frontal eye-fields bilaterally, but there were no areas showing greater activity for What relative to Where. In the occipital cortex of the anophthalmia group, both tasks additionally activated dorsal occipital cortex on the medial and lateral surfaces. The whole-brain group comparison for the What task showed additional activation in anophthalmia of hMT+ bilaterally. However, the contrast between tasks in anophthalmia found greater activity for the What task relative to the Where task in ventral areas, specifically in the left calcarine sulcus and left posterior fusiform (overlapping the probabilistic map of V4 from the Juelich atlas). The ROI analysis showed that the mean activity for the What task was at the top of the control range for hMT+ but did not differ significantly from that for the Where task within individuals. Also, neither V1 nor V4 was robustly activated by the nonverbal tasks in the anophthalmic participants, and the task difference in these areas appears driven by deactivations for the Where task in V1 bilaterally and in left V4 for nearly all individuals (including controls). Our prediction that the dorsal and ventral visual areas would show a similar division of labour for auditory nonverbal processing was therefore not supported. Both tasks activated dorsal occipital areas, and activation of ventral areas for the What task was inconsistent.

Nonverbal auditory processing in occipital cortex in individuals who are blind
Previous studies in early and congenitally blind populations similarly found activation of visual dorsal stream areas specifically in response to spatial processing of auditory stimuli as opposed to identification (Renier et al., 2010; Collignon et al., 2011) and to auditory motion (Poirier et al., 2006; Bedny et al., 2010; Wolbers et al., 2011; Jiang et al., 2016). These dorsal stream areas also appear integrated into a parietal (intraparietal sulcus) and frontal network that is typically involved in spatial attention and sound localisation (Collignon et al., 2011). Area hMT+ is also active during visuospatial processing in sighted participants, and so retains its specialisation of function with different sensory input in the early blind (Renier et al., 2010). Outside of occipital regions, medial parietal (precuneus) activity is typically associated with multisensory spatial processing in sighted and blind participants (Ricciardi et al., 2014), as seen here.
The lack of specificity for nonverbal task and hemisphere in these dorsal occipital regions in anophthalmia could indicate that hMT+ and V3a are involved in lower-level auditory processing, consistent with the notion that hMT+ acts as an early auditory processing area in anophthalmia and potentially receives direct subcortical inputs (Bridge and Watkins, 2019). Alternatively, it is possible that our task was not sufficiently difficult to reveal task-specific processes, or that the use of a limited set of stimuli resulted in a weak task-related response that failed to evoke reliable task-related differences in occipital areas. Unlike the stimuli used in the verbal experiment, which were unique in each trial, only four piano chords and four locations were used in this experiment, making the stimulation less rich in comparison.
The low task demands and repetition of a small set of stimuli in the nonverbal experiment should also be considered as possible explanations for the apparent selective activation by the verbal stimuli of V1 and the ventral areas. Using nonverbal sound stimuli that were complex and rich (in contrast with our four piano chords) Mattioni and colleagues revealed categorical representations in early blind individuals along the ventral occipital temporal cortex (Mattioni et al., 2020). Importantly, the organisation of these sound categories resembled the organisation of the same categories of visual stimuli in sighted individuals, indicating that this organisation does not depend on visual experience.

The role of V1 activity
In the current study, the processing of words in the semantic task evoked activity in V1 in anophthalmia, but very little activity was evoked by the four piano chords used as stimuli in the What-Where tasks. As noted above, a previous study of ours found equivalent levels of response in V1 in anophthalmia when listening to meaningful (auditory naming task) and non-meaningful (reversed) speech (Watkins et al., 2012), whereas similar contrasts in other early blind populations yielded language-selective responses in V1 (e.g. Bedny et al. (2011)). In another previous study, listening to pure-tone stimulus trains evoked activity in V1 in some but not all anophthalmic participants (Watkins et al., 2013), whereas a rich auditory stimulus (scrambled song clips) robustly activated V1 (Coullon et al., 2015b). One possibility is that V1 activity in anophthalmia differs from that in other early blind groups in that it represents simple auditory processing, but only a sufficiently powerful auditory stimulus can activate V1. Alternatively, V1 activity could be partially or exclusively due to back projections from other cortical areas, as seen in other studies of early blindness (Amedi et al., 2004; Bedny et al., 2011), consistent with the left-lateralised pattern of activity seen in V1 for the language tasks. A further possibility relates to a preparatory attentional response to the stimuli that differed between the tasks (Stevens et al., 2007) based on the cue at the start of each block, which might also reflect top-down modulation of early occipital cortex by frontoparietal attention networks (Kastner et al., 1999).
We have previously speculated that V1 in anophthalmia retains its early position in the processing hierarchy of sensory stimulation, albeit for a different modality, owing to subcortical reorganisation (see Bridge and Watkins, 2019). There is some support for this claim from rodent models of blindness due to absent eye development, in which auditory inputs from the inferior colliculus are rerouted to the lateral geniculate nucleus, resulting in auditory activation of visual cortex (Chabot et al., 2007). The lack of prenatal stimulation by light or endogenous activity in the anophthalmic human brain and in these rodent models might allow greater subcortical reorganisation of function because of a failure to become functionally specialised.
Alternatively, pre-existing cortico-cortical connections between primary sensory areas (Falchier et al., 2002) or between the language networks and the occipital lobes (Tomasello et al., 2019) could be unmasked or even strengthened by the absence of 'visual' input. There is little evidence in our own work, however, that V1 is functionally connected to language networks in either anophthalmia or the sighted control groups studied (Watkins et al., 2012, 2013; Coullon et al., 2015b). When using a reduced statistical threshold in one study to explore an interaction, controls showed activity in V1 when listening to speech but not to reversed speech, whereas the anophthalmic participants showed equivalent activity in V1 in both conditions (Watkins et al., 2012). However, in the same study, at rest, V1 retained high inter-hemispheric connectivity in both blind and sighted groups and was not involved in the left-lateralised resting state network that overlapped with functional language areas in the frontal, parietal, and temporal lobes. In contrast with these findings in anophthalmia, a seed-based functional connectivity analysis showed integration of pericalcarine cortex functionally activated by a word-generation task into a left-lateralised "language" network in early blind but not sighted participants (Abboud and Cohen, 2019).

Limitations of the study design
Within each of the verbal and nonverbal experiments, the auditory stimuli were identical, with the tasks directing attention to different features of the stimuli. This design was selected to tease apart the sensory input and the task-related processing. What this approach does not necessarily control for, however, is differences in task difficulty. The nonverbal tasks were relatively straightforward, as there were only four different chords to identify and four possible locations. In contrast, the verbal experiment was more challenging, particularly the Syllable task. While performance across tasks did not differ significantly, this may have been due to ceiling effects, since average anophthalmia performance in the nonverbal experiment was 96.6% compared with 91.9% in the verbal one. Thus, it is still possible that differences in task difficulty contributed to the greater overall activation levels in the verbal, compared with the nonverbal, experiment.
Another limitation of the study design was the limited number of repeats of each type of stimulus. While a block design was used to maximise statistical power, each condition consisted of only 13 blocks to keep the scan session to a reasonable duration. Additional blocks for each condition and additional runs could have uncovered further differences between tasks.

Case studies and small sample size
This group of six people with anophthalmia has been studied extensively over the past decade. Isolated bilateral anophthalmia is extremely rare, and it has not been possible to recruit additional suitable participants. The aim of the current study was to identify effects that were both robust and consistent across the participants with anophthalmia. The small sample raises issues relating to statistical power and to the validity of the assumptions made. Indeed, the ROI analyses indicated that only large differences between tasks reached statistical significance. Moreover, we additionally used Bonferroni correction to ensure conservative correction for multiple comparisons. The downside of this approach is that interesting inter-participant differences that relate to specific experiences or training, such as the ability to read Braille or to play an instrument, may be missed. To allow inspection of individual differences, we have plotted the individual data points for each ROI. These show that even when the cause of visual impairment is homogeneous, there is considerable inter-individual variability in the BOLD response across 'visual' areas.
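The Bonferroni correction applied across the five occipital ROIs can be sketched minimally as follows. The ROI names match those analysed above, but the uncorrected p-values here are placeholders for illustration, not the study's actual values.

```python
# Minimal sketch of Bonferroni correction across the five occipital ROIs.
# Uncorrected p-values are placeholders, not the study's actual values.
uncorrected = {"V1": 0.004, "V4": 0.0008, "LO": 0.0008, "hMT+": 0.005, "V3a": 0.21}

n_tests = len(uncorrected)  # 5 ROIs -> multiply each p-value by 5, capped at 1.0
corrected = {roi: min(p * n_tests, 1.0) for roi, p in uncorrected.items()}
significant = {roi: p for roi, p in corrected.items() if p < 0.05}
print(corrected)
```

This simple multiplicative form is conservative, which is the intended trade-off given the small sample: it controls the family-wise error rate at the cost of power to detect smaller effects.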

Open access statement
This research was funded in whole, or in part, by the Wellcome Trust (203139/Z/16/Z). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.