Topographic maps and neural tuning for sensory substitution dimensions learned in adulthood in a congenitally blind subject

Topographic maps, a key principle of brain organization, emerge during development. It remains unclear, however, whether topographic maps can represent a new sensory experience learned in adulthood. MaMe, a congenitally blind individual, has been extensively trained in adulthood for perception of a 2D auditory-space (soundscape) where the y- and x-axes are represented by pitch and time, respectively. Using population receptive field mapping we found neural populations tuned topographically to pitch, not only in the auditory cortices but also in the parietal and occipito-temporal cortices. Topographic neural tuning to time was revealed in the parietal and occipito-temporal cortices. Some of these maps were found to represent both axes concurrently, enabling MaMe to represent unique locations in the soundscape space. This case study provides proof of concept for the existence of topographic maps tuned to the newly learned soundscape dimensions. These results suggest that topographic maps can be adapted or recycled in adulthood to represent novel sensory experiences.


Introduction
Topographic maps, a key principle of brain organization, have mostly been investigated in the context of motor and sensory representations. In the visual cortex for example, neuronal receptive fields form retinotopic maps that preserve the spatial layout of the two-dimensional (2D) image falling on the retina. Following the seminal studies by Hubel and Wiesel ( Hubel and Wiesel, 1970 ;Wiesel and Hubel, 1965 ) the general consensus has been that an accurate representation of the visual field in the early visual cortex necessitates visual input during critical stages of development. The critical role of sensory-dependent experience early in life for typical cortical development has been generalized to other sensory and motor modalities ( Knudsen, 2004 ) and even to higher-order multisensory systems ( Hensch, 2004 ). However, the extent to which the emergence of structured topographic maps requires specific sensory experience early in life is still unknown ( Striem-Amit et al., 2015 ), nor is it known whether training in adulthood in a substitute sensory modality or augmented sensory experience ( Konig et al., 2016 ) could result in a similar topographic representation.
To explore these questions, we used a unique individual as a model, namely a congenitally blind adult who has been extensively trained in adulthood on a visual-to-auditory sensory substitution device (SSD). Successful behavioral training in SSDs, with the emergence of distal attribution, together with comparable brain activation in consistent cortical regions in the congenitally blind, suggests that there may be internal perceptual integration of the new artificial sensory experience. If true, this proposition would provide a unique model for studying whether topographic maps of the 'visual'/soundscape field can develop in adulthood without any prior visual experience.
Here we report a case study of MaMe (a 50-year-old man), our most proficient congenitally blind user of the EyeMusic visual-to-auditory SSD ( Abboud et al., 2014 ). The EyeMusic SSD processes the visual image column by column, from left to right, into sheets of sound ('soundscapes') that preserve the two dimensions of the visual image: the y-axis is represented by musical notes (pitch) in the pentatonic scale and the x-axis by time in seconds. MaMe is our most skilled and veteran EyeMusic user, and the only participant who has trained on the EyeMusic SSD on a near-regular basis for about 15 years. By now MaMe is able to use the 2D soundscapes to recognize, in both natural and computerized environments, various visual categories such as everyday objects, faces, numbers, letters and body parts.
To investigate the extent to which the learned 2D soundscape space is represented topographically in the brain we used population receptive field (pRF) mapping ( Dumoulin and Wandell, 2008 ), a methodological approach employed for imaging and analysis of sensory topographic maps in the visual ( Dumoulin and Wandell, 2008 ;Harvey and Dumoulin, 2011 ), auditory ( Thomas et al., 2015 ) and motor ( Schellekens et al., 2018 ) cortices, and even for cognitive dimensions ( Harvey et al., 2013, 2015, 2020 ). pRF modeling links the underlying neural response, as measured by blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI), with specific stimuli in a predefined field, making it possible to quantify the spatial preference and tuning of the underlying neural population to the perceived stimuli ( Dumoulin and Wandell, 2008 ). pRF modeling was used here to investigate neural tuning to each of the two dimensions of the soundscape space, namely pitch and time, and their cortical spatial arrangement. We hypothesized that if MaMe developed a 2D representation of the soundscape field, then similarly to the organizational principles of the visual domain, both axes of the EyeMusic SSD should be organized topographically and correlate in some manner (conceptually similar to the correlative axes of eccentricity and polar angle in retinotopic areas). Theoretically, these maps may be formed by neural recycling ( Dehaene and Cohen, 2007 ) of the sensory domains processing the soundscape dimensions of pitch and time ( Harvey et al., 2020 ), or in the visual cortex, the substituted modality, which has been shown to be extensively recruited in the congenitally blind following training with SSDs ( Amedi et al., 2017 ).
Maps of the artificial soundscape space may also be represented in the association cortices that are involved in multisensory integration and high-order cognition, and may be expected to retain more plastic capacity in adulthood than the primary sensory cortices. In contrast, the neural response to the SSD stimuli may not be topographically organized, as this organization principle may be constrained by development in critical periods.

Subject
MaMe is a 50-year-old, congenitally blind male who lost his sight at birth owing to retinopathy of prematurity. He participated in the scanning sessions at the ages of 46 and 50. The study was approved by the Ethics Committee for human subjects at Tel Aviv University, and informed consent was obtained from the subject.

Stimuli
The EyeMusic algorithm reads a visual image, column by column, from left to right, and forms a soundscape of the image. The topographical soundscape maintains the exact shape and location of the image. The x-axis is represented by the time domain (0 to 4.5 s) and the y-axis by musical notes rising in a pentatonic scale (pitch range: 65.4 to 1760 Hz) ( Abboud et al., 2014 ), such that higher notes represent a higher position on the y-axis. The EyeMusic resizes the captured visual images. Here, the full image resolution was x = 50 by y = 30 pixels, and the maximum stimulus radius was 15 pixels. The EyeMusic runs a color-clustering algorithm to match musical instruments to the colors of the visual pixels ( Abboud et al., 2014 ). The EyeMusic app is available for free from Google Play (developed by the Amedi lab).
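As a rough illustration, the sketch below converts a pixel position into an onset time and a frequency using the parameters stated above. It is a toy under explicit assumptions, not the EyeMusic implementation: row 0 is taken as the bottom of the image, rows are spaced evenly on a log-frequency axis between the stated 65.4 and 1760 Hz endpoints (whereas the real device snaps each row to a note of the pentatonic scale), and instrument/color assignment is ignored.

```python
# Toy sketch of the pixel-to-sound mapping (NOT the EyeMusic code).
# Assumptions: row 0 = bottom of the image; rows are log-spaced in
# frequency between the stated endpoints; the real device instead
# snaps rows to pentatonic notes and assigns instruments by color.
N_COLS, N_ROWS = 50, 30          # stated image resolution
SWEEP_S = 4.5                    # stated left-to-right reading time
F_LOW, F_HIGH = 65.4, 1760.0     # stated pitch range in Hz

def pixel_to_sound(x, y):
    """Map pixel (x, y) to (onset time in s, frequency in Hz)."""
    onset = SWEEP_S * x / N_COLS                # x-axis -> time
    frac = y / (N_ROWS - 1)                     # y-axis -> pitch height
    freq = F_LOW * (F_HIGH / F_LOW) ** frac     # log-spaced rows
    return onset, freq
```

For example, a pixel in the middle column of the top row would sound halfway through the 4.5 s sweep at the highest frequency.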
Visual stimuli conveyed to audition were white bars (generated in MATLAB), and each bar was read by the EyeMusic in 4.5 s. A beep sound indicated the beginning of stimulus presentation. Within one functional scan the bars moved along the soundscape field in 4 different orientations and 2 opposite directions, with 5 bars in each direction covering the entire soundscape field ( Fig. 1 A; see video in supplementary material). Four periods without visual-to-auditory stimuli were included in each functional run (adapted from Dumoulin and Wandell, 2008 ). This presentation was repeated in all functional runs and scanning sessions. During the first scanning session, red bars randomly replaced the white bars (the EyeMusic represents color by different musical instruments ( Abboud et al., 2014 )); in these few instances MaMe was asked to respond using a response box. To test the reliability of the results, a second scanning session was conducted four years later, using the same stimuli and task. Finally, a third scanning session included the same bar stimuli, except that here all the bars were white and the rest periods also included beep sounds every 4.5 s (i.e., the rest periods included the beep sound that indicates the start of the EyeMusic reading the visual image, but no bars were presented). At a few random instances a second beep sound was added at the beginning. MaMe was asked to attend to the beep sounds and respond when two beep sounds were presented.
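The bar apertures described above can be sketched as binary masks over the 50 × 30 soundscape field. The exact bar width and step size of the experiment are not specified here, so this sketch simply tiles the field with 5 abutting bars per orientation, following the general bar-sweep design of Dumoulin and Wandell (2008):

```python
import numpy as np

# Minimal sketch of bar-sweep apertures (an assumption-based toy, not
# the experiment's stimulus code): 5 abutting bars tile the field per
# orientation, at 0/45/90/135 degrees, each sweepable in both
# directions.
W, H = 50, 30      # stated soundscape field resolution
N_STEPS = 5        # stated number of bars per sweep direction

def bar_masks(angle_deg, reverse=False, n_steps=N_STEPS):
    """Binary masks (n_steps, H, W) of a bar sweeping across the field."""
    ys, xs = np.mgrid[0:H, 0:W]
    theta = np.deg2rad(angle_deg)
    # signed position of each pixel along the sweep axis
    proj = xs * np.cos(theta) + ys * np.sin(theta)
    lo, hi = proj.min(), proj.max()
    edges = np.linspace(lo, hi, n_steps + 1)
    masks = np.stack([(proj >= edges[i]) & (proj < edges[i + 1])
                      for i in range(n_steps)])
    masks[-1] |= (proj == hi)      # include the far edge in the last bar
    if reverse:
        masks = masks[::-1]
    return masks
```

Stacking the masks for all 8 orientation/direction combinations, interleaved with blank frames for the rest periods, yields the stimulus movie that the pRF model takes as input.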
The experiment was carried out using the software package Presentation (Neurobehavioral Systems, Albany, CA, USA). Auditory stimulation was delivered to both ears using fMRI-compatible headphones (S14 in-ear headphones by Sensimetrics).

MRI acquisition
MRI was performed at Tel Aviv University with a 3T MRI system (Siemens Prisma, Siemens, Erlangen, Germany) with a receiver head coil of 64 channels.

Data processing
T1-weighted anatomical scans were automatically segmented using Freesurfer ( Dale et al., 1999 ) and then hand-edited using ITK-SNAP ( Yushkevich et al., 2006 ) to reduce segmentation errors. Functional MRI analysis was performed with the mrVista software package ( http://white.stanford.edu/software/ ). The first 8 timeframes of each functional scan were discarded because of non-steady-state magnetization. Time-series functional data were corrected for head movement and motion artifacts within and between scans. The time-series data of the 8 runs were then averaged and aligned to the T1-weighted anatomical space. Data were interpolated to the anatomical segmentation space using a trilinear interpolation. The gray matter cortical surface was reconstructed and rendered as a smooth 3D surface ( Wandell et al., 2000 ).
[Fig. 1 D caption: In this region, organized maps for the EyeMusic notes in a pentatonic scale (representing the y-axis of the image) are shown on an inflated cortical surface, and slices (coronal, left; axial, right) are shown on Heschl's gyrus. ff indicates the fundamental frequency of the EyeMusic notes.]

Population receptive field analysis
pRFs were modeled at each cortical location as a 2D Gaussian, as previously described ( Dumoulin and Wandell, 2008 ). Modeling the pRF as a 2D circular or non-circular Gaussian, with different sizes in the x and y dimensions, gave similar results. The circular Gaussian is defined by 3 parameters: the spatial position of the center (x0, y0) and the receptive field size. These parameters are estimated from the fMRI data and the stimulus position time course. Briefly, for each voxel a predicted neural response is estimated from a combination of the modeled pRF and the stimulus position time course. The predicted response is then convolved with a canonical hemodynamic response function (HRF). The estimated pRF parameters are those that minimize the residual sum of squares (RSS) between the predicted and the observed time series. Next, the HRF parameters were estimated from the data to minimize the RSS between the predicted and the observed BOLD response, while keeping the pRF parameters constant. Finally, the pRFs were re-estimated using the estimated HRF parameters ( Harvey and Dumoulin, 2011 ). The probability of observing variance explained (R2) values above 20% was calculated by generating a null distribution of model fits from an ROI on the medial surface of the right hemisphere (10,493 voxels), and then determining the proportion of null R2 values exceeding 20%.
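The estimation steps above can be sketched as a grid search over Gaussian parameters. This is a toy version under several assumptions, not the mrVista implementation: a simple gamma-shaped kernel stands in for the canonical HRF, the coarse-to-fine refinement and HRF re-estimation stages are omitted, and the prediction is scaled to the data with a single least-squares beta. `stimulus` is assumed to be a binary aperture movie of shape (time, height, width).

```python
import numpy as np

def gamma_hrf(t, peak=5.0):
    """Simple gamma-shaped HRF standing in for the canonical HRF."""
    h = (t / peak) ** 2 * np.exp(-t / peak)
    return h / h.sum()

def fit_prf(bold, stimulus, tr=2.0):
    """Grid-search the (x0, y0, sigma) 2D-Gaussian pRF minimizing RSS."""
    T, H, W = stimulus.shape
    hrf = gamma_hrf(np.arange(0, 30, tr))
    ys, xs = np.mgrid[0:H, 0:W]
    flat = stimulus.reshape(T, -1)
    best_params, best_rss = None, np.inf
    for x0 in range(0, W, 5):          # coarse parameter grid
        for y0 in range(0, H, 5):
            for sigma in (1.0, 2.0, 4.0, 8.0):
                prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2)
                             / (2 * sigma ** 2))
                # predicted neural response: stimulus-weighted pRF sum
                neural = flat @ prf.ravel()
                pred = np.convolve(neural, hrf)[:T]
                # least-squares scaling of the prediction to the data
                beta = pred @ bold / max(pred @ pred, 1e-12)
                rss = np.sum((bold - beta * pred) ** 2)
                if rss < best_rss:
                    best_params, best_rss = (x0, y0, sigma), rss
    return best_params
```

At each grid point the predicted time series is the stimulus-weighted sum of the Gaussian, convolved with the HRF; the parameters with the smallest RSS win, mirroring the logic of the full model minus its refinement stages.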

Test-retest analysis
Regions of interest (ROIs) were manually drawn around areas that showed topographic representation (maps 1-4 in Fig. 2 A-B, and the auditory cortices). We extracted the pRF center positions within these ROIs for both the original and the replication data. We excluded recording sites where the variance explained was lower than 20% in either of the two data sets. For each ROI, a Pearson correlation analysis was run on the pRF center positions between recording sites in the original and replication data (Table S1). P-values were corrected for upsampling of the functional data to the anatomy.

Results
To investigate the tuning and spatial arrangement of the neuronal populations processing the soundscape space, we adapted the pRF method ( Dumoulin and Wandell, 2008 ) to the visual-to-auditory space learned with the EyeMusic SSD developed in our lab. White visual bars were conveyed through the EyeMusic SSD and swept through the soundscape space in four different orientations and two opposite directions within one functional scan ( Fig. 1 A). The pRFs were modeled as 2D Gaussians and described by three parameters: their position in the soundscape field (y0, x0) and size (σ). To keep MaMe engaged during scanning, he was asked to respond when a bar appeared in red rather than white (colors are represented by different types of sound ( Abboud et al., 2014 )). Red bars were presented randomly on a few occasions during a functional run. MaMe responded correctly in all trials. We conducted a second scanning session four years after the original data acquisition in order to evaluate the replicability and reliability of the results. We report here on the neurally tuned responses that were found in the original dataset and were replicated in the second scanning session (supplementary Figure S1).

The pRF model captures neural responses driven by the soundscape space
Whole-brain cortical analysis revealed that the pRF model captured the neural responses to the conveyed SSD stimuli very well ( Fig. 1 B, C and supplementary Figure S1 for the replicated results), with the variance explained (R2) exceeding 20% (corresponding to p < 0.0002, see Methods). Neural responses elicited by the SSD stimuli were found in extensive areas of the bilateral auditory, occipital and parietal cortices. Maps of the variance explained by the model showed that the neural response to the EyeMusic visual-to-auditory stimuli is not restricted to the auditory cortices; rather, visual and high-order multisensory regions participate in the processing of the learned soundscape field. In fact, the variance explained in the occipital and parietal lobes was similar in magnitude to that in the auditory cortex, the main region known to process sounds. Within the visual cortex, neural responses were seen from early to higher retinotopic regions, proceeding into both the dorsal and the ventral 'visual' streams.

Topographic maps for pitch in auditory cortex
We examined the tuning of the neural response to the EyeMusic SSD's notes in the pentatonic scale (indicating the y-axis of the soundscape image) and their spatial arrangement as represented by the pRF position along the y-axis of the field.
We found organized maps of the EyeMusic's notes in the pentatonic scale in the bilateral auditory cortices ( Fig. 1 D). Topographically, these maps exhibited a mirror-symmetric organization of the stimulus pitches, similar to the tonotopic maps elicited by pure-tone stimuli ( Formisano et al., 2003 ;Humphries et al., 2010 ;Striem-Amit et al., 2011 ). Specifically, we found a mapping from high notes (high fundamental frequency) to low notes (low fundamental frequency) and back to high in the superior temporal plane, with Heschl's gyrus located within the mirror-symmetric map. Interestingly, this resembles findings reported when pure tones rather than natural pentatonic music notes were used ( Formisano et al., 2003 ;Humphries et al., 2010 ;Striem-Amit et al., 2011 ).

Topographic maps for time and pitch axes of the soundscape space
Beyond the auditory cortices, several higher-order occipito-temporal areas revealed highly elaborate and intriguing patterns. These included topographical arrangements of both pitch in the pentatonic scale (y-axis) and time (x-axis) ( Fig. 2 A). In the right lateral occipital cortex (LO), axes of change in pitch frequency and time were found in two separate, yet adjacent, clusters. Within the superior cluster, a topographic map of pitch was found ( Fig. 2 A-B, map 3). Within the inferior cluster, a topographic map of time emerged ( Fig. 2 A-B, map 4).

Overlapping maps of time and pitch in the occipito-parietal cortex
Interestingly, two more maps representing both time and pitch were found in the right occipito-parietal cortex: one situated in the posterior superior parietal cortex adjoining the borders of the occipital cortex ( Fig. 2 A-B, map 1), and the other spanning more anteriorly, adjacent to the supramarginal gyrus ( Fig. 2 A-B, map 2). Within these maps, preferred pitch increased from the top of the gyrus toward the intraparietal sulcus. The map of pRF position along the time dimension exhibited a similar axis of change along the face of the gyrus, with preferred stimulus timing decreasing from late to early. In particular, the axis of arrangement of the topographic map of time overlapped with that of the map of pitch, suggesting that the two maps together span the 2D soundscape field. Conceptually, the overlap of the two maps resembles the way in which retinotopic areas, such as V1, use two orthogonal axes to identify a unique location in visual space ( Fig. 2 C).
The gradients of the pRF center position along the x- and/or y-axes in all the maps reported above were well correlated between independent datasets collected years apart (Table S1), suggesting that these maps are stable and replicable. Finally, the pRF model uses spatial smoothing in the first stage of the parameter search ("coarse-to-fine"). This smoothing, however, does not affect the results: similar maps are found when no smoothing is applied in the first fit search.

Receptive fields size does not change with the soundscape axes
Next, we tested whether, akin to retinotopic maps of the visual field, the size of the receptive fields of the soundscape space covaried with the axes of the soundscape field. We did not find such a link: the vast majority of the pRFs in all maps were at maximum size. Perceiving the soundscape space does not follow the retinotopic link between gaze and eccentricity, since the EyeMusic algorithm reads the visual image column by column. Similar receptive field sizes might therefore provide similar resolution for perception across the soundscape field.

Attention to the stimuli is critical for the perception of the soundscape space
The current experiment focused MaMe's attention on the bars. To evaluate whether the neurally tuned responses were solely the result of attention, and whether attention to some aspect of the SSD bars was necessary for the emergence of the corresponding topographic representations, we included a third scanning session with an identical presentation of the SSD bars. The SSD algorithm indicates the start of "scanning" of the visual image with the sound of a beep. In this experimental condition, instead of attending to a property of the presented bars, such as their color, MaMe was asked to respond when an irregularity in the beep was heard (two beep sounds instead of one). In this condition, no neural tuning to the presented bars was found, demonstrating that attention to some property of the soundscape stimuli is needed for their representation to fully emerge.

Summary of the main results
Topographic maps, a central principle of brain organization, are mostly known to characterize the primary motor and sensory cortices ( Gray, 1918 ;Penfield and Boldrey, 1937 ). Here, we examined how a learned 2D visual-to-auditory soundscape space, acquired through prolonged training during adulthood, is represented in the brain. MaMe, a congenitally blind 50-year-old male, has been extensively trained during adulthood on perception of the soundscape space, where pitch and time represent, respectively, the y- and x-axes of a visual image. By perceiving various soundscapes, MaMe has already been shown to successfully identify complex visual input ( Abboud et al., 2015 ;Striem-Amit et al., 2012a ;Striem-Amit and Amedi, 2014 ). Using pRF mapping, we found neural populations tuned in a topographic manner to pitch and time, not only in the auditory cortices, but also in the occipito-temporal and parietal cortices. The overlap between the maps of time and pitch in the occipito-parietal cortex may suggest that the dimensions of the EyeMusic SSD adhere to topographic organization principles similar to those described in the visual retinotopic cortex for identifying a position in visual space.
This case study provides proof of concept for the existence of topographic maps of pitch beyond auditory cortex. Recent studies provided evidence of topographic maps for time duration ( Harvey et al., 2020 ;Protopapa et al., 2019 ). Although we cannot specify whether the pitch and time maps rely on preexisting topographic connectivity and functional maps or whether such topographic maps emerge following extensive training, their discovery suggests a novel form of plasticity that goes beyond classic works on the reorganization of topographical maps in adulthood within unisensory cortices subsequent to sensory deprivation or through the rewiring of neural pathways during development ( Kaas et al., 1990 ;Merzenich et al., 1984Merzenich et al., , 1983Pons et al., 1991 ;Roe et al., 1990 ).

Computational task-selectivity vs. (uni)sensory specific cortical organization
The use of SSDs constitutes a valuable approach not only to investigating brain adaptation and plasticity, but also to unraveling the organization principles of the sensory brain and its related divisions of cortical specialization. Previous research on blind subjects trained in SSD algorithms that convert visual information into sound or touch has shown that cortical recruitment is not restricted to the sensory modality substituting for the visual input; rather, neural activity is found in specific task-related high-order visual areas after auditory ( Abboud et al., 2015 ;Amedi et al., 2007 ;Arno et al., 2001 ;Collignon et al., 2006 ;Merabet et al., 2009 ;Renier et al., 2005 ;Striem-Amit et al., 2012a, 2012b ;Striem-Amit and Amedi, 2014 ) or tactile substitution ( Matteau et al., 2010 ;Ptito et al., 2005, 2009, 2012 ). Those results prompted a general theoretical framework suggesting that brain areas are specialized not for unisensory inputs but rather for a given computational task (Task-Specific Sensory-Independent (TSSI) organization); for example, the LOC/ventral visual stream can render 3D shape geometry regardless of the sensory input and in the absence of visual experience ( Amedi et al., 2001, 2017 ). Similarly, although MaMe has no visual experience, he showed significant neural responses when presented with transformed 'retinotopic-like' auditory stimuli in known retinotopic regions of the early visual cortex, and in both the ventral and the dorsal 'visual' streams, known to process object perception and object localization, respectively ( Goodale and Milner, 1992 ;Mishkin et al., 1983 ) ( Fig. 1 B). Upon training, the neural response along the occipito-temporal and occipito-parietal axes became tuned to the stimuli to at least the same extent as in the auditory cortex ( Fig. 1 B).
A recent study of blind echolocators found that their primary visual cortices represent sounds based on their spatial locations ( Norman and Thaler, 2019 ). The results of MaMe's first scanning session also hinted at the existence of topographic maps for pitch in the early visual cortex. However, this result was not replicated in the second scanning session. Therefore, we cannot provide further support for the finding of 'retinotopic-like' maps of sounds in the early visual cortex.

Multiple novel topographic maps in occipital and parietal cortices
Modeling the neural response with 2D Gaussian pRFs revealed topographic maps of the pRF position along the two dimensions of the soundscape field (the notes in a pentatonic scale and the timing of stimulus presentation, corresponding to the y- and x-axes, respectively) in the right superior parietal cortex and the right supramarginal gyrus. Perception of the soundscape space within these regions is supported by studies reporting involvement of the right parietal cortex in spatial attention and temporal processing ( Battelli et al., 2007 ;Bueti et al., 2008 ;Colby and Goldberg, 1999 ;Harvey et al., 2020 ;Hayashi et al., 2018 ;Protopapa et al., 2019 ;Silver et al., 2005 ;Wiener et al., 2010 ). The parietal cortex was found to participate in judgments of both auditory and visual durations ( Bueti et al., 2008 ;Harvey et al., 2020 ;Hayashi et al., 2018 ), and the right supramarginal gyrus has been suggested as a locus for time estimation ( Hayashi et al., 2015 ;Wiener et al., 2010, 2012 ). Notably, a network of topographic maps for visual time duration was recently revealed in the parietal and occipito-temporal cortices ( Harvey et al., 2020 ). Note that, at least for the pitch dimension, the reported neural responses in these regions were not triggered by the repeated rising logarithmic tone chirp commonly used for tonotopic mapping, as revealed by a previous study (Figure S2). Furthermore, a study from our lab in sighted individuals ( Striem-Amit et al., 2011 ) found that topographic tonotopic maps were not present in active regions outside the auditory cortex. However, some evidence of limited organized tonotopic maps outside the auditory cortex was detected in a group of congenitally blind subjects ( Watkins et al., 2013 ): in response to pure tones, greater activity in the lateral occipital cortex was found in an anophthalmic group relative to controls.
In some subjects, the activation in area V5/MT+ in response to auditory listening was organized in a topographic manner. The pitch maps found in the parietal and occipito-temporal cortices of MaMe, however, are unique; to the best of our knowledge, topographic pitch maps in these areas have not been previously reported. Notably, in the present study the topographic maps of the two soundscape axes, time and pitch, were shown to coincide in the parietal lobe. Conceptually, the overlap between the maps suggests that the learned soundscape field may be analyzed in a way similar to how retinotopic maps span the visual field in order to identify a unique position in visual space.
A representation of the soundscape field was found in the right LO ( Fig. 2 A-B), an area that was first implicated in visual object recognition ( Malach et al., 1995 ) and in processing of visual shape information ( Kourtzi and Kanwisher, 2001 ). Later studies modified the interpretation of the role of this region to suggest that the LO functions as a multimodal shape operator ( Amedi et al., 2007 ;Tal et al., 2016 ). Neural activity in the LO in blind subjects was shown to be recruited for object recognition by means of haptic, visual-to-auditory SSD, and echolocation ( Amedi et al., 2007( Amedi et al., , 2001Arnott et al., 2013 ). Our results on the topographical representation of both dimensions of the soundscape field (albeit in two separate, yet adjacent, clusters) extend the role of the LO as a sensory-independent shape processor and imply that the underlying multisensory neuronal response is organized in an orderly way that can be tuned to specific axes of the displayed stimuli.

Study limitations, confounding factors and future directions
In this case study we investigated the organization of the neural populations that process the EyeMusic SSD soundscape space. We found several cortical maps that correspond to the EyeMusic's dimensions of pitch and temporal position. However, we cannot exclude the contribution of other cognitive processes, such as imagery, to the emergence of these maps. Imagery is always present to some extent in studies of sensory substitution devices. We cannot rule out the possibility that MaMe created a representation of the EyeMusic stimuli by some form of mental imagery, and that the topographic maps we uncovered represent a high-level cognitive map of his mental images rather than a lower-level topographic perceptual representation. Indeed, we found that attention to the SSD bars is necessary to perceive the soundscape dimensions. Some level of attention is expected to be needed in order to reconstruct the soundscape space into a mental representation; this cognitive load is one of the reasons SSDs are not widely used in everyday life. The involvement of attention in the emergence of these maps, with or without imagery (non-visual, as he never had visual experience), does not weaken the conclusion that MaMe successfully formed a dedicated space in his mind for the SSD stimuli that was highly detailed and topographically organized. Future studies should aim to disentangle the contributions of perception, imagery and attention to the appearance of these occipito-parietal maps.
Another limitation concerns the extent to which we can draw conclusions as to the origins of the maps; namely, whether the maps rely on preexisting topographic connectivity and prior functional maps that were "recycled" in MaMe as a result of the SSD task demands, or whether his extensive training in a new sensory perception formed entirely novel topographic maps. Recent studies suggesting that topographic connectivity is innate and formed prior to the emergence of visual category-selectivity support the view of functional plasticity in adulthood that relies on underlying pre-existing neural connectivity. However, the overlapping maps for both pitch and time in the parietal cortex might hint at a new form of cortical neural recycling ( Dehaene and Cohen, 2007 ) that involves the re-adaptation of topographic maps in two dimensions. Thus, these results encourage us to hypothesize that topographic maps can be combined or enriched in adulthood to represent an artificial sense or a novel sensory experience. However, the questions of whether any subject who is highly trained with an SSD can create such novel topographic maps, and whether these maps will be consistent in their anatomical locations across users or whether each individual recycles the cortex in a different way, remain unresolved ( Dehaene and Cohen, 2007 ;Hannagan et al., 2015 ). In addition, the specificity of these maps for the soundscape space, or whether such maps would emerge in the same location for any novel spatial dimension, also remains open. Moreover, the time scale for the emergence of these maps remains unclear, since we did not control for the number of hours spent on MaMe's training, nor can we indicate how much training was necessary for the topographic maps to emerge. These are all fascinating directions for future longitudinal studies.
Furthermore, the development of topographic neuronal specialization may be dependent on customized training that creates a sufficiently detailed perception with preserved continuous axes. Residual sensory experience and visual imagery may affect the SSD experience, such that discrepancies might arise between the neural tuning in related anatomical locations for different populations (e.g., trained sighted, late blind, congenitally blind).
Finally, the maps for pitch outside the auditory cortices showed a bias in the position of the pRF centers towards the upper half of the soundscape field. We can only speculate that this bias might be due to differences in auditory discrimination in general, such that higher frequencies within the range of the EyeMusic notes are easier to discriminate ( Long, 2014 ). As a result, MaMe might better discriminate EyeMusic stimuli presented in the upper field of the soundscape space, ultimately creating some bias in the position of the centers of the receptive fields.

Conclusions
The topographic principles of brain organization are found throughout the brain and are thought to be grounded in innate connectivity and specific experiences during critical periods of development. Gradual and ordered cortical representations linked to the retinotopic axes of eccentricity or polar angle have been reported outside the primary visual cortex, in regions selective for higher visual concepts ( Malach et al., 2002 ), and in multisensory associative cortices such as the parietal ( Saygin and Sereno, 2008 ;Schluppeck et al., 2005 ;Sereno et al., 2001 ;Silver et al., 2005 ) and frontal cortices ( Hagler and Sereno, 2006 ;Kastner et al., 2007 ). The recent discovery of topographic maps of numerosity ( Harvey et al., 2013 ), object size ( Harvey et al., 2015 ) and duration judgment ( Harvey et al., 2020 ;Protopapa et al., 2019 ) has extended the principle of systematic neural organization to higher cognitive concepts. The current results put forward the novel hypothesis that topographic maps can be modified, or can even emerge, after extensive training in topographic sensory experiences. This would imply that topographic maps are not constrained to the modality-specific sensory input perceived during development but can be modified by novel sensory experiences across the lifespan. Beyond the question of nature vs. nurture, our results contribute to studies on plasticity in adulthood in that they raise the issue of the impact of the increasing use of novel technologies and augmented reality in shaping the brain, and their interactions with existing sensory specialization and brain organization. The effects of augmented sensory experience have begun to be addressed both through invasive methods in animals ( Hartmann et al., 2016 ;Thomson et al., 2013 ) and through non-invasive technologies in humans ( Kaspar et al., 2014 ;Konig et al., 2016 ;Nagel et al., 2005 ).
Overall, these results not only carry implications for our understanding of brain organization and neural adaptation during adulthood, but they also carry practical implications for assistive technologies ( Karcher et al., 2012 ) and brain/machine interfaces.

Data and code availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request. The data were analysed using mrVista, available at http://white.stanford.edu/software/ .

Funding
This work was supported by the James S. McDonnell Foundation scholar award (No. 220020284) to A.A., the European Research Council Consolidator Grant (773121) to A.A., the Ammodo KNAW Award to S.D., a Netherlands Organization for Scientific Research grant (NWO-VICI 016.Vici.185.050) to S.D., and the KNAW Visiting Professorship Programme to S.D. and A.A.

Declaration of Competing Interests
There is no conflict of interest to declare.