
The sound of reading: Color-to-timbre substitution boosts reading performance via OVAL, a novel auditory orthography optimized for visual-to-auditory mapping

  • Roni Arbel ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing

    roni.arbel@mail.huji.ac.il

    Affiliations Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel

  • Benedetta Heimler,

    Roles Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel, The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel

  • Amir Amedi

    Roles Conceptualization, Funding acquisition, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel, The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel

Abstract

Reading is a unique human cognitive skill, and its acquisition has been shown to extensively affect both brain organization and neuroanatomy. In contrast to the high literacy rates among sighted individuals in Western countries, literacy rates via tactile reading systems, such as Braille, are declining, posing an alarming threat to literacy among non-visual readers. This decline has many causes, including the length of training needed to master Braille, which must also include extensive tactile sensitivity exercises, the lack of proper Braille instruction, and the high cost of Braille devices. The far-reaching consequences of low literacy rates raise the need to develop alternative, cheap and easy-to-master non-visual reading systems. To this aim, we developed OVAL, a new auditory orthography based on a visual-to-auditory sensory-substitution algorithm. Here we demonstrate its efficacy for word reading and investigate the extent to which redundant features defining characters (i.e., specific colors added to letters and conveyed auditorily via different musical instruments) facilitate or impede auditory reading. We tested two groups of blindfolded sighted participants who were exposed to either a monochromatic or a color version of OVAL. First, we showed that even before training, all participants were able to discriminate between 11 OVAL characters significantly above chance level. Following 6 hours of dedicated OVAL training, participants were able to identify all the learned characters, differentiate them from untrained letters, and read short words/pseudo-words of up to 5 characters. The Color group outperformed the Monochromatic group in all tasks, suggesting that redundant character features are beneficial for auditory reading.
Overall, these results suggest that OVAL is a promising auditory-reading tool that can be used by blind individuals and by people with reading deficits, as well as for the investigation of reading-specific processing dissociated from the visual modality.

Introduction

“Once you learn to read, you will be forever free.”

- Frederick Douglass

Among the most important cognitive skills, reading not only enables communication and information acquisition, but also exerts a wide impact on brain organization and neuroanatomy, as revealed by comparing literates to illiterates [1–6]. This includes the further refinement of brain mechanisms tailored to spoken language processing and phonemic awareness, and the development of anatomical structures and language-oriented functional selectivity across the visual and lingual cortices, alongside a series of behavioral advantages in various domains [3, 6]. Readers, for instance, showed better low-level sensory abilities compared to non-readers [7–12], improved phonological processes and speech processing [11–16], and improved verbal memory [17–19]. Some evidence even hints at the possibility that reading influences higher-level cognition such as problem solving or planning, although the effect of schooling is difficult to filter out in these cases [20] (see for reviews [1, 21]).

While literacy rates among sighted individuals in developed countries are high and constantly increasing, literacy rates among blind individuals who rely on tactile-based systems such as Braille are only about 10% and declining [22]. In addition to possible effects on neuroanatomy and various related behaviors, this low literacy rate is associated with lower academic achievement, lower employment status and even lower general satisfaction with life [23, 24]. So, why is Braille literacy in decline? The first problem relates to the accessibility of the Braille code. In contrast to print-based systems, which benefit from thorough instruction in schools and are easily accessible to sighted readers both in physical form (i.e., books) and through all types of electronic devices, the Braille code often lacks dedicated instruction, thus preventing many blind children from properly learning it [25]. Moreover, Braille texts require either the physical production of heavy and expensive books, or expensive electronic Braille displays that require specialized hardware [25]. The second, and even more crucial, issue concerns the difficulty of mastering a tactile script, which requires enormous effort to sharpen tactile sensitivity. The distance between dots in a Braille cell is close to the tactile acuity limit [26], which makes tactile reading acquisition a long and difficult process, especially for late-blind individuals who typically have lower tactile sensitivity [27–29]. Indeed, several pieces of evidence demonstrate that the onset of blindness largely determines proficiency in Braille reading in adulthood: congenitally and early blind individuals are able to read 80–120 Braille words per minute and more [30–32], whereas late-blind people usually read two to three times slower, and many of them do not succeed in properly learning Braille at all [27, 31, 33].
The decline of Braille reading is accompanied by an increased use of audio technologies such as audiobooks and other text-to-speech conversion algorithms used in tandem with digital devices. The lack of training required for the use of these programs, their low cost and easy installation, and the fact that they do not require dedicated hardware make them appealing to blind and visually impaired individuals, especially students and others who are required to read large amounts of information, although initial evidence suggests that Braille reading is still the medium preferred by students [34]. This reported preference for Braille reading appears to be in line with the predictions of the seminal and influential Simple View of Reading theory, which posits that reading comprehension is determined by two cognitive capacities: decoding words through alphabetic coding, and language understanding [35, 36] (see [37] for more recent empirical evidence). Under this framework, audio listening technologies, such as audiobooks, are not classified as reading, as they lack the decoding aspect and rely only on language understanding. Although initial evidence suggests that semantic comprehension is not worse for audiobooks than for print reading [38–40], other evidence highlights various advantages of print reading over passive listening, including extensive changes triggered by literacy in both white and grey matter within the language system, with reading activating a broader brain network compared to listening [1, 41]. Research in this field, however, is still inconclusive, and more work is needed to fully characterize the differences and similarities between these two processes.

Taken together, the aforementioned evidence clearly suggests that typical reading acquisition confers many benefits on neural and behavioral efficiency, highlighting the great need to create additional, alternative full reading options for the blind and visually impaired population that rely not only on language understanding (as audio listening technologies do), but also on active word decoding (as print-reading systems do). Such novel systems should be cheap, easily accessible, and both relatively easy and quick to master, also in adulthood, in contrast to current tactile-based systems [42]. We propose that a reading system based on the discrimination of auditory patterns, rather than tactile ones as in Braille, could be a promising solution. Indeed, the auditory system is known to have a higher resolution in both time and frequency than the somatosensory receptors of the skin [43], thus potentially reducing training time substantially while also potentially increasing overall reading speed. Previous attempts in this direction aimed at transforming visual letters into auditory spatio-temporal patterns using visual-to-auditory Sensory-Substitution Devices (SSDs), namely devices that transform visual information into audition using algorithms that preserve the exact shape and location of objects [44, 45]. These studies demonstrated that it is indeed possible to read via audition, ultimately showing that auditory reading activates the same region in the ventral visual stream, the Visual Word Form Area (VWFA), as visual and Braille reading [46–49]. However, this type of SSD reading never left the lab, and was not adopted as a reading tool by the blind community.
One possible explanation for this outcome is that the orthography of visual letters may not be optimal for conversion to the auditory modality, ultimately lengthening the process of reading, hampering its fluidity and potentially even increasing the necessary training time. Indeed, similar conclusions were reached during the development of tactile reading systems. Initial attempts aimed at reading via embossed visual letters, which proved cumbersome [50]. Tactile reading accuracy was significantly improved following the development of tailored tactile-based systems such as the Braille code [51]. We therefore hypothesize that, instead of a direct translation of the visual orthography into audition, auditory reading may be facilitated by a script designed specifically for this sensory modality.

Therefore, we created an auditory-based alphabet called OVAL, whose characters combine features from the Braille system (i.e., in Braille each character is characterized by a unique spatial pattern) and Morse code (i.e., in Morse code, each character is characterized by a unique temporal pattern of “dots” and “lines”) (Fig 1A). OVAL characters were first created in their visual form, and then sonified by the visual-to-auditory EyeMusic SSD, creating unique auditory spatio-temporal patterns that we term "audemes" (auditorily transformed graphemes) (Fig 1B).

Fig 1.

A. OVAL orthography. Visual representation of the Hebrew alphabet translated into the OVAL orthography. Trained letters are highlighted by a green square around the letter’s Hebrew name and its corresponding homologue in the Latin alphabet. Note that we depict here the Color OVAL; in the Monochromatic OVAL, all letters are white. B. OVAL Algorithm. OVAL visual letters were translated into sounds following the EyeMusic transformation algorithm: each image is scanned from left to right using a sweep-line approach, so that the x-axis is mapped to time (i.e., characters positioned further to the left of the image are heard first). The y-axis is mapped to the frequency domain using the pentatonic scale, such that parts of a character that appear higher in the image are sonified with a higher pitch. Color is mapped to musical instruments. In the Color OVAL we used three colors: white, blue and red, transformed into choir, trumpet and piano respectively. In this example, the word “Hello” is written in OVAL.

https://doi.org/10.1371/journal.pone.0242619.g001

In addition, seminal findings in tactile shape analysis demonstrated that adding texture variations to tactile shapes, namely a redundant, discriminative feature, enhanced shape discrimination performance [52]. It was thus suggested that tactile reading might also benefit from the addition of redundant discrimination information. However, because the systematic addition of texture to Braille is not feasible, due to high costs and the fact that Braille characters wear out with use, the influence of redundant features on non-visual reading performance was never systematically tested. Research on proficient visual readers demonstrated the opposite outcome, namely that adding redundant discriminative features to each letter, such as color, hindered reading performance [53]. In contrast to the Braille code, the addition of redundant discriminative features is easily implemented in the OVAL orthography. Indeed, the EyeMusic algorithm, which we use to create OVAL audemes, has the option to convert color into audition via timbre manipulations (i.e., different musical instruments) [44]. This in turn enabled us to directly investigate the influence of redundant discrimination information on auditory reading. One hypothesis was that reading performance would be enhanced when adding redundant features to OVAL characters, due to possibly easier discrimination between letters, as suggested in the tactile modality [52]. The alternative hypothesis was that such redundant information would hinder reading, as shown in the visual modality [53].

The first aim of this work was to investigate the feasibility of OVAL as a quick-to-learn and efficient reading system. Thus, we tested a group of blindfolded literate sighted participants on several reading tasks after 6 hours of dedicated OVAL training. In addition, we aimed to investigate whether redundant features, such as color, facilitate or hamper auditory reading. Therefore, we tested an additional group of blindfolded sighted participants using the exact same training and reading tasks, this time using a color (multi-instrument) version of OVAL. For this first investigation into auditory reading we tested sighted rather than blind participants, to assess whether our auditory orthography is a valid non-visual reading system. This allowed us to exclude possible confounds such as level of literacy (all our participants acquired reading at typical developmental stages) or auditory training and experience (which is known to be enhanced in the blind population [54–57]). These results can serve as a baseline for quick OVAL learning, to be used in future studies as a comparison for results acquired in other populations, such as blind and visually impaired individuals, as well as synesthetes and people with reading difficulties, among others.

Material and methods

Participants

A total of 18 sighted individuals took part in the experiments (mean age: 22.67 years, SD: 4.54; 7 males). All participants were right-handed, literate, native Hebrew speakers without any diagnosed neurological conditions (including dyslexia or other learning disabilities), and all had normal or corrected-to-normal visual and hearing abilities. In addition, all participants were naïve to the EyeMusic visual-to-auditory SSD as well as to any other SSD. The experiment was approved by the ethical committee of the Hebrew University of Jerusalem. All participants signed a consent form before starting the experiment and received monetary compensation for their participation. Participants were assigned to one of two groups (9 participants in each), the "Color" group or the "Monochromatic" group, and underwent OVAL training and experiments as described below. Participants were blindfolded throughout both training and experimental procedures.

OVAL system

The OVAL auditory script is a novel alphabetic orthography designed for optimal compatibility with the EyeMusic visual-to-auditory sensory substitution device (SSD). Each letter of the alphabet is created in its visual form using a unique combination of vertical lines and dots, which vary in number as well as in spatial layout, and optional color (depending on the training group, see below) (Fig 1A). OVAL letters are transformed to sound following the three principles of the EyeMusic SSD: 1. Y-axis location: the higher on the screen a feature of a character is positioned, the higher the pitch with which it is sonified (and vice versa), using a pentatonic scale; 2. X-axis location: each image is scanned from left to right in a column-by-column manner, so that features positioned further to the left are heard first; 3. Color: colors are conveyed via timbre manipulations (i.e., different musical instruments). The EyeMusic algorithm can currently convey five colors (white, blue, yellow, red, green), and silence is conveyed with black. Here we used 3 colors: white (choir), red (Reggae organ), and blue (brass instruments) (Fig 1B). In the Monochromatic group, all letters were white, while in the Color group each letter had a given timbre assigned to it; thus, color served as an additional feature helping to differentiate among the letters (Fig 1B).
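The three mapping principles above can be sketched in code. This is a minimal illustrative sketch, not the actual EyeMusic implementation: the pentatonic MIDI values, the data layout, and all function names are our assumptions for illustration; only the left-to-right column scan, the 100 ms per column, and the three color-to-instrument pairings come from the text.

```python
# Illustrative sketch of an EyeMusic-style mapping for OVAL audemes:
# column index -> onset time, row height -> pentatonic pitch, color -> timbre.
# Pitch values and data layout are hypothetical, not the real EyeMusic tables.

PENTATONIC_MIDI = [57, 60, 62, 64, 67, 69, 72, 74, 76, 79]  # assumed scale steps
INSTRUMENTS = {"white": "choir", "red": "reggae_organ", "blue": "brass"}
COLUMN_MS = 100  # each x-axis column plays for 100 ms (per the paper)

def sonify(columns):
    """columns: list of columns, each a list of (row, color) features.
    Rows count from the bottom, so higher rows map to higher pitches.
    Returns a list of (onset_ms, midi_pitch, instrument) events."""
    events = []
    for x, column in enumerate(columns):
        for row, color in column:
            events.append((x * COLUMN_MS,
                           PENTATONIC_MIDI[row],
                           INSTRUMENTS[color]))
    return events

# A single blue dot (like the letter "waw") becomes one brief brass tone
# at onset 0 ms; a second column would start 100 ms later.
events = sonify([[(4, "blue")]])
```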

More specifically, OVAL characters combine features from the Braille system (i.e., in Braille each character is characterized by a unique spatial pattern) and Morse code (i.e., in Morse code, each character is characterized by a unique temporal pattern of “dots” and “lines”). Letter orthography was based on the combination of dots and lines, as in Morse code. However, unlike Morse character configurations, OVAL lines were vertical rather than horizontal. This was done to take advantage of the pitch variations characterizing the EyeMusic algorithm (i.e., see the y-axis transformation in the paragraph above; Fig 1B), allowing the creation of a unique spatial configuration for each character, a key feature of all orthographies. This choice also allowed each character to be played more quickly than if horizontal lines were used (i.e., see the x-axis transformation in the paragraph above; Fig 1B). Morse characters are composed of up to 4 elements. In the currently tested version of the OVAL algorithm, each element is played for 100 ms. However, to further limit the temporal length of letters and thus permit faster reading, when a letter was composed of 4 components, the 4th component was added on top of the third, creating a “dual component” (an EyeMusic column containing two features of a character played simultaneously). Therefore, the maximum length of OVAL characters was 300 ms (3 x-axis EyeMusic columns). As in Morse code, more frequent letters use shorter combinations: the most frequent letters in the Hebrew orthographic system were represented by short spatio-temporal configurations (i.e., 1 EyeMusic column: 100 ms), less frequent letters by longer configurations (i.e., 2 EyeMusic columns: 200 ms), and the least frequent letters by the longest configurations (i.e., 3 EyeMusic columns: 300 ms).
Finally, in both training and experimental procedures, independently of the length of OVAL characters/words, each EyeMusic scan of an OVAL display lasted 4 seconds.
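The character-length rule described above can be made concrete with a small sketch. The function name is ours; the 100 ms column duration and the dual-component rule (a 4th element stacked onto the 3rd column) come from the text.

```python
# Sketch of the OVAL length rule: a character with n Morse-style elements
# occupies min(n, 3) EyeMusic columns, because a 4th element is stacked on
# top of the 3rd ("dual component"). Each column lasts 100 ms, so no
# character exceeds 300 ms.

def character_duration_ms(n_elements, ms_per_column=100):
    assert 1 <= n_elements <= 4, "OVAL characters have 1 to 4 elements"
    n_columns = min(n_elements, 3)
    return n_columns * ms_per_column

assert character_duration_ms(1) == 100  # most frequent letters: one column
assert character_duration_ms(4) == 300  # 4 elements still fit in 3 columns
```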

In this experiment, we trained participants to identify 11 OVAL characters, half of the Hebrew alphabet (see Fig 1A; for details see the OVAL training section below). The chosen letters were controlled for frequency in the language: all letters were ranked by their frequency in Hebrew, and every other letter in this ranking was selected, yielding a set of 11 letters with varying frequency of occurrence in the language.
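The selection rule can be sketched as follows, assuming "every other letter" means alternating through the frequency-ranked list; the placeholder letter names below stand in for the actual Hebrew frequency ranking, which is not reproduced here.

```python
# Hedged sketch of the letter-selection rule: rank the 22 Hebrew letters by
# corpus frequency, then take every other one, yielding 11 letters that
# span the full frequency range. Letter names here are placeholders.

def select_training_letters(letters_by_frequency):
    """letters_by_frequency: letters ordered from most to least frequent."""
    return letters_by_frequency[::2]

letters = [f"L{i}" for i in range(22)]  # placeholder ranking, most frequent first
trained = select_training_letters(letters)
```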

Experimental apparatus

All participants sat in front of a laptop computer and wore a set of headphones through which they received EyeMusic information throughout the training and experimental procedures (Fig 1B).

Experimental procedure

Before OVAL training, participants performed an OVAL character discrimination task, which served as a baseline test of OVAL character discrimination. Participants then went through 6 hours of dedicated OVAL training, consisting of 3 training sessions lasting 2 hours each. During each two-hour session, participants took short breaks when they felt tired, to prevent excessive fatigue and maintain focus during OVAL learning. Following training, participants repeated the OVAL character discrimination task, along with three additional experiments: a task discriminating trained from untrained letters, a letter identification task on the trained characters, and a reading task using both words and pseudo-words of up to 5 characters composed of any of the trained OVAL letters. The training sessions and the post-training experimental session had to be completed within six days, with the experimental session held the day after the last training session. Training protocols and experiments were exactly the same for the two groups (Monochromatic and Color); the presence or absence of colored OVAL letters was the only feature differing between groups. Below, we describe all aspects of the experimental procedure in detail. A power analysis using G*Power [58] showed that to achieve a power > 80%, with a priori alpha set at 0.05, for within-between variable interactions with a partial eta squared of 0.14 in the ANOVAs, 5 participants are needed in each group.
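For repeated-measures designs, G*Power takes Cohen's f as its effect-size input, which can be derived from the partial eta squared of 0.14 stated above. The conversion formula is standard; the function name is ours.

```python
# Minimal sketch of the effect-size conversion underlying the reported
# power analysis: partial eta squared -> Cohen's f, the input G*Power uses
# for repeated-measures ANOVA power calculations.
import math

def cohens_f_from_eta_sq(eta_p_sq):
    return math.sqrt(eta_p_sq / (1 - eta_p_sq))

f = cohens_f_from_eta_sq(0.14)  # the paper's assumed interaction effect, ~0.40
```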

All experiments were performed using Presentation® software (Version 18.0, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com).

OVAL training

The OVAL training protocol was designed to teach participants to identify 11 Hebrew OVAL letters using the EyeMusic SSD, and to read short words and pseudo-words of up to 5 characters composed of those letters.

Following the baseline “letter discrimination” task, it was explained to the participants that the sounds they had just heard conveyed letters in a novel auditory orthography. Then, the basic algorithm of the EyeMusic was briefly explained (i.e., the rules of the algorithm regarding the x- and y-axis transformations and, for the Color training group, the color transformation). The training on reading with OVAL unfolded as follows. First, the letter “waw”, the most common letter in the Hebrew orthographic system, was played. It is represented by a single dot in the OVAL orthography, translated into audition, according to the EyeMusic algorithm, as a brief, single tone; for the Color group this letter was blue, i.e., played via the brass instrument. Participants were then encouraged to decipher this pattern independently, based on the EyeMusic algorithm that had been briefly explained to them. Most participants could report that the sound they heard represented a dot. The experimenter then explained the related color (timbre) information (for the Color group) and provided the phonological meaning of the audeme. The next letter, “alef”, was then played. Again, participants tried to decipher the pattern based on the algorithm’s transformation rules, with feedback from the experimenter if needed. Then, for the Color group, the color/timbre information was disclosed, and finally the phonological meaning of the audeme was provided. Two more letters were introduced, and after a short repetition of the 4 learned OVAL letters, participants were instructed to read short words composed of these characters. When participants became able to read these words, 3 more letters were introduced following the same procedure described above, and participants were then asked to read words formed by any combination of the 7 letters they had learned.
Then 2 more OVAL letters were introduced, and participants read words composed of any of the 9 letters learned; finally, the 2 last letters were added and participants read words composed of any of the 11 learned letters. The final part of the training consisted of reading words and pseudo-words containing all 11 learned letters. The majority of participants learned 7–9 OVAL letters (and how to read short words composed of them) within the first 2-hour training session, with some participants from the Color group even succeeding in learning all 11 OVAL characters within this first session. When necessary, the remaining 2–4 letters were learned in the second training session, and the rest of the training program was dedicated to word/pseudo-word reading and 2-word combination reading. The order of the introduced letters, as well as the order of the presented words/pseudo-words, was exactly the same for all participants in both training groups, with little variation to address specific individual needs. By the end of the training program, all participants were able to correctly read the list of words/pseudo-words presented during training. Importantly, the words/pseudo-words used in training differed from those used in the experiment.

Behavioral experiments

Experiment 1 – letter discrimination.

The ability to discriminate different OVAL characters was tested both before OVAL training ("pre-training") and after training ("post-training"). In the pre-training stage, all subjects were naïve to the purpose of the study, to the OVAL orthography and to the EyeMusic SSD algorithm.

To test the baseline character discriminability of the OVAL auditory orthography, as well as training-induced improvement in discrimination, we created a letter-discrimination task with forced-choice responses, similar to those used for Braille [59, 60]. Each trial consisted of a pair of OVAL characters which could be either identical or different. Participants had to decide whether the characters played were the same or different by pressing one of two possible response keys with the index and ring fingers (response keys for same/different were counterbalanced among participants). Each trial began with a silence period of 1500–1800 ms; then a pair of OVAL characters was played consecutively, with a 700 ms pause between the two letters, repeating until participants responded. Each pair of characters was presented within a 4-second interval (i.e., the time used by the EyeMusic to scan each OVAL display).

The experiment comprised 242 trials in total, divided into two separate blocks, with a break of up to 90 seconds between blocks. The 242 trials consisted of all possible combinations of the 11 experimental OVAL letters, each repeated twice, in a random order with the constraint that two identical trials never followed one another. Correct rates were assessed.
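The trial structure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes "all possible combinations" means all 11 × 11 ordered pairs (which yields the reported 242 trials when each pair appears twice), and it implements the no-identical-consecutive-trials constraint by rejection sampling.

```python
# Sketch of the letter-discrimination trial list: 11 x 11 letter pairs,
# each repeated twice (242 trials), shuffled so that no trial is
# immediately followed by an identical one.
import itertools
import random

def build_trials(letters, repeats=2, seed=0):
    pairs = list(itertools.product(letters, letters)) * repeats
    rng = random.Random(seed)
    while True:
        rng.shuffle(pairs)
        # constraint: no two identical trials in a row
        if all(pairs[i] != pairs[i + 1] for i in range(len(pairs) - 1)):
            return pairs

trials = build_trials(list("ABCDEFGHIJK"))  # 11 placeholder letters
```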

After training, participants repeated the same task using a different trial sequence, created according to the same randomization criteria as the pre-training sequence.

The pre- and post-training sessions lasted ~20 minutes each.

Experiment 2 – discrimination of trained vs. untrained letters.

To further investigate how well the OVAL characters could be discriminated from one another, and to rule out the possibility that the OVAL characters selected for this experiment happened to be easier to discriminate, we presented participants, after training, with a discrimination task covering all OVAL letters (all 22 letters of the Hebrew orthographic system; Fig 1A). Participants were asked to decide whether a letter belonged to the cohort of trained letters (i.e., familiar letters) or not (i.e., untrained letters), and to respond by pressing one of two possible response keys with the index and ring fingers (response keys for trained/untrained letters were counterbalanced among participants). Each trial began with a silence period of 1500–1800 ms, and then one of the 22 OVAL letters was presented repeatedly within 4-second intervals.

During the whole experiment, each OVAL letter was repeated 3 times, for a total of 66 trials presented in a random order, with the constraints that the same letter could not be repeated more than twice consecutively, and that repetitions of the same letter did not all appear in the same portion of the experiment. There were two possible trial sequences, to control for order biases, which were presented to participants in counterbalanced order. The number of correctly identified letters (correct rate, CR) and reaction times were collected. The whole experiment lasted ~7 minutes.

Experiment 3 – letter identification of trained characters.

To test basic letter identification following training, participants were asked to name each of the 11 experimental letters they had been trained on. Each trial began with a silence period of 1500–1800 ms, and then one OVAL letter was presented repeatedly within 4-second intervals. Participants were instructed to press the space bar as soon as they identified the letter and then to give their response orally (which was entered into the computer by the experimenter). They then pressed the space bar again to move to the next trial. During the experiment, each letter was repeated 6 times, for a total of 66 trials in a random order, with the constraints that the same letter could not be repeated more than twice consecutively, and that repetitions of the same letter did not all appear in the same portion of the experiment. There were two possible trial sequences, to control for order biases, which were presented to participants in counterbalanced order. The number of correctly identified letters (correct rate, CR) and reaction times were collected. The experiment lasted ~10 minutes.

Experiment 4 – words/pseudo-words reading task.

To test participants’ reading proficiency after training, they underwent a words/pseudo-words reading task. 68 stimuli, half words and half pseudo-words of various lengths, none of which had been introduced during training, were presented in a random order, with the constraint that words of the same length did not all appear in the same portion of the experiment. Pseudo-words were created by switching one letter of an experimental word with a different letter from the pool of trained letters. Each word/pseudo-word consisted of two to five letters (20 short words and 20 short pseudo-words of either 2 or 3 letters; 14 long words and 14 long pseudo-words of either 4 or 5 letters, using all 11 characters learned during training). Each stimulus was played repeatedly until participants responded. Participants were informed at the beginning of the task that stimuli could be either words or pseudo-words, and were instructed to listen to the whole stimulus and press the space bar as soon as they completed reading each word/pseudo-word. They were then asked to provide their response orally, namely saying the word/pseudo-word aloud (which the experimenter entered into the computer), and pressed the space bar to move to the next trial. The spelling of Hebrew words and pseudo-words corresponds most of the time to their pronunciation; in the few cases in which spelling was not directly obvious from pronunciation, we asked participants to spell the word aloud. Each trial started with a tone lasting ~20 ms marking its beginning. During the experiment, each word/pseudo-word was presented within 4-second intervals. The number of correctly read words/pseudo-words (CR) and reaction times were recorded. The length of the experiment varied based on participants’ performance, ranging between 20–40 minutes.
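The pseudo-word construction rule can be sketched as follows. This is illustrative only: the example word and letter inventory are Latin placeholders, not the Hebrew stimuli, and the function name is ours.

```python
# Sketch of pseudo-word generation: replace one letter of a real word with
# a different letter drawn from the pool of trained letters, so the result
# differs from the original at exactly one position.
import random

def make_pseudoword(word, trained_letters, seed=0):
    rng = random.Random(seed)
    i = rng.randrange(len(word))
    alternatives = [c for c in trained_letters if c != word[i]]
    return word[:i] + rng.choice(alternatives) + word[i + 1:]

pseudo = make_pseudoword("dog", list("abcdefghijk"))  # placeholder stimuli
```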

Data analysis

All statistical analyses were conducted using JASP software (version 0.11.1), using t-tests and mixed repeated-measures ANOVAs. Post-hoc tests were conducted using Bonferroni correction. Effect sizes are reported as partial eta squared for ANOVAs and as Cohen’s d based on the pooled SD for t-tests. Below, we present all the results comparing performance between the two groups (Monochromatic and Color OVAL readers), separately for each of the experiments. Reaction times were calculated only for correct trials.
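For illustration, the one-sample comparisons against chance and the pooled-SD Cohen’s d reported below can be reproduced outside JASP as in the following sketch (standard library only; variable and function names are ours, not JASP's):

```python
import math
from statistics import mean, stdev

def t_against_chance(scores, chance):
    """One-sample t statistic and Cohen's d for accuracy vs. chance level."""
    n, m, sd = len(scores), mean(scores), stdev(scores)
    t = (m - chance) / (sd / math.sqrt(n))
    return t, (m - chance) / sd  # d = mean difference in SD units

def pooled_cohens_d(a, b):
    """Cohen's d for two independent groups using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                       / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled
```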

Results

Experiment 1 – letter discrimination task

In the pre-training task, thus before participants learned the phonological meaning of the auditory spatio-temporal patterns, we aimed at testing basic auditory discrimination among the various OVAL letters. OVAL Monochromatic readers' average success was 88.4% (± 9.84%, standard deviation (SD)) while OVAL Color readers' average success was 98.87% (SD ± 0.92%). Two t-tests against chance (50%) confirmed that performance in both groups was above chance level (Monochromatic readers: t(8) = 11.70, p < .001, d = 3.9; Color readers: t(8) = 158.55, p < .001, d = 52.85), thus indicating that participants were able to discriminate among OVAL characters based only on their distinctive auditory properties.

When we repeated the same experiment after training, thus investigating the benefit of phonological training on OVAL letter discrimination, we observed that OVAL Monochromatic readers' average success was 95.27% (SD ± 3.6%) while OVAL Color readers’ average success was 99.7% (SD ± 0.34%) (Fig 2A). To test whether there was any significant difference between the performance of the two groups, and whether explicit reading instruction improved letter discriminability, we performed a mixed repeated-measures ANOVA using average success rates as the dependent variable, with Group (Color vs. Monochromatic) as a between-group factor and Learning stage (pre-training vs. post-training) as a within-group factor. This ANOVA revealed a main effect of Group ([F(1,16) = 11.3, p = .004, η2p = 0.41]), due to overall higher accuracy in the Color compared to the Monochromatic training group (Color readers: M = 99.17%, SD = 0.85%; Monochromatic readers: M = 91.83%, SD = 8.0%). The main effect of Learning stage was also significant ([F(1,16) = 11.4, p = .004, η2p = 0.42]), due to overall higher accuracy in the post- compared to the pre-training task in both groups (pre-training: 93.52%, SD = 8.60%; post-training: 97.47%, SD = 3.36%). Finally, we also observed a significant interaction between Learning stage and Group ([F(1,16) = 6.28, p = .02, η2p = 0.28]). Post-hoc analysis revealed that the difference in performance between the two groups was mainly due to the Monochromatic training group performing significantly worse than the Color group before training (p = .002), while performance between groups did not differ significantly post-training (p = .53) (Fig 2A).

Fig 2.

A. Comparison of auditory discrimination among trained OVAL characters between pre- and post-training performance. Although pre-training discrimination was already high, indicating easy discriminability of the OVAL audemes, both groups significantly improved after training, reaching ceiling. The Color group outperformed the Monochromatic group in pre-training performance. B-C. After training, we tested the discriminability of trained vs. untrained OVAL letters. B. Both groups reached high success rates in this task, but Color readers significantly outperformed Monochromatic readers. C. Color readers could also distinguish trained from untrained letters significantly faster than Monochromatic readers. D-E. During the letter identification task, both groups showed high accuracy, but the Color group achieved higher success rates (D). In addition, the Color group also showed shorter reaction times (E). F. In the majority of trials, Color OVAL readers required only one presentation of a letter in order to correctly identify it, while Monochromatic readers required an additional repetition on average. In A-F, error bars represent standard deviations (SD). Asterisks represent statistically significant differences: ** p<0.005; * p<0.05. G. Pattern of errors during identification of trained characters, depicted in a confusion matrix. Each cell represents the percent of errors participants committed in identifying each letter pair, reported separately for the Monochromatic (left) and Color (right) groups.

https://doi.org/10.1371/journal.pone.0242619.g002

Experiment 2 – discrimination of trained vs untrained OVAL letters

Considering that participants were trained on half of the alphabet, we wanted to ensure that discrimination of trained letters generalized to the entire Hebrew OVAL alphabet. Therefore, we included, after training, a discrimination task on all characters (the 22 letters forming the Hebrew orthographic system; Fig 1A), in which participants had to decide whether a given character was familiar or not.

Results showed that both groups could successfully differentiate trained letters from untrained letters, with Monochromatic readers correctly identifying 85% (SD ± 10%) of the letters on average, significantly above chance level (50%) (t(8) = 10.31, p < .001, d = 3.4). Color readers identified on average 94% (SD ± 5.1%) of the letters, which was also significantly above chance level (t(8) = 26.2, p < .001, d = 8.7). The Color group performed significantly more accurately than the Monochromatic group (t(16) = 2.56, p = .023, d = 1.181) (Fig 2B).

Color readers were also significantly faster in this discrimination, taking on average 1.36 seconds (SD ± 0.77 sec) to respond compared to Monochromatic readers, who took 2.68 seconds on average (SD ± 1.02 sec; t(16) = 3.104, p = .007, d = 1.463) (Fig 2C). Despite this difference, both groups determined whether a letter was trained or not within the first presentation of the character (each character was presented at 4-second intervals).

Experiment 3 – letter identification task

Beyond basic sensory discrimination, we tested whether participants learned to successfully identify the 11 OVAL letters taught during our relatively short training period. Results showed that both groups could successfully identify trained letters, with Monochromatic readers correctly identifying 84.7% (SD ± 10.0%) of the letters on average, significantly above chance level (9%) (t(8) = 22.69, p < .001, d = 7.56). Color readers identified 96.97% (SD ± 3.7%) of the letters on average, which was also significantly above chance level (t(8) = 71.11, p < .001, d = 23.7). In addition, the Color group performed significantly more accurately than the Monochromatic group (t(16) = 3.46, p = 0.003, d = 1.63) (Fig 2D). To investigate the error pattern of participants’ responses, and thus highlight whether participants had difficulties in identifying specific OVAL letters, we created a “confusion matrix” with the percent of mistakes for each possible pair. Results showed no specific pattern of mistakes in the Color group, which overall made very few identification errors (see Fig 2G). Results from Monochromatic readers were more informative, showing that they committed more mistakes in identifying Lamedh (L) as Het (H), and Tsade (W) as Teth (U), pairs of letters with relatively similar visual orthography (Figs 1A; 2G).
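A confusion matrix of this kind can be computed from raw trial data as sketched below. This is a hypothetical reconstruction: the trial tuple format and letter names are our assumptions, not the authors' analysis code.

```python
from collections import Counter

def confusion_matrix(trials, letters):
    """Percent of responses per (presented, responded) letter pair.

    `trials` is a list of (presented, responded) tuples; each row sums
    to 100% of that letter's trials, so off-diagonal cells give the
    per-pair error percentages.
    """
    counts = {p: Counter() for p in letters}
    for presented, responded in trials:
        counts[presented][responded] += 1
    return {
        p: {r: 100 * counts[p][r] / max(sum(counts[p].values()), 1)
            for r in letters}
        for p in letters
    }
```

For example, with trials `[("L", "H"), ("L", "L")]`, the cell for presented "L" and responded "H" is 50.0, i.e., half of the "L" trials were misidentified as "H".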

In addition to their superior accuracy, Color readers were significantly faster than Monochromatic readers in correctly identifying OVAL characters (t(16) = 2.89, p = 0.01, d = 1.36) (Color readers: average = 2.73 sec, SD ± 1.37 sec; Monochromatic readers: average = 8.27 sec, SD ± 5.59 sec). These results show no evidence of a speed-accuracy trade-off with the addition of color (Fig 2E). Indeed, while Monochromatic readers correctly identified the letters within about 2 presentations of each character, Color readers required only one presentation on average (Fig 2F).

Experiment 4 – word/pseudo-word reading task

Finally, we assessed whether our participants’ letter identification capacity, following this short reading-specific training, extended to reading new, untrained words and pseudo-words composed of the 11 trained OVAL letters. Experimental stimuli had not been presented during training and were not limited to a specific list, so chance-level performance is close to zero. Monochromatic readers correctly read 61.6% of the words/pseudo-words on average (SD ± 25.0%), while Color readers correctly read 85.7% of the words/pseudo-words on average (SD ± 5%) (Fig 3A).

Fig 3. After only 6 hours of training, both groups could successfully read OVAL strings.

A. Color readers were overall more accurate than Monochromatic readers. B. Interaction Group*Word-Type (words; pseudo-words): both groups were more accurate at reading words than pseudo-words, but Color readers showed an advantage in reading pseudo-words compared to Monochromatic readers. C. Color readers were overall faster than Monochromatic readers in reading OVAL strings. D. Both groups read words faster than pseudo-words. E. Interaction Group*Stimulus-Length: both groups tended to read short strings faster than long ones. Monochromatic readers read long words significantly slower than Color readers. Post-hoc analyses also revealed that Monochromatic readers were significantly slower when reading long strings compared to short ones, while Color readers did not significantly differ in reading speed between short and long strings. F. Length effect on reading speed. This is a complete representation of speed performance when reading OVAL strings of various lengths, plotted separately for each group, and for words and pseudo-words. Monochromatic readers’ reading times increased drastically as the length of the OVAL strings increased. Color readers tended to show the same pattern, but less prominently.

https://doi.org/10.1371/journal.pone.0242619.g003

We entered the average accuracy rate for each participant into a mixed repeated-measures ANOVA with Group (Monochromatic vs. Color) as a between-group factor, and stimulus-type (words; pseudo-words) and stimulus-length (short, i.e., 2–3 characters; long, i.e., 4–5 characters) as within-participants factors. This ANOVA revealed a significant main effect of Group ([F(1,16) = 8.021, p = .012, η2p = 0.33]), due to overall higher accuracy in the Color compared to the Monochromatic training group. The main effect of stimulus-type was also significant ([F(1,16) = 8.3, p = .01, η2p = 0.34]), due to overall higher accuracy when reading words compared to pseudo-words across all participants (words: M = 76.14%, SD = 20.60%; pseudo-words: M = 70.91%, SD = 24.50%). Finally, the interaction between Group and stimulus-type was also significant ([F(1,16) = 10.96, p = .004, η2p = 0.41]). Post-hoc analyses revealed that between-group differences in performance resulted from readers in the Monochromatic training group being significantly less accurate than readers in the Color training group in reading pseudo-words (Color readers: M = 86.27%, SD = 5.9%; Monochromatic readers: M = 55.56%, SD = 26.65%; p = .012) (Fig 3B). No other factors were significant (all F-values < 2.02).

When analyzing reaction times, we observed that Monochromatic readers identified an OVAL string after 28.81 sec on average (about 7 repetitions of the word; SD ± 8.56 sec), while Color readers identified OVAL strings in 16.53 sec on average (about 4 repetitions; SD ± 5.53 sec) (Fig 3C). We entered individual average reaction times into a mixed repeated-measures ANOVA, with Group (Monochromatic vs. Color) as a between-group variable, and stimulus-type (words; pseudo-words) and stimulus-length (short, i.e., 2–3 characters; long, i.e., 4–5 characters) as within-participants factors. This ANOVA revealed a main effect of Group ([F(1,16) = 13.34, p = .002, η2p = 0.46]), due to overall faster responses in the Color training group compared to the Monochromatic training group. Importantly, these results confirm that the significantly more accurate reading outcome reported in the Color compared to the Monochromatic group was also accompanied by an advantage in reading speed. In addition, similarly to the ANOVA on accuracy, the main effect of stimulus-type was also significant ([F(1,16) = 9.74, p = .007, η2p = 0.38]), due to all participants being faster in reading words than pseudo-words (words: M = 21.38 sec, SD = 9.20 sec; pseudo-words: M = 24.14 sec, SD = 10.27 sec) (Fig 3D). The main effect of stimulus-length was also significant ([F(1,16) = 29.63, p < .001, η2p = 0.65]), due to faster responses for shorter than longer OVAL strings (short: M = 16.81 sec, SD = 6.60 sec; long: M = 32.60 sec, SD = 16.50 sec). Importantly, the interaction between stimulus-length and Group was also significant ([F(1,16) = 6.00, p = .029, η2p = 0.27]) (Fig 3E and 3F). Post-hoc analysis revealed that Color readers were faster than Monochromatic readers in reading long OVAL strings (p < .001).
Indeed, Monochromatic readers read long strings slower than short strings (short: M = 20.87 sec, SD = 16.50 sec; long: M = 43.31 sec, SD = 16.49 sec; p < .001), while Color readers did not take significantly longer to read long than short strings (short: M = 12.75 sec, SD = 4.54 sec; long: M = 21.83 sec, SD = 6.88 sec; p = .15). No other main effects or interactions were significant (all F-values < 0.151).

Discussion

In this work, we showed the efficacy of OVAL, a novel auditory-based orthographic system, for non-visual reading. OVAL comprises all letters of the Hebrew alphabet, designed as unique visual patterns of vertical lines and dots. These visual letters are then transformed into audition using the algorithm of a visual-to-auditory SSD, the EyeMusic, which creates “audemes”: unique auditory spatio-temporal patterns representing the exact shapes and locations of letters. Our results show that even before undergoing tailored OVAL training, literate sighted blindfolded participants with no previous training on the EyeMusic or any other SSD successfully discriminated OVAL characters. Participants then underwent 6 hours of OVAL-specific training, during which they were introduced to 11 OVAL letters and were trained to read words/pseudo-words of various lengths composed of the trained characters. After training, participants showed a significant improvement in the same OVAL letter discrimination task and successfully discriminated trained from untrained OVAL characters. Additionally, they successfully identified all the trained OVAL characters and read untrained OVAL words and pseudo-words. These results suggest that the OVAL auditory-reading script can be successfully learned in adulthood, after a relatively short specific training. Future studies with prolonged training and larger sample sizes can further assess the properties of OVAL reading, for instance investigating reading performance on the untrained half of the OVAL alphabet and on the full alphabet, the reading speed achieved after significantly longer training, text comprehension, as well as additional possible convergent or divergent properties of OVAL compared to print reading, such as invariance to location and font variations [61].
Intriguingly, our findings suggest that adding redundant features to OVAL letters (i.e., adding specific colors to OVAL characters, transformed into audition via timbre manipulations) is beneficial for the final learning outcomes. Specifically, we tested, in the exact same paradigm, two different groups of participants: one exposed only to a monochromatic version of OVAL and another exposed only to a colored version of OVAL (i.e., different letters always appeared in a specific color and were played by a specific musical instrument according to the EyeMusic transformation algorithm; Fig 1A and 1B). Results showed that participants from the Color group performed better than participants from the Monochromatic group in all of our experiments. Importantly, this enhancement was expressed via both faster reading rates and higher accuracy rates, thus with no speed-accuracy trade-off.

Auditory OVAL compared with tactile Braille: Insights for reading and SSD training

The main reading method for blind and low-vision individuals who cannot read via the visual modality is the Braille system, based on tactile identification of letters. However, while some people read Braille fluently and well, many find it difficult and cumbersome, if not impossible, to master. This is especially true for those faced with blindness in adulthood, who may lack the tactile sensitivity needed to read Braille, such as, but not exclusively, when blindness is in co-morbidity with diabetes, today one of the leading causes of late-onset blindness, which leads to decreased tactile acuity [62]. Prior to the invention of Braille, blind individuals were taught to read using a system of embossed letters [63] that was slow to decipher. Tactile reading thus became popular only with the invention of the Braille code, which aimed at creating a system that maximizes the properties of the tactile modality rather than a direct translation of visual print to a tactile format. Braille letters are based on a tactile code developed by a French army captain, designed specifically to allow reading in darkness. Louis Braille, blind himself, then simplified the original code by reducing the number of dots conveying each letter from 12 to 6, in order to fit a 2x3 grid that can be read with a stroke of a finger; this change considerably improved the speed and efficiency of tactile reading, establishing the Braille code known today [63, 64]. Braille reading peaked in the 1960's with a 50% Braille literacy rate among the blind, but decreased dramatically in the last few decades. Today it is estimated that only 5% to 12% of the blind population currently uses it as their main method for information acquisition, and numbers are estimated to be even lower when considering only the blind adult population [22].
This drop in the use of Braille is due to a combination of reasons, among which are the lack of proper Braille instruction, the high cost of Braille electronic devices, the demand for very high tactile acuity (but see [42]), and the inherently prolonged training needed to become a proficient reader. This drop in Braille literacy was accompanied by an increase in the use of text-to-voice converters and audiobooks. However, in contrast to these approaches, which make information accessible only through listening, Braille reading, and likewise the novel OVAL orthography, are active reading systems that require users to apply audeme-phoneme correspondences and understand the rules of spelling, similarly to print reading systems ([35, 36]; see [37] for more recent empirical evidence), which were shown to exert various brain and behavioral advantages compared to auditory-only methods [1, 41].

To our knowledge, OVAL is the first non-visual orthography that provides an easy-to-learn reading approach that can also be easily acquired in adulthood. We propose that the successful OVAL outcomes are at least partially due to the fact that this novel orthography is built on auditory discrimination of unique spatio-temporal sound patterns, which are discriminable from each other even before specific training on this orthography (Fig 2A). Such discriminability was especially high in the Color group, thus suggesting that even without any phonological association to the auditory patterns, basic auditory discrimination of the soundscapes benefited from the addition of redundant information, in this case timbre manipulations (see also [65] for another color-related advantage in soundscape discrimination). One explanation for these results might be that learning OVAL, unlike learning Braille, does not require stretching the limits of human perception, again particularly in the Color group, ultimately both reducing training time and increasing its efficacy: indeed, it is known that the auditory system has higher resolution in both time and space compared to the receptors on the skin [43]. Several previous studies conducted on blindfolded sighted Braille learners provide convergent results supporting these conclusions. For instance, in one study blindfolded sighted participants were able to discriminate between same and different Braille letter pairs, with error rates dropping from approximately 37% pre-training to 25% post-training, which included 5 days of extensive tactile stimulation in addition to 20 hours of formal Braille training [59]. In another study, blindfolded sighted children, before training, were able to perform a Braille discrimination task on 4 Braille characters with error rates of slightly less than 11%. After training, a final tactile discrimination test on these trained characters showed accuracy rates of ~90% [66].
In addition, a recent study showed that sighted individuals performing a Braille discrimination task on the whole Braille alphabet were able to discriminate 37.5% of the letters prior to Braille training, and reached 91.85% following more than 10 hours of training focused specifically on character discrimination [60]. These studies highlight that not only were pre-training discrimination rates with Braille substantially lower than those obtained via the OVAL auditory orthography, but crucially, even post-training discrimination rates were still lower than those obtained via OVAL, despite longer Braille training time. Note, however, that a recent work showed that tactile acuity thresholds were not directly related to Braille learning outcomes in blindfolded sighted adults [42]. This latter result suggests, in turn, that the impressively higher discriminability of OVAL compared to Braille characters may not be entirely due to the lower demands of the OVAL orthography on auditory sensitivity compared to Braille's demands on tactile sensitivity. An additional, not mutually exclusive, explanation for the more efficient learning of OVAL compared to Braille reading is that the several dimensions of variability among OVAL characters facilitated their discrimination. While Braille characters differ from each other only in the presence or absence of a dot in one of six possible locations [52], OVAL characters differ in shape, frequency and duration (as well as color in the Color group). The fact that the discrimination abilities of the Color group exceeded those of the Monochromatic group supports this possibility, and suggests that the more easily characters can be distinguished from one another, the more learning is facilitated. Note that the inclusion of all possible letter pairs in the current design imposes an imbalance in the task, as more trials were “different” than “same”. Future studies including only a partial sample of letter pairs, enabling a balance between same and different trials, could increase the sensitivity of the current result.

Finally, in line with our predictions, OVAL reading outcomes were more successful than reading outcomes after conversion of visual letters to SSD soundscapes [47, 49]. Specifically, with OVAL we achieved higher letter discrimination and related reading performance in a shorter training time compared to SSD-presented visual letters [47, 49]. These outcomes are particularly significant considering that, unlike letters-to-SSD readers, OVAL participants were naïve both to the concept of SSDs and to the letters’ shapes prior to training, and received no tactile feedback on such shapes during training to facilitate their learning process [47, 49]. Additionally, OVAL offers faster reading rates than SSD-presented visual letters, as OVAL characters are deliberately narrower than print letters and are thus translated into inherently shorter soundscapes. Finally, unlike OVAL character discrimination, which was high even before training and reached ceiling after training (Fig 2A), soundscapes of visual letters are not necessarily easy to discriminate, as some letters have very similar visual shapes, making them harder to disentangle via SSDs. Taken together, these results suggest that, similarly to what has been demonstrated for the Braille code [63], a reading system specifically tailored to the auditory modality leads to better reading outcomes than a simple transformation of visual letters into audition.

Future studies will need to assess whether other forms of auditory reading might elicit outcomes similar to those we reported for OVAL, such as hearing each letter’s name and creating a word from that sound (i.e., dictated spelling). Our prediction is that dictated spelling will elicit a set of behaviors and neural activations lying between the outcomes of audiobook listening and the outcomes of reading. Dictated spelling, unlike audiobooks, requires both decoding words through alphabetic coding and language understanding, namely the two cognitive capacities determining reading comprehension according to the seminal theory of the Simple View of Reading ([35, 36]; see [37] for more recent empirical evidence). However, while in OVAL reading each character has a unique spatio-temporal pattern, which we believe facilitates perception and speeds comprehension, reading via dictated spelling relies strongly on visual imagery of the print alphabet of reference (which might not be similarly available to all users), in turn potentially making word retrieval more effortful. Note that dictated spelling might nonetheless prove to be a promising supportive training solution to teach the OVAL alphabet, or other non-visual orthographies, mediating for instance the passage from the print visual alphabet to non-visual orthographies.

Taken together, these encouraging results obtained with OVAL also highlight the benefits that OVAL training can exert on other skills, both within and outside the SSD realm. First, because our participants were entirely naïve to SSDs before starting the experiment, the current results promote OVAL as a promising way to teach the principles of SSDs to those interested in mastering their use, such as people who are blind and visually impaired. Indeed, the introduction of color and shape through the teaching of a reading script can potentially be more engaging and enjoyable for users than the simple lines and geometric shapes typically used in the initial steps of SSD training [44]. This may in turn motivate participants to persevere with SSD training. Furthermore, learning OVAL teaches users to discriminate different notes on the same Y-axis column (i.e., through learning to differentiate characters with overlapping components, namely two characters’ elements appearing on the same column, such as in “P” and “H”; see Fig 1A), a task that has been shown to be challenging for SSD users [67]. Our results show that even participants from the Monochromatic group, who had no redundant cues to differentiate letters, could successfully differentiate letters with overlapping features on the Y axis (Fig 2G), thus supporting the notion that OVAL training facilitates the tuning of auditory discrimination skills. Another, non-mutually exclusive possibility is that OVAL letters are perceived as unified auditory spatial patterns, rather than as separate components at different locations on the Y axis, resulting in successful Y-axis discrimination of the characters' features. This in turn may suggest that OVAL learning trains holistic auditory processing and thus facilitates learning of more advanced SSD soundscapes that require the analysis and perception of multiple overlapping components, such as soundscapes of human faces.
Finally, because OVAL is based on Morse code, learning OVAL might also facilitate Morse learning, as letters in both systems use the same combinations of dots and lines, although some nuanced differences in the spatial arrangement of the characters’ components between the two systems are present.

The boosting power of color information carried by sound

In this experiment, we also tested the influence of adding redundant discriminative features to letters on the final learning outcomes, measured via letter discrimination, identification, and words/pseudo-words reading. To this aim, we compared two versions of the OVAL auditory orthography, tested in two different groups of participants, differing only in the presence/absence of color-specific information in OVAL letters, transformed into audition via timbre manipulations. Specifically, in the Monochromatic group all letters were white, while in the Color group each letter always appeared in a specific color (with three possible colors/timbres; Fig 1A).

While participants from both groups reached high accuracy in all experiments, the Color group performed systematically better. Specifically, participants from the Color group achieved success rates significantly higher than those achieved by participants in the Monochromatic group in all tasks. Reading speed was also significantly faster in the Color group, highlighting that accurate responses did not compromise reading speed. In certain tasks the Color group reached ceiling, thus making the test insensitive to their superior performance compared to the Monochromatic group (Fig 2). Therefore, with harder tasks, the advantage of the Color group over the Monochromatic group might increase even further. Facilitation of reading when adding redundant features has been shown for tactile pattern recognition, which appeared to be enhanced when texture was congruently added to shapes [52]. However, texture is difficult to manipulate systematically during tactile reading, and thus its contribution to reading was never directly assessed. The improvement in both reading speed and accuracy with the addition of redundant features to audemes, together with the ease of systematically adding such features in the auditory modality compared with the tactile reading system, further supports the conclusion that auditory reading is a promising new avenue. Future studies may further investigate the properties of such redundancy facilitation by probing its limits. For instance, in this work we used only three colors, and color manipulations involved whole letters (Fig 1A). Will the facilitation be further enhanced by adding more colors/timbres? Or will it be further enhanced if we use timbre to differentially sonify the most complex letter elements to increase letter discriminability, rather than full letters as was done here?
In other words, what is the highest number of colors/timbres, and which are the best letter attributes for such manipulation, so that a reading advantage is still observed before this redundant feature becomes detrimental to reading performance? Future studies may further investigate this intriguing issue by aiming at further differentiating OVAL characters from each other through shape manipulations, rather than only through the addition of redundant features such as color. In the current version of OVAL, for instance, letters were created with “descending” components, namely each component was positioned in a lower spatial position than the previous one, and was thus played with an increasingly lower tone (Fig 1A). Future studies could therefore manipulate letter shapes and create, for instance, not only “descending” characters as was the case here, but also “ascending” ones (i.e., characters whose consecutive components are positioned higher in space and are thus played with an increasingly higher tone). Additionally, more nuanced manipulations could be implemented, such as changing the vertical distance between the components in letters that have a dual component (i.e., two components positioned on the same spatial column, which are played together by the EyeMusic, such as “L” or “P”; see Fig 1A). These studies, in turn, will allow further uncovering of which manipulations (or combinations of manipulations) lead to the highest increase in OVAL character discriminability, ultimately also uncovering the threshold beyond which these manipulations lead to a decrement in reading performance.

Interestingly, our results showing enhanced reading performance when color (i.e., timbre) was added to audemes contrast with some of the results obtained when adding color to visually presented letters. A recent study [68] explored the effect of color on reading in dyslexic and normal readers, both adults and children. When each whole word was colored in a different color, reading was enhanced in all groups compared to a monochromatic condition, possibly because color strengthened the perceptual grouping of words. However, when each letter was assigned a different color, similarly to our experiment, reading speed and error rate returned to “baseline” levels, the same as in the monochromatic condition, for all four groups. The authors concluded that when every letter is colored differently, letters become “similarly dissimilar,” just as in the monochromatic condition: the enhancement obtained when each whole word is uniformly colored in its own color is cancelled once that within-word uniformity is broken [68].

This difference in the effect of redundant letter features between visual and OVAL reading may stem from the fact that individuals in the OVAL Color group learned the letters together with their assigned colors, thus binding letter and color from the outset. In vision, by contrast, individuals were introduced to colored letters after many years of fluent reading. Another possibility is that the addition of redundant character features facilitates letter discriminability specifically in the initial stages of learning to read a novel orthography; with further reading training, the impact of redundant features on OVAL reading outcomes may diminish. A third option is that, unlike in vision, adding redundant features to an auditory spatio-temporal pattern does not break the similarity principle, owing to the serial nature of OVAL presentation as opposed to the parallel presentation of printed words [69]. However, given the few studies in vision investigating the effect of adding redundant features such as color to letters, it is still too early to conclusively compare the outcomes of such additions on visual versus auditory reading. One interesting way to address this issue would be to test color-grapheme synesthetes and ask whether they, too, show better reading performance, similarly to OVAL readers, at least during reading acquisition.

In addition, as expected, our results demonstrate a “word superiority effect”: words were read faster and more accurately than pseudo-words in both the Monochromatic and Color OVAL groups, even though accuracy in pseudo-word reading dropped more in the Monochromatic than in the Color group (Fig 3B). This is in line with previous reports of faster and more accurate performance for words than pseudo-words in both visual and Braille reading [70]. Interestingly, the length effect of Braille reading, in which the time to correctly read a word grows significantly with every added syllable [70], was evidenced only in Monochromatic OVAL readers and not in Color OVAL readers: Color readers did not take significantly longer to read long than short OVAL strings, whereas Monochromatic readers did (Fig 3E and 3F). This may further confirm the discriminative advantage of Color OVAL, which seems to allow quicker processing of long strings. While the small pool of words, the short training period, and the relatively small sample size all limit the generalizability of these results, they suggest that the OVAL system may be a promising tool for investigating reading processes across sensory modalities, ultimately allowing sensory-specific processes to be disentangled from sensory-independent ones, an issue often debated in reading research for various reading components [71]. Past attempts in this direction compared performance between visual reading and Braille [71–73]. However, this research carried an inherent bias: sighted individuals were tested on print while blind readers were tested on Braille. Because blind readers cannot be tested on print and Braille is difficult for sighted readers to learn, no participant could be tested in the other modality.
The ease of learning auditory OVAL opens the intriguing possibility of directly comparing the reading performance of the same sighted individuals in two modalities, print and audition, via OVAL reading. Future studies could therefore continue to assess the efficacy of OVAL as a reading system, including direct comparisons between OVAL and visual print systems.

Can OVAL be used to assist visual reading?

The potential of an auditory orthography for reading goes beyond blindness and includes sighted populations, especially people with specific reading impairments such as dyslexia. Indeed, the core deficit underlying developmental dyslexia is still debated [74], but one of the most prominent views attributes dyslexia to an impairment in cross-modal visual-auditory integration between graphemes and phonemes [75]. However, there is still no agreed gold-standard rehabilitation procedure for this reading impairment. Current rehabilitation programs for dyslexia tap different skills, spanning from potentiation of auditory-phonological processing [76, 77] and explicit systematic training on letter-to-speech correspondences [76, 78, 79] to training of visuo-attentional abilities via action video games [80] or virtual reality [81]. These latter approaches are currently considered among the most promising avenues, as individuals with developmental dyslexia have been shown to suffer from specific visuo-spatial attention impairments that are evident even before children learn to read [82], the identification of which could enable early interventions. Crucially, it has been proposed that these impairments cause the documented higher susceptibility of dyslexics to crowding effects of letters and words [83], ultimately preventing the deployment of correct cross-modal grapheme-to-phoneme integration.

We suggest here that OVAL might aid dyslexic individuals in reading, perhaps even facilitating their acquisition of a reading system, by bypassing the visual-attention issues documented in dyslexic individuals [82–84], although some results suggest that auditory attention, and not only visual attention, might also be impaired in these individuals [85, 86].

One way to test this proposal would be to train dyslexic individuals to read via audition using OVAL, thereby separating the visual modality as the reading channel from the phonological system and directly testing the actual role of vision in reading impairments.

In addition, OVAL could be embedded within multisensory rehabilitation procedures pairing auditory and visual reading. Dyslexics have been shown to have deficits in multisensory integration [86]; however, results suggest that they show sluggish shifts in cross-modal attention from vision to audition, but not from audition to vision [86]. Multisensory training with OVAL might therefore be undertaken in an interleaved rather than a simultaneous fashion: dyslexic individuals would first be exposed to OVAL sentences and then to the same text presented visually (see also [86] for a similar suggestion). The success of such a regimen should be evaluated by the improvement in print reading alone at the end of the program. Future studies may test the feasibility of this approach and perhaps adapt the OVAL algorithm to the specific needs of this population, for instance by enlarging the spaces between letters [83] and by reducing the speed of letter presentation to match more closely the speed of reading in the visual modality, which is known to be slower in this population [87]. These studies should also address text comprehension abilities in addition to single-word reading speed. Importantly, other groups could also benefit from pairing OVAL with visual reading, such as people with low vision, who can currently only read by pairing vision with Braille, with all the issues highlighted above regarding learning the Braille code, as well as people with progressive visual-loss disorders such as Retinitis Pigmentosa. These individuals could be trained on visual and auditory OVAL in parallel, perhaps making OVAL training even quicker. Future studies may investigate these avenues and also test the extent to which pairing the print system with the OVAL system may slow down the process of visual deterioration.
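The adaptations suggested above (slower letter presentation, wider spacing between letters and words) amount to a small set of adjustable presentation parameters. The sketch below illustrates this idea; the class, parameter names, and default values are hypothetical and are not part of the actual OVAL software.

```python
from dataclasses import dataclass, replace

# Hypothetical presentation settings for adapting OVAL to a reader's
# needs; names and default values are illustrative assumptions only.
@dataclass(frozen=True)
class OvalSettings:
    letter_duration: float = 0.5    # seconds to sonify one letter
    inter_letter_gap: float = 0.1   # silent gap between letters
    inter_word_gap: float = 0.4     # silent gap between words

def word_playback_time(settings, n_letters):
    """Total playback time of a single word under the given settings."""
    return (n_letters * settings.letter_duration
            + (n_letters - 1) * settings.inter_letter_gap)

default = OvalSettings()
# Slower letters and wider inter-letter spacing, in the spirit of the
# adaptations suggested above for dyslexic readers:
adapted = replace(default, letter_duration=1.0, inter_letter_gap=0.3)
```

Because the settings are immutable and swapped wholesale, an interleaved training protocol could pair each auditory presentation with a visual one while varying only the parameters under study.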

Concluding remarks

Taken together, our results suggest that thanks to its ease of learning and simple application, the auditory script OVAL, and especially its multi-color version, offers an alternative for those who cannot read via the visual modality, such as blind and low-vision individuals, and could potentially serve as a rehabilitative tool for individuals with reading disabilities. We are currently developing an OVAL app that will allow OVAL reading features to be adapted to the specific needs of different populations/users, for instance through personalized control of scanning time to match each individual's reading speed. This includes the presentation speed of each letter (which can even be doubled) as well as the spacing between letters and between words, among other features. In addition, the OVAL script could be used to test the extent to which reading properties are sensory-independent or sensory-specific; its relatively easy learning makes it potentially suitable for developmental longitudinal studies as well. OVAL is also applicable to research on the reading network independent of visual exposure, and to research aiming to uncover potential differences due to serial versus holistic reading.

References

  1. 1. Dehaene S, Cohen L, Morais J, Kolinsky R. Illiterate to literate: behavioural and cerebral changes induced by reading acquisition. Nat Rev Neurosci [Internet]. 2015 Apr 18 [cited 2019 May 13];16(4):234–44. Available from: http://www.nature.com/articles/nrn3924 pmid:25783611
  2. 2. Julayanont P, Ruthirago D. The illiterate brain and the neuropsychological assessment: From the past knowledge to the future new instruments. Vol. 25, Applied Neuropsychology:Adult. Routledge; 2018. p. 174–87. https://doi.org/10.1080/23279095.2016.1250211 pmid:27841690
  3. 3. Dehaene S, Pegado F, Braga LW, Ventura P, Nunes Filho G, Jobert A, et al. How learning to read changes the cortical networks for vision and language. Science (80-). 2010;330(6009):1359–64. pmid:21071632
  4. 4. Castles A, Holmes VM, Neath J, Kinoshita S. How does orthographic knowledge influence performance on phonological awareness tasks? Q J Exp Psychol A Hum Exp Psychol. 2003;
  5. 5. Ventura P, Kolinsky R, Fernandes S, Querido L, Morais J. Lexical restructuring in the absence of literacy. Cognition. 2007;105(2):334–61. pmid:17113063
  6. 6. Dehaene S, Cohen L, Morais J, Kolinsky R. Illiterate to literate: Behavioural and cerebral changes induced by reading acquisition. Vol. 16, Nature Reviews Neuroscience. Nature Publishing Group; 2015. p. 234–44. https://doi.org/10.1038/nrn3924 pmid:25783611
  7. 7. Grant AC, Thiagarajah MC, Sathian K. Tactile perception in blind braille readers: A psychophysical study of acuity and hyperacuity using gratings and dot patterns. Percept Psychophys. 2000;62(2):301–12. pmid:10723209
  8. 8. Wong M, Gnanakumaran V, Goldreich D. Tactile spatial acuity enhancement in blindness: Evidence for experience-dependent mechanisms. J Neurosci. 2011 May 11;31(19):7028–37. pmid:21562264
  9. 9. Duñabeitia JA, Orihuela K, Carreiras M. Orthographic coding in illiterates and literates. Psychol Sci [Internet]. 2014 Jun 23 [cited 2020 Sep 20];25(6):1275–80. Available from: http://journals.sagepub.com/doi/10.1177/0956797614531026 pmid:24760145
  10. 10. Pegado F, Comerlato E, Ventura F, Jobert A, Nakamura K, Buiatti M, et al. Timing the impact of literacy on visual processing. Proc Natl Acad Sci U S A. 2014 Dec 9;111(49):E5233–42. pmid:25422460
  11. 11. Peereman R, Dufour S, Burt JS. Orthographic influences in spoken word recognition: The consistency effect in semantic and gender categorization tasks. Psychon Bull Rev. 2009;16(2):363–8. pmid:19293108
  12. 12. Ziegler JC, Ferrand L. Orthography shapes the perception of speech: The consistency effect in auditory word recognition. Psychon Bull Rev. 1998;5(4):683–9.
  13. 13. Dehaene S, Pegado F, Braga LW, Ventura P, Nunes Filho G, Jobert A, et al. How learning to read changes the cortical networks for vision and language. Science (80-). 2010 Dec 3;330(6009):1359–64. pmid:21071632
  14. 14. Monzalvo K, Dehaene-Lambertz G. How reading acquisition changes children’s spoken language network. Brain Lang. 2013 Dec 1;127(3):356–65. pmid:24216407
  15. 15. van Atteveldt N, Formisano E, Goebel R, Blomert L. Integration of Letters and Speech Sounds in the Human Brain. Neuron [Internet]. 2004 Jul 22 [cited 2020 Sep 20];43(2):271–82. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0896627304003964 pmid:15260962
  16. 16. Kosmidis MH, Folia V, et al. Semantic and phonological processing in illiteracy. J Int Neuropsychol Soc. 2004;10(6):818–27.
  17. 17. Morais J, Bertelson P, Cary L, Alegria J. Literacy training and speech segmentation. Cognition. 1986;24(1–2):45–64.
  18. 18. Kosmidis MH, Zafiri M, et al. Literacy versus formal schooling: Influence on working memory. Arch Clin Neuropsychol. 2011;26(7):575–82.
  19. 19. Pattamadilok C, Lafontaine H, Morais J, Kolinsky R. Auditory Word Serial Recall Benefits from Orthographic Dissimilarity. Lang Speech. 2010;53(3):321–41. pmid:21033650
  20. 20. Kolinsky R, Monteiro-Plantin S, Mengarda EJ, Grimm-Cabral L, Scliar-Cabral L, Morais J. How formal education and literacy impact on the content and structure of semantic categories. Trends Neurosci Educ [Internet]. 2014 [cited 2020 Sep 20];3:106–21. Available from: http://dx.doi.org/10.1016/j.tine.2014.08.001
  21. 21. Huettig F, Mishra RK. How Literacy Acquisition Affects the Illiterate Mind A Critical Examination of Theories and Evidence. Lang Linguist Compass [Internet]. 2014 Oct 1 [cited 2020 Sep 20];8(10):401–27. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1111/lnc3.12092
  22. 22. Scheithauer MC, Tiger JH. A COMPUTER‐BASED PROGRAM TO TEACH BRAILLE READING TO SIGHTED INDIVIDUALS. J Appl Behav Anal. 2012 Jun 1;45(2):315–27. pmid:22844139
  23. 23. Ryles R. The Impact of Braille Reading Skills on Employment, Income, Education, and Reading Habits. J Vis Impair Blind. 1996;90(3):219–26.
  24. 24. Silverman AM, Bell EC. The association between braille reading history and well-being for blind adults. J Blind Innov Res. 2018;8(1).
  25. 25. Nahar L, Jaafar A, Ahamed E, Kaish ABMA. Design of a Braille Learning Application for Visually Impaired Students in Bangladesh. Assist Technol [Internet]. 2015 Jul 3 [cited 2020 May 6];27(3):172–82. Available from: http://www.tandfonline.com/doi/full/10.1080/10400435.2015.1011758 pmid:26427745
  26. 26. Grant AC, Thiagarajah MC, Sathian K. Tactile perception in blind Braille readers: a psychophysical study of acuity and hyperacuity using gratings and dot patterns. Percept Psychophys. 2000;62(2):301–12. pmid:10723209
  27. 27. Oshima K, Arai T, Ichihara S, Nakano Y. Tactile sensitivity and braille reading in people with early blindness and late blindness. J Vis Impair Blind. 2014 Mar 1;108(2):122–31.
  28. 28. Stevens JC, Foulke E, Patterson MQ. Tactile Acuity, Aging, and Braille Reading in Long-Term Blindness. J Exp Psychol Appl. 1996;2(2):91–106.
  29. 29. Harris JA, Harris IM, Diamond ME. The topography of tactile learning in humans. J Neurosci. 2001 Feb 1;21(3):1056–61. pmid:11157091
  30. 30. Knowlton M, Wetzel R. Braille reading rates as a function of reading tasks. J Vis Impair Blind. 1996;90(3):227–36.
  31. 31. Legge GE, Madison CM, Mansfield JS. Measuring Braille reading speed with the MNREAD test. Vis Impair Res. 1999 Jan 1;1(3):131–45.
  32. 32. Nolan CY, Kederis CJ. Perceptual factors in braille word recognition. American Foundation for the Blind Research Series No. 20. New York: American Foundation for the Blind; 1969.
  33. 33. Legge GE, Madison C, Vaughn BN, Cheong AMY, Miller JC. Retention of high tactile acuity throughout the life span in blindness. Percept Psychophys. 2008 Nov;70(8):1471–88. pmid:19064491
  34. 34. Argyropoulos V, Padeliadu S, Avramidis E, Tsiakali T, Nikolaraizi M. An investigation of preferences and choices of students with vision impairments on literacy medium for studying. Br J Vis Impair. 2019 May 1;37(2):154–68.
  35. 35. Gough PB, Tunmer WE. Decoding, Reading, and Reading Disability. Remedial Spec Educ [Internet]. 1986 Jan 18 [cited 2020 Sep 20];7(1):6–10. Available from: http://journals.sagepub.com/doi/10.1177/074193258600700104
  36. 36. Hoover WA, Gough PB. The simple view of reading. Read Writ. 1990 Jun;2(2):127–60.
  37. 37. Kendeou P, Savage R, Broek P. Revisiting the simple view of reading. Br J Educ Psychol [Internet]. 2009 Jun 1 [cited 2020 Sep 20];79(2):353–70. Available from: http://doi.wiley.com/10.1348/978185408X369020 pmid:19091164
  38. 38. Horowitz-Kraus T, Vannest JJ, Holland SK. Overlapping neural circuitry for narrative comprehension and proficient reading in children and adolescents. Neuropsychologia. 2013;51(13):2651–62.
  39. 39. Wilson SM, Bautista A, McCarron A. Convergence of spoken and written language processing in the superior temporal sulcus. Neuroimage. 2018;171:62–74.
  40. 40. Deniz F, Nunez-Elizalde AO, Huth AG, Gallant JL. The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality. J Neurosci. 2019 Sep 25;39(39):7722–36. pmid:31427396
  41. 41. Berl MM, Duke ES, Mayo J, Rosenberger LR, et al. Functional anatomy of listening and reading comprehension during development. Brain Lang. 2010;114(2):115–25.
  42. 42. Bola Ł, Siuda-Krzywicka K, Paplińska M, Sumera E, Hańczur P, Szwed M. Braille in the sighted: Teaching tactile reading to sighted adults. PLoS One. 2016;11(5):e0155394. pmid:27187496
  43. 43. Kim J-K, Zatorre RJ. Generalized learning of visual-to-auditory substitution in sighted individuals. Brain Res. 2008;1242:263–75. pmid:18602373
  44. 44. Abboud S, Hanassy S, Levy-Tzedek S, Maidenbaum S, Amedi A. EyeMusic: Introducing a “visual” colorful experience for the blind using auditory sensory substitution. Restor Neurol Neurosci [Internet]. 2014 Jan 1 [cited 2018 Apr 4];32(2):247–57. Available from: https://content.iospress.com/articles/restorative-neurology-and-neuroscience/rnn130338 pmid:24398719
  45. 45. Meijer PBL. An experimental system for auditory image representations. Biomed Eng IEEE Trans. 1992;39(2):112–21. pmid:1612614
  46. 46. Reich L, Szwed M, Cohen L, Amedi A. A ventral visual stream reading center independent of visual experience. Curr Biol. 2011;21(5):363–8. pmid:21333539
  47. 47. Sigalov N, Maidenbaum S, Amedi A. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience. Neuropsychologia [Internet]. 2016 Mar 1 [cited 2019 Dec 12];83:149–60. Available from: https://www.sciencedirect.com/science/article/pii/S0028393215302244 pmid:26577136
  48. 48. Cohen L, Dehaene S, Naccache L, Lehéricy S, Dehaene-Lambertz G, Hénaff M-A, et al. The visual word form area. Brain [Internet]. 2000 Feb [cited 2019 Apr 8];123(2):291–307. Available from: https://academic.oup.com/brain/article-lookup/doi/10.1093/brain/123.2.291 pmid:10648437
  49. 49. Striem-Amit E, Cohen L, Dehaene S, Amedi A. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind. Neuron [Internet]. 2012 Nov 8 [cited 2017 Sep 26];76(3):640–52. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23141074 pmid:23141074
  50. 50. Ellis K. Some Experiments on Reading Aids for the Blind. Radio Electron Eng. 1963;25(2):188–90.
  51. 51. Loomis JM. On the tangibility of letters and braille. Percept Psychophys [Internet]. 1981 Jan [cited 2019 May 6];29(1):37–46. Available from: http://www.springerlink.com/index/10.3758/BF03198838 pmid:7243529
  52. 52. Millar S. Aspects of size, shape and texture in touch: Redundancy and interference in children’s discrimination of raised dot patterns. J Child Psychol Psychiatry. 1986;27(3):367–81. pmid:3733917
  53. 53. Pinna B, Deiana K. New conditions on the role of color in perceptual organization and an extension to how color influences reading. Psihologija. 2014;47(3).
  54. 54. Röder B, Teder-Sälejärvi W, Sterr A, Rösler F, Hillyard SA, Neville HJ. Improved auditory spatial tuning in blind humans. Nature. 1999;400(6740):162–6.
  55. 55. Voss P, Lassonde M, Gougoux F, Fortin M, Guillemot JP, Lepore F. Early- and late-onset blind individuals show supra-normal auditory abilities in far-space. Curr Biol. 2004;14(19):1734–8.
  56. 56. Amadeo MB, Campus C, Gori M. Impact of years of blindness on neural circuits underlying auditory spatial representation. Neuroimage. 2019;191:140–9.
  57. 57. Wan CY, Wood AG, Reutens DC, Wilson SJ. Early but not late-blindness leads to enhanced auditory perception. Neuropsychologia. 2010;48(1):344–8.
  58. 58. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. In: Behavior Research Methods. Psychonomic Society Inc.; 2007. p. 175–91. https://doi.org/10.3758/bf03193146 pmid:17695343
  59. 59. Kauffman T, Théoret H, Pascual-Leone A. Braille character discrimination in blindfolded human subjects. Neuroreport. 2002;13(5):571–4.
  60. 60. Debowska W, Wolak T, Nowicka A, Kozak A, Szwed M, Kossut M. Functional and structural neuroplasticity induced by short-term tactile training based on braille reading. Front Neurosci. 2016 Oct 13;10(OCT):460. pmid:27790087
  61. 61. Zhou Z, Vilis T, Strother L. Functionally separable font-invariant and font-sensitive neural populations in occipitotemporal cortex. J Cogn Neurosci. 2019 Jan 1;31(7):1018–29. pmid:30938590
  62. 62. Tobin MJ, Hill EW. Is literacy for blind people under threat? Does braille have a future? Br J Vis Impair [Internet]. 2015 Sep 4 [cited 2020 May 6];33(3):239–50. Available from: http://journals.sagepub.com/doi/10.1177/0264619615591866
  63. 63. Jiménez J, Olea J, Torres J, Alonso I, Harder D, Fischer K. Biography of Louis Braille and Invention of the Braille Alphabet. Surv Ophthalmol. 2009 Jan 1;54(1):142–9. pmid:19171217
  64. 64. Sadato N. How the Blind “See” Braille: Lessons From Functional Magnetic Resonance Imaging. Neurosci [Internet]. 2005 Dec 29 [cited 2020 May 19];11(6):577–82. Available from: http://journals.sagepub.com/doi/10.1177/1073858405277314 pmid:16282598
  65. 65. Levy-Tzedek S, Riemer D, Amedi A. Color improves “visual” acuity via sound. Front Neurosci [Internet]. 2014 Nov 11 [cited 2020 May 11];8(OCT):358. Available from: http://journal.frontiersin.org/article/10.3389/fnins.2014.00358/abstract pmid:25426015
  66. 66. Millar S. Early Stages of Tactual Matching. Perception [Internet]. 1977 Jun 25 [cited 2020 May 21];6(3):333–6. Available from: http://journals.sagepub.com/doi/10.1068/p060333 pmid:866090
  67. 67. Brown DJ, Simpson AJR, Proulx MJ. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli? Front Psychol. 2015;6(OCT). pmid:26528202
  68. 68. Pinna B, Deiana K. On the Role of Color in Reading and Comprehension Tasks in Dyslexic Children and Adults. Iperception [Internet]. 2018 May 9 [cited 2020 May 11];9(3):204166951877909. Available from: http://journals.sagepub.com/doi/full/10.1177/2041669518779098
  69. 69. Marcet A, Perea M, Baciero A, Gomez P. Can letter position encoding be modified by visual perceptual elements? Q J Exp Psychol [Internet]. 2019 Jun 1 [cited 2020 May 11];72(6):1344–53. Available from: http://journals.sagepub.com/doi/10.1177/1747021818789876 pmid:29969979
  70. 70. Veispak A, Boets B, Ghesquière P. Parallel versus sequential processing in print and braille reading. Res Dev Disabil. 2012 Nov 1;33(6):2153–63. pmid:22776823
  71. 71. Perea M, García-Chamorro C, Martín-Suesta M, Gómez P. Letter Position Coding Across Modalities: The Case of Braille Readers. PLoS One. 2012 Oct 2;7(10). pmid:23071522
  72. 72. Hughes B, McClelland A, Henare D. On the Nonsmooth, Nonconstant Velocity of Braille Reading and Reversals. Sci Stud Read [Internet]. 2014 Mar 4 [cited 2020 May 11];18(2):94–113. Available from: http://www.tandfonline.com/doi/abs/10.1080/10888438.2013.802203
  73. 73. Fischer-Baum S, Englebretson R. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille. Cognition. 2016 Aug 1;153:161–74. pmid:27206313
  74. 74. Vidyasagar TR. Visual attention and neural oscillations in reading and dyslexia: Are they possible targets for remediation? Neuropsychologia. 2019 Jul 1;130:59–65. pmid:30794841
  75. 75. Stein J. The current status of the magnocellular theory of developmental dyslexia. Neuropsychologia. 2019 Jul 1;130:66–77. pmid:29588226
  76. 76. Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA. Modality-independent coding of spatial layout in the human brain. Curr Biol [Internet]. 2011 Jun 7 [cited 2017 Sep 27];21(11):984–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21620708 pmid:21620708
  77. 77. Strong GK, Torgerson CJ, Torgerson D, Hulme C. A systematic meta-analytic review of evidence for the effectiveness of the ‘Fast ForWord’ language intervention program. J Child Psychol Psychiatry [Internet]. 2011 Mar 1 [cited 2020 May 21];52(3):224–35. Available from: http://doi.wiley.com/10.1111/j.1469-7610.2010.02329.x pmid:20950285
  78. 78. Gabrieli JDE. Dyslexia: A new synergy between education and cognitive neuroscience. Vol. 325, Science. American Association for the Advancement of Science; 2009. p. 280–3. https://doi.org/10.1126/science.1171999 pmid:19608907
  79. 79. Peterson RL, Pennington BF. Developmental dyslexia. In: The Lancet. Lancet Publishing Group; 2012. p. 1997–2007.
  80. 80. Franceschini S, Gori S, Ruffino M, Viola S, Molteni M, Facoetti A. Action video games make dyslexic children read better. Curr Biol. 2013 Mar 18;23(6):462–6. pmid:23453956
  81. 81. Pedroli E, Padula P, Guala A, Meardi MT, Riva G, Albani G. A psychometric tool for a virtual reality rehabilitation approach for dyslexia. Comput Math Methods Med. 2017;2017:7048676.
  82. 82. Franceschini S, Gori S, Ruffino M, Pedrolli K, Facoetti A. A causal link between visual spatial attention and reading acquisition. Curr Biol. 2012 May 8;22(9):814–9. pmid:22483940
  83. 83. Zorzi M, Barbiero C, Facoetti A, Lonciari I, Carrozzi M, Montico M, et al. Extra-large letter spacing improves reading in dyslexia. Proc Natl Acad Sci U S A. 2012 Jul 10;109(28):11455–9. pmid:22665803
  84. 84. Vidyasagar TR, Pammer K. Dyslexia: a deficit in visuo-spatial attention, not in phonological processing. Trends Cogn Sci. 2010 Feb 1;14(2):57–63. pmid:20080053
  85. 85. Facoetti A, Trussardi AN, Ruffino M, Lorusso ML, Cattaneo C, Galli R, et al. Multisensory spatial attention deficits are predictive of phonological decoding skills in developmental dyslexia. J Cogn Neurosci [Internet]. 2010 May 4 [cited 2020 May 11];22(5):1011–25. Available from: http://www.mitpressjournals.org/doi/10.1162/jocn.2009.21232 pmid:19366290
  86. 86. Harrar V, Tammam J, Pérez-Bellido A, Pitt A, Stein J, Spence C. Multisensory integration and attention in developmental dyslexia. Curr Biol. 2014 Mar 3;24(5):531–5. pmid:24530067
  87. 87. Cunningham AE, Stanovich KE. What reading does for the mind. Am Educ. 1998;22(1–2):8–15.