From science to technology: Orientation and mobility in blind children and adults

The last quarter of a century has seen a dramatic rise in interest in the development of technological solutions for visually impaired people. However, despite the availability of many devices, user acceptance is low: not only do visually impaired adults rarely use these devices, but the devices are also too complex for children. The majority of these devices have been developed without considering either the brain mechanisms underlying the deficit or the natural ability of the brain to process information. Most of them use complex feedback systems and overwhelm sensory, attentional and memory capacities. Here we review the neuroscientific studies on orientation and mobility in visually impaired adults and children and present the technological devices developed so far to improve locomotion skills. We also discuss how we think these solutions could be improved. We hope that this paper will be of interest to neuroscientists and technologists and will provide a common background for developing new science-driven technology that is better accepted by visually impaired adults and more suitable for children with visual disabilities.


Introduction
The field of human locomotion has grown steadily over the past few decades (Andriacchi and Alexander, 2000; Bruijn et al., 2012; Glasauer et al., 2009). When we move through the environment, our main goal is to find our way, i.e. to achieve good spatial navigation (Long and Giudice, 2010; VandenBos, 2013). Advances in our understanding of how the brain processes navigational skills have been attained by means of neurophysiological, psychophysical, neuropsychological, neuroimaging, and computational modeling studies. The principles that underlie human spatial navigation are now starting to become clearer. It is now evident, for example, that the human brain uses egocentric or allocentric coordinates to obtain different perspectives of the environment (Avraamides et al., 2004). These perspectives comprise amodal spatial representations of the surroundings, i.e. representations that do not necessarily maintain specific properties of the modality through which the signal is transmitted, and which can therefore be used to accomplish successful spatial navigation (Loomis et al., 2013). Similarly, the development of locomotor abilities is now more clearly elucidated (Uchiyama et al., 2008). Locomotion, for example, seems to play an important role in the genesis of psychological changes (Anderson et al., 2013). One aspect that has received relatively less attention from the research community is how locomotion is processed and learned in children and adults with visual impairments. Visual information is fundamental for spatial processing, and its absence directly impacts the development of locomotion skills. Supporting this idea, various studies have demonstrated that the absence of vision impacts locomotor skills (e.g. Nakamura, 1997; Rieser et al., 1986).
Understanding how these skills develop in individuals with visual impairments would provide significant benefits for the development of rehabilitation and sensory substitution systems. In the context of visual disability, orientation and mobility indicate different properties of human locomotion and environmental exploration. Orientation refers to the ability to understand the spatial properties of an environment and to be aware of one's position and its relationship with the surroundings; mobility, on the other hand, indicates the capability of moving efficiently and safely in an environment (e.g. in a city using public transport) unaccompanied (Giudice and Legge, 2008; Novi, 1998; Soong et al., 2001). Over the past decade, many groups have developed technological solutions to improve these skills in people with visual disabilities. However, only a small part of this technology is actually accepted and used by the visually impaired population. In this review we provide an overview of some of the most important factors of locomotor ability in children and adults, both with and without visual disabilities. We also review the most popular devices developed for improving spatial navigation in people with visual impairments. Our goal is to stress the importance of creating a link between scientific studies and the development of technology, in order to produce devices that are accepted by users. Most of the technology developed to date, for example, does not take into consideration the needs of visually impaired people and provides information that is neither useful nor immediately understandable. In addition, these devices are usually tested only on small samples of visually impaired people, and their efficacy is usually evaluated only in a qualitative manner. Moreover, differences depending on the onset of visual impairment have been largely ignored.
Many quantitative studies carried out by neuroscientists provide important information about how our brain processes sensorimotor signals for orientation and mobility. We believe that these studies could inform the development and testing of new technological solutions. We hope that this review will stimulate neuroscientists and technology researchers to carefully consider the contribution that each discipline can provide to the other, with the goal of improving the quality of life of visually impaired individuals. In addition, we hope that this review will provide a stimulus for neuroscientists to start a wider investigation into the development of locomotor skills in children with visual impairments.
Firstly, we will discuss orientation and mobility skills in adults with and without disabilities. A distinction can be made between the role that multisensory and sensorimotor signals have in people with and without disabilities. For example, it is now evident that, compared to sighted people, visually impaired individuals make different use of auditory, haptic and vestibular cues to perform efficient walking. We will then present a list of the technological devices for supporting orientation and mobility in visually impaired adults. The devices are subdivided into two categories: technological canes and robots for locomotion; we will discuss the positive and negative aspects of both categories, presenting the technological features of the solutions developed so far.
Following this, we will discuss the development of locomotor skills in children with and without disabilities. We will stress how locomotion plays a crucial role in the genesis of psychological changes and how a delay in locomotion development can impact on cognitive spatial and social skills of the visually impaired child (Anderson et al., 2013;Piaget, 1952a,b;Uchiyama et al., 2008). Finally, we will present the few devices which have been developed for children so far. These can be described as pre-canes, virtual games and advanced tools. At the end of the review we will discuss the course which we believe should be followed to develop systems that may be better accepted by visually impaired adults and that are more suitable for younger users.

Orientation and mobility skills in adults with and without visual disability
When navigating through space, our brain takes advantage of mental representations based on sensory signals that provide information about how our movement is accomplished in relation to our surroundings (e.g. visual, auditory) or absolutely in space (e.g. vestibular and proprioceptive). The use of egocentric or allocentric coordinates gives rise to either "route" or "survey" representations: the former is based on the observer's viewpoint, whereas the latter assumes a map-like perspective in which the observer is aware of the spatial relationships between elements of the surroundings, which are thus used as references. Following this nomenclature, spatial navigation can be differentiated into route or inferential navigation, which rely on egocentric and allocentric coordinates respectively (Loomis et al., 1993; Schmidt et al., 2013; Thinus-Blanc and Gaunet, 1997). Route navigation is generally well accomplished by blind people, as they can rely on kinematic strategies relative to experienced movement by using an idiothetic reference. On the other hand, research on inferential navigation in blind individuals has provided inconsistent results, with several studies showing impaired performance (Herman et al., 1983; Rieser et al., 1986; Thinus-Blanc and Gaunet, 1997; Veraart and Wanet-Defalque, 1987). In these cases, the task requires complex inferential processes (e.g. providing spatial links between previously explored locations), and early blind individuals make more errors than late blind and sighted individuals (Rieser et al., 1986). However, comparable performance between blind and sighted individuals has been found in similar tasks (Thinus-Blanc and Gaunet, 1997), and some studies have shown better performance in survey spatial cognition tasks (Tinti et al., 2006); for recent reviews see Long and Giudice (2010) and Schinazi et al. (2016). Moreover, although studies focusing on spatial memory (e.g. the triangle completion task) did not find consistent differences between sighted and non-sighted individuals (Klatzky et al., 1990, 1997; Loomis et al., 1993; Thinus-Blanc and Gaunet, 1997), such tasks are often considered less difficult than spatial inference tasks, which are more closely related to orientation issues. These and other inconsistent results in the literature on blind individuals might be related to methodological differences as well as individual differences within the groups of tested subjects (Thinus-Blanc and Gaunet, 1997). Regarding the latter, physical activity, education and rehabilitation training, especially when related to orientation and mobility (e.g. participation in O&M courses), might underlie inter-individual differences in perception and particularly in spatial navigation (Thinus-Blanc and Gaunet, 1997).
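To make the task demands concrete, the path integration probed by triangle-completion paradigms can be sketched in a few lines of code. This is a minimal geometric illustration (the function name and the example legs are ours, not taken from the studies cited above) of what a walker must compute from idiothetic cues alone: integrate the outbound legs, then derive the distance and turn needed to return to the start.

```python
import math

def homing_vector(legs):
    """Integrate a sequence of (turn_deg, distance) movements and
    return the distance and turn needed to head back to the start."""
    x = y = heading = 0.0
    for turn_deg, dist in legs:
        heading += math.radians(turn_deg)
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    home_dist = math.hypot(x, y)
    # direction back to the origin, expressed relative to current heading
    home_bearing = math.degrees(math.atan2(-y, -x) - heading)
    return home_dist, (home_bearing + 180) % 360 - 180

# Two legs of an equilateral triangle: walk 3 m, turn 120 degrees, walk 3 m.
dist, turn = homing_vector([(0, 3.0), (120, 3.0)])
```

In the behavioral task, sighted and blind participants are led along the two outbound legs and must then produce this homing response without external references, which is why errors grow with the inferential load of the route.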
From a perceptual perspective, efficient spatial navigation depends on a continuous update of self-position while moving, achieved by constantly monitoring self-motion; this provides the means to accomplish successful orientation. In sighted individuals, visual and vestibular cues to self-motion combine when presented simultaneously (Butler et al., 2010; Fetsch et al., 2009; Prsa et al., 2012), and one can influence the other when stimulation is temporally adjacent but not superimposed (Brandt et al., 1974; Cuturi and MacNeilage, 2014), supporting the idea of a strong functional link between the two sensory processes. However, less is known about how vestibular information is processed in blind individuals, in whom the lack of vision might impair the development and properties of vestibular capabilities. Congenitally blind individuals show impairments in inferential navigation based only on vestibular information, whereas their performance in route navigation tasks is comparable to that of sighted individuals, showing intact kinematic strategies (Seemungal et al., 2007). Nevertheless, vestibular thresholds for low-frequency roll tilts have been found to be lower in a mixed group of congenitally and late blind individuals, suggesting that compensatory mechanisms based on vestibular cues might take place (Moser et al., 2015). On the other hand, since roll-tilt thresholds in vestibular loss patients are close to normal (Valko et al., 2012), a contribution from somatosensory or proprioceptive cues cannot be excluded.
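The visuo-vestibular cue combination reported above is commonly formalized as reliability-weighted (maximum-likelihood) averaging, the standard model tested in studies such as Fetsch et al. (2009). A minimal sketch of that model, with illustrative numbers rather than data from the cited studies:

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Maximum-likelihood fusion of two noisy heading estimates:
    each cue is weighted by its inverse variance (its reliability)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    fused_mean = w_a * mean_a + (1 - w_a) * mean_b
    # fused variance is never worse than the better single cue
    fused_var = 1 / (1 / var_a + 1 / var_b)
    return fused_mean, fused_var

# Hypothetical heading estimates in degrees: a precise visual cue (variance 1)
# and a noisier vestibular cue (variance 4) pull the combined estimate
# toward the more reliable signal.
heading, var = fuse_estimates(10.0, 1.0, 16.0, 4.0)
```

Under this scheme, removing vision does not merely delete one input: it forces the system to re-weight the remaining vestibular and somatosensory estimates, which is one way to frame the compensation results discussed above.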
Generally, blind individuals might compensate for their disability by using the remaining auditory sense to localize surrounding objects and themselves (Després et al., 2005). In this case, hearing provides the basis of spatial perception in both near and far space, as it covers a large spatial field. Therefore, auditory references might improve spatial navigation in blind people, as they permit allocentric perception of the surroundings (Loomis et al., 2001). Nonetheless, pure auditory perception in sighted and non-sighted individuals shows inaccuracies, compared to the visual modality, that need to be taken into account (Kolarik et al., 2015). For instance, perceived distance was found to be compressed relative to its physical value when sighted subjects responded both by verbal indication and by walking the distance to the target. Similarly, early blind individuals show impaired absolute reproduction of distance based on auditory cues (Cappagli et al., 2015). Although humans can perceive only a few auditory events simultaneously because of masking induced by different levels of source loudness (Bregman, 1990), some blind individuals take advantage of echolocation, i.e. the ability to compute one's own location and the position of objects by sensing the echoes reflected after actively producing a sound. Additional self-motion while echolocating introduces temporal dynamics into the processing of binaural cues, thus improving the perceptual organization of auditory streams (Jacquet et al., 2006) and facilitating orientation in space (Wallmeier and Wiegrebe, 2014), demonstrating the functional role of proprioceptive and vestibular inputs in enhancing auditory perception in blind people.
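The geometry behind echolocation-based ranging is straightforward: the distance to a reflecting surface is half the round-trip travel time of the echo multiplied by the speed of sound. A minimal sketch (the constant and the example delay are illustrative, not measurements from the cited studies):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def echo_distance(delay_s):
    """Distance to a reflecting surface from the round-trip echo delay.
    The sound travels to the obstacle and back, hence the division by 2."""
    return SPEED_OF_SOUND * delay_s / 2

# A 20 ms delay between the emitted click and its echo corresponds
# to an obstacle about 3.43 m away.
d = echo_distance(0.020)
```

The perceptual problem faced by an echolocator is of course far harder than this arithmetic, since the relevant delays are a few milliseconds and must be extracted from overlapping binaural streams, which is where the self-motion benefits described above come into play.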
In the near space of a blind individual, haptic perception serves to accomplish fundamental tasks, such as object recognition (Davidson et al., 1974) and exploration of the peripersonal space (Gori et al., 2011; Strumillo, 2010; Ungar, 2000). Based on the combination of tactile, proprioceptive, and kinesthetic information, haptic perception allows blind individuals to perceive features of objects and build spatial representations (Morash et al., 2012). Moreover, haptic spatial-orientation cues improve postural stability more rapidly in blind individuals than in sighted people, indicating that adaptation-based processes might take place as a consequence of visual loss (Schieppati et al., 2014). Although touch provides only sequential information and might be limited to egocentric representations of space, it has recently been shown that sighted individuals can haptically learn spatial maps (Brayda et al., 2015) comparable to vision-based maps (Giudice et al., 2011), thus possibly allowing for the formation of allocentric representations (Morash et al., 2012).
Finally, audio-tactile integration, together with sensorimotor feedback, helps visually impaired individuals to walk efficiently. To this end, sensory feedback from footsteps is particularly salient to blind people: haptic exploration through the foot's plantar surface can be used to probe the ground (Patla et al., 2004), while the sound of steps provides rhythmic information that might improve locomotion (Molloy-Daugherty, 2013). Nonetheless, the absence of visual information affects locomotion, as revealed by gait analyses in congenitally blind individuals showing slower walking speed, cautious posture, shorter stride length and longer duration of stance compared to sighted and late blind individuals (Nakamura, 1997). Some of these differences in gait performance (e.g. walking speed) are reduced by using the white cane or a guide dog (Clark-Carter et al., 1986), suggesting that additional sensory feedback and guiding cues improve locomotion (Nakamura, 1997).
Overall, multisensory integration provides a complete spectrum of sensory information that aids orientation and mobility. It has been proposed that diverse sensory and cognitive sources (including linguistic information) might flow into an amodal spatial image underlying several navigational tasks (e.g. spatial updating). Such a spatial image represents an externalized image of a stimulus that can be located in any direction with respect to the observer and scaled to the environment (Loomis et al., 2013), thus allowing for a representation of the surroundings that comprises both egocentric and allocentric coordinates. Nonetheless, the absence of vision from an early age has been shown to be related to deficits in perception based on sensory modalities other than vision, supporting the idea that multisensory calibration strongly benefits from vision (Gori et al., 2008).

Technological devices for adults
As we have seen in the previous section, visually impaired people show specific impairments in locomotion and environmental exploration. An improvement in locomotion, and especially in orientation skills, could guarantee visually impaired individuals a higher level of independence and greater opportunities. In the last few decades, many technological devices have been developed with the aim of improving the locomotion of people with visual impairments. These devices can be divided into two categories: technological canes (see Table 1 and Table s1) and robots for mobility (see Table 2 and Table s2).

Technological canes
The first class consists of a set of sensors and multisensory displays that are mounted on the classic white cane, but which can sometimes be removed from the cane and used independently. Table 1 reports the most popular technological canes available so far, along with a short description of their features and functions (see Table s1 for a list of technological canes with a short description of their features). In general, these devices acquire information about their surrounding environment, i.e. the presence and the distance of obstacles in a range between 0.5 and 8 m, and transmit this information to the user in different ways. For example, the LaserCane (Benjamin, 1974; Bolgiano and Meeks, 1967) produces three distinct audio signals for specific distances and vibrates when there is an object in front of the user. By contrast, the Mini-Radar (Dakopoulos and Bourbakis, 2010) provides audio messages when an object is detected and gives information about its distance from the user.

Table 1. Summary of technological canes and information about the sensors used, the feedback produced and the participants tested.

Name | Sensors | Feedback | Participants tested
LaserCane (Bolgiano and Meeks, 1967) | Laser | Non-verbal audio, vibrotactile | Not found
RecognizeCane (Scherlen et al., 2007) | Infrared, brilliance, water | Not developed | Not found
Mini-Radar (Dakopoulos and Bourbakis, 2010) | Ultrasound | Non-verbal audio, vibrotactile | Not found
K-Sonar Cane (Bay Advanced Technologies ltd, 2016) | Ultrasound | Non-verbal audio, vibrotactile | Not found
Miniguide (Jacquet et al., 2006) | Ultrasound | Non-verbal audio, vibrotactile | Not found
iSONIC (Kim et al., 2009) | Ultrasound, color, brightness | Verbal audio, vibrotactile | Not found
Tom Pouce (Hersh et al., 2006) | Infrared | Non-verbal audio, vibrotactile | Sighted, visually impaired
Télétact (Hersh et al., 2006) | Infrared, laser | Non-verbal audio, vibrotactile | Sighted, visually impaired
CyARM (Ito et al., 2005) | Ultrasound | Haptic | Not found
EyeCane (Maidenbaum et al., 2014) | Infrared | Non-verbal audio, vibrotactile | Sighted, visually impaired
Navigation Aid for Blind People (Bousbia-Salah et al., 2011) | Ultrasound | Vibrotactile | Sighted, visually impaired
Kinect Cane (Takizawa et al., 2012) | Kinect (infrared) | Vibrotactile | Not found
GuideCane (Borenstein and Ulrich, 1997) | Ultrasound | Kinesthetic | Sighted, visually impaired
RoJi (Shim and Yoon, 2002) | Ultrasound, infrared, antenna | Non-verbal audio | Sighted, visually impaired
There are also systems that provide vibratory feedback, for example the Miniguide (Jacquet et al., 2006), whose vibration rate is related to the distance of the object from the user. With this device, as with many others (e.g. the iSONIC (Kim et al., 2009)), the audio or vibrotactile feedback can be combined with verbal feedback. A detailed list of the sensors used and the feedback provided to users by these devices is reported in Table 1 (other classifications have been provided in Giudice and Legge (2008)). Although some of these devices, such as the K-Sonar (Bay Advanced Technologies ltd, 2016), are available on the market, many of them are still at an engineering/concept stage with little or no user testing. Table 1 also reports the number of participants used for testing these technological solutions. As can be seen, testing in human subjects is rare (especially in blind individuals), and in many cases only blindfolded sighted individuals participated in the testing. We consider this a significant limitation of these devices: given the differences in cognitive strategies observed in individuals with a visual impairment, as described in the previous section, there is no guarantee that the performance observed in sighted blindfolded participants reflects that of visually impaired individuals. Another limitation of these systems is that they act only within a restricted spatial range (between 0.5 and 8 m), and although they provide information about the presence of obstacles, they cannot determine the type of obstacle or whether it is moving. Even when the range of action of the cane is extended, these devices do not provide the direct tactile and auditory cues given by the interaction between the cane and the ground. The indirect audio or vibratory feedback given by these devices therefore has to be interpreted correctly and quickly in order to avoid obstacles.
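The distance-to-feedback mapping used by devices such as the Miniguide, in which vibration speeds up as an obstacle approaches, can be sketched as follows. This is a hypothetical linear mapping for illustration only; actual devices use proprietary, often non-linear scales, and all parameter values here are assumptions.

```python
def vibration_rate_hz(distance_m, min_d=0.5, max_d=8.0,
                      min_hz=1.0, max_hz=20.0):
    """Map an obstacle distance (0.5-8 m, the sensing range of the
    canes reviewed here) to a vibration pulse rate: closer = faster."""
    d = min(max(distance_m, min_d), max_d)  # clamp to the sensing range
    # linear interpolation: fastest pulsing at min_d, slowest at max_d
    frac = (max_d - d) / (max_d - min_d)
    return min_hz + frac * (max_hz - min_hz)

rate = vibration_rate_hz(0.5)  # nearest obstacle -> fastest pulsing
```

Whatever the exact mapping, the user must decode this one-dimensional signal in real time while walking, which is precisely the interpretive load discussed above.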
In addition, since no rehabilitative training is provided to improve locomotion, subjects must continue to rely on these devices throughout their lives. Finally, since sensing is driven by the user's own movement, these devices provide only local spatial information, without the global spatial cues about the environmental setting that vision provides. These may be some of the reasons why these devices are not extensively used by the visually impaired population and why visually impaired people require more powerful and flexible systems.

Robots for mobility
The need for a more flexible and independent system for guiding visually impaired people has led to the development of robots for mobility. Contrary to technological canes, robots for mobility consist of a (generally richer but bulkier) set of sensors and multisensory displays mounted on an external and independent support, which usually moves on wheels. Just like technological canes, robots acquire information about the surrounding environment (typically the presence and the distance of obstacles in a range between 0.5 and 8 m) and usually convey this information through a dictionary of messages. Table 2 reports the sensors used and the type of feedback provided by the robots for mobility (see Table s2 for a list of robots for mobility with a short description of their features). Like technological canes, they use many sensors (e.g. vision systems, sonar, differential GPS (Global Positioning System), dead reckoning systems and a portable GIS (Geographic Information System)) and provide feedback in various forms (e.g. verbal audio, electrocutaneous, Braille key). These robots have mainly been developed to guide visually impaired people indoors and, in some cases, outdoors (MELDOG MARK (Tachi et al., 1981)). Some others have been specifically developed to help users when shopping in supermarkets (RoboCart (Kulyukin et al., 2002)) or for elderly visually impaired people (VA-PAMAID (Rentschler et al., 2003)). Many of these robots have an autopilot mode in which they try to define the optimal path and actively guide the user to the destination. Unlike technological canes, these robots have the advantage of guiding the user completely: users do not need to move a cane and interpret its feedback, and can rely entirely on the robotic guidance in a passive way.
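At its core, the autopilot mode just described reduces to path planning on a map of free and occupied space. The following is a minimal sketch using breadth-first search on an occupancy grid, a generic textbook approach and not the planner of any specific robot reviewed here:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # reconstruct the path by walking the parent links backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A 3x3 room with a wall segment: the planner routes around the obstacle.
room = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = shortest_path(room, (0, 0), (0, 2))
```

Real platforms add obstacle inflation, path smoothing and continuous replanning from sensor updates, but the underlying shortest-path computation is of this kind.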
The limitation of such guidance is that robotic platforms are still not adaptable and intuitive, and the route and the information provided are limited compared to the amount of cues that vision provides to the sighted individual. With a few exceptions that provide users with different levels of active control over the aid (Borenstein and Ulrich, 1997; Shim and Yoon, 2002), most of the technology developed so far reduces the user's active control, with the risk of encouraging passive guidance by the robot. This might reduce the user's independent orientation in favor of more passive behavior reliant on the device. On the other hand, more interactive and active guidance can be provided by a guide dog. Supporting this point, it has been observed that the handler takes most of the initiative when performing actions related to spatial navigation, except for obstacle avoidance, in which the dog seems to override the blind user (Naderi et al., 2001). Nonetheless, the usage rate of guide dogs amongst blind people remains quite low (Guiding Eyes for the Blind, 2016). This could be partly due to individuals' personal feelings about dogs and the effort required to take care of them.
Generally, robots are good at showing the route from the current location to the destination, but they cannot walk up and down stairs. Robots are also usually superior to portable navigation systems in obstacle avoidance and in providing physical support to keep balance while walking, but they are significantly inferior in portability. For example, tests carried out with HITOMI (Mori et al., 1994) indoors and outdoors, involving three visually impaired subjects, suggest that the device can provide useful information (especially in open spaces) but has severe limitations due to its poor portability. Unfortunately, in this case too, the evaluation is based on engineering and technological aspects, and only a few devices have been tested with visually impaired people. Just as for technological canes, the samples used to test robots were generally made up of 2-3 subjects. In many cases, the only reports of the use of these devices come from the robots' own websites.

Orientation and mobility in children with and without visual impairments
Early independent mobility is a fundamental skill for healthy growth. The development of a child's cognitive functions and comprehension is influenced by his or her first sensorimotor experiences (Anderson, 1955; Piaget, 1952a,b). Locomotion is not simply a maturational precursor to psychological changes; it plays a crucial role in their genesis (Uchiyama et al., 2008). For example, crawling provides experiences that lead to developmental changes in several domains (e.g. visual and auditory motion processing, prehension and stability) and that influence cognitive functions (e.g. object interception and interaction with others). Locomotion can be considered a self-teaching process, in which a child consciously and unconsciously acquires information about himself while developing cognitive and motor skills. For example, if a child wants to play with something, he has to accomplish different sub-goals in order to reach the object of interest (the final goal). First, he moves because he is attracted by something that he wants to explore (curiosity); second, in order to reach the object of interest (e.g. a toy or a person), he needs to ignore possible distractors and localize it in space (using spatial cognition and attention); then the child must implement a motor program in order to coordinate his body to move and reach the specific goal (employing intention and motor coordination); finally, he receives feedback on his actions, supporting learning mechanisms. At a later stage of development, a child experiences the same object/reality through different senses, and thus gradually develops multisensory integration, which is definitively acquired around 8 years of age (Gori et al., 2008). Moreover, by moving around, a child begins to develop the notions of space and time. Locomotion is therefore a crucial event in human life (Piaget, 1952a,b).
During the fetal and newborn periods, a child moves with spontaneous, rhythmic and alternating arm and leg movements. During normal development, this first step is followed by rolling and crawling (first year), then pulling up to a standing position and balancing upright. Running, jumping and more sophisticated forms of movement, but also simply walking, require a huge amount of practice in order to prepare the muscles and the vestibular apparatus (Robinson et al., 2013). Prechtl and colleagues (Prechtl et al., 2001) showed that, during the first weeks of life, there are no significant effects of early blindness on spontaneous motor activity. This result is probably due to the limited role of vision at this early age; around 2 months post-term, however, blind infants show a significant delay in head control (Prechtl et al., 2001). The authors report that, at the age of unsupported sitting and standing, blind infants kept their heads bent forward, suggesting that vision plays a fundamental role in vestibular calibration and that the full development of vestibular functions is delayed when the normal calibration exerted by vision on the proprioceptive and vestibular systems is lacking. Moreover, at around 2 months, visually impaired children exhibit abnormally exaggerated fidgety movements and prolonged periods of ataxia in postural control (Prechtl et al., 2001). Other studies (Fazzi et al., 2002; Levtzion-Korach et al., 2000) report delayed onset of different motor milestones, such as sitting, crawling, standing and walking, in visually impaired infants. In particular, Levtzion-Korach and colleagues (Levtzion-Korach et al., 2000) found significant delays in pre-walking motor skills in visually impaired children but no differences compared to sighted children in sitting from a supine position. Generally, sighted children tend to move around and explore even before being able to walk.
Blind children, if not adequately stimulated to move, show a delay in this innate process (Bigelow, 1992; Sampaio et al., 1989). First locomotion skills tend to appear later in blind children (around 18-24 months) than in sighted children (within 12 months), although with high individual variability (Perez-Pereira and Conti-Ramsden, 2001; Sampaio et al., 1989). Houwen and co-workers (Houwen et al., 2008) compared normally sighted children (aged 7-10 years) with severely and moderately visually impaired age-matched peers, finding that the latter performed poorly on static and slow dynamic balance tasks, but not on a fast dynamic balance task. As vision has an important role in keeping balance, the gait of a blind person tends to be unsteady (Nakamura, 1997). An interesting study by Hallemans and colleagues (Hallemans et al., 2011) investigated age-related changes in locomotor performance in sighted (age range 3 years 2 months to 46 years) and visually impaired people (age range 1 year 3 months to 44 years). They found movement performance and age to be correlated, with better motor performance at older ages. However, the two groups differed in the spatial and temporal parameters of the gait cycle: blind individuals show a prolonged duration of the double support phase (i.e. both feet on the ground), generally considered an indication of balance problems.
Reduced orientation and mobility can result from various processes in which the visual modality is involved during development. Firstly, lack of vision impacts the typical link between perception and action, which is fundamental in constructing a mental representation of the surrounding space (Anderson, 1955; Gori et al., 2010) as the basis of efficient navigation. Secondly, lack of vision impacts sensorimotor integration: in the sighted child, visual information is integrated with sensory information from the vestibular apparatus, proprioceptive feedback, hearing and touch to coordinate locomotion (Strumillo, 2010). Thirdly, lack of vision impacts cognitive processes associated with locomotion. Vision is indeed the forerunner of a huge cascade of cognitive and sensory processes, such as body representation and body balance. For instance, peripheral optic flow induces greater postural sway (Higgins et al., 1996) and emotional response (Uchiyama et al., 2008) in children with previous experience of locomotion compared to those without. This evidence could explain why infants with no experience of locomotion are impaired in postural stability (Anderson et al., 2013) and in orientation and mobility in general. Moreover, Bertenthal and Campos (Bertenthal and Campos, 1990) suggest that the absence of locomotor experience is related to the late development of wariness of heights, suggesting that the co-occurrence of visual, vestibular (and somatosensory) cues to self-motion during locomotion provides the means to discriminate the presence of a visual cliff (Anderson et al., 2013; Bertenthal and Campos, 1990). Fourthly, lack of vision requires the individual to rely on the remaining sensory and motor signals. As vision is the dominant sense in acquiring spatial cognition, it calibrates (educates) the other senses in space representation (Gori et al., 2008; Pasqualotto and Newell, 2007).
Due to their lack of vision, blind children have to exploit the other senses in order to explore and build a representation of the space around them. According to Gibson (Gibson, 1969), during development children come to abstract amodal features, suggesting that blind infants might eventually perceive space using their intact senses. For this reason, the earlier a blind child is encouraged to use the other senses to encode external information, the better the spatial representation that will serve as the basis for developing his or her orientation and mobility skills. However, the information capacity of vision generally exceeds that of the non-visual senses, even when these are integrated.
Lack of vision is associated with delayed or impaired development of spatial capabilities. The impairment is more severe when the visual disability emerges at birth, when multisensory communication is fundamental for the sensorimotor feedback loop that contributes to the development of spatial representations (Gori et al., 2010). Although some compensation might occur in visually impaired individuals (Fiehler et al., 2009; Pasqualotto and Proulx, 2012), children and adults with visual impairment show deficits in auditory and haptic spatial skills (Cappagli et al., 2015; Finocchietti et al., 2015a; Gori, 2015; Gori et al., 2014, 2010; Pasqualotto and Newell, 2007; Vercillo et al., 2015) and typically do not perform as well in imagery tasks (Kerr, 1983; Marmor, 1977; Röder and Rösler, 1998; Röder et al., 1997). In blind children, delays in the initiation of locomotion are mainly due to the lack of vision rather than to a sensorimotor deficit per se. Vision and locomotion influence each other in such a way that an impairment in one modality might induce a delay in the development of the other. Vision is a great motivator for exploration, as everything in a baby's field of view is an incentive to move, and the lack of visual input dampens this natural interest in exploration. A deficient acquisition of locomotor and spatial abilities in blind children might in turn influence the acquisition of social skills, considering that awareness of the surroundings has a relevant role in successful engagement in activities with peers. Moreover, while in the sighted child this exploratory behavior is encouraged by auditory-visual association, the blind child relies entirely on auditory cues, which are less spatially localizable in the environment (Levtzion-Korach et al., 2000). This produces less motivation for the blind child to move, thus limiting motor development, and may lead to impaired motor, social and cognitive abilities.
In order to improve spatial representation in blind children, technological devices supporting spatial navigation need to take into account the importance of locomotion and exploration of the surroundings during the early stages of development.

Technological devices for children
As previously introduced, vision is essential in building complex spatial representations (Burr and Gori, 2012; Gori et al., 2012a,b; Gori et al., 2011). Lack of vision at an early age results in impairments in complex space representation, reflected in specific orientation and mobility impairments (Perez-Pereira and Conti-Ramsden, 2001; Sampaio et al., 1989). As these representations are built during the early years of sensorimotor development, early intervention is fundamental in order to provide effective rehabilitation. Despite the presence of many devices to improve mobility in visually impaired adults, these systems are not suitable for children: most of them convey sensory information through a complex language, and their size is inappropriate for children. Nonetheless, an early therapeutic intervention with such devices might reduce the impairments of visually impaired children described in the previous paragraph. The few technological solutions for improving orientation and mobility in visually impaired children can be classified into three categories: powered mobility devices, pre-canes and virtual reality technology (see Table 3 for a list of devices for children with a short description of their features). Children start to use powered mobility devices around three years of age (Tefft et al., 2007). There are mobility devices for toddlers that take advantage of self-body movement (bio-driven). These devices exploit leg (Chen et al., 2010) or upper-body (Larin et al., 2012) movements in order to be driven. However, although they are extremely important for disabilities such as cerebral palsy or other motor disabilities, they are not widely used by visually impaired children. A second class of tools specifically developed for blind children are kiddie canes and adapted canes, also known as pre-canes or alternative mobility devices (AFB American Foundation for the Blind, 2016).
These tools exploit the natural skills and tendencies of children. They are easy to use and their feedback is highly intuitive. While playing (e.g. pushing a doll's stroller) children learn to probe the environment. Locomotion then provides information (e.g. through audio and haptic feedback) about obstacles and other environmental properties, such as drop-offs and changes in ground texture along the route. White canes and pre-canes work in a similar way: children learn to use the information about their surroundings, conveyed by the devices, to maintain orientation in space while walking. Training with the pre-cane is very important in order to be able to use the white cane later (AFB American Foundation for the Blind, 2016). A third class of devices adopts virtual reality technology (e.g. ABES (Connors et al., 2013) or BLINDAID (Symposium et al., 2015)) in order to improve the navigational skills of blind people. These video games exploit either a single sensory modality (audio or haptic) or a combination of both. The aim of these devices is to develop spatial cognition skills in order to improve outdoor mobility. They are useful because they improve navigational skills in a safe but quite realistic environment. Finally, there are some more sophisticated devices for older children (e.g. BlindSquare (BlindSquare, 2016) and the Ultrabike (Sound Foresight Technology Limited, 2016)) that can be used once the visually impaired person is confident in their mobility. For example, the Ultrabike (Sound Foresight Technology Limited, 2016) is a bike that uses ultrasound to detect obstacles and sends tactile cues back to the cyclist through vibrations in the handlebars.

Table 3
Summary of technological devices for children and short description.

Pre-canes (AFB American Foundation for the Blind, 2016): The white cane and pre-cane provide audio and haptic feedback about obstacles and other environmental details.
ABES (Audio-Based Environments Simulator) (Connors et al., 2013): A virtual game. This software enables a blind user to navigate through a virtual representation of a real space in order to train his/her orientation and mobility skills. Relying only on audio-based cues, users gather relevant spatial information about a building's layout. This allows the user to develop an accurate spatial cognitive map.
BLINDAID (Symposium et al., 2015): A virtual environment (VE) system that helps blind people to learn about new environments. The system exploits a Phantom haptic interface and a three-dimensional audio system. It gives a blind user the means to interact with virtual maps using a haptic stylus, which provides feedback based on the objects s/he is interacting with in the virtual environment. The user is also provided with audio feedback.
HOMERE (Lecuyer et al., 2003): A multimodal system which allows blind people to navigate through virtual environments. It delivers different sensory information to the user during navigation: force feedback corresponding to the manipulation of a virtual blind cane, thermal feedback corresponding to the simulation of a virtual sun, and auditory feedback.
BlindSquare (BlindSquare, 2016): An iPhone app that uses the phone's GPS to localize the user and deliver, via voice synthesizer, information about the surrounding environment.
Ultrabike (Sound Foresight Technology Limited, 2016): A bike for visually impaired people, using ultrasound to detect obstacles and transmitting haptic feedback to the cyclist.
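The principle behind such ultrasound-based feedback can be sketched in a few lines. The function below is a minimal illustrative mapping from a distance reading to a vibration intensity; the function name, the linear ramp and the 4 m range are our own assumptions for the sketch, not the Ultrabike's actual firmware behavior:

```python
def distance_to_vibration(distance_m, max_range_m=4.0):
    """Map an ultrasound distance reading (meters) to a vibration
    duty cycle between 0.0 (off) and 1.0 (strongest).

    Closer obstacles produce stronger vibration; invalid readings or
    readings beyond the sensor's useful range produce no feedback.
    All thresholds are illustrative.
    """
    if distance_m <= 0 or distance_m >= max_range_m:
        return 0.0  # nothing detected within range: no feedback
    # Linear ramp: intensity approaches 1.0 near contact, 0.0 at max range
    return 1.0 - distance_m / max_range_m
```

In a real device this value would drive a vibration motor in the handlebars; the essential design point is that the mapping is monotonic and immediately interpretable, so no complex "language" has to be learned.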
Overall, children with visual impairments have considerably fewer opportunities to move independently than typically developing children of the same age. This lack of physical activity also causes poor muscular development which, in turn, contributes to low postural tone. The lack of such early independent mobility may result in a delay in their locomotor, social, emotional, perceptual, cognitive, and language skills. Technological solutions could therefore help to reduce these impairments but, to date, few mobility devices for visually impaired children have been developed and tested.

Discussion
Recently, there has been an enormous increase in studies of locomotion, associated with the development of new technological solutions for improving locomotion skills. In the present review, we highlight how neuroscientific work has contributed to our understanding of orientation and mobility in sighted and visually impaired individuals (see Table 4 for a summary of studies performed in visually impaired children and adults). We also present the technological solutions developed so far and, in doing so, highlight some limits of current neuroscientific and technological work. In the following paragraphs we also discuss the main challenges that hamper the diffusion of the proposed technology.

Little attention to children
The first limit of neuroscientific research and technological development is that both have focused mainly on the typical adult. From the neuroscientific point of view, studies of locomotion have given little attention to children, and to children and adults with visual impairments. From a technological point of view, the technology developed so far for visually impaired individuals is not adaptable to children and not widely accepted by adults. Effective devices for the visually impaired population could improve their quality of life and social inclusion. Moreover, since cortical plasticity (Renier et al., 2014; Burton, 2003) is stronger during the first years of life, this technology should be adopted as soon as possible, to facilitate the development of new skills.
A framework that might improve the accessibility of these devices to children is the creation of more immediate and natural systems. For example, Braille (which dates back to 1829) and the long cane, the most popular devices used by the visually impaired population, are easy-to-use devices. Both Braille and the typical cane are active solutions, conveying sensory information to the brain through a natural and immediate approach: the brain interprets the body movement and the resulting audio or tactile signal. Once these techniques are learned through specific training, everyday use improves their efficacy and mastery. This makes their use intuitive, so they can also be learned by children. However, many technological solutions are too complex and not flexible enough. Some of them overload human attentional capability; others communicate through a language which is too complex. The technological cane is more immediate than the robotic platforms we have described above. On the other hand, it provides less information, and the information it does provide is local (not global). It also requires active control (the person actively moves the hand to interpret the environment) without passive guidance. By contrast, with the robotic platform the user is passively driven towards the goal. However, robotic platforms are still neither adaptable nor intuitive, and their navigation skills are limited. The limitations of robotic platforms emerge in many everyday activities, such as walking up and down stairs and crossing the street, in which a guide dog is more competent.
The complexity of these devices is also an obstacle that limits children's access to these systems. Most of the available technology collects information from the environment (e.g. visual images) and then translates the signal into another kind of sensory signal (e.g. sounds or vibrations). To be interpreted, these signals must be coded in a "new language" that the user needs to learn, a process that requires good attentional capabilities. Both these aspects make this technology difficult for adults to use, and impossible for children, who cannot learn a complex language with their limited attentional resources. We think that the computational power and plasticity of the brain should be placed at the center of development in order to produce devices that are more readily accepted and used by adults and children.

Table 4
Studies divided by topic of interest investigated in visually impaired people. Asterisks indicate the studies that did not test blind children but which provide indirect references by comparing early and late visually impaired individuals.
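To make the nature of such a "new language" concrete, the sketch below illustrates a simplified image-to-sound code of the kind used by visual-to-auditory sensory substitution devices (a generic, illustrative variant, not any specific product's algorithm): the image is scanned column by column, row position is mapped to pitch and pixel brightness to loudness. Every name and parameter here is our own assumption:

```python
def image_to_tones(image, f_min=200.0, f_max=5000.0):
    """Translate a grayscale image (list of rows, pixel values 0-255)
    into a left-to-right sequence of tone sets: one set per column,
    one (frequency_hz, amplitude) pair per non-black pixel.

    Row index maps to pitch (top rows = high frequencies) and
    brightness maps to amplitude, an arbitrary code the listener
    must learn to decode back into spatial layout.
    """
    n_rows = len(image)
    n_cols = len(image[0])
    sweep = []
    for col in range(n_cols):
        tones = []
        for row in range(n_rows):
            brightness = image[row][col]
            if brightness > 0:
                # Fraction of the frequency range: top row -> 1.0, bottom -> 0.0
                frac = 1.0 - row / (n_rows - 1) if n_rows > 1 else 1.0
                freq = f_min + frac * (f_max - f_min)
                tones.append((freq, brightness / 255.0))
        sweep.append(tones)
    return sweep
```

Even this toy version makes the attentional cost evident: a single small image becomes a dense stream of tones that must be held in memory and decoded, which is exactly the burden that children's limited attentional resources cannot sustain.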

Understanding the brain mechanisms that subtend the deficit for technological development
Another framework that might prove helpful in thinking about new devices would be to create a stronger link between neuroscience and technology. The problem of interdisciplinary collaboration is a general one, present in many fields (Ledford, 2015). In the visual disability field, most technological solutions neither start from neuroscientific results nor are they tested on users after development. We think that development should start from science and work its way to technology. It is important to test users in order to understand the brain mechanisms behind the impairment, instead of merely substituting one sensory signal with another. Indeed, it has been shown that visually impaired individuals use different frames of reference from sighted people. The absence of visual cues in visually impaired people impacts on the development of allocentric frames of reference, which are global and immediate. This hampers the ability to integrate the absolute coordinates of body movement (e.g. vestibular and proprioceptive), i.e. egocentric coordinates, with relative coordinates (e.g. auditory) informing us how we are moving in relation to the surroundings, i.e. allocentric coordinates. The development of a device for locomotion has to consider the reduced use, or even the absence, of an allocentric frame of reference in blind individuals. Therefore, in order to improve independent spatial navigation, the link between these two strategies of locomotion (allocentric vs. egocentric) should be strengthened, or even created if absent. Similarly, in the development of robotic platforms there has been a significant focus on obstacle avoidance. However, for the orientation and mobility of visually impaired people, the biggest problem is not the presence of obstacles, but the lack of adequate sensory feedback that can provide a good spatial representation of the environment.
Therefore, the development of assistive technology should focus mainly on improving orientation skills rather than mobility, as it is orientation that is linked to an adequate and complete perceptual representation of space.
To conclude, we think that only by understanding the real problem behind the deficit is it possible to develop more effective solutions tailored to it.

Little testing on users
Another significant limit of the technology developed so far for individuals with visual impairment is that most of it stops at technological development, without testing on humans. The few devices that have been tested on humans involved only a few subjects, and only rarely did these subjects have visual impairments. This problem is not limited to this field: if we consider, for example, the domain of robotic exoskeletons, we observe the same phenomenon. Jarrassé and colleagues (Jarrassé et al., 2014), for example, reviewed robotic exoskeleton platforms, showing that many of these systems are not tested on patients, or are tested only at a pre-clinical level (Table 1 in their paper). This is a major issue considering that, in order to reach users, devices have to be validated on large samples of patients through standardized clinical trials. Moreover, little is known about the acceptance of technological devices specifically among blind users. Interviews conducted with a few individuals have drawn an interesting profile of blind users' acceptance, showing a predominant need for independence, especially in learning how to use the devices, and criticism of the fact that most of the technology they use (e.g. smartphones) is developed with sighted users in mind (Sachdeva and Suomi, 2013). Nevertheless, individual differences might arise, especially in relation to the onset and severity of the disability, as these factors might induce perceptual differences that influence senses other than vision (Cappagli et al., 2015; Gori et al., 2010).

Substitutive and not rehabilitative systems
Another important aspect to consider is that most of the available devices aim at substituting the visual sense without providing the rehabilitation training that drives neural plasticity. The term "rehabilitative technology" is sometimes used to refer to aids that help people recover their functioning after injury or illness. Rehabilitation increases plasticity, which leads to structural and functional changes in the brain that are necessary, for example, to reorganize cortical maps (Johnston, 2009). "Assistive technology", by contrast, may be as simple as a magnifying glass to improve visual perception or as complex as a computerized communication system. Unlike assistive technology, rehabilitative technology allows the device to be removed after rehabilitation, so that it need not be used for the rest of one's life. However, most of the technological solutions, being assistive, do not exploit this capability.

General limits of robotics today
There are currently several other factors that hamper the wide diffusion of personal robotic systems. Assuming that the issue of user acceptance can be solved by proper system engineering, three major hurdles still remain. These are: i) the problem of robot perception, ii) the lack of specific certification procedures and of safety standards and iii) the ethical and legal issues of autonomous machines.
Robot perception (Fitzpatrick et al., 2008; Kemp et al., 2007) is one of the first and most basic problems robot designers need to solve, as well as a very actively researched topic. The quality of a robot's ability to self-localize and map its surrounding environment greatly depends on its sensors. State-of-the-art systems use LIDARs (e.g. those used in experimental autonomous automobiles), which are generally priced above €50,000. In recent years, affordable structured-light sensors have become available, thanks to a technology push from the video gaming industry. These sensors, however, still fail to yield comparable performance and reliability, are not suitable for all lighting conditions, and generally do not work outdoors. In general, as the sensor suite grows in sophistication, so do the system complexity and overall cost. These two effects combined contribute to the difficulty of creating market-worthy mobile robotic systems.
Another factor is that, until very recently, specific certification procedures did not exist. Under the previously available general-purpose standards, most systems would fail certification, which would make the commercialization of these products extremely difficult and hardly profitable. A significant effort to resolve this problem is being made by the ISO, which in 2012 started developing a specific standard for personal robotics. The ISO 13482 standard (International Organization for Standardization, 2014; Jacobs and Virk, 2014), published in 2014, states guidelines, recommendations and requirements for personal care robots. In the near future this document is likely to serve as the basis for personal robotics certification procedures.
Closely related to the previous issue is that of safety (Vasic and Billard, 2013). Robotic systems capable of navigating autonomously are typically rather complex, and it is therefore extremely difficult to guarantee that they will not fail under any possible circumstance. This aspect is particularly important for blind users: critical system failures would be especially problematic for users accustomed to relying on their system for orientation and mobility. One current trend is to design systems capable of "failing gracefully", i.e. of recovering, at least partially, their functionality after severe malfunctions.
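The "failing gracefully" principle can be illustrated with a simple fallback chain, sketched below. This is a hypothetical illustration (the behavior names and health signals are our own assumptions, not any deployed system's design): when the primary sensing source fails, the system degrades to coarser but safer behaviors rather than stopping dead and stranding the user:

```python
def plan_action(lidar_ok, odometry_ok):
    """Choose a navigation behavior by degrading through fallbacks.

    Illustrative only: a real system would monitor many more health
    signals and attempt to recover partial functionality over time.
    """
    if lidar_ok:
        return "full_autonomous_navigation"  # best case: map and avoid obstacles
    if odometry_ok:
        return "slow_guarded_motion"         # degraded: creep along, alert the user
    return "safe_stop_and_notify"            # last resort: halt safely, call for help
```

For a blind user, the crucial property is that every level of degradation still communicates clearly what the system can and cannot do at that moment.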
Finally, the legal aspects of exploiting autonomous machines must be considered. Robotic systems are increasingly endowed with the capability of taking autonomous decisions. As part of the decision-making process shifts from users to autonomous systems, it is not clear how the responsibility for (potentially wrong) decisions is to be distributed. Although several solutions to this issue have been proposed, the corresponding legal infrastructure is still missing. To illustrate this point, the use of self-driving automobiles is currently permitted only in Nevada and California.
Altogether, these factors greatly complicate the role of service robotics providers. Indeed, at the time of writing, to our knowledge, no assistive autonomous robots have yet been deployed in outdoor environments, which are typically highly dynamic and subject to change. Some assistive robots have been deployed in indoor environments (International Organization for Standardization, 2014), although these have little or no mobility capability. Moreover, no commercial solutions are currently available besides robotic pets and toys.

General conclusions
In the past decade our environment has become increasingly technological and, at the same time, technology has become more accessible to the entire population. Technological accessibility, however, requires elderly people and young children to be willing to learn new skills. To reach a large number of users, new technological solutions have to be simple and affordable (e.g. covered with suitable tactile materials and producing suitable sounds). For the visually impaired population, we have to find and provide the right sensory modality to inform blind children about the surrounding environment from soon after birth. Although early intervention is potentially much more powerful, because the brain is more plastic at a young age, all the technology for visually impaired persons that exists today has been developed for adults and is not well adapted to children. Creating new technology that can be used from the first year of life is a necessity. As we have seen above, different tasks rely on different sensory information: we have to find the best sensory feedback to convey a given kind of information, and then improve this signal. This early 'rehabilitation' is important for a child's independence and for mastering the use of future devices.
Simplicity of the device is important to match children's capabilities. Signals have to be easily comprehensible, without overloading the child with sensory feedback and useless information: the device should help the child to build a representation of space through the other senses. A device that provides useless cues not only overloads attentional capacity but also delivers overly complex signals, resulting in a product with little appeal and usability. Simplicity also means better usability. Starting from an understanding of brain mechanisms, new technologies to assist spatial navigation should be shaped according to users' capabilities. This principle includes the idea that, rather than on sensory substitution or passive guiding technologies, more effort should be directed towards technologies that can improve, or even provide, the blind user's means of achieving successful spatial navigation without total dependence on technological devices. One conceivable way to reach such results is to lean towards training devices that, after instructed or even simple use, would improve orientation rather than mobility skills, which have been found to be poor in blind and visually impaired people.
To conclude, although significant advances have been made in the past decade in the development of new technological solutions for locomotion, children are still far from having personalized solutions. We hope that this review can open a discussion on the need for more communication between neuroscientists and technologists. This communication should lead to a new way of developing systems that takes into consideration i) the brain mechanisms that subtend the deficit and ii) the attentional-cognitive capabilities that children and adults exploit to process the sensory signals provided by the technological solution.