Plasticity between visual input pathways and the head direction system

Animals can maintain a stable sense of direction even while navigating novel environments, but how the brain interprets unfamiliar sensory information and encodes it in the navigation system to maintain that stable sense of direction remains a mystery. Recent studies suggest that distinct brain structures of mammals and insects have evolved to solve this common problem with strategies that share computational principles; specifically, a network structure called a ring attractor maintains the sense of direction. Initially, in a novel environment, the animal's sense of direction relies on self-motion cues. Over time, the mapping from visual inputs to head direction cells, which are responsible for the sense of direction, is established via experience-dependent plasticity. Yet the mechanisms that facilitate acquiring a world-centered sense of direction, how many environments can be stored in memory, and which visual features are selected all remain unknown. Thanks to recent advances in large-scale physiological recording, genetic tools, and theory, these mechanisms may soon be revealed.


Introduction
Navigation requires forming and maintaining a stable sense of direction and location, yet these representations must be flexible enough to adapt quickly to new environments. Indeed, mice and rats exposed to a novel environment develop a stable location sense within just a few rounds of exploration [1–4], and flies' sense of direction can be experimentally modified in approximately 2 minutes [5,6].
The flexible yet stable nature of animals' navigation systems provides a window into how the brain quickly develops internal representations of the ever-changing external world and uses them to navigate new environments. Furthermore, recent progress in large-scale recording techniques [7–9], genetic tools [10–12], anatomical data [13–15], and computational modeling [16–24] has begun to reveal the intricate neural dynamics underlying these phenomena. Here, we review recent advances in physiology and computational modeling that have improved our understanding of the neural mechanisms underlying the plasticity between visual inputs and the sense of direction in rodents and flies.

Attractor dynamics underlying the sense of direction
Walk into a familiar room that is completely dark, and you are unlikely to lose your sense of location; your brain can count the steps you take in the heading direction and update your mental image of your location. Such path integration is effective only if our sense of direction is reliable. As we have all likely experienced, we are uncertain about our direction and location while we grope for the light switch; then, the scene of the lit room corrects our sense of direction. This example illuminates three main properties of the direction sense: (1) a unique sense of direction can be maintained in the absence of sensory input; (2) the directional sense can be updated by incorporating self-motion, specifically the angular velocity, which is a self-originated (idiothetic) cue; and (3) the error accumulated in darkness can be corrected by reliable externally originating (allothetic) cues, such as visual input showing distant landmarks (see [25] for an extended review). In rodents, neurons in the lateral mammillary nucleus called Head Direction (HD) cells exhibit all three properties [26–29]. Individual HD neurons fire when the animal is oriented toward the neuron's preferred direction [30]. At the population level, a subset of HD cells is active for any one HD, and a new, unique subset becomes active when the animal turns its head [31,32]. In flies, EPG neurons, called compass neurons, tile the donut-shaped Ellipsoid Body (EB) and encode HD [33–35]; the activity of these neurons also satisfies all three properties [34]. Strikingly, compass neurons are physically organized in a ring topography [33], and two-photon calcium imaging shows localized activity that changes position along the EB perimeter when the animal turns in a virtual reality arena [34].
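Path integration in this example amounts to summing turns and steps. A minimal sketch (Python used only for illustration; the unit step lengths and turn angles are hypothetical):

```python
import numpy as np

def path_integrate(d_theta, theta0=0.0, step_len=1.0):
    """Dead-reckon a position from idiothetic cues alone: heading is the
    running sum of turn angles, and position is the sum of unit steps
    taken along each successive heading."""
    theta = theta0 + np.cumsum(d_theta)       # heading for each step
    x = step_len * np.sum(np.cos(theta))
    y = step_len * np.sum(np.sin(theta))
    return x, y

# Four steps, each preceded by a 90-degree left turn, return the walker
# to the starting point (up to floating-point error).
x, y = path_integrate([np.pi / 2] * 4)
```

Because every step compounds the current heading estimate, any error in the direction sense propagates into all subsequent position updates, which is why a reliable heading signal is the prerequisite for path integration.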
Most theoretical studies that seek to explain the physiological properties of HD cells and compass neurons have converged onto a class of neural network models called ring attractors (Figure 1) [18–22,31,32,36–40]. These ring attractor networks include HD neurons arranged in a ring topology, neurons that represent angular velocity, and visual input neurons. The HD neurons in the model inhibit each other, except for those with similar preferred heading directions, which excite each other (Figure 1a). This all-to-all inhibition and local excitation produce network dynamics similar to winner-take-all dynamics, except that the winner is not a single neuron but the small subset of neurons with similar preferred directions that are active at any given moment [41]; the local excitatory connections among these neurons allow persistent activity without external input. These HD neurons are responsible for the animal's persistent sense of direction in a dark environment [34,37,42]. Angular velocity neurons in the model, on the other hand, receive multiple inputs. These include self-generated angular velocity information from sensory systems, such as the vestibular system (in mice) or halteres (in flies) and proprioception, which shifts the signal of HD/compass neurons [22,38,39,43–49]; they also likely receive a motor efference copy (Figure 1b). In most models, angular velocity is split into left-turning and right-turning populations, and neurons for each direction have been described in both rodents and flies. Finally, visual input neurons provide information about the environment to HD cells, correcting any error in HD information accumulated in the absence of visual input (Figure 1c) [34,47,50–52].
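The ring attractor dynamics described above can be sketched in a few lines. This is purely illustrative: the network size, connectivity profile, and drive strength are hypothetical and not taken from any cited model.

```python
import numpy as np

N = 60                                  # model HD neurons around the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Local excitation between cells with similar preferred directions,
# uniform inhibition elsewhere (all parameter values are illustrative).
diff = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = np.exp(-diff**2 / (2 * 0.3**2)) - 0.5

def step(r, ang_vel=0.0):
    """One discrete update: recurrent input plus an asymmetric drive from
    angular-velocity neurons that pushes the activity bump around the ring."""
    drive = ang_vel * (np.roll(r, 1) - np.roll(r, -1))
    r_new = np.maximum(W @ r + drive, 0.0)   # rectification
    return r_new / r_new.sum()               # keep total activity fixed

# Initialize the HD signal as a bump at an arbitrary direction.
r = np.exp(-diff[20] ** 2 / (2 * 0.3**2))
r /= r.sum()

for _ in range(100):            # no input: the bump persists (property 1)
    r = step(r)
assert abs(np.argmax(r) - 20) <= 2

for _ in range(100):            # a constant turning signal makes the bump
    r = step(r, ang_vel=0.3)    # drift around the ring (property 2)
```

Rectification plus global inhibition implements the winner-take-all-like competition, while the asymmetric drive stands in for the left- and right-turning angular velocity populations.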
Retrosplenial cortex (RSC) is the major source of visual input to the HD system in rodents [16,53–56]; in flies, the anterior dorsal pathway provides diverse visual feature information via ring-shaped neurons (called ring neurons; Figure 1c, right) that directly synapse onto all compass neurons [35,57–59]. In the following, we focus on the flexible interface between the visual input and HD/compass neurons.

Plasticity between visual inputs and the HD system
Hippocampal place cells in rodents require only a brief experience of a novel environment to establish a stable place field [1–4]. Thus, the mammalian navigation system can quickly adapt to a new environment, and the map from an unfamiliar visual scene to the matching HD/compass neurons must likewise be established quickly.
This encoding process has been modeled using experience-dependent Hebbian plasticity between visual inputs and HD cells [60]. That is, if a visual neuron and an HD neuron are simultaneously active, the connection between them is modified to make coincident firing more likely in the future (Figure 2). Therefore, if the visual input is excitatory, the connection is strengthened; if inhibitory, it is weakened. In models with this rule, when an animal is exposed to a new environment, the HD signal is first initialized in a random direction (Figure 2a). Then, the internally available angular velocity signal (e.g. the vestibular signal and/or the motor efference copy), but not the visual inputs, updates the HD signal [17,21]. Assuming the angular velocity signal reliably updates the HD signal, the map between the visual scene and HD cells is updated at every moment (Figure 2b–d). After sufficient time for learning, this map becomes strong enough to correct the HD system if any error accumulates in darkness (Figure 2e–f). This simple paradigm has guided most computational models of HD cells.
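The Hebbian rule just described can be made concrete with a toy simulation. All population sizes, the learning rate, and the one-to-one pairing of scene position with HD are illustrative simplifications, not any published model:

```python
import numpy as np

n_vis, n_hd = 16, 60           # hypothetical population sizes
M = np.zeros((n_hd, n_vis))    # map from visual input neurons to HD cells

def hebbian_update(M, v, h, eta=0.1):
    """Strengthen a synapse when its visual axon (v) and HD dendrite (h)
    are co-active; for inhibitory ring neurons in flies, the same rule
    would weaken co-active synapses instead."""
    return M + eta * np.outer(h, v)

# Toy exposure: as the animal turns, scene position and HD shift together,
# so each visual neuron is repeatedly paired with its matching HD cells.
for t in range(200):
    hd = t % n_hd                                      # current HD index
    v = np.zeros(n_vis); v[hd * n_vis // n_hd] = 1.0   # active visual neuron
    h = np.zeros(n_hd); h[hd] = 1.0                    # active HD cell
    M = hebbian_update(M, v, h)

# After learning, the scene alone recalls matching HD cells (cf. Figure 2f).
v = np.zeros(n_vis); v[4] = 1.0
recalled_hd = int(np.argmax(M @ v))
```

Because several neighboring head directions share one visual neuron in this coarse discretization, the readout recovers the correct neighborhood of HD cells rather than a single cell, which is consistent with a bump-like HD representation.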
In rodents, there is direct evidence of Hebbian plasticity between visual input and place cells, though not yet between visual inputs and HD cells [3,61,62]. Recent studies [1–3] showed that a new map can be created experimentally by pairing the animal's location in the environment with putative place cells through artificial manipulation of the place cells' membrane potential. This supports the idea that the mammalian navigation system, potentially including HD cells, uses experience-dependent learning rules to adjust its dynamics in new environments. Notably, as mentioned previously, the place cells' plasticity process does not require a long period of exploration [1–4]. In line with this observation, HD cells also require only a few minutes in a new environment to establish a stable HD signal [47].
In flies, Kim et al. [5] provided direct evidence of Hebbian learning in compass neurons by placing the fly in a familiar environment, where the compass neurons had a stable mapping to scene orientation. The authors then optogenetically rewrote the map between compass position and the environment simply by pairing scene orientations with the activation of different compass neurons. Less than 5 min of optogenetically enforced remapping resulted in a stable sense of direction that was shifted from the original one. The authors could even reverse the map with respect to the angular direction of turning: when the fly turned clockwise, the compass activity was driven to turn counterclockwise, effectively reversing the sense of direction. Fisher et al. [6] also observed rapid remapping. They recorded the membrane potential from compass neurons in tethered flies walking on an air-floated ball and observed a change in the mapping from visual input to compass neuron activity within only a few minutes in novel environments. Notably, the environmental manipulations that both studies used were, of necessity, highly artificial [5,6]; thus, such plasticity may occur even more quickly in a natural setting.

Figure 1. Ring attractor networks in rodents and flies. All components of the ring attractor model postulated for rodents (left column) have anatomical counterparts in the ring attractor network in flies (right column). (a) The inhibition between HD cells (or compass neurons) with different preferred directions turns off all HD cells except a small number of winning cells with similar preferred directions, whose activity is maintained via local excitation among them. Left: The most active cell (dark gray circle) excites neighboring neurons (red arrows) and inhibits others (blue arrows). Right: The compass neuron (EPG neuron) in EB tile 5 (E5) sends axons to protocerebral bridge (PB) glomeruli 5 (L5 and R5), where a feedback excitatory neuron, PEG, sends axons back to E5. This constitutes recurrent excitation. In R5, an inhibitory D7 neuron receives input from the compass neuron and inhibits PEG and EPG neurons in R1 (in addition to R9 and L8), effectively turning off the local excitation of the recurrent connection from E1 to R1/L1 [35].

Plasticity and allocentric coding of the HD signal
Although visual input can correct errors in the sense of direction nearly instantaneously, visual information, especially of nearby objects, is not always a reliable source of allothetic information while navigating.
If all objects (landmarks) in the visual scene are sufficiently distant from the animal, the entire scene would rotate exactly the same amount as the animal's turning, regardless of the position of the animal in the space. Therefore, the map from the scene to the corresponding HD cells would be consistent over time, and the problem becomes trivial to solve. However, objects close to the animal will change positions in the visual scene as the animal moves, a phenomenon called motion parallax. This makes the objects unreliable for computing the HD signal, and these objects might even interfere with the HD cells' allocentric coding.
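The geometry behind motion parallax is a one-line computation. In this illustrative example the distances and the sideways displacement are arbitrary values, chosen only to contrast a nearby and a distant object:

```python
import numpy as np

def retinal_shift(distance, lateral_move):
    """Angular shift (radians) of an object's retinal position when the
    animal translates sideways by lateral_move while keeping its heading."""
    return np.arctan2(lateral_move, distance)

near = retinal_shift(distance=0.5, lateral_move=0.5)    # nearby object
far = retinal_shift(distance=100.0, lateral_move=0.5)   # distant landmark
# The nearby object sweeps ~45 degrees across the retina, whereas the
# distant one barely moves, so only the distant object behaves like a
# fixed compass cue.
```

This is why a scene composed of sufficiently distant landmarks rotates with the animal's turning regardless of its position, while nearby objects decouple retinal position from head direction.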
Rodent HD cells have solved this problem: in a small arena, though the position of landmarks relative to HD changes significantly depending on the animal's position, HD cells maintain an allocentric HD signal [30,63,64]. In flies, technical limitations have prevented a direct test for an allocentric compass signal, but anatomical and physiological evidence shows that an allocentric translational velocity signal is encoded in the fly brain by combining HD from compass neurons with egocentric velocity signals from other inputs [65–67]. This computation would fail, however, if the HD signal of compass neurons were egocentric (i.e. dependent only on the retinal image), suggesting that the HD signal of fly compass neurons may indeed be allocentric.
What, then, are the underlying mechanisms of allocentric coding? In rodents, recent modeling studies addressed this problem [16,68] by expanding the potential memory capacity of the input neurons in RSC (Figure 3a). In the model of Bicanski and Burgess [16], for example, the RSC holds multiple maps from the visual scene to HD neurons, each selected depending on the position of the animal in space. During the initial exposure to a new environment, the mapping from the visual scene to the HD system occurs in an experience-dependent manner as described previously; however, the mapping is performed independently across distinct locations in the arena. If the RSC has sufficient memory capacity to encode many scenes, this strategy solves the allocentric coding problem during the initial learning stage.
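A minimal sketch of this position-gated strategy follows. The discretization into 8 positions echoes Figure 3a, but all sizes, indices, and the gating scheme are otherwise illustrative rather than taken from the cited models:

```python
import numpy as np

n_vis, n_hd, n_pos = 16, 60, 8         # 8 arena locations, as in Figure 3a
maps = np.zeros((n_pos, n_hd, n_vis))  # one independent visual-to-HD map
                                       # per location (position-gated memory)

def update(maps, pos, v, h, eta=0.1):
    """Hebbian update gated by position: only the map for the animal's
    current location is modified; maps for other locations are untouched."""
    maps[pos] += eta * np.outer(h, v)
    return maps

def recall_hd(maps, pos, v):
    """Read out the HD estimate from the scene using the local map."""
    return int(np.argmax(maps[pos] @ v))

# The same visual neuron may legally point to different HDs at different
# positions, which is what makes the combined code allocentric.
v = np.zeros(n_vis); v[3] = 1.0
h2 = np.zeros(n_hd); h2[10] = 1.0      # at position 2, scene pairs with HD 10
h5 = np.zeros(n_hd); h5[40] = 1.0      # at position 5, same scene, HD 40
maps = update(maps, 2, v, h2)
maps = update(maps, 5, v, h5)
assert recall_hd(maps, 2, v) == 10 and recall_hd(maps, 5, v) == 40
```

The cost of this strategy is explicit in the shape of `maps`: memory scales with the number of stored positions, which is affordable for a large structure like RSC but problematic for the small fly visual pathway.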
In flies, a similar strategy has been modeled based on the behaviors of other insects, such as ants and dung beetles, which take snapshots of their environments to guide navigation behaviors such as homing [69,70]. In one model [71], connections from the anterior dorsal visual pathway to ring neurons serve as a potential memory system for diverse scenes. How this model agrees with recent advances in EM-based network reconstruction [13,65] remains to be determined.
High memory capacity carries an inescapable constraint, however: it requires many neurons. Because flies have far fewer neurons than rodents (see the Memory capacity section below), an alternative strategy that does not require high memory capacity has been proposed to explain allocentric HD coding in flies [5,72]. In this model (Figure 3b), the compass system's experience-dependent plasticity rule maps only distant landmarks onto compass neurons during learning because it simply cannot encode close landmarks [72]. This situation arises because unreliable landmarks (i.e. close objects whose retinal positions, due to motion parallax, do not follow the self-motion-generated HD) do not present consistently repeating pairings, so the compass neurons' relatively slow learning rate never encodes them. In other words, the mapping process simply does not 'see' unreliable landmarks.
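The intuition that a slow learning rate filters out parallax-corrupted cues can be illustrated with a toy simulation, with a distal 'tree' and a proximal 'rock' as in Figure 3b. The learning rule, rates, and parallax model here are hypothetical simplifications, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hd, eta = 60, 0.05          # slow, leaky learning rate (illustrative)
w_tree = np.zeros(n_hd)       # weights from a ring neuron tuned to the tree
w_rock = np.zeros(n_hd)       # weights from a ring neuron tuned to the rock

def learn(w, hd, eta=eta):
    """Slowly move the weight vector toward the currently active HD cell."""
    target = np.zeros(len(w)); target[hd] = 1.0
    return (1 - eta) * w + eta * target

for t in range(2000):
    hd = rng.integers(n_hd)                      # current head direction
    # Tree (distal): lands on this ring neuron's retinal position only
    # for one head direction, so the pairing repeats consistently.
    if hd == 30:
        w_tree = learn(w_tree, hd)
    # Rock (proximal): motion parallax makes it land on the same retinal
    # position for many HDs, depending on where the animal stands.
    if (hd + rng.integers(-20, 21)) % n_hd == 30:
        w_rock = learn(w_rock, hd)

# The distal cue's mapping sharpens; the proximal cue's stays diffuse.
```

With a slow, leaky rule, each inconsistent rock-HD pairing is soon overwritten by a different one, so no single synapse accumulates weight; the tree-HD pairing repeats and consolidates.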
Either strategy assumes that the HD signal in a new environment is initially governed only by the self-generated turning signal (idiothetic cues) [5,16,68,72]. Furthermore, mapping visual input to HD cells requires that the HD signal updated by idiothetic cues be highly reliable. Studies show, however, that the HD signal is not reliable in the first few minutes after an animal is placed in a new environment. In rodents, the mapping from visual input to HD cells stabilizes only after a few minutes [47]. In flies, no quantitative analyses have been reported, but from the author's experience, it might take between 30 s and a few minutes until the mapping from visual input to compass neurons is fully stabilized. This unstable HD activity conflicts with the model assumption that the HD signal driven by idiothetic cues is highly reliable during the learning phase. Clearly, there is a gap in our understanding of how the HD signal initially stabilizes in a new environment.

Memory capacity
In mammals, the RSC contains many neurons and may store countless scenes; we do not know whether scene memories are multiplexed (i.e. single neurons may be involved in coding multiple scenes) or segregated [16], although some level of predictability of the HD signal in a morphed environment suggests at least partial multiplexing [41,47]. In contrast, flies have only 200–400 visual neurons to map each scene onto compass neurons, of which there are ~50 [33,57,65]. Recent work [5,6] showed that if two scenes are similar (e.g. a scene with a single bar and a scene with two bars), compass neuron tuning was predictable, indicating potential memory multiplexing. Interestingly, Kim et al. [5] showed that when two scenes differed substantially, compass activity was significantly less predictable. However, each scene's map was preserved in trials separated by a trial of the other scene, suggesting that multiple scenes can be stored. The exact capacity of the HD memory system and the underlying computational mechanisms in flies are open questions (see [5] for a hypothesis).

Sensory information encoded in the HD system
Another lingering question is which visual features are prioritized for encoding by HD neurons during the initial period in a novel environment [73]. In rodents, the role of visual features has been studied especially in hippocampal place cells. In a series of studies, place cells were shown to encode the position of buried food relative to the geometric appearance, but not the surface properties, of the arena [74–77]. For example, in rotation-symmetric arenas (e.g. rectangular), rats searched the two rotation-symmetric sites even when a prominent symmetry-breaking surface attribute was present (e.g. a colored wall near the food). Interestingly, place cells may not encode all spatial information at once but rather accumulate it over time [78]; this suggests that certain visual features, such as geometry, are prioritized for encoding. In the HD system, unreliable local features do not affect HD coding [63]. On the other hand, the arena's geometric appearance appears to affect the HD signal [73,79–81] via visual input from RSC in rodents [55,82].
In flies, the anatomy of visual inputs has been well-characterized by light and electron microscopy connectivity studies [13,57–59,65]. Seventeen types of ring neurons provide the main visual input to the compass system [35,65], sending their dendrites to a structure with three sub-compartments (superior, inferior, and anterior), collectively called the Bulb. All ring neurons receive strong visual inputs, yet ring neurons in the superior bulb strongly encode visual features, such as oriented edges, whereas inferior bulb ring neurons encode motor information, and anterior bulb neurons encode polarized light [58,83–86] (also see [87]). Interestingly, all these neurons, which directly synapse onto compass neurons, are interconnected with each other [57,59,65], suggesting complex interactions, such as a hierarchy among diverse visual features, in the generation of fly HD signals. The complex organization and the intricate dynamics among ring neurons are important future research topics that will reveal the visual attributes critical to developing a reliable HD signal in flies.

Figure 2. Experience-dependent plasticity between visual input neurons and HD (or compass) neurons. The x-axis of each plot represents visual input neurons (ring neurons in flies; see Figure 1c). For clarity, a single array of neurons is sorted in the order of the visual location to which they respond; the anatomical position of these visual neurons in the brain may be random. Vertical lines represent axons. The y-axis represents the HD signal (or compass signal in flies). HD cells are aligned in the order of preferred HD. Horizontal lines represent dendrites. Each cross between a vertical line and a horizontal line represents a synapse. Axons and dendrites of active neurons are drawn thicker. (a) Assume that an animal is placed in a new environment. Because of the ring attractor dynamics, a small number of HD cells with similar preferred HD are active (red, orange, and yellow cells on the y-axis). In this initial exposure, the environment activates four visual neurons (colored on the x-axis). (b) Over time, the synaptic weights between active visual input neurons and HD cells are strengthened. In flies, since ring neurons are inhibitory, the synaptic weights are weakened. Black dots represent active synapses being updated and also indicate the synaptic weight. (c) When the animal turns, the angular velocity signal shifts the HD signal to a new set of HD cells; the scene also shifts because of the turn, and different visual neurons become active. Synaptic weights between active cells are then updated. Gray dots represent synapses without active axons and dendrites. (d) After sufficient learning, the synaptic weights are fully modified and stabilized. (e) If the animal is placed in darkness, the HD signal will be updated but may accumulate errors. No synapses are active because there is no visual input. (f) When the animal is placed back in the now-familiar environment, the active visual neurons activate HD cells via the established synapses, and the most strongly activated HD cells will win over other HD cells. That is, visual input corrects the error accumulated in darkness. HD, head direction.

Conclusion
A key cognitive function of the brain is to create and use internal representations of the external world [88]. The HD system establishes such representations within the first minute or two in a novel environment. Thanks to recent technical and theoretical advances, we have just begun to comprehend the mechanisms underlying this remarkable phenomenon, and these advances have further revealed that the core principles of these mechanisms appear to be shared between rodents and flies. This provides an unparalleled opportunity to investigate and discern how a key cognitive function is constructed and maintained, not just in a single species but across vastly different species. Such understanding will provide an indispensable template for deciphering the neural basis of more complex cognitive functions.

Figure 3. Two learning algorithms for allocentric head direction coding. (a) Recent studies proposed a position-dependent memory system [16,68]. Computationally, the position dependency might be applied to neurons in RSC that receive the HD signal (as in [16]) or to neurons in RSC that receive visual input. In either case, the HD signal or visual information should be conjunctively encoded with position information, which likely comes from place cells in the hippocampus. In this figure, for simplicity, position-dependent visual input neurons are assumed. The rodent is in an arena with 8 areas. When the animal is in Position 2, only particular RSC visual neurons can be active; all other visual neurons are quiescent. Thus, the HD signal can be locally developed in Position 2. As the animal navigates the arena, different sets of visual neurons become active and the corresponding synapses are updated. (b) An alternative algorithm does not encode proximal objects, which are an unreliable cue for correcting HD errors. In this figure, the fly is navigating an open field. The tree is a distal landmark and provides reliable HD information, whereas the rock is not a reliable cue for the HD signal because of motion parallax. During the learning phase, the matching pairs of compass neurons (activated by the animal's turning) and ring neurons (activated by the rock) do not repeat because the retinal position of the rock does not consistently match the animal's turning. Thus, if the synaptic plasticity is slow, the mapping from ring neurons activated by the rock to compass neurons cannot be established (shown as open circles). On the other hand, the retinal position of the distal tree is always in sync with the animal's turning. Therefore, over time, the synaptic weights will be weakened (because ring neurons are inhibitory) and stabilized. HD, head direction.

Glossary
Idiothetic cues: Self-motion cues generated when an animal moves, including vestibular signals in rodents, haltere signals in flies, proprioceptive sensation (e.g. step count), and motor efference copy.
Allothetic cues: Global cues that provide an animal's location and direction relative to the world.
Egocentric coordinates: The coordinate system relative to the animal. For example, the position of a stimulus on the retina is egocentric. 'Right side of my body' is egocentric.
Allocentric coordinates: The world-centered coordinate system. The position of a rat in an arena is allocentric. The head direction of a rat in an arena is also allocentric.
Path integration: Estimating one's own position using only idiothetic cues.
Motion parallax: The change in the retinal position of a stimulus when an animal moves sideways depends on the distance to the stimulus. For example, a distant object hardly appears to move, whereas a proximal object appears to move in the direction opposite to the animal's own motion.
Map: In this article, the map is used to describe the link, connection, or relationship from a particular scene to the corresponding HD neurons. In other contexts, such as in a cognitive map, a map can represent a mental image of the space.