A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry

In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using micro-movements similar to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. We developed a new algorithm merging the outputs of 35 local processing units, each consisting of an adjacent pair of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom, equipped with the active CurvACE and placed over naturally textured panels, was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm performed robustly under different lighting conditions and at various distances from the ground, and yielded small closed-loop positioning errors of the robot, in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.


Introduction
According to the definition originally proposed by Westheimer in 1975 [1] and recently reformulated in 2009 [2]: 'Hyperacuity refers to sensory capabilities in which the visual sensor transcends the grain imposed by its anatomical structure'. In the case of vision, this means that an eye is able to locate visual objects with greater accuracy than the angular difference between two neighboring photoreceptors Δφ. This study presents the first example of an artificial compound eye that is able to locate the features encountered with much greater accuracy than that imposed by its optics (i.e., Δφ). Based on findings originally observed in fly vision, we designed and constructed an active version of the previously described artificial compound eye CurvACE [3,4]. Active CurvACE features two properties usually banned by optic sensor designers because they impair the sharpness of the resulting images: optical blurring and vibration. The active visual principle applied here is based on a graded periodic back-and-forth eye rotation of a few degrees scanning the visual environment. Scanning micro-movements of this kind have been observed in humans [5] and several invertebrates such as crabs [6], molluscs [7], and arachnids [8,9].
The first micro-scanning sensor based on the periodic retinal micro-movements observed in the fly (for a review, see [10]) was presented in [11], whereas recent developments [12-14] have led to the implementation of bio-inspired vibrating sensors endowed with hyperacuity. However, hyperacuity has also been obtained in artificial retinas without using any retinal micro-scanning processes, based on the overlapping Gaussian fields of view of neighboring photosensors (for a review, see [15]). The authors of several studies have assessed the hyperacuity of an artificial compound eye in terms of its ability to locate a point source [16], a bar [17], a single line [18] (the bar and the line both take the form of a stripe in the field of view (FOV)), an edge [19], and to sense the edge's orientation [20]. The robustness of these visual sensors' performance with respect to the contrast, lighting conditions, and distance from the targeted object (contrasting edges or bars) had never been assessed prior to the present study involving the use of a retinal micro-scanning approach. However, assuming that a priori knowledge is available about the targets' and obstacles' contrast, Davis et al [21] implemented efficient target tracking and obstacle avoidance behaviour onboard a ground-based vehicle equipped with a bulky apposition eye consisting of an array of 7 ommatidia.
As Floreano et al [3] have shown, an artificial curved compound eye can provide useful optic flow (OF) measurements. In addition, we established here that an artificial compound eye performing active periodic micro-scanning movements combined with appropriate visual processing algorithms can also be endowed with angular position sensing capabilities. With this visual sensing method, an aerial robot equipped with active CurvACE was able to perform short-range visual odometry and track a target moving over textured ground. The bio-inspired approach used here to obtain hovering behaviour differs completely from those used in studies involving the use of computer vision or OF.
In this context, it is worth citing, for example, two recent studies using binocular vision [22] and monocular vision [23] to perform drift-free hovering and visual odometry, respectively. However, the latency (about 18 ms) of the embedded visual processing algorithms used by the latter authors still limits the reactivity of the supporting aerial robotic platform. Other strategies combined visual cues with barometric data to obtain a visual odometer [24] or with an ultrasonic range finder to implement a hovering autopilot [25]. In the case of hovering behaviour, many studies have been based on the assumption that the robots in question have prior knowledge of particular features present in the environment. Mkrtchyan et al [26] enabled a robot to hover using only visual cues by fixating three black rectangles, but its altitude was controlled by an operator. Likewise, landing procedures have been devised which enabled robots equipped with inertial measurement units (IMUs) and visual sensors to detect specific geometrical patterns on the ground [27, 28]. Bosnak et al [29] implemented an automatic method of hovering stabilization in a quadrotor equipped with a camera looking downward at a specific pattern. Gomez-Balderas et al [30] stabilized a quadrotor by means of an IMU and two cameras, one looking downward and the other looking forward. In the latter study, the OF was computed on the basis of the images perceived when looking downward, and the robot's position was determined by using a known rectangular figure placed on a wall, which was detected by the forward-facing camera.
OF has also been used along with IMU measurements to perform particular flight maneuvers such as hovering [31] and landing [32,33]. The robot developed by Carrillo et al [34] rejected perturbations by integrating the OF with information about the height obtained via an ultrasonic range finder. In their experiments, the goal was to follow a path defined by a contrasting line.
Honegger et al [35] also developed an optical flow sensor for stabilizing a robotic platform hovering over a flat terrain, but the performances of this sensor over a rugged terrain or a slope were not documented. Along similar lines, Bristeau et al [36] developed a means to estimate the speed of a quadrotor by combining the speed given by an OF algorithm with that provided by an accelerometer.
The visual processing algorithm presented here estimates displacement by measuring the angular position of several contrasting features detected by active CurvACE. In this respect, this method differs completely from those used in previous studies based on the use of the OF, which are similar to speed estimation methods.
The active version of the CurvACE sensor and the fly's retinal micro-movements are described in section 2.1, and a model for the vibrating eye, including its special optics, is presented in section 2.2. The visual processing algorithms resulting in hyperacuity are described in section 3. A complete description of the implementation of this sensor on a tethered robot named HyperRob is given in section 4, and the robot's capability to assess its own linear position relative to the environment and track a target thanks to the active CurvACE, based on the novel sensory fusion algorithm developed here, is established in section 5 (see also section 3.3).
2. Description of the visual sensor: active CurvACE

2.1. Inspiration from the fly's visual micro-scanning movements

In this study, visual hyperacuity results from an active process whereby periodic micro-movements are continuously applied to an artificial compound eye. This approach was inspired by the retinal micro-movements observed in the eye of the blowfly Calliphora (see figure 1(a)). Unlike the fly's retinal scanning movements, which result from the translation of the photoreceptors (see figure 1(b)) in the focal plane of each individual facet lens (for a review of the fly's retinal micro-movements, see [10]), the eye tremor applied here to the active CurvACE by means of a micro-stepper motor (figure 1(c)) results from a periodic rotation of the whole artificial compound eye. Section 2.3 shows in detail that both scanning processes lead to a rotation of the visual axis.
Here we describe in detail the active version of the CurvACE sensor and establish that this artificial compound eye is endowed with hyperacuity, thanks to the active periodic micro-scanning movements applied to the whole eye.
2.2. Modelling the optics

As described in [3], the CurvACE photosensor array has characteristics similar to those of the fruitfly eye in terms of the number of ommatidia (630), light adaptation, the interommatidial angle (Δφ = 4.2° on average), and a similar Gaussian-shaped angular sensitivity function (ASF). Δφ is defined as the angle between the optical axes of two adjacent ommatidia, and the acceptance angle Δρ as the angular width at half maximum of the Gaussian-shaped ASF. This particular shape of the ASF results from the combination of the Airy diffraction pattern and the geometrical angular width of the rhabdom at the nodal point of the lens [38]. The diameter of the facet lenses in the male blowfly Calliphora ranges from 20 to 40 μm, whereas that of the peripheral rhabdomeres is 1.5 to 2 μm (see [39] for a review).

(Figure 1 caption, recovered fragment: the MOT is one of the two muscles responsible for the periodic retinal translation (for a review of the fly's retinal micro-movements, see [10]). Modified with permission from [37], copyright 1972 Springer. (c) The active CurvACE with its vibrating mechanism based on the use of a small stepper motor.)

(Figure 2 caption, recovered fragment: (b) CurvACE sensor and (c) the horizontal ASFs measured for each artificial ommatidium along the equatorial row (red line) (see [3] for further details). The mean interommatidial angle Δφ in the middle row (red line) is 4.2° ± 1.17° (SD) and the mean acceptance angle is 4.2° ± 0.56° (SD). Adapted with permission from [40], copyright 1977 Scientific American Inc.)
To replicate the characteristics of its natural counterpart (see figure 2(a)), CurvACE was designed with a specific optical layer based on an assembly consisting of a chirped microlens array (lenslet diameter: 172 μm) and two chirped aperture arrays. In the case of active CurvACE, the optical characteristics remain constant during the scanning process.
Each ASF of active CurvACE (figures 2(b) and (c)) can be characterized by the acceptance angle Δρ, defined as the angular width at half the maximum of the ASF. The one-dimensional ASF s(ψ) of one CurvACE ommatidium can therefore be written as follows:

s(ψ) = exp(−ψ² / (2σ²)), with σ = Δρ / (2√(2 ln 2))

where ψ is the angle between the pixel's optical axis and the angular position of a point light source.

The visual axes can be rotated in two ways: through a micro-translation of the photoreceptors in the focal plane of a fixed lens, as occurs in the fly's eye, or through a rotation of the whole eye (e.g., like the mechanism underlying the micro-saccades in the human camerular eye [41]). In the active CurvACE, we adopted the second strategy by subjecting the whole eye to an active micro-scanning movement that makes it rotate back and forth. As shown in figure 3, the retinal micro-scanning movements are performed by a miniature eccentric mechanism based on a small stepper motor (Faulhaber ADM 0620-2R-V6-01, 1.7 g in weight, 6 mm in diameter) connected to an off-centered shaft [42]. This vibrating mechanism makes it possible to set the scanning frequency by simply setting a suitable motor speed.
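As a minimal numerical sketch (not the authors' code), the Gaussian ASF can be written as a function of the acceptance angle, taking Δρ as the full width at half maximum:

```python
import numpy as np

def gaussian_asf(psi_deg, delta_rho_deg=4.2):
    """Gaussian angular sensitivity function (ASF).

    psi_deg: angle between the pixel's optical axis and a point source (deg).
    delta_rho_deg: acceptance angle, i.e. full width at half maximum (deg).
    """
    # Convert the FWHM (acceptance angle) into the Gaussian standard deviation.
    sigma = delta_rho_deg / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-psi_deg**2 / (2.0 * sigma**2))
```

By construction, the sensitivity drops to one half at ψ = ±Δρ/2 (here ±2.1°).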

Mathematical description of signals generated by vibrating ommatidia
A general expression for the pixel's output signal is given by the convolution of the ASF s(ψ) with the light intensity I(ψ) of the 1D scene:

Ph = (s ∗ I)(ψ_c)

where ψ_c is the angular position of a contrasting feature (edge or bar) placed in the sensor's visual field. For example, I(ψ) can be expressed for an edge as:

I(ψ) = I₁ for ψ < ψ_c, I₂ for ψ ≥ ψ_c

and for a bar:

I(ψ) = I₂ for ψ_c ≤ ψ ≤ ψ_c + L, I₁ otherwise

with L the width of the bar (expressed in rad). The micro-scanning movements of the pixels can be modelled as an angular vibration ψ_mod of the optical axes added to the static angular position ψ_c of the contrasting object (an edge or a bar). The equations for the two pixels (Ph₁ and Ph₂) are therefore:

Ph₁(t) = (s ∗ I)(ψ_c − Δφ/2 + ψ_mod(t))
Ph₂(t) = (s ∗ I)(ψ_c + Δφ/2 + ψ_mod(t))

with ψ_mod obeying the following sinusoidal scanning law:

ψ_mod(t) = A · sin(2π f_mod t)

where A and f_mod are the amplitude and the frequency of the vibration, respectively. In the case of a rotation of the whole eye, this scanning law is easily achieved by a continuous rotation of the motor with an off-centered shaft. In the case of a translation of the pixels behind a lens, the law should be weighted by the arctangent of the ratio between the retinal displacement ε and the focal length f of the lens.

(Figure 3 caption, recovered fragment: Optical axis rotation resulting from (a) a micro-displacement ε of the pixels placed behind a fixed lens (e.g., in the fly's compound eye) or (b) a rotation of the whole sensor (e.g., in the active CurvACE sensor). The active CurvACE is subjected to periodic rotational movements generated by a miniature eccentric mechanism. The angular vibration ψ_mod is generated by a miniature stepper motor, represented here by an orange shaft and a purple off-centered shaft that translates along an elongated hole. The scanning frequency can easily be adjusted by changing the rotational speed of the motor; the scanning amplitude depends on the diameter of the off-centered shaft.)
Ultimately, the photosensor response is a modulated convolution of the light intensity with the Gaussian angular sensitivity function.
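This modulated pixel response can be illustrated with a small simulation; the following is a sketch under assumed values (edge contrast levels, grid span, sample times), not the embedded implementation:

```python
import numpy as np

def asf(psi, delta_rho=4.2):
    """Gaussian ASF with acceptance angle delta_rho (FWHM, deg)."""
    sigma = delta_rho / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-psi**2 / (2 * sigma**2))

def pixel_output(axis_angle, psi_c, t, A=5.0, f_mod=50.0):
    """Response of one ommatidium viewing an edge while the eye vibrates.

    axis_angle: static optical-axis direction (deg); psi_c: edge position (deg);
    A: peak-to-peak scan amplitude (deg); f_mod: scan frequency (Hz).
    """
    psi_mod = (A / 2) * np.sin(2 * np.pi * f_mod * t)  # eye-tremor rotation
    psi = np.linspace(-30.0, 30.0, 2001)               # 1D angular scene grid
    edge = np.where(psi >= psi_c, 1.0, 0.2)            # dark-to-bright edge
    w = asf(psi - (axis_angle + psi_mod))              # shifted ASF weights
    # Normalized convolution: ASF-weighted average of the scene intensity.
    return float(np.sum(w * edge) / np.sum(w))

# Two adjacent ommatidia separated by the interommatidial angle (4.2 deg),
# sampled at the peak of the scanning excursion (t = 5 ms, f_mod = 50 Hz):
ph1 = pixel_output(-2.1, psi_c=0.0, t=0.005)
ph2 = pixel_output(+2.1, psi_c=0.0, t=0.005)
```

The pixel whose axis points toward the bright side of the edge returns the larger value; the sinusoidal term ψ_mod modulates both outputs over time.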

Insights into the visual processing algorithms
In this paper, we reuse the local processing unit (LPU) presented in [14] and apply its principle to active CurvACE. An LPU is an elementary pair of photosensors endowed with hyperacuity by means of a periodic vibration, able to locate an edge or a bar placed in its small FOV very accurately. An artificial compound eye like CurvACE can provide several LPU outputs, which can be merged to obtain a larger FOV and used as a basis for higher-level visual processing algorithms. In the following sections, we describe the different steps of the visual algorithm, from pixel processing to the novel fusion of the LPUs' output signals.

LPU: from photosensors to an accurate edge and bar location
The LPU defined in figure 4 is the application of the algorithms presented in [13] and [14]. The first paper ([13]) leads to the signal Output_Pos, resulting from the difference-to-sum ratio of the demodulated pixel output signals described by equation (8). The demodulation is performed by means of a peak filter, centered at the scanning frequency (f_p = f_mod), that acts as both a differentiator and a selective filter. Then an absolute-value function cascaded with a low-pass filter smooths the pixels' output signals. The second paper ([14]) explains in detail the edge/bar detection, which is based on the observation that the two pixels' output signals are in phase in the presence of an edge and in opposite phase in the presence of a bar. At the output of the LPU, the signal θ_i represents the position of the contrasting feature in the FOV (see equation (9)).
θ_i = Output_Detector · Output_Pos (9)

with Output_Detector equal to −1 (bar) or +1 (edge), and θ_i the output signal of an LPU (see figure 4).
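A rough software sketch of the LPU pipeline follows. Here simple first-order filters stand in for the paper's peak filter and second-order low-pass, and the function names are our own simplifications, so this is an illustration of the structure rather than the embedded code:

```python
import numpy as np

FS = 500.0     # sampling frequency (Hz), as used onboard
F_SCAN = 50.0  # scanning frequency (Hz)

def ema(x, fc):
    """First-order low-pass (exponential moving average) with cut-off fc (Hz)."""
    a = (2 * np.pi * fc / FS) / (1 + 2 * np.pi * fc / FS)
    y, acc = np.zeros_like(x), 0.0
    for k, v in enumerate(x):
        acc += a * (v - acc)
        y[k] = acc
    return y

def demodulate(x):
    """Crude stand-in for the peak filter: band-pass around F_SCAN
    (difference of two low-passes), rectification, then a 10 Hz low-pass."""
    band = ema(x, 2 * F_SCAN) - ema(x, F_SCAN / 2)
    return ema(np.abs(band), 10.0)

def lpu_output(ph1, ph2, is_edge=True):
    """Output_Pos = difference-to-sum ratio of the demodulated pixel signals,
    multiplied by Output_Detector (+1 for an edge, -1 for a bar)."""
    d1, d2 = demodulate(ph1), demodulate(ph2)
    out_pos = (d2 - d1) / np.maximum(d1 + d2, 1e-9)  # guard against 0/0
    return (1.0 if is_edge else -1.0) * out_pos
```

Because the difference is normalized by the sum, the output depends on the relative modulation depths of the two pixels rather than on the absolute light level, which is consistent with the lighting robustness reported below.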

Hyperacute localization of contrasting bars and edges
The characteristic static curves of the active CurvACE, obtained with a contrasting edge and a black bar 2.5 cm in width subtending an angle of 2.86°, are presented in figure 5.
The curve in figure 5(a) has a hyperbolic-tangent profile with respect to the angular position of the edge. By comparing the two curves plotted in figure 5 (scanning frequency 50 Hz, amplitude about 5° peak to peak, with Δφ = 4.2° and Δρ = 4.2°), it can be clearly seen that the slopes of the characteristic static curves obtained with a bar and an edge are inverted. A theoretical explanation for this inversion of the slopes is given in [14]. The inversion justifies the use of an edge/bar detector in the LPU (see figure 4) to compensate for it and still be able to distinguish the direction of the movement.

(Figure 4 caption, recovered fragment: Two parallel processing pathways (one for edge/bar localization and one for edge/bar detection) were implemented. The edge/bar localization block gives the local angular position θ_i of an edge or bar placed in the visual field of two adjacent photosensors. The edge/bar detection block detects the presence of a bar or edge and triggers the appropriate gain: +1 for edges and −1 for bars. The principle of this detector is described in [14]. The central frequency f_p of the peak filter is equal to the scanning frequency (50 Hz), whereas the cut-off frequency of the second-order digital low-pass filter is 10 Hz. Adapted with permission from Juston et al [14], copyright 2014 IEEE.)
Moreover, the characteristic curves are independent of the ambient lighting conditions. Figure 6 shows that the Output_Pos signal remains constant even when the ambient light varies over about one decade (from 180 to 1280 lux). A peak is visible at each light change; it corresponds to the transient phase of the CurvACE photosensors' light adaptation, which lasts only 250 ms. In addition, figure 6 shows that active CurvACE is a genuine angular position sensing device, able to provide the angular position of a contrasting object placed in its FOV. Indeed, when the scanning was turned off and on again, the value of the measurement remained the same. This experiment shows that the micro-scanning movement allows the position of a contrasting object to be measured without any drift. A limitation comes from the contrast, which cannot theoretically be higher than 81.8% for an edge, because the auto-adaptive pixel output signal is no longer log-linear for changes of illuminance (in W m⁻²) greater than one decade (see [3]).

Merging the output of local pairs of processing units
To endow a robot with the capability to sense its linear speed and position, a novel sensory fusion algorithm was developed using several LPUs in parallel. In this article, a 2D region of interest (ROI) composed of 8 × 5 artificial ommatidia in the central visual field was used in order to expose several pixels to the same kind of movements (figures 9(a) and (c)): the pattern seen by the sensor during a translation of the robot is a succession of edges and bars. The algorithm used here, depicted in figure 7, connects the 8 × 5 photosensors' output signals to an array of 7 × 5 LPUs in order to provide local measurements of edge and bar angular positions. A selection is then performed by computing the local sum S_i of the two demodulated signals Ph_demod obtained from two adjacent photosensors:

S_i = Ph_demod,i + Ph_demod,i+1

Indeed, as the signal Output_Pos is pure noise when no feature is in the FOV, an indicator of the presence of a contrast was required. The sum of the demodulated signals was used to give this feedback, because we observed that the contrast is positively correlated with both the sum and the signal-to-noise ratio. Therefore, at each sampling step, each local sum S_i is thresholded in order to select the best LPU outputs to use. All sums above the threshold value are kept and the others are rejected. The threshold is then increased or decreased by a fixed amount until 10 local sums have been selected, so it evolves dynamically at each sampling step. Lastly, the index i of each selected sum gives the index of the pixel pair to process. The computational burden is thus dramatically reduced, because only the data provided by the 10 selected LPUs are actually processed by the micro-controller.
In a nutshell, the sensory fusion algorithm presented here selects the 10 highest contrasts available in the FOV. As a result, the active CurvACE is able to assess its relative linear position regarding its initial one and its speed with respect to the visual environment.
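The dynamic thresholding behind this selection might be sketched as follows. This is a simplified, hypothetical implementation: the adaptation step size and function names are assumptions, not values from the paper:

```python
import numpy as np

N_SELECT = 10  # number of LPUs retained at each sampling step

def select_lpus(local_sums, threshold, step=0.01):
    """Select the LPU pairs whose local sum S_i exceeds the threshold,
    then nudge the threshold toward keeping about N_SELECT of them.

    local_sums: array of S_i = Ph_demod_i + Ph_demod_{i+1}, one per LPU.
    Returns (selected indices, updated threshold).
    """
    selected = np.flatnonzero(local_sums > threshold)
    # Raise the threshold when too many contrasts pass, lower it when too few;
    # over successive sampling steps the count settles around N_SELECT.
    if selected.size > N_SELECT:
        threshold += step
    elif selected.size < N_SELECT:
        threshold = max(threshold - step, 0.0)
    return selected, threshold
```

Because the threshold is adapted by a small step at each 2 ms sampling period rather than re-sorted from scratch, the selection costs almost nothing on a micro-controller.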
It is worth noting that the selection process acts like a strong non-linearity. The output signal θ_fused is therefore not directly equal to the sum of all the local angular positions θ_i. The parallel differentiation, coupled to a single integrator via a non-linear selecting function, merges all the local angular positions θ_i, giving a reliable measurement of the angular orientation of the visual sensor over an infinite range. The active CurvACE can therefore be said to serve as a visual odometer once it has been subjected to a purely translational movement (see section 5.1). Mathematically, the position is given through the following three equations:

ΔP_i(k) = θ_i(k) − θ_i(k−1)
S_fused(k) = (1/N_sel) · Σ_{i ∈ sel} ΔP_i(k)
X(k) = X(k−1) + K · S_fused(k)

As shown in figure 7, the robot's speed is determined by applying a low-pass filter to the fused output signal S_fused (the normalized sum of the local displacement increments ΔP_isel), whereas the robot's position is determined in the same way as θ_fused, scaled by the gain K.

(Figure 7 caption, recovered fragment: Visual processing algorithm. Description of the sensory fusion algorithm used to assess the robot's speed V_x as well as its position X, resulting here from a translation of the textured panel with respect to an arbitrary reference position (i.e., the initial position if not reset during the flight). The 35 (7 × 5) LPU output signals corresponding to the ROI (8 × 5 photosensors) of the active CurvACE (see figure 9(a)) were processed by the 35 LPUs presented in figure 4. The signal obtained before reaching the discrete integrator, denoted S_fused, was used to compute the linear speed. Scaling the angular data to millimetric data (gain K) and normalizing over time limited the noise. The robot's position X was scaled in millimeters by means of the gain K. The visual processing algorithm provides V_x and X to the robot's autopilot (see figure 13).)
To sum up, the algorithm developed here sums the local variations of the contrasts' angular positions in the sensor's FOV, and can therefore give the distance travelled under the assumption that the height above the ground is known.
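Putting the pieces together, a minimal visual-odometer sketch follows. The gain value, class name, and low-pass time constant are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

K = 6.8  # deg -> mm scaling gain (hypothetical; depends on ground height)

class VisualOdometer:
    """Fuse selected LPU outputs into a position X (mm) and speed V_x (mm/s)."""

    def __init__(self, n_lpus=35, fs=500.0, tau=0.1):
        self.prev_theta = np.zeros(n_lpus)       # previous local angular positions
        self.x = 0.0                             # integrated position (mm)
        self.v = 0.0                             # low-pass filtered speed (mm/s)
        self.dt = 1.0 / fs
        self.alpha = self.dt / (tau + self.dt)   # first-order low-pass coefficient

    def update(self, theta, selected):
        """theta: current LPU angular outputs; selected: indices kept this step."""
        delta = theta[selected] - self.prev_theta[selected]  # local increments
        self.prev_theta[:] = theta
        s_fused = delta.mean() if selected.size else 0.0     # normalized sum
        self.x += K * s_fused                                # discrete integrator
        self.v += self.alpha * (K * s_fused / self.dt - self.v)  # filtered speed
        return self.x, self.v
```

Differentiating each θ_i before fusing means a contrast leaving one LPU's field of view and entering the next one's contributes consistently, which is what gives the odometer its unbounded range.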

HyperRob: an aerial robot equipped with a hyperacute compound eye
The objective of this part of the study was to endow a visually controlled robot, named HyperRob, with the capability to:
• stay at a desired position (reference position) with respect to the visual environment (a textured panel, see figure 8);
• return to the reference position even in the presence of perturbations applied to the robot (lateral disturbance) or to the textured panel over which the robot is flying;
• track a visual target placed between the robot and a textured background environment.
Figure 8 summarizes the four scenarios used to show the visual stabilization capabilities of HyperRob. Sections 2 and 3 presented the visual sensor and the algorithm we implemented on HyperRob. It is a twin-rotor robot tethered at the tip of a rotating arm. The robot was free to rotate around its roll axis and could therefore make the arm rotate around its vertical axis (the azimuth). The robot therefore travelled along a circular path with a radius of curvature equal to the length of the arm (1 m). Figure 9 shows the robot equipped with active CurvACE placed on the experimental test bench.
This section introduces HyperRob; section 5.2.1 will show that the robot is able to hold its initial position thanks to the vibrating active CurvACE, which estimates its linear speed and position, provided that its gaze is stabilized.

Gaze stabilization
In order to determine the robot's speed and position accurately, the gaze direction should be orthogonal to the terrain. As a simplification, we chose to align it with the vertical, assuming that the ground is mostly horizontal. The eye therefore has to compensate for the robot's roll angle. To this end, it is decoupled from the robot's body by means of a servomotor whose rotational axis is aligned with the robot's roll axis. The gaze control system, based on an inertial feedforward controller, keeps the eye pointing vertically downward throughout the flight (figure 10). The rotational component introduced by the rotating arm supporting the robot can be neglected in this study.

Details of the robot HyperRob
HyperRob is an aerial robot with two propellers and a carbon fiber frame housing two DC motors. Each motor transmits its power to the corresponding propeller (diameter 13 cm) via an 8 cm long steel shaft rotating on micro ball bearings inside the hollow beam and ending in a crown gear (reduction ratio 1/5). Two Hall effect sensors measure the rotational speed of each propeller, and hence its thrust, via four magnets glued to each crown gear. Based on the differential thrust setpoints adopted, HyperRob can control its attitude around the roll axis, which is sensed by a six-axis inertial sensor (here an InvenSense MPU 6000). The robot's position in the azimuthal plane is controlled by adjusting the roll angle. Since only the roll rotation is free, only one axis of the accelerometer and one axis of the rate gyro are used. As shown in figure 11, active CurvACE is mounted on a fast micro-servomotor (MKS DS 92A+), which makes it possible to control the gaze with great accuracy (0.1°) and fast dynamics (60° within 70 ms, i.e., about 860°/s). This configuration enables the visual sensor to be mechanically decoupled from the body (see section 4.1). The robot is fully autonomous in terms of its computational resources and power supply (both provided onboard). The robot alone weighs about 145 g; the robot plus the arm weighs about 390 g.
All the computational resources required for the visual processing and the autopilot are implemented on two low-power micro-controllers embedded onboard the robot. The first micro-controller (Microchip dsPIC 33FJ128GP802) deals with the visual processing, whereas the second one (Microchip dsPIC 33FJ128GP804) is responsible for stabilizing the robot. Both micro-controllers run at a sampling frequency of 500 Hz. The robot's hardware architecture is presented in detail in figure 12.
The micro-controller (μC) Vision communicates with CurvACE via a serial peripheral interface (SPI) bus and collects the pixel values of the ROI. The signal S_fused is computed and sent to the μC Control via another SPI bus; the latter completes the computation of the position X and the speed V_x. This solution was chosen in order to keep the amount of data sent via the SPI bus to a minimum. The μC Control estimates the robot's attitude around the roll axis via a reduced complementary filter (inspired by [43]) and controls the robot's linear position on the basis of the two visual measurements (X and V_x, see figure 13). The μC Control then sends the propeller speed setpoints to a custom-made driver controlling the rotational speed of each propeller in closed loop.

Control
Assuming the robot to be a rigid body and simplifying the dynamic model of a quadrotor presented in [44] to the case of a single roll axis, the robot's dynamics can be written as follows:

Ẋ = V_x
m·V̇_x ≈ T_nom·θ_r
θ̇_r = Ω_r
I·Ω̇_r = l·δ

where X is the robot's lateral position, V_x is its lateral speed, θ_r is the roll angle, Ω_r is the rotational roll speed, l is the robot's half span, I is the moment of inertia, m is the robot's mass, δ is the differential thrust, and T_nom is the nominal thrust, with

δ = c_T·(ω_r1² − ω_r2²) and T_nom = c_T·(ω_r1² + ω_r2²)

where c_T is the thrust coefficient, and ω_r1 and ω_r2 are the right and left propeller speeds, respectively, and δ = f_prop(s)·δ⋆, where f_prop(s) and δ⋆ correspond to the closed-loop transfer function of the propellers' speed and the differential thrust reference, respectively.

(Figure 11 caption, recovered fragment: Robot description. (a) HyperRob mounted at the tip of the arm, leaving it free to rotate around the roll axis and to move along a circular path in the azimuthal plane. (b) Schematic view of the robot. Four actuators are mounted onboard: two DC motors driving the propellers are set in the carbon fiber body, one servomotor stabilizes the gaze, and one stepper motor produces the eye tremor (vibration). (c) CAD view of the robot equipped with its visual sensor, with a FOV of 33.6° by 20.2°. ω_r1 and ω_r2 are the two propellers' rotation speeds, θ_r is the robot's roll angle, and θ_er is the eye-in-robot angle.)

(Figure 12 caption, recovered fragment: Electronic architecture. The autopilot's electronics board is composed of two micro-controllers. The first one (dsPIC33FJ128GP802) runs the visual processing algorithm and the second one (dsPIC33FJ128GP804) controls the robot's roll and its linear position. The micro-controller denoted μC Vision communicates with CurvACE and receives the digitized pixel output values via an SPI bus. The second micro-controller, denoted μC Control, receives the visual sensor's output data from μC Vision via an additional SPI bus, and then sends the propellers' setpoints to a custom-made driver including closed-loop control of the rotational speed of each propeller.)

(Figure 13 caption, recovered fragment: Description of the control loops. The robot's control system consists mainly of four nested feedback loops: one for the lateral position X, one for the lateral speed V_x, one for the roll angle θ_r, and one for the rotational roll speed Ω_r. The propellers' speeds ω_r1,2 are controlled via an additional local feedback loop. All the controllers are detailed in table 1. Two Hall effect sensors measure the propeller speeds used in the feedback loop controlling the effective thrust. The inertial sensors give a biased rotational speed and the acceleration. The active CurvACE sensor produces two visual measurements, corresponding to the robot's relative position X and its linear speed V_x (see figure 7).)
As described in figure 13, the autopilot controlling both the robot's roll and its position is composed of four nested feedback loops:
• the first feedback loop controls the robot's rotational speed by directly adjusting the differential thrust, and hence the roll torque.
• the second feedback loop yields setpoints on the previous one for tracking the robot's reference roll angle.
• the third feedback loop adjusts the robot's linear speed by providing roll angle setpoints.
• the fourth feedback loop controls the robot's linear position and yields the reference speed.
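The four nested loops above can be sketched as a cascade of proportional controllers, evaluated from the outermost loop inward. The gains below are placeholders (the actual controllers are specified in the paper's table 1):

```python
# Hypothetical gains; the paper's controllers are detailed in its table 1.
KP_POS, KP_SPEED, KP_ROLL, KP_RATE = 0.8, 0.02, 4.0, 0.5

def cascaded_control(x_ref, x, v_x, theta_r, omega_r):
    """One pass through the four nested loops, outermost to innermost.

    x_ref: lateral position setpoint; x, v_x: measured position and speed;
    theta_r, omega_r: measured roll angle and roll rate.
    Returns the differential-thrust command delta (arbitrary units).
    """
    v_ref = KP_POS * (x_ref - x)                 # position loop -> speed setpoint
    theta_ref = KP_SPEED * (v_ref - v_x)         # speed loop -> roll setpoint
    omega_ref = KP_ROLL * (theta_ref - theta_r)  # roll loop -> rate setpoint
    delta = KP_RATE * (omega_ref - omega_r)      # rate loop -> diff. thrust
    return delta
```

Each loop only produces the setpoint of the next, faster loop, so the structure maps directly onto the four feedback loops of figure 13.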
In the first and second feedback loops, the roll angle estimate is obtained by means of a reduced version of the complementary filter described in [43]. In the case of HyperRob, since only a 1D filter is required, the attitude estimator becomes:

dθ̂_r/dt = (Ω_r − b̂) + k_a·(θ̄_r − θ̂_r)
db̂/dt = −k_b·(θ̄_r − θ̂_r)
Ω̄_r = Ω_r − b̂

where θ̄_r is the roll angle calculated from the accelerometer measurement Y_acc, b̂ is the estimated rate gyro bias, Ω_r is the rate gyro's output measurement, θ̂_r is the estimated roll angle, and Ω̄_r is the unbiased rotational speed. Here k_a and k_b are positive gains selected so as to obtain convergence times of 3 s and 30 s for the estimated angle and the estimated rate gyro bias, respectively. The complementary filter therefore yields the rate gyro bias, the unbiased roll rotational speed, and the roll angle.
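A minimal 1D complementary-filter sketch consistent with this estimator is shown below. The gain values are illustrative (the paper tunes k_a and k_b for 3 s and 30 s convergence times), and the accelerometer model assumes a quasi-static robot so that gravity dominates the measured acceleration:

```python
import math

class ComplementaryFilter1D:
    """Reduced 1D complementary attitude filter with rate-gyro bias estimation."""

    def __init__(self, k_a=1.0 / 3.0, k_b=1.0 / 30.0, dt=1.0 / 500.0):
        self.theta_hat = 0.0  # estimated roll angle (rad)
        self.b_hat = 0.0      # estimated rate-gyro bias (rad/s)
        self.k_a, self.k_b, self.dt = k_a, k_b, dt

    def update(self, omega_meas, acc_x, acc_z):
        """omega_meas: biased gyro reading (rad/s); acc_x, acc_z: accelerometer axes."""
        # Roll angle reconstructed from the accelerometer (gravity direction).
        theta_bar = math.atan2(acc_x, acc_z)
        err = theta_bar - self.theta_hat
        omega_unbiased = omega_meas - self.b_hat
        # Forward-Euler integration of the two filter equations.
        self.theta_hat += self.dt * (omega_unbiased + self.k_a * err)
        self.b_hat += self.dt * (-self.k_b * err)
        return self.theta_hat, omega_unbiased
```

At steady state the angle estimate tracks the accelerometer-derived angle while the bias state absorbs any constant gyro offset, so the returned rotational speed is unbiased.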

Application to short-range odometry, visual stabilization, and tracking
In the various experiments performed in this study, serial communication with a ground station was used to record the data provided by the robot. A VICON motion tracking system was run at the same time to obtain ground-truth data. A textured panel was placed 39 cm below the robot. During the experiment involving the translation of the panel, since the sensor can only sense movement along one direction and the robot travels along a circular path, the data were projected in order to obtain a comparable dataset: at each time step, the position vector was projected into the robot frame and only the tangential component of the displacement was kept.
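This projection step can be sketched as follows (the function name and frame convention are our own; only the component of the world-frame displacement along the robot's direction of travel is kept):

```python
import math

def tangential_displacement(dx, dy, heading):
    """Project a world-frame displacement (dx, dy) onto the robot's
    tangential axis, i.e., its direction of travel at angle `heading`
    (rad, measured from the world x-axis). Name and frame convention
    are illustrative assumptions."""
    tx, ty = math.cos(heading), math.sin(heading)  # unit tangent vector
    return dx * tx + dy * ty
```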
In this section, we report on several experiments which were carried out to test the robot's capability to perform various tasks. In all these experiments, thanks to the efficient gaze control system compensating for the robot's roll, the visually controlled robot experienced a quasi-translational OF elicited by the perturbations applied either to the robot itself or to the textured panel. In the first experiment, the sensor played the role of an odometer. The robot also achieved accurate hovering despite lateral disturbances, as well as an efficient tracking capability. All these experiments confirmed that the robot was able to perform robust short-range visual odometry, i.e., to estimate its linear position before returning automatically to an arbitrarily chosen reference position.

Short-range visual odometry
The fused visual output signal S fused provided by the active CurvACE depends on the visual environment: if there are no contrasts, the sensor will not detect any visual cues and will therefore not be able to specify the robot's position accurately. The richer the visual environment is (in terms of contrasts), the better the position measurement. In order to compare the output of the sensor with the ground-truth measurements, three experiments were conducted under different visual conditions.
In these experiments, the robot was moved manually over two different panels under two different ambient lighting conditions, from right to left and back to the initial position, with its gaze stabilization activated. The results obtained, which are presented in figure 14, are quite similar for each trip, with a maximum error of 174 mm. Figure 14 also shows that the outputs in response to a richly textured panel and to one composed of a single 5 cm wide black bar were similar. Therefore, assuming the distance to the ground to be known, the active CurvACE was able to serve as a visual odometer by measuring the robot's position accurately in the neighbourhood of its initial position.
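The dependence of the fused output signal S_fused on the available contrasts can be illustrated with a minimal sketch in which only the local measurements carrying the strongest contrasts are kept and averaged (the selection of the highest contrasts follows the description of the fusion algorithm given elsewhere in the paper; the exact weighting used onboard is not reproduced here):

```python
def fuse_local_measurements(positions, contrasts, n_best=10):
    """Average the local position measurements associated with the
    n_best highest contrast magnitudes (one measurement per LPU).
    Returns None when no contrast at all is available, in which case
    the position is unobservable."""
    ranked = sorted(zip(contrasts, positions), reverse=True)
    kept = [p for _, p in ranked[:n_best]]
    if not kept:
        return None  # no visual cues: cannot specify the position
    return sum(kept) / len(kept)
```

A richer environment provides more strong contrasts, so the average is taken over more reliable local measurements, consistent with the observation that richer scenes yield better position measurements.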

Above a horizontal textured panel
Lateral disturbances were applied by pushing the arm in both directions, simulating gusts of wind. In figure 15, it can be seen that all lateral disturbances were completely rejected within about 5 s, even those as large as 40 cm. The robot's dynamics could be largely improved by using a robot with higher thrust or a lighter arm in order to reduce the oscillations. Figures 15(b) and (c) show that the robot was always able to return to its initial position. With its active eye, the robot can compensate for lateral disturbances as large as 359 mm applied to its reference position with a maximum error of only 25 mm, i.e., 3% of the flown distance. This error is presumably due to the selection process, which does not ensure that the same features are selected on the outward path and on the way back to the reference position; alternatively, the linear approximation of the inverse tangent function may not hold over the entire FOV. Thanks to the active visual sensor and its ability to measure the angular position of contrasting features, the robot HyperRob is highly sensitive to any motion and can thus compensate even for very slow perturbations, ranging here from 0 to 391 mm·s−1.
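The size of the error introduced by linearizing the inverse tangent can be checked numerically; the 16° bound used below is our assumption, taken as half of the sensor's 32° horizontal FOV:

```python
import math

def arctan_linearization_error(x):
    """Relative error made when approximating atan(x) by x."""
    return abs(math.atan(x) - x) / abs(math.atan(x))

# At the edge of a 32 deg horizontal FOV (16 deg from the optical
# axis), the small-angle approximation is off by roughly 2.7%,
# i.e., the same order of magnitude as the 3% positioning error.
edge = math.tan(math.radians(16.0))
```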

Above an evenly sloping ground
The previous experiments assumed the ground height to be known, since it sets the conversion gain applied to the fused output signal S_fused. The same experiment was therefore repeated above a sloping ground (see figure 16), so that the robot's height differed markedly from the calibration height: it varied in the range of ±74 mm and started with an offset of +96 mm with respect to the calibration height. As shown in figure 16(b), the traveled distance was always underestimated because the robot was always higher than the calibration height. However, the robot was still able to return to its starting position with a maximum error of only 45 mm (at t = 36.5 s) for a disturbance of 210 mm (i.e., 10.5% of the flown distance).
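This systematic underestimation is what a fixed conversion gain predicts. Under a small-angle approximation (our simplification of the actual angular-to-metric conversion), a displacement read with a gain calibrated at one height scales with the true height:

```python
def estimated_distance(true_distance, true_height, calib_height):
    """Metric distance recovered from the angular displacement seen by
    the eye when a fixed calibration height sets the conversion gain.
    Small-angle approximation: angle = distance / height (our
    simplification of the actual conversion)."""
    angle = true_distance / true_height  # angular displacement actually seen
    return angle * calib_height          # metric reconstruction, fixed gain
```

With the numbers of this experiment (calibration at 390 mm, robot flying up to about 96 mm higher), a 210 mm displacement is read as roughly 170 mm, hence the underestimation whenever the robot is above the calibration height.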

Tracking
The robot's tracking performances are presented in this subsection. Three different tracking tasks were tested:
• tracking a moving textured panel (see section 5.3.1).
• tracking a moving textured panel with 3D objects placed on it (see section 5.3.2).
• tracking moving hands perceived above a stationary textured panel (see section 5.3.3).

Panel tracking
In this experiment, the panel was moved manually and the robot's reference position setpoint X⋆ was kept at zero. The robot faithfully followed the movements imposed on the panel. The few oscillations which occurred were mainly due to the robot's dynamics rather than to visual measurement errors. Each of the panel's movements was clearly detected, as shown in figure 17(b), although a proportional error in the measurements was sometimes observed, as explained above.

Tracking a moving rugged ground
In the second test, some objects were placed on the previously used panel to create an uneven surface. The robot's performances on this new surface were similar to those observed on the flat one, as depicted in figure 18. The visual error was not measured as accurately as previously over the flat terrain because of the changes in the height of the ground, but the robot's steady-state position was very similar to that of the panel. The maximum steady-state error, at t = 19 s, was only 32 mm.

Toward figure-ground discrimination: hand tracking
The last experiment consisted of placing two moving hands between the robot and the panel. Markers were also placed on one of the hands in order to compare the hand and robot positions: the results of this experiment are presented in figure 19. As shown in the video in the Supplementary Data and in figure 19, the robot faithfully followed the hands when they were moving together in the same direction. By comparing the robot position error seen by the active CurvACE with the ground-truth error, it was established that the robot tracked the moving hands accurately with a maximum estimation error of 129 mm.
Our visual algorithm selects the greatest contrasts in order to determine its linear position with respect to an arbitrary reference position. Therefore, when the hands were moving above the panel, some contrasts stronger than those of the hands were detected by the visual sensor, which decreased its tracking accuracy. However, this experiment showed that the robot is still able to track an object when a non-uniform background is moving differently, without having to change the control strategy or the visual algorithm: the robot simply continues to track the objects featuring the greatest contrasts.

Conclusion
In this paper, we have described the development and performances of a vibrating small-scale cylindrical curved compound eye, named active CurvACE. "Active" here means that miniature periodic movements were added in order to improve CurvACE's spatial resolution in terms of the localization of visual objects encountered in the surroundings. By imposing oscillatory movements (at a frequency of 50 Hz) with an amplitude of a few degrees (5°) on this artificial compound eye, it was endowed with hyperacuity, i.e., the ability to locate an object with much greater accuracy than that imposed by the interommatidial angle. Hyperacuity was achieved here by 35 LPUs applying the same local visual processing algorithm across an ROI of active CurvACE consisting of 8 × 5 artificial ommatidia. The novel sensory fusion algorithm used for this purpose, which was based on the selection of the 10 highest contrasts, enables the active eye (2D FOV: 32° by 20°) to assess its displacement with respect to a textured environment. We even established that this new visual processing algorithm is a first step toward endowing robots with the ability to perform figure/ground discrimination tasks.

Figure 19. Tracking moving hands above a textured panel. In this experiment, the textured panel was kept stationary while two hands were moving horizontally together between the robot and the panel. (a) Picture of the hands during the experiment conducted with VICON. Markers are only required to monitor the hands' position; in the video provided in the Supplementary Data, we show that the robot's performances are similar without those markers. (b) Plots of the textured panel's position (red), the robot's position (blue), and the hands' position (green), all measured by VICON. The robot followed the moving hands faithfully over the ground. (c) Comparison between the error measured by the eye (blue) and the ground-truth error provided by the VICON system (green). The latter is equal to (Hand_Position) − (Robot_Position).

By applying miniature movements to a stand-alone artificial compound eye, we developed a visual odometer yielding a standard error of 7.8% when subjected to quasi-translational movements of 1 m. Moreover, active CurvACE enabled a robot to hover and return to a position after perturbations with a maximum error of 2.5 cm in the experiments over flat terrain, which is state-of-the-art performance in aerial robotics (see table 2), although our study concerns a tethered robot flying indoors. All the solutions adopted in this study in terms of practical hardware and computational resources are perfectly compatible with the stringent specifications applying to low-power, small-sized, low-cost micro-aerial vehicles (MAVs). Indeed, thanks to active CurvACE, we achieved very accurate hovering flight with few computational resources (only two 16-bit microcontrollers) and few pixels (only 8 × 5). However, the 1D visual scanning presented here should be extended to 2D scanning so as to enable free flight, which would require a completely new mechanical design. In addition, the architecture of the 2D visual processing algorithm will have to be revised to make it compatible with low computational overheads. It is worth noting that the gaze stabilization reflex implemented onboard the present robot requires very few computational resources and allows CurvACE to process visual information resulting from purely translational movements. In addition, recent robotic studies have shown that gaze stabilization can be a useful means of achieving automatic heading [45] and vision-based hovering [46]. The MAVs of the future (e.g., [47]) will certainly require very few computational resources to perform demanding tasks such as obstacle avoidance, visual stabilization, target tracking in cluttered environments, and autonomous navigation.
Developing airborne vehicles capable of performing these highly demanding tasks will certainly involve the use of the latest cutting-edge technologies and bio-inspired approaches of the kind used here.