Algorithm on Converting a 2D Scanning LiDAR to 3D for use in Autonomous Indoor Navigation

Indoor navigation schemes for robotic applications may rely on a suite of sensors to perform essential robot localization and mapping functions. In an autonomous robotic vehicle this is even more important, as a lost robot has no more utility than a simple paperweight. A suite of navigation sensors can provide rich data streams for navigating the environment, from sensors such as the Velodyne HDL-64E [1,2] multi-laser Light Detection and Ranging (LiDAR) sensor, or a 4K, 60 frames per second video stream such as the one on the newly released iPhone X [3]. The data throughput for these can be much more than 2 MB per second. This amount of data can support a well-defined localization and mapping solution, but the small form factor of an autonomous indoor UAS may not be able to accommodate it. Fusing multiple sources, or storing the large amounts of data collected, can be debilitating for a small processing system such as a Raspberry Pi or Arduino. This presents a need for a minimalistic approach to the number of sensors used and the amount of data collected.


Introduction
Minimizing the number of sensors used in an autonomous UAS can bring many benefits: longer flight time, reduced weight, increased maneuverability, and room to add additional capability. In order to reduce the number of sensors, an operational design must be calibrated, characterized, and evaluated against the desired operational performance characteristics.
There are two main genres of LiDAR sensor characterization: intrinsic and extrinsic. Intrinsic calibration and characterization can be accomplished concurrently if the sensor equipment allows enough access to the internal parameters. Okubo presented a quality characterization of a Hokuyo sensor in 2009 [4]. In that work, Okubo characterized the transfer rate of the sensor output, as well as the effects of drift, surface properties, and incident angle on the sensor measurements. He went on to describe the concept of a "mixed" pixel, the result of taking a range measurement from a sensor return that has landed on two distinct surfaces, so that the resulting range is a combination of the two. Finally, Okubo recommends using statistical analysis on the raw LiDAR data to perform mapping. Intrinsic calibration has been described thoroughly [4][5][6], while extrinsic calibration can take on many forms and has been described extensively [7][8][9][10][11][12][13][14][15][16]. Each application may need a distinct extrinsic calibration procedure, and each paper underlines the need to perform this extrinsic calibration and corresponding characterization to utilize the sensor effectively.
This paper assumes that a user-defined UAS sensor suite configuration used for localization and mapping can be minimized. The next major assumption is that a newly modified 3D scanning LiDAR capability will meet the needs of the user if calibrated properly. The final major assumption is that other forms of beam steering are impractical and that the use of a reflective superstructure will provide an avenue to the required performance parameters. The Procedure section describes the overall methodology for converting a 2D scanning LiDAR sensor into an effective 3D sensor. It simultaneously discusses a relevant set of specific experiments recommended for characterization of both the original sensor and of the newly modified system. The Calibration Procedure section then summarizes the general calibration procedure as a conclusion to the first two sections. This paper uses an application of the recommended procedure as a running example, illustrated throughout the paper. A much more detailed application based around the Hokuyo UST-20LX 2D scanning laser rangefinder can be examined in [1].


Characterization of Base Sensor
This first major step is to characterize the original sensor to establish a baseline model upon which to base any modifications. In this instance the Hokuyo UST-20LX has been chosen, as seen in Figure 1 [24]. This particular model scans a 270° planar area using 1081 measurements at 40 scans per second.
In order to characterize the sensor, a motion capture system such as a VICON chamber is recommended to reduce measurement error. Once a base sensor system and position collection technique are implemented, a series of tests can be performed. The parameters of most interest are the mean and standard deviation of the range returns, the beam divergence rate, the effect of target color on the range returns, and the effect of target orientation on the range returns.
Sometimes only a small percentage of a LiDAR beam will land on the object being ranged. This depends on the size of the object, the distance to the object, and the beam divergence angle. Only a fraction of the photons reaching the illuminated target will make it back to the sensor, as described by eqn. (1), as simplified by Richard and Cain [19],
where P_detector is the power received by the LiDAR detector, τ_a is the atmospheric transmission rate, τ_o is the transmission rate of the optics, D_R is the receiving aperture diameter of the detector, ρ_t is the target surface reflectivity, P_t is the laser transmitted power, θ_d is the laser transmitted beam angular divergence, θ_R is the target surface angular dispersion, dA is the smallest of the angular area of the target, the area of the field of view of the sensor, and the area of the beam on the target, and R is the distance between the source and the target.
If enough photons land on the sensor then a range measurement can be taken. The range function, which can be seen in eqn. (2), is a function of the target coordinates, P_P, and some unknown noise, v. The result is the estimated range, r̂(x). Taking eqn. (2) and applying the range error associated with target angle, E_a, target color, E_c, and target range, E_r, then gives a correctable range, r(x), as seen in eqn. (3), where r(x) is the error-corrected result of the range function estimate, r̂(x).
Because of these relationships it is important to understand the beam divergence angle. An example test scenario and beam spot shape are shown in Figure 2, which may have similar results as drawn in Figure 3 and in Table 1.
At each of the test scenes identified in this paper there is also an opportunity to collect data and evaluate the statistics. Recording the range data collected during this test for the Hokuyo UST-20LX sensor resulted in the statistics shown in Table 2.
The next major experimental test scene quantifies the effect of target angle, with respect to the sensor, on the returned LiDAR range. The resulting error may follow an error function proportional to the cosine of the angle, as in eqn. (4), where E_a is the range error caused by the target angle, A_a is a proportionality constant, B_a is a constant offset, and θ_N is the angle between the target normal and the LiDAR beam. An example test scenario is presented in Figure 4, which shows multiple LiDAR beam paths contacting the target boards. The target boards represent a spectrum of orientation angles ranging between -80° and 80° with respect to the LiDAR beam.
Using a position measuring device such as a calibrated VICON chamber, the range error introduced by the target angle can be measured. An example can be seen in Table 3.
The last major error source is the target's color. This part of the characterization can be exhaustive, so it is recommended to tailor it to the materials and colors of the targets to be used to calibrate the modified sensor. The resulting error model, eqn. (5), uses a proportionality constant, A_c, and a constant offset, B_c, and assumes a linear error profile along the operational range of interest.
The recommended test setup mimics the beam divergence scenario in Figure 2. Table 4 shows an example data set used when evaluating the Hokuyo UST-20LX sensor.
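The error corrections of eqns. (3)-(5) can be sketched in code. The functional forms follow the text (a cosine model for the target-angle error and linear models for the color- and range-dependent errors), but the coefficients A_a, B_a, A_c, B_c, A_r, and B_r and the subtraction sign convention are hypothetical placeholders that would, in practice, come from fitting the characterization data in Tables 2-4:

```python
import math

# Hypothetical coefficients: in practice these would be fit to the
# characterization data collected for the specific sensor.
A_a, B_a = 0.004, 0.001    # target-angle error model (eqn. 4), metres
A_c, B_c = 0.0005, 0.002   # target-color error model (eqn. 5), metres
A_r, B_r = 0.0002, 0.0     # range-dependent error, assumed linear, metres

def angle_error(theta_n):
    """E_a: error proportional to the cosine of the angle theta_N
    between the target normal and the LiDAR beam (eqn. 4)."""
    return A_a * math.cos(theta_n) + B_a

def color_error(r_est):
    """E_c: linear error profile over the operational range (eqn. 5)."""
    return A_c * r_est + B_c

def range_error(r_est):
    """E_r: residual range-dependent bias, assumed linear here."""
    return A_r * r_est + B_r

def corrected_range(r_est, theta_n):
    """Eqn. (3) sketch: remove the characterized error sources from the
    raw range estimate r_hat(x) to obtain the corrected range r(x).
    Subtracting (rather than adding) the errors is an assumption."""
    return r_est - angle_error(theta_n) - color_error(r_est) - range_error(r_est)
```

For example, `corrected_range(5.0, 0.0)` corrects a raw 5 m return from a board facing the sensor head-on (θ_N = 0).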
Once the characterization of the original sensor is finalized, a study of the required operational needs and the modifications necessary to meet them can be performed.

Identify effects of modification
The effects of the modification will depend greatly on the modifications imposed, and a recommended set of tests for every case would be too large to list in this paper. Figure 5 represents a possible modification. The base sensor is in the center, colored gray. The superstructure is in a lighter gray, with the mirrored surfaces highlighted in red.
From the prototype design, the modifications may limit the original sensor's field of regard, so a new field of regard needs to be identified, shown in green in Figure 6. To develop the new field of regard, the characterization process outlined in Section I-A should be repeated using the proposed modifications.

Develop parameter estimation for extrinsic characteristics
For each element within the identified field of regard of the modified sensor, some additional steps need to be taken in order to transform the raw data of the sensor into something usable. The sensor itself may not be programmable to incorporate the effects of the modification into the sensor output, and therefore some post-processing needs to occur. The first step is to calculate a range equation for the potentially different beam path caused by the sensor modifications.
The range function, as seen in eqn. (6), is now a function of the target coordinates, P_P, the base azimuth of the beam from the sensor origin, θ_B, the azimuth of the deflection angle due to the mirror, θ_a, the elevation of the deflection angle, φ, the distance to the deflection point on the superstructure mirror from the sensor origin, d, and some unknown noise, v. The result is the estimated range for the modified sensor, r̂(x):

r̂(x) = f(P_P, θ_B, θ_a, φ, d, v)     (6)

Figures 7 and 8 present planar views of the X-Y and X-Z planes, respectively, each describing the range function from a different view. Figure 7 shows the base azimuth angle, θ_B, in green, ending at point P_L1, which is the starting point of the intercept vector of the target. P_L1 is also the point of deflection at distance d from the origin. The deflection angles, θ_a and φ, are shown in Figure 8. P_L2 is used to create a vector from P_L1 which intercepts a target board described by P_P1, P_P2, and P_P3 at an unknown point P_target.
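Under the geometric interpretation in Figures 7 and 8, the range function of eqn. (6) can be sketched as a ray-plane intercept. This is an illustrative reconstruction rather than the authors' implementation; it assumes the sensor scans in the X-Y plane and that the sensor reports the total optical path length:

```python
import numpy as np

def modified_range(theta_B, theta_a, phi, d, p1, p2, p3):
    """Sketch of the modified range function (eqn. 6).

    The beam leaves the sensor origin in the scan (X-Y) plane at base
    azimuth theta_B, reaches the mirror deflection point P_L1 at
    distance d, then continues with azimuth theta_a and elevation phi
    until it intercepts the target board plane defined by the corner
    points p1, p2, p3 at the unknown point P_target."""
    P_L1 = d * np.array([np.cos(theta_B), np.sin(theta_B), 0.0])
    # Unit direction of the deflected beam from the two deflection angles.
    u = np.array([np.cos(phi) * np.cos(theta_a),
                  np.cos(phi) * np.sin(theta_a),
                  np.sin(phi)])
    n = np.cross(p2 - p1, p3 - p1)           # target plane normal
    t = np.dot(n, p1 - P_L1) / np.dot(n, u)  # ray-plane intercept distance
    if t <= 0:
        raise ValueError("beam does not intercept the target plane")
    # Sensor reports the total path: origin -> mirror -> target.
    return d + t
```

A vertical board one metre from the origin, ranged with no deflection, returns the expected 1 m total path.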
Taking this point-plane intercept concept [26,27] a little further, it is possible to create another relevant test. Figure 9 shows an example test scenario using the laser source, bouncing the LiDAR beam off a mirror, and intercepting a multitude of target boards. Only one target should be ranged against at a time, but each range measurement needs to range against a target at a different orientation. These range measurements, r(x), along with the orientation of the target board, θ, can be used to create a Jacobian matrix, H, as seen in eqn. (7), where δθ and δφ are small perturbations on the scale of 1×10⁻³ and n is the number of target board orientations ranged against. A standard Recursive Least Squares (RLS) parameter estimation technique [27][28][29] can now be developed using this Jacobian, as in eqn. (8), where Δθ and Δφ are the updates to the current estimates, driven by the residuals (z − r̂(x)) between the error-corrected range measurements given by the sensor, z = r(x), and the estimated range, r̂(x). The RLS algorithm iteratively minimizes these residuals until a user-defined threshold is reached. Figure 10 shows a series of 10 trials estimating the azimuth, θ_B, in blue, and the elevation, φ, in magenta, for one of the deflecting mirrors illustrated in the prototype in Figure 5. In this instance the correction factors E_c, E_a, and E_r are applied and the results are shown in bold. It can be seen that the corrections applied do in fact alter the mean and decrease the variance of the parameter estimates.
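The estimation step can be sketched with a finite-difference Jacobian built from the δθ and δφ perturbations of eqn. (7). This sketch uses a batch Gauss-Newton least-squares update in place of the paper's recursive formulation, and the range model and target-board parameterization are toy placeholders, not eqn. (6) itself:

```python
import numpy as np

def model(theta, phi, board):
    """Toy stand-in for the range function of one deflected beam
    against a target board parameterized by (a, b)."""
    a, b = board
    return a * np.cos(theta) * np.cos(phi) + b * np.sin(phi) + 2.0

# "Measured" ranges z simulated from hypothetical true mirror angles.
true_theta, true_phi = 0.30, -0.15
boards = [(1.0, 0.2), (0.8, -0.3), (1.2, 0.5), (0.9, 0.1)]
z = np.array([model(true_theta, true_phi, b) for b in boards])

theta, phi = 0.1, 0.0   # initial estimates of the deflection angles
delta = 1e-3            # perturbation scale noted in the text
for _ in range(50):
    r_hat = np.array([model(theta, phi, b) for b in boards])
    # Finite-difference Jacobian H (eqn. 7): one row per board
    # orientation, one column per estimated parameter.
    H = np.column_stack([
        (np.array([model(theta + delta, phi, b) for b in boards]) - r_hat) / delta,
        (np.array([model(theta, phi + delta, b) for b in boards]) - r_hat) / delta,
    ])
    dx, *_ = np.linalg.lstsq(H, z - r_hat, rcond=None)  # least-squares update
    theta += dx[0]
    phi += dx[1]
    if np.linalg.norm(dx) < 1e-9:   # user-defined convergence threshold
        break
```

With a zero-residual fixed point, the iteration recovers the simulated true angles regardless of the small forward-difference error in H.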
Once this is performed for each individual element identified in Figure 6, the final step can be completed: transforming each point along the original two-dimensional X-Y plane to the newly deflected LiDAR ranging location in 3D space. The resulting transformations lead to a 3D point cloud comprised of all the usable elements, as seen in eqns. (9) and (10), where R^n_θφ is the rotation matrix in a 3-2 sequence for the n-th beam.
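The per-beam transformation of eqns. (9) and (10) can be sketched as follows, assuming a Z-then-Y (3-2) rotation sequence and that the reported range includes the origin-to-mirror distance d; both assumptions are illustrative rather than taken from the paper:

```python
import numpy as np

def rot_z(t):
    """Rotation about the third (Z) axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(t):
    """Rotation about the second (Y) axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def deflected_point(r_meas, theta_B, theta_hat, phi_hat, d):
    """Map one raw planar return into 3D: travel d along the base
    azimuth theta_B to the mirror point P_L1, then the remaining
    (r_meas - d) along the deflected direction given by a 3-2
    (Z-then-Y) rotation of the beam axis, using the estimated
    angles theta_hat and phi_hat from the parameter estimation."""
    P_L1 = d * np.array([np.cos(theta_B), np.sin(theta_B), 0.0])
    R = rot_z(theta_hat) @ rot_y(-phi_hat)   # 3-2 sequence rotation
    return P_L1 + (r_meas - d) * (R @ np.array([1.0, 0.0, 0.0]))
```

Applying this to every usable element of a scan yields the 3D point cloud described in the text.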

Calibration Procedure
The calibration procedure restates the results of the previous steps in the methodology, including the characterization of the sensor, data filtering, and the validation of the range and RLS functions. The algorithm is intended to wrap all the previous steps into a generalized, but sufficiently detailed, pattern so that the process can be replicated with new and unique modifications.

Calibration algorithm
1. Characterize the operationally relevant parameters of the original sensor to form a baseline.
2. Attach the fabricated mirror structure to the 2D scanning LiDAR sensor. Arrange a placement of VICON markers on the structure such that minimal or no erroneous reflections will be seen by the sensor. Create a VICON object from these points using the corresponding chamber and markers.
3. Take initial measurements of each expected deflection point of the laser on the structure. It is preferable to take range measurements before any reflective surfaces are attached to the structure and then to add an offset to the returned range measurements due to the thickness of the reflective surface being added.
4. Run data collection (10k+ data points) in a known environment using the estimated d, φ, and θ to calculate the initial range vector measurements, r. Eliminate any range scan elements that return with abnormal or inconsistent range statistics.
5. Establish the transformation between the sensor object inside the fabricated mirror structure and the point of origin of the laser source (laser base frame).
6. Convert target board coordinates with respect to the laser base frame.
7. Place target in desired location that allows for the first set of elements in the sensor laser beam to intercept the target near the center of mass. Collect range and position data. Adjust target to a new unique orientation such that the same elements of the sensor laser beam fall near the center of mass of the target. Repeat for a minimum of three target positions.
8. Move target to a new location such that the next series of elements in the sensor beam intercept near the target's center of mass.
9. Adjust target to a new unique orientation such that the same elements of the sensor laser beam fall near the center of mass of the target. Repeat for a minimum of three target positions then move to the next series of elements in the sensor beam.
10. Repeat the previous step until all desired elements of the laser scan field of regard have data collected as intercepted at a minimum of three targets in unique positions.
11. Run the RLS algorithm separately for each beam to calculate the estimated φ and θ.
12. Calculate the transformation for each scan element using the measured d and the calculated φ and θ from the RLS algorithm.
13. Implement the transformations calculated and verify the expected output. Original 2-dimensional planar point cloud results from the sensor output should be realized as the 3-dimensional point cloud of the target environment.
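Step 4's elimination of scan elements with abnormal or inconsistent range statistics can be sketched as a simple per-element filter; the spread threshold and the 0.06-20 m operating bounds used here are assumed values for a UST-20LX-class sensor:

```python
import numpy as np

def filter_scan_elements(scans, max_std=0.01, valid=(0.06, 20.0)):
    """scans is an (n_scans, n_elements) array of range returns.
    Keep only elements whose returns are consistent (small standard
    deviation across scans) and whose mean lies inside the sensor's
    valid range; both thresholds are assumed placeholder values."""
    mean = scans.mean(axis=0)
    std = scans.std(axis=0)
    return (std < max_std) & (mean > valid[0]) & (mean < valid[1])

# Three scans of three elements: the second element is inconsistent and
# the third returns below the minimum range, so only the first survives.
scans = np.array([[1.00, 3.2, 0.02],
                  [1.01, 0.5, 0.02],
                  [1.00, 7.9, 0.02]])
keep = filter_scan_elements(scans)
```

The boolean mask `keep` selects which scan elements proceed to the RLS estimation in step 11.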

Disclaimer
The views expressed in this paper are those of the authors, and do not reflect the official policy or position of the United States Air Force, Department of Defense, or U.S. Government.