Cyber-enhanced canine suit with wide-view-angle three-dimensional LiDAR SLAM for indoor environments

Understanding the topographical information of a disaster site can enhance the efficiency and safety of search and rescue missions. This study describes the development of a cyber-enhanced canine suit for three-dimensional (3D) simultaneous localization and mapping (SLAM) within indoor environments. The suit weighed approximately 3.0 kg and was compatible with large dogs (>30.0 kg). It could collect dense, wide view-angle, and long-range 3D point clouds, which were processed offline to create 3D maps. The suit was equipped with a protective cover that prevented physical damage to the light detection and ranging (LiDAR) sensor from collisions. The protective cover was designed to (a) eliminate any effects on the performance of the 3D LiDAR and (b) withstand the stress of an impact at a velocity of 20.0 m/s without physical damage. Additionally, the canine suit was modified for a better fit and greater stability of the LiDAR on a wider range of dog bodies. The performance of the suit was demonstrated on real dogs in four scenarios with different space sizes, dog gait patterns, and ground layouts. The observed space size and position errors did not exceed 0.80% and 1.13%, respectively.


Introduction
Topographic and spatial information at a disaster site can be a key factor in determining the safety and efficiency of search and rescue missions. Such information includes the size of the environmental space, the presence of objects, and object alignment and orientation, among others. This advance information can indicate the characteristics of a disaster site and provide a better understanding of the conditions that are crucial to further rescue planning. Topographic data are usually visualized as three-dimensional (3D) maps for easier understanding. In addition, the size of the space or of any object in the environment can be measured on the 3D map, so that any action related to a specific object can be properly considered.
Simultaneous localization and mapping (SLAM) is commonly used to create maps and to localize ground robots or vehicles using collected topographic data. The visual SLAM approach usually uses a camera to track feature points, in conjunction with a triangulation method, to create the map. Light detection and ranging (LiDAR)-based SLAM uses a laser sensor to obtain precise distances to feature points, in conjunction with point matching, to create a map. While visual SLAM is more cost-efficient and the equipment is generally lighter, LiDAR-based SLAM is more effective and can collect a denser and more accurate point cloud [1]. Search and rescue (SAR) dogs are used in missions to search for victims because of their exceptional olfactory sense and maneuverability, which can result in quick access to victims and the prompt initiation of a rescue mission. SAR dogs can search for victims in diverse environments: on mountains, indoors, and under debris. Large and medium-size SAR dogs are used for these missions, and they are capable of carrying loads. Therefore, SAR dogs could carry equipment to collect spatial information in the area and perform SLAM. The mapping results could then be used to inform rescue personnel of possible risks in the area.
There is a variety of technical challenges that need to be considered when a SAR dog is used to collect point-cloud data on the environment. One problem is the risk of physical damage to the fragile LiDAR sensor. SAR dogs often move close to obstacles even when avoiding them; therefore, a LiDAR device placed on a canine suit could collide with these obstacles. Another challenge is building good 3D maps when the inclination and position of the LiDAR sensor change owing to motion. Canines can traverse flat spaces, slopes, and stairs within indoor environments, but changes in inclination and position affect the performance of LiDAR 3D SLAM. A third challenge pertains to the installation of a heavy LiDAR device on a canine suit. Owing to the weight of the LiDAR device, which is concentrated at the center of the dog's back, along with the slippery nature of the dog's fur, the equipment (and suit) can move in unwanted directions, potentially resulting in an ineffective orientation and inclination. These inclinations would also affect the 3D mapping process and lower the quality of the map. Therefore, for good 3D LiDAR SLAM with canines, we need to implement physical damage countermeasures, provide an adjustment mechanism to secure the LiDAR device on the canine body, reduce changes in the inclination of the canine suit, and select 3D LiDAR SLAM frameworks that are suitable for mounting on these animals.
In this study, a cyber-enhanced canine suit is proposed to provide indoor 3D LiDAR SLAM. Figure 1 shows a canine wearing the suit, which includes a 3D LiDAR scanner for collecting long-range (up to 100 m) views with a wide viewing angle (360° along the horizontal and ±15° along the vertical directions). It provides a dense 3D point cloud (20 Hz and 300,000 points/s). Additionally, there is a spatial sensor with an inertial measurement unit (IMU) to monitor the movement of the canine and a single-board personal computer (PC) to log the data. The 3D LiDAR can collect spatial and object information around the canine. When the canine wears the suit and traverses flat ground or moves up and down stairs, the LiDAR collects point-cloud data that is processed through the SLAM framework into a 3D map. Additionally, the length of the canine suit belt is adjustable, which improves the fit on the dog and limits the inclination angle (movement) of the suit and LiDAR. The 3D maps obtained using the SLAM canine suit can help us understand the topography of a disaster environment and thus enhance rescue planning at indoor disaster sites.
The main contributions of this study are as follows.
• Development of a cyber-enhanced canine suit that holds a 3D LiDAR SLAM rig. Large dogs (>30 kg) can wear the suit to collect dense, wide view-angle point-cloud data for 3D mapping in large indoor environments.
• Design of a physical-damage-protection fixing mechanism for the 3D LiDAR sensor on the canine suit. The protection mechanism has sufficient physical strength for impacts at a velocity of 20 m/s, with a highest observed stress of 3.741 × 10⁻¹² MPa.
• Demonstration of 3D mapping with a real canine in large indoor spaces. The 3D mapping was conducted for four scenarios with varied area sizes, ground shapes, and gait patterns.
• Evaluation of the 3D mapping visual quality and accuracy based on comparisons with maps generated by humans using a highly accurate commercial 3D LiDAR scanner (FARO Focus 3D). The 3D mapping results have adequate visual quality and accuracy for grasping the topological information of the space.

Related works
The utilization of SLAM and 3D reconstruction of disaster sites has been proposed by many researchers. Various methods have been used to collect data and perform SLAM, including the use of radio frequency identification (RFID) technology to correct the trajectories of humans and robots [2], reconstruction of 3D disaster scenes from unmanned aerial vehicle (UAV) images [3], identification of disaster victims with the use of visual SLAM and real-time 3D mapping [4], and online multi-robot SLAM for 3D LiDAR [5]. One of the common methods involves the use of mobile robots and tracked vehicles for 3D map construction of disaster sites [6,7]. Real-time approaches were also proposed to enhance mobile robot trajectory estimation and mapping in disaster sites via a modified iterative closest point (ICP) method [8]. A high-accuracy 3D laser scanner (Velodyne LiDAR) was also used by Jozkow to improve the mapping performance [9]. 3D SLAM for humans is a key research topic because humans have better mobility than mobile robots. Scan-matching-based approaches, which do not require motion information, have been used for 3D SLAM for humans. Kuramachi proposed an ICP-based SLAM and built 3D maps of wide areas with a handheld LiDAR scanner [10]. Koide and Nüchter used backpacks equipped with LiDAR sensors to build 3D maps [11,12]. Therefore, the odometry-free approach is a good solution for 3D SLAM for humans and animals.
Canines are another solution for visualizing disaster sites and rescue activities because they have good mobility on rubble piles. Canine augmentation technology was proposed by Ferworn to assist search and rescue operations [13,14]. Improved canine augmentation technology, including wireless communications and a wide-angle video system, was also introduced by Tran [15] and Ferworn [16]. Visualizing a disaster site using animals is an interesting approach. We have studied a cyber-enhanced rescue canine system for grasping disaster information and SAR dogs' activities [17].
Tran proposed 3D SLAM for canines [18]. Kinect sensors on canines were used to collect red-green-blue-depth (RGB-D) image streams for the reconstruction of 3D maps of disaster sites. Only the good frames were selected from the Kinect sensor's image stream to generate a 3D map of an interior hallway. However, our target environment is a large indoor space with dimensions larger than 20 m, which requires longer-range and wider view-angle point-cloud capturing. Therefore, we propose a canine suit equipped with a high-accuracy, wide view-angle, long-range 3D LiDAR rig for 3D mapping in large indoor environments.

Suit design issues for search and rescue dogs
To develop cyber-enhanced canine suits for SAR dogs, it is necessary to design a canine suit that allows SAR dogs to search concrete rubble safely for a long time. When SAR dogs search for victims in concrete rubble, there is a risk that the canine suits or other wearable devices will get caught on the rebars of the rubble and prevent the SAR dogs from moving. For this reason, handlers hesitate to put canine suits or wearable devices on their dogs during search and rescue missions. In addition, when SAR dogs wear canine suits, the weight and design of the suits may impair the dogs' performance; heavy or unbalanced suits, in particular, significantly reduce it.
JRDA handlers pointed out the following issues for developing new canine suits:
1. A lightweight, well-balanced canine suit that does not impede the canine's motion.
2. A canine suit design that does not get caught on the surrounding environment.
3. Robustness of the canine suit when SAR dogs hit it against rubble.
In this paper, the first issue is addressed by developing canine suits weighing less than 10% of the SAR dog's weight. The second issue is addressed by embedding the devices, cables, and batteries other than the sensors inside the canine suit. The third issue is addressed by developing a robust sensor mount. Concrete solutions to the first and third issues are described in the following sections. A solution to the second issue is important for adapting the suit to real rescue missions; however, this paper does not provide a complete solution, and we discuss it further in the Discussion section.

Requirements of canine suit for 3D SLAM
A Ruffwear Singletrak™ Pack for dogs of large/extra-large (L/XL) sizes was used as the main part of the SLAM canine suit to allow the attachment of equipment, such as the LiDAR sensor and spatial sensor, to the canine. After a disaster, indoor environments are filled with scattered objects and furniture that can block pathways. Although a SAR dog can easily maneuver through these obstacles, it could strike the devices and equipment against them and damage them owing to its proximity to obstacles during evasive movements. Moreover, to obtain high-accuracy, dense, wide-angle point-cloud data, a 3D laser scanner (Velodyne VLP-16) was used to gather data at the disaster site. This sensor is vulnerable to physical damage. Therefore, the device must be modified to enhance its protection against impact damage from canine movements.
Fixing the suit on a SAR dog was also considered. The LiDAR sensor was placed on the dog's back as close as possible to its center of gravity to avoid restricting the dog's movement and mobility. However, owing to the heavy weight of the LiDAR, the distribution of the weight on the dog's back can cause unintentional side shifting of the suit. Changes in inclination can affect the mapping process by shifting the viewing angle of the 3D LiDAR, resulting in failures to capture the necessary point clouds of objects. In addition, the variety of breeds and sizes of SAR dogs requires a flexible suit that fits each dog. The suit design therefore uses length-adjustable belts at the neck, chest, and abdomen of the SAR dog.
Each search mission for a canine normally lasts 10-15 min. Accordingly, 3-4 missions would occur in a time span of 2 h, so the load on the canine should be kept minimal to avoid exhaustion. Therefore, the total load should not exceed 10% of the dog's weight, which is appropriate for the duration of a typical search and rescue mission [17].
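As a simple illustration of this guideline, the payload budget for a given dog can be checked as follows. This is a minimal Python sketch; the 40 kg dog weight is an assumption matching the dog used later in the experiments.

    def max_payload_kg(dog_weight_kg: float, ratio: float = 0.10) -> float:
        """Maximum allowed payload under the 10%-of-body-weight guideline [17]."""
        return dog_weight_kg * ratio

    suit_mass_kg = 3.0    # total suit mass reported in this study
    dog_weight_kg = 40.0  # assumed dog weight (the German Shepherd in the experiments)

    budget = max_payload_kg(dog_weight_kg)
    print(f"budget: {budget:.1f} kg, suit: {suit_mass_kg:.1f} kg, "
          f"margin: {budget - suit_mass_kg:.1f} kg")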

Development of protective measures
Owing to the risk of physical damage to the LiDAR sensor, a protective sensor cover had to be developed from a material with high impact resistance as well as high transparency, so that the LiDAR field-of-view is not blocked and the performance of the laser scanner is not inhibited. We developed a transparent cover composed of three main parts: the top, middle, and base parts (Figure 2). The middle part covers the entire laser scanner field-of-view (360° horizontally and 30° vertically), i.e. the most vulnerable part of the LiDAR itself. Thus, the selection of the most appropriate material for the cover was critical.
The candidate materials for the middle part of the cover (Figure 3) included transparent acrylic, polyvinyl chloride (PVC), and polycarbonate (PC). The properties and characteristics of each material are listed in Table 1. Because a LiDAR operates based on laser reflections to calculate the distance to an object, the light-transmittance property of each material can be estimated by observing the change in the number of points in a cloud detected by the LiDAR in a static environment with and without the cover in the LiDAR's field-of-view. According to Table 1, polycarbonate has the highest impact resistance as well as an acceptable transmittance value. While acrylic has the highest light-transmittance value, its impact resistance is the lowest among the three candidates and is not enough to protect the sensor from physical harm. Conversely, PVC has the lowest light transmittance, which would affect the performance of the Velodyne sensor. Thus, polycarbonate was selected as the material for the middle cover.
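This measurement reduces to a simple ratio of per-scan return counts with and without the cover in the same static scene. The following Python snippet sketches the calculation; the point counts are hypothetical, not the measured values behind Table 1.

    def relative_transmittance(points_without_cover: int, points_with_cover: int) -> float:
        """Estimate relative transmittance as the ratio of LiDAR returns
        detected with and without the cover in the same static scene."""
        return points_with_cover / points_without_cover

    baseline = 28_500       # hypothetical returns per scan, no cover
    with_pc_cover = 27_900  # hypothetical returns per scan, polycarbonate cover

    print(f"relative transmittance: {relative_transmittance(baseline, with_pc_cover):.3f}")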
The top and base parts of the cover were used to hold the spatial sensor (with IMU and global navigation satellite system (GNSS) signal devices) and to fix the sensor to the main part of the suit, respectively. These parts were 3D printed in acrylonitrile butadiene styrene (ABS). While they did not cover the most vulnerable part of the sensor, they were also required to protect the device from damage.
The computer-aided design (CAD) file of each part was tested in the SOLIDWORKS simulator to assess the stress distributions. The impact of each part and of the assembly against a rigid surface was evaluated with a drop test in which the impact velocity was set to 20 m/s and the impact direction was from the side toward the LiDAR, to simulate the dog hitting obstacles while moving. The results are shown in Figure 4. The stress distributions of (a) the individual covers and (b) the assembled end-product were investigated. In the individual drop tests, the maximum stresses in the top and base parts were 4.093 × 10⁻¹² MPa and 8.154 × 10⁻⁴ MPa, respectively. For the assembly, the maximum stress observed was 3.741 × 10⁻¹² MPa. The maximum tensile strength of the ABS was 48.3 MPa, and the average running velocity of a canine is in the range of 4-5 m/s (well below the 20 m/s used in the simulation). Thus, the developed cover was assumed to be practical and safe for general search and rescue missions.
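As a minimal sanity check of these margins, the simulated maximum stresses can be compared against the ABS tensile strength (all values copied from the text; this is an illustration, not a structural analysis).

    abs_tensile_strength_mpa = 48.3  # maximum tensile strength of ABS (MPa)

    max_stress_mpa = {
        "top (individual)": 4.093e-12,
        "base (individual)": 8.154e-4,
        "assembly": 3.741e-12,
    }

    for part, stress in max_stress_mpa.items():
        # Safety factor = material strength / simulated peak stress.
        print(f"{part}: safety factor ≈ {abs_tensile_strength_mpa / stress:.3g}")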

Devices
The main devices in the suit included the Velodyne VLP-16, a spatial sensor with an IMU and a GNSS signal receiver used to monitor the movement of the canine, a WiFi router, and an UP board™ running Ubuntu Server 16.04 for logging point-cloud data. In addition, the suit contained two 6.6 V LiFe batteries for the Velodyne sensor and a 5 V/2.4 A mobile battery for the UP board.

Specifications of canine suit
The developed canine suit is shown in Figure 5, and its specifications are listed in Table 2. The weight of the suit was in the range of 2950-3000 g. Components such as the batteries, router, Velodyne circuit board, and UP board PC were installed inside the left and right side bags of the canine suit in Figure 5. The LiDAR (Velodyne VLP-16) and spatial sensor were fixed on the back of the canine suit. Figure 6 shows the layout and weight balance of the components inside the left and right side bags; these components were immobilized inside the bags, and the suit was balanced equally on both sides. Following the guideline that the load on a canine should not exceed 10% of its weight, this suit can be used on large dogs (30 kg or more). The maximum runtime of the suit is approximately 3.5 h, although the expected runtime for each mission is 15-20 min. The suit was also fitted with elastic belts, adjustable in length at the neck, chest, and abdomen, to fit a wide range of canine bodies and to prevent the suit from shifting when placed on a dog.

Data logging device
The Velodyne VLP-16 laser scanner was used to collect the point-cloud data, which was then logged by the UP board PC. The Velodyne scanner has 16 channels, with 360° horizontal and 30° vertical fields-of-view. It can capture long-distance views (up to 100 m away) and produce a dense point cloud of up to 300,000 points/s. With this sensor, we obtained dense, wide-angle point clouds.
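The logging software itself is not detailed in the paper. A minimal sketch of such a logger, assuming a ROS 1 setup in which the standard velodyne_pointcloud driver publishes sensor_msgs/PointCloud2 messages on /velodyne_points (topic name, middleware, and output file are assumptions), could look like this:

    #!/usr/bin/env python
    import rospy
    import rosbag
    from sensor_msgs.msg import PointCloud2

    bag = rosbag.Bag("canine_run.bag", "w")  # hypothetical output file

    def on_cloud(msg):
        # Store each incoming scan for offline SLAM processing.
        bag.write("/velodyne_points", msg, msg.header.stamp)

    rospy.init_node("vlp16_logger")
    rospy.Subscriber("/velodyne_points", PointCloud2, on_cloud, queue_size=10)
    try:
        rospy.spin()
    finally:
        bag.close()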

SLAM framework
LiDAR Odometry and Mapping (LOAM), proposed by Zhang and Singh, constitutes one of the best SLAM frameworks for 3D point clouds [19]. It can create a low-drift, robust 3D map online and can effectively handle the 3D point clouds from the Velodyne sensor without the need for loop closure to correct drift. LOAM was ranked third on the Karlsruhe Institute of Technology (KITTI) odometry benchmark (as of July 2019) with a minimum translational error of 0.57%. Even though LOAM can perform SLAM online, the mapping process was performed offline owing to the limitations of the single-board PC in the suit.
The LOAM algorithm does not require motion (odometry) data to operate and only needs point clouds as input. The algorithm first performs LiDAR odometry on the 10 Hz input point cloud. The odometry estimates are then used along with scan matching and point registration in the LiDAR mapping stage. The output of LOAM includes the trajectory and the 3D map.
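Central to both stages is LOAM's curvature-based feature extraction, which classifies points on each scan ring as edge or planar features. The following Python sketch illustrates that step under simplifying assumptions; the neighborhood size and feature counts are illustrative, not the published parameters.

    import numpy as np

    def loam_smoothness(scan_line: np.ndarray, k: int = 5) -> np.ndarray:
        """Smoothness term c from LOAM [19] for one scan ring.

        scan_line: (N, 3) points ordered by azimuth. A large c indicates an
        edge point; a small c indicates a planar point."""
        n = scan_line.shape[0]
        c = np.full(n, np.nan)
        ranges = np.linalg.norm(scan_line, axis=1)
        for i in range(k, n - k):
            diffs = scan_line[i - k:i + k + 1] - scan_line[i]
            c[i] = np.linalg.norm(diffs.sum(axis=0)) / (2 * k * ranges[i])
        return c

    def select_features(scan_line: np.ndarray, n_edge: int = 2, n_planar: int = 4):
        """Pick the sharpest points as edge features and the smoothest as planar."""
        c = loam_smoothness(scan_line)
        order = np.argsort(c)                     # NaNs (ring borders) sort last
        valid = order[~np.isnan(c[order])]
        return valid[-n_edge:], valid[:n_planar]  # edge indices, planar indices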

Indoor environment condition and size
A large indoor hall (Figure 7) filled with objects was used as the experimental environment. The hall was approximately 23 m wide and 35.5 m long. Within the hall, there was a small space (4 m wide and 6 m long) covered by motion capture sensors that tracked the movement of the dog as the ground truth for the experiment. In addition, a commercial-grade 3D scanner (FARO Focus 3D), which was operated by humans and has millimeter-order accuracy, was used to create a precise 3D map of the environment as the ground truth for the obtained 3D map.

Object placement
In the experimental space, there was a collection of tables, chairs, small cars, a projector screen, a cabinet, and rows of posters, as shown in Figure 8. There were also reference objects whose sizes were measured to allow physical comparison with the sizes of the corresponding objects on the generated 3D map. A set of stairs was also placed in the motion capture area (as shown in Figure 8(b)) for the canine to climb.

Canine subjects
The dog used in this experiment was a German Shepherd (Canis lupus familiaris) that had received obedience training as a working dog, similar to that received by SAR dogs. The two-year-old dog weighed approximately 40 kg. Its neck, chest, and stomach circumferences were 50, 81, and 68 cm, respectively.

Experimental procedure
All experimental procedures conformed to the 'Regulations for Animal Experiments and Related Activities at Tohoku University', were reviewed by the Institutional Laboratory Animal Care and Use Committee of Tohoku University, and were approved by the President of the University.
The canine was equipped with the cyber-enhanced SLAM canine suit and was instructed to move around within the environment. The experiment was divided into the following four dog movement scenarios.

• Scenario A: Walking in small spaces
The dog started walking from a set starting point, moved counterclockwise around the stairs for two rounds inside the motion capture space, and subsequently moved clockwise for another two rounds until it reached the designated end point, as shown in Figure 9(b).

• Scenario B: Walking up the stairs in small spaces
The dog started walking from the starting point, moved counterclockwise around the stairs two times, then climbed over the stairs and walked back on the ground. This process was then repeated one more time. Finally, the dog moved to the end point, as shown in Figure 9(c).

• Scenario C: Walking in a large space
The dog started walking from the starting point, moved counterclockwise around the stairs two times, then moved out of the motion capture area and followed the planned path around the three set markers, as shown in Figure 10. Finally, the dog returned to the end point.

• Scenario D: Running in a large space
The path for this scenario was the same as in scenario C, but instead of walking, the dog ran along the path.

Evaluation of the 3D map visual quality and accuracy
The map's visual quality, including the position, pose, and orientation of the objects in each scenario, were observed. These attributes were then compared with the map obtained from a 3D scanner with a commercial grade (FARO Focus 3D) as the ground truth to evaluate the integrity and visual accuracy of the mapping results. The robustness and accuracy of the 3D map result could also be grasped visually based on the observations of the positions of the corresponding, registered, point clouds relative to those in the ground truth data and the amount of noise in the obtained map.
The accuracy of the 3D reconstruction of the space and objects is directly related to the quality of the map itself. The sizes of the objects and spaces created by LOAM were compared with the reference (actual) sizes of the corresponding object or space. The experimental space errors in scenarios C and D relative to the ground truth were observed at four different locations, as shown in Figure 11. Moreover, the sizing errors for two objects in the environment of every scenario, the screen and the cabinet, were calculated to verify the mapping accuracy under each condition.
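Here, a sizing error is simply the relative deviation of a dimension measured on the map from its reference value; a sketch with illustrative numbers:

    def sizing_error_pct(measured_m: float, reference_m: float) -> float:
        """Percent sizing error of a dimension measured on the generated map."""
        return abs(measured_m - reference_m) / reference_m * 100.0

    # Illustrative only: a 2.0 m reference length measured as 1.95 m on the map.
    print(f"{sizing_error_pct(1.95, 2.0):.2f}%")  # -> 2.50%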

Evaluation of the dog's trajectory
The localization of the SLAM process was evaluated by calculating the position error of the estimated trajectory obtained from LOAM's odometry data. Position errors accumulate in the localization process after each scan. The error was calculated as the distance between corresponding positions (of the same subject in the same state) on the obtained and ground truth trajectories, divided by the total distance traveled.
In scenarios A and B, the canine only moved within the motion capture area; accordingly, we could directly compare the resulting trajectory with the motion capture data. However, in scenarios C and D, the part of the ground truth trajectory where the canine was outside the motion capture area was unavailable owing to the limits of the motion capture coverage. In those scenarios, the position error was therefore obtained at the end of the run, where both the trajectory and the end-point positions were available.
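A sketch of this end-point error calculation is given below; the trajectory values are illustrative, chosen to roughly reproduce the scenario A figures reported later (0.45 m over 55.8 m).

    import numpy as np

    def position_error_pct(traj_est: np.ndarray, end_gt: np.ndarray,
                           total_distance_m: float) -> float:
        """End-point position error as a percentage of total distance traveled.

        traj_est: (N, 3) estimated trajectory from LOAM odometry.
        end_gt:   (3,) ground truth end point (e.g. from motion capture)."""
        end_error_m = np.linalg.norm(traj_est[-1] - end_gt)
        return end_error_m / total_distance_m * 100.0

    traj = np.array([[0.0, 0.0, 0.0], [1.0, 0.3, 0.0], [0.45, 0.0, 0.0]])
    print(f"{position_error_pct(traj, np.zeros(3), 55.8):.3f}%")  # ~0.806%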

Accuracy and visual quality of the 3D map
From the experiment, the cyber-enhanced SLAM canine suit collected point-cloud data as well as spatial information from the IMU. These data were later processed offline with the LOAM algorithm without the help of loop closure or the IMU. The ground truth dataset was obtained with the commercial-grade 3D scanner. A comparison of the synchronized 3D map of the obtained results and the ground truth map is shown in Figure 12. The point clouds of the mapping result are color coded, and the ground truth point cloud is shown in white.
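The comparison in the paper is visual; an equivalent programmatic cloud-to-cloud check could be sketched with Open3D as below. The file names are placeholders, and both maps are assumed to be exported as PCD files and roughly pre-aligned.

    import numpy as np
    import open3d as o3d

    result = o3d.io.read_point_cloud("loam_map.pcd")        # SLAM output (placeholder)
    ground_truth = o3d.io.read_point_cloud("faro_map.pcd")  # FARO scan (placeholder)

    # Refine the alignment with point-to-point ICP before measuring distances.
    reg = o3d.pipelines.registration.registration_icp(
        result, ground_truth, 0.5,  # 0.5 m correspondence threshold (assumed)
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    result.transform(reg.transformation)

    # Distance from each SLAM map point to the nearest ground truth point.
    dists = np.asarray(result.compute_point_cloud_distance(ground_truth))
    print(f"mean {dists.mean():.3f} m, 95th percentile {np.percentile(dists, 95):.3f} m")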
From the figure, the visual quality of the 3D reconstruction of the environment in all scenarios is very high, and these maps match the ground truth almost perfectly. In scenario A (small area + flat ground), the obtained 3D map accurately represents most of the significant features in the area according to the ground truth map. The overall visual quality of the mapping results from scenarios A and B (small area + stairs) is almost the same, and both accurately show the topographical information of the environment. These results show that LOAM can generate maps of good visual quality for both flat ground and stairs. Scenarios C and D extend the mapping areas to the entire experimental hall (large area). Both scenarios present the features of the experimental hall effectively. With a lower movement velocity, denser point clouds and more accurate environmental details were captured in scenario C. Although scenario D is associated with a relatively sparse point cloud compared with scenario C, the mapping result correctly shows the features and object placement within the area. These results indicate that LOAM can generate good visual results whether the canine is walking or running in the indoor environment.
An object's visual quality was evaluated by comparing it with the corresponding object in the 3D ground truth map. The map from scenario C was used as an example and is shown in Figure 13. Three objects were used for comparison: the stairs, a rectangular box (cabinet), and a pole with a row of posters placed on the side. From this figure, the shapes and orientations of these objects can be easily identified. It can be inferred that the map's visual quality is good and that the map clearly represents the objects in the environment.
To verify the mapping performance of the canine suit and LOAM, the object sizing error was used to assess the accuracy of the 3D environment reconstruction. The cabinet and the screen were used as indicator objects, and their sizing errors are shown in Tables 3 and 4, respectively. The cabinet was placed on the floor, and the screen was placed vertically close to a wall. The dimensions of these objects were measured on the created map and compared with the actual dimensions to estimate the sizing error.
According to Table 3, the measured dimensions of the cabinet, including its length and width, are very close to the reference values. The greatest sizing error was found in scenario D, with a 2.695% deviation in width from the reference value, while the smallest error occurred in scenario C, which yielded a length error of 0.413%. In general, the width sizing error was larger than the length sizing error. The measured screen dimensions in Table 4 include the length and height of the screen and varied slightly in each scenario. The height sizing error tended to be relatively high (with a maximum value of 3.823% in scenario D), while the length sizing error was lower.
The space sizing error in the map was also examined by comparing the hall size at four different locations (as shown in Figure 11) in the obtained map and in the reference. While scenarios A and B did not provide enough information to create full 3D maps of the hall, scenarios C and D produced full-sized maps; the sizing results are listed in Table 5. According to the table, the measured distances in scenario C were similar to the ground truth reference, with the greatest error (0.619%) documented at position L2. Scenario D yielded slightly greater errors, ranging from 0.088% up to 0.796% at position W1.

Trajectory and position error
The ground truth reference for the trajectory in this experiment was obtained from a motion capture device that tracked the movement of the dog within the motion capture area. The trajectory of the canine suit was calculated using odometry data from LOAM. The error distance between the end point of the obtained trajectory and the end point of the ground truth trajectory was measured for all the studied scenarios. The total time required, average canine velocity, distance traveled, and position error per distance traveled were also measured. The results are listed in Table 6.
In scenario A, the canine walked around the stairs within the motion capture area for a total distance of 55.8 m. The end-point position from the obtained results differed by approximately 0.45 m from the reference data of the motion capture device. Compared with the total distance, the position error was 0.809%, indicating that the SLAM performance is very precise. Scenario B produced the best accuracy of all the scenarios in this experiment: over the total distance traveled of 41.9 m, the position error of the trajectory was 0.055 m, only 0.131% of the total distance. The travel distances in scenarios C and D were approximately the same, but the velocity of the canine varied between walking (∼0.8 m/s) in scenario C and running (∼3.2 m/s) in scenario D. In scenario D, the error was significantly greater than in the previous scenarios. The estimated position was approximately 1.3 m from the actual position; however, compared with the total distance, the error was only 1.13%.

Figure 14 shows a comparison between the trajectories obtained by LOAM from scenarios A and B and the ground truth data from the motion capture. The accumulation of position error can be observed from the differences in position along the trajectory. The position error was observed at the end point of the trajectory to obtain the total position error. Scenario A has a low position error at the beginning, which then increases gradually to 0.809% at the end of the trajectory. Because scenario B involves the stairs and height changes, a side-view angle is also shown in Figure 14(c). This scenario has a very low position error (0.131%) and matches the ground truth trajectory well. There are minor trajectory changes when the canine climbed over the stairs compared with the ground truth; despite this minor variation, the position error of scenario B was the lowest of all the studied scenarios.

Figure 15 shows the full trajectory and the ground truth trajectory in the motion capture area from scenarios C and D. The paths in scenarios C and D were similar; thus, we can observe the differences between the trajectories in the results. While the ground truth trajectory outside the motion capture area could not be obtained, the trajectories from the odometry data are shown in the figure. Scenario C yields minor trajectory differences and a low position error of 0.294% at the end of the experiment. In scenario D, the position error can be observed in the latter part of the run, when the canine moves back to the end point, indicating that error accumulated along the path outside the motion capture area. The position error was 1.29 m, which accounts for 1.13% of the total distance traveled.

Discussion
We developed a cyber-enhanced SLAM canine suit and verified its performance by examining the 3D maps and trajectories in four experimental scenarios. We achieved very good accuracy in the 3D maps as well as good localization results for the canine in the environment. In scenarios A and B, in which the canine only moved in small spaces at walking pace, the quality of the generated 3D maps was very good, the position errors were less than 1.0%, and the object sizing errors did not exceed 3.2%. The visual quality and accuracy of the maps from both scenarios were quite similar, indicating that changes of altitude during the canine's movement did not negatively affect the map quality. Scenarios C and D had similar map qualities with minor differences. While scenario C captured a very dense point cloud because the canine walked slowly around the environment without sudden movements, scenario D yielded a 3D map with a notably sparser point cloud owing to the high movement speed. If the suit fitting mechanism on the canine is good, the suit inclination is limited and the effect on the mapping process is minimal. The highest space sizing errors were 0.62% and 0.80% in scenarios C and D, respectively. Regarding the object sizing, scenario C yielded an error of 1.05%, while scenario D yielded a slightly higher error of 3.82%. The trajectory results from both scenarios were quite accurate, with the highest error observed in scenario D at 1.13%. These results suggest that the movement velocity of a canine during mapping has minimal effects on the mapping process.
The object sizing error varied with the dimension type and the canine movement scenario. For the cabinet, the width yielded higher errors than the length on average. This may have been caused by the orientation angle of the object relative to the LiDAR as well as by the resolution of the LiDAR itself. With a low reflection angle relative to the LiDAR, the accuracy of capturing an object's point cloud may drop, and the limited resolution of the LiDAR makes the measurement of small objects in the environment difficult. However, the highest object sizing error observed in the experiment was only 3.82%, which means that the object representation in the 3D map is very good.
Additionally, the use of the IMU may increase the accuracy of the 3D map. With the help of the IMU, LOAM can predict the movement of the LiDAR and improve the accuracy of the scan matching process in SLAM, while mitigating the rapid movements that can cause mapping errors. This approach should lower the error observed in the map and improve accuracy for movements in adverse conditions. The integration of the IMU will be investigated in more depth in the future.
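As a rough illustration of this idea (a simplified sketch, not LOAM's actual IMU handling), gyroscope rates accumulated over one scan interval can provide a small-angle rotation prior for the scan matcher:

    import numpy as np

    def gyro_delta_rotation(gyro_rad_s: np.ndarray, dt: float) -> np.ndarray:
        """Integrate gyroscope rates (shape (N, 3), rad/s) sampled every dt
        seconds into a first-order (small-angle) rotation matrix."""
        wx, wy, wz = gyro_rad_s.sum(axis=0) * dt  # accumulated rotation angles (rad)
        return np.array([[1.0, -wz,  wy],
                         [ wz, 1.0, -wx],
                         [-wy,  wx, 1.0]])

    # 100 IMU samples at 200 Hz during one scan (synthetic values).
    gyro = np.random.default_rng(0).normal(0.0, 0.05, size=(100, 3))
    prior_rotation = gyro_delta_rotation(gyro, dt=1.0 / 200.0)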
Evaluating the advantages of canine SLAM will be an interesting research topic for disaster response robotics. As future work, we want to compare canine SLAM with mobile robot or human SLAM. In such comparisons, there is a tradeoff between accuracy and mobility: tracked vehicles and a motion capture system may not be usable on the rubble piles where canines can truly demonstrate their performance. We will consider suitable evaluation conditions and measurement methods to show the advantages of canine SLAM.
Several problems must be solved to adapt the 3D SLAM canine suit for practical use in search and rescue missions. The biggest problem is that the current canine suit cannot guarantee that SAR dogs will not get hung up during a mission: SAR dogs wearing the suit may be caught on rebars in concrete rubble, and such hang-ups put the dogs in high-risk situations. In addition, the SAR dogs may be unable to enter narrow spaces because the LiDAR sticks out of the canine suit. In future work, we will have to solve the hang-up problem and reduce the protrusion of the LiDAR for the safety of SAR dogs.

Conclusion
We developed a cyber-enhanced canine SLAM suit equipped with a 3D LiDAR for large dogs (>30 kg). The suit can protect the LiDAR sensor from physical damage, so it can be used practically at disaster sites. The suit can also be adjusted to custom-fit a wide range of dog body sizes for better fit and stability of the LiDAR on canines. We demonstrated the performance of the canine suit in an indoor mapping experiment with different scenarios. The mapping results were compared with those obtained from a commercial 3D scanner as a reference, and the accuracy of the trajectory was confirmed with a high-accuracy motion capture device. We found that the accuracy of 3D space reconstruction with the developed canine SLAM rig ranged from 0.01% to 0.80%, while the position error ranged from 0.13% to 1.13%. The SLAM suit can be used on canines to obtain dense, wide-angle point clouds from which the 3D topographical features of an indoor environment can be successfully created and represented on a robust 3D map. Reducing the weight and size of the suit and integrating the IMU will be investigated in more detail in the future, in an effort to reduce the burden on canines and to improve the mapping accuracy.

Notes on contributors
Chayapol Beokhaimook received his Bachelor's degree in Mechanical and Aerospace Engineering from Tohoku University, Sendai, Japan in 2019. He has been pursuing a Master's degree in Mechanical Engineering at Ohio State University's College of Engineering since January 2020. His research interests are SLAM, rescue robotics, localization, and 3D mapping.

Hiroyuki Nishinoma received his Master of Engineering in 2020 from Tohoku University, Japan. His past research has focused on developing suits for supporting working dogs using robot technology, namely cyber-enhanced canine suits and canine motion control suits using bright spotlights.

researchers that created Cyber Rescue Canine, Dragon Firefighter, etc. His research team at Tohoku University has developed various rescue robots, two of which, Quince and Active Scope Camera, are widely recognized for their contribution to disaster response, including missions in the Fukushima-Daiichi NPP nuclear reactor buildings. IEEE Fellow, RSJ Fellow, JSME Fellow, and SICE Fellow.