Earthshaker: A Mobile Rescue Robot for Emergencies and Disasters through Teleoperation and Autonomous Navigation

Overview of the rescue robot Earthshaker, winner of the first Advanced Technology & Engineering Challenge (A-TEC) championships. Abstract: To deal with emergencies and disasters without rescue workers being exposed to dangerous environments, this paper presents a mobile rescue robot, Earthshaker. Combining a tracked chassis and a six-degree-of-freedom robotic arm, as well as miscellaneous sensors and controllers, Earthshaker is capable of traversing diverse terrains and fulfilling dexterous manipulation. Specifically, Earthshaker has a unique swing arm-dozer blade structure that can help clear cumbersome obstacles and stabilize the robot on stairs, a multimodal teleoperation system that can adapt to different transmission conditions, a depth camera aided robotic arm and gripper that can realize semi-autonomous manipulation, and a LiDAR aided base that can achieve autonomous navigation in unknown areas. It was these special systems that supported Earthshaker in winning the first Advanced Technology & Engineering Challenge (A-TEC) championships, standing out among 40 robots from around the world and showing the efficacy of the system integration and the advanced control philosophy behind it.


Introduction
Rescue workers' lives are often under threat during rescue work in and after emergencies and disasters; sometimes casualties are unfortunately unavoidable. With the development of robotics in general, robots have increasingly replaced human beings in fulfilling miscellaneous tasks in such dangerous scenarios [1−4].
During rescue work, robots are often required to traverse unstructured and complicated terrains, even climb up and down stairs, while carrying miscellaneous equipment and sensors for dealing with dangerous situations and clearing cumbersome obstacles [5]. Therefore, most rescue robots have been developed on legged or tracked platforms to ensure mobility. To date, plenty of legged robots have been developed by research organizations and industrial companies [6,7], and some of them have appeared in competitions like the DARPA Robotics Challenge [8], the DARPA Subterranean Challenge, and so on [9]. To further improve the mobility of legged robots, there have also been hybrid legged robots with wheels or tracks attached at the ends of their legs in place of feet, e.g., RoboSimian [10], Momaro [11], CHIMP [12], etc. However, these robots have very complicated structures and low-level controllers that consume considerable computation power and control time, resulting in relatively fragile systems under the sustained heavy workloads of rescue work. So far, only the quadrupedal robot ANYmal has been successfully deployed in real rescue scenarios [13]. As for tracked robots, popular designs are often equipped with swing arms that help them cross diverse obstacles, afford high payloads, and perform stable locomotion. After the 2016 earthquake in Italy, one such tracked robot, TRADR, was used to inspect damaged buildings [14]. In the ARGOS Challenge [15], Team Argonauts also used a tracked robot with swing arms to win the championship [16]. Other groups have even realized autonomous navigation for such tracked robots when climbing stairs [17] and slippery slopes [18]. However, due to the swing arms, those robots lose the capability of clearing cumbersome obstacles.
To overcome those drawbacks in this work, we have designed a unique structure to provide the tracked system the capability of both climbing stairs and clearing obstacles.
Besides thoughtful structural designs, autonomous operation can also greatly improve rescue robots' efficiency in different rescue works. A typical application scenario would be exploring signal blocked areas after emergencies or disasters have happened. To overcome the loss of telecommunication between the robots and the operators, autonomous navigation is potentially desired for rescue robots [19,20] . Miscellaneous sensors can be taken advantage of to conduct simultaneous localization of the robot and mapping of the unknown area [21−23] . We have integrated a Light Detection and Ranging (LiDAR) and an Inertia Measurement Unit (IMU) to build grid maps of the surrounding environment, evaluate position and pose of the robot and realize autonomous navigation. Note that no Global Positioning System (GPS) is needed in this process, making it particularly suitable for signal blocked areas.
Even with autonomous navigation, the robot still needs the operators' help when it comes to dexterous manipulation tasks like opening doors. There was even a case where seven operators were needed to cooperate in controlling one robot [24]. To relieve the operation burden, we have developed depth camera aided semi-autonomous manipulation for the robotic arm in door-opening tasks that can quickly locate the door handle. The whole operation process requires only two operators, controlling the base and the arm respectively, largely reducing the operation complexity and increasing operation efficiency.
Consequently, the teleoperation system of a rescue robot is critical for successful rescue work. The effectiveness and reliability of the teleoperation system determine the lower boundary of the rescue robot's performance. Therefore, a multimodal teleoperation system that provides enough redundancy to deal with different conditions becomes necessary.
Based upon the above articulation, we present in this paper our newly designed rescue robot, which has successfully addressed the aforementioned four points of functionality. We have named it Earthshaker, not only because it "shakes" the ground when it moves around, but also because we hope it can bring earthshaking improvement to the role of robots in real rescue work. An overview of Earthshaker is shown in Figure 1. The remainder of the paper first introduces the various systems of Earthshaker in Section 2, including the tracked chassis, the robotic arm and gripper, the perception system, the teleoperation system, and their mechatronic integration. In Section 3, control frameworks of the multimodal teleoperation, the depth camera aided semi-autonomous manipulation, and the LiDAR aided autonomous navigation are presented in detail. Section 4 summarizes the performance of Earthshaker in the finals of the 2020 A-TEC championships as the experimental validation of the system integration and control philosophy. The experience obtained from the competition and possible future directions are discussed in Section 5.

Tracked chassis for robust locomotion
The tracked chassis supports all the other systems onboard with corresponding mechanical and electronic interfaces to form the robot as a whole. It determines the upper limit of the whole system's mobility [25]. The tracked chassis is made of alloy steel through casting and welding. It combines the designs of the Christie suspension and the Matilda suspension to achieve excellent traversing capability. The vibration and impact from rough terrains can be effectively absorbed by the chassis to maintain a stable operating environment for the systems onboard. The chassis is driven by two 1000 W brushless motors, which support a maximum running speed of 1.6 m/s and a maximum climbing inclination of 40 degrees. Four packs of LiPo batteries inside the chassis can power Earthshaker continuously for 3 hours at medium workloads. Each battery pack supports an individual system to ensure power isolation and security: one 48 V 60 Ah pack for the chassis, and three 24 V 16 Ah packs for the manipulation system, the perception system, and the teleoperation system, respectively. In the end, Earthshaker is 0.72 m wide and 1.22 m high, its length can vary from 1.33 m to 1.49 m, and its total weight is about 250 kg.
To promote the robot's capability of clearing cumbersome obstacles and climbing up and down stairs, a swing arm-dozer blade structure has been designed and attached to the rear end of the chassis. The structure consists of an electric linear actuator, two tracked swing arms, and a dozer blade. The electric linear actuator can be controlled under teleoperation to rotate the swing arms, thus adjusting the pose of the dozer blade from 65 degrees to -45 degrees with respect to the horizontal direction. On flat terrains, the structure is folded to reduce motion resistance and increase agility, while on stairs it can be used to adjust the pitch angle of the robot to improve stability, as shown in Figure 2. When there are cumbersome obstacles in the way, the dozer blade can be set vertical to the ground to push them away efficiently, as long as they are under 75 kg.

Robotic arm and gripper for dexterous manipulation
Without dexterous manipulation, tasks like pressing buttons, opening doors, turning off valves, picking up small objects, moving wounded victims, etc., cannot be accomplished. Earthshaker has been equipped with a UR5e robotic arm and an AG95 two-finger gripper for those purposes. The arm can realize dexterous manipulation within a radius of 750 mm, with a maximum payload of 5 kg [26]. The arm is installed at the front end of the robot to guarantee enough workspace and to balance the extra weight introduced by the swing arm-dozer blade at the rear end. The Original Equipment Manufacturer (OEM) control box of the arm has been customized to save space on the robot and to work from 24 V DC power instead of 220 V AC power. The velocity control of each joint of the arm and the gripper is mapped to the remote controller, so that precise joint-level control can be achieved. Also, to facilitate semi-autonomous manipulation, an Intel D435i RGBD camera has been mounted on the gripper, the use of which is discussed later in Section 3.2.

Sensors for diverse perception
Earthshaker has a platform for sensor installation between the robotic arm and the swing arm. The four sides of the rectangular platform carry four wide-angle cameras, which are angled slightly downwards to provide a panorama of the environment surrounding the robot. Thus, the remote operator can plan paths and avoid obstacles accordingly. There are also two infrared cameras on the sensor platform that can help identify objects in smoky environments. The two infrared cameras are placed opposite to each other, with one pointing forward and one pointing backward [27]. At the front panel of the chassis, there is another micro camera that provides a wide view of the environment in front of the robot. With further help from the lasers on both sides of the robot, the operator can precisely drive Earthshaker through narrow doors or corridors.

Teleoperation and communication
Earthshaker is teleoperated by two operators using two AT9S remote controllers, one for the tracked chassis and one for the robotic arm and gripper. Each controller has 12 channels to transmit digital commands via a 2.4 GHz communication frequency to the receiver on the robot. An STM32F091 based microcontroller is then utilized to decode the signals to achieve closed-loop control of the chassis, as well as other peripherals like the swing arm-dozer blade, the lasers, the LEDs, etc. Meanwhile, the signals to the receiver for the robotic arm and gripper are translated into specific actions by an Intel NUC minicomputer, which has 16 GB of RAM and an Intel Core i7-1165G7 CPU with a maximum clock frequency of 4.7 GHz. On the other hand, the video images transmitted back to the operators consist of images from the wide-angle cameras, the infrared cameras, the micro camera, and the operating system screen of the NUC minicomputer. These eight images are selected and combined into one single image for transmission to save bandwidth. Besides the 2.4 GHz direct communication, there are also two redundant communication paths on Earthshaker, the 1.8 GHz MIMO-mesh radios [28] and the 4G/5G mobile telecommunication. These additional paths can overcome the relatively short communication distance of the 2.4 GHz signals and ensure the robustness of teleoperation for Earthshaker. Figure 3 summarizes the major mechatronic components of Earthshaker, as well as the signal paths for multimodal teleoperation. Note that the NUC minicomputer can also control the chassis, depending on its priority compared with the STM32F091 microcontroller on the CAN bus. Consequently, switching between teleoperation and autonomous navigation can be organized. To accomplish autonomous navigation and dexterous manipulation, the NUC is also connected to the LiDAR, the RGBD camera and the gripper via USB cables, and to the robotic arm via a switch.
The same switch is also connected to the MIMO-mesh radio and the 4G/5G router. As a result, the switch builds a 100 Mbps network with the operators' computer, the signal quality of which affects the latency of teleoperation.
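The eight video streams are combined into one frame before transmission; the tiling step can be sketched as follows (the frame sizes, the 2x4 layout, and the absence of resizing are illustrative assumptions, not the robot's actual encoder settings):

```python
import numpy as np

def tile_streams(frames, rows=2, cols=4, h=240, w=320):
    """Tile up to rows*cols camera frames into one mosaic image.

    frames: list of HxWx3 uint8 arrays (empty slots stay black).
    Returns a (rows*h, cols*w, 3) uint8 mosaic.
    """
    mosaic = np.zeros((rows * h, cols * w, 3), dtype=np.uint8)
    for idx, frame in enumerate(frames[: rows * cols]):
        r, c = divmod(idx, cols)
        # Resizing is omitted; frames are assumed to match the slot size.
        mosaic[r * h:(r + 1) * h, c * w:(c + 1) * w] = frame[:h, :w]
    return mosaic

# Example: eight dummy frames with distinct gray levels
frames = [np.full((240, 320, 3), 30 * i, dtype=np.uint8) for i in range(8)]
combined = tile_streams(frames)
print(combined.shape)  # (480, 1280, 3)
```

Sending one mosaic instead of eight separate streams keeps the bandwidth cost of the video link roughly constant regardless of how many cameras are active.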

Control logic of the base
In real rescue work, it is inevitable to face environments with detrimental magnetic fields or poor signal transmission conditions, where the regular 2.4 GHz teleoperation signals would decay greatly with reduced signal-to-noise ratio and increased data package loss. To maintain robust signal transmission between the operators and Earthshaker for real-time teleoperation, a framework for multimodal teleoperation has been developed to ensure the communication path is unblocked, as shown in Figure 4(a). Within the framework, when Earthshaker is close to the operator such that the AT9S remote controllers can talk to the receivers on the robot directly, the 2.4 GHz communication frequency is used. Once the distance in between increases or for some reason the signals get blocked to a point where the direct communication fails, the 1.8 GHz communication frequency would be adopted and the signals are transmitted through the MIMO-mesh radios. When necessary, the robotic arm can even drop an extra relay radio onboard to further increase the communication distance and quality. Multiple MIMO-mesh radios can form a distributed network with various forms, e.g., a line, a star, a net, and even a mixture of those. The network can flexibly adapt to fast node movement and node-to-node signal quality variation, realizing high quality signal transmission consistently. To ensure the teleoperation communication in case even the MIMO-mesh radios fail, one more redundant communication path realized by the 4G/5G router has been added to Earthshaker. The router can either connect to nearby base stations from the selected Internet Service Provider or be relayed by nearby Unmanned Aerial Vehicles, to build a network with a preset cloud server. The operators can then access the server, monitor the real-time data from the robot and give corresponding commands.
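The priority-based fallback among the three communication paths can be sketched as follows (the path names mirror the text, but the freshness timeout and the dictionary-based bookkeeping are illustrative assumptions):

```python
import time

# Priority order mirrors the paper: direct 2.4 GHz first, then
# the 1.8 GHz MIMO-mesh, then 4G/5G via the cloud server.
PATHS = ["2.4GHz", "1.8GHz-mesh", "4G/5G"]

def select_path(last_rx_time, now, timeout=0.5):
    """Return the highest-priority path with fresh data, or None.

    last_rx_time: dict mapping path name -> timestamp of last valid packet.
    A path counts as alive if data arrived within `timeout` seconds;
    the timeout value is an assumption, not the robot's actual setting.
    """
    for path in PATHS:
        if now - last_rx_time.get(path, float("-inf")) <= timeout:
            return path
    return None  # all paths failed: autonomous mode or emergency stop

now = time.time()
rx = {"2.4GHz": now - 2.0, "1.8GHz-mesh": now - 0.1, "4G/5G": now - 0.2}
print(select_path(rx, now))  # direct link is stale, mesh takes over
```

Returning `None` corresponds to the decision point in the text where the program chooses between autonomous navigation and an emergency stop.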

Earthshaker checks control signals from these three paths according to their priority levels and signal quality. If effective data are not received within a prescribed time, the path with the next lower priority level is checked. If all three paths fail, the program determines whether to enter the autonomous navigation mode or an emergency stop mode. Once any communication path is successfully established, the remote controller in the base operator's hands can drive the robot forward and backward and rotate it around its center point. The microcontroller on the robot first normalizes the joystick values obtained from the remote controller as

$$J_{norm} = \frac{2\,(J - J_{min})}{J_{max} - J_{min}} - 1, \qquad J_{norm} \in [-1, 1],$$

where $J$ represents the raw value from each joystick, $J_{norm}$ represents the unified joystick command, and $J_{max}$ and $J_{min}$ denote the maximum and minimum values of the joysticks, respectively. When $J_{norm}$ is zero, the robot is still. The unified values are then interpreted into the base's linear velocity $v$ and angular velocity $\omega$ as

$$v = J_{norm,v}\, v_{max} \quad \text{and} \quad \omega = J_{norm,\omega}\, \omega_{max},$$

where $v_{max}$ and $\omega_{max}$ are the maximum linear and angular velocities supported by the base. Through the kinematic model of differential drive, the angular velocities of the left and right motors of the base, $\omega_l$ and $\omega_r$, can be calculated as

$$\omega_l = \frac{1}{r}\left(v - \frac{\omega B}{2}\right), \qquad \omega_r = \frac{1}{r}\left(v + \frac{\omega B}{2}\right),$$

where $B$ is the distance between the tracks and $r$ represents the radius of the drive wheel. The calculated $\omega_l$ and $\omega_r$ are then sent to the motors as control commands.
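The joystick normalization and differential-drive mapping can be sketched as follows (the raw joystick range, the track spacing B, the wheel radius r, and the maximum angular velocity are illustrative assumptions; the 1.6 m/s limit is the chassis specification from the paper):

```python
def joystick_to_wheel_speeds(j_v, j_w, j_min=1000, j_max=2000,
                             v_max=1.6, omega_max=1.0, B=0.6, r=0.1):
    """Map raw joystick readings to left/right wheel angular velocities.

    j_v, j_w: raw joystick values for linear and angular motion.
    The joystick range and the geometric parameters B (track spacing)
    and r (drive wheel radius) are illustrative, not measured values.
    """
    # Normalize each joystick reading to [-1, 1]
    norm = lambda j: 2.0 * (j - j_min) / (j_max - j_min) - 1.0
    v = norm(j_v) * v_max          # commanded linear velocity [m/s]
    omega = norm(j_w) * omega_max  # commanded angular velocity [rad/s]
    # Differential-drive kinematics
    w_left = (v - omega * B / 2.0) / r
    w_right = (v + omega * B / 2.0) / r
    return w_left, w_right

# Full forward stick, centered turn stick: both tracks at the same speed
print(joystick_to_wheel_speeds(2000, 1500))
```

With a centered turn stick both wheel speeds are equal, and a pure rotation command (centered forward stick) produces equal and opposite speeds, matching the differential-drive model.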

Control logic of the arm and gripper
The programs for teleoperation of the arm and gripper consist of an operation assisting module for the door opening task and several interface modules for maintaining communication between the NUC minicomputer and the other components, including the signal receiver, the UR5e arm, the AG95 gripper and the D435i RGBD camera. Inside these programs, a network socket is first created according to the arm controller's IP address and port number, such that the built-in input/output functions can be called to read or write to the socket to interact with the arm controller. At the same time, the serial ports connected to the signal receiver and the gripper are initialized in the programs through RS485 protocols. Once the NUC receives remote control instructions through the signal receiver, it parses them into the positions and velocities for each joint of the arm, as well as the opening angle and holding force for the gripper.
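The step of parsing received instructions into joint and gripper targets might look like the following sketch (the fixed packet layout here is purely hypothetical and does not reflect the actual receiver protocol):

```python
import struct

# Hypothetical fixed-size packet: 6 joint velocities (float32, rad/s),
# then the gripper opening angle (float32, rad) and holding force (float32, N).
PACKET_FMT = "<6fff"
PACKET_SIZE = struct.calcsize(PACKET_FMT)  # 32 bytes

def parse_packet(data: bytes):
    """Decode one receiver packet into arm and gripper targets."""
    values = struct.unpack(PACKET_FMT, data[:PACKET_SIZE])
    joint_vels = values[:6]              # UR5e joint velocity commands
    grip_angle, grip_force = values[6], values[7]
    return joint_vels, grip_angle, grip_force

# Round-trip example with an illustrative command
raw = struct.pack(PACKET_FMT, 0.1, 0.0, -0.2, 0.0, 0.0, 0.0, 0.5, 20.0)
joints, angle, force = parse_packet(raw)
print(joints, angle, force)
```

The decoded values would then be written to the arm over the network socket and to the gripper over the RS485 serial port, as described in the text.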

To facilitate semi-autonomous door opening for Earthshaker, the operation assisting module is developed using the depth camera in an eye-to-hand manner. This module, as shown in Figure 4(b), can greatly simplify the process of door opening, avoiding the potential mistakes that could be introduced by the operator through teleoperation. In the module, the coordinates of the camera and the arm are first calibrated to obtain the transformation relationship between them. With the intrinsic parameter matrix $K$, the pixels on the depth images obtained from the RGBD camera can be converted into a three-dimensional point cloud as

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = d\, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix},$$

where $(x, y, z)$ is the coordinates of the 3D point, $d$ denotes the depth measured on the ray of the pixel, and $(u, v)$ is the coordinates of the point in the image. Next, objects can be identified within the point cloud converted from the depth image. Specifically, in the task of door opening, the position and orientation of the door and the handle should be estimated to serve as the goal for path planning of the arm and gripper. The position of the door is determined by fitting planes to the point cloud. Before the operation assisting module is started, the robot needs to be in front of the door such that the door is inside the Field of View (FOV) of the RGBD camera. Consequently, the point cloud corresponding to the door can be recognized by planar segmentation. To figure out the orientation of the door, the Principal Component Analysis (PCA) method [29] is exploited to calculate the normal vector $\mathbf{n}$ of the door plane in the point cloud. Then the axis-angle representation of the door's normal direction can be calculated as

$$\theta = \arccos(\mathbf{n} \cdot \mathbf{x}), \qquad \mathbf{a} = \frac{\mathbf{x} \times \mathbf{n}}{\|\mathbf{x} \times \mathbf{n}\|},$$

where $\theta$ denotes the angle between the normal vector $\mathbf{n}$ of the door plane and the unit vector $\mathbf{x}$ of the $x$-axis, and $\mathbf{a}$ represents the rotation axis that rotates the vector $\mathbf{x}$ to the vector $\mathbf{n}$. Thus, the rotation matrix $R$ can be further calculated with Rodrigues's formula [30],

$$R = I + \sin\theta\, [\mathbf{a}]_\times + (1 - \cos\theta)\, [\mathbf{a}]_\times^2,$$

where $I$ is the identity matrix and $[\mathbf{a}]_\times$ denotes the skew-symmetric matrix of $\mathbf{a}$.
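The back-projection of depth pixels through the intrinsic matrix and the PCA estimate of the door-plane normal can be sketched as follows (a minimal NumPy version with synthetic data; the intrinsic values are illustrative):

```python
import numpy as np

def backproject(depth, K):
    """Convert a depth image into a 3-D point cloud using intrinsics K.

    depth: HxW array of depths along each pixel ray (zeros = invalid).
    Returns an Nx3 array of points in the camera frame.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    d = depth.reshape(-1)
    pts = (np.linalg.inv(K) @ pix.T).T * d[:, None]
    return pts[d > 0]

def plane_normal(points):
    """PCA plane normal: the eigenvector of the covariance matrix
    with the smallest eigenvalue (least variance = out-of-plane axis)."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    return eigvecs[:, 0]  # eigh sorts eigenvalues in ascending order

# Synthetic check: unit depth everywhere gives the plane z = 1,
# whose normal is along the optical axis.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
cloud = backproject(np.ones((240, 320)), K)
n = plane_normal(cloud)
print(np.abs(n))  # close to [0, 0, 1]
```

The same normal, expressed in the arm frame through the calibrated camera-to-arm transformation, is what feeds the axis-angle and Rodrigues computation in the text.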
Subsequently, the DBSCAN algorithm [31] is used to cluster the points that are close to the door plane. The cluster with a proper size is identified as the point cloud of the door handle, and the cluster center, calculated as the mean of all the cluster points, is set as the target position for the arm to grip. As a result, this orientation and position serve as the target pose when approaching the handle. However, due to the observation model of the RGBD sensor, the depth measurement error is proportional to the square of the distance. The eye-to-hand method leads to a relatively long separation between the target and the sensor, inevitably causing observation errors in the gripping pose. Additionally, the vibration introduced by the movement of the base makes it difficult to realize visual feedback control of gripping. Hence, at this point, the algorithm is only adopted to provide an initial pose for the door-opening task. The remaining operation still needs to be completed by the operators. Even with this level of semi-autonomy, the operating steps have been greatly simplified and the operation burden on the operators is substantially reduced.
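Isolating the handle points that protrude from the fitted door plane and taking their centroid as the gripping target can be sketched as below; for brevity, a plain signed-distance filter stands in for the DBSCAN clustering step, and the offset thresholds are illustrative assumptions:

```python
import numpy as np

def handle_centroid(points, normal, plane_point,
                    min_offset=0.02, max_offset=0.15):
    """Select points that protrude from the door plane and return their mean.

    In the robot, DBSCAN groups these candidates into clusters and the
    properly sized cluster is taken as the handle; here a plain signed
    distance filter stands in for that step. Offsets (meters) are
    illustrative assumptions.
    """
    n = normal / np.linalg.norm(normal)
    signed = (points - plane_point) @ n  # signed distance to the plane
    candidates = points[(signed > min_offset) & (signed < max_offset)]
    if len(candidates) == 0:
        return None
    return candidates.mean(axis=0)  # gripper target position

# Door plane z = 0, a few handle points protruding around z = 0.05
door = np.column_stack([np.random.rand(200), np.random.rand(200), np.zeros(200)])
handle = np.array([[0.80, 0.50, 0.05], [0.82, 0.50, 0.06], [0.81, 0.52, 0.05]])
pts = np.vstack([door, handle])
target = handle_centroid(pts, np.array([0.0, 0.0, 1.0]), np.zeros(3))
print(target)
```

Because the plane points sit at zero signed distance, only the protruding handle points survive the filter, and their mean is exactly the cluster-center target described in the text.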

Autonomous navigation
When autonomous navigation is desired, the control authority of Earthshaker is handed to the NUC. This helps the robot actively explore unknown and signal blocked areas and search for an exit toward a desired direction, the algorithm of which can be found in Figure 4(c). Once autonomous navigation is started, the NUC analyzes the data scanned by the LiDAR to build a grid map of the current environment and estimate its ego-motion simultaneously. Feature matching based methods such as LOAM [32] are popular pose estimation methods that demonstrate robustness and efficacy in complex off-road environments. Therefore, scan matching is also incorporated into Earthshaker's autonomous navigation algorithms. Features are extracted from each frame of the LiDAR sweep according to the smoothness

$$c = \frac{1}{|S| \cdot \|X_i\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_i - X_j \right) \right\|,$$

where $X_i$ denotes the $i$-th point within the sweep, and $S$ defines a set of consecutive points obtained by the same laser beam near point $X_i$. The point number within $S$ is empirically set to 10. By setting a threshold for the smoothness $c$, points can be classified as edge features (greater smoothness) and planar features (less smoothness). Then the edge and planar features of consecutive frames can be registered separately to recover the motion between frames using the Iterative Closest Point (ICP) algorithm [33]. The objective of the ICP algorithm is to minimize the cost with respect to the estimated transformation $T$ as

$$\min_{T}\; \sum_{k=1}^{N_e} d_{e,k}^{2} + \sum_{k=1}^{N_p} d_{p,k}^{2},$$

where $N_e$ and $N_p$ denote the number of matched edge features and planar features, respectively, $d_{e,k}$ represents the distance between two matched edge features and $d_{p,k}$ represents the distance between two matched planar features.
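The per-point smoothness used to split a scan line into edge and planar candidates can be sketched as follows (a window of 5 neighbors on each side gives the 10-point neighbor set from the text; classification by threshold is left out):

```python
import numpy as np

def smoothness(scan, window=5):
    """Compute a LOAM-style smoothness for each point in one scan line.

    scan: Nx3 points ordered along a single laser beam. For each point,
    the 2*window neighbors (window on each side) form the set S; the
    term for the point itself contributes zero to the sum.
    """
    N = len(scan)
    c = np.full(N, np.nan)  # boundary points get no score
    for i in range(window, N - window):
        diffs = scan[i] - scan[i - window:i + window + 1]
        c[i] = np.linalg.norm(diffs.sum(axis=0)) / (
            2 * window * np.linalg.norm(scan[i]))
    return c  # large c -> edge candidate, small c -> planar candidate

# A perfectly straight segment yields (numerically) zero smoothness.
line = np.column_stack([np.linspace(0, 1, 21), np.zeros(21), np.full(21, 5.0)])
c = smoothness(line)
print(np.nanmax(np.abs(c)))  # close to 0
```

On a straight or planar stretch the symmetric differences cancel, so the score stays near zero; at a corner or depth discontinuity the sum does not cancel and the score spikes, flagging an edge feature.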
Due to the vibration of the base caused by the tracks and rugged terrains, IMU pre-integration [34] is introduced into the system to further improve the robustness of the localization results. As shown in Figure 4(c), an Extended Kalman Filter [35] is used to infer the state of the robot, fusing the scan-matching results and the IMU pre-integration results in a tightly coupled manner.
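The predict-with-IMU, correct-with-LiDAR structure of this fusion can be illustrated with a toy one-dimensional filter (the real system uses IMU pre-integration and a much higher-dimensional state; the noise parameters Q and R here are arbitrary assumptions):

```python
import numpy as np

def ekf_step(x, P, accel, dt, z_scan, Q=1e-3, R=1e-2):
    """One predict/update cycle of a toy 1-D filter fusing IMU and scan matching.

    State x = [position, velocity]. This linear toy only illustrates the
    structure: propagate with the IMU acceleration, then correct with the
    scan-matching position measurement.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    # Predict: integrate the IMU acceleration as a control input
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    P = F @ P @ F.T + Q * np.eye(2)
    # Update: correct with the scan-matching position estimate
    H = np.array([[1.0, 0.0]])
    y = z_scan - H @ x              # innovation
    S = H @ P @ H.T + R             # innovation covariance
    Kg = P @ H.T / S                # Kalman gain
    x = x + (Kg * y).ravel()
    P = (np.eye(2) - Kg @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, accel=0.2, dt=0.1, z_scan=0.01)
print(x)
```

The high-rate IMU keeps the state smooth between LiDAR frames, while each scan-matching result pulls the estimate back toward the map, which is the intent of the tightly coupled fusion described above.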
With the high precision laser-inertial odometry estimated from the EKF fusion, the laser scans are then merged into the occupancy grid map. In general, the exploration task is to maximize the covered area on the grid map. Herein, a frontier based method [36] is used to guide the robot to explore along the boundary between unknown area and free area on the grid map. In the method, a random tree incrementally expands toward the boundaries during the exploration process by sampling viewpoints as new nodes. The newly added nodes in the random tree are then evaluated with information gain and traversing cost as

$$R(p) = I(p) - \lambda\, C(p),$$

where $I(p)$ is the expected information gain at position $p$, $C(p)$ is the distance cost between the robot and position $p$, and $\lambda$ denotes a coefficient that controls the penalty on the distance cost. By selecting the branch with the maximum score, the first edge of this branch is set as the next best view to navigate toward. The move_base navigation module provided by the Robot Operating System (ROS) [37] is employed to calculate the shortest path based on Dijkstra's algorithm [38]. The robot follows the generated path to explore the environment gradually. Once the target point is reached, the next round of exploration planning continues. The whole process is repeated until the robot covers the whole area or finds the exit.
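The gain-minus-weighted-cost scoring of candidate viewpoints can be sketched as follows (the penalty coefficient and the node values are illustrative):

```python
def best_node(nodes, lam=0.5):
    """Score candidate viewpoints by information gain minus weighted
    distance cost and return the index of the best one.

    nodes: list of (gain, cost) tuples; lam is the distance penalty
    coefficient (the value here is an illustrative assumption).
    """
    scores = [gain - lam * cost for gain, cost in nodes]
    return max(range(len(nodes)), key=lambda i: scores[i])

# A distant node with high gain vs. a nearby node with modest gain:
# at lam = 0.5 the nearby node wins (6 - 1 = 5 beats 10 - 15 = -5).
print(best_node([(10.0, 30.0), (6.0, 2.0)]))  # 1
```

Raising the penalty coefficient biases the explorer toward nearby frontiers; lowering it lets the robot commit to distant but information-rich viewpoints.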

Experimental validation
To examine the functionality of Earthshaker and demonstrate its superiority, it was sent to the first A-TEC championships in 2020 for experimental validation. The competition was held by the government of Shenzhen in Guangdong, China, to further enhance robotic techniques and seek industrial opportunities [39]. In the finals of the championships, the competition was divided into five sessions, and all the teams were ranked based on their performance in those sessions, including task difficulty, task completion, and time consumption. The five sessions, in turn, were traversing rough terrains, clearing cumbersome obstacles and opening doors, climbing up and down regular stairs, passing through signal blocked areas, and searching and rescuing in smoky indoor environments, as illustrated in Figure 5. Specifically, passing through signal blocked areas required the robots to autonomously navigate inside a maze and search for the only exit, while in the other sessions the robots were remotely controlled by operators in a first-person point of view from hundreds of meters away. These diverse sessions examined the capability of the participating robots in locomotion, manipulation, perception, telecommunication, etc. [40−42] Robust and consistent performance in all sessions became more important than outstanding performance in any single session [43]. In total, 15 teams from around the world entered the finals of the intense championships. Out of those teams, Earthshaker took the first place with a score of 109 points, whereas the robots from Tsinghua University and Chongqing University took the second and third places with scores of 79 and 70.5 points, respectively.
Compared to the Seeker robot from Tsinghua University, the MIST-Robot from Chongqing University, and many other robots from the remaining teams, Earthshaker realized transformation of the tracked system for climbing stairs and improvement of its obstacle clearing capability in the most economical way, the swing arm-dozer blade structure. Earthshaker also won the competition through the diverse sensors integrated into the robot, which allowed it to be robustly teleoperated and even achieve autonomous navigation. The following subsections describe the performance of Earthshaker in each session of the finals.

Traversing miscellaneous terrains
This session required the robot to first traverse a 30 m by 3 m rough terrain that could be covered by rubble, bricks, or irregular concrete debris, depending on the difficulty selected by each team. Following that, the robot needed to pass through an area covered by large immobile obstacles, climb up and down slopes of up to 36 degrees, and travel on a bridge tilted to the side by 27 degrees. Even though these tasks were relatively easy, they relied heavily on the robots' speed and agility. Because the robotic arm did not need to be operated during this session, the corresponding operator was able to fly a DJI Mavic Unmanned Aerial Vehicle (UAV) to provide a global view of the field from above, which allowed the base operator to plan operations beforehand and greatly reduced the time consumption. Benefiting from the great horsepower and well-designed suspension of the chassis, Earthshaker performed excellently in these tasks and ranked first among all the robots.
Besides the aforementioned regular tasks, there were also challenge tasks in this session, where the robots needed to traverse muddy terrains with potholes, flat terrains with trenches of various widths, and pools filled with water of different depths. Earthshaker accomplished these challenging tasks successfully, as shown in Figure 6. Specifically, when faced with the trenches, Earthshaker put down the swing arm-dozer blade to increase the effective body length of the chassis. As a result, it crossed a trench with a width of 600 mm. As for the water pools, because the whole body of Earthshaker was waterproof to IP64 and the chassis was waterproof to IP66, Earthshaker was capable of dealing with the pool with a water depth of 500 mm. It is worth noting that Earthshaker was prepared for the possible rainy weather during the competition, whereas many other robots were not. Consequently, some robots suffered from the rainy weather with their exposed electronic interfaces and ended up unable to finish the competition.

Approaching buildings
In this session, the robots were required to first clear up a 10 m by 20 m area by moving obstacles to designated places, and then open and enter a door with an automatic closer. The obstacles included hollow steel tubes as light as 5 kg, and steel beams and concrete blocks as heavy as 50 kg. Earthshaker successfully utilized the dozer blade to push all the obstacles to the target positions. There were multiple difficulty levels for door opening, with different types of doors and door handles: the options included unifold or bifold doors with spherical handles, L-shaped handles, or valves. The most challenging combination, a unifold door with a spherical handle, was selected for Earthshaker in the competition. Because of the door closer, the robot needed to rotate the handle and maintain the rotation while opening the door. As a result, the two operators needed to cooperate in the process. One operator first aligned the 0.8 m wide Earthshaker with the 1 m wide door frame with the help of the equipped laser pointers, and then kept commanding the base to move forward slowly as the door handle was rotated, until the front end of the chassis was pushed against the door and the handle could be released by the gripper. The other operator fine-tuned the robotic arm and the gripper after the initial semi-autonomous manipulation, then gripped and rotated the door handle as the chassis approached the door, until the handle could be released from the gripper. Figure 7(a-e) shows snapshots of the whole process of this session. Earthshaker finished this session within 31 minutes and 12 seconds.

Manipulation inside buildings
Robots in this session needed to climb up to and down from the platform shown in Figure 7(f-h). The optional ways were vertical ladders or regular stairs. The tracked chassis dictated that Earthshaker could only take the regular stairs, which was the common choice among all the robots in the competition. The stairs had 24 steps one-way, each step with a depth of 300 mm and a height of 175 mm; the inclination angle was thus about 30 degrees. There was a turning platform between the two sections of stairs.
When Earthshaker was climbing up the stairs, the swing arm-dozer blade structure was adjusted to provide enough contact length for the chassis and help the robot move smoothly. However, the swing arm was not put fully flat, due to the detrimental friction generated by the passive arm tracks, which would hinder the robot from thrusting upward. The angle of the swing arm was empirically set just large enough to support the robot in climbing up the stairs. On the other hand, when the robot was climbing down the stairs, the swing arm could be put fully flat to take advantage of its length and the passive friction generated, to increase stability. Earthshaker finished this session within 6 minutes and 13 seconds, with the climbing-up process taking the majority of the consumed time. Compared to the other, smaller tracked robots in the competition, Earthshaker was slower due to its relatively cumbersome body on the stairway.

Autonomous navigation
The autonomous navigation session tested the robots' intelligence in building maps and finding exits within unknown areas without human help. To simulate the situation of signal loss in reality, the referee turned on a signal blocker once the robot entered the maze, and the operators inside the control room were not allowed to touch the remote controllers during this period. The maze had three possible entrances and three possible exits. When a robot arrived at the maze, only one entrance would be open, and only one exit would be usable. For fairness, movable doors inside the maze were adjusted for each robot to form a different unknown structure. Earthshaker was able to show up at the exit within 41.13 seconds in this session, ranking as the second fastest among all the robots. To check the built map of the maze, the point cloud stored in the NUC was extracted after the competition, as shown in Figure 8.

Search and rescue in smoky environment
The last session of the competition involved indoor rescue work. The robot had to enter rooms filled with dense black smoke and search for a fire source and a wounded person. The smoke was real, produced by smoke generators, whereas the fire source was represented by an electric oven and the wounded person by a human-shaped sandbag dummy weighing about 50 kg, dressed in clothes that generated heat for a period of time to mimic a real person. There were eight similar rooms in total, with the fire source and the wounded person randomly distributed among them. The rooms also contained common items such as tables, chairs, and cabinets, much like ordinary rooms in daily life. The robot needed to find the fire source and turn it off, and to find the wounded person and carry it out of the room to a designated area. The smoke was so dense that visibility inside the rooms was less than half a meter, so Earthshaker had to search every room under teleoperation, locating the wounded person and the oven with its two infrared cameras, then use the gripper to turn the oven off and carry the wounded person out. This again required cooperation between the two operators. To carry the wounded person out of the room, a customized lasso was installed on Earthshaker before setting off. Once the wounded person was located, the robotic arm and gripper picked up the lasso using preset control trajectories and, through teleoperation, put the lasso around the wounded person's arm. The lasso then locked automatically once the robot started to drag the wounded person. Figure 9 shows scenes from this session. Earthshaker finished all the tasks in 11 minutes and 36 seconds.
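A preset control trajectory of the kind used for the lasso pickup can be as simple as replaying recorded joint-space waypoints with interpolation in between. The sketch below shows that idea; the waypoint values, segment timing, and function name are made-up assumptions for a generic 6-DOF arm, not the robot's real data.

```python
import numpy as np

def interpolate_trajectory(waypoints: np.ndarray, steps_per_segment: int) -> np.ndarray:
    """Linearly interpolate between consecutive joint-space waypoints."""
    samples = []
    for q0, q1 in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            samples.append((1 - t) * q0 + t * q1)
    samples.append(waypoints[-1])  # include the final pose exactly
    return np.array(samples)

# Three illustrative waypoints (radians): home -> above the lasso -> grasp pose.
waypoints = np.array([
    [0.0,  0.0, 0.0, 0.0, 0.0, 0.0],
    [0.2, -0.5, 0.8, 0.0, 0.3, 0.0],
    [0.2, -0.8, 1.1, 0.0, 0.5, 0.0],
])
traj = interpolate_trajectory(waypoints, steps_per_segment=10)
print(traj.shape)  # (21, 6): 2 segments x 10 samples + final pose
```

Streaming such a dense trajectory to the joint controllers at a fixed rate reproduces the taught motion without the operator steering every joint, which is what makes this kind of semi-autonomous pickup practical in near-zero visibility.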

Summary
Earthshaker performed well in each session of the competition, ranking first in two of the five sessions, which ultimately allowed it to take first place among all the robots. The overall scores are shown in Table 1: Earthshaker earned 109 points in the finals, well ahead of the runner-up's 79. Earthshaker stood out through its multimodal teleoperation, its modular and waterproof mechatronic design, and thorough experiments and practice before the ultimate test. The competition demanded a complete and robust rescue robot as a whole, not just any single advanced module. However, the competition also exposed some of Earthshaker's shortcomings. Its large size limited its flexibility of movement, making it hard to pass through narrow spaces in actual use. Likewise, the manipulator's payload was limited, so it could not complete dexterous manipulation tasks under large loads. Even though Earthshaker still had a lot of room for improvement, it was the excellent mechatronic integration and the advanced control philosophy behind it that made it the winner of the A-TEC championships 2020.

Conclusions
This paper introduces the rescue robot Earthshaker, including its system integration and control algorithms. The robot's performance was proven excellent during the A-TEC robotic championships in 2020. The unique swing arm – dozer blade structure extends the capability of the conventional tracked chassis, improving its performance in clearing cumbersome obstacles and climbing regular stairs. The multimodal teleoperation system provides the robot with redundancy and robustness when the operators cannot be on site. The finite autonomy in the operation of the robotic arm and gripper relieves part of the operators' workload. When teleoperation signals are lost, the robot can also enter the autonomous navigation mode to search for an exit by itself and then return control authority to the operators. Overall, the championship that Earthshaker earned has shown the efficacy of all the aforementioned efforts. It can play an important role in search and rescue in disaster scenarios such as nuclear accidents, toxic gas leaks, and fires, where human workers cannot be deployed due to radiation, toxic contamination, or structural collapse. Future efforts can be put into improving the robot's autonomy in the many foreseeable tasks of emergencies and disasters, to further increase its efficiency and robustness. More earthshaking endeavors in helping the human community can be expected from Earthshaker.