A method of building 3D maps with a reconfigurable computing environment for unmanned aerial vehicles (drones)

An unmanned aerial vehicle (UAV) is a flying device integrated with appropriate hardware and software; it also collects and processes data to meet the needs and purposes of its users. Currently, most drones use GPS to determine the flight path, while other onboard sensors measure distance and avoid collisions. However, in small, narrow spaces and environments too dangerous for humans to enter, where GPS does not work, a small drone carrying compact hardware that builds its own map and navigates by it is a suitable solution. In this paper, a method of building a high-performance reconfigurable computing environment model for mapping by drones and UAVs flying in 3D space is proposed. Mapping based on Boolean formulas with a reconfigurable structure and adjustable automatons generates high-precision maps for efficient robot navigation.


Introduction
UAVs in general, and drones in particular, are a breakthrough technology for performing difficult and dangerous tasks that humans cannot carry out, and they have become one of the most popular robotic platforms in the world. Moreover, drones are compact, flexible, and capable of flying at altitudes and speeds unreachable by humans, thereby improving efficiency and reducing costs in production activities. A UAV that is to navigate needs a map of its surrounding environment, so building maps for UAVs in indoor spaces becomes necessary. Based on the sensors mounted on the robot, its position can be estimated by processing the collected sensor data, and a map for navigation can then be built. This is in fact a multi-stage process that involves processing the sensor data with several algorithms suited to the system's parallel processing capabilities.

Building a reconfigurable computing environment system for UAV and drones in 3D space
The process of controlling a robot is treated as the operation of a finite automaton with a set of robot input states and a set of its actions. These automatons are built on top of the logical operations AND, OR, and NOT [1,2] and can change their structure depending on the setting signals. The operation of the system is shown in the overall structural model of Figure 1. Starting from the current state of the robot and the corresponding actions, the rule set is formed and, through it, the database of control rules is built. From there, the operating state space of the robot in the 3D environment is formed. This space serves as the basis for building the control algorithm for the robot. However, for the robot to be self-propelled (controlling itself from the state of the environment), the control algorithm must be integrated with the hardware system when the robot is built.
To build a reconfigurable computing environment system for UAVs and drones in 3D space, processing the navigation information received from the lidar sensor on the robot plays an important role. In 3D space (Oxyz coordinates), consider the case of the robot flying straight ahead. This space is partitioned by a 3-dimensional matrix (l×w×h), where l is the length of the space, w its width, and h its height. The space in which the robot flies is subdivided according to the rules of the 3D mapping algorithm for the robot. Specifically, the cube circumscribed around the robot is taken as the unit size of a block in the space in which the robot flies, which consists of blocks equal in size to this unit [2]. Therefore, the RCE environment built for the robot consists of identical cells, each the same size as the robot's circumscribed block.
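The partitioning above can be sketched in a few lines (a minimal Python sketch under the paper's assumptions; the function names, the use of a NumPy array, and the concrete dimensions are illustrative, not the authors' implementation):

```python
import numpy as np

def build_rce_grid(length, width, height, robot_size):
    """Partition a 3D space of l x w x h (metres) into cubic cells.

    Each cell has the side of the robot's circumscribed cube, so one
    cell corresponds to one elementary computing (EC) unit of the RCE.
    Cell value 0 = free, 1 = obstacle.
    """
    nl = int(length // robot_size)
    nw = int(width // robot_size)
    nh = int(height // robot_size)
    return np.zeros((nl, nw, nh), dtype=np.uint8)

def point_to_cell(x, y, z, robot_size):
    """Map a point in world coordinates to its (i, j, k) cell index."""
    return (int(x // robot_size), int(y // robot_size), int(z // robot_size))

# The 6x6x6 environment used later in the paper, robot cube of 1 m:
grid = build_rce_grid(6.0, 6.0, 6.0, 1.0)
print(grid.shape)  # (6, 6, 6)
```

Each index triple returned by `point_to_cell` then addresses one EC of the RCE matrix.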
By structuring the robot's operation, the robot can be viewed as a finite automaton whose states represent each position of the robot as it flies in the environment and whose action functions represent its moving actions. Each position of the robot is structured by logic functions (AND, OR, NOT), with the input and output connections of the function set by the current position and the next position of the robot. However, the robot's operation is customizable depending on the environment; the connection between the structures specific to each position and the corresponding motion of the robot is quite flexible, requiring a flexible way to control the robot's hardware. Then, using a reconfigurable computing environment with a pre-installed algorithm, the spatial map of the environment in which the robot flies is built.
Building an automaton by changing its internal structure through setting signals on basic logic functions is a new method. It allows the hardware structure to be changed dynamically. This method of integrating hardware on the robot can therefore be used to control the robot's operation so that it adapts to the external environment and is self-propelled [3,4]. Hence the idea of building a reconfigurable computing environment for a UAV or drone. In essence, this reconfigurable computing environment is the interconnection of the automatons at the physical level, considering the inputs and outputs of each automaton and the interconnections between them; each automaton plays the role of an elementary computing (EC) unit in the reconfigurable computing environment (RCE) [6].
In order to build suitable map-building algorithms for UAVs, the cases of the presence or absence of obstacles at each position in the RCE environment must be taken into account. From these cases the formula for the connection between the elements in the RCE environment is built, and through it the general formula for an automaton. This yields the general signal circuit diagram for the automaton shown in Figure 2, in which x1, x2, x3, ..., x10 are the inputs, f1, f2, f3, f4, f5 are the outputs, and z1, z2, ..., z6 are the setting signals.
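The interface of such an EC can be sketched as a settable Boolean function. The paper gives the circuit only as Figure 2, not as explicit formulas, so the expressions below are purely illustrative placeholders that match the interface (10 inputs, 5 outputs, 6 setting signals), not the actual Boolean formulas of the method:

```python
def ec_automaton(x, z):
    """One elementary computing (EC) cell as a settable Boolean circuit.

    x : tuple of 10 Boolean inputs (x1..x10)
    z : tuple of 6 Boolean setting signals (z1..z6) that reconfigure
        the element's structure
    Returns a tuple of 5 Boolean outputs (f1..f5).

    The per-output expressions here are assumptions: each output is an
    AND/OR/NOT combination of one input pair, with the setting signal
    selecting which combination is active; z6 is left unused, as its
    role is not specified in the text.
    """
    assert len(x) == 10 and len(z) == 6
    f = []
    for k in range(5):
        a, b = x[2 * k], x[2 * k + 1]   # one pair of inputs per output
        f.append((a and b) if z[k] else (a or not b))
    return tuple(f)
```

Setting the `z` signals differently yields a different Boolean structure from the same hardware, which is the reconfiguration idea the text describes.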
For the 3D environment, the reconfigurable computing environment model is built as several overlapping layers. When working in 3D, the cells of the RCE environment still take a binary value of 0 or 1, but now each cell's value corresponds to one block of the 3D environment.
It is assumed that the 3D environment is divided into blocks of equal size, the same size as the robot. The algorithm for constructing the reconfigurable automaton for the robot in 3D is based on the association between ECs within the same layer and between layers of the 3D environment. It is assumed that each layer has 36 elements (6×6), and the algorithm builds 6 layers connected from bottom to top in a space of 6×6×6 blocks.
In principle, the algorithm is divided into two cases: combined with the Octree algorithm to form groups, or without grouping.
In the case without grouping: when the robot flies straight ahead and detects an obstacle at a position in a layer, the corresponding cell of that layer in the RCE environment is updated. In addition, because a lidar is used, a separate rule is set: if an obstacle is detected, the cells behind the obstacle are by default also considered to contain an obstacle.
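The no-grouping update with the lidar shadow rule can be sketched as follows (an illustrative Python sketch; the choice of the first array axis as the forward flight direction is an assumption):

```python
import numpy as np

def update_no_grouping(grid, i, j, k):
    """Mark cell (i, j, k) of the RCE grid as an obstacle and apply the
    lidar shadow rule: every cell behind the obstacle along the forward
    axis is also marked occupied, since the lidar cannot see past it.

    Assumption: axis 0 of `grid` is the straight-ahead flight direction.
    """
    grid[i:, j, k] = 1   # the obstacle cell plus everything behind it
    return grid

g = np.zeros((6, 6, 6), dtype=np.uint8)
update_no_grouping(g, 2, 3, 1)   # cells (2..5, 3, 1) become occupied
```

Cells the beam never reaches stay at 0, so only detected obstacles and their shadows are written.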
For the case with grouping: an elementary computing (EC) model consisting of 4 layers is built, where each layer has 4 groups and each group has 4 elements. Each element has 10 inputs and 5 outputs. The graphical representation of these layers is shown in Figure 3. The principle for building the connections between the elements of the reconfigurable computing environment in this case is as follows: a group consists of 8 elements (2×2×2), i.e. 4 elements of the group in the layer currently being considered and the 4 elements of the layer above that are directly connected to them [5]. When the robot flies straight ahead, consider each coordinate frame through which the robot moves: whenever a cell in the RCE environment is found to contain an obstacle, the 3 cells of the same group connected to that cell in the same layer are also considered to contain obstacles, and the 4 cells above those 4 cells in the upper layer are considered obstacles as well.
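The group update rule above amounts to marking the whole 2×2×2 block that contains the detected cell. A minimal sketch (the assumption that group boundaries fall on even indices, and that axis 2 is the vertical layer axis, is mine, not stated in the paper):

```python
import numpy as np

def update_with_grouping(grid, i, j, k):
    """Octree-style group update: when cell (i, j, k) is found occupied,
    mark its entire 2x2x2 group, i.e. the 3 same-layer neighbours of the
    cell plus the 4 cells of the layer directly above them.

    Assumptions: groups are aligned to even indices; axis 2 is the
    vertical (layer) axis.
    """
    gi, gj, gk = (i // 2) * 2, (j // 2) * 2, (k // 2) * 2
    grid[gi:gi + 2, gj:gj + 2, gk:gk + 2] = 1   # all 8 cells of the group
    return grid
```

One lidar detection therefore updates eight cells at once, which is where the speedup reported below comes from.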
With the additional use of the Octree grouping algorithm, areas of the map containing obstacles are processed 2.2 times faster than without grouping. Furthermore, Octree is an efficient algorithm for organizing and storing data in a 3D environment thanks to its memory efficiency, high access speed, adaptability, flexibility, and support for sparse data.

Reconfigurable computing environment simulation model to process maps in 3D space
Without loss of generality of the mapping algorithm in 3D space, standard modeling principles are used; the resulting simulation model of the RCE is shown in Figure 4 with dimensions 2×2×2. In Figure 4, the entire model is divided into structural blocks (SB) marked with dotted lines. The simulation works as follows. SB1 defines and receives the input data of the model, namely information about the robot's distance from the obstacle and the position of the robot being processed, as well as the set of RCE setup codes zij, where ij is the index of the EC in the RCE matrix. SB2 represents the 6 RCE layers with dimensions of 2×2×2 EC; each layer is 6×6 EC in size. Layer 6 shows the map of the environment at the top level (6 meters); layer 5 shows the map of the 5th level below layer 6 (a UAV height of 5 meters). Layer 4 shows the map of the 4th level, corresponding to 4 meters, and layer 3 the map of the 3rd level, corresponding to 3 meters. Layers 2 and 1 show the maps of the 2nd and 1st levels at 2 meters and 1 meter, respectively. SB3 represents the display blocks of the 3D environment; the display block shows the exact values of the area built in the 3D environment with 216 cells. After the map processing algorithm runs, the visual map image and the result are formed.

Connecting the robot to the RCE simulation system, testing and evaluating
The robot simulation model (Robot Operating System, ROS) is connected so that the robot can fly in Gazebo space and connect to the reconfigurable computing environment model in Matlab. To connect the ROS robot simulation environment to Matlab, there are two configuration options: connect using the master URI (Uniform Resource Identifier) with the hostname, or use the IP address of the server on which the master is running. The set of related ROS capabilities, such as publishers, subscribers, and services, forms a complete ROS node that connects to Simulink in Matlab. These ROS entities (publishers, subscribers, and services) process data and exchange messages: publishers send messages on a specific topic and subscribers receive these messages.
To connect to ROS, the ROS network is initialized with "rosinit" in Matlab. By default, rosinit creates a ROS master in MATLAB and starts a global node connected to the master. This global node is used automatically by the other ROS functions. In the Matlab environment, a subscriber to "/scan" receives the laser data, and "/tf" monitors in real time the coordinate frames through which the robot flies. All information about the robot's coordinate system is provided by "/tf". For that reason, it is easy to determine the coordinates of the moving robot, as well as its distance to the surrounding obstacles, in order to build a map for the robot with the RCE system built in Matlab.
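The processing step the text describes, combining the "/scan" ranges with the robot pose from "/tf" to locate obstacle cells, can be sketched in plain Python (no ROS dependency; the function and parameter names mirror the fields of the ROS `sensor_msgs/LaserScan` message, but the routine itself is an illustrative assumption, not the authors' Matlab code):

```python
import math

def scan_to_cells(pose_xy, yaw, ranges, angle_min, angle_increment,
                  cell_size, max_range):
    """Convert one laser scan into the set of occupied (i, j) cell
    indices on the current layer.

    pose_xy, yaw : robot position and heading (as obtained from /tf)
    ranges, angle_min, angle_increment : LaserScan-style beam data
    cell_size : side of one RCE cell; max_range : sensor range limit
    """
    cells = set()
    x0, y0 = pose_xy
    for n, r in enumerate(ranges):
        if not (0.0 < r < max_range):
            continue                      # no return on this beam
        theta = yaw + angle_min + n * angle_increment
        x = x0 + r * math.cos(theta)      # obstacle point in world frame
        y = y0 + r * math.sin(theta)
        cells.add((int(x // cell_size), int(y // cell_size)))
    return cells
```

Each returned index pair addresses one EC of the current layer, whose value is then set to 1 by the update rules above.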
However, communicating with different robots has unique features. In particular, with the Hector-quadrotor UAV robot, "rossubscriber" can obtain information about the robot's position from the topic "ground_truth_to_tf / pose" and the laser scan data of the environment via the topic "/scan". New sensors are usually evaluated by comparing them to a "ground_truth" measurement.
To communicate between ROS, Gazebo and the RCE system, Matlab uses an S-Function. It makes the connection with ROS easy, and the proposed algorithm can also run inside it. Once the connection works in the simulation environment described above, the laser data and the robot position data are obtained in Matlab. The general architectural model combining these three components is shown in Figure 4.1. In parallel with tracking each coordinate frame through which the robot moves, the transmitted data are processed and fed into the designed RCE model through the "Demux" block. The "Demux" block separates the output signal of the S-Function and converts it into the input signal of the RCE model.
A Hector-Quadrotor drone environment in 3D was created to test the operation of the RCE environment. To examine the specific relationship between layers in the 3D environment, the environment for the Hector-Quadrotor robot (Figure 6) is built on a smaller area, namely 6×6×6. In accordance with this environment, an RCE environment with 216 ECs and integrated connections between the 6 layers is built. When the Hector-Quadrotor robot flies out of the built-in 6×6×6 area, all ECs in the RCE environment take the value 0 (Figure 7).
To manage layer data in the RCE environment more effectively, the following rule was developed for the layers. Considering only straight-ahead flight, the robot flies from the starting point to the end of the length of the space on one layer, then turns back and flies from the end to the beginning of the space on the next layer (Figure 8). With each movement of the robot the map is updated: locations with obstacles receive the value 1, and locations without obstacles receive the value 0. The results for the two cases, without grouping and with grouping used in conjunction with the Octree algorithm, are shown in Figure 9.
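The layer traversal rule above is a serpentine sweep, which can be sketched as a short generator (an illustrative Python sketch; the exact turn logic at the end of each layer is an assumption, the paper only states that the direction reverses between layers):

```python
def layer_sweep(length_cells, layers):
    """Yield the (i, k) cell positions of the flight pattern described
    above: on even layers the robot flies forward along the full length
    of the space, on odd layers it flies back from the end to the
    beginning, layer by layer from bottom to top.

    length_cells : number of cells along the length of the space
    layers       : number of layers (heights) to sweep
    """
    for k in range(layers):                       # bottom to top
        if k % 2 == 0:
            cells = range(length_cells)           # start -> end
        else:
            cells = range(length_cells - 1, -1, -1)  # end -> start
        for i in cells:
            yield (i, k)

# For the paper's 6x6x6 environment: layer_sweep(6, 6)
```

At each yielded position the map update of the chosen case (with or without grouping) is applied.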

Conclusion
Building a 3D map of the robot's environment is the result obtained by building the reconfigurable computing environment model system. The resulting map accurately matches the environment in which the robot flies; moreover, it is generated in real time with high data processing speed. The ECs in the RCE model offer reliability, fast processing, and uniformity. The simplicity of the technology for manufacturing these computing tools is ensured by using identical elements and similar connections between them, which contributes significantly to reducing the production cost of integrated circuits. Furthermore, reconfiguring the right structure for each task, by programmatically changing the relationships between the elements and the automaton functions of the elements themselves, makes them perform tasks faster and more flexibly.