Research on Key Technologies of Autonomous Driving Platform

With continuous economic development, people's living standards are improving and travel demand is increasing accordingly. This growing demand brings many problems to transportation, such as travel efficiency, traffic safety, and environmental protection, and autonomous driving technology provides new ideas for solving them. Autonomous driving is a hotspot and frontier in the field of artificial intelligence. An autonomous driving system is a very complex system that integrates today's automotive electronics, sensor, computer and artificial intelligence technologies, and it is one of the frontiers and research hotspots of today's automotive technology revolution. This article first analyzes and summarizes three typical automated driving methods, then analyzes the principles of autonomous driving, focusing on its key technologies, with the aim of designing a safe and reliable autonomous driving system.


Introduction
In recent years, with social and economic development, the automobile, as an increasingly popular means of transportation, has become an indispensable part of life. With continuous economic development and ever-increasing travel demand, people are becoming more and more dependent on transportation, which brings many problems in travel efficiency, traffic safety, environmental protection and other aspects; the autonomous driving system provides new solutions to these traffic problems. At present, autonomous driving technology has become an important development direction of the automotive industry, and its development benefits from the application and promotion of artificial intelligence technology. The key technologies of autonomous driving include environmental perception, precise positioning, decision planning, control execution, high-precision maps and the Internet of Vehicles, as well as the related testing and verification technologies. This paper studies the key technologies of autonomous driving systems, hoping to provide a reference for research on autonomous driving technology.

Indirect perception method
The indirect perception method parses and understands the driving scene in detail before the decision-making and planning of autonomous driving. Its system structure is shown in Figure 1. The method comprises multiple computer-vision sub-tasks, with subsystems such as target detection, target tracking, scene semantic segmentation, and 3D reconstruction. The environmental perception part includes lane line detection, curb detection, guardrail detection, pedestrian detection, motor vehicle detection, non-motor vehicle detection, traffic sign detection, and so on; almost all driving-related scene information must be detected. The advantage of the indirect perception method is its clear modularity.
Each computer-vision subtask can be updated and optimized according to the latest research results. The coupling between modules is low, so they can be combined and adjusted easily; faults are easy to trace, and the system is highly interpretable.

Direct perception method
The direct perception method directly learns the vehicle state information represented behind the image to guide driving decisions, without analyzing specific target information in the image. It is an optimization of the indirect perception method: instead of subdividing and understanding the whole scene, it directly learns the key indicators related to driving, which generally include the distances to surrounding vehicles and to the lane markings. The system structure is shown in Figure 2. In the direct perception method, a target detection algorithm is no longer used; instead, a convolutional neural network directly learns the various road parameters represented by the car's first-person-view image, which greatly reduces the complexity of the system.

End-to-end control method
End-to-end is a concept from deep learning. In the field of autonomous driving, the end-to-end method feeds the signals collected by the vehicle's vision sensors directly into a unified neural network, which directly outputs the quantities closely related to control. The system structure of this method is shown in Figure 3. This end-to-end approach has proved very powerful: with minimal training data, the system can learn to drive on roads with or without lane markings as well as on highways.

Figure 3. End-to-end controlled autonomous driving system structure
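As a toy illustration of the end-to-end idea, the sketch below maps a camera frame straight to a steering command with a single convolution and a dense layer. Everything here (the 16x16 frame size, the random weights, the `TinyEndToEndNet` name) is a hypothetical stand-in for a full convolutional network trained on driving data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

class TinyEndToEndNet:
    """Toy stand-in for an end-to-end driving network:
    camera image in, steering command out."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.kernel = rng.normal(size=(3, 3)) * 0.1   # one conv filter
        self.w = rng.normal(size=(14 * 14,)) * 0.01   # dense-layer weights

    def predict_steering(self, image):
        x = image / 255.0                              # normalize pixels
        x = np.maximum(conv2d(x, self.kernel), 0.0)    # conv + ReLU
        return float(np.tanh(x.flatten() @ self.w))    # steering in [-1, 1]

net = TinyEndToEndNet()
frame = np.zeros((16, 16))            # dummy 16x16 camera frame
angle = net.predict_steering(frame)   # a single steering command
```

In a real system the network would be trained end to end on recorded human driving, so the intermediate representation is never hand-designed.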

Principles of Autonomous Driving
Self-driving cars integrate the most advanced automation, electronics, computer, and artificial intelligence technologies, and use vision, radar, monitoring devices and other driver-assistance technologies in coordination, allowing the computer to replace the human brain in receiving information, thinking and judging, and to replace the human driver in operating the vehicle automatically and safely. Through its input sensors, the automatic driving system collects vehicle operating parameters (voltage, current, temperature, pressure, fuel consumption, steering, braking, acceleration, parking, emissions, etc.) and driving-environment parameters (obstacle information, road environment characteristics, conditions of vehicles ahead and behind, road congestion, weather conditions, parking lot information, map environment, etc.). After obtaining this information, the planning and decision-making module performs real-time calculation, planning and decision-making, forms automatic driving execution instructions, and outputs them to the execution module. The execution module receives and interprets these instructions and drives the actuators (electronic throttle, electronic steering, electronic brake, electronic gear shift, etc.) to operate the vehicle automatically and safely, completing automatic driving. The automatic driving system is in fact a hierarchical structure in which the perception, planning, and control modules each play different roles and influence one another: the perception part interacts and communicates with the vehicle's sensor hardware, planning is mainly responsible for computing the vehicle's behavior, and control is the electronic operation of the car's components, as shown in Figure 4.
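One tick of the hierarchical perceive-plan-act structure described above can be sketched in a few lines. The braking rule, the 4 m/s² deceleration figure, and the `Perception` fields are illustrative assumptions, not the system's actual modules.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Simplified snapshot from the sensing layer."""
    obstacle_distance_m: float
    speed_mps: float

def plan(p: Perception) -> str:
    """Toy planning rule: brake when an obstacle falls within the
    stopping distance plus a 5 m safety margin, otherwise cruise."""
    stopping_distance = p.speed_mps ** 2 / (2 * 4.0)  # assume 4 m/s^2 braking
    return "BRAKE" if p.obstacle_distance_m < stopping_distance + 5.0 else "CRUISE"

def control(command: str) -> dict:
    """Translate the planning decision into actuator set-points,
    as the execution module would for throttle and brake."""
    if command == "BRAKE":
        return {"throttle": 0.0, "brake": 0.8}
    return {"throttle": 0.3, "brake": 0.0}

# one perceive -> plan -> act cycle
snapshot = Perception(obstacle_distance_m=20.0, speed_mps=15.0)
actuation = control(plan(snapshot))
```

In a real vehicle this loop runs continuously at a fixed rate, with each layer consuming the output of the one above it.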

Environmental perception technology
To ensure a correct understanding of the vehicle's surroundings and the corresponding decision-making, the environment perception part of an autonomous driving system usually needs to obtain a large amount of environmental information, mainly including lane line detection, traffic light recognition, traffic sign recognition, pedestrian detection, vehicle detection, and so on. Fusing lidar and camera data is one way to realize pedestrian detection, and it mainly involves spatial fusion and temporal fusion. Spatial fusion refers to establishing a coordinate transformation from the lidar coordinate system to the image coordinate system, so that points in the lidar coordinate system can be mapped into the image coordinate system. Temporal fusion mainly keeps the outputs of the sensors on the same timeline to facilitate multi-sensor fusion.
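Spatial fusion boils down to a rigid-body transform followed by a camera projection. The sketch below projects lidar points into pixel coordinates; the intrinsic matrix `K` and the extrinsics `R`, `t` are made-up calibration values for illustration, not real sensor parameters.

```python
import numpy as np

# Hypothetical calibration: K is the camera intrinsic matrix, and
# R, t map lidar coordinates into the camera frame (spatial fusion).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # assume aligned axes for this sketch
t = np.array([0.0, 0.0, 0.0])  # assume coincident origins

def project_lidar_to_image(points_lidar):
    """Map an Nx3 array of lidar points to Nx2 pixel coordinates (u, v)."""
    cam = points_lidar @ R.T + t      # lidar frame -> camera frame
    uvw = cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide by depth

pts = np.array([[0.0, 0.0, 10.0]])    # a point 10 m ahead on the optical axis
pixels = project_lidar_to_image(pts)  # lands at the principal point
```

With real sensors, `R` and `t` come from extrinsic calibration between lidar and camera, and temporal fusion ensures the point cloud and image were captured at (or interpolated to) the same timestamp before this projection is applied.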
The two mainstream approaches to lane line detection are vision-based and radar-based. Traffic sign recognition is similar to traffic light detection: a deep neural network can detect traffic signs directly on the raw image, or the detection can be combined with a high-precision map that stores the traffic sign information, so that while driving the vehicle obtains the sign information directly from the map according to its own location. There are two common ways to detect pedestrians and vehicles: one uses lidar data directly for target detection, and the other fuses lidar and camera data for target detection.
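The high-precision-map variant of sign recognition reduces to a position-based lookup. The sketch below uses a hypothetical two-entry map fragment and a simple radius query; real HD maps use spatial indexes and far richer sign records.

```python
import math

# Hypothetical HD-map fragment: traffic signs stored with map coordinates.
hd_map_signs = [
    {"type": "speed_limit_60", "x": 105.0, "y": 40.0},
    {"type": "stop",           "x": 300.0, "y": 12.0},
]

def signs_near(vehicle_x, vehicle_y, radius_m=50.0):
    """Return the signs within radius_m of the vehicle's position,
    a localization-based alternative to detecting signs in the image."""
    return [s for s in hd_map_signs
            if math.hypot(s["x"] - vehicle_x, s["y"] - vehicle_y) <= radius_m]

nearby = signs_near(100.0, 35.0)   # picks up the nearby speed-limit sign
```

The accuracy of this approach depends entirely on the vehicle's localization quality, which is why it is usually combined with, rather than substituted for, on-image detection.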

SLAM autonomous navigation technology
Research on autonomous navigation in known environments is relatively mature, but research on autonomous navigation in unknown environments has not yet formed a unified and complete system. In a known environment, the car obtains known landmark information through its external on-board sensors and uses it to correct its own position, compensating for the error accumulated by the internal sensors. In an unknown environment, the car cannot obtain the surrounding environment information in advance, and simultaneous localization and mapping (SLAM) is a solution for this case. SLAM is a technology that enables a mobile platform to gradually build a map from its initial position in an unknown environment while simultaneously computing its absolute position within that map. The navigation system occupies a very important position in solving the problem of autonomous navigation. At present, the most commonly used autonomous navigation system combines GPS and inertial navigation; however, when the GPS signal is unavailable or interfered with, navigation cannot be performed accurately, and other auxiliary or alternative methods are needed. SLAM technology fills this gap well.
SLAM implementations can be divided by sensor into two categories: laser and vision. In recent years, computer vision has developed rapidly and cameras have become smaller and cheaper, so visual SLAM has become a hot research direction. Figure 5 shows the composition of visual SLAM and the division of its research directions. By sensor, visual SLAM can be divided into monocular, binocular and RGB-D approaches. A monocular camera cannot directly obtain the depth of a pixel; it can only recover the scene structure, up to an unknown scale, by matching and tracking across consecutive images, which makes it applicable to scenes of various scales. A binocular camera obtains the pixel correspondences between the left and right images through stereo matching and then triangulates to obtain pixel depth; however, it places higher demands on the system structure and, being limited by the baseline length, is not suitable for long-distance measurement. An RGB-D camera obtains depth directly based on the time-of-flight principle; compared with the first two, it has a higher cost, a limited range, and is more susceptible to interference.
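For the binocular case, the triangulation step on a rectified stereo pair reduces to the standard relation Z = f·b/d. The sketch below uses assumed focal-length and baseline values and shows why a short baseline limits range: depth grows as disparity shrinks, so distant points have tiny, noise-prone disparities.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * b / d,
    where f is the focal length in pixels, b the baseline in meters,
    and d the left-right pixel disparity from stereo matching."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed rig: 700 px focal length, 0.12 m baseline.
z_near = stereo_depth(700.0, 0.12, 8.4)   # larger disparity -> closer point
z_far = stereo_depth(700.0, 0.12, 4.2)    # half the disparity -> twice the depth
```

Halving the disparity doubles the estimated depth, so a one-pixel matching error at long range translates into a large depth error, which is the practical reason binocular SLAM is confined to shorter distances.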

Autonomous driving architecture design
Whether for an ordinary car or a present-day self-driving car, control is ultimately implemented through the steering, brake, and accelerator pedal. Steering control is the key link in realizing the vehicle's desired trajectory: the automatic steering control of a self-driving car is the core of keeping the car on its predetermined trajectory and ensuring safe driving. Self-driving cars use on-board equipment together with electronic equipment on the roadside and road surface to perceive changes in the surrounding driving environment. The decision-making controller integrates vehicle and road information and, through the identification of lane lines and the judgment of the driver's operating intention, calculates the target driving trajectory, generates control instructions, and transmits them through the CAN bus to the low-level execution controller, which realizes the control of braking, steering and throttle. For steering control in particular, the control quantity, which can be a motor torque, a steering wheel angle, and so on, is transferred to the EPS, and the EPS realizes automatic steering by executing the steering control quantity issued by the decision-making layer. The control flow of the autonomous vehicle is shown in Figure 6.

Figure 6. Self-driving car control flow
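One common way to turn the target trajectory into a steering-angle command for the EPS is a PID loop on the cross-track error. The sketch below is a minimal version; the gains, the 20 ms control period, and the ±1 saturation limit are illustrative assumptions, not values from an actual vehicle.

```python
class PIDSteering:
    """Minimal PID controller producing a normalized steering command
    from lateral (cross-track) error, of the kind a decision layer
    might send to the EPS over the CAN bus."""
    def __init__(self, kp=0.8, ki=0.02, kd=0.3, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, cross_track_error_m):
        self.integral += cross_track_error_m * self.dt
        # no derivative term on the first sample (no previous error yet)
        derivative = 0.0 if self.prev_error is None \
            else (cross_track_error_m - self.prev_error) / self.dt
        self.prev_error = cross_track_error_m
        angle = (self.kp * cross_track_error_m
                 + self.ki * self.integral
                 + self.kd * derivative)
        return max(-1.0, min(1.0, angle))   # saturate to actuator limits

pid = PIDSteering()
cmd = pid.step(0.5)   # vehicle is 0.5 m off the target trajectory
```

The saturation step matters in practice: the EPS accepts a bounded torque or angle, so the controller output must be clipped before it goes on the bus.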

Conclusion
Autonomous driving technology integrates today's automotive electronics, sensor, computer and artificial intelligence technologies, and is one of the frontiers and research hotspots of today's automotive technology revolution. This technological breakthrough will bring revolutionary changes to modern automobile technology, fundamentally liberate drivers, greatly improve driving comfort, safety, and reliability, and will surely generate strong market competitiveness and become a new growth point in the automobile economy. On the basis of a deep understanding of the concept,