Artificial Intelligence Platform Construction and Integration Based on Multi-sensor Fusion

Sensors are an important source of data. By integrating mechanical lidar, inertial measurement units (IMUs), and other sensors into an artificial intelligence platform, data can be captured and stored effectively. With sensor fusion, gyroscope and accelerometer data are transferred to the back end, where the platform aligns their timestamps to extract useful, usable information. By combining mechanical lidar with millimeter-wave radar, the platform exploits the complementary strengths of the related sensors, capturing data independently from each device and storing it by timestamp. This paper focuses on the construction and integration of an artificial intelligence platform based on multi-sensor fusion.

The progress of artificial intelligence depends on combining algorithms, data, and computing power to achieve better results. At this stage, computing power requires hardware breakthroughs, and algorithmic progress requires sustained effort from algorithm engineers. What separates the major companies is how efficiently they use their data, and an artificial intelligence data center is arguably the best solution to that problem. Building such a platform early and optimizing it continuously is therefore a significant step toward platform intelligence.

Artificial intelligence platform based on multi-sensor fusion
In modern control systems, sensors sit at the interface between the object under test and the measuring system, forming the main "window" for information input: they provide the raw information needed for control, processing, decision-making, and execution, and directly determine what the system can do. Multi-sensor integration and fusion is currently a research hotspot in the field of intelligent machines and systems.

Based on the types of sensors selected, a physical connection scheme is designed: most sensors are connected over Ethernet, a few over the CAN bus. On top of each sensor's SDK, uniform control of reading, writing, streaming, pausing, and other functions is implemented, and a graphical user interface (GUI) was developed so users can choose which sensors to start for data collection. Laboratory personnel can take the platform into the field to collect data, and researchers can later apply algorithms to study the collected data.

The artificial intelligence platform provides a toolkit for users to build intelligent applications. Such platforms combine intelligent decision algorithms with data so that developers can create business solutions. Some platforms provide pre-built algorithms and simple workflows, with drag-and-drop modeling and visual interfaces that connect the necessary data to the final solution; others require more development and coding knowledge. Beyond general machine learning functions, the available algorithms can include image recognition, natural language processing, speech recognition, recommendation systems, and predictive analytics.
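The uniform read/write/start/pause control described above can be sketched as a common wrapper class over each vendor SDK. The class and method names below are illustrative assumptions, not the project's actual API:

```python
import abc

class Sensor(abc.ABC):
    """Hypothetical base class wrapping a vendor SDK behind uniform controls."""

    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        """Begin acquisition (a real driver would call the vendor SDK here)."""
        self.running = True

    def pause(self):
        """Pause acquisition."""
        self.running = False

    @abc.abstractmethod
    def read(self):
        """Return one frame of raw data from the device."""

class LidarSensor(Sensor):
    def read(self):
        # Placeholder frame; a real driver would return a point cloud.
        return {"sensor": self.name, "points": []}

lidar = LidarSensor("mechanical-lidar")
lidar.start()
frame = lidar.read()
```

A GUI then only needs to hold a list of `Sensor` objects and call `start`/`pause`/`read` uniformly, regardless of the underlying hardware.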
The AI platform is an infrastructure for building large-scale intelligent services. It provides step-by-step construction and life-cycle management of the required algorithm models, so that business logic can be sunk into algorithm models for reuse, composition, innovation, large-scale construction of intelligent services, and business empowerment. Each application continuously produces data; each business module's data is collected, then uniformly cleaned, classified, corrected, labeled, defined, granulated, and indexed into the data center. Various algorithms and machine learning methods are then applied on top, forming the artificial intelligence platform.
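A minimal sketch of the cleaning, labeling, and indexing stages mentioned above; the record fields and labeling rule are made-up illustrations, not the platform's actual schema:

```python
def clean(record):
    """Drop empty fields and strip stray whitespace (illustrative rules)."""
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in record.items() if v is not None}

def label(record):
    """Attach a coarse label; the rule here is a placeholder."""
    record["label"] = "lidar" if "points" in record else "generic"
    return record

def index_by_timestamp(records):
    """Index cleaned, labeled records by their timestamp key 'ts'."""
    return {r["ts"]: r for r in records}

raw = [{"ts": 1, "points": [0.1], "note": "  ok "},
       {"ts": 2, "value": 3, "bad": None}]
store = index_by_timestamp([label(clean(r)) for r in raw])
```

Each stage is a pure function over records, so new cleaning or labeling rules can be composed into the pipeline without touching the others.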
As artificial intelligence develops, Internet platforms will become more intelligent and information-rich. Multi-sensor integration and fusion technology emerged precisely to overcome the limitations of a single sensor: multiple sensors can not only describe the same environmental feature with redundant information, but also describe different features. For example, a mobile robot with multiple sensors can obtain the distance to an object from stereo vision and from ultrasonic sensors independently, while also obtaining information about the object's shape visually.
Using multiple sensors also allows data acquisition and processing to run in parallel, improving the performance of the whole system. In addition, other technological developments, especially in computer and sensor technology, lay the hardware and software foundations for multi-sensor integration and fusion. Compared with a single-sensor system, integrating and fusing multiple sensors not only yields more comprehensive and accurate information but can also reduce cost and time. In summary, the characteristics of multi-sensor integration and fusion can be stated as redundancy, complementarity, timeliness, and low cost.

The mode of artificial intelligence platform construction and integration based on multi-sensor fusion
Artificial intelligence combined with big data platforms is one of the main research topics in the industry; their organic fusion will make big data platforms more intelligent, so building an artificial intelligence platform on multi-sensor fusion is of great practical significance for widening the field of application. Research on integration patterns and frameworks studies how to build a general information-processing pattern, i.e. a fusion structure, for multi-sensor fusion. The commonly used approaches at present are the layered phase-plane pattern, neural networks, logical sensors, and object-oriented design.

In the layered phase-plane model, information processing is divided into four stages according to the time and scope of perception: the remote stage, the approach stage, the contact stage, and the operation stage. In a neural network, trained neurons represent sensory information, and through associative memory, complex combinations of neurons respond to different sensory stimuli. In the logical-sensor approach, each physical sensor is abstracted as a logical sensor, which provides a unified framework for multi-sensor integration and can adopt a network structure for information processing. In object-oriented design, sensors are expressed as objects that communicate with each other by messages, and a sensor's processing can be invoked with a message.

The neural network method does not attend to the details of information processing, caring only about the mapping between input and output; it has good application prospects in fields such as object recognition. Logical sensors and object-oriented design are more convenient for the design and implementation of multi-sensor systems, and many systems already use these two methods. Both pay more attention to the details of information processing and both use a layered structure, which provides a flexible framework for multi-sensor integration and fusion.
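The object-oriented design described above, with sensors as objects that communicate by messages, can be sketched as follows; the bus and class names are assumptions for illustration:

```python
class Message:
    def __init__(self, topic, payload):
        self.topic, self.payload = topic, payload

class Bus:
    """Dispatches messages between registered sensor objects."""
    def __init__(self):
        self.subscribers = []

    def register(self, obj):
        self.subscribers.append(obj)

    def dispatch(self, msg, sender):
        # Deliver to everyone except the sender.
        for sub in self.subscribers:
            if sub is not sender:
                sub.receive(msg)

class SensorObject:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        self.inbox = []
        bus.register(self)

    def publish(self, topic, payload):
        # A sensor's processing is invoked by sending it a message.
        self.bus.dispatch(Message(topic, payload), sender=self)

    def receive(self, msg):
        self.inbox.append(msg)

bus = Bus()
lidar = SensorObject("lidar", bus)
fusion = SensorObject("fusion-node", bus)
lidar.publish("range", 4.2)
```

Because every object speaks only in messages, new sensors or fusion nodes can be added without modifying the existing ones, which is the flexibility the text attributes to this design.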
Research on fusion methods concerns the algorithms for data fusion. The essence of multi-sensor fusion is the processing of uncertain information from multiple sources, which is a complicated process. As mentioned before, information flows bottom-up through the system, and its representation constantly changes. Moreover, the uncertainty of the information can take random or fuzzy forms, come with prior information, or come with none at all, and each form of representation calls for different processing. This paper introduces some typical fusion methods: the weighted average, the Kalman filter, Bayesian estimation with consistent sensors, multi-Bayesian estimation, statistical decision theory, evidential reasoning, fuzzy logic, and production rules.
The weighted average method is the simplest and most intuitive fusion method and is suited to dynamic environments: it takes a weighted average of a set of redundant raw sensor readings as the final fusion result. The Kalman filter is widely used in multi-sensor fusion systems; it is suitable for processing dynamic, low-level, redundant data and has good real-time performance. Bayesian estimation with consistent sensors first removes sensor readings that may be wrong and then fuses the redundant information provided by the remaining "consistent" sensors; it is suitable for static environments. In the multi-Bayesian approach, all sensors are treated as a group of decision makers who must reach a consistent group result.
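The first two methods above can be illustrated in a few lines of Python: a weighted average over redundant readings, and a minimal scalar Kalman filter. This is a textbook sketch with assumed noise parameters, not the platform's implementation:

```python
def weighted_average(readings, weights):
    """Fuse redundant readings of the same quantity by weighted averaging."""
    return sum(r * w for r, w in zip(readings, weights)) / sum(weights)

class Kalman1D:
    """Minimal 1-D Kalman filter for a roughly constant signal.
    q is process-noise variance, r is measurement-noise variance (assumed values)."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                  # predict: uncertainty grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

fused = weighted_average([1.0, 3.0], [1.0, 1.0])   # equal weights -> 2.0

kf = Kalman1D()
for z in [5.1, 4.9, 5.0, 5.2, 4.8] * 4:            # noisy readings around 5.0
    estimate = kf.update(z)
```

After the twenty noisy updates, the filter's estimate settles near the true value of 5.0, while any single reading is off by up to 0.2.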
Since most of the algorithms are implemented in Python, each Python implementation is wrapped in an HTTP service, starting with the Flask framework. For clustering the algorithm services, NGINX serves as a reverse proxy: one NGINX cluster can route to many different algorithm clusters (for example, one cluster per algorithm type). With a sidecar component, each algorithm service is also registered in Spring Cloud, so it can be called like any internal Spring Cloud service. All the hardware (mechanical lidar, millimeter-wave radar, pan-tilt head, dual-optical camera, IMU, industrial camera, Ubuntu host, switch) must be connected together: the mechanical lidar, dual-optical cameras, IMU, and industrial cameras connect to the Ubuntu host through the switch, while the millimeter-wave radar and pan-tilt head connect to the host through a USB2CAN adapter.
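The paper wraps each Python algorithm in a Flask HTTP service behind NGINX. To keep this sketch self-contained (no third-party dependency), the same wrap-an-algorithm-behind-HTTP pattern is shown below with the standard library; the endpoint, payload shape, and `run_algorithm` stand-in are assumptions:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_algorithm(payload):
    # Stand-in for a real fusion algorithm; just sums the input values.
    return {"sum": sum(payload.get("values", []))}

class AlgoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        out = json.dumps(run_algorithm(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), AlgoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Call the service as a client, as NGINX or a Spring Cloud sidecar would.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"values": [1, 2, 3]}).encode(),
    headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
```

In the architecture described, many such services sit behind NGINX, which forwards each request to the appropriate algorithm cluster.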

Artificial Intelligence platform construction and integrated design based on multi-sensor fusion
Research on the control structure concerns how to effectively control the process of multi-sensor integration and fusion. Depending on the characteristics of the application, there are three approaches: a hierarchical sensing-and-control structure, a distributed blackboard, and adaptive learning. The hierarchical structure consists of three parts, the sensing process, the task-decomposition process, and the world model connecting them; through their cooperation, hierarchical control is realized, and the hierarchy is well suited to handling complex tasks. The distributed blackboard structure allows communication between subsystems and effective distributed control through an information center, and is especially suitable for applications with real-time requirements.
In adaptive learning, the system finds the appropriate control signal from the sensor output, an ability acquired through training. This structure suits control in dynamic environments, but the learning methods themselves still have open problems, such as the generalization ability of neural networks and whether learning can be done online. Sensor selection is part of multi-sensor integration; it enables a multi-sensor system to choose the most appropriate combination of the available sensors. The two basic methods at present are pre-selection and real-time selection. Pre-selection is an optimal design method: based on the specific application and an optimality criterion, the sensor configuration is fixed in advance. Real-time selection, by contrast, configures the sensors while the system is running, reconfiguring in real time as the system changes to achieve local optimality. Their applications differ: the former suits static environments, the latter dynamic ones.
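The two selection strategies can be contrasted with a small sketch; the costs, accuracy gains, and variances below are made-up illustrative numbers:

```python
def preselect(candidates, accuracy_target):
    """Pre-selection: offline, greedily add the cheapest sensors until the
    combined accuracy target is met. candidates maps name -> (cost, gain)."""
    chosen, total_gain = [], 0.0
    for name, (cost, gain) in sorted(candidates.items(),
                                     key=lambda kv: kv[1][0]):
        if total_gain >= accuracy_target:
            break
        chosen.append(name)
        total_gain += gain
    return chosen

def realtime_select(variances):
    """Real-time selection: at run time, pick the sensor whose current
    measurement variance is lowest."""
    return min(variances, key=variances.get)

# Offline: fix the configuration before deployment.
offline = preselect({"ultrasonic": (1, 0.5), "camera": (2, 0.5),
                     "lidar": (5, 0.5)}, accuracy_target=1.0)

# Online: at this instant the radar happens to be the most reliable.
online = realtime_select({"lidar": 0.20, "radar": 0.05, "camera": 0.40})
```

The pre-selection result never changes at run time, whereas `realtime_select` would return a different sensor the moment the variance estimates change, matching the static-versus-dynamic distinction above.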
Given the highly scalable nature of the system, the many other services that may be added to the platform in the future, and the potential for further intelligent innovation, the extensibility of the system must be planned for in advance when choosing a web framework. Since the services were not all implemented initially, the focus was on the iwogh-web module. All requests are still routed through the ZUUL module of Spring Cloud, and Eureka is in charge of the service registry. MySQL is used as the database: because the data volume is small at the beginning, a relational MySQL database satisfies the demand, and big-data solutions need not be considered at the start.
To control all sensors from one GUI, the project uses Python's tkinter to develop the interface and manages the executables of the individual sensors in a multi-process manner. Each sensor's data is named after a timestamp and stored as a file. But since these timestamps come from the Ubuntu host rather than the sensors themselves, they can introduce errors, and the timestamps must be aligned. For example, knowing the absolute time of the mechanical lidar requires a GPS module. Once a large amount of data has been collected from each sensor, the data-processing algorithms can play their role. Unlike a traditional analytical database, Druid provides real-time stream analysis; its LSM-tree structure gives it high real-time write performance while enabling sub-second visualization of real-time data.
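The timestamp-named per-sensor files can be sketched as below. The project runs one process per sensor executable; for a self-contained example, threads stand in for processes, and the file-naming convention is an assumption:

```python
import os
import tempfile
import threading
import time

def collect(sensor_name, out_dir, frames):
    """Collector for one sensor: each frame is written to a file named by the
    host clock's timestamp. As noted above, this stamp comes from the host,
    not the sensor, so cross-sensor alignment is still needed afterwards."""
    for _ in range(frames):
        ts = f"{time.time():.6f}"
        path = os.path.join(out_dir, f"{sensor_name}_{ts}.dat")
        with open(path, "wb") as f:
            f.write(b"frame")  # placeholder payload
        time.sleep(0.002)      # ensure distinct timestamps per sensor

out_dir = tempfile.mkdtemp()
workers = [threading.Thread(target=collect, args=(name, out_dir, 3))
           for name in ("lidar", "imu", "camera")]
for w in workers:
    w.start()
for w in workers:
    w.join()
files = sorted(os.listdir(out_dir))
```

Sorting the filenames orders each sensor's frames by host time, which is exactly the ordering a later alignment step (e.g. against a GPS clock) would correct.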

Conclusion
Because of the complexity of the objects and processes involved, there is no single systematic method that solves all problems in multi-sensor fusion. Although many fusion methods exist, each has its own scope of application, and it is clearly impossible to solve every problem in a single way. The ideal solution is to integrate multiple fusion methods. Therefore, how to choose specific fusion methods for specific problems, and how to organize those methods, are questions worth studying.