Article

Automatic Air-to-Ground Recognition of Outdoor Injured Human Targets Based on UAV Bimodal Information: The Explore Study

1 Department of Military Biomedical Engineering, Air Force Military Medical University, Xi’an 710032, China
2 Drug and Instrument Supervisory & Test Station of Xining Joint Service Support Center, PLA, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(7), 3457; https://doi.org/10.3390/app12073457
Submission received: 4 March 2022 / Revised: 24 March 2022 / Accepted: 26 March 2022 / Published: 29 March 2022

Abstract

The rapid air-to-ground search for injured people in outdoor environments is a pressing challenge for public safety and emergency rescue medicine. Its crucial difficulties lie in the fact that small-scale human targets have low target-background contrast against the complex outdoor environment and that the human attribute of a target is hard to verify. Therefore, an automatic recognition method based on UAV bimodal information is proposed in this paper. First, suspected targets are detected and separated from the background based on multispectral feature information alone. Then, the bio-radar module is released to detect their physiological information for accurate re-identification of the human target property. Both the suspected human target detection experiments and the human target property re-identification experiments show that the proposed method can effectively realize accurate identification of ground injured people in outdoor environments, which is meaningful for research on the rapid search and rescue of injured people in the outdoor environment.

1. Introduction

The search for injured people in outdoor environments has always been a hot topic in the fields of public safety and emergency rescue medicine, and it mainly includes two types [1]. The first search scenario concerns trapped survivors under ruins in an abnormal post-disaster environment, such as natural or sudden disasters (earthquakes, building collapses, landslides, etc.). The detection challenge in this scenario is how to penetrate the ruins to reliably detect the weak physiological movements of surviving human beings in a post-disaster site of relatively limited area. To address this problem, a new bio-radar detection technology, which combines biomedical engineering and radar technology, was first proposed by our group. Bio-radar emits electromagnetic waves to detect survivors’ physiological activities (breathing and heartbeat) through the ruins, and the corresponding vital signs are acquired by demodulating the radar echo. Leveraging bio-radar and various signal processing technologies [2,3,4], a series of bio-radar equipment was developed, and the corresponding functions were gradually enriched so that we can acquire not only vital signs [5,6,7,8] but also location information. Specifically, with our proposed equipment and algorithms, we can even detect multiple (max. 3) survivors simultaneously [9,10]. Currently, our latest technologies aim to distinguish between humans and animals (non-human targets) under ruins [11,12] and even to recognize human activities [13,14]. In practical applications, typical equipment we developed (such as the “SJ-3000” and “SJ-6000” search-and-rescue UWB bio-radars used in the 2008 Wenchuan earthquake, the 2010 Yushu earthquake, and the 2014 Ludian earthquake in China) was also successfully employed in many search-and-rescue operations and made a great contribution.
Another widespread and frequently occurring search scenario concerns injured people in a normal outdoor environment, namely a natural environment, such as lost hikers, crashed parachute jumpers, and wilderness travelers almost submerged in the vast and diverse natural environment. For example, in the 2021 4th Yellow River Shilin Mountain Marathon 100 km cross-country race in China, a large number of athletes suffered a safety accident due to a sudden change of weather [15]. Many athletes experienced severe hypothermia and were trapped in the mountains, making it urgent but difficult to locate and detect them quickly. However, due to the lack of air-to-ground rapid search and location technology, information about the trapped people was not sent to the rear security center in time, resulting in a number of deaths that could not be treated in time.
In general, human target search technology in outdoor natural environments can be divided into constrained and unconstrained modes. Constrained methods require humans to pre-wear auxiliary positioning devices, such as portable radio stations, wearable GPS personal terminals, or wireless search devices with vital sign monitoring functions. Nevertheless, these methods suffer from some inevitable deficiencies, including increased body load, inconvenient operation, and high cost [16,17,18]. Moreover, in the above scenario, the search targets are often located in extreme mountain forest environments, or the carry-on devices might have been seriously damaged, making it hard or even impossible to receive their information. Instead of relying on worn auxiliary devices, unconstrained search technology based on unmanned detection could effectively avoid the above problems [19].
Currently, there are indeed some unconstrained unmanned aerial vehicle (UAV)-based air-to-ground detection technologies for human target search in ideal background environments. Different payloads imply different detection principles and capabilities; current technologies mainly mount RGB high-definition cameras [20,21] or thermal imaging cameras [22] on UAVs for low-altitude search. However, the RGB camera suffers from insufficient resolution and low SNR when the detection distance is long or the object is similar in color to the environment, and it may be underexposed or overexposed when the ambient light changes [23]. Similarly, the thermal signature of the human body is masked by halos when the ambient temperature exceeds 30 °C.
As an optimized form of hyperspectral technology, multispectral imaging can streamline the data volume and realize real-time imaging processing by rationally selecting 4~10 characteristic spectral bands for data processing [24], while still ensuring a sufficient volume of information. By analyzing the differences in spectral characteristic curves between the target and its surroundings, specific features in different bands can be exploited to identify the target. At present, UAV-based multispectral detection technology is widely used in agricultural, forestry, and environmental monitoring under low-altitude cruise conditions. Based on this technology, researchers have achieved notable detection results, such as damage assessment of rapeseed crops after winter [25], decision support system design for variable rate irrigation [26], fast detection of Xylella symptoms in olive trees [27], and inference of the spatial distribution of chlorophyll concentration and turbidity in surface waters for nearshore–offshore water quality monitoring [25].
However, the rapid detection and identification of injured human subjects in an outdoor environment differs remarkably from the above scenarios, presenting more severe detection difficulties and challenges, and there is still a lack of corresponding effective technology. Fundamentally, the key difficulties lie in the following: (1) an injured human subject in an outdoor environment is usually still, without remarkable motion (e.g., lying on the ground), making it difficult to detect; (2) the injured human target is a small target with a much smaller area than the surrounding environment under an airborne high-altitude view; (3) the difference between the clothing of injured people and the surrounding environment is weak, showing low target-background contrast; and (4) none of these methods can acquire unique physiological characteristics to verify the human attribute of a suspected target.
In this paper, we propose an automatic recognition method for injured human targets in an outdoor natural environment based on UAV bimodal information. It can not only efficiently detect the suspected human subjects based on multispectral features under low target-background contrast but also acquire the vital signals for re-identification through autonomously scanning large areas using a quad-rotor UAV equipped with a multispectral imaging sensor and bio-radar.
This paper is organized as follows: Section 2 introduces the bimodal information collection system; Section 3 presents the recognition method, including multispectral image preprocessing, feature extraction, and target identification; Section 4 presents the experiments and results; Section 5 and Section 6 give the discussion and conclusions.

2. Bimodal Information Collection System

In order to obtain bimodal information of ground targets, this study builds a bimodal information collection system based on a UAV carrying dual sensors (shown in Figure 1), whose main function is to obtain spectral feature information and vital sign information. It is mainly composed of three components, whose relationship and operating principle are illustrated in Figure 2 and Figure 3, respectively: (1) the optimized multispectral sensing module acquires spectral information for the preliminary detection of suspected human targets; (2) the miniature bio-radar module acquires vital sign information for human subject reconfirmation; (3) the UAV carrying system and ground workstation are mainly responsible for flight control and information transmission. In the following, we give a detailed description of the three main components of the system and the coordination between them.

2.1. The Optimized Multispectral Sensing Module

The multispectral sensing (MSS) module on a UAV cruising at high altitude is used to acquire specific spectral features of the target and background for suspected human target detection, which corresponds to stage 1 in Figure 3 during the entire search process. In our study, a few multispectral bands are specifically selected by observing and analyzing the sensitivity of different spectral bands to the target and background. Consequently, the optimal bands are picked out from numerous hyperspectral bands, namely those with the greatest ability to distinguish suspected human targets from natural environments with complex backgrounds.
The relative reflectivity is defined as follows:

$$R_{relative} = \frac{L_{subject}}{L_{board}} \cdot R_{board}$$

where $R_{relative}$ and $R_{board}$ are the reflectivity of the subject and the reference board, and $L_{subject}$ and $L_{board}$ are the spectral radiance of the subject and the reference board, respectively. According to the spectral curves, we can pick out the characteristic points that are promising for distinguishing green clothes from grassland.
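As a minimal numerical sketch of this formula (not the authors' code), the per-band conversion from radiance to relative reflectivity is a one-line NumPy expression; the array names and the example board reflectivity of 0.5 are hypothetical placeholders.

```python
import numpy as np

def relative_reflectivity(L_subject, L_board, R_board=0.5):
    """Convert spectral radiance to relative reflectivity for one band.

    L_subject: ndarray of subject radiance values
    L_board:   mean radiance of the grey reference board (scalar)
    R_board:   known reflectivity of the reference board
               (0.5 is a hypothetical placeholder, not the paper's value)
    """
    return np.asarray(L_subject) / L_board * R_board
```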
In our study, a large number of preliminary measurement experiments were carried out with a spectrometer to obtain wavelength-relative reflectivity curves for green vegetation (simulating the background) and green camouflage (simulating a suspected injured human target outdoors). The ATP9100 portable object spectrometer was used to measure the spectral parameters of ground objects (wavelength range of 300~1100 nm, 2048-pixel CCD detector, spectral resolution of 1.4 nm, wavelength accuracy of 0.5 nm, and signal-to-noise ratio > 800) [28]. The instrument parameters were set as follows: automatic integration time and multiple detections, followed by dark-current correction. Specifically, we measured each ground object three times per validation experiment; each measurement lasted 1–10 s, and the measurements were integrated optimally. The average spectral parameters of the ground objects were then acquired by averaging. Finally, the spectral reflectance curves of green clothes and green vegetation were acquired, as shown in Figure 4.
Target recognition based on single-band features is vulnerable and unstable. Therefore, by constructing sensitive spectral indexes through inter-band operations, background-target differences can be enhanced, which facilitates target extraction. Observing the wavelength-relative reflectivity curves in Figure 4, six bands show significant differences between the background and the target and are thus promising for target identification: (1) in the 400~500 nm band, the spectral curves of camouflage clothing and green vegetation follow the same trend, and the reflectance is low and difficult to distinguish, so the 450 nm band can be used as the reference point for inter-band calculations; (2) there is a reflection peak caused by vegetation chlorophyll in the spectral curve of green vegetation in the 550~560 nm band, which is much higher than that of green camouflage, so the 555 nm band is selected; (3) in the 660~690 nm band, there is an absorption peak where the reflectance of green vegetation is much lower than that of camouflage clothing, so the 660 nm band is selected; (4) near 710 nm, the reflectance of green vegetation rises sharply and rapidly reaches its peak, so the 710 nm band is adopted as the first high-reflectivity band; (5) the high-reflectivity plateau continues until 1000 nm, where camouflage clothing has high contrast with green vegetation and the characteristics are obvious, so the 840 nm and 940 nm bands are reasonably selected for feature recognition. Within a certain range of band numbers, more spectra contain richer information. By extracting features from these selected bands in the blue, green, red, red edge, and near-infrared regions, the ability to specifically characterize the target is strengthened, which greatly facilitates target identification.
Based on the analysis above and market research, a multispectral sensor that meets these requirements, the MS600 camera, was adopted in our study. It is composed of six single-band cameras, each with its own optical path and photosensor recording the data of the corresponding imaging band; the six spectral bands are shown in Table 1. Band selection should meet the requirements of a large amount of information and small correlation between bands. The camera has a 1/3-inch sensor, 1280 × 960 pixels, a global shutter, a 1.5 s maximum capture rate, 7.5 cm ground pixel resolution at 120 m altitude, a size of 77 × 72 × 47 mm, a weight of 170 g, and a DLS (downwelling light sensor) and GPS module, so each image can record GPS information in real time. The image data collected by the MS600 multispectral sensor are sets of DN values (remote sensing image pixel brightness values) of the six bands, which record the gray values of ground objects.

2.2. Micro-Bio-Radar Module

After the suspected targets are detected by multispectral technology, the bio-radar module plays another critical role in re-identifying the human nature of these suspected targets, which corresponds to stage 2 in Figure 3 during the entire search process. As illustrated in Figure 3, after a suspected human target is detected, the bio-radar module is triggered and accurately released near the target. Based on the radar Doppler principle [4], the physiological activity information of the suspected target can be obtained. Our group has previously realized the detection of human respiration and body movement in field environments [29]. Therefore, taking physiological motion features (respiration rate or heartbeat rate) as a reference, accurate re-identification of surviving human targets becomes possible.
The bio-radar thrown out to remotely sense respiration in our study is the JC122-3.3UA6 module, a 24 GHz continuous wave (CW) micro-radar with an effective detection range of 10 m. The hardware of this module mainly includes a radar sensor for human chest breathing detection [4], an STM32 single-chip microcomputer for initial A/D conversion, and a LoRa module for data transmission.

2.3. UAV Carrying System and Ground Workstation

The overall architecture of the bimodal information collection system is shown in Figure 1. The M100 quad-rotor UAV system [30], which can take off and land vertically with a payload of 1245 g, is adopted here to carry the multispectral camera and the bio-radar module. The ground workstation is responsible for information transmission, target identification, and system control. The coordination within the system mainly includes four parts: (1) the MSS module on the cruising UAV first detects the suspected targets; (2) the ground workstation sends commands to the UAV, which automatically triggers an airborne dropper to drop the bio-radar to a location near the target; (3) the bio-radar sensor transmits an asymmetric wide-beam signal and obtains the microwave echo reflected by the moving human chest; after being filtered and amplified, the respiratory signal is converted into digital form by the STM32 A/D converter and sent to the ground workstation receiver by the LoRa module; (4) through further analysis and judgment based on embedded algorithms, both respiration rate detection and human target property re-identification can be realized.

3. Bimodal-Information-Based Human Targets Recognition Method

Based on the ground target information collected by the above system, a bimodal-information-based human target recognition method is proposed for accurate injured human subject identification in an outdoor environment. The flowchart of this method is shown in Figure 5, and it consists of two main parts: (1) suspected target detection based on multispectral feature information, including multispectral image preprocessing, spectral feature extraction, and decision tree (DT) construction; (2) human subject reconfirmation based on respiration information detected by bio-radar.

3.1. Suspected Target Detection Based on Multispectral Feature Information

(1)
Multispectral image preprocessing
Before further analyzing and processing the multispectral images, it is necessary to preprocess them to remove multifactor interference effects. The preprocessing process is shown in Figure 6.
1st step: Radiometric calibration. Radiometric calibration consists of radiometric correction and reflectance calculation. Radiometric correction mainly converts the DN value into a spectral radiance value and corrects radiation distortion caused by lens transmission and sensor response [31]. The calculated reflectivity reflects the real spectral characteristics of ground objects. We identify and calculate the average radiance of the gray board pixels from gray board images, which were obtained by the MS600 before the flight. First, identify the gray board area in the image by selecting the largest nonzero connected region after binarization and image closing. Then, locate the center of the gray board using the average values of the row and column coordinates, and make a square mask centered on this point. The DN value of the gray board area can be extracted by dot multiplication of the mask and the gray board image. Finally, the radiance of each pixel can be calculated. With the above steps, the DN values of the multispectral images are converted into reflectivity values, and the vignetting effect is corrected, as sketched below.
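The gray-board localization described above can be sketched in a few lines of OpenCV/NumPy. This is an illustrative reimplementation under stated assumptions, not the authors' code: it assumes an 8-bit single-band image, and the morphological kernel size and mask half-width are placeholder choices.

```python
import cv2
import numpy as np

def grey_board_dn(band_img, half_width=20):
    """Mean DN of the grey reference board in one 8-bit band image.

    Mirrors the steps in the text: binarize, close, keep the largest
    nonzero connected region, locate its centre, and average the DN
    values inside a square mask (dot multiplication of mask and image).
    """
    # Binarize (Otsu) and close small holes in the candidate board region.
    _, binary = cv2.threshold(band_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))

    # Largest connected component (label 0 is the background) = board area.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    board = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    cx, cy = centroids[board]

    # Square mask centred on the board centre.
    mask = np.zeros(band_img.shape, dtype=bool)
    r0, c0 = max(int(cy) - half_width, 0), max(int(cx) - half_width, 0)
    mask[r0:int(cy) + half_width, c0:int(cx) + half_width] = True
    return float(band_img[mask].mean())
```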
2nd step: Image registration. Each exposure of the MS600 camera generates six single-band images. Although this kind of sensor has the advantages of simple design and low cost, its structural limitations lead to a dislocation between imaging bands, so image registration must be done before further processing. The main task in this stage is to synthesize the six single-band images into one data file in TIFF format. First, the SURF (Speeded Up Robust Features) algorithm, a robust local feature point detection and description algorithm [32], is used to detect the feature points in every image: the Hessian matrix is constructed to generate interest points, build the scale space, locate feature points, and generate feature point descriptors. Then, after matching the feature points between every pair of images, the RANSAC (random sample consensus) algorithm is used to eliminate mismatched points [33]. Finally, the affine transformation matrix between each image pair is computed, abnormal matching points are eliminated, and registration is completed through the affine transform.
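A hedged sketch of this SURF-plus-RANSAC registration step using OpenCV follows. Note that SURF is a patented, non-free algorithm that lives in the opencv-contrib build (cv2.xfeatures2d); if it is unavailable, cv2.ORB_create is a drop-in substitute. The Hessian threshold and reprojection tolerance are illustrative values, not the paper's settings.

```python
import cv2
import numpy as np

def register_band(ref_img, moving_img):
    """Align one band image to a reference band via SURF + RANSAC."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(ref_img, None)
    kp_mov, des_mov = surf.detectAndCompute(moving_img, None)

    # Brute-force matching with cross-checking for reliable pairs.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_mov, des_ref)

    src = np.float32([kp_mov[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # RANSAC rejects mismatched pairs while fitting the affine model.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)

    h, w = ref_img.shape[:2]
    return cv2.warpAffine(moving_img, A, (w, h))
```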
(2)
Multispectral feature extraction
The regions of interest (ROI) were extracted manually according to low-altitude high-definition images, which show the distribution of the experimental subjects (three typical object types were selected: human subjects, vegetation, and soil). The reflectivity feature is the top preferred feature. Here, a statistical experiment on the reflectivity feature distribution of the different targets was carried out based on numerous collected data. The statistical result for the reflectivity of the three ground object types, analyzed with ENVI software, is shown in Figure 7a. It is obvious that the reflectivity features of the six bands can distinguish between target and background to some extent but cannot solve the problem entirely.
The spectral index is another kind of feature that can enhance and quantify target differences through band math. According to the spectral curves captured from the green camouflage, vegetation, and bare soil, the following eight spectral indexes were taken into consideration; their formulas are given in Table 2, where NDVI stands for Normalized Difference Vegetation Index, NDGI for Normalized Difference Green Index, NGBDI for Normalized Green-Blue Difference Index, PSRI for Plant Senescence Reflectance Index, SIPI for Structure Insensitive Pigment Index, mNDVI for modified red edge Normalized Difference Vegetation Index, MSR for Modified Simple Ratio Index, and EVI for Enhanced Vegetation Index. R denotes reflectivity: Rblue (band 1), Rgreen (band 2), Rred (band 3), R710 (band 4), and Rnir = R840 (band 5).
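Given per-band reflectivity arrays, the indexes in Table 2 reduce to element-wise NumPy expressions. The sketch below covers the five indexes used later in the decision tree; the variable names are ours, and a small epsilon (our addition) guards against division by zero.

```python
import numpy as np

def spectral_indices(blue, green, red, red_edge, nir, eps=1e-9):
    """Element-wise spectral indexes from Table 2 on reflectivity arrays."""
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "PSRI":  (red - green) / (nir + eps),
        "NGBDI": (green - blue) / (green + blue + eps),
        "mNDVI": (nir - red_edge) / (nir + red_edge - 2 * blue + eps),
        "EVI":   2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1),
    }
```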
The statistical results of the eight spectral indices over the three ground object types, based on numerous data, are shown in Figure 7b. It can be noticed that NDVI, mNDVI, MSR, PSRI, and SIPI can separate vegetation from the other ground subjects effectively. We chose NDVI to set the upper bound and PSRI to set the lower bound, which achieved the best vegetation separation. Then, to separate bare soil from the remaining area, EVI and NGBDI were chosen with reasonable thresholds to exclude the bare soil area. Finally, band 6 was selected to filter the human subjects according to the band reflectivity figure, and, combined with the size of the target area, noise spots could be excluded.
(3)
Decision tree construction for suspected target detection
The DT is exploited as the target classification and identification model for automatic recognition and positioning in this study. As a method of extracting class information, DT classification is convenient and efficient and is widely used for classification problems with a small number of classes [34]. Vegetation indexes and band reflectivities were selected as the spectral variables to classify by the threshold method. Based on the spectral indexes, band reflectance, and target area, we constructed a DT and set reasonable thresholds to complete target identification, as shown in Figure 8. Specifically, the NDVI threshold of 0.75 comes from maximization of the interclass variance (Otsu) and statistical experience based on a large number of measured experiments.
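The decision logic of Figure 8 amounts to a cascade of threshold tests followed by an area filter. The sketch below mirrors that structure; only the NDVI threshold of 0.75 is reported in the paper, so every other threshold, and the direction of each comparison, is a hypothetical placeholder.

```python
import numpy as np
from scipy import ndimage

# Only NDVI_T = 0.75 is from the paper; the rest are placeholders.
NDVI_T, PSRI_T, EVI_T, NGBDI_T, BAND6_T, MIN_AREA = 0.75, 0.1, 0.2, 0.05, 0.3, 8

def classify(idx, band6):
    """Label pixels as vegetation / soil / suspected target (cf. Figure 8)."""
    veg = (idx["NDVI"] > NDVI_T) | (idx["PSRI"] < PSRI_T)   # assumed directions
    soil = ~veg & (idx["EVI"] < EVI_T) & (idx["NGBDI"] < NGBDI_T)
    suspect = ~veg & ~soil & (band6 > BAND6_T)

    # Area filter: drop connected components too small to be a person.
    labels, n = ndimage.label(suspect)
    sizes = ndimage.sum(suspect, labels, range(1, n + 1))
    keep = 1 + np.flatnonzero(sizes >= MIN_AREA)
    return veg, soil, np.isin(labels, keep)
```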

3.2. Human Subject Reconfirmation Based on Respiration Information Detected by Bio-Radar

After detecting a suspected human target based on the spectral features above, the bio-radar module is triggered and released near the ground target. It aims to detect the physiological features of the suspected target for the reconfirmation of its human attributes. The CW bio-radar transmits electromagnetic waves to illuminate the nearby human target, and the echo is modulated by micromotions of the body surface, such as breathing. By demodulating the echo and obtaining the phase information, we can obtain respiration-related features. However, the outdoor environment is full of noise and clutter, which seriously interferes with the radar echo and makes it difficult to obtain a stable and effective breathing signal. Therefore, finding a way to filter the clutter and noise interference out of the radar echo is highly important.
Based on an analysis of the characteristics of the noise and breathing signals, the adaptive line enhancer (ALE) is adopted for noise cancellation. The ALE is a variant of the adaptive noise canceller, and various adaptive filtering algorithms can be used to process the signal once the main and reference inputs are determined. Here, the Normalized Least Mean Squares (NLMS) algorithm is adopted for reasons of robustness, timeliness, and convergence speed. The structure of the algorithm is shown in Figure 9.
$s(k)$ is the useful respiration signal and $n_0(k)$ is the noise, i.e., a wideband signal. $s_1(k)$ is a delayed copy of $s(k)$: the narrowband respiration component remains strongly correlated across the delay, whereas the wideband noise decorrelates. According to the principle of the adaptive canceller, minimizing $e(k)$ drives $y(k)$ and $z(k)$ close together, which can only be achieved when the FIR filter with adjustable coefficients brings its output into closest agreement with the respiration component. Consequently, a respiration signal with high SNR can be acquired from $z(k)$.
After noise and clutter cancellation, the respiration rate can be acquired by frequency domain analysis of the preprocessed signal, which is the most direct and specific feature for human attribute reconfirmation.
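A compact NLMS-based ALE consistent with Figure 9 can be sketched as follows, assuming the digitized radar output is available as a 1-D array sampled at fs Hz. The decorrelation delay, filter length, step size, and breathing-band limits are illustrative choices, not the paper's settings.

```python
import numpy as np

def nlms_ale(x, delay=25, taps=32, mu=0.5, eps=1e-6):
    """Adaptive line enhancer with NLMS coefficient updates.

    The reference input is a delayed copy of x: the narrowband
    respiration component stays correlated across the delay while
    wideband noise decorrelates, so the FIR output converges to an
    enhanced respiration signal.
    """
    w = np.zeros(taps)
    y = np.zeros_like(x, dtype=float)
    for k in range(delay + taps, len(x)):
        u = x[k - delay - taps:k - delay][::-1]  # delayed tap vector
        y[k] = w @ u                              # filter output
        e = x[k] - y[k]                           # prediction error
        w += mu * e * u / (u @ u + eps)           # normalized LMS update
    return y

def respiration_rate(y, fs, band=(0.1, 0.8)):
    """Dominant frequency (Hz) of the enhanced signal in the breathing band."""
    spec = np.abs(np.fft.rfft(y - y.mean()))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[sel][np.argmax(spec[sel])]
```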

4. Experiment and Results

4.1. Experimental Setup

The setup of the bimodal information collection system is shown in Figure 1. The MS600 multispectral module containing the six specific bands and the bio-radar module are carried by the M100 quad-rotor UAV system for target detection and recognition. The flight mission was carried out in a wheat field in Hu County, Xi’an City, Shaanxi Province on 24 March 2021. The weather was sunny and cloudless, with wind force below grade 3, suitable for UAV flight. The UAV flew at an altitude of 120 m and a speed of 5.3 m/s along 5 routes, with an 80% forward overlap rate and a 75% side overlap rate.
Two kinds of typical experiments were carried out in this study: the suspected target detection experiment based on multispectral feature information only, and the human subject reconfirmation experiment with auxiliary human vital sign information. The first kind of experiment was conducted in the two typical scenarios illustrated in Figure 10. Scenario 1 has a relatively homogeneous environmental background with few components (green grassland and a small patch of bare soil); twelve sets of green camouflage clothes were laid on the field randomly to disguise as human subjects. Scenario 2 was purposely selected as an outdoor bush environment in autumn, which is more complex and closer to practical applications, with shrubs, green lawns, trees, dark-yellow vegetation, and other plants randomly scattered. Here, three colors of camouflage (11 pieces of camouflage and one real person in desert camouflage) were placed in vegetation areas of similar colors. Furthermore, in the complex Scenario 2, the subsequent human subject reconfirmation experiment aimed to distinguish true human targets from fake targets among multiple suspected targets using the additional respiration information detected by bio-radar.

4.2. Preliminary Detection of Suspected Human Targets

(1)
Suspected human target detection in Scenario 1
Based on the multispectral sensor module, each exposure acquires six bands of remote sensing images (of the area under the field of view for that exposure), as shown in Figure 11. The DN values of the band images differ because different subjects have different spectral reflectances in different bands. In addition, we can clearly observe the vignetting effect in the band 6 image, which must be corrected to achieve accurate results.
To facilitate data analysis, it is necessary to preprocess the original multispectral images. After preprocessing multiple sets of images from multiple exposures, including image registration and radiometric calibration, a complete reflectance panorama is formed, as shown in Figure 12. Although some suspected target points can be found visually, they need to be identified automatically and reliably: when the search range is large, it is impossible to find and locate targets quickly and accurately by relying on the human eye, and human subjectivity and visual fatigue have a great impact on the results.
Next, the multispectral indexes and features selected in this study were fed to the decision mechanism and applied to preliminarily identify and extract suspected human targets automatically by setting reasonable thresholds. In the suspected target recognition result shown in Figure 13, red marks the suspected targets. Thirteen suspected targets were detected (corresponding to the 13 red regions), comprising the 12 disguised non-human targets (camouflage clothes) and one false alarm. The results show that this detection method based on multispectral feature information can effectively detect suspected targets in an outdoor environment with a relatively simple background.
Moreover, we also compared the classification performance of the DT with state-of-the-art machine learning algorithms, including the Back Propagation neural network (BP), Random Forest (RF), and Support Vector Machine (SVM) with a radial basis kernel function. All four classifiers were trained on the same training dataset, and the preprocessed image of the exposure marked by the red dashed box in Figure 12 was taken as the test data. The classification results are shown in Table 3. Based on the same training and test data, the recognition performance of the different classifiers does not differ greatly, which suggests that the spectral features themselves are the main factor affecting the recognition results.
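A scikit-learn sketch of this comparison is given below. The per-pixel feature matrix (band reflectivities plus spectral indexes) and labels are stood in by random data here, and all hyperparameters are defaults or guesses rather than the paper's settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

# Stand-in data: 10 per-pixel features, 4 classes (replace with real samples).
rng = np.random.default_rng(0)
X, y = rng.random((600, 10)), rng.integers(0, 4, 600)
X_train, X_test, y_train, y_test = X[:480], X[480:], y[:480], y[480:]

models = {
    "DT":  DecisionTreeClassifier(),
    "BP":  MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "RF":  RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="rbf"),  # radial basis kernel, as in the paper
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: acc={accuracy_score(y_test, pred):.4f} "
          f"f1={f1_score(y_test, pred, average='macro'):.4f}")
```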
(2)
Suspected human target detection in Scenario 2
Scenario 2 is more comprehensive and representative of practical applications. The background consists of many components with different colors (shrubs, green lawns, trees, dark-yellow vegetation, and other plants). The detection targets (eleven pieces of camouflage and one real person in desert camouflage disguised as human subjects) have three types of color attributes and were placed in vegetation areas of similar colors. Since the above analysis demonstrated that the choice of classifier has no significant impact, we chose the DT as the representative classification algorithm in the subsequent study.
After the same preprocessing of multiple sets of images from multiple exposures, a complete reflectance panorama of Scenario 2 is formed, as shown in Figure 14. It is visually difficult to observe the suspected ground targets in this complex scenario; for targets T9–T12 in particular, it is almost impossible to find them because their color characteristics are similar to the background. The proposed method based on multispectral information only was then applied to preliminarily detect suspected human targets automatically, as shown in Figure 15. All 12 suspected human targets were still successfully detected in such a complex environment.
However, note that only suspected target T12 is a real human target. In other words, based on the multispectral information alone, we cannot distinguish clothes from true human targets. Therefore, this complex Scenario 2 is taken as a typical example to test the effectiveness of the human subject property re-identification method based on vital sign information proposed in this paper.

4.3. Accurate Re-Identification of Surviving Human Targets Using Bio-Radar

Based on the multispectral technology above, we can only detect the suspected human targets in Scenario 2 but cannot accurately identify surviving human targets. Therefore, when suspected human subjects are found by aerial multispectral imaging, the bio-radar is released near the suspected target (within 8 m) to detect the respiratory signal in a non-contact manner (the detection direction of the radar can be adjusted by the included control components).
As shown in Figure 16a, since the human subjects are located in an outdoor environment, the radar echo contains a great deal of interference and noise, which severely drowns out the breathing signal. Consequently, although the respiratory frequency of the subject can be detected as 0.24 Hz in the normalized frequency spectrum, the spectrum still contains a large amount of noise and interference components; when the environmental interference is more intense, respiration detection becomes even more difficult. The noise cancellation result for this signal using our proposed NLMS-ALE method is shown in Figure 16b. Noise and interference are effectively removed from the radar echo, and a significant respiration frequency can be acquired in the normalized frequency spectrum. This means that even when outdoor environmental interference is strong, we are still able to detect the breathing characteristics of the target using bio-radar. This respiration information is then transmitted wirelessly. Consequently, we can accurately identify the surviving human target (T12), as shown in Figure 16c, and eliminate the fake targets.

5. Discussion

Based on the suspected target detection experiments under the two typical scenarios, the experimental results above clearly show that all suspected targets could be accurately detected and separated from the background based on multispectral feature information only (with just one additional false alarm), even in a complex outdoor bush environment. However, the study also amply demonstrates that the human attributes of a suspected target cannot be confirmed and verified based on this unimodal information alone. Combined with the auxiliary physiological information from the bio-radar, only the true human target in Scenario 2 was accurately identified. Therefore, the proposed bimodal-information-based human target recognition method is initially proven to be effective for the search of ground injured human subjects in outdoor environments. For this challenging identification task, the recognition performance is satisfactory to some extent. However, the method also has a few limitations for future practical application.
With respect to the initial suspected target detection, the recognition algorithm adopted in this experiment is the decision tree, a supervised learning method. Objects are classified by thresholds on different properties, which is conducive to real-time processing. However, this also weakens the robustness of the classification method, because we cannot catalogue the spectra of all clothing types and find appropriate thresholds for each. Therefore, our future research will focus on superior target features with higher stability and on more intelligent recognition algorithms, such as deep learning. In addition, reducing the time complexity of the model and achieving more robust target recognition based on multimodal optoelectronic information fusion are also future research directions.
On the other hand, as attempted in this paper, the bio-radar is applied to acquire vital signals, which help us judge whether a subject is human and infer its state of life. However, the detection environment and conditions are totally different from, and even more challenging than, those of our previous indoor vital sign detection in the medical field [35] and survivor detection under ruins in post-disaster search and rescue [7]. (1) First, the effective detection range of the micro-bio-radar is only 10 m, so finding a way to drop the bio-radar from the UAV platform to a proper location near the suspected target remains challenging. (2) Second, the problems of differentiating between human and non-human (animal) live targets and of respiration rates varying with physiological condition are inevitable in outdoor detection. Fortunately, our past studies have explored very similar experiments [12,36,37]. These studies confirmed that some differences can be observed between human respiration and that of other animals (cats, dogs, pigs, etc.) in the time domain, although they are not significant on their own; nevertheless, human and non-human (animal) live targets can be accurately differentiated based on multi-domain features (time domain and time-frequency domain) combined with machine learning methods. Therefore, we will try to embed these algorithms into the UAV bimodal system and improve it through test experiments in future work. The problem of the same subject's respiration varying under different conditions can be addressed with similar ideas.

6. Conclusions

In response to the challenging task of rapidly searching for ground injured human targets in outdoor environments, a UAV-based detection and recognition technology combining bimodal sensors was developed. Considering that the key difficulties of the identification task arise from the characteristics of the target and the detection environment, such as the small scale of the target relative to the ground background, low target-background contrast, and weak physiological signs, an automatic recognition method based on bimodal information was proposed. First, suspected targets are accurately detected and separated from the background based on multispectral feature information only. Then, a bio-radar module is released to detect physiological information for accurate re-identification of the human target property. Both the suspected human target detection experiments and the human target property re-identification experiments show that the proposed method can effectively realize accurate identification of ground injured people in outdoor environments, which is important for research on the rapid search and rescue of injured people. Meanwhile, the robustness and convenience of this technology in practical applications need further improvement.
Our future work will focus on extracting superior target features with higher stability and on a more intelligent integration of feature extraction and recognition, aiming to improve recognition accuracy and application robustness. In this experiment, the number of ground object types was small and the experimental environment was not sufficiently complex; in follow-up research, the complexity of the field conditions will be gradually increased to further improve the reliability of the identification model. Moreover, if the physical state of the target detected by bio-radar can be quickly assessed from its vital signs and transmitted to the rear command in time, this will be of great significance for the search and rescue of injured people in the outdoor environment.

Author Contributions

Conceptualization, G.L. and J.W.; methodology, M.Z.; software, T.L.; formal analysis, J.X.; investigation, Y.Y. and L.Z.; data curation, Z.L.; writing—original draft preparation, M.Z.; writing—review and editing, F.Q.; visualization, M.Z.; supervision, G.L.; funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Research and Development Program of Shaanxi (2021ZDLGY09-07 and 2022SF-482). The APC was funded by 2021ZDLGY09-07.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brüggemann, B.; Wildermuth, D.; Schneider, F.E. Search and Retrieval of Human Casualties in Outdoor Environments with Unmanned Ground Systems—System Overview and Lessons Learned from ELROB 2014. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 533–546. [Google Scholar] [CrossRef]
  2. Jianqi, W.; Chongxun, Z.; Guohua, L.; Xijing, J. A New Method for Identifying the Life Parameters via Radar. EURASIP J. Adv. Signal Process. 2007, 2007, 031415. [Google Scholar] [CrossRef] [Green Version]
  3. Guohua, L.; Jianqi, W.; Yu, Y.; Xijing, J. Study of the Ballistocardiogram signal in life detection system based on radar. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 2191–2194. [Google Scholar]
  4. Zhang, Y.; Qi, F.; Lv, H.; Liang, F.; Wang, J. Bioradar Technology: Recent Research and Advancements. IEEE Microw. Mag. 2019, 20, 58–73. [Google Scholar] [CrossRef]
  5. Li, Z.; Li, W.; Lv, H.; Zhang, Y.; Jing, X.; Wang, J. A Novel Method for Respiration-Like Clutter Cancellation in Life Detection by Dual-Frequency IR-UWB Radar. IEEE Trans. Microw. Theory Tech. 2013, 61, 2086–2092. [Google Scholar] [CrossRef]
  6. Lv, H.; Jiao, T.; Zhang, Y.; An, Q.; Liu, M.; Fulai, L.; Jing, X.; Wang, J. An Adaptive-MSSA-Based Algorithm for Detection of Trapped Victims Using UWB Radar. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1808–1812. [Google Scholar] [CrossRef]
  7. Lv, H.; Li, W.; Li, Z.; Zhang, Y.; Jiao, T.; Xue, H.; Liu, M.; Jing, X.; Wang, J. Characterization and Identification of IR-UWB Respiratory-Motion Response of Trapped Victims. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7195–7204. [Google Scholar] [CrossRef]
  8. Ren, W.; Qi, F.; Foroughian, F.; Kvelashvili, T.; Liu, Q.; Kilic, O.; Long, T.; Fathy, A.E. Vital Sign Detection in Any Orientation Using a Distributed Radar Network via Modified Independent Component Analysis. IEEE Trans. Microw. Theory Tech. 2021, 69, 4774–4790. [Google Scholar] [CrossRef]
  9. Lv, H.; Liu, M.; Jiao, T.; Zhang, Y.; Yu, X.; Li, S.; Jing, X.; Wang, J. Multi-target human sensing via UWB bio-radar based on multiple antennas. In Proceedings of the 2013 IEEE International Conference of IEEE Region 10 (TENCON 2013), Xi’an, China, 22–25 October 2013; pp. 1–4. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Ma, Y.; Yu, X.; Wang, P.; Lv, H.; Liang, F.; Li, Z.; Wang, J. A coarse-to-fine detection and localization method for multiple human subjects under through-wall condition using a new telescopic SIMO UWB radar. Sens. Actuators A Phys. 2021, 332, 113064. [Google Scholar] [CrossRef]
  11. Zhao, L.; Yangyang, M.; Yang, Z.; Fulai, L.; Xiao, Y.; Fugui, Q.; Hao, L.; Guohua, L.; Jianqi, W. UWB Radar Features for Distinguishing Humans From Animals in an Actual Post-Disaster Trapped Scenario. IEEE Access 2021, 9, 154347–154354. [Google Scholar] [CrossRef]
  12. Ma, Y.; Liang, F.; Wang, P.; Lv, H.; Yu, X.; Zhang, Y. An Accurate Method to Distinguish Between Stationary Human and Dog targets Under Through-Wall Condition Using UWB Radar. Remote Sens. 2019, 11, 2571. [Google Scholar] [CrossRef] [Green Version]
  13. Qi, F.; Li, Z.; Ma, Y.; Liang, F.; Lv, H.; Wang, J.; Fathy, A.E. Generalization of Channel Micro-Doppler Capacity Evaluation for Improved Finer-Grained Human Activity Classification Using MIMO UWB Radar. IEEE Trans. Microw. Theory Tech. 2021, 69, 4748–4761. [Google Scholar] [CrossRef]
  14. Qi, F.; Lv, H.; Wang, J.; Fathy, A.E. Quantitative Evaluation of Channel Micro-Doppler Capacity for MIMO UWB Radar Human Activity Signals Based on Time–Frequency Signatures. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6138–6151. [Google Scholar] [CrossRef]
  15. Xia, H. 16 Dead During Mountain Marathon in China’s Gansu. Available online: http://www.xinhuanet.com/english/2021-05/23/c_139963997_2.htm (accessed on 23 May 2021).
  16. Takagi, Y.; Yamada, K.; Goto, A.; Yamada, M.; Naka, T.; Miyazaki, S. Life Search—A Smartphone Application for Disaster Education and Rescue. In Proceedings of the 2017 Nicograph International (NicoInt), Kyoto, Japan, 2–3 June 2017; p. 94. [Google Scholar] [CrossRef]
  17. Maciel-Pearson, B.G.; Akcay, S.; Atapour-Abarghouei, A.; Holder, C.; Breckon, T.P. Multi-Task Regression-Based Learning for Autonomous Unmanned Aerial Vehicle Flight Control Within Unstructured Outdoor Environments. IEEE Robot. Autom. Lett. 2019, 4, 4116–4123. [Google Scholar] [CrossRef] [Green Version]
  18. Hou, X.; Bergmann, J. Pedestrian Dead Reckoning With Wearable Sensors: A Systematic Review. IEEE Sens. J. 2020, 21, 143–152. [Google Scholar] [CrossRef]
  19. Rosser, J.C.; Vignesh, V.; Terwilliger, B.A.; Parker, B.C. Surgical and Medical Applications of Drones: A Comprehensive Review. J. Soc. Laparoendosc. Surg. 2018, 22, e2018.00018. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Sun, J.; Song, J.; Chen, H.; Huang, X.; Liu, Y. Autonomous State Estimation and Mapping in Unknown Environments with Onboard Stereo Camera for MAVs. IEEE Trans. Ind. Inform. 2019, 16, 5746–5756. [Google Scholar] [CrossRef]
  21. Bency, A.J.; Karthikeyan, S.; De Leo, C.; Sunderrajan, S.; Manjunath, B.S. Search Tracker: Human-Derived Object Tracking in the Wild Through Large-Scale Search and Retrieval. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 1803–1814. [Google Scholar] [CrossRef] [Green Version]
  22. Chen, Y.; Song, B.; Du, X.; Guizani, M. Infrared Small Target Detection Through Multiple Feature Analysis Based on Visual Saliency. IEEE Access 2019, 7, 38996–39004. [Google Scholar] [CrossRef]
  23. Lygouras, E.; Santavas, N.; Taitzoglou, A.; Tarchanidis, K.; Mitropoulos, A.; Gasteratos, A. Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UAV for Search and Rescue Operations. Sensors 2019, 19, 3542. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Jiang, J.; Zheng, H.; Ji, X.; Cheng, T.; Tian, Y.; Zhu, Y.; Cao, W.; Ehsani, R.; Yao, X. Analysis and Evaluation of the Image Preprocessing Process of a Six-Band Multispectral Camera Mounted on an Unmanned Aerial Vehicle for Winter Wheat Monitoring. Sensors 2019, 19, 747. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Jełowicki, Ł.; Sosnowicz, K.; Ostrowski, W.; Osińska-Skotak, K.; Bakuła, K. Evaluation of Rapeseed Winter Crop Damage Using UAV-Based Multispectral Imagery. Remote Sens. 2020, 12, 2618. [Google Scholar] [CrossRef]
  26. Shi, X.; Han, W.; Zhao, T.; Tang, J. Decision Support System for Variable Rate Irrigation Based on UAV Multispectral Remote Sensing. Sensors 2019, 19, 2880. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Nisio, A.D.; Adamo, F.; Acciani, G.; Attivissimo, F. Fast Detection of Olive Trees Affected by Xylella Fastidiosa from UAVs Using Multispectral Imaging. Sensors 2020, 20, 4915. [Google Scholar] [CrossRef] [PubMed]
  28. OPTOSKY. ATP9100 Portable Object Spectrometer. Available online: https://www.optosky.com/ (accessed on 11 February 2022).
  29. Li, C.; Chen, F.; Qi, F.; Liu, M.; Li, Z.; Liang, F.; Jing, X.; Lü, G.; Wang, J. Searching for Survivors through Random Human-Body Movement Outdoors by Continuous-Wave Radar Array. PLoS ONE 2016, 11, e0152201. [Google Scholar] [CrossRef] [PubMed]
  30. DJI. M100 Quad-Rotor UAV System. Available online: https://www.dji.com/cn/matrice100 (accessed on 12 February 2022).
  31. Imai, M.; Kurihara, J.; Kouyama, T.; Kuwahara, T.; Fujita, S.; Sakamoto, Y.; Sato, Y.; Saitoh, S.-I.; Hirata, T.; Yamamoto, H.; et al. Radiometric Calibration for a Multispectral Sensor Onboard RISESAT Microsatellite Based on Lunar Observations. Sensors 2021, 21, 2429. [Google Scholar] [CrossRef]
  32. Bansal, M.; Kumar, M.; Kumar, M. 2D object recognition: A comparative analysis of SIFT, SURF and ORB feature descriptors. Multimed. Tools Appl. 2021, 80, 18839–18857. [Google Scholar] [CrossRef]
  33. Wang, G.; Sun, X.; Shang, Y.; Wang, Z.; Shi, Z.; Yu, Q. Two-View Geometry Estimation Using RANSAC With Locality Preserving Constraint. IEEE Access 2020, 8, 7267–7279. [Google Scholar] [CrossRef]
  34. Berhane, T.M.; Lane, C.R.; Wu, Q.; Autrey, B.C.; Anenkhonov, O.A.; Chepinoga, V.V.; Liu, H. Decision-Tree, Rule-Based, and Random Forest Classification of High-Resolution Multispectral Imagery for Wetland Mapping and Inventory. Remote Sens. 2018, 10, 580. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Qi, F.; Li, C.; Wang, S.; Zhang, H.; Wang, J.; Lu, G. Contact-Free Detection of Obstructive Sleep Apnea Based on Wavelet Information Entropy Spectrum Using Bio-Radar. Entropy 2016, 18, 306. [Google Scholar] [CrossRef] [Green Version]
  36. Ma, Y.; Wang, P.; Huang, W.; Qi, F.; Liang, F.; Lv, H.; Yu, X.; Wang, J.; Zhang, Y. A robust multi-feature based method for distinguishing between humans and pets to ensure signal source in vital signs monitoring using UWB radar. EURASIP J. Adv. Signal Process. 2021, 2021, 27. [Google Scholar] [CrossRef]
  37. Wang, P.; Zhang, Y.; Ma, Y.; Liang, F.; An, Q.; Xue, H.; Yu, X.; Lv, H.; Wang, J. Method for distinguishing humans and animals in vital signs monitoring using IR-UWB radar. Int. J. Environ. Res. Public Health 2019, 16, 4462. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. The overall architecture of the bimodal information collection system.
Figure 2. The diagram of the bimodal information collection system.
Figure 3. Suspected target detection and human nature re-identification based on UAV bimodal information from multispectral camera and bio-radar.
Figure 4. The spectral relative reflectivity of green vegetation and green camouflage.
Figure 5. The flowchart of the bimodal-information-based human target recognition method.
Figure 6. Flow chart of multispectral imagery preprocessing.
Figure 7. Two kinds of features for 3 typical subjects. (a) Reflectivity of six bands, (b) Spectral indexes.
Figure 8. The DT for identification of suspected targets.
Figure 9. Block diagram of ALE.
Figure 10. Illustration of experimental settings for the two scenarios. (a) Scenario 1: 12 sets of green camouflage clothes in a homogeneous grassland background; (b) Scenario 2: 11 sets of green camouflage clothes and one real person in green camouflage clothing in a complex background.
Figure 11. Local multispectral remote sensing images in Scenario 1 acquired by MS600 (one exposure): (a) 450 nm; (b) 555 nm; (c) 660 nm; (d) 710 nm; (e) 840 nm; (f) 940 nm.
Figure 12. The complete reflectance panorama in Scenario 1.
Figure 13. Suspected target recognition result in Scenario 1 (green for vegetation, yellow for soil, red for suspected target, blue for noise spot).
Figure 14. The complete reflectance panorama in Scenario 2.
Figure 15. Suspected target recognition result in Scenario 2 (green for vegetation, yellow for soil, red for suspected target, blue for noise spot).
Figure 16. Radar echo of human target T12 from 8 m and corresponding frequency analysis: (a) original signal and normalized frequency spectrum; (b) noise-cancelled signal and normalized frequency spectrum; (c) human target reconfirmation result.
Table 1. The selected spectral bands.

| Band Number | Band Name | Centre Wavelength (nm) | Bandwidth (nm) |
|---|---|---|---|
| 1 | blue | 450 ± 3 | 22 ± 5 |
| 2 | green | 555 ± 3 | 22 ± 5 |
| 3 | red | 660 ± 3 | 22 ± 5 |
| 4 | red edge | 710 ± 3 | 32 ± 5 |
| 5 | near-infrared | 840 ± 3 | 32 ± 5 |
| 6 | near-infrared | 940 ± 3 | 32 ± 5 |
Table 2. Calculation formulas of the 8 spectral indexes.

| Number | Calculation Formula | Number | Calculation Formula |
|---|---|---|---|
| 1 | $NDVI = \dfrac{R_{nir} - R_{red}}{R_{nir} + R_{red}}$ | 5 | $SIPI = \dfrac{R_{nir} - R_{blue}}{R_{nir} + R_{red}}$ |
| 2 | $NDGI = \dfrac{R_{red} - R_{green}}{R_{red} + R_{green}}$ | 6 | $mNDVI = \dfrac{R_{nir} - R_{rededge}}{R_{nir} + R_{rededge} - 2R_{blue}}$ |
| 3 | $NGBDI = \dfrac{R_{green} - R_{blue}}{R_{green} + R_{blue}}$ | 7 | $MSR = \dfrac{R_{nir}/R_{red} - 1}{\sqrt{R_{nir}/R_{red}} + 1}$ |
| 4 | $PSRI = \dfrac{R_{red} - R_{green}}{R_{nir}}$ | 8 | $EVI = \dfrac{2.5(R_{nir} - R_{red})}{R_{nir} + 6R_{red} - 7.5R_{blue} + 1}$ |
Table 3. Recognition performance comparison with state-of-the-art machine learning algorithms.

| ML Algorithm | Accuracy | F1 Score |
|---|---|---|
| DT | 99.76% | 98.08% |
| BP | 99.58% | 99.62% |
| RF | 99.37% | 99.41% |
| SVM | 99.65% | 99.67% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

