Automatic Detection of Aerial Vehicle in Cloudy Environment by Using Wavelet Enhancement Technique

Automatic target detection for surface-to-air surveillance and tracking systems in video applications is an important issue, because it is the first step for track initiation and continuity. Target detection is readily achieved under clear sky conditions, but it can be a complicated problem under cloudy skies. Fulfilling automatic target detection with conventional image processing techniques alone may be hard in cloudy sky conditions and under improper lighting. The difficulty stems from the clutter produced by clouds: the target may get lost in the clutter that occupies the whole frame area. In order to increase the detection probability, background clutter should be eliminated frame by frame using image processing techniques. In this work a novel approach is proposed to detect air vehicles under all kinds of cloudy sky conditions. For this purpose a wavelet-based image enhancement algorithm is applied to the video frames, and then conventional techniques are used: reciprocal pixel intensity measurement, the Sobel operator, and thresholding for edge detection. The proposed algorithm gives outstanding results for flying object detection in different sky conditions.


Introduction
Moving object detection is an important issue in computer vision, and it plays an essential role in various applications such as regional security, object tracking, behavior recognition, and situational awareness. Video-based surface-to-air (SA) surveillance and guidance systems present a remarkable application area for moving object detection. SA surveillance and guidance systems require automatic target detection and tracking abilities. Target detection is an important issue that is used in the initiation and continuation phases of tracking systems and algorithms. Moving object detection in complex environments has been studied considerably [1][2][3], and it remains an important open problem. Three common methods for detecting moving objects are background subtraction, optical flow, and temporal differencing. The most popular one is background subtraction, which identifies moving objects as the portion of a video frame that differs significantly from a background model. There are various background subtraction algorithms for detecting moving objects [4][5][6], ranging from simple techniques such as frame differencing and adaptive median filtering to more sophisticated probabilistic modeling techniques. The background subtraction method is sensitive to lighting changes. Another approach to moving object detection is optical flow, which is defined as a velocity field in the image that transforms one image into the next image in a sequence [7], [8]. The optical flow method is computationally complex and not suitable for real-time processing. The frame difference method is simple and easy to implement, but its results are not as accurate as those of the other methods, because changes in background brightness cause misjudgment [9][10][11].
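The frame differencing idea above can be sketched in a few lines. The paper's experiments use MATLAB; the following is an illustrative Python/NumPy version, with an arbitrary threshold of 25 gray levels:

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Flag the pixels whose intensity change between two
    consecutive frames exceeds a fixed threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Static background with a small bright object entering the scene.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 3:5] = 200            # 2x2 "target" in the current frame
mask = frame_difference(prev, curr)
```

Note that a global brightness change between the two frames would trip the same test at every pixel, which is exactly the sensitivity to background brightness noted above.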
This work focuses on a novel approach for flying object detection in cloudy sky. For this purpose, an accurate and real-time method for moving object detection is aimed at, and the algorithm should be affected as little as possible by background brightness variation. To achieve these requirements, a wavelet-based background suppression method combined with an edge detection technique is proposed. Automatic target discrimination can be readily achieved for cloudless or almost cloudless sky conditions, but can be a complicated problem for denser cloud conditions. A question then arises about the classification of sky conditions. In this study sky conditions are classified as in [12], and four sky conditions are taken into account, ranging from cloudless or almost cloudless (Mode 1), through randomly scattered clouds (Modes 2 and 3), to overcast (Mode 4). First of all, sequential frames are taken with predetermined sample periods. Then a wavelet enhancement technique is used to increase the object detection probability.
Wavelets have been used for video resolution enhancement in numerous studies [13][14][15]. In this study the wavelet-based image enhancement algorithm given in [13] is applied to the video frames. This method is called the wavelet based image enhancement (WBIE) technique. In this technique, the discrete wavelet transform (DWT) is used for dimension reduction and for obtaining wavelet coefficients. Then dynamic range compression (DRC) and contrast enhancement algorithms are applied to the approximation coefficients. After this point, the inverse DWT (IDWT) is applied, and the enhanced image is obtained by linear color restoration, which tunes the intensity of each pixel based on its surrounding pixels. Through this process the bright regions are compressed up to a level and the dark regions are enhanced; thus, the output of the WBIE process is a transformed frame with a lower dynamic range. After applying WBIE to the frames, they are converted to intensity images and the reciprocal pixel intensity measurement (RPIM) technique [16] is applied to this data. At the end of the RPIM process, a pixel intensity coded frame is obtained. This coded frame assigns high intensity values to target pixels, and relatively smaller intensity values to clutter pixels and background noise pixels. Then, edge detection is applied as the continuation of the automatic target detection process. For edge detection the Sobel operator [17], [18] is used. In addition, a threshold value is used to exclude the clutter pixels and background noise pixels from the process. This method is abbreviated as RST (RPIM-Sobel-Threshold). Thus, the target-originated pixels take higher values than the background clutter related pixels, and automatic target detection is achieved. The resulting algorithm is called WBIE-RST. To determine the detection performance of the WBIE-RST algorithm a comparison study is conducted: the results of the WBIE-RST algorithm are compared with the results of the RST process, the frame difference method, and the optical flow method.

WBIE Technique
In this section the application of WBIE to SA video frames is described. The WBIE application is the first step of the SA video target discrimination process. However, in cloudless or almost cloudless sky conditions an additional wavelet enhancement method may not be required, and the RST operation alone will be sufficient for automatic target discrimination. In sky conditions Mode 2-4, the WBIE method may increase the automatic discrimination accuracy at different levels. In the fourth sky condition, which is called overcast, the background of the frame is almost constant and mostly creates low clutter. On the other hand, in some cases of Mode 4, the collocation of bright and dark clouds can cause excessive clutter. In the second and third sky conditions the clouds are generally scattered randomly and have different irradiance levels. The random scattered distribution of the clouds results in a non-uniform clutter distribution. The aim of this work is to reduce the amount of background clutter in all sky conditions and to increase the probability of discrimination. For this purpose the effects of randomly scattered clouds in video frames are reduced by using WBIE. The flow of the WBIE process is composed of four steps:
• Step 1: DWT and normalization.
• Step 2: Dynamic range compression (DRC).
• Step 3: Contrast enhancement.
• Step 4: IDWT and color restoration.
The DRC and contrast enhancement steps are conducted in the DWT domain. The steps are described separately in the following subsections.

DWT and Normalization
In two dimensions a scaled basis function Φ_{j0,k,l}(m, n) and horizontal, vertical, and diagonal translated basis functions Ψ^i_{j,k,l}(m, n) are used for the DWT. These wavelet functions measure intensity or gray level variations of the frames along the horizontal (i = h), vertical (i = v) and diagonal (i = d) directions [17]. In the first phase of the DWT step, RGB color frames are converted to intensity (gray-scale) images using the National Television System Committee (NTSC) standard, as defined in [17]:

I(m, n) = 0.299 R(m, n) + 0.587 G(m, n) + 0.114 B(m, n) (1)

where R(m, n), G(m, n) and B(m, n) are the values of the red, green and blue color bands of a pixel. The enhancement algorithm is applied on this intensity image. The Haar wavelet is used for the DWT step. The DWT decomposes the input into four lower resolution coefficient sets — the approximation, horizontal, vertical and diagonal detail coefficients — as in (2):

I(m, n) = Σ_{k,l} A_{j0,k,l} Φ_{j0,k,l}(m, n) + Σ_{i=h,v,d} Σ_{j≥j0} Σ_{k,l} D^i_{j,k,l} Ψ^i_{j,k,l}(m, n) (2)

In the above equation, A_{j0,k,l} are the approximation coefficients at the starting scale j0 with the corresponding scaling functions Φ_{j0,k,l}(m, n), and D^i_{j,k,l} are the vertical, horizontal and diagonal detail coefficients at scale j ≥ j0 with the corresponding wavelet functions Ψ^i_{j,k,l}(m, n). Here, k and l are integers representing the spatial shift for pixel-by-pixel scanning of the entire image. The normalized approximation coefficients are obtained by using (3).
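As a sketch of this step (not the authors' MATLAB code), the NTSC conversion and a single-level 2-D Haar DWT can be written directly in NumPy. Normalizing the approximation band to [0, 1] by its maximum is an assumption, since the paper's Eq. (3) is not reproduced here:

```python
import numpy as np

def ntsc_gray(rgb):
    """RGB -> intensity using the NTSC luminance weights of Eq. (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def haar_dwt2(img):
    """One level of the 2-D Haar DWT on an even-sized image:
    returns approximation (A) plus horizontal, vertical and
    diagonal detail coefficients."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    A  = (a + b + c + d) / 2.0          # approximation
    Dh = (a + b - c - d) / 2.0          # horizontal detail
    Dv = (a - b + c - d) / 2.0          # vertical detail
    Dd = (a - b - c + d) / 2.0          # diagonal detail
    return A, Dh, Dv, Dd

img = np.full((4, 4), 100.0)            # flat gray test frame
A, Dh, Dv, Dd = haar_dwt2(img)
A_norm = A / A.max()                    # assumed normalization to [0, 1]
```

On a flat frame all detail bands vanish and only the approximation band carries energy, which is why the enhancement that follows operates mainly on A.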

DRC
Using linear input-output intensity relationships typically does not produce a good visual representation compared with direct viewing of the scene. Therefore, a nonlinear transformation is used for DRC, based on information extracted from the image histogram. To do this, the histogram of the intensity image is subdivided into four ranges: r_1 = 0-63, r_2 = 64-127, r_3 = 128-191, and r_4 = 192-255. The normalized approximation coefficients (A_norm) are mapped to DRC coefficients (A^DRC_norm) [19] by using the mapping equation (4), where x is the mapping exponent. For 0 < x < 1 the details in the dark regions are pulled out, while x ≥ 1 suppresses the bright overshoots [19]. The aim of this work is to enhance the details in the dark regions, but not to suppress the bright overshoots more than necessary; the suppression of bright areas is limited to reducing intensities at the passing edges of the clouds. For this reason only 0 < x < 1 is used for all ranges. This makes the process succeed, because the intensity reciprocal of each peak cell is taken in the RPIM step, and the unwanted effects of bright clouds are eliminated. Additionally, in (4) α is an offset parameter that helps to adjust the brightness of the image; α is assumed to be 0 and omitted in this study. The x values are obtained empirically as shown in (5), where f(r_i) refers to the number of pixels in the range r_i, and ∧ is the logical AND operator. By applying this transformation the dynamic range of the frame is reduced. After the DRC coefficients A^DRC_norm are obtained, a de-normalization operation takes place as shown in (6), yielding A^DRC_dn.
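The power-law form below is one plausible reading of the mapping in (4) (with α = 0); the exact equation is in [19], so treat this as an illustrative sketch rather than the paper's exact formula:

```python
import numpy as np

def drc_map(a_norm, x):
    """Map normalized approximation coefficients through a power
    law: exponents 0 < x < 1 pull detail out of dark regions,
    x >= 1 would suppress bright overshoots instead."""
    return np.power(a_norm, x)

a = np.array([0.04, 0.25, 0.81])   # dark, mid, bright (normalized)
compressed = drc_map(a, 0.5)       # 0 < x < 1, as used in the paper
```

With x = 0.5 every value is lifted toward 1, and the lift is largest for the darkest inputs, which is the intended effect described above.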

Contrast Enhancement
In this study the center/surround ratio proposed by Hurlbert [20] is used. Hurlbert showed that the Gaussian is the optimal surround for center-surround natural vision operations. The surround for the approximation coefficients is obtained by a 2-dimensional discrete convolution with a Gaussian kernel, or Gaussian surround. The Gaussian surround function is

G(m, n) = K exp(−(m² + n²)/σ_s²) (7)

where σ_s is the standard deviation of the Gaussian surround.
The magnitude of σ_s controls the extent of the surround: smaller values of σ_s result in narrower surrounds, higher values in wider surrounds. A combination of three scales representing narrow, medium, and wide surrounds is sufficient to provide both dynamic range compression and tonal rendition [21]. K is determined under the constraint that Σ_{m,n} G(m, n) = 1. At this point of the process the de-normalized approximation coefficients A^DRC_dn are replaced with the enhanced coefficients A_enh as in (8) and (9), where * in (9) is the convolution operator. S is the adaptive contrast enhancement parameter, and it is related to the standard deviation σ of the input intensity image (10). The values for S are used as in [19]: if σ < 7, the image has poor contrast and the contrast of the image is increased; if σ ≥ 20, the image has sufficient contrast and the contrast is not changed.
At this point of the process the approximation coefficients A_{j0,k,l}, obtained at the beginning of the DWT step, are replaced with the enhanced coefficients A_enh(m, n) before reconstruction. Before applying the IDWT the detail coefficients are modified in a similar way: they are scaled by the ratio between the enhanced and original approximation coefficients, as given in (11). After obtaining the enhanced approximation coefficients A_enh(m, n) and the modified detail coefficients D^i_new(m, n), the enhanced intensity image I_enh(m, n) is derived from the inverse wavelet transform of these coefficients as in (12).
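A sketch of the Gaussian surround of (7) and its use to form the surround image; the kernel size of 5 and the zero-padded "same" convolution are implementation assumptions, not specified by the paper:

```python
import numpy as np

def gaussian_surround(size, sigma_s):
    """Normalized Gaussian surround G(m, n) of Eq. (7), with the
    constant K chosen so that sum(G) = 1."""
    half = size // 2
    m, n = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(m**2 + n**2) / sigma_s**2)
    return g / g.sum()

def surround(img, kernel):
    """2-D 'same' convolution with zero padding: the surround
    image that the center/surround ratio is computed against."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    kr = kernel[::-1, ::-1]              # flip for true convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kr)
    return out

G = gaussian_surround(5, sigma_s=2.0)
img = np.full((8, 8), 50.0)
surround_img = surround(img, G)          # flat interior stays at 50
```

Because the kernel sums to 1, a flat region is left unchanged away from the borders; only regions where the neighborhood differs from the center produce a center/surround ratio different from 1.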

Color Restoration
In the last step of the WBIE process the enhanced color image is obtained, through a linear color restoration process based on the chromatic information contained in the input image [22]. Mathematically, the color restoration process for images in RGB color space can be expressed as in (13), where j = r, g, b represents the R, G, B spectral bands respectively, and S_r, S_g, S_b are the RGB values of the enhanced color image. The λ_j's are color tone adjusting parameters; each is a constant very close to 1 and takes different values in different spectral bands. In this study they are selected as λ_r = 0.9, λ_g = 0.9, and λ_b = 0.9. When all λ_j's are equal to 1, (13) preserves the chromatic information of the input color image with minimal color shifts. The resulting intensity image for the restored color is obtained as in (14).
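The linear restoration of (13) can be sketched as below; the exact form (each band scaled by λ_j times the ratio of enhanced to original intensity) is an assumption based on the retinex-style restoration the paper cites [22]:

```python
import numpy as np

def restore_color(rgb, intensity, intensity_enh, lam=(0.9, 0.9, 0.9)):
    """Linear color restoration: each spectral band j is scaled by
    lambda_j times the ratio of the enhanced intensity to the
    original intensity (assumed form of Eq. (13))."""
    eps = 1e-6                       # guard against division by zero
    ratio = intensity_enh / (intensity + eps)
    out = np.empty_like(rgb, dtype=float)
    for j in range(3):               # j = r, g, b
        out[..., j] = lam[j] * rgb[..., j] * ratio
    return out

rgb = np.full((2, 2, 3), 10.0)
inten = np.full((2, 2), 10.0)
restored = restore_color(rgb, inten, 2.0 * inten)  # enhancement doubled I
```

Because every band is multiplied by the same ratio, the R:G:B proportions of each pixel are preserved up to the λ_j factors, which is the "minimal color shift" property noted above.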

RPIM Transformation and Edge Detection
In this section the reciprocal pixel intensity measurement (RPIM) transformation of the WBIE data and the edge detection of the resulting RPIM data are described. The RPIM data of the WBIE image are obtained using equation (15), as given in [16]. In the RPIM data, objects stemming from flying targets appear with high intensity, while background clutters appear with low intensity or are eliminated. In this study the input of the RPIM process is the intensity image produced by the WBIE process. In the DRC operation of the WBIE process the mapping exponents x are selected smaller than 1, so that the details in the dark regions are enhanced and the bright overshoots are suppressed within low limits. This results in WBIE data with highly detailed dark regions and a reasonably reduced dynamic range. At this point of the process the classic Sobel edge detection operator given in [18] and [23] is applied in order to reduce clutter effects and increase the probability of target discrimination. The Sobel operator is a discrete differentiation operator that computes an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator takes the partial derivatives of the two-dimensional function f(x, y) over a 3 × 3 neighborhood centered at (x, y) in the x and y directions. In order to suppress noise, a correspondingly increased weight is placed on the center point; the digital gradient approximation equations with respect to the x and y directions are given in (16) and (17), respectively. The Sobel operator magnitude combines the above gradients as in (18), and Sobel's convolution template operators are given in equation (19). In this study the Sobel operator is applied to detect the edges of the image I_R obtained at the output of the RPIM process, given in equation (15). In this case the horizontal template T_x and the vertical template T_y are convolved with the image, without taking into account the border conditions. Two gradient matrices M_1 and M_2, of the same size as the original image, are obtained, and the total gradient value g(x, y) is obtained by adding the two gradient matrices. By using the total gradient value, the Sobel operator returns the edges at higher intensity values than the noise floor. It can be concluded that there is an edge where the gradient of the image is maximum, i.e., where the intensity level changes. To prune the pixel intensities that most probably emerge from clutter rather than target edges, a threshold value is used [23], [24]. At this point a question arises about threshold selection: if the selected threshold is too low, both the target detection probability and the false edge generation increase; if the threshold is too high, many of the edges may not be detected, and thus both the target detection probability and the false edge generation decrease. In this study a constant threshold, whose optimal value is determined empirically, is selected. As a result, the applied process can be summarized as the RPIM, Sobel operator, and Thresholding (RST) operation. The algorithm obtained by combining this process with WBIE is called WBIE-RST.
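Putting the RST chain together as a sketch (not the authors' code): the reciprocal form 1/(I + 1) stands in for the unreproduced Eq. (15), and the summed-absolute-gradient Sobel and the fixed threshold follow the description above:

```python
import numpy as np

# Standard Sobel templates (Eq. (19)): T_x and its transpose T_y.
TX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
TY = TX.T

def conv2_same(img, k):
    """Zero-padded 'same' 2-D convolution with a 3x3 kernel."""
    p = k.shape[0] // 2
    pad = np.pad(img, p)
    kr = k[::-1, ::-1]                      # flip for convolution
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kr)
    return out

def rst(intensity, threshold):
    """RPIM -> Sobel -> Threshold. The reciprocal form used for
    RPIM here is an assumption based on the paper's description
    of Eq. (15)."""
    i_r = 1.0 / (intensity + 1.0)           # reciprocal pixel intensities
    g = np.abs(conv2_same(i_r, TX)) + np.abs(conv2_same(i_r, TY))
    return np.where(g > threshold, g, 0.0)  # prune clutter/noise pixels

intensity = np.full((7, 7), 200.0)
intensity[3, 3] = 10.0                      # dark target against bright sky
edges = rst(intensity, threshold=0.05)
```

The reciprocal boosts the dark target far above the bright background before the gradient is taken, so the surviving edge responses ring the target while the uniform background (and the weak border response from zero padding) falls below the threshold.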

Experimental Results
The proposed algorithm for target discrimination was tested with 40 different videos of flying objects under different sky conditions. The classification of the sky was made in accordance with the four modes given in Section 1. For each sky state, ten different experiments were performed, two of them multi-targeted. Each trial was conducted over 22 sequential frames, and the processed area was 250 × 250 pixels. MATLAB was used to apply the proposed algorithm to the frames of the different videos. Before giving the overall results, the outcomes of the algorithm are given for each sky condition. The RGB (unprocessed) images and the related WBIE applied images are given in Fig. 1-4. Also in the figures, the results of the RST application to the RGB images and the results of the WBIE-RST application to the WBIE images are given as 3-dimensional intensity mesh plots.
In this study, two classical algorithms (frame difference [9][10][11] and optical flow [7], [8]) are also implemented for comparison. In Tab. 1, the results of the WBIE-RST process are compared with the results of the RST process, the frame difference process, and the optical flow process. Comparing the WBIE-RST and RST-only algorithms clearly shows the effect of adding WBIE to the process. Frame difference is applied to two consecutive frames. Additionally, the Horn and Schunck optical flow method is applied to the video frames; in the optical flow algorithm the SNR-based parameter λ is selected as 49, the optimal value as in [3]. The surround space constant σ_s, given in (7), takes values from small to large in order to obtain multiple scale surrounds [21]. In this study the σ_s values for the narrow, medium, and wide surrounds are selected as 5, 100, and 220, respectively, and their combination is taken as the arithmetic mean. In the table the peak-peak Target to Clutter intensity Ratio (TCR) is defined as follows:

TCR_P-P = 10 × log10(I_T-Peak / I_C-Peak) (20)

where I_T-Peak is the peak target intensity and I_C-Peak is the peak clutter intensity. In this study the threshold is selected as 30, an empirically determined optimal value. In the trials the noise floor level was selected as −10 dB. The clutter intensity levels vary between 1.7 dB and 5.5 dB, while the target intensity levels vary between 4.4 dB and 14 dB. In practical applications a 1.25 dB peak-peak target to clutter intensity ratio is sufficient for target discrimination from clutter by using a threshold level.
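Eq. (20) is simple enough to check numerically; the example values below are illustrative, not taken from Tab. 1:

```python
import math

def tcr_pp(target_peak, clutter_peak):
    """Peak-peak target-to-clutter intensity ratio in dB, Eq. (20)."""
    return 10.0 * math.log10(target_peak / clutter_peak)

# A target peak 4x the clutter peak clears the practical
# 1.25 dB discrimination margin comfortably.
ratio_db = tcr_pp(4.0, 1.0)
```

Since 10·log10(4) ≈ 6.02 dB, a 4:1 peak ratio sits well above the 1.25 dB level quoted as sufficient for threshold-based discrimination.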
From the results it can be seen that the improved moving object detection algorithm, WBIE-RST, is more successful than the other algorithms in all sky conditions. In particular, the results of the WBIE-RST algorithm outperform those of the RST algorithm, which shows the effectiveness of adding WBIE to the algorithm. The WBIE-RST algorithm also gives better results than the optical flow algorithm in each sky condition. The results of frame difference are quite successful for each sky condition, but its low peak-peak TCR, especially in sky conditions Mode-2 and Mode-3, shows the superiority of the WBIE-RST algorithm. It has been seen that the target shapes obtained from the frame difference and optical flow algorithms do not match the original target shapes. On the other hand, the target shapes obtained from the WBIE-RST and RST algorithms are mostly compatible with the original.
In sky condition Mode-1, the number of background clutters due to clouds is very low; as a result, the performance of each algorithm is quite satisfactory. The situation totally changes for sky condition Mode-2, because the clouds produce background clutter and the cloud edges come up as high intensity clutters. Here the WBIE-RST algorithm outperforms the other algorithms: its peak-peak TCR values are higher than those of the other algorithms. Also, the elimination level of clutter-related pixels in the WBIE-RST, frame difference and optical flow algorithms is higher than that of the RST algorithm. Target detection failures take place in some trials of the RST, frame difference, and optical flow algorithms, whereas in this mode WBIE-RST achieved detection in all trials. In sky condition Mode-3 the results are similar to Mode-2, except that the frame difference algorithm increases its performance; the reason is that the amount of cloud edge in this mode is less than in Mode-2. In Mode-3, the WBIE-RST algorithm maintains its successful detection performance. Mode-4 is similar to Mode-1, except for varying light intensity levels. For homogeneous light intensities in the background, all the algorithms are successful; however, the WBIE-RST algorithm performs better than the other algorithms for varying light intensity levels in the background.

Conclusion
In this work, a wavelet-based technique is presented for automatically detecting flying objects in cloudy environments in video applications. For this purpose different sky conditions are taken into account. In order to evaluate the performance of the WBIE-RST technique, the results of the trials are compared with the results obtained using the RST, frame difference, and optical flow algorithms. At the end of the evaluation process, it has been observed that WBIE-RST performs better than the accepted algorithms in high clutter environments and gives similar results in low clutter environments. The experimental studies have shown that the target shapes obtained with RST and WBIE-RST are consistent with the true target shape; this consistency is due to the structure of the RST. On the other hand, the target shapes obtained by the frame difference and optical flow algorithms do not match the actual target shapes. By using the WBIE-RST technique an automatic video discrimination technique for flying objects is obtained for all kinds of sky conditions. For further study, this technique can be applied to different medical imaging systems, IR systems and sonar systems. It is also considered that the detections obtained with the WBIE-RST algorithm can be used for track initiation and continuation.

Fig. 1. RGB Image (upper-left) & Histogram of RST Applied RGB Image (lower-left) and WBIE Applied Image (upper-right) & Histogram of RST Applied WBIE Image for Mode-1 Sky Condition; Standard Deviation of the Intensity Image is 7.46.

Fig. 2. RGB Image (upper-left) & Histogram of RST Applied RGB Image (lower-left) and WBIE Applied Image (upper-right) & Histogram of RST Applied WBIE Image for Mode-2 Sky Condition; Standard Deviation of the Intensity Image is 30.25.

Fig. 3. RGB Image (upper-left) & Histogram of RST Applied RGB Image (lower-left) and WBIE Applied Image (upper-right) & Histogram of RST Applied WBIE Image for Mode-3 Sky Condition; Standard Deviation of the Intensity Image is 47.03.

Fig. 4. RGB Image (upper-left) & Histogram of RST Applied RGB Image (lower-left) and WBIE Applied Image (upper-right) & Histogram of RST Applied WBIE Image for Mode-4 Sky Condition; Standard Deviation of the Intensity Image is 17.21.