Article

SAR Image De-Noising Based on Shift Invariant K-SVD and Guided Filter

1 Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
2 Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China
3 Electronic Information Engineering College, Hebei University, Baoding 071002, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(12), 1311; https://doi.org/10.3390/rs9121311
Submission received: 7 November 2017 / Revised: 10 December 2017 / Accepted: 12 December 2017 / Published: 13 December 2017
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Finding a way to effectively suppress speckle in SAR images is of great significance. K-means singular value decomposition (K-SVD) has shown great potential in SAR image de-noising. However, the traditional K-SVD is sensitive to the position and phase of the characteristics in the image, and the image de-noised by K-SVD loses some detailed information of the original image. In this paper, we present a new SAR image de-noising method based on shift-invariant K-SVD and a guided filter. The method consists of two steps. In the first step, we process the noisy image with shift-invariant K-SVD to obtain an initial de-noised image. In the second step, we apply guided filtering to the initial de-noised image to recover the final de-noised image. Experimental results show that our method not only achieves better visual effects and objective evaluation scores, but also preserves more detailed information such as image edges and texture when de-noising SAR images. The presented shift-invariant K-SVD can be widely used in image processing tasks such as image fusion, edge detection, and super-resolution reconstruction.

Graphical Abstract

1. Introduction

Synthetic aperture radar (SAR) provides high-resolution imaging at any time of day and in any weather, and supports multi-polarization, variable viewing angles, and so on. SAR combines space technology, electronics, and information technology, and it has been widely used in the military field for strategic and tactical applications [1]. As shown in Figure 1, compared with a color optical image, a SAR image obtained by microwave imaging can reveal information hidden beneath covers such as surface vegetation. However, because of the unique imaging mechanism, SAR images always contain speckle noise [2]. This noise greatly increases the complexity of the image and has a negative influence on subsequent image processing. Therefore, finding a way to suppress or remove the speckle effectively is a hot topic for many scholars.
Traditional SAR image de-noising methods can be divided into two types [3]: methods based on the spatial domain and methods based on the transform domain. Spatial-domain methods process the noisy image directly with filters, obtaining the de-noised result by convolving the noisy image with a filter function; common examples include median filtering [4], mean filtering [5], and Wiener filtering [6]. Transform-domain methods exploit the fact that the noise is not sparse. First, the image is represented by a fixed orthogonal transform, yielding many frequency coefficients. Then, de-noising is performed by processing these coefficients, and the de-noised image is obtained by the inverse transform. Examples include SAR image de-noising based on non-local similar block matching in the NSST domain [7], SAR image de-noising based on the shearlet transform using a context-based model [8], and scattering-based SAR-Block Matching 3D (SAR-BM3D) [9]. Common transforms include the Fourier transform, the discrete cosine transform (DCT), wavelets, and multi-scale multi-directional transforms [10]. However, as SAR images contain rich feature information, an ideal de-noising result can hardly be achieved with a limited orthogonal transform, which cannot represent all the image features.
The recent development of sparse representation has led to its widespread use in image processing, such as super-resolution reconstruction [11], edge detection [12], and face recognition [13]. The image de-noising method based on sparse representation of dictionary learning is applied to suppress the speckle of SAR images in [14,15,16,17]. Compared with the image de-noising method based on transform domain, these methods adopt a redundant dictionary rather than a fixed orthogonal basis function to express the noisy image. Since the noise is not sparse, the sparse coefficients do not contain noise, and the restored image by a linear combination of redundant dictionary and sparse coefficients does not contain noise either. Finally, we can realize the result of SAR image de-noising. However, the traditional sparse representation methods (e.g., K-means singular value decomposition, K-SVD) are sensitive to the position and phase [18], which means that even the same image feature with different position or phase may lead to different atoms of the training dictionary. It is well known that the image is shift-invariant; when we use the traditional K-SVD to do the image de-noising, there will be some Gibbs effects and the training of shift atoms will be time-consuming [19]. As a result, in this paper we will incorporate shift invariance into sparse representation of the noisy SAR image to prevent Gibbs effects and improve the sparsity of the coefficient.
In contrast to ordinary optical images, SAR images contain a great deal of rich texture information and edge features. A single point in a SAR image may correspond to a real building, and misjudging it has an inestimable negative influence on SAR image applications. When de-noising a SAR image by sparse representation, we usually discard some smaller singular values when decomposing the residual image with SVD in the dictionary-updating step [20] in order to obtain the atoms. Therefore, the information of the original image cannot be completely represented by the obtained atoms. In other words, some detailed information is misjudged as noise and cannot be represented by dictionary atoms. This leads to edge blurring, poor spatial resolution, and even the loss of some important points and edges in the restored de-noised image, which has a very negative influence on subsequent image processing. As a fast, non-approximate linear-time method, the guided filter [21] serves as a good edge-preserving smoothing method. Therefore, in this paper we present a new SAR image de-noising method combining shift-invariant K-SVD and the guided filter. First, we de-noise the noisy SAR image with shift-invariant K-SVD and obtain an initial de-noised image. Then, we use the obtained image as both the input image and the guidance image, applying the guided filter to preserve more edge and detailed information of the original image. Finally, the de-noised image is obtained by sparse representation. Experimental results show that our method is an effective SAR image de-noising method, in terms of both its de-noising effect and its edge-preserving ability.
The rest of this paper is organized as follows: sparse representation is introduced briefly in Section 2.1, and the shift-invariant K-SVD is presented in detail in Section 2.2. Then, in Section 3, we provide some information about the guided filter. The new SAR image de-noising method is presented in Section 4, and some experiments are conducted in Section 5. Finally, the conclusion is made in Section 6.

2. Shift Invariant K-SVD

Here, we introduce some detail about the shift invariant K-SVD. In Section 2.1, we present the basic model of sparse representation. The main contribution of our method is presented in Section 2.2.

2.1. Sparse Representation

Suppose that the over-complete dictionary is $D \in \mathbb{R}^{M \times T}$, whose $T$ columns are the dictionary atoms, and the noisy image is $I \in \mathbb{R}^{M \times N}$, which can be represented as a linear combination of these atoms [19]. Because the dictionary is over-complete, the sparse coefficients of the linear representation have many solutions. Finding the sparsest solution, so that the de-noised image is as close to the original image as possible, is crucial: this step directly determines the result of the image de-noising. The more similar the de-noised image and the original image are, the more information of the original image is preserved in the de-noised image. The image de-noising model based on sparse representation can be written as follows [22]:
$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|I - D\alpha\|_2^2 \le \varepsilon$ (1)
where $\alpha$ denotes the sparse coefficients of the image, $D\alpha$ is the linear representation, and $\|\cdot\|_0$ is the $l_0$-norm. Under most conditions, $\|\alpha\|_0 \le L_{\max} \ll M$, where $L_{\max}$ denotes the maximum sparse number, and $\varepsilon$ denotes the limiting error.
Sparse representation can be divided into two steps: sparse coding and dictionary updating. In general, we take an over-complete DCT dictionary or the Gabor dictionary [23] as the initial dictionary, or obtain an over-complete adaptive dictionary from the noisy image itself. For sparse coding, orthogonal matching pursuit (OMP) and basis pursuit (BP) are adopted to obtain the optimal sparse coefficients [1]. For the convenience of calculation by methods such as OMP, the model in Equation (1) can be expressed as follows:
$\hat{\alpha} = \arg\min_{\alpha} \|I - D\alpha\|_2^2 + \mu \|\alpha\|_0$ (2)
where μ denotes the penalty factor.

2.2. Shift Invariant K-SVD

It is well known that images are shift invariant, so only when the over-complete dictionary used to represent the image is also shift invariant can we obtain the optimal sparse representation. Suppose the dictionary atoms in Section 2.1 are $a_t$ ($1 \le t \le T$). After a series of shifts of these atoms, we obtain the atom family $A_t = (S_\tau a_t)_\tau$, where $S_\tau$ denotes the shift operator and $\tau$ denotes the shift amount [19]. Therefore, the shift-invariant dictionary $D$ can be made up of these atom families, which can be formulated as follows:
$D = (A_t)_t = (S_\tau a_t)_{\tau, t}$ (3)
As a result, the acquisition of dictionary $D$ can be translated into the acquisition of the atom families $A_t$. The sparse model in Equation (2) can be expressed as follows:
$\hat{\alpha} = \arg\min_{\alpha} \Big\| I - \sum_t \sum_\tau \alpha_{t,\tau}\, S_\tau a_t \Big\|_2^2 + \mu \|\alpha\|_0$ (4)
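To make the atom-family construction in Equations (3) and (4) concrete, here is a minimal 1-D sketch (our own illustration, not the authors' code), assuming each short atom is zero-padded and shifted across the signal length:

```python
import numpy as np

def shift_atom(atom, tau, M):
    """Place a short atom at offset tau inside a length-M column (zero-padded)."""
    col = np.zeros(M)
    col[tau:tau + len(atom)] = atom
    return col

def atom_family(atom, M):
    """All admissible shifts S_tau a_t of one atom: the columns of the family A_t."""
    shifts = M - len(atom) + 1
    return np.stack([shift_atom(atom, tau, M) for tau in range(shifts)], axis=1)

# Tiny demo: one base atom, signal length 8
a = np.array([1.0, -1.0])          # base atom a_t
A = atom_family(a, 8)              # shift-invariant family (S_tau a_t)_tau
# A signal built from two shifted copies is exactly representable:
alpha = np.zeros(A.shape[1]); alpha[1] = 2.0; alpha[5] = -3.0
x = A @ alpha
```

A full shift-invariant dictionary stacks one such family per base atom, so only the few base atoms need to be learned.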
Since the dictionary is efficient only for image blocks of small size, performing dictionary learning on the whole noisy SAR image would break the sparsity of the representation, so that we could not obtain the optimal coefficients, and the robustness of the method would be greatly reduced [1]. Therefore, in this paper we first partition the noisy image into blocks. To prevent blocking artifacts, the blocks overlap with a step length of one. In addition, each image block $p$ must satisfy Equation (5) for unbiased reconstruction of the original image [19].
$\forall \tau \in \sigma_k, \quad S_\tau^* \Big( \sum_{\varsigma \in \sigma_t} S_\varsigma p_\varsigma \Big) = S_\tau^* I$ (5)
where $\sigma_t = \{ \tau \mid \alpha_{t,\tau} \ne 0 \}$, and $S_\tau^*$ denotes the adjoint of $S_\tau$. More detail about the derivation can be found in [19].
The sparse model of the image block family $P_t = (p_{t,\tau})_\tau$ can be calculated as:
$\hat{\alpha}_P = \arg\min_{\alpha_P} \sum_{\tau \in \sigma_t} \Big\| p_\tau - \sum_t \alpha_{t,\tau}\, S_\tau a_t \Big\|_2^2 + \mu \|\alpha_P\|_0$ (6)
Normally, the noisy SAR image satisfies the following multiplicative noise model [24]:
$I(x, y) = I_R(x, y) \cdot S(x, y)$ (7)
where $(x, y)$ denotes the azimuth and range coordinates of the center pixel, $I(x, y)$ denotes the SAR image polluted by speckle noise, and $I_R(x, y)$ denotes the real landscape in the actual scene. $S(x, y)$ denotes the speckle noise, a multiplicative noise which obeys a $\Gamma$ distribution, has second-order stationarity, and a mean value of one. In addition, the variance of the speckle noise is inversely proportional to the equivalent number of looks (ENL).
In the model above, the relationship between the image and the noise is multiplicative: the noise exists and disappears with the image. Spatial-domain de-noising methods compute each de-noised pixel from its surrounding pixels, so the multiplicative noise model poses no problem; such methods simply modify the value of every pixel in the image. However, methods based on the transform domain or on sparse representation must decide whether each component is useful or not; in particular, each sparse coefficient is either retained or discarded. Under the multiplicative model, the results would necessarily be poor, so we cannot de-noise the image directly. In this paper, before de-noising the noisy SAR image, we convert the noise model in Equation (7) into the additive noise model shown in Equation (8) by a logarithmic transform [8].
$I = I_R + S$ (8)
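The model conversion and its inverse can be sketched as follows (a minimal illustration; the function names and the stabilizing `eps` are our own assumptions):

```python
import numpy as np

def to_additive(noisy, eps=1e-6):
    """Multiplicative speckle I = I_R * S  ->  additive log I = log I_R + log S."""
    return np.log(noisy + eps)

def from_additive(log_img, eps=1e-6):
    """Invert the logarithm after de-noising (the exponential transform)."""
    return np.exp(log_img) - eps

img = np.array([[1.0, 4.0], [9.0, 16.0]])
round_trip = from_additive(to_additive(img))   # recovers img up to float error
```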
If every image block of the noise-free SAR image $I_R$ satisfies the model of Equation (6), the image de-noising model can be expressed as:
$(\hat{\alpha}_P, \hat{P}) = \arg\min_{\alpha_P, P} \lambda \|P - P_I\|_2^2 + \sum_{\tau \in \sigma_t} \Big\| p_\tau - \sum_t \alpha_{t,\tau}\, S_\tau a_t \Big\|_2^2 + \mu \|\alpha_P\|_0$ (9)
where $\lambda \|P - P_I\|_2^2$ measures the similarity between the noisy image block family $P_I$ and the real image block family $P$, which can be enforced by $\|P - P_I\|_2^2 \le \mathrm{Const} \cdot \sigma^2$, with $\sigma^2$ denoting the noise variance. $\mu \|\alpha_P\|_0$ and $\sum_{\tau \in \sigma_t} \| p_\tau - \sum_t \alpha_{t,\tau} S_\tau a_t \|_2^2$ are the prior conditions enforcing the conformance between sparsity and the image decomposition, which ensure that all reconstructed image blocks have minimum error with respect to the original image blocks.
There are two kinds of algorithms for calculating the optimal solution based on an over-complete dictionary and sparse representation: greedy algorithms and global optimization algorithms [22]. Greedy algorithms mainly include matching pursuit (MP) and orthogonal matching pursuit (OMP), while global optimization algorithms include basis pursuit (BP) and so on. Because shift-invariant dictionary atoms within the same atom family differ very little, the selected algorithm should handle a large, highly coherent dictionary well [19]. However, MP as used in [19] is not optimal, because its residual is only perpendicular to the current projection direction, which can lead to projecting onto the same direction again in later iterations. Compared with MP, OMP also converges faster at the same precision. In this paper, we adopt OMP to solve the sparse model in Equation (9). Additionally, since the over-complete dictionary $D$ in the aforementioned model is fixed, it may blur image edges, produce the Gibbs phenomenon, and so on when de-noising the image. We therefore fuse a Bayesian framework into the dictionary updating, and use the noisy SAR image itself to train an adaptive dictionary for the sparse representation [25]. The objective function can be formulated as:
$(\hat{D}, \hat{\alpha}_P, \hat{P}) = \arg\min_{D, \alpha_P, P} \lambda \|P - P_I\|_2^2 + \sum_{\tau \in \sigma_t} \Big\| p_\tau - \sum_t \alpha_{t,\tau}\, S_\tau a_t \Big\|_2^2 + \mu \|\alpha_P\|_0$ (10)
When updating the dictionary by SVD, the detailed steps can be seen below [19]:
Step 1: Initialize the dictionary D, and solve the sparse coefficients { α } by OMP.
Step 2: Update one column $a_k$ of the atom family $A_t$ at a time, as shown in Equation (11).
$\Big\| p_\tau - \sum_t \alpha_{t,\tau}\, S_\tau a_t \Big\|_2^2 = \Big\| p_\tau - \sum_{t \ne k} \alpha_{t,\tau}\, S_\tau a_t - \alpha_{k,\tau}\, S_\tau a_k \Big\|_2^2 = \big\| R_k - \alpha_{k,\tau}\, S_\tau a_k \big\|_2^2$ (11)
When we update $a_k$, the residual $R_k = p_\tau - \sum_{t \ne k} \alpha_{t,\tau} S_\tau a_t$ in Equation (11) is fixed. Considering the constraint in Equation (5), we update the dictionary atoms by multiplying Equation (11) by $S_\tau^*$. Since the shift operator $S_\tau$ is unitary, $S_\tau^* S_\tau = E$. Then, Equation (11) can be expressed as follows.
$\big\| S_\tau^* R_k - S_\tau^* S_\tau\, \alpha_{k,\tau}\, a_k \big\|_2^2 = \big\| S_\tau^* R_k - \alpha_{k,\tau}\, a_k \big\|_2^2$ (12)
Step 3: Solve Equation (12) by SVD to obtain $a_k$ and $\alpha_k$.
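Steps 2 and 3 reduce to a best rank-1 approximation of the aligned residual matrix, exactly as in standard K-SVD. A minimal numpy sketch (our own illustration; the variable names are assumptions) is:

```python
import numpy as np

def update_atom(R):
    """K-SVD-style atom update: best rank-1 approximation of the aligned
    residual matrix R (columns = S_tau^* R_k for the blocks using atom k).
    Returns the unit-norm atom a_k and its coefficient row alpha_k."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    a_k = U[:, 0]                 # new atom: first left singular vector
    alpha_k = s[0] * Vt[0, :]     # new coefficients: sigma_1 * v_1
    return a_k, alpha_k

# A rank-1 residual is recovered exactly (up to a sign flip on both factors):
a_true = np.array([3.0, 4.0]) / 5.0
coef = np.array([2.0, -1.0, 0.5])
R = np.outer(a_true, coef)
a_k, alpha_k = update_atom(R)
```

Keeping only the largest singular value is what discards the smaller singular values mentioned in Section 1.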

3. Fast Guided Filter

As a kind of edge-preserving smoothing filter, the guided filter not only retains the edge-preserving smoothing ability of a bilateral filter [26], but also overcomes gradient reversal artifacts [8]. It is a linear filter whose computational complexity, $O(N)$ for an image with $N$ pixels, does not depend on the filter kernel size. In this paper, we adopt the fast guided filter of [27], which speeds this up from $O(N)$ to $O(N/s^2)$ for a subsampling ratio $s$.
Suppose the guidance image is I, input image is p, and the output image is q. Then, the guided filter can be modelled as [21]:
$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$ (13)
where $\omega_k$ denotes the square window centered at pixel $k$ with radius $r$. Because $\nabla q = a \nabla I$ in the local window, the output image inherits the edges of the guidance image; however, not all image edges can be captured by the gradient of the image alone. By calculating the linear coefficients $a_k$ and $b_k$, we can obtain an output image with richer edges, as shown in Figure 2. Figure 2a is the initial de-noised image by sparse representation, and Figure 2b is the final de-noised image after applying the guided filter to Figure 2a. $a_k$ and $b_k$ can be obtained from the following cost function.
$E(a_k, b_k) = \sum_{i \in \omega_k} \big( (a_k I_i + b_k - p_i)^2 + \gamma a_k^2 \big)$ (14)
where $\gamma$ is a regularization parameter preventing $a_k$ from becoming too large. The solutions of Equation (14) are given in Equations (15) and (16).
$a_k = \dfrac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \gamma}$ (15)
$b_k = \bar{p}_k - a_k \mu_k$ (16)
where $\mu_k$ and $\sigma_k^2$ are the mean and variance of the image $I$ in the local window $\omega_k$, $|\omega|$ denotes the number of pixels in $\omega_k$, and $\bar{p}_k = \frac{1}{|\omega|} \sum_{i \in \omega_k} p_i$ denotes the mean value of the input pixels.
Since the local window $\omega_k$ slides over the whole image, a given pixel $q_i$ in Equation (13) may be covered by different windows, so the output pixel has multiple values [21]. To overcome this, we simply take the mean of all the values of $q_i$. The output pixel can then be calculated as:
$q_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i$ (17)
where $\bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k$.
Since the output of the guided filter incorporates information from the guidance image, the guidance image can be the input image itself or another image whose information is important for the output. Ideally, the noise-free image would serve as the guidance image and the initial de-noised image as the input image. However, no noise-free image exists in a real scene; we must de-noise the obtained noisy image. In this paper, we therefore use the initial de-noised image produced by shift-invariant K-SVD as both the input image and the guidance image.
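The filtering steps above can be implemented directly with box (mean) filters. The sketch below is our own minimal single-channel implementation of the plain guided filter (without the subsampling speed-up of [27]); the edge-padded box filter is an assumption about border handling:

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via a summed-area table."""
    p = np.pad(img, r, mode='edge')
    c = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    c[1:, 1:] = np.cumsum(np.cumsum(p, axis=0), axis=1)
    w = 2 * r + 1
    H, W = img.shape
    s = c[w:w + H, w:w + W] - c[:H, w:w + W] - c[w:w + H, :W] + c[:H, :W]
    return s / w ** 2

def guided_filter(I, p, r=2, gamma=1e-3):
    """Guided filter: q_i = a_bar_i * I_i + b_bar_i."""
    mu = box(I, r)                  # window mean of the guidance image
    var = box(I * I, r) - mu ** 2   # window variance of the guidance image
    p_bar = box(p, r)               # window mean of the input image
    a = (box(I * p, r) - mu * p_bar) / (var + gamma)   # linear coefficient a_k
    b = p_bar - a * mu                                  # linear coefficient b_k
    return box(a, r) * I + box(b, r)                    # averaged output q
```

As in Section 4, the same de-noised image can be passed as both `I` and `p`.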

4. The SAR Image De-Noising Method

In this paper, we present an advanced method for SAR image de-noising. Our method not only overcomes the shift variance of the traditional K-SVD by adding shift invariance into the dictionary training, but also preserves more detailed information of the original image (e.g., the edges present in the noise-free image) by combining the guided filter with the adaptive K-SVD. The whole algorithm is presented in Algorithm 1. First, we change the multiplicative noise model into an additive model by a logarithmic transform; more detail can be seen in Section 2.2. Then, we initialize a random dictionary $D_{init}$ and the related parameters shown in Algorithm 1. Next, we iterate to obtain the shift-invariant adaptive dictionary and the optimal sparse coefficients by SVD and OMP; the details of OMP [28] can be seen in Algorithm 2. After that, we recover the initial de-noised image. Then, the guided filter is applied, with the initial de-noised image as both the input and the guidance image. Finally, the de-noised image is recovered by an exponential transform.
Algorithm 1 The SAR image de-noising method.
Input : the noisy SAR image I
    Step 1 : recover the initial de-noised image by shift-invariant K-SVD
       Initialize : dictionary D i n i t , iterNum N
      block I with sliding step = 1
        for  i = 1 to N
         extract the sparse coefficient α by OMP
         update dictionary D by SVD in Section 2.2
        end
      de-noise I by obtained D
      recover the initial de-noised image I i n i t
    Step 2 : recover the final de-noised image by guided filtering
       Initialize : guidance and input image I i n i t
             regularization parameter γ
             local window ω k
      calculate a k and b k
      recover the de-noised image by Equation (16)
Output : the de-noised image I R
Algorithm 2 OMP [28].
 Input : dictionary $D \in \mathbb{R}^{M \times T}$, image I, iteration number T
    Initialize : residual $r_0 = I$, iterNum $t = 1$, $D_0 = \varnothing$
     Repeat
      find $\lambda_t = \arg\max_{j = 1, 2, ..., T} |\langle r_{t-1}, \varphi_j \rangle|$, $\varphi_j$: j-th column of D
      set new $D_t = [D_{t-1}, \varphi_{\lambda_t}]$
      obtain $\hat{\alpha}_t = \arg\min_{\alpha} \|I - D_t \alpha\|_2^2$ by the least squares method
      update the residual $r_t = I - D_t \hat{\alpha}_t$
       $t = t + 1$
     end until $t > T$
 Output : sparse coefficients $\alpha$
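A minimal numpy version of the OMP loop above (our own sketch; it treats the input as a single vectorized signal rather than a family of image blocks) could look like:

```python
import numpy as np

def omp(D, y, n_iter):
    """Orthogonal matching pursuit: greedily select the dictionary column most
    correlated with the residual, then re-fit all selected columns by least
    squares so the residual stays orthogonal to the chosen subspace."""
    residual = y.astype(float)
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(D.T @ residual)))   # find lambda_t
        if k not in support:
            support.append(k)                        # D_t = [D_{t-1}, phi_k]
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # update residual r_t
    alpha[support] = coef
    return alpha

# With an orthonormal dictionary, two iterations recover a 2-sparse signal:
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, -2.0])
alpha = omp(D, y, 2)
```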

5. Experimental Results and Analysis

To verify the superior performance of our method, we conducted a series of experiments on a simulated SAR image and various standard real SAR images of different sizes from different data sets, which were first artificially polluted by noise. We then applied state-of-the-art image de-noising methods to them: the enhanced Lee filter (Lee) [29], shearlet (ST) [7], SAR-BM3D [30], K-SVD [25], the iterative nonlocal sparse model (It-NSM) [31], and our method. For the sparse model inside It-NSM, we adopted K-SVD. All experiments were carried out in Matlab on an Intel Core i5 at 3.1 GHz with 4 GB RAM.

5.1. Experiments on the Simulated SAR Image

First, we ran our experiment on the simulated SAR image. Figure 3a is a noise-free SAR image of size 131 × 131 [1]. We added speckle noise with an ENL of L = 2 to Figure 3a and obtained the noisy image shown in Figure 3b. Figure 4 shows the de-noised images produced by the different methods. From Figure 4 we can see that some speckle noise remains in Figure 4a, which indicates that the enhanced Lee filter has limited de-noising ability. From Figure 4b,c we can see that some dots in the first line of Figure 3a are lost: although ST and SAR-BM3D achieve a de-noising effect, they mistake some important information in the original image for noise, leading to incomplete information in the de-noised image. The first bar on the left of Figure 4d is clearly very blurry, and the K-SVD method over-smooths the image. Finally, the de-noised image produced by our method in Figure 4f shows better visual quality, de-noising, and edge preservation.
To evaluate the de-noised images more accurately, we adopted the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The larger these index values are, the better the de-noising method is; the value of SSIM ranges from 0 to 1. In addition, because the adaptive shift-invariant dictionary is obtained from the noisy image, the results vary randomly over a small range. To be objective, we report the mean value of each index over ten experimental replications.
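These two indexes can be computed as follows (a sketch: PSNR is standard, while for SSIM we show a simplified single-window global version rather than the usual sliding-window mean, so its values may differ slightly from a full SSIM implementation):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Simplified SSIM computed over the whole image as one window."""
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```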
Table 1 shows the objective evaluation index values for Figure 4. From the table we can see that the de-noised image produced by our method has the best PSNR and SSIM values. The PSNR of our method is 1.8509 larger than the second-largest PSNR, and our SSIM is 0.9757, close to 1. All of this indicates that the presented image de-noising method is effective.

5.2. Experiments on the Real SAR Images

To further demonstrate the de-noising ability of our method, we conducted experiments on a real SAR image. The original SAR image in Figure 5a was taken by a TerraSAR-X High Resolution SpotLight 1-m acquisition on 20 March 2009 [32], and its size is 475 × 475. Similarly, we added speckle with ENL from L = 10 to 35 to Figure 5a. The case L = 25 is representative and realistic, so we only show the noisy image for L = 25 in Figure 5b. We then processed the noisy image with the different de-noising methods and obtained the de-noised images shown in Figure 6. From Figure 6, we can see that the de-noised image produced by our method is better in terms of visual effect.
In addition to the aforementioned PSNR and SSIM, we adopted some other objective evaluation indexes [1]: the equivalent number of looks (ENL), standard deviation (Sd), edge-preserving index (EPI), and computation time (Time). Except for Time and Sd, larger index values indicate a better de-noising method; the unit of Time is seconds. If the value of EPI is less than 1, the edges of the de-noised image are weaker than those of the original image; otherwise, the edges are strengthened.
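ENL, Sd, and EPI can be computed as sketched below (our own illustration; the gradient-ratio form of EPI shown here is one common definition and is an assumption about the exact formula used in [1]):

```python
import numpy as np

def enl(region):
    """Equivalent number of looks of a homogeneous region: mean^2 / variance."""
    return region.mean() ** 2 / region.var()

def sd(img):
    """Standard deviation of the de-noised image (smaller is better)."""
    return img.std()

def epi(original, denoised):
    """Edge-preserving index: ratio of total absolute gradient magnitude of the
    de-noised image to that of the original (one common definition)."""
    grad = lambda im: (np.abs(np.diff(im, axis=0)).sum()
                       + np.abs(np.diff(im, axis=1)).sum())
    return grad(denoised) / grad(original)
```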
We used these objective indexes to evaluate the de-noised images. The PSNR values in the first column denote the PSNR of the noisy images for the different speckle ENLs, while the PSNR values in the third column denote the PSNR of the images de-noised by the different methods. Table 2 shows the experimental results for Figure 5a under different noise levels. From Table 2 we can see that at each noise level, our method achieves the best PSNR; moreover, the less noise the image has, the larger our method's advantage. The minimum PSNR difference between our method and the others is 3.429, for the noisy image with L = 35. Our SSIM is close to one and is the best at each noise level, which indicates that the image de-noised by our method is the most similar to the original. Except for L = 35, our EPI is also the best, indicating better edge-preserving ability. Among all the methods, our ENL is the best, indicating that our de-noised image has a better visual effect. Except for L = 15, our Sd is the smallest and thus the best. Although the Time of Lee is the shortest overall, among the sparse representation methods our Time is the best, nearly one minute shorter than the others. For almost all images with different noise levels, the Time of methods such as Lee, ST, and SAR-BM3D varies little: these spatial-domain and transform-domain methods are fixed, so their Time changes little. For the methods based on sparse representation, however, the complexity of dictionary learning and sparse coding varies with the noise, so their Time differs at different noise levels; the more noise the image has, the shorter the Time.
Additionally, as the noise level increases, the Time of dictionary learning decreases and the PSNR of the de-noised image also decreases. Since most of the Time is spent on dictionary learning, the whole computation time decreases when the dictionary-learning Time decreases. When there is a great deal of noise in the image, the de-noising method can suppress most of the noise but cannot eliminate all of it, which explains why PSNR decreases as the noise level increases.

6. Conclusions

In this paper, a new SAR image de-noising method based on shift-invariant K-SVD and a guided filter is presented. The experimental results show that, compared with state-of-the-art image de-noising methods, our method not only achieves good de-noising results, but also preserves more detailed information, such as the edges of the original image, by combining the adaptive shift-invariant K-SVD with the guided filter. However, the limitations of our method are that we only ran our experiments on SAR images, and that its computation is time-consuming. In future work, we plan to apply our method to polarized SAR image de-noising, and to explore new methods that combine good de-noising results with lower computation time.

Acknowledgments

This work is supported by the Natural Science Foundation of China (No. 61572063) and the Fundamental Research Funds for the Central Universities (No. K17JB00150).

Author Contributions

Xiaole Ma and Shaohai Hu conceived and designed the experiments; Xiaole Ma performed the experiments; Xiaole Ma and Shuaiqi Liu analyzed the data; Xiaole Ma and Shaohai Hu contributed analysis tools; Xiaole Ma wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SAR: Synthetic Aperture Radar
K-SVD: K-means Singular Value Decomposition
NSST: Non-Subsampled Shearlet Transform
DCT: Discrete Cosine Transform
OMP: Orthogonal Matching Pursuit
BP: Basis Pursuit
ENL: Equivalent Number of Looks
MP: Matching Pursuit
SVD: Singular Value Decomposition
Lee: Lee filter
ST: Shearlet Transform
SAR-BM3D: SAR-Block Matching 3D
It-NSM: Iterative Nonlocal Sparse Model
PSNR: Peak Signal-to-Noise Ratio
SSIM: Structural Similarity
Sd: Standard Deviation
EPI: Edge-Preserving Index
Time: Computation Time
COMSAR: Commercial Synthetic Aperture Radar

References

  1. Liu, S.; Liu, M.; Li, P.; Zhao, J.; Zhu, Z.; Wang, X. SAR Image Denoising via Sparse Representation in Shearlet Domain Based on Continuous Cycle Spinning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2985–2992. [Google Scholar] [CrossRef]
  2. Deledalle, C.-A.; Denis, L.; Tupin, F.; Reigber, A.; Jäger, M. NL-SAR: A Unified Non-Local Framework for Resolution-Preserving (Pol)(In)SAR denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2021–2038. [Google Scholar] [CrossRef]
  3. Xia, Q.; Xing, S.; Ma, D.; Mo, D.; Li, P.; Ge, Z. An improved K-SVD-based denoising method for remote sensing satellite images. J. Remote Sens. 2016, 20, 441–449. [Google Scholar] [CrossRef]
  4. Brownrigg, D.R.K. The weighted median filter. Commun. ACM 1984, 27, 807–818. [Google Scholar] [CrossRef]
  5. Zhang, X.; Xiong, Y. Impulse noise removal using directional difference based noise detector and adaptive weighted mean filter. IEEE Signal Process. Lett. 2009, 16, 295–298. [Google Scholar] [CrossRef]
  6. Chen, J.; Benesty, J.; Huang, Y.; Doclo, S. New insights into the noise reduction Wiener filter. IEEE Trans. Audio Speech Lang. Process. 2006, 14, 1218–1234. [Google Scholar] [CrossRef]
  7. Hu, S.; Ma, X.; Liu, S.; Yang, D. SAR image de-noising based on non-local similar block matching in NSST domain. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing, Chengdu, China, 6–10 November 2016; pp. 832–836. [Google Scholar] [CrossRef]
  8. Liu, S.; Shi, M.; Hu, S.; Xiao, Y. Synthetic aperture radar image de-noising based on Shearlet transform using the context-based model. Phys. Commun. 2014, 13, 221–229. [Google Scholar] [CrossRef]
  9. Di Martino, G.; Di Simone, A.; Iodice, A.; Poggi, G.; Riccio, D.; Verdoliva, L. Scattering-based SARBM3D. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2131–2144. [Google Scholar] [CrossRef]
  10. Hu, S.; Yang, D.; Liu, S.; Ma, X. Block-matching based multimodal medical image fusion via PCNN with SML. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing, Chengdu, China, 6–10 November 2016; pp. 13–18. [Google Scholar] [CrossRef]
  11. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873. [Google Scholar] [CrossRef] [PubMed]
  12. Ma, X.; Liu, S.; Hu, S.; Geng, P.; Liu, M.; Zhao, J. SAR image edge detection via sparse representation. Soft Comput. 2017, 1–9. [Google Scholar] [CrossRef]
  13. Zhang, Q.; Li, B. Discriminative K-SVD for dictionary learning in face recognition. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2691–2698. [Google Scholar] [CrossRef]
  14. Liu, S.; Hu, S.; Xiao, Y.; An, Y. Bayesian Shearlet shrinkage for SAR image de-noising via sparse representation. Multidimens. Syst. Signal Process. 2014, 25, 683–701. [Google Scholar] [CrossRef]
  15. Baloch, G.; Ozkaramanli, H. Image denoising via correlation-based sparse representation. Signal Image Video Process. 2017, 1–8. [Google Scholar] [CrossRef]
  16. Deledalle, C.-A.; Denis, L.; Poggi, G.; Tupin, F.; Verdoliva, L. Exploiting Patch Similarity for SAR Image Processing: The nonlocal paradigm. IEEE Signal Process. Mag. 2014, 31, 69–78. [Google Scholar] [CrossRef]
  17. Lu, T.; Li, S.; Fang, L.; Benediktsson, J.A. SAR image despeckling via structural sparse representation. Sens. Imaging 2016, 17, 2. [Google Scholar] [CrossRef]
  18. Yang, B.; Liu, R.; Chen, X. Fault diagnosis for a wind turbine generator bearing via sparse representation and shift-invariant K-SVD. IEEE Trans. Ind. Inform. 2017, 13, 1321–1331. [Google Scholar] [CrossRef]
  19. Mailhé, B.; Lesage, S.; Gribonval, R.; Bimbot, F. Shift-invariant dictionary learning for sparse representation: Extending K-SVD. In Proceedings of the EUSIPCO 2008 16th European Signal Processing Conference, Lausanne, Switzerland, 25–29 August 2008; pp. 1–5. [Google Scholar]
  20. Liu, H.; Dong, H.; Ge, J.; Guo, P.; Bai, B.; Zhang, C. An Improved Tuning Control Algorithm Based on SVD for FID Signal. J. Adv. Comput. Intell. Intell. Inform. 2017, 21, 133–138. [Google Scholar] [CrossRef]
  21. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  22. Zhang, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: Algorithms and applications. IEEE Access 2015, 3, 490–530. [Google Scholar] [CrossRef]
  23. Wei, J.; Huang, Y.; Lu, K.; Wang, L. Fields of experts based multichannel compressed sensing. J. Signal Process. Syst. 2017, 86, 111–121. [Google Scholar] [CrossRef]
  24. Goodman, J.W. Some fundamental properties of speckle. J. Opt. Soc. Am. 1976, 66, 1145–1150. [Google Scholar] [CrossRef]
  25. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  26. Paris, S.; Kornprobst, P.; Tumblin, J.; Durand, F. Bilateral filtering: Theory and applications. Found. Trends Comput. Graph. Vis. 2009, 4, 1–73. [Google Scholar]
  27. He, K.; Sun, J. Fast guided filter. arXiv, 2015; arXiv:1505.00996. [Google Scholar]
  28. Barthélemy, Q.; Larue, A.; Mayoue, A.; Mercier, D.; Mars, J.I. Shift and 2D Rotation Invariant Sparse Coding for Multivariate Signals. IEEE Trans. Signal Process. 2012, 60, 1597–1611. [Google Scholar] [CrossRef]
  29. Lang, F.; Yang, J.; Li, D. An Adaptive Enhanced Lee Speckle Filter for Polarimetric SAR Image. Acta Geod. Cartogr. Sin. 2014, 43, 690–697. [Google Scholar] [CrossRef]
  30. Parrilli, S.; Poderico, M.; Angelino, C.V.; Verdoliva, L. A Nonlocal SAR Image Denoising Algorithm Based on LLMMSE Wavelet Shrinkage. IEEE Trans. Geosci. Remote Sens. 2012, 50, 606–616. [Google Scholar] [CrossRef]
  31. Xu, B.; Cui, Y.; Li, Z.; Yang, J. An Iterative SAR Image Filtering Method Using Nonlocal Sparse Model. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1635–1639. [Google Scholar] [CrossRef]
  32. Commercial Synthetic Aperture Radar (COMSAR). Available online: http://apogeospatial.com/commercial-sar-comes-to-the-u-s-finally/ (accessed on 07 November 2017).
Figure 1. Images from different sensors: (a) Color optical image; (b) Synthetic aperture radar (SAR) image.
Figure 2. Edge preservation of guided filter: (a) Initial de-noised image; (b) Final de-noised image by guided filter.
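As context for the edge preservation shown in Figure 2, the gray-scale guided filter of He et al. [21] can be sketched in a few lines. This is a minimal illustration under assumed window radius `r` and regularization `eps`, not the authors' exact implementation; `box_mean` is a naive box average introduced here for self-containment.

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1) x (2r+1) window, with edge replication at borders.
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    # Gray-scale guided filter: q = mean(a) * I + mean(b), where a and b
    # come from a local linear model of the input p on the guide I.
    I = I.astype(np.float64); p = p.astype(np.float64)
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # small var_I (flat area) -> a ~ 0, strong smoothing
    b = mean_p - a * mean_I             # large var_I (edge) -> a ~ 1, edge kept
    return box_mean(a, r) * I + box_mean(b, r)
```

When the guide equals the input, the filter acts as an edge-preserving smoother, which is the role it plays on the initial de-noised image in this method.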
Figure 3. The simulated SAR image (experiments implemented in Matlab on an Intel Core i5 at 3.1 GHz with 4 GB of RAM): (a) Noise-free image; (b) Noisy image.
Figure 4. The de-noised images of Figure 3b: (a) Lee; (b) Shearlet (ST); (c) SAR-Block Matching 3D (SAR-BM3D); (d) K-means singular value decomposition (K-SVD); (e) Iterative nonlocal sparse model (It-NSM); (f) Proposed method.
Figure 5. Naarden, The Netherlands [32]: (a) Original image; (b) Noisy image.
Figure 6. The de-noised images of Figure 5b: (a) Lee; (b) ST; (c) SAR-BM3D; (d) K-SVD; (e) It-NSM; (f) Proposed method.
Table 1. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of Figure 4 by the tested image de-noising methods.
|      | Noisy Image | Lee [29] | ST [7] | SAR-BM3D [30] | K-SVD [25] | It-NSM [31] | Proposed Method |
|------|-------------|----------|--------|---------------|------------|-------------|-----------------|
| PSNR | 20.1942     | 28.3467  | 28.9427 | 30.4674      | 32.6934    | 32.9754     | 34.8263         |
| SSIM | 0.6069      | 0.7977   | 0.8434  | 0.8942       | 0.9568     | 0.9597      | 0.9757          |
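For reference, the two indices in Table 1 can be computed as sketched below. The PSNR follows the standard definition; the SSIM shown is a single-window (global) simplification of the usual locally windowed index, with the conventional constants `C1 = (0.01·peak)^2` and `C2 = (0.03·peak)^2` assumed.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a de-noised image.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=255.0):
    # Single-window simplification of SSIM; the standard index averages
    # this statistic over sliding local windows.
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher values of both indices indicate a de-noised image closer to the noise-free reference, which is how the columns of Table 1 are read.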
Table 2. Objective evaluation index values of the noisy Naarden images by the tested image de-noising methods. ENL: equivalent number of looks; EPI: edge-preserving index; Sd: standard deviation; Time: computation time.
| L/PSNR     | Method          | PSNR    | ENL    | Sd      | SSIM   | EPI    | Time (s)  |
|------------|-----------------|---------|--------|---------|--------|--------|-----------|
| 10/28.1181 | Lee [29]        | 27.1364 | 1.6975 | 35.4571 | 0.9434 | 0.3458 | 3.3467    |
|            | ST [7]          | 28.2214 | 1.7059 | 56.0744 | 0.9866 | 0.5522 | 5.6916    |
|            | SAR-BM3D [30]   | 27.4571 | 1.7985 | 49.4675 | 0.9874 | 0.6475 | 502.5384  |
|            | K-SVD [25]      | 30.3233 | 1.8426 | 34.3466 | 0.9897 | 0.8519 | 2998.5896 |
|            | It-NSM [31]     | 30.3565 | 1.8440 | 33.4676 | 0.9919 | 0.8516 | 3004.0813 |
|            | Proposed method | 39.2361 | 1.8677 | 30.0572 | 0.9989 | 0.8700 | 2919.8994 |
| 15/24.6173 | Lee [29]        | 23.4657 | 1.1574 | 35.1574 | 0.9346 | 0.3276 | 3.3946    |
|            | ST [7]          | 25.3310 | 0.7811 | 54.6361 | 0.9733 | 0.5006 | 5.6565    |
|            | SAR-BM3D [30]   | 25.9843 | 1.2575 | 48.4674 | 0.9783 | 0.5974 | 3500.2160 |
|            | K-SVD [25]      | 27.5275 | 1.8807 | 45.2302 | 0.9844 | 0.6335 | 1698.2485 |
|            | It-NSM [31]     | 27.5620 | 1.8792 | 45.2309 | 0.9845 | 0.6331 | 1703.6639 |
|            | Proposed method | 34.6312 | 1.9224 | 42.5704 | 0.9969 | 0.6882 | 1666.4547 |
| 20/22.1212 | Lee [29]        | 22.4754 | 1.7874 | 35.4574 | 0.9247 | 0.2974 | 3.4354    |
|            | ST [7]          | 23.6642 | 1.8400 | 53.6023 | 0.9603 | 0.2854 | 5.7601    |
|            | SAR-BM3D [30]   | 23.1542 | 1.8434 | 48.1674 | 0.9642 | 0.3674 | 500.5725  |
|            | K-SVD [25]      | 22.1286 | 1.9199 | 33.5115 | 0.9759 | 0.5132 | 1051.7810 |
|            | It-NSM [31]     | 25.7346 | 1.9143 | 32.5115 | 0.9761 | 0.5171 | 1057.0876 |
|            | Proposed method | 31.5908 | 1.9801 | 30.7713 | 0.9937 | 0.5872 | 986.3493  |
| 25/20.1842 | Lee [29]        | 21.4571 | 1.8445 | 35.9742 | 0.9172 | 0.3147 | 3.5647    |
|            | ST [7]          | 22.6538 | 1.8899 | 52.7257 | 0.9492 | 0.3822 | 5.7038    |
|            | SAR-BM3D [30]   | 20.3844 | 1.6090 | 47.4676 | 0.9046 | 0.4727 | 503.7184  |
|            | K-SVD [25]      | 23.3703 | 1.7575 | 34.0398 | 0.9670 | 0.5246 | 724.1567  |
|            | It-NSM [31]     | 24.4259 | 1.4568 | 33.0334 | 0.9684 | 0.5424 | 735.1635  |
|            | Proposed method | 29.2883 | 1.9445 | 32.1524 | 0.9891 | 0.6420 | 679.3054  |
| 30/18.6005 | Lee [29]        | 20.1454 | 1.8975 | 36.0248 | 0.8975 | 0.2674 | 3.4678    |
|            | ST [7]          | 21.8720 | 1.9389 | 51.9003 | 0.9384 | 0.2368 | 5.6985    |
|            | SAR-BM3D [30]   | 22.1547 | 1.8436 | 47.5461 | 0.9367 | 0.2943 | 501.4645  |
|            | K-SVD [25]      | 23.3941 | 1.8012 | 32.9844 | 0.9582 | 0.4053 | 472.2370  |
|            | It-NSM [31]     | 23.4505 | 1.9957 | 32.6037 | 0.9587 | 0.4060 | 487.5391  |
|            | Proposed method | 27.5500 | 1.9009 | 31.5809 | 0.9835 | 0.5448 | 419.1741  |
| 35/17.2555 | Lee [29]        | 19.3464 | 1.7841 | 35.9874 | 0.8864 | 0.1846 | 3.5147    |
|            | ST [7]          | 21.3024 | 1.8832 | 51.2527 | 0.9292 | 0.1495 | 5.6939    |
|            | SAR-BM3D [30]   | 21.9754 | 1.9441 | 47.1464 | 0.9342 | 0.2746 | 489.8395  |
|            | K-SVD [25]      | 22.6340 | 1.8434 | 32.1516 | 0.9497 | 0.3485 | 333.4654  |
|            | It-NSM [31]     | 22.6987 | 1.8357 | 32.1875 | 0.9504 | 0.3864 | 344.6388  |
|            | Proposed method | 26.1277 | 1.9542 | 31.0467 | 0.9769 | 0.2891 | 284.7186  |
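The no-reference indices of Table 2 can be computed as sketched below. The ENL is the standard (mean/std)^2 statistic over a homogeneous region; for the EPI, several definitions exist in the literature, and the gradient-ratio form shown here is one common illustrative choice, not necessarily the exact formula used by the authors.

```python
import numpy as np

def enl(region):
    # Equivalent number of looks over a homogeneous region:
    # (mean / standard deviation)^2 -- larger means stronger speckle smoothing.
    region = region.astype(np.float64)
    return (region.mean() / region.std()) ** 2

def epi(denoised, reference):
    # One common edge-preserving index: ratio of total absolute gradient
    # magnitude in the de-noised image to that of the reference
    # (values closer to 1 indicate better edge preservation).
    grad = lambda im: (np.abs(np.diff(im.astype(np.float64), axis=0)).sum()
                       + np.abs(np.diff(im.astype(np.float64), axis=1)).sum())
    return grad(denoised) / grad(reference)
```

Read against Table 2, a good de-noiser should raise ENL (smoother homogeneous areas) and lower Sd while keeping EPI high, which is the trade-off the proposed method is argued to balance.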

Ma, X.; Hu, S.; Liu, S. SAR Image De-Noising Based on Shift Invariant K-SVD and Guided Filter. Remote Sens. 2017, 9, 1311. https://doi.org/10.3390/rs9121311
