Article

An Unsupervised Image Denoising Method Using a Nonconvex Low-Rank Model with TV Regularization

1 Key Laboratory of Grain Information Processing and Control, Ministry of Education, Henan University of Technology, Zhengzhou 450001, China
2 Zhengzhou Key Laboratory of Machine Perception and Intelligent System, Henan University of Technology, Zhengzhou 450001, China
3 School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 7184; https://doi.org/10.3390/app13127184
Submission received: 7 May 2023 / Revised: 11 June 2023 / Accepted: 14 June 2023 / Published: 15 June 2023

Abstract:
In real-world scenarios, images may be affected by additional noise during compression and transmission, which interferes with postprocessing such as image segmentation and feature extraction. Image noise can also be induced by environmental variables and imperfections in the imaging equipment. Robust principal component analysis (RPCA), one of the traditional approaches to image denoising, fails to efficiently use the background's low-rank prior information, which lowers its effectiveness under complex noise backgrounds. In this paper, we propose a robust PCA method based on a nonconvex low-rank approximation and total variation (TV) regularization to model the image denoising problem and improve the denoising performance. Firstly, we use a nonconvex γ-norm to address the issue that the traditional nuclear norm penalizes large singular values excessively. The resulting rank approximation is more accurate than the nuclear norm because matrix elements with substantial approximation errors are discarded, which reduces the sparsity error, and the low sensitivity of the γ-norm to outliers improves the method's robustness. Secondly, we use the ℓ1-norm to increase the sparsity of the foreground noise, and the TV norm to improve the smoothness of the image structure by exploiting the sparsity of the image in the gradient domain. The denoising effectiveness of the model is increased by employing the alternating direction method of multipliers to locate the global optimal solution. It is important to note that our method does not require any labeled images, and its unsupervised denoising principle enables the method to generalize to different scenarios. Our method can denoise images corrupted by different types of noise. Extensive experiments show that it fully preserves the edge structure information of the image, preserves important image features, and maintains excellent visual effects in terms of brightness smoothing.

1. Introduction

Owing to the limitations of imaging devices and transmission channels, noise is easily introduced during image acquisition and transmission, which has a significant negative impact on the visual quality of the image. Noise introduces random or stochastic variations into images, which may blur, distort, or even hide useful information [1]. Moreover, the quality of noise removal directly affects subsequent processing of the image, such as image segmentation, object recognition, and edge extraction. Therefore, image denoising is of great significance in scientific research, engineering, and clinical medical diagnosis [2].
As the name suggests, image denoising aims to preserve as much of the original information in the image as possible while minimizing noise. Over the past decades, many efficient denoising methods have emerged. Spatial domain denoising methods [3] act directly on the source image and include classical filtering methods such as mean filtering [4], median filtering [5], and Wiener filtering [6]. While these methods are easy to implement, they are less adaptable and stable, and the denoised images are not sharp enough. Transform domain denoising methods [7] can effectively avoid image distortion by transforming the image into the frequency domain for processing; they include the Fourier transform [8], discrete cosine transform [9], and wavelet transform [10], but they usually suffer from high complexity and uncertainty. Moreover, deep-learning-based denoising methods enhance the denoising effect to some extent thanks to their strong feature representation capabilities, but they require larger training sets and a higher computational cost [11].
Robust principal component analysis (RPCA) is a typical representative matrix decomposition method that decomposes a matrix into two matrices, namely a low-rank matrix and a sparse matrix [12]. This idea can effectively avoid complex computational procedures and enhance the robustness of the processing process. It has a wide range of applications in image processing, machine learning, earth science, and medicine [13]. For example, Bayati [14] proposed an adaptive weighted rank reduction (AWRR) method and applied it to synthetic and real seismic data to test its efficiency; Wang et al. [15] used a weighted Schatten p-norm minimization to denoise impulse noise in medical clinical images; Xiu et al. proposed a new Laplacian regularized RPCA framework, where the “robust” aspect came from the introduction of a sparse term [16] and a novel process monitoring approach using the structured joint sparse canonical correlation analysis (SJSCCA) [17]; Javed et al. [18] proposed a spatiotemporal structured sparse RPCA method for moving-object detection, which imposed spatial and temporal regularization on the sparse component in the form of graph Laplacians; Liu et al. [19] proposed a method that used manifold constraints to maintain the local geometric structure and introduced nonconvex joint sparsity to capture the global progressive sparse structure. Among these methods and other RPCA image processing methods that have not been mentioned, we find that most of them are optimized on the basis of the original RPCA algorithm, namely, the optimization of the sparse matrix. Most rank function methods for approximating low-rank matrices still use the standard nuclear norm for approximation. Of course, there are some improvements to the nuclear norm, such as the weighted nuclear norm [20], the singular value threshold [21], and the truncated nuclear norm [22]. However, the nuclear norm can overpenalize large singular values of the matrix during the computation, resulting in the nuclear norm minimization problem not obtaining an optimal solution, which greatly affects the performance of the associated method. Recent literature [23,24] has shown that nonconvex functions provide a better estimation accuracy and variable selection consistency and can provide more accurate rank approximations than the nuclear norm, avoiding its singular value penalty problem. Some specific works have been proposed by a number of scholars. For example, Sun et al. [25] proposed a nonconvex formula consisting of the upper-bound trace norm and the l1-norm, which further restored the low rank of the data matrix; Xiang et al. [26] extended a nonconvex paradigm to a sparse group feature selection and proposed an efficient algorithm for large-scale problems. Nonetheless, these nonconvex models require further optimization in terms of efficiency, robustness, and low-rank recovery accuracy.
In image denoising problems, an image can be viewed as a matrix where the low-rank part represents the structure and texture information, and the sparse part represents the noise information in the image. Therefore, it is feasible to apply the RPCA model to the domain of image denoising. However, RPCA methods based on nonconvex function approximations are less applicable to the domain of image denoising. To address the efficiency and robustness issues of previous nonconvex models, more efficient image denoising schemes have been devised. We design an unsupervised image denoising method based on the nonconvex γ -norm and RPCA. Firstly, the γ -norm overcomes the shortcomings of the nuclear norm overestimation and pays more attention to the singular values in the matrix. Compared to other convex functions, the low-rank matrix approximation performs better. Secondly, the γ -norm has a strong robustness because it is less sensitive to outliers. In practical applications, there are often some outliers in the matrix, and the γ -norm can deal with these outliers more robustly. Finally, the γ -norm can be processed by using various optimization algorithms, such as the proximal gradient descent [27], the iterative threshold shrinkage algorithm [28], and the augmented Lagrange multiplier algorithm [29]. These algorithms improve the convergence properties and denoising effectiveness of the overall denoising model. In other aspects, to address the structural smoothness of the images and further improve the denoising performance, we introduce the TV regularization and l 1 -norm, respectively. We use the augmented Lagrangian multiplier method [29] to construct Lagrangian functions to solve the final mathematical model, transform the constrained problem into an unconstrained problem, and find the global optimal solution by alternating the updates of variables. The contributions of this work are summarized as follows:
1.
We design an unsupervised image denoising method based on nonconvex γ -norm and RPCA. To the best of our knowledge, this should be the first application of the nonconvex γ -norm to image denoising. The use of the γ -norm avoids the problem of overpunishing large singular values by the nuclear norm and provides a high robustness and rank approximation. Combined with the associated solution algorithm, the overall model is highly efficient and converges fast, which facilitates the overall denoising model in quickly discarding matrix elements with large approximation errors and providing a high estimation accuracy.
2.
The denoising effect is facilitated by combining the l 1 -norm and a TV norm regularization. The l 1 -norm can effectively enhance the sparsity of the noise, while the TV norm can exploit the sparsity of the image in the gradient domain to further enhance the smoothness of the image while preserving the edge information of the denoised image.
3.
Our method does not require any labeled images for training, and its unsupervised denoising principle makes it easy to generalize to different scenarios for application. Extensive experiments show that the proposed method can preserve the image’s edge structure information, preserve important features of the image, and maintain excellent visual effects in brightness smoothing under different scenes and different levels of noise.
In the remainder of this paper, we present the related work in Section 2. In Section 3, we introduce our proposed method. Section 4 presents an experimental evaluation of our method for image denoising. Finally, the conclusion is drawn in Section 5.

2. Related Work

2.1. Image Denoising Literature Review

We divide the literature on image denoising into three categories: traditional denoising, deep learning denoising, and RPCA denoising. Traditional denoising methods include spatial domain methods [1] and transform domain methods [7], which mostly appeared in the early stages. For example, Donoho and Johnstone [30] proposed a wavelet threshold denoising method; Huang et al. [31] proposed a median filtering method, which became one of the most widely used approaches; Saito et al. [32] designed an LMMSE filter that improves the denoising effect by restoring image texture; and the antileakage least-squares spectral analysis proposed by Ghaderpour [33] combines the Fourier transform and least-squares spectral analysis to process noise in seismic data. However, traditional denoising methods often suffer from high complexity and poor robustness. Deep learning denoising methods have been proposed in recent years. For instance, Chen et al. [34] used a GAN to model the noise information extracted from real noisy images and completed denoising under the constraint of a correlation loss; Valsesia et al. [35] designed a denoising method based on graph convolution by introducing an edge attention mechanism and using a convolution operator. Deep learning denoising models typically require high computational costs and a large quantity of labeled data, and their limited interpretability makes them less adaptable. RPCA approaches denoising from a different angle, and in recent years there have been a relatively large number of denoising works based on RPCA. For example, Wang et al. [36] proposed an improved RPCA method using a penalty function. This method replaced the nuclear norm and ℓ1-norm in the original RPCA with regularized and weighted versions of the penalty function, respectively, and built an improved model for medical image denoising. However, its convergence was slow and time-consuming, and the full potential of the low-rank recovery could not be realized. Peng et al. [37] proposed a nonconvex, local, low-rank, and sparse separation denoising method for hyperspectral images. This method simultaneously develops more accurate approximations to the rank and the columnwise sparsity of the low-rank and sparse components, respectively, and proved effective in denoising HSIs. Nonetheless, since hyperspectral images consist of many channels, this method is not suitable for grayscale images. Liu et al. [38] proposed a denoising method based on nonlocal weighted robust principal component analysis (RPCA). They used local similarity to build the RPCA objective function and solved the problem with an iterative log-thresholding algorithm. This approach relies on the nuclear norm and thus overpenalizes large singular values, which increases the error and in turn degrades the denoised images. Moreover, compressed sensing, which inspired the RPCA method, has also been studied for image denoising. For example, Mahdaoui et al. [39] proposed a compressed sensing method combining total variation regularization and a nonlocal self-similarity constraint for image denoising. This approach addresses the problem that reconstruction techniques often fail to preserve image texture and significantly improves the denoising performance. In addition, several scholars have proposed methods that combine multimodal image fusion with sparse representation denoising. For instance, Qi et al.
[40] proposed a novel method for the simultaneous denoising and fusion of multimodality images. This method decomposes the noisy source image into cartoon and texture components and uses the cartoon-texture decomposition to separate the noise from the original image information. A sparse representation (SR) model based on a Gaussian scale mixture was proposed for the denoising and fusion of the texture components. Although this method achieved a better or comparable computational cost relative to existing SR-based methods, it remained time-consuming because of the numerous matrix computations in sparse coding and dictionary learning. Compared with the literature above, our proposed denoising method effectively improves the robustness and estimation accuracy of the model by introducing the nonconvex γ-norm and the TV norm, provides a more accurate rank approximation, and avoids the singular value penalty problem. Moreover, the proposed model not only improves image denoising and enhances image smoothness but also preserves important features of the image while removing noise.

2.2. Robust Principal Component Analysis (RPCA)

RPCA is arguably the most widely used robust modification of PCA, designed in particular for scenarios with gross corruption. The RPCA model decomposes the corrupted observation data matrix D into a low-rank matrix L and a sparse matrix S, whose sparsity is measured by the ℓ0-norm. The mathematical expression is as follows.
$\min_{L,S}\ \operatorname{rank}(L) + \lambda\|S\|_0 \quad \text{s.t.} \quad D = L + S$  (1)
where L is the low-rank matrix; S denotes the sparse matrix; λ is a regularization factor that balances low rank and sparsity; $\|\cdot\|_0$ is the $\ell_0$-norm of a matrix; and $\operatorname{rank}(L)$ is the rank of the matrix L. Minimizing the matrix rank function and the $\ell_0$-norm is an NP-hard task because both are highly nonconvex and nonlinear. Wright et al. [41] therefore relaxed problem (1) into the following convex program:
$\min_{L,S}\ \|L\|_* + \lambda\|S\|_1 \quad \text{s.t.} \quad D = L + S$  (2)
where $\|L\|_* = \sum_i \sigma_i(L)$ is the nuclear norm, i.e., the sum of the singular values $\sigma_i(L)$ of the matrix L, and $\|\cdot\|_1$ is the $\ell_1$-norm of a matrix.
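For reference, the computational core of solving the convex relaxation (2) is the proximal operator of the nuclear norm, i.e., soft-thresholding of singular values. The following minimal NumPy sketch (our own illustration, not the solver used in [41]) shows this operator:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of tau*||.||_* at A,
    i.e. the minimizer of 0.5*||X - A||_F^2 + tau*||X||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink every singular value by tau

# toy check: thresholding a noisy low-rank matrix recovers a small effective rank
rng = np.random.default_rng(0)
L0 = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
X = svt(L0 + 0.1 * rng.standard_normal((50, 50)), tau=2.0)
print(np.linalg.matrix_rank(X, tol=1e-6))        # close to 3
```

Because the same threshold is subtracted from every singular value, the large (informative) singular values are penalized as heavily as the small (noise) ones; this is exactly the imbalance that the nonconvex γ-norm below is designed to avoid.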

2.3. Nonconvex γ -Norm

Kang et al. [42] presented a nonconvex γ-norm that provides a tighter approximation to the rank of a matrix than the nuclear norm, resolving the imbalanced penalization of different singular values in the convex nuclear norm. Although this technique is frequently used in target identification and image recognition, it has not yet been studied in depth for image denoising. Furthermore, the theoretical convergence analysis shows that iterative optimization based on the nonconvex γ-norm converges to at least one stationary point, and the γ-norm itself is straightforward to compute. This suggests that the γ-norm is more stable and succinct than other nonconvex norms when used to constrain the low-rank properties of images. For a matrix L, the γ-norm is defined as:
$\|L\|_\gamma = \sum_i \frac{(1+\gamma)\,\sigma_i(L)}{\gamma + \sigma_i(L)}, \quad \gamma > 0$  (3)
where
$\lim_{\gamma \to 0} \|L\|_\gamma = \operatorname{rank}(L), \qquad \lim_{\gamma \to \infty} \|L\|_\gamma = \|L\|_*$
where $\sigma_i(L)$, $i = 1, \ldots, \min(m, n)$, are the singular values of L; as γ approaches zero, the γ-norm therefore coincides with the true rank of L. In addition, $\|L\|_\gamma$ is unitarily invariant: $\|L\|_\gamma = \|ULV\|_\gamma$ for any orthonormal $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$.
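To make Definition (3) concrete, the following NumPy sketch (our own illustration, not code from [42]) evaluates the γ-norm from the singular values and checks the two limiting cases numerically:

```python
import numpy as np

def gamma_norm(L, gamma):
    """Nonconvex gamma-norm of Eq. (3): sum_i (1+gamma)*sigma_i(L) / (gamma + sigma_i(L))."""
    s = np.linalg.svd(L, compute_uv=False)
    return np.sum((1.0 + gamma) * s / (gamma + s))

rng = np.random.default_rng(0)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 60))   # rank-5 matrix

print(np.linalg.matrix_rank(L))                       # 5
print(gamma_norm(L, 1e-6))                            # ~5: small gamma approaches rank(L)
print(gamma_norm(L, 1e6))                             # large gamma approaches the nuclear norm
print(np.linalg.svd(L, compute_uv=False).sum())       # nuclear norm, for comparison
```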

2.4. TV Norm

The total variation (TV) model, first proposed by Rudin et al., measures structural variation in the gradient domain and serves as a good sparsity regularization for image denoising [43]. It preserves edge information and improves the smoothness of the denoised image. In addition to producing pleasing visual effects, it retains significant image information, which is valuable in several areas; in the medical field, for instance, reducing noise in clinical computed tomography images while maintaining their fundamental diagnostic information is very helpful in identifying the condition. Previous research suggests that anisotropic TV methods can produce better denoising results than the isotropic TV method. The anisotropic TV is defined as follows:
$\|L\|_{TV} = \sum_{i=1}^{m-1}\sum_{j=1}^{n-1}\left(\left|L_{i,j} - L_{i+1,j}\right| + \left|L_{i,j} - L_{i,j+1}\right|\right) + \sum_{i=1}^{m-1}\left|L_{i,n} - L_{i+1,n}\right| + \sum_{j=1}^{n-1}\left|L_{m,j} - L_{m,j+1}\right|$  (4)
where $L_{i,j}$ denotes the pixel of the matrix L at row i and column j.
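Read concretely, (4) is simply the sum of the absolute vertical and horizontal first-order differences of the image; a minimal NumPy sketch (our own illustration):

```python
import numpy as np

def anisotropic_tv(L):
    """Anisotropic total variation of Eq. (4): sum of absolute vertical
    and horizontal first-order differences of the image L."""
    dv = np.abs(np.diff(L, axis=0))   # |L[i, j] - L[i+1, j]|, vertical differences
    dh = np.abs(np.diff(L, axis=1))   # |L[i, j] - L[i, j+1]|, horizontal differences
    return dv.sum() + dh.sum()

# a piecewise-constant image has a small TV; adding noise raises it sharply
img = np.zeros((64, 64))
img[:, 32:] = 1.0
print(anisotropic_tv(img))            # 64: one unit step edge crossed by 64 rows
print(anisotropic_tv(img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)))
```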

2.5. l 1 -Norm

The ℓ1-norm assumes that the parameters obey a Laplace distribution and is not differentiable everywhere; its regularized output is sparse, so it can produce a sparse model. The ℓ2-norm assumes that the parameters follow a Gaussian distribution and is fully differentiable, which helps prevent the model from overfitting. The ℓ1-norm is stable and insensitive to outliers, so its robustness is better than that of the ℓ2-norm; because the ℓ2-norm squares the errors, outliers affect an ℓ2-based model far more than an ℓ1-based one. More importantly, this approach constrains the matrix S using the ℓ1-norm rather than the ℓ2-norm because the ℓ1-norm better ensures that the defined optimization problem has a unique solution. For a matrix S with elements $s_{kj}$, the ℓ1-norm used here is given by the maximum absolute column sum.
$\|S\|_1 = \max_{j} \sum_{k=1}^{m} \left|s_{kj}\right|$  (5)
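In NumPy, the column-sum form of (5) coincides with the induced matrix 1-norm; a one-line sketch (our own illustration):

```python
import numpy as np

S = np.array([[ 1.0, -4.0, 0.0],
              [-2.0,  0.5, 3.0]])

# Eq. (5): maximum absolute column sum
l1_col = np.abs(S).sum(axis=0).max()
print(l1_col)                    # 4.5
print(np.linalg.norm(S, 1))      # same value via the induced matrix 1-norm
```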

3. Proposed Method

The system framework of the proposed method is presented in Figure 1. As discussed, the RPCA model based on nonconvex γ -norm can improve the approximation accuracy of the rank function, and the TV regularization can solve the smoothing problem of the image’s structure edge information. This section introduces our proposed RPCA model based on the nonconvex γ -norm and anisotropic total variation technique. The mathematical model of the method is as follows:
$\min_{L,S}\ \|L\|_\gamma + \lambda\|S\|_1 + \tau\|L\|_{TV} \quad \text{s.t.} \quad D = L + S$  (6)
where D is the data matrix of the observed image; L is the low-rank matrix; S is the sparse matrix; $\|L\|_\gamma$ is the γ-norm of L; $\|S\|_1$ is the $\ell_1$-norm of S; $\|L\|_{TV}$ is the TV regularization of L; τ is a trade-off parameter controlling the TV norm; and λ is also a trade-off parameter, set to $\lambda = 1/\max(m, n)$, where m and n are the dimensions of the observed image matrix D.
Figure 1. Illustration of the proposed method.
Considering the nonconvex property of the γ -norm and the introduction of an anisotropic TV regularization, the augmented Lagrange multiplier algorithm [29], also known as the alternating direction method of multipliers (ADMM), is used to solve the optimization problem. We first introduce a new auxiliary variable Z into Model (6):
$\min_{L,S,Z}\ \|L\|_\gamma + \lambda\|S\|_1 + \tau\|Z\|_{TV} \quad \text{s.t.} \quad D = L + S,\ Z = L$  (7)
By adding the Lagrange multipliers and a positive penalty scalar, Formula (7) can be further written as the following augmented Lagrangian function:
$h(L, S, Z, \Lambda_1, \Lambda_2) = \|L\|_\gamma + \lambda\|S\|_1 + \tau\|Z\|_{TV} + \langle \Lambda_1, D - L - S \rangle + \langle \Lambda_2, L - Z \rangle + \frac{\mu}{2}\left(\|D - L - S\|_F^2 + \|L - Z\|_F^2\right)$  (8)
where μ > 0 is a penalty parameter and τ > 0 is the TV trade-off parameter; $\langle \cdot, \cdot \rangle$ denotes the inner product of two matrices; $\Lambda_1$ and $\Lambda_2$ are the Lagrange multipliers; and $\|\cdot\|_F$ is the Frobenius norm of a matrix.
The variables L, S, Z, $\Lambda_1$, and $\Lambda_2$ are updated by alternating iterations to improve the solution accuracy. The optimization process is summarized in Algorithm 1. The flowchart of the proposed method is presented in Figure 2.
Algorithm 1 Solving (7) by ADMM
Input: observed data matrix $D \in \mathbb{R}^{m \times n}$ and $\lambda > 0$
Initialization: compute $L^{(0)} = U \Sigma V^T$ and $S^{(0)}$; set $\mu_0 = 10^{-4}$, $\rho > 1$, $\mu_{max} = 10^{4}$, $i = 0$
While $\|D - L - S\|_F / \|D\|_F > 10^{-8}$ and $i < \mathrm{inneriter}$ do
  $L^{i+1} = \arg\min_L \|L\|_\gamma + \mu_i \left\|L - \frac{1}{2}\left(D + Z^i - S^i + \frac{\Lambda_1^i}{\mu_i} + \frac{\Lambda_2^i}{\mu_i}\right)\right\|_F^2$;
  $S^{i+1} = \arg\min_S \lambda\|S\|_1 + \frac{\mu_i}{2}\left\|S - \left(D - L^{i+1} + \frac{\Lambda_1^i}{\mu_i}\right)\right\|_F^2$;
  $Z^{i+1} = \arg\min_Z \tau\|Z\|_{TV} + \frac{\mu_i}{2}\left\|Z - L^{i+1} - \frac{\Lambda_2^i}{\mu_i}\right\|_F^2$;
  $\Lambda_1^{i+1} = \Lambda_1^i + \mu_i\left(D - L^{i+1} - S^{i+1}\right)$;
  $\Lambda_2^{i+1} = \Lambda_2^i + \mu_i\left(L^{i+1} - Z^{i+1}\right)$;
  $\mu_{i+1} = \min(\rho\,\mu_i,\ \mu_{max})$;
  $i = i + 1$;
end while
Output: $L^i$, $S^i$.

3.1. Updating L

To solve for variable L, we minimize over h L , S , Z , Λ 1 , Λ 2 with the remaining optimization variables ( S , Z , Λ 1 , Λ 2 ) fixed:
$L^{i+1} = \arg\min_{L}\ h(L, S^i, Z^i, \Lambda_1^i, \Lambda_2^i)$
$\phantom{L^{i+1}} = \arg\min_{L}\ \|L\|_\gamma + \langle \Lambda_1^i, D - L - S^i \rangle + \langle \Lambda_2^i, L - Z^i \rangle + \frac{\mu_i}{2}\left(\|D - L - S^i\|_F^2 + \|L - Z^i\|_F^2\right)$
$\phantom{L^{i+1}} = \arg\min_{L}\ \|L\|_\gamma + \mu_i\left\|L - \frac{1}{2}\left(D + Z^i - S^i + \frac{\Lambda_1^i}{\mu_i} + \frac{\Lambda_2^i}{\mu_i}\right)\right\|_F^2$  (9)
To solve (9), the following theorem is introduced.
Theorem 1 ([42]). 
Let $A = U \Sigma_A V^T$ be the SVD of $A \in \mathbb{R}^{m \times n}$, with $\Sigma_A = \operatorname{diag}(\sigma_A)$. Let $F(X) = f(\sigma(X))$ be a unitarily invariant function and μ > 0. Then, an optimal solution to the following problem
$\min_{X}\ F(X) + \frac{\mu}{2}\|X - A\|_F^2$  (10)
is $X^{*} = U \Sigma_X V^T$, where $\Sigma_X = \operatorname{diag}(\sigma^{*})$ and $\sigma^{*} = \operatorname{prox}_{f,\mu}(\sigma_A)$. Here, $\operatorname{prox}_{f,\mu}(\sigma_A)$ is the proximity operator of f with penalty μ, defined as
$\operatorname{prox}_{f,\mu}(\sigma_A) := \arg\min_{\sigma \geq 0}\ f(\sigma) + \frac{\mu}{2}\left\|\sigma - \sigma_A\right\|_2^2$  (11)
Based on the theorem above, the optimal solution to (9) is
$L^{i+1} = U \operatorname{diag}(\sigma^{*})\, V^T$  (12)
where $\sigma^{*}$, the solution to (11), can be approximated by iteratively linearizing the concave term $f(\sigma)$. Specifically, in the (k + 1)th inner iteration, σ is updated as follows:
$\sigma^{k+1} = \arg\min_{\sigma \geq 0}\ \langle \nabla f(\sigma^{k}), \sigma \rangle + \frac{\mu_i}{2}\left\|\sigma - \sigma_A\right\|_2^2$  (13)
where
$A = \frac{1}{2}\left(D + Z^i - S^i + \frac{\Lambda_1^i}{\mu_i} + \frac{\Lambda_2^i}{\mu_i}\right)$  (14)
and $\nabla f(\sigma^k)$ is the gradient of $f$ at $\sigma^k$.
Figure 2. The flowchart of the proposed method.
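In code, this update is an SVD of A followed by a few linearized shrinkage steps on the singular values. A compact NumPy sketch under our reading of (11)-(13) (illustrative, not the authors' implementation):

```python
import numpy as np

def update_L(A, gamma, mu, inner_iter=5):
    """L-subproblem (9): proximal step of the gamma-norm at A via Theorem 1.
    The concave singular-value penalty f(sigma) = sum_i (1+gamma)*sigma_i/(gamma+sigma_i)
    is linearized at the current sigma, and the resulting problem (13) is solved
    in closed form by a nonnegative shift of the singular values of A."""
    U, sigma_A, Vt = np.linalg.svd(A, full_matrices=False)
    sigma = sigma_A.copy()
    for _ in range(inner_iter):
        grad = (1.0 + gamma) * gamma / (gamma + sigma) ** 2   # gradient of f at sigma
        sigma = np.maximum(sigma_A - grad / mu, 0.0)          # minimizer of the linearized problem
    return (U * sigma) @ Vt                                   # L = U diag(sigma) V^T, Eq. (12)
```

Because the gradient of f is large for small singular values and nearly zero for large ones, small (noise-dominated) singular values are shrunk strongly while large (structure-carrying) singular values are left almost untouched.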

3.2. Updating S

The variable S is updated while minimizing over h L , S , Z , Λ 1 , Λ 2 with variables ( L , Z , Λ 1 , Λ 2 ) fixed:
$S^{i+1} = \arg\min_{S}\ h(L^{i+1}, S, Z^i, \Lambda_1^i, \Lambda_2^i)$
$\phantom{S^{i+1}} = \arg\min_{S}\ \lambda\|S\|_1 + \langle \Lambda_1^i, D - L^{i+1} - S \rangle + \frac{\mu_i}{2}\|D - L^{i+1} - S\|_F^2$
$\phantom{S^{i+1}} = \arg\min_{S}\ \lambda\|S\|_1 + \frac{\mu_i}{2}\left\|S - \left(D - L^{i+1} + \frac{\Lambda_1^i}{\mu_i}\right)\right\|_F^2$  (15)
where S is the sparse (foreground noise) matrix. This subproblem is a standard $\ell_1$-regularized proximal problem, and its closed-form solution is as follows:
$S^{i+1} = \mathcal{T}_{\lambda/(2\mu_i)}\left(D - L^{i+1} + \frac{\Lambda_1^i}{\mu_i}\right)$  (16)
and here, $\mathcal{T}_{\lambda/(2\mu_i)}$ is the shrinkage (soft-thresholding) operator, which is defined as follows:
$\mathcal{T}_{\theta}(x) = \operatorname{sgn}(x) \cdot \max(|x| - \theta,\ 0)$  (17)
with the function sgn ( · ) returning the sign of the given operand.
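The operator in (17) is the familiar elementwise soft-thresholding; a one-function NumPy sketch (the same helper name is reused in the overall loop sketch given after Section 3.4):

```python
import numpy as np

def soft_threshold(X, theta):
    """Elementwise shrinkage operator of Eq. (17): sgn(x) * max(|x| - theta, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - theta, 0.0)

# S-update of Eq. (16): S = soft_threshold(D - L + Lam1 / mu, lam / (2.0 * mu))
```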

3.3. Updating Z

Similarly, to solve for variable Z, we minimize over h L , S , Z , Λ 1 , Λ 2 with the remaining optimization variables ( L , S , Λ 1 , Λ 2 ) fixed:
$Z^{i+1} = \arg\min_{Z}\ h(L^{i+1}, S^{i+1}, Z, \Lambda_1^i, \Lambda_2^i)$
$\phantom{Z^{i+1}} = \arg\min_{Z}\ \tau\|Z\|_{TV} + \langle \Lambda_2^i, L^{i+1} - Z \rangle + \frac{\mu_i}{2}\|L^{i+1} - Z\|_F^2$
$\phantom{Z^{i+1}} = \arg\min_{Z}\ \tau\|Z\|_{TV} + \frac{\mu_i}{2}\left\|Z - L^{i+1} - \frac{\Lambda_2^i}{\mu_i}\right\|_F^2$  (18)
We define $Q = L^{i+1} + \frac{\Lambda_2^i}{\mu_i}$ and write $Q = [Q_1, Q_2, \ldots, Q_n] \in \mathbb{R}^{m \times n}$ in terms of its columns, so that optimization (18) can be rewritten columnwise as:
$\arg\min_{Z_j}\ \tau\|Z_j\|_{TV} + \frac{\mu_i}{2}\left\|Z_j - Q_j\right\|_F^2$  (19)
In this paper, we used the fast gradient-based algorithm introduced in [44] to solve (19).
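As a stand-in for that solver, the sketch below approximates the column-wise TV proximal step in (19) with a plain projected-gradient iteration on the dual of the 1D TV problem; it is deliberately simple and is not the accelerated method of [44]:

```python
import numpy as np

def prox_tv_columns(Q, w, n_iter=60):
    """Approximately solve, column by column, min_z 0.5*||z - q||^2 + w*TV(z)
    (the scaled form of Eq. (19), with w = tau/mu) by projected gradient on the
    dual problem; a simplified, non-accelerated stand-in for the fast
    gradient-based method of [44]."""
    if w <= 0:
        return Q.copy()
    Z = np.empty_like(Q)
    for j in range(Q.shape[1]):
        q = Q[:, j]
        p = np.zeros(q.size - 1)                              # dual variable, one per difference
        for _ in range(n_iter):
            z = q - w * (np.r_[0.0, p] - np.r_[p, 0.0])       # z = q - w * D^T p
            p = np.clip(p + np.diff(z) / (4.0 * w), -1.0, 1.0)  # safe step 1/(4w^2), then project
        Z[:, j] = q - w * (np.r_[0.0, p] - np.r_[p, 0.0])
    return Z
```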

3.4. Updating Λ 1 and Λ 2

Finally, the Lagrange multiplier matrices Λ 1 and Λ 2 are updated:
$\Lambda_1^{i+1} = \Lambda_1^i + \mu_i\left(D - L^{i+1} - S^{i+1}\right)$  (20)
$\Lambda_2^{i+1} = \Lambda_2^i + \mu_i\left(L^{i+1} - Z^{i+1}\right)$  (21)
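Taken together, Sections 3.1-3.4 define one full ADMM sweep. The sketch below (a schematic NumPy reimplementation under our own assumptions, not the authors' MATLAB code) strings the three helper sketches from Sections 3.1-3.3 (update_L, soft_threshold, prox_tv_columns) into the loop of Algorithm 1; parameter defaults follow the paper's settings where stated and are otherwise illustrative guesses:

```python
import numpy as np

def admm_denoise(D, lam, tau=1e-5, gamma=0.01, mu=1e-4, rho=1.5,
                 mu_max=1e4, tol=1e-8, max_iter=500):
    """Schematic ADMM loop for model (7), mirroring Algorithm 1. Relies on
    update_L, soft_threshold and prox_tv_columns as sketched in Secs. 3.1-3.3."""
    L = D.copy()
    S = np.zeros_like(D)
    Z = L.copy()
    Lam1 = np.zeros_like(D)
    Lam2 = np.zeros_like(D)
    normD = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        A = 0.5 * (D + Z - S + Lam1 / mu + Lam2 / mu)            # target of the L-step, Eq. (14)
        L = update_L(A, gamma, mu)                               # gamma-norm proximal step, Sec. 3.1
        S = soft_threshold(D - L + Lam1 / mu, lam / (2.0 * mu))  # l1 shrinkage, Eq. (16)
        Z = prox_tv_columns(L + Lam2 / mu, tau / mu)             # column-wise TV step, Eq. (19)
        Lam1 = Lam1 + mu * (D - L - S)                           # multiplier updates, Eqs. (20)-(21)
        Lam2 = Lam2 + mu * (L - Z)
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(D - L - S, 'fro') / normD < tol:       # stopping rule of Algorithm 1
            break
    return L, S

# e.g., for an m x n grayscale image D in [0, 255]:
# L_hat, S_hat = admm_denoise(D.astype(float), lam=1.0 / max(D.shape))
```

Together with the three helper sketches above, this forms a complete, if crude, reference implementation: the low-rank component L is the denoised image and S collects the separated noise.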

4. Experimental Results and Analysis

In this section, we qualitatively and quantitatively compare our proposed RPCA method based on a nonconvex low-rank approximation and TV regularization with five state-of-the-art methods: robust principal component analysis (RPCA) [45], the nonconvex log total variation model (AMlogtv) [46], nonconvex rank RPCA (NonRPCA) [42], weighted nuclear norm minimization-robust principal component analysis (WNNM−RPCA) [47], and total directional variation (TDV) [48]. We explore the denoising effect of our method on three common types of noise: salt-and-pepper (impulse) noise, Poisson noise, and Gaussian noise. Additionally, we perform facial image denoising experiments in Section 4.3 to demonstrate that our method can preserve important image features while removing noise. An objective evaluation avoids the uncertainty of subjective visual inspection; therefore, we use the peak signal-to-noise ratio (PSNR) [49], structural similarity index measure (SSIM) [50], and feature similarity index measure (FSIM) [51] to evaluate image quality objectively. The PSNR is the ratio between the maximum possible signal power and the power of the corrupting noise that affects its representation accuracy, and it is often used to evaluate the quality of a denoised image against the original. The SSIM measures the similarity between two images and is influenced by the brightness, contrast, and structure of the image. The FSIM is a quality assessment method based on the degree of feature similarity between the ground truth and the denoised image, built on the human visual system. Higher values of all three metrics are desirable.
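For reproducibility, PSNR is straightforward to compute directly, and SSIM is available in common toolboxes; a short Python sketch (our own code, with scikit-image used only for SSIM; FSIM has no equally standard implementation and is omitted here):

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a denoised image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# usage on two grayscale images `clean` and `denoised` of equal size:
# print(psnr(clean, denoised), structural_similarity(clean, denoised, data_range=255))
```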
In order to verify the denoising effect of our method in different scenarios, in terms of dataset selection, we used standard, authoritative, and widely used natural image, clinical medical image, and facial image datasets for denoising experiments. The experiment used eight standard grayscale images of 256 × 256 randomly chosen from the digital image processing dataset, namely Butterfly, House, Peppers, Lena, Living Room, Bird, Camera, and Starfish. The medical image denoising experiment was conducted by randomly selecting eight 512 × 512 lung CT images from the LungCT-Diagnosis dataset (https://wiki.cancerimagingarchive.net, accessed on 26 May 2023). The data in LungCT-Diagnosis are from the Moffitt Cancer Center. The dataset has a total of 61 patients, each patient has a different number of 512 × 512 lung tumor images, and these images are diagnostic-enhanced CT scan images. In addition, the images in the Extended Yale B dataset were used in the facial image denoising experiment. The dataset has a total of 38 subjects; each subject has 64 images with a size of 192 × 168, all taken under different lighting. Due to different lighting conditions, the images of each object have different degrees of serious damage.
All the experiments in this paper were implemented in MATLAB 2018a. The host processor was an Intel(R) Core (TM) i5-9400F CPU 2.90 GHz, and the operating system was 64-bit. Furthermore, the method proposed in this paper was based on a matrix of m × n , with the following parameter settings: μ = 1 × 10 3 , γ = 0.01 , τ = 1 × 10 5 , and λ = 1 / max ( m , n ) .

4.1. Natural Image Denoising

The eight natural images we selected for testing are shown in Figure 3. We chose three examples to demonstrate the denoising effect of the proposed method and five comparison methods on three types of noise, as shown in Figure 4, Figure 5 and Figure 6. In Figure 4, Figure 5 and Figure 6, example ROIs of size 30 × 30 pixels are taken (center) and enlarged (bottom right corner) for visualization, and visible image quality upgrades after restoration are presented. Additionally, based on these six methods, we compared the singular values between the original noise-free image and the denoised image, as shown in Figure 4a, Figure 5a and Figure 6a. The results showed that, compared with the five comparison methods, the singular value curve of our method was closer to the original image curve, which indicated that the method obtained the singular value closest to the original clean image.
The majority of the images in Figure 4, Figure 5 and Figure 6 show that, although the denoising behavior varies with the type of noise, our proposed method still performed best among the six methods. For example, in the impulse noise results (Figure 4), the image texture in Figure 4d is noticeably unclear. Figure 4e not only retains a lot of noise, but its image structure can hardly be identified either. Figure 4f has blurred edges and decreased brightness, while Figure 4g has a certain structural fidelity but many noise residues. Although the noise is removed in Figure 4h, the overall appearance is very blurred, and the image edges cannot be clearly identified. Conversely, Figure 4i not only removes the noise completely but also further enhances the smoothness of the image while preserving its detailed structure. This is due to the effective use of the designed γ-norm and TV norm. In addition, our proposed method preserved the contrast and brightness of the original image well. In the Poisson noise results (Figure 5), the enlarged parts of Figure 5d–g clearly show that these comparison methods blurred the noisy regions rather than removing them, resulting in a less clear image. The detailed structure of Figure 5h is very blurred, and the overall clarity of the picture is seriously reduced. Our result (Figure 5i) is visibly cleaner than those of the other methods and provides a smoother image. Although the overall Gaussian noise results of our method and WNNM−RPCA (Figure 6) did not differ significantly, it is nevertheless obvious from the details that our method preserved the clearest image structure and performed the most complete denoising.
Figure 5. Denoising images of Starfish by different methods (Poisson noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (Poisson noise); (d) RPCA (PSNR = 27.11 dB, SSIM = 0.9119, FSIM = 0.9561); (e) AMlogtv (PSNR = 25.65 dB, SSIM = 0.7654, FSIM = 0.9682); (f) NonRPCA (PSNR = 28.45 dB, SSIM = 0.9086, FSIM = 0.9541); (g) WNNM−RPCA (PSNR = 28.83 dB, SSIM = 0.9155, FSIM = 0.9582); (h) TDV (PSNR = 20.84 dB, SSIM = 0.5388, FSIM = 0.7005); (i) our method (PSNR = 29.30 dB, SSIM = 0.9324, FSIM = 0.9690).
We used the PSNR, SSIM, and FSIM as the objective indicators of image quality, and the results are shown in Table 1, Table 2 and Table 3. To show the denoising performance of our method more intuitively, we plotted histograms of the average evaluation indexes of the eight natural images under different noises for the six methods, as shown in Figure 7. As indicated by these objective indicators, our method achieved the best results in all noise experiments. In the impulse noise experiment, our average PSNR was 23% higher than that of WNNM−RPCA, and our average SSIM was even 31% higher. In the Poisson noise experiment, the average PSNR of our method was 11% higher than that of the classical RPCA algorithm, and the average FSIM was 32% higher than that of TDV. In the Gaussian noise experiments, the average PSNR of our method was 51% higher than that of AMlogtv.
Figure 6. Denoising images of Lena by different methods (Gaussian noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (Gaussian noise); (d) RPCA (PSNR = 33.61 dB, SSIM = 0.9416, FSIM = 0.9698); (e) AMlogtv (PSNR = 25.24 dB, SSIM = 0.7445, FSIM = 0.9847); (f) NonRPCA (PSNR = 37.56 dB, SSIM = 0.9437, FSIM = 0.9712); (g) WNNM−RPCA (PSNR = 38.86 dB, SSIM = 0.9539, FSIM = 0.9766); (h) TDV (PSNR = 23.03 dB, SSIM = 0.6737, FSIM = 0.7617); (i) our method (PSNR = 42.37 dB, SSIM = 0.9795, FSIM = 0.9890).

4.2. Medical Image Denoising

In this section, we selected eight medical images from public datasets for testing, as shown in Figure 8. We show the visual comparison results of three images in Figure 9, Figure 10 and Figure 11. Among them, Figure 9a, Figure 10a, and Figure 11a are the singular value distribution curves of the denoised image and the original image for each comparison method. The singular values of the denoised image obtained by the proposed method are closer to the ground-truth values, which also shows that it avoids the over-penalization of large singular values seen in the classical competing method (RPCA) while preserving the small singular values.
By observing the denoising experiment findings, we discover that in the denoising results of impulse noise (Figure 9), the enlarged images of Figure 9d,h have clear artifacts and little to no noise. In the enlarged image of Figure 9e, the noise is mixed with the structure of the image itself, and the image and noise cannot be distinguished. Although there is not any evident noise in the magnified version of Figure 9f, there is a small blurring of the edge information of the lung tissue and some “oil painting” phenomenon. In Figure 9g, there is a great deal of noise, which makes the image unrecognizable. The enlarged image of Figure 9h is very blurred, and there is obvious noise in the image. Conversely, our method (Figure 9i) has the best denoising effect. The image edge structure is highly clear and noise-free, which is nearly identical to the original image. In the experiment of removing Poisson noise, we find that although the denoising effects of these six algorithms are very close, we can still find that Figure 10d,g are obviously darkened, and there are still some impurities in the background of Figure 10f. Figure 10e,h blur the original details of the image, but our method (Figure 10i) does not exhibit these phenomena. Furthermore, when denoising the image containing Gaussian noise, it is obvious from the comparison result diagram in Figure 11 that although the comparison method removes noise to a certain extent, it does not smooth the image when saving the image detail information, leaving the image with insufficient clarity. In conclusion, our approach had a good edge recovery effect, entirely preserved lung parenchyma, and effectively reduced noise. It can be determined that our denoising method was superior to other methods by a subjective visual evaluation.
The objective evaluation is shown in Table 4, Table 5 and Table 6 and Figure 12. It can be seen from Figure 12 that our proposed method also achieved the best effect in medical image denoising. Specifically, when denoising impulse noise, the average PSNR value of NonRPCA was roughly 40% higher than that of the classical RPCA algorithm, and the average PSNR value of our method was about 8% higher than that of NonRPCA. For Poisson noise, the average PSNR of our method was 36% higher than the latest method, AMlogtv, and the average SSIM was 31% higher than TDV, which also showed the effectiveness of our method. Additionally, against the background of Gaussian noise, the average SSIM values of all methods were low, which may be due to the characteristics of Gaussian noise itself, causing the brightness, contrast, and image structure to affect the SSIM value. Moreover, the average SSIM value of our proposed method was still higher than other methods. In summary, the objective evaluation results of images were essentially consistent with the subjective analysis.
Figure 8. The LungCT images used in our experiments. The image numbers are given from left to right as 1 to 8.
Figure 9. Denoising images of LungCT image no. 1 by different methods (impulse noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (impulse noise); (d) RPCA (PSNR = 34.11 dB, SSIM = 0.8401, FSIM = 0.9904); (e) AMlogtv (PSNR = 20.81 dB, SSIM = 0.2979, FSIM = 0.9630); (f) NonRPCA (PSNR = 39.39 dB, SSIM = 0.7965, FSIM = 0.9924); (g) WNNM−RPCA (PSNR = 30.07 dB, SSIM = 0.7915, FSIM = 0.9761); (h) TDV (PSNR = 22.66 dB, SSIM = 0.2491, FSIM = 0.8230); (i) our method (PSNR = 43.94 dB, SSIM = 0.8709, FSIM = 0.9968).
Figure 10. Denoising images of LungCT image no. 2 by different methods (Poisson noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (Poisson noise); (d) RPCA (PSNR = 25.99 dB, SSIM = 0.9620, FSIM = 0.9656); (e) AMlogtv (PSNR = 29.25 dB, SSIM = 0.8792, FSIM = 0.9870); (f) NonRPCA (PSNR = 37.75 dB, SSIM = 0.9732, FSIM = 0.9918); (g) WNNM−RPCA (PSNR = 31.22 dB, SSIM = 0.9776, FSIM = 0.9786); (h) TDV (PSNR = 23.51 dB, SSIM = 0.6897, FSIM = 0.8187); (i) our method (PSNR = 41.38 dB, SSIM = 0.9895, FSIM = 0.9956).
Figure 11. Denoising images of LungCT image no. 3 by different methods (Gaussian noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (Gaussian noise); (d) RPCA (PSNR = 33.85 dB, SSIM = 0.6098, FSIM = 0.9916); (e) AMlogtv (PSNR = 25.68 dB, SSIM = 0.3639, FSIM = 0.9968); (f) NonRPCA (PSNR = 34.89 dB, SSIM = 0.5844, FSIM = 0.9867); (g) WNNM−RPCA (PSNR = 35.28 dB, SSIM = 0.5988, FSIM = 0.9890); (h) TDV (PSNR = 23.30 dB, SSIM = 0.2879, FSIM = 0.8248); (i) our method (PSNR = 37.94 dB, SSIM = 0.6233, FSIM = 0.9980).
Figure 12. Histogram of evaluation indicators of medical images. (a) Average PSNR; (b) average SSIM; (c) average FSIM.
Table 4. PSNR comparisons of RPCA, AMlogtv, NonRPCA, WNNM−RPCA, TDV, and the proposed method for noise removal in LungCT images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Image    RPCA      AMlogtv   NonRPCA   WNNM−RPCA   TDV       Ours
Impulse noise   1        34.1086   20.8109   39.3910   30.0661     22.6606   43.9398
Impulse noise   2        29.7339   20.7488   39.8981   28.6808     22.5140   42.6776
Impulse noise   3        27.5287   20.7872   36.5463   29.3167     22.3062   40.9121
Impulse noise   4        34.5260   20.7857   41.9476   31.1892     22.3296   43.9249
Impulse noise   5        22.7421   20.6817   36.5971   26.6155     22.6972   39.0744
Impulse noise   6        21.6578   20.5327   36.2646   24.5502     23.0144   38.7777
Impulse noise   7        21.5732   20.4626   33.8149   25.8391     23.0563   36.7197
Impulse noise   8        20.8001   20.3707   32.7776   22.9537     22.9246   36.3267
Impulse noise   Average  26.5838   20.6475   37.1546   27.4014     22.6878   40.2941
Poisson noise   1        25.3796   29.2750   37.8533   30.4089     23.4745   40.7487
Poisson noise   2        25.9954   29.2576   37.7551   31.2219     23.5091   41.3824
Poisson noise   3        32.0855   29.0463   40.0718   40.0718     23.8365   42.0702
Poisson noise   4        27.2295   29.0262   39.0888   32.7282     24.2402   40.3903
Poisson noise   5        24.7137   29.2884   35.0703   27.0225     23.7425   38.5899
Poisson noise   6        25.3914   29.0887   36.5243   27.4036     22.3025   40.6466
Poisson noise   7        28.2686   28.8807   29.3977   31.1992     27.3887   37.9662
Poisson noise   8        26.2732   28.8691   33.4158   28.9564     27.4388   35.8507
Poisson noise   Average  26.9171   29.0915   36.9186   30.9760     24.4916   39.7056
Gaussian noise  1        29.0228   25.7441   34.9713   32.3936     23.5661   37.0809
Gaussian noise  2        29.0297   25.7769   33.8007   33.1549     23.5022   37.4827
Gaussian noise  3        33.8497   25.6839   34.8873   35.2871     23.2967   37.9355
Gaussian noise  4        30.3547   25.6564   33.3377   31.6763     23.4633   35.8458
Gaussian noise  5        27.5425   25.7290   34.7948   30.7608     23.7899   36.1815
Gaussian noise  6        27.9469   25.5031   32.8209   28.5334     24.3505   35.7714
Gaussian noise  7        26.7573   25.4181   32.1886   27.2011     24.4782   34.9603
Gaussian noise  8        29.3714   25.3123   34.1811   28.4058     24.1735   36.1523
Gaussian noise  Average  29.2344   25.6030   33.8728   30.9266     23.8275   36.4263
The Average values are in bold black.
Table 5. SSIM comparisons of RPCA, AMlogtv, NonRPCA, WNNM−RPCA, TDV, and the proposed method for noise removal in LungCT images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Image    RPCA     AMlogtv   NonRPCA   WNNM−RPCA   TDV      Ours
Impulse noise   1        0.8401   0.2979    0.7965    0.7915      0.2491   0.8709
Impulse noise   2        0.7893   0.3024    0.7830    0.7847      0.2533   0.8524
Impulse noise   3        0.7605   0.3158    0.7402    0.7937      0.2670   0.8437
Impulse noise   4        0.8474   0.3109    0.8179    0.7987      0.2618   0.8750
Impulse noise   5        0.6546   0.2922    0.7103    0.7359      0.2513   0.7766
Impulse noise   6        0.6337   0.2741    0.6918    0.7015      0.2475   0.7741
Impulse noise   7        0.6276   0.2690    0.7090    0.7140      0.2481   0.7159
Impulse noise   8        0.6285   0.2732    0.6860    0.6430      0.2437   0.7080
Impulse noise   Average  0.7227   0.2919    0.7418    0.7453      0.2527   0.8020
Poisson noise   1        0.9589   0.8826    0.9740    0.9766      0.6923   0.9883
Poisson noise   2        0.9620   0.8792    0.9732    0.9776      0.6897   0.9895
Poisson noise   3        0.9816   0.8739    0.9847    0.9840      0.7149   0.9903
Poisson noise   4        0.9654   0.8779    0.9794    0.9774      0.7430   0.9840
Poisson noise   5        0.9479   0.8940    0.9539    0.9610      0.7055   0.9791
Poisson noise   6        0.9573   0.9022    0.9628    0.9664      0.6900   0.9821
Poisson noise   7        0.9682   0.9070    0.9658    0.9734      0.8766   0.9745
Poisson noise   8        0.9600   0.9160    0.9509    0.9645      0.8944   0.9694
Poisson noise   Average  0.9626   0.8916    0.9680    0.9726      0.7508   0.9821
Gaussian noise  1        0.5095   0.3330    0.5307    0.5627      0.2681   0.5862
Gaussian noise  2        0.5185   0.3425    0.5144    0.5737      0.2739   0.5981
Gaussian noise  3        0.6098   0.3639    0.5844    0.5988      0.2879   0.6233
Gaussian noise  4        0.5725   0.3535    0.5310    0.5790      0.2866   0.5832
Gaussian noise  5        0.5293   0.3287    0.5352    0.5428      0.2726   0.5680
Gaussian noise  6        0.4550   0.3071    0.4537    0.5142      0.2696   0.5432
Gaussian noise  7        0.4861   0.3042    0.4412    0.5022      0.2716   0.5115
Gaussian noise  8        0.4883   0.2991    0.4628    0.5005      0.2657   0.5224
Gaussian noise  Average  0.5211   0.3290    0.5067    0.5467      0.2745   0.5669
The Average values are in bold black.
Table 6. FSIM comparisons of RPCA, AMlogtv, NonRPCA, WNNM−RPCA, TDV, and the proposed method for noise removal in LungCT images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Image    RPCA     AMlogtv   NonRPCA   WNNM−RPCA   TDV      Ours
Impulse noise   1        0.9904   0.9630    0.9924    0.9761      0.8230   0.9968
Impulse noise   2        0.9761   0.9656    0.9928    0.9675      0.8219   0.9956
Impulse noise   3        0.9645   0.9709    0.9880    0.9662      0.8086   0.9938
Impulse noise   4        0.9907   0.9657    0.9952    0.9759      0.8328   0.9759
Impulse noise   5        0.9392   0.9652    0.9850    0.9602      0.8573   0.9912
Impulse noise   6        0.9243   0.9651    0.9827    0.9441      0.8681   0.9909
Impulse noise   7        0.9159   0.9665    0.9816    0.9501      0.8771   0.9870
Impulse noise   8        0.9057   0.9732    0.9831    0.9226      0.8658   0.9879
Impulse noise   Average  0.9508   0.9669    0.9876    0.9578      0.8443   0.9925
Poisson noise   1        0.9619   0.9900    0.9919    0.9769      0.8163   0.9954
Poisson noise   2        0.9656   0.9870    0.9918    0.9786      0.8187   0.9956
Poisson noise   3        0.9917   0.9775    0.9938    0.9928      0.8221   0.9957
Poisson noise   4        0.9756   0.9673    0.9939    0.9834      0.8533   0.9945
Poisson noise   5        0.9490   0.9921    0.9899    0.9504      0.8443   0.9944
Poisson noise   6        0.9570   0.9880    0.9905    0.9577      0.7483   0.9943
Poisson noise   7        0.9846   0.9902    0.9898    0.9771      0.9213   0.9917
Poisson noise   8        0.9782   0.9885    0.9844    0.9687      0.9231   0.9894
Poisson noise   Average  0.9704   0.9851    0.9907    0.9732      0.8434   0.9938
Gaussian noise  1        0.9749   0.9944    0.9907    0.9907      0.8388   0.9949
Gaussian noise  2        0.9752   0.9961    0.9882    0.9821      0.8374   0.9965
Gaussian noise  3        0.9916   0.9968    0.9867    0.9890      0.8248   0.9980
Gaussian noise  4        0.9821   0.9770    0.9823    0.9760      0.8547   0.9960
Gaussian noise  5        0.9640   0.9902    0.9840    0.9701      0.8755   0.9909
Gaussian noise  6        0.9698   0.9873    0.9782    0.9601      0.8884   0.9890
Gaussian noise  7        0.9603   0.9848    0.9756    0.9478      0.8935   0.9853
Gaussian noise  8        0.9791   0.9915    0.9893    0.9625      0.8858   0.9926
Gaussian noise  Average  0.9746   0.9897    0.9844    0.9709      0.8623   0.9929
The Average values are in bold black.

4.3. Face Denoising with Illumination Variation

Face images of the same subject under different illumination conditions generally lie in a low-dimensional subspace, while the outliers resulting from lighting variations can be assumed to be sparse [52]. RPCA can therefore balance the uneven brightness of the image surface and retain important facial features while removing noise. Moreover, given enough facial images of the same person, it is possible to reconstruct real facial images. We experimented with the 64 images from the Extended Yale B database for each participant. Each image was converted to a 32,256-dimensional column vector ($y \in \mathbb{R}^{32256 \times 1}$), and the 64 vectors of each subject were stacked as the columns of the data matrix. Since the images were well aligned, L should have a rank of one.
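A minimal sketch of this data layout, under the assumptions above (the variable names and the reuse of the admm_denoise sketch from Section 3 are our own illustration, not the authors' code):

```python
import numpy as np

def build_subject_matrix(face_images):
    """Stack the 64 aligned 192x168 face images of one subject as columns of
    the data matrix D; its clean (low-rank) part should be nearly rank one.
    `face_images` is assumed to be a list of 64 arrays of shape (192, 168)."""
    return np.column_stack([img.reshape(-1).astype(np.float64) for img in face_images])

# D = build_subject_matrix(face_images)            # shape (32256, 64)
# L_hat, S_hat = admm_denoise(D, lam=1.0 / 32256)  # lambda = 1/max(m, n)
```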
In order to demonstrate the effectiveness of the proposed method, we selected facial images of three subjects from the Extended Yale B database and added different noises to each subject to demonstrate the method’s performance, as shown in Figure 13, Figure 14 and Figure 15. By observing these three sets of photos, it can be shown that in comparison to the experimental outcomes of the comparison methods, the proposed method could more effectively remove the occlusion shadow of the image and keep the important features of the face image on the basis of successfully removing the noise, thereby restoring the true face image. The denoising results of RPCA and WNNM-RPCA were similar, and all of them eliminated a significant amount of noise when removing impulse noise (Figure 13). However, the brightness of the image surface was unbalanced, making it difficult to clearly distinguish facial characteristics. The AMlogtv method could not remove noise in facial images. The TDV method removed noise, but the shadow in the image was not removed, which made the face difficult to identify. Although NonRPCA removed most of the shadows of the face, the denoised image was not smooth enough to cause the detail structure to be very blurred. Similarly, even when removing Poisson and Gaussian noise, respectively, these comparison methods’ denoising effects remained poor. For instance, when removing Poisson noise (Figure 14), WNNM-RPCA (Figure 14f) had the worst impact, not only not entirely removing the noise but also leaving half of the facial image in the shadow. Although NonRPCA (Figure 14e) preserved facial features well, it produced some artifacts in the denoising process. When removing Gaussian noise (Figure 15), RPCA (Figure 15c) and AMlogtv (Figure 15d) still had a small amount of noise, and the uneven brightness of the image surface was not balanced, which blocked the facial features of the image. It can be seen that the experimental effect of our method was the best.

4.4. Implementation Computational Cost

To verify the time cost performance of the proposed method, the starfish image with impulse noise and a size of 256 × 256 was chosen as the test image.
As shown in Table 7, the running time of the proposed method was shorter compared with that of NonRPCA and TDV. Although the running time of the proposed method was longer compared with that of RPCA, AMlogtv, and WNNM-RPCA, the denoising effect of the proposed method was significantly higher than that of the above three methods. The proposed method sacrificed a certain degree of computational cost and improved the accuracy of image restoration.
Figure 13. Denoising images of yaleB01 by different methods (impulse noise). (a) Original facial image; (b) noisy image (impulse noise); (c) RPCA; (d) AMlogtv; (e) NonRPCA; (f) WNNM-RPCA; (g) TDV; (h) our method.
Figure 14. Denoising images of yaleB06 by different methods (Poisson noise). (a) Original facial image; (b) noisy image (Poisson noise); (c) RPCA; (d) AMlogtv; (e) NonRPCA; (f) WNNM-RPCA; (g) TDV; (h) our method.
Figure 15. Denoising images of yaleB02 by different methods (Gaussian noise). (a) Original facial image; (b) noisy image (Gaussian noise); (c) RPCA; (d) AMlogtv; (e) NonRPCA; (f) WNNM-RPCA; (g) TDV; (h) our method.

5. Conclusions

In this study, we proposed a low-rank matrix approximation method that combined the nonconvex γ -norm, the l 1 -norm, and the anisotropic TV regularization to remove noise in images. A denoising experiment on standard natural images, clinical medical images, and facial images was carried out in this paper. The results showed that: (1) When compared with five state-of-the-art methods (RPCA, AMlogtv, NonRPCA, WNNM-RPCA, and TDV), our method was superior in terms of clarity, brightness, and detail characterization, and its denoising results were more consistent with subjective vision. (2) In the objective analysis, the quantitative results based on PSNR, SSIM, and FSIM showed that our method had higher scores, and the objective index was obviously improved compared with some methods. In addition, in the experiment of removing facial image noise, our method could highlight facial features more clearly, without artifacts, while removing noise. At the same time, our method had a lower computational cost than NonRPCA and TDV. Although its cost was higher than the other three methods, our denoising effect was better than theirs. To summarize, the proposed RPCA framework based on a nonconvex low-rank approximation and TV norm showed good performance in noise removal and in the retention of important features of images, which provides a reference for the future application of nonconvex functions in the field of image denoising. However, our method can only process grayscale images at present, and it needs to be optimized in terms of computational cost. Therefore, in the future, we will continue to investigate new iterative optimization methods for solving nonconvex optimization problems, as well as extend our research to a tensor framework for solving the problem of color image denoising in complex noise backgrounds.

Author Contributions

Conceptualization, T.C., Q.X. and D.Z.; methodology, T.C., Q.X. and D.Z.; formal analysis, Q.X.; resources, T.C., Q.X., D.Z. and L.S.; data curation, T.C. and Q.X.; writing—original draft preparation, T.C., Q.X. and D.Z.; writing—review and editing, T.C. and Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under grant nos. 62173127 and 61973104, the Scientific and Technological Innovation Leaders in Central Plains (no. 224200510008), the Henan Excellent Young Scientists Fund (no. 212300410036), the Program for Science and Technology Innovation Talents in Universities of Henan Province under grant no. 21HASTIT029, the Innovative Funds Plans of Henan University of Technology under grant no. 2020ZKCJ06, the Zhengzhou Science and Technology Collaborative Innovation Project (no. 21ZZXTCX06), the Cultivation Program of Young Backbone Teachers in Henan University of Technology under grant chentianfei, the Open Fund from Research Platform of Grain Information Processing Center in Henan University of Technology under grant no. KFJJ-2022-003.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

RPCA     Robust principal component analysis
TV       Total variation regularization
AWRR     Adaptive weighted rank reduction
SJSCCA   Structured joint sparse canonical correlation analysis
SR       Sparse representation
LMMSE    Linear minimum mean-squared error estimator
CT       Computed tomography
PSNR     Peak signal-to-noise ratio
SSIM     Structural similarity index measure
FSIM     Feature similarity index measure

References

  1. Diwakar, M.; Singh, P. CT image denoising using multivariate model and its method noise thresholding in non-subsampled shearlet domain. Biomed. Signal Process. Control 2021, 57, 101754. [Google Scholar] [CrossRef]
  2. Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief review of image denoising techniques. Vis. Comput. Ind. Biomed. Art 2019, 2, 7. [Google Scholar] [CrossRef] [PubMed]
  3. Pradeep, S.; Nirmaladevi, P. A review on speckle noise reduction techniques in ultrasound medical images based on spatial domain, transform domain and CNN methods. In IOP Conference Series: Materials Science and Engineering; IOP Conference: Erode, India, 2021; p. 012116. [Google Scholar]
  4. Rakshit, S.; Ghosh, A.; Shankar, B.U. Fast mean filtering technique (FMFT). Pattern Recognit. 2007, 40, 890–897. [Google Scholar] [CrossRef]
  5. Justusson, B.I. Median filtering: Statistical properties. In Two-Dimensional Digital Signal Prcessing II: Transforms and Median Filters; Springer: Berlin/Heidelberg, Germany, 2006; pp. 161–196. [Google Scholar]
  6. Chen, J.; Benesty, J.; Huang, Y.; Doclo, S. New insights into the noise reduction Wiener filter. IEEE Trans. Audio Speech Lang. Process. 2006, 14, 1218–1234. [Google Scholar] [CrossRef]
  7. Özen Acarbay, E.; Özkurt, N. Performance analysis of the speech enhancement application with wavelet transform domain adaptive filters. Int. J. Speech Technol. 2023, 26, 245–258. [Google Scholar] [CrossRef]
  8. Grotevent, M.J.; Yakunin, S.; Bachmann, D.; Romero, C.; Vázquez de Aldana, J.R.; Madi, M.; Calame, M.; Kovalenko, M.V.; Shorubalko, I. Integrated photodetectors for compact Fourier-transform waveguide spectrometers. Nat. Photon. 2023, 17, 59–64. [Google Scholar] [CrossRef]
  9. Scribano, C.; Franchini, G.; Prato, M.; Bertogna, M. DCT-Former: Efficient Self-Attention with Discrete Cosine Transform. J. Sci. Comput. 2023, 94, 67. [Google Scholar] [CrossRef]
  10. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recognit. 2023, 134, 109050. [Google Scholar]
  11. Farmani, B.; Pal, Y.; Pedersen, M. Motion sensor noise attenuation using deep learning. First Break 2023, 41, 45–51. [Google Scholar] [CrossRef]
  12. Wang, S.; Nie, F.; Wang, Z.; Wang, R.; Li, X. Robust Principal Component Analysis via Joint Reconstruction and Projection. IEEE Trans. Neural Netw. Learn. Syst. 2022. early access. [Google Scholar] [CrossRef]
  13. Pokala, P.K.; Hemadri, R.V.; Seelamantula, C.S. Iteratively reweighted minimax-concave penalty minimization for accurate low-rank plus sparse matrix decomposition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8992–9010. [Google Scholar] [CrossRef]
  14. Bayati, F.; Trad, D. 3-D Data Interpolation and Denoising by an Adaptive Weighting Rank-Reduction Method Using Multichannel Singular Spectrum Analysis Algorithm. Sensors 2023, 23, 577. [Google Scholar] [CrossRef]
  15. Wang, L.; Xiao, D.; Hou, W.S. Weighted Schatten p-norm minimization for impulse noise removal with TV regularization and its application to medical images. Biomed. Signal Process. Control 2021, 66, 102123. [Google Scholar] [CrossRef]
  16. Xiu, X.; Yang, Y.; Kong, L. Laplacian regularized robust principal component analysis for process monitoring. J. Process Control 2020, 92, 212–219. [Google Scholar] [CrossRef]
  17. Xiu, X.; Yang, Y.; Kong, L. Data-driven process monitoring using structured joint sparse canonical correlation analysis. IEEE Trans. Circuits Syst. II Express Briefs 2020, 68, 361–365. [Google Scholar] [CrossRef]
  18. Javed, S.; Mahmood, A.; Al-Maadeed, S. Moving object detection in complex scene using spatiotemporal structured-sparse RPCA. IEEE Trans. Image Process. 2018, 28, 1007–1022. [Google Scholar] [CrossRef]
  19. Liu, J.; Xiu, X.; Jiang, X. Manifold constrained joint sparse learning via non-convex regularization. Neurocomputing 2021, 458, 112–126. [Google Scholar] [CrossRef]
  20. Zhong, X.; Xu, L.; Li, Y. A nonconvex relaxation approach for rank minimization problems. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; AAAI: Palo Alto, CA, USA, 2015. [Google Scholar]
  21. Peng, Y.; Suo, J.; Dai, Q.; Xu, W. Reweighted low-rank matrix recovery and its application in image restoration. IEEE Trans. Cybern. 2014, 44, 2418–2430. [Google Scholar] [CrossRef]
  22. Dong, J.; Xue, Z.; Guan, J.; Han, Z.F.; Wang, W. Low rank matrix completion using truncated nuclear norm and sparse regularizer. Signal Process. Image Commun. 2018, 68, 76–87. [Google Scholar] [CrossRef]
  23. Wang, Z.; Liu, Y.; Luo, X. Large-scale affine matrix rank minimization with a novel nonconvex regularizer. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4661–4675. [Google Scholar] [CrossRef]
  24. Yang, Y.; Yang, Z.; Li, J. Novel RPCA with nonconvex logarithm and truncated fraction norms for moving object detection. Digit. Signal Process. 2023, 133, 103892. [Google Scholar] [CrossRef]
  25. Sun, Q.; Xiang, S.; Ye, J. Robust principal component analysis via capped norms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; ACM: New York, NY, USA, 2013. [Google Scholar]
  26. Xiang, S.; Tong, X.; Ye, J. Efficient sparse group feature selection via nonconvex optimization. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 11–19 June 2013; PMLR: Atlanta, GA, USA, 2013. [Google Scholar]
  27. Lv, T.; Pan, Z.; Wei, W. Iterative deep neural networks based on proximal gradient descent for image restoration. PLoS ONE 2022, 17, e0276373. [Google Scholar] [CrossRef] [PubMed]
  28. Kim, D.; Park, D. Element-wise adaptive thresholds for learned iterative shrinkage thresholding algorithms. IEEE Access 2020, 8, 45874–45886. [Google Scholar] [CrossRef]
  29. Jia, X.; Kanzow, C.; Mehlitz, P.; Wachsmuth, G. An augmented Lagrangian method for optimization problems with structured geometric constraints. Math. Program. 2022, 199, 1–51. [Google Scholar] [CrossRef]
  30. Donoho, D.L.; Johnstone, I.M. Threshold selection for wavelet shrinkage of noisy data. In Proceedings of the 16th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Baltimore, MD, USA, 3–6 November 1994; IEEE: Baltimore, MD, USA, 1994. [Google Scholar]
  31. Huang, G.; Bai, M.; Zhao, Q. Erratic noise suppression using iterative structure-oriented space-varying median filtering with sparsity constraint. Geophys. Prospect. 2021, 69, 101–121. [Google Scholar] [CrossRef]
  32. Saito, Y.; Miyata, T. Recovering Texture with a Denoising-Process-Aware LMMSE Filter. Signals 2021, 2, 286–303. [Google Scholar] [CrossRef]
  33. Ghaderpour, E.; Liao, W.; Lamoureux, M.P. Antileakage least-squares spectral analysis for seismic data regularization and random noise attenuation. Geophysics 2018, 83, V157–V170. [Google Scholar] [CrossRef]
  34. Chen, Z.; Zeng, Z.; Shen, H.; Zheng, X.; Dai, P.; Ouyang, P. DN-GAN: Denoising generative adversarial networks for speckle noise reduction in optical coherence tomography images. Biomed. Signal Process. Control 2020, 55, 101632. [Google Scholar] [CrossRef]
  35. Valsesia, D.; Fracastoro, G.; Magli, E. Deep graph-convolutional image denoising. IEEE Trans. Image Process. 2020, 29, 8226–8237. [Google Scholar] [CrossRef]
  36. Wang, S.; Xia, K.; Wang, L. Improved RPCA method via non-convex regularisation for image denoising. IET Signal Process. 2020, 14, 269–277. [Google Scholar] [CrossRef]
  37. Peng, C.; Liu, Y.; Kang, K. Hyperspectral Image Denoising Using Nonconvex Local Low-Rank and Sparse Separation With Spatial-Spectral Total Variation Regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5538617. [Google Scholar] [CrossRef]
  38. Liu, X.; Chen, X.; Li, J. Nonlocal weighted robust principal component analysis for seismic noise attenuation. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1745–1756. [Google Scholar] [CrossRef]
  39. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image denoising using a compressive sensing approach based on regularization constraints. Sensors 2022, 22, 2199. [Google Scholar] [CrossRef]
  40. Qi, G.; Hu, G.; Mazur, N. A novel multi-modality image simultaneous denoising and fusion method based on sparse representation. Computers 2021, 10, 129. [Google Scholar] [CrossRef]
  41. Wright, J.; Ganesh, A.; Rao, S. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Adv. Neural Inf. Process. Syst. 2009, 22, 1–9. [Google Scholar]
  42. Kang, Z.; Peng, C.; Cheng, Q. Robust PCA via nonconvex rank approximation. In Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; IEEE: Orlando, FL, USA, 2015. [Google Scholar]
  43. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  44. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434. [Google Scholar] [CrossRef]
  45. Candès, E.J.; Li, X.; Ma, Y. Robust principal component analysis? J. ACM 2011, 58, 1–37. [Google Scholar] [CrossRef]
  46. Zhang, B.; Zhu, G.; Zhu, Z. Alternating direction method of multipliers for nonconvex log total variation image restoration. Appl. Math. Model. 2023, 114, 338–359. [Google Scholar] [CrossRef]
  47. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208. [Google Scholar] [CrossRef]
  48. Parisotto, S.; Lellmann, J.; Masnou, S. Higher-order total directional variation: Imaging applications. SIAM J. Imaging Sci. 2020, 13, 2063–2104. [Google Scholar] [CrossRef]
  49. Singhadia, A.; Pati, P.S.; Singhal, C. Efficient HEVC encoding to meet bitrate and PSNR requirements using parametric modeling. Circuits Syst. Signal Process. 2022, 41, 4479–4511. [Google Scholar] [CrossRef]
  50. Bakurov, I.; Buzzelli, M.; Schettini, R. Structural similarity index (SSIM) revisited: A data-driven approach. Expert Syst. Appl. 2022, 189, 116087. [Google Scholar] [CrossRef]
  51. Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
  52. Pan, J.; Li, R.; Liu, H. Highlight removal for endoscopic images based on accelerated adaptive non-convex RPCA decomposition. Comput. Methods Programs Biomed. 2023, 228, 107240. [Google Scholar] [CrossRef]
Figure 3. The eight test images: Butterfly, House, Peppers, Lena, Room, Bird, Camera, Starfish.
Figure 4. Denoising images of Butterfly by different methods (impulse noise). (a) Singular value comparison; (b) original clean image; (c) noisy image (impulse noise); (d) RPCA (PSNR = 32.52 dB, SSIM = 0.9635, FSIM = 0.9802); (e) AMlogtv (PSNR = 20.87 dB, SSIM = 0.6670, FSIM = 0.9860); (f) NonRPCA (PSNR = 33.86 dB, SSIM = 0.9318, FSIM = 0.9592); (g) WNNM-RPCA (PSNR = 30.95 dB, SSIM = 0.7651, FSIM = 0.9015); (h) TDV (PSNR = 18.41 dB, SSIM = 0.6336, FSIM = 0.7521); (i) our method (PSNR = 41.64 dB, SSIM = 0.9863, FSIM = 0.9911).
Figure 7. Histogram of evaluation indicators of natural images. (a) Average PSNR; (b) average SSIM; (c) average FSIM.
Table 1. PSNR comparisons of RPCA, AMlogtv, NonRPCA, WNNM-RPCA, TDV, and the proposed method for noise removal in natural test images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Images     RPCA     AMlogtv  NonRPCA  WNNM-RPCA  TDV      Ours
Impulse noise   Butterfly  32.5246  20.8745  33.8365  30.9500    18.4085  41.6482
                House      32.5501  24.5450  35.0507  31.1299    24.3206  41.7090
                Peppers    30.9569  22.1821  31.9384  26.9935    21.8727  34.1587
                Lena       33.1704  22.6855  34.8620  30.4863    22.0347  41.1695
                Room       33.3740  22.9321  35.3441  33.8834    21.4471  41.4602
                Bird       30.4850  21.6382  30.4429  28.0903    20.8948  33.2591
                Camera     30.8366  21.7976  30.7196  30.9059    21.2003  33.4341
                Starfish   30.2611  21.9611  32.7050  31.8355    20.7648  33.7062
                Average    31.7698  22.3270  33.1124  30.5343    21.3679  37.5681
Poisson noise   Butterfly  33.1761  25.2643  37.8854  37.5122    19.3446  41.8914
                House      33.1059  30.1854  35.1456  34.7763    25.9563  37.3046
                Peppers    31.8290  25.5136  32.0457  31.9485    23.3223  33.0805
                Lena       34.3953  26.2260  35.4915  35.3024    23.0865  37.6541
                Room       35.2429  25.7317  35.5204  35.2885    22.2211  37.6050
                Bird       31.0893  26.0181  31.4407  31.3215    21.9071  34.0229
                Camera     30.6940  25.4728  32.7069  32.5430    22.0588  33.8362
                Starfish   27.1146  25.6538  28.4586  28.8304    20.8408  29.3038
                Average    32.0808  26.2582  33.5868  33.4403    22.3421  35.5873
Gaussian noise  Butterfly  33.2683  23.6624  37.4013  38.9586    20.4690  42.1629
                House      33.1171  28.5725  37.9599  38.4267    25.7665  42.1794
                Peppers    31.0499  24.5416  33.4540  34.0774    23.0379  34.6498
                Lena       33.6107  25.2392  37.5569  38.8699    23.0261  42.3696
                Room       33.6502  25.0528  37.4442  37.9283    22.0668  42.1284
                Bird       30.6875  24.7570  32.3656  33.0949    21.7068  33.6199
                Camera     30.7358  24.6608  32.5424  32.5958    21.9053  33.6987
                Starfish   30.7621  24.6617  32.8105  33.5210    20.7648  34.0942
                Average    32.1102  25.1435  35.1919  35.9340    22.3429  38.1129
The Average values are in bold black.
Table 2. SSIM comparisons of RPCA, AMlogtv, NonRPCA, WNNM-RPCA, TDV, and the proposed method for noise removal in natural test images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Images     RPCA     AMlogtv  NonRPCA  WNNM-RPCA  TDV      Ours
Impulse noise   Butterfly  0.9635   0.6670   0.9318   0.7651     0.6336   0.9863
                House      0.9455   0.6096   0.8849   0.6157     0.7367   0.9735
                Peppers    0.9025   0.5965   0.8651   0.6368     0.6742   0.9220
                Lena       0.9514   0.5835   0.9172   0.6705     0.6469   0.9783
                Room       0.9694   0.5389   0.9442   0.7807     0.4661   0.9859
                Bird       0.9086   0.5658   0.8395   0.6933     0.6691   0.9280
                Camera     0.8935   0.5237   0.8128   0.7930     0.6357   0.9174
                Starfish   0.9226   0.5696   0.9301   0.8667     0.5382   0.9403
                Average    0.9321   0.5818   0.8907   0.7277     0.6251   0.9539
Poisson noise   Butterfly  0.9559   0.8563   0.9652   0.9600     0.6689   0.9902
                House      0.9255   0.8427   0.9220   0.9104     0.7622   0.9764
                Peppers    0.8913   0.8196   0.8981   0.8910     0.7094   0.9310
                Lena       0.9366   0.7906   0.9482   0.9420     0.6774   0.9857
                Room       0.9578   0.6895   0.9646   0.9597     0.4969   0.9905
                Bird       0.8936   0.8164   0.8997   0.8920     0.7045   0.9410
                Camera     0.8784   0.7814   0.8803   0.8731     0.6741   0.9241
                Starfish   0.9119   0.7654   0.9086   0.9155     0.5388   0.9324
                Average    0.9188   0.7952   0.9233   0.9180     0.6540   0.9589
Gaussian noise  Butterfly  0.9581   0.8154   0.9589   0.9684     0.7226   0.9848
                House      0.9339   0.8100   0.9256   0.9316     0.7593   0.9706
                Peppers    0.8933   0.7645   0.8948   0.9107     0.7017   0.9258
                Lena       0.9416   0.7445   0.9437   0.9539     0.6737   0.9795
                Room       0.9598   0.6556   0.9618   0.9476     0.4881   0.9851
                Bird       0.8912   0.7688   0.8906   0.9125     0.6947   0.9306
                Camera     0.8812   0.7513   0.8744   0.8705     0.6626   0.9168
                Starfish   0.9175   0.7217   0.9198   0.9321     0.5382   0.9421
                Average    0.9220   0.7539   0.9212   0.9284     0.6551   0.9544
The Average values are in bold black.
Table 3. FSIM comparisons of RPCA, AMlogtv, NonRPCA, WNNM-RPCA, TDV, and the proposed method for noise removal in natural test images (optimal value: red line; suboptimal value: cyan line).
Noise Type      Images     RPCA     AMlogtv  NonRPCA  WNNM-RPCA  TDV      Ours
Impulse noise   Butterfly  0.9802   0.9860   0.9592   0.9015     0.7521   0.9911
                House      0.9751   0.9725   0.9519   0.8593     0.7616   0.9880
                Peppers    0.9551   0.9436   0.9374   0.8510     0.7726   0.9647
                Lena       0.9768   0.9737   0.9596   0.8735     0.7423   0.9888
                Room       0.9827   0.9717   0.9720   0.9307     0.5929   0.9925
                Bird       0.9588   0.9526   0.9333   0.8878     0.7480   0.9666
                Camera     0.9530   0.9579   0.9148   0.9165     0.7002   0.9594
                Starfish   0.9558   0.9583   0.9638   0.9452     0.7004   0.9676
                Average    0.9671   0.9645   0.9490   0.8956     0.7212   0.9773
Poisson noise   Butterfly  0.9755   0.9848   0.9804   0.9792     0.7675   0.9942
                House      0.9667   0.9774   0.9655   0.9611     0.7848   0.9883
                Peppers    0.9495   0.9415   0.9538   0.9513     0.7925   0.9697
                Lena       0.9694   0.9843   0.9746   0.9718     0.7626   0.9929
                Room       0.9782   0.9746   0.9824   0.9806     0.6177   0.9950
                Bird       0.9547   0.9661   0.9576   0.9542     0.7619   0.9709
                Camera     0.9496   0.9578   0.9502   0.9472     0.7118   0.9641
                Starfish   0.9561   0.9682   0.9541   0.9582     0.7005   0.9690
                Average    0.9624   0.9693   0.9648   0.9630     0.7374   0.9805
Gaussian noise  Butterfly  0.9744   0.9835   0.9762   0.9807     0.8007   0.9900
                House      0.9690   0.9813   0.9664   0.9696     0.7821   0.9865
                Peppers    0.9499   0.9590   0.9521   0.9586     0.7899   0.9663
                Lena       0.9698   0.9847   0.9712   0.9766     0.7617   0.9890
                Room       0.9788   0.9762   0.9800   0.9765     0.6106   0.9919
                Bird       0.9527   0.9558   0.9523   0.9599     0.7583   0.9666
                Camera     0.9452   0.9488   0.9429   0.9419     0.7083   0.9596
                Starfish   0.9555   0.9592   0.9573   0.9633     0.7004   0.9687
                Average    0.9619   0.9685   0.9623   0.9658     0.7390   0.9773
The Average values are in bold black.
Table 7. Computational time for different methods.
Methods      Processing time
RPCA         2.1188
AMlogtv      1.5780
NonRPCA      3.6567
WNNM-RPCA    2.3520
TDV          3.5570
Ours         3.4204
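For reference, quantitative scores of the kind listed in Tables 1 and 2 can be recomputed for any clean/denoised image pair with standard tooling. The snippet below is a minimal sketch using scikit-image (assumed to be available); it covers PSNR and SSIM only, since FSIM is not provided by scikit-image and would require a separate implementation, and the dummy images are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(clean, denoised):
    """Return (PSNR in dB, SSIM) for two grayscale images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(clean, denoised, data_range=1.0)
    return psnr, ssim


if __name__ == "__main__":
    # dummy data standing in for a clean test image and a denoised result
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    denoised = np.clip(clean + 0.02 * rng.standard_normal((64, 64)), 0.0, 1.0)
    print("PSNR = %.2f dB, SSIM = %.4f" % evaluate(clean, denoised))
```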