Article

Mixed Fractional-Order and High-Order Adaptive Image Denoising Algorithm Based on Weight Selection Function

Department of Mathematics, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(7), 566; https://doi.org/10.3390/fractalfract7070566
Submission received: 11 June 2023 / Revised: 20 July 2023 / Accepted: 21 July 2023 / Published: 24 July 2023

Abstract

In this paper, a mixed-order image denoising algorithm containing fractional-order and high-order regularization terms is proposed; it effectively suppresses the staircase effect produced by the TV model and its variants while better preserving the edges and details of the image. Adding different regularization penalties in different regions is fundamental to improving the denoising performance of the model. Therefore, a weight selection function is designed using the structure tensor to achieve a more effective selection of regularization terms in different regions. In each iteration, the regularization parameter is adaptively adjusted according to the Morozov discrepancy principle to improve the performance of the algorithm. Based on primal–dual theory, the original algorithm is improved with a predictor–corrector scheme to obtain a more accurate approximate solution while guaranteeing convergence. The effectiveness of the proposed algorithm is demonstrated through simulation experiments.

1. Introduction

The field of digital image processing requires high-resolution, clean images. In most cases, however, images are degraded by noise and other factors, resulting in the loss of valuable image information. Modeling the noise accurately, removing it with efficient algorithms, and guaranteeing the convergence of those algorithms therefore pose significant challenges.
Image denoising can be regarded as an inverse problem, and it is ill-posed: a solution may not exist or may not be unique. To overcome this difficulty and achieve better denoising performance, several classes of methods have been proposed, including regularization methods [1], wavelet transforms [2,3], non-local methods such as NLM [4,5] and BM3D [6,7], deep learning methods such as DnCNN [8], DIVA [9], and BM3D-Net [10], and quantum-based methods such as DeQuIP [11] and QAB [12]. All of these methods contribute to image denoising, but each has limitations. For instance, wavelet-based denoising is prone to introducing new, unrealistic structures or textures. Non-local methods have high computational complexity and are easily influenced by outliers. Deep learning methods require a substantial dataset for model training, leading to long modeling times. Quantum-based methods are still at an early stage of research and face many technical challenges. Regularization methods also have limitations, but they rest on solid mathematical theory, their derivations follow rigorous logical reasoning, and they are easy to implement with short modeling times, so they remain widely used.
In models associated with regularization techniques, two issues are prevalent: how to suppress the staircase effect caused by the total variation (TV) regularization model proposed by Rudin et al. [13], and how to adaptively adjust the regularization parameters based on image features. A considerable amount of research has addressed these problems. Regarding the first issue, researchers have proposed higher-order regularizers [14,15,16] or fractional-order regularizers [17,18,19] to modify the TV regularization term and suppress the staircase effect. However, studies have found that the higher-order TV model (HOTV) does not smooth noise as effectively as the TV model, and although the fractional-order TV model (FOTV) filters noise and preserves detail better, it is challenging to solve. For the second issue, researchers have designed different adaptive regularization parameters based on prior information about the image [20,21]. Few algorithms, however, address both problems simultaneously. Therefore, this paper inherits the advantages of the HOTV and FOTV models and proposes a mixed-order TV (MOTV) image denoising algorithm, together with adaptive regularization parameters designed from image features. To solve the model efficiently while minimizing the number of parameters, an improved primal–dual algorithm based on a predictor–corrector scheme is proposed, and its convergence is proven theoretically.
The main contributions of this paper are summarized below: Firstly, a new MOTV image denoising algorithm is proposed to suppress staircase artifacts while filtering out more noise; the approximate optimal solution of the model is obtained with the improved primal–dual algorithm. Secondly, an edge-weighted selection function is constructed using the structure tensor, which preserves more of the edge details of the image. Finally, an adaptive regularization parameter is designed using the Morozov discrepancy principle to improve the performance of the algorithm.
The rest of this article is organized as follows: Section 2 briefly introduces the relevant background and models. In Section 3, a new model is proposed, the derivation of its solution is given, an adaptive regularization parameter is set, and the convergence of the algorithm is proven. In Section 4, simulation experiments are carried out to demonstrate the effectiveness of the proposed algorithm. Section 5 provides a brief summary.

2. Related Works

2.1. Fractional-Order Variation Model

The fractional-order derivative, as a generalization of the integer-order derivative, can preserve more image details. It has therefore been studied extensively in recent decades [22,23]. First, the definition of fractional-order derivatives and their related properties are reviewed here.
The Euclidean space $\mathbb{R}^{m \times n}$ of an image $u$ of size $m \times n$ is denoted by $V$. To facilitate interpretation, the image domain is discretized into a matrix grid $\{(x_i, y_j) : 1 \le i \le m,\ 1 \le j \le n\}$, where $(i, j)$ is the position of a pixel in the matrix; consequently, the image $u \in V$ can be written as $u_{i,j} = u(x_i, y_j)$. Various definitions of fractional-order derivatives exist in the literature [24], including the Grünwald–Letnikov (G-L) definition, the Riemann–Liouville (R-L) definition, the Caputo definition, and others. In digital image processing, the G-L definition is the most popular; it is also employed in this paper, and the associated fractional-order gradient is defined as
$$\nabla^\alpha u = (\nabla_1^\alpha u, \nabla_2^\alpha u)^T, \quad \alpha > 0,$$
where $\nabla_1^\alpha u, \nabla_2^\alpha u \in V$ denote the derivatives of the image $u$ in the horizontal and vertical directions, respectively, which can be discretized and approximated as
$$(\nabla_1^\alpha u)_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^\alpha u_{i-k,j}, \qquad (\nabla_2^\alpha u)_{i,j} = \sum_{k=0}^{K-1} (-1)^k C_k^\alpha u_{i,j-k}.$$
Here $K \ge 3$ is the number of adjacent pixels used to approximate the fractional-order derivative at each pixel, and the coefficients $\{C_k^\alpha\}_{k=0}^{K-1}$ are defined as $C_k^\alpha = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha+1-k)}$, where $\Gamma(\cdot)$ is the Gamma function. The FOTV model can then be defined as
$$\min_u \frac{\lambda}{2}\|Hu - f\|_2^2 + \|\nabla^\alpha u\|_1,$$
where
$$\|\nabla^\alpha u\|_1 := \sum_i \sum_j |(\nabla^\alpha u)_{i,j}|, \qquad |(\nabla^\alpha u)_{i,j}| = \sqrt{(\nabla_1^\alpha u)_{i,j}^2 + (\nabla_2^\alpha u)_{i,j}^2}.$$
Here, $H$ is a blur operator, $u$ is the original image, and $f$ is the noisy image. According to Ref. [25], an important relation can be derived: $\langle \nabla^\alpha u, p \rangle = \langle u, (-1)^\alpha \operatorname{div}^\alpha p \rangle$, where $p = (p^1, p^2) \in Z$ with $Z = V \times V$, and $p$ is a dual variable. Using this relationship, the adjoint divergence of the discrete gradient (1) is obtained as
$$(\operatorname{div}^\alpha p)_{i,j} = (-1)^\alpha \sum_{k=0}^{K-1} (-1)^k C_k^\alpha \left( p_{i+k,j}^1 + p_{i,j+k}^2 \right).$$
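As a concrete illustration of the G-L discretization above, the following is a minimal NumPy sketch; the helper names (gl_coeffs, frac_grad) and the zero-padding boundary handling are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.special import gamma

def gl_coeffs(alpha, K):
    # (-1)^k * C_k^alpha with C_k^alpha = Gamma(a+1) / (Gamma(k+1) * Gamma(a+1-k))
    k = np.arange(K)
    return (-1.0) ** k * gamma(alpha + 1) / (gamma(k + 1) * gamma(alpha + 1 - k))

def frac_grad(u, alpha, K=3):
    # Horizontal and vertical G-L fractional derivatives, zero-padded at the border.
    c = gl_coeffs(alpha, K)
    d1 = np.zeros_like(u, dtype=float)
    d2 = np.zeros_like(u, dtype=float)
    for k in range(K):
        d1[k:, :] += c[k] * u[:u.shape[0] - k, :]   # contributes u_{i-k, j}
        d2[:, k:] += c[k] * u[:, :u.shape[1] - k]   # contributes u_{i, j-k}
    return d1, d2
```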

2.2. High-Order Variation Model

Because the TV model is defined on the space of functions of bounded variation (BV), it can reconstruct discontinuities such as edges. Accordingly, HOTV models, which can model noise better, have been proposed by many scholars to suppress the staircase effect and achieve improved results [14,15,16,26]. The HOTV model is generally defined as
$$\min_u \|Hu - f\|_2^2 + \beta R(u),$$
where β is a predefined parameter and R ( u ) is a high-order regularization term.

2.3. Structural Tensor

In mathematics, the structure tensor, also known as the second moment matrix, is a matrix derived from the gradient of a function. It can summarize the main directions of the gradients in a specified neighborhood of a given point, as well as the degree of coherence in those directions. As a result, the structure tensor finds extensive application in the fields of computer vision and image processing [27,28]. In image processing, it excels at distinguishing between edge regions, flat regions, and corner regions. The structure tensor for the image u can be expressed as
$$S_T = \begin{pmatrix} u_x^2 & u_x u_y \\ u_y u_x & u_y^2 \end{pmatrix}.$$
Here, $u_x$ and $u_y$ denote the gradients of the image $u$ in the $x$ and $y$ directions, respectively. If $S_K$ and $S_H$ denote the determinant and trace of the matrix $S_T$, respectively, then the judgment conditions for each region can be summarized as follows:
  • flat regions: $S_H = 0$;
  • edge regions: $S_H > 0$ and $S_K = 0$;
  • corner regions: $S_H > 0$ and $S_K > 0$.
In Figure 1, the white portions of subgraphs (a–c), respectively, represent the flat regions, edge regions, and corner regions of the Lena image. However, in practice, the values of S K and S H are not ideal, so approximate judgment is usually used.
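A rough sketch of such approximate judgment, where the smoothing scale sigma and the thresholds eps_h, eps_k are assumed values rather than ones given in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_masks(u, sigma=1.0, eps_h=1e-3, eps_k=1e-6):
    # Smoothed structure tensor entries, then trace/determinant tests.
    uy, ux = np.gradient(u.astype(float))
    a11 = gaussian_filter(ux * ux, sigma)
    a12 = gaussian_filter(ux * uy, sigma)
    a22 = gaussian_filter(uy * uy, sigma)
    s_h = a11 + a22               # trace S_H
    s_k = a11 * a22 - a12 ** 2    # determinant S_K
    flat = s_h <= eps_h
    edge = (s_h > eps_h) & (s_k <= eps_k)
    corner = (s_h > eps_h) & (s_k > eps_k)
    return flat, edge, corner
```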

2.4. Primal–Dual Algorithm with Guaranteed Convergence

The standard Euclidean inner product and norm on the finite-dimensional vector spaces $X$ and $Y$ are written $\langle \cdot, \cdot \rangle$ and $\|\cdot\| = \langle \cdot, \cdot \rangle^{1/2}$, respectively. If $A : X \to Y$ is a continuous linear operator, its induced norm is
$$L = \|A\| = \max\{\|Ax\| : x \in X,\ \|x\| \le 1\}.$$
According to Fenchel duality theory [29], the generalized saddle-point problem can be expressed as
$$\min_{x \in X} \max_{y \in Y} \langle Ax, y \rangle + G(x) - F^*(y),$$
where $G$ and $F$ are lower semi-continuous functions and $F^*$ is the conjugate function of $F$.
It is well known that the saddle-point problem (9) is the primal–dual formulation of the following primal optimization problem (10):
$$\min_{x \in X} F(Ax) + G(x).$$
From the primal–dual relationship, the dual problem associated with the primal problem can then be derived as
$$\max_{y \in Y} -G^*(-A^* y) - F^*(y).$$
Chambolle and Pock proposed a primal–dual algorithm for solving saddle-point problems of the form (9) and proved its convergence [30]. Fixing the primal variable $x$ and setting the first-order derivative with respect to the dual variable $y$ to zero yields the update
$$y = (I + \partial F^*)^{-1}(y + Ax).$$
Similarly, the update for the primal variable $x$ is
$$x = (I + \partial G)^{-1}(x - A^* y),$$
where $\partial F^*$ and $\partial G$ denote the subgradients of $F^*$ and $G$, respectively. The operators $(I + \partial F^*)^{-1}$ and $(I + \partial G)^{-1}$ are known as resolvent operators. When the resolvent operator is defined through the minimization of an energy functional, it can be expressed as
$$x = (I + \tau \partial F)^{-1}(y) = \arg\min_x \frac{\|x - y\|_2^2}{2\tau} + F(x).$$
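As a worked instance of the resolvent definition, the resolvent of the quadratic $F(x) = \frac{\mu}{2}\|x - f\|_2^2$ has a closed form that reappears in the $u$-update of Section 3.2; the function name and the parameter $\mu$ below are illustrative assumptions:

```python
import numpy as np

def prox_quadratic(v, f, tau, mu):
    # argmin_x ||x - v||^2 / (2*tau) + (mu/2) * ||x - f||^2
    return (v + tau * mu * f) / (1.0 + tau * mu)
```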
See [31] for details on subgradients and resolvent operators. The primal–dual algorithm proposed in [30] can then be summarized as follows (a code sketch is given after the scheme):
  • Initialization: choose $t_0, s_0 > 0$ with $t_0 s_0 L^2 \le 1$, $(x^0, y^0) \in X \times Y$, and set $\bar{x}^0 = x^0$.
  • Iteration ($n \ge 0$): update
$$y^{n+1} = (I + s_n \partial F^*)^{-1}(y^n + s_n A \bar{x}^n), \qquad x^{n+1} = (I + t_n \partial G)^{-1}(x^n - t_n A^* y^{n+1}),$$
$$\theta_n = 1/\sqrt{1 + 2\gamma t_n}, \qquad t_{n+1} = \theta_n t_n, \qquad s_{n+1} = s_n / \theta_n, \qquad \bar{x}^{n+1} = x^{n+1} + \theta_n (x^{n+1} - x^n).$$
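For concreteness, here is a minimal sketch of this accelerated iteration applied to ROF-type TV denoising, $\min_x \|\nabla x\|_1 + \frac{\mu}{2}\|x - f\|_2^2$; the parameter $\mu$, the step sizes, and the helper names are assumptions, not values from [30]:

```python
import numpy as np

def grad(x):
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (discrete divergence), so that <grad x, p> = -<x, div p>.
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]; d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]; d[:, 1:] -= py[:, :-1]
    return d

def cp_rof(f, mu=10.0, iters=200):
    f = np.asarray(f, dtype=float)
    L2 = 8.0                                  # ||grad||^2 <= 8 for this stencil
    t = 0.02
    s = 1.0 / (t * L2)                        # so that t0 * s0 * L^2 = 1
    x = f.copy(); x_bar = x.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(x_bar)
        px, py = px + s * gx, py + s * gy
        nrm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / nrm, py / nrm           # resolvent of F*: projection onto unit balls
        x_old = x
        x = (x + t * div(px, py) + t * mu * f) / (1.0 + t * mu)  # resolvent of G
        theta = 1.0 / np.sqrt(1.0 + 2.0 * mu * t)  # gamma = mu (strong convexity of G)
        t, s = theta * t, s / theta
        x_bar = x + theta * (x - x_old)
    return x
```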

3. Model Proposal and Solution

3.1. Proposed Model

In this study, the following image denoising problem with additive noise is considered:
$$f = Hu + \eta,$$
where $\eta$ represents Gaussian noise. TV regularization is a classical method for restoring high-quality images while guaranteeing the stability of the solution. Models (3) and (6) are the classical FOTV and HOTV models, respectively, used for removing Gaussian noise. Building upon the strengths of both models, a mixed-order image denoising model is proposed in this paper, described as follows:
$$\min_u \frac{\lambda}{2}\|Hu - f\|_2^2 + \xi_1 \|\nabla^\alpha u\|_1 + \xi_2 \|\nabla^2 u\|_2 + \beta \|u\|_1.$$
Here, $\lambda$ is the regularization parameter, $\beta$ is a balance parameter, and $\xi_i$ ($i = 1, 2$) is a weight selection parameter. The purpose of introducing this weight selection parameter is to identify different regions of the image via the structure tensor and to apply different regularization terms in different regions. To this end, $\xi_i$ is defined as a function of pixel characteristics; in other words, the intensity of regularization varies between pixels in flat regions, edge regions, and corner regions. At present, most researchers use a weight function that is the inverse function of the image gradient [29,32,33], but this paper takes a different approach.
Next, a matrix related to the structure tensor is defined:
$$K_\sigma(\nabla u(x, y)) = G_\sigma * (\nabla u \, \nabla u^T) = \begin{pmatrix} G_\sigma * u_x^2 & G_\sigma * u_x u_y \\ G_\sigma * u_y u_x & G_\sigma * u_y^2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},$$
where $G_\sigma = (\sigma \sqrt{2\pi})^{-1} \exp(-|x|^2 / 2\sigma^2)$ is a Gaussian filter with standard deviation $\sigma$.
The matrix $K_\sigma$ has two orthonormal eigenvectors, and its eigenvalues $\lambda_1, \lambda_2$ describe the average contrast in the corresponding eigenvector directions. Different regions exhibit different eigenvalue relationships: in edge regions, $\lambda_1 \gg \lambda_2 \approx 0$; in flat regions, $\lambda_1 \approx \lambda_2 \approx 0$; and in corner regions, $\lambda_1 \ge \lambda_2 \gg 0$ [34]. Hence, the characteristics of each pixel can be measured through the difference between the eigenvalues. The edge coherence [35] based on the eigenvalues is defined as
$$C(u) = (\lambda_1 - \lambda_2)^2 = (a_{11} - a_{22})^2 + 4 a_{12}^2.$$
Furthermore, the weight selection function $\xi_i$ can be defined as
$$\xi_i = \begin{cases} 1 - \exp\left[-1 / (C(u)/\varpi)^2\right], & i = 1, \\ \exp\left[-1 / (C(u)/\varpi)^2\right], & i = 2, \end{cases}$$
where $\varpi \in [10^{-4}, 10^{-2}]$ is a comparison parameter. The role of $\varpi$ is that of a threshold used to distinguish the various regions. Note that, to prevent the impact of abnormal values, $C(u)$ should be normalized before $\xi_i$ is calculated.
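A compact sketch of this weight computation (the reconstructed exponent $-1/(C/\varpi)^2$ above matches the limiting behavior described next; the smoothing scale sigma and the helper name weights are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weights(u, sigma=1.0, varpi=1e-3):
    # Edge coherence C(u) = (a11 - a22)^2 + 4 a12^2 from the smoothed structure tensor.
    uy, ux = np.gradient(u.astype(float))
    a11 = gaussian_filter(ux * ux, sigma)
    a12 = gaussian_filter(ux * uy, sigma)
    a22 = gaussian_filter(uy * uy, sigma)
    c = (a11 - a22) ** 2 + 4.0 * a12 ** 2
    c = c / (c.max() + 1e-12)                        # normalize against outliers
    xi2 = np.exp(-1.0 / np.maximum(c / varpi, 1e-12) ** 2)
    return 1.0 - xi2, xi2                            # (xi_1, xi_2)
```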
Obviously, $\xi_i$ depends on the edge coherence of the pixel features. In edge and corner regions, $C \to 1$, so $\xi_1 \to 0$ and $\xi_2 \to 1$; the term $\xi_2 \|\nabla^2 u\|_2 + \beta \|u\|_1$ is then selected as the regularization term of the model, which better preserves the edge details of the image. In flat regions, $C \to 0$, so $\xi_2 \to 0$ and $\xi_1 \to 1$; the term $\xi_1 \|\nabla^\alpha u\|_1 + \beta \|u\|_1$ is then selected, which reduces noise and artifacts while reinforcing the regularization of the flat regions. By designing the selection function $\xi_i$ and matching it within the model, image edges and fine details can be effectively preserved, ultimately recovering high-quality images. The corresponding matching model is
$$\min_u \Phi(u, \lambda) \equiv \frac{\lambda}{2}\|Hu - f\|_2^2 + \xi_1 \|\nabla^\alpha u\|_1 + \xi_2 \|\nabla^2 u\|_2 + \beta \|u\|_1.$$
Solving the MOTV model is equivalent to solving the saddle point of the primal–dual problem. The iterative steps and convergence analysis of the model are introduced in detail below.

3.2. Numerical Algorithm

The MOTV model is a variant of the TV model. The classical Euler–Lagrange method can be used to solve the TV model and its variants; however, the model contains non-differentiable terms, which makes that calculation complex. To overcome this difficulty, the primal–dual algorithm is employed. Let $P$, $Y$, and $W$ denote the conjugate sets of the dual variables $p$, $y$, and $w$, respectively. Let $\delta(u) = \|\nabla^\alpha u\|_1$; its conjugate function then satisfies
$$\delta_P^*(p) = \begin{cases} 0, & p \in P, \\ +\infty, & p \notin P, \end{cases}$$
where $P = \{p \in Z : \|p\|_\infty = \max_{ij} |p_{i,j}| \le 1\}$ denotes the conjugate set. Similarly, let $\zeta(u) = \|\nabla^2 u\|_2$ and $\varsigma(u) = \|u\|_1$; the associated conjugate functions $\zeta_Y^*(y)$, $\varsigma_W^*(w)$ and conjugate sets $Y$, $W$ have the same properties.
Applying Fenchel duality theory to the primal problem (18), a general primal–dual model of the MOTV model is obtained:
$$\min_u \max_{p, y, w} \frac{\lambda}{2}\|Hu - f\|_2^2 + \xi_1 \langle p, \nabla^\alpha u \rangle + \xi_2 \langle y, \nabla^2 u \rangle + \beta \langle w, u \rangle - \xi_1 \delta_P^*(p) - \xi_2 \zeta_Y^*(y) - \beta \varsigma_W^*(w),$$
where $y$ and $w$ have the same properties as $p$. Similarly, the corresponding dual problem is
$$\max_{p, y, w} \left\langle \xi_1 (\nabla^\alpha)^T p + \xi_2 (\nabla^2)^T y + \beta w, f \right\rangle - \frac{\lambda}{2} \left\| \xi_1 (\nabla^\alpha)^T p + \xi_2 (\nabla^2)^T y + \beta w \right\|_2^2 - \xi_1 \delta_P^*(p) - \xi_2 \zeta_Y^*(y) - \beta \varsigma_W^*(w).$$
The discretized $MTV$ of the image $u$ is also the Legendre–Fenchel conjugate of $\delta$, $\zeta$, and $\varsigma$ [36,37], that is,
$$MTV(u) = \max_{p, y, w} \left\{ \xi_1 \langle p, \nabla^\alpha u \rangle + \xi_2 \langle y, \nabla^2 u \rangle + \beta \langle w, u \rangle - \xi_1 \delta_P^*(p) - \xi_2 \zeta_Y^*(y) - \beta \varsigma_W^*(w) \right\} = \max_{p \in P, y \in Y, w \in W} \left\{ \xi_1 \langle p, \nabla^\alpha u \rangle + \xi_2 \langle y, \nabla^2 u \rangle + \beta \langle w, u \rangle \right\}.$$
Using Legendre–Fenchel duality to represent $MTV(u)$, (18) can be rewritten as
$$\min_u \max_{p, y, w} L(u, p, y, w; \lambda),$$
where $L(u, p, y, w; \lambda) = \frac{\lambda}{2}\|Hu - f\|_2^2 + \xi_1 \langle p, \nabla^\alpha u \rangle + \xi_2 \langle y, \nabla^2 u \rangle + \beta \langle w, u \rangle$. Let $G(u) = \frac{\lambda}{2}\|Hu - f\|_2^2$; then the following equalities hold [38]:
$$p^{k+1} = (s \xi_1 \partial \delta^* + I)^{-1} \tilde{p}, \qquad \tilde{p} = p^k + s \xi_1 \nabla^\alpha u^k,$$
$$y^{k+1} = (s \xi_2 \partial \zeta^* + I)^{-1} \tilde{y}, \qquad \tilde{y} = y^k + s \xi_2 \nabla^2 u^k,$$
$$w^{k+1} = (s \beta \partial \varsigma^* + I)^{-1} \tilde{w}, \qquad \tilde{w} = w^k + s \beta u^k,$$
$$u^{k+1} = (t \partial G + I)^{-1} \tilde{u}, \qquad \tilde{u} = u^k - t \xi_1 (\nabla^\alpha)^T p^{k+1} - t \xi_2 (\nabla^2)^T y^{k+1} - t \beta w^{k+1}.$$
These updates involve four resolvent operators, namely $(\partial \delta^* + I)^{-1}$, $(\partial \zeta^* + I)^{-1}$, $(\partial \varsigma^* + I)^{-1}$, and $(\partial G + I)^{-1}$, which are difficult to evaluate directly. Therefore, the saddle-point method can be employed to solve the above equations equivalently and reduce the computational complexity. According to the literature [39], the function $L(u, p, y, w; \lambda)$ has a saddle point $(u^*, p^*, y^*, w^*)$, characterized by Lemma 1.
Lemma 1.
If $(u^*, p^*, y^*, w^*)$ is a point of the function $L(u, p, y, w; \lambda)$ satisfying the inequalities
$$L(u^*, p, y, w; \lambda) \le L(u^*, p^*, y^*, w^*; \lambda) \le L(u, p^*, y^*, w^*; \lambda),$$
then this point is a saddle point of the function $L(u, p, y, w; \lambda)$.
In connection with the literature [40], the maximization and minimization in (22) can be interchanged, i.e.,
$$\min_u \max_{p, y, w} L(u, p, y, w; \lambda) = L(u^*, p^*, y^*, w^*; \lambda) = \max_{p, y, w} \min_u L(u, p, y, w; \lambda).$$
This further reflects that the minimization of the primal problem can be carried out by finding the saddle point of the primal–dual problem, where the dual variables are updated alternately with the primal variable [41,42]. If $(u^*, p^*, y^*, w^*)$ is the optimal solution of (22), then $(u^*, p^*, y^*, w^*)$ is a saddle point of (22) [39], i.e.,
$$u^* = \arg\min_u L(u, p^*, y^*, w^*; \lambda),$$
$$p^* = \arg\max_p L(u^*, p, y^*, w^*; \lambda),$$
$$y^* = \arg\max_y L(u^*, p^*, y, w^*; \lambda),$$
$$w^* = \arg\max_w L(u^*, p^*, y^*, w; \lambda).$$
Combining (22) and (27) yields
$$\lambda H^T (Hu^* - f) + \xi_1 (\nabla^\alpha)^T p^* + \xi_2 (\nabla^2)^T y^* + \beta w^* = 0.$$
According to the Karush–Kuhn–Tucker (KKT) necessary conditions of dual optimality, there exist Lagrange multipliers $\mu_1^{i,j}, \mu_2^{i,j}, \mu_3^{i,j} \ge 0$ such that
$$-(\nabla^\alpha u^*)_{i,j} + \mu_1^{i,j} p_{i,j}^* = 0, \qquad -(\nabla^2 u^*)_{i,j} + \mu_2^{i,j} y_{i,j}^* = 0, \qquad -u_{i,j}^* + \mu_3^{i,j} w_{i,j}^* = 0,$$
where either $\mu_1^{i,j} > 0$ with $|p_{i,j}^*| = 1$ or $\mu_1^{i,j} = 0$ with $|p_{i,j}^*| < 1$; in either case $\mu_1^{i,j} = |(\nabla^\alpha u^*)_{i,j}|$. The other two sets of conditions are analogous. Thus, if $p^*$, $y^*$, $w^*$ solve (28)–(30), respectively, then
$$-(\nabla^\alpha u^*)_{i,j} + |(\nabla^\alpha u^*)_{i,j}|\, p_{i,j}^* = 0, \qquad -(\nabla^2 u^*)_{i,j} + |(\nabla^2 u^*)_{i,j}|\, y_{i,j}^* = 0, \qquad -u_{i,j}^* + |u_{i,j}^*|\, w_{i,j}^* = 0.$$
Therefore, the following lemma can be obtained:
Lemma 2.
Suppose $p$, $y$, $w$ are dual variables. If $(u^*, p^*, y^*, w^*)$ is a saddle point of the function $L(u, p, y, w; \lambda)$, then (31) and (32) hold.
Plenty of methods are available for computing saddle points through maximization and minimization [37,43,44]. This paper uses the primal–dual hybrid gradient method, which improves on the original primal–dual algorithm, to solve for the saddle point. Calculating the primal and dual variables alternately with this method gives
$$p^{k+1} = \arg\max_p L(u, p, y, w; \lambda) - \frac{1}{2s}\|p - p^k\|_2^2,$$
$$y^{k+1} = \arg\max_y L(u, p, y, w; \lambda) - \frac{1}{2s}\|y - y^k\|_2^2,$$
$$w^{k+1} = \arg\max_w L(u, p, y, w; \lambda) - \frac{1}{2s}\|w - w^k\|_2^2,$$
$$u^{k+1} = \arg\min_u L(u, p, y, w; \lambda) + \frac{1}{2t}\|u - u^k\|_2^2.$$
Here, $s, t > 0$ are the update steps for the dual and primal variables, respectively, and satisfy $s \cdot t \le 1/16$. Note that the regularization parameter $\lambda$ is not involved in the solution of the dual variables $p$, $y$, $w$. Before discussing the solution of the subproblems (33)–(36), the projection of $q$ onto $P$ is first defined as
$$\rho_P(q) = \arg\min_{p \in P} \|p - q\|_2^2.$$
The Lagrange method can be used to solve (37); the corresponding Lagrangian function is
$$\|p - q\|_2^2 + \sum_{i,j} \gamma_{i,j} \left( |p_{i,j}|^2 - 1 \right).$$
Under the constraint $|p_{i,j}|^2 \le 1$, the Lagrange multipliers satisfy $\gamma_{i,j} \ge 0$. By the complementarity condition, either $\gamma_{i,j} = 0$ with $|p_{i,j}| < 1$ and $|q_{i,j}| < 1$, or $\gamma_{i,j} > 0$ with $|p_{i,j}| = 1$ and $|q_{i,j}| \ge 1$. In the first case, $p_{i,j} = q_{i,j}$. In the other case, the KKT condition yields
$$p_{i,j} - q_{i,j} + \gamma_{i,j} p_{i,j} = 0, \quad \forall i, j.$$
Furthermore, $\gamma_{i,j} = |q_{i,j}| - 1$, so $p_{i,j} = q_{i,j} / |q_{i,j}|$. The expression for $\rho_P(q)$ is therefore
$$\rho_P(q) = \frac{q}{\max(1, |q|)}.$$
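In code, this pointwise projection is a one-liner (a sketch; the helper name proj_ball is an assumption):

```python
import numpy as np

def proj_ball(q1, q2):
    # Pointwise projection of a two-component field onto {|p| <= 1}, cf. (38).
    nrm = np.maximum(1.0, np.hypot(q1, q2))
    return q1 / nrm, q2 / nrm
```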
In what follows, the solution to each subproblem is discussed in detail. For the $p$-subproblem (33),
$$p^{k+1} = \arg\max_p \xi_1 \langle p - p^k, \nabla^\alpha u \rangle - \frac{1}{2s}\|p - p^k\|_2^2 = \arg\max_p \frac{1}{2s} \left\{ 2 \langle p - p^k, s \xi_1 \nabla^\alpha u \rangle - \|p - p^k\|_2^2 \right\} = \arg\min_p \left\| p - (p^k + s \xi_1 \nabla^\alpha u) \right\|_2^2.$$
This minimization problem is equivalent to computing the projection of $p^k + s \xi_1 \nabla^\alpha u$ onto the set $P$. Therefore,
$$p^{k+1} = \rho_P(p^k + s \xi_1 \nabla^\alpha u).$$
Similarly, $y^{k+1}$ in (34) and $w^{k+1}$ in (35) can be computed by
$$y^{k+1} = \rho_Y(y^k + s \xi_2 \nabla^2 u),$$
$$w^{k+1} = \rho_W(w^k + s \beta u).$$
The $u$-subproblem (36) is a quadratic function of the image $u$ and therefore has a closed-form solution. The solution process can be expressed as
$$u^{k+1} = \arg\min_u \xi_1 \langle p, \nabla^\alpha (u - u^k) \rangle + \frac{\lambda}{2}\|Hu - f\|_2^2 + \xi_2 \langle y, \nabla^2 (u - u^k) \rangle + \beta \langle w, u - u^k \rangle + \frac{1}{2t}\|u - u^k\|_2^2;$$
hence, $u^{k+1}$ is easily computed as
$$u^{k+1} = (t \lambda H^T H + I)^{-1}(t \lambda H^T f + \tilde{u}),$$
where $\tilde{u} = u^k - t \xi_1 (\nabla^\alpha)^T p - t \xi_2 (\nabla^2)^T y - t \beta w$. The solutions to the subproblems are summarized as follows:
$$p^{k+1} = (s \xi_1 \partial \delta^* + I)^{-1} \tilde{p} \;\Longleftrightarrow\; p^{k+1} = \frac{p^k + s \xi_1 \nabla^\alpha u^k}{\max(1, |p^k + s \xi_1 \nabla^\alpha u^k|)},$$
$$y^{k+1} = (s \xi_2 \partial \zeta^* + I)^{-1} \tilde{y} \;\Longleftrightarrow\; y^{k+1} = \frac{y^k + s \xi_2 \nabla^2 u^k}{\max(1, |y^k + s \xi_2 \nabla^2 u^k|)},$$
$$w^{k+1} = (s \beta \partial \varsigma^* + I)^{-1} \tilde{w} \;\Longleftrightarrow\; w^{k+1} = \frac{w^k + s \beta u^k}{\max(1, |w^k + s \beta u^k|)},$$
$$u^{k+1} = (t \partial G + I)^{-1} \tilde{u} \;\Longleftrightarrow\; u^{k+1} = (t \lambda H^T H + I)^{-1}(t \lambda H^T f + \tilde{u}).$$
To obtain more accurate computational results, the primal–dual algorithm is further enhanced with the predictor–corrector method [43], which also facilitates the convergence analysis of the algorithm. In summary, the solution process is given in Algorithm 1.
Algorithm 1: Primal–Dual Algorithm for MOTV Image Restoration
Require: $f$, $H$, $\beta$.
1: Initialize $u$, $p$, $y$, and $w$. Set the step sizes $s$ and $t$.
2: while stopping criterion is not satisfied do
3:   $C(u) = (\lambda_1 - \lambda_2)^2 = (a_{11} - a_{22})^2 + 4 a_{12}^2$;
4:   $\xi_1(u) = 1 - \exp[-1/(C(u)/\varpi)^2]$;
5:   $\xi_2(u) = \exp[-1/(C(u)/\varpi)^2]$;
6:   $p^{k+\frac{1}{2}} = \dfrac{p^k + s \xi_1 \nabla^\alpha u^k}{\max(1, |p^k + s \xi_1 \nabla^\alpha u^k|)}$;
7:   $y^{k+\frac{1}{2}} = \dfrac{y^k + s \xi_2 \nabla^2 u^k}{\max(1, |y^k + s \xi_2 \nabla^2 u^k|)}$;
8:   $w^{k+\frac{1}{2}} = \dfrac{w^k + s \beta u^k}{\max(1, |w^k + s \beta u^k|)}$;
9:   $\tilde{u} = u^k - t \xi_1 (\nabla^\alpha)^T p^{k+\frac{1}{2}} - t \xi_2 (\nabla^2)^T y^{k+\frac{1}{2}} - t \beta w^{k+\frac{1}{2}}$;
10:  $u^{k+1} = (t \lambda H^T H + I)^{-1}(t \lambda H^T f + \tilde{u})$;
11:  $p^{k+1} = \dfrac{p^{k+\frac{1}{2}} + s \xi_1 \nabla^\alpha u^{k+1}}{\max(1, |p^{k+\frac{1}{2}} + s \xi_1 \nabla^\alpha u^{k+1}|)}$;
12:  $y^{k+1} = \dfrac{y^{k+\frac{1}{2}} + s \xi_2 \nabla^2 u^{k+1}}{\max(1, |y^{k+\frac{1}{2}} + s \xi_2 \nabla^2 u^{k+1}|)}$;
13:  $w^{k+1} = \dfrac{w^{k+\frac{1}{2}} + s \beta u^{k+1}}{\max(1, |w^{k+\frac{1}{2}} + s \beta u^{k+1}|)}$;
14: end while
15: return $u = u^{k+1}$.
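Pulling the pieces together, here is a condensed, runnable sketch of Algorithm 1 for pure denoising ($H = I$, so the $u$-update in step 10 reduces to $(t\lambda f + \tilde{u})/(t\lambda + 1)$); it reuses the hypothetical helpers gl_coeffs, frac_grad, weights, and proj_ball sketched earlier, and all parameter values are assumptions rather than the authors' settings:

```python
import numpy as np

def d2_axis0(u):
    # Second difference along rows: u_{i+1,j} - 2 u_{i,j} + u_{i-1,j} (zero at the border).
    r = np.zeros_like(u)
    r[1:-1, :] = u[2:, :] - 2.0 * u[1:-1, :] + u[:-2, :]
    return r

def d2_axis1(u):
    r = np.zeros_like(u)
    r[:, 1:-1] = u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]
    return r

def frac_grad_adj(p1, p2, c):
    # Adjoint of frac_grad for the same G-L coefficients c (zero padding).
    a = np.zeros_like(p1)
    for k in range(len(c)):
        a[:a.shape[0] - k, :] += c[k] * p1[k:, :]   # gathers p1_{i+k, j}
        a[:, :a.shape[1] - k] += c[k] * p2[:, k:]   # gathers p2_{i, j+k}
    return a

def motv_denoise(f, alpha=1.5, beta=0.05, lam=8.0, iters=100, K=3):
    s = t = 0.25                      # so that s * t = 1/16
    f = np.asarray(f, dtype=float)
    c = gl_coeffs(alpha, K)
    u = f.copy()
    p1 = np.zeros_like(f); p2 = np.zeros_like(f)
    y1 = np.zeros_like(f); y2 = np.zeros_like(f)
    w = np.zeros_like(f)
    for _ in range(iters):
        xi1, xi2 = weights(u)                                              # steps 3-5
        g1, g2 = frac_grad(u, alpha, K)
        p1, p2 = proj_ball(p1 + s * xi1 * g1, p2 + s * xi1 * g2)           # step 6
        y1, y2 = proj_ball(y1 + s * xi2 * d2_axis0(u),
                           y2 + s * xi2 * d2_axis1(u))                     # step 7
        w = np.clip(w + s * beta * u, -1.0, 1.0)   # step 8: same as w / max(1, |w|)
        u_tilde = u - t * (xi1 * frac_grad_adj(p1, p2, c)                  # step 9
                           + xi2 * (d2_axis0(y1) + d2_axis1(y2))  # symmetric stencils
                           + beta * w)
        u = (t * lam * f + u_tilde) / (t * lam + 1.0)                      # step 10, H = I
        # a corrector pass (steps 11-13) would repeat the dual updates at u^{k+1}
    return u
```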

3.3. Adaptive Regularization Parameter

Algorithm 1 involves an important regularization parameter $\lambda$ in the $u$-subproblem; in fact, $\lambda$ has a tremendous influence on the denoising performance of the algorithm. Many researchers set $\lambda$ manually, while others select it adaptively according to prior information about the image [45,46]. To simplify the solution process, this paper designs an adaptive parameter $\lambda$ by using the structure of the data fidelity term together with the Morozov discrepancy principle [40], so that the denoised image always lies in the feasible set $D$, defined as
$$D = \left\{ u : \|Hu - f\|_2^2 \le c^2 \right\}.$$
Here $c$ is a positive number that depends on the noise variance $\sigma^2$ [17,40]. If the noise variance $\sigma^2$ is known, the upper bound of the feasible set $D$ can be expressed as $c^2 = \tau m n \sigma^2$; if $\sigma^2$ is unknown, it can be estimated as in Reference [47]. Here $\tau$ is a predefined parameter, and $\tau = 1$ is chosen empirically.
So that the denoised image retains more image detail, an error associated with the parameter $\lambda$ is defined, which in turn allows the algorithm to select $\lambda$ adaptively:
$$e^{k+1} = H u^{k+1}(\lambda^{k+1}) - f = (\lambda^{k+1} t H^T H + I)^{-1} (H\tilde{u} - f).$$
Whether or not $H^T H$ is singular, $(\lambda^{k+1} t H^T H + I)$ is always invertible. According to Lemma 3, the update rule for $\lambda$ in the iterative process is
$$\lambda^{k+1} = \begin{cases} 0, & \text{if } \|H\tilde{u} - f\|_2^2 \le c^2, \\ \text{the root of } \|e^{k+1}(\lambda)\|_2^2 = c^2, & \text{if } \|H\tilde{u} - f\|_2^2 > c^2. \end{cases}$$
Lemma 3.
Let
$$\phi(u, \lambda) = \|e\|_2^2.$$
Then $\phi(u, \lambda)$ is convex and strictly monotonically decreasing in $\lambda$, so the following equation has a unique solution:
$$\phi(u, \lambda) = c^2.$$
Proof. 
The first-order derivative of $\phi(u, \lambda)$ is $\frac{\partial \phi(u, \lambda)}{\partial \lambda} = -2t\,(H\tilde{u} - f)^2 (t\lambda + I)^{-3} \le 0$, so $\phi(u, \lambda)$ is monotonically decreasing with respect to the regularization parameter $\lambda$. The second-order derivative is $\frac{\partial^2 \phi(u, \lambda)}{\partial \lambda^2} = 6t^2 (H\tilde{u} - f)^2 (t\lambda + I)^{-4} \ge 0$, so $\phi(u, \lambda)$ is convex in $\lambda$. Therefore, the above equation has a unique solution. □
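For pure denoising ($H = I$), the discrepancy equation even has a closed form: $\phi(\lambda) = \|\tilde{u} - f\|_2^2 / (1 + t\lambda)^2 = c^2$ gives $\lambda = (\|\tilde{u} - f\|_2 / c - 1)/t$. A sketch of this update (the function name is an assumption; for general $H$, a scalar root finder such as scipy.optimize.brentq could be used instead):

```python
import numpy as np

def adaptive_lambda(u_tilde, f, t, sigma, tau=1.0):
    # c^2 = tau * m * n * sigma^2 (Morozov discrepancy bound)
    m, n = f.shape
    c = np.sqrt(tau * m * n) * sigma
    r = np.linalg.norm(u_tilde - f)
    return 0.0 if r <= c else (r / c - 1.0) / t
```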
Combining these updates with Algorithm 1 yields the new MOTV denoising algorithm; see Algorithm 2.
Algorithm 2: Primal–Dual Algorithm for New MOTV Image Restoration
Require: $f$, $H$, $\beta$, $c^2$.
1: Initialize $u$, $p$, $y$, and $w$. Set the step sizes $s$ and $t$.
2: while stopping criterion is not satisfied do
3:   $C(u) = (\lambda_1 - \lambda_2)^2 = (a_{11} - a_{22})^2 + 4 a_{12}^2$;
4:   $\xi_1(u) = 1 - \exp[-1/(C(u)/\varpi)^2]$;
5:   $\xi_2(u) = \exp[-1/(C(u)/\varpi)^2]$;
6:   $p^{k+\frac{1}{2}} = \dfrac{p^k + s \xi_1 \nabla^\alpha u^k}{\max(1, |p^k + s \xi_1 \nabla^\alpha u^k|)}$;
7:   $y^{k+\frac{1}{2}} = \dfrac{y^k + s \xi_2 \nabla^2 u^k}{\max(1, |y^k + s \xi_2 \nabla^2 u^k|)}$;
8:   $w^{k+\frac{1}{2}} = \dfrac{w^k + s \beta u^k}{\max(1, |w^k + s \beta u^k|)}$;
9:   $\tilde{u} = u^k - t \xi_1 (\nabla^\alpha)^T p^{k+\frac{1}{2}} - t \xi_2 (\nabla^2)^T y^{k+\frac{1}{2}} - t \beta w^{k+\frac{1}{2}}$;
10:  if $\|H\tilde{u} - f\|_2^2 \le c^2$ then
11:    $\lambda^{k+1} = 0$;
12:  else
13:    $\lambda^{k+1}$ is set to the root of $\phi(u^k, \lambda^{k+1}) = c^2$;
14:  end if
15:  $u^{k+1} = (t \lambda^{k+1} H^T H + I)^{-1}(t \lambda^{k+1} H^T f + \tilde{u})$;
16:  $p^{k+1} = \dfrac{p^{k+\frac{1}{2}} + s \xi_1 \nabla^\alpha u^{k+1}}{\max(1, |p^{k+\frac{1}{2}} + s \xi_1 \nabla^\alpha u^{k+1}|)}$;
17:  $y^{k+1} = \dfrac{y^{k+\frac{1}{2}} + s \xi_2 \nabla^2 u^{k+1}}{\max(1, |y^{k+\frac{1}{2}} + s \xi_2 \nabla^2 u^{k+1}|)}$;
18:  $w^{k+1} = \dfrac{w^{k+\frac{1}{2}} + s \beta u^{k+1}}{\max(1, |w^{k+\frac{1}{2}} + s \beta u^{k+1}|)}$;
19: end while
20: return $u = u^{k+1}$.
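Relative to the Algorithm 1 sketch in Section 3.2, the only change Algorithm 2 makes inside the loop is the adaptive $\lambda$; a hypothetical wiring for the pure-denoising case, reusing the adaptive_lambda helper sketched above:

```python
import numpy as np

def u_update_adaptive(u_tilde, f, t, sigma, tau=1.0):
    # Steps 10-15 of Algorithm 2 with H = I: pick lambda by the discrepancy
    # principle, then apply the closed-form u-update.
    lam = adaptive_lambda(u_tilde, f, t, sigma, tau)
    return (t * lam * f + u_tilde) / (t * lam + 1.0), lam
```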

3.4. Convergence Analysis

The sequence $u^{k+1}$ generated by Algorithm 2 converges to the minimizer $u^*$ of $MTV(u)$, and when $u$ belongs to the feasible set $D$, the sequence $\lambda^k$ converges to the Lagrange multiplier $\lambda^*$. Through the primal problem (18) and the dual problem (22), one obtains $\Phi(u; \lambda^*) = \max_{p \in P, y \in Y, w \in W} L(u, p, y, w; \lambda^*)$. Therefore, when the dual variables satisfy the constraints $p \in P$, $y \in Y$, $w \in W$, it suffices to prove that the sequence $(u^k, p^k, y^k, w^k)$ generated by Algorithm 2 converges to a saddle point of the function $L(u, p, y, w; \lambda^*)$. Two lemmas are first introduced to aid in proving convergence.
Lemma 4.
Let the sequence generated by Algorithm 2 be $(u^k, p^k, y^k, w^k, \lambda^k)$; then
$$\|u^{k+1} - u^k\|_2^2 \ge \max\left( \frac{1}{2 s \xi_1 K (v_0^2 + v_1^2 + \cdots + v_{K-1}^2)} \left\langle u^{k+1} - u^k, (-1)^\alpha \operatorname{div}^\alpha (p^{k+1} - p^{k+\frac{1}{2}}) \right\rangle, \right.$$
$$\left. \frac{1}{24 s \xi_2} \left\langle u^{k+1} - u^k, \operatorname{div}^2 (y^{k+1} - y^{k+\frac{1}{2}}) \right\rangle, \; \frac{1}{s \beta} \left\langle u^{k+1} - u^k, w^{k+1} - w^{k+\frac{1}{2}} \right\rangle \right).$$
Proof. 
Let $q_1 = p^k + s \xi_1 \nabla^\alpha u^{k+1}$ and $q_2 = p^k + s \xi_1 \nabla^\alpha u^k$. By the properties of $\operatorname{div}$ and $\nabla$,
$$s \xi_1 \left\langle u^{k+1} - u^k, (-1)^\alpha \operatorname{div}^\alpha (p^{k+1} - p^{k+\frac{1}{2}}) \right\rangle = s \xi_1 \left\langle \nabla^\alpha (u^{k+1} - u^k), \rho_P(q_1) - \rho_P(q_2) \right\rangle = \left\langle q_1 - q_2, \rho_P(q_1) - \rho_P(q_2) \right\rangle.$$
Using the classical inequality $2 a^T b \le \|a\|_2^2 + \|b\|_2^2$ for any vectors $a$, $b$, it follows that
$$\left\langle q_1 - q_2, \rho_P(q_1) - \rho_P(q_2) \right\rangle \le \frac{1}{2}\left( \|q_1 - q_2\|_2^2 + \|\rho_P(q_1) - \rho_P(q_2)\|_2^2 \right) \le \|q_1 - q_2\|_2^2 = (s \xi_1)^2 \|\nabla^\alpha (u^{k+1} - u^k)\|_2^2.$$
Let $v_i = (-1)^i C_i^\alpha$; then
$$\|\nabla^\alpha u\|^2 = \sum_i \sum_j \left[ (\nabla_1^\alpha u)_{i,j}^2 + (\nabla_2^\alpha u)_{i,j}^2 \right] = \sum_i \sum_j \left[ (v_0 u_{i,j} + \cdots + v_{K-1} u_{i+K-1,j})^2 + (v_0 u_{i,j} + \cdots + v_{K-1} u_{i,j+K-1})^2 \right]$$
$$\le 2K \sum_i \sum_j \left[ (v_0 u_{i,j})^2 + (v_0 u_{i,j})^2 + \cdots + (v_{K-1} u_{i+K-1,j})^2 + (v_{K-1} u_{i,j+K-1})^2 \right] \le 2K (v_0^2 + v_1^2 + \cdots + v_{K-1}^2) \|u\|^2.$$
Consequently,
$$\|u^{k+1} - u^k\|_2^2 \ge \frac{1}{2 s \xi_1 K (v_0^2 + v_1^2 + \cdots + v_{K-1}^2)} \left\langle u^{k+1} - u^k, (-1)^\alpha \operatorname{div}^\alpha (p^{k+1} - p^{k+\frac{1}{2}}) \right\rangle,$$
where $\xi_1 \ne 0$. Similarly,
$$\|u^{k+1} - u^k\|_2^2 \ge \frac{1}{24 s \xi_2} \left\langle u^{k+1} - u^k, \operatorname{div}^2 (y^{k+1} - y^{k+\frac{1}{2}}) \right\rangle, \quad \xi_2 \ne 0, \qquad \|u^{k+1} - u^k\|_2^2 \ge \frac{1}{s \beta} \left\langle u^{k+1} - u^k, w^{k+1} - w^{k+\frac{1}{2}} \right\rangle.$$
The quantity $\|\nabla^2 u\|_2^2$ appearing in the derivation is bounded as
$$\|\nabla^2 u\|_2^2 = \sum_i \sum_j (\nabla_1^2 u_{i,j})^2 + (\nabla_2^2 u_{i,j})^2 = \sum_i \sum_j (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})^2 + (u_{i,j+1} - 2u_{i,j} + u_{i,j-1})^2$$
$$\le 2 \sum_i \sum_j \left( u_{i+1,j}^2 + u_{i-1,j}^2 + 8 u_{i,j}^2 + u_{i,j+1}^2 + u_{i,j-1}^2 \right) \le 24 \|u\|_2^2.$$
In summary,
$$\|u^{k+1} - u^k\|_2^2 \ge \max\left( \frac{1}{2 s \xi_1 K (v_0^2 + \cdots + v_{K-1}^2)} \left\langle u^{k+1} - u^k, (-1)^\alpha \operatorname{div}^\alpha (p^{k+1} - p^{k+\frac{1}{2}}) \right\rangle, \right.$$
$$\left. \frac{1}{24 s \xi_2} \left\langle u^{k+1} - u^k, \operatorname{div}^2 (y^{k+1} - y^{k+\frac{1}{2}}) \right\rangle, \; \frac{1}{s \beta} \left\langle u^{k+1} - u^k, w^{k+1} - w^{k+\frac{1}{2}} \right\rangle \right),$$
which is proved. □
Lemma 5.
Let the sequence generated by Algorithm 2 be $(u^k, p^k, y^k, w^k, \lambda^k)$; then
$$\frac{1}{t}\|u^* - u^k\|_2^2 + \frac{1}{s}\|p^* - p^k\|_2^2 + \frac{1}{s}\|y^* - y^k\|_2^2 + \frac{1}{s}\|w^* - w^k\|_2^2 \ge \frac{1}{t}\|u^* - u^{k+1}\|_2^2 + \frac{1}{s}\|p^* - p^{k+1}\|_2^2 + \frac{1}{s}\|y^* - y^{k+1}\|_2^2 + \frac{1}{s}\|w^* - w^{k+1}\|_2^2.$$
In particular, the sequence $(u^k, p^k, y^k, w^k, \lambda^k)$ converges to a limit point $(u^\infty, p^\infty, y^\infty, w^\infty, \lambda^\infty)$, for which there are two cases: if $u^\infty$ belongs to the feasible set $D$, then $\lambda^\infty = 0$; otherwise, $\lambda^\infty$ is the solution of the equation $\|Hu(\lambda) - f\|_2^2 = c^2$. See [40] for a related proof.
Next, it is shown that $(u^\infty, p^\infty, y^\infty, w^\infty)$ is a saddle point of $L(u, p, y, w; \lambda^\infty)$ when $p^\infty \in P$, $y^\infty \in Y$, $w^\infty \in W$. From Algorithm 2, the limit point satisfies
$$u^\infty = (t \lambda^\infty H^T H + I)^{-1} \left( t \lambda^\infty H^T f + u^\infty - t \xi_1 (\nabla^\alpha)^T p^\infty - t \xi_2 (\nabla^2)^T y^\infty - t \beta w^\infty \right).$$
Rearranging yields
$$\lambda^\infty H^T (H u^\infty - f) + \xi_1 (\nabla^\alpha)^T p^\infty + \xi_2 (\nabla^2)^T y^\infty + \beta w^\infty = 0,$$
which is the optimality condition with $\lambda$ replaced by $\lambda^\infty$. From Algorithm 2 it is also known that $(u^\infty, p^\infty, y^\infty, w^\infty)$ satisfies the system
$$p^\infty = \rho_P(p^\infty + s \xi_1 \nabla^\alpha u^\infty), \qquad y^\infty = \rho_Y(y^\infty + s \xi_2 \nabla^2 u^\infty), \qquad w^\infty = \rho_W(w^\infty + s \beta u^\infty).$$
The projection operator is given by (38), which yields
$$p_{i,j}^\infty = \frac{p_{i,j}^\infty + s \xi_1 (\nabla^\alpha u^\infty)_{i,j}}{\max(1, |p_{i,j}^\infty + s \xi_1 (\nabla^\alpha u^\infty)_{i,j}|)}, \qquad y_{i,j}^\infty = \frac{y_{i,j}^\infty + s \xi_2 (\nabla^2 u^\infty)_{i,j}}{\max(1, |y_{i,j}^\infty + s \xi_2 (\nabla^2 u^\infty)_{i,j}|)}, \qquad w_{i,j}^\infty = \frac{w_{i,j}^\infty + s \beta u_{i,j}^\infty}{\max(1, |w_{i,j}^\infty + s \beta u_{i,j}^\infty|)}.$$
If $|p_{i,j}^\infty + s \xi_1 (\nabla^\alpha u^\infty)_{i,j}| \le 1$, then $(\nabla^\alpha u^\infty)_{i,j} = 0$ and $|p_{i,j}^\infty| \le 1$. If $|p_{i,j}^\infty + s \xi_1 (\nabla^\alpha u^\infty)_{i,j}| > 1$, then $|p_{i,j}^\infty| = 1$ and $s \xi_1 (\nabla^\alpha u^\infty)_{i,j} = \left( |p_{i,j}^\infty + s \xi_1 (\nabla^\alpha u^\infty)_{i,j}| - 1 \right) p_{i,j}^\infty$. The variables $y_{i,j}^\infty$ and $w_{i,j}^\infty$ have analogous properties. Therefore, $p^\infty \in P$, $y^\infty \in Y$, $w^\infty \in W$, and the KKT optimality conditions (32) hold.
Thus, the following theorem can be obtained.
Theorem 1.
Let $\lambda^*$ be the Lagrange multiplier of the constraint $u \in D$, and assume that $(u^*, p^*, y^*, w^*)$ is a saddle point of $L(u, p, y, w; \lambda^*)$. If $s \cdot t \le 1/16$, then the sequence $(u^k, p^k, y^k, w^k; \lambda^k)$ generated by Algorithm 2 converges to $(u^*, p^*, y^*, w^*; \lambda^*)$. In particular, $u^k$ converges to the minimizer, and $\lambda^k$ converges to the Lagrange multiplier satisfying the constraint $u \in D$.

4. Simulation Results

4.1. Quantitative Evaluation Index Description

In this paper, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to assess the quality of denoised images: PSNR evaluates pixel-level fidelity between two images, while SSIM measures their structural similarity. The PSNR is defined as
$$\mathrm{PSNR} = 10 \log_{10}\left( \frac{b^2}{\mathrm{MSE}} \right),$$
where $b$ is the peak pixel value (255 for 8-bit images) and MSE denotes the mean square error between the two images, defined as
$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ u_0(i,j) - u(i,j) \right]^2,$$
with $m \times n$ the size of the image. Here, $u_0$ and $u$ represent the original image and the denoised image, respectively. The SSIM is defined as
$$\mathrm{SSIM}(u_0, u) = \frac{(2 A_{u_0} A_u + d_1)(2 \operatorname{cov}(u_0, u) + d_2)}{(A_{u_0}^2 + A_u^2 + d_1)(B_{u_0}^2 + B_u^2 + d_2)},$$
where $A$ denotes the mean, $\operatorname{cov}(u_0, u)$ the covariance, $B$ the standard deviation, and $d_1$, $d_2$ are two constants, commonly set by experience to $d_1 = 0.01$ and $d_2 = 0.03$. To verify the feasibility of the algorithm, a set of test images is first introduced, as depicted in Figure 2.
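A sketch of the PSNR metric (8-bit peak value assumed); for SSIM, scikit-image's structural_similarity can be used instead of hand-rolling the windowed statistics:

```python
import numpy as np

def psnr(u0, u, b=255.0):
    # 10 * log10(b^2 / MSE), with MSE the mean squared error between the images.
    mse = np.mean((u0.astype(float) - u.astype(float)) ** 2)
    return 10.0 * np.log10(b ** 2 / mse)
```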

4.2. The Influence of Fractional-Order α

The regularization parameter $\lambda$ and the fractional order $\alpha$ are both free parameters in image denoising algorithms, and both play a significant role. The effect of $\alpha$ on the performance of the algorithm is investigated by adding Gaussian noise with standard deviation $\sigma = 10, 20$ to the test images Lena, Boats, and Pepper. From Figure 3, it can be observed that when $\sigma = 10$, PSNR decreases as $\alpha$ increases, whereas when $\sigma = 20$, PSNR increases with $\alpha$. Choosing a value near $\alpha = 1.5$ therefore keeps the PSNR trend reasonably robust to different noise levels. From Figure 4, it can be observed that if $\alpha$ is small, the image is prone to a blocky effect, and if $\alpha$ is large, the image is not denoised well. To suppress the phenomena observed in Figure 4, values around $\alpha = 1.5$ are again appropriate. In conclusion, it is preferable to select $\alpha$ from the range $\alpha \in [1.4, 1.6]$.

4.3. The Influence of λ on Algorithm Performance

The regularization parameter $\lambda$ is also known to affect the denoising performance of the algorithm. Hence, Gaussian noise with $\sigma = 30, 40$ is added to the Lena, Boats, and Pepper images to verify the impact of $\lambda$ on performance. From Figure 5, it can be observed that when $\lambda$ is selected manually, the PSNR and SSIM of each algorithm are not satisfactory; with the adaptive selection method proposed in Section 3.3, the algorithms achieve higher PSNR and SSIM. It can therefore be concluded that the proposed adaptive parameter $\lambda$ improves the performance of the algorithm to a certain extent.

4.4. Comparison with Other Methods

To further demonstrate the denoising performance of the new algorithm, a first set of comparative experiments was conducted on the test images Parrots, Lena, Barbara, Cameraman, Pepper, Foot, and Head against several state-of-the-art methods [48,49,50]. Gaussian noise with $\sigma = 5, 10, 20, 30$ was added to the images, and the PSNR and SSIM of each algorithm after denoising are reported in Table 1. Figure 6 and Figure 7 show the denoising results for the Pepper and Foot images with $\sigma = 30$. Overall, the proposed algorithm achieves better results.
Next, a second set of comparative experiments against [51,52,53] was conducted by adding Gaussian noise with $\sigma = 10, 20, 30$ to Cameraman, Boats, Barbara, and Pepper. The results are shown in Table 2. The values obtained in this study are slightly higher than those of the other algorithms, indicating that the proposed algorithm is effective.

5. Conclusions

In summary, the achievements of this article are as follows:
  • The proposed MOTV model plays a significant role in suppressing the staircase effect and eliminating residual noise.
  • An appropriate fractional order $\alpha$ and the selection of $\lambda$ based on the discrepancy principle have a measurable effect on the performance of the algorithm.
  • According to the two groups of comparative experiments, the proposed algorithm outperforms the compared algorithms and is effective.
The proposed algorithm also has certain limitations; in particular, its noise suppression may degrade as the noise variance increases. In future research, the authors will consider incorporating the ideas of this paper into the other methods mentioned in the introduction, applying further regularizers to achieve better results, and reducing the computation time.

Author Contributions

Conceptualization, S.B. and M.L.; methodology, S.B.; software, S.B. and M.L.; validation, S.B.; formal analysis, S.B.; investigation, S.B.; resources, S.B.; data curation, S.B., G.C. and M.L.; writing—original draft, S.B.; writing—review and editing, S.B.; visualization, S.B.; supervision, S.B., G.C. and M.L.; project administration, S.B.; funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grant No. 11461037, and the High-Quality Postgraduate Courses of Yunnan Province under Grant No. 109920210027.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the National Natural Science Foundation of China (Grant No. 11461037) and the High-Quality Postgraduate Courses of Yunnan Province (Grant No. 109920210027) for their support.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Li, X.; Meng, X.; Xiong, B. A fractional variational image denoising model with two-component regularization terms. Appl. Math. Comput. 2022, 427, 127178. [Google Scholar] [CrossRef]
  2. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef] [Green Version]
  3. Miclea, A.V.; Terebes, R.M.; Meza, S.; Cislariu, M. On spectral-spatial classification of hyperspectral images using image denoising and enhancement techniques, wavelet transforms and controlled data set partitioning. Remote Sens. 2022, 14, 1475. [Google Scholar] [CrossRef]
  4. Smitha, A.; Febin, I.P.; Jidesh, P. A retinex based non-local total generalized variation framework for OCT image restoration. Biomed. Signal Process. Control 2022, 71, 103234. [Google Scholar] [CrossRef]
  5. Bera, S.; Biswas, P.K. Noise conscious training of non local neural network powered by self attentive spectral normalized Markovian patch GAN for low dose CT denoising. IEEE Trans. Med. Imaging 2021, 40, 3663–3673. [Google Scholar] [CrossRef]
  6. Zhao, T.; Hoffman, J.; McNitt-Gray, M.; Ruan, D. Ultra-low-dose CT image denoising using modified BM3D scheme tailored to data statistics. Med. Phys. 2019, 46, 190–198. [Google Scholar] [CrossRef] [Green Version]
  7. Wen, Y.; Guo, Z.; Yao, W.; Yan, D.; Sun, J. Hybrid BM3D and PDE filtering for non-parametric single image denoising. Signal Process. 2021, 184, 108049. [Google Scholar] [CrossRef]
  8. Zhong, T.; Cheng, M.; Dong, X.; Wu, N. Seismic random noise attenuation by applying multiscale denoising convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5905013. [Google Scholar] [CrossRef]
  9. Dutta, S.; Basarab, A.; Georgeot, B.; Kouamé, D. DIVA: Deep unfolded network from quantum interactive patches for image restoration. arXiv 2022, arXiv:2301.00247. [Google Scholar] [CrossRef]
  10. Yang, D.; Sun, J. BM3D-Net: A convolutional neural network for transform-domain collaborative filtering. IEEE Signal Process. Lett. 2017, 25, 55–59. [Google Scholar] [CrossRef]
  11. Dutta, S.; Basarab, A.; Georgeot, B.; Kouamé, D. A novel image denoising algorithm using concepts of quantum many-body theory. Signal Process. 2022, 201, 108690. [Google Scholar] [CrossRef]
  12. Dutta, S.; Basarab, A.; Georgeot, B.; Kouamé, D. Quantum mechanics-based signal and image representation: Application to denoising. IEEE Open J. Signal Process. 2021, 2, 190–206. [Google Scholar] [CrossRef]
  13. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  14. Chan, T.; Marquina, A.; Mulet, P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
  15. Liu, X.; Li, Q.; Yuan, C.; Li, J.; Chen, X.; Chen, Y. High-order directional total variation for seismic noise attenuation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5903013. [Google Scholar] [CrossRef]
  16. Shi, B.; Gu, F.; Pang, Z.F.; Zeng, Y. Remove the salt and pepper noise based on the high order total variation and the nuclear norm regularization. Appl. Math. Comput. 2022, 421, 126925. [Google Scholar] [CrossRef]
  17. Tian, D.; Xue, D.; Wang, D. A fractional-order adaptive regularization primal–dual algorithm for image denoising. Inf. Sci. 2015, 296, 147–159. [Google Scholar] [CrossRef]
  18. Liu, Q.; Sun, L.; Gao, S. Non-convex fractional-order derivative for single image blind restoration. Appl. Math. Model. 2022, 102, 207–227. [Google Scholar] [CrossRef]
  19. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recognit. 2023, 134, 109050. [Google Scholar] [CrossRef]
  20. Chen, Y.; Cao, W.; Pang, L.; Cao, X. Hyperspectral image denoising with weighted nonlocal low-rank model and adaptive total variation regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5544115. [Google Scholar] [CrossRef]
  21. Kulathilake, K.A.S.H.; Abdullah, N.A.; Sabri, A.Q.M.; Bandara, A.M.R.R.; Lai, K.W. A review on self-adaptation approaches and techniques in medical image denoising algorithms. Multimed. Tools Appl. 2022, 81, 37591–37626. [Google Scholar] [CrossRef]
  22. Gu, X.M.; Huang, T.Z.; Ji, C.C.; Carpentieri, B.; Alikhanov, A.A. Fast iterative method with a second-order implicit difference scheme for time-space fractional convection–diffusion equation. J. Sci. Comput. 2017, 72, 957–985. [Google Scholar] [CrossRef]
  23. Li, M.; Gu, X.M.; Huang, C.; Fei, M.; Zhang, G. A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations. J. Comput. Phys. 2018, 358, 256–282. [Google Scholar] [CrossRef]
  24. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: Hoboken, NJ, USA, 1993. [Google Scholar]
  25. Lian, W.; Liu, X. Non-convex fractional-order TV model for impulse noise removal. J. Comput. Appl. Math. 2023, 417, 114615. [Google Scholar] [CrossRef]
  26. Li, M.; Han, C.; Wang, R.; Guo, T. Shrinking gradient descent algorithms for total variation regularized image denoising. Comput. Optim. Appl. 2017, 68, 643–660. [Google Scholar] [CrossRef]
  27. Estellers, V.; Soatto, S.; Bresson, X. Adaptive regularization with the structure tensor. IEEE Trans. Image Process. 2015, 24, 1777–1790. [Google Scholar] [CrossRef]
  28. Prasath, V.B.S.; Pelapur, R.; Seetharaman, G.; Palaniappan, K. Multiscale structure tensor for improved feature extraction and image regularization. IEEE Trans. Image Process. 2019, 28, 6198–6210. [Google Scholar] [CrossRef]
  29. Hsieh, P.W.; Shao, P.C.; Yang, S.Y. A regularization model with adaptive diffusivity for variational image denoising. Signal Process. 2018, 149, 214–228. [Google Scholar] [CrossRef]
  30. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef] [Green Version]
  31. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1997. [Google Scholar]
  32. Duan, J.; Qiu, Z.; Lu, W.; Wang, G.; Pan, Z.; Bai, L. An edge-weighted second order variational model for image decomposition. Digit. Signal Process. 2016, 49, 162–181. [Google Scholar] [CrossRef]
  33. Deng, H.; Tao, J.; Song, X.; Zhang, C. Estimation of the parameters of a weighted nuclear norm model and its application in image denoising. Inf. Sci. 2020, 528, 246–264. [Google Scholar] [CrossRef]
  34. Harris, C.; Stephens, M. A combined corner and edge detector. Alvey Vis. Conf. 1988, 15, 10–5244. [Google Scholar] [CrossRef]
  35. Phan, T.D.K.; Tran, T.H.Y. Edge coherence-weighted second-order variational model for image denoising. Signal Image Video Process. 2022, 16, 2313–2320. [Google Scholar] [CrossRef]
  36. Chan, T.F.; Golub, G.H.; Mulet, P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. Comput. 1999, 20, 1964–1977. [Google Scholar] [CrossRef] [Green Version]
  37. Wen, Y.W.; Chan, R.H.; Zeng, T.Y. Primal-dual algorithms for total variation based image restoration under poisson noise. Sci. China Math. 2016, 59, 141–160. [Google Scholar] [CrossRef]
  38. Boutaayamou, I.; Hadri, A.; Laghrib, A. An optimal bilevel optimization model for the generalized total variation and anisotropic tensor parameters selection. Appl. Math. Comput. 2023, 438, 127510. [Google Scholar] [CrossRef]
  39. Bertsekas, D.; Nedic, A.; Ozdaglar, A. Convex Analysis and Optimization; Athena Scientific: Nashua, NH, USA, 2003. [Google Scholar]
  40. Wen, Y.W.; Chan, R.H. Parameter selection for total-variation-based image restoration using discrepancy principle. IEEE Trans. Image Process. 2011, 21, 1770–1781. [Google Scholar] [CrossRef] [Green Version]
  41. Jiang, F.; Zhang, Z.; He, H. Solving saddle point problems: A landscape of primal-dual algorithm with larger stepsizes. J. Glob. Optim. 2022, 85, 821–846. [Google Scholar] [CrossRef]
  42. Jiang, F.; Wu, Z.; Cai, X.; Zhang, H. A first-order inexact primal-dual algorithm for a class of convex-concave saddle point problems. Numer. Algorithms 2021, 88, 1109–1136. [Google Scholar] [CrossRef]
  43. Chen, G.; Teboulle, M. A proximal-based decomposition method for convex minimization problems. Math. Program. 1994, 64, 81–101. [Google Scholar] [CrossRef]
  44. Nedić, A.; Ozdaglar, A. Subgradient methods for saddle-point problems. J. Optim. Theory Appl. 2009, 142, 205–228. [Google Scholar] [CrossRef]
  45. Pang, Z.F.; Zhang, H.L.; Luo, S.; Zeng, T. Image denoising based on the adaptive weighted TVp regularization. Signal Process. 2020, 167, 107325. [Google Scholar] [CrossRef]
  46. Sun, K.; Simon, S. Bilateral spectrum weighted total variation for noisy-image super-resolution and image denoising. IEEE Trans. Signal Process. 2021, 69, 6329–6341. [Google Scholar] [CrossRef]
  47. Galatsanos, N.P.; Katsaggelos, A.K. Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation. IEEE Trans. Image Process. 1992, 1, 322–336. [Google Scholar] [CrossRef] [PubMed]
  48. Guo, J.; Chen, Q. Image denoising based on nonconvex anisotropic total-variation regularization. Signal Process. 2021, 186, 108124. [Google Scholar] [CrossRef]
  49. Li, M.; Cai, G.; Bi, S.; Zhang, X. Improved TV image denoising over inverse gradient. Symmetry 2023, 15, 678. [Google Scholar] [CrossRef]
  50. Phan, T.D.K. A weighted total variation based image denoising model using mean curvature. Optik 2020, 217, 164940. [Google Scholar] [CrossRef]
  51. Nguyen, M.P.; Chun, S.Y. Bounded self-weights estimation method for non-local means image denoising using minimax estimators. IEEE Trans. Image Process. 2017, 26, 1637–1649. [Google Scholar] [CrossRef] [Green Version]
  52. Zhang, X. Center pixel weight based on wiener filter for non-local means image denoising. Optik 2021, 244, 167557. [Google Scholar] [CrossRef]
  53. Zhang, B.; Zhu, G.; Zhu, Z.; Kwong, S. Alternating direction method of multipliers for nonconvex log total variation image restoration. Appl. Math. Model. 2023, 114, 338–359. [Google Scholar] [CrossRef]
Figure 1. Region image. (a) Flat regions, (b) edge regions, (c) corner regions.
Figure 2. Original images. (a) Lena, (b) Parrots, (c) Pepper, (d) Barbara, (e) Boats, (f) Cameraman, (g) Foot, (h) Head.
Figure 3. (a–c) The PSNR curves of Boats, Lena, and Pepper under different α when σ = 10, respectively; (d–f) the PSNR curves of Boats, Lena, and Pepper under different α when σ = 20, respectively.
Figure 4. The denoising effect of Boats at different fractional orders α.
Figure 5. (a–c) The PSNR change curves of Pepper, Lena, and Boats under λ-adaptive and non-adaptive conditions when σ = 30, 40, respectively.
Figure 6. (a) Noise image of σ = 30, (b) Pepper denoising effect image of [48], (c) Pepper denoising effect image of [49], (d) Pepper denoising effect image of [50], (e) Pepper denoising effect image of the proposed algorithm.
Figure 7. (a) Noise image of σ = 30, (b) Foot denoising effect image of [48], (c) Foot denoising effect image of [49], (d) Foot denoising effect image of [50], (e) Foot denoising effect image of the proposed algorithm.
Table 1. The PSNR and SSIM of each algorithm under the first set of comparative experiments.
| Image | σ | PSNR/SSIM [48] | PSNR/SSIM [49] | PSNR/SSIM [50] | PSNR/SSIM Proposed |
| --- | --- | --- | --- | --- | --- |
| Parrots | 5 | 32.8362/0.9071 | 32.3292/0.9050 | 34.3292/0.9058 | 34.8288/0.9080 |
| | 10 | 27.7650/0.9085 | 31.4979/0.9005 | 33.4346/0.9089 | 33.6375/0.9101 |
| | 20 | 23.9519/0.9146 | 28.2574/0.9162 | 28.5716/0.9218 | 28.5996/0.9241 |
| | 30 | 22.7536/0.92018 | 23.1230/0.9204 | 24.2050/0.9277 | 26.3916/0.9279 |
| Lena | 5 | 33.1229/0.9023 | 32.2596/0.8938 | 33.1639/0.8992 | 33.4520/0.9000 |
| | 10 | 27.7962/0.8963 | 31.4268/0.8974 | 32.4873/0.9023 | 32.5379/0.9011 |
| | 20 | 24.0047/0.9144 | 28.0919/0.9050 | 28.1660/0.9141 | 28.4511/0.9153 |
| | 30 | 23.0918/0.8942 | 23.9741/0.8973 | 24.2123/0.9075 | 25.8134/0.8896 |
| Barbara | 5 | 33.1016/0.8908 | 29.2736/0.8904 | 31.0013/0.8909 | 31.0157/0.8924 |
| | 10 | 27.7793/0.8954 | 28.8140/0.8973 | 30.0985/0.8974 | 30.1247/0.8978 |
| | 20 | 23.9799/0.9024 | 26.7437/0.9033 | 26.5292/0.9051 | 26.5692/0.9133 |
| | 30 | 22.1423/0.8937 | 22.9751/0.9102 | 22.6868/0.9188 | 22.9827/0.9124 |
| Cameraman | 5 | 33.1716/0.9021 | 29.8097/0.8991 | 31.8976/0.9000 | 32.2383/0.9006 |
| | 10 | 27.8417/0.9056 | 29.3846/0.9015 | 30.3292/0.9034 | 31.4454/0.9059 |
| | 20 | 23.9418/0.9140 | 27.0952/0.9084 | 27.6781/0.9082 | 27.7247/0.9144 |
| | 30 | 22.8149/0.8902 | 23.1143/0.8907 | 23.5803/0.8905 | 24.6137/0.8911 |
| Pepper | 5 | 34.6851/0.8698 | 32.6693/0.8620 | 34.5614/0.8713 | 34.7247/0.8729 |
| | 10 | 28.8834/0.8764 | 32.0109/0.8689 | 32.9939/0.8786 | 33.8284/0.8767 |
| | 20 | 24.2617/0.8853 | 28.6892/0.8808 | 29.1745/0.8902 | 29.3960/0.8913 |
| | 30 | 22.5738/0.8937 | 24.4121/0.8938 | 25.4702/0.8950 | 29.0906/0.9016 |
| Foot | 5 | 37.5861/0.8536 | 37.4102/0.8524 | 35.4676/0.8713 | 35.7180/0.8561 |
| | 10 | 33.8384/0.8340 | 33.9815/0.8312 | 34.0148/0.8311 | 34.6360/0.8350 |
| | 20 | 27.1051/0.8286 | 28.2964/0.8279 | 28.7614/0.8253 | 29.8339/0.8272 |
| | 30 | 25.4720/0.8050 | 25.6541/0.8144 | 25.9459/0.8193 | 27.5594/0.8074 |
| Head | 5 | 33.5699/0.8953 | 33.4973/0.8874 | 34.6343/0.9030 | 33.6142/0.8423 |
| | 10 | 32.3203/0.8470 | 31.0355/0.8309 | 32.1153/0.8576 | 32.3258/0.8310 |
| | 20 | 22.9786/0.7933 | 22.8761/0.78237 | 27.6259/0.8117 | 28.1583/0.7446 |
| | 30 | 19.4785/0.7403 | 19.4509/0.7371 | 23.3897/0.7649 | 24.5390/0.6468 |
Table 2. The PSNR and SSIM of each algorithm under the second set of comparative experiments.
| Image | σ | PSNR/SSIM [51] | PSNR/SSIM [52] | PSNR/SSIM [53] | PSNR/SSIM Proposed |
| --- | --- | --- | --- | --- | --- |
| Cameraman | 10 | 31.3925/0.8948 | 33.1347/0.9049 | 25.0623/0.9089 | 33.6375/0.9101 |
| | 20 | 28.2944/0.8272 | 28.5723/0.8309 | 24.9803/0.9218 | 28.5996/0.9241 |
| | 30 | 27.1039/0.7700 | 27.2705/0.7315 | 24.8771/0.9277 | 26.3916/0.9279 |
| Boats | 10 | 32.3014/0.9017 | 32.3438/0.9065 | 25.6609/0.9032 | 32.4058/0.8889 |
| | 20 | 28.6036/0.8802 | 28.7042/0.8840 | 25.5432/0.9019 | 28.3939/0.9011 |
| | 30 | 27.5687/0.8204 | 27.5816/0.8253 | 25.3779/0.9087 | 26.0102/0.9112 |
| Barbara | 10 | 30.0487/0.9186 | 31.8501/0.9340 | 22.0302/0.8974 | 30.1247/0.8978 |
| | 20 | 26.1845/0.9259 | 26.4952/0.9247 | 21.9840/0.9051 | 26.5692/0.9133 |
| | 30 | 25.7061/0.8790 | 25.7384/0.8790 | 21.9057/0.9188 | 22.9827/0.9124 |
| Pepper | 10 | 33.3949/0.8950 | 33.9108/0.9032 | 28.8618/0.9034 | 34.7247/0.8729 |
| | 20 | 29.2765/0.8593 | 29.5572/0.8897 | 28.6174/0.8596 | 29.3960/0.8913 |
| | 30 | 28.2187/0.8746 | 28.2854/0.8731 | 28.4424/0.8905 | 29.0906/0.9016 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
