A class of singular diffusion equations based on the convex–nonconvex variation model for noise removal

This paper focuses on the problem of noise removal. First, we propose a new convex–nonconvex variational model for noise removal and establish the nonexistence of solutions of the variational model. Based on the new variational method, we propose a class of singular diffusion equations and prove the existence of solutions and a comparison rule for the new equations. Finally, experimental results illustrate the effectiveness of the model in noise reduction.


Introduction and motivation
Image denoising is used to recover/decompose a true image from an observed noisy image. Specifically, let f : Ω → R be a given image defined on the domain Ω ⊂ R N . Image denoising is used to decompose f into two functions u and n with f = u + n, where u contains the most meaningful signals depicted by f and n represents the noise. In the ideal case, the noise part n carries no signal information. The task of removing noise can be accomplished in traditional ways such as employing linear filters, which, though very simple to implement, may cause the restored image to be blurred at the edges. Various adaptive filters for noise removal have been proposed. Among these, the variational method is one of the most extensively used techniques. In general, nonlinear PDEs associated with the variational method are used as anisotropic diffusion filters because they apply different strengths of diffusivity at different locations in the image. These variational methods can be classified into the following two cases.

Convex variational model and forward diffusion equation
A classical variational model for image denoising was proposed by Rudin, Osher, and Fatemi [1]. In [1], for a given noisy image f ∈ L 2 (Ω), the image denoising problem is equivalent to the following minimization problem (the ROF model): min u∈BV(Ω) ∫ Ω |∇u| + (λ/2) ∫ Ω (u − f) 2 dx, where λ > 0 is a tuning parameter. In [2], Vese proposed the following general variational framework for image denoising: E(u) = ∫ Ω φ(|∇u|) dx + (λ/2) ∫ Ω (u − f) 2 dx. The author discussed the minimization problem min u∈BV(Ω) E(u) when φ(s) is a strictly convex function. In order to use the direct method of the calculus of variations, the convexity of the function φ(s) is always assumed. The BV norm, i.e., the total variation, is well suited for φ(|∇u|). The total variation has also been widely used in other tasks of image processing, since it can help prevent the noise from staying in the denoised image u: the noise part yields a large total variation of u.
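For concreteness, a gradient-descent minimization of a smoothed ROF-type functional can be sketched as follows. The quadratic relaxation √(ε² + |∇u|²) of the total variation, the periodic boundary handling, and all parameter values are illustrative assumptions for this sketch, not the original scheme of [1]:

```python
import numpy as np

def rof_denoise(f, lam=0.1, tau=0.2, eps=1.0, n_iter=100):
    """Gradient descent for a smoothed ROF-type functional
        min_u  int sqrt(eps^2 + |grad u|^2) dx + (lam/2) int (u - f)^2 dx.
    eps smooths the total variation at zero gradient (eps -> 0 recovers TV);
    a larger eps also keeps the explicit steps stable."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences, periodic boundaries for simplicity
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(eps ** 2 + ux ** 2 + uy ** 2)
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (adjoint pair)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - lam * (u - f))
    return u
```

A small fidelity weight lam flattens the image strongly; tau must respect the explicit stability limit, roughly tau ≤ eps/4 here.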
The ROF model yields very satisfactory results for removing image noise while preserving edges and contours of objects. However, it also possesses some unfavorable properties under certain circumstances, such as the loss of image contrast, the smearing of corners, and the staircase effect. For instance, in [3], Meyer showed that the ROF model cannot preserve image contrast (cf. Theorem 3, p. 32) and cannot keep corners (cf. Proposition 6, p. 39). A study of the loss of image contrast can also be found in [4]. In [5], Bellettini, Caselles, and Novaga characterized the shapes that can be preserved by the ROF model, which indicates that the ROF model will smear object corners. A full discussion of these undesirable properties of the ROF model can also be found in [6].
To remedy these unfavorable properties of the ROF model, new models and techniques have been proposed [7][8][9][10][11][12][13][14][15][16][17]. Chan and Strong [7] proposed an adaptive total variation based on a control factor. Chambolle and Lions [8] proposed to minimize a combination of the total variation and the integral of the squared norm of the gradient. Yunmei et al. [9] observed that this model is successful in restoring images where homogeneous regions are separated by distinct edges, but may become sensitive to the thresholding parameter in the event of nonuniform image intensities or heavy degradation. Yunmei et al. [9] then proposed an adaptive variable-exponent model which exploits the benefits of Gaussian smoothing and the strength of TV regularization. On the other hand, in [10,11], the authors introduced new variational models based on high-order derivatives of the denoised image u. In addition to the basic requirements of image denoising, such as edge preservation and noise removal, these new models effectively ameliorate the staircase effect.
It is worth mentioning that the diffusion equations associated with these methods are forward anisotropic diffusion equations, which smooth homogeneous regions while preserving edges. However, these diffusion equations cannot enhance the image, for example, by preserving corners, smooth parts of objects, and image greyscale intensity contrasts.

Nonconvex variational model and backward diffusion equation
Most of the existing algorithms are based on a convex potential. For a convex potential, φ needs to increase at least near-linearly; but for better edge preservation, φ needs to increase sublinearly, in which case φ becomes nonconvex, and such a form of φ has been suggested in [18][19][20][21][22][23]. It is interesting that Vese proposes several variational models for image denoising when φ(s) is nonconvex and implements numerical simulations for this case in [2]. Unfortunately, there is not necessarily a solution for the variational model with a nonconvex potential, and Chipot et al. [22] proved that there is no minimizer in any reasonable space if f is not a constant. They introduced the following energy: E ε (u) = ∫ Ω ( λ|∇u| 2 /(1 + |∇u| 2 ) + ε|∇u| 2 ) dx + ∫ Ω (u − f) 2 dx. They proved that E ε (u) is convex for ε ≥ λ/4 and nonconvex for ε < λ/4; for ε < λ/4, E ε (u) has quadratic growth at infinity, and they then used convexification tools to obtain the existence of a minimizer for E ε (u) in the one-dimensional case. For dimensions greater than one, the problem is quite open. The behavior of the minimizing sequence is also a challenging problem, closely related to Perona and Malik anisotropic diffusion [23], whose associated potential is also nonconvex. In spite of the lack of a rigorous mathematical theory for the continuous minimization problem with a nonconvex potential, its associated discrete version can be solved numerically, for example, with the gradient descent algorithm [23], the simulated annealing algorithm [24], the half-quadratic algorithms [18,20,21,[25][26][27][28][29], and so on. The nonconvex potential always leads to a backward diffusion equation or a forward-backward diffusion equation, which can sharpen edges, corners, and other singular features.
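The forward-backward character induced by a nonconvex potential can be checked numerically. Taking the classical Perona–Malik diffusivity g(s) = 1/(1 + s²/K²) (an illustrative choice; [23] also uses an exponential variant), the flux Φ(s) = s·g(s) increases below the threshold K and decreases above it:

```python
import numpy as np

# Flux analysis for a nonconvex (Perona-Malik type) potential.
# With diffusivity g(s) = 1 / (1 + s^2 / K^2), the flux Phi(s) = s g(s)
# increases for s < K (forward diffusion, smoothing) and decreases for
# s > K (backward diffusion, edge sharpening).
K = 1.0
s = np.linspace(0.0, 5.0, 501)
g = 1.0 / (1.0 + (s / K) ** 2)
flux = s * g
s_star = s[np.argmax(flux)]   # the forward/backward threshold
```

The non-monotone flux is exactly what makes the continuous problem ill-posed, while also enabling edge enhancement in discrete practice.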
In this paper, we propose a new convex-nonconvex variational model for image denoising. In addition to removing noise and keeping edges and contours of objects, the new model aims at preserving corners, smooth parts of objects, and image greyscale intensity contrasts. As corners and edges differ from ordinary points or contours in their singularities, a natural idea is to incorporate the related geometric quantities into the denoising process. Our idea can be described as follows. First, inspired by [22], instead of ε ∫ Ω |∇u| 2 dx, we consider the linear growth functional ε ∫ Ω ψ(|∇u|) dx, which can also preserve edges and corners. The new variational model is a combination of the convex and nonconvex variational models. Second, based on this idea, we propose a new variational framework for image denoising, which rests on some basic hypotheses and need not satisfy the convexity condition. Third, we propose and analyze a class of singular diffusion equations associated with the new variational model. To solve the singular diffusion equations efficiently, one may employ fast methods such as AOS [30]. In this paper, we use the standard time marching scheme and the PM scheme [23].
In fact, the anisotropic diffusion equation has been widely used in the modeling of image processing during the last two decades. In the famous work [23], Perona and Malik proposed a framework to deal with the denoising problem based on the diffusion equation. To make the images more pleasing to the eye, it is useful to reduce staircasing effects, and many models reducing this effect have been proposed in the literature. In [31,32], Charbonnier and Weickert developed and studied forward diffusion equations by proposing different diffusivities. In [33], Catté et al. proposed a regularization of the Perona-Malik model to obtain a smoother image. In [34], Keeling et al. proposed nonlinear anisotropic diffusion filtering for multiscale edge enhancement. In [35], Gilboa et al. proposed forward-backward diffusion processes for adaptive image enhancement and denoising. In [36], Smolka proposed combined forward and backward anisotropic diffusion filtering of color images. These forward-backward diffusions are related to nonconvex potentials. In all these works, the nonlinear anisotropic diffusion equation was considered, while in the present work, based on the new nonconvex variational model, we consider a singular forward-backward diffusion equation for denoising, which admits singular solutions that preserve the singular parts of the image, such as edges and corners.
The rest of this paper is organized as follows. In Sect. 2, our convex-nonconvex variational model is introduced in detail, and the ill-posedness of the minimization problem is discussed. We prove the existence of Young measure solutions in Sect. 3 and investigate their properties in Sect. 4. The numerical implementation is developed in Sect. 5, where we report numerical experiments on synthetic and real-world image denoising and compare our results with those obtained by the ROF model. A conclusion is given in Sect. 6.

Convex-nonconvex variational framework for denoising model
The following new variational model is proposed: E(u) = μ 1 ∫ Ω ψ C (|∇u|) dx + μ 2 ∫ Ω ψ NC (|∇u|) dx + ∫ Ω (u − f) 2 dx, where μ 1 > 0, μ 2 > 0. The functions ψ C (s) and ψ NC (s) are convex and nonconvex, respectively. For image processing, ψ C (s) satisfies the following assumptions [2]:
• ψ C is a strictly convex, nondecreasing function from R + to R + , with ψ C (0) = 0 (without loss of generality);
• lim s→+∞ ψ C (s) = +∞;
• there exist two constants c > 0 and b ≥ 0 such that cs − b ≤ ψ C (s) ≤ cs + b;
and ψ NC (s) satisfies the following assumptions [18]:
• ψ NC is nonconvex;
• ψ NC (s) ≈ cs 2 as s → 0 + ;
• lim s→+∞ ψ NC (s) = γ > 0.
Compared with the conditions on φ C and φ NC in [2] and [18], the hypotheses on ψ C and ψ NC in this paper are as follows:
(H1) ψ C ∈ C 1 (R N ), and there exist two constants 0 < λ ≤ Λ such that λ(|X| − 1) ≤ ψ C (X) ≤ Λ(|X| + 1);
(H2) Z(X) = ∇ψ C (X) and |Z(X)| ≤ Λ;
(H3) moreover, we assume that there exist a sequence {ϕ p } 1<p<2 ⊂ C 1 (R N ) and C 0 > 0 such that {Z p = ∇ϕ p } 1<p<2 converges locally uniformly to Z in R N , and for all p ∈ (1, 2), ϕ p and Z p satisfy structure conditions analogous to (H1)-(H2), uniformly in p.
The new variational model is a combination of the convex and nonconvex variational models. Hence, the new model can achieve a good tradeoff between noise removal and edge preservation, at which the convex and nonconvex variational models are respectively good. This is not a simple combination: ψ C controls the growth of the new functional and the regularity of the solution; ψ NC not only influences the growth of the new functional, but also preserves the singular parts of the image, such as corners, image contrast, and edges, and furthermore controls the convexity of the functional.
In order to use the Young measure theory in [37][38][39][40], we have to assume Hypothesis (H3) on ψ C . However, Hypothesis (H3) is easy to satisfy; for example, it holds if ψ C (s) = √(1 + |s| 2 ). On the other hand, the new hypotheses are different from the assumptions in [2, 18].
In this paper, the following model is considered: which can be rewritten as Following the proof given by Chipot et al. [22], we have the following result: if f is not a constant and f ∈ L ∞ (Ω), then the function E NC (u) has no minimizer in W 1,2 (Ω) and inf u∈W 1,2 (Ω) E NC (u) = 0.
Proof For the sake of clarity, the theorem is proved in the one-dimensional case Ω = (a, b); the same proof works for N ≥ 2. It is clear that and then we prove that the theorem is true for E α (u) with 0 < α < 1. By density, we may always find a sequence of step functions ũ n such that In fact, we can find a partition a = x 0 < x 1 < · · · < x n = b such that ũ n equals the constant ũ n,i on each interval (x i-1 , x i ), with h n = max i (x i − x i-1 ) < 1 and lim n→+∞ h n = 0. Let us set σ i = x i − x i-1 . Next, we define a sequence of continuous functions u n by Note that and therefore, Since taking the limit on both sides yields Thus and finally, The first equality is possible only if f ∈ W 1,2 (Ω), and in this case the second equality implies f ′ = 0, which is possible only if f is a constant. Therefore, excluding this trivial case, E NC (u) has no minimizer in W 1,2 (Ω).
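The mechanism of the proof, a staircase minimizing sequence driving the energy to zero while its derivatives blow up, can be reproduced numerically. The bounded nonconvex potential φ(s) = s²/(1+s²) and the target f(x) = x are illustrative choices for this sketch:

```python
import numpy as np

def energy_nc(u, f, x):
    """Discrete analogue of E_NC(u) = int phi(u') dx + int (u - f)^2 dx
    on a uniform grid, with the bounded nonconvex potential
    phi(s) = s^2 / (1 + s^2) (illustrative choice)."""
    dx = x[1] - x[0]
    du = np.diff(u) / dx
    phi = du ** 2 / (1.0 + du ** 2)
    return np.sum(phi) * dx + np.sum((u - f) ** 2) * dx

def staircase(n):
    """n-level step approximation of f(x) = x on (0, 1); jumps are smeared
    over one grid cell, so phi(u') ~ 1 only on a set of measure ~ n/m."""
    m = 400 * n * n           # refine the grid as n grows
    x = np.linspace(0.0, 1.0, m)
    return x, np.round(x * n) / n
```

As n grows, the gradient part contributes roughly n/m and the fidelity part roughly 1/(12 n²), so the energy tends to 0, yet the steep ramps prevent any W^{1,2} limit; any smooth candidate such as u = f pays φ(1) = 1/2 on the whole interval.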
Remark 1 As we know, if the region Ω is bounded, Note that E NC (u) ≥ 0, and therefore However, we cannot obtain any information about the minimizer of E NC (u) in BV(Ω).

Singular diffusion equations based on the convex-nonconvex variation
Based on the new variational model, the following diffusion equation is proposed:
∂u/∂t = div(μ 1 ∇ψ C (∇u) + μ 2 ∇ψ NC (∇u)) in Q T = Ω × (0, T), (3)
∂u/∂n = 0 on ∂Ω × (0, T), (4)
u(x, 0) = f(x) in Ω. (5)
For this equation, what we obtain in this paper reveals another aspect of existence, namely the existence of a discontinuous solution. Note that the equation is strongly degenerate at the discontinuity points of such a solution. On the other hand, the new equation can be considered as a perturbation of the Perona-Malik model [23]. Such a perturbation is not the usual viscous one, for example Δu or Δ 2 u, which has standard regularizing effects. The perturbation does not prevent the equation from admitting discontinuous solutions, which has a particular meaning: with the new perturbation, the new model is still an anisotropic diffusion equation. That is to say, inside the regions where the magnitude of the gradient of u is weak, the new equation acts as Gaussian smoothing, resulting in isotropic smoothing; near the region boundaries where the magnitude of the gradient is large, the regularization is "stopped" and the edges are preserved. Let ϕ(X) = μ 1 ψ C (|X|) + μ 2 ψ NC (|X|) for X ∈ R N . Therefore, the new diffusion equation can be rewritten as ∂u/∂t = div ∇ϕ(∇u). Let ϕ ** denote the convexification of ϕ, namely the largest convex function below ϕ. Since ϕ ∈ C 1 (R N ), ϕ ** ∈ C 1 (R N ) is convex.
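A minimal explicit time-marching sketch of such a combined equation follows, assuming the illustrative choices ψ_C(s) = √(1+s²) (convex, linear growth) and ψ_NC(s) = s²/(1+s²) (nonconvex, Perona–Malik type) together with a fidelity term; these choices, the periodic boundaries, and the parameter values are not fixed by the paper:

```python
import numpy as np

def cnc_diffusivity(s2, mu1=1.0, mu2=1.0):
    """Combined diffusivity c(s) = mu1 psi_C'(s)/s + mu2 psi_NC'(s)/s for
    the illustrative choices psi_C(s) = sqrt(1 + s^2) and
    psi_NC(s) = s^2 / (1 + s^2); s2 holds |grad u|^2."""
    return mu1 / np.sqrt(1.0 + s2) + 2.0 * mu2 / (1.0 + s2) ** 2

def cnc_step(u, f, tau=0.05, lam=0.1, mu1=1.0, mu2=1.0):
    """One explicit step of u_t = div(c(|grad u|) grad u) - lam (u - f),
    with periodic boundaries for brevity."""
    ux = np.roll(u, -1, axis=1) - u       # forward differences
    uy = np.roll(u, -1, axis=0) - u
    c = cnc_diffusivity(ux ** 2 + uy ** 2, mu1, mu2)
    px, py = c * ux, c * uy
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return u + tau * (div - lam * (u - f))
```

The convex term keeps the diffusivity strictly positive, while the nonconvex term decays fast in |∇u|, so smoothing is suppressed near steep edges.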

Definition 1 A Young measure solution to problem (3)-(5) is a function
where id is the identity mapping in R N , and in the sense of trace.

Preliminaries
We use C 0 (R d ) to denote the closure of the set of continuous functions on R d with compact support. The dual of C 0 (R d ) can be identified with the space M(R d ) of signed Radon measures with finite mass via the pairing As noted in [11], the space We define which is an inseparable space under the above norm.
We also call ν the W 1,p (D)-gradient Young measure generated by {∇u k } ∞ k=1 and {∇u k } ∞ k=1 the W 1,p (D)-gradient generating sequence of ν. In addition, the representation formula (11) also holds for ψ ∈ E p (R d ). By the fundamental theorem for Young measures, we see that Definition 3 Let {z k } ∞ k=1 ⊂ L 1 (D) and z ∈ L 1 (D). We say that {z k } ∞ k=1 converges to z in the biting sense if there is a decreasing sequence of subsets E j+1 ⊂ E j of D with lim j→∞ meas(E j ) = 0 such that {z k } ∞ k=1 converges weakly to z in L 1 (D\E j ) for all j.
We also call ν the W 1,p (D)-biting Young measure generated by {z k } ∞ k=1 and {z k } ∞ k=1 the W 1,p (D)-biting generating sequence of ν. By the fundamental theorem for Young measures, we see that Kinderlehrer and Pedregal [37] showed a property which characterizes W 1,p -gradient Young measures, as described in the following lemma.
(ii) Jensen's inequality holds for all ψ ∈ E p (R d ) continuous, quasiconvex, and bounded below. We give the following two lemmas; the proofs can be found in [38,39].

, is quasiconvex and bounded below and let
k=1 are weakly sequentially precompact in L 1 (D) and the sequence converges weakly to f (∇u).

Lemma 3 Let f and {u k } ∞ k=1 be as in Lemma 2 (ii) and assume in addition that
We now state and prove a result for the sequences of gradient-generated Young measures [40].

Existence of solution to the approximation problem
Since equation (3) is degenerate, singular, and of forward-backward type, some approximations are required for discussing the existence of solutions. Our approximation will be divided into two steps. For this purpose, we need to approximate the initial datum f . By the density properties of BV functions in [7], there exists a sequence {f p } ⊂ C ∞ 0 (Ω) such that ‖f p ‖ L ∞ (Ω) and ‖∇f p ‖ L 1 (Ω) are uniformly bounded in p, and {f p } 1<p<2 converges to f in L 1 (Ω).
As the first step, we consider the following evolution problem: where Let ϕ ** p denote the convexification of ϕ p , namely, Since ϕ p ∈ C 1 (R N ), ϕ ** p ∈ C 1 (R N ) is convex. In addition, Definition 5 A Young measure solution to problem (12)-(14) is a function such that supp ν x,t ⊂ {X ∈ R N : ϕ p (X) = ϕ ** p (X)} a.e. (x, t) ∈ Q T , and the initial condition holds in the sense of trace.
The following existence proof follows the ideas of Kinderlehrer and Pedregal [39], Demoulini [38], and Yin and Wang [40].
In order to obtain the theorem above, the following functionals, defined on W 1,p (Ω), are considered: and where 0 < h < 1, u h,0 = f p , and j is an integer with 1 ≤ j ≤ T/h + 1.
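The role of these functionals, one implicit time step realized as a minimization, can be sketched in one dimension. The convex integrand √(1+s²) standing in for ϕ**_p and the plain gradient-descent solver are illustrative assumptions:

```python
import numpy as np

def F_h(v, u_prev, h):
    """Discrete analogue of F_h(v; u_prev) = int phi**(v') dx
    + (1/2h) int (v - u_prev)^2 dx on a uniform 1-D grid (dx = 1),
    with the illustrative convex integrand phi**(s) = sqrt(1 + s^2)."""
    dv = np.diff(v)
    return np.sum(np.sqrt(1.0 + dv ** 2)) + np.sum((v - u_prev) ** 2) / (2.0 * h)

def rothe_step(u_prev, h=0.1, step=0.02, n_iter=400):
    """One implicit time step u_{h,j} = argmin_v F_h(v; u_{h,j-1}),
    computed here by plain gradient descent started at u_prev."""
    v = u_prev.astype(float).copy()
    for _ in range(n_iter):
        dv = np.diff(v)
        flux = dv / np.sqrt(1.0 + dv ** 2)             # (phi**)'(v')
        div = np.diff(flux, prepend=0.0, append=0.0)   # natural (Neumann) BC
        v += step * (div - (v - u_prev) / h)
    return v
```

Chaining such steps (u_{h,0} = f_p, u_{h,j} = rothe_step(u_{h,j-1})) is the time-discrete evolution whose limit h → 0 is analyzed in the sequel.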

Lemma 5 There exists u h,j
Moreover, where the constant depends only on μ 1 and μ 2 .
Proof By the relaxation theorem (cf. [43]), we get that (20) and, for k sufficiently large, From the growth condition, we see that and Thus u h,j is a minimizer of F ** h (v; u h,j-1 ) and Note that and then Hence Let ν h,j = (ν h,j x ) x∈Ω be the Young measure generated by {∇u h,j,k } ∞ k=1 in the proof of Lemma 5. By Lemma 3, ν h,j is a W 1,p -gradient Young measure. Then Noticing that ϕ ** p ≤ ϕ p , we see that Thus, Let χ h,j be the indicator function of [hj, h(j + 1)) and Then Based on the facts above, we can obtain the following lemma.

Lemma 6
The functions u h , w h , and the Young measure ν h defined above satisfy where M is independent of p and h.
By Lemma 2, we can see that which implies the equilibrium equation for all ξ ∈ C ∞ 0 (Ω). At the minimizer u h,j , the Gâteaux derivative of F ** h is zero, and we obtain and, by Lemma 5, we get the estimate From (24)-(26) and (29), for ζ ∈ C ∞ (Q T ) with ζ(x, 0) = ζ(x, T) = 0, we obtain By direct calculation, we see that ∂ t u h can be chosen as the test function in (27), and therefore When h → 0 and k → ∞, we have the following lemma.

Now let us prove that
by Lemma 5, we have Hence ∇u = ∇v = ⟨ν, id⟩, which implies (7). Similarly to the above arguments, we also obtain that {v h m } ∞ m=1 converges to u in L p (Q T ).

Proof of Theorem 3 From Lemmas 6 and 7, we can obtain
for ζ ∈ C ∞ (Q T ) with ζ(x, 0) = ζ(x, T) = 0. Therefore, if we prove (8), we will obtain the weak solution of problem (3)-(5). Let {u h,j,k } ∞ k=1 ⊂ W 1,p (Ω) be a minimizing sequence of F h in the proof of Lemma 5. For all ξ ∈ W 1,p (Ω), we see that Since Z ** p (∇u h,j,k ) · ∇u h,j,k converges weakly to ⟨ν h,j , Z ** p · id⟩ in L 1 (Ω), Z ** p (∇u h,j,k ) converges weakly to ⟨ν h,j , Z ** p ⟩ in L p/(p-1) (Ω), and ∇u h,j,k converges weakly to ⟨ν h,j , id⟩ in L p (Ω) as k tends to infinity, we get that Thus (23) implies that By the definition of ν h in (26), we see that From Lemma 7, we can obtain that ⟨ν h m x,t , Z p · id⟩ converges to ⟨ν x,t , Z p · id⟩ in the biting sense, ⟨ν h m x,t , Z p ⟩ converges weakly to ⟨ν x,t , Z p ⟩ in L p/(p-1) (Q T ), and ⟨ν h m x,t , id⟩ converges weakly to ⟨ν x,t , id⟩ in L p (Q T ) as m tends to infinity. Thus, for all η ∈ C ∞ (Q T ) with η(x, 0) = η(x, T) = 0, (17) and (19) imply that Hence, ⟨ν h m x,t , Z p ⟩ · ⟨ν h m x,t , id⟩ converges weakly to ⟨ν x,t , Z p ⟩ · ⟨ν x,t , id⟩ in L 1 (Q T ). Since ⟨ν h m x,t , Z p · id⟩ converges to ⟨ν x,t , Z p · id⟩ in the biting sense, we obtain that which implies (8). Hence, u is the desired Young measure solution of problem (3)-(5). The proof of Theorem 3 is complete.
Remark 2 Let u be the Young measure solution of problem (12)-(14) obtained in the proof of Theorem 3. Then from the proof we see that there exists a constant M depending only on ‖f p ‖ W 1,p (Ω) , ‖f p ‖ L ∞ (Ω) , , and meas(Ω), but independent of p and T, such that
Proof of Theorem 2 Let u p be the Young measure solution of problem (12)-(14) with the initial datum f p , with respect to the W 1,p (Q T )-gradient Young measure ν p generated by the sequence {∇w p,k } ∞ k=1 , which we obtained in the proof of Theorem 3. We see that and there exists a constant M 0 depending only on ‖f p ‖ W 1,p (Ω) , ‖f p ‖ L ∞ (Ω) , , and meas(Ω), but independent of p, such that So there exist u ∈ L ∞ ((0, T); BV(Ω)) ∩ L ∞ (Q T ) with ∂u/∂t ∈ L 2 (Q T ) and a subsequence of By Lemma 4, there exist a W 1,1 (Q T )-gradient Young measure ν ∈ L ∞ ((0, T); (E 1 0 (R N )) ) and a subsequence of {ν p m } ∞ m=1 , denoted the same, such that which implies that there is a decreasing sequence of subsets E j+1 ⊂ E j of Q T with lim j→∞ meas(E j ) = 0 such that ⟨ν p m , ψ⟩ converges weakly to ⟨ν, ψ⟩ in L 1 (Q T \E j ) for all ψ ∈ E 1 0 (R d ) and all j ≥ 1. By (18) we get that which implies (9). By Lemma 4, there exist w ∈ L ∞ ((0, T); BV(Ω)) and a subsequence, denoted the same, which is the W 1,1 (Q T )-biting generating sequence of ν, namely, there is a decreasing sequence of subsets G j+1 ⊂ G j of Q T with lim j→∞ meas(G j ) = 0 such that ⟨ν p m , ψ⟩ converges weakly to ⟨ν, ψ⟩ in L 1 (Q T \G j ) for all ψ ∈ E 1 0 (R d ) and all j ≥ 1. To prove (6), we first prove that {⟨ν p m , Z p m ⟩} ∞ m=1 converges weakly to ⟨ν, Z⟩ in L 1 (Q T ). For i ≥ 1, define Noticing that lim i→∞ meas{(x, t) : |∇w p,k | ≥ i} = 0, we see that I tends to 0 uniformly in p as i → ∞. For II, we get that Therefore, So {⟨ν p m , Z p m ⟩} ∞ m=1 converges weakly to ⟨ν, Z⟩ in L 1 (Q T ). Thus (6) holds, namely, Since {w k } ∞ k=1 converges to w in L 1 (Q T ), we get that, for all η ∈ C ∞ 0 (Q T \G j ; R N ), Thus for all j ≥ 1, We now prove (8).
For all η ∈ C ∞ 0 (Q T ), we get that Since ⟨ν p m x,t , id⟩ converges to ⟨ν x,t , id⟩ in the biting sense, we see that, for all j ≥ 1 and all η ∈ C ∞ 0 (Q T \E j ), Thus for all j ≥ 1, Hence By the arbitrariness of 0 ≤ η ∈ C ∞ 0 (Q T \E j ), we see that Thus for all j ≥ 1, Since ⟨ν x,t , Z · id⟩, ⟨ν x,t , Z⟩ · ⟨ν x,t , id⟩ ∈ L 1 (Q T ), we see that So (8) holds, and therefore u is the Young measure solution of problem (3)-(5). The proof of the theorem is complete.
Remark 3 Note that if the initial datum f ≡ C is a constant, then f itself is the Young measure solution of problem (3)-(5).

Properties of the Young measure solution
Theorem 4 Let u 1 , u 2 ∈ B be the Young measure solutions of problem (3)-(5) with initial data f 1 , f 2 satisfying (H6), respectively. Then, for a.e. (x, t) ∈ Q T , In particular, Proof Denote Let G(t) ∈ C 1 (R) be such that where B is a positive constant, and Then ψ ∈ C(R + ) ∩ H 1 (R + ), ψ(0) = 0, ψ(t) ≥ 0 for t ∈ R + , and G(u 1 (x, t) − u 2 (x, t) − K) ∈ BV(Ω). Note that G(u k 1 (x, t) − u k 2 (x, t) − K) ∈ W 1,1 (Ω) can be chosen as the test function and, taking Q s for s ∈ [0, T] as the domain of integration, we obtain From the proof of Theorem 4 and since 0 < G ≤ B, Hence ψ(s) = 0 for a.e. s ∈ [0, T].
And therefore Then which implies the right-hand inequality of (33). Exchanging the roles of u 1 and u 2 yields the left-hand inequality of (33). When the initial datum f ≡ ess sup x∈Ω u 0 + , f is the Young measure solution of (3)-(5); when the initial datum f ≡ −ess sup x∈Ω u 0 − , f is also the Young measure solution of (3)-(5). This completes the proof of the theorem.

Numerical scheme
In this section, numerical results are presented to demonstrate the performance of our proposed algorithm for image restoration involving white Gaussian noise. The results are compared with those obtained by the PM method in [23] and the TV method in [8]. In the next two subsections, two numerical discrete schemes, the PM scheme (PMS) and the AOS scheme, will be proposed.
Using the scheme in [43], problem (3)-(5) can be discretized in the AOS form u n+1 = (1/m) Σ l=1..m (I − mτ A l (u n )) -1 [u n + λτ (f − u n )], with N (i) being the set of the two neighbors of pixel i along the given direction (boundary pixels have only one neighbor).
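A runnable sketch of an AOS step of this type follows; the classical Perona–Malik diffusivity stands in for the diffusivity of the proposed model, and the explicit treatment of the fidelity term as well as all parameter values are assumptions of this sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-, main, super-diagonal a, b, c)."""
    n = len(d)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n); x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def diffuse_axis(v, g, tau, axis):
    """Apply (I - 2*tau*A_l)^(-1) along one axis; A_l uses half-point
    diffusivities and homogeneous Neumann boundary conditions."""
    if axis == 0:
        v, g = v.T, g.T
    out = np.empty_like(v, dtype=float)
    n = v.shape[1]
    for r in range(v.shape[0]):
        gh = 0.5 * (g[r, :-1] + g[r, 1:])   # half-point diffusivities
        a = np.zeros(n); c = np.zeros(n)
        a[1:] = -2.0 * tau * gh
        c[:-1] = -2.0 * tau * gh
        b = 1.0 - a - c                     # zero row sums of A_l => Neumann
        out[r] = thomas(a, b, c, v[r])
    return out.T if axis == 0 else out

def aos_step(u, f, tau=0.5, lam=0.05, K=2.0):
    """One AOS step for u_t = div(g(|grad u|) grad u) - lam (u - f):
    u^{n+1} = (1/2) sum_l (I - 2 tau A_l(u^n))^(-1) [u^n + tau lam (f - u^n)]."""
    ux = 0.5 * (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1))
    uy = 0.5 * (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0))
    g = 1.0 / (1.0 + (ux ** 2 + uy ** 2) / K ** 2)   # PM diffusivity (illustrative)
    v = u + tau * lam * (f - u)                       # fidelity handled explicitly
    return 0.5 * (diffuse_axis(v, g, tau, 1) + diffuse_axis(v, g, tau, 0))
```

Each directional solve is tridiagonal and diagonally dominant, which is what makes AOS unconditionally stable and cheap per step.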

The PM scheme
Similar to the original PM method, the discrete explicit scheme of problem (3)-(5) is as follows: with the reflecting boundary conditions u n i,0 = u n i,1 , u n i,J = u n i,J-1 , u n 0,j = u n 1,j , u n I,j = u n I-1,j .
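The explicit scheme with these reflecting boundary conditions can be sketched as follows; the diffusivity g and the parameter values are the classical Perona–Malik choices, used here for illustration:

```python
import numpy as np

def pm_step(u, tau=0.2, K=2.0):
    """One explicit Perona-Malik step. Replicate padding realizes the
    reflecting boundary conditions u_{i,0} = u_{i,1}, etc.; g is the
    classical PM diffusivity g(d) = 1 / (1 + (d/K)^2)."""
    up = np.pad(u, 1, mode='edge')
    dN = up[:-2, 1:-1] - u      # differences to the four neighbors
    dS = up[2:, 1:-1] - u
    dW = up[1:-1, :-2] - u
    dE = up[1:-1, 2:] - u
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
    return u + tau * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
```

Because the scheme is in flux form with zero boundary fluxes, the grey-value mean is conserved exactly; the explicit step requires tau ≤ 1/4.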

Denoising performance
The denoising algorithms are tested on three images: a synthetic image (128 × 128 pixels), the Lena image (300 × 300 pixels), and a tower image (500 × 500 pixels). For each image, a noisy observation is generated by adding Gaussian noise with standard deviation σ ∈ {30, 50} to the original image.
Peak signal-to-noise ratio (PSNR) and mean absolute deviation error (MAE) are used to measure the quality of the restoration results. They are defined as PSNR = 10 log 10 (255 2 MN/‖u O − u‖ 2 2 ) dB and MAE = ‖u O − u‖ 1 /(MN), where u O and u are the original and restored images, respectively, and M × N is the image size. The stopping criterion of all methods is to achieve the maximal PSNR or the best MAE. All methods are implemented in Matlab R2007b on a 2.8 GHz Pentium 4 processor.

Measure of similarity of edges
The PSNR does not always give a clear indication of whether one image is less staircased than another, so the authors of [44] also consider PSNR grad , defined as (PSNR(∂ x u, ∂ x u O ) + PSNR(∂ y u, ∂ y u O ))/2, which measures how well the derivatives of the reconstruction match those of the true image.
The edge maps are defined as where G σ (x) = (1/(4πσ)) exp(−|x| 2 /(4σ)). If all images are normalized, their gray-scale is in the range [0, 255]. The authors of [45] find that a value of 0.0025 ≤ c ≤ 0.025 and σ = 0.5 gives the best edge map. In [46], the authors propose the following PSNR: where |max u O − min u O | gives the gray-scale range of the original image. Notice that this PSNR can measure how well the reconstructed data match the true data, and the data need not be an image. Combining (35) and (36), in [47] we define the following PSNR E in order to measure the similarity of edges: If some method produces wrong edges in the restored image, then PSNR E will be positive.
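The range-based PSNR of (36) and the PSNR_grad of the previous subsection can be sketched as follows; the use of forward differences for the partial derivatives is an assumption of this sketch, since the discretization is not specified in [44]:

```python
import numpy as np

def psnr_range(v, v_o):
    """PSNR with the peak set to the grey-scale range |max v_O - min v_O|
    of the reference, as in (36); the data need not lie in [0, 255]."""
    peak = float(np.max(v_o) - np.min(v_o))
    mse = np.mean((np.asarray(v_o, float) - np.asarray(v, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_grad(u, u_o):
    """PSNR_grad = (PSNR(dx u, dx u_O) + PSNR(dy u, dy u_O)) / 2, using
    forward differences for the partial derivatives (an assumption)."""
    dx = lambda v: np.diff(v, axis=1)
    dy = lambda v: np.diff(v, axis=0)
    return 0.5 * (psnr_range(dx(u), dx(u_o)) + psnr_range(dy(u), dy(u_o)))
```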

Comparison with other methods
The results depicted in Figs. 1-2 illustrate that the proposed model is able to reconstruct sharp edges and nonuniform regions while avoiding staircasing. TV-based diffusion reconstructs sharp edges, but the staircasing effect is obvious. PM-based diffusion also reconstructs sharp edges, but it creates isolated black and white speckles in the restored image. The proposed model reconstructs sharp edges as effectively as PM-based diffusion and meanwhile recovers smooth regions as effectively as pure isotropic diffusion (in particular, without staircasing). Figure 3 shows the edge functions when the smoothed images obtained by the new methods, TV, and PM attain the largest PSNR, respectively. Note that the PM and TV methods create many new edges in the restored images. From Tables 1 and 2, we observe that both the PSNRs and the MAEs of the restored images obtained with our methods are better than those of the PM and TV methods, and the increase in PSNR E is obvious with the new diffusion operators. The denoising performance results are tabulated in Tables 1-2, where the best PSNR, MAE, PSNR E , and CPU time are shown in boldface. The PSNR improvement brought by our approach can be quite high, particularly for σ = 50 (see, e.g., Figs. 1-2), and the visual quality is quite remarkable. For σ = 30, the PSNRs of our algorithm can also be higher than those of the PM and TV methods. Moreover, the new algorithm with the AOS scheme shows high PSNRs on real images (Figs. 3-4). Note that for large images (Figs. 3-4, 300 × 300 and larger), the new methods take less time than the TV and PM methods.

Conclusion
In this paper, based on convex and nonconvex linear growth functionals, we proposed a class of singular diffusion equations for noise removal. In our method, the convex and nonconvex functionals are combined into a new functional. A direct analysis of the new functional is difficult; however, a new singular forward-backward diffusion equation is derived from the functional, and the existence, uniqueness, and long-time behavior of its solutions are investigated. Finally, experimental results illustrate the effectiveness of the model in noise reduction. This work proposes quite an original and efficient method for noise removal. Noise removal is a difficult problem that arises in various applications relevant to active imaging systems. The main ingredients of our method are as follows: (1) the new framework is based on a combination of convex and nonconvex functions; (2) the new equation is a singular forward-backward diffusion equation; (3) the Young measure solution is obtained, and some useful properties of the solution are established, such as long-time behavior, stability, and a maximum principle; (4) the new diffusion can be simulated by the efficient AOS scheme.
The obtained numerical results are really encouraging since they outperform the most recent methods in this field.