Efficient Dark Channel Prior Based Blind Image De-blurring

The dark channel prior for blind image de-blurring has attracted considerable attention in the recent past. An interesting observation about the blurring process is that the value of the dark channel increases as dark pixels are averaged with adjacent high-intensity pixels. L0 regularization is proposed to curtail the value of the dark channel, and the half-quadratic splitting method is used to handle the non-convexity of L0 regularization. Furthermore, the Discrete Wavelet Transform (DWT) is applied prior to de-blurring to increase the efficiency of the algorithm. The most significant contribution of this paper is a universal blind image de-blurring algorithm with reduced computational complexity. Experiments are performed to evaluate the performance of the algorithm, and their results are comparable with state-of-the-art de-blurring methods. Experimental results also reveal that wavelet-based dark channel prior de-blurring is effective for both uniform and non-uniform blur.


Introduction
Image restoration is the operation of recovering the true image from a blurred image. Work on digital image restoration dates back to the 1960s [1]. An image is described by two components, reflectance and illuminance. A blurred image is represented by

I_B = I_L ⊗ K_B + η,  (2)

where I_B is the blurred image, ⊗ is the convolution operator, I_L is the latent image (an invisible image, produced on a sensitized emulsion by exposure to light, that emerges during development) convolved with the blur kernel K_B, and η is additive noise. To make blind de-blurring well posed, existing methods make assumptions about the blur kernel, the latent image, or both. The methods discussed in [2][3][4][5] assume sparsity of the image gradient, a prior widely used in low-level vision tasks including de-noising, stereo, and optical flow. A problem arises when these priors are used in a maximum a posteriori (MAP) framework, which degrades the effectiveness of the algorithms. Heuristic edge selection achieves better results in the MAP framework [6], but it increases the computational complexity.
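The degradation model in (2) can be simulated directly. The sketch below uses circular (FFT) convolution on a grayscale image; the function and argument names are ours, not from the paper's code, and the test also illustrates the paper's key observation that blurring raises the value of dark pixels.

```python
import numpy as np

def blur(latent, kernel, noise_sigma=0.0, rng=None):
    """Simulate the degradation model I_B = I_L (x) K_B + eta.

    Uses circular (FFT) convolution; `latent` is a 2-D grayscale image and
    `kernel` a small blur kernel normalized to sum to 1.  Function and
    argument names are illustrative, not taken from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    kh, kw = kernel.shape
    # Embed the kernel in an image-sized array and centre it at the origin
    # so the blur introduces no spatial shift.
    k = np.zeros_like(latent)
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(latent) * np.fft.fft2(k)))
    return blurred + noise_sigma * rng.standard_normal(latent.shape)
```

Blurring an image that contains a single dark pixel averages it with its bright neighbours, so the minimum intensity of the result is strictly larger than in the sharp image.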
Furthermore, the latent image does not always contain sharp edges. In contrast to MAP methods, an image gradient prior via variational Bayesian inference [7] was proposed, but it is computationally expensive. In another method [4], a normalized sparsity prior is used for natural image de-blurring, and internal patch recurrence [8] has also been exploited for natural images. However, these methods for natural image de-blurring do not perform effectively on specific image classes such as text [9] and low-light images [10]. Different methods [5,[10][11][12]] are designed for specific images such as face, text, or low-illumination images.
To overcome the above problems, blind image de-blurring using the dark channel prior [13], [14] is used, because a single prior serves many image categories. The dark channel prior based algorithm produces comparable results on natural, text, face, and low-illumination images. The proposed algorithm is based on the observation that the low-intensity pixels of a blurred image are not as dark as those of a clear image: in a blurred image, low-intensity pixels are averaged with nearby high-intensity pixels. This observation is confirmed by both theoretical and empirical analysis.
Although the dark channel of a natural sharp image is close to zero in most cases, blurring inflates it, so L0 regularization is introduced to minimize the dark channel of the recovered image. L0 regularization is non-convex, and its optimization involves a non-linear minimum operator to compute the dark channel. An approximate linear operator based on a lookup table is introduced to handle this non-linearity, and the resulting problem is solved using the half-quadratic splitting method. The dark channel prior algorithm converges quickly and can also be extended to non-uniform blurring. The DWT is used as a preprocessing step to make dark channel based blind image de-blurring efficient.

The Proposed Work
The input image is first preprocessed using the Haar wavelet for dimensionality reduction, which reduces the computational complexity of the algorithm. The dark channel is then computed and minimized using L0 regularization. The resulting image with a curtailed dark channel is processed to estimate the latent image and the blur kernel in parallel: the latent image is estimated using the half-quadratic splitting method, and the blur kernel using gradient-based estimation. The estimated blur kernel and latent image are convolved, and the result is subtracted from the blurred image to produce the clear image. Figure 1 shows the flow of the proposed algorithm.

Dimensionality Reduction
For dimensionality reduction of the input image, the DWT is used as a preprocessing step. The Haar wavelet is chosen because it is the simplest wavelet. It decomposes the image into approximation, horizontal, vertical, and diagonal sub-bands and compacts the energy of the image: the large coefficients carry most of the image features, while the small coefficients are replaced by zeros using universal thresholding to obtain the desired image. The universal threshold is

T = σ √(2 log N),

where σ is the standard deviation of the noise η in (2) and N is the number of coefficients. The DWT provides a good compression ratio without degrading image quality, which reduces both the computational complexity and the memory burden.
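The preprocessing step above can be sketched as a single-level 2-D Haar transform followed by universal thresholding. This is a minimal sketch with our own function names (it assumes even image dimensions and uses the averaging Haar convention, one of several common normalizations):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: approximation + 3 detail sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def universal_threshold(coeffs, sigma):
    """Zero coefficients below T = sigma * sqrt(2 log N), keep the rest."""
    t = sigma * np.sqrt(2.0 * np.log(coeffs.size))
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)
```

De-blurring then operates on the quarter-size approximation band `ll`, which is what reduces the computational cost.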

Dark Channel
The dark channel of a blurred image I_B is defined as

D(I_B)(x) = min_{y ∈ N_x} ( min_{c ∈ {r, g, b}} I_B^c(y) ),

where x and y are pixel locations, N_x is an image patch centered at x, and I^c is the c-th color channel. The dark channel represents the smallest pixel values in the image. The algorithm is based on the proposition that, in a blurred image, dark pixels are averaged with neighboring high-intensity pixels, and as a result of this convolution the intensity of the dark pixels increases. From this proposition, two properties are derived for a blurred image.

Property 1: Let D(I_B) be the dark channel of the blurred image and D(I_C) the dark channel of the clear image. Then, for every pixel x,

D(I_B)(x) ≥ D(I_C)(x).  (3)
Averaging low-intensity pixels with high-intensity pixels enlarges the dark regions of the blurred image compared with the clear image, as stated in (3).

Property 2: The number of non-zero elements (dark pixels) in the dark channel of the blurred image is greater than in that of the clear image. The L0 norm

||D(I)||_0 = #{ x : D(I)(x) ≠ 0 }  (4)

counts the non-zero elements of the dark channel of the clear and the blurred image, and certainly

||D(I_B)||_0 ≥ ||D(I_C)||_0.  (5)

Using (4) and (5), we measure how much the dark channel of the blurred image is scattered, and this measure is added to the objective formulated to de-blur the image:

min_{I_L, K}  ||I_L ⊗ K − I_B||_2^2 + γ||K||_2^2 + µ||∇I_L||_0 + λ||D(I_L)||_0.  (6)

The first term in (6) enforces fidelity of the restored image to the observation; the second term regularizes the blur kernel; the third is the image gradient term, in which small gradients are discarded and only large gradients are retained; the last term penalizes the dark channel. µ and λ are weight parameters. To solve (6), the blur kernel and the latent image are estimated alternately.
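The dark channel definition above can be checked numerically with a plain nested-loop minimum filter. This is our own sketch (a grayscale erosion filter would be faster in practice):

```python
import numpy as np

def dark_channel(img, patch=7):
    """D(I)(x): minimum over the patch N_x and the colour channels c.

    `img` is an H x W x 3 array in [0, 1]; `patch` is the side length of
    N_x.  A straightforward nested-loop sketch, not the paper's code.
    """
    h, w = img.shape[:2]
    per_pixel_min = img.min(axis=2)           # min over colour channels first
    r = patch // 2
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = per_pixel_min[y0:y1, x0:x1].min()
    return out
```

On a sharp image containing one black pixel the dark channel reaches zero; blurring the image first would lift that minimum, which is exactly the behaviour that Properties 1 and 2 formalize.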

Estimation of Latent Image
The latent image is computed from the subproblem derived from (6):

min_{I_L}  ||I_L ⊗ K − I_B||_2^2 + µ||∇I_L||_0 + λ||D(I_L)||_0.  (7)

Solving (7) directly is computationally complex because of the L0 regularization and the non-linear term for computing the dark pixels. To handle the L0 terms, the half-quadratic splitting optimization approach is used and auxiliary variables are introduced, giving

min_{u, g, I_L}  ||I_L ⊗ K − I_B||_2^2 + α||D(I_L) − u||_2^2 + β||∇I_L − g||_2^2 + λ||u||_0 + µ||g||_0,  (8)

where α and β are penalty parameters and u and g are auxiliary variables. The solution of (8) approaches that of (7) as the penalty parameters approach infinity; otherwise, the auxiliary variables are adjusted between iterations. Adjusting the auxiliary variables does not involve computing the non-linear function D(·), but the update of I_L does. With u and g fixed, the subproblem for I_L is

min_{I_L}  ||I_L ⊗ K − I_B||_2^2 + α||D(I_L) − u||_2^2 + β||∇I_L − g||_2^2.  (9)

According to our observation, the non-linear function computing the dark pixels is equivalent to a linear map M that selects, for each pixel x, the location y = arg min_{z ∈ N_x} I(z), so that D(I_L) = M I_L when the image is written as a vector.  (10)

The matrix M is constructed from the previously computed intermediate latent image according to the conditions in (10); re-estimating M at each iteration brings M I_L closer to D(I_L), and after a number of experiments it is confirmed that this optimization scheme converges well. With the linear map M fixed, the solution of (9) for I_L is

I_L = (T_K^T T_K + α M^T M + β ∇^T ∇)^{-1} (T_K^T I_B + α M^T u + β ∇^T g),  (11)

where T_K is the Toeplitz convolution matrix of the kernel K. This linear system is solved efficiently using the fast Fourier transform (FFT) [14]. The u and g subproblems decouple and are solved separately in closed form by hard thresholding, as given in (12) and (13).
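The closed-form u and g updates in (12) and (13) are elementwise hard-thresholding steps. A minimal sketch with illustrative names (in the paper's notation, (weight, penalty) plays the role of (λ, α) for the u update and (µ, β) for the g update; the full solver alternates these updates with (11) while the penalties grow):

```python
import numpy as np

def l0_prox(v, weight, penalty):
    """Elementwise hard-thresholding solution of the auxiliary subproblems.

    Solves min_u  weight * ||u||_0 + penalty * ||u - v||^2 in closed form:
    an entry of v is kept when penalty * v**2 > weight, zeroed otherwise.
    """
    return np.where(penalty * v ** 2 > weight, v, 0.0)
```

As the penalty parameter grows over the half-quadratic iterations the threshold sqrt(weight/penalty) shrinks, so fewer entries are suppressed and u approaches D(I_L), as required for the splitting to converge.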

Estimation of Blur Kernel
The blur kernel subproblem is derived from (6):

min_K  ||∇I_L ⊗ K − ∇I_B||_2^2 + γ||K||_2^2,  (14)

where the estimation is carried out in the gradient domain. Equation (14) is solved in closed form using the FFT, after which K is normalized so that its elements are non-negative and sum to one.
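Under the circular-boundary assumption implied by the FFT, (14) has the closed-form solution K = F⁻¹( Σ conj(F(∇I_L)) F(∇I_B) / (Σ |F(∇I_L)|² + γ) ), with the sum taken over the horizontal and vertical gradients. A minimal sketch (function names and the cropping details are ours, not the paper's code):

```python
import numpy as np

def estimate_kernel(latent, blurred, gamma=1.85, ksize=15):
    """Closed-form kernel update of (14) in the gradient domain via FFT.

    Minimizes ||grad(I) conv K - grad(B)||^2 + gamma * ||K||^2 assuming
    circular boundaries, then crops, clips, and normalizes the kernel.
    """
    num = np.zeros(latent.shape, dtype=complex)
    den = np.full(latent.shape, gamma)
    for axis in (0, 1):                       # vertical and horizontal gradients
        gi = np.fft.fft2(np.roll(latent, -1, axis=axis) - latent)
        gb = np.fft.fft2(np.roll(blurred, -1, axis=axis) - blurred)
        num += np.conj(gi) * gb
        den += np.abs(gi) ** 2
    k = np.real(np.fft.ifft2(num / den))
    # Move the kernel from the origin to the image centre, crop its support,
    # clip negative entries, and normalize so that sum(K) = 1.
    k = np.fft.fftshift(k)
    c = np.array(k.shape) // 2
    r = ksize // 2
    k = k[c[0] - r:c[0] + r + 1, c[1] - r:c[1] + r + 1]
    k = np.maximum(k, 0.0)
    return k / max(k.sum(), 1e-12)
```

With a small γ and a synthetically blurred random image, the estimator recovers the ground-truth kernel up to the clipping and normalization step.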

Results and Analysis
Results of our algorithm are compared with benchmark blind image de-blurring methods. We evaluated our method on four different data-sets: natural [8], face [11], text [5], and low-illumination [10] images. It gives competitive results compared with techniques that specifically handle these types of images, and the algorithm is also tested on non-uniform blur. After experimenting with different combinations of parameters on different images, we fixed λ = µ = 0.005, γ = 1.85, and a maximum of 6 iterations for all image types, considering both quality and computational complexity.

[Figure: de-blurring comparison — (b) [15], (c) Xu and Jia [6], (d) Zhe Hu [5], (e) Jinshan [13], (f) Proposed.]
Quality analysis is performed on four different scenarios (natural, low-light, text, and face images). For natural images the data-set of [18] is used and for text images that of [9]; our algorithm achieves improved peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) compared with state-of-the-art methods, as shown in Fig. 4 for natural images.
Moreover, our algorithm handles ringing artifacts better than [4], [13]. Blurring is difficult to handle in facial images, as they contain few edges, but the proposed algorithm gives results very close to those of [11], which is explicitly designed to handle face images through exemplars, as shown in Fig. 2. Our algorithm also produces results comparable to methods specifically designed for de-blurring text images, see Fig. 3. Low-light images are the most difficult to handle due to pixel saturation, but the proposed algorithm produces results equivalent in PSNR to [10], which is specially designed for images captured in low light, as shown in Fig. 5.

[Figure: de-blurring comparison — (b) [17], (c) Xu et al. [15], (d) Jinshan [13], (e) Proposed.]

Table 1 summarizes the image quality comparison of the proposed technique with previous approaches. It is evident from Tab. 1 that the average PSNR and SSIM values of our algorithm are comparable to state-of-the-art image de-blurring techniques. The image restored with the dark channel prior also has lower energy; other methods [2], [8] achieve lower energy, but their performance on specific scenarios is not effective. The L0-regularized intensity term used in the proposed algorithm preserves contrast and helps in de-blurring text images. Compared with other L0-regularized intensity methods, the dark channel prior method must compute the dark channel and a lookup table, which increases computational complexity. To remove this bottleneck and accelerate the process, the Haar wavelet is used as a preprocessing step, which reduces the image dimensions while keeping the important features. The proposed technique reduces computational complexity and gives results comparable to [13]: it takes on average 16.1 seconds to de-blur an image, compared with 17 seconds for [14], on an Intel Core-i7 with 28 GB RAM. Figure 6 demonstrates the subjective evaluation of the proposed algorithm.
A total of 20 observers, 10 male and 10 female, aged between 23 and 30, participated in the subjective analysis.

Conclusion
In the dark channel based method, the dark channel of the blurred image is computed first. Then, to recover a clear image after regularization of the dark channel, the latent image and blur kernel are estimated using the half-quadratic splitting method and lookup tables. The algorithm reduces computational complexity by using the DWT as a preprocessing step. Moreover, it can also be applied to images with non-uniform blur. Most importantly, experimental results verify that the quality of the de-blurred images produced by the proposed algorithm is comparable to that of methods explicitly designed for handling low-light, face, and text blur.