Color demosaicking via fully directional estimation

Given a raw image captured by a single-sensor camera, the key task is to properly reconstruct the full-color image. This paper presents an effective demosaicking algorithm based on fully directional estimation for the Bayer color filter array pattern. The proposed method integrates smoothly with existing reconstruction implementations and outperforms purely horizontal and vertical estimation approaches in perceptual quality. Extending the target of existing methods, the proposed algorithm uses multiscale gradients in the green channel as diagonal information for auxiliary interpolation. Furthermore, two groups of weights are built: one from the horizontal and vertical directions, the other from the diagonal and anti-diagonal directions. The combined weight is better suited to representing neighborhood information. Another contribution is a better use of prior results: while calculating color differences of the same type, we divide the color-difference values into two interleaved parts, so that the estimates in the first part guide the subsequent color differences in the second part. This reduces the artifacts introduced by the interpolation procedure. Experimental results show that this adaptive algorithm is efficient in both objective and subjective quality measures.

Building on the directional interpolation scheme, later methods extended ACPI by employing directional weighted estimators. For example, Zhang proposed horizontal and vertical direction weights via linear minimum mean square-error estimation (Zhang and Wu 2005); in this demosaicking method, the larger the directional variance, the smaller the weight. Another type of early method was nonheuristic: the high frequencies of the green values primarily guided the interpolation of the red and blue channels (Gunturk et al. 2002). Later, the fusion of multiple methods was formulated as an optimization problem. By analyzing local color properties, linear minimum mean-square estimation and support vector regression were grouped into a unified scheme (Zhang et al. 2009). Based on high-frequency information preservation, an effective luminance for the three color channels was designed using the Fourier transforms of down-sampled signals (Lian et al. 2007). Two detailed comparisons were also provided for an early assessment of performance in well-known review papers (Li et al. 2008; Menon and Calvagno 2011).
In previous methods, the directional response is usually computed along consecutive narrow line edges, so many unstable weights can seriously mislead the estimation of the color difference. To address this problem, a new edge-sensing measure called the integrated gradient can effectively extract the gradient variation at borders; this edge indicator can serve as an individual guide in many subsequent demosaicking methods (Chung and Chan 2010). Considering the full pipeline of digital devices, prior knowledge from natural images can be leveraged to explore intrinsic correlations (Menon and Calvagno 2009). Recent work showed that gradients are the decisive factor for extracting directional information from digital images (Pekkucuksen and Altunbasak 2013). Multiscale gradients (MG) estimate the horizontal and vertical color differences by bringing more scales into the difference equation. Properly exploiting both intra- and inter-color correlations balances the color-difference results (Jaiswal et al. 2014). Interpolation errors based on geometric duality in the low-resolution image partially compensate for the missing information in demosaicking (Kim et al. 2014). After the initial interpolation is completed, refinement in the individual channels can significantly improve the final performance (Menon and Calvagno 2011). In fact, the principle of the smoothness of the color difference leads to weight fusion in the local image (Zhou et al. 2012). Recently, a family of algorithms based on residual interpolation (RI) was proposed (Kiku et al. 2013). It offers another interpolation structure and uses color residuals to interpolate all the missing points. Its succeeding versions (Kiku et al. 2014; Monno et al. 2015; Ye and Ma 2015) define the residuals in distinct ways. RI is particularly effective on images with weaker correlation between the color channels (Kiku et al. 2016).
Traditional demosaicking methods depend on the balance between the horizontal and vertical directions (Menon and Calvagno 2011; Pekkucuksen and Altunbasak 2013). In this section, we make a new attempt to estimate fully directional weights from the color difference and design two groups of weights: one from the horizontal and vertical directions, the other from the diagonal and anti-diagonal directions. Such a design is seldom seen in the aforementioned methods.

Green channel estimation
Since green pixels are the most numerous in the Bayer pattern, many demosaicking methods interpolate all green pixels first. Gradient-based interpolation along several directions at each pixel guides the interpolation along smooth edges. One way to avoid interpolating across an edge is to adopt the second-order Laplacian interpolation filter (Zhang and Wu 2005). For red and green rows, the directional interpolations at red and green points are computed with this filter, where the superscript − denotes the horizontal estimate. Similarly, we compute the vertical estimates R^|(i, j) and G^|(i, j) at the coordinate (i, j). The directional estimates yield the directional color differences, followed by the second-order color differential (Pekkucuksen and Altunbasak 2013). Computing multiscale color gradients over a narrow window is equivalent to averaging the color difference with a low-pass filter. Moreover, we define the second-order differential in the main diagonal direction in the same form.
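As an illustration, the second-order Laplacian-corrected horizontal green estimate can be sketched as follows. The Hamilton-Adams-style coefficients (1/2 for the green average, 1/4 for the Laplacian correction) are a common choice for this filter and stand in for the paper's exact coefficients:

```python
import numpy as np

def green_horizontal_estimate(cfa, i, j):
    # Average of the two horizontal green neighbours of a red/blue site,
    # corrected by the second-order (Laplacian) gradient of the co-sited colour.
    g_avg = 0.5 * (cfa[i, j - 1] + cfa[i, j + 1])
    laplacian = 0.25 * (2.0 * cfa[i, j] - cfa[i, j - 2] - cfa[i, j + 2])
    return g_avg + laplacian

# On a linear intensity ramp the Laplacian term vanishes and the estimate
# reduces to the plain neighbour average.
cfa = np.arange(25, dtype=float).reshape(5, 5)
est = green_horizontal_estimate(cfa, 2, 2)  # → 12.0
```

The vertical estimate is obtained by transposing the indexing; the correction term keeps the estimate from smearing across a sharp edge orthogonal to the interpolation direction.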
The second-order differential D^/_gr(i, j) in the anti-diagonal direction is defined similarly. At a green position, only green values are available in the diagonal directions for computing the differential information. Meanwhile, the color differences between green and blue can be obtained in the same way, as in Eqs. (1)-(5). The green-red color-difference estimates of the first step are alternately filtered with f = [1/4, 1/2, 1/4], where the operator · denotes the inner product of vectors; D^|_gr(i-2:i+2, j) and D^-_gr(i, j-2:j+2) are the column and row vectors consisting of D^|_gr and D^-_gr values, respectively. The weights for each direction (ω^-, ω^|) are calculated from the color-difference gradients in the horizontal and vertical directions, where ε is a small positive number that avoids a zero denominator and M_T normalizes the total weight. Because the horizontal and vertical weights simply decompose the edge into two directions, they are not sufficient to represent the edge shape. To better address this problem, we first refine the directional weights. Here, all weights are normalized to the [0, 1] interval by dividing by the sum ω for the sake of simplicity; in the subsequent sections, all calculated weights are normalized in the same way. Beyond the previous weight factors, supplementary information from the main diagonal and anti-diagonal directions is used to make a better decision in textured regions. Because we add four new directional weights, the green-channel result can be improved by updating the initial color-difference estimates, for which another four directional weights are defined. In the next part, we finally obtain a green-red color difference for estimating the missing green values.
In experiments, we set w_1 = 0.6 and w_2 = 0.8. In our method, two groups of weights (one from the horizontal and vertical directions, the other from the diagonal directions) are built to better represent neighborhood information. The final estimate of the green value at a red pixel is then formed from the weighted directional estimates. For green/blue rows and columns, the same procedure is performed. After this step, all green pixels have been interpolated.
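A minimal sketch of the inverse-gradient weighting idea follows. The inverse-square form and the eps guard are assumptions for illustration; the paper's exact estimator may differ, but the governing rule is the one stated above: the larger the gradient along a direction, the smaller its normalized weight:

```python
import numpy as np

def directional_weights(grads, eps=1e-10):
    # Larger gradient along a direction -> smaller weight; eps avoids a
    # zero denominator, mirroring the role of epsilon in the text.
    w = 1.0 / (np.asarray(grads, dtype=float) + eps) ** 2
    return w / w.sum()  # normalise so the weights sum to one

# A strong vertical gradient (i.e. a horizontal edge) should favour the
# horizontal estimate: order is [horizontal, vertical, diagonal, anti-diagonal].
w = directional_weights([1.0, 10.0, 5.0, 5.0])
```

The fused color difference is then the dot product of these weights with the four directional color-difference estimates.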

Red/blue channel estimation at blue/red position
After the green channel is completed, we initially reconstruct the red and blue values at the corresponding blue and red pixels. Because the interpolations of the red and blue channels are similar at this stage, without loss of generality we discuss only the red-channel reconstruction. These red pixels are reconstructed over 7 × 7 windows. A similar weight matrix was also proposed in Pekkucuksen and Altunbasak (2013). The color difference between green and red is derived from the local window at the pixel coordinate (i, j). In essence, this matrix resembles a negative Laplacian filter, strengthening the center region.
where GR(i-3:i+3, j-3:j+3) is the G/R color difference and the operator ⊗ denotes elementwise multiplication of equal-size matrices followed by summation. Furthermore, each point can be refined using Eq. (14).
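The windowed estimate can be sketched as below. The uniform 7 × 7 mask used in the usage example is a hypothetical stand-in for the paper's negative-Laplacian-like weight matrix; only the window/mask mechanics are being illustrated:

```python
import numpy as np

def window_estimate(gr, i, j, mask):
    # Elementwise-multiply the 7x7 colour-difference window by the weight
    # mask and sum -- the "elementwise multiplication and summation"
    # operator described in the text.
    window = gr[i - 3:i + 4, j - 3:j + 4]
    return float(np.sum(window * mask))

# Sanity check: on a constant G-R plane, any mask that sums to one must
# return that constant.
gr = np.full((9, 9), 2.0)
mask = np.full((7, 7), 1.0 / 49.0)
cd = window_estimate(gr, 4, 4, mask)  # ≈ 2.0
```

A center-weighted mask biases the estimate toward the local neighborhood of (i, j) while still drawing support from the full window.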

Red and blue channel at green component position
In the following, we interpolate the red and blue values at the green positions, located at all even-coordinate pixels. This procedure includes two phases. In the first phase, we estimate the red value at green sites using the horizontal and vertical color differences; the points in the even rows are interpolated. This avoids overly sensitive estimation because the contribution from a single row or column is scaled by an inverse ratio. Since the red/green color difference lies along the interlaced diagonal direction, in the second phase we reconstruct the red/green color difference using the fully directional weights at the remaining locations (all odd-coordinate pixels). The detailed interpolation follows Eq. (16).
The interpolation is based on the prior values in the same color channel. After the pixels in the even rows are interpolated, the recovered color values serve the interpolation processing in the odd rows; these prior results further improve the performance. Eq. (18) is then re-applied at the corresponding positions. This refinement scheme elaborates the color difference based on the previous estimates.
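The two-phase, prior-guided ordering can be sketched as follows; simple neighbor averaging stands in for the paper's directional weighting, and only the interleaved scheduling (phase 2 consuming phase-1 output) is being illustrated:

```python
import numpy as np

def two_phase_fill(cd):
    # Phase 1: fill the even-row sites from their horizontal neighbours.
    # Phase 2: fill the odd-row sites, reusing the phase-1 results above
    # and below as priors.
    out = cd.copy()
    h, w = out.shape
    for i in range(0, h, 2):            # phase 1: even rows
        for j in range(1, w - 1, 2):
            out[i, j] = 0.5 * (out[i, j - 1] + out[i, j + 1])
    for i in range(1, h - 1, 2):        # phase 2: odd rows see phase-1 output
        for j in range(w):
            out[i, j] = 0.5 * (out[i - 1, j] + out[i + 1, j])
    return out

# A constant colour-difference plane must pass through unchanged.
filled = two_phase_fill(np.full((5, 5), 3.0))
```

Because phase 2 reads values written in phase 1, the second pass operates on denser, already-refined color differences, which is the source of the improvement described above.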
The missing blue values at the green positions are computed in the same way. The whole demosaicking processing chain is shown in Fig. 2; we show only the processing between the green and red channels, since the G-B estimation is identical. The matrices M_i, i = 1, 2, 3, 4, 5 correspond to the stages of this flowchart.

Experimental verification
In this section, we investigate the performance of the proposed demosaicking method on two well-known benchmark image sets. One source is the Kodak database, containing 24 film images captured and then digitized at a resolution of 512 × 768; we number these images from one to twenty-four, and they are illustrated in Fig. 3. The other source is the McMaster set, containing 18 color images of size 512 × 512, shown in Fig. 4. We sample the images according to the Bayer pattern to obtain a mosaic image, reconstruct them with the different demosaicking techniques, and compare the interpolated images with the originals. Menon and Calvagno (2011) systematically investigated the performance of more than ten methods on the Kodak data set in previous work. Here, we select some representative algorithms and compare them with the proposed algorithm, including directional linear minimum mean square-error estimation (DL) (Zhang and Wu 2005), alternating projections (AP) (Gunturk et al. 2002), adaptive filtering (AF) (Lian et al. 2007), integrated gradients (IGD) (Chung and Chan 2010), regularization approaches to demosaicing (RAD) (Menon and Calvagno 2009), and the state-of-the-art multiscale gradients (MG) (Pekkucuksen and Altunbasak 2013) and residual interpolation (RI) (Kiku et al. 2016) algorithms. Note that we have implemented the MG method and found a slight performance difference compared with the results reported in Pekkucuksen and Altunbasak (2013), while the average PSNR values coincide exactly with the MG method. Since this implementation ignores the processing of pixels at the border, we exclude the pixels whose distance to the border is less than 10 pixels.

Fig. 2 Demosaicking processing chain. Two directions mean the horizontal and vertical directions; four directions mean the horizontal, vertical, and two diagonal directions.
We evaluate these algorithms using the objective quality metric color peak signal-to-noise ratio (CPSNR) and the structural similarity index (SSIM) (Wang et al. 2004). CPSNR is calculated as CPSNR = 10 log10(255^2 / CMSE). The MG results for the Kodak image set are quoted from Pekkucuksen and Altunbasak (2013). For the McMaster dataset, the performance of RI is the best, while the proposed method achieves the best performance on the color-difference series. Compared with PSNR, which is a statistical average quality measure, SSIM achieves a high correlation with human perception of image quality, as it is designed on the basis of the characteristics of the human visual system. To compute SSIM, we use the code provided by the original authors with default parameters and average the three color-channel values. Tables 3 and 4 show that the average SSIM values of the proposed algorithm outperform the comparison methods.
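The CPSNR formula above can be written directly in code; pooling the mean squared error over all three color channels gives the CMSE used in the definition:

```python
import numpy as np

def cpsnr(original, reconstructed):
    # CMSE pools the squared error over height, width and all three channels.
    diff = original.astype(float) - reconstructed.astype(float)
    cmse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / cmse)

# A uniform error of one grey level gives CMSE = 1, i.e. about 48.13 dB.
ref = np.zeros((4, 4, 3))
val = cpsnr(ref, ref + 1.0)
```

Unlike per-channel PSNR averaged afterwards, pooling the MSE first weights every channel error equally in a single score.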
Figure 5 shows a visual quality comparison of the local roof region in image No. 19 of the Kodak set for the various interpolation methods. Obvious color artifacts are visible with the other methods, whereas the demosaicked image of the proposed method is only slightly blurred. On the whole, the proposed method produces the most pleasing visual quality. Table 5 compares the computational complexity of the algorithms. The simulations were conducted on the Matlab platform on a desktop PC (Intel i7-2600 CPU). The proposed method is slower than MG because it builds its improvements on MG and retains most of MG's architecture; AF is the fastest of all the algorithms.

Conclusion
In this paper, an efficient demosaicking method based on fully directional estimation is developed. The weighting parameters adopted here inherit information from eight directions. Unlike other standard weight-allocation algorithms, the new approach adaptively adjusts to the local interpolation and the optimization target. The proposed method integrates the weight-allocation interpolation and finally performs a complete demosaicking application. At the same time, the images produced by the proposed approach are perceptually better than those produced without priority estimation. Experimental results show that the proposed method is more efficient than methods such as DL, AP, AF, IGD, and RAD, as well as the state-of-the-art MG and RI algorithms. The PSNR and SSIM results confirm that the proposed method is valid and achieves high accuracy and good results in application.