Poisson Denoising for Astronomical Images

A denoising scheme for astronomical color images and videos corrupted with Poisson noise is proposed. The scheme employs the concept of Exponential Principal Component Analysis together with the sparsity of image patches. The RGB color space is converted to YCbCr, and K-means++ clustering is applied to the luminance component only. The same cluster centers are used for the chromatic components to improve computational efficiency. For videos, exploiting both spatial and temporal correlations improves the denoising. Simulation results verify the significance of the proposed scheme both visually and quantitatively.


Introduction
Recent advancements in astronomy and digital imaging systems call for the development of more sophisticated image processing algorithms. Acquisition of astronomical images in the low-photon-count regime is a dominant source of Poisson noise. Images corrupted by intensity-dependent Poisson noise (whose mean and variance are equal) cannot be accurately modelled with a Gaussian distribution.
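Because a Poisson variable's variance equals its mean, the noise level scales with the signal itself. A minimal simulation (the constant test intensity and sample count are illustrative choices, not values from the paper) makes this intensity dependence concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a low-photon-count acquisition: the observed count at each
# pixel is Poisson distributed with mean equal to the clean intensity.
clean = np.full(100_000, 4.0)            # constant true intensity (peak = 4)
noisy = rng.poisson(clean).astype(float)

# Mean and variance coincide for Poisson data, so the noise is
# intensity dependent -- unlike additive Gaussian noise of fixed variance.
print(noisy.mean(), noisy.var())         # both close to 4
```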
In [1], Poisson noise is converted into Gaussian noise with unit variance using the Variance Stabilization Transform (VST). Further improvements were made by applying Bayesian multiscale likelihood models [2] and $\ell_1$-penalized hypothesis testing [3]. However, these schemes [1-3] give significantly degraded results under high Poisson noise. The closed-form approximation of the unbiased inverse Anscombe transform yields an increased error peak for Poisson noise [4, 5]. The structured dictionary learning approach exploiting the self-similarity of image patches [6] requires a significant amount of computation, especially for color images and videos.
In [7], multiscale Poisson denoising based on shrinkage operators designed for the $\ell_2$ error is proposed; however, the $\ell_2$ distance is not well suited to Poisson-corrupted measurements. In [8], a spatially adaptive total variation framework with split Bregman optimization is proposed for Poisson denoising to handle the high computational load, but it yields poor results in the low-photon-count regime, which is usually the case for astronomical images. The directional lapped orthogonal transform overcomes the limitation of diagonal edges and textures, which separable wavelets fail to denoise efficiently [9]. A computationally efficient technique for estimating unknown noise parameters requires a blank image captured under the same mechanical shutter [10]. In a nutshell, most of the above techniques either suffer from high computational complexity or give poor results under high Poisson noise.
A denoising scheme for astronomical color images and videos corrupted with Poisson noise is proposed. The scheme first converts the color space from RGB to YCbCr and applies $K$-means++ clustering to the luminance component only. The same cluster centers are used for the chromatic components to improve computational efficiency. The framework of Exponential Principal Component Analysis (EPCA) [11] is employed, which is better suited to high Poisson noise. The proposed algorithm addresses several drawbacks of existing schemes while inheriting their main strengths. For instance, a patch-based approach is used to exploit redundancies in the image, which has been shown to improve the performance of image denoising algorithms [6]. Similarly, clustering algorithms are used in different denoising schemes to group similar patches together [5]. Note that careful clustering is particularly important under high Poisson noise (where only a few photons are acquired by the detector). Likewise, taking the temporal correlation of videos into account improves the performance of restoration algorithms [12, 13]. Simulation results on different images and videos verify the significance of the proposed scheme.

Proposed Technique
Let $Y_c$ be the observed image from the acquisition device, of dimensions $M \times N$, where $i = 1, 2, \ldots, M$ indexes rows, $j = 1, 2, \ldots, N$ indexes columns, and $c \in \{\text{red}, \text{green}, \text{blue}\}$ indexes color bands. For $n \in \{1, \ldots, MN\}$, let $Y_c[n]$ (the column-stacked representation of $Y_c$) be the observed pixel at location $n$, whose entries are independent Poisson random variables with true value $X_c[n] > 0$, which is to be estimated. Given the clean image $X_c[n]$, the likelihood of observing the noisy Poisson image $Y_c[n]$ can be written as

$$P\big(Y_c[n] \mid X_c[n]\big) = \frac{X_c[n]^{\,Y_c[n]}\, e^{-X_c[n]}}{Y_c[n]!}. \tag{1}$$

The noisy image is converted into the luminance-chrominance (YCbCr) color space by the standard RGB-to-YCbCr matrix transformation. Let $Y_\ell$ denote the noisy image in YCbCr color space, where $\ell \in \{Y, Cb, Cr\}$; the matrix $V_\ell$ of vectorized patches is then constructed from the $\ell$ component by extracting all overlapping patches of size $\sqrt{m} \times \sqrt{m}$, so that $V_\ell[i, j]$ represents the $j$th pixel of the $i$th patch for $1 \le j \le m$, and the number of patches is $P = (M - \sqrt{m} + 1)(N - \sqrt{m} + 1)$ (accounting for image borders). Let $\bar{V}_\ell$ be defined for the clean image as $V_\ell$ is for the noisy one. The matrices $V_{Cb}$ and $V_{Cr}$ are built analogously for the $Cb$ and $Cr$ components.
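The color conversion and patch extraction above can be sketched as follows. The BT.601-style transform matrix is an assumption (the exact matrix is not reproduced here), and `sliding_window_view` generates exactly the $P = (M-\sqrt{m}+1)(N-\sqrt{m}+1)$ overlapping patches:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range BT.601-style RGB -> YCbCr (an assumed standard transform;
    the paper's exact matrix may differ)."""
    T = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = img @ T.T
    ycc[..., 1:] += 0.5 * 255          # offset chroma for images in [0, 255]
    return ycc

def extract_patches(channel, m_sqrt):
    """All overlapping sqrt(m) x sqrt(m) patches, one vectorized patch per
    row; yields P = (M - sqrt(m) + 1)(N - sqrt(m) + 1) patches."""
    windows = np.lib.stride_tricks.sliding_window_view(channel, (m_sqrt, m_sqrt))
    return windows.reshape(-1, m_sqrt * m_sqrt)

img = np.random.rand(32, 32, 3) * 255   # stand-in RGB image
ycc = rgb_to_ycbcr(img)
V_Y = extract_patches(ycc[..., 0], 4)   # 4x4 patches from the Y channel
print(V_Y.shape)                        # (29 * 29, 16) = (841, 16)
```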
The $K$-means++ algorithm (more accurate and efficient than conventional $K$-means seeding) is then applied to the matrix $V_Y$ to obtain clusters $k = 1, 2, \ldots, K$; let $n_k$ denote the size of the $k$th cluster. Since $V_Y$ (the luminance component) contains most of the valuable information of the image (edges, shades, texture, etc.), the same cluster centers are then assigned to the $Cb$ and $Cr$ components for computational efficiency of the algorithm.
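A pure-NumPy sketch of this clustering step, with a hand-rolled $K$-means++ seeding (a library implementation would normally be used; the patch matrices here are random stand-in data, not astronomical imagery):

```python
import numpy as np

def kmeans_pp_init(X, K, rng):
    """k-means++ seeding: first center uniform; each later center is drawn
    with probability proportional to its squared distance to the nearest
    center chosen so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, K, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = kmeans_pp_init(X, K, rng)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if np.any(labels == k):          # guard against empty clusters
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
V_Y = rng.random((400, 16))    # vectorized luminance patches (stand-in data)
V_Cb = rng.random((400, 16))   # matching chroma patches

labels, _ = kmeans(V_Y, K=8)
# Reuse the luminance assignment for the chroma components: the i-th Cb
# patch joins the cluster of the i-th Y patch -- no second clustering pass.
cb_cluster_0 = V_Cb[labels == 0]
```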
Different algorithms have used PCA to project the extracted noisy image patches onto a low-dimensional space. Let $V^{(k)}_{i,j}$ be the $j$th pixel of the $i$th patch in cluster $k$; using PCA the patch matrix can be written as

$$V^{(k)} \approx U^{(k)} A^{(k)}, \tag{4}$$

where $U^{(k)}$ is the coefficient matrix of size $n_k \times l$ and $A^{(k)}$ is the dictionary atom matrix of size $l \times m$; $n_k$ is the cluster size (number of patches in cluster $k$) and $l$ is the number of principal axes to be retained. However, in this paper the framework of [11] is used, as it is better suited to exponentially distributed data. Equation (4) can then be written as

$$V^{(k)} \approx \exp\big(U^{(k)} A^{(k)}\big), \tag{5}$$

where the exponential operator is applied element-wise.
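The difference between the two factorizations is only where the low-rank structure lives: EPCA factorizes the log-intensities. The toy sketch below minimizes the resulting negative Poisson log-likelihood by plain gradient descent (standing in for the fast Newton solver used in the paper; dimensions and step size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# One cluster of vectorized noisy patches: n_k patches of m pixels each.
n_k, m, l = 50, 16, 4                             # l = retained axes
X_true = np.exp(rng.normal(0.5, 0.3, (n_k, m)))   # clean intensities
Y = rng.poisson(X_true).astype(float)             # Poisson observations

# EPCA assumes the *log*-intensity is low rank: X ~= exp(U @ A), with the
# exponential taken element-wise. Dropping the constant log(Y!) term, the
# negative Poisson log-likelihood of the cluster is:
def poisson_loss(U, A, Y):
    Z = U @ A
    return float(np.sum(np.exp(Z) - Y * Z))

U = 0.1 * rng.normal(size=(n_k, l))
A = 0.1 * rng.normal(size=(l, m))
lr = 1e-3
loss_start = poisson_loss(U, A, Y)
for _ in range(200):
    G = np.exp(U @ A) - Y                         # gradient w.r.t. (U @ A)
    U, A = U - lr * (G @ A.T), A - lr * (U.T @ G)
loss_end = poisson_loss(U, A, Y)
print(loss_start > loss_end)                      # descent lowers the loss
```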
After neglecting the constant term, minimizing the negative Poisson log-likelihood under the approximation (5) gives the loss function for the $k$th cluster:

$$L\big(U^{(k)}, A^{(k)}\big) = \sum_{i,j} \Big[ \exp\big(U^{(k)} A^{(k)}\big)_{i,j} - V^{(k)}_{i,j} \big(U^{(k)} A^{(k)}\big)_{i,j} \Big]. \tag{6}$$

Equation (6) is solved using the fast Newton method [14], with $U^{(k)}$ and $A^{(k)}$ initialized randomly. The $t$th row of $U^{(k)}$ (denoted $u^{(k)}_{t,:}(r)$ at iteration $r$) for cluster $k$ is updated as

$$u^{(k)}_{t,:}(r+1) = u^{(k)}_{t,:}(r) - \Big[\exp\big(u^{(k)}_{t,:}(r)\, A^{(k)}\big) - V^{(k)}_{t,:}\Big] \big(A^{(k)}\big)^{\top} \Big(A^{(k)} D_t \big(A^{(k)}\big)^{\top}\Big)^{-1}, \tag{7}$$

where $D_t = \operatorname{diag}\Big(\exp\big((U^{(k)}A^{(k)})_{t,1}\big), \exp\big((U^{(k)}A^{(k)})_{t,2}\big), \ldots, \exp\big((U^{(k)}A^{(k)})_{t,m}\big)\Big)$, evaluated at iteration $r$. The rows of $U^{(k)}$ and the columns of $A^{(k)}$ are updated alternately in this way. The stopping criterion is either a maximum number of iterations (20) or a defined error threshold (0.01), whichever is reached first. Once (6) is solved, the denoising estimate for cluster $k$ is

$$\hat{V}^{(k)} = \exp\big(U^{(k)} A^{(k)}\big), \tag{8}$$

where $\bar{V}^{(k)}$ denotes the clean original data and $\hat{V}^{(k)}$ its denoised estimate. After all $K$ clusters are denoised, the denoised patches are projected back onto pixels; since each pixel is represented in multiple patches, all estimates from the patches containing a given pixel are uniformly averaged. The denoised image $\hat{I}$ is then converted back to the RGB domain.

For videos, the spatial patches are extended to spatio-temporal cubes by stacking the patch at each spatial location across consecutive frames. After concatenating every cube row-wise, patch by patch, into a single row of the matrix $V_{i,j,f}$, entry $(i, j)$ gives the $j$th pixel of the $i$th cube in frame $f$; note that the spatial position wraps after every $m$th pixel, as these are the entries on the front face of the cube (of size $\sqrt{m} \times \sqrt{m}$). $K$-means++ clustering is then applied to these vectorized cubes, and the clusters are denoised using (6). After denoising, the cubes are projected back onto pixels by averaging, and the result is converted back to the RGB color space.
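The uniform-averaging projection from overlapping patches back to pixels can be written as a small routine (a sketch of the plain uniform average described above):

```python
import numpy as np

def patches_to_image(patches, img_shape, m_sqrt):
    """Project patches back onto pixels: every pixel appears in several
    overlapping patches, so all of its estimates are uniformly averaged."""
    M, N = img_shape
    acc = np.zeros(img_shape)   # running sum of patch estimates per pixel
    cnt = np.zeros(img_shape)   # number of patches covering each pixel
    idx = 0
    for i in range(M - m_sqrt + 1):
        for j in range(N - m_sqrt + 1):
            acc[i:i+m_sqrt, j:j+m_sqrt] += patches[idx].reshape(m_sqrt, m_sqrt)
            cnt[i:i+m_sqrt, j:j+m_sqrt] += 1
            idx += 1
    return acc / cnt

# Round trip: extracting patches and averaging them back reproduces the image.
img = np.arange(36, dtype=float).reshape(6, 6)
w = np.lib.stride_tricks.sliding_window_view(img, (3, 3)).reshape(-1, 9)
rec = patches_to_image(w, img.shape, 3)
print(np.allclose(rec, img))   # True
```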
Simulation Results

Figures 1-3 show a visual comparison of the existing and proposed techniques on three astronomical images (Disk, Nebula, and Jupiter) with different noise peaks. It can be observed that the proposed technique reconstructs a better output image than the existing techniques, even at low noise peak values. Tables 1 and 2 provide a quantitative analysis of the existing and proposed techniques using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) [20]. Note that the proposed technique provides better quantitative values than the existing state-of-the-art techniques.
Figure 5 shows the PSNR and SSIM values of individual frames for different noise peaks. It can be observed that the proposed technique provides better quantitative values for almost all frames compared with the existing state-of-the-art techniques. Table 3 shows the average quantitative results (PSNR and SSIM) over the video frames (311-325). For Figure 5, the Saturn video, created from Hubble images taken over a span of about 9 hours, is used. Note that the PSNR and SSIM of only frames 315 to 325 are shown in the paper, as the scenes in the video change little due to the slow motion of Saturn; consequently, the PSNR and SSIM values of the frames are almost constant.