A sensor-data-based denoising framework for hyperspectral images

Many denoising approaches extend image processing techniques to the hyperspectral cube structure, but take into account neither a sensor model nor the format of the recording. We propose a denoising framework for hyperspectral images that uses sensor data to convert an acquisition to a representation that facilitates noise estimation, namely the photon-corrected image. This photon-corrected image format accounts for the most common noise contributions and is spatially proportional to spectral radiance values. The subsequent denoising is based on an extended variational denoising model suited for Poisson distributed noise. A spatially and spectrally adaptive total variation regularisation term accounts for the structural composition of a hyperspectral image cube. We evaluate the approach on a synthetic dataset that guarantees a noise-free ground truth; the best results are achieved when the dark current is taken into account.

© 2015 Optical Society of America

OCIS codes: (100.2980) Image enhancement; (110.4280) Noise in imaging systems; (110.4234) Multispectral and hyperspectral imaging.

Received 7 Oct 2014; revised 11 Dec 2014; accepted 8 Jan 2015; published 26 Jan 2015. 9 Feb 2015 | Vol. 23, No. 3 | DOI:10.1364/OE.23.001938 | OPTICS EXPRESS 1938

References and links
1. H. Li and L. Zhang, "A hybrid automatic endmember extraction algorithm based on a local window," IEEE Trans. Geosci. Remote Sens. 49, 4223–4238 (2011).
2. X. Liu, S. Bourennane, and C. Fossati, "Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis," IEEE Trans. Geosci. Remote Sens. 50, 3717–3724 (2012).
3. H. Othman and S.-E. Qian, "Noise reduction of hyperspectral imagery using hybrid spatial-spectral derivative-domain wavelet shrinkage," IEEE Trans. Geosci. Remote Sens. 44, 397–408 (2006).
4. J. Martín-Herrero, "Anisotropic diffusion in the hypercube," IEEE Trans. Geosci. Remote Sens. 45, 1386–1398 (2007).
5. D. Letexier and S. Bourennane, "Noise removal from hyperspectral images by multidimensional filtering," IEEE Trans. Geosci. Remote Sens. 46, 2061–2069 (2008).
6. Q. Yuan, L. Zhang, and H. Shen, "Hyperspectral image denoising employing a spectral-spatial adaptive total variation model," IEEE Trans. Geosci. Remote Sens. 50, 3660–3677 (2012).
7. J. Yang and Y. Zhao, "Poisson-Gaussian mixed noise removing for hyperspectral image via spatial-spectral structure similarity," in 32nd Chinese Control Conf. (Xi'an, 2013), pp. 3715–3720.
8. X. Gong, B. Lai, and Z. Xiang, "A L0 sparse analysis prior for blind Poissonian image deconvolution," Opt. Express 22, 370–375 (2014).
9. F. Deger, A. Mansouri, M. Pedersen, J. Y. Hardeberg, and Y. Voisin, "A variational approach for denoising hyperspectral images corrupted by Poisson distributed noise," in Image and Signal Processing (Springer, 2014), pp. 106–114.
10. H. Zhang, W. He, L. Zhang, H. Shen, and Q. Yuan, "Hyperspectral image restoration using low-rank matrix recovery," IEEE Trans. Geosci. Remote Sens. 52, 4729–4743 (2014).
11. T. Skauli, "Sensor noise informed representation of hyperspectral data, with benefits for image storage and processing," Opt. Express 19, 13031–13046 (2011).
12. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D 60, 259–268 (1992).
13. T. Goldstein and S. Osher, "The Split Bregman method for L1 regularized problems," SIAM J. Imaging Sci. 2, 323–343 (2009).
14. E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems (Wiley, 1996).
15. T. Skauli, "An upper-bound metric for characterizing spectral and spatial coregistration errors in spectral imaging," Opt. Express 20, 918–933 (2012).
16. HySpex / Norsk Elektro Optikk AS, "Imaging spectrometer (user manual)," Tech. Rep. (2013).
17. T. Le, R. Chartrand, and T. J. Asaki, "A variational approach to reconstructing images corrupted by Poisson noise," J. Math. Imaging Vis. 27, 257–263 (2007).
18. P. Getreuer, "Rudin-Osher-Fatemi total variation denoising using Split Bregman," Image Process. On Line (2012).
19. R. Zanella, P. Boccacci, L. Zanni, and M. Bertero, "Efficient gradient projection methods for edge-preserving removal of Poisson noise," Inverse Probl. 25, 1–24 (2009).
20. M. D. Fairchild and G. M. Johnson, "Metacow: a public-domain, high-extended-dynamic-range, spectral test target for imaging system analysis and simulation," in Color Imaging Conf. (IS&T, 2004), pp. 239–245.
21. J. Padfield, "Library of illumination spectral power distributions," http://research.ng-london.org.uk/scientific/spd/. Accessed: 2014-12-10.
22. R. Shrestha, R. Pillay, S. George, and J. Y. Hardeberg, "Quality evaluation in spectral imaging – quality factors and metrics," J. Int. Colour Assoc. 12 (2014).
23. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13, 600–612 (2004).
24. J. Romero, A. García-Beltrán, and J. Hernández-Andrés, "Linear bases for representation of natural and artificial illuminants," J. Opt. Soc. Am. A 14, 1007–1014 (1997).


1. Introduction
Hyperspectral imaging (HSI) is affected by noise, which impacts the precision of all further processing steps, such as unmixing [1] or classification [2]. Noise is inevitable during acquisition and is caused at different stages in both the optics and the photodetector. An ongoing research challenge is to find appropriate image processing methods that reduce the influence of noise in a post-processing step. Most approaches adapt techniques from grey-level image processing and extend them to the needs of HSI. Othman and Qian [3] extended wavelet shrinkage denoising to a hybrid spatial-spectral wavelet shrinkage, Martín-Herrero [4] adjusted anisotropic diffusion for the HSI cube, Letexier and Bourennane [5] adapted a Wiener filter to HSI, and Liu et al. [2] used a higher order generalization of singular value decomposition. Yuan et al. [6] extended a variational denoising model to HSI using a spectral-spatial adaptive total variation (TV) semi-norm.
State-of-the-art approaches not only take the structural cube properties into account, but also adapt to the type of noise. In today's hyperspectral scanners, the most relevant noise source is photon noise, also known as shot noise. Yang and Zhao [7] propose a Poisson-Gaussian mixed model for HSI using a multistep approach including a principal component analysis transformation. Gong et al. [8] use a blind deconvolution to smooth HSI contaminated by Poisson noise. We recently proposed a variational approach for denoising HSI corrupted by Poisson distributed noise [9]. Zhang et al. [10] employed a low-rank matrix recovery, which can simultaneously remove Gaussian noise, impulse noise, dead pixels or lines, and stripes.
These current denoising approaches adapt to the type of noise and the structural properties of the image cube, but remain rather vague about the parameterisation, and do not clarify whether the HSI should be stored as radiometrically calibrated radiance values or as raw sensor output. These two questions are closely linked, as noise is a random process that can be well characterised using knowledge of the sensor characteristics. Skauli [11] analysed different image formats, such as the raw sensor response or calibrated spectral radiance (with the unit W sr⁻¹ m⁻² nm⁻¹), and proposed a representation that facilitates the use of physical noise estimates. The format accounts for the most important noise contributions, such as the random arrival of the photons and the contribution of the dark current, but neglects less relevant details, such as the non-uniformity of the sensor elements.

Fig. 1. Different stages of the proposed denoising framework for HSI. Knowledge of the sensor characteristics allows the conversion to a photon corrected image (presented in Section 3). This is a better representation to find an appropriate noise model and to estimate the corresponding parameters.
The proposed denoising framework for HSI (see Fig. 1) uses sensor data to transform the image to a photon corrected representation. This format is similar to the one proposed in [11], but accounts for the contribution of the dark current and does not use a constant weighting factor. In this format, the noise is mathematically described by a Poisson distribution, and the standard deviation can be estimated directly. For the denoising, we use a Rudin-Osher-Fatemi (ROF) [12] variational model and a Split Bregman optimisation [13]. The data term is adapted to a Poisson distributed noise, and the TV regularisation accounts for the structural properties of the HSI cube. Knowledge of the noise variance allows estimating a good parameterisation that weights the contributions of the data fidelity and regularisation terms. We convert the output to device independent spectral radiance values. The approach is deliberately evaluated only on a synthetic dataset; in contrast to a real acquisition, this ensures a noise-free ground truth.
The remaining paper is structured as follows: In Section 2, we introduce a basic signal model and derive the important noise contributions. Section 3 presents the transformation to a photon corrected image format, which is the foundation for the denoising process. Two formats are presented: one accounts for the contribution of the dark current, and a second, simpler one does not. In Section 4, the spatially and spectrally adaptive variational ROF model is described that we have previously presented in [9]. The evaluation in Section 5 uses a sensor model and a realistic parameterisation to evaluate the proposed denoising framework and to analyse the influence of the dark current. We conclude and propose future work in Section 7.

2. Hyperspectral noise model
HSI is affected by different noise sources, such as the random arrival of photons, the contribution of the dark current, readout noise, and rounding errors. A basic signal model [11, 14] can be applied to pushbroom- and whiskbroom-scanning sensors, and helps to identify the different noise contributions. Considering the spectral radiance values L[i, j] of a scene at a spatial location i and a spectral band j, the acquisition sensor will receive N_ph[i, j] photons, which can be calculated as

    N_ph[i, j] = L[i, j] · t A Ω Δλ λ[j] / (h c),    (1)

where t is the acquisition time, A the sensor aperture, Ω the solid angle of a single pixel, Δλ the spectral sampling, λ[j] the respective wavelength, h Planck's constant, and c the speed of light. These photons will excite the following number of photoelectrons

    N[i, j] = P(η[i, j] N_ph[i, j] + I_d[i, j] t) + δ_N,    (2)

where P(·) denotes a Poisson-distributed count with the given mean, and η[i, j] describes the quantum efficiency depending on the spatial and spectral location. It includes non-uniformities of the sensor and all signal losses in both the optics and the detector.
I_d[i, j] is the dark current and δ_N describes the read-out noise. The photoelectrons are multiplied by a constant gain factor g_f and discretized to the raw sensor values

    f_raw[i, j] = round(g_f N[i, j]).    (3)

This signal model implies a couple of assumptions. The light samples are supposed to be within the capacity of the photodetector. In Eq. (1), the dense spectral sampling is simplified to a constant energy at the centre of every band λ[j], and the spectral sampling Δλ is assumed to be constant over the acquisition range. The model is therefore only suited for devices that sample the spectral information densely. The following noise sources can be described within this sensor model.
Photon noise, or shot noise, is a fundamental physical limit and the dominating noise contribution in current hyperspectral sensors. It is caused by the random arrival of the photons as well as the random absorption at the photodetector, and can be described by a Poisson distribution of photoelectrons (Eq. 2). The standard deviation is σ_pn = √(N[i, j]).
Readout noise accounts for the variability in the transfer and amplification of the photoelectron signal. In Eq. (2) it is characterised as the additive zero-mean Gaussian term δ_N.
The noise contribution of the dark current is characterised by I_d[i, j]t in Eq. (2), as δ_N has zero mean. In many cases it is small compared to the actual photoelectron count, I_d[i, j]t ≪ η[i, j]N_ph[i, j]. However, for acquisitions in dark environments, such as in astronomy, the dark current is more dominant.

Digitization noise occurs when N[i, j] is multiplied by the gain factor g_f and converted to an integer value in Eq. (3).
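The chain of Eqs. (1)–(3) can be sketched numerically. The following is a minimal NumPy sketch of the forward model under the assumptions stated above; the function and parameter names are illustrative, not part of any sensor API.

```python
import numpy as np

H = 6.626e-34   # Planck's constant [J s]
C = 2.998e8     # speed of light [m/s]

def simulate_raw(L, lam, t, A, omega, dlam, eta, I_d, delta_n, g_f, seed=0):
    """Sketch of the forward sensor model, Eqs. (1)-(3).

    L      : spectral radiance [W sr^-1 m^-2 nm^-1], shape (M, B)
    lam    : band-centre wavelengths [m], shape (B,)
    dlam   : spectral sampling [nm] (constant, see the model assumptions)
    eta    : quantum efficiency eta[i, j], shape (M, B)
    I_d    : dark current, shape (M, B); delta_n: readout-noise std
    """
    rng = np.random.default_rng(seed)
    # Eq. (1): expected photon count per spatial/spectral element
    n_ph = L * t * A * omega * dlam * lam / (H * C)
    # Eq. (2): Poisson photoelectrons + dark current, plus Gaussian readout noise
    n = rng.poisson(eta * n_ph + I_d * t) + rng.normal(0.0, delta_n, L.shape)
    n = np.maximum(n, 0.0)          # negative readout excursions are truncated
    # Eq. (3): constant gain and discretization to raw sensor values
    return np.round(g_f * n).astype(np.int64)
```

Note that the Poisson draw is taken per element, so the noise level automatically scales with the local signal, which is the behaviour the following sections exploit.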
This model does not treat optical co-registration errors, such as keystone and smile distortions; such distortions are better described and compensated separately [15]. This paper focuses on the effects of the photon noise.

3. Photon corrected image
HSI are conventionally converted to radiance values. This format has the advantage of being device independent, and can be interpreted and processed without any prior sensor knowledge. An estimation of the spectral radiance L_n[i, j] can be calculated by applying Eqs. (1)–(3) in reverse order:

    L_n[i, j] = (h c / (t A Ω Δλ λ[j])) · (f_raw[i, j] / g_f − I_d[i, j] t) / η[i, j].    (4)

The estimated radiance L_n[i, j] is similar to the original sample L[i, j], but corrupted by noise. Although a noise description for L_n[i, j] is possible [11], the noise characteristics in the radiance format depend not only on the signal intensity, but also on the spatial location i and the spectral band j. A denoising model would become unnecessarily complicated and computationally expensive. Therefore, Skauli [11] proposed a corrected raw data format f_rc, especially suited for low-level operations such as denoising. It hides uninteresting details, while allowing access to important sensor properties. In contrast to the raw sensor data, it reduces the quantum efficiency to its spectral domain

    η_s[j] = (1/M) Σ_i η[i, j],    (5)

where i is the spatial location along a line, and M is the number of pixels in a single line. This can be justified, as the spectral variation of the quantum efficiency is much more significant than the spatial non-uniformity. In this format, the dark current is not accounted for. The corrected raw data f_rc[i, j] is spatially proportional to the radiance at any location i, described by

    f_rc[i, j] ≈ s_dw η_s[j] · (t A Ω Δλ λ[j] / (h c)) · L[i, j] = s_dw L[i, j] / k[j],    (6)

where s_dw is a constant weighting factor to increase the numerical precision during the digitization, and k[j] is a location invariant scaling factor. The corrected raw data f_rc can be efficiently estimated from raw sensor values, and the commercial push-broom scanner line HySpex has an option to save directly in this format [16].
The photon corrected image f_c that we use for the denoising process is an extension that accounts for the contribution of the dark current. To remain spatially proportional to radiance values, we add an average of the dark current,

    f_c[i, j] = f_rc[i, j] / s_dw + Ī_d t,    (7)

where Ī_d is the mean value of the dark current. Readout noise, as described in Eq. (2), is not accounted for, as the Gaussian distribution is assumed to have a zero mean value. We compensate the weighting factor s_dw, because we do not apply a digitization, and the correct magnitude allows a straightforward parameter estimation. In many acquisitions the contribution of the dark current seems insignificant. A simplified corrected photon image f_cs only corrects the scaling:

    f_cs[i, j] = f_rc[i, j] / s_dw.    (8)

All calculations for the corrected images f_rc, f_c and f_cs are fully reversible. Radiance values can be estimated from f_c by subtracting the offset of Eq. (7) and multiplying by the location invariant scaling factor k[j] of Eq. (6). The corrected images include all relevant noise contributions, which can be characterised by a single Poisson distribution. The noise contribution is independent of the location, and the standard deviation depends only on the signal value at every location,

    σ²[i, j] = f_c[i, j].    (9)

The corrected raw data f_rc[i, j] can be used alternatively as an input for the denoising framework; just the noise variance, Eq. (9), must be scaled accordingly.
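Assuming the relations given above (and NumPy), the conversion to f_c might be sketched as follows. The exact calibration procedure is device dependent, so the helper below is a hypothetical illustration, not the calibration routine of any particular scanner.

```python
import numpy as np

def to_photon_corrected(f_raw, g_f, I_d, t, eta):
    """Hypothetical sketch: raw sensor values -> photon corrected image f_c.

    Undoes the gain, removes the per-element dark current and the spatial
    part of the quantum-efficiency non-uniformity (keeping only the
    spectral QE of Eq. 5), then adds back the mean dark level (Eq. 7) so
    the result stays spatially proportional to radiance.
    """
    eta_s = eta.mean(axis=0)                 # Eq. (5): line-averaged spectral QE
    n_hat = f_raw / g_f - I_d * t            # subtract fixed-pattern dark current
    f_c = n_hat * (eta_s / eta) + (I_d * t).mean()
    return f_c

# Because f_c behaves like a Poisson count, the noise level can be read
# off directly at every location: sigma[i, j] = sqrt(f_c[i, j])  (Eq. 9).
```

With a spatially uniform quantum efficiency and a constant dark current, the dark subtraction and the added mean cancel exactly, so the conversion reduces to undoing the gain.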

4. Total variation denoising
The measurement f_c is corrupted by a Poisson distributed noise. Let u be the noise-free image that we want to reconstruct. This reconstruction benefits from the spatial and spectral relationship of neighbouring locations. Instead of analysing a single point, as in the previous sections, we now assume a HSI cube; a push-broom scanner is required to acquire multiple lines. Both f_c and u are of dimensions M_1 × M_2 × B, in which M_1 and M_2 are the spatial dimensions of the image, and B represents the number of spectral bands. Noise corruption is an ill-posed problem, and a reconstruction is often based on a regularisation approach. The presented denoising approach [9] is based on the ROF model [12], which is successful at denoising. Traditionally these models assume Gaussian white noise, but they have been adapted to Poisson distributed noise by Le et al. [17]. A reconstructed image û can be described as

    û = argmin_u ‖u‖_TV(H) + β ∫_Ω (u − f_c log u) dx,    (10)

where Ω is the image domain of the HSI. The first term ‖·‖_TV(H) is the TV semi-norm that serves as a regularisation term. The second term is the data fidelity, in which the logarithmic component accounts for the Poisson distributed noise in f_c, as described in [9, 17]. Both components are balanced by the regularisation parameter β. The TV semi-norm permits a stronger denoising in smooth areas, while preserving edges and structures. A spatial-spectral adaptive TV semi-norm (SSATV) performs best for HSI [6, 9]:

    ‖u‖_SSATV = Σ_i W_i ‖G_i‖₂,    (11)

where G_i collects the gradients of all bands at a single location i,

    G_i = (∇_x u[i, 1], …, ∇_x u[i, B], ∇_y u[i, 1], …, ∇_y u[i, B]),    (12)

with ∇_x and ∇_y being the discrete horizontal and vertical derivatives in the image plane M_1 × M_2. Minimising ‖G_i‖₂ discourages large oscillations in the reconstructed image. The weighting factor W_i in Eq. (11) helps to preserve structure: it allows a stronger denoising in comparably smooth regions, and a weaker denoising at sharp edges.
A discrete ROF model of Eq. (10) is used for the variational denoising approach; it preserves the structure of a HSI and accounts for the appropriate Poisson distributed noise. It is denoted as

    û = argmin_u Σ_i W_i ‖G_i‖₂ + β Σ_{i, j} (u[i, j] − f_c[i, j] log u[i, j]).    (14)

To efficiently solve this minimization, a Split Bregman optimisation [13, 18] can be applied. The unconstrained minimization is split into constrained problems that can be solved more easily. The complexity is hereby reduced to O(M²), M being the number of voxels in the HSI cube.
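The SSATV semi-norm couples all bands at each pixel through one stacked gradient vector. A minimal NumPy sketch, using forward differences and an optional weight (the function name is illustrative):

```python
import numpy as np

def ssatv(u, W=None):
    """Spatial-spectral adaptive TV semi-norm of an HSI cube u (M1, M2, B).

    The horizontal and vertical forward differences of all B bands are
    stacked into one gradient vector G_i per spatial location; its
    Euclidean norm couples the bands, and a per-location weight W_i
    permits stronger smoothing in flat regions.
    """
    gx = np.diff(u, axis=1, append=u[:, -1:, :])   # discrete horizontal derivative
    gy = np.diff(u, axis=0, append=u[-1:, :, :])   # discrete vertical derivative
    G = np.concatenate([gx, gy], axis=2)           # all bands' gradients per pixel
    norm = np.sqrt((G ** 2).sum(axis=2))           # ||G_i||_2 at every location
    return float((norm if W is None else W * norm).sum())
```

A constant cube has a semi-norm of zero, and any spatial variation in any band increases it, which is exactly the oscillation penalty described above.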

4.1. Parameter estimation
An appropriate parameterisation is important for a good denoising result. When β in Eq. (14) is zero, only the regularisation term is accounted for, and the result is over-smoothed. For a too large β, the data term dominates, and the resulting image remains similar to the noisy image f_c. The parameterisation depends on both the noise level and the image. Even with knowledge of the standard deviation, there is no closed form to estimate β, and a meta-optimisation has to be applied. For Gaussian noise corrupted images, the discrepancy principle is applied [17–19]. The principle states that the mean squared error between the reconstruction and the noisy data should be equal to the variance of the noise. A proposed adaptation to Poisson noise uses the error of the data term, and optimizes it to match the mean variance of the image [17, 18].
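Such a meta-optimisation can be sketched as a bisection on β. The sketch below uses the classical (Gaussian) discrepancy for brevity; the Poisson variant of [17, 18] would replace the squared error by the data-term error. `denoise` is a stand-in for any solver of the variational model, not a specific implementation.

```python
import numpy as np

def tune_beta(denoise, f_c, target, beta_lo=1e-4, beta_hi=1e2, iters=25):
    """Meta-optimisation of the regularisation weight beta by bisection.

    `denoise(f_c, beta)` stands for any solver of the variational model;
    `target` is the noise variance the reconstruction error should match
    (for the photon corrected image, Eq. (9) gives a direct estimate).
    A larger beta keeps the result closer to the noisy input, so the
    discrepancy decreases monotonically in beta and bisection applies.
    """
    for _ in range(iters):
        beta = np.sqrt(beta_lo * beta_hi)      # bisect on a log scale
        err = np.mean((denoise(f_c, beta) - f_c) ** 2)
        if err > target:
            beta_lo = beta                     # over-smoothed: strengthen data term
        else:
            beta_hi = beta                     # under-smoothed: strengthen regulariser
    return np.sqrt(beta_lo * beta_hi)
```

The log-scale bisection is a design choice: useful values of β typically span several orders of magnitude, so a geometric midpoint converges much faster than an arithmetic one.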

5. Experimental evaluation
Many existing denoising approaches have been evaluated on real HSIs as ground truth (GT), to which Gaussian noise of different intensities is added [3, 6, 7]. Such an evaluation does not necessarily reflect a realistic noise contribution. Furthermore, the GT itself cannot be assumed to be noise free. Therefore, we decided to evaluate the framework only on synthetic datasets, based on the sensor model described in Section 2. To be as realistic as possible, we use a non-uniform quantum efficiency and dark current from a real pushbroom HSI scanning device. The spectral variations of the quantum efficiency and of the illumination automatically lead to different noise levels in different spectral bands. We include readout noise and digitization noise. The first aim of this evaluation is to quantify the denoising performance of the proposed framework. We also investigated the influence of the dark current by comparing the two image formats f_c, Eq. (7), and f_cs, Eq. (8).

5.1. Generating synthetic datasets
The computer-generated Metacow image [20] has its origins in the field of colour imaging. The image is a noiseless, high-contrast HSI. It shows 24 cows with different spectral surfaces, and is freely available. We used an image size of 600 px × 400 px and 70 spectral bands from 415 nm to 760 nm with Δλ = 5 nm. The reflectance values are scaled between 0 and 1, and multiplied by different, spatially uniform illuminants, see Fig. 2(a). For simplicity, the spectral irradiance is directly interpreted as spectral radiance values. The resulting image L is shown in Fig. 2(b). We simulate a pushbroom scanner according to our sensor model. To be as realistic as possible, we use the parameterisation of a real pushbroom scanner, the HySpex VNIR 1600 [16], and adapted the dimensions accordingly. We assumed an aperture of A = 0.0008 m² and an integration time t = 0.08 s. The solid angle is the same for all pixels, Ω = 7.03 × 10⁻⁸ sr (obviously with different orientations). The quantum efficiency η[i, j] is shown in Fig. 3(a), and includes effects of the optics and the photodetector. The dark current I_d[i, j]t is shown in Fig. 3(b), and has a variance of Var(I_d[i, j]t) = 6.5. The constant offset is measured in a laboratory environment at room temperature and averaged over 200 measurements. In Eq. (2) we added readout noise with zero mean and the same variance as the fixed-pattern dark current. This ensures that only few locations become negative due to the readout noise; negative values require a truncation to zero, which is a further anomaly. A Poisson distributed noise is then applied to the result of Eq. (2). This noise distribution has no parameterisation, and depends only on the magnitude of each value. Finally, the gain factor g_f of Eq. (3) is applied to obtain the raw sensor values.

Fig. 2. The Metacow test image [20]. The image width is 600 px and every cow is 100 px × 100 px. In total there are 24 cows in different colours. For this visualization, the bands 40, 30 and 9 are assigned to the red, green and blue channels.
The synthetic CIE D65 is a standardized daylight illuminant with a colour temperature of approximately 6500 K, and the CIE illuminant A represents a standardized tungsten filament lamp. Additionally, we used the spectral power distribution (SPD) of a GE FC8T9/CW lamp with 4100 K, which is available online [21]. Further datasets were generated by weighting the contribution of the dark current. We neither modified the dark current, as these values are device and temperature dependent, nor increased the acquisition time. Instead, we multiplied each illuminant by a constant factor g_l. A brighter illumination leads to a larger number of photoelectrons, and the contribution of the acquisition becomes larger compared to the contribution of the dark current. This simulation does not include a saturation limit for the sensor, and the signal-to-noise ratio increases along with the illumination. In total we generated nine datasets, based on three different illuminants at three different intensities.
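The nine datasets arise from a simple outer loop over illuminants and intensity factors; a sketch, where the illuminant SPDs are placeholders and the helper name is illustrative:

```python
import numpy as np

def build_datasets(reflectance, illuminants, gains=(0.5, 1.0, 5.0)):
    """Build radiance images for every illuminant/intensity combination.

    reflectance : (M1, M2, B) values in [0, 1]
    illuminants : dict name -> SPD of shape (B,)
    gains       : intensity factors g_l applied to the illuminant
    """
    datasets = {}
    for name, spd in illuminants.items():
        for g_l in gains:
            # spatially uniform illuminant, scaled by g_l and interpreted
            # directly as spectral radiance; broadcasting multiplies every
            # pixel's reflectance spectrum band-wise
            datasets[(name, g_l)] = reflectance * (g_l * spd)
    return datasets
```

The g_l values 0.5 and 5 correspond to the low-photon and bright-illumination cases analysed in the results section.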

5.2. Hyperspectral image formats
We compared four approaches, based on different HSI formats, and a noisy image. We applied the denoising as described in Section 4.1 directly to the radiance image L_n, to show the benefit of a converted image format. The two proposed corrected data formats f_c and f_cs are calculated as described in Section 3. Additionally, we applied the denoising directly to the true number of photoelectrons N, Eq. (2). N includes all non-linearities, and is available during the simulation, but not necessarily during a real acquisition.
Referring to the framework in Fig. 1, the results are converted to radiance values after the denoising process. The approaches are denoted L_L for the denoising applied directly to radiance values, without a conversion of the data format, and L_c and L_cs for a denoised HSI based on the corrected data format f_c and the simplified variant f_cs, respectively. In the same manner, L_t describes a denoised radiance image based on the true number of photoelectrons. Following the previous denotations, L_n is the noisy HSI, and the GT is the initial spectral radiance image L, Fig. 2(b).

5.3. Evaluation metrics
Different metrics for a quality evaluation of HSI have been proposed [22]. To evaluate the denoising framework, we use the following three metrics. The peak signal-to-noise ratio (PSNR) between the GT and an estimation L̂ indicates how well the signal is reconstructed:

    PSNR(L, L̂) = 10 log₁₀( max(L)² / MSE(L, L̂) ),    (16)

where MSE denotes the mean squared error. Additionally, we use the structural similarity (SSIM) index [23] to evaluate for visual artifacts that might have been introduced in the course of the denoising. It is calculated as

    SSIM(L, L̂) = (2 μ_L μ_L̂ + c₁)(2 σ_LL̂ + c₂) / ((μ_L² + μ_L̂² + c₁)(σ_L² + σ_L̂² + c₂)),    (17)

where μ is the average, σ² is the variance and σ_LL̂ the covariance, all in a local neighbourhood (we use 5 × 5 pixels). The constants c₁ and c₂ stabilize the division with a weak denominator and are set to a fixed ratio of the maximum image value. PSNR is well defined on an image and a HSI cube, while SSIM is calculated band-wise and averaged for a global value. For hyperspectral images, the spectral feature is very important. The goodness of fit coefficient (GFC) [24] describes how well a single spectral curve is preserved:

    GFC(p, p̂) = |Σ_j p[j] p̂[j]| / (√(Σ_j p[j]²) √(Σ_j p̂[j]²)),    (18)

where p is the radiance spectrum of a single pixel in the HSI cube. Analyzing several locations allows a statistical evaluation of the different approaches.
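Two of the three metrics are short enough to sketch directly (SSIM is omitted here, since it needs local windows; ready implementations exist, e.g. `structural_similarity` in scikit-image):

```python
import numpy as np

def psnr(L, L_hat):
    """Peak signal-to-noise ratio over the whole cube, Eq. (16)."""
    mse = np.mean((L - L_hat) ** 2)
    return 10.0 * np.log10(L.max() ** 2 / mse)

def gfc(p, p_hat):
    """Goodness of fit coefficient [24] between two spectra."""
    return np.abs(p @ p_hat) / (np.linalg.norm(p) * np.linalg.norm(p_hat))
```

Note that the GFC is invariant to a constant scaling of either spectrum; it judges the shape of the spectral curve, which is why the minimum GFC over all pixels is reported alongside the average in the results.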

6. Results and discussion
The denoising was applied with the best parameterisation to the datasets described in Section 5.1, and the different HSI formats described in Section 5.2 are compared to each other.
The results for PSNR are shown in Table 1. The three illuminants have different noise levels, and the PSNR increases with the intensity factor g_l. The proposed photon corrected format L_c and the true photoelectron count L_t show quite similar results. Both take the dark current into account, and are notably more effective than the simplified L_cs. In most cases there is a marginal advantage for L_c. A reason for this is the conversion to radiance values for the evaluation.
The proposed format f_c is spatially proportional to spectral radiance; therefore, the results of the spatially adaptive TV denoising model are preserved in L_c. In Table 2, we observe that the results of PSNR and SSIM are in agreement for the different illuminants and intensities. The denoising improves both the structure and the PSNR of an HSI for all formats. At lower light intensities the L_L format shows only a poor performance. This HSI format applies the SSATV denoising directly to the radiance values, and the conversion to the photon corrected image format is skipped. The mathematical denoising model assumes a Poisson distributed noise, but this assumption does not hold in L_L, due to the radiance format and the different noise contributions.
A band-wise evaluation for the different illuminants in Fig. 4 shows different noise intensities in different bands. These varying noise intensities are a common phenomenon in HSI, caused by the spectral variation of the illuminant and the quantum efficiency. L_c preserves varying spectral features better, which can best be seen in Fig. 4(c). As the evaluation is performed on radiance values, the formats that remain proportional to L benefit more strongly from the SSATV denoising model. We evaluate the formats for different light intensities, and therefore multiply all illuminants by a constant factor g_l = 0.5 to simulate a low-photon environment. Accordingly, we increase the illumination with g_l = 5. As expected for a darker illumination, the difference between L_c and L_cs is amplified, and it is consequently more important to account for the dark current in a low-photon environment. The noise level is generally higher, and the effect of the denoising is stronger. For brighter illuminations, we observe that there is practically no difference between the image formats; the values of SSIM and PSNR are closer to the noisy HSI. For lower light intensities, the best quality is achieved with a photon corrected image format that takes the dark current into account.
In Table 3 we denote the GFC values and analyze how well the spectral features are preserved. We do not only look at the average value, but more importantly at the minimum GFC, which describes the worst reconstruction of a spectral reflectance curve. L_L, the SSATV denoising without a conversion of the HSI format, has a worse minimum value than the noisy L_n for all three illuminants. The proposed L_c performs best for all illuminants.

Table 1. PSNR for different illuminants and intensities g_l, applied to different HSI image formats. The grey columns correspond to Fig. 2(a) and Fig. 4, and bold values show the best result for each column. In a low-photon environment (g_l = 0.5) the difference between L_c and L_cs is larger, and it is more important to take the dark current into account. In brighter environments (g_l > 1) the denoising does not improve the results.

7. Conclusion
We presented a denoising framework for HSI that uses sensor information to transform an acquisition to a photon corrected image format. Sensor data is often available, and some hyperspectral scanners allow recording directly in a corrected raw format, which then only requires accounting for a constant scaling and adding an average of the dark current. The values directly represent the noise contribution as a Poisson distribution, and remain proportional to the spectral radiance values at every location. An extended variational denoising model is applied that builds on a mathematical description of the Poisson distributed noise, and has a regularization term that accounts for the structural composition of a HSI cube. The framework is evaluated on a synthetic dataset. We use realistic parameters and simulate different noise contributions, such as photon noise (shot noise), readout noise, and digitization noise. The proposed framework only accounts for the contribution of the photon noise, and shows a good performance for the different evaluation metrics. In low-photon environments it is important to account for the dark current, and the best denoising results are obtained by this means. In fact, the dark current is important at the most relevant illumination levels, and all denoising approaches should account for it.
Only for high-PSNR acquisitions does the contribution of the dark current seem negligible. However, the proposed denoising framework cannot improve the results beyond a certain point, because it does not account for the contribution of additive Gaussian noise sources. The framework could be extended in future work, but the improvement for such high-PSNR acquisitions might be only marginal.