Abstract
In this paper, we propose a novel model, combining a physical model and a visual model (the visual-physical model), to describe the formation of a haze image. We describe the physical process of image degradation by incorporating an optical imaging physical model into the visual cognitive process, enriching the set of degradation factors. A variational approach is employed to eliminate the atmospheric light, and the transmission map is then estimated via a median filter. We recover the scene radiance with an MRF model and apply contrast-limited adaptive histogram equalization to correct colors after defogging. Experimental results demonstrate that the proposed model can be applied efficiently to outdoor haze images.
1 Introduction
Different weather conditions such as haze, fog, smoke, rain, or snow cause complex visual effects in the spatial or temporal domain of images [1]. Such artifacts may significantly degrade the performance of outdoor vision systems relying on image feature extraction [2] or visual attention modeling [3], such as event detection, object detection, tracking and recognition, scene analysis and classification, and image indexing and retrieval. Removal of weather effects, such as haze, rain, and snow, from images has therefore recently received much attention.
In this paper, we focus on haze removal from a single image. Because haze depends on the unknown scene depth, dehazing is a challenging problem. Furthermore, if the only available input is a single hazy image, the problem is under-constrained and even more challenging. Hence, most traditional dehazing approaches [4] rely on multiple hazy images as input or on additional prior knowledge. Polarization-based methods [5] remove haze effects using two or more images taken with different degrees of polarization. In [6], constraints obtained from multiple images of the same scene under different weather conditions were employed for haze removal. Nevertheless, taking multiple images of the same scene is usually impractical in real applications. Single image haze removal [7] has recently received much attention. In [29], He et al. obtained a depth image of the scene and removed the haze effect using the dark channel prior. In [30], Fattal removed the scattered light to restore haze-free color images using an uncorrelation principle, but the algorithm cannot handle gray-level haze images.
In this paper, inspired by the Retinex model [8, 9], the Atmospheric Transmission Function (ATF) [10], and the Monochromic Atmospheric Scattering Model (MASM) [11], we propose a novel model called the visual-physical model. Based on this new model, we estimate the atmospheric light via a variational approach, estimate the transmission map and refine it via the median filter, and then recover the scene radiance using the MRF method. As a result, high-quality haze-free images can be recovered.
2 Visual-Physical Model
The visual-physical model (VPM) describes the formation of a hazy image \( I{(x)} \), where \( x \) is the pixel index, as:

\( I(x) = L(x)\left( c(x) + d(x) \right) \)  (1)
where \( I{(x)} \) is the observed intensity, \( c{(x)} \) is the scene radiance (the original haze-free image to be recovered) [19], \( L{(x)} \) is the global atmospheric light, and \( d{(x)} \) is the atmospheric scattering rate.
Obviously, when \( d(x) = 0 \), the model reduces to \( I(x) = L(x)c(x) \) and the VPM describes the condition of complex illumination. When there is haze in the atmosphere, \( d(x) > 0 \), and the VPM describes a haze-degraded image or an image degraded by both haze and complex illumination [12, 13]. In addition, the atmospheric light \( L{(x)} \) is locally smooth, whereas in this paper we assume that the atmospheric scattering rate \( d{(x)} \) does not have this local smoothness property. Hence, when \( \left| d(x_{0}) - d(x) \right| \gg 0,\; x \in N_{0}(x_{0}) \), where \( x_{0} \) is an additive noise point causing interference, the VPM can also describe noise.
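As a numerical illustration, the formation model can be sketched as follows. This is a minimal synthesis example, not part of the dehazing pipeline; it assumes the multiplicative form \( I = L(c + d) \) implied by the log-domain terms of Sect. 2.1, and the array values are invented:

```python
import numpy as np

def synthesize_haze(c, L, d):
    """Form a hazy image under the VPM: I(x) = L(x) * (c(x) + d(x)),
    with scene radiance c, atmospheric light L, and scattering rate d
    given as same-shaped float arrays."""
    return L * (c + d)

# Toy values (invented for illustration).
c = np.array([[0.2, 0.8], [0.5, 0.1]])                # scene radiance
L = np.full_like(c, 0.9)                              # locally smooth atmospheric light
I_clear = synthesize_haze(c, L, np.zeros_like(c))     # d = 0: illumination only
I_hazy = synthesize_haze(c, L, np.full_like(c, 0.4))  # d > 0: haze added
```

With \( d = 0 \) the output is simply \( Lc \); a constant positive \( d \) brightens every pixel, which matches the washed-out appearance of haze.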
2.1 Eliminate the Atmospheric Light
In the logarithmic domain, the VPM can be described as follows:

\( i(x) = l(x) + \log \left( c(x) + d(x) \right) \)  (2)

Based on the variational approach, we estimate the atmospheric light via the Kimmel algorithm [14, 20]. As a consequence, the energy functional to be minimized can be written as:

\( \mathop{\min}\limits_{l} \int_{\varOmega} \left( \left| \nabla l \right|^{2} + \alpha \left| l - i \right|^{2} + \beta \left| \nabla (l - i) \right|^{2} \right) dx, \quad \text{s.t. } l \ge i \text{ and } \left\langle \nabla l, \vec{n} \right\rangle = 0 \text{ on } \partial\varOmega \)  (3)
where \( l = \log L \), \( i = \log I \), \( \varOmega \) is the domain of definition of the image, and \( \partial\varOmega \) is its boundary. \( \alpha, \beta \) are two penalty parameters, and \( \vec{n} \) is the normal vector of the boundary. \( \left| {\nabla l} \right|^{2} \) is a bound term that keeps the atmospheric light smooth. \( \left| {l - i} \right|^{2} \) is another bound term that keeps the atmospheric light similar to the input image [27]. The last bound term \( \left| {\nabla (l - i)} \right|^{2} \) keeps \( (c + d) \) smooth.
With Kimmel’s method [14], \( l \) can be solved, and the normalized image \( I^{\prime} = I/L \) can be obtained.
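A minimal numerical sketch of this step, assuming the functional above is minimized by plain gradient descent on its Euler–Lagrange equation (Kimmel et al. [14] use a more efficient multiresolution solver); all parameter values are illustrative:

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian with replicated borders."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def estimate_atmospheric_light(I, alpha=0.05, beta=0.1, tau=0.1, iters=200):
    """Gradient descent on the log-domain functional
    |grad l|^2 + alpha*(l - i)^2 + beta*|grad(l - i)|^2,
    projected onto the Retinex constraint l >= i."""
    i = np.log(np.clip(I, 1e-6, None))
    l = i.copy()
    for _ in range(iters):
        # Euler-Lagrange gradient: -(1+beta)*Lap(l) + alpha*(l - i) + beta*Lap(i)
        grad = -(1.0 + beta) * laplacian(l) + alpha * (l - i) + beta * laplacian(i)
        l = np.maximum(l - tau * grad, i)   # descend, then project onto l >= i
    return np.exp(l), np.exp(i - l)         # L and the normalized image I' = I / L

I = np.linspace(0.2, 0.8, 64).reshape(8, 8)   # smooth toy input
L_est, I_prime = estimate_atmospheric_light(I)
```

The projection \( l \ge i \) guarantees \( L \ge I \), so the normalized image \( I^{\prime} \) always lies in \( (0, 1] \).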
2.2 Recovering the Scene Radiance
When \( I^{\prime} \) is known, if we can estimate the atmospheric scattering rate \( d^{\prime} \), we can recover the scene radiance from the foggy image by maximizing the posterior probability [15]. \( d^{\prime} \) can be estimated via the dark channel prior combined with the median filter, in order to preserve both edges and corners [21]:

\( d^{\prime}(x) = \mathop{\text{med}}\limits_{y \in \varOmega_{x}} \left( \mathop{\min}\limits_{c \in \{R,G,B\}} I^{\prime}_{c}(y) \right) \)  (4)
where \( \varOmega_{x} \) is a local patch centered at \( x \), \( c \) denotes one of the three color channels \( (R, G,\;\text{or}\;B) \) in the RGB (red, green, and blue) color space, and \( I^{\prime}_{c} (x) \) denotes the color channel \( c \) of \( I^{\prime}(x) \) [22]. The function med() is the median filter. Noise points satisfying \( \left| d(x_{0}) - d(x) \right| \gg 0,\; x \in N_{0}(x_{0}) \) are filtered out [23].
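A self-contained sketch of this estimate, assuming the dark channel is the per-pixel minimum over the color channels followed by a median over the local patch (the patch size is an illustrative choice):

```python
import numpy as np

def scattering_rate(I_prime_rgb, patch=7):
    """Median-filtered dark channel estimate of d'.
    I_prime_rgb: H x W x 3 float image (after atmospheric-light removal)."""
    dark = I_prime_rgb.min(axis=2)        # min over the R, G, B channels
    r = patch // 2
    p = np.pad(dark, r, mode="edge")
    H, W = dark.shape
    d = np.empty_like(dark)
    for y in range(H):                    # median over each local patch Omega_x
        for x in range(W):
            d[y, x] = np.median(p[y:y + patch, x:x + patch])
    return d
```

Unlike the minimum filter of the plain dark channel prior, the median is robust to isolated outliers, which is what removes the noise points described above.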
By using Bayes' rule, this posterior probability can be written as:

\( p\left( c | I^{\prime}, d^{\prime} \right) = \frac{ p\left( I^{\prime} | d^{\prime}, c \right) p\left( c | d^{\prime} \right) }{ p\left( I^{\prime} | d^{\prime} \right) } \)  (5)
where \( p\left( {I^{\prime}|d^{\prime},c} \right) \) is the data likelihood and \( p\left( {c|d^{\prime}} \right) \) is the prior on the unknown \( c \). In practice, the negative log-likelihood is minimized instead of maximizing the probability density function [24]. The energy derived from (5) using the log-likelihood is:

\( E\left( c | I^{\prime}, d^{\prime} \right) = E\left( I^{\prime} | d^{\prime}, c \right) + E\left( c | d^{\prime} \right) \)  (6)

where \( E(\cdot) = -\log p(\cdot) \), and the evidence \( p\left( I^{\prime} | d^{\prime} \right) \) is constant in \( c \) and can be dropped.
The energy is thus the sum of two terms: the data term and the prior term [28]. The data term \( E\left( {I^{\prime}|d^{\prime},c} \right) \) is denoted \( E_{data\_c} \), and the prior term \( E\left( {c|d^{\prime}} \right) \) is denoted \( E_{prior\_c} \).
By definition, the data term is the negative log-likelihood of the noise probability on the intensity. As a consequence, it can be written as:

\( E_{data\_c} = \sum\limits_{x \in X} f\left( \frac{ I^{\prime}(x) - c(x) - d^{\prime}(x) }{ \sigma } \right) \)  (7)
where \( X \) is the set of image pixels and \( f() \) is a function related to the distribution of the intensity noise with scale \( \sigma \). In practice, a Gaussian distribution is usually used [25].
The prior term enforces the smoothness of the restored image by penalizing large differences of intensity between neighboring pixels [26]. It can be written as:

\( E_{prior\_c} = \xi \sum\limits_{x \in X} \left( 1 - d^{\prime}_{x} \right) \sum\limits_{v \in N(x)} g\left( \left| c(x) - c(x + v) \right| \right) \)  (8)
where \( \xi \) is a factor weighting the strength of the prior term against the data term, \( N{(x)} \) is the set of relative positions of the neighbors of pixel \( x \), and \( g() \) is a function related to the gradient distribution of the scene radiance. In practice, the identity function for \( g() \) gives satisfactory results. \( \left( {1 - d^{\prime}_{x} } \right) \) is an exponential decay; without it, the data term becomes less and less important as haze density increases compared to the prior term, and the result is over-smoothed at high haze density [15]. To avoid this effect, the decay is introduced in the prior term.
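The two terms can be sketched together as a single energy evaluation. This is a hedged sketch assuming a Gaussian \( f \) (hence a squared residual), the identity for \( g \), and the residual \( I^{\prime} - (c + d^{\prime}) \) implied by the model; \( \sigma \) and \( \xi \) values are illustrative:

```python
import numpy as np

def mrf_energy(c, I_prime, d, sigma=0.1, xi=1.0):
    """Energy of a candidate radiance c: Gaussian data term on the
    residual I' - (c + d'), plus a neighbor-difference prior damped
    by the exponential decay (1 - d')."""
    data = np.sum(((I_prime - c - d) / sigma) ** 2)
    # 4-neighborhood: right and down differences cover each pixel pair once
    dx = np.abs(c[:, 1:] - c[:, :-1])
    dy = np.abs(c[1:, :] - c[:-1, :])
    decay = 1.0 - d
    prior = xi * (np.sum(decay[:, 1:] * dx) + np.sum(decay[1:, :] * dy))
    return data + prior
```

A candidate that exactly explains the observation (\( c = I^{\prime} - d^{\prime} \), constant here) has zero energy; any deviation raises the data term, and any roughness raises the prior term.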
Minimizing the energy function (6) can thus be described as inference in an MRF model:

\( c^{*} = \arg \mathop{\min}\limits_{c} \left( E_{data\_c} + E_{prior\_c} \right) \)  (9)
In this paper, we use the \( \alpha \)-expansion algorithm [16, 17] to obtain the optimal result of the energy function.
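\( \alpha \)-expansion requires a graph-cut library; as a self-contained stand-in over a small discrete label set, an iterated-conditional-modes (ICM) sweep minimizing the same data + prior energy can be sketched. ICM only finds a local minimum, unlike the strong approximation guarantee of \( \alpha \)-expansion, and all parameter values here are illustrative:

```python
import numpy as np

def icm_restore(I_prime, d, labels, sigma=0.1, xi=1.0, sweeps=5):
    """Greedy per-pixel label updates on the data + prior energy."""
    target = I_prime - d
    # initialize each pixel at the label closest to I' - d'
    c = labels[np.abs(labels[None, None, :] - target[..., None]).argmin(-1)]
    H, W = target.shape
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                nbrs = [c[yy, xx]
                        for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < H and 0 <= xx < W]
                cost = ((target[y, x] - labels) / sigma) ** 2      # data term
                for n in nbrs:                                     # prior term
                    cost = cost + xi * (1.0 - d[y, x]) * np.abs(labels - n)
                c[y, x] = labels[np.argmin(cost)]
    return c
```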
Finally, we adjust the image colors via contrast-limited adaptive histogram equalization (CLAHE) [18].
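A simplified, dependency-free sketch of the CLAHE step (per-tile clipped histogram equalization; the full method additionally interpolates between neighboring tile mappings to avoid block seams, and the tile count, clip limit, and bin count here are illustrative choices):

```python
import numpy as np

def clahe_channel(ch, tiles=4, clip_limit=4.0, bins=64):
    """Clipped histogram equalization per tile on one channel in [0, 1]."""
    H, W = ch.shape
    th, tw = H // tiles, W // tiles
    out = np.empty_like(ch)
    for ty in range(tiles):
        for tx in range(tiles):
            sl = np.s_[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw]
            t = ch[sl]
            hist, _ = np.histogram(t, bins=bins, range=(0.0, 1.0))
            hist = hist.astype(float)
            cl = clip_limit * t.size / bins               # clip at a multiple of the mean count
            excess = np.maximum(hist - cl, 0.0).sum()
            hist = np.minimum(hist, cl) + excess / bins   # redistribute clipped mass
            cdf = np.cumsum(hist) / hist.sum()            # tile-local equalization map
            idx = np.clip((t * bins).astype(int), 0, bins - 1)
            out[sl] = cdf[idx]
    return out
```

The clip limit bounds how steep the equalization map can get, which is what prevents CLAHE from over-amplifying noise in nearly uniform regions.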
3 Experimental Results
To demonstrate the practicability and performance of the proposed visual-physical model, we evaluate various outdoor images with and without applying the proposed dehazing method in Sect. 3.1. Then, in Sect. 3.2, we compare our method with He's [29] and Fattal's [30] methods.
3.1 Experiments on Different Haze Images
As shown in Fig. 1, the top row shows the input images, the second row the atmospheric scattering rate images, the third row the atmospheric light images, and the bottom row the recovered scene radiance. The proposed algorithm successfully restores the degraded contrast and color of the images.
Figure 1 shows scenes recovered by the same restoration algorithm, derived from the visual-physical model, for different input images. The scenes recovered by our method have good quality and visibility. From the red rectangle in Fig. 1d1, we can see that our method handles images with large scene depth well; from the red rectangle in Fig. 1d2, that it retains edges well; and from the red rectangle in Fig. 1d3, that it recovers texture well.
Figure 1d1 shows the dehazed image of a city scene. Figure 1a1 has relatively low contrast and appears gray; our result in Fig. 1d1 clearly weakens the foggy area, restores the color of the buildings, and brightens the whole image. Figure 1d2 shows the result for a close-range scene and Fig. 1d3 the result for a natural scene; in both, our method enhances the contrast and restores the scene colors well.
3.2 Comparison Experiment
Figures 2, 3, and 4 show a comparison between the results obtained by He's [29], Fattal's [30], and our algorithm. It can be observed that the proposed method unveils details and recovers vivid color information, but is often over-saturated in some regions. Figures 2C, 3C, and 4C show the results obtained by He's method [29], which retains most of the details but yields colors inconsistent with the original and introduces false contours on some edges. Figures 2D, 3D, and 4D show the results obtained by Fattal's method [30]; this approach cannot handle heavy haze well and may fail when its assumption that surface shading is locally uncorrelated is broken.
Figures 2B, 3B, and 4B show the recovered images obtained by our approach. It can be seen that our results retain very fine details and preserve the colors of the original scene. However, in some regions of large scene depth, the recovered image is too bright and contains white block artifacts.
Additionally, Table 1 shows the entropy, PSNR, and variance for He's [29], Fattal's [30], and our method. The entropy and variance reflect the quantity of information in an image: a better recovered image has higher entropy and variance. The PSNR reflects how well the structural information of the image is preserved.
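For reference, the three measures can be computed as follows (a sketch assuming gray-level images scaled to \([0, 1]\); entropy is taken over a 256-bin histogram):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-np.sum(p * np.log2(p)))

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

# Variance is simply np.var(img).
```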
Based on Table 1, the proposed method restores the naturalness of the original image, enhances its contrast, and recovers fine details.
4 Conclusions
In this paper, we proposed an effective and efficient model to describe haze images. Based on this visual-physical model, we eliminate the atmospheric light via a variational approach, estimate the transmission map via the median filter, recover the scene radiance via an MRF model, and correct colors via CLAHE. The proposed method performs especially well on natural scenes. It enhances the contrast of objects but is over-saturated in some regions.
References
1. Narasimhan, S.G., Nayar, S.K.: Vision and the atmosphere. Int. J. Comput. Vis. 48(3), 233–254 (2002)
2. Maji, S., Berg, A.C., Malik, J.: Classification using intersection kernel support vector machines is efficient. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. Anchorage, Alaska, USA (2008)
3. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)
4. Narasimhan, S.G., Nayar, S.K.: Interactive deweathering of an image using physical models. In: Proceedings of IEEE Workshop on Color and Photometric Methods in Computer Vision (2003)
5. Schechner, Y.Y., Narasimhan, S.G., Nayar, S.K.: Instant dehazing of images using polarization. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 325–332. Kauai, Hawaii, USA (2001)
6. Nayar, S.K., Narasimhan, S.G.: Vision in bad weather. In: Proceedings of IEEE International Conference on Computer Vision, pp. 820–827. Kerkyra, Greece (1999)
7. Tan, R.: Visibility in bad weather from a single image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. Anchorage, Alaska, USA (2008)
8. Xiao, J., Hays, J., Ehinger, K.A., Oliva, A., Torralba, A.: SUN database: large-scale scene recognition from abbey to zoo. In: CVPR, pp. 3485–3492. IEEE (2010)
9. Saxena, A., Sun, M., Ng, A.Y.: Make3D: learning 3D scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31(5), 824–840 (2009)
10. Adelson, E.H.: Lightness perception and lightness illusion. In: Gazzaniga, M. (ed.) The New Cognitive Neurosciences, 2nd edn, pp. 339–351. MIT Press, Cambridge (2000)
11. Narasimhan, S.G., Nayar, S.K.: Vision and the atmosphere. Int. J. Comput. Vis. 48(3), 233–254 (2002)
12. Matlin, E., Milanfar, P.: Removal of haze and noise from a single image. In: Proceedings of SPIE Conference on Computational Imaging. SPIE, California, USA (2012)
13. Buades, A., Coll, B., Morel, J.: A non-local algorithm for image denoising. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 60–65. IEEE, San Diego, USA (2005)
14. Kimmel, R., Elad, M., Shaked, D., Keshet, R., Sobel, I.: A variational framework for retinex. Int. J. Comput. Vis. 52(1), 7–23 (2003)
15. Caraffa, L., Tarel, J.-P.: Markov random field model for single image defogging. In: Proceedings of IEEE Intelligent Vehicles Symposium (IV), pp. 994–999. IEEE, Gold Coast, Australia (2013)
16. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001)
17. Gupta, A., Efros, A.A., Hebert, M.: Blocks world revisited: image understanding using qualitative geometry and mechanics. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part IV. LNCS, vol. 6314, pp. 482–496. Springer, Heidelberg (2010)
18. Treibitz, T., Schechner, Y.Y.: Polarization: beneficial for visibility enhancement. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 525–532 (2009)
19. Chen, Z., Wong, K.Y.K., Matsushita, Y., Zhu, X., Liu, M.: Self-calibrating depth from refraction. In: ICCV, pp. 635–642 (2011)
20. Liu, B., Gould, S., Koller, D.: Single image depth estimation from predicted semantic labels. In: CVPR, pp. 1253–1260. IEEE (2010)
21. Oswald, M.R., Toppe, E., Cremers, D.: Fast and globally optimal single view reconstruction of curved objects. In: CVPR, pp. 534–541 (2012)
22. Tighe, J., Lazebnik, S.: Finding things: image parsing with regions and per-exemplar detectors. In: CVPR, pp. 3001–3008 (2013)
23. Yeh, C.H., Kang, L.W., Lin, C.Y.: Efficient image/video dehazing through haze density analysis based on pixel-based dark channel prior. In: International Conference on Information Security and Intelligence Control (ISIC), pp. 238–241 (2012)
24. Zhou, C., Cossairt, O., Nayar, S.: Depth from diffusion. In: CVPR, pp. 1110–1117. IEEE (2010)
25. Mittal, A., Moorthy, A.K., Bovik, A.C.: No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 21(12), 4695–4708 (2012)
26. Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2013)
27. Gibson, K.B., Vo, D.T., Nguyen, T.Q.: An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 21(2), 662–673 (2012)
28. Ancuti, C.O., Ancuti, C., Hermans, C., Bekaert, P.: A fast semi-inverse approach to detect and remove the haze from a single image. In: Kimmel, R., Klette, R., Sugimoto, A. (eds.) ACCV 2010, Part II. LNCS, vol. 6493, pp. 501–514. Springer, Heidelberg (2011)
29. He, K.M., Sun, J., Tang, X.O.: Single image haze removal using dark channel prior. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1956–1963. Miami (2009)
30. Fattal, R.: Single image dehazing. ACM Trans. Graph. 27, 1–9 (2008)
© 2015 Springer International Publishing Switzerland
Lin, W., Du-Yan, B., Quan-He, L., Lin-Yuan, H. (2015). Single Image Dehazing Based on Visual-Physical Model. In: Zhang, YJ. (eds) Image and Graphics. Lecture Notes in Computer Science(), vol 9219. Springer, Cham. https://doi.org/10.1007/978-3-319-21969-1_31