Fog Removal Algorithm for Geographic Images Using Generative Adversarial Nets

In order to improve the imaging quality of geographic images in complex weather environments, this paper designs a geographic image defogging algorithm using a generative adversarial network to improve image clarity. Firstly, the imaging mode and degradation process of the foggy image are studied, and the atmospheric scattering model is established to simulate the degradation of the image. Secondly, according to the data characteristics of image defogging, an optimized image defogging model is built based on Conditional Generative Adversarial Nets (CGAN), so that the image output by the model is closer to the real image. The experimental results show that the image defogging model designed in this paper yields good contrast and color richness and can effectively solve the problem of image degradation in fog. Moreover, reducing the number of paired training images lowers the evaluation indexes of the geographic image defogging model. This paper provides a reference for the study of geographic image defogging.


Introduction
Fog is a phenomenon of reduced visibility in the air caused by suspended liquid droplets, dust, and fine sand particles, and appears visually white. In foggy imaging, the light reflected from distant objects is blocked and scattered by the air and attenuates on its way to the camera, resulting in decreased image contrast and saturation, loss of detail, color deviation, and other problems. This seriously affects the analysis and recognition of geographic images and the detection and recognition of buildings based on computer image recognition and processing. Image defogging reduces the coverage and interference of fog on image information by enhancing or restoring the collected image, recovering the details and color information of the image so as to better reflect the characteristic information of the scene [1]. Park et al. (2020) designed an indoor image defogging algorithm using generative adversarial networks: a cycle-consistent generative adversarial network generates hazy images, a conditional generative adversarial network preserves indoor image texture information, and a loss function added during training of the fusion network minimizes the artifacts generated by the adversarial networks while preserving texture details and color information. The experimental results show that the algorithm is effective in defogging indoor images but performs poorly on outdoor images [2]. Wu et al. (2020) proposed an image-to-image defogging model based on a generative adversarial network with a dark channel prior. This model takes a hazy image as input and directly outputs a haze-free image through a U-Net-based generator. Experimental results show that the algorithm can reduce the pixel and perceptual loss of images, generate more natural images, and provide better texture details and perceptual characteristics [3]. Gan et al. (2020) studied a multistage image defogging algorithm based on a conditional generative adversarial network. A generator network produces from the hazy image a composite consisting of the transmission map and the atmospheric light value, and an optimized atmospheric scattering model is then used to compute the defogged image. The experimental results show that this method has a good defogging effect on both synthetic and real hazy images [4].
In order to realize geographic image defogging, this paper designs a geographic image defogging algorithm based on the generative adversarial network. Firstly, the atmospheric scattering model is used to describe the degradation process of the image in fog. Secondly, based on Conditional Generative Adversarial Nets (CGAN), the defogging network and the discriminating network are established to realize the defogging of geographic images, and the relevant parameters are optimized to improve the defogging effect.

Foggy image degradation mechanism
The image defogging principle is mainly divided into two approaches, namely enhancement and restoration. The enhancement-based defogging method makes the image clearer by increasing image contrast and saturation, but it does not take into account the factors that cause image degradation in fog. The restoration-based defogging method obtains the original clear image by inverting the relevant model. However, in the absence of additional information, the relevant parameters of the defogging model need to be estimated from prior information, and the defogging effect can be improved by combining machine learning or deep learning [5][6][7].
Where d is the distance between the object and the imaging device and λ is the wavelength of light. According to the attenuation model, the farther the object is from the imaging device, the more obvious the attenuation effect of the suspended particles in the air on the light. The attenuated light intensity can be expressed as:

E_d(d, λ) = E_0(λ) e^(−β(λ)d)

where E_0(λ) denotes the intensity of the incident light at imaging distance 0 and β(λ) denotes the scattering coefficient of atmospheric light. According to the atmospheric light model, the farther the imaging distance, the greater the interference of environmental light on the imaging device. The interference intensity of environmental light can be expressed as:

E_a(d, λ) = E_∞(λ) (1 − e^(−β(λ)d))

where E_∞(λ) is the total intensity of atmospheric light. In the processing of the foggy image, the foggy image is represented by a linear model:

I(x) = J(x) t(x) + A (1 − t(x)),  t(x) = e^(−β d(x))

where I(x) is the foggy image captured by the imaging equipment, J(x) is the original undegraded image, t(x) is the transmission map corresponding to the foggy image, β is the atmospheric scattering coefficient, and A is the atmospheric light; the term J(x)t(x) describes the attenuation model and A(1 − t(x)) describes the atmospheric light model.
When the distance d in the model tends to infinity, the value of t(x) tends to 0, so I(x) approximates the atmospheric light A. In practice, the approximate value of A can be taken as the maximum illumination intensity in the region of the foggy image with the lowest transmittance. The atmospheric scattering model describes the degradation of the image captured in fog; by inverting the model, the foggy image can be restored to the original image, avoiding the distortion problems caused by image enhancement methods. However, in the inversion process, the transmission map t(x) must be estimated from prior conditions and additional information, and the related parameters are obtained in combination with the depth map.
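As a minimal sketch of the degradation model and its inversion described above (pure Python on single pixel values; the function names and the clamping parameter t_min are illustrative choices, not part of the paper):

```python
import math

def transmission(beta, d):
    """Transmission t(x) = exp(-beta * d) for scattering coefficient beta and depth d."""
    return math.exp(-beta * d)

def foggy_pixel(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def defog_pixel(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A) / t + A; t is clamped from below
    because dividing by a tiny transmission amplifies noise."""
    t = max(t, t_min)
    return (I - A) / t + A

# Round trip: a clear pixel J = 0.8, depth 50 m, beta = 0.02, airlight A = 0.9
t = transmission(0.02, 50.0)   # e^{-1}, roughly 0.368
I = foggy_pixel(0.8, t, 0.9)
J = defog_pixel(I, t, 0.9, t_min=0.0)
```

With the true transmission available, the inversion recovers the clear pixel exactly; in practice t(x) must first be estimated from the priors discussed next.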
The common prior information includes image contrast, color, and dark channel. The contrast prior establishes the correlation between the atmospheric light A and the transmission map t(x) by white-balancing the foggy image, and then solves for A. The gradient is used to represent the contrast of the image: the more visible edges in the image, the stronger the contrast. The formula is:

C_edges = Σ_x |∇I^c(x)|

where C_edges represents the contrast and I^c(x) represents color channel c of the foggy image; the atmospheric light A is then obtained by solving for the optimum of this contrast. This method can enhance texture detail. The color prior estimates the relevant parameters with the independent component analysis method to obtain the transmittance t(x), and the original image is recovered according to the atmospheric scattering model. The color prior method can recover the original image and obtain a depth map. Its disadvantage is that it requires a high degree of color richness in the image; since color loss is severe on foggy days, the actual defogging effect is poor [8,9].
The dark channel prior divides a clear image into several image blocks; in each block, the value of at least one color channel at some pixel approaches 0. The formula is:

J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y) → 0

where J^dark is the dark channel image corresponding to the image J(x), J^c is channel c of the image J(x), and Ω(x) is the neighborhood of pixel x. Both the shadows of objects and brightly colored objects can produce the dark channel. By computing the local dark channel of the image, the transmittance is estimated and the atmospheric scattering model is transformed into:

t̃(x) = 1 − ω · min_{y∈Ω(x)} min_{c} ( I^c(y) / A^c )

where ω retains a small amount of haze to keep the result natural. The initial value of the transmittance can be obtained through the dark channel prior, the transmittance distribution map can then be refined through matting, and finally the Laplacian matrix can be used to obtain a more accurate transmittance map.
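The two dark-channel formulas above can be sketched directly (pure Python on small nested-list images with values in [0, 1]; function names, the tuple-based image layout, and the ω default are illustrative assumptions):

```python
def dark_channel(img, patch=3):
    """Dark channel: per pixel, minimum over RGB, then minimum over a
    patch x patch local neighborhood. img is an H x W list of (r, g, b) tuples."""
    H, W = len(img), len(img[0])
    r = patch // 2
    # per-pixel minimum over the three color channels
    mins = [[min(img[y][x]) for x in range(W)] for y in range(H)]
    dark = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            ys = range(max(0, y - r), min(H, y + r + 1))
            xs = range(max(0, x - r), min(W, x + r + 1))
            dark[y][x] = min(mins[yy][xx] for yy in ys for xx in xs)
    return dark

def estimate_transmission(img, A, omega=0.95, patch=3):
    """Transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the (r, g, b) atmospheric light."""
    norm = [[tuple(c / a for c, a in zip(px, A)) for px in row] for row in img]
    dark = dark_channel(norm, patch)
    return [[1.0 - omega * v for v in row] for row in dark]

# Tiny 2x2 example; patch=1 reduces the neighborhood to the pixel itself.
img = [[(0.5, 0.6, 0.7), (0.2, 0.9, 0.8)],
       [(0.4, 0.4, 0.4), (1.0, 0.0, 0.3)]]
dc = dark_channel(img, patch=1)
t = estimate_transmission(img, A=(1.0, 1.0, 1.0), omega=0.95, patch=1)
```

This gives only the initial t̃(x); as the text notes, it is then refined by matting with the Laplacian matrix.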

A geographic image defogging algorithm based on a generative adversarial network
Generative Adversarial Nets (GAN) comprise a generative model and a discriminative model trained through an unsupervised adversarial process. The generative (defogging) network takes the foggy image as input and produces a defogged image, improving itself by combining the judgment feedback of the discriminating network. The combination of a foggy image with a label image, or of a foggy image with a generated image, is used as the input of the discriminating network, which must distinguish the two types of input. The image finally output by the generative network not only has a high similarity to the label image, but the discriminating network can no longer judge the source of the image, and the two models reach a state of balance [12][13][14].
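The adversarial training described above corresponds to the standard conditional GAN objective (notation is the usual one, not taken from the paper: x is the foggy image, y the label image, G the defogging network, and D the discriminating network):

```latex
\min_G \max_D \; \mathbb{E}_{x,y}\left[\log D(x, y)\right]
             + \mathbb{E}_{x}\left[\log\left(1 - D(x, G(x))\right)\right]
```

D pushes D(x, y) toward 1 and D(x, G(x)) toward 0, while G pushes D(x, G(x)) toward 1; at equilibrium the discriminator cannot tell the two joint inputs apart.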
The input of the discriminating network is a joint input with conditional variables. Its goal is to distinguish the two types of input image correctly, so that the joint output probability of the generated image and the foggy image is approximately 0 and the joint output probability of the label image and the foggy image is approximately 1. The structure of the discriminating network is shown in Figure 2 and includes 5 convolutional layers. The first four layers are each followed by a Leaky ReLU activation layer and an instance normalization layer, and the last layer is followed by a Sigmoid function. Each pixel value of the output probability map corresponds to an image block of the input image, and its value lies between 0 and 1. By discriminating image blocks of the input image, the accuracy of the discriminating network can be improved.
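The paper does not state the kernel sizes or strides of the five convolutions, so as an illustration under assumed PatchGAN-style hyperparameters (4x4 kernels, strides 2, 2, 2, 1, 1, padding 1), the size of the output probability map can be computed with the standard convolution output-size formula:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of one convolution: floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def patch_map_size(size, layers):
    """Chain the discriminator's convolutions; layers is a list of (kernel, stride, pad)."""
    for k, s, p in layers:
        size = conv_out(size, k, s, p)
    return size

# Assumed hyperparameters for the 5 layers (not specified in the paper):
layers = [(4, 2, 1)] * 3 + [(4, 1, 1)] * 2
n = patch_map_size(256, layers)  # size of the output probability map for a 256x256 input
```

Each of the n x n output values then scores one receptive-field-sized block of the input, which is the block-wise discrimination the text describes.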

Fig. 2 Schematic diagram of discriminating network structure
In order to evaluate the effect of the defogged image, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), information entropy (Entropy), and average gradient (AveGrad) are used as the evaluation indexes for the defogging effect. The larger the PSNR, the smaller the difference between the two images. The larger the SSIM, the higher the structural similarity between the two images. The higher the average gradient, the higher the image clarity. The higher the information entropy, the more information the image carries on average.
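Three of these four indexes have short closed forms and can be sketched as follows (pure Python on grayscale images given as nested lists; SSIM is omitted because its windowed statistics make it considerably longer; function names are my own):

```python
import math

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale images:
    10 * log10(peak^2 / MSE)."""
    n = len(a) * len(a[0])
    mse = sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def entropy(img, levels=256):
    """Shannon information entropy (bits) of the grey-level histogram."""
    n = len(img) * len(img[0])
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[int(p)] += 1
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def ave_grad(img):
    """Average gradient: mean of sqrt((dx^2 + dy^2) / 2) over pixels that
    have a right and a lower neighbor (forward differences)."""
    H, W = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(H - 1):
        for x in range(W - 1):
            dx = img[y][x + 1] - img[y][x]
            dy = img[y + 1][x] - img[y][x]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
            count += 1
    return total / count
```

PSNR and SSIM are full-reference measures (they compare the defogged image with the label image), while entropy and average gradient are computed on the defogged image alone.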
In this paper, a standard foggy image data set was used as the training set, including 12,000 training image pairs and 800 test image pairs, and the value range of the atmospheric light value was set accordingly. Alternate optimization is adopted for training the geographic image defogging model, and captured geographic images are used for testing. The foggy image is input into the defogging network and, after feedforward calculation inside the model, the defogged image is output. Compared with the foggy image, the defogged geographic image has better contrast and color richness, clearer texture, and a significantly enhanced visual effect.

Fig. 3 Geographic image defogging rendering
In order to test the influence of the number of training samples on the defogging effect of the model, the number of paired images in the data set was reduced from 12,000 to 7,000, while the number of unpaired images remained 7,000; the corresponding test result is denoted GAN-1. It is compared with the result GAN obtained without reducing the data set, as shown in Figure 4. As can be seen from Figure 4, reducing the number of training samples lowers the relevant evaluation indexes of the geographic image defogging model.

Conclusion
In order to study geographic image defogging and improve the clarity and color information of the image, this paper designs a geographic image defogging algorithm based on a generative adversarial network. Firstly, the atmospheric scattering model is used to describe the imaging mechanism and degradation process of the foggy image, and the transmission map is solved by inversion with relevant prior conditions. Secondly, the geographic image defogging model is established using a conditional generative adversarial network, and the defogging network and the discriminating network are designed to improve the image processing effect of the defogging model. The experimental results show that the geographic image defogging model designed in this paper has a good processing effect, can improve the contrast and color richness of the image, and effectively solves the problem of geographic image degradation caused by fog. However, there are still some deficiencies: the number of test samples is small, and the defogging effect in more complex imaging environments is poor, resulting in a halo phenomenon. Therefore, further optimization is needed in subsequent experiments.