Estimation Method of Blur Parameters in Motion-Blurred Images

To improve the quality of image restoration and remove motion blur from images, a blur-parameter identification algorithm for the point spread function is proposed. The algorithm combines the Fourier spectrum of the image with phase-consistency-based edge detection and identifies the blur angle from the edge direction of the central bright stripe. Bilinear interpolation is then used to generate a sub-pixel image of the spectrum, from which the distance between dark bands is measured and the blur length estimated. Experimental results show that, compared with traditional algorithms, the proposed method not only achieves higher precision for both the blur angle and the blur length but is also more stable, and it correctly detects the blur parameters of blurred images of different scales and contents.


Introduction
With the advancement of technology, images have become an important means for humans to obtain information in many fields. Because of camera shake during shooting and other causes, motion-blurred images are very common in daily life; taking pictures with mobile devices, for example, often introduces motion blur. Therefore, improving the restoration of motion-blurred images is of great practical significance for obtaining high-quality images.
If noise is ignored, the degradation of a motion-blurred image can be regarded as the convolution of the sharp image with a point spread function (PSF). The PSF of motion blur is characterized by two parameters: the blur direction and the blur length. Many methods have been proposed to estimate these parameters, such as the Hough transform [1] and the Radon transform [2]. Wang Lin et al. [3] used spectral blocking to avoid interference from the bright cross lines when estimating the motion blur parameters; Zhou Guodong [4] used statistical and morphological methods; Wang Haili [5] used the gradient cepstrum method and the differential autocorrelation method. This paper exploits the Fourier spectrum of the blurred image: phase consistency and the Radon transform are used to detect the motion blur angle, and an improved measurement of the dark-line spacing in the spectrum is used to compute the blur length. Image degradation caused by motion can be modeled as a linear shift-invariant process [6], expressed as Formula (1).

Motion blur image degradation model
$g(x,y) = f(x,y) * h(x,y) + n(x,y)$  (1)

In Formula (1), $g(x,y)$ is the degraded image, $f(x,y)$ is the target (sharp) image to be restored, $h(x,y)$ is the point spread function, and $n(x,y)$ is additive noise.
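The degradation model of Formula (1) can be sketched in NumPy. This is an illustrative implementation, not the paper's code: the PSF-rasterization scheme, the circular (FFT-based) convolution, and the noise level are all assumptions.

```python
import numpy as np

def motion_blur_psf(length, theta_deg, size):
    """Build a normalized linear-motion PSF (length in pixels, angle in degrees)."""
    psf = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(theta_deg)
    for i in range(length):
        d = i - (length - 1) / 2.0            # samples centered on the kernel
        x = int(round(c + d * np.cos(t)))
        y = int(round(c + d * np.sin(t)))
        psf[y, x] = 1.0
    return psf / psf.sum()

def degrade(f, psf, noise_sigma=0.0, seed=0):
    """g = f * h + n: FFT-based circular convolution plus optional Gaussian noise."""
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(psf))))
    if noise_sigma > 0:
        g = g + np.random.default_rng(seed).normal(0.0, noise_sigma, g.shape)
    return g
```

Because the PSF is normalized to unit sum, blurring a constant image leaves it unchanged, which is a quick sanity check on the model.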
For uniform linear motion, the point spread function is defined as [7]:

$h(x,y) = \begin{cases} \dfrac{1}{L}, & 0 \le x \le L\cos\theta,\ y = x\tan\theta \\ 0, & \text{others} \end{cases}$  (2)

In Formula (2), $L$ is the motion blur length and $\theta$ is the motion blur angle. The Fourier transform of $h(x,y)$ is then

$H(u,v) = \dfrac{\sin\big(\pi L (u\cos\theta + v\sin\theta)\big)}{\pi L (u\cos\theta + v\sin\theta)}$  (3)

When the motion blur is purely horizontal, that is $\theta = 0$, the discrete Fourier transform reduces to

$H(u) = \dfrac{\sin(\pi L u / N)}{\pi L u / N}$  (4)

In Formula (4), $N$ is the image size. Formula (4) shows that the frequency spectrum of the point spread function has a sinc shape; therefore, the frequency spectrum of a motion-blurred image contains regular bright and dark stripes.
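The sinc-shaped spectrum of Formula (4) can be verified numerically for the horizontal case. In this sketch the image size and blur length are illustrative choices, not values from the paper; the zeros of the spectrum fall at multiples of $N/L$, which is what produces the equally spaced dark stripes.

```python
import numpy as np

N, L = 256, 8                 # illustrative image size and blur length
h = np.zeros(N)
h[:L] = 1.0 / L               # horizontal box PSF of length L
H = np.abs(np.fft.fft(h))     # its spectrum is a periodic sinc
# the zeros (dark lines) fall at frequencies u = k*N/L, k = 1, 2, ...
zeros = [k * N // L for k in range(1, L)]
```

The DC value is 1 (the PSF is normalized) and the magnitude vanishes at every multiple of $N/L = 32$, confirming the regular dark stripes.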

Blurred angle estimation
The spectrum of a noise-free motion-blurred image contains parallel dark stripes at equal intervals. Experiments show that these dark stripes have a definite relationship with the direction of motion blur [8].
Among them, is the angle between the motion direction of the object and the horizontal positive direction; is the direction of the dark stripe direction, that is, the angle corresponding to the slope of the parallel straight line; M , N is the image size. When the image is square, the motion blur direction is perpendicular to the parallel dark line direction. Therefore, you only need to obtain the parallel dark line direction to know the direction of motion blur. The following uses the method based on phase consistency and Radon transform to estimate the dark line direction.

Phase consistency
The motion blur angle can be obtained indirectly from the direction of the dark lines in the spectrum. Experiments have shown that edge detection based on phase consistency (PC) is robust to noise and to changes in brightness and contrast [3,9]. Phase consistency means that, in the frequency domain of an image, features such as edges occur at points where the Fourier components are maximally in phase.
Compared with gray-level edge extraction, a clear example of the difference is an image consisting of a single line, such as the letter "I". Many gradient-based edge detectors extract two adjacent edges, one from white to black and one from black to white, whereas a method based on phase consistency extracts a single line.
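The double-edge behaviour can be shown with a minimal one-dimensional sketch (a hypothetical example, not from the paper): a plain finite-difference gradient fires twice on a one-pixel-wide bright line, once on each side, which is exactly the problem described above. A phase-consistency detector (e.g. Kovesi's log-Gabor formulation) would instead respond once, at the line centre.

```python
import numpy as np

# Cross-section of a 1-pixel-wide bright line on a dark background
# (think of one scanline through the stem of the letter "I").
row = np.zeros(16)
row[8] = 1.0

grad = np.diff(row)            # simple gradient-based edge response
edges = np.flatnonzero(grad)   # the gradient fires at BOTH sides of the line
```

Here `edges` contains two positions (the rising and the falling transition), illustrating why gradient detectors report two edges for a single line.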

Radon transform
The Radon transform is an integral transform that takes a function defined on the two-dimensional plane and integrates it along every straight line in the plane, which is analogous to performing a CT scan of the function. In two-dimensional space the Radon transform is defined as

$R(\rho,\theta) = \iint f(x,y)\,\delta(\rho - x\cos\theta - y\sin\theta)\,dx\,dy$

where $\rho$ is the distance from the line to the origin and $\theta$ is the angle of its normal.
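A minimal discrete Radon transform can be sketched by rotating the image and summing its columns (an illustrative implementation, not the paper's; the test image, grid of angles, and interpolation order are assumptions). For an image containing a straight line, the projection taken along the line's own direction concentrates all of its mass in a single bin, so the global maximum of the sinogram identifies the line's angle, which is how the dark-stripe direction is located.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles):
    """Minimal Radon transform: rotate the image and sum columns per angle."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

img = np.zeros((65, 65))
img[:, 32] = 1.0                       # one straight vertical line
angles = np.arange(0.0, 180.0, 1.0)
sino = radon(img, angles)
best = angles[np.unravel_index(np.argmax(sino), sino.shape)[0]]
```

At 0° the whole line projects into one column (sum 65), while every other rotation spreads the mass over several columns, so `best` recovers the line direction.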

Blurred length estimation
Once the blur angle has been determined by the above algorithm, the spectrum can be rotated so that the motion blur is purely horizontal. The blur length can then be calculated by Formula (7):

$L = \dfrac{N}{d_0}$  (7)
where $N$ is the image size, $d_0$ is the distance between two consecutive dark lines, and $L$ is the blur length.

Fig.4 Schematic diagram of sub-pixel blur length measurement

To obtain sub-pixel accuracy for the blur length, bilinear interpolation is applied to the rotated spectrum: a bilinear interpolation with subdivision factor $k$ is performed in both the horizontal and vertical directions of the image, as shown in Fig.4. To further improve accuracy, the blur length is measured from the distance spanned by four dark lines rather than a single interval. Finally, the bilinearly interpolated spectrum is averaged by column to obtain a row vector, the dark-line spacing is read from this vector, and the blur length follows from Formula (7) with the measured distance rescaled by the subdivision factor $k$.

Fig.5 The intensity distribution of the column mean of the spectrum after bilinear interpolation
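The sub-pixel measurement can be sketched in one dimension (the 1-D analogue of the bilinear interpolation step; the image size, blur length, and subdivision factor below are illustrative assumptions, not the paper's values). Dark lines are located as strict local minima of the interpolated magnitude, their spacing is converted back to original-pixel units, and Formula (7) gives the blur length.

```python
import numpy as np

N, L_true, k = 256, 8, 4                  # image size, true blur length, subdivision
h = np.zeros(N)
h[:L_true] = 1.0 / L_true                 # horizontal motion PSF
S = np.abs(np.fft.fft(h))                 # dark lines (zeros) every N/L_true samples

x = np.arange(N)
xk = np.arange(k * (N - 1) + 1) / k       # k-fold subdivision of the frequency axis
Sk = np.interp(xk, x, S)                  # linear interpolation (1-D bilinear analogue)

# strict local minima of the interpolated magnitude mark the dark lines
minima = [i for i in range(1, len(Sk) - 1)
          if Sk[i] < Sk[i - 1] and Sk[i] < Sk[i + 1]]
d0 = (minima[1] - minima[0]) / k          # dark-line spacing in original-pixel units
L_est = N / d0                            # Formula (7)
```

With these values the dark lines sit exactly $N/L = 32$ pixels apart, so the estimate reproduces the true blur length; on real spectra, averaging over several consecutive intervals (as the paper does with four dark lines) damps the effect of noisy minima.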

Blurred angle estimation
To assess the accuracy of the proposed algorithm for the blur angle, the Lena, Cameraman, Liftingbody, Mri and Moon images from the standard image library were selected for testing, and the results were compared with the traditional Radon transform and the method of R. Dash. Table 1 shows that the proposed blur-angle algorithm has clear advantages over the traditional calculation methods and keeps the measurement error within 1°.

Blurred length estimation
To verify the accuracy of the proposed algorithm for the blur length, it is compared with the differential autocorrelation algorithm on the standard Lena image. In this experiment, the chosen subdivision factor is used to enlarge the spectrum by bilinear interpolation in order to obtain a higher-precision blur length.
As can be seen from Table 2, the proposed algorithm reaches sub-pixel accuracy and yields a better blur-length estimate than the traditional algorithm.

Conclusion
An algorithm combining phase consistency, the Radon transform and bilinear interpolation is proposed to estimate the point spread function of motion-blurred images. Experiments on blur-angle and blur-length estimation were carried out on different standard images; the absolute errors of the estimated blur direction and blur length are within 1° and 1 pixel, respectively. Compared with the traditional Radon transform, the method of R. Dash and the differential autocorrelation method, the proposed algorithm achieves a better estimate of the point spread function.