Methods of spatial and temporal processing of images in optoelectronic control systems

Data on the effect of image quality on the accuracy of information parameters are presented. Studies of the accuracy of measuring the characteristics of objects in their images as a function of image quality are described. The studies showed that the accuracy of measurement algorithms improves as the sample size of frames in the optoelectronic system (OES) increases. Image quality is improved by processing the frame samples with algorithms that increase resolution and reduce noise levels, both in single-point and in multi-position OESs based on digital cameras with matrix photodetectors. The maximum gain in the information content of the image and in the accuracy of measuring algorithms is determined by variations in the pixel structure of each image near brightness gradient boundaries. In single-point OESs that ensure high quality of optical images without noise factors and variations of other external parameters, the optimal number of frames is four.


Introduction
Images obtained in optoelectronic systems (OES) are used to determine the informational characteristics of objects. Currently, various methods are used to improve image quality, including joint processing of images obtained in a time sequence with one OES or with several independent OESs. Principles of constructing a single-camera light field (LF) OES are known [1,2]: such cameras record both the coordinates and the directions of the rays and are equivalent to a large array of low-resolution digital cameras. LF file processing algorithms produce images from different sets of rays, which corresponds to changing the viewing angle of the OES and the depth of the sharply imaged space.
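The depth-of-field manipulation mentioned above can be illustrated with a minimal shift-and-add refocusing sketch. This is not the Lytro processing pipeline; the function name, the dictionary-of-views representation, and the integer-shift simplification are our assumptions. Each sub-aperture view is shifted in proportion to its aperture coordinates and the views are averaged:

```python
import numpy as np

def refocus(subaperture_images, alpha):
    """Synthetic refocus by shift-and-add of sub-aperture views.

    subaperture_images: dict mapping (u, v) aperture coordinates to 2-D arrays.
    alpha: relative focal depth; each view is shifted by alpha*(u, v) pixels.
    """
    acc = None
    for (u, v), img in subaperture_images.items():
        # integer shift for simplicity; real pipelines interpolate sub-pixel shifts
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subaperture_images)
```

Changing `alpha` changes which depth plane the shifted views reinforce, which is the "depth of the sharply imaged space" control described above.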
There is a practical need to improve image quality in order to obtain accurate information about the object. Currently, there are approaches to the quantitative assessment of the quality and complexity of images based on information theory and the concept of entropy [3]. The complexity of discrete images can be estimated by comparing their two-dimensional variations. A common metric characteristic of the variability and complexity of a one-dimensional function on [a, b] is its total variation. Many generalizations of it to functions of several variables have been suggested (the Vitali, Arzelà, Fréchet, and Tonelli variations). All of them are variants of the integral of the function gradient modulus over the region [a, b], and for real functions their values are close. Each variation yields a single value based on a quantity similar to the function gradient modulus at a point. Generalizing the conclusions and theorems formulated for the above variations, A.S. Kronrod concluded that a function of several variables f(r), r = (x1, x2, …), should be characterized by several independent functionals: those that determine the length of the boundaries of its components, and a number that characterizes the prominence of the local extrema of f(r). The complexity of an image can also be determined by its fractality [4]. The Minkowski dimension is one way to specify the fractal dimension of a bounded set in a metric space; it is closely related to the Hausdorff dimension [4]. In practice, other methods for assessing the complexity and information content of images can also be used (e.g., segmentation) [5]. In the watershed method, the absolute value of the image gradient is treated as a topographic surface.
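The Minkowski (box-counting) dimension mentioned above can be estimated numerically by counting occupied boxes at several box sizes and fitting the slope of log N(s) against log(1/s). A minimal NumPy sketch, assuming a square binary edge mask (the function name is ours):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the Minkowski (box-counting) dimension of a binary mask.

    Counts non-empty boxes N(s) at each box size s and fits the slope of
    log N(s) versus log(1/s); a filled region gives ~2, a curve gives ~1.
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # trim so the mask tiles exactly into s-by-s blocks, then count
        # blocks that contain at least one set pixel
        trimmed = mask[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Applied to an edge map of an image, this slope gives the fractality estimate used below as a quality indicator.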
A relevant issue is to increase the accuracy and reliability of the data about OES objects obtained from analysis of the pixel structure of their images.
In some cases, image processing methods [6] are used to improve the quality parameters of the object's structure and prepare it for the uniform application of measurement algorithms. Measuring algorithms are constructed by analyzing the pixel structure of the image, in particular, the curves of its brightness gradient [7]. The greatest accuracy in measuring geometric parameters is ensured by integral methods of analyzing the contrast gradient, for example, using a wavelet basis [8,9], which reduces the requirements on the pixel structure and on the brightness (contrast) gradient of the edges.
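As a hedged illustration of wavelet-based edge localization (not the specific algorithm of [7-9]; the derivative-of-Gaussian wavelet, the scale set, and the function name are our assumptions), an edge in a one-dimensional brightness profile can be located at the extremum of its wavelet response, which is insensitive to single-pixel structure:

```python
import numpy as np

def edge_position_cwt(profile, scales=(1.0, 2.0, 4.0)):
    """Locate a brightness edge in a 1-D profile as the extremum of its
    convolution with derivative-of-Gaussian wavelets (a simple CWT)."""
    x = np.arange(-16, 17, dtype=float)
    best_pos, best_resp = None, -np.inf
    for s in scales:
        # first derivative of a Gaussian at scale s (an odd, edge-sensitive kernel)
        kernel = -x * np.exp(-x**2 / (2 * s**2)) / s**3
        resp = np.convolve(profile, kernel, mode='same')
        i = int(np.argmax(np.abs(resp)))
        if abs(resp[i]) > best_resp:
            best_resp, best_pos = abs(resp[i]), i
    return best_pos
```

Searching over several scales is what relaxes the requirements on edge sharpness: a blurred edge still produces a clear response maximum at a coarser scale.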
Obviously, when choosing the means for obtaining spatio-temporal (R3-t) information from a single-point or multi-position OES and the image processing method, it is necessary to determine what measurement information can be obtained, with what accuracy and reliability, and to formulate the requirements for the OES.
Due to the large number of configurations of existing OESs, there are no specific recommendations on the optimal choice of means for obtaining and processing information, and there is no general method for constructing algorithms that extract the information parameters of various objects. We studied the effect of image quality on the accuracy of the information parameters of objects and analyzed whether and how the quality of the source images, including time-sequence images, should be improved.

Materials and methods
We used an OES based on a National Instruments (NI) smart camera, the NI 1742 (640×480 CCD matrix).
A light field recorder based on the Lytro ILLUM digital camera (version B5-0036 ILLUM) [2] with a CMOS sensor (Aptina MT9F002, 14.4 MP, 1/2.3″, effective image area 6.14 × 4.6 mm, pixel size 1.4 μm) was used. The microlens array contains 130,000 lenses (focal length 25 μm, pitch 13.89 μm). The camera had a 9.5-77.8 mm main lens with a relative aperture of 1:2. The Lytro Desktop application was used to process the light field files.
To construct the processing and measurement algorithms, we used the NI LabVIEW application development environment based on virtual instruments [10], together with the NI Advanced Signal Processing Toolset module, which provides signal processing functions based on combined time-frequency and wavelet analysis. The driver functions of the NI IMAQ Vision machine vision module were used [11]. To capture image frames in real time, the NI Vision Builder for Automated Inspection (Vision Builder AI) application was used, and the NI Vision Assistant application was used to build prototypes of the processing and measurement algorithms.

Studies of accuracy of identification of information characteristics in OES images
The OES with a set of photodiodes (linear or matrix) can provide a four-dimensional information structure which can be analyzed using known methods. To study the accuracy and reliability of the information parameters of the object, the size of the contour was measured using standard algorithms (NI IMAQ Vision) for finding edges of the brightness gradient. At the first stage, we used algorithms for determining the edge lines and compared the results with the true values, taking into account the calibration of the optical system (determination of the linear magnification of the optical system). For more accurate measurements of linear dimensions, an algorithm based on the continuous wavelet transform (CWT) of the illumination distribution along the image profile lines [7] was used; it eliminates the effect of the pixel structure of the edges [13]. The results of finding straight edges along the contour of a satellite image using the Find Straight Edge algorithm are presented in Fig. 1 and Table 1. An estimate of the fractality of the image (Fig. 1a) is shown in Fig. 1b. To determine the effect of image quality improvement on measuring accuracy, we used the method of combining (adding) a series of images implemented in the RegiStax, AutoStakkert, PIPP, and PhotoAcute Studio programs. The algorithms of these programs increase the resolution of the image and eliminate the effects of noise and random artifacts, transient mechanical instability of the OES, and the refractive-index gradient of the medium. Image quality was then compared with its fractality and entropy and with the measuring accuracy of the geometric parameters of the objects. In some cases, measuring accuracy increases as a result of processing a single image, for example, by increasing the local contrast [14]; however, the quality-improvement effect is then smaller.
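The combining (adding) of a frame series can be sketched as registration followed by averaging. The sketch below is a simplified stand-in for what programs such as RegiStax or AutoStakkert do, restricted to integer-pixel shifts estimated by FFT cross-correlation; the function names are ours:

```python
import numpy as np

def register_shift(ref, img):
    """Integer-pixel shift of img relative to ref via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above N/2 to negative shifts
    shift = [d if d <= n // 2 else d - n for d, n in zip(idx, corr.shape)]
    return tuple(shift)

def stack_frames(frames):
    """Align each frame to the first by its estimated shift, then average.

    For independent zero-mean noise, averaging K aligned frames reduces the
    noise standard deviation roughly by a factor of sqrt(K)."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = register_shift(ref, f)
        acc += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return acc / len(frames)
```

Real stacking software additionally performs sub-pixel alignment and frame quality selection, which is where the resolution gain beyond noise reduction comes from.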
Edges are determined by the re-pixelation of the image structure, which depends on the angle of its inclination in the coordinate system of the recorder matrix [13,15]. The nature of the change in the pixel structure is determined by the recorder and the sensor topology. Therefore, accuracy can be improved by scanning the profile line at the inclination angle of the edge [7,13].
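Scanning a profile line at the inclination angle of the edge requires sampling brightness along an arbitrary line rather than along pixel rows. A minimal bilinear-interpolation sketch (the function name and signature are ours):

```python
import numpy as np

def sample_profile(img, start, end, n=64):
    """Sample image brightness along the line start->end (row, col coords)
    with bilinear interpolation, so a profile can be taken perpendicular to
    an inclined edge instead of along the pixel rows."""
    ys = np.linspace(start[0], end[0], n)
    xs = np.linspace(start[1], end[1], n)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, img.shape[0] - 1)
    x1 = np.minimum(x0 + 1, img.shape[1] - 1)
    wy, wx = ys - y0, xs - x0
    # weighted average of the four neighbouring pixels
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])
```

Such a profile can then be fed to a 1-D edge locator, decoupling the measurement from the staircase re-pixelation of an inclined edge.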
The angle of the satellite R as a complex object and its coordinate position (X, Y) were measured on images of quality KI(ImN×M). Different image qualities were obtained by decreasing and increasing the resolution (N×M) of images derived from a series of frames with varying pixel-structure parameters.
Images obtained by the light field recorder [2,6] were studied. The light field recorder can be considered as an array of low-resolution digital cameras [1]. The depth of field in a flat image was ensured by setting a virtual lens aperture. The conversion of the volumetric characteristics of an object was specified in the Lytro Desktop application, and the dimensional spatial calibration was taken into account [12]. An increase in the resolution of the resulting image in LF digital cameras is achieved by an algorithm based on the four-dimensional Fourier transform Ḟ4 of the coordinates (x, y) and the ray propagation direction [1,16]. Similarly, this algorithm can be applied to a series of images of small-sized objects Ḟ4{ImK(R3-t)} obtained in sufficient number by one OES at various time points, or by scanning the image with an OES sensor. This conversion can increase the resolution of small-sized objects (100 pixels) by more than 100 times. Pixel variations were determined by the number of pixels under the microlenses. An image generation system on a single sensor is equivalent in principle to an LF digital camera with a microlens array. Processing a time sequence of images requires solving the problems of their spatial synchronization and of varying a significant parameter of the image. Variations in the microlens images are determined by the ray directions [1,16]. Thus, in registration systems it is necessary to have or create conditions of variation, for example, a sub-pixel shift or a change in the illumination accumulation time (in the case of brightness).
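The role of controlled sub-pixel variation between frames can be illustrated with a naive shift-and-add super-resolution sketch. This is not the four-dimensional Fourier algorithm of [1,16]; it assumes the sub-pixel offsets are known exactly and lie within one low-resolution pixel, and the function name is ours:

```python
import numpy as np

def shift_and_add(frames, offsets, factor=2):
    """Naive shift-and-add super-resolution: each low-res frame is placed on
    a grid upsampled by `factor` at its known sub-pixel offset (in low-res
    pixels, 0 <= offset < 1), then the accumulated samples are averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, offsets):
        ry, rx = int(round(dy * factor)), int(round(dx * factor))
        # nearest-neighbour placement; real pipelines deconvolve afterwards
        acc[ry::factor, rx::factor] += img
        cnt[ry::factor, rx::factor] += 1
    cnt[cnt == 0] = 1  # leave unsampled grid points at zero
    return acc / cnt
```

With four frames shifted by half a pixel in each direction, every high-resolution grid point receives a genuine sample, which is one way to read the observation above that four frames are optimal for a clean single-point OES.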
The experiments showed that summing a series of images improves measuring accuracy by up to 30% (two images) for a single-point OES and by up to 70% in a multi-position OES. This is due to the pixel blending of the second image. For multi-position OESs, the maximum effect is achieved with two images, after which the errors of transformation and combination have a strong influence. A further increase in resolution and accuracy occurs when there are random uncontrolled factors and significant noise. Image quality is characterized by its fractality, which is determined by the number (length) and distribution of the brightness gradient curves. Increased resolution makes it possible to resolve and analyze closely spaced brightness gradient lines (Fig. 2) and to determine the parameters of smaller details and image fragments. The function for finding edges along radial search lines (Find Circular Edge) has adjustable algorithm settings (Fig. 3). It determines the coordinates of the contour edges, the circle center for the resulting polar coordinate system, and the statistical variation of the radius. Varying the function parameters makes it possible to achieve stable results for the object shape and to evaluate their reliability. The number of radial search lines determines the level of detail of the analyzed contour. The spatial coordinates and size of the satellite can be determined from the deviation of its shape from a sphere. For images of various quality, the measured parameters change. The studies showed the dependence of the measured parameters on the resolution of the original image (Table 2).
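The radial-search-line approach can be sketched as follows. This is our simplified reimplementation for illustration, not NI's Find Circular Edge algorithm: gradient maxima along radial rays give per-ray radii, whose mean and spread serve as the measurement and its reliability estimate:

```python
import numpy as np

def find_circular_edge(img, center, r_max, n_rays=36):
    """Find a circular contour: along each radial search line from `center`,
    take the position of the strongest brightness change, then report the
    mean radius and its spread (a reliability measure)."""
    cy, cx = center
    radii = []
    for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        rs = np.arange(1, r_max)
        # nearest-pixel samples along the ray (clipped to the image)
        ys = np.clip((cy + rs * np.sin(ang)).round().astype(int), 0, img.shape[0] - 1)
        xs = np.clip((cx + rs * np.cos(ang)).round().astype(int), 0, img.shape[1] - 1)
        prof = img[ys, xs].astype(float)
        grad = np.abs(np.diff(prof))
        radii.append(rs[np.argmax(grad)] + 0.5)  # edge lies between samples
    radii = np.array(radii)
    return radii.mean(), radii.std()
```

Increasing `n_rays` refines the level of contour detail, and the returned spread grows when the object deviates from a circle, mirroring the behaviour of the settings discussed above.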

Conclusion
Our study of measurement algorithms on images of various quality shows that increasing the resolution of a multi-frame image can improve measurement accuracy by a factor of 2-5. For small objects with a high noise level, measurement accuracy can be doubled. For a light field image obtained by a digital camera in one exposure, the accuracy of the measurement algorithms is determined by the physical size of the sensor. Multi-frame image processing reduces the noise caused by high light sensitivity. Multi-frame quality improvement is effective when the camera viewing angle changes between frames. Multi-position systems provide more opportunities for obtaining higher-quality images but require knowledge of the exact configuration parameters of the OES.