Reducing the dimension of the parametric space based on component approximation for the interpolation of multidimensional signals

In this paper, we address the problem of reducing the dimension of the parametric space in the task of interpolating multidimensional signals. We develop adaptive parameterized interpolation algorithms for multidimensional signals. To reduce the complexity of optimizing such algorithms, we reduce the dimension of their parameter space. The dependences of the samples within signal sections and between signal sections are taken into account in different ways. We account for the dependencies between signal sections through an approximation algorithm for the sections, and for the sample dependencies within sections through an adaptive parameterized interpolation algorithm. As a result, we solve the optimization problem of the adaptive interpolator in a parameter space of lower dimension for each signal section separately. To study the effectiveness of the adaptive interpolators, we perform computational experiments on real-world multidimensional signals. The experimental results show that the proposed interpolator improves the efficiency of the compression method by up to 10% compared with the prototype algorithm.


Introduction
When working with multidimensional signals, such as hyperspectral signals [1], video signals [2], remote sensing data (RSD) [3], etc., it is necessary to process huge amounts of data. Increasing storage capacity does not solve this problem, and is fundamentally unsuitable for onboard processing systems that use a limited-bandwidth communication channel. The only solution in this situation is to use compression methods. Many compression methods, one way or another, use interpolation of a multidimensional signal [4] as one of the processing steps.
Currently, a large number of digital signal compression methods have been developed [5][6][7][8]. For example, fractal compression methods [9] provide the highest compression ratio. However, these methods are rarely used in real systems due to high computational complexity and unnatural distortions.
Wavelet compression methods [10] are the most preferable in terms of compression ratio and computational complexity. The only relatively widespread method of this kind is JPEG-2000 [11]. These methods are likely to be used more widely in the future, but at the moment they are used much less often than the less efficient JPEG compression method [12], based on the discrete cosine transform (DCT) [13].
However, methods based on discrete orthogonal transformations, as well as other computationally complex transformations (DCT, wavelet, Fourier, fractal, etc.), have rather high computational complexity, which may be unacceptable in real-time systems, onboard compression and data-transfer systems, etc. It should also be noted that such systems usually impose strict requirements on the quality control of compressed data, which is also difficult to ensure for compression methods based on these transformations, since controlling many types of error in the transformed spaces is difficult or impossible.
So, under strict requirements on computational complexity and the need for strict quality control of the compressed data, attention should be paid to compression methods that do not use any spectral spaces. Such compression methods, which work directly in the spatial domain, can provide data quality control with acceptable computational complexity.
One representative of these methods is the compression method based on hierarchical grid interpolation (HGI) [14][15]. This method also has many advantages that are essential for ground-based signal processing complexes (scale-independent access speed to a fragment, etc.).
HGI compression is based on interpolation of the compressible signal samples using resampled copies of the same signal, followed by entropy encoding [16] of the interpolation errors. The critical stage of the HGI compression method is the interpolator.
One of the most effective interpolators is the adaptive interpolator [17], which automatically switches between different interpolation functions at each point of the signal. This interpolator is parameterized and should be optimized in its parameter space for each signal separately. As the dimension of the signal increases, the dimension of the interpolator's parametric space grows as well. This can lead to a noticeable increase in the computational complexity of the entire processing procedure. Thus, a critical task is to reduce the dimension of the interpolator's parametric space.
In this paper, we solve this problem by using different signal processing algorithms along different coordinate axes of the signal. Cross-sections of the multidimensional hypercube of the signal are processed sequentially, and the correlation between these sections is exploited by approximating each section based on the sections already processed. Such processing decorrelates the approximation errors inside each section. As a result, we can use independent adaptive interpolators inside each section of the signal, and the dimension of these interpolators is reduced by one.

Hierarchical compression of multidimensional signals
The hierarchical method of compression of multidimensional signals is based on the ideas presented in [14][15]. This method uses a hierarchical representation of the original multidimensional signal $x(\bar{n})$, where $\bar{n} = (n_1, \ldots, n_K)$ is the vector of its coordinates (arguments).
We write the representation of this signal as a set of $L$ scale levels:

$X = \bigcup_{l=0}^{L-1} X_l, \qquad X_l = \{ x(\bar{n}) : \bar{n} \in I_l \},$   (2)

where $I_l$ is the set of sample indices of scale level $l$. The scale level with number $(L-1)$ is a "grid" of signal samples with step $2^{L-1}$ in each coordinate. Every following level $l$ is a grid of samples with step $2^l$ from which the samples with step $2^{l+1}$ are excluded.
During compression, the scale levels are processed sequentially: less resampled scale levels (with smaller numbers) are interpolated based on more resampled levels (with larger numbers). Since the post-interpolation residues are decorrelated, encoding these residues is more efficient than encoding the original sample values.
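As an illustration of the hierarchical representation above, the following sketch splits a 2-D signal into scale levels (the function name `scale_levels` and the 2-D restriction are assumptions for illustration; the method itself is K-dimensional):

```python
import numpy as np

def scale_levels(signal, L):
    """Split a 2-D signal into L hierarchical scale levels.

    Level L-1 is the grid with step 2**(L-1) in each coordinate; every
    finer level l keeps the samples on the grid with step 2**l that are
    not already present on the coarser grid with step 2**(l+1).
    Returns a dict: level number -> array of sample indices."""
    levels = {}
    for l in range(L - 1, -1, -1):
        step = 2 ** l
        mask = np.zeros(signal.shape, dtype=bool)
        mask[::step, ::step] = True
        if l < L - 1:
            mask[::2 * step, ::2 * step] = False  # exclude the coarser grid
        levels[l] = np.argwhere(mask)
    return levels
```

Since the levels are disjoint and jointly cover the grid, the level sizes always sum to the total number of samples.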

Adaptive interpolation of multidimensional signals during compression
With significant limitations on computational complexity, algorithms [4, 6] are often used for interpolation, using "smoothing" (averaging) over some set of reference samples:

$\hat{x}(\bar{n}) = \frac{1}{N} \sum_{k=1}^{N} x_k(\bar{n}),$   (3)

where $x_k(\bar{n})$ are the reference samples for interpolation and $N$ is the number of these reference samples.
With hierarchical compression, these samples belong to the previous (more resampled) scale level of the signal. The "averaging" interpolation algorithm of form (3) is accurate on slowly varying signal segments due to the averaging of noisy signal samples. However, the interpolation error increases in rapidly changing signal sections, i.e. at the boundaries of uniform areas. To improve the interpolation accuracy in these parts of the signal, we use nonlinear [4] algorithms, including adaptive [6] algorithms.
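The averaging interpolator (3) can be sketched for the common case of N = 4 reference samples taken from the coarser grid (the function name and the choice of the four neighbours are assumptions for illustration):

```python
import numpy as np

def averaging_interpolate(coarse, i, j):
    """Averaging interpolator of form (3): the interpolated sample is
    the mean of N = 4 nearest reference samples on the previous (more
    resampled) scale level `coarse`."""
    refs = [coarse[i, j], coarse[i + 1, j],
            coarse[i, j + 1], coarse[i + 1, j + 1]]
    return sum(refs) / len(refs)
```

On a smooth region the four neighbours are close in value and the mean suppresses noise; across a boundary the mean mixes samples from both sides, which is exactly the failure mode the adaptive interpolator addresses.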
With adaptive interpolation, we calculate some features in the local neighborhood of each signal sample and choose an interpolation function for each sample depending on the values of these local features. As local features, we use the values of the nearest reference samples or the absolute values of the differences of these samples:

$\delta_i(\bar{n}) = \left| x_i^{(1)}(\bar{n}) - x_i^{(2)}(\bar{n}) \right|, \quad 0 \le i < N_x,$   (4)

$\hat{x}(\bar{n}) = \begin{cases} \frac{1}{2}\left( x_i^{(1)}(\bar{n}) + x_i^{(2)}(\bar{n}) \right), & \text{if } \delta_i(\bar{n}) < t_i \text{ for } i = \arg\min_j \delta_j(\bar{n}), \\ \text{the averaging interpolator (3)}, & \text{otherwise}, \end{cases}$   (5)

where $N_x$ is the number of possible boundary directions in the local neighborhood of the current sample, $x_i^{(1)}(\bar{n})$ and $x_i^{(2)}(\bar{n})$ are the nearest reference samples located along the boundary of the $i$-th direction, and $t_i$ are the algorithm parameters that control the switching of interpolation functions. The adaptive interpolator is optimized over these parameters for each signal realization separately [17]. At the same time, the dimension of the parametric space grows with the dimension of the signal, and therefore this optimization can be computationally complex.
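The switching scheme described above can be sketched as follows. This is a minimal illustration of one plausible rule consistent with the text (the exact rule and the optimization of the thresholds follow [17]); the function name, the argument layout, and the fallback policy are assumptions:

```python
import numpy as np

def adaptive_interpolate(pairs, thresholds):
    """Sketch of an adaptive interpolator: for each boundary direction i
    the feature is the absolute difference of the two nearest reference
    samples along that direction.  If the smallest feature is below its
    threshold t_i, the sample is interpolated along that direction
    (a boundary is likely there); otherwise the averaging interpolator
    of form (3) over all references is used.

    pairs      : list of (x1, x2) reference-sample pairs, one per direction
    thresholds : list of switching parameters t_i, one per direction
    """
    deltas = [abs(a - b) for a, b in pairs]
    i = int(np.argmin(deltas))              # most likely boundary direction
    if deltas[i] < thresholds[i]:
        a, b = pairs[i]
        return (a + b) / 2.0                # interpolate along the boundary
    refs = [v for p in pairs for v in p]
    return sum(refs) / len(refs)            # fall back to averaging
```

Note how the thresholds change the answer for the same neighborhood: with a loose threshold the interpolator follows the smoothest direction, while with a tight threshold it reverts to plain averaging.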

Approximation of cross-sections of a multidimensional signal during the compression
In this paper, we propose to process the cross-sections (bands) of a $K$-dimensional signal sequentially. For most natural signals, these cross-sections are interdependent. We approximate each cross-section $X_s$ by a linear combination of the $N_a$ previously processed sections:

$\tilde{X}_s = \sum_{i=0}^{N_a - 1} \alpha_i X_{s-i-1},$   (6)

where the coefficient vector $\bar{\alpha} = (\alpha_0, \ldots, \alpha_{N_a-1})^T$ is found from the system $R\bar{\alpha} = \bar{r}$, in which $R$ is the correlation matrix of the base sections and $\bar{r}$ is the column vector of correlation coefficients between the components $X_s$ and $X_{s-i-1}$.
Thus, when a multidimensional signal is compressed, instead of compressing the original section $X_s$, the differential section $(X_s - \tilde{X}_s)$ is compressed, equal to the difference between the original section and its approximation. For each section, the hierarchical compression method with the adaptive interpolator described above is used. The adaptive interpolator is optimized using an algorithm that generalizes the one described in [17]. The difference lies in the fact that data of smaller dimension arrive at the input of the interpolator, which considerably simplifies the interpolator optimization problem.
So, the interdependence along one of the signal coordinates is taken into account by approximating the signal cross-sections, and the interdependence along all other coordinates is taken into account through the adaptive parameterized interpolator, which is optimized in a space of smaller dimension.
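The section approximation step can be sketched with ordinary least squares (a sketch under assumptions: the text derives the weights from inter-section correlation coefficients, which for centered sections leads to the same normal equations; the function name and array layout are illustrative):

```python
import numpy as np

def approximate_section(prev_sections, section):
    """Approximate section X_s as a linear combination of the N_a
    previously processed base sections, and return both the
    approximation and the residual (X_s - X~_s) that is actually
    passed to the hierarchical compressor.

    prev_sections : array of shape (N_a, H, W) - base sections
    section       : array of shape (H, W)      - section to approximate
    """
    A = prev_sections.reshape(len(prev_sections), -1).T   # (H*W, N_a)
    b = section.ravel()
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)         # optimal weights
    approx = (A @ alpha).reshape(section.shape)
    return approx, section - approx
```

When a section really is close to a linear combination of the base sections, the residual is near zero, so its entropy-coded representation is far cheaper than that of the original section.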

Experimental study of the adaptive interpolator
In this paper, the proposed adaptive interpolator (5) of a multidimensional signal, including the signal cross-section approximator, was implemented in C++ within the described hierarchical compression method. The software implementation was used to study the effectiveness of the proposed interpolator. Natural 16-bit AVIRIS hyperspectrometer data [18] were used as test multidimensional signals (some fragments are shown in Figure 2).
In the study of the adaptive interpolator, the averaging interpolator (3) was used as the base for comparison. The measure of the effectiveness of the adaptive interpolator (as part of the compression method) was the gain in archive size (in percent) provided by replacing the smoothing interpolator with the adaptive one:

$\Delta = \frac{S_{base} - S_{new}}{S_{base}} \cdot 100\%,$   (9)

where $S_{base}$, $S_{new}$ are the archive sizes of the hierarchical compression method when using the base (averaging) and the new (adaptive) interpolators, respectively. We also investigated the dependence of the efficiency measure (9) on the maximum error [19] introduced during compression:

$\varepsilon_{max} = \max_{\bar{n}} \left| x(\bar{n}) - y(\bar{n}) \right|,$   (10)

where $x$, $y$ are the original and decompressed signals, respectively.
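Both quality measures used in the experiments are straightforward to compute; a minimal sketch (function names are illustrative):

```python
import numpy as np

def archive_gain(size_base, size_new):
    """Gain (in percent) from replacing the averaging interpolator with
    the adaptive one: relative reduction of the archive size (9)."""
    return 100.0 * (size_base - size_new) / size_base

def max_error(x, y):
    """Maximum absolute error between the original signal x and the
    decompressed signal y, the error measure controlled during
    compression."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.max(np.abs(x - y)))
```

For example, shrinking a 100 MB archive to 90 MB corresponds to a gain of 10%, the order of magnitude reported in the experiments below.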
Typical results of the experiments are presented in Figure 3. We can see that the adaptive interpolator gains up to 10% over the averaging interpolator, and the gain increases with increasing maximum error. We also calculated the gain of the adaptive interpolator with approximation over the adaptive interpolator without approximation. In this case, we calculated the dependence of this gain on $N_a$ (the number of base sections for the approximation) for four test multidimensional signals (see Figure 4). We can see that the adaptive interpolator with approximation gains up to 89%.

Conclusion
We reduced the dimension of the parameter space of the adaptive interpolator of multidimensional signals. This dimension reduction was achieved by using structurally different interpolation algorithms that take into account the relationships within signal sections and between signal sections. The sample dependencies between signal sections were exploited through the algorithm approximating some sections by others; the dependencies within sections were exploited through parameterized interpolation inside the sections. We built the considered adaptive interpolator into the hierarchical signal compression method. We experimentally showed that the use of the adaptive interpolator improves the efficiency of the considered compression method.