Sparse Analyzer Tool for Biomedical Signals

A virtual (software) instrument with a statistical analyzer for testing algorithms for biomedical signal recovery in the compressive sensing (CS) scenario is presented. Various CS reconstruction algorithms are implemented with the aim of being applicable to different types of biomedical signals and different applications with under-sampled data. Incomplete sampling/sensing can be considered a form of signal damage, where missing data occur as a result of noise or an incomplete signal acquisition procedure. Many approaches for recovering the missing signal parts have been developed, depending on the signal's nature. Here, several approaches and their applications are presented for medical signals and images. The possibility of analyzing results using different statistical parameters is provided, with the aim of choosing the most suitable approach for a specific application. The instrument provides manifold possibilities, such as fitting different parameters for the considered signal and testing the efficiency under different percentages of missing data. Reconstruction accuracy is measured by the mean square error (MSE) between the original and reconstructed signals. Computational time is important from the aspect of power requirements, and its measurement enables the selection of a suitable algorithm. The instrument contains its own signal database, but any external data can also be loaded for analysis.


Introduction
The processing of under-sampled signals has attracted significant research interest in the last decade [1][2][3][4][5][6][7][8][9][10]. A signal is under-sampled if the number of available samples is less than the number required by the Shannon-Nyquist sampling theorem. Under-sampling can be done intentionally during the acquisition procedure, or signal samples can be lost during transmission or discarded owing to noise. Intentionally omitting signal samples and later reconstructing them has found use in applications dealing with a large number of signal samples, with the purpose of increasing processing speed. Such an approach can be important in medical applications. For example, in magnetic resonance imaging (MRI), lowering the required amount of data reduces the time of patient exposure to the harmful MR waves [11].
With the increasing use of wireless technology and smart devices in everyday life, portable medical devices have become very popular [12][13][14]. Modern technology is changing the way medicine approaches various health problems, across a number of diseases and for a large number of patients. Portable medical devices make it possible to monitor a patient's condition at any moment and in any place. The proposed tool allows the selection of an appropriate approach, combined with a suitable transformation, for achieving the most accurate reconstruction results for the considered signal types. The tool is designed in a user-friendly manner, even for non-specialists in the field, as it does not require prior knowledge of the implemented approaches. On the basis of statistical parameters measuring reconstruction efficiency, users are able to choose the most suitable among the offered solutions. It is also convenient for researchers working in the field, as it provides a set of comprehensive solutions for the processing of biomedical data that can be further extended or adapted for different purposes.
The paper is structured as follows. Section 2 provides a theoretical background on the CS. Section 3 describes approaches for signal recovery that are implemented within the instrument. Section 4 describes the software, while Section 5 gives the results applied to concrete biomedical signals and images. The conclusion is given in Section 6.
Compressive Sensing: Theoretical Background
If most of the coefficients X(k) are zero-valued or negligible, then X(k) represents a sparse representation of the signal x(n). For a K-sparse signal, only K coefficients of X(k) are non-zero, where K << N.
Signal x(n) and its transform X(k) are related via the following pair [2,4]:

X(k) = Σ_{n=0}^{N−1} x(n) ψ_k*(n),  x(n) = Σ_{k=0}^{N−1} X(k) ψ_k(n),

where ψ_k(n) and its inverse ψ_k*(n) are the basis functions. In the case of the discrete Fourier transform (DFT), as one of the most commonly used transforms, the basis function equals ψ_k(n) = e^{j2πnk/N}. Note that, in general, CS algorithms deal with arbitrary linear transformations, such as the discrete cosine transform (DCT), the Hermite transform (HT), and the wavelet transform [33,34], while some algorithms, like the gradient-based one [11,32], deal even with nonlinear transformations. In this work, the HT, DCT, and DFT are considered, as they have found many applications in biomedical data representation and analysis [35][36][37][38][39]. For example, the DFT is used for the analysis of electroencephalography (EEG) signals and blood pressure signals [35], the DCT and DFT are used in MRI image processing [36,37], while the HT is used in electrocardiography (ECG) and QRS signal analysis [38,39].

The discrete Hermite function of order p is defined as follows [5,6]:

ψ_p(n) = (2^p p! σ √π)^{−1/2} e^{−n²/(2σ²)} H_p(n/σ),

where H_p is the Hermite polynomial of the p-th order and σ is the scaling factor used for "stretching" or "compressing" the Hermite functions. The HT matrix is formed from the Hermite functions ψ_p(n), p = 0, …, N − 1, evaluated at the roots of the N-th order Hermite polynomial.
Sensors 2020, 20, 2602 4 of 18
The "stretching" or "compressing" of the Hermite functions is used for better fitting of the functions to the signal [38], providing better sparsity of the signal in the HT domain.
When dealing with the HT, the signal x(n) should be sampled at the non-uniform points that correspond to the roots of the Hermite polynomial. Another approach is to interpolate the uniformly sampled signal in order to obtain the required signal values at the non-uniform points. Hermite functions obtained by uniform sampling of the corresponding continuous functions are not orthogonal; therefore, in order to obtain orthogonal functions, interpolation is used. This second approach, interpolation, is the one implemented within the instrument. The signal sampled according to the sampling theorem is interpolated using the sinc interpolation formula [11,27]:

x(t) = Σ_n x(nT) sin(π(t − nT)/T) / (π(t − nT)/T),

where T denotes the sampling period. The optimal value of the parameter σ produces the best possible concentration in the transform domain, and it can be found using the ℓ1-norm optimization:

σ_opt = arg min_σ ||HT_σ{x}||_1. (5)

The instrument implements several commonly used sparsity measures that are suitable for the observed signals. Besides the ℓ1-norm concentration, there are many approaches for measuring sparsity, for example, entropy-based approaches [40,41] or the Gini index [41][42][43][44]. The Gini index satisfies most of the desirable characteristics of sparsity measures and overcomes the limitations of standard norm-based measures, as proven in [41]. It is suitable for comparing the sparsity of a signal in different transform domains [42] and is also used as a sparsity measure for biomedical signals [43,44]; therefore, this measure is implemented within the instrument together with the ℓ1-norm concentration. The Gini index is calculated according to the following relation:

GI = 1 − 2 Σ_{k=1}^{N} (x_s(k) / ||x_s||_1) · ((N − k + 1/2) / N),

where x_s is the sorted version of the set of coefficient magnitudes, in ascending order. The Gini index takes values between 0 and 1, and a higher value corresponds to better sparsity. Here, the HT is mainly implemented for application to electrocardiography (ECG) signals and their QRS complexes.
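Both concentration measures can be expressed in a few lines; an illustrative Python/NumPy sketch (not the instrument's MATLAB code):

```python
import numpy as np

def l1_norm(X):
    """l1-norm concentration measure: lower values indicate better sparsity."""
    return np.sum(np.abs(X))

def gini_index(X):
    """Gini index of a coefficient vector: values in [0, 1], higher = sparser."""
    x = np.sort(np.abs(np.asarray(X, dtype=float)))   # ascending order
    N = x.size
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum((x / x.sum()) * (N - k + 0.5) / N)
```

For example, a maximally sparse vector such as [1, 0, 0, 0] gives a Gini index of 0.75 under this formula, while a constant vector gives 0.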
An ECG signal represents the electrical activity of the heart over time, while the QRS complex is formed by three of the graphical deflections of the ECG signal [45]. The analysis of ECG signals and QRS complexes is used in heart function monitoring and disease diagnostics. The interval between two successive QRS complexes provides information about the regularity of the cardiac rhythm. Moreover, QRS observation is used in the diagnosis of other heart abnormalities, such as myocardial infarction or arrhythmia; therefore, the QRS complex represents an important part of the ECG signal. Bearing in mind that the Hermite functions show physical similarity with QRS complexes, they are found to be a suitable tool for their analysis [6,38,46].
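The Hermite functions discussed above can also be generated numerically; a sketch using NumPy's Hermite polynomial module (unit scaling σ = 1 assumed), with orthonormality checked at the Gauss-Hermite quadrature nodes:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss

def hermite_function(p, t):
    """Hermite function of order p with unit scaling (sigma = 1)."""
    coeffs = np.zeros(p + 1)
    coeffs[p] = 1.0                  # select H_p in the Hermite series
    norm = math.sqrt(2.0**p * math.factorial(p) * math.sqrt(math.pi))
    return hermval(t, coeffs) * np.exp(-t**2 / 2.0) / norm

# Orthonormality check at the Hermite polynomial roots via Gauss-Hermite
# quadrature (exact for polynomial integrands of this degree)
nodes, weights = hermgauss(20)
w = weights * np.exp(nodes**2)       # undo the e^{-t^2} quadrature weight
ip_33 = np.sum(w * hermite_function(3, nodes) ** 2)
ip_23 = np.sum(w * hermite_function(2, nodes) * hermite_function(3, nodes))
```

Here ip_33 evaluates to 1 and ip_23 to 0 (up to rounding), confirming orthonormality at the non-uniform sampling points mentioned above.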
Another observed transformation, the DCT, has the following form:

X(k) = c(k) Σ_{n=0}^{N−1} x(n) cos(π(2n + 1)k / (2N)), k = 0, …, N − 1,

where c(0) = √(1/N) and c(k) = √(2/N) for k ≠ 0.

In matrix notation, the above equations can generally be written as X = Ψx and x = Ψ^{−1}X, where vector X has elements X(k) and vector x has elements x(n). Both vectors are of length N, and Ψ is the N × N transform matrix with elements ψ_k*(n).
Suppose an M-length vector y, which is a linear combination of elements from vector X. This vector is obtained as follows [4]:

y = AX,

where A is an M × N matrix. The CS task is to reconstruct the signal X (or x) from the vector y. Note that the length of vector y is lower than the length of X, since M < N. The construction of matrix A attracts significant research interest. A randomly constructed sensing matrix is considered in all implemented algorithms: a random matrix Φ is multiplied by the transform matrix Ψ, resulting in the compressive sensing matrix A. The matrix A is called the random partial transform domain matrix, containing the random combination of rows from Ψ that corresponds to the random positions of the available samples. The vector y of length M is equal to the samples of signal x taken at the positions corresponding to the preserved rows of Ψ^{−1}, that is, y(m) = x(n_m), m = 1, …, M, where n_m denotes the positions of the available samples.

The reconstruction error may depend on the choice of the sparsity domain and the reconstruction algorithm, but in certain cases, it may also be a consequence of quantization, as discussed in [47]. Namely, limiting the number of bits used for the representation of the available signal samples can affect the reconstruction performance. If the measurements y(m) are normalized to the range −1 ≤ y(m) ≤ 1, and B-bit registers are assumed, then the sparse coefficients X(i) have to be within the range

−min{√M/K, 1} < X(i) < min{√M/K, 1}

in order for the product y = AX to produce amplitudes below 1 [47]. The reconstruction error related to the number of bits is given by [47]

e² = 3.01 log₂K − 6.02B − 7.78.

The proposed solution is designed for 64-bit computer devices, so the effects of quantization can be considered negligible in this case. However, quantization issues should be carefully considered, especially for hardware realizations of the considered reconstruction algorithms.
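The measurement model above can be illustrated with a short sketch (illustrative sizes, a DFT basis, and randomly chosen sample positions; not the instrument's code):

```python
import numpy as np

np.random.seed(0)
N, M = 32, 12
n = np.arange(N)

# K-sparse spectrum: two non-zero DFT coefficients (conjugate pair -> real signal)
X = np.zeros(N, dtype=complex)
X[3], X[N - 3] = 8.0, 8.0
x = np.fft.ifft(X)                   # time-domain signal, x = Psi^{-1} X

# Random positions of the M available samples
pos = np.sort(np.random.choice(N, M, replace=False))
y = x[pos]                           # measurement vector

# Random partial transform matrix: the kept rows of Psi^{-1} (inverse DFT)
Psi_inv = np.exp(2j * np.pi * np.outer(n, n) / N) / N
A = Psi_inv[pos, :]                  # y = A X holds by construction
```

The reconstruction task is then to recover the N-length vector X from the M measurements in y.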

Approaches for Under-Sampled Signal Reconstruction
Another direction within the CS area is related to the reconstruction algorithms for compressed sensed data. Many signal reconstruction algorithms have been proposed depending on the type of signal and CS scenario [4,11,[16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. The performance of these algorithms may vary depending on the number of missing samples, level of sparsity, and amount of external noise, and there is still a lack of general instructions for their practical use.
Among the algorithms for 1D signal reconstruction included in the Virtual instrument, the ℓ1-magic algorithm is included from the class of convex algorithms. Next, the orthogonal matching pursuit (OMP) algorithm [21] from the class of greedy algorithms is implemented, as well as the single iteration reconstruction algorithm (SIRA), based on analytical threshold derivations [9,11,26], and the generalized deviation-based reconstruction algorithm (GDBRA) [23]. Finally, as an efficient and simple solution, the gradient-based convex algorithm is implemented [32]. This algorithm suits a general class of signals, in both the 1D and 2D cases. It can be successfully used when the measurements are affected by noise, and it also provides satisfactory results for natural image reconstruction from a highly reduced set of pixels. Greedy approaches such as SIRA, OMP, and GDBRA are faster, but less precise compared with convex optimization algorithms, and they also require a priori knowledge about the signal (e.g., the number of components). The implemented reconstruction algorithms are briefly described in the sequel.

L1-Magic
The solution of problem (9) requires exhaustive searches over subsets of columns of the CS matrix A and is therefore not computationally feasible. Computationally more suitable approaches solve a convex optimization problem through linear programming, such as basis pursuit (BP), basis pursuit de-noising (BPDN), least angle regression (LARS), the least absolute shrinkage and selection operator (LASSO) [4,11,18,21,22], and so on. A near-optimal approach is provided by the convex relaxation

min ||X||_1 subject to y = AX. (11)

This problem can be recast in standard linear program form [22]. At the optimal point, there exist dual vectors ν* ∈ R^K and λ* ∈ R^M that satisfy the Karush-Kuhn-Tucker conditions. The algorithm finds the needed vector (the optimal dual vector) by solving a system of nonlinear equations: at the interior point (X, ν*, λ*), the system is linearized and solved.
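The equality-constrained ℓ1 problem (11) can be solved with a generic LP solver by splitting X into positive and negative parts; a minimal sketch (random Gaussian A, illustrative sizes, and SciPy's linprog standing in for the ℓ1-magic solver):

```python
import numpy as np
from scipy.optimize import linprog

np.random.seed(1)
N, M = 24, 12
A = np.random.randn(M, N)

X_true = np.zeros(N)
X_true[[4, 17]] = [2.0, -3.0]        # K = 2 sparse vector
y = A @ X_true

# min ||X||_1 s.t. AX = y, recast as an LP with X = u - v, u >= 0, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
X_rec = res.x[:N] - res.x[N:]
```

With enough measurements relative to the sparsity, the LP solution coincides with the sparse vector that generated y.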

Gradient-Based Algorithm
The recently proposed gradient algorithm [32] for sparse signal reconstruction is a computationally efficient algorithm belonging to a wide class of gradient CS algorithms. The idea is to treat the missing samples in the dense (time) domain as variables, which are adjusted so as to produce the minimal concentration measure in the sparse domain. Reconstructing the missing samples directly is the main difference between this algorithm and the others, which mainly reconstruct the signal in the sparse domain. The implementation steps for the 1D signal are given in Algorithm 1.
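A minimal sketch of this idea, assuming a DFT sparsity domain, a fixed finite-difference perturbation, and a heuristic fixed step size (the algorithm in [32] adapts both):

```python
import numpy as np

def measure(x):
    """l1-norm concentration measure in the DFT (sparsity) domain."""
    return np.sum(np.abs(np.fft.fft(x)))

def gradient_reconstruct(y, pos, N, n_iter=1000, mu=0.002):
    """Treat the missing samples as variables and move them against a
    finite-difference estimate of the concentration-measure gradient."""
    x = np.zeros(N)
    x[pos] = y                                  # available samples stay fixed
    missing = np.setdiff1d(np.arange(N), pos)
    delta = np.max(np.abs(y))                   # perturbation for the gradient estimate
    best_m, best_x = measure(x), x.copy()
    for _ in range(n_iter):
        g = np.zeros(N)
        for i in missing:
            x[i] += delta
            m_plus = measure(x)
            x[i] -= 2 * delta
            m_minus = measure(x)
            x[i] += delta
            g[i] = (m_plus - m_minus) / (2 * delta)
        x[missing] -= mu * g[missing]           # heuristic fixed-size gradient step
        m = measure(x)
        if m < best_m:                          # keep the best iterate seen
            best_m, best_x = m, x.copy()
    return best_x

# Demo: a signal sparse in the DFT domain with 8 of 32 samples missing
np.random.seed(2)
N = 32
n = np.arange(N)
x_true = np.cos(2 * np.pi * 3 * n / N)
pos = np.sort(np.random.choice(N, 24, replace=False))
x0 = np.zeros(N)
x0[pos] = x_true[pos]
x_rec = gradient_reconstruct(x_true[pos], pos, N)
```

Driving the ℓ1 measure down pulls the zero-filled gaps toward the values that restore the sparse spectrum.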

SIRA-Single Iteration Reconstruction Algorithm
Single iteration reconstruction algorithm [3,6,7,25] is based on the idea of separating signal components in the sparsity domain from the noise components that appear as the consequence of missing samples. The required assumption is that all signal components are above the calculated threshold, while the noise components are under the threshold. The probability that values corresponding to noise are lower than T is P(T).
Depending on the sparsity domain, the threshold is calculated according to the relations derived in [6,7,25]: T_DFT in the case of the DFT as the sparsity domain, T_DCT if the DCT is considered, and T_HT in the HT domain. The threshold expressions use the constants a = 0.147 and L = log(1 − (P(T))^{2/M}), where P(T) is the desired probability (e.g., 0.99).

Algorithm 1. Gradient-based algorithm
Input: set of the positions of the missing samples N_x; measurement vector y.

In Equations (14)-(16), M denotes the number of missing samples, N is the signal length, and K is the sparsity. The parameter N − K in T_DFT can be approximated by N, based on the fact that K << N. The steps of the algorithm are given in Algorithm 2. The resulting vector X contains the signal component values X_R at positions k, while the remaining transform domain coefficients are zero.

Algorithm 2. SIRA
Input: measurement vector y; M × N matrix A; signal sparsity K
Step 1: Set the desired probability, e.g., P ← 0.99.
Step 2: Calculate the variance, using the relation corresponding to the chosen transformation domain (DFT, DCT, or HT).
Step 3: For the given P, calculate the threshold according to relation (13).
Step 4: Calculate the initial transform domain vector X_0: X_0 = A^{−1}y.
Step 5: Find the positions of components in X_0 higher than the normalized threshold T: k = arg{|X_0| > T/N}.
Step 6: Form the CS matrix by using only the k columns from A: A_CS ← A(k).
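The single-iteration idea can be sketched in the DFT domain as follows; a simple fixed relative threshold stands in for the statistically derived thresholds of Equations (14)-(16):

```python
import numpy as np

np.random.seed(4)
N, M = 32, 16
n = np.arange(N)

# 1-sparse test signal: X_true(5) = N, i.e., x(n) = exp(j*2*pi*5*n/N)
x_true = np.exp(2j * np.pi * 5 * n / N)
pos = np.sort(np.random.choice(N, M, replace=False))
y = x_true[pos]

# Initial estimate from the available samples (rescaled DFT)
X0 = (N / M) * np.exp(-2j * np.pi * np.outer(n, pos) / N) @ y

# Simplified threshold: half of the largest coefficient magnitude
k = np.where(np.abs(X0) > 0.5 * np.abs(X0).max())[0]

# Least-squares solution on the detected support
A = np.exp(2j * np.pi * np.outer(pos, n) / N) / N     # partial inverse DFT
X_k, *_ = np.linalg.lstsq(A[:, k], y, rcond=None)
X = np.zeros(N, dtype=complex)
X[k] = X_k
```

Components above the threshold are kept as the signal support, and one least-squares solve recovers their exact amplitudes.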

GDBRA-Generalized Deviation-Based Reconstruction Algorithm
This algorithm uses the model of general deviations in the transform domain instead of transform representation itself. The algorithm works with signals sparse in the DFT domain, and the generalized deviations are derived for the DFT case. The use of generalized deviations is inspired by the robust statistics theory, where the form of transformation is adapted to the specific noise environment. Consequently, this approach provides flexibility of using different types of minimization norms for different noisy environments. The algorithm [23] is described through the steps in Algorithm 3.
Step 2: Determine the signal support k = arg{GD(k) < T}, where T can be calculated with respect to the median of GD(k), or p · median(GD(k)) (where p is a constant close to 1), for example, but also with respect to the mean or the minimal value. The vector of positions k should contain all signal frequencies, k_i ∈ k for every i = 1, …, K.
Step 3: Set X(k) = 0 for frequencies k ∉ k.
Step 4: The estimates of the DFT values are calculated by solving a set of equations at the localized frequencies from the vector k, where k contains the K signal frequencies, k = {k_1, k_2, …, k_K}.
Step 5: The CS matrix A_CS is formed as a partial DFT matrix: its columns correspond to the positions of the available samples, and its rows correspond to the selected frequencies. The system A_CS X_k = y is solved in the least squares sense.

OMP-Orthogonal Matching Pursuit
Orthogonal matching pursuit (OMP) is a greedy algorithm that, through iterations, finds the columns of the matrix A best correlated with the measurement vector y. In each iteration, the column of A corresponding to a certain sparse domain coefficient is selected. The OMP implementation is given in Algorithm 4.
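A compact OMP sketch (random Gaussian A and illustrative sizes, following the correlation-plus-least-squares structure of [21]):

```python
import numpy as np

def omp(y, A, K):
    """Orthogonal matching pursuit: K correlation + least-squares iterations."""
    N = A.shape[1]
    r = y.copy()
    support = []
    X_s = np.zeros(0)
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ r)))      # best-correlated column
        support.append(j)
        X_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ X_s              # residual orthogonal to chosen columns
    X = np.zeros(N)
    X[support] = X_s
    return X

np.random.seed(3)
M, N, K = 24, 32, 2
A = np.random.randn(M, N)
X_true = np.zeros(N)
X_true[[5, 20]] = [1.5, -2.0]
y = A @ X_true
X_rec = omp(y, A, K)
```

Because the residual is re-orthogonalized against all chosen columns in each iteration, a column is never selected twice.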

TV Minimization
Another approach, total variation (TV) minimization, is implemented in the Virtual instrument. This approach can be successfully applied to both 1D and 2D signals. Generally, images are not strictly sparse either in the spatial or in the transform domain. Therefore, the ℓ1-minimization of the image gradient, named TV minimization, is found to be a more suitable approach than minimizing the image coefficients. This approach is suitable for noisy signals as well, and it is described by the following relation:

min ||x||_TV subject to y = Ax,

where ||x||_TV denotes the ℓ1-norm of the discrete image gradient. In this paper, TV minimization is combined with the DFT and DCT transformations.
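The TV measure itself is simple to compute; a sketch of the anisotropic variant (sum of absolute horizontal and vertical differences), which is small for piecewise-constant images even when their transform coefficients are not sparse:

```python
import numpy as np

def tv(img):
    """Anisotropic total variation: l1-norm of the discrete image gradient."""
    dh = np.abs(np.diff(img, axis=1)).sum()   # horizontal differences
    dv = np.abs(np.diff(img, axis=0)).sum()   # vertical differences
    return dh + dv

flat = np.ones((8, 8))                        # constant image: TV = 0
step = np.zeros((8, 8))
step[:, 4:] = 5.0                             # one vertical edge: TV = 8 * 5 = 40
```

Minimizing this quantity subject to the measurement constraint favors images with few sharp transitions, which suits many medical images.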

Algorithm 4: Orthogonal matching pursuit
Input: measurement vector y; M × N matrix A; signal sparsity K
Set initial residual: r_0 ← y
Set initial indices: Ω_0 ← ∅
Set matrix of chosen columns: A_{Ω_0} ← [ ]
Output: x_K and Ω_K

Douglas-Rachford Algorithm
An efficient solution for 2D signal recovery is based on the Douglas-Rachford (DR) algorithm and its special case, the alternating direction method of multipliers (ADMM) [31,48]. The ADMM, a variation on the method of multipliers, is a special case of Douglas-Rachford splitting. The DR problem is defined as follows [31,48]:

min_x f(x) + g(x),

where f and g are convex functions that need not be smooth, only proximable. The proximal mapping of a function h is computed as follows [48]:

prox_{γh}(x) = arg min_z (1/2)||x − z||² + γh(z).

In CS terms, f(x) = ι_H(x) and g(x) = ||x||_1, where the affine space H is defined as H = {x : Ax = y}, while ι_H is the indicator function [48]:

ι_H(x) = 0 if x ∈ H, and ι_H(x) = +∞ otherwise.

Therefore, the proximal operators for f(x) and g(x) are the projection onto H and the soft-thresholding operator, respectively. The DR algorithm iteratively finds a solution according to the following relation:

x^{(k+1)} = (1 − μ/2) x^{(k)} + (μ/2) rprox_{γg}(rprox_{γf}(x^{(k)})),

where rprox denotes the reversed proximal mapping given (for a function h) by rprox_{γh}(x) = 2 prox_{γh}(x) − x.
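With f the indicator of the affine set H and g the ℓ1-norm, both proximal operators are explicit (an affine projection and soft thresholding), so the DR iteration can be sketched directly (illustrative sizes, μ = 1; not the UNLocBoX implementation):

```python
import numpy as np

def soft(z, t):
    """prox of t*||.||_1: soft thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dr_basis_pursuit(A, y, gamma=1.0, n_iter=5000):
    """Douglas-Rachford iterations for min ||X||_1 subject to AX = y."""
    AAt_inv = np.linalg.inv(A @ A.T)
    def proj(z):                          # prox of the indicator: projection onto H
        return z + A.T @ (AAt_inv @ (y - A @ z))
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(z, gamma)                # prox of g = gamma * l1-norm
        z = z + proj(2 * x - z) - x       # DR update (mu = 1)
    return soft(z, gamma)

np.random.seed(5)
M, N = 12, 24
A = np.random.randn(M, N)
X_true = np.zeros(N)
X_true[[4, 17]] = [2.0, -3.0]
y = A @ X_true
X_rec = dr_basis_pursuit(A, y)
```

Neither function is smooth, but both are proximable, which is exactly the setting the DR splitting requires.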

Design of the Software-Virtual Instrument for Biomedical Signals Reconstruction
A detailed description of the developed software is given in the sequel. The proposed instrument is designed to work with various types of biomedical signals (both 1D and 2D). The instrument is implemented in MATLAB version 2017a. The flowchart of the instrument is shown in Figure 1.

There is a specially designed panel where the required parameters for signal reconstruction can be easily chosen, for instance, the percentage of missing samples, the sparse transformation domain, and so on. The following reconstruction algorithms are implemented: the ℓ1-magic algorithm and the gradient-based algorithm from the convex optimization group, as well as the OMP algorithm, SIRA, and GDBRA from the greedy approaches group. The gradient-based approach is also suitable for 2D signals, together with TV minimization, which belongs to the convex optimization group. Another 2D recovery approach is Douglas-Rachford splitting, implemented in several commonly used solvers for sparse reconstruction, such as YALL1, SALSA, and SpaRSA.
A special section of the Virtual instrument is devoted to the statistical analyzer, which provides some interesting statistical parameters for the evaluation of reconstruction algorithms, such as mean square error (MSE) and two concentration measures used for choosing the most suitable transform or the most suitable reconstruction algorithm. The second part of the proposed Virtual instrument is dedicated to 2D signals, that is, biomedical images. A number of images are provided in order to test several algorithms for sparse recovery.
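The analyzer's two numeric outputs reduce to a mean square error and a timed reconstruction run; an illustrative sketch (the reconstruction call itself is a placeholder):

```python
import time
import numpy as np

def mse(x, x_rec):
    """Mean square error between the original and reconstructed signals."""
    x, x_rec = np.asarray(x, dtype=float), np.asarray(x_rec, dtype=float)
    return np.mean((x - x_rec) ** 2)

# Execution time is measured around the reconstruction call
t0 = time.perf_counter()
x_rec = np.zeros(8)                  # placeholder for an actual reconstruction
elapsed = time.perf_counter() - t0
```

Reporting both values side by side is what allows the accuracy/speed trade-off between algorithms to be judged.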

Part 1-Reconstruction of 1D Signals
The outlook of the solution for 1D signal recovery is presented in Figure 2. The instrument includes all of the described reconstruction algorithms and their relevant parameters, with the possibility of further extension with additional methods. In order to provide a user-friendly environment, the instrument is structured through a few sections that are explained in detail below.
Section 1-Signal parameters adjustment: This part of the instrument is used to generate different types of signals in order to test the implemented algorithms. The database contains ECG signals, extracted QRS complexes (there is also an option for users to extract QRS complexes using the instrument), signals from a respiration monitor, and brain activity signals such as electroencephalography (EEG) and electrooculography (EOG) signals. Some signals are taken from open biomedical signal databases [49,50], while the others (e.g., the respiration signal) were recorded using the National Instruments Elvis platform.
The first option within the instrument is to choose a signal from the drop-down menu, or to load a certain signal using the Load external signal option. The measurements are obtained from time-domain samples, so there is an option to set the percentage of available data (i.e., time-domain samples). For example, if the percentage of available samples is set to 25% for a signal length of 256 samples, then the length of the measurement vector y will be 64, and the remaining 192 samples have to be recovered.
Furthermore, options for sparse transform domain selection are provided. The transform can be selected from the drop-down menu. Selecting a transformation automatically calculates the concentration measures in the statistical analyzer panel. This is useful if the user does not have prior knowledge about the signal and its sparse transform domain. Two measures are provided, the ℓ1-norm and the Gini index, and these are calculated using the transform domain coefficients. The transformation that provides the minimal ℓ1-norm or the maximal Gini index is the sparsest among those observed. However, for choosing the most effective reconstruction algorithm, the MSE between the original and reconstructed signals should also be taken into account, together with the concentration measures.
In the drop-down menu, the user can choose one of the commonly used transform bases: the DFT, DCT, or HT. These have been shown to be suitable for most of the considered biomedical signals [5,11,46]. It is important to note that, for QRS complexes, the most suitable transformation is the HT, owing to the similarity between the QRS signals and the Hermite functions. Therefore, for QRS complexes, there is an option to optimize the HT by choosing the sigma parameter.
In this part of the Virtual instrument, there is also a drop-down menu for choosing one of the offered algorithms. On the basis of the chosen algorithm, the panel changes and additional options appear, showing the algorithm input parameters. The gradient and ℓ1-magic algorithms do not need any additional parameters, while OMP requires the expected number of components in the sparse domain; SIRA requires a value for the probability P; and GDBRA requires four parameters: p, c, the norm selection, and the number of components in the sparse domain (p is a constant with values between 0 and 1, while c can be 1 for max, 2 for min, 3 for mean, or 4 for median in the generalized deviation calculation). TV minimization requires the number of components in the low and middle frequencies, respectively.
The part for the choice of the optimal sigma parameter in the HT according to Relation (5) is provided. The sigma can be changed using the scroll button. Moreover, the optimal value, providing the best possible concentration for the selected signal, can be automatically calculated by pressing the Optimal button. For other transforms, the value providing the best sparsity is chosen with the help of the concentration measure values, the ℓ1-norm and the Gini index. The reconstruction is initiated by pressing the Start button.

Section 2-Graphical presentation of results: This is the part of the instrument where the results are graphically presented. Depending on the chosen algorithm, there are several graphics within this section, as presented in Figure 2 (block denoted by number 2). For the ECG signal, there is a possibility to test the reconstruction on the whole signal, but also to choose and observe the QRS complexes using the scroll button. Here, the original and recovered signals are displayed. Moreover, red marks on the selected QRS complex denote the available samples. The transformation can be calculated and displayed for the available samples only, as shown in the panel. The reconstructed sparse representation is presented in a special graph within this section.

Section 3-Statistical analyzer: An important part of the proposed instrument is the statistical analyzer. Here, the concentration measures given by the ℓ1-norm and the Gini index are calculated for the available samples of the signal, as well as for the signal after reconstruction. After the signal is recovered using the chosen algorithm, the MSE between the original and reconstructed signal (calculated in the time domain) is displayed in the last part of the statistical analyzer. Concurrently, the execution time of the algorithm is presented in the same panel. Figure 3 shows the performance of the statistical analyzer demonstrated on the example of the gradient-based reconstruction algorithm.

The example in Figure 2 shows the whole ECG signal on the upper left graph, while the selected QRS complex is shown in the upper right graph. The scroll button below the QRS graph enables QRS selection. The HT is selected as the sparsity domain, while the percentage of available samples is set to 50%. The optimal value for the sigma parameter is used, calculated according to (5). The graph of the selected QRS also shows the available samples, denoted by red marks. The two lower graphs show the HT of the available samples (left) and the (sparse) HT of the reconstructed signal (right). The statistical analyzer displays the concentration measures, the ℓ1-norm and the Gini index: (1) calculated using the available samples of the selected QRS and (2) calculated using the reconstructed signal. The ℓ1-norm has a lower value for the reconstructed signal compared with the value obtained from the available samples only, while the Gini index is higher for the recovered signal, as expected.

Section 4-Compare algorithms: The virtual instrument is designed to provide a comparison of the reconstruction results achieved by different algorithms. This is an important part of the instrument, as it enables testing which approach is the most suitable for a considered biomedical signal or image, in terms of reconstruction quality and processing speed. For this reason, an additional set-up section is included, which provides the selection of parameters for the comparative algorithm. This part of the instrument, denoted by 4, is used to select one of the offered algorithms as well as to set its parameters, as described in Section 1. The reconstruction results are displayed graphically and evaluated numerically through the MSE, computational time, and concentration measures. The layout of this part of the software is shown in Figure 3. In the presented example (Figure 3), for a chosen QRS complex and 50% of available samples, the best reconstruction performance in terms of concentration measures is provided by the gradient and ℓ1-magic algorithms. However, the gradient-based algorithm provides a slightly better MSE.
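The comparison logic can be sketched as a small benchmarking harness. This is an illustrative Python sketch, not the instrument's actual code; the `compare` function and the callable interface of the algorithms are hypothetical, and the two example "algorithms" (zero-filling and a perfect oracle) stand in for the real reconstruction methods:

```python
import time
import numpy as np

def compare(algorithms, x_orig, x_avail, mask):
    # algorithms: dict mapping a name to a callable(x_avail, mask) that
    # returns the reconstructed signal (hypothetical interface)
    results = {}
    for name, rec in algorithms.items():
        t0 = time.perf_counter()
        x_rec = rec(x_avail, mask)
        elapsed = time.perf_counter() - t0   # computational time per algorithm
        results[name] = {
            "MSE": float(np.mean((x_orig - x_rec) ** 2)),
            "time_s": elapsed,
        }
    return results

rng = np.random.default_rng(0)
x_true = np.sin(2 * np.pi * np.arange(64) / 64)
mask = rng.random(64) < 0.5          # ~50% available samples
x_avail = x_true * mask
algs = {
    "zero-fill": lambda xa, m: xa,   # leave missing samples at zero
    "oracle": lambda xa, m: x_true,  # perfect recovery, for reference
}
res = compare(algs, x_true, x_avail, mask)
```

Real reconstruction algorithms would be plugged in as the callables; ranking the entries of `res` by MSE and `time_s` mirrors what the comparison panel displays.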

Part 2-Reconstruction of 2D Signals
The second part of the virtual instrument is designed for image reconstruction. Switching between the two parts of the instrument is done using the 1D/2D button in the top left corner of the instrument. The reconstruction of images is performed by several algorithms. Radial-Fourier uses TV minimization with the 2D DFT as the transform basis. The measurements are collected along radial lines in the 2D DFT domain, and the number of radial lines is set as an input parameter of the algorithm.
TV minimization is also used with the 2D DCT transform in the second algorithm from the drop-down menu, where the number of low- and middle-frequency coefficients can be set. Another approach is the 2D gradient algorithm, which produces efficient results with natural images even with a very low number of available samples. A variation on the method of multipliers is implemented through the Douglas-Rachford algorithm for biomedical image recovery (the UNLocBoX software is used for the Douglas-Rachford algorithm implementation [51]).
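The radial-line measurement pattern of the radial-Fourier approach can be illustrated with a short Python sketch. The mask construction below is an assumption for illustration (the instrument's exact line placement and spectrum-shift convention may differ): each line passes through the spectrum centre, and the mask is shifted so that DC sits at the corner, as in the usual FFT layout.

```python
import numpy as np

def radial_mask(n, num_lines):
    # boolean mask selecting 2D DFT samples along `num_lines` radial lines
    # through the centred spectrum (sketch of the radial-Fourier pattern)
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    angles = np.linspace(0, np.pi, num_lines, endpoint=False)
    r = np.arange(-c, n - c)
    for a in angles:
        rows = np.clip(np.round(c + r * np.sin(a)).astype(int), 0, n - 1)
        cols = np.clip(np.round(c + r * np.cos(a)).astype(int), 0, n - 1)
        mask[rows, cols] = True
    # move the centre of the spectrum to index (0, 0), the DC corner
    return np.fft.ifftshift(mask)

m = radial_mask(128, 22)
print(m.mean())  # fraction of measured 2D DFT coefficients
```

Increasing the number of radial lines increases the fraction of measured coefficients, which is exactly the input parameter exposed by the instrument.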
The layout of this part of the instrument is shown in Figure 4. Users can choose one of the predefined images or load their own images. Predefined images are selected in Section 1; the image database is taken from [52].
Section 2: For the selected image, the percentage of available samples is set (i.e., the number of radial lines when considering the radial-Fourier algorithm, or the number of middle- and/or low-frequency coefficients in the TV minimization approach); the gradient algorithm requires a number of iterations. Section 3: In this section, the original image with missing pixels is shown on the left, and the image after reconstruction is shown on the right. Section 4: Numerical results of reconstruction: Here, the MSE, computational time, input signal-to-noise ratio (SNR), and output peak signal-to-noise ratio (PSNR) are calculated and displayed. A wait bar is implemented to visually show the reconstruction progress. It is important to note that the user can select an arbitrary image, which needs to be located within the MATLAB m-file (Section 5). Section 6 enables the selection of the algorithm.
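The numerical results of Section 4 can be sketched as follows. This is an illustrative Python sketch with the standard definitions: the SNR relates the image energy to the error energy, and the PSNR relates the squared peak intensity (assumed 255 for 8-bit images) to the MSE.

```python
import numpy as np

def image_metrics(orig, rec, peak=255.0):
    # returns (MSE, SNR in dB, PSNR in dB) between original and
    # reconstructed images; `peak` is the maximum intensity value
    err = orig.astype(float) - rec.astype(float)
    mse = np.mean(err ** 2)
    if mse == 0:
        return 0.0, np.inf, np.inf       # identical images
    snr = 10 * np.log10(np.sum(orig.astype(float) ** 2) / np.sum(err ** 2))
    psnr = 10 * np.log10(peak ** 2 / mse)
    return mse, snr, psnr

# small worked example: a constant offset of 10 on a flat image of value 100
orig = np.full((8, 8), 100.0)
rec = orig + 10.0
mse_v, snr_v, psnr_v = image_metrics(orig, rec)
# MSE = 100, SNR = 20 dB, PSNR = 10*log10(255^2/100) ≈ 28.13 dB
```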

Additional Experimental Evaluation
In this section, some additional reconstruction scenarios are provided and discussed. Example 1: First, the 1D signals are tested. The plots from the virtual instrument are extracted and presented within the diagram in Figure 5. The approach that provided successful reconstruction for all considered signals, TV minimization, is used for obtaining the results in Figure 5. The signals for heart rate monitoring (the ECG and the extracted QRS complex), the respiration signal, and the signals for brain activity monitoring (the EEG and EOG signals) are tested. The signal is first reshaped from vector to matrix (2D) form; reshaping is done column-wise. The DCT is considered as the sparsity domain and the measurements are collected randomly from this domain. The next step is the reconstruction of the 2D data using TV minimization, as shown in Figure 5. Finally, the reconstructed signal is reshaped back to its 1D form.
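The column-wise reshaping step can be sketched in Python. Column-wise stacking corresponds to Fortran (MATLAB-style) memory order, so the round trip is exact:

```python
import numpy as np

def to_2d(x, rows):
    # column-wise (Fortran-order) reshape of a 1D signal into a matrix,
    # matching MATLAB's reshape convention
    return x.reshape((rows, -1), order="F")

def to_1d(X):
    # inverse operation: stack the matrix columns back into a vector
    return X.reshape(-1, order="F")

x = np.arange(12.0)
X = to_2d(x, 4)          # 4x3 matrix; columns hold [0..3], [4..7], [8..11]
assert np.array_equal(to_1d(X), x)   # lossless round trip
```

The 2D TV-minimization reconstruction is then applied to `X`, and `to_1d` recovers the 1D signal afterwards.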
Example 2: In this example, the reconstruction efficiency is tested on an MRI image. Unlike the 1D case, the MRI image can be successfully recovered with almost all of the implemented algorithms. The percentage of missing samples is set to approximately 45%. The original image and the image with missing samples are shown in Figure 6. The reconstruction procedures show that the radial-Fourier approach provides the best PSNR; however, its processing time is the longest. The reconstruction results are shown in Table 1 and Figure 7.

Figure 6. The original image (a) and the image with missing pixels (b); 45% of pixels are unavailable.

Conclusions
A virtual instrument for compressive sensing signal reconstruction is proposed. The software is designed for a specific purpose, that is, biomedical signal recovery, considering both 1D and 2D biomedical signals. Several commonly used algorithms for sparse signal recovery are implemented. Additionally, the proposed instrument enables a comparison of different algorithms, where specific parameters can be changed independently for each algorithm. The second part of the instrument is used for image reconstruction. All functions within the instrument can be reused, upgraded, or replaced with other algorithms in order to build other application-related instruments for solving specific problems. This software can be a useful tool for clinicians and healthcare professionals in an era when low-power portable medical devices are becoming widely used and when safe and fast communication is of great interest. The part enabling the comparison of algorithms and the choice of the most suitable one can be useful in medical practice, as it enables selection of the most accurate and fastest approach.
