A Truncated Matrix Completion Algorithm Using Prior Information for Wind Turbine Clutter Suppression

The conventional matrix completion (MC) regularizes each singular value equally, so the rank cannot be well approximated, which greatly limits the flexibility and accuracy of MC. In this paper, a truncated MC algorithm that uses prior information to determine the threshold while generating the target rank is proposed for wind turbine clutter suppression in weather radar. During the singular value shrinkage process, an appropriate threshold is selected to obtain the optimal approximation of the sampling matrix. Specifically, the mean value of the diagonal elements in the recovered weather matrix is calculated to effectively improve the robustness of the recovery result. Simulation results demonstrate that the proposed algorithm reduces the computational complexity, further improves the MC accuracy, and realizes effective suppression of the wind turbine clutter.


Introduction
In response to the global energy crisis and climate change, countries around the world have a huge demand for renewable and clean energy. As an important form of renewable and clean energy, wind power generation [1] has attracted great attention all over the world. However, the motion clutter caused by the rotation of wind turbine blades has a serious impact on radio signal processing [2]. Many scholars have delved into the Doppler characteristics of wind turbine clutter (WTC) under the two different working modes of scanning and bunching and have proposed algorithms for WTC suppression [3][4][5][6]. However, due to the rotation of the large blades, the Doppler spectrum of the WTC is broadened and may even overlap with that of the weather, so the weather radar echo is submerged in the WTC. There have been some studies on clutter suppression technologies, such as [7, 8], but none of them can simultaneously suppress the WTC and recover the weather information without loss of the weather signal.
It is known that the weather echo in the space-time domain exhibits stationary distribution features, and the echo signal in the range-Doppler two-dimensional transform domain exhibits a low-rank, sparse distribution.
In recent years, with the deepening of research on compressed sensing theory, matrix completion (MC) technology has been widely used in many scientific and engineering fields [7][8][9], including computer vision [10], machine learning [11], image inpainting [12], radar signal processing, and so on. When the signal is expressed in the form of a matrix, the singular values of the matrix are sparse (the matrix is low-rank), and the number of samples meets certain conditions, most such matrices can be recovered exactly in all their elements by solving a nuclear norm minimization problem. This is matrix completion. As a new and powerful branch of signal and image processing, MC has become another important signal acquisition tool following compressed sensing. In many practical scenarios, the information to be recovered is usually represented by a data matrix. However, these data often suffer from loss, damage, and noise pollution. In order to obtain accurate data in these situations, researchers have insightfully extended the compressed sensing theory from the vector space to the matrix space [13], so that the sparsity characteristic (the low rank of the matrix) can be used to recover the target matrix from a sample of its elements.
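As a concrete illustration of the MC idea described above, the following Python sketch recovers a synthetic low-rank matrix from roughly half of its entries using a simple Soft-Impute-style iteration (a basic nuclear-norm shrinkage scheme, not the algorithm proposed in this paper; the matrix size, rank, sampling ratio, and threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 40, 40, 3
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-3 ground truth
mask = rng.random((m, n)) < 0.5                                # observe ~50% of entries

def svt(M, tau):
    """Singular value soft-thresholding: shrink every singular value by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Y = np.where(mask, X, 0.0)        # unobserved entries start at zero
for _ in range(300):
    Z = svt(Y, tau=1.0)           # low-rank shrinkage step
    Y = np.where(mask, X, Z)      # keep observed entries, refill the rest

rel_err = np.linalg.norm(Y - X) / np.linalg.norm(X)
```

Even though only about half of the entries are observed, the relative recovery error typically falls to a few percent after a few hundred iterations, because the rank-3 structure leaves far fewer degrees of freedom than observations.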
Many optimization algorithms have been proposed to solve the MC problem. They can be grouped into two main categories according to the nature of the optimization problem. One category employs nuclear norm minimization (NNM), such as the accelerated proximal gradient (APG) algorithm [14] and the augmented Lagrange multiplier (ALM) algorithm [15]. The other minimizes an approximation error objective function on a Grassmann manifold, such as the subspace evolution and transfer (SET) algorithm [16]. Compared with manifold optimization algorithms such as SET, a distinct advantage of NNM is that it is the tightest convex relaxation of the nonconvex low-rank matrix factorization (LRMF) problem with certain data fidelity terms, and hence it has attracted great research interest in recent years; for the matrix completion problem, the SET algorithm is therefore not as efficient as the NNM algorithms. In addition, matrix completion under noise [8], the accelerated singular value thresholding method [17], and other approaches have also been studied for the MC problem.
Among these algorithms, almost all need to calculate the singular value decomposition (SVD) of the matrix. However, in order to pursue the convexity of the objective function, the standard NNM regularizes all singular values equally. At the same time, because of the nonconvexity and discontinuity of the rank function, existing algorithms cannot directly and effectively solve the rank minimization problem. Theoretical studies show that the nuclear norm, the sum of the singular values of a matrix, is the tightest convex lower bound of the matrix rank function [18]. Therefore, a widely used approach is to apply the nuclear norm as a convex surrogate of the nonconvex matrix rank function [19]. Due to the low-rank structure of the sparse matrix, only a few large singular values affect the result of MC. However, when all singular values in the nuclear norm are thresholded with the same constant, the information carried by the large singular values is lost [20], and the recovered data then exhibit a low signal-to-noise ratio (SNR), which greatly limits the ability and flexibility of MC in many practical problems such as denoising.
Therefore, this paper fully integrates the prior information in the matrix to obtain prior knowledge about the weights of the different singular values from the observation matrix, introduces a truncated nuclear norm into the objective function, and proposes a truncated matrix completion (TMC) algorithm [21, 22] using prior information.
This algorithm studies different weight assignments for the singular values instead of thresholding all singular values with the same constant, and it ignores the unimportant or noisy parts to obtain the optimal low-rank approximation of the original matrix. According to the particular structure of the Toeplitz observation matrix, the mean value method is also used to process the restored data after completion. Simulation results show that the proposed algorithm has better completion performance than conventional MC methods. The rest of this paper is organized as follows. In Section 2, the signal model is introduced. The rank minimization based on NNM is presented in Section 3. After that, the TMC algorithm is designed using the mean value technique in Section 4. In Section 5, simulation results are presented. At last, the conclusion of this paper is drawn in Section 6.

Signal Model
In radar systems, suppose that the lth range bin contains both WTC and the weather signal. The received signal in a given azimuth-range cell within the coherent integration time (CIT) can be expressed as follows:

x_l(m) = s_l(m) + c_l(m) + w_l(m) + n_l(m), m = 1, 2, ..., M, (1)

where M is the number of pulses in the CIT, s_l(m) represents the weather signal, c_l(m) is the ground clutter, w_l(m) is the WTC, and n_l(m) represents the noise.
Let X be the Toeplitz matrix form of the weather radar echo signal. Similarly, S, C, W, and N are the Toeplitz matrix forms of the weather signal, the ground clutter, the WTC, and the noise, respectively. Therefore, equation (1) can be written as follows:

X = S + C + W + N. (2)

The weather signal is formed by the coherent superposition of all the scattering echoes in the lth range bin. Assuming that the target moves with a constant radial speed, the weather signal can be expressed as follows:

s_l(m) = Σ_{u=1}^{U} A_u e^{jω_t(m−1)}, l = 1, 2, ..., L, m = 1, 2, ..., M, (3)

where U denotes the number of all scattering particles, A_u represents the complex amplitude of each scattering particle, and ω_t is the time-domain angular frequency expressed as follows:

ω_t = 2π f_d / f_r, (4)

where f_r is the pulse repetition frequency and f_d is the Doppler frequency.
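The signal model above can be simulated directly; in the sketch below the CIT length, the particle count, and the frequencies f_r and f_d are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64        # number of pulses in the CIT (assumed)
U = 20        # number of scattering particles (assumed)
f_r = 1000.0  # pulse repetition frequency, Hz (assumed)
f_d = 300.0   # Doppler frequency of the weather target, Hz (assumed)

omega_t = 2 * np.pi * f_d / f_r                              # omega_t = 2*pi*f_d/f_r
A = rng.standard_normal(U) + 1j * rng.standard_normal(U)     # complex amplitudes A_u

m = np.arange(1, M + 1)
# Coherent superposition over all scattering particles; with a constant radial
# speed every particle shares the same angular frequency omega_t.
s = A.sum() * np.exp(1j * omega_t * (m - 1))

# The received signal adds ground clutter, WTC, and noise; for brevity only a
# white-noise term n_l(m) is included here.
x = s + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
```

The Doppler spectrum of `s` peaks near the normalized frequency f_d / f_r, which is the property the range-Doppler sparsity argument relies on.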

The Construction of the Objective Function
Using the Truncated Nuclear Norm. Based on the observation that the data probably have a low-rank form, it is natural to assume that the underlying matrix is of low rank or approximately low rank. Specifically, when the incomplete data matrix X ∈ R^{m×n} of low rank is given, the matrix completion problem can be formulated as follows:

min_X rank(X), s.t. X_{ij} = S_{ij}, (i, j) ∈ Ω, (5)

where min(·) stands for minimization and Ω is the set of observed entries. The constraint condition can also be expressed as P_Ω(X) = P_Ω(S), where P_Ω is the projection operator onto Ω.
In equation (5), since rank(·) is a nonconvex function, it is difficult to solve directly. Then the nuclear norm of the matrix is introduced, and the following can be obtained:

min_X ‖X‖_*, s.t. P_Ω(X) = P_Ω(S), (6)

where ‖X‖_* = Σ_i σ_i(X) is the nuclear norm of X. However, the standard nuclear norm minimization regularizes each singular value equally in order to pursue the convexity of the objective function, which causes weak fusion of prior information, low computational efficiency, and poor flexibility. Therefore, the prior information in the matrix is integrated so that the singular values are weighted differently according to the observation matrix. Because singular value sequences attenuate quickly, the weight information of the singular value sequence needs to be obtained so that the optimal threshold can be selected. In order to analyze the optimization problem, the truncated nuclear norm ‖X‖_r of an observation matrix X ∈ R^{m×n} is defined as the sum of its smallest min(m, n) − r singular values:

‖X‖_r = Σ_{i=r+1}^{min(m,n)} σ_i(X). (7)

Then, the matrix completion model of the traditional nuclear norm optimization in equation (6) can be reconstructed as follows:

min_X ‖X‖_r, s.t. P_Ω(X) = P_Ω(S). (8)

Unlike the traditional nuclear norm, the truncated nuclear norm ‖X‖_r is a nonconvex function. Therefore, the extreme value of the objective function in the above equation must be the optimal solution of the corresponding low-rank regularization constraint. Literature [15] shows that the minimization algorithm based on the truncated nuclear norm can solve the rank minimization problem with high efficiency. The following theorem underpins the computation of the TMC compared with the traditional method. For any data matrix X ∈ R^{m×n} and any nonnegative integer r (r ≤ min(m, n)), it can be written as follows:

max_{AA^T = I, BB^T = I} tr(AXB^T) = Σ_{i=1}^{r} σ_i(X), (9)

where A ∈ R^{r×m} and B ∈ R^{r×n} have orthonormal rows, that is, AA^T = I and BB^T = I with I the r × r identity matrix.
Then, by the trace inequality and the fact that the singular values of B^T A are at most 1, the following can be obtained:

tr(AXB^T) = tr(B^T AX) ≤ Σ_{i=1}^{s} σ_i(X). (10)

Since rank(B^T A) = s ≤ r and σ_i(X) ≥ 0, the following can be obtained:

Σ_{i=1}^{s} σ_i(X) ≤ Σ_{i=1}^{r} σ_i(X). (11)

Then the following can be obtained:

max_{AA^T = I, BB^T = I} tr(AXB^T) ≤ Σ_{i=1}^{r} σ_i(X). (12)

After the singular value decomposition of the observation matrix X,

X = UΣV^T, (13)

where U = (u_1, ..., u_m) ∈ R^{m×m} and V = (v_1, ..., v_n) ∈ R^{n×n} denote the left and right singular subspaces of the matrix X.
At last, choosing A = (u_1, ..., u_r)^T and B = (v_1, ..., v_r)^T, the following can be obtained:

tr(AXB^T) = Σ_{i=1}^{r} σ_i(X), (14)

so the maximum of tr(AXB^T) is attained and equals the sum of the r largest singular values. Since the NNM has been reconstructed into the truncated matrix completion, in order to solve equation (8), the relation between the truncated nuclear norm and the conventional nuclear norm can be given as follows:

‖X‖_r = ‖X‖_* − max_{AA^T = I, BB^T = I} tr(AXB^T). (15)

Therefore, equation (8) can be re-expressed as the following nonconvex optimization problem:

min_X ‖X‖_* − max_{AA^T = I, BB^T = I} tr(AXB^T), s.t. P_Ω(X) = P_Ω(S). (16)

From the above equation, it can be seen that, unlike the conventional NNM algorithm, which minimizes the sum of all singular values, the truncated nuclear norm minimizes only the sum of the smallest min(m, n) − r singular values, while the largest nonzero singular values of the matrix are not included in the summation during the solving process. This avoids thresholding all singular values with the same constant and thus obtains the optimal low-rank approximation of the original matrix, so that a better approximation of the exact solution can be achieved.
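The definition of the truncated nuclear norm ‖X‖_r and its relation to the conventional nuclear norm can be checked numerically; the sketch below (matrix size and r are arbitrary) also verifies that the trace term attains the sum of the r largest singular values when A and B are taken from the SVD of X:

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_r: the sum of the smallest min(m, n) - r singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)  # returned in descending order
    return s[r:].sum()

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 5))
r = 2
s = np.linalg.svd(X, compute_uv=False)

# ||X||_r = ||X||_* minus the sum of the r largest singular values
assert np.isclose(truncated_nuclear_norm(X, r), s.sum() - s[:r].sum())

# The maximizer of tr(A X B^T) over row-orthonormal A, B is reached at the
# top-r singular vectors, where the trace equals sigma_1 + ... + sigma_r.
U, _, Vt = np.linalg.svd(X)
A, B = U[:, :r].T, Vt[:r, :]
assert np.isclose(np.trace(A @ X @ B.T), s[:r].sum())
```

The second assertion is exactly the equality case of the theorem: A^T picks out the top-r left singular vectors and B^T the top-r right singular vectors, so AXB^T reduces to diag(σ_1, ..., σ_r).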

Selection of the Truncated Length.
After the matrix passes through the SVD, the singular value sequence of the matrix is obtained. The amount of information contained in a singular value is proportional to its size: the larger singular values represent the main components of the matrix, while the smaller singular values mainly contain the noise information used to reconstruct the matrix. Therefore, while minimizing the nuclear norm, it is necessary to eliminate as many of the smaller singular values representing noisy data as possible without losing the larger singular values that carry the effective information of the matrix. Without changing the first r largest singular values, the truncated nuclear norm solves the rank minimization problem by minimizing the sum of the smallest k = min(m, n) − r singular values.
To select r properly, the attenuation point has to be found where the singular values begin to decrease significantly. In order to determine the truncation position quickly and accurately, the last significant jump point of the singular value sequence can be found and used as the value of r. Performing the SVD on the matrix X yields a diagonal matrix whose elements are the singular values of X, arranged in descending order; they decay rapidly, and the sum of the first 10% or even 5% of the singular values accounts for more than 90% of the sum of all singular values. For the rank operator of the matrix, the number of nonzero singular values is equal to the rank of the matrix. As shown in Figure 1, the most significant attenuation point O is observed [23], and this point is used as the best cutoff position: all the smaller singular values after the point O are cut off. This operation better retains the effective components of the matrix and filters the noisy data out of the echo signal. Therefore, choosing a suitable truncation length r can effectively improve the accuracy with which the truncated matrix completion algorithm recovers the weather signal, which is of important significance.
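The rule above, truncating at the last significant jump of the singular value sequence, can be sketched as follows (the jump-ratio threshold is an illustrative assumption; the paper selects the point graphically from Figure 1):

```python
import numpy as np

def select_truncation_length(X, jump_ratio=2.0, eps=1e-12):
    """Return r at the last significant 'jump' of the singular value sequence:
    the largest index where sigma_i / sigma_{i+1} exceeds jump_ratio."""
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > eps]                    # discard numerically zero values
    ratios = s[:-1] / s[1:]           # decay between consecutive singular values
    jumps = np.nonzero(ratios > jump_ratio)[0]
    return int(jumps[-1]) + 1 if jumps.size else len(s)

# A matrix with three dominant singular values and a small noisy tail:
rng = np.random.default_rng(3)
Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q2, _ = np.linalg.qr(rng.standard_normal((6, 6)))
spectrum = np.array([10.0, 9.0, 8.0, 0.1, 0.08, 0.06])
X = Q1[:, :6] @ np.diag(spectrum) @ Q2.T

r_sel = select_truncation_length(X)   # picks r = 3 for this spectrum
```

The jump from 8.0 to 0.1 is the attenuation point O of the text; everything after it is treated as noise and truncated.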

Weather Signals Recovery Using Truncated Matrix Completion. Compared with the NNM, the truncated nuclear norm is a tighter low-rank constraint that better approximates the real low-rank matrix. For the TMC problem in equation (8), efficient algorithms can be introduced to optimize it. Since high-dimensional nuclear norm minimization can effectively solve the rank minimization problem, the RegL1-ALM algorithm [24] is used to solve the TMC problem [25], called TMC-IALM for short in this paper. The original low-rank optimization problem is reconstructed by introducing a new relaxation variable; since each iteration has a closed-form solution, the computational efficiency is high. In this way, the original regularized constrained optimization problem in TMC is reconstructed into a series of unconstrained convex and nonconvex subproblems, so that it can be solved quickly. Different from equation (8), the influence of interference noise is considered. By introducing a relaxation variable Z to complete the unknown elements, the TMC problem is transformed into the following optimization problem [25]:

min_{S,Z} ‖S‖_r, s.t. S = Z, P_Ω(Z) = P_Ω(X), (18)

where P_Ω is the projection onto the index set Ω, the only nonzero subspace of the sparse matrix: it keeps the entries in Ω unchanged and sets those outside Ω (i.e., in the complement Ω̄) to zero. Specifically, for formula (18), the augmented Lagrangian function of TMC-IALM is first defined as follows:

L(S, Z, Y, μ) = ‖S‖_r + R(<Y, S − Z>) + (μ/2)‖S − Z‖_F², s.t. P_Ω(Z) = P_Ω(X), (19)

where ‖S‖_r represents the truncated nuclear norm of the weather matrix S, Y represents the Lagrangian multiplier matrix with initial value 0, μ > 0 is the penalty parameter, R(·) represents the real part of a complex number, and <X, Y> = tr(X^H Y) represents the inner product of two matrices.
In the reconstruction of TMC-IALM, the relaxation variable is used while minimizing the truncated nuclear norm so as to make full use of the target rank information. During the optimization process, the next iterate of each variable can be derived rigorously, and the optimal approximate solution is obtained while guaranteeing the validity of the constraint.
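A simplified sketch of such an iteration is given below, following the alternating-direction scheme common in the truncated-nuclear-norm literature [21, 22]: the inner loop alternates singular value shrinkage with a multiplier update, and the outer loop refreshes the top-r subspaces. The penalty value, iteration counts, and stopping rule are illustrative assumptions, not the exact TMC-IALM parameters:

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def tmc_admm(X_obs, mask, r, mu=1.0, outer=10, inner=50):
    """Minimize the truncated nuclear norm ||S||_r under the sampling
    constraint, via the splitting S = Z with a Lagrange multiplier Y."""
    Z = np.where(mask, X_obs, 0.0)
    S = Z.copy()
    for _ in range(outer):
        # A and B span the current top-r left/right singular subspaces.
        U, _, Vt = np.linalg.svd(Z, full_matrices=False)
        A, B = U[:, :r].T, Vt[:r, :]
        Y = np.zeros_like(Z)
        for _ in range(inner):
            S = svt(Z - Y / mu, 1.0 / mu)          # shrinkage step
            Z = S + (Y + A.T @ B) / mu             # A^T B spares the top-r part
            Z = np.where(mask, X_obs, Z)           # enforce the observed entries
            Y = Y + mu * (S - Z)                   # multiplier update
    return np.where(mask, X_obs, S)

rng = np.random.default_rng(4)
M_true = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
mask = rng.random((40, 40)) < 0.5
M_hat = tmc_admm(np.where(mask, M_true, 0.0), mask, r=3)
rel_err = np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true)
```

Because the A^T B term adds back what the shrinkage removed from the top-r subspace, the largest singular values are effectively left unpenalized, which is the debiasing effect the text describes.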

TMC Algorithm Design Based on Mean Value Technique
In practice, the sampling matrix often has a special structure, such as the Toeplitz or Hankel structure, so it is very meaningful to study the completion of such special matrices. Many scholars have conducted in-depth research on the special structure and properties of the Toeplitz matrix [26][27][28].
In addition, as an important special matrix, the Toeplitz matrix plays an important role in a wide range of scientific and engineering fields, especially in signal and image processing [29, 30]. A one-dimensional Toeplitz matrix T has the following form:

T = [ t_0      t_{−1}   ...  t_{1−n}
      t_1      t_0      ...  t_{2−n}
      ...      ...      ...  ...
      t_{m−1}  t_{m−2}  ...  t_0    ], (20)

that is, T_{i,j} = t_{i−j}, so every descending diagonal is constant. During the whole receiving time, the array elements that contain the WTC are set to zero, and then the signals x_l(m), l = 1, ..., L, m = 1, ..., M, under each pulse are constructed as an m_1 × m_2 Toeplitz matrix with m_1 + m_2 − 1 = L, as shown below, M matrices in total:

X(m) = [ x_{m_2}(m)    x_{m_2−1}(m)  ...  x_1(m)
         x_{m_2+1}(m)  x_{m_2}(m)    ...  x_2(m)
         ...           ...           ...  ...
         x_L(m)        x_{L−1}(m)    ...  x_{m_1}(m) ]. (21)

In the new matrix, the high-precision completion is realized by the sparseness and strong incoherence property of the Toeplitz matrix.
By observing equation (21), it can be found that the matrix is determined by only m_1 + m_2 − 1 elements, namely, its first row and first column. According to this special structure and the properties of the Toeplitz matrix, the mean value method [31] can be used in the original TMC-IALM iterative algorithm, so that the Toeplitz structure of the iterated matrix is maintained in the recovered weather signal matrix S. While ensuring a fast SVD, the recovered data of the matrix completion are optimized, which reduces the uncertainty of the recovered data. Specifically, during the TMC-IALM iteration process in Algorithm 1, the mean value of the elements in the output matrix is computed after the weather matrix is updated. The completed weather signal is generated by the TMC-IALM algorithm and is then processed by the mean value method to obtain the completed weather echo matrix. The flow chart of adaptive clutter suppression for wind turbines based on the TMC is shown in Figure 2.
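The mean value step can be sketched as follows: the entries of a Toeplitz matrix depend only on the diagonal offset i − j, so after completion each diagonal is replaced by its mean (the index convention below is one common choice, not necessarily the paper's):

```python
import numpy as np

def toeplitz_from_signal(x, m1):
    """Arrange a length-(m1 + m2 - 1) sequence as an m1 x m2 Toeplitz matrix
    with entry (i, j) = x[i - j + m2 - 1], in the spirit of equation (21)."""
    m2 = len(x) - m1 + 1
    i = np.arange(m1)[:, None]
    j = np.arange(m2)[None, :]
    return x[i - j + m2 - 1]

def diagonal_mean(S):
    """The mean value method: average every diagonal of the completed matrix S
    and rebuild the Toeplitz structure, reducing the recovery uncertainty."""
    m1, m2 = S.shape
    x = np.empty(m1 + m2 - 1, dtype=S.dtype)
    for d in range(-(m2 - 1), m1):          # diagonal offset d = i - j
        x[d + m2 - 1] = np.diag(S, -d).mean()
    return toeplitz_from_signal(x, m1)

rng = np.random.default_rng(5)
x = rng.standard_normal(10)
T = toeplitz_from_signal(x, 4)              # a 4 x 7 Toeplitz matrix
noisy = T + 0.1 * rng.standard_normal(T.shape)
repaired = diagonal_mean(noisy)             # Toeplitz again, noise averaged down
```

Diagonal averaging is the orthogonal projection onto the subspace of Toeplitz matrices under the Frobenius norm, so `repaired` is never farther from the true matrix than `noisy` is.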

Simulation Results
In this section, a computer simulation of weather signal recovery is built to test the performance of the proposed algorithm. For the experiment, 64 pulses and 100 range units are selected, assuming that WTC exists in the 30th range bin; this bin is set to zero, and the data are constructed as a Toeplitz matrix. The remaining 63 pulse signals are simulated successively. The 30th vector element extracted from each pulse constitutes the echo signal after WTC suppression, and the simulation parameters are shown in Table 1. To ensure the accuracy of the test, the results below are the averages of 100 independent Monte Carlo experiments.
In order to verify the effect of inherent prior information such as the singular values on the recovery performance, the truncated nuclear norm in the objective function is used to filter out the smaller values with different truncation lengths, which generates different truncated segments; this also verifies the importance of choosing the truncation length properly. Figure 3 shows the original curve of the weather signal together with the weather signal recovered by the TMC as singular values are gradually added. It can be seen that when fewer singular values are used, the reconstructed curve is less accurate. If the matrix rank is known to be r, then the first r singular values contain most of the effective information, and when the number of retained singular values increases to r, the reconstructed curve approaches the original curve. Since minimizing the TMC reduces the truncated nuclear norm and with it the matrix rank, the truncated nuclear norm can better approximate the rank function of the matrix. Meanwhile, in the process of singular value shrinkage, the unimportant noisy part is ignored, and the largest singular values related to the main components of the matrix are not shrunk, which better protects the effective components of the matrix. In addition, the top 1% and 10%, respectively, represent different truncation points in the singular value sequence.
Through the comparison of the recovery performance at the two truncation points, it can be seen that the selection of the truncation point is crucial both for exploiting the advantages of the TMC algorithm and for improving the recovery accuracy of the weather signal. After properly selecting the truncation length, the TMC-IALM algorithm is used to restore the weather signal sequence under each pulse and reconstruct it into an equivalent Toeplitz matrix. Figure 4 compares the Doppler spectra of the weather signals after WTC suppression based on the NNM and TMC algorithms. When the frequency is between 300 and 400 Hz, the noise interference is weak and the recovery accuracy of the weather signal is high. The peak sidelobe has a significant effect on suppressing strong interference noise, and the fluctuation of the noise at the sidelobes is reduced, which improves the SNR. It can also be seen that the TMC-based algorithm of this paper has better interference suppression performance than the NNM-based algorithm.

(ALGORITHM 1: the mean value method used to reprocess the recovered data. Input: the sampled matrix X, the sampled set Ω, the penalty factor μ_0, and the iterate S_k; the mean is computed along each diagonal l ∈ {0, ..., m_1 + m_2 − 2} of S_k, where m_1 and m_2 respectively represent the numbers of rows and columns of S_k.)

(Figure 2: flow chart of adaptive WTC suppression based on the TMC. Steps: weather radar receives signals; zero the range bin with WTC; reconstruction of the observation matrix based on Hankel; extract the singular value sequence; selection of the singular value threshold; construction of the target function; the completion signal is generated by IALM; the mean value method processes the output.)

Figure 5 shows the estimated mean value of the radial velocity of the weather signal recovered by the TMC-IALM algorithm. When the SNR is low, the radial velocity estimate fluctuates greatly, the degree of dispersion is high, and there is a large deviation from the true value. However, as the SNR increases, the error of the radial velocity estimate gradually decreases and eventually converges to the true value. Comparing the NNM-IALM and TMC-IALM algorithms, it is found that the improved algorithm proposed in this paper fluctuates much less and differs only slightly from the true value, which shows its faster convergence and better performance. Figure 6 shows the signal recovery results under different SNRs. The root-mean-square error (RMSE) between the estimated value Ŝ and the true value S is defined as follows:

RMSE = sqrt( (1/(m_1 m_2)) Σ_{i,j} |Ŝ_{ij} − S_{ij}|² ).

From the average results of 100 independent Monte Carlo experiments, it can be seen that as the SNR of the input signal increases, the quality of the singular value decomposition improves and the error of the matrix completion is significantly reduced.
In addition, comparing the IALM algorithm before and after the improvement, the RMSE of the matrix completion decreases as the SNR increases, and the proposed TMC-IALM algorithm has smaller recovery errors and better performance. Although the algorithm needs to perform the SVD of the correlation matrix, and noise affects the results of the SVD, the proposed algorithm assigns different weights to the singular values in the nuclear norm: while truncating the small singular values that carry the noise information used to reconstruct the matrix, the large singular values are retained, which reduces the negative impact of the noise on the weather data and improves the recovery performance of the signal. The matrix recovered by the TMC-IALM algorithm contains only the vector sequence under a single pulse of the weather signal, and the Toeplitz matrix element values corresponding to the same vector unit are rather dispersed. Therefore, the mean value method is used to process the recovered data. Figure 7 shows the weather signal under the 10th pulse and the 30th range unit recovered by the TMC algorithm before and after the mean value method. It can be seen that the weather signal obtained by the plain TMC-IALM algorithm fluctuates obviously and the accuracy of the recovered data is relatively low, whereas the mean value method optimizes the completed data, reduces the uncertainty of the recovered data, and yields smaller output fluctuations, which improves the stability and accuracy of the recovery results. Figure 8 shows the estimated mean value of the weather signal radial velocity recovered by the TMC-IALM algorithm after the mean value processing. Similar to the above analysis, the lower the SNR, the higher the dispersion of the radial velocity values, and the velocity estimate gradually converges to the true value as the SNR increases.
After the mean value method is applied, the weather data recovered by the TMC algorithm are optimized, which reduces the numerical uncertainty and gives better robustness. Figure 9 shows the recovery error curves of the TMC algorithm before and after the mean value method. Both curves decrease as the SNR of the input signal increases. However, comparing the TMC algorithm before and after the improvement, it can be seen that the influence of the noise on the singular value decomposition is reduced and the recovery error of the algorithm after the mean value treatment is lower, and thus its performance is better.
Since matrix completion theory is based on the low-rank characteristic, it can reduce noise or disturbance and improve the SNR during the solving process. Figure 10 shows the denoising results of the three algorithms, which restore the missing data and reduce the noise disturbance in the signal subspace of the weather signal. Comparing the performance curves of the three algorithms, it can be seen that the noise reduction based on the TMC algorithm is significantly better than that of the NNM algorithm. When the input SNR is greater than 10-15 dB, the ratio of the noise disturbance in the signal subspace to that in the noise subspace changes accordingly: the proportion in the noise subspace increases, and the noise reduction performance becomes more obvious.

Conclusion
Based on the traditional nuclear norm optimization algorithm, this paper has studied an optimized completion algorithm based on the truncated nuclear norm and further derived the TMC-IALM algorithm. The algorithm uses the truncated nuclear norm instead of the nuclear norm and minimizes only the sum of the smallest min(m, n) − r singular values, which avoids the insufficient ability of the traditional nuclear norm optimization algorithm to approximate low-rank matrices. In addition, in the iterative singular value shrinkage, the unimportant or noisy parts are ignored, and the prior knowledge of the observation matrix is fully and flexibly fused, which improves the utilization of the prior singular value information to a certain extent. The algorithm better protects the effective singular values related to the main components of the matrix. At the same time, the algorithm keeps the objective function tractable, reducing the time complexity of the singular value decomposition.
The above process speeds up the convergence of the algorithm while maintaining the accuracy and robustness of the recovery results.
Data Availability

The underlying data supporting the results of our study can be obtained upon request to the authors (smw_hhu1981@163.com and mmh_1988@126.com).

Conflicts of Interest
The authors declare that they have no conflicts of interest.