Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion

Many restoration methods use the low-rank property of high-dimensional image signals to recover corrupted images. These signals are usually represented by tensors, which can maintain their inherent structural relevance. An image represented as such a simple tensor has a certain low-rank property, but not a strong one. In order to enhance the low-rank property, we propose a novel method called sub-image based low-rank tensor completion (SLRTC) for image restoration. We first sample a color image to obtain sub-images, and adopt these sub-images instead of the original single image to form a tensor. Then we conduct a mode permutation on this tensor. Next, we exploit the tensor nuclear norm defined via the tensor-singular value decomposition (t-SVD) to build the low-rank completion model. Finally, we solve this model with the standard alternating direction method of multipliers (ADMM) algorithm based on tensor-singular value thresholding (t-SVT). Experimental results show that, compared with state-of-the-art tensor completion techniques, the proposed method provides superior results in terms of both objective and subjective assessment.


Introduction
In recent years, image restoration methods using low-rank models have achieved great success. A key question is how to construct a low-rank tensor. In image restoration, the most common approach is to use the nonlocal self-similarity (NSS) of images, which exploits the similarity between image patches to infer missing signal components. Similar patches are collected into groups so that the patches in each group share a similar structure and approximately form a low-rank matrix/tensor, and the image is restored by exploiting a low-rank prior on the matrix/tensor composed of similar patches [1,2]. However, when an image lacks enough similar components, or its similar components are damaged by noise, the quality of the reconstructed image will be poor. Therefore, in some cases, using NSS to find similar patches to construct low-rank tensors is not feasible. In addition, the large-scale search for NSS patches is very time-consuming, which affects the efficiency of the reconstruction algorithm.
It is well known that most high-dimensional data, such as color images, videos, and hyperspectral images, can naturally be represented as tensors. For example, a color image with a resolution of 512-by-512 can be represented as a 512-by-512-by-3 tensor. Because of the similarity of tensor content, such a tensor is considered to be low-rank [3]; in particular, images that contain many texture regions are often low-rank. Nowadays, in most low-rank tensor completion (LRTC) algorithms, the low-rank constraint is imposed on the whole of the high-dimensional data, not on a part of it. Typical algorithms of this kind include fast low-rank tensor completion (FaLRTC) [4], LRTC based on the tensor nuclear norm (LRTC-TNN) [5], tensor completion by tensor factorization (TCTF) [6], and the method integrating total variation (TV) as a regularization term into low-rank tensor completion based on tensor-train rank-1 (LRTV-TTr1) [7]. However, this simple representation does not make full use of the low-rank nature of the data. In this paper, we propose a novel method called sub-image based low-rank tensor completion (SLRTC) for image restoration. To start with, we utilize local similarity by sampling an image to obtain a sub-image set which has a strong low-rank property, and use this sub-image set to recover the low-rank tensor from the corrupted observation. In addition, the tensor nuclear norm is direction-dependent: its value may change if a tensor is rotated or its modes are permuted. In our completion method, the mode (row × column × RGB) of the third-order tensor is permuted to the mode with RGB in the middle (row × RGB × column), and the low-rank completion is then performed on the permuted tensor. Finally, the alternating direction method of multipliers is used to solve the resulting problem.
The main contributions of this paper can be summarized as follows:
• We propose a novel framework of sub-image based low-rank tensor completion for color image restoration. In this framework, the tensor nuclear norm is based on the tensor tubal rank (TTR), which is obtained by the tensor-singular value decomposition (t-SVD). In order to achieve a stronger low-rank tensor, we sample each channel of a color image into four sub-images, and use these sub-images instead of the original single image to form a tensor.
• The optimization is performed on the permuted tensor in the proposed framework. The mode of a third-order tensor of a color image is usually denoted by (row × column × RGB); it is permuted to the mode (row × RGB × column) in our framework. This permutation operation achieves better restoration and decreases the running time.
The remainder of the paper is organized as follows. Section 2 introduces the definitions and gives the basic knowledge about the t-SVD decomposition. In Section 3, we propose a novel model of low-rank tensor completion, and use the standard alternating direction method of multipliers (ADMM) algorithm to solve the model. In Section 4, we compare our model with other algorithms, and analyse the performance of the proposed method. Finally, we draw the conclusion of our work in Section 5.

Related Work and Foundation
This section mainly introduces some operator symbols, related definitions, and theorems of the tensor SVD.

Notations and Definitions
For convenience, we first introduce the notations that will be used extensively in the paper. X ∈ R^(I×J×K) (each element written as X_ijk or X(i, j, k)) represents a third-order tensor, and the real and complex number fields are represented by R and C, respectively. X(i, :, :), X(:, j, :) and X(:, :, k) are the horizontal, lateral and frontal slices of the third-order tensor, respectively. For simplicity, we denote the k-th frontal slice X(:, :, k) as X^(k). For X ∈ R^(I×J×K), we denote by X̂ ∈ C^(I×J×K) the discrete Fourier transform (DFT) of X along all of its tubes; using the MATLAB function fft, we get X̂ = fft(X, [], 3).
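As a quick illustration, the tube-wise DFT and the slicing notation can be reproduced in NumPy (used here in place of MATLAB purely for illustration; the array sizes are arbitrary):

```python
import numpy as np

# A third-order tensor X of size I x J x K; its tube-wise DFT X_hat is
# obtained by applying the FFT along the third mode, mirroring the
# MATLAB call X_hat = fft(X, [], 3).
X = np.random.rand(4, 5, 3)
X_hat = np.fft.fft(X, axis=2)

# Slices: horizontal X[i, :, :], lateral X[:, j, :], frontal X[:, :, k].
front = X[:, :, 0]
```

The inverse transform `np.fft.ifft(X_hat, axis=2)` recovers X, matching MATLAB's `ifft(X_hat, [], 3)`.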

Tensor Singular Value Decomposition
Recently, the tensor nuclear norm defined based on the tensor singular value decomposition (t-SVD) has been shown to effectively utilize the inherent low-rank structure of tensors [8-10]. Let M ∈ R^(N1×N2×N3) be an unknown low-rank tensor whose entries are observed independently with probability p, and let Ω denote the index set of the observed entries (i.e., if (i, j, k) ∈ Ω, X(i, j, k) = M(i, j, k); else X(i, j, k) = 0). The problem of tensor completion is then to recover the underlying low-rank tensor M from the observations {M_ijk, (i, j, k) ∈ Ω}, and the corresponding low-rank tensor completion model can be written as

argmin_X ||X||_*  s.t.  X(i, j, k) = M(i, j, k), (i, j, k) ∈ Ω,   (1)

where ||X||_* is the tensor nuclear norm (TNN) of the tensor X ∈ R^(N1×N2×N3).
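A minimal sketch of the observation model P_Ω in NumPy (the tensor size and the probability p are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((8, 8, 3))          # toy stand-in for the low-rank tensor M
p = 0.7                            # observation probability (illustrative)
Omega = rng.random(M.shape) < p    # index set of observed entries
X = np.where(Omega, M, 0.0)        # X agrees with M on Omega, 0 elsewhere
```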
In order to enhance the low-rank feature of an image, we utilize the local similarity to sub-sample an image to obtain a sub-image set which has a strong low-rank property, and propose a sub-image based TNN model to recover low-rank tensor signals from corrupted observation images.
Definition 2 (unfold and fold [8]). For X ∈ R^(I×J×K), the unfold operation stacks the frontal slices into a matrix of size IK × J, i.e., unfold(X) = [X^(1); X^(2); …; X^(K)], and fold is its inverse operator, fold(unfold(X)) = X.

Definition 3 (T-product [8]). Let X ∈ R^(N1×N2×N3) and Y ∈ R^(N2×t×N3); then the T-product Z = X * Y ∈ R^(N1×t×N3) is defined as Z = fold(bcirc(X) · unfold(Y)), where bcirc(X) denotes the block circulant matrix generated by the frontal slices of X. Equivalently, each tube satisfies Z(i, l, :) = Σ_j X(i, j, :) • Y(j, l, :), where the operation • is circular convolution between tubes.
Definition 4 (f-diagonal tensor [8]). If each frontal slice of a tensor is a diagonal matrix, the tensor is called an f-diagonal tensor.

Definition 5 (t-SVD [8]). Let X ∈ R^(N1×N2×N3); then it can be factored as X = U * S * V^T, where U ∈ R^(N1×N1×N3) and V ∈ R^(N2×N2×N3) are orthogonal, and S ∈ R^(N1×N2×N3) is an f-diagonal tensor. The frontal slices of X̂ have the conjugate symmetry property conj(X̂^(k)) = X̂^(N3−k+2) for k = 2, …, ⌊N3/2⌋ + 1, where ⌊·⌋ represents the floor (downward rounding) operator. We can therefore obtain the t-SVD efficiently by calculating only about half of the matrix SVDs in the Fourier domain.
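A sketch of the t-SVD computed slice-by-slice in the Fourier domain, exploiting the conjugate symmetry so that only about half of the SVDs are computed (NumPy, illustrative; `t_svd` is a hypothetical helper name):

```python
import numpy as np

def t_svd(X):
    """t-SVD X = U * S * V^T via matrix SVDs of the frontal slices of
    X_hat = fft(X, [], 3); the second half of the slices is filled in by
    the conjugate symmetry property of Definition 5."""
    n1, n2, n3 = X.shape
    Xh = np.fft.fft(X, axis=2)
    Uh = np.zeros((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.zeros((n2, n2, n3), dtype=complex)
    half = n3 // 2 + 1                        # only ceil((n3+1)/2) SVDs
    r = np.arange(min(n1, n2))
    for k in range(half):
        u, s, vh = np.linalg.svd(Xh[:, :, k], full_matrices=True)
        Uh[:, :, k] = u
        Sh[r, r, k] = s                       # f-diagonal slice
        Vh[:, :, k] = vh.conj().T
    for k in range(half, n3):                 # conjugate symmetry
        Uh[:, :, k] = Uh[:, :, n3 - k].conj()
        Sh[:, :, k] = Sh[:, :, n3 - k].conj()
        Vh[:, :, k] = Vh[:, :, n3 - k].conj()
    # back to the spatial domain; the factors are real for real X
    U = np.fft.ifft(Uh, axis=2).real
    S = np.fft.ifft(Sh, axis=2).real
    V = np.fft.ifft(Vh, axis=2).real
    return U, S, V
```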
Definition 6 (tensor tubal rank [16]). For X ∈ R^(N1×N2×N3), the tensor tubal rank, denoted as rank_t(X), is defined as the number of non-zero singular tubes of S, where S is from the t-SVD of X = U * S * V^T, namely

rank_t(X) = #{i : S(i, i, :) ≠ 0}.   (8)

Definition 7 (tensor average rank [18]). For X ∈ R^(N1×N2×N3), the tensor average rank, denoted as rank_a(X), is defined as

rank_a(X) = (1/N3) rank(bcirc(X)).   (9)

Definition 8 (tensor nuclear norm [18]). For X ∈ R^(N1×N2×N3), the tensor nuclear norm of X is defined as

||X||_* = Σ_{i=1}^{r} S(i, i, 1),   (10)

where r = rank_t(X).
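The tubal rank of Definition 6 and the TNN of Definition 8 can be read off from the singular values of the frontal slices of X̂, since the TNN equals (1/N3)·||bcirc(X)||_*; a NumPy sketch (`tubal_rank_and_tnn` is an illustrative helper name and the tolerance is arbitrary):

```python
import numpy as np

def tubal_rank_and_tnn(X, tol=1e-10):
    """Tensor tubal rank (Definition 6) and tensor nuclear norm
    (Definition 8) computed from the singular values of the frontal
    slices of X_hat; the TNN equals (1/N3) * ||bcirc(X)||_*."""
    n3 = X.shape[2]
    Xh = np.fft.fft(X, axis=2)
    # singular values of every frontal slice of X_hat: (min(n1,n2), n3)
    s = np.stack([np.linalg.svd(Xh[:, :, k], compute_uv=False)
                  for k in range(n3)], axis=1)
    rank_t = int(np.sum(s.max(axis=1) > tol))   # non-zero singular tubes
    tnn = float(s.sum()) / n3                   # equals sum_i S(i, i, 1)
    return rank_t, tnn
```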

Proposed Model
In this section, we propose a sub-image tensor completion framework based on the tensor tubal rank for image restoration.

Sub-Image Generation
As we all know, real color images can be approximated by low-rank matrices on the three channels independently. If we regard a color image as a third-order tensor, and each channel corresponds to a frontal slice, then it can be well approximated by a low-tubal-rank tensor. Figure 1 shows an example to illustrate that most of the singular values of the corresponding tensor of an image are zero, so a low-tubal-rank tensor can be used to approximate a color image.
Although the aforesaid representation can approximate a color image, it does not make full use of the similarity of image data. In order to enhance the low-rank property, we sample an image to obtain four similar sub-images (all sampling factors in this paper are horizontal sampling factor : vertical sampling factor = 2:2): each channel is divided into four sub-images with no pixel overlap between them, as shown in Figure 2a, where each small square represents a pixel. For a three-channel RGB image, the sampling method is illustrated in Figure 2b.
According to the prior knowledge of image local similarity, the four sub-images are similar, so they compose a sub-image tensor which has a low-rank structure. It should be noted that if the numbers of image rows or columns are not even, we can pad one row or one column before the down-sampling. The tensor representation of the color image kodim23 in Figure 1 is A ∈ R^(512×768×3); after sampling, we get the sub-image set denoted by A_s ∈ R^(256×384×12). Figure 1d shows the singular values of the tensor bcirc(A_s). Compared with Figure 1c, most of the singular values of the tensor corresponding to the sub-image set are smaller. Therefore, compared with the original whole image, the sub-image data has a stronger low-rank property.
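The 2:2 polyphase sampling described above can be sketched as follows (NumPy; `to_subimage_tensor` is an illustrative name, even image dimensions are assumed, and the channel-stacking order is one possible choice):

```python
import numpy as np

def to_subimage_tensor(img):
    """Split each channel of an H x W x C image into four polyphase
    sub-images (2:2 sampling, no pixel overlap) and stack them along the
    third mode into an (H/2) x (W/2) x 4C tensor. H and W are assumed to
    be even; pad one row/column first if they are not."""
    subs = [img[i::2, j::2, :] for i in (0, 1) for j in (0, 1)]
    return np.concatenate(subs, axis=2)
```

For a 512 × 768 × 3 image this yields a 256 × 384 × 12 tensor, matching the sub-image set A_s in the text.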

Mode Permutation
It is important to note that the TNN is orientation-dependent: if the tensor is rotated or its modes are permuted, the value of the TNN and the tensor completion result of model (1) may be quite different. For example, a three-channel color image of size n1 × n2 can be represented as three types of tensors, i.e., X1 ∈ R^(n1×n2×3), X2 ∈ R^(n1×3×n2) and X2* ∈ R^(3×n1×n2), where X1 is the most common image tensor representation and X2* denotes the conjugate transpose of X2; the TNN satisfies ||X2||_* = ||X2*||_*, while ||X1||_* and ||X2||_* are in general different.
In order to further improve the performance, we perform the mode permutation [19] after sampling. Here we give an example of the mode permutation as shown in Figure 3.
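The mode permutation itself is just a transpose of tensor modes; an illustrative NumPy sketch:

```python
import numpy as np

# The usual (row x column x RGB) representation X1 and its permuted
# version X2 with RGB as the middle mode; np.transpose plays the role
# of the mode-permutation operator.
X1 = np.random.rand(512, 768, 3)      # row x column x RGB
X2 = np.transpose(X1, (0, 2, 1))      # row x RGB x column
```

The permutation is lossless: transposing back with the same axis order recovers X1.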
In Figure 3, the size of the color image kodim23 is n1 × n2. Its tensor representation is X1 ∈ R^(n1×n2×3); after the mode permutation, it is denoted by X2 ∈ R^(n1×3×n2), which is called the mode permutation of X1. The mode permutation can avoid scanning an entire image, which reduces the overall computational complexity [19].


Solution to the Proposed Method
For the completion problem of the color image tensor X ∈ R^(N1×N2×N3), we propose a color sub-image tensor X_s ∈ R^(n1×n2×n3) (where n1 = N1/2, n2 = N2/2, n3 = 4N3) low-rank optimization model:

argmin_{X_s} ||X_s||_*  s.t.  P_Ω(X_s) = P_Ω(M_s),   (11)

where ||X_s||_* is the tensor nuclear norm of X_s, and M_s and X_s are third-order tensors of the same size. Problem (11) can be solved by the ADMM [20], where the key step is to calculate the proximity operator of the TNN, namely

prox_{λ||·||_*}(Y) = argmin_{X_s} λ||X_s||_* + (1/2)||X_s − Y||_F^2.   (12)

According to the literature [18], let Y = U * S * V^T be the t-SVD of Y ∈ R^(n1×n2×n3); then for each λ > 0 the tensor singular value thresholding (t-SVT) operator is defined as

D_λ(Y) = U * S_λ * V^T,  S_λ = ifft(max(Ŝ − λ, 0), [], 3).   (13)

It is worth noting that the t-SVT operator only needs to apply the soft-thresholding rule to the singular values Ŝ (instead of S) of the frontal slices of Ŷ. The t-SVT operator is the proximity operator associated with the TNN. Based on t-SVT, we exploit the ADMM algorithm to solve problem (11). The augmented Lagrangian function of (11) is defined as

L(L_s, X_s, Y) = ||L_s||_* + ⟨Y, X_s − L_s⟩ + (μ/2)||X_s − L_s||_F^2,   (14)

where Y ∈ R^(n1×n2×n3) is the Lagrangian multiplier and μ > 0 is the penalty parameter. We then update L_s by alternately minimizing the augmented Lagrangian function L; each sub-problem has a closed-form solution, and the L_s sub-problem is solved by the t-SVT operator. A pseudo-code description of the entire optimization procedure for (11) is given in Figure 4.
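The t-SVT operator and the ADMM iteration described above can be sketched as follows (a toy NumPy sketch under the stated variable splitting; `t_svt`, `slrtc_complete` are illustrative names, and the values of mu, rho and the iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def t_svt(Y, lam):
    """Tensor singular value thresholding D_lam(Y): soft-threshold the
    singular values of each frontal slice of Y_hat = fft(Y, [], 3)."""
    n1, n2, n3 = Y.shape
    Yh = np.fft.fft(Y, axis=2)
    Zh = np.zeros_like(Yh)
    half = n3 // 2 + 1                         # conjugate symmetry: only
    for k in range(half):                      # ceil((n3+1)/2) SVDs needed
        u, s, vh = np.linalg.svd(Yh[:, :, k], full_matrices=False)
        Zh[:, :, k] = (u * np.maximum(s - lam, 0.0)) @ vh
    for k in range(half, n3):
        Zh[:, :, k] = Zh[:, :, n3 - k].conj()
    return np.fft.ifft(Zh, axis=2).real

def slrtc_complete(M_obs, Omega, mu=1e-2, rho=1.1, iters=300):
    """Toy ADMM for min ||L||_* s.t. P_Omega(X) = P_Omega(M), X = L."""
    X = M_obs.copy()
    Y = np.zeros_like(X)                       # Lagrangian multiplier
    for _ in range(iters):
        L = t_svt(X + Y / mu, 1.0 / mu)        # t-SVT proximal step
        X = L - Y / mu                         # fill the missing entries
        X[Omega] = M_obs[Omega]                # keep the observed entries
        Y = Y + mu * (X - L)                   # dual ascent
        mu = min(rho * mu, 1e10)               # increase the penalty
    return X
```

On small synthetic low-tubal-rank tensors with most entries observed, this iteration recovers the missing entries far more accurately than zero-filling.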
In the whole procedure of the SLRTC algorithm, the main per-iteration cost lies in the update of L_s^(k+1), which requires computing the fast Fourier transform (FFT) and ⌈(n3+1)/2⌉ SVDs of n1 × n2 matrices. The per-iteration complexity is O(n1 n2 n3 log n3 + n_(1) n_(2)^2 n3), where n_(1) = max(n1, n2) and n_(2) = min(n1, n2). The overall framework of the proposed color image restoration based on sub-image low-rank tensor completion is shown in Figure 5.

Figure 5. Flowchart of the proposed framework. The downsampling method is as described in Section 3.1. After the mode permutation of the sub-images, t-SVT is performed to obtain the recovered sub-images. Finally, the final recovered image is obtained by aggregating the recovered sub-images.
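For the final aggregation step in Figure 5, the recovered sub-images are interleaved back into the full image, inverting the 2:2 sampling of Section 3.1 (NumPy sketch; `aggregate_subimages` is an illustrative name and assumes the sub-images were stacked channel-wise in the sampling-offset order (0,0), (0,1), (1,0), (1,1)):

```python
import numpy as np

def aggregate_subimages(T):
    """Inverse of the 2:2 sub-image sampling: interleave the four
    (H/2) x (W/2) x C recovered sub-images of an (H/2) x (W/2) x 4C
    tensor back into an H x W x C image."""
    h, w, c4 = T.shape
    C = c4 // 4
    img = np.zeros((2 * h, 2 * w, C), dtype=T.dtype)
    for idx, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        img[i::2, j::2, :] = T[:, :, idx * C:(idx + 1) * C]
    return img
```

Sampling followed by aggregation is an exact round trip, so no information is lost by the sub-image representation itself.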


Experiments
In this section, we compare the proposed SLRTC with several classic color image restoration methods, including FaLRTC [4], LRTC-TNN [5], TCTF [6] and LRTV-TTr1 [7]. Among them, LRTC-TNN1 denotes the variant in which the frontal slices of the input tensor correspond to the R, G, and B channels, while LRTC-TNN2 denotes the variant in which the lateral slices of the input tensor correspond to the R, G, and B channels; the LRTC-TNN method is based on the TNN and solves the tensor completion problem via convex optimization.
To evaluate the performance of different methods for color image restoration, we used the widely used peak signal-to-noise ratio (PSNR) and structure similarity (SSIM) [21] indicators in this experiment.
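For reference, PSNR can be computed as follows (the standard formula, sketched in NumPy; the peak value 255 assumes 8-bit images):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    recovered image; `peak` is the maximum possible pixel value."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM is more involved (local means, variances and covariances under a sliding window); library implementations such as scikit-image's `structural_similarity` are typically used.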

Color Image Recovery
We first use the original real nine color images for testing, as shown in Figure 6.


We first use the original real nine color images for testing, as shown in Figure 6. The size of each image is 256 × 256 × 3, i.e., row × column × RGB. In order to test the restoration effect of the various algorithms on the images, we randomly removed 30%, 50%, and 70% of the pixels in each image to form an incomplete tensor X ∈ R^(256×256×3). Table 1 lists the PSNR and SSIM comparisons of all methods. Compared with the LRTC-TNN1, LRTC-TNN2 and TCTF methods, the SLRTC method usually obtains the best image restoration results at missing rates of 30%, 50%, and 70%. Analysing the specific data in Table 1, it can be seen that when locally smooth regions account for a large proportion of the image, such as Airplane, Pepper and Sailboat, the LRTV-TTr1 method restores the images better, because the advantage of TV regularization is to exploit the local smoothness of the image.
In addition, when the missing rate is high (greater than or equal to 70%), the LRTV-TTr1 method is better than SLRTC. When the image contains a large number of texture regions, that is, the image itself has a strong low-rank property, the best effect is achieved by applying low-rank constraints to the restoration of degraded images, such as Facade. In order to further verify the effectiveness of the proposed algorithm, we additionally used 24 color images from the Kodak PhotoCD dataset (http://r0k.us/graphics/kodak/ (accessed on 3 January 2018)) for testing. The size of each image is 768 × 512 × 3. As in the previous test, we randomly removed 30%, 50%, and 70% of the pixels in each image to form an incomplete tensor X ∈ R^(768×512×3). Table 2 lists the PSNR and SSIM comparisons of all methods when the missing rate is 30%; the best values among all methods are in boldface. It can be seen that our algorithm SLRTC surpasses the other algorithms in terms of PSNR, with PSNR values about 1.5-5 dB higher than the other methods. However, in terms of SSIM, the LRTV-TTr1 method is basically optimal. Tables 3 and 4 give the comparisons of PSNR and SSIM when the missing rate is 50% and 70%, respectively. As in Table 2, our algorithm surpasses the other algorithms in terms of PSNR, but its SSIM values are lower than those of the LRTV-TTr1 method. The reason mainly lies in the sampling from image to sub-images in the first step of SLRTC: although the sub-images have a stronger low-rank property than the original image, the sampling weakens the overall structural relevance of the image. At the same time, we also give a comparison of the computational cost of restoring the 24 test images with a resolution of 768 × 512, as shown in Figure 11.
It can be seen that SLRTC is much faster than the FaLRTC, LRTC-TNN1, and LRTV-TTr1 methods, and is comparable to the TCTF and LRTC-TNN2 methods.

Color Video Recovery
Next, we tested the performance of the different methods on the task of completing video data. The test video sequences are City, Bus, Crew, Soccer, and Mobile (https://engineering.purdue.edu/~reibman/ece634/ (accessed on 3 January 2018)).
We mainly consider third-order tensors, so the following preprocessing is performed on each video: the 352 × 288 × 3 × 30 (row × column × RGB × number of frames) video is reshaped into a third-order tensor X ∈ R^(352×288×90).
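This preprocessing amounts to folding the RGB and frame modes into one; a NumPy sketch (the grouping order of channels and frames is an assumption, since the paper does not specify it):

```python
import numpy as np

# A 352 x 288 x 3 x 30 (row x column x RGB x frame) video folded into
# the third-order tensor X of size 352 x 288 x 90; with this reshape,
# the 90 tubes group the 30 frames of each channel together.
video = np.random.rand(352, 288, 3, 30)
X = video.reshape(352, 288, 90)
```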

Figures 12-15 show the visual quality comparison of the Mobile and Bus video sequences restored by different methods when the missing rate is 50% and 80%. When the missing rate is 50%, SLRTC can capture the inherent multi-dimensional characteristics of the data, and its video frame recovery is better than that of the other methods; when the missing rate is 80%, the subjective visual quality of SLRTC on the Mobile frames is better than that of the other methods, while its restoration of the Bus frames is not as good as that of LRTV-TTr1.
Randomly removing 30%, 40%, 50%, 60%, 70%, 80%, and 90% of the pixels in the videos, Figure 16 shows the performance comparison of the various methods for video recovery. It can be seen that the proposed SLRTC algorithm is better than the other methods.

Figure 16. The PSNR metric on video data recovery.

Conclusions
This paper proposes a color image restoration method called SLRTC. Based on the nature of the tensor tubal rank, our method does not minimize the tensor nuclear norm directly on the observed image; instead, it uses the local similarity of the image to decompose the image into multiple sub-images through downsampling, which enhances the low-rank property of the tensor. Experiments show that the proposed algorithm outperforms the comparison algorithms in terms of color image restoration quality and running time.
Notably, the method proposed in this paper is essentially parameter-free, whereas parameter tuning usually requires complex calculations. Even without a TV regularization term, the proposed method exploits the local smoothness prior of the image through downsampling, which is well integrated into the sub-image low-rank tensor completion; the effectiveness of the model is demonstrated through experiments.
Since deep learning-based algorithms have shown their potential to tackle this problem of image restoration in recent years [22,23], we next will integrate the low-rank prior into the neural networks to achieve better performance.
Author Contributions: Conceptualization, X.L. and G.T.; Methodology, X.L.; Software, X.L.; Writing - original draft, X.L.; Writing - review and editing, X.L. and G.T. All authors have read and agreed to the published version of the manuscript.