Quantum tensor singular value decomposition*

*This research is supported in part by Hong Kong Research Grants Council (RGC) grants No. 15208418, No. 15203619 and No. 15506619, and by the Shenzhen Fundamental Research Fund, China, under Grant No. JCYJ20190813165207290.

Tensors are increasingly ubiquitous in various areas of applied mathematics and computing, and tensor decompositions are of practical significance, benefiting many applications in data completion, image processing, computer vision, collaborative filtering, etc. Recently, Kilmer and Martin proposed a new tensor factorization strategy, the tensor singular value decomposition (t-svd), which extends the matrix singular value decomposition to tensors. However, computing the t-svd of a high dimensional tensor is computationally expensive, so it cannot efficiently handle large scale datasets. Motivated by the advantages of quantum computation, in this paper we present a quantum t-svd algorithm for third-order tensors and then extend it to order-p tensors. We prove that our quantum t-svd algorithm for a third-order N dimensional tensor runs in time O(N polylog(N)) if we do not recover classical information from the quantum output state. Moreover, we apply our quantum t-svd algorithm to context-aware multidimensional recommendation systems, where we only need to extract partial classical information from the quantum output state, thus achieving low time complexity.

During the last decade, plenty of research has been carried out on t-svd and its applications. The t-svd factorization strategy was first proposed by Kilmer and Martin [1] for third-order tensors, and was then extended to order-p tensors by Martin et al in 2013 [26]. The t-svd extends the matrix svd to tensors while avoiding the loss of information inherent in the tensor unfoldings used in the CP and Tucker decompositions. The general idea of the t-svd factorization is to perform matrix svds in the Fourier domain; consequently, it allows other matrix factorization techniques, e.g. the QR decomposition, to be extended to tensors easily using a similar idea. Moreover, the t-svd is superior to Tucker or HOSVD in the sense that the truncated t-svd gives an optimal approximation of a tensor measured in the Frobenius norm, while this best approximation cannot be obtained by truncating the full HOSVD or Tucker decomposition. Due to this optimality property, t-svd has been shown to outperform HOSVD in facial recognition [27] and tensor completion [2, 28].
However, the complexity of computing the full t-svd of a third-order N dimensional tensor is O(N^4), which is extremely high for large scale datasets. Hence, many works have been devoted to low-rank approximate t-svd representations, which give up the optimality property in exchange for comparatively low complexity. In [29], Zhang et al propose a randomized t-svd method which can produce a factorization with properties similar to the t-svd, and the computational complexity is reduced to O(kN^3 + N^3 log N), where k is the truncation rank.
Considering the high cost of the existing classical t-svd algorithms, we present a quantum version of the t-svd for third-order tensors which reduces the complexity to O(N polylog(N)). To the best of our knowledge, the efficiency of this algorithm beats any known classical t-svd algorithm in the literature. In section 4, we extend the quantum t-svd algorithm to order-p tensors.
An important step in the classical t-svd algorithm is performing the discrete Fourier transform (DFT) along the third mode of a tensor A ∈ C^(N1×N2×N3), with computational complexity O(N3 log N3) for each tube A(i, j, :), i = 0, ..., N1 − 1; j = 0, ..., N2 − 1. Thus, the complexity of performing the DFT on all tubes of A is O(N1 N2 N3 log N3). In the quantum t-svd algorithm to be proposed, this procedure is accelerated by the quantum Fourier transform (QFT) [30], whose complexity is only O((log N3)^2). Moreover, due to quantum superposition, the QFT can be performed simultaneously on the third register of the state |A⟩, which is equivalent to performing the DFT on all tubes of A in parallel, so the total complexity of this step is still O((log N3)^2). After performing the QFT, in order to further accelerate the second step of the classical t-svd algorithm, which performs a matrix svd on every frontal slice of the transformed tensor Â, we apply a modified quantum singular value estimation (QSVE) algorithm, originally proposed in [31], to the frontal slices Â(:, :, i) in parallel, with complexity at most O(N polylog(N)) for N dimensional tensors. Traditionally, the quantum singular value decomposition of non-sparse low-rank matrices involves exponentiating matrices and outputs a superposition state of singular values and their associated singular vectors. However, this Hamiltonian simulation method requires that the matrix to be exponentiated be low-rank, which is difficult to satisfy in general. In our algorithm, we use the modified QSVE algorithm, where the matrix need not be low-rank, sparse or Hermitian.
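As a classical point of reference for this step, the tube-wise DFT is a single mode-3 FFT. A minimal numpy sketch (the dimensions are illustrative, not from the paper):

```python
import numpy as np

# Illustrative dimensions; any N1 x N2 x N3 tensor works.
N1, N2, N3 = 3, 4, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((N1, N2, N3))

# DFT along the third mode: one call transforms all N1*N2 tubes at once.
A_hat = np.fft.fft(A, axis=2)

# Classically this is the DFT applied tube by tube, O(N3 log N3) per tube.
for i in range(N1):
    for j in range(N2):
        assert np.allclose(A_hat[i, j, :], np.fft.fft(A[i, j, :]))
```

The quantum algorithm replaces exactly this loop over N1 N2 tubes by a single QFT on the third register.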
The main contributions of this paper are as follows. The original QSVE algorithm proposed in [31] has to be carefully modified to become a useful subroutine in our quantum t-svd algorithm. Specifically, the original QSVE, stated in lemma 3, requires that the matrix A be stored in the classical binary tree structure; the singular values of A can then be estimated efficiently. The difficulty lies in the fact that we cannot require all the Fourier-domain slices Â^(m) to be stored in the data structure, since they are obtained only after the QFT. It is more reasonable to assume that the frontal slices of the original tensor A are stored in the binary tree structure. Therefore, the main obstacle we need to overcome is how to estimate the singular values of Â^(m) on the condition that every frontal slice A^(k) of the original tensor is stored in the data structure. This problem is solved by theorem 2, whose proof presents a detailed illustration of this process.
In section 5, we design a quantum tensor approximation algorithm based on algorithm 3, and present an application of this algorithm, namely context-aware multidimensional recommendation systems. Our quantum tensor approximation algorithm suits the 3D recommendation systems model very well for two main reasons.
First, compared with other tensor decompositions, t-svd has been shown to be superior in capturing spatial-shifting correlations [28], so it is suitable for modeling 3D recommendation systems. Suppose the preference information of a user is encoded in a third-order tensor in which the three modes represent locations, points-of-interest and time frames respectively; a user's preference at a certain time is very likely to affect the recommendations for him/her at other times. The QFT in the t-svd algorithm binds a user's preferences at different times together, so recommendation using t-svd can improve user experience by integrating the relations between different contexts. Second, our quantum t-svd algorithm can achieve good results at low cost when applied to 3D recommendation systems. In fact, it is not necessary to reconstruct the entire tensor as in the classical 3D recommendation systems algorithms based on tensor factorizations, such as the truncated t-svd with complexity O(kN^3 + N^3 log N) [3] and the truncated HOSVD (T-HOSVD) with complexity O(kN^3) [32] for third-order N dimensional tensors, where k is the truncation rank. Our quantum 3D recommendation systems algorithm only samples high-value elements from the approximated tensor (corresponding to measuring the output state in the computational basis a certain number of times), and this is exactly what we need for recommendation systems. Consequently, our quantum 3D recommendation systems algorithm achieves low complexity if the preference tensor has several dominating (namely, high-value) elements. The rest of this paper is organized as follows. A standard classical t-svd algorithm and several related concepts are introduced in section 2.2; section 2.3 summarizes the quantum singular value estimation algorithm proposed in [31]. Section 3 presents our main algorithm, quantum t-svd, and its complexity analysis. We extend the quantum t-svd algorithm to order-p tensors in section 4.
In section 5, we design a quantum tensor approximation algorithm based on classical truncated t-svd and then provide an application on context-aware multidimensional recommendation systems. In section 6, we conclude the paper.

Preliminaries
In section 2.1, we first introduce the concept of a tensor and the notation used throughout the paper. In section 2.2, we review the concept of the t-product and the t-svd algorithm proposed by Kilmer et al [1] in 2011. Then in section 2.3, we briefly review the quantum singular value estimation (QSVE) algorithm [31] proposed by Kerenidis and Prakash.

Tensor background and notation

A tensor A ∈ C^(N1×N2×N3) is a third-order tensor of complex values with dimension Ni for mode i, i = 1, 2, 3. In this sense, a matrix can be considered as a second-order tensor, and a vector is a tensor of order 1. For a third-order tensor, we use the term frontal slice (see figure 1). By fixing all indices but the last one, the result is a tube of size 1 × 1 × N3, which is in fact a vector. For example, A(i, j, :) is the (i, j)-th tube of A.
Notation. In this paper, script letters are used to denote higher-order tensors. Capital nonscript letters are used to represent matrices (A, B, ...), and vectors are written as boldface lower case letters (x, y, ...). DFT(u) refers to performing the discrete Fourier transform (DFT) on u, which is computed by the fast Fourier transform, written in Matlab notation as fft(u). The tensor obtained by taking the DFT along the third mode of A is denoted by Â, i.e. Â = fft(A, [], 3); correspondingly, A = ifft(Â, [], 3) is the inverse of this operation. We use A^(i) to denote the i-th frontal slice A(:, :, i) for short; hence the m-th frontal slice of Â is Â^(m). There are three types of product we would like to clarify here: u ⊛ v refers to the circular convolution between vectors u and v, ⊙ is the element-wise product and A * B represents the t-product between tensors A and B.

The t-svd algorithm and t-product
In this subsection, we present the classical t-svd together with its pseudocode, algorithm 1. For readability of the main text, all mathematical details are deferred to appendix A. Simply speaking, the t-svd of a tensor can be interpreted as the usual matrix svd in the Fourier domain, as can be seen in algorithm 1.
Theorem 1 (tensor singular value decomposition (t-svd) [1]). For A ∈ C^(N1×N2×N3), its t-svd is given by A = U * S * V^T, where U ∈ C^(N1×N1×N3) and V ∈ C^(N2×N2×N3) are orthogonal tensors and every frontal slice of S ∈ C^(N1×N2×N3) is a diagonal matrix (see figure 2).

Algorithm 1. t-svd for third-order tensors [1].

For an order-p tensor A ∈ C^(N1×N2×...×Np), the frontal slices of A are referenced using linear indexing by reshaping the tensor into an N1 × N2 × (N3 N4 ... Np) third-order tensor; the i-th frontal slice is then A(:, :, i). Using this representation, one version of the MATLAB pseudocode of the t-svd algorithm for order-p tensors is provided below.
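To make the three steps of algorithm 1 concrete, here is a hedged numpy sketch of the classical third-order t-svd (the function name t_svd and the test tensor are ours, not from the paper; we return the Fourier-domain factors and check the reconstruction there):

```python
import numpy as np

def t_svd(A):
    # Step 1: DFT along the third mode.
    N1, N2, N3 = A.shape
    A_hat = np.fft.fft(A, axis=2)
    U_hat = np.zeros((N1, N1, N3), dtype=complex)
    S_hat = np.zeros((N1, N2, N3), dtype=complex)
    V_hat = np.zeros((N2, N2, N3), dtype=complex)
    # Step 2: an ordinary matrix svd on every Fourier-domain frontal slice.
    for m in range(N3):
        u, s, vh = np.linalg.svd(A_hat[:, :, m])
        U_hat[:, :, m] = u
        np.fill_diagonal(S_hat[:, :, m], s)
        V_hat[:, :, m] = vh.conj().T
    # Step 3 of algorithm 1 would apply the inverse DFT along mode 3 to each
    # factor; the Fourier-domain factors suffice for the check below.
    return U_hat, S_hat, V_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))
U_hat, S_hat, V_hat = t_svd(A)

# Slice-wise reconstruction in the Fourier domain, then one inverse DFT.
recon_hat = np.stack([U_hat[:, :, m] @ S_hat[:, :, m] @ V_hat[:, :, m].conj().T
                      for m in range(5)], axis=2)
assert np.allclose(np.fft.ifft(recon_hat, axis=2), A)
```

The design choice mirrors the text: all tensor structure lives in the two FFT calls, and everything in between is plain matrix linear algebra.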
Algorithm 2. t-svd for order-p tensors [26]. Input: A. Output: U, S, V such that A = U * S * V^T.

In the remainder of this subsection, we summarize some properties of the t-svd which will be used in the quantum tensor approximation algorithm to be developed in section 5.
In the t-svd literature, the diagonal elements of the tensor S are called the singular values of A. Moreover, the l2 norms of the nonzero tubes S(i, i, :) are in descending order, i.e. ‖S(0, 0, :)‖ ≥ ‖S(1, 1, :)‖ ≥ ⋯.
However, it should be noted that the diagonal elements of S may be unordered and even negative due to the inverse DFT. Thus, the truncated t-svd method for data approximation or tensor completion is designed by truncating the diagonal elements of Ŝ instead of S, as the diagonal elements of the former are non-negative and ordered in descending order; see lemma 1.

Quantum singular value estimation
In [31], Kerenidis and Prakash propose a quantum singular value estimation (QSVE) algorithm. They assume that the input data is stored in a classical binary tree data structure, as stated in the following lemma, such that the QSVE algorithm with access to this data structure can efficiently create superpositions of the rows of the matrix.

Lemma 2 [31]. Let A ∈ C^(N1×N2) be a matrix with τ nonzero entries and let A_i denote its i-th row. There exists a data structure storing the matrix A in O(τ log^2(N1 N2)) space such that a quantum algorithm having access to this data structure can perform the mapping |i⟩|0⟩ → |i⟩|A_i⟩ in time polylog(N1 N2).

The following lemma summarizes the main idea of the QSVE algorithm; its detailed description can be found in [31].
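The binary tree structure of lemma 2 can be sketched classically as follows: for each row, the leaves hold the squared magnitudes of the entries, each internal node holds the sum of its children, so the root holds the squared row norm, and every amplitude split needed by a state-preparation circuit is an O(log N) lookup. The class and method names below are ours, a sketch under these assumptions:

```python
import numpy as np

class BinaryTreeRow:
    """One row A_i of the matrix, stored as a complete binary tree of
    partial sums of |a_j|^2 (phases would be kept at the leaves)."""
    def __init__(self, row):
        n = 1
        while n < len(row):
            n *= 2                       # pad to a power of two
        self.leaves = np.zeros(n)
        self.leaves[:len(row)] = np.abs(row) ** 2
        self.levels = [self.leaves]
        while len(self.levels[-1]) > 1:  # build parents level by level
            prev = self.levels[-1]
            self.levels.append(prev[0::2] + prev[1::2])

    def norm_sq(self):
        return self.levels[-1][0]        # root: ||A_i||^2

    def split(self, level, node):
        """Fraction of probability mass routed to the left child."""
        parent = self.levels[level + 1][node]
        left = self.levels[level][2 * node]
        return left / parent if parent > 0 else 0.0

tree = BinaryTreeRow(np.array([3.0, 4.0]))
assert tree.norm_sq() == 25.0            # ||(3, 4)||^2
assert tree.split(0, 0) == 9.0 / 25.0    # mass routed left at the root
```

Updating one entry touches one leaf-to-root path, which is why the structure supports efficient dynamic updates as well as state preparation.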
Lemma 3 (QSVE [31]). Let A ∈ C^(N1×N2) be stored in the data structure mentioned in lemma 2, and let the singular value decomposition of A be A = Σ_l σ_l u_l v_l^T. Let ε > 0 be the precision parameter. Then there is a quantum algorithm, denoted U_SVE, that runs in time O(polylog(N1 N2)/ε) and estimates each singular value σ_l to additive error ε‖A‖_F.

Remark 1. With regard to the matrix A in lemma 3, we can also choose the input state as |A⟩ = (1/‖A‖_F) Σ_l σ_l |u_l⟩|v_l⟩, corresponding to the vectorized form of the normalized matrix A/‖A‖_F represented in svd form. This representation of the input state is adopted in section 3. Note that we are able to express the state |A⟩ in the above form even if the singular pairs of A are not known. According to lemma 3, we can obtain σ̄_l, an estimate of σ_l, stored in the third register superposed with the singular vectors |u_l⟩|v_l⟩.

The algorithm
In this section, we first present our quantum t-svd algorithm, algorithm 3, for third-order tensors.

Assumption 1. Every frontal slice of A is stored in the tree structure introduced in lemma 2.

Assumption 2. We can prepare the state |A⟩ efficiently. Without loss of generality, we assume that ‖A‖_F = 1.

Algorithm 3. Quantum t-svd for third-order tensors. Output: state |φ⟩. 1: Perform the QFT on the third register of the quantum state |A⟩ in (1) to obtain |Â⟩.

The quantum circuit of algorithm 3 is shown in figure 3, where the blocks U_SVE^(m) denote the controlled modified QSVE operators. Before illustrating the algorithm, we first interpret the final quantum state |φ⟩. Similar to the quantum singular value decomposition for matrices [33], whose output allows singular values and associated singular vectors to be revealed in quantum form, the output state |φ⟩ in our algorithm also encodes the estimated singular values of the frontal slices of Â together with the associated singular vectors. Next, we give a detailed explanation of Step 2. We first rewrite the state in (2) for further use. For every fixed m, the unnormalized state in (2) corresponds to the matrix Â^(m), namely the m-th frontal slice of the tensor Â. Normalizing the state in (6) produces the quantum state |Â^(m)⟩. Therefore, the state |Â⟩ in (2) can be rewritten in terms of the states |Â^(m)⟩, so our algorithm can output a quantum state whose representation is similar to the matrix svd. Another consideration is that we do not want information unrelated to the tensor A (e.g. an arbitrary garbage state) to be involved in our algorithm.

Complexity analysis
For simplicity, we consider a tensor A ∈ C^(N×N×N) with the same dimension in each mode. In Steps 1 and 3 of algorithm 3, performing the QFT or the inverse QFT in parallel on the third register of the state |A⟩ has complexity O((log N)^2), compared with the complexity O(N^3 log N) of the DFT performed on the N^2 tubes of the tensor A in the classical t-svd algorithm. Moreover, in the classical t-svd, the complexity of performing the matrix svd (Step 2 of algorithm 1) on all frontal slices of Â is O(N^4). In contrast, in our quantum t-svd algorithm, this step is accelerated by theorem 2 (the modified QSVE), whose complexity is O(N polylog(N)) for each frontal slice Â^(m). Since we perform this modified QSVE on each Â^(m) in parallel, m = 0, ..., N − 1, the running time of Step 2 is still O(N polylog(N)). Therefore, the total computational complexity of algorithm 3 is O(N polylog(N)).

Quantum t-svd for order-p tensors
Following a similar procedure, we can easily extend the quantum t-svd from third-order tensors to order-p tensors.
We assume that the quantum state |A⟩ corresponding to the tensor A can be prepared efficiently. Finally, we recover the |m3⟩ ... |mp⟩ expression and perform the inverse QFT on the third through the p-th registers, obtaining the final state corresponding to the quantum t-svd of the order-p tensor A.

Algorithm 4. Quantum t-svd for order-p tensors
Input: state |A⟩. Output: state |φ_p⟩. 1: Perform the QFT in parallel on the third through the p-th registers of the quantum state |A⟩ to obtain the state |Â⟩. 2: Apply the controlled-U_SVE to the state |Â⟩ to obtain the state |ψ_p⟩. 3: Perform the inverse QFT in parallel on the third through the p-th registers of the above state and output the state |φ_p⟩.
For an order-p tensor A ∈ C^(N×⋯×N), compared with the classical t-svd algorithm [26] whose time complexity is O(N^(p+1)), our algorithm outputs a quantum state with the classical t-svd information encoded, and by an analysis similar to that in section 3, its time complexity is O(N polylog(N)).

Application to context-aware POI recommendation systems
In this section, we apply algorithm 3 to build another quantum algorithm, algorithm 5, which implements a classical machine learning task: context-aware POI recommendation systems.
In point-of-interest (POI) recommendation, a user's preference for a POI, such as a restaurant or a sightseeing site, is strongly influenced by context, such as the time slot in a day, his/her current location, etc. Therefore, many works focus on integrating multiple contexts in a more efficient manner to improve user experience. Tensors are a natural choice for modeling high-order contextual information; e.g. a user's ratings for different POIs can be encoded in a preference tensor in which the three modes represent locations, POIs and time frames respectively. Classical (non-quantum) context-aware POI recommendation systems utilizing various tensor decomposition methods have been shown experimentally to achieve better results (in both accuracy and execution time) than non-contextual modeling [35, 36]. However, these methods are computationally expensive due to the high cost of tensor decomposition. In fact, the classical 3D recommendation systems algorithms based on tensor decomposition are costly: the truncated t-svd has complexity O(kN^3 + N^3 log N) [3], and the truncated HOSVD (T-HOSVD) has complexity O(kN^3) [32], where k is the truncation rank and N is the dimension of the preference tensor. Considering the effectiveness and the high cost of context-aware POI recommendation systems, we propose a quantum context-aware POI recommendation systems algorithm (QC-POI) with lower complexity O(N polylog(N)) for suitable parameters, whose result is the classical information of POI recommendation indices.
The problem of context-aware POI recommendation can be stated as follows. Suppose there is a hidden preference tensor A that encodes the preference information of a given user under two contexts, e.g. locations and time. In practical applications, only part of the entries of A can be observed: users typically engage with only a small subset of POIs, and a considerable number of possible interactions remain unobserved. For example, a person may simply be unaware of existing alternatives to the POIs of his/her choice. Predicting those entries helps to make better recommendations. We denote the tensor whose entries are the observed ratings by T, which is sparse in general. Our goal is to predict the unobserved triples (location, POI, time) and recommend some of the user's favorite POIs across all contexts based on the comparatively high predicted values. For example, Alice has visited several cities (locations) in America, and her ratings of different POIs when she was in these cities are encoded in a tensor A whose three modes are cities, POIs and time respectively. For some reason, she only scores a subset of the POIs of these cities, recorded in T, and our task is to recommend her favorite POIs across all cities and time slots based on the truncated t-svd of the tensor T.
In this application, we make two assumptions. First, we assume that the tensor A is of low tubal-rank. Second, the original tensor A has several dominating entries. In fact, the low tubal-rank assumption is also adopted in the classical truncated t-svd data completion problem [2, 3, 37]. As for the second assumption, in real situations it is very likely that a user has some favorite POIs among all contexts (dominating entries).
Our QC-POI algorithm, algorithm 5, consists of three processes: quantum t-svd, quantum state projection and quantum measurement. After algorithm 3 and the quantum state projection process, we can obtain the state |T_σ⟩ corresponding to an approximation of the hidden preference tensor A under certain conditions; a similar conclusion can be found in [19], theorem 1. Thus, previously unobserved values of triples corresponding to this user's favourite POIs in the hidden preference tensor A might be boosted after the projected t-svd is applied to the observed tensor T. As T_σ is non-sparse in general, we can predict the missing entries based on the relatively high values in T_σ, and provide POI recommendations by measuring the state |T_σ⟩ in the computational basis. The complexity of our QC-POI algorithm for obtaining a good recommendation is O(N polylog(N)) for suitably chosen parameters; see the analysis in the paragraph below algorithm 5. The output is a POI recommendation index for this user over all locations and time slots.
In fact, our quantum 3D recommendation systems algorithm borrows the idea of quantum recommendation systems for matrices proposed by Kerenidis and Prakash [31]. For recommendation systems modeled by an m × n preference matrix, Kerenidis and Prakash designed a quantum algorithm that offers recommendations by just measuring the quantum state representing an approximation of the hidden preference matrix obtained by truncated matrix SVD.
The following is a summary of the steps of algorithm 5. We first follow Steps 1 and 2 of algorithm 3, then perform the quantum projection with a pre-specified threshold σ on state (4). Specifically, for state (4), we apply the unitary operator V on the register a and an ancillary register |0⟩ that maps |t⟩_a|0⟩ → |t⟩_a|1⟩ if t < σ and |t⟩_a|0⟩ → |t⟩_a|0⟩ otherwise, and then measure the ancillary register. The probability of obtaining the outcome |0⟩ is determined by the norm of T̂_σ, where T_σ is the inverse QFT of T̂_σ. Based on amplitude amplification, we have to repeat the measurement O(1/α) times in order to ensure that the success probability of obtaining the outcome |0⟩ is close to 1. Thus, the complexity of obtaining the state (18) carries a factor O(1/α). Combining (9) with (11), we find that the state (18) can be seen as an approximation of |T̂⟩. In Step 4, we perform the inverse QFT on state (18), which yields an approximation of the state |T⟩ appearing in the classical counterpart algorithm [3]. The effectiveness of the classical counterpart of algorithm 5 has been tested by numerical experiments; see remark 3 for more detail.
Remark 3. The classical counterpart of algorithm 5 corresponds to a tensor completion method proposed in [3], and its effectiveness has been verified by numerical experiments. In what follows, we briefly summarize this algorithm. For an N1 × N2 × N3 tensor T, we use algorithm 1 to obtain U, S and V. The total number of diagonal entries of Ŝ is rN3, where r = min(N1, N2). After sorting them from largest to smallest, supposing these entries decay rapidly or small values are the majority, there exists a number k such that keeping the top k diagonal entries of Ŝ, denoted Ŝ_k, and setting the other entries to 0 results in a good approximation. Thus, the approximate tensor is T_k = U_k * S_k * V_k^T, where U_k, S_k and V_k are the inverse DFTs of Û_k, Ŝ_k and V̂_k respectively. According to the simulation results in [3], this approximation method has the lowest relative square error (RSE) among the compared compression methods, and it performs well on video datasets from both stationary and non-stationary cameras. In the final step, we measure the output quantum state |T_σ⟩ in the computational basis in order to extract classical information (recommendation indices) from the approximate quantum state (21). Here, a good recommendation means that the measurement outcome corresponds to a dominating entry of the hidden preference tensor. Next we give a rough estimate of the complexity of obtaining a good recommendation by algorithm 5. Denote by υ the sum of the squares of the dominating entries of the tensor T_σ (corresponding to the large amplitudes of the state |T_σ⟩). According to the above analysis, the total cost of the first four steps of algorithm 5 is O((N/α) polylog(N)), where α is the Frobenius norm of the tensor T_σ defined in (19). Then, after Step 5, we need to repeat the measurement O(α^2/υ) times in order to ensure that the probability of obtaining a good POI recommendation index is close to 1.
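The top-k truncation summarized in remark 3 can be sketched in numpy as follows. The function name is ours, and we construct a low tubal-rank test tensor (a slice-wise Fourier-domain product of two thin random tensors) so that keeping all of its nonzero Fourier-domain singular values recovers it exactly:

```python
import numpy as np

def truncated_t_svd_approx(T, k):
    """Keep the k largest diagonal entries of S_hat across all frontal
    slices of fft(T, axis=2), zero the rest, and reconstruct (sketch of
    the compression method in [3])."""
    N1, N2, N3 = T.shape
    T_hat = np.fft.fft(T, axis=2)
    R_hat = np.zeros_like(T_hat)
    svds, entries = [], []
    for m in range(N3):
        u, s, vh = np.linalg.svd(T_hat[:, :, m])
        svds.append((u, s, vh))
        entries += [(s[l], m, l) for l in range(len(s))]
    entries.sort(key=lambda e: -e[0])       # all rN3 values, largest first
    for val, m, l in entries[:k]:           # keep only the top k
        u, s, vh = svds[m]
        R_hat[:, :, m] += val * np.outer(u[:, l], vh[l, :])
    # For a symmetric truncation the inverse FFT is real up to roundoff.
    return np.real(np.fft.ifft(R_hat, axis=2))

rng = np.random.default_rng(4)
X_hat = np.fft.fft(rng.standard_normal((3, 2, 4)), axis=2)
Y_hat = np.fft.fft(rng.standard_normal((2, 3, 4)), axis=2)
# Tubal rank <= 2: every Fourier-domain slice has rank <= 2.
T = np.real(np.fft.ifft(np.einsum('ikm,kjm->ijm', X_hat, Y_hat), axis=2))
approx = truncated_t_svd_approx(T, k=8)     # 2 values per slice x 4 slices
assert np.allclose(approx, T)
```

With smaller k the reconstruction degrades gracefully, which is the regime the remark's rapid-decay assumption targets.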
Therefore, the complexity of our QC-POI algorithm is O((Nα/υ) polylog(N)). Under the dominating-entries assumption, the complexity of algorithm 5 is thus O(N polylog(N)) for large N. The measurement outcome corresponds to a triple (location, POI, time slot) whose second entry is the recommended POI index, and the algorithm achieves a polynomial speedup over the classical 3D recommendation systems algorithms.
We summarize the main steps of this algorithm in algorithm 5.

Conclusion
In this paper, we propose a quantum t-svd algorithm for third-order tensors with complexity O(N polylog(N)). The key tools accelerating this process are the quantum Fourier transform and quantum singular value estimation. We then extend the third-order algorithm to order-p tensors. Moreover, based on the quantum t-svd algorithm, a quantum tensor approximation algorithm is proposed and applied to context-aware 3D recommendation systems.

Appendix A. Relevant definitions and results of the classical t-svd algorithm
In this section, we provide the relevant definitions and results of the classical t-svd algorithm, algorithm 1 in the main text.
Definition 2 (circular convolution). Let u, v ∈ C^N. The circular convolution between u and v produces a vector x = u ⊛ v of the same size, defined by x_i = Σ_{j=0}^{N−1} u_j v_{(i−j) mod N}. As a circulant matrix can be diagonalized by means of the discrete Fourier transform (DFT), it follows from definition 2 that DFT(x) = diag(DFT(u)) DFT(v), where diag(u) returns a square diagonal matrix with the elements of the vector u on the main diagonal. As a result, the circular convolution between two vectors in definition 2 is better understood in the Fourier domain, as given by the following result.
Theorem 3 (cyclic convolution theorem [38]). For u, v as in definition 2, we have DFT(u ⊛ v) = DFT(u) ⊙ DFT(v), where ⊙ denotes the element-wise product.
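Theorem 3 is easy to check numerically; the following sketch computes the circular convolution directly from definition 2 and compares both sides (the vectors are random examples of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(8), rng.standard_normal(8)

# Circular convolution straight from definition 2: x_i = sum_j u_j v_{(i-j) mod N}.
x = np.array([sum(u[j] * v[(i - j) % 8] for j in range(8)) for i in range(8)])

# Theorem 3: DFT(u circ v) = DFT(u) element-wise DFT(v).
assert np.allclose(np.fft.fft(x), np.fft.fft(u) * np.fft.fft(v))
```

Reading the identity right to left also gives the O(N log N) way to compute the convolution: multiply the FFTs and invert.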

If a tensor A ∈ C^(N1×N2×N3) is considered as an N1 × N2 matrix whose (i, j)-th entry is a tube of dimension N3, i.e. A(i, j, :), then, based on definition 2, the t-product between tensors is defined as follows.
Definition 3 (t-product [1]). Let A ∈ C^(N1×N2×N3) and B ∈ C^(N2×N4×N3). The t-product C = A * B is the N1 × N4 × N3 tensor whose (i, j)-th tube is C(i, j, :) = Σ_k A(i, k, :) ⊛ B(k, j, :). As with the circular convolution in definition 2, the t-product in definition 3 is better interpreted in the Fourier domain. Specifically, let Â be the tensor whose (i, j)-th tube is DFT(A(i, j, :)). Then, by theorem 3 and definition 3, we have Ĉ(i, j, :) = Σ_k Â(i, k, :) ⊙ B̂(k, j, :), which is the Fourier counterpart of equation (A.2). Interestingly, for a fixed index l in the third mode, Ĉ^(l) = Â^(l) B̂^(l), which is the conventional matrix product. This nice equivalence between the t-product and matrix multiplication (in the Fourier domain) is summarized in the following theorem.
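The Fourier characterization gives the standard fast implementation of the t-product; a hedged numpy sketch (function name and dimensions ours), checked tube-wise against definition 3:

```python
import numpy as np

def t_product(A, B):
    # For each slice index l, C_hat^(l) = A_hat^(l) @ B_hat^(l).
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.einsum('ikl,kjl->ijl', A_hat, B_hat)
    return np.real(np.fft.ifft(C_hat, axis=2))  # real for real inputs

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)

# Definition 3, one tube: C(0, 1, :) = sum_k A(0, k, :) circ B(k, 1, :).
tube = sum(np.real(np.fft.ifft(np.fft.fft(A[0, k, :]) * np.fft.fft(B[k, 1, :])))
           for k in range(4))
assert np.allclose(C[0, 1, :], tube)
```

This is exactly the structure algorithm 1 exploits: transform once, do plain matrix algebra slice by slice, transform back.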

Theorem 4 [1]. For tensors A and B of compatible dimensions, C = A * B if and only if Ĉ^(l) = Â^(l) B̂^(l) for every l.
By the equivalence relation given in theorem 4 above, the t-svd defined in theorem 1 in the main text can be interpreted as the matrix SVD in the Fourier domain, as reflected in algorithm 1. In what follows, we list some definitions used in theorem 1 and algorithm 1.
Firstly, tensor transpose operation is used in theorem 1, which is defined as follows.
Definition 4 (tensor transpose [1]). The transpose A^T of a tensor A is obtained by transposing each of the frontal slices and then reversing the order of the transposed frontal slices 2 through N3.
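A sketch of definition 4 in numpy (function name ours), together with the Fourier-domain identity that explains why it behaves like the matrix transpose: the m-th Fourier slice of A^T is the conjugate transpose of the m-th Fourier slice of A:

```python
import numpy as np

def t_transpose(A):
    # Transpose every frontal slice, keep slice 0 first,
    # then reverse the order of slices 1..N3-1 (definition 4).
    At = np.transpose(A, (1, 0, 2)).conj()
    return np.concatenate([At[:, :, :1], At[:, :, 1:][:, :, ::-1]], axis=2)

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4, 5))
At = t_transpose(A)

# Fourier-domain check: fft(A^T) slice m equals (fft(A) slice m)^H.
A_hat = np.fft.fft(A, axis=2)
At_hat = np.fft.fft(At, axis=2)
for m in range(5):
    assert np.allclose(At_hat[:, :, m], A_hat[:, :, m].conj().T)
```

The slice reversal is what makes the conjugation come out right under the DFT; transposing the slices alone would not satisfy (A * B)^T = B^T * A^T.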
The tensor transpose defined in definition 4 has the same property as the matrix transpose, i.e. (A * B)^T = B^T * A^T.
Secondly, orthogonal tensors are used in theorem 1; their definition is given below.

Definition 5 (orthogonal tensor [1]). A tensor Q ∈ C^(N×N×N3) is orthogonal if Q^T * Q = Q * Q^T = I, where the identity tensor I is the tensor whose first frontal slice is the N × N identity matrix and all the other frontal slices are zero matrices.
Finally, we give the definition of the tensor Frobenius norm, as it is quite useful for tensor approximation.
Definition 6 (tensor Frobenius norm [1]). The Frobenius norm of a third-order tensor A is ‖A‖_F = (Σ_{i,j,k} |A(i, j, k)|^2)^(1/2). Similar to orthogonal matrices, the orthogonality defined in definition 5 preserves the Frobenius norm of a tensor, i.e. given an orthogonal tensor Q, we have ‖Q * A‖_F = ‖A‖_F. Moreover, when the tensor is second-order, definition 5 coincides with the definition of orthogonal matrices.

Appendix B. The proof of theorem 2
Before proving theorem 2, we would like to sketch the proof first. According to assumption 1, each frontal slice A^(k) is stored in the binary tree structure; hence, based on the proof of lemma 3 in [31], the states |A^(k)(i, :)⟩ corresponding to the i-th row of A^(k) can be prepared efficiently by operators P^(k). Based on these operators, two new isometries P_m and Q_m are constructed in order to perform QSVE on Â^(m). Moreover, the input of our modified QSVE is also different from that in [31]. The details of the proof are given below.
Proof. Since every A^(k), k = 0, ..., N3 − 1, is stored in the binary tree structure, the quantum computer can perform the corresponding row-preparation mappings in time polylog(N1 N2). Based on the efficiently implemented operators P^(k) and U_P^(k), we define another operator, which is analyzed below.
The LCU technique was first proposed by Long in [43] in a more general form, and Shao et al summarize this result in [42]. The problem of LCU can be formulated as follows: given coefficients α_i ∈ C and unitary operators U_i, implement the operator Σ_i α_i U_i.