Abstract
Tensors are increasingly ubiquitous in various areas of applied mathematics and computing, and tensor decompositions are of practical significance, benefiting many applications in data completion, image processing, computer vision, collaborative filtering, etc. Recently, Kilmer and Martin proposed a new tensor factorization strategy, the tensor singular value decomposition (t-svd), which extends the matrix singular value decomposition to tensors. However, computing the t-svd of high dimensional tensors is computationally expensive and thus cannot efficiently handle large scale datasets. Motivated by the advantages of quantum computation, in this paper we present a quantum algorithm for the t-svd of third-order tensors and then extend it to order-p tensors. We prove that our quantum t-svd algorithm for a third-order N dimensional tensor runs in time if we do not recover classical information from the quantum output state. Moreover, we apply our quantum t-svd algorithm to context-aware multidimensional recommendation systems, where we only need to extract partial classical information from the quantum output state, thus achieving low time complexity.
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
A tensor (hypermatrix) refers to a multi-array of numbers. It has been applied in several areas including image deblurring, denoising, video recovery, data completion, tensor networks, multi-partite quantum systems, and machine learning [1–19], owing to the flexibility of tensors in representing data. Some of these applications utilize tensor decompositions including CANDECOMP/PARAFAC (CP) [20], tensor-train decomposition (TT) [21], TUCKER [22], higher-order singular value decomposition (HOSVD) [12, 23, 24], tensor singular value decomposition (t-svd) [1, 3, 19, 25], etc.
During the last decade, plenty of research has been carried out on t-svd and its applications. The t-svd factorization strategy was first proposed by Kilmer and Martin [1] for third-order tensors, and was extended to order-p tensors by Martin et al in 2013 [26]. The t-svd extends the matrix svd strategy to tensors while avoiding the loss of information inherent in unfolding tensors, as is done in the CP and TUCKER decompositions. The general idea of the t-svd factorization is to perform the matrix svd in the Fourier domain; consequently, it allows other matrix factorization techniques, e.g. the QR decomposition, to be easily extended to tensors using a similar idea. Moreover, the t-svd is superior to Tucker or HOSVD in the sense that the truncated t-svd gives an optimal approximation of a tensor measured by the Frobenius norm, while this best approximation cannot be obtained by truncating the full HOSVD or Tucker decomposition. Due to this optimality property, t-svd has been shown to perform better than HOSVD in facial recognition [27] and tensor completion [2, 28].
However, the complexity of calculating the full t-svd of a third-order N dimensional tensor is , which is extremely high for large scale datasets. Hence many works have been devoted to low-rank approximate t-svd representations, which give up the optimality property but have comparatively low complexity. In [29], Zhang et al propose a randomized t-svd method which can produce a factorization with similar properties to the t-svd, and the computational complexity is reduced to , where k is the truncation rank.
Considering the high cost of the existing classical t-svd algorithms, we present a quantum version of t-svd for third-order tensors which reduces the complexity to . To the best of our knowledge, the efficiency of this algorithm beats any known classical t-svd algorithm in the literature. In section 4, we extend the quantum t-svd algorithm to order-p tensors.
An important step in a classical t-svd algorithm is to perform discrete Fourier transform (DFT) along the third mode of a tensor , obtaining with computational complexity for each tube , i = 0, ⋯ ,N1 − 1; j = 0, ⋯ ,N2 − 1. Thus, the complexity of performing the DFT on all tubes of the tensor is . In the quantum t-svd algorithm to be proposed, this procedure is accelerated by the quantum Fourier transform (QFT) [30] whose complexity is only . Moreover, due to quantum superposition, the QFT can be simultaneously performed on the third register of the state , which is equivalent to performing the DFT for all tubes of parallelly, so the total complexity of this step is still .
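As a classical illustration of this first step, the mode-3 DFT can be sketched in NumPy; `np.fft.fft(A, axis=2)` transforms all N1N2 tubes at once (the tensor name and sizes here are arbitrary, chosen only for the sketch):

```python
import numpy as np

# Sketch: apply the DFT along the third mode of a tensor A of shape
# (N1, N2, N3). Each of the N1*N2 tubes A[i, j, :] gets a length-N3 FFT.
rng = np.random.default_rng(0)
N1, N2, N3 = 4, 3, 8
A = rng.standard_normal((N1, N2, N3))

# One call transforms every tube along axis 2:
A_hat = np.fft.fft(A, axis=2)

# Check against an explicit per-tube loop.
A_hat_loop = np.empty_like(A_hat)
for i in range(N1):
    for j in range(N2):
        A_hat_loop[i, j, :] = np.fft.fft(A[i, j, :])
assert np.allclose(A_hat, A_hat_loop)
```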
After performing the QFT, in order to further accelerate the second step of the classical t-svd algorithm, which performs the matrix svd on every frontal slice of , we apply a modified quantum singular value estimation (QSVE) algorithm, originally proposed in [31], to the frontal slices parallelly with complexity at most for N dimensional tensors. Traditionally, the quantum singular value decomposition of non-sparse low-rank matrices involves exponentiating matrices and outputs the superposition state of singular values and their associated singular vectors. However, this Hamiltonian simulation method requires that the matrix to be exponentiated be low-rank, which is difficult to satisfy in general. In our algorithm, we use the modified QSVE algorithm, where the matrix need not be low-rank, sparse, or Hermitian.
The main contributions of this paper are listed as follows:
The original QSVE algorithm proposed in [31] has to be carefully modified to become a useful subroutine in our quantum t-svd algorithm. Specifically, the original QSVE, stated in lemma 3, requires that the matrix A be stored in the classical binary tree structure, so that the singular values of A can be estimated efficiently. Given a tensor , in algorithm 3, QSVE is performed on the matrices which are frontal slices of , m = 0, ⋯ ,N3 − 1. The difficulty lies in the fact that we cannot require all to be stored in the data structure, since they are obtained after the QFT. It is more reasonable to assume that the frontal slices of the original tensor are stored in the binary tree structure. Therefore, the main obstacle we need to overcome is how to estimate the singular values of on the condition that every frontal slice of the original tensor A(k) is stored in the data structure. This problem is solved by theorem 2, whose proof presents a detailed illustration of this process.
In section 5, we design a quantum tensor approximation algorithm based on algorithm 3, and present an application of this algorithm, namely the context-aware multidimensional recommendation systems. Our quantum tensor approximation algorithm suits the 3D recommendation systems model very well for mainly two reasons.
First, compared with other tensor decompositions, t-svd has been shown to be superior in capturing spatial-shifting correlations [28], so it is suitable for modeling 3D recommendation systems. Suppose the preference information of a user is encoded in a third-order tensor whose three modes represent locations, points-of-interest, and time frames respectively. A user's preference at a certain time is, in general, very likely to affect the recommendations for him/her at other times. The QFT in the t-svd algorithm binds a user's preferences at different times together, so recommendation using t-svd can improve the user experience by integrating the relations between different contexts.
Second, our quantum t-svd algorithm can achieve good results at low cost when applied to 3D recommendation systems. In fact, it is not necessary to reconstruct the entire tensor as in the classical 3D recommendation systems algorithms based on tensor factorizations, such as the truncated t-svd with complexity [3] and the truncated HOSVD (T-HOSVD) with complexity [32] for third-order N dimensional tensors, where k is the truncation rank. Our quantum 3D recommendation systems algorithm only samples high-value elements from the approximated tensor (corresponding to measuring the output state in the computational basis a certain number of times), and this is exactly what we need for recommendation systems. Consequently, our quantum 3D recommendation systems algorithm achieves the complexity if the preference tensor has several dominating (namely, high-value) elements.
The rest of this paper is organized as follows. A standard classical t-svd algorithm and several related concepts are introduced in section 2.2; section 2.3 summarizes the quantum singular value estimation algorithm proposed in [31]. Section 3 presents our main algorithm, quantum t-svd, and its complexity analysis. We extend the quantum t-svd algorithm to order-p tensors in section 4. In section 5, we design a quantum tensor approximation algorithm based on classical truncated t-svd and then provide an application on context-aware multidimensional recommendation systems. In section 6, we conclude the paper.
2. Preliminaries
In section 2.1, we first introduce the concept of a tensor and the notation used throughout the paper. In section 2.2, we review the concept of the t-product and the t-svd algorithm proposed by Kilmer et al [1] in 2011. Then in section 2.3, we briefly review the quantum singular value estimation (QSVE) algorithm [31] proposed by Kerenidis and Prakash.
2.1. Tensor background and notation
A tensor is a multidimensional array of data, where p is the order and (N1, ⋯ ,Np ) is the dimension. The order of a tensor is the number of modes. For instance, is a third-order tensor of complex values with dimension Ni for mode i, i = 1, 2, 3, respectively. In this sense, a matrix A can be considered as a second-order tensor, and a vector x is a tensor of order 1. For a third-order tensor, we use the terms frontal slice, horizontal slice and lateral slice (see figure 1). By fixing all indices but the last one, the result is a tube of size 1 × 1 × N3, which is actually a vector. For example, is the (i, j)-th tube of .
Notation. In this paper, script letters are used to denote higher-order tensors (, , ⋯). Capital nonscript letters are used to represent matrices (A, B, ⋯), and vectors are written as boldface lower case letters ( x , y , ⋯). DFT( u ) refers to performing the discrete Fourier transform (DFT) on u , which is computed by the fast Fourier transform represented in Matlab notation fft( u ). The tensor after DFT along the third mode of is denoted by , i.e. . Hence we have , which is the inverse of the above operation. We use A(i) to denote the i-th frontal slice for short, hence the m-th frontal slice of is . There are three types of product we would like to clarify here: u ⊛ v refers to the circular convolution between vectors u and v , ⊙ is the element-wise product and represents the t-product between tensors and .
2.2. The t-svd algorithm and t-product
In this subsection, we present the classical t-svd as well as its pseudocode, algorithm 1. For readability of the main text, all mathematical details are put in appendix A. Simply speaking, the t-svd of a tensor can be interpreted as the usual matrix svd in the Fourier domain, as can be seen in algorithm 1.
tensor singular value decomposition (t-svd) [1].Theorem 1 For , its t-svd is given by , where and are orthogonal tensors, and every frontal slice of is a diagonal matrix (see figure 2).
Algorithm 1. t-svd for third-order tensors [1]
Input:
Output:
;
for
do
;
;
end for
;
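A classical reference implementation of algorithm 1 can be sketched in NumPy as follows; this is a minimal sketch (function and variable names are our own, not from [1]): FFT along mode 3, matrix SVD of every frontal slice in the Fourier domain, then inverse FFT back.

```python
import numpy as np

def t_svd(A):
    # Sketch of Algorithm 1 (classical t-svd) for a third-order tensor A.
    N1, N2, N3 = A.shape
    A_hat = np.fft.fft(A, axis=2)          # Step 1: DFT along mode 3
    U_hat = np.zeros((N1, N1, N3), dtype=complex)
    S_hat = np.zeros((N1, N2, N3), dtype=complex)
    V_hat = np.zeros((N2, N2, N3), dtype=complex)
    for m in range(N3):                    # Step 2: SVD of each frontal slice
        u, s, vh = np.linalg.svd(A_hat[:, :, m])
        U_hat[:, :, m] = u
        S_hat[:len(s), :len(s), m] = np.diag(s)
        V_hat[:, :, m] = vh.conj().T
    # Step 3: inverse DFT along mode 3 gives the t-svd factors.
    U = np.fft.ifft(U_hat, axis=2)
    S = np.fft.ifft(S_hat, axis=2)
    V = np.fft.ifft(V_hat, axis=2)
    return U, S, V

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4, 6))
U, S, V = t_svd(A)
# Verify the factorization slice-wise in the Fourier domain (theorem 4).
U_hat, S_hat, V_hat = (np.fft.fft(X, axis=2) for X in (U, S, V))
for m in range(6):
    rec = U_hat[:, :, m] @ S_hat[:, :, m] @ V_hat[:, :, m].conj().T
    assert np.allclose(rec, np.fft.fft(A, axis=2)[:, :, m])
```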
For an order-p tensor , the frontal slices of are referenced using linear indexing by reshaping the tensor into an N1 × N2 × N3 N4 ⋯ Np third-order tensor; the i-th frontal slice is . Using this representation, one version of MATLAB pseudocode of the t-svd algorithm for order-p tensors is provided below.
Algorithm 2. t-svd for order-p tensors [26]
Input:
for
do
;
end for
for
do
;
;
end for
for
do
;
end for
In the remainder of this subsection, we summarize some properties of t-svd, which will be used in the quantum tensor approximation algorithm to be developed in section 5.
In the t-svd literature, the diagonal elements of the tensor are called the singular values of . Moreover, the l2 norms of the nonzero tubes are in descending order, i.e.
However, it should be noticed that the diagonal elements of may be unordered and even negative due to the inverse DFT. Thus, the truncated t-svd method for data approximation or tensor completion is designed by truncating the diagonal elements of instead of , as the diagonal elements of the former are non-negative and ordered in descending order; see lemma 1.
[1, 29].Lemma 1 Suppose the t-svd of the tensor is . Then we have
where the matrices and and the vector are regarded as third-order tensors. For , define . Then
where . Therefore, is the theoretical minimal error, given by .
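The truncation of lemma 1 can be sketched classically by keeping, in each frontal slice in the Fourier domain, only the k largest singular values; this is a NumPy sketch under the Fourier-domain equivalence above, with illustrative names of our own:

```python
import numpy as np

def truncated_t_svd(A, k):
    # Sketch of the rank-k truncated t-svd: in the Fourier domain, keep only
    # the k largest singular values of every frontal slice, then invert.
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.zeros_like(A_hat)
    for m in range(A.shape[2]):
        u, s, vh = np.linalg.svd(A_hat[:, :, m], full_matrices=False)
        B_hat[:, :, m] = u[:, :k] @ np.diag(s[:k]) @ vh[:k, :]
    return np.fft.ifft(B_hat, axis=2).real

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 5, 4))
err = [np.linalg.norm(A - truncated_t_svd(A, k)) for k in range(1, 6)]
# The Frobenius error is non-increasing in the truncation rank k.
assert all(err[i] >= err[i + 1] - 1e-9 for i in range(len(err) - 1))
```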
2.3. Quantum singular value estimation
In [31], Kerenidis and Prakash propose a quantum singular value estimation (QSVE) algorithm. They assume that the input data is stored in a classical binary tree data structure, as stated in the following lemma, such that the QSVE algorithm with access to this data structure can efficiently create superpositions of the rows of the stored matrix.
[31], theorem 5.1.Lemma 2 Consider a matrix with τ nonzero entries. Let Ai be its i-th row, and There exists a data structure storing the matrix A in space such that a quantum algorithm having access to this data structure can perform the mapping , for and , for in time .
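A minimal classical sketch of this binary tree for a single row of A may help; we assume, following the construction in [31], that the leaves store squared entry magnitudes (with signs/phases kept at the leaves) and each internal node stores the sum of its children, so the root holds the squared row norm:

```python
import numpy as np

def build_row_tree(row):
    # Hypothetical sketch of the Kerenidis-Prakash tree for one row:
    # level 0 holds |A_ij|^2, each higher level holds pairwise sums.
    n = 1
    while n < len(row):
        n *= 2                      # pad to a power of two
    leaves = np.zeros(n)
    leaves[:len(row)] = np.abs(row) ** 2
    tree = [leaves]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append(prev[0::2] + prev[1::2])
    return tree                     # tree[-1][0] == ||row||^2

row = np.array([3.0, -4.0, 0.0, 1.0])
tree = build_row_tree(row)
assert np.isclose(tree[-1][0], np.sum(row ** 2))  # root stores the squared norm
```

Updating one entry touches only one root-to-leaf path, which is the source of the polylogarithmic update cost stated in the lemma.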
The following lemma summarizes the main idea of the QSVE algorithm and its detailed description can be found in [31].
[31], theorem 5.2.Lemma 3 Let and be stored in the data structure as mentioned in lemma 2. Let the singular value decomposition of A be , where . The input state can be represented in the eigenstates of A, i.e. . Let be the precision parameter. Then there is a quantum algorithm, denoted as , that runs in time and achieves
with probability at least , where is the estimated value of satisfying for all l.
Remark 1. With regard to the matrix A stated in lemma 3, we can also choose the input state as , corresponding to the vectorized form of the normalized matrix represented in the svd form. This representation of the input state is adopted in section 3. Note that we are able to express the state in the above form even if the singular pairs of A are not known. According to lemma 3, we can obtain , an estimate of , stored in the third register superposed with the singular vectors after performing , i.e. the output state is , where for all .
3. Quantum t-svd for third-order tensors
3.1. The algorithm
In this section, we first present our quantum t-svd algorithm, algorithm 3, for third-order tensors , then explain it in detail.
Assumption 1. Every frontal slice of is stored in a tree structure introduced in lemma 2.
Assumption 2. We can prepare the state
efficiently. Without loss of generality, we assume that .
Algorithm 3. Quantum t-svd for third-order tensors
Input: tensor prepared in a quantum state , precision , , .
Output: state
1: Perform the QFT on the third register of the quantum state in (1) to get
. (2) 2: Perform the controlled operation
(3) on the state to obtain
, (4) where is the estimated value of , and the singular value decomposition of is . 3: Perform the inverse QFT on the last register of (4) and output the state expressed as
, (5) where .
The quantum circuit of algorithm 3 is shown in figure 3, where the blocks , m = 0, ⋯ ,N3 − 1, are illustrated in figure 4.
Before illustrating the algorithm, we first interpret the final quantum state ∣ϕ〉. Similar to the quantum singular value decomposition for matrices [33], whose output reveals singular values and associated singular vectors in quantum form, the output state ∣ϕ〉 in our algorithm also contains the estimated values of , , stored in the third register in superposition with the corresponding singular vectors. Although the singular values of the tensor are defined as , according to algorithm 1 for the classical t-svd, the singular values of , , have wider applications than the singular values of , . For example, some low-rank tensor completion problems are solved by minimizing the tensor nuclear norm, which is defined as the sum of all the singular values of [3, 5]. Moreover, the theoretical minimal error truncation is also based on the singular values of ; see lemma 1. Therefore, in algorithm 3, we estimate the values of , m = 0, ⋯ ,N3 − 1; l = 0, ⋯ ,r − 1, and store them in the third register of the final state ∣ϕ〉 for future use. Furthermore, in terms of the circulant matrix defined in definition 1, is the right singular vector corresponding to its singular value . Similarly, the corresponding left singular vector is .
Next, we give a detailed explanation on Step 2. We first rewrite the state in (2) for further use. For every fixed m, the unnormalized state
in (2) corresponds to the matrix
namely, the m-th frontal slice of the tensor Normalizing the state in (6) produces a quantum state
Therefore, the state in (2) can be rewritten as
In Step 2, we utilize a controlled operation U defined in (3) to estimate the singular values of parallelly, m = 0, ⋯ ,N3 − 1. Due to the quantum parallelism, the operator U performed on the superposition state is thus equivalent to performed on each of the components as a single input. That is,
Next, we focus on the result of in (10). The state can be rewritten in the form
where is the rescaled singular value of . The following theorem describes , a modified quantum singular value estimation process on each matrix utilizing the corresponding input represented in (11).
Theorem 2. Given every frontal slice of the original tensor stored in the data structure (Lemma 2), there is a quantum algorithm, denoted as , that uses the input in (11) and outputs the state
with probability at least , where is the singular triplet of the matrix in (7), and is the precision such that for all . For tensor with same dimension N on every order, the running time to implement is .
Proof. See appendix B.
Actually, the process proposed in theorem 2 is quite different from the original QSVE technique introduced in lemma 3. In theorem 2, it is proved that we can estimate the singular values of on the condition that each frontal slice of the original tensor A(k) is stored in the binary tree. The proof of theorem 2 presents a detailed illustration of the procedure of , and the circuit shown in figure 4 can help to understand it.
Thus after Step 2, the state in (10) becomes the state in (4) based on theorem 2.
Our quantum t-svd algorithm can be used as a subroutine of other algorithms, that is, it is suitable for some specific applications where the singular values of are used. For example, some third order tensor completion problems can be efficiently solved by extracting the singular values of and only keeping the greater ones. Moreover, some context-aware recommendation systems also utilize tensor factorizations, such as the truncated t-svd [3] and the truncated HOSVD [32]. See more details in section 5.
Remark 2. Note that the input of is rather than an arbitrary quantum state, as is commonly used in some quantum svd algorithms [34]. This input fits our quantum t-svd algorithm better since it completely retains the singular information, so our algorithm can output a quantum state whose representation is similar to the matrix svd. Another consideration is that we do not want information unrelated to the tensor (e.g. an arbitrary state) to be involved in our algorithm.
3.2. Complexity analysis
For simplicity, we consider a tensor with the same dimension on each mode. In Steps 1 and 3 of algorithm 3, performing the QFT or the inverse QFT parallelly on the third register of the state has complexity , compared with the complexity of the DFT performed on N2 tubes of the tensor in the classical t-svd algorithm. Moreover, in the classical t-svd, the complexity of performing the matrix svd (Step 2 of algorithm 1) on all frontal slices of is . In contrast, in our quantum t-svd algorithm, this step is accelerated by theorem 2 (the modified QSVE), whose complexity is on each frontal slice . Since we perform this modified QSVE on each parallelly, m = 0, ⋯ ,N − 1, the running time of Step 2 is still . Therefore, the total computational complexity of algorithm 3 is .
4. Quantum t-svd for order-p tensors
Following a similar procedure, we can extend the quantum t-svd for third-order tensors to order-p tensors easily.
We assume that the quantum state corresponding to the tensor can be prepared efficiently, where with ni being the number of qubits on the corresponding mode and
Next, we perform the QFT on the third to the p-th modes of the state , and then use ∣m〉 to denote ∣m3〉 ⋯ ∣mp 〉, i.e. . The value of m ranges from 0 to ι − 1, where ι = N3 N4 ⋯ Np + N4 N5 ⋯ Np + ⋯ + Np . Specially, when N3 = ⋯ = Np = N. Then we obtain
Let the matrix
and perform the modified QSVE on , m = 0, ⋯ ,ι − 1, parallelly using the same strategy described in section 3.1; we then obtain the state
after Step 2.
Finally, we recover the ∣m3〉 ⋯ ∣mp 〉 expression and perform the inverse QFT on the p-th to the third register, obtaining the final state
corresponding to the quantum t-svd of order-p tensor .
Algorithm 4. Quantum t-svd for order-p tensors
Input: tensor prepared in a quantum state, precision , .
Output: state . 1: Perform the QFT parallelly from the third to the p-th register of quantum state , obtain the state . 2: Perform the modified QSVE for each matrix with precision parallelly, , by using the controlled- acting on the state , to obtain the state . 3: Perform the inverse QFT parallelly from the third to the p-th register of the above state and output the state .
For an order-p tensor , compared with the classical t-svd algorithm [26], whose time complexity is , our algorithm outputs a quantum state with the classical t-svd information encoded, and by a similar analysis the time complexity of our quantum algorithm is .
5. Application to context-aware POI recommendation systems
In this section, we apply algorithm 3 to another quantum algorithm, algorithm 5, which implements a classical machine learning task: context-aware POI recommendation systems.
In point-of-interest (POI) recommendations, a user's preference for a POI, such as restaurants and sightseeing sites, is strongly influenced by context, such as the time slot in a day, his/her current location, etc. Therefore, many works focus on integrating multiple contexts more efficiently to improve user experience. Tensors are a natural choice for modeling high-order contextual information; e.g., a user's ratings for different POIs can be encoded in a preference tensor whose three modes represent locations, POIs, and time frames respectively. Classical (non-quantum) context-aware POI recommendation systems utilizing various tensor decomposition methods have been shown experimentally to achieve better results (in both accuracy and execution time) than non-contextual modeling [35, 36]. However, these methods are computationally expensive due to the high cost of tensor decomposition; e.g., the classical 3D recommendation systems algorithms based on tensor decomposition include the truncated t-svd with complexity [3] and the truncated HOSVD (T-HOSVD) with complexity [32], where k is the truncation rank and N is the dimension of the preference tensor. Considering the effectiveness and high cost of context-aware POI recommendation systems, we propose a quantum context-aware POI recommendation systems algorithm (QC-POI) with lower complexity for suitable parameters; the result of this algorithm is the classical information of POI recommendation indices.
The problem of context-aware POI recommendation can be stated as follows. Suppose there is a hidden preference tensor that encodes the preference information of a given user under two contexts, e.g. locations and time. In practical applications, only a part of the entries of can be observed: users typically engage with only a small subset of POIs, and a considerable number of possible interactions remain unobserved. For example, a person may simply be unaware of existing alternatives to the POIs of his/her choice. Finding those entries helps to make better predictions. We denote the tensor whose entries are observed ratings as , which is sparse in general. Our goal is to predict the unobserved triples (location, POI, time) and recommend some of the user's favorite POIs among all contexts based on the comparatively high predicted values. For example, Alice has visited several cities (locations) in America, and her ratings of different POIs when she was in these cities are encoded in a tensor whose three modes are cities, POIs, and time respectively. For some reason, she only scores a part of the POIs of these cities, denoted as , and our task is to recommend her favorite POIs among all cities and time slots based on the truncated t-svd of the tensor .
In this application, we make two assumptions. First, we assume that the tensor is of low tubal-rank. Second, the original tensor has several dominating entries. In fact, the low tubal-rank assumption is also adopted in the classical truncated t-svd data completion problem [2, 3, 37]. As for the second assumption, in real situations it is very likely that a user has some favorite POIs among all contexts (dominating entries).
Our QC-POI algorithm, algorithm 5, consists of three processes: quantum t-svd, quantum state projection, and quantum measurement. After algorithm 3 and the quantum state projection process, we can obtain the state corresponding to an approximation of the hidden preference tensor under certain conditions; a similar conclusion can be found in [19], theorem 1. Thus, previously unobserved values of triples corresponding to this user's favorite POIs in the hidden preference tensor might be boosted after the projected t-svd of the observed tensor . As is non-sparse in general, we can predict the missing entries based on the relatively high predicted values in , and provide POI recommendations by measuring the state in the computational basis. The complexity of our QC-POI algorithm for obtaining a good recommendation is for suitably chosen parameters; see the analysis in the paragraph below algorithm 5. The output is a POI recommendation index for this user over all locations and time slots.
In fact, our quantum 3D recommendation systems algorithm borrows the idea of quantum recommendation systems for matrices proposed by Kerenidis and Prakash [31]. For recommendation systems modeled by an m × n preference matrix, Kerenidis and Prakash designed a quantum algorithm that offers recommendations by just measuring the quantum state representing an approximation of the hidden preference matrix obtained by truncated matrix SVD.
The following is a summary of the steps of algorithm 5. We first follow the Steps 1 and 2 of algorithm 3, then perform the quantum projection with a pre-specified threshold σ on state (4). Specifically, for state (4), we apply the unitary operator V on the register a and an ancillary register ∣0〉 that maps ∣t〉a ∣0〉 → ∣t〉a ∣1〉 if t < σ and ∣t〉a ∣0〉 → ∣t〉a ∣0〉 otherwise. Therefore, after Step 2 of algorithm 3, we get
Next, we apply the inverse modified QSVE on state (17) and discard the register a. Then we measure the third register of (17), and postselect the outcome ∣0〉, obtaining
where
The probability that we obtain the outcome ∣0〉 is
since the Frobenius norm is unchanged under the Fourier transform. Tensor denotes the tensor whose m-th frontal slice is obtained by truncating with threshold σ, and is the inverse QFT of . Based on amplitude amplification, we have to repeat the measurement times in order to ensure that the success probability of getting the outcome ∣0〉 is close to 1. Thus, the complexity of obtaining the state (18) is . Combining (9) with (11), we find that the state (18) can be seen as an approximation of .
In Step 4, we perform the inverse QFT on state (18), obtaining the final state denoted as
which is an approximation of the state in the classical counterpart algorithm [3]. The effectiveness of the classical counterpart of algorithm 3 has been tested by numerical experiments; see more detail in remark 3.
Remark 3. The classical counterpart of algorithm 5 corresponds to a tensor completion method proposed in [3], and its effectiveness has been verified by numerical experiments. In what follows we briefly summarize this algorithm. For an tensor , we use algorithm 1 to get , , and . The total number of diagonal entries of is , where . After sorting them from largest to smallest, assuming these entries decay rapidly or small values are the majority, there exists a number k such that keeping the top k diagonal entries of , denoted as , and setting the other entries to 0 results in a good approximation. Thus, the approximate tensor is , where , and are the inverse DFT of and respectively. According to the simulation results in [3], this approximation method has the lowest relative square error (RSE) compared with the other two compression methods, and it performs well on video datasets from both stationary and non-stationary cameras.
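The compression summarized in remark 3 can be sketched classically as follows; this is a NumPy sketch (names are our own) under the assumption that the truncation acts on the diagonal entries of the Fourier-domain singular tensor, pooled across all frontal slices:

```python
import numpy as np

def top_k_tsvd_approx(A, k):
    # Sketch of the remark-3 compression: t-svd in the Fourier domain,
    # keep the k globally largest diagonal (singular) entries across all
    # frontal slices, zero the rest, transform back.
    N1, N2, N3 = A.shape
    A_hat = np.fft.fft(A, axis=2)
    U_hat = np.zeros((N1, N1, N3), dtype=complex)
    S_hat = np.zeros((N1, N2, N3), dtype=complex)
    Vh_hat = np.zeros((N2, N2, N3), dtype=complex)
    diag = []
    for m in range(N3):
        u, s, vh = np.linalg.svd(A_hat[:, :, m])
        U_hat[:, :, m], Vh_hat[:, :, m] = u, vh
        for l, val in enumerate(s):
            diag.append((val, l, m))
    for val, l, m in sorted(diag, reverse=True)[:k]:   # keep top k entries
        S_hat[l, l, m] = val
    B_hat = np.einsum('ijm,jkm,klm->ilm', U_hat, S_hat, Vh_hat)
    return np.fft.ifft(B_hat, axis=2).real

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3, 5))
# Keeping all min(N1, N2) * N3 = 15 diagonal entries reconstructs A exactly.
assert np.allclose(top_k_tsvd_approx(A, 15), A)
```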
Algorithm 5. Quantum context-aware POI recommendation systems
Input: The observed tensor satisfying Assumption 1, threshold σ, .
Output: a recommendation index. 1: Follow the Steps 1 and 2 of algorithm 3. 2: Apply the unitary operator V on the register a and an ancillary register that maps if and otherwise, obtaining the state in (17). 3: Perform the inverse modified QSVE on (17), discard the register a and postselect the outcome to obtain state in (18). 4: Perform the inverse QFT on state (18) to obtain in (21). 5: Measure the output state in (21) in the computational basis to get a recommendation index.
In the final step, we measure the output quantum state in the computational basis in order to extract classical information (recommendation indices) from the approximate quantum state (21). Here, a good recommendation means that the measurement outcome has a large amplitude, corresponding to the dominating entries in the hidden preference tensor. Next, we give a rough estimate of the complexity of obtaining a good recommendation by algorithm 5. Denote the sum of the squares of the dominating entries of tensor as υ (corresponding to the large amplitudes of the state ). According to the above analysis, the total cost of the first four steps of algorithm 5 is , where α is the Frobenius norm of tensor defined in (19). Then, after Step 5, we need to repeat the measurement times in order to ensure that the probability of getting a good POI recommendation index is close to 1. Therefore, the complexity of our QC-POI algorithm is when N1 = N2 = N3 = N. Under the dominating-entries assumption, the complexity of algorithm 5 can be for large N. The measurement outcome corresponds to a triplet (location, POI, time slot) whose second entry is the recommended POI index, and this algorithm achieves a polynomial speedup over the classical 3D recommendation systems algorithms.
We summarize the main steps of this algorithm in algorithm 5.
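A classical stand-in for the measurement in Step 5 clarifies why dominating entries are returned with high probability: sampling an index (i, j, m) with probability proportional to the squared entry mimics measuring the state in the computational basis. This sketch uses synthetic data with one planted dominating entry:

```python
import numpy as np

# Classical analogue of Step 5: measurement in the computational basis
# returns the triple (location, POI, time) with probability
# |B_ijm|^2 / ||B||_F^2, so dominating entries are the likely outcomes.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 5, 3)) * 0.1
B[2, 1, 0] = 10.0                          # one dominating (high-value) entry
p = (B ** 2).ravel() / np.sum(B ** 2)      # measurement probabilities
samples = rng.choice(B.size, size=200, p=p)
i, j, m = np.unravel_index(np.bincount(samples).argmax(), B.shape)
assert (i, j, m) == (2, 1, 0)              # the dominating triple is recommended
```

The larger the fraction υ of squared mass held by the dominating entries, the fewer repetitions are needed, matching the complexity discussion above.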
6. Conclusion
In this paper, we propose a quantum t-svd algorithm for third-order tensors with complexity . The key tools that accelerate this process are the quantum Fourier transform and quantum singular value estimation. We then extend this algorithm to order-p tensors. Moreover, based on the quantum t-svd algorithm, we propose a quantum tensor approximation algorithm, which is applied to context-aware 3D recommendation systems.
Appendix A.: Relevant definitions and results of the classical t-svd algorithm
In this section, we provide the relevant definitions and results of the classical t-svd algorithm, algorithm 1 in the main text.
Definition 1 (circulant matrix [1]). Given a vector and a tensor with frontal slices , , the matrices and are defined as
respectively.
Definition 2 (circular convolution). Let . The circular convolution of and produces a vector of the same size, defined as
As a circulant matrix can be diagonalized by the discrete Fourier transform (DFT), definition 2 gives DFT( x ) = diag(DFT( u ))DFT( v ), where diag( u ) denotes the square diagonal matrix with the elements of the vector u on its main diagonal. As a result, the circular convolution of two vectors in definition 2 is better understood in the Fourier domain, as made precise by the following result.
Theorem 3 (cyclic convolution theorem [38]). Given , let be as defined in definition 2. We have
where denotes the element-wise product.
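Theorem 3 can be checked numerically; the NumPy sketch below compares a direct circular convolution with its Fourier-domain computation (array sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Direct circular convolution: x_k = sum_j u_j * v_{(k - j) mod n}
x_direct = np.array([sum(u[j] * v[(k - j) % n] for j in range(n))
                     for k in range(n)])

# Fourier-domain computation: IDFT of the element-wise product of DFTs
x_fft = np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)).real

print(np.allclose(x_direct, x_fft))  # True
```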
If a tensor is considered as an N1 × N2 matrix whose (i, j)-th entry is a tube of dimension N3, i.e. , then based on definition 2, the t-product between tensors is defined as follows.
Definition 3 (t-product [1]). Let and . The t-product of and , i.e. , is an tensor whose (i, j)-th tube is
for all and .
As with the circular convolution in definition 2, the t-product in definition 3 can be better interpreted in the Fourier domain. Specifically, let be the tensor whose (i, j)-th tube is . Then by theorem 3 and definition 3, we have
which is the Fourier counterpart of equation (A.2). Interestingly, for a fixed index l in the third mode, equation (A.3) is equivalent to , which is the conventional matrix product. This equivalence between the t-product and matrix multiplication (in the Fourier domain) is summarized in the following theorem.
Theorem 4 [1] For tensors and , the equivalence relation
holds for , where is the l-th frontal slice of .
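The slice-wise equivalence of theorem 4 suggests a direct classical implementation of the t-product. The sketch below (the helper name `t_product` is ours) computes it via FFTs along the third mode and cross-checks one tube against definition 3:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3) via the
    Fourier-domain equivalence of theorem 4: a matrix product per slice."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ikl,kjl->ijl', Ah, Bh)   # matmul per frontal slice
    return np.fft.ifft(Ch, axis=2).real

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)

# Definition 3 computes the (0, 1) tube as a sum of circular convolutions.
tube = sum(np.fft.ifft(np.fft.fft(A[0, k]) * np.fft.fft(B[k, 1])).real
           for k in range(4))
print(np.allclose(C[0, 1], tube))  # True
```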
By the equivalence relation given in theorem 4 above, the t-svd defined in theorem 1 in the main text can be interpreted as matrix SVDs in the Fourier domain, as reflected in algorithm 1. In what follows, we list some definitions used in theorem 1 and algorithm 1.
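For reference, the classical algorithm 1 admits a short NumPy sketch: FFT along the third mode, one matrix SVD per frontal slice, and an inverse FFT back; mirroring the conjugate-symmetric slices keeps the factors real. This is an illustrative reconstruction of the classical routine, not the quantum algorithm:

```python
import numpy as np

def t_svd(A):
    """Classical t-svd (algorithm 1): FFT along mode 3, an SVD per
    frontal slice in the Fourier domain, then an inverse FFT."""
    n1, n2, n3 = A.shape
    r = min(n1, n2)
    Ah = np.fft.fft(A, axis=2)
    Uh = np.zeros((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.zeros((n2, n2, n3), dtype=complex)
    for l in range(n3 // 2 + 1):      # remaining slices follow by symmetry
        U, s, Vt = np.linalg.svd(Ah[:, :, l])
        Uh[:, :, l], Sh[:r, :r, l], Vh[:, :, l] = U, np.diag(s), Vt.conj().T
        if 0 < l < n3 - l:            # mirrored slice keeps the factors real
            Uh[:, :, n3 - l] = U.conj()
            Sh[:r, :r, n3 - l] = np.diag(s)
            Vh[:, :, n3 - l] = Vh[:, :, l].conj()
    return tuple(np.fft.ifft(X, axis=2).real for X in (Uh, Sh, Vh))

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4, 5))
U, S, V = t_svd(A)

# Reconstruction check per Fourier slice: A_hat = U_hat S_hat V_hat^H
Uh, Sh, Vh = (np.fft.fft(X, axis=2) for X in (U, S, V))
Ah = np.einsum('ikl,kml,jml->ijl', Uh, Sh, Vh.conj())
print(np.allclose(np.fft.ifft(Ah, axis=2).real, A))  # True
```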
Firstly, tensor transpose operation is used in theorem 1, which is defined as follows.
Definition 4 (tensor transpose [1]). The transpose of a tensor , denoted by , is obtained by transposing all the frontal slices and then reversing the order of the transposed frontal slices 2 through .
The tensor transpose defined in definition 4 has the same property as the matrix transpose, i.e. .
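A minimal sketch of definition 4 and the involution property noted above (the helper name is ours):

```python
import numpy as np

def t_transpose(A):
    """Definition 4: transpose every frontal slice, then reverse the
    order of slices 2 through n3 (the first slice stays in place)."""
    n3 = A.shape[2]
    order = [0] + list(range(n3 - 1, 0, -1))
    return np.transpose(A, (1, 0, 2))[:, :, order]

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4, 5))
print(np.allclose(t_transpose(A)[:, :, 0], A[:, :, 0].T))  # first slice
print(np.allclose(t_transpose(t_transpose(A)), A))         # involution
```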
Secondly, orthogonal tensors are used in theorem 1, whose definition is given below.
Definition 5 (orthogonal tensor [1]). A tensor is orthogonal if it satisfies , where is the identity tensor, i.e. the tensor whose first frontal slice is the identity matrix and whose other frontal slices are zero matrices.
Finally, we give the definition of the tensor Frobenius norm, which is quite useful for tensor approximation.
Definition 6 (tensor Frobenius norm [1]). The Frobenius norm of a third-order tensor is defined as .
Similar to orthogonal matrices, the orthogonality defined in definition 5 preserves the Frobenius norm of a tensor, i.e. given an orthogonal tensor , we have . Moreover, when the tensor is second-order, definition 5 coincides with the definition of orthogonal matrices.
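The norm-preservation claim can be verified classically. Below, an orthogonal tensor is built per definition 5 by placing a random orthogonal matrix in the first frontal slice and zeros elsewhere; the `t_product` helper recomputes the Fourier-domain product of theorem 4 (both helpers are illustrative):

```python
import numpy as np

def t_product(A, B):
    # t-product via the Fourier-domain equivalence (theorem 4)
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.fft.ifft(np.einsum('ikl,kjl->ijl', Ah, Bh), axis=2).real

rng = np.random.default_rng(5)
n, n3 = 4, 6

# Orthogonal tensor: orthogonal first frontal slice, remaining slices zero.
# Its Fourier slices are all equal to that orthogonal matrix, so the
# t-product with Q preserves every Fourier slice's norm.
Q = np.zeros((n, n, n3))
Q[:, :, 0], _ = np.linalg.qr(rng.standard_normal((n, n)))

A = rng.standard_normal((n, 3, n3))
print(np.allclose(np.linalg.norm(t_product(Q, A)), np.linalg.norm(A)))  # True
```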
Appendix B.: The proof of theorem 2
Before proving theorem 2, we first sketch the proof. According to Assumption 1, each frontal slice A(k) is stored in the binary tree structure; hence, based on the proof of lemma 3 in [31], the states , corresponding to the i-th row of A(k), can be prepared efficiently by the operators P(k). Based on these operators, two new isometries and are constructed in order to perform QSVE on . Moreover, the input of our modified QSVE also differs from that in [31]. The details of the proof are given below.
Proof. Since every , , is stored in the binary tree structure, the quantum computer can perform the following mapping in time, as shown in theorem 5.1 in [31]:
where is the i-th row of .
Define the degenerate operator related to as
That is,
Based on the efficiently implemented operators Pk and , we define another operator
It follows readily that the operator achieves the state preparation of the rows of the matrix , i.e. . Similarly, the isometry corresponding to is , where is the state of the i-th row of . It can easily be shown that is an isometry, since
Since can be implemented in time , can also be implemented in time using the linear combination of unitaries (LCU) technique [39–43]. For a tensor whose dimensions all equal N, the complexity of implementing turns out to be , as analyzed below.
The LCU technique was first proposed by Long in [43] in a more general form, and Shao et al summarize this result in [42]. The LCU problem can be formulated as follows: given and unitary operators Ui , , implement the linear operator . The algorithm stated in [39] implements L in time , where is any given initial state and is the time needed to implement . In our case, , and the input state is chosen as . Thus, . The complexity of implementing is .
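The operator targeted by the LCU problem can be written down classically. The sketch below forms L as a weighted sum of unitaries and applies it to a state; it does not simulate the quantum circuit or its success probability, and all dimensions and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 4, 3

# Random d x d unitaries U_i (QR of complex Gaussian matrices),
# real coefficients alpha_i, and a normalized input state |psi>.
Us = [np.linalg.qr(rng.standard_normal((d, d))
                   + 1j * rng.standard_normal((d, d)))[0] for _ in range(m)]
alpha = rng.standard_normal(m)
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)

# The LCU target: apply L = sum_i alpha_i U_i to |psi>.  A quantum LCU
# circuit prepares a state proportional to this (unnormalized) vector.
L = sum(a * U for a, U in zip(alpha, Us))
L_psi = sum(a * (U @ psi) for a, U in zip(alpha, Us))
print(np.allclose(L @ psi, L_psi))  # True
```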
Next, we define the mapping
where is a vector whose i-th entry is . This operator can be implemented using the technique developed in [44] for preconditioned linear solvers. More specifically, according to the analysis of algorithm 3, we can rewrite the state in (8) as . We then apply to to obtain the state , realizing the mapping . Finally, as with , the corresponding isometry is defined as , and it can be readily shown that .
Now we can perform QSVE on the matrix following procedures similar to those in [31]. First, the factorization can be easily verified. Moreover, we can prove that is a reflection and that it can be implemented through . Indeed,
where is a reflection. A similar result holds for .
Now denote
Let be the singular value decomposition of . We can prove that the subspace spanned by is invariant under the unitary transformation Wm :
The matrix Wm can be expressed in an orthonormal basis obtained by Gram-Schmidt orthogonalization. It is a rotation in the subspace spanned by its eigenvectors with corresponding eigenvalues , where is the rotation angle satisfying , i.e.
Here, we choose the input state to be the Kronecker-product form of the normalized matrix expressed through its SVD, i.e. . Then
Performing phase estimation on Wm with running time for , and computing the estimated singular value of through the oracle , we obtain
We next uncompute the phase estimation procedure and then apply the inverse of to obtain the desired state (12) in theorem 2.
□
Footnotes
* This research is supported in part by Hong Kong Research Grants Council (RGC) grants (No. 15208418, No. 15203619, No. 15506619) and the Shenzhen Fundamental Research Fund, China, under Grant No. JCYJ20190813165207290.