A Grover-search Based Quantum Learning Scheme for Classification

The hybrid quantum-classical learning scheme provides a prominent path toward quantum advantages on near-term quantum devices. A concrete example along this direction is the quantum neural network (QNN), which has been developed to accomplish various supervised learning tasks such as classification and regression. However, two central issues remain obscure when QNNs are exploited for classification tasks. First, a quantum classifier that balances the computational cost, such as the number of measurements, against the learning performance is unexplored. Second, it is unclear whether quantum classifiers can solve certain problems with performance beyond their classical counterparts. Here we devise a Grover-search based quantum learning scheme (GBLS) to address these two issues. Notably, most existing QNN-based quantum classifiers can be seamlessly embedded into the proposed scheme. The key insight behind our proposal is reformulating classification tasks as search problems. Numerical simulations show that GBLS achieves performance comparable to other quantum classifiers under various noise settings, while the required number of measurements is dramatically reduced. We further demonstrate a potential quantum advantage of GBLS over classical classifiers in terms of query complexity. Our work provides guidance for developing advanced quantum classifiers on near-term quantum devices and opens an avenue to explore potential quantum advantages in various classification tasks.


Introduction
The field of machine learning has achieved remarkable success in computer vision, natural language processing, and data mining [1]. Recently, an increasing interest has emerged in the physics community in using machine learning methods to solve complicated physics problems, e.g., classifying phases of matter and simulating quantum systems [2,3,4]. Besides the revolutionary influence of machine learning on the physics world, another emerging field that tightly binds machine learning with physics is quantum machine learning, whose goal is to solve specific tasks beyond the reach of classical computers [5].
To better understand how quantum computing facilitates machine learning tasks, it is desirable to devise quantum algorithms that can solve fundamental machine learning problems with quantum advantages [5]. For example, the quantum linear systems algorithm (a.k.a. the HHL algorithm) enables linear equations to be solved with an exponential speedup over its classical counterparts [6]. By employing the HHL algorithm as a subroutine, many quantum machine learning algorithms with exponential speedups have been proposed, e.g., quantum principal component analysis [7], quantum singular value decomposition [8], quantum non-negative matrix factorization [9], and quantum regression [10]. However, these quantum algorithms with such remarkable quantum advantages can only be executed on a fault-tolerant quantum computer equipped with quantum random access memory [6], which is still a rather distant prospect.
As we approach the noisy intermediate-scale quantum (NISQ) era, it is intriguing to explore whether there exists a quantum algorithm that can not only solve fundamental learning problems with promised quantum advantages but can also be efficiently implemented on near-term quantum devices [11]. One of the most likely candidates is the quantum neural network (QNN), also referred to as a variational quantum algorithm [12,13,14]. Concretely, a QNN is composed of a variational quantum circuit that prepares quantum states and a classical controller that performs the optimization [13,15]. Partial evidence supporting this candidacy is the theoretical result that the probability distribution generated by the variational quantum circuit used in QNN cannot be efficiently simulated by classical computers [16,17,18]. Driven by the strong expressive power of quantum circuits and the similar working philosophy of QNNs and classical deep neural networks (DNNs), it is natural to explore whether QNNs can be realized on near-term quantum computers to accomplish certain machine learning tasks with better performance than classical learning algorithms.
A central application of QNN, analogous to DNN, is tackling classification tasks [1]. Many real-world problems can be cast as classification, e.g., the recognition of hand-written digits, the characterization of different creatures, and the discrimination of quantum states. For binary classification, given a dataset D̂ = {(x_i, y_i)}_{i=1}^{N} with N examples and M features in each example (Eqn. (1)), QNN aims to learn a decision rule f_θ(·) that correctly predicts the labels of the given dataset D̂, i.e.,

min_θ (1/N) Σ_{i=1}^{N} 1_{f_θ(x_i) ≠ y_i},   (2)

where θ refers to the trainable parameters and 1_z is the indicator function that takes the value 1 if the condition z is satisfied and zero otherwise. Recently, QNNs with varied quantum circuit architectures and optimization methods have been proposed to accomplish such classification tasks. In particular, the references [19,20,21] have devised amplitude-encoding based QNNs to classify the Iris dataset and hand-written digit image datasets; the references [22,23,24] have developed kernel-based QNNs to classify synthetic datasets; and the reference [25] has proposed a convolution-based QNN to tackle quantum state discrimination tasks. When no confusion can arise, we use the term quantum classifier in the rest of the study to refer to QNNs that are used to accomplish the classification tasks defined in Eqn. (2).

Figure 1: The state (1/√N) Σ_i |h(x_i)⟩_F |i⟩_I is prepared, where h(·) corresponds to the employed encoding method and the subscripts 'I' and 'F' refer to the index and feature registers, respectively. Given access to |φ(x)⟩, the trainable quantum circuit U_L(θ) is employed to interact with its feature register, subscripted with F.
Despite the promising heuristic results mentioned above, very few studies have theoretically explored the power of quantum classifiers. A noticeable theoretical result about quantum classifiers is the trade-off between the computational cost (i.e., the number of measurements) and the training performance indicated by [13]. Denote L(θ^(t), z) as the loss function employed in quantum classifiers, where θ^(t) refers to the trainable parameters at the t-th iteration and z = {z_j}_{j=1}^{N} is the given dataset with in total N samples. As shown in Figure 1, when the batch gradient descent method is employed to optimize the loss function L, the updating rule of the trainable parameters follows

θ^(t+1) = θ^(t) − (η / |B_i|) Σ_{z_j ∈ B_i} ∇L(θ^(t), z_j),   (3)

where η is the learning rate, B_i refers to the i-th batch with ∪_{i=1}^{B} B_i = z and B_i ∩ B_j = ∅, and B denotes the number of batches. Define

R_1 := E[ min_{t ∈ [T]} ‖∇L(θ^(t), z)‖² ]   (4)

as the utility measure that evaluates the distance between the optimized result and a stationary point of the optimization landscape. The following theorem summarizes the utility bound R_1 of quantum classifiers.
Theorem 1 (Modified from Theorem 1 of [13].) Quantum classifiers under the depolarization noise setting output θ^(T) ∈ R^d after T iterations with the utility bound R_1, where M is the number of measurements used to estimate each quantum expectation value, L_Q is the circuit depth of the variational quantum circuits, p is the rate of the depolarization noise, and B is the number of batches.
The result of Theorem 1 indicates that a larger number of batches B ensures a better utility bound R_1, while the price to pay is an increased total number of measurements. For example, when B = N, we have B_i = z_i for all i ∈ [N], and each sample z_i is sequentially fed into the variational quantum circuits to acquire ∇L(θ, z_i), which estimates ∇L(θ, z) = (1/N) Σ_{i=1}^{N} ∇L(θ, z_i). Suppose that the required number of measurements to estimate the derivative with respect to the j-th parameter θ_j, i.e., ∇_j L(θ, z_i) = ∂L(θ, z_i)/∂θ_j, is M; then the total number of measurements to acquire ∇_j L(θ, z) over all N samples is N M. Therefore, the estimation of ∇L(θ, z), which includes d parameters, requires N M d measurements. Such a cost becomes unaffordable for large N. However, the trade-off between the utility R_1 and the computational efficiency caused by the varied number of batches B was not considered in previous quantum classifiers, most of which focused only on the setting B = N. How to design a quantum classifier that can attain a good utility R_1 with a low computational cost is unknown.
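The measurement accounting above reduces to a one-line product; the helper below is a hypothetical illustration of this N · M · d scaling, not part of the original proposal.

```python
def gradient_measurement_cost(n_samples, shots_per_derivative, n_params):
    """Total measurements for one full gradient estimate when every sample
    is processed separately (the B = N setting): N * M * d."""
    return n_samples * shots_per_derivative * n_params

# With N = 100 samples, M = 10 shots per derivative, and d = 8 parameters,
# a single gradient estimate already costs 8000 measurements.
cost = gradient_measurement_cost(100, 10, 8)
```

Since the product grows linearly in N, any scheme that processes several examples per circuit run lowers this budget proportionally, which is the trade-off explored in the rest of the paper.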
Another theoretical issue with quantum classifiers is that none of the previous results have explored their potential advantages over classical counterparts. This calls into question the necessity of employing quantum classifiers, since no concrete benefit has been established. Given the above observations, it is highly desirable to develop a quantum classifier that can not only achieve a good utility R_1 at a low computational cost, but also possess certain quantum advantages over classical classifiers.
Here we devise a Grover-search based learning scheme (GBLS) to address the above two issues under the NISQ setting. Our proposal has the following advantages. First, GBLS is a flexible and effective learning scheme that enables the optimization of different quantum classifiers with a varied number of batches B. Note that the choice of the encoding method and the variational ansatz used in GBLS is very flexible, covering a wide range of proposed quantum classifiers [20,21,22,23,24]. Moreover, the Grover-search based machinery is only required in the training process; the prediction of a new input uses only the optimized variational quantum circuits, which ensures its efficiency. Second, we prove that the query complexity can be quadratically reduced over its classical counterparts in the optimal setting (see Theorem 2) when GBLS is applied to accomplish specific binary classification tasks. Last, numerical simulation results demonstrate that GBLS accomplishes binary classification tasks well even when system noise and a finite number of quantum measurements are considered (see Section 3). Notably, the required number of measurements of GBLS is dramatically smaller than that of other advanced quantum classifiers [22,23,24], with competitive performance (see Table 1). In other words, GBLS is a powerful protocol that allows quantum classifiers to achieve a good utility bound R_1 at a low computational cost.
The central concept in GBLS is reformulating classification tasks as search problems. Note that although the advantage held by the quantum Grover-search algorithm is evident, how to transform a classification task into a search problem is inconclusive; such a reformulation is the main technical contribution of this study. Recall that Grover search [26] identifies the target element i* in a database of size K by iteratively applying a predefined oracle U_f = I − 2|i*⟩⟨i*| and a diffusion operator U_init = 2|ϕ⟩⟨ϕ| − I with |ϕ⟩ = (1/√K) Σ_i |i⟩ to the input state. GBLS, as shown in Figure 2, employs a specified variational quantum circuit U_{L_1} and a multi-controlled Z (MCZ) gate to replace the oracle U_f. In particular, the variational quantum circuit conditionally flips a flag qubit (i.e., the black dot behind U_{L_1} highlighted by the pink region) depending on the training data. The flag qubit is then employed as part of the MCZ gate to guide a Grover-like search algorithm to identify the index of the specified example; i.e., the status of the flag qubit, '0' or '1', determines the success probability of identifying the target index. Through optimizing the trainable parameters of the variational quantum circuit U_{L_1}, GBLS aims to maximize the success probability of sampling the target index when the corresponding training example is positive; otherwise, GBLS minimizes the success probability of sampling the target index. The property inherited from the Grover-search algorithm allows our proposal to achieve an advantage in terms of query complexity when the binary classification task involves a searching constraint (see Section 2.3 for details). Besides this computational merit, GBLS is insensitive to noise, guaranteed by the fact that combining a variational learning approach with Grover search can preserve a high probability of success in finding the solution under the NISQ setting [27].

Figure 2: The paradigm of GBLS. U defined in Eqn. (9) is composed of unitary operators (i.e., U_data, U_{L_1}, MCZ, and U_init) highlighted by the shadowed yellow region. The last cycle employs the unitary operation U_E defined in Eqn. (10), highlighted by the brown region. The qubits that interact with U_{L_1} (or U_init) form the feature (or index) register R_F (or R_I).
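The oracle-plus-diffusion mechanics recalled above can be reproduced in a few lines of linear algebra. The sketch below is plain NumPy acting on a length-K statevector, not the GBLS circuit itself: it builds U_f = I − 2|i*⟩⟨i*| and U_init = 2|ϕ⟩⟨ϕ| − I explicitly and applies them for roughly (π/4)√K iterations.

```python
import numpy as np

def grover_search(K, target):
    """Textbook Grover search over K items, locating index `target`."""
    # Uniform superposition |phi> = (1/sqrt(K)) * sum_i |i>
    state = np.full(K, 1.0 / np.sqrt(K))

    U_f = np.eye(K)
    U_f[target, target] = -1.0           # oracle: phase flip on |i*>

    phi = np.full(K, 1.0 / np.sqrt(K))
    U_init = 2.0 * np.outer(phi, phi) - np.eye(K)   # diffusion operator

    # ~ (pi/4) * sqrt(K) iterations maximize the success probability
    n_iter = int(np.floor(np.pi / 4 * np.sqrt(K)))
    for _ in range(n_iter):
        state = U_init @ (U_f @ state)
    return np.abs(state) ** 2            # sampling distribution over indices

probs = grover_search(K=64, target=63)
# the target index now dominates the output distribution
```

GBLS replaces the hard-coded U_f with a trainable, data-dependent phase flip, but the amplification loop is the same.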

Grover-search based learning scheme
The outline of this section is as follows. In Subsection 2.1, we first elaborate on the implementation details of the proposed Grover-search based learning scheme (GBLS) as depicted in Figure 2. We then explain how to use the trained GBLS to predict the label of a new input with O(1) query complexity in Subsection 2.2. We last explain how GBLS can solve certain learning problems with potential advantages in Subsection 2.3.

Implementation
In the preprocessing stage, GBLS employs the dataset D̂ defined in Eqn. (1) to construct an extended dataset D. Compared with the original dataset D̂, the cardinality of each training example in D is enlarged to K. For the purpose of applying the Grover-search algorithm to locate the target index i* = K − 1, the construction rule for the k-th extended training example D_k for all k ∈ [N] is as follows. The last pair in D_k corresponds to the k-th example of D̂, i.e., (x_k, y_k) (Eqn. (5)), while the first K − 1 pairs in D_k are uniformly sampled from a subset of D̂ whose labels are all opposite to y_k. Note that the construction of this subset is efficient. Since y_k ∈ {0, 1}, we can construct two subsets D̂^(0) and D̂^(1) that contain only the examples of D̂ with label '0' and label '1', respectively, where D̂^(0) ∪ D̂^(1) = D̂. When y_k = 0, the first K − 1 pairs are sampled from D̂^(1); otherwise, when y_k = 1, the first K − 1 pairs are sampled from D̂^(0).

Figure 3: The circuit implementation of the oracle U in Eqn. (9).
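The extended-example construction above can be sketched in a few lines; `build_extended_example` is a hypothetical helper name and the list-of-pairs representation is our own, but the sampling rule follows the text: the last pair is the k-th example itself and the first K − 1 pairs come from the opposite-label subset.

```python
import random

def build_extended_example(dataset, k, K, seed=None):
    """Construct the extended example D_k from a dataset of (x, y) pairs
    with y in {0, 1}: the target (x_k, y_k) sits at index i* = K - 1, and
    the first K - 1 slots hold uniformly sampled opposite-label examples."""
    rng = random.Random(seed)
    x_k, y_k = dataset[k]
    opposite = [(x, y) for (x, y) in dataset if y != y_k]
    D_k = [rng.choice(opposite) for _ in range(K - 1)]
    D_k.append((x_k, y_k))   # the target pair occupies the last slot
    return D_k

data = [([0.1], 0), ([0.9], 1), ([0.2], 0), ([0.8], 1)]
D_0 = build_extended_example(data, k=0, K=4, seed=7)
```

Rebuilding D_k with fresh samples at every epoch, as the experiments in Section 3 do, just means calling such a routine with a new seed each time.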
As mentioned above, different quantum classifiers exploit different methods to encode D_k into quantum states [28]. For ease of notation, we denote the quantum state corresponding to the k-th example D_k as

|Φ_k⟩_{F,I} = (1/√K) Σ_{i=0}^{K−1} |h(x^(i))⟩_F |i⟩_I,   (6)

where h(·) is an encoding operation (a possible encoding method is discussed in Section 3), x^(i) denotes the i-th pair of D_k, and the subscripts 'F' and 'I' refer to the feature register R_F with N_F qubits and the index register R_I with N_I qubits, respectively.
We now move on to explain the training procedure of GBLS. Recall that the reference [27] points out that combining a variational learning approach with the Grover-search algorithm yields an additional quantum advantage over the conventional Grover algorithm, in that the target solution can be located with a higher success probability. A similar idea is used in GBLS. Namely, the employed variational quantum circuit U_{L_1} aims to learn a hyperplane that separates the last pair in D_k from its first K − 1 pairs. Denote U_{L_1} = ∏_{l=1}^{L} U(θ_l), where each layer U(θ_l) contains O(poly(N_F)) parameterized single-qubit gates and at most O(poly(N_F)) fixed two-qubit gates with identical layouts. In the optimal situation, given the initial state |Φ_k⟩_{F,I} in Eqn. (6), applying U_{L_1} ⊗ I_I to the feature register R_F yields the following target state. (i) If the last pair of the input example D_k has label y_k = 0, the target state is

(U_{L_1} ⊗ I_I) |Φ_k⟩_{F,I} = (1/√K) Σ_{i=0}^{K−1} |ψ_i^(0)⟩_F |i⟩_I,   (7)

where the first qubit of |ψ_{i*}^(0)⟩_F is |0⟩. (ii) Otherwise, when the last pair of the input example D_k has label y_k = 1, the target state is

(U_{L_1} ⊗ I_I) |Φ_k⟩_{F,I} = (1/√K) ( Σ_{i=0}^{K−2} |ψ_i^(0)⟩_F |i⟩_I + |ψ_{i*}^(1)⟩_F |i*⟩_I ),   (8)

where the first qubit of |ψ_{i*}^(1)⟩_F is |1⟩.

Figure 4: The circuit implementation of the oracle U_E in Eqn. (10).
We denote |ψ^(0)⟩ (resp. |ψ^(1)⟩) as a state whose first qubit in the feature register R_F is |0⟩ (resp. |1⟩). As shown in Figure 3, once the state (U_{L_1} ⊗ I_I) |Φ_k⟩_{F,I} is prepared, GBLS iteratively applies the MCZ gate to the index register, controlled by the first qubit of the feature register and the index register, uses U_data and U_{L_1} to uncompute the feature register, and applies the diffusion operator U_init to the index register to complete the first cycle. Denote all quantum operations belonging to one cycle as U, i.e.,

U = U_init · U_data^† · (U_{L_1} ⊗ I_I)^† · MCZ · (U_{L_1} ⊗ I_I) · U_data.   (9)

With a slight abuse of notation, we define U_init = I_F ⊗ (2|ϕ⟩⟨ϕ| − I_I) with |ϕ⟩ = (1/√K) Σ_i |i⟩ in the rest of the paper. GBLS repeatedly applies U to the initial state |0⟩_{F,I}, except for the last cycle, where the applied unitary operations are replaced by

U_E = U_init · MCZ · (U_{L_1} ⊗ I_I) · U_data,   (10)

as highlighted by the brown shadow in Figure 4. Following the conventional Grover search, GBLS queries U and U_E in total O(√K) times before taking quantum measurements. This completes the quantum part of GBLS.
We next analyze how the quantum state evolves for the cases y_k = 0 and y_k = 1, respectively.

For the case y_k = 0, applying U_{L_1} ⊗ I_I to the input state |Φ_k(y_k = 0)⟩_{F,I} in Eqn. (6) transforms this state to (1/√K) Σ_{i=0}^{K−1} |ψ_i^(0)⟩_F |i⟩_I, as described in Eqn. (7). Since the control qubit in the feature register is 0, applying the MCZ gate does not flip the phase of the state. After uncomputing, the resulting state retains a positive phase on all computational bases i ∈ [K − 1], which implies that applying the quantum operation U_init · U_data^† · (U_{L_1} ⊗ I_I)^† does not change the state either (Eqn. (11)). In other words, when we measure the index register of the output state, the probability of sampling the computational basis i with i ∈ [K − 1] is uniformly distributed.

For the case y_k = 1, the input state |Φ_k(y_k = 1)⟩_{F,I} in Eqn. (6) is transformed to (1/√K)(Σ_{i=0}^{K−2} |ψ_i^(0)⟩_F |i⟩_I + |ψ_{i*}^(1)⟩_F |i*⟩_I) after interacting with the unitary U_{L_1} ⊗ I_I, as described in Eqn. (8). With the control qubit in the feature register being 1, this state evolves as the Grover-search algorithm does by iteratively applying MCZ, the uncomputation operation U_data^† · (U_{L_1} ⊗ I)^†, and U_init. Mathematically, the state after interacting with MCZ acquires a phase flip on the branch |ψ_{i*}^(1)⟩_F |i*⟩_I (Eqn. (12)), where |i*⟩_I refers to the computational basis |K − 1⟩. Analogous to U_f in Grover search, the trainable and data-driven oracle Û_f used above conditionally flips the phase of the state |i*⟩. Next, the uncomputing operation U_data^† · (U_{L_1} ⊗ I)^† and the diffusion operator U_init are employed to increase the probability of |i*⟩_I. Mathematically, the generated state after the first cycle yields

U |0⟩_{F,I} = |0⟩_F ⊗ (cos(3γ) |B⟩_I + sin(3γ) |i*⟩_I),   (13)

where U is defined in Eqn. (9), sin γ = 1/√K, and |B⟩_I = (1/√(K−1)) Σ_{i ≠ i*} |i⟩_I. The probability of sampling i* is increased to sin² 3γ, in accordance with the Grover-search algorithm. This observation leads to the following theorem, whose proof is given in Appendix A.
Theorem 2 For GBLS, under the optimal setting, the probability of sampling the outcome i * = K − 1 approaches 1 asymptotically iff the label of the last entry of D k is y k = 1.
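The Grover-type amplitude growth underlying Theorem 2 is easy to check numerically: with sin γ = 1/√K, the target amplitude after ℓ cycles is sin((2ℓ+1)γ), so the success probability climbs from 1/K toward 1. The sketch below evaluates the standard closed-form analysis rather than simulating the GBLS circuit.

```python
import numpy as np

K = 64
gamma = np.arcsin(1.0 / np.sqrt(K))   # initial target amplitude is sin(gamma)

def target_probability(n_cycles):
    """Probability of sampling the target index after n_cycles Grover cycles."""
    return np.sin((2 * n_cycles + 1) * gamma) ** 2

p_before = target_probability(0)      # exactly 1/K before amplification
p_first = target_probability(1)       # sin^2(3*gamma) after the first cycle
p_opt = target_probability(int(np.floor(np.pi / 4 * np.sqrt(K))))
```

In the y_k = 0 branch of Theorem 2 no phase flip occurs, so the distribution stays flat at 1/K; the two branches are what the loss function discriminates.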
We leverage this particular property of GBLS, in which the output distribution varies with the label of the input D_k as shown in Theorem 2, to accomplish the binary classification task. Concisely, the output state of GBLS, i.e., U_E U^{O(√K)} |0⟩_{F,I}, corresponding to y_k = 1 will contain the computational basis i* = K − 1 with probability near 1. By contrast, the output state corresponding to y_k = 0 will contain all computational bases i ∈ [K − 1] with equal probability. Driven by this observation and the mechanism of the Grover-search algorithm, the loss function L(θ) of GBLS is built from the probability of sampling i* together with the measured state of the first feature qubit (Eqn. (14)), where sign(·) is the sign function, and U(θ) is defined in Eqn. (9) (for clarity, we use the explicit form U(θ) instead of U). Intuitively, the minimized L(θ) corresponds to the fact that when y_k = 1 (y_k = 0), the success probability of sampling i* while obtaining the first feature qubit as '1' ('0') is maximized (minimized). GBLS employs a gradient-based method, i.e., the parameter shift rule [22], to optimize θ. See Appendix B for details.
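The parameter-shift rule mentioned above obtains each gradient component from two extra circuit evaluations. A minimal single-parameter sketch, using the analytic expectation ⟨Z⟩ = cos θ of an RY rotation as a stand-in for a measured expectation value:

```python
import numpy as np

def expval(theta):
    """Analytic expectation <0| RY(theta)^dag Z RY(theta) |0> = cos(theta),
    standing in for a value estimated from measurements."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule for gates generated by a Pauli operator:
    df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin(s)); with s = pi/2
    this is the familiar (f(theta + pi/2) - f(theta - pi/2)) / 2 form."""
    return (f(theta + shift) - f(theta - shift)) / (2 * np.sin(shift))

theta = 0.3
grad = parameter_shift_grad(expval, theta)
# the rule is exact here: the derivative of cos(theta) is -sin(theta)
```

Unlike finite differences, the rule is exact for expectation values that are sinusoidal in the parameter, which is why it tolerates shot noise well.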
We would like to stress that GBLS can be used to conduct both linear and nonlinear classification tasks, depending on the specified quantum classifiers. For example, when GBLS adopts the proposals of [23,24] to implement U_data and U_{L_1}, it is capable of classifying nonlinear data.

Prediction
Once the training of GBLS has finished, the trained U_{L_1} can be directly employed to predict the labels of future instances with O(1) query complexity; the corresponding circuit implementation is shown in Figure 5. To achieve this, we devise the following prediction method. Denote the new input as (x̃, ỹ). We first encode x̃ into a quantum state with the identical encoding method used in the training procedure, i.e., |ψ⟩_F = |h(x̃)⟩. Applying the trained U_{L_1} to |ψ⟩_F yields

U_{L_1} |ψ⟩_F = α |0⟩ |φ_0⟩ + β |1⟩ |φ_1⟩,   (15)

where the first ket refers to the first feature qubit, |φ_0⟩ and |φ_1⟩ are normalized states of the remaining qubits, and |α|² + |β|² = 1.

Figure 5: The circuit implementation of GBLS for prediction. The same encoding method used in the training process is adopted to prepare the state |h(x̃)⟩. The trained variational quantum circuit U(θ^(T)) is applied to |h(x̃)⟩ before the measurement.
Denote the probability of obtaining the outcome '1' after measuring the first feature qubit of the state in Eqn. (15) as p_1 = |β|², and let the threshold be 1/2. The new input x̃ is identified as label '0' if p_1 < 1/2; otherwise, it is assigned label '1'.
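The prediction rule amounts to a threshold on p_1 = |β|²; a minimal sketch (the amplitude values below are purely illustrative):

```python
def predict_from_amplitudes(alpha, beta, threshold=0.5):
    """Label a new input from the state alpha|0> + beta|1> of the first
    feature qubit: p1 = |beta|^2, label '1' iff p1 >= threshold."""
    p1 = abs(beta) ** 2
    return 1 if p1 >= threshold else 0

# |beta|^2 = 0.64 -> label '1';  |beta|^2 = 0.36 -> label '0'
label_pos = predict_from_amplitudes(0.6, 0.8)
label_neg = predict_from_amplitudes(0.8, 0.6)
```

On hardware p_1 would be estimated from repeated single-shot measurements, but the thresholding logic is the same.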

Potential advantage of GBLS
Here we design a binary classification task to explore the potential advantage of GBLS in terms of query complexity. Consider a classification task that requires not only finding a decision rule as in Eqn. (2) but also outputting an index j that satisfies a pre-determined black-box function. Note that the identification of a target index is a common functionality in the context of database searching in medical systems, finance, and online shopping. For example, given a medical database, it is natural to expect that the trained classifier can predict whether a patient is ill or healthy based on her/his symptoms, and can also identify a healthy patient with additional properties, e.g., that the gender of the patient is female, which can be modeled by a black-box function.
The mathematical formulation of this classification task is as follows. Given the data D_k in Eqn. (5) and denoting the black box as q(·), the task is to output an index j with q(j) = 1 (Eqn. (16)), where q(·) is a boolean function with the input set {j : ∀y_j ∈ D_k, y_j = 1}. Taking GBLS implemented in the previous subsections as an example, q(·) has the following form: for all j ∈ {0, ..., K − 1},

q(j) = 1 if j = K − 1, and q(j) = 0 otherwise.   (17)

Furthermore, q(·) can be implemented by the MCZ gate, which conditionally flips the phase of the computational basis corresponding to j* := K − 1 if the state is |ψ^(1)⟩ given in Eqn. (8). In this way, the Grover-like search structure used in GBLS promises that the probability of sampling j* is maximized. We remark that GBLS can be effectively generalized to implement other forms of q(·) by modifying the MCZ gate. When the size of the dataset loaded by GBLS is K, a well-trained GBLS can locate the target index with O(√K) query complexity, guaranteed by the result of Theorem 2. However, given access to a well-trained classifier f_θ(·), both classical algorithms and previous quantum classifiers need at least O(K) queries to find j*. The reduced query complexity of GBLS implies a potential quantum advantage in accomplishing classification tasks.
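The claimed separation is concrete even for modest K: a classical scan over the extended example needs up to K queries to locate j*, while the Grover-like search uses on the order of √K. The constant π/4 below is the standard Grover iteration count, used here only for illustration.

```python
import math

def classical_queries(K):
    """Worst-case queries for a linear scan over K indices."""
    return K

def grover_queries(K):
    """Approximate oracle calls for amplitude amplification over K indices."""
    return math.floor(math.pi / 4 * math.sqrt(K))

# K = 10_000: 10_000 classical queries vs ~78 Grover-like queries
n_classical = classical_queries(10_000)
n_grover = grover_queries(10_000)
```

The gap widens as √K, which is why the advantage is framed asymptotically rather than for the K = 4 used in the experiments.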

Numerical Experiments
We now apply GBLS to classify a nonlinear synthetic dataset D̂ to evaluate its performance. The construction of D̂ follows the proposal of [23]. Consider a synthetic data point ω ∈ (0, 2π)². Let g(·) be a specific embedding function with |g(ω)⟩ the corresponding quantum state. The label of x_i is assigned according to the expectation value ⟨g(ω)| V^† Π V |g(ω)⟩ relative to the decision threshold, where V ∈ SU(4) is a unitary operator, Π = I ⊗ |0⟩⟨0| is the measurement operator, and the gap ∆ is set as 0.2. We illustrate the synthetic dataset D̂ in the left panel of Figure 6.
At the data preprocessing stage, we split the dataset D̂ into the training dataset D̂_train with size N_train = 100 and the test dataset D̂_test with size N_test = 100. In the training process, we follow the construction rule of GBLS to build the extended training dataset D_train from D̂_train. We set K = 4 in the following analysis, where each training example D_k ∈ D_train can be encoded into a quantum state using four qubits with N_I = N_F = 2 (see Appendix C for the detailed implementation of GBLS). Note that, at each epoch, we shuffle D̂_train and rebuild the extended dataset D_train. An epoch means that the entire dataset is passed forward through the quantum learning model; e.g., when the dataset contains 1000 training examples and only two examples are fed into the quantum learning model at a time, it takes 500 iterations to complete one epoch.
The numerical simulations are implemented in Python in conjunction with the PennyLane, Qiskit, and pyQuil libraries [29,30,31]. The hyper-parameter settings used in our experiments are as follows. The block U_E in Figure 4 is employed once for the case K = 4, in accordance with the O(√K) scaling of Grover search. The number of layers of the variational quantum circuit, i.e., U_{L_1} = ∏_{l=1}^{L} U(θ_l), is set as L = 2. The number of epochs used in the classical optimization is 20. For comparison, we also apply the quantum kernel classifier proposed by [23,24] with two different loss functions, i.e., the mean squared error (MSE) loss and the binary cross entropy (BCE) loss, to learn the synthetic dataset D̂. We select the quantum kernel classifier as the baseline because this method has achieved state-of-the-art performance in classifying nonlinear data [23].
Ideal setting. We first evaluate the performance of the different quantum classifiers under the ideal setting, where the quantum system is noiseless and the number of measurements is infinite. The right panel of Figure 6 illustrates the averaged training and test accuracies versus the number of epochs. In particular, our proposal achieves performance comparable to the quantum kernel classifier with the BCE loss, where both the training and test accuracies converge to 99% within 2 epochs. Moreover, these two methods outperform the quantum kernel classifier with the MSE loss (B = N), whose test accuracy only reaches 95% after 10 epochs. The variance of these three quantum classifiers becomes small after 10 epochs, which implies that all of them exhibit stable performance under the ideal setting.

Depolarization noise setting. We next investigate the performance of GBLS and the referenced quantum kernel classifiers under a realistic setting, where quantum system noise is considered and the number of measurements is finite. Specifically, we employ the depolarization channel to model the system noise; i.e., given a quantum state ρ ∈ C^{d×d}, the quantum depolarization channel E_p acting on this state is defined as

E_p(ρ) = (1 − p) ρ + p π_d,

where p is the depolarization rate, and π_d = I_d / d is the maximally mixed state. Meanwhile, to explore the trade-off between the computational cost (i.e., the total number of measurements) and the utility R_1 indicated by Theorem 1, we also compare GBLS with a modified quantum kernel classifier with the MSE loss, which supports batch gradient descent with B = N/4 to optimize the parameters (please refer to Appendix C for implementation details). Table 1 summarizes the basic information about GBLS and the referenced quantum classifiers. See Appendix D for the derivation of the required number of measurements for GBLS and the quantum kernel classifier with the BCE loss.
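The depolarizing channel used in the simulations is straightforward to apply to a density matrix; a minimal NumPy sketch of E_p(ρ) = (1 − p)ρ + p I/d:

```python
import numpy as np

def depolarize(rho, p):
    """Depolarizing channel E_p(rho) = (1 - p) * rho + p * I/d, where d is
    the dimension of the state; trace-preserving for 0 <= p <= 1."""
    d = rho.shape[0]
    return (1.0 - p) * rho + p * np.eye(d) / d

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state |0><0|
noisy = depolarize(rho, p=0.25)
# trace is preserved; the |0><0| population shrinks from 1 to 0.875
```

At p = 1 the output is the maximally mixed state regardless of the input, which is why large p degrades the trainability discussed in Theorem 1.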

The hyper-parameter settings applied to GBLS and the other quantum classifiers are as follows. The depolarization rate is set as p = 0.05 and p = 0.25, respectively. The number of measurements is set as 10 to approximate each quantum expectation value. The parameter shift rule is used to estimate the analytic gradients [22,32]. For each classifier, we repeat the numerical simulations five times to collect statistical information. See Appendix C for other settings such as learning rates and random seeds.
The simulation results of GBLS and the referenced quantum classifiers are illustrated in Figure 7. Specifically, when p = 0.05, GBLS and the other three referenced quantum classifiers achieve comparable performance after 10 epochs. Moreover, the quantum kernel classifier with the MSE loss (B = N/4) possesses a slower convergence rate and a larger variance than the other three classifiers. When p = 0.25, there exists a relatively large gap between the quantum kernel classifier with the MSE_batch method and the other three quantum classifiers in terms of convergence rate. Such a difference reflects the importance of using GBLS to investigate classification tasks under a varied number of batches. We summarize the averaged training and test accuracies of GBLS and the other quantum classifiers at the last epoch in Table 2. Even though measurement error and quantum gate noise are considered, GBLS still attains stable performance, since its variance is very small (i.e., at most 0.04). This observation suggests the applicability of our proposal on NISQ machines.
We would like to emphasize the main issue considered in this study: whether there exists a quantum classifier that can attain a good utility bound R_1 by using a small number of measurements. The numerical simulation results of GBLS provide a positive response to this issue. Recall the settings given in Table 1 and Figure 7. Although the required number of measurements for GBLS is reduced by K = 4 times compared with the quantum classifiers with the BCE loss and the MSE loss (B = N), they achieve comparable performance. This result implies a huge separation in computational efficiency between GBLS and previous quantum classifiers with B = N when N is large.

Table 2: The value 'a ± b' denotes that the averaged accuracy is a and its variance is b.

Figure 8: The noise model, which is extracted from real quantum hardware, is applied to the trainable unitary U_L(θ) of these three classifiers.
Noise model from real quantum hardware. We further compare the performance of GBLS and the referenced quantum classifiers under a noise model extracted from real quantum hardware, i.e., IBMQ_ourense, provided by the Qiskit and PennyLane Python libraries [29,30]. Notably, for all classifiers, the gate noise is only imposed on the trainable quantum circuit U_L rather than the whole circuit, since the implementation of the multi-controlled gates (e.g., CCZ) used in GBLS would introduce a large amount of noise and destroy the optimization of GBLS (see Appendix C for details). Meanwhile, the measurement noise is applied to all quantum classifiers. Due to the relatively poor performance of the quantum kernel classifier with the MSE loss and B = N/4, here we only compare GBLS with the quantum kernel classifiers with the BCE loss and the MSE loss (B = N). Note that all hyper-parameter settings are identical to those used in the above numerical simulations.
The simulation results are exhibited in Figure 8. Specifically, the three classifiers achieve comparable performance. Such results indicate the efficacy of GBLS, since the required number of measurements for GBLS is reduced by four times compared with the other two quantum classifiers.

Discussion and Conclusion
In this study, we have proposed a Grover-search based learning scheme for classification. Different from previous proposals, GBLS supports the optimization of a wide range of quantum classifiers with a varied number of batches. This property allows us to explore the trade-off between the computational efficiency and the utility bound R_1. Moreover, we demonstrate that GBLS possesses a potential advantage for tackling certain classification tasks in terms of query complexity. Numerical experiments show that GBLS achieves performance comparable to other advanced quantum classifiers while using fewer measurements. We believe that our work will provide immediate and practical applications for near-term quantum devices.

Appendix A. Proof of Theorem 1
Proof of Theorem 1. To prove Theorem 1, we separately discuss the situations in which the label of the last entry in D_k is y_k = 1 and y_k = 0, respectively.

The case y_k = 1. Suppose that the label of the last entry in D_k is y_k = 1. Following Eqn. (13), after the first cycle, the generated state of GBLS is

U_1 |0⟩_{F,I} = |0⟩_F ⊗ (cos(3γ) |B⟩_I + sin(3γ) |i*⟩_I),

where sin γ = 1/√K. This result indicates that the probability to sample the target index i* is increased from sin²(γ) to sin²(3γ), which is the same as in Grover-search.
Then, by induction, as in the proof of Grover-search [33], applying U to |0⟩_{F,I} ℓ times yields

∏_{i=1}^{ℓ} U_i |0⟩_{F,I} = |0⟩_F ⊗ (cos((2ℓ+1)γ) |B⟩_I + sin((2ℓ+1)γ) |i*⟩_I). (A.1)

Note that GBLS requires that the quantum operation employed at the last cycle is U_E as defined in Eqn. (10) instead of U. Mathematically, the generated state is

U_E ∏_{i=1}^{ℓ} U_i |0⟩_{F,I} = |0⟩_F ⊗ (cos((2ℓ+3)γ) |B⟩_I + sin((2ℓ+3)γ) |i*⟩_I), (A.2)

where Eqn. (A.1) is used first; Eqn. (13) is then exploited to engineer the feature register; MCZ flips the phase of the state |i*⟩ whose first qubit in the feature register is |1⟩; and the final form follows from applying the diffusion operator U_init = I_F ⊗ (2|ϕ⟩⟨ϕ| − I_I) with |ϕ⟩ = (1/√K) Σ_i |i⟩ to the index register.
The result of Eqn. (A.2) indicates that, under the optimal setting, the probability to sample i* is close to 1 when ℓ ∼ O(√K), since sin γ ≈ γ = 1/√K and then sin((2ℓ+3)γ) ≈ 1.

The case y_k = 0. We next demonstrate that, when the label of the last entry in D_k is y_k = 0, even if we apply U ℓ times followed by U_E to |0⟩_{F,I} with ℓ ∼ O(√K), the probability to sample i* remains 1/K. Following Eqn. (11), after the first cycle, the generated state of GBLS is

U_1 |0⟩_{F,I} = |0⟩_F ⊗ (1/√K) Σ_{i=1}^{K} |i⟩_I,

where sin γ = 1/√K. Since U_{c_1} |Φ_k(y_k = 0)⟩_{F,I} = U |0⟩_{F,I}, after applying U to the state |0⟩, the probability to sample any index is identical. By induction, applying U to the state |0⟩_{F,I} ℓ times yields

∏_{i=1}^{ℓ} U_i |0⟩_{F,I} = |0⟩_F ⊗ (1/√K) Σ_{i=1}^{K} |i⟩_I, (A.3)

so that, given any positive integer ℓ, the probability to sample |i*⟩_I is 1/K. As in the case y_k = 1, at the last cycle we apply the unitary U_E to the state ∏_{i=1}^{ℓ} U_i |0⟩_{F,I}, and the generated state is

U_E ∏_{i=1}^{ℓ} U_i |0⟩_{F,I} = |0⟩_F ⊗ (1/√K) Σ_{i=1}^{K} |i⟩_I, (A.4)

where the explicit form of U_E and Eqn. (A.3) are used first; Eqn. (12) then guarantees the second step (the only difference being the replacement of |ψ_{i*}⟩_F under the setting y_k = 0); and the last step exploits the explicit form of U_init.
The result of Eqn. (A.4) reflects that, under the optimal setting, the probability to sample i* can never be increased when y_k = 0. Therefore, we conclude that, under the optimal setting, the probability of sampling the outcome i* approaches 1 asymptotically if and only if the label of the last entry of D_k is y_k = 1.
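The amplitude-amplification behavior underlying this proof can be checked numerically. The following sketch simulates the standard Grover iteration (phase oracle plus diffusion) on a K-dimensional index register and verifies that, after ℓ iterations, the success probability matches the closed form sin²((2ℓ+1)γ) with γ = arcsin(1/√K); the function names are illustrative, not part of the paper.

```python
import numpy as np

def grover_probability(K, num_iters):
    """Simulate Grover iterations on a K-dimensional index register with a
    single marked index; return the probability of sampling the marked index."""
    state = np.full(K, 1.0 / np.sqrt(K))   # uniform superposition over K indices
    marked = 0
    for _ in range(num_iters):
        state[marked] *= -1.0              # oracle: phase flip on |i*>
        state = 2.0 * state.mean() - state # diffusion: 2|phi><phi| - I
    return state[marked] ** 2

K = 1024
gamma = np.arcsin(1.0 / np.sqrt(K))
ell = int(np.floor(np.pi / (4.0 * gamma)))  # ~ O(sqrt(K)) iterations

# The closed form sin^2((2l+1)gamma) agrees with the simulation.
assert abs(grover_probability(K, ell) - np.sin((2 * ell + 1) * gamma) ** 2) < 1e-9
print(grover_probability(K, ell))           # close to 1
```

With zero iterations the success probability stays at 1/K, mirroring the y_k = 0 case where no amplification occurs.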

Appendix B. Variational quantum circuits and the optimizing method
In this section, we first introduce the variational quantum circuit U_{L_1}(θ) used in GBLS. We then elaborate on the optimization method, i.e., the parameter shift rule, that is employed to train U_{L_1}(θ).
Variational quantum circuits, also called parameterized quantum circuits, are composed of trainable single-qubit gates and two-qubit gates (e.g., CNOT or CZ). As a promising scheme for NISQ devices, variational quantum circuits have been extensively investigated for accomplishing generative and discriminative tasks [34,35,15,20,36] via variational hybrid quantum-classical algorithms [37]. One typical variational quantum circuit is the multiple-layer parameterized quantum circuit (MPQC), where the arrangement of quantum gates in each layer is identical [34]. Denote the operation formed by the l-th layer as U(θ_l). The generated quantum state from MPQC yields

|ψ(θ)⟩ = ∏_{l=1}^{L} U(θ_l) |0⟩,

where L is the total number of layers. GBLS employs MPQC to construct U_{L_1}, i.e.,

U_{L_1}(θ) = ∏_{l=1}^{L} U(θ_l), (B.1)

and the circuit arrangement for the l-th layer U(θ_l) is shown in Figure B1. When the number of layers is L, the total number of trainable parameters for GBLS is 2N_F L.

Figure B1: The implementation of the l-th layer U(θ_l). Suppose that the l-th layer U(θ_l) interacts with N_F qubits. Three trainable parameterized gates, R_Z, R_Y, and R_Z, are first applied to each qubit, followed by N_F − 1 CNOT gates.
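Since the parameter shift rule named in this appendix is the optimizer used to train such trainable rotations, a minimal single-qubit sketch can make it concrete. The toy circuit R_Y(θ)|0⟩ and the measurement operator Π = |0⟩⟨0| are illustrative assumptions; the expectation is cos²(θ/2), whose exact derivative the shifted-evaluation rule reproduces.

```python
import numpy as np

def expectation(theta):
    """Tr(Pi rho(theta)) for the toy one-qubit circuit R_Y(theta)|0>
    with Pi = |0><0|; analytically this equals cos^2(theta / 2)."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2

def parameter_shift_grad(theta):
    # Parameter shift rule: evaluate at theta +/- pi/2 and halve the difference.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
analytic = -0.5 * np.sin(theta)            # d/dtheta cos^2(theta/2)
assert abs(parameter_shift_grad(theta) - analytic) < 1e-12
```

For a multi-parameter circuit such as MPQC, the same two shifted evaluations are repeated for each entry θ_j while the other parameters are held fixed.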
The updating rule of GBLS at the k-th iteration follows

θ^{(k+1)} = θ^{(k)} − η ∂L(θ^{(k)}, D_k)/∂θ,

where η is the learning rate and D_k is the k-th training example. By expanding the explicit form of L(θ^{(k)}, D_k) given in Eqn. (14), the gradient of L(θ^{(k)}, D_k) can be rewritten in terms of ∂Tr(Πρ(θ^{(k)}))/∂θ and a coefficient determined by y_k and the sign function sign(·), where y_k refers to the label of the last entry in D_k and Π is the measurement operator. GBLS adopts the parameter shift rule proposed by [22] to attain the gradient ∂Tr(Πρ(θ^{(k)}))/∂θ. Concisely, the parameter shift rule iteratively computes each entry of the gradient. Without loss of generality, here we explain how to compute ∂Tr(Πρ(θ^{(k)}))/∂θ_j. Define

θ^± = θ^{(k)} ± (π/2) e_j, (B.4)

where only the j-th parameter is rotated by ±π/2. Then the mathematical representation of the gradient for the j-th entry is

∂Tr(Πρ(θ^{(k)}))/∂θ_j = (1/2) [Tr(Πρ(θ^+)) − Tr(Πρ(θ^−))].

Appendix C. More details of the numerical simulations

In this section, we provide more details about the numerical simulations. Specifically, we first explain how to construct the employed synthetic dataset. We then elaborate on the implementation of GBLS and the referenced classifiers, and their hyper-parameter settings. We next analyze the required circuit depth to implement these quantum classifiers. Last, we introduce the construction of the modified dataset used in the MSE_batch method.
The construction of the synthetic dataset. Given a training example x_i, the quantum state that is used to encode x_i is formulated as |g(x_i)⟩, (C.1) where g(·) is a specified mapping function. The above formulation implies that g(x_i) can be converted to a sequence of quantum operations, whose implementation is illustrated in the upper left panel of Figure B2. To simultaneously encode multiple training examples into quantum states, we implement g(x_i) as a controlled version, as shown in the upper right panel of Figure B2.
The details of GBLS, the referenced classifiers, and the hyper-parameter settings. The implementation of GBLS is shown in the lower panel of Figure B2. In particular, the data encoding unitary U_data is composed of a set of controlled-g(x_i) quantum operations. The MPQC introduced in Appendix B is employed to build U_{L_1}(θ), where each layer U(θ_l) is composed of R_Y gates and CZ gates and the number of layers is L = 2.
The basic components of the referenced quantum classifiers are identical to those used in GBLS. In particular, for all employed quantum kernel classifiers, the implementation of the variational quantum circuit U_{L_1}(θ) is the same as in GBLS, where the number of layers is L = 2 and each layer is composed of R_Y gates and CZ gates, as shown in Figure B2. The implementation of the encoding unitary U_data depends on the batch size B. For the quantum kernel classifiers with the BCE loss and the MSE loss (B = N), following Eqn. (C.1), the encoding unitary is given in Eqn. (C.2). For the quantum kernel classifier with the MSE loss (B = N/4), the implementation of the encoding unitary U_data is the same as in GBLS, as shown in Figure B2. The detailed hyper-parameter settings for GBLS and the referenced classifiers are as follows. The learning rate for GBLS, the quantum kernel classifier with the BCE loss, and the quantum kernel classifiers with the MSE loss (B = N and B = N/4) is identical, set as η = 1.0. Moreover, when we explore the statistical performance of different quantum classifiers under the noise setting, the random seeds are set as {i}_{i=1}^{R}, with R being the total number of repetitions.
The analysis of the quantum circuit depth. Here we analyze the required circuit depth to implement the quantum kernel classifiers used in the numerical simulations. As explained in the above subsection, the quantum kernel classifiers with B = N can be efficiently realized, since the data encoding unitary U_data and the variational quantum circuit only involve single- and two-qubit gates. In particular, the circuit depth to construct the unitary U_data in Eqn. (C.2) is 1. Moreover, the circuit depth to construct U_{L_1}(θ) as shown in Figure B2 is 4. In total, when the number of batches B equals N, the required depth for the quantum kernel classifier with the BCE or MSE loss is 5.
Compared with the setting B = N, the implementation of the quantum kernel classifier with B = N/4 and of GBLS requires relatively deep circuits. The essential reason is that the fabrication of the data encoding unitary U_data involves multi-controlled gates, as shown in Figure B2 (highlighted by the brown region). Specifically, when we decompose the CC-R_Y gate into single-qubit and two-qubit gates, the required circuit depth is 27. Therefore, following Figure B2, the circuit depth to implement U_data is 113. Considering that the circuit depth to implement U_{L_1} is 4, the total circuit depth to implement the quantum kernel classifier with B = N/4 is 117. As shown in Figure B2, the quantum circuit in GBLS is composed of U_data, U_{L_1}, and U_init. The implementation of U_data and U_{L_1} is identical to that of the quantum kernel classifier with B = N/4. Moreover, based on the Grover-search algorithm, the circuit depth to implement U_init is 15, which includes 4 Hadamard gates and 1 CCZ gate. Therefore, the total circuit depth to implement GBLS is 132.
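The depth bookkeeping above can be summarized in a few lines; the constants below are taken directly from this subsection, and the variable names are illustrative.

```python
# Sanity check of the circuit-depth accounting in this subsection.
DEPTH_U_DATA_SUPERPOSED = 113  # multi-controlled encoding (each CC-R_Y has depth 27)
DEPTH_U_DATA_SIMPLE = 1        # B = N encoding, single-qubit gates only
DEPTH_U_L1 = 4                 # two layers of R_Y and CZ gates
DEPTH_U_INIT = 15              # 4 Hadamard gates and 1 decomposed CCZ gate

kernel_B_N = DEPTH_U_DATA_SIMPLE + DEPTH_U_L1                       # BCE / MSE, B = N
kernel_B_N4 = DEPTH_U_DATA_SUPERPOSED + DEPTH_U_L1                  # MSE, B = N/4
gbls = DEPTH_U_DATA_SUPERPOSED + DEPTH_U_L1 + DEPTH_U_INIT          # full GBLS circuit

assert (kernel_B_N, kernel_B_N4, gbls) == (5, 117, 132)
```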
We remark that the circuit depth of the quantum kernel classifier with B = N/4 and of GBLS is dominated by the implementation of U_data, which exploits multi-controlled gates to load different training examples in superposition. Such an observation implies that efficient encoding methods can dramatically reduce the required circuit depth to construct these quantum classifiers. A possible solution is proposed by [38], which constructs a target multi-qubit gate by optimizing a variational quantum circuit consisting of tunable single-qubit gates and fixed two-qubit gates.
The modified training dataset for the MSE_batch method. We note that naively employing the original training dataset D̃ to optimize the quantum kernel classifier with the MSE_batch loss is infeasible. Let us illustrate this with a simple example. Suppose the input state is (1/√2) Σ_{i=1}^{2} |g(x^{(i)})⟩_F |i⟩_I with batch size 2, where the subscript 'I' ('F') refers to the index (feature) register. When the trainable quantum circuit U_{L_1}(θ) ⊗ I_I and the measurement operator are applied to this state, the output corresponds to the averaged prediction of the examples {x^{(i)}}_{i=1}^{2}. Such a setting is ill-posed once the labels of x^{(1)} and x^{(2)} are opposite, e.g., the former is 0 and the latter is 1, since a wrong prediction (the former is 1 and the latter is 0) also leads to the averaged truth label 0.5.
To overcome the above issue, we build a modified dataset instead of D̃ to optimize the quantum kernel classifier with the MSE_batch loss. Specifically, we shuffle the given dataset D̃ and ensure that, in the modified dataset, the training examples in each batch B_i for all i ∈ [B] possess the same label. In doing so, the averaged truth label is either 0 or 1 without any ambiguity.
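The shuffling-and-regrouping step can be sketched as follows. The representation of examples as (features, label) pairs and the helper name are assumptions for illustration.

```python
import random

def label_homogeneous_batches(dataset, batch_size, seed=0):
    """Shuffle the dataset, then form batches whose examples all share one
    label, so the batch-averaged truth label is unambiguously 0 or 1."""
    rng = random.Random(seed)
    shuffled = dataset[:]
    rng.shuffle(shuffled)
    batches = []
    for label in (0, 1):
        same = [ex for ex in shuffled if ex[1] == label]
        for j in range(0, len(same), batch_size):
            batches.append(same[j:j + batch_size])
    return batches

# Toy dataset: (features, label) pairs.
data = [([0.1], 0), ([0.9], 1), ([0.2], 0), ([0.8], 1)]
for batch in label_homogeneous_batches(data, batch_size=2):
    labels = {y for _, y in batch}
    assert len(labels) == 1   # every batch is label-pure
```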
Appendix D. The computational complexity of GBLS and the quantum kernel classifier with the BCE loss

We now separately derive the required number of measurements, or equivalently, the computational complexity, of GBLS and of the quantum kernel classifier with the BCE loss at each epoch. For both methods, the hyper-parameter settings are supposed to be identical, i.e., the size of the dataset D̃ is N, the number of layers of the MPQC U_{L_1} is L, the number of qubits to load data features is N_F, the total number of trainable parameters θ is N_F L, and the number of measurements applied to estimate each quantum expectation value is M.
We say one query is made when the variational quantum circuit used in the quantum classifier takes the encoded data and is then measured by the measurement operator once. Following the training mechanism of the quantum classifier, its query complexity amounts to the total number of measurements of the variational quantum circuit required to acquire the gradients in one epoch.
We now derive the required number of measurements of the quantum kernel classifier with the BCE loss in one epoch. Given the dataset D̃, the binary cross entropy loss yields

L_BCE(θ) = −(1/N) Σ_{i=1}^{N} [ y_i log(p(y_i)) + (1 − y_i) log(1 − p(y_i)) ], (D.1)

where y_i is the label of the i-th example and p(y_i) is the predicted probability of the label y_i, or equivalently, the output of the quantum circuit used in the quantum kernel classifier, p(y_i) = Tr(Πρ(θ)), where ρ(θ) = U_{L_1}(θ) |g(x_i)⟩⟨g(x_i)| U_{L_1}(θ)^†, U_{L_1}(θ) refers to the variational quantum circuit defined in Eqn. (B.1), |g(x_i)⟩ represents the encoded quantum state defined in Eqn. (C.1), and Π is the measurement operator. Following the parameter shift rule, the derivative of the BCE loss satisfies

∂L_BCE/∂θ_j = (1/N) Σ_{i=1}^{N} ( (1 − y_i)/(1 − p(y_i)) − y_i/p(y_i) ) · (1/2) [Tr(Πρ(θ^+)) − Tr(Πρ(θ^−))],

where θ^± is defined in Eqn. (B.4). The above equation implies that, to acquire the gradients of the BCE loss, it is necessary to feed the training examples one by one to the quantum kernel classifier to estimate p(y_i), and then conduct classical post-processing to compute the coefficient (1 − y_i)/(1 − p(y_i)) − y_i/p(y_i). In other words, the number of batches for this quantum classifier can only be B = N. Since each estimation of p(y_i), Tr(Πρ(θ^+)), and Tr(Πρ(θ^−)) is completed by using M measurements, the derivative ∂L_BCE/∂θ_j can be estimated by using 3NM measurements. Considering that there are in total N_F L trainable parameters, the total number of measurements at each epoch for the quantum kernel classifier with the BCE loss is 3NM N_F L.
Unlike the quantum kernel classifier with the BCE loss, GBLS uses a simple loss function L defined in Eqn. (14), which allows us to efficiently acquire the gradient ∂L/∂θ_j by leveraging the superposition property. Recall Eqn. (B.6). The gradient of GBLS satisfies

∂L(θ, D_k)/∂θ_j = sign(y_k − 1/2) · (1/2) [Tr(Πρ(θ^+)) − Tr(Πρ(θ^−))],

where y_k refers to the label of the last pair in the extended training example D_k. The above equation indicates that the gradient for D_k, which contains K training examples in D̃, can be estimated by using 2M measurements, where the first (last) M measurements aim to approximate Tr(Πρ(θ^+)) (Tr(Πρ(θ^−))).
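The measurement-count comparison above reduces to simple arithmetic. The following sketch encodes both counts; the concrete values of N, M, N_F, and L are illustrative assumptions, not figures from the paper.

```python
def bce_measurements_per_epoch(N, M, N_F, L):
    """Quantum kernel classifier with BCE loss: every one of the N_F * L
    parameters needs p(y_i), Tr(Pi rho(theta+)), and Tr(Pi rho(theta-))
    for each of the N examples, at M shots per expectation value."""
    return 3 * N * M * N_F * L

def gbls_measurements_per_gradient_entry(M):
    """GBLS: one extended example D_k, covering K training examples in
    superposition, needs only the two shifted expectation values at M
    shots each, independent of K."""
    return 2 * M

# Illustrative (assumed) hyper-parameters.
N, M, N_F, L = 64, 1000, 4, 2
print(bce_measurements_per_epoch(N, M, N_F, L))    # 1536000
print(gbls_measurements_per_gradient_entry(M))     # 2000
```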