Shadows of quantum machine learning

Quantum machine learning is often highlighted as one of the most promising practical applications for which quantum computers could provide a computational advantage. However, a major obstacle to the widespread use of quantum machine learning models in practice is that these models, even once trained, still require access to a quantum computer in order to be evaluated on new data. To address this issue, we introduce a class of quantum models where quantum resources are only required during training, while the deployment of the trained model is classical. Specifically, the training phase of our models ends with the generation of a ‘shadow model’ from which classical deployment becomes possible. We prove that: (i) this class of models is universal for classically-deployed quantum machine learning; (ii) it does have restricted learning capacities compared to ‘fully quantum’ models, but nonetheless (iii) it achieves a provable learning advantage over fully classical learners, contingent on widely-believed assumptions in complexity theory. These results provide compelling evidence that quantum machine learning can confer learning advantages across a substantially broader range of scenarios, in which quantum computers are employed exclusively during the training phase. By enabling classical deployment, our approach facilitates the implementation of quantum machine learning models in various practical contexts.


I. INTRODUCTION
Quantum machine learning is a rapidly growing field [1-3] driven by its potential to achieve quantum advantages in practical applications. A particularly interesting approach to making quantum machine learning applicable in the near term is to develop learning models based on parametrized quantum circuits [4-6]. Indeed, such quantum models have already been shown to achieve good learning performance in benchmarking tasks, both in numerical simulations [7-11] and on actual quantum hardware [12-15]. Moreover, based on widely-believed cryptography assumptions, these models also hold the promise to solve certain learning tasks that are intractable for classical algorithms [16, 17].
Despite these advances, quantum machine learning is facing a major obstacle for its use in practice. A typical workflow of a machine learning model, e.g., one involved in driving autonomous vehicles, is divided into: (i) a training phase, where the model is trained, typically using training data or by reinforcement; followed by (ii) a deployment phase, where the trained model is evaluated on new input data. For quantum machine learning models, both of these phases require access to a quantum computer. But given that in many practical machine learning applications the trained model is meant for widespread deployment, the current scarcity of quantum computing access dramatically reduces the applicability of quantum machine learning. One way of addressing this problem is by generating classical shadows of quantum machine learning models. That is, we propose inserting a shadowing phase between training and deployment, where a quantum computer is used to collect information on the quantum model. A classical computer can then use this information to evaluate the model on new data during the deployment phase.
The conceptual idea of generating shadows of quantum models was already proposed by Schreiber et al. [18], albeit under the terminology of classical surrogates. In that work, as well as in that of Landman et al. [19], the authors make use of the general expression of quantum models as trigonometric polynomials [20] to learn the Fourier representation of trained models and evaluate them classically on new data. However, these works also suggest that a classical model could potentially be trained directly on the training data and achieve the same performance as the shadow model, thus circumventing the need for a quantum model in the first place. This raises the concern that all quantum models that are compatible with a classical deployment would also lose all quantum advantage, hence severely limiting the prospects for a widespread use of quantum machine learning.
Two natural open questions therefore arise:

1. Can shadow models achieve a quantum advantage over entirely classical (classically trained and classically evaluated) models?
2. Do there exist quantum models that do not admit efficiently evaluatable classical shadows?
In this work, we address both of these open questions. We take a novel approach to defining our shadow models, based on a general expression of quantum machine learning models as linear models [21] and on techniques from classical shadow tomography [22]. We propose a general definition of a shadow model and show that, under this definition, all shadow models can be captured by our approach. Moreover, we show that there exist learning tasks where shadow models have a provable quantum advantage over classical models, under widely-believed cryptography assumptions. Finally, we prove that there exist quantum models that do not admit classical shadows, under common assumptions in complexity theory. Figure 1 gives an overview of the models considered in this work.
In the beginning, we will adhere to a working definition of a shadow model as a model that is trained on a quantum computer, but can be evaluated classically on new input data with the help of information generated by a quantum computer (i.e., quantum-generated advice) that is independent of the new data. We will (informally) call a model "shadowfiable" if there exists a method of turning it into a shadow model. In Section III, we will make our definitions more precise.

II. CONNECTION TO CLASSICAL SHADOWS: THE FLIPPED MODEL
The construction of our shadow models starts from a simple yet key observation: all standard quantum machine learning models can be expressed as linear models [21]. Indeed, early works on parametrized quantum models [7, 12] proposed quantum models that can be expressed as linear models of the form:

f_θ(x) = Tr[ρ(x)O(θ)],    (1)

where ρ(x) are quantum states that encode classical data x ∈ X and O(θ) are parametrized observables whose inner products with ρ(x) define f_θ(x). To make this concrete: in a regression task, one would use such a model to assign a real-valued label to an input x, while in classification tasks, one would additionally apply, e.g., a sign function, to discretize its output to a class. From a circuit picture, such functions can for instance be evaluated by: (i) preparing an initial state ρ_0, e.g., |0⟩⟨0|^⊗n, (ii) evolving it under a data-dependent circuit U(x), (iii) followed by a variational circuit V†(θ), (iv) before finally measuring the expectation value of a Hermitian observable O.
Steps (i) and (ii) together define

ρ(x) = U(x) ρ_0 U†(x),    (2)

while steps (iii) and (iv) define

O(θ) = V(θ) O V†(θ).    (3)

It is also known that such quantum linear models encompass quantum kernel models [23], as well as more general data re-uploading models [21].

A. Flipped model definition
The definition of a quantum linear model in Eq. (1) can in general accommodate any pair of Hermitian operators in place of ρ(x), O(θ). However, due to how these models are evaluated on a quantum computer, one commonly works under the constraint that ρ(x) defines a quantum state (i.e., a positive semi-definite operator with unit trace). Indeed, from an operational perspective, ρ(x) must be physically prepared on a quantum device before being measured with respect to the observable O(θ) (which only needs to be Hermitian in order to be a valid observable).
For reasons that will become clearer from the shadowing perspective, we define a so-called flipped model, where we flip the roles of ρ(x) and O(θ). That is, we consider

f_θ(x) = Tr[ρ(θ) O(x)],    (4)

where ρ(θ) is a parametrized quantum state and O(x) is an observable that encodes the data and can take more general forms than Eq. (3), as we will see next. This model also corresponds to a straightforward quantum computation, as ρ(θ) can be physically prepared before being measured with respect to O(x).
A simple example of a flipped model is for instance defined by:

ρ(θ) = V(θ) ρ_0 V†(θ),   O(x) = Σ_{j=1}^m w_j(x) P_j,    (5)

for an initial state ρ_0, a variational circuit V(θ), and a collection of Pauli observables {P_j}_{j=1}^m weighted by data-dependent weights w_j(x) ∈ R. One can evaluate this model by repeatedly preparing ρ(θ) on a quantum computer, measuring it in a Pauli basis specified by a P_j, and weighting the outcome by w_j(x). For other examples of flipped models, see Appendix A 2.
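As an illustration, the following minimal sketch estimates such a flipped model by sampling measurements of ρ(θ) in the eigenbasis of each P_j (the two-qubit instance, the choice of Paulis, and the weight function w_j(x) are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(n_qubits):
    # rho(theta) = V(theta)|0..0><0..0|V(theta)^dag, with V a random "variational" unitary
    dim = 2 ** n_qubits
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    V, _ = np.linalg.qr(g)
    psi = V[:, 0]
    return np.outer(psi, psi.conj())

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
paulis = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]  # the P_j

def w(x):  # hypothetical data-dependent weights w_j(x)
    return np.array([np.cos(x), np.sin(x), 0.5])

def flipped_model(rho, x, shots=5000):
    """Estimate f(x) = Tr[rho O(x)], O(x) = sum_j w_j(x) P_j, by measuring
    rho in the eigenbasis of each P_j and averaging the sampled eigenvalues."""
    est = 0.0
    for wj, P in zip(w(x), paulis):
        evals, evecs = np.linalg.eigh(P)
        probs = np.real(np.diag(evecs.conj().T @ rho @ evecs)).clip(0)
        outcomes = rng.choice(evals, size=shots, p=probs / probs.sum())
        est += wj * outcomes.mean()
    return est

rho = random_state(2)
exact = sum(wj * np.trace(rho @ P).real for wj, P in zip(w(0.3), paulis))
print(flipped_model(rho, 0.3), "vs exact:", exact)
```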
As opposed to conventional quantum linear models, flipped models are well-suited to constructing shadow models. Since the variational operators ρ(θ) are quantum states, one can use techniques from shadow tomography [22] to construct classical shadows ρ̃(θ) of these states. By definition, these classical shadows ρ̃(θ) are collections of measurement outcomes obtained from copies of ρ(θ) that can be used to classically approximate expectation values of certain observables O. If we take these observables to be our data-dependent O(x), then we end up with a classical model f̃_θ(x) that approximates our flipped model. Classical shadow tomography [22] also provides us with the tools to guarantee that the resulting shadow model f̃_θ(x) is a good approximation of the original flipped model f_θ(x). For instance, in the example of Eq. (5), if all Pauli operators P_j act on at most k qubits, then classical shadows obtained from O(3^k log(m)/ε²) measurements of ρ(θ) suffice to guarantee that |f̃_θ(x) − f_θ(x)| ≤ ε max_x ∥w(x)∥_1 with high probability (see Appendix A 2).
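The two-phase structure is worth spelling out in code: the shadowing phase generates a classical representation of ρ(θ) once, and that representation is then reused to evaluate the model classically on arbitrary new inputs. Below is a sketch using random-Pauli-basis snapshots in the style of Ref. [22] (the two-qubit state and the example observable O(x) are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
Sdg = np.diag([1., -1j])
BASIS = {'X': H, 'Y': H @ Sdg, 'Z': np.eye(2)}  # rotate basis, then measure Z

def snapshot(psi, n):
    """One Pauli classical shadow of |psi>: random single-qubit bases,
    one measurement, then the inverted measurement channel per qubit."""
    bases = rng.choice(list('XYZ'), size=n)
    U = BASIS[bases[0]]
    for b in bases[1:]:
        U = np.kron(U, BASIS[b])
    probs = np.abs(U @ psi) ** 2
    out = rng.choice(len(probs), p=probs / probs.sum())
    rho_hat = np.ones((1, 1), dtype=complex)
    for q, b in enumerate(bases):
        bit = (out >> (n - 1 - q)) & 1
        ket = BASIS[b].conj().T[:, bit]          # U^dag |bit>
        rho_hat = np.kron(rho_hat, 3 * np.outer(ket, ket.conj()) - I2)
    return rho_hat

# Shadowing phase: measure copies of rho(theta) once, store the advice.
n = 2
psi = np.zeros(2 ** n); psi[0] = 1.0              # stand-in for V(theta)|0..0>
shadows = [snapshot(psi, n) for _ in range(3000)]

# Deployment phase: purely classical evaluation for any new x.
Z = np.diag([1., -1.])
O_x = 0.7 * np.kron(Z, I2) + 0.2 * np.kron(Z, Z)  # example O(x)
print(np.mean([np.trace(s @ O_x).real for s in shadows]))  # ~ 0.9
```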

B. Properties of flipped models
Flipped models are a stepping stone toward the claims of quantum advantage and "shadowfiability" that are the focus of this paper. Nonetheless, they constitute a newly introduced model, which is why it is useful to understand first how they relate to previous quantum models and what learning guarantees they can have.
Since conventional linear models of the form of Eq. (1) play a central role in quantum machine learning, we start by asking the question: when can these models be represented by (efficiently evaluatable) flipped models? Clearly, a linear model f_θ(x) = Tr[ρ(x)O(θ)] for which the parametrized operator O(θ) is also a quantum state (i.e., a positive semi-definite trace-1 operator) is by definition also a flipped model. Therefore, a natural strategy to flip a conventional model is to transform its observable O(θ) into a quantum state ρ′(θ). This transformation involves dealing with the negative eigenvalues of O(θ), which is straightforward, as well as normalizing these eigenvalues, which more importantly affects the efficiency of evaluating the resulting flipped model. Indeed, when normalizing O(θ), the normalization factor α corresponding to its trace norm ∥O(θ)∥_1 is transferred to the encoding observables O′(x) of the resulting flipped model. This directly impacts the spectral norm ∥O′∥_∞ = max_{|ψ⟩} ⟨O′⟩_ψ = α of the flipped model, and therefore the efficiency of its evaluation, as O(∥O′∥_∞²/ε²) measurements of ρ′(θ) are needed in order to estimate f_θ(x) to additive error ε with high probability (see Appendix B 1 for a derivation).
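For concreteness, here is the flipping identity in display form (a sketch under the simplifying assumption that O(θ) has already been made positive semi-definite; the general case, where negative eigenvalues are split off first, is treated in Appendix B 3):

```latex
% Flipping a conventional model once O(theta) >= 0:
% normalize the observable into a state, absorb the normalization into O'(x).
\rho'(\theta) = \frac{O(\theta)}{\lVert O(\theta)\rVert_1}, \qquad
O'(x) = \alpha\,\rho(x), \qquad \alpha = \operatorname{Tr}[O(\theta)] = \lVert O(\theta)\rVert_1,
\qquad\Longrightarrow\qquad
\operatorname{Tr}\!\big[\rho'(\theta)\,O'(x)\big]
= \operatorname{Tr}\!\big[\rho(x)\,O(\theta)\big] = f_\theta(x).
```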
Interestingly, in the relevant regime where the numbers of qubits n, m used by the linear models involved in this flipping are logarithmic in ∥O∥_1 (e.g., where O is a Pauli observable and hence ∥O∥_1 = 2^n), we find that this requirement on the spectral norm ∥O′∥_∞ of the resulting flipped model is unavoidable in the worst case (up to a logarithmic factor in ∥O∥_1). We summarize this result informally in the following lemma and refer to Appendix B 3 for a more in-depth discussion.

Lemma 1 (Flipping bounds (informal)). Any conventional model f_θ(x) = Tr[ρ(x)O(θ)] can be transformed into an equivalent flipped model whose observables have spectral norm ∥O′∥_∞ ≤ ∥O∥_1. Moreover, in the regime where the numbers of qubits used by the models are logarithmic in ∥O∥_1, there exist conventional models for which this scaling is optimal up to logarithmic factors.

Another property of interest in machine learning is the generalization performance of a learning model. That is, we want to bound the gap between the performance of the model on its training set (the so-called training error) and its performance on the rest of the data space (or expected error). Such bounds have for instance been derived in terms of the number of encoding gates in the quantum model [24], or the rank of its observable [25]. In the case of flipped models, we find instead a bound in terms of the number of qubits n and the spectral norm ∥O∥_∞ of the observable. Since these quantities are operationally meaningful, this gives us a natural way of controlling the generalization performance of our flipped models.
Lemma 2 (Generalization bounds (informal)). Consider a flipped model f_θ that acts on n qubits and has a bounded observable norm ∥O∥_∞. If this model achieves a small training error |f_θ(x) − f(x)| ≤ η for all x in a dataset of size M, then it also has a small expected error |f_θ(x) − f(x)| ≤ 2η with probability 1 − ε over the data distribution, provided that the size of the dataset scales as M ∈ Õ(n∥O∥_∞²/ε).

Note also that the dependence on n and ∥O∥_∞ is linear and quadratic, respectively, which means that we can afford a large number of qubits and a large spectral norm and still guarantee a good generalization performance. This is particularly relevant as the spectral norm is a controllable quantity, meaning we can easily fine-tune our models to perform well in training and generalize well. E.g., in the case of the model in Eq. (5), this spectral norm is bounded by max_x Σ_{j=1}^m |w_j(x)|, which scales favorably with the number of qubits n if m ∈ O(poly(n)) or if the vector w(x) is sparse.

C. Quantum advantage of a shadow model
We recall that we (informally) define shadow models as models that are trained on a quantum computer, but, after a shadowing procedure that collects information on the trained model, are evaluated classically on new input data. In this section, we consider the question of achieving a quantum advantage using such shadow models. It may seem at first sight that this question has a straightforward answer, which is "no": if the function learned by a model is classically computable, then there should be no room for a quantum advantage. However, as demonstrated in Ref. [17], one can also achieve a quantum advantage based on so-called trap-door functions. These are functions that are believed to be hard to compute classically, unless one is given a key (or advice) that allows for an efficient classical computation. Notably, there exist trap-door functions where this key can be efficiently computed using a quantum computer, but not classically. This allows us to construct shadow models that make use of this quantum-generated key to compute an otherwise classically-intractable function.
Similarly to related results showing a quantum advantage in machine learning with classical data [16, 26], we consider a learning task where the target function (i.e., the function generating the training data) is derived from cryptographic functions that are widely believed to be hard to compute classically. More precisely, we introduce a variant of the discrete cube root learning task [17], which is hard to solve classically under a hardness assumption related to that of the RSA cryptosystem [27]. In this task, we consider target functions g_s defined on Z_N = {0, . . ., N − 1} in terms of the discrete cube root 3√x mod N (see Appendix C for their precise definition), where N = pq is an n-bit integer, product of two primes p, q of the form 3k + 2, 3k′ + 2, such that the discrete cube root is properly defined as the inverse of the function y ↦ y³ mod N. These target functions are particularly appealing because of a number of interesting properties:
(i) It is believed that, given only x and N as input, computing g(x) = 3√x mod N with high probability of success over random draws of x and N is classically intractable. This assumption is known as the discrete cube root (DCR) assumption.
(ii) Labeled examples can nonetheless be generated efficiently classically from N alone, by sampling y uniformly in Z_N and computing x = y³ mod N (see Fig. 3).
(iii) The discrete cube root has a trap-door property: from the prime factors of N, one can efficiently compute a key d ∈ Z_N such that 3√x = x^d mod N for all x.
Observations (i) and (ii) can be leveraged to show that learning the functions g_s from examples is also intractable. Indeed, Alexi et al. [29] showed that a classical algorithm that could faithfully capture a single bit g_s(x) of the discrete cube root of x, for even a 1/2 + 1/poly(n) fraction of all x ∈ Z_N, could also be used to reconstruct g(x), ∀x ∈ Z_N, with high probability of success. Since, from observation (ii), the training data for the learning algorithm can also be generated efficiently classically from N, a classical learner that learns g_s(x) correctly for a 1/2 + 1/poly(n) fraction of all x ∈ Z_N would then contradict the DCR assumption.
Observation (iii) allows us to define the following flipped model:

f_θ(x) = Tr[ρ(θ)O(x)],   with ρ(θ) = |d′, s′⟩⟨d′, s′| .

That is, ρ(θ) (for θ = (N, s′)) specifies candidates for the key d′ and the parameter s′ of interest, while O(x) uses that information to compute g_{d′,s′}(x). The state ρ(θ) = |d, s′⟩⟨d, s′| for the right key d can be prepared efficiently using Shor's algorithm applied on N (provided with the training data). As for O(x), it simply processes a bit-string classically to compute g_{d′,s′}(x) efficiently, which corresponds to g_s(x) when (d′, s′) = (d, s).
Finding an s′ close to s is an easy task given training data and d′ = d. Since ρ(θ) is a computational basis state, this flipped model admits a trivial shadow model where a single computational basis measurement of ρ(θ) allows one to evaluate f_θ(x) classically for all x. We therefore end up showing the following theorem:

Theorem 3 (Quantum advantage (informal)). There exists a learning task where a shadow model, first trained using a quantum computer then evaluated classically on new input data, can achieve an arbitrarily good learning performance, while any fully classical model cannot do significantly better than random guessing, under the hardness of classically computing the discrete cube root.
In Appendix C we formalize the statement of this result using the PAC framework and provide more details on the setting and the proofs.
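To make the trap-door mechanics concrete, here is a minimal sketch with toy parameters (the tiny primes and variable names are our own; real instances use n-bit semiprimes): cubing is easy in one direction, inverting it requires the key d, and d is easy to obtain once N is factored, which is what the quantum computer provides via Shor's algorithm.

```python
from math import gcd

# Toy discrete cube root trap-door (tiny primes for illustration only).
p, q = 11, 17                      # both of the form 3k + 2
N, phi = p * q, (p - 1) * (q - 1)
assert p % 3 == 2 and q % 3 == 2 and gcd(3, phi) == 1

d = pow(3, -1, phi)                # trap-door key: d = 3^(-1) mod phi(N),
                                   # classically easy only once N is factored
y = 42                             # "feature space" point
x = pow(y, 3, N)                   # public direction: x = y^3 mod N (easy)
assert pow(x, d, N) == y           # with the key: cube root is one modexp

print(f"N={N}, d={d}, y={y}, x={x}, recovered={pow(x, d, N)}")
```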

III. GENERAL SHADOW MODELS
As mentioned at the start of this paper, shadow models are not limited to shadowfied flipped models, and the main alternative proposals are based on the Fourier representation of quantum models [18, 19]. It is clear that Fourier models are defined very differently from flipped models, but one may wonder whether they nonetheless include shadowfied flipped models as a special case, or the other way around.
In this section, we first show that there exist quantum models that admit classical shadows (i.e., are shadowfiable) but cannot be shadowfied efficiently using a Fourier approach. This motivates our proposal for a general definition of shadow models, and we show that, under this definition, all shadow models can be expressed as shadowfied flipped models. Finally, we show the existence of quantum models that are not shadowfiable at all, under plausible complexity-theoretic assumptions.

A. Shadow models beyond Fourier
An interesting approach to constructing shadows of quantum models is based on their natural Fourier representation. It has been shown [20, 30] that quantum models can be expressed as generalized Fourier series of the form

f_θ(x) = Σ_{ω∈Ω} c_ω(θ) e^{iω·x},    (10)

where the accessible frequencies Ω only depend on properties of the encoding gates used by the model (notably the number of encoding gates and their eigenvalues). Since these frequencies can easily be read out from the circuit, one can proceed to form a shadow model by estimating their associated coefficients c_ω(θ) using queries of the quantum model f_θ(x) at different values x and, e.g., a Fourier transform [18]. Given a good approximation of these coefficients, one can then compute estimates of f_θ(x) for arbitrary new inputs x. We will refer to such a shadowing approach that considers the quantum model as a black-box, aside from the knowledge of its Fourier spectrum, as the Fourier shadowing approach. Although we will be explicit about this in the next subsection, we will consider a shadowing procedure to be successful if, with high probability, the resulting shadow model agrees with the original model on all inputs, i.e.,

max_{x∈X} |f̃_θ(x) − f_θ(x)| ≤ ε,

for a specified ε ≥ 0.
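For low-dimensional inputs this works well. The sketch below (a single-qubit instance of our own, with frequency set Ω = {−1, 0, 1} since there is one encoding gate) recovers the Fourier coefficients of a model from three black-box queries and evaluates the resulting shadow on new inputs:

```python
import numpy as np

def model(x, theta=0.7):
    """Single-qubit model f(x) = <0| RX(x)^dag RY(theta)^dag Z RY(theta) RX(x) |0>,
    computed exactly here to stand in for black-box circuit queries."""
    RX = np.array([[np.cos(x/2), -1j*np.sin(x/2)], [-1j*np.sin(x/2), np.cos(x/2)]])
    RY = np.array([[np.cos(theta/2), -np.sin(theta/2)], [np.sin(theta/2), np.cos(theta/2)]])
    Z = np.diag([1., -1.])
    psi = RX @ np.array([1., 0.])
    return np.real(psi.conj() @ RY.conj().T @ Z @ RY @ psi)

# Fourier shadowing: one encoding gate => Omega = {-1, 0, 1}, so 3 queries suffice.
xs = 2 * np.pi * np.arange(3) / 3
c = np.fft.fft([model(x) for x in xs]) / 3        # c[0]=c_0, c[1]=c_1, c[2]=c_{-1}

def shadow(x):
    return np.real(c[0] + c[1] * np.exp(1j * x) + c[2] * np.exp(-1j * x))

for x in [0.1, 1.3, 2.9]:
    print(f"x={x}: shadow={shadow(x):.6f}, model={model(x):.6f}")
```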
We show that the Fourier shadowing approach can suffer from an exponential sample complexity in the dimension of the input data x, making it intractable for high-dimensional input spaces. To see this, consider the linear model:

f_y(x) = Tr[ρ(x)O(y)],   ρ(x) = ⊗_{j=1}^n |ψ(x_j)⟩⟨ψ(x_j)|,   O(y) = |y⟩⟨y|,    (11)

for x ∈ R^n, y ∈ {0, 1}^n, and |ψ(x_j)⟩ = cos(x_j)|0⟩ + sin(x_j)|1⟩. Let us first restrict our attention to the domain x ∈ {0, π/2}^n. It is quite clear that on this domain, f_y(x) = δ_{2x/π, y} plays the role of a database search oracle, where the database has 2^n elements and a unique marked element y. From lower bounds on database search, we know that Ω(2^n) calls to this oracle are needed to find y [31]. This implies that a Fourier shadowing approach would require Ω(2^n) calls to f_y(x) = δ_{2x/π, y} in order to guarantee max_{x∈X} |f̃_y(x) − f_y(x)| ≤ 1/4. In Appendix D 3, we explain how this result can be generalized to the full domain x ∈ R^n.
On the other hand, note that the flipped model associated to f_y(x), in which |y⟩⟨y| plays the role of the parametrized state, allows for a straightforward shadowing procedure. Indeed, by preparing |y⟩⟨y| and measuring it in the computational basis, one straightforwardly obtains y and can therefore classically compute its expectation value with any tensor product observable, such as the ρ(x) specified in Eq. (11). We summarize our result in the following lemma:

Lemma 4 (Not all Fourier-shadowfiable). There exist shadowfiable models that are not efficiently Fourier-shadowfiable, i.e., for which a shadowing procedure based solely on the knowledge of their Fourier spectrum and on black-box queries can have a query complexity that is exponential in the input dimension.
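In code, the flipped-model shadowing used in this argument is almost trivial (a sketch in our own notation): a single computational-basis measurement of |y⟩ reveals y exactly, after which f_y(x) is evaluated classically for any x.

```python
import numpy as np

y = np.array([1, 0, 1])               # hidden marked string; |y><y| is the state

# Shadowing phase: one computational-basis measurement of |y><y| returns y.
advice = y.copy()

def shadow_f(x, adv=advice):
    """Classical deployment: f_y(x) = prod_j |<psi(x_j)|adv_j>|^2,
    with |psi(x_j)> = cos(x_j)|0> + sin(x_j)|1> as in Eq. (11)."""
    amps = np.where(adv == 0, np.cos(x), np.sin(x))
    return float(np.prod(amps ** 2))

print(shadow_f(np.array([np.pi/2, 0., np.pi/2])))   # 1.0: here 2x/pi == y
print(shadow_f(np.array([0., 0., 0.])))             # 0.0: unmarked input
```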

B. All shadow models are shadows of flipped models
We now give a general definition of shadow models that encompasses all methods that have been proposed to generate them. In contrast to the definition of classical surrogates proposed by Schreiber et al. [18], we give explicit definitions for the shadowing and evaluation phases of shadow models, which allows us to bound the complexity of generating and evaluating them. We view a general shadowing phase as the generation of advice that can be used to evaluate a quantum model classically. This advice is generated by the execution of quantum circuits that may or may not depend on the (trained) quantum circuit from the training phase. For instance, when we construct shadows of our flipped models, we simply prepare the parametrized states ρ(θ) and use (randomized) measurements to generate an operationally meaningful classical description. In the case of Fourier shadowing, this advice is instead generated by evaluations of the quantum model f_θ(x) for different inputs x ∈ R^d that are rich enough to learn the Fourier coefficients of this model. We propose the following definition:

Definition 5 (General shadow model). Let W_1(θ), . . ., W_M(θ) be a sequence of O(poly(m))-time quantum circuits applied on all-zero states |0⟩^⊗m, that can potentially be chosen adaptively. Call ω(θ) = (ω_1(θ), . . ., ω_M(θ)) the outcomes of measuring the output states of these circuits in the computational basis. A general shadow model is defined as:

f̃_θ(x) = A(ω(θ), x),

where A is a classical O(poly(M, m, d))-time algorithm that processes the outcomes ω(θ) along with an input x ∈ R^d to return the (real-valued) label f̃_θ(x).
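A minimal code rendering of this two-phase structure (all names hypothetical) may help fix ideas: the advice is produced once by quantum circuits, and deployment is a purely classical function of the advice and the new input.

```python
from typing import Callable, List, Sequence
import numpy as np

rng = np.random.default_rng(7)

def shadowing_phase(circuits: Sequence[Callable[[], np.ndarray]]) -> List[int]:
    """Run each circuit W_i(theta) once; a computational-basis measurement of its
    output state yields one piece omega_i(theta) of the classical advice."""
    advice = []
    for run in circuits:                       # run() returns an output state vector
        probs = np.abs(run()) ** 2
        advice.append(int(rng.choice(len(probs), p=probs / probs.sum())))
    return advice

def deploy(A: Callable[[List[int], np.ndarray], float],
           advice: List[int], x: np.ndarray) -> float:
    """Deployment phase: a poly-time classical algorithm A maps (advice, x) to a label."""
    return A(advice, x)
```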
From this definition, a shadow model is a classically evaluatable model that uses quantum-generated advice. Crucially, this advice must be independent of the data points x we wish to evaluate the model on in the future. We distinguish the notion of a shadow model from that of a shadowfiable quantum model, that is, a quantum model that admits a shadow model:

Definition 6 (Shadowfiable model). A model f_θ acting on n qubits is said to be shadowfiable if, for all ε, δ > 0, there exists a shadow model f̃_θ such that, with probability 1 − δ over the quantum generation of the advice ω(θ) (i.e., the shadowing phase), the shadow model satisfies

|f̃_θ(x) − f_θ(x)| ≤ ε, ∀x ∈ X,

and uses m, M ∈ O(poly(n, 1/ε, 1/δ)) qubits and circuits to generate its advice ω(θ).
While we have seen that there exist shadowfiable models that cannot be shadowfied efficiently using a Fourier approach, we show that all shadowfiable models as defined above can be approximated by shadowfiable flipped models.
Lemma 7 (Flipped models are shadow-universal). All shadowfiable models as defined in Defs. 5 and 6 can be approximated by flipped models f_θ(x) = Tr[ρ(θ)O(x)] with the guarantee that computational basis measurements of ρ(θ) and efficient classical post-processing can be used to evaluate f_θ(x) to good precision with high probability.
This result is essentially based on the observation that the evaluation of a general shadow model as defined in Def. 5 can be done entirely coherently. Instead of classically running the algorithm A on the random advice ω(θ), one can simulate this algorithm quantumly (using a reversible execution) and execute it on the coherent advice ρ(θ) = |ω(θ)⟩⟨ω(θ)| generated by {W_1(θ), . . ., W_M(θ)} before the computational basis measurements. We refer to Appendix D 1 for a more detailed statement and proof. Figure 2 summarizes the resulting separations between classical, shadow, and quantum models.

C. Not all quantum models are shadowfiable
From the discrete cube root learning task, we already understand that a learning separation can be established between classical and shadowfiable models. We would also like to understand whether a learning separation exists between shadowfiable models and general quantum models, or equivalently, whether all quantum models are shadowfiable. We show that the latter is not the case, under widely believed assumptions.
We start by noting that shadow models can be characterized by a complexity class we define as BPP/qgenpoly, which contains all functions that can be computed efficiently classically with the help of polynomially-sized advice generated efficiently by a quantum computer. This class is trivially contained in the standard class BPP/poly, which does not have any constraint on how the advice is generated, and which can be derandomized to P/poly (i.e., BPP/poly = P/poly [32]).
On the other hand, it is easy to show that quantum models (more precisely, quantum linear models) can also represent any function in BQP, i.e., all functions that are efficiently computable on a quantum computer. For this, one simply takes a simple encoding of an n-bit input x,

ρ(x) = |x⟩⟨x| ,

along with an observable O_n = U_n† Z_1 U_n, for U_n an arbitrary poly(n)-time n-qubit circuit and Z_1 the Pauli-Z operator applied on the first qubit. The resulting model f_n(x) = Tr[ρ(x)O_n] can then be used to decide any language in BQP.
Combining these two observations, we get that the proposition "all quantum models are shadowfiable" would imply that BQP ⊆ BPP/qgenpoly ⊆ P/poly, which violates the widely-believed conjecture [33] that BQP ⊄ P/poly (see Appendix D 2 for a formal proof). To give an example of a candidate non-shadowfiable quantum model: the discrete logarithm log_g x mod p (or even one bit of it) is provably in BQP but is not believed to be in P/poly. Therefore, a model that can be used to compute the discrete logarithm (e.g., the quantum model of Liu et al. [16]) is likely not shadowfiable.

ACKNOWLEDGMENTS
The authors would like to acknowledge Johannes Jakob Meyer, Franz Schreiber, and Richard Kueng for insightful discussions at different phases of this project. SJ acknowledges support from the Austrian Science Fund (FWF) through the projects DK-ALM:W1259-N27 and SFB BeyondC F7102. SJ also acknowledges the Austrian Academy of Sciences as a recipient of the DOC Fellowship. SJ thanks the BMBK (EniQma) and the Einstein Foundation (Einstein Research Unit on Quantum Devices) for their support. VD and CG acknowledge the support of the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037). VD and RM acknowledge the support of the NWO through the NWO/NWA project Divide and Quantum. VD and SCM acknowledge the support by the project NEASQC funded from the European Union's Horizon 2020 research and innovation programme (grant agreement No 951821). VD and SCM also acknowledge partial funding by an unrestricted gift from Google Quantum AI. This work was also in part supported by the Dutch National Growth Fund (NGF), as part of the Quantum Delta NL programme.
Appendix A: Formal definitions

1. Linear models

Definition A.1 (Conventional linear model). Let U(x) be an encoding quantum circuit that is parametrized by input data x ∈ R^d, ρ_0 a fixed input quantum state (diagonal in the computational basis), V(θ) a variational quantum circuit parametrized by a vector θ ∈ R^p, and O = Σ_{i=1}^m w_i O_i an observable specified by a (trainable) linear combination of Hermitian matrices {O_i}_{i=1}^m. A conventional linear model is defined by the parametrized function:

f_θ(x) = Tr[ρ(x)O(θ)],    (A1)

for ρ(x) = U(x)ρ_0U†(x) and O(θ) = V(θ)OV†(θ) (when the weights {w_i}_{i=1}^m are also trainable, we include them in the parameters θ of the model).

2. Flipped models

Definition A.2 (Flipped model). Let V(θ) be a variational quantum circuit parametrized by a vector θ ∈ R^p, ρ_0 a fixed input quantum state (diagonal in the computational basis), U(x) an encoding quantum circuit that is parametrized by input data x ∈ R^d, and O_x = Σ_{i=1}^m w(x)_i O_i an observable specified by a linear combination of Hermitian matrices {O_i}_{i=1}^m, weighted by a data-dependent function w : R^d → R^m. A flipped model is defined by the parametrized function:

f_θ(x) = Tr[ρ(θ)O(x)],    (A2)

for ρ(θ) = V(θ)ρ_0V†(θ) and O(x) = U(x)O_xU†(x).

3. Shadow models
Definition A.3 (Shadow model). Let {W_1(θ), . . ., W_M(θ)} be a sequence of m-qubit unitary circuits that depend on a parameter vector θ ∈ R^p, and can potentially be chosen adaptively. We define the quantum-generated advice ω(θ) = (ω_1(θ), . . ., ω_M(θ)) as the measurement outcomes ω_i(θ) obtained by measuring the states W_i(θ)|0⟩^⊗m in the computational basis (and a description of their associated circuits W_i(θ)). A shadow model is defined as the parametrized function:

f̃_θ(x) = A(ω(θ), x),    (A3)

for A a classical O(poly(m, M, d))-time algorithm that takes as input the advice ω(θ) and a data vector x ∈ R^d, and outputs a real-valued label f̃_θ(x).

Examples of shadow models:

• Take a flipped model f_θ(x) = Tr[ρ(θ)O(x)] for ρ(θ) a quantum state generated by a circuit V(θ) applied to |0⟩^⊗n and O(x) = Σ_{i=1}^m w(x)_i P_i, where {P_i}_{i=1}^m are all k-local Pauli strings acting on n qubits. A simple shadow model associated to this flipped model consists in estimating all the expectation values ⟨P_i⟩ ≈ Tr[ρ(θ)P_i] via repeated measurements of ρ(θ) in the eigenbasis of each of the m = (n choose k) 3^k Pauli strings P_i, and taking their weighted combination f̃_θ(x) = Σ_{i=1}^m w(x)_i ⟨P_i⟩. In this case, the unitary circuits {W_1(θ), . . ., W_M(θ)} are simply obtained by V(θ) followed by a basis-change unitary (corresponding to the Pauli basis of P_i). As for the classical algorithm A, this is simply a collection of mean estimators that compute estimates ⟨P_i⟩ out of measurement outcomes, followed by the computation of a weighted sum. The number of measurements needed for this shadow model to guarantee |f̃_θ(x) − f_θ(x)| ≤ ε, ∀x ∈ R^d, is M ∈ Õ(m max_x∥w(x)∥_1²/ε²). Indeed, estimating each ⟨P_i⟩ to additive error ε/max_x∥w(x)∥_1 allows us to guarantee the desired total additive error, and each of these estimates can be obtained using O(max_x∥w(x)∥_1² ε^{−2} log(m)) measurements.

• A more interesting shadow model relies on Pauli classical shadows [34], where random Pauli measurements are used to construct ω(θ). Median-of-means estimators then use these measurement outcomes to compute empirical estimates ρ̃(θ) of ρ(θ) and approximate all expectation values Tr[ρ(θ)P_i]. The advantage of this shadow model is that it requires only M ∈ O(3^k log(m) max_x∥w(x)∥_1²/ε²) measurements, i.e., a number scaling only logarithmically with the number m of Pauli strings.

• Clifford classical shadows [34], applied to encoding observables of the form O(x) = |ψ(x)⟩⟨ψ(x)|, allow one to construct a representation ω(θ) of ρ(θ) that guarantees |f̃_θ(x) − f_θ(x)| ≤ ε, ∀x ∈ R^d, using only M ∈ O(1/ε²) measurements. However, for this estimation to be computationally efficient, the states |ψ(x)⟩ need to be stabilizer states, or be generated by few (i.e., O(log(n))) non-Clifford gates [35].
• For flipped models with squared expectation values, f_θ(x) = Σ_{i=1}^m w(x)_i Tr[ρ(θ)P_i]², one can even allow m = 4^n, i.e., {P_i}_{i=1}^m being all n-qubit Pauli strings. By preparing two-copy states ρ(θ) ⊗ ρ(θ) and performing simultaneous Bell measurements between pairs of qubits of these two copies, M = O(1/ε²) such measurements give a ω(θ) rich enough to compute any Tr[ρ(θ)P_i]² to precision ε [36]. Therefore, evaluating f̃_θ(x) requires only M ∈ O(max_x∥w(x)∥_1²/ε²) measurements using this shadow model, compared to 2^Ω(n) for a shadow model that would construct ω(θ) using single-copy measurements only (assuming that for all i ∈ {1, . . ., m}, there exists an x ∈ R^d such that w(x)_i ≠ 0). For a w(x) that is k-sparse for all x, with k ≪ m, this constitutes an exponential separation.
Appendix B: Properties of flipped models

1. Sample complexity of evaluating quantum models
Consider a linear quantum model (either conventional or flipped) of the form

f_y(x) = Tr[ρ(x)O(y)],

for a quantum state ρ(x) parametrized by a vector x ∈ R^d and O(y) a Hermitian observable parametrized by a vector y ∈ R^p. Assume that we can prepare single copies of ρ(x) and that we can measure them in the eigenbasis of O(y).
We ask: given error parameters ε, δ > 0, how many such measurements of ρ(x) do we need in order to compute an estimate f̂_y(x) of f_y(x) such that |f̂_y(x) − f_y(x)| ≤ ε, with success probability at least 1 − δ?
It is easy to see that this problem corresponds to a simple Monte Carlo mean estimation. Indeed, we can write a decomposition of O(y) in its eigenbasis as:

O(y) = Σ_i λ_i(y) |ϕ_i⟩⟨ϕ_i| ,

where λ_i(y) is a real eigenvalue (since O(y) is Hermitian) associated to the eigenstate |ϕ_i⟩⟨ϕ_i| (these eigenstates can in general also depend on y, but we do not write this dependence explicitly for ease of notation). We can also write a decomposition of ρ(x) in this same basis as:

ρ(x) = Σ_{i,j} ρ_{i,j}(x) |ϕ_i⟩⟨ϕ_j| ,

such that Tr[ρ(x)] = Σ_i ρ_{i,i}(x) = 1 by the unit-trace property of ρ(x), and ⟨ϕ_i|ρ(x)|ϕ_i⟩ = ρ_{i,i}(x) ≥ 0 from its positive semi-definiteness. From these two properties, we deduce that {ρ_{i,i}(x)}_i defines a probability distribution over the eigenstates {|ϕ_i⟩⟨ϕ_i|}_i. Therefore, we can see that:

f_y(x) = Tr[ρ(x)O(y)] = Σ_i ρ_{i,i}(x) λ_i(y)

simply corresponds to the expectation value of the random variable i ↦ λ_i(y) under the probability distribution {ρ_{i,i}(x)}_i, i.e., the probability distribution obtained by measuring ρ(x) in the eigenbasis of O(y). Therefore, we can use known results from (classical) Monte Carlo estimation to bound the sample complexity of evaluating this mean value, and therefore the quantum model. With the assumption that ρ(x) is given as a black-box and that it can generate arbitrary quantum states (and therefore arbitrary distributions {ρ_{i,i}(x)}_i), the only property of the random variable i ↦ λ_i(y) we can use to bound the sample complexity is its bounded domain. Indeed, without additional assumptions on the distribution, we have a tight sample complexity bound of

N ∈ Θ(B²/ε² · log(1/δ))

samples in order to estimate the mean of a random variable taking values in [−B, B], to precision ε and with probability of success 1 − δ [37, 38]. In the case of a quantum model, the random variable i ↦ λ_i(y) takes values in [−∥O(y)∥_∞, ∥O(y)∥_∞], where ∥O(y)∥_∞ is the spectral norm of the observable O(y). Therefore, the sample complexity of estimating a quantum model is N ∈ Θ(∥O(y)∥_∞²/ε² · log(1/δ)) in the absence of any constraint (or information) on the quantum states ρ(x).
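As a quick numeric sanity check, the Hoeffding-type bound behind this estimate can be evaluated directly (a sketch; the constant follows the standard two-sided Hoeffding inequality):

```python
import numpy as np

def shots_needed(B: float, eps: float, delta: float) -> int:
    """Samples of a random variable bounded in [-B, B] needed to estimate its
    mean to additive error eps with failure probability at most delta
    (two-sided Hoeffding: 2*exp(-N*eps^2 / (2*B^2)) <= delta)."""
    return int(np.ceil(2 * (B / eps) ** 2 * np.log(2 / delta)))

# A Pauli observable has spectral norm 1, so B = 1:
print(shots_needed(B=1, eps=0.01, delta=0.01))   # ~1.1e5 measurements
```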

2. Generalization performance
In this section, we study the generalization performance of flipped models. For this, we take an approach very similar to that of Aaronson [39], where we lower bound the number of qubits n and spectral norm ∥O∥_∞ needed by a flipped model to encode arbitrary k-bit strings, in a way that can be recovered efficiently via repeated measurements. These bounds naturally allow us to upper bound the fat-shattering dimension of flipped models, a complexity measure that is widely used in generalization bounds [40].
a. Encoding bit-strings in flipped models

Theorem B.1. Let k and n be positive integers with k > n. For all k-bit strings y = y_1 . . . y_k, let ρ(y) be an n-qubit mixed state that "encodes" y. Suppose there exist Hermitian observables O_1, . . ., O_k with spectral norms ∥O_i∥_∞ (we call ∥O∥_∞ = max_i ∥O_i∥_∞), as well as real numbers α_1, . . ., α_k, such that, for all y ∈ {0, 1}^k and i ∈ {1, . . ., k}:
(i) if y_i = 0, then Tr[O_i ρ(y)] ≤ α_i − γ;
(ii) if y_i = 1, then Tr[O_i ρ(y)] ≥ α_i + γ.
Then we must have n∥O∥_∞²/γ² ∈ Ω(k).

Proof. We take an approach similar to the proof of Aaronson [39], in which we show that a combination of an encoding ρ(y) and observables O_1, . . ., O_k that satisfies guarantees (i) and (ii) would need n∥O∥_∞²/γ² to scale linearly in the length k of the bit-strings it encodes in order not to contradict Holevo's bound.
Suppose by contradiction that such an encoding scheme exists with n∥O∥_∞²/γ² ∈ o(k). We first adapt to the setting of Aaronson by constructing two-outcome POVMs {E_i, I − E_i} out of the observables O_i. That is, we take the general Hermitian matrices O_i with eigenvalues in [λ_min, λ_max] ⊂ [−∥O∥_∞, ∥O∥_∞], and transform them into Hermitian matrices E_i with eigenvalues in [0, 1], such that the POVM {E_i, I − E_i} accepts ρ (i.e., outputs 1) with probability Tr(ρE_i), and rejects ρ (i.e., outputs 0) with probability 1 − Tr(ρE_i). Specifically, we define

E_i = (O_i + |λ_min| I) / (|λ_min| + |λ_max|) .

Conditions (i) and (ii) then translate to:

Tr[E_i ρ(y)] ≤ (α_i + |λ_min| − γ)/(|λ_min| + |λ_max|) if y_i = 0,
Tr[E_i ρ(y)] ≥ (α_i + |λ_min| + γ)/(|λ_min| + |λ_max|) if y_i = 1.

From here, by noting that |λ_min| + |λ_max| ≤ 2∥O∥_∞, we can directly apply Theorem 2.6 of Aaronson [39] and get our result, but we detail the reasoning further for clarity.
We first need to amplify the probability that we correctly identify whether y_i = 0 or 1 from measuring copies of ρ(y), since the gaps in the probabilities obtained from E_i can be arbitrarily small. Consider an amplified scheme, where each bit-string y ∈ {0, 1}^k is encoded by the tensor product ρ(y)^⊗ℓ, for some ℓ > 1 to be defined later. For all i ∈ {1, . . ., k}, let {E_i^*, I − E_i^*} be the amplified POVM that applies {E_i, I − E_i} to each of the ℓ copies of ρ(y) and accepts if and only if at least ᾱ_i ℓ of these POVMs do, where ᾱ_i = (α_i + |λ_min|)/(|λ_min| + |λ_max|). For all j ∈ {1, . . ., ℓ}, call X_i^{(j)} the random variable that takes the value 1 if {E_i, I − E_i} accepts the j-th copy of ρ(y) (i.e., with probability p_i = Tr(ρ(y)E_i)), and the value 0 otherwise. Consider the case where y_i = 0. We have Tr[O_i ρ(y)] ≤ α_i − γ, which implies that ᾱ_i ≥ p_i + γ/(|λ_min| + |λ_max|). Therefore, the probability that at least ᾱ_i ℓ of the POVMs accept is at most Pr[Σ_{j=1}^ℓ X_i^{(j)} ≥ (p_i + γ/(|λ_min| + |λ_max|))ℓ]. From the Chernoff-Hoeffding bound, we hence get:

Pr[Σ_{j=1}^ℓ X_i^{(j)} ≥ ᾱ_i ℓ] ≤ exp(−2ℓγ²/(|λ_min| + |λ_max|)²) ≤ exp(−ℓγ²/(2∥O∥_∞²)) .

To guarantee an acceptance probability of at most 1/3, it is hence sufficient to take ℓ ∈ O(∥O∥_∞²/γ²). A similar analysis holds for the case y_i = 1. From here, the result we use that derives from Holevo's bound is Theorem 5.1 of Ambainis et al. [41]. It states that in order for the POVMs {E_i^*, I − E_i^*} to correctly identify whether y_i = 0 or 1 with probability of failure less than 1/3, we need a number of qubits nℓ ≥ (1 − H(1/3))k, where H is the binary entropy function. This implies that n∥O∥_∞²/γ² ∈ Ω(k), which contradicts our assumption and concludes the proof.

b. Generalization bounds of flipped models
The conditions (i) and (ii) of Theorem B.1 are very similar to those appearing in the definition of the fat-shattering dimension of a concept class.

Definition B.2. Let X be a data space, let C be a class of functions from X to R, and let γ > 0. The fat-shattering dimension of the concept class C at width γ, denoted fat_C(γ), is defined as the size k of the largest set of points {x^(1), . . ., x^(k)} for which there exist real numbers α_1, . . ., α_k such that, for all y ∈ {0, 1}^k, there exists an f ∈ C that satisfies, for all i ∈ {1, . . ., k}: f(x^(i)) ≤ α_i − γ if y_i = 0, and f(x^(i)) ≥ α_i + γ if y_i = 1.

By comparing this definition with the statement of Theorem B.1, we can show:

Corollary B.3. Consider the concept class of flipped models C_{n,O} = {x ↦ Tr[ρ(θ)O(x)]}_θ defined on a data space X, using n-qubit quantum states and observables with spectral norm at most ∥O∥_∞. Then fat_{C_{n,O}}(γ) ∈ O(n∥O∥_∞²/γ²).

Proof. We note that since, for all x ∈ X, O(x) lives in the set of all observables with spectral norm at most ∥O∥_∞, the fat-shattering dimension of C_{n,O} is upper bounded by that of the concept class C′_{n,O} = {O′ ↦ Tr[ρ(θ)O′]}_θ defined on the input space of observables O′ that satisfy ∥O′∥_∞ ≤ ∥O∥_∞. Theorem B.1 immediately yields an upper bound for fat_{C′_{n,O}}(γ), when we identify ρ(θ) in this corollary with ρ(y) in the theorem, and observe that the number of observables O_1, . . ., O_k that can be γ-shattered (i.e., that satisfy conditions (i) and (ii) for all labelings) must satisfy k ∈ O(n∥O∥_∞²/γ²).

To obtain generalization bounds on the performance of flipped models, we combine this bound on their fat-shattering dimension with standard results from learning theory, e.g.:

Theorem B.4 (Anthony and Bartlett [40]). Let X be a data space, let C be a class of functions from X to R, and let D be a probability measure over X. Fix an element f ∈ R^X, as well as error parameters ε, η, γ, δ > 0 with γ > η.
Suppose that we draw m samples X = (x^(1), . . ., x^(m)) from X according to D, and choose any hypothesis h ∈ C such that |h(x) − f(x)| ≤ η for all x ∈ X. Then, there exists a positive constant K such that, provided

m ≥ (K/ε) ( fat_C(γ/8) log²( fat_C(γ/8) / ((γ − η)ε) ) + log(1/δ) ) ,

we have Pr_{x∼D}[|h(x) − f(x)| > γ] ≤ ε, with probability at least 1 − δ over the choice of samples.

From Corollary B.3, we have that m ∈ Õ(n∥O∥_∞²/(γ²ε)) samples suffice to get these generalization guarantees. This proves Lemma 2 in the main text.

3. Flipping bounds
In this section, we study mappings between conventional and flipped models (most importantly from conventional to flipped, but our flipping bounds can be used either way). We find that the important quantity that governs the trade-off in resources between these models is the observable trace norm ∥O∥_1 = Tr√(O²) of the model to be mapped.
Since all observables O(y) (where y is either a data vector or a parameter vector, depending on the model) have to be turned into unit-trace density matrices, their eigenvalues need (i) either to be normalized when preserving the number of qubits of the model, or (ii) to be encoded in more qubits than in the original model (e.g., by using a binary encoding of all the eigenvalues of O(y)). Each of these options has its disadvantages: (i) normalizing eigenvalues introduces an overhead in the spectral norm ∥O′∥_∞ of the observable of the resulting model, which results in an overhead in the number of measurements needed to evaluate this model to the same precision as the original one. As for (ii), we show that the number of qubits of the new model would need to scale quadratically with ∥O∥_1 in general, which is commonly an exponential quantity in the number of qubits of the original model (e.g., when O is a Pauli observable). Lemma 1 in the main text is obtained from Theorem B.5 as stated below and from Theorem B.6 for the case m ∈ O(log d).

a. Upper bounds

Theorem B.5. Given a specification of a conventional quantum model f_θ(x) = Tr[ρ(x)O(θ)] acting on n qubits, with a known (upper bound on the) trace norm ∥O∥_1 of its observable, one can construct an equivalent flipped model f_θ(x) = Tr[ρ′(θ)O′(x)] acting on n qubits, whose observables satisfy ∥O′∥_∞ ≤ ∥O∥_1.

Proof. From Def. A.1, we assume that, in the definition of the conventional model, ρ(x) is obtained by applying a unitary U(x) on a known quantum state ρ_0 that is diagonal in the computational basis (i.e., is a mixture of computational basis states). As for O(θ), we assume that it is specified by a unitary V(θ) and a weighted sum Σ_{i=1}^d w_i O_i of Hermitian matrices with known eigendecompositions O_i = Σ_j λ_{i,j} W_i|j⟩⟨j|W_i†, for some known λ_{i,j}'s and W_i's. Note that, as opposed to the parameters that specify V(θ), the weights w_i influence the trace norm ∥O(θ)∥_1. Therefore we need to pay attention to the fact that ∥O(θ)∥_1 is only upper bounded by ∥O∥_1 = sup_θ ∥O(θ)∥_1, and that these two quantities are not always equal.
Out of the observables O(θ), we need to prepare quantum states ρ′(θ). The only difficulty is that quantum states are positive semi-definite and have unit trace, while Hermitian observables generally fulfill neither of these two conditions. To get around these constraints, we simply decompose the observables O(θ) into positive and negative components, that we both normalize. More precisely, call O_+(θ) and O_−(θ) the positive and negative components of O(θ), such that O(θ) = O_+(θ) − O_−(θ) and

ρ′(θ) = (O_+(θ) + O_−(θ)) / ∥O(θ)∥_1

is a valid quantum state (positive semi-definite and unit trace). We can then take O′(x) = ∥O(θ)∥_1 ρ(x), up to signs that are accounted for in classical post-processing (see Algorithm 1). However, this still does not give us a proper flipped model, as the renormalization factor ∥O(θ)∥_1 of O′(x) can depend on the parameters θ (and more precisely on the weights w_i, see the remark above). We would like to use here the upper bound ∥O∥_1 instead. To do so, we can simply (re-)define ρ′(θ) by rescaling its components by ∥O(θ)∥_1/∥O∥_1 and padding it with a dummy component of weight 1 − ∥O(θ)∥_1/∥O∥_1 that contributes zero to the model, and accordingly (re-)define O′(x) with the normalization factor ∥O∥_1 in place of ∥O(θ)∥_1.

Going a bit further, we also propose an algorithm to evaluate the flipped model we constructed in our proof (see Algorithm 1). Given that we only assume to know the eigenvalue decomposition of the single O_i's, and not that of the full observable O(θ), we do not decompose O(θ) directly into its positive and negative components, but rather the O_i's. For this, we define:

λ_{i,j}^+ = max(λ_{i,j}, 0),   λ_{i,j}^− = max(−λ_{i,j}, 0) .

We then recover ρ′(θ) via importance sampling of the indices i, j and ± and the implementation of the pure state V(θ)W_i|j⟩ (see Algorithm 1). Naturally, one could alternatively design a full unitary implementation of ρ′(θ) using ancillary qubits to prepare coherent encodings of the probability distributions appearing in Algorithm 1, along with controlled operations between these ancillas and the working register, but this would require more qubits and a more complicated quantum implementation.

Algorithm 1: Flipped evaluation of a conventional model
Input: an n-qubit unitary U(x) and a quantum state ρ_0 (diagonal in the computational basis) such that ρ(x) = U(x)ρ_0U†(x); n-qubit unitaries V(θ) and {W_i}_{1≤i≤d}; real values {w_i} and {λ_{i,j}} such that O(θ) = V(θ)(Σ_{i,j} w_i λ_{i,j} W_i|j⟩⟨j|W_i†)V†(θ); an upper bound ∥O∥_1 ≥ Σ_{i,j} |w_i λ_{i,j}|.
1. Sample indices (i, j) with probability |w_i λ_{i,j}|/∥O∥_1 (with the remaining probability, output 0 for this round).
2. Prepare |ψ⟩ = V(θ)W_i|j⟩, apply U†(x), and measure in the computational basis to obtain an outcome z; record the known eigenvalue (ρ_0)_{z,z}, an unbiased estimate of ⟨ψ|ρ(x)|ψ⟩.
3. Output ∥O∥_1 · sign(w_i λ_{i,j}) times the recorded value; the average over rounds estimates f_θ(x).
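The following numerical sketch (the dimensions, unitaries, and weights are our own toy choices) checks that this importance-sampling estimator is unbiased; the overlap ⟨ψ|ρ(x)|ψ⟩ is computed exactly here in place of the measurement of step 2:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4  # two qubits

def haar(seed):
    g = np.random.default_rng(seed).normal(size=(dim, dim)) \
        + 1j * np.random.default_rng(seed + 99).normal(size=(dim, dim))
    return np.linalg.qr(g)[0]

V, Ws = haar(0), [haar(1), haar(2)]                     # V(theta), {W_i}
w = np.array([0.8, -0.5])                               # weights w_i
lams = [np.array([1., -1., .5, 0.]), np.array([.3, .3, -.7, 1.])]
rho_x = np.outer(haar(3)[:, 0], haar(3)[:, 0].conj())   # data state rho(x)

probs = np.concatenate([np.abs(w[i] * lams[i]) for i in range(2)])
S = probs.sum()                                         # plays the role of ||O||_1

def one_round():
    k = rng.choice(len(probs), p=probs / S)
    i, j = divmod(k, dim)
    psi = V @ Ws[i][:, j]                               # prepare V(theta) W_i |j>
    overlap = np.real(psi.conj() @ rho_x @ psi)         # <psi|rho(x)|psi>, exact here
    return S * np.sign(w[i] * lams[i][j]) * overlap

est = np.mean([one_round() for _ in range(20000)])
O = V @ sum(w[i] * Ws[i] @ np.diag(lams[i]) @ Ws[i].conj().T for i in range(2)) @ V.conj().T
print(est, "vs exact:", np.real(np.trace(rho_x @ O)))
```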
b. Lower bounds

Theorem B.6. Let f(i) = Tr[ρ(i)O(y)] be a conventional model with observable trace norm ∥O(y)∥_1 ≤ d. If a flipped model f′(i) = Tr[ρ′(y)O′(i)] acting on m qubits satisfies, for an ε < 1/2,

|f′(i) − f(i)| ≤ ε, ∀i,    (B19)

then, it must also satisfy m∥O′∥_∞² ∈ Ω((1/2 − ε)² d²).

Proof. The core of the proof is to show that a conventional model Tr[ρ(i)O(y)] with trace norm ∥O(y)∥_1 = d and acting on n = ⌈log_2(N + 1)⌉ qubits, for N = ⌊d²/4⌋, can represent the function i ↦ y_i for 1 ≤ i ≤ N, for all y ∈ {0, 1}^N. For this, we take, for all 1 ≤ i ≤ N, ρ(i) = |ψ_i⟩⟨ψ_i| with |ψ_i⟩ = (|0⟩ + |i⟩)/√2, and O(y) an observable whose upper-left block is the adjacency matrix of the star graph connecting vertex 0 to every vertex i such that y_i = 1, where |y| = Σ_{i=1}^N y_i is the Hamming weight of y. By construction, this block satisfies ∥O^{(N×N)}(y)∥_1 = 2√|y| ≤ d (the adjacency matrix of a star graph of degree D = |y| ≤ N has trace norm 2√D [42]), such that, after suitable padding, ∥O(y)∥_1 = d for all y. Also, it is easy to check that Tr[ρ(i)O(y)] = y_i for all 1 ≤ i ≤ N.

We take this conventional model to be our target model. Satisfying the condition of Eq. (B19) is then equivalent to satisfying:

Tr[ρ′(y)O′(i)] ≤ ε if y_i = 0, and Tr[ρ′(y)O′(i)] ≥ 1 − ε if y_i = 1.

Now note that this condition is stronger than that of Theorem B.1 for γ = 1/2 − ε and α_i = 1/2 ∀i. Therefore, in order not to contradict this theorem, we must have m∥O′∥_∞²/(1/2 − ε)² ∈ Ω(N) = Ω(d²).

c. Circumventing lower bounds
Note that our flipping bounds are essentially tight only in the regime where the numbers of qubits n, m used by the original and resulting models are both in O(polylog(∥O∥_1)). This is a relevant regime, as it notably includes the case of Pauli observables (be they local or non-local) or linear combinations thereof. However, outside this regime our bounds can be circumvented.
Also note that an easy way of circumventing our lower bounds, even in the regime where n, m ∈ O(polylog(∥O∥_1)), is by imposing the constraint that O(y) is parametrized by |y| = O(poly(n)) parameters. Indeed, in this case, one can simply use a construction similar to that in Ref. [21] (Fig. 3), where one would encode y (e.g., in binary form) in ancilla qubits as |y⟩ and use controlled operations that are independent of y to simulate the action of gates parametrized by y. One can then note that by taking ρ′(θ) = ρ_0 ⊗ |y⟩⟨y| and the rest of the resulting circuit to define O′(x), one ends up with a flipped model that acts on O(poly(n)) qubits and satisfies ∥O′∥_∞ = ∥O∥_∞. For O defined by a Pauli observable for instance, we have ∥O∥_∞ = 1 and ∥O∥_1 = 2^n. Such a construction therefore does not suffer from an exploding spectral norm ∥O′∥_∞. However, it also does not lead to shadowfiable models in general, as the parametrized states ρ′(θ) play a trivial role and the observables O′(x) hide all of the quantum computation.
Appendix D: General shadow models

1. All shadow models are shadows of flipped models

Lemma D.1 (Formal version of Lemma 7). Every shadowfiable model f_θ (as per Defs. 5 and 6) can, for any approximation error, be approximated by a flipped model g_θ(x) = Tr[ρ(θ)O(x)] whose evaluation consists of computational basis measurements of ρ(θ) and efficient classical post-processing.

Proof. By Def. 6, the shadowfiable model f_θ admits, for every approximation error ε′ > 0 and probability of failure δ′ > 0, a shadow model f̃_θ that uses m · M ∈ O(poly(n, 1/ε′, 1/δ′)) qubits and guarantees

|f̃_θ(x) − f_θ(x)| ≤ ε′, ∀x ∈ X,

with probability 1 − δ′ over the generation of its advice. Out of this shadow model, we use the construction described in Fig. 4.b) to define the flipped model g_θ(x). Since this flipped model corresponds to the evaluation of the shadow model f̃_θ(x) averaged over the randomness of ω(θ), we have, for all x ∈ R^d:

|g_θ(x) − f_θ(x)| ≤ (1 − δ′) ε′ + δ′ E_max ,

where we use that with probability 1 − δ′ we have |f̃_θ(x) − f_θ(x)| ≤ ε′, and otherwise assume the worst-case error E_max. Choosing ε′ and δ′ appropriately small then yields any desired approximation guarantee.

2. BQP and P/poly
In this section we give a rigorous proof that there exist quantum models that are not shadowfiable, under the assumption that BQP ⊄ P/poly. The approach we take is similar to that of Huang et al. [34] (Appendix A), although we consider different complexity classes and therefore show different results. Let us start by noting that the shadow models defined in Def. 5 compute functions in a subclass of P/poly, namely the subclass in which the advice ω(θ) is efficiently generated from the measurements of a polynomial number of quantum circuits. We call this complexity class BPP/qgenpoly, and it is obvious that BPP/qgenpoly ⊆ P/poly, as the latter is equal to BPP/poly [32] and contains all classically efficiently computable functions with advice of polynomial length, without any constraints on how the advice is generated. On the other hand, we know that quantum models can compute all functions in BQP. To see this, let us recall the definition of BQP:

Definition D.2 (BQP). A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {U_n : n ∈ N}, such that:

1. For all n ∈ N, U_n takes as input an n-qubit computational basis state, and outputs one bit obtained by measuring the first qubit in the computational basis.
2. For all x ∈ L, the probability that the output of U_|x| applied on the input x is 1 is greater than or equal to 2/3.

3. For all x ∉ L, the probability that the output of U_|x| applied on the input x is 0 is greater than or equal to 2/3.
It is easy to show that for any language L in BQP, there exists a quantum model which can decide this language. Consider the following quantum model f_n(x) = Tr[ρ(x)O_n] defined as:

ρ(x) = (∏_{i=1}^n X_i^{x_i}) |0⟩⟨0|^⊗n (∏_{i=1}^n X_i^{x_i})† = |x⟩⟨x| ,   O_n = U_n† Z_1 U_n ,

where X_i is the Pauli-X gate acting on the i-th qubit, here raised to the power x_i ∈ {0, 1}, the i-th bit of x, Z_1 is the Pauli-Z observable on the first qubit, and U_n is the quantum circuit used to decide the language L in Def. D.2. Then:

1. For all x ∈ L, f_n(x) = Pr[the output of U_|x| applied on the input x is 1] − Pr[the output of U_|x| applied on the input x is 0] ≥ 2/3 − 1/3 = 1/3.
2. For all x ∉ L, f_n(x) = Pr[the output of U_|x| applied on the input x is 1] − Pr[the output of U_|x| applied on the input x is 0] ≤ 1/3 − 2/3 = −1/3.

Therefore, as f_n(x) > 0 if x ∈ L and f_n(x) < 0 if x ∉ L, such a quantum model can efficiently decide the language. We show that if all such quantum models were shadowfiable, then BQP ⊆ P/poly.

Lemma D.3. If all quantum models f_θ are shadowfiable with the guarantee that, ∀x ∈ X,

|f̃_θ(x) − f_θ(x)| < 0.15,    (D9)

with probability at least 2/3 over the shadowing phase and the randomness of evaluating the shadow model f̃_θ, then BQP ⊆ P/poly.
3. Shadow models beyond Fourier

A subtlety of the model of Eq. (11) is that a description of its circuit could reveal the marked string y, e.g., when measuring the output state of the circuit. More naturally, when measuring a basis state |j⟩, one would have a computable function that returns its corresponding eigenvalue, which in this case could reveal y. An easy fix for this is to redefine O(y) such that the dependence on y enters through the circuit's gates rather than through the labels of its eigenvalues, which delegates the obfuscation of y to gates in the circuit.
One can also go a step further in this obfuscation by considering instead observables whose basis-state labels are scrambled by the unitary V_DLP that maps a basis state to its discrete logarithm:

V_DLP : |y⟩ ↦ |(log_g(y) mod p) + 1⟩ = |y′⟩ .    (D16)

Now, even the knowledge of y and a description of the quantum circuit do not help identify y′ classically, under the classical hardness assumption of DLP. Moreover, we still retain the hardness of Fourier-shadowing the resulting model f_y(x) = Tr[ρ(x)O(y)] from the same database-search arguments. And finally, the flipped model associated to f_y still benefits from the same efficient shadowing procedure, as O(y) can be prepared on a quantum computer and measured in the computational basis to reveal y′.

4. Shadowfiability
In our definition of shadowfiable models in the main text (see Def. 6), we take the convention that the shadow model should agree with the original quantum model on all possible inputs x ∈ X. This choice makes sense for two reasons:

1. We would like the shadowing procedure to work on all potential data distributions, so as to be applicable in all learning tasks a given quantum model could be used in.
2. In the context of machine learning, one typically considers PAC conditions, meaning that the final model should achieve a small error E_{x∼D}|h(x) − g(x)| only with respect to some data distribution D. Note that if the quantum model to be shadowfied achieves these PAC conditions, our demands on worst-case approximation will guarantee that the shadow model achieves them as well.
Nonetheless, one may still be interested in a notion of shadowfiability that considers an average-case error with respect to a specified data distribution D. It is not entirely clear which models can still be shadowfied in this way. But our results on the universality of flipped models (Lemma 7 in the main text and Lemma D.1 in the Appendix), as well as on the existence of quantum models that are not shadowfiable (Theorem 8 in the main text and Lemma D.3), would also hold. More precisely, for each of these results, respectively:

1. The same proof structure as in Lemma D.1 can be used, as the constructed flipped model only adds a small controllable error for each x ∈ X.
2. One can consider here, instead of quantum models that compute arbitrary functions in BQP, a restricted model that computes (single bits of) the discrete logarithm log_g(x) mod p (analogous to our DCR concept class defined in Eq. (C5)). The result of Liu et al. [16] (Theorem 6 in the Supplementary Information) shows the classical hardness of achieving an expected error E_{x∼D}|h(x) − g(x)| ≤ 1/2 − 1/poly(n) for such target functions (and a hypothesis h producing labels h(x) ∈ {0, 1}), under the assumption that DLP ∉ BPP. One can then use this result to show that there exist quantum models that are not average-case shadowfiable, under the assumption that DLP ∉ P/poly.

Figure 1. Quantum and shadow models. (left) Conventional quantum models can be expressed as inner products between a data-encoding quantum state ρ(x) and a parametrized observable O(θ). The resulting linear model f_θ(x) = Tr[ρ(x)O(θ)] naturally corresponds to a quantum computation, depicted here. (middle) We define flipped models f_θ(x) = Tr[ρ(θ)O(x)] as quantum linear models where the roles of the quantum state ρ(θ) and the observable O(x) are flipped compared to conventional models. (right) Flipped models are associated to natural shadow models: one can use techniques from shadow tomography to construct a classical representation ρ̃(θ) of the parametrized state ρ(θ) (during the shadowing phase), such that, for encoding observables O(x) that are classically representable (e.g., linear combinations of Pauli observables), ρ̃(θ) can be used by a classical algorithm to evaluate the model f_θ(x) on new input data (during the evaluation phase). More generally, a shadow model is defined by (i) a shadowing phase where a (bit-string) advice ω(θ) is generated by the evaluation of multiple quantum circuits W_1(θ), . . ., W_M(θ), and (ii) an evaluation phase where this advice is used by a classical algorithm A, along with new input data x, to evaluate their labels f̃_θ(x). In Section III, we show that under this general definition, all shadow models are shadows of flipped models.

Figure 2. Separations between classical, shadow, and quantum models. Under the assumption that the discrete cube root (DCR) cannot be computed classically in polynomial time, we have a separation between shadow models (captured by the class BPP/qgenpoly) and classical models (in BPP). Under the assumption that there exist functions that can be computed in quantum polynomial time but not in classical polynomial time with the help of advice (i.e., BQP ⊄ P/poly), we have a separation between quantum models (universal for BQP) and shadow models (BPP/qgenpoly). A candidate function for this separation is the discrete logarithm (DLP).



Figure 3. A visualization of the functions involved in the quantum advantage learning task. The core functions of this task map Z_N = {0, . . ., N − 1} to itself, for N a large semiprime. a) In feature space, data is linearly separable by a hyperplane parametrized by a certain s ∈ Z_N. One can efficiently transform data y in feature space into its corresponding data x in input space via the "discrete cube" function x = y³ mod N. b) To a fully classical learner, data in input space looks randomly labeled, as inverting it back to feature space via the discrete cube root function y = 3√x mod N is believed to be classically intractable. However, a shadow model can make use of the trap-door property of the discrete cube root function to efficiently compute a key d ∈ Z_N using a quantum computer and classically map data to feature space through the transformation y = x^d mod N.

Figure 4. All shadow models can be expressed as shadowfiable flipped models. a) A shadow model consists of M unitary circuits W_i(θ) that can be chosen adaptively, and that generate advice ω_i(θ) from computational basis measurements of the states W_i(θ)|0⟩^⊗m. This advice, along with a (binary description of) an input x ∈ R^d, is processed by a classical algorithm A to compute an approximation f̃_θ of the shadowfiable model f_θ. b) A coherent implementation of this shadow model, where the unitaries W_i(θ) are applied on different m-qubit registers, and coherently controlled by previous registers (for adaptivity). These M registers constitute the coherent encoding of the advice |ω(θ)⟩. The algorithm A can then be simulated by a reversible quantum computation U_A (see Sec. 3.2.5 in [43]) that processes a binary encoding |x⟩ of x and the coherent advice |ω(θ)⟩ (either directly or indirectly via controlled operations that imprint |ω(θ)⟩ on an ancilla register). This coherent implementation of the shadow model can be viewed as a shadowfiable flipped model g_θ(x) = Tr[ρ(θ)O(x)], such that one evaluation of this model samples an advice ω(θ) and evaluates A(x, ω(θ)) for that advice and a given x.

Figure 5. A universal quantum model for BQP. For an n-dimensional input x ∈ {0, 1}^n, this model acts on n qubits, encodes x in its binary form |x⟩ and applies a poly(n)-time unitary U_n before a Pauli-Z measurement of the first qubit. For appropriately chosen unitaries {U_n : n ∈ N}, this model can decide any language in BQP. For more general computational basis measurements, the resulting model can represent arbitrary functions in FBQP, the functional version of BQP.