Optimised resource construction for verifiable quantum computation

Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computing, initiated by Fitzsimons and Kashefi. We present here a new construction that simplifies the required resources for any such verifiable blind quantum computing protocol. We obtain an overhead that is linear in the size of the input, while the security parameter remains independent of the size of the computation and can be made exponentially small. Furthermore, our construction is generic and could be applied to any non-universal scheme with a given underlying graph.


Introduction
It is widely believed that quantum computers, and quantum devices in general, can outperform their classical counterparts. There are problems that quantum computers can solve efficiently but for which classical computers are believed to require time exponential in the size of the input. If the problem is in NP, a classical verifier can efficiently check the result of the quantum device. However, there are problems believed to be outside NP, such as quantum simulation [1,2] or other BQP problems [3], for which the verifier needs to resort to different techniques to detect a 'dishonest' quantum device. Currently the most efficient way to verify a quantum computation is to employ cryptographic methods, where an almost classical verifier executes the computation using an untrusted but fully quantum prover. There have been a number of such verification methods [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20], where generally there exists a trade-off between the practicality of a scheme on the one hand and its generality, trust assumptions and security level on the other. It is the target of this work both to reduce the experimental requirements of the most general schemes and to achieve further improvements in the more restricted schemes. In general, in order to make quantum verification schemes practical, a number of different aspects have been considered. While a full review of those aspects is beyond the scope of this paper, it is worth noting that most of them have been addressed using protocols based on verification via blind quantum computing [4,[6][7][8][13][14][15]. We will refer to this family of protocols collectively as verifiable blind quantum computation (VBQC) schemes, where the key idea is based on hiding the underlying computation (also known as blindness).
This would allow the verifier to encode simple trap computations within a general computation that runs on a remote device in such a way that the computation is not affected, while revealing no information to the device. The correctness of the general computation is then tested via the verification of the trap computation. The latter is significantly less costly and thus leads to an efficient scheme (essentially similar to an error detection code). What makes the procedure work is the blindness which hides the trap computation from the actual one. To elaborate further on the security parameter scaling, consider the following informal definition of verification that we formalise later (for details see also [4]).

Definition 1.
A quantum computation protocol is ε-verifiable if the probability of accepting an incorrect output for any choice of the prover's strategy is bounded by ε.
In a practical scenario, to be convinced of the correctness of the output obtained from a given quantum device, one needs a verification protocol where the security parameter ε can be made arbitrarily small while keeping the cost (in terms of the experimental requirements) optimal. The standard technique for amplification when dealing with classical output is to simply repeat the protocol multiple times (let us say d times); if all rounds are accepted and result in the same outputs, then this output is correct except with probability ε^d. However, dealing with quantum output requires more elaborate methods (to deal with the possibility of coherent attacks) which involve the use of full fault-tolerant computation and the presence of multiple traps in order to achieve exponential bounds.
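As a concrete illustration of the repetition argument, the number of rounds d needed to push the overall bound ε^d below a target ε_target is d = ⌈log ε_target / log ε⌉, independent of the size of the computation. A minimal sketch, where the per-round bound 8/9 is the one-shot bound of the protocol in this paper and the target value is an arbitrary illustrative choice:

```python
import math

def repetitions_needed(eps_round: float, eps_target: float) -> int:
    """Smallest d such that eps_round**d <= eps_target, i.e. the number
    of accepted, consistent rounds needed for the classical-output case."""
    return math.ceil(math.log(eps_target) / math.log(eps_round))

# Per-round bound 8/9 (one-shot protocol), illustrative target 1e-9.
# Note that d depends only on the two bounds, not on the computation size.
d = repetitions_needed(8 / 9, 1e-9)
```
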
It is evident that in order to obtain a verifiable quantum computation, some extra cost in terms of resources is needed. However, one wishes to ensure that the extra cost of verification is not excessive, and in particular that it does not exceed the speed-up that one obtains from using a quantum algorithm. Here it is worth mentioning that quantum algorithms in many cases provide a polynomial speed-up (e.g. Grover's search) and if their verification requires an extra quadratic cost it could reduce considerably, or even eliminate, the advantage of the quantum algorithm.

Our contribution
In this work we focus on further improving the underlying resource construction required for VBQC schemes. Our main results can be summarised as follows: (i) In section 2, inspired by the dotted-complete graph state introduced in [4], we give a generic construction where, for any given (universal or non-universal) graph state resource, multiple trap qubits isolated from the computation qubits can be added. Unlike the dotted-complete graph state, the overhead of the new construction is only linear (instead of quadratic) in the size of the specific computation that will be performed. Furthermore, the traps are uniformly distributed and their positions are essentially independent from each other. (ii) We use this construction to obtain a new universal VBQC protocol (section 3) that has a lower cost. Since we are using a different resource, the proof technique had to be adapted accordingly. Our protocol, even before adding any boosting mechanism, has a constant security parameter and thus allows a straightforward one-shot experiment. (iii) When the output of the quantum computation is classical, we use a repetition technique to boost the security of our protocol to arbitrarily small ε (section 4.1). Importantly, we can achieve this using a constant number of repetitions which is independent of the size of the computation and scales only with the desired security parameter, leading to an overall linear overhead. In previous VBQC protocols the number of repetitions required increased with the size of the computation. (iv) For the general quantum output case, we use a fault-tolerant encoding of the computation to boost the security to arbitrarily small ε while still requiring only a linear, in the size of the computation, overhead (section 4.2), with a moderate extra cost that depends on the security parameter and scales as O(log(1/ε)) depending on the fault-tolerant encoding used.
The overhead of previous VBQC protocols (except [7]) is quadratic (on top of the security-parameter logarithmic dependency).
Our construction could be used to optimise various other existing VBQCs (see appendix F).

Related works
There have been a number of papers on verification addressing different aspects. We do not aim to give a complete list but we give here a brief description of some related works. Aharonov, Ben-Or and Eban [5] provided the first verification protocol. This required a linear overhead in the size of the computation, but also required a verifier with involved quantum abilities, in particular the ability to prepare entangled states whose size depends on the security parameter.
Following another approach, based on measurement-based quantum computation, Fitzsimons and Kashefi [4] obtained the optimal scheme from the point of view of the capabilities required of the verifier. However, the overhead of the full scheme becomes quadratic. Recently a solution for addressing this issue was proposed in [7] by combining the above two protocols [4,5] to construct a hybrid scheme. This was the only verification protocol (before our work) that requires a linear number of qubits while at the same time requiring of the verifier only the minimal quantum property of preparing single quantum systems. However, the protocol requires the preparation of qudits (rather than qubits) whose dimension is dictated by the desired level of security. Moreover, the required resource is still constructed based on a dotted-complete graph state, although of small constant size. Hence further investigation is needed to compare the experimental simplicity of the two schemes, ours and the one in [7].
The first experimental implementation of a simplified verification protocol was presented in [6] where a repetition technique was also explored. Other experiments on verifiable protocols include [19] and an experiment based on the protocol in [17]. However, none of these works are applicable to a full universal scheme such as ours.
On the other hand, to achieve a classical verifier, new techniques have been proposed using either two provers, at the cost of increasing the overall overhead of the protocol dramatically [11], or an even greater number of provers [12]. Other device-independent protocols [13,14] used a single universal quantum prover and an untrusted qubit-measuring device, and while the complexity improved (compared to the two-prover protocol [11]) it was still far from experimentally realisable.
The VBQC protocol could be generally viewed as a prepare and send scheme (using the terminology from quantum key distribution). Equivalent schemes based on measurement-only could also be obtained [9,10]. In this scenario the prover prepares a universal resource and sends it qubit-by-qubit to the verifier that performs different measurements in order to complete a quantum computation. These protocols are referred to as online protocols (in contrast to the offline protocols mentioned above) since the quantum operations of the verifier occur when they know what they want to compute. The online scheme can also achieve verification either by creating traps [9], or by measuring the stabiliser of the resource state [10]. These protocols could be improved using our techniques (see appendix F).
Finally a composable definition of [4] is given in [15], while a limited computational model (one-pure-qubit) is examined in [8]. Due to the generic nature of our construction these results would also be applicable to our protocol.
The verification protocols in [16,20] are teleportation based. Due to the general mapping (see [21,22]) between the teleportation (with two-qubit measurement) and one-way computation (with single-qubit measurement), one can also explore any possible improvement that our techniques could bring to these new protocols. For example, it may be possible to amplify the probability of success for quantum output with minimal extra cost, given a constant probability of error of the 'one-shot' protocol (which is already achieved in [16]) combining the technique of [4] that uses fault-tolerant encoding with our local resource construction.

Background
The family of VBQC protocols is conveniently presented in the measurement-based quantum computation (MBQC) model [23], which is known to be equivalent to any gate teleportation model [24]. We will assume that the reader is familiar with this model; further details can be found in [22]. The general idea behind an MBQC protocol is that one starts with a large and highly entangled multiparty state (the resource state) and the computation is performed by carrying out single-qubit measurements. There is an order to the measurements since the basis of a measurement may depend on the outcomes of previous measurements. The resource states used are known as graph states as they can be fully determined by a given graph; see [25] for details. One way to construct a graph state given the graph description is to assign to each vertex of the graph a qubit initially prepared in the state |+⟩ and, for each edge of the graph, to perform a controlled-Z gate between the two adjacent vertices. If one starts with a graph state where qubits are prepared in a rotated basis, then it is possible to perform the same computation as with the non-rotated graph state by performing measurements in a similarly rotated basis. This observation led to the formulation of the universal blind quantum computation (UBQC) protocol [26] which hides the computation in a client-server setting. Here a client prepares rotated qubits, where the rotation is only known to them. The client sends the qubits to the server (as soon as they are prepared, hence there is no need for any quantum memory). Finally, the client instructs the server to perform entangling operations according to the graph and to perform single-qubit measurements at suitable angles in order to carry out the desired computation (where an extra randomisation r_i of the outcome of each measurement is added). During the protocol the client receives the outcomes of previous measurements and can classically evaluate the next measurement angle.
Due to the unknown rotation and the extra outcome randomisation, the server does not learn what computation they actually perform.
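The graph-state construction described above (qubits in |+⟩, one controlled-Z per edge) can be sketched directly on state vectors for a handful of qubits. This is a toy illustration of the definition, not part of any protocol; the indexing convention (qubit k is bit k of the basis-state index, most significant first) is our own choice:

```python
import numpy as np

def graph_state(n, edges):
    """Build |G>: prepare |+>^n, then apply CZ across every edge.
    The CZ gate flips the sign of amplitudes where both qubits are 1."""
    dim = 2 ** n
    psi = np.full(dim, 1.0 / np.sqrt(dim))   # |+>^n: uniform amplitudes
    for (a, b) in edges:
        for idx in range(dim):
            if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
                psi[idx] *= -1.0             # CZ phase on |..1..1..>
    return psi

# Three-qubit path graph 0-1-2: the amplitude of |b0 b1 b2> carries the
# sign (-1)^(b0*b1 + b1*b2), with uniform magnitude 1/sqrt(8).
psi = graph_state(3, [(0, 1), (1, 2)])
```
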
The UBQC protocol can be uplifted to a verification protocol where the client (referred to now as verifier) can detect a cheating server (referred to now as prover). To do so, the verifier for certain vertices (called dummies) sends states from the set {|0⟩, |1⟩}, which has the same effect as a Z-basis measurement on that vertex. In any graph state, if a vertex is measured in the Z basis the result is a new graph state where that vertex and all its adjacent edges are removed. During the protocol the prover does not know, for a particular vertex, whether the verifier sent a dummy qubit or not. This enables the verifier to isolate some qubits (disentangled from the rest of the graph). Those qubits have fixed deterministic outcomes if the prover follows the instructions honestly. The positions of those isolated qubits are unknown to the prover, and the verifier uses them as traps to test that the prover performs the quantum operations that they are given. This technique led to the first universal VBQC protocol [4] which is the basis of our paper. While the trapification idea is straightforward, it is challenging to find the optimal way of inserting trap qubits without breaking the general computation. This is the central focus of this paper: to introduce a general optimised scheme for constructing graph state resources for VBQC protocols.
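The dummy-qubit mechanism can be checked numerically on two qubits: entangling a dummy |d⟩ with a neighbour via CZ leaves the pair in a product state, with the neighbour either untouched (d = 0) or carrying a known Z correction (d = 1), so the vertex is effectively removed from the graph. A minimal check of this identity:

```python
import numpy as np

CZ = np.diag([1.0, 1.0, 1.0, -1.0])        # controlled-Z on two qubits
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # neighbour qubit in |+>
ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

# Entangle a dummy |d> with the neighbour: the result stays a product
# state, i.e. the dummy never becomes entangled with the graph.
out0 = CZ @ np.kron(ket[0], plus)          # d = 0: neighbour unchanged
out1 = CZ @ np.kron(ket[1], plus)          # d = 1: neighbour picks up Z
```
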

The dotted triple-graph construction
Our construction starts with a 'base' graph G such that the related graph state |G⟩ can be used as the resource to perform a particular (or universal) quantum computation in MBQC. This graph is then 'decorated' in a suitable way, resulting in a graph that we will call the dotted triple-graph DT(G), which defines the resource state |DT(G)⟩ for running a verified quantum computation in an efficient way. The general idea is to construct the DT(G) graph which, after some operations (chosen secretly by the verifier), can be broken into three identical graphs. One will be used to perform the desired computation and the other two to insert trap computations to detect possible deviations. The way that the DT(G) is broken is chosen by the verifier, and thus the prover is blind to which vertex belongs to which graph. This general idea was first introduced in [4]. The key difference of our construction is that while in [4] the breaking into subgraphs occurs in a global way, in our construction it happens locally. This difference results in a reduction in the number of vertices (and thus qubits).
Our local construction, defined precisely later, means that the prover can obtain certain information about the graph without compromising the security. Therefore knowledge or leaking of secret parameters at one part of the computation does not affect other positions. This property makes the present construction particularly useful for applications and extensions that involve multiple parties, a fact exploited in the secure two-party quantum computation protocol of [27].
In this section we will only give definitions and properties of the dotted triple-graph construction when viewed purely as graph operations. These properties will play a crucial role in the next sections, where we will use the dotted triple-graph state |DT(G)⟩ as a resource state in order to obtain verifiable quantum computation protocols.

Definition 2 (Introduced in [4]).
We define the dotting operator D on a graph G to be the operator which transforms G into a new graph, denoted D(G) and called the dotted graph, by replacing every edge in G with a new vertex connected to the two vertices originally joined by that edge. We call the vertices of D(G) inherited from G the primary vertices P(D(G)), and the new vertices the added vertices A(D(G)).

Dotted triple-graph construction
(i) We are given a base graph G with vertices v ∈ V(G) and edges e ∈ E(G), as in figure 1(a). In the following steps we construct the new graph DT(G), called the dotted triple-graph, and specify its vertices and edges.
(ii) For each vertex v_i ∈ V(G) we define a set of three new vertices P_{v_i} = {p_{v_i}^1, p_{v_i}^2, p_{v_i}^3}.
(iii) Corresponding to each edge e(v_i, v_j) ∈ E(G) we introduce a set of nine edges, connecting each vertex of P_{v_i} with each vertex of P_{v_j}.
(iv) We then apply the dotting operator of definition 2 to the resulting graph; the outcome is the dotted triple-graph DT(G).
Note that, according to definition 2 and the labelling in the above construction, the primary vertices are given as P(DT(G)) = ∪_{v∈V(G)} P_v. For convenience we also label the added vertices A(DT(G)) as follows: corresponding to each edge e(v_i, v_j) of the base graph there is a set A_{e(v_i,v_j)} of nine added vertices, one for each of the nine edges introduced in step (iii). If the maximum degree of the base graph is a constant c, then the number of vertices of DT(G) is linear in the number of vertices of the base graph. This property means that if we can base our verifiable quantum computation protocol on this graph, then the number of qubits we will need is linear in the size of the computation. In what follows we present a labelling scheme that for convenience we present as a colouring (however, connected vertices may have the same colour).
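The construction is purely graph-theoretic and easy to sketch. The following toy implementation (the vertex labels and adjacency representation are our own choices) triples the vertices, connects the triples of each base edge pairwise, and applies the dotting operator; for a single base edge it yields 6 primary plus 9 added vertices, 15 in total:

```python
def dot(vertices, edges):
    """Dotting operator D (definition 2): replace each edge (u, v) by a
    new added vertex connected to u and v."""
    new_vertices = set(vertices)
    new_edges = set()
    for (u, v) in edges:
        a = ("added", u, v)                 # one added vertex per edge
        new_vertices.add(a)
        new_edges |= {(u, a), (a, v)}
    return new_vertices, new_edges

def dotted_triple_graph(vertices, edges):
    """DT(G): triple each vertex, connect the two triples of every base
    edge pairwise (9 edges per base edge), then apply the dotting operator."""
    tv = {(v, k) for v in vertices for k in (1, 2, 3)}
    te = {((u, i), (v, j)) for (u, v) in edges
          for i in (1, 2, 3) for j in (1, 2, 3)}
    return dot(tv, te)

# Base graph: a single edge u-v.
V, E = dotted_triple_graph({"u", "v"}, {("u", "v")})
```

Each primary vertex ends up with degree 3 (one added neighbour per edge of the tripled graph), so for maximum base degree c the total vertex count stays linear in the base graph size.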

Definition 3 (Trap-Colouring).
We define trap-colouring to be an assignment of one colour to each of the vertices of the dotted triple-graph that is consistent with the following conditions.
(i) Primary vertices are coloured in one of the three colours white, black or green. (ii) Added vertices are coloured in one of the four colours white, black, green or red. (iii) In each primary set P_v there is exactly one vertex of each colour. (iv) Colouring the primary vertices fixes the colours of the added vertices: added vertices that connect primary vertices of different colours are red, while added vertices that connect primary vertices of the same colour get that colour.
Note that the colours for each of the primary sets P_v can be chosen randomly and independently from the choices made on other primary sets. We can also see that in each of the added sets A_e we have one white, one black, one green and six red vertices. It is easy to see that such a labelling can be obtained efficiently for any graph.
In figures 2(a) and (b) we see an example of trap-colouring, where in (a) we choose independently the colours of the primary vertices and in (b) the colours of the added vertices are fixed following the rules for trap-colouring given above.
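A trap-colouring sampler makes the counting in the note above concrete: a random permutation of the three colours in each P_v independently, with the added colours derived from the endpoints. A sketch (labels and the fixed seed are illustrative):

```python
import random

COLOURS = ("white", "black", "green")

def trap_colouring(base_vertices, base_edges, rng=random.Random(7)):
    """Sample a trap-colouring (definition 3): an independent random
    permutation of the three colours in each primary set P_v; each added
    vertex takes the common colour of its endpoints, or red otherwise."""
    primary = {}
    for v in base_vertices:
        perm = list(COLOURS)
        rng.shuffle(perm)                      # independent per P_v
        for k in (1, 2, 3):
            primary[(v, k)] = perm[k - 1]
    added = {}
    for (u, v) in base_edges:
        for i in (1, 2, 3):
            for j in (1, 2, 3):
                cu, cv = primary[(u, i)], primary[(v, j)]
                added[((u, i), (v, j))] = cu if cu == cv else "red"
    return primary, added

primary, added = trap_colouring({"u", "v"}, {("u", "v")})
# Whatever the permutations, each A_e holds exactly one white, one black,
# one green and six red vertices.
```
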

Definition 4 (Introduced in [4]).
We define the break operator on a vertex v of a graph G to be the operator which removes vertex v and any edges adjacent to v from G.

Lemma 1. Given the dotted triple-graph DT(G) and a trap-colouring, by performing break operations on the red vertices we obtain three identical copies of the dotted base graph D(G), each of them consisting of a single colour.
The proof is given in appendix A.1. Figure 2(c) illustrates how the DT(G) breaks into three identical dotted base graphs after performing break operations on the red vertices.
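Lemma 1 can be checked mechanically on a toy instance. Below, a DT(G) of a single base edge is hand-coded with a trap-colouring baked into the vertex labels (our own labelling, for illustration only); breaking the red vertices leaves three disjoint single-colour copies of D(G):

```python
def break_vertices(vertices, edges, removed):
    """Break operator (definition 4): delete the given vertices together
    with every edge adjacent to them."""
    keep = set(vertices) - set(removed)
    return keep, {(a, b) for (a, b) in edges if a in keep and b in keep}

# DT(G) for one base edge (u, v): primary ("p", vertex, colour), added
# ("a", cu, cv) sitting between colour-cu and colour-cv primary vertices.
colours = ("white", "black", "green")
vertices = {("p", x, c) for x in ("u", "v") for c in colours}
vertices |= {("a", cu, cv) for cu in colours for cv in colours}
edges = {(("p", "u", cu), ("a", cu, cv)) for cu in colours for cv in colours}
edges |= {(("a", cu, cv), ("p", "v", cv)) for cu in colours for cv in colours}
red = {("a", cu, cv) for cu in colours for cv in colours if cu != cv}

keep, kept_edges = break_vertices(vertices, edges, red)
# What survives: for each colour c, the path p_u^c - a^(c,c) - p_v^c,
# i.e. three disjoint copies of the dotted base graph D(G).
```
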

Definition 5.
We define the base-location of a vertex f ∈ V(DT(G)) of the dotted triple-graph to be the position that the set P_v or A_e containing f has in the dotted base graph D(G). This position is denoted either by 'v', corresponding to the specific primary vertex of D(G), or by 'e', corresponding to the specific added vertex of D(G) on the edge e.
Given a trap-colouring, each primary vertex belongs to one of the three graphs, where the colour is determined by the trap-colouring. However, its base-location is fixed prior to the trap-colouring. Here we can see the difference between our construction and that of [4]. There a dotted-complete graph was used and the graph also broke into three identical graphs, where all primary vertices belonged to one of these graphs. However, there was no restriction as to how this break happens, and any choice of three equal subsets was valid. In our construction we maintain the structure of the base-locations (reducing the number of added vertices required), while at the same time the colour choices at one primary base-location are totally independent from the colour choices at other primary base-locations.

Lemma 2. Given a dotted graph D(G), by applying break operators to every vertex in P(D(G)) or A(D(G)) the resulting graph is composed of the vertices of A(D(G)) or P(D(G)) respectively and contains no edges.
This property was essentially proved in [4] (see appendix A.2) and is required for the verification protocols presented in the next sections. In figure 2(d) we see, after the break operations of figure 2(c), further break operations performed on all white added vertices and on all black primary vertices. We end up with a (green) copy of the dotted base graph, white isolated traps at primary vertices and black isolated traps at added vertices.
There are common properties that we will prove for both primary and added vertices, and for ease of notation we will refer to either such set P_v or A_e as F_l, with the convention that the subscript l denotes the base-location of the set: when it takes the value v (primary base-location) the set is P_v, and when it takes the value e (added base-location) the set is A_e.
Next we show that while the trap-colouring is a global construction, it can indeed be considered as a local scheme. This property will be exploited in our proof technique for the verification. We formalise this notion in the next set of definitions and lemmas.

Definition 6. We define a local-colouring of a set F_l to be an assignment of colours to that set that is consistent with some global trap-colouring.
This definition captures the idea of colouring a particular set F_l, corresponding to base-location l, such that it can be part of some global trap-colouring without any further constraints from colours of vertices at other base-locations. We can see that a local-colouring of an added set A_{e_{ij}} fully determines the colours of the vertices in the two neighbouring primary base-locations P_{v_i}, P_{v_j}, while the converse is also true: a local-colouring of the two primary sets P_{v_i}, P_{v_j} fully determines the colours of the added set A_{e_{ij}}. We can therefore see that a local-colouring of the set A_{e_{ij}} is equivalent to a local-colouring of the two neighbouring primary base-locations P_{v_i}, P_{v_j}. We can also see that a local-colouring of all primary sets P_v is compatible with a trap-colouring and fixes it uniquely.
However, if we have two general sets F_{l_1}, F_{l_2} it is not always possible to colour them both using a local-colouring and still be able to find a global trap-colouring. An example is a primary set P_{v_1} and its neighbouring added set A_{e_{1k}}, where a local-colouring of the set P_{v_1} imposes constraints on the colours of A_{e_{1k}} beyond those required by a local-colouring: e.g. an added vertex connected to a white primary vertex can be either white or red, but can never be black. An added set A_{e_{ij}} can have a local-colouring if there is no constraint on the colours from the neighbouring primary sets P_{v_i}, P_{v_j}, nor from other added sets A_{e_{ik}}, A_{e_{jk}} that have common neighbouring sets (either P_{v_i} or P_{v_j}). We wish to characterise when there is a collection of base-locations such that one can assign (independently) local-colourings to all the related sets F_l and still always be able to find a global trap-colouring.

Definition 7 (Independently colourable locations (ICL)).
Given a dotted triple-graph DT(G) and a collection of n base-locations E with corresponding sets F_l, we call the set E independently colourable locations if any local-colouring of the sets F_l is consistent with at least one trap-colouring.
We should stress at this point that ICL is a property of a collection of base-locations and not of vertices. The motivation is that for those base-locations, one could independently colour the vertices of each base-location and still obtain an allowed trap-colouring. In other words, what this definition captures is that the choice of colours within each of the sets F_l corresponding to a base-location in E is independent from the choice of colours in the other sets F_{l′} with base-location in E. For each base-location l we define the extended set Ē(l) to contain only l if the base-location is primary, and to contain l together with the two primary base-locations adjacent to l if the base-location is added.

Lemma 3. A set of n base-locations E is ICL if and only if for all pairs l_1, l_2 ∈ E the extended sets Ē(l_1) and Ē(l_2) are disjoint, where Ē(l) contains only l if l is a primary base-location, and contains l together with the two adjacent primary base-locations if l is an added base-location.
The proof is given in appendix A.3. The following property is necessary for section 4.2.

Theorem 1. Consider a dotted triple-graph DT(G) and a set S of n base-locations, and assume that the base graph G has maximum degree c. Then there exists a subset S′ ⊆ S of these base-locations that is ICL (independently colourable locations) and contains at least n/(2c + 1) locations.

Proof. The set S has n locations of the graph D(G). We want a subset S′ of these locations that satisfies lemma 3. The condition of that lemma requires that if a primary base-location v_i is included, then all its neighbouring base-locations should be excluded. The maximum number of neighbours is given by the maximum number of added base-locations, which is c. Therefore if we include the base-location v_i in the set S′, we might need to exclude at most c other base-locations from the set S.

To include an added base-location e_{ij} in the set S′, lemma 3 requires that its neighbours v_i, v_j and the neighbours of its neighbours e_{ik}, e_{jk} should be excluded. Its neighbours are 2, while the neighbours of the neighbours are at most 2(c − 1). It follows that to include e_{ij} in the set S′ we might need to exclude at most 2c other base-locations from the set S. From the pigeonhole principle it follows that we can find a set S′ with at least n/(2c + 1) base-locations that are ICL. □

The existence of this number of ICL base-locations is what is necessary for the proof of section 4.2. However, we should note that finding such an S′ given S can be done efficiently, essentially by following the procedure of the above proof.
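The selection procedure implicit in the proof is a simple greedy pass. The sketch below is our own illustration: the conflict sets encode the exclusions of lemma 3 for a hypothetical path base graph v1-v2-v3-v4 (so c = 2 and n = 7 base-locations):

```python
def greedy_icl(locations, conflicts):
    """Greedy selection from the proof of theorem 1: pick a location,
    discard everything it conflicts with, repeat. Each pick removes at
    most 2c + 1 locations, so at least n / (2c + 1) survive."""
    available = set(locations)
    chosen = []
    for loc in locations:               # any fixed order works
        if loc in available:
            chosen.append(loc)
            available -= conflicts.get(loc, set()) | {loc}
    return chosen

# Conflicts per lemma 3: a primary location excludes its adjacent added
# locations; an added location also excludes added locations two steps away.
conflicts = {
    "v1": {"e12"}, "v2": {"e12", "e23"}, "v3": {"e23", "e34"}, "v4": {"e34"},
    "e12": {"v1", "v2", "e23"},
    "e23": {"v2", "v3", "e12", "e34"},
    "e34": {"v3", "v4", "e23"},
}
S = ["e12", "e23", "e34", "v1", "v2", "v3", "v4"]
S_prime = greedy_icl(S, conflicts)      # meets the n/(2c+1) bound
```
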

Verifiable quantum computation
We give a verifiable blind quantum computation protocol using the dotted triple-graph construction; otherwise, we follow similar steps to [4]. With our construction we obtain a protocol where the probability of success is constant (independent of the size of the computation) and we use only a linear, in the size of the computation, number of qubits.
As we mentioned in section 1.3, dummy qubits break the graph into the computation graph and isolated traps. This breaking is hidden from the prover, since the prover does not know the positions of the dummy qubits. For the computation to be accepted, the prover needs to return the correct results for the isolated traps. In other words, a malicious prover that wants to deceive the verifier needs at the same time to guess correctly all the traps and to corrupt the computation by deviating on some of the computation-graph qubits.
As is evident from the protocol (see protocol 1), the positions of the dummy qubits (i.e. those prepared in {|0⟩, |1⟩}) are determined by the trap-colouring. It is easy to check that sending dummy qubits has the same effect as making a Z measurement in MBQC, which effectively breaks the graph state at this vertex. Therefore the properties defined in section 2 corresponding to the reduction of the DT(G) to one dotted base graph D(G) and isolated traps (lemmas 1 and 2), as well as the properties concerning the independence of the colouring and thus the distribution of traps (theorem 1), all apply here.

Theorem 2 (Correctness).
If both verifier and prover follow the steps of protocol 1 then the output is correct and the computation accepted.

Protocol 1. Verifiable universal blind quantum computation using dotted triple-graph.
We assume that a standard labelling of the vertices of the dotted triple-graph DT(G) is known to both the verifier and the prover. The number of qubits is at most 3N(3c + 1), where c is the maximum degree of the base graph G.
• Verifier's resources
-Verifier is given a base graph G such that the dotted graph state |D(G)⟩ can be used to perform the desired computation in MBQC with measurement pattern M_Comp.
-Verifier generates the dotted triple-graph DT(G) and selects a trap-colouring according to definition 3, which is done by choosing independently the colours for each set P_v.
-Verifier will send dummy qubits for all red vertices, thus performing the break operation on them.
-Verifier chooses the green graph to perform the computation.
-For the white graph the verifier sends dummy qubits for all added qubits a_e^w, and thus generates white isolated qubits at each primary vertex set P_v. Similarly, for the black graph the verifier sends dummy qubits for the primary qubits p_v^b, and thus generates black isolated qubits at each added vertex set A_e.
-The dummy qubits position set D is chosen as defined above (fixed by the trap-colouring).
-A binary string s of length at most 3N(3c + 1) that represents the measurement outcomes. It is initially set to all zeros.
-A sequence of measurement angles φ = (φ_i) with φ_i ∈ A = {0, π/4, 2π/4, …, 7π/4}, consistent with M_Comp. We define φ′_i to be the measurement angle in MBQC when corrections due to previous measurement outcomes s are taken into account (the function depends on the specific base graph and its flow, see e.g. [26]). We also set φ′_i = 0 for all the trap and dummy qubits. The verifier chooses a measurement order on the dotted base graph D(G) that is consistent with the flow of the computation (this is known to the prover). The measurements within each set P_v, A_e of the DT(G) graph are ordered randomly.
-3N(3c + 1) random variables θ_i with values taken uniformly at random from A.
-3N(3c + 1) random variables r_i and |D| random variables d_i, with values taken uniformly at random from {0, 1}.
-A function that for each non-output qubit i computes the angle of the measurement of qubit i to be sent to the prover: δ_i = φ′_i + θ_i + r_i π.

• Initial
Step -Verifier's move: Verifier sets all the values in s to be 0 and prepares the 3N(3c + 1) qubits: each dummy qubit i ∈ D in the state |d_i⟩ ∈ {|0⟩, |1⟩}, and each remaining qubit i in the state |+_{θ_i}⟩ = (|0⟩ + e^{iθ_i}|1⟩)/√2. The verifier sends the prover all the 3N(3c + 1) qubits in the order of the labelling of the vertices of the graph.
-Prover's move: Prover receives 3N(3c + 1) single qubits and entangles them according to DT(G).

Proof sketch. If both verifier and prover follow the steps of protocol 1 then the prover essentially (when pre-rotations are taken into account) applies the pattern M_Comp to the green dotted base graph D(G), which by assumption performs the desired computation (see also theorems 1 and 3 of [4]). Moreover, the isolated white and black qubits are measured in the correct basis and thus the verifier receives b_i = r_i for the traps and accepts the computation (for further details, see appendix B). □

As already stated, the protocol is ε-verifiable if the probability of accepting an incorrect output for any strategy of the prover is bounded by ε. We follow the same definitions as in [4]; for completeness the exact meaning of 'strategy of the prover' and expressions for 'incorrect output' and 'accepting' are given in appendix C.

Theorem 3 (Verification). Protocol 1 is (8/9)-verifiable (for quantum or classical output).
Proof sketch. The proof follows closely certain steps of the proof of theorem 8 of [4]. Here we give an outline of the proof (details in appendix C).
The aim is to show that the probability that a malicious prover corrupts the computation and succeeds on all traps (and thus the verifier accepts the output) is bounded by ε. To achieve this we follow five steps. In step 1 we prove that any deviation from the ideal protocol can be expressed in terms of some Kraus operators, which are then written as linear combinations of strings of Pauli matrices (denoted σ_i); the remainder of the proof determines which of those attacks maximise the probability of accepting an incorrect outcome.
In step 2 we note that there are some strings σ_i that, for any choice of the secret parameters (trap positions, angles, etc) of the verifier, do not corrupt the computation and thus do not contribute to the probability of failure. The set of all other strings (those that could corrupt the computation for some choice of parameters) will be denoted E_i. It is clear that a malicious prover, to optimise the chance of getting an incorrect outcome accepted, should only use attacks from the set E_i. In this section, where we consider the simplest protocol, a single non-trivial attack could corrupt the computation, and the set E_i consists of all attacks σ_i that have a non-trivial attack in at least one position. However, in the next section this changes. The technique to amplify the success probability uses fault-tolerant encoding; the computation is then corrupted only if multiple errors occur, and this leads to a different set E_i. For now we keep the description general for as long as possible, so that it applies to the next section. After the set E_i is defined, in order to compute an upper bound for the failure probability, we simply compute the probability of not triggering any trap given that the attacks used are all from the set E_i. This is clearly an upper bound for the failure probability (worst-case scenario), since in reality the fact that there exist some choices of the secret parameters for which a given σ_i ∈ E_i corrupts the computation does not mean that it corrupts the computation in general. However, an upper bound ε on the failure probability suffices to prove that the protocol is ε-verifiable.
In step 3 we exploit the blindness of the protocol against the malicious prover. The fact that they do not know the secret parameters restricts the attacks that contribute to convex combinations of Pauli attacks. This is important since it eliminates 'coherent' attacks, and resembles theorems in quantum key distribution (QKD) that reduce coherent attacks to collective ones by exploiting the symmetry of the states.
In step 4 we show that a malicious prover maximises the bound on the failure probability by performing an attack with exactly the fewest non-trivial Paulis consistent with the set E_i obtained in step 2. This is a single attack for the protocol of this section (but differs in section 4.2). It is easy to see that the greatest value is obtained for a single σ_i. In the next steps of the proof we find the maximum value of our bound for an attack corresponding to the single optimal (for a malicious prover) σ_i.
Finally, in step 5 we use the partition of the qubits into the sets P_v and A_e. It is important to note that within each of those sets there is exactly one computation qubit and exactly one trap qubit. From the previous steps we know that the bound on the failure probability is highest if the malicious prover makes a single non-trivial attack. This attack happens at a qubit that belongs to either some set P_v or some set A_e. The probability of hitting a trap within a given set is clearly independent of the other free parameters corresponding to this qubit (though the probability of detecting the attack is not in general) and goes as 1/3. This leads to a bound for the failure probability of ε = 8/9. □
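The closing arithmetic of step 5 can be checked with a quick Monte Carlo sketch. This is illustrative only: the attacked set contains three qubits with the trap at a uniformly random position, and `flip_prob = 1/3` is an assumption chosen here to reproduce the ε = 8/9 of theorem 3, not a quantity derived in the text.

```python
import random

def estimate_failure_bound(trials=200_000, flip_prob=1/3):
    """Monte Carlo estimate of the bound on p_fail for a single
    non-trivial Pauli attack. Assumes that within the attacked set
    (P_v or A_e) the trap sits at one of three equally likely
    positions, and that a trap hit flips the trap outcome with
    probability flip_prob over the verifier's secret choices
    (an assumption made explicit here)."""
    undetected = 0
    for _ in range(trials):
        trap_pos = random.randrange(3)    # trap position within the set
        attack_pos = random.randrange(3)  # single attacked qubit
        detected = (attack_pos == trap_pos) and (random.random() < flip_prob)
        undetected += not detected
    return undetected / trials

est = estimate_failure_bound()  # ≈ 1 - 1/9 = 8/9
```

The estimate concentrates around 1 − (1/3)·(1/3) = 8/9, matching the bound of theorem 3.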

Amplification of the probability of success
In the previous section we gave a simple construction that directly yields a verification protocol with constant failure probability ε. However, a verification protocol is only useful if the failure probability ε can be made arbitrarily small. There are two techniques that have been used to amplify the probability of success of a verification protocol. The first is simpler, both conceptually and in terms of experimental requirements, but applies only when the output of the quantum computation performed is classical. The second applies to computations with quantum output as well. We will use both techniques and show that, starting from the dotted triple-graph construction, we obtain improvements in both cases.

Amplification for classical output
In the case that a quantum computation has a classical output (e.g. solving classical problems or sampling, etc) it suffices to have an ε-verifiable protocol for any ε < 1. This ε can be boosted and made arbitrarily small by repeating the protocol a sufficient number of times and accepting only when all repetitions agree. This results in ε' = ε^d, which can be made as small as the required security level by choosing the number of repetitions d suitably. This of course implies an extra communication cost for the multiple copies prepared, which scales linearly with d. By using the dotted triple-graph construction we obtain a repetition protocol where we only repeat a constant number of times (the number of repetitions depends only on the security level). This is in contrast with the increasing number of repetitions needed in the repetition protocol of [6], which was based on the brickwork-state protocol of [4]. It follows that the dotted triple-graph repetition protocol requires only a linear number of qubits. As we will see in the next section, this does not give better complexity than the general protocol (which allows for quantum output). However, it has a number of practical advantages (easier to implement, smaller coefficient of the leading term, etc) which can be of importance in view of the quantum systems that are being developed, such as Networked-Quantum-Information-Technologies (NQIT) [28]. Further details and an alternative construction applicable only for classical output can be found in appendix D.
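As a concrete sketch of the repetition count (assuming the single-round bound ε = 8/9 from theorem 3), the number of repetitions d needed for a target security level follows directly from ε' = ε^d:

```python
import math

def repetitions_needed(target_eps, single_round_eps=8/9):
    """Smallest number of repetitions d with single_round_eps**d <= target_eps."""
    return math.ceil(math.log(target_eps) / math.log(single_round_eps))

# d is a constant depending only on the security level,
# not on the size of the computation.
d = repetitions_needed(1e-9)
assert (8/9) ** d <= 1e-9 < (8/9) ** (d - 1)
```

Since d depends only on the target ε' and not on the size of the computation, the total qubit cost stays linear in the input size.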

Amplification for quantum output
We now turn to the general case, where the output of the computation can be quantum. Our main result is that our DTG construction leads to an exponentially-secure verification protocol for quantum output with only a linear overhead. As in [4], we use a technique that encodes the computation in a fault-tolerant way in order to amplify the probability of success of the protocol. The degree of amplification achieved depends on the fault-tolerant code that is used. Here we treat the protocol in full generality.
The general idea is that the computation is encoded in a fault-tolerant way, while the traps remain single (non-encoded) qubits. Therefore, while a single error on a trap leads to rejection of the computation, corrupting the actual output of the computation requires many errors on computation qubits. The malicious prover needs simultaneously to avoid hitting any single trap and to hit many computation qubits in order to corrupt the output.
• Verifier chooses a base graph G and a measurement pattern M_Comp on the dotted base graph D(G) that implements the desired computation in a fault-tolerant way, one that can detect or correct fewer than δ/2 errors.
• Verifier follows the steps of protocol 1.

Here c is the maximum degree of the base graph and δ is the number of errors tolerated on the base graph G.
Proof sketch. The proof follows similar steps to [4]. However, because of our local construction, the proof changes and we highlight here where our technique deviates. Since the computation is performed using a fault-tolerant encoding, any deviation that affects fewer than δ/2 computation qubits does not corrupt the output. It follows that attacks which contribute to p_fail have non-trivial Paulis in at least δ/2 base-locations 8 . Here we used the fact that, in our construction, the prover knows the partition of the qubits with respect to their base-location and thus, wishing to corrupt the computation, will necessarily attack at least δ/2 base-locations. Using blindness (as in steps 3 and 4 of the proof of theorem 3), we conclude that the fewest attacks (given that corruption is possible) maximise p_fail. According to our construction the traps in different base-locations are placed independently, so the probability of avoiding all of them decreases exponentially with the number of attacked base-locations. Since the computation uses a fault-tolerant encoding, the number of qubits required scales accordingly. In particular, as in [4], there is an extra multiplicative logarithmic cost, similar to the classical output case.
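The exponential decay claimed above can be illustrated numerically. This is a sketch, not the paper's exact bound: it assumes each of the δ/2 attacked base-locations independently escapes detection with probability at most 8/9 (the exact exponent and constants depend on the fault-tolerant code and on c).

```python
def amplified_bound(delta, per_location=8/9):
    """Illustrative bound per_location**(delta/2) on the failure
    probability when at least delta/2 base-locations must carry a
    non-trivial attack and each one independently escapes detection
    with probability at most per_location."""
    return per_location ** (delta / 2)

# The bound decreases exponentially in delta while the qubit
# overhead stays linear in the size of the encoded computation.
bounds = [amplified_bound(d) for d in (10, 50, 100)]
assert bounds[0] > bounds[1] > bounds[2]
```

Raising δ (by choosing a stronger code) thus makes the security parameter exponentially small without changing the linear scaling in the input size.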

Acknowledgments
The authors would like to thank Vedran Dunjko, Alexandru Gheorghiu and Theodoros Kapourniotis for useful discussions. The authors are also grateful to Joe Fitzsimons for useful discussion regarding a robust fault tolerance scheme as proposed in appendix F. EK acknowledges funding through EPSRC grants EP/N003829/1 and EP/M013243/1.

A.2. Proof of lemma 2
Proof. As the dotting operation only introduces vertices connected to vertices in P(D(G)), every vertex in A(D(G)) shares edges only with vertices in P(D(G)). Thus when the vertices in P(D(G)) and their associated edges are removed by the break operators, the vertices in A(D(G)) become disconnected. Similarly, since the dotting operation removes all edges between vertices in P(D(G)), every vertex in P(D(G)) shares edges only with vertices in A(D(G)). Thus when the vertices in A(D(G)) and their associated edges are removed by the break operators, the vertices in P(D(G)) become disconnected. □
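Lemma 2 can be checked mechanically on a small example. The sketch below (with hypothetical helper names) implements the dotting operation, removes one class of vertices, and confirms that no edges remain for the other class, which is therefore disconnected:

```python
def dotted_graph(base_edges):
    """Dotting operation: place one added vertex on every edge of the
    base graph, so every edge of D(G) joins a primary vertex to an
    added vertex."""
    primary = {v for e in base_edges for v in e}
    added, edges = set(), set()
    for (u, v) in base_edges:
        a = ('added', u, v)
        added.add(a)
        edges.add((u, a))
        edges.add((a, v))
    return primary, added, edges

def remove_vertices(edges, removed):
    """Break operation: delete the given vertices and all their edges."""
    return {(u, v) for (u, v) in edges if u not in removed and v not in removed}

# Small example (a triangle as base graph): removing all primary
# (resp. added) vertices leaves the other class with no edges at all.
primary, added, edges = dotted_graph([(0, 1), (1, 2), (2, 0)])
assert remove_vertices(edges, primary) == set()
assert remove_vertices(edges, added) == set()
```

The empty edge sets confirm both directions of the lemma: each class of vertices is connected only through the other.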

A.3. Proof of lemma 3
Proof. First we prove that a collection of base-locations satisfying this condition is ICL.
we can see that (i) for all primary base-locations in E no neighbouring base-location is in E, and (ii) for each added base-location, the two neighbouring primary base-locations P_{v_i}, P_{v_j} are not in E, and neither is any other added base-location that has either of P_{v_i}, P_{v_j} as a neighbour. In other words, the sets of neighbours of added base-locations are disjoint. However, we already noted that a local-colouring of an added base-location is equivalent to a local-colouring of the two neighbouring primary base-locations P_{v_i}, P_{v_j}. By replacing the local-colouring of added base-locations with that of the neighbouring primary base-locations, we reduce the local-colouring of the set E to a collection of local-colourings of primary base-locations. This is ICL since, by the definition of trap-colouring, no constraint is imposed between the colours of primary sets. To prove the converse, consider two locations i, j such that ε_i ∩ ε_j ≠ ∅. Either one of them is primary and the other a neighbouring added base-location, or i and j are added base-locations sharing a common neighbour k. In the first case it is clear that the choice of colour at the primary set (say i) imposes constraints on the colours of the added base-location j. In the second case, the choice of colour at the added location i can determine that of the neighbouring location k (for example, a white added vertex that is connected to a primary vertex fixes the colour of that vertex to white). But then fixing the colours of the primary base-location k in turn imposes constraints on the other added neighbour j, and thus a local-colouring of i and j may not be consistent with a trap-colouring. □

Appendix B. Proof of correctness (theorem 2)
Proof. To prove the correctness of the protocol we assume that the prover is honest and follows the instructions. This proof is very similar to [4]. We first consider the effect of the dummy qubits. Dummies are equivalent to a Z measurement and therefore their effect is to break the graph at the particular vertex. In protocol 1 the dummies are placed at red vertices of a trap-colouring of DT(G) and on white added-vertices and black primary-vertices. According to lemmas 1 and 2 this results in a copy of the dotted graph D(G) at the green vertices, and isolated qubits at the white primary vertices and black added vertices. Moreover, the isolated qubits are in the state |+_{θ_i}⟩. Measurements on different (disconnected) graphs do not affect each other, so we consider separately the measurement pattern on the (green) dotted graph D(G) and the measurements on the isolated (trap) qubits.
The qubits in the computation graph D(G) are measured in the rotated basis δ_i = φ'_i + θ_i + π r_i, while the graph is similarly rotated since each qubit (before the entangling operations between the computation qubits) was in the state |+_{θ_i}⟩. As in UBQC [26] this is identical to performing M_Comp on the non-rotated graph and results in the correct computation (by assumption), provided that the verifier, in order to account for the extra π r_i rotation, sets s_i = b_i ⊕ r_i and uses s_i to compute the next measurement angles. The isolated traps (which are in the state |+_{θ_i}⟩) are measured at the angle δ_i = θ_i + π r_i (as φ'_i = 0 for dummy and trap qubits) and deterministically give the outcome b_i = r_i. This is precisely the outcome that the verifier needs to accept the computation as correct. Therefore, in the honest prover case, the verifier always accepts the output (traps correct) and, as we saw in the previous paragraph, obtains as output the ideal (correct) one.
Finally, note that the dummy qubits are also measured. However, since they are disconnected from the rest of the qubits (they do not affect them), and their result contributes neither to the correct output nor to the accept/reject decision, the outcome of these measurements is irrelevant. □
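The deterministic trap outcome used in the proof can be verified with single-qubit Born-rule arithmetic: measuring |+_θ⟩ in the {|+_δ⟩, |−_δ⟩} basis gives outcome 0 with probability cos²((δ − θ)/2), so δ_i = θ_i + π r_i yields b_i = r_i for every θ_i. This is a standard fact, sketched here with a hypothetical helper name:

```python
import math

def outcome_probability_zero(theta, delta):
    """P(b = 0) when |+_theta> is measured in the {|+_delta>, |-_delta>}
    basis: cos^2((delta - theta)/2)."""
    return math.cos((delta - theta) / 2) ** 2

# Trap qubits have phi'_i = 0, so delta_i = theta_i + pi * r_i, and
# the outcome is deterministically b_i = r_i for every secret theta_i.
for k in range(8):
    theta = k * math.pi / 4
    for r in (0, 1):
        p0 = outcome_probability_zero(theta, theta + math.pi * r)
        assert abs(p0 - (1 - r)) < 1e-12  # b = r with certainty
```

The loop runs over the eight standard UBQC angles, confirming that the determinism holds independently of the secret θ_i.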

Appendix C. Proof of verification (theorem 3)
Proof. We now give some definitions taken from [4] before breaking the proof into five steps and highlighting the places where we differ. The output density operator of the protocol is B_j(ν), for which we have the following definitions. The subscript j of the operator B corresponds to the strategy/deviation that the prover makes; j = 0 is the honest run where there is no deviation (and thus the operator Ω = I). The index ν collectively denotes all the random choices made by the verifier. C_{ν,b} is the Pauli operator acting on the quantum output that maps the final outcome to the correct one, depending on the choices of the random variables ν and the computation branch b. P is the unitary corresponding to implementing the protocol honestly. Ω is the deviation of the prover and is the identity in the honest run.
The initial state sent by the verifier includes the quantum input and the |+_θ⟩ states, which are jointly denoted |M_ν⟩, while the |δ⟩ registers correspond to the measurement angles (which depend on the branch of the computation b).
|Ψ_ideal⟩ is the state of the computation (green) output qubits when the trivial deviation Ω = I occurs.
For simplicity, in this proof, we have assumed that the initial state is pure and that the computation P to be performed is unitary, and therefore the honest ideal state |Ψ_ideal⟩⟨Ψ_ideal| is also pure.
The protocol fails when it returns 'accept' but the output is orthogonal to the honest ideal state. This probability p_fail is given by the overlap with P_⊥, the projection onto the wrong subspace (the space orthogonal to the correct ideal state) that still remains within the accept subspace (where the traps succeed).
The proof has the following five steps. In step 1 we express the attack using Kraus operators and Pauli matrices; in step 2 we show that, in order to lie in the incorrect subspace, at least one non-trivial attack on one qubit (of the dotted triple-graph) is required, and we then replace the projection onto the incorrect subspace with this restriction on the sum of allowed attacks. In step 3 we exploit the blindness of the protocol to reduce the attack to Pauli attacks. In step 4 we show that the fewer the non-trivial attacks the greater the probability for the adversary, and thus we restrict to the fewest allowed attacks (a single one). Finally, in step 5 we use a suitable partition of the qubits which then leads to a constant bound for p_fail.
Step 1: First we note that, after tracing out the prover's register, the unitary Ω becomes a completely positive trace-preserving (CPTP) map and can be expressed in terms of Kraus operators, each of which is written as a linear combination of Pauli strings σ_i. The matrix σ_i is a tensor product of Pauli matrices; when we want to specify the Pauli acting on qubit γ we denote it σ_i|_γ.

Step 2: Again following [4], we can see that only certain terms contribute to p_fail: those with indices in sets we denote i ∈ E_i (and similarly j ∈ E_j), consisting of the Pauli strings that either flip the outcome of some measured qubit of the dotted triple-graph, or act with a non-identity Pauli on some output qubit; the superscript O denotes the subset of those sets for which γ is an output qubit. In other words, to corrupt the computation one either needs to flip the outcome of a measured qubit, or apply any Pauli (other than the identity) if the attack is on the quantum output.
We have now imposed that the attacks σ i that contribute have at least one non-trivial Pauli attack at a qubit of the DT(G). This is not a sufficient condition to corrupt the computation in general (and send it to the ⊥ P subspace), but is a necessary condition.
To see this, note that if we consider a σ_i with i ∉ E_i, then there is no choice of the secret parameters that would bring the state into the P_⊥ subspace. Here we take the worst-case scenario: we assume that if there is some choice of secret parameters for which a given attack could corrupt the computation, then the state is already in the subspace P_⊥, and we only check the probability that this attack did not trigger any trap. For protocol 1 a single attack could corrupt the computation. We then replace the projection onto the P_⊥ subspace with a restriction on the possible attacks, i.e. the sum only contains terms corresponding to attacks that belong to the set E_i. Note that if the computation were encoded in a fault-tolerant way (as is done in section 4.2), then the set E_i would require a greater number of non-trivial attacks. For now we take the more conservative view.
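The counting behind this necessary condition can be made concrete. The sketch below enumerates Pauli strings and keeps those with at least one outcome-flipping Pauli on a measured qubit; which single-qubit Paulis count as 'flipping' depends on the point in the protocol to which the deviation is commuted, so the choice ('X', 'Y') here is purely illustrative:

```python
from itertools import product

def attack_set_E(n_measured, flipping=('X', 'Y')):
    """Pauli strings on n_measured measured qubits with at least one
    outcome-flipping Pauli somewhere. The `flipping` set is an
    illustrative assumption; output qubits would additionally count
    any non-identity Pauli."""
    return [s for s in product('IXYZ', repeat=n_measured)
            if any(p in flipping for p in s)]

# For 2 qubits: 4**2 strings in total, of which the 2**2 strings built
# only from harmless Paulis lie outside E.
E = attack_set_E(2)
assert len(E) == 4**2 - 2**2  # 12 of the 16 strings
```

Strings outside this set never contribute to p_fail, which is exactly the restriction imposed on the sum above.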
Steps 3 and 4: Using blindness, the contributing attacks reduce to convex combinations of Pauli attacks, and we obtain an expression for the bound in which the term that maximises p_fail corresponds to an attack σ_i with the fewest non-trivial terms possible, i.e. compatible with E_i. From step 2 we obtained that attacks in E_i have at least one non-trivial Pauli, so it follows that the bound on p_fail we compute is maximised when there is exactly one non-trivial Pauli attack. It is important to note, however, that the set E_i will be different in section 4.2, where we consider a fault-tolerant encoding of the computation, and the corresponding σ_i will involve a greater number of non-trivial attacks. In that case, the set of attacks that can possibly corrupt the computation (and thus send it to the P_⊥ subspace) changes (i.e. E_i differs).
Step 5: We now use the partition of the qubits of the dotted triple-graph into the subsets P_v, A_e corresponding to vertices and edges of the base graph. The way this partition is chosen does not reveal any new information to the prover and does not depend on the choice of trap-colouring, i.e. on the positions of the traps. We have established that the optimal strategy for the prover, in order to maximise the value of the bound for p_fail that we compute, is to make a single non-trivial attack at one qubit of the dotted triple-graph. Let us assume that this single position is β; it belongs to either a set P_{v_β} or a set A_{e_β}, depending on whether the non-trivial attack is made on a qubit belonging to a primary or an added set. When it is not clear whether the set is primary or added, we write F_β for the set containing β. A direct calculation, using the fact that within each set there is exactly one uniformly placed trap, bounds the probability of not triggering a trap by 8/9. Since this bound is obtained when the attack is on a measured (added) qubit, the bound is the same when the output is fully classical. □

Appendix D. Protocol for classical output

Protocol 4. Boosted Verifiable UBQC using dotted triple-graph for classical output.
• Verifier chooses a number d of repetitions, depending on the required security level.
• Verifier and prover repeat protocol 1 d times; the verifier accepts only if all repetitions accept and return the same classical output O.

Proof. We have multiple repetitions, and if all of them return the same output O, then the probability that this is not the correct output is bounded by the probability that all repetitions failed (and resulted in the same deviation). Since the different repetitions have the same outcome, if a single one of those repetitions is successful then the output O is the correct output. From theorem 3 we know that the probability that a single repetition fails is 8/9. Then the probability that all d repetitions fail is (8/9)^d.

□
In the case of classical output, there is an alternative construction to the dotted triple-graph that could decrease the (linear) overhead further. In particular, instead of having the dotted triple-graph DT(G) one could consider three copies of the dotted base graph D(G). We will name this the three dotted copies construction. One copy is used for the computation, while the other two carry white traps (on primary vertices) and black traps (on added vertices). This construction is global, in the sense that the decision of which vertices are in which graph is made from the beginning and cannot be made independently per base-graph vertex v_i. It follows that the locations of the traps are totally correlated globally, and there is no way to amplify the success probability in the quantum output case. This is the reason we focused on the dotted triple-graph construction for the quantum output case. For classical output, however, the three dotted copies construction works. Any given attack σ_i is characterised by the set S_i of locations on the dotted base graph D(G) at which it has at least one non-trivial attack; in the case σ_i ∈ E_i this means |S_i| ≥ δ/2. Following steps 3 and 4 of the proof of theorem 3, we reach equation (C.12). From this expression we can again see that the fewer the positions of non-trivial attack (consistent with E_i), the greater the value of this bound. We already know that we need at least δ/2 sets F_β with non-trivial attacks, so it follows that the maximum is achieved when there are exactly δ/2 different sets F_β with exactly a single attack in each. To proceed further we need to decompose the probability of a configuration of traps p(T) into that of individual sets. This is not in general possible, since there are correlations between the traps of (neighbouring) sets. At this point we note that fixing a configuration of traps is identical to giving a trap-colouring as in definition 3.
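The qubit count of the three dotted copies construction is straightforward: D(G) has one added vertex per base edge, so three copies use 3(N + |E|) qubits, which is linear in N for bounded-degree graphs. A minimal sketch (with a hypothetical helper name):

```python
def three_dotted_copies_size(n_vertices, n_edges):
    """Qubits in the three-dotted-copies construction: three copies of
    the dotted base graph D(G), where |D(G)| = N + |E| since the
    dotting operation adds exactly one vertex per base edge."""
    return 3 * (n_vertices + n_edges)

# For a base graph of maximum degree c, |E| <= c*N/2, so the total
# overhead stays linear in N for bounded degree.
N, c = 100, 4
size = three_dotted_copies_size(N, c * N // 2)
assert size == 3 * (N + 200)
```

This makes concrete why the construction keeps the overhead linear even though, being global, it only works for classical output.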
From theorem 1, we know that a trap-colouring can be chosen consistently for any given collection S_i of δ/2 locations on the dotted base graph.

Applying the dotted triple-graph construction to the RHG lattice, we obtain the dotted triple-RHG graph G^DT_L. This graph state has a linear number of qubits (as the maximum degree of the graph is 4). With the same choices of parameters as in [4], it can detect or correct any deviation that has fewer than δ/2 errors. From our results in the previous section, it follows that we obtain a linear-complexity verification protocol with an exponentially small security bound.

The second application is that it can be used to improve verifiable fault-tolerant protocols. Assuming there are errors due to (non-adversarial) noise, the protocols given earlier in the text, as well as other VBQC protocols, could face a problem: honest errors due to noise could make trap measurements fail and lead us to reject the output even in honest runs where the computation is not corrupted. Here we should stress that, both in this paper and in [4], the fault-tolerant encoding was used in order to amplify the security and not to correct the computation from errors caused by honest noise. However, one can construct a fault-tolerant verification protocol, at least for classical output, and one such example is presented in [13]. The starting graph used to obtain the fault-tolerant protocol of [13] was the brickwork graph, which has a single trap. A fault-tolerant encoding was then applied, followed by the repetition technique used to amplify the success probability. However, the number of repetitions needed to maintain a constant level of security increased with the size of the computation. By using the dotted triple-brickwork graph instead, as the first step of the construction in [13], we can achieve exponential security with a constant number of repetitions. This would essentially bring the number of qubits required down from O(n^2) to O(n).
The third application is that we can directly use the dotted triple-graph construction for the verifiable measurement-only protocols [9,10]. In particular, [9] is essentially the online version of [4], and the technique used to include traps in the graph is equivalent. It follows that if the resource used, instead of a dotted complete graph, is a dotted triple-RHG graph, then the number of qubits required is reduced from quadratic to linear. In [10] the verifier, instead of including traps, uses 2k + 1 copies of a universal graph. To test the honesty of the prover, it makes stabiliser measurements on 2k copies of the desired graph while performing the computation on the final copy. However, there is always at least a 1/(2k + 1) probability that the computation is corrupted and not detected (e.g. the prover picks one copy and attacks all the qubits of that copy). Using the dotted triple-graph construction, modified for the measurement-only protocols, this probability can be made exponentially small while still using only a linear number of qubits. In other words, in [10] a malicious prover can choose one copy and corrupt all its qubits without diminishing their chances compared to corrupting a single qubit, as the positions of the traps are correlated, i.e. a copy is either fully a test copy or fully a computation copy. In our local construction, on the other hand, the choice of computation and test qubits is made independently for each base-location.
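The contrast drawn here is between an inverse-polynomial floor and an exponentially small bound; a short comparison (the 8/9 per-location rate is illustrative, taken from theorem 3) makes the gap explicit:

```python
def copies_scheme_floor(k):
    """Undetected-corruption floor of the 2k+1-copies scheme of [10]:
    attacking one whole copy escapes detection with probability at
    least 1/(2k+1)."""
    return 1 / (2 * k + 1)

# Even modest exponential decay (illustrative rate 8/9 per
# independently-trapped base-location) overtakes any polynomial floor:
assert copies_scheme_floor(50) > (8/9) ** 100
```

With independently placed traps per base-location, the detection probability compounds multiplicatively, which is precisely what the copies scheme's global correlation prevents.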