Physical-depth architectural requirements for generating universal photonic cluster states

Most leading proposals for linear-optical quantum computing (LOQC) use cluster states, which act as a universal resource for measurement-based (one-way) quantum computation (MBQC). In ballistic approaches to LOQC, cluster states are generated passively from small entangled resource states using so-called fusion operations. Results from percolation theory have previously been used to argue that universal cluster states can be generated in the ballistic approach using schemes which exceed the critical threshold for percolation, but these results consider cluster states with unbounded size. Here we consider how successful percolation can be maintained using a physical architecture with fixed physical depth, assuming that the cluster state is continuously generated and measured, and therefore that only a finite portion of it is visible at any one point in time. We show that universal LOQC can be implemented using a constant-size device with modest physical depth, and that percolation can be exploited using simple pathfinding strategies without the need for high-complexity algorithms.


Introduction
Within the last decade, great progress has been made in the theoretical field of quantum computer architectures. Modern fault-tolerant schemes rely on the use of many error-prone physical qubits to create individual logical qubits with fewer errors. Whilst we understand these methods of abstraction theoretically, implementing them in reality is not a trivial task when experimental constraints are applied. The study of quantum computation architectures must therefore incorporate both an understanding of high-level theoretical models and experimental limitations.
One particularly appealing approach to LOQC uses ideas from percolation theory as first proposed in [8]. The main idea is to passively entangle small resource states (also called microclusters), using fusion gates, to generate a large cluster state which can enable universal quantum computing. The cluster state which is generated corresponds to a random graph on a geometric lattice with missing sites and bonds. By using schemes which exceed the critical threshold for percolation on the lattice [8-10, 19], a cluster state which supports universal quantum computation can be guaranteed. A lattice of logical qubits can then be identified using methods such as renormalization as given in [8], or the lattice concentration algorithm of [20]. The main virtue of using the percolation approach to LOQC is that it enables ballistic architectures that sidestep requirements for extensive adaptive switching networks, which are technologically very challenging [21].
In this work, we address a vital question for any high-level LOQC architecture based on percolation: can successful percolation be sustained using a physical device of fixed finite size, and what size (cross-section and depth) of percolating cluster state must be kept online at any point in time to do so? The methods we use to answer this question differ from conventional treatments of percolation, and are based on pathfinding algorithms which must exploit information in real time about the outcomes of recent fusion operations. We assume that photons making up the percolating cluster state can only be kept online for modest periods using optical delays, which provide limited lookahead capability before measurements must be performed on the photons. Our analysis can have implications for all aspects of LOQC architecture by impacting hardware specifications at the component level. Specifically, this work presents three key results: (i) spanning paths can exist on extremely elongated blocks of edge-percolated cluster state lattice, but only when the cross-sectional side length exceeds some minimum length set by the lattice edge probability; (ii) an LOQC device with a physical depth of only 10-20 layers is sufficient to produce measurement-based quantum computation (MBQC) qubit channels (within a loss- and error-less LOQC architecture model); (iii) long-range limited-lookahead pathfinding can be achieved with algorithms with minimal complexity, thereby reducing associated classical co-processing requirements for LOQC.
The structure of this work is as follows: in section 2 we briefly review recent work on percolation-based architectures for LOQC. In section 3 we consider the minimum resource requirements of percolated cluster state lattices for producing long-range single-qubit channels. In section 4 we present the main results of our work, where we define the Random-node pathfinding process, conjecture a condition for pathfinding success and present results from numerical pathfinding simulations. Section 5 considers implications of the presented results for LOQC architectures, identifying key architectural trade-offs and specifications. Finally, a selection of open questions for future work is presented in section 6.

Percolation-based architectures for LOQC
The fundamental challenge of LOQC is the construction of large graph states. Graph states are a subset of stabilizer states [22] that can be uniquely described by simple graphs (for a review of graph states see [23]). In this formalism, a graph $G = (V, E)$ containing vertices (or nodes) $V$ and edges (or bonds) $E$ uniquely represents the state $|G\rangle = \prod_{(a,b) \in E} CZ_{a,b} |+\rangle^{\otimes |V|}$, where $CZ = |00\rangle\langle00| + |01\rangle\langle01| + |10\rangle\langle10| - |11\rangle\langle11|$. We specifically refer to graph states represented by regular lattices as cluster states. In LOQC, cluster states can be probabilistically built using two types of fusion gate [5]. Known as type-I and type-II fusion gates, these gates destructively consume 1 and 2 photonic qubits respectively and on success produce entanglement between the remaining qubits in the clusters (on failure the input qubits are subjected to single-qubit measurements). Whilst type-I fusion consumes fewer qubits, it cannot herald photon loss, whereas type-II can herald such loss, but at the cost of consuming an extra qubit. In standard operation, both gates operate with a 50% success rate. However, type-II fusion can be boosted to increase the success rate above 50% through the consumption of additional auxiliary resources [17, 18]. For example, a success rate of 75% can be achieved through the consumption of either a Bell pair or 4 single photons.
To overcome nondeterministic entangling gates, renormalization is used to produce an idealized lattice $\mathcal{L}^*$ from a coarse graining of some percolated lattice $\mathcal{L}$. For example, in one common strategy, microcluster states are placed on the sites of a lattice and fusion gates of success probability $p_f$ are applied to produce entanglement between the centre qubits of adjacent microclusters. Once $\mathcal{L}$ is constructed, a single central qubit is identified on each renormalization block that is path-connected to central qubits of adjacent blocks by sets of path qubits⁶. As in MBQC protocols [15, 16], all other qubits in the lattice are then removed by adaptive single-qubit measurements, thereby producing $\mathcal{L}^*$. An example of this is depicted in figure 3, where a single-qubit MBQC channel is produced from the renormalization of a 2D lattice.
The size of blocks on $\mathcal{L}$ required for renormalization to a fixed $\mathcal{L}^*$ depends only on the percolation threshold $p_c$ of $\mathcal{L}$, as produced by the lattice's structure. Reducing the overall resource requirements for an LOQC device therefore relies on producing a lattice with low $p_c$ without the need for high-degree and therefore costly microcluster resource states. Initial work on renormalization identified cubic, diamond and pyrochlore lattices as potential candidates, requiring 7-, 5- and 4-qubit microcluster resources respectively [8]. By extending a percolation approach to the generation of resource states, it was shown that both microcluster creation and fusion could be achieved from boosted fusion [17, 18] of 3-photon GHZ states to produce a 'brickwork' diamond lattice with $p_c < p_f$ [10] and pyrochlore [9]. Recently, this scheme was further generalized to higher-dimensional lattices and n-qubit microclusters [19]. After $\mathcal{L}$ has been constructed, renormalization can be abstracted to the graph-theoretical problem of finding crossing clusters on percolated lattices, which can be solved efficiently [24].
Commonly, schemes for generating $\mathcal{L}$ correspond to a bond percolation, where successful bonds correspond to open edges [25, 26]. On percolated lattices with bond probability p, the existence of an infinite open cluster exhibits threshold behaviour. In the limit of an infinite lattice $\mathcal{L}_\infty$, the probability $P_\infty(p, \mathcal{L}_\infty)$ that an infinite open cluster $\mathcal{C}_\infty$ exists undergoes a phase transition (from 0 to 1) at $p = p_c$. This threshold represents the division between two distinct percolation regimes for $p < p_c$ and $p > p_c$, known respectively as the sub- and super-critical regime. The degree of connectivity within the lattice is fundamentally different between these regimes; for example, the scaling in size of the largest connected component transitions from sub-linear to linear across the threshold, as depicted in figure 1(a). For finite lattices $\mathcal{L}$, the finite-sized analogue to $P_\infty$ is the probability $P_i(p, \mathcal{L})$ that a spanning cluster $\mathcal{C}$ exists along the $i$ direction, thereby containing a path connecting opposite faces of the lattice block along axis $i$. Thresholds for $P_i(p, \mathcal{L})$ correspond to continuous functions, becoming sharper for larger lattices and converging to $P_\infty(p, \mathcal{L}_\infty)$, as depicted by figure 1(b). In practice, percolation thresholds can be found by identifying the crossing point of the functions $P_i(p, \mathcal{L})$ for various sizes of $\mathcal{L}$ [26], or numerically using the Newman-Ziff algorithm [27].
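As an illustration of these finite-size spanning probabilities, the sketch below estimates the probability that a bond-percolated $L \times L \times L$ cubic lattice contains a cluster connecting two opposite faces, by direct Monte Carlo sampling with a union-find structure. This is a simple stand-in for the far more efficient Newman-Ziff method cited above; all function names and parameter choices here are our own illustrative assumptions.

```python
import random

def spanning_probability(L, p, trials=200, seed=0):
    """Monte Carlo estimate of the probability that a bond-percolated
    L x L x L cubic lattice contains a cluster spanning the x axis."""
    rng = random.Random(seed)
    n = L * L * L
    idx = lambda x, y, z: (x * L + y) * L + z

    def find(parent, a):
        # union-find root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    hits = 0
    for _ in range(trials):
        parent = list(range(n))
        for x in range(L):
            for y in range(L):
                for z in range(L):
                    a = idx(x, y, z)
                    # open each of the three "positive" bonds with probability p
                    for b in ((idx(x + 1, y, z) if x + 1 < L else None),
                              (idx(x, y + 1, z) if y + 1 < L else None),
                              (idx(x, y, z + 1) if z + 1 < L else None)):
                        if b is not None and rng.random() < p:
                            ra, rb = find(parent, a), find(parent, b)
                            parent[ra] = rb
        # spanning along x: some cluster touches both the x=0 and x=L-1 faces
        left = {find(parent, idx(0, y, z)) for y in range(L) for z in range(L)}
        right = {find(parent, idx(L - 1, y, z)) for y in range(L) for z in range(L)}
        if left & right:
            hits += 1
    return hits / trials
```

Sweeping p for several L and locating where the resulting curves cross reproduces, at small scale, the threshold-sharpening behaviour described above.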
In order to exploit percolation phenomena within a scheme for quantum computation, [10] also considered percolation on a subregion of the lattice with a small cross-section, which is to be used as a single-qubit channel for MBQC. By simulating $P(p = 0.75, \mathcal{L})$ for $L \times L \times L_t$ brickwork diamond lattices over a range of $L$ (for $L \ll L_t$), it was shown that long-range percolation, and hence a single-qubit channel, is produced above some minimum $L$. This result can also be applied to long-range renormalization.

Long-range percolation for single-qubit channels
Our first set of new results extends the study of lattice percolation for single-qubit channels presented in [10], which was limited to the generation of the partially amorphous⁷ and anisotropic brickwork diamond lattice, built specifically with $p_f = 0.75$ fusion gates. To do so, we present a generalized model of percolation on elongated bond-percolated cubic lattices and establish a relationship between the minimum side length $L_{min}$ required for consistent long-range percolation and the edge probability p.
Figure 1. The probability of percolation P as a function of edge probability p, depicted for small, medium and large lattices ($L = 10$, 20 and 100 respectively), showing the phase transition between sub- and super-critical percolation at the percolation threshold $p_c$.

⁷ Here partially amorphous describes a lattice that may contain bonds other than those defined by the lattice structure, such as diagonal edges or edges between non-adjacent nodes. When constructing a brickwork diamond lattice by the scheme presented in [10], this occurs for certain choices of fusion gate bases.
The model we use is as follows: consider a block of percolated $L \times L \times L_t$ cubic lattice $\mathcal{L}_t$ with edge probability p, where $L \ll L_t$, depicted in the inset of figure 2. On $\mathcal{L}_t$, we examine the existence of an end-to-end spanning cluster, occurring with probability $P_t(p, \mathcal{L}_t)$. To produce a reliable single-qubit channel, we specifically consider probabilities of percolation near unity, $P_t \approx 1$. We therefore generally consider successful outcomes (for percolation and, in later sections, pathfinding) as having probability of at least 0.95, and long-range as referring to $L_t \geq 1000$. These definitions are chosen such that if the above conditions are satisfied, a renormalized qubit loss rate below $10^{-3}$ can be achieved (given reasonable assumptions of renormalization blocks with side length $O(10)$ in the scheme of Kieling et al [8])⁸. Given the known trade-off between correctability of qubit error and qubit loss for topological codes [28], minimizing loss rates is essential for maximizing tolerance of unavoidable computational errors. Such a low rate is also expected to be a negligible contribution to renormalized qubit loss in the face of other potential sources of error within the architecture (such as photonic qubit loss, detector inefficiencies, distinguishability, etc).
However, within this model, percolation phenomena are less well studied than in the standard regime. When considering finite-sized, elongated lattices such as $\mathcal{L}_t$, it is challenging to make analytic statements about the existence of spanning clusters, as can often be done in the limit of infinite lattices. For example, while for an infinite lattice one can find some $p < 1$ such that percolation is certain, for a lattice $\mathcal{L}_t$ of fixed cross-section, $P_t \to 0$ as $L_t \to \infty$ for all $p < 1$⁹. As such, we highlight that all results presented in this work are expected to have some minor functional dependence on our specific definitions of successful and long-range given above. We therefore apply a more phenomenological and empirical approach to the relevant percolation effects; within the context of LOQC, such results provide important information for designing an architecture.
We now consider the following question: what is the minimum side length $L_{min}$ required to successfully produce a long-range spanning cluster $\mathcal{C}$ on $\mathcal{L}_t$ as a function of edge probability p? To answer this question numerically, we have generated instances of $L \times L \times 1000$-sized $\mathcal{L}_t$ for a given p, and identified the minimum value $L = L_{min}$ for which $P_t \geq 0.95$. In figure 2 we show values of $L_{min}$ over a range of $p > p_c$. We observe that for edge probabilities well above $p_c = 0.2488$ (the percolation threshold for a simple cubic lattice [29]), small $L_{min}$ can be achieved (such as $L_{min} = 5$ for $p = 0.5$), with small increases in $L_{min}$ providing large reductions in p. However, as p approaches $p_c$, the scaling in $L_{min}$ is less favourable, requiring progressively greater increases in $L_{min}$ for incremental reductions in p. This scaling region suffers from particularly punitive resource costs if used for MBQC, as the number of qubits in $\mathcal{L}_t$, $L^2 \times 1000$, scales quadratically in L. We also note that the relationship $L_{min}(p)$ can be inverted to define $p_{min}(L)$, such that for a given L, long-range percolation can only be achieved for some $p \geq p_{min}$. Furthermore, we can consider the implications of these results for a renormalization-based LOQC scheme. In this context, $L_{min}$ provides a lower bound on the side length of renormalization blocks. Whether or not this bound can be reached depends on finding intersections between spanning clusters connecting pairs of opposing faces within a single block as well as between adjacent blocks. This is especially problematic for p close to $p_c$, where inter- and intra-block connectivity is sparse; for p well above $p_c$, however, the increased connectivity also increases the likelihood that such intersections occur.

Figure 2. $L_{min}$ (for $L_t = 1000$) as a function of edge probability p for a cubic lattice. For a given edge probability, $L_{min}$ represents not only the smallest L required for pathfinding, but also the smallest renormalization block size achievable.
Inset: an illustrative example of a block of percolated cubic lattice with a valid percolated path highlighted in red.

⁸ This can be seen by noting that if the probability of creating 100 renormalized qubits is greater than 0.95, then the probability of creating a single renormalized qubit is (to a reasonable approximation) greater than $0.95^{1/100} \approx 0.9995$, and thus the loss rate for said qubit is less than $10^{-3}$.

⁹ This can be seen by considering that the probability of no open edges occurring between two adjacent layers spanning the cross-section of the block is $\Gamma = (1-p)^{L^2}$, and hence the probability that this never occurs over $L_t$ layers is $(1-\Gamma)^{L_t}$. Since a spanning cluster is contingent on this never occurring, $P_t < (1-\Gamma)^{L_t}$; but for $p < 1$, $(1-\Gamma)^{L_t} \to 0$ as $L_t \to \infty$, and therefore in the limit of infinite length, percolation never occurs.
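The numerical experiment of this section can be sketched at reduced scale as follows. The helper names are our own, and the block length and trial counts are kept deliberately small for illustration (the results above use $L_t = 1000$ and a 0.95 success criterion over many more samples).

```python
import random

def spans_time_axis(L, Lt, p, rng):
    """True if a bond-percolated L x L x Lt cubic block contains a
    cluster connecting the t = 0 face to the t = Lt - 1 face."""
    n = L * L * Lt
    idx = lambda t, y, z: (t * L + y) * L + z
    parent = list(range(n))

    def find(a):
        # union-find root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for t in range(Lt):
        for y in range(L):
            for z in range(L):
                a = idx(t, y, z)
                for b in ((idx(t + 1, y, z) if t + 1 < Lt else None),
                          (idx(t, y + 1, z) if y + 1 < L else None),
                          (idx(t, y, z + 1) if z + 1 < L else None)):
                    if b is not None and rng.random() < p:
                        ra, rb = find(a), find(b)
                        parent[ra] = rb
    near = {find(idx(0, y, z)) for y in range(L) for z in range(L)}
    far = {find(idx(Lt - 1, y, z)) for y in range(L) for z in range(L)}
    return bool(near & far)

def estimate_L_min(p, Lt=100, target=0.95, trials=40, seed=1):
    """Smallest cross-section side length L whose estimated spanning
    probability reaches `target` (linear scan, illustrative only)."""
    rng = random.Random(seed)
    for L in range(2, 20):
        hits = sum(spans_time_axis(L, Lt, p, rng) for _ in range(trials))
        if hits / trials >= target:
            return L
    return None
```

For edge probabilities well above the cubic-lattice threshold, the scan terminates at small L; as p is lowered towards $p_c$, progressively larger L (and far more samples) are needed, mirroring the scaling shown in figure 2.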

Limited-lookahead pathfinding
In a physical LOQC device, $\mathcal{L}$ exists in one time and two spatial dimensions, with node coordinates $(t, y, z) \in \mathbb{Z}^3$ and size $L_t \times L_y \times L_z$. To construct $\mathcal{L}$, at each time t from $t = 0$ to $t = L_t$, an $L_y \times L_z$ layer of $\mathcal{L}$ is created and entangled to the previous layer at $t - 1$, where $L_y$ and $L_z$ are fixed by the renormalization protocol. However, all-optical storage of $L_t$ lattice layers in time would require lengthy delay lines, producing a physical qubit loss rate that scales with computation length (for some applications $L_t$ is effectively unbounded); under such conditions, it is unlikely such a scheme could succeed. It is therefore expected that an LOQC device will have a finite fixed depth, storing only a finite-depth window W of the lattice at any time t. In this model, depicted in figure 3, any classical co-processing algorithms applied to $\mathcal{L}$ suffer from a limited lookahead, preventing analysis of the complete $\mathcal{L}$ (as previously assumed by algorithms for MBQC and renormalization). Under this limitation, previously considered algorithms no longer apply, or their optimality proofs and scaling efficiencies are no longer guaranteed. To address this, new non-trivial dynamic algorithms must be designed.
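The finite-depth window model can be pictured as a sliding buffer over lattice layers. A minimal sketch, assuming layers arrive as an arbitrary iterable (the generator name is our own):

```python
from collections import deque

def active_block_windows(layers, W):
    """Yield the sliding active block (a list of W + 1 consecutive
    layers) as each new far layer is revealed and the nearest layer is
    dropped.  Only W + 1 layers are ever held in memory, mirroring the
    finite-depth device."""
    window = deque(maxlen=W + 1)
    for layer in layers:
        window.append(layer)       # reveal the next far layer
        if len(window) == W + 1:   # deque's maxlen discards the nearest layer
            yield list(window)
```

Any pathfinding or renormalization routine driven by this generator can, by construction, only inspect the layers of the current active block, which is exactly the limited-lookahead restriction analysed below.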
However, finding optimality proofs for graph algorithms that only ever have partial knowledge of a problem is highly non-trivial, and different input scenarios may require different algorithmic strategies for optimal performance. To study the limitations of the necessary dynamic algorithms, we consider the aforementioned task of identifying single-qubit channels on percolated lattices. Specifically, we extend the task of finding a spanning cluster presented in section 3 to the identification of a single end-to-end path, given a limited lookahead. To do so, we next construct a basic limited-lookahead pathfinding (LLP) algorithm.

Random-node pathfinding
We now introduce some notation needed for describing the LLP algorithm. Consider again the lattice $\mathcal{L}_t$ as defined in section 3, with nodes labelled by their coordinates $(t, y, z)$. We define a layer $l_t$ as the subgraph of $\mathcal{L}_t$ induced by the 2D $L \times L$ layer of nodes at time t; its nodes represent qubits that are potentially usable for pathfinding. A layer $l_t$ may contain more than one connected component. We also define $\mathcal{P}(v_a, v_b)$ as a path between nodes $v_a$ and $v_b$. To represent a limited lookahead, we consider the restriction that at a given time t, we can only have knowledge of the lattice structure within the finite block $\mathcal{L}_{t, t+W}$ of fixed window length W. This 'visible' block of lattice is known as the active block. At the end of every time step, the next far layer of lattice $l_{t+W+1}$ is revealed and the nearest layer $l_t$ is removed, the active block becoming $\mathcal{L}_{t+1, t+W+1}$ for time $t + 1$.

Figure 3. Renormalization process applied to a 2D lattice (existing in one time and one spatial dimension) with limited lookahead to create an MBQC single-qubit channel. The lattice block can be divided into three regions in time: past, active and future. Past qubits exist in the past, before time t, having already been created and destructively measured by the device. Active qubits exist in the present, between times t and t + W, having been created by the device but not yet measured. Future qubits exist in the future, after time t + W, and are yet to be created. Here the red dashed lines and highlighted edges correspond to the allocation of renormalization blocks and renormalization paths respectively.
This limitation requires us to consider an iterative approach to finding spanning paths, which we shall call limited-lookahead pathfinding, where at each time step the algorithm must choose a path through the lattice based on only partial information about the lattice. Specifically, we shall consider a low-complexity instance of pathfinding, which we call Random-node pathfinding. We consider a naive algorithm such as this to identify a lower bound both on the success rates of general pathfinding strategies and on their computational complexities. To find a path $\mathcal{P}$, the following pathfinding algorithm is applied (depicted visually in figure 4), starting at $t = 0$ (with $\mathcal{P} = v_0$ for some $v_0 \in l_0$) and repeated until success or failure occurs:

Random-node pathfinding:

1) Find far nodes. From the current path node $v_t$ in the nearest layer $l_t$, find the set $\mathcal{F}_t = \{v \in l_{t+W} : \mathcal{P}(v_t, v) \text{ exists}\}$ of all nodes in the farthest active block layer $l_{t+W}$ to which a path exists (only considering nodes and edges within the active block). If $\mathcal{F}_t = \emptyset$, pathfinding fails.
2) Find path to far node. Randomly pick a far node $f_t$ from $\mathcal{F}_t$, and find the shortest path $\mathcal{P}(v_t, f_t)$.

3) Find next layer node. Find the node in layer $l_{t+1}$ that occurs furthest along $\mathcal{P}(v_t, f_t)$ and assign it to the next time-step path node $v_{t+1}$.

If the final node $f_t$ in $\mathcal{P}$ is a member of $l_{L_t}$, pathfinding succeeds. The first thing to note is that this algorithm is far from optimal, and in fact is almost the worst strategy one could apply (other than making deliberately bad path choices). The only non-trivial analysis of structure occurs at step 3, where the action of finding the furthest $l_{t+1}$ layer node allows the inclusion of paths that double back on themselves, advancing forwards and then back to layer $l_t$ before eventually reaching the final layer, an example of which is shown in step 3 of figure 4. The most computationally expensive operation in the algorithm occurs in step 1, when finding $\mathcal{F}_t$. This operation consists of running Dijkstra's algorithm (for finding shortest paths on arbitrary graphs) from $v_t$, providing Random-node pathfinding with an overall worst-case performance of $O(|E| + |V|\log|V|)$ [30].
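The three steps above can be sketched as follows. This is an illustrative implementation under stated assumptions: we substitute breadth-first search for Dijkstra's algorithm (on an unweighted lattice both return shortest paths), and the adjacency-map representation keyed by $(t, y, z)$ tuples, together with all helper names, are our own choices.

```python
import random
from collections import deque

def percolated_cubic(L, Lt, p, rng):
    """Bond-percolated L x L x Lt cubic lattice as an adjacency map."""
    nodes = [(t, y, z) for t in range(Lt) for y in range(L) for z in range(L)]
    adj = {v: set() for v in nodes}
    for (t, y, z) in nodes:
        for nb in ((t + 1, y, z), (t, y + 1, z), (t, y, z + 1)):
            if nb in adj and rng.random() < p:
                adj[(t, y, z)].add(nb)
                adj[nb].add((t, y, z))
    return adj

def random_node_pathfind(adj, L, Lt, W, rng):
    """Random-node limited-lookahead pathfinding (illustrative sketch).
    Returns the path found as a list of nodes, or None on failure."""

    def bfs(src, t_lo, t_hi):
        """Shortest-path tree from src using only layers t_lo..t_hi."""
        prev = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if t_lo <= v[0] <= t_hi and v not in prev:
                    prev[v] = u
                    q.append(v)
        return prev

    path = [(0, rng.randrange(L), rng.randrange(L))]    # random v_0 in l_0
    for t in range(Lt - 1):
        far = min(t + W, Lt - 1)
        prev = bfs(path[-1], t, far)
        far_nodes = [v for v in prev if v[0] == far]    # step 1: find far nodes
        if not far_nodes:
            return None                                 # pathfinding fails
        f = rng.choice(far_nodes)                       # step 2: random far node
        seg = []
        while f is not None:                            # rebuild shortest path
            seg.append(f)
            f = prev[f]
        seg.reverse()
        # step 3: node of layer l_{t+1} occurring furthest along the path
        nxt = max(i for i, v in enumerate(seg) if v[0] == t + 1)
        path.extend(seg[1:nxt + 1])
    return path if path[-1][0] == Lt - 1 else None
```

On success the returned node list starts in layer $l_0$, ends in the final layer, and may double back on itself, as permitted by step 3.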
Finding optimal pathfinding strategies that demand only minimal values of W is very challenging in general, and the Random-node strategy can be used to explore the worst-case scenario, from which improvements may be made. Inevitably, more complex strategies requiring detailed analysis of the active block's configuration are computationally expensive, which is a major concern for real-time implementation in hardware devices. A secondary aim of our work is therefore to minimize the computational overhead required for pathfinding, and the Random-node strategy also adheres to this goal.

Successful long-range pathfinding
We now consider the conditions required for successful long-range LLP and show that these can be framed in terms of standard block percolation. This aims to reduce the complexity of analysing a dynamic pathfinding algorithm to the simpler problem of calculating percolation statistics on small lattices.
First and foremost, pathfinding fails if no spanning cluster exists. To ensure that a path does exist (with probability $P_t \geq 0.95$ for a given pathfinding distance $L_t$), we immediately require two conditions: $p > p_c$ and $L \geq L_{min}(p)$. Having satisfied these, we then seek to identify the conditions such that pathfinding almost certainly succeeds. In this section, we prove that pathfinding always succeeds if the number of end-to-end components in each active block never exceeds one, and subsequently conjecture that successful pathfinding is only achieved if the probability of this number exceeding one is less than some small $\epsilon$.
Before outlining our argument we assert two key assumptions. Firstly, we assume a unique spanning cluster always exists across $\mathcal{L}_t$ (where unique specifies that only one ever exists), and hence exclude any cases where long-range block percolation does not occur (e.g. by assuming $L > L_{min}(p)$). This assumption is justified by recalling that for $p > p_c$ the mean size of a finite cluster decreases exponentially in p [25], thereby preventing more than one cluster from spanning the lattice. Given this assumption, failure therefore only occurs from incorrect choices made during pathfinding. Secondly, we assume that at any given time, the pathfinding algorithm may only access information about the lattice's structure within the active block, i.e. it cannot store in memory any information about past lattice structure, nor gain preemptive knowledge of any future lattice structure. This allows us to consider each individual active block as a single instance of block percolation on a small lattice, and hence percolation statistics are constant across all active blocks.
Under these assumptions, the probability $P_{pf}(\mathcal{L}_t, W)$ of pathfinding across $\mathcal{L}_t$ with window length W is given by the product over time steps of the probabilities $P_{pf}^t$ that a next node $v_{t+1}$ is chosen that still allows for successful pathfinding to distance $L_t$. Trivially, for $W = L_t$ we have $P_{pf} = P_t$ (from our first assumption). However, for $W < L_t$, the values of $P_{pf}^t$ are less easily computed. We can see that the probability of successful pathfinding given next node choice $v_{t+1}$ depends on the probability that $v_{t+1}$ exists in a component extending to the farthest layer, that is

$v_{t+1} \in \bigcup_i \mathcal{C}_i$, (5)

where the $\mathcal{C}_i$ are the end-to-end connected components contained within $\mathcal{L}_{t+1, L_t}$ that have one or more nodes in both $l_{t+1}$ and $l_{L_t}$. However, at any given time step, we cannot know whether $v_{t+1}$ satisfies this condition, as only the active block is visible. Ideally, we therefore desire some feature function F of active blocks such that, whenever F is satisfied on every active block, any valid choice of $v_{t+1}$ still permits successful pathfinding. From this, we then conjecture that if lattice parameters can be found such that $P(F_t) \geq 1 - \epsilon$ for every active block $\mathcal{L}_{t, t+W}$, successful long-range pathfinding will be achieved. We find that one such feature function can be defined from the uniqueness of end-to-end connected components: if the number n of end-to-end connected components within the active block never exceeds one, then any node $v_{t+1}$ connected to the far layer either satisfies equation (5), and can therefore reach layer $l_{L_t}$, allowing successful pathfinding, or cannot contribute to pathfinding, and therefore represents a dead end, which causes pathfinding to fail. Note that, due to the effect of finite block side lengths L, there is always some nonzero probability that equation (5) is not satisfied. To satisfy this requirement, we define (for a given p and L) the minimum window length $W_{min}(L, p)$ as the smallest W such that $P(n = 1) \geq 1 - \epsilon$. Note that for $L \geq L_{min}$ such a minimum window length must exist: for any $\mathcal{L}_t$, either $W_{min}$ is found when $P(n = 1) \geq 1 - \epsilon$ first occurs, or else no lookahead is required (and $W_{min} = 2$).
We further define $W_{max} = W_{min}(L, p_{min})$ as the maximum window length required by LLP, occurring for a given L at $p_{min}$, above which any further increase in W provides no advantage.
We have shown that for a given $\mathcal{L}_t$ with lattice parameters p and L, $n = 1$ is a sufficient feature function. We hence conjecture that $P(n = 1) \geq 1 - \epsilon$ is a necessary and sufficient condition for successful long-range LLP. That is, if this condition is not satisfied then no strategy (regardless of complexity) can ever produce successful long-range LLP, and this condition is always satisfied for $W \geq W_{min}$ as $\epsilon \to 0$.
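The feature function n can be computed for a given active block with a simple flood fill, as sketched below (assuming the block is given as an adjacency map over $(t, y, z)$ tuples; the function name is our own). Sampling random active blocks and recording the fraction with $n = 1$ then gives a direct estimate of $P(n = 1)$, and hence of $W_{min}$.

```python
def end_to_end_components(adj, t_lo, t_hi):
    """Count connected components containing at least one node in both
    the nearest (t_lo) and farthest (t_hi) layers of an active block.
    Nodes are (t, y, z) tuples; `adj` maps nodes to neighbour sets."""
    seen, n = set(), 0
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], []          # flood-fill one component
        seen.add(v)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        # end-to-end: the component touches both bounding layers
        if any(u[0] == t_lo for u in comp) and any(u[0] == t_hi for u in comp):
            n += 1
    return n
```

Because the component search is linear in the number of nodes and edges of the active block, monitoring this feature function in real time adds little classical co-processing overhead.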

Numerical simulation
We now consider numerical simulation of LLP applying a Random-node strategy.
Firstly, we address the conjecture that $P(n = 1) \geq 1 - \epsilon$ is a necessary and sufficient condition for successful pathfinding. Figure 5 depicts the probability of successful pathfinding $P_{pf}(p, W)$ alongside $P(n = 1)$, with $\epsilon = 10^{-2}$. We note that $P_{pf}(p, W)$ drops significantly as $P(n = 1)$ decreases below $1 - \epsilon$, further validating our choice of feature function. In conjunction with the proofs of our feature function presented in section 4.2, these results support our conjecture that $P(n = 1) \geq 1 - \epsilon$ is a necessary and sufficient condition for successful pathfinding.

We now consider the interdependence of the pathfinding parameter $W_{min}$ and lattice parameters L and p. To do so, we consider the probability of successful pathfinding $P_{pf}(p, W)$ on instances of cubic lattice $\mathcal{L}_t$ with dimensions $L \times L \times 1000$ over a range of p and W. Figure 6 depicts such a simulation for $L = 7$, with $L > L_{min}(p)$ (see figure 2). This supports the conjecture that $P(n = 1) \geq 1 - \epsilon$ is a necessary and sufficient condition for successful pathfinding, achieved for some minimum window length. Here we find that successful pathfinding, $P_{pf} \geq 95\%$, occurs for $W_{min} \approx 16$ and is achieved for $\epsilon \approx 0.01$, with both thresholds respectively depicted by coloured lines.
The first and most striking feature of these results is the sharp threshold at $p \approx 0.4$ for large W. This clearly identifies the minimum edge probability $p_{min}(L = 7)$ below which no long-range percolation occurs, and agrees with the numerical $L_{min}$ results depicted in figure 2, showing that $p_{min}(L = 7) \approx 0.4$. From the argument made in section 4.2, we expect this pathfinding threshold to recreate the standard block percolation threshold of an $L \times L \times 1000$ cubic lattice. We confirm this numerically with figure 7, which depicts LLP and block percolation thresholds found over a range of L, showing LLP reproducing long-range block percolation statistics. Furthermore, we find that percolation statistics found for active blocks can be used to estimate pathfinding performance over long distances. In this simplified stacked-block heuristic, we model long-range LLP as approximately $1000/W$ consecutive instances of block percolation, as if adjacently stacked face-to-face in time to form the full block $\mathcal{L}_t$ (without requiring that two adjacent blocks' percolation paths are connected at adjacent faces), such that $P_{pf}(p, W) \approx P_{blk}(p, W)^{1000/W}$. Figure 7 shows that even for large L, this heuristic provides a good estimate for $P_{pf}(p, W)$ and $P_t(p)$ (when $W \leq W_{max}$).

The second feature we observe is the effect of small window lengths upon pathfinding. For $p = p_{min}$, we observe a maximum window length $W_{max}(L, p_{min}) \approx 10$. As conjectured, $W > W_{max}$ provides no additional advantage.

Figure 7. LLP and block percolation thresholds found over a range of L. The window length used for each L was chosen to ensure the thresholds found were due to percolation effects, rather than pathfinding's limited lookahead. By comparison with long-range block percolation, depicted by dashed lines, we can see that $P_{pf} \approx P_t$, confirming that for sufficiently large window lengths, LLP is equivalent to long-range block percolation.
Furthermore, we find that within this regime both long-range block percolation and LLP can be approximated as multiple stacked instances of $L \times L \times 15$ active block percolation, such that both are well approximated by the product of the corresponding single-block percolation probabilities.

To fully understand the parameter space for successful pathfinding, we consider contours of $P_{pf} = 0.95$ in p and W for $L = 2, 3, 4, 5, 10$ and 15, depicted in figure 8. From these results we can also incorporate the effects of L into our previous analysis. As identified by the results of figure 2, an increase in L reduces the minimum edge probability $p_{min}$ at which long-range percolation occurs, and hence the value of $p_{min}$ at which LLP can succeed. However, whilst an increase in L (for a fixed W) always decreases the required p for successful pathfinding, these gains are most significant when W is also increased, allowing the new $p_{min}(L)$ to be achieved. Such insights provide far greater clarity into the inherent resource trade-offs in an LOQC device.
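The stacked-block heuristic reduces to a one-line estimate. A sketch with hypothetical helper names, assuming (as above) that long-range pathfinding behaves as $L_t/W$ independent blocks:

```python
def stacked_block_estimate(p_block, Lt=1000, W=10):
    """Stacked-block heuristic: treat long-range LLP over Lt layers as
    Lt / W independent face-to-face instances of active-block
    percolation, each succeeding with probability p_block."""
    return p_block ** (Lt / W)

def block_rate_needed(target=0.95, Lt=1000, W=10):
    """Per-block percolation rate needed to reach a long-range target."""
    return target ** (W / Lt)
```

For example, under these assumptions a per-block rate of 0.999 over $L_t = 1000$ with $W = 10$ yields a long-range estimate of about 0.90, while reaching the 0.95 long-range criterion requires a per-block rate of roughly 0.9995.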
Finally, we note that even for the largest active blocks considered, $p_{min}(L = 15, W)$ had yet to approach $p_c$. This indicates that successful pathfinding is likely to require a lattice with edge probability greater than $p_c$ by some non-negligible amount. Furthermore, when more sophisticated and computationally expensive pathfinding strategies were simulated, they did not significantly reduce the required window length.

Implications for LOQC architectures
Using the results presented in section 4.3, additional clarity can now be given to the resource trade-offs inherent to a realistic LOQC device. Firstly, generating a lattice with $p > p_c$ is necessary for the reduction of active block size. For p close to $p_c$, small increases in p will lead to significant resource savings in block size. The success rate $p_f$ of LOQC's boosted fusion gates¹⁰ can be increased from 50% to 75% through the consumption of either a Bell state or four single photons per gate [17, 18]. However, above this first level of boosting, gains in $p_f$ become more marginal at the expense of increasingly costly resource states (which cannot be produced deterministically using linear optics without significant resource overheads). We therefore believe it likely that LOQC will utilize boosted fusion with at least $p_f = 75\%$, from which a choice of active block dimensions, W and L, can be made accordingly. We note that in [10] it was shown that $p_f = 75\%$ greatly exceeds the percolation threshold of $p_c = 62.5\%$. In practice, experimental fusion gate success rates will be reduced by error mechanisms, such as photon loss. However, if this reduction can be sufficiently minimized, our results indicate that small active block sizes can be achieved, thereby reducing overall resource requirements for LOQC.
Secondly, the probability of successful pathfinding affects the accommodation of bond/qubit loss (see footnote 11) on a renormalized lattice. From the perspective of lattice renormalization, a failure in pathfinding simply represents a missing bond/qubit along the time axis. Thus the quantum error correction (QEC) protocol's ability to deal with bond/qubit loss on the renormalized lattice explicitly determines the required P_pf (which adds to all other loss mechanisms). For example, consider the pathfinding requirements for a linear cluster of 100 renormalized qubits, with each renormalized block being 10 layers long, such that the dimensions of the target lattice are L × L × 1000. If fewer than one bond/qubit may be lost per string of 100 renormalized qubits, then we require a per-qubit pathfinding success probability P_pf > 0.99. However, if more bond/qubit loss can be accommodated, this reduces the required pathfinding probability, thus allowing for a further reduction in L or W.
Footnote 10: Note that p ≠ p_f, as in current proposals multiple fusion operations must succeed for a given edge to be created in the target lattice. Furthermore, failure modes of boosted fusion gates can also maintain connectivity, producing additional connectivity outside the standard percolation model.
Footnote 11: If a block lacks the connectivity to be successfully renormalized, one can choose to represent this either as the loss of individual bonds or of an entire qubit.
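The loss-budget arithmetic of the worked example above can be sketched directly. The chain length and loss budget are those of the example; the assumption of independent per-qubit pathfinding failures is ours:

```python
# Sketch of the worked example: a linear chain of 100 renormalized
# qubits, tolerating fewer than one pathfinding failure per chain on
# average.  Assumes independent per-qubit failures, so the expected
# number of losses is chain_length * (1 - P_pf).

def required_p_pf(chain_length: int, max_expected_losses: float) -> float:
    """Per-qubit pathfinding success needed to stay within the loss budget."""
    return 1 - max_expected_losses / chain_length

p_req = required_p_pf(chain_length=100, max_expected_losses=1.0)
print(f"required per-qubit P_pf > {p_req:.2f}")  # -> 0.99
```

A QEC protocol tolerating, say, five losses per chain of 100 would relax the requirement to P_pf > 0.95, directly reducing the required L or W.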
Finally, we expect the identified resource costs and trade-offs to be somewhat sensitive to our chosen value of P_pf, and would expect a reduction in the size of the successful long-range pathfinding parameter space (L, W, p) if it were increased (say, to 0.99). However, we expect the effect of such a difference to be very small, and to decrease further as P_pf → 1; our presented results therefore provide an accurate description of the relevant limited-lookahead phenomenon.

Open questions
There are other architectural necessities that must be incorporated to produce a complete model. In this work pathfinding is only considered within the context of producing a single-qubit channel, but in order to produce a renormalized lattice for QEC, percolated paths must also be found in y and z. While a renormalization algorithm with optimal scaling is known for 2D [20], none is known for higher-dimensional lattices. Additionally, for a realistic device, local pathfinding algorithms must also be designed to reduce the associated computational overheads of finding percolated paths in both y and z (for example, similar to recently proposed cellular automata decoders for QEC [31]).
Also, we do not consider the effects of experimental errors on our pathfinding strategy. It is known that one of the most significant challenges for LOQC is photon loss. The teleportation of quantum information via MBQC in our model assumes that each photon is measured successfully. However, in a physical device some degree of both heralded and unheralded photon loss will undoubtedly occur from active components and memory delay lines. For heralded qubit loss occurring in the lattice generation stage, it is known that the affected qubit's neighbours can be removed from the lattice. With this approach, it was shown in [10] that a loss rate of up to 1.5% could be tolerated by the diamond brickwork lattice (with p_f = 75%). Given that for W ≥ W_max(L, p) we recover standard percolation statistics, we therefore expect a similar loss tolerance for our pathfinding model. For unheralded qubit loss, however, it is not yet known whether it is possible to perform MBQC without an explicit loss-tolerant encoding (such as that presented in [32]), especially under the realistic restriction of a fixed order of qubit measurement.
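The heralded-loss countermeasure described above (removing the lost qubit's neighbours from the cluster) can be sketched as a graph operation. The adjacency-dictionary representation and function names are our own illustrative choices, not our simulation code:

```python
# Sketch: on heralded loss of a qubit, remove it and its neighbours
# from the cluster-state graph.  In practice the neighbours are removed
# by Z measurements, which disentangle them from the rest of the state.

def remove_heralded_loss(graph: dict, lost: str) -> dict:
    """Return a copy of the graph with `lost` and its neighbours deleted."""
    doomed = {lost} | set(graph.get(lost, ()))
    return {v: [u for u in nbrs if u not in doomed]
            for v, nbrs in graph.items() if v not in doomed}

# Tiny 5-qubit linear cluster: losing 'c' also removes its neighbours
# 'b' and 'd', leaving 'a' and 'e' disconnected.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(remove_heralded_loss(g, "c"))  # -> {'a': [], 'e': []}
```

On a percolated lattice the same operation simply punches a small hole around each heralded loss, which pathfinding must then route around.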
We additionally note that in the context of an LOQC architecture, our approach here is far from optimal. For example, our pathfinding algorithm only considers a single path per qubit channel at any one time. However, for p ≫ p_c the number of percolation paths spanning one axis of an L × L × L block scales as O(L), compared to O(1) for p > p_c close to p_c [33]. It may therefore be possible to utilize these extra paths as backups to insure against both unheralded photon loss and unforeseen dead ends. This may have the combined effect of both reducing W_min(L, p) and providing loss tolerance, without resulting in an increased susceptibility to accrued Pauli errors (from the increased number of MBQC measurements per single-qubit channel).
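A minimal sketch of the backup-path idea: find one spanning path through a site lattice with breadth-first search, delete its sites, and search again for a node-disjoint backup. A fully open 2D grid is used here for brevity (our simulations use 3D blocks with missing bonds); the lattice representation and BFS routine are illustrative assumptions:

```python
from collections import deque

# Sketch: primary spanning path plus a node-disjoint backup path on a
# 2D site lattice (2D and fully open, purely for brevity).

def bfs_path(open_sites, starts, goals):
    """Shortest path from any start site to any goal site, or None."""
    goals = set(goals)
    queue = deque((s, (s,)) for s in starts if s in open_sites)
    seen = {s for s, _ in queue}
    while queue:
        site, path = queue.popleft()
        if site in goals:
            return path
        x, y = site
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in open_sites and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return None

size = 4  # fully open 4 x 4 grid, spanning left face to right face
sites = {(x, y) for x in range(size) for y in range(size)}
starts = [(0, y) for y in range(size)]
goals = [(size - 1, y) for y in range(size)]

primary = bfs_path(sites, starts, goals)
backup = bfs_path(sites - set(primary), starts, goals)  # node-disjoint backup
print(primary)
print(backup)
```

On a percolated lattice well above p_c, the O(L) available spanning paths mean such disjoint backups will usually exist, so a dead end or unheralded loss on the primary path need not break the qubit channel.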
Lastly, it remains to extend such pathfinding simulations to candidate lattices for percolated LOQC cluster states. Due to the amorphism, anisotropy and correlations of bond percolation applied to the diamond brickwork lattice presented in [10], a direct mapping of resource costs cannot be made from our results. However, preliminary simulations have shown effects comparable to those presented here, suggesting that the presented LLP phenomenon is general to many lattice configurations [34]. Nevertheless, it remains to identify the specific impact of deviations from the standard percolation model, since such lattices must also permit resource-efficient LLP in order to be utilized within an LOQC architecture.

Conclusions and outlook
Realistic architectures for LOQC must consider the physical constraints of a large-scale device, such as a finite and fixed depth. As such, this work has considered the effect of a finite fixed depth on the creation of a single-qubit channel from a percolated cluster state lattice. We have shown that within this model a limited-lookahead pathfinding algorithm can be applied to successfully create such a channel, and we have identified the resource requirements for successful pathfinding. This suggests that an LOQC architecture with a computational window of O(10) layers (i.e. clock-cycles of photon production) is sufficient to produce the almost indefinitely large states required for universal quantum computation. However, we also find that these constraints may require percolation-based LOQC architectures to operate above previously identified minimum resource estimates.
Notably, we find that resource requirements become significant as the cluster state lattice's edge probability approaches its critical threshold. However, this equally implies that even small increases in edge probability (just above the percolation threshold) can provide significant resource savings and allow an LOQC device to operate with a surprisingly low fixed depth.
An additional key result of this work is a significant step towards bridging the gap between high- and low-level architectural requirements. When applied to a specific LOQC architectural schema, the model presented here allows a direct mapping of high-level architectural resource requirements (such as qubit-channel loss rates) onto low-level device requirements (such as device depth and ancilla resource counts). Once identified, this mapping allows the device's fixed finite depth to be effectively ignored, enabling the high-level abstractions (such as QEC protocols) required for studying the overall architecture. Furthermore, by identifying LLP simulation heuristics, the performance of novel candidate lattices for LOQC can be quickly and easily analysed without extensive LLP simulations, a key advantage as architectural models become increasingly sophisticated.