Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts, we eliminate the need for on-demand, high-fidelity photon sources and detectors, replacing them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining a completely specified structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits and consequently eliminating the need for high-frequency, deterministic photon sources.

In recent years, the extraordinary advances in experimental systems and theoretical techniques for quantum information processing have allowed serious questions of architectural design and construction to be discussed [20][21][22][23][24][25][26][27]. This new area of research is generally referred to as quantum engineering (QE). Broadly, the primary goal of QE is to adapt and combine the best experimental technologies and theoretical techniques to construct an experimentally viable large-scale quantum computer.
One proposed architecture was introduced in 2009 [20]. This optics-based computer is based on the topological cluster state model of computation [28][29][30] and a photon-atom-photon coupling device called the photonic module [31]. This architecture illustrated the structure and operation of a fault-tolerant, fully error-corrected quantum architecture. However, the architecture was based on components that were all theoretically deterministic. Deterministic photon-photon coupling utilized the nonlinearity afforded by an atom/cavity system present within each photonic module and photonic sources and detectors were simply assumed to be deterministic and of high fidelity.
It has been well known since the seminal paper of Knill et al [6] that an all-optical quantum computer based on linear elements only allows for coupling between qubits in a probabilistic fashion [32, 33]. Since this result there has been extensive work investigating how probabilistic techniques could be utilized to construct a viable architecture [10], [34][35][36][37][38][39][40][41][42][43][44]. However, while this work demonstrated that, in principle, probabilistic components could be used to slowly grow large entangled states suitable for quantum computation, two problems remained. The first is that two-dimensional (2D) cluster states [45] do not incorporate any protocol for quantum error correction. This problem was addressed by applying error correction protocols on top of the underlying cluster model, resulting in extremely high demands on quantum resources [40]. The second significant problem is that these results did not show how such a massive optical system was to be constructed, arranged and operated when large-scale algorithms and error correction require billions of photons and millions of optical components.
The experimental development of optical systems [19], [46][47][48][49][50][51][52] (and more generally, probabilistic quantum components [53,54]) has in many ways been far more successful than deterministic technologies [13,14], [55][56][57][58][59]. Therefore, an important problem is: Can these more advanced probabilistic technologies still be integrated and used in viable large-scale quantum information architectures? Most research attempts to do this from the bottom up, taking well-established non-deterministic protocols and incorporating them into appropriate computational and error correction models [60,61]. While this addresses some issues related to the more effective use of error correction techniques, these results still do not address the fundamental architectural structure of a large-scale system.
We will attempt to approach this from the top down. The optical architecture proposed in [20] is designed such that its physical structure and operation can be very well defined up to billions of qubits. Since the architectural structure of this system is so well defined, this will be our starting point. The system is constructed from three key deterministic components. We will show how we can remove two of them. We will take the optical architecture consisting of deterministic single-photon sources, deterministic coupling via the photonic module and deterministic single-photon detection and replace the sources and detectors with the photonic module. This will lead to a network running with highly probabilistic single-photon sources and entirely constructed from one quantum component, namely the photonic module.
To combat the issue of probabilistic sources we introduce a perpetual design. As the photonic module can act as a non-demolition photon detector, photons are simply recycled. In this way, probabilistic sources are responsible for two tasks: (i) providing the photons to initialize the network and (ii) replacing photons which are periodically lost during computation. This paper will demonstrate how a highly probabilistic source can be integrated into a large-scale architecture without sacrificing performance or the overall design and operation of the system. Although we focus specifically on the design proposed in [20], these results are applicable to any architecture that essentially mimics the operational properties of the photonic module. This represents the first step in integrating probabilistic technologies with deterministic technologies and will hopefully lead to future architectural designs incorporating an increasing number of probabilistic components.
We begin in section 2 with a brief discussion of the general principles governing the topological cluster state model. In section 3, we review the optical computer introduced in [20]. Section 4 discusses how we can reduce the required quantum technologies by replacing single-photon sources and detectors with the same device utilized in the preparation of the cluster. Section 5 introduces the idea of a perpetual architecture design, where a comparatively small number of photons are recycled again and again to perform large cluster computations. Each element of the architecture is detailed, including how fault tolerance is maintained and how photon loss is reliably detected. The network consisting of photonic sources is discussed in section 6, where highly probabilistic sources are used to 'boot-up' the computer and replace heralded loss events during computation. Finally, in sections 7 and 8, we present several network simulations illustrating that a perpetual computer utilizing photon recycling and highly probabilistic sources can operate effectively.

Figure 1. A unit cell of the topological cluster. Each node represents a physical qubit initialized in the |+⟩ = (|0⟩ + |1⟩)/√2 state and each edge is a controlled-σ_z operation between two qubits. The 2D cross section of the cluster dictates the size (and error-correcting power) of the computer, while the third dimension dictates the total number of computational time steps available. Computation proceeds by measuring the cluster along one of the three dimensions.

Topological cluster state computing
Topological cluster states were introduced by Raussendorf et al in 2007 [28, 29]. This model incorporates ideas from topological quantum computing [62] and cluster state computation [45]. This model for quantum information processing is very attractive as it incorporates fault-tolerant quantum error correction by construction and exhibits a high fault-tolerant threshold.
Computation proceeds via the initial construction of a highly entangled multi-qubit state. Figure 1 illustrates a unit cell of the cluster. Each node represents a physical qubit, initially prepared in the |+⟩ = (|0⟩ + |1⟩)/√2 state, and each edge represents a controlled-σ_z entangling gate between qubits. This unit cell of the cluster extends in all three dimensions, dictating the size and error-correcting strength of the computer. Computation under this model is achieved via the consumption of the cluster along one of the three spatial dimensions [29, 30] (referred to as simulated time). Logical qubits are defined via the creation of 'holes' or 'defects' within the global lattice, and multi-qubit operations are achieved via braiding operations (movement of defects around one another) as the cluster is consumed along the direction of simulated time. Figure 2 illustrates a braided CNOT operation. Qubits within the cluster are selectively measured in the σ_z basis to create and manipulate defects. By measuring the correct set of physical qubits, defects can be moved as the cluster is consumed. Physical qubits not associated with defects are measured in the σ_x basis and are utilized to perform fault-tolerant error correction on the system. The specific details of computation under this model are not important for this discussion; see [29, 30] for further details. For this analysis, we will be examining the network required to successfully create the entangled cluster for computation.

Figure 2. Diagram of a logical CNOT operation in the topological cluster model. Each point in the diagram represents a physical qubit and the cluster is consumed from the front of the image to the rear. This image shows a CNOT which is approximately 50% complete. Logical qubits are defined via pairs of defects; four sets of defects are required in order to perform the braided CNOT. The three blue defects represent the control input, control output and target, respectively. These three defects are known as primal defects [29, 30]. The purple defect is known as a dual defect and is used to enact braided logic operations (logic operations can only be performed between defects of differing type). The total size and separation of each defect dictates the error-correcting power of the topological code.

Optical topological computer
Shown in figure 3 is the basic structure of the preparation network for the computer. The preparation network consists of two sets of fabricated 'wafers' containing an interlaced network of photonic modules [31] oriented at 90° to each other (see figure 3(a)). Each set of parallel wafers is not interconnected; the only connections are at the junctions between horizontally and vertically oriented wafers. In total, there are four separate stages in the preparation network, two on each horizontal wafer and two on each vertical wafer. The computer operates via the injection of single photons into the left-hand side of this network. Each photon then interacts with a total of four individual photonic modules, which act to deterministically entangle photons into the required cluster. After the photons are entangled, they are measured in appropriate bases to perform fault-tolerant, error-corrected quantum computation. Shown in figure 3(b) is a schematic diagram of one of the wafers. This wafer illustrates stages 3 and 4 of the cluster preparation network and an additional array of photonic modules that are utilized to carry out measurement of the cluster (this will be discussed in section 5.2). The total width of each of these wafers remains constant, while the length scales linearly with the size of the computer.

Figure 3. The optical computer [20] is constructed via a stacked array of 'wafers'. Each wafer consists of a network of photonic modules [31], each of which is designed to deterministically couple single photons. (a) The structure of the actual computer. Two sets of wafers, oriented at 90° to each other, are connected to prepare the required entangled cluster. Photons travel through the network from left to right through four stages of cluster preparation before they are measured. (b) Stages 3 and 4 of the cluster preparation network and a second array of photonic modules used for measurement (section 5.2). The width of each wafer is independent of the total size of the computer, while the length increases linearly with the size and/or error-correcting power of the computer.
The injection of single photons into the computer requires a specific temporal arrangement. Illustrated in figure 4 is the temporal arrangement of photons for stages 1 and 2 of the preparation network for one of the vertically oriented wafers shown in figure 3(a). Within the network, half of the optical lines contain photons temporally separated by 2T and the other half contain photons temporally separated by 4T, where T is the operational time of a single photonic module. This arrangement allows the photon stream to essentially 'flow' through the interlaced network of modules, maintaining temporal separation and ensuring that only a single photon is present within any photonic module at any given time. Utilizing this temporal arrangement, it was shown in [20] that this network could deterministically prepare an arbitrarily large 3D topological cluster without employing sophisticated photonic routing or storage. Additionally, the specific operation of every photonic module in the network is completely specified and independent of the size of the computer.

Figure 4. Temporal arrangement of photons for stages 1 and 2 of the preparation network of photonic modules [31]. This temporal arrangement allows each photon to flow through the network of modules. After exiting the network, each photon is immediately measured to perform computation.

Reducing required technologies
The optical computer requires several different high-fidelity quantum components: (i) the photonic module [31]; (ii) high-frequency, on-demand, single-photon sources; (iii) high-fidelity single-photon detectors; and (iv) high-fidelity single-photon switching. It could be argued that the construction of a large number of photonic modules represents the most difficult element of the above list. If the construction of such a device could be achieved, it could be safely assumed that the other required quantum components would also exist with sufficient reliability to construct a large-scale computer. However, what if this is not the case? Can we further reduce the required technology and still design a viable architecture?
Apart from the photonic module, the two components listed above that will require the most extensive development are high-frequency, on-demand single-photon sources and high-fidelity single-photon detectors. These components currently exist at several levels of efficiency and reliability [10, 63]. However, these technologies still require significant work before they are adaptable to a large-scale quantum computer. These are the two components that we will eliminate. We will replace high-fidelity single-photon detectors with the photonic module itself, and we will replace the high-frequency, on-demand sources with non-deterministic sources. While we could consider multiple non-deterministic sources, we will assume that photonic modules are used, distilling weak coherent light into single photons with a very low probability of success. In this way we will redesign the computer architecture to consist of only two required components: a high-fidelity photonic module and reliable single-photon switching.
It should be stressed that while the goal of this redesign may seem theoretically trivial, from an architectural standpoint it is significantly more complicated. In principle, any number of techniques could be utilized to remove the requirements of single-photon sources and detectors. However, we need to accomplish this under some tight constraints.
1. The original design of the architecture required one high-frequency (MHz or above) source per optical line or an ultrahigh-frequency source (THz or above) serving multiple optical lines. When replacing deterministic sources with non-deterministic sources, this ratio of the number of sources per optical line must remain effectively constant.
2. Non-deterministic sources such as the photonic module prepare single photons at random (but heralded) times. As these photons may be required anywhere in the network, the design must allow for this without introducing complicated photon routing or significant photon storage.
3. The probability of photon loss within the computer is non-zero. Therefore the design must be able to effectively replace lost photons, which will occur at random but heralded locations, with new photons that will also be prepared at random but heralded locations.

Utilizing the photonic module as a detector
The ability to utilize the photonic module to carry out individual photon measurement is key to replacing on-demand sources without reintroducing additional technologies. As described in [31], the sole action of the photonic module is to project an arbitrary N-photon state, |ψ⟩_N, into a ±1 eigenstate of the operator X^⊗N. Within the preparation network of the computer, each module interacts with five photons before being measured and decoupled. This, combined with suitable local operations, projects each five-photon group into ±1 eigenstates of the operator ZZXZZ. These are the relevant eigenoperators describing the topological cluster state utilized for computation [20, 28, 29]. If, however, only a single photon is allowed to interact with a photonic module between its initialization and measurement, an arbitrary state |ψ⟩ is mapped to

M|ψ⟩|+⟩_a = ⟨+|ψ⟩ |+⟩|g1⟩_a + ⟨−|ψ⟩ |−⟩|g2⟩_a,    (1)

where M is the interaction between the atomic system in the module and the photon, |g1⟩_a and |g2⟩_a are the two states of the atomic qubit and |+⟩_a = (|g1⟩_a + |g2⟩_a)/√2. Measuring the atomic system will project the photon into either the |+⟩ = (|0⟩ + |1⟩)/√2 or the |−⟩ = (|0⟩ − |1⟩)/√2 state, dependent on measuring the module in the |g1⟩_a or |g2⟩_a state, respectively. Therefore, the module can be utilized to carry out an X-basis measurement on any photon it is allowed to interact with between initialization and measurement. Combining this with appropriate local rotations before measurement (via optical waveplates or waveguide techniques [47]) allows for the measurement of a single photon in any desired basis.
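The measurement statistics implied by this X-basis projection can be checked with a few lines of plain Python. This is an illustrative toy model of the outcome probabilities only, not of the module's physical dynamics, and the function name is ours:

```python
import math

# Toy model of the module acting as an X-basis photon detector.
# The photon state |psi> = a|0> + b|1> is given as amplitudes (a, b); the
# interaction correlates the photon's X eigenstates with the atomic readout.

def module_measure_x(a, b):
    """Return, for each atomic outcome, its probability and the
    post-measurement photon state (as amplitudes in the 0/1 basis)."""
    s = 1 / math.sqrt(2)
    amp_plus = s * (a + b)    # <+|psi>
    amp_minus = s * (a - b)   # <-|psi>
    p_g1 = abs(amp_plus) ** 2   # atom read out in |g1>_a -> photon in |+>
    p_g2 = abs(amp_minus) ** 2  # atom read out in |g2>_a -> photon in |->
    return {"g1": (p_g1, (s, s)), "g2": (p_g2, (s, -s))}

# A photon prepared in |0> is projected into |+> or |-> with equal probability.
result = module_measure_x(1.0, 0.0)
print(result["g1"][0], result["g2"][0])
```

A photon already in an X eigenstate is left unchanged, consistent with the non-demolition character of the device discussed below.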
The major advantage of using the module is that it is intrinsically a non-demolition measurement. The photonic state is measured via readout of the atomic system. Therefore the photon is not physically destroyed during measurement and will simply exit the module in exactly the same way as for the preparation network. Consequently, we can recycle it. Measured photons can therefore be rerouted back into the input of the cluster preparation network.

Using the photonic module as a probabilistic source
By using photon recycling we can, in principle, operate a large topological cluster state computer with a comparatively small number of individual photons. However, these photons still need to be initially prepared and loss events within the network need to be compensated for with some type of source device.
In [20], it was assumed that the cluster network was fed via high-frequency, on-demand, single-photon sources. Assuming one such source per optical line in the network, the operational frequency of each source is between 500 kHz and 100 MHz (assuming that the photonic modules have an operational time between 10 ns and 1 µs [31, 64]). Replacing a high-frequency source with a probabilistic source and photon recycling can, in many circumstances, be desirable. In the case of our architecture, the photonic module can act as this probabilistic source. This allows a large-scale quantum architecture to be constructed using essentially only one quantum component.
The photonic module acts as a parity check device, allowing us to effectively measure multi-qubit observables. It is well known that such a quantum component can be utilized for distilling weak coherent light into single-photon states [65, 66]. A weak coherent pulse, in the number basis, can be approximated as

|α⟩ ≈ |0⟩ + α|1⟩ + (α²/√2)|2⟩,    (2)

for α ≪ 1. This state can routinely be prepared in the laboratory. The internal mechanism of the photonic module is based on working in the dispersive limit of the Jaynes-Cummings model. Illustrated in figure 5 is the structure of the internal atomic system for each photonic module.

Figure 5. Internal atomic structure of each photonic module. The two ground states, |g1⟩_a and |g2⟩_a, are the two levels measured when each module is read out. The transition between |g1⟩_a and a third state |e⟩_a is detuned from the cavity mode by an amount Δ. Utilizing a basic Jaynes-Cummings interaction in the dispersive limit, a phase shift will accumulate on the state |g1⟩_a if the cavity mode is occupied by a single photon.

In the dispersive limit, the effective Hamiltonian of the system is H = βa†aσ_z, where a (a†) is the annihilation (creation) operator for the cavity mode, σ_z is the usual Pauli operator over the qubit spanned by the two ground states |g1⟩_a and |g2⟩_a, and the effective coupling constant is β = −g²/Δ. If the cavity mode is detuned by Δ from the transition |g1⟩_a ↔ |e⟩_a and the atomic system is initialized in the state (|g1⟩_a + |g2⟩_a)/√2, then a single photon introduced into the cavity mode will result in the following evolution,

|1⟩(|g1⟩_a + |g2⟩_a)/√2 → |1⟩(e^{−2iβt}|g1⟩_a + |g2⟩_a)/√2 (up to a global phase),    (3)
for an interaction time t. If we tune the interaction time to t = π/(2β) before removing the photon from the cavity and out-coupling it to a waveguide, we will induce a π phase shift on the state |g1⟩_a. Instead of a single photon, we now interact the atomic system with a weak coherent pulse. The system evolution is

(|0⟩ + α|1⟩ + (α²/√2)|2⟩)(|g1⟩_a + |g2⟩_a)/√2 → (|0⟩ + (α²/√2)|2⟩)(|g1⟩_a + |g2⟩_a)/√2 + α|1⟩(|g2⟩_a − |g1⟩_a)/√2.    (4)

Defining the states |±⟩_a = (|g2⟩_a ± |g1⟩_a)/√2 and rewriting equation (4), we have

(|0⟩ + (α²/√2)|2⟩)|+⟩_a + α|1⟩|−⟩_a.    (5)

The atomic system is measured in the |±⟩_a basis. The probability of each outcome is

P(+) = e^{−|α|²} cosh(|α|²) ≈ 1 − |α|²,    P(−) = e^{−|α|²} sinh(|α|²) ≈ |α|²,    (6)

(we calculate the probabilities associated with each measurement outcome using the full coherent state |α⟩ = e^{−|α|²/2} Σ_n (α^n/√(n!)) |n⟩, before making the approximation α ≪ 1) and the resultant states after each measurement result are

|ψ+⟩ ∝ |0⟩ + (α²/√2)|2⟩,    |ψ−⟩ ≈ |1⟩.    (7)

Therefore, if the atomic system within the module is measured in the |−⟩_a state, the projected optical state approximates a single photon. The strength of the weak coherent state is the determining factor in how close the projected state is to a single photon, with a fidelity given by

F = |⟨1|ψ−⟩|² = |α|²/sinh(|α|²) ≈ 1 − |α|⁴/6.    (8)

The relationship is inverted when considering the probability of measuring the module in the |−⟩_a state: as α → 0, the probability of measuring the module in the |−⟩_a state and projecting the coherent state into an approximate single-photon state approaches zero. Hence, there is a tradeoff between the probability that a module will successfully distill a single photon and how well the distilled state approximates a single photon. Utilizing the photonic module to distill weak coherent states will therefore result in sources with very low probabilities of success (as we require distilled states to approximate single photons to a high degree). However, even with low success probabilities, modules can be combined successfully with photon recycling.
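The tradeoff between success probability and output fidelity can be made concrete with a short numerical sketch. The closed forms used here, P(−) = e^{−|α|²} sinh(|α|²) and F = |α|²/sinh(|α|²), follow from expanding the full coherent state and should be read as our derived assumptions rather than formulas taken verbatim from the original derivation:

```python
import math

# Success probability and output fidelity when distilling a weak coherent
# pulse |alpha> into an approximate single photon with one photonic module.
#   P(-) = exp(-|alpha|^2) * sinh(|alpha|^2)   (module read out in |->_a)
#   F    = |alpha|^2 / sinh(|alpha|^2)         (overlap with a true |1>)

def distill_stats(alpha):
    x = abs(alpha) ** 2
    p_success = math.exp(-x) * math.sinh(x)
    fidelity = x / math.sinh(x)
    return p_success, fidelity

for alpha in (0.05, 0.1, 0.3):
    p, f = distill_stats(alpha)
    print(f"alpha={alpha}: P(success)={p:.2e}, fidelity={f:.8f}")
```

As α shrinks, the fidelity approaches unity while the success probability falls off as |α|², which is the tradeoff described in the text.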

Perpetual network
The previous section illustrated how the photonic module can be utilized to effectively replace single-photon detection and to probabilistically distill weak coherent states into single photons. We can now discuss the general structure of a perpetual quantum computer. Figure 6 illustrates the overall structure of the design. Probabilistic sources are used to slowly 'boot-up' the computer. Each injected photon then proceeds through the preparation network of photonic modules, through a network of detection modules and is then rerouted back to the source. We examine each component of the network separately.

Preparation network
The preparation network has already been detailed in [20]. The temporal arrangement of photons is such that half of the optical lines consist of photons separated by 2T and the other optical lines contain photons separated by 4T , where T is the operational time of the module. Each photon interacts with four separate modules and suffers a delay of T for each interaction.
Within the network there are temporal 'windows', which allow each module to be measured and reinitialized. Both the measurement window and the reinitialization window are assumed to also take time T. After the preparation network, there is an additional delay of 4T before each photon enters the measurement network. This delay allows the final parity checks in cluster preparation to be completed before photons are measured.

Figure 6. General structure of the preparation network. Dedicated single-photon sources and detectors are replaced with photonic modules. As each module acts as a non-demolition detector, photons can be rerouted back into the start of the preparation network. Each deterministic source can then be replaced with low-probability sources, which provide the initial photons to saturate the preparation network and replace photons lost during computation.

The detection network
As shown in section 4.1, the modules can be utilized to perform non-demolition detection on each photon. However, there are issues related to fault tolerance that need to be addressed when designing the detection system. It is well known that photon loss is a major error channel for optical computers. The topological cluster codes used in this computer are quite efficient at correcting this type of error. Several recent results have examined the robustness of the topological cluster model when subjected to loss [67]. A percolation threshold of approximately 25% was found, suggesting that this particular error channel is preferable to standard quantum errors.
In the original design of the architecture, photon detection was achieved using dedicated single-photon detectors, which destructively measure photons and hence can discriminate between the presence or absence of the physical photon. This, combined with the fact that new photons are continuously injected from deterministic sources, means that loss was an easily correctable error for the original design. However, moving to a perpetual architecture presents two problems:
1. The internal function of the photonic module is such that it cannot discriminate between a photon in the |+⟩ state and the vacuum.
2. As each physical photon is used repeatedly for generating the entangled cluster state, an undetected loss error will lead to temporally correlated errors in the computation.

Figure 7. Loss can be uniquely identified by measuring photons twice, separated by a single-photon phase rotation. As each module requires a temporal window of T for measurement and reinitialization, the central optical line has two sets of measurement modules. While one module is reinitialized, the next incident photon is routed to a second module that was initialized in the previous time step.
Therefore, we first need to devise a method to uniquely detect loss events using the modules and ensure that this technique can be made fault-tolerant (i.e. errors in detection must not spread into large groups of errors at later times). The detection network is illustrated in figure 7.
Illustrated is a small cross section of the network. The upper and lower optical lines run at a repetition rate of 4T, while the central optical line runs at a repetition rate of 2T. As we assume that each photonic module requires a window of T for both measurement and reinitialization, the detection network for the central optical line has twice as many modules. While one of the modules is being reinitialized, the next incident photon arrives; it is therefore switched to a second module that has already been initialized.
Fault tolerance is achieved by measuring photons twice. We can write down the module transformations for an incident photon in the |+⟩ state, the |−⟩ state and the vacuum:

M|+⟩|+⟩_a = |+⟩|+⟩_a,
M|−⟩|+⟩_a = |−⟩|−⟩_a,
M|vac⟩|+⟩_a = |vac⟩|+⟩_a.
Hence, measuring the atomic system in the |+⟩_a state indicates the presence of either a single photon in the |+⟩ state or the |vac⟩ state.
In order to discriminate between the states |+⟩ and |vac⟩, we measure the photon a second time. Before this second measurement, a phase rotation is applied to the photon, taking |±⟩ ↔ |∓⟩. Therefore, if the state entering the measurement network is the vacuum, both modules will be measured in the |+⟩_a state. Assuming that measurements of the photonic module are error free, we are therefore able to uniquely discriminate the actual photonic states |±⟩ from the loss channel |vac⟩.
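The double-measurement protocol reduces to a small truth table, which can be sketched as follows (a simplified, error-free model; the state labels and function names are illustrative):

```python
# Sketch of the two-module loss-detection protocol. Photon states are the
# labels '+', '-' and 'vac'; each module reports its atomic readout, and a
# fixed phase rotation (|+> <-> |->) is applied between the two measurements.

def module_readout(state):
    # The atom ends in |->_a only if a physical |-> photon interacted with
    # it; both a |+> photon and the vacuum leave the atom in |+>_a.
    return '-' if state == '-' else '+'

def rotate(state):
    # Waveplate taking |+> <-> |->; the vacuum is unaffected.
    return {'+': '-', '-': '+', 'vac': 'vac'}[state]

def detect(state):
    m1 = module_readout(state)
    m2 = module_readout(rotate(state))
    # (+,-) or (-,+): photon present, recycle it.
    # (+,+) or (-,-): heralded loss (or error), request a replacement photon.
    action = 'recycle' if m1 != m2 else 'replace'
    return (m1, m2), action

for s in ('+', '-', 'vac'):
    print(s, detect(s))
```

In this error-free model only the vacuum yields matching readouts, which is exactly the discrimination rule used in the interpretation table below.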
Although the above scheme allows us to uniquely identify loss in the network, we also need to check that it is still effective when each of the two photonic modules is subjected to measurement errors and when loss can occur between the two measurements. We can summarize the possible measurement outcomes for the two modules under these error channels.
For each case, except |−⟩_M1 |−⟩_M2, the correct error-free result is listed first. Ensuring correct fault-tolerant operation of the perpetual network requires protecting the system from the following (table 1):
1. Lost photons need to be reliably detected and replaced. Additional errors may cause temporally correlated loss events, but these should not persist in the network (unless additional errors occur).
2. Two photons should never be injected into the preparation network at the same time. Hence we tolerate a small increase in the correlated loss rate in order to completely suppress this possibility.
Therefore, we always assume that no loss event has taken place if the modules are measured in the states |+⟩_M1 |−⟩_M2 or |−⟩_M1 |+⟩_M2, and the photon is rerouted back to the start of the preparation network. The other possibilities, if additional errors have occurred, are:
1. The wrong state was measured, but the photon is still present in the network. This is a standard measurement error, which is effectively corrected by the properties of the cluster.
2. A |vac⟩ state was rerouted back into the network. This causes a temporally correlated loss event but is corrected (with high probability) in the next cycle.
Each error channel is summarized in table 1. A |vac⟩ state may also be rerouted back into the network in combination with a measurement error; in this case, the topological cluster corrects the measurement error and the correlated loss error is corrected (with high probability) in the next cycle.
If the measurement modules are measured in either the |+⟩_M1 |+⟩_M2 or the |−⟩_M1 |−⟩_M2 states, then the optical line is not rerouted back into the network. Instead, the injection network is signaled that a loss event has taken place and a new photon is re-injected from the source network (if one is available). This ensures that we never accidentally introduce two photons into the computer. The adverse effect is that a measurement error in the module produces an additional correlated loss event. For this loss event to persist, an additional loss event must occur in the source network such that a vacuum state is re-injected. Therefore, some measurement errors in the detection network generate at most two correlated errors: the initial measurement error and an additional loss event. This loss event will be corrected the next time through the measurement network unless an additional measurement error occurs (which is a higher-order effect).
These rules for interpreting the measurement results from the detection network ensure that lost photons can be detected and replaced. Single failure events (in the detection modules) propagate to at most two temporally correlated errors and two photons are never injected into the preparation network at the same time.
The fault-tolerant threshold for this architecture will only be influenced if there is a cascade event within the computer (an error channel with probability p that cascades into n errors, which should otherwise occur with probability O(p^n)). The structure of the detection network ensures that the threshold will not be affected. Qubit loss in the topological cluster model is a preferable error channel because it is heralded. The threshold for standard errors will not change, as the detection network does not cause a second unknown Pauli error, but the detection network will increase the effective loss rate of the computer (proportionally to the probability of measurement error). This can be mitigated by adaptive error decoding, where known correlation events are incorporated into the classical processing [68].

Photon re-routing
Once measured by the detection network, photons are rerouted back into the cluster preparation network. The full structure of the perpetual network is illustrated in figure 8; the injection and source networks will be discussed shortly. The return optical routing of photons contains a total delay of 2T. This delay is required to ensure that re-injected photons maintain the temporal arrangement of the network. Each photon spends a total of 10T within the cluster preparation network and the detection network. Once a given photon is measured by the second module in the detection system, it must be further delayed by 2T: the atomic system needs to be measured, taking time T, and the classical measurement signal must be transmitted from the detector network to the source network (again taking time T). The photon repetition rates for the network are 4T and 2T. Therefore, to maintain the temporal arrangement within the network, the total cycle time T_T must satisfy T_T mod 4T = T_T mod 2T = 0. For the network in figure 8 the total cycle time is T_T = 12T, satisfying these conditions. Consequently, photons arriving from the return path re-enter the network at the correct temporal location.
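A minimal arithmetic check of these timing conditions, working in units of the module time T:

```python
T = 1                             # all times in units of the module time T

in_network = 10 * T               # time in the preparation + detection networks
return_delay = 2 * T              # atomic measurement (T) + classical signal (T)
T_T = in_network + return_delay   # total cycle time: 12T

# re-injected photons must stay aligned with both photon repetition rates
assert T_T % (4 * T) == 0 and T_T % (2 * T) == 0
```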
Photon rerouting will presumably be performed with optical fiber linking the outputs and inputs of the computer. Photon loss will later be defined with respect to the temporal interval T. This will either be the loss associated with a photonic module (where the photon is delayed by T) or the loss associated with the temporal delay (achieved with a fiber loop). The intrinsic loss associated with optical fiber is estimated as p = 1 − e^(−d/L), where d is the length of the fiber and L is its attenuation length (approximately L = 25 km). Therefore, a fiber delay of T will experience loss with probability p = 1 − exp(−T c_f / 25 000), where c_f ≈ 2 × 10^8 m/s is the speed of light in fiber. For T = 10 ns → 1 µs, the loss from the fiber delay ranges from 8 × 10^−5 to 8 × 10^−3. Therefore, the intrinsic loss associated with the fiber is comparable to the loss expected for the photonic module and is well below the practical loss rate required to operate the topological cluster computer.
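The fiber-loss estimate can be reproduced directly (a sketch; d = T·c_f converts the delay duration to a fiber length in metres):

```python
import math

def fiber_loss(T_seconds, c_f=2e8, L_atten=25e3):
    """p = 1 - exp(-d/L) for a fiber delay of duration T, with d = T * c_f.

    c_f is the speed of light in fiber (m/s) and L_atten the attenuation
    length (m, approximately 25 km), both as quoted in the text.
    """
    d = T_seconds * c_f
    return 1.0 - math.exp(-d / L_atten)

p_fast = fiber_loss(10e-9)  # T = 10 ns: approximately 8e-5
p_slow = fiber_loss(1e-6)   # T = 1 us:  approximately 8e-3
```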

Source and injection network
The final part of the perpetual design is the injection network. This part of the system accepts photons being rerouted back from the measurement network and also accepts photons from the network of probabilistic sources. As explained in section 4.2, each source distils single photons at random but heralded times and the rest of the computer loses photons at random but heralded times. Therefore, the injection network needs to be designed in such a way that photons prepared by the sources can be routed to various injection points in the preparation network as they are required.

Figure 8. Cross-sectional structure of the perpetual network. The network contains three separate subnetworks. The source network consists of an array of photonic modules that probabilistically distil a weak coherent pulse into single photons. The injection network accepts photons from the sources or photons recycled from the measurement network. The cluster preparation network deterministically couples photons and was detailed previously in [20]. Finally, the measurement network measures photons for computation, detects loss events and reroutes successfully measured photons back to the injection network.

This routing can be done by connecting each source to the optical lines immediately above. Probabilistic sources are therefore linked together to form a uni-directional, linear nearest-neighbor network (uLNN). This network connecting sources to the preparation network is referred to as the shunting network. Individual wafers of photonic modules (illustrated in figure 3(b)) have independent shunting networks that are not interconnected between separate wafers. Each shunting network has a size along the length of each wafer related to the fundamental probability that an individual source successfully prepares a single photon. The size of the shunting network is given by N = O(1/p_s), where N is the number of sources connected in the network and p_s is the probability that a source will distil a single photon. If photonic modules are utilized as probabilistic sources, then the time required for an attempted distillation is 3T: 2T for module initialization and measurement and an interaction time of T. The size of the source network is chosen such that, on average, one photon is successfully distilled within the network every 3T. The source network is connected to the shunting network via a delay of T; this gives sufficient time for each module to be measured, confirming whether a single photon has been successfully distilled, before the photon is introduced into the shunting network. The shunting network connects each optical line to the one above it with an additional time delay of T. The boundary conditions are periodic, with the uppermost optical line connected to the lowermost. Given that T is defined via the operational time of a photonic module (between 10 ns and 1 µs), the spatial extent of the shunting network is limited to between 1 and 100 m, more than sufficient for a large-scale array.
The required time delay of T between neighboring optical lines is due to the temporal arrangement of photons entering the preparation network, as figure 9 illustrates. Each individual optical line has a pulse separation of either 2T or 4T and each pulse in neighboring optical lines is separated by T. If a source produces a heralded photon at the appropriate time to be injected into its corresponding optical line but the photon is not required, then it is routed one line up. The delay of T ensures that this shunted photon is at the right temporal location should it be required to replace a loss event in the next optical line.
The shunting network is designed to cycle freshly prepared photons in a loop until they are required. Each time a loss event is confirmed by the measurement network, the system accepts a photon from the shunting network if one is available. For each optical line, there are seven distinct switching scenarios, which are illustrated in figure 10. These switching scenarios represent all the cases where photons are successfully recycled, where the source successfully distils a state and where a photon is present in the shunting network. Each pattern can be summarized as follows:
1. Panel (a): the photon is successfully recycled and simply rerouted back into the preparation network.
2. Panel (b): the photon is successfully recycled and a second photon is present in the shunting network. The recycled photon is routed back into the preparation network and the photon in the shunting network is routed up into the next optical line.
3. Panel (c): the photon is confirmed as lost and a second photon is present in the shunting network. The photon in the shunting network is routed into the preparation network.
4. Panel (d): the photon is confirmed as lost at the same time as a second photon is present in the shunting network and a new photon is successfully distilled from the source. The photon in the shunting network is routed into the preparation network and the photon from the source enters the shunting network.
5. Panel (e): all three photons are present. The photon from the source is routed to a termination point and removed from the network; the other two photons are routed in the same way as in panel (b).
6. Panel (f): the photon is confirmed as lost and the source successfully distils a photon. This new photon is directly routed to the preparation network.
7. Panel (g): the photon is successfully recycled and the source distils a photon. The recycled photon is rerouted back into the preparation network and the newly distilled photon is routed into the shunting network.

Figure 10. Each of the seven switching scenarios dictating how photons are recycled from the measurement network, shunted in or out of the shunting network or accepted from the source. In these figures, the |vac⟩ states have always been detected by the heralding associated with the sources or the detectors. Vacuum states can also be present that have not been detected, for example when a photon is lost while being rerouted from the detectors back to the injection network. In this case, the switching pattern assumes that the photon is still present, re-injecting a |vac⟩ state into the computer, which will be corrected in the next cycle.
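The seven panels can be condensed into a single switching function. This is our encoding of the rules above, with illustrative labels for the three boolean inputs and for the routing outputs:

```python
def switch(recycled, shunt, source):
    """Per-cycle switching decision for one optical line, panels (a)-(g).

    Inputs: a recycled photon arrived; a photon is present in the shunting
    network; the source successfully distilled a photon.  Returns a tuple
    (what enters the preparation line, what is routed into the shunt of the
    next line, whether the source photon is terminated).
    """
    if recycled and shunt and source:       # (e): all three present
        return ("recycled", "shunt", True)  # source photon is dumped
    if recycled and shunt:                  # (b)
        return ("recycled", "shunt", False)
    if recycled and source:                 # (g)
        return ("recycled", "source", False)
    if recycled:                            # (a)
        return ("recycled", None, False)
    if shunt and source:                    # (d): loss replaced from shunt
        return ("shunt", "source", False)
    if shunt:                               # (c)
        return ("shunt", None, False)
    if source:                              # (f): loss replaced from source
        return ("source", None, False)
    return (None, None, False)              # nothing available: vacuum enters
```

The eighth boolean combination (nothing present) is the undetected-loss case described in the figure 10 caption, where a vacuum state enters the computer and is corrected on the next cycle.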
Photon loss can occur at any point in this network and we do not assume any nondemolition 'probing' of the network to confirm whether photons are still present. The only classical signals available are the heralding signal from each source and the signal from the measurement network confirming loss events. As distilled photons are injected into the shunting network, they are classically tracked. If one of these photons is lost before being injected into the preparation network, then this will produce another loss event within the computer, which, with high probability, will be corrected in the next cycle.

Network simulations
To confirm that this network design operates as intended, direct numerical simulations were performed. It should be stressed that we are not simulating any quantum aspect of this architecture; we are simply focusing on the network structure under finite photon loss and low-probability source injection.
As detailed in previous sections, there is a total of 12T timesteps within the network: 4T in the cluster preparation network, 2T in the measurement network, a 4T delay between the preparation and measurement networks and a 2T delay when rerouting photons. During each of these steps, individual photons are subjected to loss with a probability of p_L. We refer to p_L as the per component loss probability. We also define p_c as the per cycle loss probability; this is the probability that a photon is lost between entering the preparation network and re-entering it after being recycled, p_c = 1 − (1 − p_L)^12 ≈ 12p_L. Within the source network, each photon is delayed by T immediately after it is distilled (in order for the module to be measured, confirming distillation) and is again subjected to loss with probability p_L. Once a photon enters the shunting network it is continually shunted in a loop, with each step in the loop taking time T. The shunting continues until the photon is accepted into the network or lost.
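The per cycle loss probability and its linear approximation can be checked numerically for an illustrative value of p_L:

```python
p_L = 1e-3                       # illustrative per component loss probability
p_c_exact = 1 - (1 - p_L) ** 12  # per cycle loss over the 12 timesteps
p_c_approx = 12 * p_L            # first-order approximation, 12 p_L

# the linear approximation slightly overestimates the exact value
assert p_c_exact < p_c_approx
```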
In each simulation we vary the total time the network is running and the bias between p_L and the probability of success for each source, p_s. The source probability is fixed and dictated by the size of the network. For a network consisting of N optical lines, the source probability is p_s = 1/(3N), guaranteeing that, on average, one photon is successfully distilled every 3T steps. The simulations are designed to determine the following:
1. For a given network size N = 1/(3p_s) and a bias B = p_s/p_L > 1, does the network saturate? That is, how large must B be such that as t → ∞ each temporal position in the network is occupied by a single photon?
2. For a given network size N = 1/(3p_s) and a bias B = p_s/p_L, how many timesteps are required before the network is saturated with photons? Essentially, how long does it take to boot up the computer before computation can proceed?
For N optical lines, the total number of photons required to saturate the network is given by P = 9N/2. For a large-scale computer, consisting of N × N optical lines, the total number of photons required to boot up the computer is P = 81N^2/4. However, as the individual uLNN shunting networks in the computer are not interconnected, it is sufficient to simulate a single cross-section containing N optical lines.
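A much-simplified sketch of such a saturation simulation is the following birth-death model. This is not the full network simulation: it discards the switching logic, the temporal slot structure and the per-line geometry, treating the photon population as a single counter with per-timestep loss p_L and N pipelined sources each succeeding with p_s = 1/(3N). The capacity 5N is the saturation bound 9N/2 plus the N/2 photons held in the shunting network (both derived in the Results section below):

```python
import random

def simulate(N, B, steps, seed=0):
    """Toy birth-death model of network saturation.

    Each existing photon survives one timestep with probability 1 - p_L;
    each of the N sources succeeds with probability p_s = 1/(3N) per
    timestep.  Returns the photon count after the given number of steps.
    """
    rng = random.Random(seed)
    p_s = 1.0 / (3 * N)
    p_L = p_s / B          # bias B = p_s / p_L
    cap = 5 * N            # 9N/2 in the computer plus N/2 in the shunt
    photons = 0
    for _ in range(steps):
        # loss: each photon independently lost with probability p_L
        photons -= sum(rng.random() < p_L for _ in range(photons))
        # births: N sources attempt distillation
        births = sum(rng.random() < p_s for _ in range(N))
        photons = min(cap, photons + births)
    return photons
```

Because the structural constraints are dropped, this toy model saturates at a much lower bias than the B ≈ 30 threshold found in the full simulations; it only illustrates the qualitative behaviour that a sufficiently large bias drives the photon population to the 9N/2 bound while an insufficient one does not.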

Results
The numerical simulations are performed varying the network size N, the bias B and the total number of timesteps t. The source and loss probabilities are given by p_s = 1/(3N) and p_L = p_s/B = 1/(3BN). Monte-Carlo simulations were performed using 10^3 statistical runs. Due to the available computational power, we have simulated up to 168 optical lines. This corresponds to a source probability as low as p_s = 2 × 10^−3 and hence an approximate error on the distilled single-photon state of ε = 1 − F = 6.7 × 10^−7. Shown in figure 11 is the plot for N = 88, with other simulation results shown in appendix A, figures A1-A10. This simulation illustrates the total number of photons present in the network as a function of t for 2 ≤ B ≤ 192, averaged over 10^3 statistical runs.

The first thing to note is that the total number of photons in the network is greater than P = 9N/2 when B and t are large. This is due to the shunting network. While the computer requires 9N/2 photons to saturate, additional photons are also present in the shunting network. These photons are used to replace confirmed loss events while the computer is in operation. On average, when saturated, the shunting network will contain an extra N/2 photons. These two bounds are illustrated on each plot. As B → ∞, the total number of photons in the network approaches P = 5N.
The second set of simulations, shown in appendix B, figures B1-B10 (with figure 12 illustrating the case N = 88), examines the total number of photons present in the network as a function of B. Each data point is taken at the maximum value of t simulated in appendix A. For each value of N, the network saturates once the bias reaches B ≈ 30. Note that this threshold bias decreases slightly as N increases, but it is essentially independent of the network size. A bias threshold of 30 translates to a per cycle loss probability of p_c = 0.4p_s. Hence, provided that the source probability is approximately 2.5 times higher than the per cycle loss probability of each photon, the system will eventually saturate and computation can proceed. Finally, we can determine an approximate boot-up time for the computer. For this, we fix the bias at B = 32 and resimulate the network for a total of 3 × 10^3 samples. We then plot, as a function of the total number of timesteps, the percentage of all samples that result in complete saturation of the preparation network. Figure 13 illustrates this.
Each curve shows a clear transition, where the total number of timesteps is large enough that the network saturates. To estimate the boot-up time for various network sizes, we simply find the approximate point where 50% of all samples lead to more than 9N/2 photons in the network. Figure 14 illustrates the approximate boot-up time for 8 ≤ N ≤ 168.
There is a clear linear relationship between the boot-up time and the total size of the network. As the size of the network is dictated by the source probability, N = 1/(3p_s), the time required to saturate the network scales inversely with p_s. While, even for modestly sized networks, the total number of timesteps required for boot-up is large, the actual physical time is more than acceptable. Each timestep, T, represents the operational time of a photonic module; hence, assuming T = 1 µs, the boot-up time for the network sizes simulated lies in the millisecond range.
We can now summarize the requirements for a perpetual design running with highly probabilistic sources. A reasonable parameter range for topological cluster states requires a per cycle loss probability, p_c, between approximately 10^−2 and 10^−4. Higher loss probabilities begin to require unreasonable cluster resources to correct [67], while loss probabilities lower than ≈ 10^−4 are experimentally unrealistic. Therefore, source probabilities can be in the range of approximately 2.5% down to 0.025%, with each shunting network connecting 10 to 300 optical lines and a boot-up time in the range of 1 to 100 ms.
These estimates represent approximate upper bounds for the boot-up time and the size of each shunting network. If each probabilistic source succeeds with a much higher probability (while loss probabilities in the network remain fixed), then both the boot-up time and the size of each shunting network will decrease. Table 2 details the requirements for a large range of these parameters.

Conclusions
This discussion has shown how highly probabilistic photon sources can be integrated into a scalable optical architecture. By replacing deterministic sources and detectors with the photonic module, this modified design is constructed exclusively from a single quantum component. These results apply equally to any other form of highly probabilistic photonic source, provided it can be integrated into a computational network similar to the optical architecture used in this work.

Table 2. Estimation of boot-up times and shunting network sizes for various values of the distillation probability, p_s. Max-p_c is the maximum per cycle probability of photon loss such that the computational network will saturate (corresponding to a bias of approximately p_s = 2.5p_c).

With this detailed design, future work will require an architecturally specific threshold calculation to be performed. A microscopic analysis of the photonic module and a detailed network treatment of all error channels and how they propagate to the final cluster will allow us to identify multiple correlation patterns that can be incorporated into the decoding algorithm. As with the work of Wang et al [68], we anticipate that this will lead to a threshold of over 1%.
The introduction of the shunting network, connecting probabilistic sources to the cluster preparation network, combined with the ability to recycle measured photons, essentially provides a pseudo-deterministic set of on-demand sources. Provided the sources succeed with a probability slightly higher than the per cycle loss probability, the system can compensate for photon loss during computation. The shunting network solves the problem of routing photons prepared at random locations in the network to other random locations where photons are lost. This structure effectively achieves the same goal as extremely complicated multi-port switches and extensive optical delays.
This analysis demonstrates the practicality of a limited number of highly probabilistic quantum components in a large-scale architecture. Unlike other results, which illustrate the theoretical ability to construct a quantum computer from probabilistic components, this work has maintained the explicit architectural structure of the optical computer. By starting from a fully deterministic architecture, we could carefully integrate a limited amount of probabilistic technology without sacrificing the overall structure of the system. The next major step is to determine whether additional probabilistic technology can replace other currently deterministic components. The results presented in this paper give us a certain amount of optimism that this can be achieved.