Benchmarking near-term devices with quantum error correction

Now that ever more sophisticated devices for quantum computing are being developed, we require ever more sophisticated benchmarks. This includes a need to determine how well these devices support the techniques required for quantum error correction. In this paper we introduce the topological_codes module of Qiskit-Ignis, which is designed to provide the tools necessary to perform such tests. Specifically, we use the RepetitionCode and GraphDecoder classes to run tests based on the repetition code and process the results. As an example, data from a 43-qubit code running on IBM's Rochester device is presented.


I. INTRODUCTION
Software comes in many forms. The most prominent forms of software for classical computers are dedicated to applications, in which the device performs a useful task for the end user. Though this is also the goal of quantum software, there will instead be a heavy focus on benchmarking, testing and validation of quantum devices in the near-term. The topological_codes module of Qiskit Ignis is one means by which this can be done. In this paper we introduce this new module, and describe its implementation and the methodology behind it. Quantum software is based on the idea of encoding information in qubits. Most quantum algorithms developed over the past few decades have assumed that these qubits are perfect: they can be prepared in any state we desire, and be manipulated with complete precision. Qubits that obey these assumptions are often known as logical qubits.
The last few decades have also seen great advances in finding physical systems that behave as qubits with ever greater fidelity. However, the imperfections can never be removed entirely. These qubits will always be much too imprecise to serve directly as logical qubits. Instead, we refer to them as physical qubits.
In the current era of quantum computing, we seek to use physical qubits despite their imperfections, by designing custom algorithms and using error mitigation [1][2][3]. For the future era of fault-tolerance, however, we must find ways to build logical qubits from physical qubits. This will be done through the process of quantum error correction [4], in which logical qubits are encoded in a large number of physical qubits. The encoding is maintained by constantly putting the physical qubits through a highly entangling circuit. Auxiliary degrees of freedom are then constantly measured, to detect signs of errors and allow their effects to be removed. Because of the vast amount of effort required for this process, most operations performed in fault-tolerant quantum computers will be done to serve the purpose of error detection and correction. The logical operations required for quantum computation are essentially just small perturbations to the error correction procedure. As such, as we benchmark our progress towards fault-tolerant quantum computation, we must keep track of how well our devices perform error correction. Various experiments testing the ideas behind quantum error correction have already been performed [5][6][7][8][9][10][11][12][13][14][15][16]. These include several experiments based on repetition codes [5,13,14]. This is the simplest example of error detection and correction that can be done using the standard techniques of quantum stabilizer codes [17]. Though not a true example of quantum error correction (it uses physical qubits to encode a logical bit, rather than a qubit), it serves as a simple guide to all the basic concepts in any quantum error correcting code. Its requirements in terms of qubit number and connectivity are very flexible, allowing it to be straightforwardly implemented on almost any device. This makes it an excellent general-purpose benchmark.
In this paper we will provide a simple introduction to the code, and show how to run instances of it on current prototype devices using the open-source Qiskit framework [18]. Specifically, we will use the topological_codes module of Qiskit-Ignis, which provides tools to create the quantum circuits required for simple quantum error correcting codes, as well as to process the results.

A. The basics of error correction
The basic ideas behind error correction are the same for quantum information as for classical information. This allows us to begin by considering a very straightforward example: speaking on the phone. If someone asks you a question to which the answer is 'yes' or 'no', the way you give your response will depend on two factors:

• How important is it that you are understood correctly?
• How good is your connection?

Both of these can be parameterized with probabilities. For the first, we can use P_a, the maximum acceptable probability of being misunderstood. If you are being asked to confirm a preference for ice cream flavours, and don't mind too much if you get vanilla rather than chocolate, P_a might be quite high. If you are being asked a question on which someone's life depends, however, P_a will be much lower.
For the second we can use ρ, the probability that your answer is garbled by a bad connection. For simplicity, let's imagine a case where a garbled 'yes' doesn't simply sound like nonsense, but sounds like a 'no', and similarly a 'no' is transformed into a 'yes'. Then ρ is the probability that you are completely misunderstood. A good connection or a relatively unimportant question will result in ρ < P_a. In this case it is fine to simply answer in the most direct way possible: you just say 'yes' or 'no'. If, however, your connection is poor and your answer is important, we will have ρ > P_a. A single 'yes' or 'no' is not enough in this case. The probability of being misunderstood would be too high. Instead we must encode our answer in a more complex structure, allowing the receiver to decode our meaning despite the possibility of the message being disrupted. The simplest method is the one that many would do without thinking: simply repeat the answer many times. For example, say 'yes, yes, yes' instead of 'yes', or 'no, no, no' instead of 'no'. If the receiver hears 'yes, yes, yes' in this case, they will of course conclude that the sender meant 'yes'. If they hear 'no, yes, yes', 'yes, no, yes' or 'yes, yes, no', they will probably conclude the same thing, since there is more positivity than negativity in the answer. To be misunderstood in this case, at least two of the replies need to be garbled. The probability for this, P, will be less than ρ. When encoded in this way, the message therefore becomes more likely to be understood. The code cell below shows an example of this.

p = 0.01
P = 3 * p**2 * (1-p) + p**3 # probability of 2 or 3 errors
print('Probability of a single reply being garbled:', p)
print('Probability of the majority of three replies being garbled:', P)

The output obtained from running the above program snippet is as follows (henceforth, any such output is displayed directly beneath the cell that it pertains to).
Probability of a single reply being garbled: 0.01
Probability of the majority of three replies being garbled: 0.00029800000000000003

If P < P_a, this technique solves our problem. If not, we can simply add more repetitions. The fact that P < ρ above comes from the fact that we need at least two replies to be garbled to flip the majority, and so even the most likely possibilities have a probability of ∼ ρ². For five repetitions we'd need at least three replies to be garbled to flip the majority, which happens with probability ∼ ρ³. The value for P in this case would then be even lower. Indeed, as we increase the number of repetitions, P will decrease exponentially. No matter how bad the connection, or how certain we need to be of our message getting through correctly, we can achieve it by just repeating our answer enough times. Though this is a simple example, it contains all the aspects of error correction.
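This exponential suppression can be checked directly. The following stdlib-only sketch (the function name is ours, not from the paper's code) computes the probability that a majority of n repetitions are garbled, for odd n:

```python
from math import comb

def logical_error_prob(rho, n):
    # probability that at least (n+1)/2 of n independent replies flip,
    # i.e. the probability that the majority vote fails (n assumed odd)
    return sum(comb(n, k) * rho**k * (1 - rho)**(n - k)
               for k in range((n + 1) // 2, n + 1))

print(logical_error_prob(0.01, 3))  # reproduces the 0.000298 above
print(logical_error_prob(0.01, 5))
print(logical_error_prob(0.01, 7))
```

Each additional pair of repetitions suppresses P by roughly another factor of ρ, consistent with the ∼ ρ^((n+1)/2) scaling of the most likely failure events.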
• There is some information to be sent or stored: In this case, a 'yes' or 'no'.
• The information is encoded in a larger system to protect it against noise: In this case, by repeating the message.
• The information is finally decoded, mitigating for the effects of noise: In this case, by trusting the majority of the transmitted messages.

This same encoding scheme can also be used for binary, by simply substituting 0 and 1 for 'yes' and 'no'. It can therefore also be easily generalised to qubits by using the states |0⟩ and |1⟩. In each case it is known as the repetition code. Many other forms of encoding are also possible in both the classical and quantum cases, which outperform the repetition code in many ways. However, its status as the simplest encoding does lend it to certain applications. One is exactly what it is used for in Qiskit: as the first and simplest test of implementing the ideas behind quantum error correction.
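The decoding step above can be sketched in a few lines of Python (a generic helper of our own, not part of Qiskit):

```python
def decode_majority(replies):
    # return the value that appears most often among the repetitions;
    # works for lists of 'yes'/'no' replies or for strings of bits
    # (assumes an odd number of repetitions, so no ties occur)
    return max(set(replies), key=replies.count)

print(decode_majority(['no', 'yes', 'yes']))  # 'yes'
print(decode_majority('00100'))               # '0'
```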

B. Correcting errors in qubits
We will now implement these ideas explicitly using Qiskit. To see the effects of imperfect qubits, we can simply use the qubits of the prototype devices. We can also reproduce the effects in simulations. The function below creates simple noise models in order to do the latter. The noise models it creates go beyond the simple case discussed earlier, of a single noise event which happens with a probability ρ. Instead we consider two forms of error that can occur. One is a gate error: an imperfection in any operation we perform. We model this here in a simple way, using so-called depolarizing noise. The effect of this will be, with probability ρ_gate, to replace the state of any qubit with a completely random state. For two-qubit gates, it is applied independently to each qubit. The other form of noise is that for measurement. This simply flips the result between 0 and 1 with probability ρ_meas whenever a measurement is made.

Note that the shots=1024 argument here is actually the default argument for the execute function, and so it need not be included (unless a different number of shots is required). As such, it will not be included in future code snippets.
Here a set of typical results are shown. Results will vary for different runs, but will be qualitatively the same. Specifically, almost all results still come out '000', as they would if there was no noise. Of the remaining possibilities, those with a majority of 0s are most likely. Much less than 10 of the 1024 samples will come out with a majority of 1s. When using this circuit to encode a 0, this means that P < 1%. Now let's try the same for storing a 1 using three qubits in state |1⟩. The number of samples that come out with a majority in the wrong state (0 in this case) is again much less than 10, so P < 1%. Whether we store a 0 or a 1, we can retrieve the information with a smaller probability of error than either of our sources of noise. This was possible because the noise we considered was relatively weak. As we increase ρ_meas and ρ_gate, the probability P will also increase. The extreme case of this is for either of them to have a 50/50 chance of applying the bit flip error, x. For example, let's run the same circuit as before but with ρ_meas = 0.5 and ρ_gate = 0.

noise_model = get_noise(0.5, 0.0)
counts = execute( qc1, Aer.get_backend('qasm_simulator'), noise_model=noise_model).result().get_counts()
print(counts)

{'000': 123, '001': 125, '011': 121, '100': 131, '010': 124, '110': 130, '111': 140, '101': 130}

With this noise, all outcomes occur with equal probability, with differences in results being due only to statistical noise. No trace of the encoded state remains. This is an important point to consider for error correction: sometimes the noise is too strong to be corrected. The optimal approach is to combine a good way of encoding the information you require, with hardware whose noise is not too strong.
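The qualitative behaviour described above, reliable decoding for weak noise and a featureless uniform distribution at ρ_meas = 0.5, can be reproduced without Qiskit. The following Monte-Carlo sketch (stdlib only, measurement noise alone; the function and parameter names are ours, not the get_noise model itself) samples readouts for a stored 0:

```python
import random

def sample_counts(p_meas, shots=1024, n=3, seed=42):
    # store 0 in an n-qubit repetition code; each readout independently
    # flips with probability p_meas (gate noise is ignored in this sketch)
    random.seed(seed)
    counts = {}
    for _ in range(shots):
        bits = ''.join('1' if random.random() < p_meas else '0'
                       for _ in range(n))
        counts[bits] = counts.get(bits, 0) + 1
    return counts

weak = sample_counts(0.05)    # dominated by '000'; majority flips are rare
P = sum(c for b, c in weak.items() if b.count('1') >= 2) / 1024
strong = sample_counts(0.5)   # all eight outcomes roughly equally likely
```

For weak noise P comes out well below p_meas, while at p_meas = 0.5 the counts are statistically uniform and the encoded value is lost, mirroring the simulated results above.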

C. Storing qubits
So far, we have considered cases where there is no delay between encoding and decoding. For qubits, this means that there is no significant amount of time that passes between initializing the circuit, and making the final measurements. However, there are many cases for which there will be a significant delay. As an obvious example, one may wish to encode a quantum state and store it for a long time, like a quantum hard drive. A less obvious but much more important example is performing fault-tolerant quantum computation itself. For this, we need to store quantum states and preserve their integrity during the computation. This must also be done in a way that allows us to manipulate the stored information in any way we need, and which corrects any errors we may introduce when performing the manipulations. In all cases, we need to account for the fact that errors do not only occur when something happens (like a gate or measurement): they also occur when the qubits are idle. Such noise is due to the fact that the qubits interact with each other and their environment. The longer we leave our qubits idle, the greater the effects of this noise become. If we leave them for long enough, we'll encounter a situation like the ρ_meas = 0.5 case above, where the noise is too strong for errors to be reliably corrected. The solution is to keep measuring throughout. No qubit is left idle for too long. Instead, information is constantly being extracted from the system to keep track of the errors that have occurred.
For the case of classical information, where we simply wish to store a 0 or 1, this can be done by just constantly measuring the value of each qubit. By keeping track of when the values change due to noise, we can easily deduce a history of when errors occurred.
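For classical storage this bookkeeping is trivial: an error is signalled whenever consecutive measured values differ. A minimal sketch (our own helper, for illustration only):

```python
def flip_rounds(history):
    # indices of measurement rounds whose outcome differs from the
    # previous round, i.e. the rounds by which a bit flip had occurred
    return [i for i in range(1, len(history)) if history[i] != history[i - 1]]

print(flip_rounds([0, 0, 1, 1, 0]))  # flips detected at rounds 2 and 4
```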
For quantum information, however, it is not so easy. For example, consider the case that we wish to encode the logical state |+⟩ = (|0⟩ + |1⟩)/√2. Our encoding is such that |0⟩ → |000⟩ and |1⟩ → |111⟩. To encode the logical |+⟩ state we therefore need the state (|000⟩ + |111⟩)/√2. With the repetition encoding that we are using, a z measurement (which distinguishes between the |0⟩ and |1⟩ states) of the logical qubit is done using a z measurement of each physical qubit. The final result for the logical measurement is decoded from the physical qubit measurement results by simply looking at which output is in the majority.
As mentioned earlier, we can keep track of errors on logical qubits that are stored for a long time by constantly performing z measurements of the physical qubits. However, note that this effectively corresponds to constantly performing z measurements of the logical qubit. This is fine if we are simply storing a 0 or 1, but it has undesired effects if we are storing a superposition. Specifically: the first time we do such a check for errors, we will collapse the superposition. This is not ideal. If we wanted to do some computation on our logical qubit, or if we wish to perform a basis change before final measurement, we need to preserve the superposition. Destroying it is an error. But this is not an error caused by imperfections in our devices. It is an error that we have introduced as part of our attempts to correct errors. And since we cannot hope to recreate any arbitrary superposition stored in our quantum computer, it is an error that cannot be corrected.
For this reason, we must find another way of keeping track of the errors that occur when our logical qubit is stored for long times. This should give us the information we need to detect and correct errors, and to decode the final measurement result with high probability. However, it should not cause uncorrectable errors to occur during the process by collapsing superpositions that we need to preserve.

Results: {'1': 1024}
In such cases the output is always '1'. This measurement is therefore telling us about a collective property of multiple qubits. Specifically, it looks at the two code qubits and determines whether their state is the same or different in the z basis. For basis states that are the same in the z basis, like |00⟩ and |11⟩, the measurement simply returns 0. It also does so for any superposition of these. Since it does not distinguish between these states in any way, it also does not collapse such a superposition. Similarly, for basis states that are different in the z basis it returns a 1. This occurs for |01⟩, |10⟩ or any superposition thereof. Now suppose we apply such a 'syndrome measurement' on all pairs of physical qubits in our repetition code. If their state is described by a repeated |0⟩, a repeated |1⟩, or any superposition thereof, all the syndrome measurements will return 0. Given this result, we will know that our states are indeed encoded in the repeated states that we want them to be, and can deduce that no errors have occurred. If some syndrome measurements return 1, however, it is a signature of an error. We can therefore use these measurement results to determine how to decode the result.
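The classical shadow of this property is easy to verify: the neighbour parities of a bit string are identical for '000' and '111', so they reveal nothing about which encoded value is present. A stdlib sketch (our own helper, not the Qiskit circuit):

```python
def syndrome(codeword):
    # parities of neighbouring pairs: 0 if a pair agrees, 1 if it differs
    return ''.join(str(int(codeword[j]) ^ int(codeword[j + 1]))
                   for j in range(len(codeword) - 1))

print(syndrome('000'))  # '00' -- the same result as for '111'
print(syndrome('001'))  # '01' -- the differing pair is flagged
```

The syndrome distinguishes valid codewords from corrupted ones without distinguishing the two valid codewords from each other, which is the classical analogue of a measurement that does not collapse the encoded superposition.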

D. Quantum repetition code
We now know enough to understand exactly how the quantum version of the repetition code is implemented. We can use it in Qiskit by importing the required tools from Ignis.

from qiskit.ignis.verification.topological_codes import RepetitionCode
from qiskit.ignis.verification.topological_codes import lookuptable_decoding
from qiskit.ignis.verification.topological_codes import GraphDecoder

We are free to choose how many physical qubits we want the logical qubit to be encoded in. We can also choose how many times the syndrome measurements will be applied while we store our logical qubit, before the final readout measurement. Let us start with the smallest non-trivial case: three repetitions and one syndrome measurement round. The circuits for the repetition code can then be created automatically using the RepetitionCode object from Qiskit-Ignis. In these circuits, we have two types of physical qubits. There are the 'code qubits', which are the three physical qubits across which the logical state is encoded. There are also the 'link qubits', which serve as the ancilla qubits for the syndrome measurements.
Our single round of syndrome measurements in these circuits consists of just two syndrome measurements. One compares code qubits 0 and 1, and the other compares code qubits 1 and 2. One might expect that a further measurement, comparing code qubits 0 and 2, should be required to create a full set. However, these two are sufficient. This is because the information on whether 0 and 2 have the same z basis state can be inferred by combining the same information about 0 and 1 with that for 1 and 2. Indeed, for n qubits, we can get the required information from just n − 1 syndrome measurements of neighbouring pairs of qubits. Running these circuits on a simulator without any noise leads to very simple results. Here we see that the output comes in two parts. The part on the right holds the outcomes of the two syndrome measurements. That on the left holds the outcomes of the three final measurements of the code qubits.
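The sufficiency of the n − 1 neighbouring-pair measurements can be checked exhaustively for n = 3: the 0-2 parity is always the XOR of the 0-1 and 1-2 parities. The helper below is our own illustration:

```python
from itertools import product

def parity(bits, i, j):
    # 0 if bits i and j agree, 1 if they differ
    return int(bits[i]) ^ int(bits[j])

# check the identity on every 3-bit basis state
for bits in (''.join(b) for b in product('01', repeat=3)):
    assert parity(bits, 0, 2) == parity(bits, 0, 1) ^ parity(bits, 1, 2)
```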

E. Lookup table decoding
Now let's return to the n = 3, T = 1 example and look at a case with some noise. Here we have created raw_results, a dictionary that holds the results both for a circuit encoding a logical 0 and for one encoding a logical 1.
Our task when confronted with any of the possible outcomes we see here is to determine what the outcome should have been, if there was no noise. For an outcome of '000 00' or '111 00', the answer is obvious. These are the results we just saw for a logical 0 and logical 1, respectively, when no errors occur. The former is the most common outcome for the logical 0 even with noise, and the latter is the most common for the logical 1. We will therefore conclude that the outcome was indeed that for logical 0 whenever we encounter '000 00', and the same for logical 1 when we encounter '111 00'. Though this tactic is optimal, it can nevertheless fail. Note that '111 00' typically occurs in a handful of cases for an encoded 0, and '000 00' similarly occurs for an encoded 1. In this case, through no fault of our own, we will incorrectly decode the output. In these cases, a large number of errors conspired to make it look like we had a noiseless case of the opposite logical value, and so correction becomes impossible. We can employ a similar tactic to decode all other outcomes. The outcome '001 00', for example, occurs far more for a logical 0 than a logical 1. This is because it could be caused by just a single measurement error in the former case (which incorrectly reports a single 0 to be 1), but would require at least two errors in the latter. So whenever we see '001 00', we can decode it as a logical 0.
Applying this tactic over all the strings is a form of so-called 'lookup table decoding'. Whenever an output string is obtained, it is compared to a large body of results for known logical values. The most likely logical value can then be inferred. For many qubits, this quickly becomes intractable, as the number of possible outcomes becomes so large. In these cases, more algorithmic decoders are needed. However, lookup table decoding works well for testing out small codes. We can use tools in Qiskit to implement lookup table decoding for any code. For this we need two sets of results. One is the set of results that we actually want to decode, and for which we want to calculate the probability of incorrect decoding, P. We will use the raw_results we already have for this. The other set of results is one to be used as the lookup table. This will need to be run for a large number of samples, to ensure that it gets good statistics for each possible outcome. We'll use shots=10000.

circuits = code.get_circuit_list()
job = execute( circuits, Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=10000 )
table_results = {}
for log in ['0','1']:
    table_results[log] = job.result().get_counts(log)

With this data, which we call table_results, we can now use the lookuptable_decoding function from Qiskit. This takes each outcome from raw_results and decodes it with the information in table_results.
Then it checks if the decoding was correct, and uses this information to calculate P.

P = lookuptable_decoding(raw_results, table_results)
print('P =', P)

P = {'0': 0.0238, '1': 0.0237}

Here we see that the values for P are lower than those for ρ_meas and ρ_gate, so we get an improvement in the reliability for storing the bit value. Note also that the value of P for an encoded 1 is expected to be slightly higher than that for 0, because the encoding of 1 requires the application of x gates, which are an additional source of noise (though for these particular samples the difference lies within statistical fluctuation).
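The logic of lookup table decoding can be written out in a few lines. The sketch below is our own stand-in, not the actual lookuptable_decoding implementation (whose details may differ): it decodes each outcome to whichever logical value produced it most often in the table, and tallies the failure probability.

```python
def lookup_decode(raw_results, table_results):
    # raw_results, table_results: {'0': {outcome: counts}, '1': {...}}
    P = {}
    for log, counts in raw_results.items():
        shots = sum(counts.values())
        wrong = 0
        for outcome, freq in counts.items():
            # decode to the logical value that saw this outcome most often
            decoded = max(table_results,
                          key=lambda l: table_results[l].get(outcome, 0))
            if decoded != log:
                wrong += freq
        P[log] = wrong / shots
    return P

# toy example with hypothetical counts
table = {'0': {'000 00': 900, '001 00': 90, '111 00': 10},
         '1': {'111 00': 900, '001 00': 10, '000 00': 5}}
raw = {'0': {'000 00': 95, '111 00': 5}, '1': {'111 00': 100}}
print(lookup_decode(raw, table))  # {'0': 0.05, '1': 0.0}
```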

F. Graph theoretic decoding
The decoding considered above produces the best possible results, and does so without needing to use any details of the code. However, it has a major drawback that counters these advantages: the lookup table grows exponentially large as code size increases. For this reason, decoding is typically done in a more algorithmic manner that takes into account the structure of the code and its resulting syndromes.
The topological_codes module is designed to support multiple codes that share the same structure, and therefore can be decoded using the same methods. These methods are all based on similar graph theoretic minimization problems, where the graph in question is one that can be derived from the syndrome. The repetition code is one example that can be decoded in this way, and it is with this example that we will explain the graph-theoretic decoding in this section. Other examples are the toric and surface codes [19,20], 2D color codes [21,22] and matching codes [23]. All of these are examples of so-called topological quantum error correcting codes, which led to the name of the module. However, note that not all topological codes are compatible with such a decoder. Also, some non-topological codes will be compatible (such as the repetition code).
To find the graph that will be used in the decoding, some post-processing of the syndromes is required. Instead of using the form shown above, with the final measurement of the code qubits on the left and the outputs of the syndrome measurement rounds on the right, we use the process_results method of the code object to rewrite them in a different form.
For example, below is the processed form of a raw_results dictionary, in this case for n = 3 and T = 2.

Logical 0:
raw results {'000 00 00': 485, '000 00 01': 55}
processed results {'0 0 00 00 00': 485, '0 0 01 01 00': 55}

Logical 1:
raw results {'111 10 00': 51, '111 01 00': 57, '111 00 00': 455, '111 00 10': 51}
processed results {'1 1 00 10 10': 51, '1 1 00 01 01': 57, '1 1 00 00 00': 455, '1 1 10 10 00': 51}

Here we can see that '000 00 00' has been transformed to '0 0 00 00 00', and '111 00 00' to '1 1 00 00 00', and so on. In these new strings, the 0 0 to the far left for the logical 0 results and the 1 1 to the far left of the logical 1 results are the logical readout. Any code qubit could be used for this readout, since they should (without errors) all be equal. It would therefore be possible in principle to just have a single 0 or 1 at this position. We could also do as in the original form of the result and have n, one for each qubit. Instead we use two, from the two qubits at either end of the line. The reason for this will be shown later. In the absence of errors, these two values will always be equal, since they represent the same encoded bit value. After the logical values follow the n − 1 results of the syndrome measurements for the first round. A 0 implies that the corresponding pair of qubits have the same value, and 1 implies that they are different from each other. There are n − 1 results because the line of n code qubits has n − 1 possible neighbouring pairs. In the absence of errors, they will all be 0. This is exactly the same as the first such set of syndrome results from the original form of the result. The next block is the next round of syndrome results. However, rather than presenting these results directly, it instead gives us the syndrome change between the first and second rounds. It is therefore the bitwise XOR of the syndrome measurement results from the second round with those from the first. In the absence of errors, they will all be 0.
Any subsequent blocks follow the same formula, though the last of all requires some comment. This is not measured using the standard method (with a link qubit). Instead it is calculated from the final readout measurement of all code qubits. Again it is presented as a syndrome change, and will be all 0 in the absence of errors. This is the (T + 1)-th block of syndrome measurements since, as it is not done in the same way as the others, it is not counted among the T syndrome measurement rounds.
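The convention described above can be made concrete with a small sketch. The function below is our own reconstruction (the actual process_results method may differ in its details): it converts a raw string, with the final readout on the left and the syndrome rounds on the right (round 1 rightmost), into the processed form.

```python
def process_raw(raw, n, T):
    parts = raw.split(' ')
    readout, rounds = parts[0], parts[1:][::-1]   # rounds[0] is round 1
    xor = lambda a, b: ''.join(str(int(x) ^ int(y)) for x, y in zip(a, b))
    # logical readout from the two code qubits at the ends of the line
    logical = readout[0] + ' ' + readout[-1]
    # syndrome implied by the final readout: neighbouring-pair parities
    final = ''.join(str(int(readout[j]) ^ int(readout[j + 1]))
                    for j in range(n - 1))
    blocks = [rounds[0]]                          # round 1, reported directly
    for t in range(1, T):
        blocks.append(xor(rounds[t], rounds[t - 1]))  # syndrome changes
    blocks.append(xor(final, rounds[-1]))         # final block, also a change
    return logical + ' ' + ' '.join(blocks)

print(process_raw('000 00 01', 3, 2))  # '0 0 01 01 00', matching the example
```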
The following examples further illustrate this convention.

Example 1: '0 0 0110 0000 0000' represents a d = 5, T = 2 repetition code with encoded 0. The syndrome shows that (most likely) the middle code qubit was flipped by an error before the first measurement round. This causes it to disagree with both neighbouring code qubits for the rest of the circuit. This is shown by the syndrome in the first round, but the blocks for subsequent rounds do not report it as it no longer represents a change. Other sets of errors could also have caused this syndrome, but they would need to be more complex and so presumably less likely.

Example 2: '0 0 0010 0010 0000' represents a d = 5, T = 2 repetition code with encoded 0. Here one of the syndrome measurements reported a difference between two code qubits in the first round, leading to a 1. The next round did not see the same effect, and so resulted in a 0. However, since this disagreed with the previous result for the same syndrome measurement, and since we track syndrome changes, this change results in another 1. Subsequent rounds also do not detect anything, but this no longer represents a change and hence results in a 0 in the same position. Most likely the measurement result leading to the first 1 was an error.

Example 3: '0 1 0000 0001 0000' represents a d = 5, T = 2 repetition code with encoded 1. A code qubit on the end of the line is flipped before the second round of syndrome measurements. This is detected by only a single syndrome measurement, because it is on the end of the line. For the same reason, it also disturbs one of the logical readouts.

Note that in all these examples, a single error causes exactly two characters in the string to change from the value they would have with no errors. This is the defining feature of the convention used to represent the syndrome in topological_codes. It is used to define the graph on which the decoding problem is defined. Specifically, the graph is constructed by first taking the circuit encoding
logical 0, for which all bit values in the output string should be 0. Many copies of this are then created and run on a simulator, with a different single Pauli operator inserted into each. This is done for each of the three types of Pauli operator on each of the qubits and at every circuit depth. The output from each of these circuits can be used to determine the effects of each possible single error. Since the circuit contains only Clifford operations, the simulation can be performed efficiently. In each case, the error will change exactly two of the characters (unless it has no effect). A graph is then constructed for which each bit of the output string corresponds to a node, and the pairs of bits affected by the same error correspond to an edge.
The process of decoding a particular output string typically requires the algorithm to deduce which set of errors occurred, given the syndrome found in the output string. This can be done by constructing a second graph, containing only nodes that correspond to non-trivial syndrome bits in the output. An edge is then placed between each pair of nodes, with a corresponding weight equal to the length of the minimal path between those nodes in the original graph. A set of errors consistent with the syndrome then corresponds to a perfect matching of this graph. To deduce the most likely set of errors to have occurred, a good tactic would be to find the matching corresponding to the least possible number of errors that is consistent with the observed syndrome. This corresponds to a minimum weight perfect matching of the graph [20]. Using minimal weight perfect matching is a standard decoding technique for the repetition code and surface codes [20,24], and is implemented in Qiskit Ignis. It can also be used in other cases, such as color codes, but it does not find the best approximation of the most likely set of errors for every code and noise model. For that reason, other decoding techniques based on the same graph can be used. The GraphDecoder of Qiskit Ignis calculates these graphs for a given code, and will provide a range of methods to analyze them. At the time of writing, only minimum weight perfect matching is implemented. Note that, for codes such as the surface code, it is not strictly true that each single error will change the value of only two bits in the output string. A σ_y error, for example, would flip a pair of values corresponding to two different types of stabilizer, which are typically decoded independently. Output for these codes will therefore be presented in a way that acknowledges this, and analysis of such syndromes will correspondingly create multiple independent graphs to represent the different syndrome types.
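For small cases the matching itself can be illustrated by brute force. The sketch below is ours (real decoders use polynomial-time algorithms such as blossom matching rather than enumeration): it pairs up the defect nodes so as to minimize the total weight.

```python
from itertools import permutations

def min_weight_matching(defects, dist):
    # brute-force minimum-weight perfect matching over an even number of
    # defect nodes; dist(a, b) stands in for the shortest-path length
    # between nodes a and b in the original decoding graph
    best_w, best = float('inf'), None
    for order in permutations(defects):
        pairs = [(order[i], order[i + 1]) for i in range(0, len(order), 2)]
        w = sum(dist(a, b) for a, b in pairs)
        if w < best_w:
            best_w, best = w, pairs
    return best, best_w

# toy defects at (code-qubit position, round), with a Manhattan metric
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
pairs, weight = min_weight_matching([(0, 1), (1, 1), (3, 2), (4, 2)],
                                    manhattan)
```

The lowest-weight pairing joins each defect to its nearest partner, corresponding to the smallest set of errors consistent with the syndrome.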

III. RUNNING A REPETITION CODE BENCHMARKING PROCEDURE
We will now run examples of repetition codes on real devices, and use the results as a benchmark. First, we will briefly summarize the process. This applies not only to this example of the repetition code, but also to other benchmarking procedures in topological_codes, and indeed to Qiskit Ignis in general. In each case, the following three-step process is used.
1. A task is defined. Qiskit Ignis determines the set of circuits that must be run and creates them.
2. The circuits are run. This is typically done using Qiskit. However, in principle any service or experimental equipment could be interfaced.

3. Qiskit Ignis is used to process the results from the circuits, to create the output required for the given task.

For topological_codes, step 1 requires the type and size of quantum error correction code to be chosen. Each type of code has a dedicated Python class. A corresponding object is initialized by providing the parameters required, such as n and T for a RepetitionCode object. The resulting object then contains the circuits corresponding to the given code encoding simple logical qubit states (such as |0⟩ and |1⟩), and then running the procedure of error detection for a specified number of rounds, before final readout in a straightforward logical basis (typically a standard |0⟩/|1⟩ measurement). For topological_codes, the main processing of step 3 is the decoding, which aims to mitigate for any errors in the final readout by using the information obtained from error detection. The optimal algorithm for decoding typically varies between codes. The decoding is done by the GraphDecoder class. A corresponding object is initialized by providing the code object for which the decoding will be performed. This is then used to determine the graph on which the decoding problem will be defined. The results can then be processed using the various methods of the decoder object. In the following we will see the above ideas put into practice for the repetition code. In doing this we will employ two Boolean variables, step_2 and step_3. The variable step_2 is used to show which parts of the program need to be run when taking data from a device, and step_3 is used to show the parts which process the resulting data. Both are set to False by default, to ensure that all the program snippets below can be run using only previously collected and processed data. However, to obtain new data one only needs to use step_2 = True, and to perform decoding on any data
one only needs to use step_3 = True.

Before running the circuits from these codes, we need to ensure that the transpiler knows which physical qubits on the device it should use. This means using the qubit of line[0] to serve as the first code qubit, that of line[1] to be the first link qubit, and so on. This is done by the following function, which takes a repetition code object and a line, and creates a Python dictionary to specify which qubit of the code corresponds to which element of the line.

Now we can transpile the circuits, to create the circuits that will actually be run by the device. A check is also made to ensure that the transpilation has not introduced non-trivial effects by increasing the number of qubits. Furthermore, the compiled circuits are collected into a single list, to allow them all to be submitted at once in the same batch job.

We are now ready to run the job. As with the simulated jobs considered already, the results from this are extracted into a dictionary raw_results. However, in this case it is extended to hold the results from different code sizes. This means that raw_results[n] in the following is equivalent to one of the raw_results dictionaries used earlier, for a given n.

It can be convenient to save the data to file, so that the processing of step 3 can be done or repeated at a later time.

with open('results/raw_results_' + device_name + '.txt', 'r') as file:
    raw_results = eval(file.read())

As was described previously, some post-processing of the syndromes is required to find the graph that will be used in the decoding. This is done using the process_results method of each repetition code object code.

Another insight we can gain is to use the results to determine how likely certain error processes are to occur. To see how this can be done, recall that each node in the syndrome graph corresponds to a particular syndrome measurement being performed at a particular point within the circuit. A pair of nodes are connected by an edge if and
only if a single error, occurring on a particular qubit at a particular point within the circuit, can cause the value of both to change. For any such pair of adjacent nodes, we will specifically consider the values C_11 and C_00, where the former is the number of counts in results[n]['0'] corresponding to the syndrome value of both adjacent nodes being 1, and the latter is the same for them both being 0. The most likely cause for each event recorded in C_11 is the occurrence of the error corresponding to the edge between these two nodes. Also, it is most likely that this error has not occurred for each event recorded in C_00. As such, to first order, we can state that

C_11 / (C_11 + C_00) ≈ p,

where p is the probability of the error corresponding to the edge between these two nodes. For example, suppose that one of the nodes we are considering corresponds to the syndrome measurement of code qubits 0 and 1 in the first round, and the other corresponds to the same for code qubits 1 and

The topological_codes module serves as a starting point for such endeavours, since anyone with an internet connection can now use the tool to probe the 15 qubits of IBM's publicly accessible Melbourne device.
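The first-order estimate above can be computed directly from counts. The following is a minimal sketch in plain Python; the function name and the count values are illustrative assumptions, not data or code from the module itself.

```python
def edge_error_probability(c_11, c_00):
    """First-order estimate of the probability p of the error associated
    with an edge of the syndrome graph, given the number of counts for
    which both adjacent nodes had value 1 (c_11) and the number for
    which both had value 0 (c_00)."""
    return c_11 / (c_11 + c_00)

# Illustrative counts: in 20 samples both adjacent nodes were 1,
# and in 980 samples both remained 0.
p = edge_error_probability(20, 980)  # 0.02
```

Repeating this estimate for every edge of the syndrome graph gives a picture of how error rates vary across the qubits and circuit locations of the device.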
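The layout function described earlier, which maps the qubits of a repetition code onto a line of physical qubits on the device, can be sketched in plain Python. Here code and link qubits are represented by labelled tuples rather than the register elements of an actual RepetitionCode object; this simplification, and the function name, are our own.

```python
def line_layout(d, line):
    """Build an initial-layout dictionary for a distance-d repetition code:
    line[0] hosts the first code qubit, line[1] the first link qubit,
    and so on, alternating code and link qubits along the line."""
    layout = {}
    for j in range(d):           # d code qubits sit on even positions
        layout[('code', j)] = line[2 * j]
    for j in range(d - 1):       # d - 1 link qubits sit on odd positions
        layout[('link', j)] = line[2 * j + 1]
    return layout

# A distance-3 code occupies 2 * 3 - 1 = 5 physical qubits of the line.
layout = line_layout(3, [0, 1, 2, 3, 4])
```

A dictionary of this form can then be passed to the transpiler as the initial layout, ensuring that neighbouring code and link qubits of the code are placed on physically connected qubits of the device.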

FIG. 1: The layout of the Rochester device. Colours represent error probabilities for controlled-NOTs and readout on qubits.
FIG. 2: Only results with 50 or more samples are shown for clarity.