Characteristics of Reversible Circuits for Error Detection

In this work, we consider error detection via simulation for reversible circuit architectures. We rigorously prove that reversibility augments the performance of this simple error detection protocol to a considerable degree. A single randomly generated input detects a single error with a probability that depends only on the size of the error, not on the size of the circuit itself. Empirical studies confirm that this behavior typically extends to multiple errors as well. In conclusion, reversible circuits offer characteristics that reduce masking effects -- a desirable feature that is in stark contrast to irreversible circuit architectures.


I. INTRODUCTION
The detection of errors is a fundamental problem in electrical engineering and computer science. Given a circuit C1 with n inputs and m outputs (the Golden Specification), the task is to decide whether a given circuit realization C2 (the Design Under Verification) describes the same functionality on the logical level.
Many approaches exist that address this important and challenging problem. In this work, we focus on error detection protocols that only require simulation runs of the two circuits -- as opposed to formal verification techniques which explicitly utilize structural knowledge about both circuits [1]-[6]. This is a severe restriction, but simulations alone are -- in principle -- sufficient to solve this task. If the two circuits are equivalent, they have the same input-output behavior. Conversely, suppose that they are functionally distinct. Then, there exists at least one input string for which the two circuits produce distinct outputs. In formulas:

∃ ⃗x ∈ {0,1}^n such that C1(⃗x) ≠ C2(⃗x).   (1)

Such an input successfully detects the discrepancy between C1 and C2 and serves as a counterexample for the equivalence of the circuits. The problem, however, is how to find counterexamples (1). If we only allow simulations of both circuits, i.e., we consider them as black boxes, we do not have actionable advice on how to choose promising input strings and we may as well generate inputs uniformly at random: ⃗x ∼ Unif({0,1}^n), i.e., we flip an unbiased coin for each input value (⃗x = (x_n, ..., x_1), where x_n, ..., x_1 ∼ x and Pr[x = 0] = Pr[x = 1] = 1/2). Subsequently, we simulate both circuits with this input and check whether they produce the same output: C1(⃗x) ?= C2(⃗x). If the outputs are distinct, we have found a counterexample and the circuits cannot be equivalent. But if the outputs are the same, the test is inconclusive. In this case, we must repeat it with new (randomly generated) inputs until we either find a counterexample (non-equivalence) or have exhausted all 2^n possible inputs (equivalence). The latter, unfortunately, can be a very real possibility. The two circuits C1 and C2 may differ on a single input only and it is extremely unlikely to quickly find this input by (random) chance.
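As a minimal sketch of this black-box protocol (in Python; the two 3-bit example functions are illustrative and not from the paper):

```python
import random

def random_equivalence_test(c1, c2, n, max_trials):
    """Black-box test: feed both circuits uniformly random n-bit inputs.

    Returns a counterexample input if one is found, else None
    (inconclusive after max_trials simulations)."""
    for _ in range(max_trials):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if c1(x) != c2(x):
            return x  # counterexample: the circuits are not equivalent
    return None

# Illustrative example: two 3-bit functions that differ on exactly one input.
random.seed(0)
f = lambda x: (x[0] & x[1] & x[2],)
g = lambda x: (0,)  # agrees with f everywhere except on (1, 1, 1)
counterexample = random_equivalence_test(f, g, 3, 1000)
```

With 1000 random trials, missing the single distinguishing input (1, 1, 1) happens with probability (7/8)^1000, i.e., essentially never; for circuits that agree on all but a vanishing fraction of the 2^n inputs, however, this failure probability can be substantial, which is precisely the masking problem.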
To make matters worse, classical circuits can mask even "small" errors very effectively. For n = 8, this is illustrated in Fig. 1. A cascade of logical AND gates, realizing the functionality y = x_n · ... · x_1 (ideal circuit C1), is affected by a single bit-flip error (erroneous implementation C2) in the second layer. It is easy to check that only 4 out of all 2^8 = 256 input strings can detect this discrepancy.
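This count can be verified exhaustively. The following sketch models the AND cascade as a balanced tree and injects a bit-flip on one wire feeding the second layer (the exact fault location in Fig. 1 may differ, so treat the wire choice as an assumption):

```python
from itertools import product

def and_tree(x, inject_flip=False):
    """Ideal 8-input AND cascade y = x8 * ... * x1, built as a tree.

    If inject_flip is True, a bit-flip error corrupts one internal
    wire entering the second layer (assumed fault location)."""
    a, b = x[0] & x[1], x[2] & x[3]   # first layer
    c, d = x[4] & x[5], x[6] & x[7]
    if inject_flip:
        a ^= 1                        # bit-flip on wire 'a'
    e, f = a & b, c & d               # second layer
    return e & f                      # third layer

detecting = [x for x in product((0, 1), repeat=8)
             if and_tree(x) != and_tree(x, inject_flip=True)]
print(len(detecting))  # only 4 of the 2^8 = 256 inputs expose the error
```

The flip is visible only when every other wire that feeds the remaining AND gates carries a 1; all other inputs mask it completely.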
Masking is a serious issue for error detection using simulation techniques. No malicious intent is required to fool randomly generated inputs; the circuit may do it all by itself. Needless to say, this issue has been well-known for decades. Error detection based on random inputs (alone) often pales in comparison to other, more sophisticated techniques. Today's state of the art is governed by constraint-based stimuli generation techniques [7]-[11], fuzzing [12], etc. But on the positive side, error detection using randomly chosen inputs is based on minimal assumptions, namely the possibility to simulate two circuits as black boxes. Moreover, it is intuitive, and individual simulation runs are easy and fast to execute.
Fig. 1. Error detection in classical circuits is hard: Suppose that a cascade of logical AND gates, realizing the Boolean function y = x_8 · ... · x_1, is affected by a single bit-flip error (red) in the second layer. Only 4 out of the 2^8 = 256 possible input strings can detect this error.
Fig. 2. Illustration of main rigorous contributions: Simulations with uniformly random inputs completely expose any single reversible error in a given reversible circuit. The two scenarios are exactly equivalent ("no masking"). In the lower scenario, the probability of correct distinction is governed by the size k of the error, not the total number of lines.

II. SUMMARY OF RESULTS: ERROR DETECTION IN REVERSIBLE CIRCUITS
We have seen that, in general, simulation with (uniformly) random inputs is not a viable strategy for detecting errors in classical circuits. Already a single "small" error can be exceedingly difficult to detect (masking). Perhaps surprisingly, this dark picture lightens up considerably if we consider reversible implementations of logical functionalities. As the name suggests, reversible circuits are circuits whose action can be undone by running the circuit backwards. More formally, n-bit reversible circuits implement permutations on the set of all 2^n bit strings. This, in particular, implies that the number of input and output bits must be the same (n = m). Despite these restrictions, reversible circuits are universal, i.e., any logical function on n bits can be implemented by a reversible circuit [13], and efficient mapping techniques are readily available [14]-[16] (this implementation may require strictly more than n bits, though). Negation (NOT), the controlled NOT (CNOT) and the Toffoli gate (CCNOT) are examples of simple reversible functionalities. Viewed as a logic gate, CCNOT is also universal: every reversible circuit can be constructed from Toffoli gates alone [13].
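The permutation picture is easy to make concrete. A short sketch verifying that the Toffoli gate (CCNOT) permutes the 2^3 bit strings and is its own inverse:

```python
from itertools import product

def toffoli(x):
    """CCNOT on 3 bits: flip the target iff both controls are 1."""
    c1, c2, t = x
    return (c1, c2, t ^ (c1 & c2))

strings = list(product((0, 1), repeat=3))
image = [toffoli(x) for x in strings]

is_permutation = sorted(image) == strings             # bijection on {0,1}^3
is_self_inverse = all(toffoli(toffoli(x)) == x for x in strings)
```

Both checks succeed: the gate merely reorders the 8 bit strings, and applying it twice restores every input.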
To summarize, reversible circuits bear strong similarities with classical (irreversible) circuits, but there are some notable additional characteristics. Chief among them is reversibility itself, which implies that information cannot easily escape. Here, we show that this has profound implications for error detection with random inputs. More precisely,
(i) reversible circuits can never mask single reversible errors (rigorous result, see Proposition 1);
(ii) the probability of detecting a single reversible error only depends on its size, i.e., on the number of bits it affects, not the total number of bits (unsurprising rigorous result, see Lemma 2);
(iii) multiple reversible errors are typically even easier to detect (empirical studies, see Fig. 3 and discussions in Section IV).
The first two insights are mathematical statements that address single errors only. They readily follow from reversibility and fundamental properties of uniformly random input strings. We refer to Section III for details and Fig. 2 for illustrative caricatures. When combined, they imply the following confidence bound for detecting single errors with random inputs.
Theorem 1. Suppose that a general reversible circuit is affected by a single reversible error of size k and fix δ ∈ (0, 1) (confidence). Then, at most ⌈log(1/δ) 2^(k−1)⌉ randomly selected inputs suffice to witness this error with probability (at least) 1 − δ.
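Taking log to be the natural logarithm (an assumption on our part, which matches the exponential bound behind Theorem 2), the repetition count is straightforward to evaluate:

```python
import math

def required_inputs(k, delta):
    """Inputs sufficing to detect a single size-k reversible error
    with probability at least 1 - delta (Theorem 1)."""
    return math.ceil(math.log(1.0 / delta) * 2 ** (k - 1))

# e.g. a size-3 error (CCNOT-type) at 99% confidence:
print(required_inputs(3, 0.01))  # ceil(ln(100) * 4) = 19
```

Note how the count depends only on the error size k and the confidence δ; the number of lines n never enters.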
For k = 1 -- a single bit-flip error (NOT) anywhere within the circuit -- this statement can be further improved (see Theorem 2) and actually becomes deterministic: already a single (random) input is guaranteed to detect this error with certainty. We emphasize that this statement is true irrespective of the number of lines and the circuit's size. It is simply impossible to hide a single bit-flip inside a reversible circuit. Such a behavior is strikingly different from irreversible circuit architectures. There, it can routinely happen that order 2^n random inputs are necessary to detect even a single bit-flip error, see, e.g., Fig. 1.
The multiple-error case is much more intricate, because error locations and circuit structure start to matter. This leads to drastically different best-case (independent errors) and worst-case (severe masking) behavior. To better understand the typical behavior of multiple errors, we resort to numerical simulations. These indicate a (close-to) best-case behavior: the probability of failing to detect a total of l reversible errors is exponentially suppressed in l, see Fig. 3. Additional simulation results and details are provided in Section IV.
Note that a similar line of thought has recently been presented for the domain of quantum computing (which bears many similarities to reversible circuits). More precisely, a verification scheme heavily based on simulation has been proposed in [17] and refined in [18]. A similar theoretical result has been presented in [19].

III. RIGOROUS THEORY FOR SINGLE ERRORS

A. Reversible circuits and error model
We will work in the reversible circuit model for n input bits (and n output bits). A high level of mathematical abstraction already suffices to deduce powerful consequences. An n-bit reversible circuit implements a permutation R : {0,1}^n → {0,1}^n of all 2^n bit strings. Reversing the circuit, that is, running it backwards, produces the unique permutation R^T : {0,1}^n → {0,1}^n that undoes the original circuit: R^T • R = R • R^T = id, where id : {0,1}^n → {0,1}^n is the identity permutation ("do nothing"). This defining feature suffices to deduce three elementary properties that will form the basis of our proof strategy.
Lemma 1 (Characteristics of reversible circuits). Consider reversible circuits R1, R2, R3 : {0,1}^n → {0,1}^n and an n-bit string ⃗x ∈ {0,1}^n. Then,
(i) output equivalence is unaffected by composition: R1(⃗x) = R2(⃗x) if and only if R3(R1(⃗x)) = R3(R2(⃗x));
(ii) the uniform distribution is invariant: ⃗x ∼ Unif({0,1}^n) implies R1(⃗x) ∼ Unif({0,1}^n);
(iii) non-trivial circuits move at least two strings: if R1 ≠ id, then R1(⃗x) = ⃗x for at most 2^n − 2 of the 2^n bit strings.

Fig. 4. (Single) error model and compatible circuit decomposition: An ideal reversible circuit (blue) is corrupted by a single reversible error (red). The error location begets a decomposition of ideal and corrupted circuit into matching constituents: R = R2 • R1 and R̂ = R2 • E • R1.

Proof. All proofs utilize the fact that reversible circuits act like permutations on the set of all 2^n bit strings.
(i) Permutations are invertible transformations. As such, they preserve equivalence: y = y′ if and only if R(y) = R(y′) for any reversible circuit R. The claim follows from setting y = R1(⃗x), y′ = R2(⃗x) and R = R3.
(ii) The uniform distribution over n-bit strings assigns the same weight to each of the 2^n bit strings. Permuting the bit strings cannot affect the weights and, by extension, the uniform distribution itself.
(iii) The number of inputs ⃗x with R1(⃗x) = ⃗x is equal to the number of fix points of the underlying permutation. A non-trivial permutation of 2^n elements can have at most 2^n − 2 fix points (transposition). □
Different reversible circuits of compatible bit-size n can be combined to yield another (larger) circuit: R2 • R1 : {0,1}^n → {0,1}^n ("composition"). The reverse direction is also possible ("decomposition") and, arguably, more interesting. Circuit diagrams provide a well-established tool that does precisely that. They decompose a possibly complicated circuit into a structured sequence of simpler building blocks. We use circuit decomposition on a rather high level to reason about single reversible errors in reversible circuits. Suppose that an n-bit reversible circuit R is affected by a reversible error E that produces a functionally different circuit R̂. Then, the location of this error within the circuit suggests a compatible decomposition into three parts: everything before the error (R1), the error itself (E) and everything after the error (R2). In summary,

R = R2 • R1 (ideal circuit) and R̂ = R2 • E • R1 (corrupted circuit),   (2)

and we refer to Fig. 4 for a visual illustration.
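At this level of abstraction, circuits are just permutations of bit strings, and the decomposition can be played through directly. A sketch with randomly chosen 4-bit permutations standing in for R1 and R2, and a NOT error on the last line (all choices illustrative):

```python
import random
from itertools import product

random.seed(1)
strings = list(product((0, 1), repeat=4))

def random_reversible_circuit():
    """A generic 4-bit reversible circuit, modeled as a random
    permutation of the 16 bit strings (high-level abstraction)."""
    table = dict(zip(strings, random.sample(strings, len(strings))))
    return lambda x, t=table: t[x]

R1, R2 = random_reversible_circuit(), random_reversible_circuit()
E = lambda x: x[:3] + (x[3] ^ 1,)      # reversible error: NOT on the last bit

R = lambda x: R2(R1(x))                # ideal circuit R = R2 . R1
R_hat = lambda x: R2(E(R1(x)))         # corrupted circuit R^ = R2 . E . R1

# A k = 1 error moves every bit string, so the corrupted circuit
# differs from the ideal one on every single input ("no masking"):
all_inputs_detect = all(R(x) != R_hat(x) for x in strings)
```

Because R2 is injective, it can never map the distinct strings R1(⃗x) and E(R1(⃗x)) to the same output, which already foreshadows Proposition 1.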

B. No masking for random inputs
We now have all building blocks in place to present and derive the main conceptual result of this work. It addresses the probability of detecting single reversible errors in arbitrary reversible circuits (2) based on a single random input ⃗x ∼ Unif({0,1}^n).

Proposition 1 (No masking). Suppose that an ideal reversible circuit R = R2 • R1 is corrupted by a single reversible error E, as in Eq. (2). Then, the probability of detecting this discrepancy with a random input ⃗x ∼ Unif({0,1}^n) only depends on the error E, not the actual circuit. More precisely,

Pr[R(⃗x) ≠ R̂(⃗x)] = Pr[E(⃗x) ≠ ⃗x],

where the probability is taken with respect to the uniform distribution over all 2^n possible input strings.
Proof. This statement is an immediate consequence of two elementary characteristics of reversible circuit architectures. Apply Lemma 1 (i) to remove the effect of R2, and note that, according to Lemma 1 (ii), R1(⃗x) is again a uniformly random bit string. □

Although simple to prove, Proposition 1 pinpoints remarkable differences between reversible and irreversible circuits. As illustrated in Fig. 2, the former cannot hide errors from randomly sampled inputs ("no masking").
We emphasize that a uniformly random selection of input strings is crucial to arrive at such a powerful conclusion. Reversibility alone is enough to ignore the final portion of the circuit R 2 (after the error has occurred). Reversible circuits always map (non-)equal bit strings to (non-)equal bit strings. In contrast, the first portion of the circuit R 1 (before the error has occurred) can affect concrete inputs ⃗ x ∈ {0, 1} n . But if ⃗ x is sampled randomly, then R 1 (⃗ x) will be a different, but still random, bit string. The uniform distribution is special in the sense that it is invariant under reversible transformations. The circuit R 1 may affect every concrete input, but it does not affect the underlying distribution.
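This invariance can be checked exactly by pushing the uniform weights through a permutation (an illustrative 3-bit example; the permutation is randomly chosen):

```python
import random
from collections import Counter
from itertools import product

random.seed(0)
strings = list(product((0, 1), repeat=3))
# A generic reversible circuit on 3 bits: a random permutation table.
perm = dict(zip(strings, random.sample(strings, len(strings))))

# Push the uniform distribution through the circuit: each output string
# inherits the weight 1/2^n of exactly one input string (bijectivity).
weights = Counter()
for x in strings:
    weights[perm[x]] += 1 / len(strings)

still_uniform = all(abs(w - 1 / 8) < 1e-12 for w in weights.values())
```

Every output string ends up with weight exactly 1/8: the circuit scrambles individual inputs but leaves the distribution untouched.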

C. Only error size matters
We have seen that uniformly random inputs can uncover single reversible errors in a general reversible circuit. According to Proposition 1, the probability of witnessing a discrepancy only depends on the error, not the underlying circuit structure.
We say that an error E : {0,1}^n → {0,1}^n has size k if it only affects k bits in a non-trivial fashion. The remaining n − k bits are not touched at all. We refer to Fig. 2 for a visual illustration of this summary parameter. Intuitively, we would expect that "large" errors are easier to detect than "small" ones and that the number of lines n plays an active role. However, the following simple statement shows that the probability of detecting an error in the worst case is exponentially suppressed with respect to the error size k, but is independent of the actual number of bits n.

Lemma 2 (Only error size matters). Suppose that E : {0,1}^n → {0,1}^n is a non-trivial reversible error of size k. Then, Pr[E(⃗x) ≠ ⃗x] ≥ 2^(−(k−1)) for ⃗x ∼ Unif({0,1}^n).
Proof. Suppose, without loss of generality, that the error E only affects the least-significant k bits, i.e., E(⃗x) = E(x_n, ..., x_1) = (x_n, ..., x_{k+1}, y_k, ..., y_1), where (y_k, ..., y_1) = Ẽ(x_k, ..., x_1). Since E is reversible, its restriction Ẽ : {0,1}^k → {0,1}^k to the k relevant bits must also be reversible. Moreover, Ẽ ≠ id, because E is non-trivial. Lemma 1 (iii) then implies that there must be at least 2 bit strings of size k that are affected by Ẽ. Finally, we use the fact that ⃗x = (x_n, ..., x_1) ∼ Unif({0,1}^n) implies that the least-significant k bits are also distributed uniformly: (x_k, ..., x_1) ∼ Unif({0,1}^k). Therefore, Pr[E(⃗x) ≠ ⃗x] = Pr[Ẽ(x_k, ..., x_1) ≠ (x_k, ..., x_1)] ≥ 2/2^k = 2^(−(k−1)). □

This probability bound is actually sharp. Worst-case reversible errors of size k permute exactly 2 out of the 2^k possible k-bit inputs on which they act. Concrete examples of such a behavior are NOT (k = 1), CNOT (k = 2), CCNOT (k = 3) and, more generally, a (k − 1)-fold controlled NOT gate on k bits (general k). The numerical simulations shown in Fig. 3 are based on injecting such worst-case errors at random circuit locations.
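Sharpness is easy to verify numerically: a (k − 1)-fold controlled NOT moves exactly 2 of the 2^k strings it acts on, for every k (a small exhaustive sketch):

```python
from itertools import product

def multi_controlled_not(x):
    """(k-1)-fold controlled NOT on k bits: flip the last bit iff all
    preceding control bits are 1 -- the worst-case error of size k."""
    if all(x[:-1]):
        return x[:-1] + (x[-1] ^ 1,)
    return x

moved_counts = []
for k in range(1, 7):
    strings = list(product((0, 1), repeat=k))
    moved_counts.append(sum(multi_controlled_not(x) != x for x in strings))

# Exactly 2 of the 2^k strings move for every k, so a uniformly random
# input detects the error with probability 2 / 2^k = 2^(1-k).
print(moved_counts)  # [2, 2, 2, 2, 2, 2]
```

The detection probability therefore drops from 1 (NOT) to 1/2 (CNOT) to 1/4 (CCNOT), independently of the number of lines n.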

D. General confidence bound for detecting single reversible errors
We now have all necessary ingredients to establish a rigorous performance guarantee for reversible error detection with (uniformly) random inputs. The following statement bounds the number of uniformly random inputs that may be required to detect a single reversible error of size k.

Theorem 2. Fix R = R2 • R1 (ideal circuit) and R̂ = R2 • E • R1 (single reversible error), where E has size k. Suppose that ⃗x_1, ..., ⃗x_N are N (independent) uniformly random inputs. Then,

Pr[R(⃗x_i) = R̂(⃗x_i) for all 1 ≤ i ≤ N] ≤ (1 − 2^(−(k−1)))^N ≤ exp(−N 2^(−(k−1))).   (3)

In words, the probability of failing to detect a single error is exponentially suppressed in the number N of random test inputs.
Theorem 1 above is a streamlined consequence of this observation: setting N = ⌈log(1/δ) 2^(k−1)⌉ provides a concrete number of repetitions that ensures that we detect the discrepancy with probability (at least) 1 − δ.
Proof of Theorem 2. For N = 1 (one random input), the claim readily follows from combining Proposition 1 and Lemma 2 (more precisely, their contrapositions): Pr[R(⃗x) = R̂(⃗x)] = 1 − Pr[E(⃗x) ≠ ⃗x] ≤ 1 − 2^(−(k−1)). This bound readily extends to the general N-case by using the assumption that the individual input strings ⃗x_1, ..., ⃗x_N are all sampled independently. Joint probabilities of independent events factorize and we conclude Pr[R(⃗x_i) = R̂(⃗x_i) for all 1 ≤ i ≤ N] ≤ (1 − 2^(−(k−1)))^N. Apply 1 + x < exp(x) for all x ≠ 0 (convexity of the exponential function) with x = −2^(−(k−1)) to complete the argument. □

Fig. 5. Partial simplification for multiple errors: Simulation with uniformly random inputs exposes multiple errors only partially. Everything before the first error (R1) and after the last error (R3) can be safely ignored, but the part in between (R2) does matter. Different circuit structures can lead to strikingly different error detection probabilities.

Fig. 6. Best-case scenario for two errors: One of the errors, say E2, commutes with the relevant circuit part R2. Reordering allows us to treat the two errors as a single effective error Ẽ = E1 • E2. In addition, E1 and E2 affect disjoint bit collections (independence) and Ẽ factorizes nicely into two disjoint components.
The bound provided in Theorem 2 is simple, but not sharp (the inequality 1 + x ≤ exp(x) is only tight at x = 0). As such, it always under-estimates the actual confidence level. This discrepancy is most pronounced for small error sizes k. The extreme case is a single NOT error (k = 1). For k = 1, the exact failure probability in Eq. (3) becomes (exactly) zero. By contraposition, every possible input bit string is guaranteed to detect a single bit-flip error that is hidden anywhere within the circuit.
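A quick Monte Carlo sanity check of Theorem 2 (by Proposition 1, it suffices to test the error in isolation; the parameter choices are illustrative):

```python
import math
import random

random.seed(7)
k, N, trials = 3, 8, 50_000
p_detect = 2.0 ** -(k - 1)     # worst-case single-input detection probability

failed_rounds = 0
for _ in range(trials):
    # One verification round: N independent random inputs; the round
    # fails if none of them detects the size-k error.
    if not any(random.random() < p_detect for _ in range(N)):
        failed_rounds += 1

empirical = failed_rounds / trials          # close to (1 - 1/4)^8
exact = (1 - p_detect) ** N                 # = 0.75^8, about 0.100
bound = math.exp(-N * p_detect)             # Theorem 2 bound, about 0.135
```

As expected, the empirical failure rate tracks the exact value (1 − 2^(−(k−1)))^N and stays below the (looser) exponential bound.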

IV. EMPIRICAL ANALYSIS FOR MULTIPLE ERRORS
In the previous section, we have established strong theoretical support for detecting single reversible errors. At its heart has been the decomposition R̂ = R2 • E • R1 illustrated in Fig. 4. Reversibility and uniformly random inputs have subsequently allowed us to argue away the circuit portions R2 and R1 completely. In turn, we were able to focus exclusively on the error itself.
For more than one error, this is in general not an option anymore. While we can safely ignore circuit contributions before the first and after the last error, the circuit in between cannot be ignored, see Fig. 5. The relation between errors and intermediate circuit parts governs how likely it is to witness the overall error.
In this section, we analyze error accumulation effects in generic reversible circuits. To obtain guiding intuition, we will first isolate and discuss the two extreme cases. Independent errors (best case, see Section IV-A) and maximal masking (worst case, see Section IV-B) turn out to behave in a radically different fashion. Subsequent numerical studies demonstrate that typical error accumulation effects closely follow the best-case trajectory: Multiple errors are typically much easier to detect than a single error.

A. Best-case behavior: Commuting and independent errors
Let us first discuss l = 2 reversible errors of size k. An extension to multiple errors (l ≥ 3) and different sizes will be straightforward. Fig. 5 provides valuable guidance for potential best-case behavior. Suppose that one of the errors, say E2, can be pulled through the central circuit part R2 without affecting it: E2 • R2 = R2 • E2. If circuit and error commute in such a fashion, we can group both errors into a single layer and have effectively reduced the problem to the single-error case which we already understand:

R̂ = R3 • E2 • R2 • E1 • R1 = (R3 • R2) • (E2 • E1) • R1.

The only remaining question is: what is the probability of failing to detect the cumulative error E2 • E1 with a single random input? This failure probability is smallest if the two errors are independent in the sense that they act on disjoint sets of k bits each. A uniformly random input ⃗x ∼ Unif({0,1}^n) then ensures that the failure probability factorizes: Pr[(E2 • E1)(⃗x) = ⃗x] = Pr[E1(⃗x) = ⃗x] Pr[E2(⃗x) = ⃗x]. This argument readily extends to multiple errors (l ≥ 3). Taking the complement ensures

Pr[detect all l errors] = 1 − ∏_{i=1}^{l} Pr[E_i(⃗x) = ⃗x] ≥ 1 − (1 − 2^(−(k−1)))^l,   (4)

provided that all l errors commute with the circuit (first equality) and act on different subsets of k bits each (second inequality). Rel. (4) highlights that the probability of (best-case) error detection increases substantially with the number of errors l. Intuitively, this makes sense: more errors should be easier to detect. This insight has implications for the number N of random inputs that are required to detect l best-case errors of size k each.

Fig. 7. Worst-case scenario for two errors: Two bit-flip errors (k = 1) affect one control line of a (n − 1)-fold controlled NOT gate. These errors do not commute with the relevant circuit part R2. Quite the opposite: two errors with size k = 1 produce an effective error Ẽ of size k = n − 1. To make matters even worse, such a (n − 2)-fold controlled NOT error is extremely difficult to detect.
To pinpoint them, it is instructive to view a single simulation run as a biased coin toss: we detect a discrepancy with probability p = Pr[R(⃗x) ≠ R̂(⃗x)] ("heads") and fail to detect it with probability 1 − p = Pr[R(⃗x) = R̂(⃗x)] ("tails"). When attempting to detect a discrepancy, we input new randomly generated inputs until we find a mismatch. This is equivalent to tossing the biased coin until "heads" appears. The expected number of required coin tosses to achieve this goal is 1/p (geometric distribution). Together with Rel. (4), this analogy allows us to conclude that we expect to require

N_expect ≤ 1 / (1 − (1 − 2^(−(k−1)))^l)   (5)

random inputs to detect l commuting and independent errors of size k each. This bound is sharp. It holds with equality if each of the l errors is a worst-case error of size k, e.g. a (k − 1)-fold controlled NOT gate. We conclude this section with a simplified interpretation of Rel. (5). For small l (in comparison to 2^(k−1)), the claim is comparable to N_expect ≈ 2^(k−1)/l, which can also be observed in Fig. 3: the slopes of the solid lines match this estimate rather well whenever the number of errors l is small compared to 2^(k−1). Under best-case assumptions, detecting l size-k errors is l times easier than detecting a single error of the same size.
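Rel. (5) and its small-l approximation are simple to tabulate (a short sketch; the parameter values are illustrative):

```python
def expected_inputs(k, l):
    """Expected number of random inputs to detect l commuting,
    independent worst-case errors of size k (geometric distribution)."""
    p_miss_one = 1 - 2.0 ** -(k - 1)   # one input misses one error
    p_detect = 1 - p_miss_one ** l     # one input catches any of the l errors
    return 1 / p_detect

# For l small compared to 2^(k-1), this behaves like 2^(k-1) / l:
for l in (1, 2, 4):
    print(l, round(expected_inputs(5, l), 2), round(2 ** 4 / l, 2))
```

For k = 5, the exact values (16, ~8.3, ~4.4 inputs) stay close to the approximation 16/l until l becomes comparable to 2^(k−1) = 16.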

B. Worst-case: anti-commuting errors and masking
We expect that worst-case error accumulation should occur when errors and the relevant circuit portion do not commute at all ("anti-commutation"). If this is the case, the probability of detecting errors can become exponentially small in the total number of bits. We illustrate this by means of an example that is illustrated in Fig. 7: E1 and E2 are bit-flip errors (k = 1) that affect the first bit, while R2 : {0,1}^n → {0,1}^n is a (n − 1)-fold controlled NOT gate. It is easy to check that E2 • R2 • E1 = R2 • Ẽ, where Ẽ is a (n − 2)-fold controlled NOT gate that acts on all bits, except the very first one (k = n − 1). This is a single worst-case error of almost maximal size. Proposition 1 and Lemma 2 assert

Pr[R(⃗x) ≠ R̂(⃗x)] = Pr[Ẽ(⃗x) ≠ ⃗x] = 2^(−(n−2)).   (6)

This success probability is exponentially small in the total number of bits and we expect to require a total of 2^(n−2) random inputs in order to detect the discrepancy. Even worse error accumulation effects can occur for more errors (l ≥ 3) and/or larger error sizes (k ≥ 2). But already Rel. (6) is almost as bad as it can be. It is only a factor of two away from 2^(n−1) -- the absolute worst case for distinguishing any pair of reversible circuits, see Lemma 1 (iii).
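The collapse of two k = 1 errors into one large effective error can be verified exhaustively for, say, n = 8 (a sketch of the Fig. 7 scenario):

```python
from itertools import product

n = 8
flip_first = lambda x: (x[0] ^ 1,) + x[1:]         # bit-flip error, size k = 1

def mcx(x):
    """(n-1)-fold controlled NOT: flip the last bit iff the first
    n-1 bits are all 1 (the relevant circuit part R2)."""
    return x[:-1] + (x[-1] ^ int(all(x[:-1])),)

ideal = mcx                                         # R2
faulty = lambda x: flip_first(mcx(flip_first(x)))   # E2 . R2 . E1

strings = list(product((0, 1), repeat=n))
detecting = sum(ideal(x) != faulty(x) for x in strings)
# The two bit-flips act like a single (n-2)-fold controlled NOT:
# only 4 of the 2^n inputs detect it, i.e., probability 2^-(n-2).
```

The faulty composite differs from R2 only when all n − 2 remaining control lines carry a 1, exactly the behavior of the effective error Ẽ.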

C. Empirical studies
The multiple-error case is intricate by comparison, because the interplay between error (locations) and the underlying circuit geometry starts to matter. We have seen that this leads to strikingly different best-case (commuting errors, Sub. IV-A) and worst-case (anti-commuting errors, Sub. IV-B) behavior. Concrete problem instances fall into the wide range between these extreme cases. In this section, we employ numerics to delineate typical behavior.
We study the effect of size-k reversible errors in reversible circuits with n lines. For a given number of lines n, we construct random reversible circuits with g = O(n^2) arbitrary multi-controlled NOT gates. When injecting errors of size k, we always consider (k − 1)-fold controlled NOT gates, which represent the worst-case behavior, as discussed in Section III-C. Without loss of generality, we assume that these errors are geometrically local, i.e., they only affect neighbouring lines. All experiments were repeated 10 000 times with different random seeds in order to ensure adequate statistics.
First and foremost, we confirm interesting aspects of the theory developed in Sec. III. To this end, we considered the injection of a single size-k error and count the required number of simulations for detecting this error. The results are depicted in Fig. 8. In contrast to classical intuition, the probability of detecting a single reversible error of size k is (1) completely independent of the circuit under consideration, and (2) diminishes exponentially in the error size k, i.e., larger worst-case errors are harder to detect. This is in excellent agreement with Theorem 2. On average, the required simulations exactly follow the predicted 2^(k−1) trajectory with no apparent variation. Additionally, the distribution of results is the same when simulating the circuits R = R2 • R1 and R̂ = R2 • E • R1 as when only simulating the error E itself.
The next set of numerical experiments pilots us into more interesting territory, namely the multiple-error case. We have already teased the results in the introduction and summarized them in Fig. 3. The averaged number of inputs highlights an excellent agreement between the observed behavior and the best-case scenario discussed in Section IV-A. The deviation from this optimum for higher numbers of errors can be explained by accumulation effects of errors not acting independently (see Section IV-B).

Fig. 9. Comparison of worst-case and average-case errors: performed simulations (x-axis) vs. cumulative distribution function (cdf) for detecting l = 1, 2, 4, 6 reversible errors of size k = 5 (y-axis). The red curve corresponds to injecting worst-case errors, while the blue curve delineates the cdf for detecting randomly generated errors of the same size. This goes to show that average-case errors require far fewer simulations than worst-case ones.
Last but not least, we emphasize that -- up to this point -- theoretical and empirical results have been contingent on a worst-case assumption: each injected size-k error is a (k − 1)-fold controlled NOT gate. In a final series of evaluations, we analyzed the success probability after conducting a certain number of simulations when choosing errors at random. More precisely, each size-k error is a randomly selected gate sequence with the additional constraint that none of the k relevant lines remains unaffected (otherwise, the result would be an error of size (at most) k − 1). We expect that this error model captures typical behavior in a more accurate fashion. The results are shown in Fig. 9 and highlight a considerable discrepancy between random (blue) and worst-case (red) errors. This is not at all surprising. Random errors of size k tend to factorize into several independent contributions and the probability of detecting them with random inputs factorizes accordingly, see Sub. IV-A. Such factorizations lead to an increased error detection probability within (very) few simulation runs.

V. CONCLUSION
In this work, we have shown the impact of the reversible circuit paradigm on the probability of detecting errors in circuits. Our rigorous analysis shows that, as opposed to classical/irreversible circuits, reversible circuits can never mask single errors, and that the probability of detecting a single reversible error only depends on the error's size and not at all on the surrounding circuit. Empirical evaluations have shown that, in the case of multiple errors, the detection probability is very close to the theoretical best case. Finally, we have observed that, once the assumption of worst-case errors is dropped, the probability of detecting errors increases even further.