Stochastic Quantum Zeno by Large Deviation Theory

Quantum measurements are crucial to observe the properties of a quantum system, which however unavoidably perturb its state and dynamics in an irreversible way. Here we study the dynamics of a quantum system while being subject to a sequence of projective measurements applied at random times. In the case of independent and identically distributed intervals of time between consecutive measurements, we analytically demonstrate that the survival probability of the system to remain in the projected state assumes a large-deviation (exponentially decaying) form in the limit of an infinite number of measurements. This allows us to estimate the typical value of the survival probability, which can therefore be tuned by controlling the probability distribution of the random time intervals. Our analytical results are numerically tested for Zeno-protected entangled states, which also demonstrates that the presence of disorder in the measurement sequence further enhances the survival probability when the Zeno limit is not reached (as it happens in experiments). Our studies provide a new tool for protecting and controlling the amount of quantum coherence in open complex quantum systems by means of tunable stochastic measurements.


Introduction
A striking aspect of the dynamical evolution of quantum systems that distinguishes it from that of classical systems is the strong influence on the evolution caused by measurements performed on the system. Indeed, in the extreme case of a frequent enough series of measurements projecting the system back to the initial state, its dynamical evolution gets completely frozen, i.e., the survival probability to remain in the initial state approaches unity in the limit of an infinite number of measurements. This effect, known as the quantum Zeno effect (QZE), was first discussed in a seminal paper by Misra and Sudarshan in 1977 [1], and can be understood intuitively as resulting from the collapse of the wave function corresponding to the initial state due to the process of measurement. It was later explored experimentally in systems of ions [2], polarized photons [3], cold atoms [4], and dilute Bose-Einstein condensed gases [5]. In noisy quantum systems, both the Zeno effect and the acceleration due to an anti-Zeno effect [6,7] have been demonstrated, and have also been proposed for the thermodynamical control of quantum systems [8] and for quantum computation [9]. The so-called quantum Zeno dynamics (QZD) that generalizes the QZE is obtained when one applies frequent projective measurements onto a multi-dimensional Hilbert subspace [10]. In this case, the system does evolve away from its initial state, but nevertheless remains confined in the subspace defined by the projection itself [11]. The QZD has been recently demonstrated in an experiment with a rubidium Bose-Einstein condensate in a five-level Hilbert space [12]. The evolution of physical observables can also be slowed down by frequent measurements (operator QZE) even while the quantum state changes randomly with time [13].
Additionally, the Zeno phenomena have assumed particular relevance in applications owing to the possibility of quantum control, whereby specific quantum states (including entangled ones) may be protected from decoherence by means of projective measurements [14,15]. Indeed, QZE is also a physical consequence of the statistical indistinguishability of neighboring quantum states [16]. Very recently, it has been shown that frequent observation of a small part of a quantum system turns its dynamics from very simple to an exponentially complex one [17], thereby paving the way for universal quantum computation.
While in its original QZE formulation the measurement sequence is equally spaced in time, later treatments have considered the case of measurements randomly spaced in time (stochastic QZE) [18]. In the latter case, the survival probability in the projected state becomes itself a random variable that takes on different values corresponding to different realizations of the measurement sequence. In particular, one would expect that the mean of the survival probability, obtained by considering an average over a large (ideally infinite) number of realizations of the measurement sequence, leads to the result obtained for an evenly spaced sequence, under some constraints (e.g., the mean of the time interval between consecutive measurements is finite). An interesting question, relevant both theoretically and experimentally, naturally emerges: Is it possible to have realizations of the measurement sequence that give values of the survival probability deviating significantly from the mean? How typical/atypical are those realizations? Are there ways to quantify the probability measures of such realizations? These questions assume particular importance in devising experimental protocols that may, on demand, efficiently slow down or speed up the transitions of a quantum system between its possible states.
In this work, by exploiting tools from probability theory, we propose a framework that allows us to effectively address the questions posed above. In particular, we adapt the well-established theory of large deviations (LD) to quantify the dependence of the survival probability on the realization of the measurement sequence, in the case of independent and identically distributed (i.i.d.) time intervals between consecutive measurements. Our goal is twofold: 1) adapt and apply the LD theory to discuss the QZE by transferring tools and ideas from classical probability theory to the arena of quantum Zeno phenomena, 2) analytically predict the corresponding survival probability and exploit it for a new type of control based on the stochastic features of the applied measurements.
The LD theory concerns the asymptotic exponential decay of probabilities associated with large fluctuations of stochastic variables. Originally developed in the realm of probability theory [19,20,21,22], the LD formalism has attracted increasing interest in recent years, leading to several studies of large deviations in both classical and quantum systems. In the latter case, the LD formalism has been discussed in the context of quantum gases [23], quantum spin systems [24], and quantum information theory [25], among others. An interesting recent application pursued in Refs. [26,27,28,29] has invoked the LD theory to develop a thermodynamic formalism for the study of quantum jump trajectories [30,31] of open quantum systems [32,33,34]. Indeed, these works have addressed thermodynamic issues for quantum systems, e.g., quantum stochastic thermodynamics for the quantum trajectories of a continuously monitored forced harmonic oscillator coupled to a thermal reservoir [35], the work statistics in a driven two-level system coupled to a heat bath [36], and the stochastic thermodynamics of a quantum heat engine [37].
Here, we consider a general quantum system with unitary dynamical evolution that is subject to a sequence of measurements projecting it into a fixed (initial) state. These measurements are separated by time intervals that are i.i.d. random variables. We analytically show that in the limit of a large number m of measurements, the distribution of the survival probability to remain in the initial state assumes a large-deviation form, namely, a profile decaying exponentially in m with a positive multiplying factor, the so-called rate function, which is a function only of the survival probability. The value at which the rate function attains its minimum gives, out of all possible outcomes, the most probable or typical value of the survival probability. Our analytical results are supported by numerical studies in the case of Zeno-protected entangled states. They show that the presence of disorder in the sequence of time intervals between consecutive measurements is deleterious in reaching the Zeno limit. Nevertheless, the disorder does enhance the survival probability when the latter is not exactly one, which, interestingly enough, corresponds to the typical experimental situation.
The layout of the paper is as follows. In Section 2, we introduce our framework applied to a generic quantum system with unitary dynamics and subjected to a sequence of projective measurements performed at random times. We then discuss in Section 3 the statistics of the survival probability of the system to remain in an initial pure state. In Section 4, we consider the case of a d-dimensional Bernoulli probability distribution for the time intervals between consecutive measurements, and analyze the statistics of the survival probability in the limit of a large number of measurements by means of the LD formalism. We also generalize our results to the case of a continuous probability distribution. In Section 5, we confirm our analytical results by numerical studies on Zeno-protected entangled states for a generic three-level quantum system. In Section 6, we discuss how to analytically recover the exact quantum Zeno limit. Conclusions and outlook follow in Section 7. Some of the technical details of our analysis are provided in the four Appendices.

The Model
Consider a quantum mechanical system described by a finite-dimensional Hilbert space $\mathcal{H}$, which may be taken to be a direct sum of $r$ orthogonal subspaces $\mathcal{H}^{(k)}$, as $\mathcal{H} = \bigoplus_{k=1}^{r} \mathcal{H}^{(k)}$. We assign to each subspace a projection operator $\mathrm{P}^{(k)}$, i.e., $\mathrm{P}^{(k)}\mathcal{H} = \mathcal{H}^{(k)}$. An initial quantum state described by the density matrix $\rho_0$ undergoes a unitary dynamics to evolve in time $t$ to $\exp(-iHt)\,\rho_0\,\exp(iHt)$, where $H$ is a generic system Hamiltonian that, in general, does not commute with the projectors $\mathrm{P}^{(k)}$. In this work, we set the reduced Planck constant to unity.
In our model, starting with a $\rho_0$ that belongs to one of the subspaces, say subspace $\bar r \in \{1, 2, \dots, r\}$, so that $\rho_0 = \mathrm{P}^{(\bar r)}\rho_0\,\mathrm{P}^{(\bar r)}$ and $\mathrm{Tr}[\rho_0\,\mathrm{P}^{(\bar r)}] = 1$, we subject the system to an arbitrary but fixed number $m$ of consecutive measurements separated by time intervals $\mu_j > 0$, with $j = 1, \dots, m$. During each interval $\mu_j$, the system follows a unitary evolution described by the Hamiltonian $H$, while the measurement corresponds to applying the projection operator $\mathrm{P}^{(\bar r)}$. We take the $\mu_j$'s to be independent and identically distributed (i.i.d.) random variables sampled from a given distribution $p(\mu)$, with the normalization $\int d\mu\, p(\mu) = 1$. We assume that $p(\mu)$ has a finite mean, denoted by $\overline{\mu}$. For simplicity, in the following we represent $\mathrm{P}^{(\bar r)}$ and $\mathcal{H}^{(\bar r)}$ by $\mathrm{P}$ and $\mathcal{H}_{\mathrm{P}}$, respectively. The (unnormalized) density matrix at the end of the evolution for a total time
$$T \equiv \sum_{j=1}^{m} \mu_j, \qquad (1)$$
corresponding to a given realization of the measurement sequence $\{\mu_j\} \equiv \{\mu_j;\ j = 1, 2, \dots, m\}$, is given by
$$\rho(\{\mu_j\}) = \mathcal{W}\,\rho_0\,\mathcal{W}^{\dagger}, \qquad (2)$$
where we have defined
$$\mathcal{W} \equiv \mathrm{P}\,U(\mu_m)\,\mathrm{P}\,U(\mu_{m-1}) \cdots \mathrm{P}\,U(\mu_1), \qquad (3)$$
and
$$U(\mu_j) \equiv \exp(-iH\mu_j). \qquad (4)$$
To obtain Eq. (2), we have used $\mathrm{P}^{\dagger} = \mathrm{P}$, $\rho_0 = \mathrm{P}\rho_0\mathrm{P}$, and $\mathrm{P}^2 = \mathrm{P}$. Note that $T$ is a random variable that depends on the realization of the sequence $\{\mu_j\}$. The survival probability, namely, the probability that the system belongs to the subspace $\mathcal{H}_{\mathrm{P}}$ at the end of the evolution, is given by
$$\mathcal{P}(\{\mu_j\}) = \mathrm{Tr}\big[\rho(\{\mu_j\})\big], \qquad (5)$$
while the final (normalized) density matrix is
$$\rho_{\mathrm{fin}} = \frac{\rho(\{\mu_j\})}{\mathcal{P}(\{\mu_j\})}. \qquad (6)$$
Note that the survival probability $\mathcal{P}(\{\mu_j\})$ depends on the system Hamiltonian $H$, the initial density matrix $\rho_0$, and also on the probability distribution $p(\mu)$.
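As an illustration (not part of the original analysis), the protocol of Eqs. (2) and (5) can be simulated directly. The sketch below assumes a hypothetical two-level system with $H = \Omega\sigma_x$ and projector $\mathrm{P} = |0\rangle\langle 0|$, for which the survival probability reduces to a product of $\cos^2(\Omega\mu_j)$ factors; all parameter values are illustrative.

```python
import numpy as np

# Hypothetical two-level example: H = Omega * sigma_x, P = |0><0|, rho0 = |0><0|.
Omega = 1.0
H = Omega * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)  # sigma_x
P = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)          # projector |0><0|
rho0 = P.copy()                                                # initial pure state

def evolve_U(mu):
    """Unitary U(mu) = exp(-i H mu), built from the eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * mu)) @ V.conj().T

def survival_probability(mus):
    """P({mu_j}) = Tr[W rho0 W^dagger] with W = P U(mu_m) ... P U(mu_1)."""
    rho = rho0.copy()
    for mu in mus:
        U = evolve_U(mu)
        rho = P @ U @ rho @ U.conj().T @ P   # unitary stretch, then projection
    return np.trace(rho).real

rng = np.random.default_rng(0)
mus = rng.choice([0.05, 0.1], size=100, p=[0.5, 0.5])  # i.i.d. Bernoulli intervals
p_surv = survival_probability(mus)
```

For this particular choice the result can be checked against the closed form $\prod_j \cos^2(\Omega\mu_j)$, since $\langle 0|e^{-i\Omega\mu\sigma_x}|0\rangle = \cos(\Omega\mu)$.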

Statistics of the survival probability
In this section, we obtain the distribution of the survival probability $\mathcal{P}(\{\mu_j\})$ with respect to different realizations of the sequence $\{\mu_j\}$. We suppose that the system is initially in a pure state $|\psi_0\rangle$ belonging to $\mathcal{H}_{\mathrm{P}}$, so that $\rho_0 = |\psi_0\rangle\langle\psi_0|$, and assume that the projection operator is $\mathrm{P} \equiv |\psi_0\rangle\langle\psi_0|$. In this way, starting with a pure state, the system evolves according to the following repetitive sequence of events: unitary evolution for a random interval, followed by a measurement that projects the evolved state onto the initial state. Note that our analysis can be easily generalized to the case of initially mixed states and to other choices of the projection operator. The survival probability $\mathcal{P}(\{\mu_j\})$ can be evaluated by using Eq. (5) to get
$$\mathcal{P}(\{\mu_j\}) = \prod_{j=1}^{m} q(\mu_j), \qquad (7)$$
where we have defined the probability $q(\mu_j)$ as
$$q(\mu_j) \equiv \big|\langle\psi_0|\,e^{-iH\mu_j}\,|\psi_0\rangle\big|^2, \qquad (8)$$
which takes on different values depending on the random numbers $\mu_j$. Note that, being a probability, the possible values of $q(\mu)$ lie in the range $0 < q(\mu) \le 1$. The distribution of $q(\mu_j)$ is obtained as
$$p_q(q) = \int d\mu\, p(\mu)\, \delta\big(q - q(\mu)\big), \qquad (9)$$
where Eq. (8) gives $q(\mu)$ as a function of $\mu$. From Eq. (7), one then derives the distribution of $\mathcal{P}$ as
$$P(\mathcal{P}) = \int \prod_{j=1}^{m} d\mu_j\, p(\mu_j)\ \delta\Big(\mathcal{P} - \prod_{j=1}^{m} q(\mu_j)\Big). \qquad (10)$$
In particular, one may be interested in the average value of the survival probability, where the average corresponds to repeating a large number of times the protocol of $m$ consecutive measurements interspersed with unitary dynamics for random intervals $\mu_j$. One gets
$$\langle\mathcal{P}\rangle = \langle q(\mu)\rangle^m, \qquad (11)$$
with
$$\langle q(\mu)\rangle = \int d\mu\, p(\mu)\, q(\mu). \qquad (12)$$
Here and in the following, we will use angular brackets to denote averaging with respect to different realizations of the sequence $\{\mu_j\}$. Additionally, let us note that, writing $q(\mu)$ as
$$q(\mu) = 1 - \delta(\mu), \qquad (13)$$
we have
$$\langle\mathcal{P}\rangle = \big[1 - \langle\delta(\mu)\rangle\big]^m, \qquad (14)$$
with
$$\langle\delta(\mu)\rangle = \int d\mu\, p(\mu)\, \delta(\mu). \qquad (15)$$
In particular, considering $\mu \ll 1$, one has, to leading order in $\mu^2$, the result
$$q(\mu) \simeq 1 - \frac{\mu^2}{\tau_Z^2}, \qquad (16)$$
where $\tau_Z$ is the so-called Zeno time [11]:
$$\tau_Z \equiv \frac{1}{\Delta H}, \qquad \Delta H^2 \equiv \langle\psi_0|H^2|\psi_0\rangle - \langle\psi_0|H|\psi_0\rangle^2. \qquad (17)$$
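The concentration of $\ln\mathcal{P} = \sum_j \ln q(\mu_j)$ around $m\,\langle\ln q\rangle$, which underlies the large-deviation analysis of the next section, can be observed numerically. The sketch below reuses the hypothetical two-level example $q(\mu) = \cos^2(\Omega\mu)$ with an illustrative two-valued $\mu$-distribution; it is not taken from the paper's numerics.

```python
import numpy as np

# q(mu) = |<psi0| e^{-iH mu} |psi0>|^2 = cos^2(Omega*mu) for H = Omega sigma_x,
# |psi0> = |0>.  P({mu_j}) is the product of the q(mu_j), Eq. (7).
Omega = 1.0
q = lambda mu: np.cos(Omega * mu) ** 2

rng = np.random.default_rng(1)
m, runs = 500, 2000
# i.i.d. intervals drawn from a two-valued (Bernoulli) distribution
mus = rng.choice([0.05, 0.15], size=(runs, m), p=[0.7, 0.3])

log_P = np.sum(np.log(q(mus)), axis=1)   # ln P({mu_j}) for each realization
# typical value: m * <ln q>, the log-average of q over the mu-distribution
typical = m * (0.7 * np.log(q(0.05)) + 0.3 * np.log(q(0.15)))
spread = np.std(log_P / m)               # shrinks as 1/sqrt(m)
```

Each realization's $\ln\mathcal{P}/m$ clusters tightly around $\langle\ln q\rangle$, anticipating the most probable value derived in Eq. (33).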

Large deviation formalism for the survival probability
Let us now employ the LD formalism to discuss the statistics of the survival probability $\mathcal{P}(\{\mu_j\})$ in the limit $m \to \infty$. In this limit, Eq. (1) gives
$$\lim_{m\to\infty} \frac{T}{m} = \lim_{m\to\infty} \frac{1}{m}\sum_{j=1}^{m} \mu_j = \overline{\mu}, \qquad (19)$$
where we have used the fact that the $\mu_j$'s are i.i.d. random variables and that $\overline{\mu}$ is a finite number.
To proceed further, let us consider $p(\mu)$ to be a $d$-dimensional Bernoulli distribution, namely, $\mu$ takes on $d$ possible discrete values $\mu^{(1)}, \mu^{(2)}, \dots, \mu^{(d)}$ with corresponding probabilities $p^{(1)}, p^{(2)}, \dots, p^{(d)}$, such that $\sum_{\alpha=1}^{d} p^{(\alpha)} = 1$. The average value of the survival probability is then obtained by using Eq. (12) as
$$\langle\mathcal{P}\rangle = \Big[\sum_{\alpha=1}^{d} p^{(\alpha)}\, q(\mu^{(\alpha)})\Big]^m. \qquad (20)$$
In order to introduce the LD formalism for the survival probability, first consider
$$L(\{\mu_j\}) \equiv \ln\mathcal{P}(\{\mu_j\}) = \sum_{j=1}^{m} \ln q(\mu_j) = \sum_{\alpha=1}^{d} n_\alpha \ln q(\mu^{(\alpha)}), \qquad (21)$$
where $n_\alpha$ is the number of times $\mu^{(\alpha)}$ occurs in the sequence $\{\mu_j\}$. Noting that $L(\{\mu_j\})$ is a sum of i.i.d. random variables, its probability distribution is given by
$$P(L) = \sum_{n_1,\dots,n_d:\ \sum_\alpha n_\alpha = m} \frac{m!}{\prod_\alpha n_\alpha!}\, \prod_{\alpha=1}^{d} \big(p^{(\alpha)}\big)^{n_\alpha}\ \delta\Big(L - \sum_\alpha n_\alpha \ln q(\mu^{(\alpha)})\Big) = {\sum_{\{n_\alpha\}}}' \frac{m!}{\prod_\alpha n_\alpha!}\, \prod_{\alpha=1}^{d} \big(p^{(\alpha)}\big)^{n_\alpha}, \qquad (22)$$
where, as indicated, the summation in the first equality is over all possible values of $n_1, n_2, \dots, n_d$ subject to the constraint $\sum_{\alpha=1}^{d} n_\alpha = m$. In the primed sum of the second equality, the $n_\alpha$'s are additionally such that
$$L = \sum_{\alpha=1}^{d} n_\alpha \ln q(\mu^{(\alpha)}). \qquad (23)$$
Starting from Eq. (22), and considering the limit $m \to \infty$, a straightforward manipulation (see Appendix A) leads to the following LD form for the probability distribution $P(L/m)$:
$$P(L/m) \approx \exp\big(-m\, I(L/m)\big), \qquad (25)$$
where the function $I(x)$, i.e., the so-called rate function [22], is given by
$$I(x) = \sum_{\alpha=1}^{d} f(\mu^{(\alpha)}) \ln\!\Big(\frac{f(\mu^{(\alpha)})}{p^{(\alpha)}}\Big), \qquad (26)$$
with the fractions $f(\mu^{(\alpha)}) \equiv n_\alpha/m$ satisfying $\sum_\alpha f(\mu^{(\alpha)}) = 1$ and $x = \sum_\alpha f(\mu^{(\alpha)}) \ln q(\mu^{(\alpha)})$. The approximate symbol $\approx$ in Eq. (25) stands for equality up to subexponential corrections in $m$. It is evident from Eq. (26) that the rate function $I(x)$ is the relative entropy or the Kullback-Leibler distance between the set of probabilities $\{f(\mu^{(\alpha)})\}$ and the set $\{p^{(\alpha)}\}$; it has the property of being positive and convex, with a single non-trivial minimum [38]. Equation (25) implies that the value at which the function $I(L/m)$ is minimized corresponds to the most probable value $\overline{L}$ of $L$ as $m \to \infty$. Using $\partial I(L/m)/\partial \ln q(\mu^{(\alpha)})\big|_{L=\overline{L}} = 0$, $\alpha = 1, \dots, d$, we get (see Appendix B)
$$\overline{L} = m \sum_{\alpha=1}^{d} p^{(\alpha)} \ln q(\mu^{(\alpha)}). \qquad (28)$$
As for the distribution of the survival probability, one may obtain an LD form for it in the following way:
$$P(\mathcal{P}) = \int dL\, P(L)\, \delta\big(\mathcal{P} - e^{L}\big) \approx \int dL\, e^{-m I(L/m)}\, \delta\big(\mathcal{P} - e^{L}\big) \approx \exp\big(-m\, J(\mathcal{P})\big), \qquad (31)$$
where in the third step we have considered large $m$ and have used Eq. (25), while in the last step we have used the saddle-point method to evaluate the integral. We thus have
$$\lim_{m\to\infty} -\frac{\ln P(\mathcal{P})}{m} = J(\mathcal{P}); \qquad J(\mathcal{P}) \equiv \min_{L:\ L = m\ln\mathcal{P}} I(L/m). \qquad (32)$$
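As an illustration of the rate function (26) in the simplest case $d = 2$, the constraint on $x = L/m$ fixes the fraction $f$ of intervals equal to $\mu^{(1)}$, and $I(x)$ reduces to the Kullback-Leibler distance between $\{f, 1-f\}$ and $\{p^{(1)}, 1-p^{(1)}\}$. The numerical values of $q$ and $p^{(1)}$ below are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

# Rate function I(x) of Eq. (26) for d = 2: x = f*ln(q1) + (1-f)*ln(q2)
# fixes f, and I(x) is the KL distance between {f, 1-f} and {p1, 1-p1}.
q1, q2, p1 = 0.99, 0.90, 0.6   # illustrative per-interval survival probabilities

def rate_function(x):
    lq1, lq2 = np.log(q1), np.log(q2)
    f = (x - lq2) / (lq1 - lq2)          # solve the linear constraint for f
    assert 0.0 <= f <= 1.0, "x lies outside the support of L/m"
    total = 0.0
    for fa, pa in ((f, p1), (1.0 - f, 1.0 - p1)):
        if fa > 0.0:                     # f ln f -> 0 as f -> 0, by continuity
            total += fa * np.log(fa / pa)
    return total

# the most probable value of L/m: x_typ = sum_a p^(a) ln q(mu^(a)), cf. Eq. (28)
x_typ = p1 * np.log(q1) + (1.0 - p1) * np.log(q2)
```

The rate function vanishes exactly at the typical value $x_{\mathrm{typ}}$ (where $f = p^{(1)}$) and is strictly positive on either side of it, reflecting the exponential suppression of atypical realizations.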
The value at which $J(\mathcal{P})$ takes on its minimum gives the most probable value $\overline{\mathcal{P}}$ of the survival probability in the limit $m \to \infty$, which may also be obtained by utilizing the relationship between $\overline{L}$ and $\overline{\mathcal{P}}$; one gets
$$\overline{\mathcal{P}} = e^{\overline{L}} = \exp\Big(m \sum_{\alpha=1}^{d} p^{(\alpha)} \ln q(\mu^{(\alpha)})\Big) = \prod_{\alpha=1}^{d} q(\mu^{(\alpha)})^{\,m\,p^{(\alpha)}}, \qquad (33)$$
which may be compared with the average value (20). In other words, while the average value $\langle\mathcal{P}\rangle$ is determined by the logarithm of the averaged $q(\mu^{(\alpha)})$, the most probable value $\overline{\mathcal{P}}$ is given by the average performed on the logarithm of $q(\mu^{(\alpha)})$. The latter is the so-called log-average or geometric mean of the quantity $q(\mu^{(\alpha)})$ with respect to the $\mu$-distribution. A straightforward generalization of Eq. (33) to a generic (continuous) $\mu$-distribution is
$$\overline{\mathcal{P}} = \exp\Big(m \int d\mu\, p(\mu) \ln q(\mu)\Big), \qquad (34)$$
while that for the average reads
$$\langle\mathcal{P}\rangle = \exp\Big(m \ln \int d\mu\, p(\mu)\, q(\mu)\Big). \qquad (35)$$
Using Jensen's inequality, namely, $\langle\exp(x)\rangle \ge \exp(\langle x\rangle)$, it immediately follows that
$$\langle\mathcal{P}\rangle \ge \overline{\mathcal{P}}, \qquad (36)$$
with the equality holding only when there is no randomness in $\mu$ (that is, when only a single value of $\mu$ exists). The difference between $\overline{\mathcal{P}}$ and $\langle\mathcal{P}\rangle$ can be estimated in experiments in the following way. First, we perform a large number $m$ of projective measurements on our quantum system. The value of the survival probability to remain in the initial state that is measured in a single experimental run will very likely be close to $\overline{\mathcal{P}}$, with deviations that decrease fast with increasing $m$. On the other hand, averaging the survival probability over a large (ideally infinite) number of experimental runs will yield $\langle\mathcal{P}\rangle$. All the derivations above were based on the assumption of a fixed number $m$ of measurements, so that the total time $T$ (see Eq. (1)) is a quantity fluctuating between different realizations of the measurement sequence. To obtain the LD formalism, we eventually let $m$ approach infinity, which in turn leads to an infinite $T$ (unless $\overline{\mu} \to 0$); see Eq. (19). We now consider the situation where we keep the total time $T$ fixed, and let $m$ fluctuate between realizations of the measurement sequence. In this case, in contrast to Eq. (22), we have the joint probability distribution
$$P(L, T) = \sum_{m} \sum_{n_1,\dots,n_d:\ \sum_\alpha n_\alpha = m} \frac{m!}{\prod_\alpha n_\alpha!}\, \prod_{\alpha=1}^{d} \big(p^{(\alpha)}\big)^{n_\alpha}\ \delta\Big(L - \sum_\alpha n_\alpha \ln q(\mu^{(\alpha)})\Big)\, \delta\Big(T - \sum_\alpha n_\alpha\, \mu^{(\alpha)}\Big). \qquad (37)$$
We thus have to find the set of $n_\alpha$'s, which we now refer to as $\overline{n}_\alpha$'s, such that the following conditions are satisfied:
$$\sum_{\alpha=1}^{d} \overline{n}_\alpha = m, \qquad \sum_{\alpha=1}^{d} \overline{n}_\alpha \ln q(\mu^{(\alpha)}) = L, \qquad \sum_{\alpha=1}^{d} \overline{n}_\alpha\, \mu^{(\alpha)} = T. \qquad (38\text{--}40)$$
The above equations have a unique solution only for $d = 2$, that is, when one has a (two-valued) Bernoulli distribution. In this case, the solutions satisfy
$$\overline{n}_1 = \frac{L - m \ln q(\mu^{(2)})}{\ln q(\mu^{(1)}) - \ln q(\mu^{(2)})} = \frac{T - m\,\mu^{(2)}}{\mu^{(1)} - \mu^{(2)}}, \qquad \overline{n}_2 = m - \overline{n}_1, \qquad (41)$$
which may be solved for $m$, for given values of $L$ and $T$, and then used in Eq. (37) to determine $P(L, T)$. In the limit $m \to \infty$, provided the mean $\overline{\mu}$ of $p(\mu)$ exists, Eq. (1) together with the law of large numbers ‡ gives
$$T = m\,\overline{\mu}. \qquad (42)$$
In this case, for every $d$, one obtains an LD form for $P(L, T)$ (see Appendix C):
$$P(L, T) \approx \exp\big(-m\, I(L/m,\, T/m)\big), \qquad (43)$$
where
$$I(x, y) = \sum_{\alpha=1}^{d} f(\mu^{(\alpha)}) \ln\!\Big(\frac{f(\mu^{(\alpha)})}{p^{(\alpha)}}\Big), \qquad (44)$$
with the fractions $f(\mu^{(\alpha)})$ now subject to the constraints $\sum_\alpha f(\mu^{(\alpha)}) = 1$, $x = \sum_\alpha f(\mu^{(\alpha)}) \ln q(\mu^{(\alpha)})$, and $y = \sum_\alpha f(\mu^{(\alpha)})\, \mu^{(\alpha)}$. The rate function (44) is related to the rate function (26) as
$$I(x, \overline{\mu}) = I(x), \qquad (45)$$
and, similarly to Eq. (31), one has
$$P(\mathcal{P}, T) \approx \exp\big(-m\, J(\mathcal{P},\, T/m)\big), \qquad \text{where} \quad J(\mathcal{P},\, T/m) \equiv \min_{L:\ L = m\ln\mathcal{P}} I(L/m,\, T/m). \qquad (46,\,47)$$
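The gap between the average $\langle\mathcal{P}\rangle$ of Eq. (35) and the typical value $\overline{\mathcal{P}}$ of Eq. (34), together with the Jensen inequality between them, can be checked numerically. The two-valued $q$ and the probability $p^{(1)}$ below are illustrative placeholders.

```python
import numpy as np

# <P> vs the typical (most probable) survival probability for a d = 2
# Bernoulli mu-distribution; q1, q2 are per-interval survival probabilities.
q1, q2, p1 = 0.99, 0.90, 0.6
m, runs = 100, 20000

P_avg = (p1 * q1 + (1 - p1) * q2) ** m                          # Eq. (35)
P_typ = np.exp(m * (p1 * np.log(q1) + (1 - p1) * np.log(q2)))   # Eq. (34)

rng = np.random.default_rng(3)
qs = rng.choice([q1, q2], size=(runs, m), p=[p1, 1 - p1])
P_runs = np.prod(qs, axis=1)   # single-run survival probabilities, Eq. (7)
# the sample mean approaches <P>, while the sample median sits near P_typ
```

The Monte Carlo runs make the distinction concrete: rare realizations with many large-$q$ intervals dominate the mean, so the run-averaged value exceeds what a single typical run measures.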
As in Eq. (34), the most probable value of the survival probability for a continuous $\mu$-distribution is given by
$$\overline{\mathcal{P}} = \exp\Big(\frac{T}{\overline{\mu}} \int d\mu\, p(\mu) \ln q(\mu)\Big), \qquad (48)$$
where we have used Eq. (42) to express $m$ as $T/\overline{\mu}$.

Numerical results
In this section, in order to test our analytical results, we numerically simulate the dynamical evolution of a generic $n$-level quantum system governed by the following Hamiltonian:
$$H = \sum_{j=1}^{n} \omega_j\, |j\rangle\langle j| + \sum_{j=1}^{n-1} \Omega\, \big(|j\rangle\langle j+1| + |j+1\rangle\langle j|\big). \qquad (51)$$
‡ The law of large numbers states that the sum of a large number $N$ of i.i.d. random variables, when scaled by $N$, tends to the mean of the underlying common distribution with probability one as $N$ approaches infinity [39].
Here, $|j\rangle \equiv |0 \dots 1 \dots 0\rangle$, with 1 in the $j$-th place and 0 otherwise, denotes the state of the $j$-th level ($j = 1, \dots, n$), with corresponding energy $\omega_j$, while $\Omega$ is the coupling rate between nearest-neighbor levels. For simplicity, we take $n = 3$, $\Omega = 2\pi f$, with $f = 100$ kHz, and $\omega_j = 2\pi f_j$, with $f_1 = 30$ kHz, $f_2 = 20$ kHz and $f_3 = 10$ kHz. We choose the initial state $|\psi_0\rangle$ to be the pure state
$$|\psi_0\rangle = \frac{1}{\sqrt{2}}\big(|100\rangle + |001\rangle\big), \qquad (52)$$
which is entangled with respect to the bipartition 1|23. Under these conditions, we obtain the survival probability $\mathcal{P}$ as a function of the number of measurements $m$ for a $d$-dimensional Bernoulli distribution of the $\mu_j$'s, with $d = 2, 3, 4$; see Fig. 1. We find perfect agreement between the numerical evaluation of Eq. (5) for a typical realization of the measurement sequence $\{\mu_j\}$ and the asymptotic most probable values obtained by using Eq. (33). Moreover, a comparison between these two quantities for $d = 2$, $m = 2000$, and 100 typical realizations of the measurement sequence is shown in Fig. 2. Note that in the case of a two-valued Bernoulli distribution with probability $p$, the quantity $\overline{\mathcal{P}}$ depends linearly on $p$; see Fig. 3. Furthermore, to test our analytical predictions for a continuous $\mu$-distribution, we have considered the following power-law distribution for the $\mu_j$'s:
$$p(\mu) = \frac{\alpha}{\mu_0} \Big(\frac{\mu}{\mu_0}\Big)^{-(1+\alpha)},$$
with $\alpha > 0$ and $\mu \in [\mu_0, \infty)$. The corresponding survival probability, shown in Fig. 4, further confirms our analytical predictions. Note that the decrease of fluctuations around the most probable value with increasing $\alpha$ is consistent with the concomitant smaller fluctuations of $\mu$ around the average $\overline{\mu}$. In all the cases discussed here, we observe excellent agreement with our estimate of the most probable value based on the LD theory. Moreover, our analytical predictions are numerically confirmed also for any coherent superposition state $|\psi_0\rangle \equiv a_1|100\rangle + a_2|001\rangle$ with $|a_1|^2 + |a_2|^2 = 1$.
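The setup just described can be reproduced in a few lines. The sketch below builds the Hamiltonian (51) with the stated parameters and the initial state (52), samples Bernoulli-distributed intervals (the interval values $\mu^{(1,2)}$ and $p^{(1)}$ chosen here are illustrative, not the paper's), and compares a single-run survival probability with the most probable value of Eq. (33).

```python
import numpy as np

# Three-level Hamiltonian (51): omega_j = 2*pi*f_j on the diagonal,
# nearest-neighbor coupling Omega = 2*pi*f.
f = 100e3
fj = np.array([30e3, 20e3, 10e3])
Omega = 2 * np.pi * f
omega = 2 * np.pi * fj

n = 3
H = np.diag(omega).astype(complex)
for j in range(n - 1):
    H[j, j + 1] = H[j + 1, j] = Omega

# initial state (52): (|100> + |001>)/sqrt(2), i.e. (|1> + |3>)/sqrt(2)
psi0 = np.array([1.0, 0.0, 1.0], dtype=complex) / np.sqrt(2)

w, V = np.linalg.eigh(H)
def q(mu):
    """q(mu) = |<psi0| exp(-i H mu) |psi0>|^2, Eq. (8)."""
    amp = psi0.conj() @ (V @ (np.exp(-1j * w * mu) * (V.conj().T @ psi0)))
    return np.abs(amp) ** 2

# illustrative d = 2 Bernoulli intervals (in seconds)
mu1, mu2, p1 = 10e-9, 20e-9, 0.5
m = 2000
rng = np.random.default_rng(2)
mus = rng.choice([mu1, mu2], size=m, p=[p1, 1 - p1])

log_P_run = np.sum([np.log(q(mu)) for mu in mus])     # one realization, Eq. (7)
log_P_typ = m * (p1 * np.log(q(mu1)) + (1 - p1) * np.log(q(mu2)))  # Eq. (33)
```

For $m = 2000$ the single-run value of $\ln\mathcal{P}$ lands within a few percent of the LD prediction, consistent with the agreement reported in Figs. 1 and 2.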
Preliminary numerical studies show similar results for the numerical survival probability $\mathcal{P}$ in the context of the QZD, obtained by taking the projector $\mathrm{P} = |100\rangle\langle 100| + |001\rangle\langle 001|$ that confines the system to the Hilbert subspace spanned by the states $|100\rangle$ and $|001\rangle$. This will be further investigated in a forthcoming work.
Finally, we want to address the following question: does the presence of disorder in the sequence of measurement time intervals enhance the survival probability? To address it, we consider a $d$-dimensional Bernoulli $p(\mu)$ with $d = 2$, and a given fixed value of the average $\overline{\mu} = p^{(1)}\mu^{(1)} + p^{(2)}\mu^{(2)}$. Then, in the first scenario, we apply $m$ projective measurements at times equally spaced by the amount $\overline{\mu}$, while in the second, we sample the time intervals from $p(\mu)$. In the former case, one has
$$p(\mu) = \delta(\mu - \overline{\mu}), \qquad (53)$$
so that Eqs. (34) and (35) give
$$\overline{\mathcal{P}} = \langle\mathcal{P}\rangle = \mathcal{P}(\overline{\mu}) \equiv q(\overline{\mu})^m. \qquad (54)$$
The absence of randomness in the values of $\mu$ trivially leads to $\overline{\mathcal{P}} = \langle\mathcal{P}\rangle$. In the second scenario, the most probable value $\overline{\mathcal{P}}$ is given by Eq. (33), hence
$$\overline{\mathcal{P}} = q(\mu^{(1)})^{\,m\,p^{(1)}}\, q(\mu^{(2)})^{\,m\,p^{(2)}}. \qquad (55)$$
The question then arises as to whether, for given fixed $\overline{\mu}$ and $\mu^{(1)}$, a random sequence of measurements yields a larger value of the survival probability than the one obtained by performing equally spaced measurements. For the Hamiltonian (51) and the initial state (52), we show in Fig. 5 the behavior of $\overline{\mathcal{P}}$ as a function of $p^{(1)}$ at fixed values of $\overline{\mu} = 2.4\mu_0$ and $\mu^{(1)} = \mu_0$, with $\mu_0 = 10\ \mu$s. A comparison with $\mathcal{P}(\overline{\mu})$ shows that while in the Zeno limit such disorder is deleterious, there are instances where random measurements are beneficial in enhancing the survival probability. In Fig. 6, moreover, we show the behavior of the ratio $\overline{\mathcal{P}}/\mathcal{P}(\overline{\mu})$ as a function of $\mu^{(1)}$ at fixed values of $p^{(1)} = 0.99$, $m = 100$ and $\overline{\mu} = 2.4\mu^{(1)}$, with $\mu^{(1)} \in [1, 250]$ ns. An effective enhancement of the survival probability is reached in every dynamical evolution regime, except in the Zeno limit (inset of Fig. 6). Interestingly enough, these regimes might be particularly relevant on the experimental side, where the ideal Zeno condition is only partially achieved.
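That disorder can enhance survival far from the Zeno limit admits a simple extreme illustration (not the paper's example): a hypothetical two-level system with $q(\mu) = \cos^2(\Omega\mu)$ and interval values chosen so that the equally spaced sequence hits $\overline{\mu}$ where $q(\overline{\mu}) \approx 0$, while the random Bernoulli sequence of Eq. (55) retains a finite typical survival probability.

```python
import numpy as np

# Hypothetical two-level case: H = Omega sigma_x, |psi0> = |0>,
# q(mu) = cos^2(Omega*mu).  mu2 is fixed so that the average equals mu_bar.
Omega = 1.0
q = lambda mu: np.cos(Omega * mu) ** 2

mu1, p1 = 1.2, 0.5
mu_bar = np.pi / 2                      # chosen so that q(mu_bar) = 0
mu2 = (mu_bar - p1 * mu1) / (1 - p1)    # enforces p1*mu1 + (1-p1)*mu2 = mu_bar
m = 20

P_equal = q(mu_bar) ** m                                  # Eq. (54): ~ 0
P_random = q(mu1) ** (m * p1) * q(mu2) ** (m * (1 - p1))  # Eq. (55): finite
```

Equally spaced measurements at $\overline{\mu}$ freeze the system in an orthogonal state (vanishing survival), whereas the disordered sequence, probing the same average interval, survives with finite probability; this mirrors the enhancement seen in Figs. 5 and 6 outside the Zeno regime.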

Quantum Zeno limit
Following the analysis in Section 4, we now discuss how to recover the exact QZ limit. Let us consider $m$ projective measurements at times equally separated by an amount $\overline{\mu}$, so that one has the result (54) for the survival probability, while Eq. (19) gives $T = m\overline{\mu}$. Combining these two expressions, we get
$$\mathcal{P}(\overline{\mu}) = q(\overline{\mu})^{\,T/\overline{\mu}}. \qquad (56)$$
To recover the QZE, in the limit $\overline{\mu} \to 0$ with finite $T$ and using Eqs. (13) and (16), one indeed obtains
$$\lim_{\overline{\mu}\to 0} \mathcal{P}(\overline{\mu}) = \lim_{\overline{\mu}\to 0} \exp\Big[\frac{T}{\overline{\mu}} \ln\Big(1 - \frac{\overline{\mu}^2}{\tau_Z^2}\Big)\Big] = \lim_{\overline{\mu}\to 0} \exp\Big(-\frac{T\overline{\mu}}{\tau_Z^2}\Big) = 1, \qquad (57)$$
provided $\Delta H^2$ is finite, as is the case for a finite-dimensional Hilbert space. Let us now discuss the QZE for a general $p(\mu)$. Note that in this case it is natural in experiments to keep the number of measurements $m$ fixed at a large value, with the total time $T$ fluctuating between different sequences of measurements $\{\mu_j\}$. From Eqs. (34) and (35), with the use of Eq. (13) and the Taylor expansion of $\ln(1+x)$ for $|x| < 1$, we get
$$\overline{\mathcal{P}} \approx \exp\Big(-m \sum_{k=1}^{\infty} \overline{\delta}_k\Big), \qquad (58)$$
where $\overline{\delta}_k \equiv \int d\mu\, p(\mu)\, \delta_k(\mu)$, $k = 1, 2, 3, \dots$, with $\delta_k(\mu) \equiv \delta(\mu)^k/k$.
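The short-time expansion (16) behind the Zeno limit can be verified numerically. The sketch below uses a hypothetical two-level system $H = \Omega\sigma_x$ with $|\psi_0\rangle = |0\rangle$, for which $q(\mu) = \cos^2(\Omega\mu)$, $\langle H\rangle = 0$, $\langle H^2\rangle = \Omega^2$, so the Zeno time of Eq. (17) is $\tau_Z = 1/\Omega$; the value of $\Omega$ is illustrative.

```python
import numpy as np

# Verify q(mu) ~ 1 - mu^2/tau_Z^2 with tau_Z = 1/Delta H, Eqs. (16)-(17),
# for H = Omega sigma_x and |psi0> = |0> (illustrative two-level example).
Omega = 2.0
psi0 = np.array([1.0, 0.0], dtype=complex)
H = Omega * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

H_mean = (psi0.conj() @ H @ psi0).real          # <psi0|H|psi0> = 0
H2_mean = (psi0.conj() @ (H @ H) @ psi0).real   # <psi0|H^2|psi0> = Omega^2
tau_Z = 1.0 / np.sqrt(H2_mean - H_mean ** 2)    # Zeno time = 1/Omega

mu = 1e-3
q_exact = np.cos(Omega * mu) ** 2        # exact q(mu) for this model
q_zeno = 1.0 - (mu / tau_Z) ** 2         # leading-order expansion, Eq. (16)
```

The deviation between the exact $q(\mu)$ and the quadratic Zeno expansion is of order $\mu^4$, which is what makes the product over many short intervals converge to unity in Eq. (57).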

Conclusions and outlook
In this work, we have analyzed stochastic quantum Zeno phenomena by means of the large deviation theory widely applied in probability theory and statistical physics. More specifically, we have analytically derived the asymptotic distribution of the survival probability for the system to remain in the initial state when its unitary evolution is interspersed with a very long sequence of measurements that are randomly spaced in time. The framework allowed us to obtain analytical expressions for the most probable and the average value of the survival probability. While the most probable value represents what an experimentalist will measure in a single typical implementation of the measurement sequence, the average value corresponds instead to averaging over a large (ideally infinite) number of experimental runs. Therefore, by tuning the probability distribution of the time intervals between consecutive measurements, one can control the most probable survival probability, thereby allowing one to engineer novel optimal control protocols for the manipulation of, e.g., the atomic population of a specific quantum state. Our analytical predictions have been fully validated against numerical studies of a simple n-level quantum system and Zeno-protected entangled states. These states are particularly relevant in the context of quantum information science, and therefore need to be well protected from unavoidable environmental decoherence [13,14]. For example, in Ref. [40], it has been shown that an entangled state can be characterized by a Zeno time comparable with that of a separable state by virtue of its interaction with a suitable noisy environment. Since decoherence may correspond to a continuous monitoring by the environment (repetitive random measurements), our formalism allows one to predict the occupation probability of an arbitrary entangled state from the knowledge of the probability distribution of the system-environment interaction times.
Additionally, we have found that the presence of stochasticity in the measurement process may enhance the survival probability in the typical experimental scenario where the quantum Zeno limit is not achieved. This suggests new schemes that advantageously exploit disorder in implementing quantum information protocols.
Finally, as an outlook, our formalism may be extended to the more general context of QZD, where the system dynamics gets confined in a Hilbert subspace, thereby enhancing its complexity exponentially [17]. Another possible related application is the analysis of the interplay between the classical noise arising from an external drive and the noise due to the randomness in the measurement sequence, with a view to designing new types of noise that constrain the Hamiltonian dynamics in atomic quantum simulators of many-body systems [41].

Acknowledgments
We acknowledge fruitful discussions with M. Campisi