Optimality and universality in quantum Zeno dynamics

The effective lifetime of a quantum state can increase (the quantum Zeno effect) or decrease (the quantum anti-Zeno effect) in response to an increasing frequency of repeated measurements, and multiple transitions between these two regimes are possible within the same system. An interesting question arising in this regard is how to choose the schedule of repeated measurements that achieves the maximal possible decay rate of a given quantum state. Addressing the issue of optimality in quantum Zeno dynamics, we derive a range of rigorous results which, owing to the generality of the theoretical framework adopted here, apply to the majority of models that have appeared in the quantum Zeno literature. In particular, we prove the universal dominance of regular stroboscopic sampling: it always provides the shortest expected decay time among all possible measurement procedures. However, implementing the stroboscopic protocol requires knowledge of the optimal sampling period, which may depend on fine details of the quantum problem. We demonstrate that this difficulty can be overcome with a non-regular measurement schedule inspired by the scale-free restart strategy used in computer science to speed up the completion of probabilistic algorithms and Internet tasks: it achieves a near-optimal decay rate in the absence of detailed knowledge of the underlying quantum statistics. In addition, our general approach reveals an unexpected universality displayed by quantum systems subject to an optimally tuned rate of Poissonian measurements, and simple statistical criteria to discriminate between the Zeno and anti-Zeno regimes follow from this universality. We illustrate our findings with an example of Zeno dynamics in a system of optically trapped ultra-cold atoms and discuss the implications arising from them.


Introduction
As was recognized soon after the foundations of quantum theory were established, repeated measurements can slow down quantum evolution, e.g., inhibit the spontaneous decay of an unstable system. In the idealized limit of infinitely frequent instantaneous measurements, quantum transitions are completely suppressed, so that the system is frozen in its initial state. This phenomenon was first qualitatively described by computer science pioneer Alan Turing [1] and later took the name of the quantum Zeno effect [2], after the ancient Greek philosopher Zeno of Elea, famous for his paradoxes challenging the logical possibility of motion.
During the past decades the measurement-induced slowdown of quantum evolution has been investigated for a variety of experimental setups [3][4][5][6][7][8][9][10][11]. Importantly, the opposite phenomenon, the quantum anti-Zeno effect or inverse Zeno effect [12,13], has also been demonstrated experimentally [14,15]. Quite often the effective decay rate exhibits a non-monotonic dependence on the measurement frequency (see, e.g., references [16,17]), so that both the quantum Zeno and anti-Zeno effects can potentially be observed in the same system. This opens the door to optimizing the rate of quantum transitions by carefully scheduled repeated measurements. However, it remains unclear whether there are any general guiding principles for designing the optimal measurement schedule providing the fastest decay of a quantum state of interest.
Besides, considerable theoretical effort has been devoted to finding conditions that discriminate between the quantum Zeno and anti-Zeno effects in the decay of unstable systems. For instance, according to references [13,16,18], one should explore the general features of the system-environment interaction and calculate the residue of the propagator involving the environmental spectral density function. More recently, in reference [19] an explicit analytical criterion was derived which relates the sign of the Zeno dynamics of a two-level system in a dissipative environment to the convexity of the spectrum. While these theoretical approaches provide valuable insight, one may ask how to formulate general criteria (if any) for the Zeno and anti-Zeno regimes in terms of the directly measurable decay statistics at a given measurement frequency, i.e. without referring to any details of the interaction Hamiltonian.
To address these challenges, here we develop a unifying theoretical approach to quantum Zeno dynamics for a generic quantum system under arbitrarily distributed measurement events. Our analysis focuses on the behavior of the average time required to detect the decay of an initial state in a series of successive measurements. This metric exhibits a non-monotonic dependence on the measurement frequency, so that one can achieve the optimal decay conditions by bringing the system to the point of the Zeno/anti-Zeno transition. With simple arguments, we determine the optimal detection strategy which provides the shortest expected decay time: one should apply measurements in a strictly periodic manner, i.e. every τ* units of time. Since the optimal measurement period τ* of such a stroboscopic protocol depends on the parameters of the particular quantum system, which may be poorly specified, we next propose non-uniform measurement protocols whose performance is weakly sensitive to such details. These protocols resemble the scale-free restart strategies previously developed to improve the mean completion time of random search tasks [20,21] and of probabilistic algorithms in computer science [22]. Finally, our general framework is used to describe the peculiarities of the Zeno/anti-Zeno transition in the practically important case of randomly distributed Poissonian measurement events. It is shown that quantum decay under an optimally tuned rate of Poissonian measurements exhibits universal statistical features, allowing us to formulate simple and widely applicable criteria to discriminate between the Zeno and anti-Zeno regimes. All these findings are illustrated with the example of Zeno dynamics in the interband Landau-Zener tunneling of ultra-cold atoms [14,23].

Theoretical framework
In the description of quantum Zeno dynamics a key role is played by the survival probability P(T), i.e. the probability of finding the quantum system in its initial state after time T has elapsed in the absence of any measurements performed on the system prior to this point in time. We are interested in the scenario where the system undergoes a series of measurements during its time evolution to check whether it is still in the initial state. To explain the general idea of our approach, we treat the measurements as instantaneous projections, which is justified when the time required to perform a measurement is small compared to the typical inter-measurement interval. However, as explained throughout the text, the main conclusions of our analysis survive in more general settings with non-negligible measurement duration. Note also that we assume that the system-environment correlations can be neglected, which implies that the survival probability after several measurements factorizes.
The measurement protocol is characterised by the sequence of inter-measurement time intervals {T_k}_{k=1}^{+∞} = T_1, T_2, .... In what follows, we calculate the expectation ⟨t⟩ of the random time t at which the measurement attempts finally result in detection of the decay. This quantity can be determined from the infinite chain of equations

t = T_1 + t_1 x_{T_1},    (1)
t_1 = T_2 + t_2 x_{T_2},    (2)
t_2 = T_3 + t_3 x_{T_3},    (3)
. . .

where x_{T_k} is the binary random variable which equals zero if the outcome of the kth measurement is 'yes' (i.e. the decay of the initial state is detected) and unity otherwise, and t_i represents the time remaining to the decay provided the ith measurement was 'no' (i.e. the decay is still not detected). The intuition behind this set of equations is very simple: if a measurement attempt fails to detect the decay, the quantum evolution of the system starts anew from the initial (undecayed) state. Performing averaging over the quantum statistics and taking into account that ⟨x_{T_k}⟩ = P(T_k), one obtains

⟨t⟩ = Σ_{n=1}^{∞} T_n Π_{k=1}^{n−1} P(T_k).    (4)

The latter equation relates the expectation of the decay detection time to the quantum survival probability of the system. Once P(t) is known, equation (4) allows one to calculate the expected decay time for any sequence of inter-measurement intervals {T_k}_{k=1}^{+∞}. Let us discuss how this formula simplifies for the particular measurement strategies most studied in the existing literature. In the original formulation of the quantum Zeno effect [2], the measurement protocol was stroboscopic, i.e. the measurement events are equally spaced in time, {T_k}_{k=1}^{+∞} = τ, τ, .... In this case, the right-hand side of equation (4) is the sum of an infinite geometric series with common ratio P(τ), so that we readily find

⟨t⟩_τ = τ/(1 − P(τ)).    (5)

Another important scenario is the stochastic measurement protocol, where measurement events are separated by random time intervals [24][25][26][27][28]. In this case, the T_i's are assumed to be independent and identically distributed random variables sampled from a probability density ρ(T) with a well-defined first moment ⟨T⟩ = ∫_0^∞ dT ρ(T) T. To calculate the expected decay time, we should additionally average equation (4) over the statistics of the inter-measurement intervals.
This gives

⟨t⟩ = ⟨T⟩/(1 − ⟨P(T)⟩),  ⟨P(T)⟩ = ∫_0^∞ dT ρ(T) P(T).    (6)

Clearly, the stroboscopic protocol with period τ can be treated as a particular case of the randomly distributed protocol with measurement-interval distribution ρ(T) = δ(T − τ). Importantly, while here we consider the mean detection time as the key metric of interest, previous theoretical studies of the Zeno effect focused mainly on the calculation of the effective decay rate extracted from the behavior of the measurement-modified survival probability, see, e.g., [16,17,19,[29][30][31]. For instance, for the stroboscopic measurement protocol with period τ, the probability of finding the system in its initial state after N measurements (i.e. after time t = Nτ) is equal to P(τ)^N = e^{−Γt}, where Γ = −ln P(τ)/τ is the effective decay rate. In the more general situation with a stochastic measurement protocol, the survival probability after N measurements is given by Π_{k=1}^{N} P(T_k), which for large N behaves as e^{N⟨ln P(T)⟩}, so that Γ = −⟨ln P(T)⟩/⟨T⟩. Obviously, when the measurement frequency is high enough, the deviation of P(T) from unity for typical values of T is small and we can safely replace −⟨ln P(T)⟩ ≈ 1 − ⟨P(T)⟩. Then, using equation (6), one obtains Γ ≈ (1 − ⟨P(T)⟩)/⟨T⟩ = ⟨t⟩^{−1}, i.e. the mean decay time equals the inverse effective decay rate. Thus, the two approaches to quantifying Zeno dynamics, either in terms of the effective decay rate or in terms of the expected decay time, become equivalent in the limit of sufficiently frequent measurements. Note also that the mean detection time is the more natural and well-defined metric when dealing with the non-uniform measurement protocols discussed below. The behavior of ⟨t⟩ as a function of ⟨T⟩ allows us to identify the Zeno and anti-Zeno regimes. Namely, the expected decay time can either increase with decreasing ⟨T⟩, which we refer to as the Zeno effect, or decrease for more frequent measurements, which we define as the anti-Zeno effect. An equivalent definition of the Zeno and anti-Zeno effects in terms of the effective decay rate was adopted, e.g., in references [12,17,29,32,33].
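The formulas above are easy to check numerically. The following minimal Python sketch is our own illustration, not part of the original analysis; the survival probability P(t) = exp(−t²) is an arbitrary stand-in (with the generic quadratic short-time behavior) rather than the Landau-Zener expression of equation (A1).

```python
import math

# Toy survival probability with 1 - P(t) ~ t^2 at short times (assumption,
# standing in for the paper's equation (A1)).
def P(t):
    return math.exp(-t * t)

def mean_decay_time(intervals):
    """Equation (4): <t> = sum_n T_n * prod_{k<n} P(T_k)."""
    total, survive = 0.0, 1.0
    for T in intervals:
        total += T * survive
        survive *= P(T)
        if survive < 1e-15:   # remaining terms are negligible
            break
    return total

def mean_decay_time_strobo(tau):
    """Equation (5): <t>_tau = tau / (1 - P(tau))."""
    return tau / (1.0 - P(tau))

# The stroboscopic protocol as a special case of equation (4):
tau = 0.7
t_gen = mean_decay_time([tau] * 10_000)
assert abs(t_gen - mean_decay_time_strobo(tau)) < 1e-9

# Poissonian intervals: equation (6), <t> = <T> / (1 - <P(T)>), with
# <P(T)> evaluated by midpoint quadrature over rho(T) = r exp(-r T).
r, dT, n = 1.5, 1e-3, 20_000
meanP = sum(r * math.exp(-r * (i + 0.5) * dT) * P((i + 0.5) * dT) * dT
            for i in range(n))
t_poisson = (1.0 / r) / (1.0 - meanP)
assert t_poisson > 1.0 / r   # since 0 < <P(T)> < 1, always <t> > <T>
```

The same script works for any monotonically decreasing P(t); only the function definition needs to be replaced.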
In general, ⟨t⟩ as a function of ⟨T⟩ must exhibit a minimum corresponding to the Zeno/anti-Zeno transition. This is particularly evident in the case of stroboscopic measurements. At the very beginning of the decay 1 − P(τ) ∝ τ² [34], and therefore ⟨t⟩_τ ∝ 1/τ, whereas for very large detection periods ⟨t⟩_τ ∝ τ (assuming that P(τ) → 0 as τ → ∞). Thus, the dependence of the mean time required to detect the decay on the stroboscopic period always attains a minimum, see the right panel in figure 1. Note also that multiple Zeno/anti-Zeno transitions (i.e. multiple local extrema of ⟨t⟩ or Γ) may occur, see, e.g., references [17,29].

Optimality of the stroboscopic protocol
For the sake of illustration, we consider a system of ultra-cold atoms trapped in an accelerating periodic optical potential of the form V_0 cos(2k_L x − k_L a t²), where V_0 is the potential amplitude, k_L is the laser wavenumber, x is the position in the laboratory frame, a is the acceleration and t is time; this experimental setup is discussed in detail in references [14,23]. In the accelerating reference frame, the Hamiltonian reads H = p²/(2M) + V_0 cos(2k_L x) + Max, where M is the mass of the atoms and the last term corresponds to the inertial force experienced by them. The energy spectrum of the considered system at zero acceleration consists of Bloch bands separated by gaps. As an initial condition, we assume that the lowest band is uniformly occupied, while the higher bands are empty. When an acceleration is imposed, the atomic quasimomentum changes and the atoms undergo Bloch oscillations. The trapped atoms can escape from the accelerating lattice by interband Landau-Zener tunneling. The corresponding escape probability can be calculated analytically (see equation (A1) in appendix A) and it deviates from exponential decay at short times, see reference [23] and figure 2. An experimental technique for repeatedly measuring the number of trapped atoms was developed in reference [14], where it was shown that the analytical expression for the survival probability, together with the assumption of instantaneous and ideal measurements, leads to results in good agreement with the experiment.
In figure 3(a) we plot the expected tunneling time ⟨t⟩ as a function of the mean inter-measurement interval ⟨T⟩ for various stochastic measurement strategies with statistics uniform in time. We see that a minimum of ⟨t⟩ is always achieved and, while the values and positions of the minima depend on the distribution of the inter-measurement time, it is the regular stroboscopic protocol that provides the lowest of the minima. Also, figure 3(b) demonstrates that the stroboscopic protocol outperforms measurement procedures that are non-uniform in time.
These observations allow us to conjecture that, in the case of ultra-cold atoms, the stroboscopic protocol is the optimal measurement strategy. Importantly, this is also true in general. For a rigorous proof, assume that τ* is the optimal period of the stroboscopic measurement protocol minimizing the mean time to decay given by equation (5), i.e.

⟨t⟩_{τ*} ≤ ⟨t⟩_τ = τ/(1 − P(τ))    (7)

for all 0 < τ < ∞. Below we show that for any other measurement procedure {T_k}_{k=1}^{+∞} = T_1, T_2, ... the resulting expected decay time ⟨t⟩ cannot be lower than ⟨t⟩_{τ*}. Indeed, from equation (4) one obtains

⟨t⟩ = Σ_{n=1}^{∞} T_n Π_{k=1}^{n−1} P(T_k) = Σ_{n=1}^{∞} [T_n/(1 − P(T_n))] (1 − P(T_n)) Π_{k=1}^{n−1} P(T_k).    (8)

Next, we note that 1 − P(T_1) is the probability Q_1 of registering the decay at the first measurement and, accordingly, Q_n = (1 − P(T_n)) Π_{k=1}^{n−1} P(T_k) is the probability of registering the decay at the nth measurement. Also, we recall that T_n/(1 − P(T_n)) = ⟨t⟩_{T_n} is the expected decay time under the stroboscopic protocol of period T_n, and, therefore,

⟨t⟩ = Σ_{n=1}^{∞} Q_n ⟨t⟩_{T_n}.    (9)

Further, exploiting the facts that ⟨t⟩_{T_n} ≥ ⟨t⟩_{τ*} (a direct consequence of equation (7)) and the normalization condition Σ_{n=1}^{∞} Q_n = 1, we obtain from equation (9)

⟨t⟩ ≥ ⟨t⟩_{τ*}.    (10)
This proves that the stroboscopic measurement protocol is optimal among all possible strategies. The same conclusion can be drawn from the following qualitative argument. Let {T*_k}_{k=1}^{+∞} = T*_1, T*_2, ... be the optimal measurement schedule for a given quantum system and assume that the outcome of the first measurement is 'no'. Clearly, by the definition of the optimal strategy, the subsequent measurements must minimize the residual time to decay. Since the system's quantum evolution starts anew after the first measurement, the sequence of inter-measurement intervals T*_2, T*_3, ... minimizing the remaining time to decay coincides with the original sequence T*_1, T*_2, .... This means that the optimal sampling protocol is strictly periodic, i.e. T*_1 = T*_2 = T*_3 = .... Note that these simple arguments do not invoke any system-dependent features and remain valid in the case of non-instantaneous measurements, thus making our conclusion completely universal.
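The optimality statement can be probed numerically: no randomly generated schedule should yield a mean decay time below the stroboscopic optimum. The sketch below is our own illustration, again with the assumed toy survival probability P(t) = exp(−t²).

```python
import math
import random

def P(t):  # toy survival probability (assumption, quadratic at short times)
    return math.exp(-t * t)

def mean_time(intervals):
    """Equation (4) for an arbitrary sequence of inter-measurement intervals."""
    total, survive = 0.0, 1.0
    for T in intervals:
        total += T * survive
        survive *= P(T)
        if survive < 1e-15:
            break
    return total

# Optimal stroboscopic mean decay time <t>_{tau*} by brute-force scan of eq. (5)
taus = [0.01 * k for k in range(1, 500)]
t_star = min(tau / (1.0 - P(tau)) for tau in taus)

# No randomly generated schedule beats the stroboscopic optimum, eq. (10)
random.seed(0)
for _ in range(200):
    schedule = [random.uniform(0.05, 3.0) for _ in range(2000)]
    assert mean_time(schedule) >= t_star - 1e-9
```

Any other decreasing P(t) can be substituted; the inequality (10) is what the loop verifies on random samples.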
In figure 4 we compare the behavior of the mean decay time ⟨t⟩ and of the inverse decay rate Γ^{−1} as functions of the measurement period T for the stroboscopic measurement protocol. While both metrics attain their minima at very close optimal values of T, the match between the two curves is not perfect (especially for large measurement periods), so it is natural to ask whether there is a rigorous proof that the stroboscopic strategy is optimal when one wishes to maximize the effective decay rate Γ. We provide such a proof in appendix B.

Luby-like protocol
As we learned in the previous section, to achieve the maximal decay rate one should implement stroboscopic measurements with a carefully tuned sampling period τ*, which, strictly speaking, may depend on the fine details of the underlying quantum process. An interesting question then naturally emerges: is it possible to construct a measurement sequence that yields an expected decay time close to the optimal value ⟨t⟩_{τ*} without detailed knowledge of the system parameters? Previously, a similar challenge motivated the development of a universal restart strategy capable of improving the average running time of Las Vegas type computer algorithms without any assumptions regarding the completion time statistics [22]. Motivated by the appealing analogy between restart and projective measurement (indeed, the projection postulate tells us that every time the survival of the system is confirmed through a measurement, the quantum state of the system is reset to the initial undecayed one and its evolution starts from scratch), we now investigate the effect of the Luby-like measurement protocol [22] on quantum Zeno dynamics. Namely, we assume that the inter-measurement intervals are given by {T_k}_{k=1}^{+∞} = {τ_0, τ_0, 2τ_0, τ_0, τ_0, 2τ_0, 4τ_0, τ_0, τ_0, 2τ_0, τ_0, τ_0, 2τ_0, 4τ_0, 8τ_0, τ_0, ...}. The ith term of this sequence is

T_i = 2^{k−1} τ_0 if i = 2^k − 1 for some integer k,  T_i = T_{i−2^{k−1}+1} if 2^{k−1} ≤ i < 2^k − 1,    (11)

where the initial time step τ_0 should be smaller than the (estimated) decay time of the quantum state under study.
This strategy possesses the following remarkable property: up to the end of any interval in the measurement schedule, the total time spent on inter-measurement intervals of each length used so far is roughly equal. Indeed, as follows from equation (11), all inter-measurement intervals are powers of two times τ_0, and each time a pair of intervals of a given length has been completed, an interval of twice that length is immediately executed. If the total time elapsed since the start of the measurements is t ≫ τ_0, then approximately log_2(t/τ_0) different measurement periods have been applied, and the total time spent on each is ∼ t/log_2(t/τ_0). This means that the effective probability density describing the relative frequencies of the different measurement periods in the sequence defined by equation (11) behaves in a scale-free fashion, ρ_eff(T) ∝ 1/T. Due to this property, the strategy does not introduce any characteristic time scale into the measurement process.
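The recursion (11) is easy to implement; the short sketch below (our illustration) generates the sequence of multipliers and checks the time-balance property described above.

```python
from collections import Counter

def luby(i):
    """i-th multiplier of the Luby sequence (1-indexed); the actual
    inter-measurement interval of eq. (11) is T_i = luby(i) * tau_0."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if i == (1 << k) - 1:          # i = 2^k - 1: emit a fresh power of two
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)   # otherwise: repeat earlier prefix

first16 = [luby(i) for i in range(1, 17)]
assert first16 == [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, 1]

# Scale-free balance: after the interval of length 8 finishes (i = 15),
# the time accumulated on each length 1, 2, 4, 8 is the same.
spent = Counter()
for i in range(1, 16):
    spent[luby(i)] += luby(i)
assert all(v == 8 for v in spent.values())
```

The balance check makes the "roughly equal time per length" statement exact at the moments when the longest interval so far completes.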
It turns out that the Luby-like measurement strategy comes surprisingly close to the optimal value of the mean decay time provided by the best stroboscopic protocol. Namely, in appendix C we prove that the expected time required to detect the decay in a series of measurements scheduled according to equation (11) obeys the inequality

⟨t⟩_Luby ≤ C_1 ⟨t⟩_{τ*} (log_2(⟨t⟩_{τ*}/τ_0) + C_2),    (12)

which is universally valid for any quantum state whose survival probability P(t) is a monotonically decreasing function. Here C_1 = 8/(1 − e^{−1})² and C_2 = (1 − e^{−1})² Σ_{k=1}^{∞} k e^{−k+1} log_2 k + 3. Thus, the decay of a quantum system under the Luby-like strategy is only a logarithmic factor slower than the decay of the same system subject to optimally tuned stroboscopic measurements.
Importantly, in practice the performance of this measurement procedure is much better than the upper bound predicted by equation (12), see figure 5. This is because in the derivation of equation (12) we aimed mainly at the logarithmic factor and did not try to reduce the numerical constants C_1 and C_2 in the involved estimates. Note also that accounting for a non-zero measurement duration changes the constant C_1, while the structure of the final answer remains the same (see appendix C).
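This gap between the bound and actual performance can be seen directly: the sketch below (our illustration, with the assumed toy survival probability P(t) = exp(−t²)) compares the Luby-like protocol with the optimal stroboscopic one via equation (4).

```python
import math

def P(t):  # toy survival probability (assumption)
    return math.exp(-t * t)

def luby(i):
    """Multiplier sequence of eq. (11), 1-indexed."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def mean_time(intervals):
    """Equation (4): <t> = sum_n T_n * prod_{k<n} P(T_k)."""
    total, survive = 0.0, 1.0
    for T in intervals:
        total += T * survive
        survive *= P(T)
        if survive < 1e-16:
            break
    return total

tau0 = 0.1  # initial step, chosen below the decay time scale
t_luby = mean_time(luby(i) * tau0 for i in range(1, 100_000))
t_star = min(tau / (1 - P(tau)) for tau in (0.01 * k for k in range(1, 500)))

# Luby-like schedule is close to, but never better than, the optimum (12)
assert t_star <= t_luby < 10 * t_star
```

For this toy P(t) the ratio t_luby / t_star comes out far below the loose constant C_1 of equation (12), consistent with figure 5.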

Other scale-free protocols
Next we introduce a class of non-uniform measurement protocols resembling the scale-free restart strategies recently proposed for robust optimization of the mean first-passage time in random search processes [20,21]. These strategies are effective for problems where the survival probability has the form P(T) = f(T/T_0), where T_0 is an unknown characteristic time associated with the quantum evolution. Assume that the measurements are applied at random time moments at a rate inversely proportional to the time elapsed since the start of the experiment, i.e. r(t) = α/t, where α is a dimensionless constant [20]. As we see in figure 6(a), such a measurement protocol does not need to be optimized with respect to the characteristic time scale T_0 of the underlying quantum problem: the mean decay time as a function of α attains its minimum at an optimal value α* which is determined by the particular form of the function f(T/T_0), but does not depend on the time scale T_0. The mathematical explanation of this observation can be found in appendix D.
The same property is exhibited by the geometric protocol [21], see figure 6(b). Here the measurement time instants are chosen from the geometric sequence {T_k}_{k=1}^{+∞} = τ_0, qτ_0, q²τ_0, ..., where τ_0 ≪ T_0 is the initial time step and q is the dimensionless common ratio. Again, the optimal value q* minimizing the expected decay time does not depend on the scale T_0.
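The scale invariance of q* can be verified numerically. In the sketch below (our illustration) we take f(x) = exp(−x²) as an assumed form of the scaling function and check that the optimal common ratio is the same for two very different values of T_0.

```python
import math

def mean_time(intervals, P):
    """Equation (4) for a given survival probability P."""
    total, survive = 0.0, 1.0
    for T in intervals:
        total += T * survive
        survive *= P(T)
        if survive < 1e-16:
            break
    return total

def best_q(T0):
    """Optimal common ratio q* of the geometric protocol for P(T) = f(T/T0),
    with the illustrative choice f(x) = exp(-x**2)."""
    P = lambda t: math.exp(-(t / T0) ** 2)
    tau0 = 0.01 * T0              # initial step well below the decay scale
    qs = [1.0 + 0.01 * k for k in range(1, 300)]
    return min(qs, key=lambda q: mean_time((tau0 * q**n for n in range(5000)), P))

q_star_a = best_q(T0=1.0)
q_star_b = best_q(T0=7.3)
# q* is independent of the time scale T0 (up to the grid resolution)
assert abs(q_star_a - q_star_b) <= 0.0101
```

Note that τ_0 is kept proportional to T_0 here; in an experiment one would simply pick τ_0 far below any plausible decay time, as in the Luby-like protocol.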
Thus, once the form of the survival probability f(T/T_0) is known, one can optimize the quantum evolution using scale-free sampling of the measurement times even in the absence of a reliable estimate of the characteristic time scale T_0. An important open question, which is beyond the scope of the current article, is how to derive rigorous bounds on the performance of these measurement protocols.

Poissonian measurements
Sometimes the form of the measurement protocol is dictated by the experimental conditions themselves. For instance, in some settings the measurement process is of a stochastic nature, so that the time interval between two consecutive measurements varies randomly [24,26,27]. A particularly important example is Poissonian measurements, for which the inter-measurement intervals are independently sampled from the exponential distribution ρ(T) = re^{−rT}, where r is the measurement rate. In this section we describe the universal statistical properties of quantum decay in systems subject to the optimal measurement rate bringing the expected decay time to its extremum, and discuss the practical implications arising from this universality.
In this case ∫_0^∞ dT ρ(T) T = r^{−1} and ∫_0^∞ dT ρ(T) P(T) = rP̃(r), where P̃(r) denotes the Laplace transform of P(t) evaluated at r, so that equation (6) reduces to

⟨t⟩_r = r^{−1}/(1 − rP̃(r)).    (13)

To arrive at the model-independent manifestations of the Zeno/anti-Zeno transition announced above, one also needs to calculate the second statistical moment of the random time required to detect the decay in a series of Poissonian measurements. Note that for any stochastic measurement schedule with statistics uniform in time, the random times t and t_1 entering equation (1) are statistically independent copies of each other. In other words, the time to decay satisfies the simple renewal equation t = T + t′x_T, in which x_T is the binary random variable which equals zero if the outcome of the first measurement is 'yes' and unity otherwise, T is the random measurement time sampled from ρ(T), and t′ is an independent copy of t. Therefore, t² = T² + 2Tt′x_T + t′²x_T, and after averaging over the statistics of T and over the quantum statistics one obtains

⟨t²⟩ = (⟨T²⟩ + 2⟨Tx_T⟩⟨t⟩)/(1 − ⟨P(T)⟩).    (14)

For measurement events coming from Poisson statistics, ⟨T²⟩ = 2/r², ⟨P(T)⟩ = rP̃(r) and ⟨Tx_T⟩ = −r dP̃(r)/dr, while ⟨t⟩ is given by equation (13). Substituting these expressions into equation (14) gives the following result for the second moment of the decay time:

⟨t²⟩_r = (2/r² − 2rP̃′(r)⟨t⟩_r)/(1 − rP̃(r)).    (15)

Now let us assume that we have found the optimal measurement rate r* such that the expected decay time of the initial quantum state attains its (local or global) extremum. Then ∂_r⟨t⟩_{r*} = 0, and equation (13) together with equation (15) yields

⟨t²⟩_{r*} = 2⟨t⟩_{r*}² − 2r*^{−1}⟨t⟩_{r*}.    (16)

No assumptions were made concerning the particular form of the survival probability P(T) and, thus, equation (16) must hold for an arbitrary quantum system subject to optimally tuned Poissonian measurements. Interestingly, equation (16) is automatically valid for any quantum decay process subject to strictly periodic repeated measurements with period τ = 2r*^{−1}. Indeed, for ρ(T) = δ(T − τ), equation (14) reduces to ⟨t²⟩_τ = (τ² + 2τP(τ)⟨t⟩_τ)/(1 − P(τ)), where ⟨t⟩_τ is given by equation (5), and, therefore, ⟨t²⟩_τ = 2⟨t⟩_τ² − τ⟨t⟩_τ. Thus, quantum decay altered by optimal Poissonian measurements approaches in its statistical properties quantum decay under stroboscopic measurements characterised by the twice larger (on average) sampling period.
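The universal relation (16) lends itself to a direct numerical check. The sketch below (our illustration) again assumes the toy survival probability P(t) = exp(−t²), locates the optimal Poisson rate r* by a grid scan of equation (13), and verifies equation (16) there.

```python
import math

# Illustrative survival probability (assumption): P(t) = exp(-t**2)
def P(t):
    return math.exp(-t * t)

def laplace(f, r, dT=1e-3, Tmax=20.0):
    """Midpoint quadrature for the Laplace transform f~(r)."""
    n = int(Tmax / dT)
    return sum(math.exp(-r * (i + 0.5) * dT) * f((i + 0.5) * dT) * dT
               for i in range(n))

def mean_t(r):
    """Equation (13): <t>_r = r^{-1} / (1 - r * P~(r))."""
    return (1.0 / r) / (1.0 - r * laplace(P, r))

def mean_t2(r):
    """Equation (15); note <T x_T> = -r dP~/dr = r * L[t P(t)](r)."""
    TxT = r * laplace(lambda t: t * P(t), r)
    return (2.0 / r**2 + 2.0 * mean_t(r) * TxT) / (1.0 - r * laplace(P, r))

# locate the optimal rate r* by scanning <t>_r
rs = [0.1 * k for k in range(2, 100)]
r_star = min(rs, key=mean_t)

# universal relation (16): <t^2> = 2<t>^2 - 2<t>/r at r = r*
lhs = mean_t2(r_star)
rhs = 2.0 * mean_t(r_star) ** 2 - 2.0 * mean_t(r_star) / r_star
assert abs(lhs - rhs) / rhs < 0.05   # tolerance covers grid + quadrature error
```

The 5% tolerance accounts for r* being located only to within the grid spacing; refining the grid tightens the agreement.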
The point of the Zeno/anti-Zeno transition associated with the optimal rate of Poissonian measurements is also characterized by another universal statistical relation,

r* T̄_{r*} = 2,    (17)

in which the time T̄_r has the following meaning. Assume that we apply a single projective measurement at a random (Poissonian) moment of time to each member of a large ensemble of identically prepared quantum systems. Then T̄_r is the average measurement time of those trials that gave the outcome 'yes'. To derive equation (17), note that, as follows from equation (13), ⟨t⟩_r = 1/(r²Q̃(r)), where Q̃(r) is the Laplace transform of the function Q(T) = 1 − P(T) evaluated at r. Obviously, Q(T) represents the probability that the initial quantum state has decayed after time T. For r = r* one has ∂_r⟨t⟩_{r*} = 0, so that −Q̃′(r*)/Q̃(r*) = 2/r*. Introducing the time T̄_r ≡ ⟨(1 − x_T)T⟩/⟨1 − x_T⟩ = −Q̃′(r)/Q̃(r), we arrive at equation (17). The universal identities described above allow us to formulate simple and practically important criteria to distinguish between the quantum Zeno and anti-Zeno effects in terms of the directly measurable statistics of the random decay time. Namely, one can determine the sign of the Zeno dynamics by probing deviations from equations (16) and (17): it is straightforward to show from equations (13) and (15) that the conditions ⟨t²⟩_r < 2⟨t⟩_r² − 2r^{−1}⟨t⟩_r (or rT̄_r > 2) and ⟨t²⟩_r > 2⟨t⟩_r² − 2r^{−1}⟨t⟩_r (or rT̄_r < 2) indicate the Zeno (∂_r⟨t⟩_r > 0) and anti-Zeno (∂_r⟨t⟩_r < 0) regimes, respectively, see figure 7 for an illustration. Let us stress that equations (16) and (17) were derived assuming that the inter-measurement intervals are drawn from the exponential distribution and, for this reason, these criteria are universally valid only in the case of Poissonian measurements.
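In practice these criteria would be applied to sampled decay times. The Monte Carlo sketch below (our illustration, with the assumed toy survival probability P(t) = exp(−t²), for which the transition sits near r ≈ 1.6) classifies the regime from simulated experiments alone, exactly as an experimenter would from recorded detection times.

```python
import math
import random

random.seed(1)

def P(t):  # illustrative survival probability (assumption)
    return math.exp(-t * t)

def sample_decay_time(r):
    """One run: Poissonian measurements at rate r until decay is detected."""
    t = 0.0
    while True:
        T = random.expovariate(r)       # next inter-measurement interval
        t += T
        if random.random() > P(T):      # outcome 'yes' with probability 1 - P(T)
            return t

def classify(r, n=200_000):
    ts = [sample_decay_time(r) for _ in range(n)]
    m1 = sum(ts) / n
    m2 = sum(x * x for x in ts) / n
    # criterion: <t^2> < 2<t>^2 - 2<t>/r  =>  Zeno regime (d<t>/dr > 0)
    return "Zeno" if m2 < 2 * m1 * m1 - 2 * m1 / r else "anti-Zeno"

z = classify(4.0)    # rate well above the transition: frequent measurements
az = classify(0.3)   # rate well below the transition: rare measurements
assert z == "Zeno"
assert az == "anti-Zeno"
```

Only the sample moments of the detection time and the known rate r enter the test; no knowledge of P(t) is needed, which is the practical point of the criteria.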
Importantly, the assumption of instantaneous measurements does not play a crucial role in our argumentation. Relaxing this assumption and introducing a non-zero duration Δ of the measurement event, we arrive at slightly modified identities characterising the Zeno/anti-Zeno transition (see appendix E), which trivially reduce to equations (16) and (17) at Δ = 0. The criteria for the quantum Zeno effect in the presence of non-instantaneous Poissonian measurements are given by the inequalities ⟨t²⟩_r < 2⟨t⟩_r² − 2r^{−1}⟨t⟩_r − r⟨t⟩_r Δ²/(1 + rΔ) and rT̄_r > 2 + r²Δ²/(1 + rΔ), whereas the analogous relations with opposite inequality signs indicate the anti-Zeno regime.
Discussion

Our analysis proves that regular stroboscopic sampling, the most popular measurement protocol in the quantum Zeno literature, is a universally optimal strategy providing the best performance (i.e. the shortest expected decay time at the point of the Zeno/anti-Zeno transition) for any given quantum system, see section 3. However, in general, finding the optimal sampling rate is not a simple task. Here we proposed a computer science-inspired solution to this problem. As was first noticed more than two decades ago, restarting a task whose run-time is a random variable (e.g., a probabilistic algorithm or the download of a web page) may speed up its completion, but the optimal restart frequency depends on the running time distribution, which is usually unavailable in practice [22,[44][45][46][47]. To circumvent this obstacle, scale-free sampling of the inter-restart intervals has been suggested [20][21][22]. By noting the formal analogy between the projective measurement of a quantum system and the restart of a classical stochastic process [20,21,[48][49][50][51][52][53][54][55][56][57][58][59][60][61][62][63][64][65][66][67], we demonstrated that the scale-free detection protocols help to achieve a near-optimal decay rate without prior knowledge of the system parameters, see section 4. These theoretical results are expected to be an important step toward robust measurement-based control of quantum dynamics, especially given that they are not sensitive to the assumption of instantaneous measurements, as discussed in the text.
Also, the present analysis establishes general statistical criteria to discriminate between the Zeno and anti-Zeno regimes and to probe the optimal decay conditions in practice. As explained in section 5, this insight became possible due to the surprising universality of the Zeno/anti-Zeno transition under Poissonian measurements: the extrema of the expected decay time as a function of the measurement rate entail universal footprints in the decay statistics, see equations (16) and (17). Another remarkable consequence of this universality is a previously unknown inequality constraint relating the optimal (mean) sampling period r*^{−1} and the resulting expected decay time ⟨t⟩_{r*}. Indeed, since ⟨t²⟩_r ≥ ⟨t⟩_r², we immediately find from equation (16) that ⟨t⟩_{r*} ≥ 2r*^{−1}. This also means that the average number of measurement attempts required to detect the decay, ⟨n⟩_r = r⟨t⟩_r, is bounded from below at the point of the Zeno/anti-Zeno transition: ⟨n⟩_{r*} ≥ 2. The broad model-independent nature of these conclusions should facilitate their experimental testing.
Besides the issue of optimal control of quantum decay, another promising application of the above theory lies in the fields of quantum walks and quantum search algorithms. Assume that a quantum particle undergoes unitary evolution and a series of measurements is performed (either in a regular or in a random fashion) to see whether the particle has reached the target region of its phase space. What then is the expected time required to detect the first arrival of the particle? This question is known as the quantum first detection problem [68][69][70][71][72][73][74][75][76][77][78][79][80][81], which is a basic notion in the design of quantum search algorithms. As first noticed in reference [69], in some cases an optimal sampling period exists that brings the mean first detection time to a minimum, thus providing an opportunity to optimize the search process. We anticipate that the theoretical approach presented here may help to uncover the universal aspects of this kind of optimal behavior.

Appendix B. Optimality of the stroboscopic protocol in terms of the decay rate

Let τ* denote the period of the stroboscopic protocol maximizing the effective decay rate Γ_τ = −ln P(τ)/τ, i.e.

Γ_{τ*} ≥ −ln P(τ)/τ    (B3)

for all 0 < τ < ∞. Let us multiply both sides of equation (B3) by T/⟨T⟩, where T > 0 is a random variable having probability density ρ(T). Next we replace τ by T and average over the statistics of T, which gives

Γ_{τ*} ≥ −⟨ln P(T)⟩/⟨T⟩.

On the left we have the decay rate Γ_{τ*} attained by the optimal stroboscopic measurement protocol, while the right-hand side represents the decay rate Γ of a system subject to measurements at generally distributed random time moments (see the introduction). This proves that the stroboscopic measurement protocol is the optimal strategy in terms of maximizing the effective decay rate.

Appendix C. Luby-like protocol
In our derivation of equation (12) we follow the line of argumentation originally proposed in reference [22] in the context of probabilistic algorithms under restart. We also show how these arguments can be generalized to account the non-instantaneous measurements. For any j, if the total time spent on the inter-measurement intervals of duration 2 j τ 0 up to the end of some interval in the measurement schedule defined by equation (11) is W, then at most log 2 W/τ 0 + 1 different intervals have so far been used, and the total time spent on each one cannot exceed 2W. Thus the total time elapsed since the start of the experiment up to this point is at most 2W(log 2 W/τ 0 + 1).
As in the main text, we denote as τ * τ 0 the optimal period of the stroboscopic measurement protocol minimizing the mean decay time given by equation (5). We set i 0 = log 2 (τ * /τ 0 ) and m 0 = log 2 (1/Q(τ * )) , where Q(T) = 1 − P(T) and . . . denotes the procedure of rounding up to an integer. Consider the instant when 2 m 0 inter-measurement intervals of duration 2 i 0 τ 0 have elapsed. The probability that the corresponding measurements have not detected the decay of the initial state is at most (1 − Q(2 i 0 τ 0 )) 2 m 0 (1 − Q(τ * )) 1/Q(τ * ) e −1 , ( C 1 ) where we have used an assumption that the survival probability P(T) is monotonically decreasing function. At this point, the total time spent on intervals of duration 2 i 0 τ 0 is W = 2 i 0 +m 0 τ 0 4 t τ * , ( C 2 ) due to equations (5) and (B3), and by the observation above the total time spent up to this point is at most 2W(log 2 W/τ 0 + 1). More generally, after k2 m 0 intervals of length 2 i 0 τ 0 have elapsed, the probability that the system has failed to decay is at most e −k , and the total time spent up to this point is at most 2kW(log 2 (kW/τ 0 ) + 1). Therefore, the expected decay time of a quantum system subject to the Luby-like measurement protocol obeys the inequality t Luby ∞ k=1 2kW(log 2 (kW/τ 0 ) + 1)e −k+1 .
Taking into account equation (C2) and relation ∞ k=1 ke −k+1 = 1 (1−e −1 ) 2 one readily obtains equation (12). Now let us introduce a non-zero duration of the measurement event, Δ. As previously, if the total time spent on the inter-measurement intervals of length 2 j τ 0 up to the end of some interval in the Luby sequence is W, then at most log 2 W/τ 0 + 1 different intervals have so far been used, and the total time spent on each one cannot exceed 2W. The total time elapsed since the start of the experiment up to this point is at most 2W(log 2 W/τ 0 + 1)(1 + Δ τ 0 ), where we accounted that each measurement entails a time penalty Δ. Denoting again as τ * τ 0 the optimal period of the instantaneous stroboscopic measurements minimizing the mean decay time given by equation (5) and repeating the above argumentation we obtain t Luby (Δ) C 1 (Δ) t τ * (log 2 t τ * τ 0 + C 2 ), where C 1 (Δ) =
To proceed we note that the random time to decay t of a quantum system under the stroboscopic protocol with the inter-measurement period τ and the measurement duration Δ obeys the simple renewal equation: t = τ + Δ + xt , where x is the binary random variable decoding the outcome of the first measurement, and t is independent copy of t. After averaging over the quantum statistics, one obtains the relation t τ (Δ) = τ + Δ 1 − P(τ ) , ( C 5 ) which generalizes equation (5) to the case of non-instantaneous measurements. Let us denote as t τ * (Δ) the minimal expected decay time attained via optimization of equation (C5) over τ at a fixed value of Δ. As follows from equations (B3) and (C5), t τ * (Δ) t τ * , where t τ * ≡ t τ * (0) represents the optimal expected decay time in the limit of instantaneous measurements. The later inequality together with equation (C4) finally give us the bound which tells us that, in the presence of an arbitrary time cost associated with non-zero measurement duration, the decay of a quantum system under Luby measurement strategy is a logarithmic factor slower than the decay of the same system subject to the optimally tuned stroboscopic measurements. Equation (C6) generalises equation (12) and reduces to it at Δ = 0.