Inference in High-Dimensional Online Changepoint Detection

Abstract. We introduce and study two new inferential challenges associated with the sequential detection of change in a high-dimensional mean vector. First, we seek a confidence interval for the changepoint, and second, we estimate the set of indices of coordinates in which the mean changes. We propose an online algorithm that produces an interval with guaranteed nominal coverage, and whose length is, with high probability, of the same order as the average detection delay, up to a logarithmic factor. The corresponding support estimate enjoys control of both false negatives and false positives. Simulations confirm the effectiveness of our methodology, and we also illustrate its applicability on the U.S. excess deaths data from 2017 to 2020. The supplementary material, which contains the proofs of our theoretical results, is available online.


Introduction
The real-time monitoring of evolving processes has become a characteristic feature of 21st century life. Watches and defibrillators track health data, Covid-19 case numbers are reported on a daily basis and financial decisions are made continuously based on the latest market movements. Given that changes in the dynamics of such processes are frequently of great interest, it is unsurprising that the area of changepoint detection has undergone a renaissance over the last 5-10 years.
One of the features of modern datasets that has driven much of the recent research in changepoint analysis is high dimensionality, where we monitor many processes simultaneously, and seek to borrow strength across the different series to identify changepoints. The nature of series that are tracked in applications, as well as the desire to evade to the greatest extent possible the curse of dimensionality, means that it is commonly assumed that the signal of interest is relatively sparse, in the sense that only a small proportion of the constituent series undergo a change. Furthermore, the large majority of these works have focused on the retrospective (or offline) challenges of detecting and estimating changes after seeing all of the available data (e.g. Chan and Walther, 2015; Cho and Fryzlewicz, 2015; Jirak, 2015; Cho, 2016; Soh and Chandrasekaran, 2017; Wang and Samworth, 2018; Enikeeva and Harchaoui, 2019; Padilla et al., 2021; Kaul et al., 2021; Liu et al., 2021; Londschien et al., 2021; Rinaldo et al., 2021; Follain et al., 2022). Nevertheless, the related problem, where one observes data sequentially and seeks to declare changes as soon as possible after they have occurred, is nowadays receiving increasing attention (e.g. Kirch and Stoehr, 2019; Dette and Gösmann, 2020; Gösmann et al., 2020; Yu et al., 2021b; Chen et al., 2022). Although the focus of our review here has been on recent developments, including finite-sample results in multivariate and high-dimensional settings, we also mention that changepoint analysis has a long history (e.g. Page, 1954). Entry points to this classical literature include Csörgő and Horváth (1997) and Horváth and Rice (2014). For univariate data, sequential changepoint detection is also studied under the banner of statistical process control (Duncan, 1952; Tartakovsky et al., 2014). In the field of high-dimensional statistical inference more generally, uncertainty quantification has become a major theme over the last decade, originating with influential work on the debiased Lasso in (generalized) linear models (Javanmard and Montanari, 2014; van de Geer et al., 2014; Zhang and Zhang, 2014), and subsequently developed in other settings (e.g. Janková and van de Geer, 2015; Yu et al., 2021a).
The aim of this paper is to propose methods to address two new inferential challenges associated with the high-dimensional, sequential detection of a sparse change in mean. The first is to provide a confidence interval for the location of the changepoint, while the second is to estimate the signal set, i.e. the set of indices of coordinates that undergo the change. Despite the importance of uncertainty quantification and signal support recovery in changepoint applications, neither of these problems has previously been studied in the multivariate sequential changepoint detection literature, to the best of our knowledge. Of course, one option here would be to apply an offline confidence interval construction (e.g. Kaul et al., 2021) after a sequential procedure has declared a change. However, this would be to ignore the essential challenge of the sequential nature of the problem, whereby one wishes to avoid storing all historical data, so as to enable inference to be carried out in an online manner. By this we mean that the computational complexity for processing each new observation, as well as the storage requirements, should depend only on the number of bits needed to represent the new data point observed¹. The online requirement turns out to impose severe restrictions on the class of algorithms available to the practitioner, and lies at the heart of the difficulty of the problem.
To give a brief outline of our construction of a confidence interval with guaranteed (1 − α)-level coverage, consider for simplicity the univariate setting, where (X_n)_{n∈N} form a sequence of independent random variables with X_1, ..., X_z iid ∼ N(0, 1) and X_{z+1}, X_{z+2}, ... iid ∼ N(θ, 1). Without loss of generality, we assume that θ > 0. Suppose that θ is known to be at least b > 0 and, for n ∈ N, define residual tail lengths

t_{n,b} := sargmax_{0 ≤ h ≤ n} Σ_{i=n−h+1}^n (X_i − b/2).    (1)

In the case of a tie, we choose the smallest h achieving the maximum.

¹ Here, we ignore the errors in rounding real numbers to machine precision; thus, we do not distinguish between continuous random variables and quantized versions where the data have been rounded to machine precision.

Since Σ_{i=n−h+1}^n (X_i − b/2) can be viewed as the likelihood ratio statistic for testing the null of N(0, 1) against the
alternative of N(b, 1) using X_{n−h+1}, ..., X_n, the quantity t_{n,b} is the tail length for which the likelihood ratio statistic is maximized. If N is the stopping time defining a good sequential changepoint detection procedure, then, intuitively, N − t_{N,b} should be close to the true changepoint location z, and almost pivotal. This motivates the construction of a confidence interval of the form [max{N − t_{N,b} − g(α, b), 0}, N], where we control the tail probability of the distribution of N − t_{N,b} to choose g(α, b) so as to ensure the desired coverage. In the multivariate case, considerable care is required to handle the post-selection nature of the inferential problem, as well as to determine an appropriate left endpoint for the confidence interval. For this latter purpose, we only assume a lower bound on the Euclidean norm of the vector of mean change, and employ a delicate multivariate and multiscale aggregation scheme; see Section 2 for details, as well as Section 3.5 for further discussion.
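In this univariate setting, the residual tail length need not be recomputed from scratch at each time step: the maximal tail sum m_n := max_{0≤h≤n} Σ_{i=n−h+1}^n (X_i − b/2) obeys the CUSUM-style recursion m_n = max(m_{n−1} + X_n − b/2, 0), with the tail length growing by one while the maximum stays positive and resetting to zero otherwise. A minimal sketch (the function names are ours, not the paper's):

```python
def update_tail(t, m, x, b):
    """One online step of the residual tail length recursion at scale b.

    t is the current tail length t_{n-1,b} and m the current maximal tail
    sum; returns (t_n, m_n) after observing x.  Ties (maximal sum zero) are
    broken towards the shorter tail, matching the sargmax convention.
    """
    s = m + x - b / 2.0
    return (t + 1, s) if s > 0 else (0, 0.0)


def univariate_interval(N, t_Nb, g):
    """The interval [max(N - t_{N,b} - g(alpha, b), 0), N] described above."""
    return max(N - t_Nb - g, 0), N
```

For instance, after observations 2, −5, 2, 2 at scale b = 1, the tail resets at the large negative value and then grows again, leaving a tail length of 2; each update costs O(1) time and memory, which is what makes the construction online.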
The procedure for the inference tasks discussed above, which we call ocd_CI (short for online changepoint detection Confidence Intervals), can be run in conjunction with any base sequential changepoint detection procedure. However, we recommend using the ocd algorithm introduced by Chen et al. (2022), or its variant ocd′, which provides guarantees on both the average and worst-case detection delays, subject to a guarantee on the patience, or average false alarm rate under the null hypothesis of no change. Crucially, these are both online algorithms. The corresponding inferential procedures inherit this same online property, thereby making them applicable even in very high-dimensional settings and where changes may be rare, so we may need to see many new data points before declaring a change.
In Section 3, we study the theoretical performance of the ocd_CI procedure. In particular, we prove in Theorem 1 that, whenever the base sequential detection procedure satisfies a patience and detection delay condition, the confidence interval has at least nominal coverage for a suitable choice of input parameters. Theorem 2 provides a corresponding guarantee on the length of the interval. In Section 3.3, we show that, by using ocd′ as the base procedure, the aforementioned patience and detection delay condition is indeed satisfied. As a result, the output confidence interval has guaranteed nominal coverage, and the length of the interval is of the same order as the average detection delay for the base ocd′ procedure, up to a polylogarithmic factor. This is remarkable in view of the intrinsic challenge that the better such a changepoint detection procedure performs, the fewer post-change observations are available for inferential tasks.
A very useful byproduct of our ocd_CI methodology is that we obtain a natural estimate of the set of signal coordinates (i.e. those that undergo change). In Theorem 3, we prove that, with high probability, it is able both to recover the effective support of the signal (see Section 3.1 for a formal definition) and to avoid noise coordinates. We then broaden the scope of applicability of our methodology in Section 3.4 by relaxing our distributional assumptions to deal with sub-Gaussian or sub-exponential data. Finally, in Section 3.5, we introduce a modification of our algorithm that permits an arbitrarily loose lower bound β > 0 on the Euclidean norm of the vector of mean change to be employed, with only a logarithmic increase in the confidence interval length guarantee and the computational cost.
An attraction of our theoretical results is that we are able to handle arbitrary spatial (cross-sectional) dependence between the different coordinates of our data stream. On the other hand, two limitations of our analysis for practical use are that real data may exhibit both heavier than sub-exponential tails and temporal dependence. While a full theoretical analysis of the ocd_CI algorithm in these contexts appears to be challenging, we have made some practical suggestions regarding these issues in Sections 3.4 and 4.4 respectively.
Section 4 is devoted to a study of the numerical performance of our methodological proposals. Our simulations confirm that the ocd_CI methodology (with the ocd base procedure) attains the desired coverage level across a wide range of parameter settings, that the average confidence interval length is of comparable order to the average detection delay, and that our support recovery guarantees are validated empirically. We further demonstrate the way in which naive application of offline methods may lead to poor performance in this problem. Moreover, in Section 4.4, we apply our methods to excess deaths data from the Covid-19 pandemic in the US. Proofs, auxiliary results, extensions to sub-Gaussian and sub-exponential settings and additional simulation results are provided in the supplementary material (Section 5).
We conclude this introduction with some notation used throughout the paper. We write N₀ for the set of all non-negative integers.
For a matrix A = (A_{i,j}) ∈ R^{d₁×d₂} and j ∈ [d₂], we write A_{·,j} := (A_{1,j}, ..., A_{d₁,j})ᵀ ∈ R^{d₁} and A_{−j,j} := (A_{1,j}, ..., A_{j−1,j}, A_{j+1,j}, ..., A_{d₁,j})ᵀ ∈ R^{d₁−1}. We use Φ(·), Φ̄(·) and φ(·) to denote the distribution function, survivor function and density function of the standard normal distribution respectively. For two real-valued random variables U and V, we write U ≥_st V or V ≤_st U if P(U ≤ x) ≤ P(V ≤ x) for all x ∈ R. We adopt the conventions that an empty sum is 0 and that min ∅ := ∞, max ∅ := −∞.

Confidence interval construction and support estimation methodology
In the multivariate sequential changepoint detection problem, we observe p-variate observations X_1, X_2, ... in turn, and seek to report a stopping time N by which we believe a change has occurred. Here and throughout, a stopping time is understood to be with respect to the natural filtration, so that the event {N = n} belongs to the σ-algebra generated by X_1, ..., X_n. The focus of this work is on changes in the mean of the underlying process, and we denote the time of the changepoint by z. Moreover, since our primary interest is in high-dimensional settings, we will also seek to exploit sparsity in the vector of mean change. Given α ∈ (0, 1), then, our primary goal is to construct a confidence interval C ≡ C(X_1, ..., X_N, α) with the property that z ∈ C with probability at least 1 − α.
For i ∈ N and j ∈ [p], let X_i^j denote the jth coordinate of X_i. The ocd_CI algorithm relies on a lower bound β > 0 for the ℓ₂-norm of the vector of mean change, sets of signed scales B and B₀ defined in terms of β, and a base sequential changepoint detection procedure CP. As CP processes each new data vector, we update the matrix of residual tail lengths (t_{n,b}^j)_{j∈[p], b∈B∪B₀}, with

t_{n,b}^j := sargmax_{0 ≤ h ≤ n} Σ_{i=n−h+1}^n (X_i^j − b/2),

as well as the corresponding tail partial sum vectors (A_{n,b}^{·,j})_{j∈[p], b∈B∪B₀}, where A_{n,b}^{j′,j} := Σ_{i=n−t_{n,b}^j+1}^n X_i^{j′}. After the base procedure CP declares a change at a stopping time N, we identify an "anchor" coordinate ĵ ∈ [p] and a signed anchor scale b̂ ∈ B (their precise definitions are given in Algorithm 1).
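The residual tail lengths and tail partial sums above can be maintained online: at each time step, the tail for a pair (j, b) either extends by one or resets to empty (a CUSUM-type recursion), and the partial-sum column A^{·,j}_{n,b} correspondingly either gains the new observation in every coordinate or empties. A sketch with O(p²K) storage for K scales, independent of n (class and attribute names are ours, not the paper's pseudo-code):

```python
import numpy as np

class TailTracker:
    """Online update of residual tail lengths t^j_{n,b} and tail partial
    sums A^{j',j}_{n,b} for p coordinates and K scales."""

    def __init__(self, p, scales):
        self.scales = np.asarray(scales, dtype=float)
        k = len(self.scales)
        self.t = np.zeros((p, k), dtype=int)   # tail lengths t^j_{n,b}
        self.m = np.zeros((p, k))              # maximal tail sums of X^j - b/2
        self.A = np.zeros((p, p, k))           # A[j', j, k]: tail partial sums

    def update(self, x):
        x = np.asarray(x, dtype=float)
        s = self.m + x[:, None] - self.scales[None, :] / 2.0
        keep = s > 0                           # tails that extend by one
        self.t = np.where(keep, self.t + 1, 0)
        self.m = np.where(keep, s, 0.0)
        # an extended tail for column j gains x[j'] in every row j';
        # a reset tail is empty, so its column of partial sums is zeroed
        self.A = np.where(keep[None, :, :], self.A + x[:, None, None], 0.0)
```

Each call to `update` costs O(p²K) time and the state never grows with n, which is the storage pattern behind the online guarantee discussed in the introduction.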
The intuition is that the anchor coordinate and signed anchor scale are chosen so that the final t_{N,b̂}^ĵ observations provide the best evidence, among all of the residual tail lengths, against the null hypothesis of no change. Meanwhile, the tail partial sums A_{N,b̂}^{j,ĵ} aggregate the most recent observations in the other coordinates over this same stretch of data. The main idea of our confidence interval construction is to seek to identify coordinates with large post-change signal. To this end, observe that when t_{N,b̂}^ĵ is not too much larger than N − z, the normalized tail statistics E_{N,b̂}^{j,ĵ} (defined in Algorithm 1) are approximately normally distributed, for j ∈ [p] \ {ĵ}, with variance close to 1. Indeed, if ĵ, b̂, N and t_{N,b̂}^ĵ were fixed, and if 0 < t_{N,b̂}^ĵ ≤ N − z, then the former quantity would have unit variance around this centering value. The random nature of these quantities, however, introduces a post-selection inference aspect to the problem. Nevertheless, by choosing an appropriate threshold value d₁ > 0, we can ensure that, with high probability, when j ≠ ĵ is a noise coordinate we have |E_{N,b̂}^{j,ĵ}| < d₁, and when j ≠ ĵ is a coordinate with sufficiently large signal, there exists a signed scale b, having the same sign as θ_j, for which the corresponding exceedance inequality in Algorithm 1 holds. In fact, such a signed scale, if it exists, can always be chosen to be from B₀. As a convenient byproduct, the set of indices j for which the latter inequality holds, which we denote as Ŝ, forms a natural estimate of the set of coordinates in which the mean change is large.
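Schematically, the support estimate is a thresholding step on the statistics E^{j,ĵ}_{N,b̂}, excluding the anchor coordinate itself. The sketch below compresses the per-scale inequality into a single statistic per coordinate, so it is an illustration of the thresholding idea rather than the algorithm's exact rule:

```python
def estimate_support(E, j_hat, d1):
    """Indices j != j_hat whose statistic exceeds d1 in absolute value;
    a simplified stand-in for the support estimate S_hat in Algorithm 1.
    E is a length-p list of (hypothetical) aggregated tail statistics."""
    return {j for j, e in enumerate(E) if j != j_hat and abs(e) >= d1}
```

Note that the anchor coordinate is excluded by construction, which is why Theorem 3(b) below augments the estimate with ĵ when stating the false negative guarantee.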
For each j ∈ Ŝ, there exists a largest scale b ∈ B₀ for which the inequality defining Ŝ continues to hold. We denote the signed version of this quantity, where the sign is chosen to agree with that of E_{N,b̂}^{j,ĵ}, by b̂_j; this can be regarded as a shrunken estimate of θ_j, so it plays the role of the lower bound b from the univariate problem discussed in the introduction. Finally, then, our confidence interval is constructed as the intersection, over indices j ∈ Ŝ, of the confidence intervals from the univariate problem in coordinate j, with signed scale b̂_j.
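Concretely, since each univariate interval has the form [max{N − t_j − g_j, 0}, N] and they all share the right endpoint N, intersecting them reduces to taking the largest left endpoint. A sketch with hypothetical argument names:

```python
def combined_interval(N, tails, gaps):
    """Intersect the univariate intervals [max(N - t_j - g_j, 0), N] over
    the estimated support; tails[j] plays the role of the tail length at
    the selected scale and gaps[j] of the corresponding g(alpha, b_j)."""
    left = max(max(N - t - g, 0) for t, g in zip(tails, gaps))
    return left, N
```

The intersection is never empty here, because every univariate interval contains the declaration time N by construction.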
As a device to facilitate our theory, the ocd_CI algorithm allows the practitioner the possibility of observing a further ℓ observations after the time of changepoint declaration, before constructing the confidence interval. These additional observations are used to determine the anchor coordinate ĵ and scale b̂, as well as the estimated support Ŝ and the estimated scale b̂_j for each j ∈ Ŝ. Thus, the extra sampling is used to guard against an unusually early changepoint declaration that leaves very few post-change observations for inference. Nevertheless, we will see in Theorem 1 below that the output confidence interval has guaranteed nominal coverage even with ℓ = 0, so that additional observations are only used to control the length of the interval. In fact, even for this latter aspect, the numerical evidence presented in Section 4 indicates that ℓ = 0 provides confidence intervals of reasonable length in practice. Similarly, Theorem 3 ensures that, with high probability, our support estimate Ŝ contains no noise coordinates (i.e. has false positive control) even with ℓ = 0, so that the extra sampling is only used to provide false negative control.
Pseudo-code for this ocd_CI confidence interval construction is given in Algorithm 1, where we suppress the dependence on n of quantities that are updated at each time step. The computational complexity per new observation, as well as the storage requirement, of this algorithm is equal to the sum of the corresponding quantities for the CP base procedure and O(p² log(ep)), regardless of the observation history. Thus the ocd_CI method inherits the property of being an online algorithm, as discussed in the introduction, from any online CP base procedure.
Algorithm 1: Pseudo-code for the confidence interval construction algorithm ocd_CI. Input: X_1, X_2, ... ∈ R^p observed sequentially, β > 0, a ≥ 0, an online changepoint detection procedure CP, and further tuning parameters including d₁, d₂ and ℓ.

A natural choice for the base online changepoint detection procedure CP is the ocd algorithm, or its variant ocd′, introduced by Chen et al. (2022). Both are online algorithms, with computational complexity per new observation and storage requirements of O(p² log(ep)).
The ocd′ base procedure is considered for the theoretical analysis in Section 3 due to its known patience and detection delay guarantees, while we prefer ocd for numerical studies and practical use. For the reader's convenience, the ocd and ocd′ algorithms are provided as Algorithms 2 and 3 respectively in Section 5.3.

Theoretical analysis
Throughout this section, we will assume that the sequential observations X_1, X_2, ... are independent, and that for some unknown covariance matrix Σ ∈ R^{p×p} whose diagonal entries are all equal to 1, there exist z ∈ N₀ and θ = (θ_1, ..., θ_p)ᵀ ≠ 0 for which X_1, ..., X_z ∼ N_p(0, Σ) and X_{z+1}, X_{z+2}, ... ∼ N_p(θ, Σ). We let ϑ := ‖θ‖₂, and write P_{z,θ,Σ} for probabilities computed under this model, though in places we omit the subscripts for brevity. Define the effective sparsity of θ, denoted s(θ), to be the smallest s ∈ {2⁰, 2¹, ..., 2^{⌈log₂ p⌉}} such that the corresponding effective support S(θ) := {j ∈ [p] : |θ_j| ≥ ϑ/√(s log₂(2p))} has cardinality at least s. Thus, the sum of squares of the coordinates in the effective support of θ has the same order of magnitude as ‖θ‖₂², up to logarithmic factors. Moreover, if at most s components of θ are non-zero, then s(θ) ≤ s, and equality is attained when, for example, all non-zero coordinates have the same magnitude.
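The effective sparsity can be computed directly from its definition by scanning the dyadic grid; a small helper of our own, under the definition above:

```python
import numpy as np

def effective_sparsity(theta):
    """Return (s, support): the smallest dyadic s in {1, 2, 4, ...} with at
    least s coordinates satisfying |theta_j| >= ||theta||_2/sqrt(s*log2(2p)),
    together with the indices passing that threshold."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    norm = np.linalg.norm(theta)
    log_term = np.log2(2 * p)
    s = 1
    while True:
        thresh = norm / np.sqrt(s * log_term)
        support = np.flatnonzero(np.abs(theta) >= thresh)
        # stop when the cardinality condition holds, or at the top of the
        # dyadic grid (existence holds in the paper's setting)
        if len(support) >= s or s >= 2 ** int(np.ceil(np.log2(p))):
            return s, support
        s *= 2
```

For a vector with a single non-zero coordinate the effective sparsity is 1, whereas for a flat vector of four equal coordinates (p = 4) the threshold at s = 1 is too demanding and the scan returns s = 2 with the full support, illustrating how the dyadic grid trades threshold against cardinality.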
For r > 0 and an online changepoint detection procedure CP characterized by an extended stopping time N, we define

g(r; N) := sup_{z∈N₀} P_{z,θ,Σ}(N > z + r).    (2)

Coverage Probability and Length of the Confidence Interval
The following theorem shows that the confidence interval constructed by the ocd_CI algorithm has the desired coverage level whenever the base online changepoint detection procedure satisfies a patience and detection delay condition.
Theorem 1. Let p ≥ 2 and fix α ∈ (0, 1). Suppose that ϑ ≥ β > 0. Let CP be an online changepoint procedure characterized by an extended stopping time N satisfying condition (3) for some r ≥ 1. Then, with inputs (X_t)_{t∈N}, β > 0, a ≥ 0, CP, d₁ = √(5rβ²/(9s log₂(2p))), d₂ = 4d₁² and ℓ ≥ 0, the output confidence interval C from Algorithm 1 satisfies P_{z,θ,Σ}(z ∈ C) ≥ 1 − α.

As mentioned in Section 2, our coverage guarantee in Theorem 1 holds even with ℓ = 0, i.e. with no additional sampling. Condition (3) places a joint assumption on the base changepoint procedure CP and the parameter r, the latter of which appears in the inputs d₁ and d₂ of Algorithm 1. The first term on the left-hand side of (3) is the false alarm rate of the stopping time N associated with CP. The second term can be interpreted as an upper bound on the probability of the detection delay of N being larger than r, and in addition we also need r to be at least of order s/β², up to logarithmic factors, for the third term to be sufficiently small. See Section 3.3 for further discussion, where in particular we provide a choice of r for which (3) holds with the ocd′ base procedure.
We now provide a guarantee on the length of the ocd_CI confidence interval.
Theorem 2. Fix α ∈ (0, 1). Assume that θ has an effective sparsity of s := s(θ) ≥ 2 and that ϑ ≥ β > 0. Let CP be an online changepoint detection procedure characterized by an extended stopping time N that satisfies (3) for some r ≥ 1. Then there exists a universal constant C > 0 such that, with the inputs of Theorem 1 and with ℓ of order r, the length of the output confidence interval C is, with high probability, at most a universal constant multiple of r.

As mentioned following Theorem 1, we can take r to be the maximum of an appropriate quantile of the detection delay distribution of CP and a quantity that is of order s/β², up to logarithmic factors. The main conclusion of Theorem 2 is that, with high probability, the length of the confidence interval is then of this same order r. Whenever the quantile of the detection delay distribution achieves the maximum above -- which is the case, up to logarithmic factors, for the ocd′ base procedure (see Proposition 5) -- we can conclude that, with high probability, the length of the ocd_CI confidence interval is of the same order as this detection delay quantile (which is the best one could hope for). Note that the choices of inputs in Theorem 2 are identical to those in Theorem 1, except that we now ask for order r additional observations after the changepoint declaration.

Support Recovery
Recall the definition of S(θ) from the beginning of this section, and denote S^β(θ) := {j ∈ [p] : |θ_j| ≥ b_min}, where b_min, defined in Algorithm 1, is the smallest positive scale in B ∪ B₀. We will suppress the dependence on θ of both of these quantities in this subsection. Theorem 3 below provides a support recovery guarantee for Ŝ, defined in Algorithm 1. Since neither Ŝ nor the anchor coordinate ĵ defined in the algorithm depends on d₂, we omit its specification; the choices of the other tuning parameters mimic those in Theorems 1 and 2.
Theorem 3. Let the conditions of Theorem 1 hold.
(a) Then, with inputs (X_t)_{t∈N}, β > 0, a ≥ 0, CP, d₁ = √(5rβ²/(9s log₂(2p))) and ℓ ≥ 0, we have with high probability that Ŝ ⊆ S^β.
(b) Now assume that θ has an effective sparsity of s := s(θ) ≥ 2. Then there exists a universal constant C > 0 such that, with inputs (X_t)_{t∈N}, β > 0, a = C log(rp/α), CP, d₁ = √(5rβ²/(9s log₂(2p))) and ℓ ≥ 80r, we have with high probability that S ⊆ Ŝ ∪ {ĵ}.

Note that S ⊆ S^β ⊆ {j ∈ [p] : θ_j ≠ 0}. Thus, part (a) of the theorem reveals that, with high probability, our support estimate Ŝ does not contain any noise coordinates. Part (b) offers a complementary guarantee on the inclusion of all "big" signal coordinates, provided we augment our support estimate with the anchor coordinate ĵ. See also the further discussion of this result following Proposition 4 below and in Section 3.3.
We now turn our attention to the optimality of our support recovery algorithm, by establishing a complementary minimax lower bound on the performance of any support estimator. In fact, we can already establish this optimality after restricting the cross-sectional covariance matrix to be the identity matrix. Thus, given θ ∈ R^p and z ∈ N₀, we write P_{z,θ} for a probability measure under which (X_n)_{n∈N} are independent with X_1, ..., X_z ∼ N_p(0, I_p) and X_{z+1}, X_{z+2}, ... ∼ N_p(θ, I_p). Define T to be the set of stopping times with respect to the natural filtration (F_n)_{n∈N₀}, and, for r > 0 and m ∈ N, define a subclass T_{r,m} ⊆ T of stopping times with suitably controlled patience and detection delay. Write 2^{[p]} for the power set of [p], equipped with the symmetric difference metric d(A, B) := |A △ B|. For any stopping time N, a support estimator ψ is said to be F_N-measurable if, for any A ∈ 2^{[p]} and n ∈ N₀, we have {ψ = A} ∩ {N = n} ∈ F_n.

Proposition 4. For r > 0 and m ≥ 15, any F_N-measurable support estimator based on some N ∈ T_{r,m} incurs, on average, a non-vanishing proportion of errors over the m weakest signal coordinates.

This proposition considers any support estimation algorithm obtained from a stopping time in T_{r,m}, and we note that such a competing procedure is even allowed to store all data up to this stopping time, in contrast to our online algorithm. This result can be interpreted as an optimality guarantee for the support recovery property of the ocd_CI algorithm presented in Theorem 3(b), provided that the base procedure N belongs to the class T_{r,m}, and that N and r satisfy (3). See Section 3.3 below for further discussion.

Using ocd′ as the Base Procedure
In this subsection, we provide a value of r that suffices for condition (3) to hold when we take our base procedure to be ocd′. For the convenience of the reader, this algorithm is presented as Algorithm 3 in Section 5.3, where we also provide interpretation of the input parameters ã, T^{diag} and T^{off}.
By combining Proposition 5 with Theorems 1, 2 and 3 respectively, we immediately arrive at the following corollary.
Corollary 6 reveals that, when ocd′ is used as the base procedure, the ocd_CI methodology provides guaranteed confidence interval coverage. Moreover, up to poly-logarithmic factors, with an additional O(1 ∨ (s/β²)) post-change observations, the ocd_CI interval length is of the same order as the average detection delay. In terms of signal recovery, Corollary 6(b) shows that, with high probability, ocd_CI with inputs as given in that result selects all signal coordinates whose magnitude exceeds ϑ/s^{1/2}, up to logarithmic factors. Focusing for simplicity of discussion on the case β = ϑ with s/ϑ² bounded away from zero (though see also Section 3.5 for discussion of the effect of the choice of β), Proposition 10 also reveals that the ocd′ base procedure belongs to T_{r,m} with r of order s/ϑ², up to logarithmic factors, and m = |{j : |θ_j| ≤ 1/(8√r)}|. On the other hand, Proposition 4 shows that any such support estimation algorithm makes on average a non-vanishing fraction of errors in distinguishing between noise coordinates and signals that are below the level ϑ/s^{1/2}, again up to logarithmic factors. In other words, with high probability, the ocd_CI algorithm with base procedure ocd′ selects all signals that are strong enough (up to logarithmic factors) to be reliably detected, while at the same time including no noise coordinates (see Corollary 6(a)).

Relaxing the Gaussianity assumption
It is natural to ask to what extent the theory of Sections 3.1, 3.2 and 3.3 can be generalised beyond the Gaussian setting. The purpose of this subsection, then, is to describe how our earlier results can be modified to handle both sub-Gaussian and sub-exponential data. Recall that we say a random variable Z with EZ = 0 is sub-Gaussian with variance parameter σ² > 0 if Ee^{λZ} ≤ e^{σ²λ²/2} for all λ ∈ R, and is sub-exponential with variance parameter σ² > 0 and rate parameter A > 0 if Ee^{λZ} ≤ e^{σ²λ²/2} for all |λ| ≤ A.
We first consider the sub-Gaussian setting, where X_1, ..., X_z, X_{z+1} − θ, X_{z+2} − θ, ... are independent, each having sub-Gaussian components with variance parameter 1. Note that this data generating mechanism no longer requires all pre-change observations to be identically distributed, and likewise the post-change observations need not all have the same distribution. We assume that the base changepoint procedure, characterized by an extended stopping time N, satisfies a slightly strengthened version of (3), denoted (5), for some r ≥ 1. Under (5), Theorems 1, 2 and 3 hold with the same choices of input parameters. Moreover, the ocd′ base procedure satisfies the conclusion of Proposition 5, i.e. there exists a universal constant C′ > 0 such that (5) holds for r ≥ r₁ = r₁(C′) in (4), provided that we use the modified input ã = √(2 log(32p²γ log₂(2p))).

Generalising these ideas further, now consider the model where X_1, ..., X_z, X_{z+1} − θ, X_{z+2} − θ, ... are independent, each having sub-exponential components with variance parameter 1 and rate parameter A > 0. In this setting, provided the base procedure satisfies (5) for some r ≥ 1 and ϑ ≤ 2A² log₂(2p), Theorems 1, 2 and 3 hold when we redefine a := C max{log(rp/α), (1/A) log(rp/α)} and d₁ := max{√(5rβ²/(9s log₂(2p))), 5rβ²/(9As log₂(2p))}. Furthermore, with the modified input ã = √(2 log(32p²γ log₂(2p))) ∨ (2/A) log(32p²γ log₂(2p)), the ocd′ base procedure satisfies the conclusion of Proposition 5 for r at least a threshold defined in Section 5.4 in terms of a universal constant C′ > 0.

The claims made in the previous two paragraphs are justified in Section 5.4. These results confirm the flexible scope of the ocd_CI methodology beyond the original Gaussian setting, at least as far as sub-exponential tails are concerned. Where data may exhibit heavier tails than this, clipping (truncation) and quantile transformations may represent viable ways to proceed, though further research is needed to confirm the theoretical validity of such approaches.
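To make the last suggestion concrete, one might clip each coordinate at a user-chosen level, or map coordinates to normal scores via the empirical distribution of a historical pre-change sample. Both are heuristic sketches of our own, with, as noted above, no theoretical guarantee claimed here; the training-sample device and the function names are assumptions for illustration:

```python
import numpy as np
from statistics import NormalDist

def clip_stream(x, c):
    """Coordinate-wise clipping (truncation) at level c > 0; clipped
    coordinates are bounded, hence sub-Gaussian.  The level c is a user
    choice, not a quantity prescribed by the theory above."""
    return np.clip(x, -c, c)

def quantile_transform(x_new, train):
    """Map each coordinate of a new observation to a normal score via the
    empirical CDF of a historical (pre-change) training sample of shape
    (n, p); a heuristic sketch only."""
    nd = NormalDist()
    n = train.shape[0]
    out = np.empty(x_new.shape[0])
    for j in range(x_new.shape[0]):
        # mid-rank of x_new[j] within the training column, mapped through
        # the standard normal quantile function
        rank = np.sum(train[:, j] <= x_new[j]) + 0.5
        out[j] = nd.inv_cdf(rank / (n + 1))
    return out
```

Either transform would be applied to each incoming observation before it is passed to the base procedure, preserving the online character provided the training sample is of fixed size.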

Confidence interval construction with unknown signal size
In some settings, an experienced practitioner may have a reasonable idea of the magnitude ϑ of the ℓ₂-norm of the vector of mean change that would be of interest to them, and this would facilitate the choice of a lower bound β for ϑ in Algorithm 1. However, it is also worth considering the effect of this choice, and the extent to which its impact can be mitigated.
We first remark that the coverage probability guarantee for the ocd_CI interval in Corollary 6 remains valid for any (arbitrarily loose) lower bound β on ϑ. The only issue is in terms of power: if β is chosen to be too small, then both the average detection delay and the high-probability bound on the length of the confidence interval may be inflated. In the remainder of this subsection, then, we describe a simple modification to Algorithm 1 that permits a loose lower bound β to be employed, retaining coverage validity with only a logarithmic effect on the high-probability bound on the length of the confidence interval. The only other price we pay is that the computational cost increases as β decreases, as we describe below.
Our change to Algorithm 1 is as follows: we replace the definition of b_min, and enlarge the sets of scales B and B₀ accordingly, in both the ocd′ base procedure and in Algorithm 1. The rest of the algorithm remains as previously stated. Thus, if we choose a conservative (very small) β, then the effect of the modification is to increase the number of scales on which we search for a change, so that the largest element of B is of order 1. In order to state our theoretical results for this modified algorithm, first define b_opt := {ϑ/√(s(θ) log₂(2p))} ∧ 1, and write M for the number of scales in the enlarged grid. Under the same assumptions as in Proposition 5, and modifying the inputs to (X_t)_{t∈N}, β > 0, ã = √(2 log(16p²γM)), T^{diag} = log(16pγ(M + 1)) and T^{off} = 8 log(16pγM), it can be shown, using very similar arguments to those in the proof of Proposition 5, that there exists a universal constant C′ > 0 such that, with r ≥ C′ b_opt^{−2} log(pγα^{−1}(β^{−2} ∨ 1)) =: r₁, the output N satisfies the required patience and detection delay condition. With this in place, we can derive Corollary 7 below, which is the analog of Corollary 6 for our modified algorithm.
Corollary 7. Fix α ∈ (0, 1) and γ > 0. Assume that θ has an effective sparsity of s := s(θ) ≥ 2, that ϑ ≥ β > 0 and that z ≤ 2αγ. Let (X_t)_{t∈N}, β > 0, ã = √(2 log(16p²γM)), T^{diag} = log(16pγ(M + 1)) and T^{off} = 8 log(16pγM) be the inputs of the modified ocd′ procedure, and let the remaining inputs of Algorithm 1 be chosen as in Corollary 6. Then the output confidence interval retains the nominal coverage guarantee, and its length is, with high probability, of order (s/ϑ²) ∨ 1, up to logarithmic factors.

The main difference between Corollaries 7 and 6 concerns the high-probability guarantees on the length of the confidence interval. Ignoring logarithmic factors, with high probability the length of the confidence interval in the modified algorithm is at most of order (s/ϑ²) ∨ 1, whereas for the original algorithm it was of order (s/β²) ∨ 1. Thus, the modified algorithm has the significant advantage of enabling a conservative choice of β, with only a logarithmic effect on the length guarantee relative to an oracle procedure with knowledge of ‖θ‖₂. The computational complexity per new observation and the storage requirements of this modified algorithm are O(p²{log(ep) + log(1/β)}), so the order of magnitude is increased relative to the original algorithm only in an asymptotic regime where β is small by comparison with 1/p^K for every K > 0. Moreover, the modified algorithm still does not require storage of historical data, and the computational time per new observation after observing n observations does not increase with n. Nevertheless, since the computational complexity now depends on β, the modified algorithm does not strictly satisfy our definition of an online algorithm given in the introduction.
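To illustrate the source of the extra O(log(1/β)) cost, here is one plausible dyadic grid of positive scales running from β up to order 1 (our own construction, for illustration only; the exact grid used by the modified algorithm differs):

```python
import numpy as np

def scale_grid(beta):
    """A dyadic grid of positive scales from beta up to order 1, doubling
    at each step, so its cardinality is O(log(1/beta)) for small beta."""
    grid = []
    b = float(beta)
    while b < 1.0:
        grid.append(b)
        b *= 2.0
    grid.append(b)  # first scale of order at least 1
    return np.array(grid)
```

Halving β adds just one scale to the grid, which is why a very conservative choice of β inflates the per-observation cost only logarithmically.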

Numerical studies
In this section, we study the empirical performance of the ocd_CI algorithm. Throughout this section, by default, the ocd_CI algorithm is used in conjunction with the recommended base online changepoint detection procedure CP = ocd.

Tuning Parameters
Chen et al. (2022) found that the theoretical choices of the thresholds T^{diag} and T^{off} for the ocd procedure were a little conservative, and therefore recommended determining these thresholds via Monte Carlo simulation; we replicate the method for choosing these thresholds described in their Section 4.1. Likewise, as in Chen et al. (2022), we take a = ã = √(2 log p) in our simulations.
For d_1 and d_2, as suggested by our theory, we take d_2 = 4d_1², and take d_1 to be of the form d_1 = c log(p/α). Here, we tune the parameter c > 0 through Monte Carlo simulation, as we now describe. We considered the parameter settings p ∈ {100, 500}, s ∈ {2, ⌊√p⌋, p}, ϑ ∈ {2, 1, 1/2}, Σ = I_p, α = 0.05, β ∈ {2ϑ, ϑ, ϑ/2}, γ = 30000 and z = 500. Then, with θ generated as ϑU, where U is uniformly distributed on the union of all s-sparse unit spheres in ℝ^p (independently of our data), we studied the coverage probabilities of the ocd CI confidence interval, estimated over 2000 repetitions as c varies, for data generated according to the Gaussian model defined at the beginning of Section 3. Figure 1 displays a subset of the results (the omitted curves were qualitatively similar). On this basis, we recommend c = 0.5 as a safe choice across a wide range of data generating mechanisms, and we used this value of c throughout our confidence interval simulations. The previous two paragraphs, in combination with Algorithms 1 and 2, provide the practical implementation of the ocd CI algorithm that we use in our numerical studies and that we recommend for practitioners. The only quantity that remains for the practitioner to input (other than the data) is β, which represents a lower bound on the Euclidean norm of the vector of mean change. Fortunately, this description makes β easily interpretable by practitioners. In cases where an informed default choice is not available, practitioners can make a conservative (very small) choice and use an increased grid of scales, with only a small inflation in the confidence interval length guarantee and computational cost; see Section 3.5.
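For readers wishing to replicate this data generating mechanism, one convenient way to draw U uniformly from the union of all s-sparse unit spheres is to choose a uniformly random size-s support and normalize a standard Gaussian vector on it (a sketch; the function name is ours):

```python
import numpy as np

def sample_sparse_direction(p, s, rng):
    """Draw U uniformly from the union of all s-sparse unit spheres in R^p:
    choose an s-subset of coordinates uniformly at random, then place a
    uniform point of the unit sphere of R^s on those coordinates."""
    support = rng.choice(p, size=s, replace=False)
    g = rng.standard_normal(s)  # rotation invariance gives uniformity on the sphere
    u = np.zeros(p)
    u[support] = g / np.linalg.norm(g)
    return u

rng = np.random.default_rng(0)
theta = 2.0 * sample_sparse_direction(p=100, s=5, rng=rng)  # theta = vartheta * U, vartheta = 2
```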

Coverage Probability and Interval Length
In Table 1, we present the detection delay of the ocd procedure, as well as the coverage probabilities and average confidence interval lengths of the ocd CI procedure, all estimated over 2000 repetitions, with the same set of parameter choices and data generating mechanism as in Section 4.1. From this table, we see that the coverage probabilities are at least at the nominal level (up to Monte Carlo error) across all settings considered. Underspecification of β means that the grid of scales available for indices in Ŝ is shifted downwards, which increases the probability that b̂_j underestimates θ_j for j ∈ Ŝ. In turn, this leads to slight conservativeness in the coverage probability (and a correspondingly increased average confidence interval length). On the other hand, overspecification of β yields a shorter interval on average, though the intervals nevertheless retained nominal coverage in all cases considered. Another interesting feature of Table 1 is the comparison between the average confidence interval lengths and the corresponding average detection delays. Corollary 6(b), as well as Chen et al. (2022, Theorem 4), indicates that both of these quantities are of order (s/β²) ∨ 1, up to polylogarithmic factors in p and γ; of course, whenever the confidence interval includes the changepoint, its length must be at least as long as the detection delay. Nevertheless, in most settings, it is only 2 to 3 times longer on average, and in all cases considered it was less than 7 times longer on average. Moreover, we also observe that the confidence interval length increases with s and decreases with β, as anticipated by our theory.
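The coverage probabilities reported in Table 1 are simple empirical proportions; a minimal helper (ours, for illustration) is:

```python
import numpy as np

def empirical_coverage(intervals, z):
    """Proportion of repetitions whose interval [lo, hi] contains the true
    changepoint z; with 2000 repetitions at nominal level 0.95, the Monte
    Carlo standard error is below 0.01."""
    arr = np.asarray(intervals, dtype=float)
    return float(np.mean((arr[:, 0] <= z) & (z <= arr[:, 1])))
```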
For comparison, we also present the corresponding coverage probabilities and average lengths of confidence intervals obtained using an offline procedure as described in the introduction. More precisely, after the ocd algorithm has declared a change, we treat the data up to the stopping time as an offline dataset, and apply the inspect algorithm (Wang and Samworth, 2018), followed by the one-step refinement of Kaul et al. (2021), to construct an estimate, ẑ_KFJS, of the changepoint location. As recommended by Kaul et al. (2021), we obtain an estimator ϑ̂_KFJS of ϑ as the ℓ_2-norm of the soft-thresholded difference between the empirical mean vectors before and after ẑ_KFJS, with the soft-thresholding parameter chosen via the Bayesian Information Criterion. The final confidence interval is then of the form [ẑ_KFJS − q_{α/2}/ϑ̂²_KFJS, ẑ_KFJS + q_{α/2}/ϑ̂²_KFJS], where q_{α/2} is the (1 − α/2) quantile of the distribution of the (almost surely unique) maximizer of a two-sided Brownian motion with triangular drift, as given by Kaul et al. (2021, Theorem 3.1). In particular, q_{0.025} = 11.03. The last two columns of Table 1 reveal that both the coverage probabilities and the confidence interval lengths from this procedure are disappointing, and not competitive with those of the ocd CI algorithm. There are two main reasons for this. First, the nature of the online problem means that the changepoint is often located near the right-hand end of the dataset up to the stopping time, whereas the theoretical guarantees of Kaul et al. (2021) are obtained in an asymptotic setting where the fraction of data on either side of the change is bounded away from zero; thus, the estimated changepoint from the one-step refinement is often quite poor. Second, the estimated magnitude of change, ϑ̂_KFJS, is often a significant underestimate of ϑ due to the soft-thresholding operation, and this can lead to substantially inflated confidence interval lengths. We emphasize that the Kaul et al. (2021) procedure was not designed for use in this online setting, but we nevertheless present these results to illustrate the fact that naive application of offline methods to sequential problems may fail badly.
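Concretely, given the refined location estimate and the estimated change magnitude, the offline-style interval described above takes the following form (a sketch; variable names are ours, and the default q is the q_{0.025} = 11.03 value quoted from Kaul et al., 2021):

```python
def kfjs_interval(z_hat, vartheta_hat, q=11.03):
    """Offline-style confidence interval z_hat +/- q / vartheta_hat^2.

    Note the sensitivity to underestimation of vartheta: halving vartheta_hat
    quadruples the interval length, which is the inflation discussed above.
    """
    half_width = q / vartheta_hat ** 2
    return (z_hat - half_width, z_hat + half_width)
```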
While Table 1 covers the most basic setting for our methodology, our theory in Section 3 applies equally well to data with spatial dependence across different coordinates. To assess whether this theory carries over to empirical performance, Table 3 in the supplement (Section 5.5) presents corresponding coverage probabilities and lengths for the ocd CI procedure with the cross-sectional covariance matrix Σ = (Σ_{jk})_{j,k∈[p]} taken to be Toeplitz with parameter ρ ∈ {0.5, 0.75}; in other words, Σ_{jk} = ρ^{|j−k|}. The results are again encouraging: the coverage remains perfectly satisfactory in all settings considered, and moreover, the lengths of the confidence intervals are very similar to those in Table 1.
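Generating data with this dependence structure is straightforward: build Σ with entries ρ^{|j−k|} and multiply standard Gaussian noise by a Cholesky factor (a sketch; function names are ours):

```python
import numpy as np

def toeplitz_cov(p, rho):
    """Toeplitz covariance matrix with Sigma_{jk} = rho^{|j-k|}."""
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def sample_row(Sigma, rng):
    """One observation X_t ~ N(0, Sigma) via the lower Cholesky factor."""
    L = np.linalg.cholesky(Sigma)
    return L @ rng.standard_normal(Sigma.shape[0])

rng = np.random.default_rng(1)
Sigma = toeplitz_cov(5, 0.5)
x = sample_row(Sigma, rng)
```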

Support Recovery
We now turn our attention to the empirical support recovery properties of the quantity Ŝ (in combination with the anchor coordinate ĵ) computed in the ocd CI algorithm. In Table 2, we present the probabilities, estimated over 500 repetitions, that Ŝ ⊆ S_β and that Ŝ ∪ {ĵ} ⊇ S, for p = 100, s ∈ {5, 50}, ϑ ∈ {1, 2}, Σ = I_p, and for three different signal shapes: in the uniform, inverse square root and harmonic cases, the nonzero coordinates of θ were taken proportional to 1, j^{−1/2} and j^{−1} respectively. As inputs to the algorithm, we set a = ã = √(2 log p), α = 0.05, d_1 = 2 log(p/α), β = ϑ, and, motivated by Corollary 6, took an additional ℓ = 2sβ^{−2} log_2(2p) log p post-declaration observations in constructing the support estimates. The results reported in Table 2 provide empirical confirmation of the support recovery properties claimed in Corollary 6.
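The three signal shapes can be generated as follows (a sketch under our reading of the shape names, i.e. nonzero coordinates proportional to 1, j^{−1/2} and j^{−1}, normalized so that the Euclidean norm of θ equals ϑ; the function and dictionary keys are ours):

```python
import numpy as np

def make_signal(p, s, vartheta, shape):
    """Mean-change vector with s nonzero coordinates of the given shape,
    scaled so that its Euclidean norm equals vartheta."""
    j = np.arange(1, s + 1)
    raw = {"uniform": np.ones(s),
           "inverse_sqrt": j ** -0.5,
           "harmonic": 1.0 / j}[shape]
    theta = np.zeros(p)
    theta[:s] = vartheta * raw / np.linalg.norm(raw)
    return theta
```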
Finally in this section, we consider the extent to which the ℓ additional observations are necessary in practice to provide satisfactory support recovery. In the left panel of Figure 2, we plot Receiver Operating Characteristic (ROC) curves to study the estimated support recovery probabilities with ℓ = 0 as a function of the input parameter d_1, which can be thought of as controlling the trade-off between P(Ŝ ∪ {ĵ} ⊇ S) and P(Ŝ ⊆ S_β). The fact that the triangles in this plot all lie to the left of the dotted vertical line confirms the theoretical guarantee provided in Corollary 6(a) (which holds with d_1 = 2 log(p/α), and even with ℓ = 0); the less conservative choice d_1 = √(2 log p), which roughly corresponds to an average of one noise coordinate included in Ŝ, allows us to capture a larger proportion of the signal. From this panel, we also see that additional sampling is needed to ensure that, with high probability, we recover all of the true signals. This is unsurprising: for instance, with a uniform signal shape and s = 50, it is very unlikely that all 50 signal coordinates will have accumulated such similar levels of evidence as to appear in Ŝ ∪ {ĵ} by the time of declaration. The right panel confirms that, with an inverse square root signal shape, the probability that we capture each signal increases with the signal magnitude, and that even small signals tend to be selected with higher probability than noise coordinates.

US Covid-19 Data Example
In this section, we apply ocd CI to a dataset of weekly deaths in the United States between January 2017 and June 2020 (available at: https://www.cdc.gov/nchs/nvss/vsrr/data). An obvious discrepancy between the underlying dynamics of these weekly deaths and the conditions assumed in our theory in Section 3 is temporal dependence, particularly that induced by seasonal and weather effects. Although we can never hope to remove this dependence entirely, we seek to mitigate its impact by pre-processing the data as follows: for each of the 50 states, as well as Washington, D.C. (so p = 51), we first estimate the "seasonal death curve", i.e. the mean number of deaths for each day of the year, for each state. The seasonal death curve is estimated by first splitting the weekly death numbers evenly across the seven relevant days, and then estimating the average number of deaths on each day of the year from these derived daily death numbers using a Gaussian kernel with a bandwidth of 20 days. As the death numbers approximately follow a Poisson distribution, we apply a square-root transformation to stabilize the variance; more precisely, the transformed weekly excess deaths are computed as the difference between the square roots of the weekly deaths and of the predicted weekly deaths from the seasonal death curve. Finally, we standardize the transformed weekly excess deaths using the mean and standard deviation of the transformed data over the training period. The standardized, transformed data are plotted in Figure 3 for 12 states. When applying ocd CI to these data, we take a = ã = √(2 log p), T_diag = log{16pγ log_2(4p)}, T_off = 8 log{16pγ log_2(2p)}, d_1 = 0.5 log(p/α) and d_2 = 4d_1², with α = 0.05, β = 50 and γ = 1000. On the monitoring data (from 30 June 2019), the ocd CI algorithm declares a change in the week ending 28 March 2020, and provides a confidence interval from the week ending 21 March 2020 to the week ending 28 March 2020. This coincides with the beginning of the first wave of Covid-19
deaths in the United States. The algorithm also identifies New York, New Jersey, Connecticut, Michigan and Louisiana as the estimated support of the change. Interestingly, if we run the ocd CI procedure from the beginning of the training data period (while still standardizing as before, owing to the lack of available data prior to 2017), it identifies a subtler change in the week ending 6 January 2018, with a confidence interval of [17 December 2017, 6 January 2018]. This corresponds to a bad influenza season at the end of 2017 (see https://www.cdc.gov/flu/about/season/flu-season-2017-2018.htm). Despite the natural interpretation of these findings, we recognize that the model in Section 3, under which we proved our theoretical results, cannot capture the full complexity of the temporal dependence in this dataset, even after our pre-processing transformations. A complete theoretical analysis of the performance of ocd CI in time-dependent settings is challenging and beyond the scope of the current work; in practical applications, we advise careful modeling of this dependence to facilitate the construction of appropriate residuals from which its main effects have been removed.
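The pre-processing pipeline described above can be sketched as follows (assuming daily counts have already been derived by splitting weekly totals evenly across days; the function names and the circular treatment of the year boundary are our assumptions):

```python
import numpy as np

def seasonal_death_curve(daily_deaths, day_of_year, bandwidth=20.0):
    """Gaussian-kernel estimate of the mean number of deaths on each of the
    365 days of the year, from a vector of derived daily death counts."""
    days = np.arange(365)
    # circular distance, so late December borrows strength from early January
    d = np.abs(day_of_year[None, :] - days[:, None])
    d = np.minimum(d, 365 - d)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ daily_deaths) / w.sum(axis=1)

def transformed_excess(weekly_deaths, weekly_predicted):
    """Square-root transform to stabilize the variance of Poisson-like counts."""
    return np.sqrt(weekly_deaths) - np.sqrt(weekly_predicted)
```

The final standardization step (subtracting the training-period mean and dividing by the training-period standard deviation, per state) is then applied to the output of `transformed_excess`.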

Supplementary material
In this section, we provide proofs of our main results (Section 5.1), auxiliary results and their proofs (Section 5.2), pseudocode for the ocd and ocd′ base procedures (Section 5.3), extensions of our results to sub-Gaussian and sub-exponential settings (Section 5.4) and additional simulation results (Section 5.5). Throughout this section, we use P instead of P_{z,θ,Σ} when it is clear from the context.

Proofs of main results
Proof of Theorem 1. Fix r ≥ 1 satisfying assumption (3) in the theorem, n > z, j ∈ [p], b ∈ B and j′ ∈ [p] \ {j}. We assume, without loss of generality, that θ_j ≥ 0; the case θ_j < 0 can be analyzed similarly. Recall that b_min, defined in Algorithm 1, is the smallest positive scale in B ∪ B_0, and write b̄_j … Thus, recalling the definitions of Ŝ and b̂_j from Algorithm 1, we have … Moreover, by a similar argument to (28) in the proof of Proposition 8, for b ∈ (0, θ_j), we have … Combining (7) and (8), we have … where the last inequality follows from the choice of d_1 and d_2 in the statement of the theorem and the standard Gaussian tail bound used at the end of the proof of Lemma 9. By a union bound and (3), we have … as required.
Proof of Theorem 2. Fix r ≥ 1 satisfying assumption (3). Denote ℓ_0 := 80r; then ℓ ≥ ℓ_0. Since the output of Algorithm 1 remains unchanged if we replace (X_t^j : t ∈ ℕ) by (−X_t^j : t ∈ ℕ) for any fixed j, we may assume without loss of generality that θ_1 ≥ θ_2 ≥ ϑ/√(s log_2(2p)). For j ∈ {1, 2}, we denote b … Now define the following events: … and all j′ ∈ [p] \ {j} with θ_{j′} ≤ δ, … Finally, we denote the event … We note that for j ∈ {1, 2}, on Ω_{4,j}, we have b̂_j ∈ B ∪ B_0. Then, on the event ∩_{k=0}^{4} Ω_k, we have … Thus, it suffices to control the probability of ∪_{k=0}^{4} Ω_k^c. First, we note that … On Ω_0, we have for any j ∈ [p] and b ∈ B ∪ B_0 that … Thus, by Lemma 9 (taking µ = −b/2) and a union bound, we have … Now observe that, for all z < n ≤ z + r, b ∈ B ∪ B_0, j ∈ [p] and j′ ∈ [p] \ {j}, we have … Hence, when θ_j ≤ δ, we have that … where Y_1 ∼ N(δ√(n − z + ℓ), 1), and where the last inequality follows from the relation a = 2δ√(r + ℓ). Thus, by a union bound, we have … Recall that u = ℓ_0 b_*²/80. We therefore have for any z < n ≤ z + r, j ∈ [p] and b ∈ B that … where the final inequality follows from Lemma 11(a), applied with U = … Thus, by a union bound, we have … where the last inequality follows from r = ℓ_0/80 ≤ ℓ/80. Recall that … Hence, for any z < n ≤ z + r, j ∈ [p] \ {1} and b ∈ B, we have … Here, in the final bound, we have used the facts that … when t_{n,b}^j ≤ ℓ/16, as well as the standard Gaussian tail bound used at the end of the proof of Lemma 9. By a similar argument, we also have for any z < n ≤ z + r and b ∈ B that … Thus, by a union bound, we have … Hence, combining (9), (10), (13), (15) and (18), we conclude that … where the last inequality follows from the choice of a with a sufficiently large universal constant C, and from (3).
where, in the final bound, we have used the facts that … where the penultimate inequality follows from (9), (10), (13), (15) and (19), and the last inequality follows from the choice of a with a sufficiently large universal constant C, and from (3).
Proof of Proposition 5. Following the proof of Chen et al. (2022, Theorem 1(b)) up to, but not including, (16), we have for every j ∈ … It follows by a union bound that … Next, for every j … Thus, by another union bound, we have … From (24) and (25), we deduce that … On the other hand, for a sufficiently large universal constant C > 0, we have … Thus, by Proposition 10, and by increasing the value of C if necessary, we have for every r ≥ r_1 that … where the penultimate inequality follows from the fact that xe^{−x} ≤ e^{−x/2} for x ≥ 0. The desired result follows by combining (26) and (27).
Proof. The first inequality holds since µU_ξ ≤ µU_0 = 0. For the second and third inequalities, we may assume without loss of generality that µ > 0, since the result is clear when µ = 0, and if µ < 0 then the result follows from the corresponding result with µ > 0 upon setting Y_i := −Y_i for i ∈ ℕ. Note that (U_n − nµ)_{n∈ℕ_0} is a standard Gaussian random walk starting at 0. Let (B_t)_{t∈[0,∞)} denote a standard Brownian motion starting at 0. Then we have for any y ∈ ℕ_0 and u > 0 that … where the final inequality follows from Siegmund (1986, Proposition 2.4 and Equation (2.5)).
Thus, for y ∈ [0, ∞), we have … where the first inequality follows from (29) and the fact that U_⌈y⌉ ∼ N(⌈y⌉µ, ⌈y⌉), and the last inequality follows from the standard normal tail bound 1 − Φ(x) ≤ e^{−x²/2}/2 for x ≥ 0.
In Proposition 10, we assume the Gaussian data generating mechanism given at the beginning of Section 3, and show that, for the ocd base procedure, the quantity g(r; N) from (2) has essentially the same form as the final term in (3).
(a) Let … Hence, by the standard Gaussian tail bound used at the end of the proof of Lemma 9, we have … where … Then … where the first inequality holds because w_1 is increasing in φ_1 and decreasing in both φ_2 and φ_3. Hence, using the fact that −(U + V + Y) ≤_st U + V + Y, as well as (39) and (40), we have … Note that the assumption guarantees that E(W_2) > 0. Hence, by the standard Gaussian tail bound used at the end of the proof of Lemma 9, we have … where … Calculating the partial derivatives of w_2 with respect to φ_1, φ_3 and φ_4 and simplifying the expressions, we find that w_2 is increasing in φ_4 and decreasing in both φ_1 and φ_3, and hence … Hence, using the fact that −(U + Y + Z) ≤_st U + Y + Z, as well as (41) and (42), we have … as required.

The ocd and ocd base procedures
Algorithms 2 and 3, which are taken from Chen et al. (2022), provide two options for a base online changepoint detection procedure. The ocd algorithm is recommended for practical use, while its variant, the ocd′ algorithm, satisfies the key condition (3) that underpins our theoretical results in Section 3.
To help make the paper self-contained, and to aid interpretability, we remark that the input parameter ã represents a thresholding level employed in the definitions of the quantities Q_b^j in Algorithm 2 and Q̃_b^j in Algorithm 3, designed to ensure that we only aggregate over signal coordinates. The input parameters T_diag and T_off represent critical values for the diagonal and off-diagonal statistics S_diag and S_off respectively, both defined in these algorithms. In other words, we declare a change as soon as either S_diag ≥ T_diag or S_off ≥ T_off.
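In pseudocode terms, the declaration rule is a simple first-crossing check over the stream of statistics (a sketch; the computation of S_diag and S_off themselves is specified in Algorithms 2 and 3):

```python
def declaration_time(stat_stream, T_diag, T_off):
    """Return the index (1-based) of the first observation at which either
    S_diag >= T_diag or S_off >= T_off, or None if the stream is exhausted.

    stat_stream yields (S_diag, S_off) pairs, one per new observation.
    """
    for n, (s_diag, s_off) in enumerate(stat_stream, start=1):
        if s_diag >= T_diag or s_off >= T_off:
            return n
    return None
```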
(a) Let a_1, . . ., a_n ∈ ℝ and let X_1, . . ., X_n be independent sub-Gaussian random variables with variance parameter 1. Then P(∑_{i=1}^n a_i X_i ≥ x) ≤ exp{−x²/(2∑_{i=1}^n a_i²)} for all x ≥ 0.
(b) Let a_1, . . ., a_n ∈ ℝ and let X_1, . . ., X_n be independent sub-exponential random variables with variance parameter 1 and rate parameter A > 0. Then P(∑_{i=1}^n a_i X_i ≥ x) ≤ exp[−{x²/(∑_{i=1}^n a_i²) ∧ Ax/max_i |a_i|}/2] for all x ≥ 0.
One special case in which we apply this proposition frequently is a_1 = . . . = a_n = n^{−1/2}. The two bounds are then e^{−x²/2} and e^{−x(x∧A√n)/2} respectively. The following proposition can be used in place of Lemma 9 to control excursion probabilities of sub-Gaussian and sub-exponential random walks with drift: … for y ∈ [0, ∞).
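As a quick numerical sanity check of the first of these special-case bounds (for standard Gaussian, hence sub-Gaussian, variables; the parameter values below are arbitrary choices of ours):

```python
import numpy as np

# Empirical check that P(n^{-1/2} * sum_i X_i >= x) <= e^{-x^2/2}
# when X_1, ..., X_n are i.i.d. N(0, 1).
rng = np.random.default_rng(2)
n, x, reps = 50, 2.0, 20000
normalized_sums = rng.standard_normal((reps, n)).sum(axis=1) / np.sqrt(n)
empirical = (normalized_sums >= x).mean()  # close to the exact tail 1 - Phi(2)
bound = np.exp(-x ** 2 / 2)                # the sub-Gaussian upper bound
```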
Proof. Following the same argument as at the beginning of the proof of Lemma 9, it suffices to prove only the latter inequality in each part, and only for µ > 0. … where the last inequality follows from the last three inequalities in the proof of part (a), with µ(µ ∧ A) taking the place of µ².
Our final preparatory result will be used to modify (23) in the proof of Proposition 5, which can be traced back to the proof of Chen et al. (2022, Theorem 1). Let b ≠ 0, T_diag > 0, and let Z_1, Z_2, . . . be independent centered random variables. We define the stopping time … where 'os' stands for one-sided. When Z_1, Z_2, . . . are i.i.d. N(0, 1), we can use a sequential probability ratio test argument to show that P(N_os < ∞) ≤ e^{−T_diag}. The aim of Proposition 14 below is to establish similar bounds under the sub-Gaussian and sub-exponential distributional assumptions respectively. The following argument holds in both cases. Denote S_n := ∑_{t=1}^n Z_t with S_0 := 0, V_n := exp{(bS_n − b²n/2) ∧ T_diag} and F_n := σ(Z_1, . . ., Z_n), with F_0 the trivial σ-algebra. Then for n ∈ ℕ, we have … Hence (V_n)_{n≥0} is a supermartingale with respect to the filtration (F_n)_{n≥0}. Since |V_n| ≤ e^{T_diag} for all n, we have by the Optional Stopping Theorem that 1 = E V_0 ≥ E V_{N_os} = E(V_{N_os} 1_{{N_os<∞}} + V_{N_os} 1_{{N_os=∞}}) ≥ e^{T_diag} P(N_os < ∞), as required.
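The bound P(N_os < ∞) ≤ e^{−T_diag} can also be checked by simulation in the Gaussian case (a sketch; we truncate the infinite horizon at n_max, which can only decrease the estimated probability, and the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
b, T_diag, n_max, reps = 1.0, 2.0, 1000, 4000
Z = rng.standard_normal((reps, n_max))  # i.i.d. N(0, 1) increments
S = Z.cumsum(axis=1)                    # partial sums S_1, ..., S_{n_max}
n = np.arange(1, n_max + 1)
# N_os < infinity iff b*S_n - b^2*n/2 ever reaches T_diag
crossed = ((b * S - b ** 2 * n / 2) >= T_diag).any(axis=1)
p_hat = crossed.mean()
bound = np.exp(-T_diag)                 # the supermartingale bound e^{-T_diag}
```

Since the statistic has negative drift, crossings occur early if at all, so the truncation at n_max has a negligible effect here.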

Additional simulation results
Table 3 provides additional simulation results for the ocd CI procedure under spatial dependence. The data generating mechanisms are described, and the conclusions from these results discussed, in Section 4.2.
For d ∈ ℕ, we write [d] := {1, . . ., d}. Given a, b ∈ ℝ, we denote a ∨ b := max(a, b) and a ∧ b := min(a, b). For a set S, we use 1_S and |S| to denote its indicator function and cardinality respectively. For a real-valued function f on a totally ordered set S, we write sargmax_{x∈S} f(x) := min argmax_{x∈S} f(x) and largmax_{x∈S} f(x) := max argmax_{x∈S} f(x) for the smallest and largest maximizers of f in S, and define sargmin_{x∈S} f(x) and largmin_{x∈S} f(x) analogously. For a vector … /9 and d_2 = 4d_1². Then the following statements hold: (a) With extra inputs CP = ocd′, a ≥ 0 and ℓ ≥ 0 for the modified Algorithm 1, the output confidence interval C and the support estimate Ŝ satisfy P_{z,θ,Σ}(z ∈ C) ≥ 1 − α and P_{z,θ,Σ}(Ŝ ⊆ S_β) ≥ 1 − α. (b) There exists a universal constant C > 0 such that, with extra inputs CP = modified ocd′, a = C√(log{pγα^{−1}(β^{−2} ∨ 1)}) and ℓ ≥ 80r_1 for the modified Algorithm 1, the length L of the output confidence interval C and the support estimate satisfy …

Figure 1: Coverage probabilities of the ocd CI confidence interval as the parameter c, involved in the choice of the tuning parameter d_1, varies.

Figure 2: Support recovery properties of ocd CI. In the left panel, we plot ROC curves for three different signal shapes and for sparsity levels s ∈ {5, 50}. The triangles and circles correspond to points on the curves with d_1 = 2 log(p/α) (with α = 0.05) and d_1 = √(2 log p) respectively. The dotted vertical line corresponds to P(Ŝ ⊆ S_β) = 1 − α. In the right panel, we plot the proportion of 500 repetitions for which each coordinate belongs to Ŝ ∪ {ĵ} with d_1 = √(2 log p); here, the s = 20 signals have an inverse square root shape, and are plotted in red; noise coordinates are plotted in black. Other parameters for both panels: p = 100, Σ = I_p, β = ϑ = 2, ℓ = 0, a = ã = √(2 log p).

Figure 3: Standardized, transformed weekly excess death data from 12 states (including Washington, D.C.). The monitoring period starts from 30 June 2019 (dashed line). The data from the states in the support estimate are shown in red. The confidence interval [8 March 2020, 28 March 2020] is shown as the light blue shaded region.