A Metastable Dominated Convergence Theorem

The dominated convergence theorem implies that if (f_n) is a sequence of functions on a probability space taking values in the interval [0, 1], and (f_n) converges pointwise a.e., then (∫ f_n) converges to the integral of the pointwise limit. Tao [26] has proved a quantitative version of this theorem: given a uniform bound on the rates of metastable convergence in the hypothesis, there is a bound on the rate of metastable convergence in the conclusion that is independent of the sequence (f_n) and the underlying space. We prove a slight strengthening of Tao's theorem which, moreover, provides an explicit description of the second bound in terms of the first. Specifically, we show that when the first bound is given by a continuous functional, the bound in the conclusion can be computed by a recursion along the tree of unsecured sequences. We also establish a quantitative version of Egorov's theorem, and introduce a new mode of convergence related to these notions.


Introduction
If (a_n) is a nondecreasing sequence of real numbers in the interval [0, 1], then (a_n) converges, and hence is Cauchy. Say that r(ε) is a bound on the rate of convergence of (a_n) if for every ε > 0, |a_n − a_{n′}| < ε whenever n and n′ are greater than or equal to r(ε). In general, one cannot compute a bound on the rate of convergence from the sequence itself: such a bound is not even continuous in the data, since the sequence (a_n) can start out looking like a constant sequence of 0's and then increase to 1 unpredictably.
But suppose that instead of a bound on the rate of convergence, we fix a function F : N → N and ask for an m such that |a_n − a_{n′}| < ε for every n and n′ in the interval [m, F(m)]. Since the sequence (a_n) cannot increase by ε more than 1/ε times, at least one element of the sequence 0, F(0), F(F(0)), …, F^⌈1/ε⌉(0) has the desired property. Hence there is always such a value of m less than or equal to F^⌈1/ε⌉(0). Now notice that not only is this bound on m easily computable from F and a rational ε > 0, but it is, moreover, entirely independent of the sequence (a_n). What has happened is that we have replaced the assertion

∀ε > 0 ∃m ∀n, n′ ≥ m (|a_n − a_{n′}| < ε)

by a "metastable" version,

∀ε > 0 ∀F ∃m ∀n, n′ ∈ [m, F(m)] (|a_n − a_{n′}| < ε).

The two statements are logically equivalent: an m as in the first statement is sufficient for any F in the second, and, conversely, if the first statement were false for some ε > 0 then for every m we could define F(m) to return a value large enough so that [m, F(m)] includes a rogue pair n, n′. But whereas one cannot compute a bound on the m in the first statement from ε and (a_n), one can easily compute a bound on the second m that depends only on ε and F. If (a_n) is any sequence, say that M(F) is a bound on the ε-metastable convergence of (a_n) if the following holds: for every function F : N → N there is an m ≤ M(F) such that for every n, n′ ∈ [m, F(m)], |a_n − a_{n′}| < ε.

Then what we have observed amounts to the following:
• There is a bound on the ε-metastable convergence of (a_n) if and only if there is an m such that |a_n − a_{n′}| < ε for all n, n′ ≥ m. Hence, a sequence (a_n) is Cauchy if and only if there is a bound on the ε-metastable convergence of (a_n) for every ε > 0.
• For every ε > 0 the function M(F) = F^⌈1/ε⌉(0) is a bound on the ε-metastable convergence of any nondecreasing sequence (a_n) of elements of the real interval [0, 1].
Thus there is a sense in which the second statement provides a quantitative, uniform version of the original convergence theorem.
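The bound above is simple enough to execute. The following Python sketch (the function name is ours, not from the literature) searches the iterates 0, F(0), F(F(0)), … for an interval on which a nondecreasing [0, 1]-valued sequence is ε-stable:

```python
import math

def metastable_witness(a, F, eps):
    """Search m among 0, F(0), F(F(0)), ... for an interval [m, F(m)]
    on which the nondecreasing sequence a: N -> [0, 1] varies by less
    than eps.  Success within ceil(1/eps) + 1 tries is guaranteed,
    since a cannot increase by eps more than 1/eps times."""
    m = 0
    for _ in range(math.ceil(1 / eps) + 1):
        # for a nondecreasing sequence it suffices to compare endpoints
        if a(F(m)) - a(m) < eps:
            return m
        m = F(m)
    raise AssertionError("unreachable for nondecreasing sequences in [0, 1]")
```

For instance, with a(n) = min(1, n/10), F(m) = m + 3, and ε = 1/4, the search succeeds at m = 9, which is indeed at most F^⌈1/ε⌉(0) = 12.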
The particular example above is discussed by Kreisel [19, page 49]. Variations on this idea have played a role in the Green-Tao proof [9] that there are arbitrarily long arithmetic progressions in the primes, and in Tao's proof [26] of the convergence of certain diagonal averages in ergodic theory. In these instances the Kreiselian trick takes the form of an "energy incrementation argument"; see also [25] and [27]. The Birkhoff and von Neumann ergodic theorems and generalizations have also been analyzed in these terms [3, 15, 16, 17].
Here we are concerned with measure-theoretic facts such as the dominated convergence theorem, which relate one mode of convergence to another. Inspired by Tao [26], our goal will be to show that from a suitable metastable bound on the first type of convergence, one can obtain a suitable metastable bound on the second; and that, moreover, the passage from the first to the second is uniform in the remaining data.
For example, if (f_n) is a sequence of measurable functions on a measure space X = (X, B, µ), then (f_n) is said to converge almost uniformly if for every λ > 0, there is a set A with measure at most λ such that (f_n(x)) converges uniformly for x ∉ A. This is equivalent to saying that for every λ > 0 and ε > 0 there is an m such that

µ({x : ∃n, n′ ≥ m (|f_n(x) − f_{n′}(x)| ≥ ε)}) ≤ λ,

since for a fixed λ′ > 0 we can choose a sequence (ε_i) decreasing to 0 and then, for each ε_i, apply this last statement with λ = λ′/2^{i+1}. Thus the fact that (f_n) converges almost uniformly can be expressed as follows:

(AU) ∀λ > 0 ∀ε > 0 ∃m µ({x : ∃n, n′ ≥ m (|f_n(x) − f_{n′}(x)| ≥ ε)}) ≤ λ.

By manipulations similar to the ones above, (AU) has the following metastable equivalent:

(AU*) ∀λ > 0 ∀ε > 0 ∀F ∃m µ({x : ∃n, n′ ∈ [m, F(m)] (|f_n(x) − f_{n′}(x)| ≥ ε)}) ≤ λ.

As above, say that M(F) is a bound on the λ-uniform ε-metastable convergence of (f_n) if the following holds: for every F, there is an m ≤ M(F) such that

µ({x : ∃n, n′ ∈ [m, F(m)] (|f_n(x) − f_{n′}(x)| ≥ ε)}) ≤ λ.

In other words, fixing λ and ε, M(F) provides a bound on a value of m asserted to exist by (AU*).
Egorov's theorem asserts that if X is a probability space and (f_n) converges pointwise almost everywhere, then it converges almost uniformly. In Section 3, we obtain the following quantitative version. Say that M(F) is a λ-uniform bound for the ε-metastable pointwise convergence of (f_n) if the following holds: for every F, there is a set A with µ(A) ≤ λ such that for every x ∉ A there is an m ≤ M(F) with |f_n(x) − f_{n′}(x)| < ε for every n, n′ ∈ [m, F(m)]. In other words, for every F, M(F) provides a uniform ε-metastable bound for the convergence of each sequence (f_n(x)) outside a set of measure at most λ. Compare this to the previous definition: if M(F) is a bound on the λ-uniform ε-metastable convergence of (f_n), then M(F) provides a bound on a single m that works outside a set of measure at most λ. With this terminology in place, we can state our quantitative version of Egorov's theorem: given ε > 0, λ > λ′ > 0, and a λ′-uniform bound M_1(F) on the ε-metastable pointwise convergence of (f_n), there is a bound M_2(F) on the λ-uniform ε-metastable convergence of (f_n); and moreover M_2(F) depends only on ε, λ, λ′, and M_1(F), and not on the underlying probability space or the sequence (f_n). In fact, we provide an explicit description of M_2(F) in terms of this data, and explicit bounds on the complexity of M_2 when M_1 is a computable functional that can be defined using Gödel's schema of primitive recursion in the finite types. The proof relies on a combinatorial lemma, presented in Section 2, whose proof can be viewed as an energy incrementation argument that is iterated along a well-founded tree.
It is easy to show that if (f_n) is a sequence of functions taking values in [0, 1] and (f_n) converges almost uniformly, then the sequence (∫ f_n) converges. Thus the dominated convergence theorem follows easily from Egorov's theorem in the case where X is a probability space and the sequence (f_n) is dominated by a constant function. In a similar way, we show in Section 3 that our quantitative version of Egorov's theorem implies a quantitative version of the dominated convergence theorem, a mild strengthening of Theorem A.2 of Tao [26], again with an explicit description of the computation of one metastable bound from the other.
The notion of a λ-uniform bound on the ε-metastable pointwise convergence of a sequence gives rise to a new mode of convergence that sits properly between pointwise convergence and almost uniform convergence. In Section 4, we explore the relationships between these notions.
We are grateful to Ulrich Kohlenbach and Paulo Oliva for advice and suggestions. Work by the first and third authors has been partially supported by NSF grant DMS-1068829.

A combinatorial fact
This section is devoted to establishing a key combinatorial fact that underlies our quantitative convergence theorems. As a warmup, consider the following:

Proposition 2.1 Let (A_n) be a sequence of measurable subsets of a probability space X = (X, B, µ), and let λ > 0. Then the following are equivalent:

(1) There is an M such that µ(∪_{n≥M} A_n) < λ.

(2) There is a λ′ < λ and an M such that for every function F(m), µ(∪_{n∈[M,F(M)]} A_n) ≤ λ′.

(3) There is a λ′ < λ such that for every F there is an M such that µ(∪_{n∈[M,F(M)]} A_n) ≤ λ′.

Proof Clearly (1) implies (2), taking λ′ = µ(∪_{n≥M} A_n), and (2) clearly implies (3). To show (3) implies (1), fix λ > λ′ > 0 as in (3) and for each m, let F(m) be large enough so that µ(∪_{n∈[m,F(m)]} A_n) > µ(∪_{n≥m} A_n) − (λ − λ′). By hypothesis, for this F, there is an M such that µ(∪_{n∈[M,F(M)]} A_n) ≤ λ′, and hence µ(∪_{n≥M} A_n) < λ′ + (λ − λ′) = λ.

In particular, if (3) holds, there is an n such that µ(A_n) < λ. Now suppose we are given a functional M(F) witnessing (3). The main result of this section, Theorem 2.2, shows that there is a bound on n that depends only on M(F), λ, and λ′. In particular, the bound is independent of X and the sequence (A_n).
Theorem 2.2 For every functional M(F) and λ > λ′ > 0, there is a value M′ with the following property. Suppose (A_n) is a sequence of measurable subsets of a probability space X with the property that for every function F,

µ(∩_{m ≤ M(F)} ∪_{n∈[m,F(m)]} A_n) ≤ λ′.

Then there is an n ≤ M′ such that µ(A_n) < λ.
A functional M is said to be continuous if the value of M(F) depends on only finitely many values of F. Say that two functions F and F′ agree up to k if F(j) = F′(j) for every j ≤ k. If M is continuous, a functional k(F) with the property that M(F) = M(F′) whenever F and F′ agree up to k(F) is said to be a modulus of continuity for M.
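As a concrete illustration (a toy example of ours, not one from the text), the functional M(F) = F(F(0)) + 1 is continuous, since its value depends only on F(0) and F(F(0)), and k(F) = F(0) serves as a modulus of continuity for it:

```python
# A hypothetical continuous functional: M(F) depends only on F(0) and
# F(F(0)), so k(F) = F(0) serves as a modulus of continuity for it.
def M(F):
    return F(F(0)) + 1

def k(F):
    return F(0)

def from_list(values, default=0):
    # view a finite list as a function N -> N, padded with a default value
    return lambda j: values[j] if j < len(values) else default

F1 = from_list([2, 7, 9])
F2 = from_list([2, 7, 9], default=99)  # agrees with F1 up to k(F1) = 2
assert M(F1) == M(F2) == 10
```

Although F1 and F2 differ at every position beyond 2, they agree up to the modulus, so M cannot tell them apart.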
The next lemma shows that, without loss of generality, we can assume the functional M in the hypothesis of Theorem 2.2 is continuous, because one can always replace it by a suitable continuous version, M̂.
Lemma 2.3 Given any functional M, there is a continuous functional M̂ with the following property: for every F, there is an F′ such that M̂(F) = M(F′) and F and F′ agree up to M̂(F).
Proof Given M, define

M̂(F) = min {m : there is an F′ agreeing with F up to m such that M(F′) = m}.

The last set is nonempty since it contains M(F) itself. Clearly M̂(F) satisfies the stated condition, so we only need to show that M̂ is continuous.
In fact, we claim that M̂ is its own modulus of continuity. To see this, suppose F and F′ agree up to M̂(F). We need to show M̂(F) = M̂(F′). By the definition of M̂, there is an F′′ such that M(F′′) = M̂(F) and F and F′′ agree up to M̂(F). But then F′ and F′′ agree up to M̂(F), and so M̂(F′) ≤ M̂(F). Since F and F′ agree up to M̂(F), a fortiori, they agree up to M̂(F′). But now the symmetric argument shows that M̂(F) ≤ M̂(F′), and hence M̂(F) = M̂(F′).

The condition on M̂ imposed by Lemma 2.3 ensures that any sequence (A_n) of measurable subsets of a measure space X satisfying the hypothesis of Theorem 2.2 for M satisfies it for M̂ as well, and so it suffices to prove Theorem 2.2 for M̂ in place of M. By similar machinations, we could arrange that M̂(F) ≤ M̂(G) whenever F is pointwise less than or equal to G, and that M̂ is determined by the values it takes on nondecreasing F's. However, we will not need these additional conveniences below.
Notice that the passage from M to M̂ is noneffective; in general it will not be possible to "compute" M̂(F) from descriptions of M and F. We will show, however, that in the case where M is continuous, the M′ in the conclusion of Theorem 2.2 can be computed from a suitable description of M.
To explain our algorithm, we need to establish some background involving computation on well-founded trees. If σ is a finite sequence of natural numbers, we index the elements starting with 0, so that σ = (σ_0, …, σ_{length(σ)−1}), and write σˆn to denote the sequence extending σ with an additional element n. If τ is another finite sequence of natural numbers, write σ ⊆ τ to indicate that σ is an initial segment of τ. By a tree on N, we mean a set T of finite sequences of natural numbers that is closed under initial segments. Think of the empty sequence, (), as denoting the root, and the elements σˆn as being the children of σ in the tree.
Identify functions F from N to N with infinite sequences, and write σ ⊂ F if σ is an initial segment of F. A tree T on N is said to be well-founded if it has no infinite branch, which is to say, for every function F there is a σ ⊂ F such that σ is not in the tree. One can always carry out a proof by induction on a well-founded tree: if P_σ is any property that holds everywhere outside a tree T and, moreover, P_σ holds whenever P_{σˆn} holds for every n, then P_σ holds for every σ; otherwise, one could successively extend a counterexample σ to build an infinite branch F that never leaves the tree. By the same token, one can define a function on finite sequences of natural numbers by a schema of recursion:

G(σ) = H(σ) if σ ∉ T, and G(σ) = K(σ, λn.G(σˆn)) otherwise,

where λn.G(σˆn) denotes the function which maps n to G(σˆn). Using induction on T, one can show that G is well-defined. Moreover, if T and the functions H and K are computable, so is G. For example, the computation of G on the empty string requires recursive calls to G((n)), for various n; these, in turn, require recursive calls to G((n, n′)), for various n′, and so on. The well-foundedness of T guarantees that every branch of the computation terminates.

Now suppose M(F) is a continuous functional. Say that a finite sequence σ is unsecured if there are F_1, F_2 extending σ such that M(F_1) ≠ M(F_2). In words, σ is unsecured if it does not provide sufficient information about a function F to determine the value of M. Let T = {σ : σ is unsecured}. Then it is not hard to see that T is a tree, and the continuity of M implies it is well-founded.
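The recursion schema just described is easy to render in code. In this Python sketch (the tree and the functions H and K are toy choices of ours), K consults only one child, so the well-foundedness of the tree makes every computation finite:

```python
def tree_rec(in_tree, H, K):
    """The recursion schema on a well-founded tree T:
    G(sigma) = H(sigma) if sigma is off the tree, and
    G(sigma) = K(sigma, n -> G(sigma^n)) otherwise."""
    def G(sigma):
        if not in_tree(sigma):
            return H(sigma)
        return K(sigma, lambda n: G(sigma + (n,)))
    return G

# Toy instance: the tree of strictly decreasing sequences is well-founded.
in_tree = lambda s: all(s[i] > s[i + 1] for i in range(len(s) - 1))
H = lambda s: 0
# K consults a single child, so every branch of the computation is finite
K = lambda s, g: 1 + g(max(s[-1] - 1, 0) if s else 3)
G = tree_rec(in_tree, H, K)
assert G(()) == 5  # calls descend through (3), (3,2), (3,2,1), (3,2,1,0)
```

In the recursion used below, K instead consults several children, but the same well-foundedness argument guarantees termination.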
Suppose moreover that k(F) is a modulus of continuity for M. For any finite sequence σ of natural numbers, use σ̄ to denote the function that agrees with σ up to its length and takes the value 0 thereafter. One can check that the set T′ = {σ : ∀τ ⊆ σ (k(τ̄) ≥ length(τ))} is again a well-founded tree that includes T. In the next proof, given a continuous functional M, we will define a function N(σ) by recursion on any well-founded tree that includes the tree of sequences that are unsecured for M. When this tree is given by a modulus of continuity, k(F), as above, this amounts to the principle of bar recursion, due to Spector [24] (see also [2, 5, 14]).
We now turn to the proof of Theorem 2.2.
Proof By Lemma 2.3, we can assume without loss of generality that M is continuous. Fix λ > λ′ > 0, and let T be any well-founded tree that includes all the sequences that are unsecured for M. We will define a function N(σ) by recursion on T, and simultaneously show, by induction on T, that N(σ) satisfies the following property, P_σ, for every σ: whenever X and (A_n) satisfy the conditions Q_σ and R_σ, there is an n ≤ N(σ) such that µ(A_n) < λ. In that case, N(()) is the desired bound, since Q_() is the desired hypothesis, and R_() is vacuously true.
In the base case, suppose σ is not in T, and hence is secured for M. Define N(σ) = M(σ̄).
To see that N(σ) satisfies P_σ, suppose X and (A_n) satisfy Q_σ and R_σ. Define σ̃ to be the function that agrees with σ and takes the value N(σ) from position length(σ) on; in particular, for m ≥ length(σ), σ̃(m) = N(σ). We use a calculation similar to that of Proposition 2.1, with N(σ) now playing the role of infinity.
As before, the measure of this set is at most λ′ + Σ_{m≤M} (λ − λ′)/2^{m+1} < λ, and so N(σ) itself satisfies the conclusion of P_σ.
In the inductive case, where σ is in T, we can assume that we have already defined N(σˆn) for every n so that P_{σˆn} is satisfied. Define the sequence (n_i) by setting n_0 = 0 and n_{i+1} = N(σˆn_i), set m = length(σ), and set N(σ) = max_{i ≤ ⌈2^{m+1}/(λ−λ′)⌉} n_i.
To show that N(σ) satisfies P_σ, fix X and (A_n) satisfying Q_σ and R_σ. We need to show that there is an n ≤ N(σ) satisfying µ(A_n) < λ. By the definition of N(σ), this is the same as showing that for some i ≤ ⌈2^{m+1}/(λ − λ′)⌉, there is an n ≤ n_i with this property.
Start by trying i = 1. Suppose the conclusion fails, that is, there is no n ≤ n_1 satisfying µ(A_n) < λ. Since n_1 = N(σˆn_0) satisfies P_{σˆn_0}, this implies that either Q_{σˆn_0} or R_{σˆn_0} fails. But we are assuming Q_σ, and that implies Q_{σˆn_0}, so R_{σˆn_0} fails. This means that there is an m′ < length(σˆn_0) = length(σ) + 1 for which the corresponding clause of R_{σˆn_0} is violated. But our assumption of R_σ implies that this does not happen for m′ < length(σ), since N(σˆn_0) = n_1 ≤ N(σ). So the only possibility is that it happens for m′ = m = length(σ). Now repeat this argument for i = 2, 3, …, ⌈2^{m+1}/(λ − λ′)⌉. If the conclusion fails each time, then for each i we obtain the corresponding violation, which asserts that the measure of an increasing union of the sets A_n grows by at least (λ − λ′)/2^{m+1} at each stage; after ⌈2^{m+1}/(λ − λ′)⌉ stages the total exceeds what is possible in a probability space, a contradiction.

The argument just employed is reminiscent of "energy incrementation" arguments used by Green and Tao [9, 25, 26], wherein the success of an algorithm is guaranteed by the fact that a bounded nondecreasing sequence can increase by more than a fixed ε > 0 only finitely many times. What is novel here is that we employ a transfinite iteration of that strategy.
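The recursion in the proof can be sketched directly, at least for functionals that come with a modulus of continuity. In the following Python sketch (the securedness test via the modulus and the padding-by-zero convention are our implementation choices), the base case and the energy-incrementation loop mirror the two cases above:

```python
import math

def metastable_bound(M, modulus, lam, lam_prime):
    """Sketch of the recursion in the proof of Theorem 2.2: compute a
    uniform bound by recursion on a tree containing the sequences that
    are unsecured for the continuous functional M.  modulus(F) must
    bound how much of F the value M(F) depends on."""
    def pad(sigma):
        # the function that agrees with sigma and is 0 beyond it
        return lambda j: sigma[j] if j < len(sigma) else 0

    def N(sigma):
        F_bar = pad(sigma)
        if modulus(F_bar) < len(sigma):
            # sigma is secured: every extension gives the same value of M
            return M(F_bar)
        # inductive case: iterate n_0 = 0, n_{i+1} = N(sigma^n_i) and
        # take the maximum over ceil(2^(m+1)/(lam - lam')) steps
        m = len(sigma)
        steps = math.ceil(2 ** (m + 1) / (lam - lam_prime))
        n, best = 0, 0
        for _ in range(steps):
            n = N(sigma + (n,))
            best = max(best, n)
        return best

    return N(())
```

For M(F) = F(0) + n with a constant modulus, the sketch reproduces the value n · ⌈2/(λ − λ′)⌉ discussed below.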
Notice that the value of M′ in the theorem depends on the values of λ, λ′, and the functional M. It is therefore somewhat difficult to make sense of the question as to whether the bound computed in the proof is, in some sense, asymptotically sharp. Given M, λ, λ′, one can effectively determine whether or not a putative value of M′ works; so given any bound, one can also compute the least value of M′ that satisfies the conclusion. So at issue is not whether we can compute the precise bound, but, rather, whether we can come up with a perspicuous characterization of the rate of growth. One can easily use recursion along fairly simple trees to define functions that grow astronomically fast. Nonetheless, there are some things we can say about the complexity of M′ in terms of M. It is well known that Gödel's system T of primitive recursive functionals of finite type can be stratified into levels T_n. At the bottom level, T_0, primitive recursion is restricted in such a way that the only functions from natural numbers to natural numbers that are definable in the system are primitive recursive. The functionals of T_0 are said to be primitive recursive functionals in the sense of Kleene, in contrast to the functionals of T, which are said to be primitive recursive functionals in the sense of Gödel (see [13, 2, 15]). The results of Howard [12] show the following:

Theorem 2.4 In the previous theorem, if M is definable in Gödel's T_n for some n ≥ 0, then, as a function of λ and λ′, M′ is definable in T_{n+1}.
(See also [22, Section 10], which relates Howard's results explicitly to the fragments T_n.) Theorem 2.4 implies that if M is a primitive recursive functional in the sense of Kleene, then M′ is in T_1 (which is to say, roughly, Ackermannian). The results of Kreuzer [21] provide even more information: if M satisfies a suitable weaker definability condition (see [14, Section 3]), then M′ is primitive recursive.
It would be interesting to know whether these results can be improved. Alternatively, one can consider Theorem 2.2 for particular functionals M(F). One can show, for example, that with M(F) = F(0) + n, the smallest value of M′ that works is roughly n/(λ − λ′). Using the algorithm given in the proof of Theorem 2.2 yields the bound M′ = n · ⌈2/(λ − λ′)⌉, but this can be improved to n · ⌈1/(λ − λ′)⌉ by tinkering with the values (λ − λ′)/2^{m+1} in the right-hand side of condition R_σ. An explicit construction gives a lower bound of roughly n/(λ − λ′). However, even for simple functionals like M(F) = F(F(0)) + n, the combinatorial details quickly become knotty. In this particular case our algorithm gives M′ = m_{⌈2/(λ−λ′)⌉}, where m_0 = n and m_{i+1} = n · ⌈2^{m_i+1}/(λ − λ′)⌉. This is an iterated exponential in n, where the depth of the stack depends on λ − λ′; but we do not know whether such a rate of growth is necessary.

Metastable convergence theorems
We can now prove our metastable version of Egorov's theorem.

Theorem 3.1 For every ε > 0, λ > λ′ > 0, and functional M_1(F), there is a functional M_2(F) with the following property: for any probability space X = (X, B, µ) and sequence (f_n) of measurable functions, if M_1(F) is a λ′-uniform bound on the ε-metastable pointwise convergence of (f_n), then M_2(F) is a bound on the λ-uniform ε-metastable convergence of (f_n).

Proof Fix ε > 0, λ > λ′ > 0, and M_1. Given F_2, for each n let

A_n = {x : ∃ j, j′ ∈ [n, F_2(n)] (|f_j(x) − f_{j′}(x)| ≥ ε)},

define M(F) = M_1(G_F), where G_F(m) = max(F(m), max_{n∈[m,F(m)]} F_2(n)), and let M_2(F_2) be the value M′ given by Theorem 2.2 for M, λ, and λ′. We wish to show µ(A_m) < λ for some m ≤ M_2(F_2). By the definition of M_2(F_2), it is enough to show that for every function F(m),

µ(∩_{m ≤ M(F)} ∪_{n∈[m,F(m)]} A_n) ≤ λ′.

So fix F. Since M_1(F) is a λ′-uniform bound on the ε-metastable pointwise convergence of (f_n), there is a set A with µ(A) ≤ λ′ such that for every x ∉ A there is an m ≤ M_1(G_F) = M(F) with |f_j(x) − f_{j′}(x)| < ε for every j, j′ ∈ [m, G_F(m)]. For such x and m, we have x ∉ A_n for every n ∈ [m, F(m)], since [n, F_2(n)] ⊆ [m, G_F(m)]. Hence the intersection above is contained in A, and its measure is at most λ′, as required. This straightforwardly yields our quantitative version of the dominated convergence theorem.
Theorem 3.2 For every ε > 0, λ > λ′ > 0, and M_1(F), there is an M_2(F) such that, for any probability space X and sequence (f_n) of nonnegative measurable functions dominated by the constant function 1, if M_1(F) is a λ′-uniform bound on the ε-metastable pointwise convergence of (f_n), then M_2(F) is a bound on the (ε + λ)-metastable convergence of the sequence (∫ f_n).

Proof From the hypotheses, Theorem 3.1 yields an M_2(F) that is a bound on the λ-uniform ε-metastable convergence of (f_n). Thus, for all F, there is an m ≤ M_2(F) such that

µ({x : ∃n, n′ ∈ [m, F(m)] (|f_n(x) − f_{n′}(x)| ≥ ε)}) ≤ λ.

Call the set just indicated A. From our choice of λ and the definition of A, it follows that for all n, n′ ∈ [m, F(m)],

|∫ f_n − ∫ f_{n′}| ≤ ∫_A |f_n − f_{n′}| + ∫_{X∖A} |f_n − f_{n′}| < λ + ε.

That is, M_2(F) provides a bound on the (ε + λ)-metastable convergence of (∫ f_n), as desired.
Theorem 3.2 strengthens Tao's Theorem A.2 [26] in three ways. First, we formulate convergence in terms of the Cauchy criterion, rather than referring to a fixed limit, as Tao does. This is more natural in the context of metastability, and our result implies Tao's, since one can always consider a sequence f_0, f, f_1, f, f_2, f, … in which a fixed limit f has been interleaved. Second, Tao used the stronger hypothesis that M_1(F) provides a bound that works almost everywhere, rather than outside a set of measure at most λ′. Finally, and most importantly, our proof of Theorem 2.2 provides an explicit description of the bound, M_2(F).
Tao also stated his theorem for the convergence of nets indexed by the directed set N × N, as was needed in his application. But as he himself noted, the extension to arbitrary countable nets is straightforward. Given any countable net (f_i)_{i∈I}, one can define an increasing cofinal sequence (a_n)_{n∈N} of elements of the directed set I. To adapt Theorem 2.2, for example, suppose we are given a net (A_i)_{i∈I} of measurable subsets of a probability space X satisfying the analogue of the hypothesis of Theorem 2.2 for every function F, where the notation [a, b] now denotes {i : a ≤ i ≤ b}. Define the sequence (A_n)_{n∈N} by A_n = ∪_{i∈[a_n, a_{n+1}]} A_i. Then (A_n) satisfies the requirements of Theorem 2.2, and hence there is an i ≤ a_{M′} such that µ(A_i) < λ.
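For the directed set N × N used by Tao, the diagonal already provides an increasing cofinal sequence; this small Python check (our illustration) verifies both properties on an initial segment:

```python
def cofinal(n):
    # an increasing cofinal sequence a_n in the directed set N x N
    return (n, n)

def leq(i, j):
    # the product (coordinatewise) order on N x N
    return i[0] <= j[0] and i[1] <= j[1]

# increasing: a_n <= a_{n+1}; cofinal: every index is eventually dominated
assert all(leq(cofinal(n), cofinal(n + 1)) for n in range(10))
assert all(leq((p, q), cofinal(max(p, q))) for p in range(6) for q in range(6))
```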
Notice that the expression ∫ |f_n − f_{n′}| in the proof of Theorem 3.2 is the L^1 norm of f_n − f_{n′}. In fact, the same argument shows the following:

Theorem 3.3 For every ε > 0, λ > λ′ > 0, and M_1(F), there is an M_2(F) such that, for any probability space X and sequence (f_n) of nonnegative measurable functions dominated by the constant function 1, if M_1(F) is a λ′-uniform bound on the ε-metastable pointwise convergence of (f_n), then for every F_2 there is an m ≤ M_2(F_2) such that for every p ≥ 1, the analogous bound holds with the L^p norm ‖f_n − f_{n′}‖_p in place of the L^1 norm.

We have considered a metastable version of the dominated convergence theorem where X is a probability space and the sequence (f_n) is uniformly dominated by the constant function 1. The dominated convergence theorem itself is usually stated more generally, where X is an arbitrary measure space and the sequence (f_n) is dominated by an arbitrary integrable function g. The general case can be reduced to the one we have considered, taking into account that given an integrable function g and any δ_1, δ_2 greater than 0, there is a set A with finite measure such that ∫_{X∖A} g < δ_1, and a K sufficiently large so that ∫_A (g − min(g, K)) < δ_2. The bound M_2 in the conclusion, however, now depends on bounds on K and the size of A, for certain δ_1 and δ_2 depending on ε.

A new mode of convergence
Recall that a sequence (f_n) of measurable functions converges pointwise a.e. if for almost every x, the sequence (f_n(x)) is Cauchy, and we noted in Section 1 that it converges almost uniformly if for every λ > 0 it converges uniformly outside a set of measure at most λ. Each of these has an equivalent expression in terms of metastable convergence. Our formulation of Egorov's theorem provides yet another mode of convergence, which we will call almost uniform metastable pointwise convergence: for every ε > 0 and λ > 0, there is a functional M(F) that is a λ-uniform bound on the ε-metastable pointwise convergence of (f_n). In other words, M, as a function of F, provides a bound on the ε-metastable convergence of the sequences (f_n(x)) that is uniform in x, and valid outside a set of measure at most λ.
Recall that if X is a probability space, or if the sequence (f_n) is dominated by an L^p function, then a.e. convergence and almost uniform convergence coincide. More generally, we have the following relationships between these three modes of convergence:

Proposition 4.1 Let (f_n) be a sequence of measurable functions on a measure space X = (X, B, µ).
(1) AU → AUM → AE. (Hence, if X is a probability space or the sequence (f_n) is dominated, the three notions coincide.)

(2) If µ({x : |f_n(x) − f_{n′}(x)| ≥ ε}) < ∞ for all ε > 0, n, and n′, then AE implies AUM. (In particular, the conclusion holds if for some p ≥ 1, f_n ∈ L^p for every n.)

(3) In general, the implications in (1) do not reverse.
Proof For (1), note that AU is equivalent to its metastable version, AU*, which clearly implies AUM. Similarly, AUM implies, in particular, that for almost every x the sequence (f_n(x)) is metastably convergent, and hence convergent. For (2), suppose that AUM fails for some ε and λ. By the assumption that each {x : |f_n(x) − f_{n′}(x)| ≥ ε} has finite measure, we can take the limit as M → ∞ to obtain a set of positive measure on which the sequences (f_n(x)) fail to be Cauchy. Hence, (f_n) is not a.e. Cauchy.
There is also a non-Cauchy version of AUM, call it AUM′, which refers to a fixed limit function f. It is easy to see that AUM′ implies AUM, but the converse need not hold; for example, h_n = χ_{[n,∞)} converges AUM, but not AUM′. Moreover, the analogue of Proposition 4.1 holds when AUM is replaced by AUM′. Thus we have the implications

AU → AUM′ → AUM → AE,

none of which can be reversed in general.

Final comments
As noted in Section 2, it would be interesting to know the extent to which the bounds we obtain are sharp. For example, can one show that there are functionals M that are primitive recursive in the sense of Kleene for which the M′ in Theorem 2.2 is not primitive recursive?
When Tao [26] presented his quantitative version of the dominated convergence theorem, he observed that the bound M can be computed in principle.
In practice, though, it seems remarkably hard to do; the proof of the Lebesgue dominated convergence theorem, if inspected carefully, relies implicitly on the infinite pigeonhole principle, which is notoriously hard to finitize.
He went on to note that since the Lebesgue dominated convergence theorem is equivalent, in the sense of reverse mathematics, to the arithmetic comprehension axiom (ACA) (see Yu [28]), the dependence of M on the parameters is likely to be "fantastically poor." The dependence we have obtained is, indeed, rather poor, but it is at least explicit and comprehensible.
In fact, the axiomatic strength of the dominated convergence theorem is sensitive to the way in which it is formulated. Elsewhere [1] we have shown that the formulation of the dominated convergence theorem that corresponds to Tao's quantitative version is strictly weaker than (ACA). It is possible, however, that the quantitative version, which quantifies over continuous functionals, is axiomatically stronger than the original. We suspect that each of Theorems 2.2, 3.1, and 3.2 is equivalent to (ACA). Hirst [10] has shown that the kind of transfinite induction used in the proof of Theorem 2.2 is equivalent to (ACA), so at issue is whether the full strength of this principle is needed. The situation is reminiscent of Gaspar and Kohlenbach [7], which provides a sense in which a quantitative version of the infinitary pigeonhole principle is axiomatically stronger than the non-quantitative version.
The results here can be viewed as instances of "proof mining," which aims to extract quantitative and computationally meaningful information from nonconstructive results; see [14] and [3, 15, 16, 17]. Specifically, the instance of arithmetic comprehension used in the implication from (3) to (1) of Proposition 2.1 can be cast as a choice principle, and Spector [24] has shown how that can be eliminated in favor of bar recursion; see also [14, Section 11.3]. Paulo Oliva has, moreover, shown [23] that for a fixed space X and sequence of sets (A_n), one can straightforwardly compute a witness to the implication using a particular form of bar recursion (the "explicitly controlled product of selection functions" of [5]; see also Definition 18 of [6]). One can then obtain a bound that is uniform in X and (A_n) using hereditary majorizability in the sense of either Howard [11] or Bezem [4].