Hypothesis Testing via Affine Detectors

In this paper, we further develop the approach, originating in [GJN], to "computation-friendly" hypothesis testing via Convex Programming. Most of the existing results on hypothesis testing aim to quantify, in closed analytic form, the separation between sets of distributions that allows for reliable decisions in precisely stated observation models. In contrast to this descriptive (and highly instructive) traditional framework, the approach we promote here can be qualified as operational -- the testing routines and their risks are yielded by an efficient computation. All we know in advance is that, under the favorable circumstances specified in [GJN], the risk of such a test, whether high or low, is provably near-optimal under the circumstances. As compensation for the lack of "explanatory power," this approach is applicable to a much wider family of observation schemes and hypotheses to be tested than those where a "closed form descriptive analysis" is possible. In the present paper our primary emphasis is on computation: we take a step further in extending the principal tool developed in the cited paper -- testing routines based on affine detectors -- to a large variety of testing problems. The price of this development is the loss of blanket near-optimality of the proposed procedures (though it is still preserved in the observation schemes studied in [GJN], which now become particular cases of the general setting considered here). [GJN]: Goldenshluger, A., Juditsky, A., Nemirovski, A., "Hypothesis testing by convex optimization," Electronic Journal of Statistics 9(2), 2015.


Introduction
This paper can be considered as an extension of [13], where the following simple observation was the starting point of numerous developments: Imagine that we want to decide on two composite hypotheses about the distribution P of a random observation ω taking values in an observation space Ω, the i-th hypothesis stating that P ∈ P_i, where P_i, i = 1, 2, are given families of probability distributions on Ω. Let φ : Ω → R be a detector, and let the risk of the detector φ be defined as the smallest ǫ⋆ such that

∫_Ω e^{−φ(ω)} P(dω) ≤ ǫ⋆ ∀P ∈ P_1  &  ∫_Ω e^{φ(ω)} P(dω) ≤ ǫ⋆ ∀P ∈ P_2.   (1)

Then the test T^K which, given K i.i.d. observations ω_t ∼ P ∈ P_1 ∪ P_2, t = 1, ..., K, deduces that P ∈ P_1 when Σ_{t=1}^K φ(ω_t) ≥ 0, and that P ∈ P_2 otherwise, accepts the true hypothesis with P-probability at least 1 − ǫ⋆^K.
Note that while the risk 2√(δ(1−δ)) seems to be much larger than δ, especially for small δ, we can "compensate" for this risk deterioration by passing from the single-observation test T^1 associated with the detector φ to the test T^K based on the same detector and using K observations. The risk of the test T^K, by the above, is upper-bounded by ǫ⋆^K = [2√(δ(1−δ))]^K and thus is not worse than the risk δ of the "ideal" single-observation test already for the "quite moderate" value

K = ⌈ 2/(1 − ln(4(1−δ))/ln(1/δ)) ⌉.

2. There are "good," in a certain precise sense, parametric families of distributions, primarily:
• Gaussian distributions N(µ, I_d) on Ω = R^d,
• Poisson distributions with parameters µ ∈ R^d_+ on Ω = Z^d; the corresponding random variables have d independent entries, the j-th of them being Poisson with parameter µ_j,
• Discrete distributions on {1, ..., d}, the parameter µ of a distribution being the vector of probabilities to take value j = 1, ..., d,
for which the optimal (with the minimal risk, and thus near-optimal by 1) detectors can be found efficiently, provided that P_i, i = 1, 2, are convex hypotheses, meaning that they are cut off the family of distributions in question by restricting the distribution's parameter µ to reside in a convex domain. 1 On closer inspection, the "common denominator" of the Gaussian, Poisson and Discrete families of distributions is that in all these cases the minimal risk detector for a pair of convex hypotheses is affine, 2 and the results of [13] in the case of deciding on a pair of convex hypotheses stemming from a good family of distributions sum up to the following:
A) the best -- with the smallest possible risk -- affine detector, same as its risk, can be efficiently computed;
B) the smallest risk affine detector from A) is the best, in terms of risk, detector available under the circumstances, so that the associated test is near-optimal.
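The arithmetic behind this "risk amplification" is easy to check numerically; the following minimal sketch (the function names are ours, not from [13]) computes the K-observation risk bound and the closed-form threshold above:

```python
import math

def single_obs_risk(delta):
    # risk 2*sqrt(delta*(1-delta)) of the detector-based single-observation test,
    # when the "ideal" single-observation test has risk delta
    return 2.0 * math.sqrt(delta * (1.0 - delta))

def K_repeated_risk(delta, K):
    # risk bound eps_*^K of the test T^K using K i.i.d. observations
    return single_obs_risk(delta) ** K

def K_to_match_delta(delta):
    # smallest K with [2 sqrt(delta(1-delta))]^K <= delta (closed form from the text)
    return math.ceil(2.0 / (1.0 - math.log(4.0 * (1.0 - delta)) / math.log(1.0 / delta)))
```

For instance, for δ = 0.01 the single-observation risk bound is about 0.199, yet three observations already suffice to drive the bound below δ.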
Note that as far as practical applications of the above approach are concerned, one "can survive" without B) (near-optimality of the constructed detectors), while A) is a must. In this paper, we focus on families of distributions obeying A); this class turns out to be incomparably larger than what was defined as "good" in [13]. In particular, it includes nonparametric families of distributions. Staying within this much broader class, we still are able to construct in a computationally efficient way the best affine detectors for a pair of "convex," in a certain precise sense, hypotheses, along with valid upper bounds on the risks of the detectors. What we, in general, cannot claim anymore is that the tests associated with the above detectors are near-optimal. This being said, we believe that investigating possibilities for building tests and quantifying their performance in a computationally friendly manner is of value even when we cannot provably guarantee near-optimality of these tests.
1 In retrospect, these results can be seen as a development of the line of research initiated by the pioneering works of H. Chernoff [9], C. Kraft [16], and L. Le Cam [17], further developed in [1,2,3,4,7,8], among many others (see also references in [15]).
2 Affinity of a detector makes sense only when Ω can be naturally identified with a subset of some R^d. This indeed is the case for Gaussian and Poisson distributions; to make it the case for discrete distributions on {1, ..., d}, it suffices to encode j ≤ d by the j-th basic orth in R^d, thus making Ω the set of basic orths in R^d. With this encoding, every real-valued function on {1, ..., d} becomes affine.
The paper is organized as follows. The families of distributions well suited for constructing affine detectors in a computationally friendly fashion are introduced and investigated in section 2. In particular, we develop a kind of fully algorithmic "calculus" of these families. This calculus demonstrates that the families of probability distributions covered by our approach are a much more common commodity than the "good observation schemes" defined in [13]. In section 3 we explain how to build within our framework tests for pairs (and larger tuples) of hypotheses and how to quantify the performance of these tests in a computationally efficient fashion. Aside from general results of this type, we work out in detail the case where the family of distributions giving rise to the "convex hypotheses" to be tested is comprised of sub-Gaussian distributions (section 3.2.3). In section 4 we discuss an application to the now-classical statistical problem of aggregation of estimators, and show how the results of [12] can be extended to the general situation considered here. Finally, in section 5 we show how our framework can be extended, in the Gaussian case, to include quadratic detectors. To streamline the presentation, all proofs exceeding a few lines are collected in the appendix.

Setup
Let us fix an observation space Ω = R^d, and let P_j, 1 ≤ j ≤ J, be given families of Borel probability distributions on Ω. Put broadly, our goal is, given a random observation ω ∼ P, where P ∈ ⋃_{j≤J} P_j, to decide upon the hypotheses H_j : P ∈ P_j, j = 1, ..., J. We intend to address this goal in the case when the families P_j are simple -- comprised of distributions for which the moment-generating functions admit an explicit upper bound. Specifically, assume we are given a triple H, M, Φ(·, ·), where H is a symmetric w.r.t. the origin closed convex set in Ω = R^d, M is a nonempty closed convex set in some R^n, and Φ(h; µ) : H × M → R is a continuous function which is convex in h ∈ H and concave in µ ∈ M. We refer to H, M, Φ(·, ·) satisfying the above restrictions as regular data.

Regular and simple probability distributions
Regular data H, M, Φ(·, ·) define the family R[H, M, Φ] of Borel probability distributions P on Ω such that

∀h ∈ H ∃µ ∈ M: ln( ∫_Ω e^{h^Tω} P(dω) ) ≤ Φ(h; µ).   (2)

We say that distributions satisfying (2) are regular, and, given regular data H, M, Φ(·, ·), we refer to R[H, M, Φ] as the regular family of distributions associated with the data H, M, Φ. The same regular data H, M, Φ(·, ·) define a smaller family S[H, M, Φ] of Borel probability distributions P on Ω such that

∃µ ∈ M: ∀h ∈ H: ln( ∫_Ω e^{h^Tω} P(dω) ) ≤ Φ(h; µ).   (3)

We say that distributions satisfying (3) are simple. Given regular data H, M, Φ(·, ·), we refer to S[H, M, Φ] as the simple family of distributions associated with the data H, M, Φ.
Recall that the starting point of our study is a "plausibly good" detector-based test which, given two families P_1 and P_2 of distributions with common observation space, and independent observations ω_1, ..., ω_K drawn from a distribution P ∈ P_1 ∪ P_2, decides whether P ∈ P_1 or P ∈ P_2. Our interest in regular/simple families of distributions stems from the fact that when the families P_1 and P_2 are of this type, building such a test reduces to solving a convex-concave game and thus can be carried out in a computationally efficient manner. We postpone the related construction and analysis to section 3, and continue with presenting some basic examples of simple and regular families of distributions and a simple "calculus" of these families.

Sub-Gaussian distributions
Recall that a probability distribution P on Ω = R^d is sub-Gaussian with parameters (θ, Θ), θ ∈ R^d and Θ a positive semidefinite d × d matrix, if

ln( ∫_{R^d} e^{h^Tω} P(dω) ) ≤ θ^Th + (1/2) h^TΘh  ∀h ∈ R^d.   (4)

Consequently, given a nonempty closed convex set U ⊂ R^d and a nonempty convex compact subset 𝒰 of the positive semidefinite cone S^d_+, the data H = R^d, M = U × 𝒰, Φ(h; θ, Θ) = θ^Th + (1/2)h^TΘh are regular, and every sub-Gaussian distribution with parameters (θ, Θ) ∈ M belongs to S[H, M, Φ]; note that a Gaussian distribution N(θ, Θ) is sub-Gaussian with parameters (θ, Θ), and for it (4) is an identity. Discrete distributions fit the framework as well: encoding value i ∈ {1, ..., d} by the i-th basic orth e_i ∈ R^d, the moment-generating function of a discrete distribution µ is ∫ e^{h^Tω} P(dω) = Σ_{i=1}^d µ_i e^{h_i}, where µ_i is the probability for the variable to take value e_i.

Distributions with bounded support
Consider the family P[X] of Borel probability distributions supported on a closed and bounded convex set X ⊂ R^d. Let

φ_X(h) = max_{x∈X} h^Tx

be the support function of X. We have the following result (to be refined in section 2.3.5): for every P ∈ P[X],

ln( ∫_{R^d} e^{h^Tω} P(dω) ) ≤ h^Te[P] + (1/8)[φ_X(h) + φ_X(−h)]^2  ∀h ∈ R^d,   (5)

where e[P] = ∫_{R^d} ω P(dω) is the expectation of P, and the right hand side function in (5) is convex in h. As a result, setting

H = R^d, M = X, Φ(h; µ) = h^Tµ + (1/8)[φ_X(h) + φ_X(−h)]^2,

we get regular data such that P[X] ⊂ S[H, M, Φ]. For the proof, see Appendix A.1.
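As a sanity check of the Hoeffding-type bound (5), one can compare, for a Bernoulli distribution on X = [0, 1] (so that φ_X(h) + φ_X(−h) = |h|), the exact log-moment-generating function with the right hand side of (5); the helper names below are ours:

```python
import math

def log_mgf_bernoulli(p, h):
    # exact ln E[e^{h*omega}] for omega ~ Bernoulli(p), supported on X = [0, 1]
    return math.log(1.0 - p + p * math.exp(h))

def hoeffding_bound(p, h):
    # right hand side of (5) for X = [0, 1]: h*e[P] + (phi_X(h) + phi_X(-h))^2 / 8,
    # where e[P] = p and phi_X(h) + phi_X(-h) = |h|
    return h * p + (h * h) / 8.0
```

The inequality log_mgf_bernoulli(p, h) ≤ hoeffding_bound(p, h) holds for all p ∈ (0, 1) and all real h, which is exactly Hoeffding's lemma.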

Calculus of regular and simple families of probability distributions
Regular and simple families of probability distributions admit "fully algorithmic" calculus, with the main calculus rules as follows.

Direct summation
Let (H_ℓ, M_ℓ, Φ_ℓ), 1 ≤ ℓ ≤ L, be regular data on observation spaces Ω_ℓ = R^{d_ℓ}, with M_ℓ ⊂ R^{n_ℓ}, and let us set

Ω = Ω_1 × ... × Ω_L = R^d, d = d_1 + ... + d_L, H = H_1 × ... × H_L, M = M_1 × ... × M_L,
Φ(h = [h_1; ...; h_L]; µ = [µ_1; ...; µ_L]) = Σ_{ℓ=1}^L Φ_ℓ(h_ℓ; µ_ℓ).

Then H is a symmetric w.r.t. the origin closed convex set in Ω = R^d, M is a nonempty closed convex set in R^n, n = n_1 + ... + n_L, Φ : H × M → R is a continuous convex-concave function, and clearly whenever P_ℓ ∈ R[H_ℓ, M_ℓ, Φ_ℓ] (resp., P_ℓ ∈ S[H_ℓ, M_ℓ, Φ_ℓ]), 1 ≤ ℓ ≤ L, the product distribution P = P_1 × ... × P_L belongs to R[H, M, Φ] (resp., to S[H, M, Φ]).

IID summation
Let Ω = R^d be an observation space, (H, M, Φ) be regular data on this space, and let λ = {λ_ℓ}_{ℓ=1}^K be a collection of reals. We can associate with these entities new data (H^λ, M, Φ^λ) on Ω by setting

H^λ = {h : λ_ℓ h ∈ H, 1 ≤ ℓ ≤ K}, Φ^λ(h; µ) = Σ_{ℓ=1}^K Φ(λ_ℓ h; µ).

Now, given a probability distribution P on Ω, we can associate with it and with the above λ a new probability distribution P^λ on Ω as follows: P^λ is the distribution of Σ_ℓ λ_ℓ ω_ℓ, where ω_1, ω_2, ..., ω_K are drawn, independently of each other, from P. An immediate observation is that the data (H^λ, M, Φ^λ) are regular, and
• whenever a probability distribution P belongs to S[H, M, Φ], the distribution P^λ belongs to S[H^λ, M, Φ^λ].
In particular, when ω ∼ P ∈ S[H, M, Φ], the distribution P^K of the sum of K independent copies of ω belongs to S[H, M, KΦ].
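The IID summation rule is tight for Gaussian distributions, which gives a convenient numerical check: for P = N(θ, σ^2) on the line, the bound Φ^λ coincides with the exact log-moment-generating function of Σ_ℓ λ_ℓ ω_ℓ. A minimal one-dimensional sketch (function names are ours):

```python
def Phi(h, theta, sigma2):
    # ln E[e^{h omega}] for omega ~ N(theta, sigma2); the regular-data bound is exact here
    return theta * h + 0.5 * sigma2 * h * h

def Phi_lambda(h, theta, sigma2, lams):
    # Phi^lambda(h; mu) = sum_ell Phi(lambda_ell * h; mu)
    return sum(l_ * h * theta + 0.5 * sigma2 * (l_ * h) ** 2 for l_ in lams)

def log_mgf_weighted_sum(h, theta, sigma2, lams):
    # exact ln-MGF of sum_ell lambda_ell * omega_ell, omega_ell i.i.d. N(theta, sigma2):
    # the weighted sum is N(theta * sum(lams), sigma2 * sum(lams^2))
    m = theta * sum(lams)
    v = sigma2 * sum(l_ * l_ for l_ in lams)
    return m * h + 0.5 * v * h * h
```

With λ_ℓ ≡ 1 one recovers the "sum of K independent copies belongs to S[H, M, KΦ]" special case.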

Semi-direct summation
Let (H_ℓ, M_ℓ, Φ_ℓ), 1 ≤ ℓ ≤ L, be regular data on observation spaces Ω_ℓ = R^{d_ℓ}. To avoid complications, we assume that for every ℓ,
• H_ℓ = Ω_ℓ = R^{d_ℓ},
• M_ℓ is bounded.
Let also an ǫ > 0 be given. We assume that ǫ is small, namely, Lǫ < 1. Let us aggregate the given regular data into new data by setting

Ω = Ω_1 × ... × Ω_L = R^d, d = d_1 + ... + d_L, H = Ω, M = M_1 × ... × M_L,
Δ_ǫ = {λ ∈ R^L : λ_ℓ ≥ ǫ ∀ℓ, Σ_{ℓ=1}^L λ_ℓ = 1},

and let us define the function Φ(h; µ) : Ω × M → R as follows:

Φ(h = [h_1; ...; h_L]; µ = [µ_1; ...; µ_L]) = inf_{λ∈Δ_ǫ} Σ_{ℓ=1}^L λ_ℓ Φ_ℓ(h_ℓ/λ_ℓ; µ_ℓ).   (6)

For evident reasons, the infimum in the description of Φ is achieved, and Φ is continuous. In addition, Φ is convex in h ∈ R^d and concave in µ ∈ M. Postponing for a moment the verification, the consequence is that H = Ω = R^d, M and Φ form regular data. We claim that

whenever a random vector ω = [ω_1; ...; ω_L], with components ω_ℓ which are not necessarily independent, has marginal distributions P_ℓ ∈ S[R^{d_ℓ}, M_ℓ, Φ_ℓ], 1 ≤ ℓ ≤ L, the distribution P of ω belongs to S[R^d, M, Φ].

Indeed, let µ_ℓ = µ_{P_ℓ} ∈ M_ℓ certify the inclusions P_ℓ ∈ S[R^{d_ℓ}, M_ℓ, Φ_ℓ]. Let us set µ = [µ_1; ...; µ_L], and let h = [h_1; ...; h_L] ∈ Ω be given. We can find λ ∈ Δ_ǫ such that

Φ(h; µ) = Σ_{ℓ=1}^L λ_ℓ Φ_ℓ(h_ℓ/λ_ℓ; µ_ℓ).

Applying the Hölder inequality, we get

∫ e^{h^Tω} P(dω) = ∫ e^{Σ_ℓ h_ℓ^Tω_ℓ} P(dω) ≤ Π_{ℓ=1}^L [ ∫ e^{(h_ℓ/λ_ℓ)^Tω_ℓ} P_ℓ(dω_ℓ) ]^{λ_ℓ} ≤ exp{ Σ_{ℓ=1}^L λ_ℓ Φ_ℓ(h_ℓ/λ_ℓ; µ_ℓ) }.

We see that ln( ∫ e^{h^Tω} P(dω) ) ≤ Φ(h; µ), and thus P ∈ S[R^d, M, Φ], as claimed. It remains to verify that the function Φ defined by (6) indeed is convex in h ∈ R^d and concave in µ ∈ M. Concavity in µ is evident. Further, the functions λ_ℓ Φ_ℓ(h_ℓ/λ_ℓ; µ_ℓ) (as perspective transformations of the convex functions Φ_ℓ(·; µ_ℓ)) are jointly convex in λ and h_ℓ, and so is Ψ(λ, h; µ) = Σ_{ℓ=1}^L λ_ℓ Φ_ℓ(h_ℓ/λ_ℓ; µ_ℓ). Thus Φ(·; µ), obtained by partial minimization of Ψ in λ, indeed is convex.
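The Hölder argument above can be probed numerically in the simplest case L = 2 with standard Gaussian marginals (Φ_ℓ(g; ·) = g^2/2) and fully dependent components ω_1 = ω_2, the worst case the semi-direct sum must cover; the grid search over λ ∈ Δ_ǫ below is a crude stand-in for the exact minimization in (6) (names ours):

```python
def semi_direct_Phi(h1, h2, eps=1e-3, grid=2000):
    # Phi(h; mu) = min over lambda in Delta_eps of sum_ell lambda_ell * Phi_ell(h_ell/lambda_ell),
    # with Phi_ell(g) = g^2/2 (standard Gaussian marginals, L = 2); grid search over lambda_1
    best = float("inf")
    for k in range(1, grid):
        lam1 = eps + (1.0 - 2.0 * eps) * k / grid
        lam2 = 1.0 - lam1
        val = h1 * h1 / (2.0 * lam1) + h2 * h2 / (2.0 * lam2)
        best = min(best, val)
    return best

def log_mgf_fully_dependent(h1, h2):
    # extreme dependence: omega_1 = omega_2 = omega ~ N(0,1), so
    # ln E[e^{h1*omega + h2*omega}] = (h1 + h2)^2 / 2
    return (h1 + h2) ** 2 / 2.0
```

Since the grid minimum upper-bounds nothing it shouldn't (it is at least the exact minimum (|h1| + |h2|)^2/2, which dominates (h1 + h2)^2/2), the semi-direct bound indeed covers arbitrary dependence between the components.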

Affine image
Let H, M, Φ be regular data, Ω = R^d be the embedding space of H, and let ω → Aω + a be an affine mapping from Ω to Ω̂ = R^{d̂}. Let us set

Ĥ = {h ∈ R^{d̂} : A^Th ∈ H}, M̂ = M, Φ̂(h; µ) = h^Ta + Φ(A^Th; µ).

Note that Ĥ, M̂ and Φ̂ are regular data. It is immediately seen that whenever the distribution P of ω belongs to R[H, M, Φ] (resp., to S[H, M, Φ]), the distribution of ω̂ = Aω + a belongs to R[Ĥ, M̂, Φ̂] (resp., to S[Ĥ, M̂, Φ̂]).

Incorporating support information
Consider the situation as follows. We are given regular data H ⊂ Ω = R^d, M, Φ and are interested in a family of distributions P known to belong to S[H, M, Φ]. In addition, we know that all distributions P from P are supported on a given closed convex set X ⊂ R^d. How could we incorporate this domain information to pass from the family S[H, M, Φ] containing P to a smaller family of the same type still containing P? We are about to give an answer in the simplest case of H = Ω. Specifically, denoting by φ_X(·) the support function of X and selecting somehow a closed convex set G ⊂ R^d containing the origin, let us set

Φ̄(h; µ) = min_{g∈G} [ Φ(h − g; µ) + φ_X(g) ],

where Φ(h; µ) : R^d × M → R is the continuous convex-concave function participating in the original regular data. Assuming that Φ̄ is real-valued and continuous on the domain R^d × M (which definitely is the case when G is a compact set such that φ_X is finite and continuous on G), note that Φ̄ is convex-concave on this domain, so that R^d, M, Φ̄ are regular data. We claim that

the family S[R^d, M, Φ̄] contains P, provided the family S[R^d, M, Φ] does so, and the first of these two families is smaller than the second one.
Verification of the claim is immediate. Let P ∈ P, so that for properly selected µ = µ_P ∈ M and for all e ∈ R^d it holds

ln( ∫ e^{e^Tω} P(dω) ) ≤ Φ(e; µ).

Besides this, for every g ∈ G we have φ_X(g) − g^Tω ≥ 0 on the support of P, whence for every h ∈ R^d one has

∫ e^{h^Tω} P(dω) ≤ ∫ e^{(h−g)^Tω + φ_X(g)} P(dω) ≤ exp{ Φ(h − g; µ) + φ_X(g) }.

Since the resulting inequality holds true for all g ∈ G, we get

ln( ∫ e^{h^Tω} P(dω) ) ≤ Φ̄(h; µ)  ∀h ∈ R^d,

implying that P ∈ S[R^d, M, Φ̄]; since P ∈ P is arbitrary, the first part of the claim is justified. The second part -- the inclusion S[R^d, M, Φ̄] ⊂ S[R^d, M, Φ] -- is readily given by the inequality Φ̄ ≤ Φ, and the latter is due to 0 ∈ G.

Illustration: distributions with bounded support revisited. In section 2.2.4, given a convex compact set X ⊂ R^d with support function φ_X, we checked that the data H = R^d, M = X, Φ(h; µ) = h^Tµ + (1/8)[φ_X(h) + φ_X(−h)]^2 are regular and such that P[X] ⊂ S[R^d, M, Φ]. Note that Φ(h; e[P]) describes well the behaviour of the logarithm F_P(h) = ln( ∫_{R^d} e^{h^Tω} P(dω) ) of the moment-generating function of P ∈ P[X] when h is small (indeed, F_P(h) = h^Te[P] + O(‖h‖^2) as h → 0), and by far overestimates F_P(h) when h is large. Utilizing the above construction with G = R^d, we replace Φ with the real-valued, convex-concave and continuous on R^d × M function

Φ̄(h; µ) = min_{g∈R^d} [ (h − g)^Tµ + (1/8)[φ_X(h − g) + φ_X(g − h)]^2 + φ_X(g) ].

It is easy to see that Φ̄(·; ·) still ensures the inclusion P[X] ⊂ S[R^d, M, Φ̄], while Φ̄(h; e[P]) ≤ φ_X(h), and φ_X(h) is a correct description of F_P(h) for large h.
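In one dimension with X = [−1, 1] (so φ_X(h) = |h|) and µ = e[P] = 0, the refined bound of this construction is precisely the Huber function: h^2/2 for |h| ≤ 1 and |h| − 1/2 for |h| ≥ 1, matching the "quadratic for small h, φ_X(h) up to a constant for large h" picture. A brute-force sketch (grid search over g; names ours):

```python
def Phi(h, mu=0.0):
    # bounded-support bound for X = [-1, 1]: phi_X(h) + phi_X(-h) = 2|h|,
    # so Phi(h; mu) = h*mu + (2|h|)^2/8 = h*mu + h^2/2
    return h * mu + h * h / 2.0

def Phi_bar(h, mu=0.0, grid=4001, span=10.0):
    # refined bound: min over g of [Phi(h - g; mu) + phi_X(g)], phi_X(g) = |g|;
    # crude grid search over g in [-span, span]
    best = float("inf")
    for k in range(grid):
        g = -span + 2.0 * span * k / (grid - 1)
        best = min(best, Phi(h - g, mu) + abs(g))
    return best
```

For small h the two bounds agree (the minimum sits at g = 0), while for large h the refined bound grows only linearly, like φ_X(h).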
3 Affine detectors and hypothesis testing

Situation
Assume we are given two collections of regular data with common Ω = R^d and H, specifically, the collections (H, M_χ, Φ_χ), χ = 1, 2. We start with a construction of a specific test for the pair of hypotheses H_1 : P ∈ P_1, H_2 : P ∈ P_2, where

P_χ = R[H, M_χ, Φ_χ], χ = 1, 2.

When building the test, we impose on the regular data in question the following

Assumption I: The regular data (H, M_χ, Φ_χ), χ = 1, 2, are such that the convex-concave function

Ψ(h; µ_1, µ_2) = (1/2)[Φ_1(−h; µ_1) + Φ_2(h; µ_2)] : H × (M_1 × M_2) → R   (7)

has a saddle point (min in h ∈ H, max in (µ_1, µ_2) ∈ M_1 × M_2).

We associate with a saddle point (h_*; µ_1^*, µ_2^*) of (7) the following entities:
• the risk

ǫ⋆ = exp{Ψ(h_*; µ_1^*, µ_2^*)};   (8)

this quantity is uniquely defined by the saddle point value of Ψ and thus is independent of how we select a saddle point;
• the detector φ_*(ω) -- the affine function of ω ∈ R^d given by

φ_*(ω) = h_*^Tω + a, a = (1/2)[Φ_1(−h_*; µ_1^*) − Φ_2(h_*; µ_2^*)].   (9)

A simple sufficient condition for the existence of a saddle point of (7) is

Condition A: The sets M_1 and M_2 are compact, and the function

Ψ̄(h) = max_{(µ_1,µ_2)∈M_1×M_2} Ψ(h; µ_1, µ_2)

is coercive on H: Ψ̄(h_i) → ∞ along every sequence h_i ∈ H with ‖h_i‖_2 → ∞.

Indeed, under Condition A, by the Sion-Kakutani Theorem [14] it holds

inf_{h∈H} max_{(µ_1,µ_2)∈M_1×M_2} Ψ(h; µ_1, µ_2) = max_{(µ_1,µ_2)∈M_1×M_2} inf_{h∈H} Ψ(h; µ_1, µ_2),

so that the optimization problems

(P): min_{h∈H} Ψ̄(h), (D): max_{(µ_1,µ_2)∈M_1×M_2} [ inf_{h∈H} Ψ(h; µ_1, µ_2) ]

have equal optimal values. Under Condition A, problem (P) clearly is a problem of minimizing a continuous coercive function over a closed set and as such is solvable; thus, Opt(P) = Opt(D) is a real. Problem (D) clearly is the problem of maximizing over a compact set an upper semicontinuous (since Ψ is continuous) function taking real values and, perhaps, the value −∞, and not identically equal to −∞ (since Opt(D) is a real), and thus (D) is solvable. Thus, (P) and (D) are solvable with common optimal value, and therefore Ψ has a saddle point.

Main observation
An immediate (and crucial!) observation is as follows: In the situation of section 3.1 and under Assumption I, one has

∫_Ω e^{−φ_*(ω)} P(dω) ≤ ǫ⋆ ∀P ∈ P_1 = R[H, M_1, Φ_1],  ∫_Ω e^{φ_*(ω)} P(dω) ≤ ǫ⋆ ∀P ∈ P_2 = R[H, M_2, Φ_2].   (10)

Proof. For every P ∈ P_1, selecting µ_1 ∈ M_1 such that ln( ∫ e^{−h_*^Tω} P(dω) ) ≤ Φ_1(−h_*; µ_1), we have

∫ e^{−φ_*(ω)} P(dω) = e^{−a} ∫ e^{−h_*^Tω} P(dω) ≤ exp{−a + Φ_1(−h_*; µ_1)} ≤ exp{−a + Φ_1(−h_*; µ_1^*)} = exp{Ψ(h_*; µ_1^*, µ_2^*)} = ǫ⋆.

Similarly, for every P ∈ P_2, selecting µ_2 ∈ M_2 such that ln( ∫ e^{h_*^Tω} P(dω) ) ≤ Φ_2(h_*; µ_2), we have

∫ e^{φ_*(ω)} P(dω) = e^{a} ∫ e^{h_*^Tω} P(dω) ≤ exp{a + Φ_2(h_*; µ_2^*)} = exp{Ψ(h_*; µ_1^*, µ_2^*)} = ǫ⋆.

Testing pairs of hypotheses
Repeated observation. Given Ω = R^d, let random observations ω_t ∈ Ω, t = 1, 2, ..., be generated as follows: "in the nature" there exists a random sequence ζ_t ∈ R^N, t = 1, 2, ..., of driving factors such that ω_t is a deterministic function of ζ^t = (ζ_1, ..., ζ_t), t = 1, 2, ...; we refer to this situation as the case of repeated observation ω^∞. Let now (H, M, Φ) be regular data with observation space Ω. We associate with these data four hypotheses on the stochastic nature of the observations ω^∞ = {ω_1, ω_2, ...}. Denoting by P_{|ζ^{t−1}} the conditional, ζ^{t−1} being given, distribution of ω_t, we say that (the distribution of) the repeated observation ω^∞ obeys the hypothesis
• H_R[H, M, Φ], if P_{|ζ^{t−1}} ∈ R[H, M, Φ] for all t and all ζ^{t−1};
• H_S[H, M, Φ], if P_{|ζ^{t−1}} ∈ S[H, M, Φ] for all t and all ζ^{t−1};
• H_Ri[H, M, Φ], if ω_1, ω_2, ... are i.i.d. with common distribution belonging to R[H, M, Φ];
• H_Si[H, M, Φ], if ω_1, ω_2, ... are i.i.d. with common distribution belonging to S[H, M, Φ].
Note that the last two hypotheses, in contrast to the first two, require the observations ω_1, ω_2, ... to be i.i.d. Note also that H_R is weaker than H_S, H_Ri is weaker than H_Si, and the "non-stationary" hypotheses H_R, H_S are weaker than their respective stationary counterparts H_Ri, H_Si.
The tests to be considered in the sequel operate with the initial fragment ω^K = (ω_1, ..., ω_K), of a prescribed length K, of the repeated observation ω^∞ = {ω_1, ω_2, ...}. We call ω^K the K-repeated observation and say that (the distribution of) ω^K obeys one of the above hypotheses if ω^K is cut off a repeated observation ω^∞ distributed according to the hypothesis in question. With this convention, we can think of H_R, H_S, H_Ri and H_Si as hypotheses about the distribution of the K-repeated observation ω^K.
Pairwise hypothesis testing from repeated observations. Assume we are given two collections of regular data (H, M_χ, Φ_χ), χ = 1, 2, with common observation space Ω = R^d and common H. Given a positive integer K and the K-repeated observation ω^K = (ω_1, ..., ω_K), we want to decide on the pair of hypotheses

H_χ = H_R[H, M_χ, Φ_χ], χ = 1, 2.

Assume that the convex-concave function (7) associated with the pair of regular data in question has a saddle point (h_*; µ_1^*, µ_2^*), and let φ_*(·), ǫ⋆ be the affine detector and its risk induced by this saddle point, see (9), (8). Let us set

φ_*^{(K)}(ω^K) = Σ_{t=1}^K φ_*(ω_t).

Consider the decision rule T_*^K for the hypotheses H_χ, χ = 1, 2, which, given the observation ω^K,
• accepts H_1 (and rejects H_2) when φ_*^{(K)}(ω^K) > 0,
• accepts H_2 (and rejects H_1) when φ_*^{(K)}(ω^K) < 0,
• in the case of a tie (when φ_*^{(K)}(ω^K) = 0) accepts, say, H_1.
In what follows, we refer to T_*^K as the test associated with the detector φ_*^{(K)}.

Proposition 3.2
In the situation in question, we have

(a) ∫ e^{−φ_*^{(K)}(ω^K)} P^K(dω^K) ≤ ǫ⋆^K whenever the distribution P^K of ω^K obeys H_1,
(b) ∫ e^{φ_*^{(K)}(ω^K)} P^K(dω^K) ≤ ǫ⋆^K whenever the distribution P^K of ω^K obeys H_2.   (11)

As a result, the test T_*^K accepts exactly one of the hypotheses H_1, H_2, and the risk of this test -- the maximal, over χ = 1, 2, probability not to accept the hypothesis H_χ when it is true (i.e., when the K-repeated observation ω^K obeys the hypothesis H_χ) -- does not exceed ǫ⋆^K.
Proof. The fact that the test always accepts exactly one of the hypotheses H_χ, χ = 1, 2, is evident. Let us denote by E_{ζ^t} the expectation w.r.t. the distribution of ζ^t, and let E_{ζ^{t+1}|ζ^t} stand for the expectation w.r.t. the conditional, ζ^t being given, distribution of ζ^{t+1}. Assuming that H_1 holds true and invoking the first inequality in (10), we have

E_{ζ^K}[ e^{−Σ_{t=1}^K φ_*(ω_t)} ] = E_{ζ^{K−1}}[ e^{−Σ_{t=1}^{K−1} φ_*(ω_t)} E_{ζ^K|ζ^{K−1}}[ e^{−φ_*(ω_K)} ] ] ≤ ǫ⋆ E_{ζ^{K−1}}[ e^{−Σ_{t=1}^{K−1} φ_*(ω_t)} ] ≤ ... ≤ ǫ⋆^K

(we have taken into account that ω_{t+1} is a deterministic function of ζ^{t+1} and that we are in the case where the conditional, ζ^t being given, distribution of ω_{t+1} belongs to P_1 = R[H, M_1, Φ_1]), and we arrive at (11.a). This inequality clearly implies that the probability to reject H_1 when the hypothesis is true is at most ǫ⋆^K: rejecting H_1 means that e^{−φ_*^{(K)}(ω^K)} ≥ 1, and by the Markov inequality the probability of this event does not exceed ǫ⋆^K. Assuming that H_2 is true and invoking the second inequality in (10), a similar reasoning shows that (11.b) holds true, so that the probability to reject H_2 when the hypothesis is true does not exceed ǫ⋆^K.

Illustration: sub-Gaussian and Gaussian cases
For χ = 1, 2, let U_χ be a nonempty closed convex set in R^d, and let 𝒰_χ be a compact convex subset of the interior of the positive semidefinite cone S^d_+. We assume that U_1 is compact. Setting

H = R^d, M_χ = U_χ × 𝒰_χ, Φ_χ(h; θ, Θ) = θ^Th + (1/2)h^TΘh,   (12)

we get two collections (H, M_χ, Φ_χ), χ = 1, 2, of regular data. As we know from section 2.2.1 (see (4)), the families SG[U_χ, 𝒰_χ] of sub-Gaussian distributions on R^d with parameters (θ, Θ) ∈ U_χ × 𝒰_χ satisfy SG[U_χ, 𝒰_χ] ⊂ S[H, M_χ, Φ_χ], and the same holds true for the families G[U_χ, 𝒰_χ] of Gaussian distributions on R^d with parameters (θ, Θ) (expectation and covariance matrix) running through U_χ × 𝒰_χ. Besides this, the pair of regular data in question clearly satisfies Condition A. Consequently, the test T_*^K given by the above construction as applied to the collections of regular data (12) is well defined and allows to decide on the hypotheses

H_χ = H_R[H, M_χ, Φ_χ], χ = 1, 2.

The same test can also be used to decide on the stricter hypotheses H^G_χ, χ = 1, 2, stating that the observations ω_1, ..., ω_K are i.i.d. and drawn from a Gaussian distribution P belonging to G[U_χ, 𝒰_χ]. Our goal now is to process in detail the situation in question and to refine our conclusions on the risk of the test T_*^1 when the Gaussian hypotheses H^G_χ are considered and the situation is symmetric, that is, when 𝒰_1 = 𝒰_2.
Observe, first, that the convex-concave function Ψ from (7) in the situation under consideration becomes

Ψ(h; θ_1, Θ_1, θ_2, Θ_2) = (1/2)[ (θ_2 − θ_1)^Th + (1/2)h^TΘ_1h + (1/2)h^TΘ_2h ].   (13)

We are interested in solutions to the saddle point problem

min_{h∈R^d} max_{(θ_1,Θ_1)∈U_1×𝒰_1, (θ_2,Θ_2)∈U_2×𝒰_2} Ψ(h; θ_1, Θ_1, θ_2, Θ_2).   (14)

From the structure of Ψ and the compactness of U_1, 𝒰_1, 𝒰_2, combined with the fact that the sets 𝒰_χ, χ = 1, 2, are comprised of positive definite matrices, it immediately follows that saddle points do exist, and the h-component of a saddle point (h_*; θ_1^*, Θ_1^*, θ_2^*, Θ_2^*) is

h_* = [Θ_1^* + Θ_2^*]^{−1}(θ_1^* − θ_2^*).   (15)

From (15) it immediately follows that the affine detector φ_*(·) and the risk ǫ⋆, as given by (8) and (9), are

φ_*(ω) = h_*^T(ω − w) + (1/4)h_*^T[Θ_1^* − Θ_2^*]h_*, w = (1/2)(θ_1^* + θ_2^*),
ǫ⋆ = exp{ −(1/4)(θ_1^* − θ_2^*)^T[Θ_1^* + Θ_2^*]^{−1}(θ_1^* − θ_2^*) }.   (16)

Note that in the symmetric case (where 𝒰_1 = 𝒰_2), there always exists a saddle point of Ψ with Θ_1^* = Θ_2^*, and the test T_*^1 associated with such a saddle point is quite transparent: it is the maximum likelihood test for two Gaussian distributions, N(θ_1^*, Θ_*), N(θ_2^*, Θ_*), where Θ_* is the common value of Θ_1^* and Θ_2^*,
and the bound ǫ⋆ for the risk of the test is nothing but the Hellinger affinity of these two Gaussian distributions, or, equivalently,

ǫ⋆ = exp{−δ^2/8}, δ = ‖[Θ_*]^{−1/2}(θ_1^* − θ_2^*)‖_2.

Applying Proposition 3.2, we arrive at the following result: In the symmetric sub-Gaussian case (i.e., in the case of (12) with 𝒰_1 = 𝒰_2), the saddle point problem (13), (14) admits a saddle point of the form (h_*; θ_1^*, Θ_*, θ_2^*, Θ_*), and the associated affine detector and its risk are given by

φ_*(ω) = (1/2)(θ_1^* − θ_2^*)^T[Θ_*]^{−1}(ω − (1/2)(θ_1^* + θ_2^*)), ǫ⋆ = exp{−δ^2/8}.

As a result, when deciding, via ω^K, on the "sub-Gaussian hypotheses" H_χ = H_R[H, M_χ, Φ_χ], χ = 1, 2, the risk of the test T_*^K does not exceed ǫ⋆^K = exp{−Kδ^2/8}. In the symmetric single-observation Gaussian case, that is, when 𝒰_1 = 𝒰_2 and we apply the test T_* = T_*^1 to the observation ω ≡ ω_1 in order to decide on the hypotheses H^G_χ, χ = 1, 2, the above risk bound can be improved: let (h_*; θ_1^*, Θ_*, θ_2^*, Θ_*) be a saddle point of (13), and let φ_* be the affine detector given by (15) and (16); then for all shifts α, β with 0 ≤ α, β ≤ δ^2/4,

Prob_{ω∼P}{φ_*(ω) ≤ α} ≤ Erf(δ/2 − 2α/δ) for all P ∈ G[U_1, 𝒰_1],
Prob_{ω∼P}{φ_*(ω) ≥ −β} ≤ Erf(δ/2 − 2β/δ) for all P ∈ G[U_2, 𝒰_2],   (21)

where

Erf(z) = (1/√(2π)) ∫_z^∞ e^{−t^2/2} dt

is the normal error function. In particular, when deciding, via a single observation ω, on the Gaussian hypotheses H^G_χ, the risk of the test T_* does not exceed Erf(δ/2), which is ≤ ǫ⋆ = exp{−δ^2/8}. The "in particular" part of the Proposition is readily given by (21) as applied with α = β = 0.
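In the symmetric Gaussian case the detector, its risk ǫ⋆, and the improved risk bound are all in closed form, so they are easy to tabulate; the numpy-based sketch below (function names are ours) assumes the saddle-point values θ_1^*, θ_2^*, Θ_* are already available:

```python
import numpy as np
from math import erfc, exp

def affine_detector(theta1, theta2, Theta):
    # symmetric case: h_* = (1/2) Theta^{-1}(theta1 - theta2),
    # phi_*(omega) = h_*^T (omega - (theta1 + theta2)/2)
    h = 0.5 * np.linalg.solve(Theta, theta1 - theta2)
    c = 0.5 * (theta1 + theta2)
    return lambda omega: float(h @ (omega - c))

def detector_risk(theta1, theta2, Theta):
    # eps_* = exp{-delta^2/8}, delta^2 = (theta1-theta2)^T Theta^{-1} (theta1-theta2):
    # the Hellinger affinity of N(theta1, Theta) and N(theta2, Theta)
    d2 = float((theta1 - theta2) @ np.linalg.solve(Theta, theta1 - theta2))
    return exp(-d2 / 8.0)

def gaussian_risk_bound(theta1, theta2, Theta):
    # improved Gaussian bound Erf(delta/2), with Erf(z) = P(N(0,1) > z) = erfc(z/sqrt(2))/2
    d2 = float((theta1 - theta2) @ np.linalg.solve(Theta, theta1 - theta2))
    return 0.5 * erfc((d2 ** 0.5 / 2.0) / 2.0 ** 0.5)
```

For θ_1^* = −θ_2^* = (1, 0), Θ_* = I_2, one gets δ = 2, ǫ⋆ = e^{−1/2} ≈ 0.607, while the Gaussian bound Erf(1) ≈ 0.159 is markedly smaller.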

Testing multiple hypotheses from repeated observations
Consider the situation as follows: we are given
• an observation space Ω = R^d and a symmetric w.r.t. the origin closed convex set H ⊂ R^d;
• J collections of regular data (H, M_j, Φ_j), 1 ≤ j ≤ J, with common Ω and H.
These data give rise to J hypotheses

H_j = H_R[H, M_j, Φ_j], 1 ≤ j ≤ J,

on the distribution of the repeated observation. On top of this, assume we are given a closeness relation -- a subset C ⊂ {1, ..., J}^2, which we assume to contain the diagonal ((j, j) ∈ C for all j ≤ J) and to be symmetric ((i, j) ∈ C if and only if (j, i) ∈ C). In the sequel, we interpret indexes i, j with (i, j) ∈ C (and the hypotheses H_i, H_j with these indexes) as C-close to each other. Our goal is, given a positive integer K and the K-repeated observation ω^K = (ω_1, ..., ω_K), to decide, "up to closeness C," on the hypotheses H_j, j = 1, ..., J (which it is convenient to think of as hypotheses about the distribution of ω^K).
Let us act as follows 3. Let us make

Assumption II: For every pair i, j, 1 ≤ i < j ≤ J, with (i, j) ∉ C, the convex-concave function

Ψ_{ij}(h; µ_i, µ_j) = (1/2)[Φ_i(−h; µ_i) + Φ_j(h; µ_j)] : H × (M_i × M_j) → R   (22)

has a saddle point (h_{ij}; µ_i^{ij}, µ_j^{ij}) (min in h ∈ H, max in (µ_i, µ_j) ∈ M_i × M_j),

and let us set, for i < j with (i, j) ∉ C,

φ_{ij}(ω) = h_{ij}^Tω + (1/2)[Φ_i(−h_{ij}; µ_i^{ij}) − Φ_j(h_{ij}; µ_j^{ij})], ǫ_{ij} = exp{Ψ_{ij}(h_{ij}; µ_i^{ij}, µ_j^{ij})},
φ_{ji}(·) ≡ −φ_{ij}(·), ǫ_{ji} = ǫ_{ij}

(cf. (8) and (9)). We further set

φ_{ij}^{(K)}(ω^K) = Σ_{t=1}^K φ_{ij}(ω_t),

so that, by Proposition 3.2,

∫ e^{−φ_{ij}^{(K)}(ω^K)} P^K(dω^K) ≤ ǫ_{ij}^K whenever P^K obeys H_i, ∫ e^{φ_{ij}^{(K)}(ω^K)} P^K(dω^K) ≤ ǫ_{ij}^K whenever P^K obeys H_j   (23)

(we have used (11) along with φ_{ij} ≡ −φ_{ji}). Furthermore, whenever [α_{ij}]_{i,j≤J} is a skew-symmetric matrix (i.e., α_{ij} = −α_{ji}), the shifted detectors φ_{ij}^{(K)}(·) + α_{ij} satisfy the scaled version of (23): specifically, for every j ≤ J such that the K-repeated observation obeys H_j and every i with (i, j) ∉ C,

∫ e^{φ_{ij}^{(K)}(ω^K) + α_{ij}} P^K(dω^K) ≤ ǫ_{ij}^K e^{α_{ij}}.

The bottom line is as follows: Proposition 3.5 In the situation in question, given a closeness C, consider the following test T_C^K deciding on the hypotheses H_j, 1 ≤ j ≤ J, on the distribution of the K-repeated observation ω^K: given a skew-symmetric shift matrix [α_{ij}]_{i,j} and the observation ω^K, T_C^K accepts all hypotheses H_i such that

(*_i): φ_{ij}^{(K)}(ω^K) + α_{ij} > 0 for all j with (i, j) ∉ C,

and rejects all hypotheses H_i for which the predicate (*_i) does not take place. Then
(i) Test T_C^K accepts some of (perhaps, none of) the hypotheses H_i, i = 1, ..., J, and all accepted hypotheses, if any, are C-close to each other. Besides, T_C^K has C-risk at most

ǫ = max_{j≤J} Σ_{i:(i,j)∉C} ǫ_{ij}^K e^{α_{ij}},

meaning that for every j_* ≤ J such that the distribution P̄^K of ω^K obeys the hypothesis H_{j_*} (i.e., the hypothesis H_{j_*} is true), the P̄^K-probability of the event "either the true hypothesis H_{j_*} is not accepted, or the list of accepted hypotheses contains a hypothesis which is not C-close to H_{j_*}" does not exceed ǫ.
(ii) The infimum of ǫ over all skew-symmetric shifts [α_{ij}] is exactly the spectral norm ‖E^{(K)}‖_{2,2} of the symmetric entry-wise nonnegative matrix

E^{(K)} = [E^{(K)}_{ij}]_{i,j≤J}, E^{(K)}_{ij} = ǫ_{ij}^K when (i, j) ∉ C, E^{(K)}_{ij} = 0 when (i, j) ∈ C.   (26)

This infimum is attained when the Perron-Frobenius eigenvector g of E^{(K)} can be selected to be positive, in which case an optimal selection of the α_{ij} is

α_{ij} = ln(g_i/g_j).

Proof. Given ω^K, for a pair i, j with (i, j) ∉ C, the predicates (*_i) and (*_j) require, respectively, φ_{ij}^{(K)}(ω^K) + α_{ij} > 0 and φ_{ji}^{(K)}(ω^K) + α_{ji} > 0; since φ_{ji}^{(K)} ≡ −φ_{ij}^{(K)} and α_{ji} = −α_{ij}, these inequalities cannot hold simultaneously. Thus, T_C^K can accept H_i and H_j only when (i, j) ∈ C. Further, let the distribution P̄^K of ω^K obey the hypothesis H_{j_*}. Invoking the above bounds on the shifted detectors and the relation φ_{jj_*}(·) ≡ −φ_{j_*j}(·), for every j ≤ J with (j_*, j) ∉ C, the P̄^K-probability of the event φ_{j_*j}^{(K)}(ω^K) + α_{j_*j} ≤ 0, or, which is the same, of the event φ_{jj_*}^{(K)}(ω^K) + α_{jj_*} ≥ 0, is at most ǫ_{jj_*}^K exp{α_{jj_*}} = ǫ_{j_*j}^K exp{−α_{j_*j}}. Using the union bound, the P̄^K-probability of the event "H_{j_*} is not accepted" is at most

Σ_{j:(j,j_*)∉C} ǫ_{jj_*}^K e^{α_{jj_*}} ≤ ǫ.

By construction of the test, when H_{j_*} is accepted and j is not C-close to j_*, H_j is not accepted (we have already seen that the test never accepts a pair of hypotheses which are not C-close to each other). Thus, the C-risk of T_C^K indeed is at most ǫ. Now, E^{(K)} is a symmetric entry-wise nonnegative matrix, so that its leading eigenvector g can be selected to be nonnegative. When g is positive, setting α_{ij} = ln(g_i/g_j), we get for every j

Σ_{i:(i,j)∉C} ǫ_{ij}^K e^{α_{ij}} = Σ_i E^{(K)}_{ij} g_i/g_j = [E^{(K)}g]_j/g_j = ‖E^{(K)}‖_{2,2},

and thus ǫ = ‖E^{(K)}‖_{2,2}. The fact that this is the smallest possible, over skew-symmetric shifts α_{ij}, value of ǫ is proved in [13]. When g is just nonnegative, consider entry-wise positive matrices E close to E^{(K)}; utilizing the (automatically strictly positive) Perron-Frobenius eigenvector of such a matrix, we, as was just explained, get skew-symmetric shifts α_{ij} with the resulting value of ǫ at most max_j Σ_i E_{ij} e^{α_{ij}}, and the right hand side can be made arbitrarily close to ‖E^{(K)}‖_{2,2} by making E close enough to E^{(K)}. Thus, we indeed can make ǫ arbitrarily close to ‖E^{(K)}‖_{2,2}.
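Item (ii) is easy to verify numerically for a small symmetric entry-wise nonnegative matrix E: with α_{ij} = ln(g_i/g_j) built from the leading eigenvector g, every column sum Σ_i E_{ij} e^{α_{ij}} collapses to the spectral norm. A sketch (names ours):

```python
import numpy as np

def optimal_shifts(E):
    # E: symmetric entry-wise nonnegative matrix (entries eps_ij^K, zeros inside C);
    # returns the skew-symmetric shifts alpha_ij = ln(g_i/g_j), g the leading eigenvector,
    # together with the spectral norm of E
    w, V = np.linalg.eigh(E)          # eigenvalues in ascending order
    g = np.abs(V[:, -1])              # Perron-Frobenius eigenvector, taken nonnegative
    alpha = np.log(np.outer(g, 1.0 / g))
    return alpha, float(w[-1])

def shifted_risk(E, alpha):
    # epsilon = max over j of sum_i E_ij * exp(alpha_ij): the C-risk bound of Proposition 3.5
    return float(np.max(np.sum(E * np.exp(alpha), axis=0)))
```

For an irreducible E (entry-wise positive off-diagonal blocks, as in the color-inference setting below) the eigenvector is strictly positive, so the logarithms are well defined.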

Special case: inferring colors
Assume that we are given J collections (H, M j , Φ j ), 1 ≤ j ≤ J, of regular data with common observation space Ω = R d and common H, and thus have at our disposal J hypotheses H j = H R [H, M j , Φ j ], on the distribution of K-repeated observation ω K . Let also the index set {1, ..., J} be partitioned into L ≥ 2 non-overlapping nonempty subsets I 1 , ..., I L ; it is convenient to think about ℓ, 1 ≤ ℓ ≤ L, as the common color of indexes j ∈ I ℓ , and that the colors of indexes j are inherited by the hypotheses H j . The Color Inference (CI) problem we want to solve amounts to decide, given K-repeated observation ω K obeying one or more of the hypotheses H 1 , ..., H J , on the color of these hypotheses. Note that it may happen that the distribution of ω K obeys a pair of hypotheses H i , H j of different colors. If it is not the case -that is, no distribution of ω K obeys a pair of hypotheses H i , H j of two distinct colors -we call the CI problem well-posed. In the well-posed case, we can speak about the color of the distribution of ω K provided this distribution obeys the union, over j = 1, ..., J, of hypotheses H j ; this is the color of (any of) the hypotheses H j obeyed by the distribution of ω K , and the CI problem is to infer this color given ω K .
In order to process the CI problem via our machinery, let us define closeness C as follows: (i, j) ∈ C ⇔ i and j are of the same color.
Assuming that the resulting C ensures validity of Assumption II, we can apply the above scheme to build test T K C . We can then convert this test into a color inference as follows. Given a K-repeated observation ω K , it may happen that T K C , as applied to ω K , accepts one or more among the hypotheses H j . In this case, by item (i) of Proposition 3.5, all accepted hypotheses are C-close to each other (in other words, are of the same color), and we claim that this is the color of the distribution of K-repeated observation we are dealing with. And if T K C , as applied to ω K , accepts nothing, we claim that the color we are interested in remains undecided.
Let us analyze the just described color inference procedure, which we denote A^K. Observe, first, that in the situation in question, assuming w.l.o.g. that the sets I_1, I_2, ..., I_L are consecutive fragments of {1, ..., J}, the matrix E^{(K)} given by (26) is naturally partitioned into L × L blocks E^{pq} = [E^{qp}]^T, 1 ≤ p, q ≤ L, where E^{pq} is comprised of the entries E^{(K)}_{ij} with i ∈ I_p, j ∈ I_q. By construction of E^{(K)}, the diagonal blocks E^{pp} are zero, and the off-diagonal blocks are entry-wise positive (since ǫ_{ij} clearly is positive for all pairs i, j of different colors). It follows that the Perron-Frobenius eigenvectors of E^{(K)} are strictly positive. This implies that for properly selected shifts α_{ij} = −α_{ji}, the quantity ǫ in Proposition 3.5 is equal to ‖E^{(K)}‖_{2,2}; in what follows we assume that the test T_C^K utilizes exactly these optimal shifts, so that we are in the case of ǫ = ‖E^{(K)}‖_{2,2}. Now, it may happen ("bad case") that ‖E^{(K)}‖_{2,2} ≥ 1; in this case Proposition 3.5 says nothing meaningful about the quality of the test T_C^K, and consequently we cannot say much about the quality of A^K. In contrast to this, we claim that

Lemma 3.1 Assume that ǫ := ‖E^{(K)}‖_{2,2} < 1. Then the CI problem is well posed, and whenever the distribution P̄^K of ω^K obeys one of the hypotheses H_j, j = 1, ..., J, A^K recovers correctly the color of P̄^K with P̄^K-probability at least 1 − ǫ.
The proof is immediate. In the good case -- when ǫ = ‖E^{(K)}‖_{2,2} < 1 -- all entries in E^{(K)} are of magnitude < 1, whence ǫ_{ij} < 1 whenever (i, j) ∉ C, see (26), so that

ǭ := max_{(i,j)∉C} ǫ_{ij} < 1.   (27)

It follows that the nonzero entries in E^{(M)} are nonnegative and ≤ ǭ^M, whence

ǫ(M) := ‖E^{(M)}‖_{2,2} ≤ J ǭ^M.   (28)

In particular, for properly selected M we have ǫ(M) < 1/2. Applying Proposition 3.5 with M in the role of K, we see that if the distribution P̄^K of ω^K obeys hypothesis H_{j_*} with some j_* ≤ J (due to the origin of our hypotheses, this is exactly the same as saying that the distribution P̄^M of ω^M obeys H_{j_*}), then with P̄^M-probability at least 1 − ǫ(M) > 1/2 the test T_C^M accepts the hypothesis H_{j_*}. It follows that if P̄^M obeys both H_{j'} and H_{j''}, then T_C^M accepts H_{j'} and H_{j''} simultaneously with positive P̄^M-probability, and since, as we have already explained, T_C^M never accepts two hypotheses of different colors simultaneously, we conclude that H_{j'} and H_{j''} are of the same color. This conclusion holds true whenever the distribution of ω^K obeys one or more of the hypotheses H_j, 1 ≤ j ≤ J, meaning that the CI problem is well posed.
Invoking Proposition 3.5, we conclude that if the distribution P̄^K of ω^K obeys, for some j_*, the hypothesis H_{j_*}, then the P̄^K-probability for T_C^K to accept H_{j_*} is at least 1 − ǫ(K); and by the preceding analysis, whenever T_C^K accepts H_{j_*} such that P̄^K obeys H_{j_*}, A^K correctly infers the color of P̄^K, as claimed.
Finally, we remark that when (27) holds, (28) implies that ǫ(K) → 0 as K → ∞, so that the CI problem is well posed, and for every desired risk level ǫ ∈ (0, 1) we can efficiently find an observation time K = K(ǫ) such that ǫ(K) ≤ ǫ. As a result, for this K the color inference procedure A^K recovers the color of the distribution P̄^K of ω^K (provided this distribution obeys some of the hypotheses H_1, ..., H_J) with P̄^K-probability at least 1 − ǫ.

Application: aggregating estimates by testing
Let us consider the following situation: • We are given I triples of regular data (H, M_i, Φ_i), 1 ≤ i ≤ I, with common H and Ω = R^d and with the parameter sets M_i sharing a common embedding space R^n; for the sake of simplicity, assume that the M_i are bounded (and thus are nonempty convex compact sets in R^n) and the continuous convex-concave functions Φ_i(h; µ) : H × M_i → R are coercive in H: Φ_i(h_t, µ) → ∞ whenever µ ∈ M_i and the sequence {h_t ∈ H, t ≥ 1} satisfies ‖h_t‖_2 → ∞, t → ∞.
• We observe a realization of the K-repeated observation ω^K = (ω_1, ..., ω_K) with i.i.d. ω_t's drawn from an unknown probability distribution P̄ known to belong to the family P. Thus, "in the nature" there exist ī ≤ I and μ̄ ∈ M_ī satisfying (29); we call μ̄ the parameter associated with P̄, and our goal is to recover from the K-repeated observation ω^K the image ḡ = Gμ̄ of μ̄ under a given linear mapping µ ↦ Gµ : R^n → R^m.
Undoubtedly, the parameter estimation problem is a fundamental problem of mathematical statistics, and as such is the subject of a huge literature. In particular, several constructions of estimators based on testing of convex hypotheses have been studied in connection with signal reconstruction [4,5,6] and estimation of linear functionals [10,11]. Our actual goal, to be addressed below, is more modest: we assume that we are given L candidate estimates g_1, ..., g_L of ḡ (these estimates could be outputs of various estimation routines applied to independent observations sampled from P̄), and our goal is to select the best of them, that is, the ‖·‖_2-closest to ḡ among g_1, ..., g_L. This is the well-known problem of aggregating estimates, and our goal is to process this aggregation problem via the Color Inference procedure from section 3.3.1.

Aggregation procedure
It should be stressed that, as stated, the aggregation problem appears to be ill-posed: there could be several pairs (ī, μ̄) satisfying (29), and the values of Gµ at the µ-components of these pairs could differ from pair to pair, so that ḡ is not necessarily well defined. One way to resolve this ambiguity would be to assume that, given P̄ ∈ P, relation (29) uniquely defines μ̄. We, however, prefer another setting: μ̄ and ī satisfying (29), same as P̄ ∈ P, are "selected by nature" (perhaps from several alternatives), and the performance of the aggregating procedure we are about to develop should be independent of the nature's selection. When processing the aggregation problem, we assume w.l.o.g. that the points g_1, ..., g_L are distinct from each other. Let us split the space R^m where Gµ takes its values into L Voronoi cells; note that V_ℓ is comprised of all points g ∈ R^m for which g_ℓ is (one of) the ‖·‖_2-closest to g among the points g_1, ..., g_L. Let us define the sets W^i_ℓ accordingly, so that the W^i_ℓ are convex compact sets. Observe that g_ℓ can be a solution to the aggregation problem (that is, the point closest to ḡ among g_1, ..., g_L) only when ḡ = Gμ̄ belongs to V_ℓ, that is, only when μ̄ ∈ W^ī_ℓ for some ī, implying that at least one of the sets W^1_ℓ, W^2_ℓ, ..., W^I_ℓ is nonempty. Whether the latter condition is indeed satisfied for a given ℓ can be found out efficiently by solving I convex feasibility problems. If the latter condition does not hold for some ℓ (let us call the associated estimate g_ℓ redundant), we can eliminate g_ℓ from the list of estimates to be aggregated without affecting the solution to the aggregation problem.
Then we can redefine the Voronoi cells for the reduced list of estimates in the role of the original one, check whether this list still contains a redundant estimate, eliminate the latter if it exists, and proceed recursively until we end up with a list of estimates (which by construction still contains all solutions to the aggregation problem) with no redundant estimates. We lose nothing when assuming that this "purification" was carried out in advance, so that already the initial list g_1, ..., g_L of estimates contains no redundant ones. Of course, we also lose nothing when assuming that L ≥ 2. Thus, from now on we assume that L ≥ 2 and that for every ℓ ≤ L at least one of the sets W^i_ℓ, 1 ≤ i ≤ I, is nonempty. Note that solving the aggregation problem to optimality is exactly the same, in terms of the sets W^i_ℓ, as identifying ℓ such that μ̄ ∈ W^i_ℓ for some i. We intend to reduce this task to solving a series of Color Inference problems. We start by presenting the principal building block of our construction, the Individual Inference procedure.
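The convex feasibility checks behind the "purification" step can be sketched as follows for the simplest polyhedral case (I = 1, G the identity, M = {µ : A_M µ ≤ b_M}). The function names are ours, and a general convex M would require a convex feasibility solver in place of the LP:

```python
import numpy as np
from scipy.optimize import linprog

def voronoi_halfspaces(gs, ell):
    """Linear inequalities cutting out the Voronoi cell V_ell of g_ell:
    ||g - g_ell|| <= ||g - g_l'||  <=>  2(g_l' - g_ell)^T g <= ||g_l'||^2 - ||g_ell||^2."""
    A, b = [], []
    for lp, g in enumerate(gs):
        if lp != ell:
            A.append(2.0 * (g - gs[ell]))
            b.append(g @ g - gs[ell] @ gs[ell])
    return np.array(A), np.array(b)

def cell_nonempty(gs, ell, A_M, b_M):
    """Check W_ell = {mu in M : mu in V_ell} != empty via an LP feasibility
    problem (objective 0); g_ell is redundant iff this returns False."""
    A_V, b_V = voronoi_halfspaces(gs, ell)
    A = np.vstack([A_V, A_M])
    b = np.concatenate([b_V, b_M])
    n = gs.shape[1]
    res = linprog(np.zeros(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return res.status == 0
```

For two estimates in the plane and a box M lying entirely on the side of g_1, the cell of g_2 intersected with M is empty, so g_2 would be eliminated as redundant.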
The Individual Inference procedure is parameterized by ℓ ∈ {1, ..., L} and a real δ > 0. Given ℓ and δ, we initialize the algorithm as follows: • mark as red all nonempty sets W^i_ℓ, along with their elements and the corresponding regular data; • look one by one at all sets W^i_{ℓ′} with i ≤ I and ℓ′ ≠ ℓ, and associate with these sets their chunks W^{iδ}_{ℓℓ′}. Note that the resulting sets are convex and compact. Whenever W^{iδ}_{ℓℓ′} is nonempty, we mark this set blue, along with its elements and the corresponding regular data (H, W^{iδ}_{ℓℓ′}, Φ_i restricted to H × W^{iδ}_{ℓℓ′}).
As a result of the above actions, we get a collection of nonempty convex compact subsets W^{sδ}_ℓ, s = 1, ..., S^δ_ℓ, of R^n and the associated regular data D^{sδ}_ℓ = (H, W^{sδ}_ℓ, Φ^{sδ}_ℓ); the sets W^{sδ}_ℓ, same as their elements and regular data D^{sδ}_ℓ, are colored red and blue. Note that sets W^{sδ}_ℓ of different colors do not intersect (since their images under the mapping G do not intersect), so that a point µ ∈ R^n gets at most one color. Note also that our collection definitely contains red components.
The Individual Inference procedure A^{ℓδ}_K infers the color of a regular data D ∈ D^δ_ℓ = {D^{sδ}_ℓ, 1 ≤ s ≤ S^δ_ℓ} given an i.i.d. K-repeated observation ω^K drawn from a distribution P ∈ S[D]: when the collection D^δ_ℓ contains both red and blue regular data, A^{ℓδ}_K is exactly the Color Inference procedure from section 3.3.1 associated with this collection and our coloring; if no blue regular data is present, A^{ℓδ}_K always infers that the color is red.
Observe that if the collection D^δ_ℓ of regular data we have built contains no blue data for some value δ̄ of δ, the same holds true for all δ ≥ δ̄. Let us define the risk ǫ(ℓ, δ) of the Individual Inference procedure with parameters ℓ, δ as follows: when δ is such that D^δ_ℓ contains no blue regular data, ǫ(ℓ, δ) = 0; otherwise ǫ(ℓ, δ) is as stated in Proposition 3.5. Note that whenever δ > 0, ℓ ≤ L, s ≤ S^δ_ℓ, µ ∈ W^{sδ}_ℓ and a probability distribution P satisfies the corresponding inclusion, the quantity ǫ(ℓ, δ), by construction, is an upper bound on the P-probability of the event "as applied to the observation ω^K = (ω_1, ..., ω_K) with ω_1, ..., ω_K drawn, independently of each other, from P, A^{ℓδ}_K does not correctly recover the color of µ." Observe that ǫ(ℓ, δ) = 0 for large enough values of δ (since for large δ the collection D^δ_ℓ contains no blue data; recall that the parameter sets M_i are bounded). Besides this, we claim that ǫ(ℓ, δ) is nonincreasing in δ > 0.
The aggregation procedure we propose is as follows: given a tolerance ǫ ∈ (0, 1/2), for every ℓ = 1, ..., L we specify δ_ℓ > 0, the smaller the better, in such a way that ǫ(ℓ, δ_ℓ) ≤ ǫ/L (see footnote 7). Given the observation ω^K, we run the procedures A^{ℓδ_ℓ}_K, 1 ≤ ℓ ≤ L. Whenever A^{ℓδ_ℓ}_K returns a color, we assign it to the index ℓ and to the vector g_ℓ, so that after all ℓ's are processed, some g_ℓ's get color "red," some get color "blue," and some get no color. The aggregation procedure returns, as the solution ĝ(ω^K), a (whatever) red vector if one was discovered, and returns, say, g_1 otherwise.
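The selection of δ_ℓ described in footnote 7 can be sketched as follows. Here `risk` stands for the black-box map δ ↦ ǫ(ℓ, δ), nonincreasing in δ; the function name and default constants are ours:

```python
def choose_delta(risk, delta0, eps_over_L, kappa=0.5, tiny=1e-6):
    """Footnote-7 style selection of delta_ell: delta0 is assumed large enough
    that risk(delta0) = 0.  Walk down the progression delta_i = kappa^i * delta0
    and return either the last term with risk <= eps/L, or the first term that
    is negligibly small, whichever happens first."""
    delta = delta0
    while True:
        nxt = kappa * delta
        if nxt < tiny:
            return nxt          # first negligibly small term of the progression
        if risk(nxt) > eps_over_L:
            return delta        # last term with risk(delta) <= eps/L
        delta = nxt
```

Since ǫ(ℓ, ·) is nonincreasing in δ, the walk stops exactly when the risk first exceeds the per-component budget ǫ/L.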
Proposition 4.1 In the situation and under the assumptions described in the beginning of section 4, let ω^K = (ω_1, ..., ω_K) be a K-element i.i.d. sample drawn from a probability distribution P̄ which, taken along with some ī ≤ I and μ̄ ∈ M_ī, satisfies (29). Then the P̄^K-probability of the event (33) is at least 1 − ǫ.
Simple illustration. Let I = 1, H = Ω = R^d, and let M = M_1 be a nonempty convex compact subset of R^d. Further, suppose that Φ(h; µ) := Φ_1(h; µ) = h^T µ + (1/2) h^T Θ h : H × M → R, where Θ is a given positive definite matrix. We are also given a K-element i.i.d. sample ω^K drawn from a sub-Gaussian distribution P with sub-Gaussianity parameters (µ, Θ). Let also Gµ ≡ µ, so that the aggregation problem we are interested in reads: "given L estimates g_1, ..., g_L of the expectation µ of a sub-Gaussian random vector ω with sub-Gaussianity parameters (µ, Θ), with a known Θ ≻ 0, and a K-repeated i.i.d. sample ω^K from the distribution of the vector, we want to select the g_ℓ which is ‖·‖_2-closest to the true expectation µ of ω." From now on we assume that g_1, ..., g_L ∈ M (otherwise, projecting the estimates onto M, we could provably improve their quality) and that g_1, ..., g_L are distinct from each other.
Footnotes: 5. The sets W^{iδ}_{ℓℓ′} shrink as δ grows, so some of the blue sets W^{iδ}_{ℓℓ′} which are nonempty at δ = δ′′ can become empty when δ increases to δ′. 6. Indeed, by Proposition 3.5, these entries are obtained from saddle point values of some convex-concave functions by a monotone transformation; it is immediately seen that as δ grows, these functions and the domains of the minimization argument remain intact, while the domains of the maximization argument shrink, so that the saddle point values cannot increase. 7. For instance, we could start with δ = δ_0 large enough to ensure that ǫ(ℓ, δ_0) = 0, and select δ_ℓ as either the last term of the progression δ_i = κ^i δ_0, i = 0, 1, ..., for some κ ∈ (0, 1), such that ǫ(ℓ, δ_i) ≤ ǫ/L, or the first term of this progression which is "negligibly small" (say, less than 10^{-6}), depending on what happens first.
In our situation, the sets (30), (31), (32) and the functions Φ_i simplify (we are in the case of I = 1 and thus suppress the index i in the notation for the W's and Φ). Note that the Individual Inference procedure A^{ℓδ}_K deals with exactly one red hypothesis, H_Si[R^d, W_ℓ, Φ], and at most L − 1 blue hypotheses H_Si[R^d, W^δ_{ℓℓ′}, Φ] associated with the nonempty sets W^δ_{ℓℓ′}, ℓ′ ≠ ℓ. Applying the construction from section 3.3.1, we arrive at the following aggregation routine (below ℓ, ℓ′ vary in {1, ..., L}): • We set up the detectors ψ^{(K)}_{ℓℓ′}; • Given ω^K, for 1 ≤ ℓ ≤ L we assign the vector g_ℓ color "red" if ψ^{(K)}_{ℓℓ′}(ω^K) > 0 for all ℓ′ ≠ ℓ, otherwise we do not assign g_ℓ any color; • If red vectors were found, we output (any) one of them as the solution to the aggregation problem; if no red vectors were found, we output, say, g_1 as the solution.
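A minimal numeric sketch of this routine in the Gaussian case (G the identity). The pairwise detectors below are taken to be the classical balanced affine detectors for Gaussian means, ψ_{ℓℓ′}(ω) = (g_ℓ − g_{ℓ′})^T Θ^{-1}(ω − (g_ℓ + g_{ℓ′})/2); this choice is an assumption made for illustration, not the exact detectors produced by the saddle point machinery of section 3.3.1:

```python
import numpy as np

def aggregate(gs, omega, Theta_inv):
    """Toy aggregation routine: color g_l red iff the cumulative pairwise
    detector sum_t psi_{ll'}(omega_t) is positive for every l' != l; output
    any red vector, and g_1 (index 0) if none was found."""
    reds = []
    for l, g in enumerate(gs):
        if all(
            np.sum((omega - 0.5 * (g + gp)) @ (Theta_inv @ (g - gp))) > 0
            for lp, gp in enumerate(gs) if lp != l
        ):
            reds.append(l)
    return reds[0] if reds else 0
```

With the true mean at the origin, the estimate nearest to it passes all its pairwise tests with overwhelming probability already for moderate K.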
Proposition 4.1 as applied to the situation in question states that whenever ω_1, ω_2, ..., ω_K are drawn, independently of each other, from a sub-Gaussian distribution P with parameters (µ, Θ), then with P-probability at least 1 − ǫ the result ĝ(ω^K) of the above aggregation routine satisfies a relation which essentially recovers the classical ℓ_2 oracle inequality (cf. [12, Theorem 4]).
5 Beyond the scope of affine detectors

Lifted detectors
The tests developed in sections 3.2.2 and 3.3 were based on affine detectors: affine functions φ(ω) associated with pairs of composite hypotheses H_1: P ∈ P_1, H_2: P ∈ P_2 on the probability distribution P of the observation ω ∈ Ω = R^d. Such detectors were built to satisfy relations (35) with as small an ǫ_⋆ as possible (cf. (10)), and the affinity of φ is of absolutely no importance here: all constructions in sections 3.2.2, 3.3 were based upon the availability of pairwise detectors φ, affine or not, satisfying, for the respective pairs of composite hypotheses, relations (10) with some known ǫ_⋆. So far, affinity of the detectors was utilized only when building detectors satisfying (35) via the generic scheme presented in section 3.1. Now note that given a random observation ζ taking values in some R^d, along with a deterministic function Z(ζ): R^d → R^D, we can convert an observation ζ into an observation ω = (ζ, Z(ζ)). Here ω is a deterministic transformation of ζ which "remembers" ζ, so that to make statistical inferences from observations ζ is exactly the same as to make them from observations ω. However, detectors which are affine in ω can be nonlinear in ζ: for instance, for Z(ζ) = ζζ^T, detectors affine in ω are exactly detectors quadratic in ζ. We see that within the framework of our approach, passing from ζ to ω allows us to consider a wider family of detectors and thus to arrive at a wider family of tests. The potential bottleneck here is the necessity to bring the "augmented" observations (ζ, Z(ζ)) into the scope of our setup.
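The lifting itself is trivial to implement; the point of the sketch below (function names are ours) is merely that a detector affine in ω = (ζ, Z(ζ)) with Z(ζ) = ζζ^T evaluates to a quadratic function of ζ:

```python
import numpy as np

def lift(zeta):
    """Augmented observation omega = (zeta, Z(zeta)) with Z(zeta) = zeta zeta^T."""
    return zeta, np.outer(zeta, zeta)

def detector(a, h, H, omega):
    """A detector affine in omega = (z, Z): a + h^T z + (1/2) Tr(H Z).
    Composed with the lifting, it equals the quadratic a + h^T zeta
    + (1/2) zeta^T H zeta of the original observation zeta."""
    z, Z = omega
    return a + h @ z + 0.5 * np.trace(H @ Z)
```

Evaluating the affine detector on the lifted observation and evaluating the corresponding quadratic form on ζ directly give, of course, the same number.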
Example: distributions with bounded support. Consider the case where the distribution P of our observation ζ ∈ R^d belongs to a family P of Borel probability distributions supported on a given bounded set, for the sake of simplicity, on the unit Euclidean ball B of R^d. Given a continuous function Z(·): R^d → R^D, our goal is to cover the family P^+ of distributions P^+[P] of ω = (ζ, Z(ζ)) induced by the distributions P ∈ P of ζ by a family P[H, M, Φ] (or S[H, M, Φ]) associated with some regular data, thus making the machinery we have developed so far applicable to the family of distributions P^+. Assuming w.l.o.g. that the appropriate normalization holds, observe that for P ∈ P the distribution P^+[P] is sub-Gaussian with sub-Gaussianity parameters (θ_P, Θ = 2I_{d+D}), where

It follows that
If we can point out a convex compact set U in R^d × R^D such that (36) holds, then we can specify the regular data accordingly. How useful this (by itself pretty crude) observation is in the context of our approach depends on how much information on P can be "captured" by a properly selected convex compact set U satisfying (36). We are about to consider in more detail "quadratic lifting," that is, the case where Z(ζ) = ζζ^T.

Quadratic lifting, Gaussian case
Consider the situation where we are given • a nonempty bounded set U in R^m; • a nonempty convex compact subset U of the positive semidefinite cone S^d_+; • a matrix Θ_* ≻ 0 such that Θ ⪯ Θ_* for all Θ ∈ U; • an affine mapping u ↦ A(u) = A[u; 1]: R^m → Ω = R^d, where A is a given d × (m + 1) matrix. Now, a pair (u ∈ U, Θ ∈ U) specifies the Gaussian random vector ζ ∼ N(A(u), Θ) and thus specifies a Borel probability distribution P[u, Θ] of (ζ, ζζ^T). Let Q(U, U) be the family of probability distributions on Ω = R^d × S^d stemming in this fashion from Gaussian distributions with parameters from U × U. Our goal is to cover the family Q(U, U) by a family of the type S[H, M, Φ], which, as was already explained, would allow us to use the machinery developed so far in order to decide on pairs of composite Gaussian hypotheses via tests based on detectors quadratic in ζ.
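For reference, sampling from P[u, Θ] is straightforward (a sketch with hypothetical names: ζ is drawn from N(A[u; 1], Θ) and reported together with its quadratic lift):

```python
import numpy as np

def sample_Q(A, u, Theta, K, rng):
    """Draw K observations from P[u, Theta]: zeta ~ N(A[u;1], Theta),
    each returned as the pair (zeta, zeta zeta^T)."""
    mean = A @ np.append(u, 1.0)        # A(u) = A[u; 1]
    C = np.linalg.cholesky(Theta)       # Theta = C C^T
    obs = []
    for _ in range(K):
        zeta = mean + C @ rng.normal(size=mean.size)
        obs.append((zeta, np.outer(zeta, zeta)))
    return obs
```

Each observation lives in R^d × S^d, the space on which the detectors of this section are affine.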
It is convenient to represent a linear form on Ω = R^d × S^d as h^T z + (1/2) Tr(HZ), where (h, H) ∈ R^d × S^d is the "vector of coefficients" of the form, and (z, Z) ∈ R^d × S^d is its argument.
We denote by b = [0; 0; ...; 0; 1] ∈ R^{m+1} the last basic orth of R^{m+1}. We assume that for some δ ≥ 0 the corresponding bound holds, ‖·‖ being the spectral norm. Observe that for every Θ ∈ U we have 0 ⪯ Θ ⪯ Θ_*. For the proof, see Appendix A.3. Note that every quadratic constraint u^T Q u + 2q^T u + r ≤ 0 which is valid on U induces a linear constraint Tr([Q, q; q^T, r] Z) ≤ 0 which is valid for all matrices Z(u), u ∈ U, and thus can be incorporated into the description of Z.
Special case. In the situation of Proposition 5.1, let u vary in a convex compact set U. In this case, the simplest way to define Z such that Z(u) ∈ Z for all u ∈ U is to set Z accordingly. Let us compute the function Φ(h, 0; Θ). Setting A = [Ā, a], where a is the last column of A, a direct computation yields an explicit expression. Now imagine that we are given two collections (A_χ, U_χ, U_χ), χ = 1, 2, of the (A, U, U)-data, with the same number of rows in A_1 and A_2, and that we have associated with U_χ upper bounds Θ_{*,χ} on the matrices Θ ∈ U_χ. We want to use Proposition 5.1 to build an affine detector capable of deciding on the hypotheses H_1 and H_2 on the distribution of the observation ω, with H_χ stating that this observation is N(A_χ[u; 1], Θ) with some u ∈ U_χ and Θ ∈ U_χ. To this end, we have to solve the convex-concave saddle point problem in which Φ_1, Φ_2 are the functions associated, as explained in Proposition 5.1, with the first, respectively the second, collection of the (A, U, U)-data. In view of the above computation, this boils down to solving a convex minimization problem in h. An optimal solution h_* to this problem induces an affine detector, and the risk of this detector on the pair of families G_1, G_2 of Gaussian distributions in question is exp{SV}.
On the other hand, we could build an affine detector for the families G_1, G_2 by the machinery from section 3.2.3, that is, by solving the corresponding convex-concave saddle point problem; the risk of the resulting affine detector on G_1, G_2 is exp{SV̄}. Now assume that (!) U_χ, χ = 1, 2, have ⪰-maximal elements, and these elements are selected as Θ_{*,χ}. In this case the above computation says that SV and SV̄ are the minimal values of functions which are identically equal to each other, and thus SV = SV̄. Thus, in the case of (!) the machinery of Proposition 5.1 produces a quadratic detector which can be only better, in terms of risk, than the affine detector yielded by Proposition 3.3.
Numerical illustration. To get an impression of the performance of quadratic detectors as compared to affine ones in the case of (!), we present here the results of an experiment where the U_χ = {Θ_{*,χ} = σ²_χ I_8} are singletons. The risks of affine, quadratic and "purely quadratic" (with h set to 0) detectors on the pair G_1, G_2 of families of Gaussian distributions, with G_χ = {N(θ, Θ_{*,χ}): θ ∈ A_χ U^ρ_χ}, are given in Table 1. We see that • when deciding on families of Gaussian distributions with common covariance matrix and expectations varying in convex sets associated with the families, passing from the affine detectors described by Proposition 3.3 to quadratic detectors does not affect the risk (first row in the table). This is a general fact: by the results of [13], in the situation in question affine detectors are risk-optimal among all possible detectors.
• when deciding on families of Gaussian distributions in the case where distributions from different families can have close expectations (third row in the table), affine detectors are useless, while quadratic ones are not, provided that Θ_{*,1} differs from Θ_{*,2}. This is how it should be: we are in the case where the first moments of the distribution of the observation bear no definitive information on the family this distribution belongs to, which makes affine detectors useless. In contrast, quadratic detectors are able to utilize the information (valuable when Θ_{*,1} ≠ Θ_{*,2}) "stored" in the second moments of the observation.
• "in general" (second row in the table), both affine and purely quadratic components in a quadratic detector are useful; suppressing one of them can increase significantly the attainable risk.

Quadratic lifting: Bounded observations
It is convenient to represent a "quadratically lifted observation" (ζ, ζζ T ) by the matrix Assume that all distributions from P are supported on the solution set X of a system of quadratic constraints f ℓ (ζ) := ζ T A ℓ ζ + 2a T ℓ ζ + α ℓ ≤ 0, 1 ≤ ℓ ≤ L, where A ℓ ∈ S d are such that ℓλ ℓ A ℓ ≻ 0 for properly selectedλ ℓ ≥ 0; as a consequence, X is bounded (sincef (ζ) := L ℓ=1λ ℓ f ℓ (ζ) is a strongly convex quadratic form which is ≤ 0 on X). Setting observe that the distribution P + of Z(ζ) induced by a distribution P ∈ P is supported on the closed convex set which is bounded 9 . The support function of this set is Recalling Example 4 in section 2.2 and section 2.3.5, we arrive at the regular data where e[P + ] is the expectation of P + , and therefore for all λ ∈ R. Hence, which combines with the above computation to yield the relation ln(γ) ≤ a + ln(cosh(η) + β sinh(η)), and all we need to verify is that Indeed, if (41) holds true (40) implies that which, recalling what γ, ν and η are, is exactly what we want to prove. Verification of (41) is as follows. The left hand side in (41) is convex in β for β > − cosh(η) sinh(η) containing, due to η > 0, the range of β in (41). Furthermore, the minimum of the left hand side of (41) over β > − coth(η) is attained when β = sinh(η)−η cosh(η) η sinh (η) and is equal to r(η) = 1 2 η 2 + 1 − η coth(η) − ln(sinh(η)/η).
All we need to prove is that the latter quantity is nonnegative whenever η > 0. We have r′(η) ≥ 0 for η > 0, and since r(+0) = 0, we get r(η) ≥ 0 when η > 0.
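The claim r(η) ≥ 0 is easy to probe numerically (a sanity check, not a replacement for the proof; near η = 0 one has r(η) ≈ η⁴/36, so the values are tiny but nonnegative):

```python
import math

def r(eta):
    """r(eta) = eta^2/2 + 1 - eta*coth(eta) - ln(sinh(eta)/eta),
    the concluding quantity of the proof, claimed nonnegative for eta > 0."""
    return 0.5 * eta**2 + 1.0 - eta / math.tanh(eta) - math.log(math.sinh(eta) / eta)
```

On a grid of values the function is nonnegative and increasing, consistent with r(+0) = 0 and r′ ≥ 0.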

A.2 Proof of Proposition 4.1
Let D_ℓ denote the collection of regular data processed by A^{ℓδ_ℓ}_K. Let, further, the K-repeated random observation ω^K, the probability distribution P̄, the index ī ≤ I and the vector μ̄ ∈ M_ī be as in the premise of the proposition, let ḡ = Gμ̄, and let ℓ_* ≤ L be such that ‖ḡ − g_{ℓ*}‖_2 ≤ ‖ḡ − g_ℓ‖_2, ℓ = 1, ..., L. Finally, let E be the event "for every ℓ = 1, ..., L such that P̄ obeys one or more of the hypotheses H_Si[D], D ∈ D_ℓ, processed by the procedure A^{ℓδ_ℓ}_K, this procedure correctly recovers the color of these hypotheses." By construction and due to the union bound, the P̄^K-probability of E is at least 1 − ǫ. It follows that all we need in order to verify the claim of the proposition is to show that when ω^K ∈ E, relation (33) takes place. Thus, let us fix ω^K ∈ E.
Observe, first, that ḡ ∈ V_{ℓ*}, whence μ̄ ∈ W^ī_{ℓ*}. Thus, when running A^{ℓ*δ_{ℓ*}}_K, P̄^K obeys a red one among the hypotheses H_Si[D], D ∈ D_{ℓ*}, processed by A^{ℓ*δ_{ℓ*}}_K, and since we are in the case of ω^K ∈ E, g_{ℓ*} gets a color, namely, color "red." By construction of our aggregation procedure, its output is either g_{ℓ*}, in which case (33) clearly holds true, or another vector, let it be denoted g_{ℓ+} (ℓ+ ≠ ℓ*), which was also assigned red color. We claim that the vector ḡ = Gμ̄ satisfies relation (42). Indeed, otherwise we would have μ̄ ∈ W^{īδ_{ℓ+}}_{ℓ+ℓ*}, meaning that P̄^K obeys a hypothesis H_Si[D] processed when running A^{ℓ+δ_{ℓ+}}_K (i.e., with D ∈ D_{ℓ+}), and this hypothesis is blue. Since we are in the case of E, this implies that the color inferred by A^{ℓ+δ_{ℓ+}}_K is "blue," which is the desired contradiction. Now we are nearly done: indeed, g_{ℓ*} is the ‖·‖_2-closest to ḡ point among g_1, ..., g_L, implying (43). Recalling what u_{ℓ*ℓ+} and v_{ℓ*ℓ+} are, relations (42) and (43) tell us the following story about the points ḡ, g_* := g_{ℓ*}, g_+ := g_{ℓ+} and the hyperplane H = {g ∈ R^m : u^T_{ℓ*ℓ+} g = v_{ℓ*ℓ+}}: g_* and g_+ are symmetric to each other w.r.t. H, and ḡ is at distance at most δ := δ_{ℓ+} from H. An immediate observation is that in this case the required bound follows, and we arrive at (33).
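The concluding geometric step can be checked numerically. We read the "immediate observation" as the inequality ‖ḡ − g_+‖_2 ≤ ‖ḡ − g_*‖_2 + 2δ; this is our reading, since the paper's display is not reproduced in this excerpt:

```python
import numpy as np

def geometric_step_holds(trials=1000, dim=4, seed=1):
    """Numeric check: if g*, g+ are symmetric w.r.t. a hyperplane H and gbar
    lies within distance delta of H, then ||gbar - g+|| <= ||gbar - g*|| + 2*delta."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u = rng.normal(size=dim); u /= np.linalg.norm(u)   # unit normal of H
        m = rng.normal(size=dim)                           # a point on H
        s = rng.uniform(0.1, 3.0)                          # ||g* - g+||
        g_star, g_plus = m + 0.5 * s * u, m - 0.5 * s * u  # symmetric w.r.t. H
        delta = rng.uniform(0.0, 1.0)
        w = rng.normal(size=dim); w -= (w @ u) * u         # tangential component
        gbar = m + rng.uniform(-1.0, 1.0) * delta * u + w  # dist(gbar, H) <= delta
        lhs = np.linalg.norm(gbar - g_plus)
        rhs = np.linalg.norm(gbar - g_star) + 2.0 * delta
        if lhs > rhs + 1e-9:
            return False
    return True
```

The inequality follows from ‖ḡ − g_+‖² − ‖ḡ − g_*‖² = 2‖g_* − g_+‖ · dist(ḡ, H) together with the triangle inequality, and the random trials confirm it.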
4^0. It remains to prove that Φ is coercive in (h, H). Let Θ ∈ U and (h_i, H_i) ∈ H_γ with ‖(h_i, H_i)‖ → ∞ as i → ∞, and let us prove that Φ(h_i, H_i; Θ) → ∞. Looking at the expression for Φ(h_i, H_i; Θ), it is immediately seen that all terms in this expression, except for the terms coming from φ_Z(·), remain bounded as i grows, so that all we need to verify is that the φ_Z(·)-term goes to ∞ as i → ∞. Observe that the H_i are uniformly bounded due to (h_i, H_i) ∈ H_γ, implying that ‖h_i‖_2 → ∞ as i → ∞. Denoting by e the last basic orth of R^{d+1} and by b, as before, the last basic orth of R^{m+1}, note that, by construction, B^T e = b. Now let W ∈ Z, so that W_{m+1,m+1} = 1. Taking into account the structure of the matrices involved, with α_i ≥ α > 0 and ‖R_i‖_F ≤ C(1 + ‖h_i‖_2), we obtain a lower bound on the φ_Z(·)-term whose concluding quantity tends to ∞ as i → ∞ due to ‖h_i‖_2 → ∞, i → ∞.