On the conditions used to prove oracle results for the Lasso

Oracle inequalities and variable selection properties for the Lasso in linear models have been established under a variety of different assumptions on the design matrix. We show in this paper how the different conditions and concepts relate to each other. The restricted eigenvalue condition (Bickel et al., 2009) and the slightly weaker compatibility condition (van de Geer, 2007) are sufficient for oracle results. We argue that both conditions allow for a fairly general class of design matrices. Hence, optimality of the Lasso for prediction and estimation holds in more general situations than what appears from coherence (Bunea et al., 2007b,c) or restricted isometry (Candes and Tao, 2005) assumptions.


Introduction
In this paper we revisit some sufficient conditions for oracle inequalities for the Lasso in regression and examine their relations. Such oracle results have been derived by, among others, [4,19,21,2,16], and for the related Dantzig selector by [11] and [13]. Furthermore, variable selection properties of the Lasso have been studied by [15,23,14,22] and [20]. Our main aim is to present the relations (of which some are known and some are new) between the various conditions, and to emphasize that sufficient conditions for oracle inequalities hold in fairly general situations.
The Lasso, which we at first only study in a noiseless situation, is defined as follows. Let X be some measurable space, Q be a probability measure on X, and ‖·‖ be the L₂(Q) norm. Consider a fixed dictionary of functions {ψ_j}_{j=1}^p ⊂ L₂(Q), and the linear functions f_β(·) := Σ_{j=1}^p β_j ψ_j(·), β ∈ ℝ^p. The target is a fixed linear function f⁰ := f_{β⁰} for some β⁰ ∈ ℝ^p.
We let S := {j : β⁰_j ≠ 0} be its active set, and s := |S| be the sparsity index of f⁰.
For some fixed λ > 0, the Lasso for the noiseless problem is
β* := arg min_{β ∈ ℝ^p} { ‖f_β − f⁰‖² + λ‖β‖₁ },   (1.1)
where ‖·‖₁ is the ℓ₁-norm. We write f* := f_{β*} and let S* be the active set of the Lasso.
Let us make precise what we mean by an oracle inequality. With β a vector in ℝ^p and N ⊂ {1, . . . , p} an index set, we denote by β_{j,N} := β_j 1{j ∈ N}, j = 1, . . . , p, the vector with non-zero entries only in the set N (hence, for example, β⁰_S = β⁰). Definition: Sparsity constant and sparsity oracle inequality. The sparsity constant φ₀ is the largest value φ₀ > 0 such that the Lasso, with solution β* and fit f*, satisfies the φ₀-sparsity oracle inequality. Restricted eigenvalue conditions (see [12,13] and [2]) have been developed to derive lower bounds for the sparsity constant. We will present these conditions in the next section. Irrepresentable conditions (see [23]) are tailored for proving variable selection, i.e., showing that S* = S, or, more modestly, that the symmetric difference S* △ S is small.
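To fix ideas, the kind of inequality meant here simultaneously bounds the prediction error and the ℓ₁-estimation error in terms of λ²s; a representative form (with illustrative constants, which need not match the paper's displayed definition) is
$$
\|f^* - f^0\|^2 + \lambda\,\|\beta^* - \beta^0\|_1 \;\le\; \frac{C\,\lambda^2 s}{\phi_0^2},
$$
for a small universal constant C (for instance C = 4).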

Organization of the paper and summary
We start out, in Section 2, with an overview of the conditions we will compare, and some pointers to the literature. We then examine their relations. Figure 1 below shows the various relations at a single glance. Sections 3-9 present a proof of each of the indicated (numbered) implications. Our conclusion is that (perhaps not surprisingly) the compatibility condition is the least restrictive, and that many sufficient conditions for compatibility may be somewhat too harsh (see also our discussion in Section 12).
We illustrate in Section 10 that one may check compatibility using approximations. We give several examples, where the compatibility condition holds. We also give an example where the compatibility condition yields a major improvement to the oracle result, as compared to the restricted eigenvalue condition. The noisy case, studied briefly in Section 11, poses no additional theoretical difficulties. A lower bound on the regularization parameter λ is required, and implications become somewhat more technical because all further results depend on this lower bound. Section 12 discusses the results.

Some notation
For a vector v, we use the usual notation ‖v‖_q^q := Σ_j |v_j|^q for 1 ≤ q < ∞, and ‖v‖_∞ := max_j |v_j|. The Gram matrix is Σ := ∫ ψᵀψ dQ, so that ‖f_β‖² = βᵀΣβ.
We sometimes identify β_N with the |N|-dimensional vector {β_j}_{j∈N} when no confusion can arise.
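As a concrete illustration of this notation (a minimal numerical sketch, not taken from the paper; the design points, the monomial dictionary and all names are made up for the example), one may take Q to be the empirical measure of n design points and verify the identity ‖f_β‖² = βᵀΣβ:

```python
import numpy as np

# Minimal numerical sketch (not from the paper): Q is the empirical measure of
# n design points, the dictionary consists of monomials, and we check the
# identity ||f_beta||^2 = beta' Sigma beta from the notation above.
rng = np.random.default_rng(0)
n, p = 200, 5
x = rng.uniform(-1.0, 1.0, size=n)             # design points in X = [-1, 1]
Psi = np.vstack([x**j for j in range(p)]).T    # psi_j(x) = x^j, shape (n, p)

Sigma = Psi.T @ Psi / n                        # Gram matrix with respect to Q_n
beta = rng.normal(size=p)
f_beta = Psi @ beta                            # f_beta evaluated at the design points

assert np.allclose(np.mean(f_beta**2), beta @ Sigma @ beta)
```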

An overview of definitions
The definitions we will present are conditions on the Gram matrix Σ, namely conditions on quadratic forms βᵀΣβ, where β is restricted to lie in some subset of ℝ^p. We first take the set of restrictions R(L, S) := {β : ‖β_{S^c}‖₁ ≤ L‖β_S‖₁}, where L > 0 is a constant. The compatibility condition we discuss here is from [18]. Its name is based on the idea that we require the ℓ₁-norm and the L₂(Q)-norm to be somehow compatible.
Definition: Compatibility condition. We call
φ²_compatible(L, S) := min { s‖f_β‖² / ‖β_S‖₁² : β ∈ R(L, S), β_S ≠ 0 }
the (L, S)-compatibility constant (a restricted ℓ₁-eigenvalue); the (L, S)-compatibility condition holds if φ_compatible(L, S) > 0. The bound ‖β_S‖₁ ≤ √s ‖β_S‖₂ (which holds for any β) leads to two successively stronger versions of restricted eigenvalues. We moreover consider supersets N of S with size at most N. Throughout our definitions, N ≥ s, and we will only invoke N = s and N = 2s (for simplicity). Define the sets of restrictions
R_adaptive(L, S) := {β : ‖β_{S^c}‖₁ ≤ L√s ‖β_S‖₂},
and, for N ⊃ S with |N| ≤ N,
R(L, S, N) := {β ∈ R(L, S) : min_{j∈N\S} |β_j| ≥ ‖β_{N^c}‖_∞},
R_adaptive(L, S, N) := {β ∈ R_adaptive(L, S) : min_{j∈N\S} |β_j| ≥ ‖β_{N^c}‖_∞}.
If N = s, we necessarily have N\S = ∅. In that case, we let min_{j∈N\S} |β_j| = ∞, i.e., R(L, S, S) = R(L, S) (and R_adaptive(L, S, S) = R_adaptive(L, S)).
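The compatibility constant is itself the value of an optimization problem, and for small problems it can be evaluated numerically. The following sketch (an illustration under the definition as written here; cvxpy and all names are our own choices, not part of the paper) splits the minimization over the 2^s sign patterns of β_S, which makes each sub-problem convex:

```python
import itertools
import numpy as np
import cvxpy as cp

def compatibility_constant_sq(Sigma, S, L=1.0):
    """phi_compatible^2(L, S): minimize s * b' Sigma b over ||b_Sc||_1 <= L ||b_S||_1,
    ||b_S||_1 = 1, by enumerating the 2^s sign patterns of b_S (small |S| only)."""
    p = Sigma.shape[0]
    s = len(S)
    Sc = [j for j in range(p) if j not in S]
    # factor Sigma = R'R so that b' Sigma b = ||R b||_2^2
    w, V = np.linalg.eigh((Sigma + Sigma.T) / 2.0)
    R = np.sqrt(np.clip(w, 0.0, None))[:, None] * V.T
    best = np.inf
    for signs in itertools.product([-1.0, 1.0], repeat=s):
        sgn = np.array(signs)
        b = cp.Variable(p)
        cons = [cp.multiply(sgn, b[S]) >= 0,                 # b_S in the chosen orthant
                cp.sum(cp.multiply(sgn, b[S])) == 1,         # hence ||b_S||_1 = 1
                cp.norm1(b[Sc]) <= L]                        # ||b_Sc||_1 <= L ||b_S||_1
        prob = cp.Problem(cp.Minimize(s * cp.sum_squares(R @ b)), cons)
        prob.solve()
        if prob.value is not None:
            best = min(best, prob.value)
    return best

# toy usage: equicorrelated design
p, rho = 8, 0.3
Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
print(compatibility_constant_sq(Sigma, S=[0, 1, 2], L=1.0))
```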
The restricted eigenvalue condition is from [2] and [13]. We complement it with the adaptive restricted eigenvalue condition. The name of the latter is inspired by the fact that this strengthened version is useful for the development of theory for the adaptive Lasso [24] which we do not show in this paper.
Definition: (Adaptive) restricted eigenvalue. We call
φ²(L, S, N) := min { ‖f_β‖² / ‖β_N‖₂² : β ∈ R(L, S, N), β_N ≠ 0 }
the (L, S, N)-restricted eigenvalue, and, similarly,
φ²_adaptive(L, S, N) := min { ‖f_β‖² / ‖β_N‖₂² : β ∈ R_adaptive(L, S, N), β_N ≠ 0 }
the adaptive (L, S, N)-restricted eigenvalue; the (adaptive) restricted eigenvalue condition holds if the corresponding constant is positive. We introduce the (adaptive) restricted regression condition to clarify various connections between different assumptions.
The (L, S, N)-restricted regression is
ϑ(L, S, N) := sup { ‖f^{P_N}_{β_{N^c}}‖ / ‖f_{β_N}‖ : β ∈ R(L, S, N), β_N ≠ 0 },
where f^{P_N} denotes the projection, in L₂(Q), on the linear span of {ψ_j}_{j∈N}; the adaptive (L, S, N)-restricted regression ϑ_adaptive(L, S, N) is defined in the same way with R_adaptive(L, S, N) in place of R(L, S, N). The (adaptive) (L, S, N)-restricted regression condition holds if ϑ(L, S, N) < 1 (respectively ϑ_adaptive(L, S, N) < 1). Of course, all these definitions depend on the Gram matrix Σ. In Sections 10 and 11, we make this dependence explicit by adding the argument Σ, e.g. the (Σ, L, S)-compatibility condition, etc.
When L = 1, the argument L is omitted, e.g. φ compatible (S) := φ compatible (1, S), and e.g., the S-compatibility condition is then the condition φ compatible (S) > 0. The case L > 1 is mainly needed to handle the situation with noise, and L < 1 is of interest when studying the adaptive Lasso (but we do not develop its theory in this paper).
Definition: Restricted isometry constant. The N-restricted isometry constant is the smallest value δ_N such that for all N with |N| ≤ N and all β,
(1 − δ_N)‖β_N‖₂² ≤ ‖f_{β_N}‖² ≤ (1 + δ_N)‖β_N‖₂².
Definition: Uniform eigenvalue. The (S, N)-uniform eigenvalue Λ(S, N) is given by
Λ²(S, N) := min { ‖f_{β_N}‖² / ‖β_N‖₂² : N ⊃ S, |N| ≤ N, β_N ≠ 0 }.
As mentioned before, we always assume that Λ(S, s) > 0. Definition: Weak restricted isometry. The weak (S, N)-restricted isometry constant is ϑ_weak-RIP(S, N) := θ(S, N)/Λ²(S, N), with θ(S, N) the restricted orthogonality constant (a uniform bound on inner products (f_{β_S}, f_{β_{N\S}}) relative to ‖β_S‖₂‖β_{N\S}‖₂). The weak (L, S, N)-restricted isometry property holds if ϑ_weak-RIP(S, N) < 1/L. Definition: Restricted isometry property. The RIP constant ϑ_RIP is formed from the restricted isometry constant δ_{2s} and the restricted orthogonality constant θ(s, 2s); the restricted isometry property, shortly RIP, holds if ϑ_RIP < 1. An irrepresentable condition can be found in [23]. We use a modified version which involves only the design but not the true coefficient vector β⁰ (whereas its sign vector appears in [23]). The reason is that most other conditions considered in this paper do not depend on β⁰ either. Our (L, S, N)-irrepresentable condition with L = 1 and N = s is only slightly stronger than the condition in [23].
Definition: Irrepresentable condition. For N ⊃ S, write Σ_{1,1}(N) := (σ_{j,k})_{j,k∈N} for the Gram matrix of the variables in N, and Σ_{2,1}(N) := (σ_{j,k})_{j∉N, k∈N} =: Σᵀ_{1,2}(N). We say that the (L, S, N)-irrepresentable condition is met if, for some N ⊃ S with |N| ≤ N and all vectors τ_N ∈ {−1, 1}^{|N|}, we have
‖Σ_{2,1}(N) Σ⁻¹_{1,1}(N) τ_N‖_∞ < 1/L.
We say that the weak (S, N)-irrepresentable condition is met if, for all τ_S ∈ {−1, 1}^s, for some N ⊃ S with |N| ≤ N and some τ_{N\S} ∈ {−1, 1}^{|N\S|}, we have
‖Σ_{2,1}(N) Σ⁻¹_{1,1}(N) τ_N‖_∞ ≤ 1,
where τ_N consists of τ_S and τ_{N\S}. Finally, we present coherence conditions, which are in the spirit of [5,4]. [7] derive an oracle result under a tight coherence condition.
Definition: Coherence. The (L, S)-mutual coherence condition requires the largest off-diagonal entry of Σ, max_{j≠k} |σ_{j,k}|, to be sufficiently small relative to 1/s (and to the constant L); the (L, S)-cumulative coherence condition instead requires cumulative sums of off-diagonal entries linking S and its complement to be sufficiently small.
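To make some of these quantities concrete, the restricted isometry constant defined above can be computed by brute force for a small Gram matrix (a sketch based on the definition as stated here; the function and the toy equicorrelation matrix are our own illustrations):

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(Sigma, N_max):
    """Brute-force delta_N: smallest delta with
    (1 - delta) ||b||_2^2 <= b' Sigma_{N,N} b <= (1 + delta) ||b||_2^2
    for all index sets N with |N| <= N_max (only feasible for small p)."""
    p = Sigma.shape[0]
    delta = 0.0
    for size in range(1, N_max + 1):
        for idx in combinations(range(p), size):
            ev = np.linalg.eigvalsh(Sigma[np.ix_(idx, idx)])
            delta = max(delta, 1.0 - ev[0], ev[-1] - 1.0)
    return delta

# toy usage: delta_N grows with N and with the correlation rho
p, rho = 10, 0.2
Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
print([round(restricted_isometry_constant(Sigma, N), 3) for N in (1, 2, 3)])
```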

Implications for the Lasso and some first relations
It is shown in [18] that the compatibility condition implies oracle inequalities for the Lasso. We re-derive the result for later reference and also to illustrate that the compatibility condition is just a condition to make the proof go through. We also show (again for later reference) the additional ℓ₂-result obtained if one uses the (S, N)-restricted eigenvalue condition.
Moreover, letting N*\S be the set of the N − s largest coefficients |β*_j|, j ∈ S^c, a corresponding ℓ₂-bound holds for (β* − β⁰)_{N*}. Proof of Lemma 2.1. The first assertion follows from the Basic Inequality implied by the definition of the Lasso in (1.1). Note that the last inequality holds because β* − β⁰ ∈ R(S), which follows from its preceding inequality. The second result follows by using in addition φ_compatible(S) ≥ φ(S, N).
An implication of Lemma 2.1 is an ℓ₁-norm result, where the last inequality uses the first assertion in Lemma 2.1. We also note that the second assertion in Lemma 2.1 has most statistical importance for the case N = s. We will need the case N = 2s later in our proofs.
In [15] and [23], it is proved that the irrepresentable condition is sufficient and essentially necessary for variable selection, i.e., for achieving S * = S. We will also present a self-contained proof in Section 6 where we will show that the (S, s)-irrepresentable condition is sufficient and the weak (S, s)-irrepresentable condition is essentially necessary for variable selection.
The paper [2] proves oracle inequalities under the restricted eigenvalue condition, assuming min{φ(L, S, s) : |S| = s} > 0 (where L can be taken equal to one in the noiseless case). The restricted isometry property from [10], abbreviated to RIP, also requires uniformity in S: they assume the RIP constant satisfies ϑ_RIP < 1.
They show that the RIP implies exact reconstruction of β⁰ from f⁰ by linear programming (that is, by minimizing ‖β‖₁ subject to f_β − f⁰ = 0). [6] prove this result assuming δ_N + θ_{s,N} < 1 for N = 1.25s only; see also [8] for an earlier result. It is clear that 1 − δ_N ≤ Λ²(S, N), i.e., the restricted isometry constants are more demanding than uniform eigenvalues. [10] furthermore prove that the RIP is sufficient for establishing oracle inequalities for the Dantzig selector; see also Figure 1. [12] and [2] show that the weak (S, 2s)-restricted isometry property implies the (S, 2s)-restricted eigenvalue condition (see also Figure 1).
The papers [3,5,4] show that their coherence conditions imply oracle results and refinements (see also Section 4 for their condition on the diagonal of Σ). [9] weaken the coherence conditions by restricting the parameter space for the regression coefficient β.
We thus have the chain of implications: adaptive restricted eigenvalue condition ⇒ restricted eigenvalue condition ⇒ compatibility condition; see also Figure 1. It is easy to see that ϑ(L, S, s) and ϑ_adaptive(L, S, s) scale with L, i.e., we have ϑ(L, S, s) = Lϑ(S, s) and ϑ_adaptive(L, S, s) = Lϑ_adaptive(S, s). This is not true for the (adaptive) restricted (ℓ₁-)eigenvalues. It indicates that the (adaptive) restricted regression is not well calibrated for proving compatibility or restricted eigenvalue conditions, i.e., one might pay a large price for taking the route to oracle results via restricted regression conditions.
We end this subsection with the following lemma, which is based on ideas in [11]. A corollary is the ℓ₂-bound given in (2.1), which thus illustrates that considering supersets N of S can be useful. However, we use the lemma for other purposes as well.
We let, for any β, r_j(β) := rank(|β_j|), j ∈ S^c, where the ranks are taken after putting the coefficients {|β_j|}_{j∈S^c} in decreasing order. Let N_0(β) := {j ∈ S^c : r_j(β) ∈ {1, . . . , s}} be the set of the s largest coefficients in S^c, and put N(β) := N_0(β) ∪ S. Further, assuming without loss of generality that p = (K + 2)s for some integer K ≥ 0, we let, for k = 1, . . . , K,
N_k(β) := {j ∈ S^c : r_j(β) ∈ {ks + 1, . . . , (k + 1)s}}.
Lemma 2.2. For any r ≥ 1 with 1/r + 1/q = 1, any β, and with N := N(β) and N_k := N_k(β), k = 0, 1, . . . , K, we have the bound
Σ_{k=1}^{K} ‖β_{N_k}‖_q ≤ s^{−1/r} ‖β_{S^c}‖₁.
This result is from [2]. The proof we give is essentially the same as theirs.
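As a quick numerical sanity check of this block bound (our own sketch with arbitrary test coefficients, not part of the paper), one can order the coefficients on S^c, cut them into blocks of size s, and verify the inequality stated in Lemma 2.2:

```python
import numpy as np

# Order the coefficients on S^c, cut them into blocks N_1, ..., N_K of size s,
# and verify  sum_{k>=1} ||beta_{N_k}||_q <= s^{-1/r} ||beta_{S^c}||_1,
# where 1/q + 1/r = 1.
rng = np.random.default_rng(1)
s, K = 4, 5
beta_Sc = np.sort(np.abs(rng.standard_normal((K + 1) * s)))[::-1]   # decreasing order

q = 2.0
r = q / (q - 1.0)                                                    # conjugate exponent
blocks = [beta_Sc[k * s:(k + 1) * s] for k in range(1, K + 1)]       # N_1, ..., N_K
lhs = sum(np.sum(block**q) ** (1.0 / q) for block in blocks)
rhs = s ** (-1.0 / r) * np.sum(beta_Sc)
assert lhs <= rhs + 1e-12
print(lhs, rhs)
```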

The restricted regression condition implies the restricted eigenvalue condition
We start out with an elementary lemma, whose proof writes the projection of f₂ on f₁ explicitly and then applies Pythagoras' theorem.
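For concreteness, the geometric step can be recorded as follows (our reconstruction of the kind of statement used here, not necessarily the paper's exact formulation): if the projection $f_2^{P_1}$ of $f_2$ on $f_1$ satisfies $\|f_2^{P_1}\| \le \vartheta \|f_1\|$ for some $\vartheta < 1$, then, writing $f_2 = f_2^{P_1} + f_2^{A_1}$ with $f_2^{A_1} \perp f_1$,
$$
\|f_1 + f_2\|^2 \;=\; \|f_1 + f_2^{P_1}\|^2 + \|f_2^{A_1}\|^2 \;\ge\; (1 - \vartheta)^2 \|f_1\|^2 .
$$
Applied with $f_1 = f_{\beta_N}$ and $f_2 = f_{\beta_{N^c}}$, a bound on the restricted regression thus yields a lower bound on $\|f_\beta\|$, which is what the implication in this section requires.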
It is then straightforward to derive the following result.
A similar result is true for the adaptive versions. In other words, the (adaptive) restricted regression condition implies the (adaptive) restricted eigenvalue condition.

S-coherence conditions imply adaptive (S, s)-restricted regression conditions
The papers [3,5,4] establish oracle results under a condition which we refer to as the restricted diagonal condition, and they provide coherence conditions for verifying it. Definition: Restricted diagonal condition. We say that the S-restricted diagonal condition holds if, for some constant ϕ(S) > 0, a suitable lower bound on the quadratic form βᵀΣβ holds; we refer to [3,5,4] for the exact formulation. We now show that coherence conditions actually imply restricted regression conditions. First, we consider some matrix norms in more detail. Let 1 ≤ q ≤ ∞, and let r be its conjugate, i.e., 1/q + 1/r = 1. Some properties. The quantity ‖Σ_{1,2}(N)‖²_{2,2} is the largest eigenvalue of the matrix Σ_{1,2}(N)Σ_{2,1}(N). Lemma 4.1. For all 1 ≤ q ≤ ∞, the adaptive restricted regression can be bounded in terms of the corresponding matrix norm of Σ_{1,2}(N). Proof of Lemma 4.1. Take r such that 1/q + 1/r = 1. Let N ⊃ S with |N| = 2s, and let β ∈ R_adaptive(L, S, N).
Applying Lemma 2.2 then yields the claimed bound. One of the consequences is in the spirit of the mutual coherence condition in [5].
With q = 1 and N = s, the coherence lemma is similar to the cumulative local coherence condition in [4]. We also consider the case N = 2s.
The coherence lemma with q = 2 is a condition about eigenvalues (recall that ‖Σ_{1,2}(N)‖²_{2,2} equals the largest eigenvalue of Σ_{1,2}(N)Σ_{2,1}(N)). The bound is then much rougher than the one following from the weak (S, 2s)-restricted isometry condition, which we derive in Lemma 7.1.
Proof of Theorem 5.1. The proof uses the Cauchy-Schwarz inequality to bound the first factor in a product decomposition, together with an elementary inequality valid for any constant c.

The (S, s)-irrepresentable condition is sufficient and essentially necessary for variable selection
An important characterization of the solution β* can be derived from the Karush-Kuhn-Tucker (KKT) conditions, which in our context involve subdifferential calculus; see [1]. The KKT conditions. We have
2Σ(β* − β⁰) = −λτ*,
where ‖τ*‖_∞ ≤ 1, and moreover τ*_j 1{β*_j ≠ 0} = sign(β*_j), j = 1, . . . , p.
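These conditions are easy to verify numerically on a toy problem (a sketch under the Lasso formulation (1.1) as written above; the matrix, λ, and all names are illustrative, and cvxpy is merely one way to solve the convex program):

```python
import numpy as np
import cvxpy as cp

# Solve the noiseless Lasso  beta* = argmin_b (b - beta0)' Sigma (b - beta0) + lam * ||b||_1
# and check the KKT conditions  2 Sigma (beta* - beta0) = -lam * tau*.
rng = np.random.default_rng(2)
p, s, lam = 10, 3, 0.1
A = rng.standard_normal((50, p)) / np.sqrt(50)     # Sigma = A'A
Sigma = A.T @ A
beta0 = np.zeros(p); beta0[:s] = 1.0

b = cp.Variable(p)
cp.Problem(cp.Minimize(cp.sum_squares(A @ (b - beta0)) + lam * cp.norm1(b))).solve()
beta_star = b.value

tau = -2.0 * Sigma @ (beta_star - beta0) / lam     # candidate subgradient tau*
active = np.abs(beta_star) > 1e-4
assert np.max(np.abs(tau)) <= 1.0 + 1e-4           # ||tau*||_inf <= 1
assert np.allclose(tau[active], np.sign(beta_star[active]), atol=1e-3)
```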
For N ⊃ S, we write the projection of a function f on the space spanned by {ψ_j}_{j∈N} as f^{P_N}, and the anti-projection as f^{A_N} := f − f^{P_N}. Proof of Lemma 6.1. By the KKT conditions, we obtain two equalities, one for the coefficients in N and one for those in N^c (we leave the second equality untouched). Multiplying the first equality by −(β*_{N^c})ᵀ Σ_{2,1}(N) and the second by (β*_{N^c})ᵀ, invoking that β*_j τ*_j = |β*_j|, and adding up the two equalities gives the claim. We now connect the irrepresentable condition to variable selection. Define |β⁰|_min := min{|β⁰_j| : j ∈ S}. Then, under the conditions of the theorem, S* ⊃ S and |S*| ≤ N. Part 3. Conversely, suppose that S* ⊃ S and |S*| ≤ N, and Λ(S, N) > 0. If moreover |β⁰|_min > λ√s/(2Λ(S, N)), then τ*_{S*} = τ⁰_{S*}, where τ⁰_{S*} := sign(β⁰_{S*}). A special case is N = s. In Part 1, we then obtain that S* ⊂ S, i.e., no false positive selections. Moreover, Part 2 then proves S* = S, and Part 3 assumes S* = S.

The restricted isometry property with small constants implies the weak (S, 2s)-irrepresentable condition
We start with two preparatory lemmas. Recall that ϑ_weak-RIP(S, s) = θ(S, s)/Λ²(S, s). In what follows, A_S denotes the anti-projection defined in Section 6.
Proof of Lemma 8.2. Invoking Lemma 2.1 and then Lemma 8.1, the relevant term is bounded by λ²sθ(S, s)/(2φ²(S, 2s)Λ²(S, s)), from which the claim follows. The next result (Lemma 8.3) shows that if the constants are small enough, then there will be no more than s false positives: if a certain constant α(S), defined from the quantities above, satisfies α(S) < 1, then |S*\S| < s.
Proof of Lemma 8.3. Since α(S) < 1, Lemma 8.2 implies a strict bound for any Ñ ⊂ S^c with |Ñ| ≤ s and any b with ‖b_Ñ‖₂ ≠ 0. For j ∈ S*\S, the KKT conditions give a complementary lower bound. Suppose now that |S*\S| ≥ s. Then there is a subset N′ of S*\S with size |N′| = s for which the two bounds are incompatible. This is a contradiction, and hence |S*\S| < s.
We conclude that the RIP with small enough constants implies the weak (S, 2s)-irrepresentable condition.
As [10] show, the RIP implies exact recovery. To complete the picture, we now show that the (S, s)-irrepresentable condition also implies exact recovery.
The linear programming problem is
β_LP := arg min { ‖β‖₁ : f_β = f⁰ },
where, as before, f⁰ = f_{β⁰} with β⁰ = β⁰_S, and β_LP denotes its minimizer. Proof of Lemma 8.4. This follows from [10]. They show that β_LP = β⁰ if one can find a g ∈ L₂(Q) such that (i) (ψ_j, g) = τ⁰_j for all j ∈ S, and (ii) |(ψ_j, g)| < 1 for all j ∉ S, where, as before, τ⁰_S := sign(β⁰_S). The (S, s)-irrepresentable condition says that this is true for g := f_γ with γ_S := Σ⁻¹_{1,1}(S) τ⁰_S and γ_{S^c} := 0, since then (ψ_j, g) = τ⁰_j for j ∈ S and ((ψ_j, g))_{j∉S} = Σ_{2,1}(S) Σ⁻¹_{1,1}(S) τ⁰_S.

The (S, s)-uniform irrepresentable condition implies the S-compatibility condition
As the (S, s)-irrepresentable condition implies variable selection, one expects it to be more restrictive than the compatibility condition, which only implies a bound for the prediction error (and ℓ₁-estimation error). This turns out to be indeed the case, although we prove it only under the uniform version of the irrepresentable condition.

Verifying the compatibility and restricted eigenvalue condition
In this section, we discuss the theoretical verification of the conditions. Determining a restricted ℓ₁-eigenvalue is in itself again a Lasso type of problem. Therefore, it is very useful to look for good lower bounds. A first, rather trivial, observation is that if Σ is non-singular, the restricted eigenvalue condition holds for all L, S and N, with φ²(L, S, N) ≥ Λ²_min(Σ), the latter being the smallest eigenvalue of Σ. If Σ is the population covariance matrix of a random design, i.e., the probability measure Q is the theoretical distribution of the observed co-variables in X, assuming positive definiteness of Σ is not very restrictive. We will present some examples in Section 10.2. Compatibility conditions for the population Gram matrix are of direct relevance if one replaces L₂-loss by a robust convex loss [19]. But, as we will show in the next subsection, even if Σ corresponds to the empirical covariance matrix of a fixed design, i.e., the measure Q is the empirical measure Q_n of n observed co-variables in X, the compatibility and restricted eigenvalue conditions are often "inherited" from the population version. Therefore, even for fixed designs (and singular Σ), the collection of cases where compatibility or restricted eigenvalue conditions hold is quite large.

Approximating the Gram matrix
For two (positive semi-definite) matrices Σ₀ and Σ₁, we define the supremum distance ‖Σ₁ − Σ₀‖_∞ := max_{j,k} |(Σ₁)_{j,k} − (Σ₀)_{j,k}|. Generally, perturbing the entries of Σ by a small amount may have a large impact on the eigenvalues of Σ. This is not true for (adaptive) restricted ℓ₁-eigenvalues, as is shown in the next lemma and its corollary.
Proof of Lemma 10.1. For all β, the difference |βᵀΣ₁β − βᵀΣ₀β| is at most ‖Σ₁ − Σ₀‖_∞ ‖β‖₁². But if β ∈ R(L, S), it holds that ‖β_{S^c}‖₁ ≤ L‖β_S‖₁, and the claim follows. The second result can be shown in the same way, and the third as well, using that for β ∈ R_adaptive(L, S, N) it holds that ‖β_{S^c}‖₁ ≤ L√s‖β_S‖₂. Corollary 10.1 translates this into lower bounds for the (adaptive) restricted ℓ₁-eigenvalues of Σ₁ in terms of those of Σ₀ and ‖Σ₁ − Σ₀‖_∞; the same result holds for the adaptive version.
Corollary 10.1 shows that if one can find a matrix Σ₀ with well-behaved smallest eigenvalue in a small enough ℓ_∞-neighborhood of Σ₁, then the restricted eigenvalue condition holds for Σ₁. As an example, consider the situation where ψ_j(x) = x_j (j = 1, . . . , p) and where Σ̂ := XᵀX/n = (σ̂_{j,k}), with X = (X_{i,j}) an (n × p)-matrix whose columns consist of i.i.d. N(0, 1)-distributed entries (but allowing for dependence between columns). We denote by Σ the population covariance matrix of a row of X. Using a union bound, it is not difficult to show that for all t > 0, and for
λ(t) := 4√((t + 8 log p)/n) + (4t + 8 log p)/n,
one has an exponential probability inequality for the event {‖Σ̂ − Σ‖_∞ ≥ λ(t)}. This implies that if the smallest eigenvalue Λ²_min(Σ) of Σ is bounded away from zero, and if the sparsity s is of smaller order o(√(n/log p)), then the restricted eigenvalue condition holds for Σ̂ with constant φ(S, N) not much smaller than Λ_min(Σ). The result can be extended to distributions with Gaussian tails.
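A small simulation illustrates this sup-norm approximation (our own sketch; the equicorrelated population covariance and all parameters are merely illustrative):

```python
import numpy as np

# Sketch: for Gaussian designs the sup-norm distance between the empirical
# and the population Gram matrices is of order sqrt(log(p) / n).
rng = np.random.default_rng(3)
n, p, rho = 500, 200, 0.3
Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))       # population covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = X.T @ X / n

print(np.max(np.abs(Sigma_hat - Sigma)))   # comparable orders of magnitude
print(np.sqrt(np.log(p) / n))
```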

Some examples
In the following, our discussion mainly applies to Σ being the population covariance matrix. For Σ being the empirical covariance matrix, the assumptions in the discussion below are unrealistic, but as seen in the previous section, the population properties can have important implications for the restricted eigenvalues of the empirical covariance matrix. Example 10.2. In this example, Σ is a Toeplitz matrix, defined as follows. Consider a positive definite function R(·) which is symmetric (R(k) = R(−k)) and sufficiently regular in the following sense: the corresponding spectral density is assumed to exist, to be continuous and periodic, and to be bounded away from zero. We take σ_{j,k} := R(j − k), where R(·) satisfies the conditions described above (in terms of the spectral density). A special case arises with σ_{j,k} = ρ^{|j−k|} for some 0 ≤ ρ < 1. The smallest eigenvalue Λ²_min(Σ) of Σ is then bounded away from zero, where the bound is independent of p [17]. Example 10.3. Consider a matrix Σ of block-diagonal form, Σ = diag(Σ₁, . . . , Σ_k), where the Σ_j are (m × m) covariance matrices (j = 1, . . . , k) (the restriction to equal dimension m can easily be dropped) and km = p. If the minimal eigenvalues satisfy Λ²_min(Σ_j) ≥ η² > 0 for all j, then the minimal eigenvalue of Σ is also bounded from below by η² > 0. When m is much smaller than p, it is (much) less restrictive to require that small m × m covariance matrices Σ_j have well-behaved minimal eigenvalues than that large p × p matrices do.
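Both examples are easy to check numerically (our own sketch; the particular ρ, block sizes and random blocks are illustrative choices):

```python
import numpy as np

# Example 10.2: the smallest eigenvalue of the Toeplitz matrix sigma_{j,k} = rho^{|j-k|}
# stays bounded away from zero as p grows (lower bound (1 - rho)/(1 + rho) = 1/3 here).
rho = 0.5
for p in (50, 200, 800):
    idx = np.arange(p)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
    print(p, np.linalg.eigvalsh(Sigma)[0])

# Example 10.3: a block-diagonal matrix inherits the smallest block eigenvalue.
rng = np.random.default_rng(4)
k, m = 20, 5
blocks = []
for _ in range(k):
    A = rng.standard_normal((2 * m, m))
    blocks.append(A.T @ A / (2 * m) + 0.1 * np.eye(m))   # well-conditioned m x m blocks
Sigma_block = np.zeros((k * m, k * m))
for j, B in enumerate(blocks):
    Sigma_block[j * m:(j + 1) * m, j * m:(j + 1) * m] = B
print(np.linalg.eigvalsh(Sigma_block)[0],
      min(np.linalg.eigvalsh(B)[0] for B in blocks))      # equal
```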
The noisy case
As before, we write f⁰ = f_{β⁰}, and now f̂ := f_{β̂} denotes the Lasso estimator in the noisy problem. We consider the difference between the observations and f⁰ as the noise. Moreover, we write (with some abuse of notation) the corresponding empirical inner products, and we define the noise level λ₀. Here is a simple example which shows how λ₀ behaves in the case of i.i.d. standard normal errors.
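For illustration (a sketch of the kind of calculation meant here, with our own choice of taking λ₀ of the form 2·max_j |(ε, ψ_j)_n|; the paper's exact definition of λ₀ may differ in its constants), with i.i.d. N(0, 1) errors and dictionary columns normalized in the empirical norm this quantity concentrates at the level 2√(2 log p / n):

```python
import numpy as np

# Sketch: with i.i.d. N(0, 1) errors and columns normalized in the empirical
# norm, the quantity 2 * max_j |(eps, psi_j)_n| concentrates around
# 2 * sqrt(2 log(p) / n), which is the order of magnitude required of lambda.
rng = np.random.default_rng(5)
n, p = 400, 1000
Psi = rng.standard_normal((n, p))
Psi /= np.sqrt((Psi**2).mean(axis=0))          # ||psi_j||_n = 1 for every column
eps = rng.standard_normal(n)

lam0 = 2.0 * np.max(np.abs(Psi.T @ eps / n))
print(lam0, 2.0 * np.sqrt(2.0 * np.log(p) / n))   # comparable values
```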
In a similar way, but using (S, 2s)-restricted eigenvalue conditions, one may prove ℓ 2 -convergence in the noisy case.
Observe that the S-compatibility condition now involves the matrix Σ̂, which is definitely singular when p > n. However, we have seen in the previous section that, also for such Σ̂, compatibility conditions and restricted eigenvalue conditions hold in fairly general situations.
Proof of Lemma 11.3. This follows from a straightforward generalization of Lemma 6.1, where the equalities now become inequalities. Here, f̂^{A_S} is the anti-projection of f̂, in L₂(Q_n), on the space spanned by {ψ_j}_{j∈S}.
The noisy KKT conditions involve the matrix Σ̂. Again, as discussed in Subsection 10.1, we may replace it by an approximation. As a consequence, if this approximation is good enough, we can replace (Σ̂, L, S, s)-irrepresentable conditions by (Σ, L̃, S, s)-irrepresentable conditions, provided we take L̃ > L large enough.
Lemma 11.4. Take λ > λ₀, and define L := (λ + λ₀)/(λ − λ₀); its statement and proof parallel the noiseless case. We conclude that the KKT conditions in the noisy case can be exploited in the same way as in the case without noise, albeit that one needs to adjust the constants (making the conditions more restrictive).

Discussion
We show how various conditions for Lasso oracle results relate to each other, as illustrated in Figure 1. Thereby, we also introduce the restricted regression condition.
For deriving oracle results for prediction and estimation, the compatibility condition is the weakest. Looking at the derivation of the oracle result in Lemma 2.1, no substantial room seems to be left to improve the condition. The restricted eigenvalue condition is slightly stronger but in some cases, as demonstrated in Example 10.5, the compatibility condition is a real improvement.
For variable selection with the Lasso, the irrepresentable condition is sufficient (assuming sufficiently large non-zero regression coefficients) and essentially necessary. We present the result, perhaps not unexpected but as yet not formally shown, that the irrepresentable condition is always stronger than the compatibility condition.
We illustrate in Section 10 how, in theory, one can verify the compatibility condition. If the sparsity is of small order o(√(n/log p)), we can approximate the empirical Gram matrix by its population analogue. It is then much easier and more realistic to require that the population Gram matrix has sufficiently regular behavior, as illustrated with our examples in Section 10.2. We believe, moreover, that a sparsity bound of small order o(√(n/log p)) covers a large area of interesting statistical problems. With larger s, the statistical situation is comparable to that of a nonparametric model with "(effective) smoothness less than 1/2", leading to very slow convergence rates. In contrast, for example in decoding problems, sparseness up to the linear-in-n regime can be very important. Moreover, in the case of robust convex loss, one may apply the compatibility condition directly to the population matrix, i.e., the sparsity regime s = o(√(n/log p)) can be relaxed for such loss functions (see [19]). We therefore conclude that oracle results for the Lasso hold under quite general design conditions.
A final remark is that in our formulation, the compatibility condition and restricted eigenvalue condition depend on the sparsity s as well as on the active set S. As S is unknown, this means that for a practical guarantee, the conditions should hold for all S. Moreover, one then needs to assume the sparsity s to be known, or at least a good upper bound needs to be given. Such strong requirements are the price for practical verifiability. We however believe that in statistical modeling, non-verifiable conditions are allowed and in fact common practice. Moreover, our model assumes a sparse linear "truth" with "true" active set S, only for simplicity. Without such assumptions, there is no "true" S, and the oracle inequality concerns a trade-off between sparse approximation and estimation error, see for example [19].