Improved bounds for Square-Root Lasso and Square-Root Slope

Extending the results of Bellec, Lecué and Tsybakov to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate, which is $(s/n) \log (p/s)$, up to some constant, under mild conditions on the design matrix. Here, $n$ is the sample size, $p$ is the dimension and $s$ is the sparsity parameter. We also prove optimality for the estimation error in the $l_q$-norm, with $q \in [1,2]$, for the Square-Root Lasso, and in the $l_2$ and sorted $l_1$ norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity $s$ of the true parameter. Next, we prove that any estimator depending on $s$ which attains the minimax rate admits a version adaptive to $s$ that still attains the same rate. We apply this result to the Square-Root Lasso. Moreover, for both estimators, we obtain valid rates for a wide range of confidence levels, and improved concentration properties, as in [Bellec, Lecué and Tsybakov, 2017], where the case of known variance is treated. Our results are non-asymptotic.


Introduction
In a recent paper by Bellec, Lecué and Tsybakov [1], it is shown that there exist high-dimensional statistical methods, realizable in polynomial time, that achieve the minimax optimal rate (s/n) log(p/s) in the context of sparse linear regression. Here, n is the sample size, p is the dimension and s is the sparsity parameter. This rate is achieved by the Lasso and Slope estimators, and the Slope estimator is adaptive to the unknown sparsity s. Bounds for more general estimators are proved by Bellec, Lecué and Tsybakov [3,2]. These articles also establish bounds in deviation that hold for any confidence level, as well as bounds for the risk in expectation. However, the estimators considered in [1,3,2] require the knowledge of the noise variance σ². To our knowledge, no polynomial-time method that is simultaneously minimax optimal and adaptive both to σ and to s is available in the literature.
Estimators similar to the Lasso, but adaptive to σ, are the Square-Root Lasso and the related Scaled Lasso, introduced by Sun and Zhang [13] and Belloni, Chernozhukov and Wang [4]. These estimators have been shown to achieve the rate (s/n) log(p) in deviation, with the value of the tuning parameter depending on the confidence level. A variant of this estimator is the Heteroscedastic Square-Root Lasso, which is studied in more general nonparametric and semiparametric setups by Belloni, Chernozhukov and Wang [5]; it also achieves only the rate (s/n) log(p), with a tuning parameter depending on the confidence level. We refer to the book by Giraud [8] for the link between the Lasso and the Square-Root Lasso and for a short proof of oracle inequalities for the Square-root Lasso. In summary, there are two points to improve for the Square-root Lasso method: (i) the available oracle inequalities are valid only for estimators depending on the confidence level, so one cannot have an oracle inequality for one given estimator at any confidence level except the one used to design it; (ii) the obtained rate is (s/n) log(p), which is greater than the minimax rate (s/n) log(p/s).
The Slope, an acronym for Sorted L-One Penalized Estimation, is an estimator introduced by Bogdan et al. [7] that is close to the Lasso, but uses the sorted l_1 norm instead of the standard l_1 norm for penalization. Su and Candès [12] proved that, as opposed to the Lasso, the Slope estimator is asymptotically minimax, in the sense that it attains the rate (s/n) log(p/s) for two isotropic designs, that is, either for X deterministic with (1/n) X^T X = I_{p×p} or for X with i.i.d. standard normal entries. Moreover, their result achieves not only the optimal minimax rate, but also the exact optimal constant. General isotropic random designs are explored by Lecué and Mendelson [9]. For non-isotropic random designs, and for deterministic designs under conditions close to the Restricted Eigenvalue condition, the behavior of the Slope estimator is studied in [1]. The Slope estimator is adaptive only to s, and requires knowledge of σ, which is not available in practice. In order to have an estimator adaptive both to s and σ, we will use the Square-Root Slope, introduced by Stucky and van de Geer [11]. They give oracle inequalities for a large family of square-root estimators, including the new Square-Root Slope, but still following a scheme in which (i) and (ii) cannot be avoided. The square-root estimators are also members of a more general family of penalized estimators defined by Owen [10, Equations (8)-(9)]; using their notation, these estimators correspond to the case where H_M is the squared loss and B_M is a norm (either the l_1 norm or the Slope norm).
The paper is organized as follows. In Section 2, we provide the main definitions and notation. In Section 3, we show that the Square-Root Lasso is minimax optimal if s is known, while being adaptive to σ, under a mild condition on the design matrix (SRE). In Section 4, we show that any sequence of estimators can be made adaptive to the sparsity parameter s, while keeping the same rate up to some constant, with a computational cost increased by a factor of log(s*), where s* is an upper bound on the sparsity parameter s. As an application, the Square-root Lasso modified by this procedure is still optimal while now being adaptive to s (in addition to being already adaptive to σ). In Section 5, we show how to adapt any algorithm for computing the Slope estimator to the Square-root Slope estimator. In Section 6, we study the Square-Root Slope estimator and show that it is minimax optimal and adaptive both to s and σ, under a slightly stronger condition (WRE). The (SRE) and (WRE) conditions have already been studied by Bellec, Lecué and Tsybakov [1] and hold with high probability for a large class of random matrices. Moreover, the inequalities we obtain for each estimator are valid for a wide range of confidence levels. Proofs are given in Section 7.

The framework
We use the notation | · |_q for the l_q norm, with 1 ≤ q ≤ ∞, and | · |_0 for the number of non-zero coordinates of a given vector. For any v ∈ R^p and any set of coordinates J, we denote by v_J the vector (v_j 1{j ∈ J})_{j=1,...,p}, where 1 is the indicator function. We also define the empirical norm of a vector u = (u_1, . . . , u_n) as ||u||_n^2 := (1/n) Σ_{i=1}^n u_i^2. For j = 1, . . . , p, we denote by |v|_{(j)} the j-th largest component of the vector whose components are the absolute values of the components of v. We use the notation ⟨·, ·⟩ for the inner product with respect to the Euclidean norm, and (e_j)_{j=1,...,p} for the canonical basis in R^p.
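For concreteness, this notation can be illustrated numerically (a small sketch of ours; the variable names are not from the paper):

```python
import numpy as np

v = np.array([0.5, -2.0, 0.0, 3.0])
J = {1, 3}                       # a set of coordinates (0-based here)

# v_J: keep the coordinates in J, set the others to zero
v_J = np.array([v[j] if j in J else 0.0 for j in range(len(v))])

# |v|_(j): the j-th largest component of the vector of absolute values
abs_sorted = np.sort(np.abs(v))[::-1]

# empirical norm: ||u||_n = sqrt((1/n) * sum_i u_i^2)
def empirical_norm(u):
    return np.sqrt(np.mean(np.asarray(u) ** 2))
```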
Let Y ∈ R^n be the vector of observations and let X ∈ R^{n×p} be the design matrix. We assume the following true model:

Y = Xβ* + ε.    (1)

A. Derumigny
Here, β* ∈ R^p is the unknown true parameter and ε is the random noise, with values in R^n, distributed as N(0, σ² I_{n×n}), where I_{n×n} is the identity matrix. We denote by P_{β*} the probability distribution of Y satisfying (1). In what follows, we define the set B_0(s) := {β* ∈ R^p : |β*|_0 ≤ s}. In the high-dimensional framework, we typically have in mind the case where s is small, p is large and possibly p ≫ n. We define two square-root type estimators of β*: the Square-Root Lasso β̂^{SQL} and the Square-Root Slope β̂^{SQS}, by the relations

β̂^{SQL} ∈ arg min_{β ∈ R^p} ( ||Y − Xβ||_n + λ |β|_1 ),    (2)

β̂^{SQS} ∈ arg min_{β ∈ R^p} ( ||Y − Xβ||_n + |β|_* ),    (3)

where λ > 0 is a tuning parameter to be chosen, and the sorted l_1 norm | · |_* is defined for all u ∈ R^p by |u|_* = Σ_{j=1}^p λ_j |u|_{(j)}, with tuning parameters λ_1 ≥ · · · ≥ λ_p > 0.
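In code, the two objective functions being minimized can be written as follows (a sketch of ours; solving the convex programs themselves requires a dedicated solver and is not shown):

```python
import numpy as np

def sorted_l1_norm(u, lam):
    """|u|_* = sum_j lam_j * |u|_(j); lam is sorted defensively in
    decreasing order, matching the required lam_1 >= ... >= lam_p > 0."""
    return float(np.sum(np.sort(lam)[::-1] * np.sort(np.abs(u))[::-1]))

def sqrt_lasso_objective(beta, X, Y, lam):
    """Objective of the Square-Root Lasso: ||Y - X beta||_n + lam * |beta|_1."""
    return np.sqrt(np.mean((Y - X @ beta) ** 2)) + lam * np.sum(np.abs(beta))

def sqrt_slope_objective(beta, X, Y, lam_seq):
    """Objective of the Square-Root Slope: ||Y - X beta||_n + |beta|_*."""
    return np.sqrt(np.mean((Y - X @ beta) ** 2)) + sorted_l1_norm(beta, lam_seq)
```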

Optimal rates for the Square-Root Lasso
In this section, we derive oracle inequalities with the optimal rate for the Square-Root Lasso estimator. We will use the Strong Restricted Eigenvalue (SRE) condition, introduced in [1]: max_{j=1,...,p} ||Xe_j||_n ≤ 1 and

κ(s) := min { ||Xδ||_n / |δ|_2 : δ ∈ C_SRE(s, c_0), δ ≠ 0 } > 0,    (4)

where C_SRE(s, c_0) := {δ ∈ R^p : |δ|_1 ≤ (1 + c_0) √s |δ|_2}. The condition max_{j=1,...,p} ||Xe_j||_n ≤ 1 is standard and corresponds to a normalization. It is shown in [1, Proposition 8.1] that the SRE condition is equivalent to the Restricted Eigenvalue (RE) condition of [6] when the latter is considered in conjunction with such a normalization. By the same proposition, the RE condition is also equivalent to the s-sparse eigenvalue condition, which is satisfied with high probability for a large class of random matrices. This is the case, for instance, if n ≥ C s log(ep/s) and the rows of X satisfy the small ball condition, which is very mild; see, e.g., [1].
Note that the minimum in (4) is the same as the minimum of the function δ → ||Xδ||_n on the set C_SRE(s, c_0) ∩ {δ ∈ R^p : |δ|_2 = 1}; this is a continuous function on a compact subset of R^p, so the minimum is attained. When there is no ambiguity over the choice of s, we will simply write κ instead of κ(s).
and assume that the SRE(s, c_0) condition holds. Then, for every δ_0 ≥ exp(−n/(4γ²)) and every β* ∈ R^p such that |β*|_0 ≤ s, with P_{β*}-probability at least 1 − δ_0 − (1 + e²)e^{−n/24}, we have the stated bounds. The values of the constants C_1, C_2, C_3 and C_4 in Theorem 3.1 can be found in the proof, in Section 7.2. Using the fact that κ ≤ 1 and choosing δ_0 = (s/p)^s, we get the following corollary of Theorem 3.1.

Corollary 3.2.
Under the assumptions of Theorem 3.1, with P_{β*}-probability at least 1 − (s/p)^s − (1 + e²)e^{−n/24}, the corresponding bounds hold. Theorem 3.1 and Corollary 3.2 give bounds that hold with high probability for both the prediction error and the estimation error in the l_q norm, for every q in [1, 2]. Note that the bounds are best when the tuning parameter is chosen as small as possible, i.e. with γ = 16 + 4√2. As shown in Section 7 of Bellec, Lecué and Tsybakov [1], the rates of estimation obtained in the latter corollary are optimal in a minimax sense on the set B_0(s) = {β* ∈ R^p : |β*|_0 ≤ s}. We obtain the same rate of convergence as [1] (see the paragraph after Corollary 4.3 in [1]), up to some multiplicative constant.
The rate is also the same as in Su and Candès [12], but the framework is quite different: we obtain a non-asymptotic bound in probability, whereas they consider asymptotic bounds in expectation (cf. Theorem 1.1 in [12]) and in probability (Theorem 1.2), but without giving an explicit expression for the probability with which their bound is valid. Our result is non-asymptotic and valid under rather general conditions on X, whereas the result in [12] is asymptotic as n → ∞ and valid for two isotropic designs, that is, either for X deterministic with (1/n) X^T X = I_{p×p} or for X with i.i.d. standard normal entries. Similarly to [1], for each tuning parameter γ, there is a wide range of confidence levels δ_0 for which the bounds of Theorem 3.1 are valid. However, [1] allows for an arbitrarily small confidence level, while in our case there is a lower bound on the confidence level under which the rate is obtained. Note that this bound can be made arbitrarily small by choosing a sample size n large enough.
Note that the possible values chosen for the tuning parameter λ are independent of the underlying standard deviation σ, which is unknown in practice. This gives an advantage to the Square-Root Lasso over other methods such as the ordinary Lasso. Nevertheless, this estimator is not adaptive to the sparsity s, so that we need to know that |β*|_0 ≤ s in order to be able to apply this result. In the following section, we suggest a procedure that makes the Square-root Lasso adaptive to s while keeping its optimality and its adaptivity to σ.

Adaptation to sparsity by a Lepski-type procedure
Let s* be an integer in {2, . . . , p/e}. We want to show that the Square-Root Lasso can also achieve the minimax optimal bound, adaptively to the sparsity s on the interval [1, s*] (in addition to being already adaptive to σ). Following [1], we will use aggregation of at most log_2(s*) Square-Root Lasso estimators with different tuning parameters to construct an adaptive estimator β̂ of β* and, at the same time, an estimator ŝ of the sparsity s.
We can reformulate Corollary 3.2 as follows: for any s = 1, . . . , 2s* and any admissible γ, the bound (9) holds, denoting by β̂^{SQL}_{(s,γ)} the estimator (2) with the tuning parameter λ^{(s,γ)} given by (5). Replacing s by 2s in equation (9), we get that the bound (10) holds for any s = 1, . . . , s* and any admissible γ. Remark that λ^{(s,γ̃)} = λ^{(2s,γ)} for a suitable choice of γ̃. As a consequence, β̂^{SQL}_{(s,γ̃)} = β̂^{SQL}_{(2s,γ)}, and we can apply equation (10), replacing γ by γ̃, to get (11). We now describe an algorithm to compute this adaptive estimator. The idea is to use an estimator ŝ of s which can be written as ŝ := 2^m̂ for some positive data-dependent integer m̂. We will use the notation M := max{m ∈ N : 2^m ≤ s*}, so that the number of estimators we consider in the aggregation is M.
The suggested procedure is detailed in Algorithm 1 below, with the distance d(β, β′) = ||X(β − β′)||_n or d(β, β′) = |β − β′|_q for q ∈ [1, 2]. It can be used for any family of estimators (β̂^{(s)})_{s=1,...,s*}, and chooses the best one in terms of the distance d(·, ·), resulting in an aggregated estimator β̂. Note that the weight function w(·) used in the algorithm cannot depend on σ as in [1], i.e. it cannot have the form w(b) = C_0 σ √((b/n) log(p/b)) (respectively, w(b) = C_0 σ b^{1/q} √((1/n) log(p/b))), because we are looking for a procedure adaptive to σ. Therefore, we will remove σ from w and use an estimate σ̂.
Then, there exists a constant C_5, depending on C_0, C′, C″, C_2, κ and α, such that, for all β* ∈ B_0(s), the aggregated estimator β̂ satisfies the stated bounds. This theorem is proved in Section 7.3.1. In particular, it implies that when β̂^{(s)} = β̂^{SQL}_{(s,γ)}, the aggregated estimator β̂ has the same rate on B_0(s) as the estimators with known s. We detail this below. The following lemmas, proved in Sections 7.3.2 and 7.3.3, ensure that Theorem 4.3 can be applied to the family β̂^{(s)} = β̂^{SQL}_{(s,γ)}. Thus, we have shown that the suggested aggregation procedure based on the Square-root Lasso is adaptive to s while still being adaptive to σ and minimax optimal. Note that the computational cost is multiplied by O(log(s*)).
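Algorithm 1 itself is not reproduced here; as a rough illustration only (our own sketch, with hypothetical names, not the paper's pseudocode), a Lepski-type selection over dyadic sparsity levels can be written as: keep the smallest s whose estimator is w-close to every estimator built with a larger sparsity level.

```python
import numpy as np

def lepski_aggregate(estimators, w, d):
    """Lepski-type selection sketch.
    estimators: dict mapping a dyadic sparsity level s to the estimator beta_hat^(s);
    w: weight function s -> threshold; d: distance between estimators.
    Returns (s_hat, beta_hat): the smallest level whose estimator is
    w-close to all estimators at larger levels."""
    svals = sorted(estimators)
    for i, s in enumerate(svals):
        if all(d(estimators[s], estimators[t]) <= w(t) for t in svals[i + 1:]):
            return s, estimators[s]
    # fallback: the largest sparsity level always satisfies the (empty) condition
    return svals[-1], estimators[svals[-1]]
```

With the distance d(β, β′) = ||X(β − β′)||_n and a weight w proportional to the rate, this mimics the selection step described above.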

Algorithms for computing the Square-root Slope
In this part, our goal is to provide algorithms for computing the Square-root Slope estimator. A natural idea is to revisit the algorithms used for the Square-root Lasso and for the Slope, and then to adapt or combine them.
Belloni, Chernozhukov and Wang [4, Section 4] have proposed to compute the Square-root Lasso estimator by reducing its definition to an equivalent problem, which can be solved by interior-point or first-order methods. The equivalent formulation as the Scaled Lasso, introduced by Sun and Zhang [13], allows one to view it as a joint minimization in (β, σ). Sun and Zhang [13] propose an iterative algorithm which alternates between estimation of β using the ordinary Lasso and estimation of σ.
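The alternation of Sun and Zhang can be sketched as follows (our own simplification, not the authors' code: the inner Lasso step uses a plain ISTA proximal-gradient loop, and the step counts are arbitrary):

```python
import numpy as np

def ista_lasso(X, Y, lam, n_iter=500):
    """ISTA (proximal gradient) for (1/(2n))||Y - X b||_2^2 + lam * |b|_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - Y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return b

def scaled_lasso(X, Y, lam0, n_iter=20):
    """Alternate a Lasso step in beta (penalty sigma * lam0) with the
    variance update sigma = ||Y - X beta||_n."""
    sigma = np.sqrt(np.mean(Y ** 2))
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = ista_lasso(X, Y, sigma * lam0)
        sigma = np.sqrt(np.mean((Y - X @ b) ** 2))
    return b, sigma
```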
Zeng and Figueiredo [14] studied several algorithms related to regression estimation with the ordered weighted l_1 norm, which is the Slope penalty. Bogdan et al. [7] provide an algorithm for computing the Slope estimator using a proximal gradient method.
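Such proximal-gradient methods rely on the proximal operator of the sorted l_1 norm, which is known to reduce to an isotonic regression on the sorted absolute values. The sketch below is our own simplified version of that reduction, not the implementation of [7] or [14]:

```python
import numpy as np

def isotonic_nonincreasing(z):
    """Euclidean projection of z onto nonincreasing sequences (pool-adjacent-
    violators, block representation [total, count])."""
    blocks = []
    for x in z:
        blocks.append([x, 1])
        # merge while block means are not strictly decreasing
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] <= blocks[-1][0] / blocks[-1][1]:
            t, c = blocks.pop()
            blocks[-1][0] += t
            blocks[-1][1] += c
    out = []
    for t, c in blocks:
        out.extend([t / c] * c)
    return np.array(out)

def prox_sorted_l1(v, lam):
    """Prox of the sorted l1 norm with weights lam_1 >= ... >= lam_p >= 0:
    sort |v| decreasingly, subtract lam, project onto nonincreasing
    nonnegative sequences, then restore order and signs."""
    sign = np.sign(v)
    a = np.abs(v)
    order = np.argsort(-a)
    z = a[order] - np.asarray(lam)
    w = np.maximum(isotonic_nonincreasing(z), 0.0)
    out = np.empty_like(w)
    out[order] = w
    return sign * out
```

With equal weights, the operator reduces to ordinary soft-thresholding, which gives a quick sanity check.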
As in the case of the Square-root Lasso, we still have, for any β,

||Y − Xβ||_n = min_{σ>0} ( ||Y − Xβ||_n² / (2σ) + σ/2 ),

where the minimum is attained for σ̂ = ||Y − Xβ||_n. As a consequence, the definition of the Square-root Slope estimator is equivalent to taking the estimator β̂ in the joint minimization program

(β̂, σ̂) ∈ arg min_{β ∈ R^p, σ > 0} ( ||Y − Xβ||_n² / (2σ) + σ/2 + |β|_* ).

Alternating minimization in β and in σ gives an iterative procedure for a "Scaled Slope" (see Algorithm 2).
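The variational identity behind this reformulation can be checked numerically; this snippet (ours, not part of the paper) minimizes σ ↦ a²/(2σ) + σ/2 over a fine grid and recovers the minimum value a, attained at σ = a:

```python
import numpy as np

# For a = ||Y - X beta||_n, the function sigma -> a^2/(2*sigma) + sigma/2
# is minimized at sigma = a, with minimum value a (set a = 1.7 here).
a = 1.7
sigmas = np.linspace(0.01, 10.0, 200001)
values = a ** 2 / (2 * sigmas) + sigmas / 2
best_sigma = sigmas[np.argmin(values)]
best_value = values.min()
```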

Optimal rates for the Square-Root Slope
In this part, we will use another condition, the Weighted Restricted Eigenvalue (WRE) condition, introduced in [1]: max_{j=1,...,p} ||Xe_j||_n ≤ 1 and

κ := min { ||Xδ||_n / |δ|_2 : δ ∈ C_WRE(s, c_0), δ ≠ 0 } > 0,

where C_WRE(s, c_0) := {δ ∈ R^p : |δ|_* ≤ (1 + c_0) (Σ_{j=1}^s λ_j²)^{1/2} |δ|_2}.

To obtain the following result, we assume that the Weighted Restricted Eigenvalue condition holds. This condition is known to be only slightly more constraining than the usual Restricted Eigenvalue condition of [6], and is nevertheless satisfied with high probability for a large class of random matrices; see Bellec, Lecué and Tsybakov [1] for a discussion. Note that, as in definition (4), the minimum is attained. Indeed, κ is equal to the minimum of the function δ → ||Xδ||_n on the set C_WRE(s, c_0) ∩ {δ ∈ R^p : |δ|_2 = 1}, which is a continuous function on a compact subset of R^p.
The values of the constants C_1 and C_2 can be found in the proof, in Section 7.4. Note that the bounds are best when the tuning parameters are chosen as small as possible, i.e. with γ = 16 + 4√2. Using the fact that κ ≤ 1 and choosing δ_0 = (s/p)^s, we get the following corollary.
These results show that the Square-Root Slope estimator, with a given choice of parameters, attains the optimal rate of convergence in the prediction norm || · ||_n and in the estimation norm | · |_2. We also provide a bound on the sorted l_1 norm | · |_* of the estimation error. One can note that the choice of the λ_i that allows us to obtain optimal bounds does not depend on the confidence level δ_0; it only influences the size of the range of valid δ_0. This improves upon the oracle result of Stucky and van de Geer [11], in which the parameter does depend on the confidence level and the rate does not scale in the optimal way, i.e., as (s/n) log(p/s). Moreover, our estimator is independent of the underlying standard deviation σ and of the sparsity s, even if the rates depend on them. Note that, up to some multiplicative constant, we obtain the same rates as for the Slope in Bellec, Lecué and Tsybakov [1]. In Su and Candès [12], the Slope estimator is proved to attain the sharp constant in the asymptotic framework where σ is known and for specific X; here, we obtain only the minimax rates, but in a non-asymptotic framework and under general assumptions on the design matrix X.
For this estimator, we do not provide a bound for the l_1 norm, for the same reasons as in [1]. Indeed, in the sorted l_1 norm, the components of β are weighted by different coefficients λ_j. As a consequence, we do not provide inequalities for the l_q norms with q < 2, which would otherwise be obtained by interpolation between the l_1 and l_2 norms.

Preliminary lemmas
Let β* ∈ R^p, let S ⊂ {1, . . . , p} be a set of cardinality s, and denote by S^C the complement of S. For i ∈ {1, . . . , p}, let β*_i be the i-th component of β*, and assume that β*_i = 0 for every i ∈ S^C. Lemma 7.1. We have the following inequality. The proof follows from the arguments in Giraud [8, pages 110-111], and it is therefore omitted.
Proof. We combine the arguments from Giraud [8, pages 110-111], and from the proof of Lemma A.1 in [1]. First, we remark that the sorted l 1 norm can be written as follows, for any v ∈ R p ,
The following simple property is proved in Giraud [8, page 112]. For convenience, it is stated here as a lemma. Lemma 7.6. With P β * -probability at least 1 − (1 + e 2 )e −n/24 , we have
For any u = (u_1, . . . , u_p) in R^p, we define the functions F, G and H. If ε ∼ N(0, σ² I_{n×n}), then the corresponding random event has probability at least 1 − δ_0/2. Moreover, by the Cauchy-Schwarz inequality, we have H(u) ≤ F(u) for all u in R^p.

Proof of Theorem 3.1
Lemma 7.7 allows one to control the random variable ε^T Xu that appears in Lemmas 7.1 and 7.3, with u := β̂^{SQL} − β*. Our calculations will take place on an event of probability at least 1 − δ_0 − (1 + e²)e^{−n/24}, on which both Lemmas 7.6 and 7.7 can be used. Applying Lemma 7.7, we will distinguish between the two cases G(u) ≤ F(u) and F(u) < G(u).