A sharp oracle inequality for Graph-Slope

Following recent successes in the analysis of the Slope estimator, we provide a sharp oracle inequality in terms of prediction error for Graph-Slope, a generalization of Slope to signals observed over a graph. In addition to improving upon the best results obtained so far for the Total Variation denoiser (also referred to as Graph-Lasso or Generalized Lasso), we propose an efficient algorithm to compute Graph-Slope. The proposed algorithm is obtained by applying the forward-backward method to the dual formulation of the Graph-Slope optimization problem. We also provide experiments demonstrating the practical interest of the method.


Introduction
Many inference problems of interest involve signals defined on discrete graphs. This includes for instance two-dimensional imaging, but also more advanced hyper-spectral imaging scenarios, where the signal lives on a regular grid. Two types of structure arise naturally in such examples. The first type comes from regularity or smoothness of the signal, which led to the development of wavelet methods. The second type involves signals with few sharp discontinuities. For instance, in one dimension, piecewise constant signals appear when transition states are present, the graph being a 1D path. In imaging, where the underlying graph is a regular 2D grid, occlusions create piecewise smooth signals rather than smooth ones. This paper studies regularizers for signals with sharp discontinuities. A popular choice in imaging is the Total Variation (TV) regularization [24]. For 1D signals, TV regularization has also long been used in statistics [18]. If an additional ℓ1 regularization is added, this is sometimes referred to as the fused Lasso [30,32,13].
A natural extension of such methods to arbitrary graphs relies on ℓ1 analysis penalties [15] which involve the incidence matrix of the underlying graph, see for instance [25] or the Edge Lasso of [26]. Such penalties have the form λ‖D⊤β‖₁, where λ > 0 is a tuning parameter, D⊤ is the (edge-vertex) incidence matrix of the graph defined below, and β represents the signal to be recovered. This approach is notably different from contributions in machine learning where squared ℓ2 penalties, i.e., Laplacian regularization, have been considered for spectral clustering [27,20] (see also [33] for a review). Theoretical results in favor of the ℓ1 norm over the squared ℓ2 norm are given in [25].
Penalties based on ℓ0 regularization with the graph incidence matrix have recently been analyzed [16], including an analysis of their approximation algorithm. They are of interest as they do not suffer from the (shrinkage) bias created by the convex ℓ1 norm. However, such methods lead to non-convex problems in the general case. Note that the 1D path is an exception, since the associated optimization problem can be solved using dynamic programming [1]. Concerning bias reduction, though, simpler remedies exist, including least-squares refitting on the associated model space, applying for instance the CLEAR method [14].
Following the introduction of the Slope regularization in the context of high-dimensional regression [10], we propose Graph-Slope, its generalization to contexts where the signal is supported on a graph. In linear regression, Slope [10] is defined as follows. Given p tuning parameters λ1 ≥ λ2 ≥ ··· ≥ λp ≥ 0 with at least one strict inequality, define the ordered ℓ1 norm by ‖θ‖_[λ] = Σ_{j=1}^p λ_j |θ|↓_j (1.1), where for any θ ∈ R^p we write (|θ|↓_1, ..., |θ|↓_p) for the non-increasing rearrangement of its amplitudes (|θ_1|, ..., |θ_p|). Then, given a design matrix X ∈ R^{n×p} and a response vector y ∈ R^n, the Slope estimator is defined as a solution of the minimization problem min_{β∈R^p} ½‖y − Xβ‖₂² + ‖β‖_[λ]. If the parameters λ1, ..., λp are all equal, then Slope reduces to the Lasso with tuning parameter λ1. Slope presents several advantages over Lasso in sparse linear regression. First, Slope provably controls the False Discovery Rate (FDR) for orthogonal design matrices [10], and experiments show that this property also holds for some non-orthogonal design matrices [10]. Second, Slope has more power than Lasso, in the sense that Slope discovers more nonzero coefficients of the unknown target vector [10]. An interpretation of this phenomenon is that Lasso shrinks small coefficients too heavily and may thus miss the smallest nonzero coefficients of the target vector. The Slope penalty, on the other hand, induces less shrinkage on small coefficients, leading to more power. Third, while Lasso with the universal parameter is known to achieve a rate of estimation of order (s/n) log(p) (where s is the sparsity of the unknown target vector, n the number of measurements and p the number of covariates), Slope achieves the optimal rate of estimation of order (s/n) log(p/s) [28,4].
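For illustration, the ordered ℓ1 norm is straightforward to compute; a minimal NumPy sketch (the helper name `slope_norm` is ours):

```python
import numpy as np

def slope_norm(theta, lam):
    """Ordered l1 (Slope) norm: sum_j lam_j * |theta|_(j), where
    |theta|_(1) >= ... >= |theta|_(p) is the non-increasing
    rearrangement of the amplitudes and lam is non-increasing."""
    mag = np.sort(np.abs(theta))[::-1]  # non-increasing rearrangement
    return float(np.dot(np.asarray(lam, dtype=float), mag))
```

With all weights equal to λ1 this reduces to λ1‖θ‖₁, recovering the Lasso penalty.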
We propose a theoretical and experimental analysis of Graph-Slope, the counterpart of the Slope estimator for signals defined on graphs. Graph-Slope is defined in the next section. Our theoretical contribution for Graph-Slope borrows technical arguments recently introduced in [17] to control the Mean Squared Error (MSE) of the Generalized Lasso.
Last but not least, we provide an efficient solver to compute the Graph-Slope estimator. It relies on accelerated proximal gradient descent applied to the dual formulation [3,12,22]. To obtain an efficient solver, we leverage the seminal contribution of [36] showing the link between the ordered ℓ1 norm (1.1) and isotonic regression. Hence, we can use fast implementations of the PAVA algorithm (Pool Adjacent Violators Algorithm, see for instance [6]), available for instance in scikit-learn [23], for this purpose. Numerical experiments illustrate the benefit of Graph-Slope, in particular in terms of True Discovery Rate (TDR) performance.
A high-level interpretation of our simulation results is as follows. In the model considered in this paper, a sharp discontinuity of the signal corresponds to an edge of the graph with nonzero coefficient. Since Graph-Lasso uses an ℓ1 penalty, the penalty level is uniform across all edges of the graph, and edges with small coefficients are too heavily penalized. Graph-Slope lets us reduce the penalty level on the edges with small coefficients. This leads to the discovery of more discontinuities of the true signal as compared to Graph-Lasso.

Model and notation
Let G = (V, E) be an undirected and connected graph with n vertices, V = [n], and p edges, E = [p]. This graph can be represented by its edge-vertex incidence matrix D⊤ = D⊤_G ∈ R^{p×n} (we drop the reference to G when no ambiguity is possible), defined for each edge e = {i, j}, arbitrarily oriented from i to j, by (D⊤)_{e,i} = 1, (D⊤)_{e,j} = −1, and (D⊤)_{e,v} = 0 for v ∉ {i, j}. The matrix L = DD⊤ is the so-called graph Laplacian of G; it is invariant under a change of orientation of the graph. For any u ∈ R^p, we denote by ‖u‖₀ the ℓ0 pseudo-norm of u: ‖u‖₀ = |{j ∈ [p] : u_j ≠ 0}|, and for any matrix A, we denote by A† its Moore-Penrose pseudo-inverse. The canonical basis of R^p is denoted (e_1, ..., e_p).
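For concreteness, here is a minimal sketch building the incidence matrix D⊤ for an arbitrarily oriented edge list and checking the Laplacian identity (the function name `incidence` is ours):

```python
import numpy as np

def incidence(n, edges):
    """Edge-vertex incidence matrix D^T of size p x n: for the edge
    e = {i, j}, oriented arbitrarily from i to j, row e carries +1 in
    column i, -1 in column j, and 0 elsewhere."""
    Dt = np.zeros((len(edges), n))
    for e, (i, j) in enumerate(edges):
        Dt[e, i], Dt[e, j] = 1.0, -1.0
    return Dt

# path graph on 3 vertices, edges {0,1} and {1,2}
Dt = incidence(3, [(0, 1), (1, 2)])
L = Dt.T @ Dt  # graph Laplacian, invariant under re-orientation
```

The Laplacian L carries the vertex degrees on its diagonal, and constant vectors lie in ker(D⊤), so each row of L sums to zero.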
For any norm ‖·‖ on R^n, the associated dual norm ‖·‖* at v ∈ R^n reads ‖v‖* = sup_{‖β‖ ≤ 1} ⟨β, v⟩. As a consequence, for every (β, v) ∈ R^n × R^n, one has ⟨β, v⟩ ≤ ‖β‖ ‖v‖*. In this work, we consider the following denoising problem for a signal over a graph. Assume that each vertex i ∈ [n] of the graph carries a signal β*_i. For each vertex i ∈ [n], one observes y_i, a noisy perturbation of β*_i. In vector form, one observes the vector y ∈ R^n and aims to estimate β* ∈ R^n, i.e., y = β* + ε, where ε ∼ N(0, σ² Id_n) is a noise vector. We say that an edge e = {i, j} of the graph carries the signal (D⊤β*)_e. In particular, if two vertices i and j are neighbours and carry the same value of the signal, i.e., β*_i = β*_j, then the corresponding edge e = {i, j} carries the constant signal. The focus of the present paper is on signals β* that have few discontinuities. A signal β* ∈ R^n has few discontinuities if D⊤β* has few nonzero coefficients, i.e., if ‖D⊤β*‖₀ is small, or equivalently if most edges of the graph carry the constant signal. In particular, if ‖D⊤β*‖₀ = s, we say that β* is a vector of D⊤-sparsity s.

The Graph-Slope estimator
We consider in this paper the so-called Graph-Slope variational scheme: β̂^GS ∈ argmin_{β∈R^n} { ½ ‖y − β‖₂² + n ‖D⊤β‖_[λ] } (1.2), with λ = (λ1, ..., λp) ∈ R^p satisfying λ1 ≥ λ2 ≥ ··· ≥ λp ≥ 0, and where, for any vector θ ∈ R^p, (|θ|↓_1, ..., |θ|↓_p) denotes the non-increasing rearrangement of its amplitudes (|θ_1|, ..., |θ_p|). According to [10], ‖·‖_[λ] is a norm over R^p if and only if λ1 ≥ λ2 ≥ ··· ≥ λp ≥ 0 with at least one strict inequality. This is a consequence of the observation that if λ1 ≥ ··· ≥ λp ≥ 0, then one can rewrite the Slope norm of θ as the maximum, over all τ ∈ S_p (the set of permutations of [p]), of the quantity Σ_{j=1}^p λ_j |θ_{τ(j)}|. The Generalized Lasso (also sometimes referred to as the TV denoiser) relies on ℓ1 regularization. It was recently investigated in [17,25], and can be defined as β̂^GL ∈ argmin_{β∈R^n} { ½ ‖y − β‖₂² + nλ1 ‖D⊤β‖₁ } (1.3), where ‖·‖₁ is the standard ℓ1 norm and λ1 > 0 is a tuning parameter. The Graph-Slope penalty is more flexible than (1.3), as it allows the smaller coefficients of D⊤β to be penalized less than its larger coefficients. We will see in the next sections that this flexibility brings advantages both to the theoretical properties of β̂^GS and to its performance in simulations, as compared to β̂^GL.
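The Graph-Slope objective can be evaluated directly; a sketch (the n·‖D⊤β‖_[λ] scaling of the penalty is our reading of the weights nλ_j appearing later, and the function names are ours):

```python
import numpy as np

def slope_norm(theta, lam):
    # ordered l1 norm: non-increasing |theta| against non-increasing lam
    return float(np.dot(np.asarray(lam, dtype=float),
                        np.sort(np.abs(theta))[::-1]))

def graph_slope_objective(beta, y, Dt, lam):
    """0.5 * ||y - beta||_2^2 + n * ||D^T beta||_[lambda] (assumed scaling).
    Dt is the p x n incidence matrix D^T."""
    n = y.shape[0]
    return 0.5 * float(np.sum((y - beta) ** 2)) + n * slope_norm(Dt @ beta, lam)
```

Setting all weights equal to λ1 recovers the Graph-Lasso objective (1.3).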

Theoretical guarantees: sharp oracle inequality
We can now state the main theoretical result of the paper, a sharp oracle inequality for Graph-Slope. For any integer s and weights λ = (λ1, ..., λp), define Λ(λ, s) = ( Σ_{j=1}^s λ_j² )^{1/2} (2.1). Assume that the weights λ are chosen so that the event (2.2) has probability at least 1/2. Then, for any δ ∈ (0, 1), the oracle inequality (2.3) holds with probability at least 1 − δ, where Λ(·, ·) is defined in (2.1) and the compatibility factor κ(s) is defined in (2.4). Proof. Let β be a minimizer of the right-hand side of (2.3) and let s = ‖D⊤β‖₀.
To complete the proof, it remains to establish (2.6). By the definition of the median, it is enough to show that the corresponding bound holds on an event of probability at least 1/2. By Lemma A.4 and the fact that Id_n − Π = (D⊤)† D⊤, we obtain a bound valid for all v, where we used the duality between ‖·‖*_[λ] and ‖·‖_[λ] for the second term, and the fact that the transpose and the Moore-Penrose pseudo-inverse commute, which implies (D†)⊤ = (D⊤)†.
We now bound g(ε) from above on the event (2.2). Consider v ∈ R^n such that ‖v‖ = 1; the term 3Λ(λ, s) is then controlled through the definition of κ(s) given in (2.4). Thus, we have proved that (2.7) holds on the event (2.2), which has probability at least 1/2. This implies (2.6) by the definition of the median.
The constant κ(s) is sometimes referred to as the compatibility factor of D⊤. Bounds on the compatibility factor are available for a large class of random and deterministic graphs [17]. For instance, for graphs with bounded degree, the compatibility factor is bounded from below (see for instance [17, Lemma 3]). In linear regression, several constants that measure the correlations of the design matrix have been proposed to study the Lasso and the Dantzig selector: [8] defined the Restricted Eigenvalue constant, [31] the Compatibility constant, [35] the Cone Invertibility factors and [13] the Compatibility factor, to name a few. The Weighted Restricted Eigenvalue constant was also introduced in [4] to study the Slope estimator. These constants are the linear regression analogs of κ(s) defined in (2.4).
Theorem 2.1 does not provide an explicit choice for the weights λ1 ≥ ··· ≥ λp. These weights should be large enough so that the event (2.2) has probability at least 1/2, but also as small as possible in order to minimize the right-hand side of (2.3). Define g1, ..., gp by g_j = ⟨(D⊤)† e_j, ε⟩ for all j = 1, ..., p (2.8), and let |g|↓_1 ≥ ··· ≥ |g|↓_p be the non-increasing rearrangement of (|g1|, ..., |gp|). By Inequality (2.10) below, if the event { max_{j=1,...,p} |g|↓_j / (nλ_j) ≤ 1/2 } has probability at least 1/2, then the event (2.2) has probability at least 1/2 as well, and the conclusion of Theorem 2.1 holds. This observation can be used to justify the following heuristic for the choice of the tuning parameters λ1 ≥ ··· ≥ λp, which can be implemented provided that the Moore-Penrose pseudo-inverse (D⊤)† and the probability distribution of the noise vector ε are both known. The heuristic goes as follows. Assume that one has generated N independent copies of the random vector ε, and denote by P̂_N the empirical probability distribution with respect to these copies and by F̂^j_N the empirical cumulative distribution function (cdf) of |g|↓_j. Next, define λ_j so that nλ_j is the (1 − 1/(3p))-th empirical quantile of 2|g|↓_j. As N → +∞, by the Glivenko-Cantelli theorem, F̂^j_N(t) converges to the cdf of |g|↓_j at t, uniformly in t ∈ R and j ∈ [p]. Hence, if N is large enough, then for all j = 1, ..., p we have P(2|g|↓_j ≤ nλ_j) ≥ 1 − 1/(2p). By the union bound over j = 1, ..., p, the event (2.2) thus has probability at least 1/2 with respect to the probability distribution P of ε. This simple scheme provides a computational heuristic to choose the weights λ1, ..., λp. The following corollaries propose a theoretical choice for the weights. To state them, write ρ(G) = max_{j∈[p]} ‖(D⊤)† e_j‖₂, following the notation in [17].
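The Monte Carlo scheme can be sketched as follows (a sketch under our reading of (2.8), namely g = D†ε; the function name `monte_carlo_weights` is ours):

```python
import numpy as np

def monte_carlo_weights(Dt, sigma, N=1000, rng=None):
    """Empirical-quantile heuristic for the Graph-Slope weights:
    draw N copies of eps ~ N(0, sigma^2 Id_n), form g = D^dagger eps,
    sort the magnitudes, and set n * lambda_j to the (1 - 1/(3p))-th
    empirical quantile of 2 |g|_(j)."""
    rng = np.random.default_rng(rng)
    p, n = Dt.shape
    pinv = np.linalg.pinv(Dt.T)                 # D^dagger, of size p x n
    samples = np.empty((N, p))
    for it in range(N):
        eps = sigma * rng.standard_normal(n)
        samples[it] = np.sort(np.abs(pinv @ eps))[::-1]  # |g|_(1) >= ...
    lam = np.quantile(2.0 * samples, 1.0 - 1.0 / (3.0 * p), axis=0) / n
    return np.maximum.accumulate(lam[::-1])[::-1]  # safeguard: non-increasing
```

The returned weights are non-increasing by construction, as required for ‖·‖_[λ] to be a norm.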
Under an explicit choice of tuning parameters, Corollary 1 yields the following result.

Corollary 2.
Under the same hypotheses as Theorem 2.1, but with the special choice nλ_j = 8σρ(G)√(log(2p/j)) for all j ∈ [p], for any δ ∈ (0, 1) the corresponding oracle inequality holds with probability at least 1 − 2δ. Proof. We apply Lemma A.2 with the choice C = 8σρ(G)/n.
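This explicit choice is easy to compute; a sketch (our reading of the corollary takes nλ_j = 8σρ(G)√(log(2p/j)), with ρ(G) the largest Euclidean column norm of (D⊤)†; the function name is ours):

```python
import numpy as np

def corollary2_weights(Dt, sigma):
    """Weights lambda_j = 8 * sigma * rho(G) * sqrt(log(2p / j)) / n,
    where rho(G) = max_j ||(D^T)^dagger e_j||_2 and Dt = D^T is p x n."""
    p, n = Dt.shape
    rho = np.linalg.norm(np.linalg.pinv(Dt), axis=0).max()
    j = np.arange(1, p + 1)
    return 8.0 * sigma * rho * np.sqrt(np.log(2.0 * p / j)) / n
```

Since log(2p/j) decreases in j and log(2p/p) = log 2 > 0, the weights are non-increasing and positive, as required.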
When the true signal satisfies ‖D⊤β*‖₀ = s, the previous bound simplifies accordingly. Corollary 2 is an improvement over the bound provided in [17, Theorem 2] for the TV denoiser (also sometimes referred to as the Generalized Lasso) relying on ℓ1 regularization, defined in Eq. (1.3). Indeed, the contribution of the second term in Corollary 2 is reduced from log(ep/δ) (in [17, Theorem 2]) to log(2ep/s). Thus the dependence of the right-hand side of the oracle inequality on the confidence level δ is significantly reduced compared to the result of [17, Theorem 2].
A bound similar to Corollary 2 could be obtained for ℓ1 regularization by adapting the proof of [4, Theorem 4.3]. However, such an improved bound would require a choice of regularization parameter that depends on the D⊤-sparsity of the signal. Graph-Slope does not rely on such a quantity, and is thus adaptive to the unknown D⊤-sparsity of the signal.

Remark 1.
The optimal theoretical choice of the parameters requires the practitioner to know the noise level σ. Whenever σ is not known, the practitioner can use the corresponding concomitant estimator to alleviate this issue [21,5,29]; see also [19] for efficient algorithms to compute such scale-free estimators.

Algorithm for Graph-Slope
In this section, we propose an algorithm to compute a solution of the highly structured optimization problem (1.2). The data-fidelity term f : β ↦ ‖y − β‖₂²/2 is a convex smooth function with 1-Lipschitz gradient, and the map β ↦ ‖D⊤β‖_[λ] is the pre-composition of the norm ‖·‖_[λ], whose proximal operator can be computed efficiently [36,10], with a linear operator. Thus, the use of a dual or primal-dual proximal scheme is natural.
Problem (1.2) can be rewritten as min_{β∈R^n} f(β) + g(D⊤β), where f is a smooth, strictly convex function with 1-Lipschitz gradient and g = n‖·‖_[λ] is a convex, proper, lower semicontinuous function (see for instance [2, p. 275]). Its dual problem reads min_{θ∈R^p} f*(Dθ) + g*(−θ), where f* is the convex conjugate of f, i.e., for any x ∈ R^n, f*(x) = ½‖x + y‖₂² − ½‖y‖₂². Classical computations lead to the dual problem min_{θ∈R^p} ½‖y − Dθ‖₂² subject to ‖θ‖*_[nλ] ≤ 1 (3.1). The dual formulation (3.1) can be rewritten as an unconstrained problem, using, for any set C ⊂ R^p and any θ ∈ R^p, the indicator notation ι_C(θ) = 0 if θ ∈ C and +∞ otherwise. The quadratic term in y is constant and can be dropped; thus the optimization problem (3.1) is equivalent to (3.2).

Algorithm 1 FISTA on dual formulation
Require: (β0, θ0) initial guess, L = ‖D‖², t0 = 1, ε duality-gap tolerance. The formulation (3.2) is well suited to an accelerated version of the forward-backward algorithm such as FISTA [3]. As a stopping criterion, we use the duality gap Δ(β, θ), defined for a feasible pair (β, θ) as the difference between the primal and dual objectives, and by Δ(β, θ) = +∞ for an infeasible pair. In practice we set ε = 10⁻² as a default value. Algorithm 1 summarizes the dual FISTA algorithm applied to the Graph-Slope minimization problem. We recall that the proximity operator of a convex, proper, lower semicontinuous function f is given, for λ > 0, as the unique solution of the optimization problem Prox_{λf}(β) = argmin_{z∈R^n} { ½‖β − z‖₂² + λ f(z) }. To compute the proximity operator of ι_{‖·‖*_[λ] ≤ 1}, we use the Moreau decomposition [22, p. 65], which links it to the proximity operator of the Slope norm, whose conjugate it is; here Π_{(1/τ)B*} denotes the projection onto the unit ball B* associated with the dual norm ‖·‖*_[λ], scaled by a factor 1/τ. The proximity operator of ‖·‖_[λ] can be obtained in several ways [36,10]. In our numerical experiments, we use the connection between this operator and isotonic regression following [36], which can be computed in linear time. Assuming that |u| has been sorted in non-increasing order (the signs and the original ordering are restored afterwards, see details in [36, Eq. (24)]), computing Prox_{‖·‖_[λ]}(u) is equivalent to solving the isotonic problem argmin_{θ1 ≥ ··· ≥ θp ≥ 0} Σ_{j=1}^p (u_j − λ_j − θ_j)². We rely on the fast implementation of the PAVA algorithm (Pool Adjacent Violators Algorithm, see for instance [6]) available in scikit-learn [23] to solve this inner problem. The source code used for our numerical experiments is freely available at http://github.com/svaiter/gslope_oracle_inequality.
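Algorithm 1 and the inner prox computation can be sketched in Python as follows (a sketch under our assumptions: the penalty is taken as ‖D⊤β‖_[λ] with primal recovery β = y − Dθ, the names `prox_slope` and `graph_slope_fista` are ours, and the projection uses the Moreau decomposition together with scikit-learn's `isotonic_regression`):

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_slope(u, lam):
    """Proximity operator of the ordered l1 norm with non-increasing
    weights lam, via non-increasing isotonic regression (PAVA) of the
    sorted magnitudes, following [36]."""
    sign, mag = np.sign(u), np.abs(u)
    order = np.argsort(mag)[::-1]              # sort |u| non-increasingly
    fit = isotonic_regression(mag[order] - lam, increasing=False)
    out = np.zeros_like(u)
    out[order] = np.clip(fit, 0.0, None)       # clip at zero, undo sorting
    return sign * out

def graph_slope_fista(y, Dt, lam, n_iter=5000):
    """FISTA on the dual: min_theta 0.5 * ||y - D theta||^2 subject to
    ||theta||*_[lam] <= 1, with primal recovery beta = y - D theta.
    The projection onto the dual ball uses the Moreau decomposition:
    Proj(w) = w - prox_slope(w, lam)."""
    D = Dt.T                                    # n x p
    theta = np.zeros(Dt.shape[0])
    z, t = theta.copy(), 1.0
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / ||D||^2
    for _ in range(n_iter):
        w = z - step * (Dt @ (D @ z - y))       # gradient step on the dual
        theta_next = w - prox_slope(w, lam)     # projection onto the dual ball
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = theta_next + ((t - 1.0) / t_next) * (theta_next - theta)
        theta, t = theta_next, t_next
    return y - D @ theta
```

With all weights equal, `prox_slope` reduces to soft thresholding and the solver computes the Graph-Lasso/TV denoiser; a fixed iteration budget stands in for the duality-gap stopping rule of Algorithm 1.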

Synthetic experiments
To illustrate the behavior of Graph-Slope, we first propose two synthetic experiments in moderate dimension. The first one is concerned with the so-called "Caveman" graph and the second one with the 1D path graph.
For these two scenarios, we analyze the performance following the same protocol. For a given noise level σ, we use the weights derived from Theorem 2.1 (dropping the constant factor 8) and from [17]. For every n0 between 0 and p, we generate 1000 signals as follows. We draw J uniformly at random among all subsets of [p] of size n0. Then, we let Π_J be the projection onto Ker D⊤_J and generate a vector g ∼ N(0, Id_n). We then construct β* = c(Id − Π_J)g, where c is a given constant (here c = 8). This constrains the signal β* to be of D⊤-sparsity at most p − n0. We corrupt the signals by adding zero-mean Gaussian noise with variance σ², and run both the Graph-Lasso and the Graph-Slope estimators. We then compute the average mean-squared error (MSE), false detection rate (FDR) and true detection rate (TDR). To clarify our vocabulary, given an estimator β̂ and a ground truth β*, the MSE reads (1/n)‖β* − β̂‖², while the FDR is the proportion of discovered edges (those with (D⊤β̂)_e ≠ 0) that carry the constant true signal, and the TDR is the proportion of true discontinuities that are discovered.
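The metrics of this protocol can be written down explicitly; a sketch (the tolerance `tol` used to declare an edge coefficient nonzero is our choice):

```python
import numpy as np

def mse(beta_hat, beta_star):
    """(1/n) * ||beta_star - beta_hat||^2."""
    return float(np.mean((beta_hat - beta_star) ** 2))

def fdr_tdr(Dt_beta_hat, Dt_beta_star, tol=1e-8):
    """FDR: fraction of discovered edges (nonzero coefficient in
    D^T beta_hat) that are null in the ground truth.
    TDR: fraction of true discontinuities that are discovered."""
    disc = np.abs(Dt_beta_hat) > tol
    true_supp = np.abs(Dt_beta_star) > tol
    fdr = float((disc & ~true_supp).sum()) / max(int(disc.sum()), 1)
    tdr = float((disc & true_supp).sum()) / max(int(true_supp.sum()), 1)
    return fdr, tdr
```

Both rates are computed over the edge set, i.e., on the vectors D⊤β̂ and D⊤β*.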

Example on Caveman
The caveman model was introduced in [34] to model the small-world phenomenon in sociology. Here we consider its relaxed version: a graph formed by l cliques of size k (hence n = lk), in which, with probability q ∈ [0, 1], an edge of a clique is rewired to a different clique. In our experiment, we set l = 4, k = 10 (n = 40) and q = 0.1. We provide a visualisation of such a graph in Figure 1a; for this realization, we have p = 180. The inter-clique edges are highlighted in Figure 1a, whereas the edges of the original cliques (each similar to the complete graph on 10 nodes) are in black. The signals are generated as random vectors of given D⊤-sparsity with a noise level σ = 0.2. Figure 1b shows the decay of the weights. Figures 1c-1e represent the evolution of the MSE and TDR as a function of the level of D⊤-sparsity. We observe that while the MSE is close between the Graph-Lasso and Graph-Slope estimators at low sparsity levels, the TDR is vastly improved in the case of Graph-Slope, at a small price in terms of FDR (a bit more for the Monte Carlo choice of the weights). Hence, empirically, Graph-Slope makes more discoveries than Graph-Lasso without degrading the overall FDR/MSE, and even improves the latter.
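The relaxed caveman construction can be sketched in a few lines (a pure-Python sketch; networkx's `relaxed_caveman_graph` provides an equivalent generator):

```python
import random

def relaxed_caveman_edges(n_cliques, clique_size, q, seed=0):
    """l cliques of k nodes each; every intra-clique edge is, with
    probability q, rewired so that one endpoint lands in another clique."""
    rng = random.Random(seed)
    n = n_cliques * clique_size
    edges = set()
    for c in range(n_cliques):
        lo, hi = c * clique_size, (c + 1) * clique_size
        for i in range(lo, hi):
            for j in range(i + 1, hi):
                if rng.random() < q:
                    # rewire endpoint j to a vertex outside the current clique
                    j = rng.choice([v for v in range(n) if not (lo <= v < hi)])
                edges.add((min(i, j), max(i, j)))
    return sorted(edges)
```

With l = 4 and k = 10 and no rewiring collisions, this yields n = 40 vertices and p = 180 edges, matching the numbers of the experiment.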

Example on a path: 1D-Total Variation
The classical 1D-Total Variation corresponds to the Graph-Lasso estimator β̂^GL when G is the path graph over n vertices, hence with p = n − 1 edges. In our experiments, we take n = 100, σ = 0.6 and a very sparse gradient (s = 4). With these values, and taking a random amplitude for each step, we generate a piecewise-constant signal; a typical realization is displayed in Figure 2a. Figure 2b shows the decay of the weights. Note that in this case, the shape of the Monte Carlo weights differs from that of the previous experiment: they are adapted to the underlying graph, contrary to the theoretical weights λ^GS, which depend only on the size of the graph. Figures 2c-2e represent the evolution of the MSE and TDR as a function of the level of D⊤-sparsity. Here, Graph-Slope does not improve the MSE significantly. However, as in the caveman experiments, Graph-Slope is likely to make more discoveries than Graph-Lasso, at a small price in terms of FDR.

Example on real data: Paris roads network
To conclude our numerical experiments, we present results on a real-life graph: the road network of Paris, France. Thanks to the Python module osmnx [9], which downloads and simplifies OpenStreetMap data, we run our experiments on p = 20108 streets (edges) and n = 10205 intersections (vertices).
The ground-truth signal is constructed as in [16], as follows. Starting from 30 infection sources, each infected intersection infects each of its neighbors with probability 0.75. We let the infection process run for 8 iterations. The resulting graph signal β*, of D⊤-sparsity 1586, is represented in Figure 3a. We then corrupt this signal with zero-mean Gaussian noise of standard deviation σ = 0.8, leading to the observations y represented in Figure 3b.
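The infection process can be simulated in a few lines of Python (our simplified reading of the construction in [16]; `adjacency` is a vertex-to-neighbours dictionary):

```python
import random

def simulate_infection(adjacency, sources, p_infect=0.75, n_rounds=8, seed=0):
    """Starting from the source vertices, run n_rounds rounds in which
    every infected vertex infects each of its neighbours independently
    with probability p_infect. Returns the set of infected vertices."""
    rng = random.Random(seed)
    infected = set(sources)
    for _ in range(n_rounds):
        newly = set()
        for v in infected:
            for w in adjacency[v]:
                if w not in infected and rng.random() < p_infect:
                    newly.add(w)
        infected |= newly
    return infected

# toy usage: a cycle on 5 vertices with a single source
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
infected = simulate_infection(adj, sources=[0])
```

The indicator vector of the infected set then plays the role of the piecewise-constant ground-truth signal over the graph.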
Instead of using the parameters given in (3.3), we computed oracle parameters for the Graph-Lasso and Graph-Slope estimators by evaluating 100 parameters of the form λ^GL = ασ√(2 log(p))/n and (λ^GS)_j = ασ√(2 log(p/j))/n for all j ∈ [p], where α lives on a geometric grid inside [10⁻⁵, 10^{1.5}]. The best one in terms of MSE (i.e., in terms of (1/n)‖β* − β̂‖²) is referred to as the oracle parameter.
The results are illustrated in Figure 3c for Graph-Lasso and in Figure 3d for Graph-Slope.

Proof. The sum of a 1-strongly convex function and a convex function is 1-strongly convex; multiplying by n, we obtain the corresponding strong-convexity inequality for all β ∈ R^n and for any d in the subdifferential of the objective function (1.2) at β̂. Since β̂ is a minimizer of (1.2), we can choose d = 0 in the above display. For d = 0, the previous display is equivalent to the claim of the lemma.
Lemma A.4. Suppose that the graph G has K connected components C1, ..., CK. Then ker(D⊤) = span(1_{C1}, ..., 1_{CK}), where for any k ∈ [K] the vector 1_{Ck} ∈ R^n is defined by (1_{Ck})_i = 1 if i ∈ Ck and 0 otherwise. Moreover, the orthogonal projection onto ker(D⊤) is denoted by Π and is the component-wise averaging given by (Πβ)_i = (1/|Ck|) Σ_{j∈Ck} β_j for i ∈ Ck. Furthermore, if G is a connected graph, then ker(D⊤) = span(1_n).
Proof. The proof can be done for the simple case of a connected graph (i.e., K = 1), and the result generalizes to K > 1 by applying the argument to each connected component separately. Hence, we assume that K = 1. For any β ∈ ker(D⊤), the definition of the incidence matrix yields β_i = β_j for all {i, j} ∈ E. Since all vertices are connected, by induction all the β_j's are identical, and β ∈ span(1_n) = span(1_{C1}). The converse is proved in the same way.