The Complexity of Primal-Dual Fixed Point Methods for Ridge Regression

We study the ridge regression (L2 regularized least squares) problem and its dual, which is also a ridge regression problem. We observe that the optimality conditions describing the primal and dual optimal solutions can be formulated in several different but equivalent ways. The optimality conditions we identify form a linear system involving a structured matrix depending on a single relaxation parameter which we introduce for regularization purposes. This leads to the idea of studying and comparing, in theory and practice, the performance of the fixed point method applied to these reformulations. We compute the optimal relaxation parameters and uncover interesting connections between the complexity bounds of the variants of the fixed point scheme we consider. These connections follow from a close link between the spectral properties of the associated matrices. For instance, some reformulations involve purely imaginary eigenvalues; some involve real eigenvalues and others have all eigenvalues on a circle in the complex plane. We show that the deterministic Quartz method---which is a special case of the randomized dual coordinate ascent method with arbitrary sampling recently developed by Qu, Richtárik and Zhang---can be cast in our framework, and achieves the best rate in theory and in numerical experiments among the fixed point methods we study. Remarkably, the method achieves an accelerated convergence rate. Numerical experiments indicate that our main algorithm is competitive with the conjugate gradient method.


Introduction
Given matrices $A_1, \ldots, A_n \in \mathbb{R}^{d\times m}$ encoding $n$ observations (examples), and vectors $y_1, \ldots, y_n \in \mathbb{R}^m$ encoding associated responses (labels), one is often interested in finding a vector $w \in \mathbb{R}^d$ such that, in some precise sense, the product $A_i^T w$ is a good approximation of $y_i$ for all $i$. A fundamental approach to this problem, used in all areas of computational practice, is to formulate the problem as an L2-regularized least-squares problem, also known as ridge regression. In particular, we consider the primal ridge regression problem
$$\min_{w\in\mathbb{R}^d} P(w) := \frac{1}{2n}\sum_{i=1}^n \|A_i^T w - y_i\|^2 + \frac{\lambda}{2}\|w\|^2 = \frac{1}{2n}\|A^T w - y\|^2 + \frac{\lambda}{2}\|w\|^2, \tag{1}$$
where $\lambda > 0$ is a regularization parameter and $\|\cdot\|$ denotes the standard Euclidean norm. In the second and more concise expression we have concatenated the observation matrices and response vectors to form a single observation matrix $A = [A_1, A_2, \cdots, A_n] \in \mathbb{R}^{d\times N}$ and a single response vector $y = (y_1, y_2, \cdots, y_n) \in \mathbb{R}^N$, where $N = nm$.
With each observation $(A_i, y_i)$ we now associate a dual variable $\alpha_i \in \mathbb{R}^m$. The Fenchel dual of (1) is also a ridge regression problem:
$$\max_{\alpha\in\mathbb{R}^N} D(\alpha) := -\frac{1}{2\lambda n^2}\|A\alpha\|^2 - \frac{1}{2n}\|\alpha\|^2 + \frac{1}{n}\langle \alpha, y\rangle, \tag{2}$$
where $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n) \in \mathbb{R}^N$.
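As a quick sanity check on the primal-dual pair above, the following sketch (variable names, dimensions and data are ours, chosen only for illustration) solves the primal in closed form and verifies that the duality gap vanishes at the optimum:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1            # m = 1, so N = n
A = rng.standard_normal((d, n))   # columns A_i are the observations
y = rng.standard_normal(n)

# primal objective P and dual objective D, as in (1) and (2)
P = lambda w: 0.5/n*np.sum((A.T @ w - y)**2) + 0.5*lam*np.sum(w**2)
D = lambda a: -0.5/(lam*n**2)*np.sum((A @ a)**2) - 0.5/n*np.sum(a**2) + (a @ y)/n

# closed-form primal solution: (AA^T + λn I) w = A y
w = np.linalg.solve(A @ A.T + lam*n*np.eye(d), A @ y)
alpha = y - A.T @ w               # candidate dual solution

assert np.allclose(w, A @ alpha/(lam*n))   # w* = (1/λn) A α*
assert np.isclose(P(w), D(alpha))          # zero duality gap
```

The two asserted identities are exactly the primal-dual relations derived in Section 2.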
Optimality conditions. The starting point of this work is the observation that the optimality conditions for the primal and dual ridge regression problems can be written in several different ways, in the form of a linear system involving the primal and dual variables. In particular, we find several different matrix-vector pairs $(M, b)$, where $M \in \mathbb{R}^{(d+N)\times(d+N)}$ and $b \in \mathbb{R}^{d+N}$, such that the optimality conditions can be expressed in the form of the linear system
$$x = Mx + b, \tag{3}$$
where $x = (w, \alpha) \in \mathbb{R}^{d+N}$.
Fixed point methods. With each system (3) one can naturally associate a fixed point method performing the iteration $x_{k+1} = Mx_k + b$. However, unless the spectral radius of $M$ is less than 1, such a method will not converge [1]. To overcome this drawback, we utilize the idea of relaxation.
In particular, we pick a relaxation parameter $\theta \neq 0$ and replace (3) with the equivalent system
$$x = G_\theta x + b_\theta,$$
where $G_\theta = (1-\theta)I + \theta M$ and $b_\theta = \theta b$. The choice $\theta = 1$ recovers (3). We then study the convergence of the primal-dual fixed point methods through a careful study of the spectra of the iteration matrices $G_\theta$. Our work starts with the following observation: while all these formulations are necessarily algebraically equivalent, they give rise to different fixed-point algorithms, with different convergence properties.

Contributions and literature review
Duality plays an important role in optimization and machine learning, not only from the theoretical point of view but also computationally [2,3,4].
A more recent idea that has generated many contributions is the use of the primal and dual problems together. Primal-dual methods have been employed in convex optimization problems where strong duality holds, with success on several types of nonlinear and nonsmooth functions that arise in application fields such as image processing, machine learning and inverse problems [5,6,7].
On the other hand, fixed-point-type algorithms are classical tools for solving some structured linear systems. In particular, we have the iterative schemes developed by the mathematical economists Arrow, Hurwicz and Uzawa for solving saddle point problems [8,9].
In this paper we develop several primal-dual fixed point methods for the Ridge Regression problem. Ridge regression was introduced by Hoerl and Kennard [10,11] as a regularization method for solving least squares problems with highly correlated predictors. The goal is to reduce the standard errors of regression coefficients by imposing a penalty, in the L 2 norm, on their size.
Since then, numerous papers have been devoted to the study of ridge regression, or of more general formulations that contain ridge regression as a particular case. Some of these works consider its dual formulation, proposing deterministic and stochastic algorithms that can be applied to the dual problem [12,13,14,2,15,4].
To the best of our knowledge, the only work that considers a primal-dual fixed point approach to deal with ridge regression is [16], where the authors deal with ill-conditioned problems. They present an algorithm based on the gradient method and an accelerated version of this algorithm.
Here we propose methods based on formulating the optimality conditions for the problem of minimizing the duality gap between the ridge regression problems (1) and (2) in different but equivalent ways, by means of linear systems involving structured matrices. We also study the complexity of the proposed methods and prove that our main method achieves the optimal accelerated Nesterov rate. This theoretical property is supported by numerical experiments indicating that our main algorithm is competitive with the conjugate gradient method.

Outline
In Section 2 we formulate the optimality conditions for the problem of minimizing the duality gap between (1) and (2) in two different, but equivalent, ways by means of linear systems involving structured matrices. We also establish the duality relationship between the problems (1) and (2). In Section 3 we describe a family of (parameterized) fixed point methods applied to the reformulations for the optimality conditions. We present the convergence analysis and complexity results for these methods. Section 4 brings the main contribution of this work, with an accelerated version of the methods described in Section 3. In Section 5 we discuss some variants of our accelerated algorithm. In Section 6 we perform some numerical experiments. Finally, concluding remarks close our text in Section 7.

Separable and Coupled Optimality Conditions
Defining $x = (w, \alpha) \in \mathbb{R}^{d+N}$, our primal-dual problem consists of minimizing the duality gap between problems (1) and (2), that is,
$$\min_{x\in\mathbb{R}^{d+N}} f(x) := P(w) - D(\alpha). \tag{4}$$
This is a strongly convex quadratic problem and therefore admits a unique global solution $x^* \in \mathbb{R}^{d+N}$.

A separable system
The gradient of $f$ with respect to each block of variables is
$$\nabla_w f(x) = \frac{1}{n}A(A^T w - y) + \lambda w, \qquad \nabla_\alpha f(x) = \frac{1}{\lambda n^2}A^T A\,\alpha + \frac{1}{n}(\alpha - y). \tag{5}$$
So, the first and natural way of writing the optimality conditions for problem (4) is just to set the expressions given in (5) equal to zero, which can be written as
$$w = -\frac{1}{\lambda n}AA^T w + \frac{1}{\lambda n}Ay, \qquad \alpha = -\frac{1}{\lambda n}A^T A\,\alpha + y. \tag{6}$$
Note that this system is separable: the first equation involves only $w$ and the second only $\alpha$.

A coupled system
In order to derive the duality between (1) and (2), as well as to reformulate the optimality conditions for problem (4), note that
$$P(w) = \frac{1}{n}\sum_{i=1}^n \phi_i(A_i^T w) + \lambda g(w),$$
where $\phi_i(z) = \frac{1}{2}\|z - y_i\|^2$ and $g(w) = \frac{1}{2}\|w\|^2$. Now, recall that the Fenchel conjugate of a convex function $\xi : \mathbb{R}^l \to \mathbb{R}$ is defined by
$$\xi^*(u) := \sup_{s\in\mathbb{R}^l}\{\langle u, s\rangle - \xi(s)\}.$$
Note that if $\xi$ is strongly convex, then $\xi^*(u) < \infty$ for all $u \in \mathbb{R}^l$. Indeed, in this case $\xi$ is bounded below by a strongly convex quadratic function, implying that the "sup" above is in fact a "max".
It is easily seen that $\phi_i^*(s) = \frac{1}{2}\|s\|^2 + s^T y_i$ and $g^*(u) = \frac{1}{2}\|u\|^2$. Furthermore, denoting $\bar\alpha := \frac{1}{\lambda n}\sum_{i=1}^n A_i\alpha_i = \frac{1}{\lambda n}A\alpha$, we have
$$D(\alpha) = -\lambda\, g^*(\bar\alpha) - \frac{1}{n}\sum_{i=1}^n \phi_i^*(-\alpha_i),$$
so the duality gap can be written as
$$P(w) - D(\alpha) = \lambda\Big[g(w) + g^*(\bar\alpha) - \langle w, \bar\alpha\rangle\Big] + \frac{1}{n}\sum_{i=1}^n\Big[\phi_i(A_i^T w) + \phi_i^*(-\alpha_i) + \langle \alpha_i, A_i^T w\rangle\Big],$$
and weak duality follows immediately from the fact that, by the Fenchel–Young inequality, both bracketed terms are nonnegative. Strong duality occurs when these quantities vanish, which is precisely the same as $w = \nabla g^*(\bar\alpha)$ and $\alpha_i = -\nabla\phi_i(A_i^T w)$, or, equivalently, $\bar\alpha = \nabla g(w)$ and $A_i^T w = \nabla\phi_i^*(-\alpha_i)$. Therefore, another way to see the optimality conditions for problem (4) is by the relations
$$w = \frac{1}{\lambda n}A\alpha, \qquad \alpha_i = y_i - A_i^T w, \quad i = 1,\ldots,n. \tag{10}$$
This is equivalent to
$$w = \frac{1}{\lambda n}A\alpha, \qquad \alpha = y - A^T w. \tag{11}$$

Compact form
Both reformulations of the optimality conditions, (6) and (11), can be viewed in the compact form
$$x = Mx + b \tag{12}$$
for some $M \in \mathbb{R}^{(d+N)\times(d+N)}$ and $b \in \mathbb{R}^{d+N}$. Let us denote by
$$M_1 = -\frac{1}{\lambda n}\begin{bmatrix} AA^T & 0\\ 0 & A^T A\end{bmatrix}, \qquad M_2 = \begin{bmatrix} 0 & \frac{1}{\lambda n}A\\ -A^T & 0\end{bmatrix} \tag{13}$$
the matrices associated with the optimality conditions formulated as (6) and (11), respectively. Also, let
$$b_1 = \frac{1}{\lambda n}\begin{bmatrix} Ay\\ \lambda n\, y\end{bmatrix}, \qquad b_2 = \begin{bmatrix} 0\\ y\end{bmatrix}. \tag{14}$$
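The following sketch (our own small random instance) assembles the pairs $(M_1, b_1)$ and $(M_2, b_2)$ as written above and verifies that the unique minimizer of the duality gap is a fixed point of both systems, and that $M_2^2 = M_1$, a fact used later in Section 3:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1          # m = 1, so N = n and x = (w, α) has size d + n
A = rng.standard_normal((d, n))
y = rng.standard_normal(n)

Z = np.zeros((d, n))
M1 = -np.block([[A @ A.T, Z], [Z.T, A.T @ A]]) / (lam*n)
M2 = np.block([[np.zeros((d, d)), A/(lam*n)], [-A.T, np.zeros((n, n))]])
b1 = np.concatenate([A @ y/(lam*n), y])
b2 = np.concatenate([np.zeros(d), y])

# unique minimizer of the duality gap, via the closed-form primal solution
w = np.linalg.solve(A @ A.T + lam*n*np.eye(d), A @ y)
x = np.concatenate([w, y - A.T @ w])

assert np.allclose(M2 @ M2, M1)          # M2² = M1
assert np.allclose(x, M1 @ x + b1)       # x* = M1 x* + b1
assert np.allclose(x, M2 @ x + b2)       # x* = M2 x* + b2
```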

Primal-Dual Fixed Point Methods
A method that arises immediately from the relation (12) is given by the scheme
$$x_{k+1} = Mx_k + b. \tag{15}$$
However, unless the spectral radius of $M$ is less than 1, this scheme will not converge. To overcome this drawback, we utilize the idea of relaxation. More precisely, we consider a relaxation parameter $\theta \neq 0$ and replace (12) with the equivalent system
$$x = \big[(1-\theta)I + \theta M\big]x + \theta b. \tag{16}$$
Note that the choice $\theta = 1$ recovers (12). The proposed algorithm is then given by the following framework.
As we shall see later, the use of the relaxation parameter $\theta$ enables us to prove convergence of Algorithm 3.1 with $M = M_1$ and $b = b_1$ or $M = M_2$ and $b = b_2$, chosen according to (13) and (14), regardless of the spectral radius of these matrices. Let us denote
$$G(\theta) := (1-\theta)I + \theta M,$$
and let $x^*$ be the solution of problem (4). Then $x^* = Mx^* + b$ with $M = M_1$ and $b = b_1$ or $M = M_2$ and $b = b_2$. Therefore, $x^* = G(\theta)x^* + \theta b$. Further, the iteration of Algorithm 3.1 can be written as $x_{k+1} = G(\theta)x_k + \theta b$. Thus,
$$x_k - x^* = G(\theta)^k (x_0 - x^*), \tag{17}$$
and consequently the convergence of the algorithm depends on the spectrum of $G(\theta)$. More precisely, it converges if the spectral radius of $G(\theta)$ is less than 1, because in this case we have $G(\theta)^k \to 0$. In fact, we will address the following questions: • What is the range of $\theta$ for which this scheme converges?
• What is the best choice of θ?
• What is the rate of convergence?
• How does the complexity of this algorithm compare with that of known methods?
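Before answering these questions analytically, the relaxed iteration itself can be sketched in a few lines (a minimal illustration on our own random data; the parameter value below is simply one admissible choice, not yet the optimal one derived later):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))
y = rng.standard_normal(n)

M = -np.block([[A @ A.T, np.zeros((d, n))],
               [np.zeros((n, d)), A.T @ A]]) / (lam*n)      # M1
b = np.concatenate([A @ y/(lam*n), y])                      # b1

s1 = np.linalg.norm(A, 2)                  # largest singular value σ1
theta = lam*n/(lam*n + s1**2)              # one admissible relaxation parameter
G = (1 - theta)*np.eye(d + n) + theta*M    # G(θ) = (1−θ)I + θM

x = np.zeros(d + n)
for _ in range(2000):
    x = G @ x + theta*b                    # x_{k+1} = G(θ) x_k + θ b

w_star = np.linalg.solve(A @ A.T + lam*n*np.eye(d), A @ y)
assert np.allclose(x[:d], w_star, atol=1e-8)         # primal block converged
assert np.allclose(x[d:], y - A.T @ w_star, atol=1e-8)  # dual block converged
```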

Convergence analysis
In this section we study the convergence of Algorithm 3.1 and answer the questions raised above. To this end we point out some properties of the iteration matrices and uncover interesting connections between the complexity bounds of the variants of the fixed point scheme we consider. These connections follow from a close link between the spectral properties of the associated matrices.
For this purpose, let
$$A = U\begin{bmatrix} \Sigma & 0\\ 0 & 0\end{bmatrix}V^T \tag{18}$$
be the singular value decomposition of $A$. That is, $U \in \mathbb{R}^{d\times d}$ and $V \in \mathbb{R}^{N\times N}$ are orthogonal matrices and
$$\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_p) \tag{19}$$
collects the (nonzero) singular values $\sigma_1 \ge \cdots \ge \sigma_p > 0$ of $A$. First, we state a basic linear algebra result (the proof is straightforward by induction).
Proposition 3.1. Let $Q_j \in \mathbb{R}^{l\times l}$, $j = 1, \ldots, 4$, be diagonal matrices whose diagonal entries are the components of $\alpha, \beta, \gamma, \delta \in \mathbb{R}^l$, respectively. Then
$$\det\begin{bmatrix} Q_1 & Q_2\\ Q_3 & Q_4\end{bmatrix} = \prod_{j=1}^{l}(\alpha_j\delta_j - \beta_j\gamma_j).$$
The next result is crucial for the convergence analysis and complexity study of Algorithm 3.1.
Lemma 3.2. The eigenvalues of $M_1$ are $-\sigma_i^2/(\lambda n)$, $i = 1, \ldots, p$, each with multiplicity two, and $0$ with multiplicity $d + N - 2p$. The eigenvalues of $M_2$ are $\pm\frac{\sigma_i}{\sqrt{\lambda n}}\,i$, $i = 1, \ldots, p$, and $0$ with multiplicity $d + N - 2p$. Indeed, using the singular value decomposition (18), both matrices are orthogonally similar to block matrices with diagonal blocks, and the result then follows from Proposition 3.1.
Corollary 3.3, which follows directly from Lemma 3.2 and the fact that $M_1$ is symmetric, gives the spectral radii $\rho_1 := \sigma_1^2/(\lambda n)$ of $M_1$ and $\rho_2 := \sigma_1/\sqrt{\lambda n}$ of $M_2$, the former also being equal to $\|M_1\|$.
From Corollary 3.3 we conclude that if $\sigma_1 < \sqrt{\lambda n}$, then $\rho_1 \le \rho_2 < 1$. So, $M_1^k \to 0$ and $M_2^k \to 0$, which in turn implies that the pure fixed point method, that is, Algorithm 3.1 with $\theta = 1$, converges. However, if $\sigma_1 \ge \sqrt{\lambda n}$, we cannot guarantee convergence of the pure method. Now we shall see that Algorithm 3.1 converges for a broad range of the parameter $\theta$, without any assumption on $\sigma_1$, $\lambda$ or $n$. We begin with the analysis of the framework that uses $M_1$ and $b_1$, defined in (13) and (14).
Theorem 3.4. Let $x_0 \in \mathbb{R}^{d+N}$ be an arbitrary starting point and consider the sequence $(x_k)_{k\in\mathbb{N}}$ generated by Algorithm 3.2 with $\theta \in \left(0, \frac{2\lambda n}{\lambda n + \sigma_1^2}\right)$. Then the sequence $(x_k)$ converges to the (unique) solution of problem (4) at a linear rate of
$$\rho_1(\theta) := \max\left\{|1-\theta|,\ \left|1 - \theta\frac{\lambda n + \sigma_1^2}{\lambda n}\right|\right\}.$$
Furthermore, if we choose $\theta_1^* := \frac{2\lambda n}{2\lambda n + \sigma_1^2}$, then the (theoretical) convergence rate is optimal and it is equal to $\rho_1^* := \frac{\sigma_1^2}{2\lambda n + \sigma_1^2}$.

Proof. We claim that the spectral radius of $G_1(\theta) := (1-\theta)I + \theta M_1$ is $\rho_1(\theta)$ and also coincides with $\|G_1(\theta)\|$. Using Lemma 3.2, we conclude that the eigenvalues of this matrix are $1 - \theta - \theta\sigma_i^2/(\lambda n)$, $i = 1, \ldots, p$ (each with multiplicity two), and $1 - \theta$ (with multiplicity $d + N - 2p$). So, its spectral radius is $\rho_1(\theta)$. Since $G_1(\theta)$ is symmetric, this quantity coincides with $\|G_1(\theta)\|$. Furthermore, the admissible values of $\theta$, that is, the ones for which all eigenvalues have modulus less than one, can be found by solving $\left|1 - \theta\frac{\lambda n + \sigma_1^2}{\lambda n}\right| < 1$, which immediately gives $0 < \theta < \frac{2\lambda n}{\lambda n + \sigma_1^2}$. So, the linear convergence of Algorithm 3.1 is guaranteed for any $\theta \in \left(0, \frac{2\lambda n}{\lambda n + \sigma_1^2}\right)$. Finally, note that the solution of the problem $\min_\theta \rho_1(\theta)$ is attained when $1 - \theta = \theta\frac{\lambda n + \sigma_1^2}{\lambda n} - 1$, that is, at $\theta_1^* = \frac{2\lambda n}{2\lambda n + \sigma_1^2}$, yielding $\rho_1(\theta_1^*) = \frac{\sigma_1^2}{2\lambda n + \sigma_1^2}$. □

The top picture of Figure 1 illustrates the eigenvalues of $G_1(\theta)$ (magenta squares) together with the eigenvalues of $M_1$ (blue triangles), for a fixed value of the parameter $\theta$. The one farthest from the origin is $1 - \theta(\lambda n + \sigma_1^2)/(\lambda n)$. On the bottom we show the two largest (in absolute value) eigenvalues of $G_1(\theta)$ corresponding to the optimal choice of $\theta$. Now we analyze the fixed point framework that employs $M_2$ and $b_2$, defined in (13) and (14).
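The optimal parameter and rate can be checked numerically on a small random instance (values below are our reading of the theorem: $\theta_1^* = 2\lambda n/(2\lambda n + \sigma_1^2)$ and $\rho_1^* = \sigma_1^2/(2\lambda n + \sigma_1^2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))

M1 = -np.block([[A @ A.T, np.zeros((d, n))],
                [np.zeros((n, d)), A.T @ A]]) / (lam*n)
sig = np.linalg.svd(A, compute_uv=False)   # σ1 ≥ … ≥ σ_p

theta = 2*lam*n/(2*lam*n + sig[0]**2)      # θ1*
G1 = (1 - theta)*np.eye(d + n) + theta*M1
rho = max(abs(np.linalg.eigvalsh(G1)))     # spectral radius (G1 is symmetric)

assert np.isclose(rho, sig[0]**2/(2*lam*n + sig[0]**2))   # ρ1*
assert np.isclose(rho, 1 - theta)          # at θ1* the two extreme eigenvalues balance
```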

Fixed Point Method based on
Theorem 3.5. Let $x_0 \in \mathbb{R}^{d+N}$ be an arbitrary starting point and consider the sequence $(x_k)_{k\in\mathbb{N}}$ generated by Algorithm 3.3 with $\theta \in \left(0, \frac{2\lambda n}{\lambda n + \sigma_1^2}\right)$. Then the sequence $(x_k)$ converges to the (unique) solution of problem (4) at an asymptotic convergence rate of
$$\rho_2(\theta) := \sqrt{(1-\theta)^2 + \frac{\theta^2\sigma_1^2}{\lambda n}}.$$
Furthermore, if we choose $\theta_2^* := \frac{\lambda n}{\lambda n + \sigma_1^2}$, then the (theoretical) convergence rate is optimal and it is equal to $\rho_2^* := \frac{\sigma_1}{\sqrt{\lambda n + \sigma_1^2}}$.

Proof. First, using Lemma 3.2, we conclude that the eigenvalues of $G_2(\theta) := (1-\theta)I + \theta M_2$ are $1 - \theta \pm \theta\frac{\sigma_i}{\sqrt{\lambda n}}\,i$, $i = 1, \ldots, p$, and $1 - \theta$. The two with largest modulus are $1 - \theta \pm \theta\frac{\sigma_1}{\sqrt{\lambda n}}\,i$ (see Figure 2). So, the spectral radius of $G_2(\theta)$ is $\rho_2(\theta)$. Further, the values of $\theta$ for which the eigenvalues of $G_2(\theta)$ have modulus less than one can be found by solving $(1-\theta)^2 + \theta^2\sigma_1^2/(\lambda n) < 1$, which gives $0 < \theta < \frac{2\lambda n}{\lambda n + \sigma_1^2}$. The asymptotic convergence follows from the fact that $\|G_2(\theta)^k\|^{1/k} \to \rho_2(\theta)$. Indeed, using (17) we conclude that $\|x_k - x^*\| \le \|G_2(\theta)^k\|\,\|x_0 - x^*\|$. This means that, given $\gamma > 0$, there exists $k_0 \in \mathbb{N}$ such that $\|x_k - x^*\| \le (\rho_2(\theta) + \gamma)^k\|x_0 - x^*\|$ for all $k \ge k_0$. Finally, the optimal parameter $\theta_2^*$ and the corresponding optimal rate $\rho_2^*$ can be obtained directly by solving $\min_\theta (1-\theta)^2 + \theta^2\sigma_1^2/(\lambda n)$. □

The left picture of Figure 2 illustrates, in the complex plane, the eigenvalues of $G_2(\theta)$ (magenta squares) together with the eigenvalues of $M_2$ (blue triangles), for a fixed value of the parameter $\theta$. On the right we show, for each $\theta \in (0, 1)$, one of the two eigenvalues of $G_2(\theta)$ farthest from the origin. The dashed segment corresponds to the admissible values of $\theta$, that is, those for which the eigenvalues have modulus less than one. The square corresponds to the optimal choice of $\theta$.
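Since $M_2$ has purely imaginary eigenvalues, the eigenvalues of $G_2(\theta)$ are genuinely complex, and the spectral radius formula can again be verified numerically (formulas below are our reading of the theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))

M2 = np.block([[np.zeros((d, d)), A/(lam*n)],
               [-A.T, np.zeros((n, n))]])
sig = np.linalg.svd(A, compute_uv=False)

theta = lam*n/(lam*n + sig[0]**2)          # θ2*
G2 = (1 - theta)*np.eye(d + n) + theta*M2
rho = max(abs(np.linalg.eigvals(G2)))      # complex eigenvalues 1−θ ± θσ_i/√(λn) i

# ρ2(θ) = sqrt((1−θ)² + θ²σ1²/(λn)); at θ2* this equals σ1/sqrt(λn + σ1²)
assert np.isclose(rho, np.sqrt((1 - theta)**2 + theta**2*sig[0]**2/(lam*n)))
assert np.isclose(rho, sig[0]/np.sqrt(lam*n + sig[0]**2))
```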

Comparison of the rates
We summarize the discussion above in Table 1, which brings the comparison between the pure ($\theta = 1$) and optimal ($\theta = \theta_j^*$, $j = 1, 2$) versions of Algorithm 3.1. We can see that the convergence rate of the optimal version is $\frac{\lambda n}{2\lambda n + \sigma_1^2}$ times that of the pure version if $M_1$ is employed (Algorithm 3.2), and $\sqrt{\frac{\lambda n}{\lambda n + \sigma_1^2}}$ times that of the pure version when using $M_2$ (Algorithm 3.3). Moreover, in either case, employing $M_1$ provides faster convergence. This can be seen in Figure 3, where Algorithm 3.1 was applied to solve problem (4). The dimensions considered were $d = 200$, $m = 1$ and $n = 5000$ (so that the total dimension is $d + N = d + nm = 5200$).
We also mention that the pure version does not require the knowledge of σ 1 , but it may not converge. On the other hand, the optimal version always converges, but θ depends on σ 1 . Table 1: Ranges of convergence and convergence rates of pure and optimal versions of Algorithm 3.2 (the one that uses M 1 ), indicated by PDFP1(θ), and Algorithm 3.3 (the one that uses M 2 ), PDFP2(θ). The dimensions considered were d = 200, m = 1 and n = 5000 (so that the total dimension is d + N = d + nm = 5200). The matrix A ∈ R d×N and the vector y ∈ R N were randomly generated. For simplicity of notation we have denoted the pure and optimal versions of Algorithm 3.2 by PDFP1 and PDFP1*, respectively. Analogously, for Algorithm 3.3, we used PDFP2 and PDFP2* to denote the pure and optimal versions, respectively.

Direct relationship between the iterates of the two methods
Another relation regarding the employment of $M_1$ or $M_2$ in the pure version of Algorithm 3.1, which is also illustrated in Figure 3, is that one step of the method with $M_1$ corresponds exactly to two steps of the one with $M_2$. Indeed, note first that $M_2^2 = M_1$. Thus, denoting the current point by $x$ and the next iterate by $x^+_M$, in view of (15) we have
$$x^{++}_{M_2} = M_2(M_2 x + b_2) + b_2 = M_1 x + (M_2 + I)b_2 = M_1 x + b_1 = x^+_{M_1}.$$
In Section 4 we shall see how this behavior can be inverted with a small change in the computation of the dual variable.
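This two-for-one identity is easy to confirm numerically (a sketch on our own random instance):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))
y = rng.standard_normal(n)

M1 = -np.block([[A @ A.T, np.zeros((d, n))],
                [np.zeros((n, d)), A.T @ A]]) / (lam*n)
M2 = np.block([[np.zeros((d, d)), A/(lam*n)],
               [-A.T, np.zeros((n, n))]])
b1 = np.concatenate([A @ y/(lam*n), y])
b2 = np.concatenate([np.zeros(d), y])

x = rng.standard_normal(d + n)             # arbitrary current point
two_steps_M2 = M2 @ (M2 @ x + b2) + b2     # x++ under the pure M2 scheme
one_step_M1 = M1 @ x + b1                  # x+  under the pure M1 scheme
assert np.allclose(two_steps_M2, one_step_M1)
```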

Complexity results
In order to establish the complexity of Algorithm 3.1 we need to calculate the condition number of the objective function $f$ defined in (4). Note that the Hessian of $f$ is given by
$$\nabla^2 f = \begin{bmatrix} \frac{1}{n}AA^T + \lambda I & 0\\[2pt] 0 & \frac{1}{\lambda n^2}A^T A + \frac{1}{n}I\end{bmatrix},$$
whose eigenvalues are $(\sigma_i^2 + \lambda n)/n$ and $(\sigma_i^2 + \lambda n)/(\lambda n^2)$, for $i = 1, \ldots, p$, together with $\lambda$ and $1/n$ (associated with the null spaces of $AA^T$ and $A^T A$, respectively). Let us consider two cases: • If $\lambda n < 1$, then $\sigma_1^2 + \lambda n < \sigma_1^2 + 1 < \frac{\sigma_1^2}{\lambda n} + 1$, which in turn implies that the largest eigenvalue is $(\sigma_1^2 + \lambda n)/(\lambda n^2)$, coming from the dual block. So, assuming that $d < N$, the condition number is $\frac{\sigma_1^2 + \lambda n}{\lambda n}$. If $A$ is rank deficient, then the smallest eigenvalue is $\lambda$ and the condition number is $\frac{\sigma_1^2 + \lambda n}{\lambda^2 n^2}$. • If $\lambda n \ge 1$, the largest eigenvalue is $(\sigma_1^2 + \lambda n)/n$ and, again assuming $d < N$, the smallest is $1/n$, so that the condition number is
$$\kappa = \sigma_1^2 + \lambda n. \tag{20}$$
We stress that, although the analysis was made in terms of the sequence $x_k = (w_k, \alpha_k)$, the linear convergence also applies to objective values. Indeed, since $f$ is $L$-smooth, we have
$$f(x_k) - f(x^*) \le \frac{L}{2}\|x_k - x^*\|^2,$$
where we used that $\nabla f(x^*) = 0$ and that the optimal objective value is zero. Therefore, if we want to get $f(x_k) - f(x^*) < \varepsilon$ and we have linear convergence rate $\rho$ on the sequence $(x_k)$, then it is enough to enforce
$$k \ge \frac{\log\!\big(L\|x_0 - x^*\|^2/(2\varepsilon)\big)}{\log(1/\rho)}. \tag{22}$$
Using the estimate $\log(1-\theta) \approx -\theta$, we can approximate the right-hand side of (22) by a quantity proportional to $1/\theta_1^*$ in the case $M_1$ is used (since $\rho_1^* = 1 - \theta_1^*$), and to $2/\theta_2^*$ if we use $M_2$ (since $(\rho_2^*)^2 = 1 - \theta_2^*$). In order to estimate the above expressions in terms of the condition number, let us consider the more common case $\lambda n \ge 1$. Then the condition number of $\nabla^2 f$ is given by (20). So, if we use $M_1$, the complexity is proportional to
$$\frac{1}{\theta_1^*} = \frac{2\lambda n + \sigma_1^2}{2\lambda n} = \frac{\lambda n + \kappa}{2\lambda n}.$$
If we use $M_2$, the complexity is proportional to
$$\frac{2}{\theta_2^*} = \frac{2(\lambda n + \sigma_1^2)}{\lambda n} = \frac{2\kappa}{\lambda n}.$$
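The claimed Hessian spectrum (and hence the condition number in the case $\lambda n \ge 1$) can be verified directly on our running toy instance, where $\lambda n = 4 \ge 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1                     # here λn = 4 ≥ 1 and d < N = n
A = rng.standard_normal((d, n))

H = np.block([[A @ A.T/n + lam*np.eye(d), np.zeros((d, n))],
              [np.zeros((n, d)), A.T @ A/(lam*n**2) + np.eye(n)/n]])
eig = np.linalg.eigvalsh(H)
sig = np.linalg.svd(A, compute_uv=False)

# largest eigenvalue (σ1² + λn)/n, smallest 1/n, so κ = σ1² + λn
assert np.isclose(eig.max(), (sig[0]**2 + lam*n)/n)
assert np.isclose(eig.min(), 1/n)
assert np.isclose(eig.max()/eig.min(), sig[0]**2 + lam*n)
```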

Accelerated Primal-Dual Fixed Point Method
Now we present our main contribution. When we employ Algorithm 3.3, the primal and dual variables are mixed in two equations. More precisely, in view of (9) the iteration in this case can be rewritten as
$$w_{k+1} = (1-\theta)w_k + \frac{\theta}{\lambda n}A\alpha_k, \qquad \alpha_{k+1} = (1-\theta)\alpha_k + \theta\,(y - A^T w_k).$$
The idea here is to apply block Gauss-Seidel to this system. That is, we use the freshest $w$ to update $\alpha$. Let us state the method formally by means of the following framework.

Algorithm 4.1. Accelerated Fixed Point Method
input: matrix $A \in \mathbb{R}^{d\times N}$, vector $y \in \mathbb{R}^N$, parameter $\theta \in (0, 1]$
starting points: $w_0 \in \mathbb{R}^d$ and $\alpha_0 \in \mathbb{R}^N$
repeat for $k = 0, 1, 2, \ldots$:
$$w_{k+1} = (1-\theta)w_k + \frac{\theta}{\lambda n}A\alpha_k, \qquad \alpha_{k+1} = (1-\theta)\alpha_k + \theta\,(y - A^T w_{k+1}).$$
Due to this modification, we can achieve faster convergence. This algorithm is a deterministic version of a randomized primal-dual algorithm (Quartz) proposed and analyzed by Qu, Richtárik and Zhang [7].
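A minimal implementation of this Gauss-Seidel iteration follows (data and names are ours; the value of θ used here is our reading of the optimal parameter derived later in Theorem 4.6, stated as an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))
y = rng.standard_normal(n)

s1 = np.linalg.norm(A, 2)                  # σ1
# assumed optimal parameter: θ3* = 2√(λn)/(√(λn) + √(λn + σ1²))
theta = 2*np.sqrt(lam*n)/(np.sqrt(lam*n) + np.sqrt(lam*n + s1**2))

w, alpha = np.zeros(d), np.zeros(n)
for _ in range(500):
    w = (1 - theta)*w + theta*A @ alpha/(lam*n)       # primal update
    alpha = (1 - theta)*alpha + theta*(y - A.T @ w)   # uses the fresh w

w_star = np.linalg.solve(A @ A.T + lam*n*np.eye(d), A @ y)
assert np.allclose(w, w_star, atol=1e-8)
assert np.allclose(alpha, y - A.T @ w_star, atol=1e-7)
```

Note that each iteration only needs the two matrix-vector products $A\alpha$ and $A^T w$, the same per-iteration cost as Algorithm 3.3.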

Convergence analysis
In this section we study the convergence of Algorithm 4.1. We shall determine all values for the parameter θ for which this algorithm converges as well as the one giving the best convergence rate.
To this end, we start by showing that Algorithm 4.1 can be viewed as a fixed point scheme. Then we determine the "dynamic" spectral properties of the associated matrices, which are parameterized by θ.
First, note that the iteration of our algorithm can be written as
$$w_{k+1} = (1-\theta)w_k + \frac{\theta}{\lambda n}A\alpha_k, \qquad \alpha_{k+1} = (1-\theta)\alpha_k + \theta\,(y - A^T w_{k+1}),$$
or, in a compact way, as
$$x_{k+1} = G_3(\theta)x_k + f, \tag{28}$$
where
$$G_3(\theta) = \begin{bmatrix} (1-\theta)I & \frac{\theta}{\lambda n}A\\[2pt] -\theta(1-\theta)A^T & (1-\theta)I - \frac{\theta^2}{\lambda n}A^T A\end{bmatrix} \tag{29}$$
and $f = (0, \theta y)$.
We know that if the spectral radius of $G_3(\theta)$ is less than 1, then the sequence defined by (28) converges. Indeed, in this case the limit point is just $x^*$, the solution of problem (4). This follows from the fact that $x^* = G_3(\theta)x^* + f$.
The next lemma provides the spectrum of $G_3(\theta)$.

Lemma 4.2. The eigenvalues of $G_3(\theta)$ are $1 - \theta$, with multiplicity $q = N + d - 2p$, and, for $i = 1, \ldots, p$, the two roots of
$$\mu^2 - \left(2(1-\theta) - \frac{\theta^2\sigma_i^2}{\lambda n}\right)\mu + (1-\theta)^2 = 0.$$

Proof. Using the singular value decomposition of $A$ given in (18), $G_3(\theta)$ is orthogonally similar to a matrix whose four blocks are diagonal, built from $\Sigma$ as defined in (19). Computing its characteristic polynomial via Proposition 3.1 and denoting $q = N + d - 2p$, we obtain that it factors as $(1-\theta-\mu)^q$ times the product, over $i = 1, \ldots, p$, of the quadratics above, giving the desired result. □

Note that whenever the discriminant $\frac{\theta^2\sigma_i^2}{\lambda n}\left(\frac{\theta^2\sigma_i^2}{\lambda n} - 4(1-\theta)\right)$ is negative, the corresponding pair of eigenvalues is complex conjugate with product $(1-\theta)^2$, hence of modulus exactly $1-\theta$. Figure 4 illustrates, in the complex plane, the spectrum of the matrix $G_3(\theta)$ for several values of $\theta$. We used $n = 250$, $d = 13$, $m = 1$ (therefore $N = 250$), $\lambda = 0.3$ and a random matrix $A \in \mathbb{R}^{d\times N}$. The pictures point out the fact that for some range of $\theta$ the spectrum is contained in a circle, while for other values of $\theta$ some of the eigenvalues remain on a circle and others are distributed along the real line, moving monotonically as this parameter changes. These statements will be proved in the sequel.
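The circle structure of the spectrum can be checked numerically: below the threshold at which the leading pair of eigenvalues becomes real, every eigenvalue of $G_3(\theta)$ has modulus exactly $1-\theta$ (a sketch on our own data; the threshold formula used is our reading of $\hat\theta_1$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))

s1 = np.linalg.norm(A, 2)
# a value strictly below the assumed threshold θ̂1 = 2√(λn)/(√(λn)+√(λn+σ1²))
theta = 0.9 * 2*np.sqrt(lam*n)/(np.sqrt(lam*n) + np.sqrt(lam*n + s1**2))

Id, In = np.eye(d), np.eye(n)
G3 = np.block([[(1 - theta)*Id, theta*A/(lam*n)],
               [-theta*(1 - theta)*A.T, (1 - theta)*In - theta**2*A.T @ A/(lam*n)]])
mu = np.linalg.eigvals(G3)

# every eigenvalue lies on the circle of radius 1 − θ centered at the origin
assert np.allclose(abs(mu), 1 - theta)
```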
In what follows, let us consider the functions $\delta_j : [0,1] \to \mathbb{R}$ defined by
$$\delta_j(\theta) = \frac{\theta^2\sigma_j^2}{\lambda n} - 4(1-\theta), \qquad j = 1, \ldots, p,$$
whose signs determine whether the pairs of eigenvalues of $G_3(\theta)$ associated with the $\sigma_j$ are complex or real. The following straightforward result brings some basic properties of these functions, illustrated in Figure 5.
From Lemmas 4.3 and 4.4 we can conclude that $\hat\theta_1$ is the threshold value of $\theta$ beyond which the eigenvalues of $G_3(\theta)$ start departing from the circle of radius $1-\theta$. The next result presents the threshold beyond which all the eigenvalues are real.
Using the previous results, we can finally establish the convergence of Algorithm 4.1 (Deterministic Quartz).
Theorem 4.6. Let $w_0 \in \mathbb{R}^d$ and $\alpha_0 \in \mathbb{R}^N$ be arbitrary and consider the sequence $(w_k, \alpha_k)_{k\in\mathbb{N}}$ generated by Algorithm 4.1 with $\theta \in \left(0, \frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sigma_1}\right)$. Then the sequence $(w_k, \alpha_k)$ converges to the (unique) solution of problem (4) at an asymptotic linear rate of $\rho_3(\theta)$. Furthermore, if we choose
$$\theta_3^* := \hat\theta_1 = \frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sqrt{\lambda n + \sigma_1^2}},$$
then the (theoretical) convergence rate is optimal and it is equal to
$$\rho_3^* := 1 - \theta_3^* = \frac{\sqrt{\lambda n + \sigma_1^2} - \sqrt{\lambda n}}{\sqrt{\lambda n + \sigma_1^2} + \sqrt{\lambda n}}.$$

Proof. Since Algorithm 4.1 can be represented by (28), we need to show that $\rho(G_3(\theta))$, the spectral radius of $G_3(\theta)$, is less than 1. First, note that by Lemmas 4.3, 4.4 and 4.5, we have $\rho(G_3(\theta)) = \rho_3(\theta)$. Using Lemma 4.2 we conclude that the function $\theta \mapsto \rho_3(\theta)$ equals $1-\theta$ on $(0, \hat\theta_1]$ and is increasing on $[\hat\theta_1, \infty)$, which means that its minimum is attained at $\hat\theta_1$. To finish the proof, it is enough to prove that $\rho_3(\theta) = 1$ if and only if $\theta = \frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sigma_1}$. Note that, by Lemma 4.2, the most negative eigenvalue of $G_3(\theta)$ reaches $-1$ precisely when $(2-\theta)^2 = \theta^2\sigma_1^2/(\lambda n)$, that is, when $\theta = \frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sigma_1}$, completing the proof. □
It is worth noting that if the spectral radius of $M_1$ is less than 1, that is, if $\sigma_1^2 < \lambda n$, then Algorithms 3.2, 3.3 and 4.1 converge for any choice of $\theta \in (0, 1]$. Indeed, in this case we have
$$\frac{2\lambda n}{\lambda n + \sigma_1^2} > 1 \quad\text{and}\quad \frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sigma_1} > 1,$$
which implies that the set of admissible values for $\theta$ established in Theorems 3.4, 3.5 and 4.6 contains the whole interval $(0, 1]$.
On the other hand, if $\sigma_1^2 \ge \lambda n$, the convergence of these algorithms is more restrictive. Moreover, in this case we have
$$\frac{2\sqrt{\lambda n}}{\sqrt{\lambda n} + \sigma_1} \ge \frac{2\lambda n}{\lambda n + \sigma_1^2},$$
which means that Algorithm 4.1 admits a broader range for $\theta$ than Algorithms 3.2 and 3.3.

Complexity results
Taking into account (22), (25), the relation $\log(1-\theta) \approx -\theta$ and Theorem 4.6, we conclude that the complexity of our Accelerated Fixed Point Method, Algorithm 4.1, is proportional to
$$\frac{1}{\theta_3^*} = \frac{\sqrt{\lambda n} + \sqrt{\lambda n + \sigma_1^2}}{2\sqrt{\lambda n}}.$$
Note that in the case when $\lambda = 1/n$, as is typical in machine learning applications, we can write
$$\frac{1}{\theta_3^*} = \frac{1 + \sqrt{1 + \sigma_1^2}}{2} = \frac{1 + \sqrt{\kappa}}{2} = O(\sqrt{\kappa}).$$
This is very surprising, as it means that we are achieving the optimal accelerated Nesterov rate $\tilde{O}(\sqrt{\kappa})$.

Extensions
In this section we discuss some variants of Algorithm 4.1. The first one consists of switching the order of the computations, updating the dual variable first and then the primal one.
The second approach updates the primal variable enforcing the first relation of the optimality conditions given by (10) and using the relaxation parameter θ only to update the dual variable.

Switching the update order
This approach updates the dual variable α first and then updates the primal variable w using the new information about α. This is summarized in the following scheme.
As we shall see now, this scheme provides the same complexity results as Algorithm 4.1. To see this, note that the iteration (37) is equivalent to
$$\alpha_{k+1} = (1-\theta)\alpha_k + \theta\,(y - A^T w_k), \qquad w_{k+1} = (1-\theta)w_k + \frac{\theta}{\lambda n}A\alpha_{k+1},$$
or, in a compact way,
$$x_{k+1} = \hat G(\theta)x_k + \hat f, \qquad \hat G(\theta) = \begin{bmatrix} (1-\theta)I - \frac{\theta^2}{\lambda n}AA^T & \frac{\theta(1-\theta)}{\lambda n}A\\[2pt] -\theta A^T & (1-\theta)I\end{bmatrix}, \quad \hat f = \begin{bmatrix} \frac{\theta^2}{\lambda n}Ay\\ \theta y\end{bmatrix}.$$
It can be shown that the matrix $\hat G(\theta)$ has exactly the same spectrum as $G_3(\theta)$, defined in (29). So, the convergence result is also exactly the same as in Theorem 4.6.
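The equality of the two spectra can be checked numerically (the block form of the dual-first iteration matrix used here is our own expansion of the scheme above):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))
theta = 0.5

Id, In = np.eye(d), np.eye(n)
# primal-first iteration matrix G3(θ)
G3 = np.block([[(1 - theta)*Id, theta*A/(lam*n)],
               [-theta*(1 - theta)*A.T, (1 - theta)*In - theta**2*A.T @ A/(lam*n)]])
# dual-first variant: α updated first, then w uses the fresh α
Gh = np.block([[(1 - theta)*Id - theta**2*A @ A.T/(lam*n), theta*(1 - theta)*A/(lam*n)],
               [-theta*A.T, (1 - theta)*In]])

e3 = np.sort_complex(np.linalg.eigvals(G3))
eh = np.sort_complex(np.linalg.eigvals(Gh))
assert np.allclose(e3, eh)                 # identical spectra
```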

Maintaining primal-dual relationship
The second approach updates the primal variable enforcing the first relation of the optimality conditions given by (10) and uses the relaxation parameter θ only to update the dual variable, as described in the following scheme.
Differently from the previous case, this scheme cannot achieve accelerated convergence. Indeed, note first that the scheme (38) can be written as
$$w_{k+1} = \frac{1}{\lambda n}A\alpha_k, \qquad \alpha_{k+1} = (1-\theta)\alpha_k + \theta\,(y - A^T w_{k+1}) = \left[(1-\theta)I - \frac{\theta}{\lambda n}A^T A\right]\alpha_k + \theta y,$$
so the dual variable evolves under the iteration matrix $(1-\theta)I - \frac{\theta}{\lambda n}A^T A$. We can conclude that the eigenvalues of this matrix are exactly the same as those of $G_1(\theta)$, the iteration matrix of Algorithm 3.1 with $M_1$. So, the complexity analysis here is the same as the one established in Theorem 3.4.
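This loss of acceleration is visible in the spectrum of the dual iteration matrix, which reproduces the real eigenvalues $1-\theta-\theta\sigma_i^2/(\lambda n)$ and $1-\theta$ of $G_1(\theta)$ (a sketch on our own data):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 5, 40, 0.1
A = rng.standard_normal((d, n))
theta = 0.05

# dual iteration matrix of the scheme: α_{k+1} = [(1−θ)I − (θ/λn) AᵀA] α_k + θy
T = (1 - theta)*np.eye(n) - theta*A.T @ A/(lam*n)
eigT = np.linalg.eigvalsh(T)               # T is symmetric

sig = np.linalg.svd(A, compute_uv=False)
expected = 1 - theta - theta*sig**2/(lam*n)           # eigenvalues from the σ_i
assert np.allclose(np.sort(eigT)[:d], np.sort(expected))
assert np.allclose(eigT.max(), 1 - theta)             # null-space directions
```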

Maintaining primal-dual relationship 2
For the sake of completeness, we present next the method where we keep the second relationship intact and include $\theta$ in the first one. This leads to
$$w_{k+1} = (1-\theta)w_k + \frac{\theta}{\lambda n}A\alpha_k, \qquad \alpha_{k+1} = y - A^T w_{k+1}.$$
Here we obtain the same convergence results as the ones described in Section 5.2. In fact, substituting the second relation into the first, the primal iteration can be written as
$$w_{k+1} = \left[(1-\theta)I - \frac{\theta}{\lambda n}AA^T\right]w_k + \frac{\theta}{\lambda n}Ay.$$
We can conclude that the eigenvalues of this iteration matrix are exactly the same as those of $G_1(\theta)$, the iteration matrix of Algorithm 3.1 with $M_1$. So, the complexity analysis here is the same as the one established in Theorem 3.4.
It is worth noting that if we update the variables in the opposite order in these two schemes, we obtain iteration matrices built from $\frac{1}{\lambda n}AA^T$ and $\frac{1}{\lambda n}A^T A$, respectively. Moreover, we can conclude that they also have the same spectrum as $G_1(\theta)$. So, the complexity analysis is the same as the one established in Theorem 3.4.

Numerical Experiments
In this section we present a comparison among the methods discussed in this work. Besides a table with the convergence rates and complexity bounds, we show some numerical tests performed to illustrate the properties of Algorithms 3.2, 3.3 and 4.1, as well as of the extensions (37) and (38), applied to solve the primal-dual ridge regression problem stated in (4). We refer to Algorithm 4.1 as Quartz and to the extensions (37) and (38) as New Quartz and Modified Quartz, respectively. The name Quartz is due to the fact that Algorithm 4.1 is a deterministic version of a randomized primal-dual algorithm proposed and analyzed by Qu, Richtárik and Zhang [7].
We summarize the main features of these methods in Table 2, which brings the range of the parameter ensuring convergence, the optimal convergence rates, the complexity and the cost per iteration of each method. For instance, the two versions of Algorithm 3.1 have the same range for $\theta$. The use of $M_1$ provides a better convergence rate than $M_2$. However, it requires more calculations per iteration: the major computational tasks to be performed are the matrix-vector products $AA^T w$ and $A^T A\alpha$, while the use of $M_2$ needs only the computation of $A\alpha$ and $A^T w$.
Surprisingly, Algorithm 4.1 turned out to be the best both from the theoretical point of view and in the numerical experiments, while requiring only the computation of $A\alpha$ and $A^T w$ per iteration.
We also point out that Modified Quartz, (38), did not have here the same performance as the randomized version studied in [7].

Figure 6: Performance of the optimal versions of the algorithms, showing the objective values against the number of iterations. The dimensions considered were $d = 10$, $m = 1$ and $n = 500$. We adopted the optimal parameters associated with each method, namely $\theta_1^*$, $\theta_2^*$ and $\theta_3^*$ for Algorithms 3.2, 3.3 and 4.1, respectively, $\theta_3^*$ for the algorithm given by (37) and $\theta_1^*$ for the algorithm given by (38). These parameters are defined in Theorems 3.4, 3.5 and 4.6, and the computational cost for computing them is the same as the cost for computing $\sigma_1$, the largest singular value of $A$.

Range of θ
The left picture of Figure 7 compares Algorithms 3.2, 3.3 and 4.1, while the right one shows the performance of Algorithm 3.2 and the three variants of Quartz. We can see the equivalence between Quartz and New Quartz, and also the equivalence between Modified Quartz and Algorithm 3.2. Note that besides the advantage of QTZ* in terms of number of iterations, it does not need more arithmetic operations per iteration, as we have seen in Table 2.

Figure 7: Performance of the optimal versions of the algorithms proposed in this paper applied to solve problem (4). The pictures show the objective values against the number of iterations. The dimensions considered were $d = 10$, $m = 1$ and $n = 500$. The matrix $A \in \mathbb{R}^{d\times N}$ and the vector $y \in \mathbb{R}^N$ were randomly generated. For simplicity of notation we have denoted Algorithm 3.2 by PDFP1*, Algorithm 3.3 by PDFP2*, Algorithm 4.1 by QTZ*, and the extensions (37) (New Quartz) and (38) (Modified Quartz) by NQTZ* and MQTZ*, respectively.
Although the main goal of this work is a theoretical study of the convergence and complexity of various fixed point type methods, for the sake of completeness we present here a comparison of our methods with the classical method for solving quadratic optimization problems: the conjugate gradient algorithm (CG). Figure 8 shows the performance of the optimal versions of the algorithms proposed in this paper compared with CG, applied to solve problem (4). On the top we have plotted the objective values against the number of iterations, while the bottom pictures show the objective values against the CPU time. The numerical experiments indicate that Quartz is competitive with CG. While Quartz needs more iterations than CG to converge, it is faster in runtime. This is due to the big difference between the effort per iteration of the two algorithms: $6dN + 5d + 9N$ arithmetic operations per iteration for Quartz compared to $4d^2 + 4N^2 + 4dN + 14d + 17N$ for CG.

Figure 8: Performance of the optimal versions of the algorithms proposed in this paper compared with the conjugate gradient algorithm, applied to solve problem (4). The dimensions considered were $d = 200$, $m = 1$ and $n = 5000$. The matrix $A \in \mathbb{R}^{d\times N}$ and the vector $y \in \mathbb{R}^N$ were randomly generated. For simplicity of notation we have denoted Algorithm 3.2 by PDFP1*, Algorithm 3.3 by PDFP2*, Algorithm 4.1 by QTZ* and conjugate gradient by CG. The pictures on the top show the objective values against the number of iterations, while the bottom ones show the objective values against the CPU time. The right pictures present the results of QTZ* and CG from the left ones with the horizontal axis rescaled. Note that although QTZ* spent more iterations than CG, its computational time for solving the problem was smaller.
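For the reader who wants to reproduce a comparison of this flavor, the sketch below runs a textbook CG on the strongly convex primal system $(AA^T + \lambda n I)w = Ay$; this is a simplified stand-in for the paper's experiment (which applies CG to the full primal-dual problem (4) and measures CPU time), with our own data and tolerances:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lam = 30, 300, 1/300                 # λ = 1/n, as in the text
A = rng.standard_normal((d, n))
y = rng.standard_normal(n)

# conjugate gradient on the SPD system (AAᵀ + λn I) w = A y
K, b = A @ A.T + lam*n*np.eye(d), A @ y
w = np.zeros(d)
r = b.copy()                               # residual b − K w
p = r.copy()                               # search direction
for _ in range(500):
    Kp = K @ p
    a = (r @ r)/(p @ Kp)                   # step length
    w += a*p
    r_new = r - a*Kp
    beta = (r_new @ r_new)/(r @ r)
    p = r_new + beta*p
    r = r_new
    if np.linalg.norm(r) < 1e-9:
        break

w_star = np.linalg.solve(K, b)
assert np.allclose(w, w_star, atol=1e-8)
```

Since each CG step here costs a dense $d \times d$ product while Quartz only needs $A\alpha$ and $A^T w$, the per-iteration costs quoted in the text are the quantities to compare, not iteration counts alone.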

Conclusion
In this paper we have proposed and analyzed several algorithms for solving the ridge regression problem and its dual. We have developed a (parameterized) family of fixed point methods applied to various equivalent reformulations of the optimality conditions. We have performed a convergence analysis and obtained complexity results for these methods, revealing interesting geometrical insights into the relationship between convergence speed and the spectral properties of the iteration matrices. Our main method achieves the optimal accelerated rate of Nesterov. We have performed numerical experiments to illustrate the properties of our algorithms, as well as a comparison with the conjugate gradient algorithm. The numerical experiments indicate that our main algorithm is competitive with the conjugate gradient algorithm.