1 Introduction

We are interested in solving the following composite convex minimisation problem:

$$\begin{aligned} \min _{x \in \mathbb {R}^m} \left\{ F(x) {\mathop {=}\limits ^{{ \text{ def }}}}f(x) + g(x) {\mathop {=}\limits ^{{ \text{ def }}}}\frac{1}{n} \sum _{i=1}^n f_i(x) + g(x) \right\} . \end{aligned}$$
(1)

Throughout, we assume \(f_i : \mathbb {R}^m \rightarrow \mathbb {R}\) are convex and have L-Lipschitz continuous gradients for all i. We also assume \(g : \mathbb {R}^m \rightarrow \mathbb {R}\cup \{\infty \}\) is proper, lower semicontinuous, and \(\mu \)-strongly convex with \(\mu \ge 0\), but we do not require g to be differentiable. Problems of this form are ubiquitous in many fields, including machine learning, compressed sensing, and image processing (see, e.g., [11, 12, 25, 35]). Fundamental examples include LASSO [35] and matrix completion [12], where f is a least-squares loss and g is the \(\ell _1\) or nuclear norm, respectively, and sparse logistic regression, where f is the logistic loss and g is the \(\ell _1\) norm.
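To fix ideas, the following is a minimal Python sketch of a LASSO instance of (1); the data, dimensions, and regularisation weight are illustrative, and the helper functions (`grad_f_i`, `grad_f`, `g`) are reused in later sketches.

```python
import numpy as np

# Illustrative LASSO instance of problem (1):
#   f_i(x) = 0.5 * (a_i @ x - b_i)**2   (convex, with Lipschitz gradient),
#   g(x)   = lam * ||x||_1              (proper, lsc, convex, non-smooth).
rng = np.random.default_rng(0)
n, m, lam = 100, 20, 0.1                 # n components, dimension m (illustrative)
A = rng.standard_normal((n, m))
b_vec = rng.standard_normal(n)           # named b_vec to avoid clashing with the batch size b

def grad_f_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (A[i] @ x - b_vec[i])**2."""
    return (A[i] @ x - b_vec[i]) * A[i]

def grad_f(x):
    """Full gradient of f(x) = (1/n) * sum_i f_i(x)."""
    return A.T @ (A @ x - b_vec) / n

def g(x):
    """Non-smooth regulariser g(x) = lam * ||x||_1."""
    return lam * np.abs(x).sum()
```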

One well-studied algorithm that solves (1) is the forward-backward splitting algorithm [13, 29]. This method has a worst-case convergence rate of \(\mathcal {O}\left( 1/T \right) \) when F is not strongly convex, and when F is \(\mu \)-strongly convex, it converges linearly with a rate of \(\mathcal {O}\left( (1 + \kappa ^{-1})^{-T} \right) \), where \(\kappa {\mathop {=}\limits ^{{ \text{ def }}}}L / \mu \) is the condition number of F. The inertial forward-backward splitting algorithm [9] converges at an even faster rate of \(\mathcal {O}\left( 1/T^2 \right) \) without strong convexity and a linear rate of \(\mathcal {O}\left( (1+\kappa ^{-1/2})^{-T}\right) \) when F is strongly convex. The inertial forward-backward method is able to achieve these optimal convergence rates because it incorporates momentum, using information from previous iterates to adjust the current iterate.
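As a reference point, here is a minimal sketch of the (non-inertial) forward-backward iteration for (1), reusing `grad_f` from the LASSO sketch above; `prox_g` is a placeholder for the proximal operator of g (defined in Sect. 4).

```python
def forward_backward(x0, step, prox_g, T):
    """Forward-backward splitting: a gradient (forward) step on f
    followed by a proximal (backward) step on g."""
    x = x0.copy()
    for _ in range(T):
        x = prox_g(x - step * grad_f(x), step)
    return x
```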

Although the inertial forward-backward algorithm converges quickly, it requires access to the full gradient \(\nabla f\) at each iteration, which can be costly, for instance, when n is large. In many applications, common problem sizes are so large that computing \(\nabla f\) is prohibitively expensive. Stochastic gradient methods exploit the separable structure of f, using the gradient of a few of the components \(\nabla f_i\) to estimate the full gradient at the current iterate. In most cases, the complexity of computing \(\nabla f_i\) for one i is 1/n-times the complexity of computing the full gradient, so stochastic gradient methods generally have a much smaller per-iteration complexity than full-gradient methods. Moreover, it has recently been shown that the optimal convergence rates of stochastic gradient methods are \(\mathcal {O}\left( n/T^2 \right) \) without strong convexity and \(\mathcal {O}\left( \theta _S^{-T} \right) \) with \(\theta _S {\mathop {=}\limits ^{{ \text{ def }}}}1 + \sqrt{\frac{\mu }{L n}}\) when g is \(\mu \)-strongly convex, matching the optimal dependence on T and \(\kappa \) of full-gradient methods [37]. Stochastic gradient methods have undergone several revolutions to improve their convergence rates before achieving this lower bound. We summarise these revolutions below, beginning with traditional stochastic gradient descent.

Stochastic Gradient Descent (SGD). Stochastic gradient descent, dating back to [32], uses the gradients \(\nabla f_j, \ \forall j \in J_k \subset \{1,2,\ldots ,n\}\) to estimate the full gradient. The mini-batch \(J_k\) is an index set chosen uniformly at random from all subsets of \(\{ 1,2,\ldots ,n \}\) with cardinality \(b {\mathop {=}\limits ^{{ \text{ def }}}}|J_k|\). When \(b \ll n\), the per-iteration complexity of stochastic gradient descent is much less than full-gradient methods. However, the per-iteration savings come at the cost of a slower convergence rate, as SGD converges at a rate of \(\mathcal {O} (1/\sqrt{T})\) in the worst case. Still, SGD outperforms full-gradient methods on many problems, especially if a low-accuracy solution is acceptable.
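As a concrete illustration, a mini-batch SGD estimator in the notation above, reusing `grad_f_i` from the LASSO sketch (the helper name is ours):

```python
def sgd_estimator(x, batch, rng):
    """Mini-batch estimate of grad f(x): average of b = batch component
    gradients over an index set J_k drawn uniformly at random."""
    J = rng.choice(n, size=batch, replace=False)
    return np.mean([grad_f_i(x, j) for j in J], axis=0)
```

A full SGD iteration is then `x = x - step * sgd_estimator(x, batch, rng)`, with a diminishing step size in the convex setting.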

Variance Reduction. Variance-reduced estimators use gradient information from previous iterates to construct a better estimate of the gradient at the current step, ensuring that the mean-squared error of these estimates decreases as the iterations increase. Variance reduction improves the convergence rates of stochastic gradient methods, but variance-reduced estimators either have a higher per-iteration complexity or larger storage requirements than SGD. The two most popular variance-reduced algorithms are SVRG [21] and SAGA [15], which use the following estimators to approximate \(\nabla f(x_{k+1})\):

$$\begin{aligned} {\widetilde{\nabla }}^{\text {SVRG}}_{k+1}&{\mathop {=}\limits ^{{ \text{ def }}}}\frac{1}{b} \left( \sum _{j \in J_k} \nabla f_j(x_{k+1}) - \nabla f_j({\widetilde{x}}) \right) + \nabla f({\widetilde{x}}) \end{aligned}$$
(2)
$$\begin{aligned} {\widetilde{\nabla }}^{\text {SAGA}}_{k+1}&{\mathop {=}\limits ^{{ \text{ def }}}}\frac{1}{b} \left( \sum _{j \in J_k} \nabla f_j(x_{k+1}) - \nabla f_j(\varphi _k^j) \right) + \frac{1}{n} \sum _{i=1}^n \nabla f_i(\varphi _k^i). \end{aligned}$$
(3)

In SVRG, the full gradient \(\nabla f({\widetilde{x}})\) is computed every \(m \approx 2n\) iterations, and \(\nabla f({\widetilde{x}})\) is stored and used for future gradient estimators. SAGA takes a similar approach, storing n past stochastic gradients, and updating the stored gradients so that \(\nabla f_j(\varphi _{k+1}^j) = \nabla f_j(x_{k+1})\). In this work, we consider a variant of SVRG where the full gradient is computed at every iteration with probability \(1/p \in (0,1]\) rather than deterministically computing the full gradient every 2n iterations.
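A sketch of the two estimators (2) and (3), again reusing `grad_f_i` and `grad_f` from the LASSO sketch; the storage layout of the SAGA table is an implementation choice.

```python
def svrg_estimator(x_new, x_snap, full_grad_snap, J):
    """SVRG estimator (2): the snapshot gradient full_grad_snap = grad_f(x_snap)
    is corrected by mini-batch gradient differences."""
    corr = np.mean([grad_f_i(x_new, j) - grad_f_i(x_snap, j) for j in J], axis=0)
    return corr + full_grad_snap

def saga_estimator(x_new, table, J):
    """SAGA estimator (3): 'table' is an (n, m) array whose i-th row stores
    grad f_i(phi_k^i); rows j in J_k are refreshed to grad f_j(x_{k+1})."""
    corr = np.mean([grad_f_i(x_new, j) - table[j] for j in J], axis=0)
    est = corr + table.mean(axis=0)
    for j in J:                       # update stored gradients for j in J_k
        table[j] = grad_f_i(x_new, j)
    return est
```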

SVRG, SAGA, and related variance-reduced methods converge at a rate of \(\mathcal {O} \left( n/T \right) \) when no strong convexity is present. With strong convexity, these algorithms enjoy linear convergence, with a rate of \(\mathcal {O} ( \left( 1 + (n + \kappa )^{-1} \right) ^{-T} )\). Although these convergence rates are significantly faster than the rate of SGD, they do not match the asymptotic convergence rates of accelerated first-order methods, which converge like \(\mathcal {O}\left( n/T^2 \right) \) without strong convexity and \(\mathcal {O}( (1+(n \kappa )^{-1/2})^{-T} )\) with strong convexity.

Variance Reduction with Bias. SAGA and SVRG are unbiased gradient estimators because they satisfy \(\mathbb {E}_k{\widetilde{\nabla }}_{k+1} = \nabla f(x_{k+1})\), where \(\mathbb {E}_k\) is the expectation conditioned on the first k iterates. There are several popular variance-reduced algorithms that use biased gradient estimators [27, 33]. In [16], the authors develop a framework for proving convergence guarantees for biased methods, suggesting that the convergence rates of biased stochastic gradient estimators depend on the sum of two terms:

$$\begin{aligned} \gamma ^2 \mathbb {E}_k\Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1}) \Vert ^2 + \gamma \left\langle \nabla f(x_{k+1}) - \mathbb {E}_k{\widetilde{\nabla }}_{k+1}, x_{k+1} - x^* \right\rangle . \end{aligned}$$

These terms are the mean-squared error (MSE) of the gradient estimator and the “bias term”, respectively. The authors also show that recursive gradient estimators such as SARAH [27] and SARGE [16] minimise these terms better than other biased or unbiased estimators, leading to better convergence rates in some settings. The SARAH gradient estimator is

$$\begin{aligned} {\widetilde{\nabla }}^{\text {SARAH}}_{k+1} {\mathop {=}\limits ^{{ \text{ def }}}}{\left\{ \begin{array}{ll} \frac{1}{b} \left( \sum _{j \in J_k} \nabla f_j(x_{k+1}) - \nabla f_j(x_k) \right) + {\widetilde{\nabla }}^{\text {SARAH}}_k &{} \text {w.p. } 1 - \frac{1}{p}, \\ \nabla f(x_{k+1}) &{} \text {w.p. } \frac{1}{p}. \end{array}\right. } \end{aligned}$$
(4)

As with SVRG, we consider a slight variant of the SARAH estimator in this work, where we compute the full gradient at every step with probability 1/p. The SARGE gradient estimator is similar to the SAGA estimator:

$$\begin{aligned} {\widetilde{\nabla }}^{\text {SARGE}}_{k+1}&{\mathop {=}\limits ^{{ \text{ def }}}}\frac{1}{b} \left( \sum _{j \in J_k} \nabla f_j(x_{k+1}) - \psi _k^j \right) + \frac{1}{n} \sum _{i=1}^n \psi _k^i \nonumber \\&\quad - \left( 1 - \frac{b}{n}\right) \left( \frac{1}{b} \sum _{j \in J_k} \nabla f_j(x_k) - {\widetilde{\nabla }}^{\text {SARGE}}_k \right) , \end{aligned}$$
(5)

where the variables \(\psi _k^i\) follow the update rule \(\psi _{k+1}^j = \nabla f_j(x_{k+1}) - \left( 1 - \frac{b}{n}\right) \nabla f_j (x_k)\) for all \(j \in J_k\), and \(\psi _{k+1}^i = \psi _k^i\) otherwise. Like SAGA, SARGE uses stored gradient information to avoid having to compute the full gradient. These estimators differ from SAGA and SVRG because they are biased (i.e., \(\mathbb {E}_k{\widetilde{\nabla }}_{k+1} \not = \nabla f(x_{k+1})\)). Many works have recently shown that algorithms using the SARAH or SARGE gradient estimators achieve faster convergence rates than algorithms using other estimators in certain settings. Importantly, these recursive gradient methods produce algorithms that achieve the oracle complexity lower bound for non-convex composite optimisation [16, 17, 30, 36, 42]. They have not yet been shown to achieve optimal convergence rates for convex problems.
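For concreteness, a sketch of the SARAH estimator (4) in the variant described above (full gradient with probability 1/p), reusing the LASSO helpers; SARGE's update (5) can be implemented analogously with a stored table for the \(\psi _k^i\).

```python
def sarah_estimator(x_new, x_old, prev_est, p, J, rng):
    """SARAH estimator (4): with probability 1/p, return the full gradient;
    otherwise, update the previous estimate recursively with mini-batch
    gradient differences."""
    if rng.random() < 1.0 / p:
        return grad_f(x_new)
    corr = np.mean([grad_f_i(x_new, j) - grad_f_i(x_old, j) for j in J], axis=0)
    return corr + prev_est
```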

Variance Reduction with Negative Momentum. Starting with Katyusha [2] and followed by many others [1, 3, 4, 22, 34, 40, 41], a family of stochastic gradient algorithms has recently emerged that achieves the optimal convergence rates implied by Woodworth and Srebro [37]. There are two components to these algorithms that make this acceleration possible. First, these algorithms incorporate momentum into each iteration, either through linear coupling [6], as in the case of [1,2,3,4, 40], or in a more traditional manner reminiscent of Nesterov’s accelerated gradient descent [34, 41]. Second, these algorithms incorporate an “anchor point” into their momentum updates that supposedly softens the negative effects of bad gradient evaluations. Almost all of these algorithms are an accelerated form of SVRG with updates of the form

$$\begin{aligned} x_{k+1}&= {\widetilde{x}} + \tau _k(x_k - {\widetilde{x}}), \quad \text {or}\\ x_{k+1}&= \tau _1 z_k + \tau _2 {\widetilde{x}} + (1-\tau _1 - \tau _2) y_k, \end{aligned}$$

using traditional acceleration or linear coupling, respectively (\(z_k\) and \(y_k\) are as defined in Algorithm 1, and \(\tau _k, \tau _1, \tau _2 \in [0,1]\)). We see that these updates “attract” the current iterate toward a “safe” point, \({\widetilde{x}}\), where we know the full gradient. Because of this “attractive” rather than “repulsive” quality, updates of this type have been termed “negative momentum”.

There are several issues with negative momentum. Most importantly, negative momentum is algorithm-specific. Unlike Nesterov’s method of momentum or linear coupling, negative momentum cannot be applied to other stochastic gradient algorithms. SAGA, for example, cannot be accelerated using negative momentum of this form, because there does not exist a point \({\widetilde{x}}\) where we compute the full gradient (however, see [40]). Also, numerical experiments show that negative momentum is often unnecessary to achieve acceleration (see the discussion in [2] or Sect. 7, for example), suggesting that acceleration is possible without it.

Other Accelerated Methods. Outside of the family of algorithms using negative-momentum, there exist many stochastic gradient methods that achieve near-optimal convergence rates, including Catalyst [24] and RPDG [23]. Catalyst’s convergence rates are a logarithmic factor worse than Katyusha’s when the objective is strongly convex or smooth. RPDG achieves optimal convergence rates in the strongly convex setting, matching Katyusha’s rate. When strong convexity is not present, RPDG achieves optimal rates up to a logarithmic factor. We include further discussion of these and other related works in Sect. 3.

Contributions. In this work, we provide a framework to accelerate many stochastic gradient algorithms that does not require negative momentum. We introduce the MSEB property, a property that implies natural bounds on the bias and MSE of a gradient estimator, and we prove accelerated convergence rates for all MSEB gradient estimators. As special cases, we show that incorporating the SAGA, SVRG, SARAH, and SARGE gradient estimators into the framework of Algorithm 1 creates a stochastic gradient method with an \(\mathcal {O}\left( 1 / T^2 \right) \) convergence rate without strong convexity, and a linear convergence rate that scales with \(\sqrt{\kappa }\) when strong convexity is present, achieving the optimal convergence rates in both cases up to a factor of n depending on the bias and MSE of the estimator.

Roadmap. We introduce our algorithm and state our main result in Sect. 2. We compare our results to existing work in Sect. 3. The next four sections are devoted to proving our main results. In Sect. 4, we review elementary results on the subdifferential relation, results on the proximal operator, and lemmas from convex analysis. We prove a general inequality for accelerated stochastic gradient methods using any stochastic gradient estimator in Sect. 5. This inequality implies that many stochastic gradient methods can be accelerated using our momentum scheme; to prove an accelerated convergence rate for a specific algorithm, we only need to apply an algorithm-specific bound on the MSE and bias of the gradient estimator. We do this for the SAGA, SVRG, SARAH, and SARGE gradient estimators in Sect. 6. Finally, in Sect. 7, we demonstrate the performance of our algorithms in numerical experiments.

[Algorithm 1: the accelerated stochastic gradient framework (pseudocode figure)]

2 Algorithm and main results

The algorithm we propose is outlined in Algorithm 1. Algorithm 1 takes as input any stochastic gradient estimator \({\widetilde{\nabla }}_{k+1}\), so it can be interpreted as a framework for accelerating existing stochastic gradient methods. This algorithm incorporates momentum through linear coupling [6], but is related to Nesterov’s accelerated gradient method after rewriting \(x_{k+1}\) as follows:

$$\begin{aligned} x_{k+1} = y_k + (1-\tau _k) (y_k - y_{k-1}). \end{aligned}$$

With \(\tau _k = 1\), there is no momentum, and the momentum becomes more aggressive for smaller \(\tau _k\). Although linear coupling provides the impetus for our acceleration framework, similar acceleration schemes appear in earlier works, including Auslender and Teboulle [8] and Ghadimi and Lan [19].
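The main loop of Algorithm 1 is summarised by the update identities used in Sect. 5, namely \(x_{k+1} = \tau _k z_k + (1-\tau _k) y_k\), \(z_{k+1} = \text {prox}_{\gamma _k g}(z_k - \gamma _k {\widetilde{\nabla }}_{k+1})\), and \(y_{k+1} = \tau _k z_{k+1} + (1-\tau _k) y_k\). The following minimal sketch implements these updates, with `estimator` and `prox_g` as placeholders:

```python
def algorithm1(x0, gammas, taus, estimator, prox_g, T):
    """Sketch of Algorithm 1's main loop: linear coupling of a mirror-descent
    sequence z_k and a gradient-descent sequence y_k."""
    y = x0.copy()
    z = x0.copy()
    for k in range(T):
        x = taus[k] * z + (1.0 - taus[k]) * y      # x_{k+1} = tau_k z_k + (1 - tau_k) y_k
        v = estimator(x)                           # stochastic gradient estimate of grad f(x_{k+1})
        z = prox_g(z - gammas[k] * v, gammas[k])   # z_{k+1}: proximal (mirror) step
        y = taus[k] * z + (1.0 - taus[k]) * y      # y_{k+1} = tau_k z_{k+1} + (1 - tau_k) y_k
    return y
```

In practice, `estimator` carries its own state (a snapshot and its full gradient for SVRG, the gradient table for SAGA, or the previous estimate for SARAH), so one would close over that state or pass it explicitly.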

We show that as long as the MSE and bias of a stochastic gradient estimator satisfy certain bounds and the parameters \(\gamma _k\) and \(\tau _k\) are chosen correctly, Algorithm 1 converges at an accelerated rate. There are three principles for choosing \(\gamma _k\) and \(\tau _k\) so that Algorithm 1 achieves acceleration.

1. The step size \(\gamma _k\) should be small, roughly \(\mathcal {O}\left( 1 / n \right) \), with the exact dependence on n decreasing with larger MSE and bias of the gradient estimator.

2. On non-strongly convex objectives, the step size should grow sufficiently slowly, so that \(\gamma _k^2 \left( 1 - \rho \right) \le \gamma _{k-1}^2 \left( 1 - \frac{\rho }{2}\right) \) with \(\rho = \mathcal {O} \left( 1 / n \right) \) decreasing with larger MSE and bias.

3. The momentum should become more aggressive with smaller step sizes, with \(\tau _k = \mathcal {O}\left( \frac{1}{n \gamma _k}\right) \).

For strongly convex objectives, \(\gamma _k\) and \(\tau _k\) can be kept constant.
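The precise choices appear in Theorems 1 and 2 below; as a sketch, the schedules they prescribe are the following (constants `c`, `nu`, and `rho` as defined there):

```python
import numpy as np

def schedule_nonstrongly_convex(c, nu, L, T):
    """Theorem 1: gamma_k = (k + nu + 4) / (2 c L) and tau_k = 1 / (c L gamma_k)."""
    gammas = (np.arange(T) + nu + 4.0) / (2.0 * c * L)
    taus = 1.0 / (c * L * gammas)
    return gammas, taus

def schedule_strongly_convex(c, L, mu, rho):
    """Theorem 2: constant gamma = min(1/sqrt(mu c L), rho/(2 mu)) and tau = mu * gamma."""
    gamma = min(1.0 / np.sqrt(mu * c * L), rho / (2.0 * mu))
    return gamma, mu * gamma
```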

For Algorithm 1 to converge, the stochastic gradient estimator must have controlled bias and MSE. Specifically, we require the estimator to satisfy the MSEB property, introduced below.

Definition 1

For any sequence \(\{x_{k+1}\}\), let \({\widetilde{\nabla }}_{k+1}\) be a stochastic gradient estimator generated from the points \(\{x_{\ell +1}\}_{\ell = 0}^k\). The estimator \({\widetilde{\nabla }}_{k+1}\) satisfies the MSEB\((M_1,M_2,\rho _M,\rho _B,\rho _F)\) property if there exist constants \(M_1, M_2 \ge 0\), \(\rho _M,\rho _B,\rho _F \in (0,1]\), and sequences \(\mathcal {M}_k\) and \(\mathcal {F}_k\) satisfying

$$\begin{aligned}&\nabla f(x_{k+1}) - \mathbb {E}_k{\widetilde{\nabla }}_{k+1} = \left( 1 - \rho _B \right) \left( \nabla f(x_k) - {\widetilde{\nabla }}_k \right) ,\\&\mathbb {E} \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \le \mathcal {M}_k,\\&\mathcal {M}_k \le \frac{M_1}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i (x_{k+1}) - \nabla f_i(x_k)\Vert ^2 + \mathcal {F}_k + (1-\rho _M) \mathcal {M}_{k-1}, \end{aligned}$$

and

$$\begin{aligned} \mathcal {F}_k \le \sum _{\ell =0}^k \frac{M_2 (1-\rho _F)^{k - \ell } }{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i (x_{\ell +1}) - \nabla f_i(x_\ell )\Vert ^2. \end{aligned}$$

At a high level, the MSEB property guarantees that the bias and MSE of the gradient estimator decrease sufficiently quickly with k.

Remark 1

In [10], the authors study the convergence of unbiased stochastic gradient methods under first- and second-moment bounds on the gradient estimator. The bounds implied by the MSEB property are similar, but with the crucial difference that they are non-Markovian; we allow our bound on \(\mathcal {M}_k\) to depend on all preceding iterates, not just \(x_k\).

In this work, we show that most existing stochastic gradient estimators satisfy the MSEB property, including SAGA, SVRG, SARAH, SARGE, and the full gradient estimator. We list their associated parameters in the following propositions.

Proposition 1

The full gradient estimator \({\widetilde{\nabla }}_{k+1} = \nabla f(x_{k+1})\) satisfies the MSEB property with \(M_1 = M_2 = 0\) and \(\rho _M = \rho _B = \rho _F = 1\).

Proof

The bias and MSE of the full gradient estimator are zero, so it is clear these parameter choices satisfy the bounds in the MSEB property. \(\square \)

Although trivial, Proposition 1 allows us to show that our analysis recovers the accelerated convergence rates of the inertial forward-backward algorithm as a special case. The MSEB property applies to the SAGA and SVRG estimators non-trivially.

Proposition 2

The SAGA gradient estimator (3) satisfies the MSEB property with \(M_1 = \mathcal {O} ( n/b^2 )\), \(\rho _M = \mathcal {O} ( b/n )\), \(M_2 = 0\), and \(\rho _B = \rho _F = 1\). Setting \(p = \mathcal {O}( n / b )\), the SVRG gradient estimator (2) satisfies the MSEB property with the same parameters.

We prove Proposition 2 in “Appendix C”. We are able to choose \(\rho _B = 1\) for the SAGA and SVRG gradient estimators because they are unbiased, and we can choose \(M_2 = 0\) and \(\rho _F = 1\) for these estimators because they admit Markovian bounds on their variance. This is not true for SARAH and SARGE, but these estimators are still compatible with our framework. We prove Propositions 3 and 4 in “Appendices D and E”, respectively.

Proposition 3

Setting \(p = \mathcal {O}( n )\), the SARAH gradient estimator (4) satisfies the MSEB property with \(M_1 = \mathcal {O}(1)\), \(M_2 = 0\), \(\rho _M = \mathcal {O}( 1 / n )\), \(\rho _B = \mathcal {O}(1/n)\), and \(\rho _F = 1\).

Proposition 4

The SARGE gradient estimator (5) satisfies the MSEB property with \(M_1 = \mathcal {O}(1/n)\), \(M_2 = \mathcal {O}( 1/n^2 )\), \(\rho _M = \mathcal {O}(b/n)\), \(\rho _B = \mathcal {O}(b/n)\), and \(\rho _F = \mathcal {O} (b/n)\).

All gradient estimators satisfying the MSEB property can be accelerated using the framework of Algorithm 1, as the following two theorems guarantee.

Theorem 1

(Acceleration Without Strong Convexity) Suppose the stochastic gradient estimator \({\widetilde{\nabla }}_{k+1}\) satisfies the MSEB\((M_1,M_2,\rho _M,\rho _B,\rho _F)\) property. Define the constants

$$\begin{aligned} \varTheta _1 {\mathop {=}\limits ^{{ \text{ def }}}}1 + \frac{8 (1-\rho _B)}{\rho _B^2 \rho _M}, \quad \varTheta _2 {\mathop {=}\limits ^{{ \text{ def }}}}\frac{M_1 \rho _F + 2 M_2}{\rho _M \rho _F}, \quad \text {and} \quad \rho {\mathop {=}\limits ^{{ \text{ def }}}}\min \{ \rho _M, \rho _B, \rho _F \}. \end{aligned}$$

With

$$\begin{aligned} c \ge \max \left\{ \frac{2 \left( 1 + \sqrt{1 + 8 \varTheta _1 \varTheta _2 (2 - \rho _M + \rho _B \rho _M)} \right) }{2 - \rho _M + \rho _B \rho _M}, 16 \varTheta _1 \varTheta _2 \right\} , \end{aligned}$$

and \(\nu \ge \max \left\{ 0,\frac{2-6 \rho }{\rho } \right\} \), set \(\gamma _k = \frac{k+\nu +4}{2 c L}\) and \(\tau _k = \frac{1}{c L \gamma _k}\). After T iterations, Algorithm 1 produces a point \(y_T\) satisfying the following bound on its suboptimality:

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \frac{K_1 (\nu +2)(\nu +4)}{(T + \nu + 3)^2}, \end{aligned}$$

where

$$\begin{aligned} K_1 {\mathop {=}\limits ^{{ \text{ def }}}}F(y_0) - F(x^*) + \frac{2 c L}{(\nu +2)(\nu +4)} \Vert z_0 - x^*\Vert ^2. \end{aligned}$$

A similar result gives an accelerated linear convergence rate when strong convexity is present.

Theorem 2

(Acceleration With Strong Convexity)

Suppose the stochastic gradient estimator \({\widetilde{\nabla }}_{k+1}\) satisfies the MSEB\((M_1,M_2,\rho _M,\rho _B,\rho _F)\) property and g is \(\mu \)-strongly convex with \(\mu > 0\). With the constants \(\varTheta _1, \varTheta _2, c\), and \(\nu \) set as in Theorem 1, set \(\gamma = \min \{ \frac{1}{\sqrt{\mu c L}}, \frac{\rho }{2 \mu } \}\) and \(\tau = \mu \gamma \). After T iterations, Algorithm 1 produces a point \(z_T\) satisfying the following bound:

$$\begin{aligned} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le K_2 \left( 1+ \min \left\{ \sqrt{\frac{\mu }{L c}}, \frac{\rho }{2} \right\} \right) ^{-T}, \end{aligned}$$

where

$$\begin{aligned} K_2 {\mathop {=}\limits ^{{ \text{ def }}}}\frac{2}{\mu } \left( F(y_0) - F(x^*)\right) + \Vert z_0 - x^*\Vert ^2. \end{aligned}$$

Remark 2

Although we prove accelerated convergence rates for many popular gradient estimators, the generality of Theorems 1 and 2 allows our results to extend easily to gradient estimators not considered in this work as well. These include, for example, the gradient estimators considered in [20].

Remark 3

With some manipulation, we see that these rates with \(c = \nu = \mathcal {O}(n)\) are similar to the rates proved for Katyusha. In [2], the author shows that in the non-strongly convex case, Katyusha satisfies

$$\begin{aligned} \mathbb {E} F({\widetilde{x}}_S) - F(x^*) \le \mathcal {O} \left( \frac{F(x_0) - F(x^*)}{S^2} + \frac{L \Vert x_0 - x^*\Vert ^2}{P S^2} \right) . \end{aligned}$$

Recall that Katyusha follows the algorithmic framework of SVRG; S denotes the epoch number, \({\widetilde{x}}_S\) the point where the full gradient was computed at the beginning of epoch S, and \(P = \mathcal {O}(n)\) is the epoch length. In our notation, \(S = T/P = \mathcal {O}\left( T/n \right) \). Theorem 1 with \(c = \nu = \mathcal {O}(n)\) shows that Algorithm 1 achieves a similar convergence rate of

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \mathcal {O} \left( \frac{n^2}{T^2} \left( F(y_0) - F(x^*) + \frac{L}{n} \Vert z_0 - x^*\Vert ^2 \right) \right) . \end{aligned}$$

In the strongly convex case, an appropriately adapted version of Katyusha satisfies

$$\begin{aligned} \mathbb {E} F({\widetilde{x}}_S) - F(x^*) \le {\left\{ \begin{array}{ll} \mathcal {O} \left( \left( 1 + \frac{\sqrt{\mu }}{\sqrt{L P}} \right) ^{-S P} \right) &{} \frac{4}{3} \le \frac{\sqrt{L}}{\sqrt{\mu n}} \\ \mathcal {O} \left( \left( \frac{3}{2} \right) ^{-S} \right) &{} \frac{\sqrt{L}}{\sqrt{\mu n}} < \frac{4}{3}. \end{array}\right. } \end{aligned}$$

Similarly, with \(c = \mathcal {O}(n)\) and \(\rho = \mathcal {O}(1/n)\), Theorem 2 shows that the iterates of Algorithm 1 satisfy

$$\begin{aligned} \frac{1}{2} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le \mathcal {O} \left( \left( 1+ \min \left\{ \sqrt{\frac{\mu }{L n}}, \frac{1}{n} \right\} \right) ^{-T} \right) , \end{aligned}$$

which again matches the rate of Katyusha. Of course, not all stochastic gradient estimators satisfy the bounds necessary to set \(c = \nu = \mathcal {O}(n)\) and \(\rho = \mathcal {O}(1/n)\), so these optimal rates are conditional on being able to construct an “optimal estimator”. SAGA, SVRG, SARAH, and SARGE all require c to be slightly larger than \(\mathcal {O}(n)\).

The proofs of Theorems 1 and 2 use a linear coupling argument adapted from [6], but we use a different adaptation than the one in [2] used to prove convergence rates for Katyusha. To explain the differences between our approach and existing approaches, let us give a high-level description of linear coupling and the generalisation used in [2].

In [6], the authors suggest that gradient descent and mirror descent can be coupled to create an accelerated algorithm. We do not discuss gradient descent and mirror descent in detail (for this, see [6]), but the main idea of linear coupling can be understood from only two bounds arising from these algorithms. For the purpose of this argument, suppose \(g \equiv 0\), so that \(F \equiv f\). Gradient descent with step size \(1/\eta \) satisfies the following bound on the decrease of the objective (equation (2.1) in [6]):

$$\begin{aligned} f(x_{k+1}) \le f(x_k) - \frac{1}{\eta } \Vert \nabla f(x_{k+1})\Vert ^2. \end{aligned}$$
(6)

This bound shows that gradient descent is indeed a descent method; it is guaranteed to make progress at each iteration. The iterates of mirror descent using step size \(\gamma \) satisfy a bound on the sub-optimality of each iterate (equation (2.2) in [6]).

$$\begin{aligned} \gamma \langle \nabla f(x_k), x_k - x^* \rangle \le \frac{1}{2} \Vert x_k - x^*\Vert ^2 - \frac{1}{2} \Vert x_{k+1} - x^*\Vert ^2 + \frac{\gamma ^2}{2} \Vert \nabla f(x_k)\Vert ^2. \end{aligned}$$
(7)

While gradient descent is guaranteed to make progress proportional to \(\Vert \nabla f(x_k)\Vert ^2\) each iteration, mirror descent potentially introduces an “error” that is proportional to \(\Vert \nabla f(x_k)\Vert ^2\). Linear coupling takes advantage of this duality. Loosely speaking, by combining the sequence of iterates produced by gradient descent with the sequence produced by mirror descent, the guaranteed progress of gradient descent balances the potential error introduced by mirror descent, accelerating convergence.

This argument does not immediately hold for stochastic gradient methods. This is because in addition to the norm \(\Vert \nabla f(x_k)\Vert ^2\) arising in inequalities (6) and (7), we also get the MSE of our gradient estimator \(\Vert {\widetilde{\nabla }}_k - \nabla f(x_k)\Vert ^2\) as well as a “bias term”. In the stochastic setting, analogues of inequalities (6) and (7) read

$$\begin{aligned} f(x_{k+1})&\le f(x_k) + \langle \nabla f(x_{k+1}), x_{k+1} - x_k \rangle \\&= f(x_k) - \frac{1}{\eta } \Vert {\widetilde{\nabla }}_{k+1}\Vert ^2 + \langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, x_{k+1} - x_k \rangle \\&\le f(x_k) + \left( \frac{\epsilon }{2 \eta ^2} - \frac{1}{\eta } \right) \Vert {\widetilde{\nabla }}_{k+1}\Vert ^2 + \frac{1}{2 \epsilon } \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2, \end{aligned}$$

where the last inequality is Young’s, and

$$\begin{aligned} \gamma ( f(x_k) - f(x^*) ) \le&\, \frac{1}{2} \Vert x_k - x^*\Vert ^2 - \frac{1}{2} \Vert x_{k+1} - x^*\Vert ^2 \\&+ \gamma \left\langle \nabla f(x_k) - {\widetilde{\nabla }}_k, x_k - x^* \right\rangle + \frac{\gamma ^2}{2} \Vert {\widetilde{\nabla }}_k\Vert ^2. \end{aligned}$$

If the MSE or bias term is too large, the gradient step is no longer a descent step, and the progress does not balance the “error terms” in each of these inequalities, so we cannot expect linear coupling to offer any acceleration. This problem with the MSE and bias term exists for non-accelerated algorithms as well, and all analyses of stochastic gradient methods bound the effect of these terms, but in different ways. Katyusha and other accelerated algorithms in this family incorporate negative momentum to cancel part of the MSE. In contrast, analyses of non-accelerated algorithms do not try to cancel any of the variance, but show that the variance decreases fast enough so that it does not affect convergence rates.

3 Related work

Besides Katyusha, there are many algorithms that use negative momentum for acceleration. In [34], the authors consider an accelerated version of SVRG that combines negative momentum with Nesterov’s momentum to achieve the optimal \(\sqrt{\kappa }\) dependence in the strongly convex case. This approach to acceleration is almost the same as Katyusha, but uses a traditional form of Nesterov’s momentum instead of linear coupling. MiG [41] is another variant of these algorithms, corresponding to Katyusha with a certain parameter set to zero. VARAG is another approach to accelerated SVRG using negative momentum. VARAG achieves optimal convergence rates in the non-strongly convex and strongly convex settings under the framework of a single algorithm, and it converges linearly on problems that admit a global error bound, a quality that other algorithms have not yet been shown to possess [22].

The only direct acceleration of a SAGA-like algorithm is SSNM from [40]. Using the notation of (3), SSNM chooses a point from the set \(\{\varphi _k^i\}_{i=1}^n\) uniformly at random, and uses this point as the “anchor point” for negative-momentum acceleration. Although SSNM admits fast convergence rates, there are a few undesirable qualities of this approach. SAGA has heavy storage requirements because it must store n gradients from previous iterations, and SSNM exacerbates this storage problem by storing n points from previous iterations as well. SSNM must also compute two stochastic gradients each iteration, so its per-iteration computational cost is similar to SVRG and Katyusha, and always higher than SAGA’s.

Many algorithms for non-convex optimisation also use negative momentum for acceleration. KatyushaX [3] is a version of Katyusha adapted to optimise sum-of-non-convex objectives. To achieve its acceleration, KatyushaX uses classical momentum and a “retraction step”, which is effectively an application of negative momentum (this relationship is acknowledged in [3] as well). Natasha [1] and Natasha2 [4] are accelerated algorithms for finding stationary points of non-convex objectives. Both algorithms employ a “retraction step” that is similar to negative momentum [1].

There are also many accelerated stochastic gradient algorithms that do not use negative momentum. In [28], the author applies Nesterov’s momentum to SVRG without any sort of negative momentum, proving a linear convergence rate in the strongly convex regime. However, the proven convergence rate is suboptimal, as it implies even worse performance than SVRG when the batch size is small and worse performance than accelerated full-gradient methods when the batch size is close to n. Our results show that a particular application of Nesterov’s momentum to SVRG does provide acceleration.

Point-SAGA [14] is another SAGA-like algorithm that achieves optimal convergence rates, but point-SAGA must compute the proximal operator corresponding to F rather than the proximal operator corresponding to g. This is not possible in general, even if the proximal operator corresponding to g is easy to compute, so point-SAGA applies to a different class of functions than the class we consider in this work.

There are also many algorithms that indirectly accelerate stochastic gradient methods. This class of algorithms includes Catalyst [24], APPA [18], and the primal-dual methods in [39]. These algorithms call a variance-reduced stochastic gradient method as a subroutine, and provide acceleration using an inner-outer loop structure. These algorithms are often difficult to implement in practice due to the difficulty of solving their inner-loop subproblems, and they achieve a convergence rate that is only optimal up to a logarithmic factor.

4 Preliminaries

In this section, we present some basic definitions and results from optimisation and convex analysis. Much of our analysis involves Bregman divergences. The Bregman divergence associated with a function h is defined as

$$\begin{aligned} D^\xi _h(y,x) {\mathop {=}\limits ^{{ \text{ def }}}}h(y) - h(x) + \langle \xi , x - y \rangle , \end{aligned}$$

where \(\xi \in \partial h(x)\) and \(\partial \) is the subdifferential operator. If h is differentiable, we drop the superscript \(\xi \) as the subgradient is unique. The function h is convex if and only if \(D^\xi _h(y,x) \ge 0\) for all x and y. We say h is \(\mu \)-strongly convex with \(\mu \ge 0\) if and only if

$$\begin{aligned} \frac{\mu }{2} \Vert x - y\Vert ^2 \le D^\xi _h(y,x). \end{aligned}$$

Bregman divergences also arise in the following fundamental inequality.

Lemma 1

([26], Thm. 2.1.5) Suppose f is convex with an L-Lipschitz continuous gradient. We have for all \(x,y \in \mathbb {R}^m\),

$$\begin{aligned} \Vert \nabla f(x) - \nabla f(y)\Vert ^2 \le 2 L D_f(y,x). \end{aligned}$$

Lemma 1 is equivalent to the following result, which is more specific to our analysis due to the finite-sum structure of the smooth term in (1).

Lemma 2

Let \(f(x) = \tfrac{1}{n} \sum _{i=1}^n f_i(x)\), where each \(f_i\) is convex with an L-Lipschitz continuous gradient. Then for every \(x, y \in \mathbb {R}^m\)

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n \Vert \nabla f_i(x) - \nabla f_i(y)\Vert ^2 \le 2 L D_f(y,x). \end{aligned}$$

Proof

This follows from applying Lemma 1 to each component \(f_i\) and averaging over i, because \(\frac{1}{n} \sum _{i=1}^n D_{f_i}(y,x) = D_f(y,x)\). \(\square \)

The proximal operator is defined as

$$\begin{aligned} \text {prox}_{g} (y) = \mathop {\hbox {arg min}}\limits _{x \in \mathbb {R}^m} \left\{ \frac{1}{2} \Vert x-y\Vert ^2 + g(x) \right\} . \end{aligned}$$
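For example, when \(g = \lambda \Vert \cdot \Vert _1\), the proximal operator of \(\eta g\) has the classical soft-thresholding closed form; a minimal sketch (the weight \(\lambda \) is illustrative):

```python
import numpy as np

def prox_l1(y, eta, lam=0.1):
    """prox_{eta * g}(y) for g = lam * ||.||_1: coordinate-wise
    soft-thresholding at level eta * lam."""
    return np.sign(y) * np.maximum(np.abs(y) - eta * lam, 0.0)
```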

The proximal operator is also defined implicitly as \(y - \text {prox}_g(y) \in \partial g(\text {prox}_g(y))\). From this definition of the proximal operator, the following standard inequality is clear.

Lemma 3

Suppose g is \(\mu \)-strongly convex with \(\mu \ge 0\), and suppose \(z = \text {prox}_{\eta g} \left( x - \eta d \right) \) for some \(x, d \in \mathbb {R}^m\) and constant \(\eta \). Then, for any \(y \in \mathbb {R}^m\),

$$\begin{aligned} \eta \langle d, z - y \rangle \le \frac{1}{2} \Vert x - y\Vert ^2 - \frac{1 + \mu \eta }{2} \Vert z - y\Vert ^2 - \frac{1}{2} \Vert z - x\Vert ^2 - \eta g(z) + \eta g(y). \end{aligned}$$

Proof

By the strong convexity of g,

$$\begin{aligned} g(z) - g(y) \le \langle \xi , z - y \rangle - \frac{\mu }{2} \Vert z - y\Vert ^2 \quad \forall \xi \in \partial g(z) \end{aligned}$$

From the implicit definition of the proximal operator, we know that \(\frac{1}{\eta } (x - z) - d \in \partial g(z)\). Therefore,

$$\begin{aligned} g(z) - g(y)&\le \langle \xi , z - y \rangle - \frac{\mu }{2} \Vert z - y\Vert ^2 \\&= \frac{1}{\eta } \langle x - z - \eta d, z - y \rangle - \frac{\mu }{2} \Vert z - y\Vert ^2 \\&= - \langle d, z - y \rangle + \frac{1}{2 \eta } \Vert x - y\Vert ^2 - \frac{1 + \mu \eta }{2 \eta } \Vert z - y\Vert ^2 - \frac{1}{2 \eta } \Vert z - x\Vert ^2. \end{aligned}$$

Multiplying by \(\eta \) and rearranging yields the assertion. \(\square \)

5 The acceleration framework

To apply the linear coupling framework, we must couple stochastic analogues of (6) and (7) to construct a lower bound on the one-iteration progress of Algorithm 1.

Lemma 4

(One-Iteration Progress) The following bound describes the progress made by one iteration of Algorithm 1.

$$\begin{aligned} 0 \le&\frac{\gamma _k(1-\tau _k)}{\tau _k} F(y_k) - \frac{\gamma _k}{\tau _k} F(y_{k+1}) + \gamma _k F(x^*) + \gamma _k^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \\&+ \frac{\gamma _k}{\tau _k} \left( \frac{L}{2} - \frac{1}{4 \tau _k \gamma _k} \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 + \frac{1}{2} \Vert z_k - x^*\Vert ^2 \\&- \frac{1 + \mu \gamma _k}{2} \Vert z_{k+1} - x^*\Vert ^2 + \gamma _k \left\langle \nabla f (x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&- \frac{\gamma _k(1-\tau _k)}{\tau _k} D_f(y_k,x_{k+1}). \end{aligned}$$

Proof

We use a linear coupling argument. The extrapolated iterate \(x_{k+1}\) can be viewed as a convex combination of an iterate produced from mirror descent (namely, \(z_k\)) and one from gradient descent (\(y_k\)). This allows us to provide two bounds on the term \(f(x_{k+1}) - f(x^*)\): one is a regret bound inspired by the classical analysis of mirror descent, and the other is inspired by the traditional descent guarantee of gradient descent.

$$\begin{aligned}&\gamma _k (f(x_{k+1}) - f(x^*)) \nonumber \\&\quad {\mathop {\le }\limits ^{\textcircled {1}}} \gamma _k\langle \nabla f(x_{k+1}),x_{k+1} - x^* \rangle \nonumber \\&\quad = \gamma _k \langle \nabla f(x_{k+1}),x_{k+1} - z_k \rangle + \gamma _k \langle \nabla f(x_{k+1}), z_k - x^* \rangle \nonumber \\&\quad {\mathop {=}\limits ^{\textcircled {2}}} \frac{\gamma _k(1 - \tau _k)}{\tau _k} \langle \nabla f(x_{k+1}), y_k - x_{k+1} \rangle + \gamma _k\langle \nabla f(x_{k+1}), z_k - x^* \rangle \nonumber \\&\quad = \frac{\gamma _k(1-\tau _k)}{\tau _k} ( f(y_k) - f(x_{k+1}) ) + \gamma _k \left\langle {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \nonumber \\&\qquad - \frac{\gamma _k(1 - \tau _k)}{\tau _k} D_f(y_k,x_{k+1}) + \gamma _k \left\langle \nabla f (x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \nonumber \\&\quad = \frac{\gamma _k(1-\tau _k)}{\tau _k} ( f(y_k) - f(x_{k+1}) ) + \gamma _k \left\langle {\widetilde{\nabla }}_{k+1}, z_k - z_{k+1} \right\rangle \nonumber \\&\qquad + \gamma _k \left\langle {\widetilde{\nabla }}_{k+1}, z_{k+1} - x^* \right\rangle - \frac{\gamma _k(1 - \tau _k)}{\tau _k} D_f(y_k,x_{k+1}) \nonumber \\&\qquad + \gamma _k \left\langle \nabla f (x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \nonumber \\&\quad {\mathop {=}\limits ^{\textcircled {3}}} \frac{\gamma _k(1-\tau _k)}{\tau _k} ( f(y_k) - f(x_{k+1}) ) + \frac{\gamma _k}{\tau _k} \left\langle {\widetilde{\nabla }}_{k+1}, x_{k+1} - y_{k+1} \right\rangle \nonumber \\&\qquad + \gamma _k \left\langle {\widetilde{\nabla }}_{k+1}, z_{k+1} - x^* \right\rangle - \frac{\gamma _k(1 - \tau _k)}{\tau _k} D_f(y_k,x_{k+1}) \nonumber \\&\qquad + \gamma _k \left\langle \nabla f (x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \end{aligned}$$
(8)

Inequality \(\textcircled {1}\) uses the convexity of f, \(\textcircled {2}\) follows from the fact that \(x_{k+1} = \tau _k z_k + (1 - \tau _k) y_k\), and \(\textcircled {3}\) uses \(x_{k+1} - y_{k+1} = \tau _k (z_k - z_{k+1})\). We proceed to bound the inner product \(\langle {\widetilde{\nabla }}_{k+1}, z_{k+1} - x^* \rangle \) involving the sequence \(z_{k+1}\) using a regret bound from mirror descent, and we bound the term \(\langle {\widetilde{\nabla }}_{k+1}, x_{k+1} - y_{k+1} \rangle \) using an argument similar to the descent guarantee of gradient descent.

By Lemma 3 with \(z = z_{k+1}\), \(x = z_k\), \(y = x^*\), \(d = {\widetilde{\nabla }}_{k+1}\), and \(\eta = \gamma _k\),

$$\begin{aligned}&\gamma _k \left\langle {\widetilde{\nabla }}_{k+1}, z_{k+1} - x^* \right\rangle \nonumber \\&\quad \le \frac{1}{2 } \Vert z_k - x^*\Vert ^2 - \frac{1 + \mu \gamma _k}{2 } \Vert z_{k+1} - x^*\Vert ^2 - \frac{1}{2 } \Vert z_{k+1} - z_k\Vert ^2 \nonumber \\&\qquad - \gamma _k g(z_{k+1}) + \gamma _k g(x^*) \nonumber \\&\quad = \frac{1}{2 } \Vert z_k - x^*\Vert ^2 - \frac{1 + \mu \gamma _k}{2 } \Vert z_{k+1} - x^*\Vert ^2 - \frac{1}{2 \tau _k^2} \Vert x_{k+1} - y_{k+1}\Vert ^2 \nonumber \\&\qquad - \gamma _k g(z_{k+1}) + \gamma _k g(x^*). \end{aligned}$$
(9)

For the other term,

$$\begin{aligned}&\frac{\gamma _k}{\tau _k} \langle {\widetilde{\nabla }}_{k+1}, x_{k+1} - y_{k+1} \rangle \nonumber \\&\quad = \frac{\gamma _k}{\tau _k} \langle \nabla f(x_{k+1}), x_{k+1} - y_{k+1} \rangle + \frac{\gamma _k}{\tau _k} \langle {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1}), x_{k+1} - y_{k+1} \rangle \nonumber \\&\quad {\mathop {\le }\limits ^{\textcircled {1}}} \frac{\gamma _k}{\tau _k} \left( f(x_{k+1}) - f(y_{k+1}) \right) + \frac{\gamma _k}{\tau _k} \langle {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1}), x_{k+1} - y_{k+1} \rangle \nonumber \\&\qquad + \frac{L \gamma _k}{2 \tau _k} \Vert x_{k+1} - y_{k+1} \Vert ^2 \nonumber \\&\quad {\mathop {\le }\limits ^{\textcircled {2}}} \frac{\gamma _k}{\tau _k} \left( f(x_{k+1}) - f(y_{k+1}) \right) + \gamma _k^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \nonumber \\&\qquad + \left( \frac{L \gamma _k}{2 \tau _k} + \frac{1}{4 \tau _k^2} \right) \Vert x_{k+1} - y_{k+1} \Vert ^2 \nonumber \\&\quad = \frac{\gamma _k}{\tau _k} \left( f(x_{k+1}) - F(y_{k+1}) \right) + \gamma _k^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \nonumber \\&\qquad + \left( \frac{L \gamma _k}{2 \tau _k} + \frac{1}{4 \tau _k^2} \right) \Vert x_{k+1} - y_{k+1} \Vert ^2 + \frac{\gamma _k}{\tau _k} g(y_{k+1}) \nonumber \\&\quad {\mathop {\le }\limits ^{\textcircled {3}}} \frac{\gamma _k}{\tau _k} \left( f(x_{k+1}) - F(y_{k+1}) \right) + \gamma _k^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \nonumber \\&\qquad + \left( \frac{L \gamma _k}{2 \tau _k} + \frac{1}{4 \tau _k^2} \right) \Vert x_{k+1} - y_{k+1} \Vert ^2 + \gamma _k g(z_{k+1}) + \frac{\gamma _k (1-\tau _k)}{\tau _k} g(y_k). \end{aligned}$$
(10)

Inequality \(\textcircled {1}\) follows from the Lipschitz continuity of \(\nabla f_i\), \(\textcircled {2}\) is Young’s inequality, and \(\textcircled {3}\) uses the convexity of g and the update rule \(y_{k+1} = \tau _k z_{k+1} + (1-\tau _k) y_k\). Combining inequalities (9) and (10) with (8) and rearranging yields the assertion. \(\square \)

Lemma 4 completes the linear coupling part of our argument. If not for the MSE and bias terms, we could telescope this inequality as in [6] and prove an accelerated convergence rate. As with all analyses of stochastic gradient methods, we need useful bounds on these two quantities of the estimator.

Existing analyses of unbiased stochastic gradient methods bound the variance term by a pair of terms that telescope over several iterations, showing that the variance tends to zero with the number of iterations. It is difficult to generalise these arguments to accelerated stochastic methods because one must prove that the variance decreases at an accelerated rate that is inconsistent with existing variance bounds. In the analysis of Katyusha, negative momentum cancels part of the variance term, leaving telescoping terms that decrease at an accelerated rate. Without negative momentum, we must handle the variance term differently.

In the inequality of Lemma 4, we have two non-positive terms:

$$\begin{aligned} - \frac{1}{\tau _k^2} \Vert x_{k+1} - y_{k+1}\Vert ^2 \quad \text {and} \quad - \frac{\gamma _k (1-\tau _k)}{\tau _k} D_f(y_k, x_{k+1}). \end{aligned}$$

This makes our strategy clear: we must bound the MSE and bias terms by terms of the form \(\Vert x_{k+1} - y_{k+1}\Vert ^2\) and \(D_f(y_k,x_{k+1})\). The following two lemmas use the MSEB property to establish bounds of this form.

Lemma 5

(Bias Term Bound) Suppose the stochastic gradient estimator \({\widetilde{\nabla }}_{k+1}\) satisfies the MSEB\((M_1,M_2,\rho _M,\rho _B,\rho _F)\) property, let \(\rho = \min \{ \rho _M, \rho _B, \rho _F \}\), and let \(\{\sigma _k\}\) and \(\{s_k\}\) be any non-negative sequences satisfying \(\sigma _k s_k^2 \left( 1 - \rho \right) \le \sigma _{k-1} s^2_{k-1} \left( 1 - \frac{\rho }{2} \right) \) and \(\sigma _k \left( 1 - \rho \right) \le \sigma _{k-1} \left( 1 - \frac{\rho }{2} \right) \). The bias term can be bounded as

$$\begin{aligned}&\sum _{k=0}^{T-1} \sigma _k s_k \mathbb {E} \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&\quad \le (1-\rho _B) \sum _{k=0}^{T-1} \sigma _k \mathbb {E} \left[ \frac{8 s^2_k}{\rho _B^2 \rho _M} \left\| \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1} \right\| ^2 + \frac{\rho _M}{8 \tau _k^2} \Vert x_{k+1} - y_{k+1}\Vert ^2 \right] . \end{aligned}$$

Proof

Because \(z_k\) depends only on the first \(k-1\) iterates, we can use the MSEB property to say

$$\begin{aligned}&\sigma _k s_k \mathbb {E} \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&\quad = \sigma _k s_k \mathbb {E} \left\langle \nabla f(x_{k+1}) - \mathbb {E}_k{\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&\quad {\mathop {=}\limits ^{\textcircled {1}}} \sigma _k s_k (1-\rho _B) \mathbb {E} \left\langle \nabla f(x_k) - {\widetilde{\nabla }}_k, z_k - x^* \right\rangle \\&\quad {\mathop {=}\limits ^{\textcircled {2}}} \sigma _k (1-\rho _B) \mathbb {E} \Big [ s_k \left\langle \nabla f(x_k) - {\widetilde{\nabla }}_k, z_k - z_{k-1} \right\rangle \\&\qquad + s_k \left\langle \nabla f(x_k) - \mathbb {E}_{k-1} {\widetilde{\nabla }}_k, z_{k-1} - x^* \right\rangle \Big ] \\&\quad {\mathop {\le }\limits ^{\textcircled {3}}} \sigma _k (1-\rho _B) \mathbb {E} \Bigg [ \frac{4 s_k^2}{\rho _M \rho _B} \left\| \nabla f(x_k) - {\widetilde{\nabla }}_k \right\| ^2 + \frac{\rho _M \rho _B}{16} \Vert z_k - z_{k-1}\Vert ^2 \\&\qquad + s_k \left\langle \nabla f(x_k) - \mathbb {E}_{k-1} {\widetilde{\nabla }}_k, z_{k-1} - x^* \right\rangle \Bigg ]. \end{aligned}$$

Equality \(\textcircled {1}\) is due to the MSEB property. We are able to pass the conditional expectation into the second inner product in \(\textcircled {2}\) because \(z_{k-1}\) is independent of \({\widetilde{\nabla }}_k\) conditioned on the first \(k-2\) iterates, and inequality \(\textcircled {3}\) is Young’s. We can repeat this process once more, applying the MSEB property to obtain

$$\begin{aligned}&\sigma _k (1-\rho _B) \mathbb {E} \Bigg [ \frac{4 s_k^2}{\rho _M \rho _B} \left\| \nabla f(x_k) - {\widetilde{\nabla }}_k \right\| ^2 + \frac{\rho _M \rho _B}{16} \Vert z_k - z_{k-1}\Vert ^2 \\&\qquad + s_k (1-\rho _B) \left\langle \nabla f(x_{k-1}) - {\widetilde{\nabla }}_{k-1}, z_{k-1} - x^* \right\rangle \Bigg ] \\&\quad \le \sigma _k (1-\rho _B) \mathbb {E} \Bigg [ \frac{4 s_k^2}{\rho _M \rho _B} \left\| \nabla f(x_k) - {\widetilde{\nabla }}_k \right\| ^2 + \frac{4 s_k^2 (1 - \rho _B)}{\rho _M \rho _B} \left\| \nabla f(x_{k-1}) - {\widetilde{\nabla }}_{k-1} \right\| ^2 \\&\qquad + \frac{\rho _M \rho _B}{16} \left( \Vert z_k - z_{k-1}\Vert ^2 + (1 - \rho _B) \Vert z_{k-1} - z_{k-2}\Vert ^2 \right) \\&\qquad + s_k (1-\rho _B) \left\langle \nabla f(x_{k-1}) - {\widetilde{\nabla }}_{k-1}, z_{k-2} - x^* \right\rangle \Bigg ] \\&\quad {\mathop {\le }\limits ^{\textcircled {4}}} (1-\rho _B) \mathbb {E} \Bigg [ \frac{4 \sigma _k s_k^2}{\rho _M \rho _B} \left\| \nabla f(x_k) - {\widetilde{\nabla }}_k \right\| ^2 \\&\qquad + \frac{4 \sigma _{k-1} s_{k-1}^2 \left( 1 - \frac{\rho _B}{2} \right) }{\rho _M \rho _B} \left\| \nabla f(x_{k-1}) - {\widetilde{\nabla }}_{k-1} \right\| ^2 + \frac{\rho _M \rho _B}{16} \Big (\sigma _k \Vert z_k - z_{k-1}\Vert ^2 \\&\qquad + \sigma _{k-1} \left( 1 - \frac{\rho _B}{2} \right) \Vert z_{k-1} - z_{k-2}\Vert ^2 \Big ) \\&\qquad + \sigma _k s_k (1-\rho _B) \left\langle \nabla f(x_{k-1}) - {\widetilde{\nabla }}_{k-1}, z_{k-2} - x^* \right\rangle \Bigg ]. \end{aligned}$$

Inequality \(\textcircled {4}\) uses our hypotheses on the decrease of \(\sigma _k s_k^2\) and \(\sigma _k\). This is a recursive inequality, and expanding the recursion yields

$$\begin{aligned}&\sigma _k s_k \mathbb {E} \left\langle \nabla f(x_{k+1}) - \mathbb {E}_k{\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&\quad \le (1-\rho _B) \sum _{\ell =1}^k \sigma _\ell \mathbb {E} \Bigg [ \frac{4 s^2_\ell (1-\frac{\rho _B}{2})^{k-\ell }}{\rho _M \rho _B} \left\| \nabla f(x_\ell ) - {\widetilde{\nabla }}_\ell \right\| ^2 \\&\qquad + \frac{\rho _M \rho _B (1-\frac{\rho _B}{2})^{k-\ell }}{16} \Vert z_\ell - z_{\ell -1}\Vert ^2 \Bigg ]. \end{aligned}$$

The above uses the fact that \({\widetilde{\nabla }}_1 = \nabla f(x_1)\), so the inner product \(\langle \nabla f(x_{1}) - {\widetilde{\nabla }}_{1}, z_0 - x^* \rangle = 0\). Taking the sum over the iterations \(k=0\) to \(k=T-1\), we apply Lemma 7 to simplify this bound.

$$\begin{aligned}&\sum _{k=0}^{T-1} \sigma _k s_k \mathbb {E} \left\langle \nabla f(x_{k+1}) - \mathbb {E}_k{\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \\&\quad \le (1-\rho _B) \sum _{k=1}^{T-1} \sum _{\ell =1}^k \sigma _\ell \mathbb {E} \Big [ \frac{4 s^2_\ell (1-\frac{\rho _B}{2})^{k-\ell }}{\rho _M \rho _B} \left\| \nabla f(x_\ell ) - {\widetilde{\nabla }}_\ell \right\| ^2 \\&\qquad + \frac{\rho _M \rho _B (1-\frac{\rho _B}{2})^{k-\ell }}{16} \Vert z_\ell - z_{\ell -1}\Vert ^2 \Big ] \\&\quad {\mathop {\le }\limits ^{\textcircled {1}}} (1-\rho _B) \sum _{k=0}^{T-1} \sigma _k \mathbb {E} \left[ \frac{8 s^2_k}{\rho _B^2 \rho _M} \left\| \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1} \right\| ^2 + \frac{\rho _M}{8} \Vert z_{k+1} - z_k\Vert ^2 \right] \\&\quad {\mathop {=}\limits ^{\textcircled {2}}} (1-\rho _B) \sum _{k=0}^{T-1} \sigma _k \mathbb {E} \left[ \frac{8 s^2_k}{\rho _B^2 \rho _M} \left\| \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1} \right\| ^2 + \frac{\rho _M}{8 \tau _k^2} \Vert x_{k+1} - y_{k+1}\Vert ^2 \right] . \end{aligned}$$

Inequality \(\textcircled {1}\) follows from Lemma 7, and equality \(\textcircled {2}\) is the identity \(y_{k+1} - x_{k+1} = \tau _k (z_{k+1} - z_k)\). \(\square \)

This bound on the bias term still involves the MSE of the estimator, so to complete it, we must combine Lemma 5 with the following lemma.

Lemma 6

(MSE Bound) Suppose the stochastic gradient estimator \({\widetilde{\nabla }}_{k+1}\) satisfies the MSEB\((M_1,M_2,\rho _M,\rho _B,\rho _F)\) property, let \(\rho = \min \{ \rho _M, \rho _B, \rho _F \}\), and let \(\{s_k\}\) be any non-negative sequence satisfying \(s^2_k \left( 1 - \rho \right) \le s^2_{k-1} \left( 1 - \frac{\rho }{2} \right) \). For convenience, define \(\varTheta _2 = \frac{M_1 \rho _F + 2 M_2}{\rho _M \rho _F}\). The MSE of the gradient estimator is bounded as

$$\begin{aligned}&\sum _{k=0}^{T-1} s_k^2 \mathbb {E} \Vert \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}\Vert ^2 \\&\quad \le \sum _{k=0}^{T-1} 4 \varTheta _2 L s_k^2 \mathbb {E} \left[ 2 D_f(y_k,x_{k+1}) + L \Vert x_{k+1} - y_{k+1} \Vert ^2 \right] . \end{aligned}$$

Proof

First, we derive a bound on the sequence \(\mathcal {F}_k\) arising in the MSEB property. Taking the sum from \(k=0\) to \(k=T-1\),

$$\begin{aligned} \sum _{k=0}^{T-1} s_k^2 \mathcal {F}_k \le&\sum _{k=0}^{T-1} \sum _{\ell =0}^k \frac{M_2 s_k^2 (1-\rho _F)^{k - \ell }}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{\ell +1}) - \nabla f_i(x_\ell ) \Vert ^2 \\ {\mathop {\le }\limits ^{\textcircled {1}}}&\sum _{k=0}^{T-1} \sum _{\ell =0}^k \frac{M_2 s_\ell ^2 (1-\frac{\rho _F}{2})^{k - \ell }}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{\ell +1}) - \nabla f_i(x_\ell ) \Vert ^2 \\ {\mathop {\le }\limits ^{\textcircled {2}}}&\sum _{k=0}^{T-1} \frac{2 M_2 s_k^2}{n \rho _F} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{k+1}) - \nabla f_i(x_k) \Vert ^2. \end{aligned}$$

Inequality \(\textcircled {1}\) uses the fact that \(s_k^2 (1-\rho _F) \le s_{k-1}^2\left( 1-\frac{\rho _F}{2} \right) \), and \(\textcircled {2}\) uses Lemma 7. With this bound on \(\mathcal {F}_k\), we proceed to bound \(\mathcal {M}_k\) in a similar fashion.

$$\begin{aligned}&\sum _{k=0}^{T-1} s_k^2 \mathbb {E} \Vert \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}\Vert ^2 \\&\quad \le \sum _{k=0}^{T-1} \frac{M_1 s_k^2}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{k+1}) - \nabla f_i(x_k) \Vert ^2 + s_k^2 \mathcal {F}_k + s_k^2 (1-\rho _M) \mathcal {M}_{k-1} \\&\quad \le \sum _{k=0}^{T-1} \frac{(M_1 \rho _F + 2 M_2) s_k^2}{n \rho _F} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{k+1}) - \nabla f_i(x_k) \Vert ^2 + s_k^2 (1-\rho _M) \mathcal {M}_{k-1} \\&\quad \le \sum _{k=0}^{T-1} \sum _{\ell =1}^k \frac{\varTheta _2 s_k^2 (1-\rho _M)^{k - \ell } \rho _M}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{\ell +1}) - \nabla f_i(x_\ell ) \Vert ^2 \\&\quad {\mathop {\le }\limits ^{\textcircled {1}}} \sum _{k=0}^{T-1} \sum _{\ell =1}^k \frac{\varTheta _2 s_\ell ^2 (1-\frac{\rho _M}{2})^{k - \ell } \rho _M}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{\ell +1}) - \nabla f_i(x_\ell ) \Vert ^2 \\&\quad {\mathop {\le }\limits ^{\textcircled {2}}} \sum _{k=0}^{T-1} \frac{2 \varTheta _2 s_k^2}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{k+1}) - \nabla f_i(x_k) \Vert ^2 \\&\quad {\mathop {\le }\limits ^{\textcircled {3}}} \sum _{k=0}^{T-1} \frac{4 \varTheta _2 s_k^2}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(x_{k+1}) - \nabla f_i(y_k) \Vert ^2 \nonumber \\&\qquad + \frac{4 \varTheta _2 s_k^2}{n} \sum _{i=1}^n \mathbb {E} \Vert \nabla f_i(y_k) - \nabla f_i(x_k) \Vert ^2 \\&\quad {\mathop {\le }\limits ^{\textcircled {4}}} \sum _{k=0}^{T-1} \left( 8 \varTheta _2 L s_k^2 \mathbb {E} D_f(y_k,x_{k+1}) + 4 \varTheta _2 L^2 s_k^2 \mathbb {E} \Vert x_k - y_k \Vert ^2 \right) . \end{aligned}$$

Inequality \(\textcircled {1}\) uses \(s_k^2(1-\rho _M) \le s_{k-1}^2\left( 1-\frac{\rho _M}{2} \right) \), \(\textcircled {2}\) uses Lemma 7, \(\textcircled {3}\) uses the inequality \(\Vert a-c\Vert ^2 \le 2 \Vert a - b\Vert ^2 + 2 \Vert b - c\Vert ^2\), and \(\textcircled {4}\) uses Lemma 2 and the Lipschitz continuity of \(\nabla f_i\). \(\square \)

Lemmas 5 and 6 show that it is possible to cancel the bias term and the MSE using the non-negative terms appearing in the inequality of Lemma 4. Without these terms, we can telescope this inequality over several iterations and prove accelerated convergence rates. We are now prepared to prove Theorems 1 and 2.

Proof of Theorem 1

We set \(\mu = 0\) in the inequality of Lemma 4, apply the full expectation operator, and sum the result over the iterations \(k=0\) to \(k=T-1\).

$$\begin{aligned} 0 \le&\frac{1}{2} \Vert z_0 - x^*\Vert ^2 - \frac{1}{2} \mathbb {E} \Vert z_T - x^*\Vert ^2 + \sum _{k=0}^{T-1} \mathbb {E} \Big [ \frac{\gamma _k (1-\tau _k)}{\tau _k} F(y_k) - \frac{\gamma _k}{\tau _k} F(y_{k+1}) \\&+ \gamma _k F(x^*) + \frac{\gamma _k}{\tau _k} \left( \frac{L}{2} - \frac{1}{4 \tau _k \gamma _k} \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 - \frac{\gamma _k (1-\tau _k)}{\tau _k} D_f(y_k,x_{k+1}) \\&+ \gamma _k \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle + \gamma _k^2 \Vert \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}\Vert ^2 \Big ]. \end{aligned}$$

We bound the terms in the final line, beginning with the bias term. Our choice for \(\gamma _k\) satisfies \(\gamma _k^2 \left( 1 - \rho \right) \le \gamma _{k-1}^2 \left( 1 - \frac{\rho }{2} \right) \), so with \(s_k = \gamma _k\) and \(\sigma _k = 1\), we apply Lemma 5. This gives

$$\begin{aligned} 0 \le&\frac{1}{2} \Vert z_0 - x^*\Vert ^2 + \sum _{k=0}^{T-1} \mathbb {E} \Bigg [ \frac{\gamma _k (1-\tau _k)}{\tau _k} F(y_k) - \frac{\gamma _k}{\tau _k} F(y_{k+1}) + \gamma _k F(x^*) \\&+ \left( \frac{\gamma _k}{\tau _k} \left( \frac{L}{2} - \frac{1}{4 \tau _k \gamma _k} \right) + \frac{(1-\rho _B) \rho _M}{8 \tau _k^2} \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 \\&- \frac{\gamma _k (1-\tau _k)}{\tau _k} D_f(y_k,x_{k+1}) + \gamma _k^2 \varTheta _1 \Vert \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}\Vert ^2 \Bigg ], \end{aligned}$$

where we have dropped the term \(-1/2 \mathbb {E} \Vert z_T - x^*\Vert ^2\) because it is non-positive. Applying Lemma 6 to bound the MSE, we have

$$\begin{aligned} 0 \le&\frac{1}{2} \Vert z_0 - x^*\Vert ^2 + \sum _{k=0}^{T-1} \mathbb {E} \Big [ \frac{\gamma _k (1-\tau _k)}{\tau _k} F(y_k) - \frac{\gamma _k}{\tau _k} F(y_{k+1}) \nonumber \\&+ \gamma _k F(x^*) + \left( 8 \gamma _k^2 L \varTheta _1 \varTheta _2 - \frac{\gamma _k (1-\tau _k)}{\tau _k} \right) D_f(y_k,x_{k+1})\nonumber \\&+ \left( \frac{\rho _M (1-\rho _B)}{8 \tau _k^2} + 4 \gamma _k^2 L^2 \varTheta _1 \varTheta _2 + \frac{\gamma _k}{\tau _k} \left( \frac{L}{2} - \frac{1}{4 \tau _k \gamma _k} \right) \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 \Big ]. \end{aligned}$$
(11)

With the parameters set as in the theorem statement, it is clear that the final two lines of (11) are non-positive (see “Appendix B” for a proof). This allows us to drop these lines from the inequality, leaving

$$\begin{aligned} 0 \le&\frac{1}{2} \Vert z_0 - x^*\Vert ^2 + \sum _{k=0}^{T-1} \mathbb {E} \left[ \frac{\gamma _k (1-\tau _k)}{\tau _k} F(y_k) - \frac{\gamma _k}{\tau _k} F(y_{k+1}) + \gamma _k F(x^*) \right] . \end{aligned}$$

Rewriting \(\tau _k\) in terms of \(\gamma _k\) shows that this is equivalent to

$$\begin{aligned} 0 \le&\frac{1}{2} \Vert z_0 - x^*\Vert ^2 + \sum _{k=0}^{T-1} \mathbb {E} \left[ (c L \gamma _k^2 - \gamma _k) F(y_k) - c L \gamma _k^2 F(y_{k+1}) + \gamma _k F(x^*) \right] . \end{aligned}$$

Our choice for \(\gamma _k\) satisfies \(c L \gamma _k^2 - \gamma _k = c L \gamma _{k-1}^2 - \frac{1}{4 c L}\), allowing the \(F(y_k)\) terms to telescope. Hence, our inequality is equivalent to

$$\begin{aligned} 0&\le - c L \gamma _{T-1}^2 \mathbb {E} [F(y_T) - F(x^*)] - \frac{1}{4 c L} \sum _{k=1}^{T-1} \mathbb {E} \left[ F(y_k) - F(x^*) \right] \\&\quad + (c L \gamma _0^2 - \gamma _0) (F(y_0) - F(x^*)) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2 - \frac{1}{2} \mathbb {E} \Vert z_T - x^*\Vert ^2. \end{aligned}$$

Using the facts that \(c L \gamma _{T-1}^2 = \frac{(T + \nu + 3)^2}{4 c L}\), \(c L \gamma _0^2 - \gamma _0 = \frac{(\nu +2)(\nu +4)}{4 c L}\), and \(F(y_k) \ge F(x^*)\), and dropping the non-positive terms \(-\frac{1}{4 c L} \sum _{k=1}^{T-1} \mathbb {E} [F(y_k) - F(x^*)]\) and \(-\frac{1}{2} \mathbb {E} \Vert z_T - x^*\Vert ^2\), we have

$$\begin{aligned}&\frac{(T + \nu + 3)^2}{4 c L} \mathbb {E} \left[ F(y_T) - F(x^*) \right] \\&\quad \le \frac{(\nu +2)(\nu +4)}{4 c L} (F(y_0) - F(x^*)) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2. \end{aligned}$$

Rearranging this inequality proves the assertion.

\(\square \)
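As a quick numerical sanity check of the algebra above (not part of the proof), the telescoping and endpoint identities can be verified directly. The sketch below assumes the schedule \(\gamma _k = \frac{k + \nu + 4}{2 c L}\), which is consistent with the values of \(c L \gamma _{T-1}^2\) and \(c L \gamma _0^2 - \gamma _0\) quoted in the proof; the values of c, L, and \(\nu \) are illustrative placeholders only.

```python
# Numerical sanity check (not part of the proof) of the telescoping identity
# c*L*gamma_k^2 - gamma_k == c*L*gamma_{k-1}^2 - 1/(4*c*L)
# under the assumed schedule gamma_k = (k + nu + 4) / (2*c*L).

c, L, nu = 96.0, 2.0, 12.0   # illustrative placeholder constants

def gamma(k):
    return (k + nu + 4) / (2 * c * L)

for k in range(1, 50):
    lhs = c * L * gamma(k) ** 2 - gamma(k)
    rhs = c * L * gamma(k - 1) ** 2 - 1 / (4 * c * L)
    assert abs(lhs - rhs) < 1e-12

# Endpoint identities quoted in the proof:
T = 50
assert abs(c * L * gamma(T - 1) ** 2 - (T + nu + 3) ** 2 / (4 * c * L)) < 1e-12
assert abs(c * L * gamma(0) ** 2 - gamma(0) - (nu + 2) * (nu + 4) / (4 * c * L)) < 1e-12
print("telescoping and endpoint identities verified")
```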

A similar argument proves an accelerated linear convergence rate when strong convexity is present.

Proof of Theorem 2

We recall the inequality of Lemma 4.

$$\begin{aligned}&\frac{\gamma }{\tau } \left( F(y_{k+1}) - F(x^*) \right) + \frac{(1+\mu \gamma )}{2} \Vert z_{k+1} - x^*\Vert ^2 \\&\quad \le \frac{\gamma (1-\tau )}{\tau } \left( F(y_k) - F(x^*)\right) + \frac{1}{2} \Vert z_k - x^*\Vert ^2 + \gamma ^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \\&\qquad + \frac{\gamma }{\tau } \left( \frac{L}{2} - \frac{1}{4 \tau \gamma } \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 - \frac{\gamma (1-\tau )}{\tau } D_f(y_k,x_{k+1}) \\&\qquad + \gamma \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle . \end{aligned}$$

By our choice of \(\gamma \) and \(\tau \), we have

$$\begin{aligned} \frac{\gamma }{\tau } \left( \frac{\gamma (1 - \tau )}{\tau } \right) ^{-1} = \frac{1}{1-\tau } \ge 1 + \tau = 1 + \mu \gamma . \end{aligned}$$

Therefore, we can extract a factor of \((1+\mu \gamma )\) from the left-hand side:

$$\begin{aligned}&(1+\mu \gamma ) \Bigg ( \frac{\gamma (1-\tau )}{\tau } \left( F(y_{k+1}) - F(x^*) \right) + \frac{1}{2} \Vert z_{k+1} - x^*\Vert ^2 \Bigg ) \\&\quad \le \frac{\gamma (1-\tau )}{\tau } \left( F(y_k) - F(x^*)\right) + \frac{1}{2} \Vert z_k - x^*\Vert ^2 + \gamma ^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \\&\qquad + \frac{\gamma }{\tau } \left( \frac{L}{2} - \frac{1}{4 \tau \gamma } \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 - \frac{\gamma (1-\tau )}{\tau } D_f(y_k,x_{k+1}) \\&\qquad + \gamma \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle . \end{aligned}$$

Multiplying this inequality by \((1+\mu \gamma )^k\), summing over iterations \(k=0\) to \(k = T-1\), and applying the full expectation operator, we obtain the bound

$$\begin{aligned}&(1+\mu \gamma )^T \mathbb {E} \Bigg [ \frac{\gamma (1-\tau )}{\tau } \left( F(y_T) - F(x^*) \right) + \frac{1}{2} \Vert z_T - x^*\Vert ^2 \Bigg ] \nonumber \\&\quad \le \frac{\gamma (1-\tau )}{\tau } \left( F(y_0) - F(x^*)\right) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2 \nonumber \\&\qquad + \sum _{k=0}^{T-1} (1+\mu \gamma )^k \mathbb {E} \Big [ \gamma ^2 \Vert {\widetilde{\nabla }}_{k+1} - \nabla f(x_{k+1})\Vert ^2 \nonumber \\&\qquad + \frac{\gamma }{\tau } \left( \frac{L}{2} - \frac{1}{4 \tau \gamma } \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 \nonumber \\&\qquad - \frac{\gamma (1-\tau )}{\tau } D_f(y_k,x_{k+1}) + \gamma \left\langle \nabla f(x_{k+1}) - {\widetilde{\nabla }}_{k+1}, z_k - x^* \right\rangle \Big ]. \end{aligned}$$
(12)

As in the proof of Theorem 1, we bound the bias term and the MSE using Lemmas 5 and 6, respectively. To apply Lemma 5, we let \(\sigma _k = (1+\mu \gamma )^k\) and \(s_k = \gamma \). These choices are appropriate because \((1 + \mu \gamma )^k (1-\rho ) \le (1 + \mu \gamma )^{k-1} (1 - \frac{\rho }{2})\) due to the fact that \(\mu \gamma \le \rho /2\).

Combining these bounds with (12), we have

$$\begin{aligned}&(1+\mu \gamma )^T \mathbb {E} \Bigg [ \frac{\gamma (1-\tau )}{\tau } \left( F(y_T) - F(x^*) \right) + \frac{1}{2} \Vert z_T - x^*\Vert ^2 \Bigg ] \\&\quad \le \frac{\gamma (1-\tau )}{\tau } \left( F(y_0) - F(x^*)\right) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2 \\&\qquad + \sum _{k=0}^{T-1} (1+\mu \gamma )^k \mathbb {E} \Bigg [ \left( 8 \gamma ^2 L \varTheta _1 \varTheta _2 - \frac{\gamma (1-\tau )}{\tau } \right) D(y_k,x_{k+1}) \\&\qquad + \left( \frac{\rho _M (1-\rho _B)}{8 \tau ^2} + 4 \gamma ^2 L^2 \varTheta _1 \varTheta _2 + \frac{\gamma }{\tau } \left( \frac{L}{2} - \frac{1}{4 \tau \gamma } \right) \right) \Vert x_{k+1} - y_{k+1}\Vert ^2 \Bigg ]. \end{aligned}$$

The parameter settings in the theorem statement ensure the final two lines are non-positive (see “Appendix B” for details). This gives

$$\begin{aligned}&\frac{1}{2} \mathbb {E} \Vert z_T - x^*\Vert ^2 \\&\quad \le (1 + \mu \gamma )^{-T} \left( \frac{\gamma (1-\tau )}{\tau } \left( F(y_0) - F(x^*) \right) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2 \right) \\&\quad \le \left( 1 + \min \left\{ \sqrt{\frac{\mu }{L c}}, \frac{\rho }{2} \right\} \right) ^{-T} \left( \frac{1}{\mu } \left( F(y_0) - F(x^*)\right) + \frac{1}{2} \Vert z_0 - x^*\Vert ^2 \right) , \end{aligned}$$

which is the desired result. \(\square \)
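Both scalar inequalities underpinning this proof, the extraction of the factor \((1+\mu \gamma )\) and the weight condition required by Lemma 5, reduce to elementary one-dimensional facts. A minimal numerical check, assuming only \(\tau = \mu \gamma \le \rho /2\) and using illustrative values:

```python
# Scalar inequalities behind the proof of Theorem 2, checked on an
# illustrative grid with tau = mu*gamma <= rho/2 (values are placeholders).
rho = 0.3
for i in range(1, 101):
    tau = (rho / 2) * i / 100
    # 1/(1 - tau) >= 1 + tau: justifies extracting the factor (1 + mu*gamma).
    assert 1 / (1 - tau) >= 1 + tau
    # (1 + tau)*(1 - rho) <= 1 - rho/2: the weight condition for Lemma 5,
    # since sigma_k*(1 - rho) <= sigma_{k-1}*(1 - rho/2) with sigma_k = (1+tau)^k.
    assert (1 + tau) * (1 - rho) <= 1 - rho / 2
print("both inequalities hold on the grid")
```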

6 Convergence rates for specific estimators

In light of Theorems 1 and 2, we need only establish suitable bounds on the MSE and bias terms of a gradient estimator to prove accelerated convergence rates for Algorithm 1. We consider four variance-reduced gradient estimators: SAGA, SVRG, SARAH, and SARGE, beginning with the unbiased estimators. We defer proofs to the “Appendix”. To preserve the generality of our framework, we have not optimised the constants appearing in the presented convergence rates.

Theorem 3

(SAGA Convergence Rates) When using the SAGA gradient estimator in Algorithm 1, set \(b \le 4 \sqrt{2} n^{2/3}\), \(\gamma _k = \frac{b^3 (k+\frac{4 n}{b}+4)}{192 n^2 L}\), and \(\tau _k = \frac{b^3}{96 n^2 L \gamma _k}\). After T iterations, the suboptimality at \(y_T\) satisfies

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \frac{(\frac{4 n}{b}+2)(\frac{4 n}{b}+4) K_1}{(T + \frac{4 n}{b} + 3)^2}, \end{aligned}$$

where

$$\begin{aligned} K_1 = \left( F(y_0) - F(x^*) + \frac{192 n^2 L}{b^3(\frac{4 n}{b}+2)(\frac{4 n}{b}+4)} \Vert z_0 - x^*\Vert ^2 \right) . \end{aligned}$$

If g is \(\mu \)-strongly convex, set \(\gamma = \min \left\{ \frac{b^{3/2}}{4 n \sqrt{6 \mu L}}, \frac{b}{4 n \mu } \right\} \) and \(\tau = \mu \gamma \). After T iterations, the point \(z_T\) satisfies

$$\begin{aligned} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le \left( 1+ \min \left\{ \frac{b^{3/2} \sqrt{\mu }}{4 n \sqrt{6 L}}, \frac{b}{4 n} \right\} \right) ^{-T} K_2, \end{aligned}$$

where \(K_2\) is defined as in Theorem 2.
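For concreteness, the parameter schedule of Theorem 3 is straightforward to implement. The following is only a sketch; the function names are ours (not from any released implementation), and n, b, L, and \(\mu \) are problem-dependent inputs shown with illustrative values.

```python
import math

def saga_schedule(k, n, b, L):
    """(gamma_k, tau_k) from Theorem 3 in the convex case (sketch only)."""
    gamma_k = b**3 * (k + 4 * n / b + 4) / (192 * n**2 * L)
    tau_k = b**3 / (96 * n**2 * L * gamma_k)
    return gamma_k, tau_k

def saga_schedule_strongly_convex(n, b, L, mu):
    """Constant (gamma, tau) from Theorem 3 when g is mu-strongly convex."""
    gamma = min(b**1.5 / (4 * n * math.sqrt(6 * mu * L)), b / (4 * n * mu))
    return gamma, mu * gamma

# Illustrative inputs only: n samples, batch size b = n^(2/3), L = 1, mu = 1e-3.
n, L, mu = 10_000, 1.0, 1e-3
b = round(n ** (2 / 3))
print(saga_schedule(0, n, b, L))
print(saga_schedule_strongly_convex(n, b, L, mu))
```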

It is enlightening to compare these rates to existing convergence rates for full and stochastic gradient methods. In the non-strongly convex setting, our convergence rate is \(\mathcal {O}\left( n^2/T^2\right) \), matching that of Katyusha. As with Katyusha, this rate could be improved for SVRG using the epoch-doubling procedure in SVRG++ (see Allen-Zhu, 2018 [7] for further details). In the strongly convex case, if F is poorly conditioned so that \(L/\mu \ge \mathcal {O}( b )\), we prove linear convergence at the rate \(\mathcal {O} \left( \left( 1 + \frac{b^{3/2} \sqrt{\mu }}{n \sqrt{L}} \right) ^{-T} \right) \). With \(b = n^{2/3}\), this rate matches the convergence rate of inertial forward-backward on the same problem (i.e., the rate is independent of n), but we require only \(n^{2/3}\) stochastic gradient evaluations per iteration compared to the n evaluations that full-gradient methods require. This is reminiscent of the results of [5, 31], where the authors show that SAGA and SVRG achieve the same convergence rate as full-gradient methods on non-convex problems using only \(n^{2/3}\) stochastic gradient evaluations per iteration. This is slightly worse than the results proven for Katyusha, which requires only \(\mathcal {O}(\sqrt{n})\) stochastic gradient evaluations per iteration to match the convergence rate of full-gradient methods.

The analogous convergence guarantees for SVRG are included in Theorem 4.

Theorem 4

(SVRG Convergence Rates) When using the SVRG gradient estimator in Algorithm 1, set \(b \le 32 p^2\), \(\gamma _k = \frac{b (k+4 p+4)}{192 p^2 L}\), and \(\tau _k = \frac{b}{ 96 p^2 L \gamma _k}\). After T iterations, the suboptimality at \(y_T\) satisfies

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \frac{(4 p+2)(4 p+4) K_1}{(T + 4 p + 3)^2}, \end{aligned}$$

where

$$\begin{aligned} K_1 = \left( F(y_0) - F(x^*) + \frac{192 p^2 L}{b (4 p+2)(4 p+4)} \Vert z_0 - x^*\Vert ^2 \right) . \end{aligned}$$

If g is \(\mu \)-strongly convex, set \(\gamma = \min \left\{ \frac{\sqrt{b}}{4 p \sqrt{6 \mu L}}, \frac{1}{4 p \mu } \right\} \) and \(\tau = \mu \gamma \). After T iterations, the point \(z_T\) satisfies

$$\begin{aligned} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le \left( 1 + \min \left\{ \frac{\sqrt{b \mu }}{4 p \sqrt{6 L}}, \frac{1}{4 p} \right\} \right) ^{-T} K_2, \end{aligned}$$

where \(K_2\) is defined as in Theorem 2.

The convergence rates for SVRG are similar to the rates for SAGA if p and b are chosen appropriately. In the strongly convex case, setting \(b = p^2\) allows SVRG to match the convergence rate of full gradient methods, and the expected number of stochastic gradient evaluations per iteration is \(n/p + b\). To minimise the number of stochastic gradient evaluations while maintaining the convergence rate of full gradient methods, we set \(p = \mathcal {O}(n^{1/3})\), showing that Algorithm 1 using the SVRG gradient estimator achieves the same convergence rate as full gradient methods using only \(\mathcal {O}(n^{2/3})\) stochastic gradient evaluations per iteration.
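This trade-off is easy to explore numerically: with \(b = p^2\), the expected per-iteration cost \(n/p + b\) is minimised at \(p = (n/2)^{1/3}\), which is \(\mathcal {O}(n^{1/3})\). A minimal sketch with an illustrative value of n:

```python
# Expected stochastic-gradient evaluations per iteration for SVRG with b = p^2.
n = 10_000
costs = {p: n / p + p**2 for p in range(1, 200)}
p_best = min(costs, key=costs.get)
# The integer argmin of n/p + p^2 is near (n/2)^(1/3), the same order as n^(1/3):
print(p_best, round((n / 2) ** (1 / 3)), round(n ** (1 / 3)))
```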

Remark 4

The above discussion shows that when using the SAGA gradient estimator on a strongly convex objective with \(b = \mathcal {O}(n^{2/3})\) and \(\gamma = \mathcal {O}(1 / \sqrt{\mu L} )\), Algorithm 1 finds a point satisfying \(\mathbb {E} \Vert z_T - x^*\Vert ^2 \le \epsilon \) in \(\mathcal {O}(n^{2/3} \sqrt{\kappa } \log (1 / \epsilon ))\) iterations. This is compared to the complexity lower bound of \(\mathcal {O}(\sqrt{n \kappa } \log (1 / \epsilon ) )\) achieved by Katyusha [37]. SVRG has a similar complexity when \(b = n^{2/3}\) and \(p = n^{1/3}\).
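To make the comparison in Remark 4 concrete, the two bounds can be tabulated for sample problem sizes. The sketch below ignores absolute constants and uses illustrative values of n, \(\kappa \), and \(\epsilon \):

```python
import math

def iters_to_eps(n, kappa, eps, method="accelerated_saga"):
    """Complexity bounds from Remark 4, up to absolute constants (sketch)."""
    if method == "accelerated_saga":  # O(n^(2/3) sqrt(kappa) log(1/eps))
        return n ** (2 / 3) * math.sqrt(kappa) * math.log(1 / eps)
    # Katyusha lower bound: O(sqrt(n*kappa) log(1/eps))
    return math.sqrt(n * kappa) * math.log(1 / eps)

n, kappa, eps = 10_000, 100.0, 1e-6
print(iters_to_eps(n, kappa, eps), iters_to_eps(n, kappa, eps, "katyusha"))
```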

The SARAH gradient estimator is similar to the SVRG estimator, as both estimators require the full gradient to be computed periodically. SARAH differs from SVRG by using previous estimates of the gradient to inform future estimates. The recursive nature of the estimator seems to decrease its MSE, which can be observed in experiments and in theory [16, 27]. However, this comes at the cost of introducing bias into the estimator.
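For reference, below is a minimal sketch of the recursive SARAH update in the standard form of [27], written in our own notation; the oracle list grads and the toy quadratics are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sarah_estimate(grads, x, x_prev, v_prev, batch):
    """One recursive SARAH update (sketch; grads[i](x) returns grad f_i(x)).

    Between periodic full-gradient resets, the estimate is updated as
    v_k = (1/b) * sum_{i in batch} (grad f_i(x_k) - grad f_i(x_{k-1})) + v_{k-1}.
    """
    diff = sum(grads[i](x) - grads[i](x_prev) for i in batch) / len(batch)
    return v_prev + diff

# Toy usage on n quadratics f_i(x) = 0.5*||x - c_i||^2, so grad f_i(x) = x - c_i.
rng = np.random.default_rng(0)
cs = rng.standard_normal((5, 3))
grads = [lambda x, c=c: x - c for c in cs]
x_prev = np.zeros(3)
v_prev = sum(g(x_prev) for g in grads) / len(grads)   # full-gradient reset
x = x_prev - 0.1 * v_prev                             # some next iterate
v = sarah_estimate(grads, x, x_prev, v_prev, batch=[0, 2])
```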

Biased stochastic gradient methods are underdeveloped compared to their unbiased counterparts. Convergence proofs for biased algorithms are traditionally complex and difficult to generalise (see [33], for example), and proximal support has only recently been extended to the biased algorithms SARAH and SARGE, as well as to biased versions of SAGA and SVRG in the convex setting [16]. It is difficult to determine conclusively whether the negative effect of the bias outweighs the benefit of a lower MSE. We show that Algorithm 1 achieves accelerated rates of convergence using biased estimators as well, beginning with the SARAH estimator.

Theorem 5

(SARAH Convergence Rates) When using the SARAH gradient estimator in Algorithm 1, set \(\gamma _k = \frac{k + 2 p + 4}{288 p^4 L}\) and \(\tau _k = \frac{1}{144 p^4 L \gamma _k}\). After T iterations, the suboptimality at \(y_T\) satisfies

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \frac{(2 p + 2)(2 p + 4) K_1}{(T + 2 p + 3)^2}, \end{aligned}$$

where

$$\begin{aligned} K_1 = \left( F(y_0) - F(x^*) + \frac{288 p^4 L}{(2 p + 2)(2 p + 4)} \Vert z_0 - x^*\Vert ^2 \right) . \end{aligned}$$

If g is \(\mu \)-strongly convex, set \(\gamma = \min \left\{ \sqrt{ \frac{1}{144 p^4 \mu L}}, \frac{1}{2 p \mu } \right\} \) and \(\tau = \mu \gamma \). After T iterations, the point \(z_T\) satisfies

$$\begin{aligned} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le \left( 1 + \min \left\{ \sqrt{\frac{\mu }{144 p^4 L}}, \frac{1}{2 p} \right\} \right) ^{-T} K_2, \end{aligned}$$

where \(K_2\) is defined as in Theorem 2.

We provide a proof of this result in “Appendix D”. Theorem 5 shows that using the SARAH gradient estimator in Algorithm 1 achieves an optimal \(\mathcal {O}\left( 1 / T^2 \right) \) convergence rate on convex objectives, but with \(p = \mathcal {O}(n)\), the constant is a factor of \(n^2\) worse than it is for accelerated SAGA, SVRG, and Katyusha. In the strongly convex case, setting \(p = \mathcal {O} (n)\) and \(b = \mathcal {O}(1)\) guarantees a linear convergence rate of \(\mathcal {O} ( (1 + n^{-2} \sqrt{\mu /L} )^{-T} )\), achieving the optimal dependence on the condition number, but with a constant that is a factor of n worse than accelerated SAGA and SVRG, and a factor of \(n^{3/2}\) worse than Katyusha. Despite this dependence on n, experimental results, including those in Sect. 7 and [27], show that the SARAH gradient estimator exhibits competitive performance.

Finally, we provide convergence rates for the SARGE estimator. In [16], the authors introduce the SARGE gradient estimator to mimic the recursive nature of SARAH but trade larger storage costs for a lower average per-iteration complexity, similar to the relationship between SAGA and SVRG. We prove in “Appendix E” that SARGE satisfies the MSEB property with similar constants to SARAH, and achieves similar convergence rates as well.

Theorem 6

(SARGE Convergence Rates) Let \(c = 86016 n^4/b^4\). When using the SARGE gradient estimator in Algorithm 1, set \(\gamma _k = \frac{k+\frac{4 n}{b}+4}{2 c L}\) and \(\tau _k = \frac{1}{c L \gamma _k}\). After T iterations, the suboptimality at \(y_T\) satisfies

$$\begin{aligned} \mathbb {E} F(y_T) - F(x^*) \le \frac{2 (\frac{2 n}{b} + 1)( \frac{2 n}{b} + 2) K_1}{(T + \frac{4 n}{b} + 3)^2}, \end{aligned}$$

where

$$\begin{aligned} K_1 = \left( F(y_0) - F(x^*) + \frac{86016 n^4}{b^4 (\frac{2 n}{b} + 1)( \frac{2 n}{b} + 2)} \Vert z_0 - x^*\Vert ^2 \right) . \end{aligned}$$

If g is \(\mu \)-strongly convex, set \(\gamma = \min \left\{ \frac{1}{\sqrt{c \mu L}}, \frac{b}{4 n \mu } \right\} \) and \(\tau = \mu \gamma \). After T iterations, the point \(z_T\) satisfies

$$\begin{aligned} \mathbb {E} \Vert z_T - x^*\Vert ^2 \le \left( 1+ \min \left\{ \frac{48 b^2 \sqrt{154 \mu }}{n^2 \sqrt{L}}, \frac{b}{4 n} \right\} \right) ^{-T} K_2, \end{aligned}$$

where \(K_2\) is defined as in Theorem 2.

The convergence rates for SARGE are of the same order as the convergence rates for SARAH, even though SARGE requires fewer stochastic gradient evaluations per iteration on average.

Although our bound on the MSE of the SARAH and SARGE estimators is a factor of n smaller than our bound on the MSE of the SAGA and SVRG estimators, the analytical difficulties due to the bias lead to a worse dependence on n. Nevertheless, SARAH and SARGE are competitive in practice, as we demonstrate in the following section.

7 Numerical experiments

To test our acceleration framework, we use it to accelerate SAGA, SVRG, SARAH, and SARGE on a series of ridge regression and LASSO tasks using the binary classification data sets australian, mushrooms, phishing, and ijcnn1 from the LIBSVM database. We include Katyusha and Katyusha\(^{\text {ns}}\) for comparison as well. For SVRG and SARAH, we compare our accelerated variants, which compute the full gradient probabilistically, to the non-accelerated versions, which compute the full gradient deterministically at the beginning of each epoch.

Fig. 1 Performance comparison for solving ridge regression among different algorithms

With feature vectors \(a_i\) and labels \(y_i\) for \(i \in \{1,2,\ldots ,n\}\), ridge regression and LASSO can be written as

$$\begin{aligned} \min _{x \in \mathbb {R}^m} \quad \frac{1}{n} \sum _{i=1}^n \left( a_i^\top x - y_i\right) ^2 + \lambda R(x), \end{aligned}$$

where \(R \equiv \tfrac{1}{2} \Vert \cdot \Vert ^2\) for ridge regression and \(R \equiv \Vert \cdot \Vert _1\) for LASSO. Letting \(g \equiv \lambda R\), it is clear that g is \(\lambda \)-strongly convex for ridge regression and not strongly convex for LASSO. In all our experiments, we rescale the data to \([-1, 1]\). For ridge regression, we set \(\lambda = 1 / n\), and for LASSO, we set \(\lambda = 1 / \sqrt{n}\).
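As an illustration only (this is not the authors' experiment code), the two regularisers, their proximal operators, and the objective can be set up as follows; prox_l1, prox_ridge, and full_objective are hypothetical helper names. Soft-thresholding is the standard proximal operator of the \(\ell _1\) norm, and the ridge proximal operator is a rescaling.

```python
import numpy as np

def prox_l1(x, step, lam):
    """prox of step * lam * ||.||_1 at x: soft-thresholding (LASSO)."""
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

def prox_ridge(x, step, lam):
    """prox of step * (lam/2) * ||.||^2 at x: a simple rescaling (ridge)."""
    return x / (1.0 + step * lam)

def full_objective(A, y, x, lam, ridge=True):
    """F(x) = (1/n) * sum_i (a_i^T x - y_i)^2 + lam * R(x)."""
    n = A.shape[0]
    f = np.sum((A @ x - y) ** 2) / n
    g = 0.5 * lam * np.dot(x, x) if ridge else lam * np.abs(x).sum()
    return f + g
```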

For accurate comparisons, we automate all our parameter tuning. For our experiments using ridge regression, we select the step size and momentum parameters from the set \(\{ 1 / t : t \in \mathbb {N} \}\). For LASSO, we use the parameters suggested by Theorem 1, but we scale the step size by a constant \(s \in \mathbb {N}\), and we rescale the momentum parameter so that \(\tau _0 = 1 / 2\). We perform the same parameter-tuning procedure for Katyusha, and set the negative momentum parameter \(\tau _2 = 1 / 2\) as suggested in [2] unless otherwise stated. In our accelerated variants of SVRG and SARAH, we set \(p = 2 n\), so that the full gradient is computed with probability \(\frac{1}{2 n}\) at each iteration, and for the non-accelerated variants and Katyusha, we set the epoch length to 2n. We use a batch size of \(b=1\) for all algorithms.

Fig. 2 Performance comparison for solving LASSO among different algorithms. In Katyusha, the negative momentum parameters \(\tau _2 = 0, \frac{1}{2}\) are not tuned

We measure performance with respect to the suboptimality \(F(x_{k+1}) - F(x^*)\), where \(x^*\) is a high-accuracy solution computed using the forward-backward algorithm. To fairly compare algorithms that require different numbers of stochastic gradient evaluations per iteration, we report performance against the number of effective full-gradient computations, counting n stochastic gradient evaluations as one full-gradient computation. By this metric, SAGA performs 1/n effective full-gradient computations per iteration, while SVRG performs an average of \(\frac{2}{n} + \frac{1}{2 n}\), for example.
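This cost accounting can be made precise with a short helper. The sketch below reflects the assumptions stated above (b = 1, two component-gradient evaluations per sampled index for SVRG, epoch length 2n); the function name is ours.

```python
def effective_full_gradients_per_iter(n, b=1, epoch_length=None, per_sample_evals=1):
    """Average stochastic-gradient evaluations per iteration, in units of
    full gradients (sketch of the accounting used in the plots).

    per_sample_evals: component-gradient evaluations per sampled index
    (1 for SAGA, 2 for SVRG/SARAH); epoch_length: iterations between full
    gradients (None for estimators that never compute the full gradient).
    """
    cost = per_sample_evals * b / n
    if epoch_length is not None:
        cost += 1 / epoch_length          # amortised full-gradient cost
    return cost

n = 10_000
print(effective_full_gradients_per_iter(n))                   # SAGA: 1/n
print(effective_full_gradients_per_iter(n, epoch_length=2 * n,
                                        per_sample_evals=2))  # SVRG: 2/n + 1/(2n)
```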

Figures 1 and 2 display the median of 100 trials of ridge regression and LASSO, respectively. We observe the following trends:

  • Acceleration without negative momentum significantly improves the performance of SAGA, SVRG, SARAH, and SARGE in most cases. The improvement is least dramatic on the smallest data set, australian, and slightly less dramatic for the biased algorithms, SARAH and SARGE.

  • Because they use only one stochastic gradient evaluation per iteration, SAGA and Accelerated SAGA require significantly less computation to achieve the same accuracy as other methods.

  • In the strongly convex setting, Katyusha performs similarly to or better than SVRG with acceleration in most cases.

  • In the non-strongly convex setting, Katyusha\(^{\text {ns}}\) performs much worse than other methods when using negative momentum. Without negative momentum, it performs much better than all algorithms except Accelerated SAGA. Because Katyusha without negative momentum is almost exactly the same algorithm as Accelerated SVRG, this improved performance is likely due to the second proximal step and the additional step size \(\eta \) in Katyusha. All of the algorithms presented in this work can adopt these features without changing their convergence rates.

8 Conclusion

Although acceleration is a widely used and extensively researched technique in first-order optimisation, its application to stochastic gradient methods is still poorly understood. The introduction of negative momentum adds another layer of complexity to this line of research. Although algorithms using negative momentum enjoy fast convergence rates and strong performance when the parameters are tuned appropriately, it is unclear if negative momentum is necessary for acceleration. In this work, we propose a universal framework for accelerating stochastic gradient methods that does not rely on negative momentum.

Because our approach does not rely on negative momentum, it applies to a much broader class of stochastic gradient estimators. As long as the estimator admits natural bounds on its bias and MSE, it can be used in our framework to produce an accelerated stochastic gradient method with an optimal \(1/T^2\) dependence on convex problems and an optimal \(\sqrt{\kappa }\) dependence in the strongly convex setting. The bias and MSE of the estimator appear only in the constants of our convergence rates. From this perspective, negative momentum is effectively a variance-reduction technique, reducing the variance in the iterates to improve the dependence on n in the convergence rates. A natural question for future research is whether there exist gradient estimators with smaller bias and MSE than SAGA, SVRG, SARAH, and SARGE that can be accelerated using our framework and admit a better dependence on n.