Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
Abstract
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e. where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most ε (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of Õ(1/ε²) and only Õ(1/ε²) iterations. This contrasts with general stochastic convex optimization, where Ω(1/ε⁴) iterations are needed (Amir et al., 2021b). The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. But instead of uniform convergence in a norm ball, which we show can guarantee only suboptimal learning using Θ(1/ε⁴) samples, we rely on uniform convergence in a distribution-dependent ball.
1 Introduction
The recent success of learning using deep networks, with many more parameters than training points, and even without any explicit regularization, has brought back interest in how biases and “implicit regularization” due to the optimization method used, can ensure good generalization, even in situations where minimizing the optimization objective itself cannot Neyshabur et al. (2015). Alongside interest in understanding the algorithmic biases of optimization in non-convex, deep models, and how they can yield good generalization (e.g. Gunasekar et al., 2018c, b; Li et al., 2018; Nacson et al., 2019; Arora et al., 2019; Lyu and Li, 2019; Woodworth et al., 2020; Moroshko et al., 2020; Chizat and Bach, 2020; Blanc et al., 2020; HaoChen et al., 2021; Razin et al., 2021; Pesme et al., 2021), there has also been renewed interest in understanding the fundamentals of algorithmic regularization in convex models (Shalev-Shwartz et al., 2009; Feldman, 2016; Amir et al., 2021a, b; Sekhari et al., 2021; Dauber et al., 2020), both as an interesting and important setting in its own right, and even more so as a basis for understanding the situation in more complex models (how can we hope to understand phenomena in deep, non-convex models, if we can’t even understand them in convex models?). In particular, these fundamental questions include the relationship between algorithmic regularization, explicit regularization, uniform convergence and stability; and the importance of stochasticity and early stopping.
In this paper we focus on the algorithmic bias of deterministic (full batch) optimization methods, and in particular of full-batch gradient descent (i.e. gradient descent on the empirical risk, or “training error”). Even when gradient descent (GD) is run to convergence, it affords some bias that can ensure generalization. E.g. in linear regression (and even slightly more general settings) where interpolation is possible, it can be shown to converge to the minimum norm interpolating solution. This can be sufficient for generalization even in underdetermined situations where other interpolators (i.e. other minimizers of the optimization objective) would not generalize well. Indeed, the generalization ability of the minimum norm interpolating solution, and hence of GD, even in noisy situations, is the subject of much study. And in parallel, there has also been work going beyond GD on linear regression, characterizing the limit point of GD, or of other optimization methods such as Mirror Descent, steepest descent w.r.t. a norm, coordinate descent and AdaBoost, for different types of loss functions (Telgarsky, 2013; Soudry et al., 2018; Gunasekar et al., 2018a; Ji and Telgarsky, 2019; Li et al., 2019; Ji et al., 2020a; Shamir, 2020; Ji et al., 2020b; Vaskevicius et al., 2020).
But what about situations where interpolation, or completely minimizing the empirical risk, is not desirable, and generalization requires compromising on the empirical risk? In such cases one can consider early stopping, i.e. running GD only for some specific number of iterations. Indeed, early stopped GD is a common regularization approach in practice, and other learning approaches, most prominently Boosting, can also be viewed as early stopping of an optimization procedure (coordinate descent in the case of Boosting). When and how can such early stopped GD allow for generalization? How does this compare to Stochastic Gradient Descent (SGD), or to using explicit regularization, both in terms of generalization ability and the number of required optimization iterations? And what tools, such as uniform convergence, distribution-dependent uniform convergence, and stability, are appropriate for studying the generalization ability of GD?
Our main result
We will show that when training a linear predictor with a convex Lipschitz loss (or more generally, for stochastic convex optimization with a generalized linear instantaneous objective), GD with early stopping can generalize optimally, up to logarithmic factors, with optimal sample complexity n = Õ(1/ε²), and with an optimal number of iterations T = Õ(1/ε²) (the same as stochastic gradient descent), even without any explicit regularization, and in particular without projections onto a norm ball—just unconstrained GD on the training error. This contrasts with previous results regarding early stopped GD for arbitrary stochastic convex optimization (not necessarily generalized linear), for which Amir et al. (2021b) showed that Ω(1/ε⁴) GD iterations are needed (even if projections or explicit regularization are used).
Stability vs Uniform Convergence for SGD and GD
The important difference here, and the only property of generalized linear models (GLMs) that we rely on, is that GLMs satisfy uniform convergence (Bartlett and Mendelson, 2002): empirical losses converge to their expectations uniformly over all predictors in Euclidean balls. This is in contrast to general Stochastic Convex Optimization (SCO), for which no such uniform convergence is possible (Shalev-Shwartz et al., 2009). Consequently, rather than uniform convergence, the analysis for SCO is based on stability arguments (Bassily et al., 2020). For SGD, stability at each step can be used in an online analysis, combined with an online-to-batch conversion, which is sufficient for ensuring optimal generalization with an optimal number of iterations. But for GD, we must consider the stability of the method as a whole, which leads to optimal rates in the case of smooth losses (Hardt et al., 2016) but is otherwise much worse, necessitating a smaller stepsize, and hence quadratically more iterations—Amir et al. (2021b) showed that this is not just an analysis artifact, but a real limitation of GD for general SCO. In this paper, we show that once generalization can be ensured via uniform convergence, e.g. for GLMs, then GD does not have to worry about stability, can take much larger step sizes, and generalizes optimally after only Õ(1/ε²) iterations.
But we must be careful with how we ensure uniform convergence! To rely on uniform convergence, we need to ensure the output of GD lies in some ball, and the generalization error would then scale with the radius of the ball. A naïve approach would be to ensure that the output of GD lies in a norm ball around the origin, and rely on uniform convergence in this ball. In Section 4 we show that this approach can ensure generalization, but only with suboptimal sample complexity of Θ(1/ε⁴)—that is, worse than with the stability-based approach. Instead, we show that, with high probability, the output of GD lies in a small (constant radius) distribution-dependent ball, centered not at the origin, but around the (distribution-dependent) output of GD on the population objective. Even though this ball is unknown to the algorithm, this is still sufficient for generalization. The situation here is similar to the notion of algorithm-dependent uniform convergence introduced by Nagarajan and Kolter (2019), though we should emphasize that here we show that algorithm-dependent uniform convergence is sufficient for optimal generalization.
Context and Insights
Our results complement recent results exposing gaps between generalization using stochastic optimization versus explicit regularization or deterministic optimization in stochastic convex optimization. Sekhari et al. (2021) showed that for a setting that is slightly weaker (where only the population loss is convex, but instantaneous losses can be non-convex), there can be large gaps between SGD versus GD or explicit regularization: even though SGD can learn with the optimal sample complexity of O(1/ε²), explicit regularization, in the form of regularized empirical risk minimization, cannot learn at all, even with arbitrarily many samples, and GD requires a suboptimally large number of samples (it is not clear whether this sample complexity is sufficient for GD, or even whether GD can ensure learning at all). This highlights that the generalization ability of SGD cannot be understood in terms of mimicking some regularizer, as well as gaps between stochastic and deterministic optimization.
Returning to the strict SCO setting, where the instantaneous losses are convex: still, uniform convergence does not hold, and generalization can only be ensured via algorithm-dependent bounds. Nevertheless, optimal generalization can be ensured either through explicit regularization, SGD or GD—all three approaches can ensure learning with O(1/ε²) samples (Shalev-Shwartz et al., 2009; Nemirovski and Yudin, 1983; Bassily et al., 2020). Even so, differences between deterministic and stochastic methods still exist. Dauber et al. (2020) show that in SCO, the output of SGD cannot even be guaranteed to lie in some “small” distribution-dependent set of (approximate) empirical risk minimizers (“smallness” here refers to a size notion that measures how well an empirical risk minimizer over the set generalizes; see Dauber et al. (2020) for the definition of a statistically complex set), and so the generalization ability of SGD in this setting cannot be attributed to any “regularizer” being small. Dauber et al. also show that GD does not follow some distribution-independent regularization path. In contrast, here we show that if we do take the distribution into account, then GD is constrained to follow (at least approximately) a predetermined path (i.e. a deterministic path that is independent of the sample, but does depend on the distribution). Finally, there is also a gap between SGD and GD in this case, in the required number of iterations (Amir et al., 2021b). Surprisingly, this gap cannot be fixed by adding regularization to the objective (Amir et al., 2021a).
Importantly, for either weak or strict SCO, regularization in the form of constraining the norm of the predictor (i.e. empirical risk minimization inside the hypothesis class we are competing with), is not sufficient for learning (Shalev-Shwartz et al., 2009; Feldman, 2016). The failure of constrained ERM is critical here for the constructions of Amir et al.; Dauber et al. and Sekhari et al. establishing the gaps above: since projected gradient descent would quickly converge to the constrained ERM, it would also generalize just as well, and the gaps and constructions are valid also for projected gradient descent.
In this paper we turn to the GLM setting, which is perhaps one of the most well-studied frameworks in the learning theory literature, as it captures fundamental problems such as logistic regression, SVMs and many more. Uniform convergence for GLMs (Bartlett and Mendelson, 2002; Shalev-Shwartz and Ben-David, 2014; Kakade et al., 2008) ensures that constrained ERM learns with optimal sample complexity, and hence so would projected GD with O(1/ε²) iterations. Interestingly, GD on a Tikhonov-type regularized objective similarly yields optimal learning with only O(1/ε²) iterations (Sridharan et al., 2008), indicating the lower bounds of Amir et al. on the iteration complexity do not hold in this setting. But both projected GD and GD on the Tikhonov-regularized objective are forms of explicit regularization, and so we study unregularized, unprojected GD. Our results show that (a) when uniform convergence holds, both the sample complexity and iteration complexity gaps between stochastic and deterministic optimization disappear (though stochastic optimization is of course still much more computationally efficient); (b) perhaps surprisingly, distribution-dependent uniform convergence is not only able to explain the generalization of the output of GD, but it can do so better than stability; and (c) distribution-independent uniform convergence (i.e. uniform convergence inside a fixed, distribution-independent class) can also explain generalization, even though the norm of the GD output could become very large, although the resulting guarantee appears to be suboptimal.
2 Problem Setup and Background
We study the problem of stochastic optimization from i.i.d. data samples. For that purpose, we consider the standard setting of stochastic convex optimization. A learning problem consists of a family of loss functions f(w; z) defined over a fixed data domain z ∈ Z and parameterized by vectors w in a parameter space W (a Euclidean or Hilbert space). For each B > 0 we denote the norm-bounded domain W_B = { w ∈ W : ‖w‖ ≤ B }.
Our underlying assumption is that for each z ∈ Z the function f(·; z) is convex and L-Lipschitz with respect to its first argument w. In this setting, a learner is provided with a sample S = (z_1, …, z_n) of n i.i.d. examples drawn from an unknown distribution D over Z. The goal of the learner is to optimize the population risk, defined as
F(w) = E_{z∼D}[ f(w; z) ].
More formally, an algorithm A is said to learn the class W_B (in expectation) with sample complexity n(ε) if, given an i.i.d. sample S of size n ≥ n(ε), it returns a solution w_S = A(S) such that
E_S[ F(w_S) ] − min_{w ∈ W_B} F(w) ≤ ε.
A prevalent approach in stochastic optimization is to consider, and optimize, the empirical risk over the sample S:
F̂_S(w) = (1/n) ∑_{i=1}^n f(w; z_i).    (1)
2.1 Gradient Descent (without projections).
A concrete and prominent way of minimizing the empirical risk in Eq. 1 is with Gradient Descent (GD). GD is an iterative algorithm that runs for T iterations and has the following update rule, which depends on a learning rate η:
w_{t+1} = w_t − η ∇F̂_S(w_t),    (2)
where w_1 = 0 and ∇F̂_S(w_t) is a (sub)gradient of F̂_S at the point w_t. The output of GD is normally taken to be the average iterate,
w̄_S = (1/T) ∑_{t=1}^T w_t.    (3)
Throughout the paper we consider the aforementioned output in Eq. 3, though we remark that our results also extend to any reasonable averaging scheme (including prevalent schemes such as tail-averaging or a random choice of iterate).
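To make the procedure concrete, here is a minimal sketch (in Python/NumPy; the subgradient oracle and the hinge-loss example are our own illustrative choices, not code from the paper) of unprojected full-batch GD with the averaged output of Eqs. 2 and 3:

```python
import numpy as np

def full_batch_gd(subgrad, sample, eta, T, dim):
    """Unprojected GD on the empirical risk (Eq. 2), returning the average iterate (Eq. 3).

    subgrad(w, z) -- a subgradient of f(w; z) with respect to w
    sample        -- the examples z_1, ..., z_n
    eta, T        -- learning rate and number of iterations
    """
    w = np.zeros(dim)              # standard initialization at the origin
    avg = np.zeros(dim)
    for _ in range(T):
        avg += w / T               # accumulate the average of w_1, ..., w_T
        # full-batch (empirical-risk) subgradient: average over the whole sample
        g = np.mean([subgrad(w, z) for z in sample], axis=0)
        w = w - eta * g            # note: no projection onto any norm ball
    return avg

# Illustrative generalized linear loss: the hinge loss f(w; (x, y)) = max(0, 1 - y<w, x>).
def hinge_subgrad(w, z):
    x, y = z
    return -y * x if y * np.dot(w, x) < 1 else np.zeros_like(x)
```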
It is well known (e.g. Shalev-Shwartz and Ben-David (2014)) that the output of GD with standard initialization at the origin enjoys the following guarantee over the empirical risk, for every comparator w*:
F̂_S(w̄_S) − F̂_S(w*) ≤ ‖w*‖²/(2ηT) + ηL²/2.    (4)
In contrast, as far as the population risk is concerned, it was recently shown by Amir et al. (2021b) that, for a sufficiently large dimension, GD suffers from the following suboptimal rate (the result in Amir et al. (2021b) is formulated for projected GD; however, as the authors note, the proof holds verbatim for the unprojected version):
E_S[ F(w̄_S) ] − min_{‖w‖≤1} F(w) = Ω( min{1, η√T} + 1/(ηT) ).    (5)
In particular, setting for concreteness T = n, a choice of η = Θ(1/√n) leads to O(1/√n) error over the empirical risk, but is susceptible to overfitting on the population. In fact, it turns out that Θ(n²) iterations are necessary and sufficient (Bassily et al., 2020) for GD (with or without projections) to obtain O(1/√n) population error.
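To see where the quadratic iteration count comes from, here is an informal balancing calculation (our own back-of-the-envelope sketch, with the Lipschitz constant and numerical constants suppressed), combining the stability-based generalization bound of Bassily et al. (2020) with the optimization guarantee in Eq. 4:
excess population error ≲ η√T + ηT/n (stability) + 1/(ηT) + η (optimization).
Requiring both η√T ≲ 1/√n and 1/(ηT) ≲ 1/√n forces η ≲ 1/√(nT) together with ηT ≳ √n, hence √T ≳ n, i.e. T ≳ n² iterations.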
2.2 Generalized Linear Models.
One important class of convex learning problems is the class of generalized linear models (GLMs). In a GLM problem, the instantaneous loss takes the following generalized linear form:
f(w; z) = ℓ( ⟨w, φ(z)⟩ ; z ),    (6)
where, for every z, ℓ(·; z) is convex and L-Lipschitz w.r.t. its first (scalar) argument, and φ is an embedding of the domain Z into some norm-bounded set in a linear space (potentially infinite-dimensional).
As we mostly care about norm-bounded solutions, we will assume for concreteness that ‖φ(z)‖ ≤ 1 for all z. We also treat the value of the loss at zero as a constant, namely |ℓ(0; z)| = O(1). In turn, the function f(·; z) is L-Lipschitz as well, and we obtain by Lipschitzness that
f(w; z) ≤ O(1) + L‖w‖  for all w.    (7)
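As a concrete instance of Eq. 6 (a standard example this framework is meant to capture), consider binary classification with the hinge loss, where z = (x, y) with ‖x‖ ≤ 1 and y ∈ {−1, +1}:
f(w; (x, y)) = max(0, 1 − y⟨w, x⟩) = ℓ(⟨w, φ(z)⟩; z),  with ℓ(a; (x, y)) = max(0, 1 − y·a) and φ((x, y)) = x,
so that ℓ(·; z) is convex and 1-Lipschitz and ℓ(0; z) = 1.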
Uniform Convergence of GLMs.
One desirable property of GLMs is that, in contrast with general convex functions, they enjoy dimension-independent uniform convergence bounds. This, potentially, allows us to circumvent the bound in Eq. 5. In more detail, a seminal result due to Bartlett and Mendelson (2002) shows that, under our restrictions on ℓ and φ, the empirical risk converges uniformly to the population risk as follows, for any B > 0:
E_S[ sup_{‖w‖ ≤ B} ( F(w) − F̂_S(w) ) ] ≤ O( BL/√n ).    (8)
By a similar reasoning, one can show a stronger property for GLMs, which we will need: uniform convergence on any ball, not necessarily centered around zero. More formally, for every c ∈ W and ρ > 0 let us denote the ball B_ρ(c) = { w ∈ W : ‖w − c‖ ≤ ρ }.
We can then bound the expected generalization error over such a ball as follows:
Lemma 2.1.
Suppose ℓ is L-Lipschitz w.r.t. its first argument and φ is an embedding into a linear space such that ‖φ(z)‖ ≤ 1. Then, for f as in Eq. 6, we have for any c ∈ W and ρ > 0:
E_S[ sup_{w ∈ B_ρ(c)} ( F(w) − F̂_S(w) ) ] ≤ O( Lρ/√n ).
The proof is essentially the same as the proof of Eq. 8, and exploits the Rademacher complexity of the class. For the sake of completeness we repeat it in Appendix B.
Eq. 8, and more generally Lemma 2.1, imply that any algorithm whose range is restricted to a ball of constant radius, and that has an optimization guarantee of
F̂_S(w_S) − min_{‖w‖≤1} F̂_S(w) ≤ O(1/√n),
will also obtain an O(1/√n) convergence rate for the excess population risk. For example, projected GD is an algorithm that enjoys the aforementioned generalization bound.
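For contrast with the unprojected method analyzed in this paper, a minimal sketch of such a projected variant (projection onto the Euclidean ball of radius B; the helper names are ours) could look as follows:

```python
import numpy as np

def project_to_ball(w, radius):
    """Euclidean projection onto {w : ||w|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def projected_gd(empirical_subgrad, eta, T, dim, radius=1.0):
    """Projected GD on the empirical risk: every iterate stays inside the radius-B ball,
    so the uniform convergence bound of Eq. 8 applies to its output directly."""
    w = np.zeros(dim)
    avg = np.zeros(dim)
    for _ in range(T):
        avg += w / T
        w = project_to_ball(w - eta * empirical_subgrad(w), radius)
    return avg
```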
3 Main Results
Theorem 3.1.
Thus, Theorem 3.1 shows that using GD, in the case of GLMs, we can learn the class W_1 of unit-norm predictors with sample complexity n = Õ(1/ε²) and number of iterations T = Õ(1/ε²).
Note that an improved rate can be obtained if we tune the learning rate to the norm of the comparator, but this requires prior knowledge of the bounded domain radius. The bound we formulate above is for a learning rate that is oblivious to the norm of the optimal choice w*.
The key technical tool in proving Theorem 3.1 is the following structural result that characterizes the output of GD over the empirical risk, and may be of independent interest:
Theorem 3.2.
Let S = (z_1, …, z_n) be an i.i.d. sequence drawn from some unknown distribution D. Assume that f(·; z) is a convex and L-Lipschitz function, for all z, with respect to its first argument. Then, given the distribution D there exists a sequence w̃_1, …, w̃_T that depends on D (but is independent of the sample S), such that, for any δ > 0, with probability at least 1 − δ over S, the iterates of gradient descent, as depicted in Eq. 2, satisfy:
For example, we can set η = 1/√T and T = n; then with probability at least 1 − δ:
As such, for the natural choice of learning rate and iteration budget, we obtain that the iterates of GD remain in the vicinity of a deterministic trajectory, predetermined by the distribution to be learned. Our following result shows that this bound is tight up to logarithmic factors. We refer the reader to Appendix C for the full proof.
Theorem 3.3.
Fix n and T. For any sequence w̃_1, …, w̃_T independent of the sample S, there exists a convex and L-Lipschitz function f and a distribution D over Z, such that, with constant probability over S, the iterates of gradient descent, as depicted in Eq. 2, satisfy:
3.1 High Probability Rates
We next move to discuss results on learnability with high probability. We first remark that, using Markov's inequality and standard techniques to boost the confidence, one can achieve high-probability rates at a computational and sample cost that is only logarithmic in the confidence parameter.
If we want, though, to achieve high-probability rates for the algorithm without alterations, the standard approach requires concentration bounds which normally rely on boundedness of the predictor. This, unfortunately, is not guaranteed when we run GD without projection, as the predictor can be potentially unbounded.
However, under natural structural assumptions that are often met by the types of losses we are usually interested in, we can achieve such boundedness by a clipping procedure, which we describe next.
For this section we consider a loss function ℓ of the generalized linear form in Eq. 6, and we assume in addition that:
Note that this is a very natural assumption to have in the case of convex surrogates for prediction tasks. For example, this holds for the widely used hinge loss in binary classification. Observe that under this assumption, if we consider ⟨w, φ(z)⟩ as a predictor of the label, the learner has no incentive to return a prediction that lies outside of the interval [−1, 1].
Thus, we define the following clipping mapping
π(u) = max{ −1, min{ 1, u } },    (9)
and consider the clipped solution, in which the prediction ⟨w̄_S, φ(z)⟩ is replaced by π(⟨w̄_S, φ(z)⟩), where w̄_S is the original output of GD defined in Eq. 3. We can now present our high probability result.
Theorem 3.4.
Suppose D is a distribution over Z and let S ∼ Dⁿ. Let f be a generalized linear loss function of the form given in Eq. 6, satisfying the assumption of Section 3.1, such that for all z, ℓ(·; z) is convex and L-Lipschitz with respect to its first argument. Then, for the gradient descent solution w̄_S in Eq. 3 and the clipping π in Eq. 9, we have with probability at least 1 − δ:
In particular, setting η = 1/√T and T = n we obtain that with probability at least 1 − δ:
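For concreteness, the clipping step can be implemented as a simple post-processing of the GD output; the sketch below (our own illustration) assumes the relevant interval is [−1, 1], matching the classification surrogates discussed above:

```python
import numpy as np

def clipped_prediction(w_bar, x):
    """Prediction of the clipped GD solution on a feature vector x = phi(z):
    the raw prediction <w_bar, x> is mapped through pi of Eq. 9."""
    return float(np.clip(np.dot(w_bar, x), -1.0, 1.0))
```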
4 Comparison with non-distribution-dependent uniform convergence
In this section, we contrast our approach with what might be possible with a more traditional, non-distribution-dependent, uniform convergence argument. In particular, we consider an argument based on ensuring that the output of GD, with some stepsize η and number of iterations T, is always (or with high probability) in a ball of radius R around the origin, namely ‖w̄_S‖ ≤ R. Hence, using Rademacher complexity bounds for GLMs, we can say that its population risk is within O(RL/√n) of its empirical risk. By selecting η and T to be very small, one can always ensure that the output of GD is in a small ball, but we also need to balance that with the empirical suboptimality of the output. That is, traditional bounds require us to find η and T that balance between the guarantee on the empirical suboptimality and the guarantee on the norm of the output. Such an approach would then result in population suboptimality that is governed by these two terms:
ε_ball(n) = min_{η,T} [ sup_D ( F̂_S(w̄_S) − min_{‖w‖≤1} F̂_S(w) ) + (L/√n) · sup_D ‖w̄_S‖ ],    (10)
where the suprema over D are taken over all valid distributions such that ℓ is convex and L-Lipschitz. In particular, we would get,
Claim.
For GLMs it holds that ε_ball(n) = Θ(n^{−1/4}).
Proof.
The upper bound follows by taking T = n and η = 1/(L·n^{3/4}), bounding the first term using the standard GD guarantee in Eq. 4, and bounding the second term by noting that each GD step increases the norm of the iterate by at most ηL. Therefore ‖w̄_S‖ ≤ ηTL, and we obtain
ε_ball(n) ≤ O( 1/(ηT) + ηL² + ηTL²/√n ) = O( L/n^{1/4} ).
For the lower bound we consider two deterministic objectives in one dimension, according to the magnitude of ηT. When ηTL² ≥ n^{1/4} we consider the deterministic objective f(w; z) = Lw. Clearly, the norm of the GD solution is then ‖w̄_S‖ = Ω(ηTL), and thus the second term of Eq. 10 is Ω(ηTL²/√n) = Ω(L/n^{1/4}).
When ηTL² ≤ n^{1/4} we consider the objective f(w; z) = L·|1 − w|. Since each GD step moves the iterate by at most ηL, the averaged iterate falls short of the minimizer at w = 1 by Ω(min{1, 1/(ηTL)}). Thus, the empirical suboptimality (the first term of Eq. 10) is Ω(min{L, 1/(ηT)}) = Ω(L²/n^{1/4}).
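For intuition, the n^{−1/4} value is exactly what balancing the two terms of Eq. 10 allows (a quick calculation of ours, treating L as a constant and taking η itself negligibly small): writing x = ηT,
min_x [ 1/x + xL²/√n ] = 2L/n^{1/4},  attained at x = n^{1/4}/L,
so no choice of η and T can push the guarantee of Eq. 10 below order n^{−1/4}, i.e. below a sample complexity of order 1/ε⁴.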
The claim shows that uniform convergence over a fixed ball around the origin can ensure learning, but with a suboptimal sample complexity of Θ(1/ε⁴). To obtain better bounds using this approach, additional structural assumptions on the loss or the distribution are required. For example, the work of Shamir (2020) assumes separability with a margin and smoothness (specifically, the logistic loss) to obtain bounds on the norm of the solution, while our result applies to general Lipschitz GLMs without any distributional assumptions. It is insightful to directly contrast Eq. 10 with the approach of Section 3, which essentially entails looking at the distance of the GD output from a deterministic, distribution-dependent center w̃ (independent of the sample S):
ρ(η, T) = sup_D ‖w̄_S − w̃‖.
We can then apply a uniform concentration guarantee for a ball of radius ρ(η, T) around w̃, and so the same optimization guarantee is also sufficient to ensure an excess population risk of
O( 1/(ηT) + ηL² + L·ρ(η, T)/√n ).
In Section 3 we show that ρ(η, T) = Õ(1) for the natural choice η = 1/√T and T = n, improving on the radius required by Eq. 10 and yielding optimal learning, up to logarithmic factors.
5 Technical Overview
Our main result, Theorem 3.1, establishes a generalization bound for GD without projection, and it builds upon the structural result presented in Theorem 3.2. We next outline the derivation of both of these results. Because the most interesting implications of our results are obtained when we choose η = 1/√T and T = n, we will focus the exposition on this regime; this is mainly to avoid cluttering the notation. Hence, unless stated otherwise, we assume that η and T are fixed this way.
Our proof of Theorem 3.1 relies on two steps. First, through Theorem 3.2 we argue that the output of GD will be (w.h.p.) in a fixed ball that depends solely on the distribution (but not on the sample). Then, as a second step, we can apply standard uniform convergence over bounded-norm balls (see Lemma 2.1) to reason about generalization.
The second step builds on a standard generalization bound derived through Rademacher complexity. Also notice that the first step follows immediately from Theorem 3.2. Indeed, Theorem 3.2 argues that there exists a sequence w̃_1, …, w̃_T such that if w_1, …, w_T is the trajectory of GD over the empirical risk, we will have (w.h.p.):
The output of GD is the averaged iterate, hence we obtain that w̄_S is indeed restricted to a ball around the average of the w̃_t's, as required. Therefore, we are left with proving our main structural result: that the iterates of GD stay in the proximity of a trajectory that depends solely on the distribution (i.e., Theorem 3.2).
Towards proving Theorem 3.2, we introduce the following GD trajectory:
Gradient Descent on the population loss.
We consider an alternative gradient descent sequence that operates on the population risk F rather than the empirical risk F̂_S. The update rule is then
w̃_{t+1} = w̃_t − η ∇F(w̃_t),  with w̃_1 = 0.    (11)
The sequence w̃_1, …, w̃_T will serve as the reference sequence in Theorem 3.2.
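For intuition, one can think of the argument as coupling two runs of GD, one on the empirical risk and one on the population risk, and tracking the distance between them; the sketch below (our own illustration, approximating the population gradient by a much larger independent sample) makes this comparison explicit:

```python
import numpy as np

def coupled_gd_trajectories(subgrad, sample, big_fresh_sample, eta, T, dim):
    """Runs GD on the empirical risk of `sample` (the iterates w_t) alongside GD on a
    Monte Carlo proxy for the population risk (the reference iterates of Eq. 11),
    and records the distance between the two trajectories at every step."""
    w = np.zeros(dim)          # GD on the empirical risk
    w_ref = np.zeros(dim)      # GD on the (approximate) population risk
    distances = []
    for _ in range(T):
        distances.append(np.linalg.norm(w - w_ref))
        g_emp = np.mean([subgrad(w, z) for z in sample], axis=0)
        g_pop = np.mean([subgrad(w_ref, z) for z in big_fresh_sample], axis=0)
        w = w - eta * g_emp
        w_ref = w_ref - eta * g_pop
    return distances
```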
In the proof of Theorem 3.2 we require high-probability rates. However, for simplicity of the exposition, we will prove here a slightly weaker, in-expectation, result (the proof is deferred to Section A.1):
Lemma 5.1.
For every t = 1, …, T, the iterates of Eq. 2 and Eq. 11 satisfy E_S ‖w_t − w̃_t‖ ≤ O( ηL√t + ηLt/√n ).
This lemma bounds the distance, in expectation, between the GD trajectory over the empirical risk and the GD trajectory over the population risk. Again, we remark that to obtain the final result in Theorem 3.2, a stronger high probability version of Lemma 5.1 is required. The proof of Theorem 3.2 follows similar lines to that of Lemma 5.1 with modifications concerning specific concentration inequalities.
One crucial challenge in proving Lemma 5.1 stems from the adaptivity of the gradient sequence. In particular, notice that the two sequences are governed, respectively, by the empirical-gradient dynamics of Eq. 2 and the population-gradient dynamics of Eq. 11.
At first glance, since ∇F̂_S(w) is an estimate of ∇F(w), it might seem that the result can be obtained by standard concentration bounds and an application of a union bound along the trajectory. Unfortunately, such a naive argument cannot work. Indeed, since w_t, for t > 1, depends on the sample S, ∇F̂_S(w_t) is not necessarily an unbiased estimate of ∇F(w_t). This is not merely a technicality: a construction in Amir et al. (2021b) demonstrates how, even after only two iterates, the empirical gradient can diverge significantly from the population gradient. While these constructions are outside the scope of GLMs, we remark that Theorem 3.2 holds for any convex and Lipschitz function. Also, it was in fact shown that even for GLMs (Foster et al., 2018), the gradient estimates do not admit any dimension-independent uniform convergence bound, making it a challenge to pursue such a proof direction. To summarize, we need to prove that the two sequences remain in each other's vicinity, even though, a priori, the update steps of the two objectives may differ at each iteration.
Towards this goal, we follow an analysis reminiscent of the uniform argument stability analysis of Bassily et al. (2020) for non-smooth convex losses. In a nutshell, both arguments compare the trajectory to a reference trajectory, and bound the incremental difference between the two trajectories while exploiting monotonicity of the (sub)gradients of the convex objective. In the case of stability, the reference trajectory is the trajectory induced by a sample that differs on a single example from the sample S.
Bassily et al. showed that if (w_t) and (w′_t) are the GD trajectories induced by two samples that differ on a single example, then:
‖w_t − w′_t‖ ≤ O( ηL√t + ηLt/n ).
In comparison, we show:
E_S ‖w_t − w̃_t‖ ≤ O( ηL√t + ηLt/√n ).
We note that both results are optimal. Namely, the stability bound of Bassily et al. (2020) is the optimal stability rate, and the proximity bound we provide is the best possible bound against a fixed point that is independent of the sample.
Note that both bounds yield a Θ(1) difference for the natural choice of η = 1/√T and T = n. However, Θ(1)-stability guarantees are vacuous in the sense that they do not provide any interesting implication for the generalization of the algorithm, whereas we provide a Θ(1)-proximity guarantee to some fixed point, completely independent of the sample S. This does imply generalization in our setup.
One might also suggest that our proximity result can be derived from stability. However, it turns out this is not the case. For the same instance we use to lower bound the proximity in Theorem 3.3, it can easily be shown that GD is much more stable than it is proximal to any fixed point; an appropriate setting of η and T then yields vanishing stability together with non-vanishing proximity. This asserts that our result is not a direct consequence of standard stability arguments.
References
- Amir et al. [2021a] I. Amir, Y. Carmon, T. Koren, and R. Livni. Never go full batch (in stochastic convex optimization). In Advances in Neural Information Processing Systems, 2021a.
- Amir et al. [2021b] I. Amir, T. Koren, and R. Livni. SGD generalizes better than GD (and regularization doesn’t help). In Conference on Learning Theory, 2021b.
- Arora et al. [2019] S. Arora, N. Cohen, W. Hu, and Y. Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32, 2019.
- Bartlett and Mendelson [2002] P. L. Bartlett and S. Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. J. Mach. Learn. Res., 3:463–482, 2002.
- Bassily et al. [2020] R. Bassily, V. Feldman, C. Guzmán, and K. Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Advances in Neural Information Processing Systems, 2020.
- Blanc et al. [2020] G. Blanc, N. Gupta, G. Valiant, and P. Valiant. Implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. In Conference on learning theory, pages 483–513. PMLR, 2020.
- Boucheron et al. [2013] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities - A Nonasymptotic Theory of Independence. Oxford University Press, 2013. ISBN 978-0-19-953525-5. doi: 10.1093/acprof:oso/9780199535255.001.0001. URL https://doi.org/10.1093/acprof:oso/9780199535255.001.0001.
- Chizat and Bach [2020] L. Chizat and F. Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. In Conference on Learning Theory, pages 1305–1338. PMLR, 2020.
- Dauber et al. [2020] A. Dauber, M. Feder, T. Koren, and R. Livni. Can implicit bias explain generalization? stochastic convex optimization as a case study. In Advances in Neural Information Processing Systems, 2020.
- Feldman [2016] V. Feldman. Generalization of ERM in stochastic convex optimization: The dimension strikes back. In Advances in Neural Information Processing Systems, 2016.
- Foster et al. [2018] D. J. Foster, A. Sekhari, and K. Sridharan. Uniform convergence of gradients for non-convex learning and optimization. Advances in Neural Information Processing Systems, 31, 2018.
- Gunasekar et al. [2018a] S. Gunasekar, J. D. Lee, D. Soudry, and N. Srebro. Characterizing implicit bias in terms of optimization geometry. In J. G. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1827–1836. PMLR, 2018a. URL http://proceedings.mlr.press/v80/gunasekar18a.html.
- Gunasekar et al. [2018b] S. Gunasekar, J. D. Lee, N. Srebro, and D. Soudry. Implicit bias of gradient descent on linear convolutional networks. Advances in Neural Information Processing Systems, 2018b.
- Gunasekar et al. [2018c] S. Gunasekar, B. Woodworth, S. Bhojanapalli, B. Neyshabur, and N. Srebro. Implicit regularization in matrix factorization. In 2018 Information Theory and Applications Workshop, 2018c.
- HaoChen et al. [2021] J. Z. HaoChen, C. Wei, J. Lee, and T. Ma. Shape matters: Understanding the implicit bias of the noise covariance. In Conference on Learning Theory, pages 2315–2357. PMLR, 2021.
- Hardt et al. [2016] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, 2016.
- Ji and Telgarsky [2019] Z. Ji and M. Telgarsky. The implicit bias of gradient descent on nonseparable data. In Conference on Learning Theory, 2019.
- Ji et al. [2020a] Z. Ji, M. Dudík, R. E. Schapire, and M. Telgarsky. Gradient descent follows the regularization path for general losses. In Conference on Learning Theory, 2020a.
- Ji et al. [2020b] Z. Ji, M. Dudík, R. E. Schapire, and M. Telgarsky. Gradient descent follows the regularization path for general losses. In J. D. Abernethy and S. Agarwal, editors, Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria], volume 125 of Proceedings of Machine Learning Research, pages 2109–2136. PMLR, 2020b. URL http://proceedings.mlr.press/v125/ji20a.html.
- Kakade et al. [2008] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 793–800. Curran Associates, Inc., 2008. URL https://proceedings.neurips.cc/paper/2008/hash/5b69b9cb83065d403869739ae7f0995e-Abstract.html.
- Li et al. [2018] Y. Li, T. Ma, and H. Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. In Conference On Learning Theory, pages 2–47. PMLR, 2018.
- Li et al. [2019] Y. Li, E. X. Fang, H. Xu, and T. Zhao. Implicit bias of gradient descent based adversarial training on separable data. In International Conference on Learning Representations, 2019.
- Lyu and Li [2019] K. Lyu and J. Li. Gradient descent maximizes the margin of homogeneous neural networks. In International Conference on Learning Representations, 2019.
- Moroshko et al. [2020] E. Moroshko, B. E. Woodworth, S. Gunasekar, J. D. Lee, N. Srebro, and D. Soudry. Implicit bias in deep linear classification: Initialization scale vs training accuracy. Advances in neural information processing systems, 33:22182–22193, 2020.
- Nacson et al. [2019] M. S. Nacson, S. Gunasekar, J. Lee, N. Srebro, and D. Soudry. Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models. In International Conference on Machine Learning, pages 4683–4692. PMLR, 2019.
- Nagarajan and Kolter [2019] V. Nagarajan and J. Z. Kolter. Uniform convergence may be unable to explain generalization in deep learning. Advances in Neural Information Processing Systems, 32, 2019.
- Nemirovski and Yudin [1983] A. S. Nemirovski and D. B. Yudin. Problem complexity and method efficiency in optimization. 1983.
- Neyshabur et al. [2015] B. Neyshabur, R. Tomioka, and N. Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations (workshop track), 2015.
- Pesme et al. [2021] S. Pesme, L. Pillaud-Vivien, and N. Flammarion. Implicit bias of sgd for diagonal linear networks: a provable benefit of stochasticity. Advances in Neural Information Processing Systems, 34, 2021.
- Razin et al. [2021] N. Razin, A. Maman, and N. Cohen. Implicit regularization in tensor factorization. In International Conference on Machine Learning, pages 8913–8924. PMLR, 2021.
- Sekhari et al. [2021] A. Sekhari, K. Sridharan, and S. Kale. Sgd: The role of implicit regularization, batch-size and multiple-epochs. Advances in Neural Information Processing Systems, 34, 2021.
- Shalev-Shwartz and Ben-David [2014] S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
- Shalev-Shwartz et al. [2009] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In Conference on Learning Theory, 2009.
- Shamir [2020] O. Shamir. Gradient methods never overfit on separable data. CoRR, abs/2007.00028, 2020. URL https://arxiv.org/abs/2007.00028.
- Soudry et al. [2018] D. Soudry, E. Hoffer, M. S. Nacson, S. Gunasekar, and N. Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.
- Sridharan et al. [2008] K. Sridharan, S. Shalev-Shwartz, and N. Srebro. Fast rates for regularized objectives. Advances in Neural Information Processing Systems, 2008.
- Telgarsky [2013] M. Telgarsky. Margins, shrinkage, and boosting. In International Conference on Machine Learning, pages 307–315. PMLR, 2013.
- Vaskevicius et al. [2020] T. Vaskevicius, V. Kanade, and P. Rebeschini. The statistical complexity of early-stopped mirror descent. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html.
- Woodworth et al. [2020] B. Woodworth, S. Gunasekar, J. D. Lee, E. Moroshko, P. Savarese, I. Golan, D. Soudry, and N. Srebro. Kernel and rich regimes in overparametrized models. In Conference on Learning Theory, pages 3635–3673. PMLR, 2020.
Appendix A Main Proofs
Recall that we define w̃_t to be the t-th iterate when applying GD over the population risk, as depicted in Eq. 11.
A.1 Proof of Lemma 5.1
Observe that,
(L-Lipschitz)
where in the last inequality we use monotonicity of subgradients of convex functions: ⟨∇F̂_S(u) − ∇F̂_S(v), u − v⟩ ≥ 0 for any u, v. Next, applying the Cauchy–Schwarz inequality we get,
(C.S.)
The last inequality follows from the observation that 2ab ≤ a² + b² for any a and b, applied here to suitably rescaled versions of ‖w_t − w̃_t‖ and η‖∇F̂_S(w̃_t) − ∇F(w̃_t)‖.
Applying the formula recursively, and noting that w_1 = w̃_1 = 0:
where in the last inequality we chose the rescaling appropriately and used the known bound (1 + 1/T)^T ≤ e. Taking the square root and using the inequality √(a + b) ≤ √a + √b, we conclude
We are interested in bounding E_S ‖∇F̂_S(w̃_t) − ∇F(w̃_t)‖. By the definition of the empirical risk,
Note that by Lipschitzness ‖∇f(w̃_t; z_i) − ∇F(w̃_t)‖ ≤ 2L, and that ∇f(w̃_t; z_1) − ∇F(w̃_t), …, ∇f(w̃_t; z_n) − ∇F(w̃_t) are independent zero-mean random vectors (as w̃_t is independent of S). Thus, we get
Consequently, taking the expectation over the sample and using Jensen’s inequality
A.2 Proof of Theorem 3.1
Starting with Lemma 2.1 we obtain, for any ball B_ρ(c),
(12) |
From Theorem 3.2, there exists a sequence w̃_1, …, w̃_T such that with probability at least 1 − δ,
(13) |
Setting c to be the average of this sequence and ρ to be the RHS of Eq. 13, we obtain:
(by Eqs. 13 and 12)
where we used Eq. 7 and the fact that and to bound the second and third terms. Hence:
and
Next, setting we get that:
(14) |
A.3 Proof of Theorem 3.2
The proof is similar to that of Lemma 5.1, with the exception that here we employ specific concentration inequalities for random variables with bounded differences. The reference sequence we consider is the sequence of GD iterates over the population risk, as described in Eq. 11. Observe that
(L-Lipschitz)
From convexity of F̂_S we know that ⟨∇F̂_S(u) − ∇F̂_S(v), u − v⟩ ≥ 0 for any u, v. Therefore, applying the Cauchy–Schwarz inequality we get,
(convexity)
(C.S.)
(15)
The last inequality follows from the observation that 2ab ≤ a² + b² for any a and b, applied here to suitably rescaled versions of ‖w_t − w̃_t‖ and η‖∇F̂_S(w̃_t) − ∇F(w̃_t)‖.
We are interested in bounding ‖∇F̂_S(w̃_t) − ∇F(w̃_t)‖ with high probability. For that matter we consider the following concentration inequality, which is a direct consequence of the bounded-differences inequality of McDiarmid.
Theorem (Boucheron, Lugosi, and Massart [2013, Example 6.3]).
Let be independent zero-mean R.V’s such that and denote . Then, for all ,
Note that by Lipschitzness ‖∇f(w̃_t; z_i) − ∇F(w̃_t)‖ ≤ 2L, and that these are independent zero-mean random vectors (as w̃_t is independent of S). Thus, for every fixed t:
This implies that with probability we get,
Plugging it back to Eq. 15 we obtain w.p.
Applying the formula recursively, and noting that :
where in the last inequality we chose and used the known bound of . Taking the square root and using the inequality of we have
By taking a union bound over all t = 1, …, T we conclude the proof.
A.4 Proof of Theorem 3.4
Similarly to the proof in Section A.2, let us consider a ball B_ρ(c), where we set c to be the average of the deterministic sequence from Theorem 3.2. From the assumption in Section 3.1 it follows that the clipped losses are bounded for every w. We can also use Lemma 2.1 (applying it to this ball) to obtain that
(16) |
where we denote
Next, we define
and note that for two samples that differ on a single example we have that
Using the bounded-differences inequality of McDiarmid [see Shalev-Shwartz and Ben-David, 2014, Lemma 26.4], we have that with probability at least 1 − δ,
(McDiarmid)
From Theorem 3.2 we have that with probability at least 1 − δ,
(17) |
Taken together, and applying a union bound, we have that with probability at least 1 − 2δ:
(18) |
Next, using Eq. 4 and the assumption of Section 3.1, together with the fact that the optimization bound in Eq. 4 holds for any comparator:
(assumption of Section 3.1)
(Eq. 4)
(19) |
Now, set such that
(20) |
By independence from the sample S and the norm bound, Lipschitzness implies the relevant terms are bounded. It then follows from Hoeffding's inequality that with probability at least 1 − δ,
(21) |
Thus, we have that w.p. at least 1 − δ:
(Eq. 21)
(22) |
where the last inequality follows from Eq. 20. Combining Eqs. 18 and 22 and applying a union bound we obtain the result.
Appendix B Proof of Lemma 2.1
Using the standard bound of the generalization error, via the Rademacher complexity of the class (see e.g. Shalev-Shwartz and Ben-David [2014]), we have that:
where we denote the function class:
and the last term denotes the Rademacher complexity of this class. Namely:
(23) |
where σ_1, …, σ_n are i.i.d. Rademacher random variables.
We next show that:
(24) |
To show Eq. 24, we use the following well known property of the Rademacher complexity of a class:
Lemma B.1 (contraction lemma, see Shalev-Shwartz and Ben-David [2014]).
For each i, let ℓ_i be a convex L-Lipschitz function in its first argument. Then, if σ_1, …, σ_n are i.i.d. Rademacher random variables,
As well as the Rademacher complexity of the class of linear predictors against a sample of 1-bounded vectors:
(25) |
Next, given a sample we define and we set
Then:
Appendix C Proof of Theorem 3.3
Our construction comprises two separate instances. We first provide lower bounds, Lemmas C.1 and C.2, on the distance between the GD iterates defined over two separate i.i.d. samples S and S′, respectively.
Lemma C.1.
Fix n and T. Suppose S and S′ are two i.i.d. samples drawn from D. There exists a convex and L-Lipschitz function f and a distribution D over Z, such that, if w_t and w′_t are defined as in Eq. 2 over S and S′ respectively, then with constant probability:
Lemma C.2.
Fix n and T. Suppose S and S′ are two i.i.d. samples drawn from D. There exists a convex and L-Lipschitz function f and a distribution D over Z, such that, if w_t and w′_t are defined as in Eq. 2 over S and S′ respectively, then with constant probability:
One can then pick the dominant term between the two bounds, and obtain that with constant probability:
(26) |
Now fix some sequence w̃_1, …, w̃_T, independent of the samples S and S′. Then by the triangle inequality we have that,
(S and S′ are i.i.d.)
Dividing by two and using Eq. 26 we conclude the proof.
C.1 Proof of Lemma C.1
Suppose f takes the following form:
where and with probability . Define a sample and , then by the update rule in Eq. 2 we obtain,
This implies that . Note that,
Using Berry-Esseen inequality one can show that with probability at least :
In turn we conclude that w.p. at least ,
We remark that this construction can be embedded in any larger dimension, and thus our lower bound holds in arbitrary dimension.
C.2 Proof of Lemma C.2
This proof relies on the same construction as Bassily et al. [2020]. The difference is that we show a lower bound between iterates over two i.i.d. samples, while their result holds for two samples that differ only on a single example. The main observation here is that, with some constant probability, the problem reduces to that of Bassily et al. [2020]. Consider the following function f:
where and
We also choose such that and a sufficiently small , and . Observe that for a given sample the empirical risk is then,
We now claim that with probability over , the empirical risk is given by,
Conditioned on this event, we get that and therefore for any . In addition, we know that the complementary event, namely, for at least a single , is given with probability . Since,
we have that with probability at least both events occur. Note that where is the one vector. Then applying the update rule in Eq. 2 and the fact that we get,
Recall, that under the aforementioned event we have that . This implies that for any . Therefore,
where is the standard basis vector of index . Since we have that . Developing this dynamic recursively we obtain,
Using the reverse triangle inequality we have,
(reverse triangle inequality)
() |