Stacking as Accelerated Gradient Descent
Abstract
Stacking, a heuristic technique for training deep residual networks by progressively increasing the number of layers and initializing new layers by copying parameters from older layers, has proven quite successful in improving the efficiency of training deep neural networks. In this paper, we propose a theoretical explanation for the efficacy of stacking: viz., stacking implements a form of Nesterov's accelerated gradient descent. The theory also covers simpler models such as the additive ensembles constructed in boosting methods, and provides an explanation for a similar widely-used practical heuristic for initializing the new classifier in each round of boosting. We also prove that for certain deep linear residual networks, stacking does provide accelerated training, via a new potential function analysis of Nesterov's accelerated gradient method which allows errors in updates. We also conduct proof-of-concept experiments to validate our theory.
1 Introduction
Deep learning architectures are ubiquitous today and have been responsible for tremendous technological advances in machine learning. However, until 2006, training deep architectures was extremely challenging. The deep learning revolution of the past couple of decades was ushered in with the discovery that a classical technique, viz. greedy layer-wise pretraining, can be used to train general deep architectures (see [13, Section 15.1] for a historical account of this). Previously, only deep architectures with special structure like convolutions or recurrences were known to be feasible to train. Greedy layer-wise pretraining is a very intuitive technique where a deep network is built and trained in a stagewise manner. Starting with a small network that is easy to train, the technique prescribes adding new layers over a number of stages, and in each stage training the newly added layers (and, potentially, the older layers as well) for a certain number of training steps. This process continues until the desired model depth is reached.
More modern developments such as residual connections [19] and normalization layers [20] have made it possible to directly train deep networks without using greedy layer-wise pretraining. However, in recent years, the tremendous success of deep learning architectures based on transformers [35] in domains such as language modeling, and computer vision [27, 5, 9, 34] has led to the trend of scaling model capacity with ever increasing model sizes for improved performance [8, 1, 33]. This endeavour comes at a significant cost as model training may often take months and require several million dollars of compute resources [8]. As a result there has been a surge of recent work aimed at faster training of large transformer based models. These works include methods for sparse training such as mixture of experts (MoE) models [30, 21], methods for approximate sparse attention mechanisms [6] and better optimization methods [31, 23, 15].
In the effort to reduce the massive costs of training these giant transformer models, greedy layer-wise pretraining has re-emerged as a very effective strategy in recent times. Specifically, a technique for initializing the new layers known as stacking [14, 26] has been shown to be very effective in speeding up training of deep transformer models. Stacking prescribes a heuristic for initializing the newly added layers. In particular, it prescribes that the newly added layers should be initialized by copying parameters from the previously trained layers. [14] proposed to double the model depth at each stage by stacking an exact copy of the current model on top. [26] argue that doubling the model depth may be suboptimal and a better approach is gradual stacking, where a few new layers (say 3-4) are added during each stage. These layers are initialized by copying the topmost layers from the existing model. See Figure 1 for an illustration of this technique for training deep transformer models.
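To make the recipe concrete, here is a minimal Python sketch of gradual stacking (the layer counts, the `grow_by_stacking` helper, and the commented training routines are illustrative assumptions, not the exact procedure of [14, 26]):

```python
import copy

def grow_by_stacking(layers, num_new_layers):
    """Gradual stacking: grow the model by appending copies of the current
    topmost layers, so the new layers start from already-trained parameters."""
    new_layers = [copy.deepcopy(layer) for layer in layers[-num_new_layers:]]
    return layers + new_layers

# Schematic stagewise schedule (hypothetical helpers `build_initial_model` and
# `train_for_steps`): start shallow, then repeatedly train and stack.
# model = build_initial_model(num_layers=4)
# for stage in range(5):
#     train_for_steps(model, num_steps=10_000)
#     model.layers = grow_by_stacking(model.layers, num_new_layers=4)
```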

The classical greedy layer-wise pretraining strategy doesn’t have a specific prescription for initializing the new layers. In general, they’re initialized randomly in some standard fashion. Stacking initialization provides a clear benefit over random initialization: Figure 2 shows one example of this effect, for training the BERT Base [10] model with 4 stages of stagewise training.

Structurally, greedy layer-wise pretraining resembles another classical technique, viz. boosting. In boosting, an additive ensemble of classifiers is constructed via stagewise training of classifiers in the same greedy manner. Boosting algorithms such as AdaBoost [12] and Gradient Boosting [11] have found tremendous practical application, especially when using decision trees as base classifiers (e.g. XGBoost [7]). A heuristic similar to stacking has also found practical application in boosting algorithms. The heuristic is to initialize each new classifier (e.g. a decision tree) by copying over the just-trained classifier and then updating it using new training data. This process is illustrated in Figure 3. Due to the similarity with stacking for training deep transformer models, in the rest of the paper we use "stacking" to also refer to this initialization strategy in the context of boosting.
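The following toy sketch illustrates this warm-start heuristic for boosting, using a linear weak learner and the squared loss purely for concreteness (real systems such as XGBoost use trees; the learner, step counts, and learning rate here are assumptions for illustration):

```python
import numpy as np

def fit_weak_learner(X, residual, w_init, steps=10, lr=0.1):
    """Fit a linear weak learner h(x) = X @ w to the residual with a few
    gradient steps, starting from w_init (the warm-start initialization)."""
    w = w_init.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - residual) / len(X)
        w = w - lr * grad
    return w

def boost_with_warm_start(X, y, num_rounds=20):
    """Greedy stagewise (boosting-style) training where each new weak learner
    is initialized by copying the parameters of the previously trained one."""
    n, d = X.shape
    F = np.zeros(n)            # predictions of the current additive ensemble
    w_prev = np.zeros(d)
    ensemble = []
    for _ in range(num_rounds):
        residual = y - F       # negative gradient of the squared loss
        w = fit_weak_learner(X, residual, w_init=w_prev)  # copy previous learner
        ensemble.append(w)
        F = F + X @ w
        w_prev = w
    return ensemble
```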

While stacking-based methods lead to impressive speedups in training of transformer models and additive models in boosting, we currently do not have a good theoretical understanding of why this is the case. Recently, [26] provided a theoretical explanation based on the assumption that each transformer block is a good few-shot learner. This assumption, along with a few others, is then used to conclude that copying parameters as stacking does leads to fast learning. However, the assumptions made in the paper are fairly strong and hard to verify.
In this work, we make progress on developing a theoretical understanding of the efficacy of stacking by studying it from an optimization perspective. In particular, our main contribution is that when viewed from the perspective of function optimization, stacking speeds up stagewise training by enabling a form of the accelerated gradient descent method (AGD) developed by Nesterov [25]. In other words, each stage of the stacking-initialized stagewise training procedure will reduce the training loss at an accelerated rate.
In contrast, we also show that without using any form of initialization, or in other words, initializing the new block/classifier to implement the zero function, stagewise training simply recovers usual (non-accelerated) gradient descent, whereas random initialization recovers stochastic gradient descent on a smoothed version of the loss. Hence, stacking initialization accelerates stagewise training over zero or random initialization. In more detail, our contributions are as follows:
1. We propose a general theoretical framework towards learning a prediction function via an ensemble, i.e., a sequence of functions trained in a greedy stagewise manner. The generality of our framework lets us unify classical approaches such as boosting [12, 11] that build the ensemble in an additive manner, and modern approaches that build the ensemble via stagewise training of residual function compositions (e.g. ResNets [19] and Transformer models [35]).
2. Our proposed framework lets us formally establish the connection between various initialization strategies used for building the ensemble and the convergence properties of the resulting overall learning procedure. In particular, we show that the zero initialization strategy recovers the vanilla functional gradient descent algorithm, for both the additive (i.e. boosting) and residual compositional forms of learning, whereas random initialization recovers stochastic functional gradient descent (on a smoothed loss) for both types of models. Furthermore, in the case of additive models, the use of the popular stacking initialization exactly recovers Nesterov's accelerated functional gradient descent. The consequence is that for $T$ stages of boosting with stacking initialization, the loss reduces at a rate of $O(1/T^2)$ for smooth losses, or $\exp(-\Omega(T/\sqrt{\kappa}))$ for smooth and strongly-convex losses with condition number $\kappa$, as opposed to rates of $O(1/T)$ and $\exp(-\Omega(T/\kappa))$ respectively for zero initialization.
3. For the case of compositional models, we show that stacking initialization results in updates that look remarkably similar to Nesterov's accelerated functional gradient descent. Proving an accelerated rate in the general non-parametric functional setting seems intractable, so we analyze stacking in a special parametric setting of deep linear networks with a convex loss function. In this setting we prove (Theorem 3.1) that the stacking initialization quantitatively leads to the same kind of convergence benefits over vanilla gradient descent as is observed for Nesterov's accelerated method. At the core of our proof is a novel potential function based analysis of Nesterov's method with errors in the momentum term that may be of independent interest (cf. Lemma 3.2).
4. We perform proof-of-concept experiments (in Section 4) to validate our theory on synthetic and real world data.
1.1 Related work
Boosting is a classical technique for constructing additive ensembles via greedy stagewise training, and has a long and rich history of work. We refer the interested reader to the excellent textbook of [28] for the literature on this topic.
The idea of training deep residual networks in a layer-wise manner has been explored in many prior works. In earlier studies [18, 4] the focus was on greedily adding trained layers to the model while keeping the bottom layers frozen, followed by a final fine-tuning step where the entire network is trained. In recent years, progressive or gradual stacking [14, 16, 32, 26] has emerged as a powerful way to train deep networks, especially transformer-based architectures.
The empirical insight of [14] was that the attention patterns in neighboring layers of trained transformer models show remarkable similarity. Hence, by copying the parameters from the previous layer one is providing a better initialization for the optimization procedure. As mentioned previously, [26] developed the gradual stacking approach based on the assumption that the trained transformer blocks are good few-shot learners, and showed that gradual stacking leads to significant wallclock improvements during training.
2 Stagewise training as functional gradient descent
Preliminaries.
We consider a fairly general supervised learning setting. Denote the input space by $\mathcal{X}$ and the output space by $\mathcal{Y}$. Examples $(x, y)$ are drawn from a distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ (which may simply be the empirical data distribution in the case of empirical risk minimization). We aim to model the input-output relationship via predictions in $\mathbb{R}^d$, for some dimension parameter $d$. Given an example $(x, y)$, the quality of a prediction $\hat{y} \in \mathbb{R}^d$ is measured via a loss function $\ell(\hat{y}, y)$. Predictions are computed using functions $f: \mathcal{X} \to \mathbb{R}^d$. We will assume that the predictor functions are square integrable with respect to $\mathcal{D}$, i.e. $\mathbb{E}_{x \sim \mathcal{D}}[\|f(x)\|^2] < \infty$. The space of such functions forms a Hilbert space, denoted $L^2(\mathcal{D})$, with the inner product defined as $\langle f, g \rangle := \mathbb{E}_{x \sim \mathcal{D}}[\langle f(x), g(x) \rangle]$. Unless specified otherwise, all functions in the subsequent discussion will be assumed to be in $L^2(\mathcal{D})$. The loss function can then naturally be extended to predictor functions by defining, with some abuse of notation, $\ell(f) := \mathbb{E}_{(x, y) \sim \mathcal{D}}[\ell(f(x), y)]$. The goal of training is to obtain a function that minimizes $\ell$.
In the rest of this section, we perform the analysis in a purely functional setting, which affords a convenient analysis. However, we note that, in practice, functions are parameterized (say by neural networks) and hence update rules for functions may not always be realizable via the specific parameterization used. The functional setting allows us to sidestep realizability issues and to focus on the conceptual message that stacking initialization enables accelerated updates.
We now define a general ensemble learning setup within the above setting. In this setup, we aim to approximate the minimizer of $\ell$ over $L^2(\mathcal{D})$ via an ensemble, which is a sequence of functions $(f_1, f_2, \ldots, f_T)$, where $T$ is a given parameter defining the size of the ensemble. The functions in the ensemble are typically "simple" in the sense that they are chosen from a class of functions that is easy to optimize over. A predictor function can be obtained from an ensemble by aggregating its constituent functions into a single function, which we denote by $F_T$. The loss of an ensemble can then be defined (again with some abuse of notation) in terms of its aggregation as $\ell(f_1, \ldots, f_T) := \ell(F_T)$. Two specific aggregation operators we consider are the following:
1. Addition: (E.g. boosting.) This is a summation over ensemble outputs: $F_T = f_1 + f_2 + \cdots + f_T$, i.e. $F_T(x) = \sum_{t=1}^{T} f_t(x)$.
2. Residual composition: (E.g. deep residual neural networks.) This is the composed function $F_T = (\mathrm{Id} + f_T) \circ (\mathrm{Id} + f_{T-1}) \circ \cdots \circ (\mathrm{Id} + f_1)$, where the domain is $\mathbb{R}^d$ and $\mathrm{Id}$ is the identity mapping. (A short code sketch of both aggregation operators follows.)
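A minimal sketch of the two aggregation operators, with each $f_t$ represented as a Python callable on $\mathbb{R}^d$ (the linear blocks in the example are placeholders chosen for illustration):

```python
import numpy as np

def aggregate_additive(fs, x):
    """Additive aggregation (boosting): F(x) = f_1(x) + ... + f_T(x)."""
    return sum(f(x) for f in fs)

def aggregate_residual(fs, x):
    """Residual composition: F = (Id + f_T) o ... o (Id + f_1)."""
    z = x
    for f in fs:
        z = z + f(z)
    return z

# Example with simple linear blocks f_t(z) = A_t @ z.
rng = np.random.default_rng(0)
d = 3
fs = [(lambda z, A=0.1 * rng.standard_normal((d, d)): A @ z) for _ in range(4)]
x = rng.standard_normal(d)
print(aggregate_additive(fs, x), aggregate_residual(fs, x))
```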
Greedy stagewise training.
Stagewise training is a simple greedy procedure to train ensembles in a progressive manner. Suppose we have already obtained a (partial) ensemble $(f_1, \ldots, f_t)$. Then, the next function in the ensemble, $f_{t+1}$, is ideally obtained by minimizing the loss of the new ensemble, i.e. $f_{t+1} = \arg\min_{f} \ell(f_1, \ldots, f_t, f)$.
However, in practice, this ideal is hard to implement, and instead two heuristics are commonly used: (a) the new function to be trained is initialized in some carefully chosen manner, and (b) the optimization above is done using early stopping, i.e. a few steps of gradient descent, which ensures that the new function stays close to initialization. We analyze these heuristics in a functional optimization setting as follows.
First, we assume that the function to be trained is initialized at some carefully chosen value $f^{\mathrm{init}}_{t+1}$. For notational convenience, we denote the aggregation of the ensemble $(f_1, \ldots, f_t, f^{\mathrm{init}}_{t+1})$ by $F^{\mathrm{init}}_{t+1}$ and that of the generic ensemble $(f_1, \ldots, f_t, f)$ by $F_{t+1}(f)$.
Next, we note that an exact analysis for early stopping quickly becomes technically intractable. Instead, for a theoretical analysis, we model the heuristic of early stopping by using regularization around the initialization and linearizing the loss near the initialization, as follows. It is known (see, e.g. [13, Section 7.8]) that early stopping acts as a form of regularization which ensures that the trained function $f_{t+1}$ remains close to its initialization, $f^{\mathrm{init}}_{t+1}$, which implies that $F_{t+1}(f_{t+1})$ remains close to $F^{\mathrm{init}}_{t+1}$. Thus, early stopping can be modeled as minimizing $\ell(F_{t+1}(f)) + \frac{1}{2\eta}\|F_{t+1}(f) - F^{\mathrm{init}}_{t+1}\|^2$ over $f$, for some regularization parameter $\eta > 0$. Further, since the trained function remains close to the initialization, we also approximate $\ell(F_{t+1}(f))$ by its linearization around the initialization: $\ell(F_{t+1}(f)) \approx \ell(F^{\mathrm{init}}_{t+1}) + \langle \nabla\ell(F^{\mathrm{init}}_{t+1}), F_{t+1}(f) - F^{\mathrm{init}}_{t+1} \rangle$. Here, $\nabla\ell$ is the Fréchet derivative, and $\langle \cdot, \cdot \rangle$ denotes the inner product in $L^2(\mathcal{D})$. These considerations lead to the following key modeling assumption.
Assumption 2.1.
The result of the early stopped training is given by
$$f_{t+1} = \arg\min_{f} \; \langle \nabla\ell(F^{\mathrm{init}}_{t+1}),\, F_{t+1}(f) - F^{\mathrm{init}}_{t+1} \rangle + \tfrac{1}{2\eta}\,\|F_{t+1}(f) - F^{\mathrm{init}}_{t+1}\|^2.$$
In other words, the new aggregated predictor $F_{t+1} := F_{t+1}(f_{t+1})$ satisfies
$$F_{t+1} = F^{\mathrm{init}}_{t+1} - \eta\,\nabla\ell(F^{\mathrm{init}}_{t+1}). \qquad (1)$$
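To see why the surrogate objective in Assumption 2.1 yields the gradient-step form (1), one can treat the aggregation $G = F_{t+1}(f)$ as a free variable in $L^2(\mathcal{D})$ (a modeling idealization, since in reality $G$ is constrained by the aggregation structure) and complete the square:

```latex
\begin{aligned}
&\arg\min_{G}\;\Bigl\{ \langle \nabla\ell(F^{\mathrm{init}}_{t+1}),\, G - F^{\mathrm{init}}_{t+1}\rangle
   + \tfrac{1}{2\eta}\,\|G - F^{\mathrm{init}}_{t+1}\|^2 \Bigr\} \\
&\qquad= \arg\min_{G}\; \tfrac{1}{2\eta}\,
   \bigl\|G - \bigl(F^{\mathrm{init}}_{t+1} - \eta\,\nabla\ell(F^{\mathrm{init}}_{t+1})\bigr)\bigr\|^2
   \;-\; \tfrac{\eta}{2}\,\|\nabla\ell(F^{\mathrm{init}}_{t+1})\|^2 \\
&\qquad= F^{\mathrm{init}}_{t+1} - \eta\,\nabla\ell(F^{\mathrm{init}}_{t+1}),
\end{aligned}
```

which is exactly the update in (1).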
We can now consider specific initialization strategies (i.e. zero initialization, random initialization, and stacking initialization) in the context of additive and residual compositional models and see how these initializations lead to various forms of functional gradient descent.
Stagewise training with zero initialization recovers functional gradient descent.
First, consider stagewise training where functions are initialized to be zero functions, i.e. $f^{\mathrm{init}}_{t+1} = 0$. It is easy to see that with this initialization, for both additive and residual compositional models, we have $F^{\mathrm{init}}_{t+1} = F_t$. Thus, from (1), we have that the updated ensemble's predictor can be written as
$$F_{t+1} = F_t - \eta\,\nabla\ell(F_t).$$
This exactly describes functional gradient descent with step size $\eta$. In the additive setting this is well-known: indeed, boosting can be seen as functional gradient descent [24]. The result for the residual compositional setting appears to be new.
Stagewise training with random initialization recovers stochastic functional gradient descent on smoothed loss.
We now consider stagewise training where functions are initialized randomly, i.e. $f^{\mathrm{init}}_{t+1} = r$ is a randomly drawn function, independent of all randomness up to stage $t$. In the following, we will assume that $\mathbb{E}[r] = 0$, where the $0$ on the RHS denotes the zero function. With this initialization, for both additive and residual compositional models, we have $F^{\mathrm{init}}_{t+1} = F_t + \tilde{r}$, where $\tilde{r} = r$ for additive models, and $\tilde{r} = r \circ F_t$ for residual compositional models. In either case, note that $\mathbb{E}[\tilde{r}] = 0$. Now define the loss functional $\tilde{\ell}(F) := \mathbb{E}_{\tilde{r}}[\ell(F + \tilde{r})]$. Since $\mathbb{E}[\tilde{r}] = 0$, we can interpret $\tilde{\ell}$ as a randomized smoothing of $\ell$, similar to convolving with a Gaussian. Then, from (1), we have that the updated ensemble's predictor can be written as
$$F_{t+1} = F_t + \tilde{r} - \eta\,\nabla\ell(F_t + \tilde{r}).$$
Now, note that
$$\mathbb{E}_{\tilde{r}}\bigl[F_t + \tilde{r} - \eta\,\nabla\ell(F_t + \tilde{r})\bigr] = F_t - \eta\,\nabla\tilde{\ell}(F_t).$$
In other words, the above update can be seen as a stochastic functional gradient descent step on the smoothed loss function $\tilde{\ell}$.
Stagewise training with stacking initialization recovers accelerated functional gradient descent.
We now consider stagewise training where functions are initialized in a stacking-like fashion with $f^{\mathrm{init}}_{t+1} = f_t$, which we will refer to as the stacking initialization. When the ensemble aggregation operator is addition, we have $F^{\mathrm{init}}_{t+1} = F_t + f_t = F_t + (F_t - F_{t-1})$, and hence (1) implies that the updated ensemble's predictor is
$$F_{t+1} = F_t + (F_t - F_{t-1}) - \eta\,\nabla\ell\bigl(F_t + (F_t - F_{t-1})\bigr).$$
The above formula essentially describes Nesterov's accelerated gradient descent, which has the following update rule:
$$F_{t+1} = F_t + \alpha_t\,(F_t - F_{t-1}) - \eta\,\nabla\ell\bigl(F_t + \alpha_t\,(F_t - F_{t-1})\bigr). \qquad (2)$$
Here, $\alpha_t$ is a constant that can depend on $t$. In fact, we can exactly recover Nesterov's accelerated gradient descent if we modify the stacking initialization to $f^{\mathrm{init}}_{t+1} = \alpha_t f_t$. Thus, stacking enables accelerated descent for training additive models.
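The following finite-dimensional sketch illustrates the point on a toy quadratic objective standing in for the function space (the condition number, momentum value $\alpha = (\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$, and step size $\eta = 1/\beta$ are illustrative choices, not prescriptions from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, kappa = 50, 1000.0
eigs = np.linspace(1.0, kappa, d)                 # Hessian spectrum: beta = kappa, mu = 1
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(eigs) @ Q.T
f = lambda F: 0.5 * F @ H @ F                     # loss; grad f(F) = H F
grad = lambda F: H @ F
eta = 1.0 / eigs.max()
alpha = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

F0 = rng.standard_normal(d)
F_gd = F0.copy()                                  # zero initialization: plain gradient descent
F_prev, F_cur = F0.copy(), F0.copy()              # stacking initialization: update (2)
for t in range(300):
    F_gd = F_gd - eta * grad(F_gd)
    G = F_cur + alpha * (F_cur - F_prev)          # look-ahead point F_t + alpha (F_t - F_{t-1})
    F_prev, F_cur = F_cur, G - eta * grad(G)

print(f"plain GD loss:          {f(F_gd):.3e}")
print(f"stacking/Nesterov loss: {f(F_cur):.3e}")
```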
When the ensemble aggregation operator is residual composition, stagewise training with the stacking initialization results in
$$F^{\mathrm{init}}_{t+1} = (\mathrm{Id} + f_t) \circ F_t = F_t + f_t \circ F_t.$$
Equation (1) therefore implies the updated ensemble's predictor is
$$F_{t+1} = F_t + f_t \circ F_t - \eta\,\nabla\ell\bigl(F_t + f_t \circ F_t\bigr). \qquad (3)$$
In contrast, Nesterov's update rule (2), and the fact that for residual compositional models,
$$F_t - F_{t-1} = f_t \circ F_{t-1},$$
yields the following equation for $F_{t+1}$:
$$F_{t+1} = F_t + \alpha_t\, f_t \circ F_{t-1} - \eta\,\nabla\ell\bigl(F_t + \alpha_t\, f_t \circ F_{t-1}\bigr). \qquad (4)$$
Comparing (3) and (4), barring the minor difference in parameters, which can be easily rectified as in the case of the additive models by setting $f^{\mathrm{init}}_{t+1} = \alpha_t f_t$, the major difference is that $f_t \circ F_t$ replaces $f_t \circ F_{t-1}$. Although possibly intractable to prove formally, we believe that the updates in (3) also provide an accelerated convergence rate, since we expect $F_t$ to be close to $F_{t-1}$ (and hence $f_t \circ F_t$ to be close to $f_t \circ F_{t-1}$) as the iterates converge to the optimal function.
In the following section, we show that in certain deep linear networks, the above intuition is indeed correct and provide a rigorous proof that stacking provides an accelerated convergence rate.
3 Accelerated convergence of deep linear networks by stacking
To demonstrate that stacking can provide a provably accelerated rate of convergence, we now turn to studying the narrower setting of training deep residual linear networks, which are fully connected feedforward neural networks without non-linear activations and with residual connections. Such networks are a common subject of study in the theory of deep learning [29, 22, 17]. As they have no non-linear components, deep linear networks effectively compute a linear function, albeit via a parametrization as a product of the weight matrices.
Setup.
Consider again the general supervised learning setting from Section 2 and suppose, as is often the case in modern neural networks, that examples consist of inputs $x \in \mathbb{R}^d$ and outputs $y \in \mathbb{R}^d$. The loss function $\ell(\hat{y}, y)$ is assumed to be convex in the first argument. Let the samples be drawn from a distribution $\mathcal{D}$ over $\mathbb{R}^d \times \mathbb{R}^d$. Then the expected loss of the linear predictor $x \mapsto Bx$ for a matrix $B \in \mathbb{R}^{d \times d}$ is (with some abuse of notation) $\ell(B) := \mathbb{E}_{(x, y) \sim \mathcal{D}}[\ell(Bx, y)]$. In the following, we will assume the expected loss is $\beta$-smooth and $\mu$-strongly convex in $B$, by which we mean that the following inequalities hold for any $B_1, B_2 \in \mathbb{R}^{d \times d}$:
$$\ell(B_2) \le \ell(B_1) + \langle \nabla\ell(B_1), B_2 - B_1 \rangle + \tfrac{\beta}{2}\,\|B_2 - B_1\|_F^2, \qquad \ell(B_2) \ge \ell(B_1) + \langle \nabla\ell(B_1), B_2 - B_1 \rangle + \tfrac{\mu}{2}\,\|B_2 - B_1\|_F^2.$$
Here, for matrices $M, N \in \mathbb{R}^{d \times d}$, $\langle M, N \rangle := \mathrm{tr}(M^\top N)$, and $\|M\|_F$ is the Frobenius norm of $M$. The condition number of the loss is defined as $\kappa := \beta/\mu$.
The deep residual neural networks we consider have $k$ layers with weight matrices $A_1, A_2, \ldots, A_k \in \mathbb{R}^{d \times d}$, and the function they compute is $x \mapsto B_k x$, where
$$B_k := (I + A_k)(I + A_{k-1}) \cdots (I + A_1).$$
Here, $I$ is the $d \times d$ identity matrix providing the residual connection. The expected loss of the neural network described above on the data is $\ell(B_k)$.
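A small sketch of the network just described, representing the layer weights as a list of $d \times d$ numpy arrays (the helper name is ours, chosen for illustration):

```python
import numpy as np

def residual_product(A_list, d):
    """Compute B_k = (I + A_k)(I + A_{k-1}) ... (I + A_1) for a deep residual
    linear network with layer weight matrices A_1, ..., A_k (each d x d)."""
    B = np.eye(d)
    for A in A_list:          # apply layers bottom-up, innermost factor first
        B = (np.eye(d) + A) @ B
    return B

# The network's prediction on an input x is then simply residual_product(A_list, d) @ x.
```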
Derivation of stacking updates.
Suppose we train the deep residual linear network described above using stacking initialization, but incorporating $\alpha$-scaling: i.e., to train the $(k+1)$-th layer, its weight matrix is initialized to $\alpha A_k$, for some constant $\alpha$, and then trained. Following the exact same steps as in the derivation of stacking updates in the functional setting of Section 2, we end up with the following formula for $B_{k+1}$:
$$B_{k+1} = (I + \alpha A_k)\,B_k - \eta\,\nabla\ell\bigl((I + \alpha A_k)\,B_k\bigr).$$
When $B_{k-1}$ is non-singular, we have $A_k = (B_k - B_{k-1})\,B_{k-1}^{-1}$, so the above equation can be rewritten as
$$B_{k+1} = B_k + \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1} B_k - \eta\,\nabla\ell\bigl(B_k + \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1} B_k\bigr). \qquad (5)$$
As previously noted, (5) differs from Nesterov's AGD method in form: Nesterov's AGD updates would be
$$B_{k+1} = B_k + \alpha\,(B_k - B_{k-1}) - \eta\,\nabla\ell\bigl(B_k + \alpha\,(B_k - B_{k-1})\bigr). \qquad (6)$$
Accelerated convergence for stacking updates.
Despite the differences with Nesterov's method, we can show that stacking notably still yields a provably accelerated convergence rate. Let $B^\star$ denote the minimizer of $\ell$. Suppose that $B^\star$ is non-singular, i.e. its smallest singular value $\sigma_{\min}(B^\star) > 0$. Theorem 3.1 shows that as long as the first two layers are initialized so that $B_1$ and $B_2$ are close to optimal, stacking results in a suboptimality gap of $\exp(-\Omega(k/\sqrt{\kappa}))$ after $k$ stages of stacking. This is of the same order as the rate obtained by Nesterov's acceleration; note that, in comparison, stagewise training with zero initialization would result in a suboptimality gap of $\exp(-\Omega(k/\kappa))$.
Theorem 3.1.
Consider stagewise training with stacking initialization of a deep residual linear network in the setup described above with and . Suppose that the first layer weights are initialized so that , where satisfies for and . Then after stages of stacking, we have
The notation above hides polylogarithmic dependence on the problem parameters for clarity of presentation. Precise expressions can be found in the proof. The primary insight behind the result of Theorem 3.1 is that Nesterov’s accelerated gradient method is relatively robust to perturbations in its update rules. This robustness is formalized below in Lemma 3.2. The lemma is described in a fairly general, standalone setting since it may be of independent interest.
Lemma 3.2 (Robustness of Nesterov’s accelerated gradient method).
Let $\mathcal{H}$ be a Hilbert space and $f : \mathcal{H} \to \mathbb{R}$ a $\beta$-smooth and $\mu$-strongly convex function to be minimized over $\mathcal{H}$. Consider iterates $x_0, x_1, x_2, \ldots$ with $x_0$ chosen arbitrarily (and $x_{-1} := x_0$), and the update rules
$$x_{t+1} = x_t + \alpha\,(x_t - x_{t-1}) + e_t - \eta\,\nabla f\bigl(x_t + \alpha\,(x_t - x_{t-1}) + e_t\bigr), \qquad (7)$$
where $\alpha$ and $\eta$ are momentum and step-size parameters, and the $e_t \in \mathcal{H}$ are error terms whose norms are suitably bounded for all $t$ (the precise bound appears in the proof). Then the convergence rate of the iterates to a suboptimality gap of $\epsilon$ is of order $\sqrt{\kappa}\,\log(1/\epsilon)$. Specifically, for any $t$,
and
We will apply Lemma 3.2 using the correspondence $\mathcal{H} = \mathbb{R}^{d \times d}$, $f = \ell$, and $x_k = B_k$ for all $k$, and $e_k$ as given in (8) below. Lemma 3.2 implies that even though stagewise training with stacking differs from Nesterov's method, their similar form allows us to express the former as a perturbation of the latter. In particular, if we write our stacking update (5) for deep residual linear networks as a perturbation of Nesterov's method, as in (7), the perturbation term is exactly
$$e_k = \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1} B_k - \alpha\,(B_k - B_{k-1}) \qquad (8)$$
for all $k$. That is, if we examine the sequence of iterates produced by stagewise training with stacking initialization, the term $e_k$ is a measure of the disagreement between the realized iterate and what Nesterov's method says the iterate should be, conditioned on the iterates from previous timesteps. We note that, even when these perturbation terms are small in norm, it is possible for Nesterov's method to describe an iterate sequence that diverges significantly in norm from the iterates realized by stagewise training.
Rewriting (8) as $e_k = \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1}\,(B_k - B_{k-1})$, we immediately see that in order to satisfy the requirement of Lemma 3.2 that the perturbations be small in norm, we simply need $\|B_k - B_{k-1}\|_F$ to be small relative to $\sigma_{\min}(B_{k-1})$. This is satisfied when $B_k - B_{k-1}$ is small and $B_{k-1}$ is reasonably non-singular. A sufficient condition for this is that the iterates are sufficiently close to the ground-truth solution $B^\star$ and $B^\star$ is non-singular, which explains the conditions of Theorem 3.1.
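For completeness, here is the short calculation behind this rewriting, stated for the forms of (5)–(8) reconstructed above (so the exact constants should be read as an assumption of this presentation rather than a quotation of the precise statements):

```latex
\begin{aligned}
e_k &= \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1} B_k \;-\; \alpha\,(B_k - B_{k-1})
     = \alpha\,(B_k - B_{k-1})\,B_{k-1}^{-1}\,(B_k - B_{k-1}), \\
\|e_k\|_F &\le \alpha\,\|B_k - B_{k-1}\|_F\,\|B_{k-1}^{-1}\|_2\,\|B_k - B_{k-1}\|_F
           = \frac{\alpha\,\|B_k - B_{k-1}\|_F^2}{\sigma_{\min}(B_{k-1})}.
\end{aligned}
```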
We now prove Theorem 3.1 formally.
Sufficient claim.
To prove the main result, it suffices to show that for all . With this claim Lemma 3.2 immediately gives a convergence rate of
This gives the desired result as
where .
Thus, we need to show that for all . For this, we use the following claim, where :
Claim 3.3.
Suppose that and . Then .
Proof of Claim 3.3.
We have
We can further simplify the right-most factor using the submultiplicative property of the norm and the triangle inequality, which yields (9).
The following fact shows that is indeed far from singular as long as it is close enough to .
Fact 3.4.
is invertible and .
Proof of Fact 3.4.
Since is the Frobenius norm, we can upper bound the singular values of the matrix by , where we use to denote the largest singular values of a matrix . We can then use Weyl's inequality to argue that, for any , the difference between the th largest singular values of and is upper bounded by
The smallest singular value of is thus at least . Since our choice of guarantees that , we have that and is invertible. This also implies that the largest singular value of is at most . We can therefore upper bound the Frobenius norm of as claimed by
∎
We can now complete the proof by showing the following claim via induction on :
Claim 3.5.
We have and .
Proof of Claim 3.5.
We proceed with induction on .
Base case: .
Note that . We have
which implies that . Furthermore, since and is -smooth, we have , which implies, exactly as above, that . Thus, by Claim 3.3, we conclude that .
Inductive step.
Fix , and assume, as our inductive hypothesis, that and for all . Due to this inductive hypothesis, we can invoke Lemma 3.2 to observe that for all :
Thus, we have . Since due to our induction hypothesis, we can now use Claim 3.3 to conclude that . This completes the inductive proof.
∎
∎
Theorem 3.1 presents a local convergence result: it assumes that the initial two layers put the network in the vicinity of the optimal solution $B^\star$. We can also provide a global convergence result by adding a small warmup phase to stacking, that is, by training the first few stages (only a constant number) of the deep linear network without stacking.
Corollary 3.6 (Corollary of Theorem 3.1).
Consider stagewise training of a deep residual linear network in the setup described above with an initial warmup phase of zero initialization for stages followed by stacking initialization for the remaining stages. Then after total stages, we have
Proof.
We first recall the standard convergence rate of gradient descent. In the same setting as Lemma 3.2, where we minimize a $\beta$-smooth and $\mu$-strongly convex function $f$ on some Hilbert space, we can define gradient descent iterates with the update rule $x_{t+1} = x_t - \frac{1}{\beta}\,\nabla f(x_t)$. For the resulting sequence of iterates, it is known that the suboptimality gap after $t$ steps decays at a rate of $\exp(-\Omega(t/\kappa))$; see e.g. [3].
We can further recall that, in the stagewise training of deep residual linear networks, we can write the networks as $x \mapsto B_k x$ with $B_k = (I + A_k) \cdots (I + A_1)$. In the initial stages where new layers are initialized with zero weights, i.e. $A_{k+1} = 0$, we recover, as mentioned in Section 2, gradient descent on $\ell$:
$$B_{k+1} = B_k - \eta\,\nabla\ell(B_k).$$
Putting these two pieces together, we have . By setting
we have . So by setting , and noting that , we can now apply the local convergence result of Theorem 3.1, and conclude that performing rounds of stacking after rounds of warm-start leads to the desired loss bound
∎
Key ingredient of proof: robustness of Nesterov’s accelerated gradient descent method.
We now turn to proving Lemma 3.2.
Proof of Lemma 3.2.
In this proof, we will assume that the norm of the perturbation term is bounded by
(10) |
where is defined in the same way as in Theorem 3.1, namely
We define the coefficients , , and . can be understood as a momentum parameter, as (proportional to) the parameter of the exponential curve that is our convergence rate, and a penalty for the presence of the perturbations . As is done in all proofs of Nesterov’s method, we will define the iterates
for all ; we refer interested readers to [36, 2] for interpretations of these iterates. We note that although our definition of depends on , and by extension , this lemma’s claim about does not directly depend at all on . Thus, we will freely choose to set , guaranteeing that .
To prove the statement of this lemma, it suffices to show the following claim.
Claim 3.7.
The following function is a potential function:
(11) |
That is, for all , .
It suffices to show Claim 3.7 because the consequence that directly implies the lemma’s first statement via
with the last inequality following from the -smoothness of . The second statement follows similarly via
We therefore turn to showing Claim 3.7.
First, we note that the potential function (11) we are proving differs from the usual potential function that is used to prove Nesterov’s acceleration, which we will denote by :
Fact 3.8 says roughly that, for any , we can hypothetically recover a large part of the usual proof of Nesterov’s acceleration by bounding the difference , if only we could remove the perturbation at timestep , i.e. set (but keep in place the perturbations from previous timesteps).
Fact 3.8.
Fix any and let . Then,
(12) |
Next, in Fact 3.9, we show that, since we can argue that if we ignore the perturbation , we can argue that even taking the perturbation into account. That is, Fact 3.9 shows that the left-hand side of (12) upper bounds , so long as our stated assumption on the perturbation norm holds.
Fact 3.9.
For any , the left-hand side of (12) upper bounds ; equivalently,
(13) |
Plugging Fact 3.9’s (13) into Fact 3.8’s (12), we recover our main claim that . We conclude by turning to prove Facts 3.8 and 3.9.
Proof of Fact 3.8.
The proof of this fact closely follows [3]’s proof of Nesterov’s accelerated gradient method in smooth strongly convex settings.
We begin by upper bounding the first summand in the left-hand side of (12). Since is smooth and is a gradient step on from , we have that and thus
(14) |
Using the -strong convexity of , we can further bound part of the right-hand side by
(15) |
Plugging (15) and the identity into (14), direct algebra yields an upper bound on the first summand of (12):
(16) |
Next, we turn to upper bounding the second summand in (12). Plugging the iterate definitions and into our definition of , we can recover the identity
Plugging this identity for into the expression , direct algebra yields the identity
(17) |
Summing (16) and (17) yields the following upper bound on the left-hand side of (12):
∎
Proof of Fact 3.9.
To show this fact, we can observe from direct algebra that it suffices to prove the following inequalities:
(18) | |||
(19) |
The inequality (18) follows directly from the $\mu$-strong convexity of $f$, and the identity , as
∎
4 Experiments
In this section we provide some proof-of-concept experiments to validate our theoretical results.
4.1 Deep Linear Networks and Squared Losses
As our main theoretical results in Section 3 apply to the case of deep linear networks, we consider the same function class in our experiments on synthetic data with the square loss. Formally, the output space is $\mathbb{R}^d$, and for a predictor $B \in \mathbb{R}^{d \times d}$, we consider the loss
$$\ell(Bx, y) = \|Bx - y\|^2. \qquad (21)$$
We consider a data distribution where the samples are drawn as follows. Let $A^\star \in \mathbb{R}^{d \times d}$ be the "ground truth" positive definite matrix and let $\Sigma \in \mathbb{R}^{d \times d}$ be the data covariance matrix. We first sample $x$ from a mean-zero distribution with covariance $\Sigma$, and then, conditioned on $x$, the output is generated as $y = A^\star x + \xi$, where $\xi$ is a mean zero random variable independent of $x$. We can then write the expected square loss explicitly as
$$\ell(B) = \mathbb{E}\bigl[\|Bx - y\|^2\bigr] = \mathrm{tr}\bigl((B - A^\star)\,\Sigma\,(B - A^\star)^\top\bigr) + \mathbb{E}\bigl[\|\xi\|^2\bigr]. \qquad (22)$$
Note that for the case of the squared loss described above, the condition number of the expected loss depends on the covariance matrix $\Sigma$, i.e., $\kappa = \lambda_{\max}(\Sigma)/\lambda_{\min}(\Sigma)$.
Stacking updates.
For the specific case of the squared loss, we get the following closed-form expression for the stacking updates (see Eq. (5)):
(23)
Here $\beta$ is the smoothness of the loss, which depends on the largest singular value of $\Sigma$.
Nesterov’s updates.
Similarly, we get the following closed-form expression for Nesterov's updates (see Eq. (6)) for the case of the squared loss:
(24)
For both stacking and Nesterov’s updates we set .
4.2 Synthetic data experiments
We compare the performance of the three updates, namely vanilla gradient descent, the stacking updates (Eq. 23), and the exact Nesterov updates (Eq. 24). Here, at each stacking stage only the last layer is updated, which matches our theoretical setup faithfully. Later in this section, we also consider the effect of training all the layers in each stacking stage, which is closer to how stacking is applied in practice.
We consider points in dimensions. We generate the ground truth to be of the form , where is a random positive semi-definite matrix of spectral norm and is a parameter defining the closeness of to identity. For a given , we generate a random covariance matrix . Finally, we sample the noise in the output from a mean zero Gaussian with a standard deviation of .
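For reference, here is a self-contained numpy sketch of this comparison. It uses the update forms reconstructed above together with the standard choices $\eta = 1/\beta$ and $\alpha = (\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$; the exact data generation, constants, and momentum schedule used for the figures may differ, and the constant noise term is dropped from the reported loss:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
A_star = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # "ground truth" map (illustrative)
M = rng.standard_normal((d, d))
Sigma = M @ M.T / d + 0.01 * np.eye(d)                    # data covariance (controls conditioning)

evals = np.linalg.eigvalsh(Sigma)
beta, kappa = 2.0 * evals.max(), evals.max() / evals.min()
eta = 1.0 / beta
alpha = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

loss = lambda B: np.trace((B - A_star) @ Sigma @ (B - A_star).T)  # E||Bx - y||^2 minus noise term
grad = lambda B: 2.0 * (B - A_star) @ Sigma

B_gd = np.eye(d)                         # vanilla gradient descent
Bp, Bc = np.eye(d), np.eye(d)            # stacking updates, Eq. (5)
Cp, Cc = np.eye(d), np.eye(d)            # exact Nesterov updates, Eq. (6)
for k in range(100):
    B_gd = B_gd - eta * grad(B_gd)
    G = Bc + alpha * (Bc - Bp) @ np.linalg.inv(Bp) @ Bc   # requires B_{k-1} non-singular
    Bp, Bc = Bc, G - eta * grad(G)
    N = Cc + alpha * (Cc - Cp)
    Cp, Cc = Cc, N - eta * grad(N)

print(f"GD: {loss(B_gd):.3e}   stacking: {loss(Bc):.3e}   Nesterov: {loss(Cc):.3e}")
```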



Figure 4 shows the performance of the three types of updates as the problem becomes more ill-conditioned, i.e. as a function of . As expected, at small values of the condition number there is no advantage of the stacking updates over vanilla gradient descent. However, for ill-conditioned data the stacking updates converge much faster than gradient descent. We also observe that the convergence of the stacking updates mirrors very closely the convergence behavior of the exact Nesterov updates.
To further understand the relationship between the stacking updates and Nesterov's updates, in Figure 5 we show the performance of the two as the distance of from identity increases. As can be seen from the figures, when is farther from identity the stacking updates behave qualitatively differently from Nesterov's updates during the initial phase, where the loss for the stacking updates explodes before converging in later stages. This suggests that in practice there may be a better way to initialize a stacking stage by making the initialization closer to the ideal Nesterov updates. While in the case of deep linear networks and the squared loss we have a closed-form expression for such an initialization, in general this is a hard problem.


Next we consider the case where in each stacking stage we train all the layers of the deep linear network. We use the same data generation procedure as described above. We perform stages of stacking where in each stage we perform steps of gradient descent with a learning rate of where is the smoothness of the loss function. We train on examples with batch size of and test on examples.
We consider two types of stacking-based initialization schemes. The first one, namely Stacking Init., initializes the next layer's weight matrix as . The second scheme, namely Nesterov Init., initializes the new layer such that we recover the precise Nesterov updates at initialization, i.e., Eq. 24. From the analysis in Section 2, the initialization that achieves this amounts to setting as .
Figure 6 shows the performance of the two stacking initialization schemes as compared to the random baseline where we initialize the next layer’s weight matrix to be a random one. We again observe that both the stacking schemes outperform the baseline particularly when the data is ill conditioned.


4.3 Stacking for BERT Base with parameters

The theory developed in Section 2 requires the initialization at the -th stage to be for some . The introduction of is crucial to get the accelerated convergence rate in Nesterov's method, but the standard stacking initialization doesn't use such a parameter. We performed sanity-check experiments on BERT Base to ensure that the introduction of this parameter doesn't affect the efficacy of stacking. We introduced a trainable parameter that multiplies the output of the newly added transformer block in stacking, initialized to the values and , which are standard settings for momentum parameters. Figure 7 shows that the introduction of the parameter doesn't hurt the efficacy of stacking. The plot also shows that the final log perplexity improves a bit when using a trainable parameter.
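As an illustration, the trainable scaling can be implemented by wrapping the newly added block as follows (a PyTorch-style sketch we provide for concreteness; it is not the exact training code used for the BERT experiments):

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block whose newly stacked sub-block is scaled by a trainable
    scalar, so the layer computes x + alpha * block(x)."""
    def __init__(self, block: nn.Module, alpha_init: float = 0.9):
        super().__init__()
        self.block = block
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        return x + self.alpha * self.block(x)

# Usage sketch when growing the model (with `import copy`):
# new_layer = ScaledResidualBlock(copy.deepcopy(model.layers[-1]), alpha_init=0.9)
```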
5 Conclusions and Future Work
This paper develops the theoretical perspective that the effectiveness of stacking initialization, compared to other forms of initialization such as zero or random, stems from the fact that it enables a form of accelerated gradient descent in function space. There are several directions for future work. While this work provides a formal proof of accelerated convergence for a particular parametric setting (deep residual linear networks), such a proof in the general functional setting for deep residual networks is still open, and will probably require some additional assumptions. From a practical standpoint, a very intriguing and potentially impactful question is whether it is possible to come up with an efficiently implementable initialization scheme that leads to Nesterov's AGD updates exactly for deep residual networks.
References
- AAA+ [23] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
- AS [22] Kwangjun Ahn and Suvrit Sra. Understanding nesterov’s acceleration via proximal point method. In Karl Bringmann and Timothy Chan, editors, 5th Symposium on Simplicity in Algorithms, SOSA@SODA 2022, Virtual Conference, January 10-11, 2022, pages 117–130. SIAM, 2022.
- BG [17] Nikhil Bansal and Anupam Gupta. Potential-function proofs for first-order methods, 2017.
- BLPL [06] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19, 2006.
- BMR+ [20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
- BPC [20] Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
- CG [16] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. CoRR, abs/1603.02754, 2016.
- CND+ [22] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
- CWC+ [22] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022.
- DCLT [19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, NAACL-HLT, pages 4171–4186. Association for Computational Linguistics, 2019.
- Fri [01] Jerome H Friedman. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232, 2001.
- FS [97] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
- GBC [16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
- GHL+ [19] Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. Efficient training of bert by progressively stacking. In International conference on machine learning, pages 2337–2346. PMLR, 2019.
- GKS [18] Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochastic tensor optimization. In Jennifer G. Dy and Andreas Krause, editors, ICML, volume 80 of Proceedings of Machine Learning Research, pages 1837–1845. PMLR, 2018.
- GLY+ [20] Xiaotao Gu, Liyuan Liu, Hongkun Yu, Jing Li, Chen Chen, and Jiawei Han. On the transformer growth for progressive bert training. arXiv preprint arXiv:2010.12562, 2020.
- HM [17] Moritz Hardt and Tengyu Ma. Identity matters in deep learning. In ICLR. OpenReview.net, 2017.
- HOT [06] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
- HZRS [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778. IEEE Computer Society, 2016.
- IS [15] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei, editors, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 448–456. JMLR.org, 2015.
- JSR+ [24] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
- Kaw [16] Kenji Kawaguchi. Deep learning without poor local minima. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29, pages 586–594, 2016.
- LLH+ [23] Hong Liu, Zhiyuan Li, David Hall, Percy Liang, and Tengyu Ma. Sophia: A scalable stochastic second-order optimizer for language model pre-training. arXiv preprint arXiv:2305.14342, 2023.
- MBBF [99] Llew Mason, Jonathan Baxter, Peter L. Bartlett, and Marcus R. Frean. Boosting algorithms as gradient descent. In Sara A. Solla, Todd K. Leen, and Klaus-Robert Müller, editors, NeurIPS, pages 512–518. The MIT Press, 1999.
- Nes [83] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence $O(1/k^2)$. In Doklady AN USSR, volume 269, pages 543–547, 1983.
- RMK+ [23] Sashank J Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim, and Sanjiv Kumar. Efficient training of language models using few-shot learning. In International Conference on Machine Learning, pages 14553–14568. PMLR, 2023.
- RWC+ [19] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
- SF [12] Robert E. Schapire and Yoav Freund. Boosting: Foundations and Algorithms. The MIT Press, 2012.
- SMG [14] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Conference Track Proceedings, 2014.
- SMM+ [17] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
- SS [18] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. PMLR, 2018.
- SWK+ [22] Sheng Shen, Pete Walsh, Kurt Keutzer, Jesse Dodge, Matthew Peters, and Iz Beltagy. Staged training for transformer language models. In International Conference on Machine Learning, pages 19893–19908. PMLR, 2022.
- TAB+ [23] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
- TAD+ [23] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Chuck Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical AI. arXiv preprint arXiv:2307.14334, 2023.
- VSP+ [17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
- ZO [17] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. In Christos H. Papadimitriou, editor, 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, volume 67 of LIPIcs, pages 3:1–3:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.