Theory on Forgetting and Generalization of Continual Learning
Abstract
Continual learning (CL), which aims to learn a sequence of tasks, has attracted significant recent attention. However, most work has focused on the experimental performance of CL, and theoretical studies of CL are still limited. In particular, there is a lack of understanding of what factors are important and how they affect “catastrophic forgetting” and generalization performance. To fill this gap, our theoretical analysis, under overparameterized linear models, provides the first-known explicit form of the expected forgetting and generalization error. Further analysis of this key result yields a number of theoretical explanations about how overparameterization, task similarity, and task ordering affect both forgetting and generalization error of CL. More interestingly, by conducting experiments on real datasets using deep neural networks (DNNs), we show that some of these insights go beyond linear models and carry over to practical setups. In particular, we use concrete examples to show that our results not only explain some interesting empirical observations in recent studies, but also motivate better practical algorithm design for CL.
1 Introduction
Continual learning (CL) [41] is a learning paradigm where an agent needs to continuously learn a sequence of tasks. To resemble the extraordinary lifelong learning capability of human beings, the agent is expected to learn new tasks more easily based on accumulated knowledge from old tasks, and further improve the learning performance of old tasks by leveraging the knowledge of new tasks. The former is referred to as forward knowledge transfer and the latter as backward knowledge transfer. One major challenge herein is the so-called catastrophic forgetting [36], i.e., the agent easily forgets the knowledge of old tasks when learning new tasks.
Although there have been significant efforts in experimental studies (e.g., [27, 14, 50, 16, 17]) to address the forgetting issue, the theoretical understanding of CL is still at an early stage, and only a few attempts have emerged recently, e.g., [49, 12, 16, 17] (see a more detailed discussion of previous theoretical studies of CL in Section 2). However, none of these existing theoretical results provides an explicit characterization of forgetting and generalization error that depends only on fundamental system parameters/setups (e.g., the number of tasks/samples/parameters, noise level, task similarity/order). Thus, our work provides the first-known explicit theoretical result, which enables us to comprehensively understand which factors are relevant and precisely how they affect the forgetting and generalization error of CL.
Our main contributions can be summarized as follows.
First, we provide theoretical results on the expected value of forgetting and overall generalization error in CL, under a linear regression setup with i.i.d. Gaussian features and noise. Our results are in an explicit form that captures a clear dependence on various system parameters/setups. Note that analyzing overparameterized linear models is important in its own right and also, as demonstrated in many recent works, is a first step towards understanding the generalization performance of DNNs, e.g., [9, 5, 23, 40, 21].
Second, we investigate the impact of overparameterization, task similarity, and task ordering on both forgetting and generalization error of CL, which reveals the following important insights: 1) Both forgetting and generalization error can benefit from more parameters in the overparameterized regime. Moreover, benign overfitting exists and is easier to observe with large noise and/or low task similarity. 2) In terms of the impact of task similarity, we show that the generalization error always decreases when tasks become more similar, whereas this ‘monotonicity’ does not always hold for forgetting. Surprisingly, forgetting can even decrease when tasks are less similar in certain scenarios. 3) To minimize forgetting, the optimal task order should diversify the learning tasks in the early stage and learn more dissimilar tasks adjacently. This is also corroborated by some special scenarios where the tasks can be divided into multiple categories, and the optimal task order therein alternately learns tasks from different categories.
Last but not least, we show that our findings for the linear models are applicable to, and can guide the algorithm design for, CL in practice, by conducting experiments on real datasets with DNNs. Specifically, our analysis of the impact of task similarity is clearly corroborated by the experimental results, which further sheds light on the recent observations [42, 30, 17] that ‘intermediate task similarity’ leads to the worst forgetting in the two-task setup. Experimental results on the impact of task ordering are also consistent with our findings for linear models. More interestingly, inspired by our analysis of knowledge transfer in linear models, we slightly modify a previous method [33] that leverages task correlation to facilitate forward knowledge transfer, and show that better performance can be achieved by weighting fresher old tasks more heavily. These encouraging results corroborate the benefits of studying overparameterized linear models to fundamentally demystify CL.
2 Related Work
Empirical studies in CL. CL has attracted much attention in the past decade, and a vast number of empirical methods have been proposed to address catastrophic forgetting. In general, the existing methods can be divided into three categories: (1) Regularization-based methods (e.g., [27, 1, 34]), which regularize modifications to the weights that are important for old tasks when learning a new task; (2) Parameter-isolation based methods (e.g., [46, 50, 48]), which learn a mask to freeze the weights important for old tasks when learning a new task and expand the neural network when needed; (3) Memory-based methods, which either store and replay data of old tasks when learning a new task, i.e., experience-replay based methods (e.g., [14, 43, 22]), or store the gradient information of old tasks and learn the new task in a direction orthogonal to old tasks without data replay, i.e., orthogonal-projection based methods (e.g., [18, 44, 33]).
Theoretical studies in CL. Specifically, [12] and [16] analyzed the generalization error and forgetting of the orthogonal gradient descent (OGD) approach [18] based on NTK models, and further proposed variants of OGD to address forgetting. [49] proposed a unified framework for the performance analysis of regularization-based CL methods, by formulating them as a second-order Taylor approximation of the loss function for each task. [4] and [30] studied CL in the teacher-student setup to characterize the impact of task similarity on forgetting. [13] and [31] investigated continual representation learning with dynamically expanding feature spaces, and developed provably efficient CL methods with a characterization of the sample complexity. [15] characterized lower bounds on memory in CL using the PAC framework. By investigating the information flow between neural network layers, [2] analyzed the selection of frozen filters based on layer sensitivity to maximize the performance of CL. Nevertheless, none of these existing works shows an explicit form of forgetting and generalization error that depends only on fundamental system parameters/setups (e.g., the number of tasks/samples/parameters, noise level, task similarity/order). In contrast, our work is the first to provide such an explicit theoretical result, which enables us to comprehensively understand what factors affect the forgetting and generalization performance of CL, and how.
The most relevant study to our work is [17], which also studied CL in overparameterized linear models. However, our work differs substantially from [17]: (1) We study and provide the exact forms of both forgetting and generalization error based on the testing loss, while [17] only evaluated forgetting using the training data; (2) Our results characterize the performance of CL in a comprehensive way, by investigating how overparameterization, task similarity, and task ordering affect both forgetting and generalization error, while [17] only studied upper bounds on catastrophic forgetting under specific task orderings; (3) Unlike [17], our study is able to explain recent empirical phenomena and guide algorithmic development in CL with DNNs.
Studies on the generalization performance of overparameterized models (benign overfitting). DNNs are usually so overparameterized that they can completely fit all training samples, yet they can still generalize well on unseen test data. This seems to contradict the classical wisdom of the bias-variance trade-off. As a first step towards understanding this mystery, the “benign overfitting” or “double-descent” phenomenon (i.e., the test error decreases again in the overparameterized region as the number of parameters grows, so the overfitting is benign for the generalization performance) has been discovered and studied for overfitted solutions of single-task linear regression. For example, some works discovered and studied double descent with minimum $\ell_2$-norm overfitted solutions [9, 7, 6, 20, 39] or minimum $\ell_1$-norm overfitted solutions [38, 23], using simple features such as Gaussian or Fourier features. Other recent works studied the overfitted generalization performance by adopting features that approximate shallow neural networks, for example, random feature (RF) models [37], two-layer neural tangent kernel (NTK) models [3, 45, 24], and three-layer NTK models [25]. All of these studies considered only a single task. In contrast, our work focuses on CL with a sequence of tasks, which brings in many new factors such as task similarity and task ordering.
3 Continual Learning in Linear Models
Consider the standard CL setup where a sequence of tasks arrives sequentially in time.
Ground truth. We consider a linear ground truth [9, 17] for each task. Specifically, for task $t$, the output $y$ is given by
$y = x^\top w_t^* + z,$    (1)
where $x$ denotes the feature vector, $w_t^*$ denotes the ground-truth model parameters, and $z$ is the random noise. Here $s_t$ denotes the number of features of the ground truth (i.e., the number of true features). In practice, the true features are unknown in advance. Therefore, when choosing a model to learn a certain task, one usually chooses more features than necessary so that all possible features are included. We state this formally in the following assumption (when Assumption 3.1 does not hold, the derivation techniques for Theorem 4.1 in the next section still hold with a minor modification that treats the missing features as noise).
Assumption 3.1.
We index all possible features by $\{1, 2, 3, \dots\}$. Let $\mathcal{S}$ denote the set of indices of all the features chosen for the model to be trained, with cardinality $|\mathcal{S}| = p$. Let $\mathcal{S}_t$ denote the set of indices of the $t$-th task's true features, with cardinality $|\mathcal{S}_t| = s_t$. We assume that $\mathcal{S}_t \subseteq \mathcal{S}$ for every task $t$.
We next define an expanded ground-truth vector $w_t^* \in \mathbb{R}^p$ (reusing the same symbol for convenience) that expands the original ground-truth vector from dimension $s_t$ to dimension $p$ by filling zeros in the positions $\mathcal{S} \setminus \mathcal{S}_t$. Let $x \in \mathbb{R}^p$ be the corresponding feature vector for $\mathcal{S}$. Therefore, the ground truth in Equation 1 can be rewritten as
$y = x^\top w_t^* + z.$    (2)
Data. For each task $t$, the training dataset is denoted as $\{(x_{t,i}, y_{t,i})\}_{i=1}^{n_t}$ with sample size $n_t$. By stacking the training data as $X_t \in \mathbb{R}^{n_t \times p}$ and $y_t, z_t \in \mathbb{R}^{n_t}$, Equation 2 can be written as $y_t = X_t w_t^* + z_t$.
To simplify our analysis, we consider i.i.d. Gaussian features and noise, which is stated in the following assumption.
Assumption 3.2.
Each element of $X_t$ for all $t \in \{1, \dots, T\}$ follows the standard Gaussian distribution and is independent of the other elements. The noise $z_t \sim \mathcal{N}(0, \sigma_t^2 I_{n_t})$ and is independent across tasks, where $\sigma_t$ denotes the noise level.
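To make the setup concrete, the following sketch generates training data for a sequence of tasks under Assumptions 3.1 and 3.2; the dimension $p$, sample size $n$, noise level $\sigma$, number of tasks, and sparse ground truths are illustrative placeholders rather than values used in our analysis.

```python
import numpy as np

def make_task_data(w_star, n, sigma, rng):
    """Generate one task's training set under Assumption 3.2.

    w_star : (p,) expanded ground-truth vector (zeros outside the task's true features).
    Returns X of shape (n, p) with i.i.d. standard Gaussian entries and
    y = X @ w_star + z, where z ~ N(0, sigma^2 I).
    """
    p = w_star.shape[0]
    X = rng.standard_normal((n, p))       # i.i.d. Gaussian features
    z = sigma * rng.standard_normal(n)    # i.i.d. Gaussian noise
    return X, X @ w_star + z

rng = np.random.default_rng(0)
p, n, sigma, T, s = 200, 50, 0.1, 4, 20   # illustrative; overparameterized since p > n
w_stars = []
for _ in range(T):
    w = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)   # the task's true features
    w[support] = rng.standard_normal(s) / np.sqrt(s)
    w_stars.append(w)
tasks = [make_task_data(w, n, sigma, rng) for w in w_stars]
```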
Learning procedure. We train the model parameters for each task sequentially. Let $w_t$ denote the model obtained after training on task $t$, which is also the initial point of the model training for task $t+1$. Let $w_0 = 0$, i.e., task 1 starts training from zero. For each task $t$, the training loss is defined by the mean squared error (MSE) with respect to (w.r.t.) the training data of task $t$:
$\frac{1}{n_t}\, \| X_t w - y_t \|_2^2.$    (3)
When underparameterized (i.e., $p \le n_t$), minimizing Equation 3 has a unique solution (with probability 1). When overparameterized (i.e., $p > n_t$), minimizing Equation 3 has an infinite number of solutions that make Equation 3 zero. Among all overfitted solutions, we are particularly interested in the one corresponding to the convergent point of stochastic gradient descent (SGD) for minimizing Equation 3. In fact, it can be shown that such an overfitted solution has the smallest $\ell_2$-norm of the change of parameters [19]. In other words, $w_t$ corresponds to the solution to the following optimization problem:
$w_t = \arg\min_{w}\ \|w - w_{t-1}\|_2^2 \quad \text{s.t.}\quad X_t w = y_t.$    (4)
The constraint in Equation 4 implies that the training loss is exactly zero (i.e., overfitted).
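Equation 4 admits a simple closed-form recursion (this is also the content of Lemma B.1 in Appendix B): starting from $w_{t-1}$, the new model adds the minimum-norm correction that interpolates task $t$'s data. A minimal sketch of the resulting sequential training procedure, operating on the $(X_t, y_t)$ pairs produced by the data generator above:

```python
import numpy as np

def continual_fit(tasks, p):
    """Sequentially compute w_1, ..., w_T from Equation 4:
    w_t = w_{t-1} + X_t^+ (y_t - X_t w_{t-1}),
    the minimum-norm-change solution that SGD converges to when initialized at w_{t-1}."""
    w = np.zeros(p)                                   # w_0 = 0: task 1 starts from zero
    history = []
    for X, y in tasks:
        w = w + np.linalg.pinv(X) @ (y - X @ w)       # overfits task t exactly
        history.append(w.copy())
    return history                                    # history[t-1] is w_t
```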
Performance evaluation. For the described linear system, we use $\mathcal{L}_i(w)$ to denote the model error (it can be proved that the model error defined here is equivalent to the mean squared error on noise-free test data) for task $i$:
$\mathcal{L}_i(w) := \|w - w_i^*\|_2^2,$    (5)
which characterizes the generalization performance of $w$ on task $i$. As is standard in the empirical studies of CL, e.g., [14, 33], we evaluate the performance of CL via two key metrics, forgetting and overall generalization error, defined as follows:
(1) Forgetting: It measures how much ‘knowledge’ of old tasks has been forgotten after learning the current task. Specifically, after learning task $T$, the average forgetting over all old tasks is defined as:
$F_T := \frac{1}{T-1}\sum_{i=1}^{T-1}\big(\mathcal{L}_i(w_T) - \mathcal{L}_i(w_i)\big).$    (6)
In Equation 6, $\mathcal{L}_i(w_T) - \mathcal{L}_i(w_i)$ denotes the performance difference between $w_T$ (the model after training on task $T$) and $w_i$ (the model after training on task $i$) on the test data of task $i$.
(2) Overall generalization error: We evaluate the generalization performance of the final model $w_T$ in terms of the average model error over all tasks:
$G_T := \frac{1}{T}\sum_{i=1}^{T}\mathcal{L}_i(w_T).$    (7)
It is worth noting that the forgetting defined in [17] is based on the training loss, which consequently ignores the generalization performance of the learnt models on old tasks. Such a definition is not only inconsistent with the evaluation metric used in empirical studies, but also insufficient to capture backward knowledge transfer, because the value of forgetting therein cannot be negative.
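With the model error of Equation 5, both metrics reduce to averages of squared distances between learnt models and ground truths. A small sketch of how they can be computed for one realization of the learning procedure (using the `history` produced by the sequential fit above and the list of ground truths `w_stars`; the helper names are ours):

```python
import numpy as np

def model_error(w, w_star):
    # Equation 5: squared distance to the task's ground truth, which equals the
    # mean-squared error on noise-free test data in expectation.
    return np.sum((w - w_star) ** 2)

def forgetting(history, w_stars):
    # Equation 6: average degradation on old tasks after learning the last task.
    w_T = history[-1]
    return np.mean([model_error(w_T, w_stars[i]) - model_error(history[i], w_stars[i])
                    for i in range(len(history) - 1)])

def generalization_error(history, w_stars):
    # Equation 7: average model error of the final model over all tasks.
    w_T = history[-1]
    return np.mean([model_error(w_T, w_star) for w_star in w_stars])
```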
We further simplify the current setup by assuming that each task has the same number of training samples and the same noise level, stated as follows.
Assumption 3.3.
$n_t = n$ and $\sigma_t = \sigma$ for all $t \in \{1, \dots, T\}$.
4 Main Results and Interpretations
Although we use linear models, in order to provide hints for understanding DNNs, which are usually heavily overparameterized, we are particularly interested in the performance of CL in the overparameterized region ($p > n$), where we define the overparameterization ratio accordingly. For ease of exposition, we define the following coefficients that will appear in our main theorem:
(8)
where $i$ and $j$ denote the task indices. Now we are ready to state our main theorem that characterizes the expected value of forgetting and overall generalization error:
Theorem 4.1.
When , we must have
(9)
(10)
To the best of our knowledge, Theorem 4.1 is the first result that establishes the closed forms of forgetting and overall generalization error of CL in overparameterized linear models. In the rest of the paper, we will see that Theorem 4.1 not only describes how CL performs on the linear system but also provides guidance on applying CL in practice with DNNs and real-world datasets. The proof of Theorem 4.1 is in Section D.3. We also verify the correctness of Theorem 4.1 in Figure 1, where the discrete points indicated by markers (obtained from simulations) are very close to the curves (computed from Theorem 4.1 and Theorem 4.3).
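Since Theorem 4.1 characterizes expectations over the random features and noise, it can be cross-checked by Monte Carlo simulation in the same spirit as Figure 1: repeatedly draw fresh datasets, run the sequential minimum-norm updates, and average the two metrics. A sketch of such a check (the trial count and default values are illustrative):

```python
import numpy as np

def expected_metrics(w_stars, n, sigma, trials=200, seed=0):
    """Monte Carlo estimates of the expected forgetting and generalization error
    for fixed ground truths w_stars, sample size n, and noise level sigma."""
    rng = np.random.default_rng(seed)
    p = w_stars[0].shape[0]
    F, G = [], []
    for _ in range(trials):
        w, history = np.zeros(p), []
        for w_star in w_stars:
            X = rng.standard_normal((n, p))
            y = X @ w_star + sigma * rng.standard_normal(n)
            w = w + np.linalg.pinv(X) @ (y - X @ w)       # Equation 4 / Lemma B.1
            history.append(w.copy())
        w_T = history[-1]
        F.append(np.mean([np.sum((w_T - s) ** 2) - np.sum((h - s) ** 2)
                          for h, s in zip(history[:-1], w_stars[:-1])]))
        G.append(np.mean([np.sum((w_T - s) ** 2) for s in w_stars]))
    return float(np.mean(F)), float(np.mean(G))
```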
We can further simplify Equation 9 and Equation 10 by considering only two tasks, so as to better understand Theorem 4.1. The result is shown in the following corollary, which clearly characterizes the dependence on task similarity and different system parameters.
Corollary 4.2.
When and , we must have
(11)
(12)
Based on Theorem 4.1, we will provide insights on the following three aspects.
(1) Overparameterization (Section 4.1). In order to understand the generalization power of overfitted machine learning models, much attention has focused (e.g., [9, 23, 21]) on studying the impact of overparameterization on single-task learning, whereas how overparameterization affects the performance of CL still remains unclear. Fortunately, the exact forms in Theorem 4.1 provide a way to directly evaluate the impact of overparameterization and the random noise on both forgetting and generalization error in CL.
(2) Task similarity (Section 4.2). Both forgetting and generalization error depend on the gap between the optimal models of any two tasks, i.e., $\|w_i^* - w_j^*\|$ for any tasks $i$ and $j$, which defines the task similarity in this work (a smaller gap means higher similarity). Understanding the impact of task similarity helps not only to explain empirical observations but also to guide better designs of CL in practice.
(3) Task ordering (Section 4.3). Given a fixed set of tasks in CL, the learning order of the task sequence clearly plays an important role in both $F_T$ and $G_T$, through the task order-dependent coefficients in Equation 9 and Equation 10. For example, when the remaining quantities are identical across tasks, the optimal task ordering to minimize the generalization error is to learn the tasks in decreasing order of their dissimilarity to the other tasks; intuitively, the most dissimilar task should be learnt first in this case. Investigating the impact of task ordering is particularly valuable when the agent can control the task order in CL, in the same spirit as curriculum learning [11].
In what follows, we will delve into the impact of those three crucial factors in order to provide a comprehensive understanding of CL in the linear models.
[Figure 1: forgetting and generalization error versus the number of parameters; markers are obtained from simulations and curves are computed from Theorem 4.1 and Theorem 4.3.]
4.1 The impact of overparameterization
In this subsection, we show some insights about the impact of overparameterization. Specifically, we discuss what happens when the number of parameters $p$ changes while the sample size $n$ is fixed.
1) More parameters can lead to zero forgetting and alleviate the negative impact of task dissimilarity on generalization error. As shown in Theorem 4.1, when $p \to \infty$, the forgetting approaches zero and Term G2 also approaches zero. In some special cases, we can further show that Term G2 is monotonically decreasing w.r.t. $p$. A more detailed discussion can be found in Section C.3.
2) Benign overfitting exists and is easier to observe with large noise and/or low task similarity. As introduced in the related work, benign overfitting has recently been discovered and studied in linear models as a first step towards understanding why DNNs can still generalize well even when heavily overparameterized. The concepts of “benign overfitting” and “double descent” were initially proposed for a single task. We now show that such a phenomenon also exists in CL, where there is a sequence of tasks.
Notice that Theorem 4.1 is for the overparameterized region. For a precise comparison between the performance of overfitting and underfitting, we present the theoretical result for the underparameterized region in the following theorem.
Theorem 4.3.
When , we must have
We provide an intuitive explanation and a rigorous proof of Theorem 4.3 in Section D.8. As shown in Theorem 4.3, the forgetting becomes larger when the noise level is larger, and both the forgetting and the generalization error become larger when the tasks are less similar (i.e., when the optimal model gaps are larger). In contrast, in the overfitted situation, Term F2 and Term G2 in Theorem 4.1 (corresponding to task similarity) and Term F3 and Term G3 (corresponding to noise) go to zero as $p \to \infty$. This indicates that when the noise level is high and/or the task similarity is low, the performance of CL in the overparameterized situation is more likely to be better than in the underparameterized situation, i.e., benign overfitting exists and is easier to observe. This can be observed in Figure 1. For example, the blue curve corresponding to the largest noise (compared with the other curves in Figure 1(d)) and the lowest task similarity (compared with Figure 1(c)) has the deepest descent in the overparameterized region ($p > n$), which indicates that benign overfitting is easier to observe with larger noise and lower task similarity.
3) A descent floor sometimes exists for forgetting and generalization error, especially when tasks are similar and noise is low. In Equation 11, the first term first decreases and then increases as $p$ increases, while the remaining two terms decrease as $p$ increases. Thus, when the task dissimilarity and the noise level are relatively small, the trend of forgetting w.r.t. $p$ is dominated by the first term, and a descent floor of forgetting exists. On the right-hand side of Equation 12, the first term increases as $p$ increases, while the other two terms decrease as $p$ increases. Taking the derivative of Equation 12 w.r.t. $p$, we have
Here, since the leading factor is very large when $p$ is close to $n$ and decreases to zero as $p \to \infty$, we can tell that when the noise level is relatively small, the derivative will first be positive and then negative as $p$ increases. In other words, if these two tasks have a positive correlation and the noise is small, there exists a descent floor of the generalization error w.r.t. $p$. Such a phenomenon can also exist in setups beyond the two-task special case. For example, in Figure 1(a)(c), where the ground truth of each task is exactly the same, we can observe a descent floor for the small-noise cases (the orange and green curves).
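The overparameterization effects discussed in this subsection can be probed numerically by sweeping $p$ at a fixed sample size while keeping the true features fixed (zero-padding the ground truths as $p$ grows), and watching how the simulated forgetting and generalization error respond; a descent in the overparameterized region should be most visible under large noise and dissimilar tasks. A sketch reusing the `expected_metrics` helper from above (all values illustrative):

```python
import numpy as np

n, sigma, T, s = 50, 0.5, 2, 20
rng = np.random.default_rng(1)
base = [rng.standard_normal(s) / np.sqrt(s) for _ in range(T)]      # fixed true features
for p in [60, 100, 200, 400, 800]:                                  # overparameterized: p > n
    w_stars = [np.concatenate([b, np.zeros(p - s)]) for b in base]  # zero-pad to dimension p
    F, G = expected_metrics(w_stars, n, sigma, trials=200)
    print(f"p={p:4d}  forgetting={F:.4f}  generalization={G:.4f}")
```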
4.2 The impact of task similarity
Generalization error monotonically decreases with task similarity whereas forgetting may not. Based on Theorem 4.1, it can be seen that the generalization error decreases when the optimal model gap $\|w_i^* - w_j^*\|$ between any two different tasks $i$ and $j$ decreases, because of the positive coefficients in Term G2 in Equation 10. Intuitively, the generalization error of CL is smaller if the tasks are more similar to each other. In contrast, the forgetting may not change monotonically with the task similarity, because the coefficients in Term F2 in Equation 9 can be negative. To verify this result, we consider two different scenarios.
(1) Consider the case where $T = 2$. In Equation 11, the gap $\|w_1^* - w_2^*\|$ captures the task similarity between tasks 1 and 2 in terms of the optimal task models. It is clear that the forgetting increases with this gap, i.e., there is less forgetting when the two tasks are more similar.
(2) Consider the case where $T = 4$. We first assume that the optimal model of every task has the same norm, considering the overparameterized models [17]. Suppose that task 1 and task 2 share the same set of true features, which is orthogonal to the feature sets of both task 3 and task 4, i.e., $\mathcal{S}_1 = \mathcal{S}_2$ and $(\mathcal{S}_1 \cup \mathcal{S}_2) \cap (\mathcal{S}_3 \cup \mathcal{S}_4) = \emptyset$. Note that
$\|w_i^* - w_j^*\|^2 = \|w_i^*\|^2 + \|w_j^*\|^2 - 2\langle w_i^*, w_j^*\rangle,$
where $\langle w_i^*, w_j^*\rangle = 0$ if $\mathcal{S}_i \cap \mathcal{S}_j = \emptyset$. Therefore, we can control the value of $\|w_1^* - w_2^*\|$ by changing $\langle w_1^*, w_2^*\rangle$, without affecting the value of $\|w_i^* - w_j^*\|$ for any other pair of tasks. Based on Theorem 4.1, it can then be shown that increasing $\|w_1^* - w_2^*\|$, i.e., making the tasks less similar, will surprisingly decrease the forgetting.
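This second scenario can be explored numerically: place the optimal models of tasks 1 and 2 in a subspace orthogonal to that of tasks 3 and 4, and rotate $w_2^*$ relative to $w_1^*$ so that $\|w_1^* - w_2^*\|$ changes while all other pairwise gaps stay fixed. The sketch below (reusing `expected_metrics`; the coordinates and angles are illustrative assumptions) prints how the simulated forgetting moves as the gap grows:

```python
import numpy as np

p, n, sigma = 200, 50, 0.0
u  = np.zeros(p); u[0] = 1.0          # tasks 1 and 2 live in the span of coordinates 0 and 1
u2 = np.zeros(p); u2[1] = 1.0
v  = np.zeros(p); v[2] = 1.0          # tasks 3 and 4 use an orthogonal feature
for theta in [0.0, 0.5, 1.0, 1.5]:
    w1 = u
    w2 = np.cos(theta) * u + np.sin(theta) * u2   # unit norm, so gaps to w3, w4 are unchanged
    w3 = w4 = v
    F, _ = expected_metrics([w1, w2, w3, w4], n, sigma, trials=200)
    print(f"||w1*-w2*||={np.linalg.norm(w1 - w2):.2f}  forgetting={F:.4f}")
```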
4.3 The impact of task ordering
In order to investigate the impact of task ordering on the performance of CL, we assume that for every task . By ignoring the task order-independent terms in Equation 9 and Equation 10, we focus on the task order-dependent terms, i.e., Term F2 and Term G2.
1) The optimal task ordering for minimizing forgetting tends to arrange dissimilar tasks adjacently in the early stage of the sequence. As shown in Term F2, the optimal task order to minimize forgetting closely hinges on the values of the order-dependent coefficients. Based on Equation 8, the coefficient for a pair of tasks is smaller when (1) the two task indices are smaller and (2) the two indices are closer. Intuitively, this implies that task pairs with larger optimal model gaps should be learnt adjacently and with higher priority in CL, in order to minimize the impact of the task dissimilarity on the forgetting. However, finding the optimal task order in the general case is highly nontrivial due to the complex coupling of the coefficients across tasks. To verify the implication above and better understand the structure of the optimal task order, we study several special cases of the task setups.
(1) [Special case I: One vs Many] There are two categories of tasks, where tasks in the same category have the same optimal model; among the entire task set, one special task belongs to Category I while all the other tasks belong to Category II. In this case, the optimal task order is captured by the optimal learning position of the special task from Category I. We have the following result characterizing the optimal task order for Special case I.
Proposition 4.4.
Let denote the optimal order of the special task in Category I to minimize forgetting. Suppose . Then 1) can take any integer value between 2 and , depending on the value of ; 2) is non-decreasing with .
As indicated by Proposition 4.4, the special task will be learnt in the first half of the sequence, such that the task diversity in the first half is always larger than that in the second half. Besides, as the model capacity increases (larger $p$), the position of the special task moves towards the beginning of the sequence, because 1) the model is less concerned about the special task, since it is powerful enough to learn different features, and 2) the model focuses on the performance of the majority and seeks to learn more tasks from Category II at the end of the sequence for better performance.
[Figure 2: experimental results on the impact of task similarity and task ordering on forgetting.]
(2) [Special case II: Equal Occurrence] There are two different categories of tasks, where tasks in the same category have the same optimal model; in particular, the two categories contain the same number of tasks. We denote a task order by the sequence of category labels of the tasks. The following proposition characterizes the optimal task order in this case:
Proposition 4.5.
Suppose . For and , the optimal task order to minimize forgetting is the perfectly alternating order, i.e., and , where and .
Proposition 4.5 clearly shows that adjacent tasks always belong to different categories in the optimal task order, which leads to a more diverse task learning sequence. Intuitively, the alternating order maximizes the memorization of each category by repeatedly revisiting tasks from different categories. It can further be proved that the perfectly alternating order is also optimal for the setting with three different categories (Section C.4). Based on these results, we expect that such an alternating order may minimize forgetting in more general scenarios where the tasks fall into multiple categories with equal cross-category task model distances.
The findings on the optimal task order indeed share similar insights with the surprising impact of task correlation on forgetting mentioned earlier. Intuitively, learning more dissimilar tasks in the early stage facilitates the exploration of a larger feature space and expands the learnt feature space in CL, which can make the learning of similar tasks in the future much easier. Meanwhile, the impact of task similarity among the early tasks continuously diminishes as more tasks are learnt, as suggested by the coefficients in Theorem 4.1 (which can be smaller for earlier task indices). Therefore, the negative impact of learning more dissimilar tasks on forgetting is weaker when they are learnt in the early stage than when they are learnt in the late stage.
2) The optimal task orderings for minimizing forgetting and for minimizing generalization error are not always the same. Consider Special case I and Special case II. It can be shown that the optimal task orders for minimizing forgetting and for minimizing generalization error are different in Special case I but the same in Special case II. This opens up an interesting direction of finding task orders with a balanced impact on forgetting and generalization error. A more detailed discussion can be found in Section C.4.
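For small task sets such as the special cases above, the claims about optimal orderings can also be checked by brute force: enumerate all permutations of a fixed task set, estimate forgetting for each, and rank them. A sketch for a two-category example with two tasks per category (reusing `expected_metrics`; the category models are illustrative):

```python
import itertools
import numpy as np

p, n, sigma = 200, 50, 0.0
a = np.zeros(p); a[0] = 1.0           # optimal model shared by Category I tasks
b = np.zeros(p); b[1] = 1.0           # optimal model shared by Category II tasks
tasks = [a, a, b, b]                   # Special case II with two tasks per category
results = []
for order in sorted(set(itertools.permutations(range(4)))):
    w_stars = [tasks[i] for i in order]
    F, _ = expected_metrics(w_stars, n, sigma, trials=200)
    results.append((F, order))
for F, order in sorted(results)[:3]:
    print(f"order={order}  forgetting={F:.4f}")   # alternating orders should rank among the best
```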
5 Implications on CL with DNN
So far, we have explored different aspects that affect the performance of CL in overparameterized linear models. More interestingly, we will show next that Theorem 4.1 can also shed light on CL in practice with DNNs, by reflecting on recent empirical observations and guiding improved designs therein. More experimental details are in Appendix A.
5.1 Forgetting is not always monotonic with task similarity
To see if our understanding of the impact of task similarity on forgetting carries over to CL with DNNs, we conduct experiments on MNIST [29] using a convolutional neural network. More specifically, we treat each task as a binary classification problem that decides whether an image belongs to a task-specific subset of the ten digit classes of MNIST, and we control the task similarity through the degree of class overlap between the task-specific subsets; e.g., tasks $i$ and $j$ are more similar if their class subsets share more classes.
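To illustrate the task construction, the snippet below builds two such binary tasks from MNIST labels and measures their class overlap; the particular subsets are hypothetical examples, and the overlap count is the knob that controls task similarity here.

```python
def make_binary_targets(labels, positive_classes):
    """Binary target: 1 if the digit belongs to the task-specific subset, else 0."""
    return [1 if y in positive_classes else 0 for y in labels]

subset_1 = {0, 1, 2, 3, 4}              # task 1's positive classes (illustrative)
subset_2 = {2, 3, 4, 5, 6}              # task 2's positive classes: 3 classes overlap with task 1
overlap = len(subset_1 & subset_2)       # larger overlap means more similar tasks
labels = [7, 2, 1, 0, 4, 1, 4, 9]        # example MNIST labels
print(make_binary_targets(labels, subset_1), make_binary_targets(labels, subset_2), overlap)
```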
We first consider the case with two tasks, where we fix the class subset of task 1 and vary the class subset of task 2 to obtain different numbers of overlapping classes with task 1. As shown in Figure 2, both forgetting and generalization error decrease when the number of overlapping classes increases, i.e., when the two tasks become more similar, which is indeed consistent with our analysis of the overparameterized linear models in the two-task case. More interestingly, this result also agrees with some recent studies [42, 30, 17], which found that ‘intermediate task similarity’ leads to the worst forgetting in a two-task setup using various notions of task similarity (different from our definition of task similarity based on the optimal model gap), through either empirical studies or analyses of upper bounds on forgetting. We can build the connection based on the closed form of forgetting in Equation 11.
Note that in Equation 11, $\|w_1^* - w_2^*\|^2 = \|w_1^*\|^2 + \|w_2^*\|^2 - 2\langle w_1^*, w_2^*\rangle$,
and we can divide the task correlation into three cases depending on the value of $\langle w_1^*, w_2^*\rangle$: (1) $\langle w_1^*, w_2^*\rangle = 0$: the two tasks are orthogonal in the sense that they share no common features; (2) $\langle w_1^*, w_2^*\rangle > 0$: the two tasks share some common features and are ‘positively’ correlated; (3) $\langle w_1^*, w_2^*\rangle < 0$: the two tasks share some common features but are ‘negatively’ correlated. Compared to the first case where the two tasks are orthogonal, it can easily be shown that the forgetting is worse when the two tasks are negatively correlated, even though they share some common features, which indeed corresponds to ‘the intermediate task similarity’ in [42, 30, 17]. The reason is that in this case task 2 updates the model in the opposite direction to the model update of task 1, which inevitably leads to more forgetting in CL. Note that in Figure 2, the non-overlapping case means that tasks 1 and 2 are negatively correlated, because in this two-task case an image that is not in task 1's subset must be in task 2's subset. On the other hand, the forgetting can even be negative when the two tasks are positively correlated.
We next consider the case with $T = 4$, where we control the task similarity by changing the class subset of task 2 while fixing those of tasks 1, 3, and 4. Here we construct the tasks in the same spirit as the four-task scenario in Section 4.2. As shown in Figure 2, the forgetting surprisingly increases when task 1 and task 2 have more overlapping classes, which is also consistent with our analysis for the linear models. Indeed, this also echoes our earlier observation, from the study of task ordering, that forgetting can decrease when adjacent tasks are more dissimilar.
5.2 Diversify the tasks in the early stage and order dissimilar tasks adjacently
We also evaluate the impact of task ordering on forgetting in CL with DNNs, by constructing the tasks using a strategy similar to Section 5.1. More specifically, we consider two different scenarios: (1) $T = 6$, where the task sequence includes one special task and five identical tasks; (2) $T = 4$, where the task sequence includes two categories of tasks, each containing two identical tasks.
Figure 2 shows the forgetting in the first scenario w.r.t. the learning position of the special task, where the three plots correspond to three different cases. It is clear that in all three cases, the optimal position of the special task for minimizing forgetting is always in the first half of the sequence. For the second scenario, we evaluate the forgetting in Figure 2 for all six possible task orders, among which the task indices corresponding to the perfectly alternating order are marked. We can see that the smallest forgetting is also achieved by the perfectly alternating order. These results indicate that our findings in Section 4.3 for the overparameterized linear models carry over to CL with DNNs, i.e., the optimal task order should diversify the tasks in the early stage and learn more dissimilar tasks adjacently. This implication is indeed consistent with the empirical observations in recent studies [31, 10]. Note that in both plots, we normalize the forgetting w.r.t. the worst forgetting in each case.
5.3 Weight the fresher old tasks more in forward knowledge transfer
Recently, there has been increasing interest in leveraging task correlation in CL to facilitate knowledge transfer [26, 33, 32], by first selecting the old tasks most correlated with the current task and then designing algorithms to directly increase the knowledge transfer between the correlated tasks. By investigating knowledge transfer in the linear models, we show how improved algorithms can be motivated to achieve better knowledge transfer.
Given a task in CL, the forward knowledge transfer [47] in the linear model can be defined as
(13)
where $\tilde{w}_t$ denotes the model for task $t$ learnt from scratch, i.e., starting from a random initialization. Intuitively, Equation 13 characterizes the gap in the testing performance between the model learnt in CL and the model learnt from scratch, for which a positive value means that the accumulated knowledge in CL benefits the learning of the current task. As the second term in Equation 13 is independent of CL, it suffices to analyze the first term to understand the forward knowledge transfer. Based on Lemma B.2 (Appendix B), we can obtain
While it is intuitive that better forward knowledge transfer can be achieved when the optimal model gap between the current task $t$ and an old task $j$ is smaller, the impact of different old tasks on the current task is non-uniform, in the sense that a more recent old task (i.e., one with smaller $t - j$) has a larger effect on the forward knowledge transfer to task $t$. This result implies that fresher old tasks should contribute more when designing algorithms that leverage correlated old tasks to facilitate better forward knowledge transfer.
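The recency-weighting idea can be illustrated on its own, separately from the TRGP machinery discussed next: when combining the projections of a layer's weights onto several selected old-task subspaces, give larger coefficients to more recent tasks. The sketch below is only a schematic of this weighting rule, not the actual TRGP+ implementation; the exponential decay and the helper names are assumptions.

```python
import numpy as np

def recency_weights(selected_task_ids, current_task_id, decay=0.5):
    """Larger weight for old tasks closer in time to the current task; weights sum to 1."""
    raw = np.array([decay ** (current_task_id - j) for j in selected_task_ids], dtype=float)
    return raw / raw.sum()

def weighted_projection(w, subspaces, weights):
    """Combine projections of w onto each old-task subspace
    (each B has orthonormal columns spanning that task's subspace)."""
    return sum(lam * (B @ (B.T @ w)) for lam, B in zip(weights, subspaces))

# Example: old tasks 1 and 3 are selected while learning task 5; task 3 gets the larger weight.
print(recency_weights([1, 3], current_task_id=5))
```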
To verify this insight, we consider the TRGP algorithm proposed in [33]. Specifically, TRGP first selects the old tasks most correlated with the current task and reuses their knowledge through a scaled weight projection to facilitate forward knowledge transfer, where all the selected old tasks are treated equally. We slightly modify TRGP by assigning a larger weight to the selected old task that is more recent to the current task, and name the resulting method TRGP+. We evaluate its performance on standard CL benchmarks (PMNIST [35] and Split CIFAR-100 [28]) and DNN architectures. As shown in Table 1, TRGP+ outperforms TRGP in both accuracy and forgetting. Assigning a larger weight to the more recent correlated old task not only improves the forward knowledge transfer, but also increases the backward knowledge transfer by forcing the learnt model of the current task to stay closer to the models of those highly correlated old tasks.
Table 1: Comparison between TRGP and TRGP+ on PMNIST and Split CIFAR-100.

| Method | PMNIST ACC (%) | PMNIST BWT (%) | Split CIFAR-100 ACC (%) | Split CIFAR-100 BWT (%) |
| --- | --- | --- | --- | --- |
| TRGP | 96.34 | -0.8 | 74.46 | -0.9 |
| TRGP+ | 96.75 | -0.46 | 75.31 | 0.13 |
6 Conclusions
In this work, we studied CL in overparameterized linear models, where each task is a linear regression problem solved by SGD. Under the assumption that each task has a sparse linear ground truth with i.i.d. Gaussian features and noise, we derived the exact forms of both forgetting and generalization error, which build a key foundation for understanding the performance of CL. In particular, we investigated the impact of overparameterization, task similarity, and task ordering on both forgetting and generalization error. Experimental results on real datasets with DNNs indicate that our findings for linear models can be carried over to CL in practice and leveraged to develop better algorithms.
References
- [1] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139–154, 2018.
- [2] Joshua Andle and Salimeh Yasaei Sekeh. Theoretical understanding of the information flow on continual learning performance. In European Conference on Computer Vision, pages 86–101. Springer, 2022.
- [3] Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pages 322–332, 2019.
- [4] Haruka Asanuma, Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Yasuhiko Igarashi, and Masato Okada. Statistical mechanical analysis of catastrophic forgetting in continual learning with teacher and student networks. Journal of the Physical Society of Japan, 90(10):104001, 2021.
- [5] Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070, 2020.
- [6] Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 2020.
- [7] Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv preprint arXiv:1903.07571, 2019.
- [8] Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. SIAM Journal on Mathematics of Data Science, 2(4):1167–1180, 2020.
- [9] Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541–549. PMLR, 2018.
- [10] Samuel J Bell and Neil D Lawrence. The effect of task ordering in continual learning. arXiv preprint arXiv:2205.13323, 2022.
- [11] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009.
- [12] Mehdi Abbana Bennani, Thang Doan, and Masashi Sugiyama. Generalisation guarantees for continual learning with orthogonal gradient descent. arXiv preprint arXiv:2006.11942, 2020.
- [13] Xinyuan Cao, Weiyang Liu, and Santosh Vempala. Provable lifelong learning of representations. In International Conference on Artificial Intelligence and Statistics, pages 6334–6356. PMLR, 2022.
- [14] Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. arXiv preprint arXiv:1812.00420, 2018.
- [15] Xi Chen, Christos Papadimitriou, and Binghui Peng. Memory bounds for continual learning. arXiv preprint arXiv:2204.10830, 2022.
- [16] Thang Doan, Mehdi Abbana Bennani, Bogdan Mazoure, Guillaume Rabusseau, and Pierre Alquier. A theoretical analysis of catastrophic forgetting through the ntk overlap matrix. In International Conference on Artificial Intelligence and Statistics, pages 1072–1080. PMLR, 2021.
- [17] Itay Evron, Edward Moroshko, Rachel Ward, Nathan Srebro, and Daniel Soudry. How catastrophic can catastrophic forgetting be in linear regression? In Conference on Learning Theory, pages 4028–4079. PMLR, 2022.
- [18] Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, pages 3762–3773. PMLR, 2020.
- [19] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In International Conference on Machine Learning, pages 1832–1841. PMLR, 2018.
- [20] Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.
- [21] Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. The Annals of Statistics, 50(2):949–986, 2022.
- [22] Xisen Jin, Arka Sadhu, Junyi Du, and Xiang Ren. Gradient-based editing of memory examples for online task-free continual learning. Advances in Neural Information Processing Systems, 34:29193–29205, 2021.
- [23] Peizhong Ju, Xiaojun Lin, and Jia Liu. Overfitting can be harmless for basis pursuit, but only to a degree. Advances in Neural Information Processing Systems, 33:7956–7967, 2020.
- [24] Peizhong Ju, Xiaojun Lin, and Ness B Shroff. On the generalization power of overfitted two-layer neural tangent kernel models. arXiv preprint arXiv:2103.05243, 2021.
- [25] Peizhong Ju, Xiaojun Lin, and Ness B Shroff. On the generalization power of the overfitted three-layer neural tangent kernel model. arXiv preprint arXiv:2206.02047, 2022.
- [26] Zixuan Ke, Bing Liu, and Xingchang Huang. Continual learning of a mixed sequence of similar and dissimilar tasks. Advances in Neural Information Processing Systems, 33:18493–18504, 2020.
- [27] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526, 2017.
- [28] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
- [29] Yann LeCun, Bernhard Boser, John Denker, Donnie Henderson, Richard Howard, Wayne Hubbard, and Lawrence Jackel. Handwritten digit recognition with a back-propagation network. Advances in neural information processing systems, 2, 1989.
- [30] Sebastian Lee, Sebastian Goldt, and Andrew Saxe. Continual learning in the teacher-student setup: Impact of task similarity. In International Conference on Machine Learning, pages 6109–6119. PMLR, 2021.
- [31] Yingcong Li, Mingchen Li, M Salman Asif, and Samet Oymak. Provable and efficient continual representation learning. arXiv preprint arXiv:2203.02026, 2022.
- [32] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. Beyond not-forgetting: Continual learning with backward knowledge transfer. In Thirty-Sixth Conference on Neural Information Processing Systems, 2022.
- [33] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. Trgp: Trust region gradient projection for continual learning. Tenth International Conference on Learning Representations, ICLR 2022, 2022.
- [34] Hao Liu and Huaping Liu. Continual learning with recursive gradient optimization. arXiv preprint arXiv:2201.12522, 2022.
- [35] David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30:6467–6476, 2017.
- [36] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109–165. Elsevier, 1989.
- [37] Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019.
- [38] Partha P Mitra. Understanding overfitting peaks in generalization error: Analytical risk curves for and penalized interpolation. arXiv preprint arXiv:1906.03667, 2019.
- [39] Vidya Muthukumar, Kailas Vodrahalli, and Anant Sahai. Harmless interpolation of noisy data in regression. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 2299–2303. IEEE, 2019.
- [40] Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, and Anant Sahai. Harmless interpolation of noisy data in regression. IEEE Journal on Selected Areas in Information Theory, 1(1):67–83, 2020.
- [41] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54–71, 2019.
- [42] Vinay V Ramasesh, Ethan Dyer, and Maithra Raghu. Anatomy of catastrophic forgetting: Hidden representations and task semantics. arXiv preprint arXiv:2007.07400, 2020.
- [43] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910, 2018.
- [44] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. In International Conference on Learning Representations, 2021.
- [45] Siddhartha Satpathi and R Srikant. The dynamics of gradient descent for overparametrized neural networks. In Learning for Dynamics and Control, pages 373–384. PMLR, 2021.
- [46] Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In International Conference on Machine Learning, pages 4548–4557. PMLR, 2018.
- [47] Tom Veniat, Ludovic Denoyer, and Marc’Aurelio Ranzato. Efficient continual learning with modular networks and task-driven priors. arXiv preprint arXiv:2012.12631, 2020.
- [48] Li Yang, Sen Lin, Junshan Zhang, and Deliang Fan. Grown: Grow only when necessary for continual learning. arXiv preprint arXiv:2110.00908, 2021.
- [49] Dong Yin, Mehrdad Farajtabar, Ang Li, Nir Levine, and Alex Mott. Optimization and generalization of regularization-based continual learning: a loss approximation viewpoint. arXiv preprint arXiv:2006.10974, 2020.
- [50] Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang. Scalable and order-robust continual learning with additive parameter decomposition. In Eighth International Conference on Learning Representations, ICLR 2020. ICLR, 2020.
Appendix A Experimental Details
A.1 Experimental details for Section 5.1 and Section 5.2
Datasets. We consider the MNIST dataset. For each task, we randomly select 200 samples for training and 1000 samples for testing. Different tasks have different subsets of classes.
DNN architecture and training details. We use a five-layer neural network with two convolutional layers and three fully-connected layers. ReLU is used for the first four layers and a sigmoid is used for the last layer. The first convolutional layer is followed by a 2D max-pooling operation with a stride of 2. We learn each task using SGD (with a fixed learning rate) for 600 epochs. The forgetting and overall generalization error are evaluated as in Equation 6 and Equation 7, respectively, except that here the model error is the mean squared test error rather than Equation 5.
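A sketch of a network matching this description (two convolutional layers followed by three fully-connected layers, ReLU activations except a final sigmoid, and max pooling after the first convolution); the channel counts and hidden sizes are illustrative assumptions rather than the exact values used in our experiments.

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),        # 2D max pooling with stride 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, 128), nn.ReLU(),      # 28x28 MNIST input, pooled once to 14x14
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),               # binary output for each task
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```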
Task setups. For Figure 2, we consider the following two-task setup:
- Task 1: a fixed class subset.
- Task 2: six different class subsets, which correspond to different numbers of overlapping classes with task 1.
For Figure 2 (the four-task similarity experiment), we randomly select three different setups (‘forgetting_0’, ‘forgetting_1’, and ‘forgetting_2’). In each setup, tasks 1, 3, and 4 use fixed class subsets, and task 2 takes four different class subsets corresponding to different numbers of overlapping classes with task 1.
For Figure 2 (the special-task ordering experiment), we randomly select three different setups (‘forgetting_0’, ‘forgetting_1’, and ‘forgetting_2’). In each setup, the special task uses one class subset and all the other tasks share a different class subset.
For Figure 2 (the two-category ordering experiment), we randomly select three different setups (‘forgetting_0’, ‘forgetting_1’, and ‘forgetting_2’). In each setup, the two task categories use two different class subsets, and the task order indices ‘0’–‘5’ enumerate the six possible orders of the four tasks.
A.2 Experimental details for Section 5.3
A.2.1 TRGP vs TRGP+
TRGP [33] seeks to solve the following optimization problem for the current task $t$:
(14)
where $W^l$ denotes the DNN weights of layer $l$, and the input subspace of layer $l$ for an old task can be constructed by applying SVD to the representation matrix of that layer. Two important designs are introduced in Equation 14:
- The trust region: the set of old tasks most correlated with task $t$, selected in a layer-wise manner based on a correlation evaluation metric. The purpose is to select the most correlated old tasks and facilitate forward knowledge transfer by reusing their learnt knowledge.
- The scaled weight projection: developed to reuse the learnt models of the selected old tasks in the trust region. Specifically, the projection of the weights onto the input subspace of a selected old task is scaled by a learnable scaling matrix, in contrast to the standard (unscaled) weight projection. Since the learnt knowledge of an old task is captured by the weight projection onto that task's input subspace, scaling this projection provides a way to reuse the knowledge directly when learning task $t$. Intuitively, the scaled projection characterizes the boosted forward knowledge transfer from the old task to task $t$.
However, as shown in Equation 14, all the selected old tasks in the trust region are treated equally in the effective weight, which could be suboptimal. As suggested by our theoretical results, we propose a slightly modified version of TRGP, i.e., TRGP+, by assigning non-uniform weights to the most correlated old tasks selected in the trust region:
(15)
where a larger weight is assigned to the selected old task that is more recent to the current task.
A.2.2 Experimental setup
Datasets. We consider two standard benchmarks in CL: (1) PMNIST: 10 sequential tasks are created using different permutations, where each task has 10 classes; (2) Split CIFAR-100: the entire CIFAR-100 dataset is split into 10 groups, where each task is a 10-way multi-class classification problem on one group.
DNN architecture and training details. Following [33], we use a 3-layer fully-connected network with 2 hidden layers of 100 units each for PMNIST, and train the network for 5 epochs with a batch size of 10 for each task. For Split CIFAR-100, we use a version of 5-layer AlexNet, and train the network for a maximum of 200 epochs with early stopping for each task. The two most correlated old tasks are selected for the current task in each layer, and we assign a larger weight to the more recent of the two old tasks.
Evaluation metrics. The performance is evaluated based on ACC, the average final accuracy over all tasks, and backward transfer (BWT), which measures the forgetting of old tasks when learning new tasks. Specifically, ACC and BWT are defined as:
$ACC = \frac{1}{T}\sum_{i=1}^{T} R_{T,i}, \qquad BWT = \frac{1}{T-1}\sum_{i=1}^{T-1}\left(R_{T,i} - R_{i,i}\right),$    (16)
where $R_{j,i}$ is the accuracy of the model on the $i$-th task after sequentially learning the $j$-th task.
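Given the matrix of test accuracies $R$, where $R_{j,i}$ is the accuracy on task $i$ after training on task $j$, the two metrics can be computed as below (the 3-task accuracy matrix is hypothetical):

```python
import numpy as np

def acc_and_bwt(R):
    """R[j, i]: accuracy on task i after sequentially learning task j (0-indexed)."""
    T = R.shape[0]
    acc = np.mean(R[T - 1, :])                              # average final accuracy (Equation 16)
    bwt = np.mean(R[T - 1, :T - 1] - np.diag(R)[:T - 1])    # backward transfer (Equation 16)
    return acc, bwt

R = np.array([[0.95, 0.00, 0.00],
              [0.92, 0.96, 0.00],
              [0.90, 0.94, 0.97]])
print(acc_and_bwt(R))
```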
Appendix B Useful Lemmas
The following lemma characterizes the solution to the optimization problem Equation 4 for task :
Lemma B.1.
The solution to the optimization problem Equation 4, i.e., the learnt model for task $t$, is given by
$w_t = w_{t-1} + X_t^\top (X_t X_t^\top)^{-1}(y_t - X_t w_{t-1}).$    (17)
In the overparameterized case, multiple models exist that perfectly fit the training data of task $t$, and solving Equation 4 picks the one with minimum distance to $w_{t-1}$. Therefore, the solution in Equation 17 not only incorporates the information of the current task through $(X_t, y_t)$ but also depends on the previous model evolution trajectory in CL.
By leveraging the recent advances in [8], we have the following lemma about the evolution of the model error across tasks:
Lemma B.2.
Suppose . For any task and any old task , the following equation holds:
Appendix C Additional Results
C.1 Characterization of negative forgetting
As shown in Figure 2, the forgetting can even be negative when the two tasks are positively correlated. Intuitively, because the common features play similar roles in the two tasks, task 2 updates the model in a direction favorable to task 1, which could even result in better performance on task 1 due to the backward knowledge transfer herein. A formal quantification of the condition for better performance on task 1 can be found in the following proposition:
Proposition C.1.
Suppose and . The learning of task 2 would lead to a better model for task 1, i.e., , if
C.2 Evolution of forgetting
We can also characterize the evolution of forgetting after learning new tasks. Based on the definition of forgetting, we have
Rearranging the above equations gives
Based on the relationship between and characterized in Lemma B.2, it can be seen that
such that
(18) |
Let in Lemma B.2. We can show that
(19) |
By substituting Section C.2 back to Section C.2, we can have
(20) |
C.3 Impact of overparameterization
1) Forgetting approaches zero with more parameters. In Equation 9, as $p \to \infty$, the right-hand side vanishes, so we can conclude that the expected forgetting approaches zero when $p \to \infty$. An intuitive explanation is that with more parameters, the model has a larger “memory”, such that it can retain all the knowledge of previous tasks, i.e., zero forgetting.
2) More parameters can alleviate the negative impact of task dissimilarity on generalization error. Term G2 in Equation 10 describes the effect of task dissimilarity on the generalization error. When $p \to \infty$, Term G2 approaches zero, which indicates that the negative impact of task dissimilarity on the generalization error diminishes. In some special cases, we can further show that Term G2 is monotonically decreasing with respect to $p$, e.g., as shown in Equation 12. A more general case, which for general $T$ holds if the ground truth of each task has the same power and is orthogonal to the others, also yields a Term G2 that is monotonically decreasing w.r.t. $p$.
C.4 Impact of task order
(1) [Special case III] There are three categories of tasks: each category contains the same number of tasks; tasks are the same within a category but different across categories. Without loss of generality, we assume that for any task and
Based on Theorem 4.1, we can show that the optimal task order for Special case III follows a similar structure of that for Special case II, as characterized in the following proposition:
Proposition C.2.
Suppose . For , the optimal task order to minimize forgetting is the perfectly alternating order, i.e., , where , , and .
(2) [The optimal task order can be different for minimizing forgetting and generalization error]
[Special case I] As shown in Proposition 4.4, the optimal task order to minimize forgetting is to learn the special task at a position within the first half of the sequence. In stark contrast, this special task, which has the largest total model gap to the other tasks, should always be learnt in the very first place in order to minimize the generalization error. The underlying rationale is that the generalization error characterizes the average testing performance of the final model on all tasks, which is best when the final model works best for the majority. Therefore, in this case the optimal order for minimizing forgetting is different from that for minimizing the generalization error.
[Special case II] As shown in Proposition 4.5, the optimal task order to minimize forgetting is the perfectly alternating order. In contrast, the task order does not affect the generalization performance in this case, because the corresponding quantity is the same for every task. In this case, the optimal task order for minimizing forgetting is also ‘optimal’ for minimizing the generalization error. That is to say, we can find a task order that minimizes forgetting and generalization error simultaneously.
Appendix D Proofs
D.1 Proof of Lemma B.1
Let . It is clear that Equation 4 can be reformulated as
(21) | ||||
For the overparameterized case, $X_t X_t^\top$ is invertible (with probability 1). Using Lagrange multipliers, we can get
By setting the derivative w.r.t. to 0, it follows that
(22) |
such that
Therefore,
(23) |
D.2 Proof of Lemma B.2
Let and for any , where characterizes the projection onto the row space of . Based on Lemma B.1, we have
(24) |
Intuitively, the learnt model for task $t$ is an ‘interpolation’ between the learnt model for task $t-1$ and the optimal model for task $t$, perturbed by the random noise.
Let . Based on Equation 24, we can know that
(25) |
(1) For the term (a), we have
(26) |
where (a) is because of the orthogonality between and , and (b) is due to the Pythagorean theorem.
Because is the orthogonal projection matrix for the row space of , based on the rotational symmetry of the standard normal distribution, it follows that
(27) |
and
(28) |
since is independent with .
By substituting Equation 27 and Equation 28 back to Section D.2, we can obtain that
(29) |
(2) For the term (b), we have
Because is the projection onto the null space of and is a vector in the row space of , it follows that
(30) |
And since
we can know that
(31) |
(3) For the term (c), we apply the “trace trick” by following [8]. Specifically, it can be first seen that
Due to the independence between and the random noise , we can have that
Since follows the inverse-Wishart distribution with identity scale matrix and degrees-of-freedom, and each diagonal entry of has a reciprocal that follows the distribution with degrees-of-freedom. Therefore, for ,
such that
(32) |
Lemma B.2 can be proved by substituting Equation 29, Equation 31 and Equation 32 to Section D.2.
D.3 Proof of Theorem 4.1
Based on Lemma B.2, we can have that
(33) |
Let in Section D.3. We have
(34) |
Based on Section D.3 and Section D.3, we can obtain the closed form of :
Based on Section D.3, we can also obtain the exact form of the generalization error. Specifically,
such that
D.4 Proof of Proposition C.1
D.5 Proof of Proposition 4.4
Without loss of generality, we assume that for task in Category I and task in Category II. It follows that
Letting . Then minimizing is equivalent to minimize
By setting the derivative w.r.t. to , we can have that the optimal value of is
(35) |
which is clearly increasing with . Therefore, the optimal order of the special task is non-increasing with , i.e., non-decreasing with .
D.6 Proof of Proposition 4.5
Without loss of generality, we assume that for any task and
Based on the closed form of forgetting, we can see that it suffices to minimize the order-dependent term in order to minimize the forgetting. Besides, whenever we change the order between the corresponding pair of tasks, the value of the remaining term does not change. In other words, only the order-dependent term affects the optimal task order, and it should be minimized.
(1) For the case $T = 4$, there are three effective task orders: (i) learning the two tasks of one category first and then the two tasks of the other; (ii) the perfectly alternating order; (iii) learning one task of the first category, then both tasks of the second category, and finally the remaining task of the first category. Swapping all tasks of one category with all tasks of the other does not change the value of forgetting, e.g., an order and its category-swapped counterpart have the same forgetting. In what follows, we compare these three orders.
(a) For ,
(b) For ,
(c) For ,
It is clear that the alternating task order, i.e., and , is the optimal order for this special case.
(2) For the second case in Proposition 4.5, based on the closed form of forgetting in Theorem 4.1, we can verify numerically that, besides the perfectly alternating task order, there are 10 effective task orders, as illustrated in Table 2. We further evaluate the difference in forgetting between each task order in Table 2 and the perfectly alternating task order, where a positive difference means that the corresponding task order leads to larger forgetting than the perfectly alternating order. It can be verified that the difference in forgetting is positive for all the task orders in Table 2, which indicates that the optimal task order is the perfectly alternating one.
Index | Order | Difference of forgetting |
---|---|---|
1 | ||
2 | ||
3 | ||
4 | ||
5 | ||
6 | ||
7 | ||
8 | ||
9 | ||
10 |
D.7 Proof of Proposition C.2
Following the same strategy as for Special case II, Table 3 shows all effective task orders and their difference in forgetting relative to the perfectly alternating task order and its ‘equivalent’ task orders. It can also be verified that the perfectly alternating task order is the optimal task order in this case.
Index | Order | Difference of forgetting |
---|---|---|
1 | ||
2 | ||
3 | ||
4 | ||
5 | ||
6 | ||
7 | ||
8 | ||
9 | ||
10 | ||
11 | ||
12 | ||
13 | ||
14 | ||
15 |
D.8 Proof of Theorem 4.3
Intuitive explanation of Theorem 4.3: In the underparameterized region, minimizing the loss in Equation 3 for the current task leads to a unique solution for this task, which does not depend on the learning process or the learned models of previous tasks. That is to say, task learning is independent across tasks, such that (i) the learning order of the first $T-1$ tasks does not matter, and (ii) both the forgetting and the generalization performance depend only on the model distance between the last task and the other tasks.
Now we formally prove Theorem 4.3.
For the underparameterized regime, the solution of minimizing the training loss is
It follows that
such that the model error for the -th task can be represented as:
By taking expectation on both sides, we can have
Therefore, it can be shown that
and