A central assumption of classical statistical learning is that training data are drawn i.i.d. from a fixed distribution. Real-world systems routinely violate this: a medical diagnosis model must incorporate new disease variants without forgetting old ones; a language model deployed in production should update on new domains without degrading on existing benchmarks; a robot should accumulate motor skills over its lifetime rather than relearning each task from scratch. Continual learning (also called lifelong learning or sequential learning) is the study of how to learn from a non-stationary stream of data or tasks while retaining previously acquired knowledge (De Lange et al., 2021).
## Problem Formulation

### Task Sequences and the Forgetting Problem

A continual learning agent encounters a sequence of tasks $\mathcal{T}_1, \ldots, \mathcal{T}_T$. Each task $t$ provides a dataset $\mathcal{D}_t = \{(x_i, y_i)\}_{i=1}^{n_t}$ drawn from a task-specific distribution $P_t(x, y)$. The agent has access to $\mathcal{D}_t$ only while training on task $t$; it cannot revisit $\mathcal{D}_1, \ldots, \mathcal{D}_{t-1}$ in full. The goal is to find parameters $\theta$ that perform well across all tasks simultaneously:

$$
\min_\theta \; \sum_{t=1}^{T} \mathbb{E}_{(x, y) \sim P_t}\big[\ell(f_\theta(x), y)\big].
$$

The difficulty is that naive sequential training — minimizing the current-task loss $\mathcal{L}_t(\theta)$ at each step with gradient descent — causes catastrophic forgetting (McCloskey & Cohen, 1989): optimizing for the current task moves the parameters into a region that performs poorly on previous tasks, and the loss of prior knowledge can be abrupt and nearly total.
### Scenarios

Three standard scenarios differ in what information is available at test time:

- **Task-incremental:** the task identity is provided at test time, so the model only needs to discriminate within each task.
- **Domain-incremental:** the input distribution shifts across tasks but the output space is shared; task identity is not provided.
- **Class-incremental:** new classes are introduced at each task and task identity is unknown at test time — the hardest setting, requiring the model to discriminate across all classes seen so far.
### Evaluation Metrics

After training on all tasks, let $a_{i,j}$ denote the accuracy on task $j$ immediately after training on task $i$. Key metrics are:

- **Average accuracy:** $\mathrm{ACC} = \frac{1}{T} \sum_{j=1}^{T} a_{T,j}$ — overall performance after all tasks.
- **Backward transfer (forgetting):** $\mathrm{BWT} = \frac{1}{T-1} \sum_{j=1}^{T-1} \big(a_{T,j} - a_{j,j}\big)$ — how much performance on earlier tasks degrades after further training. Negative BWT indicates forgetting; positive BWT indicates that later learning improves earlier tasks.
- **Forward transfer:** $\mathrm{FWT} = \frac{1}{T-1} \sum_{j=2}^{T} \big(a_{j-1,j} - b_j\big)$ — how much prior learning helps on new tasks, where $b_j$ is the accuracy of a randomly initialized baseline on task $j$.
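A minimal NumPy sketch of these three metrics, assuming an accuracy matrix `acc` with `acc[i, j]` the accuracy on task $j$ after training through task $i$, and a vector `baseline` of random-initialization accuracies (both names are illustrative, not from the source):

```python
import numpy as np

def continual_metrics(acc, baseline):
    """acc: (T, T) matrix, acc[i, j] = accuracy on task j after training task i.
    baseline: (T,) accuracies of a randomly initialized model on each task."""
    T = acc.shape[0]
    avg_acc = acc[-1].mean()                                   # ACC: average of the last row
    bwt = np.mean(acc[-1, :-1] - np.diag(acc)[:-1])            # BWT: final vs. just-trained accuracy
    fwt = np.mean(acc[np.arange(T - 1), np.arange(1, T)] - baseline[1:])  # FWT vs. baseline
    return avg_acc, bwt, fwt
```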
## Catastrophic Forgetting

### Why Gradient Descent Forgets

Let $\theta^*_{1:t-1}$ minimize the loss on all tasks seen so far. When we minimize $\mathcal{L}_t(\theta)$ starting from $\theta^*_{1:t-1}$, the gradient $\nabla_\theta \mathcal{L}_t$ points in a direction that reduces the current task loss. Nothing in this gradient respects the curvature of the previous losses $\mathcal{L}_1, \ldots, \mathcal{L}_{t-1}$: the step may move the parameters into a region of high loss for earlier tasks.
The severity of forgetting depends on task similarity and parameter overlap. If tasks use largely disjoint subsets of parameters, interference is small. If the same parameters are critical for multiple tasks, any update for one task can disrupt the others.
### The Stability–Plasticity Dilemma

Continual learning requires balancing two competing pressures:

- **Plasticity:** the ability to rapidly acquire new knowledge and adapt to new tasks.
- **Stability:** the ability to retain previously acquired knowledge against interference.
A model that is maximally stable (e.g., frozen weights) cannot learn new tasks. A model that is maximally plastic (e.g., standard SGD) forgets immediately. Effective continual learning algorithms navigate the trade-off between these two extremes.
## Regularization-Based Methods

Regularization methods augment the loss for each new task with a penalty that discourages large changes to parameters that were important for past tasks.

### Elastic Weight Consolidation
Elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) approximates the posterior over parameters after task $t$ as a Gaussian centered on the MAP estimate $\theta^*_t$:

$$
p(\theta \mid \mathcal{D}_{1:t}) \;\approx\; \mathcal{N}\!\big(\theta \mid \theta^*_t, \, F_t^{-1}\big),
$$

where $F_t$ is the Fisher information matrix, approximated by its diagonal:

$$
F_{t,i} \;=\; \mathbb{E}_{(x, y) \sim \mathcal{D}_t}\!\left[\left(\frac{\partial \log p_\theta(y \mid x)}{\partial \theta_i}\right)^{\!2}\right]_{\theta = \theta^*_t}.
$$

The diagonal Fisher measures how much the log-likelihood changes when $\theta_i$ moves away from $\theta^*_{t,i}$: large $F_{t,i}$ means $\theta_i$ is important for task $t$ and should be protected.

Using the previous posterior as a prior for the new task and taking the MAP gives the EWC loss:

$$
\mathcal{L}_{\mathrm{EWC}}(\theta) \;=\; \mathcal{L}_t(\theta) \;+\; \frac{\lambda}{2} \sum_i F_{t-1,i}\,\big(\theta_i - \theta^*_{t-1,i}\big)^2.
$$
This is a weighted proximal penalty: parameters with high Fisher weight are kept close to their previous values, while unimportant parameters are free to change.
For a sequence of tasks, the Fisher from all previous tasks accumulates. In the online EWC variant (Schwarz et al., 2018), a single running estimate of the Fisher is maintained rather than storing one Fisher matrix per task.
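A sketch of the two EWC ingredients in PyTorch, under the common empirical-Fisher approximation (squared gradients of the observed labels); the helper names `diagonal_fisher`, `ewc_penalty`, and `old_params` are illustrative rather than from the original paper:

```python
import torch
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    """Diagonal Fisher estimate at the current parameters (coarse, batch-level)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=-1), y)  # negative log-likelihood
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2              # squared log-likelihood gradients
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """(lambda / 2) * sum_i F_i (theta_i - theta*_i)^2, added to the new task's loss."""
    penalty = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                  for n, p in model.named_parameters() if n in fisher)
    return 0.5 * lam * penalty
```

Here `old_params` would hold detached copies of the previous task's MAP parameters; training on the new task then minimizes the task loss plus this penalty.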
### Synaptic Intelligence

Synaptic intelligence (SI) (Zenke et al., 2017) estimates parameter importance online during training rather than post hoc. It tracks the cumulative contribution of each parameter to the decrease in the loss along the optimization trajectory. For a parameter $\theta_i$ moving from $\theta_i^{\mathrm{start}}$ to $\theta_i^{\mathrm{end}}$ during training on task $t$, the importance is:

$$
\omega_i^{(t)} \;=\; \frac{\sum_k \big(-g_{i,k}\big)\, \Delta\theta_{i,k}}{\big(\theta_i^{\mathrm{end}} - \theta_i^{\mathrm{start}}\big)^2 + \xi},
$$

where the sum is over optimizer steps $k$, $\Delta\theta_{i,k}$ is the parameter update at step $k$, $g_{i,k} = \partial \mathcal{L}_t / \partial \theta_i$ is the gradient at step $k$, and $\xi$ is a small damping constant. The numerator is the inner product of the parameter path with the negative gradient — large when the parameter moved in directions that actually reduced the loss. The denominator normalizes by the squared total displacement $\big(\theta_i^{\mathrm{end}} - \theta_i^{\mathrm{start}}\big)^2$.

Accumulated importance across tasks, $\Omega_i = \sum_{\tau \le t} \omega_i^{(\tau)}$, defines the regularization:

$$
\mathcal{L}_{\mathrm{SI}}(\theta) \;=\; \mathcal{L}_t(\theta) \;+\; c \sum_i \Omega_i\, \big(\theta_i - \theta^*_{t-1,i}\big)^2.
$$
SI requires no additional forward passes or Hessian computations, making it computationally lighter than EWC.
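A sketch of the online bookkeeping SI needs, assuming a standard PyTorch training loop; the class and method names are illustrative. `before_step` would be called after `loss.backward()` and `after_step` after `optimizer.step()`, so the stored gradient and the realized update refer to the same step:

```python
import torch

class SITracker:
    """Online synaptic-intelligence bookkeeping (illustrative sketch)."""

    def __init__(self, model, damping=1e-3):
        self.model, self.damping = model, damping
        params = dict(model.named_parameters())
        self.w = {n: torch.zeros_like(p) for n, p in params.items()}      # running path integral
        self.omega = {n: torch.zeros_like(p) for n, p in params.items()}  # accumulated importance
        self.start = {n: p.detach().clone() for n, p in params.items()}   # params at task start
        self.grads, self.prev = {}, {}

    def before_step(self):
        # snapshot gradient and parameters just before the optimizer step
        for n, p in self.model.named_parameters():
            self.grads[n] = p.grad.detach().clone() if p.grad is not None else None
            self.prev[n] = p.detach().clone()

    def after_step(self):
        # accumulate -g * delta_theta, the step's contribution to the loss decrease
        for n, p in self.model.named_parameters():
            if self.grads.get(n) is not None:
                self.w[n] += -self.grads[n] * (p.detach() - self.prev[n])

    def end_task(self):
        # convert the path integral into per-parameter importance Omega_i
        for n, p in self.model.named_parameters():
            disp = p.detach() - self.start[n]
            self.omega[n] += self.w[n] / (disp ** 2 + self.damping)
            self.w[n].zero_()
            self.start[n] = p.detach().clone()
```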
## Bayesian Continual Learning

The Bayesian perspective gives a principled foundation for regularization-based continual learning and connects it to the sequential inference algorithms we have studied throughout this course.

### Sequential Bayes

If tasks arrive sequentially and the parameters $\theta$ are shared, the exact Bayesian update after observing $\mathcal{D}_t$ is:

$$
p(\theta \mid \mathcal{D}_{1:t}) \;\propto\; p(\mathcal{D}_t \mid \theta)\; p(\theta \mid \mathcal{D}_{1:t-1}).
$$

This is sequential Bayes: the posterior after task $t-1$ becomes the prior for task $t$. If we could maintain the exact posterior, there would be no forgetting — all past information is encoded in the current prior. The challenge is computational: the exact posterior is generally intractable and grows in complexity with each new task.
### Bayesian Online Linear Regression

Many continual learning problems reduce, in their simplest form, to online estimation of a shared parameter vector from a streaming sequence of observations. When the model is linear and the noise is Gaussian, sequential Bayes is tractable and exactly equivalent to Kalman filtering — the same algorithm used for state-space models in Linear Dynamical Systems.

**Setup.** Suppose we observe a stream of regression pairs $(x_n, y_n)$, where $x_n \in \mathbb{R}^d$ and $y_n \in \mathbb{R}$, generated by a linear model:

$$
y_n = x_n^\top w + \varepsilon_n, \qquad \varepsilon_n \sim \mathcal{N}(0, \sigma^2).
$$

We place a Gaussian prior $w \sim \mathcal{N}(\mu_0, \Sigma_0)$ on the shared parameter $w$, and the goal is to maintain the posterior $p(w \mid x_{1:n}, y_{1:n})$ after each new pair arrives, without storing the full data history.
**Equivalence with the Kalman filter.** Treating $w$ as the hidden state of a linear dynamical system reduces this problem exactly to Kalman filtering:

| LDS quantity | Regression interpretation |
|---|---|
| State $z_n = w$ | Parameter vector $w$ (shared, constant) |
| Dynamics $A = I$, $Q = 0$ | Parameters do not change across observations |
| Emission matrix $C_n = x_n^\top$ | Time-varying: the feature vector at step $n$ |
| Emission noise $R = \sigma^2$ | Observation noise variance |
The dynamics are static (identity transition, zero process noise), so the Kalman predict step is trivial: the predicted distribution equals the filtered distribution from the previous step. Only the update step is active. Starting from $p(w \mid x_{1:n-1}, y_{1:n-1}) = \mathcal{N}(\mu_{n-1}, \Sigma_{n-1})$, conditioning on $(x_n, y_n)$ gives:

$$
\begin{aligned}
k_n &= \frac{\Sigma_{n-1} x_n}{x_n^\top \Sigma_{n-1} x_n + \sigma^2}, \\
\mu_n &= \mu_{n-1} + k_n \big(y_n - x_n^\top \mu_{n-1}\big), \\
\Sigma_n &= \Sigma_{n-1} - k_n x_n^\top \Sigma_{n-1}.
\end{aligned}
$$

The innovation $y_n - x_n^\top \mu_{n-1}$ is the residual from the current prediction, $x_n^\top \Sigma_{n-1} x_n + \sigma^2$ is its variance (a scalar because $y_n$ is scalar), and $k_n$ is the Kalman gain. In the information form, defining the precision $\Lambda_n = \Sigma_n^{-1}$ and information vector $\eta_n = \Lambda_n \mu_n$, the update is simply additive (by the Sherman–Morrison identity):

$$
\Lambda_n = \Lambda_{n-1} + \frac{x_n x_n^\top}{\sigma^2}, \qquad \eta_n = \eta_{n-1} + \frac{y_n\, x_n}{\sigma^2}.
$$

After $N$ steps, $\Lambda_N = \Sigma_0^{-1} + \frac{1}{\sigma^2} \sum_{n=1}^{N} x_n x_n^\top$, which is exactly the posterior precision from batch Bayesian linear regression on all $N$ observations. Sequential and batch inference are exactly equivalent for this model — no approximation is introduced by processing data one point at a time.
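A short NumPy check of this equivalence, assuming a standard-normal prior ($\mu_0 = 0$, $\Sigma_0 = I$) and synthetic data; the information-form recursion recovers the batch posterior mean exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, sigma2 = 3, 200, 0.25
w_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=N)

# Prior w ~ N(0, I) in information form
Lambda = np.eye(d)          # prior precision  Sigma_0^{-1}
eta = np.zeros(d)           # prior information vector  Sigma_0^{-1} mu_0

# Sequential (information-form) update, one observation at a time
for x_n, y_n in zip(X, y):
    Lambda += np.outer(x_n, x_n) / sigma2
    eta += y_n * x_n / sigma2
mu_seq = np.linalg.solve(Lambda, eta)

# Batch Bayesian linear regression posterior mean for comparison
Lambda_batch = np.eye(d) + X.T @ X / sigma2
mu_batch = np.linalg.solve(Lambda_batch, X.T @ y / sigma2)

assert np.allclose(mu_seq, mu_batch)   # sequential == batch, no approximation
```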
**Drifting parameters.** Static parameters are often unrealistic: the true data-generating process may shift over time. A natural generalization introduces a random-walk prior on $w$, i.e., $w_n = w_{n-1} + \zeta_n$ with $\zeta_n \sim \mathcal{N}(0, Q)$. The Kalman predict step then inflates the covariance before each update:

$$
\Sigma_{n \mid n-1} = \Sigma_{n-1} + Q.
$$

This is a form of controlled forgetting: the additional uncertainty injected at each step means older observations have less influence on the current estimate. Setting $Q = (\lambda^{-1} - 1)\, \Sigma_{n-1}$ (equivalently, $\Sigma_{n \mid n-1} = \Sigma_{n-1} / \lambda$ for a forgetting factor $\lambda \in (0, 1)$) corresponds to exponential discounting of past data, with an effective memory horizon of roughly $1 / (1 - \lambda)$ observations.
### Variational Continual Learning

Variational continual learning (VCL) (Nguyen et al., 2018) maintains a variational approximation $q_t(\theta) \approx p(\theta \mid \mathcal{D}_{1:t})$ at each step. The variational objective at task $t$ is:

$$
\mathcal{L}_{\mathrm{VCL}}(q_t) \;=\; \mathbb{E}_{q_t(\theta)}\big[-\log p(\mathcal{D}_t \mid \theta)\big] \;+\; \mathrm{KL}\!\big(q_t(\theta) \,\big\|\, q_{t-1}(\theta)\big),
$$

where $q_{t-1}(\theta)$ is the approximate posterior from the previous task, used as the new prior. Using a Gaussian mean-field approximation $q_t(\theta) = \prod_i \mathcal{N}(\theta_i \mid \mu_{t,i}, \sigma_{t,i}^2)$, the variational parameters are updated by minimizing $\mathcal{L}_{\mathrm{VCL}}$ via the reparameterization gradient.

The KL term in $\mathcal{L}_{\mathrm{VCL}}$ plays the role of the EWC penalty: it discourages $q_t$ from drifting far from $q_{t-1}$, weighted by the inverse variance of the previous posterior. When the previous posterior is the Laplace approximation $q_{t-1}(\theta) = \mathcal{N}\big(\theta \mid \theta^*_{t-1}, \operatorname{diag}(F_{t-1})^{-1}\big)$ and $q_t$ is collapsed to a point estimate $\theta$, the KL penalty reduces exactly to:

$$
\frac{1}{2} \sum_i F_{t-1,i}\, \big(\theta_i - \theta^*_{t-1,i}\big)^2 \;+\; \text{const}.
$$
This reveals that EWC is the MAP limit of VCL under a Laplace approximation to the posterior, with the Fisher information playing the role of the prior precision.
VCL also integrates naturally with coreset methods: a small set of representative data points from each past task is stored and used to refine the variational posterior, improving on the Laplace approximation when the posterior is non-Gaussian.
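For a mean-field Gaussian posterior, the KL regularizer has the familiar closed form; a small PyTorch sketch (tensor names are illustrative), which could be summed over all weight tensors and added to the expected negative log-likelihood:

```python
import torch

def vcl_kl(mu, log_var, mu_prev, log_var_prev):
    """KL( N(mu, exp(log_var)) || N(mu_prev, exp(log_var_prev)) ), summed elementwise.

    In VCL this couples the current variational posterior to the previous
    task's posterior, which acts as the prior for the new task.
    """
    var, var_prev = log_var.exp(), log_var_prev.exp()
    kl = 0.5 * (log_var_prev - log_var + (var + (mu - mu_prev) ** 2) / var_prev - 1.0)
    return kl.sum()
```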
## Replay-Based Methods

Replay methods counteract forgetting by periodically revisiting data from previous tasks, either stored explicitly or regenerated by a model.

### Experience Replay

The simplest approach maintains a fixed-size memory buffer $\mathcal{M}$ containing a small number of exemplars from each past task, selected by random subsampling or a principled strategy (e.g., reservoir sampling to maintain a uniform random sample from the full stream). The loss at each step interleaves current-task data with replayed data:

$$
\mathcal{L}(\theta) \;=\; \mathcal{L}_t(\theta) \;+\; \beta \, \frac{1}{|\mathcal{M}|} \sum_{(x, y) \in \mathcal{M}} \ell\big(f_\theta(x), y\big).
$$
Experience replay is conceptually simple and empirically effective, but its memory cost grows with the number of tasks (or the per-task exemplar budget shrinks).
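A minimal sketch of a reservoir-sampled buffer in plain Python (the class and method names are mine); after `n` stream items each example is retained with probability `capacity / n`, so the buffer remains a uniform sample of the whole stream regardless of task boundaries:

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)   # uniform index over the whole stream so far
            if j < self.capacity:
                self.items[j] = example       # replace with probability capacity / seen

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))
```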
### Dark Experience Replay

Dark Experience Replay (DER) (Buzzega et al., 2020) improves on standard replay by storing the model's soft predictions (logits) $z$ at the time each exemplar is added to the buffer, in addition to the input $x$. The replay loss then includes a knowledge distillation term:

$$
\mathcal{L}_{\mathrm{DER}}(\theta) \;=\; \mathcal{L}_t(\theta) \;+\; \alpha \, \mathbb{E}_{(x, z) \sim \mathcal{M}}\big[\, \|\, f_\theta(x) - z \,\|_2^2 \,\big].
$$
Matching the current model’s predictions to the stored logits preserves not just the final decision but the full predictive distribution — a richer signal that slows forgetting more effectively than label-only replay.
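A sketch of the per-step DER objective in PyTorch, assuming a hypothetical `buffer` object that stores `(input, recorded_logits)` pairs and supports `sample` and `len`:

```python
import torch.nn.functional as F

def der_step_loss(model, x, y, buffer, alpha=0.5):
    """Current-task cross-entropy plus logit distillation on replayed inputs."""
    loss = F.cross_entropy(model(x), y)
    if len(buffer) > 0:
        x_past, z_past = buffer.sample(x.shape[0])            # inputs and their stored logits
        loss = loss + alpha * F.mse_loss(model(x_past), z_past)
    return loss
```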
### Gradient Episodic Memory

Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) uses the memory buffer not to mix replay into the training loss, but to constrain the gradient update direction. GEM solves a quadratic program at each step to find the update that:

- does not increase the loss on any past task's exemplars (the non-interference constraint), and
- is as close as possible to the current task's gradient $g$.

Formally, let $g$ be the current task's gradient and $g_k$ the gradient of the memory loss for past task $k$. GEM solves:

$$
\min_{\tilde{g}} \;\; \tfrac{1}{2}\, \|\tilde{g} - g\|_2^2 \qquad \text{subject to} \qquad \langle \tilde{g}, g_k \rangle \ge 0 \;\; \text{for all } k < t.
$$

The constraint ensures the update does not increase past task losses to first order. When all constraints are satisfied by $g$ itself, no projection is needed; otherwise, $g$ is projected onto the intersection of the constraint half-spaces.
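With a single averaged memory gradient (the simplification used by the later A-GEM variant), the projection has a closed form; a NumPy sketch with flattened gradient vectors:

```python
import numpy as np

def project_if_conflicting(g, g_mem):
    """One-constraint special case of GEM's quadratic program.

    If <g, g_mem> >= 0 the gradient is left untouched; otherwise the component
    along the memory gradient is removed, so the past-task loss does not
    increase to first order.
    """
    dot = g @ g_mem
    if dot >= 0:
        return g
    return g - (dot / (g_mem @ g_mem)) * g_mem
```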
## Modern Approaches: Parameter-Efficient Continual Learning

The dominant paradigm for continual learning with large pretrained models has shifted toward parameter-efficient adaptation: rather than modifying all model weights, these methods encode task-specific knowledge in a small number of additional parameters, leaving the pretrained base frozen. This eliminates forgetting by construction — different tasks are stored in separate, non-interfering modules — and scales naturally to long task sequences without growing the base model.

### Low-Rank Adaptation (LoRA)

LoRA (Hu et al., 2022) constrains task-specific weight updates to a low-dimensional subspace. For a weight matrix $W_0 \in \mathbb{R}^{d \times k}$, the adapted model uses:

$$
W \;=\; W_0 + \Delta W \;=\; W_0 + B A,
$$

where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and $r \ll \min(d, k)$. The base weights $W_0$ are frozen; only $A$ and $B$ are trained, requiring $r(d + k)$ parameters per task instead of $dk$. For continual learning, each task $t$ receives its own adapter pair $(A_t, B_t)$: switching tasks requires only swapping the adapter, and the base model is never modified. Zero forgetting follows by construction.
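A minimal PyTorch sketch of a LoRA-wrapped linear layer (the class and the scaling convention are illustrative); in a continual setting one such `(A, B)` pair would be trained and stored per task:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                             # freeze W_0 (and bias)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)   # r x k
        self.B = nn.Parameter(torch.zeros(d_out, rank))         # d x r, zero init => delta W = 0
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Continual use: keep one (A_t, B_t) pair per task and swap them in at inference.
```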
A practical question is whether adapters from different tasks can be composed at inference time — for instance, by taking a weighted combination when the task identity is unknown. This connects to the broader challenge of merging independently trained models, an active area of research sometimes called model merging or task arithmetic.
### Prefix Tuning and KV Cache Adaptation

Prefix tuning (Li & Liang, 2021) prepends a set of $m$ learned “virtual tokens” to the key-value cache of each attention layer. For a transformer with keys $K$ and values $V$ from the $n$ input tokens, the prefix-augmented attention computes:

$$
\mathrm{Attn}(Q, K, V) \;=\; \mathrm{softmax}\!\left(\frac{Q\, [P_K; K]^\top}{\sqrt{d_k}}\right) [P_V; V],
$$

where $P_K, P_V \in \mathbb{R}^{m \times d_k}$ are the only parameters trained and $d_k$ is the head dimension. The base model is fully frozen. Prefix tuning can be understood as soft prompting at every layer: the virtual tokens steer the model’s internal representations without touching its weights. For continual learning, separate prefixes per task require only $O(m\, d_k)$ storage per layer, and switching tasks is a memory swap.
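A single-head sketch of prefix-augmented attention in PyTorch; only `p_k` and `p_v` would be trainable, and the tensor names are mine:

```python
import torch
import torch.nn.functional as F

def prefix_attention(q, k, v, p_k, p_v):
    """Single-head attention with a learned prefix prepended to the KV cache.

    q        : (n_q, d_k)  queries from the input tokens
    k, v     : (n, d_k)    keys/values from the input tokens (frozen model)
    p_k, p_v : (m, d_k)    trainable virtual-token keys/values (the prefix)
    """
    d_k = q.shape[-1]
    keys = torch.cat([p_k, k], dim=0)       # (m + n, d_k)
    values = torch.cat([p_v, v], dim=0)     # (m + n, d_k)
    scores = q @ keys.T / d_k ** 0.5        # (n_q, m + n)
    return F.softmax(scores, dim=-1) @ values
```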
### Cartridges

Cartridges (Eyuboglu et al., 2025) extend prefix tuning from a learned prompt to a compressed knowledge store. Rather than learning a prefix for a task specification, a cartridge encodes the content of an entire document corpus into a compact KV cache $(\tilde{K}, \tilde{V})$ with $m \ll n$ entries, via a process called self-study:

1. Generate synthetic reference queries about the target documents using the base model.
2. Run the model with the full $n$-token documents in context to obtain ground-truth attention outputs.
3. Train the cartridge to minimize the discrepancy between those outputs and the attention outputs produced by $(\tilde{K}, \tilde{V})$ alone.
The result is a compact, composable cache: multiple cartridges can be concatenated in the KV cache at inference time without retraining, enabling modular assembly of knowledge from multiple sources. Empirically, cartridges storing a 484k-token corpus require 38× less memory than the equivalent in-context representation and achieve 26× higher inference throughput.
For continual learning, each task or knowledge domain gets its own cartridge trained offline. Adding a new task means training one new cartridge; all previous cartridges are untouched. Forgetting is structurally impossible.
### Attention Matching

Training a cartridge via self-study requires backpropagation through the model for each new task — taking hours for large contexts. Attention matching (Zweiger et al., 2026) finds the compact cache in closed form, in seconds, by decomposing the problem into tractable least-squares subproblems.

Using the same notation as above, the goal is to find $(\tilde{K}, \tilde{V})$ that reproduces the attention outputs of the full $n$-token cache on the reference queries $Q$.

**Value fitting.** Given compacted keys $\tilde{K}$, let $A$ be the matrix of normalized attention weights from each reference query to each compacted key. The compacted values $\tilde{V}$ that best reproduce the target outputs $O$ solve:

$$
\tilde{V} \;=\; \arg\min_{V'} \;\big\| A V' - O \big\|_F^2 .
$$

This is ordinary least squares: the closed-form solution $\tilde{V} = (A^\top A)^{-1} A^\top O$ is the same as Bayesian linear regression with an uninformative prior (cf. the information-form update in the Bayesian online regression section above, with $\Sigma_0^{-1} \to 0$). Each column of $\tilde{V}$ is fit independently, regressing the corresponding column of the attention outputs onto the normalized attention weights from the compacted keys.
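A NumPy sketch of this value-fitting step (array names are illustrative); `np.linalg.lstsq` solves all output columns of the least-squares problem at once:

```python
import numpy as np

def fit_values(scores, outputs):
    """Value fitting by ordinary least squares (sketch of the idea).

    scores  : (q, m) normalized attention weights from each reference query
              to each compacted key (rows sum to 1)
    outputs : (q, d) ground-truth attention outputs from the full KV cache
    returns : (m, d) compacted values minimizing ||scores @ V - outputs||_F
    """
    # Equivalent to Bayesian linear regression with a flat prior on each column.
    v_tilde, *_ = np.linalg.lstsq(scores, outputs, rcond=None)
    return v_tilde
```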
**Bias fitting.** To also match the total attention mass — ensuring $(\tilde{K}, \tilde{V})$ attracts the correct aggregate attention weight — the method adds per-token scalar log-biases $b$ to the compacted attention scores and solves a nonnegative least-squares problem for $b$.

**Key selection.** The $m$ key positions are chosen greedily by orthogonal matching pursuit or by highest aggregated attention weight across the reference queries $Q$.
The full three-stage pipeline (key selection → bias fitting → value fitting) runs in seconds and achieves 50× compression with minimal quality loss on long-context benchmarks, matching gradient-based methods that take hours. The closed-form structure mirrors the recursive Bayesian updates studied throughout this chapter: both reduce to sequential linear-regression problems that summarize a stream of information into a compact sufficient statistic.
## Other Methods

Earlier work proposed two additional families that avoid forgetting by design rather than by parameter isolation. Though less prominent in the era of large pretrained models, they introduced ideas that continue to influence modern methods.

**Architecture-based methods** allocate dedicated parameters to each task. Progressive neural networks (Rusu et al., 2016) add a new column of weights per task, with lateral connections to all previous frozen columns — enabling positive forward transfer at the cost of linear model growth. PackNet avoids growth by pruning and reassigning freed weights to future tasks, assigning each task a disjoint binary mask over the shared parameter vector.

**Gradient projection methods** constrain updates to subspaces that do not interfere with past tasks. Orthogonal gradient descent (OGD) (Farajtabar et al., 2020) projects the current gradient onto the orthogonal complement of the gradients stored at previous task optima:

$$
\tilde{g} \;=\; g \;-\; \sum_k \langle g, u_k \rangle\, u_k,
$$

where $\{u_k\}$ is an orthonormal basis for the stored gradient directions. Gradient projection memory (GPM) (Saha et al., 2021) generalizes this by projecting layer-wise gradient matrices onto the null space of the feature subspace from past tasks, identified via SVD: $\nabla W \leftarrow \nabla W - (\nabla W)\, M M^\top$, where the columns of $M$ span the core feature subspace of earlier tasks.
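A small NumPy sketch of this style of projection, assuming the stored directions have already been orthonormalized (e.g., by Gram–Schmidt) into the rows of `basis`:

```python
import numpy as np

def project_orthogonal(g, basis):
    """Project gradient g onto the orthogonal complement of stored directions.

    basis : (k, d) matrix whose rows are an orthonormal basis for the span of
            directions recorded from previous tasks (the projection memory)
    """
    if basis.size == 0:
        return g
    return g - basis.T @ (basis @ g)   # remove components along past directions
```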
## Summary and Open Problems
Continual learning sits at the intersection of optimization, Bayesian inference, and representation learning. The key tension — the stability–plasticity dilemma — manifests differently across the methods reviewed here:
| Method | Anti-forgetting mechanism | Storage per task |
|---|---|---|
| EWC (Kirkpatrick et al., 2017) | Diagonal Fisher penalty | $O(\lvert\theta\rvert)$ |
| SI (Zenke et al., 2017) | Online path-integral importance | $O(\lvert\theta\rvert)$ accumulated |
| VCL (Nguyen et al., 2018) | Variational posterior as prior | $O(\lvert\theta\rvert)$ + coresets |
| Experience replay | Stored exemplars | $O(\lvert\mathcal{M}\rvert)$ buffer |
| DER (Buzzega et al., 2020) | Stored exemplars + logit distillation | $O(\lvert\mathcal{M}\rvert)$ |
| GEM (Lopez-Paz & Ranzato, 2017) | Gradient projection via QP | $O(\lvert\mathcal{M}\rvert)$ buffer |
| LoRA (Hu et al., 2022) | Task-specific low-rank adapters | $O(r(d + k))$ per task |
| Prefix tuning (Li & Liang, 2021) | Task-specific KV prefix | $O(m\, d_k)$ per task |
| Cartridges (Eyuboglu et al., 2025) | Compressed KV cache per task | $O(m\, d_k)$ per task |
| Attention matching (Zweiger et al., 2026) | Closed-form KV compaction (OLS) | $O(m\, d_k)$ per task |
Several open problems remain active areas of research:
- **Adapter composition:** when task identity is unknown at test time, how should multiple LoRA adapters or cartridges be combined? Simple averaging does not exploit task structure.
- **Evaluation standards:** the field lacks consensus benchmarks, making it difficult to compare methods across papers.
- **Class-incremental learning** without task identity remains far harder than other scenarios, and PEFT methods that require task identity at inference time do not directly address it.
- **Compression quality vs. speed:** gradient-based cartridge training and closed-form attention matching occupy different points on this frontier; characterizing the trade-off theoretically remains open.
## Conclusion
Continual learning studies how to learn from a non-stationary stream of tasks without forgetting previously acquired knowledge. The field has evolved from regularization-based methods (EWC, SI, VCL), which protect important parameters using the Fisher information or a variational posterior as a surrogate prior, through replay-based methods (experience replay, DER, GEM), which counteract forgetting by revisiting past data, to modern parameter-efficient approaches (LoRA, prefix tuning, cartridges, attention matching), which sidestep forgetting entirely by keeping the base model frozen and storing task knowledge in lightweight, composable modules. The Bayesian perspective — sequential updating of a posterior, made exact by the Kalman filter in the linear Gaussian case — provides a unifying foundation that connects all of these approaches and motivates the approximate methods needed for nonlinear models.
- De Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., & Tuytelaars, T. (2021). A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7), 3366–3385.
- McCloskey, M., & Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24, 109–165.
- Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., & others. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521–3526.
- Schwarz, J., Czarnecki, W., Luketina, J., Grabska-Barwinska, A., Teh, Y. W., Pascanu, R., & Hadsell, R. (2018). Progress & compress: A scalable framework for continual learning. International Conference on Machine Learning, 4528–4537.
- Zenke, F., Poole, B., & Ganguli, S. (2017). Continual learning through synaptic intelligence. International Conference on Machine Learning, 3987–3995.
- Nguyen, C. V., Li, Y., Bui, T. D., & Turner, R. E. (2018). Variational continual learning. International Conference on Learning Representations.
- Buzzega, P., Boschini, M., Porrello, A., Abati, D., & Calderara, S. (2020). Dark experience for general continual learning: a strong, simple baseline. Advances in Neural Information Processing Systems, 33, 15920–15930.
- Lopez-Paz, D., & Ranzato, M. (2017). Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30.
- Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2022). LoRA: Low-Rank Adaptation of Large Language Models. arXiv Preprint arXiv:2106.09685.
- Li, X. L., & Liang, P. (2021). Prefix-Tuning: Optimizing Continuous Prompts for Generation. arXiv Preprint arXiv:2101.00190.
- Eyuboglu, S., Ehrlich, R., Arora, S., Guha, N., Zinsley, D., Liu, E., Tennien, W., Rudra, A., Zou, J., Mirhoseini, A., & Ré, C. (2025). Cartridges: Lightweight and General-Purpose Long Context Representations via Self-Study. arXiv Preprint arXiv:2506.06266.
- Zweiger, A., Fu, X., Guo, H., & Kim, Y. (2026). Fast KV Compaction via Attention Matching. arXiv Preprint arXiv:2602.16284.
- Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., & Hadsell, R. (2016). Progressive neural networks. arXiv Preprint arXiv:1606.04671.
- Farajtabar, M., Azizan, N., Mott, A., & Li, A. (2020). Orthogonal gradient descent for continual learning. International Conference on Artificial Intelligence and Statistics, 3762–3773.
- Saha, G., Garg, I., & Roy, K. (2021). Gradient Projection Memory for Continual Learning. International Conference on Learning Representations.