Two tensions define modern sequence modeling. On the training side, recurrent models like RNNs process tokens one at a time, leaving GPU parallelism underutilized on long sequences. On the inference side, Transformers compute all pairwise token interactions, incurring $O(T^2)$ time and memory that becomes prohibitive for long contexts. Can we get the best of both worlds: parallel training and efficient inference?
This chapter shows that the answer is yes for a broad family of models that can be written as linear recurrences. A linear recurrence is simultaneously a convolution (fast parallel training via FFT or parallel scan) and a fixed-size hidden-state update (fast, $O(1)$-memory inference). Structured SSMs exploit this duality with carefully designed dynamics matrices; linear attention recovers it by approximating the softmax kernel.
## Deep State Space Models
### Linear Recurrent Neural Networks
A linear RNN evolves a hidden state $h_t \in \mathbb{R}^N$ via

$$h_t = A h_{t-1} + B x_t, \qquad y_t = C h_t,$$

with $h_{-1} = 0$. Because linear operators compose, unrolling the recurrence gives

$$y_t = \sum_{k=0}^{t} C A^{k} B \, x_{t-k} = (K * x)_t,$$

where the SSM kernel is $K = (CB,\; CAB,\; CA^2 B,\; \ldots)$.
This recurrence–convolution duality is the key computational property:
- **Recurrent form:** $O(1)$ time and memory per generated token, ideal for generation.
- **Convolutional form:** evaluate the full kernel once and convolve via FFT in $O(T \log T)$ time, ideal for training on long sequences (see the sketch below).
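To make the duality concrete, here is a minimal JAX sketch (illustrative names, scalar input/output, random parameters) that computes the same outputs with the recurrent form and with an explicit kernel convolution; in practice the convolution would be applied via FFT rather than the direct sum used here.

```python
# Minimal sketch: the recurrent and convolutional forms of a linear RNN
# produce identical outputs (single input/output channel, state size N).
import jax.numpy as jnp
from jax import random

N, T = 4, 16
kA, kB, kC, kx = random.split(random.PRNGKey(0), 4)
A = 0.9 * random.normal(kA, (N, N)) / jnp.sqrt(N)    # dynamics, scaled toward stability
B = random.normal(kB, (N, 1))
C = random.normal(kC, (1, N))
x = random.normal(kx, (T,))

# Recurrent form: one O(N^2) step per token, O(N) state.
h, ys_rec = jnp.zeros((N, 1)), []
for t in range(T):
    h = A @ h + B * x[t]
    ys_rec.append((C @ h)[0, 0])
ys_rec = jnp.array(ys_rec)

# Convolutional form: build the kernel K_k = C A^k B once, then convolve
# (a real implementation would apply this convolution with an FFT).
K = jnp.array([(C @ jnp.linalg.matrix_power(A, k) @ B)[0, 0] for k in range(T)])
ys_conv = jnp.array([jnp.sum(K[: t + 1] * x[t::-1]) for t in range(T)])

print(jnp.allclose(ys_rec, ys_conv, atol=1e-4))      # expect: True
```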
When the dynamics are input-dependent (time-varying), the fixed kernel no longer applies. Training then requires a parallel scan over the sequence: an $O(\log T)$-depth divide-and-conquer algorithm that exploits the associativity of affine function composition. See Parallelizing Nonlinear RNNs for a detailed treatment of the parallel scan and its extensions to nonlinear recurrences.
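Below is a minimal sketch of that idea using `jax.lax.associative_scan`: each step is represented as an affine map $h \mapsto a_t h + b_t$, and composing the maps associatively recovers every hidden state. The scalar case and the variable names are illustrative, not any particular model's implementation.

```python
# Minimal sketch: all states of h_t = a_t * h_{t-1} + b_t computed in parallel
# with an associative scan over affine maps (scalar case, illustrative values).
import jax.numpy as jnp
from jax import lax, random

def affine_compose(e1, e2):
    """Compose h -> a1*h + b1 with h -> a2*h + b2 (applied in that order)."""
    a1, b1 = e1
    a2, b2 = e2
    return a1 * a2, a2 * b1 + b2

T = 8
ka, kb = random.split(random.PRNGKey(0))
a = random.uniform(ka, (T,), minval=0.5, maxval=0.99)   # input-dependent decay
b = random.normal(kb, (T,))                             # input-dependent drive

# Parallel scan: O(log T) depth instead of T sequential steps.
_, h_scan = lax.associative_scan(affine_compose, (a, b))

# Sequential reference, starting from h_{-1} = 0.
h, h_seq = 0.0, []
for t in range(T):
    h = a[t] * h + b[t]
    h_seq.append(h)

print(jnp.allclose(h_scan, jnp.array(h_seq)))           # expect: True
```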
### S4: A Continuous-Time Deep SSM
S4 Gu et al., 2022 is a deep SSM that derives its discrete-time recurrence from a continuous-time dynamical system, giving a principled way to initialize $A$ so that the model captures long-range dependencies. A linear time-invariant (LTI) system evolves a hidden state $h(t) \in \mathbb{R}^N$ via:

$$\frac{d h(t)}{d t} = A h(t) + B x(t), \qquad y(t) = C h(t).$$
Discretizing with step size $\Delta$ (zero-order hold) yields the recurrent form:

$$h_k = \bar{A} h_{k-1} + \bar{B} x_k, \qquad y_k = C h_k,$$

where $\bar{A} = \exp(\Delta A)$ and $\bar{B} = (\Delta A)^{-1}\left(\exp(\Delta A) - I\right) \Delta B$. Because $(\bar{A}, \bar{B}, C)$ is fixed, the output is a convolution with the SSM kernel $\bar{K}$:

$$y = \bar{K} * x, \qquad \bar{K} = \left(C\bar{B},\; C\bar{A}\bar{B},\; \ldots,\; C\bar{A}^{L-1}\bar{B}\right).$$
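A minimal sketch of the zero-order-hold formulas above, assuming a generic square invertible $A$ and using `jax.scipy.linalg.expm`; the helper name `discretize_zoh` is illustrative rather than S4's actual implementation.

```python
# Minimal sketch of zero-order-hold discretization (helper name is illustrative):
#   Abar = exp(dt*A),  Bbar = (dt*A)^{-1} (exp(dt*A) - I) dt*B.
import jax.numpy as jnp
from jax.scipy.linalg import expm

def discretize_zoh(A, B, dt):
    N = A.shape[0]
    A_bar = expm(dt * A)
    B_bar = jnp.linalg.solve(dt * A, A_bar - jnp.eye(N)) @ (dt * B)
    return A_bar, B_bar

# Tiny usage example with a stable 2-state system.
A_bar, B_bar = discretize_zoh(-jnp.eye(2), jnp.ones((2, 1)), dt=0.1)
```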
The remaining challenge is computing $\bar{K}$ efficiently. Computing it naively requires $O(N^2 L)$ operations (repeated matrix–vector products with $\bar{A}$). S4 addresses this by initializing $A$ using the HiPPO framework Gu et al., 2020, which constructs $A$ as a structured matrix designed to optimally memorize the input history via polynomial projections. The HiPPO-LegS matrix has entries:

$$A_{nk} = -\begin{cases} \sqrt{(2n+1)(2k+1)} & \text{if } n > k, \\ n + 1 & \text{if } n = k, \\ 0 & \text{if } n < k. \end{cases}$$
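For concreteness, a short sketch constructing the HiPPO-LegS matrix from the entries above (the function name `hippo_legs` is an illustrative stand-in):

```python
# Sketch: constructing the HiPPO-LegS matrix from the entries above.
import jax.numpy as jnp

def hippo_legs(N):
    n = jnp.arange(N)[:, None]
    k = jnp.arange(N)[None, :]
    lower = -jnp.sqrt((2 * n + 1) * (2 * k + 1)) * (n > k)   # strictly below the diagonal
    diag = -jnp.diag(jnp.arange(N) + 1.0)                    # -(n + 1) on the diagonal
    return lower + diag

print(hippo_legs(4))
```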
S4 exploits the fact that HiPPO-LegS is a normal plus low-rank (NPLR) matrix to compute the SSM kernel in $\tilde{O}(N + L)$ time via the fast Fourier transform.
### S5: Diagonal SSMs with Parallel Scans
S5 Smith et al., 2023 simplifies S4 by diagonalizing the state matrix: $A = V \Lambda V^{-1}$, where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$ is diagonal. With $\tilde{h}_t = V^{-1} h_t$, $\tilde{B} = V^{-1} B$, and $\tilde{C} = C V$, the recurrence decouples into $N$ independent scalar recurrences:

$$\tilde{h}_{t,n} = \bar{\lambda}_n\, \tilde{h}_{t-1,n} + \bar{B}_n x_t, \qquad n = 1, \ldots, N.$$
Each mode decays independently at rate $\bar{\lambda}_n$; stability requires $|\bar{\lambda}_n| \le 1$ (equivalently $\mathrm{Re}(\lambda_n) \le 0$ in continuous time). S5 computes all hidden states in parallel using an associative parallel scan, reducing training time from $O(T)$ sequential steps to $O(\log T)$ parallel steps.
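A sketch of the diagonal case: the decoupled scalar recurrences can all be computed with a single associative scan vectorized over the $N$ modes. The parameter values here are random stand-ins; actual S5 uses complex eigenvalues and learned $\tilde{B}$, $\tilde{C}$.

```python
# Sketch: an S5-style diagonal recurrence is N independent scalar recurrences,
# so a single associative scan vectorized over the modes computes every state.
import jax.numpy as jnp
from jax import lax, random

def affine_compose(e1, e2):
    a1, b1 = e1
    a2, b2 = e2
    return a1 * a2, a2 * b1 + b2

N, T = 16, 128
kl, kB, kx = random.split(random.PRNGKey(0), 3)
lam_bar = random.uniform(kl, (N,), minval=0.5, maxval=0.999)   # discrete-time decays
B_bar = random.normal(kB, (N,))
x = random.normal(kx, (T,))

a = jnp.broadcast_to(lam_bar, (T, N))        # per-mode decay, constant over time
b = x[:, None] * B_bar[None, :]              # per-mode drive  B_bar_n * x_t
_, H = lax.associative_scan(affine_compose, (a, b))   # H[t, n] = h_{t, n}
```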
### Mamba: Selective SSMs
A fundamental limitation of LTI SSMs is that the transition matrices $\bar{A}$ and $\bar{B}$ are input-independent: every token is processed identically regardless of content. This prevents the model from selectively retaining or discarding information based on context.
Mamba Gu & Dao, 2023 introduces a selection mechanism by making $B$, $C$, and the discretization step $\Delta$ functions of the input $x_t$:

$$h_t = \bar{A}_t h_{t-1} + \bar{B}_t x_t, \qquad y_t = C_t h_t, \qquad \bar{A}_t = \exp(\Delta_t A), \quad \bar{B}_t \approx \Delta_t B_t,$$

where $B_t$, $C_t$, and $\Delta_t$ are computed from $x_t$ via small linear projections.
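To make the selection mechanism concrete, here is a naive per-step reference recurrence in the spirit of Mamba, with a diagonal state matrix and a single channel routed through the SSM; the projection names (`w_B`, `w_C`, `w_dt`) are illustrative stand-ins, and the fused, hardware-aware selective scan used in practice is not shown.

```python
# Naive reference recurrence for a selective (input-dependent) diagonal SSM in
# the spirit of Mamba. Only one channel is fed through the SSM for brevity.
import jax.numpy as jnp
from jax import nn, random

N, T, d = 16, 32, 8                           # state size, sequence length, model dim
ks = random.split(random.PRNGKey(0), 5)
A = -jnp.exp(random.normal(ks[0], (N,)))      # fixed negative diagonal dynamics
w_B = 0.1 * random.normal(ks[1], (d, N))
w_C = 0.1 * random.normal(ks[2], (d, N))
w_dt = 0.1 * random.normal(ks[3], (d,))
x = random.normal(ks[4], (T, d))              # input sequence
u = x[:, 0]                                   # the channel fed through the SSM

h, ys = jnp.zeros(N), []
for t in range(T):
    B_t = x[t] @ w_B                          # input-dependent input projection
    C_t = x[t] @ w_C                          # input-dependent output projection
    dt_t = nn.softplus(x[t] @ w_dt)           # input-dependent step size > 0
    h = jnp.exp(dt_t * A) * h + dt_t * B_t * u[t]   # selective recurrence step
    ys.append(C_t @ h)
ys = jnp.array(ys)
```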
Because the parameters now vary with $x_t$, the convolutional view no longer applies: the system is time-varying, and $y_t$ depends on all of $x_{1:t}$ in a non-linear way. Training requires a selective scan: a hardware-aware parallel algorithm that exploits the structure of the recurrence without materializing intermediate states in memory.
The selection mechanism gives Mamba attention-like content-dependent routing while preserving the $O(T)$ compute and $O(1)$ memory-per-token cost of a recurrent model.
## Linear Attention
The link between deep SSMs and Transformers (see the Transformers chapter) may be closer than it appears. Both Mamba and softmax attention compute content-dependent outputs: Mamba via input-driven gates, attention via pairwise dot products. The key difference is architectural: SSMs maintain a fixed-size hidden state updated recurrently, while attention computes all pairwise interactions simultaneously, with no fixed-size bottleneck but at $O(T^2)$ cost. Linearizing the attention kernel bridges the two, recovering a recurrent model from attention. Given queries $q_t$, keys $k_t \in \mathbb{R}^d$, and values $v_t$, the output of causal (autoregressive) softmax attention at position $t$ is:

$$y_t = \frac{\sum_{s \le t} \exp\!\left(q_t^\top k_s / \sqrt{d}\right) v_s}{\sum_{s \le t} \exp\!\left(q_t^\top k_s / \sqrt{d}\right)}.$$
In matrix form (with the causal mask $M$):

$$Y = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d}} + M\right) V,$$

where $M$ sets future entries to $-\infty$. Computing this requires materializing the $T \times T$ attention matrix, costing $O(T^2)$ time and memory.
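For reference, a minimal sketch of this quadratic-cost computation (single head, no learned projections):

```python
# Reference causal softmax attention: materializes the full T x T matrix,
# O(T^2) time and memory.
import jax.numpy as jnp
from jax import nn

def causal_softmax_attention(Q, K, V):
    T, d = Q.shape
    scores = Q @ K.T / jnp.sqrt(d)                        # pairwise logits, (T, T)
    mask = jnp.tril(jnp.ones((T, T), dtype=bool))         # causal mask M
    scores = jnp.where(mask, scores, -jnp.inf)            # future entries -> -inf
    return nn.softmax(scores, axis=-1) @ V                # (T, d_v)
```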
Katharopoulos et al. (2020) observe that the softmax is a kernel function, $\mathrm{sim}(q, k) = \exp(q^\top k / \sqrt{d})$, and that replacing it with a kernel with explicit feature maps, $\mathrm{sim}(q, k) = \phi(q)^\top \phi(k)$, unlocks a dramatic simplification. The causal attention output becomes:

$$y_t = \frac{\sum_{s \le t} \phi(q_t)^\top \phi(k_s)\, v_s}{\sum_{s \le t} \phi(q_t)^\top \phi(k_s)}
     = \frac{\left(\sum_{s \le t} v_s\, \phi(k_s)^\top\right) \phi(q_t)}{\left(\sum_{s \le t} \phi(k_s)\right)^{\!\top} \phi(q_t)}.$$
The key is that $\phi(q_t)$ acts on the accumulated outer products, which can be built up incrementally as a linear recurrence. Defining the hidden state and normalizer:

$$S_t = S_{t-1} + v_t\, \phi(k_t)^\top, \qquad z_t = z_{t-1} + \phi(k_t),$$

the output at each step is $y_t = \dfrac{S_t\, \phi(q_t)}{z_t^\top \phi(q_t)}$. This is a linear RNN with a $d \times d$ matrix-valued hidden state: $O(T d^2)$ time and $O(d^2)$ memory, with no attention matrix. A simple feature map that keeps values positive is $\phi(u) = \mathrm{elu}(u) + 1$ applied elementwise, giving strictly positive similarities $\phi(q)^\top \phi(k) > 0$.
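A minimal sketch of the recurrent form with the $\mathrm{elu}(u) + 1$ feature map (the small constant added to the denominator is only for numerical safety, not part of the original formulation):

```python
# Recurrent form of causal linear attention with phi(u) = elu(u) + 1:
# O(d^2) state, O(T d^2) total time, no attention matrix.
import jax.numpy as jnp
from jax import nn

def phi(u):
    return nn.elu(u) + 1.0                                # positive feature map

def linear_attention(Q, K, V):
    T, d = Q.shape
    S = jnp.zeros((V.shape[1], d))                        # running sum of v_s phi(k_s)^T
    z = jnp.zeros(d)                                      # running sum of phi(k_s)
    ys = []
    for t in range(T):
        S = S + jnp.outer(V[t], phi(K[t]))
        z = z + phi(K[t])
        ys.append(S @ phi(Q[t]) / (z @ phi(Q[t]) + 1e-6))
    return jnp.stack(ys)
```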
## Test-Time Regression
Linear attention can be reread as solving a regression problem online. At each step, the model has observed a sequence of key–value pairs $\{(k_s, v_s)\}_{s \le t}$, which form a training set, and must predict a value at a new query $q_t$, the test point. The hidden state $S_t$ is the current regression solution, updated incrementally as new pairs arrive.
Concretely, consider fitting a linear map $W$ to the accumulated pairs via ridge regression:

$$W_t = \arg\min_{W} \sum_{s \le t} \left\| W k_s - v_s \right\|^2 + \lambda \left\| W \right\|_F^2.$$

The closed-form solution is $W_t = \left(\sum_{s \le t} v_s k_s^\top\right)\left(\sum_{s \le t} k_s k_s^\top + \lambda I\right)^{-1}$, and the prediction at $q_t$ is $y_t = W_t q_t$ (kernel ridge regression with a linear kernel). The models above are all approximations to this solution:
- **Linear attention** computes $y_t = S_t \phi(q_t)$ with $S_t = \sum_{s \le t} v_s \phi(k_s)^\top$: the numerator of the ridge regression estimator, omitting the Gram-matrix inverse.
- **Softmax attention** corresponds to kernel regression with the softmax kernel.
The remaining question is how to approximate the full ridge regression solution cheaply and online.
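As a concrete point of comparison, here is a sketch of the closed-form ridge solution over the pairs seen so far next to the linear-attention estimate that drops the Gram-matrix inverse (identity feature map, random data, illustrative names):

```python
# Sketch: closed-form ridge regression over all (k_s, v_s) pairs seen so far,
# versus the linear-attention estimate that omits the Gram-matrix inverse.
import jax.numpy as jnp
from jax import random

d, T, lam = 4, 32, 1e-2
kK, kV, kq = random.split(random.PRNGKey(0), 3)
K = random.normal(kK, (T, d))                 # keys observed so far
V = random.normal(kV, (T, d))                 # values observed so far
q = random.normal(kq, (d,))                   # current query (the test point)

S = V.T @ K                                   # sum_s v_s k_s^T
G = K.T @ K + lam * jnp.eye(d)                # Gram matrix + ridge term
W_ridge = S @ jnp.linalg.inv(G)

y_ridge = W_ridge @ q                         # full test-time regression prediction
y_linattn = S @ q                             # linear attention: numerator only
```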
### DeltaNet: Gradient Descent for Linear Regression
Rather than accumulating all key–value pairs, DeltaNet Yang et al., 2024 maintains a weight matrix $S_t$ and updates it by taking one step of gradient descent on the per-token least-squares loss $\ell_t(S) = \tfrac{1}{2}\left\| S k_t - v_t \right\|^2$. With step size $\beta_t$, the gradient step gives the delta rule:

$$S_t = S_{t-1} - \beta_t \left( S_{t-1} k_t - v_t \right) k_t^\top = S_{t-1}\left( I - \beta_t k_t k_t^\top \right) + \beta_t v_t k_t^\top.$$
This is a rank-1 correction: the old memory's prediction $S_{t-1} k_t$ is erased and the new target $v_t$ is written in, both weighted by $\beta_t$. Unlike linear attention ($S_t = S_{t-1} + v_t \phi(k_t)^\top$, no forgetting), DeltaNet selectively overwrites the memory associated with $k_t$. The output is $y_t = S_t q_t$.
With keys and queries $\ell_2$-normalized and a learned sigmoid gate $\beta_t \in (0, 1)$, DeltaNet is a gated linear RNN with content-dependent forgetting: more expressive than linear attention, cheaper than softmax attention ($O(T d^2)$ vs. $O(T^2 d)$). Yang et al. (2024) derive an efficient parallel training algorithm based on the WY representation of Householder products.
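A naive sequential sketch of the delta-rule recurrence (the parallel WY-based training algorithm is not shown, and the gate values are random stand-ins for the learned sigmoid gate):

```python
# Naive sequential DeltaNet-style recurrence.
import jax.numpy as jnp
from jax import nn, random

def delta_step(S, q, k, v, beta):
    k = k / jnp.linalg.norm(k)                     # L2-normalized key
    S = S - beta * jnp.outer(S @ k - v, k)         # delta rule: erase old, write new
    return S, S @ q                                # updated memory, output y_t

d, T = 4, 16
kQ, kK, kV, kb = random.split(random.PRNGKey(0), 4)
Q = random.normal(kQ, (T, d))
K = random.normal(kK, (T, d))
V = random.normal(kV, (T, d))
beta = nn.sigmoid(random.normal(kb, (T,)))         # stand-in for the learned gate

S, ys = jnp.zeros((d, d)), []
for t in range(T):
    S, y = delta_step(S, Q[t], K[t], V[t], beta[t])
    ys.append(y)
ys = jnp.stack(ys)
```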
### Test-Time Training
Test-time training (TTT) Yu et al., 2024 takes the regression perspective to its logical conclusion: the hidden state is the weights $W$ of a small nonlinear model $f_W$ (e.g., a two-layer MLP), updated online by gradient descent on the reconstruction loss:

$$\ell_t(W) = \tfrac{1}{2}\left\| f_W(k_t) - v_t \right\|^2.$$
One gradient step gives $W_t = W_{t-1} - \eta \nabla_W \ell_t(W_{t-1})$, and the output is $y_t = f_{W_t}(q_t)$. When $f_W$ is the linear map $f_W(k) = W k$, this reduces exactly to DeltaNet with $\beta_t = \eta$, so DeltaNet is TTT with the simplest possible hidden model. For nonlinear $f_W$, TTT can take multiple gradient steps on mini-batches of context tokens, creating a meta-learned inner loop that trades compute for expressivity of the hidden state.
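A sketch of a single TTT-style update using `jax.grad`, with a linear hidden model so that the update coincides with the delta rule above ($\eta$ playing the role of $\beta_t$); the names and constants are illustrative:

```python
# Sketch of a TTT-style update: one gradient step on the per-token
# reconstruction loss, here with a linear hidden model f_W(k) = W k.
import jax.numpy as jnp
from jax import grad, random

def loss(W, k, v):
    return 0.5 * jnp.sum((W @ k - v) ** 2)         # per-token reconstruction loss

def ttt_step(W, q, k, v, eta=0.5):
    W = W - eta * grad(loss)(W, k, v)              # one inner-loop gradient step
    return W, W @ q                                # updated weights, output y_t

d = 4
kq, kk, kv = random.split(random.PRNGKey(0), 3)
q, k, v = (random.normal(s, (d,)) for s in (kq, kk, kv))

W1, y = ttt_step(jnp.zeros((d, d)), q, k, v)       # grad is (W k - v) k^T: the delta rule
```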
## Conclusion
The table below organizes the models in this chapter along two axes: whether the dynamics are input-dependent (selective), and whether the hidden-state update rule involves forgetting (vs. simple accumulation).
| Model | Hidden state update | Input-dependent? | Forgetting? |
|---|---|---|---|
| Linear RNN / S4 Gu et al., 2022 | $h_t = \bar{A} h_{t-1} + \bar{B} x_t$ | No | No (fixed decay) |
| S5 Smith et al., 2023 | $\tilde{h}_{t,n} = \bar{\lambda}_n \tilde{h}_{t-1,n} + \bar{B}_n x_t$ | No | No (diagonal decay) |
| Mamba Gu & Dao, 2023 | $h_t = \bar{A}_t h_{t-1} + \bar{B}_t x_t$ | Yes | Yes (gated decay) |
| Linear attention Katharopoulos et al., 2020 | $S_t = S_{t-1} + v_t \phi(k_t)^\top$ | Yes | No (pure accumulation) |
| DeltaNet Yang et al., 2024 | $S_t = S_{t-1}(I - \beta_t k_t k_t^\top) + \beta_t v_t k_t^\top$ | Yes | Yes (selective overwrite) |
| TTT (linear) Yu et al., 2024 | $W_t = W_{t-1} - \eta \nabla_W \ell_t(W_{t-1})$ | Yes | Yes (gradient descent) |
All achieve $O(1)$ time and memory per generated token at inference. The dominant pattern, a linear recurrence admitting parallel training via associative scans or convolutions, unifies classical signal processing, online learning, and modern sequence modeling.
- Gu, A., Goel, K., & Ré, C. (2022). Efficiently Modeling Long Sequences with Structured State Spaces. International Conference on Learning Representations.
- Gu, A., Dao, T., Ermon, S., Rudra, A., & Ré, C. (2020). HiPPO: Recurrent Memory with Optimal Polynomial Projections. Advances in Neural Information Processing Systems, 33, 1474–1487.
- Smith, J. T. H., Warrington, A., & Linderman, S. W. (2023). Simplified State Space Layers for Sequence Modeling. International Conference on Learning Representations.
- Gu, A., & Dao, T. (2023). Mamba: Linear-Time Sequence Modeling with Selective State Spaces. arXiv Preprint arXiv:2312.00752.
- Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. International Conference on Machine Learning, 5156–5165.
- Yang, S., Wang, B., Shen, Y., Peng, H., & Kim, Y. (2024). Parallelizing Linear Transformers with the Delta Rule over Sequence Length. Advances in Neural Information Processing Systems, 37.
- Yu, Y., Guo, S., Kautz, J., Alvarez, J. M., Anandkumar, A., Xu, K., & Molchanov, P. (2024). Learning to (Learn at Test Time): RNNs with Expressive Hidden States. Advances in Neural Information Processing Systems, 37.