By combining the advantages of state space models (SSMs) with attention mechanisms, SAMBA presents a hybrid neural architecture that enables efficient, scalable language modeling with effectively unlimited context length. Trained on SlimPajama under matched setups, SAMBA surpasses both pure attention-based and pure SSM-based models on a variety of reasoning, comprehension, and coding benchmarks, and it processes sequences of up to 256K tokens with little fine-tuning while retaining high throughput and strong length extrapolation.

How Hybrid AI Models Balance Memory and Efficiency


Abstract and 1. Introduction

2. Methodology

3. Experiments and Results

  3.1 Language Modeling on Textbook Quality Data

  3.2 Exploration on Attention and Linear Recurrence

  3.3 Efficient Length Extrapolation

  3.4 Long-Context Understanding

4. Analysis

5. Conclusion, Acknowledgement, and References

A. Implementation Details

B. Additional Experiment Results

C. Details of Entropy Measurement

D. Limitations


A Implementation Details

For the GLA layer in the Sliding GLA architecture, we use d_m/384 heads (where d_m is the model width), a key expansion ratio of 0.5, and a value expansion ratio of 1. For the RetNet layer, we use half as many heads as attention query heads, a key expansion ratio of 1, and a value expansion ratio of 2. The GLA and RetNet implementations are from the Flash Linear Attention repository [3] [YZ24]. We use the FlashAttention-based implementation for Self-Extend extrapolation [4]. The Mamba 432M model has a model width of 1024, and the Mamba 1.3B model has a model width of 2048. Unless otherwise specified, all models trained on SlimPajama use the same training configurations and MLP intermediate size as Samba. The training infrastructure on SlimPajama is based on a modified version of the TinyLlama codebase [5].
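As a rough illustration of how these ratios translate into layer dimensions, the sketch below derives head counts and key/value projection sizes from a model width and an attention query-head count. The function and the example numbers are hypothetical and only mirror the ratios stated above; they are not taken from the Samba configuration code.

```python
# Hypothetical helper mirroring the head/expansion ratios described above.
# Names and example values are illustrative, not from the Samba codebase.

def linear_attn_layer_config(d_model: int, n_query_heads: int) -> dict:
    return {
        "gla": {
            "num_heads": d_model // 384,       # d_m / 384 heads
            "key_dim": int(d_model * 0.5),     # key expansion ratio 0.5
            "value_dim": d_model * 1,          # value expansion ratio 1
        },
        "retnet": {
            "num_heads": n_query_heads // 2,   # half of the attention query heads
            "key_dim": d_model * 1,            # key expansion ratio 1
            "value_dim": d_model * 2,          # value expansion ratio 2
        },
    }

# Example with an illustrative model width and query-head count.
print(linear_attn_layer_config(d_model=2048, n_query_heads=32))
```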

Table 10: Detailed hyper-parameters of the SAMBA models trained at different scales. We only show the optimization settings for the first training phase of the 3.8B model.

In the generation configurations for the downstream tasks, we use greedy decoding for GSM8K, and nucleus sampling [HBD+19] with a temperature of τ = 0.2 and top-p = 0.95 for HumanEval. For MBPP and SQuAD, we set τ = 0.01 and top-p = 0.95.
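A minimal sketch of these decoding settings, assuming the Hugging Face `transformers` GenerationConfig API rather than the paper's own evaluation harness:

```python
# Illustrative decoding configs matching the settings above; assumes the
# Hugging Face `transformers` GenerationConfig API, not the paper's harness.
from transformers import GenerationConfig

GENERATION_CONFIGS = {
    # GSM8K: greedy decoding
    "gsm8k": GenerationConfig(do_sample=False),
    # HumanEval: nucleus sampling with temperature 0.2 and top-p 0.95
    "humaneval": GenerationConfig(do_sample=True, temperature=0.2, top_p=0.95),
    # MBPP and SQuAD: near-greedy nucleus sampling
    "mbpp": GenerationConfig(do_sample=True, temperature=0.01, top_p=0.95),
    "squad": GenerationConfig(do_sample=True, temperature=0.01, top_p=0.95),
}

# Hypothetical usage:
# model.generate(**inputs, generation_config=GENERATION_CONFIGS["humaneval"])
```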

B Additional Experiment Results

Figure 6: Training loss curves of Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning on Passkey Retrieval with 4K sequence length. We plot the loss curves for both models using the simple moving average of window size 10.
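For reference, the smoothing mentioned in the caption is a plain simple moving average; a minimal sketch is shown below, with placeholder loss values rather than real training data.

```python
# Simple moving average with window size 10, as used to smooth the loss curves
# in Figure 6. The `losses` list is a placeholder, not real training data.

def simple_moving_average(values, window=10):
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        smoothed.append(sum(values[start:i + 1]) / (i + 1 - start))
    return smoothed

losses = [2.1, 2.0, 1.8, 1.7, 1.9, 1.6, 1.5, 1.55, 1.4, 1.35, 1.3]  # placeholder
print(simple_moving_average(losses, window=10))
```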


Figure 7: Overall passkey retrieval accuracy on the 256K document length of Samba 1.7B and Mistral 1.6B models during 500 steps of instruction tuning.


C Details of Entropy Measurement
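The entropy in question is the Shannon entropy of attention distributions. The sketch below shows one common way to compute it from softmax attention weights; the averaging over heads and positions is an assumption rather than the paper's exact recipe.

```python
# Hypothetical sketch of measuring the Shannon entropy of attention distributions;
# the averaging scheme is an assumption, not the paper's exact procedure.
import torch

def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """attn_weights: (batch, heads, query_len, key_len); each row sums to 1."""
    entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)  # per query position
    return entropy.mean()  # average over batch, heads, and positions

# Example with random softmax-normalized weights (illustrative only).
w = torch.softmax(torch.randn(1, 4, 8, 8), dim=-1)
print(attention_entropy(w))
```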


D Limitations

Although Samba demonstrates promising memory retrieval performance through instruction tuning, its pre-trained base model has retrieval performance similar to that of the SWA-based model, as shown in Figure 7. This opens up a future direction of further improving Samba's retrieval ability without compromising its efficiency and extrapolation ability. In addition, Samba's hybridization strategy is not consistently better than the alternatives on all tasks. As shown in Table 2, MambaSWA-MLP performs better on tasks such as WinoGrande, SIQA, and GSM8K. This suggests investing in a more sophisticated approach that performs input-dependent, dynamic combinations of SWA-based and SSM-based models, as sketched below.
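As a loose illustration of what such an input-dependent combination could look like, the speculative sketch below mixes an SWA branch and an SSM branch with a learned per-token gate. None of the module names come from Samba, and the two branches are stand-in placeholders.

```python
# Speculative sketch of an input-dependent mixture of an SWA branch and an SSM
# branch; not part of the Samba codebase, and the branch modules are placeholders.
import torch
import torch.nn as nn

class GatedHybridBlock(nn.Module):
    def __init__(self, d_model: int, swa_branch: nn.Module, ssm_branch: nn.Module):
        super().__init__()
        self.swa_branch = swa_branch        # e.g., sliding-window attention
        self.ssm_branch = ssm_branch        # e.g., a Mamba/SSM layer
        self.gate = nn.Linear(d_model, 1)   # per-token mixing weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(x))     # (batch, seq, 1)
        return g * self.swa_branch(x) + (1 - g) * self.ssm_branch(x)

# Illustrative usage with identity placeholders for the two branches.
block = GatedHybridBlock(64, nn.Identity(), nn.Identity())
print(block(torch.randn(2, 16, 64)).shape)
```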


:::info Authors:

(1) Liliang Ren, Microsoft and University of Illinois at Urbana-Champaign (liliangren@microsoft.com);

(2) Yang Liu†, Microsoft (yaliu10@microsoft.com);

(3) Yadong Lu†, Microsoft (yadonglu@microsoft.com);

(4) Yelong Shen, Microsoft (yelong.shen@microsoft.com);

(5) Chen Liang, Microsoft (chenliang1@microsoft.com);

(6) Weizhu Chen, Microsoft (wzchen@microsoft.com).

:::


:::info This paper is available on arxiv under CC BY 4.0 license.

:::

[3] https://github.com/sustcsonglin/flash-linear-attention

[4] https://github.com/datamllab/LongLM/blob/master/selfextendpatch/Llama.py

[5] https://github.com/jzhang38/TinyLlama
