# native-sparse-attention

Here are 3 public repositories matching this topic...

From-scratch reimplementation of DeepSeek's Native Sparse Attention (arXiv:2502.11089) in Triton and CUDA (Hopper WGMMA). 7.4x faster than FlashAttention-3 at 64k context. Includes a five-model training fleet, a perplexity sweep, LongBench v2 evaluation, and a MoBA comparison.

  • Updated May 12, 2026
  • Python
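The repository above implements the kernels in Triton and CUDA; as a rough orientation to the technique itself, here is a minimal single-head PyTorch sketch of the gated three-branch structure described in the NSA paper (block compression, block selection, sliding window). The block size, mean-pooling compressor, random gates, and function name `nsa_sketch` are illustrative assumptions, not the repository's implementation.

```python
# Minimal single-head sketch of NSA's gated three-branch attention
# (compression, selection, sliding window), per arXiv:2502.11089.
# All sizes and the pooling/gating choices are illustrative only;
# real implementations use fused Triton/CUDA kernels and learned gates.
import torch
import torch.nn.functional as F

def nsa_sketch(q, k, v, block=64, num_select=4, window=256):
    """q, k, v: [seq, dim] single-head tensors (no batching, no causal mask)."""
    seq, dim = k.shape

    # 1) Compression branch: mean-pool each key/value block into one token.
    kb = k[: seq // block * block].reshape(-1, block, dim).mean(dim=1)
    vb = v[: seq // block * block].reshape(-1, block, dim).mean(dim=1)
    attn_cmp = F.scaled_dot_product_attention(q[None], kb[None], vb[None])[0]

    # 2) Selection branch: keep the top-scoring blocks at full resolution.
    block_scores = (q @ kb.T).mean(dim=0)                   # one score per block
    top = block_scores.topk(min(num_select, kb.shape[0])).indices
    idx = (top[:, None] * block + torch.arange(block)).reshape(-1)
    attn_slc = F.scaled_dot_product_attention(q[None], k[idx][None], v[idx][None])[0]

    # 3) Sliding-window branch: attend only to the most recent tokens.
    attn_win = F.scaled_dot_product_attention(
        q[None], k[-window:][None], v[-window:][None])[0]

    # Gated combination: per-query sigmoid gates (random here; learned in NSA).
    gates = torch.sigmoid(torch.randn(q.shape[0], 3))
    return (gates[:, 0:1] * attn_cmp
            + gates[:, 1:2] * attn_slc
            + gates[:, 2:3] * attn_win)

q = torch.randn(128, 64)
k = torch.randn(1024, 64)
v = torch.randn(1024, 64)
print(nsa_sketch(q, k, v).shape)  # torch.Size([128, 64])
```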
