MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head • Paper • 2601.07832 • Published Jan 12 • 52 upvotes
Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers • Paper • 2601.17367 • Published Jan 24 • 34 upvotes
Why Attention Patterns Exist: A Unifying Temporal Perspective Analysis • Paper • 2601.21709 • Published Jan 29 • 3 upvotes
LLaDA2.1: Speeding Up Text Diffusion via Token Editing • Paper • 2602.08676 • Published 29 days ago • 70 upvotes
MOVA: Towards Scalable and Synchronized Video-Audio Generation • Paper • 2602.08794 • Published 29 days ago • 156 upvotes
OPUS: Towards Efficient and Principled Data Selection in Large Language Model Pre-training in Every Iteration • Paper • 2602.05400 • Published Feb 5 • 346 upvotes
When to Memorize and When to Stop: Gated Recurrent Memory for Long-Context Reasoning • Paper • 2602.10560 • Published 27 days ago • 29 upvotes
MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models • Paper • 2602.10934 • Published 27 days ago • 49 upvotes
BitDance: Scaling Autoregressive Generative Models with Binary Tokens • Paper • 2602.14041 • Published 23 days ago • 52 upvotes
STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens • Paper • 2602.15620 • Published 21 days ago • 3 upvotes
SLA2: Sparse-Linear Attention with Learnable Routing and QAT • Paper • 2602.12675 • Published 25 days ago • 53 upvotes
VESPO: Variational Sequence-Level Soft Policy Optimization for Stable Off-Policy LLM Training • Paper • 2602.10693 • Published 27 days ago • 216 upvotes
Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers • Paper • 2602.18292 • Published 18 days ago • 10 upvotes
Test-Time Training with KV Binding Is Secretly Linear Attention • Paper • 2602.21204 • Published 14 days ago • 30 upvotes
Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling • Paper • 2603.04791 • Published 5 days ago • 15 upvotes
LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory • Paper • 2603.03269 • Published 7 days ago • 35 upvotes