Rethinking LLM Evaluation: Can We Evaluate LLMs with 200x Less Data? Paper • 2510.10457 • Published Oct 12 • 2
Efficient Multi-modal Large Language Models via Progressive Consistency Distillation Paper • 2510.00515 • Published Oct 1 • 39
Winning the Pruning Gamble: A Unified Approach to Joint Sample and Token Pruning for Efficient Supervised Fine-Tuning Paper • 2509.23873 • Published Sep 28 • 67
Socratic-Zero: Bootstrapping Reasoning via Data-Free Agent Co-evolution Paper • 2509.24726 • Published Sep 29 • 19
Reasoning Like an Economist: Post-Training on Economic Problems Induces Strategic Generalization in LLMs Paper • 2506.00577 • Published May 31 • 11
Shifting AI Efficiency From Model-Centric to Data-Centric Compression Paper • 2505.19147 • Published May 25 • 144