Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits (arXiv:2512.20578, published 26 days ago, 77 upvotes)
Taming Hallucinations: Boosting MLLMs' Video Understanding via Counterfactual Video Generation (arXiv:2512.24271, published 19 days ago, 59 upvotes)
Youtu-Agent: Scaling Agent Productivity with Automated Generation and Hybrid Policy Optimization (arXiv:2512.24615, published 19 days ago, 113 upvotes)
QuantiPhy: A Quantitative Benchmark Evaluating Physical Reasoning Abilities of Vision-Language Models (arXiv:2512.19526, published 27 days ago, 11 upvotes)
H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs (arXiv:2512.01797, published Dec 1, 2025, 2 upvotes)