RoboLab: A High-Fidelity Simulation Benchmark for Analysis of Task Generalist Policies
Abstract
RoboLab is a simulation benchmarking framework that addresses limitations in robot policy evaluation by enabling scalable, realistic task generation and systematic analysis of policy behavior under controlled perturbations.
The pursuit of general-purpose robotics has yielded impressive foundation models, yet simulation-based benchmarking remains a bottleneck due to rapid performance saturation and a lack of true generalization testing. Existing benchmarks often exhibit significant domain overlap between training and evaluation, trivializing success rates and obscuring insights into robustness. We introduce RoboLab, a simulation benchmarking framework designed to address these challenges. Concretely, the framework answers two questions: (1) to what extent can we understand the performance of a real-world policy by analyzing its behavior in simulation, and (2) which external factors most strongly affect that behavior under controlled perturbations? First, RoboLab enables human-authored and LLM-enabled generation of scenes and tasks in a robot- and policy-agnostic manner within a physically realistic and photorealistic simulation. On this basis, we propose the RoboLab-120 benchmark, consisting of 120 tasks organized along three competency axes (visual, procedural, and relational) and three difficulty levels. Second, we introduce a systematic analysis of real-world policies that quantifies both their performance and the sensitivity of their behavior to controlled perturbations, indicating that high-fidelity simulation can serve as a proxy for analyzing performance and its dependence on external factors. Evaluation with RoboLab exposes significant performance gaps in current state-of-the-art models. By providing granular metrics and a scalable toolset, RoboLab offers a principled framework for evaluating the true generalization capabilities of task-generalist robotic policies.
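To make the perturbation-sensitivity analysis concrete, the sketch below sweeps a policy over a set of controlled perturbation factors and reports the per-factor drop in success rate relative to the unperturbed condition. This is a minimal sketch only, not RoboLab's actual API: the `policy`/`env` interface, the factor names, and the override parameters are all hypothetical stand-ins for whatever the framework exposes.

```python
# Minimal sketch of a perturbation-sensitivity sweep. All interfaces and
# factor names below are hypothetical, not RoboLab's actual API.

# Hypothetical perturbation factors: each maps to environment overrides.
FACTORS = {
    "nominal":     {},                        # unperturbed baseline
    "lighting":    {"light_intensity": 0.4},  # dim the scene lights
    "object_pose": {"pose_jitter_cm": 5.0},   # jitter object placement
    "distractors": {"num_distractors": 3},    # add clutter objects
}

def success_rate(policy, env, task, overrides, n_rollouts=20):
    """Fraction of seeded rollouts that end with the task's success flag set."""
    successes = 0
    for seed in range(n_rollouts):
        obs = env.reset(task=task, seed=seed, overrides=overrides)
        done, info = False, {}
        while not done:
            action = policy.act(obs)
            obs, done, info = env.step(action)
        successes += int(info.get("success", False))
    return successes / n_rollouts

def sensitivity_profile(policy, env, task):
    """Per-factor performance drop relative to the unperturbed condition."""
    rates = {name: success_rate(policy, env, task, overrides)
             for name, overrides in FACTORS.items()}
    return {name: rates["nominal"] - rate
            for name, rate in rates.items() if name != "nominal"}
```

Under this scheme, a large drop for a single factor (e.g., lighting) localizes a policy's fragility to that factor, which is the kind of granular, per-factor metric the abstract refers to.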
Community
The following papers were recommended by the Semantic Scholar API:
- ManipArena: Comprehensive Real-world Evaluation of Reasoning-Oriented Generalist Robot Manipulation (2026)
- RoboPlayground: Democratizing Robotic Evaluation through Structured Physical Domains (2026)
- AnySlot: Goal-Conditioned Vision-Language-Action Policies for Zero-Shot Slot-Level Placement (2026)
- LiLo-VLA: Compositional Long-Horizon Manipulation via Linked Object-Centric Policies (2026)
- Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds (2026)
- CoEnv: Driving Embodied Multi-Agent Collaboration via Compositional Environment (2026)
- $\pi$, But Make It Fly: Physics-Guided Transfer of VLA Models to Aerial Manipulation (2026)