goxq committed · Commit bfb6509 (verified) · 1 parent: 56f89da

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+194, −0)

README.md (added)
---
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- robotics
- embodied-ai
- multimodal
- benchmark
- vision-language
- EO-1
pretty_name: EO-Bench
size_categories:
- n<1K
---

# EO-Bench: Embodied Reasoning Benchmark for Vision-Language Models

<p align="center">
  <a href="https://arxiv.org/abs/2508.21112"><img src="https://img.shields.io/badge/arXiv-2508.21112-b31b1b.svg" alt="arXiv"></a>
  <a href="https://github.com/SHAILAB-IPEC/EO1"><img src="https://img.shields.io/badge/GitHub-EO--1-blue.svg" alt="GitHub"></a>
  <a href="https://huggingface.co/datasets/IPEC-COMMUNITY/EO-Data1.5M"><img src="https://img.shields.io/badge/🤗-EO--Data1.5M-yellow.svg" alt="Dataset"></a>
</p>

## Overview

**EO-Bench** is a benchmark for evaluating the **embodied reasoning** capabilities of vision-language models (VLMs) in robotics scenarios. It is part of the [EO-1](https://github.com/SHAILAB-IPEC/EO1) project, which develops unified embodied foundation models for general robot control.

The benchmark assesses model performance across **12 distinct embodied reasoning categories**, covering trajectory reasoning, visual grounding, action reasoning, and more. All tasks are presented in a multiple-choice format to enable standardized evaluation.

## Dataset Description

### Statistics

| Metric | Value |
|--------|-------|
| Total Samples | 600 |
| Question Types | 12 |
| Total Images | 668 |
| Answer Format | Multiple Choice (A/B/C/D) |

### Question Type Distribution

| Question Type | Count |
|---------------|-------|
| Trajectory Reasoning | 134 |
| Visual Grounding | 128 |
| Process Verification | 113 |
| Multiview Pointing | 68 |
| Relation Reasoning | 40 |
| Robot Interaction | 37 |
| Object State | 34 |
| Episode Caption | 17 |
| Action Reasoning | 13 |
| Task Planning | 10 |
| Direct Influence | 4 |
| Counterfactual | 2 |

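You can reproduce this breakdown directly from the loaded data. The snippet below is a minimal sketch; it assumes the split is named `train`, as in the usage examples further down.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Count how many questions fall into each of the 12 categories
counts = Counter(dataset["train"]["question_type"])
for question_type, count in counts.most_common():
    print(f"{question_type}: {count}")
```
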
### Data Fields

Each sample contains the following fields (an illustrative record is sketched after the list):

- **`id`** (int): Unique identifier for each sample
- **`question`** (str): The question text with multiple-choice options
- **`question_type`** (str): Category of the question (one of 12 types)
- **`answer`** (str): Correct answer letter (A, B, C, or D)
- **`num_images`** (int): Number of images associated with the question
- **`image_paths`** (list[str]): Relative paths to the associated images

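For illustration, a loaded record has roughly this shape. The concrete values below are invented for the example; the real question text, category labels, and file names come from the dataset itself.

```python
# Hypothetical record: field names match the schema above, values are made up.
sample = {
    "id": 0,
    "question": (
        "Which trajectory should the robot arm follow to reach the target object?\n"
        "A. Move left, then forward\n"
        "B. Move forward, then left\n"
        "C. Move right, then back\n"
        "D. Stay in place"
    ),
    "question_type": "Trajectory Reasoning",
    "answer": "B",
    "num_images": 2,
    "image_paths": ["images/example_0_view1.png", "images/example_0_view2.png"],
}
```
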
### Question Type Descriptions

| Type | Description |
|------|-------------|
| **Trajectory Reasoning** | Predict the optimal path for robot end-effector movement |
| **Visual Grounding** | Locate specific objects or regions in the scene |
| **Process Verification** | Verify the correctness of a robotic action sequence |
| **Multiview Pointing** | Identify corresponding points across multiple camera views |
| **Relation Reasoning** | Understand spatial relationships between objects |
| **Robot Interaction** | Predict outcomes of robot-environment interactions |
| **Object State** | Recognize and reason about object states |
| **Episode Caption** | Describe robotic manipulation episodes |
| **Action Reasoning** | Reason about the effects of robot actions |
| **Task Planning** | Plan sequences of actions to achieve goals |
| **Direct Influence** | Understand direct causal effects in manipulation |
| **Counterfactual** | Reason about hypothetical alternative scenarios |

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Access samples
for sample in dataset["train"]:
    print(f"ID: {sample['id']}")
    print(f"Question: {sample['question']}")
    print(f"Type: {sample['question_type']}")
    print(f"Answer: {sample['answer']}")
    print(f"Images: {sample['image_paths']}")
    break
```

### Loading Images

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from PIL import Image

dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")

# Get the first sample
sample = dataset["train"][0]

# Load the associated images; the image files are stored in the repository's
# 'images' folder and referenced by the relative paths in `image_paths`
for img_path in sample["image_paths"]:
    local_path = hf_hub_download(
        repo_id="IPEC-COMMUNITY/EO-Bench",
        filename=img_path,
        repo_type="dataset",
    )
    image = Image.open(local_path)
    image.show()
```

### Evaluation Example

The skeleton below computes overall accuracy. `load_image` and `model.predict` are placeholders for your own image loader and model interface; replace them with whatever your evaluation stack provides.

```python
from datasets import load_dataset

def evaluate_model(model, dataset):
    correct = 0
    total = 0

    for sample in dataset["train"]:
        # Load images and prepare the model input
        # (load_image is a placeholder, e.g. the hf_hub_download + PIL approach above)
        images = [load_image(p) for p in sample["image_paths"]]
        question = sample["question"]

        # Get the model's predicted answer letter (A/B/C/D)
        prediction = model.predict(images, question)

        # Compare with the ground-truth answer
        if prediction == sample["answer"]:
            correct += 1
        total += 1

    accuracy = correct / total * 100
    return accuracy

# Example usage:
# dataset = load_dataset("IPEC-COMMUNITY/EO-Bench")
# print(f"Accuracy: {evaluate_model(my_model, dataset):.1f}%")
```
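
In practice, VLMs often return free-form text rather than a bare option letter. A small normalization step (a sketch, not part of any official evaluation script for this benchmark) makes the comparison with `answer` more robust:

```python
import re

def extract_choice(prediction):
    """Pull the first standalone option letter (A-D) out of a model response."""
    match = re.search(r"\b([A-D])\b", prediction.strip().upper())
    return match.group(1) if match else None

# All three of these normalize to "B"
for raw in ["B", "The answer is B.", "(b) Move forward, then left"]:
    print(extract_choice(raw))
```
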
## Related Resources

### EO-1 Model

EO-1 is a unified embodied foundation model that processes interleaved vision-text-action inputs using a single decoder-only transformer architecture. The model achieves state-of-the-art performance on multimodal embodied reasoning tasks.

- **Paper**: [EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control](https://arxiv.org/abs/2508.21112)
- **GitHub**: [SHAILAB-IPEC/EO1](https://github.com/SHAILAB-IPEC/EO1)
- **Models**: [IPEC-COMMUNITY/EO-1-3B](https://huggingface.co/IPEC-COMMUNITY/EO-1-3B)

### Training Data

EO-1 is trained on **EO-Data1.5M**, a multimodal embodied reasoning dataset with over 1.5 million high-quality interleaved samples.

- **Dataset**: [IPEC-COMMUNITY/EO-Data1.5M](https://huggingface.co/datasets/IPEC-COMMUNITY/EO-Data1.5M)

## Citation

If you use this benchmark in your research, please cite:

```bibtex
@article{eo1,
  title={EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control},
  author={EO-1 Team},
  journal={arXiv preprint arXiv:2508.21112},
  year={2025}
}
```

## License

This dataset is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

## Contact

For questions or issues, please open an issue on the [GitHub repository](https://github.com/SHAILAB-IPEC/EO1) or contact the EO-1 team.