- question-answering
---

# 🍋 EPFL-Smart-Kitchen: Lemonade benchmark

![image/png](...)

## 📖 Introduction

We introduce Lemonade: **L**anguage models **E**valuation of **MO**tion a**N**d **A**ction-**D**riven **E**nquiries.
Lemonade consists of 36,521 closed-ended QA pairs linked to egocentric video clips, categorized into three groups and six subcategories.
18,857 QAs focus on behavior understanding, leveraging the rich ground-truth behavior annotations of the EPFL-Smart-Kitchen to interrogate models about perceived actions (Perception) and to reason over unseen behaviors (Reasoning).
8,210 QAs involve longer video clips, challenging models on summarization (Summarization) and session-level inference (Session properties).
The remaining 9,463 QAs leverage the 3D pose estimation data to infer hand shapes, joint angles (Physical attributes), or trajectory velocities (Kinematics) from visual information.

## 💾 Content

This repository contains all egocentric videos recorded in the EPFL-Smart-Kitchen-30 dataset. You can download the rest of the dataset at ... and ... .

### Repository structure

```
Lemonade
├── MCQs
│   └── lemonade_benchmark.csv
├── videos
│   ├── YH2002_2023_12_04_10_15_23_hololens.mp4
│   └── ..
└── README.md
```
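
For a quick start, here is a minimal loading sketch, assuming the files are fetched from this Hugging Face dataset repository. The repo id `EPFL/Lemonade` is a placeholder (the actual download links are elided above), and the column layout of `lemonade_benchmark.csv` is not specified on this card:

```python
# Minimal sketch: download the benchmark files and open the MCQ table.
# "EPFL/Lemonade" is a hypothetical repo id -- substitute the real one.
from huggingface_hub import snapshot_download
import pandas as pd

local_dir = snapshot_download(
    repo_id="EPFL/Lemonade",      # hypothetical repo id (placeholder)
    repo_type="dataset",
    allow_patterns=["MCQs/*"],    # fetch only the CSV; the videos are large
)

mcqs = pd.read_csv(f"{local_dir}/MCQs/lemonade_benchmark.csv")
print(len(mcqs))                  # expected: 36,521 QA pairs
```

Passing `allow_patterns` avoids pulling the full video set until it is needed; drop the argument to mirror the whole repository.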

> We refer the reader to the associated publication for details about data processing and task descriptions.

## 📊 Evaluation results

## 🔧 Usage

The benchmark can be evaluated through the following GitHub repository: ... .
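
Pending that link, a minimal scoring sketch follows; the `answer` column name and letter-coded choices are assumptions about `lemonade_benchmark.csv`, not something this card specifies:

```python
# Minimal sketch: multiple-choice accuracy over the benchmark.
# The "answer" column and letter-coded predictions ("A"-"D") are
# assumptions about lemonade_benchmark.csv, not confirmed by this card.
import pandas as pd

def mcq_accuracy(mcqs: pd.DataFrame, predictions: list[str]) -> float:
    """Fraction of questions where the predicted choice matches ground truth."""
    if len(predictions) != len(mcqs):
        raise ValueError("one prediction per QA pair is required")
    hits = sum(p == t for p, t in zip(predictions, mcqs["answer"]))
    return hits / len(mcqs)
```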

## 📝 Citations

cite arxiv paper

## ❤️ Acknowledgments

We thank Andy Bonnetto for the design of the dataset and Matea Tashkovska for the adaptation of the evaluation platform. <br>
We thank members of the Mathis Group for Computational Neuroscience & AI (EPFL) for their feedback throughout the project.
This work was funded by EPFL, a Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.).
We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services.