---
license: apache-2.0
task_categories:
- visual-question-answering
- image-to-text
language:
- en
tags:
- mobile-ui
- gui-grounding
- android
- ui-automation
- multimodal
size_categories:
- 10K<n<100K
pretty_name: Android Control Dataset for LLaMA-Factory
---

# Android Control Dataset

## Overview
This directory contains two dataset files (`and_ctrl_train.json` and `and_ctrl_test.json`) derived from the Android Control project by Google Research. These datasets have been formatted specifically for GUI grounding training in LLaMA-Factory.
## Dataset Description
The Android Control dataset consists of episodes where each episode contains multiple steps. Each step includes:
- Step instructions: Natural language instructions for UI interactions
- Actions: The type of action to perform (click, scroll, input text, etc.)
- Coordinates: Precise x, y coordinates for the action
The data has been extracted and formatted to train models for mobile UI understanding and interaction tasks.
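For example, each assistant target is a compact JSON object whose fields depend on the action type. The `click` form below appears verbatim in this dataset; the parameter names for the other action types (`direction`, `text`, `app_name`) are assumptions based on the original Android Control action schema and should be verified against the extracted data:

```json
{"action_type": "click", "x": 561, "y": 535}
{"action_type": "scroll", "direction": "down"}
{"action_type": "input_text", "text": "Recording 2"}
{"action_type": "open_app", "app_name": "Clock"}
{"action_type": "navigate_back"}
```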
## Files

- `and_ctrl_train.json`: Training dataset
- `and_ctrl_test.json`: Test/evaluation dataset
- `download_android_control.ipynb`: Jupyter notebook for downloading images and processing the original data
## Data Format
Each entry in the JSON files follows the LLaMA-Factory conversation format:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant that can identify what action to perform on mobile UI Screenshot given the user instruction."
    },
    {
      "role": "user",
      "content": "<image>Click on the Recording 2"
    },
    {
      "role": "assistant",
      "content": "{\"action_type\": \"click\", \"x\": 561, \"y\": 535}"
    }
  ],
  "images": ["and_ctrl/out_episode_18557_step_001.png"]
}
```
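As a quick sanity check, the files can be loaded with nothing but the standard library. A minimal sketch, assuming the directory layout described under Setup Instructions below:

```python
import json

# Load the training split and inspect the first sample.
with open("data/datasets/and_ctrl_train.json", encoding="utf-8") as f:
    samples = json.load(f)

sample = samples[0]
user_turn = sample["messages"][1]["content"]           # "<image>..." instruction
action = json.loads(sample["messages"][2]["content"])  # assistant turn is JSON-encoded

print(user_turn)
print(action["action_type"], action.get("x"), action.get("y"))
print(sample["images"][0])  # e.g. "and_ctrl/out_episode_18557_step_001.png"
```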
## Setup Instructions

To use these datasets in LLaMA-Factory:

1. **Create the image directory**:

   ```bash
   mkdir -p data/and_ctrl
   ```

2. **Download images**: Run the provided `download_android_control.ipynb` notebook to download and process the original images. The notebook will:
   - Download TFRecord files from Google Storage (`gs://gresearch/android_control/`)
   - Extract images and save them directly to the `and_ctrl/` directory
   - Automatically organize images with the naming convention `out_episode_{episode_id}_step_{step_number}.png`
   - Generate an `and_ctrl.json` file with the processed data

3. **Locate the dataset files**:
   - Images: stored in the `data/and_ctrl/` folder
   - Training dataset: `and_ctrl_train.json` in `data/datasets/`
   - Test dataset: `and_ctrl_test.json` in `data/datasets/`
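If you prefer to inspect or mirror the raw TFRecords outside the notebook, a minimal `gsutil` sketch (requires the Google Cloud SDK; the local target directory is an arbitrary choice):

```bash
# List the TFRecord shards in the public bucket
gsutil ls gs://gresearch/android_control/

# Mirror the bucket locally (can be large; -m enables parallel transfers)
gsutil -m cp -r gs://gresearch/android_control ./android_control_raw/
```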
## Dataset Statistics
Total samples: Train: 82,944 | Test: 904
| Action Type | Train | Test |
|---|---|---|
| click | 51,793 (62.44%) | 125 (13.83%) |
| scroll | 11,005 (13.27%) | 125 (13.83%) |
| input_text | 5,966 (7.19%) | 125 (13.83%) |
| wait | 5,657 (6.82%) | 125 (13.83%) |
| open_app | 5,572 (6.72%) | 125 (13.83%) |
| navigate_back | 2,909 (3.51%) | 125 (13.83%) |
| long_press | 42 (0.05%) | 125 (13.83%) |
| navigate_home | 0 (0.00%) | 29 (3.21%) |
**Note**: The training set shows the natural action distribution, with `click` dominant (62.44%), while the test set is intentionally balanced: most action types have equal representation (125 samples, ~13.83% each). The `navigate_home` action appears only in the test set.
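The table above can be reproduced directly from the split files; a small sketch, with the same assumptions about file locations as the loading example:

```python
import json
from collections import Counter

def action_distribution(path):
    """Tally action_type values across one split file."""
    with open(path, encoding="utf-8") as f:
        samples = json.load(f)
    counts = Counter(
        json.loads(s["messages"][2]["content"])["action_type"] for s in samples
    )
    total = sum(counts.values())
    for action, n in counts.most_common():
        print(f"{action:>15}: {n:6d} ({n / total:6.2%})")

action_distribution("data/datasets/and_ctrl_train.json")
action_distribution("data/datasets/and_ctrl_test.json")
```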
## Training Usage
These datasets are specifically formatted for training multimodal language models to:
- Understand mobile UI screenshots
- Ground natural language instructions to specific UI elements
- Generate precise action coordinates for UI automation
- Learn mobile app interaction patterns
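To train on these files, they also need to be registered in LLaMA-Factory's `data/dataset_info.json`. A sketch following the sharegpt/multimodal convention used by LLaMA-Factory's bundled examples (the entry name is arbitrary; verify the field names against your LLaMA-Factory version):

```json
"and_ctrl_train": {
  "file_name": "datasets/and_ctrl_train.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "messages",
    "images": "images"
  },
  "tags": {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant",
    "system_tag": "system"
  }
}
```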
## Source and Attribution
Original dataset: Google Research Android Control
The Android Control dataset was created by Google Research for advancing mobile UI understanding and automation research.
## License
This dataset is derived from Google Research's Android Control dataset, which is licensed under the Apache License 2.0. The reformatted version for LLaMA-Factory maintains the same Apache 2.0 license terms.
Copyright for the original dataset belongs to Google LLC. Any modifications or reformatting for LLaMA-Factory compatibility are also provided under Apache License 2.0.
## Notes
- The images are referenced with relative paths starting with `and_ctrl/`
- Each action includes the action type and necessary parameters (coordinates, text, direction, etc.)
- The test set can be used for evaluating model performance on unseen mobile UI interactions