
DMPO Demonstration Datasets

Pre-processed demonstration datasets for DMPO: Dispersive MeanFlow Policy Optimization.

Overview

This repository contains pre-processed demonstration data for pre-training DMPO policies. Each dataset includes trajectory data and normalization statistics.

Dataset Structure

gym/
├── hopper-medium-v2/
├── walker2d-medium-v2/
├── ant-medium-expert-v2/
├── Humanoid-medium-v3/
├── kitchen-complete-v0/
├── kitchen-mixed-v0/
└── kitchen-partial-v0/

robomimic/
├── lift-img/
├── can-img/
├── square-img/
└── transport-img/

Each task folder contains:

  • train.npz - Training trajectories
  • normalization.npz - Observation and action normalization statistics
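Both files are standard NumPy .npz archives, so they can be inspected and loaded with numpy alone. A minimal sketch follows; the array key names used below ("observations", "actions", "obs_mean", "obs_std") are illustrative assumptions, not confirmed contents of the DMPO archives — check `.files` on the loaded archive to see the real keys. The sketch writes tiny stand-in archives first so it runs without the dataset present.

```python
import os
import tempfile

import numpy as np

# Stand-in archives so this sketch is runnable without downloading the
# dataset; the key names here are hypothetical.
tmpdir = tempfile.mkdtemp()
train_path = os.path.join(tmpdir, "train.npz")
norm_path = os.path.join(tmpdir, "normalization.npz")
np.savez(train_path,
         observations=np.zeros((100, 11)),
         actions=np.zeros((100, 3)))
np.savez(norm_path, obs_mean=np.zeros(11), obs_std=np.ones(11))

# Loading works the same way for the real train.npz / normalization.npz.
train = np.load(train_path)
print(sorted(train.files))  # lists the arrays stored in the archive

norm = np.load(norm_path)
obs = (train["observations"] - norm["obs_mean"]) / norm["obs_std"]
print(obs.shape)
```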

Usage

Use the hf:// prefix in config files to auto-download:

train_dataset_path: hf://gym/hopper-medium-v2/train.npz
normalization_path: hf://gym/hopper-medium-v2/normalization.npz
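Outside of the config files, the same hf:// paths can be resolved by hand with huggingface_hub. The helper below is a sketch of how such a prefix could be handled, not DMPO's actual loader; the repository id is assumed to be Guowei-Zou/DMPO-datasets (this dataset's Hub page).

```python
REPO_ID = "Guowei-Zou/DMPO-datasets"  # assumed repo id for this dataset


def resolve_path(path: str) -> str:
    """Resolve a config path: hf://-prefixed entries are fetched from the
    Hugging Face Hub (cached locally), anything else is returned unchanged.
    Sketch only; DMPO's own loader may handle the prefix differently."""
    prefix = "hf://"
    if not path.startswith(prefix):
        return path
    # Lazy import keeps the helper usable for plain local paths even when
    # huggingface_hub is not installed.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID,
                           filename=path[len(prefix):],
                           repo_type="dataset")


# resolve_path("hf://gym/hopper-medium-v2/train.npz") would download the
# file on first use and return its local cache path.
```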

Data Sources

  • Gym tasks: Derived from D4RL datasets
  • Robomimic tasks: Derived from Robomimic proficient-human demonstrations

Citation

@misc{zou2026stepenoughdispersivemeanflow,
      title={One Step Is Enough: Dispersive MeanFlow Policy Optimization},
      author={Guowei Zou and Haitao Wang and Hejun Wu and Yukun Qian and Yuhang Wang and Weibing Li},
      year={2026},
      eprint={2601.20701},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.20701},
}

License

MIT License
