---
task_categories:
- text-generation
language:
- en
size_categories:
- 1B<n<10B
---
# LLaDA-Sample-10BT

- Base: HuggingFaceFW/fineweb (subset sample-10BT)
- Purpose: training LLaDA (Large Language Diffusion Models)
## Preprocessing

- Tokenizer: GSAI-ML/LLaDA-8B-Instruct
- Chunking: up to 4,096 tokens per chunk (1% of chunks randomly sized between 1 and 4,096 tokens)
- Noisy masking: applied with noise factor ε = 1×10⁻³ (see the sketch after this list)
- Fields per chunk (PyTorch tensors): `input_ids`, `noisy_input_ids`, `mask`, `t` (time scalar)
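The masking step can be reproduced roughly as follows. This is a minimal sketch, assuming the LLaDA-style forward process (sample a masking ratio t ~ U(ε, 1), then mask each token independently with probability t); the mask token id and the helper name `make_chunk` are illustrative placeholders, not the repository's exact code.

```python
import torch

MASK_TOKEN_ID = 126336  # assumed mask token id for the LLaDA tokenizer; verify against the repo
EPS = 1e-3              # noise factor ε from the preprocessing description above

def make_chunk(input_ids: torch.Tensor, eps: float = EPS) -> dict:
    """Build one training chunk: sample a time scalar t and mask tokens independently."""
    # Sample t in [eps, 1) so the masking probability is never exactly zero.
    t = (1.0 - eps) * torch.rand(1) + eps

    # Mask each position independently with probability t.
    mask = torch.rand(input_ids.shape) < t
    noisy_input_ids = torch.where(
        mask, torch.full_like(input_ids, MASK_TOKEN_ID), input_ids
    )

    return {
        "input_ids": input_ids,
        "noisy_input_ids": noisy_input_ids,
        "mask": mask,
        "t": t,
    }
```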
## Statistics

- Total chunks: ~2,520,000
- Shards: 252 `.pt` files
- Chunks per file: 10,000
- Average file size: ~702–708 MB
- Total size: ~166 GB
## Usage
This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
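A minimal loading sketch, assuming each `.pt` shard stores a list of chunk dictionaries with the fields listed above (the shard filename is a placeholder; check the repository for the actual layout):

```python
import torch

# Placeholder filename; see the repository for the actual shard naming scheme.
shard = torch.load("shard_000.pt")  # assumed: a list of ~10,000 chunk dicts

chunk = shard[0]
print(chunk["input_ids"].shape)        # original token ids, up to 4,096 per chunk
print(chunk["noisy_input_ids"].shape)  # same shape, with masked positions replaced
print(chunk["mask"].shape)             # boolean mask of noised positions
print(chunk["t"])                      # time scalar (masking ratio) for this chunk
```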
