---
task_categories:
  - text-generation
language:
  - en
size_categories:
  - 1B<n<10B
---


# Dataset: LLaDA-Sample-10BT

- **Base:** [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (subset `sample-10BT`)
- **Purpose:** Training LLaDA (Large Language Diffusion Models)

## Preprocessing

- **Tokenizer:** [GSAI-ML/LLaDA-8B-Instruct](https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct)
- **Chunking:** up to 4,096 tokens per chunk (1% of chunks are randomly sized between 1 and 4,096 tokens)
- **Noisy masking:** applied with noise factor ε = 1×10⁻³ (see the sketch after this list)
- **Fields per chunk** (PyTorch tensors):
  - `input_ids`
  - `noisy_input_ids`
  - `mask`
  - `t` (time scalar)
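
The masking procedure itself isn't spelled out here, so the following is a minimal sketch of a LLaDA-style forward process that could produce the `noisy_input_ids`, `mask`, and `t` fields. The mask token id and the exact sampling of `t` are assumptions; check the GSAI-ML/LLaDA-8B-Instruct tokenizer and the pipeline code for the actual values.

```python
import torch

MASK_TOKEN_ID = 126336  # assumption: [MASK] id of GSAI-ML/LLaDA-8B-Instruct; verify against the tokenizer
EPS = 1e-3              # noise factor ε from the list above

def noisy_mask(input_ids: torch.Tensor, eps: float = EPS):
    """Sketch of a LLaDA-style forward masking step.

    Samples a timestep t and independently replaces each token with the
    mask token with probability t, keeping t >= eps so no chunk is left
    entirely unmasked.
    """
    t = torch.empty(()).uniform_(eps, 1.0)   # time scalar in [eps, 1)
    mask = torch.rand(input_ids.shape) < t   # True where a token gets masked
    noisy_input_ids = torch.where(
        mask, torch.full_like(input_ids, MASK_TOKEN_ID), input_ids
    )
    return noisy_input_ids, mask, t
```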

## Statistics

- **Total chunks:** ~2,520,000
- **Shards:** 252 `.pt` files (see the loading sketch below)
- **Chunks per shard:** 10,000
- **Average shard size:** ~702–708 MB
- **Total size:** ~166 GB
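
With 252 shards of 10,000 chunks each, a single shard can be inspected directly with `torch.load`. The file name and the shard layout (a list of per-chunk dicts) are assumptions based on the field list above; adjust to the actual structure.

```python
import torch

# Hypothetical shard name; the dataset ships 252 such .pt files.
shard = torch.load("chunks_000.pt", map_location="cpu")

chunk = shard[0]                       # assumed: one dict per chunk
print(chunk["input_ids"].shape)        # up to 4,096 token ids
print(chunk["noisy_input_ids"].shape)  # same shape, masked positions replaced
print(chunk["mask"].sum())             # number of noised positions
print(chunk["t"])                      # diffusion time scalar
```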

## Usage

This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
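
As a starting point, a shard can be wrapped in a standard PyTorch `Dataset`. This is a minimal sketch, not the repository's actual loader: the shard filename and per-chunk dict layout are assumptions, and the 1% of variable-length chunks would need a padding `collate_fn` before batching at sizes above 1.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class LLaDAShardDataset(Dataset):
    """Loads one .pt shard (10,000 pre-noised chunks) into memory."""

    def __init__(self, shard_path):
        self.chunks = torch.load(shard_path, map_location="cpu")

    def __len__(self):
        return len(self.chunks)

    def __getitem__(self, idx):
        c = self.chunks[idx]
        return c["input_ids"], c["noisy_input_ids"], c["mask"], c["t"]

# batch_size=1 sidesteps padding; larger batches need a collate_fn for the
# minority of chunks shorter than 4,096 tokens.
loader = DataLoader(LLaDAShardDataset("chunks_000.pt"), batch_size=1, shuffle=True)
```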