zuazo/whisper-large-eu-cv22.0

This model is a fine-tuned version of openai/whisper-large on the common_voice_22_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3446
  • WER: 11.5035
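
Since the description sections below are still placeholders, here is a minimal inference sketch using the Transformers ASR pipeline. The checkpoint name is this repository's; the audio file path is a hypothetical example.

```python
# Minimal inference sketch with the Transformers ASR pipeline.
# "sample.wav" is a hypothetical 16 kHz mono audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-large-eu-cv22.0",
    chunk_length_s=30,  # Whisper operates on 30-second audio windows
)

print(asr("sample.wav")["text"])
```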

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstructed configuration sketch follows the list):

  • learning_rate: 3.75e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 100000
  • mixed_precision_training: Native AMP
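
The list above maps roughly onto a Seq2SeqTrainingArguments configuration like the sketch below. This is a reconstruction from the reported values, not the author's actual training script; the output directory and any unlisted settings are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-eu-cv22.0",  # hypothetical path
    learning_rate=3.75e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # effective train batch size of 64
    seed=42,
    optim="adamw_torch",            # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=100_000,
    fp16=True,                      # native AMP mixed precision
)
```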

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|---------------|---------|-------|-----------------|---------|
| 0.0153        | 10.8234 | 5000  | 0.2641          | 10.2881 |
| 0.0077        | 21.6457 | 10000 | 0.2892          | 10.5315 |
| 0.0054        | 32.4680 | 15000 | 0.3138          | 10.5839 |
| 0.0058        | 43.2904 | 20000 | 0.3198          | 11.0335 |
| 0.0030        | 54.1127 | 25000 | 0.3352          | 10.7326 |
| 0.0047        | 64.9361 | 30000 | 0.3446          | 11.5035 |
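
The WER column is reported as a percentage. Figures like these are commonly computed with the Hugging Face evaluate library; the snippet below is an illustrative sketch with made-up example strings, not the author's evaluation code.

```python
import evaluate

# Illustrative WER computation; the strings are hypothetical.
wer_metric = evaluate.load("wer")
predictions = ["kaixo mundua"]        # hypothetical model transcript
references = ["kaixo mundu guztia"]   # hypothetical reference transcript
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}")  # card values, e.g. 11.5035, are percentages
```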

Framework versions

  • Transformers 4.52.3
  • PyTorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1