# 8f0de2cbfc886988896c73c95fccb371
This model is a fine-tuned version of google-bert/bert-large-cased-whole-word-masking-finetuned-squad on the stsb subset of the nyu-mll/glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.4801
- Data Size: 1.0
- Epoch Runtime: 20.3375
- MSE: 0.4803
- MAE: 0.5207
- R2: 0.7852
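
STS-B is a sentence-pair regression task scored on a 0-5 similarity scale, so the fine-tuned model carries a single-logit regression head. Below is a minimal inference sketch; the checkpoint id `path/to/this-checkpoint` is a placeholder, not the published location of this model:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder id -- substitute the actual Hub repo or local path for this checkpoint.
checkpoint = "path/to/this-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)  # regression head (num_labels=1)
model.eval()

# Encode a sentence pair; the model predicts a similarity score on a 0-5 scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a stringed instrument.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {score:.2f}")
```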
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
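
For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as in the sketch below. This is not the original training script, and `output_dir` is a hypothetical name; with 4 devices, the per-device batch size of 8 yields the total batch size of 32 listed above.

```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters (output_dir is illustrative).
training_args = TrainingArguments(
    output_dir="bert-large-stsb",    # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # 4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```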
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | MSE | MAE | R2 |
|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 3.8897 | 0 | 1.6591 | 3.8908 | 1.6428 | -0.7405 |
| No log | 1 | 179 | 2.5673 | 0.0078 | 2.1642 | 2.5680 | 1.3172 | -0.1488 |
| No log | 2 | 358 | 2.1251 | 0.0156 | 2.5294 | 2.1257 | 1.2070 | 0.0491 |
| No log | 3 | 537 | 1.0714 | 0.0312 | 3.2148 | 1.0715 | 0.8228 | 0.5207 |
| No log | 4 | 716 | 1.1611 | 0.0625 | 4.4331 | 1.1611 | 0.8623 | 0.4806 |
| No log | 5 | 895 | 0.6477 | 0.125 | 5.9682 | 0.6479 | 0.6391 | 0.7102 |
| 0.0595 | 6 | 1074 | 0.6796 | 0.25 | 8.6019 | 0.6799 | 0.6443 | 0.6958 |
| 0.5724 | 7 | 1253 | 0.5155 | 0.5 | 12.0082 | 0.5158 | 0.5573 | 0.7693 |
| 0.4479 | 8 | 1432 | 0.4967 | 1.0 | 20.5912 | 0.4970 | 0.5488 | 0.7777 |
| 0.3088 | 9 | 1611 | 0.5115 | 1.0 | 20.3107 | 0.5117 | 0.5287 | 0.7711 |
| 0.2770 | 10 | 1790 | 0.4597 | 1.0 | 19.9816 | 0.4598 | 0.5163 | 0.7943 |
| 0.2304 | 11 | 1969 | 0.5006 | 1.0 | 19.7559 | 0.5008 | 0.5405 | 0.7760 |
| 0.1824 | 12 | 2148 | 0.4475 | 1.0 | 20.2815 | 0.4477 | 0.5102 | 0.7997 |
| 0.1648 | 13 | 2327 | 0.4415 | 1.0 | 19.6776 | 0.4417 | 0.5017 | 0.8024 |
| 0.1407 | 14 | 2506 | 0.4685 | 1.0 | 20.0880 | 0.4688 | 0.5093 | 0.7903 |
| 0.1289 | 15 | 2685 | 0.4753 | 1.0 | 19.7466 | 0.4755 | 0.5220 | 0.7873 |
| 0.1121 | 16 | 2864 | 0.4612 | 1.0 | 19.6118 | 0.4614 | 0.5059 | 0.7936 |
| 0.1129 | 17 | 3043 | 0.4801 | 1.0 | 20.3375 | 0.4803 | 0.5207 | 0.7852 |
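
The MSE, MAE, and R2 columns are standard regression metrics. A minimal sketch of a `compute_metrics` hook that would produce them with scikit-learn (the function name and the use of scikit-learn are assumptions, not taken from the original run):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Regression metrics matching the table columns above (MSE, MAE, R2)."""
    logits, labels = eval_pred
    preds = np.squeeze(logits)  # the regression head emits one logit per sentence pair
    return {
        "mse": mean_squared_error(labels, preds),
        "mae": mean_absolute_error(labels, preds),
        "r2": r2_score(labels, preds),
    }
```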
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1