{ "base_model": "dimasik87/07e17751-ab33-467b-b4a3-c3d1a9b3ecbe", "tree": [ { "model_id": "dimasik87/07e17751-ab33-467b-b4a3-c3d1a9b3ecbe", "gated": "False", "card": "---\nlibrary_name: peft\nlicense: llama3.1\nbase_model: unsloth/Llama-3.1-Storm-8B\ntags:\n- axolotl\n- generated_from_trainer\nmodel-index:\n- name: 07e17751-ab33-467b-b4a3-c3d1a9b3ecbe\n results: []\n---\n\n\n\n[\"Built](https://github.com/axolotl-ai-cloud/axolotl)\n
See axolotl config\n\naxolotl version: `0.4.1`\n```yaml\nadapter: lora\nbase_model: unsloth/Llama-3.1-Storm-8B\nbf16: auto\nchat_template: llama3\ndataset_prepared_path: null\ndatasets:\n- data_files:\n - c8a5ff254c4cb151_train_data.json\n ds_type: json\n field: synthesized text\n path: /workspace/input_data/c8a5ff254c4cb151_train_data.json\n type: completion\ndebug: null\ndeepspeed: null\nearly_stopping_patience: null\neval_max_new_tokens: 128\neval_table_size: null\nevals_per_epoch: 3\nflash_attention: false\nfp16: null\nfsdp: null\nfsdp_config: null\ngradient_accumulation_steps: 6\ngradient_checkpointing: true\ngroup_by_length: false\nhub_model_id: dimasik87/07e17751-ab33-467b-b4a3-c3d1a9b3ecbe\nhub_repo: null\nhub_strategy: checkpoint\nhub_token: null\nlearning_rate: 0.0001\nload_in_4bit: false\nload_in_8bit: false\nlocal_rank: null\nlogging_steps: 1\nlora_alpha: 64\nlora_dropout: 0.05\nlora_fan_in_fan_out: null\nlora_model_dir: null\nlora_r: 32\nlora_target_linear: true\nlr_scheduler: cosine\nmax_memory:\n 0: 70GiB\nmax_steps: 50\nmicro_batch_size: 4\nmlflow_experiment_name: /tmp/c8a5ff254c4cb151_train_data.json\nmodel_type: AutoModelForCausalLM\nnum_epochs: 3\noptimizer: adamw_torch\noutput_dir: miner_id_24\npad_to_sequence_len: true\nresume_from_checkpoint: null\ns2_attention: null\nsample_packing: false\nsave_steps: 25\nsave_strategy: steps\nsequence_len: 2048\nstrict: false\ntf32: false\ntokenizer_type: AutoTokenizer\ntorch_dtype: bfloat16\ntrain_on_inputs: false\ntrust_remote_code: true\nval_set_size: 0.05\nwandb_entity: null\nwandb_mode: online\nwandb_name: 07e17751-ab33-467b-b4a3-c3d1a9b3ecbe\nwandb_project: Gradients-On-Demand\nwandb_run: your_name\nwandb_runid: 07e17751-ab33-467b-b4a3-c3d1a9b3ecbe\nwarmup_steps: 10\nweight_decay: 0.01\nxformers_attention: null\n\n```\n\n
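For reference, a config like the one above is normally launched through the axolotl 0.4.x CLI. This is a hedged sketch only; the YAML filename is a placeholder, not a file shipped with this repository:\n\n```bash\n# Sketch only: replace config.yaml with the path to the config shown above\naccelerate launch -m axolotl.cli.train config.yaml\n```\n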

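Since this repository holds a PEFT (LoRA) adapter for unsloth/Llama-3.1-Storm-8B, a minimal loading sketch with transformers and peft follows. The repository id, base model, and bfloat16 dtype come from the card; the prompt and generation settings are illustrative assumptions, not an official usage example from the model author:\n\n```python\n# Minimal sketch: attach this LoRA adapter to the base model named in the card.\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nfrom peft import PeftModel\n\nbase_id = 'unsloth/Llama-3.1-Storm-8B'  # base model from the card\nadapter_id = 'dimasik87/07e17751-ab33-467b-b4a3-c3d1a9b3ecbe'  # this adapter repo\n\ntokenizer = AutoTokenizer.from_pretrained(base_id)\nmodel = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)\nmodel = PeftModel.from_pretrained(model, adapter_id)  # load the LoRA weights\n\n# Illustrative generation call; prompt and max_new_tokens are arbitrary choices\ninputs = tokenizer('Hello, world!', return_tensors='pt')\noutputs = model.generate(**inputs, max_new_tokens=32)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\n```\n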
\n\n# 07e17751-ab33-467b-b4a3-c3d1a9b3ecbe\n\nThis model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on an unspecified dataset.\nIt achieves the following results on the evaluation set:\n- Loss: nan\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 6\n- total_train_batch_size: 24\n- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 10\n- training_steps: 50\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 0.0 | 0.0005 | 1 | nan |\n| 0.0 | 0.0030 | 6 | nan |\n| 0.0 | 0.0061 | 12 | nan |\n| 0.0 | 0.0091 | 18 | nan |\n| 0.0 | 0.0122 | 24 | nan |\n| 0.0 | 0.0152 | 30 | nan |\n| 0.0 | 0.0183 | 36 | nan |\n| 0.0 | 0.0213 | 42 | nan |\n| 0.0 | 0.0243 | 48 | nan |\n\n\n### Framework versions\n\n- PEFT 0.13.2\n- Transformers 4.46.0\n- PyTorch 2.5.0+cu124\n- Datasets 3.0.1\n- Tokenizers 0.20.1", "metadata": "\"N/A\"", "depth": 0, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "dimasik87/07e17751-ab33-467b-b4a3-c3d1a9b3ecbe", "base_model_relation": "base" } ] }