
Fine-tuned Model: gemma3-12b-etghan_sft_v4

πŸ“š Training Configuration

  • data_path: QomSSLab/etghan_sft_v4
  • output_dir: gemma312b_lora_chckpnts
  • new_model_name: gemma3-12b-etghan_sft_v4
  • data_ratio: 1.0
  • model_name: QomSSLab/Legal-gemma3-12b-it-lora-thinking
  • use_4bit: False
  • use_lora: True
  • max_seq_length: 4000
  • batch_size: 1
  • gradient_accu: 8
  • epochs: 1
  • learning_rate: 5e-05
  • lora_alpha: 64
  • lora_drop: 0.05
  • lora_r: 64
  • tune_embedding_layer: False
  • hf_token: ********
  • resume_from_checkpoint: False
  • use_8bit_optimizer: True
  • push_to_hub: True
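The hyperparameters above can be mapped onto a `peft` LoRA config and a TRL `SFTConfig` roughly as sketched below. This is a reconstruction under assumptions, not the original training script: the card only lists the values, and the argument names follow current `peft`/`trl` APIs rather than the code that produced this model.

```python
# Sketch mapping the listed hyperparameters onto peft + TRL configs.
# Assumption: the run used TRL's SFTTrainer; only the values are from the card.
from peft import LoraConfig
from trl import SFTConfig

# LoRA adapter settings (lora_r, lora_alpha, lora_drop above).
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Trainer settings. Effective batch size = batch_size * gradient_accu = 1 * 8 = 8.
training_args = SFTConfig(
    output_dir="gemma312b_lora_chckpnts",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-5,
    max_seq_length=4000,
    bf16=True,                 # assumption: matches the card's BF16 tensor type
    optim="paged_adamw_8bit",  # assumption: corresponds to use_8bit_optimizer=True
    push_to_hub=True,
)
```

Passing `lora_config` and `training_args` to an `SFTTrainer` together with the base model `QomSSLab/Legal-gemma3-12b-it-lora-thinking` and the dataset `QomSSLab/etghan_sft_v4` would reproduce a setup consistent with the values listed above.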

Auto-generated after training.

Model format: Safetensors
Model size: 12B params
Tensor type: BF16