Quantized 4-bit MLX text-only model converted from https://huggingface.co/google/gemma-3-12b-it using mlx-lm 0.22.2.
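A minimal usage sketch with the `mlx-lm` Python API (not part of the original card; the repo path below is an assumption — substitute this model's actual Hugging Face repo id). It loads the quantized weights and generates a reply using the tokenizer's chat template, and requires Apple Silicon with `mlx-lm` installed:

```python
# Hypothetical usage sketch; repo id is a placeholder assumption.
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hub.
model, tokenizer = load("path/to/gemma-3-12b-it-4bit-mlx")

prompt = "Explain what 4-bit quantization does in one sentence."

# Gemma is an instruction-tuned chat model, so wrap the prompt
# in the chat template before generating.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

Equivalently, `python -m mlx_lm.generate --model <repo-id> --prompt "..."` runs the same flow from the command line.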