Dicta-LM 3.0: Advancing The Frontier of Hebrew Sovereign LLMs

Dicta-LM 3.0 is a powerful open-weight collection of LLMs, trained on extensive corpora of Hebrew and English texts. The models are available for download and unrestricted use, and they set a new SOTA for Hebrew in their weight class, both as base models and as chat models.

This is the 24-billion-parameter base model, originally initialized from Mistral-Small-3.1-24B-Base-2503.

This version of the model is quantized to 4-bit weights (with 16-bit activations), allowing inference with significantly less memory at the cost of slightly reduced performance. This version of the model fits on a single 24GB GPU.
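
As a rough back-of-the-envelope estimate (our own, not from the report): 24 billion parameters at 4 bits per weight come to roughly 12 GB of weights, which leaves headroom on a 24 GB card for the 16-bit activations and the KV cache.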

For full details of this model, please read our release blog post or the technical report.

Note: This is not a chat model; rather, it is a base model that can be further fine-tuned. Chat model variants are available at the link below.

You can view and access the full collection of base/instruct unquantized/quantized versions of DictaLM 3.0 here.

Usage

vLLM

vllm serve dicta-il/DictaLM-3.0-24B-Base-W4A16

If you run out of memory on a 24GB GPU, reduce the context window and enable eager execution: --max-model-len 8192 --enforce-eager
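
Once served, the model is exposed through vLLM's OpenAI-compatible API. Below is a minimal sketch of querying it from Python, assuming the server is listening on the default http://localhost:8000 and the openai client package is installed; since this is a base model, it uses the plain completions endpoint rather than chat.

from openai import OpenAI

# Assumes the vllm serve command above is running on the default port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Base model: use text completion, not chat completion.
response = client.completions.create(
    model="dicta-il/DictaLM-3.0-24B-Base-W4A16",
    prompt="ירושלים היא",  # Hebrew prompt: "Jerusalem is"
    max_tokens=64,
    temperature=0.7,
)
print(response.choices[0].text)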

Notice

DictaLM-3.0-24B-Base-W4A16 is a pretrained base model and therefore does not have any moderation mechanisms.

Citation

If you use this model, please cite:

@article{Shmidman2025DictaLM3,
  title={{Dicta-LM 3.0: Advancing The Frontier of Hebrew Sovereign LLMs}},
  author={Shaltiel Shmidman and Avi Shmidman and Amir DN Cohen and Moshe Koppel},
  year={2025},
  publisher={{DICTA / Jerusalem, Israel}},
  note={https://www.dicta.org.il/publications/DictaLM_3_0___Techincal_Report.pdf}
}