# Modified Qwen3-30B-A3B
This is a modified version of Qwen/Qwen3-30B-A3B with an updated tokenizer configuration for improved agentic use with function calling.
## Modifications

The tokenizer configuration has been updated with:

- An enhanced chat template with `{% generation %}` tags for better tool mask handling
- An added `extra_special_tokens` field
- Improved compatibility with agentic workflows
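In the `transformers` chat-template language, `{% generation %}` ... `{% endgeneration %}` tags mark which rendered spans belong to the assistant, which is what lets `apply_chat_template(..., return_assistant_tokens_mask=True, return_dict=True)` produce a per-token mask for training or tool-call supervision. As a conceptual sketch (using illustrative `<GEN>`/`</GEN>` placeholder markers rather than the model's actual template, and a character-level rather than token-level mask):

```python
# Toy rendered chat where <GEN>...</GEN> stands in for the spans that
# {% generation %} ... {% endgeneration %} would mark in a real template.
# These markers and the example text are illustrative, not the Qwen3 template.
rendered = (
    "<|im_start|>user\nWhat is 2+2?<|im_end|>\n"
    "<|im_start|>assistant\n<GEN>4<|im_end|></GEN>\n"
)

def assistant_char_mask(text):
    """Return (clean_text, mask) where mask[i] is 1 for characters inside a
    <GEN>...</GEN> span -- mimicking, at character level, the assistant-token
    mask that return_assistant_tokens_mask=True yields at token level."""
    clean, mask, inside, i = [], [], False, 0
    while i < len(text):
        if text.startswith("<GEN>", i):
            inside = True
            i += len("<GEN>")
        elif text.startswith("</GEN>", i):
            inside = False
            i += len("</GEN>")
        else:
            clean.append(text[i])
            mask.append(1 if inside else 0)
            i += 1
    return "".join(clean), mask

clean, mask = assistant_char_mask(rendered)
```

Here only the assistant's reply (and its end-of-turn token) is masked in, so a loss or tool-call extractor can ignore user and system text.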
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("JetBrains-Research/Qwen3-30B-A3B-am")
tokenizer = AutoTokenizer.from_pretrained("JetBrains-Research/Qwen3-30B-A3B-am")

# Use with function calling / agentic mode
messages = [
    {"role": "user", "content": "Your prompt here"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
```
## Original Model
This model is based on Qwen/Qwen3-30B-A3B. Please refer to the original model card for more information about capabilities and limitations.