# MiniMax-M2 Model Repository
This is the official MiniMax-M2 model repository, containing a 230B-parameter Mixture-of-Experts (MoE) model with 10B active parameters, optimized for coding and agentic workflows.
## Model Information
- Model Type: Mixture of Experts (MoE)
- Total Parameters: 230B
- Active Parameters: 10B
- Architecture: Transformer-based MoE
- License: Modified MIT
- Pipeline Tag: text-generation
## Usage
This model can be used with various inference frameworks:
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model weights and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("your-username/MiniMax-M2")
tokenizer = AutoTokenizer.from_pretrained("your-username/MiniMax-M2")
```
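A minimal generation sketch building on the snippet above, assuming the checkpoint ships a chat template; the prompt and token budget are placeholders, not official recommendations:

```python
# Hypothetical usage sketch: chat-style generation with the model loaded above.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```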
### vLLM
```python
from vllm import LLM, SamplingParams

# Initialize the vLLM engine with the model
llm = LLM(model="your-username/MiniMax-M2")
```
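A short offline-inference sketch using the recommended sampling parameters listed under Model Details below; the prompt and output length are placeholders:

```python
# Sampling settings follow the recommended parameters from the Model Details section.
sampling_params = SamplingParams(temperature=1.0, top_p=0.95, top_k=40, max_tokens=512)
outputs = llm.generate(["Write a Python function that reverses a string."], sampling_params)
print(outputs[0].outputs[0].text)
```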
### SGLang
```python
from sglang import function, system, user, assistant, gen

# Define a simple single-question chat program
@function
def multi_turn_question(s, question):
    s += system("You are a helpful assistant.")
    s += user(question)
    s += assistant(gen("answer", max_tokens=256))
```
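To execute the program, point SGLang at a running server; the endpoint URL below is an assumed local deployment, not part of this repository:

```python
import sglang as sgl

# Assumed local SGLang server endpoint; adjust to match your deployment.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

# Run the program and read back the generated "answer" field.
state = multi_turn_question.run(question="Write a Python function that reverses a string.")
print(state["answer"])
```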
## Model Details
- Context Length: 128K tokens
- Thinking Format: Uses `<think>...</think>` tags for reasoning
- Recommended Parameters:
  - Temperature: 1.0
  - Top-p: 0.95
  - Top-k: 40
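Because the model wraps its reasoning in `<think>...</think>` tags, downstream code typically separates the trace from the final answer. A minimal sketch of that post-processing step (the helper name and regex handling are illustrative, not an official utility):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split generated text into the <think> reasoning trace and the final answer."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Example on a hypothetical model output
reasoning, answer = split_thinking("<think>Reverse via slicing.</think>def rev(s): return s[::-1]")
print(answer)  # -> def rev(s): return s[::-1]
```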
## Deployment Guides
See the `docs/` directory for detailed deployment guides.
## License
This model is released under the Modified MIT License. See the license file for details.