RL-Struct: A Lightweight Reinforcement Learning Framework for Reliable Structured Output in LLMs
Paper: arXiv:2512.00319
We introduce RL-Struct, a lightweight reinforcement learning framework designed to close the "Structure Gap": the tension between probabilistic token generation and deterministic structured formats such as JSON. By leveraging GRPO (Group Relative Policy Optimization) and a multi-dimensional reward function, our model achieves superior structural reliability without the high inference latency of constrained decoding.
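In GRPO, each prompt is answered with a group of sampled completions, and each completion's advantage is its reward normalized by the group's mean and standard deviation, which removes the need for a learned value model. Below is a minimal sketch of that advantage computation; the epsilon and tensor layout are assumptions for illustration.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each completion's reward
    against the mean and std of its own sampling group (one row per prompt)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Rewards for G = 4 completions sampled from one prompt.
rewards = torch.tensor([[0.2, 0.9, 0.4, 0.9]])
print(grpo_advantages(rewards))  # higher-reward completions get positive advantages
```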
The following is the system prompt:
```
You are a precise recipe assistant. Always respond in the following JSON format:
{
  "reasoning": "Your step-by-step reasoning here...",
  "answer": "{\"name\": \"Recipe Name\", \"nutrition\": \"Calories: ..., Protein: ..., Fat: ...\"}"
}
Do not include any other text, explanations, or markdown. Only output valid JSON.
```
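Note that the required format is double-encoded: the "answer" field is itself a JSON string that must parse again. As a hypothetical illustration of a multi-dimensional reward over this format (field names come from the prompt above; the terms and weights are assumptions, not the paper's exact function):

```python
import json

def format_reward(completion: str) -> float:
    """Illustrative multi-dimensional reward with separate terms for JSON
    validity, top-level schema adherence, and the nested answer payload."""
    r_valid = r_schema = r_inner = 0.0
    try:
        outer = json.loads(completion)
        r_valid = 1.0  # the completion parses as JSON at all
        if isinstance(outer, dict) and set(outer) == {"reasoning", "answer"}:
            r_schema = 1.0  # exactly the two required top-level keys
            inner = json.loads(outer["answer"])  # "answer" is JSON-in-a-string
            if isinstance(inner, dict) and {"name", "nutrition"} <= set(inner):
                r_inner = 1.0  # nested recipe payload has the expected fields
    except (json.JSONDecodeError, TypeError):
        pass
    # Weights are illustrative only.
    return 0.4 * r_valid + 0.3 * r_schema + 0.3 * r_inner
```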
Results reported in the paper (all values in %):

| Method | Structural Accuracy | JSON Validity | Content Accuracy |
|---|---|---|---|
| GPT-3.5 (zero-shot) | 45.5 | 82.1 | 88.0 |
| LLaMA-3-8B (SFT) | 78.2 | 85.4 | 86.0 |
| RL-Struct (ours) | 89.7 | 92.1 | 84.5 |
Base model: Qwen/Qwen3-4B-Instruct-2507 (4-bit quantization)
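As a usage sketch, the base model can be loaded in 4-bit with transformers and bitsandbytes; the quantization settings below are assumptions for illustration, not the paper's reported configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit (NF4) quantization config; exact settings are assumed.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    quantization_config=bnb_config,
    device_map="auto",
)
```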