Instructions for using luzimu/WebGen-LM-32B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use luzimu/WebGen-LM-32B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="luzimu/WebGen-LM-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("luzimu/WebGen-LM-32B")
model = AutoModelForCausalLM.from_pretrained("luzimu/WebGen-LM-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
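At 32B parameters, the model needs substantial GPU memory in full precision (roughly 64 GB in bfloat16). If that is a constraint, here is a minimal sketch of 4-bit quantized loading with bitsandbytes; this is an option we note for convenience, not a setup documented by the model authors, and it assumes `bitsandbytes` and `accelerate` are installed:

```python
# Sketch: 4-bit NF4 quantized loading to fit the 32B model on smaller GPUs.
# Assumes `pip install bitsandbytes accelerate`; quantization roughly
# quarters the weight memory versus bfloat16, with some quality cost.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("luzimu/WebGen-LM-32B")
model = AutoModelForCausalLM.from_pretrained(
    "luzimu/WebGen-LM-32B",
    quantization_config=bnb_config,
    device_map="auto",  # shard across available GPUs
)
```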
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use luzimu/WebGen-LM-32B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "luzimu/WebGen-LM-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-32B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
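Because the server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch with the `openai` client, assuming the vLLM server above is running locally on port 8000 (the `EMPTY` API key is an assumption for a default local deployment):

```python
# Minimal chat completion against the local vLLM server.
# Assumes `pip install openai` and the server started above.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",  # local servers typically accept any key
)

response = client.chat.completions.create(
    model="luzimu/WebGen-LM-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)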
Use Docker

```bash
docker model run hf.co/luzimu/WebGen-LM-32B
```
- SGLang
How to use luzimu/WebGen-LM-32B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "luzimu/WebGen-LM-32B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-32B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "luzimu/WebGen-LM-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "luzimu/WebGen-LM-32B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
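The same OpenAI-compatible endpoint also supports streaming, which is useful for long website-generation outputs. A minimal sketch with the `openai` Python client against the SGLang server above (the base URL and `EMPTY` key are assumptions for a default local deployment; the same code works against the vLLM server on port 8000):

```python
# Streaming chat completion against the local SGLang server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="luzimu/WebGen-LM-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant message.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```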
- Docker Model Runner
How to use luzimu/WebGen-LM-32B with Docker Model Runner:
```bash
docker model run hf.co/luzimu/WebGen-LM-32B
```
# WebGen-LM
WebGen-LM is trained on Bolt.diy trajectories generated from a subset of the WebGen-Bench training set (🤗 luzimu/WebGen-Bench). It was introduced in the paper WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch.
Project page: https://webgen-bench.github.io/

The training data and code can be found at WebGen-Bench (GitHub).
The WebGen-LM family of models is as follows:
| Models | HF Links |
|---|---|
| WebGen-LM-7B | 🤗 luzimu/WebGen-LM-7B |
| WebGen-LM-14B | 🤗 luzimu/WebGen-LM-14B |
| WebGen-LM-32B | 🤗 luzimu/WebGen-LM-32B |
## Sample Usage
You can use this model with the `transformers` library for text generation, in particular instruction-driven code generation.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "luzimu/WebGen-LM-32B"

# Load the tokenizer and shard the model across available GPUs in bfloat16.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write HTML, CSS, and JavaScript for a simple to-do list web application. The list should allow users to add and remove items."},
]

# Render the chat template into a prompt string, then tokenize it.
chat_input = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([chat_input], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,  # pass input_ids and attention_mask together
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens
output_text = tokenizer.decode(generated_ids[0][model_inputs.input_ids.shape[1]:], skip_special_tokens=False)
print(output_text)
```
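Chat models commonly wrap each generated file in a markdown code fence. Under that assumption, here is an illustrative post-processing sketch for saving the generated HTML, CSS, and JavaScript to files; the fence-parsing regex, the `extract_code_blocks` helper, and the file names are our own illustrative choices, not part of the model's documented output format:

```python
import re

# Illustrative post-processing sketch (assumption: the model wraps each
# generated file in a markdown code fence, as chat models commonly do).
FENCE = "`" * 3  # literal triple-backtick fence, built programmatically

def extract_code_blocks(text: str) -> list[tuple[str, str]]:
    """Return (language, code) pairs for every fenced block in `text`."""
    pattern = re.compile(FENCE + r"(\w*)\n(.*?)" + FENCE, re.DOTALL)
    return [(lang or "text", code) for lang, code in pattern.findall(text)]

# Hypothetical mapping from block language to an output file name.
filenames = {"html": "index.html", "css": "style.css", "javascript": "script.js"}
for lang, code in extract_code_blocks(output_text):
    name = filenames.get(lang.lower())
    if name:
        with open(name, "w", encoding="utf-8") as f:
            f.write(code)
```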
## Performance on WebGen-Bench

Full evaluation results for the WebGen-LM family are reported in the WebGen-Bench paper linked above.
## Citation
If you find our project useful, please cite:
```bibtex
@misc{lu2025webgenbenchevaluatingllmsgenerating,
      title={WebGen-Bench: Evaluating LLMs on Generating Interactive and Functional Websites from Scratch},
      author={Zimu Lu and Yunqiao Yang and Houxing Ren and Haotian Hou and Han Xiao and Ke Wang and Weikang Shi and Aojun Zhou and Mingjie Zhan and Hongsheng Li},
      year={2025},
      eprint={2505.03733},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03733},
}

@misc{lu2025webgenagentenhancinginteractivewebsite,
      title={WebGen-Agent: Enhancing Interactive Website Generation with Multi-Level Feedback and Step-Level Reinforcement Learning},
      author={Zimu Lu and Houxing Ren and Yunqiao Yang and Ke Wang and Zhuofan Zong and Junting Pan and Mingjie Zhan and Hongsheng Li},
      year={2025},
      eprint={2509.22644},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.22644},
}
```