Can you look at the CLIP as well?

#3
by patientxtr - opened

There is GGUF support added in the GGUF node, but I couldn't find any compatible quant of the CLIP on the web.

You mean the text encoder?

You can use any Qwen3 4B GGUF quant/finetune, I guess, as long as it's the base model, not the Instruct or Thinking one.

For example, the Unsloth quants work fine:
https://huggingface.co/unsloth/Qwen3-4B-GGUF

Doesn't work for me, neither does an official base model GGUF from Qwen. Using the workflow from this repo.

ClipLoaderGGUF
Error(s) in loading state_dict for Llama2:
size mismatch for model.layers.0.input_layernorm.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([4096]).

Same error repeats for all the model layers.
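
For context, the mismatch is between Qwen3-4B's hidden size (2560) and the Llama-style 4096 the loader is building, which is what an outdated node would do. A minimal PyTorch sketch (hypothetical module, not the actual ComfyUI code) that reproduces the same error:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a layernorm weight in the text encoder.
class Norm(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))

model = Norm(4096)              # model built with a Llama-style hidden size
ckpt = Norm(2560).state_dict()  # checkpoint with Qwen3-4B's hidden size

try:
    model.load_state_dict(ckpt)
except RuntimeError as e:
    # -> size mismatch for weight: copying a param with shape
    #    torch.Size([2560]) from checkpoint, the shape in current
    #    model is torch.Size([4096]).
    print(e)
```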

same for me too.

Show the full log. I've seen that error a few times on Reddit, and it was solved by updating to the latest ComfyUI.

So make sure to update ComfyUI and the ComfyUI-GGUF node to the latest version before you try it.
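
A sketch of the update steps, assuming a manual git install of ComfyUI (portable builds have their own update script; paths are examples, adjust to your setup):

```shell
# Update ComfyUI itself
cd ComfyUI
git pull
pip install -r requirements.txt

# Update the ComfyUI-GGUF custom node as well
cd custom_nodes/ComfyUI-GGUF
git pull
pip install -r requirements.txt
```

Then restart ComfyUI so the updated node code is actually loaded.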


Update your ComfyUI/GGUF node first. When ZImage dropped, a lot of people got this error; it turned out they were trying to run it without updating ComfyUI first.

Thanks, didn't even think of updating Comfy again since my last update was just a few days ago...

Works now, it's fast and good.
