{ "base_model": "Qwen/Qwen2.5-VL-32B-Instruct", "tree": [ { "model_id": "Qwen/Qwen2.5-VL-32B-Instruct", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n
\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 0, "children": [ "huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated", "ddvd233/QoQ-Med-VL-32B", "unsloth/Qwen2.5-VL-32B-Instruct", "minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard", "Flo0620/Qwen2_5_32GB_Debugging", "QAdottech/Qwen2.5-VL-32B-Instruct-merged-v2", "Flo0620/Qwen2_5_32BDebugging", "Flo0620/Qwen2_5_32B-8bit", "Flo0620/Qwen2_5_32B-8bit_2Epochs", "Flo0620/Qwen2_5_32B-8bit_r64_a128_d0_1_Final", "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData", "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData2", "litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8", "chancharikm/qwen2.5-vl-32b-cam-motion-preview", "One-RL-to-See-Them-All/Orsta-32B-0321", "CodeGoat24/UnifiedReward-qwen-32b", "One-RL-to-See-Them-All/Orsta-32B-0326", "Bofeee5675/TongUI-32B", "acchf/vision-price-trade-qwenvl-qlora-p", "orkungedik/recruitment-docs-32b-extractor" ], "children_count": 20, "adapters": [ "HongxinLi/0406_Qwen32B_AndWorld-CoT", "srai86825/qwen-vl-tool-assistant-lora" ], "adapters_count": 2, "quantized": [ "Qwen/Qwen2.5-VL-32B-Instruct-AWQ", "BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "unsloth/Qwen2.5-VL-32B-Instruct-GGUF", "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit", "unsloth/Qwen2.5-VL-32B-Instruct-bnb-4bit", "leon-se/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "samgreen/Qwen2.5-VL-32B-Instruct-GGUF", 
"leon-se/Qwen2.5-VL-32B-Instruct-W4A16-G128", "christopherthompson81/Qwen2.5-VL-32B-Instruct-exl2-4_25bpw", "DevQuasar/Qwen.Qwen2.5-VL-32B-Instruct-GGUF", "bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF", "lmstudio-community/Qwen2.5-VL-32B-Instruct-GGUF", "openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF", "openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "mradermacher/Qwen2.5-VL-32B-Instruct-GGUF", "mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF", "Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF", "TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "second-state/Qwen2.5-VL-32B-Instruct-GGUF", "gaianet/Qwen2.5-VL-32B-Instruct-GGUF", "ggml-org/Qwen2.5-VL-32B-Instruct-GGUF", "ig1/Qwen2.5-VL-32B-Instruct-FP8-Dynamic" ], "quantized_count": 23, "merges": [], "merges_count": 0, "total_derivatives": 45, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "Qwen/Qwen2.5-VL-32B-Instruct", "base_model_relation": "base" }, { "model_id": "huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- abliterated\n- uncensored\nlibrary_name: transformers\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\n\n\nThis is an uncensored version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it). \n\nIt was only the text part that was processed, not the image part.\n\n## Usage\nYou can use this model in your applications by loading it with Hugging Face's `transformers` library:\n\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\", torch_dtype=\"auto\", device_map=\"auto\"\n)\nprocessor = AutoProcessor.from_pretrained(\"huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\")\n\nimage_path = \"/tmp/test.png\"\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": f\"file://{image_path}\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\ngenerated_ids = model.generate(**inputs, max_new_tokens=256)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\noutput_text = output_text[0]\n\nprint(output_text)\n\n```\n\n### Donation\n##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.\n- bitcoin:\n```\n bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF", "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF" 
], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated", "base_model_relation": "base" }, { "model_id": "ddvd233/QoQ-Med-VL-32B", "gated": "unknown", "card": "---\nlicense: mit\nlanguage:\n- en\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training\n\nThis repository contains the model weights for QoQ-Med-VL-32B (Qwen Omni-Reasoning on Medical Questions), a multimodal clinical foundation model with reasoning capabilities.\n\n## Model Weights\n\n| Model | Weights | Avg. Val Accuracy |\n|-------|---------|---------------|\n| QoQ-Med-VL-7B | [\ud83e\udd17 HuggingFace](https://huggingface.co/ddvd233/QoQ-Med-VL-7B) | 68.6% |\n| QoQ-Med-VL-32B | [\ud83e\udd17 HuggingFace](https://huggingface.co/ddvd233/QoQ-Med-VL-32B) | 70.7% |\n\n## Quick Start\n\n### Installation\n\nFirst, ensure you have the necessary dependencies:\n\n```bash\npip install transformers qwen-vl-utils torch\n```\n\n### Loading the Model\n\nYou may load the QoQ-Med model and processors via transformers package:\n\n```python\nfrom transformers import AutoModelForVision2Seq, AutoProcessor\n\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"ddvd233/QoQ-Med-VL-32B\", \n torch_dtype=\"auto\", \n device_map=\"auto\"\n)\n\nprocessor = AutoProcessor.from_pretrained(\"ddvd233/QoQ-Med-VL-32B\")\n```\n\nFor better performance with flash attention:\n\n```python\nimport torch\nfrom transformers import AutoModelForVision2Seq\n\nmodel = AutoModelForVision2Seq.from_pretrained(\n \"ddvd233/QoQ-Med-VL-32B\",\n torch_dtype=torch.bfloat16,\n attn_implementation=\"flash_attention_2\",\n device_map=\"auto\",\n)\n```\n\n### Configuring Visual Token Range\n\nYou can adjust the visual token range to balance performance and computational cost:\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\n\nprocessor = AutoProcessor.from_pretrained(\n \"ddvd233/QoQ-Med-VL-32B\", \n min_pixels=min_pixels, \n max_pixels=max_pixels\n)\n```\n\n### Preparing Multimodal Input\n\nCreate a message with both image and text content:\n\n```python\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"path/to/your/medical/image.jpg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this medical image.\"},\n ],\n }\n]\n```\n\n### Processing the Input\n\nPrepare the input for model inference:\n\n```python\nfrom qwen_vl_utils import process_vision_info\n\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\n\nimage_inputs, video_inputs = process_vision_info(messages)\n\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\n\ninputs = inputs.to(\"cuda\")\n```\n\n### Generating Output\n\nRun inference and decode the output:\n\n```python\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\n\ngenerated_ids_trimmed = [\n out_ids[len(in_ids):] \n for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\n\noutput_text = processor.batch_decode(\n generated_ids_trimmed, \n skip_special_tokens=True, \n clean_up_tokenization_spaces=False\n)\n\nprint(output_text[0])\n```\n\n## Citations\n\nIf you find the project useful, please cite the following papers:\n\n```\n@article{dai2025climb,\n title={Climb: Data 
foundations for large scale multimodal clinical foundation models},\n author={Dai, Wei and Chen, Peilin and Lu, Malinda and Li, Daniel and Wei, Haowen and Cui, Hejie and Liang, Paul Pu},\n journal={International Conference on Machine Learning},\n year={2025}\n}\n@article{dai2025qoq,\n title={QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training},\n author={Dai, Wei and Chen, Peilin and Ekbote, Chanakya and Liang, Paul Pu},\n journal={arXiv preprint arXiv:2506.00711},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/QoQ-Med-VL-32B-GGUF", "mradermacher/QoQ-Med-VL-32B-i1-GGUF" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "unsloth/Qwen2.5-VL-32B-Instruct", "gated": "False", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. 
Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n
\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "itztheking/FMAX-testrun-4.0-16bit", "egemensert/inek-qwen2_5VL-dd-full-bnb-64", "itztheking/FMAX-testrun-7", "itztheking/FMAX-testrun-8", "itztheking/FMAX-testrun-embed-1", "egemensert/inek-qwen2_5VL-dd-full-bnb-64-5e" ], "children_count": 6, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 6, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "unsloth/Qwen2.5-VL-32B-Instruct", "base_model_relation": "base" }, { "model_id": "minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2.5-VL-32B-Instruct-Golf-Scorecard\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2.5-VL-32B-Instruct-Golf-Scorecard\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would 
you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n[\"Visualize](https://wandb.ai/phamminhtien2405-vg/Qwen2.5-VL-32B-Instruct-Golf-Scorecard/runs/fjxwmu8u) \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.16.0\n- Transformers: 4.50.3\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32GB_Debugging", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32GB_Debugging\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32GB_Debugging\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32GB_Debugging\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32GB_Debugging", "base_model_relation": "base" }, { "model_id": "QAdottech/Qwen2.5-VL-32B-Instruct-merged-v2", "gated": "False", 
"card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** QAdottech\n- **License:** apache-2.0\n- **Finetuned from model :** Qwen/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "QAdottech/Qwen2.5-VL-32B-Instruct-merged-v2", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32BDebugging", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32BDebugging\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32BDebugging\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32BDebugging\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32BDebugging", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32B-8bit", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32B-8bit\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32B-8bit\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never 
return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32B-8bit\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32B", "base_model_relation": "finetune" }, { "model_id": "Flo0620/Qwen2_5_32B-8bit_2Epochs", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32B-8bit_2Epochs\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32B-8bit_2Epochs\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32B-8bit_2Epochs\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32B-8bit_2Epochs", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32B-8bit_r64_a128_d0_1_Final", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: 
Qwen2_5_32B-8bit_r64_a128_d0_1_Final\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32B-8bit_r64_a128_d0_1_Final\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32B-8bit_r64_a128_d0_1_Final\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32B-8bit_r64_a128_d0_1_Final", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32B_r64_a128_d0_2_AllData\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32B_r64_a128_d0_2_AllData\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = 
{\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData", "base_model_relation": "base" }, { "model_id": "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData2", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: Qwen2_5_32B_r64_a128_d0_2_AllData2\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for Qwen2_5_32B_r64_a128_d0_2_AllData2\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData2\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.15.2\n- Transformers: 4.52.0.dev0\n- Pytorch: 2.6.0+cu124\n- Datasets: 3.5.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Flo0620/Qwen2_5_32B_r64_a128_d0_2_AllData2", "base_model_relation": "base" }, { "model_id": "litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- abliterated\n- uncensored\n- mlx\nlibrary_name: transformers\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8\nThis model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated`]() using mlx-vlm version **0.1.26**.\nRefer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated) for more details on the model.\n## Use with mlx\n\n```bash\npip install -U mlx-vlm\n```\n\n```bash\npython -m mlx_vlm.generate --model litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8 --max-tokens 100 --temperature 0.0 --prompt \"Describe this image.\" --image \n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, 
"merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8", "base_model_relation": "base" }, { "model_id": "chancharikm/qwen2.5-vl-32b-cam-motion-preview", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nlicense: other\ntags:\n- llama-factory\n- full\n- generated_from_trainer\npipeline_tag: video-text-to-text\nmodel-index:\n- name: bal_imb_cap_full_lr2e-4_epoch10.0_freezevisTrue_fps8\n results: []\n---\n\n\n\n\n## Model description\n\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) on the current most, high-quality camera motion dataset that is publically available. This preview model is the current SOTA for classifying camera motion or being used for video-text retrieval with camera motion captions using [VQAScore](https://arxiv.org/pdf/2404.01291). Find more information about our work on our Github page for [CameraBench](https://github.com/sy77777en/CameraBench). *More updates to the benchmark and models will come in the future. Stay tuned!*\n## Intended uses & limitations\n\n The usage is identical to a [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) model. Our model is primarily useful for camera motion classification in videos as well as video-text retrieval (current SOTA in both tasks).\n \n **A quick demo is shown below:**\n
\nGenerative Scoring (for classification and retrieval):\n \n```python\n# Import necessary libraries\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\nimport torch\n\n# Load the model\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"chancharikm/qwen2.5-vl-32B-cam-motion-preview\", torch_dtype=\"auto\", device_map=\"auto\"\n)\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# Prepare input data\nvideo_path = \"file:///path/to/video1.mp4\"\ntext_description = \"the camera tilting upward\"\nquestion = f\"Does this video show \\\"{text_description}\\\"?\"\n\n# Format the input for the model\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": video_path,\n \"fps\": 8.0, # Recommended FPS for optimal inference\n },\n {\"type\": \"text\", \"text\": question},\n ],\n }\n]\n\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs\n)\ninputs = inputs.to(\"cuda\")\n\n# Generate with score output\nwith torch.inference_mode():\n outputs = model.generate(\n **inputs,\n max_new_tokens=1,\n do_sample=False, # Use greedy decoding to get reliable logprobs\n output_scores=True,\n return_dict_in_generate=True\n )\n\n# Calculate probability of \"Yes\" response\nscores = outputs.scores[0]\nprobs = torch.nn.functional.softmax(scores, dim=-1)\nyes_token_id = processor.tokenizer.encode(\"Yes\")[0]\nscore = probs[0, yes_token_id].item()\n\nprint(f\"Video: {video_path}\")\nprint(f\"Description: '{text_description}'\")\nprint(f\"Score: {score:.4f}\")\n```\n
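For retrieval with camera-motion captions, the same Yes-probability can be computed for each candidate caption and used to rank them. The sketch below is illustrative only: it reuses the `model` and `processor` loaded above, the video path is a placeholder, the candidate captions are made-up examples, and `score_caption` is a hypothetical helper that simply wraps the scoring steps from the snippet.

```python
import torch
from qwen_vl_utils import process_vision_info

def score_caption(video_path, caption, fps=8.0):
    """Probability that the model answers "Yes" to: does the video show `caption`?"""
    messages = [{
        "role": "user",
        "content": [
            {"type": "video", "video": video_path, "fps": fps},
            {"type": "text", "text": f'Does this video show "{caption}"?'},
        ],
    }]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt", **video_kwargs,
    ).to("cuda")
    with torch.inference_mode():
        outputs = model.generate(
            **inputs, max_new_tokens=1, do_sample=False,
            output_scores=True, return_dict_in_generate=True,
        )
    probs = torch.nn.functional.softmax(outputs.scores[0], dim=-1)
    yes_token_id = processor.tokenizer.encode("Yes")[0]
    return probs[0, yes_token_id].item()

# Rank a few candidate captions for one video (higher score = better match).
candidates = ["the camera tilting upward", "the camera panning left", "a static camera"]
ranked = sorted(candidates, key=lambda c: score_caption("file:///path/to/video1.mp4", c), reverse=True)
print(ranked)
```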
\n\n
\nNatural Language Generation\n \n```python\n# The model is trained on 8.0 FPS which we recommend for optimal inference\n\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"chancharikm/qwen2.5-vl-32B-cam-motion-preview\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"chancharikm/qwen2.5-vl-32B-cam-motion-preview\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processor\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"fps\": 8.0,\n },\n {\"type\": \"text\", \"text\": \"Describe the camera motion in this video.\"},\n ],\n }\n]\n\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n
\n\n\n## Training and evaluation data\n\nTraining and evaluation data can be found in our [repo](https://github.com/sy77777en/CameraBench).\n\n## \u270f\ufe0f Citation\n\nIf you find this repository useful for your research, please use the following.\n```\n@article{lin2025camerabench,\n title={Towards Understanding Camera Motions in Any Video},\n author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva},\n journal={arXiv preprint arXiv:2504.15376},\n year={2025},\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "chancharikm/qwen2.5-vl-32b-cam-motion-preview", "base_model_relation": "base" }, { "model_id": "One-RL-to-See-Them-All/Orsta-32B-0321", "gated": "unknown", "card": "---\nlicense: mit\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- VLM\n- multimodal\nlibrary_name: transformers\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\ndatasets:\n- One-RL-to-See-Them-All/Orsta-Data-47k\n---\n# One RL to See Them All\n* \ud83d\udc19 **GitHub Repo:** [MiniMax-AI/One-RL-to-See-Them-All](https://github.com/MiniMax-AI/One-RL-to-See-Them-All)\n* \ud83d\udcdc **Paper (arXiv):** [V-Triune: One RL to See Them All (arXiv:2505.18129)](https://arxiv.org/abs/2505.18129)\n* \ud83d\udcbe **Dataset:** [Orsta-Data-47k on Hugging Face](https://huggingface.co/datasets/One-RL-to-See-Them-All/Orsta-Data-47k)\n\n## Model Overview\n\n**Orsta-32B-0321** is a cutting-edge vision-language model (VLM) designed to achieve superior performance across a wide spectrum of both visual reasoning and visual perception tasks. This model is a result of post-training with [**V-Triune**](https://github.com/MiniMax-AI/One-RL-to-See-Them-All), our novel unified reinforcement learning (RL) system.\n\nThe V-Triune system enables VLMs to be jointly optimized on diverse multimodal tasks within a single, cohesive training pipeline. Orsta-32B-0321 has been specifically trained using V-Triune on a carefully curated set of eight challenging visual tasks, fostering robust generalization and enhanced capabilities.\n\n\n## Training with V-Triune\n\nOrsta-32B-0321's advanced abilities stem from its training with the V-Triune system. Key aspects of its training include:\n\n* **Unified RL Framework (V-Triune):** V-Triune is a Visual Triple-Unified Reinforcement Learning system featuring three core complementary components:\n * *Sample-Level Data Formatting* (to unify diverse task inputs)\n * *Verifier-Level Reward Computation* (to deliver custom rewards via specialized verifiers)\n * *Source-Level Metric Monitoring* (to diagnose problems at the data-source level)\n\u00a0 * It also incorporates an innovative *Dynamic IoU reward* mechanism, crucial for optimizing visual perception tasks. 
You can find more details in our paper: [V-Triune](https://arxiv.org/abs/2505.18129)\n\n* **Diverse Joint Task Optimization:** Orsta-32B-0321 was jointly optimized on the following eight visual tasks:\n * *Visual Reasoning Tasks:* Mathematics, Science Question Answering, Chart Understanding, and Puzzle Solving.\n * *Visual Perception Tasks:* Object Detection, Visual Grounding, Optical Character Recognition (OCR), and Object Counting.\n\nThis comprehensive training allows Orsta-32B-0321 to develop a deeper understanding of visual content and its relation to textual prompts, excelling in tasks that require intricate reasoning and precise perception.\n\n## Performance\n| Model | Knowledge | Mathematics | Perception | Coding | Info. Ex. | Planning | Science | Metrics | MEGA-Bench
Core |\n| :--------------------------------------------- | ----------: | ------------: | -----------: | -------: | ----------: | ---------: | --------: | --------: | ------------------: |\n| QwenVL-2.5-32B-0321 | 8.48 | 12.62 | 11.99 | 13.59 | 15.44 | 8.61 | 16.78 | 14.91 | 11.87 |\n| MM-Eureka-32B \ud83d\udca1 | 12.20 | 20.19 | 21.88 | 15.86 | 21.23 | 15.47 | 19.95 | 22.77 | 18.57 |\n| VL-Rethinker-32B \ud83d\udca1 | 12.16 | 28.09 | 22.99 | 11.89 | 21.50 | 15.09 | 28.10 | 15.73 | 19.41 |\n| **Orsta-32B-0321 (Ours) \ud83d\udca1** | **21.33** | **28.55** | **32.23** | **19.44**| **26.38** | **17.78** | **33.20** | **24.18** | **25.94** |\n| - | - | - | - | - | - | - | - | - | - |\n| \u0394 (Ours - Backbone) | +12.9 | +15.9 | +20.2 | +5.9 | +10.9 | +9.2 | +16.4 | +9.3 | +14.1 |\n\n## How to Use\n\n**Orsta-32B-0321** is developed by post-training the [**Qwen2.5-VL-32B-Instruct (0321 checkpoint)**](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct/tree/98948557b47f3244ac2764806ddd334ce3c684f9) model using our V-Triune reinforcement learning system. The Qwen2.5-VL-32B-Instruct (0321 checkpoint) is a publicly available baseline known for its reliable core reasoning abilities, alongside certain recognized limitations in perception and output formatting (which have been addressed in subsequent Qwen releases). Applying V-Triune to this specific baseline demonstrates its powerful post-training capability to unlock the model's inherent potential and significantly elevate its performance by refining and amplifying existing strengths.\n\nConsequently, the core usage of **Orsta-32B-0321**, particularly regarding input formatting and model interaction, largely follows the established patterns of the Qwen2.5-VL series. Users familiar with Qwen2.5-VL models should find the interface intuitive.\n\nFor comprehensive details on the general capabilities of Qwen2.5-VL models, including multi-turn dialogue format and image input specifics, we recommend referring to the official [Qwen2.5-VL series documentation](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) (please ensure to consult information relevant to the 32B Instruct version).\n\n## Citation \ud83c\udfc6\nIf you use Orsta-32B-0321 or the V-Triune system in your research, please cite our work:\n```bibtex\n@article{ma2025one,\n title={One RL to See Them All: Visual Triple Unified Reinforcement Learning}, \n author={Ma, Yan and Du, Linge and Shen, Xuyang and Chen, Shaoxiang and Li, Pengfei and Ren, Qibing and Ma, Lizhuang and Dai, Yuchao and Liu, Pengfei and Yan, Junjie},\n journal={arXiv preprint arXiv:2505.18129},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/Orsta-32B-0321-GGUF" ], "quantized_count": 1, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "CodeGoat24/UnifiedReward-qwen-32b", "gated": "unknown", "card": "---\nlicense: mit\ndatasets:\n- CodeGoat24/HPD\n- CodeGoat24/LiFT-HRA\n- CodeGoat24/OIP\n- CodeGoat24/EvalMuse\n- CodeGoat24/ShareGPTVideo-DPO\n- CodeGoat24/VideoFeedback\n- CodeGoat24/LLaVA-Critic-113k\n- CodeGoat24/VideoDPO\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n\n# UnifiedReward-qwen-32B\nWe are actively gathering feedback from the community to improve our models. 
**We welcome your input and encourage you to stay updated through our repository**!!\n\n## Model Summary\n\n`UnifiedReward-qwen-32b` is the first unified reward model based on [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for multimodal understanding and generation assessment, enabling both pairwise ranking and pointwise scoring, which can be employed for vision model preference alignment.\n\nFor further details, please refer to the following resources:\n- \ud83d\udcf0 Paper: https://arxiv.org/pdf/2503.05236\n- \ud83e\ude90 Project Page: https://codegoat24.github.io/UnifiedReward/\n- \ud83e\udd17 Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a\n- \ud83e\udd17 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede\n- \ud83d\udc4b Point of Contact: [Yibin Wang](https://codegoat24.github.io)\n\n\n## \ud83c\udfc1 Compared with Current Reward Models\n\n| Reward Model | Method| Image Generation | Image Understanding | Video Generation | Video Understanding\n| :-----: | :-----: |:-----: |:-----: | :-----: | :-----: |\n| [PickScore](https://github.com/yuvalkirstain/PickScore) |Point | \u221a | | ||\n| [HPS](https://github.com/tgxs002/HPSv2) | Point | \u221a | |||\n| [ImageReward](https://github.com/THUDM/ImageReward) | Point| \u221a| |||\n| [LLaVA-Critic](https://huggingface.co/lmms-lab/llava-critic-7b) | Pair/Point | | \u221a |||\n| [IXC-2.5-Reward](https://github.com/InternLM/InternLM-XComposer) | Pair/Point | | \u221a ||\u221a|\n| [VideoScore](https://github.com/TIGER-AI-Lab/VideoScore) | Point | | |\u221a ||\n| [LiFT](https://github.com/CodeGoat24/LiFT) | Point | | |\u221a| |\n| [VisionReward](https://github.com/THUDM/VisionReward) | Point |\u221a | |\u221a||\n| [VideoReward](https://github.com/KwaiVGI/VideoAlign) | Point | | |\u221a ||\n| UnifiedReward (Ours) | Pair/Point | \u221a | \u221a |\u221a|\u221a|\n\n\n### Quick Start\nAll pair rank and point score inference codes are provided in our [github](https://github.com/CodeGoat24/UnifiedReward).\n\nWe take image understanding assessment as example here:\n~~~python\nimport json\nimport random\nimport torch\nimport tqdm\nfrom PIL import Image\nimport warnings\nimport os\nfrom transformers import AutoProcessor, AutoTokenizer, Qwen2_5_VLForConditionalGeneration\nfrom qwen_vl_utils import process_vision_info\n\nwarnings.filterwarnings(\"ignore\")\n\nmodel_path = \"CodeGoat24/UnifiedReward-qwen-32b\"\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n model_path, torch_dtype=\"auto\", device_map=\"auto\"\n)\nprocessor = AutoProcessor.from_pretrained(model_path)\n\n\nurl = \"https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True\"\nimage = Image.open(requests.get(url, stream=True).raw)\n\nprompt_text = f'Given an image and a corresponding question, please serve as an unbiased and fair judge to evaluate the quality of the answers provided by a Large Multimodal Model (LMM). Determine which answer is better and explain your reasoning with specific details. Your task is provided as follows:\\nQuestion: [What this image presents?]\\nThe first response: [The image is a black and white sketch of a line that appears to be in the shape of a cross. 
The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.]\\nThe second response: [This is a handwritten number seven.]\\nASSISTANT:\\n'\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": image},\n {\"type\": \"text\", \"text\": prompt_text},\n ],\n }\n]\n\nchat_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\nimage_inputs, video_inputs = process_vision_info(messages)\n\ninputs = processor(\n text=[chat_input],\n images=image_inputs,\n videos=video_inputs,\n return_tensors=\"pt\",\n padding=True\n).to(\"cuda\")\n\nwith torch.no_grad():\n generated_ids = model.generate(**inputs, max_new_tokens=4096)\ngenerated_trimmed = [\n out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput = processor.batch_decode(generated_trimmed, skip_special_tokens=True)[0]\n\n\nprint(output)\n~~~\n\n\n## Citation\n\n```\n@article{UnifiedReward,\n title={Unified Reward Model for Multimodal Understanding and Generation.},\n author={Wang, Yibin and Zang, Yuhang, and Li, Hao and Jin, Cheng and Wang Jiaqi},\n journal={arXiv preprint arXiv:2503.05236},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/UnifiedReward-qwen-32b-GGUF", "mradermacher/UnifiedReward-qwen-32b-i1-GGUF" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "One-RL-to-See-Them-All/Orsta-32B-0326", "gated": "unknown", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\ndatasets:\n- One-RL-to-See-Them-All/Orsta-Data-47k\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\npipeline_tag: image-text-to-text\ntags:\n- VLM\n- multimodal\n---\n\n# One RL to See Them All: Visual Triple Unified Reinforcement Learning\n\n* \ud83d\udc19 **GitHub Repo:** [MiniMax-AI/One-RL-to-See-Them-All](https://github.com/MiniMax-AI/One-RL-to-See-Them-All)\n* \ud83d\udcdc **Paper (arXiv):** [V-Triune: One RL to See Them All (arXiv:2505.18129)](https://arxiv.org/abs/2505.18129)\n* \ud83d\udcbe **Dataset:** [Orsta-Data-47k on Hugging Face](https://huggingface.co/datasets/One-RL-to-See-Them-All/Orsta-Data-47k)\n\n## Model Overview\n\n**Orsta-Orsta-32B-0326** is a cutting-edge vision-language model (VLM) designed to achieve superior performance across a wide spectrum of both visual reasoning and visual perception tasks. This model is a result of post-training with [**V-Triune**](https://github.com/MiniMax-AI/One-RL-to-See-Them-All), our novel unified reinforcement learning (RL) system.\n\nThe V-Triune system enables VLMs to be jointly optimized on diverse multimodal tasks within a single, cohesive training pipeline. Orsta-7B has been specifically trained using V-Triune on a carefully curated set of eight challenging visual tasks, fostering robust generalization and enhanced capabilities.\n\n## Training with V-Triune\n\nOrsta-32B-0326's advanced abilities stem from its training with the V-Triune system. 
Key aspects of its training include:\n\n* **Unified RL Framework (V-Triune):** V-Triune is a Visual Triple-Unified Reinforcement Learning system featuring three core complementary components:\n\n * *Sample-Level Data Formatting* (to unify diverse task inputs)\n * *Verifier-Level Reward Computation* (to deliver custom rewards via specialized verifiers)\n * *Source-Level Metric Monitoring* (to diagnose problems at the data-source level)\n * It also incorporates an innovative *Dynamic IoU reward* mechanism, crucial for optimizing visual perception tasks. You can find more details in our paper: [V-Triune](https://arxiv.org/abs/2505.18129)\n\n* **Diverse Joint Task Optimization:** Orsta-32B-0326 was jointly optimized on the following eight visual tasks:\n\n * *Visual Reasoning Tasks:* Mathematics, Science Question Answering, Chart Understanding, and Puzzle Solving.\n * *Visual Perception Tasks:* Object Detection, Visual Grounding, Optical Character Recognition (OCR), and Object Counting.\n\nThis comprehensive training allows Orsta-32B-0326 to develop a deeper understanding of visual content and its relation to textual prompts, excelling in tasks that require intricate reasoning and precise perception.\n\n## Performance\n| Model | Knowledge | Mathematics | Perception | Coding | Info. Ex. | Planning | Science | Metrics | MEGA-Bench
Core |\n| :--------------------------------------------- | ----------: | ------------: | -----------: | -------: | ----------: | ---------: | --------: | --------: | ------------------: |\n| Gemma3-27B | 49.43 | 42.20 | 45.46 | 40.18 | 49.30 | 24.96 | 47.08 | 58.99 | 41.82 \u2020 |\n| QwenVL-2.5-32B-0326 | 46.09 | 32.04 | 47.55 | 38.36 | 61.65 | 28.43 | 37.55 | 50.38 | 43.67 |\n| InternVL-3-38B | 46.32 | **40.29** | **55.05** | **45.29**| 56.63 | 22.88 | **52.04** | **58.04** | **46.69** |\n| Skywork-R1V-38B \ud83d\udca1 | 25.59 | 28.45 | 22.95 | 19.88 | 19.53 | 9.74 | 22.64 | 37.55 | 21.54 |\n| Skywork-R1V2-38B \ud83d\udca1 | 17.08 | 12.38 | 15.65 | 7.14 | 9.90 | 17.60 | 14.29 | 0.0 | 15.39 |\n| **Orsta-32B-0326 (Ours) \ud83d\udca1** | **46.78** | 37.43 | 50.86 | 38.92 | **63.14** | 28.05 | 42.68 | 53.01 | **45.78** |\n| - | - | - | - | - | - | - | - | - | - |\n| \u0394 (Ours - Backbone) | +0.7 | +5.4 | +3.3 | +0.6 | +1.5 | -0.4 | +5.1 | +2.6 | +2.1 |\n\n## How to Use\n\n**Orsta-32B-0326** is developed by post-training the latest [**Qwen2.5-VL-32B-Instruct**](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) model using our V-Triune reinforcement learning system. Consequently, its core usage, particularly regarding input formatting and model interaction, largely follows the established patterns of the Qwen2.5-VL series.\n\nFor comprehensive details on the base model's capabilities, multi-turn dialogue format, image input encoding specifics, and other functionalities, we recommend referring to the official [Qwen2.5-VL documentation](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\n\n## Citation \ud83c\udfc6\nIf you use Orsta-32B-0326 or the V-Triune system in your research, please cite our work:\n```bibtex\n@article{ma2025one,\n title={One RL to See Them All: Visual Triple Unified Reinforcement Learning}, \n author={Ma, Yan and Du, Linge and Shen, Xuyang and Chen, Shaoxiang and Li, Pengfei and Ren, Qibing and Ma, Lizhuang and Dai, Yuchao and Liu, Pengfei and Yan, Junjie},\n journal={arXiv preprint arXiv:2505.18129},\n year={2025}\n}\n```\n\n## Project Page\nhttps://github.com/MiniMax-AI/One-RL-to-See-Them-All.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [ "mradermacher/Orsta-32B-0326-GGUF", "sizzlebop/Orsta-32B-0326-Q8_0-GGUF" ], "quantized_count": 2, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "Bofeee5675/TongUI-32B", "gated": "unknown", "card": "---\nlicense: apache-2.0\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n# TongUI: Building Generalized GUI Agents by Learning from Multimodal Web Tutorials\n\nModel trained from [GUI-Net Dataset](https://huggingface.co/datasets/Bofeee5675/GUI-Net-1M)\n\nSee detail at our [Project Page](https://github.com/TongUI-agent/TongUI-agent)\n\n\n## Model Details\n\nThe base model is `Qwen/Qwen2.5-VL-32B-Instruct`. We fine-tuned base model by Lora.\n\n**Note:** Due to large size of 32B model, we only release the LoRA part of this model. 
To merge the weights, use the following script:\n\n```python\nfrom transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration, AutoConfig, AutoModelForImageTextToText\nimport torch\nfrom peft.peft_model import PeftModel\n\ndef load_model_and_processor(model_path, precision=\"bf16\", lora_path=None, merge_lora=True):\n \"\"\"\n Load the Qwen2.5-VL model and processor with optional LoRA weights.\n \n Args:\n args: Arguments containing:\n - model_path: Path to the base model\n - precision: Model precision (\"fp16\", \"bf16\", or \"fp32\")\n - lora_path: Path to LoRA weights (optional)\n - merge_lora: Boolean indicating whether to merge LoRA weights\n \n Returns:\n tuple: (processor, model) - The initialized processor and model\n \"\"\"\n # Initialize processor\n try:\n processor = AutoProcessor.from_pretrained(\n model_path\n )\n except Exception as e:\n print(f\"Error loading processor: {e}\")\n processor = None\n config = AutoConfig.from_pretrained(model_path)\n print(config)\n raise e\n # Initialize base model\n from transformers import Qwen2_5_VLForConditionalGeneration\n # Initialize base model\n model_cls = Qwen2_5_VLForConditionalGeneration\n model = model_cls.from_pretrained(\n model_path,\n device_map=\"auto\",\n torch_dtype=torch.float16 if precision == \"fp16\" else torch.bfloat16 if precision == \"bf16\" else torch.float32,\n attn_implementation=\"flash_attention_2\",\n )\n \n # Load LoRA weights if path is provided\n if lora_path is not None and len(lora_path) > 0:\n print(f\"Loading LoRA weights from {lora_path}\")\n model = PeftModel.from_pretrained(model, lora_path)\n \n if merge_lora:\n print(\"Merging LoRA weights into base model\")\n model = model.merge_and_unload()\n \n model.eval()\n \n return processor, model\n```\n\n`model_path` is the base model, and `lora_path` is where you download this repo.", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "acchf/vision-price-trade-qwenvl-qlora-p", "gated": "unknown", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\nmodel_name: vision-price-trade-qwenvl-qlora-p\ntags:\n- generated_from_trainer\n- trl\n- sft\nlicence: license\n---\n\n# Model Card for vision-price-trade-qwenvl-qlora-p\n\nThis model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\nIt has been trained using [TRL](https://github.com/huggingface/trl).\n\n## Quick start\n\n```python\nfrom transformers import pipeline\n\nquestion = \"If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?\"\ngenerator = pipeline(\"text-generation\", model=\"acchf/vision-price-trade-qwenvl-qlora-p\", device=\"cuda\")\noutput = generator([{\"role\": \"user\", \"content\": question}], max_new_tokens=128, return_full_text=False)[0]\nprint(output[\"generated_text\"])\n```\n\n## Training procedure\n\n \n\n\nThis model was trained with SFT.\n\n### Framework versions\n\n- TRL: 0.13.0\n- Transformers: 4.49.0\n- Pytorch: 2.6.0\n- Datasets: 3.6.0\n- Tokenizers: 0.21.1\n\n## Citations\n\n\n\nCite TRL as:\n \n```bibtex\n@misc{vonwerra2022trl,\n\ttitle = {{TRL: Transformer Reinforcement Learning}},\n\tauthor = 
{Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou\u00e9dec},\n\tyear = 2020,\n\tjournal = {GitHub repository},\n\tpublisher = {GitHub},\n\thowpublished = {\\url{https://github.com/huggingface/trl}}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "orkungedik/recruitment-docs-32b-extractor", "gated": "unknown", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** orkungedik\n- **License:** apache-2.0\n- **Finetuned from model :** Qwen/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "HongxinLi/0406_Qwen32B_AndWorld-CoT", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: peft\n---\n\n# Model Card for Model ID\n\n\n\n\n\n## Model Details\n\n### Model Description\n\n\n\n\n\n- **Developed by:** [More Information Needed]\n- **Funded by [optional]:** [More Information Needed]\n- **Shared by [optional]:** [More Information Needed]\n- **Model type:** [More Information Needed]\n- **Language(s) (NLP):** [More Information Needed]\n- **License:** [More Information Needed]\n- **Finetuned from model [optional]:** [More Information Needed]\n\n### Model Sources [optional]\n\n\n\n- **Repository:** [More Information Needed]\n- **Paper [optional]:** [More Information Needed]\n- **Demo [optional]:** [More Information Needed]\n\n## Uses\n\n\n\n### Direct Use\n\n\n\n[More Information Needed]\n\n### Downstream Use [optional]\n\n\n\n[More Information Needed]\n\n### Out-of-Scope Use\n\n\n\n[More Information Needed]\n\n## Bias, Risks, and Limitations\n\n\n\n[More Information Needed]\n\n### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations.\n\n## How to Get Started with the Model\n\nUse the code below to get started with the model.\n\n[More Information Needed]\n\n## Training Details\n\n### Training Data\n\n\n\n[More Information Needed]\n\n### Training Procedure\n\n\n\n#### Preprocessing [optional]\n\n[More Information Needed]\n\n\n#### Training Hyperparameters\n\n- **Training regime:** [More Information Needed] \n\n#### Speeds, Sizes, Times [optional]\n\n\n\n[More Information Needed]\n\n## Evaluation\n\n\n\n### Testing Data, Factors & Metrics\n\n#### Testing Data\n\n\n\n[More Information Needed]\n\n#### Factors\n\n\n\n[More Information Needed]\n\n#### Metrics\n\n\n\n[More Information Needed]\n\n### Results\n\n[More Information Needed]\n\n#### Summary\n\n\n\n## Model Examination [optional]\n\n\n\n[More Information Needed]\n\n## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).\n\n- **Hardware Type:** [More Information Needed]\n- **Hours used:** [More Information Needed]\n- **Cloud Provider:** [More Information Needed]\n- **Compute Region:** [More Information Needed]\n- **Carbon Emitted:** [More Information Needed]\n\n## Technical Specifications [optional]\n\n### Model Architecture and Objective\n\n[More Information Needed]\n\n### Compute Infrastructure\n\n[More Information Needed]\n\n#### Hardware\n\n[More Information Needed]\n\n#### Software\n\n[More Information Needed]\n\n## Citation [optional]\n\n\n\n**BibTeX:**\n\n[More Information Needed]\n\n**APA:**\n\n[More Information Needed]\n\n## Glossary [optional]\n\n\n\n[More Information Needed]\n\n## More Information [optional]\n\n[More Information Needed]\n\n## Model Card Authors [optional]\n\n[More Information Needed]\n\n## Model Card Contact\n\n[More Information Needed]\n### Framework versions\n\n- PEFT 0.14.0", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "HongxinLi/0406_Qwen32B_AndWorld-CoT", "base_model_relation": "base" }, { "model_id": "srai86825/qwen-vl-tool-assistant-lora", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\ntags:\n- qwen\n- vision-language\n- tool-use\n- lora\n- fine-tuned\n- multimodal\n- visual-reasoning\nlanguage:\n- en\npipeline_tag: text-generation\n---\n\n# Qwen2.5-VL-32B Tool Assistant with LoRA fine-tuning\n\nThis is a LoRA adapter for the Qwen2.5-VL-32B model, fine-tuned for tool-use with visual input.\n\n## Usage\n\n```python\nfrom transformers import AutoProcessor, AutoModelForCausalLM\nfrom peft import PeftModel\nimport torch\nfrom PIL import Image\n\n# Load the model\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\nbase_model = AutoModelForCausalLM.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", \n torch_dtype=torch.bfloat16,\n device_map=\"auto\",\n trust_remote_code=True\n)\nmodel = PeftModel.from_pretrained(\n base_model, \n \"srai86825/qwen-vl-tool-assistant-lora\"\n)\n\n# Use the model\nimage = Image.open(\"your_image.jpg\")\ntext = \"What is in this image?\"\n\ninputs = processor(text=text, images=image, return_tensors=\"pt\").to(\"cuda\")\noutputs = model.generate(**inputs, 
max_new_tokens=100)\nresult = processor.decode(outputs[0], skip_special_tokens=True)\nprint(result)\n```\n\n## Training Details\n- Base model: Qwen/Qwen2.5-VL-32B-Instruct\n- Fine-tuning method: LoRA with rank 8\n- Target modules: all\n- Training data: Custom tool-use dataset\n\n## Model Architecture\n\nThis model uses the Low-Rank Adaptation (LoRA) technique to efficiently fine-tune the Qwen2.5-VL-32B-Instruct model. LoRA works by adding small, trainable rank decomposition matrices to existing weights, allowing for parameter-efficient fine-tuning.\n\nThe adapter is applied to all attention layers in the model, which allows it to learn new capabilities without modifying the entire model.\n\n## Limitations\n\n- This model inherits the limitations of the base Qwen2.5-VL model\n- The fine-tuning data may introduce biases or limitations in certain domains\n- For optimal performance, use images similar in style and content to what the model was trained on", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "srai86825/qwen-vl-tool-assistant-lora", "base_model_relation": "base" }, { "model_id": "Qwen/Qwen2.5-VL-32B-Instruct-AWQ", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nlibrary_name: transformers\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# Qwen2.5-VL-32B-Instruct-AWQ\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. 
Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n
\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have three models with 3, 7 and 72 billion parameters. This repository contains the quantized instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n| Model | MMMU | DocVQA_VAL | MMBench_DEV_EN | MathVista_MINI |\n|---------------------------|--------------------|------------|------------------------|----------------|\n| Qwen2.5-VL-32B-Instruct | 70.0 | 93.9107 | 87.3 | 74.7 |\n| Qwen2.5-VL-32B-Instruct-AWQ | 67.8 | 94.1489 | 86.9 | 73.6 |\n\n\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct-AWQ\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct-AWQ\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct-AWQ\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct-AWQ\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
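Because `apply_chat_template` accepts a full conversation history, the generated reply can be appended and a follow-up question asked in a second turn. A minimal sketch, assuming `model`, `processor`, `messages`, and `output_text` from the snippet above are still in scope (the follow-up question is only an example):

```python
# Append the model's reply and a follow-up user turn, then generate again.
messages.append({"role": "assistant", "content": output_text[0]})
messages.append(
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Now list the main objects you mentioned as bullet points."},
        ],
    }
)

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```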

\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
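Base64-encoded images work the same way as file paths and URLs (see the input formats listed under "More Usage Tips" below). A small illustrative sketch, assuming `model` and `processor` are loaded as above and using a placeholder image path:

```python
import base64

# Read a local image and wrap it as a data URI accepted by qwen-vl-utils.
with open("/path/to/image1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": f"data:image;base64,{image_b64}"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation and generation are identical to the single-image example above.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```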
\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
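The `FORCE_QWENVL_VIDEO_READER` variable mentioned above can also be set from Python; since it may be read when `qwen_vl_utils` is first loaded, it is safest to set it before the import (or export it in the shell). The sketch below is illustrative and assumes `model` and `processor` are loaded as in the earlier snippets; with `return_video_kwargs=True`, the sampled fps is forwarded to the processor through `**video_kwargs`.

```python
import os
os.environ["FORCE_QWENVL_VIDEO_READER"] = "torchvision"  # or "decord"

# Import after setting the variable so the chosen backend is picked up.
from qwen_vl_utils import process_vision_info

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt", **video_kwargs,
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```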
\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
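To ask the same question about many images, the batch of conversations can be built programmatically and then fed through the same preparation and generation steps shown above. A minimal sketch with placeholder paths, assuming `model` and `processor` are loaded as above:

```python
# Build one single-image conversation per input image.
image_paths = [
    "file:///path/to/image1.jpg",
    "file:///path/to/image2.jpg",
    "file:///path/to/image3.jpg",
]
question = "Describe this image in one sentence."

messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": path},
                {"type": "text", "text": question},
            ],
        }
    ]
    for path in image_paths
]

# Same preparation and generation as the batch-inference example above.
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts, images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
for path, answer in zip(image_paths, processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)):
    print(path, "->", answer)
```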
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct-AWQ\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Qwen/Qwen2.5-VL-32B-Instruct-AWQ", "base_model_relation": "base" }, { "model_id": "BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "gated": "False", "card": "---\ntags:\n- vllm\n- vision\n- fp8\nlicense: apache-2.0\nlicense_link: >-\n https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md\nlanguage:\n- en\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct-FP8-Dynamic\n\n## Model Overview\n- **Model Architecture:** Qwen2.5-VL-32B-Instruct\n - **Input:** Vision-Text\n - **Output:** Text\n- **Model Optimizations:**\n - **Weight quantization:** FP8\n - **Activation quantization:** FP8\n- **Release Date:** 5/3/2025\n- **Version:** 1.0\n- **Model Developers:** BC Card\n\nQuantized version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).\n\n### Model Optimizations\n\nThis model was obtained by quantizing the weights of 
[Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.\n\n## Deployment\n\n### Use with vLLM\n\nThis model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.\n\n```python\nfrom vllm.assets.image import ImageAsset\nfrom vllm import LLM, SamplingParams\n\n# prepare model\nllm = LLM(\n model=\"BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic\",\n trust_remote_code=True,\n max_model_len=4096,\n max_num_seqs=2,\n)\n\n# prepare inputs\nquestion = \"What is the content of this image?\"\ninputs = {\n \"prompt\": f\"<|user|>\\n<|image_1|>\\n{question}<|end|>\\n<|assistant|>\\n\",\n \"multi_modal_data\": {\n \"image\": ImageAsset(\"cherry_blossom\").pil_image.convert(\"RGB\")\n },\n}\n\n# generate response\nprint(\"========== SAMPLE GENERATION ==============\")\noutputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))\nprint(f\"PROMPT : {outputs[0].prompt}\")\nprint(f\"RESPONSE: {outputs[0].outputs[0].text}\")\nprint(\"==========================================\")\n```\n\nvLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "base_model_relation": "base" }, { "model_id": "unsloth/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. 
Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n
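To make the idea of absolute time alignment concrete, the following toy sketch (an illustration only, not Qwen2.5-VL's actual implementation; the 0.5-second granularity per ID is an assumed value) shows how frames sampled at different FPS can be placed on the same absolute-time ID axis:\n\n```python\n# Toy illustration of absolute-time-aligned temporal IDs. The granularity of\n# 0.5 s per ID is an assumption for this sketch, not the model's configuration.\ndef temporal_ids(num_frames: int, fps: float, seconds_per_id: float = 0.5) -> list[int]:\n    timestamps = [i / fps for i in range(num_frames)]       # absolute time of each sampled frame\n    return [round(t / seconds_per_id) for t in timestamps]  # ID spacing follows wall-clock time\n\nprint(temporal_ids(4, fps=1.0))  # [0, 2, 4, 6]\nprint(temporal_ids(8, fps=2.0))  # [0, 1, 2, 3, 4, 5, 6, 7] -> denser sampling, same time axis\n```\n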


\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
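As an optional variation on the snippet above (not part of the original quickstart), you can stream tokens to stdout as they are generated instead of decoding everything at the end, using the standard `TextStreamer` utility from `transformers`; it reuses the `model`, `processor`, and `inputs` objects prepared above:\n\n```python\nfrom transformers import TextStreamer\n\n# Optional: print tokens as they are generated, reusing `model`, `processor`,\n# and `inputs` from the snippet above.\nstreamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)\n_ = model.generate(**inputs, max_new_tokens=128, streamer=streamer)\n```\n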

\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
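For example, a minimal way to pin the video reader is to set the environment variable before the video is loaded; exporting `FORCE_QWENVL_VIDEO_READER` in the shell before launching your script works just as well (exactly when `qwen_vl_utils` reads the variable is an implementation detail, so treat this as a sketch):\n\n```python\nimport os\n\n# Pin the video reader backend; per the table above, \"torchvision\" (>= 0.19.0)\n# handles both HTTP and HTTPS URLs, while \"decord\" handles HTTP only.\nos.environ[\"FORCE_QWENVL_VIDEO_READER\"] = \"torchvision\"\n\nfrom qwen_vl_utils import process_vision_info\n```\n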
\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "unsloth/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit", "gated": "False", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. 
Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n


\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n

\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "itztheking/FMAX-testrun-1.0", "yoshimaru4/bashk_qwen_test", "itztheking/FMAX-testrun-3.0-lora", "farhan9801/32B-fine_tuned" ], "children_count": 4, "adapters": [], "adapters_count": 0, "quantized": [ "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-merged16", "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-sft-merged", "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-16-merged" ], "quantized_count": 3, "merges": [], "merges_count": 0, "total_derivatives": 7, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit", "base_model_relation": "base" }, { "model_id": "unsloth/Qwen2.5-VL-32B-Instruct-bnb-4bit", "gated": "False", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- unsloth\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. 
Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n


\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n

\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "unsloth/Qwen2.5-VL-32B-Instruct-bnb-4bit", "base_model_relation": "base" }, { "model_id": "leon-se/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\nRun with: \n```\nvllm serve leon-se/Qwen2.5-VL-32B-Instruct-FP8-Dynamic\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "leon-se/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "base_model_relation": "base" }, { "model_id": "samgreen/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# Qwen2.5-VL-32B-Instruct\n\nConverted and quantized using 
[HimariO's\nfork](https://github.com/HimariO/llama.cpp.qwen2vl/tree/qwen25-vl) using [this\nprocedure](https://github.com/ggml-org/llama.cpp/issues/11483#issuecomment-2727577078).\nNo IMatrix.\n\nThe fork is currently required to run inference and there's no guarantee these checkpoints will work with future builds. Temporary builds are available [here](https://github.com/green-s/llama.cpp.qwen2vl/releases). The latest tested build as of writing is `qwen25-vl-b4899-bc4163b`.\n\nEdit:\n\nAs of 1-April-2025 inference support has been added to [koboldcpp](https://github.com/LostRuins/koboldcpp).\n\n[Original model](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)\n\n[Unquantized GGUFs](https://huggingface.co/RzZ/Qwen2.5-VL-32B-Instruct-GGUF)\n\n## Usage\n\n```bash\n./llama-qwen2vl-cli -m Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf --mmproj qwen2.5-vl-32b-instruct-vision-f16.gguf -p \"Please describe this image.\" --image ./image.jpg\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "samgreen/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "leon-se/Qwen2.5-VL-32B-Instruct-W4A16-G128", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\npipeline_tag: image-text-to-text\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "leon-se/Qwen2.5-VL-32B-Instruct-W4A16-G128", "base_model_relation": "base" }, { "model_id": "christopherthompson81/Qwen2.5-VL-32B-Instruct-exl2-4_25bpw", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nlibrary_name: transformers\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# Qwen2.5-VL-32B-Instruct\n\n \"Chat\"\n\n\n\n## Latest Updates:\nIn addition to the original formula, we have further enhanced Qwen2.5-VL-32B's mathematical and problem-solving abilities through reinforcement learning. This has also significantly improved the model's subjective user experience, with response styles adjusted to better align with human preferences. Particularly for objective queries such as mathematics, logical reasoning, and knowledge-based Q&A, the level of detail in responses and the clarity of formatting have been noticeably enhanced.\n\n## Introduction\n\nIn the past five months since Qwen2-VL\u2019s release, numerous developers have built new models on the Qwen2-VL vision-language models, providing us with valuable feedback. During this period, we focused on building more useful vision-language models. 
Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5-VL.\n\n#### Key Enhancements:\n* **Understand things visually**: Qwen2.5-VL is not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\n* **Being agentic**: Qwen2.5-VL directly plays as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\n* **Understanding long videos and capturing events**: Qwen2.5-VL can comprehend videos of over 1 hour, and this time it has a new ability of cpaturing event by pinpointing the relevant video segments.\n\n* **Capable of visual localization in different formats**: Qwen2.5-VL can accurately localize objects in an image by generating bounding boxes or points, and it can provide stable JSON outputs for coordinates and attributes.\n\n* **Generating structured outputs**: for data like scans of invoices, forms, tables, etc. Qwen2.5-VL supports structured outputs of their contents, benefiting usages in finance, commerce, etc.\n\n\n#### Model Architecture Updates:\n\n* **Dynamic Resolution and Frame Rate Training for Video Understanding**:\n\nWe extend dynamic resolution to the temporal dimension by adopting dynamic FPS sampling, enabling the model to comprehend videos at various sampling rates. Accordingly, we update mRoPE in the time dimension with IDs and absolute time alignment, enabling the model to learn temporal sequence and speed, and ultimately acquire the ability to pinpoint specific moments.\n\n

\n \n

\n\n\n* **Streamlined and Efficient Vision Encoder**\n\nWe enhance both training and inference speeds by strategically implementing window attention into the ViT. The ViT architecture is further optimized with SwiGLU and RMSNorm, aligning it with the structure of the Qwen2.5 LLM.\n\n\nWe have four models with 3, 7, 32 and 72 billion parameters. This repo contains the instruction-tuned 32B Qwen2.5-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2.5-vl/) and [GitHub](https://github.com/QwenLM/Qwen2.5-VL).\n\n\n\n## Evaluation\n\n### Vision\n\n| Dataset | Qwen2.5-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-72B-Instruct)) | Qwen2-VL-72B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2-VL-72B-Instruct)) | Qwen2.5-VL-32B
([\ud83e\udd17](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)[\ud83e\udd16](https://modelscope.cn/models/qwen/Qwen2.5-VL-32B-Instruct)) |\n|--------------------|--------|--------------|------------------|\n| MMMU |**70.2** | 64.5 | 70 |\n| MMMU Pro |**51.1** | 46.2 | 49.5 |\n| MMStar | **70.8** | 68.3 | 69.5 |\n| MathVista | **74.8** | 70.5 | 74.7 |\n| MathVision |38.1 | 25.9 | **40.0**|\n| OCRBenchV2 | **61.5/63.7** | 47.8/46.1 | 57.2/59.1 |\n| CC-OCR | **79.8** | 68.7 | 77.1 |\n| DocVQA | **96.4** | **96.5** | 94.8 |\n| InfoVQA | **87.3** | 84.5 | 83.4 |\n| LVBench |47.3 | - | **49.00** |\n| CharadesSTA |50.9 | - | **54.2** |\n| VideoMME |**73.3/79.1** | 71.2/77.8 | 70.5/77.9 |\n| MMBench-Video |**2.02** | 1.7 | 1.93 |\n| AITZ |**83.2** | - | 83.1 |\n| Android Control |**67.4/93.7** | 66.4/84.4 | 69.6/93.3 |\n| ScreenSpot |**87.1** | - | 88.5 |\n| ScreenSpot Pro |**43.6** | - | 39.4 |\n| AndroidWorld |**35** | - | 22.0 |\n| OSWorld |**8.83** | - | 5.92 |\n\n### Text\n\n| MODEL | MMLU | MMLU-PRO | MATH | GPQA-diamond | MBPP | Human Eval |\n|-----------------|--------|----------|---------|--------------|--------|------------|\n| Qwen2.5-VL-32B | 78.4 | 68.8 | 82.2 | 46.0 | 84.0 | 91.5 |\n| Mistral-Small-3.1-24B | 80.6 | 66.8 | 69.3 | 46.0 | 74.7 | 88.4 |\n| Gemma3-27B-IT | 76.9 | 67.5 | 89 | 42.4 | 74.4 | 87.8 |\n| GPT-4o-Mini | 82.0 | 61.7 | 70.2 | 39.4 | 84.8 | 87.2 |\n| Claude-3.5-Haiku | 77.6 | 65.0 | 69.2 | 41.6 | 85.6 | 88.1 |\n\n## Requirements\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\n## Quickstart\n\nBelow, we provide simple examples to show how to use Qwen2.5-VL with \ud83e\udd16 ModelScope and \ud83e\udd17 Transformers.\n\nThe code of Qwen2.5-VL has been in the latest Hugging face transformers and we advise you to build from source with command:\n```\n\npip install git+https://github.com/huggingface/transformers accelerate\n\n```\nor you might encounter the following error:\n```\n\nKeyError: 'qwen2_5_vl'\n\n```\nWe offer a toolkit to help you handle various types of visual input more conveniently, as if you were using an API. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:\n\n```bash\n# It's highly recommanded to use `[decord]` feature for faster video loading.\npip install qwen-vl-utils[decord]==0.0.8\n```\n\nIf you are not using Linux, you might not be able to install `decord` from PyPI. In that case, you can use `pip install qwen-vl-utils` which will fall back to using torchvision for video processing. 
However, you can still [install decord from source](https://github.com/dmlc/decord?tab=readme-ov-file#install-from-source) to get decord used when loading video.\n\n### Using \ud83e\udd17 Transformers to Chat\n\nHere we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:\n\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor\nfrom qwen_vl_utils import process_vision_info\n\n# default: Load the model on the available device(s)\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", torch_dtype=\"auto\", device_map=\"auto\"\n)\n\n# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.\n# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n# \"Qwen/Qwen2.5-VL-32B-Instruct\",\n# torch_dtype=torch.bfloat16,\n# attn_implementation=\"flash_attention_2\",\n# device_map=\"auto\",\n# )\n\n# default processer\nprocessor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\")\n\n# The default range for the number of visual tokens per image in the model is 4-16384.\n# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.\n# min_pixels = 256*28*28\n# max_pixels = 1280*28*28\n# processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels)\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg\",\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference: Generation of the output\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n

\nMulti image inference\n\n```python\n# Messages containing multiple images and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"Identify the similarities between these images.\"},\n ],\n }\n]\n\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\n
\n\n
\nVideo inference\n\n```python\n# Messages containing a images list as a video and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": [\n \"file:///path/to/frame1.jpg\",\n \"file:///path/to/frame2.jpg\",\n \"file:///path/to/frame3.jpg\",\n \"file:///path/to/frame4.jpg\",\n ],\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a local video path and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"file:///path/to/video1.mp4\",\n \"max_pixels\": 360 * 420,\n \"fps\": 1.0,\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n# Messages containing a video url and a text query\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"video\",\n \"video\": \"https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4\",\n },\n {\"type\": \"text\", \"text\": \"Describe this video.\"},\n ],\n }\n]\n\n#In Qwen 2.5 VL, frame rate information is also input into the model to align with absolute time.\n# Preparation for inference\ntext = processor.apply_chat_template(\n messages, tokenize=False, add_generation_prompt=True\n)\nimage_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)\ninputs = processor(\n text=[text],\n images=image_inputs,\n videos=video_inputs,\n fps=fps,\n padding=True,\n return_tensors=\"pt\",\n **video_kwargs,\n)\ninputs = inputs.to(\"cuda\")\n\n# Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_text = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_text)\n```\n\nVideo URL compatibility largely depends on the third-party library version. The details are in the table below. change the backend by `FORCE_QWENVL_VIDEO_READER=torchvision` or `FORCE_QWENVL_VIDEO_READER=decord` if you prefer not to use the default one.\n\n| Backend | HTTP | HTTPS |\n|-------------|------|-------|\n| torchvision >= 0.19.0 | \u2705 | \u2705 |\n| torchvision < 0.19.0 | \u274c | \u274c |\n| decord | \u2705 | \u274c |\n\n
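Note that in the snippet above `fps` is not defined before it is passed to the processor; the frame-rate information is already returned in `video_kwargs` by `process_vision_info(messages, return_video_kwargs=True)`, so you may need to drop the explicit `fps=fps` argument or set `fps` yourself. As a minimal sketch (assuming the `FORCE_QWENVL_VIDEO_READER` environment variable mentioned above is read by `qwen_vl_utils` at import time), you can also select the video backend from Python:\n\n```python\nimport os\n\n# Select the video reader backend before importing qwen_vl_utils;\n# per the table above, valid values are \"torchvision\" and \"decord\".\nos.environ[\"FORCE_QWENVL_VIDEO_READER\"] = \"torchvision\"\n\nfrom qwen_vl_utils import process_vision_info  # the backend is picked up here\n```\n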
\n\n
\nBatch inference\n\n```python\n# Sample messages for batch inference\nmessages1 = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/image1.jpg\"},\n {\"type\": \"image\", \"image\": \"file:///path/to/image2.jpg\"},\n {\"type\": \"text\", \"text\": \"What are the common elements in these pictures?\"},\n ],\n }\n]\nmessages2 = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Who are you?\"},\n]\n# Combine messages for batch processing\nmessages = [messages1, messages2]\n\n# Preparation for batch inference\ntexts = [\n processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)\n for msg in messages\n]\nimage_inputs, video_inputs = process_vision_info(messages)\ninputs = processor(\n text=texts,\n images=image_inputs,\n videos=video_inputs,\n padding=True,\n return_tensors=\"pt\",\n)\ninputs = inputs.to(\"cuda\")\n\n# Batch Inference\ngenerated_ids = model.generate(**inputs, max_new_tokens=128)\ngenerated_ids_trimmed = [\n out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)\n]\noutput_texts = processor.batch_decode(\n generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False\n)\nprint(output_texts)\n```\n\n
\n\n### \ud83e\udd16 ModelScope\n\nWe strongly advise users especially those in mainland China to use ModelScope. `snapshot_download` can help you solve issues concerning downloading checkpoints.\n\n### More Usage Tips\n\nFor input images, we support local files, base64, and URLs. For videos, we currently only support local files.\n\n```python\n# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.\n## Local file path\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"file:///path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Image URL\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"http://path/to/your/image.jpg\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n## Base64 encoded image\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": \"data:image;base64,/9j/...\"},\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n#### Image Resolution for performance boost\n\nThe model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.\n\n```python\nmin_pixels = 256 * 28 * 28\nmax_pixels = 1280 * 28 * 28\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-32B-Instruct\", min_pixels=min_pixels, max_pixels=max_pixels\n)\n```\n\nBesides, We provide two methods for fine-grained control over the image size input to the model:\n\n1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.\n2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. 
These values will be rounded to the nearest multiple of 28.\n\n```python\n# min_pixels and max_pixels\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"resized_height\": 280,\n \"resized_width\": 420,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n# resized_height and resized_width\nmessages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image\",\n \"image\": \"file:///path/to/your/image.jpg\",\n \"min_pixels\": 50176,\n \"max_pixels\": 50176,\n },\n {\"type\": \"text\", \"text\": \"Describe this image.\"},\n ],\n }\n]\n```\n\n### Processing Long Texts\n\nThe current `config.json` is set for context length up to 32,768 tokens.\nTo handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.\n\nFor supported frameworks, you could add the following to `config.json` to enable YaRN:\n\n{\n...,\n\"type\": \"yarn\",\n\"mrope_section\": [\n16,\n24,\n24\n],\n\"factor\": 4,\n\"original_max_position_embeddings\": 32768\n}\n\nHowever, it should be noted that this method has a significant impact on the performance of temporal and spatial localization tasks, and is therefore not recommended for use.\n\nAt the same time, for long video inputs, since MRoPE itself is more economical with ids, the max_position_embeddings can be directly modified to a larger value, such as 64k.\n\n## Citation\n\nIf you find our work helpful, feel free to give us a cite.\n\n```\n@article{Qwen2.5-VL,\n title={Qwen2.5-VL Technical Report},\n author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},\n journal={arXiv preprint arXiv:2502.13923},\n year={2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "christopherthompson81/Qwen2.5-VL-32B-Instruct-exl2-4_25bpw", "base_model_relation": "base" }, { "model_id": "DevQuasar/Qwen.Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model:\n- Qwen/Qwen2.5-VL-32B-Instruct\npipeline_tag: image-text-to-text\n---\n\n[](https://devquasar.com)\n\nQuantized version of: [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)\n\nYou have to use the backend from [HimariO's](https://github.com/HimariO/llama.cpp.qwen2vl/tree/qwen25-vl) branch. Big thanks to add Qwen2.5VL support!\nAdditional [discussions](https://github.com/ggml-org/llama.cpp/issues/11483#issuecomment-2727577078)\n\n'Make knowledge free for everyone'\n\n


\n\nBuy Me a Coffee at ko-fi.com", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "DevQuasar/Qwen.Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nquantized_by: bartowski\npipeline_tag: text-generation\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\ntags:\n- multimodal\nlanguage:\n- en\nbase_model_relation: quantized\n---\n\n## Llamacpp imatrix Quantizations of Qwen2.5-VL-32B-Instruct by Qwen\n\nUsing llama.cpp release b5284 for quantization.\n\nOriginal model: https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct\n\nAll quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)\n\nRun them in [LM Studio](https://lmstudio.ai/)\n\nRun them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project\n\n## Prompt format\n\n```\n<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n```\n\n## What's new:\n\nUpdate to new llama.cpp\n\n## Download a file (not the whole branch) from below:\n\n| Filename | Quant type | File Size | Split | Description |\n| -------- | ---------- | --------- | ----- | ----------- |\n| [Qwen2.5-VL-32B-Instruct-bf16.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/tree/main/Qwen_Qwen2.5-VL-32B-Instruct-bf16) | bf16 | 65.54GB | true | Full BF16 weights. |\n| [Qwen2.5-VL-32B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |\n| [Qwen2.5-VL-32B-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. 
|\n| [Qwen2.5-VL-32B-Instruct-Q4_1.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q4_1.gguf) | Q4_1 | 20.64GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |\n| [Qwen2.5-VL-32B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |\n| [Qwen2.5-VL-32B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |\n| [Qwen2.5-VL-32B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |\n| [Qwen2.5-VL-32B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |\n| [Qwen2.5-VL-32B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. 
|\n| [Qwen2.5-VL-32B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |\n| [Qwen2.5-VL-32B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 12.84GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |\n| [Qwen2.5-VL-32B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |\n| [Qwen2.5-VL-32B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |\n| [Qwen2.5-VL-32B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |\n| [Qwen2.5-VL-32B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |\n| [Qwen2.5-VL-32B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen_Qwen2.5-VL-32B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 9.03GB | false | Very low quality, uses SOTA techniques to be usable. |\n\n## Embed/output weights\n\nSome of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.\n\n## Downloading using huggingface-cli\n\n
\n Click to view download instructions\n\nFirst, make sure you have huggingface-cli installed:\n\n```\npip install -U \"huggingface_hub[cli]\"\n```\n\nThen, you can target the specific file you want:\n\n```\nhuggingface-cli download bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF --include \"Qwen_Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf\" --local-dir ./\n```\n\nIf the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:\n\n```\nhuggingface-cli download bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF --include \"Qwen_Qwen2.5-VL-32B-Instruct-Q8_0/*\" --local-dir ./\n```\n\nYou can either specify a new local-dir (Qwen_Qwen2.5-VL-32B-Instruct-Q8_0) or download them all in place (./).\n\n
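If you prefer to stay in Python, a rough equivalent of the CLI commands above is the `huggingface_hub` API (a minimal sketch; the repo and file names simply mirror the CLI example):\n\n```python\nfrom huggingface_hub import hf_hub_download\n\n# Download a single quant file from this repo into the current directory.\npath = hf_hub_download(\n    repo_id=\"bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF\",\n    filename=\"Qwen_Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf\",\n    local_dir=\"./\",\n)\nprint(path)\n```\n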
\n\n## ARM/AVX information\n\nPreviously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.\n\nNow, however, there is something called \"online repacking\" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.\n\nAs of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.\n\nAdditionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase.\n\n
\n Click to view Q4_0_X_X information (deprecated)\n\nI'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.\n\n
\n Click to view benchmarks on an AVX2 system (EPYC7702)\n\n| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |\n| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 \u00b1 1.03 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 \u00b1 0.19 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 \u00b1 0.44 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 \u00b1 0.27 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 \u00b1 0.69 | 100% |\n| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 \u00b1 0.03 | 100% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 \u00b1 1.74 | 147% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 \u00b1 0.20 | 101% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 \u00b1 1.81 | 101% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 \u00b1 0.99 | 48% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 \u00b1 3.04 | 83% |\n| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 \u00b1 3.59 | 90% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 \u00b1 3.53 | 133% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 \u00b1 45.63 | 100% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 \u00b1 5.00 | 124% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 \u00b1 0.05 | 111% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 \u00b1 0.09 | 110% |\n| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 \u00b1 0.31 | 105% |\n\nQ4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation\n\n
\n\n
\n\n## Which file should I choose?\n\n
\n Click here for details\n\nA great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)\n\nThe first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.\n\nIf you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.\n\nIf you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.\n\nNext, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.\n\nIf you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.\n\nIf you want to get more into the weeds, you can check out this extremely useful feature chart:\n\n[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)\n\nBut basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.\n\nThese I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.\n\n
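As a purely illustrative sketch (the sizes are copied from the table above and the ~2GB headroom rule is the one described in this section; the helper itself is hypothetical), the \"does it fit in VRAM\" check could look like:\n\n```python\n# Hypothetical helper: pick the largest quant that leaves roughly 2GB of headroom.\nQUANT_SIZES_GB = {\n    \"Q8_0\": 34.82, \"Q6_K\": 26.89, \"Q5_K_M\": 23.26,\n    \"Q4_K_M\": 19.85, \"IQ4_XS\": 17.69, \"Q3_K_M\": 15.94, \"IQ3_M\": 14.81,\n}\n\ndef pick_quant(vram_gb, headroom_gb=2.0):\n    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= vram_gb - headroom_gb}\n    return max(fitting, key=fitting.get) if fitting else None\n\nprint(pick_quant(24.0))  # a 24GB card -> \"Q4_K_M\" (19.85GB)\n```\n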
\n\n## Credits\n\nThank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.\n\nThank you ZeroWw for the inspiration to experiment with embed/output.\n\nThank you to LM Studio for sponsoring my work.\n\nWant to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "bartowski/Qwen_Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "lmstudio-community/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nquantized_by: bartowski\npipeline_tag: text-generation\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\ntags:\n- multimodal\nlanguage:\n- en\nbase_model_relation: quantized\n---\n## \ud83d\udcab Community Model> Qwen2.5 VL 32B Instruct by Qwen\n\n*\ud83d\udc7e [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.\n\n**Model creator:** [Qwen](https://huggingface.co/Qwen)
\n**Original model**: [Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)
\n**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5284](https://github.com/ggerganov/llama.cpp/releases/tag/b5284)
\n\n## Technical Details\n\nSupports context length of 128k tokens.\n\nProficient in recognizing common objects such as flowers, birds, fish, and insects, but it is highly capable of analyzing texts, charts, icons, graphics, and layouts within images.\n\nCapable of acting as a visual agent that can reason and dynamically direct tools, which is capable of computer use and phone use.\n\nUseful for generating structured outputs and stable JSON outputs.\n\nMultilingual support.\n\n## Special thanks\n\n\ud83d\ude4f Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.\n\n## Disclaimers\n\nLM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "lmstudio-community/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\n---\n\n# openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-32b-instruct-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF --hf-file 
qwen2.5-vl-32b-instruct-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-32b-instruct-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF --hf-file qwen2.5-vl-32b-instruct-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "openfree/Qwen2.5-VL-32B-Instruct-Q8_0-GGUF", "base_model_relation": "base" }, { "model_id": "openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\n---\n\n# openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], 
"merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "openfree/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q6_K.gguf) | Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype 
GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "mradermacher/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\ntags:\n- multimodal\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct\n\n\nstatic quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |\n| 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "mradermacher/Qwen2.5-VL-32B-Instruct-i1-GGUF", "base_model_relation": "base" }, { "model_id": "Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\n---\n\n# Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q5_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q5_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q5_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q5_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "Theta-Lev/Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\n---\n\n# TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF\nThis model was converted to GGUF format from 
[`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "TheMagicianGamer/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- llama-cpp\n- gguf-my-repo\n---\n\n# xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`Qwen/Qwen2.5-VL-32B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the 
Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-vl-32b-instruct-q4_k_m.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "xiongwen/Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF", "base_model_relation": "base" }, { "model_id": "second-state/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nmodel_creator: Qwen\nmodel_name: Qwen2.5-VL-32B-Instruct\nquantized_by: Second State Inc.\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nlibrary_name: transformers\n---\n\n\n\n
\n\n\n# Qwen2.5-VL-32B-Instruct-GGUF\n\n## Original Model\n\n[Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)\n\n## Run with LlamaEdge\n\n- LlamaEdge version: [v0.18.4](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.18.4)\n\n- Prompt template\n\n - Prompt type: `qwen2-vision`\n\n - Prompt string\n\n ```text\n <|im_start|>system\n {system_prompt}<|im_end|>\n <|im_start|>user\n <|vision_start|>{image_placeholder}<|vision_end|>{user_prompt}<|im_end|>\n <|im_start|>assistant\n ```\n\n- Context size: `32000`\n\n- Run as LlamaEdge service\n\n ```bash\n wasmedge --dir .:. \\\n --nn-preload default:GGML:AUTO:Qwen2.5-VL-32B-Instruct-Q5_K_M.gguf \\\n llama-api-server.wasm \\\n --model-name Qwen2.5-VL-32B-Instruct \\\n --prompt-template qwen2-vision \\\n --llava-mmproj Qwen2.5-VL-32B-Instruct-vision.gguf \\\n --ctx-size 32000\n ```\n\n## Quantized GGUF Models\n\n| Name | Quant method | Bits | Size | Use case |\n| ---- | ---- | ---- | ---- | ----- |\n| [Qwen2.5-VL-32B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q2_K.gguf) | Q2_K | 2 | 12.3 GB| smallest, significant quality loss - not recommended for most purposes |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 17.2 GB| small, substantial quality loss |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 15.9 GB| very small, high quality loss |\n| [Qwen2.5-VL-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 14.4 GB| very small, high quality loss |\n| [Qwen2.5-VL-32B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 18.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M |\n| [Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 19.9 GB| medium, balanced quality - recommended |\n| [Qwen2.5-VL-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 18.8 GB| small, greater quality loss |\n| [Qwen2.5-VL-32B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 22.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M |\n| [Qwen2.5-VL-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 23.3 GB| large, very low quality loss - recommended |\n| [Qwen2.5-VL-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 22.6 GB| large, low quality loss - recommended |\n| [Qwen2.5-VL-32B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q6_K.gguf) | Q6_K | 6 | 26.9 GB| 
very large, extremely low quality loss |\n| [Qwen2.5-VL-32B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 34.8 GB| very large, extremely low quality loss - not recommended |\n| [Qwen2.5-VL-32B-Instruct-f16-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-f16-00001-of-00003.gguf) | f16 | 16 | 29.8 GB| |\n| [Qwen2.5-VL-32B-Instruct-f16-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-f16-00002-of-00003.gguf) | f16 | 16 | 29.8 GB| |\n| [Qwen2.5-VL-32B-Instruct-f16-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-f16-00003-of-00003.gguf) | f16 | 16 | 5.87 GB| |\n| [Qwen2.5-VL-32B-Instruct-vision.gguf](https://huggingface.co/second-state/Qwen2.5-VL-32B-Instruct-GGUF/blob/main/Qwen2.5-VL-32B-Instruct-vision.gguf) | f16 | 16 | 1.38 GB| |\n\n*Quantized with llama.cpp b5196*", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "second-state/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "gaianet/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\nlicense: apache-2.0\nmodel_creator: Qwen\nmodel_name: Qwen2.5-VL-32B-Instruct\nquantized_by: Second State Inc.\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\nlibrary_name: transformers\n---\n\n# Qwen2.5-VL-32B-Instruct-GGUF\n\n## Original Model\n\n[Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct)\n\n## Run with Gaianet\n\n**Prompt template:**\n\nprompt template: `qwen2-vision`\n\n**Context size:**\n\nchat_ctx_size: `32000`\n\n\n**Run with GaiaNet:**\n\n- Quick start: https://docs.gaianet.ai/node-guide/quick-start\n\n- Customize your node: https://docs.gaianet.ai/node-guide/customize\n\n*Quantized with llama.cpp b5196*", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "gaianet/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "ggml-org/Qwen2.5-VL-32B-Instruct-GGUF", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model: Qwen/Qwen2.5-VL-32B-Instruct\n---\n\n# Qwen2.5-VL-32B-Instruct\n\nOriginal model: https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": "ggml-org/Qwen2.5-VL-32B-Instruct-GGUF", "base_model_relation": "base" }, { "model_id": "ig1/Qwen2.5-VL-32B-Instruct-FP8-Dynamic", "gated": "unknown", "card": "---\nlicense: apache-2.0\nlanguage:\n- en\nbase_model:\n- 
Qwen/Qwen2.5-VL-32B-Instruct\npipeline_tag: image-text-to-text\nlibrary_name: transformers\ntags:\n- multimodal\n---\n\n[FP8 activation quantization](https://github.com/vllm-project/llm-compressor/tree/main/examples/quantization_w8a8_fp8) performed with [llm-compressor](https://github.com/vllm-project/llm-compressor)", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "Qwen/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF", "gated": "False", "card": "---\nbase_model: huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\ntags:\n- multimodal\n- abliterated\n- uncensored\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q6_K.gguf) | 
Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated" ], "base_model": "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF", "gated": "False", "card": "---\nbase_model: huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\nquantized_by: mradermacher\ntags:\n- multimodal\n- abliterated\n- uncensored\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated\n\n\nstatic quants are available at https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |\n| 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-VL-32B-Instruct-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated" ], "base_model": "mradermacher/Qwen2.5-VL-32B-Instruct-abliterated-i1-GGUF", "base_model_relation": "base" }, { "model_id": "mradermacher/QoQ-Med-VL-32B-GGUF", "gated": "unknown", "card": "---\nbase_model: ddvd233/QoQ-Med-VL-32B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/ddvd233/QoQ-Med-VL-32B\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF/resolve/main/QoQ-Med-VL-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ddvd233/QoQ-Med-VL-32B" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/QoQ-Med-VL-32B-i1-GGUF", "gated": "unknown", "card": "---\nbase_model: ddvd233/QoQ-Med-VL-32B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/ddvd233/QoQ-Med-VL-32B\n\n\nstatic quants are available at https://huggingface.co/mradermacher/QoQ-Med-VL-32B-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |\n| 
[GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/QoQ-Med-VL-32B-i1-GGUF/resolve/main/QoQ-Med-VL-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "ddvd233/QoQ-Med-VL-32B" ], "base_model": null, "base_model_relation": null }, { "model_id": "itztheking/FMAX-testrun-4.0-16bit", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": "itztheking/FMAX-testrun-4.0-16bit", "base_model_relation": "base" }, { "model_id": "egemensert/inek-qwen2_5VL-dd-full-bnb-64", "gated": "unknown", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** egemensert\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": null, "base_model_relation": null }, { "model_id": "itztheking/FMAX-testrun-7", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": "itztheking/FMAX-testrun", "base_model_relation": "finetune" }, { "model_id": "itztheking/FMAX-testrun-8", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- 
transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": "itztheking/FMAX-testrun", "base_model_relation": "finetune" }, { "model_id": "itztheking/FMAX-testrun-embed-1", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": "itztheking/FMAX-testrun-embed", "base_model_relation": "finetune" }, { "model_id": "egemensert/inek-qwen2_5VL-dd-full-bnb-64-5e", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** egemensert\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct" ], "base_model": "egemensert/inek-qwen2_5VL-dd-full-bnb-64", "base_model_relation": "finetune" }, { "model_id": "mradermacher/Orsta-32B-0321-GGUF", "gated": "unknown", "card": "---\nbase_model: One-RL-to-See-Them-All/Orsta-32B-0321\ndatasets:\n- One-RL-to-See-Them-All/Orsta-Data-47k\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- VLM\n- multimodal\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/One-RL-to-See-Them-All/Orsta-32B-0321\n\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. 
Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q6_K.gguf) | Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0321-GGUF/resolve/main/Orsta-32B-0321.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "One-RL-to-See-Them-All/Orsta-32B-0321" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/UnifiedReward-qwen-32b-GGUF", "gated": "unknown", "card": "---\nbase_model: CodeGoat24/UnifiedReward-qwen-32b\ndatasets:\n- CodeGoat24/HPD\n- CodeGoat24/LiFT-HRA\n- CodeGoat24/OIP\n- CodeGoat24/EvalMuse\n- CodeGoat24/ShareGPTVideo-DPO\n- CodeGoat24/VideoFeedback\n- CodeGoat24/LLaVA-Critic-113k\n- CodeGoat24/VideoDPO\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/CodeGoat24/UnifiedReward-qwen-32b\n\n\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q6_K.gguf) | Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF/resolve/main/UnifiedReward-qwen-32b.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a 
handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "CodeGoat24/UnifiedReward-qwen-32b" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/UnifiedReward-qwen-32b-i1-GGUF", "gated": "unknown", "card": "---\nbase_model: CodeGoat24/UnifiedReward-qwen-32b\ndatasets:\n- CodeGoat24/HPD\n- CodeGoat24/LiFT-HRA\n- CodeGoat24/OIP\n- CodeGoat24/EvalMuse\n- CodeGoat24/ShareGPTVideo-DPO\n- CodeGoat24/VideoFeedback\n- CodeGoat24/LLaVA-Critic-113k\n- CodeGoat24/VideoDPO\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\n---\n## About\n\n\n\n\n\n\nweighted/imatrix quants of https://huggingface.co/CodeGoat24/UnifiedReward-qwen-32b\n\n\nstatic quants are available at https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |\n| 
[GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UnifiedReward-qwen-32b-i1-GGUF/resolve/main/UnifiedReward-qwen-32b.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "CodeGoat24/UnifiedReward-qwen-32b" ], "base_model": null, "base_model_relation": null }, { "model_id": "mradermacher/Orsta-32B-0326-GGUF", "gated": "unknown", "card": "---\nbase_model: One-RL-to-See-Them-All/Orsta-32B-0326\ndatasets:\n- One-RL-to-See-Them-All/Orsta-Data-47k\nlanguage:\n- en\nlibrary_name: transformers\nlicense: mit\nquantized_by: mradermacher\ntags:\n- VLM\n- multimodal\n---\n## About\n\n\n\n\n\n\nstatic quants of https://huggingface.co/One-RL-to-See-Them-All/Orsta-32B-0326\n\n\nweighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q2_K.gguf) | Q2_K | 12.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q3_K_S.gguf) | Q3_K_S | 14.5 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q3_K_L.gguf) | Q3_K_L | 17.3 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.IQ4_XS.gguf) | IQ4_XS | 18.0 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q5_K_S.gguf) | Q5_K_S | 22.7 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q5_K_M.gguf) | Q5_K_M | 23.4 | |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q6_K.gguf) | Q6_K | 27.0 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/Orsta-32B-0326-GGUF/resolve/main/Orsta-32B-0326.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "One-RL-to-See-Them-All/Orsta-32B-0326" ], "base_model": null, "base_model_relation": null }, { "model_id": "sizzlebop/Orsta-32B-0326-Q8_0-GGUF", "gated": "unknown", "card": "---\nlicense: mit\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- VLM\n- multimodal\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\nbase_model: One-RL-to-See-Them-All/Orsta-32B-0326\ndatasets:\n- One-RL-to-See-Them-All/Orsta-Data-47k\n---\n\n# sizzlebop/Orsta-32B-0326-Q8_0-GGUF\nThis model was converted to GGUF format from [`One-RL-to-See-Them-All/Orsta-32B-0326`](https://huggingface.co/One-RL-to-See-Them-All/Orsta-32B-0326) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/One-RL-to-See-Them-All/Orsta-32B-0326) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo sizzlebop/Orsta-32B-0326-Q8_0-GGUF --hf-file orsta-32b-0326-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo sizzlebop/Orsta-32B-0326-Q8_0-GGUF --hf-file orsta-32b-0326-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo sizzlebop/Orsta-32B-0326-Q8_0-GGUF --hf-file orsta-32b-0326-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo sizzlebop/Orsta-32B-0326-Q8_0-GGUF --hf-file orsta-32b-0326-q8_0.gguf -c 2048\n```\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "One-RL-to-See-Them-All/Orsta-32B-0326" ], "base_model": null, "base_model_relation": null }, { "model_id": "itztheking/FMAX-testrun-1.0", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with 
[Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "itztheking/FMAX-testrun", "base_model_relation": "finetune" }, { "model_id": "yoshimaru4/bashk_qwen_test", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\n- trl\n- sft\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** yoshimaru4\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "yoshimaru4/bashk_qwen_test", "base_model_relation": "base" }, { "model_id": "itztheking/FMAX-testrun-3.0-lora", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\n- trl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** itztheking\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "itztheking/FMAX-testrun-3.0-lora", "base_model_relation": "base" }, { "model_id": "farhan9801/32B-fine_tuned", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\n- trl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded model\n\n- **Developed by:** farhan9801\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "farhan9801/32B-fine_tuned", "base_model_relation": "base" }, { "model_id": 
"QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-merged16", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** QAdottech\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-merged16", "base_model_relation": "base" }, { "model_id": "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-sft-merged", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** QAdottech\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-sft-merged", "base_model_relation": "base" }, { "model_id": "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-16-merged", "gated": "False", "card": "---\nbase_model: unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\ntags:\n- text-generation-inference\n- transformers\n- unsloth\n- qwen2_5_vl\nlicense: apache-2.0\nlanguage:\n- en\n---\n\n# Uploaded finetuned model\n\n- **Developed by:** QAdottech\n- **License:** apache-2.0\n- **Finetuned from model :** unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit\n\nThis qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.\n\n[](https://github.com/unslothai/unsloth)\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit" ], "base_model": "QAdottech/Qwen2.5-VL-32B-Instruct-unsloth-bnb-16-merged", "base_model_relation": "base" } ] }