midah committed on
Commit 14ec1d6 · verified · 1 parent: 7e4e4c8

Add tree for ByteDance-Seed/UI-TARS-1.5-7B

ByteDance-Seed_UI-TARS-1.5-7B_tree.json ADDED
@@ -0,0 +1,268 @@
+ {
+ "base_model": "ByteDance-Seed/UI-TARS-1.5-7B",
+ "tree": [
+ {
+ "model_id": "ByteDance-Seed/UI-TARS-1.5-7B",
+ "gated": "False",
+ "card": "\n---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\nlibrary_name: transformers\n---\n\n\n# UI-TARS-1.5 Model\n\nWe shared the latest progress of the UI-TARS-1.5 model in [our blog](https://seed-tars.com/1.5/), which excels in playing games and performing GUI tasks.\n\n## Introduction\n\nUI-TARS-1.5, an open-source multimodal agent built upon a powerful vision-language model. It is capable of effectively performing diverse tasks within virtual worlds.\n\nLeveraging the foundational architecture introduced in [our recent paper](https://arxiv.org/abs/2501.12326), UI-TARS-1.5 integrates advanced reasoning enabled by reinforcement learning. This allows the model to reason through its thoughts before taking action, significantly enhancing its performance and adaptability, particularly in inference-time scaling. Our new 1.5 version achieves state-of-the-art results across a variety of standard benchmarks, demonstrating strong reasoning capabilities and notable improvements over prior models.\n<!-- ![Local Image](figures/UI-TARS.png) -->\n<p align=\"center\">\n <video controls width=\"480\">\n <source src=\"https://huggingface.co/datasets/JjjFangg/Demo_video/resolve/main/GUI_demo.mp4\" type=\"video/mp4\">\n </video>\n\n<p>\n<p align=\"center\">\n <video controls width=\"480\">\n <source src=\"https://huggingface.co/datasets/JjjFangg/Demo_video/resolve/main/Game_demo.mp4\" type=\"video/mp4\">\n </video>\n<p>\n\n<!-- ![Local Image](figures/UI-TARS-vs-Previous-SOTA.png) -->\nCode: https://github.com/bytedance/UI-TARS\n\nApplication: https://github.com/bytedance/UI-TARS-desktop\n\n## Performance\n**Online Benchmark Evaluation**\n| Benchmark type | Benchmark | UI-TARS-1.5 | OpenAI CUA | Claude 3.7 | Previous SOTA 
|\n|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------|-------------|-------------|-------------|----------------------|\n| **Computer Use** | [OSworld](https://arxiv.org/abs/2404.07972) (100 steps) | **42.5** | 36.4 | 28 | 38.1 (200 step) |\n| | [Windows Agent Arena](https://arxiv.org/abs/2409.08264) (50 steps) | **42.1** | - | - | 29.8 |\n| **Browser Use** | [WebVoyager](https://arxiv.org/abs/2401.13919) | 84.8 | **87** | 84.1 | 87 |\n| | [Online-Mind2web](https://arxiv.org/abs/2504.01382) | **75.8** | 71 | 62.9 | 71 |\n| **Phone Use** | [Android World](https://arxiv.org/abs/2405.14573) | **64.2** | - | - | 59.5 |\n\n\n**Grounding Capability Evaluation**\n| Benchmark | UI-TARS-1.5 | OpenAI CUA | Claude 3.7 | Previous SOTA |\n|-----------|-------------|------------|------------|----------------|\n| [ScreensSpot-V2](https://arxiv.org/pdf/2410.23218) | **94.2** | 87.9 | 87.6 | 91.6 |\n| [ScreenSpotPro](https://arxiv.org/pdf/2504.07981v1) | **61.6** | 23.4 | 27.7 | 43.6 |\n\n\n\n**Poki Game**\n\n| Model | [2048](https://poki.com/en/g/2048) | [cubinko](https://poki.com/en/g/cubinko) | [energy](https://poki.com/en/g/energy) | [free-the-key](https://poki.com/en/g/free-the-key) | [Gem-11](https://poki.com/en/g/gem-11) | [hex-frvr](https://poki.com/en/g/hex-frvr) | [Infinity-Loop](https://poki.com/en/g/infinity-loop) | [Maze:Path-of-Light](https://poki.com/en/g/maze-path-of-light) | [shapes](https://poki.com/en/g/shapes) | [snake-solver](https://poki.com/en/g/snake-solver) | [wood-blocks-3d](https://poki.com/en/g/wood-blocks-3d) | [yarn-untangle](https://poki.com/en/g/yarn-untangle) | [laser-maze-puzzle](https://poki.com/en/g/laser-maze-puzzle) | [tiles-master](https://poki.com/en/g/tiles-master) 
|\n|-------------|-----------|--------------|-------------|-------------------|-------------|---------------|---------------------|--------------------------|-------------|--------------------|----------------------|---------------------|------------------------|---------------------|\n| OpenAI CUA | 31.04 | 0.00 | 32.80 | 0.00 | 46.27 | 92.25 | 23.08 | 35.00 | 52.18 | 42.86 | 2.02 | 44.56 | 80.00 | 78.27 |\n| Claude 3.7 | 43.05 | 0.00 | 41.60 | 0.00 | 0.00 | 30.76 | 2.31 | 82.00 | 6.26 | 42.86 | 0.00 | 13.77 | 28.00 | 52.18 |\n| UI-TARS-1.5 | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |\n\n\n**Minecraft**\n\n| Task Type | Task Name | [VPT](https://openai.com/index/vpt/) | [DreamerV3](https://www.nature.com/articles/s41586-025-08744-2) | Previous SOTA | UI-TARS-1.5 w/o Thought | UI-TARS-1.5 w/ Thought |\n|-------------|---------------------|----------|----------------|--------------------|------------------|-----------------|\n| Mine Blocks | (oak_log) | 0.8 | 1.0 | 1.0 | 1.0 | 1.0 |\n| | (obsidian) | 0.0 | 0.0 | 0.0 | 0.2 | 0.3 |\n| | (white_bed) | 0.0 | 0.0 | 0.1 | 0.4 | 0.6 |\n| | **200 Tasks Avg.** | 0.06 | 0.03 | 0.32 | 0.35 | 0.42 |\n| Kill Mobs | (mooshroom) | 0.0 | 0.0 | 0.1 | 0.3 | 0.4 |\n| | (zombie) | 0.4 | 0.1 | 0.6 | 0.7 | 0.9 |\n| | (chicken) | 0.1 | 0.0 | 0.4 | 0.5 | 0.6 |\n| | **100 Tasks Avg.** | 0.04 | 0.03 | 0.18 | 0.25 | 0.31 |\n\n## Model Scale Comparison\n\nThis table compares performance across different model scales of UI-TARS on the OSworld benchmark.\n\n| **Benchmark Type** | **Benchmark** | **UI-TARS-72B-DPO** | **UI-TARS-1.5-7B** | **UI-TARS-1.5** |\n|--------------------|------------------------------------|---------------------|--------------------|-----------------|\n| Computer Use | [OSWorld](https://arxiv.org/abs/2404.07972) | 24.6 | 27.5 | **42.5** |\n| GUI Grounding | [ScreenSpotPro](https://arxiv.org/pdf/2504.07981v1) | 38.1 | 49.6 | **61.6** |\n\nThe 
released UI-TARS-1.5-7B focuses primarily on enhancing general computer use capabilities and is not specifically optimized for game-based scenarios, where the UI-TARS-1.5 still holds a significant advantage.\n\n## What's next\nWe are providing early research access to our top-performing UI-TARS-1.5 model to facilitate collaborative research. Interested researchers can contact us at TARS@bytedance.com.\n\n\n## Citation\nIf you find our paper and model useful in your research, feel free to give us a cite.\n\n```BibTeX\n@article{qin2025ui,\n title={UI-TARS: Pioneering Automated GUI Interaction with Native Agents},\n author={Qin, Yujia and Ye, Yining and Fang, Junjie and Wang, Haoming and Liang, Shihao and Tian, Shizuo and Zhang, Junda and Li, Jiahao and Li, Yunxin and Huang, Shijue and others},\n journal={arXiv preprint arXiv:2501.12326},\n year={2025}\n}\n```",
+ "metadata": "\"N/A\"",
+ "depth": 0,
+ "children": [
+ "adriabama06/UI-TARS-1.5-7B-exl2"
+ ],
+ "children_count": 1,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [
+ "adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "adriabama06/UI-TARS-1.5-7B-GGUF",
+ "mradermacher/UI-TARS-1.5-7B-GGUF",
+ "mradermacher/UI-TARS-1.5-7B-i1-GGUF",
+ "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF",
+ "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF",
+ "rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128"
+ ],
+ "quantized_count": 9,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 10,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [],
+ "base_model": "ByteDance-Seed/UI-TARS-1.5-7B",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "adriabama06/UI-TARS-1.5-7B-exl2",
+ "gated": "unknown",
+ "card": "---\nlicense: apache-2.0\nbase_model:\n- ByteDance-Seed/UI-TARS-1.5-7B\ntags:\n- qwen2_5_vl\n- multimodal\n- gui\n- conversational\nlanguage:\n- en\npipeline_tag: image-text-to-text\nlibrary_name: transformers\n---\n\nEXL2 quants of [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)\n\n[4.00 bits per weight](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-exl2/tree/4.0bpw) \n[6.00 bits per weight](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-exl2/tree/6.0bpw)\n\n| Model | Size |\n|----------|------------------|\n| 4.00 bpw | 7.49 GB |\n| 6.00 bpw | 9.13 GB |",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": null,
+ "base_model_relation": null
+ },
+ {
+ "model_id": "adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "gated": "unknown",
+ "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\n# adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo adriabama06/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": null,
+ "base_model_relation": null
+ },
+ {
+ "model_id": "adriabama06/UI-TARS-1.5-7B-GGUF",
+ "gated": "unknown",
+ "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\nGGUF quants (with MMPROJ) of [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B)\n\n| Model | Size |\n|----------|-----------|\n| [mmproj](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/mmproj-ByteDance-Seed_UI-TARS-1.5-7B.gguf) | 1.32 GB |\n| [Q4_K_M](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q4_K_M.gguf) | 4.57 GB |\n| [Q6_K](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q6_K.gguf) | 6.11 GB |\n| [Q8_0](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-Q8_0.gguf) | 7.91 GB |\n| [F16](https://huggingface.co/adriabama06/UI-TARS-1.5-7B-GGUF/blob/main/ByteDance-Seed_UI-TARS-1.5-7B-F16.gguf) | 14.88 GB |\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": null,
+ "base_model_relation": null
+ },
+ {
+ "model_id": "mradermacher/UI-TARS-1.5-7B-GGUF",
+ "gated": "False",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: -->\nstatic quants of https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B\n\n<!-- provided-files -->\nweighted/imatrix quants are available at https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, 
recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF/resolve/main/UI-TARS-1.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": "mradermacher/UI-TARS-1.5-7B-GGUF",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "mradermacher/UI-TARS-1.5-7B-i1-GGUF",
+ "gated": "False",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nquantized_by: mradermacher\n---\n## About\n\n<!-- ### quantize_version: 2 -->\n<!-- ### output_tensor_quantised: 1 -->\n<!-- ### convert_type: hf -->\n<!-- ### vocab_type: -->\n<!-- ### tags: nicoboss -->\nweighted/imatrix quants of https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B\n\n<!-- provided-files -->\nstatic quants are available at https://huggingface.co/mradermacher/UI-TARS-1.5-7B-GGUF\n## Usage\n\nIf you are unsure how to use GGUF files, refer to one of [TheBloke's\nREADMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for\nmore details, including on how to concatenate multi-part files.\n\n## Provided Quants\n\n(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)\n\n| Link | Type | Size/GB | Notes |\n|:-----|:-----|--------:|:------|\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |\n| 
[GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |\n| 
[GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |\n| [GGUF](https://huggingface.co/mradermacher/UI-TARS-1.5-7B-i1-GGUF/resolve/main/UI-TARS-1.5-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |\n\nHere is a handy graph by ikawrakow comparing some lower-quality quant\ntypes (lower is better):\n\n![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)\n\nAnd here are Artefact2's thoughts on the matter:\nhttps://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9\n\n## FAQ / Model Request\n\nSee https://huggingface.co/mradermacher/model_requests for some answers to\nquestions you might have and/or if you want some other model quantized.\n\n## Thanks\n\nI thank my company, [nethype GmbH](https://www.nethype.de/), for letting\nme use its servers and providing upgrades to my workstation to enable\nthis work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.\n\n<!-- end -->\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": "mradermacher/UI-TARS-1.5-7B-i1-GGUF",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "gated": "False",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF",
+ "gated": "False",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048\n```\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF",
+ "gated": "False",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\nlanguage:\n- en\nlibrary_name: transformers\nlicense: apache-2.0\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\n---\n\n# Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF --hf-file ui-tars-1.5-7b-q8_0.gguf -c 2048\n```\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": "Lucy-in-the-Sky/UI-TARS-1.5-7B-Q8_0-GGUF",
+ "base_model_relation": "base"
+ },
+ {
+ "model_id": "rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF",
+ "gated": "unknown",
+ "card": "---\nlicense: apache-2.0\nlanguage:\n- en\npipeline_tag: image-text-to-text\ntags:\n- multimodal\n- gui\n- llama-cpp\n- gguf-my-repo\nlibrary_name: transformers\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\n---\n\n# rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF\nThis model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.\nRefer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model.\n\n## Use with llama.cpp\nInstall llama.cpp through brew (works on Mac and Linux)\n\n```bash\nbrew install llama.cpp\n\n```\nInvoke the llama.cpp server or the CLI.\n\n### CLI:\n```bash\nllama-cli --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\n\n### Server:\n```bash\nllama-server --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n\nNote: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.\n\nStep 1: Clone llama.cpp from GitHub.\n```\ngit clone https://github.com/ggerganov/llama.cpp\n```\n\nStep 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).\n```\ncd llama.cpp && LLAMA_CURL=1 make\n```\n\nStep 3: Run inference through the main binary.\n```\n./llama-cli --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -p \"The meaning to life and the universe is\"\n```\nor \n```\n./llama-server --hf-repo rosethelocalfem/UI-TARS-1.5-7B-Q4_K_M-GGUF --hf-file ui-tars-1.5-7b-q4_k_m.gguf -c 2048\n```\n",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": null,
+ "base_model_relation": null
+ },
+ {
+ "model_id": "yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128",
+ "gated": "unknown",
+ "card": "---\nbase_model: ByteDance-Seed/UI-TARS-1.5-7B\npipeline_tag: image-text-to-text\n---\n\n\n## Codes\n\nSee [run_compression.py](https://huggingface.co/yujiepan/ui-tars-1.5-7B-GPTQ-W4A16g128/blob/main/run_compression.py)",
+ "metadata": "\"N/A\"",
+ "depth": 1,
+ "children": [],
+ "children_count": 0,
+ "adapters": [],
+ "adapters_count": 0,
+ "quantized": [],
+ "quantized_count": 0,
+ "merges": [],
+ "merges_count": 0,
+ "total_derivatives": 0,
+ "spaces": [],
+ "spaces_count": 0,
+ "parents": [
+ "ByteDance-Seed/UI-TARS-1.5-7B"
+ ],
+ "base_model": null,
+ "base_model_relation": null
+ }
+ ]
+ }
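
Each node in the added tree JSON carries the same schema: a `model_id`, a `depth`, lists of derivatives (`children`, `adapters`, `quantized`, `merges`, `spaces`) with matching `*_count` fields, a `parents` list, and a `total_derivatives` rollup. A minimal sketch of consuming such a file (the `summarize_tree` helper is hypothetical, assuming only the fields visible in the diff above):

```python
import json

def summarize_tree(path):
    """Load a *_tree.json file like the one added in this commit and
    summarize its derivative tree. Assumes the schema shown in the diff:
    a top-level "base_model" string plus a "tree" list of node objects,
    where tree[0] is the depth-0 base model node."""
    with open(path) as f:
        data = json.load(f)
    root = data["tree"][0]  # depth-0 node: the base model itself
    return {
        "base_model": data["base_model"],
        "direct_children": root["children_count"],
        "quantized": root["quantized_count"],
        "total_derivatives": root["total_derivatives"],
        "nodes": len(data["tree"]),  # base model + all derivative entries
    }
```

For the file in this commit, `nodes` would be 11 (the base model plus its 10 derivatives), since every quantized or fine-tuned repo gets its own depth-1 entry.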