- Multimodal Prompt Optimization: Why Not Leverage Multiple Modalities for MLLMs
  Paper • 2510.09201 • Published • 49
- Similarity-Based Domain Adaptation with LLMs
  Paper • 2503.05281 • Published
- Text Clustering as Classification with LLMs
  Paper • 2410.00927 • Published
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
  Paper • 2504.17432 • Published • 39

Collections including paper arxiv:2504.17432

- DeepGlint-AI/UniME-Phi3.5-V-4.2B
  Image-Text-to-Text • Updated • 213 • 7
- DeepGlint-AI/UniME-LLaVA-OneVision-7B
  Image-Text-to-Text • 8B • Updated • 39 • 3
- DeepGlint-AI/UniME-LLaVA-1.6-7B
  Image-Text-to-Text • 8B • Updated • 126 • 5
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
  Paper • 2504.17432 • Published • 39

- DeepGlint-AI/UniME-Phi3.5-V-4.2B
  Image-Text-to-Text • Updated • 213 • 7
- DeepGlint-AI/UniME-LLaVA-1.6-7B
  Image-Text-to-Text • 8B • Updated • 126 • 5
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
  Paper • 2504.17432 • Published • 39
- DeepGlint-AI/UniME-LLaVA-OneVision-7B
  Image-Text-to-Text • 8B • Updated • 39 • 3

- How to Synthesize Text Data without Model Collapse?
  Paper • 2412.14689 • Published • 52
- SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
  Paper • 2412.12094 • Published • 11
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
  Paper • 2306.07691 • Published • 12
- iSTFTNet: Fast and Lightweight Mel-Spectrogram Vocoder Incorporating Inverse Short-Time Fourier Transform
  Paper • 2203.02395 • Published • 1

- LLM Pruning and Distillation in Practice: The Minitron Approach
  Paper • 2408.11796 • Published • 57
- TableBench: A Comprehensive and Complex Benchmark for Table Question Answering
  Paper • 2408.09174 • Published • 52
- To Code, or Not To Code? Exploring Impact of Code in Pre-training
  Paper • 2408.10914 • Published • 44
- Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications
  Paper • 2408.11878 • Published • 63

- Less is More: Recursive Reasoning with Tiny Networks
  Paper • 2510.04871 • Published • 493
- When Thoughts Meet Facts: Reusable Reasoning for Long-Context LMs
  Paper • 2510.07499 • Published • 48
- Improving Context Fidelity via Native Retrieval-Augmented Reasoning
  Paper • 2509.13683 • Published • 8
- Multimodal Iterative RAG for Knowledge-Intensive Visual Question Answering
  Paper • 2509.00798 • Published • 1

- Describe What You See with Multimodal Large Language Models to Enhance Video Recommendations
  Paper • 2508.09789 • Published • 5
- MM-BrowseComp: A Comprehensive Benchmark for Multimodal Browsing Agents
  Paper • 2508.13186 • Published • 18
- ZARA: Zero-shot Motion Time-Series Analysis via Knowledge and Retrieval Driven LLM Agents
  Paper • 2508.04038 • Published • 1
- Prompt Orchestration Markup Language
  Paper • 2508.13948 • Published • 48

- Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters
  Paper • 2408.03314 • Published • 63
- TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning
  Paper • 2502.15425 • Published • 9
- EgoLife: Towards Egocentric Life Assistant
  Paper • 2503.03803 • Published • 46
- Visual-RFT: Visual Reinforcement Fine-Tuning
  Paper • 2503.01785 • Published • 85

- LinFusion: 1 GPU, 1 Minute, 16K Image
  Paper • 2409.02097 • Published • 34
- Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion
  Paper • 2409.11406 • Published • 27
- Diffusion Models Are Real-Time Game Engines
  Paper • 2408.14837 • Published • 126
- Segment Anything with Multiple Modalities
  Paper • 2408.09085 • Published • 22

- CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data
  Paper • 2404.15653 • Published • 29
- MoDE: CLIP Data Experts via Clustering
  Paper • 2404.16030 • Published • 15
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
  Paper • 2405.12130 • Published • 50
- Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
  Paper • 2405.12981 • Published • 33