SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching. Paper arXiv:2504.00970, published Apr 1, 2025.