2025 "kv cache management" Papers
4 papers found
DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference
Jinwei Yao, Kaiqi Chen, Kexun Zhang et al.
ICLR 2025 (poster) · arXiv:2404.00242
8 citations
KVFlow: Efficient Prefix Caching for Accelerating LLM-Based Multi-Agent Workflows
Zaifeng Pan, Ajjkumar Dahyalal Patel, Yipeng Shen et al.
NeurIPS 2025 (oral) · arXiv:2507.07400
8 citations
Tail-Optimized Caching for LLM Inference
Wenxin Zhang, Yueying Li, Ciamac C Moallemi et al.
NeurIPS 2025 (poster) · arXiv:2510.15152
2 citations
Transcending Cost-Quality Tradeoff in Agent Serving via Session-Awareness
Yanyu Ren, Li Chen, Dan Li et al.
NeurIPS 2025 (poster)