2025 "llm inference efficiency" Papers
5 papers found
CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs
Gunho Park, Jeongin Bae, Byeongwook Kim et al.
NeurIPS 2025 (poster) · arXiv:2512.17970

KVCOMM: Online Cross-context KV-cache Communication for Efficient LLM-based Multi-agent Systems
Hancheng Ye, Zhengqi Gao, Mingyuan Ma et al.
NeurIPS 2025 (poster) · arXiv:2510.12872
1 citation

Progressive Mixed-Precision Decoding for Efficient LLM Inference
Hao (Mark) Chen, Fuwen Tan, Alexandros Kouris et al.
ICLR 2025 (poster) · arXiv:2410.13461
8 citations

RazorAttention: Efficient KV Cache Compression Through Retrieval Heads
Hanlin Tang, Yang Lin, Jing Lin et al.
ICLR 2025 (poster) · arXiv:2407.15891
59 citations

STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs
Peijie Dong, Lujun Li, Yuedong Zhong et al.
ICLR 2025 (poster) · arXiv:2408.01803
31 citations