InferCept: Efficient Intercept Support for Augmented Large Language Model Inference

0 citations · #10 in ICML 2024 (of 2635 papers) · 5 authors
Abstract

Large language models are increasingly integrated with external environments, tools, and agents, such as ChatGPT plugins, to extend their capability beyond language-centric tasks. However, today's LLM inference systems are designed for standalone LLMs: they treat each external interaction as the end of LLM generation and form a new request when the interaction finishes, causing unnecessary recomputation of already-computed contexts, which accounts for 37-40% of total model forwarding time. This paper presents InferCept, the first LLM inference framework targeting augmented LLMs and supporting the efficient interception of LLM generation. InferCept minimizes the GPU resource waste caused by LLM interceptions and dedicates the saved memory to serving more requests. InferCept improves overall serving throughput by 1.6x-2x and completes 2x more requests per second compared to state-of-the-art LLM inference systems.
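The recomputation waste the abstract describes can be made concrete with a toy token-count sketch. This is purely illustrative and is not InferCept's actual API: it contrasts a standalone serving system, which re-forwards the entire prior context as a fresh prompt after an external interaction, with an intercept-aware system that keeps the context's KV cache alive and forwards only the new tokens. The function names and parameters are assumptions for illustration.

```python
# Illustrative sketch only (not InferCept's API): token counts forwarded
# through the model when an LLM request resumes after an external call.

def tokens_forwarded_recompute(context_len: int, tool_output_len: int,
                               new_gen_len: int) -> int:
    """Standalone serving: the interaction ends the request, so the whole
    prior context plus the tool output is re-forwarded as a new prompt."""
    return context_len + tool_output_len + new_gen_len

def tokens_forwarded_intercept(context_len: int, tool_output_len: int,
                               new_gen_len: int) -> int:
    """Intercept-aware serving: the KV cache for the prior context is kept,
    so only the tool output and newly generated tokens are forwarded."""
    return tool_output_len + new_gen_len

if __name__ == "__main__":
    # Hypothetical numbers: 1000-token context, 50-token tool result,
    # 100 newly generated tokens after resuming.
    print(tokens_forwarded_recompute(1000, 50, 100))  # 1150
    print(tokens_forwarded_intercept(1000, 50, 100))  # 150
```

With these hypothetical numbers, re-forwarding the context dominates the work after resumption, which is the source of the 37-40% forwarding-time overhead the paper measures.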
