SeCom: On Memory Construction and Retrieval for Personalized Conversational Agents

ICLR 2025

Abstract

To deliver coherent and personalized experiences in long-term conversations, existing approaches typically perform retrieval-augmented response generation by constructing memory banks from conversation history at either the turn level, the session level, or through summarization techniques. In this paper, we explore the impact of different memory granularities and present two key findings: (1) Both turn-level and session-level memory units are suboptimal, affecting not only the quality of final responses but also the accuracy of the retrieval process. (2) The redundancy in natural language introduces noise, hindering precise retrieval. We demonstrate that LLMLingua-2, originally designed for prompt compression to accelerate LLM inference, can serve as an effective denoising method to enhance memory retrieval accuracy. Building on these insights, we propose SeCom, a method that constructs a memory bank with topical segments by introducing a conversation Segmentation model, while performing memory retrieval based on Compressed memory units. Experimental results show that SeCom outperforms turn-level, session-level, and several summarization-based methods on long-term conversation benchmarks such as LOCOMO and Long-MT-Bench+. Additionally, the proposed conversation segmentation method demonstrates superior performance on dialogue segmentation datasets such as DialSeg711, TIAGE, and SuperDialSeg.
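The segment-then-compress-then-retrieve pipeline described above can be sketched in miniature. This is only an illustrative toy, not the paper's method: the word-overlap boundary detector and the stopword-dropping "compressor" below are hypothetical stand-ins for SeCom's learned conversation-segmentation model and for LLMLingua-2, and the retrieval step uses simple keyword overlap rather than a dense retriever.

```python
# Toy SeCom-style pipeline: segment turns into topical units, denoise
# ("compress") each unit, then retrieve the best unit for a query.
# All three components are simplistic stand-ins for the paper's models.

STOPWORDS = {"the", "a", "an", "is", "are", "i", "you", "to", "of", "and", "it"}


def segment(turns, threshold=0.1):
    """Group consecutive turns into topical segments: start a new segment
    when a turn's word overlap with the current segment falls below
    `threshold` (Jaccard similarity over non-stopword tokens)."""
    segments, current = [], []
    for turn in turns:
        words = set(turn.lower().split()) - STOPWORDS
        if current:
            seg_words = set(" ".join(current).lower().split()) - STOPWORDS
            overlap = len(words & seg_words) / max(len(words | seg_words), 1)
            if overlap < threshold:
                segments.append(current)
                current = []
        current.append(turn)
    if current:
        segments.append(current)
    return segments


def compress(seg):
    """Denoise a segment by dropping stopwords (stand-in for LLMLingua-2)."""
    return " ".join(w for turn in seg for w in turn.split()
                    if w.lower() not in STOPWORDS)


def retrieve(segments, query):
    """Return the segment whose compressed form best overlaps the query."""
    q = set(query.lower().split()) - STOPWORDS
    return max(segments,
               key=lambda s: len(q & set(compress(s).lower().split())))


turns = [
    "I adopted a new puppy last week",
    "The puppy loves chewing shoes",
    "My flight to Tokyo leaves Friday",
    "Tokyo hotels are booked already",
]
segs = segment(turns)          # two topical segments: puppy, Tokyo trip
best = retrieve(segs, "what does the puppy like")
```

The key idea the sketch mirrors is that retrieval operates on compressed topical segments rather than on raw turns or whole sessions, which is the granularity the paper argues is suboptimal.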
