ICLR 2025 "in-context learning" Papers
13 papers found
BenTo: Benchmark Reduction with In-Context Transferability
Hongyu Zhao, Ming Li, Lichao Sun et al.
ICLR 2025 poster
Can In-context Learning Really Generalize to Out-of-distribution Tasks?
Qixun Wang, Yifei Wang, Xianghua Ying et al.
ICLR 2025 poster · arXiv:2410.09695 · 15 citations
Density estimation with LLMs: a geometric investigation of in-context learning trajectories
Toni Liu, Nicolas Boullé, Raphaël Sarfati et al.
ICLR 2025 poster · arXiv:2410.05218 · 2 citations
Differential Transformer
Tianzhu Ye, Li Dong, Yuqing Xia et al.
ICLR 2025 poster · arXiv:2410.05258
Efficient Cross-Episode Meta-RL
Gresa Shala, André Biedenkapp, Pierre Krack et al.
ICLR 2025 poster
ELICIT: LLM Augmentation Via External In-context Capability
Futing Wang, Jianhao Yan, Yue Zhang et al.
ICLR 2025 poster · arXiv:2410.09343 · 6 citations
Implicit In-context Learning
Zhuowei Li, Zihao Xu, Ligong Han et al.
ICLR 2025 poster · arXiv:2405.14660 · 8 citations
Improving Large Language Model Planning with Action Sequence Similarity
Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.
ICLR 2025 poster · arXiv:2505.01009 · 5 citations
Inference Scaling for Long-Context Retrieval Augmented Generation
Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.
ICLR 2025 poster · arXiv:2410.04343 · 51 citations
On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery
Renpu Liu, Ruida Zhou, Cong Shen et al.
ICLR 2025 poster · arXiv:2410.13981 · 4 citations
Task Descriptors Help Transformers Learn Linear Models In-Context
Ruomin Huang, Rong Ge
ICLR 2025 poster · 3 citations
Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought
Jianhao Huang, Zixuan Wang, Jason Lee
ICLR 2025 poster · arXiv:2502.21212 · 18 citations
Why In-Context Learning Models are Good Few-Shot Learners?
Shiguang Wu, Yaqing Wang, Quanming Yao
ICLR 2025 poster