2025 "in-context learning" Papers
30 papers found
Attention-based clustering
Rodrigo Maulen Soto, Pierre Marion, Claire Boyer
BenTo: Benchmark Reduction with In-Context Transferability
Hongyu Zhao, Ming Li, Lichao Sun et al.
Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation
Jianyuan Guo, Peike Li, Trevor Cohn
Can In-context Learning Really Generalize to Out-of-distribution Tasks?
Qixun Wang, Yifei Wang, Xianghua Ying et al.
Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
Tianle Zhang, Wanlong Fang, Jonathan Woo et al.
Density estimation with LLMs: a geometric investigation of in-context learning trajectories
Toni Liu, Nicolas Boullé, Raphaël Sarfati et al.
Differential Transformer
Tianzhu Ye, Li Dong, Yuqing Xia et al.
Efficient Cross-Episode Meta-RL
Gresa Shala, André Biedenkapp, Pierre Krack et al.
ELICIT: LLM Augmentation Via External In-context Capability
Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.
Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game
Tingting Dan, Xinwei Huang, Won Hwa Kim et al.
Implicit In-context Learning
Zhuowei Li, Zihao Xu, Ligong Han et al.
Improving Large Language Model Planning with Action Sequence Similarity
Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.
In-Context Learning Strategies Emerge Rationally
Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.
Inference Scaling for Long-Context Retrieval Augmented Generation
Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.
Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation
Yangtao Zhou, Hua Chu, Chen et al.
Learning to Rank for In-Context Example Retrieval
Yuwen Ji, Luodan Zhang, Ambyer Han et al.
Neuroverse3D: Developing In-Context Learning Universal Model for Neuroimaging in 3D
Jiesi Hu, Hanyang Peng, Yanwu Yang et al.
On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery
Renpu Liu, Ruida Zhou, Cong Shen et al.
Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning
Baiyuan Chen, Shinji Ito, Masaaki Imaizumi
Reasoning Models Better Express Their Confidence
Dongkeun Yoon, Seungone Kim, Sohee Yang et al.
Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks
Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian
Task Descriptors Help Transformers Learn Linear Models In-Context
Ruomin Huang, Rong Ge
Theoretical Insights into In-context Learning with Unlabeled Data
Yingcong Li, Xiangyu Chang, Muti Kara et al.
Transformers are almost optimal metalearners for linear classification
Roey Magen, Gal Vardi
Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought
Jianhao Huang, Zixuan Wang, Jason Lee
Unlabeled Data Can Provably Enhance In-Context Learning of Transformers
Renpu Liu, Jing Yang
Vision-centric Token Compression in Large Language Model
Ling Xing, Alex Jinpeng Wang, Rui Yan et al.
Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding
Qian Ma, Ruoxiang Xu, Yongqiang Cai
What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains
Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.
Why In-Context Learning Models are Good Few-Shot Learners?
Shiguang Wu, Yaqing Wang, Quanming Yao