2025 "in-context learning" Papers

30 papers found

Attention-based clustering

Rodrigo Maulen Soto, Pierre Marion, Claire Boyer

NeurIPS 2025 · poster · arXiv:2505.13112

BenTo: Benchmark Reduction with In-Context Transferability

Hongyu Zhao, Ming Li, Lichao Sun et al.

ICLR 2025 · poster

Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation

Jianyuan Guo, Peike Li, Trevor Cohn

NeurIPS 2025 · oral · arXiv:2505.15438 · 3 citations

Can In-context Learning Really Generalize to Out-of-distribution Tasks?

Qixun Wang, Yifei Wang, Xianghua Ying et al.

ICLR 2025 · poster · arXiv:2410.09695 · 15 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 · poster · arXiv:2509.17552 · 1 citation

Density estimation with LLMs: a geometric investigation of in-context learning trajectories

Toni Liu, Nicolas Boullé, Raphaël Sarfati et al.

ICLR 2025 · poster · arXiv:2410.05218 · 2 citations

Differential Transformer

Tianzhu Ye, Li Dong, Yuqing Xia et al.

ICLR 2025 · poster · arXiv:2410.05258

Efficient Cross-Episode Meta-RL

Gresa Shala, André Biedenkapp, Pierre Krack et al.

ICLR 2025 · poster

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 · poster · arXiv:2410.09343 · 6 citations

Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game

Tingting Dan, Xinwei Huang, Won Hwa Kim et al.

NeurIPS 2025 · poster

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 · poster · arXiv:2405.14660 · 8 citations

Improving Large Language Model Planning with Action Sequence Similarity

Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.

ICLR 2025 · poster · arXiv:2505.01009 · 5 citations

In-Context Learning Strategies Emerge Rationally

Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.

NeurIPS 2025 · poster · arXiv:2506.17859 · 4 citations

Inference Scaling for Long-Context Retrieval Augmented Generation

Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.

ICLR 2025 · poster · arXiv:2410.04343 · 51 citations

Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation

Yangtao Zhou, Hua Chu, Chen et al.

NeurIPS 2025 · poster

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025 · poster

Neuroverse3D: Developing In-Context Learning Universal Model for Neuroimaging in 3D

Jiesi Hu, Hanyang Peng, Yanwu Yang et al.

ICCV 2025 · poster · arXiv:2503.02410

On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery

Renpu Liu, Ruida Zhou, Cong Shen et al.

ICLR 2025 · poster · arXiv:2410.13981 · 4 citations

Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi

NeurIPS 2025 · poster · arXiv:2508.16027

Reasoning Models Better Express Their Confidence

Dongkeun Yoon, Seungone Kim, Sohee Yang et al.

NeurIPS 2025 · poster · arXiv:2505.14489 · 32 citations

Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks

Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian

NeurIPS 2025 · poster · arXiv:2505.00234 · 4 citations

Task Descriptors Help Transformers Learn Linear Models In-Context

Ruomin Huang, Rong Ge

ICLR 2025 · poster · 3 citations

Theoretical Insights into In-context Learning with Unlabeled Data

Yingcong Li, Xiangyu Chang, Muti Kara et al.

NeurIPS 2025 · poster

Transformers are almost optimal metalearners for linear classification

Roey Magen, Gal Vardi

NeurIPS 2025 · poster · arXiv:2510.19797 · 1 citation

Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought

Jianhao Huang, Zixuan Wang, Jason Lee

ICLR 2025 · poster · arXiv:2502.21212 · 18 citations

Unlabeled Data Can Provably Enhance In-Context Learning of Transformers

Renpu Liu, Jing Yang

NeurIPS 2025 · poster · arXiv:2601.10058 · 1 citation

Vision-centric Token Compression in Large Language Model

Ling Xing, Alex Jinpeng Wang, Rui Yan et al.

NeurIPS 2025 · spotlight · arXiv:2502.00791 · 7 citations

Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding

Qian Ma, Ruoxiang Xu, Yongqiang Cai

NeurIPS 2025 · poster · arXiv:2511.06376

What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains

Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.

NeurIPS 2025 · spotlight · arXiv:2508.07208

Why In-Context Learning Models are Good Few-Shot Learners?

Shiguang Wu, Yaqing Wang, Quanming Yao

ICLR 2025 · poster