NeurIPS 2025 "in-context learning" Papers
47 papers found
Attention-based clustering
Rodrigo Maulen Soto, Pierre Marion, Claire Boyer
Axial Neural Networks for Dimension-Free Foundation Models
Hyunsu Kim, Jonggeon Park, Joan Bruna et al.
Breaking the Gradient Barrier: Unveiling Large Language Models for Strategic Classification
Xinpeng Lv, Yunxin Mao, Haoxuan Li et al.
Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation
Jianyuan Guo, Peike Li, Trevor Cohn
Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning
Tianle Zhang, Wanlong Fang, Jonathan Woo et al.
CCL: Causal-aware In-context Learning for Out-of-Distribution Generalization
Hoyoon Byun, Gyeongdeok Seo, Joonseong Kang et al.
Do-PFN: In-Context Learning for Causal Effect Estimation
Jake Robertson, Arik Reuter, Siyuan Guo et al.
Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game
Tingting Dan, Xinwei Huang, Won Hwa Kim et al.
Exploring the Limits of Vision-Language-Action Manipulation in Cross-task Generalization
Jiaming Zhou, Ke Ye, Jiayi Liu et al.
From Softmax to Score: Transformers Can Effectively Implement In-Context Denoising Steps
Paul Rosu, Lawrence Carin, Xiang Cheng
GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning
Haonan Yuan, Qingyun Sun, Junhua Shi et al.
Hierarchical Demonstration Order Optimization for Many-shot In-Context Learning
Yinhan He, Wendy Zheng, Song Wang et al.
How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs
Samet Demir, Zafer Dogan
In-Context Learning of Stochastic Differential Equations with Foundation Inference Models
Patrick Seifner, Kostadin Cvejoski, David Berghaus et al.
In-Context Learning Strategies Emerge Rationally
Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.
Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation
Yangtao Zhou, Hua Chu, Chen et al.
Learning to Rank for In-Context Example Retrieval
Yuwen Ji, Luodan Zhang, Ambyer Han et al.
Linear Transformers Implicitly Discover Unified Numerical Algorithms
Patrick Lutz, Aditya Gangrade, Hadi Daneshmand et al.
Memory Mosaics at scale
Jianyu Zhang, Leon Bottou
Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex
Muquan Yu, Mu Nan, Hossein Adeli et al.
Nested Learning: The Illusion of Deep Learning Architectures
Ali Behrouz, Meisam Razaviyayn, Peilin Zhong et al.
On the Robustness of Transformers against Context Hijacking for Linear Classification
Tianle Li, Chenyang Zhang, Xingwu Chen et al.
Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning
Baiyuan Chen, Shinji Ito, Masaaki Imaizumi
Optimality and NP-Hardness of Transformers in Learning Markovian Dynamical Functions
Yanna Ding, Songtao Lu, Yingdong Lu et al.
Optimization Inspired Few-Shot Adaptation for Large Language Models
Boyan Gao, Xin Wang, Yibo Yang et al.
Pre-trained Large Language Models Learn to Predict Hidden Markov Models In-context
Yijia Dai, Zhaolin Gao, Yahya Sattar et al.
Reasoning Models Better Express Their Confidence
Dongkeun Yoon, Seungone Kim, Sohee Yang et al.
RelationAdapter: Learning and Transferring Visual Relation with Diffusion Transformers
Yan Gong, Yiren Song, Yicheng Li et al.
ROVER: Recursive Reasoning Over Videos with Vision-Language Models for Embodied Tasks
Philip Schroeder, Ondrej Biza, Thomas Weng et al.
Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks
Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian
Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence
Shaopeng Fu, Liang Ding, Jingfeng Zhang et al.
TabDPT: Scaling Tabular Foundation Models on Real Data
Junwei Ma, Valentin Thomas, Rasa Hosseinzadeh et al.
Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context
Taejong Joo, Diego Klabjan
The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation
Patrick Kahardipraja, Reduan Achtibat, Thomas Wiegand et al.
The emergence of sparse attention: impact of data distribution and benefits of repetition
Nicolas Zucchet, Francesco D'Angelo, Andrew Lampinen et al.
Theoretical Insights into In-context Learning with Unlabeled Data
Yingcong Li, Xiangyu Chang, Muti Kara et al.
TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning
Andreas Auer, Patrick Podest, Daniel Klotz et al.
Towards Predicting Any Human Trajectory In Context
Ryo Fujii, Hideo Saito, Ryo Hachiuma
Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression
Jiarui Jiang, Wei Huang, Miao Zhang et al.
Transformers are almost optimal metalearners for linear classification
Roey Magen, Gal Vardi
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Tim Genewein, Kevin Li, Jordi Grau-Moya et al.
Unlabeled Data Can Provably Enhance In-Context Learning of Transformers
Renpu Liu, Jing Yang
Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation
Jinyang Li, Jack Williams, Nick McKenna et al.
Variational Uncertainty Decomposition for In-Context Learning
I. Shavindra Jayasekera, Jacob Si, Filippo Valdettaro et al.
Vision-centric Token Compression in Large Language Model
Ling Xing, Alex Jinpeng Wang, Rui Yan et al.
Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding
Qian Ma, Ruoxiang Xu, Yongqiang Cai
What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains
Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.