NeurIPS 2025 "in-context learning" Papers

47 papers found

Attention-based clustering

Rodrigo Maulen Soto, Pierre Marion, Claire Boyer

NeurIPS 2025 · poster · arXiv:2505.13112

Axial Neural Networks for Dimension-Free Foundation Models

Hyunsu Kim, Jonggeon Park, Joan Bruna et al.

NeurIPS 2025 · spotlight · arXiv:2510.13665

Breaking the Gradient Barrier: Unveiling Large Language Models for Strategic Classification

Xinpeng Lv, Yunxin Mao, Haoxuan Li et al.

NeurIPS 2025 · poster · arXiv:2511.06979 · 1 citation

Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation

Jianyuan Guo, Peike Li, Trevor Cohn

NeurIPS 2025 · oral · arXiv:2505.15438 · 3 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 · poster · arXiv:2509.17552 · 1 citation

CCL: Causal-aware In-context Learning for Out-of-Distribution Generalization

Hoyoon Byun, Gyeongdeok Seo, Joonseong Kang et al.

NeurIPS 2025 · poster

Do-PFN: In-Context Learning for Causal Effect Estimation

Jake Robertson, Arik Reuter, Siyuan Guo et al.

NeurIPS 2025 · spotlight · arXiv:2506.06039 · 11 citations

Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game

Tingting Dan, Xinwei Huang, Won Hwa Kim et al.

NeurIPS 2025 · poster

Exploring the Limits of Vision-Language-Action Manipulation in Cross-task Generalization

Jiaming Zhou, Ke Ye, Jiayi Liu et al.

NeurIPS 2025 · poster · arXiv:2505.15660 · 16 citations

From Softmax to Score: Transformers Can Effectively Implement In-Context Denoising Steps

Paul Rosu, Lawrence Carin, Xiang Cheng

NeurIPS 2025 · poster

GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning

Haonan Yuan, Qingyun Sun, Junhua Shi et al.

NeurIPS 2025 · poster · arXiv:2511.05592 · 3 citations

Hierarchical Demonstration Order Optimization for Many-shot In-Context Learning

Yinhan He, Wendy Zheng, Song Wang et al.

NeurIPS 2025 · poster

How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs

Samet Demir, Zafer Dogan

NeurIPS 2025 · poster · arXiv:2510.25753

In-Context Learning of Stochastic Differential Equations with Foundation Inference Models

Patrick Seifner, Kostadin Cvejoski, David Berghaus et al.

NeurIPS 2025 · poster · arXiv:2502.19049 · 5 citations

In-Context Learning Strategies Emerge Rationally

Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.

NeurIPS 2025 · poster · arXiv:2506.17859 · 4 citations

Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation

Yangtao Zhou, Hua Chu, Chen et al.

NeurIPS 2025 · poster

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025 · poster

Linear Transformers Implicitly Discover Unified Numerical Algorithms

Patrick Lutz, Aditya Gangrade, Hadi Daneshmand et al.

NeurIPS 2025 · poster · arXiv:2509.19702 · 1 citation

Memory Mosaics at scale

Jianyu Zhang, Leon Bottou

NeurIPS 2025 · oral · arXiv:2507.03285 · 3 citations

Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex

Muquan Yu, Mu Nan, Hossein Adeli et al.

NeurIPS 2025 · poster · arXiv:2505.15813

Nested Learning: The Illusion of Deep Learning Architectures

Ali Behrouz, Meisam Razaviyayn, Peilin Zhong et al.

NeurIPS 2025 · poster · arXiv:2512.24695 · 12 citations

On the Robustness of Transformers against Context Hijacking for Linear Classification

Tianle Li, Chenyang Zhang, Xingwu Chen et al.

NeurIPS 2025 · poster · arXiv:2502.15609 · 3 citations

Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi

NeurIPS 2025 · poster · arXiv:2508.16027

Optimality and NP-Hardness of Transformers in Learning Markovian Dynamical Functions

Yanna Ding, Songtao Lu, Yingdong Lu et al.

NeurIPS 2025 · poster · arXiv:2510.18638

Optimization Inspired Few-Shot Adaptation for Large Language Models

Boyan Gao, Xin Wang, Yibo Yang et al.

NeurIPS 2025 · spotlight · arXiv:2505.19107

Pre-trained Large Language Models Learn to Predict Hidden Markov Models In-context

Yijia Dai, Zhaolin Gao, Yahya Sattar et al.

NeurIPS 2025 · poster

Reasoning Models Better Express Their Confidence

Dongkeun Yoon, Seungone Kim, Sohee Yang et al.

NeurIPS 2025 · poster · arXiv:2505.14489 · 32 citations

RelationAdapter: Learning and Transferring Visual Relation with Diffusion Transformers

Yan Gong, Yiren Song, Yicheng Li et al.

NeurIPS 2025 · poster · arXiv:2506.02528 · 15 citations

ROVER: Recursive Reasoning Over Videos with Vision-Language Models for Embodied Tasks

Philip Schroeder, Ondrej Biza, Thomas Weng et al.

NeurIPS 2025 · oral · arXiv:2508.01943

Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks

Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian

NeurIPS 2025 · poster · arXiv:2505.00234 · 4 citations

Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence

Shaopeng Fu, Liang Ding, Jingfeng Zhang et al.

NeurIPS 2025 · poster · arXiv:2502.04204 · 6 citations

TabDPT: Scaling Tabular Foundation Models on Real Data

Junwei Ma, Valentin Thomas, Rasa Hosseinzadeh et al.

NeurIPS 2025 · poster · arXiv:2410.18164 · 15 citations

Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context

Taejong Joo, Diego Klabjan

NeurIPS 2025 · poster · arXiv:2502.04580

The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation

Patrick Kahardipraja, Reduan Achtibat, Thomas Wiegand et al.

NeurIPS 2025 · poster · arXiv:2505.15807 · 4 citations

The emergence of sparse attention: impact of data distribution and benefits of repetition

Nicolas Zucchet, Francesco D'Angelo, Andrew Lampinen et al.

NeurIPS 2025 · oral · arXiv:2505.17863 · 6 citations

Theoretical Insights into In-context Learning with Unlabeled Data

Yingcong Li, Xiangyu Chang, Muti Kara et al.

NeurIPS 2025 · poster

TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning

Andreas Auer, Patrick Podest, Daniel Klotz et al.

NeurIPS 2025 · poster · arXiv:2505.23719 · 31 citations

Towards Predicting Any Human Trajectory In Context

Ryo Fujii, Hideo Saito, Ryo Hachiuma

NeurIPS 2025 · oral · arXiv:2506.00871 · 2 citations

Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression

Jiarui Jiang, Wei Huang, Miao Zhang et al.

NeurIPS 2025 · poster · arXiv:2509.23779 · 1 citation

Transformers are almost optimal metalearners for linear classification

Roey Magen, Gal Vardi

NeurIPS 2025 · poster · arXiv:2510.19797 · 1 citation

Understanding Prompt Tuning and In-Context Learning via Meta-Learning

Tim Genewein, Kevin Li, Jordi Grau-Moya et al.

NeurIPS 2025 · spotlight · arXiv:2505.17010 · 4 citations

Unlabeled Data Can Provably Enhance In-Context Learning of Transformers

Renpu Liu, Jing Yang

NeurIPS 2025 · poster · arXiv:2601.10058 · 1 citation

Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation

Jinyang Li, Jack Williams, Nick McKenna et al.

NeurIPS 2025 · poster

Variational Uncertainty Decomposition for In-Context Learning

I. Shavindra Jayasekera, Jacob Si, Filippo Valdettaro et al.

NeurIPS 2025 · poster · arXiv:2509.02327 · 1 citation

Vision-centric Token Compression in Large Language Model

Ling Xing, Alex Jinpeng Wang, Rui Yan et al.

NeurIPS 2025 · spotlight · arXiv:2502.00791 · 7 citations

Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding

Qian Ma, Ruoxiang Xu, Yongqiang Cai

NeurIPS 2025 · poster · arXiv:2511.06376

What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains

Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.

NeurIPS 2025 · spotlight · arXiv:2508.07208