"in-context learning" Papers

190 papers found • Page 3 of 4

Trained Mamba Emulates Online Gradient Descent in In-Context Linear Regression

Jiarui Jiang, Wei Huang, Miao Zhang et al.

NEURIPS 2025 • arXiv:2509.23779
1 citation

Transformers are almost optimal metalearners for linear classification

Roey Magen, Gal Vardi

NEURIPS 2025 • arXiv:2510.19797
1 citation

Transformers Handle Endogeneity in In-Context Linear Regression

Haodong Liang, Krishna Balasubramanian, Lifeng Lai

ICLR 2025 • arXiv:2410.01265
4 citations

Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought

Jianhao Huang, Zixuan Wang, Jason Lee

ICLR 2025 • arXiv:2502.21212
22 citations

Transformers Struggle to Learn to Search

Abulhair Saparov, Srushti Ajay Pawar, Shreyas Pimpalgaonkar et al.

ICLR 2025 • arXiv:2412.04703
15 citations

Understanding Prompt Tuning and In-Context Learning via Meta-Learning

Tim Genewein, Kevin Li, Jordi Grau-Moya et al.

NEURIPS 2025 (spotlight) • arXiv:2505.17010
5 citations

Understanding the Generalization of In-Context Learning in Transformers: An Empirical Study

Xingxuan Zhang, Haoran Wang, Jiansheng Li et al.

ICLR 2025 • arXiv:2503.15579
5 citations

Unlabeled Data Can Provably Enhance In-Context Learning of Transformers

Renpu Liu, Jing Yang

NEURIPS 2025 • arXiv:2601.10058
1 citation

Unleashing In-context Learning of Autoregressive Models for Few-shot Image Manipulation

Bolin Lai, Felix Juefei-Xu, Miao Liu et al.

CVPR 2025 (highlight) • arXiv:2412.01027
8 citations

Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation

Jinyang Li, Jack Williams, Nick McKenna et al.

NEURIPS 2025

Unsupervised Meta-Learning via In-Context Learning

Anna Vettoruzzo, Lorenzo Braccaioli, Joaquin Vanschoren et al.

ICLR 2025 • arXiv:2405.16124
3 citations

Variational Uncertainty Decomposition for In-Context Learning

I. Shavindra Jayasekera, Jacob Si, Filippo Valdettaro et al.

NEURIPS 2025 • arXiv:2509.02327
2 citations

Vector-ICL: In-context Learning with Continuous Vector Representations

Yufan Zhuang, Chandan Singh, Liyuan Liu et al.

ICLR 2025 • arXiv:2410.05629
10 citations

VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding

Kangsan Kim, Geon Park, Youngwan Lee et al.

CVPR 2025 • arXiv:2412.02186
12 citations

Vision-centric Token Compression in Large Language Model

Ling Xing, Alex Jinpeng Wang, Rui Yan et al.

NEURIPS 2025 (spotlight) • arXiv:2502.00791
11 citations

Vision Language Models are In-Context Value Learners

Yecheng Jason Ma, Joey Hejna, Chuyuan Fu et al.

ICLR 2025 (oral) • arXiv:2411.04549
49 citations

Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding

Qian Ma, Ruoxiang Xu, Yongqiang Cai

NEURIPS 2025 • arXiv:2511.06376

WeatherGFM: Learning a Weather Generalist Foundation Model via In-context Learning

Xiangyu Zhao, Zhiwang Zhou, Wenlong Zhang et al.

ICLR 2025 (oral) • arXiv:2411.05420
9 citations

What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains

Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.

NEURIPS 2025 (spotlight) • arXiv:2508.07208
1 citation

Why In-Context Learning Models are Good Few-Shot Learners?

Shiguang Wu, Yaqing Wang, Quanming Yao

ICLR 2025

Zero-shot forecasting of chaotic systems

Yuanzhao Zhang, William Gilpin

ICLR 2025 • arXiv:2409.15771
19 citations

Zero-shot Model-based Reinforcement Learning using Large Language Models

Abdelhakim Benechehab, Youssef Attia El Hili, Ambroise Odonnat et al.

ICLR 2025 • arXiv:2410.11711
5 citations

Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar et al.

ICML 2024 • arXiv:2308.10379
99 citations

An Information-Theoretic Analysis of In-Context Learning

Hong Jun Jeon, Jason Lee, Qi Lei et al.

ICML 2024 • arXiv:2401.15530
37 citations

Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities

Zhifeng Kong, Arushi Goel, Rohan Badlani et al.

ICML 2024 • arXiv:2402.01831
168 citations

BAGEL: Bootstrapping Agents by Guiding Exploration with Language

Shikhar Murty, Christopher Manning, Peter Shaw et al.

ICML 2024 • arXiv:2403.08140
29 citations

Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu, Tao Yang, Yuwang Wang et al.

ICML 2024 • arXiv:2309.06054
5 citations

Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?

Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi et al.

ICML 2024 • arXiv:2410.08292
38 citations

Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks

Jong Ho Park, Jaden Park, Zheyang Xiong et al.

ICML 2024 • arXiv:2402.04248
107 citations

Code-Style In-Context Learning for Knowledge-Based Question Answering

Zhijie Nie, Richong Zhang, Zhongyuan Wang et al.

AAAI 2024 • arXiv:2309.04695
19 citations

Compositional Text-to-Image Generation with Dense Blob Representations

Weili Nie, Sifei Liu, Morteza Mardani et al.

ICML 2024 • arXiv:2405.08246
37 citations

Context Diffusion: In-Context Aware Image Generation

Ivona Najdenkoska, Animesh Sinha, Abhimanyu Dubey et al.

ECCV 2024 • arXiv:2312.03584
17 citations

CoReS: Orchestrating the Dance of Reasoning and Segmentation

Xiaoyi Bao, Siyang Sun, Shuailei Ma et al.

ECCV 2024 • arXiv:2404.05673
18 citations

Customizing Language Model Responses with Contrastive In-Context Learning

Xiang Gao, Kamalika Das

AAAI 2024 • arXiv:2401.17390
20 citations

DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding

Jincen Jiang, Qianyu Zhou, Yuhang Li et al.

ECCV 2024 • arXiv:2407.08801
15 citations

Dolphins: Multimodal Language Model for Driving

Yingzi Ma, Yulong Cao, Jiachen Sun et al.

ECCV 2024 • arXiv:2312.00438
128 citations

DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning

Jing Xiong, Zixuan Li, Chuanyang Zheng et al.

ICLR 2024 • arXiv:2310.02954

Dual Operating Modes of In-Context Learning

Ziqian Lin, Kangwook Lee

ICML 2024 • arXiv:2402.18819
46 citations

Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems

David T. Hoffmann, Simon Schrodi, Jelena Bratulić et al.

ICML 2024 • arXiv:2310.12956
11 citations

Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers

Brian Chen, Tianyang Hu, Hui Jin et al.

ICML 2024 • arXiv:2406.02847
6 citations

Feedback Loops With Language Models Drive In-Context Reward Hacking

Alexander Pan, Erik Jones, Meena Jagadeesan et al.

ICML 2024 • arXiv:2402.06627
60 citations

Finding Visual Task Vectors

Alberto Hojel, Yutong Bai, Trevor Darrell et al.

ECCV 2024 • arXiv:2404.05729
14 citations

FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction

Zhonghang Li, Lianghao Xia, Yong Xu et al.

ICML 2024 (oral) • arXiv:2405.17898
24 citations

Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations

Yongshuo Zong, Tingyang Yu, Ruchika Chavhan et al.

ICML 2024 • arXiv:2310.01651
27 citations

From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems

Jianliang He, Siyu Chen, Fengzhuo Zhang et al.

ICML 2024 • arXiv:2405.19883
11 citations

Generalization to New Sequential Decision Making Tasks with In-Context Learning

Sharath Chandra Raparthy, Eric Hambro, Robert Kirk et al.

ICML 2024 • arXiv:2312.03801
37 citations

Generative Multimodal Models are In-Context Learners

Quan Sun, Yufeng Cui, Xiaosong Zhang et al.

CVPR 2024 • arXiv:2312.13286
438 citations

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

ICML 2024 • arXiv:2311.09606
9 citations

GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering

Yifeng Zhang, Ming Jiang, Qi Zhao

ECCV 2024
2 citations

How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?

Hongkang Li, Meng Wang, Songtao Lu et al.

ICML 2024 • arXiv:2402.15607
34 citations