2025 Poster "in-context learning" Papers

57 papers found • Page 1 of 2

Adaptive Transformer Programs: Bridging the Gap Between Performance and Interpretability in Transformers

Quoc-Vinh Lai-Dang, Taemin Kang, Seungah Son

ICLR 2025 poster

A Recipe for Generating 3D Worlds from a Single Image

Katja Schwarz, Denis Rozumny, Samuel Rota Bulò et al.

ICCV 2025 poster • arXiv:2503.16611 • 10 citations

Attention-based clustering

Rodrigo Maulen Soto, Pierre Marion, Claire Boyer

NeurIPS 2025 poster • arXiv:2505.13112

BenTo: Benchmark Reduction with In-Context Transferability

Hongyu Zhao, Ming Li, Lichao Sun et al.

ICLR 2025 poster

Can In-context Learning Really Generalize to Out-of-distribution Tasks?

Qixun Wang, Yifei Wang, Xianghua Ying et al.

ICLR 2025 poster • arXiv:2410.09695 • 15 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 poster • arXiv:2509.17552 • 1 citation

Competition Dynamics Shape Algorithmic Phases of In-Context Learning

Core Francisco Park, Ekdeep Singh Lubana, Hidenori Tanaka

ICLR 2025 poster • arXiv:2412.01003 • 34 citations

Cropper: Vision-Language Model for Image Cropping through In-Context Learning

Seung Hyun Lee, Jijun Jiang, Yiran Xu et al.

CVPR 2025 poster • arXiv:2408.07790 • 5 citations

DataMan: Data Manager for Pre-training Large Language Models

Ru Peng, Kexin Yang, Yawen Zeng et al.

ICLR 2025 poster • arXiv:2502.19363 • 8 citations

Density estimation with LLMs: a geometric investigation of in-context learning trajectories

Toni Liu, Nicolas Boulle, Raphaël Sarfati et al.

ICLR 2025 poster • arXiv:2410.05218 • 2 citations

Differential Transformer

Tianzhu Ye, Li Dong, Yuqing Xia et al.

ICLR 2025 poster • arXiv:2410.05258

Efficient Cross-Episode Meta-RL

Gresa Shala, André Biedenkapp, Pierre Krack et al.

ICLR 2025 poster

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 poster • arXiv:2410.09343 • 6 citations

Endless Jailbreaks with Bijection Learning

Brian R.Y. Huang, Max Li, Leonard Tang

ICLR 2025 poster • arXiv:2410.01294 • 14 citations

Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game

Tingting Dan, Xinwei Huang, Won Hwa Kim et al.

NeurIPS 2025 poster

Exploring the Limits of Vision-Language-Action Manipulation in Cross-task Generalization

Jiaming Zhou, Ke Ye, Jiayi Liu et al.

NeurIPS 2025 poster • arXiv:2505.15660 • 16 citations

Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass

Tong Chen, Hao Fang, Patrick Xia et al.

ICLR 2025 poster • arXiv:2411.05877 • 8 citations

GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning

Haonan Yuan, Qingyun Sun, Junhua Shi et al.

NeurIPS 2025 poster • arXiv:2511.05592 • 3 citations

How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs

Samet Demir, Zafer Dogan

NeurIPS 2025 poster • arXiv:2510.25753

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 poster • arXiv:2405.14660 • 8 citations

Improving Large Language Model Planning with Action Sequence Similarity

Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.

ICLR 2025 poster • arXiv:2505.01009 • 5 citations

In-Context Learning Strategies Emerge Rationally

Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.

NeurIPS 2025 poster • arXiv:2506.17859 • 4 citations

Inference Scaling for Long-Context Retrieval Augmented Generation

Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.

ICLR 2025 poster • arXiv:2410.04343 • 51 citations

InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales

Zhepei Wei, Wei-Lin Chen, Yu Meng

ICLR 2025 poster • arXiv:2406.13629 • 70 citations

Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning

Xiaolei Wang, Xinyu Tang, Junyi Li et al.

ICLR 2025 poster • arXiv:2406.14022 • 6 citations

Is In-Context Learning Sufficient for Instruction Following in LLMs?

Hao Zhao, Maksym Andriushchenko, Francesco Croce et al.

ICLR 2025 poster • arXiv:2405.19874 • 21 citations

Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation

Yangtao Zhou, Hua Chu, Chen et al.

NeurIPS 2025 poster

Large (Vision) Language Models are Unsupervised In-Context Learners

Artyom Gadetsky, Andrei Atanov, Yulun Jiang et al.

ICLR 2025 poster • arXiv:2504.02349 • 3 citations

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025 poster

Linear Transformers Implicitly Discover Unified Numerical Algorithms

Patrick Lutz, Aditya Gangrade, Hadi Daneshmand et al.

NeurIPS 2025 poster • arXiv:2509.19702 • 1 citation

Nested Learning: The Illusion of Deep Learning Architectures

Ali Behrouz, Meisam Razaviyayn, Peilin Zhong et al.

NeurIPS 2025 poster • arXiv:2512.24695 • 12 citations

Neuroverse3D: Developing In-Context Learning Universal Model for Neuroimaging in 3D

Jiesi Hu, Hanyang Peng, Yanwu Yang et al.

ICCV 2025 poster • arXiv:2503.02410

On Linear Representations and Pretraining Data Frequency in Language Models

Jack Merullo, Noah Smith, Sarah Wiegreffe et al.

ICLR 2025 poster • arXiv:2504.12459 • 11 citations

On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery

Renpu Liu, Ruida Zhou, Cong Shen et al.

ICLR 2025 poster • arXiv:2410.13981 • 4 citations

On the Robustness of Transformers against Context Hijacking for Linear Classification

Tianle Li, Chenyang Zhang, Xingwu Chen et al.

NeurIPS 2025 poster • arXiv:2502.15609 • 3 citations

Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi

NeurIPS 2025 poster • arXiv:2508.16027

Optimality and NP-Hardness of Transformers in Learning Markovian Dynamical Functions

Yanna Ding, Songtao Lu, Yingdong Lu et al.

NeurIPS 2025 poster • arXiv:2510.18638

PersonalLLM: Tailoring LLMs to Individual Preferences

Thomas Zollo, Andrew Siah, Naimeng Ye et al.

ICLR 2025 poster • arXiv:2409.20296 • 27 citations

Reasoning Models Better Express Their Confidence

Dongkeun Yoon, Seungone Kim, Sohee Yang et al.

NeurIPS 2025 poster • arXiv:2505.14489 • 32 citations

REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments

Kaustubh Sridhar, Souradeep Dutta, Dinesh Jayaraman et al.

ICLR 2025 poster • arXiv:2412.04759 • 9 citations

RelationAdapter: Learning and Transferring Visual Relation with Diffusion Transformers

Yan Gong, Yiren Song, Yicheng Li et al.

NeurIPS 2025 poster • arXiv:2506.02528 • 15 citations

Selective Induction Heads: How Transformers Select Causal Structures in Context

Francesco D'Angelo, Francesco Croce, Nicolas Flammarion

ICLR 2025 poster • arXiv:2509.08184 • 4 citations

Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks

Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian

NeurIPS 2025 poster • arXiv:2505.00234 • 4 citations

Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence

Shaopeng Fu, Liang Ding, Jingfeng Zhang et al.

NeurIPS 2025 poster • arXiv:2502.04204 • 6 citations

Show and Segment: Universal Medical Image Segmentation via In-Context Learning

Yunhe Gao, Di Liu, Zhuowei Li et al.

CVPR 2025 poster • arXiv:2503.19359 • 8 citations

Task Descriptors Help Transformers Learn Linear Models In-Context

Ruomin Huang, Rong Ge

ICLR 2025 poster • 3 citations

Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context

Taejong Joo, Diego Klabjan

NeurIPS 2025 poster • arXiv:2502.04580

Theoretical Insights into In-context Learning with Unlabeled Data

Yingcong Li, Xiangyu Chang, Muti Kara et al.

NeurIPS 2025 poster

TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning

Andreas Auer, Patrick Podest, Daniel Klotz et al.

NeurIPS 2025 poster • arXiv:2505.23719 • 31 citations

Transformers are almost optimal metalearners for linear classification

Roey Magen, Gal Vardi

NeurIPS 2025 poster • arXiv:2510.19797 • 1 citation