Poster Papers: "in-context learning"

155 papers found • Page 1 of 4

Adaptive Transformer Programs: Bridging the Gap Between Performance and Interpretability in Transformers

Quoc-Vinh Lai-Dang, Taemin Kang, Seungah Son

ICLR 2025

A Recipe for Generating 3D Worlds from a Single Image

Katja Schwarz, Denis Rozumny, Samuel Rota Bulò et al.

ICCV 2025 · arXiv:2503.16611 · 10 citations

A Solvable Attention for Neural Scaling Laws

Bochen Lyu, Di Wang, Zhanxing Zhu

ICLR 2025 · 5 citations

Attention-based clustering

Rodrigo Maulen Soto, Pierre Marion, Claire Boyer

NeurIPS 2025 · arXiv:2505.13112 · 1 citation

BenTo: Benchmark Reduction with In-Context Transferability

Hongyu Zhao, Ming Li, Lichao Sun et al.

ICLR 2025

Breaking the Gradient Barrier: Unveiling Large Language Models for Strategic Classification

Xinpeng Lv, Yunxin Mao, Haoxuan Li et al.

NeurIPS 2025 · arXiv:2511.06979 · 1 citation

Can In-context Learning Really Generalize to Out-of-distribution Tasks?

Qixun Wang, Yifei Wang, Xianghua Ying et al.

ICLR 2025 · arXiv:2410.09695 · 16 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 · arXiv:2509.17552 · 2 citations

CCL: Causal-aware In-context Learning for Out-of-Distribution Generalization

Hoyoon Byun, Gyeongdeok Seo, Joonseong Kang et al.

NeurIPS 2025

Competition Dynamics Shape Algorithmic Phases of In-Context Learning

Core Francisco Park, Ekdeep Singh Lubana, Hidenori Tanaka

ICLR 2025 · arXiv:2412.01003 · 43 citations

Cropper: Vision-Language Model for Image Cropping through In-Context Learning

Seung Hyun Lee, Jijun Jiang, Yiran Xu et al.

CVPR 2025 · arXiv:2408.07790 · 5 citations

Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning

Fengyu Gao, Ruida Zhou, Tianhao Wang et al.

ICLR 2025 · arXiv:2410.12085 · 5 citations

DataMan: Data Manager for Pre-training Large Language Models

Ru Peng, Kexin Yang, Yawen Zeng et al.

ICLR 2025 · arXiv:2502.19363 · 8 citations

Decoupling Layout from Glyph in Online Chinese Handwriting Generation

Minsi Ren, Yan-Ming Zhang, Yi Chen

ICLR 2025 · arXiv:2410.02309 · 6 citations

Density estimation with LLMs: a geometric investigation of in-context learning trajectories

Toni Liu, Nicolas Boulle, Raphaël Sarfati et al.

ICLR 2025 · arXiv:2410.05218 · 4 citations

Differential Transformer

Tianzhu Ye, Li Dong, Yuqing Xia et al.

ICLR 2025 · arXiv:2410.05258

Dual-Process Image Generation

Grace Luo, Jonathan Granskog, Aleksander Holynski et al.

ICCV 2025 · arXiv:2506.01955 · 6 citations

Efficient Cross-Episode Meta-RL

Gresa Shala, André Biedenkapp, Pierre Krack et al.

ICLR 2025

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 · arXiv:2410.09343 · 6 citations

Endless Jailbreaks with Bijection Learning

Brian R.Y. Huang, Max Li, Leonard Tang

ICLR 2025 · arXiv:2410.01294 · 16 citations

Explore In-Context Message Passing Operator for Graph Neural Networks in A Mean Field Game

Tingting Dan, Xinwei Huang, Won Hwa Kim et al.

NeurIPS 2025

Exploring the Limits of Vision-Language-Action Manipulation in Cross-task Generalization

Jiaming Zhou, Ke Ye, Jiayi Liu et al.

NeurIPS 2025 · arXiv:2505.15660 · 18 citations

Federated In-Context Learning: Iterative Refinement for Improved Answer Quality

Ruhan Wang, Zhiyong Wang, Chengkai Huang et al.

ICML 2025 · arXiv:2506.07440 · 3 citations

From Softmax to Score: Transformers Can Effectively Implement In-Context Denoising Steps

Paul Rosu, Lawrence Carin, Xiang Cheng

NeurIPS 2025

Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass

Tong Chen, Hao Fang, Patrick Xia et al.

ICLR 2025 · arXiv:2411.05877 · 10 citations

GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning

Haonan Yuan, Qingyun Sun, Junhua Shi et al.

NeurIPS 2025 · arXiv:2511.05592 · 4 citations

GuardAgent: Safeguard LLM Agents via Knowledge-Enabled Reasoning

Zhen Xiang, Linzhi Zheng, Yanjie Li et al.

ICML 2025 · 69 citations

Hierarchical Demonstration Order Optimization for Many-shot In-Context Learning

Yinhan He, Wendy Zheng, Song Wang et al.

NeurIPS 2025

How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs

Samet Demir, Zafer Dogan

NeurIPS 2025 · arXiv:2510.25753

ICLR: In-Context Learning of Representations

Core Francisco Park, Andrew Lee, Ekdeep Singh Lubana et al.

ICLR 2025 · arXiv:2501.00070 · 32 citations

ImageGen-CoT: Enhancing Text-to-Image In-context Learning with Chain-of-Thought Reasoning

Jiaqi Liao, Zhengyuan Yang, Linjie Li et al.

ICCV 2025 · arXiv:2503.19312 · 22 citations

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 · arXiv:2405.14660 · 10 citations

Improving Large Language Model Planning with Action Sequence Similarity

Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.

ICLR 2025 · arXiv:2505.01009 · 5 citations

In-Context Learning of Stochastic Differential Equations with Foundation Inference Models

Patrick Seifner, Kostadin Cvejoski, David Berghaus et al.

NeurIPS 2025 · arXiv:2502.19049 · 5 citations

In-Context Learning Strategies Emerge Rationally

Daniel Wurgaft, Ekdeep S Lubana, Core Francisco Park et al.

NeurIPS 2025 · arXiv:2506.17859 · 5 citations

Inference Scaling for Long-Context Retrieval Augmented Generation

Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.

ICLR 2025 · arXiv:2410.04343 · 54 citations

InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales

Zhepei Wei, Wei-Lin Chen, Yu Meng

ICLR 2025 · arXiv:2406.13629 · 76 citations

Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning

Xiaolei Wang, Xinyu Tang, Junyi Li et al.

ICLR 2025 · arXiv:2406.14022 · 6 citations

Is In-Context Learning Sufficient for Instruction Following in LLMs?

Hao Zhao, Maksym Andriushchenko, Francesco Croce et al.

ICLR 2025 · arXiv:2405.19874 · 22 citations

Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation

Yangtao Zhou, Hua Chu, Chen et al.

NeurIPS 2025

Large (Vision) Language Models are Unsupervised In-Context Learners

Artyom Gadetsky, Andrei Atanov, Yulun Jiang et al.

ICLR 2025 · arXiv:2504.02349 · 3 citations

LaVin-DiT: Large Vision Diffusion Transformer

Zhaoqing Wang, Xiaobo Xia, Runnan Chen et al.

CVPR 2025 · arXiv:2411.11505 · 20 citations

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025

Linear Transformers Implicitly Discover Unified Numerical Algorithms

Patrick Lutz, Aditya Gangrade, Hadi Daneshmand et al.

NeurIPS 2025 · arXiv:2509.19702 · 1 citation

Making Text Embedders Few-Shot Learners

Chaofan Li, Minghao Qin, Shitao Xiao et al.

ICLR 2025 · arXiv:2409.15700 · 89 citations

Meta-Learning an In-Context Transformer Model of Human Higher Visual Cortex

Muquan Yu, Mu Nan, Hossein Adeli et al.

NeurIPS 2025 · arXiv:2505.15813

Mimic In-Context Learning for Multimodal Tasks

Yuchu Jiang, Jiale Fu, Chenduo Hao et al.

CVPR 2025 · arXiv:2504.08851 · 9 citations

Mixture of In-Context Prompters for Tabular PFNs

Derek Xu, Olcay Cirit, Reza Asadi et al.

ICLR 2025 · arXiv:2405.16156 · 21 citations

Multi-Perspective Data Augmentation for Few-shot Object Detection

Anh-Khoa Nguyen Vu, Quoc Truong Truong, Vinh-Tiep Nguyen et al.

ICLR 2025 · arXiv:2502.18195 · 7 citations

MultiVerse: A Multi-Turn Conversation Benchmark for Evaluating Large Vision and Language Models

Young-Jun Lee, Byung-Kwan Lee, Jianshu Zhang et al.

ICCV 2025 · arXiv:2510.16641 · 4 citations