"in-context learning" Papers

76 papers found • Page 1 of 2

Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation

Jianyuan Guo, Peike Li, Trevor Cohn

NeurIPS 2025 (oral) · arXiv:2505.15438 · 3 citations

Density estimation with LLMs: a geometric investigation of in-context learning trajectories

Toni Liu, Nicolas Boulle, Raphaël Sarfati et al.

ICLR 2025 (poster) · arXiv:2410.05218 · 2 citations

Efficient Cross-Episode Meta-RL

Gresa Shala, André Biedenkapp, Pierre Krack et al.

ICLR 2025 (poster)

ELICIT: LLM Augmentation Via External In-context Capability

Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.

ICLR 2025 (poster) · arXiv:2410.09343 · 6 citations

Implicit In-context Learning

Zhuowei Li, Zihao Xu, Ligong Han et al.

ICLR 2025 (poster) · arXiv:2405.14660 · 8 citations

Improving Large Language Model Planning with Action Sequence Similarity

Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.

ICLR 2025 (poster) · arXiv:2505.01009 · 5 citations

Inference Scaling for Long-Context Retrieval Augmented Generation

Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.

ICLR 2025 (poster) · arXiv:2410.04343 · 51 citations

Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation

Yangtao Zhou, Hua Chu, Chen et al.

NeurIPS 2025 (poster)

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025 (poster)

Neuroverse3D: Developing In-Context Learning Universal Model for Neuroimaging in 3D

Jiesi Hu, Hanyang Peng, Yanwu Yang et al.

ICCV 2025 (poster) · arXiv:2503.02410

On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery

Renpu Liu, Ruida Zhou, Cong Shen et al.

ICLR 2025 (poster) · arXiv:2410.13981 · 4 citations

Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

Baiyuan Chen, Shinji Ito, Masaaki Imaizumi

NeurIPS 2025 (poster) · arXiv:2508.16027

Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks

Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian

NeurIPS 2025 (poster) · arXiv:2505.00234 · 4 citations

Theoretical Insights into In-context Learning with Unlabeled Data

Yingcong Li, Xiangyu Chang, Muti Kara et al.

NeurIPS 2025 (poster)

Transformers are almost optimal metalearners for linear classification

Roey Magen, Gal Vardi

NeurIPS 2025 (poster) · arXiv:2510.19797 · 1 citation

Unlabeled Data Can Provably Enhance In-Context Learning of Transformers

Renpu Liu, Jing Yang

NeurIPS 2025 (poster) · arXiv:2601.10058 · 1 citation

Vision-centric Token Compression in Large Language Model

Ling Xing, Alex Jinpeng Wang, Rui Yan et al.

NeurIPS 2025 (spotlight) · arXiv:2502.00791 · 7 citations

What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains

Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.

NeurIPS 2025 (spotlight) · arXiv:2508.07208

Why In-Context Learning Models are Good Few-Shot Learners?

Shiguang Wu, Yaqing Wang, Quanming Yao

ICLR 2025 (poster)

Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar et al.

ICML 2024 (poster)

An Information-Theoretic Analysis of In-Context Learning

Hong Jun Jeon, Jason Lee, Qi Lei et al.

ICML 2024 (poster)

Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities

Zhifeng Kong, Arushi Goel, Rohan Badlani et al.

ICML 2024 (poster)

BAGEL: Bootstrapping Agents by Guiding Exploration with Language

Shikhar Murty, Christopher Manning, Peter Shaw et al.

ICML 2024 (poster)

Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu, Tao Yang, Yuwang Wang et al.

ICML 2024 (poster)

Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?

Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi et al.

ICML 2024 (poster)

Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks

Jong Ho Park, Jaden Park, Zheyang Xiong et al.

ICML 2024 (poster)

Code-Style In-Context Learning for Knowledge-Based Question Answering

Zhijie Nie, Richong Zhang, Zhongyuan Wang et al.

AAAI 2024 (paper) · arXiv:2309.04695 · 18 citations

Compositional Text-to-Image Generation with Dense Blob Representations

Weili Nie, Sifei Liu, Morteza Mardani et al.

ICML 2024 (poster)

Customizing Language Model Responses with Contrastive In-Context Learning

Xiang Gao, Kamalika Das

AAAI 2024 (paper) · arXiv:2401.17390 · 19 citations

DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding

Jincen Jiang, Qianyu Zhou, Yuhang Li et al.

ECCV 2024 (poster) · arXiv:2407.08801 · 15 citations

Dual Operating Modes of In-Context Learning

Ziqian Lin, Kangwook Lee

ICML 2024 (poster)

Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems

David T. Hoffmann, Simon Schrodi, Jelena Bratulić et al.

ICML 2024 (poster)

Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers

Brian Chen, Tianyang Hu, Hui Jin et al.

ICML 2024 (poster)

Feedback Loops With Language Models Drive In-Context Reward Hacking

Alexander Pan, Erik Jones, Meena Jagadeesan et al.

ICML 2024 (poster)

FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction

Zhonghang Li, Lianghao Xia, Yong Xu et al.

ICML 2024 (oral)

Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations

Yongshuo Zong, Tingyang Yu, Ruchika Chavhan et al.

ICML 2024 (poster)

From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems

Jianliang He, Siyu Chen, Fengzhuo Zhang et al.

ICML 2024 (poster)

Generalization to New Sequential Decision Making Tasks with In-Context Learning

Sharath Chandra Raparthy, Eric Hambro, Robert Kirk et al.

ICML 2024 (poster)

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

ICML 2024 (poster)

How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?

Hongkang Li, Meng Wang, Songtao Lu et al.

ICML 2024 (poster)

How do Transformers Perform In-Context Autoregressive Learning?

Michael Sander, Raja Giryes, Taiji Suzuki et al.

ICML 2024 (poster)

How Transformers Learn Causal Structure with Gradient Descent

Eshaan Nichani, Alex Damian, Jason Lee

ICML 2024 (poster)

In-context Convergence of Transformers

Yu Huang, Yuan Cheng, Yingbin Liang

ICML 2024 (poster)

In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought

Sili Huang, Jifeng Hu, Hechang Chen et al.

ICML 2024 (poster)

In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization

Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik et al.

ICML 2024 (poster)

In-Context Language Learning: Architectures and Algorithms

Ekin Akyürek, Bailin Wang, Yoon Kim et al.

ICML 2024 (poster)

In-Context Learning Agents Are Asymmetric Belief Updaters

Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.

ICML 2024 (poster)

In-context Learning on Function Classes Unveiled for Transformers

Zhijie Wang, Bo Jiang, Shuai Li

ICML 2024 (poster)

In-Context Principle Learning from Mistakes

Tianjun Zhang, Aman Madaan, Luyu Gao et al.

ICML 2024 (poster)

In-Context Unlearning: Language Models as Few-Shot Unlearners

Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

ICML 2024 (poster)