Poster "in-context learning" Papers

155 papers found • Page 3 of 4

BAGEL: Bootstrapping Agents by Guiding Exploration with Language

Shikhar Murty, Christopher Manning, Peter Shaw et al.

ICML 2024 • arXiv:2403.08140 • 29 citations

Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu, Tao Yang, Yuwang Wang et al.

ICML 2024 • arXiv:2309.06054 • 5 citations

Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?

Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi et al.

ICML 2024 • arXiv:2410.08292 • 38 citations

Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks

Jong Ho Park, Jaden Park, Zheyang Xiong et al.

ICML 2024 • arXiv:2402.04248 • 107 citations

Compositional Text-to-Image Generation with Dense Blob Representations

Weili Nie, Sifei Liu, Morteza Mardani et al.

ICML 2024 • arXiv:2405.08246 • 37 citations

Context Diffusion: In-Context Aware Image Generation

Ivona Najdenkoska, Animesh Sinha, Abhimanyu Dubey et al.

ECCV 2024 • arXiv:2312.03584 • 17 citations

CoReS: Orchestrating the Dance of Reasoning and Segmentation

Xiaoyi Bao, Siyang Sun, Shuailei Ma et al.

ECCV 2024 • arXiv:2404.05673 • 18 citations

DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding

Jincen Jiang, Qianyu Zhou, Yuhang Li et al.

ECCV 2024 • arXiv:2407.08801 • 15 citations

Dolphins: Multimodal Language Model for Driving

Yingzi Ma, Yulong Cao, Jiachen Sun et al.

ECCV 2024 • arXiv:2312.00438 • 128 citations

DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning

Jing Xiong, Zixuan Li, Chuanyang Zheng et al.

ICLR 2024 • arXiv:2310.02954

Dual Operating Modes of In-Context Learning

Ziqian Lin, Kangwook Lee

ICML 2024 • arXiv:2402.18819 • 46 citations

Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems

David T. Hoffmann, Simon Schrodi, Jelena Bratulić et al.

ICML 2024 • arXiv:2310.12956 • 11 citations

Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers

Brian Chen, Tianyang Hu, Hui Jin et al.

ICML 2024 • arXiv:2406.02847 • 6 citations

Feedback Loops With Language Models Drive In-Context Reward Hacking

Alexander Pan, Erik Jones, Meena Jagadeesan et al.

ICML 2024 • arXiv:2402.06627 • 60 citations

Finding Visual Task Vectors

Alberto Hojel, Yutong Bai, Trevor Darrell et al.

ECCV 2024 • arXiv:2404.05729 • 14 citations

Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations

Yongshuo Zong, Tingyang Yu, Ruchika Chavhan et al.

ICML 2024 • arXiv:2310.01651 • 27 citations

From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems

Jianliang He, Siyu Chen, Fengzhuo Zhang et al.

ICML 2024 • arXiv:2405.19883 • 11 citations

Generalization to New Sequential Decision Making Tasks with In-Context Learning

Sharath Chandra Raparthy, Eric Hambro, Robert Kirk et al.

ICML 2024 • arXiv:2312.03801 • 37 citations

Generative Multimodal Models are In-Context Learners

Quan Sun, Yufeng Cui, Xiaosong Zhang et al.

CVPR 2024 • arXiv:2312.13286 • 438 citations

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

ICML 2024 • arXiv:2311.09606 • 9 citations

GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering

Yifeng Zhang, Ming Jiang, Qi Zhao

ECCV 2024 • 2 citations

How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?

Hongkang Li, Meng Wang, Songtao Lu et al.

ICML 2024 • arXiv:2402.15607 • 34 citations

How do Transformers Perform In-Context Autoregressive Learning?

Michael Sander, Raja Giryes, Taiji Suzuki et al.

ICML 2024

How Transformers Learn Causal Structure with Gradient Descent

Eshaan Nichani, Alex Damian, Jason Lee

ICML 2024 • arXiv:2402.14735 • 102 citations

In-context Convergence of Transformers

Yu Huang, Yuan Cheng, Yingbin Liang

ICML 2024 • arXiv:2310.05249 • 106 citations

In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought

Sili Huang, Jifeng Hu, Hechang Chen et al.

ICML 2024 • arXiv:2405.20692 • 19 citations

In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization

Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik et al.

ICML 2024 • arXiv:2404.16795 • 27 citations

In-Context Language Learning: Architectures and Algorithms

Ekin Akyürek, Bailin Wang, Yoon Kim et al.

ICML 2024 • arXiv:2401.12973 • 83 citations

In-Context Learning Agents Are Asymmetric Belief Updaters

Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.

ICML 2024 • arXiv:2402.03969 • 16 citations

In-context Learning on Function Classes Unveiled for Transformers

Zhijie Wang, Bo Jiang, Shuai Li

ICML 2024

In-Context Principle Learning from Mistakes

Tianjun Zhang, Aman Madaan, Luyu Gao et al.

ICML 2024 • arXiv:2402.05403 • 40 citations

In-Context Unlearning: Language Models as Few-Shot Unlearners

Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

ICML 2024

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

Sheng Liu, Haotian Ye, Lei Xing et al.

ICML 2024 • arXiv:2311.06668 • 224 citations

InstructGIE: Towards Generalizable Image Editing

Zichong Meng, Changdi Yang, Jun Liu et al.

ECCV 2024 • arXiv:2403.05018 • 13 citations

Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective

Fabian Falck, Ziyu Wang, Christopher Holmes

ICML 2024 • arXiv:2406.00793 • 42 citations

Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models

Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman et al.

ICML 2024

Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning

Sungwon Han, Jinsung Yoon, Sercan Arik et al.

ICML 2024 • arXiv:2404.09491 • 66 citations

Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments

Antoine Dedieu, Wolfgang Lehrach, Guangyao Zhou et al.

ICML 2024 • arXiv:2401.05946 • 6 citations

Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning

Jiachen Li, Qiaozi Gao, Michael Johnston et al.

ICML 2024 • arXiv:2310.09676 • 17 citations

Meta-Reinforcement Learning Robust to Distributional Shift Via Performing Lifelong In-Context Learning

TengYe Xu, Zihao Li, Qinyuan Ren

ICML 2024

One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts

Ruochen Wang, Sohyun An, Minhao Cheng et al.

ICML 2024 • arXiv:2407.00256 • 16 citations

PALM: Predicting Actions through Language Models

Sanghwan Kim, Daoji Huang, Yongqin Xian et al.

ECCV 2024 • arXiv:2311.17944 • 23 citations

Position: Do pretrained Transformers Learn In-Context by Gradient Descent?

Lingfeng Shen, Aayush Mishra, Daniel Khashabi

ICML 2024

Position: Video as the New Language for Real-World Decision Making

Sherry Yang, Jacob C Walker, Jack Parker-Holder et al.

ICML 2024

Reason for Future, Act for Now: A Principled Architecture for Autonomous LLM Agents

Zhihan Liu, Hao Hu, Shenao Zhang et al.

ICML 2024

Rethinking and Improving Visual Prompt Selection for In-Context Learning Segmentation Framework

Wei Suo, Lanqing Lai, Mengyang Sun et al.

ECCV 2024

RoboMP²: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models

Qi Lv, Hao Li, Xiang Deng et al.

ICML 2024 • arXiv:2404.04929 • 4 citations

Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills

Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi et al.

ICML 2024 • arXiv:2402.03244 • 12 citations

Subgoal-based Demonstration Learning for Formal Theorem Proving

Xueliang Zhao, Wenda Li, Lingpeng Kong

ICML 2024 • arXiv:2305.16366 • 38 citations

Trainable Transformer in Transformer

Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia et al.

ICML 2024 • arXiv:2307.01189 • 14 citations