"in-context learning" Papers
76 papers • Page 1 of 2
Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation
Jianyuan Guo, Peike Li, Trevor Cohn
Density estimation with LLMs: a geometric investigation of in-context learning trajectories
Toni Liu, Nicolas Boulle, Raphaël Sarfati et al.
Efficient Cross-Episode Meta-RL
Gresa Shala, André Biedenkapp, Pierre Krack et al.
ELICIT: LLM Augmentation Via External In-context Capability
Futing Wang, Jianhao (Elliott) Yan, Yue Zhang et al.
Implicit In-context Learning
Zhuowei Li, Zihao Xu, Ligong Han et al.
Improving Large Language Model Planning with Action Sequence Similarity
Xinran Zhao, Hanie Sedghi, Bernd Bohnet et al.
Inference Scaling for Long-Context Retrieval Augmented Generation
Zhenrui Yue, Honglei Zhuang, Aijun Bai et al.
Knowledge Starts with Practice: Knowledge-Aware Exercise Generative Recommendation with Adaptive Multi-Agent Cooperation
Yangtao Zhou, Hua Chu, Chen et al.
Learning to Rank for In-Context Example Retrieval
Yuwen Ji, Luodan Zhang, Ambyer Han et al.
Neuroverse3D: Developing In-Context Learning Universal Model for Neuroimaging in 3D
Jiesi Hu, Hanyang Peng, Yanwu Yang et al.
On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery
Renpu Liu, Ruida Zhou, Cong Shen et al.
Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning
Baiyuan Chen, Shinji Ito, Masaaki Imaizumi
Self-Generated In-Context Examples Improve LLM Agents for Sequential Decision-Making Tasks
Vishnu Sarukkai, Zhiqiang Xie, Kayvon Fatahalian
Theoretical Insights into In-context Learning with Unlabeled Data
Yingcong Li, Xiangyu Chang, Muti Kara et al.
Transformers are almost optimal metalearners for linear classification
Roey Magen, Gal Vardi
Unlabeled Data Can Provably Enhance In-Context Learning of Transformers
Renpu Liu, Jing Yang
Vision-centric Token Compression in Large Language Model
Ling Xing, Alex Jinpeng Wang, Rui Yan et al.
What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains
Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.
Why In-Context Learning Models are Good Few-Shot Learners?
Shiguang Wu, Yaqing Wang, Quanming Yao
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar et al.
An Information-Theoretic Analysis of In-Context Learning
Hong Jun Jeon, Jason Lee, Qi Lei et al.
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
Zhifeng Kong, Arushi Goel, Rohan Badlani et al.
BAGEL: Bootstrapping Agents by Guiding Exploration with Language
Shikhar Murty, Christopher Manning, Peter Shaw et al.
Breaking through the learning plateaus of in-context learning in Transformer
Jingwen Fu, Tao Yang, Yuwang Wang et al.
Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?
Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi et al.
Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks
Jong Ho Park, Jaden Park, Zheyang Xiong et al.
Code-Style In-Context Learning for Knowledge-Based Question Answering
Zhijie Nie, Richong Zhang, Zhongyuan Wang et al.
Compositional Text-to-Image Generation with Dense Blob Representations
Weili Nie, Sifei Liu, Morteza Mardani et al.
Customizing Language Model Responses with Contrastive In-Context Learning
Xiang Gao, Kamalika Das
DG-PIC: Domain Generalized Point-In-Context Learning for Point Cloud Understanding
Jincen Jiang, Qianyu Zhou, Yuhang Li et al.
Dual Operating Modes of In-Context Learning
Ziqian Lin, Kangwook Lee
Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems
David T. Hoffmann, Simon Schrodi, Jelena Bratulić et al.
Exact Conversion of In-Context Learning to Model Weights in Linearized-Attention Transformers
Brian Chen, Tianyang Hu, Hui Jin et al.
Feedback Loops With Language Models Drive In-Context Reward Hacking
Alexander Pan, Erik Jones, Meena Jagadeesan et al.
FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction
Zhonghang Li, Lianghao Xia, Yong Xu et al.
Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations
Yongshuo Zong, Tingyang Yu, Ruchika Chavhan et al.
From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems
Jianliang He, Siyu Chen, Fengzhuo Zhang et al.
Generalization to New Sequential Decision Making Tasks with In-Context Learning
Sharath Chandra Raparthy, Eric Hambro, Robert Kirk et al.
GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks
Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg
How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?
Hongkang Li, Meng Wang, Songtao Lu et al.
How do Transformers Perform In-Context Autoregressive Learning?
Michael Sander, Raja Giryes, Taiji Suzuki et al.
How Transformers Learn Causal Structure with Gradient Descent
Eshaan Nichani, Alex Damian, Jason Lee
In-context Convergence of Transformers
Yu Huang, Yuan Cheng, Yingbin Liang
In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought
Sili Huang, Jifeng Hu, Hechang Chen et al.
In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization
Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik et al.
In-Context Language Learning: Architectures and Algorithms
Ekin Akyürek, Bailin Wang, Yoon Kim et al.
In-Context Learning Agents Are Asymmetric Belief Updaters
Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.
In-context Learning on Function Classes Unveiled for Transformers
Zhijie Wang, Bo Jiang, Shuai Li
In-Context Principle Learning from Mistakes
Tianjun Zhang, Aman Madaan, Luyu Gao et al.
In-Context Unlearning: Language Models as Few-Shot Unlearners
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju