Spotlight "in-context learning" Papers
8 papers found
Axial Neural Networks for Dimension-Free Foundation Models
Hyunsu Kim, Jonggeon Park, Joan Bruna et al.
NeurIPS 2025 · Spotlight · arXiv:2510.13665
Do-PFN: In-Context Learning for Causal Effect Estimation
Jake Robertson, Arik Reuter, Siyuan Guo et al.
NeurIPS 2025 · Spotlight · arXiv:2506.06039
14 citations
Optimization Inspired Few-Shot Adaptation for Large Language Models
Boyan Gao, Xin Wang, Yibo Yang et al.
NeurIPS 2025 · Spotlight · arXiv:2505.19107
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Tim Genewein, Kevin Li, Jordi Grau-Moya et al.
NeurIPS 2025 · Spotlight · arXiv:2505.17010
5 citations
Vision-centric Token Compression in Large Language Model
Ling Xing, Alex Jinpeng Wang, Rui Yan et al.
NeurIPS 2025 · Spotlight · arXiv:2502.00791
11 citations
What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains
Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.
NeurIPS 2025 · Spotlight · arXiv:2508.07208
1 citation
Position: Understanding LLMs Requires More Than Statistical Generalization
Patrik Reizinger, Szilvia Ujváry, Anna Mészáros et al.
ICML 2024 · Spotlight · arXiv:2405.01964
22 citations
What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation
Aaditya Singh, Ted Moskovitz, Felix Hill et al.
ICML 2024 · Spotlight · arXiv:2404.07129
64 citations