ICML "large language models" Papers

180 papers found • Page 2 of 4

Efficient Exploration for LLMs

Vikranth Dwaracherla, Seyed Mohammad Asghari, Botao Hao et al.

ICML 2024 • poster • arXiv:2402.00396

Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection

Chentao Cao, Zhun Zhong, Zhanke Zhou et al.

ICML 2024 • poster • arXiv:2406.00806

Evaluating Quantized Large Language Models

Shiyao Li, Xuefei Ning, Luning Wang et al.

ICML 2024 • poster • arXiv:2402.18158

Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks

Linyuan Gong, Sida Wang, Mostafa Elhoushi et al.

ICML 2024 • poster • arXiv:2403.04814

Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model

Fei Liu, Tong Xialiang, Mingxuan Yuan et al.

ICML 2024 • poster • arXiv:2401.02051

Evolving Subnetwork Training for Large Language Models

Hanqi Li, Lu Chen, Da Ma et al.

ICML 2024 • poster • arXiv:2406.06962

ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking

Wenshuo Li, Xinghao Chen, Han Shu et al.

ICML 2024 • poster • arXiv:2406.11257

Exploiting Code Symmetries for Learning Program Semantics

Kexin Pei, Weichen Li, Qirui Jin et al.

ICML 2024 • spotlight • arXiv:2308.03312

Extreme Compression of Large Language Models via Additive Quantization

Vage Egiazarian, Andrei Panferov, Denis Kuznedelev et al.

ICML 2024 • poster • arXiv:2401.06118

FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models

Jingwei Sun, Ziyue Xu, Hongxu Yin et al.

ICML 2024 • poster • arXiv:2310.01467

Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes

Zhen Qin, Daoyuan Chen, Bingchen Qian et al.

ICML 2024 • poster • arXiv:2312.06353

Flextron: Many-in-One Flexible Large Language Model

Ruisi Cai, Saurav Muralidharan, Greg Heinrich et al.

ICML 2024 • poster • arXiv:2406.10260

From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning

Wei Chen, Zhen Huang, Liang Xie et al.

ICML 2024 • poster • arXiv:2409.01658

Fundamental Limitations of Alignment in Large Language Models

Yotam Wolf, Noam Wies, Oshri Avnery et al.

ICML 2024 • poster • arXiv:2304.11082

GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting

Xiaoyu Zhou, Xingjian Ran, Yajiao Xiong et al.

ICML 2024 • poster • arXiv:2402.07207

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.

ICML 2024 • poster • arXiv:2403.03507

Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought

Zhen-Yu Zhang, Siwei Han, Huaxiu Yao et al.

ICML 2024 • poster • arXiv:2402.06918

GiLOT: Interpreting Generative Language Models via Optimal Transport

Xuhong Li, Jiamin Chen, Yekun Chai et al.

ICML 2024 • poster

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

ICML 2024 • poster • arXiv:2311.09606

GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding

Cunxiao Du, Jing Jiang, Xu Yuanchen et al.

ICML 2024 • poster • arXiv:2402.02082

GRATH: Gradual Self-Truthifying for Large Language Models

Weixin Chen, Dawn Song, Bo Li

ICML 2024 • poster • arXiv:2401.12292

Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation

Luca Beurer-Kellner, Marc Fischer, Martin Vechev

ICML 2024 • poster • arXiv:2403.06988

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

Mantas Mazeika, Long Phan, Xuwang Yin et al.

ICML 2024 • poster • arXiv:2402.04249

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions

Jingtan Wang, Xiaoqiang Lin, Rui Qiao et al.

ICML 2024 • poster • arXiv:2406.04606

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

Ryan Liu, Theodore R Sumers, Ishita Dasgupta et al.

ICML 2024 • poster • arXiv:2402.07282

Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks

Akshay Kumar Jagadish, Julian Coda-Forno, Mirko Thalmann et al.

ICML 2024 • poster • arXiv:2402.01821

Implicit meta-learning may lead language models to trust more reliable sources

Dmitrii Krasheninnikov, Egor Krasheninnikov, Bruno Mlodozeniec et al.

ICML 2024 • poster • arXiv:2310.15047

In-Context Learning Agents Are Asymmetric Belief Updaters

Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.

ICML 2024 • poster • arXiv:2402.03969

In-Context Principle Learning from Mistakes

Tianjun Zhang, Aman Madaan, Luyu Gao et al.

ICML 2024 • poster • arXiv:2402.05403

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Shiqi Chen, Miao Xiong, Junteng Liu et al.

ICML 2024 • poster • arXiv:2403.01548

In-Context Unlearning: Language Models as Few-Shot Unlearners

Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

ICML 2024 • poster

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

Sheng Liu, Haotian Ye, Lei Xing et al.

ICML 2024 • poster • arXiv:2311.06668

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining

Boxin Wang, Wei Ping, Lawrence McAfee et al.

ICML 2024 • poster • arXiv:2310.07713

InstructSpeech: Following Speech Editing Instructions via Large Language Models

Rongjie Huang, Ruofan Hu, Yongqi Wang et al.

ICML 2024 • poster

Integrated Hardware Architecture and Device Placement Search

Irene Wang, Jakub Tarnawski, Amar Phanishayee et al.

ICML 2024 • spotlight • arXiv:2407.13143

Interpreting and Improving Large Language Models in Arithmetic Calculation

Wei Zhang, Wan Chaoqun, Yonggang Zhang et al.

ICML 2024 • poster • arXiv:2409.01659

Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective

Fabian Falck, Ziyu Wang, Christopher Holmes

ICML 2024 • poster • arXiv:2406.00793

Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs

Lu Yin, Ajay Jaiswal, Shiwei Liu et al.

ICML 2024 • poster

Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game

Zelai Xu, Chao Yu, Fei Fang et al.

ICML 2024 • poster • arXiv:2310.18940

Language Generation with Strictly Proper Scoring Rules

Chenze Shao, Fandong Meng, Yijin Liu et al.

ICML 2024 • poster • arXiv:2405.18906

Language Models Represent Beliefs of Self and Others

Wentao Zhu, Zhining Zhang, Yizhou Wang

ICML 2024 • poster • arXiv:2402.18496

Large Language Models are Geographically Biased

Rohin Manvi, Samar Khanna, Marshall Burke et al.

ICML 2024 • oral • arXiv:2402.02680

Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning

Sungwon Han, Jinsung Yoon, Sercan Arik et al.

ICML 2024 • poster • arXiv:2404.09491

Larimar: Large Language Models with Episodic Memory Control

Payel Das, Subhajit Chaudhury, Elliot Nelson et al.

ICML 2024 • poster • arXiv:2403.11901

Learning and Forgetting Unsafe Examples in Large Language Models

Jiachen Zhao, Zhun Deng, David Madras et al.

ICML 2024 • oral • arXiv:2312.12736

Learning Reward for Robot Skills Using Large Language Models via Self-Alignment

Yuwei Zeng, Yao Mu, Lin Shao

ICML 2024 • poster • arXiv:2405.07162

LESS: Selecting Influential Data for Targeted Instruction Tuning

Mengzhou Xia, Sadhika Malladi, Suchin Gururangan et al.

ICML 2024 • poster • arXiv:2402.04333

Libra: Building Decoupled Vision System on Large Language Models

Yifan Xu, Xiaoshan Yang, Yaguang Song et al.

ICML 2024 • poster • arXiv:2405.10140

LLaGA: Large Language and Graph Assistant

Runjin Chen, Tong Zhao, Ajay Jaiswal et al.

ICML 2024 • poster • arXiv:2402.08170

LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery

Pingchuan Ma, Johnson Tsun-Hsuan Wang, Minghao Guo et al.

ICML 2024 • poster • arXiv:2405.09783