ICML Poster "large language models" Papers

165 papers found • Page 1 of 4

Accelerated Speculative Sampling Based on Tree Monte Carlo

Zhengmian Hu, Heng Huang

ICML 2024 poster

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin, Xudong Ma, Xingyu Zheng et al.

ICML 2024 poster · arXiv:2402.05445

A Closer Look at the Limitations of Instruction Tuning

Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar et al.

ICML 2024 poster · arXiv:2402.05119

Active Preference Learning for Large Language Models

William Muldrew, Peter Hayes, Mingtian Zhang et al.

ICML 2024 poster · arXiv:2402.08114

Adaptive Text Watermark for Large Language Models

Yepeng Liu, Yuheng Bu

ICML 2024 poster · arXiv:2401.13927

Agent Instructs Large Language Models to be General Zero-Shot Reasoners

Nicholas Crispino, Kyle Montgomery, Fankun Zeng et al.

ICML 2024 poster · arXiv:2310.03710

Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar et al.

ICML 2024 poster · arXiv:2308.10379

AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training

Ziyu Wan, Xidong Feng, Muning Wen et al.

ICML 2024 poster · arXiv:2309.17179

A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?

Agustinus Kristiadi, Felix Strieth-Kalthoff, Marta Skreta et al.

ICML 2024 poster · arXiv:2402.05015

Assessing Large Language Models on Climate Information

Jannis Bulian, Mike Schäfer, Afra Amini et al.

ICML 2024 poster · arXiv:2310.02932

A Tale of Tails: Model Collapse as a Change of Scaling Laws

Elvis Dohmatob, Yunzhen Feng, Pu Yang et al.

ICML 2024 poster · arXiv:2402.07043

Autoformalizing Euclidean Geometry

Logan Murphy, Kaiyu Yang, Jialiang Sun et al.

ICML 2024 poster · arXiv:2405.17216

AutoOS: Make Your OS More Powerful by Exploiting Large Language Models

Huilai Chen, Yuanbo Wen, Limin Cheng et al.

ICML 2024 poster

BAT: Learning to Reason about Spatial Sounds with Large Language Models

Zhisheng Zheng, Puyuan Peng, Ziyang Ma et al.

ICML 2024 poster · arXiv:2402.01591

BetterV: Controlled Verilog Generation with Discriminative Guidance

Zehua Pei, Huiling Zhen, Mingxuan Yuan et al.

ICML 2024 poster · arXiv:2402.03375

BiE: Bi-Exponent Block Floating-Point for Large Language Models Quantization

Lancheng Zou, Wenqian Zhao, Shuo Yin et al.

ICML 2024 poster

BiLLM: Pushing the Limit of Post-Training Quantization for LLMs

Wei Huang, Yangdong Liu, Haotong Qin et al.

ICML 2024 poster · arXiv:2402.04291

Can AI Assistants Know What They Don't Know?

Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu et al.

ICML 2024 poster · arXiv:2401.13275

Case-Based or Rule-Based: How Do Transformers Do the Math?

Yi Hu, Xiaojuan Tang, Haotong Yang et al.

ICML 2024 poster · arXiv:2402.17709

Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension

Fan Yin, Jayanth Srinivasa, Kai-Wei Chang

ICML 2024 poster · arXiv:2402.18048

CHEMREASONER: Heuristic Search over a Large Language Model’s Knowledge Space using Quantum-Chemical Feedback

Henry W. Sprueill, Carl Edwards, Khushbu Agarwal et al.

ICML 2024 poster · arXiv:2402.10980

Coactive Learning for Large Language Models using Implicit User Feedback

Aaron D. Tucker, Kianté Brantley, Adam Cahall et al.

ICML 2024 poster

Compressing Large Language Models by Joint Sparsification and Quantization

Jinyang Guo, Jianyu Wu, Zining Wang et al.

ICML 2024 poster

ContPhy: Continuum Physical Concept Learning and Reasoning from Videos

Zhicheng Zheng, Xin Yan, Zhenfang Chen et al.

ICML 2024 poster · arXiv:2402.06119

Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

Haoran Xu, Amr Sharaf, Yunmo Chen et al.

ICML 2024 poster · arXiv:2401.08417

COPAL: Continual Pruning in Large Language Generative Models

Srikanth Malla, Joon Hee Choi, Chiho Choi

ICML 2024 poster · arXiv:2405.02347

Curated LLM: Synergy of LLMs and Data Curation for Tabular Augmentation in Low-Data Regimes

Nabeel Seedat, Nicolas Huynh, Boris van Breugel et al.

ICML 2024 poster · arXiv:2312.12112

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

Junyuan Hong, Jinhao Duan, Chenhui Zhang et al.

ICML 2024 poster · arXiv:2403.15447

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling

Bairu Hou, Yujian Liu, Kaizhi Qian et al.

ICML 2024 poster · arXiv:2311.08718

Deep Fusion: Efficient Network Training via Pre-trained Initializations

Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.

ICML 2024 poster · arXiv:2306.11903

DFA-RAG: Conversational Semantic Router for Large Language Model with Definite Finite Automaton

Yiyou Sun, Junjie Hu, Wei Cheng et al.

ICML 2024 poster · arXiv:2402.04411

DiJiang: Efficient Large Language Models through Compact Kernelization

Hanting Chen, Zhicheng Liu, Xutao Wang et al.

ICML 2024 poster · arXiv:2403.19928

DiNADO: Norm-Disentangled Neurally-Decomposed Oracles for Controlling Language Models

Sidi Lu, Wenbo Zhao, Chenyang Tao et al.

ICML 2024 poster · arXiv:2306.11825

DistiLLM: Towards Streamlined Distillation for Large Language Models

Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.

ICML 2024 poster · arXiv:2402.03898

Distinguishing the Knowable from the Unknowable with Language Models

Gustaf Ahdritz, Tian Qin, Nikhil Vyas et al.

ICML 2024 poster · arXiv:2402.03563

DOGE: Domain Reweighting with Generalization Estimation

Simin Fan, Matteo Pagliardini, Martin Jaggi

ICML 2024 poster · arXiv:2310.15393

Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates

Ashish Hooda, Mihai Christodorescu, Miltiadis Allamanis et al.

ICML 2024 poster · arXiv:2402.05980

Do Large Language Models Perform the Way People Expect? Measuring the Human Generalization Function

Keyon Vafa, Ashesh Rambachan, Sendhil Mullainathan

ICML 2024 poster · arXiv:2406.01382

DPZero: Private Fine-Tuning of Language Models without Backpropagation

Liang Zhang, Bingcong Li, Kiran Thekumparampil et al.

ICML 2024 poster · arXiv:2310.09639

DS-Agent: Automated Data Science by Empowering Large Language Models with Case-Based Reasoning

Siyuan Guo, Cheng Deng, Ying Wen et al.

ICML 2024 poster · arXiv:2402.17453

Dual Operating Modes of In-Context Learning

Ziqian Lin, Kangwook Lee

ICML 2024 poster · arXiv:2402.18819

Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference

Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski et al.

ICML 2024 poster · arXiv:2403.09636

EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty

Yuhui Li, Fangyun Wei, Chao Zhang et al.

ICML 2024 poster · arXiv:2401.15077

eCeLLM: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data

Bo Peng, Xinyi Ling, Ziru Chen et al.

ICML 2024 poster · arXiv:2402.08831

EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism

Yanxi Chen, Xuchen Pan, Yaliang Li et al.

ICML 2024 poster · arXiv:2312.04916

Efficient Exploration for LLMs

Vikranth Dwaracherla, Seyed Mohammad Asghari, Botao Hao et al.

ICML 2024 poster · arXiv:2402.00396

Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection

Chentao Cao, Zhun Zhong, Zhanke Zhou et al.

ICML 2024 poster · arXiv:2406.00806

Evaluating Quantized Large Language Models

Shiyao Li, Xuefei Ning, Luning Wang et al.

ICML 2024 poster · arXiv:2402.18158

Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks

Linyuan Gong, Sida Wang, Mostafa Elhoushi et al.

ICML 2024 poster · arXiv:2403.04814

Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model

Fei Liu, Tong Xialiang, Mingxuan Yuan et al.

ICML 2024 poster · arXiv:2401.02051