"large language models" Papers

986 papers found • Page 17 of 20

Flextron: Many-in-One Flexible Large Language Model

Ruisi Cai, Saurav Muralidharan, Greg Heinrich et al.

ICML 2024 • arXiv:2406.10260
34 citations

Fluctuation-Based Adaptive Structured Pruning for Large Language Models

Yongqi An, Xu Zhao, Tao Yu et al.

AAAI 2024 • paper • arXiv:2312.11983
106 citations

From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning

Wei Chen, Zhen Huang, Liang Xie et al.

ICML 2024 • arXiv:2409.01658
42 citations

Fundamental Limitations of Alignment in Large Language Models

Yotam Wolf, Noam Wies, Oshri Avnery et al.

ICML 2024 • arXiv:2304.11082
178 citations

GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting

Xiaoyu Zhou, Xingjian Ran, Yajiao Xiong et al.

ICML 2024 • arXiv:2402.07207
96 citations

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.

ICML 2024 • arXiv:2403.03507
371 citations

Generalizable Whole Slide Image Classification with Fine-Grained Visual-Semantic Interaction

Hao Li, Ying Chen, Yifei Chen et al.

CVPR 2024 • arXiv:2402.19326
35 citations

Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought

Zhen-Yu Zhang, Siwei Han, Huaxiu Yao et al.

ICML 2024 • arXiv:2402.06918
4 citations

GenSim: Generating Robotic Simulation Tasks via Large Language Models

Lirui Wang, Yiyang Ling, Zhecheng Yuan et al.

ICLR 2024 • spotlight • arXiv:2310.01361
121 citations

Get an A in Math: Progressive Rectification Prompting

Zhenyu Wu, Meng Jiang, Chao Shen

AAAI 2024 • paper • arXiv:2312.06867
15 citations

GiLOT: Interpreting Generative Language Models via Optimal Transport

Xuhong Li, Jiamin Chen, Yekun Chai et al.

ICML 2024

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks

Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg

ICML 2024 • arXiv:2311.09606
9 citations

GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding

Cunxiao Du, Jing Jiang, Xu Yuanchen et al.

ICML 2024 • arXiv:2402.02082
65 citations

Graph Neural Prompting with Large Language Models

Yijun Tian, Huan Song, Zichen Wang et al.

AAAI 2024 • paper • arXiv:2309.15427
77 citations

Graph of Thoughts: Solving Elaborate Problems with Large Language Models

Maciej Besta, Nils Blach, Ales Kubicek et al.

AAAI 2024 • paper • arXiv:2308.09687
1116 citations

GRATH: Gradual Self-Truthifying for Large Language Models

Weixin Chen, Dawn Song, Bo Li

ICML 2024 • arXiv:2401.12292
7 citations

Grounded Text-to-Image Synthesis with Attention Refocusing

Quynh Phung, Songwei Ge, Jia-Bin Huang

CVPR 2024 • arXiv:2306.05427
162 citations

Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation

Luca Beurer-Kellner, Marc Fischer, Martin Vechev

ICML 2024 • arXiv:2403.06988
82 citations

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

Mantas Mazeika, Long Phan, Xuwang Yin et al.

ICML 2024 • arXiv:2402.04249
802 citations

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions

Jingtan Wang, Xiaoqiang Lin, Rui Qiao et al.

ICML 2024 • arXiv:2406.04606
10 citations

History Matters: Temporal Knowledge Editing in Large Language Model

Xunjian Yin, Jin Jiang, Liming Yang et al.

AAAI 2024 • paper • arXiv:2312.05497
16 citations

Hot or Cold? Adaptive Temperature Sampling for Code Generation with Large Language Models

Yuqi Zhu, Jia Li, Ge Li et al.

AAAI 2024 • paper • arXiv:2309.02772
58 citations

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

Ryan Liu, Theodore R Sumers, Ishita Dasgupta et al.

ICML 2024 • arXiv:2402.07282
28 citations

How to Protect Copyright Data in Optimization of Large Language Models?

Timothy Chu, Zhao Song, Chiwun Yang

AAAI 2024 • paper • arXiv:2308.12247
40 citations

Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks

Akshay Kumar Jagadish, Julian Coda-Forno, Mirko Thalmann et al.

ICML 2024 • arXiv:2402.01821
6 citations

Image Captioning with Multi-Context Synthetic Data

Feipeng Ma, Y. Zhou, Fengyun Rao et al.

AAAI 2024 • paper • arXiv:2305.18072
18 citations

Implicit meta-learning may lead language models to trust more reliable sources

Dmitrii Krasheninnikov, Egor Krasheninnikov, Bruno Mlodozeniec et al.

ICML 2024 • arXiv:2310.15047
7 citations

Improved Zero-Shot Classification by Adapting VLMs with Text Descriptions

Oindrila Saha, Grant Van Horn, Subhransu Maji

CVPR 2024 • arXiv:2401.02460
65 citations

In-Context Learning Agents Are Asymmetric Belief Updaters

Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz et al.

ICML 2024 • arXiv:2402.03969
16 citations

In-Context Principle Learning from Mistakes

Tianjun Zhang, Aman Madaan, Luyu Gao et al.

ICML 2024 • arXiv:2402.05403
40 citations

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Shiqi Chen, Miao Xiong, Junteng Liu et al.

ICML 2024 • arXiv:2403.01548
43 citations

In-Context Unlearning: Language Models as Few-Shot Unlearners

Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

ICML 2024

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

Sheng Liu, Haotian Ye, Lei Xing et al.

ICML 2024 • arXiv:2311.06668
224 citations

Incorporating Geo-Diverse Knowledge into Prompting for Increased Geographical Robustness in Object Recognition

Kyle Buettner, Sina Malakouti, Xiang Li et al.

CVPR 2024 • arXiv:2401.01482
6 citations

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining

Boxin Wang, Wei Ping, Lawrence McAfee et al.

ICML 2024 • arXiv:2310.07713
70 citations

InstructSpeech: Following Speech Editing Instructions via Large Language Models

Rongjie Huang, Ruofan Hu, Yongqi Wang et al.

ICML 2024

Integrated Hardware Architecture and Device Placement Search

Irene Wang, Jakub Tarnawski, Amar Phanishayee et al.

ICML 2024 • spotlight • arXiv:2407.13143
3 citations

Interpreting and Improving Large Language Models in Arithmetic Calculation

Wei Zhang, Wan Chaoqun, Yonggang Zhang et al.

ICML 2024 • arXiv:2409.01659
42 citations

Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective

Fabian Falck, Ziyu Wang, Christopher Holmes

ICML 2024 • arXiv:2406.00793
42 citations

Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs

Lu Yin, Ajay Jaiswal, Shiwei Liu et al.

ICML 2024

KAM-CoT: Knowledge Augmented Multimodal Chain-of-Thoughts Reasoning

Debjyoti Mondal, Suraj Modi, Subhadarshi Panda et al.

AAAI 2024 • paper • arXiv:2401.12863
82 citations

Knowledge Graph Prompting for Multi-Document Question Answering

Yu Wang, Nedim Lipka, Ryan A. Rossi et al.

AAAI 2024 • paper • arXiv:2308.11730
240 citations

Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game

Zelai Xu, Chao Yu, Fei Fang et al.

ICML 2024 • arXiv:2310.18940
136 citations

Language Generation with Strictly Proper Scoring Rules

Chenze Shao, Fandong Meng, Yijin Liu et al.

ICML 2024 • arXiv:2405.18906
7 citations

Language Models Represent Beliefs of Self and Others

Wentao Zhu, Zhining Zhang, Yizhou Wang

ICML 2024 • arXiv:2402.18496
16 citations

Large Language Models Are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales

Taeyoon Kwon, Kai Ong, Dongjin Kang et al.

AAAI 2024 • paper • arXiv:2312.07399
62 citations

Large Language Models are Efficient Learners of Noise-Robust Speech Recognition

Yuchen Hu, Chen Chen, Chao-Han Huck Yang et al.

ICLR 2024 • spotlight • arXiv:2401.10446
36 citations

Large Language Models are Geographically Biased

Rohin Manvi, Samar Khanna, Marshall Burke et al.

ICML 2024 • oral • arXiv:2402.02680
93 citations

Large Language Models are Good Prompt Learners for Low-Shot Image Classification

Zhaoheng Zheng, Jingmin Wei, Xuefeng Hu et al.

CVPR 2024 • arXiv:2312.04076
23 citations

Large Language Models Are Neurosymbolic Reasoners

Meng Fang, Shilong Deng, Yudi Zhang et al.

AAAI 2024 • paper • arXiv:2401.09334
41 citations