"large language models" Papers

986 papers found • Page 19 of 20

Position: A Roadmap to Pluralistic Alignment

Taylor Sorensen, Jared Moore, Jillian Fisher et al.

ICML 2024

Position: Building Guardrails for Large Language Models Requires Systematic Design

Yi Dong, Ronghui Mu, Gaojie Jin et al.

ICML 2024

Position: Foundation Agents as the Paradigm Shift for Decision Making

Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao et al.

ICML 2024 • arXiv:2405.17009 • 8 citations

Position: Key Claims in LLM Research Have a Long Tail of Footnotes

Anna Rogers, Sasha Luccioni

ICML 2024 • arXiv:2308.07120 • 24 citations

Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI

Francisco Eiras, Aleksandar Petrov, Bertie Vidgen et al.

ICML 2024

Position: On the Possibilities of AI-Generated Text Detection

Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu et al.

ICML 2024

Position: Stop Making Unscientific AGI Performance Claims

Patrick Altmeyer, Andrew Demetriou, Antony Bartlett et al.

ICML 2024 • arXiv:2402.03962 • 9 citations

Position: What Can Large Language Models Tell Us about Time Series Analysis

Ming Jin, Yi-Fan Zhang, Wei Chen et al.

ICML 2024 • arXiv:2402.02713 • 56 citations

Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data

Fahim Tajwar, Anikait Singh, Archit Sharma et al.

ICML 2024 • arXiv:2404.14367 • 179 citations

Preference Ranking Optimization for Human Alignment

Feifan Song, Bowen Yu, Minghao Li et al.

AAAI 2024 • arXiv:2306.17492 • 337 citations

PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine

Chenrui Zhang, Lin Liu, Chuyuan Wang et al.

AAAI 2024 • arXiv:2308.12033 • 43 citations

Premise Order Matters in Reasoning with Large Language Models

Xinyun Chen, Ryan Chi, Xuezhi Wang et al.

ICML 2024 • arXiv:2402.08939 • 52 citations

Privacy-Preserving Instructions for Aligning Large Language Models

Da Yu, Peter Kairouz, Sewoong Oh et al.

ICML 2024 • arXiv:2402.13659 • 36 citations

Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution

Chrisantha Fernando, Dylan Banarse, Henryk Michalewski et al.

ICML 2024 • arXiv:2309.16797 • 364 citations

Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

Wentao Bao, Lichang Chen, Heng Huang et al.

ECCV 2024 • arXiv:2305.14428 • 35 citations

Prompt Sketching for Large Language Models

Luca Beurer-Kellner, Mark Müller, Marc Fischer et al.

ICML 2024 • arXiv:2311.04954 • 6 citations

Prompt to Transfer: Sim-to-Real Transfer for Traffic Signal Control with Prompt Learning

Longchao Da, Minquan Gao, Hua Wei et al.

AAAI 2024 • arXiv:2308.14284 • 52 citations

Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos

Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang et al.

ECCV 2024 • arXiv:2409.20557 • 10 citations

Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models

Peijie Dong, Lujun Li, Zhenheng Tang et al.

ICML 2024 • arXiv:2406.02924 • 54 citations

Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning

Jing Xu, Jingzhao Zhang

ICML 2024 • arXiv:2405.02596 • 13 citations

Repeat After Me: Transformers are Better than State Space Models at Copying

Samy Jelassi, David Brandfonbrener, Sham Kakade et al.

ICML 2024 • arXiv:2402.01032 • 162 citations

Rethinking Generative Large Language Model Evaluation for Semantic Comprehension

Fangyun Wei, Xi Chen, Lin Luo

ICML 2024 • arXiv:2403.07872 • 13 citations

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Yihua Zhang, Pingzhi Li, Junyuan Hong et al.

ICML 2024 • arXiv:2402.11592 • 107 citations

RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting

Lei Shu, Liangchen Luo, Jayakumar Hoskere et al.

AAAI 2024 • arXiv:2305.15685 • 78 citations

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • arXiv:2402.02347 • 35 citations

RLVF: Learning from Verbal Feedback without Overgeneralization

Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.

ICML 2024 • arXiv:2402.10893 • 14 citations

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.

ICML 2024 • arXiv:2401.04679 • 48 citations

Scaling Laws for Fine-Grained Mixture of Experts

Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.

ICML 2024 • arXiv:2402.07871 • 120 citations

Scaling Up Video Summarization Pretraining with Large Language Models

Dawit Argaw Argaw, Seunghyun Yoon, Fabian Caba Heilbron et al.

CVPR 2024 • arXiv:2404.03398 • 24 citations

SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

Xiaoxuan Wang, Ziniu Hu, Pan Lu et al.

ICML 2024 • arXiv:2307.10635 • 181 citations

SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research

Liangtai Sun, Yang Han, Zihan Zhao et al.

AAAI 2024 • arXiv:2308.13149 • 132 citations

SECap: Speech Emotion Captioning with Large Language Model

Yaoxun Xu, Hangting Chen, Jianwei Yu et al.

AAAI 2024 • arXiv:2312.10381 • 58 citations

SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter

Ying-Ying Chang, Wei-Yao Wang, Wen-Chih Peng

AAAI 2024 • arXiv:2312.11553 • 11 citations

Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation

Xianghe Pang, Shuo Tang, Rui Ye et al.

ICML 2024 (spotlight) • arXiv:2402.05699 • 48 citations

SelfIE: Self-Interpretation of Large Language Model Embeddings

Haozhe Chen, Carl Vondrick, Chengzhi Mao

ICML 2024 • arXiv:2403.10949 • 51 citations

SeqGPT: An Out-of-the-Box Large Language Model for Open Domain Sequence Understanding

Tianyu Yu, Chengyue Jiang, Chao Lou et al.

AAAI 2024 • arXiv:2308.10529 • 28 citations

Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs

Andries Smit, Nathan Grinsztajn, Paul Duckworth et al.

ICML 2024 • arXiv:2311.17371 • 64 citations

Soft Prompt Recovers Compressed LLMs, Transferably

Zhaozhuo Xu, Zirui Liu, Beidi Chen et al.

ICML 2024

SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models

Xudong Lu, Aojun Zhou, Yuhui Xu et al.

ICML 2024 • arXiv:2405.16057 • 14 citations

SqueezeLLM: Dense-and-Sparse Quantization

Sehoon Kim, Coleman Hooper, Amir Gholaminejad et al.

ICML 2024 • arXiv:2306.07629 • 272 citations

StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation

Weike Fang, Zhejian Zhou, Junzhou He et al.

ICML 2024 (spotlight) • arXiv:2406.04568 • 4 citations

STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models

Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung et al.

AAAI 2024 • arXiv:2305.15090 • 21 citations

Structured Chemistry Reasoning with Large Language Models

Siru Ouyang, Zhuosheng Zhang, Bing Yan et al.

ICML 2024 • arXiv:2311.09656 • 27 citations

Subgoal-based Demonstration Learning for Formal Theorem Proving

Xueliang Zhao, Wenda Li, Lingpeng Kong

ICML 2024 • arXiv:2305.16366 • 38 citations

Tandem Transformers for Inference Efficient LLMs

Aishwarya P S, Pranav Nair, Yashas Samaga et al.

ICML 2024 • arXiv:2402.08644 • 10 citations

TaskLAMA: Probing the Complex Task Understanding of Language Models

Quan Yuan, Mehran Kazemi, Xin Xu et al.

AAAI 2024 • arXiv:2308.15299 • 20 citations

Task Planning for Object Rearrangement in Multi-Room Environments

Karan Mirakhor, Sourav Ghosh, Dipanjan Das et al.

AAAI 2024 • arXiv:2406.00451 • 2 citations

Text2Analysis: A Benchmark of Table Question Answering with Advanced Data Analysis and Unclear Queries

Xinyi He, Mengyu Zhou, Xinrun Xu et al.

AAAI 2024 • arXiv:2312.13671 • 44 citations

TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering

Jingye Chen, Yupan Huang, Tengchao Lv et al.

ECCV 2024 • arXiv:2311.16465 • 106 citations

Text-to-Image Generation for Abstract Concepts

Jiayi Liao, Xu Chen, Qiang Fu et al.

AAAI 2024 • arXiv:2309.14623 • 24 citations