Poster "large language models" Papers

740 papers found • Page 14 of 15

LongVLM: Efficient Long Video Understanding via Large Language Models

Yuetian Weng, Mingfei Han, Haoyu He et al.

ECCV 2024 • arXiv:2404.03384 • 131 citations

LoRA+: Efficient Low Rank Adaptation of Large Models

Soufiane Hayou, Nikhil Ghosh, Bin Yu

ICML 2024 • arXiv:2402.12354 • 341 citations

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Guangyan Li, Yongqiang Tang, Wensheng Zhang

ICML 2024 • arXiv:2404.09695 • 8 citations

LoRA Training in the NTK Regime has No Spurious Local Minima

Uijeong Jang, Jason Lee, Ernest Ryu

ICML 2024 • arXiv:2402.11867 • 35 citations

LQER: Low-Rank Quantization Error Reconstruction for LLMs

Cheng Zhang, Jianyi Cheng, George Constantinides et al.

ICML 2024 • arXiv:2402.02446 • 27 citations

Magicoder: Empowering Code Generation with OSS-Instruct

Yuxiang Wei, Zhe Wang, Jiawei Liu et al.

ICML 2024 • arXiv:2312.02120 • 208 citations

Making Large Language Models Better Planners with Reasoning-Decision Alignment

Zhijian Huang, Tao Tang, Shaoxiang Chen et al.

ECCV 2024 • arXiv:2408.13890 • 40 citations

MathScale: Scaling Instruction Tuning for Mathematical Reasoning

Zhengyang Tang, Xingxing Zhang, Benyou Wang et al.

ICML 2024 • arXiv:2403.02884 • 146 citations

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

Weixin Liang, Zachary Izzo, Yaohui Zhang et al.

ICML 2024 • arXiv:2403.07183 • 183 citations

MovieChat: From Dense Token to Sparse Memory for Long Video Understanding

Enxin Song, Wenhao Chai, Guanhong Wang et al.

CVPR 2024 • arXiv:2307.16449 • 471 citations

Multicalibration for Confidence Scoring in LLMs

Gianluca Detommaso, Martin A Bertran, Riccardo Fogliato et al.

ICML 2024 • arXiv:2404.04689 • 37 citations

Naturally Supervised 3D Visual Grounding with Language-Regularized Concept Learners

Chun Feng, Joy Hsu, Weiyu Liu et al.

CVPR 2024 • arXiv:2404.19696 • 9 citations

NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models

Gengze Zhou, Yicong Hong, Zun Wang et al.

ECCV 2024 • arXiv:2407.12366 • 78 citations

Neighboring Perturbations of Knowledge Editing on Large Language Models

Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang et al.

ICML 2024 • arXiv:2401.17623 • 6 citations

NExT: Teaching Large Language Models to Reason about Code Execution

Ansong Ni, Miltiadis Allamanis, Arman Cohan et al.

ICML 2024 • arXiv:2404.14662 • 65 citations

Non-Vacuous Generalization Bounds for Large Language Models

Sanae Lotfi, Marc Finzi, Yilun Kuang et al.

ICML 2024 • arXiv:2312.17173 • 41 citations

Online Speculative Decoding

Xiaoxuan Liu, Lanxiang Hu, Peter Bailis et al.

ICML 2024 • arXiv:2310.07177 • 92 citations

On Prompt-Driven Safeguarding for Large Language Models

Chujie Zheng, Fan Yin, Hao Zhou et al.

ICML 2024 • arXiv:2401.18018 • 106 citations

OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models

Fuzhao Xue, Zian Zheng, Yao Fu et al.

ICML 2024 • arXiv:2402.01739 • 160 citations

Optimizing Watermarks for Large Language Models

Bram Wouters

ICML 2024 • arXiv:2312.17295 • 18 citations

OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models

Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell

ICML 2024 • arXiv:2402.10172 • 62 citations

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity

Lu Yin, You Wu, Zhenyu Zhang et al.

ICML 2024 • arXiv:2310.05175 • 152 citations

PALM: Predicting Actions through Language Models

Sanghwan Kim, Daoji Huang, Yongqin Xian et al.

ECCV 2024 • arXiv:2311.17944 • 23 citations

PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition

Ziyang Zhang, Qizhen Zhang, Jakob Foerster

ICML 2024 • arXiv:2405.07932 • 34 citations

Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation

Jinpeng Liu, Wenxun Dai, Chunyu Wang et al.

ECCV 2024 • 8 citations

PointLLM: Empowering Large Language Models to Understand Point Clouds

Runsen Xu, Xiaolong Wang, Tai Wang et al.

ECCV 2024 • arXiv:2308.16911 • 295 citations

Position: A Call for Embodied AI

Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl

ICML 2024

Position: A Roadmap to Pluralistic Alignment

Taylor Sorensen, Jared Moore, Jillian Fisher et al.

ICML 2024

Position: Building Guardrails for Large Language Models Requires Systematic Design

Yi Dong, Ronghui Mu, Gaojie Jin et al.

ICML 2024

Position: Foundation Agents as the Paradigm Shift for Decision Making

Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao et al.

ICML 2024 • arXiv:2405.17009 • 8 citations

Position: Key Claims in LLM Research Have a Long Tail of Footnotes

Anna Rogers, Sasha Luccioni

ICML 2024 • arXiv:2308.07120 • 24 citations

Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI

Francisco Eiras, Aleksandar Petrov, Bertie Vidgen et al.

ICML 2024

Position: On the Possibilities of AI-Generated Text Detection

Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu et al.

ICML 2024

Position: Stop Making Unscientific AGI Performance Claims

Patrick Altmeyer, Andrew Demetriou, Antony Bartlett et al.

ICML 2024 • arXiv:2402.03962 • 9 citations

Position: What Can Large Language Models Tell Us about Time Series Analysis

Ming Jin, Yi-Fan Zhang, Wei Chen et al.

ICML 2024 • arXiv:2402.02713 • 56 citations

Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data

Fahim Tajwar, Anikait Singh, Archit Sharma et al.

ICML 2024 • arXiv:2404.14367 • 179 citations

Premise Order Matters in Reasoning with Large Language Models

Xinyun Chen, Ryan Chi, Xuezhi Wang et al.

ICML 2024 • arXiv:2402.08939 • 52 citations

Privacy-Preserving Instructions for Aligning Large Language Models

Da Yu, Peter Kairouz, Sewoong Oh et al.

ICML 2024 • arXiv:2402.13659 • 36 citations

Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution

Chrisantha Fernando, Dylan Banarse, Henryk Michalewski et al.

ICML 2024 • arXiv:2309.16797 • 364 citations

Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

Wentao Bao, Lichang Chen, Heng Huang et al.

ECCV 2024 • arXiv:2305.14428 • 35 citations

Prompt Sketching for Large Language Models

Luca Beurer-Kellner, Mark Müller, Marc Fischer et al.

ICML 2024 • arXiv:2311.04954 • 6 citations

Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos

Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang et al.

ECCV 2024 • arXiv:2409.20557 • 10 citations

Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models

Peijie Dong, Lujun Li, Zhenheng Tang et al.

ICML 2024 • arXiv:2406.02924 • 54 citations

Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning

Jing Xu, Jingzhao Zhang

ICML 2024 • arXiv:2405.02596 • 13 citations

Repeat After Me: Transformers are Better than State Space Models at Copying

Samy Jelassi, David Brandfonbrener, Sham Kakade et al.

ICML 2024 • arXiv:2402.01032 • 162 citations

Rethinking Generative Large Language Model Evaluation for Semantic Comprehension

Fangyun Wei, Xi Chen, Lin Luo

ICML 2024 • arXiv:2403.07872 • 13 citations

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Yihua Zhang, Pingzhi Li, Junyuan Hong et al.

ICML 2024 • arXiv:2402.11592 • 107 citations

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • arXiv:2402.02347 • 35 citations

RLVF: Learning from Verbal Feedback without Overgeneralization

Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.

ICML 2024 • arXiv:2402.10893 • 14 citations

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.

ICML 2024 • arXiv:2401.04679 • 48 citations