Poster "large language models" Papers

538 papers found • Page 10 of 11

LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery

Pingchuan Ma, Johnson Tsun-Hsuan Wang, Minghao Guo et al.

ICML 2024 poster

LLM-Empowered State Representation for Reinforcement Learning

Boyuan Wang, Yun Qu, Yuhang Jiang et al.

ICML 2024 poster · arXiv:2407.13237

LoCoCo: Dropping In Convolutions for Long Context Compression

Ruisi Cai, Yuandong Tian, Zhangyang “Atlas” Wang et al.

ICML 2024 poster

LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens

Yiran Ding, Li Lyna Zhang, Chengruidong Zhang et al.

ICML 2024 poster · arXiv:2402.13753

LongVLM: Efficient Long Video Understanding via Large Language Models

Yuetian Weng, Mingfei Han, Haoyu He et al.

ECCV 2024 poster · arXiv:2404.03384 · 128 citations

LoRA+: Efficient Low Rank Adaptation of Large Models

Soufiane Hayou, Nikhil Ghosh, Bin Yu

ICML 2024 poster · arXiv:2402.12354

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Guangyan Li, Yongqiang Tang, Wensheng Zhang

ICML 2024 poster · arXiv:2404.09695

LoRA Training in the NTK Regime has No Spurious Local Minima

Uijeong Jang, Jason Lee, Ernest Ryu

ICML 2024 poster · arXiv:2402.11867

LQER: Low-Rank Quantization Error Reconstruction for LLMs

Cheng Zhang, Jianyi Cheng, George Constantinides et al.

ICML 2024 poster · arXiv:2402.02446

Magicoder: Empowering Code Generation with OSS-Instruct

Yuxiang Wei, Zhe Wang, Jiawei Liu et al.

ICML 2024 poster

MathScale: Scaling Instruction Tuning for Mathematical Reasoning

Zhengyang Tang, Xingxing Zhang, Benyou Wang et al.

ICML 2024 poster · arXiv:2403.02884

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

Weixin Liang, Zachary Izzo, Yaohui Zhang et al.

ICML 2024 poster · arXiv:2403.07183

Multicalibration for Confidence Scoring in LLMs

Gianluca Detommaso, Martin A Bertran, Riccardo Fogliato et al.

ICML 2024 poster · arXiv:2404.04689

NavGPT-2: Unleashing Navigational Reasoning Capability for Large Vision-Language Models

Gengze Zhou, Yicong Hong, Zun Wang et al.

ECCV 2024 poster · arXiv:2407.12366 · 77 citations

Neighboring Perturbations of Knowledge Editing on Large Language Models

Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang et al.

ICML 2024 poster

NExT: Teaching Large Language Models to Reason about Code Execution

Ansong Ni, Miltiadis Allamanis, Arman Cohan et al.

ICML 2024 poster

Non-Vacuous Generalization Bounds for Large Language Models

Sanae Lotfi, Marc Finzi, Yilun Kuang et al.

ICML 2024 poster · arXiv:2312.17173

Online Speculative Decoding

Xiaoxuan Liu, Lanxiang Hu, Peter Bailis et al.

ICML 2024 poster · arXiv:2310.07177

On Prompt-Driven Safeguarding for Large Language Models

Chujie Zheng, Fan Yin, Hao Zhou et al.

ICML 2024 poster

OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models

Fuzhao Xue, Zian Zheng, Yao Fu et al.

ICML 2024 poster

Optimizing Watermarks for Large Language Models

Bram Wouters

ICML 2024 poster · arXiv:2312.17295

OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models

Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell

ICML 2024 poster · arXiv:2402.10172

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity

Lu Yin, You Wu, Zhenyu Zhang et al.

ICML 2024 poster

PALM: Predicting Actions through Language Models

Sanghwan Kim, Daoji Huang, Yongqin Xian et al.

ECCV 2024 poster · arXiv:2311.17944 · 22 citations

PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition

Ziyang Zhang, Qizhen Zhang, Jakob Foerster

ICML 2024 poster

Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation

Jinpeng Liu, Wenxun Dai, Chunyu Wang et al.

ECCV 2024 poster · 8 citations

PointLLM: Empowering Large Language Models to Understand Point Clouds

Runsen Xu, Xiaolong Wang, Tai Wang et al.

ECCV 2024 poster · arXiv:2308.16911 · 289 citations

Position: A Call for Embodied AI

Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl

ICML 2024 poster

Position: A Roadmap to Pluralistic Alignment

Taylor Sorensen, Jared Moore, Jillian Fisher et al.

ICML 2024 poster

Position: Building Guardrails for Large Language Models Requires Systematic Design

Yi Dong, Ronghui Mu, Gaojie Jin et al.

ICML 2024 poster

Position: Foundation Agents as the Paradigm Shift for Decision Making

Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao et al.

ICML 2024 poster

Position: Key Claims in LLM Research Have a Long Tail of Footnotes

Anna Rogers, Sasha Luccioni

ICML 2024 poster · arXiv:2308.07120

Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI

Francisco Eiras, Aleksandar Petrov, Bertie Vidgen et al.

ICML 2024 poster

Position: On the Possibilities of AI-Generated Text Detection

Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu et al.

ICML 2024 poster

Position: Stop Making Unscientific AGI Performance Claims

Patrick Altmeyer, Andrew Demetriou, Antony Bartlett et al.

ICML 2024 poster

Position: What Can Large Language Models Tell Us about Time Series Analysis

Ming Jin, Yi-Fan Zhang, Wei Chen et al.

ICML 2024 poster · arXiv:2402.02713

Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data

Fahim Tajwar, Anikait Singh, Archit Sharma et al.

ICML 2024 poster

Premise Order Matters in Reasoning with Large Language Models

Xinyun Chen, Ryan Chi, Xuezhi Wang et al.

ICML 2024 poster · arXiv:2402.08939

Privacy-Preserving Instructions for Aligning Large Language Models

Da Yu, Peter Kairouz, Sewoong Oh et al.

ICML 2024 poster · arXiv:2402.13659

Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution

Chrisantha Fernando, Dylan Banarse, Henryk Michalewski et al.

ICML 2024 poster

Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

Wentao Bao, Lichang Chen, Heng Huang et al.

ECCV 2024 poster · arXiv:2305.14428 · 34 citations

Prompt Sketching for Large Language Models

Luca Beurer-Kellner, Mark Müller, Marc Fischer et al.

ICML 2024 poster · arXiv:2311.04954

Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos

Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang et al.

ECCV 2024 poster · arXiv:2409.20557 · 10 citations

Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models

Peijie Dong, Lujun Li, Zhenheng Tang et al.

ICML 2024 poster · arXiv:2406.02924

Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning

Jing Xu, Jingzhao Zhang

ICML 2024 poster · arXiv:2405.02596

Repeat After Me: Transformers are Better than State Space Models at Copying

Samy Jelassi, David Brandfonbrener, Sham Kakade et al.

ICML 2024 poster · arXiv:2402.01032

Rethinking Generative Large Language Model Evaluation for Semantic Comprehension

Fangyun Wei, Xi Chen, Lin Luo

ICML 2024 poster · arXiv:2403.07872

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Yihua Zhang, Pingzhi Li, Junyuan Hong et al.

ICML 2024 poster · arXiv:2402.11592

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 poster · arXiv:2402.02347

RLVF: Learning from Verbal Feedback without Overgeneralization

Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.

ICML 2024 poster · arXiv:2402.10893