2024 "large language models" Papers
238 papers found • Page 4 of 5
On Prompt-Driven Safeguarding for Large Language Models
Chujie Zheng, Fan Yin, Hao Zhou et al.
OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
Fuzhao Xue, Zian Zheng, Yao Fu et al.
Optimizing Watermarks for Large Language Models
Bram Wouters
OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models
Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
Lu Yin, You Wu, Zhenyu Zhang et al.
OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models
Changhun Lee, Jungyu Jin, Taesu Kim et al.
PALM: Predicting Actions through Language Models
Sanghwan Kim, Daoji Huang, Yongqin Xian et al.
PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition
Ziyang Zhang, Qizhen Zhang, Jakob Foerster
PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning
Hyeong Kyu Choi, Sharon Li
Position: A Call for Embodied AI
Giuseppe Paolo, Jonas Gonzalez-Billandon, Balázs Kégl
Position: A Roadmap to Pluralistic Alignment
Taylor Sorensen, Jared Moore, Jillian Fisher et al.
Position: Building Guardrails for Large Language Models Requires Systematic Design
Yi Dong, Ronghui Mu, Gaojie Jin et al.
Position: Foundation Agents as the Paradigm Shift for Decision Making
Xiaoqian Liu, Xingzhou Lou, Jianbin Jiao et al.
Position: Key Claims in LLM Research Have a Long Tail of Footnotes
Anna Rogers, Sasha Luccioni
Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI
Francisco Eiras, Aleksandar Petrov, Bertie Vidgen et al.
Position: On the Possibilities of AI-Generated Text Detection
Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu et al.
Position: Stop Making Unscientific AGI Performance Claims
Patrick Altmeyer, Andrew Demetriou, Antony Bartlett et al.
Position: What Can Large Language Models Tell Us about Time Series Analysis
Ming Jin, Yi-Fan Zhang, Wei Chen et al.
Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data
Fahim Tajwar, Anikait Singh, Archit Sharma et al.
Preference Ranking Optimization for Human Alignment
Feifan Song, Bowen Yu, Minghao Li et al.
PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine
Chenrui Zhang, Lin Liu, Chuyuan Wang et al.
Premise Order Matters in Reasoning with Large Language Models
Xinyun Chen, Ryan Chi, Xuezhi Wang et al.
Privacy-Preserving Instructions for Aligning Large Language Models
Da Yu, Peter Kairouz, Sewoong Oh et al.
Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski et al.
Prompt Sketching for Large Language Models
Luca Beurer-Kellner, Mark Müller, Marc Fischer et al.
Prompt to Transfer: Sim-to-Real Transfer for Traffic Signal Control with Prompt Learning
Longchao Da, Minquan Gao, Hua Wei et al.
Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos
Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang et al.
Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models
Peijie Dong, Lujun Li, Zhenheng Tang et al.
Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning
Jing Xu, Jingzhao Zhang
Repeat After Me: Transformers are Better than State Space Models at Copying
Samy Jelassi, David Brandfonbrener, Sham Kakade et al.
Rethinking Generative Large Language Model Evaluation for Semantic Comprehension
Fangyun Wei, Xi Chen, Lin Luo
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Yihua Zhang, Pingzhi Li, Junyuan Hong et al.
RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting
Lei Shu, Liangchen Luo, Jayakumar Hoskere et al.
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
Fangzhao Zhang, Mert Pilanci
RLVF: Learning from Verbal Feedback without Overgeneralization
Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation
Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.
Scaling Laws for Fine-Grained Mixture of Experts
Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
Xiaoxuan Wang, Ziniu Hu, Pan Lu et al.
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Liangtai Sun, Yang Han, Zihan Zhao et al.
SECap: Speech Emotion Captioning with Large Language Model
Yaoxun Xu, Hangting Chen, Jianwei Yu et al.
SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter
Ying-Ying Chang, Wei-Yao Wang, Wen-Chih Peng
Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation
Xianghe Pang, Shuo Tang, Rui Ye et al.
SelfIE: Self-Interpretation of Large Language Model Embeddings
Haozhe Chen, Carl Vondrick, Chengzhi Mao
SeqGPT: An Out-of-the-Box Large Language Model for Open Domain Sequence Understanding
Tianyu Yu, Chengyue Jiang, Chao Lou et al.
Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs
Andries Smit, Nathan Grinsztajn, Paul Duckworth et al.
Soft Prompt Recovers Compressed LLMs, Transferably
Zhaozhuo Xu, Zirui Liu, Beidi Chen et al.
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models
Xudong Lu, Aojun Zhou, Yuhui Xu et al.
SqueezeLLM: Dense-and-Sparse Quantization
Sehoon Kim, Coleman Hooper, Amir Gholaminejad et al.
StackSight: Unveiling WebAssembly through Large Language Models and Neurosymbolic Chain-of-Thought Decompilation
Weike Fang, Zhejian Zhou, Junzhou He et al.
STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models
Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung et al.