Poster "large language models" Papers

740 papers found • Page 6 of 15

Language Guided Skill Discovery

Seungeun Rho, Laura Smith, Tianyu Li et al.

ICLR 2025 · arXiv:2406.06615 · 15 citations

Language Models Are Implicitly Continuous

Samuele Marro, Davide Evangelista, X. Huang et al.

ICLR 2025 · arXiv:2504.03933 · 3 citations

LaRes: Evolutionary Reinforcement Learning with LLM-based Adaptive Reward Search

Pengyi Li, Hongyao Tang, Jinbin Qiao et al.

NeurIPS 2025

Large Language Bayes

Justin Domke

NeurIPS 2025 · arXiv:2504.14025 · 2 citations

Large Language Models Assume People are More Rational than We Really are

Ryan Liu, Jiayi Geng, Joshua Peterson et al.

ICLR 2025 · arXiv:2406.17055 · 43 citations

Large Language Models can Become Strong Self-Detoxifiers

Ching-Yun Ko, Pin-Yu Chen, Payel Das et al.

ICLR 2025 · 3 citations

Large Language Models Meet Symbolic Provers for Logical Reasoning Evaluation

Chengwen Qi, Ren Ma, Bowen Li et al.

ICLR 2025 · arXiv:2502.06563 · 25 citations

Large Language Models Think Too Fast To Explore Effectively

Lan Pan, Hanbo Xie, Robert Wilson

NeurIPS 2025 · arXiv:2501.18009 · 6 citations

Large (Vision) Language Models are Unsupervised In-Context Learners

Artyom Gadetsky, Andrei Atanov, Yulun Jiang et al.

ICLR 2025 · arXiv:2504.02349 · 3 citations

LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs

Ran Li, Hao Wang, Chengzhi Mao

NeurIPS 2025 · arXiv:2505.10838 · 4 citations

LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models

Junru Song, Yang Yang, Huan Xiao et al.

ICLR 2025 · 7 citations

Law of the Weakest Link: Cross Capabilities of Large Language Models

Ming Zhong, Aston Zhang, Xuewei Wang et al.

ICLR 2025 · arXiv:2409.19951 · 12 citations

Layer as Puzzle Pieces: Compressing Large Language Models through Layer Concatenation

Fei Wang, Li Shen, Liang Ding et al.

NeurIPS 2025 · arXiv:2510.15304 · 2 citations

LayerIF: Estimating Layer Quality for Large Language Models using Influence Functions

Hadi Askari, Shivanshu Gupta, Fei Wang et al.

NeurIPS 2025 · arXiv:2505.23811 · 6 citations

Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models

Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj et al.

ICLR 2025 · arXiv:2410.01335 · 15 citations

Layerwise Recurrent Router for Mixture-of-Experts

Zihan Qiu, Zeyu Huang, Shuang Cheng et al.

ICLR 2025 · arXiv:2408.06793 · 8 citations

Learning Diverse Attacks on Large Language Models for Robust Red-Teaming and Safety Tuning

Seanie Lee, Minsu Kim, Lynn Cherif et al.

ICLR 2025 · arXiv:2405.18540 · 47 citations

Learning from Neighbors: Category Extrapolation for Long-Tail Learning

Shizhen Zhao, Xin Wen, Jiahui Liu et al.

CVPR 2025 · arXiv:2410.15980 · 5 citations

Learning “Partner-Aware” Collaborators in Multi-Party Collaboration

Abhijnan Nath, Nikhil Krishnaswamy

NeurIPS 2025 · arXiv:2510.22462

Learning to Rank for In-Context Example Retrieval

Yuwen Ji, Luodan Zhang, Ambyer Han et al.

NeurIPS 2025

Learning to Watermark: A Selective Watermarking Framework for Large Language Models via Multi-Objective Optimization

Chenrui Wang, Junyi Shu, Billy Chiu et al.

NeurIPS 2025 · arXiv:2510.15976

LIFEBENCH: Evaluating Length Instruction Following in Large Language Models

Wei Zhang, Zhenhong Zhou, Kun Wang et al.

NeurIPS 2025 · arXiv:2505.16234 · 2 citations

Linearization Explains Fine-Tuning in Large Language Models

Zahra Rahimi Afzal, Tara Esmaeilbeig, Mojtaba Soltanalian et al.

NeurIPS 2025

LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

Naman Jain, Han, Alex Gu et al.

ICLR 2025 · arXiv:2403.07974 · 1108 citations

LLaMA-Omni: Seamless Speech Interaction with Large Language Models

Qingkai Fang, Shoutao Guo, Yan Zhou et al.

ICLR 2025 · arXiv:2409.06666 · 135 citations

LLM-Driven Treatment Effect Estimation Under Inference Time Text Confounding

Yuchen Ma, Dennis Frauen, Jonas Schweisthal et al.

NeurIPS 2025 · arXiv:2507.02843 · 3 citations

LLM-enhanced Action-aware Multi-modal Prompt Tuning for Image-Text Matching

Meng Tian, Shuo Yang, Xinxiao Wu

ICCV 2025 · arXiv:2506.23502 · 1 citation

LLM Generated Persona is a Promise with a Catch

Leon Li, Haozhe Chen, Hongseok Namkoong et al.

NeurIPS 2025 · arXiv:2503.16527 · 49 citations

LLM-PySC2: Starcraft II learning environment for Large Language Models

Zongyuan Li, Yanan Ni, Runnan Qi et al.

NeurIPS 2025 · arXiv:2411.05348 · 9 citations

LLMs Can Plan Only If We Tell Them

Bilgehan Sel, Ruoxi Jia, Ming Jin

ICLR 2025 · arXiv:2501.13545 · 16 citations

LLM-SR: Scientific Equation Discovery via Programming with Large Language Models

Parshin Shojaee, Kazem Meidani, Shashank Gupta et al.

ICLR 2025 · arXiv:2404.18400 · 57 citations

L-MTP: Leap Multi-Token Prediction Beyond Adjacent Context for Large Language Models

Xiaohao Liu, Xiaobo Xia, Weixiang Zhao et al.

NeurIPS 2025 · arXiv:2505.17505 · 5 citations

Logical Consistency of Large Language Models in Fact-Checking

Bishwamittra Ghosh, Sarah Hasan, Naheed Anjum Arafat et al.

ICLR 2025 · arXiv:2412.16100 · 15 citations

Logic.py: Bridging the Gap between LLMs and Constraint Solvers

Pascal Kesseli, Peter O'Hearn, Ricardo Cabral

NeurIPS 2025 · arXiv:2502.15776 · 5 citations

LongMagpie: A Self-synthesis Method for Generating Large-scale Long-context Instructions

Chaochen Gao, Xing W, Zijia Lin et al.

NeurIPS 2025 · arXiv:2505.17134 · 1 citation

LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization

Guanzheng Chen, Xin Li, Michael Qizhe Shieh et al.

ICLR 2025 · arXiv:2502.13922 · 15 citations

LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization

Jui-Nan Yen, Si Si, Zhao Meng et al.

ICLR 2025 · arXiv:2410.20625 · 16 citations

LoRA Learns Less and Forgets Less

Jonathan Frankle, Jose Javier Gonzalez Ortiz, Cody Blakeney et al.

ICLR 2025 · arXiv:2405.09673 · 252 citations

Lost in Translation, Found in Context: Sign Language Translation with Contextual Cues

Youngjoon Jang, Haran Raajesh, Liliane Momeni et al.

CVPR 2025 · arXiv:2501.09754 · 15 citations

Machine Unlearning Fails to Remove Data Poisoning Attacks

Martin Pawelczyk, Jimmy Di, Yiwei Lu et al.

ICLR 2025 · arXiv:2406.17216 · 29 citations

Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation

Weibin Liao, Tianlong Wang, Yinghao Zhu et al.

NeurIPS 2025 · arXiv:2508.08730 · 1 citation

Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing

Zhangchen Xu, Fengqing Jiang, Luyao Niu et al.

ICLR 2025 · arXiv:2406.08464 · 276 citations

MallowsPO: Fine-Tune Your LLM with Preference Dispersions

Haoxian Chen, Hanyang Zhao, Henry Lam et al.

ICLR 2025 · arXiv:2405.14953 · 15 citations

Masked Gated Linear Unit

Yukito Tajima, Nakamasa Inoue, Yusuke Sekikawa et al.

NeurIPS 2025 · arXiv:2506.23225

Measuring what Matters: Construct Validity in Large Language Model Benchmarks

Andrew M. Bean, Ryan Othniel Kearns, Angelika Romanou et al.

NeurIPS 2025 · arXiv:2511.04703 · 9 citations

MeCeFO: Enhancing LLM Training Robustness via Fault-Tolerant Optimization

Rizhen Hu, Yutong He, Ran Yan et al.

NeurIPS 2025 · arXiv:2510.16415

Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models

Jiaqi Cao, Jiarui Wang, Rubin Wei et al.

NeurIPS 2025 · arXiv:2508.09874 · 3 citations

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 · arXiv:2409.16167 · 35 citations

MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis

Jun-Yan He, Zhi-Qi Cheng, Chenyang Li et al.

ICLR 2025 · arXiv:2406.19859 · 5 citations

Mimir: Improving Video Diffusion Models for Precise Text Understanding

Shuai Tan, Biao Gong, Yutong Feng et al.

CVPR 2025 · arXiv:2412.03085 · 16 citations