2025 Poster Papers on "large language models"

346 papers found • Page 1 of 7

G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning

Xiaojun Guo, Ang Li, Yifei Wang et al.

NeurIPS 2025 poster
4 citations

3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds

Hengshuo Chu, Xiang Deng, Qi Lv et al.

ICLR 2025 poster • arXiv:2502.20041
15 citations

A³E: Towards Compositional Model Editing

Hongming Piao, Hao Wang, Dapeng Wu et al.

NeurIPS 2025 poster

ACC-Collab: An Actor-Critic Approach to Multi-Agent LLM Collaboration

Andrew Estornell, Jean-Francois Ton, Yuanshun Yao et al.

ICLR 2025 poster • arXiv:2411.00053
13 citations

Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Expansion

Qijun Luo, Yifei Shen, Liangzu Peng et al.

NeurIPS 2025 poster

Accelerating RL for LLM Reasoning with Optimal Advantage Regression

Kianté Brantley, Mingyu Chen, Zhaolin Gao et al.

NeurIPS 2025 poster • arXiv:2505.20686
12 citations

A Closer Look at Machine Unlearning for Large Language Models

Xiaojian Yuan, Tianyu Pang, Chao Du et al.

ICLR 2025 poster • arXiv:2410.08109
31 citations

ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints

Divij Handa, Pavel Dolin, Shrinidhi Kumbhar et al.

ICLR 2025 poster • arXiv:2406.04046
7 citations

AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking

Soyoung Yoon, Gyuwan Kim, Gyu-Hwung Cho et al.

NeurIPS 2025 poster • arXiv:2505.18512
1 citation

Ada-K Routing: Boosting the Efficiency of MoE-based LLMs

Zijia Zhao, Longteng Guo, Jie Cheng et al.

ICLR 2025 poster • arXiv:2410.10456
8 citations

AdaLRS: Loss-Guided Adaptive Learning Rate Search for Efficient Foundation Model Pretraining

Hongyuan Dong, Dingkang Yang, Xiao Liang et al.

NeurIPS 2025 poster • arXiv:2506.13274
3 citations

Adaptive Distraction: Probing LLM Contextual Robustness with Automated Tree Search

Yanbo Wang, Zixiang Xu, Yue Huang et al.

NeurIPS 2025 poster • arXiv:2502.01609
3 citations

AdmTree: Compressing Lengthy Context with Adaptive Semantic Trees

Yangning Li, Shaoshen Chen, Yinghui Li et al.

NeurIPS 2025 poster • arXiv:2512.04550
4 citations

Advancing LLM Reasoning Generalists with Preference Trees

Lifan Yuan, Ganqu Cui, Hanbin Wang et al.

ICLR 2025 poster • arXiv:2404.02078
179 citations

Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization

Mingzhe Du, Anh Tuan Luu, Yue Liu et al.

NeurIPS 2025 poster • arXiv:2505.23387
6 citations

AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks

Fali Wang, Hui Liu, Zhenwei Dai et al.

NeurIPS 2025 poster • arXiv:2508.00890
9 citations

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

Ximing Lu, Melanie Sclar, Skyler Hallinan et al.

ICLR 2025 poster • arXiv:2410.04265
32 citations

AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in Corporate Statements

Adriana-Eufrosina Bora, Pierre-Luc St-Charles, Mirko Bronzi et al.

ICLR 2025 poster • arXiv:2502.07022
2 citations

Alignment of Large Language Models with Constrained Learning

Botong Zhang, Shuo Li, Ignacio Hounie et al.

NeurIPS 2025 poster • arXiv:2505.19387
2 citations

Alleviating Hallucinations in Large Language Models through Multi-Model Contrastive Decoding and Dynamic Hallucination Detection

Chenyu Zhu, Yefeng Liu, Hao Zhang et al.

NeurIPS 2025 poster

AlphaDecay: Module-wise Weight Decay for Heavy-Tailed Balancing in LLMs

Di He, Songjun Tu, Ajay Jaiswal et al.

NeurIPS 2025 poster • arXiv:2506.14562
1 citation

A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules

Kairong Luo, Haodong Wen, Shengding Hu et al.

ICLR 2025 poster • arXiv:2503.12811
13 citations

Analyzing the Power of Chain of Thought through Memorization Capabilities

Lijia Yu, Xiao-Shan Gao, Lijun Zhang

NeurIPS 2025 poster • arXiv:2511.01190

AnoLLM: Large Language Models for Tabular Anomaly Detection

Che-Ping Tsai, Ganyu Teng, Phillip Wallis et al.

ICLR 2025 poster
7 citations

API Pack: A Massive Multi-Programming Language Dataset for API Call Generation

Gavin (Zhen) Guo, Adriana Meza Soria, Wei Sun et al.

ICLR 2025 poster • arXiv:2402.09615
4 citations

Approximately Aligned Decoding

Daniel Melcer, Sujan Kumar Gonugondla, Pramuditha Perera et al.

NeurIPS 2025 poster • arXiv:2410.01103
2 citations

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models

Yan Scholten, Stephan Günnemann, Leo Schwinn

ICLR 2025 poster • arXiv:2410.03523
15 citations

AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Wei Fu, Jiaxuan Gao, Xujie Shen et al.

NeurIPS 2025 poster • arXiv:2505.24298
95 citations

A Simple yet Effective Layout Token in Large Language Models for Document Understanding

Zhaoqing Zhu, Chuwei Luo, Zirui Shao et al.

CVPR 2025 poster • arXiv:2503.18434
7 citations

Ask, and it shall be given: On the Turing completeness of prompting

Ruizhong Qiu, Zhe Xu, Wenxuan Bao et al.

ICLR 2025 poster • arXiv:2411.01992
5 citations

A Statistical Approach for Controlled Training Data Detection

Zirui Hu, Yingjie Wang, Zheng Zhang et al.

ICLR 2025 poster
2 citations

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NeurIPS 2025 poster • arXiv:2502.05567
13 citations

A Training-Free Sub-quadratic Cost Transformer Model Serving Framework with Hierarchically Pruned Attention

Heejun Lee, Geon Park, Youngwan Lee et al.

ICLR 2025 poster • arXiv:2406.09827
8 citations

AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution

Fengyuan Liu, Nikhil Kandpal, Colin Raffel

ICLR 2025 poster • arXiv:2411.15102
12 citations

Automatic Auxiliary Task Selection and Adaptive Weighting Boost Molecular Property Prediction

Zhiqiang Zhong, Davide Mottin

NeurIPS 2025 poster

AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration

Andy Zhou, Kevin Wu, Francesco Pinto et al.

NeurIPS 2025 poster • arXiv:2503.15754
15 citations

Bayesian Concept Bottleneck Models with LLM Priors

Jean Feng, Avni Kothari, Lucas Zier et al.

NeurIPS 2025 poster • arXiv:2410.15555
10 citations

Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?

Yifan Feng, Chengwu Yang, Xingliang Hou et al.

ICLR 2025 poster • arXiv:2410.10083
10 citations

Beyond Model Collapse: Scaling Up with Synthesized Data Requires Verification

Yunzhen Feng, Elvis Dohmatob, Pu Yang et al.

ICLR 2025 poster • arXiv:2406.07515

Beyond Next Token Prediction: Patch-Level Training for Large Language Models

Chenze Shao, Fandong Meng, Jie Zhou

ICLR 2025 poster • arXiv:2407.12665
2 citations

Bilevel ZOFO: Efficient LLM Fine-Tuning and Meta-Training

Reza Shirkavand, Peiran Yu, Qi He et al.

NeurIPS 2025 poster • arXiv:2502.03604
1 citation

Block Verification Accelerates Speculative Decoding

Ziteng Sun, Uri Mendlovic, Yaniv Leviathan et al.

ICLR 2025 poster • arXiv:2403.10444
18 citations

Boosting Skeleton-based Zero-Shot Action Recognition with Training-Free Test-Time Adaptation

Jingmin Zhu, Anqi Zhu, Hossein Rahmani et al.

NeurIPS 2025 poster • arXiv:2512.11458

C3PO: Optimized Large Language Model Cascades with Probabilistic Cost Constraints for Reasoning

Antonios Valkanas, Soumyasundar Pal, Pavel Rumiantsev et al.

NeurIPS 2025 poster • arXiv:2511.07396

Calibrating Translation Decoding with Quality Estimation on LLMs

Di Wu, Yibin Lei, Christof Monz

NeurIPS 2025 poster • arXiv:2504.19044

CAMEx: Curvature-aware Merging of Experts

Dung Viet Nguyen, Minh Nguyen, Luc Nguyen et al.

ICLR 2025 poster • arXiv:2502.18821
6 citations

Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

Hanlei Zhang, Zhuohang Li, Hua Xu et al.

NeurIPS 2025 poster • arXiv:2504.16427
2 citations

Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Qijiong Liu, Jieming Zhu, Lu Fan et al.

NeurIPS 2025 poster • arXiv:2503.05493
4 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 poster • arXiv:2509.17552
1 citation

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?

Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.

ICLR 2025 poster • arXiv:2403.06833
45 citations