NEURIPS Poster Papers: "large language models"

240 papers found • Page 1 of 5

G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning

Xiaojun Guo, Ang Li, Yifei Wang et al.

NEURIPS 2025 poster · 4 citations

A³E: Towards Compositional Model Editing

Hongming Piao, Hao Wang, Dapeng Wu et al.

NEURIPS 2025 poster

Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Expansion

Qijun Luo, Yifei Shen, Liangzu Peng et al.

NEURIPS 2025 poster

Accelerating RL for LLM Reasoning with Optimal Advantage Regression

Kianté Brantley, Mingyu Chen, Zhaolin Gao et al.

NEURIPS 2025 poster · arXiv:2505.20686 · 12 citations

Activation-Guided Consensus Merging for Large Language Models

Yuxuan Yao, Shuqi Liu, Zehua Liu et al.

NEURIPS 2025 poster · arXiv:2505.14009 · 1 citation

AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking

Soyoung Yoon, Gyuwan Kim, Gyu-Hwung Cho et al.

NEURIPS 2025 poster · arXiv:2505.18512 · 1 citation

AdaLRS: Loss-Guided Adaptive Learning Rate Search for Efficient Foundation Model Pretraining

Hongyuan Dong, Dingkang Yang, Xiao Liang et al.

NEURIPS 2025 poster · arXiv:2506.13274 · 3 citations

Adaptive Distraction: Probing LLM Contextual Robustness with Automated Tree Search

Yanbo Wang, Zixiang Xu, Yue Huang et al.

NEURIPS 2025 poster · arXiv:2502.01609 · 3 citations

Adaptive Kernel Design for Bayesian Optimization Is a Piece of CAKE with LLMs

Richard Suwandi, Feng Yin, Juntao Wang et al.

NEURIPS 2025 poster · arXiv:2509.17998 · 2 citations

AdmTree: Compressing Lengthy Context with Adaptive Semantic Trees

Yangning Li, Shaoshen Chen, Yinghui Li et al.

NEURIPS 2025 poster · arXiv:2512.04550 · 4 citations

Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization

Mingzhe Du, Anh Tuan Luu, Yue Liu et al.

NEURIPS 2025 poster · arXiv:2505.23387 · 6 citations

AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks

Fali Wang, Hui Liu, Zhenwei Dai et al.

NEURIPS 2025 poster · arXiv:2508.00890 · 9 citations

Alignment of Large Language Models with Constrained Learning

Botong Zhang, Shuo Li, Ignacio Hounie et al.

NEURIPS 2025 poster · arXiv:2505.19387 · 2 citations

Alleviating Hallucinations in Large Language Models through Multi-Model Contrastive Decoding and Dynamic Hallucination Detection

Chenyu Zhu, Yefeng Liu, Hao Zhang et al.

NEURIPS 2025 poster

AlphaDecay: Module-wise Weight Decay for Heavy-Tailed Balancing in LLMs

Di He, Songjun Tu, Ajay Jaiswal et al.

NEURIPS 2025 poster · arXiv:2506.14562 · 1 citation

Analyzing the Power of Chain of Thought through Memorization Capabilities

Lijia Yu, Xiao-Shan Gao, Lijun Zhang

NEURIPS 2025 poster · arXiv:2511.01190

Approximately Aligned Decoding

Daniel Melcer, Sujan Kumar Gonugondla, Pramuditha Perera et al.

NEURIPS 2025 poster · arXiv:2410.01103 · 2 citations

AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Wei Fu, Jiaxuan Gao, Xujie Shen et al.

NEURIPS 2025 poster · arXiv:2505.24298 · 95 citations

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NEURIPS 2025 poster · arXiv:2502.05567 · 13 citations

AutoData: A Multi-Agent System for Open Web Data Collection

Tianyi Ma, Yiyue Qian, Zheyuan Zhang et al.

NEURIPS 2025 poster · arXiv:2505.15859 · 5 citations

Automatic Auxiliary Task Selection and Adaptive Weighting Boost Molecular Property Prediction

Zhiqiang Zhong, Davide Mottin

NEURIPS 2025 poster

AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration

Andy Zhou, Kevin Wu, Francesco Pinto et al.

NEURIPS 2025 poster · arXiv:2503.15754 · 15 citations

Bayesian Concept Bottleneck Models with LLM Priors

Jean Feng, Avni Kothari, Lucas Zier et al.

NEURIPS 2025 poster · arXiv:2410.15555 · 10 citations

Beyond Single-Task: Robust Multi-Task Length Generalization for LLMs

Yi Hu, Shijia Kang, Haotong Yang et al.

NEURIPS 2025 poster · arXiv:2502.11525 · 4 citations

Bilevel ZOFO: Efficient LLM Fine-Tuning and Meta-Training

Reza Shirkavand, Peiran Yu, Qi He et al.

NEURIPS 2025 poster · arXiv:2502.03604 · 1 citation

Boosting Skeleton-based Zero-Shot Action Recognition with Training-Free Test-Time Adaptation

Jingmin Zhu, Anqi Zhu, Hossein Rahmani et al.

NEURIPS 2025 poster · arXiv:2512.11458

Breaking the Gradient Barrier: Unveiling Large Language Models for Strategic Classification

Xinpeng Lv, Yunxin Mao, Haoxuan Li et al.

NEURIPS 2025 poster · arXiv:2511.06979 · 1 citation

C3PO: Optimized Large Language Model Cascades with Probabilistic Cost Constraints for Reasoning

Antonios Valkanas, Soumyasundar Pal, Pavel Rumiantsev et al.

NEURIPS 2025 poster · arXiv:2511.07396

Calibrating Translation Decoding with Quality Estimation on LLMs

Di Wu, Yibin Lei, Christof Monz

NEURIPS 2025 poster · arXiv:2504.19044

Can DPO Learn Diverse Human Values? A Theoretical Scaling Law

Shawn Im, Sharon Li

NEURIPS 2025 poster · arXiv:2408.03459 · 8 citations

Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

Hanlei Zhang, Zhuohang Li, Hua Xu et al.

NEURIPS 2025 poster · arXiv:2504.16427 · 2 citations

Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Qijiong Liu, Jieming Zhu, Lu Fan et al.

NEURIPS 2025 poster · arXiv:2503.05493 · 4 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NEURIPS 2025 poster · arXiv:2509.17552 · 1 citation

Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework

Laura Kopf, Nils Feldhus, Kirill Bykov et al.

NEURIPS 2025 poster · arXiv:2506.15538 · 4 citations

CCL: Causal-aware In-context Learning for Out-of-Distribution Generalization

Hoyoon Byun, Gyeongdeok Seo, Joonseong Kang et al.

NEURIPS 2025 poster

CellVerse: Do Large Language Models Really Understand Cell Biology?

Fan Zhang, Tianyu Liu, Zhihong Zhu et al.

NEURIPS 2025 poster · arXiv:2505.07865 · 4 citations

Chain of Execution Supervision Promotes General Reasoning in Large Language Models

Nuo Chen, Zehua Li, Keqin Bao et al.

NEURIPS 2025 poster · arXiv:2510.23629

CIDD: Collaborative Intelligence for Structure-Based Drug Design Empowered by LLMs

Bowen Gao, Yanwen Huang, Yiqiao Liu et al.

NEURIPS 2025 poster

Classical Planning with LLM-Generated Heuristics: Challenging the State of the Art with Python Code

Augusto B. Corrêa, André G. Pereira, Jendrik Seipp

NEURIPS 2025 poster · arXiv:2503.18809 · 13 citations

CLAWS: Creativity detection for LLM-generated solutions using Attention Window of Sections

Keuntae Kim, Eunhye Jeong, Sehyeon Lee et al.

NEURIPS 2025 poster

ClinBench: A Standardized Multi-Domain Framework for Evaluating Large Language Models in Clinical Information Extraction

Ismael Villanueva Miranda, Zifan Gu, Donghan Yang et al.

NEURIPS 2025 poster

Computation and Memory-Efficient Model Compression with Gradient Reweighting

Zhiwei Li, Yuesen Liao, Binrui Wu et al.

NEURIPS 2025 poster

Concept-Guided Interpretability via Neural Chunking

Shuchen Wu, Stephan Alaniz, Shyamgopal Karthik et al.

NEURIPS 2025 poster · arXiv:2505.11576

ConfTuner: Training Large Language Models to Express Their Confidence Verbally

Yibo Li, Miao Xiong, Jiaying Wu et al.

NEURIPS 2025 poster · arXiv:2508.18847 · 10 citations

CoP: Agentic Red-teaming for Large Language Models using Composition of Principles

Chen Xiong, Pin-Yu Chen, Tsung-Yi Ho

NEURIPS 2025 poster · arXiv:2506.00781 · 3 citations

Creativity or Brute Force? Using Brainteasers as a Window into the Problem-Solving Abilities of Large Language Models

Sophia Han, Howard Dai, Stephen Xia et al.

NEURIPS 2025 poster · arXiv:2505.10844 · 1 citation

DataSIR: A Benchmark Dataset for Sensitive Information Recognition

Fan Mo, Bo Liu, Yuan Fan et al.

NEURIPS 2025 poster

Detecting High-Stakes Interactions with Activation Probes

Alex McKenzie, Urja Pawar, Phil Blandfort et al.

NEURIPS 2025 poster · arXiv:2506.10805 · 13 citations

Detoxifying Large Language Models via Autoregressive Reward Guided Representation Editing

Yisong Xiao, Aishan Liu, Siyuan Liang et al.

NEURIPS 2025 poster · arXiv:2510.01243 · 2 citations

Differentially Private Federated Low Rank Adaptation Beyond Fixed-Matrix

Ming Wen, Jiaqi Zhu, Yuedong Xu et al.

NEURIPS 2025 poster · arXiv:2507.09990