Poster "large language models" Papers

395 papers found • Page 1 of 8

G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning

Xiaojun Guo, Ang Li, Yifei Wang et al.

NeurIPS 2025 poster · 4 citations

3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds

Hengshuo Chu, Xiang Deng, Qi Lv et al.

ICLR 2025 poster · arXiv:2502.20041 · 15 citations

Accelerating Block Coordinate Descent for LLM Finetuning via Landscape Expansion

Qijun Luo, Yifei Shen, Liangzu Peng et al.

NeurIPS 2025 poster

A Closer Look at Machine Unlearning for Large Language Models

Xiaojian Yuan, Tianyu Pang, Chao Du et al.

ICLR 2025 poster · arXiv:2410.08109 · 31 citations

ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints

Divij Handa, Pavel Dolin, Shrinidhi Kumbhar et al.

ICLR 2025 poster · arXiv:2406.04046 · 7 citations

AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking

Soyoung Yoon, Gyuwan Kim, Gyu-Hwung Cho et al.

NeurIPS 2025 poster · arXiv:2505.18512 · 1 citation

Ada-K Routing: Boosting the Efficiency of MoE-based LLMs

Zijia Zhao, Longteng Guo, Jie Cheng et al.

ICLR 2025 poster · arXiv:2410.10456 · 8 citations

AdaLRS: Loss-Guided Adaptive Learning Rate Search for Efficient Foundation Model Pretraining

Hongyuan Dong, Dingkang Yang, Xiao Liang et al.

NeurIPS 2025 poster · arXiv:2506.13274 · 3 citations

Adaptive Distraction: Probing LLM Contextual Robustness with Automated Tree Search

Yanbo Wang, Zixiang Xu, Yue Huang et al.

NeurIPS 2025 poster · arXiv:2502.01609 · 3 citations

AdmTree: Compressing Lengthy Context with Adaptive Semantic Trees

Yangning Li, Shaoshen Chen, Yinghui Li et al.

NeurIPS 2025 poster · arXiv:2512.04550 · 4 citations

Advancing LLM Reasoning Generalists with Preference Trees

Lifan Yuan, Ganqu Cui, Hanbin Wang et al.

ICLR 2025 poster · arXiv:2404.02078 · 179 citations

AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks

Fali Wang, Hui Liu, Zhenwei Dai et al.

NeurIPS 2025 poster · arXiv:2508.00890 · 9 citations

AI as Humanity’s Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

Ximing Lu, Melanie Sclar, Skyler Hallinan et al.

ICLR 2025 poster · arXiv:2410.04265 · 32 citations

AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in Corporate Statements

Adriana-Eufrosina Bora, Pierre-Luc St-Charles, Mirko Bronzi et al.

ICLR 2025 poster · arXiv:2502.07022 · 2 citations

Alignment of Large Language Models with Constrained Learning

Botong Zhang, Shuo Li, Ignacio Hounie et al.

NeurIPS 2025 poster · arXiv:2505.19387 · 2 citations

Alleviating Hallucinations in Large Language Models through Multi-Model Contrastive Decoding and Dynamic Hallucination Detection

Chenyu Zhu, Yefeng Liu, Hao Zhang et al.

NeurIPS 2025 poster

AlphaDecay: Module-wise Weight Decay for Heavy-Tailed Balancing in LLMs

Di He, Songjun Tu, Ajay Jaiswal et al.

NeurIPS 2025 poster · arXiv:2506.14562 · 1 citation

Analyzing the Power of Chain of Thought through Memorization Capabilities

Lijia Yu, Xiao-Shan Gao, Lijun Zhang

NeurIPS 2025 poster · arXiv:2511.01190

AnoLLM: Large Language Models for Tabular Anomaly Detection

Che-Ping Tsai, Ganyu Teng, Phillip Wallis et al.

ICLR 2025 poster · 7 citations

API Pack: A Massive Multi-Programming Language Dataset for API Call Generation

Gavin (Zhen) Guo, Adriana Meza Soria, Wei Sun et al.

ICLR 2025 poster · arXiv:2402.09615 · 4 citations

A Probabilistic Perspective on Unlearning and Alignment for Large Language Models

Yan Scholten, Stephan Günnemann, Leo Schwinn

ICLR 2025 poster · arXiv:2410.03523 · 15 citations

A Simple yet Effective Layout Token in Large Language Models for Document Understanding

Zhaoqing Zhu, Chuwei Luo, Zirui Shao et al.

CVPR 2025 poster · arXiv:2503.18434 · 7 citations

A Statistical Approach for Controlled Training Data Detection

Zirui Hu, Yingjie Wang, Zheng Zhang et al.

ICLR 2025 poster · 2 citations

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NeurIPS 2025 poster · arXiv:2502.05567 · 13 citations

A Training-Free Sub-quadratic Cost Transformer Model Serving Framework with Hierarchically Pruned Attention

Heejun Lee, Geon Park, Youngwan Lee et al.

ICLR 2025 poster · arXiv:2406.09827 · 8 citations

AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution

Fengyuan Liu, Nikhil Kandpal, Colin Raffel

ICLR 2025 poster · arXiv:2411.15102 · 12 citations

Automatic Auxiliary Task Selection and Adaptive Weighting Boost Molecular Property Prediction

Zhiqiang Zhong, Davide Mottin

NeurIPS 2025 poster

AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration

Andy Zhou, Kevin Wu, Francesco Pinto et al.

NeurIPS 2025 poster · arXiv:2503.15754 · 15 citations

Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?

Yifan Feng, Chengwu Yang, Xingliang Hou et al.

ICLR 2025 poster · arXiv:2410.10083 · 10 citations

Beyond Next Token Prediction: Patch-Level Training for Large Language Models

Chenze Shao, Fandong Meng, Jie Zhou

ICLR 2025 poster · arXiv:2407.12665 · 2 citations

Boosting Skeleton-based Zero-Shot Action Recognition with Training-Free Test-Time Adaptation

Jingmin Zhu, Anqi Zhu, Hossein Rahmani et al.

NeurIPS 2025 poster · arXiv:2512.11458

Calibrating Translation Decoding with Quality Estimation on LLMs

Di Wu, Yibin Lei, Christof Monz

NeurIPS 2025 poster · arXiv:2504.19044

CAMEx: Curvature-aware Merging of Experts

Dung Viet Nguyen, Minh Nguyen, Luc Nguyen et al.

ICLR 2025 poster · arXiv:2502.18821 · 6 citations

Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Qijiong Liu, Jieming Zhu, Lu Fan et al.

NeurIPS 2025 poster · arXiv:2503.05493 · 4 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NeurIPS 2025 poster · arXiv:2509.17552 · 1 citation

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?

Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.

ICLR 2025 poster · arXiv:2403.06833 · 45 citations

Can LLMs Understand Time Series Anomalies?

Zihao Zhou, Rose Yu

ICLR 2025 poster · arXiv:2410.05440 · 32 citations

Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework

Laura Kopf, Nils Feldhus, Kirill Bykov et al.

NeurIPS 2025 poster · arXiv:2506.15538 · 4 citations

Catastrophic Failure of LLM Unlearning via Quantization

Zhiwei Zhang, Fali Wang, Xiaomin Li et al.

ICLR 2025 poster · arXiv:2410.16454 · 43 citations

Causally Motivated Sycophancy Mitigation for Large Language Models

Haoxi Li, Xueyang Tang, Jie Zhang et al.

ICLR 2025 poster · 8 citations

CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models

Song Wang, Peng Wang, Tong Zhou et al.

ICLR 2025 poster · arXiv:2407.02408 · 13 citations

Certifying Counterfactual Bias in LLMs

Isha Chaudhary, Qian Hu, Manoj Kumar et al.

ICLR 2025 poster · arXiv:2405.18780 · 4 citations

CHASE-SQL: Multi-Path Reasoning and Preference Optimized Candidate Selection in Text-to-SQL

Mohammadreza Pourreza, Hailong Li, Ruoxi Sun et al.

ICLR 2025 poster · arXiv:2410.01943 · 116 citations

Classical Planning with LLM-Generated Heuristics: Challenging the State of the Art with Python Code

Augusto B. Corrêa, André G. Pereira, Jendrik Seipp

NeurIPS 2025 poster · arXiv:2503.18809 · 13 citations

CLAWS: Creativity Detection for LLM-generated Solutions using Attention Window of Sections

Keuntae Kim, Eunhye Jeong, Sehyeon Lee et al.

NeurIPS 2025 poster

ClinBench: A Standardized Multi-Domain Framework for Evaluating Large Language Models in Clinical Information Extraction

Ismael Villanueva Miranda, Zifan Gu, Donghan Yang et al.

NeurIPS 2025 poster

Computation and Memory-Efficient Model Compression with Gradient Reweighting

Zhiwei Li, Yuesen Liao, Binrui Wu et al.

NeurIPS 2025 poster

Concept-Guided Interpretability via Neural Chunking

Shuchen Wu, Stephan Alaniz, Shyamgopal Karthik et al.

NeurIPS 2025 poster · arXiv:2505.11576

Confidence Elicitation: A New Attack Vector for Large Language Models

Brian Formento, Chuan Sheng Foo, See-Kiong Ng

ICLR 2025 poster · arXiv:2502.04643 · 2 citations

ConfTuner: Training Large Language Models to Express Their Confidence Verbally

Yibo Li, Miao Xiong, Jiaying Wu et al.

NeurIPS 2025 poster · arXiv:2508.18847 · 10 citations