Poster "large language models" Papers

740 papers found • Page 9 of 15

Re-Thinking Inverse Graphics With Large Language Models

Haiwen Feng, Michael J Black, Weiyang Liu et al.

ICLR 2025 • arXiv:2404.15228
16 citations

Rethinking Residual Distribution in Locate-then-Edit Model Editing

Xiaopeng Li, Shangwen Wang, Shasha Li et al.

NEURIPS 2025 • arXiv:2502.03748
2 citations

Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

Xiao Guo, Xiufeng Song, Yue Zhang et al.

CVPR 2025 • arXiv:2503.20188
26 citations

Retro-R1: LLM-based Agentic Retrosynthesis

Wei Liu, Jiangtao Feng, Hongli Yu et al.

NEURIPS 2025

Revising and Falsifying Sparse Autoencoder Feature Explanations

George Ma, Samuel Pfrommer, Somayeh Sojoudi

NEURIPS 2025

Revolutionizing Training-Free NAS: Towards Efficient Automatic Proxy Discovery via Large Language Models

Haidong Kang, Lihong Lin, Hanling Wang

NEURIPS 2025

REvolve: Reward Evolution with Large Language Models using Human Feedback

Rishi Hazra, Alkis Sygkounas, Andreas Persson et al.

ICLR 2025 • arXiv:2406.01309
8 citations

Risk-aware Direct Preference Optimization under Nested Risk Measure

Lijun Zhang, Lin Li, Yajie Qi et al.

NEURIPS 2025 • arXiv:2505.20359
2 citations

RoboTron-Nav: A Unified Framework for Embodied Navigation Integrating Perception, Planning, and Prediction

Yufeng Zhong, Chengjian Feng, Feng Yan et al.

ICCV 2025 • arXiv:2503.18525
3 citations

Robust Hallucination Detection in LLMs via Adaptive Token Selection

Mengjia Niu, Hamed Haddadi, Guansong Pang

NEURIPS 2025 • arXiv:2504.07863
8 citations

Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference

Ke Yi, Zengke Liu, Jianwei Zhang et al.

ICLR 2025 • arXiv:2409.20361
4 citations

RouteLLM: Learning to Route LLMs from Preference Data

Isaac Ong, Amjad Almahairi, Vincent Wu et al.

ICLR 2025
24 citations

ROUTE: Robust Multitask Tuning and Collaboration for Text-to-SQL

Yang Qin, Chao Chen, Zhihang Fu et al.

ICLR 2025 • arXiv:2412.10138
8 citations

RSAVQ: Riemannian Sensitivity-Aware Vector Quantization for Large Language Models

Zukang Xu, Xing Hu, Qiang Wu et al.

NEURIPS 2025 • arXiv:2510.01240

rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset

Yifei Liu, Li Lyna Zhang, Yi Zhu et al.

NEURIPS 2025 • arXiv:2505.21297
25 citations

Scalable Bayesian Learning with posteriors

Samuel Duffield, Kaelan Donatella, Johnathan Chiu et al.

ICLR 2025 • arXiv:2406.00104
7 citations

ScanEdit: Hierarchically-Guided Functional 3D Scan Editing

Mohamed El Amine Boudjoghra, Ivan Laptev, Angela Dai

ICCV 2025 • arXiv:2504.15049

scPilot: Large Language Model Reasoning Toward Automated Single-Cell Analysis and Discovery

Yiming Gao, Zhen Wang, Jefferson Chen et al.

NEURIPS 2025

Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models

Ángela López-Cardona, Carlos Segura, Alexandros Karatzoglou et al.

ICLR 2025 • arXiv:2410.01532
8 citations

Segment Policy Optimization: Effective Segment-Level Credit Assignment in RL for Large Language Models

Yiran Guo, Lijie Xu, Jie Liu et al.

NEURIPS 2025 • arXiv:2505.23564
18 citations

Selective Prompt Anchoring for Code Generation

Yuan Tian, Tianyi Zhang

ICML 2025 • arXiv:2408.09121
10 citations

Self-Boosting Large Language Models with Synthetic Preference Data

Qingxiu Dong, Li Dong, Xingxing Zhang et al.

ICLR 2025 • arXiv:2410.06961
32 citations

Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity in LLMs

Jun Wang, Liang Ding, Shuai Wang et al.

NEURIPS 2025

Self Iterative Label Refinement via Robust Unlabeled Learning

Hikaru Asano, Tadashi Kozuno, Yukino Baba

NEURIPS 2025 • arXiv:2502.12565
1 citation

Self-Updatable Large Language Models by Integrating Context into Model Parameters

Yu Wang, Xinshuang Liu, Xiusi Chen et al.

ICLR 2025 • arXiv:2410.00487
5 citations

Self-Verification Provably Prevents Model Collapse in Recursive Synthetic Training

Shi Fu, Yingjie Wang, Yuzhu Chen et al.

NEURIPS 2025

Semantic-guided Diverse Decoding for Large Language Model

Weijie Shi, Yue Cui, Yaguang Wu et al.

NEURIPS 2025 • arXiv:2506.23601
2 citations

Semantic-KG: Using Knowledge Graphs to Construct Benchmarks for Measuring Semantic Similarity

Qiyao Wei, Edward R Morrell, Lea Goetz et al.

NEURIPS 2025 • arXiv:2511.19925

Semantic Loss Guided Data Efficient Supervised Fine Tuning for Safe Responses in LLMs

Yuxiao Lu, Arunesh Sinha, Pradeep Varakantham

ICLR 2025 • arXiv:2412.06843
3 citations

SeRL: Self-play Reinforcement Learning for Large Language Models with Limited Data

Wenkai Fang, Shunyu Liu, Yang Zhou et al.

NEURIPS 2025 • arXiv:2505.20347
25 citations

ShiQ: Bringing back Bellman to LLMs

Pierre Clavier, Nathan Grinsztajn, Raphaël Avalos et al.

NEURIPS 2025 • arXiv:2505.11081
2 citations

Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence

Shaopeng Fu, Liang Ding, Jingfeng Zhang et al.

NEURIPS 2025 • arXiv:2502.04204
6 citations

SilentStriker: Toward Stealthy Bit-Flip Attacks on Large Language Models

Haotian Xu, Qingsong Peng, Jie Shi et al.

NEURIPS 2025
1 citation

SimPER: A Minimalist Approach to Preference Alignment without Hyperparameters

Teng Xiao, Yige Yuan, Zhengyu Chen et al.

ICLR 2025 • arXiv:2502.00883
26 citations

SIMS: Simulating Stylized Human-Scene Interactions with Retrieval-Augmented Script Generation

Wenjia Wang, Liang Pan, Zhiyang Dou et al.

ICCV 2025 • arXiv:2411.19921
4 citations

Simulating Society Requires Simulating Thought

Chance Jiajie Li, Jiayi Wu, Zhenze Mo et al.

NEURIPS 2025 • arXiv:2506.06958
1 citation

Sinusoidal Initialization, Time for a New Start

Alberto Fernandez-Hernandez, Jose Mestre, Manuel F. Dolz et al.

NEURIPS 2025 • arXiv:2505.12909
1 citation

SiriuS: Self-improving Multi-agent Systems via Bootstrapped Reasoning

Wanjia Zhao, Mert Yuksekgonul, Shirley Wu et al.

NEURIPS 2025 • arXiv:2502.04780
22 citations

S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning

Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.

NEURIPS 2025 • arXiv:2504.06426
2 citations

SMT: Fine-Tuning Large Language Models with Sparse Matrices

Haoze He, Juncheng Li, Xuan Jiang et al.

ICLR 2025
7 citations

Solver-Informed RL: Grounding Large Language Models for Authentic Optimization Modeling

Yitian Chen, Jingfan Xia, Siyu Shao et al.

NEURIPS 2025 • arXiv:2505.11792
15 citations

SolverLLM: Leveraging Test-Time Scaling for Optimization Problem via LLM-Guided Search

Dong Li, Xujiang Zhao, Linlin Yu et al.

NEURIPS 2025 • arXiv:2510.16916
1 citation

SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal

Tinghao Xie, Xiangyu Qi, Yi Zeng et al.

ICLR 2025 • arXiv:2406.14598
151 citations

SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models

Jiale Cheng, Xiao Liu, Cunxiang Wang et al.

ICLR 2025 • arXiv:2412.11605
13 citations

Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning

Yong Liu, Zirui Zhu, Chaoyu Gong et al.

NEURIPS 2025 • arXiv:2402.15751
37 citations

SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language Model

Yue Zhang, Zhiyang Xu, Ying Shen et al.

ICLR 2025 • arXiv:2410.03878
20 citations

Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting

Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.

ICLR 2025 • arXiv:2407.08223
78 citations

SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking

Xingrun Xing, Boyan Gao, Zheng Liu et al.

ICLR 2025 • arXiv:2407.04752
23 citations

SpinQuant: LLM Quantization with Learned Rotations

Zechun Liu, Changsheng Zhao, Igor Fedorov et al.

ICLR 2025 • arXiv:2405.16406
268 citations

SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs

Ruyue Liu, Rong Yin, Xiangzhen Bo et al.

NEURIPS 2025 • arXiv:2510.01248
1 citation