NeurIPS "large language models" Papers
202 papers found • Page 2 of 5
DataSIR: A Benchmark Dataset for Sensitive Information Recognition
Fan Mo, Bo Liu, Yuan Fan et al.
Detoxifying Large Language Models via Autoregressive Reward Guided Representation Editing
Yisong Xiao, Aishan Liu, Siyuan Liang et al.
DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models
Simone Carnemolla, Matteo Pennisi, Sarinda Samarasinghe et al.
Differentially Private Federated Low Rank Adaptation Beyond Fixed-Matrix
Ming Wen, Jiaqi Zhu, Yuedong Xu et al.
Direct Numerical Layout Generation for 3D Indoor Scene Synthesis via Spatial Reasoning
Xingjian Ran, Yixuan Li, Linning Xu et al.
DISCO: Disentangled Communication Steering for Large Language Models
Max Torop, Aria Masoomi, Masih Eskandar et al.
Distribution-Aligned Decoding for Efficient LLM Task Adaptation
Senkang Hu, Xudong Han, Jinqi Jiang et al.
DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm
Xiaowei Zhu, Yubing Ren, Fang Fang et al.
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
Yang Yue, Zhiqi Chen, Rui Lu et al.
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.
Don’t Forget the Enjoin: FocalLoRA for Instruction Hierarchical Alignment in Large Language Models
Zitong Shi, Guancheng Wan, Haixin Wang et al.
Do You Really Need Public Data? Surrogate Public Data for Differential Privacy on Tabular Data
Shlomi Hod, Lucas Rosenblatt, Julia Stoyanovich
DSAS: A Universal Plug-and-Play Framework for Attention Optimization in Multi-Document Question Answering
Jiakai Li, Rongzheng Wang, Yizhuo Ma et al.
DuoGPT: Training-free Dual Sparsity through Activation-aware Pruning in LLMs
Ruokai Yin, Yuhang Li, Donghyun Lee et al.
DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation
Jiashuo Sun, Xianrui Zhong, Sizhe Zhou et al.
Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies
Yankai Chen, Xinni Zhang, Yifei Zhang et al.
Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward
Yanming Wan, Jiaxing Wu, Marwa Abdulhai et al.
ErrorTrace: A Black-Box Traceability Mechanism Based on Model Family Error Space
Chuanchao Zang, Xiangtao Meng, Wenyu Chen et al.
Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling
Xinglin Wang, Yiwei Li, Shaoxiong Feng et al.
Exploring the Limits of Strong Membership Inference Attacks on Large Language Models
Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo et al.
FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic
Kanghyun Choi, Hyeyoon Lee, Sunjong Park et al.
FFN Fusion: Rethinking Sequential Computation in Large Language Models
Akhiad Bercovich, Mohammed Dabbah, Omri Puny et al.
Finding and Reactivating Post-Trained LLMs' Hidden Safety Mechanisms
Mingjie Li, Wai Man Si, Michael Backes et al.
FoGE: Fock Space Inspired Encoding for Graph Prompting
Takis Chytas, Rudrasis Chakraborty, Vikas Singh
FP4 All the Way: Fully Quantized Training of Large Language Models
Brian Chmiel, Maxim Fishman, Ron Banner et al.
From Programs to Poses: Factored Real-World Scene Generation via Learned Program Libraries
Joy Hsu, Emily Jin, Jiajun Wu et al.
Generating Computational Cognitive Models using Large Language Models
Milena Rmus, Akshay Kumar Jagadish, Marvin Mathony et al.
Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions
Marc Brooks, Gabriel Durham, Kihyuk Hong et al.
GeoCAD: Local Geometry-Controllable CAD Generation with Large Language Models
Zhanwei Zhang, Kaiyuan Liu, Junjie Liu et al.
GnnXemplar: Exemplars to Explanations - Natural Language Rules for Global GNN Interpretability
Burouj Armgaan, Eshan Jain, Harsh Pandey et al.
Gradient Multi-Normalization for Efficient LLM Training
Meyer Scetbon, Chao Ma, Wenbo Gong et al.
GraphChain: Large Language Models for Large-scale Graph Analysis via Tool Chaining
Chunyu Wei, Wenji Hu, Xingjia Hao et al.
GRIFFIN: Effective Token Alignment for Faster Speculative Decoding
Shijing Hu, Jingyang Li, Xingyu Xie et al.
GRIP: A Graph-Based Reasoning Instruction Producer
Jiankang Wang, Jianjun Xu, Xiaorui Wang et al.
HCRMP: An LLM-Hinted Contextual Reinforcement Learning Framework for Autonomous Driving
Zhiwen Chen, Hanming Deng, Zhuoren Li et al.
Improving Formal Reasoning of Transformer with State Stack
Kechi Zhang, Ge Li, Jia Li et al.
Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning
Yuanyao Chen, Rongsheng Chen, Fu Luo et al.
IneqSearch: Hybrid Reasoning for Olympiad Inequality Proofs
Zhaoqun Li, Beishui Liao, Qiwei Ye
InfiGFusion: Graph-on-Logits Distillation via Efficient Gromov-Wasserstein for Model Fusion
Yuanyi Wang, Zhaoyi Yan, Yiming Zhang et al.
Influence Guided Context Selection for Effective Retrieval-Augmented Generation
Jiale Deng, Yanyan Shen, Ziyuan Pei et al.
IPAD: Inverse Prompt for AI Detection - A Robust and Interpretable LLM-Generated Text Detector
Zheng Chen, Yushi Feng, Jisheng Dang et al.
Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Jie Ren, Zhenwei Dai, Xianfeng Tang et al.
KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse
Jingbo Yang, Bairu Hou, Wei Wei et al.
Large Language Models Think Too Fast To Explore Effectively
Lan Pan, Hanbo Xie, Robert Wilson
Layer as Puzzle Pieces: Compressing Large Language Models through Layer Concatenation
Fei Wang, Li Shen, Liang Ding et al.
LayerNavigator: Finding Promising Intervention Layers for Efficient Activation Steering in Large Language Models
Hao Sun, Huailiang Peng, Qiong Dai et al.
Learning “Partner-Aware” Collaborators in Multi-Party Collaboration
Abhijnan Nath, Nikhil Krishnaswamy
Learning to Rank for In-Context Example Retrieval
Yuwen Ji, Luodan Zhang, Ambyer Han et al.
Linearization Explains Fine-Tuning in Large Language Models
Zahra Rahimi Afzal, Tara Esmaeilbeig, Mojtaba Soltanalian et al.
LLM-Explorer: A Plug-in Reinforcement Learning Policy Exploration Enhancement Driven by Large Language Models
Qianyue Hao, Yiwen Song, Qingmin Liao et al.