NeurIPS Spotlight "large language models" Papers
28 papers found
Adaptive Prediction-Powered AutoEval with Reliability and Efficiency Guarantees
Sangwoo Park, Matteo Zecchin, Osvaldo Simeone
AGENTIF: Benchmarking Large Language Models Instruction Following Ability in Agentic Scenarios
Yunjia Qi, Hao Peng, Xiaozhi Wang et al.
A Implies B: Circuit Analysis in LLMs for Propositional Logical Reasoning
Guan Zhe Hong, Nishanth Dikkala, Enming Luo et al.
AI-Researcher: Autonomous Scientific Innovation
Jiabin Tang, Lianghao Xia, Zhonghang Li et al.
Angles Don't Lie: Unlocking Training-Efficient RL Through the Model's Own Signals
Qinsi Wang, Jinghan Ke, Hancheng Ye et al.
Bits Leaked per Query: Information-Theoretic Bounds for Adversarial Attacks on LLMs
Masahiro Kaneko, Timothy Baldwin
Can We Infer Confidential Properties of Training Data from LLMs?
Pengrun Huang, Chhavi Yadav, Kamalika Chaudhuri et al.
Conditional Representation Learning for Customized Tasks
Honglin Liu, Chao Sun, Peng Hu et al.
Conflict-Aware Knowledge Editing in the Wild: Semantic-Augmented Graph Representation for Unstructured Text
Zhange Zhang, Zhicheng Geng, Yuqing Ma et al.
ConTextTab: A Semantics-Aware Tabular In-Context Learner
Marco Spinaci, Marek Polewczyk, Maximilian Schambach et al.
Cost-aware LLM-based Online Dataset Annotation
Eray Can Elumar, Cem Tekin, Osman Yagan
DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models
Simone Carnemolla, Matteo Pennisi, Sarinda Samarasinghe et al.
DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm
Xiaowei Zhu, Yubing Ren, Fang Fang et al.
ErrorTrace: A Black-Box Traceability Mechanism Based on Model Family Error Space
Chuanchao Zang, Xiangtao Meng, Wenyu Chen et al.
FFN Fusion: Rethinking Sequential Computation in Large Language Models
Akhiad Bercovich, Mohammed Dabbah, Omri Puny et al.
FP4 All the Way: Fully Quantized Training of Large Language Models
Brian Chmiel, Maxim Fishman, Ron Banner et al.
HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs
Ningning Chen, Weicai Ye, Ying Jiang
LLM-Explorer: A Plug-in Reinforcement Learning Policy Exploration Enhancement Driven by Large Language Models
Qianyue Hao, Yiwen Song, Qingmin Liao et al.
MigGPT: Harnessing Large Language Models for Automated Migration of Out-of-Tree Linux Kernel Patches Across Versions
Pucheng Dang, Di Huang, Dong Li et al.
Optimization Inspired Few-Shot Adaptation for Large Language Models
Boyan Gao, Xin Wang, Yibo Yang et al.
Predictable Scale (Part II) --- Farseer: A Refined Scaling Law in LLMs
Houyi Li, Wenzhen Zheng, Qiufeng Wang et al.
Right Question is Already Half the Answer: Fully Unsupervised LLM Reasoning Incentivization
Qingyang Zhang, Haitao Wu, Changqing Zhang et al.
Streaming Attention Approximation via Discrepancy Theory
Ekaterina Kochetkova, Kshiteej Jitesh Sheth, Insu Han et al.
The Best Instruction-Tuning Data are Those That Fit
Dylan Zhang, Qirun Dai, Hao Peng
Understanding Parametric and Contextual Knowledge Reconciliation within Large Language Models
Jun Zhao, Yongzhuo Yang, Xiang Hu et al.
Uni-LoRA: One Vector is All You Need
Kaiyang Li, Shaobo Han, Qing Su et al.
Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data
Zi Liang, Qingqing Ye, Xuan Liu et al.
Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search
Yuichi Inoue, Kou Misaki, Yuki Imajuku et al.