NeurIPS 2025 Papers
5,858 papers found • Page 117 of 118
Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion
Alan Amin, Nate Gruver, Andrew Wilson
Why Playing Against Diverse and Challenging Opponents Speeds Up Coevolution: A Theoretical Analysis on Combinatorial Games
Alistair Benford, Per Kristian Lehre
Why Popular MOEAs are Popular: Proven Advantages in Approximating the Pareto Front
Mingfeng Li, Qiang Zhang, Weijie Zheng et al.
Wide-Horizon Thinking and Simulation-Based Evaluation for Real-World LLM Planning with Multifaceted Constraints
Dongjie Yang, Chengqiang Lu, Qimeng Wang et al.
Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search
Yuichi Inoue, Kou Misaki, Yuki Imajuku et al.
WildCAT3D: Appearance-Aware Multi-View Diffusion in the Wild
Morris Alper, David Novotny, Filippos Kokkinos et al.
Win Fast or Lose Slow: Balancing Speed and Accuracy in Latency-Sensitive Decisions of LLMs
Hao Kang, Qingru Zhang, Han Cai et al.
WISA: World Simulator Assistant for Physics-Aware Text-to-Video Generation
Jing Wang, Ao Ma, Ke Cao et al.
Wisdom is Knowing What not to Say: Hallucination-Free LLMs Unlearning via Attention Shifting
Chenchen Tan, Youyang Qu, Xinghao Li et al.
With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You
Fabian Gröger, Shuo Wen, Huyen Le et al.
WKV-sharing embraced random shuffle RWKV high-order modeling for pan-sharpening
Man Zhou, Xuanhua He, Danfeng Hong et al.
WMCopier: Forging Invisible Watermarks on Arbitrary Images
Ziping Dong, Chao Shuai, Zhongjie Ba et al.
WolBanking77: Wolof Banking Speech Intent Classification Dataset
Abdou Karim Kandji, Frederic Precioso, Cheikh Ba et al.
Wonder Wins Ways: Curiosity-Driven Exploration through Multi-Agent Contextual Calibration
Yiyuan Pan, Zhe Liu, Hesheng Wang
Word-Level Emotional Expression Control in Zero-Shot Text-to-Speech Synthesis
Tianrui Wang, Haoyu Wang, Meng Ge et al.
Words That Unite The World: A Unified Framework for Deciphering Central Bank Communications
Agam Shah, Siddhant Sukhani, Huzaifa Pardawala et al.
World-aware Planning Narratives Enhance Large Vision-Language Model Planner
Junhao Shi, Zhaoye Fei, Siyin Wang et al.
WorldMem: Long-term Consistent World Simulation with Memory
Zeqi Xiao, Yushi Lan, Yifan Zhou et al.
WorldModelBench: Judging Video Generation Models As World Models
Dacheng Li, Yunhao Fang, Yukang Chen et al.
World Models as Reference Trajectories for Rapid Motor Adaptation
Carlos Stein Brito, Daniel McNamee
World Models Should Prioritize the Unification of Physical and Social Dynamics
Xiaoyuan Zhang, Chengdong Ma, Yizhe Huang et al.
WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception
Zhiheng Liu, Xueqing Deng, Shoufa Chen et al.
Worse than Zero-shot? A Fact-Checking Dataset for Evaluating the Robustness of RAG Against Misleading Retrievals
Linda Zeng, Rithwik Gupta, Divij Motwani et al.
WritingBench: A Comprehensive Benchmark for Generative Writing
Yuning Wu, Jiahao Mei, Ming Yan et al.
Wukong's 72 Transformations: High-fidelity Textured 3D Morphing via Flow Models
Minghao Yin, Yukang Cao, Kai Han
X-Field: A Physically Informed Representation for 3D X-ray Reconstruction
Feiran Wang, Jiachen Tao, Junyi Wu et al.
XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
Zhenyu Li, Kehai Chen, Yunfei Long et al.
xLSTM-Mixer: Multivariate Time Series Forecasting by Mixing via Scalar Memories
Maurice Kraus, Felix Divo, Devendra Singh Dhami et al.
X-Mahalanobis: Transformer Feature Mixing for Reliable OOD Detection
Tong Wei, Bolin Wang, Jiang-Xin Shi et al.
X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability
Yu Yang, Alan Liang, Jianbiao Mei et al.
XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation
Bowen Chen, Brynn Zhao, Haomiao Sun et al.
YEAST: Yet Another Sequential Test
Alexey Kurennoy, Majed Dodin, Tural Gurbanov et al.
Yggdrasil: Bridging Dynamic Speculation and Static Runtime for Latency-Optimal Tree-Based LLM Decoding
Yue Guan, Changming Yu, Shihan Fang et al.
YOLOv12: Attention-Centric Real-Time Object Detectors
Yunjie Tian, Qixiang Ye, David Doermann
You Can Trust Your Clustering Model: A Parameter-free Self-Boosting Plug-in for Deep Clustering
Hanyang Li, Yuheng Jia, Hui Liu et al.
You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM
Binqian Xu, Haiyang Mei, Zechen Bai et al.
You Only Spectralize Once: Taking a Spectral Detour to Accelerate Graph Neural Network
Yi Li, Zhichun Guo, Guanpeng Li et al.
Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator
Beier Luo, Shuoyuan Wang, Sharon Li et al.
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li et al.
ZEBRA: Towards Zero-Shot Cross-Subject Generalization for Universal Brain Visual Decoding
Haonan Wang, Jingyu Lu, Hongrui Li et al.
ZeCO: Zero-Communication Overhead Sequence Parallelism for Linear Attention
Yuhong Chou, Zehao Liu, Rui-Jie Zhu et al.
ZeroPatcher: Training-free Sampler for Video Inpainting and Editing
Shaoshu Yang, Yingya Zhang, Ran He
ZeroSep: Separate Anything in Audio with Zero Training
Chao Huang, Yuesheng Ma, Junxuan Huang et al.
Zero-Shot Blind-Spot Image Denoising via Cross-Scale Non-Local Pixel Refilling
Qilong Guo, Tianjing Zhang, Zhiyuan Ma et al.
Zero-Shot Context Generalization in Reinforcement Learning from Few Training Contexts
James Chapman, Kedar Karhadkar, Guido Montufar
Zero-Shot Denoising via Neural Compression: Theoretical and Algorithmic Framework
Ali Zafari, Xi Chen, Shirin Jalali
Zero-Shot Detection of LLM-Generated Text via Implicit Reward Model
Runheng Liu, Heyan Huang, Xingchen Xiao et al.
Zero-Shot Performance Prediction for Probabilistic Scaling Laws
Viktoria Schram, Markus Hiller, Daniel Beck et al.
Zero-shot protein stability prediction by inverse folding models: a free energy interpretation
Jes Frellsen, Maher Kassem, Tone Bengtsen et al.
Zero-Shot Trajectory Planning for Signal Temporal Logic Tasks
Ruijia Liu, Ancheng Hou, Xiao Yu et al.