NeurIPS Papers
5,858 papers found
A Physics-preserved Transfer Learning Method for Differential Equations
Hao-Ran Yang, Chuan-Xian Ren
APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay
Akshara Prabhakar, Zuxin Liu, Ming Zhu et al.
A Plug-and-Play Query Synthesis Active Learning Framework for Neural PDE Solvers
Zhiyuan Wang, Jinwoo Go, Byung-Jun Yoon et al.
APML: Adaptive Probabilistic Matching Loss for Robust 3D Point Cloud Reconstruction
Sasan Sharifipour, Constantino Álvarez Casado, Mohammad Sabokrou et al.
APOLLO: Automated LLM and Lean Collaboration for Advanced Formal Reasoning
Azim Ospanov, Farzan Farnia, Roozbeh Yousefzadeh
Approximate Domain Unlearning for Vision-Language Models
Kodai Kawamura, Yuta Goto, Rintaro Yanagi et al.
Approximate Gradient Coding for Distributed Learning with Heterogeneous Stragglers
Heekang Song, Wan Choi
Approximately Aligned Decoding
Daniel Melcer, Sujan Kumar Gonugondla, Pramuditha Perera et al.
Approximating Shapley Explanations in Reinforcement Learning
Daniel Beechey, Özgür Şimşek
Approximation and Generalization Abilities of Score-based Neural Network Generative Models for Sub-Gaussian Distributions
Guoji Fu, Wee Sun Lee
Approximation theory for 1-Lipschitz ResNets
Davide Murari, Takashi Furuya, Carola-Bibiane Schönlieb
A Practical Guide for Incorporating Symmetry in Diffusion Policy
Dian Wang, Boce Hu, Shuran Song et al.
A Pre-training Framework for Relational Data with Information-theoretic Principles
Quang Truong, Zhikai Chen, Mingxuan Ju et al.
A Principled Approach to Randomized Selection under Uncertainty: Applications to Peer Review and Grant Funding
Alexander Goldberg, Giulia Fanti, Nihar Shah
A Principled Path to Fitted Distributional Evaluation
Sungee Hong, Jiayi Wang, Zhengling Qi et al.
A Principle of Targeted Intervention for Multi-Agent Reinforcement Learning
Anjie Liu, Jianhong Wang, Samuel Kaski et al.
A Private Approximation of the 2nd-Moment Matrix of Any Subsamplable Input
Bar Mahpud, Or Sheffet
A Provable Approach for End-to-End Safe Reinforcement Learning
Akifumi Wachi, Kohei Miyaguchi, Takumi Tanabe et al.
ArchCAD-400K: A Large-Scale CAD drawings Dataset and New Baseline for Panoptic Symbol Spotting
Ruifeng Luo, Zhengjie Liu, Tianxiao Cheng et al.
Architectural and Inferential Inductive Biases for Exchangeable Sequence Modeling
Daksh Mittal, Leon Li, Thomson Yen et al.
ArchPower: Dataset for Architecture-Level Power Modeling of Modern CPU Design
Qijun Zhang, Yao Lu, Mengming Li et al.
AREAL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning
Wei Fu, Jiaxuan Gao, Xujie Shen et al.
ARECHO: Autoregressive Evaluation via Chain-Based Hypothesis Optimization for Speech Multi-Metric Estimation
Jiatong Shi, Yifan Cheng, Bo-Hao Su et al.
Are Greedy Task Orderings Better Than Random in Continual Linear Regression?
Matan Tsipory, Ran Levinstein, Itay Evron et al.
A Regularized Newton Method for Nonconvex Optimization with Global and Local Complexity Guarantees
Yuhao Zhou, Jintao Xu, Bingrui Li et al.
A Reinforcement Learning-based Bidding Strategy for Data Consumers in Auction-based Federated Learning
Xiaoli Tang, Han Yu, Xiaoxiao Li
Are Language Models Efficient Reasoners? A Perspective from Logic Programming
Andreas Opedal, Yanick Zengaffinen, Haruki Shirakami et al.
Are Large Language Models Sensitive to the Motives Behind Communication?
Addison J. Wu, Ryan Liu, Kerem Oktar et al.
Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost
Runzhe Zhan, Zhihong Huang, Xinyi Yang et al.
A Reliable Cryptographic Framework for Empirical Machine Unlearning Evaluation
Yiwen Tu, Pingbang Hu, Jiaqi Ma
Are Pixel-Wise Metrics Reliable for Computerized Tomography Reconstruction?
Tianyu Lin, Xinran Li, Chuntung Zhuang et al.
ARGenSeg: Image Segmentation with Autoregressive Image Generation Model
Xiaolong Wang, Lixiang Ru, Ziyuan Huang et al.
ARIA: Training Language Agents with Intention-driven Reward Aggregation
Ruihan Yang, Yikai Zhang, Aili Chen et al.
ARM: Adaptive Reasoning Model
Tinghui Zhu, Jian Xie, Yikai Zhang et al.
ARMesh: Autoregressive Mesh Generation via Next-Level-of-Detail Prediction
Jiabao Lei, Kewei Shi, Zhihao Liang et al.
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation
Jingyuan Qi, Zhiyang Xu, Qifan Wang et al.
Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
Liwei Jiang, Yuanjun Chai, Margaret Li et al.
A Scalable, Causal, and Energy Efficient Framework for Neural Decoding with Spiking Neural Networks
Georgios Mentzelopoulos, Ioannis Asmanis, Konrad Kording et al.
Ascent Fails to Forget
Ioannis Mavrothalassitis, Pol Puigdemont, Noam Levi et al.
ASDSV: Multimodal Generation Made Efficient with Approximate Speculative Diffusion and Speculative Verification
Kaijun Zhou, Xingyu Yan, Xingda Wei et al.
A Semantic Parsing Framework for End-to-End Time Normalization
Xin Su, Sungduk Yu, Phillip Howard et al.
A Set of Generalized Components to Achieve Effective Poison-only Clean-label Backdoor Attacks with Collaborative Sample Selection and Triggers
Zhixiao Wu, Yao Lu, Jie Wen et al.
ASGO: Adaptive Structured Gradient Optimization
Kang An, Yuxing Liu, Rui Pan et al.
A Signed Graph Approach to Understanding and Mitigating Oversmoothing
Jiaqi Wang, Xinyi Wu, James Cheng et al.
A Simple Linear Patch Revives Layer-Pruned Large Language Models
Xinrui Chen, Haoli Bai, Tao Yuan et al.
A Single-Loop First-Order Algorithm for Linearly Constrained Bilevel Optimization
Wei Shen, Jiawei Zhang, Minhui Huang et al.
A Single-Loop Gradient Algorithm for Pessimistic Bilevel Optimization via Smooth Approximation
Qichao Cao, Shangzhi Zeng, Jin Zhang
A Single-Swap Local Search Algorithm for k-Means of Lines
Ting Liang, Xiaoliang Wu, Junyu Huang et al.
Ask a Strong LLM Judge when Your Reward Model is Uncertain
Zhenghao Xu, Qin Lu, Qingru Zhang et al.
A Smooth Sea Never Made a Skilled SAILOR: Robust Imitation via Learning to Search
Arnav Kumar Jain, Vibhakar Mohta, Subin Kim et al.