NeurIPS Poster Papers
4,493 papers found
ACCO: Accumulate While You Communicate for Communication-Overlapped Sharded LLM Training
Adel Nabli, Louis Fournier, Pierre Erbacher et al.
AccuQuant: Simulating Multiple Denoising Steps for Quantizing Diffusion Models
Seunghoon Lee, Jeongwoo Choi, Byunggwan Son et al.
Accurate and Efficient Low-Rank Model Merging in Core Space
Aniello Panariello, Daniel Marczak, Simone Magistri et al.
Accurate KV Cache Eviction via Anchor Direction Projection for Efficient LLM Inference
Zijie Geng, Jie Wang, Ziqi Liu et al.
Accurately Predicting Protein Mutational Effects via a Hierarchical Many-Body Attention Network
Dahao Xu, Jiahua Rao, Mingming Zhu et al.
AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation
Sixiang Chen, Jiaming Liu, Siyuan Qian et al.
AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning
Yang Chen, Zhuolin Yang, Zihan Liu et al.
A Circular Argument: Does RoPE need to be Equivariant for Vision?
Chase van de Geijn, Timo Lüddecke, Polina Turishcheva et al.
AC-LoRA: (Almost) Training-Free Access Control Aware Multi-Modal LLMs
Lara Magdalena Lazier, Aritra Dhar, Vasilije Stambolic et al.
A Closer Look at NTK Alignment: Linking Phase Transitions in Deep Image Regression
Giuseppe Castiglione, Christopher L Buckley, Ivor Simpson
A Closer Look at TabPFN v2: Understanding Its Strengths and Extending Its Capabilities
Han-Jia Ye, Si-Yang Liu, Wei-Lun (Harry) Chao
A Closer Look to Positive-Unlabeled Learning from Fine-grained Perspectives: An Empirical Study
Yuanchao Dai, Zhengzhang Hou, Changchun Li et al.
A CLT for Polynomial GNNs on Community-Based Graphs
Luciano Vinas, Arash Amini
A compressive-expressive communication framework for compositional representations
Rafael Elberg, Felipe del Río, Mircea Petrache et al.
A Computationally Viable Numerical Gradient-based Technique for Optimal Covering Problems
Gokul Rajaraman, Debasish Chatterjee
A Counterfactual Semantics for Hybrid Dynamical Systems
Andy Zane, Dmitry Batenkov, Rafal Urbaniak et al.
A Cramér–von Mises Approach to Incentivizing Truthful Data Sharing
Alex Clinton, Thomas Zeng, Yiding Chen et al.
ACT as Human: Multimodal Large Language Model Data Annotation with Critical Thinking
Lequan Lin, Dai Shi, Andi Han et al.
Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
Xiaoyu Zhan, Wenxuan Huang, Hao Sun et al.
Activated LoRA: Fine-tuned LLMs for Intrinsics
Kristjan Greenewald, Luis Lastras, Thomas Parnell et al.
Activation-Guided Consensus Merging for Large Language Models
Yuxuan Yao, Shuqi Liu, Zehua Liu et al.
Activation-Informed Merging of Large Language Models
Amin Heyrani Nobari, Kaveh Alimohammadi, Ali ArjomandBigdeli et al.
Active Measurement: Efficient Estimation at Scale
Max Hamilton, Jinlin Lai, Wenlong Zhao et al.
Active Seriation: Efficient Ordering Recovery with Statistical Guarantees
James Cheshire, Yann Issartel
Active Target Discovery under Uninformative Priors: The Power of Permanent and Transient Memory
Anindya Sarkar, Binglin Ji, Yevgeniy Vorobeychik
Active Test-time Vision-Language Navigation
Heeju Ko, Sung June Kim, Gyeongrok Oh et al.
ActiveVOO: Value of Observation Guided Active Knowledge Acquisition for Open-World Embodied Lifted Regression Planning
Xiaotian Liu, Ali Pesaranghader, Jaehong Kim et al.
Activity Pruning for Efficient Spiking Neural Networks
Tong Bu, Xinyu Shi, Zhaofei Yu
Actor-Free Continuous Control via Structurally Maximizable Q-Functions
Yigit Korkmaz, Urvi Bhuwania, Ayush Jain et al.
Act to See, See to Act: Diffusion-Driven Perception-Action Interplay for Adaptive Policies
Jing Wang, Weiting Peng, Jing Tang et al.
AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking
Soyoung Yoon, Gyuwan Kim, Gyu-Hwung Cho et al.
AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees
Hongyi Zhou, Jin Zhu, Pingfan Su et al.
Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference
Yuan Feng, Junlin Lv, Yukun Cao et al.
AdaLRS: Loss-Guided Adaptive Learning Rate Search for Efficient Foundation Model Pretraining
Hongyuan Dong, Dingkang Yang, Xiao Liang et al.
Adam Reduces a Unique Form of Sharpness: Theoretical Insights Near the Minimizer Manifold
Xinghan Li, Haodong Wen, Kaifeng Lyu
AdaMSS: Adaptive Multi-Subspace Approach for Parameter-Efficient Fine-Tuning
Jingjing Zheng, Wanglong Lu, Yiming Dong et al.
Adaptable Safe Policy Learning from Multi-task Data with Constraint Prioritized Decision Transformer
Ruiqi Xue, Ziqian Zhang, Lihe Li et al.
AdaptDel: Adaptable Deletion Rate Randomized Smoothing for Certified Robustness
Zhuoqun Huang, Neil Marchant, Olga Ohrimenko et al.
AdaptGrad: Adaptive Sampling to Reduce Noise
Linjiang Zhou, Chao Ma, Zepeng Wang et al.
Adapting to Stochastic and Adversarial Losses in Episodic MDPs with Aggregate Bandit Feedback
Shinji Ito, Kevin Jamieson, Haipeng Luo et al.
Adaptive Algorithms with Sharp Convergence Rates for Stochastic Hierarchical Optimization
Xiaochuan Gong, Jie Hao, Mingrui Liu
Adaptive and Multi-scale Affinity Alignment for Hierarchical Contrastive Learning
Jiawei Huang, Minming Li, Hu Ding
Adaptive Batch-Wise Sample Scheduling for Direct Preference Optimization
Zixuan Huang, Yikun Ban, Lean Fu et al.
Adaptive Cannistraci-Hebb Network Automata Modelling of Complex Networks for Path-based Link Prediction
Jialin Zhao, Alessandro Muscoloni, Umberto Michieli et al.
Adaptive Classifier-Free Guidance via Dynamic Low-Confidence Masking
Pengxiang Li, Shilin Yan, Jiayin Cai et al.
Adaptive Data Analysis for Growing Data
Neil Marchant, Benjamin Rubinstein
Adaptive Data-Borrowing for Improving Treatment Effect Estimation using External Controls
Qinwei Yang, Jingyi Li, Peng Wu
Adaptive Discretization for Consistency Models
Jiayu Bai, Zhanbo Feng, Zhijie Deng et al.
Adaptive Distraction: Probing LLM Contextual Robustness with Automated Tree Search
Yanbo Wang, Zixiang Xu, Yue Huang et al.
Adaptive Divergence Regularized Policy Optimization for Fine-tuning Generative Models
Jiajun Fan, Tong Wei, Chaoran Cheng et al.