NeurIPS Papers
Trust, But Verify: A Self-Verification Approach to Reinforcement Learning with Verifiable Rewards
Xiaoyuan Liu, Tian Liang, Zhiwei He et al.
Trust Region Constrained Measure Transport in Path Space for Stochastic Optimal Control and Inference
Denis Blessing, Julius Berner, Lorenz Richter et al.
Trust Region Reward Optimization and Proximal Inverse Reward Optimization Algorithm
Yang Chen, Menglin Zou, Jiaqi Zhang et al.
TRUST: Test-Time Refinement using Uncertainty-Guided SSM Traverses
Sahar Dastani, Ali Bahri, Gustavo Vargas Hakim et al.
Truthful Aggregation of LLMs with an Application to Online Advertising
Ermis Soumalias, Michael Curry, Sven Seuken
Truth over Tricks: Measuring and Mitigating Shortcut Learning in Misinformation Detection
Herun Wan, Jiaying Wu, Minnan Luo et al.
TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks
Xiang Meng, Mehdi Makni, Rahul Mazumder
T-SHIRT: Token-Selective Hierarchical Data Selection for Instruction Tuning
Yanjun Fu, Faisal Hamman, Sanghamitra Dutta
TS-MOF: Two-Stage Multi-Objective Fine-tuning for Long-Tailed Recognition
Zhe Zhao, Zhiheng Gong, Pengkun Wang et al.
TS-RAG: Retrieval-Augmented Generation based Time Series Foundation Models are Stronger Zero-Shot Forecaster
Kanghui Ning, Zijie Pan, Yu Liu et al.
TTRL: Test-Time Reinforcement Learning
Yuxin Zuo, Kaiyan Zhang, Li Sheng et al.
TTS-VAR: A Test-Time Scaling Framework for Visual Auto-Regressive Generation
Zhekai Chen, Ruihang Chu, Yukang Chen et al.
Turbocharging Gaussian Process Inference with Approximate Sketch-and-Project
Pratik Rathore, Zachary Frangella, Sachin Garg et al.
Turning Sand to Gold: Recycling Data to Bridge On-Policy and Off-Policy Learning via Causal Bound
Tal Fiskus, Uri Shaham
Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning
Chaoyang Li, Runze Ye, Jianyang Qin et al.
TV-Rec: Time-Variant Convolutional Filter for Sequential Recommendation
Yehjin Shin, Jeongwhan Choi, Seojin Kim et al.
Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning
Chaofan Lin, Jiaming Tang, Shuo Yang et al.
TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets
Yuzhe Yang, Yifei Zhang, Minghao Wu et al.
Two Causally Related Needles in a Video Haystack
Miaoyu Li, Qin Chao, Boyang Li
Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training
Mengru Wang, Xingyu Chen, Yue Wang et al.
Two Heads are Better than One: Simulating Large Transformers with Small Ones
Hantao Yu, Josh Alman
Two-Stage Learning of Stabilizing Neural Controllers via Zubov Sampling and Iterative Domain Expansion
Haoyu Li, Xiangru Zhong, Bin Hu et al.
Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising
Mateo Clémente, Leo Brunswic, Yang et al.
Týr-the-Pruner: Structural Pruning LLMs via Global Sparsity Distribution Optimization
Guanchen Li, Yixing Xu, Zeping Li et al.
UAV-Flow Colosseo: A Real-World Benchmark for Flying-on-a-Word UAV Imitation Learning
Xiangyu Wang, Donglin Yang, Yue Liao et al.
U-CAN: Unsupervised Point Cloud Denoising with Consistency-Aware Noise2Noise Matching
Junsheng Zhou, XingYu Shi, Haichuan Song et al.
UEPI: Universal Energy-Behavior-Preserving Integrators for Energy Conservative/Dissipative Differential Equations
Elena Celledoni, Brynjulf Owren, Chong Shen et al.
UFM: A Simple Path towards Unified Dense Correspondence with Flow
Yuchen Zhang, Nikhil Keetha, Chenwei Lyu et al.
UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface
Hao Tang, Chen-Wei Xie, Haiyang Wang et al.
UFO-RL: Uncertainty-Focused Optimization for Efficient Reinforcement Learning Data Selection
Yang Zhao, Kai Xiong, Xiao Ding et al.
UFT: Unifying Supervised and Reinforcement Fine-Tuning
Mingyang Liu, Gabriele Farina, Asuman Ozdaglar
UGG-ReID: Uncertainty-Guided Graph Model for Multi-Modal Object Re-Identification
Xixi Wan, Aihua Zheng, Bo Jiang et al.
UGM2N: An Unsupervised and Generalizable Mesh Movement Network via M-Uniform Loss
Zhichao Wang, Xinhai Chen, Qinglin Wang et al.
UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights
Shijun Liang, Ismail Alkhouri, Siddhant Gautam et al.
UI-Genie: A Self-Improving Approach for Iteratively Boosting MLLM-based Mobile GUI Agents
Han Xiao, Guozhi Wang, Yuxiang Chai et al.
Ultra-high Resolution Watermarking Framework Resistant to Extreme Cropping and Scaling
Nan Sun, LuYu Yuan, Han Fang et al.
UltraHR-100K: Enhancing UHR Image Synthesis with A Large-Scale High-Quality Dataset
Chen Zhao, En Ci, Yunzhe Xu et al.
UltraLED: Learning to See Everything in Ultra-High Dynamic Range Scenes
Yuang Meng, Xin Jin, Lina Lei et al.
Ultrametric Cluster Hierarchies: I Want ‘em All!
Andrew Draganov, Pascal Weber, Rasmus Jørgensen et al.
UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions
Xue Zhucun, Jiangning Zhang, Teng Hu et al.
UMA: A Family of Universal Models for Atoms
Brandon Wood, Misko Dzamba, Xiang Fu et al.
UMAMI: Unifying Masked Autoregressive Models and Deterministic Rendering for View Synthesis
Thanh-Tung Le, Tuan Pham, Tung Nguyen et al.
UMoE: Unifying Attention and FFN with Shared Experts
Yuanhang Yang, Chaozheng Wang, Jing Li
UMU-Bench: Closing the Modality Gap in Multimodal Unlearning Evaluation
Chengye Wang, Yuyuan Li, XiaoHua Feng et al.
un$^2$CLIP: Improving CLIP's Visual Detail Capturing Ability via Inverting unCLIP
Yinqi Li, Jiahe Zhao, Hong Chang et al.
Unbalanced Optimal Total Variation Transport: A Theoretical Approach to Spatial Resource Allocation Problems
Nhan-Phu Chung, Jinhui Han, Bohan Li et al.
Unbiased Prototype Consistency Learning for Multi-Modal and Multi-Task Object Re-Identification
Zhongao Zhou, Bin Yang, Wenke Huang et al.
Unbiased Sliced Wasserstein Kernels for High-Quality Audio Captioning
Tien Manh Luong, Khai Nguyen, Dinh Phung et al.
Uncertain Knowledge Graph Completion via Semi-Supervised Confidence Distribution Learning
Tianxing Wu, Shutong Zhu, Jingting Wang et al.
Uncertainty-Aware Multi-Objective Reinforcement Learning-Guided Diffusion Models for 3D De Novo Molecular Design
Lianghong Chen, Dongkyu Kim, Mike Domaratzki et al.