All Papers
Revisiting Domain-Adaptive Object Detection in Adverse Weather by the Generation and Composition of High-Quality Pseudo-Labels
Rui Zhao, Huibin Yan, Shuoyao Wang
Revisiting Feature Disentanglement Strategy in Diffusion Training and Breaking Conditional Independence Assumption in Sampling
Wonwoong Cho, Hareesh Ravi, Midhun Harikumar et al.
Revisiting Global Translation Estimation with Feature Tracks
Peilin Tao, Hainan Cui, Mengqi Rong et al.
Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Lulu Xue, Shengshan Hu, Ruizhi Zhao et al.
Revisiting Graph-Based Fraud Detection in Sight of Heterophily and Spectrum
Fan Xu, Nan Wang, Hao Wu et al.
Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity
Ahmet Alacaoglu, Donghwan Kim, Stephen Wright
Revisiting Link Prediction: a data perspective
Haitao Mao, Juanhui Li, Harry Shomer et al.
Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis
Zanlin Ni, Yulin Wang, Renping Zhou et al.
Revisiting Open-Set Panoptic Segmentation
Yufei Yin, Hao Chen, Wengang Zhou et al.
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Guozheng Ma, Lu Li, Sen Zhang et al.
Revisiting Sampson Approximations for Geometric Estimation Problems
Felix Rydell, Angelica Torres, Viktor Larsson
Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning
Mohamed Elsayed, Homayoon Farrahi, Felix Dangel et al.
Revisiting Single Image Reflection Removal In the Wild
Yurui Zhu, Bo Li, Xueyang Fu et al.
Revisiting Spatial-Frequency Information Integration from a Hierarchical Perspective for Panchromatic and Multi-Spectral Image Fusion
Jiangtong Tan, Jie Huang, Naishan Zheng et al.
Revisiting Supervision for Continual Representation Learning
Daniel Marczak, Sebastian Cygert, Tomasz Trzcinski et al.
Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer
Wenqiao Zhang, Zheqi Lv
Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods
Zijian Liu, Zhengyuan Zhou
Revisiting the Power of Prompt for Visual Tuning
Yuzhu Wang, Lechao Cheng, Chaowei Fang et al.
Revisiting the Role of Language Priors in Vision-Language Models
Zhiqiu Lin, Xinyue Chen, Deepak Pathak et al.
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Yihua Zhang, Pingzhi Li, Junyuan Hong et al.
Revisit Self-supervision with Local Structure-from-Motion
Shengjie Zhu, Xiaoming Liu
Revisit the Essence of Distilling Knowledge through Calibration
Wen-Shu Fan, Su Lu, Xin-Chun Li et al.
Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling
Guoqi Yu, Jing Zou, Xiaowei Hu et al.
Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
Fan-Ming Luo, Tian Xu, Xingchen Cao et al.
Reward Design for Justifiable Sequential Decision-Making
Aleksa Sukovic, Goran Radanovic
Reward-Free Curricula for Training Robust World Models
Marc Rigter, Minqi Jiang, Ingmar Posner
Reward-Free Kernel-Based Reinforcement Learning
Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.
Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk et al.
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences
Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban et al.
Reward Penalties on Augmented States for Solving Richly Constrained RL Effectively
Hao Jiang, Tien Mai, Pradeep Varakantham et al.
Reward Shaping for Reinforcement Learning with An Assistant Reward Agent
Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo et al.
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
Rui Yang, Xiaoman Pan, Feng Luo et al.
Reweighted Solutions for Weighted Low Rank Approximation
David Woodruff, Taisuke Yasuda
REWIND: Real-Time Egocentric Whole-Body Motion Diffusion with Exemplar-Based Identity Conditioning
Jian Wang, Zhe Cao, Diogo Luvizon et al.
RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting
Lei Shu, Liangchen Luo, Jayakumar Hoskere et al.
Rewrite the Stars
Xu Ma, Xiyang Dai, Yue Bai et al.
RGBD GS-ICP SLAM
Seongbo Ha, Jiung Yeon, Hyeonwoo Yu
RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos
Hongchi Xia, Yang Fu, Sifei Liu et al.
RG-GAN: Dynamic Regenerative Pruning for Data-Efficient Generative Adversarial Networks
Divya Saxena, Jiannong Cao, Jiahao Xu et al.
RGMComm: Return Gap Minimization via Discrete Communications in Multi-Agent Reinforcement Learning
Jingdi Chen, Tian Lan, Carlee Joe-Wong
RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos
Tanveer Hannan, Mohaiminul Islam, Thomas Seidl et al.
RICA²: Rubric-Informed, Calibrated Assessment of Actions
Abrar Majeedi, Viswanatha Reddy Gajjala, Satya Sai Srinath Namburi GNVV et al.
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation
Zelei Cheng, Xian Wu, Jiahao Yu et al.
RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D
Lingteng Qiu, Guanying Chen, Xiaodong Gu et al.
Rich Human Feedback for Text-to-Image Generation
Youwei Liang, Junfeng He, Gang Li et al.
Rich-Observation Reinforcement Learning with Continuous Latent Dynamics
Yuda Song, Lili Wu, Dylan Foster et al.
Riemannian Accelerated Zeroth-order Algorithm: Improved Robustness and Lower Query Complexity
Chang He, Zhaoye Pan, Xiao Wang et al.
Riemannian coordinate descent algorithms on matrix manifolds
Andi Han, Pratik Kumar Jawanpuria, Bamdev Mishra
Riemannian Multinomial Logistics Regression for SPD Neural Networks
Ziheng Chen, Yue Song, Gaowen Liu et al.
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
Fangzhao Zhang, Mert Pilanci