All Papers

34,598 papers found • Page 634 of 692

Revisiting Domain-Adaptive Object Detection in Adverse Weather by the Generation and Composition of High-Quality Pseudo-Labels

Rui Zhao, Huibin Yan, Shuoyao Wang

ECCV 2024

Revisiting Feature Disentanglement Strategy in Diffusion Training and Breaking Conditional Independence Assumption in Sampling

Wonwoong Cho, Hareesh Ravi, Midhun Harikumar et al.

ECCV 2024

Revisiting Global Translation Estimation with Feature Tracks

Peilin Tao, Hainan Cui, Mengqi Rong et al.

CVPR 2024

Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks

Lulu Xue, Shengshan Hu, Ruizhi Zhao et al.

AAAI 2024 • arXiv:2401.16687
8 citations

Revisiting Graph-Based Fraud Detection in Sight of Heterophily and Spectrum

Fan Xu, Nan Wang, Hao Wu et al.

AAAI 2024 • arXiv:2312.06441
58 citations

Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity

Ahmet Alacaoglu, Donghwan Kim, Stephen Wright

ICML 2024 • arXiv:2402.05071
5 citations

Revisiting Link Prediction: A Data Perspective

Haitao Mao, Juanhui Li, Harry Shomer et al.

ICLR 2024 • arXiv:2310.00793
35 citations

Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis

Zanlin Ni, Yulin Wang, Renping Zhou et al.

CVPR 2024 • arXiv:2406.05478
28 citations

Revisiting Open-Set Panoptic Segmentation

Yufei Yin, Hao Chen, Wengang Zhou et al.

AAAI 2024
1 citation

Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages

Guozheng Ma, Lu Li, Sen Zhang et al.

ICLR 2024 • arXiv:2310.07418
27 citations

Revisiting Sampson Approximations for Geometric Estimation Problems

Felix Rydell, Angelica Torres, Viktor Larsson

CVPR 2024 • arXiv:2401.07114
6 citations

Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning

Mohamed Elsayed, Homayoon Farrahi, Felix Dangel et al.

ICML 2024 • arXiv:2406.03276
7 citations

Revisiting Single Image Reflection Removal In the Wild

Yurui Zhu, Bo Li, Xueyang Fu et al.

CVPR 2024 • arXiv:2311.17320
40 citations

Revisiting Spatial-Frequency Information Integration from a Hierarchical Perspective for Panchromatic and Multi-Spectral Image Fusion

Jiangtong Tan, Jie Huang, Naishan Zheng et al.

CVPR 2024

Revisiting Supervision for Continual Representation Learning

Daniel Marczak, Sebastian Cygert, Tomasz Trzcinski et al.

ECCV 2024 • arXiv:2311.13321
4 citations

Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer

Wenqiao Zhang, Zheqi Lv

CVPR 2024 • arXiv:2311.12905
31 citations

Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods

Zijian Liu, Zhengyuan Zhou

ICLR 2024 • arXiv:2312.08531
27 citations

Revisiting the Power of Prompt for Visual Tuning

Yuzhu Wang, Lechao Cheng, Chaowei Fang et al.

ICML 2024 (spotlight) • arXiv:2402.02382
30 citations

Revisiting the Role of Language Priors in Vision-Language Models

Zhiqiu Lin, Xinyue Chen, Deepak Pathak et al.

ICML 2024 • arXiv:2306.01879
39 citations

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Yihua Zhang, Pingzhi Li, Junyuan Hong et al.

ICML 2024 • arXiv:2402.11592
107 citations

Revisit Self-supervision with Local Structure-from-Motion

Shengjie Zhu, Xiaoming Liu

ECCV 2024

Revisit the Essence of Distilling Knowledge through Calibration

Wen-Shu Fan, Su Lu, Xin-Chun Li et al.

ICML 2024

Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling

Guoqi Yu, Jing Zou, Xiaowei Hu et al.

ICML 2024 • arXiv:2402.12694
62 citations

Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning

Fan-Ming Luo, Tian Xu, Xingchen Cao et al.

ICLR 2024 (spotlight) • arXiv:2310.05422
14 citations

Reward Design for Justifiable Sequential Decision-Making

Aleksa Sukovic, Goran Radanovic

ICLR 2024 • arXiv:2402.15826

Reward-Free Curricula for Training Robust World Models

Marc Rigter, Minqi Jiang, Ingmar Posner

ICLR 2024 • arXiv:2306.09205
13 citations

Reward-Free Kernel-Based Reinforcement Learning

Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.

ICML 2024

Reward Model Ensembles Help Mitigate Overoptimization

Thomas Coste, Usman Anwar, Robert Kirk et al.

ICLR 2024 • arXiv:2310.02743
188 citations

Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences

Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban et al.

ICML 2024 • arXiv:2403.01857
20 citations

Reward Penalties on Augmented States for Solving Richly Constrained RL Effectively

Hao Jiang, Tien Mai, Pradeep Varakantham et al.

AAAI 2024
2 citations

Reward Shaping for Reinforcement Learning with An Assistant Reward Agent

Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo et al.

ICML 2024

Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment

Rui Yang, Xiaoman Pan, Feng Luo et al.

ICML 2024 • arXiv:2402.10207
125 citations

Reweighted Solutions for Weighted Low Rank Approximation

David Woodruff, Taisuke Yasuda

ICML 2024 • arXiv:2406.02431
2 citations

REWIND: Real-Time Egocentric Whole-Body Motion Diffusion with Exemplar-Based Identity Conditioning

Jian Wang, Zhe Cao, Diogo Luvizon et al.

CVPR 2024

RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting

Lei Shu, Liangchen Luo, Jayakumar Hoskere et al.

AAAI 2024 • arXiv:2305.15685
78 citations

Rewrite the Stars

Xu Ma, Xiyang Dai, Yue Bai et al.

CVPR 2024 • arXiv:2403.19967
352 citations

RGBD GS-ICP SLAM

Seongbo Ha, Jiung Yeon, Hyeonwoo Yu

ECCV 2024 • arXiv:2403.12550
81 citations

RGBD Objects in the Wild: Scaling Real-World 3D Object Learning from RGB-D Videos

Hongchi Xia, Yang Fu, Sifei Liu et al.

CVPR 2024 • arXiv:2401.12592
63 citations

RG-GAN: Dynamic Regenerative Pruning for Data-Efficient Generative Adversarial Networks

Divya Saxena, Jiannong Cao, Jiahao Xu et al.

AAAI 2024
10 citations

RGMComm: Return Gap Minimization via Discrete Communications in Multi-Agent Reinforcement Learning

Jingdi Chen, Tian Lan, Carlee Joe-Wong

AAAI 2024 • arXiv:2308.03358
17 citations

RGNet: A Unified Clip Retrieval and Grounding Network for Long Videos

Tanveer Hannan, Mohaiminul Islam, Thomas Seidl et al.

ECCV 2024 • arXiv:2312.06729
12 citations

RICA²: Rubric-Informed, Calibrated Assessment of Actions

Abrar Majeedi, Viswanatha Reddy Gajjala, Satya Sai Srinath Namburi GNVV et al.

ECCV 2024
12 citations

RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation

Zelei Cheng, Xian Wu, Jiahao Yu et al.

ICML 2024 (spotlight) • arXiv:2405.03064
10 citations

RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D

Lingteng Qiu, Guanying Chen, Xiaodong Gu et al.

CVPR 2024 (highlight) • arXiv:2311.16918
174 citations

Rich Human Feedback for Text-to-Image Generation

Youwei Liang, Junfeng He, Gang Li et al.

CVPR 2024 • arXiv:2312.10240
134 citations

Rich-Observation Reinforcement Learning with Continuous Latent Dynamics

Yuda Song, Lili Wu, Dylan Foster et al.

ICML 2024 • arXiv:2405.19269
2 citations

Riemannian Accelerated Zeroth-order Algorithm: Improved Robustness and Lower Query Complexity

Chang He, Zhaoye Pan, Xiao Wang et al.

ICML 2024 • arXiv:2405.05713
8 citations

Riemannian coordinate descent algorithms on matrix manifolds

Andi Han, Pratik Kumar Jawanpuria, Bamdev Mishra

ICML 2024 • arXiv:2406.02225
8 citations

Riemannian Multinomial Logistics Regression for SPD Neural Networks

Ziheng Chen, Yue Song, Gaowen Liu et al.

CVPR 2024 • arXiv:2305.11288
8 citations

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • arXiv:2402.02347
35 citations