ICML Papers
Retrieval-Augmented Score Distillation for Text-to-3D Generation
Junyoung Seo, Susung Hong, Wooseok Jang et al.
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
Honghao Chen, Yurong Zhang, Xiaokun Feng et al.
Revealing Vision-Language Integration in the Brain with Multimodal Networks
Vighnesh Subramaniam, Colin Conwell, Christopher Wang et al.
Revisiting Character-level Adversarial Attacks for Language Models
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu et al.
Revisiting Context Aggregation for Image Matting
Qinglin Liu, Xiaoqian Lv, Quanling Meng et al.
Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity
Ahmet Alacaoglu, Donghwan Kim, Stephen Wright
Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning
Mohamed Elsayed, Homayoon Farrahi, Felix Dangel et al.
Revisiting the Power of Prompt for Visual Tuning
Yuzhu Wang, Lechao Cheng, Chaowei Fang et al.
Revisiting the Role of Language Priors in Vision-Language Models
Zhiqiu Lin, Xinyue Chen, Deepak Pathak et al.
Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark
Yihua Zhang, Pingzhi Li, Junyuan Hong et al.
Revisit the Essence of Distilling Knowledge through Calibration
Wen-Shu Fan, Su Lu, Xin-Chun Li et al.
Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling
Guoqi Yu, Jing Zou, Xiaowei Hu et al.
Reward-Free Kernel-Based Reinforcement Learning
Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.
Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences
Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban et al.
Reward Shaping for Reinforcement Learning with An Assistant Reward Agent
Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo et al.
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
Rui Yang, Xiaoman Pan, Feng Luo et al.
Reweighted Solutions for Weighted Low Rank Approximation
David Woodruff, Taisuke Yasuda
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation
Zelei Cheng, Xian Wu, Jiahao Yu et al.
Rich-Observation Reinforcement Learning with Continuous Latent Dynamics
Yuda Song, Lili Wu, Dylan Foster et al.
Riemannian Accelerated Zeroth-order Algorithm: Improved Robustness and Lower Query Complexity
Chang He, Zhaoye Pan, Xiao Wang et al.
Riemannian coordinate descent algorithms on matrix manifolds
Andi Han, Pratik Kumar Jawanpuria, Bamdev Mishra
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
Fangzhao Zhang, Mert Pilanci
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content
Zhuowen Yuan, Zidi Xiong, Yi Zeng et al.
RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences
Jie Cheng, Gang Xiong, Xingyuan Dai et al.
Risk Aware Benchmarking of Large Language Models
Apoorva Nitsure, Youssef Mroueh, Mattia Rigotti et al.
Risk Estimation in a Markov Cost Process: Lower and Upper Bounds
Gugan Chandrashekhar Mallika Thoppe, Prashanth L.A., Sanjay Bhat
Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient
Ju-Hyun Kim, Seungki Min
Risk-Sensitive Reward-Free Reinforcement Learning with CVaR
Xinyi Ni, Guanlin Liu, Lifeng Lai
RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Harrison Lee, Samrat Phatale, Hassan Mansoor et al.
RL-CFR: Improving Action Abstraction for Imperfect Information Extensive-Form Games with Reinforcement Learning
Boning Li, Zhixuan Fang, Longbo Huang
RLVF: Learning from Verbal Feedback without Overgeneralization
Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.
RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback
Yufei Wang, Zhanyi Sun, Jesse Zhang et al.
RMIB: Representation Matching Information Bottleneck for Matching Text Representations
Haihui Pan, Zhifang Liao, Wenrui Xie et al.
RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching
Divya Nori, Wengong Jin
RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis
Yao Mu, Junting Chen, Qing-Long Zhang et al.
RoboDreamer: Learning Compositional World Models for Robot Imagination
Siyuan Zhou, Yilun Du, Jiaben Chen et al.
RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation
Yufei Wang, Zhou Xian, Feng Chen et al.
RoboMP²: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models
Qi Lv, Hao Li, Xiang Deng et al.
Robust and Conjugate Gaussian Process Regression
Matias Altamirano, François-Xavier Briol, Jeremias Knoblauch
Robust Classification via a Single Diffusion Model
Huanran Chen, Yinpeng Dong, Zhengyi Wang et al.
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
Christian Schlarmann, Naman Singh, Francesco Croce et al.
Robust Data-driven Prescriptiveness Optimization
Mehran Poursoltani, Erick Delage, Angelos Georghiou
Robust Graph Matching when Nodes are Corrupt
Taha Ameen Ur Rahman, Bruce Hajek
Robust Inverse Constrained Reinforcement Learning under Model Misspecification
Sheng Xu, Guiliang Liu
Robust Inverse Graphics via Probabilistic Inference
Tuan Anh Le, Pavel Sountsov, Matthew Hoffman et al.
Robust Learning-Augmented Dictionaries
Ali Zeynali, Shahin Kamali, Mohammad Hajiesmaili
Robustly Learning Single-Index Models via Alignment Sharpness
Nikos Zarifis, Puqian Wang, Ilias Diakonikolas et al.
Robust Multi-Task Learning with Excess Risks
Yifei He, Shiji Zhou, Guojun Zhang et al.
Robustness of Deep Learning for Accelerated MRI: Benefits of Diverse Training Data
Kang Lin, Reinhard Heckel
Robustness of Nonlinear Representation Learning
Simon Buchholz, Bernhard Schölkopf