ICML Papers

5,975 papers found • Page 109 of 120

Retrieval-Augmented Score Distillation for Text-to-3D Generation

Junyoung Seo, Susung Hong, Wooseok Jang et al.

ICML 2024 • poster • arXiv:2402.02972

Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness

Honghao Chen, Yurong Zhang, Xiaokun Feng et al.

ICML 2024 • poster • arXiv:2407.08972

Revealing Vision-Language Integration in the Brain with Multimodal Networks

Vighnesh Subramaniam, Colin Conwell, Christopher Wang et al.

ICML 2024 • poster • arXiv:2406.14481

Revisiting Character-level Adversarial Attacks for Language Models

Elias Abad Rocamora, Yongtao Wu, Fanghui Liu et al.

ICML 2024 • poster • arXiv:2405.04346

Revisiting Context Aggregation for Image Matting

Qinglin Liu, Xiaoqian Lv, Quanling Meng et al.

ICML 2024 • poster • arXiv:2304.01171

Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity

Ahmet Alacaoglu, Donghwan Kim, Stephen Wright

ICML 2024 • poster • arXiv:2402.05071

Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning

Mohamed Elsayed, Homayoon Farrahi, Felix Dangel et al.

ICML 2024 • poster • arXiv:2406.03276

Revisiting the Power of Prompt for Visual Tuning

Yuzhu Wang, Lechao Cheng, Chaowei Fang et al.

ICML 2024 • spotlight • arXiv:2402.02382

Revisiting the Role of Language Priors in Vision-Language Models

Zhiqiu Lin, Xinyue Chen, Deepak Pathak et al.

ICML 2024 • poster • arXiv:2306.01879

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Yihua Zhang, Pingzhi Li, Junyuan Hong et al.

ICML 2024 • poster • arXiv:2402.11592

Revisit the Essence of Distilling Knowledge through Calibration

Wen-Shu Fan, Su Lu, Xin-Chun Li et al.

ICML 2024 • poster

Revitalizing Multivariate Time Series Forecasting: Learnable Decomposition with Inter-Series Dependencies and Intra-Series Variations Modeling

Guoqi Yu, Jing Zou, Xiaowei Hu et al.

ICML 2024 • poster • arXiv:2402.12694

Reward-Free Kernel-Based Reinforcement Learning

Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.

ICML 2024 • poster

Reward Model Learning vs. Direct Policy Optimization: A Comparative Analysis of Learning from Human Preferences

Andi Nika, Debmalya Mandal, Parameswaran Kamalaruban et al.

ICML 2024 • poster • arXiv:2403.01857

Reward Shaping for Reinforcement Learning with An Assistant Reward Agent

Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo et al.

ICML 2024 • poster

Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment

Rui Yang, Xiaoman Pan, Feng Luo et al.

ICML 2024 • poster • arXiv:2402.10207

Reweighted Solutions for Weighted Low Rank Approximation

David Woodruff, Taisuke Yasuda

ICML 2024 • poster • arXiv:2406.02431

RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation

Zelei Cheng, Xian Wu, Jiahao Yu et al.

ICML 2024 • spotlight • arXiv:2405.03064

Rich-Observation Reinforcement Learning with Continuous Latent Dynamics

Yuda Song, Lili Wu, Dylan Foster et al.

ICML 2024 • poster • arXiv:2405.19269

Riemannian Accelerated Zeroth-order Algorithm: Improved Robustness and Lower Query Complexity

Chang He, Zhaoye Pan, Xiao Wang et al.

ICML 2024 • poster • arXiv:2405.05713

Riemannian coordinate descent algorithms on matrix manifolds

Andi Han, Pratik Kumar Jawanpuria, Bamdev Mishra

ICML 2024 • poster • arXiv:2406.02225

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • poster • arXiv:2402.02347

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

Zhuowen Yuan, Zidi Xiong, Yi Zeng et al.

ICML 2024 • poster • arXiv:2403.13031

RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences

Jie Cheng, Gang Xiong, Xingyuan Dai et al.

ICML 2024 • spotlight • arXiv:2402.17257

Risk Aware Benchmarking of Large Language Models

Apoorva Nitsure, Youssef Mroueh, Mattia Rigotti et al.

ICML 2024 • poster • arXiv:2310.07132

Risk Estimation in a Markov Cost Process: Lower and Upper Bounds

Gugan Chandrashekhar Mallika Thoppe, Prashanth L.A., Sanjay Bhat

ICML 2024 • poster • arXiv:2310.11389

Risk-Sensitive Policy Optimization via Predictive CVaR Policy Gradient

Ju-Hyun Kim, Seungki Min

ICML 2024 • poster

Risk-Sensitive Reward-Free Reinforcement Learning with CVaR

Xinyi Ni, Guanlin Liu, Lifeng Lai

ICML 2024 • poster

RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback

Harrison Lee, Samrat Phatale, Hassan Mansoor et al.

ICML 2024 • poster • arXiv:2309.00267

RL-CFR: Improving Action Abstraction for Imperfect Information Extensive-Form Games with Reinforcement Learning

Boning Li, Zhixuan Fang, Longbo Huang

ICML 2024 • poster • arXiv:2403.04344

RLVF: Learning from Verbal Feedback without Overgeneralization

Moritz Stephan, Alexander Khazatsky, Eric Mitchell et al.

ICML 2024 • poster • arXiv:2402.10893

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

Yufei Wang, Zhanyi Sun, Jesse Zhang et al.

ICML 2024 • poster • arXiv:2402.03681

RMIB: Representation Matching Information Bottleneck for Matching Text Representations

Haihui Pan, Zhifang Liao, Wenrui Xie et al.

ICML 2024 • poster

RNAFlow: RNA Structure & Sequence Design via Inverse Folding-Based Flow Matching

Divya Nori, Wengong Jin

ICML 2024 • poster • arXiv:2405.18768

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis

Yao Mu, Junting Chen, Qing-Long Zhang et al.

ICML 2024 • poster • arXiv:2402.16117

RoboDreamer: Learning Compositional World Models for Robot Imagination

Siyuan Zhou, Yilun Du, Jiaben Chen et al.

ICML 2024 • poster • arXiv:2404.12377

RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

Yufei Wang, Zhou Xian, Feng Chen et al.

ICML 2024 • poster • arXiv:2311.01455

RoboMP²: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models

Qi Lv, Hao Li, Xiang Deng et al.

ICML 2024 • poster • arXiv:2404.04929

Robust and Conjugate Gaussian Process Regression

Matias Altamirano, Francois-Xavier Briol, Jeremias Knoblauch

ICML 2024 • spotlight • arXiv:2311.00463

Robust Classification via a Single Diffusion Model

Huanran Chen, Yinpeng Dong, Zhengyi Wang et al.

ICML 2024 • poster • arXiv:2305.15241

Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models

Christian Schlarmann, Naman Singh, Francesco Croce et al.

ICML 2024 • poster • arXiv:2402.12336

Robust Data-driven Prescriptiveness Optimization

Mehran Poursoltani, Erick Delage, Angelos Georghiou

ICML 2024 • poster • arXiv:2306.05937

Robust Graph Matching when Nodes are Corrupt

Taha Ameen Ur Rahman, Bruce Hajek

ICML 2024 • poster

Robust Inverse Constrained Reinforcement Learning under Model Misspecification

Sheng Xu, Guiliang Liu

ICML 2024 • oral

Robust Inverse Graphics via Probabilistic Inference

Tuan Anh Le, Pavel Sountsov, Matthew Hoffman et al.

ICML 2024 • poster • arXiv:2402.01915

Robust Learning-Augmented Dictionaries

Ali Zeynali, Shahin Kamali, Mohammad Hajiesmaili

ICML 2024 • poster • arXiv:2402.09687

Robustly Learning Single-Index Models via Alignment Sharpness

Nikos Zarifis, Puqian Wang, Ilias Diakonikolas et al.

ICML 2024 • poster • arXiv:2402.17756

Robust Multi-Task Learning with Excess Risks

Yifei He, Shiji Zhou, Guojun Zhang et al.

ICML 2024 • poster • arXiv:2402.02009

Robustness of Deep Learning for Accelerated MRI: Benefits of Diverse Training Data

Kang Lin, Reinhard Heckel

ICML 2024 • poster • arXiv:2312.10271

Robustness of Nonlinear Representation Learning

Simon Buchholz, Bernhard Schölkopf

ICML 2024 • poster • arXiv:2503.15355