ICML Papers
ReGAL: Refactoring Programs to Discover Generalizable Abstractions
Elias Stengel-Eskin, Archiki Prasad, Mohit Bansal
Regression Learning with Limited Observations of Multivariate Outcomes and Features
Yifan Sun, Grace Yi
Regression with Multi-Expert Deferral
Anqi Mao, Mehryar Mohri, Yutao Zhong
Regularized Q-learning through Robust Averaging
Peter Schmitt-Förster, Tobias Sutter
Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning
Sungmin Cha, Kyunghyun Cho, Taesup Moon
Reinforcement Learning and Regret Bounds for Admission Control
Lucas Weber, Ana Busic, Jiamin Zhu
Reinforcement Learning from Reachability Specifications: PAC Guarantees with Expected Conditional Distance
Jakub Svoboda, Suguman Bansal, Krishnendu Chatterjee
Reinforcement Learning within Tree Search for Fast Macro Placement
Zijie Geng, Jie Wang, Ziyan Liu et al.
Reinformer: Max-Return Sequence Modeling for Offline RL
Zifeng Zhuang, Dengyun Peng, Jinxin Liu et al.
Rejuvenating image-GPT as Strong Visual Representation Learners
Sucheng Ren, Zeyu Wang, Hongru Zhu et al.
Relational DNN Verification With Cross Executional Bound Refinement
Debangshu Banerjee, Gagandeep Singh
Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective
Yang Chen, Cong Fang, Zhouchen Lin et al.
Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise
Thomas Pouplin, Alan Jeffares, Nabeel Seedat et al.
Relaxing the Accurate Imputation Assumption in Doubly Robust Learning for Debiased Collaborative Filtering
Haoxuan Li, Chunyuan Zheng, Shuyi Wang et al.
ReLU Network with Width $d+\mathcal{O}(1)$ Can Achieve Optimal Approximation Rate
Chenghao Liu, Minghua Chen
ReLUs Are Sufficient for Learning Implicit Neural Representations
Joseph Shenouda, Yamin Zhou, Robert Nowak
ReLU to the Rescue: Improve Your On-Policy Actor-Critic with Positive Advantages
Andrew Jesson, Christopher Lu, Gunshi Gupta et al.
ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models
Ziniu Li, Tian Xu, Yushun Zhang et al.
REMEDI: Corrective Transformations for Improved Neural Entropy Estimation
Viktor Nilsson, Anirban Samaddar, Sandeep Madireddy et al.
Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making
Parand A. Alamdari, Toryn Q. Klassen, Elliot Creager et al.
Removing Spurious Concepts from Neural Network Representations via Joint Subspace Estimation
Floris Holstege, Bram Wouters, Noud van Giersbergen et al.
Rényi Pufferfish Privacy: General Additive Noise Mechanisms and Privacy Amplification by Iteration via Shift Reduction Lemmas
Clément Pierquin, Aurélien Bellet, Marc Tommasi et al.
Reparameterized Importance Sampling for Robust Variational Bayesian Neural Networks
Yunfei Long, Zilin Tian, Liguo Zhang et al.
Repeat After Me: Transformers are Better than State Space Models at Copying
Samy Jelassi, David Brandfonbrener, Sham Kakade et al.
Replicable Learning of Large-Margin Halfspaces
Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen et al.
Repoformer: Selective Retrieval for Repository-Level Code Completion
Di Wu, Wasi Ahmad, Dejiao Zhang et al.
Representation Surgery for Multi-Task Model Merging
Enneng Yang, Li Shen, Zhenyi Wang et al.
Representation Surgery: Theory and Practice of Affine Steering
Shashwat Singh, Shauli Ravfogel, Jonathan Herzig et al.
Representing Molecules as Random Walks Over Interpretable Grammars
Michael Sun, Minghao Guo, Weize Yuan et al.
Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling
Weijia Xu, Andrzej Banburski-Fahey, Nebojsa Jojic
Reservoir Computing for Short High-Dimensional Time Series: an Application to SARS-CoV-2 Hospitalization Forecast
Thomas Ferté, Dan Dutartre, Boris Hejblum et al.
Reshape and Adapt for Output Quantization (RAOQ): Quantization-aware Training for In-memory Computing Systems
Bonan Zhang, Chia-Yu Chen, Naveen Verma
Residual-Conditioned Optimal Transport: Towards Structure-Preserving Unpaired and Paired Image Restoration
Xiaole Tang, Xin Hu, Xiang Gu et al.
Residual Quantization with Implicit Neural Codebooks
Iris Huijben, Matthijs Douze, Matthew Muckley et al.
Resisting Stochastic Risks in Diffusion Planners with the Trajectory Aggregation Tree
Lang Feng, Pengjie Gu, Bo An et al.
REST: Efficient and Accelerated EEG Seizure Analysis through Residual State Updates
Arshia Afzal, Grigorios Chrysos, Volkan Cevher et al.
Restoring balance: principled under/oversampling of data for optimal classification
Emanuele Loffredo, Mauro Pastore, Simona Cocco et al.
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten
Chenxu Zhao, Wei Qian, Yangyi Li et al.
Rethinking Data Shapley for Data Selection Tasks: Misleads and Merits
Jiachen Wang, Tianji Yang, James Zou et al.
Rethinking Decision Transformer via Hierarchical Reinforcement Learning
Yi Ma, Jianye Hao, Hebin Liang et al.
Rethinking DP-SGD in Discrete Domain: Exploring Logistic Distribution in the Realm of signSGD
Jonggyu Jang, Seongjin Hwang, Hyun Jong Yang
Rethinking Generative Large Language Model Evaluation for Semantic Comprehension
Fangyun Wei, Xi Chen, Lin Luo
Rethinking Guidance Information to Utilize Unlabeled Samples: A Label Encoding Perspective
Yulong Zhang, Yuan Yao, Shuhao Chen et al.
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data
Rui Miao, Kaixiong Zhou, Yili Wang et al.
Rethinking Momentum Knowledge Distillation in Online Continual Learning
Nicolas Michel, Maorong Wang, Ling Xiao et al.
Rethinking Optimization and Architecture for Tiny Language Models
Yehui Tang, Kai Han, Fangcheng Liu et al.
Rethinking Specificity in SBDD: Leveraging Delta Score and Energy-Guided Diffusion
Bowen Gao, Minsi Ren, Yuyan Ni et al.
Rethinking the Flat Minima Searching in Federated Learning
Taehwan Lee, Sung Whan Yoon
Rethinking Transformers in Solving POMDPs
Chenhao Lu, Ruizhe Shi, Yuyao Liu et al.
Retrieval Across Any Domains via Large-scale Pre-trained Model
Jiexi Yan, Zhihui Yin, Chenghao Xu et al.