ICLR Papers
Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators
Lifan Zhao, Yanyan Shen
Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models
Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon et al.
Rethinking CNN’s Generalization to Backdoor Attack from Frequency Domain
Quanrui Rao, Lin Wang, Wuying Liu
Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors
Hang Yin, Zihao Wang, Yangqiu Song
Rethinking Information-theoretic Generalization: Loss Entropy Induced PAC Bounds
Yuxin Dong, Tieliang Gong, Hong Chen et al.
Rethinking Label Poisoning for GNNs: Pitfalls and Attacks
Vijay Chandra Lingam, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski
Rethinking Model Ensemble in Transfer-based Adversarial Attacks
Huanran Chen, Yichi Zhang, Yinpeng Dong et al.
Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks
Shih-Hsin Wang, Yung-Chang Hsu, Justin Baker et al.
Rethinking the Power of Graph Canonization in Graph Representation Learning with Stability
Zehao Dong, Muhan Zhang, Philip Payne et al.
Rethinking the symmetry-preserving circuits for constrained variational quantum algorithms
Ge Yan, Hongxu Chen, Kaisen Pan et al.
Rethinking the Uniformity Metric in Self-Supervised Learning
Xianghong Fang, Jian Li, Qiang Sun et al.
Retrieval-based Disentangled Representation Learning with Natural Language Supervision
Jiawei Zhou, Xiaoguang Li, Lifeng Shang et al.
Retrieval-Enhanced Contrastive Vision-Text Models
Ahmet Iscen, Mathilde Caron, Alireza Fathi et al.
Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization
Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan et al.
Retrieval is Accurate Generation
Bowen Cao, Deng Cai, Leyang Cui et al.
Retrieval meets Long Context Large Language Models
Peng Xu, Wei Ping, Xianchao Wu et al.
RetroBridge: Modeling Retrosynthesis with Markov Bridges
Ilia Igashov, Arne Schneuing, Marwin Segler et al.
Retro-fallback: retrosynthetic planning in an uncertain world
Austin Tripp, Krzysztof Maziarz, Sarah Lewis et al.
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles et al.
RETSim: Resilient and Efficient Text Similarity
Marina Zhang, Owen Vallis, Aysegul Bumin et al.
REValueD: Regularised Ensemble Value-Decomposition for Factorisable Markov Decision Processes
David Ireland, Giovanni Montana
Reverse Diffusion Monte Carlo
Xunpeng Huang, Hanze Dong, Yifan HAO et al.
Reverse Forward Curriculum Learning for Extreme Sample and Demo Efficiency
Stone Tao, Arth Shukla, Tse-kai Chan et al.
Revisit and Outstrip Entity Alignment: A Perspective of Generative Models
Lingbing Guo, Zhuo Chen, Jiaoyan Chen et al.
Revisiting Data Augmentation in Deep Reinforcement Learning
Jianshu Hu, Yunpeng Jiang, Paul Weng
Revisiting Deep Audio-Text Retrieval Through the Lens of Transportation
Tien Manh Luong, Khai Nguyen, Nhat Ho et al.
Revisiting Link Prediction: a data perspective
Haitao Mao, Juanhui Li, Harry Shomer et al.
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Guozheng Ma, Lu Li, Sen Zhang et al.
Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods
Zijian Liu, Zhengyuan Zhou
Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
Fan-Ming Luo, Tian Xu, Xingchen Cao et al.
Reward Design for Justifiable Sequential Decision-Making
Aleksa Sukovic, Goran Radanovic
Reward-Free Curricula for Training Robust World Models
Marc Rigter, Minqi Jiang, Ingmar Posner
Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk et al.
Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction
Ziyang Yu, Wenbing Huang, Yang Liu
Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?
Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie et al.
Ring Attention with Blockwise Transformers for Near-Infinite Context
Hao Liu, Matei Zaharia, Pieter Abbeel
Risk Bounds of Accelerated SGD for Overparameterized Linear Regression
Xuheng Li, Yihe Deng, Jingfeng Wu et al.
RLCD: Reinforcement Learning from Contrastive Distillation for LM Alignment
Kevin Yang, Dan Klein, Asli Celikyilmaz et al.
RLIF: Interactive Imitation Learning as Reinforcement Learning
Jianlan Luo, Perry Dong, Yuexiang Zhai et al.
R-MAE: Regions Meet Masked Autoencoders
Duy-Kien Nguyen, Yanghao Li, Vaibhav Aggarwal et al.
Robot Fleet Learning via Policy Merging
Lirui Wang, Kaiqing Zhang, Allan Zhou et al.
Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula
Aryaman Reddi, Maximilian Tölle, Jan Peters et al.
Robust agents learn causal world models
Jonathan Richens, Tom Everitt
Robust Angular Synchronization via Directed Graph Neural Networks
Yixuan He, Gesine Reinert, David Wipf et al.
Robust Classification via Regression for Learning with Noisy Labels
Erik Englesson, Hossein Azizpour
Robustifying and Boosting Training-Free Neural Architecture Search
Zhenfeng He, Yao Shu, Zhongxiang Dai et al.
Robustifying State-space Models for Long Sequences via Approximate Diagonalization
Annan Yu, Arnur Nigmetov, Dmitriy Morozov et al.
Robust Model-Based Optimization for Challenging Fitness Landscapes
Saba Ghaffari, Ehsan Saleh, Alex Schwing et al.
Robust Model Based Reinforcement Learning Using $\mathcal{L}_1$ Adaptive Control
Minjun Sung, Sambhu Harimanas Karumanchi, Aditya Gahlawat et al.
Robust NAS under adversarial training: benchmark, theory, and beyond
Yongtao Wu, Fanghui Liu, Carl-Johann Simon-Gabriel et al.