ICLR Papers
HELM: Hierarchical Encoding for mRNA Language Modeling
Mehdi Yazdani-Jahromi, Mangal Prakash, Tommaso Mansi et al.
HelpSteer2-Preference: Complementing Ratings with Preferences
Zhilin Wang, Alexander Bukharin, Olivier Delalleau et al.
Herald: A Natural Language Annotated Lean 4 Dataset
Guoxiong Gao, Yutong Wang, Jiedong Jiang et al.
HERO: Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning
Ayano Hiranaka, Shang-Fu Chen, Chieh-Hsin Lai et al.
Hessian Free Efficient Single Loop Iterative Differentiation Methods for Bi-Level Optimization Problems
Peiran Yu, Junyi Li, Heng Huang
Hessian-Free Online Certified Unlearning
Xinbao Qiao, Meng Zhang, Ming Tang et al.
HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment
Youhe Jiang, Ran Yan, Binhang Yuan
HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters
Yujie Mo, Runpeng Yu, Xiaofeng Zhu et al.
HGM³: Hierarchical Generative Masked Motion Modeling with Hard Token Mining
Minjae Jeong, Yechan Hwang, Jaejin Lee et al.
HiBug2: Efficient and Interpretable Error Slice Discovery for Comprehensive Model Debugging
Muxi Chen, Chenchen Zhao, Qiang Xu
Hidden in the Noise: Two-Stage Robust Watermarking for Images
Kasra Arabi, Benjamin Feuer, R. Teal Witter et al.
Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level Processing for Robust, Adaptable Language Models
Pit Neitemeier, Björn Deiseroth, Constantin Eichenberg et al.
Hierarchically Encapsulated Representation for Protocol Design in Self-Driving Labs
Yu-Zhe Shi, Mingchen Liu, Fanxu Meng et al.
Hierarchical Uncertainty Estimation for Learning-based Registration in Neuroimaging
Xiaoling Hu, Karthik Gopinath, Peirong Liu et al.
Hierarchical World Models as Visual Whole-Body Humanoid Controllers
Nick Hansen, Jyothir S V, Vlad Sobal et al.
High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws
Muhammed Ildiz, Halil Gozeten, Ege Taga et al.
High-Dimensional Bayesian Optimisation with Gaussian Process Prior Variational Autoencoders
Siddharth Ramchandran, Manuel Haussmann, Harri Lähdesmäki
High-dimension Prototype is a Better Incremental Object Detection Learner
Yanjie Wang, Liqun Chen, Tianming Zhao et al.
High-Dynamic Radar Sequence Prediction for Weather Nowcasting Using Spatiotemporal Coherent Gaussian Representation
Ziye Wang, Yiran Qin, Lin Zeng et al.
Higher-Order Graphon Neural Networks: Approximation and Cut Distance
Daniel Herbst, Stefanie Jegelka
Highly Efficient Self-Adaptive Reward Shaping for Reinforcement Learning
Haozhe Ma, Zhengding Luo, Thanh Vinh Vo et al.
High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity
Qian Yu, Peng-Tao Jiang, Hao Zhang et al.
High-Quality Joint Image and Video Tokenization with Causal VAE
Dawit Mureja Argaw, Xian Liu, Qinsheng Zhang et al.
High-quality Text-to-3D Character Generation with SparseCubes and Sparse Transformers
Jiachen Qian, Hongye Yang, Shuang Wu et al.
HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts
Hongjun Wang, Sagar Vaze, Kai Han
HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models
Qiushi Huang, Tom Ko, Zhan Zhuang et al.
HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction
Shengji Tang, Weicai Ye, Peng Ye et al.
HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts
Mengqi Liao, Wei Chen, Junfeng Shen et al.
Holistically Evaluating the Environmental Impact of Creating Language Models
Jacob Morrison, Clara Na, Jared Fernandez et al.
Holistic Reasoning with Long-Context LMs: A Benchmark for Database Operations on Massive Textual Data
Seiji Maekawa, Hayate Iso, Nikita Bhutani
Holographic Node Representations: Pre-training Task-Agnostic Node Embeddings
Beatrice Bevilacqua, Joshua Robinson, Jure Leskovec et al.
Homomorphism Counts as Structural Encodings for Graph Learning
Linus Bao, Emily Jin, Michael Bronstein et al.
Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
Jingchu Gai, Yiheng Du, Bohang Zhang et al.
HOPE for a Robust Parameterization of Long-memory State Space Models
Annan Yu, Michael W Mahoney, N. Benjamin Erichson
Horizon Generalization in Reinforcement Learning
Vivek Myers, Catherine Ji, Benjamin Eysenbach
Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection
Lei Shen, Zhenheng Tang, Lijun Wu et al.
Hotspot-Driven Peptide Design via Multi-Fragment Autoregressive Extension
Jiahan Li, Tong Chen, Shitong Luo et al.
How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework
Yinuo Ren, Haoxuan Chen, Grant Rotskoff et al.
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen
How Does Critical Batch Size Scale in Pre-training?
Hanlin Zhang, Depen Morwani, Nikhil Vyas et al.
How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?
Seongyun Lee, Geewook Kim, Jiyeon Kim et al.
How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension
Xinnan Dai, Haohao Qu, Yifei Shen et al.
How do we interpret the outputs of a neural network trained on classification?
Yudi Xie
How efficient is LLM-generated code? A rigorous & high-standard benchmark
Ruizhong Qiu, Weiliang Zeng, James Ezick et al.
How Far Are We from True Unlearnability?
Kai Ye, Liangcai Su, Chenxiong Qian
How Feature Learning Can Improve Neural Scaling Laws
Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan
How Gradient descent balances features: A dynamical analysis for two-layer neural networks
Zhenyu Zhu, Fanghui Liu, Volkan Cevher
How Learnable Grids Recover Fine Detail in Low Dimensions: A Neural Tangent Kernel Analysis of Multigrid Parametric Encodings
Samuel Audia, Soheil Feizi, Matthias Zwicker et al.
How Low Can You Go? Searching for the Intrinsic Dimensionality of Complex Networks using Metric Node Embeddings
Nikolaos Nakis, Niels Raunkjær Holm, Andreas Lyhne Fiehn et al.
How many samples are needed to train a deep neural network?
Pegah Golestaneh, Mahsa Taheri, Johannes Lederer