ICLR Papers

6,124 papers found • Page 31 of 123

HELM: Hierarchical Encoding for mRNA Language Modeling

Mehdi Yazdani-Jahromi, Mangal Prakash, Tommaso Mansi et al.

ICLR 2025 poster • arXiv:2410.12459 • 9 citations

HelpSteer2-Preference: Complementing Ratings with Preferences

Zhilin Wang, Alexander Bukharin, Olivier Delalleau et al.

ICLR 2025 poster • arXiv:2410.01257 • 109 citations

Herald: A Natural Language Annotated Lean 4 Dataset

Guoxiong Gao, Yutong Wang, Jiedong Jiang et al.

ICLR 2025 poster • arXiv:2410.10878 • 30 citations

HERO: Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning

Ayano Hiranaka, Shang-Fu Chen, Chieh-Hsin Lai et al.

ICLR 2025 poster • arXiv:2410.05116 • 2 citations

Hessian Free Efficient Single Loop Iterative Differentiation Methods for Bi-Level Optimization Problems

Peiran Yu, Junyi Li, Heng Huang

ICLR 2025 poster

Hessian-Free Online Certified Unlearning

Xinbao Qiao, Meng Zhang, Ming Tang et al.

ICLR 2025 poster • arXiv:2404.01712 • 5 citations

HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment

Youhe Jiang, Ran Yan, Binhang Yuan

ICLR 2025 poster • arXiv:2502.07903 • 18 citations

HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters

Yujie Mo, Runpeng Yu, Xiaofeng Zhu et al.

ICLR 2025 poster • arXiv:2411.01155 • 3 citations

HGM³: Hierarchical Generative Masked Motion Modeling with Hard Token Mining

Minjae Jeong, Yechan Hwang, Jaejin Lee et al.

ICLR 2025 poster

HiBug2: Efficient and Interpretable Error Slice Discovery for Comprehensive Model Debugging

Muxi Chen, Chenchen Zhao, Qiang Xu

ICLR 2025 poster • arXiv:2501.16751 • 7 citations

Hidden in the Noise: Two-Stage Robust Watermarking for Images

Kasra Arabi, Benjamin Feuer, R. Teal Witter et al.

ICLR 2025 poster • arXiv:2412.04653 • 11 citations

Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level Processing for Robust, Adaptable Language Models

Pit Neitemeier, Björn Deiseroth, Constantin Eichenberg et al.

ICLR 2025 poster • arXiv:2501.10322 • 11 citations

Hierarchically Encapsulated Representation for Protocol Design in Self-Driving Labs

Yu-Zhe Shi, Mingchen Liu, Fanxu Meng et al.

ICLR 2025 poster • arXiv:2504.03810

Hierarchical Uncertainty Estimation for Learning-based Registration in Neuroimaging

Xiaoling Hu, Karthik Gopinath, Peirong Liu et al.

ICLR 2025 poster • arXiv:2410.09299 • 5 citations

Hierarchical World Models as Visual Whole-Body Humanoid Controllers

Nick Hansen, Jyothir S V, Vlad Sobal et al.

ICLR 2025 poster • arXiv:2405.18418 • 20 citations

High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws

Muhammed Ildiz, Halil Gozeten, Ege Taga et al.

ICLR 2025 poster • arXiv:2410.18837 • 13 citations

High-Dimensional Bayesian Optimisation with Gaussian Process Prior Variational Autoencoders

Siddharth Ramchandran, Manuel Haussmann, Harri Lähdesmäki

ICLR 2025 poster • 4 citations

High-dimension Prototype is a Better Incremental Object Detection Learner

Yanjie Wang, Liqun Chen, Tianming Zhao et al.

ICLR 2025 poster

High-Dynamic Radar Sequence Prediction for Weather Nowcasting Using Spatiotemporal Coherent Gaussian Representation

Ziye Wang, Yiran Qin, Lin Zeng et al.

ICLR 2025 oral • arXiv:2502.14895 • 1 citation

Higher-Order Graphon Neural Networks: Approximation and Cut Distance

Daniel Herbst, Stefanie Jegelka

ICLR 2025 poster • arXiv:2503.14338 • 3 citations

Highly Efficient Self-Adaptive Reward Shaping for Reinforcement Learning

Haozhe Ma, Zhengding Luo, Thanh Vinh Vo et al.

ICLR 2025 poster • arXiv:2408.03029

High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity

Qian Yu, Peng-Tao Jiang, Hao Zhang et al.

ICLR 2025 poster • arXiv:2410.10105 • 5 citations

High-Quality Joint Image and Video Tokenization with Causal VAE

Dawit Mureja Argaw, Xian Liu, Qinsheng Zhang et al.

ICLR 2025 oral • 1 citation

High-quality Text-to-3D Character Generation with SparseCubes and Sparse Transformers

Jiachen Qian, Hongye Yang, Shuang Wu et al.

ICLR 2025 poster

HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts

Hongjun Wang, Sagar Vaze, Kai Han

ICLR 2025 poster • arXiv:2408.04591 • 14 citations

HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models

Qiushi Huang, Tom Ko, Zhan Zhuang et al.

ICLR 2025 poster

HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction

Shengji Tang, Weicai Ye, Peng Ye et al.

ICLR 2025 poster • arXiv:2410.06245 • 34 citations

HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts

Mengqi Liao, Wei Chen, Junfeng Shen et al.

ICLR 2025 poster • 8 citations

Holistically Evaluating the Environmental Impact of Creating Language Models

Jacob Morrison, Clara Na, Jared Fernandez et al.

ICLR 2025 poster • arXiv:2503.05804

Holistic Reasoning with Long-Context LMs: A Benchmark for Database Operations on Massive Textual Data

Seiji Maekawa, Hayate Iso, Nikita Bhutani

ICLR 2025 poster • arXiv:2410.11996 • 10 citations

Holographic Node Representations: Pre-training Task-Agnostic Node Embeddings

Beatrice Bevilacqua, Joshua Robinson, Jure Leskovec et al.

ICLR 2025 poster • 3 citations

Homomorphism Counts as Structural Encodings for Graph Learning

Linus Bao, Emily Jin, Michael Bronstein et al.

ICLR 2025 poster • arXiv:2410.18676

Homomorphism Expressivity of Spectral Invariant Graph Neural Networks

Jingchu Gai, Yiheng Du, Bohang Zhang et al.

ICLR 2025 poster • arXiv:2503.00485 • 3 citations

HOPE for a Robust Parameterization of Long-memory State Space Models

Annan Yu, Michael W Mahoney, N. Benjamin Erichson

ICLR 2025 poster • arXiv:2405.13975 • 9 citations

Horizon Generalization in Reinforcement Learning

Vivek Myers, Catherine Ji, Benjamin Eysenbach

ICLR 2025 poster • arXiv:2501.02709 • 5 citations

Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection

Lei Shen, Zhenheng Tang, Lijun Wu et al.

ICLR 2025 poster • 4 citations

Hotspot-Driven Peptide Design via Multi-Fragment Autoregressive Extension

Jiahan Li, Tong Chen, Shitong Luo et al.

ICLR 2025 poster • arXiv:2411.18463

How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework

Yinuo Ren, Haoxuan Chen, Grant Rotskoff et al.

ICLR 2025 poster • arXiv:2410.03601

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning

Arthur Jacot, Seok Hoan Choi, Yuxiao Wen

ICLR 2025 poster • arXiv:2407.05664 • 6 citations

How Does Critical Batch Size Scale in Pre-training?

Hanlin Zhang, Depen Morwani, Nikhil Vyas et al.

ICLR 2025 poster • arXiv:2410.21676 • 37 citations

How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?

Seongyun Lee, Geewook Kim, Jiyeon Kim et al.

ICLR 2025 poster • arXiv:2410.07571 • 4 citations

How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension

Xinnan Dai, Haohao Qu, Yifei Shen et al.

ICLR 2025 poster • arXiv:2410.05298 • 20 citations

How do we interpret the outputs of a neural network trained on classification?

Yudi Xie

ICLR 2025 poster

How efficient is LLM-generated code? A rigorous & high-standard benchmark

Ruizhong Qiu, Weiliang Zeng, James Ezick et al.

ICLR 2025 poster • arXiv:2406.06647 • 43 citations

How Far Are We from True Unlearnability?

Kai Ye, Liangcai Su, Chenxiong Qian

ICLR 2025 poster • arXiv:2509.08058 • 4 citations

How Feature Learning Can Improve Neural Scaling Laws

Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan

ICLR 2025 poster • arXiv:2409.17858

How Gradient descent balances features: A dynamical analysis for two-layer neural networks

Zhenyu Zhu, Fanghui Liu, Volkan Cevher

ICLR 2025 poster • 1 citation

How Learnable Grids Recover Fine Detail in Low Dimensions: A Neural Tangent Kernel Analysis of Multigrid Parametric Encodings

Samuel Audia, Soheil Feizi, Matthias Zwicker et al.

ICLR 2025 poster • arXiv:2504.13412 • 1 citation

How Low Can You Go? Searching for the Intrinsic Dimensionality of Complex Networks using Metric Node Embeddings

Nikolaos Nakis, Niels Raunkjær Holm, Andreas Lyhne Fiehn et al.

ICLR 2025 poster • arXiv:2503.01723 • 2 citations

How many samples are needed to train a deep neural network?

Pegah Golestaneh, Mahsa Taheri, Johannes Lederer

ICLR 2025 poster • arXiv:2405.16696