ICLR Papers

6,124 papers found • Page 95 of 123

GraphPulse: Topological representations for temporal graph property prediction

Kiarash Shamsi, Farimah Poursafaei, Shenyang (Andy) Huang et al.

ICLR 2024 oral

Graph Transformers on EHRs: Better Representation Improves Downstream Performance

Raphael Poulain, Rahmatollah Beheshti

ICLR 2024 oral

Grokking as a First Order Phase Transition in Two Layer Networks

Noa Rubin, Inbar Seroussi, Zohar Ringel

ICLR 2024 poster · arXiv:2310.03789

Grokking as the transition from lazy to rich training dynamics

Tanishq Kumar, Blake Bordelon, Samuel Gershman et al.

ICLR 2024 poster · arXiv:2310.06110 · 63 citations

Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding

Noam Levi, Alon Beck, Yohai Bar-Sinai

ICLR 2024 poster · arXiv:2310.16441

GROOT: Learning to Follow Instructions by Watching Gameplay Videos

Shaofei Cai, Bowei Zhang, Zihao Wang et al.

ICLR 2024 spotlight · arXiv:2310.08235

Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models

Hyeonho Jeong, Jong Chul YE

ICLR 2024 oral · arXiv:2310.01107 · 60 citations

Grounded Object-Centric Learning

Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro et al.

ICLR 2024 poster · 16 citations

Grounding Language Plans in Demonstrations Through Counterfactual Perturbations

Yanwei Wang, Johnson (Tsun-Hsuan) Wang, Jiayuan Mao et al.

ICLR 2024 spotlight · arXiv:2403.17124

Grounding Multimodal Large Language Models to the World

Zhiliang Peng, Wenhui Wang, Li Dong et al.

ICLR 2024 poster · arXiv:2306.14824 · 1032 citations

Group Preference Optimization: Few-Shot Alignment of Large Language Models

Siyan Zhao, John Dang, Aditya Grover

ICLR 2024 poster · arXiv:2310.11523 · 46 citations

GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers

Takeru Miyato, Bernhard Jaeger, Max Welling et al.

ICLR 2024 poster · arXiv:2310.10375 · 31 citations

GTMGC: Using Graph Transformer to Predict Molecule’s Ground-State Conformation

Guikun Xu, Yongquan Jiang, PengChuan Lei et al.

ICLR 2024 spotlight

Guaranteed Approximation Bounds for Mixed-Precision Neural Operators

Renbo Tu, Colin White, Jean Kossaifi et al.

ICLR 2024 poster · arXiv:2307.15034

Guess & Sketch: Language Model Guided Transpilation

Celine Lee, Abdulrahman Mahmoud, Michal Kurek et al.

ICLR 2024 poster · arXiv:2309.14396

Guiding Instruction-based Image Editing via Multimodal Large Language Models

Tsu-Jui Fu, Wenze Hu, Xianzhi Du et al.

ICLR 2024 spotlight · arXiv:2309.17102

Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram

Yeongyeon Na, Minje Park, Yunwon Tae et al.

ICLR 2024 oral · arXiv:2402.09450

H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields

Minyoung Park, MIRAE DO, Yeon Jae Shin et al.

ICLR 2024 spotlight · arXiv:2402.08138 · 12 citations

Habitat 3.0: A Co-Habitat for Humans, Avatars, and Robots

Xavier Puig, Eric Undersander, Andrew Szot et al.

ICLR 2024 poster · arXiv:2310.13724 · 206 citations

Hard-Constrained Deep Learning for Climate Downscaling

Paula Harder, Alex Hernandez-Garcia, Venkatesh Ramesh et al.

ICLR 2024 poster · arXiv:2208.05424

Harnessing Density Ratios for Online Reinforcement Learning

Philip Amortila, Dylan Foster, Nan Jiang et al.

ICLR 2024 spotlight · arXiv:2401.09681

Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning

Xiaoxin He, Xavier Bresson, Thomas Laurent et al.

ICLR 2024 poster · arXiv:2305.19523

Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains

Wu Ran, Peirong Ma, Zhiquan He et al.

ICLR 2024 poster · arXiv:2404.12091 · 4 citations

HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments

Qinhong Zhou, Sunli Chen, Yisong Wang et al.

ICLR 2024 poster · arXiv:2401.12975

Headless Language Models: Learning without Predicting with Contrastive Weight Tying

Nathan Godey, Éric Clergerie, Benoît Sagot

ICLR 2024 poster · arXiv:2309.08351 · 4 citations

Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks

Mingqing Xiao, Qingyan Meng, Zongpeng Zhang et al.

ICLR 2024 poster · arXiv:2402.11984 · 13 citations

Heterogeneous Personalized Federated Learning by Local-Global Updates Mixing via Convergence Rate

Meirui Jiang, Anjie Le, Xiaoxiao Li et al.

ICLR 2024 poster

H-GAP: Humanoid Control with a Generalist Planner

Zhengyao Jiang, Yingchen Xu, Nolan Wagener et al.

ICLR 2024 spotlight · arXiv:2312.02682

Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning

Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović et al.

ICLR 2024 poster · arXiv:2306.03013

Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs

Woomin Song, Seunghyuk Oh, Sangwoo Mo et al.

ICLR 2024 poster · arXiv:2404.10308

HIFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance

Junzhe Zhu, Peiye Zhuang, Sanmi Koyejo

ICLR 2024 poster · arXiv:2305.18766

HiGen: Hierarchical Graph Generative Networks

Mahdi Karami

ICLR 2024 poster · arXiv:2305.19337 · 5 citations

High-dimensional SGD aligns with emerging outlier eigenspaces

Gerard Ben Arous, Reza Gheissari, Jiaoyang Huang et al.

ICLR 2024 spotlight

High Fidelity Neural Audio Compression

Yossi Adi, Gabriel Synnaeve, Jade Copet et al.

ICLR 2024 poster

Hindsight PRIORs for Reward Learning from Human Preferences

Mudit Verma, Katherine Metcalf

ICLR 2024 poster · arXiv:2404.08828

Holistic Evaluation of Language Models

Jue Wang, Lucia Zheng, Nathan Kim et al.

ICLR 2024 poster

HoloNets: Spectral Convolutions do extend to Directed Graphs

Christian Koke, Daniel Cremers

ICLR 2024 poster · arXiv:2310.02232

Horizon-Free Regret for Linear Markov Decision Processes

Zihan Zhang, Jason Lee, Yuxin Chen et al.

ICLR 2024 poster · arXiv:2403.10738

Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs

Kaixuan Ji, Qingyue Zhao, Jiafan He et al.

ICLR 2024 poster · arXiv:2305.08359

How connectivity structure shapes rich and lazy learning in neural circuits

Yuhan Helena Liu, Aristide Baratin, Jonathan Cornford et al.

ICLR 2024 poster · arXiv:2310.08513

How Does Unlabeled Data Provably Help Out-of-Distribution Detection?

Xuefeng Du, Zhen Fang, Ilias Diakonikolas et al.

ICLR 2024 poster · arXiv:2402.03502

How do Language Models Bind Entities in Context?

Jiahai Feng, Jacob Steinhardt

ICLR 2024 poster · arXiv:2310.17191

How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations

Tianyu Guo, Wei Hu, Song Mei et al.

ICLR 2024 poster · arXiv:2310.10616

How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models

Pascal Chang, Jingwei Tang, Markus Gross et al.

ICLR 2024 oral · arXiv:2504.03072

How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?

Jingfeng Wu, Difan Zou, Zixiang Chen et al.

ICLR 2024 spotlight · arXiv:2310.08391 · 85 citations

How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing: The Curses of Symmetry and Initialization

Nuoya Xiong, Lijun Ding, Simon Du

ICLR 2024 spotlight · arXiv:2310.01769

How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data

Mihaela Stoian, Salijona Dyrmishi, Maxime Cordy et al.

ICLR 2024 poster · arXiv:2402.04823

How to Capture Higher-order Correlations? Generalizing Matrix Softmax Attention to Kronecker Computation

Josh Alman, Zhao Song

ICLR 2024 spotlight · arXiv:2310.04064

How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions

Lorenzo Pacchiardi, Alex Chan, Sören Mindermann et al.

ICLR 2024 poster · arXiv:2309.15840

How to Fine-Tune Vision Models with SGD

Ananya Kumar, Ruoqi Shen, Sebastien Bubeck et al.

ICLR 2024 poster · arXiv:2211.09359 · 35 citations