ICLR Papers
6,124 papers found • Page 95 of 123
GraphPulse: Topological representations for temporal graph property prediction
Kiarash Shamsi, Farimah Poursafaei, Shenyang (Andy) Huang et al.
Graph Transformers on EHRs: Better Representation Improves Downstream Performance
Raphael Poulain, Rahmatollah Beheshti
Grokking as a First Order Phase Transition in Two Layer Networks
Noa Rubin, Inbar Seroussi, Zohar Ringel
Grokking as the transition from lazy to rich training dynamics
Tanishq Kumar, Blake Bordelon, Samuel Gershman et al.
Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding
Noam Levi, Alon Beck, Yohai Bar-Sinai
GROOT: Learning to Follow Instructions by Watching Gameplay Videos
Shaofei Cai, Bowei Zhang, Zihao Wang et al.
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
Hyeonho Jeong, Jong Chul YE
Grounded Object-Centric Learning
Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro et al.
Grounding Language Plans in Demonstrations Through Counterfactual Perturbations
Yanwei Wang, Johnson (Tsun-Hsuan) Wang, Jiayuan Mao et al.
Grounding Multimodal Large Language Models to the World
Zhiliang Peng, Wenhui Wang, Li Dong et al.
Group Preference Optimization: Few-Shot Alignment of Large Language Models
Siyan Zhao, John Dang, Aditya Grover
GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers
Takeru Miyato, Bernhard Jaeger, Max Welling et al.
GTMGC: Using Graph Transformer to Predict Molecule’s Ground-State Conformation
Guikun Xu, Yongquan Jiang, PengChuan Lei et al.
Guaranteed Approximation Bounds for Mixed-Precision Neural Operators
Renbo Tu, Colin White, Jean Kossaifi et al.
Guess & Sketch: Language Model Guided Transpilation
Celine Lee, Abdulrahman Mahmoud, Michal Kurek et al.
Guiding Instruction-based Image Editing via Multimodal Large Language Models
Tsu-Jui Fu, Wenze Hu, Xianzhi Du et al.
Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram
Yeongyeon Na, Minje Park, Yunwon Tae et al.
H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields
Minyoung Park, MIRAE DO, Yeon Jae Shin et al.
Habitat 3.0: A Co-Habitat for Humans, Avatars, and Robots
Xavier Puig, Eric Undersander, Andrew Szot et al.
Hard-Constrained Deep Learning for Climate Downscaling
Paula Harder, Alex Hernandez-Garcia, Venkatesh Ramesh et al.
Harnessing Density Ratios for Online Reinforcement Learning
Philip Amortila, Dylan Foster, Nan Jiang et al.
Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning
Xiaoxin He, Xavier Bresson, Thomas Laurent et al.
Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains
Wu Ran, Peirong Ma, Zhiquan He et al.
HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments
Qinhong Zhou, Sunli Chen, Yisong Wang et al.
Headless Language Models: Learning without Predicting with Contrastive Weight Tying
Nathan Godey, Éric Clergerie, Benoît Sagot
Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
Mingqing Xiao, Qingyan Meng, Zongpeng Zhang et al.
Heterogeneous Personalized Federated Learning by Local-Global Updates Mixing via Convergence Rate
Meirui Jiang, Anjie Le, Xiaoxiao Li et al.
H-GAP: Humanoid Control with a Generalist Planner
Zhengyao Jiang, Yingchen Xu, Nolan Wagener et al.
Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović et al.
Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs
Woomin Song, Seunghyuk Oh, Sangwoo Mo et al.
HIFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance
Junzhe Zhu, Peiye Zhuang, Sanmi Koyejo
HiGen: Hierarchical Graph Generative Networks
Mahdi Karami
High-dimensional SGD aligns with emerging outlier eigenspaces
Gerard Ben Arous, Reza Gheissari, Jiaoyang Huang et al.
High Fidelity Neural Audio Compression
Yossi Adi, Gabriel Synnaeve, Jade Copet et al.
Hindsight PRIORs for Reward Learning from Human Preferences
Mudit Verma, Katherine Metcalf
Holistic Evaluation of Language Models
Jue Wang, Lucia Zheng, Nathan Kim et al.
HoloNets: Spectral Convolutions do extend to Directed Graphs
Christian Koke, Daniel Cremers
Horizon-Free Regret for Linear Markov Decision Processes
Zihan Zhang, Jason Lee, Yuxin Chen et al.
Horizon-free Reinforcement Learning in Adversarial Linear Mixture MDPs
Kaixuan Ji, Qingyue Zhao, Jiafan He et al.
How connectivity structure shapes rich and lazy learning in neural circuits
Yuhan Helena Liu, Aristide Baratin, Jonathan Cornford et al.
How Does Unlabeled Data Provably Help Out-of-Distribution Detection?
Xuefeng Du, Zhen Fang, Ilias Diakonikolas et al.
How do Language Models Bind Entities in Context?
Jiahai Feng, Jacob Steinhardt
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations
Tianyu Guo, Wei Hu, Song Mei et al.
How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models
Pascal Chang, Jingwei Tang, Markus Gross et al.
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?
Jingfeng Wu, Difan Zou, Zixiang Chen et al.
How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing: The Curses of Symmetry and Initialization
Nuoya Xiong, Lijun Ding, Simon Du
How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data
Mihaela Stoian, Salijona Dyrmishi, Maxime Cordy et al.
How to Capture Higher-order Correlations? Generalizing Matrix Softmax Attention to Kronecker Computation
Josh Alman, Zhao Song
How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions
Lorenzo Pacchiardi, Alex Chan, Sören Mindermann et al.
How to Fine-Tune Vision Models with SGD
Ananya Kumar, Ruoqi Shen, Sebastien Bubeck et al.