Poster Papers

24,624 papers found • Page 129 of 493

HollowFlow: Efficient Sample Likelihood Evaluation using Hollow Message Passing

Johann Flemming Gloy, Simon Olsson

NEURIPS 2025 • arXiv:2510.21542
4 citations

Holographic Node Representations: Pre-training Task-Agnostic Node Embeddings

Beatrice Bevilacqua, Joshua Robinson, Jure Leskovec et al.

ICLR 2025
3 citations

HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning

Chuhao Zhou, Jianfei Yang

NEURIPS 2025 • arXiv:2505.17645

HoloScene: Simulation‑Ready Interactive 3D Worlds from a Single Video

Hongchi Xia, Chih-Hao Lin, Hao-Yu Hsu et al.

NEURIPS 2025 • arXiv:2510.05560
2 citations

HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation

Chenzhong Gao, Wei Li, Desheng Weng

ICCV 2025

HomoGen: Enhanced Video Inpainting via Homography Propagation and Diffusion

Ding Ding, Yueming Pan, Ruoyu Feng et al.

CVPR 2025

Homogeneous Algorithms Can Reduce Competition in Personalized Pricing

Nathanael Jo, Ashia Wilson, Kathleen Creel et al.

NEURIPS 2025 • arXiv:2503.15634
2 citations

Homogeneous Dynamics Space for Heterogeneous Humans

Xinpeng Liu, Junxuan Liang, Chenshuo Zhang et al.

CVPR 2025 • arXiv:2412.06146
1 citation

Homogeneous Keys, Heterogeneous Values: Exploiting Local KV Cache Asymmetry for Long-Context LLMs

Wanyun Cui, Mingwei Xu

NEURIPS 2025 • arXiv:2506.05410

Homomorphism Counts as Structural Encodings for Graph Learning

Linus Bao, Emily Jin, Michael Bronstein et al.

ICLR 2025 • arXiv:2410.18676
8 citations

Homomorphism Expressivity of Spectral Invariant Graph Neural Networks

Jingchu Gai, Yiheng Du, Bohang Zhang et al.

ICLR 2025 • arXiv:2503.00485
3 citations

Homophily Enhanced Graph Domain Adaptation

Ruiyi Fang, Bingheng Li, Jingyu Zhao et al.

ICML 2025 • arXiv:2505.20089
5 citations

HOPE for a Robust Parameterization of Long-memory State Space Models

Annan Yu, Michael W Mahoney, N. Benjamin Erichson

ICLR 2025 • arXiv:2405.13975
9 citations

HOP: Heterogeneous Topology-based Multimodal Entanglement for Co-Speech Gesture Generation

Hongye Cheng, Tianyu Wang, Guangsi Shi et al.

CVPR 2025 • arXiv:2503.01175
4 citations

Horizon Generalization in Reinforcement Learning

Vivek Myers, Catherine Ji, Benjamin Eysenbach

ICLR 2025 • arXiv:2501.02709
8 citations

Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes

Lihan Jiang, Kerui Ren, Mulin Yu et al.

CVPR 2025 • arXiv:2412.01745
12 citations

HORP: Human-Object Relation Priors Guided HOI Detection

Pei Geng, Jian Yang, Shanshan Zhang

CVPR 2025
2 citations

HORT: Monocular Hand-held Objects Reconstruction with Transformers

Zerui Chen, Rolandos Alexandros Potamias, Shizhe Chen et al.

ICCV 2025 • arXiv:2503.21313
4 citations

HOTFormerLoc: Hierarchical Octree Transformer for Versatile Lidar Place Recognition Across Ground and Aerial Views

Ethan Griffiths, Maryam Haghighat, Simon Denman et al.

CVPR 2025 • arXiv:2503.08140
2 citations

HOT: Hadamard-based Optimized Training

Seonggon Kim, Juncheol Shin, Seung-taek Woo et al.

CVPR 2025 • arXiv:2503.21261

Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection

Lei Shen, Zhenheng Tang, Lijun Wu et al.

ICLR 2025
4 citations

Hotspot-Driven Peptide Design via Multi-Fragment Autoregressive Extension

Jiahan Li, Tong Chen, Shitong Luo et al.

ICLR 2025 • arXiv:2411.18463
9 citations

HoT-VI: Reparameterizable Variational Inference for Capturing Instance-Level High-Order Correlations

Junxi Xiao, Qinliang Su, Zexin Yuan

NEURIPS 2025

HouseLayout3D: A Benchmark and Training-free Baseline for 3D Layout Estimation in the Wild

Valentin Bieri, Marie-Julie Rakotosaona, Keisuke Tateno et al.

NEURIPS 2025 • arXiv:2512.02450

HouseTour: A Virtual Real Estate A(I)gent

Ata Çelen, Iro Armeni, Daniel Barath et al.

ICCV 2025 • arXiv:2510.18054
2 citations

HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding

Chenxin Tao, Shiqian Su, Xizhou Zhu et al.

CVPR 2025 • arXiv:2412.16158
5 citations

How Benchmark Prediction from Fewer Data Misses the Mark

Guanhua Zhang, Florian E. Dorner, Moritz Hardt

NEURIPS 2025 • arXiv:2506.07673
5 citations

How Can Objects Help Video-Language Understanding?

Zitian Tang, Shijie Wang, Junho Cho et al.

ICCV 2025 • arXiv:2504.07454
3 citations

(How) Can Transformers Predict Pseudo-Random Numbers?

Tao Tao, Darshil Doshi, Dayal Singh Kalra et al.

ICML 2025 • arXiv:2502.10390
7 citations

How Classifier Features Transfer to Downstream: An Asymptotic Analysis in a Two-Layer Model

Hee Bin Yoo, Sungyoon Lee, Cheongjae Jang et al.

NEURIPS 2025

How Compositional Generalization and Creativity Improve as Diffusion Models are Trained

Alessandro Favero, Antonio Sclocchi, Francesco Cagnetta et al.

ICML 2025 • arXiv:2502.12089
14 citations

How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence

Hyeong Kyu Choi, Maxim Khanov, Hongxin Wei et al.

ICML 2025
13 citations

How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs

Samet Demir, Zafer Dogan

NEURIPS 2025 • arXiv:2510.25753

How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework

Yinuo Ren, Haoxuan Chen, Grant Rotskoff et al.

ICLR 2025 • arXiv:2410.03601
29 citations

How Distributed Collaboration Influences the Diffusion Model Training? A Theoretical Perspective

Jing Qiao, Yu Liu, Yuan Yuan et al.

ICML 2025

How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning

Arthur Jacot, Seok Hoan Choi, Yuxiao Wen

ICLR 2025 • arXiv:2407.05664
6 citations

How Does Critical Batch Size Scale in Pre-training?

Hanlin Zhang, Depen Morwani, Nikhil Vyas et al.

ICLR 2025 • arXiv:2410.21676
43 citations

How does Labeling Error Impact Contrastive Learning? A Perspective from Data Dimensionality Reduction

Jun Chen, Hong Chen, Yonghua Yu et al.

ICML 2025 • arXiv:2507.11161

How Does Label Noise Gradient Descent Improve Generalization in the Low SNR Regime?

Wei Huang, Andi Han, Yujin Song et al.

NEURIPS 2025 • arXiv:2510.17526
1 citation

How Does Sequence Modeling Architecture Influence Base Capabilities of Pre-trained Language Models? Exploring Key Architecture Design Principles to Avoid Base Capabilities Degradation

Xin Lu, Yanyan Zhao, Si Wei et al.

NEURIPS 2025 • arXiv:2505.18522

How Does Topology Bias Distort Message Passing in Graph Recommender? A Dirichlet Energy Perspective

Yanbiao Ji, Yue Ding, Dan Luo et al.

NEURIPS 2025

How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?

Seongyun Lee, Geewook Kim, Jiyeon Kim et al.

ICLR 2025 • arXiv:2410.07571
4 citations

How Do Images Align and Complement LiDAR? Towards a Harmonized Multi-modal 3D Panoptic Segmentation

Yining Pan, Qiongjie Cui, Xulei Yang et al.

ICML 2025 • arXiv:2505.18956
5 citations

(How) Do Language Models Track State?

Belinda Li, Carl Guo, Jacob Andreas

ICML 2025 • arXiv:2503.02854
17 citations

How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension

Xinnan Dai, Haohao Qu, Yifei Shen et al.

ICLR 2025 • arXiv:2410.05298
20 citations

How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game

Ziyue Wang, Yurui Dong, Fuwen Luo et al.

ICCV 2025

How Do Optical Flow and Textual Prompts Collaborate to Assist in Audio-Visual Semantic Segmentation?

Yujian Lee, Peng Gao, Yongqi Xu et al.

ICCV 2025 • arXiv:2601.08133
1 citation

How Do Transformers Learn Variable Binding in Symbolic Programs?

Yiwei Wu, Atticus Geiger, Raphaël Millière

ICML 2025 • arXiv:2505.20896
8 citations

How do we interpret the outputs of a neural network trained on classification?

Yudi Xie

ICLR 2025

How Effective Can Dropout Be in Multiple Instance Learning?

Wenhui Zhu, Peijie Qiu, Xiwen Chen et al.

ICML 2025 • arXiv:2504.14783
2 citations