Poster Papers
HollowFlow: Efficient Sample Likelihood Evaluation using Hollow Message Passing
Johann Flemming Gloy, Simon Olsson
Holographic Node Representations: Pre-training Task-Agnostic Node Embeddings
Beatrice Bevilacqua, Joshua Robinson, Jure Leskovec et al.
HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Chuhao Zhou, Jianfei Yang
HoloScene: Simulation-Ready Interactive 3D Worlds from a Single Video
Hongchi Xia, Chih-Hao Lin, Hao-Yu Hsu et al.
HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation
Chenzhong Gao, Wei Li, Desheng Weng
HomoGen: Enhanced Video Inpainting via Homography Propagation and Diffusion
Ding Ding, Yueming Pan, Ruoyu Feng et al.
Homogeneous Algorithms Can Reduce Competition in Personalized Pricing
Nathanael Jo, Ashia Wilson, Kathleen Creel et al.
Homogeneous Dynamics Space for Heterogeneous Humans
Xinpeng Liu, Junxuan Liang, Chenshuo Zhang et al.
Homogeneous Keys, Heterogeneous Values: Exploiting Local KV Cache Asymmetry for Long-Context LLMs
Wanyun Cui, Mingwei Xu
Homomorphism Counts as Structural Encodings for Graph Learning
Linus Bao, Emily Jin, Michael Bronstein et al.
Homomorphism Expressivity of Spectral Invariant Graph Neural Networks
Jingchu Gai, Yiheng Du, Bohang Zhang et al.
Homophily Enhanced Graph Domain Adaptation
Ruiyi Fang, Bingheng Li, Jingyu Zhao et al.
HOPE for a Robust Parameterization of Long-memory State Space Models
Annan Yu, Michael W Mahoney, N. Benjamin Erichson
HOP: Heterogeneous Topology-based Multimodal Entanglement for Co-Speech Gesture Generation
Hongye Cheng, Tianyu Wang, Guangsi Shi et al.
Horizon Generalization in Reinforcement Learning
Vivek Myers, Catherine Ji, Benjamin Eysenbach
Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes
Lihan Jiang, Kerui Ren, Mulin Yu et al.
HORP: Human-Object Relation Priors Guided HOI Detection
Pei Geng, Jian Yang, Shanshan Zhang
HORT: Monocular Hand-held Objects Reconstruction with Transformers
Zerui Chen, Rolandos Alexandros Potamias, Shizhe Chen et al.
HOTFormerLoc: Hierarchical Octree Transformer for Versatile Lidar Place Recognition Across Ground and Aerial Views
Ethan Griffiths, Maryam Haghighat, Simon Denman et al.
HOT: Hadamard-based Optimized Training
Seonggon Kim, Juncheol Shin, Seung-taek Woo et al.
Hot-pluggable Federated Learning: Bridging General and Personalized FL via Dynamic Selection
Lei Shen, Zhenheng Tang, Lijun Wu et al.
Hotspot-Driven Peptide Design via Multi-Fragment Autoregressive Extension
Jiahan Li, Tong Chen, Shitong Luo et al.
HoT-VI: Reparameterizable Variational Inference for Capturing Instance-Level High-Order Correlations
Junxi Xiao, Qinliang Su, Zexin Yuan
HouseLayout3D: A Benchmark and Training-free Baseline for 3D Layout Estimation in the Wild
Valentin Bieri, Marie-Julie Rakotosaona, Keisuke Tateno et al.
HouseTour: A Virtual Real Estate A(I)gent
Ata Çelen, Iro Armeni, Daniel Barath et al.
HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding
Chenxin Tao, Shiqian Su, Xizhou Zhu et al.
How Benchmark Prediction from Fewer Data Misses the Mark
Guanhua Zhang, Florian E. Dorner, Moritz Hardt
How Can Objects Help Video-Language Understanding?
Zitian Tang, Shijie Wang, Junho Cho et al.
(How) Can Transformers Predict Pseudo-Random Numbers?
Tao Tao, Darshil Doshi, Dayal Singh Kalra et al.
How Classifier Features Transfer to Downstream: An Asymptotic Analysis in a Two-Layer Model
Hee Bin Yoo, Sungyoon Lee, Cheongjae Jang et al.
How Compositional Generalization and Creativity Improve as Diffusion Models are Trained
Alessandro Favero, Antonio Sclocchi, Francesco Cagnetta et al.
How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence
Hyeong Kyu Choi, Maxim Khanov, Hongxin Wei et al.
How Data Mixing Shapes In-Context Learning: Asymptotic Equivalence for Transformers with MLPs
Samet Demir, Zafer Dogan
How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework
Yinuo Ren, Haoxuan Chen, Grant Rotskoff et al.
How Distributed Collaboration Influences the Diffusion Model Training? A Theoretical Perspective
Jing Qiao, Yu Liu, Yuan Yuan et al.
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Arthur Jacot, Seok Hoan Choi, Yuxiao Wen
How Does Critical Batch Size Scale in Pre-training?
Hanlin Zhang, Depen Morwani, Nikhil Vyas et al.
How does Labeling Error Impact Contrastive Learning? A Perspective from Data Dimensionality Reduction
Jun Chen, Hong Chen, Yonghua Yu et al.
How Does Label Noise Gradient Descent Improve Generalization in the Low SNR Regime?
Wei Huang, Andi Han, Yujin Song et al.
How Does Sequence Modeling Architecture Influence Base Capabilities of Pre-trained Language Models? Exploring Key Architecture Design Principles to Avoid Base Capabilities Degradation
Xin Lu, Yanyan Zhao, Si Wei et al.
How Does Topology Bias Distort Message Passing in Graph Recommender? A Dirichlet Energy Perspective
Yanbiao Ji, Yue Ding, Dan Luo et al.
How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?
Seongyun Lee, Geewook Kim, Jiyeon Kim et al.
How Do Images Align and Complement LiDAR? Towards a Harmonized Multi-modal 3D Panoptic Segmentation
Yining Pan, Qiongjie Cui, Xulei Yang et al.
(How) Do Language Models Track State?
Belinda Li, Carl Guo, Jacob Andreas
How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension
Xinnan Dai, Haohao Qu, Yifei Shen et al.
How Do Multimodal Large Language Models Handle Complex Multimodal Reasoning? Placing Them in An Extensible Escape Game
Ziyue Wang, Yurui Dong, Fuwen Luo et al.
How Do Optical Flow and Textual Prompts Collaborate to Assist in Audio-Visual Semantic Segmentation?
Yujian Lee, Peng Gao, Yongqi Xu et al.
How Do Transformers Learn Variable Binding in Symbolic Programs?
Yiwei Wu, Atticus Geiger, Raphaël Millière
How do we interpret the outputs of a neural network trained on classification?
Yudi Xie
How Effective Can Dropout Be in Multiple Instance Learning?
Wenhui Zhu, Peijie Qiu, Xiwen Chen et al.