Poster Papers
24,624 papers found • Page 130 of 493
How efficient is LLM-generated code? A rigorous & high-standard benchmark
Ruizhong Qiu, Weiliang Zeng, James Ezick et al.
How Ensembles of Distilled Policies Improve Generalisation in Reinforcement Learning
Max Weltevrede, Moritz Zanger, Matthijs Spaan et al.
How Expressive are Knowledge Graph Foundation Models?
Xingyue Huang, Pablo Barcelo, Michael Bronstein et al.
How Far are AI-generated Videos from Simulating the 3D Visual World: A Learned 3D Evaluation Approach
Chirui Chang, Jiahui Liu, Zhengzhe Liu et al.
How Far Are We from Optimal Reasoning Efficiency?
Jiaxuan Gao, Shu Yan, Qixin Tan et al.
How Far Are We from True Unlearnability?
Kai Ye, Liangcai Su, Chenxiong Qian
How Far Is Video Generation from World Model: A Physical Law Perspective
Bingyi Kang, Yang Yue, Rui Lu et al.
How Feature Learning Can Improve Neural Scaling Laws
Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan
How Gradient descent balances features: A dynamical analysis for two-layer neural networks
Zhenyu Zhu, Fanghui Liu, Volkan Cevher
How Learnable Grids Recover Fine Detail in Low Dimensions: A Neural Tangent Kernel Analysis of Multigrid Parametric Encodings
Samuel Audia, Soheil Feizi, Matthias Zwicker et al.
How Low Can You Go? Searching for the Intrinsic Dimensionality of Complex Networks using Metric Node Embeddings
Nikolaos Nakis, Niels Raunkjær Holm, Andreas Lyhne Fiehn et al.
How Many Domains Suffice for Domain Generalization? A Tight Characterization via the Domain Shattering Dimension
Cynthia Dwork, Lunjia Hu, Han Shao
How many samples are needed to train a deep neural network?
Pegah Golestaneh, Mahsa Taheri, Johannes Lederer
How Many Tokens Do 3D Point Cloud Transformer Architectures Really Need?
Tuan Tran Anh, Duy M. H. Nguyen, Hoai-Chau Tran et al.
How Memory in Optimization Algorithms Implicitly Modifies the Loss
Matias Cattaneo, Boris Shigida
How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees
Haonan Yuan, Qingyun Sun, Junhua Shi et al.
How Much Can We Forget about Data Contamination?
Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko et al.
How Much is a Noisy Image Worth? Data Scaling Laws for Ambient Diffusion
Giannis Daras, Yeshwanth Cherapanamjeri, Constantinos C Daskalakis
How Much is Unseen Depends Chiefly on Information About the Seen
Seongmin Lee, Marcel Boehme
How much of my dataset did you use? Quantitative Data Usage Inference in Machine Learning
Yao Tong, Jiayuan Ye, Sajjad Zarifzadeh et al.
How new data permeates LLM knowledge and how to dilute it
Chen Sun, Renat Aksitov, Andrey Zhmoginov et al.
How Particle System Theory Enhances Hypergraph Message Passing
Yixuan Ma, Kai Yi, Pietro Lió et al.
How to Auto-optimize Prompts for Domain Tasks? Adaptive Prompting and Reasoning through Evolutionary Domain Knowledge Adaptation
Yang Zhao, Pu Wang, Hao Frank Yang
How to build a consistency model: Learning flow maps via self-distillation
Nicholas Boffi, Michael Albergo, Eric Vanden-Eijnden
How to Evaluate and Mitigate IP Infringement in Visual Generative AI?
Zhenting Wang, Chen Chen, Vikash Sehwag et al.
How to Evaluate Reward Models for RLHF
Evan Frick, Tianle Li, Connor Chen et al.
How to Find the Exact Pareto Front for Multi-Objective MDPs?
Yining Li, Peizhong Ju, Ness Shroff
How to Learn a Star: Binary Classification with Starshaped Polyhedral Sets
Marie-Charlotte Brandenburg, Katharina Jochemko
How To Make Your Cell Tracker Say "I dunno!"
Richard D Paul, Johannes Seiffarth, David Rügamer et al.
How to Merge Your Multimodal Models Over Time?
Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth et al.
How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects
Wonkwang Lee, Jongwon Jeong, Taehong Moon et al.
How to Probe: Simple Yet Effective Techniques for Improving Post-hoc Explanations
Siddhartha Gairola, Moritz Böhle, Francesco Locatello et al.
How to Scale Second-Order Optimization
Charlie Chen, Shikai Qiu, Hoang Phan et al.
How to set AdamW's weight decay as you scale model and dataset size
Xi Wang, Laurence Aitchison
How to Synthesize Text Data without Model Collapse?
Xuekai Zhu, Daixuan Cheng, Hengli Li et al.
How to Train Your LLM Web Agent: A Statistical Diagnosis
Dheeraj Vattikonda, Santhoshi Ravichandran, Emiliano Penaloza et al.
How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies
Piotr Kubaty, Bartosz Wójcik, Bartłomiej Krzepkowski et al.
How to Verify Any (Reasonable) Distribution Property: Computationally Sound Argument Systems for Distributions
Tal Herman, Guy Rothblum
How to visualize training dynamics in neural networks
Michael Hu, Shreyans Jain, Sangam Chaulagain et al.
How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias
Ruiquan Huang, Yingbin Liang, Jing Yang
How Transformers Learn Structured Data: Insights From Hierarchical Filtering
Jerome Garnier-Brun, Marc Mezard, Emanuele Moscato et al.
How Two-Layer Neural Networks Learn, One (Giant) Step at a Time
Yatin Dandi, Florent Krzakala, Bruno Loureiro et al.
How Would It Sound? Material-Controlled Multimodal Acoustic Profile Generation for Indoor Scenes
Mahnoor Saad, Ziad Al-Halah
HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation
Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.
HPS: Hard Preference Sampling for Human Preference Alignment
Xiandong Zou, Wanyu Lin, Yuchen Li et al.
HPSv3: Towards Wide-Spectrum Human Preference Score
Yuhang Ma, Keqiang Sun, Xiaoshi Wu et al.
HQA-VLAttack: Towards High Quality Adversarial Attack on Vision-Language Pre-Trained Models
Han Liu, Jiaqi Li, Zhi Xu et al.
HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets and CLIP Models
Zhixiang Wei, Guangting Wang, Xiaoxiao Ma et al.
HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing
Mude Hui, Siwei Yang, Bingchen Zhao et al.
HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes
Xin Lin, Shi Luo, Xiaojun Shan et al.