Poster Papers

24,624 papers found

How efficient is LLM-generated code? A rigorous & high-standard benchmark

Ruizhong Qiu, Weiliang Zeng, James Ezick et al.

ICLR 2025 • arXiv:2406.06647
45 citations

How Ensembles of Distilled Policies Improve Generalisation in Reinforcement Learning

Max Weltevrede, Moritz Zanger, Matthijs Spaan et al.

NEURIPS 2025 • arXiv:2505.16581

How Expressive are Knowledge Graph Foundation Models?

Xingyue Huang, Pablo Barcelo, Michael Bronstein et al.

ICML 2025 • arXiv:2502.13339
11 citations

How Far are AI-generated Videos from Simulating the 3D Visual World: A Learned 3D Evaluation Approach

Chirui Chang, Jiahui Liu, Zhengzhe Liu et al.

ICCV 2025 • arXiv:2406.19568
12 citations

How Far Are We from Optimal Reasoning Efficiency?

Jiaxuan Gao, Shu Yan, Qixin Tan et al.

NEURIPS 2025 • arXiv:2506.07104
7 citations

How Far Are We from True Unlearnability?

Kai Ye, Liangcai Su, Chenxiong Qian

ICLR 2025 • arXiv:2509.08058
4 citations

How Far Is Video Generation from World Model: A Physical Law Perspective

Bingyi Kang, Yang Yue, Rui Lu et al.

ICML 2025 • arXiv:2411.02385
126 citations

How Feature Learning Can Improve Neural Scaling Laws

Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan

ICLR 2025 • arXiv:2409.17858
40 citations

How Gradient descent balances features: A dynamical analysis for two-layer neural networks

Zhenyu Zhu, Fanghui Liu, Volkan Cevher

ICLR 2025
1 citation

How Learnable Grids Recover Fine Detail in Low Dimensions: A Neural Tangent Kernel Analysis of Multigrid Parametric Encodings

Samuel Audia, Soheil Feizi, Matthias Zwicker et al.

ICLR 2025 • arXiv:2504.13412
1 citation

How Low Can You Go? Searching for the Intrinsic Dimensionality of Complex Networks using Metric Node Embeddings

Nikolaos Nakis, Niels Raunkjær Holm, Andreas Lyhne Fiehn et al.

ICLR 2025 • arXiv:2503.01723
2 citations

How Many Domains Suffice for Domain Generalization? A Tight Characterization via the Domain Shattering Dimension

Cynthia Dwork, Lunjia Hu, Han Shao

NEURIPS 2025 • arXiv:2506.16704
1 citation

How many samples are needed to train a deep neural network?

Pegah Golestaneh, Mahsa Taheri, Johannes Lederer

ICLR 2025 • arXiv:2405.16696
8 citations

How Many Tokens Do 3D Point Cloud Transformer Architectures Really Need?

Tuan Tran Anh, Duy M. H. Nguyen, Hoai-Chau Tran et al.

NEURIPS 2025 • arXiv:2511.05449
1 citation

How Memory in Optimization Algorithms Implicitly Modifies the Loss

Matias Cattaneo, Boris Shigida

NEURIPS 2025 • arXiv:2502.02132
2 citations

How Much Can Transfer? BRIDGE: Bounded Multi-Domain Graph Foundation Model with Generalization Guarantees

Haonan Yuan, Qingyun Sun, Junhua Shi et al.

ICML 2025
10 citations

How Much Can We Forget about Data Contamination?

Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko et al.

ICML 2025 • arXiv:2410.03249
11 citations

How Much is a Noisy Image Worth? Data Scaling Laws for Ambient Diffusion.

Giannis Daras, Yeshwanth Cherapanamjeri, Constantinos C Daskalakis

ICLR 2025 • arXiv:2411.02780
16 citations

How Much is Unseen Depends Chiefly on Information About the Seen

Seongmin Lee, Marcel Boehme

ICLR 2025 • arXiv:2402.05835
2 citations

How much of my dataset did you use? Quantitative Data Usage Inference in Machine Learning

Yao Tong, Jiayuan Ye, Sajjad Zarifzadeh et al.

ICLR 2025

How new data permeates LLM knowledge and how to dilute it

Chen Sun, Renat Aksitov, Andrey Zhmoginov et al.

ICLR 2025 • arXiv:2504.09522
8 citations

How Particle System Theory Enhances Hypergraph Message Passing

Yixuan Ma, Kai Yi, Pietro Lió et al.

NEURIPS 2025 • arXiv:2505.18505

How to Auto-optimize Prompts for Domain Tasks? Adaptive Prompting and Reasoning through Evolutionary Domain Knowledge Adaptation

Yang Zhao, Pu Wang, Hao Frank Yang

NEURIPS 2025 • arXiv:2510.21148
1 citation

How to build a consistency model: Learning flow maps via self-distillation

Nicholas Boffi, Michael Albergo, Eric Vanden-Eijnden

NEURIPS 2025 • arXiv:2505.18825
31 citations

How to Evaluate and Mitigate IP Infringement in Visual Generative AI?

Zhenting Wang, Chen Chen, Vikash Sehwag et al.

ICML 2025

How to Evaluate Reward Models for RLHF

Evan Frick, Tianle Li, Connor Chen et al.

ICLR 2025 • arXiv:2410.14872
58 citations

How to Find the Exact Pareto Front for Multi-Objective MDPs?

Yining Li, Peizhong Ju, Ness Shroff

ICLR 2025 • arXiv:2410.15557
2 citations

How to Learn a Star: Binary Classification with Starshaped Polyhedral Sets

Marie-Charlotte Brandenburg, Katharina Jochemko

NEURIPS 2025 • arXiv:2505.01346

How To Make Your Cell Tracker Say "I dunno!"

Richard D Paul, Johannes Seiffarth, David Rügamer et al.

ICCV 2025
3 citations

How to Merge Your Multimodal Models Over Time?

Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth et al.

CVPR 2025 • arXiv:2412.06712
16 citations

How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects

Wonkwang Lee, Jongwon Jeong, Taehong Moon et al.

ICML 2025 • arXiv:2503.04257
3 citations

How to Probe: Simple Yet Effective Techniques for Improving Post-hoc Explanations

Siddhartha Gairola, Moritz Böhle, Francesco Locatello et al.

ICLR 2025 • arXiv:2503.00641
6 citations

How to Scale Second-Order Optimization

Charlie Chen, Shikai Qiu, Hoang Phan et al.

NEURIPS 2025

How to set AdamW's weight decay as you scale model and dataset size

Xi Wang, Laurence Aitchison

ICML 2025 • arXiv:2405.13698
30 citations

How to Synthesize Text Data without Model Collapse?

Xuekai Zhu, Daixuan Cheng, Hengli Li et al.

ICML 2025 • arXiv:2412.14689
14 citations

How to Train Your LLM Web Agent: A Statistical Diagnosis

Dheeraj Vattikonda, Santhoshi Ravichandran, Emiliano Penaloza et al.

NEURIPS 2025 • arXiv:2507.04103
6 citations

How to Train Your Multi-Exit Model? Analyzing the Impact of Training Strategies

Piotr Kubaty, Bartosz Wójcik, Bartłomiej Krzepkowski et al.

ICML 2025 • arXiv:2407.14320
1 citation

How to Verify Any (Reasonable) Distribution Property: Computationally Sound Argument Systems for Distributions

Tal Herman, Guy Rothblum

ICLR 2025 • arXiv:2409.06594
5 citations

How to visualize training dynamics in neural networks

Michael Hu, Shreyans Jain, Sangam Chaulagain et al.

ICLR 2025

How Transformers Learn Regular Language Recognition: A Theoretical Study on Training Dynamics and Implicit Bias

Ruiquan Huang, Yingbin Liang, Jing Yang

ICML 2025 • arXiv:2505.00926
5 citations

How Transformers Learn Structured Data: Insights From Hierarchical Filtering

Jerome Garnier-Brun, Marc Mezard, Emanuele Moscato et al.

ICML 2025 • arXiv:2408.15138
10 citations

How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

Yatin Dandi, Florent Krzakala, Bruno Loureiro et al.

ICLR 2025 • arXiv:2305.18270
52 citations

How Would It Sound? Material-Controlled Multimodal Acoustic Profile Generation for Indoor Scenes

Mahnoor Saad, Ziad Al-Halah

ICCV 2025 • arXiv:2508.02905
1 citation

HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation

Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.

NEURIPS 2025

HPS: Hard Preference Sampling for Human Preference Alignment

Xiandong Zou, Wanyu Lin, Yuchen Li et al.

ICML 2025 • arXiv:2502.14400
1 citation

HPSv3: Towards Wide-Spectrum Human Preference Score

Yuhang Ma, Keqiang Sun, Xiaoshi Wu et al.

ICCV 2025 • arXiv:2508.03789
69 citations

HQA-VLAttack: Towards High Quality Adversarial Attack on Vision-Language Pre-Trained Models

Han Liu, Jiaqi Li, Zhi Xu et al.

NEURIPS 2025

HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets and CLIP Models

Zhixiang Wei, Guangting Wang, Xiaoxiao Ma et al.

ICCV 2025 • arXiv:2507.22431
6 citations

HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing

Mude Hui, Siwei Yang, Bingchen Zhao et al.

ICLR 2025 • arXiv:2404.09990
146 citations

HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes

Xin Lin, Shi Luo, Xiaojun Shan et al.

ICLR 2025
7 citations