2025 Poster Papers

15,759 papers found • Page 312 of 316

When Selection Meets Intervention: Additional Complexities in Causal Discovery

Haoyue Dai, Ignavier Ng, Jianle Sun et al.

ICLR 2025 • poster • arXiv:2503.07302 • 5 citations

When Semantics Mislead Vision: Mitigating Large Multimodal Models Hallucinations in Scene Text Spotting and Understanding

Yan Shu, Hangui Lin, Yexin Liu et al.

NEURIPS 2025 • poster • arXiv:2506.05551

When the Future Becomes the Past: Taming Temporal Correspondence for Self-supervised Video Representation Learning

Yang Liu, Qianqian Xu, Peisong Wen et al.

CVPR 2025 • poster • arXiv:2503.15096 • 11 citations

When Thinking Drifts: Evidential Grounding for Robust Video Reasoning

Romy Luo, Zihui (Sherry) Xue, Alex Dimakis et al.

NEURIPS 2025 • poster • arXiv:2510.06077 • 4 citations

When to Forget? Complexity Trade-offs in Machine Unlearning

Martin Van Waerebeke, Marco Lorenzi, Giovanni Neglia et al.

ICML 2025 • poster • arXiv:2502.17323

When to retrain a machine learning model

Florence Regol, Leo Schwinn, Kyle Sprague et al.

ICML 2025 • poster • arXiv:2505.14903

When, Where and Why to Average Weights?

Niccolò Ajroldi, Antonio Orvieto, Jonas Geiping

ICML 2025 • poster • arXiv:2502.06761

When Will It Fail?: Anomaly to Prompt for Forecasting Future Anomalies in Time Series

Min-Yeong Park, Won-Jeong Lee, Seong Tae Kim et al.

ICML 2025 • poster • arXiv:2506.23596

Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction

Junyi Chen, Di Huang, Weicai Ye et al.

ICLR 2025 • poster • arXiv:2410.18962 • 4 citations

Where am I? Cross-View Geo-localization with Natural Language Descriptions

Junyan Ye, Honglin Lin, Leyan Ou et al.

ICCV 2025 • poster • arXiv:2412.17007 • 16 citations

Where and How to Perturb: On the Design of Perturbation Guidance in Diffusion and Flow Models

Donghoon Ahn, Jiwon Kang, Sanghyun Lee et al.

NEURIPS 2025 • poster • arXiv:2506.10978 • 1 citation

Where Graph Meets Heterogeneity: Multi-View Collaborative Graph Experts

Zhihao Wu, Jinyu Cai, Yunhe Zhang et al.

NEURIPS 2025 • poster

Where's the Liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content

Haoyue Bai, Yiyou Sun, Wei Cheng et al.

CVPR 2025 • poster • arXiv:2505.01008

Where the Devil Hides: Deepfake Detectors Can No Longer Be Trusted

Shuaiwei Yuan, Junyu Dong, Yuezun Li

CVPR 2025 • poster • arXiv:2505.08255 • 2 citations

Which Attention Heads Matter for In-Context Learning?

Kayo Yin, Jacob Steinhardt

ICML 2025 • poster • arXiv:2502.14010 • 34 citations

Which Data Attributes Stimulate Math and Code Reasoning? An Investigation via Influence Functions

Siqi Kou, Qingyuan Tian, Hanwen Xu et al.

NEURIPS 2025 • poster • arXiv:2505.19949 • 4 citations

Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression

Sha Guo, Jing Chen, Zixuan Hu et al.

ICLR 2025 • poster • 1 citation

Whitened CLIP as a Likelihood Surrogate of Images and Captions

Roy Betser, Meir Yossef Levi, Guy Gilboa

ICML 2025 • poster • arXiv:2505.06934

Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems

Jeffrey Alido, Tongyu Li, Yu Sun et al.

NEURIPS 2025 • poster • arXiv:2505.10311 • 1 citation

Who Controls the Authorization? Invertible Networks for Copyright Protection in Text-to-Image Synthesis

Baoyue Hu, Yang Wei, Junhao Xiao et al.

ICCV 2025 • poster

Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors

Runxi Cheng, Feng Xiong, Yongxian Wei et al.

ICML 2025 • poster • arXiv:2503.08099

"Who experiences large model decay and why?" A Hierarchical Framework for Diagnosing Heterogeneous Performance Drift

Harvineet Singh, Fan Xia, Alexej Gossmann et al.

ICML 2025 • poster

Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads

Yingjie Zhou, Jiezhang Cao, Zicheng Zhang et al.

ICCV 2025 • poster • arXiv:2507.23343 • 2 citations

Whole-Body Conditioned Egocentric Video Prediction

Yutong Bai, Danny Tran, Amir Bar et al.

NEURIPS 2025 • poster • arXiv:2506.21552 • 8 citations

Who Reasons in the Large Language Models?

Jie Shao, Jianxin Wu

NEURIPS 2025 • poster • arXiv:2505.20993

Whose Instructions Count? Resolving Preference Bias in Instruction Fine-Tuning

Jiayu Zhang, Changbang Li, Yinan Peng et al.

NEURIPS 2025 • poster

Who Speaks for the Trigger? Dynamic Expert Routing in Backdoored Mixture-of-Experts Transformers

Xin Zhao, Xiaojun Chen, Bingshan Liu et al.

NEURIPS 2025 • poster • arXiv:2510.13462

Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering

Yangfu Li, Hongjian Zhan, Tianyi Chen et al.

NEURIPS 2025 • poster • arXiv:2505.10118 • 1 citation

Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations

Yiyou Sun, Yu Gai, Lijie Chen et al.

NEURIPS 2025 • poster • arXiv:2504.12691 • 10 citations

Why Does the Effective Context Length of LLMs Fall Short?

Chenxin An, Jun Zhang, Ming Zhong et al.

ICLR 2025 • poster • arXiv:2410.18745

Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?

Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda et al.

ICML 2025 • poster • arXiv:2406.04391 • 33 citations

Why In-Context Learning Models are Good Few-Shot Learners?

Shiguang Wu, Yaqing Wang, Quanming Yao

ICLR 2025 • poster

Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas

Shiqi Chen, Tongyao Zhu, Ruochen Zhou et al.

ICML 2025 • poster • arXiv:2503.01773

"Why Is There a Tumor?": Tell Me the Reason, Show Me the Evidence

Mengmeng Ma, Tang Li, Yunxiang Peng et al.

ICML 2025 • poster

Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation

Sungmin Cha, Kyunghyun Cho

NEURIPS 2025 • poster • arXiv:2505.13111 • 4 citations

Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context

Ge Zheng, Jiaye Qian, Jiajin Tang et al.

ICCV 2025 • poster • arXiv:2510.20229 • 6 citations

Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion

Alan Amin, Nate Gruver, Andrew Wilson

NEURIPS 2025 • poster • arXiv:2506.08316 • 8 citations

Why Playing Against Diverse and Challenging Opponents Speeds Up Coevolution: A Theoretical Analysis on Combinatorial Games

Alistair Benford, Per Kristian Lehre

NEURIPS 2025 • poster

Why Popular MOEAs are Popular: Proven Advantages in Approximating the Pareto Front

Mingfeng Li, Qiang Zhang, Weijie Zheng et al.

NEURIPS 2025 • poster

Why RoPE Struggles to Maintain Long-Term Decay in Long Sequences?

Wei Shen, Chao Yin, Yuliang Liu et al.

ICLR 2025 • poster

Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks

Hung Quang Nguyen, Hieu Nguyen, Anh Ta et al.

ICLR 2025 • poster • arXiv:2407.10825

Wide2Long: Learning Lens Compression and Perspective Adjustment for Wide-Angle to Telephoto Translation

Soumyadipta Banerjee, Jiaul Paik, Debashis Sen

ICCV 2025 • poster

Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse

Arthur Jacot, Peter Súkeník, Zihan Wang et al.

ICLR 2025 • poster • arXiv:2410.04887 • 9 citations

Widening the Network Mitigates the Impact of Data Heterogeneity on FedAvg

Like Jian, Dong Liu

ICML 2025 • poster • arXiv:2508.12576

WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation

Zhongyu Yang, Jun Chen, Dannong Xu et al.

ICCV 2025 • poster • arXiv:2503.19065

WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs

Lukas Thede, Karsten Roth, Matthias Bethge et al.

ICML 2025 • poster • arXiv:2503.05683

WildAvatar: Learning In-the-wild 3D Avatars from the Web

Zihao Huang, Shoukang Hu, Guangcong Wang et al.

CVPR 2025 • poster • arXiv:2407.02165 • 1 citation

WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild

Bill Yuchen Lin, Yuntian Deng, Khyathi Chandu et al.

ICLR 2025 • poster • arXiv:2406.04770 • 142 citations

WildCAT3D: Appearance-Aware Multi-View Diffusion in the Wild

Morris Alper, David Novotny, Filippos Kokkinos et al.

NEURIPS 2025 • poster • arXiv:2506.13030 • 1 citation

WildChat-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training

Benjamin Feuer, Chinmay Hegde

ICML 2025 • poster • arXiv:2501.18511