2025 Poster Papers
15,759 papers found • Page 312 of 316
When Selection Meets Intervention: Additional Complexities in Causal Discovery
Haoyue Dai, Ignavier Ng, Jianle Sun et al.
When Semantics Mislead Vision: Mitigating Large Multimodal Models Hallucinations in Scene Text Spotting and Understanding
Yan Shu, Hangui Lin, Yexin Liu et al.
When the Future Becomes the Past: Taming Temporal Correspondence for Self-supervised Video Representation Learning
Yang Liu, Qianqian Xu, Peisong Wen et al.
When Thinking Drifts: Evidential Grounding for Robust Video Reasoning
Romy Luo, Zihui (Sherry) Xue, Alex Dimakis et al.
When to Forget? Complexity Trade-offs in Machine Unlearning
Martin Van Waerebeke, Marco Lorenzi, Giovanni Neglia et al.
When to retrain a machine learning model
Florence Regol, Leo Schwinn, Kyle Sprague et al.
When, Where and Why to Average Weights?
Niccolò Ajroldi, Antonio Orvieto, Jonas Geiping
When Will It Fail?: Anomaly to Prompt for Forecasting Future Anomalies in Time Series
Min-Yeong Park, Won-Jeong Lee, Seong Tae Kim et al.
Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction
Junyi Chen, Di Huang, Weicai Ye et al.
Where am I? Cross-View Geo-localization with Natural Language Descriptions
Junyan Ye, Honglin Lin, Leyan Ou et al.
Where and How to Perturb: On the Design of Perturbation Guidance in Diffusion and Flow Models
Donghoon Ahn, Jiwon Kang, Sanghyun Lee et al.
Where Graph Meets Heterogeneity: Multi-View Collaborative Graph Experts
Zhihao Wu, Jinyu Cai, Yunhe Zhang et al.
Where's the Liability in the Generative Era? Recovery-based Black-Box Detection of AI-Generated Content
Haoyue Bai, Yiyou Sun, Wei Cheng et al.
Where the Devil Hides: Deepfake Detectors Can No Longer Be Trusted
Shuaiwei Yuan, Junyu Dong, Yuezun Li
Which Attention Heads Matter for In-Context Learning?
Kayo Yin, Jacob Steinhardt
Which Data Attributes Stimulate Math and Code Reasoning? An Investigation via Influence Functions
Siqi Kou, Qingyuan Tian, Hanwen Xu et al.
Which Tasks Should Be Compressed Together? A Causal Discovery Approach for Efficient Multi-Task Representation Compression
Sha Guo, Jing Chen, Zixuan Hu et al.
Whitened CLIP as a Likelihood Surrogate of Images and Captions
Roy Betser, Meir Yossef Levi, Guy Gilboa
Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems
Jeffrey Alido, Tongyu Li, Yu Sun et al.
Who Controls the Authorization? Invertible Networks for Copyright Protection in Text-to-Image Synthesis
Baoyue Hu, Yang Wei, Junhao Xiao et al.
Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors
Runxi Cheng, Feng Xiong, Yongxian Wei et al.
"Who experiences large model decay and why?" A Hierarchical Framework for Diagnosing Heterogeneous Performance Drift
Harvineet Singh, Fan Xia, Alexej Gossmann et al.
Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads
Yingjie Zhou, Jiezhang Cao, Zicheng Zhang et al.
Whole-Body Conditioned Egocentric Video Prediction
Yutong Bai, Danny Tran, Amir Bar et al.
Who Reasons in the Large Language Models?
Jie Shao, Jianxin Wu
Whose Instructions Count? Resolving Preference Bias in Instruction Fine-Tuning
Jiayu Zhang, Changbang Li, Yinan Peng et al.
Who Speaks for the Trigger? Dynamic Expert Routing in Backdoored Mixture-of-Experts Transformers
Xin Zhao, Xiaojun Chen, Bingshan Liu et al.
Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering
Yangfu Li, Hongjian Zhan, Tianyi Chen et al.
Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
Yiyou Sun, Yu Gai, Lijie Chen et al.
Why Does the Effective Context Length of LLMs Fall Short?
Chenxin An, Jun Zhang, Ming Zhong et al.
Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?
Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda et al.
Why In-Context Learning Models are Good Few-Shot Learners?
Shiguang Wu, Yaqing Wang, Quanming Yao
Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas
Shiqi Chen, Tongyao Zhu, Ruochen Zhou et al.
"Why Is There a Tumor?": Tell Me the Reason, Show Me the Evidence
Mengmeng Ma, Tang Li, Yunxiang Peng et al.
Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation
Sungmin Cha, Kyunghyun Cho
Why LVLMs Are More Prone to Hallucinations in Longer Responses: The Role of Context
Ge Zheng, Jiaye Qian, Jiajin Tang et al.
Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion
Alan Amin, Nate Gruver, Andrew Wilson
Why Playing Against Diverse and Challenging Opponents Speeds Up Coevolution: A Theoretical Analysis on Combinatorial Games
Alistair Benford, Per Kristian Lehre
Why Popular MOEAs are Popular: Proven Advantages in Approximating the Pareto Front
Mingfeng Li, Qiang Zhang, Weijie Zheng et al.
Why RoPE Struggles to Maintain Long-Term Decay in Long Sequences?
Wei Shen, Chao Yin, Yuliang Liu et al.
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Hung Quang Nguyen, Hieu Nguyen, Anh Ta et al.
Wide2Long: Learning Lens Compression and Perspective Adjustment for Wide-Angle to Telephoto Translation
Soumyadipta Banerjee, Jiaul Paik, Debashis Sen
Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse
Arthur Jacot, Peter Súkeník, Zihan Wang et al.
Widening the Network Mitigates the Impact of Data Heterogeneity on FedAvg
Like Jian, Dong Liu
WikiAutoGen: Towards Multi-Modal Wikipedia-Style Article Generation
Zhongyu Yang, Jun Chen, Dannong Xu et al.
WikiBigEdit: Understanding the Limits of Lifelong Knowledge Editing in LLMs
Lukas Thede, Karsten Roth, Matthias Bethge et al.
WildAvatar: Learning In-the-wild 3D Avatars from the Web
Zihao Huang, Shoukang Hu, Guangcong Wang et al.
WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
Bill Yuchen Lin, Yuntian Deng, Khyathi Chandu et al.
WildCAT3D: Appearance-Aware Multi-View Diffusion in the Wild
Morris Alper, David Novotny, Filippos Kokkinos et al.
WildChat-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
Benjamin Feuer, Chinmay Hegde