2025 Papers
21,856 papers found • Page 431 of 438
What's New in My Data? Novelty Exploration via Contrastive Generation
Masaru Isonuma, Ivan Titov
What's Producible May Not Be Reachable: Measuring the Steerability of Generative Models
Keyon Vafa, Sarah Bentley, Jon Kleinberg et al.
What's the Move? Hybrid Imitation Learning via Salient Points
Priya Sundaresan, Hengyuan Hu, Quan Vuong et al.
What to align in multimodal contrastive learning?
Benoit Dufumier, Javiera Castillo Navarro, Devis Tuia et al.
What to Distill? Fast Knowledge Distillation with Adaptive Sampling
Byungchul Chae, Seonyeong Heo
What to Preserve and What to Transfer: Faithful, Identity-Preserving Diffusion-based Hairstyle Transfer
Chaeyeon Chung, Sunghyun Park, Jeongho Kim et al.
What We Miss Matters: Learning from the Overlooked in Point Cloud Transformers
Yi Wang, Jiaze Wang, Ziyu Guo et al.
What we need is explicit controllability: Training 3D gaze estimator using only facial images
Tingwei Li, Jun Bao, Zhenzhong Kuang et al.
What You Have is What You Track: Adaptive and Robust Multimodal Tracking
Yuedong Tan, Jiawei Shao, Eduard Zamfir et al.
When Additive Noise Meets Unobserved Mediators: Bivariate Denoising Diffusion for Causal Discovery
Dominik Meier, Sujai Hiremath, Promit Ghosal et al.
When Anchors Meet Cold Diffusion: A Multi-Stage Approach to Lane Detection
Bo-Lun Huang, Tzu-Hsiang Ni, Feng-Kai Huang et al.
When and how can inexact generative models still sample from the data manifold?
Nisha Chandramoorthy, Adriaan de Clercq
When and How Does CLIP Enable Domain and Compositional Generalization?
Elias Kempf, Simon Schrodi, Max Argus et al.
When and Where do Data Poisons Attack Textual Inversion?
Jeremy Styborski, Mingzhi Lyu, Jiayou Lu et al.
When Are Concepts Erased From Diffusion Models?
Kevin Lu, Nicky Kriplani, Rohit Gandikota et al.
When Attention Sink Emerges in Language Models: An Empirical View
Xiangming Gu, Tianyu Pang, Chao Du et al.
When Bad Data Leads to Good Models
Kenneth Li, Yida Chen, Fernanda Viégas et al.
When can in-context learning generalize out of task distribution?
Chase Goddard, Lindsay Smith, Wave Ngampruetikorn et al.
When Can Model-Free Reinforcement Learning be Enough for Thinking?
Josiah Hanna, Nicholas Corrado
When Can Proxies Improve the Sample Complexity of Preference Learning?
Yuchen Zhu, Daniel Augusto de Souza, Zhengyan Shi et al.
When Can We Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis?
Gautham Govind Anil, Pascal Esser, Debarghya Ghoshdastidar
When Causal Dynamics Matter: Adapting Causal Strategies through Meta-Aware Interventions
Moritz Willig, Tim Woydt, Devendra Singh Dhami et al.
When Confidence Fails: Revisiting Pseudo-Label Selection in Semi-supervised Semantic Segmentation
Pan Liu, Jinshi Liu
When Data Can't Meet: Estimating Correlation Across Privacy Barriers
Abhinav Chakraborty, Arnab Auddy, T. Tony Cai
When Data-Free Knowledge Distillation Meets Non-Transferable Teacher: Escaping Out-of-Distribution Trap is All You Need
Ziming Hong, Runnan Chen, Zengmao Wang et al.
When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets
Chen Zeno, Hila Manor, Gregory Ongie et al.
When Does Closeness in Distribution Imply Representational Similarity? An Identifiability Perspective
Beatrix Nielsen, Emanuele Marconato, Andrea Dittadi et al.
When does compositional structure yield compositional generalization? A kernel theory.
Samuel Lippl, Kimberly Stachenfeld
When Does Curriculum Learning Help? A Theoretical Perspective
Raman Arora, Yunjuan Wang, Kaibo Zhang
When do GFlowNets learn the right distribution?
Tiago Silva, Rodrigo Alves, Eliezer de Souza da Silva et al.
When Do LLMs Help With Node Classification? A Comprehensive Analysis
Xixi Wu, Yifei Shen, Fangzhou Ge et al.
When Domain Generalization meets Generalized Category Discovery: An Adaptive Task-Arithmetic Driven Approach
Vaibhav Rathore, Shubhranil B, Saikat Dutta et al.
When do neural networks learn world models?
Tianren Zhang, Guanyu Chen, Feng Chen
When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective
Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu et al.
When Dynamic Data Selection Meets Data Augmentation: Achieving Enhanced Training Acceleration
Suorong Yang, Peng Ye, Furao Shen et al.
When Every Millisecond Counts: Real-Time Anomaly Detection via the Multimodal Asynchronous Hybrid Network
Dong Xiao, Guangyao Chen, Peixi Peng et al.
When GNNs meet symmetry in ILPs: an orbit-based feature augmentation approach
Qian Chen, Lei Li, Qian Li et al.
When Graph Neural Networks Meet Dynamic Mode Decomposition
Dai Shi, Lequan Lin, Andi Han et al.
When Hypergraph Meets Heterophily: New Benchmark Datasets and Baseline
Ming Li, Yongchun Gu, Yi Wang et al.
When Is Self-Gaze Helpful? Examining Uni- vs Bi-directional Gaze Visualization in Collocated AR Tasks
Daniel Alexander Delgado, Christopher J Bowers, Rodrigo Luis Calvo et al.
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
Hongkang Li, Yihua Zhang, Shuai Zhang et al.
When Kernels Multiply, Clusters Unify: Fusing Embeddings with the Kronecker Product
Youqi Wu, Jingwei Zhang, Farzan Farnia
When Large Vision-Language Model Meets Large Remote Sensing Imagery: Coarse-to-Fine Text-Guided Token Pruning
Junwei Luo, Yingying Zhang, Xue Yang et al.
When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
Weixiang Zhao, Jiahe Guo, Yang Deng et al.
When Lighting Deceives: Exposing Vision-Language Models' Illumination Vulnerability Through Illumination Transformation Attack
Hanqing Liu, Shouwei Ruan, Yao Huang et al.
When LLMs Play the Telephone Game: Cultural Attractors as Conceptual Tools to Evaluate LLMs in Multi-turn Settings
Jérémy Perez, Grgur Kovac, Corentin Léger et al.
When LLMs Recognize Your Space: Research on Experiences with Spatially Aware LLM Agents
Seungwoo Oh, Nakyoung An, Youngwug Cho et al.
When Lower-Order Terms Dominate: Adaptive Expert Algorithms for Heavy-Tailed Losses
Antoine Moulin, Emmanuel Esposito, Dirk van der Hoeven
When majority rules, minority loses: bias amplification of gradient descent
François Bachoc, Jerome Bolte, Ryan Boustany et al.
When Maximum Entropy Misleads Policy Optimization
Ruipeng Zhang, Ya-Chien Chang, Sicun Gao