2025 Papers

21,856 papers found • Page 430 of 438

What Are Good Positional Encodings for Directed Graphs?

Yinan Huang, Haoyu Wang, Pan Li

ICLR 2025 · poster

What Are Step-Level Reward Models Rewarding? Counterintuitive Findings from MCTS-Boosted Mathematical Reasoning

Yiran Ma, Zui Chen, Tianqiao Liu et al.

AAAI 2025 · paper · arXiv:2412.15904

What are you sinking? A geometric approach on attention sink

Valeria Ruscio, Umberto Nanni, Fabrizio Silvestri

NeurIPS 2025 · spotlight · arXiv:2508.02546
2 citations

What can large language models do for sustainable food?

Anna Thomas, Adam Yee, Andrew Mayne et al.

ICML 2025 · poster · arXiv:2503.04734

What Can RL Bring to VLA Generalization? An Empirical Study

Jijia Liu, Feng Gao, Bingwen Wei et al.

NeurIPS 2025 · poster

What Changed and What Could Have Changed? State-Change Counterfactuals for Procedure-Aware Video Representation Learning

Chi-Hsi Kung, Frangil Ramirez, Juhyung Ha et al.

ICCV 2025 · poster · arXiv:2503.21055
2 citations

What Changed? Detecting and Evaluating Instruction-Guided Image Edits with Multimodal Large Language Models

Lorenzo Baraldi, Davide Bucciarelli, Federico Betti et al.

ICCV 2025 · poster
2 citations

What Data Enables Optimal Decisions? An Exact Characterization for Linear Optimization

Omar Bennouna, Amine Bennouna, Saurabh Amin et al.

NeurIPS 2025 · poster
1 citation

What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis

Weronika Ormaniec, Felix Dangel, Sidak Pal Singh

ICLR 2025 · poster · arXiv:2410.10986
10 citations

What Does It Take to Build a Performant Selective Classifier?

Stephan Rabanser, Nicolas Papernot

NeurIPS 2025 · poster · arXiv:2510.20242

What Do Latent Action Models Actually Learn?

Chuheng Zhang, Tim Pearce, Pushi Zhang et al.

NeurIPS 2025 · poster · arXiv:2506.15691
7 citations

What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning?

Katie Kang, Amrith Setlur, Dibya Ghosh et al.

ICML 2025 · poster

What do you know? Bayesian knowledge inference for navigating agents

Matthias Schultheis, Jana-Sophie Schönfeld, Constantin Rothkopf et al.

NeurIPS 2025 · oral

What Do You See in Common? Learning Hierarchical Prototypes over Tree-of-Life to Discover Evolutionary Traits

Harish Babu Manogaran, M. Maruf, Arka Daw et al.

ICLR 2025 · poster · arXiv:2409.02335
1 citation

What Expressivity Theory Misses: Message Passing Complexity for GNNs

Niklas Kemper, Tom Wollschläger, Stephan Günnemann

NeurIPS 2025 · spotlight

What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers

Pulkit Gopalani, Wei Hu

NeurIPS 2025 · poster · arXiv:2506.13688
1 citation

What Has a Foundation Model Found? Inductive Bias Reveals World Models

Keyon Vafa, Peter Chang, Ashesh Rambachan et al.

ICML 2025 · poster

What Has Been Overlooked in Contrastive Source-Free Domain Adaptation: Leveraging Source-Informed Latent Augmentation within Neighborhood Context

Jing Wang, Wonho Bae, Jiahong Chen et al.

ICLR 2025 · poster · arXiv:2412.14301
7 citations

What If: Understanding Motion Through Sparse Interactions

Stefan A. Baumann, Nick Stracke, Timy Phan et al.

ICCV 2025 · poster

What if Virtual Agents Had Scents? Users' Judgments of Virtual Agent Personality and Appeals in Encounters

Dongyun Han, Siyeon Bak, So-Hui Kim et al.

ISMAR 2025 · paper · arXiv:2509.11342

What If We Recaption Billions of Web Images with LLaMA-3?

Xianhang Li, Haoqin Tu, Mude Hui et al.

ICML 2025 · poster · arXiv:2406.08478

What Is a Good Question? Assessing Question Quality via Meta-Fact Checking

Bo Zhang, Jianghua Zhu, Chaozhuo Li et al.

AAAI 2025 · paper

What is Wrong with Perplexity for Long-context Language Modeling?

Lizhe Fang, Yifei Wang, Zhaoyang Liu et al.

ICLR 2025 · poster

What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions

Sang Choe, Hwijeen Ahn, Juhan Bae et al.

NeurIPS 2025 · poster

What Kind of Visual Tokens Do We Need? Training-Free Visual Token Pruning for Multi-Modal Large Language Models from the Perspective of Graph

Yutao Jiang, Qiong Wu, Wenhao Lin et al.

AAAI 2025 · paper · arXiv:2501.02268
18 citations

What Limits Bidirectional Model's Generative Capabilities? A Uni-Bi-Directional Mixture-of-Expert Method For Bidirectional Fine-tuning

Zuchao Li, Yonghua Hei, Qiwei Li et al.

ICML 2025 · poster

What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark for Essential Virtual Agent Capabilities

Wendong Bu, Yang Wu, Qifan Yu et al.

ICML 2025 · oral · arXiv:2506.08933

What Makes a Good Dataset for Knowledge Distillation?

Logan Frank, Jim Davis

CVPR 2025 · poster · arXiv:2411.12817
3 citations

What Makes a Good Diffusion Planner for Decision Making?

Haofei Lu, Dongqi Han, Yifei Shen et al.

ICLR 2025 · poster · arXiv:2503.00535
24 citations

What Makes a Good Feedforward Computational Graph?

Alex Vitvitskyi, João Madeira Araujo, Marc Lackenby et al.

ICML 2025 · poster · arXiv:2502.06751

What Makes a Maze Look Like a Maze?

Joy Hsu, Jiayuan Mao, Joshua B Tenenbaum et al.

ICLR 2025 · poster · arXiv:2409.08202
13 citations

What makes an Ensemble (Un) Interpretable?

Shahaf Bassan, Guy Amir, Meirav Zehavi et al.

ICML 2025 · poster · arXiv:2506.08216
5 citations

What Makes a Reward Model a Good Teacher? An Optimization Perspective

Noam Razin, Zixuan Wang, Hubert Strauss et al.

NeurIPS 2025 · spotlight

What Makes for Text to 360-degree Panorama Generation with Stable Diffusion?

Jinhong Ni, Chang-Bin Zhang, Qiang Zhang et al.

ICCV 2025 · poster

What Makes In-context Learning Effective for Mathematical Reasoning

Jiayu Liu, Zhenya Huang, Chaokun Wang et al.

ICML 2025 · poster

What Makes Large Language Models Reason in (Multi-Turn) Code Generation?

Kunhao Zheng, Juliette Decugis, Jonas Gehring et al.

ICLR 2025 · poster · arXiv:2410.08105
30 citations

What Makes Math Problems Hard for Reinforcement Learning: A Case Study

Ali Shehper, Anibal Medina-Mardones, Lucas Fagan et al.

NeurIPS 2025 · poster · arXiv:2408.15332
7 citations

What Makes Object Referencing Clear? Multimodal Strategies for Shared Understanding in XR Collaboration

Jeonghyeon Kim, Jemin Lee, Youngwon Kim

ISMAR 2025 · paper

What Matters in Data for DPO?

Yu Pan, Zhongze Cai, Huaiyang Zhong et al.

NeurIPS 2025 · poster
5 citations

What Matters in Learning from Large-Scale Datasets for Robot Manipulation

Vaibhav Saxena, Matthew Bronars, Nadun Ranawaka Arachchige et al.

ICLR 2025 · poster · arXiv:2506.13536
16 citations

What Matters When Repurposing Diffusion Models for General Dense Perception Tasks?

Guangkai Xu, Yongtao Ge, Mingyu Liu et al.

ICLR 2025 · poster · arXiv:2403.06090
56 citations

What Moves the Eyes: Doubling Mechanistic Model Performance Using Deep Networks to Discover and Test Cognitive Hypotheses

Federico D'Agostino, Lisa Schwetlick, Matthias Bethge et al.

NeurIPS 2025 · oral

What One Cannot, Two Can: Two-Layer Transformers Provably Represent Induction Heads on Any-Order Markov Chains

Chanakya Ekbote, Ashok Vardhan Makkuva, Marco Bondaschi et al.

NeurIPS 2025 · spotlight · arXiv:2508.07208

What Really is a Member? Discrediting Membership Inference via Poisoning

Neal Mangaokar, Ashish Hooda, Zhuohang Li et al.

NeurIPS 2025 · poster · arXiv:2506.06003
1 citation

What Secrets Do Your Manifolds Hold? Understanding the Local Geometry of Generative Models

Ahmed Imtiaz Humayun, Ibtihel Amara, Cristina Nader Vasconcelos et al.

ICLR 2025 · poster

What should a neuron aim for? Designing local objective functions based on information theory

Andreas C. Schneider, Valentin Neuhaus, David Ehrlich et al.

ICLR 2025 · poster · arXiv:2412.02482
5 citations

What's in a Latent? Leveraging Diffusion Latent Space for Domain Generalization

Xavier Thomas, Deepti Ghadiyaram

ICCV 2025 · poster
2 citations

What’s in Common? Multimodal Models Hallucinate When Reasoning Across Scenes

Candace Ross, Florian Bordes, Adina Williams et al.

NeurIPS 2025 · poster

What’s in the Image? A Deep-Dive into the Vision of Vision Language Models

Omri Kaduri, Shai Bagon, Tali Dekel

CVPR 2025 · poster · arXiv:2411.17491

What's Making That Sound Right Now? Video-centric Audio-Visual Localization

Hahyeon Choi, Junhoo Lee, Nojun Kwak

ICCV 2025 · poster · arXiv:2507.04667