NeurIPS Poster Papers

4,493 papers found • Page 89 of 90

Watermarking Autoregressive Image Generation

Nikola Jovanović, Ismail Labiad, Tomas Soucek et al.

NeurIPS 2025 · poster · arXiv:2506.16349
4 citations

Wavy Transformer

Satoshi Noguchi, Yoshinobu Kawahara

NeurIPS 2025 · poster · arXiv:2508.12787

Weak-shot Keypoint Estimation via Keyness and Correspondence Transfer

Junjie Chen, Zeyu Luo, Zezheng Liu et al.

NeurIPS 2025 · poster

Weak-to-Strong Generalization under Distribution Shifts

Myeongho Jeon, Jan Sobotka, Suhwan Choi et al.

NeurIPS 2025 · poster · arXiv:2510.21332

WearVQA: A Visual Question Answering Benchmark for Wearables in Egocentric Authentic Real-world scenarios

Eun Chang, Zhuangqun Huang, Yiwei Liao et al.

NeurIPS 2025 · poster · arXiv:2511.22154

WeatherPrompt: Multi-modality Representation Learning for All-Weather Drone Visual Geo-Localization

Jiahao Wen, Hang Yu, Zhedong Zheng

NeurIPS 2025 · poster · arXiv:2508.09560
2 citations

Weaver: Shrinking the Generation-Verification Gap by Scaling Compute for Verification

Jon Saad-Falcon, Estefany Kelly Buchanan, Mayee Chen et al.

NeurIPS 2025 · poster

WebDancer: Towards Autonomous Information Seeking Agency

Jialong Wu, Baixuan Li, Runnan Fang et al.

NeurIPS 2025 · poster · arXiv:2505.22648
81 citations

Web-Scale Collection of Video Data for 4D Animal Reconstruction

Brian Nlong Zhao, Jiajun Wu, Shangzhe Wu

NeurIPS 2025 · poster · arXiv:2511.01169
1 citation

WebThinker: Empowering Large Reasoning Models with Deep Research Capability

Xiaoxi Li, Jiajie Jin, Guanting Dong et al.

NeurIPS 2025 · poster · arXiv:2504.21776
185 citations

We Should Chart an Atlas of All the World's Models

Eliahu Horwitz, Nitzan Kurer, Jonathan Kahana et al.

NeurIPS 2025 · poster · arXiv:2503.10633
5 citations

What Can RL Bring to VLA Generalization? An Empirical Study

Jijia Liu, Feng Gao, Bingwen Wei et al.

NeurIPS 2025 · poster · arXiv:2505.19789

What Data Enables Optimal Decisions? An Exact Characterization for Linear Optimization

Omar Bennouna, Amine Bennouna, Saurabh Amin et al.

NeurIPS 2025 · poster · arXiv:2505.21692
1 citation

What Does It Take to Build a Performant Selective Classifier?

Stephan Rabanser, Nicolas Papernot

NeurIPS 2025 · poster · arXiv:2510.20242
1 citation

What Do Latent Action Models Actually Learn?

Chuheng Zhang, Tim Pearce, Pushi Zhang et al.

NeurIPS 2025 · poster · arXiv:2506.15691
7 citations

What Happens During the Loss Plateau? Understanding Abrupt Learning in Transformers

Pulkit Gopalani, Wei Hu

NeurIPS 2025 · poster · arXiv:2506.13688
1 citation

What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions

Sang Choe, Hwijeen Ahn, Juhan Bae et al.

NeurIPS 2025 · poster · arXiv:2405.13954

What Makes Math Problems Hard for Reinforcement Learning: A Case Study

Ali Shehper, Anibal Medina-Mardones, Lucas Fagan et al.

NeurIPS 2025 · poster · arXiv:2408.15332
8 citations

What Matters in Data for DPO?

Yu Pan, Zhongze Cai, Huaiyang Zhong et al.

NeurIPS 2025 · poster · arXiv:2508.18312
5 citations

What Really is a Member? Discrediting Membership Inference via Poisoning

Neal Mangaokar, Ashish Hooda, Zhuohang Li et al.

NeurIPS 2025 · poster · arXiv:2506.06003
1 citation

What’s in Common? Multimodal Models Hallucinate When Reasoning Across Scenes

Candace Ross, Florian Bordes, Adina Williams et al.

NeurIPS 2025 · poster · arXiv:2511.03768

What's Producible May Not Be Reachable: Measuring the Steerability of Generative Models

Keyon Vafa, Sarah Bentley, Jon Kleinberg et al.

NeurIPS 2025 · poster · arXiv:2503.17482
2 citations

What We Miss Matters: Learning from the Overlooked in Point Cloud Transformers

Yi Wang, Jiaze Wang, Ziyu Guo et al.

NeurIPS 2025 · poster

When Additive Noise Meets Unobserved Mediators: Bivariate Denoising Diffusion for Causal Discovery

Dominik Meier, Sujai Hiremath, Promit Ghosal et al.

NeurIPS 2025 · poster · arXiv:2506.23374

When and how can inexact generative models still sample from the data manifold?

Nisha Chandramoorthy, Adriaan de Clercq

NeurIPS 2025 · poster · arXiv:2508.07581

When Are Concepts Erased From Diffusion Models?

Kevin Lu, Nicky Kriplani, Rohit Gandikota et al.

NeurIPS 2025 · poster · arXiv:2505.17013
5 citations

When Can Model-Free Reinforcement Learning be Enough for Thinking?

Josiah Hanna, Nicholas Corrado

NeurIPS 2025 · poster · arXiv:2506.17124

When Causal Dynamics Matter: Adapting Causal Strategies through Meta-Aware Interventions

Moritz Willig, Tim Woydt, Devendra Singh Dhami et al.

NeurIPS 2025 · poster

When Does Closeness in Distribution Imply Representational Similarity? An Identifiability Perspective

Beatrix Nielsen, Emanuele Marconato, Andrea Dittadi et al.

NeurIPS 2025 · poster · arXiv:2506.03784

When Does Curriculum Learning Help? A Theoretical Perspective

Raman Arora, Yunjuan Wang, Kaibo Zhang

NeurIPS 2025 · poster

When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective

Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu et al.

NeurIPS 2025 · poster · arXiv:2503.11272

When Kernels Multiply, Clusters Unify: Fusing Embeddings with the Kronecker Product

Youqi Wu, Jingwei Zhang, Farzan Farnia

NeurIPS 2025 · poster · arXiv:2506.08645
2 citations

When Lower-Order Terms Dominate: Adaptive Expert Algorithms for Heavy-Tailed Losses

Antoine Moulin, Emmanuel Esposito, Dirk van der Hoeven

NeurIPS 2025 · poster · arXiv:2506.01722

When majority rules, minority loses: bias amplification of gradient descent

François Bachoc, Jerome Bolte, Ryan Boustany et al.

NeurIPS 2025 · poster · arXiv:2505.13122
1 citation

When Models Don’t Collapse: On the Consistency of Iterative MLE

Daniel Barzilai, Ohad Shamir

NeurIPS 2025 · poster

When No Paths Lead to Rome: Benchmarking Systematic Neural Relational Reasoning

Anirban Das, Muhammad Irtaza Khalid, Rafael Peñaloza et al.

NeurIPS 2025 · poster · arXiv:2510.23532

When Semantics Mislead Vision: Mitigating Large Multimodal Models Hallucinations in Scene Text Spotting and Understanding

Yan Shu, Hangui Lin, Yexin Liu et al.

NeurIPS 2025 · poster · arXiv:2506.05551

When Thinking Drifts: Evidential Grounding for Robust Video Reasoning

Romy Luo, Zihui (Sherry) Xue, Alex Dimakis et al.

NeurIPS 2025 · poster · arXiv:2510.06077
4 citations

Where and How to Perturb: On the Design of Perturbation Guidance in Diffusion and Flow Models

Donghoon Ahn, Jiwon Kang, Sanghyun Lee et al.

NeurIPS 2025 · poster · arXiv:2506.10978
1 citation

Where Graph Meets Heterogeneity: Multi-View Collaborative Graph Experts

Zhihao Wu, Jinyu Cai, Yunhe Zhang et al.

NeurIPS 2025 · poster

Which Data Attributes Stimulate Math and Code Reasoning? An Investigation via Influence Functions

Siqi Kou, Qingyuan Tian, Hanwen Xu et al.

NeurIPS 2025 · poster · arXiv:2505.19949
4 citations

Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems

Jeffrey Alido, Tongyu Li, Yu Sun et al.

NeurIPS 2025 · poster · arXiv:2505.10311
1 citation

Whole-Body Conditioned Egocentric Video Prediction

Yutong Bai, Danny Tran, Amir Bar et al.

NeurIPS 2025 · poster · arXiv:2506.21552
8 citations

Who Reasons in the Large Language Models?

Jie Shao, Jianxin Wu

NeurIPS 2025 · poster · arXiv:2505.20993

Whose Instructions Count? Resolving Preference Bias in Instruction Fine-Tuning

Jiayu Zhang, Changbang Li, Yinan Peng et al.

NeurIPS 2025 · poster

Who Speaks for the Trigger? Dynamic Expert Routing in Backdoored Mixture-of-Experts Transformers

Xin Zhao, Xiaojun Chen, Bingshan Liu et al.

NeurIPS 2025 · poster · arXiv:2510.13462

Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering

Yangfu Li, Hongjian Zhan, Tianyi Chen et al.

NeurIPS 2025 · poster · arXiv:2505.10118
1 citation

Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations

Yiyou Sun, Yu Gai, Lijie Chen et al.

NeurIPS 2025 · poster · arXiv:2504.12691
10 citations

Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation

Sungmin Cha, Kyunghyun Cho

NeurIPS 2025 · poster · arXiv:2505.13111
4 citations

Why Masking Diffusion Works: Condition on the Jump Schedule for Improved Discrete Diffusion

Alan Amin, Nate Gruver, Andrew Wilson

NeurIPS 2025 · poster · arXiv:2506.08316
8 citations