2025 Spotlight Papers

890 papers found • Page 6 of 18

Estimating cognitive biases with attention-aware inverse planning

Sounak Banerjee, Daphne Cornelisse, Deepak Gopinath et al.

NeurIPS 2025 · Spotlight · arXiv:2510.25951 · 1 citation

EuroSpeech: A Multilingual Speech Corpus

Samuel Pfisterer, Florian Grötschla, Luca Lanzendörfer et al.

NeurIPS 2025 · Spotlight · arXiv:2510.00514

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Zheyang Xiong, Jack Cai, John Cooper et al.

ICML 2025 · Spotlight · arXiv:2410.05603

Evolutionary Multi-View Classification via Eliminating Individual Fitness Bias

Xinyan Liang, Shuai Li, Qian Guo et al.

NeurIPS 2025 · Spotlight

Exogenous Isomorphism for Counterfactual Identifiability

Yikang Chen, Dehui Du

ICML 2025 · Spotlight · arXiv:2505.02212

Exploration via Feature Perturbation in Contextual Bandits

Seouh-won Yi, Min-hwan Oh

NeurIPS 2025 · Spotlight · arXiv:2510.17390

Extrapolation by Association: Length Generalization Transfer in Transformers

Ziyang Cai, Nayoung Lee, Avi Schwarzschild et al.

NeurIPS 2025 · Spotlight · arXiv:2506.09251 · 7 citations

Fair Cooperation in Mixed-Motive Games via Conflict-Aware Gradient Adjustment

Woojun Kim, Katia Sycara

NeurIPS 2025 · Spotlight · arXiv:2508.17696

FAPEX: Fractional Amplitude-Phase Expressor for Robust Cross-Subject Seizure Prediction

Ruizhe Zheng, Lingyan Mao, Dingding Han et al.

NeurIPS 2025 · Spotlight · arXiv:2511.03263 · 1 citation

Fast and Fluent Diffusion Language Models via Convolutional Decoding and Rejective Fine-tuning

Yeongbin Seo, Dongha Lee, Jaehyung Kim et al.

NeurIPS 2025 · Spotlight · arXiv:2509.15188 · 1 citation

Fast Monte Carlo Tree Diffusion: 100× Speedup via Parallel and Sparse Planning

Jaesik Yoon, Hyeonseo Cho, Yoshua Bengio et al.

NeurIPS 2025 · Spotlight · 2 citations

Fast MRI for All: Bridging Access Gaps by Training without Raw Data

Yasar Utku Alcalar, Merve Gulle, Mehmet Akcakaya

NeurIPS 2025 · Spotlight · arXiv:2411.13022 · 1 citation

Fast Projection-Free Approach (without Optimization Oracle) for Optimization over Compact Convex Set

Chenghao Liu, Enming Liang, Minghua Chen

NeurIPS 2025 · Spotlight

Fast-Slow Thinking GRPO for Large Vision-Language Model Reasoning

Wenyi Xiao, Leilei Gan

NeurIPS 2025 · Spotlight · arXiv:2504.18458

Fast Training of Large Kernel Models with Delayed Projections

Amirhesam Abedsoltan, Siyuan Ma, Parthe Pandit et al.

NeurIPS 2025 · Spotlight · arXiv:2411.16658

Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry

Chi-Ning Chou, Hang Le, Yichen Wang et al.

ICML 2025 · Spotlight · arXiv:2503.18114

Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions

Fabiola Ricci, Lorenzo Bardone, Sebastian Goldt

ICML 2025 · Spotlight · arXiv:2503.23896 · 1 citation

Federated Generalised Variational Inference: A Robust Probabilistic Federated Learning Framework

Terje Mildner, Oliver Hamelijnck, Paris Giampouras et al.

ICML 2025 · Spotlight · arXiv:2502.00846

FedSSI: Rehearsal-Free Continual Federated Learning with Synergistic Synaptic Intelligence

Yichen Li, Yuying Wang, Haozhao Wang et al.

ICML 2025 · Spotlight

Feedback-Aware MCTS for Goal-Oriented Information Seeking

Harshita Chopra, Chirag Shah

NeurIPS 2025 · Spotlight · 2 citations

Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts

Marta Skreta, Tara Akhound-Sadegh, Viktor Ohanesian et al.

ICML 2025 · Spotlight · arXiv:2503.02819 · 34 citations

FFN Fusion: Rethinking Sequential Computation in Large Language Models

Akhiad Bercovich, Mohammed Dabbah, Omri Puny et al.

NeurIPS 2025 · Spotlight · arXiv:2503.18908 · 2 citations

Fine-grained List-wise Alignment for Generative Medication Recommendation

Chenxiao Fan, Chongming Gao, Wentao Shi et al.

NeurIPS 2025 · Spotlight · arXiv:2505.20218

FineGRAIN: Evaluating Failure Modes of Text-to-Image Models with Vision Language Model Judges

Kevin Hayes, Micah Goldblum, Vikash Sehwag et al.

NeurIPS 2025 · Spotlight · arXiv:2512.02161

Fisher meets Feynman: score-based variational inference with a product of experts

Diana Cai, Robert Gower, David Blei et al.

NeurIPS 2025 · Spotlight · arXiv:2510.21598

Fishers for Free? Approximating the Fisher Information Matrix by Recycling the Squared Gradient Accumulator

YuXin Li, Felix Dangel, Derek Tam et al.

ICML 2025 · Spotlight · arXiv:2507.18807

Fixed-Point RNNs: Interpolating from Diagonal to Dense

Sajad Movahedi, Felix Sarnthein, Nicola Muca Cirone et al.

NeurIPS 2025 · Spotlight · arXiv:2503.10799 · 2 citations

Fixing It in Post: A Comparative Study of LLM Post-Training Data Quality and Model Performance

Aladin Djuhera, Swanand Kadhe, Syed Zawad et al.

NeurIPS 2025 · Spotlight · arXiv:2506.06522

Flash Invariant Point Attention

Andrew Liu, Axel Elaldi, Nicholas Franklin et al.

NeurIPS 2025 · Spotlight · arXiv:2505.11580

FlashMD: long-stride, universal prediction of molecular dynamics

Filippo Bigi, Sanggyu Chong, Agustinus Kristiadi et al.

NeurIPS 2025 · Spotlight · arXiv:2505.19350 · 7 citations

FlashTP: Fused, Sparsity-Aware Tensor Product for Machine Learning Interatomic Potentials

Seung Lee, Hojoon Kim, Yutack Park et al.

ICML 2025 · Spotlight

Flattening Hierarchies with Policy Bootstrapping

John Zhou, Jonathan Kao

NeurIPS 2025 · Spotlight · arXiv:2505.14975

FlexOLMo: Open Language Models for Flexible Data Use

Weijia Shi, Akshita Bhagia, Kevin Farhat et al.

NeurIPS 2025 · Spotlight · arXiv:2507.07024

Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency

Georg Bökman, David Nordström, Fredrik Kahl

ICML 2025 · Spotlight · arXiv:2502.05169

Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning

Riccardo De Santi, Marin Vlastelica, Ya-Ping Hsieh et al.

NeurIPS 2025 · Spotlight · arXiv:2511.22640

FlowDrag: 3D-aware Drag-based Image Editing with Mesh-guided Deformation Vector Flow Fields

Gwanhyeong Koo, Sunjae Yoon, Younghwan Lee et al.

ICML 2025 · Spotlight · arXiv:2507.08285

Flow Equivariant Recurrent Neural Networks

Andy Keller

NeurIPS 2025 · Spotlight · arXiv:2507.14793 · 3 citations

Forecasting in Offline Reinforcement Learning for Non-stationary Environments

Suzan Ece Ada, Georg Martius, Emre Ugur et al.

NeurIPS 2025 · Spotlight · arXiv:2512.01987

FP4 All the Way: Fully Quantized Training of Large Language Models

Brian Chmiel, Maxim Fishman, Ron Banner et al.

NeurIPS 2025 · Spotlight

FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion

Akide Liu, Zeyu Zhang, Zhexin Li et al.

NeurIPS 2025 · Spotlight · arXiv:2506.04648 · 8 citations

Frame Context Packing and Drift Prevention in Next-Frame-Prediction Video Diffusion Models

Lvmin Zhang, Shengqu Cai, Muyang Li et al.

NeurIPS 2025 · Spotlight · arXiv:2504.12626 · 56 citations

From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks

Awa Khouna, Julien Ferry, Thibaut Vidal

NeurIPS 2025 · Spotlight · arXiv:2502.05325

From Language Models over Tokens to Language Models over Characters

Tim Vieira, Benjamin LeBrun, Mario Giulianelli et al.

ICML 2025 · Spotlight · arXiv:2412.03719

From Mechanistic Interpretability to Mechanistic Biology: Training, Evaluating, and Interpreting Sparse Autoencoders on Protein Language Models

Etowah Adams, Liam Bai, Minji Lee et al.

ICML 2025 · Spotlight · 28 citations

From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers

Ryotaro Kawata, Yujin Song, Alberto Bietti et al.

NeurIPS 2025 · Spotlight · arXiv:2512.18634 · 1 citation

FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities

Jin Wang, Yao Lai, Aoxue Li et al.

NeurIPS 2025 · Spotlight · arXiv:2505.20147 · 20 citations

Fully Autonomous Neuromorphic Navigation and Dynamic Obstacle Avoidance

Xiaochen Shang, Pengwei Luo, Xinning Wang et al.

NeurIPS 2025 · Spotlight

Functional Alignment Can Mislead: Examining Model Stitching

Damian Smith, Harvey Mannering, Antonia Marcu

ICML 2025 · Spotlight

Functional Scaling Laws in Kernel Regression: Loss Dynamics and Learning Rate Schedules

Binghui Li, Fengling Chen, Zixun Huang et al.

NeurIPS 2025 · Spotlight · arXiv:2509.19189

G-Adaptivity: optimised graph-based mesh relocation for finite element methods

James Rowbottom, Georg Maierhofer, Teo Deveney et al.

ICML 2025 · Spotlight · arXiv:2407.04516