ICML 2025 Papers
3,340 papers found • Page 7 of 67
AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders
Zhengxuan Wu, Aryaman Arora, Atticus Geiger et al.
Backdoor Attacks in Token Selection of Attention Mechanism
Yunjuan Wang, Raman Arora
BackSlash: Rate Constrained Optimized Training of Large Language Models
Jun Wu, Jiangtao Wen, Yuxing Han
BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing
Dongliang Guo, Mengxuan Hu, Zihan Guan et al.
Balanced Learning for Domain Adaptive Semantic Segmentation
Wangkai Li, Rui Sun, Bohao Liao et al.
Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality
Joshua Southern, Yam Eitan, Guy Bar Shalom et al.
Balancing Interference and Correlation in Spatial Experimental Designs: A Causal Graph Cut Approach
Jin Zhu, Jingyi Li, Hongyi Zhou et al.
Balancing Model Efficiency and Performance: Adaptive Pruner for Long-tailed Data
Zhe Zhao, Haibin Wen, Pengkun Wang et al.
Balancing Preservation and Modification: A Region and Semantic Aware Metric for Instruction-Based Image Editing
Zhuoying Li, Zhu Xu, Yuxin Peng et al.
Balancing the Scales: A Theoretical and Algorithmic Framework for Learning from Imbalanced Data
Corinna Cortes, Anqi Mao, Mehryar Mohri et al.
BAME: Block-Aware Mask Evolution for Efficient N:M Sparse Training
Chenyi Yang, Wenjie Nie, Yuxin Zhang et al.
BanditSpec: Adaptive Speculative Decoding via Bandit Algorithms
Yunlong Hou, Fengzhuo Zhang, Cunxiao Du et al.
BAnG: Bidirectional Anchored Generation for Conditional RNA Design
Roman Klypa, Alberto Bietti, Sergei Grudinin
Banyan: Improved Representation Learning with Explicit Structure
Mattia Opper, Siddharth N
BARK: A Fully Bayesian Tree Kernel for Black-box Optimization
Toby Boyne, Jose Pablo Folch, Robert Lee et al.
BARNN: A Bayesian Autoregressive and Recurrent Neural Network
Dario Coscia, Max Welling, Nicola Demo et al.
Batch List-Decodable Linear Regression via Higher Moments
Ilias Diakonikolas, Daniel Kane, Sushrut Karmalkar et al.
BaWA: Automatic Optimizing Pruning Metric for Large Language Models with Balanced Weight and Activation
Lian Liu, Xiandong Zhao, Guanchen Li et al.
BaxBench: Can LLMs Generate Correct and Secure Backends?
Mark Vero, Niels Mündler, Viktor Chibotaru et al.
Bayesian Active Learning for Bivariate Causal Discovery
Yuxuan Wang, Mingzhou Liu, Xinwei Sun et al.
Bayesian Basis Function Approximation for Scalable Gaussian Process Priors in Deep Generative Models
Mehmet Yiğit Balık, Maksim Sinelnikov, Priscilla Ong et al.
Bayesian Inference for Correlated Human Experts and Classifiers
Markelle Kelly, Alex Boyd, Samuel Showalter et al.
Bayesian Neural Scaling Law Extrapolation with Prior-Data Fitted Networks
Dongwoo Lee, Dong Bok Lee, Steven Adriaensen et al.
Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds
Aya Kayal, Sattar Vakili, Laura Toni et al.
Bayesian Weight Enhancement with Steady-State Adaptation for Test-time Adaptation in Dynamic Environments
Jae-Hong Lee
BCE vs. CE in Deep Feature Learning
Qiufu Li, Huibin Xiao, Linlin Shen
BDC-CLIP: Brownian Distance Covariance for Adapting CLIP to Action Recognition
Fei Long, Xiaoou Li, Jiaming Lv et al.
Be a Goldfish: Forgetting Bad Conditioning in Sparse Linear Regression via Variational Autoencoders
Kuheli Pratihar, Debdeep Mukhopadhyay
BECAME: Bayesian Continual Learning with Adaptive Model Merging
Mei Li, Yuxiang Lu, Qinyan Dai et al.
Be Confident: Uncovering Overfitting in MLLM Multi-Task Tuning
Wenke Huang, Jian Liang, Guancheng Wan et al.
Behavior-agnostic Task Inference for Robust Offline In-context Reinforcement Learning
Long Ma, Fangwei Zhong, Yizhou Wang
Behavioral Exploration: Learning to Explore via In-Context Adaptation
Andrew Wagenmaker, Zhiyuan Zhou, Sergey Levine
Behavior-Regularized Diffusion Policy Optimization for Offline Reinforcement Learning
Chen-Xiao Gao, Chenyang Wu, Mingjun Cao et al.
Bellman Unbiasedness: Toward Provably Efficient Distributional Reinforcement Learning with General Value Function Approximation
Taehyun Cho, Seungyub Han, Seokhun Ju et al.
Benchmarking Abstract and Reasoning Abilities Through A Theoretical Perspective
Qingchuan Ma, Yuhang Wu, Xiawu Zheng et al.
Benchmarking Quantum Reinforcement Learning
Nico Meyer, Christian Ufrecht, George Yammine et al.
Benefits of Early Stopping in Gradient Descent for Overparameterized Logistic Regression
Jingfeng Wu, Peter Bartlett, Matus Telgarsky et al.
Benign Overfitting in Token Selection of Attention Mechanism
Keitaro Sakamoto, Issei Sato
Benign Samples Matter! Fine-tuning On Outlier Benign Samples Severely Breaks Safety
Zihan Guan, Mengxuan Hu, Ronghang Zhu et al.
Best of Both Worlds: Advantages of Hybrid Graph Sequence Models
Ali Behrouz, Ali Parviz, Mahdi Karami et al.
Best of Both Worlds: Regret Minimization versus Minimax Play
Adrian Müller, Jon Schneider, Efstratios Panteleimon Skoulakis et al.
BEST-Route: Adaptive LLM Routing with Test-Time Optimal Compute
Dujian Ding, Ankur Mallick, Shaokun Zhang et al.
Best Subset Selection: Optimal Pursuit for Feature Selection and Elimination
Zhihan Zhu, Yanhao Zhang, Yong Xia
Better to Teach than to Give: Domain Generalized Semantic Segmentation via Agent Queries with Diffusion Model Guidance
Fan Li, Xuan Wang, Min Qi et al.
Beyond Atoms: Enhancing Molecular Pretrained Representations with 3D Space Modeling
Shuqi Lu, Xiaohong Ji, Bohang Zhang et al.
Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment
Yifan Zhang, Ge Zhang, Yue Wu et al.
Beyond Communication Overhead: A Multilevel Monte Carlo Approach for Mitigating Compression Bias in Distributed Learning
Ze'ev Zukerman, Bassel Hamoud, Kfir Levy
Beyond Confidence: Exploiting Homogeneous Pattern for Semi-Supervised Semantic Segmentation
Rui Sun, Huayu Mai, Wangkai Li et al.
Beyond Cropped Regions: New Benchmark and Corresponding Baseline for Chinese Scene Text Retrieval in Diverse Layouts
Li Gengluo, Huawen Shen, Yu Zhou
Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced Decision-Making in Distributional Reinforcement Learning
Mehrdad Moghimi, Hyejin Ku