ICML Papers
5,975 papers found • Page 9 of 120
BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modeling
Hao Li, Yu-Hao Huang, Chang Xu et al.
Bridging Fairness and Efficiency in Conformal Inference: A Surrogate-Assisted Group-Clustered Approach
Chenyin Gao, Peter Gilbert, Larry Han
Bridging Layout and RTL: Knowledge Distillation based Timing Prediction
Mingjun Wang, Yihan Wen, Bin Sun et al.
Bridging Protein Sequences and Microscopy Images with Unified Diffusion Models
Dihan Zheng, Bo Huang
Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging
Shiqi Chen, Jinghan Zhang, Tongyao Zhu et al.
BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning
Han Zhong, Yutong Yin, Shenao Zhang et al.
Broadband Ground Motion Synthesis by Diffusion Model with Minimal Condition
Jaeheun Jung, Jaehyuk Lee, ChangHae Jung et al.
B-score: Detecting biases in large language models using response history
An Vo, Mohammad Reza Taesiri, Daeyoung Kim et al.
BSemiFL: Semi-supervised Federated Learning via a Bayesian Approach
Haozhao Wang, Shengyu Wang, Jiaming Li et al.
BSLoRA: Enhancing the Parameter Efficiency of LoRA with Intra-Layer and Inter-Layer Sharing
Yuhua Zhou, Ruifeng Li, Changhai Zhou et al.
BSO: Binary Spiking Online Optimization Algorithm
Yu Liang, Yu Yang, Wenjie Wei et al.
Byzantine-Resilient Federated Alternating Gradient Descent and Minimization for Partly-Decoupled Low Rank Matrix Learning
Ankit Pratap Singh, Ahmed Abbasi, Namrata Vaswani
C2IQL: Constraint-Conditioned Implicit Q-learning for Safe Offline Reinforcement Learning
Zifan Liu, Xinran Li, Jun Zhang
C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Generation
Guoxin Chen, Minpeng Liao, Peiying Yu et al.
Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing
Kaifeng Gao, Jiaxin Shi, Hanwang Zhang et al.
CABS: Conflict-Aware and Balanced Sparsification for Enhancing Model Merging
Zongzhen Yang, Binhang Qi, Hailong Sun et al.
Cache Me If You Must: Adaptive Key-Value Quantization for Large Language Models
Alina Shutova, Vladimir Malinovskii, Vage Egiazarian et al.
CACTI: Leveraging Copy Masking and Contextual Information to Improve Tabular Data Imputation
Aditya Gorla, Ryan Wang, Zhengtong Liu et al.
CaDA: Cross-Problem Routing Solver with Constraint-Aware Dual-Attention
Han Li, Fei Liu, Zhi Zheng et al.
CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing
Yu Yuan, Shizhao Sun, Qi Liu et al.
Calibrated Language Models and How to Find Them with Label Smoothing
Jerry Huang, Peng Lu, Qiuhao Zeng
Calibrated Physics-Informed Uncertainty Quantification
Vignesh Gopakumar, Ander Gray, Lorenzo Zanisi et al.
Calibrated Value-Aware Model Learning with Probabilistic Environment Models
Claas Voelcker, Anastasiia Pedan, Arash Ahmadian et al.
Calibrating Video Watch-time Predictions with Credible Prototype Alignment
Chao, Shisong Tang, Fan Li et al.
CALM: Consensus-Aware Localized Merging for Multi-Task Learning
Kunda Yan, Min Zhang, Sen Cui et al.
Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example
Yuhan Helena Liu, Guangyu Robert Yang, Christopher Cueva
Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence
Yuankai Luo, Lei Shi, Xiao-Ming Wu
Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression
Peijie Dong, Zhenheng Tang, Xiang Liu et al.
Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios?
Wendong Zheng, Junyang Chen, Husheng Guo et al.
Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?
Yujin Han, Andi Han, Wei Huang et al.
Can Large Language Models Understand Intermediate Representations in Compilers?
Hailong Jiang, Jianfeng Zhu, Yao Wan et al.
CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning
Xuankun Rong, Jianshu Zhang, Kun He et al.
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark
Yunzhuo Hao, Jiawei Gu, Huichen Wang et al.
Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs
Haoming Yang, Ke Ma, Xiaojun Jia et al.
Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers
Lokesh Veeramacheneni, Moritz Wolter, Hilde Kuehne et al.
Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective
Jiawei Huang, Bingcong Li, Christoph Dann et al.
Can Transformers Learn Full Bayesian Inference in Context?
Arik Reuter, Tim G. J. Rudner, Vincent Fortuin et al.
Can Transformers Reason Logically? A Study in SAT Solving
Leyan Pan, Vijay Ganesh, Jacob Abernethy et al.
Can We Predict Performance of Large Models across Vision-Language Tasks?
Qinyu Zhao, Ming Xu, Kartik Gupta et al.
Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy
Haoqi Wu, Wei Dai, Wang Li et al.
Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation
Jan Pauls, Max Zimmer, Berkant Turan et al.
CASE-Bench: Context-Aware SafEty Benchmark for Large Language Models
Guangzhi Sun, Xiao Zhan, Shutong Feng et al.
Catching Two Birds with One Stone: Reward Shaping with Dual Random Networks for Balancing Exploration and Exploitation
Haozhe Ma, Fangling Li, Jing Lim et al.
Catch Your Emotion: Sharpening Emotion Perception in Multimodal Large Language Models
Yiyang Fang, Jian Liang, Wenke Huang et al.
CAT: Contrastive Adversarial Training for Evaluating the Robustness of Protective Perturbations in Latent Diffusion Models
Sen Peng, Mingyue Wang, Jianfei He et al.
Categorical Distributional Reinforcement Learning with Kullback-Leibler Divergence: Convergence and Asymptotics
Tyler Kastner, Mark Rowland, Yunhao Tang et al.
Categorical Schrödinger Bridge Matching
Grigoriy Ksenofontov, Aleksandr Korotin
CateKV: On Sequential Consistency for Long-Context LLM Inference Acceleration
Haoyun Jiang, Haolin Li, Jianwei Zhang et al.
CAT Merging: A Training-Free Approach for Resolving Conflicts in Model Merging
Wenju Sun, Qingyong Li, Yangliao Geng et al.
Catoni Contextual Bandits are Robust to Heavy-tailed Rewards
Chenlu Ye, Yujia Jin, Alekh Agarwal et al.