ICML Papers
Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent
Santhosh Karnik, Anna Veselovska, Mark Iwen et al.
Implicit Riemannian Optimism with Applications to Min-Max Problems
Christophe Roux, David Martinez-Rubio, Sebastian Pokutta
Implicit Subgraph Neural Network
Yongjian Zhong, Liao Zhu, Hieu Vu et al.
Importance Corrected Neural JKO Sampling
Johannes Hertrich, Robert Gruhlke
Importance Sampling for Nonlinear Models
Prakash Palanivelu Rajmohan, Fred Roosta
Impossible Videos
Zechen Bai, Hai Ci, Mike Zheng Shou
Improved Algorithm for Deep Active Learning under Imbalance via Optimal Separation
Shyam Nuggehalli, Jifan Zhang, Lalit Jain et al.
Improved and Oracle-Efficient Online $\ell_1$-Multicalibration
Rohan Ghuge, Vidya Muthukumar, Sahil Singla
Improved Approximations for Hard Graph Problems using Predictions
Anders Aamand, Justin Chen, Siddharth Gollapudi et al.
Improved Coresets for Vertical Federated Learning: Regularized Linear and Logistic Regressions
Supratim Shit, Gurmehak Chadha, Surendra Kumar et al.
Improved Discretization Complexity Analysis of Consistency Models: Variance Exploding Forward Process and Decay Discretization Scheme
Ruofeng Yang, Bo Jiang, Cheng Chen et al.
Improved Expressivity of Hypergraph Neural Networks through High-Dimensional Generalized Weisfeiler-Leman Algorithms
Detian Zhang, Zhang Chengqiang, Yanghui Rao et al.
Improved Last-Iterate Convergence of Shuffling Gradient Methods for Nonsmooth Convex Optimization
Zijian Liu, Zhengyuan Zhou
Improved Learning via k-DTW: A Novel Dissimilarity Measure for Curves
Amer Krivosija, Alexander Munteanu, André Nusser et al.
Improved Lower Bounds for First-order Stochastic Non-convex Optimization under Markov Sampling
Zhenyu Sun, Ermin Wei
Improved Off-policy Reinforcement Learning in Biological Sequence Design
Hyeonah Kim, Minsu Kim, Taeyoung Yun et al.
Improved Online Confidence Bounds for Multinomial Logistic Bandits
Joongkyu Lee, Min-hwan Oh
Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance
Shogo Iwazaki, Shion Takeno
Improved Sample Complexity for Private Nonsmooth Nonconvex Optimization
Guy Kornowski, Daogao Liu, Kunal Talwar
Improved Theoretically-Grounded Evolutionary Algorithms for Subset Selection with a Linear Cost Constraint
Dan-Xuan Liu, Chao Qian
Improving Compositional Generation with Diffusion Models Using Lift Scores
Chenning Yu, Sicun Gao
Improving Consistency Models with Generator-Augmented Flows
Thibaut Issenhuth, Sangchul Lee, Ludovic Dos Santos et al.
Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers
Filip Szatkowski, Yaoyue Zheng, Fei Yang et al.
Improving Diversity in Language Models: When Temperature Fails, Change the Loss
Alexandre Verine, Florian Le Bronnec, Kunhao Zheng et al.
Improving Flow Matching by Aligning Flow Divergence
Yuhao Huang, Taos Transue, Shih-Hsin Wang et al.
Improving Generalization in Federated Learning with Highly Heterogeneous Data via Momentum-Based Stochastic Controlled Weight Averaging
Junkang Liu, Yuanyuan Liu, Fanhua Shang et al.
Improving Generalization with Flat Hilbert Bayesian Inference
Tuan Truong, Quyen Tran, Ngoc Quan Pham et al.
Improving LLM Safety Alignment with Dual-Objective Optimization
Xuandong Zhao, Will Cai, Tianneng Shi et al.
Improving LLMs for Recommendation with Out-Of-Vocabulary Tokens
Ting-Ji Huang, Jia-Qi Yang, Chunxu Shen et al.
Improving LLM Video Understanding with 16 Frames Per Second
Yixuan Li, Changli Tang, Jimin Zhuang et al.
Improving Memory Efficiency for Training KANs via Meta Learning
Zhangchi Zhao, Jun Shu, Deyu Meng et al.
Improving Model Alignment Through Collective Intelligence of Open-Source Models
Junlin Wang, Roy Xie, Shang Zhu et al.
Improving Multi-Class Calibration through Normalization-Aware Isotonic Techniques
Alon Arad, Saharon Rosset
Improving Multimodal Learning Balance and Sufficiency through Data Remixing
Xiaoyu Ma, Hao Chen, Yongjian Deng
Improving Out-of-Distribution Detection via Dynamic Covariance Calibration
Kaiyu Guo, Zijian Wang, Tan Pan et al.
Improving Out-of-Distribution Detection with Markov Logic Networks
Konstantin Kirchheim, Frank Ortmeier
Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
Anjiang Wei, Allen Nie, Thiago Teixeira et al.
Improving Rationality in the Reasoning Process of Language Models through Self-playing Game
Pinzheng Wang, Juntao Li, Zecheng Tang et al.
Improving Reward Model Generalization from Adversarial Process Enhanced Preferences
Zhilong Zhang, Tian Xu, Xinghao Du et al.
Improving Soft Unification with Knowledge Graph Embedding Methods
Xuanming Cui, Chionh Peng, Adriel Kuek et al.
Improving the Continuity of Goal-Achievement Ability via Policy Self-Regularization for Goal-Conditioned Reinforcement Learning
Xudong Gong, Sen Yang, Feng Dawei et al.
Improving the Diffusability of Autoencoders
Ivan Skorokhodov, Sharath Girish, Benran Hu et al.
Improving the Effective Receptive Field of Message-Passing Neural Networks
Shahaf E. Finder, Ron Shapira Weber, Moshe Eliasof et al.
Improving the Scaling Laws of Synthetic Data with Deliberate Practice
Reyhane Askari Hemmat, Mohammad Pezeshki, Elvis Dohmatob et al.
Improving the Statistical Efficiency of Cross-Conformal Prediction
Improving the Variance of Differentially Private Randomized Experiments through Clustering
Adel Javanmard, Vahab Mirrokni, Jean Pouget-Abadie
Improving Transformer World Models for Data-Efficient RL
Antoine Dedieu, Joseph Ortiz, Xinghua Lou et al.
Improving Value Estimation Critically Enhances Vanilla Policy Gradient
Tao Wang, Ruipeng Zhang, Sicun Gao
Improving Your Model Ranking on Chatbot Arena by Vote Rigging
Rui Min, Tianyu Pang, Chao Du et al.
Improving Zero-Shot Adversarial Robustness in Vision-Language Models by Closed-form Alignment of Adversarial Path Simplices
Junhao Dong, Piotr Koniusz, Yifei Zhang et al.