"convergence analysis" Papers
29 papers found
Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis
Zikun Zhang, Zixiang Chen, Quanquan Gu
Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees
Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis et al.
Efficient Federated Learning against Byzantine Attacks and Data Heterogeneity via Aggregating Normalized Gradients
Shiyuan Zuo, Xingrun Yan, Rongfei Fan et al.
Flow matching achieves almost minimax optimal convergence
Kenji Fukumizu, Taiji Suzuki, Noboru Isobe et al.
Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression
Michael Crawshaw, Blake Woodworth, Mingrui Liu
Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping
Zijian Liu, Zhengyuan Zhou
Online robust locally differentially private learning for nonparametric regression
Chenfei Gu, Qiangqiang Zhang, Ting Li et al.
On the Convergence of Projected Policy Gradient for Any Constant Step Sizes
Jiacai Liu, Wenye Li, Dachao Lin et al.
SPFL: Sequential updates with Parallel aggregation for Enhanced Federated Learning under Category and Domain Shifts
Haoyuan Liang, Shilei Cao, Li et al.
A New Theoretical Perspective on Data Heterogeneity in Federated Optimization
Jiayi Wang, Shiqiang Wang, Rong-Rong Chen et al.
A Persuasive Approach to Combating Misinformation
Safwan Hossain, Andjela Mladenovic, Yiling Chen et al.
A Primal-Dual Algorithm for Hybrid Federated Learning
Tom Overman, Garrett Blum, Diego Klabjan
Constrained Bayesian Optimization under Partial Observations: Balanced Improvements and Provable Convergence
Shengbo Wang, Ke Li
Convergence of Online Learning Algorithm for a Mixture of Multiple Linear Regressions
Yujing Liu, Zhixin Liu, Lei Guo
Convergence of Some Convex Message Passing Algorithms to a Fixed Point
Václav Voráček, Tomáš Werner
Delving into the Convergence of Generalized Smooth Minimax Optimization
Wenhan Xian, Ziyi Chen, Heng Huang
Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma et al.
Distributed Bilevel Optimization with Communication Compression
Yutong He, Jie Hu, Xinmeng Huang et al.
FADAS: Towards Federated Adaptive Asynchronous Optimization
Yujia Wang, Shiqiang Wang, Songtao Lu et al.
Faster Adaptive Decentralized Learning Algorithms
Feihu Huang, Jianyu Zhao
Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes
Daniil Vankov, Angelia Nedich, Lalitha Sankar
Locally Differentially Private Decentralized Stochastic Bilevel Optimization with Guaranteed Convergence Accuracy
Ziqin Chen, Yongqiang Wang
MADA: Meta-Adaptive Optimizers Through Hyper-Gradient Descent
Kaan Ozkara, Can Karakus, Parameswaran Raman et al.
On Convergence of Incremental Gradient for Non-convex Smooth Functions
Anastasiia Koloskova, Nikita Doikov, Sebastian Stich et al.
On the Role of Server Momentum in Federated Learning
Jianhui Sun, Xidong Wu, Heng Huang et al.
SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning
Shuai Zhang, Heshan Fernando, Miao Liu et al.
Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates
Rémi Leluc, Aymeric Dieuleveut, François Portier et al.
Spectral Preconditioning for Gradient Methods on Graded Non-convex Functions
Nikita Doikov, Sebastian Stich, Martin Jaggi
Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise
Kwangjun Ahn, Zhiyu Zhang, Yunbum Kook et al.