"sample complexity" Papers

29 papers found

Deployment Efficient Reward-Free Exploration with Linear Function Approximation

Zihan Zhang, Yuxin Chen, Jason Lee et al.

NeurIPS 2025 poster

Formal Models of Active Learning from Contrastive Examples

Farnam Mansouri, Hans Simon, Adish Singla et al.

NeurIPS 2025 poster · arXiv:2506.15893

Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Maryam Aliakbarpour, Zhan Shi, Ria Stevens et al.

NeurIPS 2025 poster · arXiv:2506.01162

On the Convergence of Single-Timescale Actor-Critic

Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi et al.

NeurIPS 2025 poster · arXiv:2410.08868
1 citation

Streaming Federated Learning with Markovian Data

Khiem Huynh, Malcolm Egan, Giovanni Neglia et al.

NeurIPS 2025 poster · arXiv:2503.18807

Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces

Ziyi Chen, Heng Huang

ICML 2024 poster

An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks

Zhifa Ke, Zaiwen Wen, Junyu Zhang

ICML 2024 oral

An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization

Emre Sahinoglu, Shahin Shahrampour

ICML 2024 poster

A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs

Kihyuk Hong, Ambuj Tewari

ICML 2024 poster

A Theory of Fault-Tolerant Learning

Changlong Wu, Yifan Wang, Ananth Grama

ICML 2024 spotlight

Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays

Qingyuan Wu, Simon Zhan, Yixuan Wang et al.

ICML 2024 poster

Eliciting Kemeny Rankings

Anne-Marie George, Christos Dimitrakakis

AAAI 2024 paper · arXiv:2312.11663
1 citation

Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Jiabin Lin, Shana Moothedath, Namrata Vaswani

ICML 2024 poster

Faster Adaptive Decentralized Learning Algorithms

Feihu Huang, Jianyu Zhao

ICML 2024 spotlight

Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning

Tianchen Zhou, Hairi, Haibo Yang et al.

ICML 2024 poster

From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers

Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.

ICML 2024 poster

Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity

Marta Catalano, Hugo Lavenant

ICML 2024 poster

How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers

Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson et al.

ICML 2024 spotlight

Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games

Songtao Feng, Ming Yin, Yu-Xiang Wang et al.

ICML 2024 poster

Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective

Lei Zhao, Mengdi Wang, Yu Bai

ICML 2024 poster

Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL

Jiawei Huang, Niao He, Andreas Krause

ICML 2024 poster

Multi-group Learning for Hierarchical Groups

Samuel Deng, Daniel Hsu

ICML 2024 poster

Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation

Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans et al.

ICML 2024 poster

Replicable Learning of Large-Margin Halfspaces

Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen et al.

ICML 2024 spotlight

Reward-Free Kernel-Based Reinforcement Learning

Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.

ICML 2024 poster

Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge

Meshal Alharbi, Mardavij Roozbehani, Munther Dahleh

AAAI 2024 paper · arXiv:2312.12558

Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks

Lorenzo Bardone, Sebastian Goldt

ICML 2024 poster

Switching the Loss Reduces the Cost in Batch Reinforcement Learning

Alex Ayoub, Kaiwen Wang, Vincent Liu et al.

ICML 2024 poster

Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection

Nils Palumbo, Yang Guo, Xi Wu et al.

ICML 2024 poster