Poster Papers Matching "sample complexity"
31 papers found
Breaking Neural Network Scaling Laws with Modularity
Akhilan Boopathy, Sunshine Jiang, William Yue et al.
Deployment Efficient Reward-Free Exploration with Linear Function Approximation
Zihan Zhang, Yuxin Chen, Jason Lee et al.
Formal Models of Active Learning from Contrastive Examples
Farnam Mansouri, Hans Simon, Adish Singla et al.
Learning Hierarchical Polynomials of Multiple Nonlinear Features
Hengyu Fu, Zihao Wang, Eshaan Nichani et al.
Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor
Maryam Aliakbarpour, Zhan Shi, Ria Stevens et al.
Non-Convex Tensor Recovery from Tube-Wise Sensing
Tongle Wu, Ying Sun
On the Convergence of Single-Timescale Actor-Critic
Navdeep Kumar, Priyank Agrawal, Giorgia Ramponi et al.
On the Sample Complexity of Differentially Private Policy Optimization
Yi He, Xingyu Zhou
Revisiting Agnostic Boosting
Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice et al.
Simple and Optimal Sublinear Algorithms for Mean Estimation
Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn et al.
Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits
Ziyi Zhang, Yorie Nakahira, Guannan Qu
Streaming Federated Learning with Markovian Data
Khiem Huynh, Malcolm Egan, Giovanni Neglia et al.
Technical Debt in In-Context Learning: Diminishing Efficiency in Long Context
Taejong Joo, Diego Klabjan
Tight Bounds for Answering Adaptively Chosen Concentrated Queries
Emma Rapoport, Edith Cohen, Uri Stemmer
Accelerated Policy Gradient for s-rectangular Robust MDPs with Large State Spaces
Ziyi Chen, Heng Huang
An Online Optimization Perspective on First-Order and Zero-Order Decentralized Nonsmooth Nonconvex Stochastic Optimization
Emre Sahinoglu, Shahin Shahrampour
A Primal-Dual Algorithm for Offline Constrained Reinforcement Learning with Linear MDPs
Kihyuk Hong, Ambuj Tewari
Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays
Qingyuan Wu, Simon Zhan, Yixuan Wang et al.
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
Jiabin Lin, Shana Moothedath, Namrata Vaswani
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning
Tianchen Zhou, Hairi, Haibo Yang et al.
From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers
Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.
Hierarchical Integral Probability Metrics: A Distance on Random Probability Measures with Low Sample Complexity
Marta Catalano, Hugo Lavenant
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games
Songtao Feng, Ming Yin, Yu-Xiang Wang et al.
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective
Lei Zhao, Mengdi Wang, Yu Bai
Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL
Jiawei Huang, Niao He, Andreas Krause
Multi-group Learning for Hierarchical Groups
Samuel Deng, Daniel Hsu
Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation
Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans et al.
Reward-Free Kernel-Based Reinforcement Learning
Sattar Vakili, Farhang Nabiei, Da-shan Shiu et al.
Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks
Lorenzo Bardone, Sebastian Goldt
Switching the Loss Reduces the Cost in Batch Reinforcement Learning
Alex Ayoub, Kaiwen Wang, Vincent Liu et al.
Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu et al.