Poster papers matching "multi-armed bandits"
10 papers found
Constrained Feedback Learning for Non-Stationary Multi-Armed Bandits
Shaoang Li, Jian Li
NeurIPS 2025 poster · arXiv:2509.15073
Efficient Top-m Data Values Identification for Data Selection
Xiaoqiang Lin, Xinyi Xu, See-Kiong Ng et al.
ICLR 2025 poster
Learning Across the Gap: Hybrid Multi-armed Bandits with Heterogeneous Offline and Online Data
Qijia He, Minghan Wang, Xutong Liu et al.
NeurIPS 2025 poster
Pareto Optimal Risk-Agnostic Distributional Bandits with Heavy-Tail Rewards
Kyungjae Lee, Dohyeong Kim, Taehyun Cho et al.
NeurIPS 2025 poster
Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems
Jongyeong Lee, Junya Honda, Shinji Ito et al.
NeurIPS 2025 poster · arXiv:2508.18604 · 2 citations
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction to Linear Bandits, and Limitations around Unknown Marginals
Ziyi Liu, Idan Attias, Daniel Roy
ICML 2024 poster
Factored-Reward Bandits with Intermediate Observations
Marco Mussi, Simone Drago, Marcello Restelli et al.
ICML 2024 poster
Federated Combinatorial Multi-Agent Multi-Armed Bandits
Fares Fourati, Mohamed-Slim Alouini, Vaneet Aggarwal
ICML 2024 poster
Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, Daniil Tiapkin, Etienne Boursier et al.
ICML 2024 poster
On Interpolating Experts and Multi-Armed Bandits
Houshuang Chen, Yuchen He, Chihao Zhang
ICML 2024 poster