"multi-armed bandits" Papers
13 papers found
Constrained Feedback Learning for Non-Stationary Multi-Armed Bandits
Shaoang Li, Jian Li
NeurIPS 2025 poster · arXiv:2509.15073
Efficient Top-m Data Values Identification for Data Selection
Xiaoqiang Lin, Xinyi Xu, See-Kiong Ng et al.
ICLR 2025 poster
Evolution of Information in Interactive Decision Making: A Case Study for Multi-Armed Bandits
Yuzhou Gu, Yanjun Han, Jian Qian
NeurIPS 2025 oral · arXiv:2503.00273 · 1 citation
Learning Across the Gap: Hybrid Multi-armed Bandits with Heterogeneous Offline and Online Data
Qijia He, Minghan Wang, Xutong Liu et al.
NeurIPS 2025 poster
Pareto Optimal Risk-Agnostic Distributional Bandits with Heavy-Tail Rewards
Kyungjae Lee, Dohyeong Kim, Taehyun Cho et al.
NeurIPS 2025 poster
Revisiting Follow-the-Perturbed-Leader with Unbounded Perturbations in Bandit Problems
Jongyeong Lee, Junya Honda, Shinji Ito et al.
NeurIPS 2025 poster · arXiv:2508.18604 · 2 citations
Causal Bandits: The Pareto Optimal Frontier of Adaptivity, a Reduction to Linear Bandits, and Limitations around Unknown Marginals
Ziyi Liu, Idan Attias, Daniel Roy
ICML 2024 poster
Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
Nikolai Karpov, Qin Zhang
AAAI 2024 paper · arXiv:2301.11442 · 2 citations
Factored-Reward Bandits with Intermediate Observations
Marco Mussi, Simone Drago, Marcello Restelli et al.
ICML 2024 poster
Federated Combinatorial Multi-Agent Multi-Armed Bandits
Fares Fourati, Mohamed-Slim Alouini, Vaneet Aggarwal
ICML 2024 poster
Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, Daniil Tiapkin, Etienne Boursier et al.
ICML 2024 poster
Leveraging (Biased) Information: Multi-armed Bandits with Offline Data
Wang Chi Cheung, Lixing Lyu
ICML 2024 spotlight
On Interpolating Experts and Multi-Armed Bandits
Houshuang Chen, Yuchen He, Chihao Zhang
ICML 2024 poster