Multimodal Bandits: Regret Lower Bounds and Optimal Algorithms
0 citations · ranked #2324 of 5858 papers in NeurIPS 2025
Abstract
We consider a stochastic multi-armed bandit problem with i.i.d. rewards where the expected reward function is multimodal with at most $m$ modes. We propose the first known computationally tractable algorithm for computing the solution to the Graves-Lai optimization problem, which in turn enables the implementation of asymptotically optimal algorithms for this bandit problem.
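The paper's contribution is solving the Graves-Lai optimization under the multimodal structure; as context, a minimal sketch of the *unstructured* special case may help, where the Graves-Lai constant reduces to the classical Lai-Robbins form, a sum over suboptimal arms of gap divided by KL divergence. The Gaussian reward model, unit variance, and all function names below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of the unstructured Graves-Lai (Lai-Robbins) regret constant
# for Gaussian rewards with unit variance. Illustrative only; the paper's
# algorithm handles the harder multimodal-structured optimization.

def gaussian_kl(mu: float, nu: float, sigma2: float = 1.0) -> float:
    """KL divergence between N(mu, sigma2) and N(nu, sigma2)."""
    return (mu - nu) ** 2 / (2.0 * sigma2)

def lai_robbins_constant(means: list[float]) -> float:
    """Sum over suboptimal arms of gap / KL(arm, best arm).

    This is the coefficient of log(T) in the asymptotic regret lower
    bound when no structural information relates the arms.
    """
    best = max(means)
    return sum(
        (best - mu) / gaussian_kl(mu, best)
        for mu in means
        if mu < best
    )

# Example: gaps of 0.5 and 0.8 contribute 2/gap each (Gaussian case),
# so the constant is 2/0.5 + 2/0.8 = 6.5.
c = lai_robbins_constant([1.0, 0.5, 0.2])
print(c)  # 6.5
```

With structure such as multimodality, the optimization couples the arms' exploration rates and no longer decomposes arm-by-arm, which is why a tractable solver is nontrivial.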
Citation History: 0 citations recorded at each check from Jan 25 to Jan 28, 2026.