Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms
Abstract
Inverse reinforcement learning (IRL) aims to recover the reward function of an expert agent from demonstrations of behavior. It is well known that the IRL problem is fundamentally ill-posed, i.e., many reward functions can explain the demonstrations. For this reason, IRL has recently been reframed in terms of estimating the feasible reward set (Metelli et al., 2021), thus postponing the selection of a single reward. However, so far, the available formulations and algorithmic solutions have been proposed and analyzed mainly for the online setting, where the learner can interact with the environment and query the expert at will. This is clearly unrealistic in most practical applications, where the availability of an offline dataset is a much more common scenario. In this paper, we introduce a novel notion of feasible reward set capturing the opportunities and limitations of the offline setting, and we analyze the complexity of its estimation. This requires the introduction of an original learning framework that copes with the intrinsic difficulty of the setting, for which data coverage is not under control. Then, we propose two computationally and statistically efficient algorithms, IRLO and PIRLO, for addressing the problem. In particular, the latter adopts a specific form of pessimism to enforce the novel, desirable property of inclusion monotonicity of the delivered feasible set. With this work, we aim to provide a panorama of the challenges of the offline IRL problem and how they can be fruitfully addressed.
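As a rough sketch of the underlying notion referenced above (the standard online formulation of Metelli et al., 2021, written here with assumed notation $\mathcal{M}$, $\pi^E$, and $J$ rather than symbols taken from the abstract), the feasible reward set collects all rewards under which the expert's policy is optimal in the given environment:

% Informal sketch of the feasible reward set (notation assumed, not from the abstract):
% \mathcal{M} denotes the reward-free MDP, \pi^E the expert policy, and
% J(\pi; r, \mathcal{M}) the expected return of policy \pi under reward r in \mathcal{M}.
\[
  \mathcal{R}_{\mathcal{M},\pi^E}
  \;=\;
  \Bigl\{\, r \;:\; \pi^E \in \operatorname*{arg\,max}_{\pi}\, J(\pi;\, r,\, \mathcal{M}) \,\Bigr\}.
\]

The offline variant studied in the paper modifies this set to account for the fact that the environment and the expert are observed only through a fixed dataset with uncontrolled coverage.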