[Submitted on 16 May 2023 (v1), last revised 22 Aug 2023 (this version, v2)]

Title: Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage
Authors: Jose Blanchet and 3 other authors

Abstract: In this paper, we study distributionally robust offline reinforcement learning (robust offline RL), which seeks to find, purely from an offline dataset, an optimal policy that performs well in perturbed environments. Specifically, we propose a generic algorithmic framework called Doubly Pessimistic Model-based Policy Optimization ($P^2MPO$), which features a novel combination of a flexible model estimation subroutine and a doubly pessimistic policy optimization step. Notably, the double pessimism principle is crucial for overcoming the distributional shifts incurred by (i) the mismatch between the behavior policy and the target policies, and (ii) the perturbation of the nominal model. Under certain accuracy conditions on the model estimation subroutine, we prove that $P^2MPO$ is sample-efficient with robust partial coverage data, which only requires the offline data to have good coverage of the distributions induced by the optimal robust policy and by the perturbed models around the nominal model.
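A hedged sketch of the doubly pessimistic step, in notation assumed here rather than taken verbatim from the paper ($\widehat{\mathcal{M}}$: confidence set of nominal models returned by the model estimation subroutine; $\mathcal{U}^{\sigma}(M)$: perturbation set of radius $\sigma$ around a model $M$; $V^{\pi}_{\tilde{M}}$: value of policy $\pi$ under model $\tilde{M}$):
$$
\hat{\pi} \;=\; \operatorname*{argmax}_{\pi \in \Pi} \; \min_{M \in \widehat{\mathcal{M}}} \; \inf_{\tilde{M} \in \mathcal{U}^{\sigma}(M)} \; V^{\pi}_{\tilde{M}} .
$$
Here the outer minimum over $\widehat{\mathcal{M}}$ guards against statistical error in the offline estimate, while the inner infimum over $\mathcal{U}^{\sigma}(M)$ guards against the perturbation of the environment.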
By tailoring specific model estimation subroutines to concrete examples of RMDPs, including tabular RMDPs, factored RMDPs, and kernel and neural RMDPs, we prove that $P^2MPO$ enjoys a $\tilde{\mathcal{O}}(n^{-1/2})$ convergence rate, where $n$ is the dataset size. We highlight that all of these examples, except tabular RMDPs, are identified and proven tractable for the first time in this work. Furthermore, we continue our study of robust offline RL in robust Markov games (RMGs). By extending the double pessimism principle identified for single-agent RMDPs, we propose another algorithm framework that can efficiently find the robust Nash equilibria among players using only robust unilateral (partial) coverage data. To the best of our knowledge, this work proposes the first general learning principle -- double pessimism -- for robust offline RL and shows that it is provably efficient with general function approximation.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2305.09659 [cs.LG] (or arXiv:2305.09659v2 [cs.LG] for this version)
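
As a terminology note on the multi-agent extension mentioned in the abstract above (an editorial gloss, not the paper's exact formulation): a robust Nash equilibrium is a joint policy $\pi^{\star} = (\pi^{\star}_1, \ldots, \pi^{\star}_m)$ from which no player can improve its worst-case value by deviating unilaterally. Writing $\mathcal{U}_i$ for the uncertainty set faced by player $i$ and $V^{\,\cdot}_{i,\tilde{M}}$ for player $i$'s value under model $\tilde{M}$, this reads
$$
\inf_{\tilde{M} \in \mathcal{U}_i} V^{\pi^{\star}_i,\, \pi^{\star}_{-i}}_{i,\tilde{M}} \;\ge\; \sup_{\pi_i}\, \inf_{\tilde{M} \in \mathcal{U}_i} V^{\pi_i,\, \pi^{\star}_{-i}}_{i,\tilde{M}} \qquad \text{for every player } i.
$$
Robust unilateral (partial) coverage then, roughly, asks the offline data to cover the distributions induced by such unilateral deviations from the equilibrium under the perturbed models.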

Submission history

From: Han Zhong
[v1] Tue, 16 May 2023 17:58:05 UTC (722 KB)
[v2] Tue, 22 Aug 2023 14:23:10 UTC (944 KB)