Notes:
Available from: https://arxiv.org/abs/2003.06016
|
Abstract.
Generalization across environments is critical for the successful application of reinforcement learning algorithms to real-world challenges. In this paper, we consider the problem of learning abstractions that generalize in block MDPs, families of environments with a shared latent state space and dynamics structure over that latent space, but varying observations. We leverage tools from causal inference to propose a method of invariant prediction to learn state abstractions that generalize to novel observations in the multi-environment setting. We prove that, for certain classes of environments, this approach outputs, with high probability, a state abstraction corresponding to the causal feature set with respect to the return. We further provide more general bounds on model error and generalization error in the multi-environment setting, in the process showing a connection between causal variable selection and the state abstraction framework for MDPs. We give empirical evidence that our methods work in both linear and nonlinear settings, attaining improved generalization over single- and multi-task baselines.
|
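Method note: the "invariant prediction" the abstract refers to builds on Invariant Causal Prediction (ICP), which selects the features whose predictive relationship with a target stays stable across environments. Below is a minimal, hypothetical Python sketch of ICP-style variable selection in the linear setting. It is not the paper's block-MDP algorithm; the function names (`find_invariant_set`, `invariance_pvalue`), the choice of statistical tests, and the synthetic data are all illustrative assumptions.

```python
# Sketch of ICP-style feature selection across environments (assumption:
# linear models, numpy/scipy available). Not the paper's implementation.
import itertools

import numpy as np
from scipy import stats


def residuals(X, y, subset):
    """Residuals of an OLS fit of y on the features in `subset` (plus intercept)."""
    cols = np.column_stack([np.ones(len(y))] + [X[:, j] for j in subset])
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    return y - cols @ coef


def invariance_pvalue(X, y, envs, subset):
    """p-value for the null that residuals are identically distributed across
    environments: ANOVA for a mean shift, Levene for a variance shift,
    Bonferroni-combined."""
    r = residuals(X, y, subset)
    groups = [r[envs == e] for e in np.unique(envs)]
    p_mean = stats.f_oneway(*groups).pvalue
    p_var = stats.levene(*groups).pvalue
    return min(1.0, 2 * min(p_mean, p_var))


def find_invariant_set(X, y, envs, alpha=0.05):
    """Intersect all feature subsets whose residuals pass the invariance test:
    a conservative estimate of the causal feature set for the target y."""
    d = X.shape[1]
    accepted = [set(s)
                for k in range(d + 1)
                for s in itertools.combinations(range(d), k)
                if invariance_pvalue(X, y, envs, s) > alpha]
    return set.intersection(*accepted) if accepted else set()


if __name__ == "__main__":
    # Toy check on synthetic data: x0 causes the return-like target y in both
    # environments (and is intervened on in env 1); x1 only correlates with y
    # in env 0. ICP should keep feature 0 and discard the spurious feature 1.
    rng = np.random.default_rng(0)
    n = 500
    envs = np.repeat([0, 1], n)
    x0 = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(2.0, 2.0, n)])
    y = 2.0 * x0 + rng.normal(0.0, 0.1, 2 * n)      # invariant mechanism
    x1 = np.where(envs == 0,
                  y + rng.normal(0.0, 0.1, 2 * n),  # spurious in env 0 only
                  rng.normal(0.0, 1.0, 2 * n))
    X = np.column_stack([x0, x1])
    print(find_invariant_set(X, y, envs))           # expected: {0}
```

The intersection over accepted subsets is what makes ICP conservative: when the data cannot distinguish between candidate causal sets, only features common to all of them are returned, which matches the abstract's high-probability guarantee of recovering the causal feature set rather than a superset.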