Notes:
A preprint version can be found at https://arxiv.org/abs/2204.14170.
Abstract.
Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data. In this work, we present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference that relies on probabilistic circuits as the representation of our posterior belief. In contrast to sample-based posterior approximations, our representation can capture a much richer space of DAGs, while also being able to tractably reason about the uncertainty through a range of useful inference queries. We empirically show how probabilistic circuits can be used as an augmented representation for structure learning methods, leading to improvement in both the quality of inferred structures and posterior uncertainty. Experimental results on conditional query answering further demonstrate the practical utility of the representational capacity of TRUST.