Abstract.
Bayesian neural networks (BNNs), a family of neural networks
with a probability distribution placed on their weights, have the
advantage of being able to reason about uncertainty in their predictions
as well as in the data. Their deployment in safety-critical applications demands rigorous robustness guarantees. This paper summarises recent progress in developing algorithmic methods to obtain certifiable safety and robustness guarantees for BNNs, with a view to supporting design automation for systems incorporating BNN components.