EU Horizon 2020
[WLP+21] Matthew Wicker, Luca Laurenti, Andrea Patane, Zhuotong Chen, Zheng Zhang and Marta Kwiatkowska. Bayesian Inference with Certifiable Adversarial Robustness. In International Conference on Artificial Intelligence and Statistics (AISTATS'21), PMLR. To appear. April 2021. [pdf] [bib]
Downloads: pdf (5.19 MB), bib
Abstract. We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of nonconvex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in ϵ-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.
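The core idea — replacing the standard cross-entropy with a loss on worst-case logits over an ϵ-ball, bounded by constraint relaxation — can be illustrated with a minimal interval-bound-propagation sketch. This is a generic illustration of the worst-case cross-entropy construction, not the paper's implementation; the single affine layer, the function names, and all numerical values are illustrative assumptions.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Propagate the box [lo, hi] through an affine layer W x + b
    # using the standard midpoint/radius interval relaxation.
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad
    return mid_out - rad_out, mid_out + rad_out

def robust_cross_entropy(lo_logits, hi_logits, label):
    # Worst-case logits inside the relaxation: the true class takes
    # its lower bound, every other class its upper bound.
    worst = hi_logits.copy()
    worst[label] = lo_logits[label]
    worst = worst - worst.max()  # numerical stability
    return -np.log(np.exp(worst[label]) / np.exp(worst).sum())

# Toy example: one affine "network" and an eps-ball around an input x.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 4)), np.zeros(3)
x, eps, label = rng.standard_normal(4), 0.1, 1
lo, hi = interval_linear(x - eps, x + eps, W, b)
loss = robust_cross_entropy(lo, hi, label)
```

By construction the worst-case loss upper-bounds the clean cross-entropy at x, so minimising it (here as the error model inside approximate Bayesian inference over the weights) pushes the posterior towards parameters whose predictions are certifiably stable on the whole ϵ-ball.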