EU Horizon 2020
[LVK+20] Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal and Benjamin Bloem-Reddy. On the Benefits of Invariance in Neural Networks. Technical report, arXiv. Presented as a poster at the NeurIPS 2019 Machine Learning with Guarantees Workshop. May 2020. [pdf] [bib]
Downloads: pdf (1.2 MB), bib
Notes: Available here: https://arxiv.org/abs/2005.00178
Abstract. Many real-world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning. While the literature contains a variety of methods for incorporating invariance into models, theoretical understanding is limited and there is no principled way to assess when one method should be preferred over another. In this work, we analyze the benefits and limitations of two widely used approaches in deep learning in the presence of invariance: data augmentation and feature averaging. We prove that training with data augmentation leads to better estimates of risk and of its gradients, and we provide a PAC-Bayes generalization bound for models trained with data augmentation. We also show that, compared to data augmentation, feature averaging reduces generalization error when used with convex losses, and tightens PAC-Bayes bounds. We provide empirical support for these theoretical results, including a demonstration of why generalization may not improve when training with data augmentation: the ‘learned invariance’ fails outside of the training distribution.
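The two approaches contrasted in the abstract can be illustrated with a small sketch (not the paper's code): data augmentation applies a randomly sampled symmetry transformation to each training batch, while feature averaging averages the model's outputs over all transformations at prediction time, making the predictor exactly invariant. The model, loss, and choice of group (the four 90-degree image rotations) below are illustrative assumptions, not taken from the paper.

# Illustrative sketch only: data augmentation vs. feature averaging for a
# finite symmetry group (four 90-degree rotations). Model and loss are
# placeholder assumptions chosen for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

# The group G: 90-degree rotations acting on (batch, channels, H, W) images.
group = [lambda x, k=k: torch.rot90(x, k, dims=(-2, -1)) for k in range(4)]

# A toy classifier; any network could stand in here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def augmented_loss(x, y):
    """Data augmentation: apply a randomly sampled group element to the batch,
    so the expected training loss averages over the orbit of the data."""
    g = group[torch.randint(len(group), (1,)).item()]
    return F.cross_entropy(model(g(x)), y)

def feature_averaged_logits(x):
    """Feature averaging: average the model's outputs over all group elements,
    giving an exactly invariant predictor at the cost of |G| forward passes."""
    return torch.stack([model(g(x)) for g in group]).mean(dim=0)

# Toy usage on random data.
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
print("augmented loss:", augmented_loss(x, y).item())
print("averaged logits shape:", feature_averaged_logits(x).shape)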