Towards the assurance of AI-based systems via formal verification
A major challenge in deploying ML-based systems, such as ML-based computer vision, is the inherent difficulty of ensuring their performance across the operational design domain. The standard approach consists of extensively testing models on sampled inputs. However, testing is inherently limited in coverage and expensive in several domains. Novel verification methods provide guarantees that a neural model meets its specifications in dense neighborhoods of selected inputs. For example, by using verification methods we can establish whether a model is robust with respect to infinitely many lighting perturbations, or to particular noise patterns, in the vicinity of an input. Verification methods can also be tailored to specifications in the latent space, establishing the robustness of models against semantic perturbations not definable in the input space (3D pose changes, background changes, etc.). Additionally, verification methods can be paired with learning to obtain robust learning methods capable of generating models inherently more robust than those derived with standard methods. In this presentation I will succinctly cover the key theoretical results leading to some of the present ML verification technology, illustrate the resulting toolsets and their capabilities, and describe some of the use cases developed with our colleagues at Boeing Research, including centerline distance estimation, object detection, and runway detection. I will argue that verification and robust learning can be used to obtain models that are inherently more robust and better understood than those produced by present learning and testing approaches, thereby unlocking the deployment of ML applications in industry.
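To make the kind of guarantee mentioned in the abstract concrete, below is a minimal sketch of one standard certification technique, interval bound propagation (IBP): it soundly over-approximates the outputs of a small ReLU network over an entire L-infinity ball around an input, so a positive answer constitutes a proof of local robustness covering infinitely many perturbations at once. This is an illustration only, not the speaker's toolset; the network weights and the radius eps are hypothetical values chosen for demonstration.

```python
# Illustrative sketch of local robustness certification via interval
# bound propagation (IBP). All weights and the radius eps below are
# hypothetical; a real toolset uses much tighter bounding methods.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return (W_pos @ lo + W_neg @ hi + b,
            W_pos @ hi + W_neg @ lo + b)

def ibp_bounds(x, eps, layers):
    """Bound the logits of a feed-forward ReLU net over the
    L-infinity ball of radius eps around x.
    `layers` is a list of (W, b) pairs."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def certified_robust(x, eps, layers):
    """True if the network provably keeps its predicted class for
    every input within eps of x. The check is sound but incomplete:
    False may mean 'unknown' rather than 'not robust'."""
    out = x
    for i, (W, b) in enumerate(layers):
        out = W @ out + b
        if i < len(layers) - 1:
            out = np.maximum(out, 0.0)
    pred = int(np.argmax(out))
    lo, hi = ibp_bounds(x, eps, layers)
    # Robust iff the predicted logit's lower bound beats every
    # other logit's upper bound.
    return all(lo[pred] > hi[j] for j in range(len(lo)) if j != pred)

# Demo on a randomly initialised 4-8-3 network.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
x = rng.standard_normal(4)
print(certified_robust(x, eps=0.01, layers=layers))
```

Because the interval bounds are an over-approximation, a negative answer only means the bounds were too loose to decide; complete verification methods refine such bounds at higher computational cost.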
Speaker
Dr. Lomuscio is Professor of AI Safety and Director of the Safe AI Lab in the Department of Computing, Imperial College London, UK. He is founding co-director of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence. His research interests concern the development of verification methods for artificial intelligence. Since 2000 he has pioneered the development of formal methods for the verification of autonomous and multi-agent systems, both symbolic and ML-based. He has published over 200 papers in leading AI and formal methods conferences and journals. He is an ACM Distinguished Member, a Fellow of the European Association for Artificial Intelligence, and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. Prof. Lomuscio is also the founder and CEO of Safe Intelligence, a VC-backed Imperial College London spinout helping users build and assure robust ML systems.
Sponsors: IEEE Aerospace and Electronic Systems Society. Co-sponsored by AESS Boston (Chair: Dr. Francesca Scire-Scappuzzo) and AESS London (Chair: Dr. Julien Le Kernec).