Topic: Verification of Deep Learning Systems
Time: Jan 3, 10:00-11:00
Location: Room 1A-200, SIST Building
Speaker: Assistant Prof. Xiaowei HUANG
Abstract:
Deep neural networks (DNNs) have achieved impressive experimental results in image classification, but can be surprisingly unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this instability raises concerns about their safety. In this talk, I will report on two recent works towards the verification of feed-forward multi-layer DNNs.

The first is a novel automated verification framework based on Satisfiability Modulo Theories (SMT). We enable an exhaustive search of a region of the input space by employing discretisation, and propagate the analysis through the network layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks.

The second is a feature-guided black-box approach that requires no knowledge of the network (architecture, parameters, etc.). The algorithm employs object detection techniques such as SIFT (Scale-Invariant Feature Transform) to extract features from an image. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, and show that, theoretically, the game converges to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. Using Monte Carlo tree search, we gradually explore the game state space to search for adversarial examples.
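To give a flavour of the first approach, the sketch below (not the speaker's implementation) poses a single SMT query with Z3's Python bindings: does a small region around a reference input contain a point the network misclassifies? The 2-2-2 toy network, its weights, and the radius eps are invented for illustration; the actual framework additionally discretises the region and the family of manipulations, and propagates the analysis layer by layer.

    # Minimal sketch, assuming z3py (pip install z3-solver).
    from z3 import Real, Solver, And, If, sat

    def relu(z):
        # Encode ReLU as an if-then-else term the solver understands.
        return If(z > 0, z, 0)

    # Invented toy network: 2 inputs -> 2 hidden ReLU units -> 2 class scores.
    W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -0.5]
    W2, b2 = [[-1.0, 1.0], [1.0, -2.0]], [0.0, 0.1]

    x0, eps = [0.6, 0.4], 0.1   # reference input (class 0) and region radius

    x = [Real('x_%d' % i) for i in range(2)]
    h = [relu(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = [sum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]

    s = Solver()
    # Constrain the search to the region around x0 ...
    s.add(And(*[And(x[i] >= x0[i] - eps, x[i] <= x0[i] + eps) for i in range(2)]))
    # ... and ask for a misclassification inside it.
    s.add(y[1] > y[0])

    if s.check() == sat:
        print('adversarial example:', [s.model().eval(xi) for xi in x])
    else:
        print('no adversarial example in this region (unsat)')

If the query is satisfiable, the model yields a concrete adversarial input; if it is unsatisfiable, no point in the encoded region flips the classification.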
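The second approach can be sketched in the same hedged spirit: SIFT keypoints (here via OpenCV) decide where to manipulate pixels, and the network is queried purely as a black box through its predictions. The two-player game and Monte Carlo tree search of the actual work are replaced below by naive random sampling, and the names classify, n_trials, and delta are hypothetical.

    # Simplified stand-in, assuming numpy and opencv-python >= 4.4 (for SIFT_create).
    import numpy as np
    import cv2

    def feature_guided_attack(image, classify, n_trials=500, delta=16):
        """image: BGR uint8 array; classify: black-box function image -> label."""
        rng = np.random.default_rng(0)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        kps = cv2.SIFT_create().detect(gray, None)  # salient feature locations
        if not kps:
            return None
        original = classify(image)
        best = None
        for _ in range(n_trials):
            kp = kps[rng.integers(len(kps))]
            cx, cy = int(kp.pt[0]), int(kp.pt[1])
            perturbed = image.astype(np.int16)  # copy with headroom for +/- delta
            # Manipulate a small patch of pixels around the chosen keypoint.
            perturbed[max(0, cy - 1):cy + 2, max(0, cx - 1):cx + 2] += \
                rng.integers(-delta, delta + 1)
            perturbed = np.clip(perturbed, 0, 255).astype(np.uint8)
            if classify(perturbed) != original:
                dist = int(np.abs(perturbed.astype(int) - image).sum())
                if best is None or dist < best[0]:
                    best = (dist, perturbed)  # keep the smallest change found
        return best  # None if no label flip was found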
Bio:
Xiaowei Huang is a Lecturer (Assistant Professor) in the Department of Computer Science at the University of Liverpool, UK. Prior to Liverpool, he worked at the University of Oxford in the UK and the University of New South Wales in Australia, and received his PhD from the Chinese Academy of Sciences. Dr Huang's research concerns the correctness (e.g., safety and trustworthiness) of autonomous systems, mainly including (1) safety verification of machine learning techniques (such as neural-network-based deep learning systems), and (2) logic-based approaches for the specification, verification, and synthesis of autonomous multi-agent systems. To date, he has published more than 40 research papers in computer science journals such as Artificial Intelligence and at top conferences such as CAV, TACAS, AAAI, and IJCAI. His webpage is: https://cgi.csc.liv.ac.uk/~xiaowei