Computer Science > Machine Learning

Abstract: Verifying correctness of deep neural networks (DNNs) is challenging. We study
a generic reachability problem for feed-forward DNNs which, for a given set of
inputs to the network and a Lipschitz-continuous function over its outputs,
computes lower and upper bounds on the function's values. Because the network
and the function are Lipschitz continuous, all values in the interval between
the lower and upper bound are reachable. We show how to obtain the safety
verification problem, the output range analysis problem, and a robustness
measure by instantiating the reachability problem. We present a novel algorithm
based on adaptive nested optimisation to solve the reachability problem. The
technique has been implemented and evaluated on a range of DNNs, demonstrating
its efficiency, scalability and ability to handle a broader class of networks
than state-of-the-art verification approaches.
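The Lipschitz-based bounding idea can be illustrated in one dimension: if f is Lipschitz continuous with constant L, then evaluating f at the endpoints of an interval certifies a lower bound on f over the whole interval, and adaptively splitting the tightest intervals converges to the true minimum. Below is a minimal Piyavskii-Shubert-style sketch of this principle; the function names and the scalar setting are illustrative assumptions, not the paper's adaptive nested optimisation algorithm.

```python
import heapq

def lipschitz_minimise(f, a, b, L, tol=1e-3, max_iter=10000):
    """Return (certified lower bound, best observed value) of f on [a, b],
    assuming f is Lipschitz continuous with constant L."""
    def interval_bound(lo, hi, f_lo, f_hi):
        # For x in [lo, hi]: f(x) >= f_lo - L*(x - lo) and f(x) >= f_hi - L*(hi - x).
        # The two lines cross at value (f_lo + f_hi - L*(hi - lo)) / 2,
        # which is therefore a sound lower bound on min f over [lo, hi].
        return 0.5 * (f_lo + f_hi - L * (hi - lo))

    fa, fb = f(a), f(b)
    best = min(fa, fb)  # best upper bound on the minimum seen so far
    heap = [(interval_bound(a, b, fa, fb), a, b, fa, fb)]
    for _ in range(max_iter):
        bound, lo, hi, flo, fhi = heapq.heappop(heap)
        if best - bound <= tol:
            # Gap between certified lower bound and best value is small enough.
            heapq.heappush(heap, (bound, lo, hi, flo, fhi))
            break
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        best = min(best, fm)
        # Split the interval; each half gets a (tighter) certified bound.
        heapq.heappush(heap, (interval_bound(lo, mid, flo, fm), lo, mid, flo, fm))
        heapq.heappush(heap, (interval_bound(mid, hi, fm, fhi), mid, hi, fm, fhi))
    return heap[0][0], best
```

Running the same procedure on -f yields an upper bound, so together the two runs bracket the reachable interval of function values, which is the essence of the reachability formulation.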

Comments: This is the long version of the conference paper accepted at IJCAI-2018. GitHub: this https URL