Abstract
Deep learning, i.e., machine learning using neural networks, is used successfully in many application areas. Still, its use in safety-critical or security-critical applications is limited, due to the lack of testing and verification techniques.
We address this problem by formalizing an important class of neural networks, feed-forward neural networks, in Isabelle/HOL. We present two different approaches to formalizing feed-forward networks, show their equivalence, and demonstrate their use in verifying safety and correctness properties of various examples. Moreover, we not only provide a formal model that allows reasoning about feed-forward neural networks, but also a datatype package for Isabelle/HOL that supports importing models from TensorFlow.js.
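To give a flavor of the layer-based view, the following is a minimal, hypothetical Isabelle/HOL sketch (not the definitions used in this entry, whose theories are listed below): a dense layer is modeled as a weight matrix and a bias vector, and a network as a list of layers applied in sequence with a ReLU activation. All names are illustrative only.

    theory NN_Layer_Sketch
      imports Complex_Main
    begin

    (* Hypothetical sketch: a dense layer as a weight matrix
       (list of rows) together with a bias vector. *)
    record layer =
      weights :: "real list list"
      bias    :: "real list"

    definition relu :: "real ⇒ real" where
      "relu x = max 0 x"

    (* Dot product of two vectors represented as lists. *)
    definition dot :: "real list ⇒ real list ⇒ real" where
      "dot xs ys = sum_list (map (λ(x, y). x * y) (zip xs ys))"

    (* Apply one layer: each output is relu of (row · input + bias entry). *)
    definition apply_layer :: "layer ⇒ real list ⇒ real list" where
      "apply_layer L xs =
         map (λ(row, b). relu (dot row xs + b)) (zip (weights L) (bias L))"

    (* A feed-forward network as a list of layers, applied left to right. *)
    definition apply_network :: "layer list ⇒ real list ⇒ real list" where
      "apply_network Ls xs = foldl (λv L. apply_layer L v) xs Ls"

    end

In the actual formalization, this list/matrix-based representation coexists with a graph-based one, and the two are shown to be equivalent.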
License
Topics
Related publications
- Brucker, A. D., & Stell, A. (2023). Verifying Feedforward Neural Networks for Classification in Isabelle/HOL. Formal Methods, 427–444. https://doi.org/10.1007/978-3-031-27481-7_24
- Stell, A. (2025). Trustworthy Machine Learning for High-Assurance Systems. PhD thesis, University of Exeter, UK.
Session Neural_Networks
- TensorFlow_Import
- NN_Common
- NN_Digraph
- NN_Digraph_Layers
- Activation_Functions
- Prediction_Utils
- Properties
- NN_Digraph_Main
- NN_Layers
- NN_Layers_List_Main
- Matrix_Utils
- NN_Lipschitz_Continuous
- Prediction_Utils_Matrix
- Properties_Matrix
- NN_Layers_Matrix_Main
- NN_Layers_Main
- NN_Main
- NN_Manual
- Compass_Digraph
- Compass_Layers_Matrix
- Compass_Layers_List
- Grid_Layers
- Grid_Layers_List
- Grid_Layers_Matrix