chapter‹Reference Manual (thy)›
theory NN_Manual
  imports NN_Main
begin
text‹\label{ch:manual}›
section‹Importing Neural Networks and Data›
paragraph‹@{command "import_data_file"}.›
text‹ For importing test or training data, we provide the following command:
\<^rail>‹ @@{command "import_data_file"} data_filename 'defining' data_def ›
where ∗‹data_filename› is the path (file name) of the data file to be read and ∗‹data_def›
is the name to which the HOL definition representing the data is bound. The data file should
contain a simple two-dimensional array of real numbers, as produced, e.g., by
NumPy's \<^cite>‹"harris.ea:array:2020"› ▩‹savetxt› function:
\begin{python}
import numpy as np
training_data = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")
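# write the 2D array to a plain-text file, one row per line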
np.savetxt('training_data.out', training_data)
\end{python}
For further information, please see
🌐‹https://numpy.org/doc/stable/reference/generated/numpy.savetxt.html›.
›
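text‹ For example, assuming the file ▩‹training_data.out› produced by the Python snippet
above, the data can be imported as follows (the constant name ∗‹training_data› is arbitrary):
@{theory_text [display] ‹import_data_file "training_data.out" defining training_data›}
›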
paragraph‹@{command "import_TensorFlow"}.›
text‹ For importing trained neural networks, we provide the following command:
\<^rail>‹ @@{command "import_TensorFlow"} name 'file' data_filename 'as'
('json' | 'digraph' | 'digraph_single' | 'seq_layer')›
The input should be in JSON \<^cite>‹"ecma:json:2017"› format with the weights stored
in a separate file. This format is used by TensorFlow.js~\<^cite>‹"smilkov.ea:tensorflowjs:2019"›
and also supported by the corresponding Python module. For example:
\begin{python}
import tensorflowjs as tfjs
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
training_data = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")
target_data = np.array([[1],[0],[0],[0]], "float32")
model = Sequential()
# one dense output unit; input_dim=2 matches the two input features
model.add(Dense(1, input_dim=2, activation='hard_sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
model.fit(training_data, target_data, epochs=7500, verbose=0)
# save the trained model as JSON (with an external file for the weights)
tfjs.converters.save_keras_model(model, ".")
\end{python}
›
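text‹ The call to ▩‹save_keras_model› stores the network as ▩‹model.json›, together with a
separate binary file for the weights. Assuming these files, the network can be imported as
follows, where ∗‹nor_model› is an arbitrary name for the resulting HOL representation and the
variant after ∗‹as› (here ∗‹seq_layer›) selects the internal model representation:
@{theory_text [display] ‹import_TensorFlow nor_model file "model.json" as seq_layer›}
›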
paragraph‹Configuration.› text ‹We provide several configuration attributes:
▪ The attribute @{attribute "nn_proof_mode"} (default @{term "nbe"}) configures whether the
  import of neural networks (i.e., @{command "import_TensorFlow"}) should
  ▪ not generate any proofs (@{term "skip"}),
  ▪ generate the proofs axiomatically, without actually proving them (@{term "sorry"}),
  ▪ use code generation (i.e., the proof method @{method "eval"}) if possible (@{term "eval"}),
  ▪ use normalization by evaluation (i.e., the proof method @{method "normalization"})
    if possible (@{term "nbe"}), or
  ▪ avoid using the code generator (@{term "safe"}).
While, in many scenarios, the proof method @{method "eval"} is much faster than the
alternative approaches, its safety relies on the configuration of the code generator. A more
detailed discussion can be found in Section 5 of \<^cite>‹"isabelle:codegen:2021"›.
›
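text‹ These attributes can be changed using Isabelle's standard syntax for configuration
options, e.g. (a sketch, assuming the proof modes are given as string values):
@{theory_text [display] ‹declare [[nn_proof_mode = "eval"]]›}
›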
section‹Proof Methods›
text‹Currently, we provide two domain-specific proof methods:
▪ The method @{method "predict_layer"} is, in essence, a simplification using the
  theorem set @{text "nn_layer"}, which is configured automatically by the import of
  layer-based models (see the usage sketch below).
▪ The method @{method "forced_approximation"} is mainly intended for debugging and
  experimentation; it repeatedly applies the method @{method "approximation"}.
›
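text‹ As a usage sketch: assuming the import bound the network to a constant ∗‹nor_model›
and that a prediction function ▩‹predict› is available (both names are purely illustrative),
a property of the network could be stated and proven along these lines:
@{theory_text [display] ‹lemma "predict nor_model [0, 0] = [1]"
  by predict_layer›}
›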
end