Creating a Neural Network

Requirements

You need to have V installed; you can verify that by running v run . You also need the vsl module, which you can install with v install vsl. Then clone this module's repository and create a V file in it; that is the file we are going to work in.

Structure of the code

We are going to create and train a neural network that is able to perform the XOR logic gate. The result can be found here.
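
As a reminder, the XOR gate outputs 1 exactly when its two inputs differ:

a | b | a XOR b
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0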

First we import the neural network module and create a main function. Then we create the neural network that we are going to train. The 0 is the seed for the random weights and biases, so that we get the same neural network on every run.

import neural_networks as nn

fn main() {
    mut model := nn.NeuralNetwork.new(0)
}

Then we add the layers that we want our network to have. We need our network to have 2 inputs and 1 output to match the XOR gate. So we first add a Dense layer with 2 inputs and 3 outputs; 3 is arbitrary but works well. The two numbers after the number of inputs/outputs are the ranges for the initialisation of the random weights and biases.

Then we add an Activation layer; Dense and Activation layers are complementary, so we add one Activation per Dense layer. The activation function we will use for this layer is leaky ReLU, as it is convenient. We then add a second Dense layer with 3 inputs and 1 output, and the Activation layer that goes with it.

model.add_layer(nn.Dense.new(2, 3, 0.7, 0.65))
model.add_layer(nn.Activation.new(.leaky_relu))
model.add_layer(nn.Dense.new(3, 1, 0.6, 0.65))
model.add_layer(nn.Activation.new(.leaky_relu))
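
For intuition, here is roughly what a leaky ReLU computes: it passes positive values through unchanged and scales negative values by a small slope. The function below is only a sketch, not part of the module, and the 0.01 negative slope is an assumption (the module may use a different coefficient).

// Sketch only, not the module's implementation.
// The 0.01 negative slope is an assumption.
fn leaky_relu_sketch(x f64) f64 {
    return if x > 0 { x } else { 0.01 * x }
}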

Then we need to create the parameters for the training. The learning rate, momentum, and number of epochs are found by trial and error; these values work well. The cost function that we will use is the Mean Squared Error (MSE).

We then add the dataset that the network will use for its training, and the same for the testing. In a real project, the test data would be kept unseen during training, so that we can measure how well the network does in situations it has never encountered; but since XOR has only 4 possible inputs, we cannot show the network unseen data, so we use the same data for both.

The neural network will print its performance every print_interval epochs. For the test parameters: every training_interval epochs it will run the test dataset and print the results from the print_start-th element of the test dataset to the print_end-th one.

training_parameters := nn.BackpropTrainingParams{
    learning_rate: 0.37
    momentum: 0.9
    nb_epochs: 300
    print_interval: 25
    cost_function: .mse // mean squared error
    training: nn.Dataset{
        inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
        expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
    }
    test: nn.Dataset{
        inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
        expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
    }
    test_params: nn.TestParams{
        print_start: 0
        print_end: 3
        training_interval: 100
    }
}
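
For reference, the MSE cost is the average of the squared differences between the network's outputs and the expected outputs. The function below only illustrates the formula; it is not the module's code.

// Illustration of the mean squared error, not the module's implementation.
fn mse_sketch(outputs []f64, expected []f64) f64 {
    mut sum := 0.0
    for i, o in outputs {
        diff := o - expected[i]
        sum += diff * diff
    }
    return sum / f64(outputs.len)
}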

Now it's time to train the network!

model.train(training_parameters)

We can also save the model by adding this to the end of the program:

model.save_model('saveXOR')

And to load a model (to use it or to train it further), you just need to create an empty model like we did at the start and then do:

model.load_model('saveXOR')
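
For example, a minimal program that restores the saved network could look like this, assuming (as described above) that load_model fills in an empty model from the save file:

import neural_networks as nn

fn main() {
    // Create an empty model, as at the start.
    mut model := nn.NeuralNetwork.new(0)
    // Restore the network saved in 'saveXOR'.
    model.load_model('saveXOR')
    // The model can now be used, or trained further with model.train(...).
}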

There it is: we can just run the program and it will train!
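
For reference, here is the whole program assembled from the snippets above:

import neural_networks as nn

fn main() {
    mut model := nn.NeuralNetwork.new(0)
    model.add_layer(nn.Dense.new(2, 3, 0.7, 0.65))
    model.add_layer(nn.Activation.new(.leaky_relu))
    model.add_layer(nn.Dense.new(3, 1, 0.6, 0.65))
    model.add_layer(nn.Activation.new(.leaky_relu))
    training_parameters := nn.BackpropTrainingParams{
        learning_rate: 0.37
        momentum: 0.9
        nb_epochs: 300
        print_interval: 25
        cost_function: .mse // mean squared error
        training: nn.Dataset{
            inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
            expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
        }
        test: nn.Dataset{
            inputs: [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
            expected_outputs: [[0.0], [1.0], [1.0], [0.0]]
        }
        test_params: nn.TestParams{
            print_start: 0
            print_end: 3
            training_interval: 100
        }
    }
    model.train(training_parameters)
    model.save_model('saveXOR')
}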