
I am new to this area, so the question may seem strange. Before asking, though, I've read a bunch of introductory articles about the key points of machine learning and the moving parts of neural networks, including this very useful one: What is machine learning. Basically, as I understand it, a trained NN is (correct me if I'm wrong):

  1. a set of connections between neurons (maybe self-connected, may have gates, etc.)
  2. activation probabilities/weights formed on each connection.

Both things are adjusted during training to fit the expected output as closely as possible. Then, to evaluate the trained NN, we load the test subset of data into it and check how well it performs. But what happens if we are happy with the test results and want to store the training results, so that we don't have to run training again later when the dataset gets new values?

So my question is: is that learned knowledge stored anywhere other than RAM? Can it be dumped (think of object serialisation, in a way) so that you don't need to re-train your NN on the data you get tomorrow or later?

Right now I am trying to make a simple demo with my dataset using synaptic.js, but I could not find any such "save the training" concept in the project's wiki. That library is just an example; a reference to some Python lib would be good too!

hungryghost
shershen
  • If you are using synaptic, you can save your neural network as a JSON and use it later. You can even convert it to a standalone javascript function! – Thomas Wagenaar Feb 22 '17 at 20:06

2 Answers


With regards to storing it via synaptic.js:

This is quite easy to do! The library has built-in support for it, and there are two ways to go about it.

If you want to use the network without training it again

This will create a standalone function of your network that you can use anywhere in JavaScript without requiring synaptic.js! Wiki

var standalone = myNetwork.standalone();
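If I recall the library correctly, the returned function is then called directly with an input array (a minimal sketch; the input values are placeholders):

// call the standalone function with an input array to get the network's output
// (no synaptic.js required at this point)
var output = standalone([0, 1]);
console.log(output);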

If you want to modify the network later on

Just convert your network to JSON. It can be loaded again anytime with synaptic.js! Wiki

// Export the network to JSON, which you can save as plain text
var exported = myNetwork.toJSON();

// Convert the JSON back into a usable network
var imported = Network.fromJSON(exported);
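If you want the exported JSON to survive between runs, you could, for instance, write it to a file (a sketch assuming a Node.js environment; the file name is arbitrary):

// save the exported network to disk as plain text (Node.js)
var fs = require('fs');
fs.writeFileSync('network.json', JSON.stringify(exported));

// later, or in another process: load it and rebuild the network
var loaded = Network.fromJSON(JSON.parse(fs.readFileSync('network.json', 'utf8')));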
Thomas Wagenaar

I will assume in my answer that you are working with a simple multi-layer perceptron (MLP), although my answer is applicable to other networks too.

The purpose of 'training' an MLP is to find the correct synaptic weights that minimise the error on the network output.

When a neuron is connected to another neuron, its input is given a weight. The neuron performs a function, such as the weighted sum of all inputs, and then outputs the result.
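As a small illustration of that idea (a generic sketch, not tied to any particular library; the activation function and all values are made up):

// one neuron: weighted sum of its inputs followed by an activation function
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function neuronOutput(inputs, weights, bias) {
  var sum = bias;
  for (var i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sigmoid(sum);
}

// example: two inputs with hypothetical trained weights
var out = neuronOutput([0.5, 1.0], [0.8, -0.4], 0.1);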

Once you have trained your network, and found these weights, you can verify the results using a validation set.

If you are happy that your network is performing well, you simply record the learned weight of each connection. You can store these weights wherever you like (along with a description of the network structure) and then retrieve them later. There is no need to re-train the network every time you want to use it.
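In practice this can be as simple as serialising the weights (plus a note of the structure) to JSON and reading them back later; a rough sketch, with made-up structure and values:

// a hypothetical description of a trained network: structure plus learned weights
var model = {
  layers: [2, 3, 1],                    // neurons per layer (illustrative)
  weights: [[0.8, -0.4, 0.3], [0.5]]    // learned connection weights (illustrative)
};

// store it anywhere you like, e.g. as a JSON string...
var saved = JSON.stringify(model);

// ...and later restore it without re-training
var restored = JSON.parse(saved);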

Hope this helps.

Kieran
  • so basically you mean that the "knowledge" (the effective result on some dataset) is explicitly a result of the program code that describes the NN (number of layers, activation coefficients, etc.)? So it's not some data that results from applying the program code to the training dataset? – shershen Feb 28 '16 at 21:44
  • @shershen Exactly. The network's behaviour is characterised completely by its structure and the coefficients/weights on each of the neural connections. – Kieran Feb 28 '16 at 21:46
  • @shershen The weights/coefficients change during training. When you stop training, you stop modifying the weights – Kieran Feb 28 '16 at 21:48
  • you say "The network's behaviour is characterised completely by its structure.." - got it! but - "When you stop training, you stop modifying the weights " - are those *modified weights* saved somewhere? – shershen Feb 28 '16 at 22:32
  • @shershen Well initially you choose a random set of weights, and train the network using backpropagation (or another method) to make them more accurate over time. After each training epoch/iteration, you save the new weights down. When you are happy that your network performs well, you need to save the final weights somewhere so that you can easily retrieve them later. – Kieran Feb 28 '16 at 22:34
  • "you need to save the final weights somewhere" - aha, so basically again it's in code snippets of your NN algorithm. Thanks, your explanation really helped to put some things right for me! – shershen Feb 28 '16 at 22:56