
While the input is the same and the code is the same, I get two different results when the script is run multiple times. There are only two unique outputs, though. I do not know what part of the code is randomized, and I'm having a hard time figuring out where the error is. Is this a known bug in neurolab, by any chance?

I've attached the complete code below. Please run it some 9-10 times to see the two different outputs. I have also attached the output from five runs of the same code, and the reported error takes two different values across those five runs. Please help.

Code: --------

import neurolab as nl
import numpy as np

# Create train samples
N = 200

## DATA: x1 = -N/2 .. N/2
x1 = [0] * (N + 1)
for ii in range(-N // 2, N // 2 + 1):
    x1[ii + N // 2] = ii

x1_arr = np.array(x1)
y1 = -2 + 3 * x1_arr

# Binary target: 1 where y1 exceeds 15, else 0
y = [0] * len(y1)
for ii in range(len(y1)):
    if y1[ii] > 15:
        y[ii] = 1

l = len(y)
x0 = [1] * l
x0_arr = np.array(x0)
x_arr = np.concatenate(([x0_arr], [x1_arr]), axis=0)
x = x1_arr
y_arr = np.array(y)

size = l
inp = x.reshape(size, 1)
tar = y_arr.reshape(size, 1)

# Create network with 2 layers and random initialized weights
net = nl.net.newff([[-N // 2, N // 2]], [1, 1])
net.trainf = nl.train.train_gd

# Train network
error = net.train(inp, tar, epochs=100, show=100, goal=0.02, lr=0.001)

# Simulate network
out = net.sim(inp)

Output ---------

>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.66289633422;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.49617137968;
The maximum number of train epochs is reached
>>> 
========= RESTART: D:/Python_scripts/ML/nn_neurolab/num_detection.py =========
Epoch: 100; Error: 2.66289633422;
The maximum number of train epochs is reached

Thanks and Cheers!

2 Answers


Neural network training is not deterministic. It starts from a random initialization of the weights and performs a (greedy in nature) optimization process. You cannot expect exactly the same results unless you fix all random number generators used in the training.
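To see why, here is a minimal sketch (plain NumPy, not neurolab) of the same effect: two gradient-descent trainings that differ only in their random initial weights end at slightly different final errors, while repeating a run with the same seed reproduces the result exactly. The `train_once` helper is hypothetical, for illustration only.

```python
import numpy as np

def train_once(seed):
    """Fit y = -2 + 3x by gradient descent from a random start."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)              # random weight initialization
    x = np.linspace(-1.0, 1.0, 50)
    y = 3.0 * x - 2.0
    for _ in range(20):                 # a few gradient-descent steps
        pred = w[0] + w[1] * x
        grad = np.array([(pred - y).mean(), ((pred - y) * x).mean()])
        w -= 0.1 * grad
    return ((w[0] + w[1] * x - y) ** 2).mean()

print(train_once(0), train_once(1))     # different seeds, different errors
print(train_once(0) == train_once(0))   # same seed, identical result
```

The optimization itself is deterministic; all of the run-to-run variation comes from the random starting point, which is exactly what happens inside neurolab's `newff`.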

– lejlot

You can fix it by calling numpy.random.seed(x) (where x is any integer) at the beginning of the code.
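A small sketch of what seeding does, assuming neurolab draws its initial weights from NumPy's global RNG (so the seed must be set before `nl.net.newff` is called):

```python
import numpy as np

# Identical seeds produce identical random draws, so any code that
# initializes weights from the global RNG becomes reproducible.
np.random.seed(42)
a = np.random.rand(3)

np.random.seed(42)
b = np.random.rand(3)

print(np.array_equal(a, b))   # True: the two draws are identical
```

In the question's script, placing `np.random.seed(42)` before the `nl.net.newff(...)` line should make every run print the same final error.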

  • Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Nov 15 '22 at 03:06