
I am training a neural network in Jupyter using scikit-learn, and I am having trouble telling when/if my network is overfitting the data. Right now I am plotting the actual outputs of my testing data against the outputs my ANN predicts for that testing data. Can anyone let me know if there is a specific way to tell?

This is the setup I am training with: around 1500 iterations and 2 hidden layers with 6-8 nodes each. My dataset has about 300 points, with 5 inputs and 2 outputs.

from sklearn.neural_network import MLPRegressor

HiddenLayerStructure = (6, 6)
MaxNumEpochs = 1500
NN = MLPRegressor(hidden_layer_sizes=HiddenLayerStructure,
                  activation='tanh',
                  solver='lbfgs',
                  alpha=0.0001,
                  learning_rate='constant',
                  max_iter=MaxNumEpochs)

NN.fit(Input_Trn_Scaled, Output_Trn_Scaled)
Output_Predicted_Scaled = NN.predict(Input_Tst_Scaled)
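
For reference, this is roughly how I plot the actual test outputs against the predicted ones (a minimal sketch: Output_Tst_Scaled is my scaled test-target array, not shown above, and the dashed y = x reference line assumes the targets are scaled to [-1, 1]):

import matplotlib.pyplot as plt

# One panel per output column: actual (x-axis) vs predicted (y-axis)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for i, ax in enumerate(axes):
    ax.scatter(Output_Tst_Scaled[:, i], Output_Predicted_Scaled[:, i], s=10)
    ax.plot([-1, 1], [-1, 1], 'k--')  # ideal y = x line, assuming [-1, 1] scaling
    ax.set_xlabel('Actual (scaled)')
    ax.set_ylabel('Predicted (scaled)')
    ax.set_title(f'Output {i + 1}')
plt.tight_layout()
plt.show()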

Thanks for any guidance that can be offered:)

  • You may want to look at the accuracy on your training data, and then check the accuracy on your testing data. If the training accuracy is noticeably higher than the testing accuracy, there is a good chance it's overfitting - not a guarantee, just an indication – Ely Fialkoff Aug 08 '18 at 20:37
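
A minimal sketch of that check, assuming Output_Tst_Scaled holds the scaled test targets (not shown in the question); MLPRegressor.score returns R², so that is the "accuracy" being compared here:

train_r2 = NN.score(Input_Trn_Scaled, Output_Trn_Scaled)  # R^2 on the training split
test_r2 = NN.score(Input_Tst_Scaled, Output_Tst_Scaled)   # R^2 on the held-out test split
print(f'Train R^2: {train_r2:.3f}   Test R^2: {test_r2:.3f}')
# A train R^2 well above the test R^2 is a common sign of overfitting.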

0 Answers