I am training a neural network in Jupyter using scikit-learn, and I'm having trouble knowing when/if my network is overfitting the data. Right now I am plotting the actual outputs of my testing data against the outputs my ANN predicts for that data. Can anyone let me know if there is a specific way to tell?
This is what I am training with: around 1500 iterations and 2 hidden layers with 6-8 nodes in each. My dataset has about 300 points, with 5 inputs and 2 outputs.
from sklearn.neural_network import MLPRegressor

HiddenLayerStructure = (6, 6)   # 2 hidden layers, 6 nodes each
MaxNumEpochs = 1500

NN = MLPRegressor(hidden_layer_sizes=HiddenLayerStructure,
                  activation='tanh',
                  solver='lbfgs',
                  alpha=0.0001,
                  learning_rate='constant',
                  max_iter=MaxNumEpochs)

# Fit on the scaled training data, then predict the scaled test outputs
NN.fit(Input_Trn_Scaled, Output_Trn_Scaled)
Output_Predicted_Scaled = NN.predict(Input_Tst_Scaled)
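For reference, this is roughly how I make the actual vs. predicted plot on the test set (just a sketch; the matplotlib code and the Output_Tst_Scaled variable name for the actual scaled test outputs are placeholders, the rest follows the code above):

import matplotlib.pyplot as plt

# Scatter each predicted output column against the corresponding actual (scaled) test output.
for col in range(Output_Tst_Scaled.shape[1]):   # 2 output columns
    plt.scatter(Output_Tst_Scaled[:, col],
                Output_Predicted_Scaled[:, col],
                label=f'Output {col + 1}')

# Reference line (assumes the outputs are scaled to roughly [0, 1]);
# points close to this line mean predictions match the actual values.
plt.plot([0, 1], [0, 1], 'k--', label='Perfect prediction')
plt.xlabel('Actual (scaled)')
plt.ylabel('Predicted (scaled)')
plt.legend()
plt.show()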
Thanks for any guidance that can be offered :)