Sorry if this has been asked before; I've tried looking online, but maybe I don't know the proper terminology, because I mostly find results that address overfitting by splitting the data set.

So when my model gets stuck at around 30% accuracy on the validation data and refuses to improve, my strategies tend to be changing the number of nodes per layer, the batch size, or the number of epochs. Sometimes this helps, but other times it doesn't seem to do much at all.
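For concreteness, here is a minimal sketch of the kind of manual sweep I mean, assuming a Keras classifier; the `build_model` helper and the `X_train`/`y_train`/`X_val`/`y_val` arrays are placeholders, not my actual code:

```python
import itertools
import tensorflow as tf

def build_model(hidden_units, n_features, n_classes):
    # Hypothetical helper: one hidden layer whose width we vary.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(hidden_units, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

best = (0.0, None)
# Try every combination of hidden units, batch size, and epoch count.
for units, batch, epochs in itertools.product([32, 64, 128], [16, 64], [20, 50]):
    model = build_model(units, X_train.shape[1], int(y_train.max()) + 1)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, batch_size=batch, epochs=epochs, verbose=0)
    _, val_acc = model.evaluate(X_val, y_val, verbose=0)
    if val_acc > best[0]:
        best = (val_acc, (units, batch, epochs))

print("Best validation accuracy and settings:", best)
```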

What do people usually do in this situation?

Xun

1 Answer

I'd like to help with your question. You are probably working on a classification task. Could you please specify the following properties of your dataset: number of samples, number of features, and types of features (numerical, categorical, etc.)?

Wladd