
I am making an LSTM network whose output is a one-hot encoding of the directions Left, Right, Up, and Down, which comes out like:

[0. 0. 1. 0.] [1. 0. 0. 0.] [0. 0. 1. 0.] ... [0. 0. 1. 0.] [0. 0. 1. 0.] [0. 0. 0. 0.]

What is an acceptable range of categorical cross-entropy loss for the model to be considered successfully trained?
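For context, here is a minimal sketch of this kind of setup, assuming tf.keras with an LSTM feeding a 4-way softmax head; the input shape and layer sizes are made up for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical shapes: sequences of 20 timesteps with 8 features each,
# predicting one of 4 directions (Left, Right, Up, Down) per sequence.
model = keras.Sequential([
    keras.Input(shape=(20, 8)),
    layers.LSTM(32),
    layers.Dense(4, activation="softmax"),  # probabilities over the 4 directions
])

# Categorical cross-entropy is the matching loss for one-hot targets;
# tracking accuracy alongside it gives a more interpretable signal.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```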

Mohak Shukla
  • There is no numeric answer for this. You would typically train until the loss stops decreasing (by less than some small amount, say 0.001) or, sometimes, until it starts increasing. Then see how accurate your model turns out; see the early-stopping sketch after these comments. I would want losses around 0.01, but I don't always get that low, even when the model accuracy is good. – pink spikyhairman May 06 '20 at 19:03
  • The loss is mainly a signal for your network to train on, by minimizing it. Its raw value is usually not so useful to you, as the numbers have little meaning on their own; in your case, you should be more interested in something more meaningful, like how often your model makes the correct prediction (see the accuracy sketch below). – CoMartel May 07 '20 at 08:36
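A minimal sketch of the early-stopping approach from the first comment, assuming tf.keras and the `model` compiled above; `x_train`/`y_train` are hypothetical placeholders for the training data, and `min_delta=0.001` mirrors the "small amount" mentioned:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training once the validation loss has improved by less than
# 0.001 for 5 consecutive epochs, and roll back to the best weights.
stopper = EarlyStopping(monitor="val_loss",
                        min_delta=0.001,
                        patience=5,
                        restore_best_weights=True)

model.fit(x_train, y_train,          # hypothetical training data
          validation_split=0.2,
          epochs=100,
          callbacks=[stopper])
```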
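And a sketch of the second comment's suggestion: instead of reading meaning into the loss value, measure how often the model predicts the correct direction. `x_test`/`y_test` are hypothetical hold-out data, with `y_test` one-hot encoded:

```python
import numpy as np

probs = model.predict(x_test)       # softmax outputs, shape (n_samples, 4)
pred_dirs = probs.argmax(axis=1)    # predicted direction index per sample
true_dirs = y_test.argmax(axis=1)   # true direction index from the one-hot rows

accuracy = np.mean(pred_dirs == true_dirs)
print(f"Correct predictions: {accuracy:.1%}")
```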

0 Answers