Yes, this is perfectly normal.
As the NN trains, it learns from the training samples, which it fits better and better at each iteration. The validation set is never used to update the weights, and that is exactly why it is so informative: it tells you how well the network generalises to data it has not seen.
Basically:
- as long as the validation loss keeps decreasing (even slightly), the NN is still learning to generalise better,
- once the validation loss stagnates (stops improving), you should stop training,
- if you keep training past that point, the validation loss will likely start increasing again: this is called overfitting. Put simply, it means the NN learns the training data "by heart" instead of genuinely generalising to unseen samples (such as those in the validation set).
We usually use early stopping to avoid that last situation: if your validation loss does not improve for X consecutive epochs, stop training (X being a "patience" value such as 5 or 10).
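As a concrete illustration, here is a minimal sketch of early stopping using Keras (an assumption on my part; you didn't say which framework you use, and other frameworks have equivalents). The toy data, layer sizes, and patience of 5 are arbitrary choices just to make the snippet self-contained:

```python
import numpy as np
import tensorflow as tf

# Toy data only so the example runs end to end
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch the validation loss, not the training loss
    patience=5,                 # X = 5: stop after 5 epochs without improvement
    restore_best_weights=True,  # roll back to the best epoch, not the last one
)

# validation_split holds out 20% of the data; it is never used to update weights
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```

The key point is `monitor="val_loss"`: you stop based on the validation loss, because the training loss will keep decreasing even while the network is overfitting.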