I've run the same code both on the GPU in Anaconda's cmd and on the CPU in the normal cmd, but I get very different results, and the difference isn't normal; it's huge, as you can see. This is the graph from running the code in the normal cmd:

[graph: Normal CMD]
And when I run the code in Anaconda's GPU cmd, I get these graphs:

[graphs: Anaconda GPU CMD]
So I want to know: why is the difference this huge?
To make it clearer if needed: this is a time-series prediction task, predicted by a BiLSTM model in Python Keras.
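Since the actual code isn't shown, here is a minimal sketch of the kind of setup involved, assuming TensorFlow/Keras 2.x; the model shape, seed value, and dummy data are all hypothetical, just to illustrate a CPU-vs-GPU comparison with all seeds fixed:

```python
# Minimal sketch (assumed, not the actual code): a Keras BiLSTM for
# time-series prediction, with seeds fixed so CPU vs. GPU runs can be
# compared more fairly.
import random

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Bidirectional, Dense, LSTM
from tensorflow.keras.models import Sequential

# Fix every seed we can; without this, each run starts from different
# random weights, which alone can produce very different predictions.
seed = 42  # hypothetical seed value
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)

# Hypothetical shapes: 30 timesteps, 1 feature per step.
timesteps, n_features = 30, 1

model = Sequential([
    Bidirectional(LSTM(64), input_shape=(timesteps, n_features)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data just to make the sketch runnable.
X = np.random.rand(200, timesteps, n_features).astype("float32")
y = np.random.rand(200, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

Note that even with all seeds fixed, the GPU path may use cuDNN LSTM kernels whose floating-point operations aren't bit-identical to the CPU implementation, so some small divergence between the two runs is expected; what I don't understand is why it's this large.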