I'm making an implementation of Q-learning, specifically the Bellman equation.
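If I've transcribed it correctly from the site, the update I'm implementing is the standard one-step Q-learning rule:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$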
I'm using the version from a website that guides you through the problem, but I have a question: for maxQ, do I calculate the max reward using all Q-table values of the new state (s') - in my case 4 possible actions (a'), each with its own value - or the sum of the Q-table values of all the positions reached by taking each action (a')?
In other words, do I use the highest Q-value of all the possible actions I can take, or the summed Q-values of all the "neighbouring" squares?
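To make the question concrete, here's a minimal sketch of the two interpretations I'm deciding between. The table layout, names, and constants (n_states, n_actions, alpha, gamma, update) are just placeholders for my grid world, not the tutorial's code:

```python
import numpy as np

# Hypothetical Q-table for a small grid world: Q[state, action],
# with 4 actions per state (e.g. up/down/left/right).
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))

alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def update(s, a, r, s_next):
    # Interpretation 1: the maximum over the Q-values of the 4
    # possible actions a' in the new state s' (a single number).
    max_q = np.max(Q[s_next])

    # Interpretation 2: the sum of the Q-values of all actions a'
    # in s', i.e. over all the "neighbouring" squares.
    sum_q = np.sum(Q[s_next])

    # Which of max_q / sum_q belongs in the update below?
    Q[s, a] += alpha * (r + gamma * max_q - Q[s, a])
```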