I am really confused about how `GradientDescentOptimizer` calculates the gradient and then applies it in the next iteration. I know that we pass alpha (the learning rate) as an argument, which gets multiplied by (y - y_hat). But how does the optimizer know to which variable it has to apply each gradient?
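For context, TF1's `Optimizer.minimize()` is really two steps: `compute_gradients()`, which returns a list of (gradient, variable) pairs covering the variables in the `TRAINABLE_VARIABLES` collection, and `apply_gradients()`, which walks those pairs. The pairing itself is how the optimizer knows which variable each gradient belongs to. Here is a toy pure-Python sketch of that idea; the class and function names mimic the TF1 API shape but are illustrative, not TensorFlow's real internals:

```python
# Toy sketch of how an optimizer pairs gradients with variables.
# Illustrative only -- not TensorFlow's actual implementation.

class Variable:
    """Mimics tf.Variable: registers itself as trainable on creation."""
    trainable_variables = []  # analogue of the TRAINABLE_VARIABLES collection

    def __init__(self, value):
        self.value = value
        Variable.trainable_variables.append(self)

# Model: y_hat = w * x + b, with mean-squared-error loss.
w = Variable(0.0)
b = Variable(0.0)
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # data follows y = 2x, so w should approach 2

def compute_gradients():
    """Analogue of compute_gradients(): returns (gradient, variable) pairs.
    This pairing is what tells the optimizer which variable each
    gradient belongs to."""
    n = len(xs)
    dw = sum(2 * (w.value * x + b.value - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w.value * x + b.value - y) for x, y in zip(xs, ys)) / n
    return [(dw, w), (db, b)]

def apply_gradients(grads_and_vars, alpha):
    """Analogue of apply_gradients(): var -= alpha * grad for each pair."""
    for grad, var in grads_and_vars:
        var.value -= alpha * grad

# One "minimize" step = compute + apply; repeated, w -> 2.0 and b -> 0.0.
for _ in range(2000):
    apply_gradients(compute_gradients(), alpha=0.05)
```

In the real API you normally just call `optimizer.minimize(loss)`, which does both steps; splitting them apart (as above) is also how you would clip or inspect gradients before applying them.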

rawwar
You might want to check out the `apply_gradient_descent` code: https://stackoverflow.com/q/47178371/712995 – Maxim Feb 27 '18 at 14:41
@Maxim, it looks like pure C++ code. I tried to understand it, but couldn't; I haven't learned C++ properly. – rawwar Feb 27 '18 at 17:33
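For what it's worth, the C++ kernel behind that op boils down to the elementwise update `var -= alpha * delta`. A rough Python paraphrase (an assumed simplification, not the actual TensorFlow code):

```python
# Rough Python paraphrase of what an apply-gradient-descent kernel does:
# update the variable in place, elementwise: var -= alpha * delta.
def apply_gradient_descent(var, alpha, delta):
    for i in range(len(var)):
        var[i] -= alpha * delta[i]
    return var

weights = [1.0, 2.0, 3.0]
grads = [0.5, -1.0, 0.25]
apply_gradient_descent(weights, 0.5, grads)
print(weights)  # → [0.75, 2.5, 2.875]
```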