
I'm currently working in Python with TensorFlow and would like to train my model with a gradient descent optimizer rather than a stochastic gradient descent optimizer. The reason is that I want to train my model on all data points instead of a subset. Looking at the different optimizers for TensorFlow listed at https://www.tensorflow.org/api_docs/python/tf/keras/optimizers, I'm not sure if this is possible. Could someone explain to me whether there is a way to do this?

Currently the code looks something like this:

model.compile(optimizer='adam', loss='mse')

Is there a gradient descent optimizer I could just fill in?

Thank you!

Arka
  • You mean batch GD, mini-batch GD, and stochastic GD, which update the weights based on all data points, a subset of data points, and a single data point, respectively. That does not depend on the optimizer; it depends on the batch size you choose. If you set `batch_size=1` in `model.fit` you are running stochastic GD; if you set it to the size of the dataset, you are running batch GD (see the sketch after these comments). – Kaveh May 21 '22 at 10:48
  • Okay! So if I change my batch_size to the size of the dataset, the 'adam' optimizer is no longer SGD but plain GD? – Arka May 21 '22 at 11:09
  • SGD and Adam, as optimizers, use different formulas to update the weights. Suppose you set batch_size to the size of the dataset. In that case, all samples are fed into your model and the cost function is computed on all data points in each iteration (so every data point affects the update), but how much the weights change still depends on the optimizer. – Kaveh May 21 '22 at 11:18
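
A minimal sketch of what the comments describe, assuming a toy regression setup (the data shapes, layer sizes, and learning rate here are placeholders, not from the question):

import numpy as np
import tensorflow as tf

# Hypothetical toy data; replace with your own dataset.
x_train = np.random.rand(1000, 10).astype("float32")
y_train = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Plain gradient descent update rule; 'adam' would also work here,
# since the optimizer only changes how each update is computed,
# not how many samples contribute to it.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")

# batch_size equal to the dataset size => full-batch gradient descent:
# every update uses the gradient computed over all data points.
model.fit(x_train, y_train, epochs=10, batch_size=len(x_train))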

0 Answers