
When building input pipelines with TensorFlow's Dataset API, batching comes up a lot, for example in tf.data: Build TensorFlow input pipelines or Output differences when changing order of batch(), shuffle() and repeat().

What I've not been able to get a good answer on, though, is what problem batching in dataset generation solves in the first place or why I should use it. Can someone enlighten me?

I presume the dataset batches are an optimization to maximize throughput to a GPU, especially when the whole dataset doesn't fit into memory. Are there other scenarios?
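For concreteness, this is the kind of pipeline I have in mind (a minimal sketch with toy in-memory data; real pipelines would typically read from files):

```python
import tensorflow as tf

# Toy in-memory data, just for illustration.
features = tf.random.uniform((1000, 8))
labels = tf.random.uniform((1000,), maxval=2, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(1000)                # shuffle individual examples
    .batch(32)                    # group 32 examples into one tensor per step
    .prefetch(tf.data.AUTOTUNE)   # prepare later batches while the GPU trains
)
```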

I have a somewhat better idea of the purpose of the batch_size parameter in the fit function from, e.g., What is a batch in TensorFlow? However, what is the interaction or relationship between the dataset batches and the batch_size specified in the fit function for training?
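My current understanding of batch_size, sketched with a made-up model and data: when fit receives plain arrays, it slices them into batches itself.

```python
import numpy as np
import tensorflow as tf

# Made-up model and data, just to illustrate batch_size with plain arrays.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# fit() does the batching itself: 1000 samples at batch_size=32 gives
# 32 gradient updates per epoch (31 full batches plus one partial batch).
model.fit(x, y, batch_size=32, epochs=1)
```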

How should either be chosen?


1 Answer


I have an answer to the second question: how the batches in the dataset and the batch_size parameter of the fit function interact. Per the documentation of fit:

Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

So the fit function takes the batches straight from the input pipeline: specify one or the other, but not both. Specifying the batch size in the input pipeline presumably permits upstream optimizations. That also makes this a partial answer to the first question; still, I'm curious whether someone has a better explanation.
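Concretely (a minimal sketch with a made-up model and data):

```python
import numpy as np
import tensorflow as tf

# Made-up model and data for illustration.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# The input pipeline yields ready-made batches of 32 ...
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

# ... so fit() consumes them as-is: no batch_size argument here.
# Passing batch_size alongside a dataset violates the documented
# contract above (recent TF versions reject it with a ValueError).
model.fit(ds, epochs=1)
```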
