
Is it possible to do batching in TensorFlow without expanding the placeholder shape by an extra leading dimension of None? Specifically, I'd just like to feed multiple samples through the placeholders via feed_dict. The code base I'm working on would require a large number of changes to account for an extra batch dimension.

e.g. sess.run(fetches, feed_dict={var1: val1values, var2: val2values, ...})

where val1values would represent a batch of size X instead of just one training sample.

1 Answer


The shape information, including the number of dimensions, is available to Python code to do arbitrary things with, and it affects which ops get added to the graph (for example, which matmul kernel is used), so there is no general, safe way to automatically add a batch dimension. Something like labeled_tensor may make the code slightly less confusing to refactor.
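To illustrate why a batch dimension can't be added automatically, here is a minimal NumPy sketch (not TensorFlow, and the `forward` function is hypothetical): code written against a rank-1 sample uses a different matmul than code written against a rank-2 batch, so silently feeding a batch through single-sample code either errors or computes the wrong thing.

```python
import numpy as np

W = np.arange(6.0).reshape(2, 3)  # weights: 2 outputs, 3 inputs

def forward(x):
    # Written for a single sample: x has shape (3,), result has shape (2,)
    return W @ x

single = np.ones(3)
print(forward(single).shape)  # a single sample works: shape (2,)

batch = np.ones((4, 3))  # a batch of 4 samples
try:
    forward(batch)  # (2, 3) @ (4, 3) is not a valid matmul
except ValueError as err:
    print("batch fails:", err)

# The batched version needs a different op entirely:
print((batch @ W.T).shape)  # shape (4, 2)
```

The same rank-dependence applies to ops in a TensorFlow graph, which is why the placeholder shape has to be changed by hand rather than patched automatically.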

Allen Lavoie