
I'm using MirroredStrategy to perform multi-GPU training, and it doesn't appear to be sharding the data properly. How do you go about manually sharding data?

I know that I could use the `shard` method of a tf.data.Dataset, but for that I need access to the worker ID, and I can't figure out how to get it. How do I access the worker IDs?
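Roughly what I'm trying to do; the worker count and worker ID here are placeholders for the values I don't know how to obtain:

```python
import tensorflow as tf

num_workers = 2  # known from my cluster setup
worker_id = 0    # <-- this is the part I can't figure out how to get

dataset = tf.data.Dataset.range(1000)
dataset = dataset.shard(num_shards=num_workers, index=worker_id)
```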

Luke

1 Answer


MirroredStrategy runs on a single worker (for multiple workers there is MultiWorkerMirroredStrategy). Because there is only one worker, MirroredStrategy runs a single Dataset pipeline without any data sharding; at each step it requests one dataset element per worker.
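For illustration, a minimal sketch of this single-pipeline setup, assuming TF 2.x; the dataset contents and batch size are placeholders:

```python
import tensorflow as tf

# Single worker, multiple GPUs: one Dataset pipeline, no sharding needed.
strategy = tf.distribute.MirroredStrategy()

# Batch with the global batch size; tf.distribute splits each batch
# across the replicas (GPUs) on this worker.
global_batch_size = 64
dataset = tf.data.Dataset.range(1024).batch(global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```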

AAudibert
  • Is there a way to manually shard using MultiWorkerMirroredStrategy? – Luke Feb 12 '20 at 18:10
  • Yes, you can create your dataset using `strategy.experimental_distribute_datasets_from_function(dataset_fn)`. tf.distribute will pass an `input_context` argument to your `dataset_fn`, which will tell you the current worker id via `input_context.input_pipeline_id`. See these docs for an example (and the sketch after these comments): https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy#experimental_distribute_datasets_from_function – AAudibert Feb 12 '20 at 18:29
  • How do you control the number of workers when training on a single machine? Will it automatically create one worker per GPU or is it instead one worker per machine? – Luke Feb 17 '20 at 19:17
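A minimal sketch of the approach described in the comment above; the dataset contents and the global batch size of 64 are placeholders:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

def dataset_fn(input_context):
    # input_pipeline_id is this worker's id;
    # num_input_pipelines is the total number of workers.
    dataset = tf.data.Dataset.range(1024)
    dataset = dataset.shard(input_context.num_input_pipelines,
                            input_context.input_pipeline_id)
    # Derive the per-replica batch size from the global batch size.
    batch_size = input_context.get_per_replica_batch_size(64)
    return dataset.batch(batch_size)

dist_dataset = strategy.experimental_distribute_datasets_from_function(dataset_fn)
```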