
Currently, I'm feeding data to multiple GPUs by calling get_next() on a single iterator. Is there a better way to feed data into multiple GPUs?


1 Answer


Take a look at the Dataset.shard method, which is intended for distributed training: it splits a dataset into disjoint shards so each replica reads its own slice instead of pulling from one shared iterator. This could be what you need.
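
Here is a minimal sketch of how sharding could replace per-GPU get_next() calls. It assumes TensorFlow 2.x and uses a hypothetical NUM_GPUS and a toy range dataset standing in for your real input pipeline:

    # Minimal sketch: one disjoint shard of the dataset per GPU.
    import tensorflow as tf

    NUM_GPUS = 2  # hypothetical replica count

    # Toy dataset standing in for your real input pipeline.
    dataset = tf.data.Dataset.range(100)

    # shard(num_shards, index) keeps every num_shards-th element starting at
    # `index`, so the per-GPU shards are disjoint and together cover the data.
    per_gpu_datasets = [
        dataset.shard(num_shards=NUM_GPUS, index=i).batch(8)
        for i in range(NUM_GPUS)
    ]

    # Each replica then iterates only its own shard.
    for gpu_id, ds in enumerate(per_gpu_datasets):
        for batch in ds.take(1):
            print(f"GPU {gpu_id} first batch:", batch.numpy())

Because each shard is a regular tf.data.Dataset, you can still apply shuffle, map, prefetch, etc. per replica; sharding just guarantees the replicas don't see overlapping examples.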