Currently, I'm feeding data to multiple GPUs using `get_next()`. Is there a better way to feed data into multiple GPUs?
1 Answer
Take a look at the `shard` method of `tf.data.Dataset`, which is designed for distributed training. It could be what you need.
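Assuming the question is about TensorFlow's `tf.data` API (the mention of `get_next()` suggests so), here is a minimal sketch of `Dataset.shard`. Each worker/GPU keeps a disjoint slice of the data; the dataset contents and `num_shards` value below are illustrative:

```python
import tensorflow as tf

# Illustrative: a small dataset standing in for the real input pipeline.
dataset = tf.data.Dataset.range(10)

num_shards = 2  # e.g. one shard per GPU/worker

# shard(num_shards, index) keeps only the elements whose position
# satisfies position % num_shards == index, so the shards are
# disjoint and together cover the whole dataset.
for index in range(num_shards):
    shard = dataset.shard(num_shards=num_shards, index=index)
    print(f"shard {index}:", list(shard.as_numpy_iterator()))
# shard 0: [0, 2, 4, 6, 8]
# shard 1: [1, 3, 5, 7, 9]
```

In practice, call `shard()` as early in the pipeline as possible (before shuffling and batching), so each worker skips the records it doesn't need instead of reading and then discarding them.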