When using multiple GPUs to run inference on a model (e.g. the call method, model(inputs)) and to compute its gradients, the machine uses only one GPU and leaves the rest idle. For example, in the code snippet below:
import tensorflow as tf
import numpy as np
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Build the tf.data pipeline (parse_record is defined elsewhere)
path_filename_records = 'your_path_to_records'
bs = 128

dataset = tf.data.TFRecordDataset(path_filename_records)
dataset = (dataset
           .map(parse_record, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(bs)
           .prefetch(tf.data.experimental.AUTOTUNE)
           )

# Load a model trained using MirroredStrategy
path_to_resnet = 'your_path_to_resnet'
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    resnet50 = tf.keras.models.load_model(path_to_resnet)

for pre_images, true_label in dataset:
    with tf.GradientTape() as tape:
        tape.watch(pre_images)
        outputs = resnet50(pre_images)
    grads = tape.gradient(outputs, pre_images)
Only one GPU is used; you can profile the GPUs' behavior with nvidia-smi. I don't know whether this is expected, i.e. whether neither model(inputs) nor tape.gradient has multi-GPU support. But if it is, then it's a big problem, because with a large dataset where you need to calculate the gradients with respect to the inputs (e.g. for interpretability purposes), a single GPU can take days. Another thing I tried was model.predict(), but that isn't possible together with tf.GradientTape.
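For reference, my understanding from the tf.distribute documentation is that MirroredStrategy only replicates computation that is launched through strategy.run, so a custom loop would have to look roughly like the sketch below. It reuses the dataset, mirrored_strategy and resnet50 from the snippet above; I haven't been able to confirm that it actually spreads the load across the GPUs:

dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)

@tf.function
def gradient_step(images):
    # Runs once per replica, on that replica's shard of the batch
    with tf.GradientTape() as tape:
        tape.watch(images)
        outputs = resnet50(images)
    return tape.gradient(outputs, images)

for pre_images, true_label in dist_dataset:
    # strategy.run dispatches the step to every GPU in the strategy
    per_replica_grads = mirrored_strategy.run(gradient_step, args=(pre_images,))
    # Collect the per-replica gradients back on the host
    grads = mirrored_strategy.experimental_local_results(per_replica_grads)

If this is indeed the only way to get model(inputs) to use all GPUs, I'd like to have that confirmed.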
What I've tried so far, without success
- Putting all the code inside the mirrored strategy scope.
- Using different GPUs: I tried an A100, an A6000, and an RTX 5000. I also changed the number of graphics cards and varied the batch size.
- Specifying the list of GPUs explicitly, for instance strategy = tf.distribute.MirroredStrategy(['/gpu:0', '/gpu:1']).
- Adding strategy = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()), as @Kaveh suggested.
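One more diagnostic that may be useful: TensorFlow can log the device each op is placed on. A minimal check (tf.debugging.set_log_device_placement must be called before any ops run):

import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # print the device of every op

strategy = tf.distribute.MirroredStrategy(['/gpu:0', '/gpu:1'])
print('Replicas in sync:', strategy.num_replicas_in_sync)  # should print 2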
How do I know that only one GPU is working?
I ran watch -n 1 nvidia-smi in the terminal and observed that only one GPU sits at 100% utilization while the rest stay at 0%.
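For completeness, the devices TensorFlow actually sees can also be checked from Python with the standard tf.config API, to rule out the second card not being visible at all:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print(f'TensorFlow sees {len(gpus)} GPU(s)')
for gpu in gpus:
    print(' ', gpu.name)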
Working Example
Below you can find a working example with a CNN trained on the dogs_vs_cats dataset. You won't need to download the dataset manually (I used the tfds version) or to train a model.
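For reference, loading the dataset through tfds is a one-liner (in TFDS the dataset is registered as cats_vs_dogs):

import tensorflow_datasets as tfds

# as_supervised=True yields (image, label) pairs directly
ds = tfds.load('cats_vs_dogs', split='train', as_supervised=True)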
Notebook: Working Example.ipynb
Saved Model: