This is still a long way from a list of allocated tensors, but it's a start for TF2:
TensorFlow 2.4.1 contains the tf.config.experimental.get_memory_usage method, which returns the number of bytes currently allocated on the GPU. Comparing this value at different points in time can shed some light on which tensors take up VRAM, and it seems to be pretty accurate.
BTW, the latest nightly build contains the tf.config.experimental.get_memory_info method instead; seems they had a change of heart. This one returns both the current and the peak memory used.
Example code on TF 2.4.1:
import tensorflow as tf

print(tf.config.experimental.get_memory_usage("GPU:0"))   # 0
tensor_1_mb = tf.zeros((1, 1024, 256), dtype=tf.float32)  # 1 MB tensor
print(tf.config.experimental.get_memory_usage("GPU:0"))   # 1050112
tensor_2_mb = tf.zeros((2, 1024, 256), dtype=tf.float32)  # 2 MB tensor
print(tf.config.experimental.get_memory_usage("GPU:0"))   # 3147264
tensor_1_mb = None  # dropping the last reference frees the tensor's GPU memory
print(tf.config.experimental.get_memory_usage("GPU:0"))   # 2098688
tensor_2_mb = None
print(tf.config.experimental.get_memory_usage("GPU:0"))   # 1536