
I'm trying to see the difference between training a model on a TPU versus on a GPU.

This is the model-training part:

import time

start = time.time()
tf.keras.backend.clear_session()

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
#TPU initialization
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))

strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
  training_model = lstm_model(seq_len=100, stateful=False)
  training_model.compile(
      optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.01),
      loss='sparse_categorical_crossentropy',
      metrics=['sparse_categorical_accuracy'])

training_model.fit(
    input_fn(),
    steps_per_epoch=100,
    epochs=10
)
training_model.save_weights('/tmp/bard.h5', overwrite=True)

end = time.time()
elapsed_TPU = end - start

print(elapsed_TPU)

(Source: https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb)

The top part of the code is the TPU initialization. Is there any way to change it so that the code can run on a GPU instead?

Samir
  • Check here https://colab.research.google.com/notebooks/gpu.ipynb – Albert Mar 25 '21 at 17:45
  • Thanks for the reply, but that still didn't help me adapt the algorithm to a GPU. How can I achieve strategy.scope() with a GPU instead of a TPU? – Samir Mar 25 '21 at 18:32

1 Answer

You don't need tf.distribute.Strategy unless you have TPUs or multiple CPUs/GPUs. See here. On a single GPU you can run this as standard TensorFlow code without a strategy; TensorFlow places ops on the GPU automatically when one is visible.

import time

start = time.time()
tf.keras.backend.clear_session()

training_model = lstm_model(seq_len=100, stateful=False)
training_model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.01),
    loss='sparse_categorical_crossentropy',
    metrics=['sparse_categorical_accuracy'])

training_model.fit(
    input_fn(),
    steps_per_epoch=100,
    epochs=10
)
training_model.save_weights('/tmp/bard.h5', overwrite=True)

end = time.time()
elapsed_TPU = end - start

print(elapsed_TPU)
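If you specifically want to keep the strategy.scope() pattern from the TPU version, tf.distribute.MirroredStrategy is the usual drop-in choice for GPUs: it picks up all visible GPUs automatically (and falls back to CPU if none are found), so only the strategy construction changes. A minimal sketch; the Sequential model below is a stand-in for the lstm_model() helper from the question, which isn't shown:

```python
import tensorflow as tf

# MirroredStrategy replaces TPUClusterResolver + TPUStrategy; no cluster
# connection or TPU system initialization is needed for GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # lstm_model(seq_len=100, stateful=False) from the question would go
    # here; a small stand-in model keeps this snippet self-contained.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(256, 16),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(256, activation='softmax'),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.01),
        loss='sparse_categorical_crossentropy',
        metrics=['sparse_categorical_accuracy'])
```

After this, model.fit(...) and model.save_weights(...) are called exactly as in the TPU version; the strategy handles replica placement transparently.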
Albert