
I am able to do this with their GPU, but with their TPU it gives me an error... Does anybody here know what I'm missing, please? Does it make sense to use a TPU with CuDNNLSTM, or is CuDNNLSTM tailored only for GPUs? Thanks a lot in advance.

joaopq
  • TPUs don't support CUDA or CuDNN; they require a different software stack. I'm removing the `cuda` tag because a question about how to run something on a TPU really has nothing to do with CUDA. – Robert Crovella Aug 27 '20 at 23:06

1 Answer


keras.layers.CuDNNLSTM is only supported on GPUs. In TensorFlow 2, however, the built-in LSTM and GRU layers have been updated to leverage cuDNN kernels by default when a GPU is available.

Below are the details from the Performance optimization and CuDNN kernels guide:

In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.

You can just use the built-in LSTM layer, tf.keras.layers.LSTM, and it will work on both TPUs and GPUs.
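As a minimal sketch, replacing the deprecated layer is just a drop-in swap (the layer sizes and input shape here are placeholders, not from the question):

```python
import tensorflow as tf

# Before (GPU-only, deprecated in TF 2):
#   model.add(tf.keras.layers.CuDNNLSTM(32))
#
# After (hardware-agnostic): the built-in LSTM picks the cuDNN
# kernel automatically on GPU and compiles for TPU as well.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 8)),  # 10 timesteps, 8 features
    tf.keras.layers.Dense(1),
])

# Sanity check: a batch of 4 sequences produces 4 predictions.
out = model(tf.zeros((4, 10, 8)))
```

Note that on GPU the cuDNN fast path is only used with the default layer arguments (e.g. `activation='tanh'`, `recurrent_activation='sigmoid'`); changing those falls back to a generic kernel.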

Gagik