I was reading the following in the TPU FAQ: https://cloud.google.com/tpu/docs/faq
Can I train a Recurrent Neural Network (RNN) on Compute Engine?

In certain configurations, tf.static_rnn() and tf.dynamic_rnn() are compatible with the current TPU execution engine. More generally, the TPU supports both tf.while_loop() and TensorArray, which are used to implement tf.dynamic_rnn(). Specialized toolkits such as CuDNN are not supported on the TPU, as they contain GPU-specific code. Using tf.while_loop() on the TPU does require specifying an upper bound on the number of loop iterations so that the TPU execution engine can statically determine the memory usage.
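As I read it, the static bound the FAQ mentions maps to the maximum_iterations argument of tf.while_loop(). Here is a minimal sketch I put together to illustrate my understanding (the loop itself is my own toy example, not from the FAQ):

import tensorflow as tf

# Toy loop: sum the integers 0..9.
i = tf.constant(0)
acc = tf.constant(0.0)

def cond(i, acc):
    return i < 10

def body(i, acc):
    return i + 1, acc + tf.cast(i, tf.float32)

# maximum_iterations gives the compiler the static upper bound that the
# FAQ says the TPU execution engine needs to determine memory usage.
i, acc = tf.while_loop(cond, body, [i, acc], maximum_iterations=10)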
How can I make my SimpleRNN static, or otherwise valid for running on a Colab TPU?
Here is my Colab TPU code:
import tensorflow as tf
import os
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, SimpleRNN
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = Sequential()
    # step is the number of features per timestep, defined earlier in my notebook
    model.add(SimpleRNN(units=32, input_shape=(1, step), activation="relu"))
    model.add(Dense(16, activation="relu"))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='rmsprop')

# X and y are my training arrays, also defined earlier in the notebook
model.fit(X, y, epochs=50, batch_size=16, verbose=0)
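For what it is worth, my best guess is that "static" here means unrolling the recurrence so the number of loop iterations is fixed at graph-construction time. A sketch of what I have in mind (unroll=True is my assumption; I have not confirmed it is what the TPU needs):

with strategy.scope():
    static_model = Sequential()
    # unroll=True replaces the symbolic while-loop with a fixed number of
    # unrolled steps; input_shape must then fix the timestep count (1 here).
    static_model.add(SimpleRNN(units=32, input_shape=(1, step),
                               activation="relu", unroll=True))
    static_model.add(Dense(16, activation="relu"))
    static_model.add(Dense(1))
    static_model.compile(loss='mean_squared_error', optimizer='rmsprop')

Is this the right approach, or is something else required?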