
I am trying to convert the following GRU layer from PyTorch (1.9.1) to TensorFlow (2.6.0):

# GRU layer
self.gru = nn.GRU(64, 32, bidirectional=True, num_layers=2, dropout=0.25, batch_first=True)

I am unsure about my current implementation, especially regarding the conversion of the parameters bidirectional and num_layers. My current reconstruction is the following:

# GRU Layer
model.add(Bidirectional(GRU(32, return_sequences=True, dropout=0.25, time_major=False)))
model.add(Bidirectional(GRU(32, return_sequences=True, dropout=0.25, time_major=False)))

Am I missing something? Thanks for your help in advance!


1 Answer


Yes, these two models are the same, at least in terms of the number of parameters and the output shape. In PyTorch:

import torch
from torchinfo import summary
# Same GRU as in the question, wrapped in Sequential so torchinfo can summarize it
model = torch.nn.Sequential(torch.nn.GRU(64, 32, bidirectional=True, num_layers=2, dropout=0.25, batch_first=True))
batch_size = 16
summary(model, input_size=(batch_size, 100, 64))

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
Sequential                               --                        --
├─GRU: 1-1                               [16, 100, 64]             37,632
==========================================================================================
Total params: 37,632
Trainable params: 37,632
Non-trainable params: 0
Total mult-adds (M): 60.21
==========================================================================================
Input size (MB): 0.41
Forward/backward pass size (MB): 0.82
Params size (MB): 0.15
Estimated Total Size (MB): 1.38
==========================================================================================
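
If it helps to see where the 37,632 comes from, here is a small optional sketch (same layer sizes as above, nothing beyond torch itself) that just prints the GRU's parameter shapes:

import torch
gru = torch.nn.GRU(64, 32, bidirectional=True, num_layers=2, dropout=0.25, batch_first=True)
for name, param in gru.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 is (96, 64) = (3 * hidden, input), weight_hh_l0 is (96, 32), and each
# bias is (96,); the *_reverse copies and the layer-1 tensors (whose input size is
# 2 * 32 = 64) account for the rest of the total.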

In TensorFlow:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, GRU
# GRU layers: two stacked Bidirectional(GRU) layers mirror num_layers=2 with
# bidirectional=True; time_major=False corresponds to batch_first=True in PyTorch
model = Sequential()
model.add(Bidirectional(GRU(32, return_sequences=True, dropout=0.25, time_major=False)))
model.add(Bidirectional(GRU(32, return_sequences=True, dropout=0.25, time_major=False)))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='mse')
# forward pass on dummy data so the model gets built before calling summary()
a = model.call(inputs=tf.random.normal(shape=(16, 100, 64)))
model.summary()

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bidirectional_8 (Bidirection (16, 100, 64)             18816     
_________________________________________________________________
bidirectional_9 (Bidirection (16, 100, 64)             18816     
=================================================================
Total params: 37,632
Trainable params: 37,632
Non-trainable params: 0
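
As a cross-check on why the totals agree: Keras' GRU uses reset_after=True by default in TF 2.x, which keeps two bias vectors per gate set, matching PyTorch's bias_ih/bias_hh. Here is a minimal sketch of the arithmetic (these helper functions are illustrative, not part of either library):

def pytorch_gru_params(input_size, hidden, num_layers=2, bidirectional=True):
    dirs = 2 if bidirectional else 1
    total = 0
    for layer in range(num_layers):
        in_size = input_size if layer == 0 else hidden * dirs
        # weight_ih + weight_hh + bias_ih + bias_hh, each covering the 3 gates
        total += dirs * (3 * hidden * in_size + 3 * hidden * hidden + 2 * 3 * hidden)
    return total

def keras_bidirectional_gru_params(input_dim, units):
    # kernel + recurrent_kernel + bias of shape (2, 3 * units), per direction
    return 2 * (3 * units * input_dim + 3 * units * units + 2 * 3 * units)

print(pytorch_gru_params(64, 32))               # 37632
print(keras_bidirectional_gru_params(64, 32)    # first Bidirectional layer: 18816
      + keras_bidirectional_gru_params(64, 32)) # second layer (input is 2 * 32 = 64): 18816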