
After the permute layer, the dimensions become (None, None, 12, 16). I want to summarize the last two dimensions with an LSTM (48 units) with input_shape (12, 16), so that the overall dimension becomes (None, None, 48).

Currently I have a workaround with a custom LSTM and LSTMCell, but it is very slow, since I have used another LSTM inside the cell, etc.

What I would want to have is this:

(None, None, 12, 16)
(None, None, 48)
(None, None, 60)

The last two steps are currently done inside the custom LSTM. Is there a way to separate them?

What's the proper way of doing this? Can we create different (or more than one) LSTMs for the cells, which share the same weights but have different cell states? Could you give me some direction?

Layer (type)                     Output Shape           Param #   Connected to
inputs (InputLayer)              (None, 36, None, 1)    0
convlayer (Conv2D)               (None, 36, None, 16)   160       inputs[0][0]
mp (MaxPooling2D)                (None, 12, None, 16)   0         convlayer[0][0]
permute_1 (Permute)              (None, None, 12, 16)   0         mp[0][0]
reshape_1 (Reshape)              (None, None, 192)      0         permute_1[0][0]
custom_lstm_extended_1 (CustomL  (None, None, 60)       26160     reshape_1[0][0]

The custom LSTM is called like this:

    CustomLSTMExtended(units=60, summarizeUnits=48, return_sequences=True,
                       return_state=False, input_shape=(None, 192))(inner)
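For reference, here is a minimal sketch of how the graph in the summary above could be assembled. The Conv2D kernel size, the pooling size and the permute order are assumptions chosen only to reproduce the printed output shapes and the 160-parameter count; the final call is the one quoted above:

    from keras.layers import Input, Conv2D, MaxPooling2D, Permute, Reshape

    inputs = Input(shape=(36, None, 1), name='inputs')
    x = Conv2D(16, (3, 3), padding='same', name='convlayer')(inputs)  # (None, 36, None, 16)
    x = MaxPooling2D(pool_size=(3, 1), name='mp')(x)                  # (None, 12, None, 16)
    x = Permute((2, 1, 3), name='permute_1')(x)                       # (None, None, 12, 16)
    x = Reshape((-1, 192), name='reshape_1')(x)                       # (None, None, 192)
    out = CustomLSTMExtended(units=60, summarizeUnits=48, return_sequences=True,
                             return_state=False, input_shape=(None, 192))(x)  # (None, None, 60)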

LSTM class (the relevant part of __init__):

    self.summarizeUnits = summarizeUnits
    # shared inner LSTM that condenses each (12, 16) block into 48 features
    self.summarizeLSTM = CuDNNLSTM(summarizeUnits, input_shape=(None, 16),
                                   return_sequences=False, return_state=True)

        # the cell gets a reference to the shared summarizing LSTM defined above
        cell = SummarizeLSTMCellExtended(self.summarizeLSTM, units,
                activation=activation,
                recurrent_activation=recurrent_activation,
                use_bias=use_bias,
                kernel_initializer=kernel_initializer,
                recurrent_initializer=recurrent_initializer,
                unit_forget_bias=unit_forget_bias,
                bias_initializer=bias_initializer,
                kernel_regularizer=kernel_regularizer,
                recurrent_regularizer=recurrent_regularizer,
                bias_regularizer=bias_regularizer,
                kernel_constraint=kernel_constraint,
                recurrent_constraint=recurrent_constraint,
                bias_constraint=bias_constraint,
                dropout=dropout,
                recurrent_dropout=recurrent_dropout,
                implementation=implementation)


        RNN.__init__(self, cell,
                     return_sequences=return_sequences,
                     return_state=return_state,
                     go_backwards=go_backwards,
                     stateful=stateful,
                     unroll=unroll,
                     **kwargs)
Cell class:
    def call(self, inputs, states, training=None):
        # inputs arrive flattened as (batch, 192); restore the (12, 16) block
        reshaped = Reshape([12, 16])(inputs)
        # the summarizing LSTM was built with return_state=True, so it returns
        # [output, state_h, state_c]; element 0 is the 48-feature summary
        state_h = self.summarizeLayer(reshaped)
        inputsx = state_h[0]
        return super(SummarizeLSTMCellExtended, self).call(inputsx, states, training)
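As an aside, a slightly leaner variant of that call is sketched below. This is an assumption on my part, not the poster's code: it reshapes with keras.backend instead of instantiating a new Reshape layer on each invocation of call, and unpacks the three CuDNNLSTM outputs explicitly:

    from keras import backend as K

    def call(self, inputs, states, training=None):
        # hypothetical variant: -1 keeps the batch dimension dynamic
        reshaped = K.reshape(inputs, (-1, 12, 16))
        # return_state=True means the layer returns [output, state_h, state_c]
        summary, state_h, state_c = self.summarizeLayer(reshaped)
        return super(SummarizeLSTMCellExtended, self).call(summary, states, training)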

1 Answer

I have done this using tf.reshape rather than the Keras Reshape layer, because the Keras Reshape layer doesn't let you interfere with the "batch_size" dimension:

import tensorflow as tf
from keras.layers import Lambda

# capture the dynamic (batch, time, 12, 16) shape before any reshaping
shape = Lambda(lambda x: tf.shape(x), output_shape=(4,))(inner)
..
..
# rebuild the (batch, time, 48) layout from the captured shape
inner = Lambda(lambda x: customreshape(x), output_shape=(None, 48))([inner, shape])
..
def customreshape(inputs):
    inner, shape = inputs
    # tf.reshape accepts a dynamic shape tensor, so the batch axis can be touched
    return tf.reshape(inner, [shape[0], shape[1], 48])
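To make the surrounding pattern concrete, here is a minimal self-contained sketch of how the two Lambdas could sit around a summarizing LSTM. Everything other than the two Lambda calls shown above (the merge step, the 48-unit CuDNNLSTM, the Input shape and the variable names) is my assumption about what the elided lines do, not the original code:

    import tensorflow as tf
    from keras.layers import Input, Lambda, CuDNNLSTM

    def customreshape(inputs):
        inner, shape = inputs
        return tf.reshape(inner, [shape[0], shape[1], 48])

    inner = Input(shape=(None, 12, 16))                               # (batch, time, 12, 16)
    shape = Lambda(lambda x: tf.shape(x), output_shape=(4,))(inner)

    # merge batch and time so a plain LSTM can summarize each (12, 16) block
    merged = Lambda(lambda x: tf.reshape(x, [-1, 12, 16]))(inner)     # (batch*time, 12, 16)
    summary = CuDNNLSTM(48, return_sequences=False)(merged)           # (batch*time, 48)

    # split batch and time back apart using the shape captured earlier
    inner = Lambda(customreshape, output_shape=(None, 48))([summary, shape])  # (batch, time, 48)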