I have an RNN network structure that looks like the following:
    cells = rnn.MultiRNNCell(
        [self._one_rnn_cell(l + 1) for l in range(self.layers)],
        state_is_tuple=True
    ) if self.layers > 1 else self._one_rnn_cell(1)

    out, _ = tf.nn.dynamic_rnn(cells, self.inputs,
                               dtype=tf.float32, scope="DyRNN")

    # make the output time-major: (time, batch, features)
    out = tf.transpose(out, [1, 0, 2])
    num_time_steps = int(out.get_shape()[0])
    last_state = tf.gather(out, num_time_steps - 1, name="last_lstm_state")
When I run this code, I get the following warning:
    UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
I understand why this warning appears. I tried out several ideas, and the most commonly suggested fix is the one described here:
How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape
The problem is that this approach requires too many variables, like max_length, time_steps, seq_length, n_dim, and partitions, which makes the code very hard to read. Is there a simpler way to avoid the problem?
Also, if the sequence length stays the same across all batches, can I assume that max_length == time_steps == seq_length?
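For context, here is a minimal NumPy sketch of the kind of simplification I am hoping for: instead of transposing to time-major and gathering the last step (in TensorFlow, that tf.gather is what produces the IndexedSlices gradient of unknown dense shape), just slice the last time step directly with out[:, -1, :]. The shapes and values below are made up purely for illustration:

```python
import numpy as np

# Fake batch-major RNN output: (batch, time, features) -- illustrative only
batch, time_steps, features = 4, 7, 3
out = np.arange(batch * time_steps * features, dtype=np.float32).reshape(
    batch, time_steps, features)

# Current approach: transpose to time-major, then take the last time step.
# In TensorFlow this last-step lookup is done with tf.gather, which is
# what triggers the IndexedSlices warning during backprop.
out_time_major = np.transpose(out, [1, 0, 2])     # (time, batch, features)
last_via_gather = out_time_major[time_steps - 1]  # (batch, features)

# Hoped-for simplification: slice the last step on the batch-major output.
# In TensorFlow, out[:, -1, :] lowers to a strided slice, whose gradient
# is a dense tensor, so (as far as I understand) the warning goes away.
last_via_slice = out[:, -1, :]                    # (batch, features)

assert np.array_equal(last_via_gather, last_via_slice)
```

Is something like this slicing approach valid here, or does it break when the time dimension is not statically known?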
Please help; the documentation on this is very sparse.