
I am trying to implement a simple LSTM model in TensorFlow. My input is lines of sentences as arrays of chars.

Sample input:

['Y', 'â', 'r', 'â', 'b', ' ', 'n', 'e', ' ', 'i', 'n', 't', 'i', 'z', 'â', 'r', 'd', 'ı', 'r', ' ', 'b', 'u']

Each training step I feed one of these inputs to the LSTM. The problem is that the lengths of the sentences are not constant: some are 20 characters long, others 22 or anything else.

A small part of the training loop:

x_input = [dictionary[i] for i in line]            # map chars to indices
x_input = np.reshape(np.array(x_input), [-1, n_input, 1])

onehot_out = np.zeros([output_size], dtype=float)  # one-hot target
onehot_out[vezin] = 1.0
onehot_out = np.reshape(onehot_out, [1, -1])

_, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred],
        feed_dict={x: x_input, y: onehot_out})

Is there any way to change the input size at each training step? And if there is, is it bad practice to use it?

mcemilg

1 Answer


When I posted this question I had just started with RNNs. The answer is actually very simple, and I am writing it up for anyone who faces the same problem.

The solution is to use a dynamic RNN. It lets you feed inputs with different sequence lengths, which matters in most RNN models.

TensorFlow has an implementation of the dynamic RNN (tf.nn.dynamic_rnn) and it is very useful. For more information check here.
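To make the idea concrete, here is a minimal NumPy-only sketch of what a dynamic RNN does with variable-length input: pad each sentence in a batch to the longest one, remember the true lengths, and stop updating a sequence's hidden state once its length is reached. The helper names (`pad_batch`, `toy_dynamic_rnn`) are hypothetical, not TensorFlow API; in TensorFlow itself the same idea is `tf.nn.dynamic_rnn(cell, x, sequence_length=...)`.

```python
import numpy as np

def pad_batch(sequences, pad_value=0):
    """Pad a list of 1-D sequences to the length of the longest one."""
    max_len = max(len(s) for s in sequences)
    batch = np.full((len(sequences), max_len), pad_value, dtype=float)
    lengths = np.zeros(len(sequences), dtype=int)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
        lengths[i] = len(s)
    return batch, lengths

def toy_dynamic_rnn(batch, lengths, hidden=4, seed=0):
    """Run a minimal tanh RNN over the padded batch, masking padded steps."""
    rng = np.random.RandomState(seed)
    Wx = rng.randn(1, hidden) * 0.1
    Wh = rng.randn(hidden, hidden) * 0.1
    h = np.zeros((batch.shape[0], hidden))
    for t in range(batch.shape[1]):
        x_t = batch[:, t:t + 1]            # (batch, 1) input at step t
        h_new = np.tanh(x_t @ Wx + h @ Wh)
        mask = (t < lengths)[:, None]      # only real time steps update the state
        h = np.where(mask, h_new, h)
    return h

# Two sentences of different length (e.g. char indices from `dictionary`)
batch, lengths = pad_batch([[1, 2, 3], [4, 5, 6, 7, 8]])
final_states = toy_dynamic_rnn(batch, lengths)
```

Because of the mask, the padded positions never change a sequence's final state, so a 3-char and a 5-char sentence can share one batch.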
