I am trying to implement a simple LSTM model in TensorFlow. My input is lines of sentences, each represented as an array of characters.
Sample input:
['Y', 'â', 'r', 'â', 'b', ' ', 'n', 'e', ' ', 'i', 'n', 't', 'i', 'z', 'â', 'r', 'd', 'ı', 'r', ' ', 'b', 'u']
At each training step I feed one of these inputs to the LSTM. The problem is that the sentence lengths are not constant: one sentence may have length 20, another 22, and so on.
Here is the relevant part of the training loop:
x_input = [dictionary[i] for i in line]                  # map each char to its index
x_input = np.reshape(np.array(x_input), [-1, n_input, 1])
onehot_out = np.zeros([output_size], dtype=float)        # one-hot target vector
onehot_out[vezin] = 1.0
onehot_out = np.reshape(onehot_out, [1, -1])
_, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred],
                                        feed_dict={x: x_input, y: onehot_out})
Is there any way to change the input size at each training step? If there is, is it bad practice to use it?
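One thing I have considered (a sketch of a common workaround, not something from my current code) is padding every sentence to a fixed maximum length before reshaping, so the LSTM always sees the same input size. The `encode_and_pad` helper, the toy `dictionary`, and `max_len` below are hypothetical stand-ins for my real variables:

```python
import numpy as np

# Toy character dictionary and assumed maximum sentence length (both hypothetical).
dictionary = {ch: idx for idx, ch in enumerate(' abden')}
max_len = 8

def encode_and_pad(line, dictionary, max_len, pad_value=0):
    """Map characters to indices, then right-pad with pad_value up to max_len."""
    x = [dictionary[ch] for ch in line]
    x = x + [pad_value] * (max_len - len(x))      # pad the tail to a fixed length
    return np.reshape(np.array(x, dtype=np.float32), [1, max_len, 1])

batch = encode_and_pad(list('ne ben'), dictionary, max_len)
print(batch.shape)  # (1, 8, 1) regardless of the original sentence length
```

With padding like this, the placeholder `x` could keep a fixed shape `[None, max_len, 1]`; I believe the real sequence lengths could then be passed to the RNN (e.g. via the `sequence_length` argument of `tf.nn.dynamic_rnn`) so the padded steps are ignored, but I am not sure this is the idiomatic approach.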