I'm running the ConvS2S example. The model trains just fine, but the inference code isn't clear to me: why do the prediction arrays have the length of input_texts? My predictions come out as gibberish, so I'm clearly doing something wrong, even though the model seems to learn quite well.
Thanks in advance.
As far as I can tell, the prediction arrays should only be nb_examples long, not len(input_texts).
The original source is this:
nb_examples = 100
in_encoder = encoder_input_data[:nb_examples]
in_decoder = np.zeros((len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype='float32')
in_decoder[:, 0, target_token_index["\t"]] = 1
predict = np.zeros((len(input_texts), max_decoder_seq_length), dtype='float32')
But why use len(input_texts) here instead of nb_examples?
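For reference, this is how I would expect the slicing to look so that all three arrays stay aligned on the first axis. This is just a sketch with made-up toy dimensions; start_token_index stands in for target_token_index["\t"], and the real shapes come from the dataset:

```python
import numpy as np

# Hypothetical toy dimensions (placeholders, not the real dataset's values).
num_samples = 1000            # stands in for len(input_texts)
nb_examples = 100
max_encoder_seq_length = 20
num_encoder_tokens = 50
max_decoder_seq_length = 16
num_decoder_tokens = 70
start_token_index = 0         # stands in for target_token_index["\t"]

encoder_input_data = np.zeros(
    (num_samples, max_encoder_seq_length, num_encoder_tokens), dtype="float32"
)

# Slice everything to nb_examples so encoder input, decoder input,
# and the prediction buffer all have the same number of rows.
in_encoder = encoder_input_data[:nb_examples]
in_decoder = np.zeros(
    (nb_examples, max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
in_decoder[:, 0, start_token_index] = 1.0  # seed every row with the start token
predict = np.zeros((nb_examples, max_decoder_seq_length), dtype="float32")

assert in_encoder.shape[0] == in_decoder.shape[0] == predict.shape[0]
```

With len(input_texts) instead, the decoder and prediction arrays carry rows for examples that were never encoded, which is what confuses me.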