I have an encoder-decoder network modeled on the one built in this tutorial: https://towardsdatascience.com/how-to-implement-seq2seq-lstm-model-in-keras-shortcutnlp-6f355f3e5639
However, the output of the decoder LSTM consists of numbers between 0 and 1, whereas the words in the tutorial were tokenized as integers. How do I convert this output between 0 and 1 back to words using that tokenization?
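Is the right approach to take the argmax at each timestep and look that index up in the tokenizer's index-to-word mapping? Here is a minimal sketch of what I have in mind (the names `tokenizer` and `decoder_output` are mine, and the corpus is a toy stand-in, not the tutorial's data):

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer

# Toy tokenizer standing in for the one fitted on the real corpus
tokenizer = Tokenizer()
tokenizer.fit_on_texts(["hello world", "how are you"])
vocab_size = len(tokenizer.word_index) + 1  # +1 because Keras word indices start at 1

# Stand-in for the decoder's prediction for one sequence:
# shape (timesteps, vocab_size), each row sums to 1 like a softmax output
decoder_output = np.random.rand(3, vocab_size)
decoder_output /= decoder_output.sum(axis=1, keepdims=True)

# Take the most probable token index at each timestep...
token_ids = decoder_output.argmax(axis=-1)
# ...and map indices back to words (index 0 is reserved for padding in Keras)
words = [tokenizer.index_word.get(int(i), "<unk>") for i in token_ids]
print(words)
```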
The other option would be to use one-hot encoding for the tokenization, but surely you'd still have to round the floating-point outputs to turn them into integers?
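To make the rounding concern concrete, here is a toy example (my own made-up numbers) of where plain rounding seems to break down:

```python
import numpy as np

# Toy softmax-style output over a vocabulary of 5 words
probs = np.array([0.10, 0.35, 0.30, 0.15, 0.10])

print(np.round(probs))  # [0. 0. 0. 0. 0.] -- rounding wipes out the prediction entirely
print(probs.argmax())   # 1 -- index of the most probable word, no rounding needed
```

So is taking the argmax the standard way to recover the token index, rather than rounding the one-hot-style output?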