Seq2Seq is a sequence-to-sequence learning add-on for the Python deep learning library Keras.
Questions tagged [seq2seq]
318 questions
0
votes
1 answer
Decoder targets required for RNN inference
I have been trying to run some experiments using the deepfix tool (https://bitbucket.org/iiscseal/deepfix) which is a seq2seq model for correcting common programming errors.
I made changes to the code so that it is compatible with TF 1.12, as the…

Shivam Mittal
- 1
- 2
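
For context, inference in a TF 1.x seq2seq model does not need decoder targets: a GreedyEmbeddingHelper feeds each predicted token back in as the next decoder input. A minimal sketch; all sizes, token ids, and the encoder-state stand-in below are illustrative, not deepfix's actual code:

import tensorflow as tf

# Illustrative sizes and special-token ids; not deepfix's real values.
vocab_size, emb_dim, hidden, batch_size = 100, 32, 64, 4
GO_ID, EOS_ID = 1, 2

embedding = tf.get_variable("emb", [vocab_size, emb_dim])
cell = tf.nn.rnn_cell.GRUCell(hidden)
encoder_state = cell.zero_state(batch_size, tf.float32)  # stand-in for the real encoder state

# The helper re-embeds each previous prediction, so no decoder target
# tensor is required at inference time.
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding, start_tokens=tf.fill([batch_size], GO_ID), end_token=EOS_ID)
decoder = tf.contrib.seq2seq.BasicDecoder(
    cell, helper, initial_state=encoder_state,
    output_layer=tf.layers.Dense(vocab_size))
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=20)
predicted_ids = outputs.sample_id  # [batch, time]
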
0
votes
0 answers
Embedding version seq2seq model (Keras)
I want to build an embedding version of the seq2seq model by modifying the example on the Keras GitHub:
https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py
I've tried np.reshape, but it doesn't work.
from keras.layers.embeddings…

jjlin
- 39
- 5
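
For reference, a minimal sketch of how the lstm_seq2seq.py example changes when Embedding layers replace the one-hot inputs; the vocabulary sizes are illustrative. With integer-id inputs and a sparse loss, no np.reshape to one-hot is needed:

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding

num_enc_tokens, num_dec_tokens, latent_dim = 5000, 5000, 256  # illustrative

encoder_inputs = Input(shape=(None,))  # integer ids instead of one-hot rows
enc_emb = Embedding(num_enc_tokens, latent_dim)(encoder_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

decoder_inputs = Input(shape=(None,))
dec_emb = Embedding(num_dec_tokens, latent_dim)(decoder_inputs)
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                             return_state=True)(dec_emb,
                                                initial_state=[state_h, state_c])
decoder_outputs = Dense(num_dec_tokens, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# Sparse loss keeps targets as integer ids (samples x timesteps x 1).
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy')
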
0
votes
0 answers
Keras seq2seq example save issue
Hello everyone!
I just tried the Keras seq2seq example (link).
It works well, but the problem happens when I try to save the trained model.
I never modified the code. Does anyone already know about this issue?
Please help…

INDI
- 43
- 7
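
One commonly suggested workaround, sketched below, is to save the weights and the architecture separately instead of calling model.save(), which some Keras/h5py versions fail on for this example. The small Dense model here is only a stand-in for the trained seq2seq model:

from keras.models import Sequential, model_from_json
from keras.layers import Dense

model = Sequential([Dense(8, input_shape=(4,))])  # stand-in for the trained model

model.save_weights('s2s_weights.h5')              # weights only
with open('s2s_arch.json', 'w') as f:
    f.write(model.to_json())                      # architecture only

with open('s2s_arch.json') as f:                  # later: rebuild, then reload
    restored = model_from_json(f.read())
restored.load_weights('s2s_weights.h5')
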
0
votes
0 answers
Tensorflow - Seq2Seq model weights are not loaded properly
I am working on an encoder-decoder chatbot that consists of an embedding layer, two LSTM layers, and a fully connected layer on top of the decoder.
After I load the checkpoint file, the loss is way higher than it was the last time I saved the…

liellahat
- 1
- 3
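
A frequent cause of this symptom is re-running the variable initializer after restoring, which overwrites the checkpointed weights. A minimal sketch with an illustrative variable and path:

import tensorflow as tf

w = tf.get_variable("w", shape=[10, 10])  # illustrative variable
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # only when training fresh
    saver.save(sess, "./model.ckpt")

with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")  # restore *instead of* initializing
    # Do not run tf.global_variables_initializer() after restore; it
    # would reset every weight and the loss would jump back up.
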
0
votes
0 answers
How to implement attention for a sequence-to-sequence model in Keras, step by step
How do I implement attention for a sequence-to-sequence model in Keras? I understand this seq2seq model, but I want to add attention as in Fig. B (shown in the attached seq2seq link). Please explain step by step.

Mr.Beans
- 1
- 2
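
A minimal sketch of one common way to add Luong-style dot-product attention on top of a Keras encoder-decoder; the sizes are illustrative and this is one of several possible layouts, not the only answer:

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Activation, dot, concatenate

latent_dim, num_tokens = 256, 1000  # illustrative

encoder_inputs = Input(shape=(None, num_tokens))
encoder_outputs, state_h, state_c = LSTM(latent_dim, return_sequences=True,
                                         return_state=True)(encoder_inputs)

decoder_inputs = Input(shape=(None, num_tokens))
decoder_outputs = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])

# Score each decoder step against every encoder step, softmax the scores
# into weights, then build contexts as weighted sums of encoder outputs.
attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])
attention = Activation('softmax')(attention)
context = dot([attention, encoder_outputs], axes=[2, 1])

decoder_combined = concatenate([context, decoder_outputs])
output = Dense(num_tokens, activation='softmax')(decoder_combined)
model = Model([encoder_inputs, decoder_inputs], output)
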
0
votes
1 answer
Saved tensorflow NLP model outputs nothing after restoring saved variables for training
I built a seq2seq model for a chatbot after getting inspired by a GitHub repo. To train the chatbot I used my Facebook chat history. Since most of my chat is Hindi words written in the English alphabet, I had to train word embeddings from scratch. I…

Mohit Saini
- 21
- 1
- 8
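
A first sanity check for this kind of failure, sketched below, is to compare what the checkpoint actually contains against the variables in the current graph; the checkpoint path is illustrative:

import tensorflow as tf

ckpt = "./chatbot_model.ckpt"  # illustrative path

# List every variable stored in the checkpoint.
for name, shape in tf.train.list_variables(ckpt):
    print(name, shape)

# Any graph variable missing from the checkpoint was silently left at
# its initial value after "restoring" (run this inside the restored graph).
graph_vars = {v.op.name for v in tf.global_variables()}
ckpt_vars = {name for name, _ in tf.train.list_variables(ckpt)}
print("in graph but not in checkpoint:", graph_vars - ckpt_vars)
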
0
votes
1 answer
Implement attention mechanism in seq2seq Maluuba model
Hello, I'm trying to add attention to the simple Maluuba/qgen-workshop seq2seq model, but I cannot figure out the correct batch_size to pass to the initial state. I tried this:
# Attention
# attention_states: [batch_size, max_time,…

Mekasa
- 1
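
For what it's worth, with tf.contrib.seq2seq the batch size passed to zero_state is usually taken from the runtime tensors rather than hard-coded, and the encoder state is attached with clone(). A minimal sketch with stand-in tensors, not the Maluuba model's own names:

import tensorflow as tf

hidden, batch_size, max_time = 64, 4, 10  # illustrative
encoder_outputs = tf.zeros([batch_size, max_time, hidden])  # stand-in
cell = tf.nn.rnn_cell.GRUCell(hidden)
encoder_state = cell.zero_state(batch_size, tf.float32)     # stand-in

attention = tf.contrib.seq2seq.LuongAttention(
    num_units=hidden, memory=encoder_outputs)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(cell, attention)

# Take the batch size from the tensors at runtime, then clone the
# zero state so it carries the real encoder state.
dynamic_batch = tf.shape(encoder_outputs)[0]
initial_state = attn_cell.zero_state(dynamic_batch, tf.float32).clone(
    cell_state=encoder_state)
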
0
votes
1 answer
tf.nn.rnn_cell.GRUCell was built on CPU device
I'm training a two-layer seq2seq model in which gru_cell is used.
def create_rnn_cell():
    encoDecoCell = tf.contrib.rnn.GRUCell(emb_dim)
    encoDecoCell = tf.contrib.rnn.DropoutWrapper(
        …

Ming
- 1
- 1
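
A sketch of how device placement is usually controlled for such cells; the sizes and keep probability are illustrative, and log_device_placement shows where each op actually landed:

import tensorflow as tf

def create_rnn_cell(emb_dim, keep_prob):
    cell = tf.contrib.rnn.GRUCell(emb_dim)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)

# Build the two-layer stack under an explicit device scope.
with tf.device('/device:GPU:0'):
    stacked = tf.contrib.rnn.MultiRNNCell(
        [create_rnn_cell(128, 0.8) for _ in range(2)])

# Soft placement falls back to CPU for ops without a GPU kernel; the
# log shows where every op was actually placed.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
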
0
votes
1 answer
TensorFlow: Attention output gets concatenated with the next decoder input, causing dimension mismatch in seq2seq model
[TF 1.8]
I'm trying to build a seq2seq model for a toy chatbot to learn about TensorFlow and deep learning. I was able to train and run the model with sampled softmax and beam search, but then I tried to apply tf.contrib.seq2seq.LuongAttention using…

Sea Otter
- 73
- 1
- 6
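
For context, AttentionWrapper's default cell_input_fn concatenates the previous attention vector onto each decoder input, so the wrapped cell sees inputs of size emb_dim plus the attention size; the mismatch appears when the model is sized for emb_dim alone. A minimal sketch with stand-in tensors and illustrative sizes:

import tensorflow as tf

hidden, emb_dim, batch, max_time = 64, 32, 4, 10  # illustrative
memory = tf.zeros([batch, max_time, hidden])  # stand-in for encoder outputs
cell = tf.nn.rnn_cell.GRUCell(hidden)

mechanism = tf.contrib.seq2seq.LuongAttention(num_units=hidden, memory=memory)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell, mechanism,
    attention_layer_size=hidden)  # context is projected to `hidden` first

# Each decoder step therefore consumes emb_dim + hidden features.
step_input = tf.zeros([batch, emb_dim])
state = attn_cell.zero_state(batch, tf.float32)
output, state = attn_cell(step_input, state)
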
0
votes
0 answers
Pytorch Spell Check Character RNN not outputting end tokens
I’m trying to implement a character RNN for the purpose of spell correction and tokenization. The model is based on the practical-pytorch GRU-RNN implementation of a seq2seq model; the loss function is masked cross-entropy, as they use here, and…

E Holm
- 1
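
A decoder only learns to emit an end token if that token appears in the training targets and is kept inside the loss mask. A minimal sketch of that preprocessing, with illustrative token ids rather than the practical-pytorch code itself:

import torch

EOS = 2  # illustrative ids
PAD = 0

def append_eos(seqs, pad_to):
    """seqs: list of 1-D LongTensors of token ids (without <EOS>).
    pad_to must exceed the longest sequence by at least one slot."""
    out = torch.full((len(seqs), pad_to), PAD, dtype=torch.long)
    for i, s in enumerate(seqs):
        out[i, :len(s)] = s
        out[i, len(s)] = EOS  # the step the model must learn to predict
    return out

targets = append_eos([torch.tensor([5, 7, 9]), torch.tensor([4, 6])], pad_to=5)
mask = targets.ne(PAD)  # keeps the <EOS> positions inside the masked loss
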
0
votes
1 answer
Seq2seq for non-sentence, float data; stuck configuring the decoder
I am trying to apply sequence-to-sequence modelling to EEG data. The encoding works just fine, but getting the decoding to work is proving problematic. The input-data has the shape None-by-3000-by-31, where the second dimension is the…

MPKenning
- 569
- 1
- 7
- 22
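
For real-valued data like EEG there is no softmax vocabulary: the decoder regresses the next frame through a linear Dense layer under an MSE loss. A minimal Keras sketch following the 3000-by-31 shape from the question; latent_dim is illustrative:

from keras.models import Model
from keras.layers import Input, LSTM, Dense

timesteps, channels, latent_dim = 3000, 31, 128

encoder_inputs = Input(shape=(timesteps, channels))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

decoder_inputs = Input(shape=(None, channels))  # previous frame (teacher forcing)
decoder_outputs = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(channels)(decoder_outputs)  # linear: no activation

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='mse')
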
0
votes
2 answers
Why do we reverse the input when feeding a seq2seq model in TensorFlow (tf.reverse(inputs, [-1]))?
Why do we reverse the input when feeding a seq2seq model in TensorFlow (tf.reverse(inputs, [-1]))?
training_predictions, test_predictions = seq2seq_model(tf.reverse(inputs, [-1]),
                                                       targets,
                                                       …

music in air
- 1
- 2
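
The usual rationale, from the original seq2seq paper (Sutskever et al., 2014), is that reversing the source puts its first tokens closest to the first tokens the decoder must emit, creating short-term dependencies that ease optimization. A toy illustration of what the op itself does:

import tensorflow as tf

inputs = tf.constant([[1, 2, 3, 4]])        # one batch row of token ids
reversed_inputs = tf.reverse(inputs, [-1])  # time axis flipped: [[4, 3, 2, 1]]
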
0
votes
0 answers
Multi-step Time Series Prediction w/ seq2seq LSTM
I am trying to predict time series data using an encoder-decoder with LSTM layers. So far, I am using 20 points of past data to predict 20 future points. For each sample of 20 past data points, the first value in the predicted sequence is very close…
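
A common encoder-decoder layout for this 20-in/20-out setup is sketched below; the RepeatVector bridge and all sizes are illustrative, not the asker's code:

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

n_in, n_out, n_features, units = 20, 20, 1, 64  # illustrative

model = Sequential([
    LSTM(units, input_shape=(n_in, n_features)),  # encoder: one summary vector
    RepeatVector(n_out),                          # one copy per output step
    LSTM(units, return_sequences=True),           # decoder
    TimeDistributed(Dense(n_features)),           # one value per future step
])
model.compile(optimizer='adam', loss='mse')
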
0
votes
1 answer
pytorch seq2seq encoder forward method
I'm following the PyTorch seq2seq tutorial, and below is how they define the encoder.
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size =…

aerin
- 20,607
- 28
- 102
- 140
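
For context, a completed version of that encoder in the tutorial's style is sketched below: the embedding maps each token id to hidden_size features, and view(1, 1, -1) reshapes to the (seq_len, batch, features) layout nn.GRU expects:

import torch
import torch.nn as nn

class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input, hidden):
        # One token at a time: (1, 1, hidden_size) = (seq_len, batch, features).
        embedded = self.embedding(input).view(1, 1, -1)
        output, hidden = self.gru(embedded, hidden)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size)
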
0
votes
1 answer
Encoder returning same states for every input Keras seq2seq
I am using an encoder-decoder seq2seq architecture in Keras.
I'm passing a one-hot array of shape (num_samples, max_sentence_length, max_words) for training, and using teacher forcing.
#Encoder
latent_dim = 256
encoder_inputs = Input(shape=(None,…

Arth Dh
- 11
- 5
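
One way to check whether the states really are identical is to build a standalone inference model that maps encoder inputs straight to the LSTM states, then compare its predictions across different one-hot batches. A minimal sketch with illustrative sizes:

from keras.models import Model
from keras.layers import Input, LSTM

latent_dim, max_words = 256, 1000  # illustrative

encoder_inputs = Input(shape=(None, max_words))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# encoder_model.predict(one_hot_batch) returns [h, c] per input, making
# it easy to verify that distinct inputs give distinct states.
encoder_model = Model(encoder_inputs, [state_h, state_c])
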