Questions tagged [encoder-decoder]
184 questions
0
votes
2 answers
How do I make a synthesizable parameterized encoder in Verilog?
I have tried to do it this way:
module encoder
#(
    parameter WIDTH = 4
)
(
    input wire [WIDTH-1:0] in,
    output reg [$clog2(WIDTH)-1:0] out
);
genvar i;
generate
    for (i = 0; i < WIDTH; i = i + 1)
    begin : gen_block
        always…

Abhishek Revinipati
- 33
- 3
0
votes
0 answers
Error while trying to train an Encoder-Decoder model to convert between string date representations
I want to train an Encoder-Decoder model which converts dates from a string format to a numeric format. For example, I want to convert "April 22, 2019" to "2019-04-22".
Here's the code that I used to create the dataset:
months = {
    1: "January",
    …

Join_Where
- 101
- 6
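A minimal sketch of building such a string-to-numeric date dataset (the months dict and the random ranges here are assumptions, not the asker's exact code):

import random

months = {1: "January", 2: "February", 3: "March", 4: "April",
          5: "May", 6: "June", 7: "July", 8: "August", 9: "September",
          10: "October", 11: "November", 12: "December"}

def make_pair():
    # Draw a random date and render it in both formats
    y, m, d = random.randint(1900, 2100), random.randint(1, 12), random.randint(1, 28)
    return f"{months[m]} {d}, {y}", f"{y:04d}-{m:02d}-{d:02d}"

pairs = [make_pair() for _ in range(10000)]
# e.g. ('April 22, 2019', '2019-04-22')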
0
votes
1 answer
Extracting features of the hidden layer of an autoencoder using Pytorch
I am following this tutorial to train an autoencoder.
The training has gone well. Next, I am interested in extracting features from the hidden layer (between the encoder and decoder).
How should I do that?

Kadaj13
- 1,423
- 3
- 17
- 41
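One common answer, sketched below with an assumed two-part architecture (the tutorial's exact layers are not shown in the excerpt): run only the encoder half in a no-grad forward pass.

import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AE()
model.eval()
with torch.no_grad():
    x = torch.randn(4, 784)      # a batch of flattened inputs
    features = model.encoder(x)  # hidden-layer code, shape (4, 32)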
0
votes
1 answer
Sentence Indicating in Neural Machine Translation Tasks
I have seen many people working on Neural Machine Translation. Usually, they wrap their sentences in <start>, <end>, etc. tags before training the network. Of course it's a logical solution to specify the start and end of…

Burhan Bilen
- 83
- 2
- 7
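A minimal sketch of that preprocessing step (the token strings <start>/<end> are an assumption; papers also use <sos>/<eos>):

SOS, EOS = "<start>", "<end>"

def add_markers(sentence):
    # Wrap each sentence so the decoder knows where to begin and stop
    return f"{SOS} {sentence} {EOS}"

pairs = [("je suis étudiant", "i am a student")]
tagged = [(add_markers(src), add_markers(tgt)) for src, tgt in pairs]
# [('<start> je suis étudiant <end>', '<start> i am a student <end>')]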
0
votes
1 answer
Why is the context vector not passed to every input of the decoder?
In this model, in the encoder part, we give an input sentence with 3 words A, B, and C, and we get a context vector W, which is passed to the decoder. Why don't we pass W to all the cells of the decoder instead of the output of the previous cell,…

Mohamed Abdullah
- 129
- 1
- 1
- 8
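Both designs actually exist in the literature; a small PyTorch sketch contrasting them (all sizes made up for illustration):

import torch
import torch.nn as nn

steps, batch, hidden = 5, 2, 8
context = torch.randn(1, batch, hidden)    # the context vector W from the encoder
embed = torch.randn(steps, batch, hidden)  # decoder input embeddings

# Variant 1 (Sutskever-style): W only initializes the decoder's hidden state
dec1 = nn.GRU(hidden, hidden)
out1, _ = dec1(embed, context)

# Variant 2 (Cho-style): W is also concatenated to the input at every step
dec2 = nn.GRU(hidden * 2, hidden)
ctx_rep = context.expand(steps, batch, hidden)  # repeat W for each time step
out2, _ = dec2(torch.cat([embed, ctx_rep], dim=2), context)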
0
votes
0 answers
Neural machine translation - seq2seq encoder-decoder
I am working on seq2seq NMT for French to English translation. In the inference model I am getting a cardinality error:
ValueError: Data cardinality is ambiguous:
x sizes: 1, 5, 5
Please provide data which shares the same first dimension.
…

Ankur
- 1
- 1
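That ValueError typically means the arrays passed to predict()/fit() have mismatched first (batch) dimensions; a minimal reproduction and fix (toy model, not the asker's NMT code):

import numpy as np
import tensorflow as tf

inp_a = tf.keras.Input(shape=(3,))
inp_b = tf.keras.Input(shape=(3,))
model = tf.keras.Model([inp_a, inp_b], tf.keras.layers.Add()([inp_a, inp_b]))

a = np.zeros((1, 3))  # batch of 1
b = np.zeros((5, 3))  # batch of 5
# model.predict([a, b])  # raises: Data cardinality is ambiguous

a_fixed = np.repeat(a, 5, axis=0)  # give every input the same first dimension
preds = model.predict([a_fixed, b])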
0
votes
1 answer
Explain model.fit in LSTM encoder-decoder with attention model for text summarization using Keras/TensorFlow
In deep learning using Keras I have usually come across model.fit as something like this:
model.fit(x_train, y_train, epochs=50, callbacks=[es], batch_size=512, validation_data=(x_val, y_val))
Whereas in NLP tasks, I have seen some articles on Text…

morelloking
- 193
- 1
- 3
- 11
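In those articles, fit() usually receives two inputs, the encoder input plus the target shifted right for teacher forcing, and the target itself as y. A toy end-to-end sketch (all sizes and names are assumptions):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab, steps, units = 50, 10, 32
enc_in = layers.Input(shape=(steps,))
dec_in = layers.Input(shape=(steps,))
_, state = layers.GRU(units, return_state=True)(layers.Embedding(vocab, units)(enc_in))
dec_seq = layers.GRU(units, return_sequences=True)(
    layers.Embedding(vocab, units)(dec_in), initial_state=state)
probs = layers.Dense(vocab, activation="softmax")(dec_seq)
model = tf.keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x_enc = np.random.randint(0, vocab, (8, steps))
x_dec = np.random.randint(0, vocab, (8, steps))  # target shifted right (teacher forcing)
y = np.random.randint(0, vocab, (8, steps))      # target, one step ahead
model.fit([x_enc, x_dec], y, epochs=1, batch_size=4)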
0
votes
0 answers
How can we explain the validation loss below?
I have trained a CNN+LSTM encoder-decoder model with attention, varying the number of layers.
The problem I am facing is very strange to me: the validation loss fluctuates around 3.***, as we can see from the loss graphs below. I have 3 CNN…

Hikmah
- 1
- 1
0
votes
1 answer
How to train an encoder-decoder model?
I do not really understand the seemingly different (or actually the same?) training procedures for an LSTM encoder-decoder.
On the one hand, in the tutorial they use a for loop for…

ctiid
- 335
- 1
- 3
- 14
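The for loop in such tutorials is usually the per-time-step decoder pass with teacher forcing; a self-contained PyTorch sketch of that pattern (all sizes assumed):

import torch
import torch.nn as nn

vocab, hidden, steps, batch = 20, 16, 5, 3
embed = nn.Embedding(vocab, hidden)
decoder = nn.GRUCell(hidden, hidden)
out_layer = nn.Linear(hidden, vocab)
loss_fn = nn.CrossEntropyLoss()

h = torch.zeros(batch, hidden)              # would come from the encoder
target = torch.randint(0, vocab, (steps, batch))
inp = torch.zeros(batch, dtype=torch.long)  # assumed <sos> token index 0

loss = 0.0
for t in range(steps):                      # one decoder step per target token
    h = decoder(embed(inp), h)
    loss = loss + loss_fn(out_layer(h), target[t])
    inp = target[t]                         # teacher forcing: feed the true token
loss.backward()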
0
votes
1 answer
Encoder-Decoder for Trajectory Prediction
I need to use an encoder-decoder structure to predict 2D trajectories. As almost all available tutorials are related to NLP (with sparse vectors), I am not sure how to adapt those solutions to continuous data.
In addition to my ignorance in…

Anil Bora Yayak
- 5
- 3
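One way to adapt seq2seq to continuous data: drop the embedding/softmax machinery, feed raw (x, y) pairs directly, and train with a regression loss. A hedged sketch (architecture and sizes assumed):

import torch
import torch.nn as nn

class TrajSeq2Seq(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(2, hidden, batch_first=True)
        self.decoder = nn.GRU(2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past, future_in):
        _, h = self.encoder(past)            # summarize the observed trajectory
        out, _ = self.decoder(future_in, h)  # decode the future steps
        return self.head(out)                # one (x, y) prediction per step

model = TrajSeq2Seq()
past = torch.randn(4, 8, 2)    # 8 observed 2D points per sample
future = torch.randn(4, 6, 2)  # 6 points to predict (fed back teacher-forced)
loss = nn.MSELoss()(model(past, future), future)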
0
votes
2 answers
Error while generating Universal Sentence Encoder embeddings and reducing their dimension
Below is the code for generating embeddings and reducing their dimension:
import tensorflow_hub as hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"  # assumed USE module URL
embed_fn = None

def generate_embeddings(text):
    global embed_fn  # without this, the assignment below raises UnboundLocalError
    if embed_fn is None:
        embed_fn = hub.load(module_url)
    embedding = embed_fn(text).numpy()
    return embedding
from…

Sweety Tripathi
- 25
- 6
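For the dimensionality-reduction half, a common follow-up is PCA over the 512-dim USE vectors; a minimal sketch (module URL and sizes assumed, not necessarily the asker's approach):

import tensorflow_hub as hub
from sklearn.decomposition import PCA

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
emb = embed(["hello world", "encoder decoder models"]).numpy()  # shape (2, 512)
reduced = PCA(n_components=2).fit_transform(emb)                # shape (2, 2)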
0
votes
3 answers
Implementing SegNet using MaxPoolingWithArgmax2D and MaxUnpooling2D gives an error
I am implementing the SegNet segmentation network in Python but am getting the following error:
Traceback (most recent call last):
File "/scratch/pkasar.dbatu/training/NEW_SEGNET_updated_on_16_11_20.py", line 370,…

Pankaj Kasar
- 21
- 5
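Those two custom layers typically wrap TensorFlow's raw argmax-pooling op; checking its shapes and dtypes in isolation is a quick way to localize such errors (toy tensor assumed):

import tensorflow as tf

x = tf.random.normal([1, 4, 4, 3])
pooled, argmax = tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding="SAME")
print(pooled.shape)  # (1, 2, 2, 3): downsampled feature map
print(argmax.dtype)  # int64 indices that MaxUnpooling2D needs later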
0
votes
1 answer
How does nn.Embedding work when developing an encoder-decoder model?
This tutorial teaches how to develop a simple encoder-decoder model with attention using PyTorch.
However, in the encoder or decoder, self.embedding = nn.Embedding(input_size, hidden_size) (or similar) is defined. In the PyTorch documentation,…

Kadaj13
- 1,423
- 3
- 17
- 41
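nn.Embedding is just a trainable lookup table mapping token ids to dense vectors; a tiny sketch (sizes assumed):

import torch
import torch.nn as nn

embedding = nn.Embedding(10, 4)   # vocab of 10 ids, 4-dim vectors
tokens = torch.tensor([1, 5, 2])  # a sequence of token indices
vectors = embedding(tokens)       # shape (3, 4): one row per token
# These vectors are what the encoder/decoder RNN then consumes.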
0
votes
0 answers
Appropriate loss function in PyTorch when the output is an array of floats
I am writing an encoder/decoder model very similar to https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
The only difference is that, there, the words are represented by some indices; I want to represent them based on another metric,…

Kadaj13
- 1,423
- 3
- 17
- 41
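For real-valued outputs, a regression criterion such as nn.MSELoss (or L1/SmoothL1) replaces the NLLLoss used in the translation tutorial; a minimal sketch (shapes assumed):

import torch
import torch.nn as nn

pred = torch.randn(4, 10, requires_grad=True)  # decoder output: float array
target = torch.randn(4, 10)                    # ground-truth float array
loss = nn.MSELoss()(pred, target)
loss.backward()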
0
votes
1 answer
Why is Encoder hidden state shape different from Encoder Output shape in Bahdanau attention
This question relates to the neural machine translation shown here:
Neural Machine Translation
Here:
Batch size = 64
Input length (the number of words in the example input sentence, also called the number of time steps) = 16
Number of RNN units (which…

Utpal Mattoo
- 890
- 3
- 17
- 41
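The shape difference follows from return_sequences vs return_state; a tiny sketch using the question's batch size and input length (the 1024 units and 256-dim embeddings are assumptions, since the excerpt is cut off):

import tensorflow as tf

gru = tf.keras.layers.GRU(1024, return_sequences=True, return_state=True)
x = tf.random.normal([64, 16, 256])  # (batch, time steps, embedding dim)
output, state = gru(x)
# output.shape == (64, 16, 1024): one hidden vector per time step (used by attention)
# state.shape  == (64, 1024): only the final hidden state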