I'm wondering about the architecture of a seq2seq model (without attention), and also about how to build the decoder_inputs data.
While studying, I looked at the structure of seq2seq models. Sometimes a RepeatVector layer is used and sometimes not. Are both of these seq2seq? As far as I know, the version without a RepeatVector is the correct one, but it's very confusing.
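For example, the RepeatVector version I keep seeing looks roughly like this (my own sketch, not from any tutorial; n_steps_in, n_steps_out, n_features, and latent_dim are placeholder names I made up):

from tensorflow import keras

# RepeatVector-style "seq2seq": the encoder's final hidden state is
# repeated n_steps_out times and fed to the decoder at every timestep,
# instead of using a separate decoder_inputs tensor.
n_steps_in, n_steps_out, n_features = 10, 1, 1  # placeholder sizes
latent_dim = 64

rv_model = keras.Sequential([
    keras.layers.LSTM(latent_dim, input_shape=(n_steps_in, n_features)),
    keras.layers.RepeatVector(n_steps_out),
    keras.layers.LSTM(latent_dim, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(1)),
])
rv_model.compile(optimizer="adam", loss="mse")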
I'm not trying to encode and decode natural language; I'm trying to predict the power produced from sunlight intensity. I want to predict the electricity production for the next day (the 11th) from the past 10 days of data.
For this setup, I know that encoder_inputs should be the solar intensity of the past 10 days. But I don't know what to put in decoder_inputs. Also, can I use the 11th day's power value as the decoder output (target)?
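To make the question concrete, here is how I currently picture the arrays (my own sketch; intensity and power are placeholder 1-D NumPy arrays of equal length, one value per day, and the zero "start" value for decoder_inputs is just a guess by analogy with the start-of-sequence token in the NLP example):

import numpy as np

# Sliding windows over the daily series: 10 past days of sunlight
# intensity as encoder input, the 11th day's power as the target.
window = 10
encoder_input_data = np.stack(
    [intensity[i : i + window] for i in range(len(intensity) - window)]
)[..., np.newaxis]                                    # (samples, 10, 1)
decoder_target_data = power[window:][:, np.newaxis, np.newaxis]  # (samples, 1, 1)

# One guess for decoder_inputs: a zero "start" value per sample,
# analogous to the start-of-sequence token in the NLP example.
decoder_input_data = np.zeros_like(decoder_target_data)          # (samples, 1, 1)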
Below is the encoder-decoder code from the Keras reference example (https://keras.io/examples/nlp/lstm_seq2seq/).
from tensorflow import keras

# num_encoder_tokens, num_decoder_tokens, and latent_dim are defined
# earlier in the linked example.

# Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
I searched through a lot of code, but I ran into a problem: most examples predict the next label from the past labels themselves, rather than predicting the label from a separate feature, which is what I'm doing.
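For reference, here is my current attempt to adapt the NLP model above to this regression task (my own sketch; I replaced the softmax with a linear Dense(1) and an MSE loss, which may or may not be the right call):

from tensorflow import keras

latent_dim = 64  # placeholder size

# Encoder: 10 days of sunlight intensity, 1 feature per day.
encoder_inputs = keras.Input(shape=(10, 1))
_, state_h, state_c = keras.layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: a single step (the 11th day), seeded with the encoder states.
decoder_inputs = keras.Input(shape=(1, 1))
decoder_outputs, _, _ = keras.layers.LSTM(
    latent_dim, return_sequences=True, return_state=True
)(decoder_inputs, initial_state=[state_h, state_c])

# Linear output instead of softmax, since this is regression.
decoder_outputs = keras.layers.Dense(1)(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="mse")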