
Suppose you have a time series classification task with n_classes possible classes, and you want to output the probability of each class for every timestep (as in seq2seq). How can we achieve multi-step, multi-class classification with a Conv1D network?

With an RNN it would be something like:

from tensorflow.keras.layers import Input, LSTM, Dense

# input shape: (n_samples, n_timesteps, n_features)
inputs = Input(shape=(n_timesteps, n_features))
layer = LSTM(n_neurons, return_sequences=True)(inputs)
layer = Dense(n_classes, activation="softmax")(layer)

# objective output shape: (n_samples, n_timesteps, n_classes)

Keras knows that, since the LSTM layer returns sequences (a 3D tensor), the Dense layer should be applied to every timestep independently, as if it were wrapped in a TimeDistributed layer.
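A quick shape check confirms this behavior (the sizes below are made up, just for illustration):

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# made-up sizes, just for illustration
n_timesteps, n_features, n_neurons, n_classes = 10, 4, 32, 3

inputs = Input(shape=(n_timesteps, n_features))
x = LSTM(n_neurons, return_sequences=True)(inputs)
outputs = Dense(n_classes, activation="softmax")(x)
model = Model(inputs, outputs)

# Dense on a 3D tensor acts on the last axis, i.e. per timestep
print(model.output_shape)  # (None, 10, 3)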

How would we achieve the same thing with a Conv1D layer? I imagine we need to manually wrap the last Dense layer in a TimeDistributed layer, but I don't know how to build the network before it so that it outputs sequences.

from tensorflow.keras.layers import Input, Conv1D, Dense, TimeDistributed

# input shape: (n_samples, n_timesteps, n_features)
inputs = Input(shape=(n_timesteps, n_features))
layer = Conv1D(n_neurons, 3)(inputs)

# ... ?
# can we use a Flatten layer in this case?

layer = TimeDistributed(Dense(n_classes, activation="softmax"))(layer)

# objective output shape: (n_samples, n_timesteps, n_classes)
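For what it's worth, my best guess so far is that no Flatten is needed at all: with padding="same" and stride 1, the Conv1D output keeps its length equal to n_timesteps, so the time axis survives to the final layer. An untested sketch, with made-up sizes:

from tensorflow.keras.layers import Input, Conv1D, Dense, TimeDistributed
from tensorflow.keras.models import Model

# made-up sizes, just for illustration
n_timesteps, n_features, n_neurons, n_classes = 10, 4, 32, 3

inputs = Input(shape=(n_timesteps, n_features))
# padding="same" with stride 1 keeps the sequence length at n_timesteps
x = Conv1D(n_neurons, 3, padding="same", activation="relu")(inputs)
outputs = TimeDistributed(Dense(n_classes, activation="softmax"))(x)
model = Model(inputs, outputs)

print(model.output_shape)  # (None, 10, 3)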
