In a seq2seq model with an encoder and a decoder, at each generation step a softmax layer outputs a distribution over the entire vocabulary. In CNTK, a greedy decoder can be implemented easily with the C.hardmax function. It looks like this:
def create_model_greedy(s2smodel):
    # model used in (greedy) decoding (history is decoder's own output)
    @C.Function
    @C.layers.Signature(InputSequence[C.layers.Tensor[input_vocab_dim]])
    def model_greedy(input): # (input*) --> (word_sequence*)
        # Decoding is an unfold() operation starting from sentence_start.
        # We must transform s2smodel (history*, input* -> word_logp*) into a generator (history* -> output*)
        # which holds 'input' in its closure.
        unfold = C.layers.UnfoldFrom(lambda history: s2smodel(history, input) >> C.hardmax,
                                     # stop once sentence_end_index was max-scoring output
                                     until_predicate=lambda w: w[..., sentence_end_index],
                                     length_increase=length_increase)
        return unfold(initial_state=sentence_start, dynamic_axes_like=input)
    return model_greedy
However, I don't want to output the token with the maximum probability at each step. Instead, I want a random decoder that samples each token from the predicted probability distribution over the vocabulary.
How can I do that? Any help is appreciated. Thanks.
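To make the intent concrete, here is a rough sketch of one direction I was considering: perturb the per-step scores with Gumbel noise and then take the hardmax, which (by the Gumbel-max trick) should be equivalent to sampling from the softmax distribution. This is unverified, and it assumes C.random.gumbel_like from the cntk.random module (CNTK 2.2+) is available:

import cntk as C

# Hypothetical sampling step (unverified): add Gumbel(0, 1) noise to the
# (unnormalized) log-probabilities and take the argmax. By the Gumbel-max
# trick, this draws one token from the softmax distribution over the vocabulary.
@C.Function
def sample_token(scores):
    noisy_scores = scores + C.random.gumbel_like(scores)  # assumes cntk.random (CNTK 2.2+)
    return C.hardmax(noisy_scores)

# Inside model_greedy, this would replace C.hardmax, e.g.:
#     unfold = C.layers.UnfoldFrom(lambda history: s2smodel(history, input) >> sample_token,
#                                  until_predicate=lambda w: w[..., sentence_end_index],
#                                  length_increase=length_increase)

The appeal of this formulation is that the output stays one-hot, so the rest of the unfold loop should not need to change.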