The best way to do this is to use a GlobalAveragePooling1D layer. It receives the token embeddings produced by the Embedding layer, with shape (n_sentences, n_tokens, emb_dim), and averages them over the token dimension, so the result has shape (n_sentences, emb_dim).
Here is a code example:
import numpy as np
from tensorflow.keras.layers import Input, Embedding, GlobalAveragePooling1D
from tensorflow.keras.models import Model

embedding_dim = 128
vocab_size = 100
sentence_len = 20

# pretrained embedding matrix (random here) and dummy sentences of token ids
embedding_matrix = np.random.uniform(-1, 1, (vocab_size, embedding_dim))
test_sentences = np.random.randint(0, vocab_size, (3, sentence_len))

inp = Input((sentence_len,))
embedder = Embedding(vocab_size, embedding_dim,
                     trainable=False, weights=[embedding_matrix])(inp)
avg = GlobalAveragePooling1D()(embedder)  # (n_sentences, emb_dim)
model = Model(inp, avg)
model.summary()

model(test_sentences)  # the mean of all the word embeddings inside each sentence
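To verify that the pooling really computes the per-sentence mean, you can compare the model output with a manual NumPy lookup and average over the token axis (a quick sanity check against the model above; the small tolerance accounts for float32 rounding inside the layer):

# manual lookup + mean over the token axis should match the pooled output
manual = embedding_matrix[test_sentences].mean(axis=1)
print(np.allclose(model(test_sentences).numpy(), manual, atol=1e-6))  # True

Note that if your sentences are padded, you can pass mask_zero=True to the Embedding layer (reserving token id 0 for padding); in recent TF/Keras versions GlobalAveragePooling1D supports masking, so the padding positions are then excluded from the average.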