The Keras documentation isn't clear about what this layer actually is. I understand that we can use it to compress the input feature space into a smaller one. But how is this done from a neural design perspective? Is it an autoencoder, an RBM?
-
It's a lookup table that can be trained – gokul_uf Oct 15 '16 at 15:31
-
It simply creates and indexes a weight matrix; see my detailed answer below (https://stackoverflow.com/a/53101566/9024698). – Outcast Nov 01 '18 at 15:52
-
Although the most voted answer says it's a matrix multiplication, the source code and other answers show that it's in fact just a trainable matrix. The input words simply pick the respective rows of this matrix. – Daniel Möller Nov 05 '18 at 11:47
4 Answers
As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.
The weights of the Embedding layer are of the shape (vocabulary_size, embedding_dimension). For each training sample, the input is a list of integers that represent certain words. The integers are in the range of the vocabulary size. The Embedding layer transforms each integer i into the i-th row of the embedding weights matrix.
In order to quickly do this as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size) with one non-zero value per row. If you multiply this by the embedding weights, you get the output in the shape
(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.
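For example, here is a minimal NumPy sketch of this view (with hypothetical sizes), showing that the one-hot multiplication picks out exactly the corresponding rows of the weight matrix:

import numpy as np

# Hypothetical sizes for illustration only.
vocab_size, embedding_dim = 10, 4
W = np.random.rand(vocab_size, embedding_dim)   # embedding weights

word_ids = np.array([2, 5, 7])                  # integer-encoded words of one sample
one_hot = np.eye(vocab_size)[word_ids]          # (nb_words, vocab_size), one non-zero per row

# (nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
embedded = one_hot @ W

print(np.allclose(embedded, W[word_ids]))       # True: the same rows, selected by multiplication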
-
Interesting that it is just a simple matrix multiplication. Do you think we'd gain anything by learning the embedding with an autoencoder? – Jul 07 '16 at 01:21
-
Definitely a valid approach (see [Semi-Supervised Sequence Learning](https://papers.nips.cc/paper/5949-semi-supervised-sequence-learning.pdf)). You can also learn the embeddings with an autoencoder and then use them as initialization of the Embedding layer to reduce the complexity of your neural network (I assume that you do something else after the Embedding layer). – Lorrit Jul 07 '16 at 08:28
-
[Here](http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/) is a nice blog post about word embeddings and their advantages. – sietschie Jul 27 '16 at 12:00
-
I guess in this case, each training sample can be a sentence. Each sentence is represented as a one-hot vector. Is it correct? – thd Dec 02 '16 at 14:51
-
In the case that I presented, each training input is a set of words (it can be a sentence). Each word is represented as a one-hot vector and embedded into a dense vector. The disadvantage of this approach is that, since the input needs to be of constant length, all your sentences need to have the same number of words. An alternative would be [paragraph vectors](https://cs.stanford.edu/~quocle/paragraph_vector.pdf), which can embed sentences, paragraphs or even documents into vectors. – Lorrit Dec 02 '16 at 15:21
-
Does anyone know what embedding function Keras's built-in Embedding layer is based on? I mean, does it follow the same approach as w2v, or does it just use one-hot encoding, or something else? – Reihan_amn Jul 06 '17 at 07:09
-
It will train the Embedding layer weights like all other weights in your neural network (e.g. with stochastic gradient descent). You can also pretrain your word embeddings with w2v and use them as initial weights for the Embedding layer. You can then make the weights static or trainable, depending on your preference. – Lorrit Jul 06 '17 at 08:01
-
Thanks @Lorrit, does that mean it considers the semantic similarity of the words (words that occur in the same context are more semantically similar) when generating their vectors? I know the semantics behind the w2v algorithm, but I would like to know the semantics behind the Keras word embedding too. Will words in the same sequence get similar vectors here? – Reihan_amn Jul 06 '17 at 21:34
-
The Embedding layer will just optimize its weights in order to minimize the loss. Maybe that means that it will consider the semantic similarity, maybe it won't. You never know with neural networks. If you want to be sure that the embedding follows a certain formula (e.g. w2v), use the formula. If you have enough data, you might want to use the Embedding layer and train the embeddings. Just try it and check whether you like the results. – Lorrit Jul 08 '17 at 00:37
-
The source code references this paper: 'A Theoretically Grounded Application of Dropout in Recurrent Neural Networks', https://arxiv.org/pdf/1512.05287.pdf – mrgloom Aug 17 '17 at 14:53
-
Just a small adjustment to @Lorrit's comment on semantic similarity. Strictly speaking, while the outcome may reflect semantic similarity, embeddings are a way to completely avoid any need for a corpus/lexicon of any kind. In other words, there is no consideration of semantic similarity or other aspects that things like word2vec and conventional NLP approaches depend on. – mikkokotila Sep 10 '17 at 16:23
-
I agree with user36624 (answer below). It's **NOT** a simple matrix multiplication. – Daniel Möller May 08 '18 at 14:29
-
@DanielMöller, I agree that the Keras Embedding layer is not doing any matrix multiplication, as I show in my answer below. However, everyone has upvoted this answer and the moderators are not doing anything about all this...haha.... – Outcast Nov 02 '18 at 11:30
The Keras `Embedding` layer does not perform any matrix multiplication; it only:
1. creates a weight matrix of (vocabulary_size)x(embedding_dimension) dimensions
2. indexes this weight matrix
It is always useful to have a look at the source code to understand what a class does. In this case, we will have a look at the class `Embedding`, which inherits from the base layer class `Layer`.
(1) - Creating a weight matrix of (vocabulary_size)x(embedding_dimension) dimensions:
This occurs in the `build` function of `Embedding`:
def build(self, input_shape):
    self.embeddings = self.add_weight(
        shape=(self.input_dim, self.output_dim),
        initializer=self.embeddings_initializer,
        name='embeddings',
        regularizer=self.embeddings_regularizer,
        constraint=self.embeddings_constraint,
        dtype=self.dtype)
    self.built = True
If you have a look at the base class `Layer`, you will see that the function `add_weight` above simply creates a matrix of trainable weights (in this case of (vocabulary_size)x(embedding_dimension) dimensions):
def add_weight(self,
               name,
               shape,
               dtype=None,
               initializer=None,
               regularizer=None,
               trainable=True,
               constraint=None):
    """Adds a weight variable to the layer.

    # Arguments
        name: String, the name for the weight variable.
        shape: The shape tuple of the weight.
        dtype: The dtype of the weight.
        initializer: An Initializer instance (callable).
        regularizer: An optional Regularizer instance.
        trainable: A boolean, whether the weight should
            be trained via backprop or not (assuming
            that the layer itself is also trainable).
        constraint: An optional Constraint instance.

    # Returns
        The created weight variable.
    """
    initializer = initializers.get(initializer)
    if dtype is None:
        dtype = K.floatx()
    weight = K.variable(initializer(shape),
                        dtype=dtype,
                        name=name,
                        constraint=constraint)
    if regularizer is not None:
        with K.name_scope('weight_regularizer'):
            self.add_loss(regularizer(weight))
    if trainable:
        self._trainable_weights.append(weight)
    else:
        self._non_trainable_weights.append(weight)
    return weight
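To convince yourself of this, a small check (with hypothetical sizes, assuming the Keras 2 style API used in this answer) is to build an Embedding layer and look at the shape of its only weight:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

# After the layer has been built, its only weight is a
# (vocabulary_size) x (embedding_dimension) matrix.
model = Sequential([Embedding(input_dim=1000, output_dim=64)])
model.predict(np.random.randint(1000, size=(1, 10)))   # forces the layer to be built

print(model.layers[0].get_weights()[0].shape)          # (1000, 64)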
(2) - Indexing this weight matrix
This occurs in the `call` function of `Embedding`:
def call(self, inputs):
    if K.dtype(inputs) != 'int32':
        inputs = K.cast(inputs, 'int32')
    out = K.gather(self.embeddings, inputs)
    return out
This function returns the output of the `Embedding` layer, which is `K.gather(self.embeddings, inputs)`. What tf.keras.backend.gather does is index the weights matrix `self.embeddings` (see the `build` function above) according to `inputs`, which should be lists of non-negative integers.
These lists can be obtained, for example, by passing your text/word inputs to the one_hot function of Keras, which encodes a text into a list of word indices of size n (despite its name, this is NOT one-hot encoding; see also this example for more info: https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/).
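For instance, here is a small sketch of what one_hot returns (the exact numbers below are illustrative, since the encoding is hash-based):

from keras.preprocessing.text import one_hot

# one_hot returns a list of integer word indices, NOT one-hot vectors,
# despite its name; repeated words get the same index.
vocab_size = 50
print(one_hot('the cat sat on the mat', vocab_size))
# e.g. [29, 7, 41, 13, 29, 33]  (values vary because the encoding is hash-based)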
So that's all there is to it: there is no matrix multiplication.
On the contrary, the Keras `Embedding` layer is useful precisely because it avoids performing a matrix multiplication and hence economizes on some computational resources.
Otherwise, you could just use a Keras `Dense` layer (after you have one-hot encoded your input data) to get a matrix of trainable weights (of (vocabulary_size)x(embedding_dimension) dimensions) and then simply do the multiplication to get the output, which will be exactly the same as the output of the `Embedding` layer.
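Here is a sketch of that equivalence with hypothetical sizes, computing the Dense-style result as a plain matrix multiplication in NumPy:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

# Hypothetical sizes for illustration only.
vocab_size, embedding_dim = 10, 4
model = Sequential([Embedding(vocab_size, embedding_dim)])

ids = np.array([[1, 5, 7]])                  # integer-encoded input, shape (1, 3)
emb_out = model.predict(ids)[0]              # (3, embedding_dim)

W = model.layers[0].get_weights()[0]         # the (vocab_size, embedding_dim) weight matrix
one_hot_ids = np.eye(vocab_size)[ids[0]]     # (3, vocab_size) one-hot version of the input

# What a bias-free Dense layer with kernel W would compute on the one-hot input:
dense_out = one_hot_ids @ W

print(np.allclose(emb_out, dense_out))       # True: same output, without the multiplication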

In Keras, the `Embedding` layer is NOT a simple matrix multiplication layer, but a look-up table layer (see the call function below or the original definition).
def call(self, inputs):
    if K.dtype(inputs) != 'int32':
        inputs = K.cast(inputs, 'int32')
    out = K.gather(self.embeddings, inputs)
    return out
What it does is map each known integer n in `inputs` to a trainable feature vector W[n], whose dimension is the so-called embedded feature length.
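For a quick illustration of that look-up, here is a tiny sketch with hypothetical values (with the TensorFlow backend, K.gather is essentially tf.gather):

import tensorflow as tf

# Each integer n in the inputs picks the row W[n] of the trainable matrix.
W = tf.constant([[0., 1.], [2., 3.], [4., 5.]])   # 3 known integers, feature length 2
ids = tf.constant([2, 0])

print(tf.gather(W, ids))   # rows 2 and 0 of W: [[4., 5.], [0., 1.]]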
-
Well when you multiply a one-hot represented set of vectors with a matrix, the product becomes a look-up. So the `Embedding` layer *is* indeed a matrix multiplication. – yannis Apr 28 '18 at 14:16
-
Except that Keras nowhere performs this multiplication. It just defines "embeddings = a trainable matrix" and uses the input indices to gather word vectors from the matrix. – Daniel Möller May 08 '18 at 14:34
-
Thus, this embedding spares a lot of memory by simply not creating any one-hot version of the inputs. – Daniel Möller May 08 '18 at 14:35
In simple words (from the functionality point of view), it is a one-hot encoder plus a fully-connected layer. The layer weights are trainable.
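A small sketch of that (hypothetical sizes and random data), checking that the embedding weights are indeed updated by training like any other weights:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, GlobalAveragePooling1D, Dense

# Hypothetical sizes and random data for illustration only.
model = Sequential([
    Embedding(input_dim=100, output_dim=8),
    GlobalAveragePooling1D(),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

x = np.random.randint(100, size=(32, 5))   # 32 sequences of 5 word indices
y = np.random.randint(2, size=(32, 1))     # random binary labels

model.predict(x)                                   # make sure the weights are created
before = model.layers[0].get_weights()[0].copy()
model.fit(x, y, epochs=1, verbose=0)
after = model.layers[0].get_weights()[0]

print(np.allclose(before, after))   # False: the embedding rows were trained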
