
I need to use an embedding layer to encode word vectors, so the weights of the embedding layer are essentially the word vectors themselves. Obviously, I don't want these weights to be updated during backpropagation. My question is: does the embedding layer already prohibit weight updates by design, or do I have to do something special to achieve this?

Sheng

1 Answer


Looking at an old issue here:

https://github.com/deeplearning4j/deeplearning4j/issues/3118

one way to get what I need is to set both the learning rate and the bias to 0, i.e., .biasInit(0.0).learningRate(0.0). However, the better approach, which is also suggested in the link above, is to wrap the embedding layer in a frozen layer.

EDIT: I think I will end up with a solution like the following (note that the builder method is .activation(Activation.IDENTITY), not .activate(Activate.IDENTITY)):

new FrozenLayer.Builder()
    .layer(new EmbeddingLayer.Builder()
        .nIn(nIn)
        .nOut(nOut)
        .activation(Activation.IDENTITY)
        .biasInit(0.0)
        .build())
    .build()
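For context, here is a minimal sketch of how such a frozen embedding layer might be plugged into a full network and seeded with pretrained word vectors. The names vocabSize, embeddingDim, numClasses, and pretrainedVectors are placeholders I introduce for illustration, and the "W" parameter key assumes the default weight parameter name; exact builder options may vary across DL4J versions, so treat this as a sketch rather than the definitive setup.

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.EmbeddingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.misc.FrozenLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class FrozenEmbeddingSketch {

    // vocabSize, embeddingDim, numClasses and pretrainedVectors (shape: vocabSize x embeddingDim)
    // are hypothetical inputs supplied by the surrounding code.
    public static MultiLayerNetwork build(int vocabSize, int embeddingDim, int numClasses,
                                          INDArray pretrainedVectors) {

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // Wrap the embedding layer in FrozenLayer so its parameters
                // are excluded from gradient updates during training.
                .layer(0, new FrozenLayer.Builder()
                        .layer(new EmbeddingLayer.Builder()
                                .nIn(vocabSize)
                                .nOut(embeddingDim)
                                .activation(Activation.IDENTITY)
                                .biasInit(0.0)
                                .build())
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(embeddingDim)
                        .nOut(numClasses)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // Copy the pretrained word vectors into the embedding weight matrix
        // (assumes the default "W" weight key for the wrapped layer).
        net.getLayer(0).setParam("W", pretrainedVectors);

        return net;
    }
}

The key design point is that freezing the layer keeps the lookup behaviour of the embedding (one-hot index in, dense vector out) while guaranteeing the copied-in vectors never change, which is cleaner than relying on a zero learning rate.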
Sheng