I would like to have a layer in a sequential model that has some fixed, non-trainable weights, sparsely distributed inside the layer.
For example, I create a model with a few layers:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=n_nodes, activation=activation,
                kernel_initializer='glorot_uniform',
                input_shape=(n_nodes,)))  # input_shape belongs inside Dense
model.add(Dense(...))  # further layers elided
Then I compile and fit the model, obtaining the two layers with their trained weights.
Next, I would like to take, for example, model.layers[0],
modify and fix some of its weights, and then retrain the network.
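Extracting and editing the values seems easy enough with get_weights/set_weights; a minimal sketch (new_a is just a placeholder for one edited value):

w, b = model.layers[0].get_weights()  # kernel and bias as NumPy arrays
w[0, 0] = new_a                       # e.g. write A* into position (0, 0)
model.layers[0].set_weights([w, b])

But set_weights alone does not freeze anything: the next fit will overwrite the edited entries, and layer.trainable can only freeze the whole layer at once.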
For example, the trained weight matrix might look like this:
a b c
d e f
g h i
and I want it to be like this:
A* b c
d e F*
g H* I*
where A*, F*, H*, and I* are the edited weights, set to be non-trainable, so that after another round of training the layer ends up something like this:
A* b2 c2
d2 e2 F*
g2 H* I*
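In mask form, the frozen pattern above would be the following (1 marking a pinned entry; the name mask is mine):

import numpy as np
mask = np.array([[1, 0, 0],
                 [0, 0, 1],
                 [0, 1, 1]], dtype='float32')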
My network is built in Keras, and I have not found a way to do this transformation. Is it even possible? I thought about writing a custom Layer, but I can't figure out how to make only some values non-trainable, since trainable seems to apply to whole weight tensors; the sketch below is the direction I was considering.
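The idea would be to keep a trainable kernel plus a constant mask and constant frozen values, and combine them in call() so gradients only reach the unmasked entries. This is a rough sketch under my own assumptions (the names PartiallyFrozenDense, mask, and frozen_values are mine), not something I have working:

import tensorflow as tf

class PartiallyFrozenDense(tf.keras.layers.Layer):
    # mask: 0/1 array with the same shape as the kernel; 1 marks a frozen entry
    # frozen_values: array holding the fixed values at the masked positions
    def __init__(self, units, mask, frozen_values, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.mask = tf.constant(mask, dtype=tf.float32)
        self.frozen_values = tf.constant(frozen_values, dtype=tf.float32)
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        self.kernel = self.add_weight(
            name='kernel',
            shape=(input_shape[-1], self.units),
            initializer='glorot_uniform',
            trainable=True)
        self.bias = self.add_weight(
            name='bias',
            shape=(self.units,),
            initializer='zeros',
            trainable=True)

    def call(self, inputs):
        # Frozen positions are taken from the constants; gradients only
        # flow into the kernel entries where mask == 0.
        kernel = self.kernel * (1.0 - self.mask) + self.frozen_values * self.mask
        return self.activation(tf.matmul(inputs, kernel) + self.bias)

I would then copy the trained kernel from the original Dense layer into this one, build mask and frozen_values from the entries I want to pin, and retrain. I don't know whether this is the right or idiomatic way to do it in Keras, which is really my question.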