Say I have a simple MLP network for a 2-class problem:
from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
model.add(Dense(2, 10, init='uniform'))    # 2 inputs -> 10 hidden units (old Keras API: input_dim, output_dim)
model.add(Activation('tanh'))
model.add(Dense(10, 10, init='uniform'))   # 10 -> 10 hidden units
model.add(Activation('tanh'))
model.add(Dense(10, 2, init='uniform'))    # 10 -> 2 output units
model.add(Activation('softmax'))
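For completeness, I compile and train it roughly like this (X_train and y_train stand for my own data; y_train is one-hot encoded with two columns):

model.compile(loss='categorical_crossentropy', optimizer='sgd')
model.fit(X_train, y_train, nb_epoch=20, batch_size=16)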
After training this network I was unable to see any values in the W object when I inspected it in debug mode.
Are they stored somewhere in Theano's computational graph, and if so, is it possible to get them? If not, why are all the values in the model's activation layers None?
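For reference, this is roughly how I was trying to read the weights in the debugger (a sketch only; it assumes the Dense layer exposes a Theano shared variable W and the usual get_weights() method):

# attempt to read the learned weights of the first Dense layer
W_shared = model.layers[0].W            # Theano shared variable (if exposed)
W_values = W_shared.get_value()         # numpy array of the learned weights
# or through the layer API, which should return [W, b] as numpy arrays
weights, biases = model.layers[0].get_weights()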
UPDATE:
Sorry for being too quick. The tensor object holding the weights of a Dense layer can be found without any problem. But invoking:
model.layers[1]
gives me the Activation layer, where I would like to see the activation levels. Instead I see only:
beta = 0.1
nb_input = 1
nb_output = 1
params = []
targets = 0
updates = []
I assume Keras just does not keep these values around after the model is evaluated - is that true? If so, is creating a custom Activation layer that records the needed information the only way to capture the neurons' activations?
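Or would something along these lines be a simpler alternative to a custom layer? This is just a sketch, assuming the old Keras/Theano API where layers expose get_output(train=False) and the first layer holds the model's input tensor (X_batch is a placeholder name for a numpy array of inputs):

import theano

# compile a Theano function that maps network inputs to the output of
# the first Activation layer (model.layers[1])
get_activations = theano.function(
    [model.layers[0].input],
    model.layers[1].get_output(train=False),
    allow_input_downcast=True
)

activations = get_activations(X_batch)  # activations of the first hidden layer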