
Say I have a simple MLP network for a 2-class problem:

from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
# Old-style Keras API: Dense(input_dim, output_dim, init=...)
model.add(Dense(2, 10, init='uniform'))
model.add(Activation('tanh'))
model.add(Dense(10, 10, init='uniform'))
model.add(Activation('tanh'))
model.add(Dense(10, 2, init='uniform'))
model.add(Activation('softmax'))

After training this network I was unable to see any values in the W object when I inspected it in debug mode.

Are they stored somewhere in Theano's computational graph, and if so, is it possible to get them? If not, why are all the values in the model's activation layers None?

UPDATE:

Sorry for being too quick. The Tensor object holding the weights of a Dense layer can indeed be found. But invoking:

model.layers[1]

gives me the Activation layer, where I would like to see the activation levels. Instead I see only:

beta = 0.1
nb_input = 1
nb_output = 1
params = []
targets = 0
updates = []

I assume Keras just clears all these values after model evaluation -- is that true? If so, is the only way to record activations from neurons to create a custom Activation layer that records the needed info?

1 Answer


I am not familiar with Keras, but if it is building a conventional Theano neural network computation graph, then it is not possible to view the activation values in the way you propose.

Conventionally, only weights are stored persistently, as shared variables. Values computed in intermediate stages of a Theano computation are transient and can only be viewed via debugging the execution of the compiled Theano function (note that this is not easy to do -- debugging the host Python application is not sufficient).

If you were building the computation directly instead of using Keras, I would advise including the intermediate activation values you are interested in in the list of outputs of the Theano function. I cannot comment on how this can be achieved via Keras.
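To make the idea concrete, here is a minimal NumPy analogue of that approach (NumPy rather than Theano so it runs standalone): the forward pass simply returns the intermediate activations alongside the final output, which is exactly what listing them as extra outputs of theano.function would give you. The weight shapes mirror the MLP in the question (2 -> 10 -> 10 -> 2); the random values and the forward/softmax helpers are illustrative, not Keras internals.

```python
import numpy as np

# Illustrative weights with the same shapes as the question's MLP.
rng = np.random.RandomState(0)
W1 = rng.uniform(-0.1, 0.1, (2, 10))
W2 = rng.uniform(-0.1, 0.1, (10, 10))
W3 = rng.uniform(-0.1, 0.1, (10, 2))

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x):
    h1 = np.tanh(x.dot(W1))   # first hidden activation
    h2 = np.tanh(h1.dot(W2))  # second hidden activation
    out = softmax(h2.dot(W3))
    # Return the intermediates explicitly, just as you would by adding
    # them to the output list of a compiled Theano function.
    return out, h1, h2

out, h1, h2 = forward(np.ones((4, 2)))
print(out.shape, h1.shape, h2.shape)  # (4, 2) (4, 10) (4, 10)
```

In Theano terms, the equivalent move is compiling with something like theano.function([x], [output, hidden]) so that calling the function yields the hidden activations along with the prediction, instead of discarding them as transients.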

Daniel Renshaw