3

How do I get the weights of all the filters (e.g. 32, 64, etc.) of a Conv2D layer in Keras after each epoch? I ask because the initial weights are random, but they change after optimization.

I checked this answer but did not understand it. Please help me find a way to get the weights of all the filters after every epoch.

And one more question: in the Keras documentation for the Conv2D layer, the input shape is (samples, channels, rows, cols). What exactly does samples mean? Is it the total number of inputs we have (like the 60,000 training images in the MNIST data set) or the batch size (like 128)?

petezurich
Hitesh
    What exactly did you not understand? You need to be specific, else it's a duplicate question. – Dr. Snoopy Sep 14 '17 at 10:52
  • Suppose I am making a model in Keras which has a layer like Conv2D(64, (3, 3), activation='relu'); that means the number of filters is 64 and the size of each filter is 3*3. For the first iteration of the model these 64*3*3 values are initialized by, let's say, the glorot_uniform initializer, and then in model.compile I am using the sgd optimizer. That means these 64 filters will attain new values. I want to see these new values. – Hitesh Sep 14 '17 at 11:16
  • 1
    I know that. The question you linked provides the answer. You haven't explained what you don't understand from the other question. – Dr. Snoopy Sep 14 '17 at 12:26
  • I got the answer to my first question: model.layers[index of layer].get_weights()[0] will give the weights. I just want to ask what is meant by samples in the Conv2D layer in Keras: the total number of inputs I have, or batch_size? Because in the TensorFlow documentation they mention batch size for the Conv layer. – Hitesh Sep 14 '17 at 13:11

1 Answer

4

Samples = batch size = number of images in a batch

Keras will often use None for this dimension, meaning it can vary and you don't have to set it.

Although this dimension actually exists, when you create a layer, you pass input_shape without it:

Conv2D(64, (3, 3), input_shape=(channels, rows, cols))
#the standard is (rows, cols, channels), depending on your data_format
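You can see this extra dimension by inspecting the model's input shape. A minimal sketch (assuming the `tensorflow.keras` import path and a hypothetical 28x28 grayscale input in the default channels_last format) showing that the samples/batch axis appears as None even though input_shape never mentions it:

```python
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.models import Sequential

# input_shape has no batch dimension...
model = Sequential([Conv2D(64, (3, 3), input_shape=(28, 28, 1))])

# ...but Keras prepends None for the samples/batch axis automatically
print(model.input_shape)  # (None, 28, 28, 1)
```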

To have actions done after each epoch (or batch), you can use a LambdaCallback, passing the on_epoch_end function:

#the function to call back
def get_weights(epoch, logs):
    wsAndBs = model.layers[indexOfTheConvLayer].get_weights()
    #or: model.get_layer("layerName").get_weights()

    weights = wsAndBs[0]
    biases = wsAndBs[1]
    #do what you need to do with them
    #you can see the epoch and the logs too:
    print("end of epoch: " + str(epoch))  #for instance

#the callback
from keras.callbacks import LambdaCallback
myCallback = LambdaCallback(on_epoch_end=get_weights)

Pass this callback to the training function:

model.fit(...,...,... , callbacks=[myCallback])
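Putting the pieces together, here is a self-contained sketch (using a hypothetical tiny model, random data, and the `tensorflow.keras` import path; plain `keras` imports work the same way) that records the Conv2D kernel weights at the end of every epoch:

```python
import numpy as np
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.layers import Conv2D, Dense, Flatten
from tensorflow.keras.models import Sequential

# hypothetical tiny model: one conv layer whose weights we want to track
model = Sequential([
    Conv2D(4, (3, 3), activation='relu', input_shape=(8, 8, 1)),
    Flatten(),
    Dense(2, activation='softmax'),
])
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')

# snapshot the conv kernel (get_weights()[0]) after each epoch
weights_per_epoch = []
snapshot = LambdaCallback(
    on_epoch_end=lambda epoch, logs: weights_per_epoch.append(
        model.layers[0].get_weights()[0].copy()))

# random stand-in data instead of a real data set
x = np.random.rand(16, 8, 8, 1).astype('float32')
y = np.random.randint(0, 2, size=(16,))
model.fit(x, y, epochs=2, batch_size=8, callbacks=[snapshot], verbose=0)

# one snapshot per epoch; kernel shape is (rows, cols, in_channels, filters)
print(len(weights_per_epoch), weights_per_epoch[0].shape)  # 2 (3, 3, 1, 4)
```

Note the `.copy()`: get_weights() returns NumPy arrays, but copying makes it explicit that each snapshot is independent of later updates.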
Daniel Möller
  • 84,878
  • 18
  • 192
  • 214
  • I have used batch size = 128 on the MNIST data set – Hitesh Sep 14 '17 at 13:32
  • 2
    Ok, you're right. But this dimension shouldn't worry us. It's calculated automatically, and we don't need to put it in the layers. We only write `Conv2D(filters, kernel_size, input_shape=(side1,side2,channels))` – Daniel Möller Sep 14 '17 at 13:49