
I have a numpy array named content_image with shape (1, 224, 224, 3), which matches the input size the VGG network expects.

When I feed content_image into the input of the VGG network as illustrated below:

from keras import backend as K
from keras.applications import vgg19

model = vgg19.VGG19(input_tensor=K.variable(content_image),
                    weights='imagenet', include_top=False)

for layer in model.layers:
    if layer.name == 'block5_conv2':
        model_output = layer.output

this seems to produce outputs on a scale of roughly [0, 1]:

[0.06421799 0.07012904 0.         ... 0.         0.05865938
    0.        ]
   [0.21104832 0.27097407 0.         ... 0.         0.
    0.        ] ...

On the other hand, when I apply the following approach based on the Keras documentation (extracting features from an arbitrary intermediate layer with VGG19):

from keras.models import Model
from keras.applications import vgg19

base_model = vgg19.VGG19(weights='imagenet', include_top=False)
model = Model(inputs=base_model.input,
              outputs=base_model.get_layer('block5_conv2').output)
model_output = model.predict(content_image)

this approach produces outputs on a much larger scale:

[ 82.64436     40.37433    142.94958    ...   0.
     27.992153     0.        ]
   [105.935936    91.84446      0.         ...   0.
     86.96397      0.        ] ...

Both approaches use the same network with the same weights and are fed the same numpy array (content_image) as input, yet they produce different outputs. I expect them to produce the same results.
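A quick way to check such a discrepancy, once both outputs are plain numpy arrays, is a direct numerical comparison (the arrays below are placeholders standing in for the real activations; for a (1, 224, 224, 3) input, block5_conv2 yields shape (1, 14, 14, 512)):

```python
import numpy as np

# Placeholder activations standing in for the real block5_conv2 outputs;
# for a (1, 224, 224, 3) input, that layer yields shape (1, 14, 14, 512).
out_first = np.random.rand(1, 14, 14, 512).astype("float32")
out_second = out_first.copy()

# If both graphs share the same weights and input, the activations should
# agree up to floating-point tolerance:
print(np.allclose(out_first, out_second, atol=1e-5))  # True for identical runs
```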

asked by johncasey
  • Different layers, different outputs; that's totally normal. Also, this line does nothing: `model_output = layer.output`. Did you mean `model.output`? I really don't think you should do it that way; who knows how many hidden bugs you invite by hacking around what is expected. – Daniel Möller Jul 18 '18 at 14:35
  • Just use the second (and correct) approach. – Daniel Möller Jul 18 '18 at 14:36
  • @Daniel, replying to your second comment: you cannot compute gradients with the second approach. The first one works well but the second does not. I wonder what the root cause is. – johncasey Jul 18 '18 at 14:50
  • @Daniel, replying to your first comment: both approaches take the same layer's output, block5_conv2; the layer is referenced in both code blocks. – johncasey Jul 18 '18 at 14:51
  • @johncasey could you please add the code you use to evaluate and print the `model_output` in the first approach? – today Jul 18 '18 at 14:58
  • @today, the second approach produces a numpy array, so I just run `print(model_output)`. In the first approach it is a Tensor, which is why I use the following code: `init = tf.global_variables_initializer(); with tf.Session() as sess: sess.run(init); v1 = sess.run(model_output); print(v1)` – johncasey Jul 18 '18 at 15:26
  • @johncasey Would you please confirm that the issue was caused by creating a new session as I mentioned in my answer? – today Jul 19 '18 at 06:05

1 Answer


I think you would get the same result if you use the session (implicitly) created by Keras in your first approach:

sess = K.get_session()
with sess.as_default():
    output = model_output.eval()
    print(output)

I think by creating a new session and running `init = tf.global_variables_initializer()` followed by `sess.run(init)`, you are re-initializing all variables and thereby overwriting the pretrained ImageNet weights that Keras loaded. In general, don't create a new session; use the session created by Keras instead (unless you have a good reason to do otherwise).
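The failure mode can be sketched with a toy analogy in plain Python (illustrative only; the dict and function below are stand-ins invented for this sketch, not the TensorFlow API): loading pretrained weights assigns values to the session's variables, and running the global initializer afterwards overwrites them with fresh random values.

```python
import random

random.seed(42)  # for a reproducible sketch

# Toy stand-in for a session's variable store. "Loading imagenet weights"
# corresponds to filling it with trained values.
variables = {"block5_conv2/kernel": 0.7}  # pretend pretrained value

def fake_global_variables_initializer(store):
    # Re-initializing assigns fresh random values, clobbering whatever
    # was loaded before (this mimics the effect of sess.run(init) above).
    for name in store:
        store[name] = random.uniform(-1.0, 1.0)

loaded_value = variables["block5_conv2/kernel"]
fake_global_variables_initializer(variables)  # the destructive step
print(variables["block5_conv2/kernel"] == loaded_value)  # almost surely False
```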

answered by today