I have a (1, 224, 224, 3) sized numpy array named content_image. That is the size of the VGG network input. When I pass content_image to the input of the VGG network as illustrated below:
from keras import backend as K
from keras.applications import vgg19

model = vgg19.VGG19(input_tensor=K.variable(content_image), weights='imagenet', include_top=False)
for layer in model.layers:
    if layer.name == 'block5_conv2':
        model_output = layer.output  # symbolic tensor of the block5_conv2 activations
this seems to produce outputs in the [0, 1] range:
[0.06421799 0.07012904 0. ... 0. 0.05865938 0. ]
[0.21104832 0.27097407 0. ... 0. 0. 0. ] ...
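(The snippet above only yields a symbolic tensor, so the values were read by evaluating it, roughly as in this sketch; the variable name features and the use of K.eval with the TensorFlow backend are my assumptions, not part of the snippet itself:)
from keras import backend as K

# Sketch only: evaluate the symbolic block5_conv2 tensor against the backend
# session to obtain concrete activation values (assumes the TensorFlow backend).
features = K.eval(model_output)
print(features.shape)  # (1, 14, 14, 512) for a 224x224 input
print(features[0, 0])  # prints rows of activation values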
On the other hand, when I apply the following approach based on the Keras documentation (extracting features from an arbitrary intermediate layer with VGG19):
from keras.models import Model
from keras.applications import vgg19

base_model = vgg19.VGG19(weights='imagenet', include_top=False)
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block5_conv2').output)
model_output = model.predict(content_image)
This approach seems to produce different outputs:
[ 82.64436 40.37433 142.94958 ... 0. 27.992153 0. ]
[105.935936 91.84446 0. ... 0. 86.96397 0. ] ...
Both approaches use the same network with the same weights and are given the same numpy array (content_image) as input, yet they produce different outputs. I would expect them to produce the same results.
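For reference, this is roughly how I compare the two results; approach1_features and approach2_features are placeholder names for the evaluated tensor from the first snippet and the predict() result from the second one:
import numpy as np

# Placeholder names: approach1_features is the evaluated block5_conv2 tensor
# from the first snippet, approach2_features is model.predict(content_image)
# from the second snippet.
print(np.allclose(approach1_features, approach2_features))  # False
print(approach1_features.max(), approach2_features.max())   # roughly [0, 1] vs. values in the hundreds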