So, I have been working on neural style transfer in PyTorch, but I'm stuck at the point where I have to run the input image through a limited number of layers and minimize the style loss. Long story short, I want a way in PyTorch to evaluate the input at different layers of the architecture (I'm using VGG16). I have seen this problem solved very simply in Keras, and I wanted to see whether there is a similar way in PyTorch as well:
from keras.applications.vgg16 import VGG16
from keras.models import Model  # needed to rebuild the truncated model

model = VGG16()
# New model whose output is the activation of an early layer
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
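For context, here is roughly what I have been experimenting with on the PyTorch side: slicing the features Sequential of torchvision's VGG16. This is just a sketch, and the cutoff index 2 is arbitrary (it stops after the first Conv2d + ReLU pair, which doesn't exactly match the Keras layer indexing), so I don't know if this is the idiomatic approach:

import torch
from torch import nn
from torchvision import models

# Load the pretrained VGG16 feature extractor (a plain nn.Sequential).
vgg = models.vgg16(pretrained=True).features.eval()

# Keep only the first few layers, analogous to truncating the model at an
# early layer in the Keras snippet above. The index 2 is arbitrary: it
# keeps the first Conv2d + ReLU pair.
truncated = nn.Sequential(*list(vgg.children())[:2])

# Evaluate a dummy input image at that intermediate layer.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    activation = truncated(x)
print(activation.shape)  # torch.Size([1, 64, 224, 224])

This works for a single cutoff point, but for style transfer I would presumably need activations at several layers at once, which is why I'm asking whether there is a cleaner way.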