Questions tagged [vgg-net]

A kind of convolutional neural network consisting of 16 or 19 layers, often used with weights pre-trained on the ImageNet dataset. While the model was originally created for image classification, its convolutional part can be used for a variety of purposes. Use this tag for questions specific to this CNN architecture.

The name VGG stands for Visual Geometry Group (Oxford University), authors of the original paper.

The model consists of a convolutional part (several convolution and max- or average-pooling layers) and several fully-connected layers on top of it. Small (3x3) convolution filters are used.
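
For illustration, a minimal Keras sketch (assuming the keras.applications module) that loads the 16-layer variant with ImageNet weights, with and without the fully-connected top:

    from keras.applications.vgg16 import VGG16

    # Full 16-layer model: stacks of 3x3 convolutions with max-pooling, then three fully-connected layers.
    model = VGG16(weights="imagenet")
    model.summary()

    # Convolutional part only: include_top=False drops the fully-connected layers,
    # so the network can be reused as a generic feature extractor.
    conv_base = VGG16(weights="imagenet", include_top=False)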

See visual representation below (taken from this answer):

[figure: VGG network architecture diagram]

Model applications

  1. Image classifier (Tensorflow).
  2. Image segmentation (Keras).
  3. Image style transfer (Keras).
471 questions
0
votes
2 answers

Last fc layers in VGG16

The VGG16 architecture has input: 224x224x3 images. I want to have 48x48x3 inputs, but to do this in Keras we remove the last fc layers, which have 4096 neurons each. Why do we have to do this? And is it needed to add another size of fc layers for this…
christk
  • 834
  • 11
  • 23
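
A minimal sketch of that setup, assuming Keras; the size of the new fully-connected layer and the number of classes below are illustrative, not taken from the question:

    from keras.applications.vgg16 import VGG16
    from keras.layers import Flatten, Dense
    from keras.models import Model

    # The original 4096-unit fc layers expect the 7x7x512 tensor produced by 224x224 inputs,
    # so for 48x48x3 inputs the top is dropped and a new, smaller head is added.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(48, 48, 3))
    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)           # size of the new fc layer is a free choice
    outputs = Dense(10, activation="softmax")(x)   # hypothetical number of classes
    model = Model(inputs=base.input, outputs=outputs)
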
0
votes
0 answers

Why is the number of images reduced after obtaining the bottleneck features?

I am trying to build a simple 5-class object detector by extracting the bottleneck features using a pre-trained VGG16 (trained on ImageNet). I have 10000 images for training (2000 for each class) and 2500 for testing (500 for each class). However,…
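
One common cause of such a mismatch (a guess, not confirmed by the truncated question) is an inexact steps count when the bottleneck features are generated in batches; a sketch of computing it, assuming Keras' predict_generator:

    import math

    n_train = 10000      # 2000 images x 5 classes
    batch_size = 32      # hypothetical batch size

    # Integer division silently drops the last partial batch, producing fewer bottleneck
    # features than images; rounding up (and trimming any wrap-around surplus) avoids that.
    steps = math.ceil(n_train / batch_size)
    # bottleneck_features = conv_base.predict_generator(generator, steps=steps)[:n_train]
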
0
votes
1 answer

Keras Output Dimensions of VGG19 Conv4-1 Layer doesn't match up with model output

    import numpy as np
    from PIL import Image
    from keras.preprocessing import image
    from keras.applications.vgg19 import preprocess_input

To create the VGG19 model I use:

    img = Input(shape=(256,256,3))
    vgg = VGG19(weights="imagenet")
    vgg.outputs =…
GRS
  • 2,807
  • 4
  • 34
  • 72
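
A minimal sketch of taking the output of a single named layer from the pre-trained VGG19, assuming Keras' applications module (block4_conv1 is Keras' name for conv4-1):

    from keras.applications.vgg19 import VGG19
    from keras.models import Model

    vgg = VGG19(weights="imagenet", include_top=False, input_shape=(256, 256, 3))

    # Sub-model that stops at conv4-1; for a 256x256 input this layer outputs a
    # 32x32x512 tensor (three 2x2 poolings happen before block 4).
    conv4_1 = Model(inputs=vgg.input, outputs=vgg.get_layer("block4_conv1").output)
    print(conv4_1.output_shape)   # (None, 32, 32, 512)
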
0
votes
1 answer

What happens if I set the input size to 32x32 for MNIST?

I want to train MNIST on VGG16. The MNIST image size is 28*28 and I set the input size to 32*32 in the Keras VGG16. When I train I get good metrics, but I'm not sure what really happens. Is Keras filling in with empty space or is the image being expanded…
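
Keras does not resize the data by itself just because the model's input size says 32x32; how the 28x28 digits reach 32x32 depends on the preprocessing. A sketch of the two usual options, assuming numpy and keras.datasets:

    import numpy as np
    from keras.datasets import mnist

    (x_train, _), _ = mnist.load_data()               # x_train: (60000, 28, 28)

    # Option 1: zero-pad 28x28 up to 32x32 (empty space around the digit).
    x_pad = np.pad(x_train, ((0, 0), (2, 2), (2, 2)), mode="constant")

    # Option 2: interpolate to 32x32 (the digit itself is stretched), e.g. with
    # tf.image.resize or PIL, instead of padding.

    # VGG16 expects 3 channels, so the single grey channel is repeated.
    x_rgb = np.repeat(x_pad[..., None], 3, axis=-1)   # (60000, 32, 32, 3)
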
0
votes
2 answers

How to correctly train VGG16 Keras

I'm trying to retrain VGG16 to classify Lego images. However, my model has low accuracy (around 20%). What am I doing wrong? Maybe the number of FC layers is wrong, or my ImageDataGenerator. I have approx. 2k images per class and a total of 6…
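
A common transfer-learning baseline for this kind of task, as a sketch only; the head size, class count and learning rate are assumptions, not values from the question:

    from keras.applications.vgg16 import VGG16
    from keras.layers import Flatten, Dense
    from keras.models import Model
    from keras.optimizers import Adam

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:        # freeze the pre-trained convolutional part first
        layer.trainable = False

    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)
    outputs = Dense(6, activation="softmax")(x)   # hypothetical class count
    model = Model(base.input, outputs)

    # A small learning rate matters, especially once layers of the base are unfrozen later.
    model.compile(optimizer=Adam(lr=1e-4), loss="categorical_crossentropy", metrics=["accuracy"])
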
0
votes
0 answers

VGGnet is not learning while training

VGGNet is not learning while fine-tuning. I trained a VGGNet 16-layer model on ECG data. After that I designed a new model taking the conv_base of VGGNet with fully connected layers on top of it. The new model is not learning at all. It is showing the same…
M. Jangra
  • 29
  • 3
0
votes
1 answer

Python: pre-trained VGG-face model for face anti-spoofing problem

I'm trying to solve a face anti-spoofing problem by using a pre-trained model (e.g., VGG trained on ImageNet). Where do I need to retrieve the features? After which layer? More specifically, is it enough to change the output of the last fully connected…
Aj.h
  • 1
  • 1
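
A sketch of pulling features from one of the fully-connected layers of a pre-trained VGG16 (ImageNet weights here, since VGG-Face is not bundled with Keras); which layer gives the best anti-spoofing features is exactly the open question:

    from keras.applications.vgg16 import VGG16
    from keras.models import Model

    base = VGG16(weights="imagenet")

    # "fc2" is the second 4096-unit fully-connected layer; its activations are a common
    # choice of deep feature, but earlier layers can be tapped the same way.
    feature_extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)
    # features = feature_extractor.predict(preprocessed_faces)   # shape (n, 4096)
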
0
votes
0 answers

Large input image limitations for VGG19 transfer learning

I'm using TensorFlow (via the Keras API) in Python 3. I'm using the VGG19 pre-trained network to perform style transfer on an Nvidia RTX 2070. The largest input image that I have is 4500x4500 pixels (I have removed the fully-connected layers…
wandadars
  • 1,113
  • 4
  • 19
  • 37
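
A rough back-of-the-envelope estimate shows why inputs of this size are problematic; the sketch below counts only the float32 activations of the first convolutional block of VGG19:

    # block1 has 64 filters and leaves the 4500x4500 spatial size unchanged.
    height = width = 4500
    filters = 64
    bytes_per_float = 4

    block1_bytes = height * width * filters * bytes_per_float
    print(block1_bytes / 1024**3)   # ~4.8 GiB for one feature map, before gradients or later blocks

That is already most of the 8 GB on an RTX 2070, which is why very large style-transfer inputs are usually tiled or downscaled.
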
0
votes
0 answers

How to load keras CNN model with fully connected layers and convert to FCN?

I have a VGG16-style model (i.e. a CNN with fully connected layers) saved as a json file and weights as an h5 file. I want to load this model and convert it to an FCN (Fully Convolutional Network) which can accept variable-size RGB images as inputs. From my…
user3731622
  • 4,844
  • 8
  • 45
  • 84
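
The usual recipe, sketched under assumptions (file names and layer names are placeholders, and a 7x7x512 tensor is assumed in front of the first Dense layer, as in the stock VGG16):

    from keras.models import model_from_json, Model
    from keras.layers import Input, Conv2D

    # Load the saved architecture and weights (file names are placeholders).
    with open("model.json") as f:
        cnn = model_from_json(f.read())
    cnn.load_weights("model.h5")

    # Rebuild on an input with no fixed spatial size, reusing the convolutional layers as-is.
    inp = Input(shape=(None, None, 3))
    x = inp
    for layer in cnn.layers[1:]:
        if "flatten" in layer.name:   # stop before the fully-connected head
            break
        x = layer(x)

    # A 4096-unit Dense layer that saw a 7x7x512 input becomes a 7x7 convolution with 4096
    # filters; its (25088, 4096) weight matrix is reshaped into a (7, 7, 512, 4096) kernel.
    dense = cnn.get_layer("fc1")                  # layer name assumed, as in keras' own VGG16
    w, b = dense.get_weights()
    fc1_conv = Conv2D(4096, (7, 7), activation="relu", name="fc1_conv")
    x = fc1_conv(x)
    fc1_conv.set_weights([w.reshape(7, 7, 512, 4096), b])

    fcn = Model(inp, x)   # the remaining fc layers become 1x1 convolutions in the same way
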
0
votes
1 answer

TensorFlow API Slim: How to set checkpoint_exclude_scopes and output_node_names for VGG-Net 16?

I am currently trying to train classification networks using the TensorFlow models API (https://github.com/tensorflow/models). After creating TFRecords for my data set (stored in research/slim/data), I train the networks using the following command: python…
0
votes
1 answer

VGG bottleneck features + LSTM in keras

I have pre-stored bottleneck features (.npy files) obtained from VGG16 for around 10k images. Training an SVM classifier (3-class classification) on these features gave me an accuracy of 90% on the test set. These images are obtained from videos. I…
ravvv
  • 113
  • 1
  • 8
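
A sketch of wiring pre-computed per-frame features into an LSTM, assuming each stored .npy file holds one feature vector per frame and that every clip is padded or truncated to a fixed length; all sizes are assumptions:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    timesteps = 30       # frames per clip (assumption)
    feature_dim = 512    # per-frame VGG16 feature size (assumption)

    # x: (num_clips, timesteps, feature_dim) built by stacking the stored .npy features;
    # y: (num_clips, 3) one-hot labels for the 3 classes. Placeholder arrays here.
    x = np.zeros((100, timesteps, feature_dim), dtype="float32")
    y = np.zeros((100, 3), dtype="float32")

    model = Sequential([
        LSTM(128, input_shape=(timesteps, feature_dim)),
        Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=1, batch_size=16)
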
0
votes
1 answer

Pre-trained VGG-16 in MATLAB and PyTorch Same?

I have been trying to figure this out for some time but I am still a bit unsure. Does the PyTorch pre-trained VGG-16 (torchvision model) have exactly the same weights as the MATLAB pre-trained VGG-16?
0
votes
1 answer

What is wrong with this code? Why is the loss not reducing?

I have implemented VGG-16 in TensorFlow. VGG-16 is a reasonably deep network, so the loss should definitely reduce, but in my code it's not reducing. However, when I run the model on the same batch again and again, the loss does reduce. Any idea why such…
0
votes
1 answer

How to feed features extracted from frames of videos into an LSTM?

I want to do some anomaly detection based on a thousand videos. I have extracted the features of all frames of all videos (using VGG16). Now, I have everything in several files, one per video. When I load a file from my disk,…
Mourad Qqch
  • 373
  • 1
  • 5
  • 16
0
votes
2 answers

Keras implementation of VGG19 net has 26 layers. How?

A VGG-19 network has 25 layers as shown here. But if I check the number of layers in the Keras implementation, it shows 26 layers. How? model = VGG19(); len(model.layers) gives output 26.
Nagabhushan S N
  • 6,407
  • 8
  • 44
  • 87
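
The discrepancy is visible by listing the layer names: Keras counts the InputLayer, which architecture diagrams usually omit. A minimal sketch, assuming keras.applications:

    from keras.applications.vgg19 import VGG19

    model = VGG19()
    print(len(model.layers))      # 26
    for layer in model.layers:
        print(layer.name)
    # input_1, then 16 conv + 5 pooling + flatten + fc1 + fc2 + predictions = 25 layers,
    # so the InputLayer accounts for the 26th entry.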