
I'm trying to train a VGG16 model following a video guide on YouTube.

I copied the code given by the instructor and then tried to train the model on some images available on my system. I have uploaded a few of the images here only as a demonstration for the reader.

Summary:
I tried to change the dataset for VGG16 and train it on my own dataset. VGG16 uses IMAGE_SIZE = [224, 224] and I don't know the size of the images that I have! Could this be the problem?
I have uploaded some of the images to OneDrive. When I changed the dataset I ran into multiple errors, one of which was that the kernel kept dying. After that was solved, I had errors related to the images I provided for training and testing. I need help training the model.
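For what it's worth, the original image sizes shouldn't matter, since flow_from_directory resizes everything to target_size. If you want to check them anyway, a minimal sketch (assuming the images sit in class subfolders under Datasets/Train, as in the script below) might be:

from glob import glob
from PIL import Image

# print the dimensions of the first few training images
for path in glob('Datasets/Train/*/*')[:10]:
    with Image.open(path) as img:
        print(path, img.size)  # (width, height) of the original file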

# -*- coding: utf-8 -*-
"""
@author: Krish.Naik
"""
import tensorflow as tf
from keras.models import load_model
from keras.layers import Input, Lambda, Dense, Flatten
from keras.models import Model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
import numpy as np
from glob import glob
import matplotlib.pyplot as plt

# re-size all the images to this
IMAGE_SIZE = [224, 224]

train_path = 'Datasets/Train'
valid_path = 'Datasets/Test'

# add preprocessing layer to the front of VGG
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)

# don't train existing weights
for layer in vgg.layers:
  layer.trainable = False
  
# useful for getting number of classes
folders = glob('Datasets/Train/*')
  
# our layers - you can add more if you want
x = Flatten()(vgg.output)
# x = Dense(1000, activation='relu')(x)
prediction = Dense(len(folders), activation='softmax')(x)

# create a model object
model = Model(inputs=vgg.input, outputs=prediction)

# view the structure of the model
model.summary()

# tell the model what cost and optimization method to use
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])



train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('Datasets/Train',
                                                 target_size = (224, 224),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')

test_set = test_datagen.flow_from_directory('Datasets/Test',
                                            target_size = (224, 224),
                                            batch_size = 32,
                                            class_mode = 'categorical')

'''r=model.fit_generator(training_set,
                         samples_per_epoch = 8000,
                         nb_epoch = 5,
                         validation_data = test_set,
                         nb_val_samples = 2000)'''

# fit the model
r = model.fit_generator(
  training_set,
  validation_data=test_set,
  epochs=5,
  steps_per_epoch=len(training_set),
  validation_steps=len(test_set)
)
# loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('LossVal_loss')  # save before show, otherwise the saved figure is blank
plt.show()

# accuracies
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')

model.save('facefeatures_new_model.h5')

When training the model I got this error:

Error when checking target: expected dense_3 to have shape (2,) but got array with shape (1,)

What should I do to resolve it?

How can I change the shape of the array to match the shape of dense_3?

If someone could make the changes and show how it would be done, I'd be grateful!

  • Please provide the errors that you are facing exactly, otherwise it's a game of guessing. – Timbus Calin Aug 11 '21 at 06:15
  • You need to provide actual error messages and details, else this is too vague and you have people guessing what the problem is. – Dr. Snoopy Aug 11 '21 at 08:37
  • @Kaveh ImageDataGenerator is a Sequence so len is perfectly well defined, nothing of what you said is actually an issue. – Dr. Snoopy Aug 11 '21 at 08:38
  • @Kaveh There is not even a need to specify steps_per_epoch with ImageDataGenerator (and yes, I tested this) – Dr. Snoopy Aug 11 '21 at 09:35
  • it says too many values to unpack... I'll show you, and Jupyter keeps on dying while I'm training the model – Mohammad Awais Aug 12 '21 at 09:10
  • when I change the image source to my own images that I want to train on, then it gives an error... – Mohammad Awais Aug 12 '21 at 09:18
  • after reaching 78/78 the kernel dies in Jupyter... I even created a separate environment for Keras – Mohammad Awais Aug 12 '21 at 09:45
  • the same thing happens in Spyder – Mohammad Awais Aug 12 '21 at 10:25
  • ValueError: You are passing a target array of shape (32, 1) while using as loss `categorical_crossentropy`. `categorical_crossentropy` expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via: ``` from keras.utils import to_categorical y_binary = to_categorical(y_int) ``` Alternatively, you can use the loss function `sparse_categorical_crossentropy` instead, which does expect integer targets. – Mohammad Awais Aug 13 '21 at 01:40
  • @Dr.Snoopy if you could help me? – Mohammad Awais Aug 13 '21 at 04:42

1 Answer


So there were multiple problems I was facing, one of which was that the kernel kept dying. I wasn't sure what caused it, so I tried every possible solution online, and after searching for days I finally found what fixed it for me: the training set contained 2 classes, but the test set contained only 1 class. I changed the test set so that it also had 2 classes. Doing this stopped the kernel from dying and resolved the error I was getting in the code.
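As a quick sanity check (a minimal sketch, assuming the same training_set and test_set generators defined above), you can confirm that both generators see the same classes before training:

# Both generators should report the same class mapping. If the Test folder has
# only one class subfolder, the targets come out with shape (batch, 1) while the
# final softmax layer expects (batch, 2), which produces the error above.
print(training_set.class_indices)  # e.g. {'class_a': 0, 'class_b': 1}
print(test_set.class_indices)      # should match the training mapping
print(training_set.num_classes, test_set.num_classes)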