
Hello guys, I could use some advice on whether the approach I employed to apply transfer learning to the ResNet50 model is correct. After reading many articles and resources online, it is hard to say whether the method I adopted is right. I should mention that I am using 500 images/labels (with labels ranging from 0-25) to train my model. Let us first go over the first section of building the model; please find the code below:

from sklearn.model_selection import train_test_split
import numpy as np
from tensorflow.keras.applications import ResNet50

# Split the 500 image/label pairs into train and test sets
X_train, X_test, y_train, y_test = train_test_split(files, labels, test_size=0.2)
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(y_train)
y_test = np.array(y_test)

# Load ResNet50 pretrained on ImageNet, without its classification head
input_t = (224, 224, 3)
resnet = ResNet50(input_shape=input_t, weights='imagenet', include_top=False)

# Freeze every layer so the pretrained weights are not updated
for layer in resnet.layers:
    layer.trainable = False

So I create my train/test splits and instantiate the ResNet50 model, then freeze its layers so that they are not trained. However, it is unclear to me whether I should freeze all the layers or only some of them.
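In case partial freezing is the better option, here is a variant I considered that unfreezes only the last convolutional block (this relies on the 'conv5' name prefix that Keras' ResNet50 uses for its final stage; I am not sure this is the right cut-off for my data):

# Alternative considered: freeze everything except the last convolutional
# block ('conv5...' layers in Keras' ResNet50); the cut-off is a guess
for layer in resnet.layers:
    layer.trainable = layer.name.startswith('conv5')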

Let us now move on to the next section; please find the code below:

import tensorflow

to_res = (224, 224)  # target resolution expected by the ResNet50 base

model = tensorflow.keras.models.Sequential()
# Resize incoming images to 224x224 before they reach the ResNet50 base
model.add(tensorflow.keras.layers.Lambda(lambda image: tensorflow.image.resize(image, to_res)))
model.add(resnet)
model.add(tensorflow.keras.layers.Flatten())
model.add(tensorflow.keras.layers.BatchNormalization())
model.add(tensorflow.keras.layers.Dense(256, activation='relu'))
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.BatchNormalization())
model.add(tensorflow.keras.layers.Dense(128, activation='relu'))
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.BatchNormalization())
model.add(tensorflow.keras.layers.Dense(64, activation='relu'))
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.BatchNormalization())
model.add(tensorflow.keras.layers.Dense(26, activation='softmax'))

# Integer labels (0-25) pair with sparse_categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=32, epochs=5, verbose=1, validation_data=(X_test, y_test))

In this section, I essentially add additional layers on top of the ResNet50 base in order to train them on my data. At the end I use a softmax activation since my labels range from 0-25 (26 classes), and then finish by fitting the model on my data. Please let me know if there is anything you agree or disagree with; any tips or recommendations are also welcome. Thanks for reading.
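For anyone checking the label/loss pairing: the fitted model outputs 26 class probabilities per image, and argmax maps each prediction back to an integer label in 0-25, e.g.:

import numpy as np

# Predictions give 26 class probabilities per image; argmax maps each
# back to an integer label in 0-25, matching the y values
probs = model.predict(X_test)
preds = np.argmax(probs, axis=1)
print(probs.shape, preds[:10])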

highwiz10

2 Answers


I think your dataset is quite small, so you don't need so many fully connected layers. You can also try shuffling the data in your train_test_split.
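For the shuffle suggestion, a minimal sketch: shuffle=True is already scikit-learn's default, and stratify (an extra suggestion here) keeps the 26 classes represented proportionally in both splits:

from sklearn.model_selection import train_test_split

# shuffle=True is the default; stratify keeps each class represented
# proportionally in the train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    files, labels, test_size=0.2, shuffle=True, stratify=labels)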

Trong Van

Unfortunately, a lot of deep learning is based on guessing or on exploiting what others have found successful. It is impossible to tell you exactly what you need, but here are some suggestions:

  1. You are not training your convolutional layers at all by freezing them. This is OK if and only if the pretrained weights form a feature extractor that can actually separate your data (e.g. if your data is similar to ImageNet). I'd suggest taking the output of the headless ResNet and reducing its dimensionality via t-SNE or similar to see how well the classes separate (color-code each label; see the sketch after this list). If the t-SNE plot shows poor separation, you may want to unfreeze some of the ResNet's layers (starting from the last ones) or even all of them, and train them with a smaller learning rate.

  2. You only have 500 images for 26 classes. If your data is balanced, that's about 20 images per class, which is not a lot. Look into augmentation, or perhaps you have a chance to get more data (maybe even some synthetic data?).

  3. Your FC head looks complex. If the feature extractor is doing a good job, there is no need for such a complicated architecture. Try pooling (if the ResNet is not doing that already), flattening, and a single dense output layer (also in the sketch below). If this does not work, you can always add Dropout, BatchNormalization, and more hidden layers.

  4. 5 epochs may be too optimistic; try something larger and use a good early-stopping technique (also sketched below).
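A loose sketch of points 1, 3 and 4, reusing the resnet, X_train and y_train names from your question; the pooling choice, patience and epoch count are assumptions to tune, not a definitive recipe:

import numpy as np
import tensorflow
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Point 1: project the frozen extractor's features to 2D with t-SNE
images = tensorflow.image.resize(X_train, (224, 224))  # as in your Lambda layer
features = resnet.predict(images)                      # shape (N, 7, 7, 2048)
features = features.mean(axis=(1, 2))                  # average pool -> (N, 2048)
embedded = TSNE(n_components=2).fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=y_train, cmap='tab20')
plt.colorbar()
plt.show()

# Points 3 and 4: a much simpler head, trained longer with early stopping
model = tensorflow.keras.models.Sequential([
    tensorflow.keras.layers.Lambda(
        lambda image: tensorflow.image.resize(image, (224, 224))),
    resnet,
    tensorflow.keras.layers.GlobalAveragePooling2D(),
    tensorflow.keras.layers.Dense(26, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam', metrics=['accuracy'])
early_stop = tensorflow.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X_train, y_train, batch_size=32, epochs=50,
          validation_data=(X_test, y_test), callbacks=[early_stop])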

paulgavrikov