
I am working on a classification problem in a project. The specific aspect of my problem is that I have to use two different types of data to solve it. My classes are Car, Pedestrian, Truck and Cyclist. My dataset is composed of:

• Images coming from the camera: these are RGB images.

• Images obtained by projecting the Lidar point cloud (just 3D points) onto the 2D camera plane and encoding the pixels with depth and reflectance.

I have already managed to use both modalities to perform the classification task by using the Concatenate function of the Keras API.

But what I would like to do is use a more powerful CNN such as VGG16. I used the pre-trained model and froze all layers except the last 4. I read the grayscale images as RGB because the pre-trained VGG16 model needs a 3-channel input (one way to do this is sketched after the error below). Here is my code:

from keras.applications import VGG16
from keras.layers import Concatenate, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model

# Load the VGG model
# Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
# Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

# Freeze all layers except the last 4 in each branch
for layer in vgg_conv_D.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv_C.layers[:-4]:
    layer.trainable = False

# Fuse the two feature maps and add the classification head
mergedModel = Concatenate()([vgg_conv_C.output, vgg_conv_D.output])
mergedModel = Dense(units=1024)(mergedModel)
mergedModel = BatchNormalization()(mergedModel)
mergedModel = Activation('relu')(mergedModel)
mergedModel = Dropout(0.5)(mergedModel)
mergedModel = Dense(units=4, activation='softmax')(mergedModel)

fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], mergedModel)

The last line gives the following error:

ValueError: The name "block1_conv1" is used 2 times in the model. All 
layer names should be unique.

Does anyone know how to handle this? Simply put, I just want to use VGG16 on both types of images, get the feature vector for each modality, concatenate them, and add fully connected layers on top to predict the image's class. It works with non-pre-trained models. I can provide that code if needed.
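For reference, here is one possible way to read a single-channel image as three channels; this helper is purely illustrative and is not the question's actual loading code:

# Illustrative helper: read a single-channel depth/reflectance image as a
# 3-channel array so it matches VGG16's expected input shape.
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def load_gray_as_rgb(path, size=(227, 227)):
    img = load_img(path, color_mode='grayscale', target_size=size)
    arr = img_to_array(img)            # shape (227, 227, 1)
    return np.repeat(arr, 3, axis=-1)  # shape (227, 227, 3), same value in every channel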

Doxcos44

3 Answers


Try this:

#Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_C.layers:
    layer.name = layer.name + str('_C')
#Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape= (227, 227, 3))
for layer in vgg_conv_D.layers:
    layer.name = layer.name + str('_D')

In this way, you'd still be able to use two identical pre-trained networks.
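Combining this rename with the fusion head from the question, a complete sketch could look like the code below. The Flatten layer and the compile settings are illustrative additions that are not in the original code, and on newer tf.keras versions layer.name is read-only (see the last answer):

from keras.applications import VGG16
from keras.layers import Concatenate, Flatten, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model

# Camera branch: suffix every layer name so it cannot clash with the depth branch
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_C.layers:
    layer.name = layer.name + '_C'

# Depth branch: same backbone, different suffix
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_D.layers:
    layer.name = layer.name + '_D'

# Freeze everything except the last 4 layers of each branch, as in the question
for layer in vgg_conv_C.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv_D.layers[:-4]:
    layer.trainable = False

# Fuse the two feature maps and add the classification head from the question
merged = Concatenate()([vgg_conv_C.output, vgg_conv_D.output])
merged = Flatten()(merged)  # illustrative: flatten before the Dense layers
merged = Dense(1024)(merged)
merged = BatchNormalization()(merged)
merged = Activation('relu')(merged)
merged = Dropout(0.5)(merged)
merged = Dense(4, activation='softmax')(merged)

fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], merged)
fused_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])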


As mentioned in the error, ValueError: The name "block1_conv1" is used 2 times in the model. All layer names should be unique. So either use a Siamese (shared-weight) network, or, if you use two separate CNNs, make sure every layer name is unique: copy the network for the second modality and rename its layers.
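If a Siamese (shared-weight) setup is acceptable, a single VGG16 backbone can be applied to both inputs, which avoids the name clash entirely because each layer exists only once. A minimal sketch, with an illustrative head, and with the caveat that both modalities now share convolutional weights, unlike the two independent branches in the question:

from keras.applications import VGG16
from keras.layers import Input, Concatenate, Flatten, Dense
from keras.models import Model

# One shared backbone, frozen except the last 4 layers
backbone = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in backbone.layers[:-4]:
    layer.trainable = False

camera_in = Input(shape=(227, 227, 3))
depth_in = Input(shape=(227, 227, 3))

# Calling the same model on both inputs reuses its layers (and their weights)
camera_feat = Flatten()(backbone(camera_in))
depth_feat = Flatten()(backbone(depth_in))

merged = Concatenate()([camera_feat, depth_feat])
out = Dense(4, activation='softmax')(merged)

siamese_model = Model([camera_in, depth_in], out)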


IStackoverflowAndIKnowThings' solution gives me the error:

AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.

The following worked for me (see this post):

..
for layer in vgg_conv_C.layers:
    layer._name = layer._name + str('_C')
..
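For completeness, a minimal self-contained version of that workaround (this assumes a Keras version where layer.name is a read-only property backed by the private _name attribute, so it relies on an internal detail that may change between versions):

from keras.applications import VGG16

vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

# Rename via the private attribute, since assigning to layer.name raises the AttributeError above
for layer in vgg_conv_C.layers:
    layer._name = layer._name + '_C'
for layer in vgg_conv_D.layers:
    layer._name = layer._name + '_D'

# Sanity check: no layer name appears in both models anymore
assert not set(l.name for l in vgg_conv_C.layers) & set(l.name for l in vgg_conv_D.layers)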