
I'm using a CNN to train and test on images of seeds. I want to know:

  • What features are being extracted at each layer?
  • Is there any way to represent it in a graphical or image format?
  • How do I define my classifier to extract only specific features?

    from keras.preprocessing.image import ImageDataGenerator
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D
    from keras.layers import Activation, Dropout, Flatten, Dense
    from keras import backend as K
    
    
    # dimensions of our images.
    img_width, img_height = 150, 150
    
    train_data_dir = 'Train_Walnut_Seed/train'
    validation_data_dir = 'Train_Walnut_Seed/validation'
    nb_train_samples = 70
    nb_validation_samples = 9
    epochs = 50
    batch_size = 16
    
    if K.image_data_format() == 'channels_first':
            input_shape = (3, img_width, img_height)
    else:
            input_shape = (img_width, img_height, 3)
    
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    
    model.add(Conv2D(32, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    
    model.add(Conv2D(64, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    
    model.add(Flatten())
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    
    model.compile(loss='binary_crossentropy',
            optimizer='rmsprop',
            metrics=['accuracy'])
    
    # this is the augmentation configuration we will use for training
    train_datagen = ImageDataGenerator(
            rescale=1. / 255,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)
    
    # this is the augmentation configuration we will use for testing:
    # only rescaling
    test_datagen = ImageDataGenerator(rescale=1. / 255)
    
    train_generator = train_datagen.flow_from_directory(
            train_data_dir,
            target_size=(img_width, img_height),
            batch_size=batch_size,
            class_mode='binary')
    
    validation_generator = test_datagen.flow_from_directory(
            validation_data_dir,
            target_size=(img_width, img_height),
            batch_size=batch_size,
            class_mode='binary')
    
    model.fit_generator(
            train_generator,
            steps_per_epoch=nb_train_samples // batch_size,
            epochs=epochs,
            validation_data=validation_generator,
            validation_steps=nb_validation_samples // batch_size)
    
    model.save('first_try_walnut.h5')
    

The above code trains the classifier using a CNN. How can I visually represent the output of each layer while training? Also, how do I export my trained model to a protocol buffer (.pb) file so I can use it in my Android project?

2 Answers


I believe the best way, or at least the best way I know of, to extract useful features would be to use an autoencoder.
Check out this article from the Keras blog.
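For illustration, a minimal convolutional autoencoder along the lines of that post might look something like this (the input size and layer widths here are my own assumptions, chosen so the pooling/upsampling steps round-trip cleanly; they are not taken from the article):

    from keras.models import Model
    from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
    
    # 128x128 input so that two 2x2 poolings followed by two 2x2 upsamplings
    # reproduce the original spatial size exactly
    inp = Input(shape=(128, 128, 3))
    
    # Encoder: compress the image into a small feature map
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    
    # Decoder: reconstruct the image from that feature map
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)
    
    autoencoder = Model(inp, decoded)
    autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
    # Train with the images as both input and target, e.g.
    # autoencoder.fit(x_train, x_train, epochs=50, batch_size=16)
    # The encoder half, Model(inp, encoded), can then serve as a feature extractor.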

Cheers,
Gabriel

Gabriel Bercea
  • Maybe you are talking about getting a clearer output with no loss or less loss, but I want to check what output my classifier gives at each layer when I give it an image as input. – Avinash Nera Jan 29 '18 at 13:33
  • Oh, I am sorry I have misunderstood your question. – Gabriel Bercea Jan 29 '18 at 15:32
  • Do you have any idea in the context of my query? – Avinash Nera Jan 30 '18 at 05:16
  • Yes, but I would have to redirect you to a post, since there is quite a bit to talk about and it is explained in detail there. Check [this](http://blog.christianperone.com/2015/08/convolutional-neural-networks-and-feature-extraction-with-python/) out. – Gabriel Bercea Jan 30 '18 at 06:40

I know this probably isn't an issue anymore, but I just thought I'd add this in case it's useful to someone else. As the features output by a CNN aren't really human-readable, it is difficult to inspect them. One way is to use t-SNE, which gives a visual indication of which embedded representations of the images are close to each other. Another way is to use a 'heat map', which shows in more detail which parts of an image are activating parts of the CNN. This post has a nice explanation of some of these techniques: http://cs231n.github.io/understanding-cnn/
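As a concrete starting point, here is a rough sketch of one of the simpler techniques from that page: running a single image through the trained model from the question and plotting the activation maps of each convolutional layer. The image path is a placeholder and channels-last data format is assumed:

    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Model, load_model
    from keras.preprocessing import image
    
    model = load_model('first_try_walnut.h5')
    
    # Build a model that returns the output of every Conv2D layer
    conv_layers = [l for l in model.layers if 'conv' in l.name]
    activation_model = Model(inputs=model.input,
                             outputs=[l.output for l in conv_layers])
    
    # Load and preprocess one seed image the same way as during training
    img = image.load_img('some_seed.jpg', target_size=(150, 150))  # placeholder path
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
    
    activations = activation_model.predict(x)
    
    # Plot the first few feature maps of each conv layer (channels-last assumed)
    for layer, act in zip(conv_layers, activations):
        n_maps = min(8, act.shape[-1])
        fig, axes = plt.subplots(1, n_maps, figsize=(2 * n_maps, 2))
        fig.suptitle(layer.name)
        for i in range(n_maps):
            axes[i].imshow(act[0, :, :, i], cmap='viridis')
            axes[i].axis('off')
    plt.show()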

Getting a classifier to focus on certain features is difficult: either you need to change the network architecture or use image pre-processing to accentuate the features you want the network to focus on. I'm afraid I can't really give more details on that.
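That said, one crude example of the pre-processing route (my own illustration, not something from the answer above) is to pass a custom preprocessing_function to Keras's ImageDataGenerator that accentuates edges before the images reach the network:

    import numpy as np
    from scipy import ndimage
    from keras.preprocessing.image import ImageDataGenerator
    
    def emphasize_edges(img):
        # img is a float array of shape (height, width, channels)
        gray = img.mean(axis=-1)
        edges = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
        edges = edges / (edges.max() + 1e-8)           # edge strength in [0, 1]
        weight = 0.5 + 0.5 * edges[..., np.newaxis]    # keep flat regions visible
        return img * weight                            # same shape as the input
    
    # Drop-in replacement for the training generator from the question
    train_datagen = ImageDataGenerator(
            rescale=1. / 255,
            preprocessing_function=emphasize_edges,
            shear_range=0.2,
            zoom_range=0.2,
            horizontal_flip=True)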

William Smith