
I have a trained network that consists of the following layers: {conv1, pool1, conv2, pool2, conv3, pool3, conv4, pool4, fc5, fc6, output}, where fc denotes fully connected layers and conv denotes convolutional layers.

I need to do feature extraction for some images, and I need to save the features from each layer for later analysis. I am using Lasagne and Theano. I am new to these libraries, so I tried to find sample code or tutorials on this (with Theano/Lasagne), but I could not figure out what I should do on my own.

I would appreciate it if someone could guide me on how to implement feature extraction.

Thank you in advance

Edit: Following gntoni's comments, here is my code:

    feat_all = []
    for layer in layer_list:
        feat = np.zeros_like(lasagne.layers.get_output([self.acnn.cnn[layer]], inputs=img, deterministic=True))
        feat[:] = lasagne.layers.get_output([self.acnn.cnn[layer]], inputs=img, deterministic=True)
        feat_all.append(feat)

In my case, I need to save the features from every layer. I want to write a function like the one we have in Caffe:

    self.net.blobs['data'].data[0] = img
    self.net.forward(end=layer_list[-1])

    feat_all = []
    for layer in layer_list:
        feat = np.zeros_like(self.net.blobs[layer].data[0])
        feat[:] = self.net.blobs[layer].data[0]
        feat_all.append(feat)

However, my trained model was written with Lasagne and Theano, so I have to implement this in Lasagne.

After writing the code above (in Lasagne), I am getting an empty output. I wonder why, and how I can fix it.

Thank you in advance

kadaj13
2 Answers


A convolutional neural network like yours consists of two parts:

The first is the feature extraction part, which in your case consists of the conv-pool layers {conv1, pool1, conv2, pool2, conv3, pool3, conv4, pool4}.

The second is the classification part; in your network, {fc5, fc6, output}.

During training, the first part tries to obtain the best representation of the input data so that the second part can classify it.

So, after training, if you disconnect these two parts, the output of the conv4 layer will give you the features you want.

These features can be used with a different classifier. In fact, many people take an already trained network (e.g. AlexNet), remove the last classification layers, and use the features with their own classification system.
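
For example, here is a minimal sketch of this idea in Lasagne. It assumes (hypothetically) that your layers are stored in a dict called `net` keyed by layer name and that `input_var` is your network's input Theano variable; adapt the names to your own code.

    import theano
    import lasagne

    # Symbolic expression for the activations of the last pooling layer;
    # these activations are the extracted features.
    features_expr = lasagne.layers.get_output(net['pool4'], deterministic=True)

    # Compile a callable that maps a numpy image batch to the feature values.
    extract_features = theano.function([input_var], features_expr)

    # `img` is a numpy array shaped (batch_size, channels, height, width).
    features = extract_features(img)

The resulting `features` array can then be fed to any other classifier (an SVM, for example).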

gntoni
  • Thank you so much. Can you also help me with the implementation part? I cannot find which function I should use in this case. – kadaj13 Jan 16 '17 at 03:07
  • You should have some function like `lasagne.layers.get_output()` getting the values from the last layer. Just make a new function to get the values from `pool4` layer instead. – gntoni Jan 16 '17 at 05:10
  • Thank you so much for your help. I will try it. – kadaj13 Jan 16 '17 at 06:17

Keep in mind that in Lasagne `get_output` returns symbolic Theano expressions, so you cannot use them directly to compute features from a numpy array. Instead, you can compile a Theano function and use it to compute the values. In your case:

layers = [self.acnn.cnn[layer] for layer in layer_list]
feat_fn = theano.function([input_var],
                          lasagne.layers.get_output(layers, deterministic=True))

where input_var is the input tensor to your network. The get_output method can accept multiple layers and Theano functions can have multiple outputs, so you can define a single function to extract all the features. Getting the numerical values is then as simple as:

feat_all = feat_fn(img)
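
If you want the same layer-name-to-array mapping as in your Caffe snippet, you can zip the returned list with `layer_list`. Below is a minimal self-contained sketch; the small network here is only a hypothetical stand-in for your trained model.

    import numpy as np
    import theano
    import theano.tensor as T
    import lasagne

    # Hypothetical stand-in network; replace with your own trained layers.
    input_var = T.tensor4('inputs')
    net = {}
    net['input'] = lasagne.layers.InputLayer((None, 3, 32, 32), input_var=input_var)
    net['conv1'] = lasagne.layers.Conv2DLayer(net['input'], num_filters=16, filter_size=3)
    net['pool1'] = lasagne.layers.MaxPool2DLayer(net['conv1'], pool_size=2)
    net['fc5'] = lasagne.layers.DenseLayer(net['pool1'], num_units=10)

    # One compiled function returning the outputs of all requested layers.
    layer_list = ['conv1', 'pool1', 'fc5']
    outputs = lasagne.layers.get_output([net[name] for name in layer_list],
                                        deterministic=True)
    feat_fn = theano.function([input_var], outputs)

    # The input needs a batch dimension: (batch, channels, height, width).
    img = np.random.rand(1, 3, 32, 32).astype(theano.config.floatX)
    feat_all = dict(zip(layer_list, feat_fn(img)))
    for name, feat in feat_all.items():
        print(name, feat.shape)

Each entry of `feat_all` is a plain numpy array, so you can save it with `np.save` for later analysis.
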
Banus80