I am trying to implement deepdream in C++ with caffe (I want to run it on Android). `googlenet` requires an input of shape 224×224×3. The deepdream IPython notebook calls `src.reshape(1,3,h,w)`. Does this mean that only the input blob is reshaped, or is the new shape propagated through the network? I tried calling `net.Reshape()` in C++ and it resulted in:

F0307 01:27:24.529654 31857 inner_product_layer.cpp:64] Check failed: K_ == new_K 
(1024 vs. 319488) Input size incompatible with inner product parameters.

Shouldn't the network be reshaped too? If not, what is the implication of reshaping only the input blob? I am new to deep learning, so forgive me if this seems trivial.

Shai
nomem

1 Answer


Changing the shape of the input requires reshaping the entire net. Alas, some layer types do not take kindly to being reshaped. Specifically, the "InnerProduct" layer: the number of trainable parameters of an inner product layer depends on the exact input shape (as well as the output shape). Therefore a net containing an "InnerProduct" layer cannot simply be reshaped.
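For intuition, here is a toy sketch (plain Python, not caffe's actual code) of why the check fails: an "InnerProduct" layer stores a weight matrix of shape `(num_output, K_)`, where `K_` is the flattened `C*H*W` of its input blob, so once the input blob grows, `K_` no longer matches the trained weights. The larger spatial dims below are hypothetical, chosen only because they happen to reproduce the numbers in the error message above.

```python
# Toy illustration: the flattened size K_ that an InnerProduct layer sees.
def fc_input_size(c, h, w):
    """K_ = C * H * W of the blob feeding the fully-connected layer."""
    return c * h * w

# With the canonical 224x224 input, googlenet's global pooling emits a
# 1024x1x1 blob, so the trained FC weights expect K_ == 1024:
k_original = fc_input_size(1024, 1, 1)    # -> 1024

# After reshaping the input larger, the blob reaching the FC layer keeps a
# spatial extent (hypothetical 13x24 here), and K_ explodes:
k_reshaped = fc_input_size(1024, 13, 24)  # -> 319488, "Input size incompatible"

print(k_original, k_reshaped)
```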

You can use the methods described in the "net surgery" example to convert the inner product layers into equivalent convolutional layers (which can be reshaped).
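The core of that conversion can be sketched in plain NumPy (a minimal illustration of the idea, not the actual net-surgery code): an inner product layer with weights of shape `(num_output, C*H*W)` computes the same thing as a convolution whose kernel covers the entire input. The conv version can be reshaped, because on a larger input it simply slides, producing a spatial map of outputs instead of failing a shape check. All dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, num_output = 4, 3, 3, 5          # toy dimensions, not googlenet's

x = rng.standard_normal((C, H, W))        # input blob (single image)
fc_weights = rng.standard_normal((num_output, C * H * W))

# InnerProduct forward pass: flatten the blob, then matrix-vector product.
fc_out = fc_weights @ x.reshape(-1)

# The same parameters viewed as a conv kernel of shape (num_output, C, H, W),
# applied at the single valid position on an H x W input: an elementwise
# multiply-and-sum per output channel.
conv_weights = fc_weights.reshape(num_output, C, H, W)
conv_out = np.array([(k * x).sum() for k in conv_weights])

# Both layers produce identical outputs on the original input shape.
assert np.allclose(fc_out, conv_out)
```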

Shai
  • Should I just propagate the reshape to the relevant layers? In `googlenet` only the last layer is an inner product layer, and for the purpose of deepdream I only need to go through some inception layers. – nomem Mar 06 '17 at 22:21
  • I looked at the source code of `Net::Reshape()` and it just loops through all the layers. For the purpose of deepdream I only forward to a specific layer and backward from that layer, so reshaping just those layers seems reasonable to me. Can you elaborate on why not? – nomem Mar 06 '17 at 22:28
  • @lnman you cannot reshape part of the net, because changing the input shape affects **all** of the net. If there are layers you are not going to use, simply remove them: there is no point keeping them and not reshaping them. – Shai Mar 08 '17 at 10:01
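One practical way to follow that advice (an untested sketch, not a drop-in file): trim the deploy prototxt so it ends at the deepest layer deepdream needs and delete everything after it, including the "InnerProduct" classifier. Then `Net::Reshape()` only ever visits layers that reshape cleanly. The layer names mentioned below are from the standard BVLC GoogLeNet deploy file; the input dims are arbitrary.

```
# Trimmed deploy.prototxt sketch: input dims chosen freely.
input: "data"
input_shape { dim: 1  dim: 3  dim: 320  dim: 480 }

# ... layers from conv1 up to the target inception block,
#     copied unchanged from the original deploy file ...

# (everything after the target layer removed: no pool5,
#  no loss3/classifier InnerProduct, no softmax)
```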