I have a prototxt like this:

name: "CaffeNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 100
    mean_file: []
  }
  # ... (rest of the layer truncated)
}
but my images in the lmdb are 100x200, and the only way to make this net work is to set crop_size to 100. Is Caffe ruining my images by cropping them from the center? Is there any way to fix it? What if I remove crop_size altogether?

PS: If I remove crop_size I will get

Source param shape is 4096 3072 (12582912); target param shape is 4096 1024 (4194304)

And I saw this question, but there is no field to put my calculation for each layer's output; we only define num_output for fully connected layers.

Although calculating each layer's output is not so hard with:

output_net = (input_net + (2*pad) - kernel_size) / (stride+1) * num_output (or filters)

data(100X200X3)
conv1(((100X200X3)+(2*0)-11)/(4+1)) * 96 = ((60000 -11) /5) * 96 = 1151788.8
pool1((input_net+(2*0)-3)/(2+1))         = (1151788.8 -3) /3 = 383928.6
conv2((input_net+(2*2)-5)/(1+1)) * 256   = ((383928.6 + (4) -5) /5) * 256 = ((383928.6 + -1) /5) * 256 = 19657093.12
pool2((input_net+(2*0)-3)/(2+1))         = (19657093.12 -3) /3 = 6552363.373333333
conv3((input_net+(2*1)-3)/(1+1)) * 384   = ((6552363.373333333 + (2) -5) /2) * 384 = 3276180.186666667 * 384 = 1258053191.679999936
conv4((input_net+(2*1)-3)/(1+1)) * 384   = ((1258053191.679999936 + (2) -5) /2) * 384 = 629026594.339999968 * 384 = 241546212226.559987712
conv5((input_net+(2*1)-3)/(1+1)) * 256   = ((241546212226.559987712 + (2) -5) /2) * 256 = 120773106111.779993856 * 256 = 3.091791516×10¹³

How can I proceed from here and correct the network? And Caffe said 12582912, but that is not what my calculation gives, if I've calculated it right.
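For comparison (not part of the original question): Caffe computes each spatial dimension separately rather than one running element count, using floor((in + 2*pad - kernel) / stride) + 1 for convolutions and the ceiling variant for pooling; channels come from num_output. A minimal sketch walking a 100x200 input through the CaffeNet convolution stack, with kernel/stride/pad values taken from the stock CaffeNet definition:

```python
import math

def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output size: floor division
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride=1, pad=0):
    # Caffe pooling output size: ceiling division
    return int(math.ceil((size + 2 * pad - kernel) / float(stride))) + 1

# track (height, width) through CaffeNet for a 100x200 input
h, w = 100, 200
h, w = conv_out(h, 11, 4), conv_out(w, 11, 4)        # conv1 -> 96 x 23 x 48
h, w = pool_out(h, 3, 2), pool_out(w, 3, 2)          # pool1 -> 96 x 11 x 24
h, w = conv_out(h, 5, 1, 2), conv_out(w, 5, 1, 2)    # conv2 -> 256 x 11 x 24
h, w = pool_out(h, 3, 2), pool_out(w, 3, 2)          # pool2 -> 256 x 5 x 12
h, w = conv_out(h, 3, 1, 1), conv_out(w, 3, 1, 1)    # conv3 (conv4, conv5 identical)
h, w = pool_out(h, 3, 2), pool_out(w, 3, 2)          # pool5 -> 256 x 2 x 6
print(256 * h * w)                                   # flattened input to fc6
```

With these formulas, a 100x200 input leaves pool5 at 256x2x6 = 3072 inputs to fc6, while a 100x100 crop leaves 256x2x2 = 1024, which appear to be exactly the two numbers in the shape-mismatch error above.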

malloc
  • you are trying to make a rectangle (100x200) be a square (100x100). I'm afraid brute force does not seem like the right approach here. Why don't you re-design your net to really fit your inputs? – Shai Jun 27 '17 at 11:17
  • Alternatively, convert the top fully connected layers into convolutional ones (see [this tutorial](https://github.com/BVLC/caffe/blob/master/examples/net_surgery.ipynb)) and then add a global pooling layer on top. – Shai Jun 27 '17 at 11:20
  • @Shai, thank you sir, I really appreciate your answers and contribution. I'm going to read that tutorial. Can't I use CaffeNet for my purpose? I've changed everything I could in that net. – malloc Jun 27 '17 at 13:10
  • @Shai, I've read that tutorial before, and a tutorial from Christopher Bourez's blog, and I've implemented their code. They design the filters, but isn't the network supposed to learn those? If I remove the fully connected layers, then how would the network classify without them? I've visualized my network before, and I think the only thing I can do is remove or add some layers, so does a network without fully connected layers work fine? – malloc Jun 27 '17 at 15:39
  • the tutorial doesn't show you how to **remove** layers, but rather how to **replace** a trained fully connected layer with an **equivalent** convolutional layer. If you do not change the input size of your net, you end up with an **identical** net – Shai Jun 27 '17 at 15:45
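As a sketch of what the last two comments suggest (layer names here are hypothetical, and the kernel_size must equal pool5's spatial output, which is 6x6 in the stock 227x227 CaffeNet; it differs for other input sizes): fc6 becomes a convolution covering the whole pool5 map, and a global pooling layer afterwards collapses any remaining spatial extent to 1x1, so a rectangular input no longer breaks the fully connected shapes.

```
# fc6 rewritten as a convolution (hypothetical names; kernel_size must
# match pool5's spatial output -- 6 for the stock 227x227 CaffeNet)
layer {
  name: "fc6-conv"
  type: "Convolution"
  bottom: "pool5"
  top: "fc6-conv"
  convolution_param {
    num_output: 4096
    kernel_size: 6
  }
}
# global average pooling collapses whatever spatial size remains to 1x1
layer {
  name: "global_pool"
  type: "Pooling"
  bottom: "fc6-conv"
  top: "global_pool"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}
```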

0 Answers