
I trained a fully connected (FC) network with an HDF5 data layer, used net surgery to transplant its weights into a convolutional network, and then swapped the data layer for a deploy-style Input layer (a rough sketch of the surgery step follows the two layer definitions), i.e.:

from:

layer {
  name: "layer_data_left"
  type: "HDF5Data"
  top: "data_left"
  top: "labels_left"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "/home/me/Desktop/trainLeftPatches.txt"
    batch_size: 128
  }
}

to:

layer {
  name: "data_left"
  type: "Input"
  top: "data_left"  # must match the bottom blob name the first learned layer expects
  input_param { shape: { dim: 1 dim: 1 dim: 1241 dim: 367 } }
}
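For reference, the surgery step followed Caffe's usual net_surgery pattern of copying inner-product weights into convolution layers. A minimal sketch, assuming placeholder prototxt/caffemodel paths and fc6/fc6-conv-style layer names (not my actual ones):

import caffe

caffe.set_mode_cpu()

# net trained with the HDF5 data layer (inner-product layers) and the
# convolutionalized net, both initialized from the same trained weights
fc_net = caffe.Net('fc_train_val.prototxt', 'fc.caffemodel', caffe.TEST)
conv_net = caffe.Net('conv_deploy.prototxt', 'fc.caffemodel', caffe.TEST)

# map each inner-product layer to its convolutional counterpart
params = {'fc6': 'fc6-conv', 'fc7': 'fc7-conv'}  # placeholder layer names

for fc, conv in params.items():
    # the conv filters are just the FC weights reshaped to
    # (num_output, channels, height, width), so a flat copy suffices
    conv_net.params[conv][0].data.flat = fc_net.params[fc][0].data.flat  # weights
    conv_net.params[conv][1].data[...] = fc_net.params[fc][1].data       # biases

conv_net.save('conv.caffemodel')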

Is there any reason this would run out of memory?

>>> fc_net.forward()
F0729 20:02:02.205382  6821 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0)  out of memory
*** Check failure stack trace: ***
Aborted (core dumped)
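One thing I wonder: a 1 x 1 x 1241 x 367 input pushed through layers that used to be fully connected produces much larger intermediate blobs than the training patches did, so maybe the OOM is real rather than a bug. A quick way to see where the memory goes would be to load the net in CPU mode and print every blob's size (a sketch; the prototxt path is a placeholder):

import caffe

caffe.set_mode_cpu()  # stay off the GPU while only measuring shapes
net = caffe.Net('conv_deploy.prototxt', caffe.TEST)  # placeholder path

total = 0
for name, blob in net.blobs.items():
    nbytes = blob.data.size * 4  # float32
    total += nbytes
    print('%-20s %-22s %8.1f MB' % (name, blob.data.shape, nbytes / 2.0**20))
print('total activations: %.1f MB' % (total / 2.0**20))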

Or is it more likely that I made a mistake somewhere in the surgery or in exchanging the data layers? One cheap check of the surgery is sketched below.
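If it is a surgery mistake, one way to check (independent of the full-size input) would be to feed the convolutionalized net a single input of the original training patch size: with a correct transplant, the output blobs should come back with 1 x 1 spatial extent and match the FC net's prediction on that patch. A sketch, with the patch size and blob name as placeholders:

import numpy as np
import caffe

caffe.set_mode_cpu()
conv_net = caffe.Net('conv_deploy.prototxt', 'conv.caffemodel', caffe.TEST)

patch = np.random.randn(1, 1, 9, 9).astype(np.float32)  # placeholder 9x9 patch
conv_net.blobs['data_left'].reshape(*patch.shape)
conv_net.reshape()  # propagate the new input shape through the net
conv_net.blobs['data_left'].data[...] = patch

out = conv_net.forward()
print({name: v.shape for name, v in out.items()})  # expect 1x1 spatial maps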

Thank you.

imonaboat
