
I have created an lmdb file that contains non-encoded 6-channel images. When I load it into a network in Caffe, the system RAM usage (as seen with the 'top' command) starts at around 10% right after the network is loaded, then keeps increasing until it reaches above 90%. I am using a system with 32 GB RAM; it slows down extremely until the code crashes with the following error:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

Note that this happens even before running a single forward pass.

The size of the lmdb file I'm using is 545 MB.

I've used Python NetSpec to define the network. Here is the relevant code:

import caffe
from caffe import layers as L
from caffe.model_libs import CreateAnnotatedDataLayer  # SSD helper (model_libs.py)

net = caffe.NetSpec()
net.data0, net.label = CreateAnnotatedDataLayer(train_data,
    batch_size=1, train=True, output_label=True,
    label_map_file=label_map_file,
    transform_param=train_transform_param, batch_sampler=batch_sampler)
# With ntop=2 and no slice_point, Caffe splits the 6 channels evenly into two 3-channel blobs.
net.data, net.data_d = L.Slice(net.data0, slice_param={'axis': 1}, ntop=2, name='data_slicer')

Since my lmdb has 6-channel images and the pretrained network expects 3-channel input, I am using a Slice layer to split each image into two 3-channel images that can be fed into two different convolutional layers.
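
For context, here is a minimal sketch of how the two sliced blobs could be consumed downstream. The layer names and parameters (`conv1`, `conv1_d`, `num_output=64`, `kernel_size=3`) are placeholders for illustration, not my exact layers:

# Hypothetical continuation: each 3-channel blob feeds its own convolution branch.
net.conv1 = L.Convolution(net.data, num_output=64, kernel_size=3, pad=1,
                          weight_filler=dict(type='xavier'))
net.conv1_d = L.Convolution(net.data_d, num_output=64, kernel_size=3, pad=1,
                            weight_filler=dict(type='xavier'))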

Any suggestions would be helpful.

  • Please post the code of your `"Data"` layer using the problematic lmdb. Can you try to decrease the `prefetch` parameter and see if it has any effect? – Shai Mar 13 '18 at 13:33
  • I have updated the code in the original post. – ankita raj Mar 13 '18 at 13:48
  • You are using the SSD input layer. It might be the case that your augmentation parameters result in too many candidates per input image. You'll have to look into it. – Shai Mar 14 '18 at 06:44
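
To make Shai's second suggestion concrete, below is roughly the shape of an SSD `batch_sampler` in which `max_sample` and `max_trials` cap how many candidate crops are generated per image. The scales, overlap and counts are illustrative placeholders, not my actual settings; I also still need to check my caffe.proto to see whether the data layer in my build exposes a `prefetch` setting at all.

# Illustrative batch_sampler: max_sample / max_trials bound the number of
# candidate crops the SSD data layer generates for each input image.
batch_sampler = [
    {   # keep the original image as-is
        'sampler': {},
        'max_trials': 1,
        'max_sample': 1,
    },
    {   # one random crop, constrained by a minimum jaccard overlap
        'sampler': {'min_scale': 0.3, 'max_scale': 1.0,
                    'min_aspect_ratio': 0.5, 'max_aspect_ratio': 2.0},
        'sample_constraint': {'min_jaccard_overlap': 0.5},
        'max_trials': 10,
        'max_sample': 1,
    },
]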

0 Answers