I have created an LMDB file that contains non-encoded 6-channel images. When I load it into a network in Caffe, the system RAM usage (as seen with the 'top' command) starts at around 10% right after the network is loaded, but then keeps increasing until it exceeds 90%. I am using a system with 32 GB of RAM; it slows down drastically and the code eventually crashes with the following error:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Note that this happens even before running a single forward pass.
The size of the LMDB file I'm using is 545 MB.
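In case it helps with diagnosing the memory use, here is a quick sanity check I can run on the LMDB. This is only a minimal sketch: it assumes train_data is the LMDB path and that the records are AnnotatedDatum protos, which is what the SSD fork's AnnotatedData layer expects.

import lmdb
from caffe.proto import caffe_pb2

env = lmdb.open(train_data, readonly=True, lock=False)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
    anno = caffe_pb2.AnnotatedDatum()
    anno.ParseFromString(value)
    d = anno.datum
    # For non-encoded data, each sample is channels * height * width bytes
    print(d.channels, d.height, d.width, d.encoded, len(d.data))

This at least shows what the data layer will be deserializing per sample.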
I've used the Python NetSpec interface to define the network. Here is the code:
import caffe
from caffe import layers as L
from caffe.model_libs import CreateAnnotatedDataLayer  # helper from the SSD fork

net = caffe.NetSpec()
# Annotated data layer reading the 6-channel LMDB
net.data0, net.label = CreateAnnotatedDataLayer(
    train_data, batch_size=1, train=True, output_label=True,
    label_map_file=label_map_file,
    transform_param=train_transform_param, batch_sampler=batch_sampler)
# Split the 6-channel blob into two 3-channel blobs along the channel axis
net.data, net.data_d = L.Slice(net.data0, slice_param={'axis': 1}, ntop=2, name='data_slicer')
Since my LMDB has 6-channel images and the pretrained network takes 3-channel input, I am using a Slice layer to split each image into two 3-channel images that can be fed into two different convolutional layers, roughly as sketched below.
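For context, this is roughly how I intend to consume the two slices; the branch layers below are only illustrative placeholders, not my actual settings:

# Two separate 3-channel branches (hypothetical layer names/parameters)
net.conv1_rgb = L.Convolution(net.data, num_output=64, kernel_size=3, pad=1)
net.conv1_d = L.Convolution(net.data_d, num_output=64, kernel_size=3, pad=1)
# Serialize the spec to a prototxt for training
with open('train.prototxt', 'w') as f:
    f.write(str(net.to_proto()))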
Any suggestions would be helpful.