Newbie to caffe here.
I am trying to replicate LeNet on my own dataset. My training data is 1D: each sample can be represented as a 1x3000 vector, and each sample's label is another 1D binary vector of dimension 1x64. I have about 100,000 such (data, label) pairs. I am confused about how to feed this to Caffe, since all the examples out there are for images of dimension NxN.
Any idea how this data can be pre-processed to be fed to Caffe?
I was thinking of zero-padding the vector to make it an NxN matrix, but that doesn't seem like the right way. Alternatively, could slicing the 1x3000 vector into 1xn chunks and stacking them up to form an mxn matrix be a solution? Has anyone done this before?
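For context, here is a minimal sketch of how I imagined the shapes might be laid out if I went the HDF5 route (Caffe's HDF5Data layer reads arbitrary-shaped blobs, so a 1x3000 vector could become a 1x1x3000 "image" without any padding). The sample counts and file name here are made up for illustration:

```python
import numpy as np

# Toy stand-in for my data: 100 samples instead of 100,000.
n_samples = 100
data = np.random.rand(n_samples, 3000).astype(np.float32)
labels = np.random.randint(0, 2, size=(n_samples, 64)).astype(np.float32)

# Caffe blobs are conventionally 4-D (N, C, H, W); a 1x3000 vector
# maps to a 1x1x3000 blob per sample -- no zero-padding needed.
data_4d = data.reshape(n_samples, 1, 1, 3000)

# The multi-label target can stay 2-D: (N, 64).
# These arrays could then be written with h5py for an HDF5Data layer, e.g.:
# import h5py
# with h5py.File('train.h5', 'w') as f:   # hypothetical file name
#     f.create_dataset('data', data=data_4d)
#     f.create_dataset('label', data=labels)

print(data_4d.shape)  # (100, 1, 1, 3000)
print(labels.shape)   # (100, 64)
```

Would shaping the blobs this way work, or does the data really have to look like a square image?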
Any suggestion is appreciated.