
I'm trying to read from an HDF5 file. The file is a .h5 file created in Python (possibly with h5py) and contains two datasets. The first is a depth image, "/depth" (single-channel, 32-bit float matrix), with dimensions 376 x 1241. The second is an RGB image, "/rgb" (8-bit unsigned characters), with dimensions 3 x 376 x 1241 when viewed in HDFView.

I am able to successfully read the depth image with:

cv::Ptr<cv::hdf::HDF5> h5io = cv::hdf::open("filename.h5");
cv::Mat depth;
h5io->dsread(depth, "/depth");

But when I try to do the same with the RGB image, not all of the data seems to be loaded into the cv::Mat variable rgb.

cv::Mat rgb;
h5io->dsread(rgb, "/rgb");
std::cout << rgb.size() << std::endl;

I get the following output: [3 x 376] instead of [3 x 376 x 1241].

I tried using the dsgetsize method, and that does return the correct dimensions.

std::vector<int> size_rgb = h5io->dsgetsize("/rgb", cv::hdf::HDF5::H5_GETDIMS);

This returns a vector of integers: [3, 376, 1241].

Any help is greatly appreciated. I'm also open to using other HDF5 libraries for reading the data instead of the built-in hdf module of OpenCV 3.3.1.

Thank you in advance, S

  • size from a cv::Mat returns something like (width, height). So only 2 dimensions. Most probably OpenCV expects the image to be of size (376, 1241, 3). It could be that actually you got an image with 1241 channels (depth) (you can check it with type). – api55 Aug 21 '18 at 13:37
  • Hi @api55 thanks for the comment, I did try to display std::cout << rgb.type() << std::endl; and I got "0" which is an 8UC1 - single channel matrix. – Shreyas Skandan Aug 21 '18 at 14:17

0 Answers