
I have problems converting a .dcm image loaded with DCMTK to OpenCV. My code:

DicomImage dcmImage(in_file.c_str());
int depth = dcmImage.getDepth();
std::cout << "bit-depth: " << depth << "\n";    //this outputs 10
Uint8* imgData = (uchar*)dcmImage.getOutputData(depth);
std::cout << "size: " << dcmImage.getOutputDataSize() << "\n";    //this outputs 226100
cv::Mat image(int(dcmImage.getWidth()), int(dcmImage.getHeight()), CV_32S, imgData);
std::cout << dcmImage.getWidth() << " " << dcmImage.getHeight() << "\n";  //this outputs 266 and 425

imshow("image view", image);  //this shows malformed image

So I am not sure about CV_32S and the getOutputData parameter. What should I put there? Also, 226100/(266*425) == 2, so it should be 2 bytes per pixel (?)

user7428910

2 Answers


When getDepth() returns 10, that means you have 10 bits per pixel (most probably grayscale).

Depending on the Pixel Representation of the DICOM image (0x0028,0x0103), you have to specify a signed or unsigned 16 bit integer type for the matrix: CV_16UC1 or CV_16SC1 (single channel, since the data is grayscale).

Caution: As only 10 bits of 2 bytes are used, you might find garbage in the upper 6 bits which should be masked out before passing the buffer to the mat.

Update: About your comments and your source code:

  1. DicomImage::getInterData()->getPixelRepresentation() does not return the Pixel Representation as found in the DICOM header, but an internal enumeration expressing bit depth and signed/unsigned at the same time. To obtain the value from the header, use DcmDataset or DcmFileFormat.
  2. I am not an OpenCV expert, but I think you are applying an 8 bit bitmask to the 16 bit image, which cannot work properly.
  3. The bitmask should read (1 << 10) - 1, i.e. (1 << depth) - 1 for a 10-bit image.
Markus Sabin
  • Thanks for your answer! Ok, I've tried to play with it a bit, but with no result. I probably don't get something. My code: https://pastebin.com/NGsUn3tN I don't know why, but representation is 2 where it should be 0 or 1 (unsigned/signed). And I am also not sure if putting (1<<11)-1 as the mask is the right choice. Btw: is there a generic way to do it? Because there may be images with different representations/depths. – user7428910 Aug 09 '17 at 22:11
  • And another question: am I extracting pixel representation of the DICOM image as dcmImage.getInterData()->getRepresentation() properly? – user7428910 Aug 09 '17 at 22:16
  • Updated my answer accordingly – Markus Sabin Aug 10 '17 at 05:35

The question is whether you really need rendered pixel data as returned by DicomImage::getOutputData(), or if you need the original pixel data from the DICOM image (also see answer from @kritzel_sw). When using getOutputData() you should pass the requested bit depth as a parameter (e.g. 8 bits per sample) and not the value returned by getDepth().

When working with CT images, you probably want to use pixel data in Hounsfield Units (which is a signed integer value that is the result of the Modality LUT transformation).

J. Riesmeier