
I am trying to read a 12-bit grayscale (DICOM:MONOCHROME2) image. I can read DICOM RGB files fine. When I attempt to load a grayscale image into NSBitmapImageRep, I get the following error message:

Inconsistent set of values to create NSBitmapImageRep

I have the following code fragment:

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                         initWithBitmapDataPlanes : nil
                         pixelsWide               : width
                         pixelsHigh               : height
                         bitsPerSample            : bitsStored
                         samplesPerPixel          : 1
                         hasAlpha                 : NO
                         isPlanar                 : NO
                         colorSpaceName           : NSCalibratedWhiteColorSpace
                         bytesPerRow              : width * bitsAllocated / 8
                         bitsPerPixel             : bitsAllocated];

With these values:

width         = 256
height        = 256
bitsStored    = 12
bitsAllocated = 16

Nothing seems inconsistent to me. I have verified that the image data is width*height*2 bytes in length, so I am fairly sure it is in a 2-byte grayscale format. I have tried many variations of the parameters, but nothing works. If I change "bitsPerSample" to 16, the error message goes away, but I get a solid black image. The closest I have come to success is setting "bitsPerPixel" to zero; that produces an image, but it is clearly rendered incorrectly (you can barely make out the original). Any suggestions would be appreciated. I have spent a long time trying to get this to work and have searched Stack Overflow and the web many times. Thanks very much for any help!

SOLUTION:

After the very helpful suggestions from LEADTOOLS Support, I was able to solve my problem. Here is the code fragment that works (assuming a MONOCHROME2 DICOM image):

// If, and only if, MONOCHROME2:
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc]
                              initWithBitmapDataPlanes : &pixelData
                              pixelsWide               : width
                              pixelsHigh               : height
                              bitsPerSample            : bitsAllocated /* bitsStored will not work here */
                              samplesPerPixel          : samplesPerPixel
                              hasAlpha                 : NO
                              isPlanar                 : NO
                              colorSpaceName           : NSCalibratedWhiteColorSpace
                              bytesPerRow              : width * bitsAllocated / 8
                              bitsPerPixel             : bitsAllocated];

// Stretch the 12-bit stored values across the full 16-bit range so the
// image does not display as nearly black.
int       scale = USHRT_MAX / largestImagePixelValue;
uint16_t *ptr   = (uint16_t *)imageRep.bitmapData;
for (int i = 0; i < width * height; i++) *ptr++ *= scale;
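
For reference, a rough sketch (mine, not from the original post) of how largestImagePixelValue might be computed before the scaling loop above, and how the finished rep can be wrapped in an NSImage for display. It assumes pixelData points to width*height unsigned 16-bit samples in host byte order:

uint16_t largestImagePixelValue = 1;              // start at 1 to avoid dividing by zero
const uint16_t *sample = (const uint16_t *)pixelData;
for (int i = 0; i < width * height; i++) {
    if (sample[i] > largestImagePixelValue) largestImagePixelValue = sample[i];
}

NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:imageRep];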
user1092808
  • 12-bpp grayscale is not a native pixel format on most mobile and desktop operating systems. The displays attached to most computers/mobile devices can only display 8-bit gray. You can try telling it that it has 16-bit pixels and shifting the data left 4 bits. The proper way to display it is to render it as 8-bpp gray and allow the user to adjust the level/window to see what they need to see. – BitBank Mar 03 '15 at 17:46
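
A minimal sketch of the shift approach from the comment above (my illustration, assuming the buffer is accessible as unsigned 16-bit samples):

uint16_t *samples = (uint16_t *)imageRep.bitmapData;
for (int i = 0; i < width * height; i++) {
    samples[i] <<= 4;   // move the 12 stored bits into bits 4..15
}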

1 Answer


It is important to know the Transfer Syntax (0002:0010) and the Number of Frames in the dataset. Also, try to get the value length and VR of the Pixel Data (7FE0:0010) element. Using the value length of the Pixel Data element, you can validate your size calculation for an uncompressed image.
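
As a quick illustration of that check (variable names are my own assumptions, using the question's values), an uncompressed, single-frame image should satisfy:

// Expected Pixel Data length for an uncompressed, single-frame image.
NSUInteger expectedLength =
    (NSUInteger)width * height * samplesPerPixel * (bitsAllocated / 8);
if (pixelDataValueLength != expectedLength) {
    NSLog(@"Pixel Data length %lu does not match expected %lu",
          (unsigned long)pixelDataValueLength, (unsigned long)expectedLength);
}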

As for displaying the image, you will also need the values for High Bit (0028:0102) and Pixel Representation (0028:0103). An image could be 16-bit allocated, 12-bit stored, with the high bit set to 15 and one sample per pixel. That means the 4 least significant bits of each word do not contain pixel data. A Pixel Representation of 1 means the sign bit is the high bit of each pixel sample. In addition, you may need to apply the modality LUT transformation (rescale slope and rescale intercept for a linear transformation), when present in the dataset, to prepare the data for display. Finally, you apply the VOI LUT transformation (Window Center and Window Width) to display the image.
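
A simplified sketch of those two steps (my own illustration; samples, rescaleSlope, rescaleIntercept, windowCenter and windowWidth are assumed to come from the decoded dataset, and Pixel Representation is assumed to be 0, i.e. unsigned):

// Apply the modality LUT, then a linear VOI LUT, producing 8-bit gray for display.
uint8_t *display8 = malloc((size_t)width * height);
for (int i = 0; i < width * height; i++) {
    double value = samples[i] * rescaleSlope + rescaleIntercept;   // modality LUT
    double lower = windowCenter - windowWidth / 2.0;               // VOI LUT window
    double normalized = (value - lower) / windowWidth;
    normalized = MAX(0.0, MIN(1.0, normalized));
    display8[i] = (uint8_t)(normalized * 255.0);
}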

LEADTOOLS Support
  • Thanks very much for your reply. The Transfer Syntax is Little Endian Explicit (1.2.840.10008.1.2.1). The number of frames is not specified, but assumed to be one. The value length of the Pixel Data was one of the items I used to verify the image length (and format). The High Bit is 11 and the Pixel Representation is 0. I still see nothing inconsistent with the parameters I specified above, but I believe the solution is to set `bitsPerSample = 16` (no error) and then apply WW/WL to the image. I have not tried this yet, but I believe it will solve my problem (I hope!). – user1092808 Mar 09 '15 at 17:11