
I'm currently trying to build a 16-bit grayscale "gradient" image, but my output looks weird, so I'm clearly not understanding this correctly. I was hoping somebody could shed some light on my issue. I think the "bitmap" I write is wrong, but I'm not sure.

#include <cstdint>   // for uint8_t

#include "CImg.h"
using namespace std;

unsigned short buffer[1250][1250];

void fill_buffer()
{
    unsigned short temp_data = 0;
    for (int i = 0; i < 1250; i++)
    {
        for (int j = 0; j < 1250; j++)
        {
            buffer[i][j] = temp_data;
        }
        temp_data += 20;
    }
}

int main()
{
    fill_buffer();
    auto hold_arr = (uint8_t *)&buffer[0][0];
    cimg_library::CImg<uint8_t> img(hold_arr, 1250, 1250);
    img.save_bmp("test.bmp");
    return 0;
}

Current Output: [image]

Barmak Shemirani
user2316289
It is unlikely that the function expects a 2-D array disguised as a `uint8_t*` pointer. Use a 1-D array instead. Also, in 16-bit format you need to shift the colors and combine them into a `uint16_t`. You are actually saving the image in 24-bit format; that's why you see any gray color at all. Use a bitmap viewer and read the format information from the bitmap file. Note that 24-bit format is better for gradients and is only slightly larger than 16-bit. You may want to use 24-bit or PNG. – Barmak Shemirani Jul 20 '18 at 06:48

2 Answers


You cannot store 16-bit greyscale samples in a BMP... see Wikipedia.

The 16-bit-per-pixel option in a BMP stores 5 bits each of red, green and blue (RGB555, or RGB565 with bitfield masks), but not 16 bits of greyscale.

The 24-bit format allows you to store 1 byte for red, 1 byte for green and 1 byte for blue, but not 16 bits of greyscale.

The 32-bit BMP allows you to store a 24-bit BMP plus alpha.

You will need to use PNG, NetPBM PGM format, or TIFF format instead. PGM format is great because CImg can write it without any external libraries, and you can always use ImageMagick to convert it to anything else, e.g.:

convert image.pgm image.png

or

convert image.pgm image.jpg
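For the curious, a 16-bit binary PGM is simple enough to write by hand. This is a minimal sketch (not CImg's own implementation) assuming the Netpbm convention that samples with a maxval above 255 are stored as 2 bytes, most significant byte first; `write_pgm16` is a hypothetical helper name.

```cpp
#include <cstdint>
#include <cstdio>

// Minimal 16-bit binary PGM (P5) writer -- a sketch, not CImg's code.
// Netpbm stores 2-byte samples most significant byte first (big-endian).
void write_pgm16(const char *path, const uint16_t *pixels, int w, int h)
{
    FILE *f = std::fopen(path, "wb");
    if (!f)
        return;
    std::fprintf(f, "P5\n%d %d\n65535\n", w, h);
    for (long i = 0; i < (long)w * h; i++) {
        std::fputc(pixels[i] >> 8, f);    // high byte first
        std::fputc(pixels[i] & 0xFF, f);  // then low byte
    }
    std::fclose(f);
}
```

A file written this way opens directly in ImageMagick, GIMP, etc., and can be converted with the `convert` commands above.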

This works:

#define cimg_use_png
#define cimg_display 0
#include "CImg.h"

using namespace cimg_library;
using namespace std;

unsigned short buffer[1250][1250];

void fill_buffer()
{
    unsigned short temp_data = 0;
    for (int i = 0; i < 1250; i++)
    {
        for (int j = 0; j < 1250; j++)
        {
            buffer[i][j] = temp_data;
        }
        temp_data += 65535 / 1250;   // integer step of 52 per row
    }
}

int main()
{
    fill_buffer();
    auto hold_arr = (unsigned short*)&buffer[0][0];
    cimg_library::CImg<unsigned short> img(hold_arr, 1250, 1250);
    img.save_png("test.png");
    return 0;
}

[Resulting gradient image]

Note that when asking CImg to write a PNG file, you will need to use a command like this (with libpng and zlib) to compile:

g++-7 -std=c++11 -O3 -march=native -Dcimg_display=0 -Dcimg_use_png  -L /usr/local/lib -lm -lpthread -lpng -lz -o "main" "main.cpp"

Just by way of explanation:

  • -std=c++11 just sets the C++ standard
  • -O3 -march=native is only to speed things up and is not strictly required
  • -Dcimg_display=0 means the X11 headers are not parsed, so compilation is quicker; however, it also means you can't display images from your program, i.e. you are running "head-less"
  • -Dcimg_use_png means you can read/write PNG images using libpng rather than needing ImageMagick installed
  • -lz -lpng means the resulting code gets linked with the PNG and ZLIB libraries.
Mark Setchell
  • *"You cannot store 16-bit greyscale samples in a BMP"* You can store greyscale in a 16-bit bitmap, but it has to be at a much lower color resolution, up to RGB555 or RGB565. Or maybe you mean this library doesn't support it. – Barmak Shemirani Jul 20 '18 at 15:17
  • @BarmakShemirani No, I meant what I said. You cannot store 16-bit greyscale samples in a BMP. If it is at a lower resolution like RGB555, it's no longer 16-bit greyscale, it is 5-bit. I agree you can store greyscale in a 16-bit BMP, just not 16-bit grayscale, which is what I stated. – Mark Setchell Jul 20 '18 at 15:28
  • I am not sure what you mean. It's 5 bits for each primary color, which gives 32 levels per channel and therefore 32 shades of gray, as opposed to the 256 shades of gray available in the full RGB range. – Barmak Shemirani Jul 20 '18 at 16:04
  • @BarmakShemirani A 16-bit grayscale image is capable of storing any one of 65,536 shades of grey, i.e. 2^16 shades, at each pixel. If you only store 32 shades of grey, you have not successfully stored a 16-bit greyscale image. – Mark Setchell Jul 20 '18 at 16:20
  • There are only 256 shades of grey: RGB(0,0,0), RGB(1,1,1), ... RGB(255,255,255). A 16-bit bitmap can store only 32 of these grey colors; it has to skip a bunch in the middle. Anyway, the 16-bit bitmap format is old and is likely not supported in `CImg`. – Barmak Shemirani Jul 20 '18 at 16:53
  • @BarmakShemirani Yeah, I just looked at how CImg writes the header, and it writes the image as 24-bit. – user2316289 Jul 22 '18 at 20:02
  • You can store 16-bit grayscale in a BMP... you just can't expect it to render as 16-bit grayscale. You can distribute those 16 bits anywhere across the 24 available bits of RGB (e.g. 8 bits in green, 8 in blue) and render a "false colour" image with potentially strange colour changes (no smooth gradients possible). It won't solve your original problem, but others may find this useful if they're not really interested in rendering a 16-bit (or 12-bit) image so much as processing it. – omatai May 30 '19 at 22:54

You've got an 8-bit vs 16-bit problem. You're writing 16-bit values, but the library is interpreting them as 8-bit. That explains the dark vertical bars that are visible: the library alternates between the low and high bytes of each value, treating them as two separate pixel values.

And the reason for the "gradient venetian blind" effect is again due to only considering the low byte. That'll cycle from 0 to 240 in 12 steps of 20, then the next step reaches 260, whose low byte wraps around to 4, and so on.

I'm no cimg_library expert, but a good starting point might be to replace the `uint8_t`s with `uint16_t` and see what effect that has.

dgnuff
  • You are awesome! It makes sense that the uint16_t change got rid of the vertical lines! I don't even know why I cast my data to uint8_t anyway. It didn't fix the "gradient venetian blind" issue, but it's a step in the right direction. – user2316289 Jul 20 '18 at 04:40
  • @user2316289 Try opening the resulting BMP in an image application that can tell you the bit depth. It may be that BMP files are limited to 8 bits per channel. I believe that PNG and TIFF files are capable of storing 16-bit images. Just remember, I'm not an image expert, so I may be mistaken here. ;) – dgnuff Jul 20 '18 at 04:43