
I have a cv::Mat of doubles (an image) whose values I've truncated to the range 0.0 to 4095.0. I want to convert this matrix, or create a new matrix based on it, that is 12-bit (the smallest integer size needed to hold the integer values 0 to 4095). I can get the raw buffer out, but I'm not sure of the format of the data inside the matrix.

Manually I could do the following:

cv::Mat new_matrix(matrix.rows, matrix.cols / 2, CV_8UC3);
for(int i = 0; i < matrix.rows; ++i){
    for(int j = 0; j < matrix.cols / 2; ++j){
        std::uint16_t upper_half = static_cast<std::uint16_t>(matrix.at<double>(i, j*2));
        std::uint16_t lower_half = static_cast<std::uint16_t>(matrix.at<double>(i, j*2+1));
        std::uint8_t first_byte = static_cast<std::uint8_t>(upper_half >> 4);
        std::uint8_t second_byte = static_cast<std::uint8_t>(upper_half << 4) | static_cast<std::uint8_t>(lower_half & 0x0F);
        std::uint8_t third_byte = static_cast<std::uint8_t>(lower_half >> 4);

        new_matrix.at<cv::Vec3b>(i, j) = cv::Vec3b(first_byte, second_byte, third_byte);
    }
}

which essentially takes two double values, one as the upper 12-bit half and one as the lower 12-bit half, and extracts three bytes out of them (12 + 12 = 24, 24 / 8 = 3) into a 3-byte-per-element matrix. I'm unsure whether the memory layout will match that of packed 12-bit data, however (I do have an even number of cols, so dividing cols by 2 isn't a problem), and I'm not sure how to make sure this obeys endianness.

I might even be able to use a custom data type, but I would need to make sure that the elements are not padded if, say, I made a union/struct 12-bit type or something.

Note that after the conversion I don't intend to use the 12-bit values in OpenCV anymore; I then need to extract the raw values and send them to another, separate process.

Krupip

1 Answer

cv::Mat stores data in units of at least 8 bits, which means that your 12-bit values would be padded anyway inside the matrix, as evidenced by the return value of Mat::elemSize1(), which is in bytes. For what you need to do, the best bet seems to be a custom struct holding two values (struct sizes are byte-padded as well), then packing everything into an std::vector<>. At worst you will waste 12 bits of padding on the streamed data, when you have an odd number of samples.
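
You can see this directly by querying a matrix (a minimal sketch; the exact header path may differ between OpenCV versions):

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat doubles(4, 4, CV_64FC1); // matrix of doubles, as in the question
    cv::Mat bytes(4, 4, CV_8UC1);    // smallest element type OpenCV offers
    std::cout << doubles.elemSize1() << "\n"; // 8 bytes per channel element
    std::cout << bytes.elemSize1() << "\n";   // 1 byte, never less
    return 0;
}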

A note about packing: if you use something like the following, you need to reverse the order of the bit-sized fields depending on the machine, if you need to transfer the bytes from one architecture to another.

#pragma pack(push, 1)
struct PackedSamples {
    char lowA;
    char highA : 4; // NOTE: the declaration order of the bit-sized fields is reversed when
    char lowB : 4;  // going from big-endian to little-endian and vice versa
    char highB;
};

#pragma pack(pop)
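
If you want a compile-time guarantee that the pragma really removed all padding, a quick check (sketch, assuming C++11 or later) is:

// Two 12-bit samples should occupy exactly 3 bytes once packed.
static_assert(sizeof(PackedSamples) == 3, "PackedSamples has unexpected padding");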

Here are the macros I use for testing endianness; I assume Windows running on x86/x64, which is little-endian. On other platforms, <endian.h> provides the definitions.

#ifdef WIN32
# ifndef __BYTE_ORDER
#  define __LITTLE_ENDIAN 1234
#  define __BIG_ENDIAN    4321
#  define __BYTE_ORDER __LITTLE_ENDIAN
# endif
#else
# include <endian.h>
#endif

So the declaration above would become:

    #pragma pack(push, 1)
    struct PackedSamples { 
        char lowA; 
    #if __BYTE_ORDER == __LITTLE_ENDIAN
        char highA : 4;
        char lowB : 4; 
    #else
        char lowB : 4; 
        char highA : 4;
    #endif
        char highB;  
    };

    #pragma pack(pop)
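
Putting this together with your matrix, the conversion could look roughly like the sketch below. It assumes the matrix is CV_64FC1, already clamped to [0, 4095], with an even number of columns, and it interprets lowA/highA as the low byte and high nibble of the first sample and lowB/highB as the low nibble and high byte of the second; adjust the field assignments if you lay the bits out differently.

#include <opencv2/core.hpp>
#include <cstdint>
#include <vector>

std::vector<PackedSamples> pack_matrix(const cv::Mat& matrix) {
    std::vector<PackedSamples> out;
    out.reserve(matrix.total() / 2);
    for (int i = 0; i < matrix.rows; ++i) {
        for (int j = 0; j < matrix.cols; j += 2) {
            const std::uint16_t a = static_cast<std::uint16_t>(matrix.at<double>(i, j));     // first 12-bit sample
            const std::uint16_t b = static_cast<std::uint16_t>(matrix.at<double>(i, j + 1)); // second 12-bit sample
            PackedSamples p;
            p.lowA  = static_cast<char>(a & 0xFF);        // low 8 bits of A
            p.highA = static_cast<char>((a >> 8) & 0x0F); // high 4 bits of A
            p.lowB  = static_cast<char>(b & 0x0F);        // low 4 bits of B
            p.highB = static_cast<char>((b >> 4) & 0xFF); // high 8 bits of B
            out.push_back(p);
        }
    }
    return out;
}

The buffer to hand to the other process would then be out.data(), out.size() * sizeof(PackedSamples) bytes long.
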
Michaël Roy
  • I'll probably eventually accept this answer, but I won't need to implement this until later, I'll upvote this for now. – Krupip Jun 19 '17 at 14:53
  • Why not `struct PackedSamples { int A : 12; int B : 12; };`? And if the (target) byte order was big-endian, wouldn't you need to write `{ char highA; char highB : 4; char lowA : 4; char lowB; }`? – chtz Jul 19 '17 at 15:20