I have a device creating raw frames and placing them in memory pointed to by a uint32_t*
that is allocated for the frame size. The data is packed bytes with no padding: 1460 lines of 1920 pixels, 2 bytes per pixel. I am trying to display the frame(s) using OpenCV. I have created the frame with the following line:
cv::Mat frame(1460, 1920, CV_16UC1, g_mem1);
When this is displayed with:
cv::imshow("Video Playback", frame);
I get a distorted image. If I perform the following operations on the pixels:
size_t memSz = 1920 * 1460 * 2;
uint32_t* g_mem1 = static_cast<uint32_t*>(std::malloc(memSz));
cv::Mat frame(1460, 1920, CV_16UC1, g_mem1);
for (int y = 0; y < 1460; ++y)
{
for (int x = 0; x < (1920 / 2); ++x)
{
// Each 32-bit word holds two 16-bit pixels; byte-swap each one.
uint32_t data = g_mem1[(y * (1920 / 2) + x)];
uint16_t pixelValue1 = static_cast<uint16_t>(data >> 16);
pixelValue1 = (pixelValue1 >> 8) | (pixelValue1 << 8);
uint16_t pixelValue2 = static_cast<uint16_t>(data & 0xFFFF);
pixelValue2 = (pixelValue2 >> 8) | (pixelValue2 << 8);
frame.at<uint16_t>(y, (x * 2)) = pixelValue1;
frame.at<uint16_t>(y, (x * 2) + 1) = pixelValue2;
}
}
The image now displays correctly, but this is much too slow to be doing in software. I have a working GStreamer pipeline that uses a caps format of gray16-be and displays the data correctly with no manipulation. I can't use GStreamer on the target machine, so I am trying to find the OpenCV equivalent that does not involve manipulating the pixel data, or that at least lets me leverage hardware to do so.
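To illustrate the kind of "leverage hardware" fallback I mean: the per-pixel cv::Mat::at loop above can at least collapse into one linear byte-swap pass over the whole buffer. swap16_inplace below is my own helper name, not an OpenCV call; at -O2/-O3 most compilers turn this rotate-by-8 pattern into SIMD byte shuffles.

```cpp
#include <cstddef>
#include <cstdint>

// Byte-swap every 16-bit pixel in place (big-endian -> native order).
// One pass, no cv::Mat::at overhead; compilers typically vectorize this.
void swap16_inplace(uint16_t* px, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        px[i] = static_cast<uint16_t>((px[i] >> 8) | (px[i] << 8));
}
```

Used as `swap16_inplace(reinterpret_cast<uint16_t*>(g_mem1), 1920 * 1460);`, after which the original `cv::Mat frame(1460, 1920, CV_16UC1, g_mem1);` displays correctly, assuming a little-endian host.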
I have tried the various CV_16 formats available; the data still appeared distorted, and only manually manipulating the data as described above has produced valid output with OpenCV.