In my Qt application I have image data as a numpy.ndarray. Usually it comes from cv2.imread(), which I then convert to a QImage as follows:
from PyQt5.QtGui import QImage  # or the corresponding PySide import

height, width, channel = cvImg.shape
bytesPerLine = 3 * width
qImg = QImage(cvImg.data, width, height, bytesPerLine, QImage.Format_RGB888)
This works fine; the QImage can be converted to a QPixmap and painted onto a label. Now in some cases I don't get the image data from a file via imread(), but directly from a camera. That data is also a numpy.ndarray, and I can save it via cv2.imwrite() (and then open it in an image viewer). However, using the code above I cannot convert that camera data directly to a QImage: the result is a red-ish image without any details, just some vertical lines.
Since I can save the camera image data, it seems to be valid; I presumably just need to pass the correct image format to the QImage constructor. I tried several of the formats, but none worked. So how can I determine which format this image data is in?