I'm using a Python program to acquire an image from a scientific camera. This part works fine: I can get the 16-bit image into an array. The problem comes when I want to display the image in the Qt window (I'm using a QGraphicsWindow): the way the image is displayed is very strange. To display the image, I convert the 2D array to a pixmap, which is then displayed. I tried different things, but the best results are obtained with the following code:
def array2Pixmap(arr):
    arr_uint8 = arr.view(dtype=numpy.uint8)
    im8 = Image.fromarray(arr_uint8)
    imQt = QtGui.QImage(ImageQt.ImageQt(im8))
    pix = QtGui.QPixmap.fromImage(imQt)
    return pix
which gives the following result:
and this one:
def array2Pixmap(arr):
    arr_uint8 = arr.astype(numpy.uint8)
    im8 = Image.fromarray(arr_uint8)
    imQt = QtGui.QImage(ImageQt.ImageQt(im8))
    pix = QtGui.QPixmap.fromImage(imQt)
    return pix
which gives this for the exact same capture conditions (camera exposure time, light intensity, etc.):
So now I'm looking for a way to display the image the correct way. Do you have any idea what I'm doing wrong?
Thanks
EDIT
Here is an example of what arr contains. The command print(arr)
returns
[[100 94 94 ... 97 98 98]
[ 97 100 98 ... 98 101 99]
[100 95 98 ... 104 98 102]
...
[ 98 98 98 ... 96 98 100]
[ 94 100 102 ... 92 98 104]
[ 97 90 96 ... 96 97 100]]
and print(type(arr))
returns
<class 'numpy.ndarray'>
EDIT
OK, I have some news. I changed my code so that the conversion to an 8-bit array is now done like this:
arr = numpy.around(arr * (2**8 - 1) / (2**16 - 1))
arr_uint8 = arr.astype(numpy.uint8)
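As a sanity check, here is a minimal, self-contained sketch of that scaling step on dummy data (assuming NumPy). Note that in Python ** is exponentiation while ^ is bitwise XOR, and that dividing before multiplying promotes to float and avoids any uint16 overflow in the intermediate product:

```python
import numpy as np

# Dummy 16-bit values standing in for a camera frame
arr16 = np.array([[0, 256, 65535]], dtype=np.uint16)

# Divide first: true division promotes to float64, so the
# multiplication by 255 cannot overflow the uint16 range
arr8 = np.around(arr16 / (2**16 - 1) * (2**8 - 1)).astype(np.uint8)
print(arr8.tolist())  # [[0, 1, 255]]
```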
If I display the image using matplotlib.pyplot.imshow(arr, cmap='gray'),
it works and the image is displayed like this in the editor:
but when I convert it into a QPixmap, the result is the same as before.
What is strange is that when I use arr_uint8 = arr.view(dtype=numpy.uint8)
to convert to 8 bits, the result is a 2048×4096 array instead of 2048×2048. I don't understand why...
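For comparison, a tiny sketch of the difference between view and astype on dummy data (assuming NumPy): view reinterprets the raw buffer without converting values, so each 16-bit element is read back as two separate bytes, which matches the doubled width, while astype converts each value and keeps the shape.

```python
import numpy as np

a = np.zeros((2048, 2048), dtype=np.uint16)

# view() reinterprets the buffer byte-by-byte: every uint16
# becomes two uint8 bytes, so the last axis doubles
print(a.view(np.uint8).shape)    # (2048, 4096)

# astype() converts each value individually, keeping the shape
print(a.astype(np.uint8).shape)  # (2048, 2048)
```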