I am looking for an efficient way to convert multiple numpy arrays (images) into bytes so I can display them in a GUI, in my case imgui from https://github.com/pyimgui/pyimgui.

The way I'm doing it now feels roundabout: the images come from neural networks, and I have to convert them frame by frame before the rendering engine can display them. The pipeline is (sketched in code after the list):

1. get a z vector
2. generate image data from the z vector
3. convert the image data to a PIL image
4. `.convert("RGB")` the PIL image
5. get the PIL image as bytes with `data = im.tobytes("raw", "RGBA", 0, -1)`
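
In code, one frame's round trip looks roughly like this (a simplified sketch: `generate_image` is a stand-in for my actual network call, and I use RGBA consistently here, whereas my real code mixes `.convert("RGB")` with the `"RGBA"` raw mode as listed above):

```python
from PIL import Image

def frame_to_bytes(z, generate_image):
    """One frame: z vector -> numpy image -> PIL image -> raw bytes."""
    arr = generate_image(z)                  # H x W x 3 uint8 numpy array from the network
    im = Image.fromarray(arr).convert("RGBA")
    # stride 0, orientation -1 flips the rows for OpenGL's bottom-left origin
    return im.tobytes("raw", "RGBA", 0, -1)
```

Per frame and per texture this adds a numpy-to-PIL copy and a PIL-to-bytes copy, which is exactly what I'd like to avoid.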

This seems extremely inefficient to me, and I am doing it for 5 textures at the same time (from two different neural networks). When I instead try to hand the OpenGL context the PIL image, or even the numpy array directly (see the sketch below), all I see is a glitch.
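
For reference, here is roughly what I mean by passing the numpy array directly (a sketch using PyOpenGL and assuming an H x W x 3 uint8 array; the returned texture id is what I hand to `imgui.image()`):

```python
import numpy as np
from OpenGL.GL import (
    GL_LINEAR, GL_RGB, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
    GL_TEXTURE_MIN_FILTER, GL_UNPACK_ALIGNMENT, GL_UNSIGNED_BYTE,
    glBindTexture, glGenTextures, glPixelStorei, glTexImage2D, glTexParameteri,
)

def upload_texture(arr):
    """Upload an H x W x 3 uint8 numpy array as a 2D texture and return its id."""
    arr = np.ascontiguousarray(arr)          # glTexImage2D needs contiguous memory
    height, width = arr.shape[:2]
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)    # rows of 3-byte pixels aren't 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, arr)
    return tex
```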

Any help is appreciated.

  • It seems that [displaying images is out of scope for `imgui`](https://github.com/ocornut/imgui/wiki/Image-Loading-and-Displaying-Examples#tl-dr) as that depends on your rendering engine. That FAQ does have an example for OpenGL though :) – Ari Cooper-Davis Aug 23 '21 at 17:26
  • 1
    I don't have any problems getting that. Just as I have everything working. My question here is to see if someone expert can help me turn np arrays of images into bytes to demonstrate in opengl more efficiently. – kashik Aug 23 '21 at 20:53

0 Answers