
What is the best practice for sending OpenGL pixel data across a network and then displaying it on a client as a bitmap image?

What I currently have is

  1. Get the pixel data using glReadPixels,
  2. Create a FreeImage object using FreeImage_ConvertFromRawBits

However, I am unable to find any documentation or examples on serialising this to a byte array in bitmap RGB format. The byte array would then be sent across the network to a client, de-serialised back into a bitmap image and rendered. Some help in this area would be greatly appreciated.
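FreeImage does offer memory streams (FreeImage_OpenMemory / FreeImage_SaveToMemory / FreeImage_AcquireMemory) for saving a bitmap into a byte buffer instead of a file. As an illustration of what such a byte array actually contains, here is a dependency-free sketch that assembles a minimal 24-bit BMP in memory from raw RGB pixels; it assumes a little-endian host and bottom-up row order (which is what glReadPixels returns), and is a sketch rather than a production encoder:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Assemble a minimal 24-bit BMP file in memory from raw RGB pixels.
// Assumes `rgb` holds width*height*3 bytes in bottom-up row order.
// Rows are padded to a multiple of 4 bytes as BMP requires.
std::vector<uint8_t> rgbToBmpBytes(const uint8_t* rgb, int width, int height) {
    const int rowSize = (width * 3 + 3) & ~3;        // row padded to 4 bytes
    const uint32_t pixelBytes = rowSize * height;
    const uint32_t fileSize = 54 + pixelBytes;       // 14-byte file header + 40-byte info header

    std::vector<uint8_t> bmp(fileSize, 0);
    // Writes a 32-bit value at a byte offset; assumes a little-endian host.
    auto put32 = [&](size_t off, uint32_t v) { std::memcpy(&bmp[off], &v, 4); };

    bmp[0] = 'B'; bmp[1] = 'M';                      // magic bytes
    put32(2, fileSize);
    put32(10, 54);                                   // offset of pixel data
    put32(14, 40);                                   // BITMAPINFOHEADER size
    put32(18, (uint32_t)width);
    put32(22, (uint32_t)height);                     // positive height = bottom-up
    bmp[26] = 1;                                     // colour planes
    bmp[28] = 24;                                    // bits per pixel
    put32(34, pixelBytes);

    // Copy rows, swapping RGB -> BGR (BMP stores blue first).
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* src = rgb + (y * width + x) * 3;
            uint8_t* dst = &bmp[54 + y * rowSize + x * 3];
            dst[0] = src[2]; dst[1] = src[1]; dst[2] = src[0];
        }
    }
    return bmp;
}
```

The resulting vector can be handed straight to a socket send call on the server and written to a file (or decoded) on the client.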

Please also provide some advice regarding best practice for this type of networking. The screen will be captured and sent at 30 fps, albeit at a relatively low resolution. Would compressing each frame to JPEG be a better approach, or would the compression overhead outweigh the bandwidth saved?

maxhap
  • You could compress it with ZLib. Send over a header first and then decompress based on the header. I did this before in a chat program, though I didn't compress anything. Just send over the width, the height and the bits per pixel, then send the raw RGBA. On the client side, read the width, height and bits per pixel, allocate a buffer and read the RGBA. Basically, you're designing your own packet layout/header. – Brandon Oct 13 '14 at 23:16
  • There are also the JPEG and PNG formats. – ratchet freak Oct 13 '14 at 23:18
  • You might want to look into actually generating a video stream. Ultimately, that will be the smallest delivery payload, smaller than JPEG-encoding each frame. There are numerous file formats (MPG) which are designed to stream. How you generate, deliver and receive the video stream depends on your platform. – cppguy Oct 13 '14 at 23:24
  • Yeah, you might want to compress that as much as you can, just doing some basic calculations of an RGB VGA stream @30Hz (3*640*480*30) gives you over 26MB/s which is quite a lot (720p being about 80MB/s and 1080p is 178MB/s). – PeterT Oct 14 '14 at 00:03
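The header-plus-raw-pixels scheme Brandon describes above can be sketched as follows. The struct fields and the 12-byte little-endian layout are illustrative assumptions, not a standard format; for mixed-endian platforms you would convert the header fields with htonl/ntohl:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical frame header: width, height and bits-per-pixel as
// 32-bit fields, followed immediately by the raw pixel bytes.
struct FrameHeader {
    uint32_t width;
    uint32_t height;
    uint32_t bpp;   // bits per pixel, e.g. 24 for RGB, 32 for RGBA
};

// Server side: prepend the header to the pixel payload.
std::vector<uint8_t> packFrame(const FrameHeader& h, const uint8_t* pixels) {
    const size_t pixelBytes = (size_t)h.width * h.height * (h.bpp / 8);
    std::vector<uint8_t> buf(12 + pixelBytes);
    std::memcpy(&buf[0], &h.width, 4);   // assumes little-endian on both ends
    std::memcpy(&buf[4], &h.height, 4);
    std::memcpy(&buf[8], &h.bpp, 4);
    std::memcpy(&buf[12], pixels, pixelBytes);
    return buf;
}

// Client side: read the header, validate the size, then take the payload.
bool unpackFrame(const std::vector<uint8_t>& buf, FrameHeader& h,
                 std::vector<uint8_t>& pixels) {
    if (buf.size() < 12) return false;
    std::memcpy(&h.width, &buf[0], 4);
    std::memcpy(&h.height, &buf[4], 4);
    std::memcpy(&h.bpp, &buf[8], 4);
    const size_t pixelBytes = (size_t)h.width * h.height * (h.bpp / 8);
    if (buf.size() != 12 + pixelBytes) return false;
    pixels.assign(buf.begin() + 12, buf.end());
    return true;
}
```

Since TCP is a byte stream, the client would typically read exactly 12 bytes first, compute the payload size from the header, and then loop until that many pixel bytes have arrived.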

0 Answers