I have a streaming application that should capture the content of a window in which a QGLWidget draws. At the moment I am calling widget.grabFrameBuffer() in every draw cycle (it has to be real-time). The app uses 100% CPU, and without widget.grabFrameBuffer() it drops to 10-20%. I figured out that the call to image.bits() (which I need in order to send the QImage data) creates a copy, which is not a good way of solving the problem. Does anybody have an idea how I could get a pointer to the frame buffer or image data, or do I have to use OpenGL commands?
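For reference, the capture path looks roughly like this (a hypothetical sketch; sendFrame() is a placeholder for my streaming code):

```cpp
#include <QGLWidget>
#include <QImage>

void sendFrame(const uchar * data, int len); // hypothetical network call

void captureFrame(QGLWidget & widget) {
    QImage frame = widget.grabFrameBuffer(); // GPU -> CPU readback
    // bits() detaches, i.e. deep-copies, whenever the data is shared:
    uchar * data = frame.bits();
    sendFrame(data, frame.byteCount());
}
```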
- You generally cannot get "the" pointer to the frame buffer, because it is stored in the GPU's VRAM. And herein lies the problem: you have to synchronize the pipeline to read the framebuffer back, unless you use something like a PBO and do not need the results immediately. – Andon M. Coleman Jan 16 '14 at 20:21
- OK, but is there a way to avoid the copy in Qt? But thanks! – Jan 16 '14 at 20:57
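The PBO approach mentioned in the comment above can be sketched as follows: double-buffered pixel-pack buffers let glReadPixels() return immediately while the transfer completes in the background. This assumes a current GL context and an extension loader such as GLEW; w, h, and handleFrame() are hypothetical placeholders.

```cpp
#include <GL/glew.h>

void handleFrame(const void * data, int len); // hypothetical consumer

GLuint pbo[2];

void initReadback(int w, int h) {
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        // Allocate once; GL_STREAM_READ hints at GPU -> CPU streaming.
        glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, 0, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readbackFrame(int w, int h, int frameIndex) {
    // Kick off an asynchronous transfer of the current frame:
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frameIndex % 2]);
    glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0);
    // Map the *previous* frame's buffer; its transfer has had a full
    // frame to finish, so mapping it is unlikely to stall the pipeline:
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frameIndex + 1) % 2]);
    if (void * ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        handleFrame(ptr, w * h * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}
```

The trade-off is one frame of latency: each mapped buffer holds the previous frame's pixels, which is usually acceptable for streaming.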
1 Answer
QImage::bits() only performs a deep copy because you ask it to. For constant data, the right way to do it, without duplicating the data already in the QImage, is:
const QImage & frame(widget.grabFrameBuffer());
// constBits() never detaches; the compiler enforces const on uchar
const uchar * bits = frame.constBits();
// By contrast, if you forget const and call the non-const bits(),
// a shared image would be deep-copied:
// uchar * bits = frame.bits();
If the code you pass the bits to is broken by not being const-correct, you can cast the bits:
class AssertNoImageModifications {
    const char * m_bits;
    int m_len;
    quint16 m_checksum;
    Q_DISABLE_COPY(AssertNoImageModifications)
public:
    explicit AssertNoImageModifications(const QImage & image) :
        m_bits(reinterpret_cast<const char*>(image.constBits())),
        m_len(image.byteCount())
    {
        // The checksum is only computed in debug builds; the "|| true"
        // keeps the assertion itself from ever firing here.
        Q_ASSERT((m_checksum = qChecksum(m_bits, m_len)) || true);
    }
    ~AssertNoImageModifications() {
        Q_ASSERT(m_checksum == qChecksum(m_bits, m_len));
    }
};
...
AssertNoImageModifications ani1(frame);
stupidAPI(const_cast<uchar*>(bits));
Of course grabFrameBuffer() has to copy the data from the GPU's memory to system memory, but this shouldn't be obscenely expensive; it should be done using DMA.
If you want to downsize/resample the data before dumping it to an image, you would save the CPU and system memory bandwidth by running a downsampler shader program, and getting the image from an FBO.
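A minimal sketch of that idea, assuming a current GL context: render into a half-resolution FBO and read back the smaller texture, so only a quarter of the pixels cross the bus. sceneTexture, smallBuffer, and drawFullScreenQuad() are hypothetical placeholders.

```cpp
#include <GL/glew.h>

void drawFullScreenQuad(); // hypothetical: draws a quad textured with sceneTexture

void readDownsampled(GLuint sceneTexture, int w, int h,
                     unsigned char * smallBuffer) {
    GLuint fbo, smallTex;
    // Half-resolution color attachment:
    glGenTextures(1, &smallTex);
    glBindTexture(GL_TEXTURE_2D, smallTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w / 2, h / 2, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, 0);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, smallTex, 0);
    glViewport(0, 0, w / 2, h / 2);
    // Linear filtering on the source texture performs the resampling:
    glBindTexture(GL_TEXTURE_2D, sceneTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    drawFullScreenQuad();
    glReadPixels(0, 0, w / 2, h / 2, GL_BGRA, GL_UNSIGNED_BYTE, smallBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &smallTex);
}
```

A dedicated downsampler shader could do better filtering, but even plain linear filtering cuts the readback and network bandwidth by 4x.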

Kuba hasn't forgotten Monica
- Correct, but unfortunately the network lib uses no const pointer... anyway, I might cast it. Could you expand on your suggestion about the shader? What do you mean? – Jan 16 '14 at 21:11
- @immerhart: If the network library's API is not const-correct, as it should be, you can use `const_cast<uchar*>(bits)`, but it'd be better to either fix the library or complain to its authors (and complain **loudly**!). – Kuba hasn't forgotten Monica Jan 16 '14 at 21:13
- @immerhart: GPU programming is beyond the scope of this answer, and can't really be taught in a Stack Overflow answer. GPUs these days are fairly general-purpose computers and can run C-like code. Said code can take the contents of a rendered frame, resize and resample it, and dump it somewhere else before it's passed to system memory. – Kuba hasn't forgotten Monica Jan 16 '14 at 21:15