I noticed that GTK on Windows seems to render images on the CPU rather than the GPU (on Linux this does not seem to be the case).
I am creating a program using Python, Gtk3, and OpenCV which streams video from a camera and displays it in a GtkImage. The program works, but the moment I resize the image to a larger resolution, the framerate drops. The larger the image, the higher the CPU usage.
Here is a snippet of code:
import cv2 as cv
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, GdkPixbuf, GLib

# This method is called from a thread that read()s frames from cv2.VideoCapture
# and displays them in a GtkImage.
def writeDisplay(uiBuilder, frame):
    # Convert the OpenCV BGR frame to RGB and wrap it in a GdkPixbuf
    frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
    h, w, d = frame.shape
    pixbuf = GdkPixbuf.Pixbuf.new_from_data(
        frame.tobytes(), GdkPixbuf.Colorspace.RGB, False, 8, w, h, w * d)
    # Hand the pixbuf to the GtkImage on the GTK main loop
    imageDisplay = uiBuilder.get_object("display")
    GLib.idle_add(imageDisplay.set_from_pixbuf, pixbuf)
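For context, writeDisplay is driven by a capture loop on a background thread, roughly like the sketch below. This is a minimal illustration of the setup, not my exact code; the camera index, the thread setup, and the builder variable are assumptions.

import threading
import cv2 as cv

def captureLoop(uiBuilder):
    # Assumed camera source (index 0); the real program may open a different device.
    cap = cv.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Convert and push the frame to the GtkImage via GLib.idle_add
        writeDisplay(uiBuilder, frame)
    cap.release()

# Run the loop on a daemon thread so Gtk.main() stays responsive.
threading.Thread(target=captureLoop, args=(builder,), daemon=True).start()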
On Linux I don't notice any frame drops, which suggests the GtkImage is rendered by the GPU there. On Windows, however, it appears to be software-rendered.
I should also note that on Windows I am using PyGObject installed through MSYS2.
Is there any way of streaming video frames from OpenCV to a Gtk3 GUI using hardware acceleration?