
Using Python 3 and GTK 3, I am working on a computer vision application that needs to manipulate video frames and play them back.

Currently, for every frame I create a new Pixbuf with the static method new_from_data, fed with a sequence of bytes created from the numpy array containing the manipulated frame. I am having performance problems: I cannot play the video at 20 fps.
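For reference, the pattern described above looks roughly like this (a minimal sketch; the array shape and the commented-out GdkPixbuf call are assumptions based on the usual 8-bit RGB layout):

```python
import numpy as np

# A stand-in for one manipulated video frame: H x W x 3, 8-bit RGB.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

height, width, channels = frame.shape
rowstride = width * channels      # bytes per row (assuming no padding)
data = frame.tobytes()            # copies the array into a bytes object

# The per-frame Pixbuf creation (assumes `from gi.repository import GdkPixbuf`):
# pixbuf = GdkPixbuf.Pixbuf.new_from_data(
#     data, GdkPixbuf.Colorspace.RGB,
#     False,        # no alpha channel
#     8,            # bits per sample
#     width, height, rowstride)
```

Note that both `tobytes()` and `new_from_data` copy the whole frame, so each displayed frame costs at least two full-frame copies on the CPU before any drawing happens.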

I wonder: is this the right approach for this kind of problem? Is creating a new Pixbuf for every frame relatively cheap or expensive? Should I use other methods, such as new_from_stream? (I'm not familiar with it.)

fstab
  • Try GStreamer? I don't know how much work your setup would take... – andlabs Nov 17 '16 at 20:57
  • the amount of work is not an issue – fstab Nov 17 '16 at 21:24
  • Yes, creating a new Pixbuf for every frame is very, very, expensive. Unfortunately I don't have a good answer for on-the-fly generated streaming video because it depends heavily on how you are generating the frames, and your other requirements, but this way is almost certainly not the best way. – ptomato Nov 27 '16 at 19:32

1 Answer


This is not an "easier" way, but if you're having performance issues, you should try Clutter via Clutter-Gtk, which uses hardware acceleration to draw the frames.

You can create a GtkClutterEmbed widget, which gives you a ClutterStage (which is a ClutterActor). Every time you have a new frame, create a new ClutterImage, call clutter_image_set_data() (the analogue of new_from_data for a Pixbuf), and then set the ClutterContent of the ClutterActor (e.g. the ClutterStage) to this new ClutterImage.

This is how Gnome-Ring plays video; you can take a look at its source code for inspiration.

Stepan