
Background:

  1. I have a pipeline that uses a series of OpenGL shaders to process webcam source footage and locate a feature (it is always the same feature, and it is the only feature I ever look for).
  2. The only thing that is read back to the CPU is the four coordinates of the bounding box (a sketch of this readback follows below).
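
For context, the readback is tiny; it amounts to something like the following (a sketch only: the FBO/texture setup is omitted and the names are invented for illustration).

    // Sketch of the single CPU readback in the current pipeline: the
    // final shader pass writes the box into a 1x1 RGBA32F render target,
    // which is read back as four floats. Assumes a GL 3.x context and a
    // loader header (glad here).
    #include <glad/glad.h>

    struct BoundingBox { float xMin, yMin, xMax, yMax; };

    BoundingBox readBoundingBox(GLuint resultFbo) {
        BoundingBox box{};
        glBindFramebuffer(GL_READ_FRAMEBUFFER, resultFbo);
        // One RGBA pixel holds (xMin, yMin, xMax, yMax).
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, &box);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        return box;
    }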

I am interested in training an object detection NN to see if I can get better performance/accuracy at extracting my feature from the footage.


The Question:

Is it possible to run the trained model inside the OpenGL environment (using a framebuffer/texture as the input) without reading textures back and forth between the GPU and CPU?

Example:

  1. Run my preprocessing OpenGL shader programs
  2. Run the feature detection model (trained with TensorFlow), using the framebuffer as its input (see the sketch below)
  3. Extract the bounding box coordinates
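
To make step 2 concrete, the closest thing I have found so far is TensorFlow Lite's OpenGL-backed GPU delegate, which can bind an existing GL shader storage buffer directly to a model tensor so the frame never leaves the GPU. A rough sketch, assuming a converted model.tflite and an SSBO written by my last preprocessing pass (note the delegate targets OpenGL ES 3.1, and I have not verified this end-to-end):

    // Sketch: bind a GL SSBO to the TFLite model's input tensor via the
    // GL GPU delegate (gl_delegate.h), so inference consumes the buffer
    // produced by the preprocessing shaders without a CPU round trip.
    // "model.tflite" and preprocessedSsbo are placeholders.
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include "tensorflow/lite/delegates/gpu/gl_delegate.h"

    bool detectOnGpu(GLuint preprocessedSsbo, float box[4]) {
        auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
        if (!model) return false;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        TfLiteDelegate* delegate = TfLiteGpuDelegateCreate(nullptr);

        // The SSBO must be bound before the delegate is applied.
        if (TfLiteGpuDelegateBindBufferToTensor(
                delegate, preprocessedSsbo,
                interpreter->inputs()[0]) != kTfLiteOk) return false;

        if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk)
            return false;

        // Inference stays on the GPU; only the 4-float output comes back.
        if (interpreter->Invoke() != kTfLiteOk) return false;

        const float* out = interpreter->typed_output_tensor<float>(0);
        for (int i = 0; i < 4; ++i) box[i] = out[i];
        return true;
    }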
  • You can share (frame)buffers between OpenCL and OpenGL, and TensorFlow has some experimental support for OpenCL, but I guess it's a lot of work to get it working. – ixeption Dec 11 '22 at 16:04
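
Following up on that comment: the GL side of the OpenCL sharing would look roughly like this (a sketch assuming the cl_khr_gl_sharing extension and a CL context created against the current GL context; whether TensorFlow's experimental OpenCL path can then consume the buffer is the unverified part).

    // Sketch of OpenCL/OpenGL buffer sharing via cl_khr_gl_sharing.
    // clContext must have been created against the current GL context;
    // glBuffer is a buffer the preprocessing passes write into.
    #include <CL/cl.h>
    #include <CL/cl_gl.h>

    cl_mem wrapGlBuffer(cl_context clContext, cl_GLuint glBuffer) {
        cl_int err = CL_SUCCESS;
        // Wraps the GL buffer as a CL memory object; no copy is made.
        cl_mem shared = clCreateFromGLBuffer(clContext, CL_MEM_READ_ONLY,
                                             glBuffer, &err);
        return (err == CL_SUCCESS) ? shared : nullptr;
    }

    // Per frame, CL must acquire the buffer before its kernels run and
    // release it afterwards so GL can write the next frame:
    //     clEnqueueAcquireGLObjects(queue, 1, &shared, 0, nullptr, nullptr);
    //     ...run the model's kernels...
    //     clEnqueueReleaseGLObjects(queue, 1, &shared, 0, nullptr, nullptr);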

0 Answers