
I have this thread class built to run inference with TensorRT:

import threading

import numpy as np
import pycuda.driver as cuda
import tensorrt as trt

# HostDeviceMem (a simple host/device buffer pair) comes from the TensorRT Python samples' common.py

class GPUThread(threading.Thread):

  def __init__(self, engine_path):
    threading.Thread.__init__(self)
    self.engine_path = engine_path
    self.engine = self.open_engine(engine_path)  # deserializes the TensorRT engine (helper omitted here)

  def run(self):
    cuda.init()
    #self.dev = cuda.Device(0)
    #self.ctx = self.dev.make_context()
    self.rt_run()
    #self.ctx.pop()
    #del self.ctx
    return

  def rt_run(self):
    with self.engine.create_execution_context() as context:
      inputs, outputs, bindings, stream = self.allocate_buffers(self.engine)
      # ...  Retrieve image
      self.load_input(inputs[0].host, image)
      [output] = self.run_inference(
        context,
        bindings=bindings,
        inputs=inputs,
        outputs=outputs,
        stream=stream
      )
    return

  def load_input(self, pagelocked_buffer, image):
    # ... Image transformations on image produce crop_img ...
    # Copy to the pagelocked input buffer
    np.copyto(pagelocked_buffer, crop_img)
    return

  def allocate_buffers(self, engine):
    inputs = []
    outputs = []
    bindings = []
    stream = cuda.Stream()
    for binding in engine:
      size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
      dtype = trt.nptype(engine.get_binding_dtype(binding))
      # Allocate host and device buffers
      host_mem = cuda.pagelocked_empty(size, dtype)
      device_mem = cuda.mem_alloc(host_mem.nbytes)
      # Append the device buffer to device bindings.
      bindings.append(int(device_mem))
      # Append to the appropriate list.
      if engine.binding_is_input(binding):
        inputs.append(HostDeviceMem(host_mem, device_mem))
      else:
        outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream

  def run_inference(self, context, bindings, inputs, outputs, stream, batch_size=1):
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference.
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream
    stream.synchronize()
    # Return only the host outputs.
    return [out.host for out in outputs]
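
For context, the thread is started from the main program roughly like this (a minimal sketch; the engine path is a placeholder):

# Hypothetical launcher, only to show how the worker thread is used.
gpu_thread = GPUThread("/path/to/model.engine")  # placeholder engine path
gpu_thread.start()  # threading.Thread.start() runs run() on the worker thread
gpu_thread.join()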

When running the code above, the stream = cuda.Stream() call in allocate_buffers fails with:

pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

So I then try the following in run (this is the commented-out code above):

self.dev = cuda.Device(0)
self.ctx = self.dev.make_context()
self.rt_run()
self.ctx.pop()
del self.ctx

This causes my system to completely freeze when rt_run's create_execution_context is called. I'm guessing there are conflicts between making the PyCuda context and then creating the TensorRT execution context? I'm running this on a Jetson Nano.

If I remove the create_execution_context code, I can allocate buffers, and it seems the context is active and visible in the worker thread. However, I can't run inference without the TensorRT execution context; execute_async is not a method of self.ctx above.
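
To be explicit about which "context" is which here (a quick sketch; trt_ctx is just an illustrative name):

self.ctx = cuda.Device(0).make_context()          # pycuda.driver.Context - has no execute_async
trt_ctx = self.engine.create_execution_context()  # tensorrt.IExecutionContext - this is what execute_async belongs to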

Note that none of these issues arise when running from the main thread. I can just use PyCuda's autoinit and create an execution context as in the above code.
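
For reference, the main-thread version that works looks roughly like this (a sketch; the helpers are the same methods shown above, written here as plain functions):

import pycuda.autoinit  # creates and activates a CUDA context on the main thread
import pycuda.driver as cuda
import tensorrt as trt

engine = open_engine(engine_path)  # same engine deserialization helper as above
with engine.create_execution_context() as context:
  inputs, outputs, bindings, stream = allocate_buffers(engine)
  load_input(inputs[0].host, image)  # image retrieved the same way as above
  [output] = run_inference(context, bindings=bindings, inputs=inputs,
                           outputs=outputs, stream=stream)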

So, in summary: in a worker thread I can't allocate buffers unless I call self.dev.make_context, but that causes the create_execution_context call to freeze the system. If I don't call self.dev.make_context, I can't allocate buffers at all, because cuda.Stream() in allocate_buffers fails with the invalid device context error.

What I'm running:

  • TensorRT 6
  • PyCuda 1.2
  • Jetson Nano 2019 (A02)

– Biiiiiird
  • Try replacing `with self.engine.create_execution_context() as context` with `context = self.engine.create_execution_context()` – mibrahimy Apr 30 '20 at 18:27
  • @Biiiiiird were you able to solve the issue? – Walid Hanafy Jul 04 '20 at 16:20
  • @WalidHanafy I did not find a solution. I resigned to developing the application on a single thread instead. – Biiiiiird Jul 08 '20 at 20:46
  • Thanks, I have a solution that might be valid - I have decided to use C++ - deploy multiple threads each with its own Cuda context and tensor rt context. The threads should be always running, then communicate with them via a queue, this will increase the throughput or the processing. – Walid Hanafy Jul 09 '20 at 10:49
  • 1
    @Biiiiiird I have found the solution and posted it here. https://stackoverflow.com/questions/62719277/tensorrt-multiple-threads – Walid Hanafy Jul 30 '20 at 10:21

0 Answers