
I have two Colab notebooks. The first does real-time video processing from a webcam (using YOLOv4 Darknet). The second does NLP; it works like a voice assistant. Now I want to send the outputs of the video processing notebook to the NLP notebook, so that the objects detected live by YOLO can be presented as audio feedback when needed. Is this possible? Can you give me any ideas on how to do it? Thanks!

Note: the YOLO live detection and the NLP notebook each work properly, independently of each other.
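One idea I am considering is to bridge the two notebooks through a small file on Google Drive: the YOLO notebook overwrites a JSON file with the latest detections, and the NLP notebook polls that file and turns new entries into speech. Would something like the sketch below be a reasonable approach? The shared path and the `(label, confidence)` format are only placeholders, not my actual Darknet output.

```python
# Notebook 1 (YOLO side) -- minimal sketch, assuming Drive is mounted in both
# notebooks and that `detections` is a list of (label, confidence) pairs
# produced by the YOLO loop.
import json
import time
from google.colab import drive

drive.mount('/content/drive')
SHARED_FILE = '/content/drive/MyDrive/yolo_nlp/latest_detections.json'  # hypothetical path

def publish_detections(detections):
    # Overwrite the shared file with the most recent frame's detections.
    payload = {
        'timestamp': time.time(),
        'objects': [{'label': lbl, 'confidence': float(conf)} for lbl, conf in detections],
    }
    with open(SHARED_FILE, 'w') as f:
        json.dump(payload, f)
```

On the NLP side, a simple polling function could read that file and speak anything new, for example with gTTS (`pip install gtts` in Colab first):

```python
# Notebook 2 (NLP side) -- minimal sketch using gTTS for the audio feedback;
# any other TTS would work the same way.
import json
import os
from google.colab import drive
from gtts import gTTS
from IPython.display import Audio, display

drive.mount('/content/drive')
SHARED_FILE = '/content/drive/MyDrive/yolo_nlp/latest_detections.json'  # same hypothetical path

def speak_latest_detections(last_seen=0.0):
    # Read the shared file once; speak only detections newer than `last_seen`.
    if not os.path.exists(SHARED_FILE):
        return last_seen
    with open(SHARED_FILE) as f:
        payload = json.load(f)
    if payload['timestamp'] <= last_seen:
        return last_seen
    labels = [obj['label'] for obj in payload['objects']]
    if labels:
        gTTS('I can see ' + ', '.join(labels)).save('feedback.mp3')
        display(Audio('feedback.mp3', autoplay=True))
    return payload['timestamp']
```

A shared Drive file seems like the simplest option within Colab's constraints; if lower latency is needed, I guess the same pattern could use a message queue or a small web endpoint instead, but I am not sure what is practical between two Colab sessions.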

(Btw, sorry for my poor English :))

