I used GStreamer + OpenCV to decode 12 IP camera streams at 480p, 5 fps. On the Jetson Nano, the model's prediction time is 100 ms for batch_size=1 and 130 ms for batch_size=2. I used threads for H.264 hardware decoding. How should I handle inference for all 12 cameras at the same time?

In Nvidia's demo, 8 streams of 1080p30 are processed simultaneously; how do they handle this challenge? I guess they don't feed the inputs of all 8 streams to the model at once, right? I think they feed the first two cameras to the model as batch_size=2, then the next two, and so on sequentially, right? If so, assuming they feed each pair of cameras to the model at the same time while decoding all 8 streams concurrently, then while the first two cameras are being processed at time T1, are the frames from the remaining cameras (i.e. 3-8) dropped during T1?
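For context, here is a minimal sketch of the pattern the question describes: one decoder thread per camera writes only the *latest* frame into a shared slot (stale frames are overwritten, i.e. dropped), and a separate loop walks the cameras in groups of batch_size=2 and runs inference on each group sequentially. The `decoder` and `infer` functions are hypothetical stand-ins for the real GStreamer/OpenCV capture loop and the model; this is not Nvidia's actual implementation.

```python
import threading
import time

NUM_CAMERAS = 12
BATCH_SIZE = 2

# One slot per camera holding only the most recent decoded frame;
# a newer frame simply overwrites (drops) the older one.
latest = [None] * NUM_CAMERAS
locks = [threading.Lock() for _ in range(NUM_CAMERAS)]

def decoder(cam_id, n_frames=5):
    # Hypothetical stand-in for a per-camera GStreamer/OpenCV
    # H.264 decode loop running in its own thread.
    for i in range(n_frames):
        frame = f"cam{cam_id}-frame{i}"  # placeholder for a decoded frame
        with locks[cam_id]:
            latest[cam_id] = frame
        time.sleep(0.001)

def infer(batch):
    # Hypothetical stand-in for the model; one result per input frame.
    return [f"pred({f})" for f in batch]

# Decode all streams concurrently (one thread per camera).
threads = [threading.Thread(target=decoder, args=(c,)) for c in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Round-robin over cameras in groups of BATCH_SIZE: while one pair is
# in flight, frames arriving from the other cameras are overwritten
# in their slots rather than queued, so the model never falls behind.
results = []
for start in range(0, NUM_CAMERAS, BATCH_SIZE):
    batch = []
    for cam_id in range(start, start + BATCH_SIZE):
        with locks[cam_id]:
            if latest[cam_id] is not None:
                batch.append(latest[cam_id])
    results.extend(infer(batch))

print(len(results))  # one prediction per camera
```

With the stated timings this also makes the throughput limit concrete: 6 sequential batches of 2 at 130 ms each is roughly 780 ms per full pass over 12 cameras, i.e. little more than 1 frame per second per camera, which is why frames must be dropped at 5 fps input.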
- They use their DeepStream SDK. Here's the link: https://developer.nvidia.com/deepstream-sdk – mibrahimy Apr 28 '20 at 08:06
- Yes, I know. In your opinion, is it possible to use the DeepStream SDK in our custom projects? – DeeeepNet May 03 '20 at 22:39
- Absolutely, DeepStream supports custom models/projects. – mibrahimy May 03 '20 at 23:52