I am working on a project where I need to send video from my IP camera to an Amazon Kinesis Video Stream (KVS) and use Amazon SageMaker to host my ML model, which will then analyse the video from the Kinesis Video Stream in real time.
I followed this link: https://aws.amazon.com/blogs/machine-learning/analyze-live-video-at-scale-in-real-time-using-amazon-kinesis-video-streams-and-amazon-sagemaker/
I have completed the following:
- Set up the IP camera and the Kinesis video stream, so video is flowing from the IP camera to KVS
- Set up the CloudFormation template for KIT (as mentioned in the link)
I am stuck on this particular step:
How do I get the video frames from AWS Fargate into the SageMaker notebook as input (step 3 in the image below)? The link mentions that KIT comes pre-bundled with a custom Lambda function written to process the prediction output of one of the Amazon SageMaker examples that uses the Object Detection algorithm. I am not sure how this algorithm takes its input from KVS.
Can someone please help me out with this?