
I am working on a project where I need to send the video from my IP camera to a Kinesis Video Stream and use SageMaker to host my ML model, which will then analyse the video from the Kinesis Video Stream in real time.
I followed this link: https://aws.amazon.com/blogs/machine-learning/analyze-live-video-at-scale-in-real-time-using-amazon-kinesis-video-streams-and-amazon-sagemaker/
I have completed the following:

  1. Set up the IP camera and the Kinesis Video Stream to send video from the IP camera to KVS
  2. Set up the CloudFormation template for KIT (as mentioned in the link)

I am stuck on this particular point:

  1. How do I get the video frames from AWS Fargate into the SageMaker notebook as input (as shown in the image below, step 3)? The link mentions that KIT comes pre-bundled with a custom Lambda function written to process the prediction output of one of the Amazon SageMaker examples that uses the Object Detection algorithm. I am not sure how this algorithm takes its input from KVS.

    [architecture diagram from the linked blog post, step 3]

I would appreciate it if someone could help me out with this.

2 Answers


Hi, I am a cloud architect in South Korea.

First, remember that the Lambda function does not connect to the Kinesis Video Stream directly. So how does the example get frame data from the Kinesis Video Stream? It is the ECS (Fargate) Docker image defined in the CloudFormation template:

DockerImageRepository:
    Type: String
    Default: >-
      528560246458.dkr.ecr.us-east-1.amazonaws.com/kinesisvideosagemakerintegration_release:V1.0.3
    Description: Docker image for Kinesis Video Stream & SageMaker Integration Driver.

This snippet is from the CloudFormation template (YAML). In it you can see which Docker image the stack deploys, so the frame-consuming logic lives inside that Docker image, and that is the code you need to check.

I can't open the code inside the Docker image, but I can find the related Java code on GitHub:

https://github.com/aws/amazon-kinesis-video-streams-parser-library

https://github.com/aws/amazon-kinesis-video-streams-parser-library/tree/master/src/main/java/com/amazonaws/kinesisvideo/parser/utilities

https://github.com/aws/amazon-kinesis-video-streams-parser-library/blob/master/src/main/java/com/amazonaws/kinesisvideo/parser/utilities/FrameRendererVisitor.java

https://github.com/aws/amazon-kinesis-video-streams-parser-library/blob/master/src/main/java/com/amazonaws/kinesisvideo/parser/utilities/OutputSegmentMerger.java

This code is most likely what runs inside the Docker image. First, it reads I-frames from the Kinesis Video Stream. Second, it compares each I-frame with the previous one to detect changes. Third, it sends the frame to the SageMaker endpoint and gets the SageMaker inference back. Finally, it sends the inference to Kinesis Data Streams (not Kinesis Video Streams), and from there the Lambda function receives the SageMaker inference.
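Below is only a rough sketch of the middle part of that flow, not the actual KIT driver code. It assumes a frame has already been decoded into a BufferedImage by the parser library (for example via H264FrameDecoder, linked above); the endpoint name, stream name and content type are placeholder assumptions you would replace:

    // Hedged sketch of the Fargate-side driver step, NOT the real KIT code.
    // Assumption: the parser library has already decoded an I-frame to a BufferedImage.
    // "my-object-detection-endpoint" and "my-inference-stream" are made-up names.
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import javax.imageio.ImageIO;

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.PutRecordRequest;
    import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntime;
    import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntimeClientBuilder;
    import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointRequest;
    import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointResult;

    public class FrameToSageMakerSketch {

        private final AmazonSageMakerRuntime sageMaker =
                AmazonSageMakerRuntimeClientBuilder.defaultClient();
        private final AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        /** Scores one decoded frame and forwards the result to a Kinesis Data Stream. */
        public void scoreFrame(BufferedImage frame, String fragmentNumber) throws IOException {
            // JPEG-encode the decoded frame for the object-detection endpoint.
            ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
            ImageIO.write(frame, "jpg", jpeg);

            // Send the frame to the SageMaker endpoint (content type depends on your model).
            InvokeEndpointRequest request = new InvokeEndpointRequest()
                    .withEndpointName("my-object-detection-endpoint")
                    .withContentType("image/jpeg")
                    .withBody(ByteBuffer.wrap(jpeg.toByteArray()));
            InvokeEndpointResult result = sageMaker.invokeEndpoint(request);
            String inferenceJson = StandardCharsets.UTF_8.decode(result.getBody()).toString();

            // Publish the inference to a plain Kinesis Data Stream; the KIT Lambda
            // is subscribed to a stream like this and post-processes the records.
            kinesis.putRecord(new PutRecordRequest()
                    .withStreamName("my-inference-stream")
                    .withPartitionKey(fragmentNumber)
                    .withData(ByteBuffer.wrap(inferenceJson.getBytes(StandardCharsets.UTF_8))));
        }
    }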

I am trying this example in a product under development. You can follow my GitHub if you want to get in touch:

https://github.com/WooSung-Jung

  • This sample project from AWS is very frustrating, as they don't provide any of the actual code that does the consuming, just a reference to a Docker image with the compiled output! – joshwa Aug 19 '21 at 16:58

This AWS sample Lambda code would be a good reference.

The documentation is here: https://github.com/aws/amazon-kinesis-video-streams-parser-library#kinesisvideorekognitionlambdaexample

The Lambda function can be triggered by Kinesis Data Stream input, so it receives the inference records that the consumer writes to that stream.
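As a rough illustration only (not the bundled KIT Lambda itself), a handler triggered by that Kinesis Data Stream could look like the sketch below; it just logs each inference payload, whereas the real function post-processes the Object Detection output:

    // Hedged sketch of a Lambda handler triggered by the Kinesis Data Stream that
    // carries the SageMaker inference records; not the actual KIT Lambda code.
    // Requires the aws-lambda-java-core and aws-lambda-java-events libraries.
    import java.nio.charset.StandardCharsets;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

    public class InferenceLambdaSketch implements RequestHandler<KinesisEvent, Void> {

        @Override
        public Void handleRequest(KinesisEvent event, Context context) {
            for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
                // Each record's data is the inference JSON the consumer published.
                String payload = StandardCharsets.UTF_8
                        .decode(record.getKinesis().getData())
                        .toString();
                context.getLogger().log("SageMaker inference: " + payload);
            }
            return null;
        }
    }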
