
We currently have three machine learning models in production in our team (two classifiers and one time-series model). SageMaker Studio with SageMaker Model Monitor wasn't the right option for us because of our CI/CD architecture, so we now serve predictions from our models in an ECS container.

We now want to apply proper model monitoring to these models. My idea is to store ground-truth and prediction data in S3 and build monitoring dashboards on top of it in QuickSight via Athena.
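A minimal sketch of what I have in mind, assuming a partitioned Parquet layout so Athena can query the data (bucket name, prefix, and schema are just illustrative, not our actual setup):

```python
# Log predictions and (later) ground truth to S3 as Parquet under a
# dt= partition so Athena can query and join them.
# Bucket name, key prefix, and schema are illustrative.
import datetime
import io
import uuid

import boto3
import pandas as pd


def log_records(df: pd.DataFrame, kind: str) -> None:
    """Write one batch of records as a Parquet object under a dt= partition."""
    today = datetime.date.today().isoformat()
    key = f"model-monitoring/{kind}/dt={today}/{uuid.uuid4()}.parquet"
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # requires pyarrow
    boto3.client("s3").put_object(
        Bucket="my-monitoring-bucket", Key=key, Body=buf.getvalue()
    )


# Predictions are logged at inference time, ground truth when it arrives;
# the two tables would be joined on prediction_id in Athena.
predictions = pd.DataFrame(
    {"prediction_id": ["abc-123"], "model": ["churn_clf"], "score": [0.87]}
)
log_records(predictions, kind="predictions")
```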

My question is: Is this a good way of doing this? Can we apply the right metrics this way?

JanBennk

1 Answer


The long and the short of it is that no one can give you a complete answer, because model monitoring is a vast, industry-wide problem. You need to learn how it works in general in order to figure out what to implement for your use case: the desired performance metrics, the distance metrics for drift detection, and the tech stack.
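To make "distance metrics (drift)" concrete, here is one possible sketch: the Population Stability Index (PSI) between your training score distribution and recent production scores. It is just one metric among many (KS statistic, KL divergence, etc.), and the bin count and example data below are illustrative only:

```python
# Population Stability Index (PSI) between a reference (training) score
# distribution and recent production scores. Bin count and data are examples.
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_cur - p_ref) * ln(p_cur / p_ref)) over shared bins."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Add a small constant so empty bins don't cause division by zero.
    ref_pct = (ref_counts + 1e-6) / (ref_counts.sum() + 1e-6 * bins)
    cur_pct = (cur_counts + 1e-6) / (cur_counts.sum() + 1e-6 * bins)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, size=10_000)
prod_scores = rng.beta(2.5, 5, size=2_000)  # slightly shifted distribution
print(f"PSI: {psi(train_scores, prod_scores):.3f}")  # ~0.1 suggests mild drift
```

A job like this could run on a schedule over the S3 data you already plan to collect, with the results fed into the same QuickSight dashboards.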

You will have to work through the code examples and article below, then reimplement and refactor them for your use case.

1. Code:

https://github.com/graviraja/MLOps-Basics/tree/main/week_9_monitoring

2. Article:

https://www.ravirajag.dev/blog/mlops-serverless

3. GitHub/SageMaker: Model monitoring with your own container:

https://github.com/aws-samples/sagemaker-model-monitor-bring-your-own-container

4. GitHub/SageMaker: Visualize model monitoring data:

https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_model_monitor/visualization/SageMaker-Model-Monitor-Visualize.ipynb
joe hoeller