I've set up Container Insights on an ECS cluster running Fargate.
I'm experiencing quite a big delay before metrics show up in CloudWatch Container Insights.
When looking at the performance log group /aws/ecs/containerinsights/{cluster_name}/performance in Logs Insights:
- I can see a delay of 130s to 170s between @timestamp and @ingestionTime (see the query sketch below for how I measured this).
- I also see a delay of roughly 60s between the advertised @ingestionTime and the time the corresponding log event actually appears in a Logs Insights query.
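For reference, here's a minimal sketch of how I'm measuring the first delay, assuming a hypothetical cluster name and that get_query_results returns timestamps as "YYYY-MM-DD HH:MM:SS.mmm" strings (adjust the parsing if your output differs):

```python
import time
from datetime import datetime

import boto3

CLUSTER_NAME = "my-cluster"  # hypothetical, replace with your cluster name
LOG_GROUP = f"/aws/ecs/containerinsights/{CLUSTER_NAME}/performance"

logs = boto3.client("logs")

# Ask Logs Insights for @timestamp and @ingestionTime of recent performance events.
query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 900,  # last 15 minutes
    endTime=int(time.time()),
    queryString="fields @timestamp, @ingestionTime | sort @timestamp desc | limit 50",
)["queryId"]

# Poll until the query finishes.
while True:
    resp = logs.get_query_results(queryId=query_id)
    if resp["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)


def parse(ts: str) -> datetime:
    # Assumed result format, e.g. "2024-01-01 12:34:56.789"
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")


# Each result row is a list of {"field": ..., "value": ...} dicts.
for row in resp["results"]:
    fields = {f["field"]: f["value"] for f in row}
    delay = (parse(fields["@ingestionTime"]) - parse(fields["@timestamp"])).total_seconds()
    print(f"{fields['@timestamp']}  ingestion delay: {delay:.0f}s")
```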
This apparently also impacts auto scaling, making it very slow to react.
The metrics are 60s apart, taken at the start of every minute.
Has anyone experienced this, or does anyone know how to tune it?