I started adding CloudWatch alarms for my ECS/Fargate services, but they all show "Insufficient data".
My first theories were that there was no CloudWatch agent running on my instances, or that the task role or security groups don't allow access to CloudWatch, but there are some conflicting observations:
a) Even metrics in the AWS/ApplicationELB namespace (e.g. TargetResponseTime) show "Insufficient data". I was under the assumption that the ELB publishes these automatically to CloudWatch(?) (A way I planned to check this is sketched right after this list.)
b) In the CloudWatch console under Insights -> Container Insights I can see "Avg CPU" and "Avg memory %" for the services and tasks, which indicates that there is some agent running on the instances and that it can publish to CloudWatch(?)
c) Under Log groups there is a /aws/ecs/containerinsights//performance group with FargateTelemetry-nnnn and ServiceTelemetry- log streams, which would lead to the same conclusion as in b)(?)
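For context, this is the check I was planning to run to see which metric/dimension combinations CloudWatch actually has in those two namespaces (a boto3 sketch; it assumes default credentials/region from the environment and nothing from my stack beyond the namespace names):

import boto3

# List every metric/dimension combination CloudWatch currently knows about in the
# two namespaces my alarms reference, to compare against the alarm definitions.
cloudwatch = boto3.client("cloudwatch")
paginator = cloudwatch.get_paginator("list_metrics")

for namespace in ("AWS/ECS", "AWS/ApplicationELB"):
    for page in paginator.paginate(Namespace=namespace):
        for metric in page["Metrics"]:
            dims = ", ".join(f"{d['Name']}={d['Value']}" for d in metric["Dimensions"])
            print(f"{namespace} {metric['MetricName']} [{dims}]")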
Could a misconfiguration of the alarm parameters lead to this kind of inconsistency and result in the "Insufficient data" state?
I created the alarm in CloudFormation:
ServerEndpointCPUAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Server endpoint CPU high
    AlarmName: ServerEndpointCPUHigh
    AlarmActions:
      - !Ref AlertTopic
    ComparisonOperator: GreaterThanOrEqualToThreshold
    Namespace: AWS/ECS
    MetricName: CPUUtilization
    Statistic: Maximum
    DatapointsToAlarm: 3
    EvaluationPeriods: 5
    Period: 60
    Threshold: 80
    Unit: Percent
    Dimensions:
      - Name: Cluster
        Value: !Ref Cluster
      - Name: Service
        Value: !Ref ServerEndpointService
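And this is the sanity check I intended to run with exactly the namespace/metric/dimensions/statistic/period/unit the alarm uses, to see whether that combination returns any datapoints at all (boto3 sketch; "my-cluster" and "my-service" are placeholders for whatever the two !Refs resolve to in my stack):

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Query the last hour with the same parameters the alarm is configured with.
# If this returns zero datapoints, that would explain the alarm never leaving
# the "Insufficient data" state.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "Cluster", "Value": "my-cluster"},    # placeholder for !Ref Cluster
        {"Name": "Service", "Value": "my-service"},    # placeholder for !Ref ServerEndpointService
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=60,
    Statistics=["Maximum"],
    Unit="Percent",
)
print(len(response["Datapoints"]), "datapoints in the last hour")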
Thankful for any pointers or clarifications,
- Nik