I am confused. I have an EC2 t2.micro instance (I know, micro, but until recently it was fine) with 5 Kafka consumers which, according to htop, use 100% CPU all the time. Kafka seems to confirm this: it reports growing consumer lag, so the consumers can't keep up.
However, when I look at CloudWatch for this instance's CPUUtilization, I see that it never goes above 10%. It always sits right below that value, which makes me think that I am either looking at the wrong metric, or that there is some scaling factor I should account for when setting up my CloudWatch alarms...
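For reference, this is roughly how I pull the metric (the instance ID and time range are placeholders; with basic monitoring the data points are 5 minutes apart):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2019-02-01T20:00:00Z --end-time 2019-02-01T21:00:00Z \
  --period 300 --statistics Average Maximum

Both the Average and the Maximum stay right below 10%.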
Update
I checked mpstat -P ALL (as suggested here), and it seems that the effect is now the opposite of what was reported 10 years ago:
20:45:07   CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
20:45:07   all   10,80    0,00    1,49    0,71    0,00    0,27   66,92    0,00    0,00   19,81
20:45:07     0   10,80    0,00    1,49    0,71    0,00    0,27   66,92    0,00    0,00   19,81
So apparently the hypervisor is stealing about 67% of the CPU: I can use at most roughly 10% of the physical core, and CloudWatch is showing my usage as a share of the entire physical CPU, not of my 10% allotment...
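If this is the t2 CPU-credit mechanism throttling me down to the baseline (my assumption), the credit balance should be at or near zero; something like this should confirm it (same placeholder instance ID and time range as above):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2019-02-01T20:00:00Z --end-time 2019-02-01T21:00:00Z \
  --period 300 --statistics Average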