
I am confused. I have an EC2 t2.micro instance (I know, a micro, but until recently it was fine) running 5 Kafka consumers which, according to htop, use 100% CPU all the time. Kafka seems to confirm this: we have consumer lag, so the consumers can't keep up.
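(For context, this is how the lag shows up; the consumer group name and broker address are placeholders:)

```
# ships with Kafka; the LAG column shows how far each consumer is behind
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group
```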

However, when I look at CloudWatch's CPUUtilization for this instance, I see that it never goes above 10%. It is always just below that value, which makes me think that I am either looking at the wrong metric, or that there is some factor I should apply when setting up my CloudWatch alarms...
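(If it matters: burstable instances also publish CPU credit metrics to CloudWatch. A minimal AWS CLI sketch to pull the credit balance; the instance ID and time range are placeholders:)

```
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2022-12-03T00:00:00Z --end-time 2022-12-04T00:00:00Z \
  --period 300 --statistics Average
```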

**Update**

I checked `mpstat -P ALL` (as suggested here), and it seems that the effect is now the opposite of what was reported 10 years ago:

20:45:07     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
20:45:07     all   10,80    0,00    1,49    0,71    0,00    0,27   66,92    0,00    0,00   19,81
20:45:07       0   10,80    0,00    1,49    0,71    0,00    0,27   66,92    0,00    0,00   19,81

So apparently I can use at most about 10% of the CPU: the ~67% %steal is the hypervisor throttling the instance, presumably because it has run out of CPU credits. CloudWatch, however, doesn't show usage of my share, but of the entire physical CPU...
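Since CloudWatch won't surface this, here is a rough sketch of how the steal figure can be watched from inside the instance (in the aggregate `cpu` line of /proc/stat, the field after softirq is steal time):

```
# sample /proc/stat twice and compute %steal over a 5-second window
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 rest < /proc/stat
sleep 5
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 rest < /proc/stat
total=$(( (u2+n2+s2+i2+w2+q2+sq2+st2) - (u1+n1+s1+i1+w1+q1+sq1+st1) ))
echo "steal: $(( 100 * (st2 - st1) / total ))%"
```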

  • Well, there is no better way to display this in a single number, is there? If they represented *"burstable"* resources (less than 100%, except when actually used) as a fraction of the dynamic limit, the number would be misleading in the other, much more relevant case, the non-burstable one (you would not choose burstable if you wanted consistent performance). – anx Dec 03 '22 at 22:45
  • Perhaps, but in that case, how can I tell whether my server is running at 100% CPU load? – george007 Dec 04 '22 at 11:20
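(A hedged sketch addressing that last comment: on a burstable instance the throttled state shows up in the credit metrics, so an alarm on CPUCreditBalance rather than CPUUtilization should catch it. The instance ID, threshold, and SNS topic ARN below are placeholders:)

```
aws cloudwatch put-metric-alarm \
  --alarm-name t2micro-cpu-credits-low \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 10 --comparison-operator LessThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```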
