1

I send metrics from CloudWatch to Datadog via Kinesis Firehose.
When I send multiple values of the same metric within the same second, Datadog always performs an average, even when I use a rollup(sum) function.

Example

I send three values for the same metric to CloudWatch in quick succession:

  aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 1
  aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 0
  aws cloudwatch put-metric-data --namespace example --metric-name test3 --value 0

And in Datadog the value appears as 0.33, i.e. Datadog performed an average: (screenshot: Value in Datadog)

Even with a rollup(sum, 300) the value is still 0.33: (screenshot: Value in Datadog with rollup)
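For reference, the query I'm running in Datadog looks roughly like this (the exact metric path after streaming may differ; `aws.example.test3` is just an assumption here):

  sum:aws.example.test3{*}.as_count().rollup(sum, 300)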

What's going on? How can I force Datadog to perform a sum instead of an average?

Daniel
  • What is the metric type? Seems like it may be displaying as a rate when you want a count https://docs.datadoghq.com/metrics/type_modifiers/ – bwest Mar 03 '22 at 17:50
  • @bwest using test3.as_count() provides the same result. – Daniel Mar 03 '22 at 19:14
  • What is the metric type? – bwest Mar 03 '22 at 19:28
  • @bwest I don't know. It's not shown on the metric summary page the way the documentation suggests (probably an older version). In AWS I tried setting the unit of the metric to Count and I still got the same result. – Daniel Mar 04 '22 at 05:15
  • I'm facing similar behavior with metrics created by CloudWatch Logs metric filters and streamed to DD. Were you able to find a root cause? – kolyaiks Dec 09 '22 at 05:30

1 Answer

0

I think the cause might be that Datadog's minimum resolution is 1 data point per minute (https://docs.datadoghq.com/developers/guide/data-collection-resolution-retention/). You pushed 3 data points within one minute, so Datadog can only keep one. Basically, Datadog has to convert the CloudWatch metrics into its own storage, which is limited to 1 data point per minute.
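If that's the case, one possible workaround is to pre-aggregate on the sender side so the single per-minute data point already carries the sum you want. A minimal sketch using a CloudWatch statistic set (the namespace, metric name, and values just mirror the three samples from the question):

  # Send one pre-aggregated data point instead of three raw values.
  # SampleCount=3, Sum=1 corresponds to the samples 1, 0, 0.
  aws cloudwatch put-metric-data --namespace example --metric-name test3 \
    --statistic-values SampleCount=3,Sum=1,Minimum=0,Maximum=1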

steve