
I have defined some alerts with expressions that look like this:

sum(rate(some_error_metric[1m])) BY (namespace,application) > 10
sum(rate(some_other_error_metric[1m])) BY (namespace,application) > 10
...

The above alerts currently fire when any of our applications emits these metrics at a rate of more than 10 per second, averaged over the trailing minute.

Rather than hard-coding a threshold of 10, I want to be able to specify a different threshold for each application.

e.g. application_1 should alert at a rate of 10, application_2 should alert at a rate of 20, etc.

Is this possible without duplicating the alerts for each application?

This Stack Overflow question, "Dynamic label values in Prometheus alerting rules", suggests that it might be possible to achieve this using recording rules. However, following the pattern suggested in its only answer results in recording rules that Prometheus doesn't seem to be able to parse:

  - record: application_1_warning_threshold
    expr: warning_threshold{application="application_1"} 10
  - record: application_2_warning_threshold
    expr: warning_threshold{application="application_2"} 20
  ...
rcgeorge23

1 Answer


Here's my configuration for a TasksMissing alert with varying per-job thresholds. The key idea is to record each threshold as a constant expression and attach the job-specific labels via the rule's labels field, rather than writing the value inline in the expression as in the question:

groups:
- name: availability.rules
  rules:

  # Expected number of tasks per job and environment.
  - record: job_env:up:count
    expr: count(up) without (instance)

  # Tasks actually up and running, per job and environment.
  - record: job_env:up:sum
    expr: sum(up) without (instance)

  # Ratio of up and running to expected tasks per job and environment.
  - record: job_env:up:ratio
    expr: job_env:up:sum / job_env:up:count

  # Global warning and critical availability ratio thresholds.
  - record: job:up:ratio_warning_threshold
    expr: 0.7
  - record: job:up:ratio_critical_threshold
    expr: 0.5


  # Job-specific warning and critical availability ratio thresholds.

  # Always alert if even one Prometheus instance is down:
  - record: job:up:ratio_critical_threshold
    labels:
      job: prometheus
    expr: 0.99

  # Never alert when some-batch-job instances are down:
  - record: job:up:ratio_warning_threshold
    labels:
      job: some-batch-job
    expr: 0
  - record: job:up:ratio_critical_threshold
    labels:
      job: some-batch-job
    expr: 0
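
  # Together, the rules above yield one constant series per threshold, e.g.:
  #
  #   job:up:ratio_critical_threshold{}                     0.5
  #   job:up:ratio_critical_threshold{job="prometheus"}     0.99
  #   job:up:ratio_critical_threshold{job="some-batch-job"} 0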


  # TasksMissing is fired when a certain percentage of tasks belonging to a job are down. Namely:
  #
  #     job_env:up:ratio < job:up:ratio_(warning|critical)_threshold
  #
  # with a job-specific warning/critical threshold when defined, or the global default otherwise.

  - alert: TasksMissing
    expr: |
      # Default warning threshold is < 70%
        job_env:up:ratio
      < on(job) group_left()
        (
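            # Take the job-specific threshold where one is defined...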
            job:up:ratio_warning_threshold
          or on(job)
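              # ...otherwise fan the unlabelled global default out to every job
              # present in job_env:up:ratio (the "* 0 +" trick).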
              count by(job) (job_env:up:ratio) * 0
            + on() group_left()
              job:up:ratio_warning_threshold{job=""}
        )
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: Tasks missing for {{ $labels.job }} in {{ $labels.env }}
      description: '...'

  - alert: TasksMissing
    expr: |
      # Default critical threshold is < 50%
        job_env:up:ratio
      < on(job) group_left()
        (
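            # Same job-specific/global-default lookup as above, with the critical threshold: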
            job:up:ratio_critical_threshold
          or on(job)
              count by(job) (job_env:up:ratio) * 0
            + on() group_left()
              job:up:ratio_critical_threshold{job=""}
        )
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: Tasks missing for {{ $labels.job }} in {{ $labels.env }}
      description: '...'

Alin Sînpălean