We have a multi-tenant Kubernetes cluster and use Prometheus Alertmanager to send alerts to each tenant via Slack.
Our config includes this:
slack_configs:
- send_resolved: true
  channel: '{{ printf "topic-svc-%.11s" (index (index .Alerts 0).Labels "namespace") }}'
(The %.11s truncates the namespace to at most 11 characters, so together with the 10-character "topic-svc-" prefix the channel name stays within Slack's 21-character limit.)
This works great if the Slack channel exists, but if the channel doesn't exist, the alert disappears into the ether (not good for an alert!).
The Alertmanager logs don't say much; it's a generic error with no indication of which channel or tenant failed:
alertmanager-k8s-0 alertmanager level=error ts=2018-11-09T15:01:52.134984182Z caller=dispatch.go:280 component=dispatcher msg="Notify for alerts failed" num_alerts=3 err="cancelling notify retry for \"slack\" due to unrecoverable error: unexpected status code 404"
I've tried all sorts of options and checked Stack Overflow, but all the examples seem to use a simple fixed name for the Slack channel.
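One direction I can sketch (not a confirmed solution): instead of slack_configs, point a webhook_configs receiver at a tiny relay that posts via Slack's chat.postMessage API and, when Slack answers channel_not_found, reposts to a catch-all channel so the alert isn't lost. Everything here is an assumption on my part: the relay itself, the SLACK_BOT_TOKEN env var, the channel query parameter, and the "alerts-unrouted" fallback channel name.

```go
// Sketch of a relay Alertmanager's webhook_configs could target.
// Hypothetical setup: Slack bot token in SLACK_BOT_TOKEN, tenant channel
// passed as ?channel=..., fallback channel "alerts-unrouted" must exist.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

const fallbackChannel = "alerts-unrouted" // hypothetical catch-all channel

// postToSlack sends one message via chat.postMessage and returns the
// Slack-level error string (e.g. "channel_not_found"), "" on success.
func postToSlack(channel, text string) (string, error) {
	body, _ := json.Marshal(map[string]string{"channel": channel, "text": text})
	req, _ := http.NewRequest("POST", "https://slack.com/api/chat.postMessage", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("SLACK_BOT_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		OK    bool   `json:"ok"`
		Error string `json:"error"`
	}
	json.NewDecoder(resp.Body).Decode(&out)
	if !out.OK {
		return out.Error, nil
	}
	return "", nil
}

// chooseRetryChannel decides whether a failed send should be retried on
// the fallback channel: only for missing channels, never in a loop.
func chooseRetryChannel(slackErr, wanted string) (string, bool) {
	if slackErr == "channel_not_found" && wanted != fallbackChannel {
		return fallbackChannel, true
	}
	return "", false
}

func handler(w http.ResponseWriter, r *http.Request) {
	payload, _ := io.ReadAll(r.Body)
	channel := r.URL.Query().Get("channel")
	slackErr, err := postToSlack(channel, string(payload))
	if err == nil && slackErr != "" {
		if retry, ok := chooseRetryChannel(slackErr, channel); ok {
			postToSlack(retry, "undeliverable alert for #"+channel+": "+string(payload))
		}
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/alert", handler)
	http.ListenAndServe(":8080", nil)
}
```

The point of the pure chooseRetryChannel helper is that the fallback decision can't loop: a failure on the fallback channel itself is never retried.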