
I have deployed prometheus-operator using Helm charts. I need to customize the Prometheus StatefulSet, but couldn't do so due to the nature of StatefulSets. I deleted the StatefulSet with "kubectl delete sts prometheus-monitoring-prometheus-oper-prometheus --cascade=false", but strangely the StatefulSet recreates itself.

Because of this issue, I am unable to update my StatefulSet.

Please help me troubleshoot this issue.

udayr

2 Answers

  1. Check the prometheus-operator Helm chart docs to see whether the chart already exposes the change you are looking for.

  2. Use helm upgrade to make any modification to the existing release, rather than editing the generated resources manually.
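For example, the second point would look something like this (the release name "monitoring" and the values key shown are assumptions; substitute the ones from your own release and chart version):

```shell
# Change the release's values instead of editing the StatefulSet directly;
# the operator reconciles the StatefulSet from the rendered custom resource.
helm upgrade monitoring prometheus-community/kube-prometheus-stack \
  --reuse-values \
  --set prometheus.prometheusSpec.resources.requests.memory=7168Mi
```

Any field you set this way survives operator reconciliation, because the change lives in the release rather than in the StatefulSet.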

vinodk

The issue here is that the Prometheus StatefulSet is controlled by the prometheus custom resource (monitoring.coreos.com CRD).


The StatefulSet has an ownerReference, which indicates that the resource is controlled by another resource (a CRD-managed object, in this case). To make any change, modification, or deletion stick, you need to go through the owner named in that ownerReference.
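You can confirm this yourself by printing the StatefulSet's ownerReferences (the StatefulSet name below is the one from the question; for an operator-managed instance the owner kind is Prometheus):

```shell
# Show who owns the StatefulSet -- this is why it comes back after deletion:
# the owner (a Prometheus custom resource) still exists and gets reconciled.
kubectl get statefulset prometheus-monitoring-prometheus-oper-prometheus \
  -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'
```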

In this case the kind is Prometheus, which has the following configuration:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
  labels:
    app: kube-prometheus-stack-prometheus
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kube-prometheus-stack
    app.kubernetes.io/version: 37.3.0
    chart: kube-prometheus-stack-37.3.0
    heritage: Helm
    release: prometheus
  name: prometheus-kube-prometheus-prometheus
  namespace: default
spec:
  additionalScrapeConfigs:
    key: additional-scrape-configs.yaml
    name: prometheus-kube-prometheus-prometheus-scrape-confg
  alerting:
    alertmanagers:
    - apiVersion: v2
      name: prometheus-kube-prometheus-alertmanager
      namespace: default
      pathPrefix: /
      port: web
  enableAdminAPI: false
  evaluationInterval: 30s
  externalLabels:
    cluster_name: test-cluster-01
    env: test
    kubernetes_cluster_name: test-cluster-01
    replica: test-cluster-01
  externalUrl: http://prometheus-kube-prometheus-prometheus.default:9090
  image: quay.io/prometheus/prometheus:v2.28.1
  listenLocal: false
  logFormat: logfmt
  logLevel: info
  paused: false
  podMonitorNamespaceSelector: {}
  podMonitorSelector:
    matchLabels:
      release: prometheus
  portName: web
  probeNamespaceSelector: {}
  probeSelector:
    matchLabels:
      release: prometheus
  remoteWrite:
  - url: http://prometheus01.domain.com:8990/api/v1/receive
  replicas: 1
  resources:
    limits:
      cpu: 2000m
      memory: 12288Mi
    requests:
      cpu: 500m
      memory: 7168Mi
  retention: 1d
  routePrefix: /
  ruleNamespaceSelector: {}
  ruleSelector:
    matchLabels:
      release: prometheus
  scrapeInterval: 30s
  securityContext:
    fsGroup: 2000
    runAsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-kube-prometheus-prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
  shards: 1
  version: v2.28.1

kubectl get prometheus

NAME                                    VERSION   REPLICAS   AGE
prometheus-kube-prometheus-prometheus   v2.28.1   1          141d
prometheus-prometheus-oper-prometheus   v2.28.1   1          141d

Hence, to delete the Prometheus StatefulSet you need to run:

kubectl delete prometheus/prometheus-prometheus-oper-prometheus

If you want to modify resource limits, edit the Prometheus custom resource instead; the operator will then update the StatefulSet for you.
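As a sketch, that edit could be done with kubectl patch against the Prometheus resource listed above (the limit values here are made up for illustration):

```shell
# Patch the Prometheus custom resource, not the StatefulSet; the operator
# propagates spec.resources down to the pods it manages.
kubectl patch prometheus prometheus-kube-prometheus-prometheus \
  --type merge \
  -p '{"spec":{"resources":{"limits":{"cpu":"3000m","memory":"16Gi"}}}}'
```

Note that if the resource is itself managed by Helm, a helm upgrade with updated values is the cleaner route, since a direct patch will be overwritten on the next release upgrade.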


redzack