
For the kube-prometheus-stack we kept adding dashboard configs to the /grafana/dashboards folder so that more and more dashboards would be provisioned automatically.

Then one day we ran:

kube-prometheus-stack>helm -n monitoring upgrade prometheus ./ -f ./values-core.yaml 

and got:

Error: UPGRADE FAILED: create: failed to create: Secret "sh.helm.release.v1.prometheus.v16" is invalid: data: Too long: must have at most 1048576 bytes

What is the intended way to work around this limitation? We need to keep adding provisioned dashboards to the chart.

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Eljah

2 Answers


As the Helm documentation explains, the purpose of the auto-generated Secret is to record release information. By Kubernetes design, individual Secrets are limited to 1MiB in size. So the Secret size cap is a hard Kubernetes limitation, and the size of the release Secret grows roughly in proportion to the size of the Helm chart.
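A quick way to check how close a release is to that ceiling is to decode the release payload and count its bytes; a minimal sketch, assuming the namespace and release name from the question (the revision number is illustrative):

# List the Helm release secrets for the namespace first:
kubectl -n monitoring get secret -l owner=helm

# Then decode the payload of the newest one and count its bytes.
# Helm stores the payload in the "release" key, base64-encoded by the API:
kubectl -n monitoring get secret sh.helm.release.v1.prometheus.v15 \
  -o jsonpath='{.data.release}' | base64 -d | wc -c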

In this use case, the main reason the Helm chart is so large is that you use Grafana's dashboardProvider to deploy ready-made dashboard JSON files. The provider loads every JSON file into kube-prometheus-stack to create dashboard ConfigMaps, so the day you add one dashboard too many, the release Secret hits the limit and you get this error.

If you don't want to change the Helm storage backend, there is an alternative way to work around it. The main idea is to take the task of creating dashboard ConfigMaps away from Grafana's dashboardProvider and create the ConfigMaps ourselves.

First, we can drop this kind of declaration from kube-prometheus-stack:

    dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
          - name: 'default'
            orgId: 1
            folder: 'default'
            type: file
            disableDeletion: true
            editable: true
            options:
              path: /var/lib/grafana/dashboards/default
    dashboards:
      default:
      {{- range $_, $file := ( exec "bash" (list "-c" "echo -n dashboards/default/*.json") | splitList " " ) }}
        {{ trimSuffix (ext $file) (base $file) }}:
          json: |
            {{- readFile $file }}
      {{- end }}

Then we create a separate Helm chart that renders one ConfigMap per dashboard group.

Helm chart template

{{- range $config, $data := .Values.configs }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-{{ $config }}
  labels:
    grafana_dashboard: "1"
  annotations:
    grafana_folder: {{ $config }}
data:
  {{- range $key, $val := $data }}
  {{ $key }}.json: |
    {{ mustToJson $val }}
  {{- end }}
{{- end }}

Helm values: read each dashboard JSON file and embed it as a value (note that exec and readFile are helmfile template functions, so this values file is rendered by helmfile before being passed to Helm):

configs:
  default:
  {{- range $_, $file := ( exec "bash" ( list "-c" (printf "echo -n dashboards/default/*.json")) | splitList " ") }}
    {{ trimSuffix (ext $file) (base $file) }}:
      {{ readFile $file }}
  {{- end }}

Now, when we deploy this separate dashboard chart, it should automatically generate all the ConfigMaps containing the dashboard JSON.
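For example, a deploy-and-verify sketch (the chart path and release name below are hypothetical; adjust to your layout):

# Install or upgrade the separate dashboards chart:
helm -n monitoring upgrade --install grafana-dashboards ./grafana-dashboards

# The generated ConfigMaps must carry the label the sidecar watches for:
kubectl -n monitoring get configmap -l grafana_dashboard=1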

Finally, the last step: set up the Grafana sidecar configuration so it picks the dashboards up from those ConfigMaps.

grafana:
  defaultDashboardsEnabled: false
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      annotations:
        grafana_folder: "Default"
      folder: /tmp/dashboards
      folderAnnotation: grafana_folder
      provider:
        foldersFromFilesStructure: true

After upgrading kube-prometheus-stack, wait a while, or watch the Grafana sidecar pod logs: you will see the dashboard ConfigMaps being loaded into the pod and ADDed as dashboards.
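To watch those logs, something like the following should work; the deployment and sidecar container names are assumptions based on a release called "prometheus" (check kubectl -n monitoring get deploy if yours differ):

# Follow the dashboard sidecar's logs; it prints each ConfigMap it
# discovers and the dashboard files it writes:
kubectl -n monitoring logs deploy/prometheus-grafana -c grafana-sc-dashboard -f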

Rong

Secret ... is invalid: data: Too long: must have at most 1048576 bytes

This is a well-known limitation of Kubernetes Secrets (as of Kubernetes 1.23 at the time of writing). The official Kubernetes documentation says:

Individual secrets are limited to 1MiB in size. This is to discourage creation of very large secrets which would exhaust the API server and kubelet memory. However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets is a planned feature.

So, first of all, check whether any unnecessary files or directories are stored in your chart directory and remove them. You have probably already done that, though.
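A quick sketch for spotting what bloats the chart (the .tgz name depends on the chart's name and version fields):

# List the largest files under the chart directory:
du -ah . | sort -rh | head -20

# Or package the chart and inspect what Helm actually ships;
# files matched by .helmignore are excluded from the archive:
helm package .
tar -tvzf kube-prometheus-stack-*.tgz | sort -k3 -rn | head -20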

To address such issues Helm introduced an SQL storage backend:

Using such a storage backend is particularly useful if your release information weighs more than 1MB (in which case, it can't be stored in Secrets because of internal limits in Kubernetes).

To enable the SQL backend, you'll need to deploy an SQL database and set the environment variable HELM_DRIVER to sql. The database connection details are set with the environment variable HELM_DRIVER_SQL_CONNECTION_STRING.

You can set it in a shell as follows:

export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm-postgres:5432/helm?user=helm&password=changeme"

(Note the quotes: an unquoted & in the connection string would be interpreted by the shell.)
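With those variables exported (the host and credentials above are placeholders), subsequent Helm commands in that shell store release state in PostgreSQL instead of Secrets, so the original upgrade should no longer hit the 1MiB cap:

# Release history now lives in the SQL backend, not in Secrets:
helm -n monitoring upgrade prometheus ./ -f ./values-core.yaml
helm -n monitoring history prometheus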

Note: Only PostgreSQL is supported at this moment.

If you want to switch from the default backend to the SQL backend, you'll have to do the migration on your own. You can retrieve the existing release information with the following command:

kubectl get secret --all-namespaces -l "owner=helm"

You can check some recommendations on this Helm webpage.

mozello