
I have an Elasticsearch + Kibana cluster on Kubernetes. We want to bypass authentication so that users land directly on the dashboard without having to log in.

We have managed to implement Elasticsearch anonymous access on our Elastic nodes. Unfortunately, that is not what we want: it does not let users bypass the Kibana login. What we need is Kibana Anonymous Authentication.
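
For reference, here is a minimal sketch of what we enabled on the Elasticsearch side, passed as env variables in the same style as the Deployments below (the user and role names are just illustrative placeholders):

- name: xpack.security.authc.anonymous.username
  value: anonymous_user
- name: xpack.security.authc.anonymous.roles
  value: anonymous_role
- name: xpack.security.authc.anonymous.authz_exception
  value: "true"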

Unfortunately, we can't figure out how to implement it. We declare Kubernetes objects with YAML (Deployments, Services, etc.) without using ConfigMaps; to add Elasticsearch/Kibana config, we pass the settings through env variables.

For example, here is how we define the es01 Kubernetes Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "37"
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (a9d05d509)
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7RVTW/jNhD9KwueWkCmJfl
    objectset.rio.cattle.io/id: 3dc011c2-d20c-465b-a143-2f27f4dc464f
  creationTimestamp: "2022-05-24T15:17:53Z"
  generation: 37
  labels:
    io.kompose.service: es01
    objectset.rio.cattle.io/hash: 83a41b68cabf516665877d6d90c837e124ed2029
  name: es01
  namespace: waked-elk-pre-prod-test
  resourceVersion: "403573505"
  uid: e442cf0a-8100-4af1-a9bc-ebf65907398a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      io.kompose.service: es01
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2022-09-09T13:41:29Z"
        kompose.cmd: kompose convert
        kompose.version: 1.26.1 (a9d05d509)
      creationTimestamp: null
      labels:
        io.kompose.service: es01
    spec:
      affinity: {}
      containers:
      - env:
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              key: ELASTIC_PASSWORD
              name: elastic-credentials
              optional: false
        - name: cluster.initial_master_nodes
          value: es01,es02,es03

And here is the one for the Kibana node:

apiVersion: apps/v1
kind: Deployment
metadata: 
  annotations:
    deployment.kubernetes.io/revision: "41"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.130.10.6","10.130.10.7","10.130.10.8"],"port":80,"protocol":"HTTP","serviceName":"waked-elk-pre-prod-test:kibana","ingressName":"waked-elk-pre-prod-test:waked-kibana-ingress","hostname":"waked-kibana-pre-prod.cws.cines.fr","path":"/","allNodes":false}]'
    kompose.cmd: kompose convert
    kompose.version: 1.26.1 (a9d05d509)
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7ST34/.........iNhDH/5WTn1o
    objectset.rio.cattle.io/id: 5b109127-cb95-4c93-857d-12399979d85a
  creationTimestamp: "2022-05-19T08:37:59Z"
  generation: 49
  labels:
    io.kompose.service: kibana
    objectset.rio.cattle.io/hash: 0d2e2477ef3e7ee3c8f84b485cc594a1e59aea1d
  name: kibana
  namespace: waked-elk-pre-prod-test
  resourceVersion: "403620874"
  uid: 6f22f8b1-81da-49c0-90bf-9e773fbc051b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      io.kompose.service: kibana
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2022-09-21T13:00:47Z"
        kompose.cmd: kompose convert
        kompose.version: 1.26.1 (a9d05d509)
        kubectl.kubernetes.io/restartedAt: "2022-11-08T14:04:53+01:00"
      creationTimestamp: null
      labels:
        io.kompose.service: kibana
    spec:
      affinity: {}
      containers:
      - env:
        - name: xpack.security.authc.providers.anonymous.anonymous1.order
          value: "0"
        - name: xpack.security.authc.providers.anonymous.anonymous1.credentials.username
          value: username
        - name: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
          value: password
        image: docker.elastic.co/kibana/kibana:8.2.0
        imagePullPolicy: IfNotPresent
        name: kibana
        ports:
        - containerPort: 5601
          name: 5601tcp
          protocol: TCP
        resources:
          limits:
            memory: "1073741824"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/kibana/config/certs
          name: certs
        - mountPath: /usr/share/kibana/data
          name: kibanadata
      dnsPolicy: ClusterFirst
      nodeName: k8-worker-cpu-3.cines.fr
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: certs
        persistentVolumeClaim:
          claimName: certs
      - name: kibanadata
        persistentVolumeClaim:
          claimName: kibanadata
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-11-08T13:45:44Z"
    lastUpdateTime: "2022-11-08T13:45:44Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-06-07T14:12:17Z"
    lastUpdateTime: "2022-11-08T13:45:44Z"
    message: ReplicaSet "kibana-84b65ffb69" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 49
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

We don't face any problem when modifying/applying the YAML, and the pod runs flawlessly. But it just doesn't work: when we try to access Kibana, we still land on the login page.
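
For comparison, if we read the Kibana anonymous authentication docs correctly, the kibana.yml block we are trying to express through those env variables would look roughly like this (username and password being placeholders for the anonymous service account credentials):

xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "username"
      password: "password"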

Both files are a bit cropped. Feel free to ask for the full files if needed.

Have a good night!

Jules Civel