
Hello everyone, I'm trying to deploy Heartbeat on Kubernetes to monitor Kubernetes components.

I got the YAML file from the official Elastic documentation.

This is the full configuration file:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: heartbeat
    namespace: kube-system
    labels:
      k8s-app: heartbeat
  spec:
    selector:
      matchLabels:
        k8s-app: heartbeat
    template:
      metadata:
        labels:
          k8s-app: heartbeat
      spec:
        serviceAccountName: heartbeat
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        containers:
        - name: heartbeat
          image: docker.elastic.co/beats/heartbeat:7.17.6
          args: [
            "-c", "/etc/heartbeat.yml",
            "-e",
          ]
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 1536mi
            requests:
              # For synthetics, 2 full cores is a good starting point for
              # relatively consistent performance of a single concurrent check.
              # For lightweight checks, as low as 100m is fine.
              cpu: 2000m
              # A high value like this is encouraged for browser-based monitors.
              # Lightweight checks use substantially less; even 128Mi is fine for those.
              memory: 1536Mi
          volumeMounts:
          - name: config
            mountPath: /etc/heartbeat.yml
            readOnly: true
            subPath: heartbeat.yml
          - name: data
            mountPath: /usr/share/heartbeat/data
        volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: heartbeat-deployment-config
        - name: data
          hostPath:
            path: /var/lib/heartbeat-data
            type: DirectoryOrCreate

Now it's giving me this error: `error when creating "heartbeat-kubernetes.yaml": Deployment in version "v1" cannot be handled as a Deployment: unable to parse quantity's suffix`.
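For reference, Kubernetes resource quantities only accept a fixed set of suffixes, and they are case-sensitive: binary suffixes like `Mi` are capitalized, while lowercase `m` means milli (as in millicores). A simplified sketch of the rule for illustration (an approximation, not the real apimachinery parser, which also accepts decimal exponents):

```python
import re

# Simplified Kubernetes quantity check: an optional decimal number followed by
# an optional suffix. Binary suffixes (Ki, Mi, Gi, ...) are capitalized; the
# only lowercase letter allowed is the decimal "m" (milli).
QUANTITY = re.compile(r"^\d+(\.\d+)?(m|k|M|G|T|P|E|Ki|Mi|Gi|Ti|Pi|Ei)?$")

def is_valid_quantity(q: str) -> bool:
    return QUANTITY.match(q) is not None

print(is_valid_quantity("1536Mi"))  # True  - mebibytes
print(is_valid_quantity("1536mi"))  # False - lowercase "mi" is not a suffix
print(is_valid_quantity("2000m"))   # True  - millicores
```

So `1536mi` in the manifest above is exactly the "unable to parse quantity's suffix" case; `1536Mi` parses fine.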

When applying the YAML file I get:

configmap/heartbeat-deployment-config unchanged
clusterrolebinding.rbac.authorization.k8s.io/heartbeat unchanged
rolebinding.rbac.authorization.k8s.io/heartbeat unchanged
rolebinding.rbac.authorization.k8s.io/heartbeat-kubeadm-config unchanged
clusterrole.rbac.authorization.k8s.io/heartbeat unchanged
role.rbac.authorization.k8s.io/heartbeat unchanged
role.rbac.authorization.k8s.io/heartbeat-kubeadm-config unchanged
serviceaccount/heartbeat unchanged

Everything is good except the Deployment part.

Any help would be appreciated, thank you.

1 Answer

Hey, I think there is an indentation mistake in your YAML file. Can you check if this works for you?

data:
  heartbeat.yml: |-
    heartbeat.autodiscover:
      # Autodiscover pods, services, and nodes
      providers:
      - type: kubernetes
        resource: pod
        scope: cluster
        node: ${NODE_NAME}
        hints.enabled: true
      - type: kubernetes
        resource: service
        scope: cluster
        node: ${NODE_NAME}
        hints.enabled: true
      - type: kubernetes
        resource: node
        node: ${NODE_NAME}
        scope: cluster
        templates:
          # Example, check SSH port of all cluster nodes:
          - condition: ~
            config:
              - hosts:
                  - ${data.host}:22
                name: ${data.kubernetes.node.name}
                schedule: '@every 10s'
                timeout: 5s
                type: tcp

    processors:
      - add_cloud_metadata

    output.elasticsearch:
      hosts: ['https://10.112.100.121:30883']
      username: "elastic"
      password: "***********"
      ssl.verification_mode: none
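One subtlety worth knowing when editing this ConfigMap: a YAML mapping cannot repeat a key. If `providers:` appears more than once at the same level under `heartbeat.autodiscover`, only the last occurrence takes effect, so all three autodiscover providers need to be entries of a single `providers:` list. A plain-Python analogy of the mapping semantics (not a YAML parser, just an illustration):

```python
# Repeating a key in a mapping keeps only the last value -- the same fate that
# befalls three separate `providers:` keys in a YAML document.
config = {
    "providers": [{"resource": "pod"}],
    "providers": [{"resource": "service"}],
    "providers": [{"resource": "node"}],
}
print(config["providers"])  # [{'resource': 'node'}] -- only the last survives
```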

Yes, your Deployment has issues. I have made a few changes, but it's failing for me due to an error; just check if this works for you:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: heartbeat
    namespace: kube-system
    labels:
      k8s-app: heartbeat
  spec:
    selector:
      matchLabels:
        k8s-app: heartbeat
    template:
      metadata:
        labels:
          k8s-app: heartbeat
      spec:
        serviceAccountName: heartbeat
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        containers:
        - name: heartbeat
          image: docker.elastic.co/beats/heartbeat:7.17.6
          args: [
            "-c", "/etc/heartbeat.yml",
            "-e",
          ]
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: "1300Mi"
              cpu: "3000m"
            requests:
              cpu: "2000m" 
              memory: "700Mi" 
          volumeMounts:
          - name: config
            mountPath: /etc/heartbeat.yml
            readOnly: true
            subPath: heartbeat.yml
          - name: data
            mountPath: /usr/share/heartbeat/data
        volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: heartbeat-deployment-config
        - name: data
          hostPath:
            path: /var/lib/heartbeat-data
            type: DirectoryOrCreate

Can you test if this works for you?
  • Hello @sidharth vijayakumar, thank you for your response. I edited the post with the error above. I tried your solution but it gives me the same error; the problem is in the Deployment part, no? – skander khalfet Sep 16 '22 at 23:30
  • Updated my answer; adjust the resource limits as per your requirements – sidharth vijayakumar Sep 17 '22 at 02:02
  • Hello, thank you, that was really the problem, but now the container is stuck in CrashLoopBackOff. I changed the resources a little (limits: memory: 2000Mi, cpu: 2500m; requests: cpu: 2000m, memory: 1536Mi) and even commented out one provider in case the container reaches the limit and restarts, according to this link: https://discuss.elastic.co/t/heartbeat-8-1-0-kubernetes-autodiscovery-memory-leak/300543/6 – skander khalfet Sep 17 '22 at 13:07
  • Did you check the logs? – sidharth vijayakumar Sep 17 '22 at 14:26
  • In `kubectl describe pod` it shows that it exited with code 126; as I searched, it means "A command specified in the image specification could not be invoked" – skander khalfet Sep 17 '22 at 14:47
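For anyone hitting the same exit code: 126 is the POSIX shell convention for "command found but cannot be executed", which in a container usually points at file permissions or a volume mount shadowing the entrypoint. A small sketch of the convention (generic POSIX behaviour, nothing Heartbeat-specific; assumes a Linux host):

```python
import os
import stat
import subprocess
import tempfile

# Create a shell script that is readable but NOT executable, then ask a POSIX
# shell to run it: the shell finds the file, gets EACCES, and exits with 126 --
# the same status the kubelet surfaces for the failing container above.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/sh\necho hi\n")
    path = f.name
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # read/write only, no execute bit

result = subprocess.run(["/bin/sh", "-c", path])
print(result.returncode)  # 126: found but not executable
os.remove(path)
```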