With Flask, I made an application that prints the keys and values of GET requests to the console. My goal is to create a cluster with 1 master and 2 workers and install the Flask app and an ELK stack for logging on it. I have already created the cluster on Google Cloud (4 CPUs, 8 GB memory). I created a Docker image for the Flask application and uploaded it to Docker Hub. I want to install the ELK stack with Helm, but the pods are not working: they stay in the Pending state. I am using the default Helm charts and didn't make any edits. What should I do? Thanks for any help.

EDIT: This is my deployment.yaml file for Elasticsearch (I couldn't get it working with Helm, so I keep trying with YAML files):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: elk-stack
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.6.2
          resources:
            requests:
              memory: 1Gi
              cpu: 1
            limits:
              memory: 2Gi
              cpu: 2

This is the output of the kubectl get pods -n elk-stack -o wide command:

NAME                            READY   STATUS             RESTARTS      AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
elasticsearch-c49749bbc-db9pt   0/1     CrashLoopBackOff   3 (35s ago)   95s   10.84.0.11   gke-devops-cluster-default-pool-8155544c-978l   <none>           <none>

This is the output of kubectl top node:

NAME                                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
gke-devops-cluster-default-pool-8155544c-3mkr   73m          7%     1318Mi          46%       
gke-devops-cluster-default-pool-8155544c-978l   94m          10%    1194Mi          42%

This is the output of kubectl describe pods -n elk-stack:

Name:             elasticsearch-c49749bbc-db9pt
Namespace:        elk-stack
Priority:         0
Service Account:  default
Node:             gke-devops-cluster-default-pool-8155544c-978l/10.156.0.24
Start Time:       Mon, 31 Jul 2023 21:15:00 +0300
Labels:           app=elasticsearch
                  pod-template-hash=c49749bbc
Annotations:      <none>
Status:           Running
IP:               10.84.0.11
IPs:
  IP:           10.84.0.11
Controlled By:  ReplicaSet/elasticsearch-c49749bbc
Containers:
  elasticsearch:
    Container ID:   containerd://c2532904c4a2551f1ff2c8df924fb7704186a9924cfb6b491e7d6996407a3aa5
    Image:          elasticsearch:7.6.2
    Image ID:       docker.io/library/elasticsearch@sha256:1b09dbd93085a1e7bca34830e77d2981521a7210e11f11eda997add1c12711fa
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Mon, 31 Jul 2023 21:18:27 +0300
      Finished:     Mon, 31 Jul 2023 21:18:31 +0300
    Ready:          False
    Restart Count:  5
    Limits:
      cpu:     500m
      memory:  700Mi
    Requests:
      cpu:        300m
      memory:     500Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7kjrr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-7kjrr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  4m41s                   default-scheduler  Successfully assigned elk-stack/elasticsearch-c49749bbc-db9pt to gke-devops-cluster-default-pool-8155544c-978l
  Normal   Pulled     2m48s (x5 over 4m41s)   kubelet            Container image "elasticsearch:7.6.2" already present on machine
  Normal   Created    2m48s (x5 over 4m41s)   kubelet            Created container elasticsearch
  Normal   Started    2m48s (x5 over 4m41s)   kubelet            Started container elasticsearch
  Warning  BackOff    2m16s (x10 over 4m30s)  kubelet            Back-off restarting failed container elasticsearch in pod elasticsearch-c49749bbc-db9pt_elk-stack(90993e32-bcd8-4e96-b27b-d5b6d532a470)

I followed this tutorial: https://ardabatuhandemir.medium.com/helm-ile-kubernetes-ortam%C4%B1nda-elk-stack-kurulumu-f5ab8f934f99. I also looked at every source I could find that covers deploying ELK to Kubernetes.

  • You have to describe the pod and see why it is failing. – dany L Jul 31 '23 at 15:48
  • Insufficient memory and CPU error, but I do not know how to handle the distribution across the worker nodes. I thought 8 GB of memory and 4 CPUs should be enough. This is my deployment.yaml file: ``` – stratovic Jul 31 '23 at 17:42
  • Can you please put in your post the output of: `kubectl get pods -o wide`, `kubectl top node node-name-here`, `kubectl describe pod pod-name-here`? Kubernetes should take care of placing your pod on a node that has sufficient resources; we have to see whether your nodes still have enough resources left for deploying new pods. – Andra Radu Jul 31 '23 at 18:12
  • @AndraRadu I added outputs to question. – stratovic Jul 31 '23 at 18:22
  • Based on the available information, the pod is experiencing OOM kills despite having sufficient available memory on the nodes. You need to check the container logs and events associated with the pod. `kubectl logs elasticsearch-c49749bbc-db9pt -c elasticsearch -n elk-stack` and `kubectl logs elasticsearch-c49749bbc-db9pt -n elk-stack` also `kubectl get events -n elk-stack`, and not least `kubectl top pods -A -o wide` – Andra Radu Aug 01 '23 at 08:47
  • By the way, it is a pain to set up ELK via YAML files: you will have to create deployments for Logstash and Kibana too, and you will also have to take care of volumes and config maps. Better to try with Helm; what error did you encounter using Helm? – Andra Radu Aug 01 '23 at 12:39
  • I recommend you take a look at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html and at this tutorial: https://www.youtube.com/watch?v=IO_uXPKQht0. Either you use Helm to install Elasticsearch, Kibana and Logstash and take care of what is needed additionally, or you use Elastic Cloud on Kubernetes (ECK), which helps to manage, scale, upgrade, and deploy the Elastic Stack and makes it very easy to install an Elasticsearch cluster and Kibana afterwards (a rough sketch of the ECK manifest follows below). However, I am curious about the pod log, so please do not forget to also put the pod logs in your question. – Andra Radu Aug 01 '23 at 13:37
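
For reference, the ECK route mentioned in the last comment boils down to installing the ECK operator and then applying a small Elasticsearch custom resource. A rough sketch, assuming the operator from the linked quickstart is already installed (the version number and namespace are illustrative):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
  namespace: elk-stack
spec:
  version: 7.17.5                      # illustrative; pick the version you need
  nodeSets:
    - name: default
      count: 1                         # a single node is enough for testing
      config:
        node.store.allow_mmap: false   # avoids the vm.max_map_count requirement on the nodes

The operator then creates the pods, services, and secrets itself, so there is no deployment.yaml to maintain by hand.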

1 Answer

For testing purposes you can use this, and let me know:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: elk-stack
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.17.5
          resources:
            requests:
              memory: 1Gi
              cpu: 1
            limits:
              memory: 2Gi
              cpu: 2
          env:
            - name: discovery.type
              value: "single-node" # Set the discovery.type to single-node

Do not forget that you will still need to install Logstash and Kibana, and take care of volumes and ConfigMaps, if you decide to configure ELK manually via YAML files.
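
Since the describe output above shows the container being OOMKilled, it may also be worth capping the Elasticsearch JVM heap below the container memory limit. The official Elasticsearch image reads the ES_JAVA_OPTS environment variable, so a sketch of the extra env entry (the heap sizes are illustrative) would be:

          env:
            - name: discovery.type
              value: "single-node"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"   # keep the heap well below the memory limit to leave room for off-heap usage

With a 512m heap the container should stay comfortably inside a 1Gi-2Gi memory limit.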
