
New to AWS EKS Fargate.

I created a cluster on AWS EKS Fargate and then proceeded to install a Helm chart. The pods are all stuck in the Pending state, and looking at the pod description, I noticed the following error:

eksctl create cluster -f cluster-fargate.yaml
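For context, cluster-fargate.yaml is roughly of the following shape (a simplified sketch, not the exact file; the fp-bd profile name and bd namespace match the pod labels below, while the cluster name and region are placeholders):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: bd-cluster      # placeholder cluster name
  region: us-east-1     # placeholder region
fargateProfiles:
  - name: fp-bd
    selectors:
      - namespace: bd   # pods in this namespace are scheduled onto Fargate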


k -n bd describe pod bd-blackduck-authentication-6c8ff5cc85-jwr8m
Name:                 bd-blackduck-authentication-6c8ff5cc85-jwr8m
Namespace:            bd
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 <none>
Labels:               app=blackduck
                      component=authentication
                      eks.amazonaws.com/fargate-profile=fp-bd
                      name=bd
                      pod-template-hash=6c8ff5cc85
                      version=2021.10.5
Annotations:          checksum/blackduck-config: 6c1796e5e4218c71ea2ae7a1249fefbb6f7c216f702ea38919a0bb9751b06922
                      checksum/postgres-config: f21777c0b5bf24b5535a5b4a8dbf98a5df9c9dd2f4a48e5219dcccf46301a982
                      kubernetes.io/psp: eks.privileged
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/bd-blackduck-authentication-6c8ff5cc85
Init Containers:
  bd-blackduck-postgres-waiter:
    Image:      docker.io/blackducksoftware/blackduck-postgres-waiter:1.0.0
    Port:       <none>
    Host Port:  <none>
    Environment Variables from:
      bd-blackduck-config  ConfigMap  Optional: false
    Environment:
      POSTGRES_HOST:  <set to the key 'HUB_POSTGRES_HOST' of config map 'bd-blackduck-db-config'>  Optional: false
      POSTGRES_PORT:  <set to the key 'HUB_POSTGRES_PORT' of config map 'bd-blackduck-db-config'>  Optional: false
      POSTGRES_USER:  <set to the key 'HUB_POSTGRES_USER' of config map 'bd-blackduck-db-config'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85q7d (ro)
Containers:
  authentication:
    Image:      docker.io/blackducksoftware/blackduck-authentication:2021.10.5
    Port:       8443/TCP
    Host Port:  0/TCP
    Limits:
      memory:  1Gi
    Requests:
      memory:  1Gi
    Liveness:  exec [/usr/local/bin/docker-healthcheck.sh https://127.0.0.1:8443/api/health-checks/liveness /opt/blackduck/hub/hub-authentication/security/root.crt /opt/blackduck/hub/hub-authentication/security/blackduck_system.crt /opt/blackduck/hub/hub-authentication/security/blackduck_system.key] delay=240s timeout=10s period=30s #success=1 #failure=10
    Environment Variables from:
      bd-blackduck-db-config  ConfigMap  Optional: false
      bd-blackduck-config     ConfigMap  Optional: false
    Environment:
      HUB_MAX_MEMORY:                              512m
      DD_ENABLED:                                  false
      HUB_MANAGEMENT_ENDPOINT_PROMETHEUS_ENABLED:  false
    Mounts:
      /opt/blackduck/hub/hub-authentication/ldap from dir-authentication (rw)
      /opt/blackduck/hub/hub-authentication/security from dir-authentication-security (rw)
      /tmp/secrets/HUB_POSTGRES_ADMIN_PASSWORD_FILE from db-passwords (rw,path="HUB_POSTGRES_ADMIN_PASSWORD_FILE")
      /tmp/secrets/HUB_POSTGRES_USER_PASSWORD_FILE from db-passwords (rw,path="HUB_POSTGRES_USER_PASSWORD_FILE")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85q7d (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  dir-authentication:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  bd-blackduck-authentication
    ReadOnly:   false
  db-passwords:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  bd-blackduck-db-creds
    Optional:    false
  dir-authentication-security:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-85q7d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  58s   fargate-scheduler  Pod not supported on Fargate: volumes not supported: dir-authentication not supported because: PVC bd-blackduck-authentication not bound

My storageClass is currently set to gp2 in my values.yaml.

What can I do next to troubleshoot this?

sqr

2 Answers


Currently, Fargate does not support PersistentVolumes backed by EBS. You can use EFS instead.
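As a minimal sketch of the Kubernetes side, the starting point is a StorageClass that uses the efs.csi.aws.com provisioner (on Fargate the EFS CSI driver comes pre-installed, so only the Kubernetes objects need to be created; the class name efs-sc is an arbitrary choice):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com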

gohm'c
  • Thank you @gohm'c! From the link you provided, I understand I need to manually create an EFS file system and a PVC, and then use it with EKS? Does this mean that for any Docker application which includes a database, I will need to follow a similar process? Is EKS Fargate primarily used for stateless applications? – sqr Mar 06 '22 at 13:21
  • Specifically on Fargate: 1) You also need to create the PV that represents the EFS file system, which your PVC will bind to. 2) An application that runs on Fargate and needs to persist state (e.g. a database) can use EFS. On top of that, if your pods need to share data across nodes and **availability zones**, EFS is the way to go. 3) Not necessarily, as you can use EFS to persist state. – gohm'c Mar 06 '22 at 15:20

You need to create the PersistentVolume manually; Fargate does not support dynamic provisioning of PersistentVolumes.

So the flow is: create a StorageClass, then a PersistentVolume, and then a PersistentVolumeClaim, as sketched below.
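A rough sketch of the PV and PVC for that flow, assuming an existing EFS file system and the efs-sc StorageClass shown in the answer above (fs-0123456789abcdef0 is a placeholder file system ID, and the claim name matches the one the chart expects):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: bd-blackduck-authentication-pv
spec:
  capacity:
    storage: 2Gi                         # required field; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc               # must match the StorageClass above
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder: your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bd-blackduck-authentication
  namespace: bd
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 2Gi                       # must not exceed the PV's capacity

In your case the Helm chart already creates the bd-blackduck-authentication claim, so the PVC above is only for illustration: creating the StorageClass and PV, then pointing storageClass in values.yaml at efs-sc instead of gp2, should let the existing claim bind and the pod schedule.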