I want to use local PVs to provide storage for an Elasticsearch StatefulSet. Since a local PV has to be tied to a specific node (and therefore to a specific pod), I tried the following: create a PV for a local path on each node (prepared beforehand with `mkdir -p /data/logging/elasticsearch/master`), create a matching StorageClass with delayed binding (WaitForFirstConsumer), and finally install Elasticsearch with Helm, using volumeClaimTemplates in the StatefulSet template.
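For completeness, the per-node preparation step looked roughly like this (assuming SSH access to the nodes; the hostnames are the three nodes my PVs point to below):

```bash
# Create the local data directory on every node that should host a PV.
for node in k8s1210 k8s1211 k8s1212; do
  ssh "$node" 'sudo mkdir -p /data/logging/elasticsearch/master'
done
```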
I create the PVs like this:
```yaml
# pv-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch--master-1
  labels:
    pvname: elasticsearch--master-1
spec:
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: elasticsearch-logging
  local:
    path: /data/logging/elasticsearch/master
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s1211
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch--master-2
  labels:
    pvname: elasticsearch--master-2
spec:
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: elasticsearch-logging
  local:
    path: /data/logging/elasticsearch/master
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s1212
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch--master-0
  labels:
    pvname: elasticsearch--master-0
spec:
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: elasticsearch-logging
  local:
    path: /data/logging/elasticsearch/master
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s1210
```
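After applying the file, I check that each PV exists and carries the expected node affinity, roughly like this (plain kubectl, nothing chart-specific):

```bash
kubectl apply -f pv-local.yaml
# All three PVs should show up as Available with the elasticsearch-logging class.
kubectl get pv
# Print the node affinity of one PV to confirm it points at the right hostname.
kubectl get pv elasticsearch--master-0 -o jsonpath='{.spec.nodeAffinity}'
```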
I create the StorageClass like this:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: elasticsearch-logging
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```
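To confirm the class really uses delayed binding, I dump it back out:

```bash
# volumeBindingMode should read WaitForFirstConsumer in the output.
kubectl get storageclass elasticsearch-logging -o yaml
```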
Then I install Elasticsearch with Helm, and Helm creates the StatefulSet shown below:
```bash
helm install es-master -f values-master.yaml -n logging .
```
```yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: elasticsearch-master
  namespace: logging
  labels:
    app: elasticsearch-master
    app.kubernetes.io/managed-by: Helm
    chart: elasticsearch
    heritage: Helm
    release: es-master
  annotations:
    esMajorVersion: '7'
    meta.helm.sh/release-name: es-master
    meta.helm.sh/release-namespace: logging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch-master
  template:
    metadata:
      name: elasticsearch-master
      creationTimestamp: null
      labels:
        app: elasticsearch-master
        chart: elasticsearch
        release: es-master
      annotations:
        configchecksum: 5b9873d870e8dac515e1c8de4df7144cc973664ca35a569294629d20fa21664
    spec:
      volumes:
        - name: elastic-certs
          secret:
            secretName: elastic-certs
            defaultMode: 493
        - name: esconfig
          configMap:
            name: elasticsearch-master-config
            defaultMode: 420
      initContainers:
        - name: configure-sysctl
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.17.3'
          command:
            - sysctl
            - '-w'
            - vm.max_map_count=262144
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
            runAsUser: 0
      containers:
        - name: elasticsearch
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.17.3'
          ports:
            - name: http
              containerPort: 9200
              protocol: TCP
            - name: transport
              containerPort: 9300
              protocol: TCP
          env:
            - name: node.name
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: cluster.initial_master_nodes
              value: >-
                elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,
            - name: discovery.seed_hosts
              value: elasticsearch-master-headless
            - name: cluster.name
              value: elasticsearch
            - name: network.host
              value: 0.0.0.0
            - name: cluster.deprecation_indexing.enabled
              value: 'false'
            - name: ES_JAVA_OPTS
              value: '-Xmx1g -Xms1g'
            - name: node.data
              value: 'false'
            - name: node.ingest
              value: 'false'
            - name: node.master
              value: 'true'
            - name: node.ml
              value: 'true'
            - name: node.remote_cluster_client
              value: 'true'
            - name: ELASTIC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: elastic-auth
                  key: username
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elastic-auth
                  key: password
          resources:
            limits:
              cpu: '2'
              memory: 2Gi
            requests:
              cpu: '2'
              memory: 2Gi
          volumeMounts:
            - name: elasticsearch-master
              mountPath: /usr/share/elasticsearch/data
            - name: elastic-certs
              mountPath: /usr/share/elasticsearch/config/certs
            - name: esconfig
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
          readinessProbe:
            exec:
              command:
                - bash
                - '-c'
                - |
                  set -e

                  # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
                  # Once it has started only check that the node itself is responding
                  START_FILE=/tmp/.es_start_file

                  # Disable nss cache to avoid filling dentry cache when calling curl
                  # This is required with Elasticsearch Docker using nss < 3.52
                  export NSS_SDB_USE_CACHE=no

                  http () {
                    local path="${1}"
                    local args="${2}"
                    set -- -XGET -s

                    if [ "$args" != "" ]; then
                      set -- "$@" $args
                    fi

                    if [ -n "${ELASTIC_PASSWORD}" ]; then
                      set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
                    fi

                    curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
                  }

                  if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    HTTP_CODE=$(http "/" "-w %{http_code}")
                    RC=$?
                    if [[ ${RC} -ne 0 ]]; then
                      echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
                      exit ${RC}
                    fi
                    # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                    if [[ ${HTTP_CODE} == "200" ]]; then
                      exit 0
                    elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
                      exit 0
                    else
                      echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                      exit 1
                    fi
                  else
                    echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
                      touch ${START_FILE}
                      exit 0
                    else
                      echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                      exit 1
                    fi
                  fi
            initialDelaySeconds: 10
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 3
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              drop:
                - ALL
            runAsUser: 1000
            runAsNonRoot: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      dnsPolicy: ClusterFirst
      automountServiceAccountToken: true
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: elasticsearch-logging
                    operator: In
                    values:
                      - 'true'
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - elasticsearch-master
              topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
      enableServiceLinks: true
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: elasticsearch-master
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
        storageClassName: elasticsearch-logging
        volumeMode: Filesystem
      status:
        phase: Pending
  serviceName: elasticsearch-master-headless
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  revisionHistoryLimit: 10
```
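When the claims stay Pending, this is how I have been inspecting them, hoping the events show what the scheduler or the PV controller is waiting for:

```bash
# PVC names follow the <volumeClaimTemplate name>-<pod name> pattern.
kubectl -n logging get pvc
kubectl -n logging describe pvc elasticsearch-master-elasticsearch-master-0
kubectl -n logging describe pod elasticsearch-master-0
```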
The StatefulSet then automatically creates the PVCs, but they are not bound to the PVs, and I don't know why:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-master-elasticsearch-master-0
  namespace: logging
  labels:
    app: elasticsearch-master
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: elasticsearch-logging
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-master-elasticsearch-master-1
  namespace: logging
  labels:
    app: elasticsearch-master
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: elasticsearch-logging
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-master-elasticsearch-master-2
  namespace: logging
  labels:
    app: elasticsearch-master
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: elasticsearch-logging
  volumeMode: Filesystem
```
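Since the pod template also requires nodes labelled elasticsearch-logging=true, I verify that the three nodes referenced by the PVs actually carry that label (otherwise the pods could never be scheduled onto the nodes the PVs are pinned to):

```bash
# k8s1210, k8s1211 and k8s1212 should all appear in this list.
kubectl get nodes -l elasticsearch-logging=true
```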
I also tried creating the PVCs manually (sketched below), and they still do not bind. When I remove WaitForFirstConsumer, the PVCs do bind, but the pod/node/PV correspondence is not respected: sometimes a pod scheduled on node 1 binds a PV whose node affinity points to a different node, which is not what I want and also produces an error because they do not match.
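For reference, the manual attempt looked roughly like this; the selector on the pvname label is my own addition (not generated by the chart) to try to pin the claim to one specific PV:

```bash
# Sketch of a manual PVC for pod 0, pinned to one PV via the pvname label
# that the PVs above already carry.
kubectl -n logging apply -f - <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearch-master-elasticsearch-master-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: elasticsearch-logging
  volumeMode: Filesystem
  resources:
    requests:
      storage: 30Gi
  selector:
    matchLabels:
      pvname: elasticsearch--master-0
EOF
```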
I checked the official docs (https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) and what I am doing seems consistent with them, so I don't know what is wrong.