
Here is my Elasticsearch YAML:

---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1 
kind: Elasticsearch 
metadata: 
  name: ichat-els-deployment
spec: 
  # updateStrategy:
  #   changeBudget:
  #     maxSurge: -1
  #     maxUnavailable: -1
  version: 7.11.1
  auth:
    roles:
    - secretName: elastic-roles-secret
    fileRealm:
    - secretName: elastic-filerealm-secret
  nodeSets: 
  - name: default
    count: 1 
    config:
      node.store.allow_mmap: false 
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        volumeName: elasticsearch-azure-pv
    podTemplate:
      spec:
        initContainers:
        - name: install-plugins
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch ingest-attachment
  - name: default2
    count: 0
    config:
      node.store.allow_mmap: false 
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

After creating this, both nodeSets are running. `kubectl get pods`:

NAME                                 READY   STATUS    RESTARTS   AGE
elastic-operator-0                   1/1     Running   8          7d23h
ichat-els-deployment-es-default-0    1/1     Running   0          24m
ichat-els-deployment-es-default2-0   1/1     Running   0          26m

Everything is working fine, but now I want to delete the `default2` nodeSet. How can I do that? I tried removing the nodeSet from the manifest and reapplying it, but nothing happened:

---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1 
kind: Elasticsearch 
metadata: 
  name: ichat-els-deployment
spec: 
  # updateStrategy:
  #   changeBudget:
  #     maxSurge: -1
  #     maxUnavailable: -1
  version: 7.11.1
  auth:
    roles:
    - secretName: elastic-roles-secret
    fileRealm:
    - secretName: elastic-filerealm-secret
  nodeSets: 
  - name: default
    count: 1 
    config:
      node.store.allow_mmap: false 
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        volumeName: elasticsearch-azure-pv
    podTemplate:
      spec:
        initContainers:
        - name: install-plugins
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch ingest-attachment
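
I reapplied it with a plain `kubectl apply` of the rendered template (the path here is taken from the `# Source:` comment at the top of the manifest):

kubectl apply -f elastic/templates/elastic.yaml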

The pods and shards are still running and there are no errors in the elastic-operator logs. What is the correct way to remove a nodeSet? Thanks.

  • Based on the information from the [elastic site](https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-orchestration.html#k8s-statefulsets), "ECK translates each NodeSet specified in the Elasticsearch resource into a StatefulSet in Kubernetes". Accordingly, you can check your StatefulSets and remove the one you no longer need with `kubectl delete statefulset <name>`, as sketched below. – Andrew Skorkin Oct 07 '21 at 10:27
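
A minimal sketch of that suggestion, assuming the StatefulSet for `default2` follows the pod naming shown above (`ichat-els-deployment-es-default2`):

# list the StatefulSets the operator created (one per nodeSet)
kubectl get statefulsets

# delete the one backing the default2 nodeSet
kubectl delete statefulset ichat-els-deployment-es-default2

Keep in mind that the operator reconciles toward the Elasticsearch spec, so if `default2` were still listed under `nodeSets`, the StatefulSet would likely be recreated.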

1 Answer


I solved the issue by deleting the Elasticsearch object itself:

kubectl delete elasticsearch <elasticsearch_object_name>

This deleted all objects related to the Elasticsearch CRD object (StatefulSets -> Pods, Secrets, PVCs -> PVs, etc.). Note that this destroys the cluster's data as well, which I didn't care about in my case.
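
For the cluster in the question, that would be:

kubectl delete elasticsearch ichat-els-deployment

Reapplying the manifest afterwards recreates the cluster with only the nodeSets still in the spec, but the data starts from scratch unless the underlying PVs were retained and reused.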