
k8s

  • master: 172.17.1.1
  • worker: 172.17.2.1 ~ 172.17.2.4
  • pool: 192.168.0.0 ~

ceph

  • master: 172.17.3.1
  • node: 172.17.3.11 ~ 172.17.3.13

Kubernetes is installed on the "A" servers and Ceph is installed on the "B" servers, i.e. Ceph is an external cluster, not running on Kubernetes.

  1. Install the ceph-csi Helm chart

helm install --namespace ceph-fs ceph-fs-sc ceph-csi/ceph-csi-cephfs -f values.yaml
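This assumes the ceph-csi chart repository and the ceph-fs namespace were created beforehand, roughly:

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm repo update
kubectl create namespace ceph-fs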

values.yaml

csiConfig:
  - clusterID: "20be3294-1b09-11ee-8aec-bb7badd6c1ee"
    monitors:
      - "172.17.3.1:6789"
      - "172.17.3.11:6789"
      - "172.17.3.12:6789"
      - "172.17.3.13:6789"
secret:
  create: true
  name: "ceph-fs-secret"
  adminID: "admin"
  adminKey: "[**ADMIN_KEY**]"
storageClass:
  create: true
  name: ceph-fs-sc
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
  clusterID: "20be3294-1b09-11ee-8aec-bb7badd6c1ee"
  fsName: "wiztest"
  pool: "cephfs.wiztest.data"
  provisionerSecret: ceph-fs-secret
  provisionerSecretNamespace: ceph-fs
  controllerExpandSecret: ceph-fs-secret
  controllerExpandSecretNamespace: ceph-fs
  nodeStageSecret: ceph-fs-secret
  nodeStageSecretNamespace: ceph-fs
  reclaimPolicy: Delete
  allowVolumeExpansion: true
  mountOptions:
    - discard
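
A quick way to confirm that the driver pods and the StorageClass were created:

kubectl -n ceph-fs get pods
kubectl get sc ceph-fs-sc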

  2. Create the PVC

k apply -f cephfs-pvc01.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc01
spec:
  storageClassName: ceph-fs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
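
The PVC itself is provisioned and bound without errors, which can be confirmed with:

k get pvc cephfs-pvc01
k get pv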

The Ceph volumes are created as expected.
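
Assuming the default csi subvolume group used by ceph-csi, the created subvolume can also be listed from the Ceph side:

ceph fs subvolumegroup ls wiztest
ceph fs subvolume ls wiztest --group_name csi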

  3. Create the Pod (ERROR)

k apply -f cephfs-pod01.yaml

---
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-test-demo
spec:
  containers:
    - name: web-server
      image: docker.io/library/nginx:latest
      volumeMounts:
        - name: demo-pvc01
          mountPath: /var/lib/www
  volumes:
    - name: demo-pvc01
      persistentVolumeClaim:
        claimName: cephfs-pvc01
        readOnly: false

This step produces an error.


Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    23s   default-scheduler  Successfully assigned default/csi-cephfs-test-demo to k8s-worker-01
  Warning  FailedMount  3s    kubelet            MountVolume.MountDevice failed for volume "pvc-e136a448-d7c4-4467-be7a-38d661a872df" : rpc error: code = Internal desc = an error (exit status 32) occurred while running mount args: [-t ceph 172.17.3.1:6789,172.17.3.11:6789,172.17.3.12:6789,172.17.3.13:6789:/volumes/csi/csi-vol-6f40b655-58c4-4c34-bba2-7fc7e1268d1c/25f99e96-e034-4614-9af3-e408e05a9798 /var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.csi.ceph.com/24ab534cbf0433ac2504a29c4424faae65354deda69a80deb2b64f46a240e909/globalmount -o name=admin,secretfile=/tmp/csi/keys/keyfile-2640644382,mds_namespace=wiztest,discard,_netdev] stderr: unable to get monitor info from DNS SRV with service name: ceph-mon
2023-07-10T09:14:51.562+0000 7ff25416d0c0 -1 failed for service _ceph-mon._tcp
mount error 22 = Invalid argument

stderr: unable to get monitor info from DNS SRV with service name: ceph-mon

Searching for this error only turns up results about Ceph running on top of Kubernetes, not about an external Ceph cluster.
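
For what it's worth, the same mount can be tried manually on the worker node to separate a node-level problem from a CSI one. This is only a diagnostic sketch: it assumes ceph-common and the ceph kernel module are available on k8s-worker-01, and reuses the same admin key as above.

# run on k8s-worker-01; uses the kernel CephFS client directly
sudo modprobe ceph
sudo mkdir -p /mnt/cephfs-test
sudo mount -t ceph 172.17.3.1:6789,172.17.3.11:6789,172.17.3.12:6789,172.17.3.13:6789:/ /mnt/cephfs-test \
  -o name=admin,secret=[**ADMIN_KEY**],mds_namespace=wiztest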

please help me...

ImuruKevol
  • Maybe this [thread](https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/TAR5VMVLTU6FMFOHT6H5LFOSF3G643QK/) can help you. – eblock Jul 10 '23 at 19:24
  • Thank you. I solved it by changing from cephfs to rbd. I expect the linked thread will solve the cephfs problem; I'll try it later. – ImuruKevol Jul 11 '23 at 04:07
