
I am using the GlusterFS volume plugin in my Kubernetes cluster on Ubuntu, with GlusterFS installed using a DaemonSet.

My doubt is: how is the data mounted? Are PVCs or PVs replicated on every Kubernetes worker, or are the disks configured in topology.json for every worker node shared among the cluster? I read the documents, but I didn't get clarity.

How does GlusterFS work?


1 Answer


Here is the link to the official GlusterFS for Kubernetes repository. Two main components are used in this approach: GlusterFS itself and Heketi.

GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. A Volume in GlusterFS consists of blocks called Bricks. Bricks are placed on different servers according to the type of the Volume; there are several types of volumes, and for more information about them you can visit this link.
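
For illustration, this is roughly how a replicated Volume is assembled from Bricks with the gluster CLI; the volume name, hostnames, and brick paths below are hypothetical:

# Create a Volume with 3 replicas, one Brick per server (names are examples)
gluster volume create demo-vol replica 3 \
  server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
gluster volume start demo-vol
gluster volume info demo-vol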

Heketi provides an interface that can be used to manage the lifecycle of GlusterFS volumes. In Kubernetes, Heketi is used to dynamically provision GlusterFS volumes.
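
As a sketch, the same lifecycle operations can be driven from the Heketi CLI; the endpoint below reuses the address from the StorageClass further down, and flag names may vary slightly between Heketi versions:

# Point the CLI at the Heketi REST endpoint
export HEKETI_CLI_SERVER=http://10.42.0.0:8080

# Create a 5 GiB replicated volume, then list existing volumes
heketi-cli volume create --size=5 --durability=replicate --replica=3
heketi-cli volume list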

In the example from the repository, 3 Nodes are used for GlusterFS, and each Node has block devices. All of this is described in the topology.json file for Heketi:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "node0"
              ],
              "storage": [
                "192.168.10.100"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb",
            "/dev/vdc",
            "/dev/vdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "node1"
              ],
              "storage": [
                "192.168.10.101"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb",
            "/dev/vdc",
            "/dev/vdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "node2"
              ],
              "storage": [
                "192.168.10.102"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/vdb",
            "/dev/vdc",
            "/dev/vdd"
          ]
        }
      ]
    }
  ]
}

These Nodes are Kubernetes Nodes as well; you can use all your Kubernetes Nodes as servers for GlusterFS, but the minimum number of Nodes is three.

Heketi will format the block devices listed in the topology.json file and include them in the GlusterFS cluster.
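
Loading the topology is done with heketi-cli, assuming the client is pointed at the Heketi endpoint as shown earlier:

# Load the cluster topology; Heketi prepares the listed devices
heketi-cli topology load --json=topology.json
heketi-cli topology info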

After that, you need to create a Kubernetes StorageClass for dynamic provisioning of GlusterFS volumes. Example YAML file for Kubernetes:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters: # Settings for access to the Heketi REST API
  resturl: "http://10.42.0.0:8080" 
  restuser: "joe" 
  restuserkey: "My Secret Life"  
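
If you need a specific GlusterFS volume type (for example, replicated), the kubernetes.io/glusterfs provisioner accepts a volumetype parameter. A sketch, with a made-up StorageClass name:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi-replicated
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080"
  restuser: "joe"
  restuserkey: "My Secret Life"
  volumetype: "replicate:3"  # other values: "disperse:<data>:<redundancy>", "none"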

Now you can create a PersistentVolumeClaim with a request to the StorageClass. This action will trigger three processes:
1. Kubernetes will create a GlusterFS Volume using Heketi
2. Kubernetes will create a PersistentVolume configured to use the GlusterFS Volume
3. Kubernetes will assign the PersistentVolume to the PersistentVolumeClaim

Here is the example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
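
You can check that the claim was bound; the output below is illustrative:

kubectl get pvc gluster1
# NAME       STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS
# gluster1   Bound    pvc-<some-id>   5Gi        RWO            gluster-heketi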

After that, you can use the PersistentVolumeClaim in Kubernetes Deployments, Pods, etc. An example of the Pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
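
To try it out, assuming the manifest above is saved as nginx-pod1.yaml (the file name is arbitrary):

kubectl apply -f nginx-pod1.yaml
# Write and read a file through the GlusterFS-backed mount
kubectl exec nginx-pod1 -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
kubectl exec nginx-pod1 -- cat /usr/share/nginx/html/index.html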

For more information, you can look through this example via the link.

  • Thank you so much Artem Golenyaev. Now I have an idea of how GlusterFS works. – BSG Jul 26 '18 at 13:04
  • `apiVersion: storage.k8s.io/v1beta1 kind: StorageClass metadata: name: mongo-storage2 provisioner: kubernetes.io/glusterfs parameters: resturl: "http://10.46.0.0:8080"` And I am directly using this SC name in the StatefulSet (with replicas 3) volumeClaimTemplates section. I didn't create any PVCs. Whenever the headless service is up, it automatically creates 3 PVCs and 3 PVs. I think it creates the Distributed GlusterFS volume type because I didn't mention any storage type. – BSG Jul 26 '18 at 13:57
  • Could you please help me understand what happened with my procedure? Is it necessary to create a PVC or not? If I want to specify the Replicated GlusterFS volume type, where could I mention it in my StorageClass file? – BSG Jul 26 '18 at 13:57
  • Could you add the configs to the original question? It is hard to understand them in comments. It is better if you add as much as you can share, not only the StorageClass but also the StatefulSet. – Artem Golenyaev Jul 27 '18 at 11:04