I have deployed my Kubernetes cluster on GCP Compute Engine instances with 3 master nodes and 3 worker nodes (it's not a GKE cluster). Can anybody suggest what storage options I can use for my cluster? If I create a virtual disk on GCP, can I use that disk as persistent storage?
-
This may help you: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes – Kamol Hasan Jan 21 '21 at 05:55
-
What are your requirement? Do you need to read and write concurrently on the same storage? – guillaume blaquiere Jan 21 '21 at 08:34
-
@guillaumeblaquiere, I have a few pods, and I need to store each pod's data on disk separately from the others. What option would you recommend? – Dusty Jan 21 '21 at 11:40
2 Answers
You can use a GCE Persistent Disk StorageClass.
Here is how you create the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Then apply the following to create the PVC (the PV is provisioned automatically) and attach it to your Pod.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver-pd
spec:
  containers:
    - image: httpd
      name: webserver
      volumeMounts:
        - mountPath: /data
          name: dynamic-volume
  volumes:
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: gce-claim
Example taken from this blog post
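If it helps, here is a minimal way to apply and verify the above (the file names below are just an assumption; adjust to however you saved the manifests):
# Assumed file names; adjust to your own.
kubectl apply -f storage-class.yaml
kubectl apply -f webserver.yaml

# The PVC should go from Pending to Bound once the PD is provisioned and attached.
kubectl get storageclass,pvc,pv,pod
kubectl describe pvc gce-claim   # shows provisioning events and any errors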

-
If I proceed with this solution, do I need to edit the Kubernetes config? Specifying the cloud provider or something? – Dusty Jan 21 '21 at 11:37
-
@Dusty Yeah, I think you have to configure IAM for each instance (basically set up a service account with sufficient storage permissions). Also, since you're working outside GKE, you should configure the `cloud-provider` as `gce` in the configs – Tibebes. M Jan 21 '21 at 13:21
-
Try following [this commit diff](https://github.com/eldorplus/kubernetes-the-hard-way/commit/ecf26a1100f5e95b6c7e44c01a65ef8c624f37f4), maybe? (I don't have experience doing this configuration, sorry) – Tibebes. M Jan 21 '21 at 13:47
-
It's working now. Since I'm running my cluster outside of GKE, we need to specifically provide the --cloud-provider=gce parameter in the API server and controller-manager conf files, and restart kubelet. BTW thanks for your support. – Dusty Jan 22 '21 at 08:21
-
Awesome! Glad it finally worked out! We all learned something in the process :) – Tibebes. M Jan 22 '21 at 09:04
There are two types of Persistent Volume provisioning: Static Provisioning and Dynamic Provisioning.
I will briefly describe each of these types.
Static Provisioning
Using this approach you need to create the Disk, PersistentVolume and PersistentVolumeClaim manually.
I've created a simple example for you to illustrate how it works.
First I created a disk; on GCP we can use the gcloud command:
$ gcloud compute disks create --size 10GB --zone europe-west3-c test-disk
NAME       ZONE            SIZE_GB  TYPE         STATUS
test-disk  europe-west3-c  10       pd-standard  READY
Next I created the PV and PVC using these manifest files:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    pdName: test-disk # This GCE PD must already exist.
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After applying these manifest files, we can check the status of the PV and PVC:
root@km:~# kubectl get pv,pvc
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pv-test   10Gi       RWO            Retain           Bound    default/claim-test                           12m

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pv-test   10Gi       RWO                           12m
Finally, I used the above claim as a volume in a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx"
          name: vol-test
  volumes:
    - name: vol-test
      persistentVolumeClaim:
        claimName: claim-test
We can inspect the created Pod to check if it works as expected:
root@km:~# kubectl exec -it web -- bash
root@web:/# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
...
/dev/sdb        9.8G   37M   9.8G    1%  /usr/share/nginx
...
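As an extra sanity check (not part of the original answer), you can also confirm from the GCP side that the disk is attached to the node VM, assuming the disk name and zone from the example above:
# Lists the instances the disk is currently attached to.
$ gcloud compute disks describe test-disk --zone europe-west3-c --format="value(users)"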
Dynamic Provisioning
In this case the volume is provisioned automatically when an application requires it.
First you need to create a StorageClass object that defines a provisioner, e.g. kubernetes.io/gce-pd.
We don't need to create a PersistentVolume anymore; it's created automatically by the StorageClass for us.
I've also created a simple example for you to illustrate how it works.
First I created a StorageClass as the default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
And then a PVC (the same as in the previous example), but in this case the PV was created automatically:
root@km:~# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            Delete           Bound    default/claim-test   standard                3m10s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            standard       3m12s
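For reference, the PVC used here is the same manifest as in the static provisioning example above; because it does not set storageClassName, it is served by the default StorageClass created earlier:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi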
In more advanced cases it may be useful to create multiple StorageClasses with different persistent disk types, for example as sketched below.
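For example, an additional SSD-backed class might look like the following (a sketch only; the name ssd-fast is an illustrative choice, not something from the original answer):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-fast              # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd                # SSD persistent disk instead of pd-standard
  fstype: ext4
A PVC can then select this class explicitly via spec.storageClassName: ssd-fast instead of relying on the default one.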

-
I followed the same steps you mentioned in Dynamic Provisioning, but when I check the status of the PVC it's in a Pending state. Below is the error I get. – Dusty Jan 21 '21 at 13:10
-
Failed to provision volume with StorageClass "ssd": Failed to get GCE GCECloudProvider with error – Dusty Jan 21 '21 at 13:10
-
@Dusty you probably didn't configure GCE as cloud provider. Please follow steps from this [answer](https://stackoverflow.com/a/50364756/14801225) – matt_j Jan 21 '21 at 13:13
-
There they have edited the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, but when I check my cluster, I don't have a file at this location. Please help – Dusty Jan 21 '21 at 13:16
-
Did you deploy k8s using `kubeadm`? Which Linux distribution do you use? – matt_j Jan 21 '21 at 13:19
-
https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/01-prerequisites.md – Dusty Jan 21 '21 at 13:20
-
Do you have the `/etc/systemd/system/kubelet.service` file? Try adding `--cloud-provider=gce` at the end of the `ExecStart` statement. You probably also need to add `--cloud-provider=gce` to `ExecStart` in `/etc/systemd/system/kube-controller-manager.service`, and then restart the `kubelet` and `kube-controller-manager` services. – matt_j Jan 21 '21 at 13:58
-
I have added --cloud-provider=gce in /etc/systemd/system/kube-apiserver.service and /etc/systemd/system/kube-controller-manager.service on each controller, and also added --cloud-provider=gce in /etc/systemd/system/kubelet.service on each worker. Eventually, on each worker I executed { sudo systemctl daemon-reload; sudo systemctl enable containerd kubelet kube-proxy; sudo systemctl start containerd kubelet kube-proxy } – Dusty Jan 21 '21 at 16:16
-
And also, on each controller I executed { sudo systemctl daemon-reload; sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler; sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler } after adding the --cloud-provider flag – Dusty Jan 21 '21 at 16:17
-
It's working now. Since I'm running my cluster outside of GKE, we need to specifically provide the --cloud-provider=gce parameter in the API server and controller-manager conf files, and restart kubelet. BTW thanks for your support. – Dusty Jan 22 '21 at 08:21
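For anyone landing here later, a minimal sketch of the fix described in these comments, assuming a kubernetes-the-hard-way style layout with systemd units under /etc/systemd/system/ (exact unit contents vary by setup, and the VMs also need a service account with sufficient Compute Engine permissions, as mentioned above):
# In /etc/systemd/system/kube-apiserver.service and
# /etc/systemd/system/kube-controller-manager.service (on each controller),
# and in /etc/systemd/system/kubelet.service (on each worker),
# append the flag to the existing ExecStart command line:
#
#   ExecStart=... \
#     --cloud-provider=gce
#
# Then reload systemd and restart the services.

# On each controller:
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver kube-controller-manager kube-scheduler

# On each worker:
sudo systemctl daemon-reload
sudo systemctl restart kubelet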