
I am trying to run a Factorio game server on Kubernetes (hosted on GKE).

I have set up a StatefulSet with a PersistentVolumeClaim and mounted it in the game server's save directory.

I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server.

What would be the best way to upload a file to this Persistent Volume Claim?

I have thought of two ways, but I'm not sure which is best or if either is a good idea:

  • Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim
  • Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container
Noah Huppert
    See the new snapshot/restore feature for CSI with Kubernetes 1.12 (Sept. 2018): https://stackoverflow.com/a/52570512/6309 – VonC Sep 29 '18 at 16:52

3 Answers


It turns out there is a much simpler way: The kubectl cp command.

This command lets you copy data from your computer to a container running on your cluster.

In my case I ran:

kubectl cp ~/.factorio/saves/k8s-test.zip factorio/factorio-0:/factorio/saves/

This copied the k8s-test.zip file on my computer to /factorio/saves/k8s-test.zip in a container running on my cluster.

See kubectl cp -h for more detailed usage information and examples.
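You can confirm the copy with kubectl exec, and kubectl cp also works in the other direction. The namespace, pod name, and paths below are the ones from the command above (note that `kubectl cp` uses the `namespace/pod:path` form):

```shell
# List the saves directory inside the pod to confirm the upload arrived
kubectl exec -n factorio factorio-0 -- ls -l /factorio/saves/

# Copy the save back out of the pod to the local machine
kubectl cp factorio/factorio-0:/factorio/saves/k8s-test.zip ./k8s-test.zip
```

These commands require a running cluster and pod, so they are shown here as a sketch rather than something you can run standalone.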

Noah Huppert
    I've been searching for something like this! Why is this not listed on the cheat sheet? https://kubernetes.io/docs/reference/kubectl/cheatsheet/ – chringel21 Nov 30 '20 at 09:40
  • this is exactly the right command I've been looking for. thanks, you saved my life! – Aldi Unanto Feb 21 '22 at 15:38
  • what if the files you are copying are in use by the container application? Say you need to replace the current database file with another one? You have to stop the application for that, but stopping the application also stops the container. so.... – majorgear Feb 27 '23 at 05:08
  • @majorgear if your source pod mounts a persistent volume and the database saves into that, then you can stop your pod and make a new pod (say, a generic alpine container that sleeps forever) which mounts this persistent volume. Then you can copy from that pod. However, if your pod doesn't put files in a persistent volume then you're in trouble, as once that pod shuts down you will lose the files. Either find a way of stopping writes to the db and triggering a disk save, or make a database dump, copy that out, and restore it to a new database. – Noah Huppert Feb 28 '23 at 06:14
  • I also needed a method to transfer file between my workstation and the pv. I mounted the pv on a dperson/samba pod and transferred them using a cifs share. – majorgear Feb 28 '23 at 23:11
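The helper-pod approach described in the comments above can be sketched as a manifest like this. The pod name, image, and claim name are illustrative; substitute your own PVC name:

```yaml
# Hypothetical helper pod that mounts the PVC so files can be copied
# in or out with `kubectl cp` while the real workload is stopped.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-helper
spec:
  containers:
    - name: shell
      image: alpine:3
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: factorio-saves   # <- your PVC's name here
```

After `kubectl apply -f` on this, something like `kubectl cp ./k8s-test.zip pvc-helper:/data/` uploads into the volume; delete the helper pod when done. Note this only works for both pods at once if the PVC's access mode allows it (or the original pod is stopped first, as the comment says).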

You can create a data folder on your Google Cloud instance:

gcloud compute ssh <your-instance> --zone <your-zone>
mkdir data

Then create PersistentVolume:

kubectl create -f hostpath-pv.yml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-local
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/<user-name>/data"

Create PersistentVolumeClaim:

kubectl create -f hostpath-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: local

Then copy the file to the instance:

gcloud compute scp <your file> <your-instance>:~ --zone <your-zone>

And at last mount this PersistentVolumeClaim to your pod:

...
      volumeMounts:
        - name: hostpath-pvc
          mountPath: <your-path>
          subPath: hostpath-pvc
  volumes:
    - name: hostpath-pvc
      persistentVolumeClaim:
        claimName: hostpath-pvc
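Assembled into a complete Pod, that fragment might look like the following. The pod name, image, and mount path are placeholders for illustration, not part of the original answer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server          # placeholder name
spec:
  containers:
    - name: game-server
      image: <your-image>    # placeholder image
      volumeMounts:
        - name: hostpath-pvc
          mountPath: <your-path>
          subPath: hostpath-pvc
  volumes:
    - name: hostpath-pvc
      persistentVolumeClaim:
        claimName: hostpath-pvc
```

Because the PersistentVolume is a hostPath, this only works if the pod is scheduled on the node that has /home/<user-name>/data, so it is a sketch for single-node or test setups rather than a general GKE solution.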

And copy the file into the data folder on the instance:

  gcloud compute scp <your file> <your-instance>:/home/<user-name>/data/hostpath-pvc --zone <your-zone>
Andrey

You can just use Google Cloud Storage (https://cloud.google.com/storage/), since you're only looking at serving a few files.

The other option is to use PersistentVolumeClaims. This works better if you're not updating the files frequently, because you will need to detach the disk from the Pods (i.e., delete the Pods) while doing so.

You can create a GCE persistent disk, attach it to a GCE VM, put files on it, then delete the VM and bring the PD into Kubernetes as a PersistentVolumeClaim. There's a doc on how to do that: https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#using_preexsiting_persistent_disks_as_persistentvolumes
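If you go the Cloud Storage route, one common pattern (sketched here with a hypothetical bucket name and volume) is an initContainer that pulls the save down with gsutil before the game container starts; the node's service account needs read access to the bucket:

```yaml
spec:
  initContainers:
    - name: fetch-save
      image: google/cloud-sdk:slim   # provides gsutil
      command: ["gsutil", "cp", "gs://<your-bucket>/k8s-test.zip", "/factorio/saves/"]
      volumeMounts:
        - name: saves
          mountPath: /factorio/saves
  containers:
    - name: factorio
      image: <your-image>            # placeholder image
      volumeMounts:
        - name: saves
          mountPath: /factorio/saves
  volumes:
    - name: saves
      emptyDir: {}
```

This is a sketch, not a definitive setup: with an emptyDir the save is re-fetched on every pod start, whereas with a PVC in place of the emptyDir it would persist across restarts.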

ahmet alp balkan
  • Hi Ahmet, thanks for the answer. How might I go about connecting Google Cloud Storage to a Kubernetes pod? The method of using a Persistent Volume Claim is exactly what I was looking for. Thank you for the link to the guide. – Noah Huppert Jun 05 '18 at 21:41