
I did look at this question, but I don't feel it answers mine: kubernetes persistence volume and persistence volume claim exceeded storage

Anyway, I have looked through the documentation but could not find out what happens when a PVC-backed Azure disk is full. We have a Grafana application which monitors some data, and we use the PVC to make sure the data survives if the pod gets killed. Right now the pod continuously fetches data and the disk fills up more and more. What happens when the disk is full? Ideally it would be nice to implement some functionality such that when the disk gets, say, 80% full, it removes the oldest 20% of the data. Or how do we tackle this problem?

pvc:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graphite-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 256Gi
```
Frank Wang-MSFT
1 Answer


Think of a PVC as a folder mounted into the container running your Grafana service. It has the fixed size you requested and, as the question stands, it is not going to grow on its own.
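That said, if the StorageClass backing the claim permits volume expansion, the claim can be grown later by editing `spec.resources.requests.storage`. A minimal sketch of such a StorageClass for Azure disks; the class name here is illustrative, and whether expansion works depends on your provisioner and Kubernetes version:

```yaml
# Sketch: StorageClass that allows the PVC size to be increased later.
# The name "managed-premium-expandable" is made up for this example.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-expandable
provisioner: kubernetes.io/azure-disk
allowVolumeExpansion: true
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```

With such a class in place, `kubectl edit pvc graphite-pvc` and raising the requested storage would trigger the resize, rather than the claim being stuck at its original size.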

What happens when the disk is full?

Nothing different happens here from a normal service that runs out of disk space on any system. If it were your local machine or a cloud VM, you would get an alert about storage, and if you didn't take action, the service would eventually fail with an out-of-disk-space error. You can use services like Prometheus with the Kubernetes metrics pipeline to get storage-space alerts, but by default Kubernetes won't raise any alert.

ref - https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline
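As a sketch of what such an alert could look like, assuming you run the Prometheus operator: the kubelet exposes `kubelet_volume_stats_*` metrics per PVC, and a rule like the following would fire when the claim is more than 80% full. The group and alert names are made up for this example:

```yaml
# Sketch: PrometheusRule (prometheus-operator CRD) alerting on a nearly
# full PVC, using the kubelet's per-claim volume metrics.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pvc-usage-alerts
spec:
  groups:
    - name: pvc-usage
      rules:
        - alert: PersistentVolumeAlmostFull
          expr: |
            kubelet_volume_stats_available_bytes{persistentvolumeclaim="graphite-pvc"}
              / kubelet_volume_stats_capacity_bytes{persistentvolumeclaim="graphite-pvc"} < 0.20
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "PVC graphite-pvc is more than 80% full"
```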

how do we tackle this (disk space) problem?

Again, the same way you would on a normal system; there are a number of solutions. But if you think about it, the system, the VM, or Kubernetes is not the right candidate to decide which files should be removed and which kept: Kubernetes does not know what the data is, and it does not own the data. The service does. On the other hand, you can use the service, or create a custom archiving service, to take old data from your Grafana PVC and move it to S3 or any other storage.
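As a minimal sketch of the "delete oldest first when 80% full" idea from the question, a script like the one below could run as a Kubernetes CronJob against the mounted volume. The data path, threshold, and the choice to delete whole files oldest-first are all assumptions to adapt (it also uses GNU `find -printf`):

```shell
#!/bin/sh
# Sketch: free space on the mounted PVC by deleting the oldest files
# until usage drops below THRESHOLD percent. DATA_DIR is an assumed
# Graphite whisper path -- adjust to your deployment.
DATA_DIR="${DATA_DIR:-/opt/graphite/storage/whisper}"
THRESHOLD="${THRESHOLD:-80}"

[ -d "$DATA_DIR" ] || exit 0  # nothing to do if the volume isn't mounted

usage_pct() {
  # Column 5 of `df -P` is "Use%"; strip the trailing % sign.
  df -P "$DATA_DIR" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

while [ "$(usage_pct)" -gt "$THRESHOLD" ]; do
  # Pick the single oldest regular file (GNU find prints mtime + path).
  oldest=$(find "$DATA_DIR" -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
  [ -n "$oldest" ] || break
  rm -f -- "$oldest"
done
```

One caveat with Graphite specifically: whisper files are preallocated at a fixed size per metric, so deleting the oldest files removes entire metric series rather than the oldest datapoints. To age out old datapoints instead, adjusting Graphite's retention configuration is the more appropriate tool; the script above is only the generic "prune oldest files" pattern.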

damitj07
  • So, there is no "smart" way to make the folder "automatically update"? I cannot add some functionality to the pod that wipes the data of the mounted folder once it is close to being full? I mean, we have a graphite pod (hence the graphite PVC) gathering the data and grafana using it, just to clarify. So a suggestion is that we add some alerts to the graphite PVC that warn us when it is about to be full, and we manually run a kubectl command to wipe the data? – Christian Hjelmslund Nov 08 '19 at 12:02
  • Keep in mind that `emptyDir` and `hostPath` volumes are exceptions. When using this type of volume [there is no "soft limit"](https://stackoverflow.com/questions/55619425/hostpath-persistentvolume-and-spec-capacity-storage-attribute) on how much space can be consumed. – Eduardo Baitello Nov 08 '19 at 12:54
  • @EduardoBaitello so the "soft limit" is how much you are paying, or how is it to be understood? Can we exceed the 256 GB, so that it just affects the bill rather than stopping graphite from putting data in the PVC? – Christian Hjelmslund Nov 08 '19 at 12:57
  • `we manually do a kubectl command to wipe the data?` - I think it has to be more nuanced than that, because the data is not just any data: it will have graphite & grafana configs, your historical metrics, etc., depending on the service using the PVC. Thus, I wouldn't suggest using `kubectl` commands; rather, create a cron script to archive historical data. – damitj07 Nov 08 '19 at 15:35