We have succeeded in creating the pods, services, and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created a YAML file to create an EBS volume in AWS, and it works as expected: I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).

I am able to create the files below successfully, and the volume is also created, but my Pods go into a Pending state while the volume still shows as available in AWS. I am not able to see any error logs anywhere.
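
For reference, the reason a Pod is Pending usually shows up in its events rather than in any log; a minimal check, assuming the default Stateful Set naming for the manifests below (pod web2-0, claim mypd1-web2-0), is:

kubectl describe pod web2-0
kubectl describe pvc mypd1-web2-0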

StorageClass file:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: mongo-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Main file:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web2
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  replicas: 2
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - image: mongo
        name: mongodb
        ports:
        - name: web2
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
  volumeClaimTemplates:
  - metadata:
      name: mypd1
      annotations:
        volume.alpha.kubernetes.io/storage-class: mongo-ssd
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Now I am planning to set up pod autoscaling. I have seen pod autoscaling for Deployments and Replication Controllers. Can we implement pod autoscaling for a Stateful Set as well?

1 Answer

The Horizontal Pod Autoscaler can scale only Deployments, Replica Sets, or Replication Controllers. You cannot autoscale Stateful Sets (see the Kubernetes documentation for more details).
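
For illustration, a minimal HorizontalPodAutoscaler targeting a Deployment would look something like this (the Deployment name web is a placeholder, not something from your setup):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment        # must be a Deployment / Replica Set / Replication Controller
    name: web
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80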

The main reason is that most stateful applications running in Stateful Sets (such as your MongoDB) are usually not as easy to scale up and down as stateless applications running as Deployments. Scaling up and down is usually quite a complicated process for stateful apps, one you do not want to trigger based on the autoscaler alone. It usually requires some additional support logic in the application itself, and especially with scaling down it can also put your data at risk. Autoscaling is more useful for short-term changes in load; scaling of Stateful Sets requires more long-term thinking. Because of this complexity, you do not want your database scaling up and down every minute.
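
That said, you can still change the replica count of a Stateful Set manually whenever you decide it is needed, for example (using the Stateful Set name from your question):

kubectl scale statefulset web2 --replicas=3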

  • Thank you for your response. I agree with your point. If I use a Deployment or Replication Controller, how can we manage volumes in AWS? – Raju Jan 29 '18 at 14:24
  • With a Stateful Set only, I am able to create volumes in different availability zones along with my pods. If I use a Replication Controller or Deployment and autoscale, how will my volumes be created? – Raju Jan 29 '18 at 14:28
  • The Deployments / Replication Controllers are more suitable for stateless processes which do not use persistent disks. So the question is whether you got your architecture "right" if you need persistent volumes in your pods created by a Deployment / Replication Controller. Do you really need a persistent volume? If it is just some scratch data, you can use something such as emptyDir volumes, which work fine with Deployments / Replication Controllers (see the sketch after these comments). – Jakub Jan 29 '18 at 15:19
  • Thank you for the response, Jakub. Our application stack is Tomcat/JBoss, Kafka, HiveMQ, Postgres, MongoDB, EFS, persistent volumes (Amazon EBS), Redis, NFS/EFS (from Amazon), etc. Suppose I plan 2 replicas per pod; how can I plan for persistent volumes? If I go with the "volumeMount" concept, I can attach one volume per Pod, and if it scales to 2 Pods, will a second volume be created? That approach suits RCs and Deployments. But if I use a Stateful Set with a volumeClaimTemplate, I can create a number of volumes along with my pods, though that is suitable only for Stateful Sets. – Raju Jan 29 '18 at 16:19
  • I hope you understand my situation – Raju Jan 29 '18 at 16:19
  • I have only one option: to store data using a network share. I have to store my main code, like Tomcat data etc. (rather than static content), in NFS. – Raju Jan 29 '18 at 16:21
  • The Pods will be created and deleted on demand, without any warning, and the volumes will be gone with them. That means you cannot really store anything there that should be persisted, because you will be losing the data all the time while scaling down. Which leads me again to the question of whether you really need a persistent volume or only a scratch volume such as emptyDir. – Jakub Jan 29 '18 at 16:38
  • If what you want to have on the volume is the JAR / WAR file of your application, then you should not take that from a volume. You should ideally have it in the Docker image already, to make sure the Docker image is "immutable". – Jakub Jan 29 '18 at 16:39
  • Yes, I really require persistent storage. Other than the JAR/WAR, we have to put some application data over there. Let's take Mongo: shard 1 and shard 2 of Mongo will be there. "Scratch volume" means no data will persist there, right? – Raju Jan 29 '18 at 17:28
  • For Mongo you should use Stateful Sets. So you will not have this problem. – Jakub Jan 29 '18 at 17:34
  • Okay. Let's take Tomcat/HiveMQ: if I have 5 nodes in different availability zones and I want to set 4 replicas, in this case how can I use a persistent volume without using a volumeClaimTemplate (which is only for Stateful Sets)? – Raju Jan 29 '18 at 17:45
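
For reference, a minimal sketch of the emptyDir approach mentioned in the comments, using a hypothetical Tomcat Deployment (the name, image, and mount path are placeholders, not taken from the question):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        volumeMounts:
        - mountPath: /usr/local/tomcat/temp   # scratch space only
          name: scratch
      volumes:
      - name: scratch
        emptyDir: {}   # created with the Pod, deleted with the Pod

An emptyDir lives and dies with its Pod, which is why it only fits scratch data; anything that must survive rescheduling still belongs on a PersistentVolume, and for MongoDB that means a Stateful Set, as suggested in the answer.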