I've just upgraded my Kubernetes cluster to version 1.7.11, which increased the maximum number of pods I can run per node from 40 to 100. However, it seems I can now only attach 39 volumes per node. If I try to create more, I get:
No nodes are available that match all of the following predicates:: MaxVolumeCount (3), PodToleratesNodeTaints (1).
This is rather annoying, because I was hoping to put more than 40 pods on a node. I don't want to decrease the node size, because that would limit the maximum amount of CPU I can allow a pod to use.
I've set up my cluster on AWS using Kops. Is there a way to change the MaxVolumeCount limit?
Is it normal to have a MaxVolumeCount limit of 39?
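For context, this is roughly how I've been checking how many volumes each node currently has attached (a sketch; it assumes kubectl access to the cluster and reads the `volumesAttached` field that the attach/detach controller writes into NodeStatus):

```shell
# Print each node with the number of EBS volumes currently attached to it,
# taken from .status.volumesAttached (entries look like
# "kubernetes.io/aws-ebs/aws://<zone>/vol-xxxx")
for node in $(kubectl get nodes -o name); do
  count=$(kubectl get "$node" -o jsonpath='{.status.volumesAttached[*].name}' \
    | tr ' ' '\n' | grep -c 'aws-ebs' || true)
  echo "$node: $count attached volumes"
done
```

The counts plateau at 39 per node, which matches the MaxVolumeCount failure above.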
System info:
Kernel Version: 4.4.111-k8s
OS Image: Debian GNU/Linux 8 (jessie)
Container Runtime Version: docker://1.12.6
Kubelet Version: v1.7.11
Kube-Proxy Version: v1.7.11
Operating system: linux
Architecture: amd64