
Is there any way to change the limits (open file descriptors in my case, both soft and hard) for a running process inside a pod?

I'm running a memcached deployment using the Helm stable/memcached chart, but the default limit of 1024 open files is far too low for the intended concurrency.

If that is not possible, what is the right way to change the limits for a deployment, or globally on a Kubernetes cluster (running on AWS and set up with kops)?

migas
  • This is more of a Docker image problem than a Kubernetes one: http://mtyurt.net/2017/04/06/docker-how-to-increase-number-of-open-files-limit/ – Vishal Biyani Apr 27 '17 at 12:15
  • Indeed it is, but it seems one cannot pass `--ulimit` to `docker run` through a Kubernetes deployment. – migas Apr 27 '17 at 13:21
  • It appears that you can't currently set a ulimit but it is an open issue: https://github.com/kubernetes/kubernetes/issues/3595 – ahmet alp balkan Apr 27 '17 at 17:04

1 Answer


The problem was that memcached's maximum number of simultaneous connections defaults to 1024, and the chart uses the defaults.

I needed to modify the deployment so memcached is started with its `-c <max connections>` flag set to a higher value; a sketch of the change is below.
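For reference, a minimal sketch of the kind of change involved, not the chart's actual template: the memcached container's arguments need to include `-c` with the desired connection count. The names, image tag, and the 4096 value below are illustrative assumptions.

```yaml
# Illustrative Deployment fragment: start memcached with a higher
# simultaneous-connection limit than the 1024 default.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memcached
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memcached
  template:
    metadata:
      labels:
        app: memcached
    spec:
      containers:
        - name: memcached
          image: memcached:1.6
          command: ["memcached"]
          # -m sets cache memory in MB; -c raises the maximum number of
          # simultaneous connections above the 1024 default.
          args: ["-m", "64", "-c", "4096"]
          ports:
            - containerPort: 11211
```

With the Helm chart, the equivalent is overriding whichever chart value is rendered into the container arguments (if the chart exposes one), rather than editing the Deployment by hand.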
