I have xinetd running inside a Docker container (base image centos:7) on a Rocky Linux 9 host machine.
On starting the xinetd service in the container, it appears to hang. So I ran strace and found the following output, rapidly counting up:
close(181189) = -1 EBADF (Bad file descriptor)
close(181190) = -1 EBADF (Bad file descriptor)
close(181191) = -1 EBADF (Bad file descriptor)
close(181192) = -1 EBADF (Bad file descriptor)
close(181193) = -1 EBADF (Bad file descriptor)
close(181194) = -1 EBADF (Bad file descriptor)
close(181195) = -1 EBADF (Bad file descriptor)
close(181196) = -1 EBADF (Bad file descriptor)
Running strace xinetd on another CentOS 8 host, I see the same behavior.
It appears that xinetd is iterating over every available file descriptor and closing it, going past /proc/sys/fs/file-max (and past any limit that is configured) until it runs out of numbers.
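A small sketch of what seems to be going on, assuming xinetd follows the classic daemonization pattern of closing every possible descriptor up to the soft open-files limit before forking. Printing that limit shows how long such a close() loop would run in the current environment:

```shell
# Assumption: the daemon's close() loop is bounded by the soft
# RLIMIT_NOFILE ("ulimit -Sn"). A huge limit means a correspondingly
# huge run of close() calls, most of which return EBADF because the
# descriptors were never open.
soft=$(ulimit -Sn)
echo "a close() loop over fds 0..$((soft - 1)) would run $soft times"
```

Running the same check inside the container (e.g. `docker run --rm centos:7 sh -c 'ulimit -Sn'`) shows the bound xinetd actually sees there; on hosts where the container runtime runs with a very high or unlimited LimitNOFILE, that value can be enormous, which would make the loop look like a hang.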
What is happening here? Why would it have this behavior?
Update, March 2023:
For those in similar circumstances, based on the answer detailed below: you can set ulimits via Docker when running a container. The same option is available in Docker Compose and in Nomad. Unfortunately, it is not available in Kubernetes.
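For reference, a sketch of that workaround (the 1024:1024 values and the service name are illustrative assumptions, not recommendations; pick limits that suit your workload):

```shell
# Cap the open-files limit for a single container at run time, so the
# startup close() loop stays short (soft:hard values are illustrative):
docker run --rm --ulimit nofile=1024:1024 centos:7 sh -c 'ulimit -Sn'

# Roughly equivalent Docker Compose configuration, per service
# (service name "xinetd" is an assumption):
#   services:
#     xinetd:
#       ulimits:
#         nofile:
#           soft: 1024
#           hard: 1024
```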