I am trying to figure out how resource limits are handled in rootless container environments. I am running `influxdb:latest` (tried `:alpine` as well) as a rootless container with nerdctl and containerd on an up-to-date Ubuntu host. While all other containers on the host run fine and InfluxDB itself starts up without problems, I get "too many open files" errors when ingesting about 3 years of data into the database. The data is sampled on a 24h basis, 926 days in total with 5 points each. The logs indicate that the errors arise during shard creation.
From what I have read over the last few hours, when running rootless containers the container's 'root' user gets mapped to a UID from my personal user's namespace on the host system. From my understanding, the container's 'root' should therefore inherit my host user's limits on open files and processes. `ulimit -u` gives 14959 as the open files limit, and `lsof -u myuser | wc -l` indicates about 833 open files during normal operation. Accordingly, there should be plenty of room for the influxdb container to open more files. The errors start to happen while creating fewer than 100 shards on a freshly set up database.
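This is a rough sketch of how I compared the limits on the host with what the container actually sees (the container name is a placeholder):

```bash
# limits of my host user
ulimit -n                  # max open files (soft limit)
ulimit -u                  # max user processes
lsof -u myuser | wc -l     # files currently open by my user

# effective limits inside the rootless container
nerdctl exec influxdb sh -c 'ulimit -n'
nerdctl exec influxdb cat /proc/1/limits   # limits of the influxd process (PID 1)
```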
After hours of testing and googling I was able to mitigate the problem by setting `--ulimit nofile=5000:5000` when running the influxdb container.
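Concretely, the workaround looks roughly like this (again, name, port and volume are placeholders):

```bash
# explicitly raise the soft and hard open-files limit for the container
nerdctl run -d \
  --name influxdb \
  --ulimit nofile=5000:5000 \
  -p 8086:8086 \
  -v influxdb-data:/var/lib/influxdb2 \
  influxdb:latest
```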
Can someone explain why this is necessary, even though my host user's limit is way above what should be needed and well above the value I now have to pass via `--ulimit`?
I have already tried mounting the influx data folder as a named volume, as a bind mount, and without mounting anything at all. The issue stays the same, independent of how storage is handled; see the variants below. My impression is that the container gets started with a fairly low limit on open files. But why? I could not find anything about this in the documentation of either project.
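For completeness, these are the storage variants I tried (paths and names are placeholders), all with the same result until `--ulimit` was added:

```bash
# named volume
nerdctl run -d -v influxdb-data:/var/lib/influxdb2 influxdb:latest

# bind mount of a host directory
nerdctl run -d -v /home/myuser/influxdb:/var/lib/influxdb2 influxdb:latest

# no mount at all, data stays in the container's writable layer
nerdctl run -d influxdb:latest
```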