Inside the container, that file is a bind mount. The mount statistics come from the underlying filesystem where the file is originally located, in this case /dev/vda1. They are not statistics for that single file; that is just how mount reports data for a bind mount. The same happens for the overlay filesystem, since it is also based on a different underlying filesystem. Because that underlying filesystem is the same in each case, you see the exact same mount statistics for each.
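You can see this for yourself by bind mounting any single host file into a throwaway container and checking what df reports for it (the host file and target path below are arbitrary examples):

$ docker run --rm -v /etc/hostname:/host-hostname:ro busybox df -i /host-hostname

The statistics shown are those of the host filesystem containing /etc/hostname, not anything specific to the one file.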
Therefore you are exhausting the inodes on your host filesystem, likely the /var/lib/docker filesystem, which, if you have not configured a separate mount, will be the / (root) filesystem. Why you are using so many inodes on that filesystem will require debugging on your side to see what is creating so many files. Often you'll want to separate docker from the root filesystem by making /var/lib/docker a separate partition, or by symlinking it to another drive where you have more space.
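As a starting point for that debugging, you can confirm which filesystem docker's data root lives on and look for directories holding unusually many files (the second command assumes GNU find):

$ df -i "$(docker info --format '{{ .DockerRootDir }}')"
$ find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head

The -xdev flag keeps the search on a single filesystem, and the pipeline lists the directories containing the most files.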
As another example to show that these are all the same filesystem:
$ df -i /var/lib/docker/.
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/bmitch--t490--vg-home 57098240 3697772 53400468 7% /home
$ docker run -it --rm busybox df -i
Filesystem Inodes Used Available Use% Mounted on
overlay 57098240 3697814 53400426 6% /
tmpfs 4085684 17 4085667 0% /dev
tmpfs 4085684 16 4085668 0% /sys/fs/cgroup
shm 4085684 1 4085683 0% /dev/shm
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/resolv.conf
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hostname
/dev/mapper/bmitch--t490--vg-home
57098240 3697814 53400426 6% /etc/hosts
tmpfs 4085684 1 4085683 0% /proc/asound
tmpfs 4085684 1 4085683 0% /proc/acpi
tmpfs 4085684 17 4085667 0% /proc/kcore
tmpfs 4085684 17 4085667 0% /proc/keys
tmpfs 4085684 17 4085667 0% /proc/timer_list
tmpfs 4085684 17 4085667 0% /proc/sched_debug
tmpfs 4085684 1 4085683 0% /sys/firmware
From there you can see /etc/resolv.conf, /etc/hostname, and /etc/hosts are each bind mounts going back to the /var/lib/docker filesystem, because docker creates and maintains these files for each container.
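You can find the host-side location of those files for a given container with docker inspect ($container_id here is a placeholder for your container's name or id):

$ docker inspect --format '{{ .ResolvConfPath }}' $container_id
$ docker inspect --format '{{ .HostnamePath }}' $container_id
$ docker inspect --format '{{ .HostsPath }}' $container_id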
If removing the container frees up a large number of inodes, then check your container to see if you are modifying or creating files in the container filesystem; these will all be deleted as part of the container removal. You can see currently created files (which won't capture files that were created and then deleted but are still held open by a process) with:

$ docker diff $container_id
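If that diff output is long, a rough way to tally just the added files (docker diff prefixes each line with A for added, C for changed, or D for deleted):

$ docker diff $container_id | grep -c '^A'

You can also compare the size of each container's writable layer with docker ps --size.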