Why does a fresh container from a small-ish Docker image say its 10G disk is full when it isn't?
I'm running this in a Debian 10 AppVM in QubesOS. In Debian 10, I do:
sudo apt-get -y install docker.io
sudo docker pull node:13-buster-slim
At the time of writing, this gives me docker v18.09.1 using the 'overlay2' storage driver by default.
root@coviz:~# sudo docker --version
Docker version 18.09.1, build 4c52b90
root@coviz:~# docker info | grep Storage
Storage Driver: overlay2
root@coviz:~#
My docker host now has only this 181M image and no containers, and it's using only 0.5G out of the 20G available. Plenty of free space.
root@coviz:~# sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@coviz:~# sudo docker image ls -a
REPOSITORY   TAG              IMAGE ID       CREATED      SIZE
node         13-buster-slim   e4217af9b7c7   9 days ago   181MB
root@coviz:~# sudo docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          1         0         180.7MB   180.7MB (100%)
Containers      0         0         0B        0B
Local Volumes   0         0         0B        0B
Build Cache     0         0         0B        0B
root@coviz:~#
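(For reference, this is roughly how I'm judging the host's free space; the paths assume Docker's default data root of /var/lib/docker, which I haven't changed.)
sudo df -h /
sudo du -sh /var/lib/docker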
I'm working on creating a Dockerfile for my project, so I execute the following command to spin up a new container from the above base image and drop into a shell on that temporary container:
root@coviz:~# docker run --rm -it --entrypoint /bin/bash e4217af9b7c7
root@97a318c599ab:/#
It isn't long before I encounter issues when testing out commands to install dependencies with apt-get. I think the issue is that apt needs to store cache data in /var/lib/apt/lists/. The actual error is "At least one invalid signature was encountered", but it actually appears to be a disk-full issue (the apt key verification fails because it can't store the signature to disk). Running apt-get clean doesn't help; the cache is already empty. This is a fresh container based on a fresh image.
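For what it's worth, a minimal way to reproduce this inside the container looks roughly like the following (curl is just an example package here, not something I specifically need); the invalid-signature error already shows up at the update step:
apt-get update
apt-get install -y curl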
Checking the disk with df in this fresh container immediately shows that there's only 17M of disk space available, but I can only account for ~200M with du. Again, this is a fresh container, so I highly doubt this is an issue with a file stuck in a 'deleting' state that's still held open by a process.
root@97a318c599ab:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         9.6G  9.1G   17M 100% /
tmpfs            64M     0   64M   0% /dev
tmpfs           255M     0  255M   0% /sys/fs/cgroup
/dev/xvda3      9.6G  9.1G   17M 100% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           285M     0  285M   0% /proc/asound
tmpfs           285M     0  285M   0% /proc/acpi
tmpfs           285M     0  285M   0% /proc/scsi
tmpfs           285M     0  285M   0% /sys/firmware
root@97a318c599ab:/# du -sh /*
4.8M /bin
4.0K /boot
0 /dev
612K /etc
20K /home
12M /lib
4.0K /lib64
4.0K /media
4.0K /mnt
5.2M /opt
du: cannot access '/proc/11/task/11/fd/4': No such file or directory
du: cannot access '/proc/11/task/11/fdinfo/4': No such file or directory
du: cannot access '/proc/11/fd/4': No such file or directory
du: cannot access '/proc/11/fdinfo/4': No such file or directory
0 /proc
136K /root
8.0K /run
4.1M /sbin
4.0K /srv
0 /sys
2.2M /tmp
160M /usr
5.9M /var
root@97a318c599ab:/#
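To rule out the deleted-but-still-open-file theory from the host side as well, something like the following should be enough (this assumes lsof is installed on the docker host and that the overlay2 data lives in the default /var/lib/docker/overlay2):
# list open files whose link count is below 1, i.e. deleted but still held open
sudo lsof +L1
# total size of everything the overlay2 driver has written on the host
sudo du -sh /var/lib/docker/overlay2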
Moreover, docker ps -s shows that the size of the "writable layer" on my container is empty (0B):
root@coviz:~# docker ps -a -s
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS   NAMES         SIZE
320af1498086   e4217af9b7c7   "docker-entrypoint.s…"   15 minutes ago   Up 15 minutes           epic_leakey   0B (virtual 181MB)
root@coviz:~#
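In case it's relevant, the writable layer itself can be located on the host via docker inspect; this is just a sketch assuming the overlay2 driver, whose upper (writable) directory is reported under GraphDriver.Data.UpperDir:
# path of the container's writable (upper) overlay directory on the host
UPPER=$(sudo docker inspect -f '{{ .GraphDriver.Data.UpperDir }}' 320af1498086)
# how much data has actually been written into that layer
sudo du -sh "$UPPER"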
So why is the disk full in this fresh docker container (based on a ~200M image)? What's taking up those ~9G of unaccounted-for space?