
We use Jenkins to build Docker images, and everything was fine until last week. Now every image build in Jenkins fails with "Error response from daemon: Error processing tar file(exit status 1): write /app/node_modules/acorn/dist/acorn_loose.es.js: no space left on device" (the exact file it fails on can vary by project). The image builds fine if I use Docker directly on the server, but it fails from Jenkins.

I have tried removing old containers, images, etc., but to no avail. Disk space and inodes both look fine, so I'm not sure what to try next. Any help is appreciated.
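
For reference, the cleanup I ran was roughly the following (from memory, so the exact invocations may have differed slightly):

# remove all stopped containers, then dangling images
docker container prune
docker image prune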

Result of "docker info":

Containers: 55
 Running: 48
 Paused: 0
 Stopped: 7
Images: 59
Server Version: 17.03.2-ce
Storage Driver: overlay
 Backing Filesystem: extfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.45-rancher
Operating System: RancherOS v1.1.0
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 19.61 GiB
Name: rancher
ID: Z7Z3:T3NW:N4O3:FKMZ:7KH6:FJ7R:TJ6A:FXLW:KNUL:WMRC:ED74:KHEM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 dockerhub.companysite.net:5000
 127.0.0.0/8
Live Restore Enabled: false

Result of "df -h":

Filesystem      Size  Used Avail Use% Mounted on
overlay          47G   27G   18G  60% /
tmpfs           9.9G     0  9.9G   0% /dev
tmpfs           9.9G     0  9.9G   0% /sys/fs/cgroup
/dev/sda1        47G   27G   18G  60% /.r
shm              64M     0   64M   0% /dev/shm
tmpfs           9.9G     0  9.9G   0% /sys/firmware

Result of "df -ih":

Filesystem     Inodes IUsed IFree IUse% Mounted on
overlay           13M  2.8M  9.8M   22% /
tmpfs            2.5M    16  2.5M    1% /dev
tmpfs            2.5M    15  2.5M    1% /sys/fs/cgroup
/dev/sda1         13M  2.8M  9.8M   22% /.r
shm              2.5M     1  2.5M    1% /dev/shm
tmpfs            2.5M     1  2.5M    1% /sys/firmware

2 Answers


You may have a bunch of old dangling volumes left over that are causing problems; try:

docker volume rm $(docker volume ls -qf dangling=true)

(This does delete data, so you may want to check that none of the dangling volumes contain anything you want to keep before running it.)
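
For example, you can review what would be removed first; VOLUME_NAME below is just a placeholder for a name from the listing:

# list the names of volumes that no container references
docker volume ls -qf dangling=true

# inspect one of them before deleting anything
docker volume inspect VOLUME_NAME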

  • I've tried that but it hasn't solved the problem, as there are no dangling volumes left according to "docker volume ls -qf dangling=true" – Matt Apr 24 '19 at 08:32
  • May sound stupid, but have you tried restarting the docker service? It may be quietly holding onto something that has been deleted – Ardesco Apr 24 '19 at 10:32
  • I am reluctant to restart the Docker service due to the number of running containers, and I am relatively new to Docker, so I'm cautious about what I try. Will restarting the Docker service affect the running containers? – Matt Apr 24 '19 at 11:30
  • yup it will result in you needing to restart them all – Ardesco Apr 24 '19 at 11:31
  • That's why I'd rather not restart the service just yet. If I run the build command directly on the RancherOS server it builds without errors, but if I run the build from the Jenkins UI it fails with "no space left on device", which makes me think Docker is working fine but Jenkins isn't. It fails immediately after Jenkins logs the "Sending build context to Docker daemon 20.78MB" message. – Matt Apr 24 '19 at 12:12
  • is Jenkins running in a container? If so it sounds like the Jenkins container has run out of disk space. Quickest fix is probably to restart the Jenkins container, or build agent (assuming the config has not been tweaked once it started running) – Ardesco Apr 24 '19 at 12:59
  • All sorted now, thanks for your help. I managed to clean up some space using the command above, not sure why it didn't work the first time though. – Matt Apr 25 '19 at 10:17
  • None of the above worked for me, because by 'no space left' the system actually meant 'no inodes left' (check with df -i)... This thread addresses that problem: https://stackoverflow.com/questions/45812401/no-space-left-on-device-even-after-removing-all-containers – Bob May 22 '20 at 16:46
  • If you had no inodes left, deleting stuff should still have freed up some inodes. Inode problems are always the biggest PITA though; it's really not obvious what's wrong at first, and it's the kind of thing you don't find out about until you suffer from it and spend forever trying to work out WTF is wrong :) – Ardesco Jun 01 '20 at 07:11
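
For reference, a quick sketch of the inode check suggested in the comments above (du --inodes needs GNU coreutils 8.22+, so it may not exist on a minimal BusyBox-based system like RancherOS):

# inode usage on the filesystem holding the Docker root dir
df -i /var/lib/docker

# rough view of which top-level directories hold the most inodes
du --inodes -x -d 1 /var/lib/docker 2>/dev/null | sort -n | tail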

You should clean up your old containers, images, and volumes. You can use prune to remove them all with a single command:

docker system prune

Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.

Output:

WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] y
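
If the default prune doesn't free enough, the same command takes flags to widen the cleanup; note that --volumes was only added around Docker 17.06, so it may not work against the 17.03 daemon shown above:

# also remove unreferenced images, not just dangling ones
docker system prune -a

# additionally remove unused volumes (newer Docker releases only)
docker system prune -a --volumes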