Short answer: the server became unresponsive because you filled almost all memory with dirty pages (i.e. data waiting to be flushed to disk).
Long answer: generally, writes do not push data to the backing device immediately. Rather, they are cached in the pagecache. This is done for performance reasons: storage (especially HDDs) is quite slow compared to CPU/memory, so caching as much as possible significantly increases I/O speed. However, if you write too much, too fast, the kernel will frantically (and with high priority) flush as much dirty data as possible to disk. This puts the calling process into "deep sleep" (uninterruptible sleep): you cannot interrupt it because it is not really running; rather, it is waiting to be woken up by the kernel. Moreover, as flushing dirty data is a costly, high-priority operation, the entire server becomes very slow.
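You can watch this happen by monitoring the dirty page counters and the writeback thresholds while a large buffered write runs. A minimal sketch using standard Linux interfaces (nothing here is specific to your setup):

```sh
# How much dirty data is currently waiting to be written back (values in kB)
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Percent-of-memory thresholds at which background and blocking writeback start
sysctl vm.dirty_background_ratio vm.dirty_ratio
```

When `Dirty` grows toward the `vm.dirty_ratio` threshold, the writing process is forced to flush synchronously, which is exactly the "deep sleep" behavior described above.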
That said, how can you create large image files without bringing the server to a crawl? You have a few options (a short example for each follows the list):
- launch your `dd` command appending the `oflag=direct` option: this will cause `dd` to bypass the pagecache and write directly to disk. I also suggest using a smaller block size, e.g. 1 MB, with something similar to `dd if=/dev/zero of=/var/lib/libvirt/images/dat.img bs=1M count=1000000 oflag=direct`. Please note that this command will still somewhat slow down the server during execution (after all, you are writing to disk), but nowhere near your first attempt;
- a better approach to create a fully-allocated file is to use the `fallocate` command, e.g. `fallocate -l 1G /var/lib/libvirt/images/dat.img`. By allocating space through metadata only, without writing any real data, this command returns almost immediately and causes no slowdown at all. Many modern filesystems support it, with the notable exception of ZFS;
- finally, you can use a sparse file, i.e. a file with a nominal size but no real allocation. You can create it by issuing `truncate --size=1G /var/lib/libvirt/images/dat.img`. This command returns immediately, causing basically no I/O at all.
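As a concrete sketch of the first option (same path and size as above; `status=progress` is an optional GNU `dd` flag that just prints throughput):

```sh
# Direct I/O bypasses the pagecache, so the Dirty counter in /proc/meminfo
# stays low while this runs. Requires a filesystem/device supporting O_DIRECT.
dd if=/dev/zero of=/var/lib/libvirt/images/dat.img bs=1M count=1000000 oflag=direct status=progress
```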
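For the second option, you can check that `fallocate` really allocated the blocks (and not just the nominal size) with `ls -s`:

```sh
# Preallocate 1 GiB via extent metadata only; returns almost instantly
fallocate -l 1G /var/lib/libvirt/images/dat.img

# The first column shows the blocks actually allocated on disk (~1 GiB here)
ls -lhs /var/lib/libvirt/images/dat.img
```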
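For the third option, the difference between nominal and real size is easy to see by comparing `du` with and without `--apparent-size`:

```sh
# Sparse file: nominal size of 1 GiB, (almost) no blocks allocated yet
truncate --size=1G /var/lib/libvirt/images/dat.img

# Real on-disk usage (close to zero)
du -h /var/lib/libvirt/images/dat.img
# Nominal size as seen by applications (1 GiB)
du -h --apparent-size /var/lib/libvirt/images/dat.img
```

Keep in mind that with a sparse file the blocks are allocated lazily, only when something actually writes to them.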