I set up a new server with a LUKS-encrypted RAID 5. On the old server the bottleneck was clearly the CPU: it was a seven-year-old single-core machine, and the load went up to 100%.
Now the situation is different: I still get poor write performance, but I cannot see where the bottleneck is.
This is the write benchmark I ran:
root@home-le:/data# dd if=/dev/zero of=benchmark bs=100MB count=100
100+0 records out
10000000000 bytes (10 GB) copied, 775.726 s, 12.9 MB/s
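(Note that this dd run goes through the page cache, which can distort dd's own throughput figure. Repeating the test with direct I/O or a trailing sync, using the standard GNU dd options oflag=direct and conv=fdatasync, should give a cache-independent number. A minimal sketch, not yet run on this box:

dd if=/dev/zero of=benchmark bs=1M count=10000 oflag=direct
dd if=/dev/zero of=benchmark bs=1M count=10000 conv=fdatasync

Either way, 12.9 MB/s is far below what four spindles should sustain.)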
While the benchmark above was running, iostat reported:
root@home-le:/data# iostat
Linux 2.6.38-11-server (home-le) 23.09.2011 _x86_64_ (2 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.22    3.58   10.02   13.56    0.00   72.61

Device:            tps    kB_read/s    kB_wrtn/s    kB_read     kB_wrtn
sda              66.63       795.46      8876.84   105325279  1175367302
sdc             244.12      8203.55      1523.39  1086218095   201709949
sdf             253.41      8219.63      1519.15  1088347371   201148053
sde             242.42      8172.09      1495.00  1082051932   197950373
md0             933.49        36.80      3937.60     4872631   521371476
dm-4            933.51        36.79      3938.19     4871328   521449348
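(Two caveats about these numbers: iostat without an interval argument prints averages since boot, and the summary above hides per-device saturation. Sampling extended statistics during the benchmark, with the standard sysstat options, should show which device saturates first; a sketch:

iostat -x 5

In the extended output, a member disk with %util near 100% and high await while the others sit idle would point to a single slow drive; if no device is saturated, the bottleneck is more likely above the block layer.)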
The array is in sync:
md0 : active raid5 sda1[5] sdc1[0] sde1[2] sdf1[4]
2768292864 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
and consists of four 950 GB partitions, one on each of four 1 TB WD Caviar Green drives. (The other partitions on the disks carry no significant load.) The filesystem is ext4 with a 4096-byte block size.
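Two generic md/RAID5 suspects I am also considering, sketched with standard interfaces; the figures below are examples, not values measured on this machine.

First, the md stripe cache, which is small by default and is a commonly suggested tweak for RAID5 write throughput (the value is in pages; memory use is roughly value × page size × number of member devices):

cat /sys/block/md0/md/stripe_cache_size
echo 8192 > /sys/block/md0/md/stripe_cache_size

Second, alignment: if these Caviar Greens are Advanced Format (4 KiB physical sector) models, as many of the 1 TB drives are, each member partition should start on a sector divisible by 8, and the LUKS payload offset should also fall on a chunk boundary. Assuming the LUKS container sits directly on /dev/md0, both can be checked with:

fdisk -lu /dev/sda
cryptsetup luksDump /dev/md0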
Even if you cannot identify the bottleneck, I would appreciate your results from comparable arrays for comparison.