This helped explain the purpose to me: https://docs.kernel.org/driver-api/md/raid5-cache.html
write-back mode
write-back mode fixes the ‘write hole’ issue too, since all write data
is cached on the cache disk. But the main goal of the ‘write-back’ cache is to
speed up writes. If a write crosses all RAID disks of a stripe, we call
it a full-stripe write. For non-full-stripe writes, MD must read old
data before the new parity can be calculated. These synchronous reads
hurt write throughput. Writes that are sequential but not dispatched
at the same time suffer from this overhead too.
The write-back cache aggregates data and flushes it to the RAID
disks only after the data becomes a full-stripe write. This avoids
the overhead entirely, so it’s very helpful for some workloads. A
typical example is a workload that does sequential writes followed by
fsync.
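For reference, the caching mode itself is exposed through sysfs and can be inspected or switched at run time; this sketch assumes the array is md0 and already has a journal (cache) device attached:

# show the current journal mode (write-through or write-back)
cat /sys/block/md0/md/journal_mode
# switch to write-back caching
echo "write-back" > /sys/block/md0/md/journal_mode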
In write-back mode, MD reports IO completion to the upper layer (usually
a filesystem) right after the data hits the cache disk. The data is
flushed to the RAID disks later, once specific conditions are met, so a
cache disk failure will cause data loss.
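Because of that, the cache disk should itself be reliable. As a rough sketch (the device names here are placeholders), an array can be created with a dedicated write journal via mdadm:

# hypothetical example: four data disks plus one SSD used as the write journal
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde \
      --write-journal /dev/nvme0n1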
In write-back mode, MD also caches data in memory. The memory cache
holds the same data as the cache disk, so a power loss doesn’t cause
data loss. The memory cache size affects the array’s performance, and
a larger size is recommended. A user can configure the size with:
echo "2048" > /sys/block/md0/md/stripe_cache_size

A cache disk that is too small will make write aggregation less
efficient in this mode, depending on the workload. It’s recommended to
use a cache disk of at least several gigabytes in write-back mode.
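To check what is actually in effect, the configured and in-use stripe cache values can be read back from sysfs (again assuming the array is md0):

# configured stripe cache size, in number of stripes
cat /sys/block/md0/md/stripe_cache_size
# number of stripe cache entries currently in use
cat /sys/block/md0/md/stripe_cache_active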