I have the following setup:
- Proxmox 7.2
- CEPH 16.2.9
- K3S v1.23.15+k3s1
- CEPH CSI v3.7.2
CEPH is used as RBD storage for QEMU images and for K8S PVCs. When I run a disk benchmark inside a QEMU VM, I get the following results:
Name | Read | Write |
---|---|---|
SEQ1M Q8 T1 (MB/s) | 16122.25 | 5478.27 |
SEQ1M Q1 T1 (MB/s) | 3180.51 | 2082.51 |
RND4K Q32 T16 (MB/s) | 633.94 | 615.96 |
RND4K Q32 T16 (IOPS) | 154771.09 | 150380.37 |
RND4K Q32 T16 (latency, µs) | 3305.38 | 3401.61 |
RND4K Q1 T1 (MB/s) | 103.38 | 98.75 |
RND4K Q1 T1 (IOPS) | 25238.15 | 24109.38 |
RND4K Q1 T1 (latency, µs) | 39.06 | 40.30 |
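For reference, the VM disk is attached with writeback cache enabled. In the Proxmox VM config it looks roughly like this (the VMID, storage name, and disk size here are placeholders, not my actual values):

```
# /etc/pve/qemu-server/100.conf (excerpt)
# cache=writeback enables the librbd writeback cache for this disk
scsi0: ceph-rbd:vm-100-disk-0,cache=writeback,size=32G
```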
But when I run the same benchmark in K8S, the results are much worse:
Name | Read | Write |
---|---|---|
SEQ1M Q8 T1 (MB/s) | 810.36 | 861.11 |
SEQ1M Q1 T1 (MB/s) | 600.29 | 310.13 |
RND4K Q32 T16 (MB/s) | 230.73 | 177.05 |
RND4K Q32 T16 (IOPS) | 56331.27 | 43224.29 |
RND4K Q32 T16 (latency, µs) | 9077.98 | 11831.65 |
RND4K Q1 T1 (MB/s) | 19.94 | 5.90 |
RND4K Q1 T1 (IOPS) | 4868.23 | 1440.42 |
RND4K Q1 T1 (latency, µs) | 204.76 | 692.60 |
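The test names above are CrystalDiskMark-style profiles; an approximately equivalent fio job for the RND4K Q1 T1 case (the one with the biggest gap) would look something like this (file name and size are arbitrary):

```
; rnd4k-q1t1.fio -- 4K random I/O, queue depth 1, single job
[global]
ioengine=libaio
direct=1
bs=4k
filename=/data/fio-testfile
size=1g
runtime=30
time_based

[rnd4k-q1t1-read]
rw=randread
iodepth=1
numjobs=1

[rnd4k-q1t1-write]
stonewall
rw=randwrite
iodepth=1
numjobs=1
```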
I'm using the writeback cache for QEMU. If I disable the cache, the results look like the K8S ones. Is there a similar writeback mechanism in K8S or CEPH CSI?
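From what I understand, QEMU's writeback mode goes through librbd and its client-side cache, while ceph-csi mounts volumes with the kernel RBD driver (krbd) by default, which has no such cache. If that's right, one thing I could try is switching the mounter to `rbd-nbd`, which uses librbd and should honor the librbd cache settings from ceph.conf. A sketch of what I mean (the clusterID, pool, and secret names below are placeholders):

```
# StorageClass excerpt: mounter is a real ceph-csi RBD parameter;
# rbd-nbd maps images through librbd instead of krbd
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-nbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: <rbd-pool>
  imageFeatures: layering
  mounter: rbd-nbd
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi

# and in the ceph.conf used by the csi nodeplugin, the librbd cache options:
# [client]
# rbd cache = true
# rbd cache policy = writeback
```

I haven't verified whether this closes the gap, so corrections are welcome.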