I have two DRBD-based storage clusters, and on both of them roughly 13% of the underlying disk space ends up unusable. The backing disk in both configurations is sda4.
Server1:
  sda4 size: 859G
  drbd0 size: 788G
  space available with empty disk: 748G
Server2:
  sda4 size: 880G
  drbd0 size: 807G
  space available with empty disk: 766G
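For reference, the 13% figure is just the gap between the raw partition size and the usable space on an empty filesystem, computed from the numbers above:

```shell
# Percentage of the raw sda4 partition that is not usable, per cluster
awk 'BEGIN { printf "%.1f%%\n", (859-748)/859*100 }'   # cluster 1
awk 'BEGIN { printf "%.1f%%\n", (880-766)/880*100 }'   # cluster 2
```

Both clusters lose almost exactly the same fraction, which suggests a systematic overhead rather than a misconfiguration on one node.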
Resource config:
$ cat /etc/drbd.d/nfs.res
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sda4;
    meta-disk internal;
    protocol  C;

    disk {
        md-flushes    no;
        resync-rate   90M;
        al-extents    3833;
        c-plan-ahead  2;
        c-fill-target 2M;
        c-max-rate    100M;
        c-min-rate    25M;
    }

    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.9.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
        pri-lost-after-sb   "/usr/lib/drbd/notify-pri-lost-after-sb.sh";
    }

    net {
        sndbuf-size    0;
        max-buffers    8000;
        max-epoch-size 8000;
        after-sb-0pri  discard-least-changes;
        after-sb-1pri  discard-secondary;
        after-sb-2pri  call-pri-lost-after-sb;
        fencing        resource-only;
    }

    on node1 {
        address x.x.1.1:7790;
    }

    on node2 {
        address x.x.1.2:7790;
    }
}
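One thing I already ruled out: DRBD's internal metadata. With `meta-disk internal` the metadata is roughly backing-device-size / 32768 plus a small fixed superblock, so for an ~859G partition it is tiny (a rough estimate, using the documented 1/32768 ratio):

```shell
# Approximate internal DRBD metadata for an 859G backing device (in G)
awk 'BEGIN { md = 859/32768; printf "metadata ≈ %.3f G (%.4f%% of sda4)\n", md, md/859*100 }'
```

That is on the order of tens of MB, nowhere near the ~70G difference between sda4 and drbd0. On a live node, comparing `blockdev --getsize64 /dev/sda4` with `blockdev --getsize64 /dev/drbd0` gives the exact overhead.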