
I have two DRBD-based storage clusters, and on both I am somehow unable to use about 13% of the underlying disk space. The backing disk in both configs is sda4:
Server1: sda4 size: 859G, drbd0 size: 788G, drbd0 space available with empty disk: 748G
Server2: sda4 size: 880G, drbd0 size: 807G, drbd0 space available with empty disk: 766G
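
One way to double-check these numbers (the mount point /mnt/nfs below is an assumption, substitute your own):

$ # raw size of the backing partition vs. the DRBD device, in bytes
$ blockdev --getsize64 /dev/sda4
$ blockdev --getsize64 /dev/drbd0
$ # free space as reported by the filesystem sitting on top of drbd0
$ df -h /mnt/nfs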
resource config:

$ cat /etc/drbd.d/nfs.res
resource r0 {
    device /dev/drbd0;
    disk /dev/sda4;
    meta-disk internal;
    protocol C;

    disk {
        md-flushes no;
        resync-rate 90M;
        al-extents 3833;
        c-plan-ahead 2;
        c-fill-target 2M;
        c-max-rate 100M;
        c-min-rate 25M;
    }

    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh";
    }

    net {
        sndbuf-size 0;
        max-buffers 8000;
        max-epoch-size 8000;
        after-sb-0pri discard-least-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri call-pri-lost-after-sb;

        fencing resource-only;
    }


    on node1 {
        address x.x.1.1:7790;
    }
    on node2 {
        address x.x.1.2:7790;
    }
}
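
For what it's worth, the DRBD user guide gives an estimate for the size of internal metadata, Ms = ceil(Cs / 2^18) * 8 + 72, with both values in 512-byte sectors. A quick back-of-the-envelope check for the 859G disk (the arithmetic below is mine, not from the question):

$ Cs=$((859 * 1024 * 1024 * 2))               # 859 GiB expressed in 512-byte sectors
$ echo $(( (Cs + 262143) / 262144 * 8 + 72 )) # 55048 sectors, i.e. about 27 MiB

So internal metadata by itself should only cost a few tens of MiB, nowhere near the ~70G gap between sda4 and drbd0.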
bakasan

1 Answer


I don't know much about DRBD, and you didn't provide much detail in your question about how you measured free space, but similar things happen with ext4, which most distros initialize with 5% of blocks reserved for root by default: maybe your DRBD setup has a similar default?

I couldn't help noticing that the amount of missing space is more or less that same 5%, if you compare the DRBD device size with the available space...
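
If that's the case and the volume only holds data (no root filesystem), you can inspect and reclaim the reserve, assuming ext4 on the DRBD device (run this on the primary node):

$ tune2fs -l /dev/drbd0 | grep -i 'reserved block'
$ tune2fs -m 0 /dev/drbd0    # set the reserved percentage to 0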

Lucio Crusca
  • I considered that and looked into the DRBD documentation, but was not able to find anything related. Also, DRBD is only exposing about 91% of the disk, and after that something is blocking an additional 4-5%. That is why the total loss is about 13%. – bakasan Oct 26 '18 at 21:27