
We run a cluster serving about 50 user groups, each mapped to a Linux group. Each group has a filesystem allocated on a ZFS storage server and exported over NFS. This results in a long list of mountpoints that must be mounted on every NFS client node, and the output of df -h is correspondingly long, listing one mountpoint per group. Is there a way to avoid this by restructuring or reconfiguring the ZFS server, while still managing the groups efficiently (quotas, access rights, etc.)?
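For context, a per-group layout like the one in the listing below is typically built with commands along these lines (pool and dataset names are taken from the listing; the quota value and the group name "newgroup" are illustrative, not from the actual setup):

```shell
# Properties set once on the parent dataset are inherited by children.
zfs set sharenfs=on intp1
zfs set compression=lz4 intp1

# One dataset per group, with a per-group space cap.
zfs create -o quota=4T intp1/newgroup      # "newgroup" is hypothetical

# Group-based access control on the dataset's mountpoint.
chown root:newgroup /zfs/1/newgroup
chmod 2770 /zfs/1/newgroup                 # setgid so new files get the group
```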

Below is partial output of the zfs list command:

~]# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
intp1               43.7G  47.9T   219K  /zfs/1
intp1/sam           43.7G  47.9T   219K  /zfs/1/sam
intp1/harry         219K  4.00T   219K  /zfs/1/harry
intp1/rick          219K  4.00T   219K  /zfs/1/rick
intp1/kim           43.7G  3.96T  43.7G  /zfs/1/kim
intp2                252G  47.7T   252G  /zfs/2
intp2/johnson       219K  8.00T   219K  /zfs/2/johnson
intp2/hoekstra       219K  8.00T   219K  /zfs/2/hoekstra

Shown below is the output of the zfs get all command for one filesystem:

# zfs get all intp1/sam
NAME       PROPERTY              VALUE                  SOURCE
intp1/sam  type                  filesystem             -
intp1/sam  creation              Fri Sep 23  9:56 2016  -
intp1/sam  used                  43.7G                  -
intp1/sam  available             47.9T                  -
intp1/sam  referenced            219K                   -
intp1/sam  compressratio         5.42x                  -
intp1/sam  mounted               yes                    -
intp1/sam  quota                 none                   default
intp1/sam  reservation           none                   default
intp1/sam  recordsize            128K                   default
intp1/sam  mountpoint            /zfs/1/sam             inherited from intp1
intp1/sam  sharenfs              on                     inherited from intp1
intp1/sam  checksum              on                     default
intp1/sam  compression           lz4                    inherited from intp1
intp1/sam  atime                 on                     default
intp1/sam  devices               on                     default
intp1/sam  exec                  on                     default
intp1/sam  setuid                on                     default
intp1/sam  readonly              off                    default
intp1/sam  zoned                 off                    default
intp1/sam  snapdir               hidden                 default
intp1/sam  aclinherit            restricted             default
intp1/sam  canmount              on                     default
intp1/sam  xattr                 on                     default
intp1/sam  copies                1                      default
intp1/sam  version               5                      -
intp1/sam  utf8only              off                    -
intp1/sam  normalization         none                   -
intp1/sam  casesensitivity       sensitive              -
intp1/sam  vscan                 off                    default
intp1/sam  nbmand                off                    default
intp1/sam  sharesmb              off                    default
intp1/sam  refquota              none                   default
intp1/sam  refreservation        none                   default
intp1/sam  primarycache          all                    default
intp1/sam  secondarycache        all                    default
intp1/sam  usedbysnapshots       0                      -
intp1/sam  usedbydataset         219K                   -
intp1/sam  usedbychildren        43.7G                  -
intp1/sam  usedbyrefreservation  0                      -
intp1/sam  logbias               latency                default
intp1/sam  dedup                 off                    default
intp1/sam  mlslabel              none                   default
intp1/sam  sync                  standard               default
intp1/sam  refcompressratio      1.00x                  -
intp1/sam  written               219K                   -
intp1/sam  logicalused           198G                   -
intp1/sam  logicalreferenced     40K                    -
intp1/sam  filesystem_limit      none                   default
intp1/sam  snapshot_limit        none                   default
intp1/sam  filesystem_count      none                   default
intp1/sam  snapshot_count        none                   default
intp1/sam  snapdev               hidden                 default
intp1/sam  acltype               off                    default
intp1/sam  context               none                   default
intp1/sam  fscontext             none                   default
intp1/sam  defcontext            none                   default
intp1/sam  rootcontext           none                   default
intp1/sam  relatime              off                    default
intp1/sam  redundant_metadata    all                    default
intp1/sam  overlay               off                    default
asked by Ketan Maheshwari

1 Answer


I use the crossmnt NFS export option for this.

For example:

/home *(rw,crossmnt,sec=krb5:krb5i:krb5p)

From the exports(5) man page:

This option is similar to nohide but it makes it possible for clients to access all filesystems mounted on a filesystem marked with crossmnt. Thus when a child filesystem "B" is mounted on a parent "A", setting crossmnt on "A" has a similar effect to setting "nohide" on B.

With nohide the child filesystem needs to be explicitly exported. With crossmnt it need not. If a child of a crossmnt file is not explicitly exported, then it will be implicitly exported with the same export options as the parent, except for fsid=. This makes it impossible to not export a child of a crossmnt filesystem. If some but not all subordinate filesystems of a parent are to be exported, then they must be explicitly exported and the parent should not have crossmnt set.
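Applied to the pools in the question, this would look roughly as follows: export only the two parent filesystems with crossmnt, and each client then needs just two NFS mounts instead of one per group. This sketch assumes exports are managed in /etc/exports rather than via the sharenfs property the question's datasets currently inherit, and "zfsserver" is a placeholder hostname:

```shell
# /etc/exports on the ZFS server -- children under /zfs/1 and /zfs/2
# are reachable through the parents, so they need not be listed:
#   /zfs/1 *(rw,crossmnt,no_subtree_check)
#   /zfs/2 *(rw,crossmnt,no_subtree_check)

exportfs -ra            # reload the export table

# On each client, two mounts replace the ~50 per-group mounts:
mount -t nfs zfsserver:/zfs/1 /zfs/1
mount -t nfs zfsserver:/zfs/2 /zfs/2

# df -h now shows the two parent mounts; child filesystems appear
# only after they are first accessed and auto-mounted.
```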

answered by Michael Hampton
  • It's not entirely clear to me why this answers the question. As I understand it, crossmnt will actually make the tree appear as a single mount to the client, which breaks isolation for multi-tenancy where you _want_ the client to see where the fs boundaries are – aep Jun 20 '23 at 07:46