
I have a few hundred folders that I need to migrate to a new server, and I've been asked to create an XFS filesystem for each individual folder. I ran du with a block size of 1G, so I know the exact size of each folder.

How do I calculate what size I need to make each vdisk? After I format it with XFS, I lose some space to filesystem overhead.
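Here's the rough arithmetic I have in mind, as a sketch: take the du figure and pad it by an assumed metadata-overhead factor plus some slack. The 2% factor and the 1 GiB of headroom are guesses on my part, not measured values:

```python
# Rough vdisk sizing helper: given a folder's du size in bytes, add
# headroom for XFS metadata overhead. The 2% overhead factor and the
# 1 GiB of slack are assumptions -- measure them on a test volume first.
import math

GiB = 1024 ** 3

def vdisk_size_gib(data_bytes, overhead_factor=0.02, headroom_gib=1):
    """Return a vdisk size in whole GiB for `data_bytes` of payload.

    overhead_factor: fraction assumed lost to XFS metadata (verify!)
    headroom_gib: extra slack so the FS isn't created nearly full
    """
    needed = data_bytes * (1 + overhead_factor) + headroom_gib * GiB
    return math.ceil(needed / GiB)

print(vdisk_size_gib(100 * 1024 ** 4))  # e.g. a 100 TiB department folder
```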

lbanz
  • Why do you need to do it this way? That's a strange request. – ewwhite May 28 '15 at 11:24
  • That's the way the new FS was designed. Each folder represents a department, and they are big; some are nearly a hundred TB. Once they've moved to the new server they will be made read-only for archiving. On the new server I don't want to waste too much storage, so I need to create each FS as close in size to the previous one as possible. Thin provisioning is out of the equation. I don't make the decisions; we are just following orders. – lbanz May 28 '15 at 11:40
  • To be honest, I would not do this without the use of a smarter filesystem or volume manager like [ZFS](http://en.wikipedia.org/wiki/ZFS), where you can have a large pool of storage and just set quotas on each directory. Otherwise, it sounds like a mess. – ewwhite May 28 '15 at 11:53
  • Hm, but thin provisioning that is never actually going to be used sounds like a nice idea here. What about this: see how large a thin-provisioned disk really grows, then start from scratch with a thick-provisioned disk of that size? – Hagen von Eitzen May 28 '15 at 17:10

1 Answer


If you are just following orders, please ask the person issuing them how much usable space each filesystem needs for present and future requirements.

You want to make sure that you're not creating nearly-full filesystems from the start!

This approach to filesystem allocation is not scalable, so I strongly suggest that it be redesigned with a more modern filesystem or volume manager like ZFS (or even Btrfs). Those would allow you to present a pool of storage containing multiple directories, each with individual attributes and quotas. But if you're stuck with the existing solution, you shouldn't be the one dictating the size of each volume.

If you must, though, create a test XFS filesystem and see how much usable space results. In general, formatted XFS volumes have less overhead than similarly sized ext3/ext4 volumes, so you'll have more usable space than you might expect.
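For example, you could format a sparse loopback image at a candidate size, mount it, and compare raw size to usable space. This is a rough sketch only: the image path, mount point, 100 GiB test size, and the assumption that it runs as root are all illustrative:

```python
# Measure XFS overhead empirically: format a sparse test image at the
# candidate size, mount it via loopback, and read back the usable space.
# Must run as root; IMG, MNT, and SIZE_GIB are illustrative values.
import os
import subprocess

IMG, MNT = "/tmp/xfs-test.img", "/mnt/xfs-test"
SIZE_GIB = 100

# Sparse file, so this doesn't consume 100 GiB of real disk.
subprocess.run(["truncate", "-s", f"{SIZE_GIB}G", IMG], check=True)
subprocess.run(["mkfs.xfs", "-f", "-q", IMG], check=True)
os.makedirs(MNT, exist_ok=True)
subprocess.run(["mount", "-o", "loop", IMG, MNT], check=True)
try:
    st = os.statvfs(MNT)
    usable = st.f_bavail * st.f_frsize  # space available to non-root users
    print(f"{SIZE_GIB} GiB raw -> {usable / 1024**3:.2f} GiB usable")
finally:
    subprocess.run(["umount", MNT], check=True)
    os.remove(IMG)
```

The ratio you measure at one size should hold approximately at larger sizes, but re-run the test near your real volume sizes before committing, since metadata overhead is not perfectly linear.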

ewwhite
  • +1 from me; using filesystem boundaries as an expensive and painful way of reimplementing quotas doesn't sound like a good idea. – MadHatter May 28 '15 at 12:17
  • Do note that XFS creates inodes on the fly, instead of allocating space for them from the start. – Jenny D May 28 '15 at 12:21
  • @JennyD yes, this has caught us out before when a FS got completely full and we expanded it. It still catches us out from time to time. – lbanz May 28 '15 at 12:43
  • We will create exact-size FSes from the start, because they won't grow. They are only used for archiving purposes, and the users are aware of that. We have to use XFS, and I wasn't involved in the politics. But it's complicated, and it took them nearly half a year to design this setup. – lbanz May 28 '15 at 12:46
  • @lbanz I'm sorry. It's still not a great solution. – ewwhite May 28 '15 at 13:04