
I found out that my server (Ubuntu Server 14.04) probably has too high a chunk size for its RAID0, or at least I suspect this is a performance issue. The current chunk size is 512K across 4 disks of 3TB each, but the average size of the files we write is 650 kB. So, if I understand correctly, they are not divided well across all 4 disks.

The problem is I can't really test different chunk sizes, because it seems I can't change the chunk size without completely reinstalling the server, and I would need the hosting company to do that as well.

What chunk size would you recommend? Should I go really low, like 4K, or go with 128K? As I said, the average file size is 650 kB, but some files are smaller, so roughly between 100 and 650 kB.
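To make the trade-off concrete, here is a small back-of-the-envelope sketch of how many chunks (and how many disks) a single average-sized file would span at various chunk sizes, using the 4-disk array and ~650 kB file size from the question. This is illustrative arithmetic only; real I/O also depends on filesystem allocation and alignment.

```python
import math

DISKS = 4
FILE_SIZE_KB = 650  # average file size from the question

for chunk_kb in (4, 64, 128, 256, 512, 1024):
    chunks = math.ceil(FILE_SIZE_KB / chunk_kb)   # chunks the file spans
    stripes = math.ceil(chunks / DISKS)           # full stripe passes needed
    disks_hit = min(chunks, DISKS)                # disks touched per file
    print(f"chunk={chunk_kb:>4} kB: {chunks:>3} chunks, "
          f"{stripes} stripe pass(es), {disks_hit} disk(s) busy")
```

For example, at 256 kB a 650 kB file spans 3 chunks on 3 disks in one stripe pass, while at 128 kB it spans 6 chunks and needs two passes.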

klaasio
  • How big is the total volume size needed for this? The reason I ask is that for very small files such as these I've had much better results with PCIe-based flash (FusionIO in my case) than with R0 - but obviously it's more expensive, hence my volume question. Have you no 1MB option? – Chopper3 Oct 01 '14 at 08:32
  • @Chopper3 You suggest putting the RAID0 on 1024K? I hadn't thought about that... Well, the volume is quite big, otherwise I would have created a memory disk or something. We expect 10TB of data or more... the 3TB disks would be fully used. (But I read it's good to keep 10% space free for speed.) – klaasio Oct 01 '14 at 09:53
  • How many disks are you looking at using in this R0 and what will be doing the 'raiding', a hardware controller or software? – Chopper3 Oct 01 '14 at 10:13
  • Want statistics? Average means nothing without a standard deviation. 650gb average can be tons of 1kb files and some BIIIIG ones OR can be a spread between 600 and 700. – TomTom Oct 01 '14 at 10:42
  • @Chopper3 we'll be using 4 disks of 3TB each. It's (unfortunately) a software RAID0. – klaasio Oct 01 '14 at 11:44
  • @TomTom You're very right about that. But our file size is limited to 800 KB, so very big files aren't possible. Perhaps I need to run a Linux find query to count files between 0-100KB, 100-200KB, etc. Will get back on this soon! – klaasio Oct 01 '14 at 11:48
  • @TomTom What do you mean by 650gb? I'm talking about kB, but still, you have a good point. – klaasio Oct 01 '14 at 11:50
  • Yeah, mixed up gb and kb. But with that size I Would likely hit 64kb as stripe. – TomTom Oct 01 '14 at 12:07
  • With four disks in R0 I'd be tempted to go with a 256KB chunk, as that means you should be able to store one whole file in one chunk per disk, which should help with the random reads. What would help more would be faster disks in R10, but I'm assuming budget is an issue here. By the way, what are you going to do when you lose the array? I'd imagine this will happen around once a year. – Chopper3 Oct 01 '14 at 12:37
  • @Chopper3 Thanks, and sorry for my super late reply. Do you think 256KB is good? 4x256 = 1024 kB, but the files are smaller, ~650 kB, so the 4th disk would be less busy. Or is software RAID smart enough to start the next file on the 4th disk and then skip the 3rd disk, etc.? So far I would say 128 kB; there seems to be nothing between 128 kB and 256 kB if I'm correct? – klaasio Oct 03 '14 at 06:32
  • @klaasio - it doesn't work that way; the fourth disk wouldn't be less busy, as the writing usually 'wraps around'. The issue with 128KB is that four of them only make 512KB, so on an average file you're going to have to read six chunks rather than three. Why not test these various options? – Chopper3 Oct 03 '14 at 08:50
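The size-histogram query klaasio mentions in the comments could look something like the sketch below: bucketing files into 100 kB bins with `find`. The demo directory and sample files are placeholders created only for illustration; point `DATA` at the real tree instead. Note that `find -size` with a `k` suffix compares in 1 KiB units rounded up, so `-size +100k -size -201k` matches files whose rounded size is 101-200 units.

```shell
DATA=$(mktemp -d)   # placeholder: use your real data directory here
# create two sample files so the sketch has something to count
dd if=/dev/zero of="$DATA/small" bs=1024 count=80  2>/dev/null
dd if=/dev/zero of="$DATA/avg"   bs=1024 count=650 2>/dev/null

for lo in 0 100 200 300 400 500 600 700; do
  hi=$((lo + 100))
  n=$(find "$DATA" -type f -size +"${lo}k" -size -"$((hi + 1))k" | wc -l)
  printf '%3d-%3d kB: %s file(s)\n' "$lo" "$hi" "$n"
done
```

With the distribution in hand, picking a chunk size that keeps most files within a single stripe pass becomes a straightforward calculation rather than a guess.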

0 Answers