
We are setting up a new database system and have 15 drives to play with (+2 on-board for the OS).

With a total of 15 drives, would it be better to set up all 14 as one RAID-10 array (+1 hot spare), OR split them into two RAID-10 sets: one for data (8 disks) and one for logs/backups (6 disks)?

My question boils down to the following: is there a specific point where having more drives in a single RAID-10 set will outperform having the drives broken into smaller RAID-10 sets?

telstone

2 Answers


You won't like it... it depends ;)

What I would do:

  • Take 4 drives, RAID 10. Put a 64 GB partition at the bottom for the operating system, the rest for logs ;) There you go. As the OS is mostly doing nothing, and you (hopefully) have a caching RAID controller, the impact of two partitions is minimal.

  • The other 11... take 1 out as a hot spare.

  • The remaining 10... depends on what you need. For high performance I would AGAIN go RAID 10.

The trick here is that no IO activity on the database storage discs will interfere with the log IO capability. This basically gives both ends defined IO characteristics which are guaranteed and can be measured independently.
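If you end up doing this with Linux software RAID rather than a hardware controller (the asker mentions DB2 on Linux in the comments), a minimal mdadm sketch of that layout could look like the following - the device names are assumptions:

    # 4-disk RAID 10 for the OS + log volume
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # 10-disk RAID 10 for the database, plus 1 hot spare
    mdadm --create /dev/md1 --level=10 --raid-devices=10 --spare-devices=1 \
        /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj \
        /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp

A hardware controller with a cache would express the same grouping through its own BIOS or CLI, but the split is identical.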

Depending on the RAID controller and OS you may want to be careful with your formatting - on SQL Server:

  • Align partitions to 1 MB - automatic with Windows Server 2008 upward, screwed before. Use 256 KB+ stripe sizes.
  • Format the disc subsystems (except the OS) with 64 KB NTFS allocation units (see the sketch after this list).
  • Use multiple database files - in particular, at least as many data/log files for tempdb and heavy-duty databases as your processor can run threads in parallel (= cores, or cores*2 with hyperthreading), as there is an allocation "bottleneck" per file (see the T-SQL sketch below).
  • Don't use autogrow in normal operations - preallocate space. You don't really want the performance impact of autogrow during heavy times.
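
As a rough illustration of the formatting and tempdb points above - the disk number, drive letters, paths and sizes here are assumptions, not recommendations:

    :: diskpart - create a partition aligned to 1 MB (align= is given in KB)
    DISKPART> select disk 2
    DISKPART> create partition primary align=1024
    DISKPART> assign letter=E

    :: format with 64 KB NTFS allocation units
    format E: /FS:NTFS /A:64K /V:Data /Q

And a matching T-SQL sketch for preallocated tempdb files with autogrow disabled:

    -- one data file per core; repeat for tempdev3, tempdev4, ... as needed
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
              SIZE = 4GB, FILEGROWTH = 0);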
TomTom
  • +1 It depends on your requirements. If this server is idle most of the time, RAID5 the whole thing; it won't matter. If you actually use quite a bit of IO, then TomTom's setup sounds very good. – Chris S Mar 15 '10 at 14:39
  • Thanks. I actually have a QUITE similar setup for my main database which is pretty active collecting price information ;) – TomTom Mar 15 '10 at 14:50
  • In most cases the DB will be DB2 on Linux, although some setups may run on Windows, so the Windows information is appreciated. – telstone Mar 15 '10 at 14:56
  • Then look up the DB access patterns of DB2 on both - the idea is to align read/write operations. On Windows, SQL Server reads/writes pages of 8kb in clusters of 8 (i.e. 64kb) - that is where this pattern comes from. – TomTom Mar 15 '10 at 15:15
  • @Chris: If he is building a database server with 17 total disks and asking a question about "optimal disk partitions" I highly doubt it will be idle. So I think he should follow best practice and keep the transaction logs and temp dbs away from a RAID5 array – ITGuy24 Mar 18 '10 at 00:49
  • @ITGuy24, I've seen plenty of server budgets that rarely reflect actual server usage. Besides, in most benchmarks I've seen, RAID 10 outperforms RAID 5 or 6 by single-digit percentages. That might be meaningful for some, YMMV. – Chris S Mar 18 '10 at 01:29
  • Agree with both. I remember my time at MS SQL Server support where I got a call about a new 64 GB server only using a handful of GB of memory - like 3-4 or so. Obviously no one there had checked the size of the databases they worked with ;) The server was active on all cores though - tons of financial data requests, just very little actual active data. – TomTom Mar 18 '10 at 04:47
  • You are correct, this is a high-volume ERP system. Lots of small writes throughout the day followed by heavy reads as reports are requested on the system. Let me see if I can re-state the question a little more clearly: RAID-10 performance as you add drives to the setup. So would having 7 mirrored pairs (14 drives) vs 4 (8 drives) plus 3 (6 drives) be a better setup? I would still create partitions to separate the data and the logs. I'm probably splitting hairs, but when I think about a SAN, generally you see one large RAID 10 stripe and storage is carved out of that. – telstone Mar 18 '10 at 13:33
  • A SAN also normally comes with heavy caches (I had one once with a 32 GB RAM cache). This can alleviate a LOT of stuff. You are (or seem to be) a few sizes smaller than that. – TomTom Mar 18 '10 at 14:25

I'd probably set up 12 in RAID-10 with one hot spare and 4 in RAID-5, so you get some parity for your backups, though RAID-6 would be even better.

The performance you get will heavily depend on your RAID controller, though with a good controller it scales pretty well.

gekkz
  • I have a secondary location for backups, so no need for the RAID-5. With that being said, do you feel 14xRAID-10 would outperform 8xRAID-10 (Data) and 6xRAID-10 (Logs)? – telstone Mar 15 '10 at 14:58
  • 1
    Have a look at this example benchmark from 3ware: http://www.3ware.com/kb/article.aspx?id=15058. – gekkz Mar 15 '10 at 15:32