
http://www.adaptec.com/en-us/_common/maxcache/

http://www.adaptec.com/en-us/_common/hybrid-raid/

They sound similar, but there's no detailed description of how either of them works.

  • Ah - there is. Seriously. There are some white papers. Maybe you should not expect them to all be in one place? – TomTom Oct 12 '11 at 10:34

1 Answer


Obviously, MaxCache uses an SSD pool as a large read cache for "hot" data in front of the array, while HybridRAID implements a read bias towards the SSD in setups where a RAID 1 pair consists of one SSD and one HDD.
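The distinction can be sketched in a toy model. All class names and sizes below are my own illustration of the two policies, not Adaptec's actual firmware behavior:

```python
from collections import OrderedDict


class HybridRaid1:
    """Toy hybrid RAID 1 pair: one SSD member, one HDD member.

    Writes must land on both members to keep the mirror consistent;
    reads are biased to the SSD member.
    """

    def __init__(self):
        self.ssd = {}  # block -> data
        self.hdd = {}

    def write(self, block, data):
        # A mirror write only completes once both members have the data
        # (a battery-backed cache may acknowledge earlier, but the data
        # still ends up on both disks).
        self.ssd[block] = data
        self.hdd[block] = data

    def read(self, block):
        # Read bias: serve every read from the SSD member, leaving the
        # HDD's I/O budget entirely free for writes.
        return self.ssd[block]


class MaxCacheLikePool:
    """Toy SSD pool acting as a read cache for hot data in front of an
    HDD array (MaxCache-style; the capacity here is made up)."""

    def __init__(self, backing, capacity=2):
        self.backing = backing      # dict standing in for the HDD array
        self.cache = OrderedDict()  # SSD pool with LRU eviction

        self.capacity = capacity

    def read(self, block):
        if block in self.cache:         # hot data: served from SSD
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]      # cold data: read from HDD ...
        self.cache[block] = data        # ... and promote into the pool
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

The key difference the sketch shows: in the hybrid pair the SSD holds a full copy of everything (it is a mirror member), whereas the MaxCache-style pool only holds whatever subset of the data is currently hot.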

the-wabbit
  • Exactly. Also, in a 20-disk hybrid RAID you need RAID 10 with half the disks being SSDs - MaxCache is a front-end cache in front of the disks for frequently requested data ONLY. – TomTom Oct 12 '11 at 10:34
  • Come to think of it, HybridRAID sounds like an incredibly stupid idea. For any reasonably sized storage system, you would spend more money than for a tiered storage solution with none of the advantages. – the-wabbit Oct 12 '11 at 13:01
  • Ah - yes. Except, for example, a read-heavy database, where you SAVE money. With limited writes, the write budget may be OK for a normal disk - and you get the extreme IOPS and low latency for read access. Note: the world is more complex than your simple mind suggests. The database server I use right now has nearly 1000 GB of SSD second-level cache in front of the disks. We would love a full hybrid RAID approach. No way to tier that stuff either. – TomTom Oct 12 '11 at 13:27
  • Writes would still need to be performed synchronously. Unless you have a database with virtually no disk writes (at least compared to your write cache size), they are likely to kill the array's overall performance anyway. This is not just "read-heavy" but a borderline case of OLAP-only databases - which in turn have other optimization techniques that work out more effectively than throwing SSDs at them (admittedly, this is much easier for engineering and DBAs). – the-wabbit Oct 12 '11 at 13:40
  • Ah, no. Out of experience: writes would not kill the performance, as all reads go to the SSD only. Look at Oracle Exadata for something that has heavy SSD caches. – TomTom Oct 12 '11 at 14:20
  • Do you have any data on this? Writes would need to be committed to both disks for failure/outage consistency before proceeding - thus effectively blocking the array - which is why RAID 1 does not perform better than a single disk on writes. If you could "unbundle" this and simply use one disk for writing, leaving the other one available for reads, not only would you get better TPS rates at the expense of more logic, but also "poor man's tiered storage", since you would effectively end up writing journals. BTW: Exadata apparently uses tiered storage, not a hybrid RAID. – the-wabbit Oct 12 '11 at 16:32
  • Ah, no - first, writes can safely be delayed with a BBU - that is why the BBU is there. Second, don't forget that the non-SSD disks get NO READS, so they can use all their I/O for writing, while all reads go to the SSD. A LOT of applications fit this perfectly - like website CMSes that read a lot of data but have limited updates. Poor man's tiered? Yes - as in cost-efficient. – TomTom Oct 12 '11 at 17:43