
I have a specific storage server plan in mind. I want to use an SSD to buffer all data destined for a hard drive. The data would be written to the SSD first and copied to the HDD as well. Kind of like RAID 1, but without the HDD throttling writes, and still showing up as one logical drive.

This would mean I can just plop a 20 GB file onto the logical drive; it would get copied to the HDD in the background and removed from the SSD afterwards.

I plan to build my own local server for this, so any solution is acceptable, but for a software solution Linux is preferred.

niraami
  • OS X and Windows can do this out of the box, but just google 'autotiering linux' and take your pick of options. – Chopper3 Oct 29 '16 at 13:26

2 Answers


In principle creating an ext3 or ext4 file system on the HDD with an external journal on the SSD will do exactly what you ask for.
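
In case it helps, a rough sketch of that setup could look like the following. The device names are placeholders (here /dev/sdb1 is assumed to be an SSD partition and /dev/sdc1 the HDD partition), and the external journal device has to be created with the same block size as the file system that will use it.

    # Create the external journal device on the SSD partition
    mke2fs -O journal_dev -b 4096 /dev/sdb1

    # Create the ext4 file system on the HDD, pointing it at the external journal
    mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/sdc1

    # For an existing ext4 file system, the journal can instead be moved with tune2fs:
    # tune2fs -O ^has_journal /dev/sdc1
    # tune2fs -j -J device=/dev/sdb1 /dev/sdc1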

Whether it will actually achieve the performance you expect from it is, however, unknown to me. Where you will see the most significant performance difference between HDD and SSD is in random access reads, but a journal should never see random access, only sequential access.

With a good SSD you will probably still see a performance improvement compared to writing to a file system with the journal residing on the same HDD as the file system itself.

kasperd
  • Quick research on external journals shows that this is a good solution. But I can't find any information on how big the journal can become before the file transfer is throttled by the HDD again. Is it limited by the size of the drive that houses the journal, or does it have an internal limit? – niraami Oct 29 '16 at 13:11
  • Hmm, looking back at my comment, it's worded poorly. I'll rework it: 'If I create a journal partition of a few GB on an SSD, does that mean I can transfer that much data before I hit a bottleneck?' – niraami Oct 29 '16 at 13:25
  • @Areuz There probably is some limit on how large the journal can be, but I don't know what it is or whether it will matter for your usage. I can come up with some guesses, which could be totally wrong. Maybe the size of the journal is stored as a 32-bit integer somewhere, making the maximum size 2 GB or 4 GB. Maybe data written to the journal needs to stay in RAM until the journal has been flushed, which would make the limit depend on how much RAM you have. – kasperd Oct 29 '16 at 13:29

You should also have a look at bcache and lvmcache, which are intended for exactly the kind of setup you want to achieve. Which one to choose is up to you and may be worth testing with your typical workload.
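
As a rough illustration of the bcache route (device names are placeholders: /dev/sdb is assumed to be the HDD and /dev/sdc the SSD), writeback mode is the one that comes closest to the 'write to the SSD first, flush to the HDD in the background' behaviour described in the question:

    # Register the HDD as backing device and the SSD as cache device (attaches them in one step)
    make-bcache -B /dev/sdb -C /dev/sdc

    # Switch the resulting bcache device to writeback caching
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # Put a normal file system on top and mount it as one logical drive
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /mnt/storage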

A somewhat higher-level solution would be ZFS or btrfs; of the two I would recommend ZFS. Both file systems can also make use of SSDs for caching.
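
With ZFS the SSD is typically added as an L2ARC read cache and/or as a separate log device (SLOG) for synchronous writes, which is not quite the general write buffer asked about, so treat the following only as a starting point. A minimal sketch, with /dev/sdb as the HDD and /dev/sdc1 and /dev/sdc2 as SSD partitions (all placeholder names):

    # Create a pool backed by the HDD
    zpool create tank /dev/sdb

    # Add an SSD partition as L2ARC (read cache)
    zpool add tank cache /dev/sdc1

    # Add another SSD partition as a separate log device (SLOG) for synchronous writes
    zpool add tank log /dev/sdc2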

Thomas
  • Hmm, I'll look into ZFS. I already wanted to use it, but I didn't know that it had this feature built in. – niraami Oct 29 '16 at 13:37
  • Well, ZFS might not provide the exact functionality you asked for, but I thought it was worth mentioning. In any case, I would recommend testing the different possibilities for your use case. It also might be that kasperd's answer suits your needs. – Thomas Oct 29 '16 at 14:10