
The biggest problem I have with my site is the four backup windows throughout the day, during which load will inevitably go above 50.

I am using nice and ionice to try to reduce the impact, but with only limited success.

Since I copy each backup to an S3 bucket as soon as it's completed, and then delete the local files, I'm wondering if I can write the backup to a memory-based disk instead?

I usually have 25 GB of free memory (I over-specced the server). My uncompressed database is 17 GB and the compressed backup is 6 GB. I use xtrabackup with stream and compress.

If the stream went to memory rather than disk, I think that would take a lot of load off the system as a whole.

Does this seem viable?

EDIT 1: I'm thinking of something like tmpfs: https://en.wikipedia.org/wiki/Tmpfs
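A quick sanity check before mounting a tmpfs of that size — a sketch only; the 20 GB figure is an assumption (room for the ~6 GB compressed stream plus headroom):

```shell
# Check whether there is enough available RAM for the tmpfs before mounting.
# NEEDED_GB=20 is an assumption: the ~6 GB compressed stream plus headroom.
NEEDED_GB=20
AVAIL_GB=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo)
if [ "$AVAIL_GB" -ge "$NEEDED_GB" ]; then
    echo "ok: ${AVAIL_GB}G available"
else
    echo "warning: only ${AVAIL_GB}G available, backup may hit the tmpfs size limit"
fi
```

Note that a tmpfs only consumes RAM as files are actually written, so the `size=` option is a cap, not a reservation.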

MadHatter
Christian
  • Generally you would do one full backup during low-load hours, then incrementals multiple times during the day. – Brian Jun 22 '18 at 01:27
  • I'm not sure this will automatically help, given that Linux will be using memory to be buffering your write to local disc anyway. You may well find that it's the *read* load that's blocking your system, and improving the backup *write* load won't help that. At least you should better understand which aspect of the backup load is stressing your system before you fix only one part of it. – MadHatter Jul 05 '18 at 09:08
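One way to check MadHatter's point about read vs. write load is to sample /proc/diskstats while a backup is running. A minimal sketch — the device-name pattern and the 5-second interval are assumptions, adjust for your disks:

```shell
# Sample /proc/diskstats twice to see whether the backup window is
# read-heavy or write-heavy. In /proc/diskstats, field 6 is sectors
# read and field 10 is sectors written (field 3 is the device name).
snap() { awk '$3 ~ /^(sd|vd|nvme)/ {r+=$6; w+=$10} END {print r+0, w+0}' /proc/diskstats; }
before=$(snap)
sleep 5            # run this while a backup is in progress
after=$(snap)
echo "$before $after" | awk '{printf "read sectors/s: %.0f  write sectors/s: %.0f\n", ($3-$1)/5, ($4-$2)/5}'
```

If the read rate dominates during the backup window, a RAM-backed target will not help much on its own.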

2 Answers


Of course — you could use something like the StarWind RAM Disk to create a local drive backed by RAM. I haven't heard of this configuration being used for backups, but it should work. For your future needs, I would recommend getting a separate server for backup purposes. https://www.starwindsoftware.com/high-performance-ram-disk-emulator

Stuka

Yes, you could use tmpfs like this:

mkdir /mnt/rd
mount -t tmpfs -o size=20g tmpfs /mnt/rd   # size=20g is a cap; RAM is only consumed as files are written
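Putting the whole flow together — a sketch only, not a tested script: the bucket name and the 20g size are assumptions, and the `DRY_RUN=echo` guard makes it print the commands rather than run them (drop it to execute for real):

```shell
#!/bin/sh
# Sketch: tmpfs -> xtrabackup stream -> S3 upload -> cleanup.
DRY_RUN=echo                      # remove this line to actually run the commands
RD=/mnt/rd
BUCKET=s3://my-backup-bucket      # hypothetical bucket name
$DRY_RUN mkdir -p "$RD"
$DRY_RUN mount -t tmpfs -o size=20g,mode=0700 tmpfs "$RD"
# xtrabackup writes the compressed xbstream to stdout; redirect it into the RAM disk
$DRY_RUN sh -c "xtrabackup --backup --stream=xbstream --compress --target-dir=$RD > $RD/backup.xbstream"
$DRY_RUN aws s3 cp "$RD/backup.xbstream" "$BUCKET/$(date +%F-%H%M).xbstream"
$DRY_RUN umount "$RD"             # frees the memory as soon as the upload is done
```

Unmounting (or simply deleting the file) releases the memory immediately, which matches the upload-then-delete workflow in the question.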
Jonas Bjork