0

I'm using Ubuntu 32-bit.

- My app needs to store incoming data in RAM, because I need to run a lot of searches on the incoming data and calculate something from it.
- I need to keep the data for X seconds, so I have to allocate 12 GB of memory (a client requirement).
- I'm using Ubuntu 32-bit (and don't want to move to Ubuntu 64-bit).
- So I'm using a RAM disk to store the incoming data and search on it (this way I can use 12 GB of RAM on a 32-bit system).

When I test the app with 2 GB of allocated memory (instead of 12 GB), I see that just writing data into my DB uses slightly less CPU with plain RAM than with the RAM disk (15% vs 17% CPU usage). But when I test the queries, which read a lot of data (or files, when working with the RAM disk), I see a huge difference: 20% vs 80% CPU usage.

I don't understand why the difference is so large. Both RAM and a RAM disk are backed by RAM, aren't they? Is there anything I can do to get better performance?

user3668129
  • 4,318
  • 6
  • 45
  • 87
  • A RAM disk is a virtual HDD which stores data in RAM. How do you think it will get you 12GB? – deviantfan Jun 02 '14 at 10:29
  • You can make a 12 GB file and use mmap() to write/read pieces of it. You will have to write a clever object / memory manager on top of it for transparent usage (see the sketch after these comments). – vrdhn Jun 02 '14 at 10:38
  • 3
    You probably just discovered that an operating system is better at figuring out how to use RAM effectively than you are. This is expected, it's their job. – Hans Passant Jun 02 '14 at 10:43
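To make the mmap() suggestion in the comment above concrete, here is a minimal sketch (my own illustration, not code from the question): keep a large backing file and map only one window of it into the 32-bit address space at a time. The file path, window size, and example offset are assumptions for the sake of the example.

```c
/* Minimal sketch of the mmap() windowing idea (hypothetical path and sizes).
 * A 12 GB backing file is kept on disk (or a RAM disk), and only one window
 * of it is mapped into the 32-bit address space at a time. */
#define _FILE_OFFSET_BITS 64          /* 64-bit off_t so offsets past 4 GB work */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BACKING_SIZE (12ULL * 1024 * 1024 * 1024)   /* 12 GB backing file */
#define WINDOW_SIZE  (256UL * 1024 * 1024)          /* map 256 MB at a time */

int main(void)
{
    int fd = open("/ramdisk/backing.dat", O_RDWR | O_CREAT, 0600);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    /* Size the file once; pages are only materialised when touched. */
    if (ftruncate(fd, (off_t)BACKING_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the window starting at the 8 GB mark (offset must be page-aligned). */
    off_t window_start = 8LL * 1024 * 1024 * 1024;
    void *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, window_start);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    /* The window is now ordinary memory: write into it, search it, etc. */
    memset(win, 0, 4096);

    munmap(win, WINDOW_SIZE);
    close(fd);
    return 0;
}
```

Whenever a different part of the 12 GB is needed, the old window is unmapped and a new one is mapped at the desired page-aligned offset; that remapping logic is the "clever memory manager" layer the comment refers to.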

1 Answer

2

There are two reasons that I can think of as to why a RAM disk is slower.

  1. With a RAM disk we may use RAM as the storage medium, but we still have the overhead of going through a filesystem. That involves system calls to access the data, plus further layers of indirection and copying. Directly accessing memory is just that: direct.

  2. Memory access tends to be fast because we can often find what we are looking for in the processor cache, which saves us from reading from slower RAM. A RAM disk will probably not be able to exploit the processor cache to the same extent, if for no other reason than that every access requires a system call. Both access paths are sketched below.
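To illustrate the difference these two points describe, here is a rough sketch (my own example, with a hypothetical file path and sizes) contrasting the two access paths: reading data through the filesystem of a RAM disk versus touching an in-process buffer directly.

```c
/* Rough sketch of the two access paths (hypothetical path and sizes). */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(void)
{
    char buf[CHUNK];

    /* Path 1: file on a RAM disk -- every access is a read() system call
     * plus a copy from the kernel into the user buffer. */
    int fd = open("/mnt/ramdisk/data.bin", O_RDONLY);   /* hypothetical path */
    if (fd >= 0) {
        for (int i = 0; i < 1000; i++) {
            if (pread(fd, buf, CHUNK, (off_t)i * CHUNK) < 0) break;
            /* ... search / calculate on buf ... */
        }
        close(fd);
    }

    /* Path 2: plain allocated memory -- no system call, and the data can be
     * served straight from the processor cache. */
    char *mem = malloc(1000L * CHUNK);
    if (mem != NULL) {
        memset(mem, 0, 1000L * CHUNK);                  /* stand-in for incoming data */
        for (int i = 0; i < 1000; i++) {
            memcpy(buf, mem + (long)i * CHUNK, CHUNK);
            /* ... search / calculate on buf ... */
        }
        free(mem);
    }
    return 0;
}
```

Every pread() crosses the user/kernel boundary and copies data out of the page cache, while the second loop is ordinary user-space memory access, which is where the large CPU usage gap on read-heavy queries comes from.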

doron
  • 27,972
  • 12
  • 65
  • 103
  • 1
    Modern operating systems implement something called a unified buffer cache, which basically means that the same mechanism is used to implement both swap-backed private memory and memory-mapped files (sans the filesystem overhead in the latter case). Therefore reading and writing to an mmap-ed file has the same CPU cache utilisation as using allocated memory. – Hristo Iliev Jun 02 '14 at 14:31