21

I want to run my PostgreSQL database server from memory. The reason is that on my new server, I have 24 GB of memory, and hardly any of it is used.

I know I can run this command to make a ramdisk:

mdmfs -s 1024m md2 /mnt

And I could theoretically have PostgreSQL store its data there. But the problem with this is that if the server crashes or reboots, the data will be gone.
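To be concrete, the naive version I'm picturing looks something like this (paths and sizes are just examples):

# memory-backed filesystem, then a brand new cluster on top of it
mkdir -p /mnt/pgram
mdmfs -s 8192m md2 /mnt/pgram
initdb -D /mnt/pgram/data
pg_ctl -D /mnt/pgram/data -l /mnt/pgram/logfile start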

Basically, I want the database to be loaded in memory at all times so that it does not have to go to the hard disk drive to read each record, since I have tons of memory and memory is much faster than hard disk drives.

Is there a way to do this while also having PostgreSQL write to disk so I don't lose any data in case the server goes down? Or is there a way to cache all data in memory?

David Barnes
  • Did you measure that almost nothing of those 24 gigabytes is used by PostgreSQL? – tuinstoel Dec 01 '09 at 21:23
  • Since memory is faster than disk there would most likely be some speedup, but since PostgreSQL isn't designed to run entirely in memory, the speedup might not be as big as expected. For example, PostgreSQL would still try to flush its memory to persistent storage (normally disk; memory in this case). See _Main Memory Database Systems: An Overview_ by Garcia-Molina and Kenneth Salem; [here are my notes](https://docs.google.com/document/d/1k9N8UNvCVM484P8xKSAF8Su8UnNKAt3D8oWM-aFYsK8/edit) on this paper. – user454322 Apr 25 '17 at 11:10

4 Answers

11

I'm now using streaming replication, which is asynchronous. This means my MASTER could run entirely in memory, with a separate SLAVE instance using a traditional disk.

A machine restart would involve stopping the SLAVE, copying the PostgreSQL data back onto the ramdisk, and then restarting the MASTER followed by the SLAVE. This is an interesting possibility that compares well with something like Redis, but with the advantages of redundancy / hot standby / backups / SQL / a rich toolset, etc.
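A rough sketch of the moving parts (host name, user, and paths are made up, and details vary by version; since PostgreSQL 12 the standby uses a standby.signal file rather than recovery.conf):

# MASTER (data directory on the ramdisk): allow streaming in
# postgresql.conf, e.g. wal_level = replica and max_wal_senders = 3.
# SLAVE (on ordinary disk): clone the master and start it as a
# standby; -R writes the replication settings into the clone
pg_basebackup -h master.example.com -U replicator -D /var/db/pgslave -R
pg_ctl -D /var/db/pgslave start
# after a machine restart: stop the slave, copy its data back onto
# the ramdisk, drop the standby marker so the copy starts as a primary
pg_ctl -D /var/db/pgslave stop
cp -a /var/db/pgslave /mnt/pgram/data
rm /mnt/pgram/data/standby.signal
pg_ctl -D /mnt/pgram/data start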

TonyC
  • This is probably even more feasible now with synchronous replication, doing what OP wants. – plundra Nov 15 '11 at 20:34
  • If synchronous replication is used, it should replicate to another in-memory file system, otherwise there would be no performance gain: _When requesting synchronous replication, each commit of a write transaction will wait until confirmation is received that the commit has been written to the transaction log on disk of both the primary and standby server._ https://www.postgresql.org/docs/9.4/static/warm-standby.html#SYNCHRONOUS-REPLICATION – user454322 Jul 13 '16 at 06:28
  • I was thinking of the same idea. – Abhijit Gujar Apr 21 '18 at 10:00
6

Have you seen the Server Configuration chapter of the manual? Check it out, then google "postgresql memory tuning".
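For example, a rule-of-thumb starting point for a 24 GB machine might look like this (the values are common folklore rather than official numbers, $PGDATA stands in for your data directory, and you should measure before and after):

# append rough starting values to postgresql.conf, then restart
cat >> "$PGDATA/postgresql.conf" <<'EOF'
shared_buffers = 6GB            # often ~25% of RAM
effective_cache_size = 18GB     # planner hint about the OS file cache
work_mem = 64MB                 # per sort/hash operation, so keep it modest
maintenance_work_mem = 1GB      # VACUUM, CREATE INDEX, etc.
EOF
pg_ctl -D "$PGDATA" restart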

just somebody
  • OP is not asking about tuning memory usage; he's asking about having Postgres use system memory as if it were the traditional secondary disk-based storage. – code_dredd Feb 21 '20 at 20:39
3

I have to believe that Postgres is written in such a way as to take full advantage of available RAM in the server. As you may have guessed by now, there's no reliable way to do this outside of Postgres.

Within Postgres, transactions ensure that all operations are atomic, so if the power goes down while you are writing to a Postgres database, you will only lose that particular operation, not the entire database.
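To illustrate (database, table, and values are invented for the example):

# both updates take effect together at COMMIT; if power is lost
# before that, crash recovery replays the WAL and neither applies
psql -d mydb -c "
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;"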

Robert Harvey
  • I think what the question is asking relates to the persistence of the ramdisk being gone after shutdown, not the atomic nature of a transaction in Postgres. – Kuberchaun Feb 25 '13 at 18:03
  • @JustBob: This question is over three years old, and the highest voted answer says "Google it." Post an answer if you think you can do better (that shouldn't be too hard). – Robert Harvey Feb 25 '13 at 18:11
  • Thanks for the constructive input. I just found this question and thought adding a little detail to a given answer wouldn't hurt anybody. – Kuberchaun Feb 25 '13 at 19:20
3

The answer is caching. Look into adding memory to the server, then tune PostgreSQL to maximize its memory usage. The file system cache will help too, doing some of this automatically. You will get performance almost as if the data were in memory, except for the first hit on each page, without having to manage any of it yourself, and you can still have a database larger than physical memory.
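On newer versions you can even take care of that first hit up front with the pg_prewarm contrib module (database and table names are just examples):

# pull a hot table into PostgreSQL's buffer cache ahead of time
psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_prewarm;"
psql -d mydb -c "SELECT pg_prewarm('accounts');"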

Grant Johnson