
Here's what we're trying to do.

We will have a 200GB+ SQL Server database that needs to load into memory. Microsoft best practice is to have enough physical memory available on the server and then load the entire database into it. That means we would need 256GB of memory on each of our SQL Servers. This would result in fast access to the database, which is loaded into memory, but at the high cost of that memory. BTW, we're running SQL Server 2008 on Windows Server 2008.

Currently, our server is set up with only 12GB of memory. Just under 3GB is allocated to the OS, and the remaining 9GB is used for SQL Server. Is it possible to increase the pagefile to 256GB and set it up on an SSD drive? What we want to do then is load the database into the pagefile located on the SSD. We're hoping the performance will be similar to loading the entire database into memory, since it'll be on an SSD.
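For reference, a quick way to see how much of each database the current 9GB buffer pool actually holds (a minimal sketch; sys.dm_os_buffer_descriptors is a standard DMV in SQL Server 2008):

```sql
-- How much of each database currently sits in the buffer pool.
-- Pages are 8KB, so COUNT(*) * 8 / 1024 gives MB.
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```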

Will this work? Is there another alternative we're overlooking? We want to keep the costs down as much as we can, without sacrificing the performance of our environment. Any advice would be appreciated.

Thanks.

WeirdG
  • What does "physical memory" mean here? Do you mean to say that Microsoft suggests that there be enough RAM on memory sticks to keep entire databases? – vgoff Nov 26 '13 at 01:07
  • @vgoff Ideally, yes, why not? Obviously you want the data persisted to disk but when it can all fit in memory this means that all of your queries can access data from memory, which is faster than any disk you can throw at it, several times over. – Aaron Bertrand Nov 26 '13 at 01:55
  • Thanks for your clarification of the OP's statement. I could not exactly determine whether they were speaking of physical memory, as on a platter (since they are asking about the pagefile), or volatile memory. You can even put a pagefile in volatile RAM. Ideal and best practice are often two different things. – vgoff Nov 26 '13 at 02:04
  • Your very first sentence answers your question. "We will have a 200GB+ SQL database that needs to load into memory" Purchase enough RAM to do exactly that. Thanks @AaronBertrand. – vgoff Nov 26 '13 at 02:52
  • Have you tracked how much effort and time you're putting into *avoiding* buying 128GB more RAM? How much money does **that** cost your company in the end? – marc_s Nov 26 '13 at 05:59
  • Adding more memory won't be an option since the cost to purchase and install it on all of our servers will be high. This also includes disaster recovery servers in multiple locations. Can we install an SSD and move the pagefile to it so the database can be stored in virtual memory? We're using the database with AEGIS and FIM to store new user information. The database needs to be stored in memory for fast read/write access for when users log in or register on our website. – WeirdG Nov 26 '13 at 06:54

2 Answers


If you want the database to be stored in memory, you need to buy more memory. In spite of what the other answer suggests, memory is the absolute best and cheapest way to make a database perform better - SQL Server is designed to use memory well.
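If you do buy the memory, remember to raise SQL Server's memory cap so it can actually use it. A minimal sketch (the 250000 MB figure is only an illustration for a 256GB box, not a recommendation):

```sql
-- Let SQL Server use the new RAM while leaving headroom for the OS.
-- 'max server memory' is expressed in MB; 250000 is illustrative only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 250000;
RECONFIGURE;
```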

While SQL Server will take advantage of the page file when it has to, and while having the page file on an SSD will be slightly faster than on an old-fashioned mechanical disk, it's still I/O and swapping and there is a lot of overhead around that, regardless of the disk type underneath. This may turn out to be a little bit better, in general, than having the same page file on a spinny disk (or no page file at all), but I don't think that it's going to be anywhere near the impact of having real memory, or that it's going to come anywhere close to your expectations of "fast access."
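You can measure that pressure rather than guess at it. As a rough sketch, the page life expectancy counter shows how long pages survive in the buffer pool; a low, steadily falling value means the working set doesn't fit in memory, no matter how fast the disk behind the page file is:

```sql
-- Page life expectancy: a rough gauge of buffer pool pressure.
SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name LIKE 'Page life expectancy%';
```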

If you can't buy more memory, then you can start with this page file on an SSD, but I'm confident you will also need to focus on other tuning opportunities - chiefly, making sure you have indexes that support the types of queries you run, and avoiding full table scans as much as possible. For full-table aggregates you can consider indexed views (see here and here); for subsets you can consider filtered indexes.
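To make those two techniques concrete, here is a minimal sketch; the table and column names (dbo.Users, Region, LastLogin, IsActive) are invented for illustration, not taken from your schema:

```sql
-- Indexed view: materializes a full-table aggregate.
-- Indexed views require SCHEMABINDING and COUNT_BIG(*).
CREATE VIEW dbo.vUserCounts
WITH SCHEMABINDING
AS
    SELECT Region, COUNT_BIG(*) AS UserCount
    FROM dbo.Users
    GROUP BY Region;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vUserCounts ON dbo.vUserCounts (Region);
GO
-- Filtered index: covers only the hot subset of rows.
CREATE INDEX IX_Users_Active ON dbo.Users (LastLogin)
WHERE IsActive = 1;
```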

And just to be sure: you are storing the actual data on an SSD drive, right? If not, then I would argue that you should use the SSD for the data and/or log, not for the page file. The page file isn't going to offer you much benefit if it is constantly swapping data in and out by exchanging with a spinny disk.
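One way to check rather than assume: this sketch joins two standard DMVs to show where each data and log file lives and how long reads and writes stall on it, which is a reasonable basis for deciding what belongs on the SSD:

```sql
-- Physical location and cumulative I/O stalls per database file.
SELECT
    DB_NAME(mf.database_id) AS database_name,
    mf.physical_name,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.master_files AS mf
JOIN sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```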

Aaron Bertrand
  • Thanks for your response Aaron. I really appreciate it. I will talk to my team and see where we should go from here. – WeirdG Nov 26 '13 at 16:10
  • "I would argue that you should use the SSD for the data and/or log" This is very important to highlight. – alphadogg Feb 19 '14 at 13:43

Need more clarity on the question.

Are you in control of the database or is this a COTS solution that limits your ability to optimize?

Are you clustering? Is that why adding 200+GB of RAM is an issue (now more than 400GB, 200 per node)?

Are you on bare metal or virtualized? Is this why RAM may be an issue?

So far it would seem the "experts" have made some assumptions that may not be fair to your circumstances.

Please update your question... :)

  • Why have you put this as an answer? Put it in the comment box. – DevelopmentIsMyPassion Nov 27 '13 at 21:44
  • Yes, this is a COTS solution that limits our ability. The servers are all virtual, and that's why RAM is an issue. I'm told the cost to support the Microsoft best practice will be $60,000, hence why we're looking into a cheaper alternative. Apparently that's the way FIM works, or so I'm told. It puts the entire database into memory to optimize performance. We need a cheaper alternative since we expect the database to grow to over 200GB. – WeirdG Nov 28 '13 at 22:04