7

I was just looking through old notes for a pair of Exchange servers that I spec'd for a project a while ago at a previous job. They were for a fairly large organization with large mail quotas, so each mailbox server had 96GB RAM. The disk layout was:

  • 147GB RAID 1 for the OS, applications, and pagefile

  • 1.2TB RAID 10 for the mail databases

  • 900GB RAID 10 for the logs

This seemed good in theory, until you realize that by default you're going to have a 96GB pagefile on the 147GB partition, which will fill the disk pretty quickly. In a situation like this, do you move the pagefile to another partition, losing the ability to recover crash dumps and sacrificing some performance? Should I have ordered a pair of 300GB disks for the mirror instead (which is what I ended up doing)? Should I have artificially limited the pagefile size to something smaller, like 32GB?
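
To put rough numbers on it (back-of-the-envelope only; the ~30GB OS/application footprint is just a guess):

    # Rough space math for the 147GB system mirror (illustrative figures only)
    disk_gb = 147        # advertised size of the RAID 1 system volume
    pagefile_gb = 96     # a system-managed pagefile roughly tracks RAM on this box
    os_apps_gb = 30      # guessed footprint for Windows, Exchange binaries, updates

    remaining_gb = disk_gb - pagefile_gb - os_apps_gb
    print(f"Headroom left on the system volume: ~{remaining_gb} GB")  # ~21 GB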

MDMarra
  • helpful links: http://technet.microsoft.com/en-us/library/cc431357%28v=exchg.80%29.aspx , http://www.msexchange.org/articles-tutorials/exchange-server-2010/migration-deployment/areas-consider-smooth-exchange-2010-installation-part2.html – TheCleaner Feb 13 '13 at 14:35

4 Answers

8

The official recommendation from Microsoft (which hasn't changed since NT 4.0) is:

  • System Disk Page File
    • 8GB+: RAM Size + 10MB minimum
    • <8GB: 1.5x RAM
  • Adding page files on other disks may improve performance, up to the total maximum below
  • Total of all page files: 1.5x RAM maximum, but only because Windows will never make productive use of more than that. If it's hitting the page file consistently, you need more RAM. (A quick sizing sketch follows this list.)
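
As a sanity check, here's that rule in Python (the 10MB headroom and the 1.5x figures come straight from the list above; the rest is plain arithmetic):

    def system_pagefile_min_mb(ram_mb: int) -> int:
        """Minimum page file on the system disk, per the rule above."""
        if ram_mb >= 8 * 1024:       # 8GB of RAM or more
            return ram_mb + 10       # RAM size + 10MB
        return int(ram_mb * 1.5)     # under 8GB: 1.5x RAM

    def total_pagefile_max_mb(ram_mb: int) -> int:
        """Cap across all page files combined: 1.5x RAM."""
        return int(ram_mb * 1.5)

    ram_mb = 96 * 1024                        # the 96GB mailbox servers above
    print(system_pagefile_min_mb(ram_mb))     # 98314 -> ~96GB + 10MB
    print(total_pagefile_max_mb(ram_mb))      # 147456 -> 144GB ceiling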

As you've mentioned, the page file on the system drive must be at least RAM + 10MB to capture a full memory dump should the server crash. I've never found a full memory dump any more useful than a mini-dump for diagnosing a server crash. Configure servers for either mini-dumps or full dumps, whichever you feel will be most beneficial when diagnosing crashes.
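
If you want to check which dump type a server is currently set for, the setting lives in the CrashControl registry key; a minimal sketch (the value-to-meaning mapping below is the standard documented one, but confirm it against your OS version):

    import winreg

    # CrashDumpEnabled under CrashControl selects the dump type written on a crash.
    DUMP_TYPES = {
        0: "None",
        1: "Complete (full) memory dump",
        2: "Kernel memory dump",
        3: "Small memory dump (mini-dump)",
        7: "Automatic memory dump",   # Windows 8 / Server 2012 and later
    }

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\CrashControl",
    )
    value, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
    winreg.CloseKey(key)

    print(f"CrashDumpEnabled = {value}: {DUMP_TYPES.get(value, 'unknown')}")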

Specific to Exchange 2003, 2007, and 2010: they all defer to the OS recommendations for the page file, which are the same from NT 4.0 through Server 2012, as shown above. Other versions are probably the same, but I'm not familiar with them and haven't dug out the documentation.

What I would have done: Kept the 147GB disks with mini-dumps configured and about 16GB of page file.
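
For the fixed-size page file part, the setting is the PagingFiles value under Memory Management; a rough sketch of writing it directly (needs admin rights, a reboot, and automatic page file management disabled first, otherwise Windows keeps managing it for you):

    import winreg

    # Fixed page file: "path initial_MB maximum_MB" in the PagingFiles value.
    # Set initial == maximum so the file never grows. Takes effect after a
    # reboot, and only if automatic page file management is turned off.
    SIZE_MB = 16384  # ~16GB, as suggested above
    entry = [rf"C:\pagefile.sys {SIZE_MB} {SIZE_MB}"]

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, entry)
    winreg.CloseKey(key)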

Chris S
  • I would agree with keeping the 147GB disks if required. However, if the mirrored 300GB disks are an option and the budget allows, there's no harm in going with the MS recommendation here. – TheCleaner Feb 13 '13 at 14:32
2

Per a 2014 Exchange Team blog post, the current recommendation for Exchange 2013 is 'the smaller of RAM + 10MB or 32,778MB'. In your case, with 96GB of RAM, you'd use 32,778MB for the page file.

Ref: You had me at EHLO
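
In code form that rule is just the following (purely illustrative):

    def exchange2013_pagefile_mb(ram_mb: int) -> int:
        """Exchange 2013 guidance: the smaller of RAM + 10MB or 32,778MB."""
        return min(ram_mb + 10, 32778)

    print(exchange2013_pagefile_mb(96 * 1024))  # 32778 -> the cap applies at 96GB RAM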

Michael Hampton
Eric W
-1

I believe you need to measure memory pressure before setting the page file size; in general, manually sizing it isn't required. Please check Pushing the Limits of Windows: Virtual Memory; it is quite useful.
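
If you want to pull the numbers that article cares about (peak commit charge vs. commit limit), one way from Python is GetPerformanceInfo; a sketch, assuming a reasonably recent Windows with psapi.dll (the values come back in pages):

    import ctypes
    from ctypes import wintypes

    # Mirrors the PERFORMANCE_INFORMATION structure filled by GetPerformanceInfo.
    class PERFORMANCE_INFORMATION(ctypes.Structure):
        _fields_ = [
            ("cb", wintypes.DWORD),
            ("CommitTotal", ctypes.c_size_t),
            ("CommitLimit", ctypes.c_size_t),
            ("CommitPeak", ctypes.c_size_t),
            ("PhysicalTotal", ctypes.c_size_t),
            ("PhysicalAvailable", ctypes.c_size_t),
            ("SystemCache", ctypes.c_size_t),
            ("KernelTotal", ctypes.c_size_t),
            ("KernelPaged", ctypes.c_size_t),
            ("KernelNonpaged", ctypes.c_size_t),
            ("PageSize", ctypes.c_size_t),
            ("HandleCount", wintypes.DWORD),
            ("ProcessCount", wintypes.DWORD),
            ("ThreadCount", wintypes.DWORD),
        ]

    pi = PERFORMANCE_INFORMATION()
    pi.cb = ctypes.sizeof(pi)
    ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb)

    to_gb = lambda pages: pages * pi.PageSize / 1024**3
    print(f"Commit charge: {to_gb(pi.CommitTotal):.1f} GB")
    print(f"Peak commit:   {to_gb(pi.CommitPeak):.1f} GB")
    print(f"Commit limit:  {to_gb(pi.CommitLimit):.1f} GB (RAM + page files)")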

Eosfor
  • It's better if you summarise the page you are linking to in your answer. That way if the link dies we still have the answer. – ChrisF Feb 24 '13 at 12:54
-2

Think about it like this....

How exactly do you think you'd be able to send a 140GB+ memory dump to Microsoft for analysis? Do you really think support would take it?

On hypervisors, the page file only needs to cover host RAM; you can't page VM RAM (in other words, you only need about a 4GB page file).

On Exchange servers, Microsoft has issued this recommendation: Best practices for pagefile on an Exchange server with a large amount of RAM?

Personal experience with Exchange servers with 48GB+ of RAM: go with a fixed 16GB page file and you won't suffer any performance degradation, plus you'll avoid the 3AM "Exchange is down!" call because your system disk is full.

  • 1
    Sorry, but this answer is kind of a mess. Why do I have to send a dump to Microsoft? They can analyze it remotely on the system itself, I don't have to send it anywhere. Also, virtualization has nothing to do with this and even if it did, your talk about vRAM doesn't make any sense. Finally, you say that Microsoft has issued a recommendation and then the link that you provide points back to my question. I don't see any new or valuable information in here at all. – MDMarra Feb 24 '13 at 05:54
  • 2
    The link...is straight back to this question? – tombull89 Feb 24 '13 at 12:27