
We manage almost a dozen SQL Server 2016 Enterprise instances, and this happens on all of them: overestimation of the memory required for queries. Real overestimation, at the GB level.

Most servers have either 512 or 256 GB of RAM. As I understand it, by design SQL Server reserves 75% of its target memory for query workspace memory, and a single query may be granted up to 25% of that workspace as its maximum memory grant. This leads to queries - bad queries, I'm aware (several joins, sorts, views on top of views, with query costs in the thousands) - sometimes being granted ~45 GB of RAM (~25% of 75%).
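For concreteness, these per-session figures can be pulled from the standard `sys.dm_exec_query_memory_grants` DMV (all values in KB), which is where the requested/granted vs. required/used gap shows up:

```sql
-- Per-session memory grant figures (all values in KB).
-- Compare requested/granted vs. required (minimum to run) vs. actually used.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       required_memory_kb,
       used_memory_kb,
       max_used_memory_kb,
       ideal_memory_kb,   -- optimizer's unconstrained estimate
       queue_id,          -- non-NULL when waiting on RESOURCE_SEMAPHORE
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```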

The problem is the following:

[Screenshot: memory grant figures showing requested/granted vs. required/used memory]

Both the requested and the granted memory are far higher than the required memory, and especially than the memory actually used.

I've already tried this, to no avail:

  • flushing execution plan cache
  • forcing statistics updates on tables
  • manually specifying recompile on these queries
  • different MAXDOP settings (this only affected 'required' memory, not 'requested/granted')
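(One avenue not in the list above: SQL Server 2016 supports capping the grant per query with the `MAX_GRANT_PERCENT` hint, and on Enterprise the Resource Governor can cap it per workload group. A sketch, with illustrative table and group names - not the actual queries from this system:)

```sql
-- Sketch: cap the maximum memory grant for one query.
-- MAX_GRANT_PERCENT limits the grant to a percentage of the usual per-query cap.
SELECT o.OrderID, c.CustomerName              -- illustrative tables
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
ORDER BY o.OrderDate
OPTION (MAX_GRANT_PERCENT = 10);

-- Alternatively, cap it server-side via Resource Governor (Enterprise only);
-- the workload group name is illustrative.
CREATE WORKLOAD GROUP ReportingGroup
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```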

So, given the way SQL Server works, it takes only 5-6 transactions with poorly estimated memory requests (45 GB each, for example) to push all the remaining queries into a RESOURCE_SEMAPHORE wait queue (because SQL Server considers 'requested' memory, not 'required' memory, when deciding whether to queue a session on a resource semaphore).
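The semaphore pressure itself can be confirmed with the standard `sys.dm_exec_query_resource_semaphores` DMV (values in KB):

```sql
-- Workspace-memory accounting per resource semaphore (values in KB).
-- waiter_count > 0 means queries are queued on RESOURCE_SEMAPHORE.
SELECT resource_semaphore_id,
       target_memory_kb,        -- current workspace memory target
       max_target_memory_kb,
       available_memory_kb,
       granted_memory_kb,
       grantee_count,
       waiter_count,
       forced_grant_count,
       pool_id
FROM sys.dm_exec_query_resource_semaphores;
```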

Would you have any ideas or suggestions?

  • Quick and dirty solution: rebuild indexes, update statistics, add **OPTION (RECOMPILE)** at the end of the query that causes this. – knyazs May 09 '19 at 09:21
