
Given:

  • A server running Windows Server 2008 R2 Standard.
  • 16 GB RAM in use for Hyper-V and the OS, very low activity.
  • A separate RAID 5 for files, holding about 500 GB of files (roughly 1.5 GB per file).
  • A process on another machine reading 8 files at the same time; it is the only process using this RAID.

Seen:

  • Memory use goes up (caching).
  • The system starts swapping at 1800 pages per second; the C:/V: discs (the first RAID) become slow from the swapping alone.
  • The effective data rate is an astonishing 6 MB per second.

What can I do?

The discs can deliver more - at the start they pull 100 MB per second. Only once the swapping starts does it get nasty.

Is that a known issue? Is there any fix around? I see no reason for Windows to cache so much that it has to start swapping.

TomTom

1 Answer


Have you tried plotting the perf counters for disk access and throughput (reads/writes per second, operations per second, etc.) and the network perf counters during these operations?

I'd use the highest granularity, i.e. 1 second intervals, and collect it over a couple of attempts.
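As a sketch of how that collection might look with the built-in typeperf tool (the counter names below are the standard English ones; the `(*)` wildcard instances are an assumption and may need narrowing to your actual disks and NICs):

```shell
:: Sample the relevant counters once per second for two minutes
:: and write them to a CSV that can be plotted afterwards in Excel
:: or PerfMon. Adjust -sc (sample count) for longer runs.
typeperf ^
  "\Memory\Pages/sec" ^
  "\Memory\Cache Bytes" ^
  "\PhysicalDisk(*)\Disk Read Bytes/sec" ^
  "\PhysicalDisk(*)\Avg. Disk sec/Read" ^
  "\Network Interface(*)\Bytes Total/sec" ^
  -si 1 -sc 120 -o perf.csv
```

Comparing Pages/sec against Disk Read Bytes/sec on the same timeline should make it obvious whether the throughput collapse coincides with the paging.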

adaptr
  • Yes. The main problem is that this machine is showing Pages/sec at 1800 - that is, 1800 pages exchanged with the swap file per second. It is ridiculous - quite obviously Windows tries caching so aggressively that it runs into swap space and kills performance. – TomTom Feb 21 '12 at 09:57
  • But page faults include actual accesses - and you're seeing 1800 x 4 KB = 7200 KB/second... isn't that what you'd expect for 6 MB/sec SMB transfers? – adaptr Feb 21 '12 at 10:18
  • Well, I would expect to see ZERO page faults. Please explain to me why Windows starts using swap space for caching. There is nothing on the machine that would overload it: 6.5 of 16 GB of memory is in use, and the rest is used for caching. There is no logical sense in using swap space as disc cache. And the swap is local. I would expect NO swapping (OK, some background noise). When the files in use change (i.e. the threads close one file, then open another), network bandwidth use goes up to 200-250 megabit for some seconds. THIS is what I would expect all the time. Instead the server wastes time using swap space. – TomTom Feb 21 '12 at 10:30
  • @TomTom why not disable swap space altogether - at least for testing? Also, *what* is using 6.5 GB of memory on your server? If the file server role is the only one, that is way too much. BTW: page faults are not necessarily swapping - loading parts of memory-mapped files from disk or file caching operations also produce page faults. Also, soft faults do not induce any disk activity, but you would not see them displayed distinctly in perfmon. See [this Technet article](http://blogs.technet.com/b/askperf/archive/2008/06/10/the-basics-of-page-faults.aspx) for how to dig a bit deeper here. – the-wabbit Feb 21 '12 at 11:03
  • I already said - Hyper-V. We have some no/low load virtual machines there, and they are allocated that memory; this is in my original post. And I checked - Pages/sec is hard faults, loading from disc. – TomTom Feb 21 '12 at 11:39
  • @TomTom Ah, so you have the File Server and the Hyper-V role on this server? Please double-check your *Cache Faults/sec* and *Paging file\% Usage* values or disable or marginalize away the swap by reducing it to 4 MB to make sure it is really swapping/thrashing (and not memory-mapped disk I/O happening for other reasons) you are seeing. – the-wabbit Feb 21 '12 at 13:29
  • The VMs are not busy while I do measurements. There is no work on them - they are my personal DB server and build agents, and no, there is nothing happening on them while I run the performance tests. – TomTom Feb 21 '12 at 14:47
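For reference, the back-of-the-envelope arithmetic from the comments above, as a quick sketch (it assumes the 4 KB page size that is the default on x86/x64 Windows):

```python
PAGE_SIZE = 4096  # bytes; default page size on x86/x64 Windows

def paging_bandwidth_mb_s(pages_per_sec: float) -> float:
    """Disk bandwidth implied by a given hard-fault rate, in MB/s."""
    return pages_per_sec * PAGE_SIZE / 1024 / 1024

# 1800 Pages/sec, as observed on the server:
print(paging_bandwidth_mb_s(1800))  # just over 7 MB/s
```

That is close enough to the observed 6 MB/s effective rate to suggest that nearly all of the disk traffic during the slowdown is paging rather than file reads.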