
I have a web application that uses ASP.NET with "InProc" session handling. Normally, everything works fine, but a few hundred requests each day take significantly longer to run than normal. In the IIS logs, I can see that these pages (which usually require 2-5 seconds to run) are running for 20+ seconds.

I enabled Failed Request Tracing in Verbose mode, and found that the delay is happening in the AspNetSessionData section. In the example shown below, there was a 39-second gap between AspNetSessionDataBegin and AspNetSessionDataEnd.

I'm not sure what to do next. I can't find any reason for this delay, and I can't find any more logging features that could be enabled to tell me what's happening here. Does anyone know why this is happening, or have any suggestions for additional steps I can take to find the problem?

My app usually stores 1-5 MB in session for each user, mostly cached data for searches. The server has plenty of available memory and only serves about 50 users.

[Screenshot of the Failed Request Trace]

Josh Yeager
  • Do you know anything more about the slow requests? For example, are they fetching data from the database? Are they using the data from the session? – Matthew Rodatus Apr 26 '11 at 13:56
  • One possible avenue for investigation would be lock contention for the session state. Take a look at the last paragraph of http://msdn.microsoft.com/en-us/library/ms178581.aspx – Matthew Rodatus Apr 26 '11 at 13:57
  • See also http://odetocode.com/Blogs/scott/archive/2006/05/21/session-state-uses-a-reader-writer-lock.aspx -- "When a request arrives for a page that reads and writes Session variables, the runtime acquires a writer lock. The writer lock will block other pages in the same Session who might write to the same session variables." – Matthew Rodatus Apr 26 '11 at 13:58
  • Thanks, Matthew. That looks like it might be what's causing the problem. I'm building a test case now. For some reason, it never occurred to me that the entire Session collection might be locked by each page. I guess I assumed that each individual Session entry was locked separately. – Josh Yeager Apr 27 '11 at 15:12

2 Answers


It could be caused by lock contention for the session state. Take a look at the last paragraph of MSDN's ASP.NET Session State Overview. See also K. Scott Allen's helpful post on this subject.

If a page is annotated with EnableSessionState="True" (or inherits the web.config default), every request for that page acquires an exclusive writer lock on that user's session state for the duration of the request. Any other request in the same session that uses session state -- even one that only reads it -- is blocked until that request finishes.

If a page is annotated with EnableSessionState="ReadOnly", it acquires only a reader lock, so it will not block other read-only requests in the same session (though it can still be blocked by a request holding the writer lock).
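
For illustration, a minimal sketch of how the two modes are declared in the @ Page directive; the web.config line is just the mechanism for the site-wide default mentioned above:

    <%-- Hypothetical search page that only reads session data: it takes a reader
         lock, so it does not block other requests in the same session. --%>
    <%@ Page Language="C#" EnableSessionState="ReadOnly" %>

    <%-- A page that writes to session (the default, EnableSessionState="True")
         holds the exclusive writer lock for the whole request. --%>
    <%@ Page Language="C#" EnableSessionState="True" %>

    <%-- The site-wide default can also be set in web.config:
         <pages enableSessionState="ReadOnly" /> inside <system.web> --%>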

To eliminate this lock contention, you may want to implement your own finer-grained locking around data stored in the HttpContext.Cache object or in static WeakReferences. The latter is probably more efficient. (See pp. 118-122 of Ultra-Fast ASP.NET by Richard Kiessig.)
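
A rough sketch of the Cache-based variant (the class name, key format, lock granularity, and expiration below are all assumptions, and the static WeakReference approach is not shown):

    // Sketch: keep each user's search cache in HttpRuntime.Cache instead of Session,
    // so concurrent requests contend only on this entry, not on the whole session.
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class UserSearchCache
    {
        // Coarse lock for brevity; a per-user lock object would reduce contention further.
        private static readonly object SyncRoot = new object();

        public static T GetOrAdd<T>(string userId, Func<T> load) where T : class
        {
            string key = "SearchCache:" + userId;   // hypothetical key format
            T cached = HttpRuntime.Cache[key] as T;
            if (cached != null)
                return cached;

            lock (SyncRoot)
            {
                cached = HttpRuntime.Cache[key] as T;
                if (cached == null)
                {
                    cached = load();
                    HttpRuntime.Cache.Insert(key, cached, null,
                        DateTime.UtcNow.AddMinutes(20),   // arbitrary expiration
                        Cache.NoSlidingExpiration);
                }
            }
            return cached;
        }
    }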

Matthew Rodatus
  • I'm not sure yet if the Cache object or WeakReferences are good solutions for us, but you were definitely right about the cause of the problem. Thanks! – Josh Yeager May 05 '11 at 16:36
  • Could you comment back here once you come up with a solution that works for you? We have not yet but may encounter this, and I'd be interested in learning about alternatives to what I mentioned above. – Matthew Rodatus May 06 '11 at 11:34
  • We have a solution, although we're not done implementing it yet. The only really critical session data that we need is the user's ID. So, we're just going to change all of our pages (except the login page) to "ReadOnly" mode. To make that work, we're going to move our search caches into a custom object and change our other pages to use Context.Items instead of Session. – Josh Yeager Jun 07 '11 at 13:59
  • Thanks for responding back. And, good idea using Context.Items; we recently discovered that as well. You may want to consider introducing an abstraction wrapping Context.Items (i.e. an ICacheService with Set/TryGet/Remove; a rough sketch follows this thread). – Matthew Rodatus Jun 07 '11 at 14:14
  • One more idea: If you tie the user ID to the ASP.NET session ID with a table in the database and verify authentication for each request, you can use Context.Items to store the user ID and be robust when the worker process recycles. You could also store a canary in the table so that you won't get spurious CSRF validation failures when the user's session expires and the user tries to submit a form. – Matthew Rodatus Jun 07 '11 at 14:18
  • Yeah, we're kicking around a design to use the database to back up our user sessions. We hadn't thought of using Context.Items as part of that; interesting idea. – Josh Yeager Jun 07 '11 at 16:36
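
Following up on the ICacheService idea in the comments above, a rough sketch of what that wrapper over Context.Items might look like (the Set/TryGet/Remove members come from the comment; the class name and generic signatures are assumptions):

    // Sketch: thin abstraction over HttpContext.Current.Items. Items is per-request,
    // so values stored here live only for the duration of a single request.
    using System.Web;

    public interface ICacheService
    {
        void Set<T>(string key, T value);
        bool TryGet<T>(string key, out T value);
        void Remove(string key);
    }

    public class RequestItemsCacheService : ICacheService
    {
        public void Set<T>(string key, T value)
        {
            HttpContext.Current.Items[key] = value;
        }

        public bool TryGet<T>(string key, out T value)
        {
            object stored = HttpContext.Current.Items[key];
            if (stored is T)
            {
                value = (T)stored;
                return true;
            }
            value = default(T);
            return false;
        }

        public void Remove(string key)
        {
            HttpContext.Current.Items.Remove(key);
        }
    }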

There is a chance you are running up against the maximum amount of memory the application pool is allowed to consume, which triggers a recycle of the application pool (and would account for the delay you are seeing when accessing the session). The amount of memory on the server doesn't determine how much memory ASP.NET can use; that is controlled by the memoryLimit property in machine.config (sketched below) and, in IIS 6.0 and later, in IIS itself using the "Maximum memory used" property.

Beyond that, have you considered alternatives to each user holding up to 5 MB of session memory? That will not scale well and can cause a lot of issues under load. Might caching be a more effective solution? Do the searches take so long that you need to do this, or could the SQL/database setup be optimized to speed up your queries?
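
For reference, a sketch of the memoryLimit setting mentioned above (the value is a percentage of total system memory; 60 is only illustrative):

    <!-- machine.config sketch: applies to the classic ASP.NET process model.
         Under IIS 6.0 worker process isolation mode this setting is ignored and
         the application pool's recycling limits are used instead. -->
    <system.web>
      <processModel memoryLimit="60" />
    </system.web>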

ben f.
  • I don't see any app pool restart messages in the Windows event log, so I don't think that is the cause of the problem. Also, we use session to track the user's login, which means that everyone is logged out if the app pool restarts. I haven't gotten any reports of that happening. – Josh Yeager Apr 21 '11 at 21:20
  • One more note: this app does not have a large number of users (50 to 200 per server), but each user works with a lot of data. So, it makes sense for us to cache a lot of data in memory for each user. At 5MB per user, even 200 users is only 1GB of RAM used for cache. We designed the system this way, and it has been running smoothly in many environments for several years. – Josh Yeager Apr 21 '11 at 21:23