
I have an ASP.NET 4.0 Web Service that accepts transmission of XML files. In the past (with a different implementation of the same web service) we have tracked the concurrency (# of XML files being received/processed at the same time) using timestamps. I have replicated this behavior in the new version of the web service as such:

In the constructor for the Web Service class, I record ConnectionStartTime using HttpContext.Current.Timestamp:

public class MyWebService : System.Web.Services.WebService
{
  private DateTime ConnectionStartTime;

  public MyWebService()
  {
    ConnectionStartTime = HttpContext.Current.Timestamp;
  }
}

After I'm done processing the XML file within the WebMethod, I insert the file into a database (recording ConnectionEndTime) and return the response to the user. I execute the database insert in a new Thread so the end user doesn't have to wait for the insert to complete before receiving their response.

new Thread (() =>
  {
    insertIntoDatabase(ConnectionStartTime, ConnectionEndTime = DateTime.Now, xmlFile);
  }).Start();
return responseToUser;

Now I'm trying to gauge how many concurrent XML transmissions we've reached with two methods:

1. Performance Counters

  • ASP.NET Apps v4.0\Requests Executing - This counter peaked at 52.
  • ASP.NET Apps v4.0\Requests Queued - This counter peaked at 19.

To me this means I should see a point where we have 33 records (52 executing minus 19 queued) with overlapping ConnectionStartTime and ConnectionEndTime values.

2. Querying Against Timestamps - In this question I reference the query I'm using to calculate the number of concurrent transmissions based on ConnectionStartTime and ConnectionEndTime. These are datetime fields in a SQL Server database. Note: The query in that question is a reworked version of the algorithm we've been using for the past 3 years to determine concurrency so it may not be 100% correct but the other implementations of the algorithm (Excel macros, etc) have been validated.

My problem is that the two methods never align. The maximum results from querying the timestamps hit 10 while the performance counters suggested the maximum should be 30+. I'm having a hard time locating where the discrepancy is. Am I making a mistake in how I'm recording my timestamps? Does the HttpContext.Current.Timestamp value not record the beginning of a transmission to the web service?

Jeff Swensen
  • Not sure if related, but... unless you have a really really good reason, Don't use Thread.Start() in ASPNET. Use the threadpool. http://stackoverflow.com/questions/684640/advantage-of-using-thread-start-vs-queueuserworkitem – Cheeso Apr 29 '11 at 15:01
  • Thanks for the tip. Not sure it is affecting the timestamp issue but an improvement nonetheless. – Jeff Swensen Apr 29 '11 at 15:13

1 Answer


Using Thread.Start WILL cause a discrepancy between your data and the ASP.NET counters, mainly because of the way you've written your thread function. I would change it to:

DateTime EndTime = DateTime.Now;
new Thread (() =>
{
    insertIntoDatabase(ConnectionStartTime, ConnectionEndTime = EndTime, xmlFile);
}).Start();
return responseToUser;

Not sure if it is the only source of the difference, but with your code you are measuring the time it takes to process the request AND the time to spin up a thread and issue the database command that records the times.

My code measures the time to process the request only by capturing the endtime in a closure before spinning up the thread. It should come out closer to the ASP.NET performance counters. Not sure if it will account for the entire discrepancy, but it should help.

I agree with the commenter above that you shouldn't be starting a new thread with every request like this. Spinning up new threads takes time and is memory intensive. If this is a high performance application it will definitely have an effect. Using QueueUserWorkItem would be better, although using ThreadPool comes with its own set of concerns and limitations.
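For illustration, the pool-based version might look something like this. This is only a sketch: the `XmlReceiver` class, the `InsertIntoDatabase` method, and the return value are placeholders standing in for the asker's actual code, and the key point is that the end time is captured before the work is queued.

```csharp
using System;
using System.Threading;

public class XmlReceiver
{
    private readonly DateTime ConnectionStartTime = DateTime.Now;

    public string ProcessXml(string xmlFile)
    {
        // Capture the end time before handing off to the pool, so the
        // recorded duration covers request processing only.
        DateTime connectionEndTime = DateTime.Now;
        ThreadPool.QueueUserWorkItem(_ =>
            InsertIntoDatabase(ConnectionStartTime, connectionEndTime, xmlFile));
        return "responseToUser";
    }

    private void InsertIntoDatabase(DateTime start, DateTime end, string xml)
    {
        // Placeholder for the real database insert.
    }
}
```

Because the lambda closes over `connectionEndTime`, the value recorded is the one computed on the request thread, not whenever the pooled work item happens to run.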

As a final comment, the pattern you are using has some other potential gotchas that will surface as your request rate increases (queuing, concurrency and bottleneck issues). I'd bet that in your current implementation the discrepancy grows with the request rate. If this is a high performance or performance sensitive application I would use a different approach to measuring concurrency entirely.
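One such alternative is to count in-flight requests directly with Interlocked operations, incrementing on entry and decrementing on exit, instead of reconstructing concurrency from timestamps after the fact. The sketch below is illustrative only (the class and member names are made up, not part of the question's code):

```csharp
using System.Threading;

public static class ConcurrencyTracker
{
    private static int _inFlight; // requests currently being processed
    private static int _peak;     // highest concurrency observed so far

    // Call at the start of each request.
    public static void Enter()
    {
        int now = Interlocked.Increment(ref _inFlight);
        // Publish a new peak if we just exceeded the old one; the CAS loop
        // handles races between concurrent requests.
        int oldPeak;
        while (now > (oldPeak = Volatile.Read(ref _peak)))
        {
            if (Interlocked.CompareExchange(ref _peak, now, oldPeak) == oldPeak)
                break;
        }
    }

    // Call when the request finishes.
    public static void Exit()
    {
        Interlocked.Decrement(ref _inFlight);
    }

    public static int Peak => Volatile.Read(ref _peak);
}
```

This measures concurrency at the moment it happens, so it can't drift from reality the way after-the-fact timestamp overlap queries can.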

Joe Enzminger