  • Server - Windows Server 2012 R2
  • CPU - AMD Opteron(tm) processor 6174 2.20GHz (12 processors)
  • RAM - 8.00 GB
  • System Type - 64-bit OS
  • IIS - version 8.5.9699.16384
  • SiteFinity - 9.1.6131
  • Virtual Server
  • No Load balancing
  • Database - not affected/separate server

Periodically, our server's IIS worker process, which runs the application pool for the production release of our website (built with SiteFinity), spikes and remains pegged at 100% CPU. Something within the web application is consuming all of the resources available to the machine just to run the IIS worker process.

With minimal load, our site will suddenly start using all of the available processor time, and we cannot track the issue down. It doesn't seem to happen at any particular point in the day, nor does it appear to be tied to high load. When it happens, we either have to allocate more resources to the machine, which requires a reboot since we are not cloud hosted, or we have to recycle the application pool to release the resources, flush the queued requests, and hope that we killed the offending process.
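For anyone reading along, the recycle step does not have to be done by hand in IIS Manager; it can be scripted against the IIS management API. The snippet below is only a minimal sketch using Microsoft.Web.Administration, and the application pool name is a placeholder, not our actual pool.

```csharp
// Minimal sketch: recycle an IIS application pool programmatically with
// Microsoft.Web.Administration. The pool name is a placeholder, not our real one.
// Requires a reference to Microsoft.Web.Administration.dll and elevated rights.
using Microsoft.Web.Administration;

class RecyclePool
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["SitefinityAppPool"];
            if (pool != null)
            {
                // Recycle tears down the w3wp.exe worker process and lets IIS start a fresh one.
                pool.Recycle();
            }
        }
    }
}
```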

We have been through the optimization document for startup. All our pages use the "standard" cache policy, a 3-minute cache with sliding expiration. All our images are on a long cache policy with sliding expiration, and the images are stored on disk rather than being served through the CMS.
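For anyone unfamiliar with the terminology, a cache "with a slide" means sliding expiration: the entry's lifetime resets every time it is read. Below is a minimal ASP.NET illustration of a 3-minute sliding window; the key and value are placeholders, and this only illustrates the concept rather than Sitefinity's actual page cache configuration.

```csharp
// Minimal sketch of a 3-minute sliding-expiration cache entry in ASP.NET.
// The key and value are placeholders; this only illustrates the concept of
// "cache with a slide", not Sitefinity's actual page cache configuration.
using System;
using System.Web;
using System.Web.Caching;

public static class SlidingCacheExample
{
    public static void CacheFragment(string key, object value)
    {
        HttpRuntime.Cache.Insert(
            key,
            value,
            null,                          // no cache dependency
            Cache.NoAbsoluteExpiration,    // no fixed expiry time
            TimeSpan.FromMinutes(3));      // sliding window, reset on each read
    }
}
```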

Doing some digging, we found some very old articles detailing what sounds like exactly our problem. I understand that these are very old and reference very old SF instances, but the issue they describe, high CPU utilization with minimal load, sounds identical to ours:

Has anyone experienced anything like this with SiteFinity recently, and do you have any tips/tricks/coupons on how you resolved it or located the rogue process chewing up server resources?

Thanks, and I look forward to your responses.

  • Do you have proof to support your statement that during high CPU load the traffic on the site is low? Have you checked the number of requests per second during that peak? This could be an automated attack - somebody is just running a tool that sends hundreds of thousands of requests to your site. Do you have any complex custom code that may be eating the resources? – Veselin Vasilev Mar 29 '17 at 00:55
  • I do not have numbers to support my claim, just my experience when the worker process spikes. When this happens the site takes minutes to load, so performing admin functions on the page is painful. We have looked at the IIS logs and talked to our hosting company; the traffic doesn't look suspicious, so we still don't think we are being attacked. In terms of our custom code, we try to keep it as stock as possible (to save time), but we do have some processes that run at night. I will check those to see whether they are failing to finish and chewing up resources. – Robert Dustin Mar 29 '17 at 18:28
  • yes, check those custom jobs that you have and let us know – Veselin Vasilev Mar 30 '17 at 05:46

1 Answer


Here are a few articles I found that might help:

http://knowledgebase.progress.com/articles/Article/How-to-isolate-a-performance-problem-in-Sitefinity?q=100%25+cpu&l=en_US&c=Product_Group%3ASitefinity&fs=Search&pn=1

http://knowledgebase.progress.com/articles/Article/How-to-resolve-high-CPU-after-sharing-page-link-on-Facebook?q=100%25+cpu&l=en_US&c=Product_Group%3ASitefinity&fs=Search&pn=1

Also, I would open IIS Manager, click on the server node, and then double-click "Worker Processes". You will need to be on the web server to see this option.

That page shows any worker processes with long-running requests that could be causing the CPU spike. You can keep clicking the REFRESH button in the upper right corner to see updates.
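If you would rather capture the same information from code (for example on a schedule, so you have data from the moment of a spike), the Microsoft.Web.Administration API exposes the same worker process and request lists that IIS Manager shows. Below is a minimal sketch; the 5-second threshold is just an illustrative value.

```csharp
// Minimal sketch using Microsoft.Web.Administration to list worker processes
// and their currently executing requests (the same data as the IIS Manager
// "Worker Processes" view). The 5-second threshold is an illustrative value.
using System;
using Microsoft.Web.Administration;

class WorkerProcessDump
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            foreach (WorkerProcess wp in serverManager.WorkerProcesses)
            {
                Console.WriteLine("PID {0} - app pool '{1}'", wp.ProcessId, wp.AppPoolName);

                // Requests that have been executing for longer than 5 seconds.
                foreach (Request request in wp.GetRequests(5000))
                {
                    Console.WriteLine("  {0} - {1} ms", request.Url, request.TimeElapsed);
                }
            }
        }
    }
}
```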

Let me know if this helps narrow down your issue.

Craig
  • Hi @RobertDustin, did you manage to solve your problem? Thanks – ChrisBellew Dec 07 '17 at 04:25
  • Yes and no. The issue still arises (very occasionally), but we found the main source of the consistent crashes. We had a service that updated our mobile application feeds; it was running too often and during peak load times, causing resource contention and bottlenecking the system. Once we properly scheduled the job (the SF system clock is on GMT, which is how we got our timing confused), the site started performing much better. – Robert Dustin Dec 11 '17 at 16:12
  • Also in that time, we moved from a regular web server on a VM to Azure hosting with horizontal and vertical scaling, which seems to have really helped as well. Load balancing was a major upgrade for our system. – Robert Dustin Dec 11 '17 at 16:12
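A small aside on the GMT point above: since the scheduler clock runs on GMT/UTC, converting the intended local run time to UTC explicitly avoids that kind of mix-up. Below is a minimal sketch, with a placeholder time zone and date.

```csharp
// Minimal sketch: convert an intended local run time to UTC before handing it
// to a scheduler whose clock runs on GMT/UTC. The time zone ID and the date
// are placeholder values.
using System;

class ScheduleTimeExample
{
    static void Main()
    {
        var localZone = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
        var localRunTime = new DateTime(2017, 12, 11, 2, 0, 0, DateTimeKind.Unspecified);

        // Store/schedule this UTC value so the job fires at 2:00 AM local time,
        // not 2:00 AM GMT (which could land in peak hours locally).
        DateTime utcRunTime = TimeZoneInfo.ConvertTimeToUtc(localRunTime, localZone);
        Console.WriteLine(utcRunTime);
    }
}
```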