
My question is simple. About 2 years ago we began migrating to ASP.NET from ASP Classic. Our issue is we currently have about 350 sites on a server and the server seems to be getting bogged down. We have been trying various things to improve performance (query optimizations, disabling ViewState and Session State where possible, etc.) and they have all helped, but as we add more sites we end up using more of the server's resources, so the improvements we made in code are virtually erased.
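For reference, the kind of per-page trimming we've been doing looks roughly like the minimal sketch below (simplified for illustration; the LeanBasePage name is made up and this isn't our exact code). A shared base page turns ViewState off by default, and individual pages opt back in only where they really need it:

```csharp
// Simplified illustration only, not the real site code: a shared base page
// that disables ViewState by default so each page doesn't have to remember to.
using System;
using System.Web.UI;

public class LeanBasePage : Page
{
    protected override void OnPreInit(EventArgs e)
    {
        base.OnPreInit(e);
        // Pages that genuinely need ViewState set EnableViewState = true themselves.
        EnableViewState = false;
    }
}
```

Session state gets the same treatment per page via EnableSessionState="false" (or "ReadOnly") in the @ Page directive.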

Basically we're now at a tipping point: our CPUs currently average near 100%. Our IS department would like us to find new ways to rework the code on the sites to improve performance.

I have a theory that we are simply at the limit on the number of sites one server can handle.

Any ideas? Please only respond if you have a good idea about what you are talking about. I've heard a lot of people theorize about the situation. I need someone who has actual knowledge about what might be going on.

Here are the details.

  • 250 ASP.NET Sites
  • 250 Admin Sites (Written in ASP.NET, basically they are backend admin sites)
  • 100 Classic ASP Sites

Running on a virtualized Windows Server 2003.

  • 3 CPUs, 4 GB Memory.
  • Memory stays around 3 - 3.5 GB
  • CPUs spike very badly; sometimes they remain near 100% for short periods of time (30 - 180 seconds)

The database is on a separate server and is SQL Server 2005.

TroySteven
  • depending on the server and the code you can put a lot more sites on one server than what you describe, BUT since we don't know anything about the source code (is it .NET 2 or 4? what exactly does it do?) and "CPU spikes inside a VM" are not a real diagnostic thing, an answer will just be pure speculation... – Yahia Dec 09 '11 at 20:55
  • Gosh, you try to run this amount of sites on a TOY SERVER? This is not about asp.net scalability, it is about "how cheap can I get". Here is a tip - get at least a decent workstation - 16 GB memory. Then get enough disks for decent IO. THEN come back and ask. – TomTom Dec 09 '11 at 20:59
  • *"Please only respond if you have a good idea about what you are talking about. I've heard a lot of people theorize about the situation. I need someone who has actual knowledge about what might be going on."* You're kidding, right? You give us no information to go on to make such an answer. We would have to be *you* to know that. That said... you could be right. Or, you could be wrong. – Andrew Barber Dec 09 '11 at 21:01
  • One other thing, which I mention below: there's a balance between the cost of paying people to optimize software to the nth degree, and the cost of buying more hardware. You should also consider the costs associated with trying to eke ever more performance out. It's an ever-decreasing return. – dash Dec 09 '11 at 21:13
  • How many application pools are you using? Could that number be reduced? – Brian Dec 09 '11 at 21:44

6 Answers


It looks like you've reached that point. You've optimised your apps, you've looked at server performance, you can see you are hitting peak memory usage, maxing out the CPU, and, let's face it, administering so many websites can't be easy.

Also, the spec of your VM isn't fantastic. Its memory, in particular, potentially isn't great for the number of sites you have.

You have plenty of reasons to move.

However, some things to look at:

1) How many of those 250 sites are actually used? Which ones are the peak performance offenders? Those ones are prime candidates for being moved off onto their own box.

2) How many are not used at all? Can you retire any?

3) You are running on a virtual machine. What kind of virtual machine platform are you using? What other servers are running on that hardware?

4) What kind of redundancy do you currently have? 250 sites on one box with no backup? If you have a backup server, you could use that to round robin requests, or as a web farm, sharing the load.

Let's say you decide to move. The first thing you should probably think about is how.

Are you going to simply halve the number of sites? 125 + admins on one box, 125 + admins on the other? Or are you going to move the most used?

Or you could have several virtual machines, all active, as part of a web farm or load balanced system.

By the sounds of things, though, there's a real resistance to buying more hardware.

At some point, though, you are going to have to, as sometimes things just get old or get left behind. New servers have much more processing power and memory in the same space, and can be cheaper to run.

Oh, and one more thing. The cost of all those repeated optimizations and testing probably could easily be offset by buying more hardware. That's no excuse for not doing any optimization at all, of course, and I am impressed by the number of sites you are running, especially if you have a good number of users, but there is a balance, and I hope you can tilt towards the "more hardware" side of it some more.

dash

I think you've answered your own question really. You've optimised the sites, you've got the database server on a different server. And you have 600 sites (250 + 250 + 100).

The answer is pretty clear to me. Buy a box with more memory and CPU power.

Moo-Juice
  • He does not need a different server; he needs something that you can buy in a shop. Heck, his "server" is not even a decent virtual machine. I run a VM for my dev environment that is many times more powerful than this toy VM setup. – TomTom Dec 09 '11 at 21:00
  • But can it run Crysis? ;-) I think the spec is light on everything, RAM especially, but using a farm of low powered machines can be a decent compromise. – dash Dec 09 '11 at 21:09
  • My home desktop is even better than that... and the servers at my work by far – Ruben Dec 09 '11 at 21:11

There is no real limit on the number of sites your server can handle; if all 600 sites had no users, you wouldn't have very much load on the server.

I think you might find a better answer at serverfault, but here are my 2 cents.

You can scale up or scale out.

  • Scale up: upgrade the machine with more memory and more CPU cores.
  • Scale out: distribute the load by splitting the sites across two or more servers, e.g. 300 on server A and 300 on server B, or 200 each across three servers.

As @uadrive mentions, this is an issue of load, not of # of sites.

danludwig

Just thinking this through, it seems like you would be better off measuring the # of users hitting the server instead of the # of sites. You could have 300 sites and find that only half of them are used. Knowing the usage would be better in my mind.
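For example, one rough way to get that usage picture is to read the per-application ASP.NET performance counters on the web server. This is just a minimal sketch under the assumption that the standard "ASP.NET Applications" category is installed (on some machines the category is version-specific, e.g. "ASP.NET Apps v2.0.50727"):

```csharp
// Sketch: dump total request counts per ASP.NET application instance so you
// can see which of the sites actually receive traffic. Run on the web server
// itself with rights to read performance counters.
using System;
using System.Diagnostics;

class SiteUsageReport
{
    static void Main()
    {
        var category = new PerformanceCounterCategory("ASP.NET Applications");

        foreach (string instance in category.GetInstanceNames())
        {
            using (var requests = new PerformanceCounter(
                "ASP.NET Applications", "Requests Total", instance, true))
            {
                Console.WriteLine("{0,-60} {1,12:N0} requests",
                    instance, requests.NextValue());
            }
        }
    }
}
```

Charting "Requests/Sec" per instance in perfmon over a normal day would give the same picture without writing any code.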

uadrive

There's no simple formula answer, like "you can have a maximum of 47.3 sites per gig of RAM". You could surely maintain performance with many more sites if each site had only one user per day. There are likely servers that have only two sites but performance is terrible because each hit requires a massive database query.

In practice, the only way to approach this is empirically: When performance starts to degrade, you have a problem. The fact that somebody wrote in a book somewhere that a server with such-and-such resources should be able to support more sites is of little value if, in practice, YOUR server can't support YOUR sites and YOUR users.

Realistic options are:

(a) Optimize your code and database queries. You say you've already done that. Maybe you can do more. It's unlikely that your code is now the absolute best that it can possibly be, but it may well be that the effort to find further improvements will be hugely expensive.

(b) Buy a bigger server.

(c) Break your sites across multiple servers, and either update DNS or install a front-end to map requests to the correct server.

Jay

Maxing out on CPU use can be a good sign, in the sense that moving to a larger server, or dividing the sites between multiple servers, is likely to help.

There are many things you can do to help improve performance and scalability (in fact, I've written a book on this subject -- see my profile).

It's difficult to make meaningful suggestions without knowing much more about your apps, but here are a few quick tips that might help to get you started:

  1. Multiple AppPools are expensive. How many sites do you have per AppPool? Combine multiple sites per AppPool if you can
  2. Minimize client round-trips: improve client and proxy-level caching, offload static files to a CDN, use image sprites, merge multiple CSS and JS files
  3. Enable output caching on pages and/or controls where possible (see the sketch after this list)
  4. Enable compression for static files (more CPU use on first access, but less after that)
  5. Avoid session state altogether if you can (prefer cookies for state management). If you can't, then at least configure EnableSessionState="ReadOnly" for pages that don't need to write it, or "false" for pages that don't need it at all
  6. Many things on the SQL Server side: caching (e.g. SqlCacheDependency, sketched after this list), command batching, grouping multiple inserts/updates/deletes into a single transaction, using stored procedures instead of dynamic SQL, using async ADO.NET instead of LINQ or EF, making sure your DB logs are on separate spindles from the data, etc.
  7. Look for algorithmic issues with your code; for example, hash tables are often better than linear searches, etc
  8. Minimize cookie sizes, and only set cookies on pages, not on static content.
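To make items 3 and 6 a bit more concrete, here is a minimal sketch of what the programmatic versions can look like in a WebForms code-behind. The "SiteDb" database entry, the "Products" table, and the query are made-up placeholders rather than anything from the question, and SqlCacheDependency needs the usual aspnet_regsql.exe and sqlCacheDependency web.config setup before it will work:

```csharp
// Illustrative only: server-side output caching plus a SqlCacheDependency-backed
// data cache entry, using placeholder names ("SiteDb", "Products").
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public partial class ProductList : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Item 3: programmatic output caching, roughly equivalent to
        // <%@ OutputCache Duration="300" VaryByParam="None" %> on the page.
        Response.Cache.SetCacheability(HttpCacheability.Server);
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5));
        Response.Cache.SetValidUntilExpires(true);

        // Item 6: cache the query result and invalidate it only when the
        // underlying table changes, instead of hitting SQL Server per request.
        var products = (DataTable)Cache["Products"];
        if (products == null)
        {
            products = LoadProducts();
            Cache.Insert("Products", products,
                new SqlCacheDependency("SiteDb", "Products"));
        }
        // ... bind 'products' to a grid or repeater here ...
    }

    private static DataTable LoadProducts()
    {
        var table = new DataTable();
        var connectionString = System.Configuration.ConfigurationManager
            .ConnectionStrings["SiteDb"].ConnectionString;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name, Price FROM dbo.Products", conn))
        using (var adapter = new SqlDataAdapter(cmd))
        {
            adapter.Fill(table); // Fill opens and closes the connection itself
        }
        return table;
    }
}
```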

In addition, using a VM is likely to cost you up to about 10% in performance -- make sure it's really worth that for what it buys you in terms of improved manageability.

RickNZ