
I'm currently hosting about 30 very-low-traffic websites with Nginx (~30k page views a day) and having no issues. I'm curious how many sites you can comfortably run on one server, assuming at least some amount of per-site configuration.

This question was sparked by curiosity about a site that uses the following host:

http://www.forest.net/services/shared/webhosting.php

At 100MB of storage and 3GB of data transfer, I calculated that you could probably host 5,000 sites that get almost no traffic, with no issue and for next to nothing, provided Nginx didn't choke on such an absurd number of configs. I've told them not to upgrade with this company and to consider a different one.
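(Back of the envelope, and the disk size is purely an illustrative assumption on my part: 5,000 sites × 100MB ≈ 500GB of storage, which fits on a single modest disk, and even if every site somehow used its full 3GB of transfer a month, that's ~15TB/month, or under 50Mbit/s averaged out.)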

So what say the experts? At what point should I reasonably stop adding sites to my current setup, and does anyone want to take a stab at my absurd hypothetical?

(Note: the site linked has not changed its prices or capacity since December 2003 [wayback machine], which I found good for a chuckle. I feel sorry for anyone who hosts there, though.)

G.Martin
  • "absurd hypothetical" is right. Try [this question](http://serverfault.com/questions/292013/good-sites-for-discussing-specific-hosting-provider-server-specification-scenario) – womble Jul 26 '11 at 05:06
  • I will point out that I asked an actual question, and then posed the hypothetical which made me think about it. – G.Martin Jul 27 '11 at 05:21
  • possible duplicate of [How do you do Load Testing and Capacity Planning for Web Sites](http://serverfault.com/questions/350454/how-do-you-do-load-testing-and-capacity-planning-for-web-sites) – Michael Hampton Aug 03 '12 at 04:55

1 Answer


I have a similar setup to yours - a small number of low-volume, PHP-driven sites - and in that scenario nginx has no problem with the configuration. Many of the sites I host have multiple subdomains, yet with the exception of one or two 'virtual hosts' the default configuration applies to all of them. I don't foresee nginx itself being a problem even with a few thousand virtual hosts; I just don't think a single server is well suited to that many sites, for the reasons below.
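For what it's worth, the configuration side really can stay tiny even with a huge number of names - a single catch-all server block can serve most of them. This is just a minimal sketch; the directory layout is my assumption, not a recommendation:

```
# Minimal sketch: one server block serving many near-identical sites,
# assuming each site's files live under /srv/www/<hostname>/public.
server {
    listen 80 default_server;
    server_name _;                    # catch-all for any hostname
    root /srv/www/$host/public;       # per-site docroot keyed on the Host header
    index index.html index.php;

    location / {
        try_files $uri $uri/ =404;
    }
}
```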

Firstly, if you are running PHP (fairly common in a hosting environment), you are probably running either php-fpm or reverse-proxying to Apache. Chances are you need some isolation, so your thousands of users each run under their own username. Typically each user then gets their own PHP process pool, each of which needs its own memory (and php-fpm's default setup won't let a pool scale down to zero running 'servers'). I would therefore suggest that memory, rather than nginx, is the real constraint in this scenario.
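To put a very rough number on that, here is what one per-user pool might look like - the pool name, socket path, and per-worker memory figure are assumptions of mine, not measurements:

```
; Hypothetical per-user php-fpm pool - names and limits are illustrative only.
[user1234]
user   = user1234
group  = user1234
listen = /var/run/php-fpm/user1234.sock

; The (default) dynamic manager always keeps at least min_spare_servers
; workers alive per pool. If an idle PHP worker costs ~20-30MB (my guess),
; 5,000 pools means on the order of 100-150GB of RAM sitting idle.
pm = dynamic
pm.max_children      = 3
pm.start_servers     = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
```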

At 5,000 sites, I imagine you won't know most of the people whose sites you are hosting and can't personally help each of them with their site design. Many people rely on .htaccess files, which nginx doesn't support, so you might end up reverse-proxying those sites to Apache - and Apache probably won't scale to those numbers as well (especially with suExec or FastCGI run through Apache).
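If you did end up pushing the .htaccess-dependent sites through to a backend Apache, the nginx side of it is straightforward - this is only a sketch, and the backend address/port are assumed:

```
# Sketch: hand selected sites off to an Apache instance on 127.0.0.1:8080
# (assumed address/port) so their .htaccess files keep working.
server {
    listen 80;
    server_name legacy-site.example.com;

    location / {
        proxy_pass       http://127.0.0.1:8080;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```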

Secondly, I presume that the database will not handle the increase in load as gracefully as nginx will. Some well-executed caching (either in nginx or in another layer such as Varnish) might reduce the impact of this - although, if you are already caching, then the scaling issue still exists.
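As an example of what that caching could look like at the nginx layer (the zone name, sizes, and cache lifetimes below are illustrative, not recommendations):

```
# Sketch: cache PHP responses in nginx so repeat hits never reach PHP or the DB.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=SITECACHE:50m
                   max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;
    root /srv/www/example.com/public;

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   unix:/var/run/php-fpm/example.sock;

        fastcgi_cache       SITECACHE;
        fastcgi_cache_key   "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}
```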

Finally, I imagine that having a single point of failure for 5,000 websites might make some people unhappy. If your one server goes down with 30 sites on it, you have a few unhappy clients; if it goes down with 5,000 sites on it, I doubt it would be pleasant. A small high-availability cluster (arguably even just two nodes) might help with this, but probably isn't sufficient.
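For the two-node case, the usual approach is a floating IP with something like keepalived; a bare-bones sketch (interface, router ID, and addresses are all assumptions):

```
# Sketch: VRRP floating IP shared between two nodes via keepalived.
vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 150              # use a lower priority (e.g. 100) on the backup
    advert_int 1
    virtual_ipaddress {
        203.0.113.10          # the address your 5,000 sites resolve to
    }
}
```

That only covers the IP failover, though - you would still need to keep content and databases in sync between the nodes, which is why I say it probably isn't sufficient on its own.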

(I have no substantial data to support my position, making this answer more conjecture than fact)

cyberx86