
I have a Python web application running on Linux with nginx + uWSGI. I am currently running this on a single machine and would like to migrate this setup to a cluster.

I am new to clusters. What are the steps for this migration? Does the web application need to be changed in order to run the same setup on a cluster?

I would be grateful if somebody could point me to some tutorials on this.

Thanks

Nithin

1 Answer


This really depends on what you mean by cluster and what your aim is in clustering it. For web applications the most common reasons are spreading load or high availability, so I'll base my answer on assuming you want one of those.

Spreading load

To spread load among multiple machines, the easiest first step is to move your datastore (database, memcached, etc.) to a separate machine. If that does not help enough, you need two more servers: one to be a copy of your appserver, and one to be a loadbalancer that distributes requests between the two appservers.
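Since you are already running nginx, it can double as the loadbalancer in front of multiple uWSGI appservers. A minimal sketch, assuming two appservers with hypothetical hostnames (`app1`, `app2`) and uWSGI listening on port 8000 on each:

```nginx
# Hypothetical loadbalancer config; hostnames and ports are placeholders.
upstream appservers {
    server app1.example.com:8000;
    server app2.example.com:8000;
}

server {
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass appservers;   # nginx balances round-robin by default
    }
}
```

With this layout the application itself does not need to change, as long as it keeps no local state (sessions, uploads) on the appserver's disk; shared state belongs in the datastore.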

Redundancy

If your aim is high availability, you need a clustered datastore (such as MySQL Cluster) and something like Pacemaker/Corosync to keep your service's IP available at all times on one of two boxes.
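The usual building block here is a floating (virtual) IP that Pacemaker moves to whichever box is healthy. A rough sketch using the `crm` shell, assuming Corosync/Pacemaker are already installed and the IP and netmask are placeholders:

```shell
# Hypothetical floating-IP resource; replace 192.0.2.10/24 with your own.
crm configure primitive vip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 \
    op monitor interval=30s
```

Clients connect to the floating IP; if the node holding it fails, Pacemaker brings the address up on the surviving node.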

Combining the two

The 'spreading load' setup also provides basic high availability: you can lose an appserver and the service will remain up. For better availability, you would also cluster the loadbalancers, and maybe the datastore too.

Summary

This is only a very small set of things you can do. 'Clustering' in itself is not a target, but a means to an end. What is it you actually want to achieve?

Dennis Kaarsemaker
  • Thank you for the reply. I was looking for both "spreading load" and "redundancy". I was looking for a star pattern in which the app runs on the master node and the master node then distributes threads/computations to worker nodes connected to it. Ideally, the web application would be unaware of the cluster. Is that possible? – Nithin Feb 10 '13 at 14:07
  • That sounds more like a service oriented architecture, which is not something I'd recommend as a first step to scaling out. It's also the least easy to make transparent to the application. – Dennis Kaarsemaker Feb 11 '13 at 06:50
  • What would be your recommended first step? If we use something like OpenStack, can't we keep adding compute nodes when the need arises? From what I have read, in such a case the application need not be aware of the cluster, right? – Nithin Feb 12 '13 at 10:44
  • My recommended first step would be figuring out your needs. How much do you need to scale this up? What are the plans in 1 or 2 years? Then put in just enough engineering effort to reach that. – Dennis Kaarsemaker Feb 12 '13 at 20:18