
Moved from https://stackoverflow.com/questions/25304968/uwsgi-django-with-nginx-high-availability-setup to here.

I am setting up high availability (HA) on RHEL 6.5. My stack is:

1. uwsgi  
2. nginx  
3. django 
4. Pacemaker 

Now I understand that nginx can be set up easily by monitoring nginx_status:

    location /nginx_status {
        # Turn on nginx stats
        stub_status on;
        access_log   off;
        # Security: only allow access from localhost #
        allow 127.0.0.1;
        # Send rest of the world to /dev/null #
        deny all;
    }

This takes care of heartbeat monitoring for nginx.
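For the Pacemaker side, a rough sketch of the cluster resources might look like the following. This assumes the `pcs` shell is in use (the crm shell equivalent would be similar), an LSB-compliant `/etc/init.d/nginx`, and placeholder values for the resource names, virtual IP and netmask:

    # floating service IP that moves with the active node
    # (z.z.z.z is a placeholder virtual IP, not one of the node addresses)
    pcs resource create cluster_vip ocf:heartbeat:IPaddr2 ip=z.z.z.z cidr_netmask=24 op monitor interval=30s

    # nginx via its LSB init script; Pacemaker periodically calls its "status" action
    pcs resource create webserver lsb:nginx op monitor interval=30s

    # keep the IP and nginx together and start them in order
    pcs resource group add web_group cluster_vip webserver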

But my question is: how do I ensure that uwsgi stays in a running state, so that when the second nginx machine comes up it recognises the uwsgi process and binds to it? And if uwsgi goes down, how do I bring it back up and rebind it to nginx?

The setup is as follows.

Cluster machines:

1. x.x.x.x (main machine)
2. y.y.y.y (slave machine)

Shared storage:

1. /apps (SAN)

`/apps` is available on both machines as shared storage.

The django + uwsgi application runs with:

1. virtualenv: /apps/venv
2. application in: /apps
3. uwsgi configuration in: /apps/config.d
4. running application: /apps/project

uwsgi configuration:

    [uwsgi]

    # the base directory (full path)
    chdir           = /apps/project

    # Django's wsgi file
    module          = project.wsgi

    # the virtualenv (full path)
    home            = /apps/venv

    # process-related settings
    # master
    master          = true

    # maximum number of worker processes
    processes       = 4

    # the socket (use the full path to be safe)
    socket          = /tmp/project.sock

    # ... with appropriate permissions - may be needed
    chmod-socket    = 666

    # clear environment on exit
    vacuum          = true

    # daemonize
    daemonize       = true

    # logging
    logger          = file:/tmp/uwsgi.log
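For reference, the nginx side binds to this socket with a plain `uwsgi_pass` (a sketch; the location block itself is an assumption, only the socket path is taken from the config above):

    location / {
        include      uwsgi_params;
        uwsgi_pass   unix:///tmp/project.sock;
    }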

I can't figure out how uwsgi should be run in an HA setup.

Joddy

1 Answer


I wouldn't run uwsgi in an HA setup. Just make nginx talk to a local uwsgi and run nginx in an HA setup with pacemaker or a load balancer.
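A minimal sketch of what I mean, assuming uwsgi is installed in the virtualenv and the ini file is named `project.ini` (both are assumptions on my part):

    # on BOTH nodes: start uwsgi locally at boot (init script, rc.local or Emperor);
    # it is not a cluster resource, it simply owns /tmp/project.sock on each node
    /apps/venv/bin/uwsgi --ini /apps/config.d/project.ini

    # only the floating IP and nginx are managed by pacemaker; whichever node is
    # active serves requests through its own local uwsgi socket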

Dennis Kaarsemaker
  • I am a bit confused here: `/apps` is shared between `node1` and `node2` (`node1` as master and `node2` as slave). Now say `node1` goes down; `node2` would come up with its own nginx, and that nginx would read the socket from `/tmp/project.sock`. Great!! But uwsgi would not be running by itself on `node2`, so there would be no socket at `/tmp/project.sock`. So what would be the proper way? – Joddy Aug 15 '14 at 09:02
  • I wouldn't share /apps either but have a real deployment process :) – Dennis Kaarsemaker Aug 15 '14 at 09:04
  • Ok, so you mean `/apps/project` would be installed on both nodes, and if one goes down, the other would come up and nginx would read from the local socket. So at upgrade time I would have to upgrade the code on both nodes – Joddy Aug 15 '14 at 09:05
  • Should I use xinetd for managing `uwsgi`? If a node goes down, xinetd would come up on the second node and start uwsgi? What do you suggest? – Joddy Aug 15 '14 at 09:06
  • Actually I don't have a choice in the matter: `/apps` is on the SAN, and `node1` and `node2` share `/apps`. The server is RHEL 6.5, and `python2.7` along with all the modules is stored in `/apps` only – Joddy Aug 15 '14 at 09:08
  • Sure you have a choice, you can create a proper deployment procedure :) – Dennis Kaarsemaker Aug 15 '14 at 09:13
  • Thanks!! I will share this info with my boss as well. Let's see. – Joddy Aug 15 '14 at 09:18