I have a Python Flask application that runs behind NGINX, and I have dockerised the whole setup into a single image. By default the Flask application listens on port 5000 and NGINX on port 80. If I run the image in a container, all the services work fine: I can access them through NGINX on port 80, which is internally mapped to Flask's port 5000. Now I want to add a health check to this image, so I am using the py-healthcheck module in the Flask application like this:
from flask import Flask
from healthcheck import HealthCheck

app = Flask(__name__)
health = HealthCheck()

def redis_available():
    # dummy check for now; always reports the app as up
    return True, "UP"

health.add_check(redis_available)
app.add_url_rule("/health", "healthcheck", view_func=lambda: health.run())
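(In the container the app is presumably started by a WSGI server under supervisord; for quick local testing it can also be run directly. A minimal sketch, assuming this module is the entry point:)

if __name__ == "__main__":
    # listen on all interfaces on the default Flask port
    app.run(host="0.0.0.0", port=5000)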
Now, if I run only the Flask application (without NGINX) on my local system and use the URL
http://localhost:5000/health
I get the proper response saying the application is up. In order to add the health check to the image, I added this command to the Dockerfile:
HEALTHCHECK --interval=30s --timeout=120s --retries=3 CMD wget --no-check-certificate --quiet --tries=1 --spider https://localhost:80/health || exit 1
Here I am assuming that I should reach the health-check endpoint through NGINX, which is why I am using localhost:80. But when I run the container it is always reported as unhealthy, even though all the endpoints work fine. Do I have to add some configuration to the NGINX conf file so that the Flask health-check endpoint is accessible through NGINX?
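For reference, the recorded output of the failing health check can be read from the running container with a standard Docker command (the container name below is a placeholder):

docker inspect --format '{{json .State.Health}}' <container-name>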
Here is the nginx config:
# based on default config of nginx 1.12.1
# Define the user that will own and run the Nginx server
user nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
# auto will default to number of vcpus/cores
worker_processes auto;
# altering default pid file location
pid /tmp/nginx.pid;
# turn off daemon mode to be watched by supervisord
daemon off;
# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;
# events block defines the parameters that affect connection processing.
events {
    # Define the maximum number of simultaneous connections that can be opened by a worker process
    worker_connections 1024;
}
# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
    # Include the file defining the list of file types that are supported by NGINX
    include /opt/conda/envs/analytics_service/etc/nginx/mime.types;
    # Define the default file type that is returned to the user
    default_type text/html;
    # Don't tell nginx version to clients.
    server_tokens off;
    # Specifies the maximum accepted body size of a client request, as
    # indicated by the request header Content-Length. If the stated content
    # length is greater than this size, then the client receives the HTTP
    # error code 413. Set to 0 to disable.
    client_max_body_size 0;
    # Define the format of log messages.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Define the location of the log of access attempts to NGINX
    access_log /opt/conda/envs/analytics_service/etc/nginx/access.log main;
    # Define the location on the file system of the error log, plus the minimum
    # severity to log messages for
    error_log /opt/conda/envs/analytics_service/etc/nginx/error.log warn;
    # Define the parameters to optimize the delivery of static content
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Define the timeout value for keep-alive connections with the client
    keepalive_timeout 65;
    # Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
    #gzip on;
    # Include additional parameters for virtual host(s)/server(s)
    include /opt/conda/envs/analytics_service/etc/nginx/conf.d/*.conf;
}
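The actual server/location blocks live in the conf.d include at the bottom and are not shown here; presumably they proxy requests (including /health) to the Flask app on port 5000, roughly along these lines (a sketch only, with the upstream address assumed):

server {
    listen 80;

    location / {
        # forward everything, including /health, to the Flask app
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}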