Does anyone know if there is a way to find out why AWS Elastic Beanstalk classifies an environment's health as Red when it is actually working OK (at least from my perspective)?
It is a web-based application, and the health check path is set to just "/".
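For what it's worth, the configured health check path can be confirmed from the API with something like this (a rough boto3 sketch; the application/environment names and region are placeholders):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # region is a placeholder

# List the environment's option settings and pick out the health check URL
settings = eb.describe_configuration_settings(
    ApplicationName="my-app",       # placeholder
    EnvironmentName="my-app-env",   # placeholder
)
for group in settings["ConfigurationSettings"]:
    for opt in group["OptionSettings"]:
        if opt["OptionName"] == "Application Healthcheck URL":
            print(opt["Namespace"], opt["OptionName"], opt.get("Value"))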
I can see the health checker making requests in the NGINX access logs, and the response is an HTTP 200:
172.31.**.*** - - [22/Aug/2015:17:26:51 +0000] "GET / HTTP/1.1" 200 21099 "-" "ELB-HealthChecker/1.0"
172.31.**.** - - [22/Aug/2015:17:26:51 +0000] "GET / HTTP/1.1" 200 21099 "-" "ELB-HealthChecker/1.0"
The application is up, running, and responding to requests from my browser.
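In case it helps with diagnosing this, the ELB's own view of the instance (including the OutOfService reason, if there is one) can be pulled with something along these lines (again a boto3 sketch with placeholder names):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")   # region is a placeholder
elb = boto3.client("elb", region_name="us-east-1")               # classic ELB client

# Find the load balancer that Elastic Beanstalk created for the environment
resources = eb.describe_environment_resources(EnvironmentName="my-app-env")  # placeholder name
lb_name = resources["EnvironmentResources"]["LoadBalancers"][0]["Name"]

# OutOfService instances come back with a ReasonCode and Description explaining why
health = elb.describe_instance_health(LoadBalancerName=lb_name)
for state in health["InstanceStates"]:
    print(state["InstanceId"], state["State"], state.get("ReasonCode"), state["Description"])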
One thing I have noticed is that the monitoring tab of the AWS console reports 0.9 instances rather than 1. The auto scaling group is currently set up with a minimum of 1 instance and a maximum of 1, because I only need a single instance at this point. The only reason I configured an auto scaling group in the first place is that I'm using the ELB for SSL termination.
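The environment's recent event stream sometimes explains a Red status; a minimal sketch for pulling it (environment name and region are placeholders):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # region is a placeholder

# Recent events for the environment; the severity/message often explain a Red health status
events = eb.describe_events(EnvironmentName="my-app-env", MaxRecords=20)
for event in events["Events"]:
    print(event["EventDate"], event["Severity"], event["Message"])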
The app is currently running on 64bit Amazon Linux 2015.03 v1.4.1 running Docker 1.6.0, but I get the same problem on the latest build too (64bit Amazon Linux 2015.03 v2.0.0 running Docker 1.6.2).