
After `eb deploy`, the environment gets stuck in Health: 'Severe'.

It shows the following warning in recent events:

Environment health has transitioned from Info to Severe. ELB processes are not healthy on all instances. Application update in progress on 1 instance. 0 out of 1 instance completed (running for 3 minutes). None of the instances are sending data. ELB health is failing or not available for all instances.

I'm not able to SSH into the instance: connection reset by peer. (I'm normally able to SSH into the instance without any issues.)

The request logs function doesn't work because:

An error occurred requesting logs: Environment named portal-api-staging is in an invalid state for this operation. Must be Ready.

CloudWatch Logs only contains the same message as 'recent events'.

How do I figure out why the deployment fails?

The AWS documentation says I should check the logs or SSH into the instance, but neither of those options works.

vrepsys
  • Did you ever solve this? I've been fighting with this for 3 days now. – BWStearns Nov 30 '22 at 20:32
  • I found my instance was running out of disk space during deployment. It somehow left the instance in a state where I couldn't SSH in or otherwise connect to check the logs. Increasing the disk size solved the issue. – vrepsys Jan 02 '23 at 18:13
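If disk space is the culprit, the root volume size can be set through an `.ebextensions` config file in the application bundle. A minimal sketch, assuming a file named `.ebextensions/volume.config` (the filename and the 32 GB value are arbitrary examples; the settings only apply to newly launched instances, so the environment's instances must be replaced or the environment rebuilt):

```yaml
# .ebextensions/volume.config
# Enlarge the root EBS volume for instances in this environment.
option_settings:
  aws:autoscaling:launchconfiguration:
    RootVolumeType: gp3
    RootVolumeSize: "32"
```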

1 Answer


We found the issue: the deployment was getting stuck and then rolled back. Changing the deployment policy to AllAtOnce and disabling RollingUpdateEnabled fixed it.
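Those two settings can be applied with an `.ebextensions` config file, roughly like the sketch below (the filename is an example; the same options can also be set in the console under Configuration → Rolling updates and deployments):

```yaml
# .ebextensions/deploy.config
# Deploy the new version to all instances at once and
# disable rolling configuration updates.
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: AllAtOnce
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: false
```

Note that AllAtOnce takes the whole environment down during each deployment, so this trades availability for simpler, faster deploys.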

BWStearns