
Update: OK, weird, so my instance's public DNS seems to have changed? Is that possible? I can now SSH in after switching to the new hostname, but I still can't SSH in from the browser-based connection.
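From what I've read, the public IPv4 address and public DNS name of an instance can change whenever it is stopped and started, unless an Elastic IP is attached, so maybe that's what happened here. To stop guessing, I've been looking up the current DNS name with a small script like this (just a sketch using the AWS SDK for JavaScript v3; the instance ID and region are placeholders):

// look-up-dns.js - print the instance's current public DNS name and IP.
// Sketch only: "i-0123456789abcdef0" is a placeholder instance ID, and
// credentials are assumed to come from the environment.
const { EC2Client, DescribeInstancesCommand } = require("@aws-sdk/client-ec2");

async function main() {
  const client = new EC2Client({ region: "us-east-1" }); // assumed region
  const out = await client.send(
    new DescribeInstancesCommand({ InstanceIds: ["i-0123456789abcdef0"] })
  );
  const instance = out.Reservations[0].Instances[0];
  console.log(instance.PublicDnsName, instance.PublicIpAddress);
}

main().catch(console.error);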

I have an EC2 t2.micro instance running Ubuntu 18 that keeps hitting the same issue each time I rebuild it and get it working again. I am running the following command: pm2 deploy ecosystem.config.js development. PM2 is a Node package that keeps a Node server running for me at all times; it also handles deployments to an instance, where it SSHes in, pulls the repo down from my GitHub repo, and runs Node.

Here is the ecosystem.config.js file with the configuration:

module.exports = {
  apps: [
    {
      name: "app",
      script: "npm",
      args: "start",
      watch: false,
      env: {
        NODE_ENV: "development",
        MY_SANITY_TOKEN: "obsf",
      },
    },
  ],

  deploy: {
    development: {
      user: "ubuntu",
      host: "obsf.compute-1.amazonaws.com",
      ref: "origin/development",
      repo: "git@github.com:name/website-gatsby-main.git",
      path: "/home/ubuntu/deploy",
      "pre-deploy-local": "",
      "post-deploy":
        "npm install && pm2 reload ecosystem.config.js --env development && npm install -g gatsby-cli",
      "pre-setup": "",
    },
  },
}
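One thing I'm now considering, since the public DNS keeps changing: reading the host from an environment variable instead of hardcoding it. Something like this sketch, where DEPLOY_HOST is just a name I made up:

// sketch: same deploy block, but with the host read from an
// environment variable (DEPLOY_HOST is hypothetical) so a changed
// public DNS doesn't require editing the file
deploy: {
  development: {
    user: "ubuntu",
    host: process.env.DEPLOY_HOST || "obsf.compute-1.amazonaws.com",
    // ...rest unchanged
  },
},

Then I'd export DEPLOY_HOST with the current DNS name before running pm2 deploy.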

Now, the first time I set up PM2 and deploy, it works just fine. But after a few successful deploys, all of a sudden I can't SSH into my instance anymore. If I restart the instance, I can connect again, but I can't connect at all using EC2 Instance Connect (the browser-based SSH connection), even though my security group hasn't changed and SSH is still open. This has happened on several instances, and I keep having to rebuild from scratch, which really slows me down.
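For what it's worth, when I'm locked out the only thing I can still reach is the EC2 API, so I've been pulling the instance system log to look for clues about sshd or the boot (again just a sketch with the AWS SDK for JavaScript v3; the instance ID and region are placeholders):

// get-console-log.js - fetch the instance system log without SSH.
const { EC2Client, GetConsoleOutputCommand } = require("@aws-sdk/client-ec2");

async function main() {
  const client = new EC2Client({ region: "us-east-1" }); // assumed region
  const out = await client.send(
    new GetConsoleOutputCommand({ InstanceId: "i-0123456789abcdef0" })
  );
  // Output comes back base64-encoded
  console.log(Buffer.from(out.Output ?? "", "base64").toString("utf8"));
}

main().catch(console.error);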

What would be my first step for debugging SSH issues after something like this?

Thanks ahead of time
