
Getting an error while deploying with Capistrano

DEBUG [aaaad896] Command: cd /home/dev/PROJECT-NAME/current && ( export RAILS_ENV="production" ; ~/.rvm/bin/rvm default do bundle exec unicorn -c /home/dev/PROJECT-NAME/current/config/unicorn.rb -E deployment -D  )
DEBUG [aaaad896]    master failed to start, check stderr log for details
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as dev@XX.XXX.XXX.XX: bundle exit status: 1
bundle stdout: Nothing written
bundle stderr: master failed to start, check stderr log for details

Other log:

Errno::EADDRINUSE: Address already in use - bind(2) for 0.0.0.0:8080
  /home/dev/PROJECT-NAME/shared/bundle/ruby/2.3.0/gems/unicorn-5.1.0/lib/unicorn/socket_helper.rb:149:in `bind'

Relevant config files: unicorn.rb, deploy.rb, and the nginx site config (nginx/sites-enabled/default).

I get this error every time Unicorn is restarted by Capistrano. How can I fix it?

RoR Developer

4 Answers


The problem is that another service is already listening on port 8080; that's exactly what your log says. On Linux you can check which process it is with lsof -i :8080, which will tell you what is using that port. If you can kill that process, do so; if you can't, change the port in your config files.
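For example, a quick check-and-kill sequence (the PID below is hypothetical; use whatever lsof actually prints):

    lsof -i :8080   # show the process bound to port 8080
    kill 12345      # stop it, using the PID from the lsof output

If the process can't be killed, the alternative is to move Unicorn to a free port, e.g. change listen 8080 to listen 8081 in config/unicorn.rb and point the nginx upstream at the same port.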

Allam Matsubara
  • After deployment I made a few changes to my code and deployed again with "cap production deploy" (the server was already running), but then I get this error because the port is already in use by Unicorn. I want Unicorn to restart when I run cap production deploy. – RoR Developer May 10 '17 at 14:42
  • There are several options for restarting Unicorn after you cap deploy. You can search Google for them, but do any of these [answers](http://stackoverflow.com/questions/19896800/starting-or-restarting-unicorn-with-capistrano-3-x) help you? One of those approaches is sketched below. – Allam Matsubara May 10 '17 at 15:00
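Building on that comment thread, here is a minimal sketch of a Capistrano 3 restart task. The PID-file path and the hook point are assumptions; adjust them to whatever your unicorn.rb actually configures.

    # config/deploy.rb -- hypothetical restart task; assumes the master
    # writes its PID to shared/tmp/pids/unicorn.pid
    namespace :unicorn do
      desc 'Restart Unicorn, or start it if no master is running'
      task :restart do
        on roles(:app) do
          pid = "#{shared_path}/tmp/pids/unicorn.pid"
          if test("[ -f #{pid} ]") && test("kill -0 $(cat #{pid})")
            # USR2 re-execs the master on the new release's code
            execute :kill, "-USR2 $(cat #{pid})"
          else
            within current_path do
              with rails_env: fetch(:rails_env, 'production') do
                execute :bundle, :exec, :unicorn,
                        '-c', "#{current_path}/config/unicorn.rb",
                        '-E', 'deployment', '-D'
              end
            end
          end
        end
      end
    end

    after 'deploy:publishing', 'unicorn:restart'

Note that USR2 alone leaves the old master running until it receives QUIT; setups that rely on USR2 usually pair it with a before_fork hook in unicorn.rb that sends QUIT to the old master once the new one is up.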

To kill the processes that are already running, first list them with

ps aux | grep unicorn

which will list something like

deployer  3807  2.8  9.3 369964 94996 ?        Sl   12:51   0:03 unicorn master -D -c /home/deployer/apps/cb_app/current/config/unicorn.rb -E staging

deployer  3816  0.0  8.5 369964 87040 ?        Sl   12:51   0:00 unicorn worker[0] -D -c /home/deployer/apps/cb_app/current/config/unicorn.rb -E staging

deployer  3818  0.0  8.5 369964 87200 ?        Sl   12:51   0:00 unicorn worker[1] -D -c /home/deployer/apps/cb_app/current/config/unicorn.rb -E staging

You can then kill them with

kill 3807

Now try your deploy again and see if the master starts.
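Two variations on the same idea: kill sends TERM by default, which Unicorn treats as a quick shutdown, while QUIT tells the master to let workers finish their in-flight requests first. And if you'd rather not copy PIDs by hand, pkill can match the process name directly (this assumes only one Unicorn master is running on the machine):

    kill -QUIT 3807            # graceful: workers finish current requests
    pkill -f 'unicorn master'  # or match the master process by name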

bryanus

Sometimes it's just about permissions: when Unicorn restarts, it needs to write to its logs. Check the permissions on the log/ directory and make it writable for all.
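For instance (the path is an assumption based on the deploy layout in the question):

    ls -ld /home/dev/PROJECT-NAME/shared/log         # check owner and mode
    chmod -R a+w /home/dev/PROJECT-NAME/shared/log   # make it writable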

Giggs

I faced the same issue and yes, it's about permissions. Earlier, while the GitLab Docker container was down, I had changed the permissions of the log and data directories (which are mounted volumes) on the local machine. Because the permissions had changed, the GitLab container couldn't write anything at boot, hence this error. After I reverted my changes, it worked fine for me.

Jagdish0886