Note: I don't have any real experience managing servers or using Linux at any deep level, so my knowledge and understanding are quite limited. In essence, I'm winging it.
For full code examples see: https://github.com/Integralist/Vagrant-Examples/tree/master/nodejs
This is a two-part issue:

- not being able to mount my shared directory
- `systemd` services not being available
I'm trying to create a service that starts up a NodeJS application, but it looks like `systemctl` isn't available on the version of Ubuntu I've installed (https://vagrantcloud.com/ubuntu/trusty64).

Here is my `Vagrantfile`:
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"

  # Working around the "stdin: is not a tty" error, which appears when provisioning
  # config.ssh.pty = true

  config.vm.network :forwarded_port, guest: 80, host: 3000, auto_correct: true

  # We use Vagrant to create the new "web" group/owner for us
  # But we could have done this manually as part of our provisioning script
  #
  # useradd -mrU web
  # chown web /var/www
  # chgrp web /var/www
  # cd /var/www/
  # su web
  # git clone {code}
  config.vm.synced_folder "./", "/var/www", create: true, group: "web", owner: "web"

  config.vm.provision "shell" do |s|
    s.path = "provision/setup.sh"
  end
end
```
Below is the content of my `setup.sh` provisioning script, which creates the `.service` file:
```shell
su root

mkdir -p /var/www

cat << 'EOF' > /etc/systemd/system/our-node-app.service
[Service]
WorkingDirectory=/var/www
ExecStart=/usr/bin/nodejs boot.js
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=some-identifier-here-typically-matching-workingdirectory
User=web
Group=web
Environment='NODE_ENV=production'

[Install]
WantedBy=multi-user.target
EOF
```
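One detail worth calling out in the script above: the heredoc delimiter is quoted (`'EOF'`), which stops the shell from expanding variables, so `$MAINPID` reaches the unit file literally for systemd to resolve at runtime. A small sketch of the difference (writing to throwaway files under `/tmp`):

```shell
# Quoted delimiter: no expansion, $MAINPID survives verbatim.
cat << 'EOF' > /tmp/demo-quoted
ExecReload=/bin/kill -HUP $MAINPID
EOF

# Unquoted delimiter: the shell expands $MAINPID (empty here),
# which would silently break the unit file.
cat << EOF > /tmp/demo-unquoted
ExecReload=/bin/kill -HUP $MAINPID
EOF

echo "quoted:   $(cat /tmp/demo-quoted)"
echo "unquoted: $(cat /tmp/demo-unquoted)"
```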
But when I run a `vagrant up` I get the following error output:
```
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: nodejs_default_1407743897168_39018
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 80 => 3000 (adapter 1)
    default: 22 => 2200 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Mounting shared folders...
    default: /var/www => /Users/markmcdonnell/Box Sync/Library/Vagrant/nodejs

Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:

mount -t vboxsf -o uid=`id -u web`,gid=`getent group web | cut -d: -f3` var_www /var/www
mount -t vboxsf -o uid=`id -u web`,gid=`id -g web` var_www /var/www
```
So my first problem is that I can't seem to mount my shared folder.
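One thing I wondered about (this is an assumption on my part about ordering): Vagrant mounts synced folders before shell provisioners run, and the mount command shown resolves the `web` user's uid/gid via `id` and `getent`. If that user doesn't exist yet at mount time, those lookups fail and the mount presumably fails with them. A quick sketch of the lookup the mount options rely on, using `root` as a name that exists and a made-up name that doesn't:

```shell
# The gid lookup from the failing mount command, for a group that exists:
getent group root | cut -d: -f3

# The same lookup for a group that doesn't exist fails outright,
# which would leave the gid= mount option empty:
getent group no-such-group >/dev/null 2>&1 || echo "lookup failed"
```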
Also, originally in my provisioning script (after creating the `our-node-app.service` file) I would have the following:
```shell
systemctl enable our-node-app
systemctl start our-node-app
systemctl status our-node-app
journalctl -u node-sample # logs
```
If I add that back into my provisioning script and then run `vagrant provision --provision-with shell`, I'll get the following output:
```
==> default: Running provisioner: shell...
    default: Running: /var/folders/n0/jlvkmj5n36vc0932b_1t0kxh0000gn/T/vagrant-shell20140811-58128-fa27fk.sh
==> default: stdin: is not a tty
==> default: /tmp/vagrant-shell: line 25: systemctl: command not found
==> default: /tmp/vagrant-shell: line 26: systemctl: command not found
==> default: /tmp/vagrant-shell: line 27: systemctl: command not found
==> default: /tmp/vagrant-shell: line 28: journalctl: command not found

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

chmod +x /tmp/vagrant-shell && /tmp/vagrant-shell

Stdout from the command:

Stderr from the command:

stdin: is not a tty
/tmp/vagrant-shell: line 25: systemctl: command not found
/tmp/vagrant-shell: line 26: systemctl: command not found
/tmp/vagrant-shell: line 27: systemctl: command not found
/tmp/vagrant-shell: line 28: journalctl: command not found
```
This is where I discovered the issue with the `systemctl` command not being available.
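As an aside, for anyone else debugging this: a quick way I found to check which init system a box is actually running (hedging here, as the `/run/systemd/system` path is what I believe modern systemd exposes; on Ubuntu 14.04 PID 1 should be Upstart's `init`):

```shell
# Print the name of PID 1 -- "systemd" on a systemd machine,
# "init" on an Upstart (or SysV init) one such as Ubuntu 14.04.
cat /proc/1/comm

# systemd also advertises itself via this runtime directory.
if [ -d /run/systemd/system ]; then
  echo "running under systemd"
else
  echo "not running under systemd"
fi
```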
I also tried modifying the provisioning script so that instead of...
```shell
systemctl enable our-node-app
systemctl start our-node-app
systemctl status our-node-app
journalctl -u node-sample # logs
```
...I would use...
```shell
service our-node-app start
service --status-all | grep 'node'
```
This was because I had read somewhere that Ubuntu doesn't support `systemd` and instead uses something called `upstart` to boot all its services. At the time I assumed I could just swap in the other command and keep the script itself the same (it seems that is not the case).
But all that change did was demonstrate that my service wasn't recognised:
```
==> default: Running provisioner: shell...
    default: Running: /var/folders/n0/jlvkmj5n36vc0932b_1t0kxh0000gn/T/vagrant-shell20140811-58428-iot9kx.sh
==> default: stdin: is not a tty
==> default: our-node-app: unrecognized service
==> default:  [ ? ]  apport
==> default:  [ ? ]  console-setup
==> default:  [ ? ]  cryptdisks
==> default:  [ ? ]  cryptdisks-early
==> default:  [ ? ]  dns-clean
==> default:  [ ? ]  irqbalance
==> default:  [ ? ]  killprocs
==> default:  [ ? ]  kmod
==> default:  [ ? ]  networking
==> default:  [ ? ]  ondemand
==> default:  [ ? ]  open-vm-tools
==> default:  [ ? ]  pppd-dns
==> default:  [ ? ]  rc.local
==> default:  [ ? ]  screen-cleanup
==> default:  [ ? ]  sendsigs
==> default:  [ ? ]  umountfs
==> default:  [ ? ]  umountnfs.sh
==> default:  [ ? ]  umountroot
==> default:  [ ? ]  virtualbox-guest-x11

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

chmod +x /tmp/vagrant-shell && /tmp/vagrant-shell

Stdout from the command:

Stderr from the command:

stdin: is not a tty
our-node-app: unrecognized service
 [ ? ]  apport
 [ ? ]  console-setup
 [ ? ]  cryptdisks
 [ ? ]  cryptdisks-early
 [ ? ]  dns-clean
 [ ? ]  irqbalance
 [ ? ]  killprocs
 [ ? ]  kmod
 [ ? ]  networking
 [ ? ]  ondemand
 [ ? ]  open-vm-tools
 [ ? ]  pppd-dns
 [ ? ]  rc.local
 [ ? ]  screen-cleanup
 [ ? ]  sendsigs
 [ ? ]  umountfs
 [ ? ]  umountnfs.sh
 [ ? ]  umountroot
 [ ? ]  virtualbox-guest-x11
```
I then discovered that Ubuntu is going to move to the `systemd` format after all: but this was announced back in February 2014, so I would have thought the latest Ubuntu would have switched over by now (or is that just me not appreciating how long a change like that can take?).
Thinking I would have to use this Upstart format, I had started to read through this, but sadly I wasn't able to work out how to convert my `systemd` script into Upstart's format.
This leaves me with the question: has anyone else here had this problem, and if so how did they resolve it (did you switch to a different Ubuntu release that supports `systemd`, or rewrite your service to use the Upstart format)?

Do you have any advice (or good resources) on how to convert a `systemd` script over to the Upstart format?
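In case it helps frame the question, here is my rough (entirely unverified) attempt at what I think the Upstart equivalent might look like, saved as e.g. `/etc/init/our-node-app.conf`; the stanza names are taken from Upstart's documentation, and I may well be mapping them wrongly:

```
# /etc/init/our-node-app.conf -- rough, unverified Upstart sketch
description "our-node-app"

start on runlevel [2345]
stop on runlevel [!2345]

# roughly Restart=always
respawn

# roughly User=/Group= (needs Upstart 1.4+)
setuid web
setgid web

# roughly Environment= and WorkingDirectory=
env NODE_ENV=production
chdir /var/www

exec /usr/bin/nodejs boot.js
```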
Any help on this subject would be appreciated; as I mentioned at the start, I'm not a system/server ops guy, so I'm winging it here.
Thanks.
Update
I found this and it seems I misunderstood the difference between `systemd`, `init.d` and `upstart`. So `systemd` is a brand new system that improves upon `init.d` and `upstart`.
The article linked to explains how to install `systemd` alongside `upstart` and then switch over to `systemd`, but I'm still getting an error trying to mount the VM.
I've updated my repo code.