I've set up a Puma server for a Rails app and an nginx upstream directive, but it's not routing correctly.
I'm on Ubuntu 22.10:
root@kapelner-gradesly:~/deploy/gradeportal/current# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.10
Release: 22.10
Codename: kinetic
and Ruby 3.2.0:
root@kapelner-gradesly:~/deploy/gradeportal/current# ruby -v
ruby 3.2.0 (2022-12-25 revision a528908271) [x86_64-linux]
I deploy using git push and Capistrano, via cap production deploy. The relevant gems from my Gemfile.lock file:
...
    capistrano (3.17.3)
      airbrussh (>= 1.0.0)
      i18n
      rake (>= 10.0.0)
      sshkit (>= 1.9.0)
    capistrano-bundler (2.1.0)
      capistrano (~> 3.1)
    capistrano-rails (1.6.3)
      capistrano (~> 3.1)
      capistrano-bundler (>= 1.1, < 3)
    capistrano-rvm (0.1.2)
      capistrano (~> 3.0)
      sshkit (~> 1.2)
...
    puma (6.3.0)
      nio4r (~> 2.0)
    racc (1.7.1)
    rack (2.2.7)
    rack-cors (2.0.1)
      rack (>= 2.0.0)
    rack-test (2.1.0)
      rack (>= 1.3)
    rack-timeout (0.6.3)
    rails (7.0.6)
      actioncable (= 7.0.6)
      actionmailbox (= 7.0.6)
      actionmailer (= 7.0.6)
      actionpack (= 7.0.6)
      actiontext (= 7.0.6)
      actionview (= 7.0.6)
      activejob (= 7.0.6)
      activemodel (= 7.0.6)
      activerecord (= 7.0.6)
      activestorage (= 7.0.6)
      activesupport (= 7.0.6)
      bundler (>= 1.15.0)
      railties (= 7.0.6)
...

BUNDLED WITH
   2.3.21
My Puma config file is below:
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

if rails_env == "production"
  # Change to match your CPU core count
  workers 2
  # Min and Max threads per worker
  threads 1, 2

  app_dir = File.expand_path("../..", __FILE__)
  shared_dir = "/root/deploy/gradeportal/shared"

  # Set up socket location
  bind 'tcp://0.0.0.0:9292'
  #bind "unix://#{shared_dir}/sockets/puma.sock"

  # Logging
  stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true

  # Set master PID and state locations
  pidfile "#{shared_dir}/pids/puma.pid"
  state_path "#{shared_dir}/pids/puma.state"
  activate_control_app

  on_worker_boot do
    require "active_record"
    begin
      ActiveRecord::Base.connection.disconnect!
    rescue ActiveRecord::ConnectionNotEstablished
      # no existing connection to disconnect
    end
    ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
  end
end
and I have Puma set up as a systemd service:
root@kapelner-gradesly:~# cat /etc/systemd/system/puma.service
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=notify
WatchdogSec=10
User=root
WorkingDirectory=/root/deploy/gradeportal/current
Environment=RAILS_ENV=production
ExecStart=/usr/share/rvm/gems/ruby-3.2.0/wrappers/puma -C /root/deploy/gradeportal/current/config/puma.rb
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
The puma service is running fine:
root@kapelner-gradesly:~# sudo systemctl status puma
● puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; preset: enabled)
Active: active (running) since Tue 2023-07-18 11:54:29 EDT; 12min ago
Main PID: 443710 (ruby)
Status: "Puma 6.3.0: cluster: 2/2, worker_status: [{ 1/2 threads, 2 available, 0 backlog },{ 1/2 threads, 2 available, >
Tasks: 26 (limit: 2308)
Memory: 303.3M
CPU: 6.930s
CGroup: /system.slice/puma.service
├─443710 "puma 6.3.0 (tcp://0.0.0.0:9292) [20230714225250]"
├─443717 "puma: cluster worker 0: 443710 [20230714225250]"
└─443718 "puma: cluster worker 1: 443710 [20230714225250]"
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Puma version: 6.3.0 (ruby 3.2.0-p0) ("Mugi No Toki Itaru")
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Min threads: 1
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Max threads: 2
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Environment: production
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Master PID: 443710
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Workers: 2
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Restarts: (✔) hot (✔) phased
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] * Listening on http://0.0.0.0:9292
Jul 18 11:54:24 kapelner-gradesly puma[443710]: [443710] Use Ctrl-C to stop
Jul 18 11:54:29 kapelner-gradesly systemd[1]: Started Puma HTTP Server.
And the port is listening; it's open via sudo ufw allow 9292/tcp (I'll close it later, I just opened it to debug):
root@kapelner-gradesly:~# sudo lsof -i -P -n | grep LISTEN
systemd 1 root 60u IPv6 19157 0t0 TCP *:22 (LISTEN)
systemd-r 563 systemd-resolve 14u IPv4 18840 0t0 TCP 127.0.0.53:53 (LISTEN)
systemd-r 563 systemd-resolve 16u IPv4 18842 0t0 TCP 127.0.0.54:53 (LISTEN)
mysqld 725 mysql 21u IPv4 19934 0t0 TCP 127.0.0.1:33060 (LISTEN)
mysqld 725 mysql 23u IPv4 20046 0t0 TCP 127.0.0.1:3306 (LISTEN)
sshd 879 root 3u IPv6 19157 0t0 TCP *:22 (LISTEN)
nginx 405901 root 8u IPv4 2177079 0t0 TCP *:80 (LISTEN)
nginx 405901 root 9u IPv4 2177080 0t0 TCP *:443 (LISTEN)
nginx 405902 nobody 8u IPv4 2177079 0t0 TCP *:80 (LISTEN)
nginx 405902 nobody 9u IPv4 2177080 0t0 TCP *:443 (LISTEN)
nginx 405903 nobody 8u IPv4 2177079 0t0 TCP *:80 (LISTEN)
nginx 405903 nobody 9u IPv4 2177080 0t0 TCP *:443 (LISTEN)
nginx 405904 nobody 8u IPv4 2177079 0t0 TCP *:80 (LISTEN)
nginx 405904 nobody 9u IPv4 2177080 0t0 TCP *:443 (LISTEN)
nginx 405905 nobody 8u IPv4 2177079 0t0 TCP *:80 (LISTEN)
nginx 405905 nobody 9u IPv4 2177080 0t0 TCP *:443 (LISTEN)
ruby 443710 root 5u IPv4 2999047 0t0 TCP *:9292 (LISTEN)
ruby 443717 root 5u IPv4 2999047 0t0 TCP *:9292 (LISTEN)
ruby 443718 root 5u IPv4 2999047 0t0 TCP *:9292 (LISTEN)
I can go to kapelner.com:9292
and the page serves without errors (save a few asset files and whatnot, which is expected since they're configured to be served through nginx,
as you'll see below). Puma and Rails both log appropriately, to the following log files:
root@kapelner-gradesly:~/deploy/gradeportal/current/log# ls -lh
total 45M
-rw-rw-r-- 1 root root 0 Jul 18 11:52 development.log
-rw-rw-r-- 1 root root 43M Jul 18 11:55 production.log
-rw-r--r-- 1 root root 377 Jul 18 11:54 puma.stderr.log
-rw-r--r-- 1 root root 2.2M Jul 18 11:54 puma.stdout.log
Now onto the problem...
I've got nginx v1.22:
root@kapelner-gradesly:~/deploy/gradeportal/current# nginx -version
nginx version: nginx/1.22.0 (Ubuntu)
also running as a systemd service:
root@kapelner-gradesly:~# cat /usr/lib/systemd/system/nginx.service
[Unit]
Description=A high performance web server and a reverse proxy server
Documentation=man:nginx(8)
After=network.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed
[Install]
WantedBy=multi-user.target
which is running fine:
root@kapelner-gradesly:~# sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; preset: enabled)
Active: active (running) since Fri 2023-07-14 18:54:59 EDT; 3 days ago
Docs: man:nginx(8)
Process: 405899 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 405900 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 405901 (nginx)
Tasks: 5 (limit: 2308)
Memory: 7.4M
CPU: 5.207s
CGroup: /system.slice/nginx.service
├─405901 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
├─405902 "nginx: worker process"
├─405903 "nginx: worker process"
├─405904 "nginx: worker process"
└─405905 "nginx: worker process"
Jul 14 18:54:59 kapelner-gradesly systemd[1]: nginx.service: Deactivated successfully.
Jul 14 18:54:59 kapelner-gradesly systemd[1]: Stopped A high performance web server and a reverse proxy server.
Jul 14 18:54:59 kapelner-gradesly systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 14 18:54:59 kapelner-gradesly systemd[1]: Started A high performance web server and a reverse proxy server.
and the configuration file checks out:
root@kapelner-gradesly:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
My nginx configuration is below:
root@kapelner-gradesly:~# cat /etc/nginx/nginx.conf
worker_processes 4;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;
    types_hash_max_size 2048;

    gzip off;
    gzip_vary off;

    upstream app {
        # Path to Puma SOCK file, as defined previously
        #server unix:///root/deploy/gradeportal/shared/sockets/puma.sock fail_timeout=0;
        server 0.0.0.0:9292;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
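One thing I'm second-guessing here: Puma is listening on plain HTTP (it isn't terminating TLS), so I'd expect the proxy_pass in my location @app block further down to use the http:// scheme rather than https://. A minimal sketch of how I believe the upstream and proxy_pass are meant to pair up (same names and paths as in my config above; I'm not certain this is right):

```nginx
upstream app {
    # TCP variant (what I'm currently running):
    server 127.0.0.1:9292;
    # ...or the socket variant I tried first:
    #server unix:/root/deploy/gradeportal/shared/sockets/puma.sock fail_timeout=0;
}

# and then, inside the TLS-terminating server block:
#     location @app {
#         proxy_pass http://app;  # http, not https: nginx terminates TLS, Puma speaks plain HTTP
#     }
```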
As you can see, I tried the "sock" method before, also with no luck. Here's my additional configuration file:
root@kapelner-gradesly:~# cat /etc/nginx/conf.d/kapelner.conf
server {
    listen 80;
    server_name kapelner.com *.kapelner.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

    location @app {
        proxy_pass https://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }

    root /root/deploy/gradeportal/current/public;
    server_name kapelner.com *.kapelner.com;
    #index index.htm index.html;
    try_files $uri/index.html $uri @app;

    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/kapelner.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/kapelner.com/privkey.pem; # managed by Certbot

    access_log /var/log/nginx/kapelner_access.log;
    error_log /var/log/nginx/kapelner_error.log;

    location / {
        rewrite ^/assets(.*)$ /assets$1 break;
        rewrite ^(.*)$ /kapelner$1 break;
    }
}
I've also tried commenting out the location block, with no luck either.
The problem is that I get a "404 Not Found" error when visiting kapelner.com.
When looking at the nginx logs I find:
root@kapelner-gradesly:/var/log/nginx# cat /var/log/nginx/kapelner_access.log
...
72.226.69.230 - - [18/Jul/2023:12:23:22 -0400] "GET / HTTP/1.1" 404 564 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36"
and the corresponding entry in the error log is:
root@kapelner-gradesly:~# cat /var/log/nginx/kapelner_error.log
...
2023/07/18 12:29:22 [error] 444642#444642: *16 "/root/deploy/gradeportal/current/public/kapelner/index.html" is not found (2: No such file or directory), client: 72.226.69.230, server: kapelner.com, request: "GET / HTTP/1.1", host: "kapelner.com"
which I interpret to mean that the request is never being proxied to the Puma upstream.
Anyone know what I'm doing wrong?
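Incidentally, the missing path in the error lines up exactly with what the second rewrite rule does to the request URI. Simulating just the regex substitution from rewrite ^(.*)$ /kapelner$1 break; with sed:

```shell
# Apply the same pattern/replacement as the nginx rewrite to the URI "/"
echo "/" | sed -E 's|^(.*)$|/kapelner\1|'
# prints: /kapelner/
```

Appending the implicit index.html under the document root gives exactly the file nginx says it can't find, so the rewrite is definitely firing before any proxying happens.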
EDIT:
I've revamped my config file to:
root@kapelner-gradesly:~# cat /etc/nginx/conf.d/kapelner.conf
server {
    listen 80;
    server_name kapelner.com *.kapelner.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

    #location @app {
    #    proxy_pass https://app;
    #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #    proxy_set_header Host $http_host;
    #    proxy_redirect off;
    #}

    root /root/deploy/gradeportal/current/public;
    server_name kapelner.com *.kapelner.com;
    #index index.htm index.html;
    try_files $uri @app;

    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/kapelner.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/kapelner.com/privkey.pem; # managed by Certbot

    access_log /var/log/nginx/kapelner_access.log;
    error_log /var/log/nginx/kapelner_error.log;

    location / {
        rewrite ^/assets(.*)$ /assets$1 break;
        rewrite ^(.*)$ /kapelner$1 break;
    }
}
and restarted nginx. Then I still get the same error message:
root@kapelner-gradesly:~# cat /var/log/nginx/kapelner_error.log
...
2023/07/18 13:40:32 [error] 445188#445188: *2 "/root/deploy/gradeportal/current/public/kapelner/index.html" is not found (2: No such file or directory), client: 72.226.69.230, server: kapelner.com, request: "GET / HTTP/1.1", host: "kapelner.com"
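If I'm reading the nginx request-processing docs correctly, my best guess at why @app (and hence Puma) is never consulted is the following (annotated as config comments; I could be wrong, especially about step 4):

```nginx
# My mental model of "GET /" under the config above:
#   1. the ssl server block matches, and so does "location /"
#   2. rewrite ^(.*)$ /kapelner$1 break;   => URI becomes /kapelner/
#   3. "break" stops rewrite processing inside the location, and nginx serves
#      the rewritten URI as a static file under root:
#      /root/deploy/gradeportal/current/public/kapelner/index.html  => 404
#   4. so the server-level try_files never runs, and @app is never reached
```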
Maybe there is something else in the other included conf files. Here's the Certbot SSL options file:
root@kapelner-gradesly:~# cat /etc/letsencrypt/options-ssl-nginx.conf
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-<cipher continues, no need to show it>-SHA384";
There's nothing else in the conf.d
folder:
root@kapelner-gradesly:~# cd /etc/nginx/conf.d
root@kapelner-gradesly:/etc/nginx/conf.d# ls -lh
total 4.0K
-rw-r--r-- 1 root root 1.1K Jul 18 13:39 kapelner.conf
and nothing in the sites-enabled
directory:
root@kapelner-gradesly:/etc/nginx# cd sites-enabled/
root@kapelner-gradesly:/etc/nginx/sites-enabled# ls -lh
total 0
This is super strange. Anything else I can try?