13

I manage servers where users have their own websites, which they can access over FTP (like a hosting company). Instead of working on isolating the LAMP stack processes, I was wondering whether I could use Docker, with one image per website.

From what I understand, you expose a Docker container via its ports, so if you run two containers on the same server, you have to expose two different ports.

But is it possible to route not by port, but by server name, like:

  • www.somewebsite.com : Docker instance 1
  • www.otherwebsite.com : Docker instance 2
  • www.etc.com : Docker instance ...

And all of that on the same server.

I thought about installing only Apache on the host, which would redirect requests to the dedicated Docker container based on the server name, but then I would have to install Apache (again!) and MySQL in every container.

Is this possible, and moreover, is it interesting in terms of performance (or not at all)?

Thank you for your help.

Cyril N.
  • 1
    Theoretically it is possible, Apache would do a ProxyPass towards the port each Docker instance is listening on. – thanasisk Sep 29 '14 at 11:34

4 Answers

13

Yes, it is possible. What you need to do is provide several port-80 endpoints, one for each URL. You can do this using, e.g., Apache virtual hosts on the Docker host server.

  1. Set a DNS CNAME for each site.
  2. Run the Docker containers and map their port 80 to ports, say, 12345–12347 on the Docker host.
  3. Run an Apache server on the Docker host, and set up a virtual host for each URL with ProxyPass and ProxyPassReverse pointing to localhost:12345 (and so on), i.e. one of your Docker containers.
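Step 2 could be done along these lines (image and container names are hypothetical; each container's port 80 is published on a distinct host port):

```shell
# Hypothetical images; publish each container's port 80 on its own host port
docker run -d --name somewebsite  -p 12345:80 somewebsite-image
docker run -d --name otherwebsite -p 12346:80 otherwebsite-image
docker run -d --name etc          -p 12347:80 etc-image
```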

The Apache config file will look like this:

<VirtualHost *:80>
  ServerName www.somewebsite.com
  <Proxy *>
    Allow from localhost
  </Proxy>
  ProxyPass        / http://local.hostname.ofDockerHost:12345/
  ProxyPassReverse / http://local.hostname.ofDockerHost:12345/
</VirtualHost>
Jihun
  • 4
    Thanks! This helped a lot. Also, there's `ProxyPreserveHost On`, so you don't end up with a lot of links to http://local.hostname.ofDockerHost:12345/ inside your web site. Here's more info that was helpful to me: https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension – tiangolo Sep 08 '15 at 21:07
  • Will docker save changes to database etc? – EminezArtus Jan 18 '18 at 04:39
3

It is possible. You can use Apache (or better yet, HAProxy, nginx, or Varnish, which may be more efficient than Apache for that pure redirection task) on the main server, to redirect to the Apache port of each container.
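As a sketch of that front-end approach with nginx on the host (server names and the backend port are placeholders, assuming each container's Apache is mapped to a distinct host port):

```nginx
# Hypothetical nginx server block on the Docker host; one per site
server {
    listen 80;
    server_name www.somewebsite.com;

    location / {
        proxy_pass http://127.0.0.1:12345;   # host port mapped to the container's port 80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```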

But, depending on the sites you run there (and their Apache configurations), it may require far more memory than a single central Apache with virtual hosts, especially if you use modules (e.g. PHP) that require a lot of RAM.

gmuslera
  • Thank you for your answer. Indeed, the "hosting" service I will provide include things like Prestashop, Wordpress, etc, so, based a lot on PHP and heavy engines (I'm talking more about Prestashop here). – Cyril N. Oct 08 '14 at 11:45
  • 2
    Would a Dockerized virtual hosting system be better modularized by separating PHP into its own Docker container(s) and making the Apache container(s) use that container for PHP processing? Would the same apply for databases? E.g. have the host proxy traffic to Apache containers (which contain user websites), which in turn send all PHP processing to a PHP container and database reads/writes to a MySQL container? Or would the PHP be any less resource-hungry this way? Would PHP-FPM, suPHP or similar provide the same kind of setup in a non-Docker environment? – ojrask Jan 29 '15 at 13:04
  • PHP-FPM in a container would at least be a bit redundant filespace-wise: https://code.google.com/p/sna/wiki/NginxWithPHPFPM The Apache/Nginx installation needs to copy the PHP files over to the PHP-FPM container in order for this system to work. Would a mounted shared data container solve this problem? – ojrask Jan 29 '15 at 13:20
  • If you need to share data (i.e. the PHP files) between containers, volumes are the way to go; you can mount them from other containers (even have dedicated data ones) or the real filesystem. The Apache module used to be the fastest way to run PHP code; having one container just for PHP, not static files, and an upper layer to deliver the static/cacheable content (i.e. Varnish) could be a good combo. – gmuslera Jan 29 '15 at 14:03
3

I know this has already been answered; however, I wanted to take it one step further and show an example of how this could be done, to provide a more complete answer.

Please see my Docker image here, with instructions on how to use it; it shows how to configure two sites: https://hub.docker.com/r/vect0r/httpd-proxy/

As Jihun said, you will have to make sure your vhost configuration is set. My example uses port 80 to display the test site example.com, and port 81 to display the test site example2.com. It is also important to note that you will need to copy your content and expose the required ports in your Dockerfile, like so:

FROM centos:latest
MAINTAINER vect0r
LABEL Vendor="CentOS"

RUN yum -y update && yum clean all
RUN yum -y install httpd && yum clean all

EXPOSE 80 81

# Simple startup script to avoid some issues observed with container restart
ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

# Copy config files across
COPY ./httpd.conf /etc/httpd/conf/httpd.conf
COPY ./example.com /var/www/example.com
COPY ./example2.com /var/www/example2.com
COPY ./sites-available /etc/httpd/sites-available
COPY ./sites-enabled /etc/httpd/sites-enabled

CMD ["/run-httpd.sh"]
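
For completeness, building and running this image might look like the following (tag and container name are illustrative):

```shell
# Build from the directory containing the Dockerfile, then publish both ports
docker build -t httpd-proxy .
docker run -d --name httpd-proxy -p 80:80 -p 81:81 httpd-proxy
```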

Hope this helps explain the process a little more. Please feel free to ask me any further questions on this; happy to help.

Regards,

V

Vect0r
  • I've also uploaded the files used to make this image on github; https://github.com/V3ckt0r/docker-httpd-proxy – Vect0r Nov 10 '15 at 20:37
1

In my case I needed to add SSLProxyEngine On, ProxyPreserveHost On and RequestHeader set Front-End-Https "On" to my Apache 2.4 vhost file, because I wanted to enable SSL on the Docker container. About local.hostname.ofDockerHost: in my case, the name of the host server running the Docker container was lucas, and the port mapped to port 443 of the container was 1443 (because port 443 was already in use by Apache on the host server), so that line ended up as https://lucas:1443/.

This is the final setup, and it's working just fine!

<VirtualHost *:443>
    # Change to *:80 if no https is required
    ServerName www.somewebsite.com
    <Proxy *>
        Allow from localhost
    </Proxy>
    # Comment out the next two lines if no https is required
    SSLProxyEngine On
    RequestHeader set Front-End-Https "On"
    ProxyPreserveHost    On
    ProxyPass        / http://local.hostname.ofDockerHost:12345/
    ProxyPassReverse / http://local.hostname.ofDockerHost:12345/
</VirtualHost>

Finally, in the Docker container I had to set up the proxy SSL headers. In my case, the container was running nginx and something called omnibus for setting up Ruby apps. I think this can be set up in a plain nginx config file as well. I will write it down as-is, just in case someone finds it helpful:

nginx['redirect_http_to_https'] = true
nginx['proxy_set_headers'] = {
    "Host" => "$http_host",
    "X-Real-IP" => "$remote_addr",
    "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
    "X-Forwarded-Proto" => "https",
    "X-Forwarded-Ssl" => "on"
}
nginx['real_ip_trusted_addresses'] = ['10.0.0.77'] # IP for lucas host
nginx['real_ip_header'] = 'X-Real-IP'
nginx['real_ip_recursive'] = 'on'
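
For reference, if the container ran plain nginx rather than omnibus, the same settings could be expressed directly in an nginx server block; this is a hypothetical sketch (certificate directives omitted):

```nginx
server {
    listen 443 ssl;
    server_name www.somewebsite.com;
    # ssl_certificate / ssl_certificate_key directives go here

    # Trust X-Real-IP coming from the proxying host (lucas)
    set_real_ip_from 10.0.0.77;
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    location / {
        # Forward the same headers the omnibus settings above configure
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Ssl   on;
        # proxy_pass to the application upstream goes here
    }
}
```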

Complete guide for apache, ISP Config, Ubuntu server 16.04 here https://www.howtoforge.com/community/threads/subdomain-or-subfolder-route-requests-to-running-docker-image.73845/#post-347744

razor7