
I'm learning how to use Docker with multiple containers, as opposed to a single one.

I'd like to learn how to call, from container A, a program located on container B.

(For example: I want to be able to call sendmail from my web container, while sendmail and similar programs live in a mailhog container.)

I have this:

docker-compose.yml

web:
  container_name: centosweb
  image: fab/centosweb
  ports:
    - "80:80"
  volumes:
    # Single files
    - ./config/httpd.conf:/etc/httpd/conf/httpd.conf
    # Directories
    - ./vhosts:/var/www/html
    - /Users/fabien/Dropbox/AppData/XAMPP/web/bilingueanglais/public_html:/var/www/html/bilingueanglais
    - ./logs/apache:/etc/httpd/logs # This will include access_log(s) and error_log(s), including PHP errors.
  links:
    - db:3306
    - mailhog:1025
db:
  container_name: centosdb
  image: fab/centosdb
  hostname: db
  volumes:
    # Single files
    - ./config/my.cnf:/etc/my.cnf
    # Directories
    - ./mysqldata:/var/lib/mysql
    - ./logs/mysql:/var/log/mysql
mailhog:
  image: mailhog/mailhog
  hostname: mailhog
  ports:
    - "1025:1025"
    - "8025:8025"

However, right now, web cannot find `/usr/sbin/sendmail`, since it's located in the mailhog container. Trying to use a mail function from PHP produces a `Could not instantiate mail function` error.

How can I link the two?

Fabien Snauwaert
  • I don't know mailhog, and I can't see the advantage of what you are trying to do here. Why not just install sendmail in your web container? I don't think you'll get far with standard images; use them to build your own. Besides that, if you really want to access software in one container from inside another container, you have to go through `docker exec`, which means you need the `docker` client installed in your web container and the socket `/var/run/docker.sock` mounted inside web. – CFrei May 10 '17 at 12:53
  • Using everything in one image and container worked fine, but I'm trying to learn to use Docker Compose and multiple containers, as it seems likely to prove handy down the line, in production. I'm roughly trying to follow the model of the [Twelve-Factor App](https://12factor.net/) (but maybe this is just a case of premature optimization?) My understanding is that, in production (where MailHog and mhsendmail would be replaced with something else), it would make sense to have one container for email, separate from the rest. Bottom line: I'm trying to understand how Docker multi-container setups work. – Fabien Snauwaert May 10 '17 at 13:50
  • Docker simplifies the interface between software, and that means we should communicate between the containers only over the network. If you have only one server, you can do this with `docker exec` or with `docker run --from ...` too, but it always feels like a hack. So if you want to have a separate email service, run a container containing an open SMTP relay only for your services and configure the real gateway in that container. You can then connect PHP directly to that relay (see for example https://github.com/PHPMailer/PHPMailer, but I really don't have much knowledge about PHP ;) ). – CFrei May 10 '17 at 14:04

1 Answer


The containers need to be on the same network; that was my problem.

Rafalfaro
  • could you elaborate on what was required to solve this? – ol'bob dole Feb 11 '20 at 13:02
  • So basically, if you're using docker-compose, you can use the `networks` property to make sure both containers you're trying to link use a specific Docker network. This allows them to connect to each other without IP addresses, just by using the hostname, which by default is the name of the service in docker-compose. I don't use the `links` property. Here's a working example: https://gist.github.com/rafalfaro18/d1811dd42f89ae03194aededafb78a4e — in that case both containers are on a network called 'wordpress-network', so the wordpress container can connect to the db using 'db' as the URL of the database. – Rafalfaro Jun 16 '20 at 23:32
  • No IP addresses are needed if you use the same network (besides the default one). If you don't use networks, Docker will use the default network, which only makes the containers accessible by IP address and not by hostname. – Rafalfaro Jun 16 '20 at 23:35
  • So, for the original question in this post: you can fix the connection issues by removing the `links` property and adding the same custom network to all the containers. – Rafalfaro Jun 16 '20 at 23:37
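Applied to the compose file from the question, the fix described in these comments might look something like this (a sketch only, assuming the version 2+ compose file format with a top-level `services:` key; the network name `appnet` is arbitrary):

```yaml
version: "2"
services:
  web:
    container_name: centosweb
    image: fab/centosweb
    ports:
      - "80:80"
    networks:
      - appnet   # replaces the links: entries
  db:
    container_name: centosdb
    image: fab/centosdb
    networks:
      - appnet
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    networks:
      - appnet

networks:
  appnet:
```

With all three services on `appnet`, the web container can reach MailHog's SMTP listener at hostname `mailhog`, port 1025, and the database at hostname `db`, with no `links` needed. (Volumes are omitted here for brevity; they carry over unchanged.)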