3

I want to set up the configuration below:

  • 1 master and 2 sentinels on host A.
  • 1 slave and 1 sentinel on host B.

So for the master, I have created a Dockerfile like the one below:

FROM redis
COPY redis.conf /etc/redis/redis.conf
COPY sentinel.conf /etc/redis/sentinel.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
CMD [ "redis-sentinel", "/etc/redis/sentinel.conf" ]
CMD [ "redis-sentinel", "/etc/redis/sentinel.conf" ]

Everything looks good: when I run the Docker container it doesn't throw any error. But when I try to connect to the container using redis-cli, I get the error below.

error: Could not connect to Redis at 127.0.0.1:6379: Connection refused

I am not able to understand why it cannot connect. Also, can anyone tell me whether I am creating the Dockerfile the correct way?

Note: I am trying the command below to connect:

docker exec -it rdbcontainer redis-cli
Rahul
  • 326
  • 2
  • 10
  • I just tried connecting to the sentinel by specifying its port and it worked, but I am not sure why I cannot see the other sentinel. Command: docker exec -it rdbcontainer redis-cli -p 26379 – Rahul Aug 09 '20 at 14:11

2 Answers

2

A Dockerfile can have only one effective CMD instruction; if you specify multiple, only the last one is executed. That is why you can access the sentinel but not the Redis server.

If you want to execute multiple commands, use RUN instructions (which run at build time) for setup steps, and reserve CMD for the container's single main process.
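
For example, a minimal sketch of the master's Dockerfile with a single main process (only the file paths from the question are reused):

FROM redis
COPY redis.conf /etc/redis/redis.conf
# One CMD only: this container's sole job is the Redis server
CMD [ "redis-server", "/etc/redis/redis.conf" ]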

But I don't recommend launching sentinel or redis-server that way: Docker containers are very lightweight, and each container should focus on its own process (the CMD). For the sentinels and redis-server, you can create multiple containers on the same host (docker-compose is a potential solution).
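
As a rough sketch of that approach, a docker-compose.yml for host A could look like the following (the service names, file names, and port mappings here are assumptions for illustration; note that each sentinel needs its own config file, since Sentinel rewrites it at runtime):

version: "3"
services:
  redis-master:
    # the Redis server gets its own container
    image: redis
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
    ports:
      - "6379:6379"
  sentinel-1:
    # each sentinel is a separate container
    image: redis
    command: redis-sentinel /etc/redis/sentinel-1.conf
    volumes:
      - ./sentinel-1.conf:/etc/redis/sentinel-1.conf
    ports:
      - "26379:26379"
  sentinel-2:
    image: redis
    command: redis-sentinel /etc/redis/sentinel-2.conf
    volumes:
      - ./sentinel-2.conf:/etc/redis/sentinel-2.conf
    ports:
      - "26380:26379"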

Gawain
  • 1,017
  • 8
  • 17
1

You're stepping into the realm of multi-process containers. For this specific case the recommended way is what @Gawain already stated: one container per Redis process, wrapped together with docker-compose.

But in the corner case where you need to start multiple processes in the same container, this article is an eye-opener. The main topics there are the init process and signal forwarding; like the author, I've had the best experience using s6-overlay.

What I like about this approach is that you can set up s6 so that if any of the monitored processes goes down, the whole container goes down, triggering a restart in a Kubernetes environment. You don't want a container to look healthy from the outside while one of its child processes is failing (this is one of the advantages of the 1-process-per-container mantra that Docker preaches).

Here's an example repo from the same author that starts multiple processes with the aforementioned safety mechanism to take the container down if anything fails.
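
For a rough idea of the pattern, here is a hedged sketch based on the s6-overlay v2 README (not the repo's exact layout; the release URL, version, and paths are assumptions):

FROM redis
# Install s6-overlay, which provides the /init entrypoint and process supervision
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
COPY redis.conf /etc/redis/redis.conf
COPY sentinel.conf /etc/redis/sentinel.conf
# One directory per supervised service, each containing a run script
COPY services.d /etc/services.d
ENTRYPOINT [ "/init" ]

Each service gets a run script, e.g. /etc/services.d/redis/run:

#!/bin/sh
exec redis-server /etc/redis/redis.conf

and, to get the "take the whole container down" behavior, a finish script such as /etc/services.d/redis/finish:

#!/bin/sh
# Tell s6 to bring the supervision tree (and thus the container) down
s6-svscanctl -t /var/run/s6/services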

MGP
  • 2,981
  • 35
  • 34