
Here's my docker-compose file:

version: '3'
services:

  mongo:
    hostname: mongo
    container_name: search_mongo
    image: mongo:latest
    volumes:
      - ./docker/local/persist/mongo:/data/db
      - ./docker/mongo:/opt/mongo
    ports:
      - "8884:27017"
      - "8885:27018"
    entrypoint: /opt/mongo/entrypoint_mongo.sh

  agent:
    build: .
    image: myapp_search:compose
    depends_on:
      - mongo

Here's my entrypoint_mongo.sh:

#!/bin/bash
mongod --port 27018 --replSet rs0 --fork --syslog --smallfiles
mongo --port 27018 --eval "rs.initiate({_id : 'rs0', members : [{_id : 0, host : 'mongo:27018'}]})"
mongo --port 27018 --eval "while(true) {if (rs.status().ok) break;sleep(1000)};"

The issue I am facing is: the mongo container executes all of its steps successfully, but then exits with status 0.

mongo_1          | about to fork child process, waiting until server is ready for connections.
mongo_1          | forked process: 7
mongo_1          | child process started successfully, parent exiting
mongo_1          | MongoDB shell version v3.4.10
mongo_1          | connecting to: mongodb://127.0.0.1:27018/
mongo_1          | MongoDB server version: 3.4.10
mongo_1          | { "ok" : 1 }
mongo_1          | MongoDB shell version v3.4.10
mongo_1          | connecting to: mongodb://127.0.0.1:27018/
mongo_1          | MongoDB server version: 3.4.10
search_mongo exited with code 0
Usha S.M
  • In the *nix world, zero (0) exit status means success. Non-zero is error. – JJussi Dec 07 '17 at 04:39
  • Yes. But I want my mongo container to stay alive instead of exiting successfully! – Usha S.M Dec 07 '17 at 09:28
  • As far as I can see, your script works exactly as you have written it! In `entrypoint_mongo.sh` you first start `mongod` in the background (--fork) and then run two commands, the latter of which exits once the replica set is "up and running". After that there is nothing left in your script, so it ends with the exit status of the last (this time successful) command. – JJussi Dec 07 '17 at 10:00
  • Adding `tail -f /dev/null` at the end of the entrypoint script worked for me to stop the mongo container from exiting with 0! – Usha S.M Dec 07 '17 at 10:33

2 Answers


Your startup script should not initialise or monitor the replica set; those should be manual tasks.

You should bear in mind that:

  • initiating a replica set is strictly a one-off job; once it is initiated, the MongoDB service, when restarted, will continue being part of the same replica set.
  • a replica set normally contains several nodes which should be interchangeable; if each of them tries to initialise the replica set on startup, they will throw errors
  • restarting a service is normal, expected behaviour; for example when you upgrade to the next version of MongoDB, or after patches to your server host require a reboot, or after a power outage
  • if your script tries to initialise an already-initialised replica set each time it starts the MongoDB service, it will throw errors

I strongly recommend that you make three changes:

  1. Let your mongo container just run mongo, without the steps to initiate and monitor the replica set.
  2. If you want to run a replica set, initiate it carefully and in a controlled manual way; ditto if you want to add / remove nodes, or reconfigure.
  3. If you want to monitor the health of your replica set, use a separate tool to do that; let the mongo service just do its ordinary job.
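
For example (a sketch only, reusing the service name, port, and replica-set name from the question): drop the custom entrypoint so the mongo service simply runs mongod --port 27018 --replSet rs0, then initiate the replica set once, by hand, after the container is up:

# One-off, manual initiation from the host (sketch; assumes the compose
# service is called "mongo" and mongod listens on 27018, as in the question):
docker-compose up -d mongo
docker-compose exec mongo mongo --port 27018 --eval \
  "rs.initiate({_id : 'rs0', members : [{_id : 0, host : 'mongo:27018'}]})"

# Check replica set health separately, whenever you need to:
docker-compose exec mongo mongo --port 27018 --eval "rs.status()"

Because rs.initiate() is run exactly once, it never fires again on container restarts, and the service itself stays a plain, long-running mongod process.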
Vince Bowdren

If the last command in your script never exits (i.e. never stops running), your container keeps running. So `while true; do sleep 1; done` as the last line is one fix.

If you want a solution where you can stop the container on demand, you can use something like

while (! test -e /tmp/stop); do sleep 1; done; rm /tmp/stop

So, when you run `touch /tmp/stop` at the command line inside the container, it stops the loop that keeps the container running.
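
Put together, the end of the entrypoint could look like this (a sketch, assuming the script from the question stays otherwise unchanged):

# ... mongod startup and replica set commands as above ...

# Keep the container alive until someone creates /tmp/stop,
# then remove the marker file and let the script (and container) exit.
while (! test -e /tmp/stop); do sleep 1; done
rm /tmp/stop

From the host you could then run docker exec search_mongo touch /tmp/stop (the container name comes from the compose file in the question) to let the container finish cleanly.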

JJussi