
I was working on my OpenShift app today and, without changing anything related to the MongoDB connection, I started getting this message:

/opt/app-root/src/node_modules/mongodb/lib/server.js:242
        process.nextTick(function() { throw err; })
                                      ^
Error: connect EHOSTUNREACH 172.30.173.215:27017
    at Object.exports._errnoException (util.js:1020:11)
    at exports._exceptionWithHostPort (util.js:1043:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1086:14)

npm info lifecycle bolao_2018@0.1.0~start: Failed to exec start script
npm ERR! Linux 3.10.0-693.21.1.el7.x86_64
npm ERR! argv "/opt/rh/rh-nodejs6/root/usr/bin/node" "/opt/rh/rh-nodejs6/root/usr/bin/npm" "run" "-d" "start"
npm ERR! node v6.11.3
npm ERR! npm  v3.10.9
npm ERR! code ELIFECYCLE
npm ERR! bolao_2018@0.1.0 start: `node main`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the bolao_2018@0.1.0 start script 'node main'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the bolao_2018 package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node main
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs bolao_2018
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls bolao_2018
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /opt/app-root/src/npm-debug.log

The one thing that was different: I saw that, for some reason, the mongodb service had attempted and failed a new deployment, so I ran one manually.

I also noticed that the IP address it tries to connect to is mongodb's cluster IP, while the IP of the currently running pod is different.
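For reference, the Service's cluster IP and the pod IP behind it can be compared with the `oc` client (a sketch; `mongodb` is assumed to be the name of the Service, which matches the endpoints output later in this thread):

```shell
# Sketch: compare the Service's cluster IP with the pod IP(s) behind it.
# "mongodb" is the assumed Service name; adjust to match your project.
if command -v oc >/dev/null 2>&1; then
  oc get service mongodb -o wide   # the virtual cluster IP, e.g. 172.30.x.x
  oc get endpoints mongodb         # the pod IP(s) the Service forwards to
  oc get pods -o wide              # actual pod IPs, for comparison
  OC_CHECK=ran
else
  echo "oc CLI not found; commands shown for reference only"
  OC_CHECK=skipped
fi
```

Note that the two IPs differing is expected: the cluster IP is virtual, and traffic sent to it is rewritten by the node's networking rules to one of the pod IPs in the endpoints list.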

Can someone help me figure out what triggered the connection to break?

Thanks

  • Are you using an IP address for the MongoDB in the client configuration, or a hostname? Technically the IP address of a ``Service`` shouldn't change, but it is usually better to use the hostname for the ``Service``, which is the name of the ``Service`` object for the MongoDB instance. – Graham Dumpleton Apr 12 '18 at 06:08
  • I followed the steps in OpenShift's docs, creating an environment variable MONGO_URL that contains: 'mongodb://admin:secret@:27017/sampledb'. I'm not using a fixed IP in my code. – Gabriel Muniz Antonio Apr 12 '18 at 12:06
  • Not sure - but, if the mongodb image was patched with a security update, then the related OpenShift ImageStream resource may automatically schedule new Deployments (to distribute the updates throughout the cluster). If there were ongoing issues with Deployments at the time, this may result in a scenario where the "mongodb service tried and failed a new deploy". Manually issuing a new Deployment should restage the DB and resolve the issue. – ʀɣαɳĵ Apr 12 '18 at 16:29
  • Thanks for the idea, unfortunately it didn't work. I tried redeploying mongo and my application and it still shows the same error – Gabriel Muniz Antonio Apr 12 '18 at 18:33
  • Which Online Starter cluster is this? The status page says us-east-1 is having some issues; it could be related to that. Have you looked at the MongoDB pod logs to verify it is not showing any errors when starting up? Have you tried using ``oc rsh`` to get into the MongoDB pod and use ``curl`` to see if the MongoDB port can be accessed from the same pod? – Graham Dumpleton Apr 13 '18 at 00:08
  • Hey Graham! So I ran `curl $MONGODB_SERVICE_HOST:$MONGODB_SERVICE_PORT` on my mongodb pod and got the same response I got running it on my application pod `Failed connect to 172.30.173.215:27017; No route to host` My cluster is canada central – Gabriel Muniz Antonio Apr 13 '18 at 00:33
  • Use ``localhost:27017`` in the MongoDB pod. I want to verify it is working from inside. – Graham Dumpleton Apr 13 '18 at 00:38
  • Just as a sanity check, if you run ``oc describe pod`` for the MongoDB pod, what does the IP show as? Also what do you get for ``oc get endpoints mongodb``? Is the pod IP in that list? Also run ``oc describe service mongodb`` and verify same IPs listed there. – Graham Dumpleton Apr 13 '18 at 00:41
  • Presuming the service IP mapping might be screwed up somehow, try scaling down MongoDB to no replicas: ``oc scale --replicas=0 mongodb``. Then scale back up to 1 again. – Graham Dumpleton Apr 13 '18 at 00:42
  • So `curl localhost:27017` returns `It looks like you are trying to access MongoDB over HTTP on the native driver port.` `oc describe pod` shows IP 10.130.44.221; `oc get endpoints mongodb` shows `mongodb 10.130.44.221:27017 25d`; `oc describe service mongodb` shows `Name: mongodb  Type: ClusterIP  IP: 172.30.173.215  Port: mongo 27017/TCP  Endpoints: 10.130.44.221:27017  Session Affinity: None` – Gabriel Muniz Antonio Apr 13 '18 at 00:52
  • Scaling down and up didn't work; same error log. – Gabriel Muniz Antonio Apr 13 '18 at 01:02
  • Any other ideas on how to figure this out? I even deleted my whole project and started a new one, without any luck. – Gabriel Muniz Antonio Apr 14 '18 at 02:37
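Pulling the comment thread together, the suggested checks were roughly the following (a sketch; the `mongodb` names and the `dc/` resource prefix are assumptions about this deployment):

```shell
# Sketch of the diagnostics suggested in the comments above.
# Assumes a DeploymentConfig and Service both named "mongodb".
if command -v oc >/dev/null 2>&1; then
  oc rsh dc/mongodb curl localhost:27017   # test from inside the MongoDB pod
  # A running mongod answers with:
  # "It looks like you are trying to access MongoDB over HTTP on the native driver port."

  oc get endpoints mongodb                 # should list the pod IP, e.g. 10.130.44.221:27017
  oc describe service mongodb              # cluster IP plus the same endpoints

  # Recreate the pod in case the Service-to-pod mapping is stale:
  oc scale dc/mongodb --replicas=0
  oc scale dc/mongodb --replicas=1
  THREAD_CHECK=ran
else
  echo "oc CLI not found; commands shown for reference only"
  THREAD_CHECK=skipped
fi
```

In this case the endpoints, service, and in-pod checks all looked healthy, which is what narrows the problem down to routing between pods rather than MongoDB itself.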

1 Answer

From within a given Project scope, requests directed at the hostname "mongodb" should be routed to the Kubernetes Service named "mongodb" (when one is available).
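That routing relies on cluster DNS: the Service name resolves to its cluster IP from any pod in the same Project. A quick way to check resolution (a sketch; `mongodb` is the assumed Service name, and the fallback message covers running it outside the cluster):

```shell
# Inside any pod in the same Project, the Service name should resolve
# to the cluster IP via cluster DNS. Outside the cluster the lookup
# fails, which the fallback message below accounts for.
getent hosts mongodb \
  || echo "mongodb does not resolve here (probably not inside the cluster)"
```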

To debug this connectivity issue:

  1. Open a live terminal within your front-end container using the OpenShift web console. The front-end container must be within the same Project scope as the database service.
  2. Run env to list the environment variables available to the front-end. If this container was started after the creation of the "mongodb" service, its connection details should be visible in the environment.
  3. Run curl $MONGODB_SERVICE_HOST:$MONGODB_SERVICE_PORT from the live terminal to verify the availability of the mongodb service.
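The steps above, run from the front-end terminal, look roughly like this (a sketch; the variable names are the ones OpenShift injects for a Service called "mongodb"):

```shell
# Step 2: list the Service env vars injected into pods that started
# after the "mongodb" Service existed (name assumed).
env | grep -i '^MONGODB_SERVICE' || true

# Step 3: probe the Service. A reachable MongoDB answers with its
# "trying to access MongoDB over HTTP on the native driver port" banner;
# "No route to host" instead points at a cluster networking problem.
HOST=${MONGODB_SERVICE_HOST:-}
PORT=${MONGODB_SERVICE_PORT:-27017}
if [ -n "$HOST" ]; then
  curl "$HOST:$PORT"
else
  echo "MONGODB_SERVICE_HOST not set; run this inside the front-end pod"
fi
```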
ʀɣαɳĵ