
I am running a StrongLoop API on Heroku and had no issues until I made a few minor changes; now I get massive memory-use problems, to the point that the application crashes:

2015-04-20T09:48:02.414727+00:00 heroku[web.1]: Process running mem=728M(142.2%)
2015-04-20T09:48:02.414771+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)

The weird thing is that this does NOT happen when I run locally using NODE_ENV='production' slc run

When I monitor memory usage while running locally, I get a total (across the 4 worker processes) of about 330MB, vs 728MB+ when running exactly the same thing on Heroku.

Rolling back to a previous version of my application works just fine, but I cannot see how the changes I implemented could have caused the memory use to go out of control like this. Here's what I essentially changed:

  1. fixed a mistyped check for a custom environment variable on boot

  2. created datasources.development.json and datasources.production.json files, to ensure that in development mode I connect to an in-memory database instead of the live MongoLab-hosted database

  3. made some ACL changes to two of my models to allow an admin user to write via the API (works fine locally; the sort of entry involved is sketched below)
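
For context, the ACL changes are entries of the standard LoopBack form added to the model JSON files, roughly like the following (the role name "admin" here is just illustrative, not necessarily the one actually used):

{
  "accessType": "WRITE",
  "principalType": "ROLE",
  "principalId": "admin",
  "permission": "ALLOW"
}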

datasources.development.json looks like this:

{
  "KaranMongo_live": {
    "name": "KaranMongo_live",
    "connector": "memory",
    "file": "db.json"
  },
  "MyEmail": {
    "name": "MyEmail",
    "connector": "mail",
    "transports": [
      {
        "type": "smtp",
        "host": "smtp.mandrillapp.com",
        "secure": true,
        "port": 465,
        "tls": {
          "rejectUnauthorized": false
        },
        "auth": {
          "user": "HIDDEN",
          "pass": "HIDDEN"
        }
      }
    ]
  }
}

And datasources.production.json looks like this:

{
  "KaranMongo_live": {
    "host": "ds043471-a0.mongolab.com",
    "port": 43471,
    "database": "HIDDEN",
    "username": "HIDDEN",
    "password": "HIDDEN",
    "name": "KaranMongo_live",
    "connector": "mongodb"
  },
  "MyEmail": {
    "name": "MyEmail",
    "connector": "mail",
    "transports": [
      {
        "type": "smtp",
        "host": "smtp.mandrillapp.com",
        "secure": true,
        "port": 465,
        "tls": {
          "rejectUnauthorized": false
        },
        "auth": {
          "user": "HIDDEN",
          "pass": "HIDDEN"
        }
      }
    ]
  }
}

I am really stumped. Any ideas what might be happening? Or at least how I would go about tracing the cause of the problem?

UPDATE: I'm getting a little closer to an answer. It seems that setting NODE_ENV=production is the culprit, since it has a side effect I didn't anticipate: it makes LoopBack launch with 4 worker processes instead of just a single process, and this overwhelms the memory provided by Heroku. I'm renaming my datasources.production.json to datasources.live.json and setting NODE_ENV=live to see whether the problem goes away.
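
Concretely, that change amounts to something like this (assuming the config files live under server/, as in a standard LoopBack project layout):

# rename the environment-specific datasource config
mv server/datasources.production.json server/datasources.live.json

# point the Heroku app at the new environment name
heroku config:set NODE_ENV=live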

Anselan

2 Answers


LoopBack doesn't fork processes on its own, but strong-supervisor does, and the behaviour you are describing matches that.

If that is the case (e.g. if you have slc run or sl-run in your Procfile), then you could add a --cluster=1 option to that command, or set an environment variable in your Heroku app via heroku config:set STRONGLOOP_CLUSTER=1.

Then your app would still run in "production" mode, but the cluster would be capped at 1 worker.
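
For example, assuming a Procfile along the lines of web: slc run, the first option is just:

web: slc run --cluster=1

and the second leaves the Procfile alone:

heroku config:set STRONGLOOP_CLUSTER=1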

Ryann Graham

Fixed it!

It was indeed the NODE_ENV variable that was causing the problem - none of my other changes were actually the issue.

It seems that setting NODE_ENV='production' has a special meaning for StrongLoop: it launches 4 worker processes instead of a single one, and with slightly more complex applications this is too much for Heroku's memory quota. Changing NODE_ENV to something else (in my case, NODE_ENV='live') made everything run just fine: only 86MB of total memory use, on average.
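
To double-check which environment the dynos will actually see, the config var can be read back with the Heroku CLI:

# should print "live" after the change
heroku config:get NODE_ENV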

Anselan
    Be careful: if you do not use NODE_ENV=production in a production environment and your app throws exceptions, the stack traces will be publicly exposed. Using production mode suppresses this, and there is another way to suppress it when using other NODE_ENV values. See http://docs.strongloop.com/display/public/LB/Environment-specific+configuration;jsessionid=E8070264032F4CCBC9CB6987B9D08A2F#Environment-specificconfiguration-Turningoffstacktraces – notbrain Apr 20 '15 at 17:14
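
For reference, the other way mentioned there is, as far as I recall the LoopBack 2.x docs (check the exact key against the linked page), the REST error handler option in server/config.json:

"remoting": {
  "errorHandler": {
    "disableStackTrace": true
  }
}

With that set, error responses omit the stack trace regardless of NODE_ENV.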