
To learn node.js, I am writing a web site that allows users to play the online game Mafia. For those unfamiliar with Mafia, it is a game most commonly played on forums, and pits an uninformed majority (the "Town") against an informed minority (the "Mafia"). However, although this is an accurate brief overview, in fact every game session can exhibit widely varying house rules that can dramatically change the game mechanics.

I want my website to be able to handle all of these variations. At first I planned for my website to implement a comprehensive framework that could run all Mafia variants itself. However, after going over a ton of rule sets for finished games archived on several different forums, I realized that the space of reasonable rules and gameplay mechanics is so huge that I would essentially have to create a new domain-specific programming language to allow all possible variants. Inventing a new language for an otherwise straightforward personal project is rather silly and not something I'm interested in at the moment, especially given I have a perfectly good language at hand, namely JavaScript.

Therefore, I decided to let variant authors upload a JavaScript file containing the variant code that my website will call into at the appropriate points. Essentially, JavaScript modules implementing Mafia variant game logic (which my website code will require()) will act as a scripting language for my web site's "game engine". Think Lua for C++ games. Unfortunately, this introduces a severe security problem. Unlike in the browser, node-run JavaScript has access to the file system, the network, and so on. So it would be trivial for a malicious user to upload a variant file that deletes the contents of my hard drive, or starts Bitcoin mining, or whatever.

My first thought was to do a replace() over each user's uploaded code, rewriting references to dangerous libraries such as 'fs' and 'http' into invalid strings, and then catch the consequent exceptions when I try to load the file. However, this ad-hoc blacklisting technique feels like the kind of approach that one of the many people smarter/more knowledgeable than me will be able to overcome in a heartbeat. What I really need is a way to whitelist the libraries that user-uploaded code has access to. Is there a way to do this using JavaScript in node.js? If not, how would you recommend I secure the computer my node server will be running on as much as possible?

My current strategy is to require myself and a small number of trusted users to review and then vote unanimously in favor of user-uploaded JavaScript variant code before it is brought into the system, but I'm hoping there is a more automatic way of doing it.

Drake
  • Take a look [at an answer I gave recently](http://stackoverflow.com/a/19444133/893780) on how to whitelist/blacklist modules that can be `require`'d. But I agree with @vkurchatkin that it would only solve one aspect of your problem. – robertklep Nov 04 '13 at 06:35
  • An awesome feature of JavaScript is the ability to define behavior at runtime, creating new functions, etc. Even if you were to run the uploaded code through an AST, you'd still likely miss many potential vulnerabilities. If you don't trust the users fully, you need to find a different way for custom code to execute. A platform like .Net is far more capable of providing a low trust environment for example. – WiredPrairie Nov 04 '13 at 12:19
  • That answer is really helpful @robertklep. I think I'm just going to pass in a completely empty sandbox to the `vm` module though, because users should be able to implement all their logic in a single file. – Drake Nov 04 '13 at 16:27

1 Answer


You need to use the vm module for this. Basically it allows you to run scripts in customized contexts, so you can put in whatever globals you want, define your own require, etc.

You should also remember that in node.js it's possible to harm your app without any libraries — a user can simply add something like while (true) {} which will stall the whole process. So you need to run all untrusted code in separate processes and be ready to kill them when they start to abuse CPU or memory.

vkurchatkin
  • The vm module looks very interesting, thanks. The only downside I can see is that I can't set a timeout on any of the runInX() functions, but since the untrusted code will be running in a child process I can just require the children to send a keepalive message to the master process every X ms and kill them as you suggested if they miss one. – Drake Nov 04 '13 at 16:18
  • You should also add `use strict;` to the head of the code in order to prevent breaking the sandbox using `arguments.callee.caller`. I have recently created [a library](https://github.com/asvd/jailed) for exactly the mentioned purpose, which additionally simplifies the interaction between the application and sandboxed code. Here is an example which nearly implements the execution control with a timeout, as you asked: https://github.com/asvd/jailed/tree/master/demos/node/timeout – asvd Jan 18 '15 at 14:57