
I'm learning Node.js (*awesome*), and I'm toying with the idea of using it to create a next-generation MUD (online text-based game). In such games, there are various commands, skills, spells, etc. that can be used to kill bad guys as you run around and explore hundreds of rooms/locations. Generally speaking, these features are pretty static - you can't usually create new spells or build new rooms. However, I would like to create a MUD where the code that defines spells, rooms, etc. can be edited by users.

That has some obvious security concerns; a malicious user could, for example, upload some JS that spawns a child process running 'rm -r /'. I'm not as concerned with protecting the internals of the game (I'm securing as much as possible, but there's only so much you can do in a language where everything is public); I could always track code changes wiki-style, and punish users who e.g. crash the server, or boost their power over 9000, etc. But I'd like to solidly protect the server's OS.

I've looked into other SO answers to similar questions, and most people suggest running a sandboxed version of Node. This won't work in my situation (at least not well), because I need the user-defined JS to interact with the MUD's engine, which itself needs to interact with the filesystem, system commands, sensitive core modules, and so on. Hypothetically, all of those transactions could be JSON-encoded in the engine, sent to the sandboxed process, processed, and returned to the engine via JSON, but that is an expensive endeavour if every single call to get a player's hit points needs to be passed to another process. Not to mention it's synchronous, which I would rather avoid.

So I'm wondering if there's a way to "sandbox" a single Node module. My thought is that such a sandbox would need to simply disable the 'require' function, and all would be bliss. So, since I couldn't find anything on Google/SO, I figured I'd pose the question myself.

Anthon
opensourcejunkie
  • This sounds pretty neat :) How would you feel about running the user's code in the browser, and having it submit the resulting values to Node via AJAX? That way you can provide something of an interface where only certain values can be modified (e.g. health, mana, level, skills, etc.), you constrain them server-side, and you have no chance of malicious code running on your server. Does that seem like something that might work? – asifrc Jul 09 '13 at 05:35
  • Thanks! :-). Unfortunately that won't quite work for what I'm looking to do; it's pretty much similar to the sandboxed process approach. I want to allow user-provided code to call functions, register events, etc. For example, a room might register an event to detect when someone enters, and then automatically turn on a magic light (clap on ;-)). Or an NPC might decide to follow you, casting healing spells, or intelligently casting elemental spells on enemies based on their weaknesses, etc. So these entities really need to be able to interact with each other on the server. Thanks though! – opensourcejunkie Jul 10 '13 at 01:15
  • I know this is a super old question, but this is definitely doable. You have multiple libraries allowing you to do that, trading off security for usability, but I can only recommend `isolated-vm`. It sandboxes your module, but you can still expose the Node functions you want to it. – Jerska Mar 27 '20 at 10:44

1 Answer


Okay, so I thought about it some more today, and I think I have a basic strategy:

var require = function (module) {
    throw new Error("Untrusted code tried to load module '" + module + "'");
};
var module = null;
var process = null;
// ...shadow anything else susceptible the same way

var loadUntrusted = function (code) {
    // the eval'ed code sees only the shadowed globals above
    eval(code);
};

Essentially, we just use variables in a local scope to hide the Node API from eval'ed code, and run the code. Another point of vulnerability would be objects from the Node API that are passed into untrusted code. If e.g. a buffer were passed to an untrusted object/function, that object/function could work its way up the prototype chain and replace a key buffer method with its own malicious version. That would make all buffers used for e.g. file IO, or piping system commands, etc., vulnerable to injection.
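A minimal demonstration of that shared-prototype risk (the malicious override is only shown in a comment, not actually performed):

```javascript
// Every Buffer in the process shares one prototype object, so untrusted
// code holding any buffer can reach state that is common to all of them.
var buf = Buffer.from('hello');
var proto = Object.getPrototypeOf(buf);

// Untrusted code could now do, e.g.:
//   proto.toString = function () { return 'injected'; };
// ...and every buffer in the process, including ones used for file IO
// or piping system commands, would return the malicious value.
console.log(proto === Buffer.prototype); // prints "true"
```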

So, if I'm going to succeed in this, I'll need to partition untrusted objects into their own world - the outside world can call methods on them, but they cannot call methods on the outside world. Please feel free to tell me of any further security vulnerabilities you can think of regarding this strategy.
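One possible sketch of that partition (the `makeFacade` helper and its method names are hypothetical): untrusted code never receives engine objects directly, only a frozen facade whose methods return primitives or copies, so there is no shared engine object whose prototype it could poison.

```javascript
// Hypothetical sketch of the 'partitioned world' idea: untrusted code
// receives a frozen facade that closes over the engine object privately
// and only ever hands out primitives.
function makeFacade(player) {
  return Object.freeze({
    getHitPoints: function () { return player.hp; },       // primitive out
    getName: function () { return String(player.name); }   // copy out
  });
}

var player = { name: 'bob', hp: 42 };
var api = makeFacade(player);
console.log(api.getHitPoints()); // prints "42"
```

The facade is frozen, and its methods close over `player` rather than exposing it, so untrusted code can read values but never obtains a reference to the engine's own objects. (This alone doesn't stop tampering with shared built-ins like `Object.prototype`, so it's a starting point, not a complete sandbox.)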

opensourcejunkie