
I have a Postgres database containing public information that I'd like to expose to the internet, for anyone to use. What steps can I take to prevent:

  1. Excessively expensive queries that hog resources and prevent access by others
  2. Queries that return too much data, consuming bandwidth at the server owner's expense
  3. The server itself being compromised and used for malicious purposes

I'm not worried about any data on the server being exposed, and I'm not especially worried about the server being crashed - it's trivial to rebuild.

It's PostgreSQL 9.1 with PostGIS extensions, containing OpenStreetMap data and a few other things. It's currently running on an Ubuntu (Quantal) VM, on OpenStack infrastructure.

The database is currently configured so that the only account that can connect over a network has read access to the necessary tables, and nothing more. It has a trivial password, and is running on the default port (5432), to simplify use. Shell access is only by public key. I'm not using a firewall, other than that provided by the OpenStack infrastructure. (All of these decisions are fair game for discussion...)
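For concreteness, the lockdown amounts to something like the following sketch (the role, database and table names here are illustrative placeholders, not the real ones):

    # As the postgres superuser: restrict the network-facing role to SELECT
    # on just the published tables. "osm", "netuser" and the table names
    # are placeholders.
    sudo -u postgres psql -d osm <<'SQL'
    REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM netuser;
    GRANT SELECT ON planet_osm_point, planet_osm_line, planet_osm_polygon TO netuser;
    SQL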

Steve Bennett
  • First question that comes to mind - why must it be public facing? – Aaron Mason Nov 19 '13 at 00:17
  • So that the public can use it. Although, the audience is almost exclusively academics in Australia, so perhaps I should use network subnet restrictions (see the pg_hba.conf sketch after these comments)? – Steve Bennett Nov 19 '13 at 00:19
  • Since it's running Linux, you could look at iptables and tc to see if you can pick up connections getting more than their fair share and class them into a low-bandwidth queue, or blocking such connections for a short time. Using only PKI authentication is very wise, good man. Restricting access to academic subnets might be a good idea - have a feedback loop (maybe via email?) so that you know when people have been knocked back from academic networks. – Aaron Mason Nov 19 '13 at 03:38
  • 2
    The problem with restricting to actual .edu.au subnets, is it prevents people connecting from home, cafe wifi or from tethered mobile phones. Academics aren't as desk-bound as they used to be... – Steve Bennett Nov 19 '13 at 11:47
  • Hi, just wondering how you went with this? – Aaron Mason Feb 04 '14 at 22:50
  • Thanks for following up - I haven't got any further at the moment. I obviously have some reading up to do and need to redesign the servers to be both more robust and more replaceable. – Steve Bennett Feb 05 '14 at 00:26
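On the subnet idea raised above: PostgreSQL can enforce this itself in pg_hba.conf, one line per allowed network. A minimal sketch, assuming the same placeholder database and role names as earlier (the CIDR blocks are documentation placeholders, not real academic network ranges):

    # pg_hba.conf: allow the network-facing role in only from listed subnets.
    # TYPE  DATABASE  USER     ADDRESS           METHOD
    host    osm       netuser  203.0.113.0/24    md5
    host    osm       netuser  198.51.100.0/24   md5

Connections from any unlisted address are refused before authentication, though as noted in the comments this locks out academics working from home or over tethered connections.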

1 Answer


Since you're using Linux, iptables gives you a few options.

If your distro ships the iptables quota module, you can use it to cap the amount of data served. Add the quota rule, then a DROP rule immediately after it to catch traffic once the limit is reached. To reset the quota periodically you'll probably want a cron job, bearing in mind that the quota's internal counter only resets when the rule itself is re-installed.
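A minimal sketch of that approach, assuming a daily 10 GB cap on data served from the default Postgres port (the byte count and the rule position are assumptions to adjust):

    # Serve up to ~10 GB from Postgres, then drop further outbound packets.
    # The quota rule matches while budget remains; the DROP catches the rest.
    iptables -A OUTPUT -p tcp --sport 5432 -m quota --quota 10737418240 -j ACCEPT
    iptables -A OUTPUT -p tcp --sport 5432 -j DROP

    # Daily cron job: replacing the rule is what resets the quota counter
    # (this assumes the quota rule is rule 1 in the OUTPUT chain).
    iptables -R OUTPUT 1 -p tcp --sport 5432 -m quota --quota 10737418240 -j ACCEPT

Note that dropping packets mid-connection stalls in-flight queries rather than refusing them cleanly, which may be acceptable given the goal is just to stop serving data.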

Also, iptables and tc in combination can achieve the effect you're after, whether you want to throttle heavy users or cut them off entirely.
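A sketch of the softer option: mark outbound Postgres traffic with iptables, then shape it with an HTB class so heavy transfers are throttled rather than killed (the interface name and rates are assumptions):

    # Mark outbound Postgres packets so tc can classify them.
    iptables -t mangle -A OUTPUT -p tcp --sport 5432 -j MARK --set-mark 10

    # HTB: unmarked traffic gets the full link; marked traffic is capped at 5 Mbit.
    tc qdisc add dev eth0 root handle 1: htb default 1
    tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
    tc class add dev eth0 parent 1: classid 1:10 htb rate 5mbit ceil 5mbit
    tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10

As written this throttles all Postgres clients as a group; per-IP fairness would need per-IP marks or hashing filters, which is more involved.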

Have a look at those tools and see if they'll do what you need.

Aaron Mason