
Working through a backup solution and could use some security assistance. Please see below.

The process

For our editing business, we have an offsite backup server that we update nightly using rsync over SSH. The backup script:

  • Wakes the remote machine
  • Mounts the encrypted volumes
  • Compares the files on the local RAID array against the individual LUKS-encrypted disks on the backup machine
  • Splits them up to fit on those backup drives with as little data transfer as possible
  • Runs rsync
  • Dismounts the encrypted volumes
  • Conducts SMART tests to round things out
  • Puts the remote machine to sleep
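As a rough sketch, the steps above might look like the following (host name, MAC address, device names, and paths are placeholder assumptions, and the drive-splitting logic is elided):

```shell
#!/bin/sh
# Hypothetical sketch of the nightly job; names are placeholders.

nightly_backup() {
    # Wake the remote machine (MAC address is a placeholder)
    wakeonlan AA:BB:CC:DD:EE:FF
    sleep 60

    # Open and mount the encrypted volume; the LUKS passphrase is
    # piped in over the SSH channel, never sent in the clear
    ssh backup "cryptsetup luksOpen /dev/sdb1 backup1 -" < /root/luks-pass
    ssh backup "mount /dev/mapper/backup1 /mnt/backup1"

    # Transfer (the real script first splits the file list across drives)
    rsync -a --delete /raid/projects/ backup:/mnt/backup1/projects/

    # Unmount and close the encrypted volume
    ssh backup "umount /mnt/backup1 && cryptsetup luksClose backup1"

    # Short SMART self-test, then put the machine back to sleep
    ssh backup "smartctl -t short /dev/sdb; systemctl suspend"
}
```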

The problem

As the process stands now, all data in transit is encrypted, and the drives themselves are encrypted (other than the system disk), with the LUKS passphrase sent over to the remote server when mounting. This is mostly secure, but in theory a malicious staff member at the remote site could break into the system and monitor the traffic by something like:

  • Booting into single-user mode
  • Changing the root password
  • Booting normally
  • Logging in as root
  • Restoring the password file and hiding their traces
  • Monitoring all goings-on and file names, and accessing the file systems when the script remotes in and mounts the drives

Is there a way to overcome this, or to detect these kinds of on-site attacks, without rewriting everything to encrypt the data before sending it to the remote server? (That would make our process much more disk- and/or bandwidth-intensive; we're talking many TBs of data.)

Thanks.

Fmstrat
    This sounds like the Evil Maid attack, on which much has been written. – Michael Hampton Aug 11 '15 at 18:22
  • Encrypting the data before it leaves the server in the first place is the sensible approach. If you insist on not doing that, I can only see you spending a lot of additional effort on something that will never work well. – kasperd Aug 21 '15 at 20:45

2 Answers


In theory, if your machine has a TPM chip, you could use it for Trusted Boot, i.e. store keys in the TPM that can only be loaded if the chain from the MBR up to whatever point you choose is unchanged. That key can then be used to encrypt a local partition containing material like the SSH host keys, so that if the trusted boot fails, the SSH server can no longer come up (or, for that matter, the whole server-side software stack, including /etc/shadow and so on).

But in practice it is a lot of work to set up (a TrustedGRUB bootloader, a custom kernel, deciding which files to "measure"). It also makes updating your system a pain (obviously, since to the TPM a software update is indistinguishable from an Evil Maid attack), and it means you yourself can no longer boot into single-user mode (or from a live DVD) without making the trusted boot fail — unless you keep the keys stored off-site as a backup and remember to seal them into the TPM again once you have triggered your own booby traps :)
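For a concrete feel, sealing key material against PCR state with the tpm2-tools suite looks roughly like the following. This is a hedged sketch: exact flag syntax varies between tpm2-tools versions, the PCR selection and file names are placeholders, and measured boot (e.g. TrustedGRUB) must already be extending those PCRs.

```shell
#!/bin/sh
# Sketch only — requires TPM 2.0 hardware and tpm2-tools to actually run.

seal_host_key() {
    # Create a primary key in the owner hierarchy
    tpm2_createprimary -C o -c primary.ctx

    # Build a policy bound to PCRs 0-7 (the measured boot chain)
    tpm2_createpolicy --policy-pcr -l sha256:0,1,2,3,4,5,6,7 -L pcr.policy

    # Seal the SSH host key under that policy
    tpm2_create -C primary.ctx -L pcr.policy -i ssh_host_key \
        -u sealed.pub -r sealed.priv
}

unseal_host_key() {
    # Succeeds only while the PCRs still match the untampered boot chain
    tpm2_load -C primary.ctx -u sealed.pub -r sealed.priv -c sealed.ctx
    tpm2_unseal -c sealed.ctx -p pcr:sha256:0,1,2,3,4,5,6,7
}
```

After an Evil Maid-style boot, the PCR values differ, the unseal fails, and the SSH host key simply never becomes available.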

mihi
  • I wish I could mark both of these as answers, since your response is a valid way to handle things. However, the below response from Aaron regarding file integrity checks is likely the route I will take. – Fmstrat Aug 12 '15 at 19:31
  • you can vote both answers up :) – mihi Aug 13 '15 at 17:17

Here are some thoughts; let me know if this is feasible.

Physical Access

Assuming you are using near-enterprise servers (e.g. HP or Dell), you can get a remote management card for the server that allows out-of-band monitoring. You can set up alerts so that you know when:

  • The machine is powered on.
  • The anti-tamper switches on the enclosure are tripped.
  • There is a pending or active hardware problem.

In your case, you would want to know when the server comes online unexpectedly. If the remote management card becomes unreachable, or if the server is powered on when you didn't expect it to be, you should be alerted (e.g. via Nagios or in-house scripts).
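A minimal sketch of such an "unexpected power-on" check (host name, backup window, and alert address are placeholder assumptions; a real deployment would query the management card rather than ping the OS):

```shell
#!/bin/sh
# Hypothetical cron job: alert if the backup server answers pings
# outside its expected nightly window.

BACKUP_HOST="backup.example.com"   # placeholder
WINDOW_START=1                     # backup window 01:00-05:00 (assumed)
WINDOW_END=5

# should_alert UP HOUR — succeeds (alert) when the host is up ("yes")
# at an hour outside the backup window
should_alert() {
    up="$1"; hour="$2"
    [ "$up" = "yes" ] && {
        [ "$hour" -lt "$WINDOW_START" ] || [ "$hour" -ge "$WINDOW_END" ]
    }
}

if ping -c 1 -W 2 "$BACKUP_HOST" >/dev/null 2>&1; then up=yes; else up=no; fi
if should_alert "$up" "$(date +%H)"; then
    echo "ALERT: $BACKUP_HOST is powered on outside the backup window" \
        | mail -s "Unexpected power-on" admin@example.com
fi
```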

You can also have video surveillance cameras that will send video or pictures of anyone going near the hardware or entering the room.

File Integrity

You could use a tool such as AIDE, Tripwire, Samhain, or OSSEC to create a file integrity database on the remote end, copy it back to your server (perhaps into a date-named folder), and then compare the local and remote copies of the database. It would be up to you to write rules that disable your rsync if certain files change or certain conditions are met; you would have to decide what logic suits the needs of your organization and at which point a human is alerted and required to intervene. OSSEC can also create diffs of files in directories that you specify.
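The compare-two-databases idea can be sketched with plain checksum manifests (AIDE's database records far more attributes, but the gating logic is the same; paths below are placeholders):

```shell
#!/bin/sh
# Sketch: checksum-manifest integrity check. In the real setup,
# build_manifest would run on the remote host over SSH and the
# resulting manifest would be copied back before comparison.

# build_manifest DIR OUT — checksum every file under DIR into OUT
build_manifest() {
    find "$1" -type f -exec sha256sum {} + | sort > "$2"
}

# manifest_clean OLD NEW — succeeds only if nothing changed
manifest_clean() {
    diff -q "$1" "$2" >/dev/null
}

# Typical nightly gate before mounting the LUKS volumes:
#   build_manifest /etc manifest-today.txt        # (run remotely)
#   manifest_clean manifest-prev.txt manifest-today.txt \
#       || { echo "Integrity change detected; aborting" >&2; exit 1; }
```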

Rsync in --dry-run mode could even be used in your script to compare your local and remote files, see what changed, and determine whether human intervention is required.

Aaron