1

The company I work for has multiple offices spread throughout the US. We're trying to come up with a solution that would allow secure file sharing across all the offices. What we are thinking at the moment is:

  • Setting up a Linux server at each site and mounting the shared directories from the other servers via sshfs. This way the clients would only directly access their local server.

  • Another possibility is to do data replication via rsync between all the servers. This would allow us to also preserve data in the event of catastrophic failure. The thing we are worried about is corrupt data being replicated across all the servers and us losing any chance of recovery. It might be possible to solve this by doing hourly rsync between the servers and daily incremental backup to an external drive at our office to preserve data from the past.

  • We would probably use LDAP to deal with user account replication and access control.
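To make the second option concrete, the hourly replication plus daily incrementals could be expressed as cron entries along these lines. This is only a sketch: the hostnames, paths, user, and backup mount point are placeholders, not a real setup.

```shell
# Hypothetical /etc/cron.d/shared-sync on each office server; hostnames,
# paths, and mount points are placeholders.

# Hourly: pull a copy of a remote office's share over ssh
0 * * * *  root  rsync -az --delete -e ssh backup@office-nyc:/srv/shared/ /srv/mirror/nyc/

# Daily: incremental backup to an external drive. --link-dest hard-links
# unchanged files against the previous snapshot, so a corrupted file that
# replicated during the day can still be recovered from an older snapshot.
# (A job would also need to repoint /mnt/backup/latest at the new snapshot.)
30 2 * * *  root  rsync -a --link-dest=/mnt/backup/latest /srv/shared/ /mnt/backup/$(date +\%F)/
```

Note the escaped `\%F`: cron treats a bare `%` as a newline, so it must be escaped inside crontab command lines.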

Neither of the solutions "feels" right, for lack of a better term. What would you do to solve this situation?

EDIT: We would prefer a Linux-based solution to avoid the costs and hassle of Windows licensing. Also, the IT team is much more familiar with Linux than with Windows Server. It should also be noted that we're not against outsourcing this so that the data would live out on the cloud somewhere, assuming the price is reasonable.

baudtack
  • 490
  • 2
  • 6
  • 15

5 Answers

1

If you're not absolutely committed to Linux, I'd recommend going down a Windows Server route with DFS, Previous Versions and Active Directory. It's clean, well established, more than adequately secure, and probably the best "one stop shop" for file sharing right now.

Maximus Minimus
  • 8,987
  • 2
  • 23
  • 36
  • 1
    Yeah, we'd prefer to avoid all the hassles of Windows licensing fees. That and both myself and the other IT guy are much more familiar with Linux than Windows Servers. – baudtack Jul 30 '09 at 17:20
  • Fair enough reason, I'm a big fan of "stick with what you know best" when it comes to servers. :) – Maximus Minimus Jul 30 '09 at 18:25
1

I wouldn't use sshfs. Set up VPN tunnels between the different locations and use your standard file sharing protocols. If this is too slow because of bandwidth limitations, you will have to think about something else, but that would certainly also involve a VPN.
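As an illustration, a minimal OpenVPN point-to-point tunnel between two sites can be brought up with a shared static key. This is only a sketch: real deployments would normally use TLS certificates, and the hostname and tunnel addresses below are placeholders.

```shell
# Sketch of a static-key OpenVPN tunnel; vpn-a.example.com and the tun
# addresses are illustrative placeholders.

# Generate the shared key once and copy it securely to both sites
openvpn --genkey --secret /etc/openvpn/static.key

# Site A (waits for the connection)
openvpn --dev tun0 --ifconfig 10.8.0.1 10.8.0.2 --secret /etc/openvpn/static.key

# Site B (connects to site A's public address)
openvpn --remote vpn-a.example.com --dev tun0 \
        --ifconfig 10.8.0.2 10.8.0.1 --secret /etc/openvpn/static.key
```

Once the tunnel is up, NFS or Samba shares can be mounted across it like any other network path.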

Sven
  • 98,649
  • 14
  • 180
  • 226
  • The reason I thought of sshfs is that we have a lot of users who travel a lot. Using sshfs would allow them to mount their directories while they are on the road without having to activate a VPN client. The less the users have to do the better. – baudtack Jul 30 '09 at 17:36
  • You could combine starting the VPN and connecting to the servers in a script. If you are familiar with Linux, you could use OpenVPN, which has a CLI interface for the clients, and use the net command on Windows to mount the shares. – Sven Jul 30 '09 at 18:09
  • Ah. Good point. – baudtack Jul 30 '09 at 18:12
  • We have VPN to access our office resources while on the road, but also ssh access in case someone needs to connect without having the client certificates. Both solutions can work at the same time – Jorge Bernal Jul 30 '09 at 20:34
0

There are a large number of variables to consider here, including link speed, preferred platforms, type of data, and data size.

I would suggest you consider regionally partitioning the data no matter what solution you go with -- such that

 /shared/chi/
 /shared/nyc/
 /shared/phi/
 /shared/sfo/

are unique, local (fast, authoritative) copies of the local data. You mention hourly rsync; is it critical that everyone have access to all of the systems all the time? Is real-time access important?

If you were doing this with Windows -- I would suggest a single AD forest with local domain controllers at every site, local storage, and DFS to sync between servers.

If you are using Linux, LDAP or simple permissions should do the trick. You've left a bit of room for interpretation though -- are there certain business cases for another setup? Have you considered a document management system? Will people from the various regional sites need to collaborate?

Is something such as Citrix an option (potentially a much lower cost due to centralized IT)?

Regarding "but this certainly would also involve a VPN":

Depending on your budget, large carriers can provide an MPLS network that will look like a private network to you, but is routed over the carrier's standard infrastructure. No managing of VPNs, no router limitations, no technical maintenance. The costs are typically around the same as a standard T1/T3 connection.

SirStan
  • 2,373
  • 15
  • 19
  • We are planning to partition the data by program. Each office has one or more programs that are run from there. It is important that everyone have access to the data. Going with rsync would allow us to do that while doing backups at the same time. We'd prefer non-Windows solutions. I'll take a look at Citrix and get back to you. – baudtack Jul 30 '09 at 17:23
0

If you don't need 'near-realtime' sharing between offices, rsync is a really good way to go.

Make a 'local' share on each server, with appropriate R/W privileges for local users, and an 'imported' directory where you pull a copy (with rsync) from the other servers (each one in its own subdirectory), and serve it R/O to local users. If you want, make another directory with links to those but a 'flatter' structure, and share that.
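That layout could be sketched roughly like this; the site names, server hostnames, and paths are made up for illustration.

```shell
# Illustrative only: 'local' is read-write for local users; each remote
# office's data is pulled into its own read-only 'imported' subdirectory.
mkdir -p /srv/local /srv/imported

for site in nyc chi sfo; do          # placeholder site names
    mkdir -p "/srv/imported/$site"
    # Pull that office's 'local' share; --delete keeps the mirror exact
    rsync -az --delete "sync@server-$site:/srv/local/" "/srv/imported/$site/"
done

# Then export /srv/local read-write and /srv/imported read-only via NFS/Samba
```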

There's no reason why this would be any more susceptible to propagating corrupted files than any other scheme.

If you want something more realtime, you could mount and re-export the remote servers for local users. I think NFSv4 has some caching options that could help.

Or, you could use WebDAV: the users could directly mount the remote servers, and a local squid proxy could help cache objects.

Javier
  • 9,268
  • 2
  • 24
  • 24
0

One thing we are considering using now is Jungle Disk. It seems to be a decent solution for a reasonable price.

baudtack
  • 490
  • 2
  • 6
  • 15