5

For a specific project, I have to implement a temporary file server:

  1. the server must serve medium-sized binary files (around 1 MB)
  2. the maximum lifetime of files is 5 minutes
  3. files will be uploaded by around 10 distinct servers
  4. files will be read by around 10 distinct servers
  5. a given file is uploaded by only 1 server and read by only 1 server
  6. a given file can be destroyed after the first successful read
  7. the server must use only non-privileged ports (no FTP or NFS)
  8. the server must work without any root access
  9. the server must work on Linux
  10. the server must be accessible on the LAN
  11. clients (upload & download) are only Linux servers (the client code must also work without any root access)
  12. I don't need formal persistence (I can accept losing some files after a crash)
  13. the server must use only open source components
  14. it must be really fast!

I'm considering two solutions (a rough sketch of both is below):

  - a Redis instance (no VM, no persistence)
  - an NGINX server with the DAV module (PUT command to upload)
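
Here is a minimal sketch of how the upload/read cycle could look with either option, assuming a Redis server at redis-host:6379 and an nginx+DAV endpoint at http://nginx-host:8080/tmp/ (host names, port and path are made up). The Redis TTL covers requirement 2 and the delete-after-read covers requirement 6; with the DAV variant the 5-minute expiry would need a separate cleanup job:

    import redis      # redis-py client
    import requests   # plain HTTP client for the nginx/DAV variant

    FILE_TTL = 300  # 5 minutes, per requirement 2

    # --- Option 1: Redis (hypothetical host/port) ---
    r = redis.Redis(host="redis-host", port=6379)

    def redis_upload(name, data):
        # SET with EX gives every file a 5-minute lifetime automatically.
        r.set(name, data, ex=FILE_TTL)

    def redis_download(name):
        # Only one reader per file (requirement 5), so GET then DELETE is safe.
        data = r.get(name)
        if data is not None:
            r.delete(name)  # destroy after the first successful read (requirement 6)
        return data

    # --- Option 2: nginx + DAV module (hypothetical URL) ---
    BASE_URL = "http://nginx-host:8080/tmp/"

    def dav_upload(name, data):
        requests.put(BASE_URL + name, data=data).raise_for_status()

    def dav_download(name):
        resp = requests.get(BASE_URL + name)
        resp.raise_for_status()
        requests.delete(BASE_URL + name)  # DAV DELETE; expiry needs a cron/cleanup job
        return resp.content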

But I'm really open to other solutions ;-)

Ben Pilbrow

4 Answers

3

If you can use a FreeBSD solution rather than Linux, FreeNAS is a great NAS option that's easy to install and configure and has a huge range of options for connectivity and access control. There's also a Linux version of the project underway, but I'm not certain how feature-complete it is.

There's also OpenFiler on the Linux side, but we've found FreeNAS better for our varied needs (which admittedly sound quite a bit different than yours).

Edit: Sounds like you need something to run on an existing Linux server rather than on its own hardware. If that is a necessity, I'd think about running one of these options as a VM under KVM or Xen.

nedm
0

You can try your luck with memcached or CouchDB. Both are key-value stores; the first one is the 'in memory' type. Can you use it? I don't know - it depends on whether your consumer can 'guess' the key under which the data was stored by the producer.
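
As a rough sketch of that producer/consumer pattern with the python-memcached client - the host and the key naming convention (the "guessable" key derived from a shared job id) are assumptions:

    import memcache  # python-memcached client

    mc = memcache.Client(["memcached-host:11211"])

    def produce(job_id, data):
        # Both sides derive the key from something they already share (a job id here),
        # so the consumer can "guess" it without a separate notification channel.
        mc.set("tmpfile:%s" % job_id, data, time=300)  # expire after 5 minutes

    def consume(job_id):
        key = "tmpfile:%s" % job_id
        data = mc.get(key)
        if data is not None:
            mc.delete(key)  # the file can be destroyed after the first successful read
        return data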

Maybe the memcached module for nginx is exactly what you need? Together with it you get a RESTful HTTP interface to your in-memory database.
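
In that setup the uploader would still write straight into memcached (nginx's memcached module only serves GETs), while the downloader fetches over plain HTTP - assuming an nginx location that uses the request URI as the memcached key. The hosts, port and the /files/ prefix below are made up:

    import memcache
    import requests

    mc = memcache.Client(["memcached-host:11211"])

    def upload(name, data):
        # Store under the same path nginx will see as the request URI.
        mc.set("/files/" + name, data, time=300)

    def download(name):
        # nginx looks the URI up in memcached and streams the value back.
        resp = requests.get("http://nginx-host:8080/files/" + name)
        resp.raise_for_status()
        return resp.content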

You can also consider some queueing mechanism - anything from a simple FIFO implemented by you in MySQL to things like RabbitMQ or ActiveMQ.
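
A hedged sketch of the RabbitMQ route with pika (1.x), pushing the file bytes as the message body and letting the broker expire unread messages after 5 minutes; the host, queue name and per-message TTL are assumptions, and one queue per producer/consumer pair would preserve the 1-uploader/1-reader requirement:

    import pika  # RabbitMQ client

    conn = pika.BlockingConnection(pika.ConnectionParameters("rabbit-host"))
    channel = conn.channel()
    channel.queue_declare(queue="tmpfiles")

    def send_file(data):
        channel.basic_publish(
            exchange="",
            routing_key="tmpfiles",
            body=data,
            properties=pika.BasicProperties(expiration="300000"),  # TTL: 5 min in ms
        )

    def receive_file():
        # basic_get returns (None, None, None) when the queue is empty.
        method, properties, body = channel.basic_get(queue="tmpfiles", auto_ack=True)
        return body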

pQd
  • a memcached record is limited to 1 MB, too small for my needs –  Jul 25 '10 at 20:43
  • @Fabien - not anymore; check http://code.google.com/p/memcached/wiki/ReleaseNotes142 : "Many people have asked for memcached to be able to store items larger than 1MB, while it's generally recommended that one not do this, it is now supported on the commandline. [..] The new -I parameter allows you to specify the maximum item size at runtime. It supports a unit postfix to allow for natural expression of item size." – pQd Jul 25 '10 at 20:47
0

Just run NFS on a non-standard port and back it with tmpfs; I assume from the sound of this that the total size is small? That ought to run close to wire speed (classically NFS wants synchronous writes, but with tmpfs that will not be an issue).


tmpfs is a filesystem that is basically a chunk of your swap/memory. So there is no persistence through a reboot of any sort, but it is as fast as writing to memory.

Ronald Pottol
  • I need a complete userspace solution (and I didn't find complete NFS solutions (server and client) in userspace). tmpfs is clearly a good option, even for the nginx service –  Jul 25 '10 at 20:47
0

Your requirements suggest that, in addition to the "server" you need to provide:

1) some client software to put files on the server

2) client software for downloading from the server

3) a notification channel between the uploader and downloader

4) some smarts on the server for expiring content no longer required

While HTTP is an obvious choice (1 and 2 are well covered, while 3 and 4 would only require a few lines of code easily implemented in Perl / PHP / whatever), this sounds like an asynchronous message queueing system. You don't say how important transactional integrity is - but requirements 2 & 12 suggest it's not a big deal. So I would suggest using email - notification is implicit, it will survive temporary outages, you can easily configure a second SMTP server on a system running on a non-privileged port, and most will allow you to specify retry intervals, timeouts, etc. In terms of adding trigger code at the receiving end - this is trivial using procmail.
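
A minimal sending-side sketch of that idea with Python's standard smtplib/email modules, pointed at a hypothetical secondary SMTP daemon on a non-privileged port (2525 here); addresses and subject convention are assumptions, and the receiving end would hand the message to procmail as described:

    import smtplib
    from email.message import EmailMessage

    def send_tmp_file(name, data):
        msg = EmailMessage()
        msg["From"] = "uploader@producer.lan"
        msg["To"] = "dropbox@consumer.lan"
        msg["Subject"] = name  # the consumer's procmail rule can key off this
        msg.add_attachment(data, maintype="application",
                           subtype="octet-stream", filename=name)

        # Secondary SMTP server listening on a non-privileged port (assumption).
        with smtplib.SMTP("consumer.lan", 2525) as smtp:
            smtp.send_message(msg)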

HTH

C.

symcbean