3

My application needs a file upload component that will be uploading very large (>1 GB) files. I have yet to decide on the protocol (HTTP or FTP) to go for (any help in this regard will be highly appreciated). When one user is using this upload feature, other users' work should not be hampered, i.e. one large file upload should not eat up other users' bandwidth.

Is there any way this upload process can be throttled in the network, so that I can allocate only so much bandwidth to it, the upload still proceeds, and other users' work is not hampered? What would be the best protocol for this type of large file upload (HTTP or FTP) from the network point of view? Are FTP and SFTP the same thing, or does SFTP have more overhead and hence a slower data transfer rate?

kaychaks
  • 167
  • 1
  • 1
  • 9
  • Are you talking about an application you are developing? That's what it sounds like... which means you're asking for ideas on throttling for your app? Is that right? – Izzy Jul 07 '09 at 02:14
  • 1
    If this is an application you are developing yourself then this is a question for Stack Overflow. They will be able to help you with this at app level so that no system admin work is required. – Mark Henderson Jul 07 '09 at 02:44

3 Answers

3

If you need to throttle your own application then I would suggest you include data rate limiting support in the application itself, though if you are using a 3rd-party library to do the sending this may not be possible.
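
As a rough illustration of doing the limiting inside the application, here is a minimal sketch in Python (the language, the chunk size and the choice of HTTP client are all assumptions on my part, since you don't say what you're building with): read the file in chunks and sleep whenever the sender gets ahead of the allowed rate.

    import time

    def throttled_chunks(path, max_bytes_per_sec, chunk_size=64 * 1024):
        """Yield chunks of the file no faster than max_bytes_per_sec."""
        sent = 0
        start = time.monotonic()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                sent += len(chunk)
                # If we are ahead of schedule, sleep until the average
                # rate falls back to the configured limit.
                ahead = sent / max_bytes_per_sec - (time.monotonic() - start)
                if ahead > 0:
                    time.sleep(ahead)
                yield chunk

    # Any HTTP client that accepts an iterable body can stream this, e.g.
    # requests.post(url, data=throttled_chunks("big.iso", 512 * 1024))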

You don't state anything about your platform and intended install environment, which makes specific recommendations difficult, but libcurl (http://curl.haxx.se/) is generally a popular choice: it supports just about every protocol for straight point-to-point transfers, has rate limiting options, and is available for most platforms including Linux, BSD, MacOS and Windows. The license allows linking it into non-F/OSS applications too, if that is an issue for you, and if you can't find direct bindings for your chosen language you can always call it via the external curl utility.
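
For instance, libcurl's rate cap is the CURLOPT_MAX_SEND_SPEED_LARGE option; a minimal sketch using the pycurl binding (the URL below is a placeholder, and the same option works for FTP or HTTP uploads since libcurl picks the protocol from the URL):

    import os
    import pycurl

    def upload_rate_limited(path, url, max_bytes_per_sec):
        """Upload a file and let libcurl enforce the bandwidth cap."""
        c = pycurl.Curl()
        with open(path, "rb") as f:
            c.setopt(pycurl.URL, url)
            c.setopt(pycurl.UPLOAD, 1)
            c.setopt(pycurl.READFUNCTION, f.read)
            c.setopt(pycurl.INFILESIZE_LARGE, os.path.getsize(path))
            # Throttle the outgoing transfer (bytes per second).
            c.setopt(pycurl.MAX_SEND_SPEED_LARGE, max_bytes_per_sec)
            c.perform()
        c.close()

    # e.g. upload_rate_limited("big.iso", "ftp://example.com/incoming/big.iso", 512 * 1024)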

If you are stuck using a library or external program that does not support rate limiting and you are using Linux (or another Unix-like environment), then you could look into trickle (see here) or the traffic shaping that is built into modern kernels (there are many guides out there for this; this is the one that came to the top of a quick Google). Using traffic shaping like this would let you control the whole outgoing bandwidth, not just one application, so you could stop any stream (or combination of streams) from consuming all your network's upstream bandwidth without changing individual applications.
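
For illustration, here is roughly what those two approaches look like; the device name eth0, the 1 Mbit cap and the wrapped ftp command are placeholders, both need root, and they are shelled out from Python here although you would normally run them straight from a shell or init script:

    import subprocess

    # Attach a simple token-bucket filter to eth0, capping *all* outgoing
    # traffic on that interface at 1 Mbit/s (placeholder values).
    subprocess.run(
        ["tc", "qdisc", "add", "dev", "eth0", "root",
         "tbf", "rate", "1mbit", "burst", "10kb", "latency", "70ms"],
        check=True,
    )

    # Alternatively, trickle wraps a single unmodified program, here
    # limiting its upload rate to 512 KB/s (placeholder command).
    subprocess.run(["trickle", "-u", "512", "ftp", "example.com"], check=True)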

David Spillett
  • 22,754
  • 45
  • 67
1

You can throttle bandwidth at the application level, if the application you're using supports it. For example, curl takes a --limit-rate option that you can specify.
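
A small sketch of that (the URL and the 500 KB/s cap are placeholders; --limit-rate takes suffixes like k and m, and -T uploads a file), invoked from Python here:

    import subprocess

    # Upload big.iso capped at 500 KB/s (placeholder URL).
    subprocess.run(
        ["curl", "--limit-rate", "500k", "-T", "big.iso",
         "ftp://example.com/incoming/"],
        check=True,
    )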

You can throttle bandwidth in the network itself, using Quality-of-Service (QoS). That's a bit more complicated - a good intro is here.

pjz
  • 10,595
  • 1
  • 32
  • 40
1

If security is not a concern then prefer FTP over SFTP to avoid the encryption overhead. For files as big as 1 GB I do not think you have to worry about people sniffing, as users do not usually sniff such big files. So FTP is OK.

Saurabh Barjatiya
  • 4,703
  • 2
  • 30
  • 34
  • 1
    I would disagree there. On a modern machine, excepting many embedded systems, the overhead of encrypting an SSH stream is minimal. FTP can also have more problems with firewalls. – David Spillett Jul 07 '09 at 07:50
  • I have found Linux-to-Linux scp to work without much CPU overhead at nice speeds (11+ Mbps). But Windows-to-Linux sftp is really slow (2-3 Mbps) and is bottlenecked by CPU and not by the network. So I felt FTP would be faster in the general case. – Saurabh Barjatiya Jul 07 '09 at 12:11
  • 1
    I'm at a loss to understand why an answer which failed to address the question at all was accepted. – John Gardeniers Jul 29 '09 at 12:14
  • It answers: "What would be the best protocol for this type of large file upload (HTTP or FTP) from the network point of view? Are FTP and SFTP the same thing, or does SFTP have more overhead and hence a slower data transfer rate?" – Saurabh Barjatiya Jul 29 '09 at 15:15
  • As far as throttling is concerned, a combination of the iptables modules limit and connbytes should allow you to control the speed of large files being downloaded / uploaded. It will affect all large TCP connections from the server, so it should be used with caution. – Saurabh Barjatiya Jul 29 '09 at 15:18