
Almost all file transfer software (NetSupport, Radmin, pcAnywhere...), as well as the various approaches I've used in my own application, slows down when you send a lot of small files (< 1 KB each), such as a game folder that contains thousands of files.

For example, on a LAN (Ethernet, CAT5 cables), when I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s, but when I send a game folder containing a lot of files, the rate drops to roughly 300-800 KB/s.

My guess is that this is because of the way each file is sent:

• Send file info [file_path, file_size].
• Send file bytes [loop until end of the file].
• End transfer [make sure it was received completely].

But when you use the regular Windows copy-paste to a shared folder on the network, sending a folder is always as fast as sending a single file. So I'm trying to develop a file transfer application using a WCF service (C# 4.0) that would use the maximum speed available on the LAN, and I'm thinking about doing it this way:

    // get all files from the folder
    foreach (var file in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
    {
        if (new FileInfo(file).Length < 1024 * 1024)  // small file: < 1 MB
        {
            var f = file;  // copy for the closure (C# 4 foreach capture)
            new Thread(() => SendFile(f)).Start();  // extra thread per small file
        }
        else
        {
            SendFile(file);  // large file (> 1 MB): send it and wait
        }
    }

    void SendFile(string path)  // a regular single-file send
    {
        // send the file info [file_path, file_size]
        // open a socket and wait for the server application to connect
        // send the file bytes
        // dispose
    }
    

But I'm confused about using more than one socket for a file transfer, because that will use more ports and cost more time (the delay of listening and accepting).

So, is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there's a protocol better suited to this than TCP.
Thanks in advance.

Murhaf Sousli
• Have you thought about compressing multiple files that are beyond a trigger size into a single archive and then streaming that across to be unpacked at the other end? – Andras Zoltan Apr 16 '12 at 12:51
    • Andras is right, that is by far going to be the quickest and simplest way to do this. – M.Babcock Apr 16 '12 at 12:52
• @AndrasZoltan I don't think Windows copies files over the network this way. It's a good idea to send them as a single archive (just for the small files), but this will take more time to process and use much more CPU on both server and client. – Murhaf Sousli Apr 16 '12 at 13:13
• Very small files produce a massive overhead when they're moved around one by one over a network: more files means more transfers, which in turn means more overhead, which ultimately means lower speed. As far as I can tell, the best option is to pack them into a byte array, stream it, then decompose it on the other side of the wire. – Alex Apr 16 '12 at 13:21
• @MurHafSoz no, you're right - Windows does not use this technique - it uses a method that is much further down the stack than the level you're at; in fact I think it might even be handled by the filesystem itself, and you're not going to be able to compete with that. You have to cheat. – Andras Zoltan Apr 16 '12 at 13:24
• @Alex a lot of small files can add up to something like 200 MB or 500 MB; holding all of that in RAM the way you describe wouldn't be good, right? Unless I temporarily save the byte array to the hard disk and remove it later. – Murhaf Sousli Apr 16 '12 at 13:27

2 Answers


It should be noted that you won't ever achieve 100% LAN speed usage - hopefully you're not expecting that - there are too many factors involved.

In response to your comment as well: you can't reach the level the OS operates at when it transfers files, because you're a lot further away from the bare metal than Windows is. I believe file copying in Windows sits only a layer or two above the drivers themselves (possibly even within the filesystem driver) - in a WCF service you're a lot further away!

The simplest thing for you to do will be to package multiple files into archives and transmit them that way, then unpack the complete package into the target folder at the receiving end. Sure, some of those files might already be compressed and so won't benefit - but in general you should see a big improvement. For rock-solid compression that preserves directory structure, I'd consider using SharpZipLib.
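For example, a minimal sketch using SharpZipLib's FastZip helper (the paths here are just placeholders):

    // requires a reference to ICSharpCode.SharpZipLib
    using ICSharpCode.SharpZipLib.Zip;

    // sender: pack the whole folder (recursively) into one archive
    var zipper = new FastZip();
    zipper.CreateZip(@"C:\temp\outgoing.zip",  // archive to create
                     @"C:\games\MyGame",       // folder to pack
                     true,                     // recurse into sub-folders
                     null);                    // no file filter - take everything

    // ...transfer outgoing.zip as one large file...

    // receiver: unpack into the target folder, preserving directory structure
    new FastZip().ExtractZip(@"C:\temp\incoming.zip", @"D:\games\MyGame", null);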

A system that uses compression intelligently (probably medium-level: low CPU usage, but still effective on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault tolerance. With the OS, a transfer halted halfway through a file still leaves every file that already completed in place. If the transfer itself is compressed and gets interrupted, everything is lost and has to be started again.

    Beyond that, you can consider the following:

Get it working using compression by default before trying any enhancements. In some cases (depending on the size and number of files) you might be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, that might take too long, so you'll want to create a series of smaller zips, as in the sketch below.
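Here's one way the batching could look - the paths, the `PackInBatches` name and the threshold are arbitrary choices for illustration, not a definitive implementation:

    using System.Collections.Generic;
    using System.IO;
    using ICSharpCode.SharpZipLib.Zip;

    // packs 'folder' into a series of zips of roughly 'maxBatchBytes' each,
    // yielding the path of each finished archive so it can be sent off
    static IEnumerable<string> PackInBatches(string folder, string workDir, long maxBatchBytes)
    {
        ZipOutputStream zip = null;
        string current = null;
        int batchNo = 0;
        long batchBytes = 0;

        foreach (var file in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
        {
            if (zip == null || batchBytes >= maxBatchBytes)
            {
                if (zip != null) { zip.Close(); yield return current; }
                current = Path.Combine(workDir, "batch" + (batchNo++) + ".zip");
                zip = new ZipOutputStream(File.Create(current));
                zip.SetLevel(3);   // medium compression: modest CPU cost
                batchBytes = 0;
            }
            // store the path relative to the source folder (assumes no trailing slash)
            zip.PutNextEntry(new ZipEntry(ZipEntry.CleanName(file.Substring(folder.Length + 1))));
            using (var src = File.OpenRead(file)) src.CopyTo(zip);
            zip.CloseEntry();
            batchBytes += new FileInfo(file).Length;
        }
        if (zip != null) { zip.Close(); yield return current; }
    }

Usage would then be along the lines of `foreach (var batch in PackInBatches(srcFolder, tempDir, 50 * 1024 * 1024)) SendFile(batch);` - 50 MB being a number you'd tune.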

Write the compressed file to a temporary location on disk as it's being received; don't buffer the whole thing in memory. Delete the file once you've unpacked it into the target folder.
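A sketch of that receiving side, assuming `incoming` is whatever stream your transfer layer hands you:

    using System.IO;
    using ICSharpCode.SharpZipLib.Zip;

    static void ReceiveArchive(Stream incoming, string targetFolder)
    {
        string temp = Path.GetTempFileName();
        try
        {
            // stream straight to disk - never hold the whole archive in memory
            using (var file = File.Create(temp))
                incoming.CopyTo(file);

            new FastZip().ExtractZip(temp, targetFolder, null);
        }
        finally
        {
            File.Delete(temp);  // the temp copy is only needed while unpacking
        }
    }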

Consider adding the ability to mark certain file types as sendable 'naked', i.e. uncompressed. That way you can exclude .zip, .avi etc. files from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive - so perhaps give yourself a minimum size below which a file will still be packed into a compressed folder regardless of its type (or perhaps a file count to size-on-disk ratio for a folder itself, including sub-folders).
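One way such a rule might look - the extension list and the cut-off value are purely illustrative:

    using System;
    using System.Collections.Generic;
    using System.IO;

    static readonly HashSet<string> AlreadyCompressed =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
            { ".zip", ".rar", ".7z", ".avi", ".mkv", ".mp3", ".jpg" };

    // send 'naked' only when the type won't compress well AND the file is
    // big enough that the per-file transfer overhead is negligible
    static bool SendUncompressed(string path, long minNakedBytes)
    {
        return AlreadyCompressed.Contains(Path.GetExtension(path))
               && new FileInfo(path).Length >= minNakedBytes;  // e.g. 1 MB
    }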

    Beyond this advice you will need to play around to get the best results.

Andras Zoltan

Perhaps an easy solution would be to gather all the files together into one big stream (like zipping them, but just appending, to keep it fast) and send that single stream. This would give you more speed, but it will use up some CPU on both machines, and you'll need a good way to separate the files in the stream again.
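A sketch of that kind of framing - the [name][size][bytes] layout here is made up for illustration (and for simplicity it keeps only file names, not directory structure):

    using System;
    using System.IO;

    // writer: for each file emit its name, its length and then its raw bytes,
    // so the receiver knows where one file ends and the next begins
    static void AppendFiles(string[] files, Stream output)
    {
        var writer = new BinaryWriter(output);
        writer.Write(files.Length);                   // file count header
        foreach (var path in files)
        {
            writer.Write(Path.GetFileName(path));     // length-prefixed string
            writer.Write(new FileInfo(path).Length);  // payload size (long)
            writer.Flush();
            using (var src = File.OpenRead(path))
                src.CopyTo(output);
        }
        writer.Flush();
    }

    // reader: the mirror image, splitting the stream back into files
    static void SplitFiles(Stream input, string targetFolder)
    {
        var reader = new BinaryReader(input);
        int count = reader.ReadInt32();
        var buffer = new byte[81920];
        for (int i = 0; i < count; i++)
        {
            string name = reader.ReadString();
            long remaining = reader.ReadInt64();
            using (var dst = File.Create(Path.Combine(targetFolder, name)))
            {
                while (remaining > 0)
                {
                    int read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                    if (read <= 0) throw new EndOfStreamException();
                    dst.Write(buffer, 0, read);
                    remaining -= read;
                }
            }
        }
    }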

But using more ports would, from what I know, only be a disadvantage, since the separate streams would collide with each other and the overall speed would go down.

reggaemuffin
• Ummm, what about software like Download Accelerator and Internet Download Manager, which send up to 16 `GetFile` requests for a single file to make the download faster? I think that also uses more than one connection. – Murhaf Sousli Apr 16 '12 at 13:08
• That is a different thing: there it's not your bandwidth that's the problem, but the provider limiting each connection. So using more than one connection, each getting, let's say, 50 KB/s, makes about 1600 KB/s. To give you an example, it would be like copying 5 files at once to a hard drive: the drive writes a part of file one, then has to jump to the location of file 2 and write a part of that, and so on. Since Ethernet is a single stream, even if you use more than one thread on your CPU, – reggaemuffin Apr 16 '12 at 13:14
• it physically is one cable with only one bit sent at a time. So every bit wasted on announcing a new file, or announcing that a different thread is sending, is wasted bandwidth. – reggaemuffin Apr 16 '12 at 13:15
• I know exactly what it is, and I know it's different; my point is the thing they have in common: using more than one connection. I guess that means more ports. – Murhaf Sousli Apr 16 '12 at 13:16
• Yeah, using more than one port will work, but both ports communicate over the same LAN and so through the same cable. It's like two people speaking at the same time through one telephone: the guy at the other end first has to work out who is talking now, and those little delays make a huge difference. I don't know how else to explain it, but with one port you could use the whole bandwidth of the cable; that would work. And it is physically impossible to get faster with more ports used simultaneously, because together they are all subject to the same physical limitation of the cable. – reggaemuffin Apr 16 '12 at 13:21
• When sending a small file, it's sent very fast, and I repeatedly need to send the file info before the file bytes, so there's a wasted delay before the actual file bytes. That delay doesn't use the LAN bandwidth, so I could use it for another file being sent on another port; that was my point. But it's still not good to use more ports, as you said, so I think in the end I'll have to do it the way you guys are suggesting. – Murhaf Sousli Apr 16 '12 at 13:34