10

Using Java NIO, you can copy files faster. I found mainly two kinds of methods on the internet to do this job.

public static void copyFile(File sourceFile, File destinationFile) throws IOException {
    if (!destinationFile.exists()) {
        destinationFile.createNewFile();
    }

    FileChannel source = null;
    FileChannel destination = null;
    try {
        source = new FileInputStream(sourceFile).getChannel();
        destination = new FileOutputStream(destinationFile).getChannel();
        destination.transferFrom(source, 0, source.size());
    } finally {
        if (source != null) {
            source.close();
        }
        if (destination != null) {
            destination.close();
        }
    }
}
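On Java 7 and later, the same copy can be written more safely with try-with-resources, which closes both channels even if an exception is thrown mid-transfer, and which loops because transferFrom may move fewer bytes than requested. A minimal sketch (the class name is illustrative):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NioCopy {
    public static void copyFile(Path source, Path target) throws IOException {
        try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(target,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {
            long size = in.size();
            long position = 0;
            // transferFrom may transfer fewer bytes than requested, so loop
            while (position < size) {
                position += out.transferFrom(in, position, size - position);
            }
        }
    }
}
```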

In 20 very useful Java code snippets for Java Developers I found a different comment and trick:

public static void fileCopy(File in, File out) throws IOException {
    FileChannel inChannel = new FileInputStream(in).getChannel();
    FileChannel outChannel = new FileOutputStream(out).getChannel();
    try {
        // inChannel.transferTo(0, inChannel.size(), outChannel); // original -- apparently has trouble copying large files on Windows
        // magic number for Windows, (64Mb - 32Kb)
        int maxCount = (64 * 1024 * 1024) - (32 * 1024);
        long size = inChannel.size();
        long position = 0;
        while (position < size) {
            position += inChannel.transferTo(position, maxCount, outChannel);
        }
    } finally {
        if (inChannel != null) {
            inChannel.close();
        }
        if (outChannel != null) {
            outChannel.close();
        }
    }
}

But I didn't find or understand the meaning of

"magic number for Windows, (64Mb - 32Kb)"

It says that inChannel.transferTo(0, inChannel.size(), outChannel) has a problem on Windows. Is 67,076,096 bytes (= (64 * 1024 * 1024) - (32 * 1024)) the optimum chunk size for this method?
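The arithmetic behind the magic number can be checked directly; note that it works out to roughly 64 MB per call, not 32768 bytes:

```java
public class MagicNumber {
    public static void main(String[] args) {
        // "magic number for Windows, (64Mb - 32Kb)" from the snippet above
        int maxCount = (64 * 1024 * 1024) - (32 * 1024);
        System.out.println(maxCount); // prints 67076096
    }
}
```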

Tapas Bose

3 Answers

12

Windows has a hard limit on the maximum transfer size, and if you exceed it you get a runtime exception. So you need to tune. The second version you give is superior because it doesn't assume the file was transferred completely with one transferTo() call, which agrees with the Javadoc.

Setting the transfer size more than about 1MB is pretty pointless anyway.

EDIT Your second version has a flaw. You should decrement size by the amount transferred each time. It should be more like:

while (size > 0) { // we still have bytes to transfer
    long count = inChannel.transferTo(position, size, outChannel);
    if (count > 0) {
        position += count; // advance position past the bytes just transferred
        size -= count;     // count bytes transferred, size bytes remaining
    }
}
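Putting that correction together with try-with-resources (as the comments below suggest for modern JDKs), a complete sketch of the fixed method might look like this:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class ChunkedCopy {
    public static void fileCopy(File in, File out) throws IOException {
        // try-with-resources closes both channels even on exception
        try (FileChannel inChannel = new FileInputStream(in).getChannel();
             FileChannel outChannel = new FileOutputStream(out).getChannel()) {
            long size = inChannel.size(); // bytes still to transfer
            long position = 0;
            while (size > 0) {
                // the OS may transfer less than requested; loop until done
                long count = inChannel.transferTo(position, size, outChannel);
                if (count > 0) {
                    position += count;
                    size -= count;
                }
            }
        }
    }
}
```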
Mohammad Faisal
user207421
  • Can you please elaborate on "Setting the transfer size more than about 1MB is pretty pointless anyway"? Does the transfer size have any relation to the transfer rate? What factors affect the file transfer (specifically in Java)? – Tapas Bose Sep 15 '11 at 05:40
  • @Tapas Bose You use a larger transfer buffer to reduce the syscall overhead of repeated calls; you also give the OS a reasonably large buffer so optimizations like readahead and scatter-gather can take advantage. With 1MB most of those optimizations kick in. – eckes Apr 24 '13 at 16:03
  • @TapasBose Setting the transfer size more than about 1MB is pretty pointless because there is no asymptotic benefit. What you're trying to achieve with larger transfer sizes is fewer context switches, and every time you double the transfer size you halve the context switch cost. Pretty soon it vanishes into the noise. – user207421 Jan 04 '14 at 09:08
  • 1
    Why should `maxCount` be decremented? – Dante WWWW Feb 11 '14 at 07:57
  • 1
    AFAIK, `maxCount` should be decremented only if it is given the value of the file size in the very beginning: `maxCount = filesize`, and then decrement it in each loop. – Dante WWWW Feb 25 '14 at 10:12
  • @coolcfan Why? What's the difference? What exactly is the reason not to decrement it in the case where it was set to `Integer.MAX_VALUE` or `Long.MAX_VALUE`? or anything else? What is the point in making a special case? – user207421 Dec 07 '17 at 09:58
  • 1
    From what I understand, `size` should **not** be decremented. `size` is the total number of bytes to be read, i.e. `inChannel.size()`. You want to keep reading `maxCount` bytes in a loop, all the while incrementing `position` until `position` reaches the end of the channel, i.e. `while (position < size)`. Can you explain why you want to **decrement** `size`? If you do that, then with every iteration, you are incrementing `position`, i.e. advancing the start of the byte range you are reading, and decrementing `size` means that you are bringing the end of the byte range closer and closer. – Ajoy Bhatia Jun 06 '18 at 22:44
  • @AjoyBhatia Because there are that many fewer bytes left to transfer? Your last sentence expresses the reason perfectly. – user207421 Jul 03 '18 at 20:19
  • 1
    Read the code again. `size` is not the number of bytes **left** to transfer. `size` is the end of the input. The value of `position` needs to keep advancing until the max position value, which is `size`. So, `size` is the maximum value that `position` can have, because then you are at the end of the file. It is the EOF position. Its value remains the same throughout. I am not sure if I have made it clear enough. What you are saying is - "keep advancing towards your goal, **and** keep moving your goal post nearer as well". That would be double-counting your progress. – Ajoy Bhatia Jul 05 '18 at 01:11
  • 1
    Oh, I see what you are saying - that the second argument to `FileChannel::transferTo` **is** the number of bytes to be transferred, so it **should** be decremented. However, in that case, there is still an error in your code. The `while`-loop condition should be `while (size > 0)` - which means while there are still bytes to be transferred. It is wrong to compare the starting point value `position` with the number of bytes still to be transferred. So `while (position < size)` is wrong, because as soon as the half-way point is crossed, `position > size`, so the `while`-loop will be exited. – Ajoy Bhatia Jul 05 '18 at 01:22
  • @AjoyBhatia You've got things messed up. You don't decrement size because in the original code it is NOT used as the amount to transfer per-loop, that's maxCount. 'size' stays fixed at the total amount to transfer and it is correct to only increment position until it reaches this amount. – swpalmer Oct 02 '18 at 14:41
  • try-with-resources should be used instead of the finally block. – swpalmer Oct 02 '18 at 14:42
  • @swpalmer - Yes, you're right. That's a clear, simple explanation of why `size` should not be decremented. – Ajoy Bhatia Oct 02 '18 at 16:48
  • And there was probably no try-with-resources on Sep 11, 2011 when @TapasBose posted the question. Yes, it would be cleaner to use that now. – Ajoy Bhatia Oct 02 '18 at 16:52
  • @AjoyBhatia try-with-resources was introduced with Java 7, which was released summer of 2011. So while it was available, it's true that it was very new when the question was posed. Still worth pointing out to anyone that might copy this code today. – swpalmer Oct 03 '18 at 17:52
  • @AjoyBhatia: you were right with `while (size > 0)` instead of `while (position < size)` which is wrong. – Mohammad Faisal Dec 08 '21 at 05:27
0

I have read that it is for compatibility with the Windows 2000 operating system.

Source: http://www.rgagnon.com/javadetails/java-0064.html

Quote: In win2000, the transferTo() does not transfer files > than 2^31-1 bytes. it throws an exception of "java.io.IOException: Insufficient system resources exist to complete the requested service is thrown." The workaround is to copy in a loop 64Mb each time until there is no more data.

Danny Rancher
  • No. Your link says the limitation applies to the entire Windows platform. Windows 2000 is only mentioned as a test platform. – user207421 Jan 04 '14 at 04:29
  • Well, I have tested transferTo with and without the loop on my Windows 8 platform and experienced only a time difference (indeed the loop was faster, but I cannot implement it in my projects without knowing why it was faster). However, the outcome of the tests was that both completed successfully with files over 2GB in size. I cannot find a source for your "Windows has a hard limit on the maximum transfer size" comment. Would you please provide one? – Danny Rancher Jan 06 '14 at 06:13
  • Your link says exactly that. Twice. – user207421 Jan 19 '15 at 12:08
-1

There appears to be anecdotal evidence that attempts to transfer more than 64MB at a time on certain Windows versions results in a slow copy. Hence the check: this appears to be the result of some detail of the underlying native code that implements the transferTo operation on Windows.

Femi
  • Anecdotal evidence where? Which Windows versions? Mere unsubstantiated rumour is not an answer. – user207421 Jan 04 '14 at 04:31
  • Ah, evidence. Take a look at http://bugs.sun.com/view_bug.do?bug_id=6822107: notice the 64MB number quoted. I'd guess that's the cleanest source for that specific magic value. – Femi Jan 04 '14 at 06:02
  • I read them years ago. There is nothing about 'certain Windows versions' or 'results in a slow copy' in that bug, or in either of the bugs linked from it. The actual number in the evaluation is 1.5GB. – user207421 Jan 04 '14 at 06:07