
I was testing the Towers of Hanoi problem in Java and ran the following code (I have removed the sysouts for convenience):

public class Util {

    public static void main(String[] args) {
        for (int i = 1; i <= 30; i++) {
            long startTime = System.currentTimeMillis();
            solveTowerOfHanoi(i, "A", "B", "C");
            System.out.println("Time taken for " + i + ": "
                    + (System.currentTimeMillis() - startTime));
        }
    }

    public static void solveTowerOfHanoi(int n, String src, String inter,
            String dest) {
        // Base case: no disks left to move.
        if (n == 0) {
            return;
        }
        // Move the top n-1 disks from src to inter (using dest as the
        // spare peg), move disk n from src to dest (its sysout was
        // removed), then move the n-1 disks from inter to dest.
        solveTowerOfHanoi(n - 1, src, dest, inter);
        solveTowerOfHanoi(n - 1, inter, src, dest);
    }
}

I extended the experiment to disk counts (indexes) from 1 to 35 (raising the loop bound accordingly), and I observed a very strange timing pattern. Here is the output of the program:

Time taken for 1: 0
Time taken for 2: 0
Time taken for 3: 0
Time taken for 4: 0
Time taken for 5: 0
Time taken for 6: 0
Time taken for 7: 0
Time taken for 8: 1
Time taken for 9: 0
Time taken for 10: 0
Time taken for 11: 0
Time taken for 12: 0
Time taken for 13: 0
Time taken for 14: 0
Time taken for 15: 0
Time taken for 16: 0
Time taken for 17: 0
Time taken for 18: 3
Time taken for 19: 2
Time taken for 20: 11
Time taken for 21: 10
Time taken for 22: 39
Time taken for 23: 37
Time taken for 24: 158
Time taken for 25: 147
Time taken for 26: 603
Time taken for 27: 579
Time taken for 28: 2414
Time taken for 29: 2304
Time taken for 30: 9509
Time taken for 31: 9408
Time taken for 32: 38566
Time taken for 33: 37531
Time taken for 34: 152255
Time taken for 35: 148704

Question 1: The algorithm has exponential growth (2^n - 1 moves), so why do I see no gradual time growth from 1 to 20, but then sudden jumps from 20 to 35?

Question 2: Another thing that amazes me even more is the near-equality of times in pairs. Starting from 19, the pairs (19, 20), (21, 22), (23, 24), (25, 26), etc. have comparable times. I cannot understand this: if the growth rate is indeed exponential, each index should take roughly twice as long as the previous one (since 2^(n+1) - 1 ≈ 2 · (2^n - 1)), so why do two consecutive indexes give almost comparable times and then a sudden jump at the next index?

Note: I ran this program 2-3 times and got comparable timings each time, so you can take this as a representative run.

EDIT
I tried System.nanoTime() and got the following results:

Time taken for 1: 62644
Time taken for 2: 3500
Time taken for 3: 3500
Time taken for 4: 4200
Time taken for 5: 6300
Time taken for 6: 7350
Time taken for 7: 11549
Time taken for 8: 19948
Time taken for 9: 47245
Time taken for 10: 73142
Time taken for 11: 87491
Time taken for 12: 40246
Time taken for 13: 39196
Time taken for 14: 156784
Time taken for 15: 249875
Time taken for 16: 593541
Time taken for 17: 577092
Time taken for 18: 2318166
Time taken for 19: 2305217
Time taken for 20: 9468995
Time taken for 21: 9082284
Time taken for 22: 37747543
Time taken for 23: 37230646
Time taken for 24: 150416580
Time taken for 25: 145795297
Time taken for 26: 603730414
Time taken for 27: 578825875
Time taken for 28: 2409932558
Time taken for 29: 2399318129
Time taken for 30: 9777009489

The output is broadly similar to the milliseconds, but it does make the picture clearer. It perhaps answers my Question 1, but Question 2 is still intriguing. And System.nanoTime() raised one more question:

Question 3: Why does index 1 take more time than the next indexes (2, 3, ...)?

Dhwaneet Bhatt

2 Answers


Answer 1: until you start playing with around 18 discs, your timing measurements are too coarse to show any useful information; the millisecond clock simply does not resolve runs that short. You would be ill-advised to conclude anything about the change in run time with respect to the number of disks in this range of values.
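
To see just how coarse that clock can be, here is a minimal sketch (my own illustration, not part of OP's program) that estimates the granularity of System.currentTimeMillis() by spinning until the reported value changes:

public class ClockGranularity {

    public static void main(String[] args) {
        // Spin until currentTimeMillis() ticks over and report the size
        // of each tick; on many systems the clock advances in steps of
        // 1, 10 or even 15-16 ms, so sub-millisecond work reads as 0.
        for (int sample = 0; sample < 5; sample++) {
            long t0 = System.currentTimeMillis();
            long t1;
            do {
                t1 = System.currentTimeMillis();
            } while (t1 == t0);
            System.out.println("Clock tick: " + (t1 - t0) + " ms");
        }
    }
}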

Answer 2: since the Tower of Hanoi problem has been much studied and its time complexity is well known, you are unlikely to have hit upon an implementation with a better time complexity. We might therefore conclude that there is something particular to your implementation, or to your system, which causes the approximate equality of execution times in successive runs of your program.

I can't immediately see anything wrong with your Java, so I'll jump to the conclusion that there's something odd about the timings returned by your system.

EDIT

The nano-timings show a similar structure to the milli-timings. I note, though, that whereas the milli-timings for 27 disks and 28 disks were similar and followed by a big jump to 29 disks, in the nano-timings there is a large jump from 27 disks to 28, which is then similar to the timing for 29 disks.

As to the new Question 3: I suggest that the OP 'spins up' the JIT and JVM by running the main loop twice, discarding the results of the first run. I think I'd also capture the end time outside the call to System.out.println, something like:

long startTime = System.currentTimeMillis();
solveTowerOfHanoi(i, "A", "B", "C");
long endTime = System.currentTimeMillis();
System.out.println("Time taken for " + i + ": " + (endTime - startTime));

EDIT 2

In response to OP's comment ...

One of the generic explanations for 1 disk taking longer to solve than 2 disks is that your timing includes 'startup stuff' happening behind the scenes -- if I remember my Java education rightly (I probably don't), the Java compiler creates bytecode and the run-time system translates bytecode into machine code, and you may (Java gurus, feel free to pour scorn here) be timing some of that second translation. As to the JIT, the general approach is that the just-in-time compiler only kicks in if its dynamic analysis of the code suggests that it should; it's possible that it only optimises your code on the second time round.
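
As a quick, illustrative check (a sketch only, not a definitive benchmark), timing the same problem size several times in a row should show the first repetition running slower than the later ones if startup and JIT effects are what is being captured:

public class JitWarmupDemo {

    public static void main(String[] args) {
        // Time the same 20-disk problem repeatedly; if class loading,
        // interpretation and JIT compilation are included in the first
        // measurement, rep 1 should be noticeably slower than the rest.
        for (int rep = 1; rep <= 5; rep++) {
            long start = System.nanoTime();
            solveTowerOfHanoi(20, "A", "B", "C");
            System.out.println("Rep " + rep + ": "
                    + (System.nanoTime() - start) + " ns");
        }
    }

    public static void solveTowerOfHanoi(int n, String src, String inter,
            String dest) {
        if (n == 0) {
            return;
        }
        solveTowerOfHanoi(n - 1, src, dest, inter);
        solveTowerOfHanoi(n - 1, inter, src, dest);
    }
}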

EDIT 3

OK, I took the bait and timed two versions of the code, the first (column 2 below) exactly as presented in the original question, the second (column 3 below) using nanoTime instead of currentTimeMillis. I conclude that:

  1. As I originally asserted, there's something fishy about OP's system; the timings I show in column 2 fit expectations very closely, so this isn't a generic Java problem.
  2. There's something seriously wrong with the implementation of nanoTime on my machine, or perhaps with my understanding of it. This adds weight to the thinking behind my original assertion: it is easy to be misled by computer clocks, and the results they provide cannot always be taken at face value.

I also tested spinning up the JIT/JVM before starting to time things; in the end it had little impact, so I haven't included that data here.

| Discs | milliS | nanoS      |
|    1: |      0 |       7000 |
|    2: |      0 |       4000 |
|    3: |      0 |       6000 |
|    4: |      0 |      11000 |
|    5: |      0 |      22000 |
|    6: |      0 |      43000 |
|    7: |      0 |      84000 |
|    8: |      0 |     172000 |
|    9: |      1 |     331000 |
|   10: |      0 |     663000 |
|   11: |      2 |    1334000 |
|   12: |      2 |    2632000 |
|   13: |      6 |    5265000 |
|   14: |     11 |   10476000 |
|   15: |     21 |   22034000 |
|   16: |     42 |   43407000 |
|   17: |     85 |   89683000 |
|   18: |    169 |  171209000 |
|   19: |    337 | -655065000 |
|   20: |    673 |  688203000 |
|   21: |   1345 | -814465000 |
|   22: |   2708 |  707742000 |
|   23: |   5531 | -533716000 |
|   24: |  10918 | -140615000 |
|   25: |  21542 |  628928000 |
|   26: |  42889 |   97892000 |
|   27: |  85698 |  325197000 |
|   28: | 172370 |  -80280000 |
|   29: | 345650 |  110795000 |
|   30: | 685006 |  453388000 |

Note:

javac -v => Eclipse Java Compiler v_677_R32x, 3.2.1 release, Copyright IBM Corp 2000, 2006. All rights reserved.

java --version => java version "1.4.2"
gij (GNU libgcj) version 4.1.2 20071124 (Red Hat 4.1.2-42)
High Performance Mark
  • *there's something odd about the timings returned by your system* - I think if you run the code on your system you will see a similar time pattern? I am looking for a JVM/RAM-specific answer. – Dhwaneet Bhatt Jun 26 '12 at 09:56
  • *I suggest that OP 'spins up' the JIT and JVM by running the main loop twice, discarding the results of the first run* - can you explain further? – Dhwaneet Bhatt Jun 26 '12 at 10:21
  • I will try to decompile this code for every single iteration separately and see how the optimization handles it. Nice info, thanks. – Dhwaneet Bhatt Jun 26 '12 at 11:11
  • Strange. You are not getting the pairwise-equal timings that I was getting in my run. Really, the system time is entirely unpredictable, nanoS particularly. Thanks for the tests. I have concluded one thing - do not try to measure asymptotic complexity by reading the system time. – Dhwaneet Bhatt Jun 26 '12 at 14:15

When we talk about the complexity analysis of a program, we mean studying the complexity asymptotically. As far as your timing data is concerned, from which you concluded exponential growth and other interesting features like the pairwise equality, I can simply say that when you have some data and you stare at it for long enough, you will find interesting patterns :)

So any conclusion drawn from your timing data about the theoretical complexity of the Tower of Hanoi problem would be rubbish.

However, if you are interested in why the code in practice shows the observed exponential growth, it may be attributable to the amount of RAM that Java is using.
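
If you do want to eyeball the growth rate from timings, a more robust check than staring at absolute clock values is the ratio of consecutive timings: for a Θ(2^n) algorithm each run should take roughly twice as long as the previous one, once the runs are long enough for the clock to resolve. A minimal sketch of that idea (my own illustration, reusing solveTowerOfHanoi from the question):

public class RatioCheck {

    public static void main(String[] args) {
        long previous = 0;
        for (int i = 20; i <= 30; i++) {
            long start = System.nanoTime();
            solveTowerOfHanoi(i, "A", "B", "C");
            long elapsed = System.nanoTime() - start;
            if (previous > 0) {
                // For a Theta(2^n) algorithm this ratio should settle
                // near 2.0 as i grows.
                System.out.println("n = " + i + ", ratio = "
                        + ((double) elapsed / previous));
            }
            previous = elapsed;
        }
    }

    public static void solveTowerOfHanoi(int n, String src, String inter,
            String dest) {
        if (n == 0) {
            return;
        }
        solveTowerOfHanoi(n - 1, src, dest, inter);
        solveTowerOfHanoi(n - 1, inter, src, dest);
    }
}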

damned
  • I have 4 GB of RAM (32-bit), and while running the program it was 51% utilized (I use Windows 7). Can you conclude anything from this? – Dhwaneet Bhatt Jun 26 '12 at 09:58