
Hello StackOverflow community!

I have had this question in my mind for many days and have finally decided to get it sorted out. Given an algorithm, or say a function that implements some non-standard algorithm in your daily coding activity, how do you go about analyzing its run-time complexity?

OK, let me be more specific. Suppose you are solving this problem:

Given an NxN matrix consisting of positive integers, find the length of the longest increasing sequence in it. You may only traverse up, down, left or right, but not diagonally.

E.g. if the matrix is

  [ [9,9,4],
    [6,6,8],
    [2,1,1] ]

the algorithm must return 4

(The sequence being 1->2->6->9)

So yeah, it looks like I have to use DFS. I get that part. I took an Algorithms course back in uni and can work my way around such questions. So I come up with this solution, say:

class Solution
{   
    // DFS: length of the longest strictly increasing path starting at cell (i, j)
    public int longestIncreasingPathStarting(int[][] matrix, int i, int j)
    {
        int localMax = 1;
        int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};

        for (int[] offset: offsets)
        {
            int x = i + offset[0];
            int y = j + offset[1];
            // skip neighbours that are out of bounds or not strictly larger
            if (x < 0 || x >= matrix.length || y < 0 || y >= matrix[i].length || matrix[x][y] <= matrix[i][j])
                continue;
            localMax = Math.max(localMax, 1+longestIncreasingPathStarting(matrix, x, y));
        }
        return localMax;
    } 

    // try every cell as a starting point and keep the best path length
    public int longestIncreasingPath(int[][] matrix)
    {
        if (matrix.length == 0)
            return 0;
        int maxLen = 0;

        for (int i = 0; i < matrix.length; ++i)
        {
            for (int j = 0; j < matrix[i].length; ++j)
            {
                maxLen = Math.max(maxLen, longestIncreasingPathStarting(matrix, i, j));
            }
        }
        return maxLen;
    }
}

Inefficient, I know, but I wrote it this way on purpose! Anyway, my question is: how do you go about analyzing the run time of the longestIncreasingPath(matrix) function?

I can understand the analysis they teach in an Algorithms course (the standard MergeSort, QuickSort analysis, etc.), but unfortunately, and I hate to say this, it did not prepare me to apply it in my day-to-day coding job. I want to do that now, and would like to start by analyzing functions like this one.

Can someone help me out here and describe the steps one would take to analyze the runtime of the above function? That would greatly help me. Thanks in advance, Cheers!


1 Answer

For day-to-day work, eyeballing things usually works well. In this case the recursion tries to go in every direction, so a really bad example comes to mind: [[1,2,3], [2,3,4], [3,4,5]], where you have two increasing options from most cells. I happen to know that this will take O((2*N)! / (N!*N!)) steps, but O(2^N) would be another good guess. Once you have an example where you know, or can more easily compute, the complexity, the overall complexity has to be at least that.
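
If you want to sanity-check that eyeball estimate empirically, one option (a sketch of mine, not part of the original answer; the CallCounter class is hypothetical) is to re-implement the same recursion with a call counter and run it on a worst-case matrix such as matrix[i][j] = i + j, where almost every cell can continue both right and down:

class CallCounter
{
    static long calls = 0;

    // same recursion as longestIncreasingPathStarting above, plus a counter
    static int dfs(int[][] m, int i, int j)
    {
        calls++;
        int best = 1;
        int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};
        for (int[] off : offsets)
        {
            int x = i + off[0];
            int y = j + off[1];
            if (x < 0 || x >= m.length || y < 0 || y >= m[i].length || m[x][y] <= m[i][j])
                continue;
            best = Math.max(best, 1 + dfs(m, x, y));
        }
        return best;
    }

    public static void main(String[] args)
    {
        for (int n = 2; n <= 13; n++)   // the largest sizes already take a while
        {
            int[][] m = new int[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    m[i][j] = i + j;    // strictly increasing to the right and down

            calls = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    dfs(m, i, j);       // mirrors the outer loops in longestIncreasingPath
            System.out.println("N = " + n + ", recursive calls = " + calls);
        }
    }
}

The call count should grow by roughly a factor of three to four every time N increases by one, which is exactly the kind of exponential behaviour described above.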

Usually it doesn't really matter which one it is exactly, since for both O(N!) and O(2^N) the run time grows very fast and the algorithm will only finish quickly for N up to around 10-20, maybe a bit more if you are willing to wait. You would not run this algorithm for N ~= 1000; you would need something polynomial. So a rough estimate that you have an exponential solution is enough to make a decision.
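
For reference, here is a rough sketch (mine, not from the original answer) of what "something polynomial" could look like for this particular problem: caching the answer for each cell means every cell is fully expanded at most once, which brings the whole thing down to O(N^2) for an NxN matrix.

class MemoSolution
{
    // assumes a rectangular matrix; memo[i][j] == 0 means "not computed yet",
    // which is safe because every real answer is at least 1
    public int longestIncreasingPath(int[][] matrix)
    {
        if (matrix.length == 0)
            return 0;
        int[][] memo = new int[matrix.length][matrix[0].length];
        int maxLen = 0;
        for (int i = 0; i < matrix.length; ++i)
            for (int j = 0; j < matrix[i].length; ++j)
                maxLen = Math.max(maxLen, dfs(matrix, i, j, memo));
        return maxLen;
    }

    private int dfs(int[][] matrix, int i, int j, int[][] memo)
    {
        if (memo[i][j] != 0)
            return memo[i][j];
        int localMax = 1;
        int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};
        for (int[] offset : offsets)
        {
            int x = i + offset[0];
            int y = j + offset[1];
            if (x < 0 || x >= matrix.length || y < 0 || y >= matrix[i].length || matrix[x][y] <= matrix[i][j])
                continue;
            localMax = Math.max(localMax, 1 + dfs(matrix, x, y, memo));
        }
        memo[i][j] = localMax;
        return localMax;
    }
}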

So, in general, to get an idea of the complexity, try to relate your solution to other algorithms whose complexity you already know, or figure out a worst-case scenario for the algorithm where the complexity is easier to judge. Even if you are slightly off, it might still help you make a decision.
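
To make that a bit more concrete for the code in the question, one way (my own formalisation, not part of the answer) is to write down the recurrence the recursion satisfies and compare the two worst-case guesses:

% cost of one call of longestIncreasingPathStarting at cell (i, j):
% constant work plus one recursive call per strictly larger neighbour
\[
T(i,j) \;=\; \Theta(1) \;+ \sum_{\substack{(x,y)\ \text{adjacent to}\ (i,j) \\ \mathrm{matrix}[x][y] \,>\, \mathrm{matrix}[i][j]}} T(x,y)
\]
% on the strictly increasing worst case most cells have two larger neighbours,
% so the number of calls grows exponentially in N; Stirling's approximation
% shows the two estimates quoted above agree up to the base of the exponent:
\[
\binom{2N}{N} \;=\; \frac{(2N)!}{N!\,N!} \;\sim\; \frac{4^{N}}{\sqrt{\pi N}}
\]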

If you need to compare algorithms of more similar complexity (e.g. O(N log N) vs O(N^2) for N ~= 100), you should implement both and benchmark them, since the constant factor might be the leading contributor to the run time.
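
If you do end up at that point, even something as crude as the following one-shot timing (a naive sketch of mine; for serious JVM measurements you would want a proper harness such as JMH that handles warm-up and JIT effects) gives a first idea of the constant factors, here comparing a hand-written O(N^2) selection sort against the library's O(N log N) Arrays.sort:

import java.util.Arrays;
import java.util.Random;

class QuickBenchmark
{
    // O(N^2) candidate
    static void selectionSort(int[] a)
    {
        for (int i = 0; i < a.length; i++)
        {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min])
                    min = j;
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
    }

    public static void main(String[] args)
    {
        int n = 100;
        int[] data = new Random(42).ints(n).toArray();

        long t0 = System.nanoTime();
        selectionSort(Arrays.copyOf(data, n));   // O(N^2)
        long t1 = System.nanoTime();
        Arrays.sort(Arrays.copyOf(data, n));     // O(N log N)
        long t2 = System.nanoTime();

        System.out.println("selection sort: " + (t1 - t0) + " ns");
        System.out.println("Arrays.sort:    " + (t2 - t1) + " ns");
    }
}

A single measurement like this is noisy, but it is usually enough to tell whether the asymptotically better option actually wins at the input sizes you care about.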
