
In contest programming, I've often been advised to allocate more space for multi-dimensional data types, i.e. arrays or arrays of arrays, than the limits of the task actually require.

E.g. for the simple DP task of computing the maximum sum along a top-to-bottom path in a triangle of integer values, with a maximum height of 100 (see example below), I was advised to allocate space for 110 rows instead.

The reason I've been given: Just in case it requires more space, it won't crash.

However, I don't see the logic behind this. If a program attempts to use more of this array's space, then, at least in my eyes, it must contain a bug. In that case it would make more sense not to allocate the extra space, so that the bug gets noticed instead of the program being given room to do whatever it isn't supposed to do.

So I hope that someone can give me an explanation, saying why it's done and in which cases this is actually useful.


Example for above task (bottom right corner):

Without additional allocation:                                With additional allocation:

1 0 0 0                                  1 0 0 0 0 0
2 3 0 0                                  2 3 0 0 0 0
4 5 6 0                                  4 5 6 0 0 0
7 8 9 5                                  7 8 9 5 0 0
                                         0 0 0 0 0 0
                                         0 0 0 0 0 0
                                         0 0 0 0 0 0

In this example, the path with the max. sum would be right-right-left with a sum of 1+3+6+9 = 19

An example C++ implementation to solve the problem (works perfectly without additional allocation):

#include <iostream>
#include <vector>

using namespace std;

vector<vector<int>> p(100, vector<int>(100));      // triangle values
vector<vector<int>> dp(100, vector<int>(100, -1)); // memo table, -1 = not yet computed
int n = 0;                                         // actual triangle height, read in main

// Maximum path sum from cell (r, c) down to the bottom row, memoized in dp.
int maxsum(int r, int c) {
    if (r == n-1) {
        dp[r][c] = p[r][c];                        // base case: bottom row
    } else {
        if (dp[r][c] == -1) {
            dp[r][c] = max(maxsum(r+1, c), maxsum(r+1, c+1)) + p[r][c];
        }
    }
    return dp[r][c];
}

int main() {
    cin >> n;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= i; j++) {
            cin >> p[i][j];
        }
    }
    cout << maxsum(0, 0) << "\n";
    return 0;
}
s3lph
  • Just wondering could you give the link where it says so? – WannaBeCoder Apr 06 '15 at 11:54
  • By allocating more "room" than you need, you are not protecting the software from bugs. You are making your life easier: if on a later day you need some extra rows/columns, you won't need to redefine the array. – Arthur Samarcos Apr 06 '15 at 11:59
  • @WannaBeCoder "In contest programming, I've often been recommended"... verbally. – s3lph Apr 06 '15 at 15:54

2 Answers


The answer you were given is perfectly legitimate.

My short answer: It's just a future-proofing technique. Better safe than sorry. Besides, it's not like storage is an issue nowadays. This may have been more of an issue back in the days of floppy disks.

The way I see it, if I can make my code more future-ready and less likely to crash at the cost of allowing it to waste a couple of kilobytes on rows and columns of zeros, then so be it. It's a small price to pay for contingency. :)

Tim

You are right, if you follow the Fail Fast development strategy.

Dragonborn