
I'm taking a course on Coursera that uses MiniZinc. In one of the assignments, I was spinning my wheels forever because my model was not performing well enough on a hidden test case. I finally solved it by changing the following kind of access in my model

from

constraint sum(neg1,neg2 in party where neg1 < neg2)(joint[neg1,neg2]) >= m;

to

constraint sum(i,j in 1..u where i < j)(joint[party[i],party[j]]) >= m;

I don't know what I'm missing, but why would these two perform any differently from each other? It seems like they should perform similarly, with the former maybe being slightly faster, but the performance difference was dramatic. I'm guessing there is some sort of optimization that the former misses out on? Or am I really missing something, and do those lines actually result in different behavior? My intention is to sum the joint strength of every pair of elements in party.

Misc. Details:

  • party is an array of enum vars
  • party's index set is 1..real_u
  • every element in party should be unique, except for a dummy value
  • the solver was Gecode
  • verification of my model was done on a Coursera server, so I don't know what optimization level their compiler used

edit: Since MiniZinc (mz) is a declarative language, I'm realizing that "array accesses" in mz don't necessarily have a direct analogue in an imperative language. However, to me, these two lines mean the same thing semantically. So I guess my question is more "Why are the above lines semantically different in mz?"

edit2: I had to change the example in question; I was toeing the line of violating Coursera's honor code.

Jack
  • I'm not sure if sharing the FlatZinc would help or not. I don't know how to read FZ, but I'll take a look and look into sharing it. This is in the course "Basic Modeling for Discrete Optimization" and it is assignment 2. I got a response from the professor; he gave a short answer which essentially said that the mz compiler takes advantage of an optimization in the second case but not the first. – Jack Nov 08 '19 at 10:19
  • I saw his answer. It would have been nice to get some insight into this *"more efficient form"*. – Patrick Trentin Nov 08 '19 at 10:27

1 Answer


The difference stems from the way in which the where-clause "a < b" is evaluated. When "a" and "b" are parameters, the compiler can already exclude the irrelevant parts of the sum during compilation. If "a" or "b" is a variable, then this usually cannot be decided at compile time, and the solver will receive a more complex constraint.

In this case the solver would have received a sum over an "array[int] of var opt int", meaning that some variables in the array might not actually be present. For most solvers this is rewritten into a sum where every variable is multiplied by a Boolean variable that is true iff the variable is present. You can see how this is less efficient than a normal sum without multiplications.
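As a rough illustration of the difference, here is a toy sketch (the enum, the data in `joint`, and the bound `m` are made up for this example, not taken from the course model):

```minizinc
% Toy data, purely for illustration.
int: u = 3;
int: m = 4;
enum NEG = {A, B, C};
array[1..u] of var NEG: party;
array[NEG, NEG] of int: joint =
  [| 0, 2, 3
   | 2, 0, 1
   | 3, 1, 0 |];

% Form 1: neg1 and neg2 are decision variables, so "neg1 < neg2" cannot
% be evaluated at compile time. Each term of the comprehension becomes a
% "var opt int"; conceptually the solver ends up with something like
%   sum over all pairs (x, y) drawn from party of
%     bool2int(x < y) * joint[x, y]
constraint sum(neg1, neg2 in party where neg1 < neg2)(joint[neg1, neg2]) >= m;

% Form 2: i and j are parameters, so the compiler enumerates only the
% pairs with i < j during flattening and emits a plain sum of element
% constraints, with no optionality and no extra multiplications.
% constraint sum(i, j in 1..u where i < j)(joint[party[i], party[j]]) >= m;
```

In the second form the where-clause disappears entirely at compile time; in the first it survives into the flattened model as extra reification machinery that the solver has to propagate.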

Dekker1