
I have enhanced the project job scheduling example from the OptaPlanner examples with these features:

  1. Priority-based projects: execution starts with the higher-priority projects
  2. Break time feature: added another shadow variable named breakTime, which is calculated whenever an allocation overlaps with a break time (e.g. a holiday)
  3. Changed the value range provider for delay to 15000 (a minimal sketch of this change is shown right after this list)
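
For reference, this is roughly the shape of the delay value range change (a minimal sketch based on the example's Allocation planning entity; the exact names in my code may differ):

import org.optaplanner.core.api.domain.valuerange.CountableValueRange;
import org.optaplanner.core.api.domain.valuerange.ValueRangeFactory;
import org.optaplanner.core.api.domain.valuerange.ValueRangeProvider;

public class Allocation {

    // ... planning variables (executionMode, delay) and shadow variables omitted ...

    // Delay value range, raised from the example's 0..500 to 0..15000 (in minutes).
    @ValueRangeProvider(id = "delayRange")
    public CountableValueRange<Integer> getDelayRange() {
        return ValueRangeFactory.createIntValueRange(0, 15000);
    }
}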

After that I ran the benchmarker and got the same result for LA 500, LA 1000 and LA 2000: the report states that all of them are the favorite. Is this a valid benchmark result? Could someone please help me analyze it? I have attached my benchmark report. Thanks.

[Attached benchmark report charts: Best Score Summary (level 1), Best Score Summary (level 2), Time Spent Summary, Average Calculation Count Summary]

the.wizard
1 Answer


It's possible that they give the same results if the score function is very flat. It could be that one constraint type dwarfs all the others, making the score flat.

Does the problemStatisticType BEST_SCORE graph look exactly the same too? That would be very unlikely.

1) Enable the following statistic (new in 6.2.CR1) and run your benchmarks again:

<singleStatisticType>CONSTRAINT_MATCH_TOTAL_BEST_SCORE</singleStatisticType>

That one will tell you which constraint types affect the best score (see the docs section "14.6.2. Constraint match total best score over time statistic"). If one constraint type dwarfs the others, this will make it visible.

2) Examine the actual solution after running the solver (the benchmarker won't write the best solution by default, so either configure that or just run a solver directly). Check which constraints cause the score you got. A minimal sketch of that is shown below.
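
Roughly along these lines (a sketch only, assuming the OptaPlanner 6.2 API and the example's Schedule solution class; the solver config path and the loadProblem() helper are placeholders to replace with your own, and getConstraintMatchTotals() only returns data if the score calculator supports constraint matches):

import java.util.Collection;

import org.optaplanner.core.api.score.constraint.ConstraintMatchTotal;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.impl.score.director.ScoreDirector;
import org.optaplanner.examples.projectjobscheduling.domain.Schedule;

public class ConstraintMatchInspection {

    public static void main(String[] args) {
        // Placeholder path: point this at your own solver config.
        SolverFactory solverFactory = SolverFactory.createFromXmlResource(
                "org/optaplanner/examples/projectjobscheduling/solver/projectJobSchedulingSolverConfig.xml");
        Solver solver = solverFactory.buildSolver();

        Schedule problem = loadProblem();
        solver.solve(problem);
        Schedule bestSolution = (Schedule) solver.getBestSolution();

        // Re-score the best solution and list which constraint types contribute to its score.
        ScoreDirector scoreDirector = solver.getScoreDirectorFactory().buildScoreDirector();
        scoreDirector.setWorkingSolution(bestSolution);
        scoreDirector.calculateScore();
        if (scoreDirector.isConstraintMatchEnabled()) {
            Collection<ConstraintMatchTotal> totals = scoreDirector.getConstraintMatchTotals();
            for (ConstraintMatchTotal total : totals) {
                System.out.println(total);
            }
        } else {
            System.out.println("Constraint matches are not supported by this score calculator.");
        }
    }

    private static Schedule loadProblem() {
        // Placeholder: load or import your own dataset here.
        throw new UnsupportedOperationException("Load your dataset here");
    }
}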

Geoffrey De Smet
  • I just changed the constructionHeuristicType to FIRST_FIT_DECREASING and added a new class, AllocationDifficultyComparator. Why could that affect this planning solution so much? Is FIRST_FIT_DECREASING not a good match for Late Acceptance & Entity Tabu? Oh, and one more thing: in my planning problem domain the time measurement is in minutes, not days. Is that bad? It gives a long duration to each job. – the.wizard Nov 26 '14 at 07:55
  • I tried using your example and enhanced it with my requirements. When I run it against your example data (A-10), it still solves with a good result (an average calculation count per second of 35945), but with my data it doesn't even reach a feasible solution (the hard score is negative). I think this is because the durations in my data are too big, around 240 - 1200. What do you think, Geoffrey? – the.wizard Nov 26 '14 at 08:13
  • I think the root of my planning problem is the int ValueRangeFactory. In the examples it is set to 500, while my planning problem probably has delays of more than 500 minutes. When I lower the durations of the execution modes, the performance is similar to the examples. Is there a better way than using an int ValueRangeFactory? I tried a BigDecimal ValueRangeFactory, and it still gives horrible performance on a large dataset. – the.wizard Nov 27 '14 at 07:41
  • BigDecimals are always slower than ints. Stick with ints. A couple of things you can try: if the delay is represented in seconds, but a granularity of minutes (or per 10 seconds) is enough, then change the code so it works at that granularity. That massively reduces the search space (see the docs chapter "Calculating the size of the search space"). A sketch of that granularity change is shown after these comments. – Geoffrey De Smet Nov 27 '14 at 08:29
  • I am also working on nearbySelection for IntValueRanges, but that won't make 6.2. – Geoffrey De Smet Nov 27 '14 at 08:29
  • "calculating the size of the search space" - which chapter is this? I try to find it, but can't find it. how to change the code to works granularity? Oh by the way, I leave some message for you in the irc channel, please visit it, I have tried 2 alternative ways, but still not really good. – the.wizard Nov 27 '14 at 10:08