
Problem description:

I'm optimizing a rather complex algorithm which unfortunately relies heavily on the set and frozenset datatypes (because of the faster in operator). This means I get a different execution time every time I run the test, even for exactly the same input data. And since I badly need to optimize the algorithm, I'd like the execution time to be as constant as possible between runs.
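(For context, a quick sketch of why set is attractive here; the data and numbers are illustrative, not from my real code.)

```python
import timeit

# Illustrative only: membership in a list scans elements one by one,
# while a set uses a hash table, so `in` is roughly O(1).
data = list(range(10000))
as_set = set(data)

t_list = timeit.timeit(lambda: 9999 in data, number=1000)   # worst case: last element
t_set = timeit.timeit(lambda: 9999 in as_set, number=1000)  # hash lookup

assert t_set < t_list
```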

Demonstration

I made a simplified example, which hopefully demonstrates the problem:

import timeit

class A(object):
    def __init__(self, val):
        self.val = val

def run_test():
    result = []
    for i in xrange(100):
        a = {A(j) for j in xrange(100)}
        result.append(sorted(v.val for v in a))
    return result

N = 10
times = timeit.Timer(run_test).repeat(repeat=3, number=N)
print '%.6f s' % (min(times) / N,)

The core of the problem is the ordering of objects in sets - it depends (I think) on their position in memory, which is of course different on each run. Then, when sorting the values, sorted's execution speed differs each time. On my machine, execution times vary by about 10%. It's not the best demonstration, because my real code depends much more on set ordering and the time differences are much higher, but I hope you get the picture.
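A minimal illustration of why the order varies: a class that does not define __hash__ hashes (and compares) by identity in CPython, so the set's internal layout - and hence its iteration order - follows the objects' memory addresses:

```python
class A(object):
    def __init__(self, val):
        self.val = val

# Two instances with equal values are still distinct set elements,
# because the default __hash__/__eq__ are identity-based ...
s = {A(1), A(1)}
assert len(s) == 2

# ... so where each instance lands in the hash table (and in what
# order the set iterates) depends on where it was allocated in memory.
```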

What I tried:

  • sorting the sets in the algorithm - this gives a constant execution time, but also makes the whole algorithm ten times slower
  • using very large number and repeat parameters - unless I want to wait an hour after each change, this won't help

What I'd like to try:

I think that if I could somehow "reset" the Python interpreter to start with "clean memory", objects would end up at predictable memory positions and the time measurements would be constant. But I have no idea how to do something like this, short of creating a VM and restarting it before every test.

Not an issue:

  • I profile a lot, and I know which functions are slowest by now - I just need to make them faster - those are the functions whose speed I'm trying to measure.
  • I can't use anything other than set and frozenset for testing (like some ordered set), because it would be much slower and the measured speed would bear no relation to the production code
  • set and frozenset performance is not important here

Summary / Question:

  • my algorithm uses sets internally
  • I want to measure execution speed
  • the execution speed depends on the order in which the elements contained in the internal set are retrieved
  • the test I'm using has fixed input values
  • based on timeit measurements, I'm unable to measure the impact of any change I make
  • in the test above, the run_test function is a good example of my real problem

So I need some way to temporarily make sure all set elements will be created in the same memory positions, which will make the test execution speed and number of function calls (profiling) deterministic.
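(One workaround I can imagine - not tried yet, and the class name below is made up for illustration - is to sidestep memory positions entirely for the test build: give the class a value-based __hash__, so the set layout is determined by the values alone. This assumes the values are hashable and unique, as they are in run_test above; equal values would otherwise collapse into one element, unlike in production.)

```python
class ValueHashed(object):
    """Illustrative stand-in for A with a value-based hash."""
    def __init__(self, val):
        self.val = val
    def __hash__(self):
        return hash(self.val)          # depends on the value, not on id()
    def __eq__(self, other):
        return isinstance(other, ValueHashed) and self.val == other.val

# Building the same set twice now yields the same iteration order,
# run after run, because the hash table layout is determined by the
# values alone (int hashes are stable in CPython).
a = [v.val for v in {ValueHashed(j) for j in range(100)}]
b = [v.val for v in {ValueHashed(j) for j in range(100)}]
assert a == b
```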

Additional example

This example perhaps demonstrates my problem better:

import timeit
import time

class A(object):
    def __init__(self, val):
        self.val = val

    def get_value_from_very_long_computation(self):
        time.sleep(0.1)
        return self.val

def run_test():
    input_data = {A(j) for j in xrange(20)}

    for i, x in enumerate(input_data):
        value = x.get_value_from_very_long_computation()
        if value > 16:
            break
    print '%d iterations' % (i + 1,)

times = timeit.Timer(run_test).repeat(repeat=1, number=1)
print '%.9f s' % (min(times),)

Which returns, for example:

$ ./timing-example2.py 
4 iterations
0.400907993 s

$ ./timing-example2.py 
3 iterations
0.300778866 s

$ ./timing-example2.py 
8 iterations
0.801693201 s

(this will be, of course, different every time you run it and may or may not be completely different on another machine)

You can see that the execution speed is VERY different each time while the input data remains exactly the same. This is exactly the behaviour I see when I measure my algorithm speed.

Jan Spurny
  • What about write a wrap function for all the set operation and then profile? Then you can subtract the set operation time easily. Are those set operations actually your bottleneck? – Haochen Wu Apr 12 '16 at 19:26
  • @HaochenWu the problem is, that when using `sets`, I get different timing results each time, because the same elements put into the `set` will be "pulled out" in different order and my algorithm is **very** sensitive to order in which the elements are processed. So I'm unable to decide if some change in the algorithm made it faster or slower based on `timeit` results. The set operations are not a bottleneck, they are just a cause of "random" execution times. Which is definitely a feature, not a bug (for `sets` I mean). Or perhaps I didn't fully understand your suggestion.. – Jan Spurny Apr 12 '16 at 22:26
  • Just to clarify - the order of element processing changes execution time considerably, but it always returns the correct result - it's only that the "path" to the result takes different time for different element orderings. – Jan Spurny Apr 12 '16 at 22:28
  • I was suggesting write a wrapper function to perform all the in operation and then you can know exactly how much time was spending on them and subtracting this time from the total time will give you the running time of the rest of you code. – Haochen Wu Apr 13 '16 at 21:39
  • @HaochenWu I don't think that's possible - it's exactly like that example I've given - sorting [5 4 6 3 7 2 8 1] will always be slower than sorting [1 2 3 4 5 6 7 8] - the algorithm DEPENDS on "order of elements in the set" (i.e. order in which the set is iterated) - if I write a wrapper for set operations, it won't change the dependency on element retrieval order.. – Jan Spurny Apr 14 '16 at 15:35
  • @HaochenWu I've added an additional example which may better demonstrate my problem. However, it's still possible that I just do not understand what you're suggesting with the wrapper.. – Jan Spurny Apr 14 '16 at 16:09
  • What makes you think it is the `set` iteration order that causes the variability and not just variability on the machine you are executing the code on? Also, why do you need a `set`? In your code snippets you never do any `in` membership tests – Chris_Rands Jun 07 '17 at 14:43
  • @Chris_Rands - please read (or try) the additional example - I think it pretty clearly sums up why the variability is caused ONLY by set iteration order; as for why I need set - this is just a minimal working example - in real code I indeed do a lot of set operations. – Jan Spurny Jun 07 '17 at 15:45
  • @JanSpurny Maybe checkout some ordered set implementations https://stackoverflow.com/questions/1653970/does-python-have-an-ordered-set Some of them to implement O(1) lookups so should not be slow – Chris_Rands Jun 07 '17 at 15:52
  • @Chris_Rands is it faster or just as fast as `set`/`frozenset`? Because if not (and I suspect it isn't), I can't use it in production code. Also, some people pointed out that it doesn't have the same interface as `set`. – Jan Spurny Jun 07 '17 at 16:04
  • You'll have to try it to test the speed; or another idea, fix the PYTHONHASHSEED variable – Chris_Rands Jun 07 '17 at 16:12
  • @Chris_Rands if you make it into a good answer, I'll accept it.. – Jan Spurny Jun 08 '17 at 09:50
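
Regarding the PYTHONHASHSEED suggestion in the last comments: fixing the seed makes str/bytes hashing reproducible between runs, but note that it does not affect the identity-based default hash of instances like A above, so on its own it would not stabilize the examples in this question. A sketch of what it does do (the helper name is made up):

```python
import os
import subprocess
import sys

def str_hash(s, seed):
    # Hypothetical helper: run a fresh interpreter with a fixed hash
    # seed and report hash(s) from that process.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "import sys; print(hash(sys.argv[1]))", s],
        env=env)
    return int(out)

# With the same seed, string hashes agree across interpreter runs;
# with randomization enabled (the default since Python 3.3) they differ.
assert str_hash("spam", "0") == str_hash("spam", "0")
```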

0 Answers