
I am running simple simulations with Python, trying to learn the inner workings of AI. I've built up a very simple framework for drawing visual representations of what I'm working with (to give the ANN a "playground" to survive in).

GUIObj = []       #Module-level registry; defined elsewhere in the framework
SRConst = (1, 1)  #Screen-ratio scaling constant; defined elsewhere

class GUI: #Parent class for all displayed object classes
    def __init__(self, x, y, width, height, xB, yB, image, isImage, color):
        GUIObj.append(self)
        #Appends self to a list that is iterated through in the main loop,
        #where each iterated element has its .display() method run
        #(sketched below).

        self.coord = (x, y) #SUBJECT MATTER
        self.size = (width, height) #SUBJECT MATTER

    def move(self, direction, distance=1):
        #SUBJECT MATTER
        self.coord = ((self.coord[0] + direction[0] * distance) * SRConst[0],
                      (self.coord[1] + direction[1] * distance) * SRConst[1])
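
For context, here is a minimal sketch of the main loop the comment in __init__ refers to. Only `GUIObj` and the `.display()` method come from the question; the loop itself is an assumption about how the framework is driven:

    while True:                 #Simplified main loop
        for obj in GUIObj:      #Registry populated in GUI.__init__
            obj.display()       #Each object draws itself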

Early on in my learning of Python, it was my understanding that tuples are calculated much faster than lists. And as you can see, I use tuples for the coordinates of my objects.

However, I am now wondering whether tuples are a good choice when the values of these variables change quite frequently (with the "entity body" coordinates changing a lot, as you can imagine). And so I ask, simply: is this faster, or should I be using mutable lists?

InKryption

  • Why don't you test it out with `timeit`? – roganjosh Mar 04 '20 at 07:30
  • @roganjosh It's not a bad idea, but my program is so simple right now that I don't think it would show a very meaningful difference; I'd rather know now than later, when I might have built something on a bad foundation. I'll give it a try though (see the sketch after these comments). – InKryption Mar 04 '20 at 07:36
  • 2
    The best way to answer your question is to profile your code, but "it was my understanding that tuples are calculated much faster than lists" not really. Tuples offer some marginal benefit, particularly in memory, but it isn't huge for speed. Certainly not "much faster". They are practically the same. If speed is your concern, you should use something like Cython. Or just write it in C/C++/Java – juanpa.arrivillaga Mar 04 '20 at 07:45
  • @juanpa.arrivillaga Indeed, I should probably be using something else than Python for speed. Just wanted to know if there was any way to possibly improve speed with what I have - or to not worsen it, haha. I've been trying to make myself learn C++. And it's going alright, but it's a slog compared to learning Python. – InKryption Mar 04 '20 at 08:01
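
Following the `timeit` suggestion above, here is a minimal sketch of how the move pattern itself could be timed. `SRConst = (1, 1)` is a stand-in value, and the two statements mirror a tuple-rebuilding move versus an in-place list mutation:

    from timeit import timeit

    common = "direction = (1, 0); distance = 1; SRConst = (1, 1); "

    # Rebuild the coordinate tuple on every move (the current approach)
    t_tuple = timeit(
        "c = ((c[0] + direction[0] * distance) * SRConst[0],"
        " (c[1] + direction[1] * distance) * SRConst[1])",
        setup=common + "c = (0, 0)")

    # Mutate a coordinate list in place
    t_list = timeit(
        "c[0] = (c[0] + direction[0] * distance) * SRConst[0];"
        " c[1] = (c[1] + direction[1] * distance) * SRConst[1]",
        setup=common + "c = [0, 0]")

    print(t_tuple, t_list)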

1 Answer


Here are timings for several common scenarios:

>>> from timeit import timeit
>>> # rebuild a tuple each time
>>> timeit("n=(n[0]+1,n[1]+2,n[2]+3)", setup="n=(1,2,3)")
0.17996198200125946
>>> # augmented assignment on a list, in place
>>> timeit("n[0]+=1;n[1]+=2;n[2]+=3", setup="n=[1,2,3]")
0.23098498799663503
>>> # rebuild a list each time
>>> timeit("n=[n[0]+1,n[1]+2,n[2]+3]", setup="n=[1,2,3]")
0.19799970900203334
>>> # plain assignment back into a list, in place
>>> timeit("n[0]=n[0]+1;n[1]=n[1]+2;n[2]=n[2]+3", setup="n=[1,2,3]")
0.21857999200437916

It's all pretty much a wash. If you find yourself performing an operation on a bunch of ints or floats, you may find that pandas, which holds data in C structures, is the way to go.

tdelaney

  • `numpy` instead of `pandas` for raw speed, unless you really want labeled axes. And it even lets you define struct dtypes, which could be useful here. For example: https://stackoverflow.com/questions/51891518/how-to-create-a-numpy-array-to-describe-the-vertices-of-a-triangle/51891937#51891937. If your operations on your "points" could be expressed as whole-array operations (on large arrays), then the performance gains could be quite dramatic. – juanpa.arrivillaga Mar 04 '20 at 08:07
  • @juanpa.arrivillaga - I was debating whether to throw `numpy` in there, but yes, you are absolutely right. Coordinate transformations are in `numpy`'s wheelhouse (sketched below). – tdelaney Mar 04 '20 at 08:21
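
To illustrate the whole-array approach from the comments, a minimal sketch; the array shapes, names, and values here are assumptions, not part of the original framework:

    import numpy as np

    SRConst = np.array([1.0, 1.0])    #Stand-in scaling constant
    coords = np.zeros((1000, 2))      #One (x, y) row per entity
    direction = np.array([1.0, 0.0])
    distance = 1.0

    # Move every entity at once; broadcasting applies the scaling per axis.
    coords = (coords + direction * distance) * SRConst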