
I have an MXNet multilayer perceptron (MLP) wrapped in a class MyModel. I first load the trained weights from a file, then perform prediction with the MLP like this:

class MyModel:
    ...
    def predict(self, X):
        data_iterator = mx.io.NDArrayIter(
            data=X,
            batch_size=self.model.data_shapes[0].shape[0],
            shuffle=False,
        )
        predictions_npa = self.model.predict(data_iterator).asnumpy()
        return predictions_npa

where X is a NumPy array of shape (1, 777).

Now, the first time I call MyModel.predict, this works perfectly. I then store the MyModel instance in a cachetools.LRUCache and try to perform the prediction a second time with the exact same input.

And every time I do that, my Python process just stops doing anything: no logs, no actions, and it doesn't exit either. All I know is that when I try to inspect the result of self.model.predict(data_iterator) in my PyCharm debugger, I get a loading error.

So I'm a bit confused about what's happening there; if anyone has an idea, it would be a great help!

Thanks

Tyrannas
1 Answer


This may be because you have to recreate data_iterator. An MXNet data iterator is exhausted once it has finished a pass over the data, and any further .next() call will raise an error; you would need to call .reset() to rewind it before a second pass.
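The exhaustion behaviour described above can be illustrated with a plain-Python sketch, no MXNet required. OnePassIter is a hypothetical stand-in that mimics the relevant behaviour of mx.io.NDArrayIter: once a pass is complete, iterating again yields nothing until reset() is called.

```python
class OnePassIter:
    """Minimal stand-in mimicking an MXNet-style one-pass data iterator."""

    def __init__(self, batches):
        self.batches = batches
        self.cursor = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.cursor >= len(self.batches):
            # Exhausted: a second pass yields nothing until reset()
            raise StopIteration
        batch = self.batches[self.cursor]
        self.cursor += 1
        return batch

    def reset(self):
        """Rewind, conceptually what mx.io.NDArrayIter.reset() does."""
        self.cursor = 0


it = OnePassIter([1, 2, 3])
first_pass = list(it)   # consumes the iterator: [1, 2, 3]
second_pass = list(it)  # exhausted, so this is []
it.reset()
third_pass = list(it)   # after reset(): [1, 2, 3] again
```

This is why an iterator created once and reused across predict calls can silently produce no batches the second time, whereas one recreated inside predict (as in the edited question) should not hit this particular problem.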

Emil
  • Yeah, my bad, I forgot to mention that the data_iterator is created inside the predict method of my class (I edited the question), so each time MyModel.predict is called, the iterator is recreated – Tyrannas Dec 11 '19 at 13:10
  • How do you use functools.lru_cache? Do you put the `@lru_cache(maxsize=...)` decorator before the predict method, or before the model? Also, I didn't quite get whether you make both calls in a single program (one without lru_cache, one with) or run two different programs? – Emil Dec 11 '19 at 13:51
  • I do: cacher = cachetools.LRUCache(maxsize=20) , I perform a MyModel.predict, and then I store the MyModel instance like this: cacher['model'] = my_instance. I then do: model = cacher['model'] and model.predict – Tyrannas Dec 11 '19 at 14:02
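The flow from the last comment can be sketched as follows. A plain dict stands in for cachetools.LRUCache(maxsize=20), and StubModel is a hypothetical stand-in for the MXNet-backed MyModel (both are assumptions for illustration, since the real predict is what hangs):

```python
import numpy as np


class StubModel:
    """Hypothetical stand-in for MyModel; the real one wraps an MXNet MLP."""

    def predict(self, X):
        # The real code builds an mx.io.NDArrayIter here on every call.
        return np.zeros((X.shape[0], 1))


cacher = {}  # stand-in for cachetools.LRUCache(maxsize=20)
my_instance = StubModel()

X = np.random.rand(1, 777)
first = my_instance.predict(X)   # first call: works in the question

cacher['model'] = my_instance    # store the instance in the cache
model = cacher['model']          # retrieve the very same instance
second = model.predict(X)        # this is the call that hangs in the question
```

Note that the cache hands back the same object (model is my_instance), so the second call exercises the same MXNet module state as the first, not a fresh copy.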