
I wrote Python code that uses deep learning for NLP on a medium-sized dataset (32K lines of text), and I am running into some problems!

Running this code on this amount of data is time-consuming, and it will never finish on my PC (an iMac 5K). I searched for a better way to run the program: some people say you should use a GPU, and then I heard about multithreading to run the program on multiple CPU cores.

My questions are:

Which is the better approach, and how do I use multiple CPU cores on my machine?

Thank you.

Ali Yousef

1 Answer


Artificial Neural Networks can take quite a long time to train, depending on the network's structure. You could try reducing the number of layers and/or neurons, since dense neural nets take a lot more time to train than convolutional neural nets.

You did not specify which framework you use to implement your deep learning algorithm. Still, I'd assume that most frameworks, such as Keras or TensorFlow, automatically use all CPU cores.
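Even when the framework itself uses all cores for training, CPU-bound preprocessing (tokenization, cleaning, etc.) can still be a bottleneck, and that part is easy to parallelize with Python's standard `multiprocessing` module. A minimal sketch, assuming a hypothetical `tokenize` step — substitute your own preprocessing function and corpus:

```python
from multiprocessing import Pool, cpu_count

def tokenize(line):
    # Hypothetical CPU-bound preprocessing step: lowercase and split.
    # Replace with your real tokenization/cleaning logic.
    return line.lower().split()

if __name__ == "__main__":
    # Stand-in for the asker's 32K-line corpus
    lines = ["Deep Learning For NLP"] * 32000

    # Spread the work across all available CPU cores;
    # chunksize reduces inter-process communication overhead.
    with Pool(processes=cpu_count()) as pool:
        tokens = pool.map(tokenize, lines, chunksize=1000)

    print(len(tokens))  # one token list per input line
```

Note that this only helps for the data-preparation stage; the training loop itself is parallelized internally by the framework, so wrapping `model.fit` in `multiprocessing` would not speed it up.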

So yes, you could try training on a GPU, as it is suited to highly parallel workloads. If you have money to spare, you could also try cloud computing, e.g. AWS.

Remember: long training times are normal for ANNs.

lenngro
  • I see. [Click here for more info on speedup on non-gpu setups](https://stackoverflow.com/questions/43143003/how-can-i-speed-up-deep-learning-on-a-non-nvidia-setup) – lenngro Apr 03 '18 at 08:43