
I have a Python file that uses TensorFlow with GPU support. It uses the GPU when I run it from the console with `python MyFile.py`.

However, when I convert it into an exe using PyInstaller, it builds and runs successfully, but it no longer uses the GPU. This happens on a system that was not used for developing MyFile.py. On the development system itself, the exe uses only 40-50% GPU, compared to 90% when I run the Python script directly.

The application also has a small UI built with tkinter.

Though the application runs fine on CPU, it is incredibly slow. (I am not using the --onefile flag in PyInstaller.) Despite a GPU being available, the application does not use it.

My questions are:

  • How do I overcome this issue? Do I need to install the CUDA or cuDNN toolkits on my destination computer?

  • (Once the main question is solved) Can I use a 1050 Ti in development and a 2080 Ti on the destination computer, if the cuDNN and CUDA versions are the same?

TensorFlow version: 1.14.0 (I know 2.x is out there, but this works perfectly fine for me.)

GPU: GeForce GTX 1050 Ti (in development as well as deployment)

CUDA Toolkit: 10.0

cuDNN: v7.6.2 for CUDA 10.0

PyInstaller version: 3.5

Python version: 3.6.5

squareRoot17
    I don't have a full answer, but I know "for sure" you will have to install the Nvidia driver and CUDA on the target machine. The specific GPU model shouldn't matter as long as both present to TensorFlow as `/gpu:0` (assuming you don't specify a particular GPU in your script). I assume what is happening is that `pyinstaller` is indeed gathering `tensorflow-gpu`, but when run on the client machine TensorFlow defaults to whatever device it can find with what it's provided. The Nvidia driver and CUDA toolkit are necessary for the GPU to be "found" as a usable device. – KDecker Dec 11 '19 at 20:12
  • A solution may be to take the `exe` and generate an `msi` (if Windows) installer with the specific Nvidia driver and CUDA toolkit installers packaged with it. – KDecker Dec 11 '19 at 20:16
  • @KDecker Thank you for your quick response! Generating an `msi` would definitely help me! – squareRoot17 Dec 11 '19 at 20:35
    You may also have issues with environment paths too. They should technically be the same, but if the Nvidia driver or CUDA are installed in "odd" locations on either the dev or target machine it could be an issue. – KDecker Dec 11 '19 at 20:47
  • Did you ever manage to solve this issue? I am also running into something very similar. – HamsterHuey Mar 10 '20 at 01:10
  • @HamsterHuey Well, I was unable to find a definitive solution/reason for this, but from my extensive number of attempts I found the following: 1) Make sure you're explicitly placing your operations on the GPU in your program. 2) Make a separate virtual env for your program with all your dependencies and convert it with pyinstaller from there; avoid packages from Anaconda and make a clean installation. 3) As mentioned by KDecker above, generating an msi bundled with the specific CUDA toolkit solved the issue for me, but this may not be an efficient approach, since the target machine's specs can affect this method. – squareRoot17 Mar 11 '20 at 17:10
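The first point in the comment above (explicitly placing operations on the GPU) can be sketched in TensorFlow 1.x roughly as follows; the tensors are illustrative, and this requires a working `tensorflow-gpu` 1.x installation:

```python
import tensorflow as tf  # TensorFlow 1.14 (tensorflow-gpu)

# Log where each op runs, so the console shows whether GPU kernels are used.
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True  # don't grab all GPU memory up front

# Pin the ops explicitly to the first GPU instead of relying on defaults.
with tf.device('/gpu:0'):
    a = tf.random.uniform([1000, 1000])
    b = tf.random.uniform([1000, 1000])
    c = tf.matmul(a, b)

with tf.Session(config=config) as sess:
    sess.run(c)
```

With `log_device_placement=True`, the console output of the frozen exe should show each op mapped to `/device:GPU:0`; if they all land on `/device:CPU:0` instead, TensorFlow could not find a usable GPU at run time.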

1 Answer


As I answered here as well, according to GitHub issues in the official repository (here and here, for example), CUDA libraries are dynamically loaded at run time rather than at link time, so they are typically not included in the final exe file (or folder). As a result, the generated exe won't work on a machine without CUDA installed. The solution (please refer to the linked issues too) is to put the DLLs necessary to run the exe into its dist folder (if it was generated without the --onefile option), or to install the CUDA runtime on the target machine.
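A quick, TensorFlow-free way to check whether those run-time libraries are actually reachable on the target machine is to try loading them with `ctypes` before the application starts. The library names below match CUDA 10.0 / cuDNN 7.6 as used in the question; adjust them if you ship different versions:

```python
import ctypes
import platform

def cuda_runtime_available():
    """Return True if the CUDA 10.0 runtime and cuDNN 7 can be loaded."""
    # Library names for CUDA 10.0 / cuDNN 7; change these for other versions.
    names = (["cudart64_100.dll", "cudnn64_7.dll"]
             if platform.system() == "Windows"
             else ["libcudart.so.10.0", "libcudnn.so.7"])
    for name in names:
        try:
            ctypes.CDLL(name)
        except OSError:
            return False  # this library is not on the loader's search path
    return True

print(cuda_runtime_available())
```

If this prints `False` next to your frozen exe, the CUDA DLLs are neither in the dist folder nor on the system path, which matches the symptom of TensorFlow silently falling back to the CPU.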

Aelius