
My machine has a GeForce 940MX GDDR5 GPU.

I have installed all the requirements to run GPU-accelerated dlib:

  1. CUDA 9.0 toolkit with all 3 patch updates from https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exelocal

  2. cuDNN 7.1.4

Then, after cloning the davisking/dlib repository from GitHub, I ran the commands below to compile dlib with GPU support:

$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA

Now, how can I check/confirm whether dlib (or libraries that depend on dlib, such as Adam Geitgey's face_recognition) is using the GPU from a Python shell or Anaconda (Jupyter Notebook)?

rahulreddy

3 Answers


In addition to the check from the previous answer,

dlib.DLIB_USE_CUDA

there are some alternative ways to make sure dlib is actually using your GPU.
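For reference, that flag can be printed straight from a Python shell:

import dlib

# True only if this dlib build was compiled with CUDA support
print(dlib.DLIB_USE_CUDA)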

The easiest way is to check whether dlib recognizes your GPU.

import dlib.cuda as cuda
print(cuda.get_num_devices())

If the number of devices is >= 1 then dlib can use your device.
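A minimal sketch combining this with the DLIB_USE_CUDA flag (the printed messages are my own wording, not dlib's):

import dlib
import dlib.cuda as cuda

# Both conditions should hold for GPU-accelerated dlib:
# the build must have CUDA compiled in, and a device must be visible.
if dlib.DLIB_USE_CUDA and cuda.get_num_devices() > 0:
    print("dlib was built with CUDA and can see a GPU")
else:
    print("dlib will run on the CPU")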

Another useful trick is to run your dlib code and at the same time run

$ nvidia-smi

This should give you full GPU utilization information, where you can see the total utilization together with the memory usage of each process separately.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   52C    P2    36W / 151W |    763MiB /  8117MiB |      5%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1042      G   /usr/lib/xorg/Xorg                            18MiB |
|    0      1073      G   /usr/bin/gnome-shell                          51MiB |
|    0      1428      G   /usr/lib/xorg/Xorg                           167MiB |
|    0      1558      G   /usr/bin/gnome-shell                         102MiB |
|    0      2113      G   ...-token=24AA922604256065B682BE6D9A74C3E1    33MiB |
|    0      3878      C   python                                       385MiB |
+-----------------------------------------------------------------------------+
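The snapshot above is static; if you want it to refresh while your dlib script runs, one option (assuming a Linux shell) is

$ watch -n 1 nvidia-smi

or nvidia-smi's built-in loop mode:

$ nvidia-smi -l 1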

In some cases the Processes box might say something like "processes are not supported". This does not mean your GPU cannot run your code; it just means that your GPU does not support this kind of logging.

Sebastian Värv
  • For some reason, I find that `print(cuda.get_num_devices())` returns 1 even on a machine with no GPU. Oddly enough, `dlib.DLIB_USE_CUDA` returns the expected default on a non-GPU box (False) and on a GPU box (True). – Fausto Morales Apr 13 '19 at 02:52

If dlib.DLIB_USE_CUDA is true then it's using CUDA; if it's false then it isn't.
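For example, a quick one-liner from the shell (assuming the python on your PATH is the one dlib was installed into):

$ python -c "import dlib; print(dlib.DLIB_USE_CUDA)"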

As an aside, these steps do nothing and are not needed to build the Python module:

$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .

Just running setup.py is all you need to do.
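In other words, the whole sequence from the question reduces to:

$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA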

Davis King
  • dlib.DLIB_USE_CUDA is returning False, which means it is not using the GPU. So what did I do wrong in the steps above? @Davis King, do you have any hint about what the issue could be? – rahulreddy Aug 06 '18 at 06:51
  • And why are the commands you mentioned (above) not necessary? Don't we need to build dlib's C++ files (using cmake) before running setup.py? – rahulreddy Aug 06 '18 at 07:05
  • Are these commands enough to compile (rebuild) the Python API: git clone https://github.com/davisking/dlib.git; cd dlib; python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA --clean – rahulreddy Aug 06 '18 at 08:26
  • Yes. setup.py does everything. That's its job. Read its output. It tells you what it is doing and why. – Davis King Aug 06 '18 at 10:38

The following snippets show how to check whether dlib is using the GPU.

First, check whether dlib identifies your GPU:

import dlib.cuda as cuda
print(cuda.get_num_devices())

Secondly, check dlib.DLIB_USE_CUDA. If it is False, dlib was compiled without CUDA support; simply assigning dlib.DLIB_USE_CUDA = True in Python does not turn on GPU support, you have to rebuild dlib with CUDA enabled as shown in the question.
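Finally, since the question also mentions face_recognition: a practical end-to-end test is to run its CNN face detector (the part that actually exercises the GPU) while watching nvidia-smi. A sketch, assuming face_recognition is installed and a hypothetical test image test.jpg exists:

import face_recognition

# Load a hypothetical test image from disk
image = face_recognition.load_image_file("test.jpg")

# model="cnn" selects dlib's CNN detector, which runs on the GPU when dlib
# was built with CUDA; with a CPU-only build this call is noticeably slow
face_locations = face_recognition.face_locations(image, model="cnn")
print(face_locations)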