I have an NVIDIA GeForce GT 740M GPU (compute capability 3.0) and the following versions of CUDA, cuDNN, and TensorFlow installed.

nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#include "driver_types.h"
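(For reference, not part of the original output: the `CUDNN_VERSION` macro above packs major/minor/patch into a single integer, so cuDNN 7.5.0 reports as 7500.)

```python
# Mirror of the CUDNN_VERSION macro from cudnn.h for version 7.5.0.
CUDNN_MAJOR, CUDNN_MINOR, CUDNN_PATCHLEVEL = 7, 5, 0
CUDNN_VERSION = CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL
print(CUDNN_VERSION)  # 7500
```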

pip3 show tensorflow-gpu

Name: tensorflow-gpu
Version: 1.13.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: opensource@google.com
License: Apache 2.0
Location: /home/lightning/.local/lib/python3.6/site-packages
Requires: grpcio, tensorboard, absl-py, termcolor, protobuf, astor, gast, numpy, tensorflow-estimator, wheel, keras-preprocessing, keras-applications, six

pip3 show tensorflow

Name: tensorflow
Version: 1.13.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: opensource@google.com
License: Apache 2.0
Location: /home/lightning/.local/lib/python3.6/site-packages
Requires: wheel, keras-preprocessing, numpy, astor, six, protobuf, tensorflow-estimator, termcolor, grpcio, keras-applications, absl-py, tensorboard, gast

But when I check the devices detected by TensorFlow with print(device_lib.list_local_devices()), the output is as follows:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 13567978771733496471
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 12191851301991039336
physical_device_desc: "device: XLA_CPU device"
]
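A related check worth trying (a sketch using the TF 1.x API, guarded so it also runs where TensorFlow is absent): tf.test.is_gpu_available() reports whether TensorFlow can actually use a CUDA device, and the log lines it triggers often name the missing library or unsupported capability.

```python
# Hedged alternative to device_lib.list_local_devices() on TF 1.x.
try:
    import tensorflow as tf
    gpu_ok = tf.test.is_gpu_available()  # also logs why a GPU is skipped
    print("GPU available:", gpu_ok)
except ImportError:
    # TensorFlow itself is not importable in this environment.
    gpu_ok = None
    print("TensorFlow not importable")
```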

How can I make TensorFlow see the GPU?

P.S. tensorflow-gpu was installed before tensorflow, so simply re-installing them in the order "1) tensorflow-gpu, 2) tensorflow" is inefficient.

  • Although the package names are different, the module names are the same, so the tensorflow installation would have overwritten the tensorflow-gpu implementation. tensorflow-gpu alone is sufficient. Please uninstall tensorflow: pip uninstall tensorflow, then pip install tensorflow-gpu. – Manoj Mohan Mar 22 '19 at 16:24
  • @ManojMohan When I do it that way, I can't import tensorflow or libraries based on it. An error occurs (the character limit doesn't allow copying the whole text) with text as follows: ImportError: Traceback (most recent call last): ... ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory. Failed to load the native TensorFlow runtime. See tensorflow.org/install/errors for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. – svetlana Mar 22 '19 at 17:17
  • 1
    You have CUDA 10.1. As the error shows, install CUDA 10. https://developer.nvidia.com/cuda-10.0-download-archive – Manoj Mohan Mar 22 '19 at 17:20
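The ImportError in the comments means the dynamic loader cannot find libcublas.so.10.0, the CUDA 10.0 cuBLAS soname that the TF 1.13 wheel links against (the installed CUDA 10.1 toolkit ships its libraries under different sonames). A minimal sketch to test whether a given shared library is loadable; the soname below is taken from the error message:

```python
import ctypes

def can_load(libname):
    """Return True if the dynamic loader can open the shared library."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# Exact soname from the ImportError in the comments above.
print(can_load("libcublas.so.10.0"))
```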

1 Answer


TensorFlow's prebuilt GPU binaries require a card with compute capability 3.5 or higher.

You have a GPU with compute capability 3.0, so the stock tensorflow-gpu wheel cannot use it.
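To confirm the compute capability without TensorFlow, one can query the CUDA driver API directly through ctypes. This is a sketch, not a definitive tool: attribute IDs 75 and 76 are CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR/MINOR from the driver API headers, and the function returns None when no driver or GPU is present.

```python
import ctypes
from ctypes.util import find_library

def compute_capability(device=0):
    """Query (major, minor) compute capability of a CUDA device,
    or return None if the driver is absent or initialisation fails."""
    path = find_library("cuda")
    if path is None:
        return None
    cuda = ctypes.CDLL(path)
    if cuda.cuInit(0) != 0:
        return None
    major = ctypes.c_int()
    minor = ctypes.c_int()
    # 75/76 = CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR/MINOR
    if cuda.cuDeviceGetAttribute(ctypes.byref(major), 75, device) != 0:
        return None
    cuda.cuDeviceGetAttribute(ctypes.byref(minor), 76, device)
    return (major.value, minor.value)

print(compute_capability())  # e.g. (3, 0) on a GT 740M
```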

nima farhadi