Questions tagged [nvidia]

For programming questions specifically related to Nvidia hardware. N.B. Questions about system configuration are usually off-topic here!

Nvidia is an American global technology company based in Santa Clara, California, best known for its graphics processors (GPUs).

More about Nvidia at http://en.wikipedia.org/wiki/Nvidia
Nvidia website at http://www.nvidia.com/content/global/global.php

3668 questions
1
vote
0 answers

Nvidia drivers (440, 450) cannot be found with GeForce 2080 Ti (Ubuntu 20.04)

I am trying to get Ubuntu 20.04 running on a desktop computer with a GeForce 2080 Ti, and I have had no luck with various versions of the Nvidia drivers (440 from the PPA, the latest 450 from the Nvidia website). However, I could not get it to work: nvidia-smi -->…
1
vote
1 answer

What is libcublasLt.so (not libcublas.so)?

I'm compiling the source code using pgf95 (a Fortran compiler). If I use CUDA 10.0, it compiles the source code successfully. However, if I use CUDA 10.1, it fails, saying 'cannot find libcublasLt.so'. When I scan the directory…
sungjun cho
  • 809
  • 7
  • 18
1
vote
0 answers

What L2-write bandwidth should I expect from a Nvidia Turing T4 GPU?

The microbenchmarking papers that I have found such as [1] and [2] report L2 bandwidths of 1200 GB/s and 900 GB/s respectively. I'm developing a kernel which attempts to leverage the L2 cache for global read and write operations. So far, I have not…
Jesse Lu
  • 19
  • 2
1
vote
2 answers

How to get FFMPEG to use more GPU when encoding

So the situation is as follows: I'm receiving 20-30 uncompressed images per second. The format is either PNG or bitmap. Each individual photo is between 40 and 50 MB (all have the same size since they are uncompressed). I want to encode them to an H.265 lossless…
Entropy
  • 13
  • 1
  • 1
  • 8
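
A minimal sketch of how one might hand such frames to ffmpeg's NVENC hardware encoder from Python; the input pattern, frame rate, and preset name are illustrative and vary by ffmpeg build. Note that NVENC is a dedicated encoder block separate from the CUDA cores, so overall GPU utilization stays low even when the encoder is fully busy.

    import subprocess

    # Illustrative only: encode numbered PNG frames with the GPU's NVENC HEVC encoder.
    cmd = [
        "ffmpeg",
        "-framerate", "25",               # roughly matches 20-30 images per second
        "-i", "frames/frame_%05d.png",    # hypothetical input pattern
        "-c:v", "hevc_nvenc",             # offload H.265 encoding to NVENC
        "-preset", "lossless",            # lossless preset (name differs across ffmpeg versions)
        "output.mkv",
    ]
    subprocess.run(cmd, check=True)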
1
vote
1 answer

StyleGAN2: getting values of the constant layer returns an incompatible device assignment (GPU vs CPU) in Colab

I am trying to get the values of the StyleGAN generator's constant layer using the following code in Google Colab with a GPU runtime: v1 = (tflib.run(['G_synthesis_1/4x4/Const/const:0'])[0]) But I am getting the following error: > InvalidArgumentError: Cannot…
javaCity
  • 4,288
  • 2
  • 25
  • 37
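
A hedged sketch of the general TF1 mechanism that usually resolves this class of error: evaluate the tensor in a session configured with allow_soft_placement, so TensorFlow may move an op pinned to one device onto another. With the StyleGAN2 codebase itself the config has to reach the session that tflib creates, since the generator's variables live in that session; the snippet below only illustrates the placement setting.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # Assumes the generator graph has already been loaded into the default graph.
    graph = tf.get_default_graph()
    const_t = graph.get_tensor_by_name("G_synthesis_1/4x4/Const/const:0")

    # allow_soft_placement lets TF relocate ops instead of raising
    # "Cannot assign a device for operation ..." errors.
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=config) as sess:
        v1 = sess.run(const_t)
        print(v1.shape)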
1
vote
1 answer

Nvidia Spark-XGBOOST

I would like to use NVIDIA Spark-XGBoost because it has Python support; however, I can't find any documentation on how to install it. The GitHub repository can be found here: NVIDIA-SPARK/XGBOOST
Vinh Tran
  • 169
  • 1
  • 12
1
vote
0 answers

Torch not compiled with CUDA enabled (Windows 10)

I am trying to run my code with PyTorch (CUDA 10.2), but I get the assertion error described in the title. At first I installed CUDA version 11, then I uninstalled it and installed version 10.2. Also, I used PyTorch before without CUDA and installed it…
MichiganMagician
  • 273
  • 2
  • 15
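
A small diagnostic sketch that usually narrows this down: a CPU-only PyTorch wheel reports no CUDA build, and any .cuda() call on such a build raises the "Torch not compiled with CUDA enabled" assertion.

    import torch

    print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel is installed
    print(torch.version.cuda)         # None on a CPU-only build, e.g. "10.2" otherwise
    print(torch.cuda.is_available())  # False => tensor.cuda() will raise the assertion error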
1
vote
1 answer

My GPUs are not visible with tensorflow-gpu 2.1.0 and CUDA 10.1

I am working on Windows 10. I have installed tensorflow-gpu 2.1.0 and checked that it is in the pip list. My Python version is 3.7 and my CUDA version is 10.1. Here is what the nvidia-smi command outputs: As you can see, I have 2 GPUs installed…
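
A short check, as a sketch, of what this TensorFlow install can actually see; an empty device list while is_built_with_cuda() returns True usually points at a missing or mismatched CUDA/cuDNN DLL on Windows.

    import tensorflow as tf

    print(tf.__version__)                          # expect 2.1.0
    print(tf.test.is_built_with_cuda())            # False => the CPU-only package is active
    print(tf.config.list_physical_devices("GPU"))  # empty list => the GPUs are not visible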
1
vote
1 answer

Not able to launch the file after running make over SSH

So basically, I bought an Ubuntu server and installed CUDA on it. I have installed github.com/brichard19/BitCrack and I successfully managed to build it with make. After all these steps, how can I launch the file? I actually have no idea…
1
vote
0 answers

Vertex shader is correct, but won't run on my hardware?

I can't figure out what's wrong with this shader. It's correct, compiles, and links, but simply won't run on my hardware (MacBook Pro with an Nvidia GeForce 9400, nothing special). It seems totally GLSL 1.2 compliant: vec4 position; vec4…
Joshua Noble
  • 830
  • 2
  • 12
  • 26
1
vote
1 answer

Measuring the utilization of an Nvidia GPU

I am searching for methods to record utilization at the GPU level. I have two definitions of utilization, and optimistically I want to be able to compute both: the number of CUDA cores utilized by the GPU at a given instant, and peak efficiency…
Walid Hanafy
  • 1,429
  • 2
  • 14
  • 26
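
For the coarse-grained definition, NVML (the library behind nvidia-smi) can be sampled from Python; a hedged sketch using the pynvml bindings is below. It reports the fraction of time the GPU was busy, not per-SM or per-core occupancy, which requires a profiler such as Nsight Compute.

    import pynvml  # pip install nvidia-ml-py3

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # same counters nvidia-smi shows
    print(f"GPU busy: {util.gpu}%, memory controller busy: {util.memory}%")
    pynvml.nvmlShutdown()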
1
vote
0 answers

What image should I use if I need an Nvidia V100 GPU on an instance running on Google Cloud?

What image should I use if I need an Nvidia V100 GPU on an instance running on Google Cloud? Listing versions: listing-versions. Refer: nvidia-v100
dwayneJohn
  • 919
  • 1
  • 12
  • 30
1
vote
1 answer

Utilising my GTX 1050 for tensorflow/keras not allowed due to WDDM mode

I recently bought a computer with an NVIDIA GeForce GTX1050. I have been trying to use it with tensorflow and keras through a local jupyter notebook. I have got tensorflow-gpu and keras-gpu in my environment. I have all the correct versions of cuda…
1
vote
0 answers

Test the .trt file using tensorflow

In the directory output_saved_model_dir below, I have a TRT file named final_model_gender_classification_gpu0_int8.trt. output_saved_model_dir='/home/cocoslabs/Downloads/age_gender_trt' saved_model_loaded =…
Pavithran
  • 41
  • 3
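
A hedged sketch of how a TF-TRT converted SavedModel is normally loaded and run; the path comes from the question, and the input batch is hypothetical. A bare .trt engine file, by contrast, is a TensorRT artifact and is loaded with the TensorRT Python runtime, not with tf.saved_model.load.

    import tensorflow as tf
    from tensorflow.python.saved_model import tag_constants

    output_saved_model_dir = '/home/cocoslabs/Downloads/age_gender_trt'
    saved_model_loaded = tf.saved_model.load(output_saved_model_dir,
                                             tags=[tag_constants.SERVING])
    infer = saved_model_loaded.signatures['serving_default']
    # outputs = infer(tf.constant(input_batch))  # hypothetical input batch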
1
vote
0 answers

GPU utilization is N/A when using nvidia-smi for a GeForce GTX 1650 graphics card

I want to see the GPU usage of my graphics card, but it's showing N/A. I use Windows 10 x64 with an Nvidia GeForce GTX 1650. I am getting the GPU availability status when executing my custom code in a Jupyter notebook. But after running nvidia-smi…
kush
  • 11
  • 1
  • 6