Questions tagged [nvidia]

For programming questions specifically related to Nvidia hardware. N.B. Questions about system configuration are usually off-topic here!

Nvidia is an American global technology company based in Santa Clara, California, best known for its graphics processors (GPUs).

More about Nvidia at http://en.wikipedia.org/wiki/Nvidia
Nvidia website at http://www.nvidia.com/content/global/global.php

3668 questions
1
vote
1 answer

"server doesn't have a resource type "pods"" while installing NVIDIA Clara Deploy

I am trying to install the latest version of NVIDIA Clara Deploy Bootstrap following the official documentation (this & this). At one step of the installation there is a shell script named "bootstrap.sh", which is meant to install all the…
Proteeti Prova
  • 1,079
  • 4
  • 25
  • 49
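A minimal sketch, not from the Clara Deploy docs: the "doesn't have a resource type \"pods\"" message usually means the kubeconfig that bootstrap.sh (and kubectl) uses points at an API server that is unreachable or not yet serving the core API. One way to confirm the cluster side is healthy is to list pods through the same kubeconfig with the Python kubernetes client; the default ~/.kube/config path is an assumption.

```python
from kubernetes import client, config  # pip install kubernetes

# Load the same kubeconfig the bootstrap script would use (~/.kube/config
# by default) and ask the API server for a few pods. If this fails, the
# problem is the cluster/kubeconfig, not Clara Deploy itself.
config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(limit=5).items:
    print(pod.metadata.namespace, pod.metadata.name)
```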
1
vote
1 answer

Sliding Window on 2D Tensor using PyTorch

How can we use a sliding window on a 2D PyTorch tensor t with shape (6, 10) such that we end up with a 3D PyTorch tensor with shape (3, 4, 10)? For example, if we have the tensor t: t = torch.range(1, 6*10).reshape((6, 10)) tensor([[ 1., 2., 3., …
Athena Wisdom
  • 6,101
  • 9
  • 36
  • 60
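A minimal sketch of one way to get the (3, 4, 10) result, assuming a stride-1 window of length 4 along the first dimension. Tensor.unfold puts the window dimension last, so a permute is needed afterwards.

```python
import torch

# A (6, 10) tensor; windows of length 4 with stride 1 along dim 0
# give 6 - 4 + 1 = 3 windows.
t = torch.arange(1, 6 * 10 + 1, dtype=torch.float32).reshape(6, 10)

# unfold(dim, size, step) returns shape (3, 10, 4) with the window
# dimension last, so permute it back to (3, 4, 10).
windows = t.unfold(0, 4, 1).permute(0, 2, 1)
print(windows.shape)  # torch.Size([3, 4, 10])

# Equivalent, more explicit version (copies instead of a view).
windows_explicit = torch.stack([t[i:i + 4] for i in range(3)])
assert torch.equal(windows, windows_explicit)
```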
1
vote
1 answer

Docker 18.09 equivalent of --gpus all

I'm trying to run a GPU-enabled container on a server with Docker 18.09.5 installed. It's a shared server, so I can't just upgrade the Docker version. I have a private server with Docker 19.03.12 and the following works fine: docker pull…
user3470496
  • 141
  • 7
  • 33
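A possible workaround, assuming the shared server has the nvidia-docker2 package installed: Docker 18.09 has no --gpus flag, and selecting the nvidia runtime is the usual pre-19.03 equivalent. The sketch below shows this through the Docker Python SDK; the image name and command are placeholders.

```python
import docker  # pip install docker

client = docker.from_env()

# On Docker 18.09, --runtime=nvidia (provided by nvidia-docker2) plays the
# role of --gpus all. NVIDIA_VISIBLE_DEVICES controls which GPUs the
# container sees.
output = client.containers.run(
    "nvidia/cuda:10.2-base",          # placeholder image
    "nvidia-smi",                     # placeholder command
    runtime="nvidia",
    environment={"NVIDIA_VISIBLE_DEVICES": "all"},
    remove=True,
)
print(output.decode())
```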
1
vote
0 answers

Audio device: NVIDIA Corporation usage

I am using Ubuntu 18.04. Whenever I execute: lspci -v | grep -i audio It returns: 37:00.1 Audio device: NVIDIA Corporation Device 10f8 (rev a1) In my sound settings, instead of showing any speakers or built-in speaker information, I noticed a Dummy…
1
vote
1 answer

Does moving data from global memory to shared memory stall the thread?

__shared__ float smem[2]; smem[0] = global_memory[0]; smem[1] = global_memory[1]; /*process smem[0]...*/ /*process smem[1]...*/ My question is, does smem[1] = global_memory[1]; block computation on smem[0]? In Cuda thread scheduling - latency…
1
vote
0 answers

PyTorch on Linux/ARM (BeagleBone Black or other)

We're embarking on a project with Linux/ARM and deep learning. We are currently prototyping with PyTorch on a PC and looking for a target Linux/ARM platform with good and straightforward support. Any recommendation on a supported Linux/ARM platform for…
Roy
  • 139
  • 3
  • 11
1
vote
2 answers

gpucompute* is down* in Slurm cluster

My gpucompute nodes are in a 'down' state and I can't send jobs to the GPU nodes. I couldn't bring my 'down' GPU nodes back after following all the solutions on the net. Before this problem, I had an error with the Nvidia driver configuration in a way…
Charlt
  • 17
  • 9
1
vote
1 answer

Installing gstreamer NVIDIA Plugins on Ubuntu

I am trying to install the official NVIDIA codecs for GStreamer. I have the following setup: Ubuntu 18.04, GStreamer 1.14.5, NVIDIA Quadro P2000, NVIDIA-SMI 440.100, Driver Version 440.100, CUDA Version 10.2.89, NVIDIA Video_Codec_SDK_9.0.20. I…
makolele12
  • 111
  • 1
  • 10
1
vote
0 answers

Connect Ultrasonic Sensor to Jetson Xavier NX

I have a Jetson Xavier NX board. I need to interface an ultrasonic sensor. I used the Jetson.GPIO library to communicate through GPIO, but I'm not getting any data from the Jetson. I believe the GPIO pin is not powering up, as it shows 0V after making it…
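A rough sketch of reading an HC-SR04-style ultrasonic sensor with Jetson.GPIO, assuming the sensor's VCC comes from the 5V supply pin rather than a GPIO pin (Xavier NX GPIOs are 3.3V signal pins and are not meant to source power), the echo line is level-shifted down to 3.3V, and the BOARD pin numbers below are placeholders for whatever is actually wired.

```python
import time
import Jetson.GPIO as GPIO

TRIG = 16  # placeholder BOARD pin for the trigger line
ECHO = 18  # placeholder BOARD pin for the (level-shifted) echo line

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm(timeout=0.05):
    # 10 microsecond trigger pulse starts a measurement.
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(10e-6)
    GPIO.output(TRIG, GPIO.LOW)

    deadline = time.time() + timeout
    start = end = None
    # Wait for the echo pin to go high, then time how long it stays high.
    while GPIO.input(ECHO) == GPIO.LOW:
        start = time.time()
        if start > deadline:
            return None
    while GPIO.input(ECHO) == GPIO.HIGH:
        end = time.time()
        if end > deadline:
            return None
    if start is None or end is None:
        return None
    # Speed of sound ~343 m/s; halve for the round trip, result in cm.
    return (end - start) * 34300 / 2

try:
    print("distance:", read_distance_cm(), "cm")
finally:
    GPIO.cleanup()
```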
1
vote
0 answers

unmet dependencies when trying to install nvidia-docker

I am trying to install nvidia-docker on my azure virtual machine with: sudo apt-get install -y nvidia-docker2 I get this error: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not…
Henrik Leijon
  • 1,277
  • 2
  • 10
  • 15
1
vote
1 answer

Does NVIDIA Docker need CUDA installed?

I am setting up my environment for machine learning development and I thought of using Docker. Do NVIDIA CUDA and/or cuDNN need to be installed on my machine, or does it work with them existing only in the Docker container? Thanks in advance for your…
than_g
  • 87
  • 8
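In general the host needs only the NVIDIA driver plus the NVIDIA container runtime; the CUDA toolkit and cuDNN ship inside the image. A small check one could run inside the container (assuming a PyTorch-based image) to confirm the GPU stack is visible:

```python
# Run inside the container, e.g. a pytorch/pytorch or nvidia/cuda-based
# image: CUDA and cuDNN come from the image, only the driver and the
# container toolkit live on the host.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA toolkit bundled with PyTorch:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```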
1
vote
2 answers

GCP AI Platform Notebook driver too old?

I am trying to run the following Hugging Face Transformers tutorial on GCP's AI Platform Notebook with 32 vCPUs, 208 GB RAM, and 2 NVIDIA Tesla T4s. However, when I try to run the part model = DistillBERTClass() model.to(device) I get the following…
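A quick diagnostic sketch, assuming the usual cause of "driver too old" errors on these notebook instances: the installed PyTorch wheel was built against a newer CUDA release than the T4 node's driver supports. Printing both sides of the mismatch helps decide between upgrading the driver and installing a wheel built for an older CUDA version.

```python
import subprocess
import torch

# CUDA release the PyTorch wheel was built against.
print("torch:", torch.__version__, "built for CUDA", torch.version.cuda)

# Driver version actually installed on the node.
driver = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout.strip()
print("NVIDIA driver:", driver)
print("torch.cuda.is_available():", torch.cuda.is_available())
```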
1
vote
1 answer

Do I need cudaSetDevice before cudaStreamSynchronize?

In my system I program multiple GPUs concurrently. Do I need to call cudaSetDevice() before calling cudaStreamSynchronize()? When creating the cudaStream_t objects, I did set the device correctly before calling cudaStreamCreate().
huzzm
  • 489
  • 9
  • 24
1
vote
0 answers

Forcing DirectX 11 VSync, even if turned off in Nvidia control panel

Some users of my DirectX 11 application complain about frame rates between 1000-2000 fps, which I totally understand. It's a really small group of people who have had this issue. The only way I've been able to replicate it is by disabling 'Vertical…
Filip
  • 21
  • 2
1
vote
0 answers

Memory issue while trying to convert to TensorRT or TFLite

I successfully converted the .weight file to a .tf file, then I used the convert_trt.py script, which gets killed after about a two-minute wait. I use a Jetson Xavier NX, CUDA 10.2. Error log: .. .. . 2020-08-22 15:31:22.362558: I…
Sai Krishnadas
  • 2,863
  • 9
  • 36
  • 69
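"Killed" after a couple of minutes on a Jetson is very often the kernel's OOM killer, since the Xavier NX shares its RAM between CPU and GPU. A commonly suggested mitigation, in addition to adding swap, is to stop TensorFlow from pre-allocating the whole GPU memory pool before the conversion script runs; a minimal sketch, to be placed near the top of the script:

```python
import tensorflow as tf

# Let TensorFlow grow its GPU allocation on demand instead of grabbing
# most of the shared Jetson RAM up front, which is a frequent reason the
# TF-TRT conversion process gets OOM-killed.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```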