Questions tagged [gpu]

Acronym for "Graphics Processing Unit". For programming traditional graphical applications, see the tag entry for "graphics programming". For general-purpose programming using GPUs, see the tag entry for "gpgpu". For specific GPU programming technologies, see the popular tag entries for "opencl", "cuda" and "thrust".

More information on GPUs is available at http://en.wikipedia.org/wiki/Graphics_processing_unit

8854 questions
55
votes
7 answers

Python GPU programming

I am currently working on a project in Python, and I would like to make use of the GPU for some calculations. At first glance it seems like there are many tools available; at second glance, I feel like I'm missing something. Copperhead looks awesome…
Eelco Hoogendoorn
  • 10,459
  • 1
  • 44
  • 42
54
votes
3 answers

Get total and available amounts of free GPU memory using PyTorch

I'm using Google Colab's free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the current GPU memory occupied, but how do we determine total available memory using…
Hari Prasad
  • 1,162
  • 1
  • 10
  • 11
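The excerpt above cuts off at the key call. In recent PyTorch versions, `torch.cuda.mem_get_info()` returns a `(free_bytes, total_bytes)` pair for the current device; a framework-agnostic alternative is to parse `nvidia-smi`'s CSV query output, as in this sketch (the sample string at the bottom is made-up test data, not real hardware output):

```python
import subprocess

def gpu_memory_mib(csv_text=None):
    """Return a list of (total, used, free) tuples in MiB, one per GPU.

    Parses the output of:
      nvidia-smi --query-gpu=memory.total,memory.used,memory.free \
                 --format=csv,noheader,nounits
    Pass csv_text directly for testing; otherwise the command is run.
    """
    if csv_text is None:
        csv_text = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=memory.total,memory.used,memory.free",
             "--format=csv,noheader,nounits"],
            text=True)
    rows = []
    for line in csv_text.strip().splitlines():
        total, used, free = (int(field) for field in line.split(","))
        rows.append((total, used, free))
    return rows

# Made-up sample for one 16 GB card with ~1 GiB in use:
sample = "15360, 1024, 14336\n"
print(gpu_memory_mib(sample))  # [(15360, 1024, 14336)]
```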
54
votes
3 answers

Why are draw calls expensive?

Assuming the texture, vertex, and shader data are already on the graphics card, you don't need to send much data to the card. There are a few bytes to identify the data, presumably a 4x4 matrix, and some assorted other parameters. So where is all…
notallama
  • 1,069
  • 1
  • 8
  • 11
54
votes
3 answers

Get CPU/GPU/memory information

I need to get information about the CPU/GPU/memory: the number of cores, memory size, memory and CPU usage... I found a way to do this for IE: How to Use JavaScript to Find Hardware Information. Solutions for other browsers I do not know. Any idea…
Alex Nester
  • 541
  • 1
  • 4
  • 3
53
votes
3 answers

How to get the device type of a pytorch module conveniently?

I have to stack some of my own layers on different kinds of PyTorch models with different devices. E.g. A is a CUDA model and B is a CPU model (but I don't know that before I get the device type). Then the new models are C and D respectively, where class…
Kani
  • 1,072
  • 2
  • 7
  • 16
52
votes
26 answers

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver

I'm running an AWS EC2 g2.2xlarge instance with Ubuntu 14.04 LTS. I'd like to observe the GPU utilization while training my TensorFlow models. I get an error trying to run 'nvidia-smi'. ubuntu@ip-10-0-1-213:/etc/alternatives$ cd…
dbl001
  • 2,259
  • 8
  • 39
  • 53
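A common cause of this error is a kernel/driver version mismatch after an automatic package upgrade, fixed by reinstalling the NVIDIA driver or rebooting. A small diagnostic sketch, under the assumption that distinguishing "binary missing" from "driver unreachable" is enough to point at the right fix:

```python
import shutil
import subprocess

def check_nvidia_driver():
    """Rough health check: separates 'nvidia-smi not installed' from
    'binary present but the driver cannot be reached' (the asker's case)."""
    path = shutil.which("nvidia-smi")
    if path is None:
        return "nvidia-smi not on PATH: install the NVIDIA driver package"
    result = subprocess.run([path], capture_output=True, text=True)
    if result.returncode != 0:
        return ("driver not reachable (often a kernel/driver mismatch "
                "after an upgrade): reinstall the driver or reboot")
    return "driver OK"

print(check_nvidia_driver())
```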
52
votes
2 answers

Running more than one CUDA applications on one GPU

The CUDA documentation does not specify how many CUDA processes can share one GPU. For example, if I launch more than one CUDA program as the same user with only one GPU card installed in the system, what is the effect? Will it guarantee the correctness of…
cache
  • 1,239
  • 3
  • 13
  • 21
48
votes
8 answers

High level GPU programming in C++

I've been looking into libraries/extensions for C++ that will allow GPU-based processing on a high level. I'm not an expert in GPU programming and I don't want to dig too deep. I have a neural network consisting of classes with virtual functions. I…
goocreations
  • 2,938
  • 8
  • 37
  • 59
47
votes
1 answer

Why do we use CPUs for ray tracing instead of GPUs?

After doing some research on rasterisation and ray tracing, I have discovered that there is not much information available on the internet about how CPUs are used for ray tracing. I came across an article about Pixar and how they pre-rendered Cars 2 on…
oodle600
  • 619
  • 1
  • 5
  • 8
46
votes
3 answers

How to get allocated GPU spec in Google Colab

I'm using Google Colab for deep learning and I'm aware that they randomly allocate GPU's to users. I'd like to be able to see which GPU I've been allocated in any given session. Is there a way to do this in Google Colab notebooks? Note that I am…
Alexander Soare
  • 2,825
  • 3
  • 25
  • 53
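One common approach is to shell out to `nvidia-smi -L` from a notebook cell (PyTorch users can also call `torch.cuda.get_device_name(0)`). A sketch that extracts the GPU model names from that listing; the sample line below is illustrative, not real Colab output:

```python
import re
import subprocess

def allocated_gpus(listing=None):
    """Return the GPU model names from `nvidia-smi -L` output.

    Each line looks like: 'GPU 0: Tesla T4 (UUID: GPU-...)'.
    Pass the listing text directly for testing; otherwise run the command.
    """
    if listing is None:
        listing = subprocess.check_output(["nvidia-smi", "-L"], text=True)
    return re.findall(r"GPU \d+: (.+?) \(UUID:", listing)

sample = "GPU 0: Tesla T4 (UUID: GPU-abc123)\n"
print(allocated_gpus(sample))  # ['Tesla T4']
```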
46
votes
1 answer

What are XLA_GPU and XLA_CPU in TensorFlow?

I can list GPU devices using the following TensorFlow code: import tensorflow as tf from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) The result is: [name: "/device:CPU:0" device_type: "CPU" memory_limit:…
tidy
  • 4,747
  • 9
  • 49
  • 89
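The `XLA_GPU`/`XLA_CPU` entries are generally the same physical devices re-exposed through TensorFlow's XLA JIT compiler, not extra hardware. A small helper for grouping such device-name strings by their type, written as pure string handling so it runs without TensorFlow installed:

```python
def device_kinds(device_names):
    """Group TensorFlow device name strings by device type.

    A name like '/device:XLA_GPU:0' has the type as its middle segment.
    """
    kinds = {}
    for name in device_names:
        dev_type = name.split(":")[1]  # '/device:XLA_GPU:0' -> 'XLA_GPU'
        kinds.setdefault(dev_type, []).append(name)
    return kinds

names = ["/device:CPU:0", "/device:XLA_CPU:0", "/device:XLA_GPU:0"]
print(device_kinds(names))
```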
46
votes
2 answers

Monitor the Graphics card usage

How can I monitor how much of the graphics card is used when I run a certain application? I want to see how much my application uses the GPU.
melculetz
  • 1,961
  • 8
  • 38
  • 51
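On NVIDIA hardware, a lightweight way to watch GPU load while an application runs is to poll `nvidia-smi`'s utilization query. A sketch; the `app_is_running` predicate in the commented loop is hypothetical, and the sample string is made-up test input:

```python
import subprocess
import time

def gpu_utilization(csv_text=None):
    """Per-GPU utilization (%) via nvidia-smi's query interface:
      nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits
    Pass csv_text directly for testing; otherwise run the command.
    """
    if csv_text is None:
        csv_text = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True)
    return [int(line) for line in csv_text.split()]

# Poll once a second while the application under test is running:
# while app_is_running():          # hypothetical predicate
#     print(gpu_utilization())
#     time.sleep(1)

print(gpu_utilization("37\n"))  # [37]
```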
45
votes
1 answer

Choosing between GeForce or Quadro GPUs to do machine learning via TensorFlow

Is there any noticeable difference in TensorFlow performance when using Quadro GPUs vs GeForce GPUs? For example, does it use double-precision operations, or anything else that would cause a performance drop on GeForce cards? I am about to buy a GPU for TensorFlow, and…
user2771184
  • 709
  • 1
  • 5
  • 9
44
votes
6 answers

How to kill processes on GPUs by PID from nvidia-smi using a keyword?

How do I kill running GPU processes belonging to a specific program (e.g. python) in the terminal? For example, two Python processes are running in the top picture, and killing them gives the bottom picture in nvidia-smi
salehinejad
  • 7,258
  • 3
  • 18
  • 26
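A scriptable variant of the usual approach: query the GPU compute processes, filter by process name, and signal the matches. This sketch assumes `nvidia-smi`'s `--query-compute-apps` interface; the sample CSV is made-up, and the actual kill loop is left commented out because it is destructive:

```python
import os
import signal
import subprocess

def pids_matching(keyword, csv_text=None):
    """PIDs of GPU compute processes whose name contains `keyword`.

    Parses the output of:
      nvidia-smi --query-compute-apps=pid,process_name --format=csv,noheader
    Pass csv_text directly for testing; otherwise run the command.
    """
    if csv_text is None:
        csv_text = subprocess.check_output(
            ["nvidia-smi", "--query-compute-apps=pid,process_name",
             "--format=csv,noheader"],
            text=True)
    pids = []
    for line in csv_text.strip().splitlines():
        pid, name = (field.strip() for field in line.split(",", 1))
        if keyword in name:
            pids.append(int(pid))
    return pids

sample = "1234, /usr/bin/python3\n5678, ./render\n"
print(pids_matching("python", sample))  # [1234]

# To actually terminate the matches (destructive, use with care):
# for pid in pids_matching("python"):
#     os.kill(pid, signal.SIGTERM)
```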
44
votes
7 answers

Run C# code on GPU

I have no knowledge of GPU programming concepts and APIs. I have a few questions: Is it possible to write a piece of managed C# code and compile/translate it to some kind of module, which can be executed on the GPU? Or am I doomed to have two…
jojovilco
  • 649
  • 1
  • 6
  • 13