Questions tagged [tesla]

Nvidia Tesla is a brand of GPUs targeting the high performance computing market.

Nvidia Tesla GPUs have very high computational throughput (measured in floating point operations per second, or FLOPS) compared to general-purpose CPUs. Teslas power some of the world's fastest supercomputers, including Titan at Oak Ridge National Laboratory and Tianhe-1A.

Tesla products are primarily used:

  • In simulations and large-scale calculations (especially floating-point calculations).
  • For high-end image generation in professional and scientific applications.
  • For password brute-forcing.
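
A minimal sketch of how one of these cards looks to the CUDA runtime, assuming the CUDA toolkit is installed (the build command and device indices are illustrative, not part of the tag description):

```
// tesla_query.cu -- enumerate CUDA devices and print basic properties.
// Build (assumption): nvcc tesla_query.cu -o tesla_query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %.1f GiB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```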


89 questions
0
votes
1 answer

CUDA driver too old for Matlab GPU?

Ok, this is something I am having problems with. I recently installed Matlab R2013a on an x86_64 Linux system running RHEL 5, attached to a Tesla S2050. I have never used the GPU functionality in Matlab itself (but have tried some of it using Jacket…
nahsivar
  • 1,099
  • 1
  • 10
  • 13
0
votes
1 answer

Tesla GPU Usage

In my machine three GPUs are connected, i.e., Tesla M2090. I want to get the usage of those GPUs. There is a tool called NVIDIA SMI which shows the GPU usage. But when I tried to run the option nvidia-smi.exe -d (I want to know memory and GPU…
Sijo
  • 619
  • 1
  • 7
  • 25
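
For the question above, a hedged sketch of reading the same counters programmatically through the NVML C API that ships with the driver (assuming nvml.h from the CUDA toolkit and libnvidia-ml are available; on some older boards or driver modes the utilization counters are not exposed). The command-line equivalent is roughly nvidia-smi -q -d UTILIZATION,MEMORY.

```
// gpu_usage.cu -- query per-GPU utilization and memory via NVML.
// Build (assumption): nvcc gpu_usage.cu -o gpu_usage -lnvidia-ml
#include <cstdio>
#include <nvml.h>

int main() {
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        printf("nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlUtilization_t util;
        nvmlMemory_t mem;
        nvmlDeviceGetHandleByIndex(i, &dev);
        if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS &&
            nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS) {
            printf("GPU %u: %u%% GPU util, %u%% memory util, %llu / %llu MiB used\n",
                   i, util.gpu, util.memory,
                   (unsigned long long)(mem.used >> 20),
                   (unsigned long long)(mem.total >> 20));
        }
    }
    nvmlShutdown();
    return 0;
}
```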
0
votes
1 answer

CUDA Fermi's Architecture: Memory structure

I have a question about CUDA's Fermi architecture: I've read somewhere that in the Fermi architecture global memory access is as fast as shared memory just because they now use uniform addressing. So is it true that I can access data on…
Andrea Sylar Solla
  • 157
  • 1
  • 2
  • 10
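
For context on the question above: Fermi caches global memory accesses in L1/L2, but shared memory remains a separate, explicitly managed on-chip space. A small illustrative sketch (the block size of 256 is an assumption):

```
// fermi_mem.cu -- contrast reading global memory directly with staging data in shared memory.
#include <cuda_runtime.h>

__global__ void scale_global(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];          // global loads, served through the L1/L2 caches on Fermi
}

__global__ void scale_shared(const float *in, float *out, int n) {
    __shared__ float tile[256];                // explicitly managed on-chip memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();                           // make the tile visible to the whole block
    if (i < n) out[i] = 2.0f * tile[threadIdx.x];
}
```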
-1
votes
0 answers

Is Nvidia Tesla P40 GPU supported by TensorFlow version 2.x?

I am interested in getting a 2nd-hand Nvidia Tesla P40 GPU. Before I buy it, I want to check if TensorFlow version 2.x supports it. I use TensorFlow version 2.x on Python 3.6.6. My question is this: Does TensorFlow version 2.x support this card?…
George
  • 121
  • 7
-1
votes
1 answer

CUDA unified memory pages accessed in CPU but not evicted from GPU

I was trying to understand how CUDA Unified Memory works. I have read the blog post on CUDA unified memory for beginners. I wrote the code given below: #include #include #include #include #include…
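
For the question above, a minimal Unified Memory sketch, assuming a toolkit recent enough for cudaMallocManaged. Whether pages are evicted from the GPU on CPU access is driver- and hardware-dependent: pre-Pascal GPUs migrate managed data at kernel launch, while Pascal and later fault and migrate at page granularity.

```
// um_sketch.cu -- the same managed pointer is touched on the GPU and then on the CPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add_one(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;
    int *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));   // single pointer usable from host and device
    for (int i = 0; i < n; ++i) data[i] = i;     // first touch on the CPU
    add_one<<<(n + 255) / 256, 256>>>(data, n);  // pages migrate to the GPU as needed
    cudaDeviceSynchronize();
    printf("data[0] = %d\n", data[0]);           // CPU access brings the pages back
    cudaFree(data);
    return 0;
}
```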
-1
votes
1 answer

Nested query params with Tesla

This is the URL I'm trying to hit: /example/?fields=*&filter[platform][eq]=111&order=date:asc&filter[date][gt]=1500619813000&expand=yes My code: get("/release_dates", query: [ fields: "*", order: "date:desc", expand:…
Sergio Tapia
  • 9,173
  • 12
  • 35
  • 59
-1
votes
1 answer

Theano / Chainer Not Reporting Correct Free VRAM on K80 with 12GB RAM

System: Ubuntu 16.04.2, cudnn 5.1, CUDA 8.0. I have Theano installed from git (latest version). When I run the generate sample from https://github.com/yusuketomoto/chainer-fast-neuralstyle/tree/resize-conv, it reports back out of memory whether CPU or…
Chris
  • 988
  • 3
  • 18
  • 30
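
For the question above it can help to ask the CUDA runtime directly rather than the framework. Note that a K80 board carries two GPUs with roughly 12 GB each (so each CUDA device reports about 12 GB, not 24 GB), and enabling ECC reserves a further slice. A small sketch:

```
// vram_check.cu -- report free/total memory per CUDA device straight from the runtime.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaSetDevice(i);
        size_t free_b = 0, total_b = 0;
        cudaMemGetInfo(&free_b, &total_b);       // what the driver sees, independent of Theano/Chainer
        printf("Device %d: %.0f MiB free of %.0f MiB\n",
               i, free_b / 1048576.0, total_b / 1048576.0);
    }
    return 0;
}
```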
-1
votes
1 answer

CUDA unknown error

I'm trying to run mainSift.cpp from CudaSift on an Nvidia Tesla M2090. First of all, as explained in this question, I had to change sm_35 to sm_20 in the CMakeLists.txt. Unfortunately, now this error is returned: checkMsg() CUDA error:…
justHelloWorld
  • 6,478
  • 8
  • 58
  • 138
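
For the question above: the Tesla M2090 is a Fermi part (compute capability 2.0), so a binary built only for sm_35 cannot run on it. A sketch for confirming what the device reports and whether the embedded device code actually loads (the exact error name varies by toolkit version):

```
// arch_check.cu -- check the device's compute capability against nvcc's -arch flag.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void probe() {}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s is sm_%d%d\n", prop.name, prop.major, prop.minor);

    probe<<<1, 1>>>();                           // fails (e.g. "invalid device function" /
    cudaError_t err = cudaDeviceSynchronize();   // "no kernel image") if no code for this arch was embedded
    printf("kernel launch: %s\n", cudaGetErrorString(err));
    return 0;
}
```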
-1
votes
1 answer

CUDA program running slower on Tesla K20 than GTX 965

I'm doing a project where I have to compare various GPU cards for performance analysis. I ran the same CUDA code for Canny edge detection on both GPUs and found that the GTX 965 is much faster (200%) than the Tesla K20. Also I observed that Tesla…
Srinivas
  • 176
  • 2
  • 12
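
For the question above, raw wall-clock comparisons are easy to skew: the first launch includes context creation and possibly JIT compilation, and the K20 ships with ECC enabled, which costs memory bandwidth. A hedged timing sketch using CUDA events with a warm-up launch (the kernel, sizes, and launch configuration are illustrative):

```
// time_kernel.cu -- time a kernel with CUDA events after a warm-up launch,
// so one-time startup overhead is not attributed to the slower-looking GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;                                 // contents are irrelevant for timing
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    dim3 block(256), grid((n + 255) / 256);
    saxpy<<<grid, block>>>(2.0f, x, y, n);        // warm-up launch
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    saxpy<<<grid, block>>>(2.0f, x, y, n);        // measured launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("saxpy: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```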
-1
votes
1 answer

cudaMemcpyToSymbol use details

I am trying to move data structures from host to constant memory on a Tesla C1060 (compute 1.3). With the following function: //mem.cu #include "kernel.cuh" int InitDCMem(SimuationStruct *sim) { SimParamGPU h_simparam; h_simparam.na =…
mrei
  • 121
  • 14
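
For the question above, a minimal sketch of copying a parameter struct into __constant__ memory with cudaMemcpyToSymbol. SimParam is a hypothetical stand-in for the question's SimParamGPU; the remark about string symbol names reflects the API change around CUDA 5.0.

```
// const_copy.cu -- copy a host-side parameter struct into __constant__ memory.
#include <cstdio>
#include <cuda_runtime.h>

struct SimParam { int na; float dz; };            // hypothetical stand-in for SimParamGPU

__constant__ SimParam d_simparam;                 // lives in constant memory on the device

__global__ void use_param(int *out) {
    *out = d_simparam.na;                         // all threads read the same cached value
}

int main() {
    SimParam h_simparam = {128, 0.01f};
    // The symbol is passed directly, not as a string; string symbol names were
    // accepted only by very old toolkits and removed around CUDA 5.0.
    cudaMemcpyToSymbol(d_simparam, &h_simparam, sizeof(SimParam));

    int *d_out, h_out = 0;
    cudaMalloc(&d_out, sizeof(int));
    use_param<<<1, 1>>>(d_out);
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("na on device: %d\n", h_out);
    cudaFree(d_out);
    return 0;
}
```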
-1
votes
1 answer

cudaMemcpy is too slow on Tesla C2075

I'm currently working on a server with 2 CUDA-capable GPUs: a Quadro 400 and a Tesla C2075. I made a simple vector addition test program. My problem is that while the Tesla C2075 is supposed to be more powerful than the Quadro 400, it takes more time to…
Sasha
  • 21
  • 2
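
For the question above, two things worth checking are which of the two GPUs is actually selected and whether the host buffer is pageable or pinned, since pinned (page-locked) memory usually copies considerably faster. A hedged sketch (the device index and buffer size are assumptions):

```
// memcpy_pinned.cu -- compare host-to-device copy time for pageable vs. pinned host memory,
// after explicitly selecting which GPU to measure on a multi-GPU machine.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

static float time_h2d(float *dst, const float *src, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    cudaSetDevice(0);                               // pick the GPU to measure explicitly
    const size_t n = 1 << 26;
    const size_t bytes = n * sizeof(float);

    float *d_buf, *pageable, *pinned;
    cudaMalloc(&d_buf, bytes);
    pageable = (float *)malloc(bytes);
    cudaMallocHost(&pinned, bytes);                 // page-locked host memory

    printf("pageable: %.2f ms, pinned: %.2f ms\n",
           time_h2d(d_buf, pageable, bytes),
           time_h2d(d_buf, pinned, bytes));

    cudaFreeHost(pinned);
    free(pageable);
    cudaFree(d_buf);
    return 0;
}
```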
-1
votes
1 answer

get nan with sm_20

I'm using a Tesla C2050. I want to run my code with "-arch=sm_20", but I get -nan, while the calculations are correct using "-arch=sm_13". How should I figure out the problem? Thanks, BehZad
-2
votes
1 answer

cudaMemcpyToSymbol just hangs and never returns. GPU processing at 100%. Code works fine on K40 but not on V100

I have the following code snippet: __constant__ int baseLineX[4000]; __constant__ int baseLineY[4000]; __constant__ int guideLineX[4000]; __constant__ int guideLineY[4000]; __constant__ int rectangleOffsets[8]; __constant__ float…
Aaron
  • 57
  • 5
-4
votes
1 answer

Does cuDNN support Tesla M60?

As the official website for cuDNN mentions the following: cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs. So the Tesla M60 is not mentioned there, although it has compute capability = 5…
H.H
  • 281
  • 1
  • 4
  • 12