Questions tagged [cuda-context]

A CUDA context holds state information for controlling computational work on a CUDA device, including memory allocations, loaded code modules, memory region mappings, etc.

46 questions
2
votes
1 answer

How can I determine whether a CUDA context is the primary one - cheaply?

You can (?) determine whether a CUDA context is the primary one by calling cuDevicePrimaryCtxRetain() and comparing the returned pointer to the context you have. But - what if nobody's created the primary context yet? Is there a cheaper way to…
einpoklum
  • 118,144
  • 57
  • 340
  • 684
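
A minimal sketch of the check in question, using only driver API calls (cuCtxGetCurrent, cuDevicePrimaryCtxRetain, cuDevicePrimaryCtxRelease); note that retaining is exactly the non-cheap part, since it will create the primary context if nobody has yet:

#include <cuda.h>
#include <cstdio>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext current = nullptr;
    cuCtxGetCurrent(&current);               // whatever context this thread holds

    CUcontext primary = nullptr;
    cuDevicePrimaryCtxRetain(&primary, dev); // creates the primary context if needed
    std::printf("current context %s the primary one\n",
                current == primary ? "is" : "is not");
    cuDevicePrimaryCtxRelease(dev);          // drop the extra reference
    return 0;
}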
2
votes
1 answer

When is a primary CUDA context destroyed by the Runtime API?

In this discussion of the runtime vs the driver API, it is said that Primary contexts are created as needed, one per device per process, are reference-counted, and are then destroyed when there are no more references to them. What counts as such…
einpoklum
  • 118,144
  • 57
  • 340
  • 684
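
For reference, a small sketch of the reference-counting behaviour the question asks about, assuming the documented semantics of the primary-context calls; the runtime API's own implicit reference is the part under discussion:

#include <cuda.h>
#include <cuda_runtime.h>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext ctx = nullptr;
    cuDevicePrimaryCtxRetain(&ctx, dev);   // refcount 1: primary context created
    cuDevicePrimaryCtxRetain(&ctx, dev);   // refcount 2
    cuDevicePrimaryCtxRelease(dev);        // refcount 1: context stays alive
    cuDevicePrimaryCtxRelease(dev);        // refcount 0: context is destroyed

    cudaFree(0);                           // runtime API initializes/retains it again
    cudaDeviceReset();                     // explicit teardown from the runtime side
    return 0;
}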
2
votes
1 answer

Check context of given resource

Let's imagine the situation that I have a lot of initialized resources, for example: streams, host and device memory, and events; some of them are initialized in the context of one GPU and the rest belong to the other GPU's context. Is there a way…
kokosing
  • 5,251
  • 5
  • 37
  • 50
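
For device memory specifically, a sketch of one way to recover the owning context, using cuPointerGetAttribute with CU_POINTER_ATTRIBUTE_CONTEXT (streams and events have no equivalent query in older CUDA versions, so they generally have to be tracked at creation time):

#include <cuda.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaSetDevice(0);
    void* buf = nullptr;
    cudaMalloc(&buf, 1 << 20);

    CUcontext owner = nullptr;
    cuPointerGetAttribute(&owner, CU_POINTER_ATTRIBUTE_CONTEXT,
                          reinterpret_cast<CUdeviceptr>(buf));
    std::printf("allocation belongs to context %p\n", (void*)owner);

    cudaFree(buf);
    return 0;
}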
2
votes
1 answer

Is cuDevicePrimaryCtxRetain() used for having persistent CUDA context objects between multiple processes?

Using only the driver API, for example, I have a profiling of a single process below (cuCtxCreate); the cuCtxCreate overhead is nearly comparable to a 300 MB data copy to/from the GPU. In the CUDA documentation here, it says (for cuDevicePrimaryCtxRetain): Retains the…
huseyin tugrul buyukisik
  • 11,469
  • 4
  • 45
  • 97
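
A sketch of the retain-the-primary-context pattern the question is weighing against cuCtxCreate; note that a primary context is still per-process, so whether this helps across processes is exactly what is being asked:

#include <cuda.h>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext primary = nullptr;
    cuDevicePrimaryCtxRetain(&primary, dev);  // cheap if the context already exists
    cuCtxSetCurrent(primary);                 // bind it to this thread and use it

    // ... cuMemAlloc / kernel launches / copies ...

    cuCtxSetCurrent(nullptr);
    cuDevicePrimaryCtxRelease(dev);
    return 0;
}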
2
votes
2 answers

A CUDA context was created on a GPU that is not currently debuggable

When I start CUDA debugging, Nsight returns this error: A CUDA context was created on a GPU that is not currently debuggable. Breakpoints will be disabled. Adapter: GeForce GT 720M This is my system and CUDA information. Please note that last…
Ali Motameni
  • 2,567
  • 3
  • 24
  • 34
2
votes
2 answers

CUDA context destruction at host process termination

If my host program [exits]/[segfaults]/[is killed], what are the corresponding behaviors regarding CUDA context destruction and the corresponding allocated resources? By "behavior" I mean the automatic GPU-driver-side mechanism, if I never explicitly call…
1
vote
0 answers

What are the new unique-id's for CUDA streams and contexts useful for?

CUDA 12 introduces two new API calls, cuStreamGetId() and cuCtxGetId() which return "unique ID"s associated with a stream or a context respectively. I'm struggling to understand why this is useful, or how this would be used. Are the handles for…
einpoklum
  • 118,144
  • 57
  • 340
  • 684
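
A minimal illustration of the two calls; one plausible use, assuming the IDs are not recycled the way handle values can be, is as stable keys for logging or map lookup:

#include <cuda.h>
#include <cstdio>

int main() {
    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuDevicePrimaryCtxRetain(&ctx, dev);
    cuCtxSetCurrent(ctx);

    CUstream stream;
    cuStreamCreate(&stream, CU_STREAM_DEFAULT);

    unsigned long long ctxId = 0, streamId = 0;
    cuCtxGetId(ctx, &ctxId);            // CUDA 12: unique ID for the context
    cuStreamGetId(stream, &streamId);   // CUDA 12: unique ID for the stream
    std::printf("context id = %llu, stream id = %llu\n", ctxId, streamId);

    cuStreamDestroy(stream);
    cuDevicePrimaryCtxRelease(dev);
    return 0;
}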
1
vote
1 answer

Passing cuda context to worker pthreads

I have some CUDA kernels I want to run in individual pthreads. I basically have to have each pthread execute, say, 3 CUDA kernels, and they must be executed sequentially. I thought I would try to pass each pthread a reference to a stream, and so…
Derek
  • 11,715
  • 32
  • 127
  • 228
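
A minimal sketch of the stream-per-thread idea, assuming the runtime API and post-CUDA-4.0 behaviour where all host threads share the device's (primary) context; the kernel names are placeholders:

#include <cuda_runtime.h>
#include <pthread.h>

__global__ void kernelA() {}
__global__ void kernelB() {}
__global__ void kernelC() {}

struct Work { int device; cudaStream_t stream; };

void* worker(void* arg) {
    Work* w = static_cast<Work*>(arg);
    cudaSetDevice(w->device);              // binds this thread to the shared context
    kernelA<<<1, 1, 0, w->stream>>>();     // same stream => kernels run sequentially
    kernelB<<<1, 1, 0, w->stream>>>();
    kernelC<<<1, 1, 0, w->stream>>>();
    cudaStreamSynchronize(w->stream);
    return nullptr;
}

int main() {
    cudaSetDevice(0);
    Work w{0, nullptr};
    cudaStreamCreate(&w.stream);
    pthread_t t;
    pthread_create(&t, nullptr, worker, &w);
    pthread_join(t, nullptr);
    cudaStreamDestroy(w.stream);
    return 0;
}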
1
vote
1 answer

Missing symbol: cuDevicePrimaryCtxRelease vs cuDevicePrimaryCtxRelease_v2

I'm trying to build the following program: #include <cuda.h> #include <iostream> int main() { const char* str; auto status = cuInit(0); cuGetErrorString(status, &str); std::cout << "status = " << str << std::endl; int…
einpoklum
  • 118,144
  • 57
  • 340
  • 684
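
A hypothetical reconstruction of this kind of test program (the original excerpt is truncated); recent cuda.h headers remap cuDevicePrimaryCtxRelease to cuDevicePrimaryCtxRelease_v2 via a macro, which is typically why the missing symbol differs depending on which headers and libcuda are paired at build time:

#include <cuda.h>
#include <iostream>

int main() {
    const char* str;
    auto status = cuInit(0);
    cuGetErrorString(status, &str);
    std::cout << "status = " << str << std::endl;

    int device_id = 0;                          // CUdevice is an int underneath
    CUcontext ctx;
    cuDevicePrimaryCtxRetain(&ctx, device_id);
    cuDevicePrimaryCtxRelease(device_id);       // the call whose symbol goes missing
    return 0;
}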
1
vote
1 answer

Get memory usage on a CUDA context

Is there a way that I can get CUDA context memory usage, rather than having to use cudaMemGetInfo, which only reports global information for a device? Or at least a way to get how much memory is occupied by the current application?
Pittie
  • 191
  • 1
  • 13
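
One possible workaround (an assumption, not something the question mentions) is NVML, which reports GPU memory usage per process; a sketch that picks out the current process, linking against -lnvidia-ml:

#include <nvml.h>
#include <unistd.h>
#include <cstdio>

int main() {
    nvmlInit();
    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);

    unsigned int count = 32;
    nvmlProcessInfo_t procs[32];
    nvmlDeviceGetComputeRunningProcesses(dev, &count, procs);

    pid_t me = getpid();
    for (unsigned int i = 0; i < count; ++i)
        if (procs[i].pid == (unsigned int)me)
            std::printf("this process uses %llu bytes of GPU memory\n",
                        (unsigned long long)procs[i].usedGpuMemory);

    nvmlShutdown();
    return 0;
}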
1
vote
1 answer

Get current CUDA contexts running on my GPU

Is there any way to discover, at a given time, how many processes are running on the GPU and possibly manage them (yield, resume, kill ... when necessary)? What I want to do is, while I run different programs, monitor each process's activities on the GPU…
Kanté
  • 21
  • 2
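
CUDA itself offers no call to enumerate other processes' contexts, so a sketch along these lines would go through NVML; pausing or killing the listed processes is then a matter of ordinary OS tools:

#include <nvml.h>
#include <cstdio>

int main() {
    nvmlInit();
    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);

    unsigned int count = 64;
    nvmlProcessInfo_t procs[64];
    nvmlDeviceGetComputeRunningProcesses(dev, &count, procs);
    for (unsigned int i = 0; i < count; ++i)
        std::printf("pid %u: %llu bytes of GPU memory\n", procs[i].pid,
                    (unsigned long long)procs[i].usedGpuMemory);

    nvmlShutdown();
    return 0;
}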
1
vote
1 answer

Persistence of modules in CUDA contexts

I have a MATLAB mex library that loads a problem specific cubin file at runtime. This mex function gets called a few hundred times by MATLAB. Is the kernel reloaded each time by CUDA when I call cuModuleLoad? Or is it somehow cached? If not, is…
Thomas Antony
  • 544
  • 1
  • 7
  • 17
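
A sketch of the usual workaround (an assumption, not stated in the question): cache the CUmodule in a static so repeated mex calls within the same process, and hence the same primary context, reuse it; the cubin path and kernel name below are hypothetical:

#include <cuda.h>

CUfunction get_kernel(const char* cubin_path, const char* kernel_name) {
    static CUmodule module = nullptr;        // survives across mex invocations
    if (module == nullptr)
        cuModuleLoad(&module, cubin_path);   // paid only on the first call

    CUfunction fn = nullptr;
    cuModuleGetFunction(&fn, module, kernel_name);
    return fn;
}

int main() {
    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuDevicePrimaryCtxRetain(&ctx, dev);
    cuCtxSetCurrent(ctx);
    CUfunction fn = get_kernel("kernels.cubin", "my_kernel");  // hypothetical names
    (void)fn;
    cuDevicePrimaryCtxRelease(dev);
    return 0;
}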
1
vote
1 answer

Good strategy Multi-GPU handling with CPU threads, cuda context creation overhead

We have a multi-GPU framework (on Windows) where one can specify 'jobs' (which also specify on which GPU they shall be run), which are then executed on a specific GPU. Currently, we have the approach that on startup of the framework we create one…
user2454869
  • 105
  • 1
  • 11
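
A sketch of one strategy (an assumption): retain one context per device once at startup and have worker threads merely bind the existing context per job with cuCtxSetCurrent, so the per-job cost is negligible:

#include <cuda.h>
#include <vector>

int main() {
    cuInit(0);
    int n = 0;
    cuDeviceGetCount(&n);

    std::vector<CUcontext> ctxs(n);
    for (int i = 0; i < n; ++i) {
        CUdevice dev;
        cuDeviceGet(&dev, i);
        cuDevicePrimaryCtxRetain(&ctxs[i], dev);   // one-time cost per device
    }

    // In each worker thread, per job:
    int target = 0;                                // GPU requested by the job
    cuCtxSetCurrent(ctxs[target]);                 // cheap; no new context created
    // ... run the job's kernels and copies ...

    for (int i = 0; i < n; ++i) {
        CUdevice dev;
        cuDeviceGet(&dev, i);
        cuDevicePrimaryCtxRelease(dev);
    }
    return 0;
}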
1
vote
2 answers

Share GPU buffers across different CUDA contexts

Is it possible to share a cudaMalloc'ed GPU buffer between different contexts (CPU threads) which use the same GPU? Each context allocates an input buffer which needs to be filled up by a pre-processing kernel which will use the entire GPU and then…
lessju
  • 148
  • 1
  • 12
1
vote
1 answer

Cannot create context on NVIDIA device with ECC enabled

On a node with 4 NVIDIA GPUs, I enabled ECC memory protection on device 0 (all others have ECC disabled). Since I enabled ECC on device 0, my application (CUDA, using just one device) hangs when it tries to create the context on this device 0…
ritter
  • 7,447
  • 7
  • 51
  • 84