
It is no secret that on CUDA 4.x the first call to cudaMalloc can be ridiculously slow (this has been reported several times), seemingly due to a bug in the CUDA drivers.

Recently, I noticed some weird behaviour: the running time of cudaMalloc directly depends on how many third-party CUDA libraries I link to my program (note that I do NOT use these libraries, I just link my program against them).

I ran some tests using the following program:

#include <cuda_runtime.h>

int main() {
  cudaSetDevice(0);
  unsigned int *ptr = 0;
  // First call that actually needs a context: lazy context creation happens here
  cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
  cudaFree(ptr);
  return 1;
}

The results are as follows (running time in seconds):

  • Linked with -lcudart -lnpp -lcufft -lcublas -lcusparse -lcurand: 5.852449

  • Linked with -lcudart -lnpp -lcufft -lcublas: 1.425120

  • Linked with -lcudart -lnpp -lcufft: 0.905424

  • Linked with -lcudart: 0.394558

According to gdb, the time is indeed spent inside my cudaMalloc call, so it is not caused by some library initialization routine.

I wonder if somebody has a plausible explanation for this?

1 Answer

In your example, the cudaMalloc call initiates lazy context establishment on the GPU. When runtime API libraries are linked in, their binary payloads have to be inspected and the GPU ELF symbols and objects they contain merged into the context. The more libraries there are, the longer you can expect this process to take. Furthermore, if there is an architecture mismatch in any of the cubins and you have a backwards-compatible GPU, it can also trigger driver recompilation of the device code for the target GPU. In a very extreme case, I have seen an old application linked with an old version of CUBLAS take tens of seconds to load and initialise when run on a Fermi GPU.
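If you want to check whether the recompilation path could apply in your case, one option is to query the compute capability of the target GPU and compare it against the architectures your linked libraries actually ship cubins for. This is only a minimal sketch using cudaGetDeviceProperties; it is not part of the original example:

#include <stdio.h>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query device 0; prop.major/prop.minor give the compute capability the
    // driver would have to JIT-compile PTX for if no matching cubin is present.
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed\n");
        return 1;
    }
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}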

You can explicitly trigger the lazy context establishment up front by issuing a cudaFree call like this:

#include <cuda_runtime.h>

int main() {
    cudaSetDevice(0);
    cudaFree(0); // context establishment happens here
    unsigned int *ptr = 0;
    cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
    cudaFree(ptr);
    return 1;
}

If you profile or instrument this version with timers, you should find that the first cudaFree call consumes most of the runtime and the cudaMalloc call becomes almost free.
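For illustration, here is one way to instrument that version with host-side timers. This is only a sketch assuming Linux and gettimeofday; the seconds() helper is mine, not something from the original code:

#include <stdio.h>
#include <sys/time.h>
#include <cuda_runtime.h>

// Wall-clock time in seconds (POSIX gettimeofday)
static double seconds() {
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main() {
    cudaSetDevice(0);

    double t0 = seconds();
    cudaFree(0);                       // context establishment happens here
    double t1 = seconds();

    unsigned int *ptr = 0;
    cudaMalloc((void **)&ptr, 2000000 * sizeof(unsigned int));
    double t2 = seconds();

    cudaFree(ptr);

    printf("cudaFree(0): %f s\n", t1 - t0);
    printf("cudaMalloc:  %f s\n", t2 - t1);
    return 0;
}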

talonmies
  • thanks @talonmies, indeed inserting cudaFree at the beginning takes up the whole running time. I originally tested this program on a GT650M graphics card (Kepler core), while on a Fermi GTX580 GPU it takes even longer, about 7 seconds. Still, NVIDIA could do something to optimize their context management: 7 seconds with full CPU workload seems to be too much –  Jul 26 '12 at 23:34
  • @asm: try CUDA 5 and see what it does. Now there is a proper device code linker in the tool chain, so some of the overhead at runtime might be shifted to compile and link time (or at least streamlined a bit). Also, if you found this answered your question, you might be so kind as to accept it so your question is marked as answered. – talonmies Jul 27 '12 at 07:05
  • Note that in CUDA 4.0, part of the reason CUDA initialization takes so long is because the driver is performing massive virtual memory allocations for Unified Virtual Addressing. – ArchaeaSoftware Jul 28 '12 at 21:19
  • Also note that the driver caches JIT-compiled kernels on disk. It has to redo the work if you change the hardware, but as far as kernel compilation, it should go a lot faster the second time than the first. – ArchaeaSoftware Jul 28 '12 at 21:20