
CMake 3.8 introduced native support for CUDA as a language. When a project has CUDA as one of its languages, CMake locates the CUDA toolchain (e.g. the nvcc binary).

As long as you only compile CUDA code, this is enough. But what if you want to compile a C++ target in that project which uses the CUDA headers? The CUDA include directories are not added with -I automatically, and CMakeCache.txt does not seem to contain the CUDA include path anywhere.

Do I actually have to run something like find_package(CUDA 9.0 REQUIRED) even when CMake itself has already located CUDA? Or can I obtain the include directory some other way?
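
To make the scenario concrete, here is a minimal sketch of such a project; the target and source file names are placeholders:

cmake_minimum_required(VERSION 3.8)
project(example LANGUAGES CXX CUDA)  # CMake locates the CUDA compiler (nvcc) here

# A plain C++ target whose source includes CUDA headers such as cuda_runtime_api.h.
# Nothing below adds the CUDA include directories to its compile flags.
add_executable(cpp_user_of_cuda uses_cuda_runtime.cpp)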

einpoklum
  • @havogt: Yes, and you can make that an answer. But is there also a similar variable for the CUDA libraries? – einpoklum Aug 09 '18 at 07:47
  • On a normal CUDA install, the include location (directory) for the CUDA libraries (i.e. their header files) is the same as the one for the CUDA toolkit. – Robert Crovella Aug 09 '18 at 14:20
  • @RobertCrovella: Yes, but I want to know what CMake knows, not make my own guesses which may be inconsistent with it. – einpoklum Aug 09 '18 at 14:36

2 Answers


The include directories used by the compiler set via CMAKE_CUDA_COMPILER can be retrieved from the CMake variable CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES.

For getting the libraries, the best way is probably to use find_library() in combination with CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES.

Example:

cmake_minimum_required(VERSION 3.9)
project(MyProject VERSION 1.0)
enable_language(CUDA)

# Look for the CUDA runtime library in the directories the CUDA compiler links implicitly.
find_library(CUDART_LIBRARY cudart ${CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES})

add_executable(
    binary_linking_to_cudart
    my_cpp_file_using_cudart.cpp
)
# Give the C++ target the CUDA toolkit's include directories.
target_include_directories(
    binary_linking_to_cudart
    PRIVATE
    ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES}
)
target_link_libraries(
    binary_linking_to_cudart
    ${CUDART_LIBRARY}
)

This issue is also discussed on the CMake bug tracker: Provide target libraries for cuda libraries.


Update: CMake 3.17.0 adds FindCUDAToolkit

Instead of doing find_library() manually, the best way as of CMake 3.17.0 would be to use the CUDAToolkit module.

find_package(CUDAToolkit)
add_executable(
    binary_linking_to_cudart
    my_cpp_file_using_cudart.cpp
)
# The imported target carries the CUDA include directories and the runtime library.
target_link_libraries(binary_linking_to_cudart PRIVATE CUDA::cudart)

For support with earlier CMake versions, you can ship the FindCUDAToolkit module file (with minimal changes) in your repository.
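
A minimal sketch of that approach, assuming you copy FindCUDAToolkit.cmake into a cmake/ subdirectory of your project (the directory name is just an example):

# Let find_package() discover the bundled FindCUDAToolkit.cmake on older CMake versions.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
find_package(CUDAToolkit REQUIRED)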

havogt
  • I think support for CUDA with MSVC was only added in 3.9; that's why I picked that. – havogt Aug 09 '18 at 15:19
  • Thank you for the update (which I happened upon accidentally...) this is much more convenient. Unfortunately, it'll be several years before we can assume CMake 3.17 is available on people's systems... :-( – einpoklum Sep 26 '20 at 11:20
  • Following my earlier comment: it is actually more reasonable than I believed to assume people have a newer CMake. The reason is that CMake binary distributions are quite flexible, and you rarely have to custom-compile. So getting CMake 3.17 (or by now, 3.21) is easy and painless. – einpoklum Jul 27 '21 at 08:09

These days, with CMake 3.18 and later, you can get most of what you need by examining the targets provided by find_package(CUDAToolkit) - which you do need to call even if CMake has already located the CUDA compiler. But actually, you may just depend on one of those targets and avoid using the include directories directly.
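
A minimal sketch of that approach, assuming the target only needs the CUDA runtime (target and file names are placeholders):

find_package(CUDAToolkit REQUIRED)

add_executable(app uses_cuda_runtime.cpp)
# The imported target propagates the CUDA include directories as usage
# requirements, so there is no need to pass them explicitly.
target_link_libraries(app PRIVATE CUDA::cudart)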

PS - If you happen to use cuda-api-wrappers (e.g. via find_package(cuda-api-wrappers)), it will take care of the dependencies for you.
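
For completeness, a sketch of that route; the exported target name used below is an assumption and may differ between versions of the library, so check its documentation:

find_package(cuda-api-wrappers CONFIG REQUIRED)

add_executable(app uses_cuda_runtime.cpp)
# Assumed exported target name; it pulls in the CUDA headers and runtime.
target_link_libraries(app PRIVATE cuda-api-wrappers::runtime-and-driver)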

einpoklum