
For my work it's particularly interesting to do integer calculations, which obviously are not what GPUs were made for. My question is: Do modern GPUs support efficient integer operations? I realize this should be easy to figure out for myself, but I find conflicting answers (for example yes vs no), so I thought it best to ask.

Also, are there any libraries/techniques for arbitrary precision integers on GPUs?

gspr

1 Answer


First, you need to consider the hardware you're using: GPU performance varies widely from one vendor to another.
Second, it also depends on the operations in question: for example, additions may be faster than multiplications.

In my case, I'm only using NVIDIA devices. For this kind of hardware, the official documentation claims equivalent throughput for 32-bit integers and 32-bit single-precision floats on the newer architecture (Fermi). The previous architecture (Tesla) offered equivalent performance for 32-bit integers and floats only for additions and logical operations.
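
As a concrete illustration (a minimal sketch, not part of the original answer): 32-bit integer arithmetic is written in CUDA C exactly like its floating-point counterpart, and on Fermi-class and later hardware it executes natively on the CUDA cores. The kernel name and sizes below are arbitrary.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal sketch: per-element 32-bit integer multiply-add (y = a*x + y).
    // The integer instructions here run on the same CUDA cores as 32-bit
    // floating-point arithmetic on Fermi and later.
    __global__ void int_axpy(int n, int a, const int *x, int *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(int);

        int *hx = new int[n], *hy = new int[n];
        for (int i = 0; i < n; ++i) { hx[i] = i; hy[i] = 1; }

        int *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        int_axpy<<<(n + 255) / 256, 256>>>(n, 3, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[42] = %d\n", hy[42]);   // expect 3*42 + 1 = 127

        cudaFree(dx); cudaFree(dy);
        delete[] hx; delete[] hy;
        return 0;
    }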

But once again, this may not be true depending on the device and instructions you use.
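
Regarding the second part of the question, arbitrary precision (not covered in the original answer): the usual technique is to store a big integer as an array of 32-bit limbs and propagate carries explicitly; GPU bignum libraries typically refine this with the PTX add-with-carry instructions (add.cc/addc) and by spreading one number across several threads. Below is a rough, hypothetical sketch of the portable version with one thread per big integer; the kernel name and limb count are made up for illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define LIMBS 8   // 8 x 32-bit limbs = 256-bit integers, least significant limb first

    // Hypothetical sketch: each thread adds one multi-word integer to another,
    // propagating the carry by hand instead of relying on a hardware carry flag.
    __global__ void bignum_add(int n, const unsigned *a, const unsigned *b, unsigned *c)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        const unsigned *pa = a + i * LIMBS;
        const unsigned *pb = b + i * LIMBS;
        unsigned *pc = c + i * LIMBS;

        unsigned carry = 0;
        for (int k = 0; k < LIMBS; ++k) {
            unsigned s = pa[k] + pb[k];
            unsigned carry_out = (s < pa[k]);   // limb addition overflowed
            unsigned t = s + carry;
            carry_out |= (t < s);               // adding the incoming carry overflowed
            pc[k] = t;
            carry = carry_out;
        }
        // The final carry is dropped: addition modulo 2^(32*LIMBS).
    }

    int main()
    {
        unsigned ha[LIMBS], hb[LIMBS] = {1}, hc[LIMBS];       // b = 1
        for (int k = 0; k < LIMBS; ++k) ha[k] = 0xFFFFFFFFu;  // a = 2^256 - 1

        unsigned *da, *db, *dc;
        cudaMalloc(&da, sizeof(ha));
        cudaMalloc(&db, sizeof(hb));
        cudaMalloc(&dc, sizeof(hc));
        cudaMemcpy(da, ha, sizeof(ha), cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);

        bignum_add<<<1, 1>>>(1, da, db, dc);
        cudaMemcpy(hc, dc, sizeof(hc), cudaMemcpyDeviceToHost);

        for (int k = LIMBS - 1; k >= 0; --k) printf("%08x", hc[k]);
        printf("\n");   // all zeros: the carry rippled through every limb

        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }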

jopasserat
    I think one thing to note is that, yes, on almost all architectures all CUDA cores on a GPU can be used for integer operations, but there is no fused multiply-add for integers, so the peak integer operations per second is only half the peak FLOPS. – mxmlnkn Feb 06 '16 at 03:50