I'm doing my PhD research in A.I. and I've reached the point where I have to start using CUDA libraries for my testing platform. I've played with CUDA before and I have a basic understanding of how GPGPU works, etc., but I'm concerned about floating-point precision.
Looking at the GTX 680 I see FP64 listed at 1/24 of FP32, whereas the Tesla has full-rate FP64 at 1.31 TFLOPS. I understand very well that one is a gaming card while the other is a professional card.
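If I'm reading the specs right (I'm quoting the GTX 680's ~3.1 TFLOPS FP32 peak from memory, so take the exact numbers with a grain of salt), the back-of-envelope gap is roughly: 3.1 TFLOPS / 24 ≈ 0.13 TFLOPS FP64 per GTX 680, versus 1.31 TFLOPS FP64 on the Tesla, i.e. still around a 5x difference in double-precision throughput even with two gaming cards.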
The reason I am asking is simple: I cannot afford a Tesla, but I may be able to get two GTX 680s. While the main goal is to have as many CUDA cores and as much memory as possible, floating-point precision may become a problem.
My questions are:
- How much of a compromise is the reduced float precision in gaming GPUs?
- Isn't 1/24 of FP32 too low, especially compared to the previous Fermi generation's 1/8 of FP32?
- Is there a risk of wrong computation results due to the lower float precision? I.e., in SVMs, VSM, matrix operations, Deep Belief Networks, etc., could I have issues with the algorithms' results because of the reduced floating point, or does it simply mean that operations will take longer / use more memory? (I've sketched a small float-vs-double test right after this list to show what I mean by "wrong results".)
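To make that last question concrete, here's the kind of toy comparison I had in mind. It's only a minimal sketch: the constant 0.1f data and the single-thread kernels are placeholders chosen to expose accumulation error, not my actual workload.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread walks the whole array so the accumulation order is identical
// in both kernels; only the accumulator type (float vs double) differs.
__global__ void sum_float(const float* x, int n, float* out) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) acc += x[i];
    *out = acc;
}

__global__ void sum_double(const float* x, int n, double* out) {
    double acc = 0.0;
    for (int i = 0; i < n; ++i) acc += (double)x[i];
    *out = acc;
}

int main() {
    const int n = 1 << 24;              // ~16M elements
    std::vector<float> h(n, 0.1f);      // exact sum would be n * 0.1

    float *d_x, *d_sf; double *d_sd;
    cudaMalloc(&d_x,  n * sizeof(float));
    cudaMalloc(&d_sf, sizeof(float));
    cudaMalloc(&d_sd, sizeof(double));
    cudaMemcpy(d_x, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    sum_float <<<1, 1>>>(d_x, n, d_sf);
    sum_double<<<1, 1>>>(d_x, n, d_sd);

    float sf; double sd;
    cudaMemcpy(&sf, d_sf, sizeof(float),  cudaMemcpyDeviceToHost);
    cudaMemcpy(&sd, d_sd, sizeof(double), cudaMemcpyDeviceToHost);

    printf("float  accumulator: %f\n", sf);
    printf("double accumulator: %f\n", sd);
    printf("reference (n*0.1):  %f\n", n * 0.1);

    cudaFree(d_x); cudaFree(d_sf); cudaFree(d_sd);
    return 0;
}
```

I expect the double accumulator to stay close to n * 0.1 while the float one drifts noticeably; that kind of divergence in the final numbers is the "wrong result" I'm worried about, as opposed to the computation merely running slower.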
Thanks!