5

Is there a seamless way to get the best fp16 performance on NVIDIA V100/P100? E.g., I have a model and implementation trained in fp32, and the app works perfectly. Now I'd like to try fp16. Is there a simple way to enable this?

xiaoyong
  • I have a very similar issue. I want to take my trained fp32 model and run inference with fp16. Did you figure it out, or have any idea how to do it? – user179156 Nov 02 '18 at 05:57
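The question doesn't name a framework, so as one hedged illustration: in PyTorch, an fp32-trained model can be cast to fp16 for inference by calling `.half()` on the model and its inputs (the model below is a hypothetical stand-in, not the asker's network):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained fp32 model.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Cast every parameter and buffer to fp16 and switch to inference mode.
model = model.half().eval()

# fp16 matmuls only see a speedup on GPUs with fast half-precision
# units (e.g. P100, V100 Tensor Cores), so run on CUDA if available.
if torch.cuda.is_available():
    model = model.cuda()
    x = torch.randn(1, 64, device="cuda").half()  # inputs must be fp16 too
    with torch.no_grad():
        out = model(x)
    print(out.dtype)  # torch.float16
```

Note that this is plain weight casting, not mixed precision: all activations stay in fp16, which can lose accuracy on numerically sensitive layers.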

1 Answer

0

Try this method. I found that inference with fp16 is faster on Pascal-architecture GPUs — can someone give an explanation?

7oud