5

I've read what information I could find on the Internet about the differences between these two series of cards, but I can't shake the feeling that much of it is advertising. While the most powerful GeForce costs roughly $700, Tesla starting prices are around $2500, and that's quite a difference.

While the ECC memory listed among the biggest advantages is interesting, I doubt it accounts for that difference. The second most highlighted point is much better double-precision performance, but I will be focusing mostly on integer operations, so that doesn't really matter to me. Top GeForce cards have plenty of memory too, and while both series use GDDR5, GeForce memory bandwidth is actually higher than Tesla's.

Does anyone have personal experience comparing these two series objectively? I suspect that most of the Tesla price is tied to premium tools and support rather than to raw performance.
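
In case it helps frame the comparison, here is the kind of check I have in mind: a small sketch (my own illustration, nothing official) that asks the CUDA runtime for the properties mentioned above, namely the ECC flag, memory size and theoretical peak bandwidth, for every installed device:

```cpp
// compare_devices.cu -- illustrative sketch: enumerate CUDA devices and print
// the properties discussed above. Field names come from the standard
// cudaDeviceProp struct; the bandwidth figure is the usual theoretical-peak
// estimate (memory clock * bus width * 2 for DDR), not a measured benchmark.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // memoryClockRate is in kHz, memoryBusWidth in bits
        double bandwidthGBs = 2.0 * prop.memoryClockRate * 1e3
                              * (prop.memoryBusWidth / 8.0) / 1e9;
        printf("Device %d: %s (compute capability %d.%d)\n",
               dev, prop.name, prop.major, prop.minor);
        printf("  Global memory : %.1f GiB\n",
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        printf("  ECC enabled   : %s\n", prop.ECCEnabled ? "yes" : "no");
        printf("  Peak bandwidth: %.1f GB/s\n", bandwidthGBs);
    }
    return 0;
}
```

The bandwidth number is an upper bound rather than anything measured, but it at least puts both families on the same footing.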

Raven
  • I think this is best suited for another Stack Exchange network and not Stack Overflow. – ericosg Jun 12 '12 at 14:26
  • There is a similar question about Quadro vs GeForce [here](http://stackoverflow.com/questions/10532978/difference-between-nvidia-quadro-and-geforce-cards/10547517). – Hristo Iliev Jun 12 '12 at 15:14
  • @ericosg yeah I suppose, I'll leave it open for a day and if nothing comes up I'll delete it and try elsewhere. And thanks for the link Hristo, it helped a bit – Raven Jun 12 '12 at 15:25

2 Answers

0

This answer may be out of date, as it's been a year or two since I worked on GPGPU. However, back when Tesla was first released, those were the only cards that natively supported full double-precision computation for all operations, whereas the GeForce cards emulated many double-precision computations.

For scientific calculations you would find that CUDA code using double-precision variables compiled just fine for GeForce, but the results you got back would actually have lower precision.
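
To make that concrete, here is a minimal double-precision kernel of the sort I have in mind (names and sizes are purely illustrative). The same source builds for either family as long as you target an architecture that has double-precision hardware, for example nvcc -arch=sm_13; toolchains targeting anything below that demote double to float (with a warning), which is one way lower-precision results can sneak in:

```cpp
// dp_kernel.cu -- illustrative double-precision AXPY kernel (hypothetical names).
// Build for a double-precision architecture, e.g.: nvcc -arch=sm_13 dp_kernel.cu
#include <cuda_runtime.h>

__global__ void axpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // evaluated in double precision on sm_13+
}

int main() {
    const int n = 1 << 20;
    double *x = 0, *y = 0;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    // ... copy real data into x and y with cudaMemcpy before a real run ...
    axpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```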

Hope this is helpful,

Dr. Andrew Burnett-Thompson
  • That was never the case. When Tesla cards were first released (the C870, D870, S870), they didn't support double precision floating point at all, exactly like their GeForce counterparts. When double precision support was introduced (the C1060, S1070) it was identical in every way to the equivalent GeForce (GTX 280). When Fermi was released, it was announced that the double precision *throughput* of the GeForce cards (GTX 480) would be artificially capped to one quarter of the equivalent Tesla cards (C2050, S2070). There has never been "double precision emulation" on any CUDA hardware. – talonmies Jun 13 '12 at 09:34
  • If you want, you can post an answer to the question – @Raven wants to know what the advantages are to Tesla over GeForce. If it's any good I might vote it up! :P – Dr. Andrew Burnett-Thompson Jun 13 '12 at 09:41
-1

NVIDIA's official answer to this question is here

swiftBoy
  • 35,607
  • 26
  • 136
  • 135
  • Thanks for the answer, but I've already seen this comparison, and if you look at those benefits they aren't worth the price difference. For example, better DP performance? It's just artificially crippled on GeForce. But yes, they still advertise it as a benefit of Tesla, and unless someone does heavy modding to a GeForce that remains a fact. Still, it seems to me like NVIDIA is pushing people to buy Tesla with no real advantages, just for marketing purposes. – Raven Jun 13 '12 at 13:26