
I have scoured the web for documentation on the computational expense of the different Floating Point Model options in Microsoft Visual Studio 2013, but so far my search has been fruitless. What I want to know is: how much more computationally expensive is /fp:precise than /fp:fast for different mathematical operations? Some example operations are below (obviously these are not the actual operations I am using, just examples I wrote in a few minutes for the sake of clarity; the code is probably not very good):

For all examples:

double array1[100]; // then fill this with a bunch of numbers somehow
double array2[100]; // then fill this with a different bunch of numbers somehow
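
(One illustrative way to do that fill, outside of any loop that gets timed, is with srand()/rand() from <cstdlib>; the exact values are arbitrary, I just keep array2 away from zero so the division example is safe:)

srand(42);  // any fixed seed, so both builds see the same data
for (int i = 0; i < 100; i++)
{
  array1[i] = rand() / (double)RAND_MAX;        // arbitrary values in [0, 1]
  array2[i] = 1.0 + rand() / (double)RAND_MAX;  // values in [1, 2], so no division by zero
}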

Example 1 (adding up a bunch of random doubles):

double sum = 0;
for (int i = 0; i < 100; i++)
{
  sum += array1[i];
}

Example 2 (subtracting a bunch of random doubles):

double diff = 0;
for (int i = 0; i < 100; i++)
{
  diff -= array1[i];
}

Example 3 (multiplying two doubles):

double prod;
for (int i = 0; i < 100; i++)
{
  prod = array1[i] * array2[i];
}

Example 4 (dividing two doubles):

double quot;
for (int i = 0; i < 100; i++)
{
  quot = array1[i] / array2[i];
}

Other examples include combinations of these operations. Is there a way I can use Microsoft Visual Studio 2013 to measure the computational expense by running the same code with the Floating Point Model set first to /fp:precise and then to /fp:fast?
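
One crude approach I could try (this is just a sketch, not something I have verified) is to build the identical source twice, once with /fp:precise and once with /fp:fast, and time each example loop with std::chrono, comparing the two builds run for run. The repetition count, the fill values, and the volatile sink below are arbitrary choices, only there to make the work measurable and to keep the optimizer from discarding it:

#include <chrono>
#include <cstdio>

volatile double sink;  // prevents the compiler from optimizing the loop away

int main()
{
  double array1[100];
  for (int i = 0; i < 100; i++)
    array1[i] = 1.0 + i * 0.5;  // arbitrary nonzero fill

  auto start = std::chrono::high_resolution_clock::now();
  double sum = 0;
  for (int rep = 0; rep < 1000000; rep++)  // repeat so the loop takes measurable time
  {
    for (int i = 0; i < 100; i++)
      sum += array1[i];
  }
  auto stop = std::chrono::high_resolution_clock::now();
  sink = sum;

  long long us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
  printf("sum loop: %lld microseconds\n", us);
  return 0;
}

The other example loops could be wrapped in the same kind of timing block, but I do not know whether Visual Studio offers a better built-in way to do this kind of comparison.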

The following link may be helpful:

http://msdn.microsoft.com/en-us/library/aa289157%28v=vs.71%29.aspx#floapoint_topic3

PS: I know that using /fp:fast has its risks (see my prior question, "Possible loss of precision between two different compiler configurations"). What I am trying to determine is the difference in computational expense I can expect to see between building with /fp:precise and building with /fp:fast.

  • Actually invoking `rand` is going to make these loops difficult/impossible for the compiler to vectorize, so these are not very good test cases. Precompute an array (or two) of doubles and operate on that if you really want to observe differences in performance and results. (But in general, your question is too application-dependent for a meaningful answer. As is often the case, there is no substitute for profiling your actual code, IMO.) – Nemo Mar 25 '14 at 15:33
  • The only "risk" in using /fp:fast is to invest too heavily in the values of the random noise digits, the ones you'd produce if you display values with more significant digits than the floating point type can support. Digits 8 and up for float, 16 and up for double. Or earlier if your calculation loses significant digits. Using /fp:precise makes them less random, programmers tend to like that. Not more accurate, less actually. It is expensive on non-trivial calculations as it suppresses code optimization that keeps values stored in the FPU. Your repro code doesn't exercise that. – Hans Passant Mar 25 '14 at 15:33
  • Okay, I just used rand() because I thought it would be easier to read/understand than typing in a bunch of doubles. Your point is well-taken and I have modified the code to perhaps make it more appropriate. Is there a link you can point me to that shows how to profile my code? I am guessing there is a tool within Microsoft Visual Studio 2013 for that? – skankinkid33 Mar 25 '14 at 15:41
  • More specifically, I want to profile the performance of a unit test where I observed different performance due to /fp:fast being used instead of /fp:precise. – skankinkid33 Mar 25 '14 at 15:49

0 Answers