I have scoured the web for documentation on the computational expense of the different Floating Point Model options in Microsoft Visual Studio 2013, but so far my search has been fruitless. What I want to know is: how much more computationally expensive is /fp:precise than /fp:fast for different mathematical operations? Here are some example operations (obviously, these are not the actual operations I am using, just quick examples I wrote in a few minutes for the sake of clarity, so the code is probably not very good):
For all examples:
double array1[100]; // then fill this with a bunch of numbers somehow
double array2[100]; // then fill this with a different bunch of numbers somehow
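For instance, the arrays could be filled with random values using &lt;random&gt; (a sketch only; the exact fill method doesn't matter for the timing comparison, and fillArrays is just an illustrative name):

```cpp
#include <random>

double array1[100];
double array2[100];

// Fill both arrays with random doubles in [1.0, 100.0). A fixed seed
// keeps the data reproducible, so the /fp:precise and /fp:fast builds
// operate on identical inputs.
void fillArrays()
{
    std::mt19937 gen(12345u);
    std::uniform_real_distribution<double> dist(1.0, 100.0);
    for (int i = 0; i < 100; i++)
    {
        array1[i] = dist(gen);
        array2[i] = dist(gen);
    }
}
```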
Example 1 (adding up a bunch of random doubles):
double sum = 0;
for (int i = 0; i < 100; i++)
{
    sum += array1[i];
}
Example 2 (subtracting a bunch of random doubles):
double diff = 0;
for (int i = 0; i < 100; i++)
{
    diff -= array1[i];
}
Example 3 (multiplying two doubles):
double prod;
for (int i = 0; i < 100; i++)
{
    prod = array1[i] * array2[i];
}
Example 4 (dividing two doubles):
double quot;
for (int i = 0; i < 100; i++)
{
    quot = array1[i] / array2[i];
}
Other examples include combinations of these operations. Is there a way I can use Microsoft Visual Studio 2013 to determine the computational expense by running the same code with the Floating Point Model set to /fp:precise and /fp:fast?
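One way to measure this is a simple timing harness: compile the same translation unit twice, once with /fp:precise and once with /fp:fast, run many repetitions of each loop, and compare the elapsed times. Here is a sketch using &lt;chrono&gt; (the repetition count and test data are placeholders, and the function names are my own):

```cpp
#include <chrono>

double array1[100];
double array2[100];

// Fill the arrays with arbitrary but deterministic test data so both
// builds (/fp:precise and /fp:fast) operate on identical inputs.
void fillTestData()
{
    for (int i = 0; i < 100; i++)
    {
        array1[i] = 1.0 + i * 0.5;
        array2[i] = 2.0 + i * 0.25;
    }
}

// Time 'reps' repetitions of the Example 1 summation loop and return
// the elapsed time in microseconds. The sum is stored through a
// volatile pointer so the compiler cannot optimize the loop away.
double timeSumLoop(int reps, volatile double* result)
{
    std::chrono::high_resolution_clock::time_point start =
        std::chrono::high_resolution_clock::now();
    double sum = 0;
    for (int r = 0; r < reps; r++)
    {
        for (int i = 0; i < 100; i++)
        {
            sum += array1[i];
        }
    }
    *result = sum;
    std::chrono::high_resolution_clock::time_point stop =
        std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::micro>(stop - start).count();
}
```

Compile this together with a main that calls fillTestData() and prints the result of timeSumLoop(), once with cl /O2 /fp:precise and once with cl /O2 /fp:fast, then compare the two timings. Analogous functions can be written for the subtraction, multiplication, and division loops. Note that /fp:fast may auto-vectorize or reassociate such loops, so differences are often larger on reductions like Example 1 than on the element-wise Examples 3 and 4.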
The following link may be helpful:
http://msdn.microsoft.com/en-us/library/aa289157%28v=vs.71%29.aspx#floapoint_topic3
PS: I know that using /fp:fast has its risks (see my prior question at Possible loss of precision between two different compiler configurations). What I am trying to determine is the additional computational expense I can expect to see if I switch the Floating Point Model from /fp:fast to /fp:precise.