I find that Fortran gives a different result in a simple calculation compared with C++, Python, or Julia.

The following code in Fortran

program array_slice
    implicit none
    double precision :: sigma = 0.30
    double precision :: rho = 0.75
    write(*,*) dexp(dble(3.) * sigma / dsqrt(dble(1.) - rho*rho))
end program array_slice

saved as test.f90, compiled using ifort test.f90 -o test, returns

   3.89881303534984   

However, the following code in C++

#include <iostream>
#include <iomanip>
#include <cmath>

int main() {
    double sigma = 0.3;
    double rho = 0.75;
    std::cout << std::setprecision(15) << std::exp(3. * sigma / std::sqrt(1. - rho*rho)) << std::endl;
    return 0;
}

saved as testcpp.cpp, compiled using g++ testcpp.cpp -o testcpp, returns

3.89881282454784

Similarly, the following code in Python 3.8.5

import numpy as np
sigma = 0.3
rho = 0.75
print(np.exp(3 * sigma / np.sqrt(1 - rho**2)))

returns

3.8988128245478393

Likewise, the following code in Julia 1.6.1

σ = 0.3
ρ = 0.75
println(exp(3. * σ / sqrt(1 - ρ^2)))

returns

3.8988128245478393

All of the above tests were done in WSL on 64-bit Windows 10.

I guess Fortran is using single precision instead of double precision, but I don't know why, since I've already declared sigma and rho as double precision and used dexp and dsqrt instead of exp and sqrt.
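A quick way to check this hypothesis would be to print the stored values of sigma and rho with an explicit format (a minimal sketch; the program name and the F22.18 format are arbitrary choices):

program check_precision
    implicit none
    double precision :: sigma = 0.30
    double precision :: rho = 0.75
    ! print the stored values to 18 decimal places
    write(*,'(F22.18)') sigma
    write(*,'(F22.18)') rho
end program check_precision

If sigma prints as roughly 0.3000000119 rather than 0.2999999999..., the literal 0.30 was rounded to single precision before being widened to double; 0.75, by contrast, is exactly representable in binary, so rho looks the same either way.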

L. Francis Cong
  • I think Fortran (and C) are the only ones doing the right thing by default. Only when using Python's `decimal.Decimal` class does it give the same output as Fortran. `print(np.exp(3 * 0.3 / np.sqrt(1 - 0.75**2)))` gives 3.8988128245478393, whereas `print(np.exp(3 * Decimal('0.3') / np.sqrt(1 - Decimal('0.75')**2)))` gives 3.898812824547840006158679458 – MarkM Aug 14 '21 at 14:51
  • According to Wolfram Alpha ([here](https://www.wolframalpha.com/input/?i=exp%5B3+*+0.3+%2F+sqrt%5B1-+0.75%5E2%5D%5D)), it seems like Fortran is less precise – jling Aug 14 '21 at 14:55
  • 2
    @jling Sure, and the duplicate links explain why. `0.75` and `0.30` are **default precision** constants. The default precision is the single precision. That the variables are declared as `double precision` does not matter, they are being assigned single precision values. 0.3 is not exactly representable in binary floating point numbers. – Vladimir F Героям слава Aug 14 '21 at 19:30
  • @VladimirF Thank you! Using `dble(0.75)` and `dble(0.30)` in the initialization gives the same number as the others. – L. Francis Cong Aug 16 '21 at 01:13
  • 1
    @L.FrancisCong But even that is not the righ to do. Please do see the links you were given. Use `0.30d0`. – Vladimir F Героям слава Aug 16 '21 at 06:49
  • @VladimirF Thank you! – L. Francis Cong Aug 18 '21 at 19:09
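For reference, here is a sketch of the corrected program following the advice in the comments. The d0-suffixed literals are the only substantive change; dexp, dsqrt, and the dble() calls are dropped because the generic exp and sqrt already resolve to their double-precision versions when given double-precision arguments:

program array_slice
    implicit none
    ! the d0 suffix makes the literals double precision, so 0.30 is not
    ! rounded to single precision before the assignment
    double precision :: sigma = 0.30d0
    double precision :: rho = 0.75d0
    ! generic exp and sqrt resolve by argument kind, so dexp/dsqrt are unnecessary
    write(*,*) exp(3.0d0 * sigma / sqrt(1.0d0 - rho*rho))
end program array_slice

Compiled the same way with ifort, this should print a value matching the other languages, approximately 3.8988128245478.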

0 Answers