I have a C++ program that can be compiled for single or double precision floating point numbers. Similar to what is explained here (Switching between float and double precision at compile time), I have a header file which defines:
typedef double dtype;
or:
typedef float dtype;
depending on whether single or double precision is required by the user. When declaring variables and arrays I always use the data type dtype, so the correct precision is used throughout the code.
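For context, the header looks roughly like this (USE_DOUBLE is just a placeholder name for the build flag I actually use):

#ifndef PRECISION_H
#define PRECISION_H

#ifdef USE_DOUBLE
typedef double dtype;   // double-precision build
#else
typedef float dtype;    // single-precision build
#endif

#endif // PRECISION_H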
My question is how can I, in a similar fashion, set the data type of hard-coded numbers in the code, like for instance in this example:
dtype var1 = min(var0, 3.65);
As far as I know, 3.65 is a double-precision literal by default and becomes single precision only if I write:
dtype var1 = min(var0, 3.65f);
But is there a way to define a literal, for instance like this:
dtype var1 = min(var0, 3.65_dt);
that resolves to either float or double at compile time, so that hard-coded numbers in the code also have the right precision?
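Something like the following is what I have in mind, although I have not tested whether this is the idiomatic way to do it (the _dt suffix and the helper function are just for illustration):

#include <algorithm>
#include "precision.h"  // provides the dtype typedef (see above)

// Sketch of a user-defined literal that converts the constant
// to whatever precision dtype is configured to.
constexpr dtype operator"" _dt(long double v)
{
    return static_cast<dtype>(v);
}

dtype example(dtype var0)
{
    return std::min(var0, 3.65_dt);  // the literal now has type dtype
}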
Currently, I cast the number to dtype like this:
dtype var1 = min(var0, (dtype)3.65);
but I am concerned that this might create overhead in the single-precision case, since the program might first create a double-precision number and then cast it to single precision. Is this indeed the case?