We have a setup where a remote C++ client invokes an Oracle function (via the OCCI library), passing it some data, which the function then inserts into a table. One of the values passed to the function is AMOUNT, a floating-point number.
In the C++ client, the AMOUNT variable is declared as a double. Both the Oracle function and the table define the data type for this column as NUMBER, without any explicit precision or scale.
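We don't control the client, but we believe the call is roughly along these lines (a minimal sketch; the function name insert_amount, the integer return code, and the surrounding plumbing are placeholders, not the actual client code):

```cpp
#include <occi.h>
using namespace oracle::occi;

// Hypothetical sketch of the client-side call. The function name
// (insert_amount) and the return-code convention are assumptions.
void callInsert(Connection* conn, double amount) {
    Statement* stmt = conn->createStatement(
        "BEGIN :1 := insert_amount(:2); END;");
    stmt->registerOutParam(1, OCCIINT, sizeof(int)); // function's return code
    stmt->setDouble(2, amount);                      // AMOUNT bound as a C++ double
    stmt->executeUpdate();
    int rc = stmt->getInt(1);                        // read back the return code
    conn->terminateStatement(stmt);
    (void)rc;
}
```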
We are seeing frequent mismatches between what the client sends and what we receive and store in our database. For example, the client claims to be passing the value 35.6, but what we see in our table is 35.59999847.
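Interestingly, the stored value matches the single-precision (float) representation of 35.6 rather than the double one, which a quick standalone check (independent of Oracle and OCCI) confirms:

```cpp
#include <cstdio>

int main() {
    double d = 35.6;   // what the client says it sends
    float  f = 35.6f;  // the same literal narrowed to single precision

    std::printf("double: %.8f\n", d);  // prints 35.60000000
    std::printf("float:  %.8f\n", f);  // prints 35.59999847 -- what we store
    return 0;
}
```

So one suspicion is that the value is being narrowed to float somewhere along the way, though we cannot verify this on the client side.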
We are supposed to store the values with an accuracy of up to 6 decimal places. If I change the data type of the AMOUNT column from NUMBER to NUMBER(38,6), I get 35.599998; if I change it to NUMBER(38,5), I end up with 35.6. That part at least looks consistent with Oracle simply rounding the incoming value to the column's scale.
Can someone suggest what might be happening here? I know it is not advisable to treat amounts or prices as floating-point numbers, and that scaled whole numbers should be used instead, but in this case we have no control over the C++ client. And the issue affects a very large portion of our data (more than 50% of the rows).
Also, the issue is not limited to fractional values. We see inconsistencies even with large whole numbers (for example, a value passed as 1000000000 gets stored as 1000000004).
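That particular result is not one we can reproduce with a simple float narrowing, but it is worth noting that single precision cannot represent every integer at this magnitude anyway (illustrative only; we don't know what the client actually does):

```cpp
#include <cstdio>

int main() {
    // float has a 24-bit significand, so above 2^24 (16,777,216)
    // consecutive integers are no longer all representable; near 1e9
    // the representable values are 64 apart.
    float g = 1000000001.0f;
    std::printf("%.1f\n", g);  // prints 1000000000.0
    return 0;
}
```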