In our Oracle database, I have a table (say MYTABLE) with a column VALUE_NUMBER of type NUMBER (nullable, no precision or scale specified). This table contains the value 1178.2.
When I retrieve data from this column using standard ADO.NET (actually ODP.NET) and convert the decimal value to a string, I get '1178.20'. In contrast, converting the constant 1178.2M to a string produces '1178.2'.
Digging deeper, I compared the output of decimal.GetBits() for both values: the bits are different, even though the two numbers compare as equal.
The code sample below illustrates the behaviour:
using (var connection = new OracleConnection("my connection string"))
{
    connection.Open();
    var command = connection.CreateCommand();
    command.CommandText = "SELECT VALUE_NUMBER FROM MYTABLE";
    command.CommandType = CommandType.Text;

    using (command)
    {
        var reader = command.ExecuteReader(CommandBehavior.Default);
        reader.Read();

        decimal oracleDecimal = reader.GetDecimal(reader.GetOrdinal("VALUE_NUMBER"));
        Console.WriteLine(oracleDecimal); // output: 1178.20 (NOT expected)

        var bitsFromOracle = decimal.GetBits(oracleDecimal).Select(x => x.ToString());
        var bitsFromOracleString = string.Join(",", bitsFromOracle.ToArray());
        Console.WriteLine(bitsFromOracleString); // 117820,0,0,131072

        // sanity check
        const decimal constantDecimal = 1178.2M;
        Console.WriteLine(constantDecimal); // output: 1178.2 (expected)

        var bitsFromConstant = decimal.GetBits(constantDecimal).Select(x => x.ToString());
        var bitsFromConstantString = string.Join(",", bitsFromConstant.ToArray());
        Console.WriteLine(bitsFromConstantString); // 11782,0,0,65536

        Console.WriteLine(oracleDecimal == constantDecimal); // True
    }
}
How should this be explained?
Here's the table CREATE & INSERT script to make this work:
CREATE TABLE MYTABLE (
    ID NUMBER(10) NOT NULL,
    VALUE_NUMBER NUMBER
);

INSERT INTO MYTABLE (ID, VALUE_NUMBER) VALUES (1, 1178.2);
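As a side note: if the trailing zero only matters when formatting the value for display, it can be stripped before or during formatting. This is just a sketch of two commonly used tricks (neither is specific to ODP.NET), the "G29" format specifier and division by 1 with maximum scale; the hard-coded 1178.20M stands in for the value read from Oracle:

decimal oracleDecimal = 1178.20M; // stands in for the value read from Oracle (scale 2)

// "G29" formats the decimal with up to 29 significant digits and drops trailing zeros.
Console.WriteLine(oracleDecimal.ToString("G29"));                  // 1178.2

// Dividing by 1 with maximum scale forces the runtime to recompute the scale
// of the result, which also removes trailing zeros.
decimal normalized = oracleDecimal / 1.0000000000000000000000000000M;
Console.WriteLine(normalized);                                     // 1178.2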
UPDATE:
The answer by @Vash and his subsequent comment led to the right conclusion: the .NET decimal type actually carries information about its number of decimal digits (its scale), even though that information is irrelevant for equality. 1178.2M and 1178.20M therefore have different bit representations, although the .Equals() method and the == operator consider these numbers equal.
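To see the difference directly, here is a minimal sketch (independent of Oracle) that extracts the scale from the flags element returned by decimal.GetBits(); the scale is stored in bits 16 to 23 of that element, so the 65536 and 131072 values printed above correspond to a scale of 1 and 2 respectively:

// The fourth element of decimal.GetBits() holds the sign (bit 31) and the scale (bits 16-23).
int scaleFromConstant = (decimal.GetBits(1178.2M)[3] >> 16) & 0xFF;   // 1 (65536  = 1 << 16)
int scaleFromOracle   = (decimal.GetBits(1178.20M)[3] >> 16) & 0xFF;  // 2 (131072 = 2 << 16)
Console.WriteLine(scaleFromConstant);      // 1
Console.WriteLine(scaleFromOracle);        // 2
Console.WriteLine(1178.2M == 1178.20M);    // True: equality ignores the scale
Console.WriteLine(1178.20M.ToString());    // 1178.20: ToString() honours the scale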