
In our Oracle database, I have a table (say MYTABLE) with a column VALUE_NUMBER of type NUMBER (nullable, no precision specified). This table contains the value 1178.2.

When retrieving data from this column using standard ADO.Net (actually ODP.Net) and then converting the decimal value to a string, I get '1178.20'. By contrast, converting the constant 1178.2M to a string obviously yields '1178.2'.

Digging deeper, I looked at the output of decimal.GetBits(), which turned out to be different for the two values, even though comparing them shows they are considered equal.

The code sample below illustrates the behaviour:

// Requires: using System; using System.Data; using System.Linq;
// plus the ODP.Net provider namespace (e.g. Oracle.DataAccess.Client).
using (var connection = new OracleConnection("my connection string"))
{
    connection.Open();
    var command = connection.CreateCommand();
    command.CommandText = "SELECT VALUE_NUMBER FROM MYTABLE";
    command.CommandType = CommandType.Text;
    using (command)
    {
        var reader = command.ExecuteReader(CommandBehavior.Default);
        reader.Read();
        decimal oracleDecimal = reader.GetDecimal(reader.GetOrdinal("VALUE_NUMBER"));
        Console.WriteLine(oracleDecimal); // output: 1178.20 (NOT expected)
        var bitsFromOracle = decimal.GetBits(oracleDecimal).Select(x => x.ToString());
        var bitsFromOracleString = string.Join(",", bitsFromOracle.ToArray());
        Console.WriteLine(bitsFromOracleString); // 117820,0,0,131072

        // sanity check
        const decimal constantDecimal = 1178.2M;
        Console.WriteLine(constantDecimal); // output: 1178.2 (expected)
        var bitsFromConstant = decimal.GetBits(constantDecimal).Select(x => x.ToString());
        var bitsFromConstantString = string.Join(",", bitsFromConstant.ToArray());
        Console.WriteLine(bitsFromConstantString); // 11782,0,0,65536

        Console.WriteLine(oracleDecimal == constantDecimal); // True 
    }
}

How should this be explained?

Here's the table CREATE & INSERT script to make this work:

CREATE TABLE MYTABLE (
  ID  NUMBER(10)             NOT NULL,
  VALUE_NUMBER           NUMBER
);

INSERT INTO MYTABLE(ID,VALUE_NUMBER) VALUES(1,1178.2);

UPDATE:

The answer by @Vash and his subsequent comment led to the right conclusion: the .Net decimal type actually contains information about its number of digits (its scale), even if they are irrelevant for equality. 1178.2M and 1178.20M have different bit representations because of this, although the .Equals() method and == operator obviously consider these numbers equal.
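
For reference, the difference can be reproduced without a database round trip. Below is a minimal sketch (plain console code, no Oracle involved) showing that the scale is encoded in the fourth element returned by decimal.GetBits, while equality ignores it:

using System;

class DecimalScaleDemo
{
    static void Main()
    {
        // Two literals that are numerically equal but carry a different scale.
        decimal oneDigit = 1178.2M;   // scale 1
        decimal twoDigits = 1178.20M; // scale 2

        // The fourth element encodes the scale in bits 16-23 of the flags word.
        Console.WriteLine(string.Join(",", decimal.GetBits(oneDigit)));  // 11782,0,0,65536   (1 << 16)
        Console.WriteLine(string.Join(",", decimal.GetBits(twoDigits))); // 117820,0,0,131072 (2 << 16)

        // Equality ignores the scale, but ToString() does not.
        Console.WriteLine(oneDigit == twoDigits); // True
        Console.WriteLine(oneDigit);              // 1178.2
        Console.WriteLine(twoDigits);             // 1178.20
    }
}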

jeroenh

3 Answers

1

The main problem is that you do not apply any format to the row data that you receive from the database.

If you want a specific format, you should specify it. In the documentation you will find Formatting Types, which covers all the information on how to deal with formatting.

If you just want the desired result:

Console.WriteLine(valueNumber.ToString("N1", CultureInfo.InvariantCulture));
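
As a rough end-to-end sketch (the variable names are illustrative stand-ins; the values come from the question), both the database value and the literal render identically once an explicit format is applied:

using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        // Stand-ins for the value read from Oracle (scale 2) and the constant (scale 1).
        decimal valueFromDb = 1178.20M;
        decimal constantValue = 1178.2M;

        // Without a format the scale leaks into the output.
        Console.WriteLine(valueFromDb);   // 1178.20
        Console.WriteLine(constantValue); // 1178.2

        // With an explicit format both render the same.
        // Note that "N1" adds a group separator in the invariant culture.
        Console.WriteLine(valueFromDb.ToString("N1", CultureInfo.InvariantCulture));   // 1,178.2
        Console.WriteLine(constantValue.ToString("N1", CultureInfo.InvariantCulture)); // 1,178.2
    }
}
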
  • I'll try that tomorrow, but I'm wondering more why I get different formatting on what seemingly is the same input. – jeroenh Aug 09 '12 at 22:39
  • This indeed normalizes the formatting. Still not ideal since you need to set a fixed precision beforehand. Still wondering where the difference comes from in the first place... – jeroenh Aug 10 '12 at 06:34
  • You could check what happens if you use 1178.20M in your sanity check – Damian Leszczyński - Vash Aug 10 '12 at 11:25
  • I didn't expect this but then I indeed get the same behaviour. Apparently the decimal type contains information about the number of digits, even if the trailing digits are '0'. Good catch... – jeroenh Aug 10 '12 at 12:12
  • The bottom line is that you should never rely on the number itself; if you require it to be represented as text, you should always specify the proper format for it. ;) – Damian Leszczyński - Vash Aug 10 '12 at 13:08
  • accepted this answer, but also read my update on the question – jeroenh Nov 21 '12 at 08:34
0

Is this causing display issues or calculation issues?

If you just want the output to display as desired, can you just use valueNumber.ToString("G")?

Rob H
  • We were relying on the string representation of the decimal value to be part of a dictionary key. We can/should probably work around that but I'm still trying to understand what exactly is going on. – jeroenh Aug 09 '12 at 22:38
  • `valueNumber.ToString("G")` doesn't make a difference. The number retrieved from the database still formats differently. – jeroenh Aug 10 '12 at 06:31
-1

You can't. You will have to convert it to a String if you want to do that.

select 1123.80 num from dual;

select to_char(1123.80, '9999.99') from dual;

i.e. the only way is

SQL> INSERT INTO MYTABLE(ID,VALUE_NUMBER) VALUES(1,1178.20);
SQL> SELECT to_char(VALUE_NUMBER, '9999.99') NUM FROM MYTABLE;

NUM
-------------------
1178.20 (string/varchar)

SQL> SELECT VALUE_NUMBER NUM FROM MYTABLE;

NUM
-------------------
1178.2 (number)

This may cause issues when performing number (decimal) operations on the value later, since it becomes a string and would require conversion/casting back to a number, which is not a recommended approach. Even if you declare the precision as NUMBER(10,2) instead of NUMBER(10), the results will be the same.
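
If the string route is taken, a hedged sketch of the client-side conversion back to a number could look like this (the NUM alias matches the queries above; the helper name and the use of the invariant culture are assumptions, and the parse culture must match the session's NLS decimal separator):

using System.Data.Common;
using System.Globalization;

static class NumberColumnReader
{
    // Hypothetical helper: reads the NUM column produced by
    // SELECT to_char(VALUE_NUMBER, '9999.99') NUM FROM MYTABLE
    // and converts it back to a decimal on the client.
    public static decimal ReadNum(DbDataReader reader)
    {
        // TO_CHAR returns text, so an explicit parse is required;
        // this is exactly the extra conversion step the answer warns about.
        string numAsText = reader.GetString(reader.GetOrdinal("NUM")).Trim();
        return decimal.Parse(numAsText, CultureInfo.InvariantCulture);
    }
}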

Anjan Biswas
  • Yes, you can. ;-) When you format data in the database you are restricted to the db format. That may not be appreciated by the various locales in which the application could be used. – Damian Leszczyński - Vash Aug 09 '12 at 16:38
  • In Oracle - YOU CAN'T. Not sure about other databases. But since the question was tagged with `oracle` I posted my answer. – Anjan Biswas Aug 09 '12 at 16:42
  • I think you misread my question. I'm unexpectedly seeing the '1178.20' format when selecting a NUMBER from oracle as a .Net `decimal`, then converting it to string on the .Net side. (see the code sample) – jeroenh Aug 09 '12 at 22:41