
I'm getting some unexpected results when working with SqlDecimal. In the end it seems to boil down to scale issues when dividing two SqlDecimals.

The C# example code looks like this:

[Microsoft.SqlServer.Server.SqlFunction]
[return: SqlFacet(Precision = 38, Scale = 8)]
public static SqlDecimal fn_divide([SqlFacet(Precision = 38, Scale = 8)] SqlDecimal x, [SqlFacet(Precision = 38, Scale = 8)] SqlDecimal y)
{
    // Note: the parameters are plain C# identifiers here, not T-SQL @-variables
    var r = SqlDecimal.Divide(x, y);

    return r;
}

The SQL test code looks like this:

DECLARE @x numeric(38, 8),
        @y numeric(38, 8)

SELECT @x = Replicate('1', 28) + '.12345678',
       @y = '0.25896314'

SELECT via = 'TSQL', x = @x, y = @y, r = Convert(numeric(38, 8),  @x / @y)
SELECT via = 'SQLCLR', x = @x, y = @y, r = dbo.fn_divide(@x, @y)

The second SELECT returns the following error:

A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_divide":
System.OverflowException: Arithmetic Overflow.
System.OverflowException:
   at System.Data.SqlTypes.SqlDecimal.MultByULong(UInt32 uiMultiplier)
   at System.Data.SqlTypes.SqlDecimal.AdjustScale(Int32 digits, Boolean fRound)
   at System.Data.SqlTypes.SqlDecimal.op_Division(SqlDecimal x, SqlDecimal y)
   at System.Data.SqlTypes.SqlDecimal.Divide(SqlDecimal x, SqlDecimal y)
   at UserDefinedFunctions.fn_divide(SqlDecimal x, SqlDecimal y)

Searching the web I've found plenty of issues related to rounding errors when converting SqlDecimal to Decimal, but in my case everything is SqlDecimal from beginning to end, precisely to avoid that. Likewise I had 'hoped' the result would be identical to the native SQL implementation... but alas. Even when it doesn't overflow, it gives me different results in border cases.
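For what it's worth, the native behaviour seems to match SQL Server's documented result-type rules for decimal division (precision = p1 - s1 + s2 + max(6, s1 + p2 + 1), scale = max(6, s1 + p2 + 1), with scale shed when precision exceeds 38). Here is my reading of those rules as a rough Python sketch, just to show where the two inputs end up (this is my interpretation of the docs, not the engine's actual code):

```python
# Sketch of SQL Server's documented type rules for decimal division
# ("Precision, scale, and length" in the T-SQL docs); my reading, not engine code.
def tsql_division_type(p1, s1, p2, s2):
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    if precision > 38:
        integral = precision - scale  # digits needed left of the decimal point
        if integral <= 32:
            scale = min(scale, 38 - integral)
        elif scale > 6:
            scale = 6  # scale is shed, but not below 6, when the integral part is large
        precision = 38
    return precision, scale

# numeric(38,8) / numeric(38,8): the 'ideal' result type would be numeric(85,47),
# so native T-SQL silently truncates to numeric(38,6) instead of overflowing.
print(tsql_division_type(38, 8, 38, 8))  # -> (38, 6)
```

SqlDecimal.op_Division apparently has no such fallback: judging by the stack trace, it tries to AdjustScale the dividend upward first, the intermediate no longer fits in the structure, and it throws instead of shedding scale.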

Does anyone have any tips on how to fix this, or at least work around it? Does SqlDecimal internally use (.NET) Decimals and hence can't cram the information into its 4 bytes?!?!?

PS: I can't predict which combinations of numeric(p,s) will be passed to the function, but 38,8 is 'acceptable'. I was hoping to make a faster SQLCLR version of my original UDF (which does a lot more than just divide, btw =). Admittedly it is faster all right, but it needs to work for the same range of numbers.
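To be clear about what the function should produce: the arithmetic I'm after is "divide under a 38-digit cap, then round back to scale 8", which the Convert(numeric(38,8), ...) in the T-SQL test does implicitly. Illustrated with Python's decimal module (illustration only, not SqlDecimal; in SQLCLR the equivalent would presumably mean pre-shrinking the operands, e.g. via SqlDecimal.ConvertToPrecScale, before dividing):

```python
# Reproduce the target arithmetic with Python's decimal module (not SqlDecimal):
# divide under a 38-significant-digit working precision, then round the quotient
# back to 8 decimal places, as Convert(numeric(38,8), @x / @y) does in T-SQL.
from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().prec = 38  # cap significant digits, mirroring numeric's 38-digit limit

x = Decimal("1111111111111111111111111111.12345678")  # Replicate('1', 28) + '.12345678'
y = Decimal("0.25896314")

q = (x / y).quantize(Decimal("1E-8"), rounding=ROUND_HALF_UP)
print(q)  # a 28-integral-digit quotient with 8 decimal places; no overflow
```

Under the 38-digit cap the quotient (28 integral digits + 8 decimals = 36 digits) fits comfortably, which is exactly the head-room SqlDecimal.Divide throws away by up-scaling the dividend first.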

Filip De Vos
deroby
  • SqlDecimal uses four unsigned four-byte integers internally, giving a maximum precision/scale of 38. See http://msdn.microsoft.com/en-us/library/dbew96f6(v=vs.85). – David W Jul 17 '12 at 22:57
  • Link doesn't work I'm afraid, but anyway : yes, I know it uses 4 bytes, as does a Numeric(38,8) use 4 bytes in SQL. The part I'm having trouble with is that -apparently- it treats these 4 bytes differently in SQL (native) vs in .Net (SqlDecimal class), which from my point of view is well... flabbergasting! – deroby Jul 19 '12 at 19:14
  • deroby - check that - a SqlDecimal uses *FOUR* four-byte ints, not just four bytes. Yes, the native implementations of the math beneath the hood between SQL .NET types and SQL Server itself differ... – David W Jul 19 '12 at 19:42
  • You're right! Hmm, not sure where I got the 4 bytes from, probably I somehow mistranslated it somewhere in my head as the documentation clearly states it actually takes 17 bytes to store a numeric with precision 29-38. SqlDecimal structure uses 4 unsigned ints which comes down to 16 bytes. Not sure why sql needs the extra byte but I'm guessing it has to do with NULL-ability and sign etc which is probably part of the SqlDecimal structure too but not mentioned... Anyway, regardless on how things are stored internally I'm quite disappointed they didn't use what's under the hood in SQL natively =( – deroby Jul 21 '12 at 12:09
  • @deroby Regarding both this question and the new one that you linked to this one: can you please post some sample values that you are actually using, not just test values that show the problem? As I mentioned in your related, multiplication question, I am wondering why you are passing in `DECIMAL(38, 8)` in the first place. It makes perfect sense to _return_ that type, or possibly `DECIMAL(38, 18)`, but the definition of the input params has a much greater affect on the result than the return type. – Solomon Rutzky Jan 30 '17 at 20:11
  • As mentioned in the other question, the problem is mostly that the function should be generic enough to handle 'any' input. In the past we had a pure-SQL function that worked fine. For performance reasons we moved to a SQLCLR solution and noticed that this part ran into issues thus proving that SqlDecimal wasn't quite the same as err...a SQL Decimal. The examples given here clearly are 'crafted', but we ran into the issue with (more) realistic numbers first. – deroby Jan 31 '17 at 20:31

0 Answers