
I was working on a bit of SQL for a project, and I noticed some seemingly strange behavior in SQL Server with regard to what the result looks like when dividing with decimals.

Here are some examples which illustrate the behavior I'm seeing:

-- Example 1 (run as its own batch):
DECLARE @Ratio Decimal(38,16)
SET @Ratio = CAST(210 as Decimal(38,16))/CAST(222 as Decimal(38,16));

select @Ratio -- Results in 0.9459450000000000

-- Example 2 (run as its own batch, since @Ratio is declared again):
DECLARE @Ratio Decimal(38,16)
SET @Ratio = CAST(210 as Decimal)/CAST(222 as Decimal);

select @Ratio -- Results in 0.9459459459459459

For the code above, the query that is (seemingly) less precise gives a more precise answer. When I cast both the dividend and the divisor as Decimal(38,16), I get a number with a scale of 6 (casting that result to Decimal(38,16) again is what pads the scale with 0's).

When I cast the dividend and divisor to just a default Decimal, with no precision or scale set manually, I get the full 16 digits in the scale of my result.

Out of curiosity, I began experimenting more with it, using these queries:

select CAST(210 as Decimal(38,16))/CAST(222 as Decimal(38,16)) --0.945945
select CAST(210 as Decimal(28,16))/CAST(222 as Decimal(28,16)) --0.9459459459
select CAST(210 as Decimal(29,16))/CAST(222 as Decimal(29,16)) --0.945945945

As you can see, as I increase the precision, the scale of the answer appears to decrease. I can't see any obvious relationship between the scale of the result and the scale or precision of the dividend and divisor.
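
If it helps, counting the digits above suggests the result scales are 6, 10 and 9 respectively. I believe SQL_VARIANT_PROPERTY reports the type SQL Server assigns to an expression, so a check along these lines (the column aliases are just mine) should confirm the precision and scale directly:

-- Checking the precision/scale SQL Server assigns to each division
select SQL_VARIANT_PROPERTY(CAST(210 as Decimal(38,16))/CAST(222 as Decimal(38,16)), 'Precision') as Prec3816,
       SQL_VARIANT_PROPERTY(CAST(210 as Decimal(38,16))/CAST(222 as Decimal(38,16)), 'Scale')     as Scale3816,
       SQL_VARIANT_PROPERTY(CAST(210 as Decimal(28,16))/CAST(222 as Decimal(28,16)), 'Scale')     as Scale2816,
       SQL_VARIANT_PROPERTY(CAST(210 as Decimal(29,16))/CAST(222 as Decimal(29,16)), 'Scale')     as Scale2916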

I found some other SO questions pointing to a place in the MSDN documentation which states that the precision and scale of the result of an operation on decimals are determined by a set of calculations on the precision and scale of the divisor and dividend, and that:

The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.

So I tried running through those equations myself to determine what the output of dividing one Decimal(38,16) by another Decimal(38,16) should look like, and according to what I found, I should still have gotten back a more precise number than I did.

So I'm either doing the math wrong, or there's something else going on here that I'm missing. I'd greatly appreciate any insight that any of you has to offer.

Thanks in advance...

  • Have you seen [this link](http://stackoverflow.com/questions/14313598/why-precision-is-decreasing-when-multiply-sum-to-other-number)? – Mike Aug 13 '14 at 23:52
  • "So I'm either doing the math wrong ..." - maybe, but since you've not shown either your working nor even what results you *did* obtain, it's difficult to tell. – Damien_The_Unbeliever Aug 14 '14 at 07:12

1 Answer


The documentation is a little incomplete as to the magic of the value 6 and when to apply the max function, but here's a table of my findings, based on that documentation.

As it says, the formulas for division are:

Result precision = p1 - s1 + s2 + max(6, s1 + p2 + 1), Result scale = max(6, s1 + p2 + 1)

And, as you yourself highlight, we then have the footnote:

The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.

So, here's what I produced in my spreadsheet:

p1 s1 p2 s2 prInit srInit prOver prAdjusted srAdjusted
38 16 38 16 93     55     55     38         6
28 16 28 16 73     45     35     38         10
29 16 29 16 75     46     37     38         9

So, I'm using pr and sr to indicate the precision and scale of the result. The prInit and srInit columns are exactly the formulas from the documentation. As we can see, in all 3 cases the initial precision of the result is vastly larger than 38, and so the footnote applies. prOver is just max(0, prInit - 38): how much we have to reduce the precision by if the footnote applies. prAdjusted is just prInit - prOver. We can see that in all three cases the final precision of the result is 38.

If I applied the same adjustment factor to the scales, I would obtain results of 0, 10 and 9. But we can see that your result for the (38,16) case has a scale of 6, so I believe this is where the max(6, ... part of the documentation actually applies. My final formula for srAdjusted is therefore max(6, srInit - prOver), and now my final adjusted values appear to match your results.
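
If you want to run the same calculation yourself, here's a rough T-SQL sketch of the rules as I understand them (the variable names are just mine, mirroring the columns above, and this is my reading of the documentation rather than the engine's literal internals):

-- Sketch of the division type rules as I understand them
DECLARE @p1 int = 38, @s1 int = 16, @p2 int = 38, @s2 int = 16;

DECLARE @srInit int = CASE WHEN @s1 + @p2 + 1 > 6 THEN @s1 + @p2 + 1 ELSE 6 END;            -- max(6, s1 + p2 + 1)
DECLARE @prInit int = @p1 - @s1 + @s2 + @srInit;

DECLARE @prOver int = CASE WHEN @prInit > 38 THEN @prInit - 38 ELSE 0 END;                  -- max(0, prInit - 38)
DECLARE @srAdjusted int = CASE WHEN @srInit - @prOver > 6 THEN @srInit - @prOver ELSE 6 END; -- max(6, srInit - prOver)

SELECT @prInit - @prOver AS prAdjusted, @srAdjusted AS srAdjusted; -- 38, 6 for these inputs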


And, of course, if we consult the documentation for decimal, we can see that the default precision and scale, if you do not specify them, are (18,0), so here's the row for when you didn't specify precision and scale:

p1 s1 p2 s2 prInit srInit prOver prAdjusted srAdjusted
18 0  18 0  37     19     0      37         19
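
So the division itself should come out as a decimal(37,19); the 16 digits you saw are then just that value being rounded again when it is assigned to your Decimal(38,16) variable. A quick way to see both steps side by side (a sketch, with column aliases of my own):

-- The raw division result vs. what ends up in a Decimal(38,16) variable
SELECT CAST(210 as Decimal) / CAST(222 as Decimal)                         AS RawResult,   -- decimal(37,19)
       CAST(CAST(210 as Decimal) / CAST(222 as Decimal) AS Decimal(38,16)) AS RoundedTo16  -- what your @Ratio holds
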
  • Ahh... your example clears this up for me. I wasn't doing the math wrong exactly, but I was misunderstanding when to do the adjustment to the scale... (I came out with something like 38,13 and some change.) But I was applying the adjustment to the scale at the end instead of inside of max(6, s1+p2+1). – Redit0 Aug 14 '14 at 15:57
  • Thank you! I also found the following article useful: https://www.codeproject.com/Articles/769823/Losing-Precision-After-Multiplication-in-SQL-Serve – David Jan 11 '19 at 23:29