I’m working on a problem involving many SQL decimal multiplications at fairly high precision.
A recent bug prompted me to do some research, and I've come across a humbling article (http://msdn.microsoft.com/en-us/library/ms190476.aspx) on how the precision and scale of the decimal result are calculated.
It’s my understanding that, to avoid truncation or unexpected rounding, it’s sometimes necessary to cast the operands (and possibly the intermediate results) to a precision and scale that leave the result type room for the digits I actually need. Each operand is given to me as a decimal(19,5), and there’s nothing I can do about that since we support such a wide range of values.
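To make the problem concrete: if I’m applying the article’s formulas correctly, multiplying two decimal(19,5) values calls for precision 19 + 19 + 1 = 39 and scale 5 + 5 = 10. Since 39 exceeds the maximum of 38, I believe SQL Server silently shrinks the result type to something like decimal(38,9), dropping a digit of scale:

DECLARE @a decimal(19,5), @b decimal(19,5)
SET @a = 113.5
SET @b = 113.5

-- I expect 38 and 9 here, not the "natural" 39 and 10
SELECT SQL_VARIANT_PROPERTY(@a * @b, 'Precision')
SELECT SQL_VARIANT_PROPERTY(@a * @b, 'Scale')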
My question: Can I reduce/cast the type of these decimal(19,5) values 'on-the-fly' to the smallest decimal type (precision and scale) that will hold the value?
For example, can I cast this variable (@Cw) to a decimal(4,1) 'on-the-fly' before it gets used in my multiplication?
DECLARE @Cw decimal(19,5)
SET @Cw = 113.5

-- Some kind of CAST, e.g. (GetActualPrecision and GetActualScale are made up):
-- SET @NewCw = CAST(@Cw AS DECIMAL(GetActualPrecision(@Cw), GetActualScale(@Cw)))

-- Should show 4 instead of 19
SELECT SQL_VARIANT_PROPERTY(@NewCw, 'Precision')

-- Should show 1 instead of 5
SELECT SQL_VARIANT_PROPERTY(@NewCw, 'Scale')
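For what it’s worth, a hard-coded cast does exactly what I want (as far as I can tell, SQL_VARIANT_PROPERTY reports the declared type of the cast expression); the piece I’m missing is deriving the (4,1) at runtime instead of typing it by hand:

-- Continuing the batch above: a hard-coded target type behaves as desired
SELECT SQL_VARIANT_PROPERTY(CAST(@Cw AS decimal(4,1)), 'Precision')  -- 4
SELECT SQL_VARIANT_PROPERTY(CAST(@Cw AS decimal(4,1)), 'Scale')      -- 1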
Thanks!