I am currently redesigning a legacy database to run on SQL Server 2005, and I want to replace most of the old float columns with decimals.
Decimal(15,4) would be sufficient for my needs, but the SQL Server documentation states that it uses the same storage space (9 bytes) as Decimal(19,4). Getting a larger range for the same storage seems like a good idea. So: is there any reason not to use the maximum precision that fits in those 9 bytes, i.e. (19,4)? Are there performance drawbacks, perhaps? Note that I won't be doing extensive calculations in the database, only the occasional SUM or multiplication in queries.
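
To make the scenario concrete, here is a minimal sketch of the kind of schema and query I have in mind (the table and column names are made up for illustration):

    -- Hypothetical example; names are invented, not from the real schema.
    CREATE TABLE dbo.InvoiceLine
    (
        InvoiceLineID int IDENTITY(1,1) PRIMARY KEY,
        Quantity      decimal(19,4) NOT NULL,  -- was float in the legacy schema
        UnitPrice     decimal(19,4) NOT NULL   -- decimal(15,4) would also cover my data
    );

    -- The only "calculations" I do are along these lines:
    SELECT SUM(Quantity * UnitPrice) AS LineTotal
    FROM dbo.InvoiceLine;

The question is whether declaring these columns as decimal(19,4) instead of decimal(15,4) costs me anything in queries like the one above.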