Does the `Decimal` type in C# follow the same rules as the classic IEEE-754 double representation (formula, normalized/denormalized values, implied leading 1, exponent bias), except that it uses base 10 instead of base 2?
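By "formula" I mean the usual normalized double formula, value = (-1)^sign × 1.fraction × 2^(exponent − 1023); I'm asking whether `decimal` has a direct analogue of this with 10 as the base.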
What does the use of base 10 instead of base 2 imply for the representation?
And does `decimal` show the same behavior as an IEEE-754 double, namely gaps between adjacent representable values, given its finite precision of 28/29 significant digits?
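To make that last question concrete, here is a small sketch of the kind of gap I mean for `double`, together with the analogous experiment I would run for `decimal` (the literal values are only illustrative, chosen to exhaust each type's precision):

```csharp
using System;

class GapDemo
{
    static void Main()
    {
        // double: above 2^53 the spacing between adjacent doubles exceeds 1,
        // so adding 1.0 falls into the gap and is lost.
        double d = 1e16;                          // 10^16 > 2^53
        Console.WriteLine(d + 1.0 == d);          // True: 1.0 is absorbed

        // decimal: is there an analogous gap once all 28-29 significant
        // digits are in use? 10^28 already has 29 digits, so I expect the
        // extra 0.1 cannot be kept and the comparison prints True as well.
        decimal m = 10000000000000000000000000000m;   // 10^28
        Console.WriteLine(m + 0.1m == m);
    }
}
```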