I was referencing the XSD spec today, looking up the definition of the decimal type. In it, I found two seemingly conflicting definitions of the datatype's format, a lexical representation and a canonical representation, quoted below:
3.2.3.1 Lexical representation
decimal has a lexical representation consisting of a finite-length sequence of decimal digits (#x30-#x39) separated by a period as a decimal indicator. An optional leading sign is allowed. If the sign is omitted, "+" is assumed. Leading and trailing zeroes are optional. If the fractional part is zero, the period and following zero(es) can be omitted. For example: -1.23, 12678967.543233, +100000.00, 210.
3.2.3.2 Canonical representation
The canonical representation for decimal is defined by prohibiting certain options from the Lexical representation (§3.2.3.1). Specifically, the preceding optional "+" sign is prohibited. The decimal point is required. Leading and trailing zeroes are prohibited subject to the following: there must be at least one digit to the right and to the left of the decimal point which may be a zero.
In summary, the lexical representation permits omitting the decimal point and trailing zeros when the fractional part is zero, whereas the canonical representation explicitly states that the decimal point is required (and also prohibits the leading "+" and superfluous leading/trailing zeros).
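To make the difference concrete, here is a minimal sketch of how a lexical value could be normalized into the canonical form before handing it to the consumer. The helper name `canonicalize_decimal` and the regex are my own illustration (the regex is loosely based on the description in 3.2.3.1, not the normative grammar), so treat this as an assumption-laden example rather than a reference implementation:

```python
import re

# Permissive pattern loosely based on 3.2.3.1: optional sign, digits,
# optional fractional part. Illustrative only, not the normative grammar.
_LEXICAL = re.compile(r'^[+-]?(\d+(\.\d*)?|\.\d+)$')


def canonicalize_decimal(lexical: str) -> str:
    """Rewrite a lexical xs:decimal string into canonical form:
    no leading '+', a mandatory decimal point, and no superfluous
    zeros (at least one digit on each side of the point)."""
    if not _LEXICAL.match(lexical):
        raise ValueError(f"not a valid xs:decimal lexical form: {lexical!r}")

    sign = ""
    if lexical[0] in "+-":
        if lexical[0] == "-":
            sign = "-"
        lexical = lexical[1:]

    integer, _, fraction = lexical.partition(".")

    # Drop superfluous zeros, but keep at least one digit on each side.
    integer = integer.lstrip("0") or "0"
    fraction = fraction.rstrip("0") or "0"

    # Zero carries no sign in the canonical form.
    if integer == "0" and fraction == "0":
        sign = ""

    return f"{sign}{integer}.{fraction}"
```

Running it on the spec's own examples gives `-1.23`, `12678967.543233`, `100000.0`, and `210.0`, i.e. the "+" disappears and the decimal point is always present.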
Which of these definitions is "correct"? My application is sending the lexical representation, while a consuming application expects the canonical representation.