The ANSI X3.159-1989 "Programming Language C" standard states in section 5.2.1.2, "Multibyte characters", that:
For both [source and execution] character sets the following shall hold:
- A byte with all bits zero shall be interpreted as a null character independent of shift state.
- A byte with all bits zero shall not occur in the second or subsequent bytes of a multibyte character.
Does this mean that the following statements are true for the translation and execution environments?
- Both the source and execution character sets might have a multibyte value representing the null character for each different shift state. [Thought: if the translation or execution environment can switch between shift states (which can change the number of bytes used to represent a character), then it would somehow have to detect the null character not only as the single-byte null character from the basic character set, but also as, for example, a two-byte "null character" in a particular shift state.] P.S. This might be a misconception of how character values in a string literal and the like are interpreted by the translation and execution environments.
- Such characters could be represented only as values whose first byte is zero [i.e. a byte with all bits zero], so there would be a wide range of possible representations: "0000 FFFF", "0000 ABCD", etc.
- The "null character" is defined only in the basic execution character set, while both rules in the quote above apply to the extended character sets of both the translation and execution environments. Therefore, a multibyte representation of the "null character" could exist in both environments, and it would be possible to use a multibyte "null character" in source code without escape sequences, by writing that character directly in some kind of literal.
Or can the "null character" only be represented as a single-byte value, being the one and only such character, defined by the basic execution character set?