There are certainly legacy character sets that use exactly two bytes for every character, but these generally do not encode ASCII characters at all; they were intended to supplement a single-byte character set rather than replace it. All of those I am aware of exist to support Chinese, Japanese, and/or Korean text.
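For example, Shift_JIS pairs a single-byte range (which covers ASCII, apart from a couple of substituted characters) with the two-byte JIS X 0208 set. A quick Python sketch, using the standard library's built-in codecs and purely illustrative sample strings, shows the resulting mix of widths:

```python
ascii_text = "ABC"
kanji_text = "日本語"   # three Japanese characters

print(len(ascii_text.encode("shift_jis")))  # 3 -> one byte per ASCII character
print(len(kanji_text.encode("shift_jis")))  # 6 -> two bytes per ideograph
print(len(kanji_text.encode("utf-8")))      # 9 -> three bytes per ideograph in UTF-8
```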
There are plenty of legacy documents around that use such encodings, and I would not be surprised to find that in some places they are still used in new documents.
If you are trying to determine whether your software can ignore the existence of multi-byte character encodings other than the UTFs, then I'm afraid you won't come away with an easy answer. Of course your software can do so, in the same sense that it can ignore single-byte encodings other than ISO-8859-15, but only you can determine whether your program will adequately serve its purpose if it does so.
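To make the trade-off concrete, here is a rough sketch of what *not* ignoring them tends to look like in practice: try UTF-8 first, then fall back to whatever legacy encodings your data might plausibly contain. The `decode_text` helper and its candidate list are hypothetical, not a recommendation of any particular set of encodings.

```python
def decode_text(raw: bytes) -> str:
    """Decode bytes, preferring UTF-8 but falling back to a few legacy
    multi-byte codecs. The candidate list is purely illustrative; which
    encodings (if any) you must support depends on your data sources."""
    for codec in ("utf-8", "shift_jis", "euc_kr", "gb18030"):
        try:
            return raw.decode(codec)
        except UnicodeDecodeError:
            continue
    raise ValueError("text is not in any of the expected encodings")

# Ignoring legacy encodings amounts to plain raw.decode("utf-8"), which
# raises (or, with errors="replace", silently mangles) Shift_JIS, EUC-KR,
# or GB-encoded input.
print(decode_text("日本語".encode("shift_jis")))   # -> 日本語
```

Note that this kind of guessing is inherently unreliable, since many byte sequences are valid in more than one legacy encoding, which is part of why there is no easy answer.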