The author has probably confused characters and bytes. You should also understand the related concept of character encoding.
A byte is eight bits. A byte has traditionally been used to store a single character, even though only 7 bits are needed for a basic character repertoire. The ASCII standard, which encodes characters in 7 bits, was ratified in 1963, though at the time there were also competing character encodings (of which EBCDIC still survives to this day).
When you use only 7 of the available 8 bits, you might have ideas for what to do with the spare bit. A common approach was to use it to encode additional, non-standard characters that ASCII does not provide. A large number of legacy 8-bit encodings have been defined this way, some of which have been published as standards as well. Some are still in popular use; examples include ISO-8859-1 (aka Latin-1) and the DOS and Windows code pages (437, 850, and 1252 are still in common use in Western countries, despite their many drawbacks). Many of them are "extended ASCII" encodings, compatible with ASCII in the first 128 characters, though the term "extended ASCII" has no precise technical definition.
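To make this concrete, here is a minimal sketch (the function name is my own, not from the question) that converts ISO-8859-1 input to UTF-8. It works without a lookup table only because Latin-1 bytes happen to map one-to-one onto Unicode code points U+0000..U+00FF; other 8-bit encodings such as CP437 or CP1252 would need a table for the upper half:

```c
#include <stdio.h>

/* Minimal sketch, assuming a NUL-terminated Latin-1 input buffer.
 * Latin-1 bytes map directly to code points U+0000..U+00FF, so each
 * input byte becomes at most two UTF-8 bytes. */
void latin1_to_utf8(const unsigned char *in, FILE *out)
{
    for (; *in != '\0'; in++) {
        if (*in < 0x80) {
            fputc(*in, out);                 /* ASCII passes through unchanged */
        } else {
            fputc(0xC0 | (*in >> 6), out);   /* lead byte: 0xC2 or 0xC3 */
            fputc(0x80 | (*in & 0x3F), out); /* continuation byte */
        }
    }
}
```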
If you are processing a sequence of bytes, you do want to be able to cope with byte values in the whole range 0-255, not just the ones defined in ASCII. On the other hand, if you have a guarantee that none of the bytes you are going to process will have values above 127 (for example, because your input comes from a source which is incapable of producing anything else), it is wasteful to reserve room for values you know you are not going to need.
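As a minimal sketch of validating that guarantee (the function name is my own), note the unsigned char type: with plain char, which may be signed, bytes above 127 would show up as negative values and the comparison would be wrong:

```c
#include <stdbool.h>
#include <stddef.h>

/* Returns true if every byte in the buffer fits in the ASCII range 0-127. */
bool is_pure_ascii(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] > 127)
            return false;
    return true;
}
```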
Going forward, most modern systems use Unicode in one form or another. On Windows, and apparently still in Java, you should expect UTF-16; elsewhere, UTF-8 is rapidly becoming the de facto standard. Both require your code to handle 8-bit bytes cleanly, and a single code point may occupy more than one byte: one to four bytes in UTF-8, and always two or four in UTF-16.
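For example, in UTF-8 the lead byte alone tells you how long a sequence is. A minimal sketch (the function name is my own):

```c
/* Length in bytes of a UTF-8 sequence, read off its lead byte.
 * Returns 0 for a continuation byte or an invalid lead byte,
 * which the caller must treat as an error. */
int utf8_seq_len(unsigned char lead)
{
    if (lead < 0x80)           return 1; /* 0xxxxxxx: plain ASCII */
    if ((lead & 0xE0) == 0xC0) return 2; /* 110xxxxx */
    if ((lead & 0xF0) == 0xE0) return 3; /* 1110xxxx */
    if ((lead & 0xF8) == 0xF0) return 4; /* 11110xxx */
    return 0;                            /* 10xxxxxx or invalid */
}
```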
As for the code you posted, you are correct that 128 array positions are enough if you discard every byte whose value is larger than 127. On the other hand, depending on what data you expect to process, discarding non-ASCII characters may not be the right thing to do at all; and if you don't discard anything, you do need to handle all 256 possible byte values.
Either way, if you discard only the values larger than 128, you need 129 positions in the array (there are 129 integers in the range 0 through 128 inclusive). This is probably just a silly off-by-one bug.
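Since the original code isn't reproduced here, the following is an assumed reconstruction of a byte-frequency count. Sizing the array at 256 and indexing with the byte value itself sidesteps both the discarding question and the off-by-one entirely:

```c
#include <stdio.h>

/* Assumed reconstruction: count how often each byte value occurs on
 * standard input. With 256 slots, every value getchar can return
 * (other than EOF) is a valid index, so nothing needs to be
 * discarded or range-checked. */
int main(void)
{
    unsigned long count[256] = {0};
    int c;

    while ((c = getchar()) != EOF)
        count[(unsigned char)c]++; /* getchar yields the byte as 0-255 */

    for (int i = 0; i < 256; i++)
        if (count[i] != 0)
            printf("0x%02X: %lu\n", i, count[i]);
    return 0;
}
```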