Array type encoding from the API documentation:
boolean Z
byte B
char C
class or interface Lclassname;
double D
float F
int I
long J
short S
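This encoding is observable from plain Java, since array class names use the JVM's descriptor syntax directly. A minimal sketch (class and method names are my own, chosen for illustration):

```java
public class DescriptorDemo {
    public static void main(String[] args) {
        // Array class names expose the JVM's type descriptors verbatim
        System.out.println(long[].class.getName());    // prints "[J"
        System.out.println(boolean[].class.getName()); // prints "[Z"
        System.out.println(String[].class.getName());  // prints "[Ljava.lang.String;"
    }
}
```

Running this confirms that `J` stands for `long`, `Z` for `boolean`, and `L...;` wraps reference type names.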
It seems counterintuitive to introduce J for long, instead of keeping L for long and using T for reference types. I was unable to find a solid explanation of this encoding decision.
To clarify: the Java Language Specification sometimes comments on syntax features influenced by other languages and similar "external" reasons, and I'm looking for that kind of explanation. For instance, my guess for Z is that it is a reference to the 0/!0 truth check in C.
This answer points to the JVM specification, but an explanation for the choice of J, L and Z is still missing.