I was pointed to the reference implementation (hat tip to the Anonymous Benefactor) and found that the behavior is fairly straightforward to understand from there. To be clear: IMHO this behavior is unintuitive, but it is nevertheless well defined and matches the reference implementation.
Two CPython source files are relevant, namely the ones defining list_subscript and PySlice_AdjustIndices. When retrieving a slice from a list, as in this case, list_subscript is called. It calls PySlice_GetIndicesEx, which in turn calls PySlice_AdjustIndices.
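As an aside, the effect of this chain can be observed from pure Python via slice.indices(), which applies essentially the same clipping logic. A small illustrative sketch (the list x here is just an arbitrary example of mine):

# Illustrative sketch: x is an arbitrary example list.
x = [1, 2, 3, 4, 5]
s = slice(0, -len(x) - 1, -1)   # the slice in question, x[0:-len(x)-1:-1]
print(x[s])                     # [1]
print(s.indices(len(x)))        # (0, -1, -1) -- the stop has been clipped to -1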
Now PySlice_AdjustIndices consists of simple if/then statements, which adjust the indices; in the end it returns the length of the slice. For our case, the lines
if (*stop < 0) {
    *stop += length;
    if (*stop < 0) {
        *stop = (step < 0) ? -1 : 0;
    }
}
are of particular relevance. After the adjustment, x[0:-len(x)-1:-1] becomes x[0:-1:-1], where the adjusted stop of -1 is meant at the C level as "one before the start", so a length of 1 is returned. However, when x[0:-1:-1] is itself passed through the adjustment, it becomes x[0:len(x)-1:-1] of length 0. In other words, f(x) != f(f(x)) in this case: the adjustment is not idempotent when its output is read back as Python slice notation.
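To make the non-idempotence concrete, here is a minimal sketch that mirrors just the quoted branch (adjust_stop is a hypothetical helper of mine, not a CPython API):

def adjust_stop(stop, length, step=-1):
    # Mirrors only the quoted branch for a negative stop.
    if stop < 0:
        stop += length
        if stop < 0:
            stop = -1 if step < 0 else 0
    return stop

length = 5                                # len(x) for an example 5-element list
once = adjust_stop(-length - 1, length)   # -6 -> -1 ("one before the start" internally)
twice = adjust_stop(once, length)         # -1 -> 4  (re-read as a Python-style index)
print(once, twice)                        # -1 4

# The observable consequence at the Python level:
x = list(range(5))
print(x[0:-len(x) - 1:-1])                # [0]  (length 1)
print(x[0:-1:-1])                         # []   (length 0)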
It is amusing to note that there is the following comment in PySlice_AdjustIndices:
/* this is harder to get right than you might think */
Finally, note that the handling of the situation in question is not described in the Python docs.