I've seen it many times, e.g. on UNIX, in MySQL's TIMESTAMP type, etc.: the epoch starts at 1970-01-01, but the maximum representable year is 2038. Now let me count:
2^32 / 60 / 60 / 24 / 365 + 1970 ≈ 2106
So if we used the full 32 bits, we would naturally get to the year 2106 without any problems. But apparently the year 2038 corresponds to only 31 bits (2^31 / 60 / 60 / 24 / 365 + 1970 ≈ 2038). So why do we throw that one bit away? By using all 32 bits we could hope never to have to solve the problem at all, since we'll probably have destroyed the Earth first...
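To make the two limits concrete, here is a small C sketch (assuming a platform with a 64-bit time_t, so that the unsigned 32-bit value still fits) that prints the last moment representable by a signed 32-bit seconds counter versus a full unsigned one:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* Last second representable with a signed 32-bit counter
       of seconds since 1970-01-01 00:00:00 UTC. */
    time_t signed_max   = (time_t)INT32_MAX;   /* 2^31 - 1 */

    /* Last second if all 32 bits were used unsigned instead. */
    time_t unsigned_max = (time_t)UINT32_MAX;  /* 2^32 - 1 */

    char buf[64];

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&signed_max));
    printf("signed 32-bit limit:   %s\n", buf);  /* 2038-01-19 03:14:07 UTC */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&unsigned_max));
    printf("unsigned 32-bit limit: %s\n", buf);  /* 2106-02-07 06:28:15 UTC */

    return 0;
}
```

On a system where time_t itself is a signed 32-bit type, the second value would typically wrap to -1 instead of reaching 2106, which is exactly the 2038 problem.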
Reaction to comments: of course it's because it's signed, but why would a timestamp ever have to be signed? That's the point of this question.