-2

I've seen it many times, e.g. on UNIX and in the MySQL `TIMESTAMP` type: the Epoch starts at 1970-01-01, but the maximum recordable year is 2038. Now let me count:

2^32 / 60 / 60 / 24 / 365 + 1970 ≈ 2106

So if we used the full 32 bits, we would naturally get to the year 2106 without any problems. But apparently the year 2038 corresponds to only 31 bits. So why do we throw that one bit away? By using the full 32 bits we could hope that we'd never have to solve the problem, since we'll probably destroy the Earth first...
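For concreteness, here is a minimal C sketch of the two limits. It assumes a platform where `time_t` is a signed 64-bit count of seconds since 1970-01-01 UTC, so that the unsigned 32-bit maximum still fits in it:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Largest values representable in signed vs. unsigned 32 bits. */
    time_t signed_max   = (time_t)INT32_MAX;   /* 2^31 - 1 */
    time_t unsigned_max = (time_t)UINT32_MAX;  /* 2^32 - 1 */

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&signed_max));
    printf("signed 32-bit limit:   %s UTC\n", buf);   /* 2038-01-19 03:14:07 */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&unsigned_max));
    printf("unsigned 32-bit limit: %s UTC\n", buf);   /* 2106-02-07 06:28:15 */
    return 0;
}
```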

Reaction to comments: of course it's because it's signed, but why would a timestamp ever have to be signed? That's the point of this question.

BMW
Tomas
  • -1 is used to indicate error, so I'm guessing it must be signed, and to allow times before 1970. – dalle Feb 10 '14 at 19:07
  • The sign bit allows for negative dates, i.e. dates before 1970. – MikeW Feb 10 '14 at 19:07
  • @MikeW not for MySQL, at least. You can't represent dates before 1970 in `timestamp`. – Tomas Feb 10 '14 at 19:12
  • Did you [read through the 2038 problem](http://en.wikipedia.org/wiki/Year_2038_problem) completely? `changing time_t to an unsigned 32-bit integer, which would extend the range to the year 2106, would adversely affect programs that store, retrieve, or manipulate dates prior to 1970, as such dates are represented by negative numbers.` – admdrew Feb 10 '14 at 19:12

2 Answers

6

It might sound crazy, but people might want to represent dates prior to 1970. Switching the interpretation of the classic `time_t` value would cause nothing but trouble.

The 2038 problem can be side-stepped by switching to a 64-bit representation with the same specification. Exactly how this should be done is subject to debate, as being able to represent dates billions of years in the future is of dubious value when that precision could be used to represent sub-second times, but the naive solution works better than nothing.
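As a rough illustration of that range (a plain C sketch, assuming nothing beyond a hosted compiler with 64-bit integer support), this prints the width of `time_t` on the local platform and how far a 64-bit signed second counter reaches:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    /* How many bytes this particular platform uses for time_t. */
    printf("sizeof(time_t) here: %zu bytes\n", sizeof(time_t));

    /* A signed 64-bit count of seconds covers roughly +/- 292 billion years. */
    double years = (double)INT64_MAX / (60.0 * 60.0 * 24.0 * 365.25);
    printf("64-bit signed seconds reach about %.0f years either side of 1970\n", years);
    return 0;
}
```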

The short answer is: We use a signed value because that's what the standard is.

tadman
  • ...and with specific regard to MySQL, `DATETIME` is an alternative to `TIMESTAMP` that handles a much longer timeframe. – admdrew Feb 10 '14 at 19:22
  • When representing dates with times, `DATETIME` is the only recommended approach. `TIMESTAMP` values are limited to the `time_t` range and have the 2038 limit, so I'd strongly advise against using them. That's not the distant, abstract future; it's only 24 years away. – tadman Feb 10 '14 at 19:25
  • Agreed. I had to read through the MySQL docs to even see why someone *would* use `TIMESTAMP` nowadays. – admdrew Feb 10 '14 at 19:26
  • The first `TIMESTAMP` in your record is automatically populated with the current time if you don't insert a value. While this probably seemed like a good idea at the time, it is of dubious utility today. – tadman Feb 10 '14 at 19:28
2

This probably falls under 'why is `time_t` signed and not unsigned?', in which case you may be interested in the reasoning behind it:

> There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to 1970. Dennis Ritchie, when asked about this issue, said that he hadn't thought very deeply about it, but was of the opinion that the ability to represent all times within his lifetime would be nice. (Ritchie's birth, in 1941, is around Unix time −893 400 000.) The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type.
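As a quick check of that last figure, here is a minimal C sketch. It assumes the platform's `gmtime` accepts negative (pre-1970) `time_t` values, which glibc and most modern systems do:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* The value quoted above for Dennis Ritchie's birth. */
    time_t ritchie = -893400000;

    struct tm *tm = gmtime(&ritchie);  /* may return NULL where pre-1970 is unsupported */
    if (tm != NULL) {
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d", tm);
        printf("Unix time %ld is %s UTC\n", (long)ritchie, buf);  /* prints 1941-09-09 */
    }
    return 0;
}
```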

user12321