BTW, ctime (and ctime(3)) is documented as giving a string with the year represented by four digits (for a total of 26 bytes). So the maximal time is in the year 9999 (certainly less than the maximal time_t on a machine with a 64-bit time_t).
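For instance, a conforming result looks like "Sun Sep 16 01:03:52 1973\n" (the example string from the C standard's asctime specification): 24 visible characters, plus the newline and the terminating NUL, make exactly 26 bytes.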
Also (as I commented), pragmatically, if time_t has more than 40 bits (e.g. 64 bits) you don't care about the maximally representable time. You, everyone reading this forum, and all our grand-grand-children will be dead by then, the computers running your program will all have been destroyed, and at that time C won't exist anymore. The Y2038 problem has no practical 64-bit equivalent. So just special-case the situation where time_t is 32 bits, as in the sketch below.
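A minimal sketch of that special casing, assuming a POSIX-ish system where time_t is a signed integer count of seconds since the Epoch:

/* Minimal sketch, assuming time_t is a signed integer count of seconds
 * since the Epoch: only the 32-bit case needs a special "maximal time". */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    if (sizeof(time_t) == 4) {
        /* 32-bit time_t: the last representable instant is 2^31 - 1
         * seconds after the Epoch, i.e. 2038-01-19 03:14:07 UTC. */
        time_t max = (time_t)INT32_MAX;
        fputs(ctime(&max), stdout);
    } else {
        /* 64-bit (or wider) time_t: ctime's year-9999 limit is reached
         * long before time_t itself overflows, so don't bother. */
        puts("wide time_t: ctime's year-9999 limit applies first");
    }
    return 0;
}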
It is very unlikely that any C program would matter after the year 3000; software, hardware, standards, and human technical expertise don't last that long...
The POSIX ctime documentation says explicitly:
Attempts to use ctime() or ctime_r() for times before the Epoch or for times beyond the year 9999 produce undefined results. Refer to asctime.
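So if you need defined behaviour, check the range yourself before formatting. A hedged sketch (the safe_ctime helper name is mine, not from POSIX; it assumes the POSIX localtime_r and ctime_r functions):

#define _POSIX_C_SOURCE 200809L
#include <time.h>

/* Hypothetical helper: refuse times before the Epoch or beyond year 9999,
 * the range for which POSIX documents ctime()/ctime_r() as defined. */
const char *safe_ctime(const time_t *tp, char buf[26])
{
    struct tm tmv;
    if (*tp < 0 || localtime_r(tp, &tmv) == NULL || tmv.tm_year > 9999 - 1900)
        return NULL;              /* outside the documented range */
    return ctime_r(tp, buf);      /* result fits in 26 bytes here */
}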
BTW, musl-libc seems to be conformant to the standard: its time/__asctime.c (indirectly called by ctime) has a nice comment:
if (snprintf(buf, 26, "%.3s %.3s%3d %.2d:%.2d:%.2d %d\n",
    __nl_langinfo(ABDAY_1+tm->tm_wday),
    __nl_langinfo(ABMON_1+tm->tm_mon),
    tm->tm_mday, tm->tm_hour,
    tm->tm_min, tm->tm_sec,
    1900 + tm->tm_year) >= 26)
{
    /* ISO C requires us to use the above format string,
     * even if it will not fit in the buffer. Thus asctime_r
     * is _supposed_ to crash if the fields in tm are too large.
     * We follow this behavior and crash "gracefully" to warn
     * application developers that they may not be so lucky
     * on other implementations (e.g. stack smashing..).
     */
    a_crash();
}
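To see that path taken, here is a deliberately out-of-range call (a hypothetical demonstration, not something to ship: per ISO C and POSIX this is undefined behaviour):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 1900 + 100000 = 101900 needs six digits, so the required output is
     * longer than 26 bytes: formally undefined behaviour.  musl takes the
     * a_crash() path above; other libcs may do something else entirely. */
    struct tm big = { .tm_year = 100000, .tm_mday = 1 };
    char *s = asctime(&big);
    if (s)
        fputs(s, stdout);
    return 0;
}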
and GNU glibc has in its time/asctime.c file:
/* We limit the size of the year which can be printed. Using the %d
   format specifier used the addition of 1900 would overflow the
   number and a negative vaue is printed. For some architectures we
   could in theory use %ld or an evern larger integer format but
   this would mean the output needs more space. This would not be a
   problem if the 'asctime_r' interface would be defined sanely and
   a buffer size would be passed. */
if (__glibc_unlikely (tp->tm_year > INT_MAX - 1900))
  {
  eoverflow:
    __set_errno (EOVERFLOW);
    return NULL;
  }

int n = __snprintf (buf, buflen, format,
                    (tp->tm_wday < 0 || tp->tm_wday >= 7 ?
                     "???" : ab_day_name (tp->tm_wday)),
                    (tp->tm_mon < 0 || tp->tm_mon >= 12 ?
                     "???" : ab_month_name (tp->tm_mon)),
                    tp->tm_mday, tp->tm_hour, tp->tm_min,
                    tp->tm_sec, 1900 + tp->tm_year);
if (n < 0)
  return NULL;
if (n >= buflen)
  goto eoverflow;
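So on glibc the failure is observable: asctime_r returns NULL with errno set to EOVERFLOW instead of overflowing the 26-byte buffer. A hedged sketch of checking for that (glibc-specific behaviour; musl would crash on the same input, as shown earlier):

#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct tm far_future = { .tm_year = 100000, .tm_mday = 1 }; /* year 101900 */
    char buf[26];

    errno = 0;
    if (asctime_r(&far_future, buf) == NULL)
        /* glibc: NULL with errno == EOVERFLOW rather than a buffer overflow */
        printf("asctime_r failed: %s\n", strerror(errno));
    else
        fputs(buf, stdout);
    return 0;
}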
So I believe that both GNU glibc and musl-libc are better than the MacOSX implementation (as cited in zneak's answer) in that respect. The standard requires ctime's result to fit in 26 bytes. Also, POSIX 2008 marks ctime as obsolescent; new code should use strftime (see also strftime(3)), for example as in the sketch below.
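A minimal sketch of that strftime-based replacement (the format string below mimics ctime's layout; adjust as needed):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm tmv;
    char buf[64];                       /* caller-chosen size, no 26-byte trap */

    if (localtime_r(&now, &tmv) != NULL
        && strftime(buf, sizeof buf, "%a %b %e %H:%M:%S %Y", &tmv) > 0)
        puts(buf);
    return 0;
}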