There are "Linux specifications" but they do not regulate the behavior of the date
command much. What you have is really the opposite -- Linux (or more specifically the GNU user-space tools) has a large number of extensions which are not compatible with Unix by any reasonable definition.
There are a number of standards which do regulate these things. The one you should be looking at is POSIX, which requires
date [-u] [+format]
and nothing more to be supported by adhering implementations. (There are other standards like XPG and SUS which you might want to look at as well, but at the very least, you should require and expect POSIX these days ... finally.)
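For example, as long as you stick to POSIX options and conversion specifiers, the same invocation should behave identically on GNU/Linux and on macOS/*BSD (the particular format string here is just an illustration):
# Only POSIX-specified options and conversion specifiers,
# so this should work the same on GNU/Linux and on macOS/*BSD.
date -u '+%Y-%m-%dT%H:%M:%SZ'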
The POSIX document contains a number of examples, but there is nothing about date conversion, which is nevertheless a practical problem that many scripts turn to date for. Also, for your concrete problem, POSIX has nothing for reporting times with sub-second accuracy.
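To make the conversion point concrete: parsing an existing date string back into epoch seconds takes entirely different, non-POSIX options on the two systems (the date string and formats below are only examples):
# GNU date (Linux): -d parses a date string
date -d '2018-05-01 12:00:00' '+%s'
# BSD/macOS date: -j means "don't set the clock", -f supplies the input format
date -j -f '%Y-%m-%d %H:%M:%S' '2018-05-01 12:00:00' '+%s'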
Anyway, griping that *BSD isn't Linux isn't really helpful here; you just have to understand what the differences are, and code defensively. If your requirements are complex or unusual, perhaps turn to a scripting language like Perl or Python, which handle these kinds of date formatting tasks more or less out of the box in a standard installation (though neither Perl nor Python has a quick and elegant way to do date conversion out of the box, either; solutions tend to be somewhat tortured).
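As a sketch of what coding defensively can look like here, you can probe at run time whether the GNU %N extension actually produces digits and fall back to whole seconds otherwise (the function name and the zero-padded fallback are my own illustration, not anything standard):
# Probe whether date understands %N (a GNU extension); on BSD/macOS
# it typically just prints something that is not a number.
if date '+%N' 2>/dev/null | grep -q '^[0-9]\{1,\}$'; then
    epoch_ns () { date '+%s%N'; }            # real nanosecond field
else
    epoch_ns () { date '+%s000000000'; }     # whole seconds, padded to nanosecond scale
fi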
In practical terms, you can compare the MacOS date
man page and the Linux one and try to reconcile your requirements.
For your practical requirement, MacOS date
does not support any format string with nanosecond resolution, but you are not likely to get useful results at that scale anyway, when merely executing the command takes a significant number of nanoseconds. I would settle for millisecond-level accuracy (even that is going to be thrown off by the execution time in the final digits) and multiply to get a number on the nanosecond scale.
# current time as integer nanoseconds since the epoch
nanoseconds () {
    python -c 'import time; print(int(time.time()*1000*1000*1000))'
}
(Notice the parentheses around the argument to print()
for Python 3.) You will notice that Python does report a value with nanosecond resolution (the last digits are often not zeros), though by the time you have run time.time()
the value will obviously no longer be correct.
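In a script, the function might be used along these lines (a minimal sketch; bash's 64-bit arithmetic copes fine with numbers of this size):
start=$(nanoseconds)
# ... whatever you want to time ...
end=$(nanoseconds)
echo "elapsed: $((end - start)) ns"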
To get an idea of the error introduced by the call itself,
bash@macos-high-sierra$ python3
Python 3.5.1 (default, Dec 26 2015, 18:08:53)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> import timeit
>>> def nanoseconds ():
... return int(time.time()*1000*1000*1000)
...
>>> timeit.timeit(nanoseconds, number=10000)
0.0066173350023746025
>>> timeit.timeit('int(time.time()*1000*1000*1000)', number=10000)
0.00557799199668807
The overhead of starting Python and printing the value is realistically going to add a few orders of magnitude on top of that, but I haven't attempted to quantify it. (The output from timeit
is the total time in seconds for all 10000 calls, so each call to time.time() took well under a microsecond.)
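If you do want a ballpark figure for the full round trip, interpreter start-up included, something as crude as this will do (it assumes the nanoseconds function above is defined in your current bash session):
# Ten complete invocations; divide the reported time by ten
# for a rough per-call figure.
time for i in 1 2 3 4 5 6 7 8 9 10; do
    nanoseconds >/dev/null
done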