
I am reading in timestamps as strings from a service that formats them as UNIX time in nanoseconds. This introduces an obvious problem: I can't perform standard operations to normalize the strings to seconds because of how large they are. An example of one of these strings is '1589212802642680000', or 1.58921E+18 in scientific notation.

I was trying something like this: convert_fills_df['timeStamp'] = convert_fills_df.timeStamp.apply(lambda x: UNIX_EPOCH + (float(x)/1000000000)). But I overflow the float when I try this; is there a string operation I can do without losing precision down to the second? Nanoseconds are not necessary for my purposes (though I appreciate their thoroughness); if I could keep them that would be great too, but it is not a necessity.

I would just like to convert the time to a human-readable format on a 24-hour clock.
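For reference, a runnable sketch of the attempt described above (the DataFrame contents and the UNIX_EPOCH value are placeholders, not the real service data):

import pandas as pd

UNIX_EPOCH = 0  # placeholder offset in seconds
convert_fills_df = pd.DataFrame({'timeStamp': ['1589212802642680000']})

# A 19-digit integer has more significant digits than a 64-bit float can hold
# exactly, so going through float cannot keep the full nanosecond precision.
convert_fills_df['seconds'] = convert_fills_df.timeStamp.apply(
    lambda x: UNIX_EPOCH + float(x) / 1000000000
)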

StormsEdge

1 Answer


The first 10 digits represent the seconds; the subsequent digits represent milli-, micro-, and nanosecond precision.

To keep all the information, you can insert a '.' at the right position and pass the string to pd.to_datetime:

import pandas as pd

df = pd.DataFrame({'ns': ['1589212802642680000']})
pd.to_datetime(df.ns.str[:10] + '.' + df.ns.str[10:], unit='s')
# outputs
0   2020-05-11 16:00:02.642679930
Name: ns, dtype: datetime64[ns]
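
If the end goal is a human-readable 24-hour clock string rather than a datetime column, the parsed values can be formatted afterwards with dt.strftime (a sketch; the 'ts' column name is just an example):

df['ts'] = pd.to_datetime(df.ns.str[:10] + '.' + df.ns.str[10:], unit='s')
df['ts'].dt.strftime('%Y-%m-%d %H:%M:%S')
# 0    2020-05-11 16:00:02
# Name: ts, dtype: object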
Haleemur Ali