
I ran the following Python code to append to a DolphinDB in-memory table:

import dolphindb as ddb
import pandas as pd
import numpy as np
s = ddb.session()
s.connect("localhost", 8848, "admin", "123456")
script = """t = table(1:0,`id`date`ticker`price, [INT,DATE,STRING,DOUBLE])
share t as tglobal"""
s.run(script)

tb=pd.DataFrame({'id': [1, 2, 2, 3],
                 'date': np.array(['2019-10-30', '2019-10-30', '2019-10-30', '2019-10-30'], dtype='datetime64[D]'),
                 'ticker': ['AAPL', 'AMZN', 'FB', 'GOOG'],
                 'price': [243.26, 1779.99, 188.25, 1261.29]})
s.run("append!{tglobal}",tb)

However, I encountered an error:

pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: -1817286-04-17 00:00:00

Does anyone know what I did wrong?

1 Answer


All time-related data types in pandas are datetime64, so when a DataFrame is uploaded to DolphinDB, every time column becomes NANOTIMESTAMP. Before appending such a DataFrame to a table whose time column has a different type, convert the column on the DolphinDB server: upload the DataFrame first, select each column with a select statement, convert the time column (in this example from NANOTIMESTAMP to DATE), and then append the result to the in-memory table, as follows:

tb=pd.DataFrame({'id': [1, 2, 2, 3],
                 'date': np.array(['2019-10-30', '2019-10-30', '2019-10-30', '2019-10-30'], dtype='datetime64[D]'),
                 'ticker': ['AAPL', 'AMZN', 'FB', 'GOOG'],
                 'price': [243.26, 1779.99, 188.25, 1261.29]})
s.upload({'tb': tb})  # the date column arrives on the server as NANOTIMESTAMP
# convert the date column to DATE before inserting into tglobal
s.run("tableInsert(tglobal, (select id, date(date) as date, ticker, price from tb))")
dovish618