
I would like to load a list of dicts into a SQL Server database using pytds table-valued parameters (TVPs) in my Python code.

I tried the code below and it's not working as expected.

import pytds

def call_sproc(val):
    server = 'my_server'
    user = 'my_user'
    password = 'secret'
    database = 'my_db'
    
    try:
        conn = pytds.connect(server=server, user=user, password=password, database=database)
        cursor = conn.cursor()

        # Build a table-valued parameter from the row data and pass it to the proc
        tvp = pytds.TableValuedParam(type_name='dbo.tbl', rows=val)
        cursor.callproc("dbo.stproc", (tvp,))
        conn.commit()
        cursor.close()
        conn.close()

    except pytds.Error as e:
        print("Connection issue:", e)

###
val=[{'id': 'st001', 'name':'abc'}, {'id': 'st002', 'name':'pqr'}, {'id': 'st003', 'name':'xyz'}]
call_sproc(val)

**Error:**

Traceback (most recent call last):
  File "c:/Users/mydesk/Desktop/test/insertpytds.py", line 60, in <module>
    connect_and_call_stored_procedure(val)
  File "c:/Users/mydesk/Desktop/test/insertpytds.py", line 23, in call_sproc
    cursor.callproc("[dbo].[stproc]", (tvp,))
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\__init__.py", line 597, in callproc
    return self._callproc(procname, parameters)
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\__init__.py", line 562, in _callproc
    self._exec_with_retry(lambda: self._session.submit_rpc(procname, parameters, 0))
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\__init__.py", line 668, in _exec_with_retry
    return fun()
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\__init__.py", line 562, in <lambda>
    self._exec_with_retry(lambda: self._session.submit_rpc(procname, parameters, 0))
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\tds.py", line 1032, in submit_rpc
    serializer.write(w, param.value)
  File "C:\Program Files (x86)\Python38-32\lib\site-packages\pytds\tds_types.py", line 2302, in write
    self._columns_serializers[i].write(w, row[i])
KeyError: 0
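
If I read the last frame right, pytds serialises each row by position (`row[i]`), and indexing a dict with the integer `0` looks up `0` as a key, which doesn't exist, hence the `KeyError: 0`. A minimal reproduction of that lookup:

row = {'id': 'st001', 'name': 'abc'}
try:
    row[0]  # dicts are indexed by key, not by position
except KeyError as e:
    print('KeyError:', e)  # prints: KeyError: 0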

I tried with a list of tuples and it works fine. But tuples carry the risk of inserting data into the wrong DB columns if the positions in the data are interchanged (e.g. col1 swapped with col3). I assumed a list of dicts would map the values to the DB columns by key.
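
One workaround sketch (the column order `('id', 'name')` is my assumption of `dbo.tbl`'s definition): keep the dicts as input, but convert each one to a tuple in an explicit column order before building the TVP, so a column swap in the incoming data can no longer land values in the wrong DB column:

# Explicit column order; must match the column order of the dbo.tbl table type.
COLUMN_ORDER = ('id', 'name')

def dicts_to_rows(dicts, column_order=COLUMN_ORDER):
    # Raises KeyError if a dict is missing a column, instead of
    # silently shifting values into the wrong position.
    return [tuple(d[col] for col in column_order) for d in dicts]

rows = dicts_to_rows(val)  # [('st001', 'abc'), ('st002', 'pqr'), ('st003', 'xyz')]
tvp = pytds.TableValuedParam(type_name='dbo.tbl', rows=rows)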

RSG
  • Assuming `TableValuedParam` accepts a list of dictionaries (PyTDS documentation is ... scarce), it will expect one dictionary per row with all columns, not one per cell. – Panagiotis Kanavos Aug 10 '23 at 08:33
  • None of [the unit tests](https://github.com/denisenkom/pytds/blob/bf6743506451c5909e411b36aeeefda105707058/tests/unit_test.py#L863) uses dictionaries and the [source code](https://github.com/denisenkom/pytds/blob/bf6743506451c5909e411b36aeeefda105707058/src/pytds/tds_types.py#L2300) iterates over the row values by index which means you can't use dictionaries. The [TableValuedParam](https://github.com/denisenkom/pytds/blob/bf6743506451c5909e411b36aeeefda105707058/src/pytds/tds_types.py#L2162) source shows it accepts a `columns` parameter though, that can be used to specify column names – Panagiotis Kanavos Aug 10 '23 at 08:47
  • Hi @PanagiotisKanavos, thanks for your response. Is it possible to share an example of specifying `columns`? – RSG Aug 10 '23 at 08:55
  • Try passing either a tuple or a list of column names. As I said, the documentation is scarce, so are the help strings and the source has no types. You'll have to try and see. – Panagiotis Kanavos Aug 10 '23 at 09:37

1 Answer


Load your data as auto-normalised tables. I suggest using the dlt library, which infers the schema, types the data, and supports schema evolution: https://pypi.org/project/dlt/

You can even use this library to do an upsert on your structured data afterwards. Here's an example where we use an `id` in the JSON to update the SQL table generated at the destination:

import dlt

data = [{'id': 1, 'name': 'John'}]

# open connection
pipe = dlt.pipeline(destination='postgres',
                    dataset_name='raw_data')

# Upsert/merge: update old records, insert new.
# Capture the outcome in load_info.
load_info = pipe.run(data,
                     write_disposition="merge",
                     primary_key="id",
                     table_name="users")
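
Running the pipeline again with changed rows updates the matching records rather than duplicating them (a usage sketch reusing the `pipe`, the `users` table and the `id` key from above):

# Second run: id 1 is updated in place, id 2 is inserted as a new row.
new_data = [{'id': 1, 'name': 'Johnny'}, {'id': 2, 'name': 'Jane'}]
load_info = pipe.run(new_data,
                     write_disposition="merge",
                     primary_key="id",
                     table_name="users")
print(load_info)  # summary of the load outcome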

We are looking into adding SQL Server support this week.

AdrianBR