
We currently have a 45 MB CSV file that we're going to be loading into a Splunk KV Store. I want to accomplish this via the Python SDK, but I'm running into a bit of trouble loading the records.

The only way I can find to update a KV Store is the service.collection.insert() function, which, as far as I can tell, only accepts one row at a time. Since the file has 250k rows, I can't afford to wait for every line to upload one by one each day.

This is what I have so far:

 from splunklib import client, binding
 import json, pandas as pd
 from copy import deepcopy

 data_file = '/path/to/file.csv'

 username = 'user'
 password = 'splunk_pass'
 connectionHandler = binding.handler(timeout=12400)
 connect_kwargs = {
     'host': 'splunk-host.com',
     'port': 8089,
     'username': username,
     'password': password,
     'scheme': 'https',
     'autologin': True,
     'handler': connectionHandler
 }
 flag = True
 while flag:
     try:
         service = client.connect(**connect_kwargs)
         service.namespace['owner'] = 'Nobody'
         flag = False
     except binding.HTTPError:
         print('Splunk 504 Error')

 kv = service.kvstore
 kv['test_data'].delete()
 df = pd.read_csv(data_file)
 df.replace(pd.np.nan, '', regex=True)
 df['_key'] = df['key_field']
 result = df.to_dict(orient='records')
 fields = deepcopy(result[0])
 for field in fields.keys():
     fields[field] = type(fields[field]).__name__
 df = df.astype(fields)
 kv.create(name='test_data', fields=fields, owner='nobody', sharing='system')
 for row in result:
     row = json.dumps(row)
     row.replace("nan", "'nan'")
     kv['learning_center'].data.insert(row)
 transforms = service.confs['transforms']
 transforms.create(name='learning_center_lookup', **{'external_type': 'kvstore', 'collection': 'learning_center', 'fields_list': '_key, userGuid', 'owner': 'nobody'})
 # transforms['learning_center_lookup'].delete()
 collection = service.kvstore['learning-center']
 print(collection.data.query())

In addition to taking forever to load a quarter of a million records, the upload keeps failing on rows with nan as a value, and no matter what I try in order to deal with the nan, it persists in the dictionary values.
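
A minimal sketch of one way the NaN values could be cleared before serializing (assuming the same CSV as above; note that replace() and fillna() return a new DataFrame unless the result is assigned back or inplace=True is passed):

 import pandas as pd

 data_file = '/path/to/file.csv'

 df = pd.read_csv(data_file)
 df = df.fillna('')                     # assign the result back; fillna() does not modify df in place
 result = df.to_dict(orient='records')  # the dicts now contain '' instead of nan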


1 Answer


You could interface with the REST API directly and use the storage/collections/data/{collection}/batch_save endpoint, which saves multiple items in a single request.

Refer to https://docs.splunk.com/Documentation/Splunk/8.0.1/RESTREF/RESTkvstore#storage.2Fcollections.2Fdata.2F.7Bcollection.7D.2Fbatch_save
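
For example, a rough sketch of that approach using the requests library (reusing the DataFrame and credentials from the question; the app name search, the batch size of 1000, and the certificate handling are assumptions to adapt):

 import json
 import requests

 collection = 'test_data'
 batch_url = ('https://splunk-host.com:8089/servicesNS/nobody/search'
              '/storage/collections/data/{}/batch_save'.format(collection))

 records = df.to_dict(orient='records')   # the same records built from the CSV above
 batch_size = 1000                        # batch_save takes a JSON array; Splunk caps batch sizes via limits.conf

 for start in range(0, len(records), batch_size):
     chunk = records[start:start + batch_size]
     response = requests.post(
         batch_url,
         auth=(username, password),
         headers={'Content-Type': 'application/json'},
         data=json.dumps(chunk),
         verify=False,                    # or the path to your CA bundle
     )
     response.raise_for_status()

Depending on the SDK version, splunklib's KVStoreCollectionData may also expose a batch_save() helper that wraps the same endpoint, which would avoid the raw HTTP call.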
