
I am using Elasticsearch from Python. I have data in a pandas DataFrame (3 columns); I added two columns, `_index` and `_type`, and converted the data to JSON, one object per record, using pandas' built-in method:

data =  data.to_json(orient='records') 

The data then looks like this:

[{"op_key":99140046678,"employee_key":991400459,"Revenue Results":6625.76480192,"_index":"revenueindex","_type":"revenuetype"},
 {"op_key":99140045489,"employee_key":9914004258,"Revenue Results":6691.05435536,"_index":"revenueindex","_type":"revenuetype"},
 ...
]

My mapping is:

user_mapping = {
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 2
    },
    "mappings": {
        "revenuetype": {
            "properties": {
                "op_key": {"type": "string"},
                "employee_key": {"type": "string"},
                "Revenue Results": {"type": "float", "index": "not_analyzed"}
            }
        }
    }
}
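For reference, this is a sketch of how the index can be created with that mapping before bulk-loading (the `Elasticsearch()` client setup is assumed, so the actual call is shown commented out; the snippet only sanity-checks that the mapping serializes to valid JSON):

```python
import json

user_mapping = {
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 2
    },
    "mappings": {
        "revenuetype": {
            "properties": {
                "op_key": {"type": "string"},
                "employee_key": {"type": "string"},
                "Revenue Results": {"type": "float", "index": "not_analyzed"}
            }
        }
    }
}

# Sanity-check: the mapping must serialize cleanly before it is sent.
body = json.dumps(user_mapping)

# With a connected client (assumed local node, not verified here):
# from elasticsearch import Elasticsearch
# es = Elasticsearch()
# es.indices.create(index="revenueindex", body=user_mapping)
```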

Then I face this error when calling `helpers.bulk(es, data)`:

    Traceback (most recent call last):
      File "/Users/adaggula/Documents/workspace/ElSearchPython/sample.py", line 59, in <module>
        res = helpers.bulk(client,data)
      File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 188, in bulk
        for ok, item in streaming_bulk(client, actions, **kwargs):
      File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 160, in streaming_bulk
        for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
      File "/Users/adaggula/workspace/python/pve/lib/python2.7/site-packages/elasticsearch/helpers/__init__.py", line 89, in _process_bulk_chunk
        raise e
    elasticsearch.exceptions.RequestError: TransportError(400, u'action_request_validation_exception',
    u'Validation Failed: 1: index is missing;2: type is missing;3: index is missing;4: type is missing;
    5: index is missing;6: ....... type is missing;999: index is missing;1000: type is missing;')

It looks like the index and type are reported missing for every JSON object. How can I overcome this?


1 Answer


Converting the pandas DataFrame's JSON string back into Python objects is the trick that resolved the problem: `helpers.bulk` expects an iterable of action dicts, but `to_json` returns a single string.

import json

data = data.to_json(orient='records')
data = json.loads(data)
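To see why this matters: iterating a string yields characters, not records, so the bulk helper never finds `_index` or `_type` in what it is handed. A minimal sketch, using a hypothetical record mirroring the question's data:

```python
import json

# A JSON string like the one to_json(orient='records') produces
# (hypothetical record mirroring the question's data).
data = ('[{"op_key": 99140046678, "employee_key": 991400459, '
        '"Revenue Results": 6625.76480192, '
        '"_index": "revenueindex", "_type": "revenuetype"}]')

# Iterating the raw string gives single characters, not actions:
print(next(iter(data)))      # '['

# json.loads turns it into the list of dicts helpers.bulk expects:
actions = json.loads(data)
print(type(actions[0]))      # <class 'dict'>
print(actions[0]["_index"])  # revenueindex
```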
  • I was just about to comment that this could be shortened to `data = data.to_dict(orient='records')`. Then I ran a short test on a dataframe with 1.000.000 rows and 50 columns and found out that your version is performing significantly faster... weird, `df.to_dict()` is astonishingly slow. – Dirk Dec 01 '16 at 17:46
  • 1
    I had a similar error and got rid of it by adding `include_meta=True` in `obj.to_dict(include_meta=True)` – Anupam Jul 14 '17 at 14:33