I have 50 hosts trying to run the MapReduce job below on Riak, and some of them fail with the error below, complaining that the worker_limit has been reached.
I'm looking for insight into whether I can tune the system to avoid this error. I couldn't find much documentation on worker_limit.
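In case it helps frame the question: my best guess is that this limit comes from riak_pipe's worker_limit, which I'd expect to be tunable in advanced.config along these lines (the values and defaults below are my assumptions, not settings I've verified):

    %% advanced.config -- sketch only; I'm assuming worker_limit lives in
    %% the riak_pipe section of the config
    [
      {riak_pipe, [
          %% cap on concurrent pipe workers per vnode (assumed default: 50)
          {worker_limit, 100},
          %% per-worker queue length before backpressure (assumed default: 4096)
          {worker_queue_limit, 4096}
      ]}
    ].

Is raising this the right approach, or is 50 hosts running concurrent MapReduce jobs simply more than the cluster is meant to handle?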
{"phase":0,"error":"[worker_limit_reached]","input":"{<<\"provisionentry\">>,<<\"R89Okhz49SDje0y0qvcnkK7xLH0\">>}","type":"result","stack":"[]"} with query MapReduce(path='/mapred', reply_headers={'content-length': '144', 'access-control-allow-headers': 'Content-Type', 'server': 'MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho)', 'connection': 'close', 'date': 'Thu, 27 Aug 2015 00:32:22 GMT', 'access-control-allow-origin': '*', 'access-control-allow-methods': 'POST, GET, OPTIONS', 'content-type': 'application/json'}, verb='POST', headers={'Content-Type': 'application/json'}, data=MapReduceJob(inputs=MapReduceInputs(bucket='provisionentry', key=u'34245e92-ccb5-42e2-a1d9-74ab1c6af8bf', index='testid_bin'), query=[MapReduceQuery(map=MapReduceQuerySpec(language='erlang', module='datatools', function='map_object_key_value'))]))