
I am currently trying to debug a script written in the Elasticsearch "painless" scripting language. The script is stored on the cluster and uses parameters to update a document by ID. It works fine when called through the ES Dev Tools console. However, when I call it from Java with the exact same parameters and the same doc ID, I don't get the expected result: the document simply remains unchanged.

Regardless of how the update happens exactly (it goes through our own Kafka-based update pipeline): what are good approaches to debugging such a stored script? How can I log debug output and exceptions, and where exactly would those log messages show up? Thanks!

import java.util.Map;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;

final Map<String, Object> params = <my parameters here>;
// lang is null for stored scripts; the script is referenced by its id
final Script script = new Script(ScriptType.STORED, null, MY_SCRIPT_NAME, params);
UpdateRequest updateRequest = new UpdateRequest(MY_INDEX_NAME, ES_ID_OF_DOC).script(script);

// updateRequest is then sent to ES via Kafka or via BulkIndexService,
// neither of which leads to the desired doc update
martin_wun
  • Can you share the Java code you're using? – Val Apr 26 '22 at 07:07
  • @Val I can, at least partly, because it is using some internal stuff that wouldn't be of much help. ```final Map<String, Object> params = ...; final Script script = new Script(ScriptType.STORED, null, SCRIPT_NAME, params); UpdateRequest updateRequest = new UpdateRequest(MY_INDEX_NAME, ES_ID_OF_DOC).script(script);``` – martin_wun Apr 26 '22 at 07:19
  • Please update your question, as code is more legible there than in comments – Val Apr 26 '22 at 07:38

1 Answer


As it turns out, the error was in the Kafka consumer: an earlier version of this script threw an exception from which the consumer could not recover, so it got stuck in an endless loop that blocked all further processing.
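For reference, the fix amounted to making sure a single bad record cannot block the consumer forever. Below is a minimal sketch of that idea, assuming a plain KafkaConsumer poll loop; the topic name and the applyScriptedUpdate helper are placeholders, and our real pipeline code looks different:

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// consumer: an already configured KafkaConsumer (setup omitted)
void runUpdateLoop(KafkaConsumer<String, String> consumer) {
    consumer.subscribe(Collections.singletonList("doc-updates")); // placeholder topic name
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
            try {
                applyScriptedUpdate(record.value()); // builds and sends the UpdateRequest (omitted)
            } catch (Exception e) {
                // Log and move on instead of rethrowing: an unhandled exception here
                // is what kept our consumer stuck reprocessing the same messages.
                System.err.printf("Skipping record at offset %d: %s%n", record.offset(), e);
            }
        }
        consumer.commitSync(); // commit progress even if individual records were skipped
    }
}

Whether skipping, dead-lettering, or alerting is the right reaction to such a poison record depends on the pipeline; the important part is that the exception is caught and the offsets still get committed.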

Still, if anyone has hints on how best to debug painless scripts, feel free to add them here.
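For what it's worth, two things helped me narrow this down. First, Painless has Debug.explain(someValue), which aborts the script with an error response showing the runtime class and string representation of the value, and script errors in general come back with a script_stack pointing at the failing statement. Second, taking the Kafka pipeline out of the equation and sending the same UpdateRequest directly through the client makes those errors visible immediately. A rough sketch, assuming a RestHighLevelClient named client (not part of the snippet above) and reusing the constants from my question:

import java.io.IOException;
import java.util.Map;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;

// client: an already configured RestHighLevelClient (assumption, not shown above)
void debugScriptedUpdate(RestHighLevelClient client, Map<String, Object> params) throws IOException {
    final Script script = new Script(ScriptType.STORED, null, MY_SCRIPT_NAME, params);
    final UpdateRequest request = new UpdateRequest(MY_INDEX_NAME, ES_ID_OF_DOC).script(script);
    try {
        UpdateResponse response = client.update(request, RequestOptions.DEFAULT);
        // NOOP means the script ran but left the document unchanged
        System.out.println("Update result: " + response.getResult());
    } catch (ElasticsearchException e) {
        // Script failures (including Debug.explain output and the script_stack) surface here
        System.err.println(e.getDetailedMessage());
    }
}

Calling the cluster directly like this at least makes the script's error response (or a NOOP result) visible right away, instead of it getting lost somewhere inside the update pipeline.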

martin_wun