
I am using the JobsService API of Databricks (from the databricks-cli Python SDK):

job={"run_name":"Pythonjob","existing_cluster_id": "xxx","notebook_task":{"notebook_path": "xxx"}}
jobs_service=service.JobsService(api_client)
running_job=jobs_service.submit_run(json.dumps(job))

Regardless of the content of `job`, I get the following error message:

HTTP 400 Client Error: Bad Request for URL xxx
Response from server: {'error_code': 'INVALID_PARAMETER_VALUE',
'message': 'One of job_cluster_key, new_cluster or existing_cluster_id must be specified.'}

In the logs I found that the REST endpoint `api/2.0/jobs/runs/submit` is used. The example provided here does not work either. What is wrong with my code?
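
For reference, this is the raw REST call I would expect to be equivalent - a minimal sketch using `requests`, with the workspace URL and token as placeholders rather than my real values:

```python
# Equivalent raw call to the 2.0/jobs/runs/submit endpoint (placeholders, not my real setup).
import requests

host = "https://<workspace-url>"       # placeholder
token = "<personal-access-token>"      # placeholder

job = {
    "run_name": "Pythonjob",
    "existing_cluster_id": "xxx",
    "notebook_task": {"notebook_path": "xxx"},
}

resp = requests.post(
    f"{host}/api/2.0/jobs/runs/submit",
    headers={"Authorization": f"Bearer {token}"},
    json=job,  # requests serializes the dict to JSON itself
)
print(resp.status_code, resp.json())
```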

user3579222
  • Can you please mention the error message? It seems like you have pasted the code again instead of the error message. – rainingdistros Mar 17 '23 at 09:53
  • Sorry, wrong code snippet – user3579222 Mar 17 '23 at 10:01
  • May I ask - what exactly are you trying to do? Are you trying to get the logs of the run, or trying to run a notebook? – rainingdistros Mar 21 '23 at 07:16
  • Based on these example questions, [Ex - 1](https://stackoverflow.com/questions/56153505/calling-databricks-notebook-using-databricks-job-api-runs-submit-endpoint) and [Ex - 2](https://stackoverflow.com/questions/71388513/azure-databricks-api-to-create-job-job-doesnt-get-created-after-successful-cal), you could try hard-coding the values once to check... – rainingdistros Mar 21 '23 at 07:23
  • @rainingdistros: I want to wait until the job is completed - so I am interested in the status information – user3579222 Mar 21 '23 at 07:26
  • Just an idea - provided you know the job name, why not get the job id and loop until it is no longer running? For example: `databricks jobs list --all | grep <job-name> | tr -s ' ' | cut -d' ' -f1` to get the job id, then `databricks runs list --job-id <job-id> | head -1 | tr -s ' ' | cut -d' ' -f3` to check whether the latest run is still **RUNNING** (a Python sketch of this idea follows below). – rainingdistros Mar 22 '23 at 06:29
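
For reference, a minimal Python sketch of that polling idea against the same `JobsService` client - assuming `submit_run` eventually succeeds and returns a `run_id` (untested):

```python
# Poll the run state until the run reaches a terminal life-cycle state.
# Assumes running_job is the dict returned by a successful submit_run call.
import time

run_id = running_job["run_id"]

while True:
    state = jobs_service.get_run(run_id)["state"]
    if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        print("Run finished with result:", state.get("result_state"))
        break
    time.sleep(30)  # check again in 30 seconds
```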
