
I am using the following code snippet but haven't had any luck. Can anyone help me pass a custom job ID?

job = {
    "placement": {"cluster_name": cluster_name},
    "spark_job": {
        "main_class": "org.example.App",
        "jar_file_uris": [
          "gs://location.jar",
        ],
        "args": [],
    },
}


operation = job_client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
)

Thanks in Advance :)

  • Hi @Faisal Khan, for your requirement you can submit the job using the gcloud command, where the `--id` parameter lets you provide a custom job ID. Ex: `gcloud dataproc jobs submit spark \ --id=job-id-name \ --cluster=cluster-name \ --region=region \ --class=org.apache.spark.examples.SparkPi \ --jars=gs://my.jar \ -- 1000`. Let me know if this helps. – Shipra Sarkar Jan 10 '23 at 14:38

1 Answer


This problem can be solved by adding a `reference` attribute to the job JSON, like this:

"reference": {
  "job_id": "test101",
  "project_id": "1553sas207"
   }