
kfp version 1.8.11

I have a pipeline, and I need some pipeline/task identifiers to keep track of my experiments and to build GCS paths.

I provide these as inputs to the components:

kfp.dsl.PIPELINE_JOB_ID_PLACEHOLDER
kfp.dsl.PIPELINE_TASK_ID_PLACEHOLDER
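For context, this is roughly how those placeholder strings end up in a path. The literal values shown match the unresolved paths below; `build_gcs_path` is a hypothetical helper, not part of kfp:

```python
# In KFP v1.8 these dsl constants are plain strings that the backend is
# expected to substitute with the real IDs at runtime.
PIPELINE_JOB_ID_PLACEHOLDER = "{{$.pipeline_job_uuid}}"
PIPELINE_TASK_ID_PLACEHOLDER = "{{$.pipeline_task_uuid}}"

def build_gcs_path(base: str, job_id: str, task_id: str) -> str:
    # Hypothetical helper: joins the experiment base path with the IDs.
    return f"{base}/{job_id}/{task_id}"
```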

I also need a large machine with GPUs and a mounted NFS share. However, when I specify that machine config and build the paths, the placeholders are not substituted and the paths come out like this:

a/b/{{$.pipeline_job_uuid}}/{{$.pipeline_task_uuid}}

However, if I don't provide the machine config (i.e. I run on the default machine) and run the same code, the placeholders resolve correctly to something like:

a/b/792423523952395235/435153421543214
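One way to make this failure mode loud rather than silent is a runtime guard inside the component that receives the path, failing fast if the literal placeholder text arrives instead of the substituted IDs. This is purely an illustrative sketch, not a kfp feature:

```python
def assert_placeholders_resolved(path: str) -> str:
    """Raise if a KFP placeholder survived unsubstituted in the path."""
    if "{{$." in path:
        raise ValueError(f"Unresolved KFP placeholder in path: {path}")
    return path
```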

The machine config has these characteristics:

    machine:
      machine_type: n1-standard-32
      accelerator_type: NVIDIA_TESLA_V100
      accelerator_count: 4
      replica_count: 1
      nfs_mounts: [
          {server: "1.2.3.4", path: /train, mount_point: train}
        ]
      network: projects/project_id/global/networks/my_network

Any idea about what could be the issue?

100tifiko
  • Are you seeing any error messages? Where are you running the above code? Can you verify the PLACEHOLDER constants contain the values you are expecting? – kiran mathew Aug 28 '23 at 13:43
  • No error messages; it just arrives as the literal string `{{$.pipeline_job_uuid}}` instead of the real value. It might be related to this being a custom job, with this specific feature not handled there. I also don't know whether that is fixed in 2.0. For now, I'm adding a preceding component (default container) to gather the `job id` and pass it to the main component. I cannot do the same for the `task id`, so I'm using a timestamp there instead. I have the job id, and that is enough for now. – 100tifiko Aug 29 '23 at 14:10
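The workaround described in the comment above can be sketched as two plain Python functions. In a real pipeline, `gather_job_id` would be wrapped with `kfp.components.create_component_from_func` and run on the default container (where substitution works), with its output passed to the GPU component; the function name and the timestamp format are illustrative assumptions:

```python
import time

def gather_job_id(job_id: str) -> str:
    # Runs as a lightweight first step on the default container, where
    # the placeholder IS substituted; it simply echoes the real job id
    # so downstream components can consume it as an ordinary string.
    # It would be called with kfp.dsl.PIPELINE_JOB_ID_PLACEHOLDER.
    return job_id

def fallback_task_id() -> str:
    # There is no equivalent trick for the task id, so a timestamp
    # stands in as a unique-enough identifier for pathing.
    return time.strftime("%Y%m%d%H%M%S")
```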

0 Answers