
As documented in https://github.com/tektoncd/pipeline/blob/master/docs/resources.md I have configured an Image PipelineResource:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-data-image
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/default/my-data

Now, when I use the above PipelineResource as an input to a Task:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: my-task
spec:
  inputs:
    resources:
      - name: my-data-image
        type: image
  steps:
    - name: print-info
      image: image-registry.openshift-image-registry.svc:5000/default/my-task-runner-image:latest
      imagePullPolicy: Always
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          echo "List the contents of the input image" &&
          ls -R "$(inputs.resources.my-data-image.path)"

I am not able to list the contents of the image; instead I get the following error:

[test : print-info] List the contents of the input image
[test : print-info] ls: cannot access '/workspace/my-data-image': No such file or directory

The documentation (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md) states that an Image PipelineResource is usually used as a Task output for Tasks that build images.

How can I access the contents of my container data image from within the Tekton Task?


1 Answer


Currently Tekton does not support Image inputs in the way that OpenShift's build configs support them: https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs

Image inputs are only useful for variable interpolation, for example "$(inputs.resources.my-image.url)", while ls "$(inputs.resources.my-image.path)" will always print empty content.
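
For illustration, here is a minimal sketch of a Task that uses only the url variable of the image resource. The resource name my-data-image follows the question; the busybox image and the Task/step names are arbitrary choices for the example, not part of the original setup:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: print-image-url
spec:
  inputs:
    resources:
      - name: my-data-image
        type: image
  steps:
    - name: print-url
      # busybox is just an arbitrary small image for this example
      image: busybox
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          echo "The image resource resolves to: $(inputs.resources.my-data-image.url)"

Tekton substitutes $(inputs.resources.my-data-image.url) with the registry URL before the step runs, and that URL is effectively the only information the image resource carries into the step.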

There are several ways to access the contents of the Image though including:

  1. Export the image to a tar archive: podman export $(podman create --tls-verify=false $(inputs.resources.my-image.url)) > contents.tar (see the sketch after this list)
  2. Copy files from the image: docker cp $(docker create $(inputs.resources.my-image.url)):/my/image/files ./local/copy. The tool skopeo can also copy images; however, it does not seem to offer sub-directory copy capabilities.
  3. Copy a pod directory to a local directory (https://docs.openshift.com/container-platform/4.2/nodes/containers/nodes-containers-copying-files.html): oc rsync <pod-name>:/src /home/user/source (note that oc rsync works against a running pod, not against the image URL directly).
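
To make option 1 concrete, below is a rough sketch of a Task step that pulls the data image with podman and unpacks it into the workspace. This is only an illustration under a few assumptions: quay.io/podman/stable is an assumed builder image that ships the podman CLI, running podman inside a step usually requires a privileged security context (which your cluster policy may not allow), and backticks are used for the shell command substitution to keep it visually distinct from Tekton's $( ) variable syntax.

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: extract-image-contents
spec:
  inputs:
    resources:
      - name: my-data-image
        type: image
  steps:
    - name: export-and-unpack
      # Assumed image that ships the podman CLI
      image: quay.io/podman/stable
      securityContext:
        privileged: true   # podman inside a container typically needs this
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          cid=`podman create --tls-verify=false $(inputs.resources.my-data-image.url)` &&
          podman export "$cid" > /workspace/contents.tar &&
          mkdir -p /workspace/my-data &&
          tar -xf /workspace/contents.tar -C /workspace/my-data &&
          ls -R /workspace/my-data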

Having said the above, I decided to simply use OpenShift's built-in BuildConfig resources to create a chained build for my pipeline. The variety of build strategies that OpenShift supports out of the box is sufficient for my pipeline scenarios, and the fact that image inputs are supported makes it much easier compared to Tekton pipelines (https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs). The only advantage Tekton pipelines seem to have is the ability to easily reuse tasks; however, the equivalent can be achieved by creating Operators for OpenShift resources.
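
For completeness, here is a minimal sketch of that chained-build approach using an image source input in a BuildConfig, following the OpenShift documentation linked above. The image stream tags (my-data:latest, my-app:latest), the source path, and the inline Dockerfile are placeholders, not taken from the question:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-chained-build
spec:
  source:
    # Inline Dockerfile used as the build context (placeholder)
    dockerfile: |
      FROM registry.access.redhat.com/ubi8/ubi-minimal
      COPY data/ /opt/data/
    images:
      - from:
          kind: ImageStreamTag
          name: my-data:latest
        paths:
          # Contents of /opt/data in the input image are copied into ./data
          # of the build context before the Docker build starts (placeholder paths)
          - sourcePath: /opt/data/.
            destinationDir: data
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest

With this in place, the build consumes the contents of the data image directly, without having to run a container engine inside a Tekton step.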
