
I want to configure my Tekton pipelines to use S3 workspaces. The official Tekton documentation (https://tekton.dev/docs/getting-started/) has a section that says to delete the config-artifact-pvc ConfigMap and replace it with a config-artifact-bucket ConfigMap containing the AWS secret and key. I followed this process, but every time I create a pipeline it still uses a PVC.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  location: s3://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: boto-config
  bucket.service.account.field.name: BOTO_CONFIG
---
apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
  namespace: tekton-pipelines
type: kubernetes.io/opaque
stringData:
  boto-config: |
    [Credentials]
    aws_access_key_id = xxxx
    aws_secret_access_key = xxxx
    [s3]
    host = xxxx
    [Boto]
    https_validate_certificates = False
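
For reference, the swap described in the docs can be applied roughly like this (a sketch, assuming the two manifests above are saved as bucket-config.yaml, which is just an illustrative file name):

kubectl delete configmap config-artifact-pvc -n tekton-pipelines
kubectl apply -f bucket-config.yaml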

Do I need to have a custom S3 storage class set up before I configure Tekton to use S3 buckets for my workspaces?

My pipeline run still uses a claim template to back up my workspace. How do I change it to use an S3 bucket?

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: hello-
spec:
  pipelineRef:
    name: hello
  workspaces:
    - name: output
      volumeClaimTemplate:
        spec: 
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

Is there an example of a taskrun or pipeline run which are backed by s3?

user12985552

1 Answer


The issue is in the way the volumeClaimTemplate is defined. If you run kubectl get storageclass and look in the NAME column, you will note that one of the classes has the text (default) next to it. If no storage class is declared, that default is chosen. To fix this, set the spec.workspaces[].volumeClaimTemplate.spec.storageClassName field.
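
For illustration, the NAME column looks something like this (the names are examples; which classes exist and which one is marked default depend on your cluster, and other columns are omitted):

kubectl get storageclass
NAME
ibmc-file-gold (default)
ibmc-s3fs-standard-regional
...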

I am using IBM Cloud, for example, so based on the output of kubectl get storageclass I would set storageClassName to "ibmc-s3fs-standard-regional":

workspaces:
  - name: output
    volumeClaimTemplate:
      spec: 
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: "ibmc-s3fs-standard-regional"
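
Putting that together with the PipelineRun from the question, the full run would look roughly like this (a sketch; the storage class name is just an example, so use whatever kubectl get storageclass reports on your cluster):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: hello-
spec:
  pipelineRef:
    name: hello
  workspaces:
    - name: output
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          # explicit storage class so the cluster default is not used
          storageClassName: "ibmc-s3fs-standard-regional"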