
I'm getting the same error message as this SO post. However, after trying all the suggestions there, I'm still unable to resolve my issue, which I describe below.

I'm using Kaniko to build and push images for later use. I've verified that the image-pushing portion of the job works by building a test Dockerfile with --context pointed at a publicly accessible git repo.
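
For reference, that sanity check used executor args along these lines (a sketch; the git URL below is a placeholder, not the exact repo I used):

args: ["--dockerfile=Dockerfile",
       "--context=git://github.com/some-user/some-repo.git",
       "--destination=jethrocao/containerization-test:v0"]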

Now I'm trying to build images using a mounted hostPath directory, /root/app/containerization-engine/docker-service, as the build context. As the following shell output shows, the directory indeed exists, along with its Dockerfile:

[root@ip-172-31-60-18 kaniko-jobs]# ll -d /root/app/containerization-engine/docker-service/
drwxr-xr-x. 8 root root 4096 May 24 17:52 /root/app/containerization-engine/docker-service/
[root@ip-172-31-60-18 kaniko-jobs]#
[root@ip-172-31-60-18 kaniko-jobs]#
[root@ip-172-31-60-18 kaniko-jobs]#
[root@ip-172-31-60-18 kaniko-jobs]# ll -F /root/app/containerization-engine/docker-service/
total 52
drwxr-xr-x. 6 root root   104 May  9 01:50 app/
-rw-r--r--. 1 root root 20376 May 25 12:02 batch_metrics.py
-rw-r--r--. 1 root root  7647 May 25 12:02 batch_predict.py
-rw-r--r--. 1 root root    14 May 25 12:02 dev_requirements.txt
-rw-r--r--. 1 root root   432 May 25 12:02 Dockerfile
-rw-r--r--. 1 root root   136 May 25 12:02 gunicorn_config.py
drwxr-xr-x. 2 root root    19 May  9 01:50 hooks/
drwxr-xr-x. 2 root root    37 May  9 01:50 jenkins/
-rw-r--r--. 1 root root   158 May 25 12:02 manage.py
drwxr-xr-x. 2 root root    37 May  9 01:50 models/
-rw-r--r--. 1 root root     0 May 25 12:02 README.md
-rw-r--r--. 1 root root   247 May 25 12:02 requirements.txt
drwxr-xr-x. 2 root root    94 May  9 01:50 utils/
-rw-r--r--. 1 root root   195 May 25 12:02 wsgi.py

The job manifest containerization-test.yaml that I'm running with kubectl apply -f is defined below:

apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-containerization-test
spec:
  template:
    spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=Dockerfile",
               "--context=dir:///docker-service",
               "--destination=jethrocao/containerization-test:v0",
               "--verbosity=trace"]
        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
        volumeMounts:
        - name: kaniko-secret
          mountPath: "/kaniko/.docker"
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: docker-service-build-context
        hostPath:
          path: "/root/app/containerization-engine/docker-service"
          type: Directory
      - name: kaniko-secret
        secret:
          secretName: regcred-ca5e
          items:
          - key: .dockerconfigjson
            path: config.json
          optional: false

The job is created successfully, but the pods created to run it keep erroring out. Inspecting the log from one of the failed attempts, I see:

[root@ip-172-31-60-18 kaniko-jobs]# kubectl logs kaniko-containerization-test-rp8lh | head
DEBU[0000] Getting source context from dir:///docker-service
DEBU[0000] Build context located at /docker-service
Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Usage:
  executor [flags]
  executor [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  help        Help about any command

To triple-confirm that the hostPath directory and the Dockerfile it contains are both accessible when mounted as a volume into a container, I changed the batch job into a Deployment object (running a different image, not Kaniko), applied it, ran kubectl exec -it into the running pod, and inspected the mounted path /docker-service. It exists, along with the directory's full contents. I then wrote to the Dockerfile inside it just to test write access; that worked as expected, and the change persisted outside the container on the cluster's node too.
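
The debug Deployment looked roughly like this (a sketch; the busybox image and the mount-debug name are illustrative, not necessarily what I actually ran):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mount-debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mount-debug
  template:
    metadata:
      labels:
        app: mount-debug
    spec:
      containers:
      - name: debug
        image: busybox        # any shell-capable image works for poking around
        command: ["sleep", "86400"]
        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
      volumes:
      - name: docker-service-build-context
        hostPath:
          path: "/root/app/containerization-engine/docker-service"
          type: Directory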

I'm really at a loss as to what the problem could be. Any ideas?

Jethro Cao
  • Well, while doing that `exec`, did you run kaniko with those args to reproduce the bad outcome manually? Have you tried including `/docker-service/Dockerfile` to see if that makes it happier? – mdaniel May 25 '22 at 19:18
  • @mdaniel I did try passing in the absolute path `/docker-service/Dockerfile` to `--dockerfile`, with the exact same results. I'll give running Kaniko manually a shot from within the container now, and report back. – Jethro Cao May 25 '22 at 20:38
  • The error turned out to be a careless mistake on my part: I used 2 `volumeMounts` blocks in the `containers` block, when it should have been a single block with 2 elements in it. – Jethro Cao Jun 25 '22 at 03:58
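
In other words, the corrected container spec merges the two duplicate volumeMounts keys into a single list (a sketch based on the manifest above):

        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
        - name: kaniko-secret
          mountPath: "/kaniko/.docker"
          readOnly: true

This also explains the symptom: a YAML mapping can't legally contain duplicate keys, and here the last volumeMounts entry evidently won, so the build-context mount was silently dropped, /docker-service inside the Kaniko container was empty, and the executor couldn't resolve the Dockerfile.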
