I am trying to run this Dockerfile with a distroless image (gcr.io/distroless/static:nonroot). docker build succeeds, but docker run -it image_name gives me this error:

2021-07-13T18:16:11.441Z   ERROR   controller-runtime.client.config  unable to get kubeconfig    {"error": "could not locate a kubeconfig"}
github.com/go-logr/zapr.(*zapLogger).Error
  /go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
  /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/client/config/config.go:146
main.main
  /workspace/main.go:63
runtime.main
  /usr/local/go/src/runtime/proc.go:203

Debugging findings

  1. Keeping the distroless image but removing the last line ENTRYPOINT ["/manager"], docker run -it image_name fails with: docker: Error response from daemon: No command specified. See 'docker run --help'.
    So the same docker run command at least starts the container with the distroless image when the ENTRYPOINT line is present (hitting the kubeconfig error), but cannot start at all without it.
  2. I replaced the distroless image with alpine:latest. With ENTRYPOINT ["/manager"] (and without USER nonroot:nonroot) I see the same error as above (ERROR controller-runtime.client.config unable to get kubeconfig...), BUT without the ENTRYPOINT line I am able to log in to the container with docker run -it image_name.

Can someone please let me know how to resolve this, so that I can run the image with all of the configuration required in the Dockerfile?

NOTE: I am afraid that my egress-operator pod might not run if I change the base image, since that could drop some configuration the Dockerfile needs in order for the operator to run.
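For reference, the Dockerfile is essentially the one scaffolded by kubebuilder. A rough sketch of it (the builder image tag and the copied paths are only illustrative and may differ slightly from my actual file):

  # build the manager binary (builder tag is illustrative)
  FROM golang:1.13 as builder
  WORKDIR /workspace
  # cache go modules before copying the source
  COPY go.mod go.sum ./
  RUN go mod download
  COPY main.go main.go
  COPY api/ api/
  COPY controllers/ controllers/
  # build a fully static binary so it can run on distroless/static
  RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go

  # minimal runtime image
  FROM gcr.io/distroless/static:nonroot
  WORKDIR /
  COPY --from=builder /workspace/manager .
  USER nonroot:nonroot
  ENTRYPOINT ["/manager"]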

solveit
  • Inside that image, there's an operator which wants to interact with Kubernetes? It is probably expecting a KUBECONFIG env var which locates the kubeconfig file to connect to Kubernetes – AndD Jul 13 '21 at 18:51
  • The error message you quote seems like an error from your application (as @AndD suggests, a Kubernetes controller, perhaps?), which implies the Docker-level wiring is probably more or less right. Is your problem about how to inject Kubernetes credentials into the container? Or something else? – David Maze Jul 13 '21 at 21:01
  • @David: I want to fix this error so that the Dockerfile runs successfully. – solveit Jul 14 '21 at 01:12
  • What is the ENTRYPOINT ["/manager"] command doing? Will it cd into a /manager folder, or execute the ./manager file? Just want to confirm whether it is correct or not – solveit Jul 14 '21 at 01:16

1 Answer


Short answer:

If you want to run your image, you have 2 options:

  1. Run your image inside a Kubernetes Cluster
  2. Place your kubeconfig inside your image as $HOME/.kube/config
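For option 2, instead of baking the file into the image you can also mount your local kubeconfig at run time and point the binary at it with the KUBECONFIG environment variable, which controller-runtime also honours. A minimal sketch (the mount path is illustrative):

  docker run --rm -it \
    -v $HOME/.kube/config:/tmp/kubeconfig:ro \
    -e KUBECONFIG=/tmp/kubeconfig \
    image_name

Note that the API server address inside that kubeconfig must be reachable from inside the container, and the file must be readable by the nonroot user (uid 65532) of the distroless image.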

If you are trying to debug your image, try this:

docker run --rm -it --entrypoint bash image_name

Replace bash with sh if you get a command not found error. (Keep in mind that a distroless image contains no shell at all, so this only works with a base image such as alpine or ubuntu.)
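In that case, one way to check that /manager actually made it into the image is to export its filesystem from the host, for example:

  docker create --name tmp image_name
  docker export tmp | tar tvf - | grep manager
  docker rm tmp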


Explanation

Dockerfile part

According to the Dockerfile Docs about entrypoint,

An ENTRYPOINT allows you to configure a container that will run as an executable. Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT

Your command is docker run -it image_name without any extra args, so docker will:

  1. Start a container from the image
  2. Run the entrypoint, which is /manager
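For instance (the flag here is purely illustrative), any extra arguments you pass are appended to the exec-form entrypoint:

  docker run -it image_name --some-flag=value
  # is equivalent to running, inside the container:
  /manager --some-flag=value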

The /manager binary is built with kubebuilder; it will try to load a kubeconfig and die if none is found.

If you do not want to run /manager when you execute docker run, you have to override it with the --entrypoint flag, as in the debug command above.

Kubebuilder part

Since you mentioned that you are afraid of missing the kubeconfig for your pod, I am appending this section.

By default, kubebuilder (via controller-runtime's GetConfigOrDie) tries to find a kubeconfig in 2 places (it also honours an explicit --kubeconfig flag or the KUBECONFIG environment variable if they are set):

  1. $HOME/.kube/config: a kubeconfig file in the container's file system.
  2. in-cluster: a pod running inside Kubernetes gets an in-cluster configuration.
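The in-cluster option works because Kubernetes mounts the pod's service account credentials into the container automatically, roughly:

  # mounted into the pod by Kubernetes:
  /var/run/secrets/kubernetes.io/serviceaccount/token
  /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  # plus the KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT env vars

That is why the same image fails with could not locate a kubeconfig under a plain docker run but can work once it is deployed as a pod (provided its service account has the required RBAC permissions).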
PapEr
  • If I am running this in a Kubernetes cluster, then the same Dockerfile gives the error: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/manager": stat /manager: no such file or directory: unknown. That is why I was running this Dockerfile separately, to see which line is failing. I raised a separate question for it as well: https://stackoverflow.com/questions/68360450/kubernetes-pod-failing-because-of-incorrect-container-command – solveit Jul 14 '21 at 11:19
  • @solveit obviously, it is not the same error: it was kubeconfig not found in the question, and now you get manager not found. That could be something related to libc, if you are sure you have added the manager binary. Do you use alpine as the base image now? If so, try not to – PapEr Jul 14 '21 at 11:28
  • Now I am trying to build the Dockerfile for the Kubernetes cluster with the ubuntu:18.04 image (I know it is heavier than alpine and distroless, but I want to verify whether the image is causing the issue or not), and I replaced ENTRYPOINT ["/manager"] with CMD ["/manager"]. FYI, the ubuntu image with CMD works on its own with docker run -it image_name /bin/bash; I am able to log in to the container and I found the manager file in the "/" location. So now I am trying the same combo in the Kubernetes cluster. – solveit Jul 14 '21 at 11:40
  • @solveit look into my answer; you could also do 'docker run xx bash' when using an entrypoint, just try the --entrypoint flag for docker run. That is not the main problem here – PapEr Jul 14 '21 at 11:45