
I've deployed a registry service into the registry namespace:

$ helm install registry stable/docker-registry

Service:

$ kubectl get service
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
registry-docker-registry   ClusterIP   10.43.119.11   <none>        5000/TCP   18h
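
For reference, the Service's in-cluster DNS name follows the usual <service>.<namespace>.svc.cluster.local pattern. A quick way to confirm it resolves from inside the cluster (the throwaway busybox pod here is just an example):

$ kubectl run dns-test --rm -ti --image=busybox --restart=Never -- \
    nslookup registry-docker-registry.registry.svc.cluster.local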

This is my skaffold.yaml:

apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: spring-boot-slab
build:
  artifacts:
  - image: skaffold-covid-backend
    kaniko:
      dockerfile: Dockerfile-multistage
      image: gcr.io/kaniko-project/executor:debug
      cache: {}
  cluster: {}
deploy:
  kubectl:
    manifests:
    - k8s/*
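
For context, the empty cluster: {} block is what selects the in-cluster kaniko builder. Below is a sketch of the same build section with two optional fields spelled out; the namespace value matches where my kaniko pod runs, and insecureRegistries is only relevant because the registry speaks plain HTTP (both are assumptions about my setup, not required settings):

build:
  artifacts:
  - image: skaffold-covid-backend
    kaniko:
      dockerfile: Dockerfile-multistage
      image: gcr.io/kaniko-project/executor:debug
      cache: {}
  cluster:
    namespace: registry        # schedule the kaniko pod in the registry namespace (assumption)
  insecureRegistries:
  - registry-docker-registry.registry.svc.cluster.local:5000   # registry speaks plain HTTP (assumption)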

Everything works fine until kaniko tries to push the image to the registry above:

Get "http://registry-docker-registry.registry.svc.cluster.local:5000/v2/": dial tcp: lookup registry-docker-registry.registry.svc.cluster.local on 127.0.0.53:53: no such host

The Skaffold command is:

$ skaffold build --default-repo=registry-docker-registry.registry.svc.cluster.local:5000 

This is the log:

$ skaffold build --default-repo=registry-docker-registry.registry.svc.cluster.local:5000 
INFO[0000] Skaffold &{Version:v1.7.0 ConfigVersion:skaffold/v2beta1 GitVersion: GitCommit:145f59579470eb1f0a7f40d8e0924f8716c6f05b GitTreeState:clean BuildDate:2020-04-02T21:49:58Z GoVersion:go1.14 Compiler:gc Platform:linux/amd64} 
DEBU[0000] validating yamltags of struct SkaffoldConfig 
DEBU[0000] validating yamltags of struct Metadata       
DEBU[0000] validating yamltags of struct Pipeline       
DEBU[0000] validating yamltags of struct BuildConfig    
DEBU[0000] validating yamltags of struct Artifact       
DEBU[0000] validating yamltags of struct ArtifactType   
DEBU[0000] validating yamltags of struct KanikoArtifact 
DEBU[0000] validating yamltags of struct KanikoCache    
DEBU[0000] validating yamltags of struct TagPolicy      
DEBU[0000] validating yamltags of struct GitTagger      
DEBU[0000] validating yamltags of struct BuildType      
DEBU[0000] validating yamltags of struct ClusterDetails 
DEBU[0000] validating yamltags of struct DeployConfig   
DEBU[0000] validating yamltags of struct DeployType     
DEBU[0000] validating yamltags of struct KubectlDeploy  
DEBU[0000] validating yamltags of struct KubectlFlags   
INFO[0000] Using kubectl context: k3s-traefik-v2        
DEBU[0000] Using builder: cluster                       
DEBU[0000] setting Docker user agent to skaffold-v1.7.0 
Generating tags...
 - skaffold-covid-backend -> DEBU[0000] Running command: [git describe --tags --always] 
DEBU[0000] Command output: [c5dfd81
]                   
DEBU[0000] Running command: [git status . --porcelain]  
DEBU[0000] Command output: [ M Dockerfile-multistage
 M skaffold.yaml
?? k8s/configmap.yaml
?? kaniko-pod.yaml
?? run_in_docker.sh
] 
registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend:c5dfd81-dirty
INFO[0000] Tags generated in 3.479451ms                 
Checking cache...
DEBU[0000] Found dependencies for dockerfile: [{pom.xml /tmp true} {src /tmp/src true}] 
 - skaffold-covid-backend: Not found. Building
INFO[0000] Cache check complete in 3.995675ms           
Building [skaffold-covid-backend]...
DEBU[0000] getting client config for kubeContext: ``    
INFO[0000] Waiting for kaniko-rjsn5 to be initialized   
DEBU[0001] Running command: [kubectl --context k3s-traefik-v2 exec -i kaniko-rjsn5 -c kaniko-init-container -n registry -- tar -xf - -C /kaniko/buildcontext] 
DEBU[0001] Found dependencies for dockerfile: [{pom.xml /tmp true} {src /tmp/src true}] 
DEBU[0001] Running command: [kubectl --context k3s-traefik-v2 exec kaniko-rjsn5 -c kaniko-init-container -n registry -- touch /tmp/complete] 
INFO[0001] Waiting for kaniko-rjsn5 to be complete      
DEBU[0001] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-rjsn5" is waiting to start: PodInitializing 
DEBU[0002] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-rjsn5" is waiting to start: PodInitializing 
DEBU[0000] Getting source context from dir:///kaniko/buildcontext 
DEBU[0000] Build context located at /kaniko/buildcontext 
DEBU[0000] Copying file /kaniko/buildcontext/Dockerfile-multistage to /kaniko/Dockerfile 
DEBU[0000] Skip resolving path /kaniko/Dockerfile       
DEBU[0000] Skip resolving path /kaniko/buildcontext     
DEBU[0000] Skip resolving path /cache                   
DEBU[0000] Skip resolving path                          
DEBU[0000] Skip resolving path                          
DEBU[0000] Skip resolving path                          
INFO[0000] Resolved base name maven:3-jdk-8-slim to maven:3-jdk-8-slim 
INFO[0000] Resolved base name java:8-jre-alpine to java:8-jre-alpine 
INFO[0000] Resolved base name maven:3-jdk-8-slim to maven:3-jdk-8-slim 
INFO[0000] Resolved base name java:8-jre-alpine to java:8-jre-alpine 
INFO[0000] Retrieving image manifest maven:3-jdk-8-slim 
DEBU[0003] No file found for cache key sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f stat /cache/sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f: no such file or directory 
DEBU[0003] Image maven:3-jdk-8-slim not found in cache  
INFO[0003] Retrieving image manifest maven:3-jdk-8-slim 
INFO[0005] Retrieving image manifest java:8-jre-alpine  
DEBU[0007] No file found for cache key sha256:6a8cbe4335d1a5711a52912b684e30d6dbfab681a6733440ff7241b05a5deefd stat /cache/sha256:6a8cbe4335d1a5711a52912b684e30d6dbfab681a6733440ff7241b05a5deefd: no such file or directory 
DEBU[0007] Image java:8-jre-alpine not found in cache   
INFO[0007] Retrieving image manifest java:8-jre-alpine  
DEBU[0009] Resolved /tmp/target/*.jar to /tmp/target/*.jar 
DEBU[0009] Resolved /app/spring-boot-application.jar to /app/spring-boot-application.jar 
INFO[0009] Built cross stage deps: map[0:[/tmp/target/*.jar]] 
INFO[0009] Retrieving image manifest maven:3-jdk-8-slim 
DEBU[0011] No file found for cache key sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f stat /cache/sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f: no such file or directory 
DEBU[0011] Image maven:3-jdk-8-slim not found in cache  
INFO[0011] Retrieving image manifest maven:3-jdk-8-slim 
DEBU[0012] Resolved pom.xml to pom.xml                  
DEBU[0012] Resolved /tmp/ to /tmp/                      
DEBU[0012] Getting files and contents at root /kaniko/buildcontext for /kaniko/buildcontext/pom.xml 
DEBU[0012] Using files from context: [/kaniko/buildcontext/pom.xml] 
DEBU[0012] optimize: composite key for command COPY pom.xml /tmp/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0]} 
DEBU[0012] optimize: cache key for command COPY pom.xml /tmp/ fc6a0ec8876277261e83ab9b647595b1df258352ba9acf92ec19c761415fb23e 
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:fc6a0ec8876277261e83ab9b647595b1df258352ba9acf92ec19c761415fb23e... 
INFO[0012] Using caching version of cmd: COPY pom.xml /tmp/ 
DEBU[0012] optimize: composite key for command RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml]} 
DEBU[0012] optimize: cache key for command RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml 18ffc2eda5a9ef5481cc865da06e9a4e3d543bf9befb35bd7ac3cb9dc3b62fc7 
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:18ffc2eda5a9ef5481cc865da06e9a4e3d543bf9befb35bd7ac3cb9dc3b62fc7... 
INFO[0012] Using caching version of cmd: RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml 
DEBU[0012] Resolved src to src                          
DEBU[0012] Resolved /tmp/src/ to /tmp/src/              
DEBU[0012] Using files from context: [/kaniko/buildcontext/src] 
DEBU[0012] optimize: composite key for command COPY src /tmp/src/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5]} 
DEBU[0012] optimize: cache key for command COPY src /tmp/src/ 177d8852ce5ec30e7ac1944b43363857d249c3fb4cdb4a26724ea88660102e52 
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:177d8852ce5ec30e7ac1944b43363857d249c3fb4cdb4a26724ea88660102e52... 
INFO[0012] Using caching version of cmd: COPY src /tmp/src/ 
DEBU[0012] optimize: composite key for command WORKDIR /tmp/ {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5 WORKDIR /tmp/]} 
DEBU[0012] optimize: cache key for command WORKDIR /tmp/ cc93f6a4e941f6eb0b907172ea334a00cdd93ba12f07fe5c6b2cddd89f1ac16c 
DEBU[0012] optimize: composite key for command RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package {[sha256:53ce0b73ff3596b4feb23cd8417cf458276fd72464c790c4f732124878e6038f COPY pom.xml /tmp/ 7176510dcac61a3d406beab8d864708f21db23201dba11185866015a8dcd55b0 RUN mvn -B dependency:go-offline -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml COPY src /tmp/src/ 13724ad65fa9678727cdfb4446f71ed586605178d3252371934493e90d7fc7c5 WORKDIR /tmp/ RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package]} 
DEBU[0012] optimize: cache key for command RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package f09ec8d47c0476fe4623fbb7bedd628466d43cd623c82a298c84d43c028c4518 
INFO[0012] Checking for cached layer registry-docker-registry.registry.svc.cluster.local:5000/skaffold-covid-backend/cache:f09ec8d47c0476fe4623fbb7bedd628466d43cd623c82a298c84d43c028c4518... 
INFO[0012] Using caching version of cmd: RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package 
DEBU[0012] Mounted directories: [{/kaniko false} {/etc/mtab false} {/tmp/apt-key-gpghome true} {/var/run false} {/proc false} {/dev false} {/dev/pts false} {/dev/mqueue false} {/sys false} {/sys/fs/cgroup false} {/sys/fs/cgroup/systemd false} {/sys/fs/cgroup/cpu,cpuacct false} {/sys/fs/cgroup/devices false} {/sys/fs/cgroup/net_cls,net_prio false} {/sys/fs/cgroup/pids false} {/sys/fs/cgroup/rdma false} {/sys/fs/cgroup/memory false} {/sys/fs/cgroup/freezer false} {/sys/fs/cgroup/cpuset false} {/sys/fs/cgroup/perf_event false} {/sys/fs/cgroup/blkio false} {/sys/fs/cgroup/hugetlb false} {/busybox false} {/kaniko/buildcontext false} {/etc/hosts false} {/dev/termination-log false} {/etc/hostname false} {/etc/resolv.conf false} {/dev/shm false} {/var/run/secrets/kubernetes.io/serviceaccount false} {/proc/asound false} {/proc/bus false} {/proc/fs false} {/proc/irq false} {/proc/sys false} {/proc/sysrq-trigger false} {/proc/acpi false} {/proc/kcore false} {/proc/keys false} {/proc/timer_list false} {/proc/sched_debug false} {/proc/scsi false} {/sys/firmware false}] 
DEBU[0014] Not adding /dev because it is whitelisted    
DEBU[0014] Not adding /etc/hostname because it is whitelisted 
DEBU[0014] Not adding /etc/resolv.conf because it is whitelisted 
DEBU[0018] Not adding /proc because it is whitelisted   
DEBU[0019] Not adding /sys because it is whitelisted    
DEBU[0026] Not adding /var/run because it is whitelisted 
DEBU[0080] Whiting out /var/lib/apt/lists/.wh.auxfiles  
DEBU[0080] not including whiteout files                 
INFO[0085] Taking snapshot of full filesystem...        
INFO[0085] Resolving paths                              
FATA[0095] build failed: building [skaffold-covid-backend]: getting image: Get "http://registry-docker-registry.registry.svc.cluster.local:5000/v2/": dial tcp: lookup registry-docker-registry.registry.svc.cluster.local on 127.0.0.53:53: no such host

While the kaniko Pod is running, I've been able to perform some checks:

$ kubectl exec -ti kaniko-8nph4 -c kaniko -- sh
/ # wget registry-docker-registry.registry.svc.cluster.local:5000/v2/_catalog
Connecting to registry-docker-registry.registry.svc.cluster.local:5000 (10.43.119.11:5000)
saving to '_catalog'
_catalog             100% |**************************************************************************************************************|    75  0:00:00 ETA
'_catalog' saved
/ # cat _catalog
{"repositories":["skaffold-covid-backend","skaffold-covid-backend/cache"]}

So the container seems to be able to connect to the registry, yet the logs say it can't.

Any ideas about how to get access to this registry deployed inside the same Kubernetes cluster?

I've also tried to access the registry from another pod:

$ kubectl exec -ti graylog-1 -- curl registry-docker-registry.registry:5000/v2/_catalog
{"repositories":["skaffold-covid-backend","skaffold-covid-backend/cache"]}

As you can see, that pod can reach the registry too.

I've also taken a look at the container's /etc/resolv.conf:

$ kubectl exec -ti kaniko-zqhgf -c kaniko -- cat /etc/resolv.conf
search registry.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
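
That nameserver is the cluster DNS, so lookups made from inside the pod should succeed. To double-check, the same name can be resolved explicitly against it (assuming the kaniko debug image's busybox provides nslookup):

$ kubectl exec -ti kaniko-zqhgf -c kaniko -- \
    nslookup registry-docker-registry.registry.svc.cluster.local 10.43.0.10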

I've also checked connections while the container is running:

$ kubectl exec -ti kaniko-sgs5x -c kaniko -- netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0    210 kaniko-sgs5x:40104      104.18.124.25:443       ESTABLISHED 
tcp        0      0 kaniko-sgs5x:46006      registry-docker-registry.registry.svc.cluster.local:5000 ESTABLISHED 
tcp        0      0 kaniko-sgs5x:45884      registry-docker-registry.registry.svc.cluster.local:5000 ESTABLISHED 
tcp        0      0 kaniko-sgs5x:39772      ec2-52-3-104-67.compute-1.amazonaws.com:443 ESTABLISHED 
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags       Type       State         I-Node Path

As you can see, the container has established connections to registry-docker-registry.registry.svc.cluster.local:5000. However, when it tries to push to the registry, the error appears...

It's really strange.

Jordi

1 Answer


If you look at the log timestamps, they jump from [0026] to [0080]. I suspect the final failure, the FATA[0095] line, comes from your local Skaffold process: after kaniko finishes, Skaffold attempts to retrieve the image details from the registry, which is inaccessible from your machine. The error message itself supports this: the failed lookup was made against 127.0.0.53:53, which is the systemd-resolved stub resolver on your workstation, while your pod's /etc/resolv.conf points at the cluster DNS (10.43.0.10). The cluster-internal name simply does not exist outside the cluster.
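
You can check this from your workstation; the lookup that fails inside Skaffold fails the same way in a plain shell (expect an NXDOMAIN answer from 127.0.0.53, since the host resolver knows nothing about cluster-internal names):

$ nslookup registry-docker-registry.registry.svc.cluster.local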

You might consider describing your situation in the following issue: https://github.com/GoogleContainerTools/skaffold/issues/3841#issuecomment-603582206
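
Until this is addressed upstream, one commonly suggested workaround (a sketch, assuming you can edit /etc/hosts and keep a port-forward running alongside Skaffold) is to make the cluster-internal name resolve and route on your machine as well:

# map the in-cluster name to localhost on the workstation
$ echo '127.0.0.1 registry-docker-registry.registry.svc.cluster.local' | sudo tee -a /etc/hosts

# forward local port 5000 to the registry Service for the duration of the build
$ kubectl port-forward -n registry service/registry-docker-registry 5000:5000

With both in place, the same --default-repo value resolves inside the cluster (via cluster DNS) and on your machine (via the hosts entry and the port-forward).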

Brian de Alwis