
I am trying to build an application, but when I create a Deployment, the container fails at the creation stage with the error:

   "docker.pkg.github.com/XXXXX/XXXXXX/XXXXXXXXXXXX:latest": rpc error: code = NotFound desc = failed to pull and unpack image "docker.pkg.github.com/XXXXX/XXXXXXX/XXXXXXXXXX:latest": failed to copy: httpReaderSeeker: failed open: content at https://docker.pkg.github.com/v2/XXXXXXX/XXXXXX/XXXXXXX/manifests/sha........ not found: not found

I followed exactly along with this tutorial: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
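
For context, the tutorial boils down to creating a registry pull secret and referencing it from the pod spec. A minimal sketch of those two pieces (the secret name regcred and the placeholder credentials below are illustrative, not my real values):

 # create a pull secret for the GitHub Packages Docker registry
 kubectl create secret docker-registry regcred \
     --docker-server=https://docker.pkg.github.com \
     --docker-username=<GITHUB_USERNAME> \
     --docker-password=<GITHUB_TOKEN>

 # the Deployment's pod template then references it, roughly:
 #   spec:
 #     imagePullSecrets:
 #     - name: regcred
 #     containers:
 #     - name: app
 #       image: docker.pkg.github.com/<OWNER>/<REPO>/<IMAGE>:latest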

I tried with kubectl both on Cloud Shell and on my local machine, and hit the same issue.

Then I tried minikube on my local machine and deployed with the same YAML, and the Deployment was created without any issue. A plain docker pull also works, so I guess it should not be a GitHub credential issue.

I really cannot figure out why Kubernetes behaves so differently on Azure and on minikube. I would appreciate any help on troubleshooting and solving this issue.
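
In case it helps with a diagnosis, these are the kinds of checks I can run (the pod name and the regcred secret name are placeholders):

 # the Events section at the bottom shows the image pull error
 kubectl describe pod <POD_NAME>

 # recent events in the namespace; pull failures show up as ErrImagePull / ImagePullBackOff
 kubectl get events --sort-by=.metadata.creationTimestamp

 # confirm the pull secret exists and has the expected type
 kubectl get secret regcred -o yaml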

Just FYI, I used the shell commands below to create the resources on Azure, in case I missed any steps:

 az group create \
     --name $RESOURCE_GROUP \
     --location $REGION_NAME
    
 az network vnet create \
     --resource-group $RESOURCE_GROUP \
     --location $REGION_NAME \
     --name $VNET_NAME \
     --address-prefixes 10.0.0.0/8 \
     --subnet-name $SUBNET_NAME \
     --subnet-prefixes 10.240.0.0/16
    
 SUBNET_ID=$(az network vnet subnet show \
     --resource-group $RESOURCE_GROUP \
     --vnet-name $VNET_NAME \
     --name $SUBNET_NAME \
     --query id -o tsv)
    
 VERSION=$(az aks get-versions \
     --location $REGION_NAME \
     --query 'orchestrators[?!isPreview] | [-1].orchestratorVersion' \
     --output tsv)
    
 az aks create \
     --resource-group $RESOURCE_GROUP \
     --name $AKS_CLUSTER_NAME \
     --vm-set-type VirtualMachineScaleSets \
     --node-count 1 \
     --load-balancer-sku standard \
     --location $REGION_NAME \
     --kubernetes-version $VERSION \
     --network-plugin azure \
     --vnet-subnet-id $SUBNET_ID \
     --service-cidr 10.2.0.0/24 \
     --dns-service-ip 10.2.0.10 \
     --docker-bridge-address 172.17.0.1/16 \
     --generate-ssh-keys
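
After the cluster is created, the only additional step I am aware of before using kubectl is fetching the cluster credentials, e.g.:

 # merge the AKS credentials into the local kubeconfig
 az aks get-credentials \
     --resource-group $RESOURCE_GROUP \
     --name $AKS_CLUSTER_NAME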

echo 'xxxxxxx' | docker login https://docker.pkg.github.com -u xxxxxxx --password-stdin
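
Since docker login already writes working credentials to ~/.docker/config.json, the tutorial's other option is to build the pull secret directly from that file (again, regcred is just a placeholder name):

 kubectl create secret generic regcred \
     --from-file=.dockerconfigjson=$HOME/.docker/config.json \
     --type=kubernetes.io/dockerconfigjson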


1 Answer


The failure was because recent versions of AKS (1.20 or higher) have deprecated Docker as the container runtime (they use containerd instead). After I switched to the older version 1.18, the problem was solved.
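
If you want to verify this and then recreate the cluster on an older version, here is a rough sketch (the JMESPath filter for the latest 1.18 patch is just one way to pick a version; reuse the remaining flags from the question's az aks create):

 # the CONTAINER-RUNTIME column shows whether nodes run docker:// or containerd://
 kubectl get nodes -o wide

 # pick the newest 1.18.x the region still offers and recreate the cluster with it
 OLD_VERSION=$(az aks get-versions \
     --location $REGION_NAME \
     --query "orchestrators[?starts_with(orchestratorVersion, '1.18')] | [-1].orchestratorVersion" \
     --output tsv)

 az aks create \
     --resource-group $RESOURCE_GROUP \
     --name $AKS_CLUSTER_NAME \
     --kubernetes-version $OLD_VERSION \
     --node-count 1 \
     --generate-ssh-keys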

  • Did you recreate the cluster from scratch as we don't have an inbuilt provision to downgrade AKS? – Jithin Zachariah Jul 01 '21 at 05:43
  • @JithinZachariah thanks for pointing it out. Yes, that was a careless statement. I recreated a new cluster with an older version of Kubernetes – pyy Jul 01 '21 at 06:43