
I use a Vagrant box running minikube, which needs an application built on my host (my local dev machine). I transfer that application with a docker save, and then, as part of the Vagrant provisioning, run this script:

docker load -i docker_image-backend-java-metier.tar
minikube image load ecoemploi/backend-java-metier
kubectl apply -f /vagrant/ecoemploi_scripts/ecoemploi-application-backend-metier.yaml
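
On the host side, the sequence that produces this tarball is essentially the following (the build command is a simplified sketch of my local workflow; the image and tarball names match the script above):

# On the host: rebuild the local image, then export it as a tarball
docker build -t ecoemploi/backend-java-metier .
docker save ecoemploi/backend-java-metier -o docker_image-backend-java-metier.tar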

When I do the first vagrant up, everything is fine.

However, if I make a local change on my host, rebuild my application, repackage it into a local Docker image, docker save it, and then run vagrant provision instead,

minikube seems to detect the changes and take them into account, but it leaves the deployment below (deployment.apps/backend-java-metier) unchanged:

metadata:
  labels:
    app: backend-java-metier

  name: backend-java-metier
  namespace: ecoemploi

spec:
  replicas: 1

  selector:
    matchLabels:
      app: backend-java-metier

  strategy: {}

  template:
    metadata:
      labels:
        app: backend-java-metier
        namespace: ecoemploi

    spec:
      containers:
        - image: ecoemploi/backend-java-metier
          imagePullPolicy: Never
          name: backend-java-metier

Here is the output of vagrant provision:

lebihan@debian:~/dev/Java/comptes-france/deploiement/vagrant_kubernetes$ vagrant provision
==> default: Running provisioner: shell...
    default: Running: inline script
    default: Starting Minikube
    default: * minikube v1.26.0 on Debian 11.3 (vbox/amd64)
    default: * Using the docker driver based on existing profile
    default: * Starting control plane node minikube in cluster minikube
    default: * Pulling base image ...
    default: * Updating the running docker "minikube" container ...
    default: * Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    default: * Verifying Kubernetes components...
    default:   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
    default: * Enabled addons: storage-provisioner, default-storageclass
    default: * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> default: Running provisioner: shell...
    default: Running: /tmp/vagrant-shell20230519-789538-x4wmfi.sh
    default: Creating the Kubernetes deployment for ecoemploi
    default: namespace/ecoemploi unchanged
    default: configmap/postgres-config unchanged
    default: persistentvolume/postgres-pv-volume unchanged
    default: persistentvolumeclaim/postgres-pv-claim unchanged
    default: deployment.apps/postgres unchanged
    default: service/postgres unchanged
    default: service/zookeeper-service unchanged
    default: deployment.apps/zookeeper unchanged
    default: service/kafka-service unchanged
    default: deployment.apps/kafka-broker unchanged
    default: The image ecoemploi/backend-java-metier:latest already exists, renaming the old one with ID sha256:58c7590575aed3c7ad329580721339bc802f59234ea5824d80450d6a71b1c3b5 to empty string
    default: Loaded image: ecoemploi/backend-java-metier:latest
    default: service/backend-java-metier-service unchanged
    default: deployment.apps/backend-java-metier unchanged

Inside the Vagrant box, the pod is still at its previous version; its ID does not change.
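
For reference, this is roughly how I check it from inside the Vagrant box (namespace and image name as in the manifest above):

# The pod keeps the same name and ID across provisions
kubectl -n ecoemploi get pods
# The freshly loaded image is listed, yet the pod content is unchanged
minikube image ls | grep backend-java-metier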

If I touch my Kubernetes deployment file a bit, as a way to force a redeploy, it replaces my pod with one that has a new ID, but the content is still the same old one.
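
As far as I understand, touching the file amounts to forcing a new rollout, something like:

# Recreate the pod without editing the manifest (sketch; same effect as touching the file)
kubectl -n ecoemploi rollout restart deployment/backend-java-metier

Since the manifest sets imagePullPolicy: Never, the recreated pod can only use the image already present on the node.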

If I scrap my Vagrant box and rebuild it entirely with vagrant destroy -f && vagrant up, the fresh, expected application is there.
