
I am building Docker containers using gcloud:

gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile

The last 2 steps of the Dockerfile contain this:

COPY script.sh .
CMD bash script.sh

Many of the changes I want to test are in the script, so the Dockerfile itself stays intact. Building these images locally on Linux with docker-compose is very quick, because Docker detects that nothing has changed and reuses the cached layers. On gcloud, however, I notice the complete image being rebuilt even though only a minor change was made to script.sh.

Any way to prevent this behavior?

helloworld

2 Answers


Actually, gcloud has a lot to do:

The gcloud builds submit command:

  • compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
  • uploads the files to a storage bucket;
  • initiates a build using the uploaded files as input;
  • tags the image using the provided name;
  • pushes the built image to Container Registry.

Therefore the complete build process can be time consuming.

There are recommended practices for speeding up builds such as:

  • building leaner containers;
  • using caching features;
  • using a custom high-CPU VM;
  • excluding unnecessary files from upload.

These practices can optimize the overall build process; a small example follows.
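For instance, the last two items can be combined: a .gcloudignore file in the source directory keeps unneeded files out of the uploaded archive, and the --machine-type flag requests a high-CPU build VM. The ignore entries and machine type below are illustrative; adjust them to your project.

# .gcloudignore – anything listed here is not uploaded to the storage bucket
.git
*.md
test_data/

gcloud builds submit --timeout 1000 --machine-type=n1-highcpu-8 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile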

mebius99
  • Might it be wiser to build the images locally with docker-compose, which does it much quicker, and then upload the locally built container? – helloworld Apr 15 '20 at 19:32
  • This is a trade-off between: 1) a universal, documented, and supported solution with provided computing capacity, perhaps not fitted to your particular needs and therefore not the fastest; and 2) a customized solution, undocumented, supported by a local team, consuming local on-premise resources, but fitted ideally to your particular needs. It is up to you to choose. – mebius99 Apr 16 '20 at 08:10

Your local build is fast because you already have all remote resources cached locally.

It looks like using the Kaniko cache would speed up your build a lot (see https://cloud.google.com/cloud-build/docs/kaniko-cache#kaniko-build).

To enable the cache on your project, run:

gcloud config set builds/use_kaniko True

The first build populates the cache (entries are kept for 6 hours by default), and subsequent builds will be faster since dependencies are already cached.
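If the default expiration does not fit your workflow, the TTL can be changed as well; as far as I know this is controlled by the builds/kaniko_cache_ttl property (a value in hours), for example:

gcloud config set builds/kaniko_cache_ttl 12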

If you need to speed up your build further, I would use two images and keep both in your GCP Container Registry:

  • The first one acts as a cache and holds all remote dependencies (OS / language / framework / etc.).
  • The second one is the image you actually need, with just the COPY and CMD steps, using the cache image as its base (see the sketch below).
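A minimal sketch of that layout, assuming a Python base image and one folder per Dockerfile as in the question; the image names and pip packages are only placeholders:

# Dockerfiles/base/Dockerfile – rebuilt only when dependencies change
FROM python:3.8-slim
RUN pip install --no-cache-dir numpy pandas

gcloud builds submit --tag eu.gcr.io/$PROJECT_ID/base-image Dockerfiles/base

# Dockerfiles/app/Dockerfile – rebuilt on every script change, cheap because the base is already built
FROM eu.gcr.io/$PROJECT_ID/base-image
COPY script.sh .
CMD bash script.sh

gcloud builds submit --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/app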
Iñigo González